Category Archives: Open Source Software (OSS)

Product Composition Risk Management

When I first heard the term Software Composition Analysis (SCA), I was excited to hear of a new vision for what had been thought of as only an open source discovery tool. I knew the vendors in this new SCA space were thinking more deeply about the problems faced by product owners than just generating a bill of materials detailing the open source code used by and distributed with the owners’ proprietary code.

However, after thinking about the broad spectrum of what SCA vendors are actually doing, I came to realize that the only word in that market categorization which is fully applicable is: composition. Both the words software and analysis are far too narrow for the work being done by SCA vendors.

Software, Firmware, and Webware

Even while they have been benefiting from this market category, SCA vendors have been processing not only their customers’ desktop and server software, but also their mobile application software, device firmware, and webware written with open web APIs. Being positioned as servicing only “software” limits the perception of the wide variety of intellectual property delivery and deployment models SCA vendors process daily.

Risk Detection, Assessment, and Mitigation

Merriam-Webster defines analysis as a “separation of a whole into its component parts”. Not only is this redundant with the word composition, but SCA vendors have gone beyond simply identifying open source components.

SCA users have consistently received more than a bill of open source materials. They have achieved well-defined business outcomes that minimized risk around security, data privacy, operations, license compliance, and terms-of-use compliance.

Product Composition Risk Management

Therefore, to represent the actual scope of benefits provided by SCA vendors, the category “Product Composition Risk Management” is more appropriate.

A modern digital product is composed of one’s own proprietary code, code from commercial and non-commercial providers, and services from web service providers. The word product is not limited to software, firmware, mobile, or web development; it encompasses all modes of digital product composition which use all types of intellectual property.

There is risk in composing one’s product solely from one’s own proprietary code, which is why that code is measured against multiple non-functional requirements. However, composing one’s product from intellectual property owned by others creates a much greater inherent risk: you don’t know the care with which that IP was created, and you don’t know the resources available to maintain it.

SCA vendors not only identify open source risk; they also assess that risk and provide mitigation alternatives for their customers.

So, while the SCA market categorization served its purpose for a few years, it is time to acknowledge the greater benefits that SCA vendors bring to a customer’s entire supply chain.

Data Privacy Requires Data Security, Just Ask Equifax

The following post was originally published here by Black Duck Software…

The EU’s General Data Protection Regulation (GDPR) will be enforced starting May 25, 2018. One of its goals is to better align data privacy with data security, as depicted in this simple Venn diagram:

[Venn diagram: data privacy shown as a subset of data security]

That is, you can have data security without data privacy, but you can’t have data privacy without data security.

Equifax has painfully come to the same conclusion, well before the May 25, 2018 enforcement date.

A Little History on Data Privacy Principles

Many years ago, Equifax could have successfully argued that they complied with data privacy requirements because they did not sell consumers’ data without those consumers’ permission. That was how low the bar was set when data privacy first became an issue.

Even as long ago as 1995, one of the data privacy principles in Directive 95/46/EC required appropriate security controls when handling private data. However, data privacy had focused only on issues of consumer consent and intentional disclosure of private data; that is, until Equifax clarified for us last week that that is not enough.


GDPR: New Requirements for Security Controls

Just like Directive 95/46/EC, the GDPR includes a data privacy principle requiring similar security controls, but the important requirement the GDPR adds is that companies must provide evidence of those security controls.

Certainly, GDPR regulators will want to see evidence of security controls, but even companies that are not direct targets of regulators will be required to produce such evidence for their customers if any company downstream in their supply chain perceives itself to be a target of regulators. Evidence of security controls will be a condition of doing business.

The Equifax breach makes clear in a visceral way what the GDPR will make clear through regulation: the consequences to a private individual are just as damaging, if not more so, when their private data is breached as when it is sold to an unauthorized party. Just ask the 140 million individuals in Equifax’s database.


David Znidarsic is the founder and president of Stairstep Consulting, where he provides intellectual property consultation services including IP forensics, M&A diligence, information security management, open source usage management, and license management. Learn more about David and Stairstep Consulting at www.stairstepconsulting.com

Compliant? Sure, But With What?

The following post was originally published here by Black Duck Software…

The term compliance is used more and more in business. Some job titles even include the term: VP of Compliance, Compliance Officer, Compliance Manager. Usually these roles have focused on the legal and operational requirements imposed by external groups like licensors and regulatory agencies.

While abiding by such external requirements is the cost of doing business, you give up control of your business or product development if you only follow the requirements of others and never establish and comply with your own policies.

Limited Scope

Let’s look at how the term “compliance” has been used to limit the scope of open source governance.

Open source compliance has been narrowly interpreted to mean that one must abide by the open source author’s license terms. Indeed, that will always be a requirement, but consider that an open source author’s work is replacing the work of one of your own software engineers.

If the only hurdle to cross before using open source is to be compliant with the author’s license terms, that is like saying you fully trust all the code developed by one of your software engineers if and only if your management meets its legal requirements during the hiring and employment of that engineer!

A Question of Trust?

While that seems preposterous, in practice, you probably impose many more requirements on the work product of your own engineers than on the work product of open source authors. Is it your intention to trust open source authors more than your own employees? The assumptions you might be making are:

(a) every open source project is staffed by many more development, testing, and maintenance engineers than your company can deploy to solve the same problem, and

(b) those engineers know and have fixed all security vulnerabilities.

However, www.openhub.net shows that these assumptions might hold for some open source projects, but not all. Therefore, unless your product teams perform the appropriate due diligence, they won’t know whether their assumptions are valid.


Open source management best practices require organizations to know the open source in their code in order to reduce risk, tighten policies, and monitor and audit for compliance and policy violations. Automating the identification of all open source in use lets development and license teams quickly gain visibility into known open source security vulnerabilities and compliance issues, define and enforce open source use and risk policies, and continuously monitor for newly disclosed vulnerabilities.
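As a minimal sketch of what such automation can look like today (not a description of any particular vendor’s product), the following queries the public OSV.dev vulnerability database for one dependency; the package name and version are arbitrary examples:

```python
import requests

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    # Query the public OSV.dev API for previously disclosed vulnerabilities
    # affecting one package version.
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": ecosystem}, "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

if __name__ == "__main__":
    # Arbitrary example package, not taken from this post.
    for vuln_id in known_vulns("jinja2", "2.4.1"):
        print(vuln_id)
```

A real deployment would iterate over a full bill of materials and re-run continuously, since new vulnerabilities are disclosed daily.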


Best Technology Stack Transcends Language

In Entrepreneur.com, Rahul Varshneya observes that a technology stack is often chosen by the same software or firmware developer who will be responsible for writing code in that stack’s programming language.

Who would be brave or foolish enough to recommend themselves out of a job by choosing a stack that requires expertise in a language they do not understand? Mr. Varshneya advises you to use an evaluator who is unbiased toward programming language.

This is because the programming language should only be one of the criteria when choosing a technology stack. However, even if an unbiased evaluator chooses a stack that meets the current and future technical needs of your company and uses the correct programming language, they can still make a wrong choice if the technology stack supplier is not right for your company.

Often evaluators choose a technology stack containing non-commercial software components that have been developed by open source authors. The additional challenge is to choose these open source “suppliers” based on your non-functional requirements.

Does your evaluator consider the security vulnerabilities that have been disclosed for each component of the stack they choose? Do they know if anyone is working on that open source component? Even if enough people are working on the open source component, how active are they? Are they making fixes, making scalability improvements, and plugging security and data privacy holes that you would expect from your own developers, or are they only adding fun-to-develop features?
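One way to ground those questions in data is to measure a project’s recent commit activity. Here is a minimal sketch, assuming the component is hosted on GitHub and using GitHub’s public REST API; the repository name and the 90-day window are arbitrary illustrations, not recommendations:

```python
from datetime import datetime, timedelta, timezone
import requests

def recent_commit_count(owner: str, repo: str, days: int = 90) -> int:
    # Count commits on the default branch within the last `days` days
    # (capped at 100 by pagination; enough for a rough activity signal).
    since = (datetime.now(timezone.utc) - timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ")
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/commits",
        params={"since": since, "per_page": 100},
        timeout=10,
    )
    resp.raise_for_status()
    return len(resp.json())

if __name__ == "__main__":
    # Hypothetical example repository; substitute the component you are evaluating.
    print(recent_commit_count("openssl", "openssl"), "commits in the last 90 days")
```

Commit counts alone don’t reveal whether the activity is security fixes or fun-to-develop features, so treat this as one input among many.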

Make sure you and your evaluator choose your open source technology stack suppliers based on all the same criteria you would apply if you were to hire an employee or outsourcer to develop those components for you.

Web APIs are the New Open Source Software

If you are relaxing because you have your open source usage under control, beware. There is another increasingly common type of ungoverned third-party code that your engineers are using in your products: Web APIs.

There are many published Web APIs that, like open source software, are free of cost, readily available, and provide great value, but they are not free of obligations or risks. For example, https://www.programmableweb.com/api/keystroke-resolver is a Web API for mapping keystrokes from one type of keyboard to another. Perhaps useful, but what is this open source service doing with those keystrokes? Retaining them (if so, in what country)? Selling them? Marketing to your customers based on them?

Sometimes Web APIs are available to you as part of your license for a commercial software product or service. For example, you can build your own web applications using DocuSign’s published Web APIs. Use of those APIs is covered by your DocuSign license and access to them is only available to holders of an API key issued by DocuSign to paid licensees. However, even these commercial Web APIs have pitfalls for the products and services that use them.

Mistaken assumptions about Web APIs, and whether each holds for non-commercial and for commercial Web APIs:

  • API terms of use will remain the same: non-commercial, Maybe Not; commercial, Probably
  • API implementation will remain the same: non-commercial, No; commercial, No
  • API interface will remain the same: non-commercial, Maybe Not; commercial, Probably
  • API will process data locally: non-commercial, No; commercial, No
  • API will be hosted in the same legal jurisdiction: non-commercial, Maybe Not; commercial, Maybe Not
  • API will be available 100% of the time: non-commercial, No; commercial, No
  • API has an SLA: non-commercial, No; commercial, Maybe Not

The Web API author’s ability to change it instantaneously is good when they fix bugs and security vulnerabilities. But it is bad when they just as instantaneously introduce new bugs and vulnerabilities, or change the functionality or interface in a way that breaks your application. You have no control over whether you use those daily changes, because you are always using their current implementation.

Even if the Web API uses strong encryption for data in transit between your application and their server, the fact that some of this data might be personally identifiable information means not only will it be sent over a public network, but it may even be sent to another country.

Here is an example of a Web API. The current weather at a particular latitude and longitude can be found using the following URL (visit it yourself to see the results):

https://api.weatherbit.io/v2.0/current?lat=48.8583701&lon=2.2922873&key=876daf42ac7f4488956caf9011a83212
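In code, the same request might look like the following minimal sketch using Python’s requests library. The response field names (“data”, “city_name”, “temp”) are assumptions based on weatherbit.io’s documented response shape, so verify them against their documentation:

```python
import requests

resp = requests.get(
    "https://api.weatherbit.io/v2.0/current",
    params={
        "lat": 48.8583701,
        "lon": 2.2922873,
        "key": "876daf42ac7f4488956caf9011a83212",  # the key from the example URL above
    },
    timeout=10,
)
resp.raise_for_status()
# Field names below are assumptions about the response shape.
observation = resp.json()["data"][0]
print(observation["city_name"], observation["temp"])
```

Notice that the latitude and longitude passed as parameters are exactly the personal data at issue in the next paragraph.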

If I were a French citizen visiting a web page that uses the weatherbit.io Web API to find out the weather at my current location, my latitude and longitude would be sent to their server in New Jersey, USA. That is certainly a data privacy concern.

To take it a step further, what Web APIs hosted by yet other parties might weatherbit.io be calling to map the latitude and longitude to my time zone? to my city? to my state? to my country?

This is another example of the newest technology being adopted by organizations before management knows about it or can govern it. This is what happened with Shadow IT. Then Shadow Engineering emerged when software developers started using open source without permission from their management or procurement departments. Now, shadow web development via Web APIs is an increasingly common way for programmers to efficiently build web applications. Today, building web applications is a composition of proprietary code, outsourced code, open source code, and open source online services accessed via Web APIs. You must understand and manage the provenance of each of these components.
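As an illustration only, a provenance inventory covering all four kinds of components might start as simply as the following sketch; the record fields and entries are hypothetical, not a standard format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Component:
    # Hypothetical record fields for illustration; not a standard.
    name: str
    kind: str                  # "proprietary" | "outsourced" | "open-source" | "web-api"
    supplier: str              # author, vendor, or API operator
    license_or_terms: str
    endpoint: Optional[str] = None  # meaningful only for Web APIs

inventory = [
    Component("billing-core", "proprietary", "in-house", "internal"),
    Component("ui-widgets", "outsourced", "Acme Dev Co.", "work-for-hire contract"),
    Component("openssl", "open-source", "OpenSSL Project", "OpenSSL License"),
    Component("weather-lookup", "web-api", "weatherbit.io", "weatherbit.io terms of use",
              endpoint="https://api.weatherbit.io/v2.0/current"),
]

for c in inventory:
    print(f"{c.name}: {c.kind} from {c.supplier} under {c.license_or_terms}")
```

The point is not the data structure itself but that Web API endpoints deserve a row in the same inventory as every other supplier.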

Shadow Engineering

Do you allow your employees to acquire and use a supplier’s goods and services without the approval of your management? Certainly not anymore. You’ve probably spent years applying better governance to the acquisitions made by Shadow IT.

However, even before the emergence of shadow IT, your engineers were already making acquisitions from ungoverned suppliers: open source software authors.

Shadow IT mostly acquires compute and storage resources for internal use, but “shadow engineering” has been exposing your customers to ungoverned intellectual property by using open source software in your products.

Even though there are no subscription, licensing, or maintenance fees charged by these authors, their effects on your products are significant.

Just as shadow IT has helped organizations be more efficient and elastic, shadow engineering has done the same, but you must better govern what shadow engineering is acquiring.

Can brand categorization inspire SCA innovation?

Can the creation of a brand category inspire the expansion of existing products and motivate new entrants?

Yes, this is what can happen with Software Composition Analysis (SCA).

Prior to 2014, Black Duck and Palamida were helping their customers comply with the terms of the licenses that covered the open source they used.

Enter the Heartbleed security vulnerability in open source, and those vendors added reporting of previously disclosed security vulnerabilities in open source. This was an opportunistic leap forward.

Then came the marketing experts. They created the Software Composition Analysis category and now there is a bold direction to inspire innovation.

Yes, open source licenses and security vulnerabilities in open source are still important, but all levels of the software supply chain should care about all the non-functional requirements (NFRs) of software they get from all suppliers.

Whose software to analyze

A Procurement team (with a capital P, not just the finance team) should analyze the composition of software they source from any supplier: commercial software vendors, partners, outsourcers, acquired companies, other divisions and groups within their own company, and (yes) open source authors too.

What software NFRs to analyze

Software can be flawed in many ways: non-compliance to open source license terms and existence of previously disclosed security vulnerabilities are only two of them. When a company develops its own software (either for its own use or for the use of others), it analyzes many NFRs of that software. Since procured software is intended to be an efficient time and cost replacement for in-house developed software, the same analysis should be applied to procured software. A Procurement team should analyze all of the following NFRs of the software they source:

  • Quality
  • Maintainability
  • Scalability
  • Extensibility
  • Portability
  • Compatibility
  • Reusability
  • Usability
  • Accessibility (by those with disabilities)
  • Exportability (based on distribution and use of cryptography)
  • Vulnerability (previously undiscovered, analogous to anomaly-based IDSs)
  • Vulnerability (previously disclosed, analogous to signature-based IDSs)
  • Licensability

Granted, in-house developed software doesn’t always meet every one of these NFRs, but the Procurement team should acknowledge more than just the final two on this list.

The marketers who created the Software Composition Analysis category have pointed the way, challenging existing and new SCA product providers to create products and services that analyze many more NFRs.


Open source governance hot potato

Who owns open source governance? Legal?

Let’s step back and consider what OSS is replacing. OSS is an alternative to developing, testing, and maintaining the software in-house.

Therefore, the use of each OSS package should be governed much as one governs other external software suppliers. You don’t control open source developers the way you control in-house developers, but governing open source mimics the governing of software sourced from other external suppliers.

Here is the priority of who should govern open source:

  1. Role which governs use of software developed by partner companies
  2. Role which governs use of software developed by outsourcing companies
    • Often this role makes too many optimistic assumptions about the quality of software produced by the outsourcer, so this role might not spend the necessary time evaluating the quality of the open source
  3. Role which governs use of software developed by commercial companies
    • Often this role is too concerned with financials that don’t apply to open source
  4. Each product team

So if an organization doesn’t already have one of the first three roles listed above, the responsibility for open source governance should fall to each product team. Hopefully the common leader of these product teams will recognize the overhead of distributed open source governance and centralize it, effectively creating one of the first three roles.

Certainly, the role which governs open source should consult with the Legal department as needed. However, any one of the above roles will be better able to apply all the right quality controls to ensure the security, maintainability, etc. of the open source than can the Legal department.

Simple question, not so simple answer

I have often been asked the question: “Is this a good open source license to use?”

First, this is the wrong question: the developer will not be using the license; they will be using the OSS covered by the license.

To be fair, some open source licenses are so liberal that any OSS covered by those licenses can be used in any way with no obligations or fear of legal consequences.

However, for the majority of open source licenses, the answer to the simple question depends upon complicated issues: how the OSS will be used, whether the developer can fulfill the license obligations resulting from that use, and whether the developer’s business agrees to fulfill those obligations.

In a common case, distributing a product which dynamically links with an LGPL-licensed library at least requires the developer to publish the OSS library’s copyright notice and make the OSS source code available to any customer.

In an uncommon case, distributing a product which statically links with that same LGPL-licensed library also requires the developer to make the proprietary source code of the product available to any customer. Same license, same library, but different use results in unacceptable obligations.

In another case, distributing modified CPL-licensed OSS requires the developer to make the modified source code available to any customer. If their modifications are clever enhancements that the developer’s business wants to remain trade secret, then that usage (that is, modification) results in unacceptable obligations.

Are the LGPL and CPL licenses bad? No, but they are a type of license that poses more risks, so the developer has to be careful how they use the OSS covered by these licenses.

 

Tarred with the same brush

OpenSSL consists of two major component libraries: the secure socket library and the core cryptography library (see the second sentence here).

The core cryptography library is often used by products independently from the secure socket library, but binary and source code application scanners can’t detect this distinction because both component libraries are marked with the same OpenSSL “brand”.

The many security vulnerabilities found in the secure socket library have caused all of OpenSSL to be considered as highly insecure. Therefore, when an application scanner run by an interested party (e.g. customer, partner, acquirer) detects artifacts of OpenSSL in a product, the scanner flags the entire product as insecure even if that product only uses OpenSSL’s core cryptography library.
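To see why the distinction is invisible, consider a minimal sketch of the kind of string-based detection such scanners rely on; it flags any file containing an OpenSSL version banner but cannot say which component library the banner came from (illustrative only; real scanners use many more signals):

```python
import re
import sys
from pathlib import Path

# Match version banners such as "OpenSSL 1.0.2k" embedded in a binary.
BANNER = re.compile(rb"OpenSSL \d+\.\d+\.\d+[a-z]?")

def scan(path: Path) -> None:
    data = path.read_bytes()
    for match in sorted(set(BANNER.findall(data))):
        # The banner alone cannot distinguish the secure socket library
        # from the core cryptography library.
        print(f"{path}: found {match.decode()} (component library unknown)")

if __name__ == "__main__":
    for name in sys.argv[1:]:
        scan(Path(name))
```

Because both libraries carry the same “brand” in their artifacts, any match is attributed to OpenSSL as a whole.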

This has forced software owners either to patch their version of OpenSSL even when the patch only fixes vulnerabilities in the secure socket library they don’t use, or to reimplement their product with a different cryptography library… effort that could instead be spent on addressing security issues that actually apply to their product.

It is time for OpenSSL to separate its core cryptography library from its secure socket library and re-brand the core cryptography library to draw the distinction necessary to avoid this busy work.

Crowdsourcing without the crowd

Prior to the discovery of the Heartbleed security vulnerability in OpenSSL, the only criterion used to evaluate open source software (OSS) was whether its license terms were acceptable. Even though evaluators have since added the security of the OSS as a second criterion, that is still not sufficient.

Evaluating OSS must also apply all the other criteria used for proprietary software: maintainability, extensibility, usability, reliability, scalability/performance, portability, compatibility, and reusability (aka Architecturally Significant Requirements, Non-Functional Requirements, or the software “-ities”)…

…then must consider whether that OSS itself violates any copyrights or infringes any patents, and also whether all OSS it uses meets all of these criteria.

As with proprietary software, you are unlikely to find OSS that perfectly meets all these criteria, but you need to know that these same criteria are relevant and how well the OSS meets each one.

For example, it is tempting to assume that each OSS project is developed, tested, and maintained by its own crowd of specialized engineers. However, many OSS projects have been abandoned, which puts the burden of maintaining and extending them on each proprietary product that uses them: crowdsourcing without the crowd.

Retrieved April 27, 2017, from  https://www.openhub.net/explore/projects