Understanding the security of third-party components
When I started working on computer security, organizations that worried about security were concerned about the security of software that they created themselves and shipped to their customers. Today, a lot has changed – many (most?) organizations that deliver software products or services rely heavily on components that other organizations or individuals have created. Many of these “third-party components” are open source software, while some are commercially licensed libraries or subsystems. There are a variety of ways of incorporating third-party components – from copying a source code snippet found on the web, to calling a library, to embedding a complete functional module or product. (“Third party” refers to the provider of the incorporated component – the first party is the supplier of the product and the second party is the customer.)
The security of third-party software components is a serious issue. A security vulnerability in a third-party component can expose a software product or service to attack just as a vulnerability in code your developers have written can. This means that developers or suppliers who decide to incorporate third-party components need to pay attention to the security of those components, just as they do to the security of the code they write themselves. Recognizing this fact, SAFECode published a free guide to the secure use of third-party components early last year.
Over the last few months, I’ve been involved in quite a few discussions with customers and policymakers who are worried about software “supply chain” and in particular about the security of third-party components. Some of the discussions have focused on giving customers confidence that their suppliers apply practices such as those documented in the SAFECode guide to manage the third-party code their products include. This is a very reasonable concern for customers to express: a supplier should take responsibility for the product or service delivered, including having a sound and effective approach to managing the security of third-party components.
But some of the discussions have taken a different turn. Recently, I’ve heard a lot of questions about “third-party component transparency” – the notion that if a developer incorporates third-party components, the developer should provide end customers with a complete listing of those components, down to the individual version number of each. The idea is that if there’s a report of a vulnerability in a third-party component, the customer will be made aware of it and can do something in response.
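To make the idea concrete, here is a minimal sketch of the workflow that transparency advocates imagine: take a supplier’s versioned component listing and cross-reference it against published vulnerability reports. All of the component names, versions, and reports below are hypothetical, and real tooling would match version ranges rather than exact strings; the point is only to show what such a listing can and cannot tell a customer.

```python
# Hypothetical component listing a supplier might publish for one product.
product_components = {
    "openssl": "1.0.2k",
    "zlib": "1.2.11",
    "libxml2": "2.9.4",
}

# Hypothetical feed of vulnerability reports: (component, affected version).
vulnerability_reports = [
    ("openssl", "1.0.2k"),
    ("curl", "7.54.0"),
]

def affected_components(components, reports):
    """Return component names whose exact shipped version appears in a report."""
    return sorted(
        name
        for name, version in components.items()
        if (name, version) in reports
    )

print(affected_components(product_components, vulnerability_reports))
# A match means a vulnerable component is *present* in the product --
# it says nothing about whether the product actually exposes the
# vulnerable code path to attackers.
```

Note that the check only establishes presence, not exploitability, which is exactly the gap discussed next.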
The pressure for component transparency seems to be based on an analogy to food labeling. If one of my kids is allergic to peanuts, I can look at the list of ingredients before I buy a box of cookies and if the list includes peanuts, I’ll buy something else. Easy and effective.
But software products and services aren’t cookies. A product that incorporates a vulnerable component isn’t necessarily affected by a particular vulnerability – it may not expose the vulnerability to external input or call the vulnerable interface (the example I’ve heard is embedding OpenSSL but only to call on its cryptographic random number generator). And users may not be able to do anything effective to protect themselves in any case. They shouldn’t replace the old version of the component with a new one without regression testing to make sure the new one doesn’t cause other problems for the product, and the product developer should already be doing that. They may be able to make a configuration change to mitigate the impact of the vulnerability, but they probably need information about how the product is affected from the developer before they can do that effectively.
The common thread is that the product developer is in a position to review the third-party component vulnerability and the product’s use of the component, and then tell customers “we don’t use it that way; nothing to worry about,” or “we’re releasing a patch with an update to the vulnerable component,” or “we’ll be releasing a new version, but in the meantime, here’s a configuration change that will protect you.” If a product developer incorporates a third-party component, he or she should be doing the required analysis and providing customers with that sort of information.
The one thing a customer can do with a list of third-party components is call the product developer’s support line for information when a vulnerability in a component is discovered. But all those requests for information just create extra noise in the developer’s system and probably don’t get the answers any faster. Worse, they may actually distract the developer from the analysis, testing, and patch development that protect customers not only from vulnerabilities in third-party components but also from the other concerns that a comprehensive secure development process will address.
So my bottom line is that developers absolutely have to manage their secure use of third-party components. But it’s important to understand the differences between software products and cookies, and to allow developers to provide customers with information they can actually use.