The members of SAFECode have all made major investments in our development processes aimed at improving the security of the software we produce.  Security is important to customers who use software to process critical information and to manage critical business processes.  Our real-world experience has shown that having a secure development process is the most effective way to improve software security.

Governments around the globe have a particular interest in improving the security of software, hardware, and services acquired for government applications.  Numerous proposals intended to support this goal have been floated recently, including pending legislation and the DHS Resilience STAR proposal in the United States, the security standards, audit, testing and certification envisioned for India’s new Joint Working Group on Cyber Security, and China’s pending cloud standard.  Because SAFECode members are committed to advancing effective software assurance methods, we thought it might be helpful to share a technical perspective on a few of the approaches contemplated in these proposals.

Secure software development is a process and its implementation varies from one organization to another depending on the organization’s security maturity, technical skills and culture.  Therefore, when we see potential requirements that are highly prescriptive and/or technical, we carefully review them for unintended consequences.  Mandating specific practices, tooling, or country-specific standards could inadvertently increase costs for government and industry without actually reducing risk; furthermore, such mandates could stifle the innovation necessary to counter existing and emerging threats.  Rather than mandating specific practices, we believe that governments seeking additional assurance about the software they acquire should focus on vendors’ secure development processes, leveraging global standards.

In our review of recent proposals, SAFECode members have seen a variety of prescriptive requirements, including:

  • Requirements for the use of specific secure coding practices and the creation of unique country-specific secure coding standards, as well as the use of independent code assessment organizations to evaluate compliance with those standards.
  • Requirements for the use of automated static or dynamic code analysis, including specific “qualified” static analysis tools and authorized third-party testing organizations to complete the analysis.
  • Requirements for vendors to provide the government with source code so that it can assess functionality, quality and security.
  • Requirements for vendors to declare the absence of security vulnerabilities in their products.

Some of these requirements appear to be the same as, or similar to, practices that SAFECode members apply and have documented in our Fundamental Practices for Secure Software Development (2nd Edition).1   But there are key differences between what SAFECode members – and other organizations committed to software security – actually do, and how they do it, and what these proposals seek to require.  To help explain SAFECode members’ concerns with these requirements, we have summarized those differences below.

Studies have repeatedly shown that complex software contains errors (bugs), and that a small fraction of these errors are security vulnerabilities.  A strong software engineering process can be directly linked to a reduction in these errors, but even the most mature software engineering organizations do not completely eliminate them.  For this reason, any vendor claiming the absence of vulnerabilities in the code it ships would only be demonstrating its lack of understanding of the field of software assurance.

Coding standards are an important element of a secure development process because they help our developers use programming languages in safe ways, and because we can automatically detect deviations from those standards and direct developers to correct them.  But coding standards vary because we use different compilers and different operating system platforms and versions, and because we build software for different purposes.  If our members had to comply with a “one size fits all” government secure coding standard, the result would be counterproductive: we would spend time recoding to the standard – one not adapted to the security needs of our code bases – instead of improving the real security of our software.
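To illustrate the kind of automated deviation check described above – a hypothetical sketch, not any member’s actual tooling – a minimal banned-API scan for a C code base might look like the following.  The function names and the remediation advice are assumptions standing in for whatever a given organization’s own standard specifies:

```python
import re

# Hypothetical project-specific coding standard: C functions banned in this
# code base.  A different platform, compiler, or product would need a
# different list -- which is why a single mandated standard does not fit all.
BANNED_APIS = {
    "strcpy": "use a bounds-checked copy such as strlcpy or strcpy_s",
    "sprintf": "use snprintf",
    "gets": "use fgets",
}

def check_source(text: str) -> list:
    """Return one finding per call to a banned API in the given source text."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for api, advice in BANNED_APIS.items():
            # Word boundary so e.g. snprintf() is not flagged as sprintf().
            if re.search(rf"\b{api}\s*\(", line):
                findings.append(f"line {lineno}: {api}() is banned; {advice}")
    return findings

sample = 'strcpy(dst, src);\nsnprintf(buf, sizeof buf, "%d", n);\n'
for finding in check_source(sample):
    print(finding)
```

A real deviation checker would be integrated into the build or code review workflow; the point of the sketch is that the banned list itself is a per-organization, per-code-base decision.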

As part of our organizations’ software assurance processes, SAFECode companies have also used outside organizations (contractors and consultants) to help with testing, evaluation and certification.  We are able to guide their efforts effectively because we understand our code bases.  But if countries take individual approaches to standards and testing, we will have to train personnel at multiple outside code assessment centers, diverting our security experts from making our software more secure.

SAFECode reports, such as the Fundamental Practices paper discussed above, acknowledge that static analysis is an important part of a secure development process, and SAFECode members all use static analysis tools.  But we use a variety of different tools, often more than one, and some companies use “home grown” tools or tool extensions.  Regardless of the tool or tools used, every company has to “tune” or tailor them to find software vulnerabilities while minimizing the number of “false positives” the tools emit.  These practices – tuning and extending the tools – are necessary because each of us has a different code base with different technical attributes.  If policy proposals or procurement requirements mandate the use of government “qualified” tools, some of our tools surely will not be on the “qualified” list.  As a result, we would still have to use our own tools, tuning and extensions to improve security, while also running the “qualified” tools and managing the increased number of false positives they produce – incurring more cost and extending product cycles.
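The “tuning” described above often amounts to maintaining a per-code-base suppression list so that rules known to misfire in particular parts of the tree are filtered out of the results.  A minimal, hypothetical sketch of that idea – the rule names, paths, and suppression reasons are invented for illustration – might look like:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    rule: str   # analyzer rule that fired, e.g. "null-deref"
    path: str   # file the finding was reported in
    line: int

# Hypothetical per-code-base tuning: (rule, path-prefix) pairs where the rule
# is known to produce false positives in this particular project.
SUPPRESSIONS = {
    ("tainted-input", "third_party/"),  # upstream parser already vetted
    ("null-deref", "generated/"),       # generated code checks nulls elsewhere
}

def tune(findings):
    """Drop findings matched by a (rule, path-prefix) suppression entry."""
    return [
        f for f in findings
        if not any(f.rule == rule and f.path.startswith(prefix)
                   for rule, prefix in SUPPRESSIONS)
    ]

raw = [
    Finding("null-deref", "generated/proto.c", 120),  # suppressed
    Finding("null-deref", "src/server.c", 42),        # kept for triage
]
print(tune(raw))
```

Because the suppression set encodes knowledge of one specific code base, results from a differently tuned (or untuned) “qualified” tool cannot simply replace it; the two result sets have to be reconciled by hand, which is where the added cost comes from.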

A final point concerns the lifecycle of security requirements.  While many of the proposals we are seeing refer to static standards and tooling, the reality for all of us is that security needs evolve, and how we implement the various software assurance practices must evolve as well.  SAFECode members assess their development processes, including vulnerabilities identified after release, to determine when and how additional practices or variations in implementation may be needed to improve security.  For example, it is not unusual for one of our members to update its secure development process – static analysis tuning and extensions, coding standards, and other requirements – once or twice a year.  Government-mandated tools and standards simply could not keep pace.  Better software assurance is delivered through a comprehensive development process that can evolve and adapt.

Each of the SAFECode members has its own secure development process.  Our processes have common elements, and those elements were the basis for the Fundamental Practices document.  We find it interesting to see some of the 17 practices and principles outlined in our Fundamental Practices paper included in various policy proposals around the globe.  But no single practice is a silver bullet for better software assurance.  Rather, the efficacy and efficiency of each practice varies based on how it is applied as part of a holistic process within each unique organization developing software.

If governments seek additional assurance about acquired software, they should work with vendors to understand the processes those vendors use to develop it.  Emerging global standards, such as ISO/IEC 27034-1, may also offer a way to verify that an organization has a software development process and is following it.  Prescriptive mandates for specific practices, coding standards, tools or consultants could make vendors’ processes impractical or, worse, have effects counter to their original intent.  Such mandates will not provide the flexibility necessary to address today’s complex threats or to innovate in the face of a rapidly changing threat landscape.  On the other hand, asking vendors to document their secure development processes will encourage the right behavior on the part of vendors and their developers and enable them to innovate over time.  Knowing, and, as appropriate, verifying, that vendors have a process for addressing security in development will give acquirers additional confidence in the software on which they depend.