These days I find myself in a lot of meetings where folks talk about things like risk management and compliance as well as software security. Those meetings have gotten me thinking about how and why secure development programs succeed in organizations.

When we created the SDL at Microsoft, my team was part of the Windows security feature development organization. Figuring out secure development was one of our roles, and initially the smallest part of the team’s work. But secure development was part of the product engineering organization, so the approach we took – pretty much from Day One – emphasized training, motivating, and enabling the engineers who wrote the code to design and develop secure software. We started with training and motivation. Over time, we added more and more enablement in the form of tools and very specific secure development guidance.

What we didn’t do was put a lot of emphasis on after-the-fact compliance or testing. The SDL was mandatory, but even when we did penetration testing, our approach was to use it early to look for specific design problems. (This was really adversarial design and code review, although we called it penetration testing.) We also used some penetration testing later in the development cycle to confirm that the developers had followed the process, applied their training, run the tools, and fixed any problems the tools reported.

We had security people assigned to work with the development groups, but their role was primarily to provide advice on threat modeling and to help with gnarly problems – not to check on developers. Because the process was mandatory, we wanted to confirm that it had been followed, but we tried to do that with automation integrated into the tools and build systems, so that a single database query would tell us any places where the developers hadn’t followed the process or met the requirements.
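To make that idea concrete, here is a minimal sketch of the kind of check I mean, assuming a hypothetical database that the build tooling populates with evidence that each requirement was satisfied for each component. The database name, table names, and schema are all invented for illustration; a real system would query whatever the build and tool pipeline actually records.

```python
# Hypothetical sketch: report every (component, requirement) pair with no
# recorded evidence, i.e. places where the process wasn't followed.
# The schema (components, sdl_requirements, evidence) is an assumption.
import sqlite3

QUERY = """
SELECT c.name, r.requirement
FROM components AS c
CROSS JOIN sdl_requirements AS r
LEFT JOIN evidence AS e
       ON e.component_id = c.id AND e.requirement_id = r.id
WHERE e.id IS NULL            -- no evidence recorded: requirement not met
ORDER BY c.name, r.requirement;
"""

def report_gaps(db_path: str) -> None:
    """Print every component/requirement combination lacking evidence."""
    with sqlite3.connect(db_path) as conn:
        for component, requirement in conn.execute(QUERY):
            print(f"{component}: missing evidence for '{requirement}'")

if __name__ == "__main__":
    report_gaps("sdl_evidence.db")  # hypothetical database file
```

The point of the sketch is the shape of the check, not the specifics: compliance falls out of data the engineering systems already produce, rather than out of a separate inspection step.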

As a result, the developers understood pretty quickly that product security was their job rather than ours. And instead of having twenty or thirty security engineers trying to “inspect (or test) security in” to the code, we had thirty or forty thousand software engineers trying to create secure code. It made a big difference.

Conway’s Law

Back to risk management and compliance. Early in my professional career, I came across Conway’s Law, which says “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.” It’s normally interpreted to mean that the structure of a system mirrors the structure of the organization that creates it. For software development (from Wikipedia): “…the software interface structure of a system will reflect the social boundaries of the organization(s) that produced it, across which communication is more difficult.”

The interaction between development teams isn’t the same as the interaction between a development team and a security team. But thinking about Conway’s Law, I’ve been wondering if software security assurance teams that aren’t part of a development organization might be doomed by the social boundaries of their organization to pursuing software assurance through after-the-fact inspection and testing. If you’re part of a compliance or audit or inspection team that’s organizationally separate from development, the natural approach may be to let the developers build the software however they build it, and then check it afterwards to see if it’s secure. That approach conforms to the model of security as an outside compliance function. But from the perspective of secure development, it’s a flawed approach.

It’s really a tough approach to make work, because it means the developers (and the security team) only find out about security problems after the software is pretty much ready to ship. This approach, at best, makes it difficult and expensive to correct errors and increases pressure to “ship now and accept the risk.” In this model, you say you’ll correct the security bugs in the next release – and hope no vulnerability researcher discovers them, and no bad guy exploits them, in the meantime. Not good for product or customer security. Not good for corporate reputation either. And this situation can be bad for developer morale too. I remember back before we created the SDL when the Seattle Times published big front-page headlines after the discovery of vulnerabilities in a new operating system version. My security team was unhappy, but the development staff had a lot of pride in the company, and you can believe they noticed the headlines too!

I’m not saying that the only way for a software security program to work is for the software security team to be part of the development organization. But I am saying that a successful software security team has to understand how the development organization works, work cooperatively with it, and focus on enabling its engineers to build secure software as part of their job of building software. This is why I keep coming back to Conway’s Law. A lot of software development is about communication. How the different organizations within a company that develops software communicate is a key factor in the successful creation of new, secure products.

That focus on enablement implies a commitment to training, tools, and guidance for the developers, as well as an approach to compliance that relies on artifacts of the development process rather than on after-the-fact effort. Especially today, with development teams using agile or DevOps approaches and feeling pressure to ship in hours or days rather than months or years, that’s really the only way software security can work effectively for an organization.
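As a rough illustration of what artifact-based compliance can look like in a fast-moving pipeline, here is a small sketch of a gate that checks for the artifacts the process should have produced and fails the build if they’re missing or show unresolved problems. The file names, paths, and report format are assumptions made up for this example, not a prescription.

```python
# Hypothetical sketch: a pipeline gate that checks development-process
# artifacts rather than relying on a separate after-the-fact review.
# Paths and the findings format are assumptions for illustration only.
import json
import sys
from pathlib import Path

REQUIRED_ARTIFACTS = [
    "artifacts/threat_model.md",        # threat model produced during design
    "artifacts/static_analysis.json",   # output of the static analysis run
]

def main() -> int:
    missing = [p for p in REQUIRED_ARTIFACTS if not Path(p).is_file()]
    if missing:
        print("Missing required artifacts:", ", ".join(missing))
        return 1

    # Assume the report is a JSON list of findings with "severity" and
    # "resolved" fields; any unresolved high-severity finding blocks the build.
    findings = json.loads(Path("artifacts/static_analysis.json").read_text())
    open_high = [f for f in findings
                 if f.get("severity") == "high" and not f.get("resolved")]
    if open_high:
        print(f"{len(open_high)} unresolved high-severity findings; failing the build.")
        return 1

    print("Process artifacts present and clean.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

A check like this runs in seconds on every build, which is the point: the evidence that the process was followed comes out of the developers’ own workflow, not out of a security team’s calendar.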