First published Jan. 23, 2018, in CSOonline.

(Managing) risky business
By SAFECode Executive Director Steve Lipner

Focus on risk management is a common element of cybersecurity today. To take two examples, my LinkedIn network includes a lot of people with the title of “risk executive,” and government initiatives and policies in the US and EU aim to encourage or mandate risk-based decision-making about security.

It has to be that way – we can’t achieve perfect security, and if we tried we’d have to invest infinite resources. Instead, we try to invest in enough security so that the expected consequences of attacks are acceptable. We expect that the most serious attacks will fail, and the attacks that succeed won’t do much harm.

The challenge of risk management is deciding “how much.” Risk is defined as the product of threat (how hard an adversary will try to attack our system, and how good the adversary is at attacking) times vulnerability (how likely it is that there’s a way for the adversary to get in) times consequences (what harm the attack can do if it manages to find and exploit a vulnerability). Unfortunately, we don’t have good ways of measuring any of those things I said we need to multiply.
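To make the multiplication concrete, here is a minimal sketch in Python, assuming purely illustrative 0-to-1 scores for each factor; the function name and the numbers are mine, not any standard scale, and the point is that the inputs are judgments rather than measurements.

    # Illustrative only: the scores are hypothetical judgments on a 0-to-1
    # scale, not measurements -- which is exactly the problem described above.
    def risk_score(threat: float, vulnerability: float, consequence: float) -> float:
        """Risk as the product of threat, vulnerability, and consequence."""
        return threat * vulnerability * consequence

    # A network-facing server holding sensitive data (hypothetical numbers).
    print(risk_score(threat=0.8, vulnerability=0.3, consequence=0.9))  # 0.216

    # An internal tool with no sensitive data (hypothetical numbers).
    print(risk_score(threat=0.2, vulnerability=0.3, consequence=0.1))  # 0.006

Change any one judgment and the product, and therefore the decision, changes with it, which is why experience and calibration matter so much.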

Instead of measuring threat, vulnerability, and consequences, we rely on experience and judgment. Government agencies, industry groups, and auditors provide advice or requirements that they believe – or hope – are appropriate to the assets that need to be protected, systems that need to be operated, and experience with threat actors. Sometimes the advice works well and systems operate securely. When the advice is flawed, smart organizations learn from their mistakes and update the guidance they issue. (More on that last point in a future blog.)

The “bug bar”

Software development organizations are performing risk management when they decide what security requirements to impose and what security bugs to treat as “must fix.” When the software security team specifies mandatory training, tools, and processes, they are really applying their experience with threats, vulnerabilities, and perhaps consequences to tell the developers how to achieve an acceptable level of risk at a cost that’s acceptable in terms of time and effort. Meeting the requirements enables the developers to create appropriately secure software without having to be security experts.

Sometimes the development organizations find that it will be costly in resources or schedule to fix a security bug. The bug might have been discovered late in the development cycle or it might be in a part of a system where even a minor change would necessitate a time-consuming test pass. What to do then?

One way to prepare for that situation is for the security team to create a “bug bar” that assigns a severity rating to each likely scenario that can result if a vulnerability isn’t fixed. An Elevation of Privilege vulnerability in a network-facing server component would have a critical severity (think Code Red), while a denial of service vulnerability that would require restarting a single client application might have a low severity. A sample bug bar that the Microsoft SDL team created a few years ago is available.
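To show the shape such a bug bar can take, here is a rough sketch in Python; the vulnerability categories, exposure labels, and ratings are my own illustration, not the Microsoft SDL bug bar itself.

    # Hypothetical bug bar: maps (vulnerability type, exposure) to a severity.
    # The entries are illustrative, not an official SDL bug bar.
    BUG_BAR = {
        ("elevation_of_privilege", "network_facing_server"): "Critical",
        ("elevation_of_privilege", "local_client"):          "Important",
        ("information_disclosure", "network_facing_server"): "Important",
        ("denial_of_service",      "single_client_restart"): "Low",
    }

    def severity(vuln_type: str, exposure: str) -> str:
        """Look up the severity a reported bug gets under this bug bar."""
        return BUG_BAR.get((vuln_type, exposure), "Needs triage")

    print(severity("elevation_of_privilege", "network_facing_server"))  # Critical
    print(severity("denial_of_service", "single_client_restart"))       # Low

The value of writing the table down in advance is that the rating is decided calmly, before anyone is arguing about a ship date.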

The bug bar alone provides useful guidance to a development team that is trying to decide how to deal with a late-breaking or high-impact bug, but it also has additional value. A secure development process should include a final security review that confirms that the development team has in fact met the security requirements before release. If the team has done that, the software goes out the door and the team goes off to the release party.

If there are unmet requirements, the bug bar can help guide the management review that decides whether to fix the bug in the next release or delay release and fix it now.
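One way to picture how the bug bar feeds that decision is a simple release gate; the sketch below assumes a “must fix before release” threshold of Important, which is my own illustrative choice rather than a prescribed policy.

    # Sketch of a final-security-review gate. The severity scale and the
    # must-fix threshold are assumptions for illustration only.
    SEVERITY_ORDER = ["Low", "Moderate", "Important", "Critical"]
    MUST_FIX_THRESHOLD = "Important"

    def needs_management_review(open_bug_severities: list[str]) -> bool:
        """True if any unfixed bug meets or exceeds the must-fix bar."""
        threshold = SEVERITY_ORDER.index(MUST_FIX_THRESHOLD)
        return any(SEVERITY_ORDER.index(s) >= threshold for s in open_bug_severities)

    print(needs_management_review(["Low", "Moderate"]))  # False: ship it
    print(needs_management_review(["Low", "Critical"]))  # True: escalate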

The review should involve a discussion between a development team manager and a software security team manager at a peer level with the development manager (e.g., vice president to vice president).

The development team manager has the ultimate authority and responsibility to accept risk, while the security team manager has the responsibility – and the experience and judgment – to ensure that the development manager has a clear understanding of the risk being accepted.

In my experience, the combination of sound secure development requirements, a clear bug bar, and (when necessary) a management review between security and development peers leads to sound and conflict-free risk management decisions – and usually to secure code.