Defining Success: Is Your Security Champions Program Working?

Part One: Start 2019 Strong: Join SAFECode for Our Month of Champions
Part Two: Building Secure Software: It Takes a Champion
Part Three: Putting a Face to Software SCs
Part Four: How to Build an Effective Security Champions Program
Part Five: Warning: Six Signs Your Security Champions Program is in Trouble
Part Six: Kicking off and Keeping up with a Security Champions Program

Also check out our podcast discussion with the series authors on the importance of Security Champions here.

By Vishal Asthana, Security Compass (former) with Manuel Ifland, Siemens

“If You Can’t Measure It, You Can’t Improve It” — Peter Drucker

We’ve come to the final post in our Security Champions (SC) series. This last discussion will focus on measuring the success of an SC program once established.

Metrics are a tangible way to track the effectiveness of an initiative, program, or set of activities. Without them, teams often fall back on guesswork, make slow or no progress, and lose momentum. Note that purely quantitative metrics carry the risk of being gamed to show superficial progress and should therefore be combined with qualitative metrics. Simply put, avoid relying on any single metric to measure the ongoing success of the SC Program.

We recommend organizations customize a combination of metrics that align with their program objectives. To help guide this process, here is a list of commonly used quantitative and qualitative metrics for consideration:

Quantitative Metrics

  • Percentage of team members who have received foundational and/or code-specific training under the SC’s guidance: For example, in a 30-member team with one SC, if 20% were trained in the first session, how many more have been trained incrementally since?
  • Percentage of senior team members who have received training: For example, if the SC started with training 50% of lead/senior developers, is this number showing an upward trend over time?
  • Decreased number of entry-level questions to SCs: For example, if an SC was fielding an average of 5 questions on how to perform vulnerability triage from 20 developers when the program was first rolled out, is that number showing a downward trend?
  • Decreased number of recurring similar vulnerabilities (internally and externally found): For example, if an organization keeps seeing the same SQL injection and XSS vulnerabilities, a decrease in their number over time is a positive trend.
  • Completion of individual training modules by the development team: For example, which and how many training modules, including final tests, did developers complete?
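Quantitative metrics like these are most useful as trends over time rather than as single snapshots. As a minimal sketch (all team sizes, counts, and names below are illustrative, not drawn from any real program), the training-coverage metric from the first bullet could be tracked like this:

```python
# Hypothetical sketch: tracking training-coverage trend for a 30-member team.
# All figures and the snapshot periods are illustrative assumptions.

def training_coverage(trained: int, team_size: int) -> float:
    """Percentage of the team that has completed foundational training."""
    return round(100 * trained / team_size, 1)

# Quarterly snapshots: cumulative count of developers trained so far
snapshots = {"Q1": 6, "Q2": 12, "Q3": 19, "Q4": 24}
team_size = 30

coverage = {q: training_coverage(n, team_size) for q, n in snapshots.items()}
print(coverage)  # {'Q1': 20.0, 'Q2': 40.0, 'Q3': 63.3, 'Q4': 80.0}

# A simple check that coverage is trending upward quarter over quarter
values = list(coverage.values())
trending_up = all(a < b for a, b in zip(values, values[1:]))
print(trending_up)  # True
```

The same pattern (snapshot, percentage, trend check) applies to the other quantitative metrics above, with the direction of the desired trend reversed for the "decreased number of…" metrics.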

Qualitative Metrics

  • Feedback from SCs: As developers learn more about AppSec under their SCs’ guidance, is there a demonstrated decrease in pushback against or resistance to security efforts?
  • Periodic short surveys of the development team: Use real anecdotes from developers about the SC Program’s effectiveness, e.g., 1-3 question surveys asking ‘Do you think the SC Program is effective? Why or why not? Do you know how to report a problem to the security team?’ Doing so keeps metrics sane and prevents over-reliance on quantitative measures alone. Amazon uses this anecdote-based approach in a big way: on one occasion, Jeff Bezos called the customer support line during an executive meeting, as a quick run-time validation after the head of customer support had claimed a wait time of under one minute, and waited more than four minutes before reaching a representative (https://www.businessinsider.in/Amazon-executives-sat-through-a-brutally-uncomfortable-phone-call-that-showed-them-just-how-much-Jeff-Bezos-cares-about-customers/articleshow/63851499.cms).

A combination of these metrics can then be rolled up into a dashboard for Engineering Leadership, Central Security Team, and other potential stakeholders. This information can help guide decisions about sustaining program investment as well as maintaining or adjusting specific program elements. In short, an effective measurement approach can help you determine if your SC program is successful, and if not, provide some insight into how to get it there.

As we near the end of SAFECode’s Month of Champions, this marks the final post in our series. Next week, we’ll wrap things up with a podcast from our authors where they’ll share some of their own implementation war stories and highlight the key takeaways from this series.

Copyright © 2007- Software Assurance Forum for Excellence in Code (SAFECode) – All Rights Reserved