Today, we continue our Meet SAFECode series with an interview with Codenomicon’s Mike Ahmadi. Mike is one of our newer members and we couldn’t be happier to have him as part of our team.

Interview with Mike Ahmadi, CISSP, Global Director of Business Development at Codenomicon

Q. From the DNS flaw to Heartbleed, we’re seeing some foundational flaws emerge. What would you see as another area ripe for exposure and how do we find/fix these?

One thing that really struck us at Codenomicon when we discovered Heartbleed was that we had been testing TLS/SSL for years and hadn’t found it, because our testing technology had not yet advanced enough to catch the bug. What stood out most from that discovery was the realization that the flaw had always been there, lying dormant; as far as we know, no one had found it. That raises an uncomfortable question: what else is out there, dormant? And as the tools and capabilities of the research and hacker communities continue to evolve, what else is going to be discovered? There are a lot of bugs in software, and when you look at things from a non-functional perspective, there is an effectively infinite number of ways to abuse those bugs. The question is, how much can you shrink that infinity down to the point where you can discover what is most essential to mitigate?
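For illustration, here is a minimal sketch of mutation-based fuzzing, the general class of technique that eventually surfaced bugs like Heartbleed. The message layout, parser, and mutation strategy below are hypothetical stand-ins, not Codenomicon’s tooling or the real TLS heartbeat format:

```python
import random

# Hypothetical message: [type, reserved, claimed payload length] + payload.
# This is NOT the real TLS heartbeat record format, just an illustration.
SEED_MESSAGE = bytes([0x01, 0x00, 0x04]) + b"ping"

def mutate(message: bytes) -> bytes:
    """Flip, insert, or truncate bytes at random to explore the input space."""
    data = bytearray(message)
    choice = random.choice(["flip", "insert", "truncate"])
    if choice == "flip" and data:
        i = random.randrange(len(data))
        data[i] ^= 1 << random.randrange(8)
    elif choice == "insert":
        data.insert(random.randrange(len(data) + 1), random.randrange(256))
    elif data:
        del data[random.randrange(len(data)):]  # drop a suffix
    return bytes(data)

def parse_heartbeat(message: bytes) -> bytes:
    """A deliberately naive parser that trusts the sender's claimed length.
    In C, trusting claimed_len like this is how Heartbleed over-read memory."""
    claimed_len = message[2]  # raises IndexError on truncated input
    return message[3:3 + claimed_len]

for _ in range(10_000):
    candidate = mutate(SEED_MESSAGE)
    try:
        parse_heartbeat(candidate)
    except IndexError:
        print("crash-inducing input:", candidate.hex())
        break
```

The point of the sketch is the ratio: one right way to form the message, and an unbounded number of wrong ways, of which the fuzzer samples as many as time allows.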

With Heartbleed, the bug had spread so widely that in some cases, such as critical infrastructure or medical devices, fixes might not happen for years. Some affected devices have yet to be identified, and their end users don’t even know they have the bug because the layers of the onion haven’t been peeled back yet. So when word of Heartbleed broke, there was a rapid response and some things were fixed quickly, but because the flaw was so widespread, for others it could take quite some time to fix.

Products that are considered extremely secure today may in fact contain a bug that simply hasn’t been discovered yet. We live in this space every day at Codenomicon: we find untold numbers of zero days every year, and the number just keeps climbing.

In order to best defend against these vulnerabilities, testing is critical for finding bugs as early as possible. I like to use the analogy of looking at systems like carbon-based life forms: if the first chicken that had chicken flu had been killed, that would have been the end of it, but if it isn’t caught early enough it contaminates a lot more and eventually becomes a pandemic. Heartbleed is an example of that. It is essential to test as early as possible and to keep going back to test again, to make sure something new hasn’t emerged.

Q. What is the first and most important piece of advice you give a customer on how to evaluate the security of software?

My first suggestion would be to try to find all of the vulnerabilities that you possibly can. When evaluating the security of software, ask yourself what exactly you’re trying to secure against. Is it DDoS attacks? Someone breaking your encryption? What is your most likely threat? It’s like putting a security system into a house you’re building: what is your greatest concern, someone coming in through the front door or breaking in through the windows? Do you need metal bars, or do you simply need a good lock?

More specifically, something we see a lot in the healthcare industry is software companies jumping on the security bandwagon and telling medical device manufacturers and healthcare organizations that their product is the solution to all of their cybersecurity problems. In practice, what those organizations buy depends on how good the salesperson is. When this happens, the healthcare organizations aren’t actually evaluating what their problems are; they’re not thinking specifically enough about their own vulnerabilities and exposures. If organizations considered these questions more, they could be wiser with their budgets. Fortunately, I believe organizations like SAFECode create more awareness of the importance of these considerations among the software and software-buying community.

Q. With regard to apocalyptic fears and real-world weaknesses in power grids and other infrastructure, what are the realistic fears about critical infrastructure vulnerability and risk that keep you awake at night?

I define a critical system as any system where the worst-case scenario is direct death. Financial systems, for example, are not necessarily critical systems, although they can indirectly cause death. An example of a critical system is a safety system at a nuclear power plant, a chemical plant, or in a medical device. What keeps me awake at night is that we’ve discovered a lot of serious issues with these devices having bad control systems, the most concerning I’ve seen lately being in medical devices. While there has been more effort to secure them over the past few years, what we’re still finding with medical systems is that they’re incredibly weak, to the point where, in many cases, as long as you know how to communicate with the device, it’s an open channel. The word “security” doesn’t mean anything here, because nothing authenticates the other end.
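As a sketch of the missing piece he describes, the following shows one conventional way to authenticate a device command: an HMAC over a pre-shared key. The command format, key handling, and function names are hypothetical, for illustration only:

```python
import hashlib
import hmac

# Hypothetical pre-shared key; a real device would provision one per unit
# and protect it in hardware rather than hard-coding it.
KEY = b"device-specific-secret"

def sign_command(command: bytes) -> bytes:
    """Append a SHA-256 HMAC tag so the device can verify the sender."""
    tag = hmac.new(KEY, command, hashlib.sha256).digest()
    return command + tag

def verify_command(message: bytes) -> bytes | None:
    """Return the command if the tag checks out, otherwise None."""
    command, tag = message[:-32], message[-32:]
    expected = hmac.new(KEY, command, hashlib.sha256).digest()
    # compare_digest avoids timing side channels in the comparison.
    return command if hmac.compare_digest(tag, expected) else None

signed = sign_command(b"SET_RATE 72")
assert verify_command(signed) == b"SET_RATE 72"          # authentic command
assert verify_command(b"SET_RATE 200" + b"\x00" * 32) is None  # forged command
```

With nothing like this in place, “knowing how to communicate with the device” and “being authorized to command it” are the same thing, which is the open channel he is describing.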

There are a couple of issues holding these systems back. One is that the FDA’s security requirements have not evolved as quickly as hackers, attackers, and the research community are finding exploits. Unfortunately, I think that where medical devices are concerned, the only thing that will prompt serious action is a very serious incident in which multiple people die as a result of an attack on a system or systems.

To offer some real-world insight, at Codenomicon we went to a major hospital as part of a group of teams tasked with testing approximately fifty medical devices…and all of them failed, some very badly and in different ways. One of the biggest issues the security researchers discovered was that most devices don’t log. In previous testing scenarios I was involved with, we tested some pacemakers and insulin pumps, and they had no logging capability. If someone were to attack one and cause a failure, you would have no way of knowing where it came from; it would just look like the patient died of a heart attack or the device failed somehow. Even in cases where the equipment had a log, it was completely unprotected. And if you’re smart enough to get in and do some damage, you’re also smart enough to find the log and erase the traces of your presence.
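As a sketch of one common answer to the unprotected-log problem, the following hash-chains log entries so that erasing or editing an entry is detectable. The entry format is hypothetical, and a real design would also ship entries off the device, since an attacker who can rewrite the entire chain can still regenerate it:

```python
import hashlib
import json
import time

def append_entry(log: list[dict], event: str) -> None:
    """Append an event; each entry commits to the hash of the previous one."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "event": event, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def verify(log: list[dict]) -> bool:
    """Walk the chain; any deleted or edited entry breaks it."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev_hash or recomputed != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "bolus requested")
append_entry(log, "bolus delivered")
assert verify(log)
log.pop(0)              # an attacker erasing their traces...
assert not verify(log)  # ...is now detectable
```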

We’ve also done some interesting testing on IPv4 networks. In one test we found huge banks of devices, such as infusion pumps and patient monitors, all sitting on one network, and we were able to send two packets to the network’s broadcast address and cause every device on that network to fail at once. In some cases when we tested a controller for a control system, not only did it fail, it erased its own firmware. That doesn’t necessarily mean this has happened in an actual attack, but we were able to do it, and there is probably someone out there with the same capabilities who could be malicious enough to deploy such an attack.
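To make the leverage of that broadcast test concrete, here is a minimal, harmless sketch of sending a single UDP datagram to a subnet’s broadcast address, which the network delivers to every host on the segment at once. The address, port, and payload are placeholders, and this is deliberately not the malformed traffic used in the test:

```python
import socket

BROADCAST_ADDR = "192.168.1.255"  # hypothetical /24 broadcast address
PORT = 9                          # the discard port, chosen as a safe example

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Broadcasting must be explicitly enabled on the socket.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
# One send, every host on the segment receives it.
sock.sendto(b"hello, every host on this segment", (BROADCAST_ADDR, PORT))
sock.close()
```

This is why two packets were enough: each one fans out to every device on the network, so any device that mishandles the datagram fails simultaneously with all the others.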

Q. What first drove you to software development/technology/security?

I was the CIO of a retail organization in 2003, and we were getting hacked. I found out that a group of hackers was using our network as a mule to move movie files, choking all of our bandwidth. They were very clever in the way they hid their tools on our network; the fact that they thought to put an FTP server in the Recycle Bin left me completely surprised at how ingenious and stealthy this group of fraudsters was…and it fascinated me.

I originally got into technology because of the collective intelligence of the industry, but when I started looking at what hackers were doing, I saw that they were extremely clever, and I felt they were like me, only more rebellious. I began to think it was an interesting field of study, and the more I looked into it, the more fascinated I became. Not to mention, when I go to medical device companies it’s satisfying to be in a room of PhDs and doctors and the people who build the devices, and to be a lot smarter than they are about something. They don’t know nearly as much about security as the average security practitioner does, and it’s a joy to have a role in educating them on the criticality of securing their systems.

Q. What trend did you personally dismiss as “going nowhere” that for better or worse has come to fruition?

What I thought was going nowhere, and is still somewhat common to this day, is formulaic risk management, or formulaic risk assessment. This practice, which has been going on for years, takes a set of inputs and turns them into a risk score.

When you’re looking at risk management in the functional world, it tends to work well because there is empirical data that is fairly easy to gather and plug into a formula. The difference with malicious misuse is that you’re looking at a non-functional space. At Codenomicon, when we’re looking for zero days, that’s what we call an infinite space: in the functional world there is one right answer to a question, while in the non-functional world the question is how many wrong answers there are, and that number is effectively infinite. The problem with applying risk management from the non-functional perspective is that you cannot gather an infinite number of inputs, so you can’t really come up with a good formula, or a meaningful answer from one. Yet when risk managers look at the input space, they simply accept it, despite the fact that there are some inputs they cannot verify. The trouble is that when organizations use risk management formulas to decide what they need to do to secure systems, it really comes down to things like budgets and whichever answer makes them look best. Unfortunately, that happens quite frequently, and it is not in the best interest of security.
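For context, the pattern being critiqued usually boils down to something like the likelihood-times-impact calculation sketched below. The function and the scores are generic placeholders, not any specific framework’s formula:

```python
def risk_score(likelihood: float, impact: float) -> float:
    """The classic formula: risk = likelihood x impact, each scored 0-10."""
    return likelihood * impact

# Functional failures: historical data makes the likelihood estimate defensible.
print(risk_score(likelihood=2.0, impact=9.0))   # 18.0

# Malicious misuse: the likelihood that an undiscovered zero day gets
# exploited is a guess, so the same formula yields whatever answer the
# chosen guess produces.
print(risk_score(likelihood=0.1, impact=9.0))   # 0.9  -> "acceptable risk"
print(risk_score(likelihood=8.0, impact=9.0))   # 72.0 -> "urgent problem"
```

The impact term may be well understood, but with an unverifiable likelihood the output is only as honest as the input, which is the opening for budget-driven answers he describes.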

Companies should look at the criticality of their systems, try to determine the worst-case scenario, and then ask themselves: if the worst case were to happen, do we have a mitigation in place to deal with it? I think real risk management, and the most intelligent approach, is understanding the worst failure that could happen and preparing for it. It’s through knowledge-sharing organizations like SAFECode that we teach companies what should be top of mind, such as the questions above, when buying software and building secure systems.