By Souheil Moghnie, NortonLifeLock, and Kostya Serebryany, Google, with Lisa Napier, VMware; Rohit Shambhuni, Autodesk; and Adith Sudhakar, VMware

At SAFECode, we members often compare notes on secure development practices that are proving effective in our individual software security efforts. One of the most commonly cited of these practices is fuzzing. Fuzzing, sometimes referred to as fuzz testing, is an automated software testing technique that involves providing invalid, unexpected, random, or semi-random data as input to a computer program. The program is then monitored for exceptions such as hangs, crashes, failing built-in code assertions, or potential memory leaks (1).

Fuzzing is a great way to test for bad behavior (both intentional and unintentional) in software, network protocols, embedded systems and devices, device drivers, and pretty much any computing system that can talk to another. In fact, fuzzing is arguably one of the most effective methods for finding the most significant, severe bugs in almost any computing system.

And yet, despite this effectiveness, adoption of fuzzing seems rather low. We believe there are a few reasons for this, including the fact that fuzzing is perceived to be highly complex and difficult to execute, and can be intimidating for those new to it. However, this doesn’t have to be the case. There are new tools and ways of doing fuzzing that can help most developers ramp up fairly quickly.

SAFECode recently formed a Fuzzing Working Group to compare notes on our own fuzzing practices in order to build upon our individual successes and to help others benefit from fuzzing. We'll share many of the lessons we learn from this collaboration with the broader industry through a series of Focus on Fuzzing blog posts. Our goal is to provide practical advice based on our own actual experiences. We'll cover things like: what types of fuzzing exist and which one to choose in a specific case; what tools are available for various languages and ecosystems; how and why to fuzz continuously; and how fuzzing fits into the larger software development lifecycle.

Why Fuzz 

So why a focus on fuzzing? The reasons to fuzz software are largely the same as the reasons to test it: we want to detect as many defects and security vulnerabilities as quickly and as cheaply as possible. Fuzzing and traditional testing are complementary, but fuzzing tends to find many bugs that traditional testing misses.

In “unsafe” languages, such as C and C++, fuzzing is effective at finding memory access errors such as buffer overflows and use-after-free. In most programming languages, fuzzing can find null dereferences, assertion failures, memory or resource leaks, concurrency bugs, uncaught exceptions, infinite recursion, divisions by zero, integer overflows, and other types of bugs that can be observed at run-time.
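To make this concrete, here is a minimal sketch of random-input fuzzing in Python. The target, `parse_record`, is a hypothetical function invented for this illustration (it is not from any real library), seeded with a deliberate bug: it trusts a length byte in its input, which can lead to a division by zero at run-time, exactly the kind of unexpected exception a fuzzer surfaces.

```python
import random


def parse_record(data: bytes) -> int:
    """Hypothetical parser with a lurking bug: it trusts the
    length byte at the front of the input."""
    if len(data) < 2:
        raise ValueError("record too short")  # documented, expected error
    declared_len = data[0]
    payload = data[1:1 + declared_len]
    # Bug: if declared_len is 0, payload is empty and this
    # average computation divides by zero.
    return sum(payload) // len(payload)


def fuzz(iterations: int = 10_000, seed: int = 0) -> list:
    """Feed random byte strings to the target and record any
    exception that is not part of the documented contract."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            parse_record(data)
        except ValueError:
            pass  # expected, documented error: not a bug
        except Exception as exc:  # anything else is a finding
            crashes.append((data, type(exc).__name__))
    return crashes


if __name__ == "__main__":
    found = fuzz()
    print(f"found {len(found)} crashing inputs")
```

Real fuzzers such as libFuzzer or AFL++ add coverage guidance, input mutation, and sanitizer instrumentation on top of this basic loop, which is what makes them effective against memory-safety bugs in C and C++ as well.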

Fuzzing is also good at finding logical bugs. As we will elaborate on later, Differential Fuzzing finds discrepancies between two implementations of the same protocol (example). Similarly, Round-trip Fuzzing can find functional bugs in APIs that perform a function and its inverse (e.g. a compress/decompress fuzz target). (2)
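As a sketch of the round-trip idea, the following Python example uses the standard library's `zlib` module and checks the property that decompressing a compressed input always reproduces the original bytes. The harness and its parameters are illustrative, not a definitive implementation.

```python
import random
import zlib


def round_trip_target(data: bytes) -> None:
    """Round-trip property: decompress(compress(x)) must equal x
    for every input, or the pair of APIs has a functional bug."""
    compressed = zlib.compress(data)
    restored = zlib.decompress(compressed)
    assert restored == data, "round-trip mismatch"


def fuzz_round_trip(iterations: int = 1_000, seed: int = 0) -> None:
    """Drive the round-trip target with random inputs of random lengths."""
    rng = random.Random(seed)
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(512)))
        round_trip_target(data)


if __name__ == "__main__":
    fuzz_round_trip()
    print("no round-trip failures found")
```

The same shape of harness works for any encode/decode, serialize/deserialize, or encrypt/decrypt pair; a coverage-guided fuzzer would replace the random generator with mutated, coverage-directed inputs.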

Some APIs (e.g. OS kernel system call interfaces, networking protocols, or browser IPCs) are so complicated that no amount of traditional testing will ever find all corner cases, but fuzzing can. 

And finally, we know that adversaries will fuzz any software that processes data from untrusted sources and that they will exploit any vulnerability they can find. We want to fuzz (and fix) it first.

Stay Tuned for More

We hope we have piqued your interest in learning more about fuzzing and that you’ll watch this space for future posts. Up next we’ll share a Fuzzing Taxonomy that outlines the different types of fuzzing. If you are a SAFECode member and would like to join our discussions on fuzzing, please reach out at [email protected].


  1. Source:
  2. What makes a good fuzz target? More: