Making changes late in the software development lifecycle can be disruptive to your workflow. Security assumptions made early on can be difficult to undo, especially under pressure to deliver features rapidly. To sidestep this issue, many organizations try to identify security defects early with code scanning (i.e., Static Application Security Testing, or SAST), aiming to catch and fix security errors before their software is released. Code scanners are a vital part of a secure development process, but they are far from sufficient, even for achieving a basic level of due diligence. In fact, they miss about half of known software security issues, many of which have caused major breaches over the years. Here, we will review how code scanners work, discuss their strengths and weaknesses, and propose a solution to this problem.
For a more in-depth review of how code scanners fall short, check out our full Scanner Gap Analysis report.
How Do Code Scanners Work?
All code scanners function in a similar way: they accept code, build a model of the program fed to them, analyze that model using security knowledge, and deliver the results back to the user. Ideally, these results should reflect whether a security defect actually exists in the code. However, there are many instances where what a code scanner detects does not align with the actual state of the code. The industry terms for this are ‘false negatives’ and ‘false positives’, but it’s easier to think of them as ‘security defects that scanners can’t find’ and ‘non-issues that scanners incorrectly flag’, respectively.
The Problem of False Negatives and False Positives in Code Scanners
When using a scanner, there are four possible test results:

1. Your scanner identifies a security defect, and there really is one (true positive).
2. Your scanner identifies a security defect, but there is none (false positive).
3. Your scanner does not identify a security defect, and there is none (true negative).
4. Your scanner does not identify a security defect, but one really exists (false negative).
False positives and false negatives are common in code scanner output, and there are several reasons to believe they always will be:
1. It’s highly unlikely that a scanner capable of catching all known security defects can even be created.
Many scanners work by modeling how code runs from start to end. The problem is that as software becomes more complex, it becomes increasingly difficult to know when, or indeed if, the code will stop running. Not being able to analyze all possible flows of an application means a code scanner simply can’t catch everything.
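The point above can be made concrete with a toy sketch. The example below (all names are hypothetical, chosen for illustration) shows a branch whose reachability depends on a question no one has answered: whether the Collatz sequence terminates for every input. A scanner trying to enumerate all execution paths of code like this cannot decide whether the risky branch is ever taken.

```python
def collatz_steps(n: int) -> int:
    """Count Collatz steps until n reaches 1. It is an open question
    whether this loop terminates for every positive integer, so a
    static tool cannot fully reason about all executions of it."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

def do_something_dangerous() -> str:
    # Stand-in for a security-sensitive operation.
    return "dangerous path"

def handle(n: int) -> str:
    # Is this branch ever reachable? A scanner cannot decide in general.
    if collatz_steps(n) > 1_000_000:
        return do_something_dangerous()
    return "ok"
```

Real programs rarely look this contrived, but branches guarded by complex runtime state pose the same undecidable reachability question.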
2. Scanners are optimized for certain kinds of security defects.
There are different classes of scanners, and they typically focus on specific kinds of security defects. For instance, some scanners focus on how information flows through the code to determine where a user’s input is not being properly checked for potential attacks (i.e., input validation), while other scanners do not attempt to trace data from end users at all. To fill in the gaps, some organizations employ multiple scanners to catch more security defects. This is still problematic, however, since scanners may produce contradictory results, at which point human intervention is required anyway.
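To illustrate the kind of flaw a flow-focused (taint-tracking) scanner looks for, here is a minimal sketch: user input (the “source”) flows into a SQL query string (the “sink”) without validation. The table schema and function names are hypothetical.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Tainted data is concatenated straight into the query. A scanner
    # that traces data flow can flag this as potential SQL injection.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the input never reaches the sink unescaped,
    # so a flow-based scanner treats this path as sanitized.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

A scanner that does not model data flow at all would miss the first function entirely, which is exactly the coverage gap described above.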
One academic study found that a set of commercial scanners, even combined, caught fewer than half of the instances of one of the most devastating classes of security flaws: buffer overflows. This is the same class of issue that led to the famous Heartbleed defect.
3. Scanners don’t “understand” what programmers intend.
When we perform manual code reviews, there’s a human behind the program who understands what the code is meant to do. A scanner, however, cannot discern meaning. As a result, it can’t tell which areas of code are security-sensitive.
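A toy example (all names hypothetical) of why intent matters: to a scanner, these two functions look structurally identical, yet only one of them is security-sensitive. Nothing in the syntax says that changing an email address should require re-authentication.

```python
def update_nickname(profile: dict, value: str) -> None:
    # Cosmetic setting; low risk if changed without extra checks.
    profile["nickname"] = value

def update_email(profile: dict, value: str) -> None:
    # Account-recovery address; changing it silently enables account
    # takeover, but a scanner cannot know this field is sensitive.
    profile["email"] = value
```

A human reviewer knows the second function guards an account-takeover path; a scanner sees two identical dictionary writes.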
4. Many issues are simply too complex to detect just by examining the flow of code.
One of the most common ways for a hacker to break into a system is a brute force attack, where they try many different common passwords against known or guessed usernames. Understanding whether your application is susceptible to a brute force attack requires knowing whether your software contains an entire mechanism, such as account lockout or rate limiting, dedicated to stopping the attack. It’s not as simple as looking at a line of code and spotting a defect.
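The whole-mechanism defense described above might look like the following minimal sketch (a hypothetical design, with in-memory state for illustration). No single line here is a “defect” a scanner could flag; what matters is whether a mechanism like this exists at all.

```python
MAX_ATTEMPTS = 5
_failed_counts: dict[str, int] = {}  # username -> consecutive failures

def record_attempt(username: str, success: bool) -> bool:
    """Record a login attempt; return True if the account is now locked.

    A successful login resets the counter; repeated failures trip the
    lockout once MAX_ATTEMPTS is reached.
    """
    if success:
        _failed_counts.pop(username, None)
        return False
    _failed_counts[username] = _failed_counts.get(username, 0) + 1
    return _failed_counts[username] >= MAX_ATTEMPTS
```

A production design would also persist the counters, apply time-based decay, and rate-limit by IP, but the point stands: this is a property of the system’s design, not of any individual line of code.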
The same is true for many areas related to security, such as privacy. You can’t reliably scan code to find out whether your application offers a way to opt out of communications.
So, if scanners are not enough, what should you do?
Given that code scanners don’t catch everything and often produce false alarms, we need to move even earlier in the software development lifecycle: into the design and requirements stage. There, we can shape the code to clarify its intent and thereby reduce some of the resulting noise. Thankfully, such a framework already exists in certain policy-to-procedure platforms, like SD Elements.
The process starts by automatically generating the appropriate security controls to implement, based on several different application archetypes. It then generates a set of software security requirements from well-known repositories, like OWASP, as well as well-known regulations, like the GDPR. The platform integrates with the users’ native ALM (e.g., Jira), so that controls flow seamlessly into their own work environment. It also integrates with testing, so that it can determine whether a particular control was met. In the end, the platform generates an easy-to-understand audit report, displaying which controls were met, which weren’t needed, and which weren’t met. Taking these preliminary steps significantly reduces a program’s exposure to vulnerabilities.
To learn more about how SD Elements can fill the code scanner gap, visit here: https://www.securitycompass.com/scanner-gap-analysis/