The Strengths and Limitations that Result from Usage
Many organizations invest in Static Application Security Testing (SAST) solutions such as HP Fortify, IBM AppScan Source, Checkmarx, or Coverity to improve application security. Properly used, SAST tools can be extremely powerful: they detect vulnerabilities in source code during development rather than after it, greatly reducing the cost of fixing security issues compared with dynamic analysis or runtime testing. They can also discover kinds of vulnerabilities that dynamic analysis tools are simply incapable of finding. And because they are automated, SAST tools can scale across hundreds or thousands of applications in a way that is impossible with manual analysis alone.
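To make the distinction concrete, a minimal, hypothetical sketch (the class and method names are invented for illustration) of the kind of implementation bug SAST tools are designed to catch: untrusted input flowing into a SQL query string. A taint-tracking engine can flag the concatenation below mechanically, with no knowledge of the application's design.

```java
// Hypothetical illustration: the classic taint flow that SAST engines flag.
public class QueryBuilder {

    // Vulnerable pattern: attacker-controlled input is concatenated
    // directly into the query text, so it can break out of the literal.
    static String findUserUnsafe(String username) {
        return "SELECT * FROM users WHERE name = '" + username + "'";
    }

    // The pattern a SAST tool typically recommends instead: a
    // parameterized placeholder, with the value bound separately by the
    // database driver (e.g. via java.sql.PreparedStatement).
    static String findUserSafe() {
        return "SELECT * FROM users WHERE name = ?";
    }

    public static void main(String[] args) {
        // A malicious value rewrites the query's logic.
        String query = findUserUnsafe("x' OR '1'='1");
        System.out.println(query);
    }
}
```

This is a "bug" in McGraw's sense discussed below: it is fully visible in the code itself, which is exactly why an automated scanner can find it.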
After investing in SAST, some organizations refrain from making further investments in application security. Stakeholders in these organizations often believe that static analysis covers the vast majority of software security weaknesses, or that it covers the most important high-risk items such as the OWASP Top 10 and is therefore “good enough”. Instead of building security into software from the start, these organizations are content to get a “clean report” from their scanning tools before deploying an application to production. This mindset is highly risky because it ignores the fundamental limitations of SAST technologies.
The book “Secure Programming with Static Analysis” describes the fundamentals of static analysis in detail. Its authors, Brian Chess and Jacob West, were two of the key technologists behind Fortify Software, which was later acquired by HP. In the book, the authors state that “half [of security mistakes] are built into the design” of the software rather than the code. They go on to list classes of software security issues, including context-specific defects that are visible in code, and to acknowledge that “no one claims that source code review is capable of identifying all problems”.
The NIST SAMATE project sought to measure the effectiveness of static analysis tools in order to help organizations improve their use of the technology. The project performed both static analysis and manual source code review on open source software packages and compared the results. The analysis showed that between one-eighth and one-third of all discovered weaknesses were “simple”. It further found that the tools caught only these “simple” implementation bugs and did not find any vulnerability requiring a deep understanding of the code or its design. When run against the popular open source application server Tomcat, the tools produced warnings for only 4 of its 26 Common Vulnerabilities and Exposures (CVE) entries, or 15.4%. These statistics mirror the findings of the Gartner paper “Application Security: Think Big, Start with What Matters”, in which the authors suggest that “anecdotally it is believed that SAST only covers up to 10% to 20% of the codebase DAST another 10% to 20%”. To put this in perspective: if an organization had built a tool like Tomcat itself and relied on static analysis as its primary approach to application security, it would have deployed the application with 22 of its 26 vulnerabilities intact and undetected.
Dr. Gary McGraw classifies many of the security issues that static analysis cannot find as flaws rather than bugs. While the nature of flaws varies by application, the kinds of issues that static analysis cannot reliably find include:
- Storage and transmission of confidential data, particularly when that data is not programmatically discernible from non-confidential data
- Issues related to authentication, such as susceptibility to brute force attacks, effectiveness of password reset, etc.
- Issues related to entropy for randomization of non-standard data
- Issues related to privilege escalation and insufficient authorization
- Issues related to data privacy, such as data retention and other compliance requirements (e.g. ensuring credit card numbers are masked when displayed)
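A flaw of this kind can be sketched in code. The following hypothetical example (class, method, and role names are all invented for illustration) shows a privilege-escalation flaw: nothing in it is tainted or syntactically suspicious, so a scanner has nothing to flag. The defect is an authorization check that was never written, and recognizing its absence requires knowing the intended design.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a design-level flaw invisible to static analysis:
// the role-update operation never verifies the caller's privileges.
public class RoleService {
    private final Map<String, String> roles = new HashMap<>();

    RoleService() {
        roles.put("alice", "admin");
        roles.put("bob", "user");
    }

    // Flawed: any caller may grant any role. A correct version would first
    // verify that `caller` holds the "admin" role before mutating state.
    void updateRole(String caller, String target, String newRole) {
        roles.put(target, newRole);
    }

    String roleOf(String user) {
        return roles.get(user);
    }

    public static void main(String[] args) {
        RoleService svc = new RoleService();
        // "bob" is an ordinary user, yet he can promote himself.
        svc.updateRole("bob", "bob", "admin");
        System.out.println(svc.roleOf("bob")); // prints "admin"
    }
}
```

To a scanner, this code is clean: there is no dangerous sink, no untrusted data flow, only a missing requirement. That is precisely why such flaws must be caught in design review or manual analysis rather than by tooling alone.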
Contrary to popular belief, many of these coverage gaps carry significant organizational risk. The risk is compounded by other practical constraints: organizations may not always have access to source code, a SAST tool may not understand a particular language or framework, and deploying the technology at scale while triaging false positives is itself a substantial challenge.
While static analysis is a very valuable technology for secure development, it is clearly no substitute for building applications with security in mind from the start. Organizations that embed security into requirements and design, and then validate with a combination of techniques including static analysis, are best positioned to build robustly secure software.