
Gap Analysis of Code Scanners

April 8, 2019


A deeper dive into the problem of false negatives. 

Many large organizations still rely exclusively on Static Application Security Testing (SAST) to find security gaps in their software. Is this really enough? While code scanners have their role, our report looks at security gaps that code scanners cannot catch (false negatives). Large organizations need a shift-left strategy to help manage this risk.

Static analysis is an important part of the SDLC

Reworking changes late in the Software Development Lifecycle (SDLC) can be quite disruptive. Security assumptions made early in the lifecycle can be difficult to undo, especially when there is pressure to deliver features rapidly.

Taking static analysis too far

Our research (Managing Application Security Survey, 2017) indicates that, to get around this problem, large organizations typically complement their current testing approach with code scanning as part of static analysis during development. Their intent is to try to catch security errors before initiating a release. Unfortunately, some teams have used code scanning results to imply that their software is secure.

The problem of false positives and false negatives

There are four possible results when using scanners: true positives, false positives, true negatives, and false negatives.

The big question is, what is the ratio of errors (false positives and false negatives) to correct results (true positives and true negatives)?

Here we focus our attention on false negatives. In other words, are there situations where scanners are unable to catch certain types of software security errors?

Without getting into technical details, we found several reasons why scanners will always produce false positives and false negatives. A noble goal of the code scanning community is to minimize both types of errors in a reasonable time. (Note: for those who would like a more in-depth treatment, please download the full report by completing the form on this page.)

1. CODE EXECUTION PATHS

In programming, our intent is to have a software program execute controllably from start to finish. In other words, we want our software to halt in a controlled manner. However, as our applications become more complex, it is not possible to enumerate every execution path through the code. That means we cannot truly know whether our system will, in fact, halt in a controlled manner. Exploiting this uncertainty is one of the entry points for security vulnerabilities.
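
To make this concrete, here is a minimal sketch (our illustration, not an example from the report) of a flaw that is reachable on only one of many execution paths. With n independent branches a program can have up to 2^n paths, so exhaustively analyzing all of them quickly becomes infeasible.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative sketch: the unsafe copy is reachable only when both flags
     * are set, i.e. on one of the four paths through this small function.
     * Real applications have far more branches, and the number of paths
     * grows exponentially with them. */
    void handle_request(int is_admin, int legacy_mode, const char *name)
    {
        char buf[16];

        if (is_admin && legacy_mode) {
            strcpy(buf, name);                  /* potential buffer overflow */
        } else {
            strncpy(buf, name, sizeof(buf) - 1);
            buf[sizeof(buf) - 1] = '\0';        /* safe on every other path */
        }
        printf("hello %s\n", buf);
    }

    int main(void)
    {
        handle_request(0, 0, "alice");          /* common, safe path */
        return 0;
    }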

2. SCANNER OPTIMIZATION

There are different classes of scanners, and they typically focus on specific patterns. Some focus on syntax and check, for example, whether SQL statements are being built through string manipulation. Other scanners focus on how information flows through the code and determine where external input is not being properly validated.
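
As a concrete illustration (a sketch of ours, not an example taken from the report), the snippet below shows the string-built SQL pattern a syntax-focused scanner flags; run_query is a hypothetical stand-in for a real database call, and a flow-focused scanner would instead track the tainted input into it.

    #include <stdio.h>

    /* Hypothetical stand-in for whatever database API is in use. */
    static void run_query(const char *sql) { printf("executing: %s\n", sql); }

    int main(void)
    {
        const char *user_input = "alice' OR '1'='1";   /* attacker-controlled */
        char query[256];

        /* The SQL text is assembled by string manipulation, so untrusted
         * input becomes part of the statement itself (SQL injection). */
        snprintf(query, sizeof(query),
                 "SELECT * FROM users WHERE name = '%s'", user_input);
        run_query(query);
        return 0;
    }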

Because code scanners tend to focus on specific classes of security vulnerabilities, they will not catch everything. They are very useful, but they are not comprehensive.

One study focused just on buffer overflows and compared single scanners against combinations of scanners; the detailed results are included in the full report.

3. COMPILER OPTIMIZATION

After fixing security vulnerabilities in code, it is possible for an optimizing compiler to reorder or eliminate program logic in order to maximize performance. This can unknowingly lead to a security vulnerability, especially when the affected areas use critical variables.
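
A well-known instance of this (our illustration, not an example from the report) is dead-store elimination, catalogued as CWE-14: a memset() added to scrub a secret can be removed by the optimizer because the buffer is never read again.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative sketch: the developer scrubs the password before the
     * buffer goes out of scope. Because nothing reads the buffer after the
     * memset(), an optimizing compiler may treat it as a dead store and
     * delete it, silently undoing the security fix (CWE-14). */
    static void login(const char *supplied)
    {
        char password[64];
        snprintf(password, sizeof(password), "%s", supplied);

        /* ... authenticate with password ... */

        memset(password, 0, sizeof(password));   /* may be optimized away */
    }

    int main(void)
    {
        login("s3cr3t");
        return 0;
    }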

4. SEMANTIC MEANING

When we perform manual code reviews, the reviewers have a good understanding of what each variable means. A scanner, however, has no such context. Variables that ought to be treated with more sensitivity may not get that level of scrutiny.
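
For example (a sketch of ours, with a made-up credential value), nothing in the code below tells a scanner that token holds an API credential, so logging it looks like harmless diagnostics; a human reviewer who knows its meaning would flag the leak immediately.

    #include <stdio.h>

    int main(void)
    {
        /* To a scanner this is just another string; to a reviewer it is
         * obviously a secret that must never be written to a log. */
        const char *token = "sk_live_example_credential";

        printf("debug: request sent with token=%s\n", token);
        return 0;
    }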

5. FRAMEWORK GAPS

OWASP has identified areas that are not candidates for automation. Given that code scanners are typically used for automation and scalability, it is important to realize that not everything can be caught, even with a best-practice mindset.

Where do we go from here?

Given that scanners will not catch everything and will produce false alarms, we need to shift even earlier in the lifecycle, to the design and requirements stages, to help reduce some of the noise. Thankfully, such a framework already exists in policy-to-execution platforms, such as SD Elements by Security Compass.

1. APPLICATION ARCHETYPES

Automatically generating the correct security controls to be implemented, based on several different archetypes (web, ERP, IoT, etc.).

2. SOFTWARE SECURITY REQUIREMENTS

Generating a set of software security requirements from well-known repositories like OWASP and NIST, and from regulations like GDPR and SOX.

3. INTEGRATION WITH ALM

Providing a minimally invasive approach for the DevOps team by seamlessly integrating control points into their environment (Jira, Micro Focus ALM, Microsoft TFS, CA Agile Central, IBM Rational Team Concert, etc.).

4. INTEGRATION WITH TESTING

Using a bidirectional integration with code scanning tools and other third-party testing tools to determine whether or not a particular control was met successfully.

5. AUDIT REPORTING

Generating an easy-to-read audit report that shows which controls were met, not needed, or incomplete.

There are several tools on the market today that implement a portion of this framework but fall short of addressing all five attributes described above:

MyAppSecurity Threat Modeler
Microsoft Threat Modeling Tool
Continuum Security IriusRisk
OWASP Security Knowledge Framework
OWASP SecurityRAT

 
