Over the past 20 years, source-code scanning using static analysis has been a principal method for testing the security of software in development, including many of the same static application security testing (SAST) tools that remain in place today. For some time, scanning offered the best way for an organization to spot application vulnerabilities, but times have changed. Modern software development environments and the applications themselves have evolved dramatically in recent years. Development cycles move much faster thanks to DevOps and Agile, and application code has grown in both volume and complexity through the widespread adoption of open-source libraries and application programming interfaces (APIs).
And yet SAST offerings have not fundamentally advanced since the early 2000s. Within modern development contexts, traditional SAST solutions no longer provide sufficient usability, scalability, or effectiveness, offering only about 26% accuracy in their results. Poor SAST accuracy is not a recent regression, though; it is a fundamental problem dating back to how these tools were first used.
Results Oriented: How Quantity Beat Quality
When SAST was first being embraced, many organizations specifically wanted higher quantities of vulnerability alerts rather than higher-quality results. There are several reasons why they chose alert volume over alert accuracy:
- False-negative phobia. Application security teams are typically the ones in charge of evaluating application security testing (AST) solutions for purchase. One of their chief concerns with SAST was that a tool producing fewer results might be generating false negatives: actual vulnerabilities that passed through undetected.
- Security-centric workflows. Back before the DevOps/Agile era, scans were not run very often; an occasional long list of alerts to triage and evaluate was simply part of the job for a security analyst. Sifting through noisy results to disregard high volumes of false positives was more of a feature than a bug for application security teams because it gave them more potential leads to chase down. At the time, development workflows and delivery cycles were minimally impacted, if at all.
- Tuning. A massive set of potential vulnerabilities was viewed as a valuable baseline by security teams because they could tune their SAST tool for greater accuracy as needed.
SAST vendors competed to satisfy this appetite, delivering ever-greater quantities of alerts regardless of the validity of those results. As the time between release cycles shrank and more and more organizations embraced DevOps/Agile, SAST tools went from being noisy to generating so many false positives that their output became mathematically impossible for security teams to triage.
The demands of today's continuous integration/continuous deployment (CI/CD) pipelines often require that an organization scan applications every day, or even multiple times per day. This is why more than half (55%) of developers currently admit to sometimes skipping scans.
SAST Needs a Modern Makeover
Only about one-quarter of organizations claim to be capable of fully reviewing all their scanning alerts. In most cases, each security alert consumes an hour or more of security team time, which ultimately slows down remediation and bogs down development schedules. Traditional SAST is a pipeline bottleneck because it was not designed for what is needed today: quickly and accurately locating application vulnerabilities.
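To see why triage becomes mathematically impossible at this cadence, a back-of-the-envelope sketch helps. Every figure below is an illustrative assumption except the hour-per-alert estimate cited above:

```python
# Back-of-the-envelope triage math; all counts are assumptions
# for illustration, not measured benchmarks.
apps = 100                  # applications scanned daily in the pipeline
alerts_per_scan = 50        # alerts produced by one scan of one app
hours_per_alert = 1         # "an hour or more" of analyst time per alert

workload = apps * alerts_per_scan * hours_per_alert   # analyst-hours/day

analysts = 5                # hypothetical security team size
capacity = analysts * 8     # analyst-hours available per day

print(f"daily triage workload: {workload} hours")         # 5000
print(f"daily team capacity:   {capacity} hours")         # 40
print(f"shortfall: {workload / capacity:.0f}x capacity")  # 125x
```

Even if every assumed number above is off by a factor of two or three, the workload still dwarfs any realistic team's capacity, which is exactly why alerts get skipped or bulk-closed.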
Effective static analysis for the demands of today's software development life cycle (SDLC) must address several problems with existing solutions.
Massive alert noise masks, and even suppresses, true vulnerabilities
While thorough analysis of each and every alert finding could eventually expose the limited number of true positives scattered throughout the results, the practicalities of today's development processes make this a virtual impossibility.
With many applications and thousands of alert findings to sift through, security analysts often close alerts in bulk when they sense a pattern, simply to save time. Manually separating a few true vulnerabilities from the vast quantity of false positives becomes its own error-prone process under the pressure to turn findings around quickly, especially when analysts have many other applications and thousands of scan results still to review.
Tuning reduces sensitivity for true positives
Tuning a SAST tool to improve speed and efficiency by reducing the overwhelming number of false positives in scan reports also reduces its ability to locate true positives: actual vulnerabilities that may require remediation. Tuning may speed things up a bit, but if the tool simultaneously lets false negatives slip by undetected, then traditional SAST results become even less useful to the organization.
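A minimal sketch of that trade-off, using the standard precision and recall definitions with hypothetical alert counts (all numbers below are assumptions for illustration): suppressing noisy rules raises precision, but if those rules were also catching real flaws, recall drops and vulnerabilities slip through.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Untuned scanner (hypothetical counts): noisy but fairly sensitive.
# 40 real vulnerabilities flagged, 960 false alarms, 10 real ones missed.
p, r = precision_recall(tp=40, fp=960, fn=10)
print(f"untuned: precision={p:.0%}, recall={r:.0%}")  # 4%, 80%

# Aggressively tuned scanner: the suppressed rules also dropped real
# findings, so missed vulnerabilities (fn) grow as false alarms shrink.
p, r = precision_recall(tp=25, fp=75, fn=25)
print(f"tuned:   precision={p:.0%}, recall={r:.0%}")  # 25%, 50%
```

The tuned report is far easier to triage, but under these assumed numbers half of the real vulnerabilities now never appear in it at all.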
Getting complete analysis in a timely manner
While scanning every line of code in an application is feasible, scanning every code path is not: the number of paths grows exponentially with each branch. Static analysis modeling has hard limits, and every tool uses analysis governors and checkpoints to cap scan times, since scanning obviously needs to finish in a reasonable amount of time for the user. This pragmatic reality impacts each scan's completeness. Technical challenges include branching, reflection, inversion of control, dependency injection, dynamic code, and inheritance; each presents a significant obstacle to delivering analysis that is both timely and comprehensive, as the sketch below illustrates.
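To make one of these challenges concrete, here is a contrived Python sketch (all names are hypothetical) of how reflection hides a data-flow path. Because getattr() resolves the handler only at runtime, a static analyzer that cannot model it never connects the user input to the shell-command sink:

```python
import subprocess

class Handlers:
    def ping(self, target: str) -> None:
        # Dangerous sink: user-controlled data is interpolated into a
        # shell command (a classic command-injection vulnerability).
        subprocess.run(f"ping -c 1 {target}", shell=True)

    def echo(self, target: str) -> None:
        print(target)

def dispatch(action: str, user_input: str) -> None:
    # Reflection: the call target is resolved only at runtime, so a
    # scanner that cannot model getattr() loses the taint trail from
    # user_input to the shell sink above.
    handler = getattr(Handlers(), action)
    handler(user_input)

# Both the method name and its argument arrive with the request.
dispatch("ping", "example.com; echo pwned")  # the injected echo runs too
```

Fully resolving constructs like this would require the analyzer to enumerate every possible runtime value, which is exactly the kind of exhaustive exploration that scan-time governors exist to cut short.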
Can SAST Be Saved From Itself?
For SAST to really find a place in modern development systems and become a useful tool for finding and fixing critical vulnerabilities, it needs to reinvent itself for accuracy rather than volume of alert results.
It needs to be purpose-built for modern development and CI/CD pipelines. It needs to harmonize the objectives and workflows of security and development teams. And it needs to reduce the number of vulnerabilities in production without slowing down delivery cycles. Even better, it should simplify the remediation process to actually accelerate development of higher-quality applications.
To learn more about how scanning can become better integrated with today’s application development environments, read the “Pipeline-native Scanning for Modern Application Development” white paper that I authored or listen to this recent Inside AppSec Podcast on the topic.