VIDEO
Overview of the application security industry today
Application development has changed significantly over the past decades, especially with the rise of DevOps, but Jeff Williams, Founder and CTO of Contrast Security, argues that application security (AppSec) has not kept up. After all, the OWASP Top 10 has remained mostly unchanged ever since Jeff originally developed it.
In this in-depth interview with Chris Hughes, a Cyber Innovation Fellow at the Cybersecurity and Infrastructure Security Agency (CISA) and the co-author of "Modern Vulnerability Management: Managing Risk in the Vulnerable Digital Ecosystem," Jeff Williams highlights why organizations need to focus on finding and fixing real vulnerabilities and the need to move away from legacy, static tools to scan code.
What to expect in this 20-minute Q&A:
- An overview of the current state of software and application security
- How organizations need to think about and prioritize application vulnerabilities
- The need for greater observability, and why static scanning tools lead to security gaps
- The benefits of protecting applications in runtime environments
Full video transcript
Chris:
Jeff, thanks so much for being here.
Jeff:
Thanks, Chris. Great to talk with you.
Chris:
Yeah. I'm always excited to chat with you. You know, you've been around a long time. You've done a lot of this stuff.
You know, but here we are. It's 2024. You've been working with OWASP on the top ten for quite a while. You know, what's changed? What's the same? Why are we still wrestling with this stuff?
Jeff:
Well, it's funny. Yeah. I mean, the first OWASP top ten came out 22 years ago. And, really, today's OWASP top ten is the same stuff, and that's really tough.
I mean, it means that the software industry has moved along a huge amount in 22 years, but security really hasn't changed that much. I think some of the reasons are just software complexity and the lack of visibility into software. I mean, you think about a typical app: it's composed of dozens of repos, hundreds of libraries, frameworks, different language platforms, containers, cloud, APIs, servers. They're really complicated today. Back in the early days, you know, it was a database and a web app, and that was what we pen tested.
But it's a lot more complicated now. And I think the tools really just haven't kept up at all.
Chris:
That's interesting, though, because we see, you know, we see this plethora of acronyms, you know, SAST, DAST, RASP, ASPM. I know we're gonna get into a lot of these. But yeah, when you say tools haven't kept up, why do we see new categories? We see new players. You know, what do you think the gaps are that they're not addressing still?
Jeff:
Well, most of the tools in this space are still what I'll call outside in. They scan from the outside or they firewall from the outside, and I want to cover both the preproduction side as well as in production.
But really those tools lack good visibility into how the app and how the software is actually behaving. And because they don't have good visibility, they can't be very accurate. So they make a lot of mistakes. And everyone complains about the false positives from tools like SAST, DAST, SCA and WAF.
And they talk about how those vulnerabilities that get found aren't reachable or aren't exploitable.
That's really concerning, because we want people fixing the real problems, the ones that actually matter.
And I'm serious about this. It's important for the future of the world, because everything that you and I care about is managed by software. It's not just finances. It's your health care and your government and your elections and your utilities and your social life. Everything is managed by software these days, and we are not good at writing secure software.
Chris:
No. Most certainly not. And it's, you know, almost been incentivized to, like, find more things. Right?
And we've, you know, historically dumped these massive spreadsheets or vulnerability lists on our peers in development teams, engineering, and so on. We've heard about shift left and DevSecOps, but you point to the fact that we need to cut through the noise and get to, you know, the real risks that organizations face. And as you said, software powers everything from consumer goods to, you know, national security systems, you name it. How important is it to get through that noise and highlight the real risks, rather than just finding as much as you could possibly find with no context?
Jeff:
Yeah. It's absolutely critical. So, Ponemon did a study and found that the average enterprise has over a hundred thousand app vulnerabilities in their backlog. That's crazy. And the average time to fix a vulnerability found with static tools is 290 days. The average time to fix one found with DAST is 315 days. I mean, these are not healthy statistics.
If I were to describe a healthy AppSec program, I would say they need to be remediating serious vulnerabilities in much less time than that, like, less than a week, probably days. That would be healthy.
And we're not anywhere near that for most organizations, at least the ones that are relying on this sort of last generation of tools that came out about the same time that the last top ten came out.
Chris:
Yeah. Definitely. And you know and, actually, I saw a stat the other day that exploitation is down to 22 minutes in some cases. So you talk about those remediation timelines and hundreds of days.
Right? And exploitation timelines are significantly faster. So it really emphasizes the need to focus on what really matters, where are the real risks, where should we be focusing our time and attention, knowing that we can't fix all the things or, you know, address all the noise that we're seeing. And you chat with organizations, you know, over the course of your career. You've been doing this a long time. You know, when you chat with security leaders, CISOs, and so on, AppSec leaders, you know, what's top of mind? Where are some of the things that they're really talking about the most?
Jeff:
Well, API and app security is at the top of most organizations' list of priorities. They know that that's a serious problem. And, you know, they have EDR. They've got XDR. They've got firewalls and CNAPP, and they've got infrastructure in place, but there's nothing like that at the application layer.
At the application layer, you know, in production, the best people have is a WAF. And WAFs are notoriously bypassable. They're really a speed bump. They might slow down attackers, and they stop a lot of easy attacks. But I think most serious researchers would just say, yeah, you know, it's fairly quick to bypass a WAF. And most companies don't even have it in block mode. They've got their WAF in just log mode or monitor mode.
So we really have a big gap around application security in both development and production.
I'd say most organizations are still checking the box. I talk to a lot that don't say that's what they're doing, but when you dig under the hood, that's what it is if they're not serious about getting better at writing secure code. They're just doing it for compliance reasons, and that's usually not a good sign.
Chris:
Yeah. I agree. A lot of checkbox activity, or kinda just doing what they know they should be doing or, you know, whatever is recommended from a regulatory compliance perspective, versus what can really, you know, make a difference or move the needle when it comes to reducing risk. You threw out, you know, some acronyms there, and one I mentioned earlier was ASPM, application security posture management.
I think I've even seen application detection and response now, you know, kind of coming about. What's your take on these acronyms? Do you think they're really addressing what you think needs to be addressed, a real problem, or is it just kind of, you know, putting a new name on something, a new category, a new up-and-to-the-right quadrant for someone? What do you think about that?
Jeff:
Yeah. ASPM or application security posture management is something that people have been trying to do since the early two thousands. The idea is, let's run a bunch of different tools. We'll pull all the results together, correlate those results and come out with some prioritized findings that we can focus on.
So it's a way of sifting through the noise. Unfortunately, that idea hasn't really come to fruition yet. And there's a lot of people trying. Some people are using AI. But in my opinion, the underlying data isn't good enough.
It's not reliable enough, and, you know, AI works if the input data is good. But it doesn't work very well at all if the input data is garbage. And, unfortunately, you know, a lot of the ASPM tools are running open source scanners and all kinds of crazy stuff. The results aren't really that strong.
And when you try to correlate it and merge it all together, you just get, you know, garbage in, garbage out. I think we've really gotta focus on high confidence vulnerability detection if we want to improve that situation. We need to know how the applications work, how they're using libraries, how the data flows through the application. When you have that context, then identifying vulnerabilities is easy.
Chris:
You said something that made me think of another question I wanted to ask you, which is, you know, you talked about the need for context, high fidelity findings, you know, that nuance. And I know there's a big push for shift left and things like that, and scanning the source code. But you take an emphasis on runtime environments, most particularly, I think, interactive testing. You know? What makes that difference? Why should folks be more prioritized and focused on runtime environments?
Jeff:
Yeah. So remember I mentioned modern applications are composed of lots of different parts. And if you're scanning those parts individually, you never see how they all fit together. It'd be like, you know, scanning the parts of your car unassembled and then deciding, well, I don't see any problems here, so the car is gonna be great. But when you put it all together, it's a total piece of junk.
And that's kind of the same problem with SBOMs. It's a list of ingredients, but I know if I use the same ingredients as my mom to make an apple pie, mine's gonna come out garbage. Hers is going to be fantastic, because it's not about the ingredients. It's about how they all fit together, how they're prepared, how they're used that makes the difference.
And so runtime application security watches the actual application run. It uses the same techniques that application performance management does. APM tools like New Relic and AppDynamics and Datadog, they watch the application run, and they've instrumented it with performance checks. Runtime application security instruments the application with security checks.
That allows runtime AppSec to watch the application as it runs. It sees how everything's used. Most importantly, it sees how dangerous methods are used. So, you know, every application stack is like a big pile of functions, and some of those functions are really dangerous.
They do powerful things like start native commands and parse XML and serialize objects and things like that. Developers don't get any guidance on how to use the thousands of dangerous methods in their stack, and you can't really train them on a thousand methods. And so it explains why they make these mistakes and why we end up with security vulnerabilities.
So what runtime AppSec does is it puts security checks directly in those methods. And if a developer misuses one, they get instant feedback on how they should have used it securely.
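As a rough illustration of that idea, here is a hypothetical sketch, not Contrast's actual agent: a check attached to a dangerous method so the caller gets immediate feedback on an insecure usage pattern, in this case a string command run with `shell=True`. The wrapper, message text, and choice of sink are all illustrative.

```python
# Hypothetical sketch of instrumentation-style feedback; the wrapper and
# the flagged pattern are illustrative, not a real product's design.
import functools
import subprocess
import warnings

def guard_shell(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # A string command with shell=True is the classic command-injection-
        # prone usage of this sink; flag it right at the call site.
        if kwargs.get("shell") and args and isinstance(args[0], str):
            warnings.warn(
                "subprocess called with shell=True and a string command; "
                "prefer an argument list with shell=False",
                stacklevel=2,
            )
        return fn(*args, **kwargs)
    return wrapper

# "Instrument" the dangerous method once, at startup.
subprocess.run = guard_shell(subprocess.run)
```

A real agent instruments many such sinks in bytecode rather than monkey-patching one function, but the feedback loop is the same: the warning fires at the exact line the developer needs to change.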
And the same approach works for attacks in production. So if an attacker tries to abuse one of those functions, like with a SQL injection attack, those same checks say, hey, this data just changed the meaning of the query. That should never happen.
And so it can intervene and block the attack, not at the perimeter, where it has no context, like a WAF would try to do, but as that data is used in the code, so it has all the context to say, we know that's an attack attempt. It has to be. And it's super accurate.
And so we can give super accurate results back to developers, super accurate results about their open source use, which libraries are used, which methods are used, and so on. And in production, super accurate details about what's actually trying to be exploited. And all that is one solution.
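The SQL injection check Jeff describes can be sketched in miniature. This is a toy stand-in for the real technique: `InjectionBlocked`, `structure()`, and `guarded_query()` are invented names, and a real product would analyze the parsed query rather than count keywords. The point is only where the check sits: at the moment the query runs, where the intended query shape is known.

```python
# Toy sketch of an in-code injection check; all names are invented, and the
# keyword-count "fingerprint" is deliberately crude.
import sqlite3

SQL_KEYWORDS = {"select", "from", "where", "or", "and", "union"}

class InjectionBlocked(Exception):
    pass

def structure(sql: str) -> int:
    # Crude structural fingerprint: how many SQL keywords the statement has.
    return sum(tok.lower().strip("();'\"") in SQL_KEYWORDS for tok in sql.split())

def guarded_query(conn, template: str, user_input: str):
    # Compare the query built from a benign placeholder with the query built
    # from the real input. If the input added keywords, it changed the
    # meaning of the query -- block it at the point of use, with full context.
    expected = structure(template.format("placeholder"))
    candidate = template.format(user_input)
    if structure(candidate) != expected:
        raise InjectionBlocked(f"input changed query structure: {user_input!r}")
    return conn.execute(candidate).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

query = "SELECT name FROM users WHERE name = '{}'"
guarded_query(conn, query, "alice")           # normal input runs fine
# guarded_query(conn, query, "x' OR '1'='1")  # raises InjectionBlocked
```

Parameterized queries remain the actual fix for injection; the sketch only illustrates the contrast Jeff draws between a check inside the call, which knows what the query should look like, and a perimeter device guessing from raw HTTP.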
Chris:
That makes sense. It kinda reminds me of observability, right, in the security context, right, rather than underlying infrastructure and systems, but in the security context from the security perspective.
And you used a term that we hadn't really thrown out quite yet, but the term open source. And I feel like when we think about software supply chain security, the conversation has heavily emphasized and focused on open source software, and for good reason. Right? There's been some attacks against the ecosystem, and people are realizing how pervasive its use is and, you know, things like that.
But I know you're a big proponent of also focusing on your first party code. Right? You know, things that you wrote or you directly have contributed to and so on. So, you know, why do you think that's important as a counterbalance to the kind of hyperfocus on open source in some cases?
Jeff:
You know, I look at the whole application put together, and almost all open source vulnerabilities require you to do certain things in the code anyway. Like, even Log4j. Log4Shell required developers to use untrusted data and log it.
And so it's that relationship between the app and the open source that allows you to know that there's a real exploitable vulnerability in place. You have to analyze the whole thing together.
There is no difference between first party code and third party code. When it's assembled into an application, it's all smashed together in the same apple pie.
Chris:
It reminds me of how you talked about the ingredients and, like, how it's actually used that's going to determine, you know, whether it's dangerous or something you should be concerned with, or has a vulnerable aspect to it or not. And you talked about, you know, the thousands of scenarios and different things to consider for developers, you know, and how that can be daunting.
Is it hopeless? You know, where do we go from here? What do you think the path forward is? Do we have some potential to kind of remediate these long standing challenges?
Jeff:
I think the right approach is instrumenting the application stack so that we can get really good visibility into how applications work, how they secure themselves, and where they're vulnerable, and where they're being attacked. Those things are relatively easy to see if you're inside the running application watching it as it runs. And, this is what we've brought to market at Contrast, technologies to do exactly that. And the cool thing about it is it's not just another scanner.
This is a real way of changing the way that security fits into your software development process.
So if you instrument your applications, your developers and testers and operations teams don't need to change what they're doing right now. There's no extra steps. There's no, I have to run a scan. I have to tailor the thing. I've got to write new rules. I've gotta respond to false positives and so on. Instead, they just do their normal work. They write code and they test it.
In the background, Contrast is doing its job, finding security vulnerabilities in the fully assembled running application and giving developers super accurate findings that tell them exactly what they did wrong and what they need to change in the code to make it work securely.
Chris:
Yeah. That makes sense. It's that context of, you know, it's one thing to have a finding, right, or something that you want to point them to, but now, so what? Like, what do we do to address it? Why does this matter, and how do I go about resolving it? That can be super valuable, as you talked about, because it's impractical to think that developers will know how to resolve every situation, or that they're invested in, you know, building up this competency around secure software development practices and methodologies and stuff like that. So informing them matters.
Jeff:
Well, that stuff isn't working. I mean, we've had twenty years of people suggesting that you need this big heavyweight, secure software development life cycle with requirements and security architecture and threat modeling and pen testing and code review. And, I mean, the list of things is massive. It's incredibly expensive, and it doesn't work.
All those things make sense taken separately. They're logical. It ought to work. But we have twenty years of evidence that proves that it doesn't. People that try it end up mostly ignoring it, and they usually end up with a big pile of vulnerabilities at the end of the day anyway.
We can be much more efficient about application security than, you know, trying these big heavyweight processes. We need to be a lot more agile. That's what DevSecOps ought to be about, but it got kind of twisted and mangled along the way, and some vendors are responsible.
But the idea that you can just take tools that were designed for security experts like SAST, DAST, SCA, and WAF, and shove them onto development teams and get good results has not borne fruit. There's a big backlash to the idea of shifting left. And while it makes sense in theory, it doesn't work in practice because of the inaccuracy of those tools. In fact, the farther left you shift, the less context you have. And so the more false positives you'll have and the more time you'll waste fooling around.
What you need to do is wait until the application is fully assembled, like fifteen seconds later in your pipeline when you're running your tests. That's when you do security testing because then the whole thing's assembled, then you can get really good results.
Chris:
Yeah. I agree. I think we're starting to see, and many are agreeing, some cracks in, you know, the DevSecOps model, shift left, breaking down silos. The things you're talking about, this arduous, you know, heavily cumbersome process, are actually bolstering silos.
Jeff:
Engineers, developers don't wanna engage with us. We're slowing them down. They view us as an impediment. It's not making us any friends.
Chris:
Most certainly not. And as you said, you know, the results speak for themselves. Like, we're not seeing a drive down of systemic risk, for example, and things like that. So we gotta step back and ask, is it actually effective?
Jeff:
I'll go one further. I'll say that the tools that most people use, SAST, DAST, SCA, WAF, and ASPM, drive silos. They create silos because they naturally create a big backlog, and then that backlog creates its own chunk of work, and managing that backlog then becomes a silo. And it's unfortunate because it's a really inefficient way to do application security. We need to empower the teams to do AppSec themselves. That's the only way out of this mess. You have to use the big machinery of software development.
You can't expect a small AppSec team to be in the critical path for all your applications. That's just never gonna work.
Chris:
Yeah. I agree. I think we're seeing that starting to play out in reality here. One last thing I want to ask you about is, like, you've talked about this need for transparency. Now why is software transparency so critical to this whole initiative here?
Jeff:
Well, as I mentioned, transparency goes hand in hand with understanding how the software works.
And the only way to decide whether something's secure or not is to actually understand how it works. We've seen that these scanners that just focus on syntax and, you know, external visibility, external connectivity, they don't have enough context to really understand how the software works, so they make all these mistakes. And transparency is the way through that.
We need to have better ways of understanding what the software does with respect to security so that we can make informed decisions about it. And one thing that we're bringing to market that I'm super excited about is security observability or AppSec observability.
We're automatically generating app security blueprints by watching applications and APIs run. So not only will you see the attack surface, the full attack surface, like, from the inside, you'll also see all the security mechanisms that are in place behind each endpoint.
You'll see what dangerous things each route does. So if you're doing pen testing, now you've got a blueprint that tells you how the app works. I did a lot of pen testing, and you spend a lot of time just figuring out how it works.
Where are the access control checks? Where is encryption used? Which routes are processing expressions and so on? So we can massively accelerate that.
We can do most of the legwork in a threat model this way as well.
I spent many years doing threat modeling, and the hardest thing is gathering the data about how the app works. And you gotta go to, like, old Visio diagrams and questionnaires and interviews with developers who don't know. And it's another garbage in, garbage out situation. If you get bad data, your threat model's gonna be wrong, and you'll miss a ton of stuff. But we can generate an automatic blueprint for how your application actually works, and show you how security fits into that blueprint. So I'm really excited about security observability.
That's just one cool thing that's coming to help transparency.
Because ultimately, that's the root cause here: until we get past the transparency problem, we'll never fix AppSec.
Chris:
Yeah. I love that emphasis on transparency. Obviously, I wrote a book called Software Transparency. But, you know, you're talking about accelerating a lot of these activities that are typically manual, typically based on documentation that's out of date or perspectives that may not be accurate.
Jeff, thanks so much for taking the time to chat today. We ran through the gamut of, you know, long standing systemic issues, where we're headed in the future of AppSec, things like transparency and so on. So thanks so much for taking the time to chat today.
Jeff:
Appreciate it, Chris. Thanks as always. Great to talk to you. And, hey, you guys should check out Chris' book, Software Transparency. It's important.
Secure your apps and APIs from within
Schedule a one-to-one demo to see what Contrast Runtime Security can do for you.