In today's interview, I have the pleasure of talking with my friend, Neil Matatall, who is a security engineer at Twitter. He also runs the OWASP Orange County chapter, and he was the organizer of the hugely successful AppSec California Conference which I spoke at.
Neil and I discuss his philosophy on automating application security and doing continuous delivery - DevOps style. Neil shares his process and focus at Twitter and how he chooses and fine-tunes tools that help developers code more securely. I also pick Neil's brain on the idea of a common language in the application security world where every tool could report vulnerabilities in the same format.
The following is a brief excerpt of our interview.
Jeff Williams: I'm looking at a bunch of the different presentations you've given at AppSec U.S.A., and LASCON, and HackaCon, and one of the recurring themes in those presentations is automating application security and doing continuous delivery - DevOps style. Tell us a little bit more about your philosophy on how that ought to work.
Neil Matatall: Just to state the obvious, automation isn't the silver bullet, but it's pretty darn close. We get amazing returns when we leverage the continuous delivery infrastructure to integrate into wherever we can and we use the machine to help us.
There still seems to be somewhat of an adversarial relationship between security and moving fast, but they play together very well. To grossly oversimplify, I think it comes down to two main points. The first is realizing that as a security group, or just as a group, you can't stop the business. The business wants to move forward. You don't want to get in the way. You don't even want to try to get in the way. That might even be hazardous to your health. If you take a step back and look at your chances to leverage what's already there, like I said, it's an amazing return on investment.
I think the second part is computers are smarter than people. I can run a tool, but so can a machine. Tools are great. Tools don't go on vacation. They don't silo information. They don't get lazy. They don't have bad days. They do get fired, though, from time to time.
Things that can be automated and integrated into the business scale very well. Automated things tend to stand the test of time assuming they don't get fired. They improve. They have pretty low maintenance costs. The obvious point is that the more we automate the more we can focus on things we can't or focus on how we can automate the things we think we can't.
A manual process has absolutely none of these characteristics. That's why we shy away from anything that involves a manual process. Even Homer Simpson knew automation was good as soon as he set up that drinking bird to push the key on his computer. It still did the job well 80% of the time, and that's sort of the artificial threshold we look to meet.
Jeff Williams: You hit on something. I had an interview with Nick Galbreath. We were talking about how agile and DevOps have helped organizations establish a firm delivery process. High speed delivery forces them to crystallize their software development process, and it makes security easier, because now we have ways of adding security steps into a defined process as opposed to older development styles that were much looser and didn't have a lot of hooks for security. How do you feel about that?
Neil Matatall: If you look at the desirable properties of each of those, continuous delivery is all about being able to get changes out as soon as possible. It's also about making sure you don't break everything in the process. A lot of these things, like feature toggles or gradual rollouts, are actually security features disguised as development features. The more you leverage these things, the more you realize that it's all one common goal.
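Neil doesn't walk through an implementation here, but the pattern is easy to sketch. Below is a minimal, hypothetical TypeScript example of a percentage-based feature toggle; the names and numbers are invented for illustration and aren't anything Twitter actually uses. The same flag that lets a team roll a feature out gradually also works as a kill switch: set the rollout to zero and a vulnerable code path is off for everyone, with no deploy required.

```typescript
// Hypothetical feature-flag sketch: a gradual-rollout toggle that doubles
// as a kill switch. Names and storage are illustrative only.

interface FeatureFlag {
  name: string;
  rolloutPercent: number; // 0 disables the feature entirely (kill switch)
}

// In a real system this would come from a flag service or config store.
const flags = new Map<string, FeatureFlag>([
  ["new-upload-pipeline", { name: "new-upload-pipeline", rolloutPercent: 5 }],
]);

// Deterministically hash a user id into the range [0, 100).
function bucket(userId: string): number {
  let h = 0;
  for (const ch of userId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h % 100;
}

// A user sees the feature only if their bucket falls under the rollout percentage.
function isEnabled(flagName: string, userId: string): boolean {
  const flag = flags.get(flagName);
  if (!flag) return false;
  return bucket(userId) < flag.rolloutPercent;
}

// Usage: the risky new code path is gated, and setting rolloutPercent to 0
// turns it off for everyone without shipping new code.
if (isEnabled("new-upload-pipeline", "user-42")) {
  // new, not-yet-fully-trusted behavior
} else {
  // old, known-good behavior
}
```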
Jeff Williams: Tell me a little bit about what you do at Twitter. I know one of your main focuses is to get tools in place that help developers code more securely. Tell me a little bit about how you choose those tools and what kind of things you're looking for in those tools.
Neil Matatall: Right off the bat, some things that can be sort of non-starters, though it's not black and white like that, are if a tool is closed source or if it's difficult to automate. Again, those aren't absolute non-starters.
A lot of our other teams use a lot of commercial tools. We tend not to on the AppSec team specifically, but we pretty much only have one tool we use that we don't automate, and that's grep.
Everything else we do has to be automated. When you automate something, leave it alone, and let it be its own self-sustaining thing, a false positive is pretty much the worst thing you can have. Obviously, a false negative is worse, but in the context of automating security, getting that false positive rate really low, such that every alert is actionable and clear, or at least is almost always actionable and clear, is super important.
Jeff Williams: Tell me a little bit about the process of tuning those tools. Like, you have to spend some time tuning them to the software that you're analyzing and getting that false alarm rate lower. Is that something that's continuous, or is it sort of a big push up front and then you lock in a tool and then it's there forever? How does that work?
Neil Matatall: It's definitely a little of both. There's always going to be ongoing maintenance. I'll take Brakeman, for example. Brakeman is an open source static analysis tool for Ruby on Rails that a team member of ours wrote. That is our first example of automation that went really well.
Brakeman itself has a pretty low false positive rate, but even when it does produce a false positive, since it's open source and we have intimate knowledge of the tool, we can either fix it or figure out how to legitimately filter that kind of finding out forever. Once we've declared that we don't care about something, the tool produces fewer false positives, and we work through the existing ones as we go along.
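Brakeman ships its own ignore-file support for exactly this workflow, so the following is only a rough sketch of the general pattern Neil describes: run the scanner in the build, keep a reviewed list of known false-positive fingerprints, and fail only on anything new, so every alert that fires is actionable. It assumes a Brakeman JSON report (brakeman -f json -o report.json) with a warnings array and per-warning fingerprints; the reviewed-false-positives.json file name is made up for the example.

```typescript
// Sketch of a CI gate over a Brakeman JSON report (brakeman -f json -o report.json).
// Brakeman has its own ignore mechanism; this just illustrates the general pattern
// of filtering reviewed false positives so every remaining alert is actionable.
import { readFileSync } from "fs";

interface BrakemanWarning {
  fingerprint: string;
  warning_type: string;
  message: string;
  file: string;
  line: number | null;
}

// Hypothetical file of fingerprints a human has reviewed and marked as false positives.
const ignored: Set<string> = new Set(
  JSON.parse(readFileSync("reviewed-false-positives.json", "utf8")) as string[]
);

const report = JSON.parse(readFileSync("report.json", "utf8"));
const warnings: BrakemanWarning[] = report.warnings ?? [];

// Only warnings that haven't been reviewed away should break the build.
const actionable = warnings.filter((w) => !ignored.has(w.fingerprint));

for (const w of actionable) {
  console.error(`${w.warning_type}: ${w.message} (${w.file}:${w.line ?? "?"})`);
}

process.exit(actionable.length > 0 ? 1 : 0);
```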
Jeff Williams: Let me ask you this. One of the things I've always thought was missing from the application security world was a common language where all these tools could report vulnerabilities in the same format, and then we could build interesting dashboards on top of that, but for some reason it has eluded our market.
This goes back to the very beginnings of OWASP. One of the first projects there was a common vulnerability language. I worked with Mark Curphey for a few days on a version of it. For some reason, we've never been able to get that going in AppSec. Why is that? What do you think is going on there?
Neil Matatall: That's a good question. I think part of it might be the divide between the security industry and developers. If I look at a lot of the similar efforts, they've ended up encoded in commercial tools that people use to generate a PDF that they then hand to a developer, which is probably still very common, and it's in a language developers don't really want to understand. It doesn't really explain the issue; it explains the results. There are various different enumerations like that.
Jeff Williams: I think that's a fair point. There are a lot of different stakeholders in security, so defining a common language for them all to try to communicate in maybe is some of the problem.
Let me ask you about getting developers to code more securely. We talked about the tools. What else do you guys do at Twitter to help get developers writing secure code the first time?
Neil Matatall: This is probably my favorite part of my job. It's getting people to write secure code. I think there are plenty of ways you can do it. In the end, it's again coming back to realizing you're all on the same team.
At H.O. we give them a simple set of rules that says these are things in code that we do not do here, or things that are pretty much guaranteed to have someone question what you're doing, or it's just going to make your life difficult if you do these things. In a very helpful way, naturally, a very empathetic way, we don't just tell them what not to do, but we tell them the right way to do it. The best part is...
Jeff Williams: These are things like not writing dynamic SQL, those sorts of things.
Neil Matatall: Yeah, exactly, or like inline JavaScript, or JavaScript that builds HTML elements by concatenating strings. If you look at the code of the wrong way juxtaposed against the right way, it's almost obvious that the wrong way is very ugly and not even pretty code to look at, compared to the right way, which is very methodical, very obvious, and very clearly defines what it's doing.
The problem is that a lot of our frameworks don't support pretty code, so I feel like there's a lot of work the AppSec community can do to improve the frameworks and make writing secure code just as easy as writing pretty code that the computer science nerds would also appreciate.
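To make Neil's juxtaposition concrete, here is a small, invented TypeScript example of the wrong way and the right way to build an HTML element; the markup and names are purely illustrative. Concatenating strings into innerHTML is both ugly and an XSS hazard, while building the element methodically with DOM APIs keeps untrusted text from ever being parsed as markup.

```typescript
// Illustrative contrast only; the variable names and markup are made up.

// Wrong way: building HTML by concatenating strings. Ugly, and if `displayName`
// is attacker-controlled, it can inject script into the page (XSS).
function renderGreetingUnsafe(container: HTMLElement, displayName: string): void {
  container.innerHTML = "<div class='greeting'>Hello, <b>" + displayName + "</b>!</div>";
}

// Right way: build the element methodically with DOM APIs. textContent treats the
// name as plain text, so it can never be interpreted as markup.
function renderGreetingSafe(container: HTMLElement, displayName: string): void {
  const wrapper = document.createElement("div");
  wrapper.className = "greeting";
  wrapper.append("Hello, ");

  const name = document.createElement("b");
  name.textContent = displayName;
  wrapper.append(name, "!");

  container.replaceChildren(wrapper);
}
```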