In today's interview, I have the pleasure of catching up with Nick Galbreath. Nick is CTO and Founder of Signal Sciences, a new company focusing on web application defense and security monitoring. Over the last 20 years, Nick has held leadership positions in a number of companies, including Etsy, Right Media, UPromise, Friendster, and Open Market. He is the author of "Cryptography for Internet and Database Applications," and was awarded a number of patents in the area of social networking.
Today, Nick and I discuss DevOps, continuous security, how companies should handle high rates of deployment, and frustrations that companies have in regard to application security.
The following is a brief excerpt of our interview.
Jeff Williams: One of the huge advantages I see in DevOps kinds of development approaches is that development and operations are actually closer, so there's much more feedback. I work with all these organizations where there are really strict lines: developers can't access anything in production, and production can never reference anything from development. I really think that those lines are getting much blurrier as we go here.
Nick Galbreath: Yeah. That's a great segue. So one is, security guys should really start leveraging the other teams' tool sets. Development has got a bunch of tools for code quality. QA has got a bunch of sort of automated test systems. Certainly, operations already has a ton of monitoring. Ideally, you should be leveraging all of those to help you do your job as well. So another great segue is developers on production and those really strict lines.
Sort of the issue is like, "Oh, no. Developers should never be in production." Then DevOps, "Oh, yeah. Why not have all developers on production?" Really the right question is "Why does anyone need to be on production? And can you start prioritizing or helping the operations group?" It's like, a developer needs to be on production because they need to see some log file. Can you shift that log file somewhere else so the guy doesn't even need to be on production?
That's what a high rate of continuous deployment is about: you're able to see what's happening in production, but you don't need to be on production. So all those log files, all that stuff, get them out and make some interface, so developers can see what's happening on the box without going on the box. That eliminates just an enormous number of mistakes around access control and machine configuration. It really simplifies everyone's job and empowers people to make their own decisions on what needs to happen.
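What Nick describes here is essentially centralized log shipping, the job tools like rsyslog or Logstash do in practice. As a minimal sketch of the idea, assuming a hypothetical collector host and log path, a forwarder could be as simple as tailing the application log and relaying each new line to a central syslog endpoint, so developers read logs from the collector instead of shelling into production:

```python
import socket
import time

# Hypothetical values for illustration -- substitute your own collector and log path.
COLLECTOR = ("logs.internal.example.com", 514)  # central syslog collector (UDP)
LOG_FILE = "/var/log/app/app.log"

def follow(path):
    """Yield lines appended to a file, like `tail -f`."""
    with open(path) as f:
        f.seek(0, 2)  # start at the current end of file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line.rstrip("\n")

def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for line in follow(LOG_FILE):
        # Minimal RFC 3164-style datagram: <priority>message (134 = local0.info).
        sock.sendto(f"<134>{line}".encode(), COLLECTOR)

if __name__ == "__main__":
    main()
```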
Jeff Williams: Yeah, it's that level of instrumentation, I think, that makes it unnecessary to have anybody on production.
Nick Galbreath: Yeah, exactly.
Jeff Williams: In fact, Gene Kim said, "If you have code that's important enough to deploy, you have code that's important enough to instrument."
Nick Galbreath: Absolutely. That's, I think, the whole source of why manual deploys happen. It's because everyone's environment is different, and someone needs to look at the logs and make sure it happened.
Jeff Williams: Right.
Nick Galbreath: A great goal of a security person is like, "Hey. Let's help developers get all the information they need from operations, so they don't need to be on the box."
So that's really interesting because that's not exactly security per se. You're actually helping developers and helping operations. But at the same time, the result is a much more secure production environment.
Jeff Williams: Yeah, it's interesting. I mean, security happens as a side effect of doing some of these other things just better and actually faster.
Nick Galbreath: Honestly, I'm not sure you can do it in reverse. I see a lot of people in security jobs where the organization they're at is somewhat immature in its processes. It might be a very mature company, but the actual processes are somewhat immature. They're not able to really make effective change for security, just because the development and operations teams aren't cooperating. They don't have the tooling in place.
So the security job really ends up being bottom of the line, sort of whack-a-mole compliance stuff, which I think leads to really high rates of burnout and finger-pointing, and pretty much not doing a very good job at anything. Bugs and security problems start with operations and development. So helping them really is the way to move security forward.
Jeff Williams: Yeah, I think that's exactly right. I see a lot of companies where the development teams have gone off and started doing agile or DevOps, and there's an existing security team that's trying to secure those projects. They're trying to take their old heavyweight security processes. Like they might have a security architecture review, a code review, some tests, a threat modeling thing. They might do all these processes. Basically, they're trying to shove them into these agile projects, and it's very much the square peg in the round hole kind of problem.
So they do these weird things. Like they take these big processes, and they try to trim them down and streamline them a little bit and shove them in. They do some tricks, right? Like they might try to skip every other sprint or every two sprints.
Nick Galbreath: Sure. Sure.
Jeff Williams: Microsoft SDL-A works that way. Others try to create these dumbed-down versions of their big processes. I have this feeling that it's exactly the wrong thing to do, like it eviscerates all the value out of the security process, and it probably pisses everybody on the agile or DevOps team off.
Nick Galbreath: Sure. So just to rewind a touch, a high rate of deployment doesn't mean you don't have a security review, and code reviews, and all these things. It's just that when they're complete, you're able to ship really quickly. Hopefully, you've front-loaded most of that discussion already. So they're not necessarily at odds.
Maybe a good way of getting started, and I think this is where a lot of the friction comes from, is that you have a pretty big code base, and not everything in there is a high-risk security threat. If it is, you have a different problem altogether. My guess is the designer can edit the CSS and ship the CSS file without doing a security review. Can you structure your code or your deployment process so the low-risk stuff gets shipped out pretty much on demand? That enables those people to do their job and get those things out right away.
You might wish to say: if password reset or login gets changed, hey, that gets an immediate hold and needs a sign-off. But if that stuff isn't getting changed, why not release the rest of it?
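A deployment pipeline can enforce exactly this split. As a rough sketch, assuming a git repository and hypothetical path patterns for the high-risk areas of the code base, a pre-deploy check might flag changes that touch sensitive paths and let everything else ship:

```python
import fnmatch
import subprocess
import sys

# Hypothetical risk tiers -- the patterns would come from your own repo layout.
HIGH_RISK = ["auth/*", "billing/*", "*password_reset*", "*login*"]

def changed_files(base="origin/main"):
    """List files changed on this branch relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def main():
    flagged = [
        f for f in changed_files()
        if any(fnmatch.fnmatch(f, pat) for pat in HIGH_RISK)
    ]
    if flagged:
        print("High-risk paths changed; security sign-off required:")
        for f in flagged:
            print("  " + f)
        sys.exit(1)  # non-zero exit holds the deploy for review
    print("Only low-risk paths touched; ship on demand.")

if __name__ == "__main__":
    main()
```

Wired into CI, the designer's CSS change sails through while a change to the login flow stops for sign-off.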
Jeff Williams: Yeah.
Nick Galbreath: Most of the scripting languages are not like C, where there's a pointer deref and your whole site comes down, or there's a buffer overflow . . . Most languages are not like that. So there's already sort of an inherent safety level. It's really about empowering people to do their jobs. So it's like, you need a security review? Do that up front. Make sure the developer is on target. But after that, they have to be responsible for it. I'm not sure code audits and reviews, and I've been through enough of them, are really effective ways of finding every security flaw.
Jeff Williams: I agree with that. Some types of flaws, though, are critical. I guess one of the things I've been trying to do to solve this problem I just raised is to break up those big longitudinal reviews, of the entire code base against every single security requirement you might have, into little pieces. You start to realize different tools are way better or worse at verifying different kinds of things. So static analysis is good for a few things, dynamic tests are good at a few things, and you need manual review for other kinds of things. I think if we break up those reviews, it ends up turning those activities themselves into DevOps-style activities.
Nick Galbreath: There's no reason why a security person . . . Most languages, in fact all I know of, have some type of interface to the database, right? So it's not a lot of work to actually do a diff at every release and see if there are any new database calls, in which case you should be able to analyze extremely rapidly whether there's a SQL injection in there, because you're only pulling out the part that actually changed.
Jeff Williams: Right.
Nick Galbreath: That small diff is actually much easier to code review than the two-inch stack of printout where you're going through the entire code base, which mostly ends up being style guide issues and stuff like that, just not useful for security. So small, incremental changes are much easier to code review than a big blob. As you said, if you can split up your code base into high-risk, medium-risk, and low-risk, you're going to be able to deploy a lot faster and have much more productive, more effective security auditing.
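As a rough sketch of that per-release diff check, assuming a git repository and some illustrative regex patterns for database calls (the real patterns would depend on your language's database interface), a script could pull just the added lines between two release tags and flag new call sites for review:

```python
import re
import subprocess
import sys

# Illustrative patterns only -- match whatever your DB layer's entry points are.
DB_CALL = re.compile(r"\b(execute|executemany|query|raw_sql)\s*\(")

def added_lines(old_rev, new_rev):
    """Yield (file, line) pairs for lines added between two revisions."""
    diff = subprocess.run(
        ["git", "diff", "--unified=0", old_rev, new_rev],
        capture_output=True, text=True, check=True,
    ).stdout
    current = None
    for line in diff.splitlines():
        if line.startswith("+++ b/"):
            current = line[len("+++ b/"):]
        elif line.startswith("+") and not line.startswith("+++"):
            yield current, line[1:]

def main():
    old, new = sys.argv[1], sys.argv[2]  # e.g. the previous and current release tags
    hits = [(f, l) for f, l in added_lines(old, new) if DB_CALL.search(l)]
    for path, line in hits:
        print(f"{path}: {line.strip()}")
    print(f"{len(hits)} new database call(s) to review for SQL injection.")

if __name__ == "__main__":
    main()
```

Run against consecutive release tags (e.g. `python new_db_calls.py v1.4 v1.5`), only the handful of new call sites lands in front of the reviewer, instead of the whole code base.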
Jeff Williams: So tell me, what are some of the frustrations that you hear from companies, and that you deal with yourself, about today's security products?
Nick Galbreath: Sure. So the ones I've heard are that a number of them are just too hard to use, whether because of usability, false positives, or requiring a specialist to run.
I myself have been sort of burned by this: spending $25K on a license for something, only to realize after we get it and do a little bit more than the two-week evaluation that we actually need to hire someone full-time to run it. That $25,000 tool just turned into a $125,000 tool. That's not really feasible for most companies.
Jeff Williams: Yeah, that doesn't scale real well.
Nick Galbreath: No, it does not scale real well. So the end result of these tools might be great, but they're just too hard to use for most people, especially in a constrained marketplace.
Jeff Williams: So they're expert tools? Like tools for experts only?
Nick Galbreath: Yeah, it's partially due to just the high rate of false positives, and they require a long sort of tuning process to actually see what's going on and weed out the stuff that's, "That's just how we do things; it's not a security flaw," versus, "Oh, that's actually actionable and needs to get fixed right away."
Jeff Williams: Right.
Nick Galbreath: You run some of these tools, and they'll produce a list of 50,000 things. Well, someone has to go through and either review that, or what mostly happens is they just get ignored. You'll never actually get done solving the problems.
So the other "gotcha" I've heard is on the other side, from security researchers: the research they're doing is now just so far in advance of what basic organizations can do. They're finding really exotic flaws and trying to get fixes made in the organization, but the organization is having trouble keeping up with patches.
You don't need an exotic research staff figuring out new vulnerabilities in the site, blue team or red team type of stuff, if the company is not even doing basic patching; it just doesn't matter. So there's a little sense of frustration among researchers that they're making advances, but the organizations just can't keep up. They're still stuck doing very remedial work. That's a really unfortunate double-whammy of a combo.