On Oct. 30, the Securities and Exchange Commission (SEC) charged SolarWinds and its Chief Information Security Officer, Timothy G. Brown, in a 68-page complaint alleging that the company and its security head defrauded investors and customers through “misstatements, omissions and schemes that concealed both the company’s poor cybersecurity practices and its heightened — and increasing — cybersecurity risks.”
Well, that’s disturbing, said Contrast Security CISO David Lindner. Unfortunately, it’s not the first time a cybersecurity leader has faced accountability for the security posture of their organization, and it won’t be the last, Lindner suggests. Consider the case of former Uber Chief Security Officer Joe Sullivan, who was convicted in 2022 for his role in covering up a 2016 breach.
According to court testimony and documents, Sullivan’s team had referred the culpable attackers to Uber’s bug bounty program in order to funnel payments to the crooks, as if they were “white hat” researchers responsibly reporting security vulnerabilities. They weren’t: They were two hackers demanding at least $100,000 to sit on stolen personal data belonging to about 600,000 Uber drivers, along with additional personal information associated with 57 million riders and drivers. The bug bounty program capped payouts at $10,000, but Sullivan and his team allegedly paid the malicious actors $100,000 and got them to sign a nondisclosure agreement. (Sullivan is appealing the conviction.)
Given such alleged criminality, it’s easy to see why the court found Sullivan guilty, says Contrast’s CISO: “That, I understand.” But without more information than what’s been released, the SEC’s case against SolarWinds CISO Brown is more confusing.
Given that the SEC dropped its proposal to require companies to disclose cybersecurity expertise on their boards of directors, it appears the Commission is now just going after CISOs as fall guys, Lindner says. “From my perspective as a CISO, it's becoming increasingly clear that technical security expertise is an essential requirement for the role,” he comments. “Each day, CISOs are tasked with making critical decisions, such as approving or accepting timeline adjustments for security risks that have the potential to be exploited. But without a deep understanding of the technical intricacies involved, a CISO risks ending up in a situation similar to Timothy G. Brown, facing legal repercussions.”
Read on for his advice on how security heads should protect themselves in this new era of scapegoating:
Q: How do you feel about the CISO's role in light of the SEC suing SolarWinds’ CISO?
A: Honestly, I'm stressed. I mean, security incidents like SolarWinds happen. And the CISO is now the de facto scapegoat. I don't understand how it's always the CISO.
If no one else is aware of these things, something's way broken.
Hold the board accountable. Hold the CEO accountable. Why is it 100% on the CISO?
Q: Can you explain the SEC’s proposed changes regarding the requirement for security representation on corporate boards?
A: In March 2022, the SEC proposed a bunch of things, including notification periods for breaches and incidents. Everyone has to comply: Breach notification is now a matter of days instead of months; the rule requires notification to the Commission within four business days of determining that a cybersecurity incident is material.
One of the things the SEC was pushing was a requirement that all SEC-regulated corporations have security representation on their boards.
There was a lot of pushback, and that was one thing that got dropped.
I’m like, ‘They're trying to create accountability’ by holding a board accountable and liable for these issues, which may occur from time to time.
But now they turn around and they go directly after — who? A guy who's now the CISO? Brown wasn't even the CISO when the thing happened.
(Brown was SolarWinds' VP of Security and Architecture and head of its Information Security group between July 2017 and December 2020 and has been the company's CISO since January 2021.)
Q: What can happen when decisions affecting security are made without the CISO's input?
A: The things the board talks about often change the control environment.
And if there’s nobody there that understands the security implications of the decisions that they're discussing or making, that's a difficult pill to swallow.
They make the decision, ‘We're going to do X, Y and Z.’ Now, the CISO has to figure out how the decision applies to the control environment that's already functioning in a specific way. If the CISO isn’t there, in on the decision, it can have downstream consequences that aren't good for the business.
Take, for example, modifying the engineering department from a more antiquated approach, where many centralized groups each perform one function, to a more DevOps-focused team that does all the things: If a company were to make this move, both the tasks and the access needed to perform them could end up dispersed among many more people.
At one point, the CISO may have been able to tell a customer that, say, ‘five people potentially have direct access to your data.’
That CISO can't just all of a sudden say, ‘Now, there's 200.’ This sort of change dramatically expands the threat landscape and would change the controls required.
Decisions may be made that really put us in a bind from a security and privacy perspective, which, of course, is exactly the input the CISO would have given the board.
There are structural and organizational changes that have impacts on the way security and the environment work and function. There are things security professionals do: We monitor processes, and we follow compliance frameworks like SOC 2 Type 2 or ISO. We have specific controls we have to follow there. And when you start making big, business-level, architectural and structural changes, it impacts all of that, and those impacts could be detrimental, depending on what the change is.
You at least need to be in front of the board and executives to provide the input, like, ‘Hey, this is going to impact this.’
Because if you're not there and the decisions are made, there will be expectations that it just gets fixed.
You could be getting rid of an entire department that has downstream impacts on your risk profile or your control environment. There are just so many occasions when someone with the knowledge and understanding of what's really happening in the trenches, and how that applies to contractual obligations and all the other things we have to deal with on a daily basis, has to be at the table. You can't make these decisions without those things laid out. If that person doesn't exist or isn't there, I don't know how any business can properly function and at least attempt to prevent a breach.
Q: How does the fear of legal consequences influence a CISO's risk appetite and decision-making?
A: It's very common that the CISO is not at the table. That makes it a lot easier to have the CISO as the scapegoat, to have that ‘Well, we didn't know,’ screen of ignorance to hide behind.
But you need to know. You need to hold the people who are running the business accountable. Security is a business problem. And if you're going to hold security heads accountable, they need to be at the board and executive levels — period, full stop, end of story.
The trend of scapegoating CISOs almost makes it worse for the CISOs who are trying to get things done. It's like, ‘Well, we don't care; the CISO’s the one they're going to come after,’ which is ridiculous.
Every CISO should be paying attention to this, and I’d be surprised if it doesn't change their risk appetite. If you know, in the back of your mind, that one bad decision or one risk acceptance could create a potential lawsuit down the road, you're going to become a lot more risk-averse. You're going to make ‘Fix it!’ decisions, regardless of whether you have the people to do it or it pushes out that new shiny feature request.
All these decisions happen all the time, as in, daily. Daily, CISOs or security leaders get requests such as: ‘Can we push the fix out for this because of XYZ?’ Or, ‘Can we accept this risk?’ Or, ‘Hey, we're not even using this component.’
It's multiple times a day that most leaders get those questions, and we have to make those decisions on the fly. We have to balance business success against protecting the business from potential exploit and breach. And when you start seeing things like this, it's going to reach the point where security once again becomes the blocker, because the CISO can't give an inch; they're worried that the SEC will come after them if there's a breach.
Q: How should CISOs protect themselves?
A: CISOs need to take proactive steps to protect themselves from the rising threat of lawsuits. There are several strategies they can consider, such as:
- Requiring their inclusion in the organization's Directors and Officers (D&O) insurance policy. This would provide a layer of legal protection in case their decisions are questioned.
- Demanding direct access to the Board of Directors, which ensures that their concerns and recommendations are heard at the highest level of the organization.
- Insisting on a seat at the executive table, where strategic decisions are made. This position allows them to align security with the business's goals and ensure that security isn't an afterthought but an integral part of the organization's strategy.
- Negotiating specific severance packages into their employment contracts. These packages can serve as a safety net, offering financial protection in case they face dismissal or legal consequences for security incidents beyond their control.
In a rapidly evolving cybersecurity landscape, it's crucial for CISOs to take proactive measures to safeguard their careers and mitigate the risks associated with their roles. By integrating these protective measures into their positions, they can better navigate the complex and often high-stakes world of cybersecurity leadership.