A lot of people think that insider threats are always malicious, but most of the time, insiders are unwitting vectors of risk. Join Harris Schwartz as he talks to Wade Baker, Co-Founder of the Cyentia Institute, about his research on internal risk. Discover how much of it comes from human behaviors like clicking on phishing emails or downloading malware. Learn how organizations try to steer people away from these risky events. Finally, find out Wade's thoughts on the business continuity and disaster recovery function.
—
Listen to the podcast here
Finding Unwitting Vectors Of Internal Risk With Wade Baker
Wade, it’s nice to see you. Thanks for joining the show. How are you?
I am doing well. Hopefully, I won’t be harmed by the friendly fire, but I’m looking forward to it.
I figured we should probably start off with you talking a little bit about your background.
I have been in some aspect of cybersecurity for about two decades, mostly in research, which is a little bit unusual. I started as a network systems administrator at a university and got some security training there. I always liked working in a university and thought teaching would be a good career, so I went to get a PhD to become a professor, but at some point along the way, I decided that academic research was a little bit too stuffy for me.
A lot of people think that insider threats are always malicious, but most of the time they are unwitting vectors of risk.
I liked applied, industry-based research, so I chose that path instead. I've always had one foot in academia and one foot in industry. I worked for quite some time at TruSecure, which became CyberTrust, which was acquired by Verizon. I'm probably best known for starting what would become the Verizon Data Breach Investigations Report while I was there. That blossomed into its own thing for a while, and I led that team.
Now I do research through the Cyentia Institute. We continue to do that industry-based, data-driven research and work with a lot of vendors like Elevate Security. I have also found my way back into academia. I'm on the faculty as a professor in the business school at Virginia Tech, so I get to teach students about all of these things as well. It's a nice blend, and I've had a fun time straddling industry, academia, and security research.
You have a great background. Cyentia and Elevate have done some research projects together and whatnot. It might be good to dive into some of the key findings that came out of some of the research.
This has a good backstory. At Cyentia, we work with lots of different vendors on lots of different topics. At some point, we threw something out on LinkedIn and said, “What should we research next? What would be a good topic for us to take on?” A bunch of people suggested something related to insider risk. I believe it was Masha or Robert Fly who saw that conversation, reached out, and said, “Maybe we should do something. We have data on that.” It was pretty neat.
That started the relationship. We were thrilled because most of the other things that we have studied have been about external threats, vulnerabilities, and all of those kinds of things. To be able to analyze the data collected in the Elevate platform was very different for us. It's a view of the security landscape that a lot of organizations know is important but that is hard to see outside of your own organization.
With external threats, we have all these threat intelligence reports and all kinds of stuff going into that, but not many people say what's going on inside their organization and collect that. It was cool. We have done two reports. We have looked at everything from how often employees click on phishing emails or download malware to browsing violations. We have tried to study what seems to work in terms of curbing those behaviors, what might not be working as well as we think it would, and all of the risk factors that correlate. There is some good stuff in those reports.
It has reinforced one of the things that I saw when I was at Verizon. A lot of people think that insider threat or insider risk is always malicious: “The insider is always doing bad things. We should stop them.” The reality is that most of the time, insiders are unwitting vectors of risk. That pops up in lots of different ways. It's cool to be able to quantify those different things and look at all the ways in which humans are part and parcel of the things that we're concerned about in security.
I thought it was interesting because I read the DBIR report that came out. I'm starting to see a lot of the reports talking about the small percentage of users that are causing the majority of incidents from a non-malicious standpoint. They're also asking, “Once you identify them, what do you do with those people?” What are your thoughts on using controls or other measures to drive change with user or human risk issues?
Equipping users to do the right thing will help ensure that they do the right thing.
That is the million-dollar question, or maybe the multimillion-dollar question. It's a tough one. I will readily admit that, other than the research that we have done, I don't consider insider risk management my area of practice in security. I've been in incident response, threat intelligence, and other things like that. Those are my areas, but this is one where I'm learning as we do this research. There are a couple of things that are critical to keep in mind.
One, in the latest report that we did with Elevate, The Size and Shape of Workforce Risk, something that you mentioned popped out. Whether you're looking at phishing, malware, or whatever measurable, objective risk or concern, a small proportion of users were responsible for an outsized portion of the overall risky events. A low proportion of users are downloading a high proportion of the malware that is introduced into the environment.
If memory serves, something like 0.05% of users across all the organizations that we looked at were considered high risk, meaning they were at the top of the three areas of risk that we measured: downloading malware, clicking on real phishing emails, and browser violations. From a control standpoint, that tells me that a lot of the things that we're doing are misplaced, at least from my perspective.
Everybody goes through the same training. There's an equal suspicion placed on all users. Therefore, you make the job onerous for all users. That's wrong based on the data that I'm seeing. We need to be a whole lot smarter about where we place those controls and the extent to which we implement or enforce them based on those user risks.
I was talking to a CSO who said, “We monitor these things. If users go outside of the okay zone, they're put in isolation, and that's how they browse from then on.” That's how we treat them. That may seem mean, but it is in keeping with the reality that we need to right-size those controls and focus them, because then they will be more effective. We don't have to take these super broad brush strokes where we assume everybody is at equal risk, which we don't do with external threats. Why should we do it with internal threats?
It reminds me of back in the day when DLP was starting. Let's say a company is looking at someone who might be communicating sensitive information like Social Security numbers or credit card numbers. It's not everybody. It's a small percentage of users that are doing it. It's about having the controls available to either put up guardrails, curb them, or stick them in a sandbox somewhere. It's the same thing.
There's another thing that I noticed that I at least hope is the case: incentives are important in helping employees do the right things. One of the points that sticks out to me from some of the research that we have done together is that training and other things like that hit a point where further investments are not going to get the return. We need to do some of these things, but they're not going to drive risk down to zero, and at the point where the benefits flatten out, let's switch to something else. Something that popped out as having a lot of promise was the use of multifactor authentication, password managers, and other things like that.
To me, that was an indication that equipping users to do the right thing will help ensure that they do the right thing. If it's hard for them to do their jobs and they have to memorize ridiculous passwords that change all the time, they're going to try to work around that because it bothers them and keeps them from doing their job. If you can make it easy for them to act securely and do their job with minimal inhibitors, that's going to be better all around. That's going to be an improvement to security. I saw a lot of promise in that research. It's a thread I want to pull on in the future.
Organizations that used chaos engineering had much stronger and better BCDR.
To shift a little bit, here's one of the hot topics that I've seen. There have been a lot of discussions around it: who does the CSO report to? Where should they report in an organization? I'm sure you would agree that traditionally, at least in the past few years or so, it has typically been the CIO. There have been a lot of new discussions. Some of the tech startups have a CSO reporting to a CTO, which is interesting. Others are asking, “Should it be a C-level executive?” What are your thoughts on this?
This is something that we have done some amount of research on. It’s always a topic in my classes. We have a module on how we organize for security and things like that. A lot of this question depends on the type of organization, how regulated it is, and all of those kinds of things you said. A nimble tech startup is different than a giant old regulated financial institution. There’s some of that but in general, there are some things that I’ve seen from research.
For example, we did some research with Kenna on vulnerability remediation. We looked at whether the security team that finds vulnerabilities was separate from the IT team that fixes them. Organizations did better when those teams were separated. I don't know why exactly, but when they were separate, performance on vulnerability remediation improved. I wonder, is that because you have equal parties? Maybe it's not all about, “Get everything working. CIO, make the technology work.” You have this counterbalance to that: “We've got to secure this stuff too as we roll it out and deploy it.” That's one.
We also did a survey of 5,000 different security and IT professionals and asked where the BCDR, or Business Continuity and Disaster Recovery, function should be located. It turns out that when that function has board-level oversight, or when the board is cognizant of what's going on there, those programs do better. When security ran that function, rather than the CIO, a risk organization, or somewhere else, those programs did better. I have seen evidence that it does matter where certain things are run, where security reports to, and how far up in the chain they are.
It was interesting to hear you mention BCDR. One of my favorite subjects is cyber-resilience. I typically have this conversation with a lot of people on things around IR readiness, business continuity, disaster recovery, and stuff like that. What are your thoughts on some of those subjects and their importance for a cybersecurity program?
I am very interested in this topic, and it has been interesting to watch over the time that I've been in security. When I started years ago, which was in the internet worm era, business continuity and the availability side of security were the most prominent things. Then it slipped into confidentiality, data breaches, and the APT era, and confidentiality took the prime spot. It's almost like we forgot that availability and business continuity are important in security. Now it's back again, spurred on by ransomware and lots of other things like that. It's cool to see those ebbs and flows.
There are a couple of things I have found interesting. Going back to some research we did with Cisco, something I found super interesting in that study is that organizations that used chaos engineering had much stronger and better business continuity, disaster recovery, and resilience performance in general. It had more effect than anything else that we measured. Testing and all of those kinds of things are important, but that was the one that bumped up performance. That has to do with the nature of what we're trying to control.
We're trying to control unexpected events that hit when we don't expect them and where we're not looking for them. Introducing that randomness, “We've got to respond to this. It can happen at any time,” works those muscles in a way that very planned forms of readiness training and response don't, because that's how those events hit. That's something where I see a lot of promise. I would venture to say that those various threats to resilience would be in the top three on most people's risk radar. I'm hoping to do more work and research there.
It certainly should be at the top. I appreciate your time. Thanks for joining us.
Likewise. Take care.
About Dr. Wade Baker
Dr. Wade Baker is a Co-Founder of the Cyentia Institute, which focuses on improving cybersecurity knowledge and practice through data-driven research.
He’s also a professor in Virginia Tech’s College of Business, working to prepare the next generation of industry leaders.
Prior to this, Wade was VP of Strategy at ThreatConnect and CTO of Security Solutions at Verizon, where he had the privilege of leading Verizon’s Data Breach Investigations Report (DBIR) research team for 8 years.