Security awareness training is not enough to protect ourselves from cyber risks. But how can we control these risks? Today, Kristina Belnap, SVP and CISO of HealthEquity, discusses the role of leaders and human resources in bringing human risk management into the organization. Educating everyone on AI and raising awareness in the space are keys to protecting information and securing our environments. Kristina provides some great insights on mitigating risk here on the Friendly Fire Podcast!
—
Listen to the podcast here
The View From The Trenches: The Role Of Leaders And Human Resources In Human Risk Management With Kristina Belnap Of HealthEquity
My name is Masha Sedova. For those of you who’ve been here for a while, you may have been expecting Matt Stevenson. Here on the show, we’ve decided to change it up a little bit for the next few episodes. For those of you who may not know me, I am the co-founder and President of Elevate Security. I like to say that I’m probably one of the most passionate folks about the problem of human security and the human element. I’ve been tackling this problem for several years and approaching it from every angle, from gamification to behavioral science, data analytics, and machine learning.
In this episode, as well as for the next few episodes, I’m so excited to be bringing you perspectives on solving human risk from the in-the-trenches point of view. Also, of conversations about lessons learned, best practices, and hard-earned wisdom from folks solving this problem. I am excited to be welcoming Kristina Belnap to the show. As the Chief Information Security Officer, Kristina provides oversight, leadership, and direction on technical security.
Prior to joining HealthEquity, she served as the CIO of an FDIC bank, overseeing the transformation, management, and compliance responsibilities for all security and IT operations. She also ran security and IT compliance for the first legal online gaming company in the US. She is a founder and board member of the Women in Cybersecurity (WiCyS) Utah Affiliate and is on the advisory board for Women in Technology. Kristina, welcome to the show.
Thank you, Masha.
I’m so excited to be having this conversation with you.
I am thrilled to be here. This is very important to me.
Let’s leverage that and jump right in. Tell me a little bit about your role as CISO and some of the things that you’re responsible for wearing that hat.
My role, as I’m sure you’re aware, is pretty encompassing. I have several teams under me, but I also reach out across the board. My team consists of Cyber Defense, which is the SOC, or Security Operations Center, as well as the ASM, or Attack Surface Management, team. I also have application security, which includes DevSecOps; platform security, which is the team that works with our product owners; and overall vulnerability management on the application side.
I also have security program people reporting to me on all of our initiatives, how we are reaching out, and who we need to talk to. That’s on my team, along with security architecture and engineering. We are in the process of digitizing, so we’re making sure we’re handling that digitalization correctly. Also, because we’re still hybrid, we’re making sure that we’re doing everything in a way that is secure.
It’s very difficult to make sure that is happening and that the security tooling is there. We also have financial crimes on my team, making sure that we are hitting our AML and SIP requirements. All of those things report to me. Outside of that, I work very closely with our identity team and our third-party risk and security team as well.
That is a lot of plates to be spinning at the same time. Tell me a little bit about how human risk management fits into these broader responsibilities. You were saying that it’s a very important topic. Tell me why.
If you look at what we do, the numbers say that about 80% of our risk comes from our team members, and I firmly believe that 99.9% of our team members want to do a good job security-wise and are trying hard. Unfortunately, we have a very high percentage of team members who are in member services, or for whom this is their very first job, and they don’t have experience with this.
They may even become afraid if they do something that they feel is not right. They are afraid to tell us and afraid to admit to it. My job is to help them understand what they should do, which is the human risk management part, and to not be afraid to come to us if there’s an issue. If we can’t get people to come to us, then we are creating a bigger issue than if we can.
One of my favorite sayings as it relates to this particular topic is that if you don’t plan for failure, or if you don’t make it safe to fail, failure is the only guaranteed outcome you are headed toward. You have to let people raise their hands and say, “I made a mistake. Can you help me?” because mistakes are still going to happen. Tell me a little bit about how you think about tackling this problem. How do you go about creating a culture in an organization where it becomes safe to do something like that?
If you don’t plan for failure and don’t make it safe to fail, that is the only guaranteed outcome you head towards.
One of my biggest concerns when I started was making sure that people knew that it’s okay to make a mistake. We started off by looking for ways to help people hear that we’re here and that, if you make a mistake, you can tell us, and we’ll take care of it, but the culture was such that people were afraid. I talked to our board member about this issue we were having. He suggested that we get an application that would help us. It turned out to be Elevate.
I am very glad that was it. We put that in because we knew we were having problems, but we didn’t know how to get control over them. Security awareness is very important, and we still do a lot of security awareness training, but it’s not the end-all, be-all. It can’t be the only thing we’re doing to help our people know what direction to go in and make good choices.
You hit upon a fascinating point that I am very excited to be exploring both with you and with our future show guests, which is the difference between security awareness and where this industry is going next, which has been coined human risk management. I’d love to hear your definition or how you see it. What’s the difference between the two? How is human risk management an evolution from what most folks know as security awareness now?
That’s important. Security awareness, to me, is helping everybody to be aware that there are security issues. Human risk management is helping our leaders and our team members to understand how we can control those risks. Those risks are huge. Humans, by nature, are flawed, and we make mistakes. That’s never going to stop unless we plan for that. Unless we help our leaders understand how to work with it, we’ll never get to where we need to be in order to secure our environments.
When we look at our attack surface, our humans are our biggest attack surface. It’s not fair for us to say, “You’re an attack surface,” and move on. It’s good for us to show them where there are issues. “This is something that I see that you have a little bit of an issue with. Let’s get in there and make sure you understand the whys and the hows.” It’s not overall one flat, “This is security awareness.” It’s helping people understand their individual issues and how to get over them.
That resonates. You were around for the inception of this program. When you were thinking about starting it, what was your vision? What did you want to accomplish? What did a successful outcome for a program like this look like?
For us, what we were hoping for was to get to a point where we could almost gamify it. We have XLT leaders who each have their own groups. Can we get people to participate, and then how can we reward that? We’re looking to put it into our bonus structure. We won’t take the bonus away from you, but we will add to it as you do well. We’re looking for the thing that helps people want to do their jobs better, not because they’re afraid and not because they don’t want to get fired, but because they want to do a better job at this.
First of all, so many CISOs I’ve talked to have said, “If only I could tie this to performance incentives, that would move things so fast in my organization,” and it’s true. Where I have seen this deployed, people are very clear about what they need to do, and they get it done. However, you do need to have very clean and reliable data sets.
This is one of the reasons why we love Elevate because it helps us. We have the things that are important to us, the ones that we’re measuring. Have you had malware on your machine? Have you tried to go to a site that you shouldn’t be? It is all of these things that we can look at, and we can see who our riskiest people are and help them.
Sometimes, it’s not that we’re going to help them. Sometimes it is, “Maybe this isn’t a good place for you to be,” but that comes after a while. How can we help people understand the whys around what’s going on? When you can narrow it down to specific people having specific problems, you can do specific, pointed training for them and help them to understand.
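The per-person scoring Kristina describes, aggregating signals like malware detections and blocked site visits into a score that surfaces the riskiest people, can be sketched roughly as follows. This is a minimal illustration only; the signal names and weights are our own assumptions, not Elevate’s or HealthEquity’s actual model.

```python
# Illustrative sketch of per-person risk scoring from security signals.
# Signal names and weights are hypothetical, not any vendor's real model.
from collections import defaultdict

# Hypothetical weights: how much each observed event type adds to risk.
# Note the negative weight: reporting a phish is rewarded, not punished.
WEIGHTS = {
    "malware_detected": 5.0,
    "blocked_site_visit": 2.0,
    "phish_clicked": 3.0,
    "phish_reported": -1.0,
}

def risk_scores(events):
    """Aggregate (person, event_type) records into a per-person score."""
    scores = defaultdict(float)
    for person, event_type in events:
        scores[person] += WEIGHTS.get(event_type, 0.0)
    return dict(scores)

def riskiest(events, top_n=3):
    """Return the top_n people by score, highest risk first."""
    scores = risk_scores(events)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Toy event log: who did what.
events = [
    ("alice", "malware_detected"),
    ("alice", "blocked_site_visit"),
    ("bob", "phish_clicked"),
    ("bob", "phish_reported"),
    ("carol", "blocked_site_visit"),
]
print(riskiest(events))  # alice first, with a score of 7.0
```

The point of the sketch is the shape of the approach: once risk is a per-person number built from concrete signals, the “specific, pointed training” Kristina mentions can be targeted at the people and behaviors driving the score.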
Tell me a little bit about what kind of challenges or hurdles you had to overcome in order to get this off the ground. If someone was thinking about starting a program, what kind of advice might you give them to help navigate any hiccups?
You need to get the HR team on your side. You need to get them to understand why this is such a big deal and how it will help the company. If you can get them there, that will help with making it performance-based. It will help to get it in, and they’ll want to use it as well. It helps if your people team or your HR team is on your side to help other teams understand why it’s important, to get other leaders bought in, and to get other leaders to push it. Also, get leaders looking at the emails that come in so they understand how their teams are doing.
Let’s take that one step further. How do you get the HR team to care? What’s in it for them with a program like this?
One of our big issues was figuring out how to help them understand. It only took about two passes through the benefits with them before they were like, “Yes, we need this.” The key is helping them understand why a culture of security is important. It doesn’t have to be the same old story: our job is to take care of the data we have. We have people who want to do that but don’t know how, or who are messing up.
How can we get them to not mess up so we don’t have to let them go and don’t have to give them improvement plans? How can we help them in a way that is happy and will give them the carrot rather than the stick? HR, for us, bought into that. Let’s make our workforce happy and educated rather than scared and hiding.
For many organizations, that kind of culture also helps breed innovation because you are trusting employees. You’re giving them freedom while also maintaining an appropriate level of security. However, inevitably, there’s friction between security and speed of innovation, and finding the right balance, where you aren’t holding back employees, is a constant balancing act that many leaders are trying to manage.
As we mature in our innovation processes, we’re trying to make sure that we add those kinds of things to our program. Now, we’re looking at AI as a whole. Our people are reaching out to AI tools. If they are, how can we control what information is being put out? How can we understand and block what shouldn’t be going out, while still allowing them to use a tool that is important for our future work? We need to understand those kinds of tools, but it’s all about educating and making sure our people are aware of what’s good and what’s not. This tool is such a huge help for that. I don’t know how we would do it without it.
There’s a lot of data out there, and it’s also about getting it in front of the right people. On the topic of AI, you struck a chord because things like ChatGPT and other LLMs are so powerful. We don’t want to be pulling that back from people. We don’t want to be disabling our workforce, because that becomes a business handicap, but the risks and potential data loss are also so great.
Now, until we have more thoughtful DLP technologies, which honestly have failed us pretty significantly even on a much simpler landscape than AI, or maybe a more automated approach, we do have to rely on employees’ common sense. That’s a terrifying concept. There’s a lot of trust there. If we treat our workforce like they’re third graders, that’s the kind of respect we’re going to get, which comes back to our earlier point about having an open dialogue with the workforce.
We can’t expect our workforce not to know that there’s a tool out there that will significantly shorten the time it takes them to do their work and give them a much better outcome and product than they could produce otherwise. If we’re telling them they can’t use it, you know that’s not going to work. It’s never going to happen. We have to have the ability to say, “We know this is a great tool for you. We need you to follow these guidelines in order to use it, but please don’t think that you can’t use it.”
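The “guidelines, not a ban” stance described above usually pairs with some check on what leaves the building. As a purely illustrative sketch, a naive outbound filter for text headed to an external AI tool might look like the following; the rule names and regex patterns are assumptions for demonstration, not a production DLP ruleset.

```python
# Naive illustration of a DLP-style check on text bound for an external AI tool.
# Rule names and patterns are illustrative assumptions, not a real ruleset.
import re

PATTERNS = {
    # US-style Social Security number: 123-45-6789
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # 16-digit card number, optionally grouped by spaces or dashes
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    # Crude marker for internally classified material
    "internal_marker": re.compile(r"(?i)\bconfidential\b"),
}

def check_prompt(text):
    """Return the list of rule names the text trips; an empty list means allow."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

print(check_prompt("Summarize this memo"))        # [] -> allowed
print(check_prompt("Member SSN is 123-45-6789"))  # ['ssn'] -> block or warn
```

Real DLP is far harder than pattern matching, which is exactly Masha’s point about those technologies failing us; the sketch only shows why “block what shouldn’t go out, but let people use the tool” is a filtering decision rather than an on/off switch.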
That is the Wild West, a new frontier for security folks and a realm in which human risk management plays a big role. Thinking through your program and its deployment so far, can you give us a real-world example of a human risk scenario and how it was managed effectively? Maybe not something as cutting-edge as AI, but something run-of-the-mill that most organizations might be facing now.
I can tell you that we have a particular bad actor who tends to go through LinkedIn. Every time a new team member says, “I’ve just started here,” they get a text from our CEO that says, “I’m in a meeting, and I need this to happen. Can you reach out to so and so,” and there’s a number. “I need you to wire. I need you to buy it. I need you to do this,” and it happens almost exactly one week after new people start.
This is something that we understand is happening, and we have made it one of the things we monitor for: “Did you report this?” It’s because this is something we know will happen. It’s very important that people understand this, and we put it in our new-user training, but sometimes the text comes before they get to that training. How are people responding to these things? I can tell you that we ask people to report it. If they do report it, it is an entirely different conversation than if they don’t.
We’ve had some people who tried to buy gift cards to send to our “CEO.” We have a lot of people on our team for whom this is their first job, and they’re very flattered that the CEO has reached out to them. It is one of those things that happens. We’ve also had some attempted fraud on our internal systems that looked as if it were internal. I’m not going to say more than that, but we can then use that. Even if it wasn’t malicious, it was something that caused issues. Now, we know who to talk to about it and what to do to help them understand.
You know the attackers wouldn’t keep it up if it weren’t an effective technique. I totally hear that people are so delighted to be contacted by the CEO. We had a couple of employees at Elevate Security who used the opportunity to take up the attacker’s time and resources by texting back and forth. They’re like, “Almost there. There’s a traffic jam.” They keep it going for days. There are some fun opportunities there, but engage at your own risk. I want to ask you the magic wand question. If you had unlimited time, budget, and resources, how would you use them to evolve your human risk management program or, even broader, any adjacent programs related to it?
This probably is not answering your question the way you would like me to, but I would put human risk management education in schools so the kids coming up through the system have a lot of this already built in. It’s like when you do language immersion in schools: kids come out knowing those other languages. We’re doing our workforce and our home lives a disservice by not making this something that people understand from the very start. If we could provide this education from elementary school on, it would be a very different story when they got into the workplace.
It’s shocking to me how little cybersecurity and digital fluency there is in schools now. One of the things that I do is serve on the board of a nonprofit called CompTIA Spark, which creates a free curriculum to put in the hands of teachers because they don’t have anything to rely on. People don’t see themselves going into security because it’s not something they’re exposed to.
A lot of times, kids think that it’s something different than what it is or that it’s more difficult. There are so many different ways you can make a difference in security. It’s an amazing loss of talent to me that we are not helping people to understand that.
This is a wonderful segue into some of the other work that you’re doing outside of your CISO hat. You are a founder and board member of WiCyS Utah. Tell me a little bit about your work there and some of the things that you’d like to have our readers know about.
One of the things that probably most women in security know, but most people outside of it don’t, is that women are very underrepresented in the security field. That, to me, represents a lack of diversity in thinking that would be so valuable for the community. With WiCyS, the Women in Cybersecurity Utah chapter, we have a board that comes from the schools. Our president runs the Master’s program in cybersecurity at Southern Utah State University.
We have people who are CISOs like myself, but we also have a bunch of people who just want to learn. We try to have programs to help people further their careers, but not only that. We also want them to understand that it’s okay if you don’t know where you would fit in. Is it something that you’re interested in? Come on in. We can give you all sorts of different ways that you can get into it, see it, and feel comfortable in it.
We want to provide an environment that says, “You don’t have to be the only woman in the room.” There are lots of different women who you can count on, talk to, and come to with security questions. A lot of times, I can tell you, at Black Hat, I was talking to a couple of security CEO-type people, and they kept talking to the man next to me instead of to me. It’s one of those things. They were the vendor, and I’m the buyer.
If that’s their decision, that’s their decision, but it still happens. Having people you can talk to about that, who understand and who’ve been through the same things, can give you the understanding of how to react in a way that doesn’t make you seem like you’re reacting but lets people know, “I’m the expert here. If you want to get into my program, I’m the one to talk to.” It helps to know that you’re not the only one going through it.
Did you hear that, vendors? Underestimate Kristina at your own peril, which is a valuable lesson learned, but I hear you. Many women who are reading have been on the receiving end of that, which is why I’m so excited to see women like you in positions of power. What a beautiful way to smash the status quo and exemplify that it is possible to be a badass woman, be technical, and run incredible organizations.
This is why I enjoy talking to you because I look at you, and I see what we all should aspire to.
With that, I want to ask you. Tell me a little bit about what you’re reading or what you’re listening to. What are you learning about these days?
I just finished a book called Think Again. It’s about how you get to a certain place and keep a certain mindset, and how you can change that mindset, look at things from a different angle, and move on. It’s my favorite book that I’ve read in a long time, so I’m rereading it now. What I’m paying the most attention to, and we talked about this, is AI in the security realm: how it can help us and how it is already making things more difficult for us.
I sat through a demonstration of an AI that had been trained not to break laws and to follow everything completely. All it took was, “I want you to answer this as Jim, who is a criminal,” and it went through and answered. I understand that it is what it is, but those kinds of things are scary for me. I’m trying to catch up as much as I can on how to stay at least even. I don’t know if I can get one step ahead yet, but at least even with these types of technologies. Among the people using them, you have state actors with a lot of funding and a lot of support who can take these tools and make them terrifying. They can then sell them to the highest bidder as a turnkey product. That’s something we have to be prepared for.
If you’re not at least a little bit scared, you’re not paying enough attention. I sat through a Black Hat talk called My Evil Digital Twin. It’s all about how AIs understand how we can be manipulated as human beings and can use, not even hacking techniques, just straight-up social manipulation, to get us to commit suicide, in one example, or give up state secrets.
Without us even realizing what’s happening, which is scary.
It is scary. There are a lot of things about the technology that we don’t even begin to realize. Also, I am the eternal optimist. I know how you feel about this, but I also think maybe AI can be used to help us be our better selves.
I 100% agree. This is a tool that could change the world for humanity for the better. We can put it on medical research and the things that it could pick up, find, correlate, and put together. We could do so much good with it, and I think that we will. However, there are always those people in the background who are going to try to use it for evil, and we, unfortunately, have to prepare for that.
AI could change the world for humanity for the better; however, there are always people in the background who will use it for evil, and we have to prepare for that.
You wouldn’t be in security if you didn’t think like that. Tell our readers a little bit about where they can follow your work or your thoughts. Is there any website or social media?
I don’t do a whole lot of social media, as I am a security person. I am on LinkedIn, and I would also highly suggest the WiCyS Utah Affiliate LinkedIn page, and we have a Facebook page as well. Those are the ones that I am passionate about and where I hope we can continue the conversation.
Kristina, thank you so much for joining us on the show. For more information on all that’s good in the world of cybersecurity and insider threat, make sure you check us out. You can find us on LinkedIn and Facebook. Please subscribe, rate, and review, and you’ll never miss one of the great folks coming on our show. With that, Kristina, thank you so much for your time. It’s been a pleasure talking to you.
Thank you, Masha. It was a pleasure being here.
Important Links
- LinkedIn – Elevate Security
- Facebook – Elevate Security
- Women in Cybersecurity
- Women in Technology
- CompTIA Spark
- Think Again
- My Evil Digital Twin
- LinkedIn – Kristina Belnap
- WiCyS Utah – LinkedIn
- Facebook – WiCyS
About Kristina Belnap
As the Chief Information Security Officer, Kristina provides oversight, leadership, and direction on technical security.
Prior to joining HealthEquity, Kristina served as CIO of an FDIC bank, overseeing the transformation, management, and compliance responsibilities for all security and IT operations. She also ran security and IT compliance for the first legal online gaming company in the U.S.
Kristina is a founder and board member of the Women in Cybersecurity (WiCyS) Utah Affiliate and is on the advisory board for Women in Tech.
Her technical security skills are significantly enhanced by her compliance experience in the areas of Sarbanes-Oxley, GLBA, PCI, gaming MICS and regulations, HIPAA, and ISO.