People make mistakes, and those unintentional mistakes can drastically impact an organization. You shouldn't dance with risk and wait for cyber attacks to steal your stage. It's time to take action with Andre Russotti of Altria Client Services as he explains why Zero Trust does not solve the user-behavior problem behind insider risk. Would you ever relax security controls? Hop into this conversation as Andre shares the touch points that keep user behavior risk to a minimum.
Listen to the podcast here
Stop Dancing With Risk And Start Driving User Cyber Behaviors With Andre Russotti
For those of you who’ve been joining us for a while, you may have been expecting Matt Stevenson. This time, we’re going to change things up a little bit. For this episode, as well as for the next few ones, we’re bringing you some in-the-trenches interviews with folks solving human risk management. For those of you who may not know me, I’m the co-founder and President of Elevate Security. I’ve been solving the problem of human risk from every angle possible for many years now, from gamification to behavioral science, data analytics to machine learning. We’re excited to be welcoming Andre Russotti to the show.
Andre is a seasoned professional with many years of service at Altria. His impressive career includes a role in IT Risk Management Security Governance, Compliance, and Controls, where he’s been instrumental for several years. His responsibilities span identity, privilege and database, access management, cybersecurity awareness, GRC, disaster recovery governance, and HIPAA, to name a few. Andre has proven himself as a leading expert in the field.
His background also includes leadership roles in fraud management, Sarbanes-Oxley, compliance audits, financial compliance, and various technology directorships. Andre’s expertise is further grounded in his education, holding a Bachelor’s degree in Computer Science from Western New England University and a Master’s degree from Pace University. Andre, welcome to the show.
Thanks again, Masha.
Tell me a little bit about how your role and the list of impressive hats that you wear intersect with human risk management.
It starts with being responsible for cybersecurity awareness for the enterprise. That's the foundation for making sure that people have the skill sets, knowledge, and training to do the right things, so that before they click, they're thinking. With that foundation, you understand how to apply certain behaviors or actions, things that help somebody understand what they need to do and give them better insight. The idea is, "If you understand the people that are out there, let's utilize a tool that allows for better interaction with them and more targeted learning."
If I know who has certain behavioral issues, who carries more human risk, as the new term goes, then I can make that person better without having to look at everybody. The one thing everyone hates is when one person screws up and everybody gets punished. That's the way human nature seems to go, but we're trying to move past that. Let's find the people who need the extra assistance, for whatever reason, and generally you can find that out and put those two together.
Everyone hates when one person screws up, and then everybody gets punished. It's how human nature seems to go.
It's a far more targeted approach that is much kinder to our employees: fewer touch points from security for those who don't need them, and the right touch points for people who need a little more. Backing up a little bit, you've been at Altria for so many years, which is a significant span of a company's lifetime, and I'm so curious about your experience. Tell me how Altria's program has evolved. What you described is quite cutting-edge and state-of-the-art as it relates to managing user risk, but I'm sure it hasn't always been like that. Tell me a little bit about the history as you saw it. What did the program used to look like? How did it evolve into the program you described?
It starts off pretty immature: there's no training, there aren't a lot of systems monitoring the environment, and there aren't a lot of controls or standards in place. This was before NIST, and NIST made it easier for a lot of folks out there. When you're first starting, you have to build everything. You had to figure out how to build a world-class program. How do you phish the population? How do you determine which standards apply to the IT side versus the standards the actual user has to understand and comply with from a cybersecurity perspective?
No one should be afraid. Small companies or big companies, everybody started at a point where they didn't have everything, and you build as you go. If you think you'll create all of this in a short time, you will fail, and it will be a lot of work for nothing. If I look at those years, starting with nothing to now, we're top-rated for our awareness program.
Now, we are starting to ingest the output of the security tools and understand how human risk can take that and turn it into actionable insights through metrics. Reporting back to people is where you see the whole difference. It was the next evolution because, to be honest with you, it wasn't there years ago. It's starting to come, and it's not even all the way there yet. There are still a lot of great things that can come into the human risk management area.
People are confusing it with Zero Trust and the user behavior analytics side of insider threat. That's there, but it's an evolution. Add a tool, optimize it, make sure you're getting the automation and the actual protection, then mature, go to the next one, and keep growing it. Don't jump to your employees too soon, because they won't understand it. They'll ask, "Why don't you have tools in place first?" You've got to get the majority of tools in place.
There is a lot for us to unpack there, but I want to go back to the moment you said you started doing this years ago. What was the trigger? I speak with many organizations who are still very much in the previous state where they are doing phishing simulation awareness training, but that’s the full extent of the program. Why was that not enough for you? Why did you want to move away from that set of best practices?
Two simple things. Number one, the high rate of users being phished, causing breaches or security incidents. That's continuing, and it's been going on since the '80s, where it originated. The second is the increase in insider threats. People are not dedicated to companies anymore. There are hackers out there who will pay someone to go work at a company and hand over the information they're looking for.
There are a lot of creative people on both sides. You have people who aren't getting it, and then you have the ones you have to watch out for who act inadvertently. You have insiders who mistakenly misconfigure something or post a spreadsheet with sensitive data in it. Those are the insiders whose behaviors you want to be watching for. That trend is continuing, and we want to get out in front of it.
Gartner’s been doing some interesting work where they’re redefining the field of insider threat to a broader category of insider risk. It’s a much more inclusive space where insider risk considers negligent actions as an insider risk all the way through insider threat, which is malicious, intentional actions. What their studies have shown is that 63% of insider actions tend to be from carelessness, and a smaller subset of that is from malicious. If we only focus on the malicious piece, while extremely valuable from a protection perspective, it is not the thing that many organizations are cleaning up from these days.
That's why I mentioned that we look at the unintentional. Remember, insider threat once did not include that. The newer terms break it out more, but from the early days, you had to include negligence with it.
You mentioned a little bit earlier that there is some confusion with UBA and with Zero Trust. Tell me about the intersection of those terms that you're seeing and why they're not the same thing.
People think that Zero Trust will solve everything for everybody, but the differentiator is user behavior. It's great that they don't have access, but when they do, because you gave it to them because they needed it, they can still do what they shouldn't have done. Zero Trust doesn't solve the user behavioral component, which blows my mind, because everyone thinks it will. You gave them access, then they made a mistake. That's exactly what's going to happen. Take that to the bank. That's the difference.
I wanted to take a small detour into one of the other hats that you wear around identity and privilege and database access management. When we think about initiatives like Zero Trust, often, for many organizations, that’s a capability that’s owned by IT or at least in partnership with security. What we’re talking about and what you mentioned around risky users and embedding that seems to belong in the realm of security.
I’m curious. From your perspective, do you see that being a problem as far as the risk mapping being owned in one technology and being consumed and figuring out who you give access to based on the risk level? Are you finding that in both your experience and also in the market? Are those two teams learning to work well together or are they siloed and divided, causing issues? How do we tackle initiatives like Zero Trust?
That partially depends on the company and the groups' willingness to play together, because what the groups will find is that it doesn't matter where IAM sits versus the human risk solution. As long as those groups work together, the direction from human risk could be: based on these behaviors, start passing information to the access group. It may be, "You are real risky. We're taking away your privileged access," and then tie that to Zero Trust.
Now you're not trusted. You have to ask for access every time you need it. Does that need to sit in the awareness group or even the risk management area? No. Does it make things easier? Yes. You can tie those things together, but the human risk tool is a perfect candidate to start working with the access group and that whole Zero Trust component. I can't answer how well teams will work together, but for those that see the value, it's very easy to play in that space, pass that information on, and make a company more secure more quickly.
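The hand-off described above, a human-risk score feeding the access team's decisions, could be sketched roughly like this. The function name, thresholds, and scoring scale are all illustrative assumptions for this conversation, not any vendor's or Altria's actual implementation.

```python
# Hypothetical sketch: a human-risk score gating access decisions.
# Scale (0-100) and thresholds are invented for illustration only.

def decide_access(risk_score: int, requesting_privileged: bool) -> str:
    """Map a risk score to an access decision.

    Low-risk users keep standing access; risky users requesting
    privileged access fall back to just-in-time requests, mirroring
    the Zero Trust "ask every time you need it" model.
    """
    if risk_score >= 80:
        return "deny"              # too risky for any elevated access
    if requesting_privileged and risk_score >= 50:
        return "just-in-time"      # must request access each time
    return "grant"                 # trusted: standing access retained

# A low-risk admin keeps access; a risky one must ask every time.
print(decide_access(20, requesting_privileged=True))
print(decide_access(65, requesting_privileged=True))
```

The point of the sketch is only the direction of the data flow: the behavior signal originates in the human risk program and is consumed by IAM, wherever each team happens to sit.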
I'm excited about what you mentioned because it is a burgeoning trend, and it's why security awareness as a term is fading and we're moving toward human risk management. The potential of the tools we have at our disposal is so much broader if we stop thinking about it as awareness. The assumption that people will do something different once they're aware has not proven true over the decades we've tried it. If we instead think about understanding the problem and the risky behaviors you mentioned earlier, what you were describing is embedding this into access management and controls, which are tools that security teams and identity teams already have at their disposal.
It’s a much broader way of thinking about the solution as opposed to, “This is a communications problem.” It’s a security problem. One of the problems that I see in getting this initiative off the ground and what Altria has done uniquely well is empowering someone at quite a high level on the security team to hold the mantle of this initiative and thinking about this as a very strategic problem that affects many different teams as opposed to a compliance checkbox problem.
If we can back up a few more steps here, I'd love to hear your vision from where you sit in the organization, with your experience in security. Empowered with user risk insights, how do you see the program evolving? What have you already accomplished, and where do you want to take it next, given your purview now?
That's three questions, but let me start with where this could go. There's a balance between user behaviors and the carrot and the stick. If this is helping drive behaviors, you'd like to see it get the company a better ROI on its security tools, because they have all these tools looking for anomalies. The data sits somewhere, but no one's using it. Most companies aren't taking that extra data and tracing it back to the behavior someone exhibited for that event to occur or be captured, whether that's going to a website, clicking on a link, and then extrapolating down to find it was a watering hole or what have you.
I want to make sure that people are educated on that, but also to more quickly bring in the tools that allow for better data understanding, better behavioral tweaking and ratcheting, and passing the information on: understanding how privileged access should be allowed, what additional access follows from installing software on your machine, or whether to give access to a USB device.
You say you need it for this task. The score drives the decision: you're a risky person, so I can't approve that. It's about integrating these tools that I'm already paying for, that are just sitting in the background, so they add extra value around the actual behavior someone has. You take that learning and help the person understand, "Here's why that website is bad. You went here and there, and this is what happened." Now they understand what it is and their role in it.
We have to get better at putting that in place and at having it educate people on what they need to do to improve. It's not there now. It's clunky trying to parse through the different APIs from the different tools, but that's where it's going. We're helping to drive that, and that's where end users start getting something out of it, which makes them and the program much more effective, because they understand they have a place to go.
They directly see their actions and what those actions could have caused. That's a maturity level I want to get this to: people embracing it and moving forward, saying, "I don't do X anymore because I understand how it can impact the company and how it's tracked." They see it and know how to avoid it.
Those are three big buckets I've heard you mention. First, you already have the tools. Security tools can rack up quite an expensive fee, and you realize you already have the logs. My experience, especially in helping organizations deploy human risk management technology like Elevate, is that they have all these logs. The logs are already oriented around stopping an attack, because we designed them for detection and response.
We stop the scary thing from happening, we're done, we walk away, and we've saved the day, but that doesn't complete the circle. If we think about the NIST life cycle, it goes from respond to recover and back to identify. From a security maturity perspective, as an industry we haven't quite figured out the path from recover back to identify. If we know an employee attempted to download malware and we successfully stopped it, that's wonderful, but the employee still attempted that action.
You have the data and the insights to act on, so why not make use of those investments? I love seeing people's light bulbs go off: "I have so many of these insights already available, but I'm not acting on them." The other piece I am struck by in what you were saying is the balance between the technology approach and the human piece: the communication, the feedback, the transparency, bringing the employees along for the ride. Balancing the two is a thoughtful way of approaching the problem space.
It’s not just security that has this edict, “We think you’re risky. Therefore, we will control your experience.” It’s a two-way conversation. On that topic, I’d be curious if you have any anecdotes or instances that come to mind around where having this data has enabled conversations with folks in your organization and enabled more dialogue or transparency that wasn’t available to you all before.
As I got to earlier, when there's a problem, companies penalize everybody because a few people do it. What you're getting at, and the crux of the best part of this, is that I now know I can tailor the training to how these particular people are exhibiting a bad behavior or not understanding the right one, and get it targeted right to them. I don't bother everybody else, because they're doing it right and they're not clicking.
It gets back to saving everyone the hassle of taking another training. I've saved thousands of dollars of people's time, because I've got these individuals in even smaller groups, and you find out what commonality is causing them to behave that way. You add that to the awareness program, especially for new folks. Remember, it's cradle to grave. When you come on board, you don't know any of these things. You're right out of college and you didn't have all this understanding. You thought it was the Wild West out there and you were allowed to tame it. Then you get here: "Here's how you make sure you don't fall into that trap," and you grow them over time. Remember, the hackers are better than everybody, or else we wouldn't be in this situation.
They're always one step ahead. As they take a step and you see what step they took, what did you learn, and is it something that can be applied to people? Part of what you get out of this is that the riskier ones make things pop up. You see things, and you marry that with good intelligence. Having that allows for better focus. I leave the low-risk people alone, except to reward them, the people who are champions. You put them into drawings and you publicize who won. Never forget about the positive side. You have to do both. That's how you segment the different areas.
The hackers are better than everybody, or we wouldn’t be in this situation. They’re always one step ahead.
I know folks have tied it to bonuses, to pizza parties, and to shout-outs. Once you have the data, you can recognize it as well as reinforce it. I want to ask you what is, in my experience, a provocative question. If you know people are low risk, would you relax security controls for them and make their security experience have less friction?
There is a possibility for that. If you demonstrate all the right behaviors over a period of time, then there is that possibility, but there are so many caveats. If you're a privileged access user, then probably not, just because you have too much access to the business. For an end user accessing their own data and some documents, with one or two applications that help them get their job done and are not critical to the business, you could do that, but you should never turn a blind eye to it. The beauty of the human risk tools out there is that they give you the data to make those decisions: relax something like that for some people, and maybe not for others.
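The criteria Andre lays out, sustained good behavior over a period of time plus the privileged-user caveat, could be sketched as a simple rule. The window length, threshold, and function name are assumptions made up for the example.

```python
# Hedged sketch of "when can we relax controls?": only after a sustained
# window of low risk, and never for privileged users. The 90-day window
# and score threshold are illustrative, not a recommendation.

def can_relax_controls(daily_scores, is_privileged,
                       window=90, threshold=20):
    """True only if every score in the trailing window stays low
    and the user holds no privileged access."""
    if is_privileged:
        return False                  # too much access to the business
    recent = daily_scores[-window:]
    return len(recent) == window and all(s <= threshold for s in recent)

# A consistently low-risk end user qualifies; one recent spike disqualifies.
print(can_relax_controls([10] * 90, is_privileged=False))
print(can_relax_controls([10] * 89 + [40], is_privileged=False))
```

The design choice mirrors the conversation: it's easy to ratchet up, so relaxing should demand a much longer and cleaner track record than tightening does.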
In some ways, it's a philosophical debate in security, because we understand how to ratchet up the stakes. We understand what more security looks like. We've never stopped to say, "What would it take for us to unwind this?" We constantly talk about the balance between security and productivity, but we never collectively think in our design of systems, "These are the criteria for adding security controls or friction. What would it take, what trust would I need, in order to allow a more permissive environment?"
To your point, it vastly varies. There are certain things no one is ever going to do without endpoint protection. Lateral movement is a thing, and even if you're totally trustworthy, we're all interconnected. But take something like multi-factor timeouts. If we find you're far less likely to compromise yourself in an account takeover, maybe you don't need to log in as often. There are edge cases like that where it's an easier place to start with that mindset.
You have to have a good, clean environment. There's role creep as you move through your time in the business: you get added to groups, and if you don't have a good handle on that, people end up with access to things you would never have expected, which is risky, and you could cause a problem. As you said, it's very complex and there are a lot of moving parts to address, including that you may not be risky now, but the hackers figure out a different way, and you're the person who picks up USB drives. It's tough. It's easier to ratchet up than to let go. I'd say there is an opportunity, especially in the example you used.
For folks reading, let's say they're curious. They're human risk management curious, and they want to know where to start. If you were to go back to your earlier self before you started this, are there any pieces of advice you would give yourself, any challenges you had to overcome, or things you wish you'd known ahead of time that would have made the path a little easier?
It's management buy-in. We had great buy-in, but it's about getting them bought in to the final deliverable sooner. The biggest thing that loses credibility, the one I would have told myself to focus on, is false positives. You give people a bad score because of badly interpreted data. I'm not going to say bad data. They say, "That's not me. I don't have that. How did you arrive at that?" You create mistrust, anger toward the program, and extra follow-up. False positives are number one. You've got to understand the processing of the data immediately: how it is scored, what the score is, what it means, and how it impacts people. Agree that that's how you should go forward, and then do the rollout, versus doing it backward, where you put something in and then retrofit.
You try to figure out what was appropriate, what produced fewer false positives, and what gave a better, more digestible component to translate into a score. That's key number one. When I thought about it, I moved it to number one. The other is keeping management out of report creation. As you're trying to create a report, let the people who know how to create these things, who have done it and developed an intuitive style of communication, provide the insight. Worry a little less about how pretty it is and how everyone's going to react to it. With fewer false positives, you save a lot of time, effort, and pain for the end users.
Let the people who know how to create these things do it. Developing an intuitive style of communication provides the insight.
There are two things I have learned over time in building this product at Elevate. One of them is that transparency is key. If you're going to score people, affect their access, or tie it to a performance bonus, if you're going to hold people accountable in any way, you'd better show up with the receipts for how you got to that score. For every profile, we display it on the platform. Every event associated with the individual is there, so there's open dialogue and transparency. God forbid you have a black-box score.
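The "receipts" idea, every point of a score traceable to a concrete event, can be illustrated with a tiny sketch. The event types, point values, and class names below are invented for the example; they are not Elevate's actual scoring model.

```python
# Illustrative sketch of a transparent (non-black-box) risk score:
# the total is always accompanied by the line items that produced it.
from dataclasses import dataclass

@dataclass
class RiskEvent:
    description: str
    points: int

def score_with_receipts(events):
    """Return the total score plus a human-readable line item per event."""
    total = sum(e.points for e in events)
    receipts = [f"{e.description}: +{e.points}" for e in events]
    return total, receipts

events = [
    RiskEvent("Clicked link in simulated phish", 15),
    RiskEvent("Attempted malware download (blocked)", 25),
]
total, receipts = score_with_receipts(events)
print(total)        # the score the user sees
print(receipts)     # the receipts that justify it
```

When someone pushes back with "That's not me," the receipts either settle the question or surface the false positive, which is exactly the feedback loop the conversation describes.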
Some people do it.
"You got a 3 out of 10." I hear you on that. It's the open dialogue, having an avenue for feedback in the communication: "Are we off? Let us know." That creates a dialogue, which can sometimes be a little surprising when security teams aren't used to hearing back from individuals, but the conversation can be quite enlightening. You realize there are people on the other end who care and who want to make sure they're doing the right thing, which means they're aligned with security.
Correct. There are. Those are good advocates, especially if you can turn them into advocates for you.
That's right. If they see that you hear them, it transforms the relationship. My third piece of wisdom learned on this is that it's not a binary thing. It's not "Don't use this data set at all" or "Use all of it." Start with the things you have high confidence in, the ones where you're certain it's a positive match. As you get better at your tuning and tooling, you can expand. To your point, false positives erode trust, but it's not a 0 or 1. That approach has worked across technologies and deployments.
It's not so much high confidence in the data. It's high confidence that the API is accurately providing it. If I could tell myself one thing: all the APIs say one thing, and some don't do what they say. They do it a little differently. Some are great and spot-on, and then there are others that use a different field or combine fields. That's where it gets tricky. To your point, ratchet it down and take the two signals you know are accurate. The tool may produce seven, but you're not ready, and that's okay. That's what we've learned. We'll take the two that are spot-on every time and fix the others later, but if you try to do them all, you'll never get there. What you said is perfect: it's okay to take just a portion of the tool until the rest is ready.
How do you measure the success or effectiveness of a program like this? How do you do it for your organization?
We've started rolling it out. We're bringing our contractors in as the next wave after employees. We're hoping to see fewer cybersecurity incidents and fewer calls to the cybersecurity lines. We see fewer malware attempts because somebody clicked on something they shouldn't have. That's all tracked. Those are all measurable numbers showing that people are not doing those things anymore.
You have your phishing rate, and this drives it down. I could get into the whole phishing story if we have time. When we started, we were at a 30% failure rate. We drove that down, then added these elements on top. Between our program and the added awareness around reporting, we're now down to 5% or under.
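The two numbers discussed here, the failure rate Andre drove from 30% to under 5% and the report rate mentioned next, are straightforward to compute per simulation campaign. The field names and figures below are assumptions for illustration, not Altria's actual data.

```python
# Minimal sketch of per-campaign phishing simulation metrics.
# All numbers are made up to mirror the trajectory described above.

def campaign_metrics(sent: int, clicked: int, reported: int) -> dict:
    """Compute the two rates most awareness programs track:
    % who failed (clicked) and % who reported the phish."""
    return {
        "failure_rate": round(clicked / sent * 100, 1),
        "report_rate": round(reported / sent * 100, 1),
    }

# An early campaign versus a mature one:
early = campaign_metrics(sent=1000, clicked=300, reported=50)
later = campaign_metrics(sent=1000, clicked=45, reported=400)
print(early["failure_rate"])   # around the 30% starting point
print(later["failure_rate"])   # under the 5% target
```

Tracking both rates matters: a falling failure rate shows fewer people clicking, while a rising report rate shows the population actively helping, which is the behavior the rewards program reinforces.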
People understanding reporting is very important. Most companies have tools where, when you report an email and it's determined to be malicious, the tool pulls it out of everybody else's inbox. You save 1,000 people from potentially clicking on that same one. If anyone says their tool catches everything, I would fire them, because it is not true. The people help, and then you reward those people. That's what we do every month. We say, "These are the real phish finders." We take a segment of them and reward them with points and gifts like pizza.
When you start out by measuring the outcome, you can see whether it's working. It's a game-changer as it relates to the human element. We did not have access to these metrics when awareness and training were the main focus.
Maybe they got the score right, maybe they cheated, whatever. If they learned anything, take the win. If things are truly not happening, if people aren't making mistakes or causing configuration-type issues and they're being less risky with security, then that's great.
Switching topics here a little bit, tell me about what you're keeping an eye on in security. Are there any emerging trends or developments in the field of human risk or security at large that you're watching?
I don't want to overuse it, but I would say AI is one of the main things, and we're not just going to talk about ChatGPT. What everyone's got to be thinking about is, whatever tools you're using, Elevate as well, you should be asking, "How are they integrating AI? What elements of AI, and what does it do?" That's why I'm staying on top of it, watching what the government is moving on and how the big companies are trying to take over that space a little bit.
How they're doing it, which I don't think is nice, is a topic for another day. When you're in that AI space, that's the extra learning the tools are going to benefit from, and you're going to get these extra insights because it has your scoring and sees how you act over time. Remember, time builds the data up. That's how it works. It ingests these things, and it will then better understand and predict. That's the difference.
Right now, we just stop you from doing something because of the behaviors, but AI is going to help predict what you might do, based on the data it had and how it learned you do things. It can prompt you not to do something, which is going to be really interesting. If something pops up and says, "Don't click on that," it's like, "Wow." That whole prediction could stop something in advance, or we can better anticipate that you're going to go to a site and put a block in place before you even know something was about to happen.
We see what they're looking at. If they're going to go to a site, let's put some extra protections in place: take what they do, put it into a sandbox, and let their web browser activity run there. If they click on malicious stuff, it never gets out of the sandbox. That's what I find exciting. Help vendors see this added value so they take their products to the next level, because that's when I'm going to benefit.
If I can predict and stop them from clicking on a malicious link on a website, because I've contained them based on what I sense they're doing, I've made the company so much less risky and more secure. Keep an eye on AI, learn what you can, and encourage the companies you work with to incorporate it, explain how they're using it, and make sure it's done the right way.
Keep an eye on AI. Learn what you can, and then encourage the companies you work with to incorporate it and explain it. Ensure it's done the right way.
I have always had a theory that I would like to prove with a large-scale study: people make worse security decisions when they're hungry. I would love an AI to say, "This looks potentially suspicious, and you haven't had lunch yet according to your calendar. How about I deliver it to your inbox after you've had lunch?"
That’s creepy, but that’s interesting. You extrapolated the way it was thinking. You already took it to the next level.
I have some friends who are like, “Don’t deliver anything unknown to me while I’m at happy hour. Here’s my calendar. Manage my social and food intake with my wrist.”
It's coming, because you're wearing a watch. I can't tell if you're wearing one, but for the people who are, with all the medical data everyone's collecting now, the glucose monitors and all that, it will know when you've eaten and when your blood sugar is low. It can start correlating that, which is exactly why I said AI is a large machine-learning type of tool. It will be leveraging everybody's data to see when those types of things happen.
It always seemed very futuristic, yet it seems to be getting closer by the day. With that, I'm excited to look at what's to come. It is a little bit terrifying and a little bit exciting, depending on how you feel about our robot overlords. We will be wrapping up this episode. Andre, thank you so much for a delightful, robust conversation. Thank you to our readers for joining us. For more information on what's going on in the world of cybersecurity and human risk management, find us on LinkedIn. Andre, if folks would like to connect with you, where should they follow up with you?
LinkedIn is the best way to find me out there. I can't guarantee I'll get back to you, but I'll see how interesting your question is or what you're into.
With that, thank you all very much.
Stay safe out there.
You too. This was great. Thank you very much.
About Andre Russotti
Andre Russotti is a seasoned professional with over 36 years of service at Altria. His impressive career includes a current role in IT Risk Management Security Governance, Compliance, & Controls, where he's been instrumental for nearly 11 years. With responsibilities spanning Identity, Privilege and Database Access Management, Cybersecurity Awareness, GRC, Disaster Recovery Governance, and HIPAA, to name a few, Andre has proven himself as a leading expert in the field. His background also includes leadership roles in Fraud Management, Sarbanes-Oxley, Compliance Audits, Financial Compliance, and various technology directorships. Andre's expertise is further grounded in his education, holding a Bachelor's Degree in Computer Science from Western New England University and a Master's Degree from Pace University.