Data security awareness is indeed valuable, but how is it actually framed? Most people in the security space rely solely on the human firewall, and that is where the problem begins. In this episode, Ira Winkler, author of Security Awareness For Dummies, explains why organizations should stop depending on user awareness alone to protect their data. He also explains why even strong passwords may prove unhelpful when cyberattackers are using password crackers. Tune in for an enlightening talk with Ira Winkler and Matthew Stephenson as they help take your data security to the next level!
Listen to the podcast here
Ira Winkler: Stop Relying On The Human Firewall
Here on Friendly Fire, we are bringing you all of the top experts in the industry for a chat about anything that is interesting and worthwhile in keeping our world secure. Speaking of that, on this episode, we are excited to welcome Ira Winkler to Friendly Fire. Ira is the Field CISO for CYE. His latest book is Security Awareness For Dummies, but he is also the author of You CAN Stop Stupid, Spies Among Us, Zen and the Art of Information Security, Through the Eyes of the Enemy and a few other bestsellers. Our man gets after it.
He sits on multiple boards. He also serves as Executive Advisor to multiple companies and is a faculty member at the University of Maryland, Baltimore County. Do not sleep on the University of Maryland school system when it comes to cybersecurity because their curriculum is no joke. In his previous life, he was the Chief Security Architect at Walmart. CSO Magazine named Ira a CSO Compass Award Winner as the Awareness Crusader, which is cool. He could be a Marvel character, and he was named a 2021 Top Cybersecurity Leader by Security Magazine. I could keep going, but then Ira wouldn’t get a chance to speak. If you are familiar with Ira, you know that’s not an option. Ira, welcome to Friendly Fire.
Thanks for having me. I’m embarrassed by that introduction, but I provided you the material, so be it.
Embarrassed? Are you kidding me? Your life provided the material for that thing. All I do is report what you have done in the world.
I’ll take it that way. It was your pleasure then, how’s that?
Let’s go with that. One of your books, which is my favorite title among all of them, is You CAN Stop Stupid. This book is a bestseller. Check it out at Amazon or other top booksellers near you. You sometimes criticize security awareness efforts, yet you are a security expert, someone people come to talk to about security, with awareness being part of it. How do you balance that, the notion of promoting the very thing that you are critical of?
It’s ironic. I wrote Security Awareness For Dummies after You CAN Stop Stupid. It was a unique thing and everybody’s like, “How do you do that?” I go, “I’m not critical of security awareness. I’m critical of how it’s framed.” I think security awareness is incredibly valuable. For example, anti-malware is incredibly valuable, but if somebody tells me that they are taking care of the entire problem of malware inside an organization solely by putting anti-malware software on their systems, I have a serious problem with that.
Likewise, the problem is when I hear people talking about, “We’re going to have the user as our last line of defense. A user clicked on a phishing message. All we need is a more aware user and that will solve the problem, but we’re not blaming the user.” Essentially, these people don’t have a clue, because if you are saying, “The only problem is that a user needs to be more aware,” you’re blaming the user for not being aware.
Fundamentally, awareness is a very valuable risk-reduction tool and should be implemented. However, relying upon awareness as the sole and only solution for preventing user-related problems is where I have the problem. It’s much like saying, “You have a firewall. Therefore, clearly, you will never have a security problem,” or, “You have awareness. You will never have a user-related problem.” That’s ridiculous.
The problem is not with awareness, but the problem is how the awareness profession frames awareness, setting itself up for failure by somehow implying this is the solution to the user problem. Awareness is never going to solve everything. Frankly, if a user is malicious, just fundamentally malicious, more awareness helps the user be more damaging. If you are making your user the last line of defense because they’re aware, your entire security program has failed because you are relying upon a malicious party as your last line of defense.
That is absurd, but what I said specifically is highlighting all the problems with how awareness is framed. That’s fundamentally what’s wrong with awareness. Not awareness itself but the reliance and framing of awareness is where the problem is. A good awareness program is worth its weight in gold, but it has to be used as a risk-reduction tactic within a larger strategy.
A good data security awareness program is worth its weight in gold. Use it as a risk-reduction tactic within a larger strategy.
Forgive my cynicism, and as far as big awareness is concerned, we’re all doing our part to get the word out about anything that needs to be gotten out. But the way that you frame it makes me think of the sports leagues wearing the various ribbons for the causes they want to raise awareness of during certain months, or whenever the social media campaign comes out against some third-world dictator or whatever.
Suddenly, everybody’s aware, but there isn’t anybody going in there and curing cancer at that moment. Just because you put something in the end zone at an NFL game, the audience is not going to be the ones who go in there and create the new thing that gets rid of that disease. Am I being too cynical with that approach or is that in line with what you’re saying?
It’s in line. There’s this whole concept of what’s called the information-action fallacy, which was ironically coined by BJ Fogg, whom everybody seems to misquote on behavioral change and everything like that. BJ Fogg’s information-action fallacy is the belief that if you provide people with information, clearly, they will then do what’s right. Say, “I’m probably 10 to 15 pounds overweight.”
I know, in theory, I can lose that 10 to 15 pounds by eating right and exercising more, and I don’t. I’m okay, but there are people who are going to die because of their weight. People still know the same things and still don’t do it. Smoking, likewise: cigarette boxes literally say, to the point, “You will die if you smoke these cigarettes,” and people still do it, and that’s personal. Now, compare “You will die if you do this” to, “By clicking a phishing message, you will potentially let a bad person in who will then possibly cause some damage to this organization that you’re not happy with in general.” How effective is that?
Now I need the think piece from the Atlantic where they show the Venn diagram of the overlap between people who know they’re going to die and smoke and click the phishing link. I’m curious to see if there’s some type of statistical correlation.
I’m sure there is.
Given your career and the companies that you have worked for, whether it’s full boots on the ground as Chief Security Architect for a corporation as large as Walmart, or the advisory boards that you’ve sat on, all the things that you do. You can’t make anybody do anything, but what can you do to bridge the difference between awareness and action and get people to a place where they understand it?
Here’s the concept. I’ve been talking about this concept of human security engineering for a few years now. I wrote You CAN Stop Stupid essentially as the principles of human security engineering, because the concept is that if a user clicks on a phishing message, everybody points to the user as the problem.
You have to look at this from a safety science perspective, an accounting perspective and all these other disciplines, because in cybersecurity, we act like we are the only profession in the history of mankind that has ever had to deal with human error or harmful user actions. When you think about it, what do all these other professions do? They acknowledge that a human is potentially going to create damage no matter what we do.
What they do is put all of these other mitigating controls in place. For example, look at a phishing message. Everybody says, “Users need more awareness. The user was the problem.” I’m like, “In the first place, probably a half dozen technologies had to fail for that message to hit the user’s inbox. You need DMARC to prevent spoofing. You needed a bad guy who somehow created a phishing infrastructure.” For that phishing infrastructure to exist on the internet, systems throughout the internet had to be vulnerable, so there’s a whole bunch of vulnerabilities there.
Likewise, we had to have a secure email gateway, which let the message through to the user. Before it got to the user, it had to go to an email client on the user’s system, which then presented the phishing message to the user, and nothing filtered out this message. There are things that we can do, like nudging, using user experience to say, “This message has some warnings to it that you should heed.”
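The layered failures Ira lists can be made concrete with a small, purely illustrative sketch. Assuming messages arrive as parsed header dictionaries, a “nudge” filter might flag external or unauthenticated senders before the user ever sees the message. The header names, the trusted domain, and the banner text below are assumptions for illustration, not any specific product’s behavior:

```python
# Sketch of a "nudge" layer: if a message comes from outside the
# organization or fails sender authentication (DMARC/SPF results),
# prepend a visible warning banner so the user is primed to be careful.

TRUSTED_DOMAIN = "example.com"  # hypothetical internal domain

def needs_warning(headers: dict) -> bool:
    """Return True if the message deserves a caution banner."""
    sender = headers.get("From", "")
    auth = headers.get("Authentication-Results", "")
    external = not sender.endswith("@" + TRUSTED_DOMAIN)
    failed_auth = "dmarc=fail" in auth or "spf=fail" in auth
    return external or failed_auth

def add_banner(body: str, headers: dict) -> str:
    """Prepend a warning banner when the heuristics flag the message."""
    if needs_warning(headers):
        return "[CAUTION: external or unauthenticated sender]\n" + body
    return body
```

The point mirrors the conversation: the banner is one more control in front of the user, not a replacement for gateways, filtering, or awareness.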
If you do not assume that some user somewhere in your organization is going to do something harmful, you should be fired, and they should pull every cent they ever paid you out of your bank accounts, because you have to fundamentally acknowledge a user will do something harmful. It is inevitable.
If you don’t acknowledge that as a security professional, even if you have the best human firewall tool out there, you should be fired. But even then, the system should say, “That’s a stupid, harmful thing to do. Do you want to do that?” The system should anticipate that the user is going to do this. When the user clicks on a phishing message, for example, there are generally three types of harm: the compromise of credentials, the compromise of sensitive materials, or letting malware onto the network.
You have to acknowledge a user will do something harmful with your data. If you don’t acknowledge that as a security professional, you should be fired even if you have the best user human firewall tool.
You have to predict this. Anti-malware software should theoretically help prevent malware, on top of the fact that if a user doesn’t have permission to download and run software, that should impact the ability of malware to load on the system. For data leaks, again, you have to assume a user might send files out; data leak prevention and web content filters should catch it. For a leak of credentials, the same thing: permissions, web content filters and a variety of other controls.
If you are not expecting these things, there’s something wrong with you. Going back, we look at it from a systems perspective, which again is what safety science does, what accounting does and a whole bunch of other fields do. Every other field and industry that I can think of expects users to fail in one way or another. Why are we the delusional group that thinks if we show somebody a video or give them a little widget, it will make them a human firewall and we will never have that problem and won’t have to deal with it? Sorry, that’s my soapbox.
Trust me. It’s why we wanted you to come on. Not only do you have a soapbox, but it’s built out of tungsten steel, because we still need to hear it and we still need you standing on it. I have a terrible pun that, if I don’t get it out, is going to haunt me for the rest of my life. When you talk to a security leader about what happened and why they got breached, and they say the user is the problem, it sounds like you could say, “The user is the problem,” when speaking to security leadership.
What I talk about to people, and I’ve been quoted as far back as 2014 on this, is true: if a user can click on something and ruin your network, it’s your network that sucks. It’s not the user. Nothing should be destroyed. In theory, I always believe that nobody can promise perfect security. The only people who promise perfect security are fools or liars. There should be so many things that have to fail in order for a failure to occur when a user is involved. I hate to say this. It sounds like something out of the Brady Bunch movie, but when you point at somebody, there are three fingers pointing back at you.
That was the second joke I was going to make. I agree 100%. I’m right there with you.
Blaming the user, the people who call the user stupid are the stupid people. You CAN Stop Stupid, by the way, stupid is not referring to users. Stupid is referring to the people who create the systems that allow the users to appear to be stupid.
Dear readers, I’m sorry that we could not bring on board a guest who was willing to couch everything in opinion in case he might say something that’s borderline controversial. Perhaps next time. Anyway, everybody gets that because they all get these things. You mentioned safety science before. This is another hot-button issue for you, where you have spoken about the difference between old-school and new-school safety science.
For our hip-hop readers out there, there’s a line of demarcation in what people consider old school and new school when it comes to music. For you, when you look at the notion of safety science and this is a two-part question, is there a line of demarcation? Is there a boom left and right? Where did the old school fail that led to that thing, that moment of the boom where the new school happened and things started to change?
I have to admit, everybody should look at the work of Sidney Dekker, a brilliant guy. Frankly, he’s the person who coined these terms, old school and new school. Essentially, the old school was, “That person got themselves killed in a fire in a factory. Why was that person stupid enough to get themselves killed?” All of the investigations focused on what that person did wrong that got them killed.
When you look, for example, at airline accidents, you have to look at disasters, unfortunately. Look at the 737 MAX jets. Every time a jet crashed, they were like, “Why did the pilot do that and cause the jet to go into the ground?” That’s the old-school way of looking at it, but then you started looking at things in a much more advanced way.
This also happened: if anybody saw the movie Sully, about Captain Sullenberger, who landed the jet on the Hudson River, there was a whole investigation asking, “Why did they land on the Hudson River? What did they do wrong? Why didn’t they go back and land at a safe airport?” They put pilots in a simulator, and the pilots were able to make it back, but they didn’t take into account in the simulation the time it takes to figure out what is actually wrong. The movie was clear that the simulation did not include that thought process.
In the simulator, the pilots were like, “The engine stopped. Time to turn back.” Whereas Sullenberger was like, “Okay, wait a second. The engines aren’t off. Let’s try to restart. Let’s try to do this. What’s the status of all these different things?” That is what a pilot should be doing. The question became, from a systems perspective, what led to that happening?
Now, going back to safety science, if somebody gets themselves killed in a factory, the concept then becomes, “Why was the person there in the first place? What was the person doing? Does that person have to be there to get themselves killed? What were the factors that led them to do their job? Are there things around their job that could have helped them do things better that would’ve been safer? Are there other issues, for example, of controls?”
When safety science started doing studies, they found out that 90% of workplace injuries were the result of the workplace, not of some specific failing of the person. For example, if someone is injured because something falls on them, why did it fall on them? Why was it in that place? If something in the machinery caused the injury, what in the machinery could have been made safer so that the injury would not have occurred, and so on?
90% of workplace injuries were because of the workplace, not because of some specific failing of an individual.
New-school safety science says the user is a piece of the system. If the user, being the worker, injures himself, that is a failing of the entire system. Let’s look at the system from start to finish and figure out how that worker could have avoided injury in the first place. Even if the worker was injured, how can injuries be minimized? How can injuries be mitigated? Can we make it safer? That is why you have a lot of workplace safety rules. Going back, I’m a master scuba diver trainer. I’m giving a presentation in the Cayman Islands: everything I need to know about security, I learned from teaching scuba diving.
How did I not have that as part of the WWE-style intro for this? I would’ve led with that before getting into the dozen books that you published.
Anyway, everybody should go to BSides Cayman in March. Cheap plug for them. I get nothing. When I took scuba instruction on how to teach people, at the end of the day, the whole scuba instruction process covers the dozens of ways people have gotten themselves killed. How do we keep them from getting killed, but also, how do we protect the industry? How do we protect dive professionals and so on?
For example, when you teach somebody how to dive, there is a literal list of skills on a form that people have to initial to say, “The instructor taught me this. The instructor taught me that.” People inevitably die, which is, unfortunately, a fact, although scuba diving is statistically safer than bowling. Keep that in mind.
When somebody inevitably dies in scuba diving and somebody says, “My brother was killed and I’m suing on behalf of his estate. Nobody ever taught my brother that he should not, for example, come up too fast,” the answer is, “Here are his training materials, and here he initialed that he was instructed to come up at a safe rate. Here are all the places he should have read that,” and so on.
The whole dive industry takes into account how to tell people to do things safely, how to check their equipment, what the standards are for letting people dive from a dive boat, what type of training they should have and so on. It’s because of this that diving is a reasonably safe activity, as well as, hopefully, a multibillion-dollar industry, and that would not happen without what I would phrase as a new-school safety science approach. Sorry, I went way off course there, but hopefully that all fits together now.
That reframes everything you said on this thing. The idea that you teach scuba, taking that metaphor over to the security side of things. It’s one thing when you are teaching people cybersecurity. When you are teaching people how to teach cybersecurity, is there a difference in communicating how you communicate the message as opposed to communicating the message?
In both cases, what you’re doing is a combination of teaching people fundamentals. A lot of cybersecurity, frankly, is good cyber hygiene. It’s the same with teaching people scuba diving: I try to focus on the basics. A divemaster, by the way, is the lowest level of professional certification. The ladder goes divemaster, assistant instructor, open water instructor, master scuba diver trainer, master instructor and course director.
To be a divemaster, probably 80% to 90% of the training is fundamentally learning how to do twenty basic scuba skills to an expert level. That’s what being a divemaster theoretically is. Once you do that, then there are some other business aspects. Fundamentally, you think, what makes a master? Sorry, this is something I spoke about in Zen and the Art of Information Security. I also have a black belt in karate. Somebody once joked, “Do you know the secrets of the ninja?” I go, “Here are the secrets of the ninja: there are no secrets of the ninja. There are only so many ways to punch, to kick and to block,” which is what martial arts is.
Cool black pajamas.
White sometimes. Depends on the program, but I like the black ones. So why are some people good? What they have done is perfect the basics. They’ve perfected the ways to punch, kick and block. As a black belt, especially in Kempo, which is the form I took, you’re trained not just in how to block but also specifically where to block.
If somebody throws a punch and you block on the inside of their arm, you could hit a nerve. Not only are you blocking the punch, you’re pretty much taking away their ability to move their arm and causing pain. It’s almost like the block becomes the strike, but it’s still a basic block; you’ve just perfected the timing and how and where to block.
Likewise with a punch: you throw it loose and limber and lock it up right at the last second, then bring it back even quicker, making the punch even harder. You’re like, “You’re punching loose?” I’m like, “I’m punching loose until it’s about to hit,” because I get more speed and, therefore, more force into the punch.
The red teaming ninja that one.
Those are the fundamentals. There’s no secret of the ninja. It’s refining and perfecting how you do it over time. You can’t teach somebody how to block and hit a nerve when they’re a white belt. You’ve got to get them doing the movement properly first. As they start getting more familiar, as it becomes muscle memory, you can start getting them into the right places, and with practice the muscle memory builds. It becomes second nature, with the whole body moving so everything comes together at once.
It’s the same in cybersecurity. Instead of handing people a piece of information, you embed it into how they do their activity, so it becomes second nature. I’m not giving people a special extra thing to do. I’m getting people to integrate security into what they’re already doing.
I defy you, the rest of the security industry, to have a ninja scuba instructor come in and talk about how to protect your network, because we got him. Granted, he’s been on a couple of other shows and he has written a bunch of books, but are you kidding? I can completely trace the line between what you’re talking about and what we’re doing here. To that point, when you think about it, whether you’re going into the water or into the room. I’m a wrestler, so it’s always a room. I want to say dojo, but that may be the wrong word for it.
As you look out at your environment, you have your students. By that, I would assume we’re going to call the users the students, but then also the devices, which might be the tools that they use for their training. Do you consider there to be a difference in how you apply the approach, the stupid, the dummies, the awareness, to those two things? Is there a difference?
Devices and people are completely different. Devices are the environment the user operates in. When I look at that, fundamentally, like I said, we want the user to know what to do properly, but fundamentally, we want security to be built into the environment, into the systems that users are working with on a regular basis.
Do I care if a user knows what to do? I don’t give a damn if a user knows what to do. I only care whether the user does it. I don’t care if a user knows how to create a good, strong password. That’s the quote. I do, however, care that the user has a good, strong password, because the system should enforce certain criteria on the user, and/or, frankly, they should use a password manager. I don’t care if a user knows what a good password is. I don’t care if they can sit and take a test.
I care that they have a strong password. That is the only test that matters. If that password is strong, I don’t care how it got strong. The system should force the password to be strong. Frankly, password strength is pretty irrelevant these days. Very few, if any, attackers sit there and guess passwords or even have to run a password cracker against them. They’re usually sniffing passwords, phishing passwords or doing something else, but the system should enforce password strength in one way or another. Whether the users know why? That’s the cherry on top.
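The point that the system, not the user, should enforce password strength can be sketched in a few lines. This is a toy policy check with an assumed minimum length and a tiny sample blocklist; a real system would screen candidates against a large breach corpus (for example via the Have I Been Pwned k-anonymity API) rather than a hardcoded set:

```python
# Sketch: enforce password rules at the system level instead of testing
# whether users *know* what a strong password is. Rules are illustrative.

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}  # tiny sample

def password_acceptable(candidate: str, min_length: int = 12) -> bool:
    """Reject short or well-known passwords; accept everything else."""
    if len(candidate) < min_length:
        return False
    if candidate.lower() in COMMON_PASSWORDS:
        return False
    return True
```

Whether the user could explain these rules is irrelevant; the registration flow simply refuses anything that fails the check.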
We have to have passwords, though, just so at the end of the year, everybody can write their articles about the twenty lamest passwords of the year and everybody can point and laugh.
It makes people feel better, but frankly, even some of the stupidest passwords out there are strong enough, because the attackers are going to sniff passwords and download password files, and even if you have a nice, strong password, put it against a special-purpose cracking device and, no matter how strong it is, it’s going to crack eventually. It doesn’t matter. Either way, multi-factor authentication takes care of it. One of my pet peeves is people saying multi-factor authentication fails frequently. I’m like, “It does, but it makes attacks exponentially more difficult.”
Define frequently. If it fails 10,000 times in a year but it took 12 quadrillion attempts to get there, that’s a lot of work. You have a long and illustrious career that we have barely scratched the surface of, and you’ve been battling the bad guys, from external attacks to insider threats, either malicious or unwitting, as we’ve been talking about.
When you look at the landscape now, in your opinion, what is the bigger threat? Is it somebody coming in looking to do harm or is it someone thinking they’re doing their job the way they’re supposed to because they’re doing it the way that they were instructed and trained to do it or option C, something else?
There’s the concept of malicious versus malignant threats. I’ve written about this since my first book, Corporate Espionage. In general, everybody is afraid of the malicious, and I’ll give you one of the best examples. Anybody who remembers post-September 11th knows everybody was bending over backward, afraid of terrorists and things like that.
Soon thereafter, Hurricane Katrina came in and caused more damage than Osama bin Laden, in his wildest dreams, could have ever thought of. Even a nuclear bomb could not have caused more damage than Hurricane Katrina, from a damage perspective. I’m not going to talk about loss of life at this point, but you have got to stop and think. Malignant threats are things that happen that, frankly, we give very little thought to.
I was talking to people from Homeland Security around the same time. They were talking about train security and the transport of hazardous materials. They wanted to take the numbers off the sides of trucks and train cars that were transporting hazardous materials, because they didn’t want to give terrorists the opportunity to know which train car to blow up. They started proposing that, and it went pretty far until first responders started pushing back.
Trucks get into accidents on a daily basis. There are more train derailments than anybody ever knows about. If we take these numbers off the sides of cars, we are going to exponentially increase the threat to the public, because these malignant things, derailments and accidents, happen so frequently, and we need to immediately know how to react to those incidents. You are afraid of a terrorist, but terrorist attacks do not happen that frequently.
We have never had a case that I know of where a terrorist attacked a train car. With that in mind, they were endangering people by being afraid of the malicious while ignoring the malignant. I use the example of knights and dragons. Everybody’s out there inventing a dragon so they can be a knight, but the reality is, when you look at what causes death, we’ve never had a dragon cause a death that I know of.
At the same time, we have rats that spread the bubonic plague. Because you have rats and you have snakes, you need an exterminator. You don’t need a knight. All these little things people take for granted are annoying, but they cause more harm from a health perspective than any imaginary wild attack people have envisioned.
I would say, sir, ask the Lannisters and the Targaryens if they feel that a dragon has never caused a death, because I think they may want a word. When you use the word malignant, we look at this in terms of insider threats, and even taking it back to the notion that people still smoke: people don’t smoke because they want to get cancer, but they get cancer because they smoke.
When we look at the notion of malignancy inside an organization regarding awareness, training and people who aren’t necessarily that good at what they do, what can organizations do to put their people in better positions? I don’t want to use the word incompetence, but how do they neutralize their own weaknesses in order to let people be what they’re good at and do what they were hired to do, sitting in front of the machines that are protected by the latest artificial intelligence anti-malware, blah, blah?
Here’s another concept. When you look at how we can stop malicious threats, going back, I made the analogy without saying it: well-meaning users cause significantly more damage, in every study I’ve read, than malicious parties ever have. That’s a given, whether it’s somebody accidentally deleting something or accidentally doing other things. I’ve, unfortunately, created damage myself that, in theory, I would love not to have created, and you have all these little incidents that add up.
I did not cause a disastrous effect at the National Security Agency, but I did have a slight impact periodically on operations, like everybody else has at some point in time. Now, that aside, we also have to consider a couple of different things. In the first place, we have to do human security engineering, like I described, to put guardrails around all users, whether they are good, bad or indifferent. We need to say, “These people are trustworthy. These people are not trustworthy.”
Even if they’re trustworthy, we need to be careful about the capabilities we give them. By trust, I mean acting reasonably safely, and that’s a different type of trust than what most people think of: “That person’s bad. That person’s good.” I don’t care if a person is bad or good. What I care about is that even a person who is reasonably security-aware and practices good operational habits is going to create some form of damage every so often.
The reality, though, from the studies I’ve seen, is that you generally have something like 5% of people causing 90% of the damage, because some people have proven to be significantly more vulnerable than others for various reasons. As part of my doctoral work, I’ve done psychological research and found that some psychological traits make people more vulnerable to things such as phishing attacks.
There are some people, and this is true, to whom you could send a phishing message over and over. I was once doing an awareness program for a large company. We sent a phishing message; when people clicked on it, they got the training. The pretext was that it was a resumé a senior executive had asked to be sent to them. Some clicked on the message, got the training, then replied back to the phishing message and said, “There’s a problem with your resumé. Can you please resend it?” I was like, “Am I being punked?” They were serious.
There are some people who are like this. There are people who are highly cooperative. There are people who have depressive tendencies and want interaction even if it’s potentially harmful, which is, for example, why these Nigerian scammers are successful. Most of the time, they’re not successful because the people are greedy. They’re successful because the people that they are interacting with are lonely and value the interaction more than they care about the loss.
We have to look at this and say it is a legitimate concern to ask whether people who, as one example, are repeat clickers on very simple phishing messages should potentially be disciplined, or at least have permissions pulled back in some way. These people, for whatever reason, are more vulnerable and present a greater threat to the organization, even if they’re the most well-meaning people on the planet. We need to put some extra protections around them because they present more of a threat than an actual bad guy would.
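One hedged way to operationalize this idea, assuming you can export per-user results from a phishing-simulation tool (the data shape and threshold here are hypothetical, purely for illustration), is to tally clicks per user and flag the repeat clickers for extra guardrails rather than just more training:

```python
from collections import Counter

# Hypothetical phishing-simulation results: (user, clicked) per campaign.
results = [
    ("alice", True), ("alice", True), ("alice", True),
    ("bob", False), ("bob", True),
    ("carol", False), ("carol", False),
]

REPEAT_THRESHOLD = 3  # illustrative cutoff for "repeat clicker"

# Count only the campaigns where the user actually clicked.
clicks = Counter(user for user, clicked in results if clicked)

# Users at or above the threshold get extra protections (tighter mail
# filtering, reduced permissions), per the point made above.
repeat_clickers = sorted(u for u, n in clicks.items() if n >= REPEAT_THRESHOLD)
print(repeat_clickers)  # ['alice']
```

The threshold and the response (discipline versus pulled-back permissions) are policy decisions; the sketch only shows how cheaply the signal itself can be computed.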
You’re going to stop grabbing that hot handle sitting on the stove eventually if it burns you enough. This is a little bit of a hard segue, but it’s going to carry back. For our North American readers, it is NFL football season and we’ve seen a lot of things going on with concussions, so we’ve learned more about them than we ever wanted to.
One thing that we know, once you’ve got to one, you’re more likely to get another one. For an organization that has been breached, is that an appropriate analogy? Once you’ve been breached and have healed, is it the same? Does the scar tissue protect you or are you statistically more likely to get breached again once someone has gotten in?
Yes and no. If anybody wants some entertainment, they can google Ira Winkler and Syrian Electronic Army way back when. I say “cockroach” just for your entertainment. Basically, I would get called to investigate attacks by the Syrian Electronic Army. They were very prolific about twelve years ago, and I call them cockroaches.
They keep trying until they get in. They weren’t sophisticated in most ways, but they were persistent, and they were still highly successful. They would get in once, then put some back doors in. The well-known incident responders of the world would come in and clean up the organizations, and these people would be back the next day, because they got in through the same type of phishing messages and the underlying problem was never fixed.
People would call me in and I did what I call human incident response, which basically involves saying, “You clicked on a message. You’re a smart person. Why did you click on this message?” For example, I spoke to the CFO of a multibillion-dollar company. I’m like, “I’m going to assume you’re smart if you’re in your job position. This message is clearly a phishing message. Why’d you click?”
The guy was British, working in the United States. Everybody else got USA Today-themed phishing messages. He got a message from somebody supposedly from the UK office saying, “We were featured on the BBC. Click on this link to see the article,” so he went. I’m like, “Why didn’t you check the link?” He’s like, “I don’t know how to check a link.” I go, “You don’t know how to check a link on an iPhone?” He’s like, “No.” I’m like, “Your awareness materials never covered it?” They didn’t.
I was like, “Wow.” I looked at their awareness materials from the leading vendors out there, and nobody ever said in their three-minute videos how to check a link on a mobile device, so that’s why he clicked. We put an emergency awareness program together, but these cockroaches would come back and somehow constantly keep trying to get back in. That persistence is what led to their success.
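Checking a link mostly means comparing the organization a message claims to be from against the hostname the URL actually points to (on a phone, a long-press usually reveals the real URL). A minimal sketch of that comparison, with made-up example URLs; a real check should compare registered domains (e.g. via the `tldextract` package) rather than raw hostnames:

```python
from urllib.parse import urlparse

def actual_host(url: str) -> str:
    """Return the hostname a link really points to."""
    return (urlparse(url).hostname or "").lower()

def looks_suspicious(href: str, expected_host: str) -> bool:
    """Flag links whose real hostname is neither the expected host
    nor a subdomain of it."""
    host = actual_host(href)
    return host != expected_host and not host.endswith("." + expected_host)

# The message claims to link to the BBC, but the link goes elsewhere.
print(looks_suspicious("http://bbc.example-phish.com/article", "bbc.co.uk"))  # True
# A genuine BBC subdomain passes.
print(looks_suspicious("https://www.bbc.co.uk/news", "bbc.co.uk"))  # False
```

The point of the anecdote stands either way: the logic is trivial, but awareness materials that never show people *how* to surface the real URL make even this simple check impossible for them.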
Now, once you’ve been a victim, the attackers know they got in before, so they’re more likely to try again. At the same time, many organizations respond to an incident by becoming more secure and more aware. It cuts both ways, to answer your question, but unless you fix the root vulnerability, or the root lack of awareness in the awareness case, and put in multifactor authentication or a variety of other mitigating controls, you’re going to be hit again. Whether it’s by the same attacker or a different attacker, you’re going to be hit again if you don’t fix those things.
Unless you fix the lack of data security awareness and put in multifactor authentication or a variety of other mitigating controls, you will be hit again by the same or a different cyberattacker.
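As one concrete example of such a mitigating control, the time-based one-time passwords behind most MFA apps follow RFC 6238 and are simple enough to sketch with the standard library. This is an illustration of how TOTP works, not a production implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, t=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset taken from the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 Appendix B test vector (ASCII secret "12345678901234567890").
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # 94287082
```

Even when a phished password is reused, a code like this expires within the time step, which is why MFA blunts exactly the repeat-phishing pattern described above.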
That’s why I make the concussion analogy. You’re not going to get hit by the same car or the same football player, but it doesn’t mean you’re not going to get hit in the head. I feel you on that. We are creeping close to time and I have one more question before we get into Leadership Corner. As someone who operates in the world you do, as a writer, a speaker, a known personality, dare I say, a cybersecurity gadfly, man about town. I wanted to say gadfly more than anything else.
We’re coming out of CES, which was awful. We are rolling into the first of the major shows: South by Southwest, HIMSS, and RSA. All of these things are around the corner, whether you, I or producer Sharon will be attending any of them or not. How are you feeling about the industry now? We are supposedly on the other side of the pandemic. We’re trying to get back to these events, but what are we bringing to these events compared to where we were in the teens, leading up to the collapse of the world for two years?
I almost need you to be specific. Are you talking about, for example, the awareness industry or cybersecurity in general?
I’m talking about cybersecurity, and not just the parties and the food and all of that. This is where we all used to get together to talk about the next big thing. That’s where the cloud broke, and AI, and, God forbid, crypto and all of that.
In general, a lot of us are playing me-too and catch-up. There are a lot of the same things being handled and the same types of problems. CES is a great example of the problem with cybersecurity, and so is the whole Web3 thing. We didn’t get into that, but that’s a nightmare, where everybody is out there talking about all of these incredible capabilities, and I have no idea why people need to pay for a GIF of a monkey.
Your NFTs didn’t take off? You’re not retiring from your bored ape NFT from years ago?
I did not, nor would I ever purchase one at this moment. We’re sitting there seeing all this capability with nobody mentioning anything with regard to cybersecurity being built in, which is the same as when the web took off, but now we’re still seeing these incredible massive losses. Hundreds of millions of dollars are being stolen from crypto exchanges in a single theft. Crypto exchanges are going down, and everybody’s like, “It’s unregulated. We wouldn’t want the government involved.” At this point, if you lost all your life savings in a crypto exchange and the government didn’t have any ability to stop it, do you think that’s a good thing?
You wouldn’t want to call a cop if you got robbed?
I’m sitting here thinking, there’s this guy who goes by @NFT_God on Twitter, and he highlighted how he basically fell for a phishing scam. Somebody drained his entire account, sent messages to all his contacts and everything like that, basically destroying his reputation. Now he’s out there touting, “At least I’m here,” and so on, acting like an influencer. Not once in his Twitter feed did he ever say, “Maybe I should think about practicing better cybersecurity to protect this.”
Web3, don’t get me wrong. Fundamentally, there are some good things about Web3, potentially. Not the hyped-up parts, but there are good things there. Web3 as a whole, though, is built on a house of cards. It’s like taking a handheld safe, putting it on the shore of the ocean, and waiting for the ocean to drag it away. Maybe you’ll say, “They never got into the safe.” It doesn’t matter. You don’t have your safe anymore. That’s the problem with a lot of these technologies. I see some things coming out of CES, but where’s the security embedded in them?
I’ve got to call this one because we could go for another hour and I’m already getting the frantic waving from producer Sharon: “Save it. We’ve got more to talk about.” I will say this: if the most exciting thing coming out of CES is a toilet that reads your pee and tells you everything about you, that better be secure, because you are uploading all of my biological information into the cloud and I don’t have any control over that. That’s how lame CES was.
Let me play devil’s advocate here. Let’s say somebody finds out you have high glucose or something. I was going to say, let’s say somebody finds out you have marijuana in your system or whatever it might be.
It’s because of the 1978 cult.
Does it matter? People are looking at, again, the malicious versus the benign. I start sitting here thinking, “Somebody’s going to get hold of this.” Do I want somebody to understand my biochemistry? The answer, frankly, is probably not, but what is the actual risk to me if it does happen? We’ve got to think about every technology not in terms of whether it sounds bad, but in terms of the fundamental underlying risk of something happening. Maybe somebody wants to kill you, they find out you’re diabetic, and all of a sudden they slip a whole bunch of sugar into your food or something. I don’t know.
I don’t want somebody 3D printing my genetic information because they got it from my connected toilet, but that’s my own tinfoil hat.
Think about whether something actually is bad rather than just sounds bad. I realize I wouldn’t care about Web3 from a security perspective if people were not losing billions on a regular basis, being irresponsible, and never saying anything beyond, “I lost billions, but don’t give up on it.” It should be, “Now’s a good time. Let’s talk about basic cyber hygiene before I start trying to get you to buy a new NFT.”
All right, because we could do another hour just on that. Let’s do a quick turn into leadership corner because you, as we have learned, are a pretty fascinating dude. When you’re not doing this stuff, what are you listening to? Is it vinyl, Spotify or Apple Music? Do you sing out loud? Do you have books in the bathroom or magazines on the coffee table? What’s happening in Ira’s house?
Generally, I have a lot of books I don’t read. Anybody who’s smart probably has some, I think. Every so often, I listen to motivational business books while driving in the car, or whatever the latest pop music is on the radio.
I was seriously hoping you dropped something like, “I enjoyed side two of Beastie Boys, Paul’s Boutique,” but you never know. Everybody’s got their own thing.
It’s on the radio, maybe.
There we go. Shameless plugs, we know that you’ve got some things going on in the springtime. We have mentioned the books, but please feel free to mention them again. Anything that’s happening in the world, you want to shine a little bit of light on. This is my turn to be quiet.
First off, as you said, I currently serve as Field CISO for CYE Security. Everybody should go to CYESEC.com. I like what we do because, frankly, I’ve always said, and we didn’t get into this, that the problem with most cybersecurity programs is they get the budgets they deserve, not the budgets they need. They need to learn how to deserve what they need. We have an incredible risk-based system that applies machine learning, not as a buzzword or a marketing tool, but as a real feature, to help people determine what budget they need, what controls they need in place, and how to mitigate all the vulnerabilities out there.
The problem with most cybersecurity programs is they get the budgets they deserve, not the budget they need. They need to learn how to deserve what they need.
Frankly, it’s an Israeli company that has Israeli nation-state capability embedded throughout the entire company. That’s my day job. Since this show is about the human factor, I also created a free program: I worked with DevOps Institute and created a course on how to be a security awareness manager. If you go to DevOpsInstitute.com, you should be able to log on and find it. Everybody should buy my wonderful books: You CAN Stop Stupid, Security Awareness For Dummies, and Advanced Persistent Security, which is still out there.
I’m also a co-author of Cybersecurity All-in-One For Dummies, which might be coming out soon. The other thing is I’ll be speaking at Cyber Tech in Israel and giving a workshop at RSA Conference, which was my only acceptance there, plus BSides Cayman Islands, as I mentioned, and a CISO forum in Los Angeles in February. Those are the big things.
You had me at BSides Cayman Islands. That one I will work hard to get to. Also, not for nothing, our man is eminently Googleable. If you google Ira Winkler, you will see this beautiful haystack of white hair and this grinning mug bringing you not just the information but the attitude to go with it. We didn’t even get into all of it, Web3 or any of this stuff. Please consider this the official invitation to come back for the next round of conversation about all of that.
Remember, google Ira Winkler and SEA, or cockroach, and you’ll be entertained too.
I like the way he says cockroach. All right, that’s it. Once we say cockroach for the seventh time, that means it’s time to go. Thank you for joining us on Friendly Fire. A friendly reminder that all the comments reflected are the personal opinions of the participants and not necessarily those of their employers or organizations. We are happy, though, that they come and join us on Friendly Fire to talk about all this stuff.
For all the information that’s good in the world of cybersecurity, make sure you check us out on LinkedIn and Facebook. The mothership is ElevateSecurity.com. You can find me @PackMatt73 across all the socials, and the show is wherever you go; that’s where we are. The only thing we ask: subscribe, rate and review, and you will never miss out on all the great folks who are coming in. I cannot guarantee we will have a scuba-instructor black belt, but we’re going to try, because we’ve got to keep going. Right after Ira, we’ve got to find somebody else this cool. Until then, we will see you on the next episode.
- LinkedIn – Elevate Security
- Facebook – Elevate Security
- LinkedIn – Ira Winkler
- Security Awareness For Dummies
- You CAN Stop Stupid
- Spies Among Us
- Zen and the Art of Information Security
- Through the Eyes of the Enemy
- Corporate Espionage
- Advanced Persistent Security
- Cybersecurity All-in-One For Dummies
- @PackMatt73 – LinkedIn
About Ira Winkler
Ira Winkler, CISSP is the Field CISO for CYE (pronounced Sigh) Security, former Chief Security Architect at Walmart, and author of You Can Stop Stupid, Security Awareness for Dummies, and Advanced Persistent Security. He is considered one of the world’s most influential security professionals and has been named a “Modern Day James Bond” by the media. He earned this by performing espionage simulations, where he physically and technically “broke into” some of the largest companies in the world, investigated crimes against them, and told them how to cost-effectively protect their information and computer infrastructure. He continues to perform these espionage simulations, as well as assisting organizations in developing cost-effective security programs. Ira also won the Hall of Fame award from the Information Systems Security Association, as well as several other prestigious industry awards. CSO Magazine named Ira a CSO Compass Award winner as The Awareness Crusader. Most recently, Ira was named 2021 Top Cybersecurity Leader by Security Magazine.
Ira is also the author of the riveting, entertaining, and educational books Advanced Persistent Security, Spies Among Us, and Zen and the Art of Information Security. He also writes for a variety of online sites, including RSA Conference, DarkReading and ComputerWorld, and for several other industry publications.
Mr. Winkler has been a keynote speaker at almost every major information security-related event, on six continents, and has keynoted events in many diverse industries. He is frequently ranked among the top speakers, if not the top speaker, at these events.
Mr. Winkler began his career at the National Security Agency, where he served as an Intelligence and Computer Systems Analyst. He moved on to support other US and overseas government military and intelligence agencies. After leaving government service, he went on to serve as President of the Internet Security Advisors Group, Chief Security Strategist at HP Consulting, and Director of Technology of the National Computer Security Association. He was also on the Graduate and Undergraduate faculties of the Johns Hopkins University and the University of Maryland. Mr. Winkler was previously elected the International President of the Information Systems Security Association, which is a 10,000+ member professional association.
Mr. Winkler has also written the book Corporate Espionage, which has been described as the bible of the Information Security field, and the bestselling Through the Eyes of the Enemy. Both books address the threats that companies face protecting their information. He has also written hundreds of professional and trade articles. He has been featured and frequently appears on TV on every continent. He has also been featured in magazines and newspapers including Forbes, USA Today, Wall Street Journal, San Francisco Chronicle, Washington Post, Planet Internet, and Business 2.0.