Cybersecurity is a high-stakes card game, and sometimes, it’s not about winning every hand but mastering the game as a whole. In this episode, we welcome Allison Miller of Cartomancy Labs for an illuminating discussion on “The Theoretical Floor” and why resilience is the secret sauce to defending against modern cyber threats. She expounds on the world of human risk, shedding light on how we can redefine our understanding of cybersecurity by looking to lessons from fraud prevention. Join us as we dissect the challenges faced by cybersecurity professionals, the evolving trends in the industry, and how we can better protect ourselves, both as individuals and as organizations.
—
Listen to the podcast here
The Theoretical Floor And The Importance Of Resilience With Allison Miller Of Cartomancy Labs
In this episode, along with the next few episodes, I’m excited to bring you perspectives on solving the human risk problem from the trenches: practitioners’ lessons learned, best practices, and hard-earned wisdom from folks solving this problem. I am very excited to be welcoming Allison Miller. Allison is the Founder and Principal at Cartomancy Labs, an advisory firm that guides teams in innovating and solving problems anywhere that people, money, and technology mingle.
With decades of experience at the intersection of cybersecurity, fraud, and abuse, Allison is known for implementing real-time risk prevention and detection systems running at internet scale and has a proven track record of building and protecting customer-facing platforms and services, both B2C and B2B. Before establishing Cartomancy Labs, Allison was the CISO and the VP of Trust at Reddit where she led cybersecurity, privacy, risk, and safety teams.
She has held technology and leadership roles in security, risk analytics, and payments/commerce at Bank of America, Google, Electronic Arts, Tagged/MeetMe, PayPal/eBay, and Visa International. That’s quite the Rolodex. Allison speaks internationally on security, fraud, and risk, sits on the faculty at IANS, and has been recognized by SC Media as a power player in IT security. That’s an understatement, to say the least. Allison, welcome. It’s so good to have you on the show.
Thanks, Masha. This is going to be a good conversation. Thanks for inviting me.
It’s my pleasure. We started the early parts of this conversation when we were at Black Hat over a happy hour, and it was so robust that we decided it was time to sit down and bring it to light. Before we get into some of those pieces, you have worn many different hats across many giants like Bank of America, Google, and Reddit before founding Cartomancy Labs. Can you walk us through that journey a little, especially how you landed as a founder and principal?
It is a wild and weird journey for sure. As an undergrad, I was studying business, finance, and economics. I was interested in digital money, commerce, and how technology was going to play an even bigger role in helping customers but also how things could go wrong. I entered my career wondering where I could work on the juiciest problems. In the early years, I shifted from one environment to the next, trying to find new types of risks and things to explore. In the process, I ended up jumping back and forth between payments, fraud management, technology, and cybersecurity. I kept going back and forth.
One of the interesting things that happened is, in some cases, I was a fish out of water. I would be working in enterprise cybersecurity, but I was glancing lovingly over transactional risk management – or vice versa. In some cases, I was explicitly a bridge between those two worlds. As I was thinking about how I wanted to move my career next, I realized there are a lot of organizations that seem to need someone who can act as a bridge or help speak to one of these disciplines in the language of another.
That’s where Cartomancy Labs comes from. I’m seeing a lot of cases where there are fraud teams, for example, that are looking at customer authentication as a potential solution, which is a traditionally security-related technology. On the other hand, I’m seeing cybersecurity teams being asked to go broader and step into the anti-abuse space because, in some ways, it is an extension of product security.
These days product security is more than strictly application security. We are asking more questions than “Is there a bug that someone can exploit?” It’s still true that we want to answer those questions around AppSec, but if you consider the actual legitimate use of the application, legitimate use can be bent and twisted in weird ways that lead to outcomes that are not great for customers and the business behind them – that’s where product security is going. And that’s where my eye goes when I enter a system.
I like the perspective of, “What are all the angles we can explore, not just legitimate use?” It depends on the motivation of the actor behind it, and it shouldn’t be siloed across different teams. Bridging between fraud, risk, and security, there are a lot of lessons learned that we can share among these teams. Before we go too much further into security, can you explain the name behind Cartomancy Labs? Where did that one come from?
Cartography is the study of maps. Mapping out the abuse, fraud, and security ecosystem has been something I have worked on a lot. I’ve also mapped out data flows because a large part of my career has been spent on quantitative methods for detecting risky behaviors. This idea of working with maps has been something that has followed me through my career. I was Google’s Underground Cartographer. I took that title with me over to Reddit to a certain extent. I love maps.
The suffix “mancy” usually means the magic of, but Cartomancy doesn’t mean the magic of maps, even though in my heart, there is a lot of magic in maps. Cartomancy means magic with cards. That’s an homage to all of my time spent in payment and billing fraud. A lot of that is focused on credit, debit, and prepaid cards. So my time in the card industry is also recognized in that little turn of phrase.
Thank you for walking us through that. I’m curious about how your diverse experiences across different companies and sectors shaped your view on the human element of security. Where do you find the most innovation and future thinking happening in the sectors that you’ve had an opportunity to work with?
Pushing the bounds of data science is the bread-and-butter in the world of fraud detection, or quantitative risk management, and it is something that is being used more in security programs to try and make problems manageable. Since we can’t stop talking about AI, I do think that there’s still a lot of innovation to happen with machine learning, AI, and different algorithms and such.
Pushing the bounds of data science is the bread-and-butter in the world of fraud detection or quantitative risk management.
That’s one place where I see innovation. As far as which industry, that is hard to say, but I still like video games. I do think that high-tech and social media are places where there’s a lot of innovation, mostly because those are the companies that are masters of a lot of this advanced data technology. Their ability to continue to customize product offerings to meet customer needs is interesting.
The gaming industry has been a huge role model in my work around gamification and positive psychology. They figured out, to the tune of billions of dollars, how to get people to want to do something instead of having to. There are so many lessons from a business perspective around how games are built that security can learn from because, to your point earlier, we have applied them to our product, and there is a lot of overlap in how we think about the human element. Switching to the next question, let’s start with the Dark Ages question. Given all the hats you’ve worn in the past, what’s been your experience with traditional security awareness practices? Where have you seen them fall short of their promises?
The goal of a security awareness program isn’t to have a security awareness program. And while we want to set success metrics around a program, typically, what happens is folks in year one see massive improvements, and then they’re like, “What can we do to continue to show improvement?” The focus shifts to the awareness program as opposed to the real-life outcomes that we’re trying to get to. That’s one place where the method becomes the goal.
The other thing that I have noticed is that you can’t divorce an approach to successful security awareness from the culture at large. If you are in a very rigorous, regulated, and serious security environment, an awareness program probably will become another compliance obligation. You can make it as fun as you can within the bounds of that culture.
On the other hand, if you’re deploying a security awareness program in a highly unstructured, very social, and rambunctious culture, you need to tune the program to that. It can’t be a compliance check-the-box. You won’t get the uptake that you need to be successful. It’s great when any awareness activity, security or otherwise, can help amplify the culture, reinforce the culture, and be contextually appropriate to the culture.
I’m so glad you brought that up because when people say, “security culture,” every one of us has such a different idea that pops into our heads based on the water we swim in at our own organization. When I worked for the government, you got fired if you crossed the line. In the security culture of a tech startup, you get put on the wall of shame if you leave your computer unlocked, and you buy drinks at the next happy hour. There are very different cultural components of it, but understanding what you’re building toward is important.
Understanding what you’re building towards is important as far as company culture is concerned.
As far as company cultures go, my favorite resource in this space is the Competing Security Cultures Framework. If you Google it, it’s a wonderful ten-question questionnaire that lets you map what your organization is from a security culture perspective and start putting words to it so that you can communicate with other people. I wanted to go back and ask a couple of questions about this. You mentioned how the awareness program, or the technique of awareness, becomes its own destination. In organizations where you’ve seen it be useful and get to some of the outcomes that we all hope our programs might achieve, what have you seen those programs do differently?
In a very straightforward way, those are the programs that encourage people to report, or take a positive action, as opposed to getting overly indexed on something like click rate, or “We will continue to train you until you stop clicking.” The value is in creating muscle memory. I heard about a security incident that happened recently – actually, it would be considered a breach. In the write-up of the situation, it was described that someone messed up with 2FA. They accidentally allowed the attacker to leverage their authentication credentials, but they figured it out.
Their spidey sense kicked in. Instead of not knowing where to go, or being afraid to report it because they had messed up (which, if you hadn’t practiced this, you might have thought meant you were going to get in trouble), they reported it immediately. The contagion was kept to a minimum because the responders were able to respond very quickly after that initial point of entry had been established. That was a year’s worth of security training and awareness for the general population that paid off. That investment paid off in reducing the downstream effects of an incident like that.
If you hope that you will never have to deal with failure, and that’s your whole plan instead of practicing it, the only guarantee is that you’re going to fail. Making it safe to fail and psychologically safe to report without punishment is such a crucial component of any security culture.
All resilience depends on practice because if you’re planning to not have any accidents, errors, or failures, you’re going to be out of luck, and then resilience is about getting back up after something’s happened.
I’m going to butcher this quote. It’s, “In moments of crisis, we don’t rise to our greatest potential but we fall to the level of our training.” That’s true for security awareness too. I wanted to move a little bit into some of your fraud experiences. I would be curious if you could help define human risk in the context of cybersecurity and fraud prevention. We have been using security awareness as a term in the time we have been talking, but I would love to get a broader perspective from you on how you define human risk in the context of both cybersecurity and fraud.
Human risk as a term has come into vogue to replace insider threat, which makes me happy because insider threat, or even insider risk, sounds bad. It sounds malicious. It sounds like insider trading. Using “humans” serves to humanize the concept a little bit, and “risk” is expansive enough to include malice but also mistakes and errors, which are a big part of what we are trying to guard against. Human risk doesn’t paint the actor as just a problem but as someone to be protected. That’s my thinking on what human risk encapsulates.
One of the reasons why I like talking to you is because when we’re talking about human risk in an enterprise cybersecurity program, it usually means a lot of insider issues related to risk, but I find so many parallels between that internal paradigm and what you’re thinking about when you’re thinking about customers. Customers are also human. Even when the customers are businesses, there are humans at those businesses, I promise. Some of the things that you might consider for your internal humans, you might also want to consider for your customer humans.
For example, both are prone to getting phished. Both could get spear phished. Both could be victimized in a social engineering attack. When you’re thinking about customers, then you have some of those product risks we were talking about before. Even if your product is being used as advertised, “WAI” is the acronym I like, Working As Intended. It still could lead to not great outcomes for them. We can be expansive in how we talk about that problem set and that problem space. It’s a problem for humans that we care about.
There are a lot of places for us to go. I want to go back to talking about human risk being an extension of insider risk and insider threat. When we think about the problem as an insider threat, “Give them more training. If they are malicious, fire them,” that’s often the approach, historically speaking. We had two options, which were far too ineffective for some problems and far too heavy-handed for others.
I’m curious. When you were in your role as a security leader and CISO, did you have other things in your Rolodex for thinking about, “This person is simply more vulnerable”? This is a more expansive approach to human risk and insider risk. This person makes more mistakes. They are more likely to fall for that, but that doesn’t mean they’re malicious, which is the traditional insider threat framing. From your perspective as a leader, and for other leaders, how might you manage a problem set like that in a way that doesn’t involve cutting your employees off at the knees and firing them without much empathy for their mistakes?
Everything comes back to IAM. If you take a risk-based approach, that’s why I like the term human risk. It suggests we could take a risk-based approach to some of these problems. Some teams are more likely to be targeted for certain types of attacks. For example, if you’re talking to people on the phone or if you’re talking to customers, there are certain types of things that folks may try with you if you can move money or approve large spending. The attackers are smart. They’re going to try and make their lives easier by going to where they can get what they want.
When we bring it back to technology, we have the whole class of folks who have privileged access. In a lot of cases, privileged access means they have administrative-level privileges in a production environment, but privileged access could mean a little bit more than that. It could mean these are the people who can run the batch job that decrypts the things that should never be decrypted. It could mean they have access to the most sensitive of customer data for doing analytics for perfectly rational reasons. There are different variations of what privileged access can mean, and we can be sensitive to those variations.
They may not even be more prone to mistakes. They may be less likely to make mistakes, but what they have access to is so valuable. There’s a potential that they would get hammered on by folks willing to play the long game, not a quick phishing-in-the-moment mistake but a concerted attack. In addition to training, we want them to be aware and know what to do if something happens but also see if we can’t create technical buffers, like different things we can do with access management or segmenting environments. Maybe they have to hop from one bastion host to another before they get access to the thing, but it’s a nice little buffer. It helps protect them, their access, and their assets.
The analogy in my head is a bulletproof vest or protection going into battle. Certain people are simply going to go into riskier places. They might get shot at more often, and because of that, there’s a little bit more friction. They can’t move as freely, but it is because there is simply more risk in the work that they’re doing. You do need more armor, and it may slow you down from a capability standpoint, but it is also going to keep you safe and keep you working and alive. Thinking about similar analogies, we might be introducing more friction, but it is because this person is more privileged or is being attacked more frequently.
We did a similar analysis for folks at Elevate Security and found that our CFO not only receives more phishing emails, which makes a lot of sense, but the number of foreign IP addresses trying to connect to some of his accounts was surprising. It’s a thousand times more than some of the lesser-known engineers on our team. Realizing that different people have very different risk profiles, regardless of their actions or what mistakes they make, helps us think about how we should be armoring certain individuals better than others.
Part of what you’re describing tickles me a little bit. When we talk about human risk in cybersecurity, a lot of times we’re talking about things that are happening inside the shop. But when we are trying to get the full picture of human risk, someone’s LinkedIn account, Facebook account, X account, bank accounts, or emails, the things that are off of the platform and that the cybersecurity team is not responsible for securing, are also potential pieces of the puzzle that go into how someone could be attacked.
The point that you made earlier is that it’s out of the scope of security teams, but it is not out of the scope of what is accessible to an attacker. It’s more work than we have time or resources for. I want to switch topics a little bit. You and I have had the opportunity to talk about the concept of the theoretical floor. It’s a concept that you were telling me exists in fraud but can also be applied to security efficacy, including awareness. Can you tell our audience a little bit about what the theoretical floor means in the context of both fraud and security as you think about it?
I’m not sure how common it is, but it was a term and a project that I got to be a part of that was helpful. In the world of fraud management, let’s say you have a scoring system. As every transaction comes in, you’re scoring it, and you’re deciding if you’re going to allow that transaction to go through, if you’re going to decline it, or if you’re going to let it go through and investigate it later.
The way these models work is there’s a trade-off. The more that you decline, the less fraud you have, but you are going to make mistakes, and there are good transactions that will get caught in there too. As you’re trying to optimize that curve, you can find an optimal decline rate, but you’re optimizing for something. You’re optimizing for your fraud rate or your payment volume and revenue. You’re making a choice in there.
You only have so much latitude because the model is as powerful as it is, but you can choose that trade-off between how much you’re willing to let walk out the door versus how many good customers you’re going to disrupt. That was important to know as our team was trying to figure out how to tamp down on fraud because as you tamp down on fraud, you need the business to continue making a lot of money. As we were investigating, we could pick it apart and get it down by doing this, that, and the other, but at some point, the investment is going to be bigger than the value.
What we were trying to figure out is the theoretical floor. What is the theoretical lowest risk rate, fraud rate, and loss rate we could conceivably ever get to? A basis point is a percent of a percent. A hundred basis points is 1%. What is the number where, if we got one basis point lower, it wouldn’t be worth it? We were able to calculate that theoretical floor. Thinking about that idea in cybersecurity is mind-blowing because there’s that adage that I hate, which is, “Defenders have to get it right every time. Attackers only have to get it right once.” That’s the game. I hate that so much, especially coming from fraud because, in fraud, each game is a hand or a card. It’s not the whole show.
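To make that trade-off concrete, here is a minimal sketch of what a theoretical floor calculation might look like. It is not Allison’s actual model; the fraud base rate, transaction value, margin, and score distributions are all invented assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic transaction scores: higher score = more likely fraud.
# Assumed fraud base rate of 0.5% (50 basis points), illustrative only.
n = 200_000
is_fraud = rng.random(n) < 0.005
scores = np.where(is_fraud,
                  rng.beta(5, 2, n),   # fraud skews toward high scores
                  rng.beta(2, 5, n))   # legitimate traffic skews low

avg_txn_value = 60.0   # assumed average transaction value ($)
margin = 0.03          # assumed profit margin on a good transaction

best = None
for threshold in np.linspace(0.50, 0.99, 50):
    declined = scores >= threshold
    # Fraud we let through vs. good customers we turned away.
    fraud_loss = avg_txn_value * np.sum(is_fraud & ~declined)
    lost_profit = avg_txn_value * margin * np.sum(~is_fraud & declined)
    total_cost = fraud_loss + lost_profit
    # Residual fraud rate in the approved volume, in basis points.
    floor_bps = 10_000 * np.sum(is_fraud & ~declined) / np.sum(~declined)
    if best is None or total_cost < best[0]:
        best = (total_cost, threshold, floor_bps)

print(f"optimal threshold ~{best[1]:.2f}, residual fraud ~{best[2]:.1f} bps")
```

Under these assumptions, the residual fraud rate at the cost-minimizing threshold is the “theoretical floor”: pushing one basis point lower would destroy more good-customer profit than the fraud it prevents.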
There’s a point at which investment becomes irrational. That’s the economist in me. I wish there was a way to get to something so quantitatively clear when it comes to cybersecurity, but it’s very rare for me to find places where we get there. There are little places. I don’t know if this still happens, but you used to be able to tune your biometric sensor if you were using biometric room access or building access.
You could set the sensitivity of the thresholds to trade off false positives and false negatives, and you can sometimes tune your SIEM system to a certain extent, but it doesn’t have the same feel of a business trade-off, or an error rate where the costs of getting it right or getting it wrong are clear. We don’t have that. We still have that idea of one thing slipping through and ruining everything. I wish we could design for the idea that some things are going to slip through.
We were talking about that in the context of something like phishing simulations. Let’s say you’ve been whittling away at it and your click rate is at 1% (although we should be using the report rate instead). What do you do? Do you make the tests easier? Do you start doing phishing simulations every day, ten times a day? How do you make it better? Should that be the goal?
Maybe you recognize that 1% or 2% is the click rate. That’s the floor, and you’re going to now refocus your strategy on making outcomes better somewhere else, “It’s 2% of the general population. We’re going to work with accounts payable and get them down to 2%.” When do we say, “This is our goal. We have hit it, and now we’re going to solve some different problems or more targeted problems, looking at it from a risk basis.”
That would be such a valuable mindset for us to have in security across multiple parameters like phishing simulation. One of the things that we regularly see in our datasets with Elevate Security customers is that, on average, you’re going to get to about a 5% click-through rate, and then it’s going to hover there. The people who keep clicking on repeat phishing simulations are going to be immune to more simulations and more training. Instead of trying to drop the rate to the next point, it is a better use of your resources and time to think about other ways of mitigating it.
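As a back-of-the-envelope illustration of that reallocation decision, here is a sketch comparing the expected savings of pushing the click rate one more point against the cost of getting there. Every figure in it is an invented assumption, not Elevate Security data.

```python
# All figures below are illustrative assumptions, not real data.
employees = 2_000
phish_per_employee = 12           # assumed real phishing emails reaching inboxes per year
incident_prob_per_click = 0.001   # assumed chance a single click becomes an incident
avg_incident_cost = 150_000.0     # assumed cost of one incident ($)

def expected_annual_loss(click_rate: float) -> float:
    clicks = employees * phish_per_employee * click_rate
    return clicks * incident_prob_per_click * avg_incident_cost

# Assumed cost of the extra simulations and training needed to move
# the whole population from a 2% to a 1% click rate.
program_cost_for_next_point = 120_000.0

savings = expected_annual_loss(0.02) - expected_annual_loss(0.01)
print(f"expected savings ${savings:,.0f} vs. program cost ${program_cost_for_next_point:,.0f}")
if savings < program_cost_for_next_point:
    # Below the floor: the next point costs more than it saves, so
    # reallocate to targeted groups or other controls (MFA, reporting).
    print("reallocate the budget")
```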
This is where I love your analogy of an attack being a card game. For those 5%, the attackers are going to win that hand, but then how do you get them on the next hand? That’s potentially the better use of resources. For those 5% of employees, you may not win the phishing hand, but you can win on the MFA hand, the access hand, or whatever else you’re going to use as your defense in the whole kill chain that’s happening.
You’re thinking about it holistically and saying, “My dollar spent on this prevention will give me much greater yield than if I applied it to this intervention.” The thing is, it’s going to be a combination. For a subset of your employees, phishing simulation is going to be wonderful and a very cheap approach up to a point, and then it gets more expensive along the curve that you were talking about.
I do wonder, if we take this analogy outside of phishing simulation, whether folks have ever done this in the context of DLP. Do we block or alert? Do we prevent this transaction from happening and stop business? Do we let it go through and try to recover after the fact? It depends on what you’re trying to protect and how good your fraud detection capabilities are. The analogy in security is your capability to detect sensitive data mishandling. These are multiple parameters in the space, but we’re still quite nascent in security here. An analyst once said, “We wanted data loss prevention, and we got DLP instead.” We could use a lot of lessons learned from the fraud space around this. It seems like a very timely area to be thinking about this theoretical floor concept across multiple parameters.
Thanks. I do think it’s interesting. To your point on DLP, any detection technology is tunable. It has a false positive rate and a false negative rate, and you can play with what you can play with. Most security teams are, to a certain extent, making decisions on whether to alert on something or not: “Raise a high alert on this, or stuff it into a SIEM and let some heuristic or machine learning raise it to us some other time after some other data is considered.” But the thing you mentioned, whether we should block it and stop business, is a huge difference in the way that cybersecurity teams and fraud teams work.
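One way to picture that difference is a two-threshold policy: a security-style alerting band and a fraud-style real-time blocking band sitting on top of the same tunable score. This is a hypothetical sketch; the thresholds and scores are invented for illustration.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    ALERT = "alert"   # security-style: let it through, investigate later
    BLOCK = "block"   # fraud-style: decide in real time, stop the event

# Hypothetical thresholds: moving them is exactly the false-positive /
# false-negative tuning described above.
ALERT_THRESHOLD = 0.60
BLOCK_THRESHOLD = 0.90

def decide(risk_score: float) -> Action:
    """Map a detector's risk score in [0, 1] to an action."""
    if risk_score >= BLOCK_THRESHOLD:
        return Action.BLOCK   # confident enough to risk disrupting business
    if risk_score >= ALERT_THRESHOLD:
        return Action.ALERT   # not confident enough to block; queue for review
    return Action.ALLOW

for score in (0.20, 0.70, 0.95):
    print(f"{score:.2f} -> {decide(score).value}")
```

Pushing BLOCK_THRESHOLD toward 1.0 recovers the alert-everything posture most security teams live in; pulling it down toward the alert line is the fraud team’s real-time high-wire act.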
Fraud teams are making decisions in real time. That’s a lot of pressure. It means there’s a lot of investment there though so that they can have reliability. The Law of Large Numbers does help them out, but they’re deciding at a particular point in a flow. They have usually been collecting data all along. They have historical data and all these things. They’re making one decision.
All cybersecurity technologies are point decisions across a surface and maybe even a solid because it’s not one nice line and a point on it. It’s a surface with points everywhere. If all of those technologies are detection technologies and things that blink in the night, or if all of them could make a decision, it would be a mess. It’s very rare for folks to try and make those blocking decisions in real time in a complex environment. A company might do this in and out of a web gateway. Traffic is allowed out. Traffic is allowed in. They’re happy. They’re sitting at the gateway.
All cybersecurity technologies are point decisions across a surface. It’s not one nice line and a point on it. It’s a surface with points everywhere.
They have turned a surface into a point by routing all the ingress and egress through this gateway. Typically, they don’t necessarily make the fancy decision. They are typically using lists and heuristics. Maybe they’re updated regularly, but taking actions in real time like that is a high-wire act without a net, even when everything is connected. Given the complexity of the enterprise cyber landscape, we’re going to continue to drown ourselves in alerts because it’s risky to be blocking business.
Someone introduced me to the concept of having our databases be systems of record and eventually moving them to become systems of intelligence. Instead of a point in time, how do we connect all of our points into making contextual decisions, which is where the intelligence piece comes in? That requires a level of integration, coordination, history, and analysis that we as a security industry are still catching up with. We’re still putting out fires, let alone making very intelligent and multidimensional decisions at a point in time, to your earlier point. We will get there.
It requires lots of data scientists. That’s expensive. It requires developers and data scientists. They’re as expensive as cyber people. When you have a cyber data scientist-software developer combo, that’s bananas expensive. Those are big investments. That would be amazing. Every defender feels deep joy when they think of a place where all of the alerts could go and be consolidated, and intelligence could be generated out of that, but it would be so expensive. We see vendors trying to step in and help pull the pieces together. We’re going to continue to see one tool to rule them all and all of that, but I’ll let them innovate. Innovations tend to sort themselves out.
That seems like a great segue to my next question for you. Are there any emerging trends or developments in the field of human risk management or cybersecurity at large that you’re keeping an eye on?
It’s hard to look away from AI. It’s an unsatisfying answer to the question because that has been the answer to the question for the last few years, with a brief segue of, “Maybe blockchain could help us.” That came and went. I don’t even want to say it because I know it’s a thing. I just don’t like it as a thing. I’ll call it consumer authentication. I’ve heard it called Customer IAM, which I don’t like.
Customer authentication and bringing some of the security concepts to customers are happening. The way is being led through authentication, which is interesting because there have always been some customer or consumer-specific solutions out there, but now, we see the folks who are making advancements in enterprise authentication, which can be very rigorous, and trying to see if those frameworks can be externalized. We will see how that goes, but that’s something interesting because we see this in security sometimes. We see the externalization or consumerization of what started as enterprise tech. Given my interest, I’m always interested in what we’re feeding consumers or giving them to protect themselves.
I’m curious to start watching this. I promise I won’t call it consumer IAM. To wrap us up, tell us a little bit about what you’re reading or listening to.
It will probably be no surprise to anyone who has heard most of this conversation. I love scams, and there are lots of podcasts and documentaries on this topic. I also like MLM, or multi-level marketing, content. There’s a lot out there to review, but that’s something that I find interesting.
Is that like Catfish and that kind of drama?
There’s The Tinder Swindler and Bad Vegan. Those are weird, but there are a couple of documentaries on some Multi-Level Marketing things that have occurred that are good for listening. It turns out there’s a whole crew of YouTubers and podcasters who are in there. Scams, MLMs, and cults are all super interrelated. I consume all of that content.
Then, something completely different. I’m also reading and listening at the same time to The Artist’s Way by Julia Cameron. For folks who aren’t familiar with that book, it’s interesting. The Artist’s Way is about recovering your creative spirit. I’m working through the book with a group of friends. All innovators trade on their creativity. It happens to be creativity applied to a business problem, a technology problem, or something like that, but we are all creatives and we all get blocked. I’ve read the book before, but I’m working through it a little more seriously now to see if I can unleash some of that creativity back onto the world. It’s a good read.
Most folks in the security space probably could use a little boost in creativity. The more outside the box we can think, the more advantage we have against some of the new problems we log into every day. I’m adding that to my to-read list. Thank you.
It’s a good one.
If folks want to follow the work you’re doing in the world, where can they do that?
I’m primarily haunting LinkedIn. Come find me. I’m Allison E. Miller. We’re the only Cartomancy Labs out there. Let me know if there are any problems you’re working on that you would be interested in getting some insights from me or just to say, “Hey.”
Allison, thank you so much for joining us on the show. To folks reading, for more of what’s good in the world of cybersecurity and human risk management, make sure to check us out. You can find us on LinkedIn. Please subscribe, rate, review, and never miss one of these episodes where we will have great folks coming on the show.
Important Links
- LinkedIn – Elevate Security
- Facebook – Elevate Security
- Cartomancy Labs – LinkedIn
- The Artist’s Way
- Allison E. Miller – LinkedIn
About Allison E. Miller
Allison Miller is the Founder and Principal at Cartomancy Labs, an advisory firm that guides teams in innovating and solving problems anywhere that people, money, and technology mingle. With decades of experience at the intersection of cybersecurity, fraud, and abuse, Allison is known for implementing real-time risk prevention and detection systems running at internet scale, with a proven track record of building and protecting customer-facing platforms and services (both B2C and B2B).
Before establishing Cartomancy Labs, Allison was the CISO and VP of Trust at Reddit where she led the cybersecurity, privacy, risk, and safety teams. She has also held technical and leadership roles in security, risk analytics, and payments/commerce at Bank of America, Google, Electronic Arts, Tagged/MeetMe, PayPal/eBay, and Visa International. Miller speaks internationally on security, fraud, and risk, sits on the Faculty at IANS, and has been recognized by SC Media as a Power Player in IT Security.