Balancing risk and privacy is a delicate dance, but with the right solutions and strategies, organizations can effectively manage potential threats to their security while protecting their users’ data. For today’s episode, Matthew Stephenson interviews renowned privacy and technology attorney Greg Silberman to discuss the fascinating and complex world of risk and privacy. With years of experience working in cybersecurity and developing solutions for intellectual property and privacy issues, Greg brings a wealth of expertise to the table. He shares his insights on whether it’s possible to choose one over the other, discusses the challenges of balancing privacy concerns with the need for risk management, and explains how organizations can strike that balance. Drawing on his extensive experience with companies like Zoom, BlackBerry, and Cylance, Greg also provides practical advice on how businesses can navigate the complex landscape of cybersecurity and data privacy. Tune in to explore the world of risk and privacy and get a glimpse of the future of cybersecurity.
—
Listen to the podcast here
Greg Silberman: Risk And Privacy… Can You Choose One?
Hopefully, we have had a little time together talking about cybersecurity, the human element, and how weird all of those things can get, but if this is your inaugural voyage, we are bringing all of the top experts in the industry for a chat about anything cool and interesting in keeping our world secure. Speaking of cool, I have a guy for you. Be warned, dear audiences. There may not be enough oxygen in the room. If you thought the episode with Scott Scheferman went fast, stick around.
We are incredibly excited to welcome Greg Silberman to the show. Greg is a privacy and technology attorney with many years of experience across a broad range of industries and a lot of time in the world of cybersecurity, where he has been part of teams developing solutions to address issues at the intersection of intellectual property and privacy. Our man has a ridiculous technical background in electrical engineering and computer science. He has also done huge work with some places you might be familiar with, like Zoom, BlackBerry, Cylance, Jones Day, and Kaye Scholer. Let’s say our guest has some experience and thoughts. I’m going to try to stay out of his way. Greg, welcome.
Thanks so much, Matt. It’s a pleasure being here. I appreciate the opportunity to come on the show.
I don’t even know where to start. I have to keep the show on track, but with you, there’s not going to be a lot of oxygen in the room. Let’s go. Your career has been on the legal side of security and privacy, which is not something that we have talked about. I’m curious why you wanted to come on the show, and I’m hugely grateful that you did. When we consider the notion of security, risk, and privacy from a legal perspective, how is that different from a CISO, CTO, CIO, CSO, or anything like that?
It’s because it touches many of the stakeholders. I’m not going to say more because I’m not going to take away from any of my brethren who are CISOs, CTOs, or CIOs, but CTOs need to build the technology and make sure that it works, hopefully the way they intend it to and in a way that will be reflective of their commercial and legal obligations. CISOs have to make sure they keep things secure, whether it’s keeping the product secure, keeping the internal network secure, or protecting the people.
Depending upon the company, the CIO’s job can be anything from data governance to making sure the networking runs correctly. They very often will have roles related to procurement or other aspects of the company and how it’s conducting its business. From the legal privacy side, I was previously CPO at Cylance. I ran the privacy program at BlackBerry. I had a significant role in running and building the program at Zoom starting in late 2020 after the pandemic boom lit things up over there.
The role around privacy and cybersecurity for a lawyer is ensuring that you’re meeting the commercial obligations of the agreements and the regulatory requirements, because laws regarding privacy, cybersecurity, and now artificial intelligence are multiplying like mushrooms after a big rainstorm. You see so many new regulations. They are not always brilliantly drafted. They have conflicts with one another. Technology is rapidly evolving, and often the law doesn’t keep up with it. This is the interface between law, technology, business, and society.
That interface is the whole reason I went from wanting to be an astronomer to doing this. I recognized the fact that I was good at math, but I was sitting in a differential equations class as a freshman at Berkeley and realized, “There are more people in this giant 500-person auditorium who are better at math and want to be an astronomer than there are probably good jobs for astronomers coming out every year. I’ll stick with dragging my telescope to the beach or the desert to take a look at things at star parties, leave it as a hobby, and find something else I can do.”
Let me pick up the name you dropped right there, “I was going to study astronomy at Berkeley, but I decided to move into legal studies.” Carry on.
Suffice it to say, that’s how I shifted toward Political Science, Electrical Engineering, and Computer Science as well. I ended up with a job at Lawrence Berkeley Laboratory. That’s when I made the decision to go to law school because I had met some patent attorneys. I was interested in how research touched society and how research could then, through a development process, become technologies and tools for business. Once I did that and eventually made it off to private practice and then into Kaye Scholer and Jones Day, a lot of what I was working on was understanding the implications for privacy and cybersecurity on the business and how that rotates.
There are areas where the importance of cybersecurity and privacy is critical because people aren’t going to want to use a product if it’s not secure. There are also areas where a specific law says, “Thou shalt take these steps,” either prescribing what you must do or proscribing what you can’t. Helping companies figure out how they can meet their legal obligations while still developing cutting-edge technology in a way that makes business sense for the size of the company is interesting.
I love what you said. What is that? How does it rotate? Given where you sit and have sat in the organizational structure on the legal side when we talk about cybersecurity, privacy, and risk, where does legal come into the strategy of cybersecurity?
I’m going to talk about cybersecurity and privacy in the same breath often because that’s important. When you start thinking about data protection, security is necessary. You can’t have privacy without security, but the converse isn’t true. You can easily have excellent security and be abusing the heck out of the data. You’re being unfair and deceptive, and you’re not respecting the individual’s privacy rights.
The other thing to remember is that there’s a bunch of data that companies have that has nothing to do with people. It’s not a privacy issue because privacy concerns data about a person, not the trading algorithms that you’re using for your new FinTech startup, the formula for your new energy drink, or your business plans going into the next quarter. Those are confidential. They may be trade secrets. They may be super sensitive, but they’re not personal data. They’re not information about individuals. That’s not privacy.
We’re going to talk about this. At least the chairs I’ve sat in have been next door to products and engineering. The role of law there has often been to ensure, “Are we developing this technology in a way that’s going to meet the reasonable security needs?” It depends upon what that system is doing. Is this a critical system for a medical device? It better be exceedingly secure. It better be very fault-tolerant as well. It better be demonstrable.
If you’re building systems for the federal government, you will have specific obligations that you need to meet with regard to cybersecurity. Some countries, depending upon the industry, have specific regulations around them with regard to cybersecurity. Some of this can be contracted for, and the lawyers can get out their pens, typewriters, and blue pencils and agree to do certain things, but a lot of this is best addressed early on and built from scratch. The concepts of privacy and security by design and by default require that the legal/product professionals get involved much earlier.
That’s where I have sat in my last three companies. It has been ensuring that we were building privacy and security from the beginning for companies, not just because it’s required by some specific law but also because it makes the product better. It makes our customers trust us more. It results in a better all-around development because otherwise, you can end up in a situation where you have ignored this, and you’re acquiring tech debt. You will get to a certain level at your tiny little startup that was selling to small companies.
That’s fine, but all of a sudden, you’re selling to Bank of America, and they’re going to want to have you demonstrate why your security is trustworthy or what your privacy program looks like. Maybe you will be forced into doing a NIST certification or an ISO certification or having to demonstrate, “Let’s meet the requirements of ISO 27001.” What happens when someone pen-tests this?
Make sure that this is something that’s being addressed because it is, in large part, the role of compliance and legal. This is where compliance and legal become productive. This is where compliance and legal are part of making the business stronger and better. In all honesty, I believe there’s a potential to make it faster because if you create systems that are addressable, and if there are policies and procedures that can be followed, you don’t have to have last-second delays in shipping a product because nobody considered the privacy or security impacts. Nobody is having to stop development because they need to get approval.
We will hopefully have built this into the software or product development life cycle so it hits early enough, hits naturally, and hits in a good solid parallel flow. Then you don’t have the problem where someone comes to the lawyer in the last week before the product goes to general availability and says, “Could you approve this?” Where did you get the data? How does it work? What is this doing? You start asking these questions. All of a sudden, there’s a delay.
You tell me if this is too personal a question. Why do this? You’re a giant, terrifying, and beautiful bald-headed man with a Gandalfian goatee who had an incredibly successful legal career. It’s like, “I would like to get into cybersecurity because that seems like a stable industry. Let’s see what we can do there.” What about what is happening, has happened, is continuing, and will happen in security makes you think, “That’s the thing I want to do.”
It is some of the most cutting-edge technology. It has interesting problems. I love the people in the industry. A lot of my friends are in it. They’re people I enjoy. I find the jobs fun. I enjoyed working at Cylance. I enjoyed working with customers on dealing with these issues. I enjoyed helping people. Unfortunately, in private practice, it was often coming to me when there had been a problem. There has been a breach or an incident.
Part of the reason I enjoy being in-house so much was the fact that it was taking the steps to prevent that, try and build systems that are more resilient, and drive business toward things where privacy and security become a feature, not an afterthought, not a bolt-on, and not something that is being done for the purpose, “We’re going to have to do this because we need to tell people we have cybersecurity.”
It’s along the lines of trying to turn something that has been viewed by many as a necessary evil, “We have to show this to the lawyers. This is going to be the Department of No,” into something where we can make the product better. Hopefully, with appropriate involvement, we can make the process faster and help you build a better product. All of those are fun and interesting.
The notion of building a better product is incredible because I don’t know how many people consider the notion of where legal comes in. Everybody in a corporate environment thinks, “Legal has to review this,” but this is the notion that you are someone who is coming into this to help contribute and make things a better product. From your experience in your time in the security industry, which is long, and it’s also with companies that are massively respected and rightfully so, what’s the legal team’s impact on development, production, and distribution outside of being the Department of No? Most people look at it that way.
Don’t get me wrong. Often what we can fall into is either the Department of No or firefighting, “There’s been a problem. Clean it up.” You’re either a janitor with a bucket, a fireman with a hose, or the person standing there with your arms crossed, “You can’t do that.” A lot of my career has been spent trying to find ways to get out of those jobs because those aren’t fun. Sometimes firefighting and cleaning up can be interesting, but it is not fun. What I’ve done is try to drive more toward having appropriate governance structures around things. How to make governance sexy and productive is a difficult task, but I’m trying my best.
The idea is that if you have appropriate legal talent involved early enough, you can build processes that will create a better product. Am I saying that lawyers can help make sure your security is perfect? No. Don’t rely on your attorneys to paper over things or try and exculpate you from the potential liability of risk. Realize that your attorney is a good source for the question, “What are the risks that you might face from this if it fails?” Ask, “What are you going to do then?” Keep asking the questions.
When you can get to a point where they say, “We could do this. It will be a little more effort now, but it will remove the risk,” then it becomes that calculation, “Does the cost of implementing something in a more secure fashion or a more privacy-aware fashion outweigh the risk of the potential loss of reputation and revenue and the distraction created by regulatory investigation or litigation?” In a fast-moving environment, the ability to point and say, “We tried at least,” goes a long way.
That will be the next Instagram and TikTok ad, “We tried.”
One area that’s particularly interesting is AI. Did I jump the gun on you? Do you want to keep tunneling into something else?
I was going to lead into the next question with it. Instead of trying to stay anywhere close to what we had talked about, let’s freestyle. Let’s do some jazz. What’s on your mind? It feels like AI is on your mind. I’m going to stay out of your way.
AI has been on my mind, particularly in areas around privacy, security, and risk management, for a bit now. There is a lot to be excited about, but the excitement and buzz that we’re hearing around AI, particularly Generative AI and large language models, is getting particularly loud. Some of it is exciting, and some of it is complete nonsense. People are saying, “We’re approaching artificial general intelligence.” I do not have a PhD, but no. I’m willing to place that bet now.
You don’t have one, but we know people who do.
If we take a look at this, we can see that AI has been around for a while and is getting even better at translating between languages and improving our ability to communicate with others. That has been powered by large language models and other AI technologies doing language translation, image processing, and classification. In certain medical diagnoses, machines are flat-out better than the average doctor, and they’re ubiquitous and much more available. For somebody to read a CT scan or an MRI, I might not be able to get into a great doctor at Stanford depending on where I’m living, but I might be able to get access to a powerful AI diagnostic tool, or my local doctor may be able to.
I need to stick it in here. George Carlin once had a great line. He said, “Somewhere in the world is the world’s worst doctor.” Back to your point about AI, everything that is being fed into it is not all the gold medal winners. It’s the worst thing as well as the best thing.
That’s why there’s governance around that and consideration as to what is it being taught with, how it is being taught, and what internal controls are being developed around it. You see a lot of companies that are doing these blog posts, “Here are our AI principles.” They’re all going to mention fairness. I guarantee you. They’re all going to mention accountability.
They will throw the word transparency in there. They’re all going to throw in the words privacy and security because they have to, but they may shunt reliability and safety to the side because they’re saying, “This isn’t being used for a power plant.” Yet AI is being used for a lot of very important tasks in society, from sentencing guidelines to autonomous vehicles to diagnostic tools for doctors.
Often people say, “AI should be a tool. You have to put a human in the loop.” Putting a human in the loop is a great first step, but making sure that the human actually looks at it, does their job, is thoughtful, and doesn’t become over-reliant on passing along the machine’s recommendation is critical. How often have we seen people or read stories about people slavishly following GPS directions and turning down one-way streets?
As the technology gets better, that should happen less and less. As these systems get more data input in more real-time, it should be less and less, but you very often have people who will lock in. The GPS is telling them to go that way. Even though the road is closed, they will go around the orange cone to keep going that way, and then the freeway ends.
Can I ask about that exact analogy? I’ve had a very specific experience with that. I’m sure you have too. Audiences, I’m sure all of you have too. Is the notion of not looking up from the GPS a risk when we look at what AI is doing? This is the thing.
Careless use is going to be one of the things. It’s not malicious. It may not even be a bias against an individual that’s based upon their socioeconomic status or old, bad, biased data against certain groups in society. It may be, for instance, that the training fails. A good deal of work has been done around training AI systems to make recommendations for different types of treatments. Diagnose and recommend. That’s great, but it’s only great based on the population that it’s trained on.
AI systems have been trained to make recommendations for different types of treatments. That’s great, but it’s only great based on the population that it’s trained on.
If I’m pulling all of my data from Los Angeles, New York, San Francisco, and Chicago from these large research hospitals that have top-flight doctors and access to the most cutting-edge technology, and then using that to base recommendations for treatment for someone who doesn’t have access to those facilities and may not have access to all of those drugs and protocols but is sitting in Iowa, that’s going to be a very different circumstance. What if they’re sitting in a country that is less developed? What if they’re sitting somewhere they don’t have access at all?
Is it better than having no access? Possibly, but maybe not because it may make the wrong recommendations. This gets into a social utility question. I’ve steered us away from privacy and security, but as we take a look at this, it’s one example. Privacy and security are going to tie into risk and inappropriate use. A lot of people are super excited about ChatGPT. You can’t go to certain circles of friends and colleagues or individuals and not hear people waxing on about ChatGPT or large language models, “Let’s play a comparison between GPT-3.5, BLOOM, and FLAN-T5. This is very exciting.”
If that becomes very exciting, you need to go to better parties.
As I’m doing this, I’m like, “I’m a big old nerd.”
#NerdAlert right there.
Maybe not everybody has these discussions, but people are using this technology. Sometimes it becomes the question, “Was this the way it should be used and where it should be used?” I shared with you previously that it used to be that if you went to one of the more popular large language models and said, “Give me a list of ten passwords,” it would give you a numbered list from 1 to 10. A good portion of them were from the twenty most common passwords that someone had sourced from breaches, having trundled off and acquired the data from some open directory.
Reddit, GitHub, and all those lists.
There was a period of time when these lists featured a lot of things like Adobe 123 because Adobe had a big breach. “Password” itself is always popular, as is the use of leet speak: we’re going to replace the A in “password” with an @ sign or an ampersand.
Baseball was still the number three password up until 2019. Everybody hates baseball.
As someone familiar with cybersecurity who throws that to ChatGPT as a joke, that’s fine, but what if my mother is sitting there trying to come up with a password? She throws it at it, and it gives her a list of ten passwords. Fortunately, one of the human trainers realized, “That’s not smart.” They hard-coded a response so that if someone asks for a password, it says, “I can’t do that for you. As an AI language model, I cannot provide a list of passwords. Providing or suggesting passwords goes against security best practices and could potentially lead to someone using weak passwords that are easily guessable or cracked.”
That’s a great first tip. I noticed later on that it evolved and said, “Instead, I can provide some tips for creating strong passwords. Use a mixture of upper and lowercase letters, do all these things, and consider using a password manager to generate and store your password securely.” This is a trivial example, but it goes from helping along to not.
It’s not trivial. I don’t think my parents are aware.
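To make Greg’s final tip concrete, here is a minimal sketch of the kind of generation a password manager performs, using Python’s standard `secrets` module. It is purely illustrative, not a substitute for a real password manager, and the length and character set are arbitrary choices.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from upper/lowercase letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets uses a cryptographically secure RNG, unlike the random module,
    # so the output is not predictable from previous outputs.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'r#8mQz!v2Lp&Xs4D' (varies every run)
```

Unlike a list scraped from breach data, every character here is drawn independently at random, which is exactly what makes the result hard to guess or crack.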
You have to consider this from a legal aspect and a privacy aspect. People are looking to create things. We saw what happened with LastPass when that news broke. Someone broke into a personal laptop and accessed one of the larger password management systems. They got access to the entirety of the vault through somebody’s personal machine.
That’s one example. What if I was using some generative AI, and I’m not going to call any of them out specifically, to write code for me? I’m a young programmer writing code. It’s writing code based on God knows what. It could easily write vulnerabilities. As a young programmer, so could I. Maybe we need to build better AI to do this and, therefore, build a better tool. It will fix itself because it won’t write vulnerable code. Presumably, there’s going to be a period of time when it will not write vulnerable code, until someone starts poisoning the well of training data. Then it will intentionally write vulnerable code.
You start looking at this from the standpoint, “I don’t have any young programmers. They’re not going to write vulnerable code.” We want to talk about insider risk. If we start looking at human behavior and insider risk, people want to make things easy for themselves. We see this with a lot of copy-paste programming where people will go to Stack Overflow, “I need a function that does this search.” It’s not what people meant by object-oriented programming, “Write once. Reuse many.” This is like, “Let me copy and paste. I’ll fix the pointers. We will change the variable names. It’s good to go.”
If we start looking at human behavior and insider risk, people want to make things easy for themselves.
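To ground the copy-paste and generated-code worry in something concrete, here is a hedged sketch of the single most common pattern this produces: a SQL query assembled by string interpolation, next to the parameterized version that defeats injection. The table, data, and inputs are entirely hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: the input is interpolated straight into the SQL string,
    # so a crafted name like "' OR '1'='1" matches every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the value as data, not SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks the whole table
print(find_user_safe("' OR '1'='1"))    # returns []
```

Snippets shaped like the first function circulate endlessly on forums, which is precisely how they end up both in codebases and in the training data of the models now suggesting code.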
Sometimes they’re not even bothering to remove or overwrite the remarks. They have just copied it over. When the model is pulling from this large body of training data, you don’t know the basis on which the AI is creating the code. You don’t necessarily know if your employees are using it because now we have a lot of ubiquitous model systems out there that are free and cheaply available. People are starting to use them and then train them, potentially using company data.
When someone takes a bunch of company data and uploads it to one of these systems to train it on how to do a thing, have they inadvertently given away confidential information and trade secret data? Have they inadvertently disclosed personal data, depending upon what you’re handling at work? As we look at this over time, it’s going to be an area where we’re going to have to think about how we are going to control it. We have looked at rogue IT, stealth IT, and shadow IT within organizations. People are bringing in their network-attached storage or a Wi-Fi router because the signal doesn’t quite reach the kitchen. They will plug one in, and now it works better.
“My NAS is faster. I just need to bring it home.”
People can subscribe to Software as a Service. We saw this with cloud storage. People were throwing things to the cloud and migrating company data to offsite storage, “This way, it’s even more secure.” You’ve now moved a bunch of material in a way that bypasses data loss prevention. We’re all talking about data handling and what people should and should not do in violation of a company’s cybersecurity regulations and policies, but as we say this, we think, “That’s dumb. People know not to do that.”
Do they, particularly when they’re under a lot of pressure? How many people still forward things to a personal email account that they shouldn’t? I remember this happening even in a law firm environment where a partner told an associate, “Send it to my personal email.” The associate was like, “What is it?” He gives him his personal email. The partner mistyped his email address. Thank God it was the partner’s mistake, not the associate’s. Off it went. Unfortunately, there was somebody on the other end of that address. Unfortunately, the material was fairly salacious, so it was a problem.
Those bullets have to land. If you pull the trigger, that’s going to end up somewhere even though you may not care.
Let’s turn back around. Many of these AI systems are freely downloadable. There’s a tremendous data corpus that can be used to train, but who’s examining the provenance? Who’s even asking the question, “Where did this come from? Has anyone done an analysis of this model? Where’s the data going to be if I had to sign up using my company email to upload data to this service?” I’m not accusing any of the name-brand services of doing anything wrong, but think of it from your company’s responsibility standpoint.
It’s like, “I uploaded all of this stock trading data that we use for our algorithms to this and asked it to find this pattern.” Should you have done that? That’s something that we’re going to need to look at from the standpoint of inadvertent or unintended insider risk. People are trying to be faster and use new tools, particularly when there’s a lot of buzz. I don’t know if you saw this. Did you see where Vanderbilt University had to walk back the email that they sent out regarding the shooting at Michigan State?
Yes.
If you read the statement, the statement reads fine, but when you have someone from the Office of Equity, Diversity, and Inclusion sending out a message of unity, talking about the tragic shooting at another school that injured 5 people and killed 3, and talking about the tragic reminder of the importance of taking care of each other and creating a safe and inclusive campus, how does it make you feel as a human being when you read at the bottom, “Paraphrased from OpenAI’s ChatGPT AI language model. Personal communication, February 15th, 2023”?
That’s a whole other show.
Even if it was brilliantly written, even if it was the best message, or even if it made people feel good, as soon as they read that it came from a machine, it undermined its purpose.
Even if it was written very well, or even if it was the best message, or even if it made people feel good, it would still not be good enough for people if it came from a machine.
It was also not brilliantly written.
Let’s be clear. Some of these generative models describe things well.
I’m going to flex my degree in English Literature and say blank the ChatGPT for writing these things because there is only one way to communicate humanity to humanity, and it’s not through that.
It is nice for grammar check. It is nice for spell check. As someone who was violently dyslexic growing up, spell check has saved me from looking like I’m much dumber than I am. An important thing to realize is that it’s a tool. It has a place, maybe as an idea generator or helping you try to remember that word, fine, but certain things should not be given over to it. There’s also a lot of excitement around it in other areas, like mental health apps.
We have to start wrapping this because we have to get into the fun part of the end of this. When you look at the impact of what’s happening with this type of generative AI, it is now getting deeply involved in the humanity of things, like these messages that go out to campuses when these incidents happen, but also in coding and the evolution of everything that we are building to protect ourselves.
I’m not trying to get into the Terminator-style “AI is going to ruin everything.” Is there a risk of a loss of creativity that could open windows for risks, whether from outside attackers or from insiders who don’t realize what they’re doing because they’re acting like people, and humans are chaotic by nature, while AI does not react to chaos the same way that people would?
That’s a given. You train an AI system on a large body of data and present it with a novel situation. It may react in an unexpected fashion. Maybe there will be an instance where it reacts appropriately. Maybe there will be one where it falls flat because there’s a reflection on the water, and that causes the LiDAR in the autonomous vehicle to stutter. I remember watching bicyclists make an autonomous vehicle stutter at a stop sign because they would go back and forth, and the car would react to them.
In cybersecurity, we have seen ways where you depend upon AI for detecting ransomware or malicious code as part of static analysis, and someone is able to identify, “If I pad the malicious code with enough non-malicious code, maybe it gets past. Maybe it doesn’t examine all of those vectors. Maybe the way it works slides past someone.” Let’s step back from the technical risks and take a look at where companies are starting to build, deploy, and use these technologies without giving consideration to the impact. It’s just, “It helps us do our business better.” Do they have AI use and development principles? Have they given any thought to how they are going to turn these pretty language principles about fairness, privacy, safety, transparency, and accountability into practice? What are we going to do to operationalize this?
On January 26th, 2023, NIST did finally come out with its AI risk management framework. That’s a big step in the right direction. Companies like the Responsible AI Institute are putting out a framework for third-party assessment. The European Union is driving forward with the AI Act that’s going to establish regulatory frameworks around this. We were talking earlier about the role of attorneys. Laws are coming. Laws are here, but more are coming. They’re going to be a mess. They’re going to be hard to interpret. They’re going to be things that you’re not going to be able to feed to your AI.
Maybe your AI can give you a workable summary, but it’s going to take solutions that are going to have to be drawn from many different stakeholders to solve these problems. As you start to look at a company, where in that company is the impact of this from privacy, security, and ethics managed? You can have someone put together use principles or development principles. You can have someone operationalize certain aspects of it to ensure, “Where did the data source from?”
Who’s going to make the decision at the end of the day as to how AI gets used in the business? Does the business even have an inventory of what AI it uses? How are you going to govern or manage this if you don’t even know what you have in your system? From an insider risk standpoint, we have people bringing in tools, technologies, and services the organization is unaware of.
Thinking about this from a governance standpoint and a risk management standpoint is particularly interesting because we want to move toward designing systems to be privacy-aware, secure, and ethical, and at least looking at them now from that standpoint, knowing it’s a consideration, rather than, “Look at what a good job this generative AI did at mimicking someone’s art style. Look how well this car drives on a well-known street. Look at how well this technology works.” These technologies are becoming more ubiquitous, opened up, and made available to people freely with very low barriers to entry. That’s awesome. It’s democratizing.
There are a lot of great possibilities there, but I do think it’s one that we need to keep in consideration as to how we are going to do this in a responsible fashion and getting beyond the steps like, “We have drafted our principles for the responsible use and development of artificial intelligence.” Take the next step. How are you going to manage the risk around it? Who’s going to be responsible and accountable for it? Who’s going to stop and ask the question, “Should we have someone use ChatGPT to send out a condolence or write a eulogy?” What is its role in an AI-powered chatbot for mental health purposes?
We have to get to the fun stuff here in a minute, but I’m going to give you three minutes on this one and hold you to that because I know how hard that can be for you and me. Given your history on both the lead counsel side of things and the privacy side of things, you’ve been involved in two very important parts of our industry. When you look at the notion of artificial intelligence as it applies to the human aspect of what is happening and the general thought of external and internal risk, what has your attention?
Most of my interest with regard to this is focused on how we are going to start addressing responsible AI development within companies. Going from the standpoint of secure development, people are starting to understand how you securely develop software. There are frameworks. People have given thought to this. You can take classes in it now. I couldn’t do that when I was an undergrad. People are now starting to talk about and think about privacy, how we can use that, and how we can take privacy by design as a concept.
We now have frameworks that are being developed around that, but as we start to look at things like AI, where privacy is one aspect and security is another, we get into the murky grounds of ethics. That becomes something that needs to be considered because, on the one hand, you can use it to democratize technology services and make things more available to people, but depending upon how it gets distributed, there’s also the risk of socioeconomic inequality and bias due to bad data.
Privacy violations arise out of the fact that someone didn’t secure the model, and now the model has access to data that it shouldn’t and might, as a result, disclose private data. I get very interested in the combinations of different technologies where you see, “We have biometrics. We are concerned about facial recognition. That’s okay. We will block the face out so it can’t see it,” but then it turns out that we can recognize individuals to a high degree of accuracy based upon how you hold your head, how you move, and what you’re doing on camera or other motions.
It then becomes the question of, “How much other data do I need to fuse with that to specifically identify an individual? Where are we using AI and other technologies in a way that maybe I asked you for consent, but was that consent meaningful? Did you understand what you were being asked for? Is it inherently unfair?” There are hundreds of thousands of words of privacy statements out there. I don’t even read all the privacy statements. I blow through a lot of them. Some of them I read out of curiosity, “What did they say? Are they asking for my permission to do something?”
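Greg’s data-fusion question is the classic linkage attack, the technique behind Latanya Sweeney’s well-known re-identification research: two datasets that each look harmless on their own can identify individuals once joined on shared quasi-identifiers. Here is a toy sketch with entirely made-up records; the field names and data are hypothetical.

```python
# A "de-identified" medical file and a public voter roll. Neither links
# names to diagnoses by itself; joining on quasi-identifiers does.
medical = [
    {"zip": "94705", "dob": "1961-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "94110", "dob": "1984-02-12", "sex": "M", "diagnosis": "asthma"},
]
voters = [
    {"name": "Jane Roe", "zip": "94705", "dob": "1961-07-31", "sex": "F"},
    {"name": "John Doe", "zip": "94110", "dob": "1984-02-12", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "dob", "sex")

def link(records_a, records_b):
    """Join two record lists on their shared quasi-identifiers."""
    def key(r):
        return tuple(r[q] for q in QUASI_IDENTIFIERS)
    index = {key(r): r for r in records_b}
    return [{**a, **index[key(a)]} for a in records_a if key(a) in index]

for row in link(medical, voters):
    print(row["name"], "->", row["diagnosis"])  # Jane Roe -> hypertension, ...
```

The point is the asymmetry: each dataset could pass a naive privacy review, and the violation only appears when someone fuses them.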
We are starting to see some holdings that say, “Burying it in the privacy statement isn’t good enough.” Companies are going to be driven toward being more proactive about making sure that people are given a meaningful notice and that meaningful consent is obtained, but as we start to take a look at this from that standpoint, how are companies going to do this and still build a product that doesn’t have an inordinate amount of friction?
You don’t want to have 30 pop-ups saying, “Do you consent to this being used for this purpose?” We get into the intersections between intellectual property law such as copyright, commercial law such as what the terms of service say, and the Federal Trade Commission. Is it fair? Is it deceptive when you know data is being used for purposes outside of the apparent scope for which it was collected?
It’s a tremendously exciting time. Particularly, it’s like, “It’s something for investors to dump money into.” People are getting very excited about it. Some of it is impressive. That said, we all have to keep in mind that there are going to be a bunch of laws that are going to be promulgated. A good percentage of them are going to be flawed, either because they will miss the boat on some of the technology or because they will be drafted loosely.
There will be perverse incentives buried within the regulation, such as private rights of action, which will then create for a plaintiff’s bar the ability to go out and sue for things that we look at and go, “Is that where time, money, and effort should be spent to result in a massive settlement? What was the real damage? Is that how we want to try and regulate the industry, by enforcement?” It becomes a question. As we take a look at all of these things, it’s about recognizing that we’re going to want to build better products that meet legal requirements and satisfy commercial necessities while taking into consideration that security is an absolute must. Privacy is basic.
We will need to build better products that meet legal requirements and satisfy the business requirements. Security is an absolute must. Privacy is basic.
These are all areas in which new AI-powered technologies are going to keep expanding into our day-to-day lives, but it’s also something that companies are going to need to address. It’s going to be not just the attorneys and the product people; it’s going to have to be groups of stakeholders within the organization. They’re going to have to find ways to deal with it because there’s a whole bunch of emerging regulation for AI systems. We have the EU AI Act and Canada’s Artificial Intelligence and Data Act. All of these are going to carry financial penalties, some of them up to 6% of revenue, and even criminal punishment.
New York City’s law on automated employment decision tools carries a penalty of up to $1,500 per violation per use per day. The Federal Trade Commission has started to wield algorithmic destruction, disgorgement, the algorithmic death penalty, or whichever term you want to use, which it originally brought out with Cambridge Analytica, Everalbum, and Kurbo, to say, “You used data to build that which you didn’t get proper consent for. You used AI or data models in a way that we feel was deceptive or unfair. Get rid of the data and the models.” About the long-term enforcement of that, there are a lot of questions that people will raise about why it’s wrong or broken, but the reality is this. Get it right now, or you will pay for it later.
I have to cut it there because we got twelve more episodes to do everything that you’ve talked about. First off, here’s a shout-out to you for using promulgate. It’s an incredible word. Congratulations. Thank you. Here’s a shout-out to Warren G and Nate Dogg for you dropping regulate into the conversation. This has been very serious, and you’re a serious man, but we have also been known to have some fun. Greg Silberman, he of the Gandalfian goatee, what do you do when you’re not doing this?
Probably the most common thing is that I play board games. You know this.
I know this, but there are millions of audiences around the globe who want to know.
For as much as I love technology, I never got bitten by the MMO bug. My favorite thing is to get a bunch of friends over, have a barbecue or a nice meal, sit down, and play a board game or an RPG.
Dear audiences, I need you to know this. Our man looks like a Marvel villain. He is six and a half feet tall and 250 pounds and wears incredible suits. He has a glistening shaved head with a brilliant white streaming goatee. He is not your average citizen. As he walks down the sidewalk, he’s one of those people you walk by, “It’s that dude. I’m good.”
Thank you for the disclosure of all of that personal information. Besides, that was like using deep fake technology to protect my privacy because you gave statistics. They’re off enough that people won’t be able to identify me.
I am proud to say that there have been two times in my life people have come up to me asking me if I was you.
That’s hilarious. My high-humor moment came when I first shaved my head years ago. A young associate was coming to find me at my law firm back then. Apparently, he looked at what Mr. Silberman looked like at the old firm. The firm photo hadn’t been updated. I did not have a beautiful flowing head of hair, but I had hair and no beard. I watched this young man walk by, peek his head in, come by, and peek his head in. He would not come into the room and ask, “Are you Greg?” He goes and gets my assistant and says, “Is that Greg Silberman?” “Yes. He’s sitting in his office at his computer. Did you think it was the IT guy?” I have not been mistaken for Matt Stephenson yet, but I am looking forward to that day.
I’m not nearly as famous as you. Back to board games, as I recall, you have been running a long-running game.
I started running a Dungeons & Dragons game for my daughter because she said, “Dad, will you run D&D?” I was like, “Do you want me to play? What do you want to do?” She’s like, “I want you to run a game for me and my friends.” Let’s be honest, nerd dads. This is the high point, when your young tween daughter wants you to run a game for her and her girlfriends. That lasted for six months, which was a pretty good time. A bunch of the dads and one of the moms were like, “I haven’t played D&D. Can we play?” I said, “It’s their game. Let me ask.” They were like, “Hell, no.”
It was like, “Can you run another one?” It alternated every two weeks. There was one for the parents and one for the kids. The kids dropped away. This went on for four and a half years of people showing up. These are partners at law firms, not just my own, a couple of CTOs, and folks that you would not normally associate with that D&D thing. They’re coming in, and someone is saying, “I’m flying back from such-and-such. Can we push it to Sunday?” It was like, “Sure.” People who would be on a deal remotely, “Can we set up a Zoom meeting so I can attend?” We tried that once, and not ever again.
Your friend was like, “I’m on my couch trying to close this eleven-figure deal. Can I still Zoom?”
Someone in our game was in Tokyo. He was like, “I don’t want to miss what’s happening.” There was one structured finance partner from a firm who also plays the guitar and would write a song after every session. There’s a karaoke songbook out there somewhere with all the songs he wrote over the years. There are hundreds of pages of detailed notes that these people created.
I still have a twelve-sided die from one of the COVID games that you invited me to play, and I have never played since.
During COVID was when my wife started playing them because she was like, “I’ll try this.” It turns out she loves anything that lets her create a character. I mailed dice and tangible books to people all over and said, “We’re going to play this. Here are some dice. Here are some books. Here are the rules. Read them or not, we will go.” Some of them are completely ridiculous things.
Don’t say it. Let me ask you this. What are you listening to? What’s on your Spotify playlist or Apple Music?
My playlist got janky because of my daughter. We were on the family account, and she was using my Spotify to play music. Then I got the “It’s your year-end wrap-up,” and I’m looking and going, “What the hell? I am not a fan of Lil Yachty.”
Broccoli.
COUNT ME IN is not a bad song, but it’s not my thing. I have to admit the high rotation has been a lot of metal.
Lay it down.
I’m listening to a lot of garbage stuff. I’m almost embarrassed to say it.
Put your name on it.
Lux Æterna is the new Metallica. I like Ghosts.
Ghost. You got my heart. I have eight Ghost t-shirts in my closet.
There are the Vikings and Amon Amarth. This one is the great crossover, the Lord Weird Slough Feg. They have a great album, The Spinward Marches.
I have no idea what that is, but I will find out later. Is there anything sitting on your coffee table or in the bathroom, magazines, books, or anything that you would suggest? “If you want to read something, maybe try this.”
I’ve read it once already. I’m going back through it in putting together some other materials for folks. Nishant Bhajaria’s Data Privacy: A runbook for engineers is the best text on privacy engineering out there. It approaches things from a thoughtful place as far as how any business addresses data privacy essentials to prevent data breaches and how to make sure that you’re navigating the tradeoffs between strict data security and real-world business needs.
It’s the thoughtful way that he lays it all down, talking about classifying data based on privacy risk, how you would set up the internal capabilities to export data in a way that meets legal requirements, and how you establish review processes to accelerate privacy impact assessments and only conduct them when needed. That’s great because it sets down, in a much better way than I could ever hope to, how you build an organization that addresses privacy and data security from the ground up. That has become one of my new absolute favorites.
The last thing is shameless plugs. Is there anything you got going on? Are there any places you want to point people to in order to get better at things, charities, or something you think is funny?
Here is a shameless plug for Extra Life. It started as a video game marathon. It raises cash for the Children’s Miracle Network Hospitals. I’m here in the Bay Area, so I always point it toward UCSF Benioff Children’s Hospital. It doesn’t take any of the money to manage the charity. It’s not one of these awareness charities. Everything gets passed along by Extra Life.
What you donate goes straight over to the organizations that are going to put that money to good use to help cure terrible diseases for kids. It started years ago as a video game marathon. I corrupted it, and now it will run for 25 hours because they always seem to do it over the time change. That’s my big charity. I do a little bit of work for some other organizations, but that’s the one I push a lot of my effort toward. What were the other things that you were looking for? Where do people want to see me?
Can we give our man credit for opening with a charity and then being like, “How do they want to find me?”
I don’t have a social media presence that’s meaningful. I’m not going to direct anyone toward my Twitter. I’m on LinkedIn if you’re, all of a sudden, like, “I’m looking for a privacy guy. Maybe there’s someone interesting there.”
There you go. You can do that.
There’s a way to hunt me down there. If you want to see me in person, I will be on a panel, Do Better, dealing with corporate boards at the RSA Cybersecurity Conference coming up in April 2023. I will also be involved in a panel presentation at the IAPP Global Privacy Summit in Washington, DC in April 2023 and likely in some of the other surrounding conferences. If you’re interested in seeing it, hunt me down on LinkedIn. I’m more than happy to post about them there.
This is why we love having people like Greg on the show, “I’m going to be presenting at RSA.” That’s the biggest security show.
I’m not a solo. I’m on a panel there. I’m a tag-along.
Calm down. You’re going to be there. You’re sitting on the dais. That’s the thing. Do not do that. That humility is genuine. I’ve known Greg long enough. It is genuine. Go to RSA. Go see Greg. You can also find him on LinkedIn at GPSilberman. What’s the P for?
Peter. You’re making me spill out all that personal data.
This could go on for hours, but you probably all have something better to do.
I apologize. I feel like I have derailed enough of this to my peculiar interests and needs.
This show was never on the rails in the first place. That’s the beauty of it. There’s one more thing. What do you like to cook?
Barbecue.
Done. Answered. There we go. If you want to hear more about Greg, barbecue, and how to protect your OPSEC while barbecuing, you’re going to have to come back for the next one. That is it. Thank you for joining us. Here’s a friendly reminder that all comments reflect the personal opinions of the participants and not necessarily those of their employers, organizations, or me but probably me. For more information on all that’s good in the world of cybersecurity, make sure that you check us out. You can find Elevate on LinkedIn and Facebook.
You can find me @PackMatt73 across all the socials. Greg has told you he doesn’t do any of that stuff, but he is still on LinkedIn. I dare you to go find him. All we ask for the show is to please subscribe, rate, and review. To rip off my man Bomani Jones, if you’re going to give us a review, give us five stars. If you give us less than that, I am inclined to think you are a hater, even though you may not be, but I feel like this is a five-star show. All we ask is that you tune in. You will never miss all the great folks who are coming on to help make the world a safer and more secure place. Until then, we will see you next time.
Important Links
- LinkedIn – Elevate Security
- Facebook – Elevate Security
- Scott Scheferman
- Greg Silberman
- ChatGPT
- Responsible AI Institute
- Data Privacy: A runbook for engineers
- Extra Life
- Do Better
- IAPP Global Privacy Summit
- @PackMatt73 – Twitter
About Greg Silberman
Greg Silberman is a privacy and technology lawyer with over twenty years’ experience working at the interface of technology, business, and law. He studied electrical engineering and computer science at U.C. Berkeley and is a graduate of the University of California College of the Law, San Francisco.
Greg started his legal career at Lawrence Berkeley National Laboratory and was a partner in the Cybersecurity, Privacy, and Data Protection practice in the Palo Alto office of Jones Day. He joined Cylance Inc. in 2016 as Chief Privacy Officer and later served as Deputy General Counsel at BlackBerry following its acquisition of Cylance in 2019. Most recently, Greg led the Global Privacy Team at Zoom Video Communications.
Greg lives in Northern California with his wife, teenage daughter, 10 chickens, 2 cats, and a Neapolitan Mastiff that he describes as a clingy, mildly abusive, 155-pound toddler.