Proof of Concept: How Can We Outpace Deepfake Threats?
Sam Curry and Heather West on Authentication, AI Labeling and Adaptive Security

Anna Delaney (annamadeline) • July 25, 2024

As deepfakes evolve, they pose significant cybersecurity risks and require adaptable security measures. In this episode of "Proof of Concept," Sam Curry of Zscaler and Heather West of Venable discuss strategies for using advanced security tactics to outpace deepfake threats.
"We must assume digital interactions aren't always real and build our processes accordingly," said Curry, vice president and CISO at Zscaler.
West, senior director of cybersecurity and privacy services at Venable, underscored the importance of rigorous processes. "We need to be proactive with robust authentication methods," she said. "We can't rely on technology alone. It's about creating a culture where processes are followed meticulously to ensure security."
Curry; West; Anna Delaney, director, productions; and Tom Field, vice president, editorial, discussed:
- The need to evolve security processes to counter deepfake threats;
- Why traditional authentication methods are insufficient;
- Strategies for adapting to AI-driven fraud tactics.
Curry, who leads cybersecurity at Zscaler, previously served as chief security officer at Cybereason and chief technology and security officer at Arbor Networks. Prior to those roles, he spent more than seven years at RSA - the security division of EMC - in a variety of senior management positions, including chief strategy officer, chief technologist and senior vice president of product management and product marketing. Curry also has held senior roles at MicroStrategy, Computer Associates and McAfee.
West focuses on data governance, data security, digital identity and privacy in the digital age at Venable LLP. She has been a policy and tech translator, product consultant, and long-term internet strategist, guiding clients through the intersection of emerging technologies, culture, governments and policy.
Don't miss our previous installments of "Proof of Concept," including the March 21 edition on opening up the AI "black box" and the May 22 edition on ensuring AI compliance and security controls.
Anna Delaney: Hello. This is "Proof of Concept," a talk show where we invite security leaders to discuss the cybersecurity and privacy challenges of today and tomorrow and how we could potentially solve them. We are your hosts. I'm Anna Delaney, director of productions at ISMG.
Tom Field: I'm Tom Field, senior vice president of editorial at ISMG. Anna, welcome to the long, hot summer of deepfakes and AI.
Delaney: So we are back talking about AI, and specifically, one of the most pressing and intriguing issues in today's digital landscape: deepfakes. If I were to say deepfakes to you, what comes to mind, Tom?
Field: Well, there's this new - instead of an urban legend, it'll be an enterprise legend that we talk about now - which is the Hong Kong businessman who let go of $25 million after participating in a meeting of executives that was deepfaked. So for me, it's the use of the technology to enhance business email compromise and to coerce unsuspecting employees into releasing finances, releasing secrets, giving access, whatever it may be that they're after. This seems to be the flavor du jour.
Delaney: And it's definitely not a myth anymore. We've seen some financial institutions already experience this. The question is: how can we trust what we see and what we hear in a world where deepfakes can mimic anyone with eerie precision? And I know you've been deepfaked a couple of times.
Field: It's scary, because you go to these conferences and the FBI and the Secret Service stand up there and say, "It only takes three seconds of audio or 11 seconds of video, and the fraudsters can create a compelling deepfake." We've already given more than that just in this conversation. If anyone's going to be concerned, you and I ought to be concerned.
Delaney: So, what is the solution? Organizations have to consider the systems they use for authentication. Are they robust enough to withstand the sophisticated nature of deepfakes, and how can we innovate further to safeguard against these evolving threats?
Field: Everything we've trusted is now open to question. It's no longer a matter of the cliche we like to say - trust, but verify. It's verify, then trust.
Delaney: There's also the issue of regulatory and ethical standards for labeling content. I believe the EU and the U.S. are drafting laws to combat harmful deepfake content, aiming to preserve the integrity of information. However, the question of enforcement remains a significant hurdle.
Field: And P.S., while all this is going on, how many major elections are there in the world this year, including the U.S. presidential election coming up in a few months? Ample opportunity for the fraudsters to wield their craft.
Delaney: Interesting to see what the impact of deepfakes will be on the elections. We just had our election. We have a Prime Minister who apparently won fairly. So maybe they weren't such an influence on our campaigns, but let's see what happens in the U.S. - very different landscape. At this point, perhaps it's time we brought on the experts to shed light on how we can navigate this complex terrain. Welcome back to the ISMG studio, our Proof of Concept regulars and AI experts, Heather West, senior director of cybersecurity and privacy services at Venable LLP, and Sam Curry, CISO at Zscaler.
Field: Thanks for being with us again today to continue this discussion of AI.
Sam Curry: Good to be here. Thanks for having us.
Field: We started talking about deepfakes and security implications, and I mentioned this now-legendary story about the Hong Kong businessman and $25 million. From your perspectives, what are the most pressing security challenges posed by deepfakes? And what examples can you share about how deepfakes have been used maliciously and the impact on the targeted organizations?
Curry: First of all, there are many examples, but I think we have to proceed on the assumption that you aren't dealing with a real person, at least for now, unless you're face to face. And I say for now, because even that, over the long term, isn't a for-granted thing. So we should start from the premise that we don't trust the person to be who they say they are, and we have to build processes around that. The answer isn't necessarily more AI; you can't even necessarily trust the tools that you might deploy for that. And we can get fairly sophisticated with point and counterpoint, measure and countermeasure. That's a race, and you never know when the race is broken for the next step. So the thing we have to do is get really rigid in our processes. We don't love that. But what that means is we say: this is the way money moves, this is the way trust is established, this is the way transactions are done. And it is changing culture that's the hardest thing to do in many cases. So the answer here isn't just to find some whiz-bang widget you can do this with. Instead, it's: this is how it happens. For instance, in the Hong Kong case, the urban legend and the news said that everybody on the Zoom call was not real - so don't use Zoom calls to authorize payments, or at least large payments, at all. That message has to come from the leadership, from the top down, and it has to be a fireable offense to do it, even if you're on Zoom with everybody. That cultural change takes a lot of training, and how hard the challenge is varies enormously around the world and by rank in a company. But that's what we have to do, and that's my take. Heather, I'd love to hear what you think, though.
Field: I thought we were past the days when one individual could sign off on multiple millions of dollars.
Curry: You can do that too. But even then, you have to be careful - putting people in the same space. But Heather, what's your take?
Heather West: No, I agree. We're at this interesting point where all of our safeguards can be spoofed. And so the idea - we've seen this for years, someone getting a call or a text from a relative and saying, "I am stuck in this other country. I need you to give me money right now." And it's getting more and more sophisticated. The Hong Kong example is an outlier. That is a very sophisticated setup, the ability to spoof multiple people and to have it be live. But the payoff was very high for them, and we're going to continue to see more of that. And it will move us away from some of these ad-hoc authentication methods. And I hope it does. I hope that we come up with better processes. But to your point, Sam, I think the processes are going to be rigid, and frankly, they're going to be intended to add friction to the system, which nobody enjoys.
Curry: But even families can do this. Take your example of the relative who calls: certainly with my children, we have safe words for when someone thinks they're in danger and needs help. There's a word that we've discussed before a trip, and if that word appears in conversation - it's got to be the word for that trip - then that helps to build more trust in the situation. Even that can be faked, but if someone's held at gunpoint to give their safe word, then you probably want to help them anyway. But you can use these tricks even at the family level.
Delaney: So you've laid out some of the scenarios and the challenges. How can organizations detect and then mitigate these threats?
Curry: I think that's going to be a shifting target. There are ways you can do things out of band. There are sophisticated tools you can use - for instance, phones and other channels, things that are sub-audible, all sorts of cryptographic tricks. They will all have a shelf life. I tend to think of security as a cost to break: what does it cost an opponent in terms of opportunity and actual operational and development costs? And our job is to make that cost as high as possible. So if you're a family trying to protect yourself from run-of-the-mill criminals, the cost doesn't have to be very high. If you're a nation-state, on the other hand, and you're faced with an election and you're concerned about election tampering, as we started this conversation with, that's a different target, and there's a different cost to break that you have to achieve. Now keep in mind, cost to break comes down over time, and one of the things that the AI/ML toolkit is doing is making that cost come down faster. So what we've got to do as corporations or government organizations is figure out how to have a sustainably high cost to break, and then how to deal with the cases where somebody's willing to make that investment. So use things like deception. Use ways of detecting when somebody's doing that unusual thing. You put baits and lures out there, for instance. There are some identities that are false, but there are also some targets where, if anyone goes after them, you know that's absolutely the opponent. So you've got to have a more sophisticated strategy than just a single tech that you deploy if you're in that category, if someone's going to pay that very high cost to break to get to it.
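As a simple illustration of the deception tactic Curry describes - decoy identities and lure targets that no legitimate user should ever touch - a minimal sketch might look like the following. All account names, resource names and the alerting hook are hypothetical stand-ins, not any particular product's API.

```python
# Sketch: decoy accounts and lure resources; any touch is, by construction,
# attacker activity, which makes detection cheap and high-confidence.

DECOY_ACCOUNTS = {"svc-backup-legacy", "j.doe-finance-temp"}       # false identities
LURE_RESOURCES = {"s3://corp-payroll-archive-old", "\\\\fileserver\\m-and-a-drafts"}

def raise_alert(severity: str, summary: str) -> None:
    # Stand-in for whatever SIEM/SOAR integration an organization actually uses.
    print(f"[{severity.upper()}] {summary}")

def check_for_lure_activity(event: dict) -> bool:
    """Return True (and raise an alert) if an access event touches a decoy."""
    actor = event.get("actor", "")
    resource = event.get("resource", "")
    if actor in DECOY_ACCOUNTS or resource in LURE_RESOURCES:
        raise_alert("critical", f"Lure touched: actor={actor!r} resource={resource!r}")
        return True
    return False

# Example event: a decoy identity reaching for a lure target.
check_for_lure_activity({"actor": "svc-backup-legacy",
                         "resource": "s3://corp-payroll-archive-old"})
```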
West: And I think that part of this is that a lot of our safeguards have been predictive. We've been using AI to try to predict when something is fraudulent. And we've been using AI to say, "Is this actually Heather? Is this the way Heather usually behaves?" And those outliers are perhaps not as useful now, and so moving toward ways of authenticating and identifying people that are deterministic, using public key cryptography, that's going to add friction to the system, and it's going to be annoying until we get used to it. But realistically, we don't have a good way of detecting a lot of this, and we don't have a good way of predicting it.
Curry: The 'Office of No' may be coming back - that's what we've often called the security department. When I started out in cybersecurity, before we even called it that, information security was about one-half to 1% of the total IT spend. Not only has the IT spend overall grown; security is now 15 to 17% of the IT spend. At some point it will come down again. But there's an important reason for that: we're doing so much online, and it is so attractive to attackers of all sorts. So it is worth that friction when it's a $25 million transaction or the integrity of a nation-state's elections. I hate to say it, but that's true, and I despise saying it, because for years I've been saying, let's not be the Department of No.
West: I think that it's definitely - we forget how much easier these transactions are than they were 10-15 years ago. I can just push a button and magically, Sam, you have money.
Curry: I like that, by the way. That's good.
West: Adding some more friction - we're just going to have to grit our teeth and bear it, especially when you're talking about people with a big line item that they can spend. But it also matters for my own individual use; I use some of these apps to tip people. So there are all sorts of places where these controls are going to be impactful, and adding more friction to the whole system isn't going to work very well. And so, Sam, to your point, it's: what does that cost to break, and what's the right investment? I suspect that we're going to spend even more of our IT budget on security.
Curry: This is that zero trust phrase, where it often gets co-opted - it isn't about buying a zero trust product. We have built the internet in a maximal-trust way. We've built it thinking in terms of, I can connect this and I can connect that. That has been a wonderful thing. All the RFCs out there have been built in a way to say, "How do I maximize connectivity and establish protocols and standards for everything?" And that's great. The challenge used to be, how do you connect and how do you make things more connectable? Now we live in a world where everything is fundamentally connectable, and I might be minimizing that to some extent. So now we have to flip it around and go, "How do we remove options from attackers? How do we put the things we care about into channels that are incredibly hard to break?" And even then, when somebody makes that investment, it gets harder. That is a world of least trust. And I think it's really important that we start to design that way, because AI - as we've seen in many historical examples of where it's been applied, in adversarial games and in conflict - is going to find new attack vectors, so we can't leave purchase lying around for attackers to find ways in.
Delaney: Heather, earlier you said that we might see organizations moving away from more ad-hoc authentication methods. How should organizations be thinking right now about strengthening their authentication mechanisms?
West: I think you start building that authentication into your processes, and that might be a little bit of bolting something on, but it doesn't look like "give me a call" anymore, does it? It has to be something technical if you really need that safeguard, and figuring out how to do that well, in a way that doesn't get in our way, is something a lot of people are thinking about.
Curry: It's also more than just authentication. It's also authorization. So it is adaptive authentication, as we've said for years, but also adaptive authorization, and it's conditional access - based on how you're connecting and what you've inherited, from a property perspective, from the machines you're using, the location you're at. What's around you, who's around you, what are the policies that apply? And you might have several different sources of policy. We have to think up a whole new policy system. It's very doable when you're an enterprise, but we have to think about far greater context, and this is where government might play a role too. How do you do things from a citizen perspective? How do you do things from an agency and a department perspective? How do you do it in the private sector? All of these things might combine, because we've got many devices, and the conditions under which they exist and the conditions of access are going to change. So it's not just, "Do I trust you to be Heather, Tom or Anna?" It's to what degree, and what sort of access should you have right now under these conditions. And that is a variable thing that needs to be continually assessed. That's not something we've generally thought about in the past, but all the tools exist to do it, and the ability to do it exists in small examples right now throughout the internet. We have to think about that on a large scale, and people can do it.
West: That's a good point. All this is mutable. What I'm allowed to do today may be different than tomorrow.
Curry: And will be different, in fact, and even moment to moment, but it's the sort of thing where we have to be able to take multiple sources of authority for policy and render them and come out with an answer right now for whether or not the camera can be on, whether or not your storage can be accessed, and if you should be able to have the thick client work, or you should be put through a browser instance instead. And we can do that with respect to one or two sources of authority, but multiple ones as you move, maybe you're on an airplane, those sorts of things have to be thought about at a higher scale than has been thought about, and then do so in a least trust world.
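A rough sketch of the adaptive, conditional authorization Curry outlines might look like the following: the decision isn't just "is this Heather?" but "what should this session be able to do right now, under these conditions, according to every applicable source of policy?" The signal names, policy sources and thresholds here are illustrative assumptions, not any vendor's actual API.

```python
# Sketch: combine multiple policy sources into one continuously
# re-evaluated access decision, taking the most restrictive answer.

from dataclasses import dataclass

@dataclass
class AccessContext:
    identity_confidence: float   # 0.0-1.0, from continuous authentication signals
    device_managed: bool         # corporate-managed endpoint?
    location_risk: str           # "low", "medium" or "high"
    on_public_network: bool

def corporate_policy(ctx: AccessContext) -> str:
    if ctx.identity_confidence < 0.5:
        return "deny"
    if not ctx.device_managed or ctx.location_risk == "high":
        return "browser_isolation"   # isolated browser session instead of the thick client
    return "full_access"

def regulatory_policy(ctx: AccessContext) -> str:
    # A second source of authority, e.g. data-residency or sector rules.
    return "browser_isolation" if ctx.on_public_network else "full_access"

def decide(ctx: AccessContext) -> str:
    """Render all policy sources and return the most restrictive decision."""
    order = {"deny": 0, "browser_isolation": 1, "full_access": 2}
    decisions = [corporate_policy(ctx), regulatory_policy(ctx)]
    return min(decisions, key=order.__getitem__)

# Re-evaluated continuously as conditions change, not just once at login.
ctx = AccessContext(identity_confidence=0.8, device_managed=False,
                    location_risk="low", on_public_network=True)
print(decide(ctx))   # -> "browser_isolation"
```

The specific rules matter less than the shape: several authorities feed one decision, and the decision can change from moment to moment as the context does.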
Field: I do want to follow up on what you just said. Let's talk about biometrics and multifactor authentication, particularly phishing-resistant MFA, as we talk about it now. What advancements are you seeing that might help us pass - I guess it's not a liveness test anymore; it's a realness test?
Curry: So I used to joke that biometrics were the password you couldn't reset, but your notion of being phishing-proof is an important one. I was fond of behavioral biometrics at one point, because they change over time as you learn tasks and skills. However, even that can be spoofed in a world where AI can be brought to bear. So the question is, if you go back to that cost-to-break example I gave you: we used to talk about two-factor, and in two-factor, there's a cost to break A and a cost to break B. And why do we do that? Because there's an X factor above it - how difficult is it to break A plus B at the same time? So we used to say, make A's and B's costs to break as big as possible, and then the aggregate will be as hard as it can be. There's your friction, Heather, as you mentioned. What we never did was ask: why don't we make each individual one really small as a cost to break, and bring so many of them to bear that the X factor becomes incredibly large? So what if we had 60 little factors, each of which is relatively easy to break, but which in the aggregate are extremely difficult? If we start thinking about that - how do you pool those things, how do you track them continuously - it becomes very hard to clone someone, or to duplicate them, or to replay them, because you're watching them from 60 different angles. Privacy has to be taken care of, but we can do that in a service-based world. We can do that in a world where you can do security at the edge. So this is something where you could actually say we're continuously monitoring so many factors that for somebody to inject themselves into 60 factors and be able to clone them is way harder than just taking two factors offline somewhere and doing something with them. So imagine 60 behavioral metrics, some biometrics, some plastic stuff. And let's throw those passwords out, finally. That's the world I think we want to get to, rather than searching for that big, beefy, uber-fantastic, hard-to-break form factor we've been searching for for 40 years.
West: As long as you don't make me type in 60 different codes.
Curry: Oh, no, not happening.
West: I think that there are easier ways to have some of those factors that make it better for the user.
Curry: We can get low cost to break and low friction, but so many of them, that in the aggregate, it's got a very high cost to break.
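The pooling Curry describes can be sketched as a running evidence score: each cheap signal contributes a little, and only the aggregate has to clear a high bar. The signal names, likelihood ratios and threshold below are illustrative assumptions made for the sketch, not a description of any particular product.

```python
# Sketch: pool many weak, continuously collected signals into one
# aggregate confidence score, instead of relying on two strong factors.

import math

def combined_evidence(likelihood_ratios: dict[str, float]) -> float:
    """
    Each value is P(observation | genuine user) / P(observation | impostor)
    for one signal. Under a naive independence assumption, evidence adds in
    log space; any spoofed or missing signal drags the total down.
    """
    return sum(math.log(lr) for lr in likelihood_ratios.values())

session_signals = {
    "typing_cadence": 3.0,      # looks like the user's normal rhythm
    "mouse_dynamics": 2.5,
    "device_fingerprint": 4.0,
    "usual_network": 1.5,
    "voice_match": 3.5,
    "geovelocity_ok": 2.0,
    # ...in practice, dozens more low-cost signals
}

score = combined_evidence(session_signals)
THRESHOLD = math.log(1000)   # require roughly 1000:1 odds in favor of the genuine user
print(f"log-evidence={score:.2f}",
      "allow" if score >= THRESHOLD else "step-up verification")
```

The point is that an attacker has to fake many independent channels at once to clear the threshold, rather than defeating one or two strong factors offline.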
Field: Heather, Sam, I want to talk a little bit about regulatory standards and ethical standards as well. What labeling standards and regulatory frameworks do you feel should be implemented to address the challenges posed by deepfakes and to ensure content integrity?
West: I think it's interesting, all of these discussions, because we're hoping that there's some silver bullet - that we can just label the stuff, and then we can authenticate it, and we'll have it figured out. But part of this is, as is always the case with regulation, it's so context-dependent what the right way to do this is and whether labeling even works. Outside of the cybersecurity space, people are worried about synthetic content - we create deepfakes, we create images, we create video of some star - and the idea that we label it or detect it is similar, but the application is different. And so some of the proposals around someone's likeness are going to be a little bit different in that context. I don't know that we have a good idea of exactly what helps in these situations. And so you start getting very specific, and you say, "I need this piece of information authenticated." And I think it might be more promising to authenticate real stuff than to try to inject labels into the AI supply chain. There have been some proposals that AI content that has been modified or generated needs to be labeled in some way, and I think that's a losing fight. So we need to be thinking about these authorizations and authentications in the let's-label-the-things-that-are-good-and-real sense, using public key cryptography and other things that are reasonably well understood. I don't know what role regulation plays on this one, because part of this is, when we're talking about fraud and we're talking about crime, it's already fraud and it's already crime, and so it's a question of protecting against it. Maybe there's some room for standards. Maybe there's some room for new kinds of tools, but it remains to be seen a little bit. So I don't actually have a good answer for you.
Curry: I have a slightly different answer. I agree with you completely on the framing of the question - I think it might even be a false dilemma. But this is what government knows how to do. We say, "Government, what are you going to do?" And they're like, "The meetings will continue until morale improves." The desire is to say, we want to do the right thing, so we're going to make you do the right thing. And I totally sympathize with that. There's another set of tools the government has at its disposal, though, aside from regulation and diplomacy: it's to stimulate innovation, because we don't have the answers yet. I don't think it's a moonshot either, but it might be to publish a series of problems, to provide things like tax breaks and incentives, and to use things like DARPA. We did this before. We did it when we had a DDoS problem. We did it when we stimulated things like going to the moon, and the internet itself came from things like that. Countless breakthroughs happen that way, and it's happened in many areas. I think that needs to be seen. And I think we could do a lot to give organizations and entrepreneurs and academia a real boost with government support and help. That, going hand in hand with things like labeling - when we know what we want labeled - and regulations - when we know what needs to be regulated - will be a very powerful combination, but that missing element needs to be there as well. Heather, I don't know what you think of that, but speaking from the private sector, I'd love to see that.
West: I think that's right. And NIST, probably imminently, will be releasing a report - they released a draft report on reducing risks posed by synthetic content. It's 100 pages long. It's a wonderful survey of the state of play and where we need to do more research, where we need to do more work. And I do think that that is going to inform the way that the government authenticates its own content and tracks its own content provenance, so that they are pointing at something and saying, "This is credible. This comes from the CDC. This comes from DHS," and making that easier. That's going to be an ecosystem challenge, though. To your point, Sam, I think there's going to be a lot of work from a lot of different places, including industry, to make that real, because right now, you can authenticate your content for me, and I will never know.
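The "authenticate the real stuff" approach West describes rests on a familiar primitive: a publisher signs genuine content with a private key, and anyone can verify it against the published public key. The sketch below, using the Python cryptography package with Ed25519, is a minimal illustration under that assumption; real provenance schemes (C2PA-style) layer metadata, certificate chains and key distribution on top, which is the hard part she points to.

```python
# Sketch: sign authentic content at the source, verify it anywhere.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: sign the genuine content once, at the source.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"Official advisory text, image bytes, or a video hash goes here."
signature = private_key.sign(content)

# Consumer side: anything that fails verification is simply unlabeled and
# untrusted, rather than trying to detect every possible fake.
def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(content, signature))                   # True
print(is_authentic(content + b" tampered", signature))    # False
```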
Curry: Also, Tom, you mentioned one key word in what you said that Heather and I usually talk about too: you said ethical. I also think it's really important that we continue having the public dialogues, because any tool that comes out could get abused with changes of administration, or over time, or as the state of the art changes. And I'm concerned about not just breakthroughs in AI but in other areas, like quantum cryptography or synthetic manufacturing, if we're talking about authenticating real objects - which I think is a great idea, using things like PKI and actual rooted sources of trust. But we've got to be having those discussions, and we've got to have them be all-of-the-community types of discussions, even with other countries. So honestly, we need to be making sure that that becomes part of the process in a continual way.
Field: I hope it does, because too often I hear from my chief privacy officer friends that when the word ethics comes up in board meetings, the CIO and the CISO say, "We're not tackling that one. It's yours."
Curry: Everybody should be having that conversation, at least at the board level, to set the tone. It's absolutely vital, because there are four levels. You've got this top-level ethical discussion to have. Then you've got "what does it mean for our business?" Then, what do you do in IT? And then, at the bottom, what do you do in cyber? Those are all interrelated, and I put them in that order.
Field: Well said. Anna, let me turn this back to you to bring us home.
Delaney: Well, fascinating, as always - you've educated us. Sam and Heather, thank you for sharing your perspectives and thoughts on this huge topic today.