AI vs. Deepfake: Detecting, Disrupting and Defending

ZeroFox's Chief Development Officer Bryan Ware on Fighting AI With AI
The nature of fraudulent content has taken on new dimensions with the emergence of generative AI. This new era has ushered in tools capable of creating convincing fake images, voices and videos that can be difficult to distinguish from legitimate content, warned Bryan Ware, chief development officer at ZeroFox.
These technologies have far-reaching implications and are being misused for political agendas and financial scams. Deepfake videos are being employed for deceptive purposes, including the impersonation of high-ranking executives, he said.
Ware suggested a two-pronged approach to addressing the challenge. First, enhance end-user awareness by encouraging skepticism and verification of content's legitimacy. Second, organizations should protect their brand identity. "Make sure that they're monitoring for imposter and fraud content," he said.
Fighting AI with AI may be the only way forward. "The thing about generative AI is it's going to get harder and harder to see what is legitimate or what is not legitimate," Ware said. "And to scale to that, it won't suffice to just use human analysts to try to find these things. We're going to have to use AI to find AI-generated fakes. That is the only way that we're going to get there."
In this video interview with Information Security Media Group at Black Hat USA 2023, Ware also discussed:
- The use of malicious AI for fraudulent content creation;
- How hackers make money off fraudulent content or deepfake videos;
- Potential risks of corrupting benign AI data sets.
Ware is a technology leader and innovator who has founded companies, patented technologies, raised venture capital and private equity, and served in executive roles in government. He was the first presidentially appointed assistant director for cybersecurity at the Cybersecurity and Infrastructure Security Agency at the U.S. Department of Homeland Security.