Fake Out: Disinformation Campaigns Get Generative AI Boost
Nation-States Running Information Operations Embrace AI-Generated Images and Video

Hackers wielding generative artificial intelligence tools have been the focus of countless headlines, although they have yet to pose a serious cybersecurity risk. So say researchers at Google's threat intelligence group Mandiant, as they sound an alarm about another rising threat: AI-driven disinformation campaigns.
Security experts were quick to recognize the potential of generative AI tools such as ChatGPT to boost the hacking abilities of low-level actors and to supply threat actors with convincing bait for phishing attacks. Still, chatbots' use in intrusion operations "remains limited and primarily related to social engineering," Mandiant said in a Thursday blog post.
If there's one use case where generative AI has proved particularly useful to bad actors so far, it's information operations, which increasingly feature AI-generated content - particularly images and video.
Since 2019, Mandiant researchers have identified "numerous instances" of information operations that tap some form of AI. Nation-state actors from Russia, China, Iran, Ethiopia, Indonesia, Cuba, Argentina, Mexico, Ecuador and El Salvador used generative adversarial networks - a category of AI-based image generation capability - to produce realistic headshots for profile photos of inauthentic personae on social media. The widespread availability of AI-based image generation tools has also allowed nonstate actors, such as 4chan forum participants, to employ them for malicious purposes. Such users can disguise the photos' AI origin by adding filters or by retouching facial features.
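For readers unfamiliar with the technique: a GAN pairs two neural networks - a generator that turns random noise into an image and a discriminator that tries to tell generated images from real ones - and trains them against each other until the generator produces convincing fakes. Below is a minimal PyTorch sketch of that structure; the layer sizes and 64x64 output are illustrative assumptions, far smaller than the models behind realistic headshot generators.

```python
import torch
import torch.nn as nn

# Two-network GAN structure: the generator maps random noise to an image;
# the discriminator scores how "real" an image looks. During training the
# two are optimized against each other (training loop omitted here).

LATENT_DIM = 100          # size of the random noise vector
IMG_PIXELS = 64 * 64 * 3  # a small 64x64 RGB image, flattened

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 1024), nn.ReLU(),
    nn.Linear(1024, IMG_PIXELS), nn.Tanh(),  # pixel values in [-1, 1]
)

discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),  # probability the input is real
)

# One forward pass of the adversarial pair:
noise = torch.randn(8, LATENT_DIM)           # batch of 8 noise vectors
fake_images = generator(noise)               # synthetic images
realism_scores = discriminator(fake_images)  # discriminator's verdicts
print(realism_scores.shape)                  # torch.Size([8, 1])
```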
Another type of AI image generation capability, the text-to-image model, accepts a text prompt and creates a matching image. Experts expect adoption of text-to-image models to increase steadily as more powerful tools become publicly available and users discover fresh use cases. Because seeing is believing, experts warn that image-based tools could pose a greater deceptive threat than text-based generative AI and become the AI tool of choice for disinformation.
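The workflow such tools enable is strikingly simple. As a hedged illustration - the library and model below are one public example, not tools attributed to any actor in this article - generating an image from a text prompt takes only a few lines with the open-source Hugging Face diffusers package:

```python
# Text-to-image in a few lines, using the open-source Hugging Face
# "diffusers" library and a publicly released model (illustrative choice;
# assumes a CUDA GPU is available).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# A plain-language prompt is the only input needed.
image = pipe("photorealistic portrait of a news anchor at a desk").images[0]
image.save("generated.png")
```

The low barrier is the point: no graphics skill is required, only a sentence describing the desired scene.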
More powerful tools also facilitate the creation of more authentic-looking fake videos. Since 2021, hackers have used publicly available AI-generated and AI-manipulated video technology to create fake video broadcasts and to superimpose individuals' faces onto people in existing videos. Mandiant expects this type of impersonation to increase as superimposition technology improves.
One prevailing use case for such tools remains creating persuasive visual and auditory content that suits specific political narratives. In May, a Chinese advanced persistent threat group that supports Beijing's political interests used an AI-generated presenter to deliver a video mimicking a real news report. The group, which Mandiant tracks as DragonBridge, had earlier distributed AI-generated images, including, in March, a fake image of former U.S. President Donald Trump in an orange prison jumpsuit, although the group did not create that image itself.
"Hyper-realistic AI-generated content may have a stronger persuasive effect on target audiences than content previously fabricated without the benefit of AI technology," Mandiant wrote.
Mandiant is hardly the only organization seeing an uptick in the abuse of generative AI for visual disinformation. Social media analysis firm Graphika in 2022 spotted DragonBridge activity promoting "video footage of fictitious people almost certainly created using artificial intelligence techniques."
Modern warfare is also changing to embrace more powerful AI tools for disinformation. Ukrainian intelligence in March 2022 warned the populace about a possible onslaught of Russian deepfake videos. Days later, unknown adversaries posted a deepfake video onto a hacked Ukrainian news site, showing Ukrainian President Volodymyr Zelenskyy supposedly capitulating to Russia.
The large language models that power AI chatbots could make it easier for bad actors - including espionage agencies - to overcome linguistic barriers and carry out more attacks across the world. While Mandiant has not yet observed LLM-based AI tools being used in information operations, it forecasts "rapid adoption" as their capabilities improve.
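To make the linguistic-barrier point concrete, here is a minimal sketch of localizing a single message into several languages through an LLM API. It assumes the OpenAI Python client and an illustrative model name; neither is cited in Mandiant's report.

```python
# Localizing one message into several languages via an LLM API.
# Assumes the OpenAI Python client; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def localize(text: str, language: str) -> str:
    """Ask the model to render `text` in natural, fluent `language`."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Translate the user's text into natural, fluent {language}."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

for lang in ("Spanish", "Indonesian", "Ukrainian"):
    print(lang, "->", localize("Example message to localize.", lang))
```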