
Top Lawmaker Slams Tech Firms for Failing to Fight AI Misuse

Senator Warns Tech Giants Are Failing to Address AI Misuse in 2024 Elections
Sen. Mark Warner, D-Va., called for "real action" with less than 100 days before the November vote. (Image: Shutterstock)

Leading technology companies are still failing to adequately address the deceptive misuse of artificial intelligence in the 2024 elections, a top lawmaker warned Wednesday, with less than 100 days until U.S. election day.

Sen. Mark Warner, the Virginia Democrat who chairs the Senate Intelligence Committee, said he remains "deeply concerned" about the lack of standardized information-sharing channels between social media platforms, generative AI vendors and public institutions that aim to address the misuse of AI throughout the election cycle. Companies like Google, Meta and OpenAI provided "a very concerning lack of specificity and resourcing" in response to questions about ongoing efforts to combat the misuse of AI, Warner said in a statement sent to reporters.

The Senate Intelligence Committee has pushed major tech firms to follow through with commitments they made in February when signing the Tech Accord to Combat Deceptive Use of AI in 2024 Elections. The agreement calls on companies to spearhead public education campaigns, publish policies aimed at addressing deceptive AI election content and develop cross-industry partnerships to combat misinformation.

Warner released several responses from the companies that signed the accord about their ongoing efforts, saying those firms have fallen short. They "offered little indication of detailed and sustained efforts to engage local media, civic institutions and election officials and equip them with resources to identify and address misuse of generative AI tools in their communities."

Google, for its part, said it was still in the process of "building on the ways in which we help our users identify AI-generated content" through a series of new tools and policies. Those measures include content labels on platforms such as YouTube indicating when videos have been made with generative AI features like Dream Screen. Several companies also announced efforts to ramp up red teaming with the assistance of AI technologies and to foster expert feedback on their approaches to governing generative AI content.

Microsoft announced plans in its response to Warner to roll out an awareness campaign in the coming weeks "aimed at giving American voters the information they need to be aware and resilient to attempts to use deceptive AI." Facebook parent company Meta also said it was "looking for ways to make it more difficult" for users to remove watermarks that identify AI-generated content.

"I’m disappointed that few of the companies provided users with clear reporting channels and remediation mechanisms against impersonation-based misuses," Warner said. "With the election less than 100 days away, we must prioritize real action and robust communication to systematically catalog harmful AI-generated content."

Top social media sites have warned throughout 2024 that AI-fueled misinformation remains one of the top election security threats globally (see: APT Hacks and AI-Altered Leaks Pose Biggest Election Threats). FBI Agent Robert K. Tripp previously told Information Security Media Group the bureau has already seen a rise in nation-state threats and AI-generated deepfakes aimed at influencing the outcome of the United States' election.


About the Author

Chris Riotta

Managing Editor, GovInfoSecurity

Riotta is a journalist based in Washington, D.C. He earned his master's degree from the Columbia University Graduate School of Journalism, where he served as 2021 class president. His reporting has appeared in NBC News, Nextgov/FCW, Newsweek Magazine, The Independent and more.



