Attack of the Clones: Feds Seek Voice-Faking Defenses

If AI Voice Cloning Can't Be Stopped, That Would Serve as Red Flag for Policymakers

Do you have what it takes to build defenses that can easily and reliably spot voice cloning that is generated using artificial intelligence tools? If so, the U.S. Federal Trade Commission wants to hear from you.

The agency last November announced a Voice Cloning Challenge designed "to encourage the development of multidisciplinary approaches - from products to policies and procedures - aimed at protecting consumers from AI-enabled voice-cloning harms, such as fraud and the broader misuse of biometric data and creative content."

The challenge offers a $25,000 top prize, provided the winning entry meets three key requirements, plus $4,000 for second place and $2,000 each for up to three honorable mentions.

The FTC said it hopes the Voice Cloning Challenge will "foster breakthrough ideas on preventing, monitoring and evaluating malicious voice cloning," as the ability of AI tools to generate ever more convincing-sounding fakes improves.

The challenge is open for new entries until Jan. 12. Entrants must submit a one-page abstract and a 10-page detailed explanation, and they may optionally include a video demonstrating how their submission would work.

Terms and conditions apply. Only individuals or small groups of fewer than 10 people can win the cash prizes, although one large organization could win a recognition award that carries no remuneration.

The FTC said all entries will be judged on the following three criteria, tied to answering the specified questions:

  • Practicality: "How well might the idea work in practice and be administrable and feasible to execute?"
  • Balance: "If implemented by upstream actors, how does the idea place liability and responsibility on companies and minimize burden on consumers?"
  • Resilience: "How is the idea resilient to rapid technological change and evolving business practices?"

The FTC doesn't just want consumer-level defenses that would be easy for individuals to implement. Ideally, it wants defenses that work "upstream" - stopping fraudsters attempting to extort victims and combating the illicit use of actors' voices - before such attacks can even reach consumers. Ensuring those defenses preserve users' privacy is another goal.

While this might sound like a tall order, the challenge is also designed to test whether effective defenses against AI voice cloning might even exist.

"If viable ideas do not emerge, this will send a critical and early warning to policymakers that they should consider stricter limits on the use of this technology, given the challenge in preventing harmful development of applications in the marketplace," the FTC said.

Fraudsters Challenge Security Defenses

The agency's challenge highlights how fraudsters keep turning the latest tools and technology to their advantage.

One tactic increasingly being adopted by criminals is virtual kidnapping, or "cyber kidnapping," in which they pretend to have abducted an individual, as seen in a recent case involving a Chinese teenager in Utah. In some cases, experts say, criminals also play real-sounding audio of the supposedly abducted individual as proof, and they sometimes hijack that individual's SIM card so the person can't be reached by family or co-workers, who are pressured to pay immediately - or else (see: Top Cyber Extortion Defenses for Battling Virtual Kidnappers).

AI tools that don't just generate convincing-sounding audio but also convincing-looking deepfake video to match are another growing concern. Last week, Singaporean Prime Minister Lee Hsien Loong warned that scammers have been using deepfake videos featuring his likeness to hawk cryptocurrency scams and cautioned Singaporeans about claims of crypto giveaways or guaranteed crypto "returns on investments."

Criminals are already using deepfake videos to try to bypass financial services firms' "know your customer" identity verification practices, and the problem is only going to get worse, experts warn.

"As AI makes it easier and cheaper to impersonate someone's likeness and identity markers - often found in a breach - it will become simpler for attackers to take over accounts and steal money, data, impact brands," and more, said Rachel Tobac, CEO of SocialProof Security, in a post to X, formerly known as Twitter.


About the Author

Mathew J. Schwartz

Executive Editor, DataBreachToday & Europe, ISMG

Schwartz is an award-winning journalist with two decades of experience in magazines, newspapers and electronic media. He has covered the information security and privacy sector throughout his career. Before joining Information Security Media Group in 2014, where he now serves as executive editor of DataBreachToday and for European news coverage, Schwartz was the information security beat reporter for InformationWeek and a frequent contributor to DarkReading, among other publications. He lives in Scotland.
