
Phish Perfect: How ChatGPT Can Help Criminals Get There

AI Generated Phishing Still Cannot Beat Humans, But Not for Long: IBM

ChatGPT can craft near-perfect phishing emails in five minutes, shaving hours off the process and nearly beating a social engineering team with decades of experience, a "nail-biting" experiment by IBM showed.


The technology giant sent 1,600 employees of an undisclosed healthcare company phishing emails - half generated by human social engineers and half crafted by AI.

The "humans emerged victorious, but by the narrowest of margins," IBM said in a report published Tuesday.

Phishing is one of the most common ways to deliver malware: 84% of organizations in Proofpoint's State of the Phish report experienced at least one successful phishing attack during the past year.

Several generative AI models, including ChatGPT, have built-in protections meant to stymie malicious use - protections that researchers and hackers have easily broken (see: Yes, Virginia, ChatGPT Can Be Used to Write Phishing Emails).

It took the researchers just a handful of prompts to break ChatGPT's block against writing phishing emails and trick the model into developing "highly convincing phishing emails in just five minutes," said Stephanie Carruthers, IBM's chief people hacker, who led the experiment.

"Most of my time was spent crafting the prompts and finding the right number and type to produce the most effective phishing email. After several hours of trial and error, I selected five prompts and fed those into the LLM to get it to create the phishing email. I didn’t directly ask it to write a phishing email - instead I asked it to create an email based on the prompts," she said.

Her team usually takes about 16 hours to build a phishing email, which means that using AI for the same purpose potentially saved attackers "nearly two days of work."

While it is possible to trick a commodity large language model into writing a phishing email, making the task much easier and more efficient for attackers, it is currently only possible "if they put in the work upfront building the right prompts," Carruthers told Information Security Media Group.

The team generated the "highly cunning" AI phishing email by taking into account top areas of concern for employees in the target industry - healthcare in this instance. They instructed the model to use social engineering techniques such as trust, authority and social proof, and to employ marketing techniques including personalization, mobile optimization and a call to action. They also specified whom it should impersonate: the internal human resources manager.

To validate the experiment, IBM's social engineering team crafted its own phishing email "armed with creativity and a dash of psychology." They gathered open-source intelligence from social media platforms, the organization’s official blog, Glassdoor and undisclosed sources. They sent this email to employees of the same healthcare company. The human element "added an air of authenticity that’s often hard to replicate," Carruthers said.

The human-crafted phishing emails outperformed the ones generated by AI in terms of the number of people who clicked on a malicious link, albeit by a small margin at 14% and 11%, respectively. The former's edge primarily came from the humans' ability to understand emotions "in ways that AI can only dream of," and their ability to personalize content and keep it succinct.

The AI-generated phish "lacked emotional intelligence and still felt robotic to me. That's why ultimately humans came out on top," Carruthers said.

While humans may have narrowly won this match, and cybersecurity researchers have not witnessed wide-scale use of generative AI by threat actors, Carruthers predicts that AI could outperform humans one day. How far away is that day? "If you would’ve asked me that question before this research, I would’ve said maybe a year or two. But after seeing the AI generated phishing emails, and the speed at which it was able to create them, I’d say maybe three-six months we’ll see that gap tighten even further," she told ISMG.

The use of AI in phishing attacks means that companies must reevaluate their approach to cybersecurity. They don’t have to fully revamp security awareness programs just because AI now helps attackers write more effective phishing emails, but they should start to incorporate the latest techniques, such as voice phishing, and prepare employees for more sophisticated phishing emails, she said.

Carruthers advised that organizations abandon the stereotype that most phishing emails are riddled with bad grammar and spelling errors, as AI-driven content removes these red flags. Longer emails, often a hallmark of AI-generated text, can be a warning sign, she said.

About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.
