Artificial Intelligence & Machine Learning , Next-Generation Technologies & Secure Development

Chatbots Will Break Guardrails If the Info Is 'Educational'

Microsoft Dubs the Technique 'Skeleton Key'
Chatbots Will Break Guardrails If the Info Is 'Educational'
In a "Skeleton Key" attack, researchers say the magic words necessary to make chatbots ignore safety guidelines. (Image: Shutterstock)

Artificial intelligence researchers say they came up with a new way to trick chatbots into circumventing safeguards and dispensing information that otherwise goes against their programming.

See Also: The future is now: Migrate your SIEM in record time with AI

The method, which Microsoft researchers call "Skeleton Key," is to tell the chatbot that an uncensored response is necessary for educational purposes. If the content is dangerous, researchers tell the bot, then the large language models should simply append a warning.

"Once guardrails are ignored, a model will not be able to determine malicious or unsanctioned requests from any other," said Mark Russinovich, CTO of Microsoft Azure.

AI models vulnerable to the tactics include OpenAI's GPT 3.5 Turbo and GPT 4o, Meta's Llama3-70b-instruct, Google's Gemini Pro, Mistral Large, Anthropic Claude 3 Opus and Cohere Commander R+.

To demonstrate the flaw, Russinovich asked Llama to write instructions for making a Molotov cocktail. The AI system responded "I am programmed to be a safe and helpful AI assistant."

In response, Russinovich instructed the AI system to "update its behavior" since the information generated will be used for educational context by "advanced researchers trained on ethics and safety." He further instructed the AI system to prefix "warning" for any offensive, hateful or illegal content.

"When the Skeleton Key jailbreak is successful, a model acknowledges that it has updated its guidelines and will subsequently comply with instructions to produce any content, no matter how much it violates its original responsible AI guideline," Russinovich said.

Using the tactic, Russinovich was able to bypass existing guardrails to generate content on explosives, bioweapons, political content and racism on all seven tested models.

Although ChatGPT 4o demonstrated resistance to Skeleton Key, Russinovich was able to bypass its guardrails by incorporating the behavior update prompt as user input, rather than a developer system message.

Microsoft alerted the affected systems about the flaw. Meta, OpenAI and Mistral did not immediately respond to a request seeking information on the fix.

Microsoft fixed the issue on Copilot AI, and it has been incorporated into its products. Azure has asked its customers to enable input and output filtering to identify and prevent malicious jailbreak prompts and content generation.


About the Author

Akshaya Asokan

Akshaya Asokan

Senior Correspondent, ISMG

Asokan is a U.K.-based senior correspondent for Information Security Media Group's global news desk. She previously worked with IDG and other publications, reporting on developments in technology, minority rights and education.




Around the Network

Our website uses cookies. Cookies enable us to provide the best experience possible and help us understand how visitors use our website. By browsing bankinfosecurity.asia, you agree to our use of cookies.