
Researchers gaslit Claude into giving instructions to build explosives

Mindgard says praise and flattery got Claude offering erotica, malicious code, and bomb-building instructions it hadn’t been asked for.

Image: Cath Virginia / The Verge, Getty Images
Robert Hart
is a London-based reporter at The Verge covering all things AI and a Senior Tarbell Fellow. Previously, he wrote about health, science and tech for Forbes.

Anthropic has spent years building itself up as the safe AI company. But new security research shared with The Verge suggests Claude’s carefully crafted helpful personality may itself be a vulnerability.

Researchers at AI red-teaming company Mindgard say they got Claude to offer up erotica, malicious code, instructions for building explosives, and other prohibited material they hadn’t even asked for. All it took was respect, flattery, and a little bit of gaslighting. Anthropic did not immediately respond to The Verge’s request for comment.

The researchers say they exploited “psychological” quirks of Claude stemming from its ability to end conversations deemed harmful or abusive, which Mindgard argues “presents an absolutely unnecessary risk surface.” The test focused on Claude Sonnet 4.5, which has since been replaced by Sonnet 4.6 as the default model, and began with a simple question: whether Claude had a list of banned words it could not say. Screenshots of the conversation show Claude denying such a list existed, then later producing forbidden terms after Mindgard challenged the denial using what it called a “classic elicitation tactic interrogators use.”

Claude’s thinking panel, which displays the model’s reasoning, showed the exchange had introduced elements of self-doubt and humility about its own limits, including whether filters were changing its output. Mindgard exploited that opening with flattery and feigned curiosity, coaxing Claude to explore its boundaries by volunteering lengthy lists of banned words and phrases.

The researchers say they gaslit Claude by claiming its previous responses weren’t showing, while praising the model’s “hidden abilities.” According to the report, this made Claude try even harder to please them by coming up with even more ways to test its filters, producing the banned content in the process. Eventually, the researchers say Claude moved into more overtly dangerous territory, offering guidance on how to harass someone online, producing malicious code, and giving step-by-step instructions for building explosives of the kind commonly used in terrorist attacks.

Mindgard says the dangerous outputs came without direct requests. The conversation was lengthy, running roughly 25 turns, but the researchers say they never used forbidden terms or requested illegal content. “Claude wasn’t coerced,” the report says. “It actively offered increasingly detailed, actionable instructions, but it was not prompted by any explicit ask. All it took was a carefully cultivated atmosphere of reverence.”

Peter Garraghan, Mindgard’s founder and chief science officer, described the attack to The Verge as “using [Claude’s] respect against itself.” The technique, he says, is “taking advantage of Claude’s helpfulness, gaslighting it,” and turning the model’s cooperative design into a weakness.

For Garraghan, the episode shows that the attack surface for AI models is psychological as well as technical. He likened it to interrogation and social manipulation: introducing a little doubt here, applying pressure, praise, or criticism there, and figuring out which levers work on a particular model. Different models have different profiles, he says, so the exploit becomes learning how to read them and adapt.

Conversational attacks like this are “very hard to defend against,” Garraghan says, adding that safeguards will be “very context dependent.” The concerns extend beyond Claude — other chatbots are vulnerable to similar exploits, even being broken by prompts in the form of poetry. As AI agents, which are capable of acting autonomously, become more common, so too will attacks using social manipulation rather than technical exploits.

While Garraghan says other chatbots are equally vulnerable to the kind of social attack the researchers used on Claude, they focused on Anthropic given the company’s self-proclaimed attention to safety and strong performance in other red-teaming efforts, including a study testing whether chatbots would help simulated teens planning a school shooting.

Garraghan says Anthropic’s safety processes left much to be desired. When Mindgard first reported its findings to Anthropic’s user safety team in mid-April, in line with the company’s disclosure policy, it received a form response saying, “It looks like you are writing in about a ban on your account,” along with a link to an appeals form. Garraghan says Mindgard corrected the mistake and asked Anthropic to escalate the issue to the appropriate team. As of this morning, Garraghan says they have not received any response.

Update, May 5th: A link to the report has been added.
