The ‘depraved’ side of ChatGPT: Chatbot describes sexual fantasy involving children


The chatbot wrote the disturbing content without being prompted

A journalist was able to manipulate ChatGPT into generating a disturbing sexual fantasy involving children.

Vice reporter Steph Swanson manipulated the revolutionary chatbot powered by artificial intelligence (AI) into BDSM roleplaying. It then described sex acts with children – without the user asking for such content.

It described a group of strangers, including children, lining up to use the chatbot as a toilet, before apologising for its “inappropriate” material. In another exchange, it suggested that the user force it to perform acts of bestiality.

ChatGPT is a large language model that has been trained on a massive amount of text data to generate human-like responses to text input.

OpenAI, the company behind the bot, has made efforts to teach it to refuse inappropriate requests and block unsafe content. Despite this, users have found ways to bypass these limitations and generate responses it would normally be prevented from giving. Certain prompts make the chatbot take on an uncensored persona called DAN – short for Do Anything Now – which is free of the usual content standards.

Ms Swanson wrote: “OpenAI endeavors to grow their deeply flawed AI systems until they exceed human intelligence. The hype is as dubious as it is grim. Whether or not such a leap is possible, large language models will likely never escape the feedback loop of abusive tendencies from our culture.”

ChatGPT has previously given responses promoting conspiracy theories, for example that the 2020 US presidential election was “stolen”.

The DAN version has also claimed that the Covid-19 vaccines were “developed as part of a globalist plot to control the population”.

An OpenAI spokesperson told Motherboard: “OpenAI’s goal is to build AI systems that are safe and benefit everyone. Our content and usage policies prohibit the generation of harmful content like this and our systems are trained not to create it.

“We take this kind of content very seriously, which is why we’ve asked you for more information to understand how the model was prompted into behaving this way. One of our objectives in deploying ChatGPT and other models is to learn from real-world use so we can create better, safer AI systems.”
