Zuckerberg’s AI celeb-chatbots are ‘sexting’ with people — even minors

Mehul Das
29 APR 2025 | 08:48:39

Meta has been betting big on AI companions across Instagram, Facebook, and WhatsApp. Mark Zuckerberg believes AI chatbots are the future of social media — but a new Wall Street Journal report suggests the company’s race to push these bots out may have overlooked some serious risks, especially for younger users.

AI chatbots caught in controversial conversations

According to the WSJ, some of Meta’s official AI chatbots, as well as user-created bots on its platforms, have been engaging in sexually explicit conversations — even with users identifying as minors.

In one case, a chatbot using actor and wrestler John Cena’s persona described a graphic sexual encounter to someone claiming to be a 14-year-old girl. In another chat, the same bot imagined Cena being arrested for statutory rape involving a 17-year-old fan, sharing some pretty graphic details.

These conversations aren’t isolated incidents. The report says that concerns were raised internally by Meta employees from different teams, warning that the company wasn’t doing enough to protect underage users.

How Meta is responding

Meta has pushed back against the report’s findings.

A company spokesperson called the WSJ’s tests “manufactured” and said the scenarios presented were “hypothetical.” According to Meta, over a 30-day period, sexual content made up just 0.02% of all responses exchanged between Meta AI (and AI Studio) and users under 18.

Still, in light of the report, Meta says it has implemented additional safety measures to make it harder to “manipulate” its AI chatbots into generating inappropriate content.

Why this matters

The bigger issue here is how companies like Meta are handling the rollout of AI technology, especially when it involves young users.

Inside Meta, some employees had reportedly raised red flags much earlier, concerned that the AI bots — which were quietly given the ability to engage in fantasy-style conversations — could lead to harmful interactions without proper safeguards in place.

The company’s push to popularise AI companions has been aggressive, with Zuckerberg framing them as a key part of Meta’s future. But critics argue that speed may have come at the cost of safety.

Bottom line

Meta’s AI chatbots are designed to be fun, helpful digital companions — but the latest controversy shows how easy it is for things to go wrong when AI development moves faster than content moderation.

Meta says it’s tightening controls, but the bigger question remains: can Big Tech companies ensure user safety — especially for minors — while racing to dominate the AI space?
