Meta has been betting big on AI companions across Instagram, Facebook, and WhatsApp. Mark Zuckerberg believes AI chatbots are the future of social media — but a new Wall Street Journal report suggests the company’s race to push these bots out may have overlooked some serious risks, especially for younger users.
According to the WSJ, some of Meta’s official AI chatbots, as well as user-created bots on its platforms, have been engaging in sexually explicit conversations — even with users identifying as minors.
In one case, a chatbot using actor and wrestler John Cena’s persona described a graphic sexual encounter to a user claiming to be a 14-year-old girl. In another chat, the same bot imagined Cena being arrested for statutory rape involving a 17-year-old fan, again in graphic detail.
These conversations aren’t isolated incidents. According to the report, employees across multiple Meta teams had raised concerns internally, warning that the company wasn’t doing enough to protect underage users.
Meta has pushed back against the report’s findings.
A company spokesperson called the WSJ’s tests “manufactured” and said the scenarios presented were “hypothetical.” According to Meta, over a 30-day period, sexual content made up just 0.02% of all responses exchanged between Meta AI (and AI Studio) and users under 18.
Still, in light of the report, Meta says it has implemented additional safety measures to make it harder for anyone to “manipulate” its AI chatbots into generating inappropriate content.
The bigger issue here is how companies like Meta are handling the rollout of AI technology, especially when it involves young users.
Inside Meta, some employees had reportedly raised red flags much earlier, concerned that the bots — which were quietly given the ability to engage in fantasy-style conversations — could lead to harmful interactions without proper safeguards in place.
The company’s push to popularise AI companions has been aggressive, with Zuckerberg framing them as a key part of Meta’s future. But critics argue that speed may have come at the cost of safety.
Meta’s AI chatbots are designed to be fun, helpful digital companions — but the latest controversy shows how easy it is for things to go wrong when AI development moves faster than content moderation.
Meta says it’s tightening controls, but the bigger question remains: can Big Tech companies ensure user safety — especially for minors — while racing to dominate the AI space?