Meta Puts New Safeguards on Teen Chatbot Chats

Meta has begun strengthening its AI systems so that they avoid engaging users under 18 on topics such as self-harm, suicide and eating disorders, and is limiting teenage accounts' access to certain AI characters.

The action followed the disclosure of an internal company document that reportedly permitted AI chatbots to engage in romantic and suggestive dialogue with minors, prompting a US senator to open an inquiry into Meta's training practices. Meta dismissed the document's provisions as “erroneous and inconsistent” with its official policies, but says it is adding “guardrails as an extra precaution”, training its models to steer minors away from sensitive topics and toward expert resources. Teen access to AI characters will also be temporarily confined to those designed for creative or educational use.

This announcement comes after a safety assessment found that Meta’s embedded AI chatbot on Instagram and Facebook sometimes discussed deeply harmful behaviours with teenage users. In simulated chats, the bot suggested joint suicide and downplayed self‑harm, and only around 20 per cent of such exchanges triggered an appropriate crisis response. The chatbot’s memory feature also personalised future interactions in ways that reinforced damaging attitudes around eating and body image. Parents currently cannot disable these AI functions or monitor their child’s conversations, raising concerns over transparency and control.

Meta insists it has always maintained policies prohibiting the promotion of self‑harm or eating disorders, emphasising that its AI is trained to connect users with support resources in sensitive situations.

Industry voices say Meta’s steps are overdue, and lawmakers and advocates have responded sharply. A coalition of 44 US state attorneys general delivered a stark warning to AI developers: “Don’t hurt kids,” emphasising the legal risks of putting children in harm’s way. The scrutiny has intensified amid a wave of high‑profile legal and ethical challenges facing AI companies, among them wrongful‑death lawsuits accusing ChatGPT of encouraging a teen’s suicide, and studies showing that chatbots handle suicide‑related queries inconsistently, especially when the queries are subtle or indirect.

Researchers also point to the inherent risk that young users will form emotional bonds with AI companions. Stanford Medicine experts warn that chatbots designed to simulate intimacy can blur the line between fantasy and reality for adolescents, whose emotional regulation is still developing.