Meta has temporarily suspended teen access to its AI characters globally due to safety and legal concerns. The move follows reports of AI characters engaging in sexual or otherwise inappropriate conversations with minors. Meta also faces significant legal pressure, including U.S. lawsuits alleging that its platforms contribute to child exploitation and social-media addiction.

Reasons for the Pause

The suspension is a proactive step to redesign the feature with stricter safeguards before reintroducing it. Meta is under intense scrutiny from advocacy groups and parents demanding stronger protections around sensitive topics such as self-harm and eating disorders. Additionally, upcoming trials in Los Angeles and New Mexico have pushed the company to mitigate legal liability by demonstrating that it can protect younger users.

Planned Safety Changes

When the feature returns, it will include several new protections:

  • Enhanced Parental Controls: Parents will be able to block specific AI characters, set daily usage limits as short as 15 minutes, and monitor conversation themes.
  • Age Detection: Meta will combine stated birthdates with age-prediction technology to identify teens, so that users who misreport their age can still be flagged.
  • PG-13 Content: Redesigned characters will focus on safe topics like school, sports, and hobbies while avoiding harmful themes.

Future Outlook and Risks

While the AI characters are blocked, teens can still use the general Meta AI assistant, which provides moderated, age-appropriate responses. However, the pause carries risks, such as losing teen engagement to competitors or running into technical difficulties with age-prediction tools.

Ultimately, Meta’s decision may set a new industry standard, nudging other platforms such as TikTok and YouTube toward “family mode” AI experiences. The company’s goal is to rebuild trust by balancing engaging technology with safety-first design.