Meta temporarily adjusts AI chatbot policies for teenagers
On Friday local time, Meta said it is temporarily adjusting its AI chatbot policies for teenage users in response to lawmakers' concerns about safety and inappropriate conversations.
A Meta spokesperson confirmed that the social media giant is currently training its AI chatbot so that it will not generate responses for teenagers regarding topics such as self-harm, suicide, or eating disorders, and will avoid potentially inappropriate emotional conversations.
Meta said that when these topics come up, the AI chatbot will instead point teenagers to professional help resources.
In a statement, Meta said: "As our user base grows and our technology evolves, we continue to study how teenagers interact with these tools and strengthen our safeguards accordingly."
In addition, teenage users of Meta apps such as Facebook and Instagram will in the future only be able to access a limited set of AI chatbots, mainly those designed for educational support and skill development.