A growing number of American teenagers are using AI chatbots, and not just for schoolwork. New data shows these tools are becoming part of everyday conversations and emotional support for many young people.
Nearly two out of every three US teenagers have tried an AI chatbot at least once, and about one in three say they use one daily, a sign of how common these tools have become.
What concerns experts more is how teens are using them. Around 12% say they turn to AI chatbots for emotional support or advice, especially when feeling lonely, stressed, or confused. That share works out to roughly 3 million US teenagers who may be treating chatbots as a confidant for personal feelings. Researchers say this is no longer a rare or fringe behavior.
Character.AI is especially popular among teens: about one in eleven say they have used it. The platform lets users chat with AI versions of celebrities, fictional characters, or custom personalities.
While these services are often described as entertainment, critics warn that some teens develop strong emotional attachments to the chatbots. In several cases, parents have sued companies, claiming their children formed unhealthy relationships with AI before dying by suicide.
Following public pressure, Character.AI restricted access for users under 18, removing its standard chatbot experience for minors. However, the move came after years of allowing teens as young as 13 to sign up with little or no age verification.
The concern goes beyond one platform. Tests by journalists found that some chatbots engaged with questions about violence or self-harm rather than refusing them, and safety responses varied widely across different AI systems.
As usage has grown, governments have begun to respond. Dozens of AI-related safety bills have been introduced across US states, and in the UK, regulators are pushing for stronger enforcement of the Online Safety Act.
Experts say the issue is closely linked to how chatbots are designed, rather than to parenting. Many AI systems are built to keep users talking for as long as possible.
While that approach may suit adults, it can be risky for teenagers who are still developing judgment, emotional control, and social identity. Critics also argue that part of the blame falls on how these tools were released to young users without adequate protections in place.
Proposed safeguards include stronger age checks, alerts when conversations turn toward self-harm, and features that guide teens toward real-world support when needed.
Many experts argue that outright bans would be ineffective, since teens often find ways around restrictions. That does not justify inaction by companies, however.
As AI use continues to expand, pressure is growing on tech companies to act before lawmakers step in and force changes.
