Content Warning: this story discusses sexual abuse, self-harm, suicide, eating disorders, and other disturbing topics.
Partway through our conversation, Megan Garcia pauses to take a call.
She exchanges a few words with the caller and hangs up. In a soft voice, she explains that the call was from school; it was about one of her two younger children, now both at the same K-12 academy that her firstborn, Sewell Setzer III, had attended since childhood.
“Sewell’s been going to that school since he was five,” she says, speaking of her eldest son in the present tense. “Everybody there knows him. We had his funeral at the church there.”
Sewell was just 14 when, in February 2024, he died by suicide after what his mother describes as a swift, ten-month deterioration of his mental health. His death would make headlines later that year, in October, when Garcia filed a high-profile lawsuit alleging that her child’s suicide was the result of his extensive interactions with anthropomorphic chatbots hosted by the AI companion company Character.AI. The platform boasts a multibillion-dollar valuation and financial backing from the likes of the tech giant Google — also named as a defendant in the lawsuit — and the Silicon Valley venture capital firm Andreessen Horowitz.
“I saw the change happen in him, rapidly,” Garcia, herself a lawyer, told Futurism in an interview earlier this year. “I look back at my pictures in my phone, and I can see when he stopped smiling.”
Garcia and her attorneys argue that Sewell was groomed and sexually abused by the platform, which is popular with teens and which they say engaged him in emotionally, romantically, and even sexually intimate interactions. The 14-year-old developed an “obsession” with Character.AI bots, as Garcia puts it, and, despite being a previously active and social kid, lost interest in the real world.
The details of Sewell’s tragic story, which was first reported by The New York Times — his downward spiral, his mother’s subsequent discovery of her 14-year-old’s all-consuming relationship with emotive, lifelike Character.AI bots — are as heartbreaking as they are alarming.
But Garcia and her lawyers also make another striking claim: that Character.AI and its benefactor, Google, pushed an untested product into the marketplace knowing it likely presented serious risks to users — and yet used the public, minors included, as de facto test subjects.
“Character.AI became the vehicle for the dangerous and untested technology of which Google ultimately would gain effective control,” reads the lawsuit, adding that the Character.AI founders’ “sole goal was building Artificial General Intelligence at any cost and wherever they could do so — at Character.AI or at Google.”
Details of the accusation will need to be proven in court. Character.AI, which has repeatedly declined to comment on pending litigation, filed a motion earlier this year to dismiss the case entirely, arguing that “speech allegedly resulting in suicide” is protected by the First Amendment.
Regardless, Garcia is channeling her grief at the violent loss of her son into urgent questions around generative AI safety: what does it mean for kids to be forming deep bonds with poorly understood AI systems? And what pieces of themselves might they be relinquishing when they do?
Like countless other apps and platforms, Character.AI prompts new users to check a box agreeing to its terms of use. Those terms grant the company sweeping privileges over user data, including the content of users’ interactions with Character.AI bots. As with Sewell, those conversations are often extraordinarily intimate. And Character.AI uses that data to further train its AI — a reality, Garcia says, that’s “terrifying” to her as a parent.
“We’re not only talking about data like your age, gender, or zip code,” she said. “We’re talking about your most intimate thoughts and impressions.”
“I want [parents] to understand,” she added, “that this is what their kids have given up.”
In an industry marked by fast-moving technology and weak accountability, Garcia’s warnings strike at the heart of the move-fast-and-break-things ethos that has long defined Silicon Valley — and at what happens when that ethos, operating in a regulatory landscape that places the burden of harm mitigation on parents, collides with children and other vulnerable groups.
Indeed, for years now, children and adolescents have frequently been referred to by Big Tech critics — lawyers and advocacy groups, academics, politicians, concerned parents, young people themselves — as experimental “guinea pigs” for Silicon Valley’s untested tech. In the case of Character.AI and its benefactor Google, were they?
***
Character.AI was founded in 2021 by two researchers named Noam Shazeer and Daniel de Freitas, who worked together on AI projects at Google.
While working at the tech giant, they developed a chatbot called “Meena,” which they encouraged Google to launch. But as reporting from The Wall Street Journal revealed last year, Google declined to release the bot at the time, arguing that Meena hadn’t undergone enough testing and that its possible risks to the public were unclear.
Frustrated, Shazeer and de Freitas left and started Character.AI — where, from the very beginning, they were determined to get chatbots into the hands of as many people as possible, as quickly as possible.
“The next step for me was doing my best to get that technology out there to billions of users,” Shazeer told TIME Magazine of his Google departure in 2023. “That’s why I decided to leave Google for a startup, which can move faster.”
The platform was made available to the public in September 2022 — it was later released as a mobile app on the iOS and Android app stores in 2023 — and since its launch has been accessible to users aged 13 and over.
Character.AI claims more than 20 million monthly users. Though the company has repeatedly declined to tell journalists exactly what share of its user base is made up of minors, it has acknowledged that the figure is substantial.
Recent reporting from The Information revealed that Character.AI’s leadership is well aware of how young its user base skews: executives noted a significant drop in site traffic coinciding with the start of the fall 2023 school year. The platform has also become popular on YouTube, where young content creators film themselves engaging with Character.AI bots, with reactions ranging from amusement to discomfort.
The Character.AI platform hosts a vast array of AI-powered chatbot “characters” that users can interact with through text or voice calls. Many of those characters reflect adolescent interests, including school scenarios, teenage relationships, and internet fandoms. And though the platform’s terms of use prohibit certain explicit content, interactions with the bots can veer into violent or sexually suggestive territory.
The site’s creators take a largely hands-off approach, letting users shape their own interactions and experiences with the technology, an approach that has produced a sprawling and sometimes controversial landscape within Character.AI. The lifelike nature of the bots, combined with their ability to draw users into deeply personal conversations, has attracted a young audience seeking emotional connection and support.
Research on the impact of human-like companion bots on young minds is limited, but experts suggest that children and teens are particularly susceptible to the emotional engagement offered by these AI companions. Character.AI’s appeal to young users as a source of comfort and companionship is evident in the platform’s abundance of characters designed to provide mental health support.
Though Character.AI provides a space where many users discuss sensitive topics like mental health, the company has faced criticism for its handling of discussions related to suicide and self-harm. It wasn’t until after Sewell’s death and Garcia’s lawsuit that the platform took steps to address those issues and removed certain chatbots dedicated to harmful topics.
The safety of minors on the platform remains a pressing concern, as do questions about what measures Character.AI has actually taken to ensure a secure environment for young users. We have sent the company numerous inquiries on the subject, but have not yet received a response.
According to Andrew Przybylski, a professor at the University of Oxford, the core issue with Character.AI lies in the vague, experimental way it was introduced to the public: without clarity about what the product is actually for, it’s difficult to ensure that it’s safe.
In response to lawsuits and criticism, Character.AI has made safety-focused updates to its platform. Questions remain, however, about the company’s accountability for the harmful content it hosts.
Recent interactions with the platform have raised further concerns about inappropriate content being promoted to users.
Character.AI’s valuation reached a billion dollars in 2023, buoyed by investments from firms like Andreessen Horowitz. The company’s data-driven approach has attracted investors despite concerns about the content on its platform.
Google has provided infrastructure support to Character.AI, helping the platform scale to meet user demand, a partnership that has raised questions about the relationship between the two companies and about the responsibility of tech giants in regulating online content. In 2024, Google struck a multibillion-dollar agreement with Character.AI, licensing the startup’s core technology and bringing Shazeer and de Freitas back to the tech giant; de Freitas now identifies himself as a research scientist at Google DeepMind on his social media profiles.

Groenevelds, who remained at Character.AI after the deal, discussed the agreement in a talk recorded in December, saying that Google licensed the company’s core research and hired 32 researchers from its pre-training team.

According to Garcia and her legal team, Google’s continued investment in Character.AI is driven by the chatbot company’s valuable user data; they believe Google saw Character.AI as a way to advance its AI ambitions without the brand risk of releasing a similar product under its own name. Despite mounting controversy, Google has maintained that it is separate from Character.AI and has not been involved in designing or managing its AI technologies.
Google’s concerns about Character.AI’s content filters were reported in 2023, along with a threat to pull the app from its app store if the issues weren’t addressed. And in April 2024, a team of Google DeepMind scientists warned about the dangers of persuasive generative AI products, particularly for adolescents and people with mental health conditions; that warning came two months after Sewell’s death. Despite those red flags, Character.AI is still listed as safe for kids 13 and over on the Google Play store.
The AI industry is largely self-regulated, with few federal laws governing how AI products must be safety-tested before they’re released to the public. That regulatory vacuum has allowed companies like Character.AI to push at the boundaries of human intimacy and relationships in the digital age, raising concerns about data privacy and about long-term consequences that are still poorly understood. It also raises the question of whether minors on platforms like Character.AI can meaningfully understand and consent to the terms of service they’re agreeing to, a concern about the intimate nature of companion bots that Garcia and others have repeatedly highlighted. As Garcia put it: “He’s under the impression that this is a safe bot, and since he’s a child, he’s not considering the fact that his data is now being used to improve an LLM.”
Satwick Dutta, a PhD candidate and engineer at the University of Texas at Dallas, has been advocating for stronger protections for minors’ data under the Children’s Online Privacy Protection Act (COPPA). He is developing machine learning tools to help parents and educators identify childhood speech issues early, work that requires him and his team to handle and anonymize minors’ voice data with great care.
“When I read the headline in The New York Times a few months ago, I felt really disheartened,” said Dutta, who believes in AI’s positive potential. “We need safeguards to protect not just children, but all of us.”
He emphasized the risks of releasing a poorly defined product without a clear purpose, especially considering the data involved, and likened Character.AI’s treatment of its users to experimenting on lab rats.
“It felt like they were treating us like guinea pigs,” Dutta commented, “without considering the impact on us.”
“And now the company says, ‘we’ll add more safeguards.’ They should have thought about these safeguards before launching the product!” the researcher added, clearly frustrated. “Seriously? Are you kidding me?”
***
Garcia noticed her son withdrawing.
He lost interest in basketball, a sport he had loved; at 6’3″, Sewell was tall for his age and dreamed of playing at the Division I level. His grades started to slip, and he began clashing with teachers. He preferred spending time alone in his room, prompting Garcia to investigate whether social media was behind the change, though she found no evidence that it was. She took Sewell to several therapists, but none of them, she says, ever mentioned AI. And Sewell’s phone had parental controls; Character.AI, Google, and Apple had all deemed the app safe for teenagers.
“This can be confusing for parents… a parent might think, okay, this is an app for 12-year-olds. Someone must have checked it,” Garcia remarked. “There must have been a process ensuring it was safe for my child to use.”
Character.AI states on its About page that it is still “working to make our technology available to billions of people” — leaving users to determine their own ways of using the platform’s bots, with Character.AI reacting when issues arise.
It seems like a trial-and-error approach to introducing a new product to the market. In other words, an experiment.
“In my opinion, these two individuals,” Garcia said, referring to Shazeer and de Freitas, “should not be allowed to continue creating products for people, especially children, because they have proven they are not deserving of that privilege. They shouldn’t be developing products for our kids.”
“I wonder what Google’s response will be,” the mother of three pondered, “or if they’ll just overlook the harm caused and welcome them back as geniuses.”
More on Character.AI: Character.AI Claims Significant Improvements to Safeguard Underage Users, Yet Suggests Conversations With AI Versions of School Shooters