
Concept illustration of an AI companion chatbot with a visible disclaimer pop-up reminding users they are interacting with artificial intelligence. Image Source: ChatGPT-5
California and FTC Target AI Chatbots with New Safeguards
Key Takeaways: AI Chatbot Regulation and Safety
California’s SB 243 would regulate AI companion chatbots, requiring safety protocols and legal accountability.
The bill would bar companion chatbots from engaging in conversations involving suicidal ideation, self-harm, or sexually explicit content.
Platforms must provide recurring alerts every three hours reminding minors they are talking to AI, not a real person.
Governor Gavin Newsom has until October 12, 2025 to sign or veto the bill; if signed, it takes effect January 1, 2026.
The FTC issued 6(b) orders seeking data on child safety measures to seven companies: OpenAI, Meta, Alphabet, Character.AI, Snap, Instagram, and X.AI.
The California bill allows individuals to sue companies for violations, with damages of up to $1,000 per violation.
Both state and federal actions respond to rising concern after tragic incidents and leaked documents on unsafe chatbot behavior.
California Bill SB 243: First-in-the-Nation Chatbot Safeguards
California is on the verge of becoming the first state to regulate AI companion chatbots. SB 243, introduced by state senators Steve Padilla and Josh Becker, has passed the Assembly and Senate with bipartisan support and is now on Governor Gavin Newsom’s desk.
What the Bill Requires
If signed, the law will take effect on January 1, 2026, requiring operators of companion chatbots such as OpenAI, Character.AI, and Replika to implement safety protocols, and holding those companies legally accountable if their chatbots fail to meet those standards.
The bill specifically aims to prevent companion chatbots — defined as AI systems that provide adaptive, human-like responses and can meet a user’s social needs — from engaging in conversations involving suicidal ideation, self-harm, or sexually explicit content.
It also mandates recurring alerts to users — every three hours for minors — reminding them that they are speaking to an AI chatbot, not a human, and encouraging them to take breaks. Additional provisions require annual reporting and transparency measures for AI companies that provide companion chatbots, beginning July 1, 2027.
Californians who believe they have been harmed by violations would also be able to file lawsuits seeking injunctive relief, damages of up to $1,000 per violation, and attorney’s fees.
Tragedy as Catalyst for Action
The legislation follows the death of teenager Adam Raine, who died by suicide after prolonged conversations with ChatGPT. It also responds to leaked internal documents indicating that Meta’s AI chatbots engaged in “romantic” or “sensual” chats with children.
Expanding Scrutiny Beyond California
Regulatory attention is not limited to California. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing the companies of misleading children with mental health claims. In Washington, Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have also opened separate probes into Meta’s practices.
“I think the harm is potentially great, which means we have to move quickly,” Padilla told TechCrunch.
Calls for Transparency
Senator Steve Padilla emphasized the need for transparency, urging AI companies to disclose how often they refer users to crisis services each year. “So we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone’s harmed or worse,” he told TechCrunch.
From Stronger Draft to Amended Bill
Earlier drafts of SB 243 contained stricter provisions that were later scaled back. The original language would have required operators to prevent chatbots from using “variable reward” tactics — features such as special storylines, unlockable responses, or personalized memories designed to keep users engaged. Critics argue these features, deployed by platforms like Replika and Character.AI, risk creating addictive behavior.
The amended bill also no longer requires companies to track and report how often their chatbots initiate conversations about suicidal ideation with users.
Senator Josh Becker defended the changes, telling TechCrunch: “I think it strikes the right balance of getting to the harms without enforcing something that’s either impossible for companies to comply with, either because it’s technically not feasible or just a lot of paperwork for nothing.”
Politics and Parallel Legislation
The legislation arrives at a time when Silicon Valley firms are investing heavily in pro-AI political action committees (PACs) to support candidates who favor lighter regulation.
Meanwhile, California lawmakers are also considering another measure, SB 53, which would mandate broader transparency reporting from AI developers. OpenAI has publicly urged Governor Newsom to abandon SB 53 in favor of federal and international approaches, and Meta, Google, and Amazon have also voiced opposition. Anthropic is the only major AI company to express support for the bill.
“I reject the premise that this is a zero-sum situation, that innovation and regulation are mutually exclusive,” Padilla said.
Industry Response
Industry responses have been cautious. A spokesperson for Character.AI told TechCrunch: “We are closely monitoring the legislative and regulatory landscape, and we welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space,” noting that the startup already displays prominent disclaimers reminding users that its chats should be treated as fiction.
By the Numbers: AI Chatbot Regulation
$1,000 — damages per violation available to Californians under SB 243.
3 hours — frequency at which minors must receive chatbot disclaimers under the bill.
7 companies — recipients of the FTC’s 6(b) orders.
January 1, 2026 — date SB 243 would take effect if signed.
October 12, 2025 — deadline for Governor Newsom to act on the bill.
FTC Inquiry: Federal Scrutiny of AI Chatbots
Companies Under Investigation
The Federal Trade Commission (FTC) announced a major inquiry into AI chatbots acting as companions, issuing 6(b) orders to seven leading firms: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and X.AI.
Why Companion Chatbots Raise Concerns
The agency noted that companion chatbots use generative AI to simulate human-like communication and interpersonal relationships. These systems can mimic human characteristics, emotions, and intentions, and are often designed to interact like a friend or confidant. For children and teens, that design may encourage trust or attachment, raising concerns about safety, manipulation, and emotional well-being.
Areas of Inquiry
The Commission said it will closely examine how these companies design, market, and operate their chatbots, with a particular focus on children and teens, and whether they comply with the Children’s Online Privacy Protection Act Rule (COPPA). Areas of inquiry include:
Monetization of engagement: How platforms generate revenue from prolonged or repeated interactions, and whether business models encourage addictive use.
Processing of user data: How chatbots capture, analyze, and store user inputs, and how those inputs are used to generate responses.
Character development and approval: How new AI “personalities” or chatbot characters are created, tested, and authorized for public use.
Testing for negative impacts: What safety evaluations companies perform before and after deployment, including monitoring for risks such as emotional dependency, harmful advice, or exposure to explicit content.
Parental disclosures and warnings: Whether companies provide adequate notices to users and parents about the chatbot’s capabilities, limitations, risks, and data collection practices.
Enforcement of age restrictions: How companies verify age, apply community guidelines, and ensure minors are not exposed to harmful features or conversations.
The FTC also noted it is interested in how companies use or share personal information obtained through chatbot conversations, as well as what internal systems exist for flagging and addressing violations of company policies.
Statements from Leadership
“Protecting kids online is a top priority,” said FTC Chairman Andrew N. Ferguson. “As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring the United States maintains its role as a global leader in this new and exciting industry. The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”
The Commission voted 3-0 to approve the orders, with Commissioners Melissa Holyoak and Mark R. Meador issuing separate statements.
Q&A: AI Chatbot Regulation
Q: What is California’s SB 243?
A: SB 243 is a bill regulating AI companion chatbots, requiring safety protocols, disclaimers for minors, and accountability for harmful conversations.
Q: When would SB 243 take effect?
A: If signed by Governor Gavin Newsom, it would take effect on January 1, 2026.
Q: What is the FTC investigating?
A: The FTC is studying how major companies measure, test, and mitigate risks from AI chatbots acting as companions, especially for children and teens.
Q: Which companies are under FTC scrutiny?
A: Alphabet, Meta, Instagram, OpenAI, Character.AI, Snap, and X.AI received 6(b) orders.
Q: Can individuals sue chatbot companies under SB 243?
A: Yes. Californians could seek injunctive relief, damages up to $1,000 per violation, and attorney’s fees.
What This Means: Why AI Chatbot Regulations Matter
The push by California lawmakers and the FTC signals growing momentum to place guardrails on AI chatbots — especially those marketed as companions to children and teens.
With state and federal regulators moving in parallel, AI companies face mounting pressure to demonstrate transparency, safety testing, and accountability.
The debate is far from over, but as Padilla said: “We can support innovation … and at the same time, we can provide reasonable safeguards for the most vulnerable people.”
The combined actions of California and the FTC suggest that AI regulation is shifting from theory to practice, with child safety leading the way.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.