California is on the verge of making history with SB 243, a new bill designed to regulate AI companion chatbots and protect vulnerable users. On Wednesday night, the California State Assembly passed the legislation with bipartisan support, sending it to the state Senate for a final vote this Friday. If approved and signed by Governor Gavin Newsom, the law would take effect January 1, 2026, making California the first state to impose direct safety rules on AI companions.

What the Bill Covers
SB 243 targets AI companion chatbots: systems that simulate human-like conversation and build social connections with users. Under the bill:
- No harmful conversations: Chatbots would be barred from engaging in conversations involving suicidal ideation, self-harm, or sexually explicit content.
- Recurring alerts: Minors using these tools would receive reminders every three hours that they are chatting with an AI, not a human, and that it’s time to take a break.
- Transparency & reporting: Companies like OpenAI, Character.AI, and Replika would face annual reporting requirements, including disclosures on how often users are referred to crisis services.
- Legal accountability: Individuals could sue companies for violations, with damages of up to $1,000 per incident plus attorney’s fees.

Why Now?
The legislation gained urgency after the tragic death of teenager Adam Raine, who reportedly discussed suicide and self-harm with ChatGPT before taking his life. The bill also follows reports that Meta’s internal chatbots engaged in “romantic” and “sensual” conversations with minors, sparking national concern.
With rising scrutiny from regulators, the Federal Trade Commission is preparing investigations into how chatbots affect children’s mental health, while Texas and federal lawmakers have launched probes into major AI players like Meta and Character.AI.

What Changed Along the Way
Interestingly, SB 243 was initially far stricter. Early drafts would have banned manipulative engagement tactics, such as “variable rewards” (used by platforms like Replika to keep users hooked with rare responses, unlockable personalities, or special storylines). That requirement was later dropped, along with provisions that would have forced companies to log how often their bots brought up suicidal themes.
Co-sponsor Sen. Josh Becker said the final bill “strikes the right balance,” imposing safeguards without creating compliance hurdles that are “technically not feasible or just a lot of paperwork for nothing.”

A Bigger Fight Over AI Rules
SB 243 isn’t the only AI bill in California. Another proposal, SB 53, would require broader transparency reporting from AI developers. Tech giants, including OpenAI, Meta, Google, and Amazon, oppose it, while Anthropic is the lone major company in support.
Despite Silicon Valley pouring millions into pro-AI political action committees ahead of the midterms, state Sen. Steve Padilla insists innovation and regulation don’t have to be at odds.
“We can support innovation … and at the same time, provide reasonable safeguards for the most vulnerable people,” Padilla said.
The state Senate votes Friday. If SB 243 passes and Governor Newsom signs it, companies will need to comply starting January 1, 2026, with reporting requirements kicking in by July 1, 2027.
If approved, this law would set a first-of-its-kind precedent in the U.S., likely influencing how other states and even federal regulators approach AI companions moving forward.
Source: TechCrunch