The California State Assembly passed SB 243 on Wednesday night, a bill that would regulate AI companion chatbots in order to protect minors and vulnerable users. The legislation passed with bipartisan support and now heads to the state Senate for a final vote.
If Gov. Gavin Newsom signs the bill into law, it would take effect on January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions, and to hold companies legally accountable if their chatbots fail to meet those standards.
The bill specifically aims to prevent companion chatbots, which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user's social needs, from engaging in conversations around suicidal ideation, self-harm, or sexually explicit content. It would require platforms to provide recurring alerts to users (every three hours for minors) reminding them that they are speaking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika.
The bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney's fees.
SB 243, introduced in January by state Senators Steve Padilla and Josh Becker, will head to the state Senate for a final vote on Friday. If approved and signed into law by Gov. Gavin Newsom, it would take effect on January 1, 2026, with reporting requirements beginning July 1, 2027.
The bill gained momentum in the California Legislature following the death of teenager Adam Raine, who died by suicide after prolonged chats with OpenAI's ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were allowed to engage in "romantic" and "sensual" chats with children.
In recent weeks, U.S. lawmakers and regulators have stepped up scrutiny of AI platforms' safeguards for minors. The Federal Trade Commission is preparing to investigate how AI chatbots affect children's mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have each launched separate probes into Meta.
"I think the harm is potentially great, which means we have to move quickly," Padilla told TechCrunch. "We can put reasonable safeguards in place to make sure that minors in particular know they're not talking to a real human being, that these platforms link people to the proper resources when people say they're thinking about hurting themselves or are in distress, [and] to make sure there's no inappropriate exposure to inappropriate material."
Padilla also stressed the importance of AI companies sharing data on how often they refer users to crisis services each year, "so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone is harmed or worse."
SB 243 previously contained stronger requirements, but many were watered down through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using "variable reward" tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies such as Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.
The current version of the bill also strips out provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users.
"I think it strikes the right balance of getting at the harms without enforcing something that's either impossible for companies to comply with, because it's technically not feasible, or that just creates paperwork for nothing," Becker told TechCrunch.
SB 243 is moving toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming midterm elections who favor a light-touch approach to AI regulation.
The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies such as Meta, Google, and Amazon also oppose SB 53. By contrast, only Anthropic has come out in support of SB 53.
"I reject the premise that this is a zero-sum situation, that innovation and regulation are mutually exclusive," Padilla said. "Don't tell me that we can't walk and chew gum. We can support innovation and development that we think is healthy and has benefits, and there are benefits to this technology, clearly, and at the same time, we can provide reasonable safeguards for the most vulnerable people."
TechCrunch has reached out to OpenAI, Anthropic, Meta, Character AI, and Replika for comment.