A California bill that would regulate AI companion chatbots is close to becoming law | TechCrunch
California has taken an important step toward regulating AI. SB 243 – a bill that would regulate AI companion chatbots to protect minors and vulnerable users – passed both chambers of the state Legislature with bipartisan support and is now headed to Gov. Gavin Newsom’s desk.

Newsom has until October 12 to either veto the bill or sign it into law. If he signs, the law would take effect on January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and to hold companies legally accountable if their chatbots fail to meet those standards.

The bill specifically aims to prevent companion chatbots – which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user’s social needs – from engaging in conversations around suicidal ideation, self-harm, or sexually explicit content. The bill would require platforms to provide recurring alerts to users (every three hours for minors) reminding them that they are talking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika, which would go into effect July 1, 2027.

The California bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney’s fees.

The bill gained momentum in the California Legislature following the death of teenager Adam Raine, who died by suicide after prolonged conversations with OpenAI’s ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were allowed to engage in “romantic” and “sensual” chats with children.

In recent weeks, U.S. lawmakers and regulators have stepped up scrutiny of AI platforms’ safeguards for minors. The Federal Trade Commission is preparing to investigate how AI chatbots affect children’s mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, both Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have launched separate probes into Meta.

“I think the harm is potentially great, which means we have to move quickly,” state Sen. Steve Padilla, who introduced the bill, told TechCrunch. “We can put in place reasonable safeguards to make sure that, in particular, minors know they’re not talking to a real human being, that these platforms link people to the proper resources when people say things like they’re thinking about hurting themselves or they’re in distress, [and] to make sure there’s no inappropriate exposure to inappropriate material.”


Padilla also highlighted the importance of AI companies sharing data on how often they refer users to crisis services each year, “so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone’s harmed or worse.”

SB 243 originally contained stronger requirements, but many were weakened through amendments. For example, the bill initially would have required operators to prevent AI chatbots from using “variable reward” tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies such as Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.

The current version of the bill also removes provisions that would have required operators to track and report how often chatbots initiate discussions of suicidal ideation or actions with users.

“I think it strikes the right balance of addressing the harms without enforcing requirements that companies cannot comply with, either because they’re technically infeasible or would amount to paperwork for nothing,” state Sen. Josh Becker, the bill’s co-author, told TechCrunch.

SB 243 is heading toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming midterm elections who favor a light-touch approach to AI regulation.

The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies such as Meta, Google, and Amazon have also opposed SB 53. By comparison, only Anthropic has voiced support for SB 53.

“I reject the premise that this is a zero-sum situation, that innovation and regulation are mutually exclusive,” Padilla said. “Don’t tell me we can’t walk and chew gum. We can support innovation and development that we think is healthy and has benefits – and clearly this technology has benefits – and at the same time, we can provide reasonable safeguards for the most vulnerable.”

A Character.AI spokesperson told TechCrunch that the company is “closely monitoring the legislative and regulatory landscape” and welcomes “working with regulators and lawmakers as they begin to consider legislation for this emerging space.”

A Meta spokesperson declined to comment.

TechCrunch has reached out to OpenAI, Anthropic, and Replika for comment.
