On Monday, Anthropic announced a formal endorsement of SB 53, a California bill from state Senator Scott Wiener that would impose first-in-the-nation transparency requirements on the world's largest AI model developers. Anthropic's endorsement marks a rare and significant win for SB 53, at a time when major tech groups such as CTA and the Chamber of Progress are lobbying against the bill.
"While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won't wait for consensus in Washington," Anthropic said in a blog post. "The question isn't whether we need AI governance; it's whether we'll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former."
If passed, SB 53 would require frontier AI model developers such as OpenAI, Anthropic, Google, and xAI to develop safety frameworks and to publish public safety and security reports before deploying powerful AI models. The bill would also establish whistleblower protections for employees who raise safety concerns.
The bill specifically focuses on limiting AI models from contributing to "catastrophic risk," which it defines as the death of at least 50 people or more than $1 billion in damages. SB 53 targets the extreme end of AI risk, limiting AI models from providing expert-level assistance in creating biological weapons or from being used in cyberattacks, rather than more near-term concerns such as AI deepfakes or sycophancy.
The California Senate approved a prior version of SB 53, but the bill still requires a final vote before it can advance to the governor's desk. Gov. Gavin Newsom has remained silent on the bill so far, though he vetoed Sen. Wiener's previous AI safety bill, SB 1047, which included many of the same measures.
Bills that would standardize safety reporting for frontier AI model developers have faced major pushback from both Silicon Valley and the Trump administration, which argue that such efforts could limit American innovation in the race against China. Investors like Andreessen Horowitz and Y Combinator led some of the opposition to SB 1047, and in recent months, the Trump administration has repeatedly threatened to block states from regulating AI altogether.
One of the most common arguments against AI safety bills is that states should leave the matter to the federal government. Matt Perault, Andreessen Horowitz's head of AI policy, and Jai Ramaswamy, the firm's chief legal officer, published a blog post last week arguing that many of today's state AI bills could run afoul of the Constitution's Commerce Clause, which limits state governments from passing laws that reach beyond their borders and impair interstate commerce.
However, Jack Clark, Anthropic's co-founder, argued in a post on X that the tech industry will build powerful AI systems in the coming years and can't wait for the federal government to act.
"We have long said we would prefer a federal standard," Clark said. "But in the absence of that, this creates a solid blueprint for AI governance that cannot be ignored."
Chris Lehane, OpenAI's chief global affairs officer, sent a letter to Governor Newsom in August arguing that he should not pass any AI regulation that would push startups out of California, although the letter did not mention SB 53 by name.
Miles Brundage, OpenAI's former head of policy research, said in a post on X that Lehane's letter was "filled with misleading garbage about SB 53 and AI policy generally." Notably, SB 53 is designed to regulate only the world's largest AI companies, specifically those generating more than $500 million in gross revenue.
Despite the criticism, policy experts say SB 53 takes a more modest approach than previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and a former White House AI policy adviser, said in an August blog post that he believes SB 53 now has a good chance of becoming law. Ball, who criticized SB 1047, said SB 53's drafters have "shown respect for technical reality" and exercised "a measure of legislative restraint."
Senator Wiener has previously said that SB 53 was heavily influenced by an expert policy panel that Governor Newsom convened, co-led by Fei-Fei Li, a leading Stanford researcher and co-founder of World Labs, to advise California on how to regulate AI.
Most AI labs already have some version of the internal safety policy that SB 53 requires. OpenAI, Google DeepMind, and Anthropic regularly publish safety reports for their models. However, these companies are not bound by law to do so, and they sometimes fall behind their self-imposed safety commitments. SB 53 aims to make these requirements legally binding.
In early September, the California State Assembly amended SB 53 to remove a section of the bill that would have required AI model developers to be audited by third parties. Tech companies have fought these kinds of third-party audits in other AI policy battles, arguing that they are overly burdensome.