Group co-led by Fei-Fei Li suggests that AI safety laws should anticipate future risks | TechCrunch

A California policy group co-led by AI pioneer Fei-Fei Li says that lawmakers should consider "AI risks that have not been observed in the world" when formulating AI regulatory policies.

The 41-page interim report, published Tuesday, comes from the group, which Governor Gavin Newsom convened after vetoing California's controversial AI safety bill, SB 1047. While Newsom found that SB 1047 missed the mark, he acknowledged last year that a broader assessment of AI risks was needed to inform lawmakers.

In the report, Li, along with co-leads Jennifer Chayes, dean of UC Berkeley's College of Computing, and Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace, argues in favor of laws that would increase transparency into what frontier AI labs such as OpenAI are building. Industry stakeholders from across the ideological spectrum reviewed the report before its release, including staunch AI safety advocates such as Turing Award winner Yoshua Bengio as well as opponents of SB 1047 such as Databricks co-founder Ion Stoica.

According to the report, the novel risks posed by AI systems may require laws that force AI model developers to publicly report their safety testing, data acquisition practices, and security measures. The report also advocates for increased standards around third-party evaluations of these metrics and corporate policies, as well as expanded whistleblower protections for AI company employees and contractors.

Li and her co-authors write that the evidence is still inconclusive as to whether AI can help carry out cyberattacks, create biological weapons, or pose other "extreme" threats. However, they also argue that AI policy should not only address current risks but also anticipate future consequences that might occur without sufficient safeguards.

"For example, we do not need to observe a nuclear weapon [exploding] to reliably predict that it could and would cause extensive harm," the report says. "If those who speculate about the most extreme risks are right – and we are not sure if they will be – then the stakes and costs of inaction on frontier AI at this current moment are extremely high."

The report recommends a two-pronged strategy to increase transparency into AI model development: trust but verify. AI model developers and their employees should be given avenues to report on areas of public concern, the report says, such as internal safety testing, while also being required to submit testing claims for third-party verification.

The final version of the report is due out in June 2025 and endorses no specific legislation. Even so, it has been well received by experts on both sides of the AI policymaking debate.

Dean Ball, an AI-focused researcher at George Mason University who was critical of SB 1047, called the report a promising step for California's AI safety regulation. It is also a win for AI safety advocates, according to California state Sen. Scott Wiener, who introduced SB 1047 last year. Wiener said in a press release that the report builds on "urgent conversations around AI governance we began in the legislature [in 2024]."

The report appears to align with several components of SB 1047 and Wiener's follow-up bill, SB 53, such as requiring AI model developers to report the results of safety tests. Viewed more broadly, it seems to be a much-needed win for AI safety advocates, whose agenda lost ground over the past year.

Source link
