TLDR
- California Governor Gavin Newsom vetoed AI safety bill SB 1047
- The bill proposed mandatory safety testing and guardrails for AI models
- Newsom argued it could hinder innovation and fail to address real AI threats
- Tech firms like OpenAI opposed the bill, while Elon Musk supported it
- Newsom called for developing “workable guardrails” focused on science-based analysis
California Governor Gavin Newsom has vetoed a hotly debated artificial intelligence (AI) safety bill, SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
The bill, which had sparked significant controversy in the tech industry, proposed mandatory safety testing for AI models and other regulatory measures.
On September 30, Newsom announced his decision to veto the bill, arguing that while it was “well-intentioned,” it could place unnecessary restrictions on emerging AI companies in California.
The governor expressed concerns that the bill focused too heavily on regulating existing top AI firms without adequately protecting the public from what he considers the “real” threats posed by this new technology.
SB 1047, penned by San Francisco Democratic Senator Scott Wiener, would have required AI developers in California, including major players like OpenAI, Meta, and Google, to implement a “kill switch” for their AI models and publish plans for mitigating extreme risks.
The bill also proposed making AI developers liable to lawsuits from the state attorney general in the event of an ongoing threat from AI models, such as an AI takeover of the power grid.
In his statement, Newsom explained that the bill applied stringent standards even to the most basic functions of AI systems, as long as a large system deployed them.
He stated, “I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
The governor’s decision aligns with concerns raised by many in the tech industry. Companies like OpenAI and influential figures such as former House Speaker Nancy Pelosi argued that the bill would significantly hinder the growth and innovation of AI technologies.
They feared that the proposed regulations could stifle advancements in the field and potentially drive AI development out of California.
However, not all tech leaders opposed the bill. Notably, billionaire Elon Musk, whose company xAI is developing the AI model “Grok,” expressed support for SB 1047 and broader AI regulations.
In a social media post, Musk stated that “California should probably pass the SB 1047 AI safety bill,” though he acknowledged it was a “tough call.”
> This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill.
>
> For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk…
>
> — Elon Musk (@elonmusk) August 26, 2024
Despite vetoing SB 1047, Newsom emphasized the need for adequate safety protocols in AI development. He stated that regulators cannot afford to “wait for a major catastrophe to occur before taking action to protect the public.”
To address these concerns, Newsom announced that he had asked leading AI safety experts to help California develop “workable guardrails” focused on creating a “science-based trajectory analysis.”
The governor also revealed that he had ordered state agencies to expand their assessment of risks from potential catastrophic events stemming from AI development.
This approach aims to strike a balance between fostering innovation and ensuring public safety in the rapidly evolving field of artificial intelligence.
Newsom’s administration has been active on AI-related issues, with the governor noting that he has signed more than 18 bills concerning AI regulation in the last 30 days.
This indicates a commitment to developing a comprehensive framework for AI governance in California, albeit through a different approach than the one proposed in SB 1047.
The veto of SB 1047 highlights the ongoing challenges in regulating emerging technologies like AI. As the field continues to advance at a rapid pace, policymakers face the difficult task of balancing innovation with safety concerns.
The debate surrounding this bill underscores the complexities involved in creating effective AI policies that protect the public while allowing for technological progress.