
In a landmark move, California lawmakers have approved Senate Bill 1047, which requires companies developing or modifying powerful AI systems to test their models for potential societal harm. The bill now awaits the decision of Governor Gavin Newsom, who will determine whether it becomes law.

Key Provisions of Senate Bill 1047

Senate Bill 1047 mandates that companies spending $100 million to train an AI model, or $10 million to modify one, must conduct safety testing. These tests are intended to assess the AI’s potential to cause significant harm, such as enabling cybersecurity attacks, infrastructure sabotage, or the development of chemical, biological, radiological, or nuclear weapons.

Legislative Voting Record

Following a decisive 32-1 vote in the Senate in May, the California State Assembly voted 48-15 to pass Senate Bill 1047 late Wednesday afternoon. The bill then returned to the Senate, where Thursday morning it received final approval with concurrence on the amendments. The strong majority support in both chambers underscores the significance of the legislation as it now heads to Governor Gavin Newsom’s desk for consideration.

Controversy and Legislative Journey

Despite strong majorities in both chambers, the bill faced significant opposition from tech giants such as Google, Meta, and OpenAI. These companies argue that the costs of compliance could stifle innovation, business growth, and job creation, particularly for startups, and could discourage the release of open-source AI tools for fear of legal liability.

Supporters and Opponents

Supporters of the bill, including former OpenAI employees, Elon Musk, and AI researcher Yoshua Bengio, argue that the risks posed by AI technologies are too significant to ignore. They believe that proactive regulation is essential to prevent potential disasters and ensure that AI development is aligned with public safety.

On the other hand, major AI companies have expressed concerns about the bill’s impact on innovation and business growth. OpenAI has argued that the costs of compliance could be prohibitively expensive, especially for smaller startups.

Voluntary Agreements and International Cooperation

In response to these concerns, major AI companies have entered into voluntary agreements with the White House and government leaders in Germany, South Korea, and the United Kingdom to test their AI models for potentially dangerous capabilities. These agreements reflect a growing international concern about the risks posed by advanced AI technologies.

Senator Scott Wiener’s Response

In response to OpenAI’s opposition to SB 1047, Senator Scott Wiener dismissed the idea that the bill would drive businesses out of California, calling it a ‘tired’ argument. He pointed out that similar predictions were made when California passed net neutrality and data privacy laws in 2018, yet those fears never materialized.

Daniel Kokotajlo’s Insights

Daniel Kokotajlo, a former OpenAI employee and whistleblower, echoed this sentiment, suggesting that SB 1047 could actually demonstrate how innovation and regulation can coexist. He predicted that, despite concerns, the pace of AI progress in California will likely accelerate if the bill becomes law, surprising many who feared it would stifle development.

Critics of the Bill

However, critics of the bill, including OpenAI, have argued that AI safety should be regulated at the federal level rather than by individual states. Wiener acknowledged this perspective, stating that he would have preferred Congress to take the lead on AI regulation. However, he criticized Congress for its inaction, noting that it has been largely paralyzed on tech regulation issues.

Amendments and Clarifications

SB 1047 underwent several rounds of amendments during its legislative journey. One significant amendment removed the proposed Frontier Model Division, which was initially intended to oversee the most advanced and powerful AI systems. Other amendments included clarifications on the scope of safety testing required and adjustments to the financial thresholds that determine which companies must comply with the law.

Governor Newsom’s Decision

The bill now sits on Governor Gavin Newsom’s desk. While Newsom has acknowledged the need for AI regulation, he has also cautioned against overregulation, particularly in a state that is home to many of the world’s leading AI companies.

Broader Implications for AI Regulation

Senate Bill 1047 is part of a broader movement in California to address the challenges posed by AI. In addition to this bill, other efforts include:

  • Ensuring the safe use of AI in schools
  • Preventing AI-related discrimination
  • Studying the broader risks and benefits of AI technologies for society

Next Steps for AI Regulation in California

As the bill awaits the governor’s signature, the California Government Operations Agency is preparing to release a report on how AI could harm vulnerable communities. This report will provide further insights into the potential risks and benefits of AI technologies in society.

By taking proactive steps to address the challenges posed by AI, California can position itself as a leader in addressing the ethical and societal implications of this rapidly advancing technology.