The fate of California’s controversial AI bill will be decided this week
One of the most consequential AI bills the world has seen is sitting on the desk of California Governor Gavin Newsom, and its fate could be decided with either a signature or a veto any day now. Newsom has until September 30 to decide.
The bill, called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), requires developers of very large AI models to implement safety measures and reporting protocols, as well as submit to third-party compliance audits, starting in 2026. The bill’s overall intent is to prevent the creation and distribution of powerful AI systems that could cause catastrophic harm. In one oft-cited example, such a system might create or enable the creation of a bioweapon.
The bill’s opponents in the tech industry complain that the safety requirements put an undue burden on model developers and would shift the focus from improving AI to worrying about safety compliance. Open source model developers in particular feel threatened by the bill, as it would compel them to ensure that others cannot modify their models to do harm in the future.
The governor’s office has been quiet on the bill for months as SB 1047 made its way through the California legislature. When Fast Company asked the office last month about the nature of Newsom’s dialogues with tech lobbyists on the matter, it declined to comment. The tech industry has some powerful lobbyists, at least one with personal ties to Newsom, working to kill the bill.
The biggest signal of Newsom’s thinking on the bill came last week during an interview with Salesforce CEO Marc Benioff at the company’s Dreamforce conference.
“We’ve been working over the course of the last couple of years to come up with some rational regulation that supports risk-taking, but not recklessness,” Newsom said during the onstage interview.
“That supports the ecosystem that’s so unique and vibrant here in California and doesn’t put us at a competitive disadvantage, but at the same time puts the rules of the road at hand.”
He continued: “That’s challenging now in this space, particularly with SB 1047, because of the sort of outsized impact that legislation could have and the chilling effect, particularly in the open source community, that legislation could have. So I’m processing that in consideration more broadly of what are demonstrable risks in AI and what are the hypothetical risks. [We] can’t solve for everything; what can we solve for?”
Some commenters on X observed that Newsom’s mention of stakeholders in the open source community may be a signal that he’s leaning toward a veto.
But one shouldn’t always take a politician’s words at face value in these situations. It’s possible that Newsom was floating a potential decision on SB 1047 to gauge the reaction of various stakeholder groups, one insider tells me. If Newsom had already decided against the bill, the thinking goes, he would have vetoed it by now.
Newsom’s office has been under pressure from many quarters to veto SB 1047. Industry players including the venture capital firm Andreessen Horowitz (a16z) have been loudly opposed, as has the high-profile startup incubator Y Combinator. A chorus of national political figures has written letters to Newsom opposing the bill, including California representatives Ro Khanna (D-Santa Clara), Anna Eshoo (D-Palo Alto), and Zoe Lofgren (D-San Jose), as well as Democratic Party heavyweight Nancy Pelosi.
However, SB 1047 supporters point out that California’s largest labor union, SAG-AFTRA, and the National Organization for Women (NOW) have come out in support, Garrison Lovely reports for The Verge. Actors Mark Ruffalo and Joseph Gordon-Levitt have posted video open letters of support to Newsom.
While SB 1047 has been the subject of hot debate in Silicon Valley, the bill skated through the California legislature in Sacramento. Some California lawmakers are still stinging from the state’s failure to regulate social networks such as Facebook and Instagram, and they don’t want to miss the boat on regulating AI.
And SB 1047 could influence AI regulation well beyond California. For one thing, the law would apply not just to AI companies located in California but to any AI company whose models power services (such as chatbots) delivered to people in California.
The bill could also influence other states looking to build a regulatory framework around developers of large AI models. California routinely takes the lead on new tech legislation as the federal government increasingly proves too gridlocked to act.
Public opinion polls have consistently shown strong support for regulating the largest AI models. This may reflect a general anxiety about the development of software more intelligent than humans. It could also reflect doubt that profit-driven tech companies can be counted on to ensure the safety of AI systems, present and future.
The primary criticism from the tech world is that SB 1047 asks AI companies to anticipate and prepare for harms that future AI systems might cause, a burden critics call unrealistic.
AI pioneer and Turing Award winner Yoshua Bengio responded to the industry’s objection in an X post last week: “But (1) AI is rapidly getting better at abilities that increase the likelihood of these risks, and (2) We should not wait for a major catastrophe before protecting the public.”