California Lawmakers Propose Controversial Regulation of Artificial Intelligence, Musk Backs Bill

Lawmakers in California have proposed an Artificial Intelligence regulation bill, SB 1047, which would require companies in California that spend at least $100 million developing AI models to carry out safety testing aimed at preventing major risks or harms.

The bill introduces essential safeguards for the creation of highly capable AI models, often known as “frontier AI models.” These models are defined in the bill as those trained using more than 10^26 floating-point operations. “Models of this scope would cost at least $100 million to develop and, notably, do not yet publicly exist but are anticipated to emerge soon as technological advancements continue,” the bill highlights.
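
To put the 10^26 figure in perspective, here is a rough, illustrative Python sketch. The “6 × parameters × training tokens” rule of thumb and the example model sizes are assumptions used only for illustration; the bill itself sets the threshold but does not prescribe an estimation method.

```python
# Illustrative estimate of training compute against SB 1047's 10^26 FLOP threshold.
# The 6 * N * D heuristic and the example numbers below are assumptions, not from the bill.

THRESHOLD_FLOPS = 1e26  # frontier-model compute threshold cited in the bill


def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training compute with the common 6*N*D rule of thumb."""
    return 6 * num_parameters * num_training_tokens


def crosses_threshold(num_parameters: float, num_training_tokens: float) -> bool:
    """Return True if the estimated compute exceeds the bill's 10^26 FLOP line."""
    return estimated_training_flops(num_parameters, num_training_tokens) > THRESHOLD_FLOPS


if __name__ == "__main__":
    # Hypothetical runs: a 70B-parameter model on 15T tokens vs. a far larger training run.
    for params, tokens in [(7e10, 1.5e13), (1.8e12, 3e13)]:
        flops = estimated_training_flops(params, tokens)
        print(f"{params:.1e} params, {tokens:.1e} tokens -> {flops:.2e} FLOPs, "
              f"above threshold: {crosses_threshold(params, tokens)}")
```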

Notably, these models are advanced, resource-intensive projects that have caught the attention of the highest levels of government; their significant national security and public safety implications made them a focus of President Biden’s Executive Order on Artificial Intelligence.

With rapid advances in AI, industry insiders, often bound by punitive non-disclosure agreements, have sounded the alarm about the potential risks posed by these technologies. Experts have warned that without guardrails, these models could eventually help bad actors create a biological weapon, carry out cyber-attacks to shut down the electric grid, or melt down the banking system.

A group of current and former employees at frontier AI companies wrote,

“These risks range from the further entrenchment of existing inequalities to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

This bill seeks to mitigate the risk of catastrophic harm from AI models so advanced that they are not yet known to exist. Also, it would require developers of such models to create and implement safety and security protocols before initiating training.

Following training, developers would be required to perform risk assessments on their models and implement reasonable safeguards, subject to third-party auditing, before using or releasing them. If there is an unreasonable risk that a model will cause or materially enable critical harm (mass casualties, at least $500 million in damage, or other comparable harms), developers are prohibited from releasing or using the model.

The bill also adds whistleblower protections; mandates that operators of computing clusters implement “know your customer” requirements, including the ability to shut down any resources being used to train an advanced AI model; and lays the groundwork for the creation of a public computing cluster known as “CalCompute.”

It is, however, worth noting that the bill has generated a great deal of commentary and controversy. Several tech enthusiasts have expressed concern that the bill regulates AI technology itself rather than its high-risk applications.

They added that it creates significant regulatory uncertainty and high compliance costs, and exposes developers to significant liability for failing to foresee and block any harmful use of their models by others, all of which, they argue, inevitably discourages economic and technological innovation. OpenAI, the maker of the popular AI chatbot ChatGPT, has warned that if the bill is passed, it may be forced to move its operations out of California.

Meanwhile, Elon Musk, who has long been an advocate of AI regulation, has thrown his support behind the bill.

He wrote on X,

“This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill. For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.”

In addition to creating inconsistencies with federal regulations, the bill demands compliance with various requirements, for which developers would be subject to harsh penalties, including potential criminal liability. Lawmakers in California are facing a major deadline, with only until the end of this week to pass the bill.
