
California lawmakers pass extensive AI safety legislation

While the conversation around the ethics of generative AI continues, the California State Assembly and Senate have taken a significant step by passing the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047). This legislation marks one of the first major regulatory efforts for AI in the US.

Developers must be able to quickly and fully disable any AI model considered unsafe

The bill, which has been a hot topic of discussion from Silicon Valley to Washington, would impose some key rules on AI companies in California. For starters, before training their advanced AI models, companies will need to ensure they can quickly and completely shut down the system if things go awry. They will also have to protect their models from unsafe modifications after training and conduct more rigorous testing to determine whether the model could pose serious risks or cause significant harm.

https://twitter.com/Scott_Wiener/status/1828932386564042785

Critics of SB 1047, including OpenAI, the company behind ChatGPT, have raised concerns that the law is too fixated on catastrophic risks and might unintentionally hurt small, open-source AI developers. In response to this pushback, the bill was revised to swap out potential criminal penalties for civil ones. It also narrowed the enforcement powers of California’s attorney general and modified the criteria for joining a new “Board of Frontier Models” established by the legislation.

Governor Gavin Newsom has until the end of September to make a call on whether to approve or veto the bill.

As AI technology continues to evolve at lightning speed, I do believe regulations are the key to keeping users and our data safe. Recently, big tech companies like Apple, Amazon, Google, Meta, and OpenAI came together to adopt a set of AI safety guidelines laid out by the Biden administration. These guidelines focus on commitments to test AI systems’ behaviors, ensuring they don’t show bias or pose security risks.

The European Union is also working towards creating clearer rules and guidelines around AI. Its main goal is to protect user data and look into how tech companies use that data to train their AI models. However, the CEOs of Meta and Spotify recently expressed worries about the EU’s regulatory approach, suggesting that Europe might risk falling behind because of its complicated regulations.

