
New AI regulations give Californians rare glimpse into development

January 2, 2026 · 5 Mins Read

By Khari Johnson, CalMatters

A new California law requires technology companies to disclose how they manage catastrophic risks from artificial intelligence systems. The Dreamforce conference hosted by Salesforce in San Francisco on September 18, 2024. Photo by Florence Middleton for CalMatters

This story was originally published by CalMatters. Sign up for their newsletters.

Tech companies that build large, advanced artificial intelligence models will soon have to share more information about those models’ impact on society and give their employees channels to raise the alarm when things go wrong.

Starting January 1, a law signed by Governor Gavin Newsom grants whistleblower protections to employees at companies such as Google and OpenAI whose jobs involve assessing the risk of critical safety incidents. It also requires developers of large AI models to publish frameworks on their websites explaining how the company responds to critical safety incidents and assesses and manages catastrophic risks. Fines for violating the frameworks can reach $1 million per violation. Under the law, businesses must report critical safety incidents to the state within 15 days, or within 24 hours if they believe a risk poses an imminent threat of death or injury.

The law began as Senate Bill 53, authored by state Sen. Scott Wiener, a Democrat from San Francisco, to address the catastrophic risks posed by advanced AI models, sometimes called frontier models. The law defines catastrophic risk as a case where the technology kills more than 50 people through a cyberattack or injures people with a chemical, biological, radiological, or nuclear weapon, or where the use of AI results in more than $1 billion in theft or damage. It also addresses the risk that an operator loses control of an AI system, for example because the AI deceived its operator or acted independently, scenarios still largely considered hypothetical.

The law increases the information that AI developers must share with the public, including in a transparency report that must cover a model’s intended uses, any restrictions or conditions on its use, how the company assesses and manages catastrophic risk, and whether those efforts have been reviewed by an independent third party.

The law will bring much-needed disclosure to the AI industry, said Rishi Bommasani, a member of a Stanford University group that tracks transparency around AI. Only three of the 13 companies his group recently studied regularly report incidents, and his group’s transparency scores for those companies have declined on average over the past year, according to a recently published report.

Bommasani is also a lead author of a report commissioned by Newsom that heavily influenced SB 53, and he sees transparency as key to public trust in AI. He believes the effectiveness of SB 53 depends heavily on the government agencies charged with enforcing it and the resources allocated to them to do so.

“In theory, you can write any law, but its practical impact depends heavily on how you implement it, how you enforce it, and how businesses engage with it.”

The law was influential even before it came into force. New York Governor Kathy Hochul credited it as the basis for the AI Transparency and Security Act that she signed on December 19. The similarity will grow, City & State New York reported, because the New York law will be “significantly rewritten next year, largely to align with California language.”

Limitations and implementation

The new law falls short even if it is well implemented, critics say. Its definition of catastrophic risk omits harms such as the environmental impact of AI systems, their ability to spread misinformation, and their potential to perpetuate historical systems of oppression like sexism or racism. The law also does not apply to AI systems used by governments to profile people or assign them scores that could lead to denial of government services or accusations of fraud, and it only targets companies that generate $500 million in annual revenue.

Its transparency measures also fail to ensure full public visibility. In addition to providing transparency reports, AI developers must also send incident reports to the Office of Emergency Services if any issues arise. Members of the public can also contact this office to report catastrophic risk incidents.

But the contents of incident reports submitted to the OES by companies or their employees cannot be provided to the public through records requests and will instead be shared with members of the California Legislature and Newsom. Even then, they can be redacted to hide information that companies classify as trade secrets, a common way for companies to prevent information about their AI models from being shared.

Bommasani hopes additional transparency will come from Assembly Bill 2013, a law passed in 2024 that also comes into force on January 1. It requires companies to disclose additional details about the data they use to train AI models.

Some elements of SB 53 won’t take effect until next year. Beginning in 2027, the Office of Emergency Services will produce a report on the critical safety incidents the agency receives from the public and from frontier model developers. That report could provide more clarity on the extent to which AI can launch attacks against infrastructure or act without human direction, but it will be anonymized, so the AI models that pose these threats will not be publicly identified.

This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-No Derivatives license.
