US AI Safety Institute signs AI safety research, testing and evaluation agreements with Anthropic and OpenAI

November 16, 2024
GAITHERSBURG, Md. — Today, the U.S. AI Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced agreements that enable formal collaboration on AI safety research, testing and evaluation with Anthropic and OpenAI.

Each company’s memorandum of understanding establishes the framework for the U.S. AI Safety Institute to receive access to each company’s major new models prior to and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks.

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, Director of the U.S. AI Safety Institute. “These agreements are just the beginning, but they are an important milestone as we work to help responsibly steward the future of AI.”

Additionally, the U.S. AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, working in close collaboration with its partners at the UK AI Safety Institute.

The U.S. AI Safety Institute builds on NIST’s more than 120-year legacy of advancing measurement science, technology, standards and related tools. Evaluations under these agreements will further NIST’s work on AI by facilitating deep collaboration and exploratory research on advanced AI systems across a range of risk areas.

Evaluations conducted pursuant to these agreements will help advance the safe and trustworthy development and use of AI, building on the Biden-Harris Administration’s Executive Order on AI and the voluntary commitments made to the administration by leading AI model developers.

About the U.S. AI Safety Institute

The U.S. AI Safety Institute, located within the Department of Commerce’s National Institute of Standards and Technology (NIST), was established following the Biden-Harris Administration’s 2023 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence to advance the science of AI safety and address the risks posed by advanced AI systems. It is responsible for developing the testing, evaluations and guidelines that will help accelerate safe AI innovation here in the United States and around the world.

