GAITHERSBURG, Md. — Today, the U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced agreements that enable formal collaboration on AI safety research, testing and evaluation with Anthropic and OpenAI.
Each company’s memorandum of understanding establishes a framework for the U.S. AI Safety Institute to receive access to major new models from each company prior to and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks.
“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the U.S. AI Safety Institute. “These agreements are just the beginning, but they are an important step as we work to help responsibly manage the future of AI.”
Additionally, the U.S. AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the U.K. AI Safety Institute.
The U.S. AI Safety Institute builds on NIST’s more than 120-year legacy of advancing measurement science, technology, standards and associated tools. Assessments under these agreements will advance NIST’s work on AI by facilitating in-depth collaboration and exploratory research on advanced AI systems across a range of risk areas.
Evaluations conducted pursuant to these agreements will help advance the safe and trustworthy development and use of AI, building on the Biden-Harris Administration’s Executive Order on AI and the voluntary commitments made to the administration by leading AI model developers.
About the U.S. AI Safety Institute
The U.S. AI Safety Institute, located within the Department of Commerce’s National Institute of Standards and Technology (NIST), was established under the Biden-Harris Administration’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to advance the science of AI safety and address the risks posed by advanced AI systems. It is responsible for developing the testing, evaluations and guidelines that will help accelerate safe AI innovation here in the United States and around the world.
