The National Institute of Standards and Technology's AI Safety Institute will be able to access future artificial intelligence models from OpenAI and Anthropic both before and after their public release under recently signed agreements, SC Media reports.
According to NIST, such access would help advance AI testing and research and improve assessments of AI capabilities and risks; the institute will also use it to help OpenAI and Anthropic strengthen the safety of their models.
"Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety. These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said U.S. AISI Director Elizabeth Kelly.
The development comes more than a year after OpenAI and Anthropic, along with five other AI firms, committed to bolstering AI model safety and security.