5 Key Developments in US Government AI Safety Testing You Need to Know
The US government is stepping up its oversight of advanced artificial intelligence. Through the Center for AI Standards and Innovation (CAISI), a division of the National Institute of Standards and Technology (NIST) within the Department of Commerce, it has forged agreements with major AI developers to evaluate frontier models before public release. These moves signal a proactive shift in policy, aiming to balance innovation with security. Here are five critical developments in this evolving landscape.

1. New Agreements with Google DeepMind, Microsoft, and xAI
CAISI has signed evaluation agreements with Google DeepMind, Microsoft, and xAI, adding to earlier pacts with Anthropic and OpenAI. These agreements grant the agency pre-deployment access to the companies' frontier AI models, so that it can conduct safety tests and provide feedback before the systems reach the public. This expands the government's reach into the AI ecosystem, ensuring that leading developers submit their most advanced creations for independent scrutiny.

2. Pre-Deployment Evaluations and Targeted Research
Under these agreements, CAISI will perform pre-deployment evaluations and targeted research to better assess frontier AI capabilities. As stated in an official release, this work aims to "advance the state of AI security." The evaluations focus on identifying potential risks, such as vulnerabilities or misuse, before models are widely used. This hands-on approach helps the government understand cutting-edge AI and set benchmarks for safety.

3. Collaboration with the UK AI Safety Institute
The US agency is not working alone. It maintains close ties with the UK AI Safety Institute (AISI). The initial agreements with Anthropic and OpenAI, signed in August 2024, included plans for joint feedback on safety improvements. This international partnership strengthens the testing framework by sharing insights and methodologies, fostering a unified approach to AI governance across borders.

4. A Shift Toward Proactive Security
Fritz Jean-Louis, principal cybersecurity advisor at Info-Tech Research Group, sees these agreements as a pivot to proactive security for agentic AI. Government-led testing before and after deployment can "strengthen visibility into autonomous behaviors" and accelerate standardization. However, Jean-Louis notes potential hurdles, such as protecting intellectual property during evaluations. Despite these concerns, he calls the initiative a positive step for the industry, pushing toward security-by-design.

5. Potential Executive Order for a Vetting System
Following the CAISI announcement, Bloomberg reported that the White House is preparing an executive order to create a vetting system for all new AI models, a move reportedly prompted in particular by Anthropic's breakthrough Mythos model. The directive took shape after Anthropic revealed that Mythos could find network vulnerabilities and pose global cybersecurity risks. Independent analyst Carmi Levy links the CAISI testing framework to this broader policy direction, underscoring a significant shift in how the US approaches AI regulation.
These developments mark a pivotal moment in AI governance. By combining early access, continuous evaluation, and cross-sector collaboration, the government aims to build trust in advanced systems. As AI capabilities grow, so will the rigor of safety testing. The path forward will require balancing innovation with oversight, but these steps lay a foundation for responsible AI deployment.