Google, Microsoft, xAI Agree to Pre-Release AI Reviews by US Government
In a significant expansion of federal oversight, Google DeepMind, Microsoft, and Elon Musk's xAI have agreed to allow the U.S. government to examine new artificial intelligence models before they are released publicly. The Commerce Department's Center for AI Standards and Innovation (CAISI) announced Tuesday that it will conduct pre-deployment evaluations and targeted research on these frontier AI systems.
CAISI, which began reviewing models from OpenAI and Anthropic in 2024, said it has already completed 40 evaluations. Both OpenAI and Anthropic have renegotiated their existing agreements with the center to align with priorities set by President Donald Trump's administration, according to the announcement.
"Pre-deployment evaluations help us identify potential risks early, from bias to security vulnerabilities," said Dr. Elena Marchetti, CAISI’s director of evaluation. "By integrating these checks before a model reaches the market, we can ensure that critical safety standards are met."
Background
CAISI was established within the National Institute of Standards and Technology to address the unique challenges posed by advanced AI systems. The center originally focused on voluntary reviews with a handful of companies, but the new agreements with Google, Microsoft, and xAI mark a widening of its scope.

The move comes amid broader global debates on AI regulation. The U.S. has favored a voluntary, industry-led approach, but critics argue that independent pre-release testing is essential given the speed of AI development. The Trump administration has emphasized American leadership in AI while also expressing concerns about national security risks.
Industry analyst Sarah Kline remarked, "This is an early signal that even the largest tech firms are willing to submit to government scrutiny to maintain public trust and avoid potential legislative crackdowns."

What This Means
The agreement sets a precedent for closer collaboration between the federal government and AI developers. For companies, joining the program may offer reputational benefits and a smoother path to eventual regulatory compliance.
For consumers and businesses, the reviews could lead to earlier detection of flaws in AI products, such as inaccurate outputs, privacy leaks, or harmful biases. However, the process remains voluntary, and companies are not required to delay releases based on CAISI’s findings.
"The effectiveness of these evaluations will depend on how transparent the companies are and whether the government can keep pace with rapid model updates," said Marchetti. "Our goal is to build a framework that evolves with the technology."
Looking ahead, observers expect other major AI developers, such as Meta and Amazon, to face similar pressure to join. The program may also influence international standards as other nations watch the U.S. approach to pre-launch AI testing.
For now, the reviews cover only frontier models—the most advanced and capable systems. CAISI has not disclosed whether it will extend testing to smaller, specialized models used in healthcare, finance, or education.