Trump Reportedly Considers Executive Order to Vet New AI Models

AI Regulations: Trump Admin's Cutting-Edge Plan?

I opened the tip, and my screen filled with an image from a closed-door meeting. You could feel the gravity: three Silicon Valley giants, White House aides, and a proposal that could put new models under government review. I want to walk you through what that moment means and why you should care.

At a secure meeting last week, executives from Anthropic, Google, and OpenAI sat across from Trump administration officials.

The New York Times, citing anonymous sources, reports that President Trump is weighing an executive order to create an AI working group made up of government and industry representatives. If true, this group would talk through oversight plans and could set up a formal government review process for new AI models. I've tracked similar policy moves before; they often start as loose ideas and end up reshaping what companies can release.

What would the AI working group do?

From the reporting, the group would decide which agencies get involved; names floated include the NSA, the White House Office of the National Cyber Director, and the Office of the Director of National Intelligence, which Tulsi Gabbard heads. You should picture a cross-agency vetting table where tech engineers and spooks argue over what counts as a security risk. If the group adopts a checklist, companies like Anthropic, Google, and OpenAI could face a pre-release review similar to regulatory pre-clearance.

A Commerce Department statement after the handover recast CAISI's mission under the new administration.

There is an existing structure: the National Institute of Standards and Technology created an AI Safety Institute under the Biden administration to vet models, and the Trump administration renamed it the Center for AI Standards and Innovation (CAISI). Commerce Secretary Howard Lutnick framed the post-handover shift as freeing innovation while keeping national security standards. I've read that memo; its language tries to thread two opposing goals at once.

How would this affect Big Tech and startups?

If a formal review process becomes the norm, large, capital-rich players such as Google and OpenAI may be able to absorb delays and compliance costs more easily than smaller startups can. You and I both know companies adapt fast when rules change, but the uneven burden could tilt the field toward incumbents. Think of regulation as a referee whose whistle favors whoever can pay for the best legal team.

At a briefing in the U.K., Anthropic showed an unreleased model, and regulators reacted with alarm.

That British episode is the closest analogue reporters are citing: Anthropic’s Claude Mythos Preview was shown to U.K. banks and agencies and judged too risky for release, especially on cybersecurity grounds. Officials from the National Cyber Security Centre, the Financial Conduct Authority, the Treasury, and the Bank of England were reportedly scrambling to coordinate a response. The comparison matters because it provides a template for multi-agency vetting — and for friction between national security and commercial rollout.

You'll recall that the Trump White House's policy document, A National Policy Framework for Artificial Intelligence, published in March, leaned toward light-touch rules: mostly prohibitions on broad regulation and an emphasis on age verification for certain content. That stance sits uneasily with a plan that would add a formal vetting step at the model level.

At last week’s meetings, tension was visible: company reps pressing timelines, officials pressing for control.

If a new executive order comes, it could change how quickly models move from lab to product. I can’t predict every outcome, but I track incentives: governments worry about misuse; companies worry about market and research pace. The result is a policy tug-of-war that will play out in contracts, compliance teams, and congressional hearings.

Across the Atlantic, after the Anthropic preview, U.K. regulators traded notes and options. In the U.S., a working group that decides which agencies weigh in would be powerful: it could act as a brake or as a gatekeeper, and gatekeepers shape markets.

There are real actors here: Anthropic, OpenAI, Google, NIST's CAISI, the NSA, the National Cyber Director's office, Howard Lutnick, Tulsi Gabbard, Vice President J.D. Vance (whose speech last year framed U.S. AI dominance as a national priority), and U.K. bodies like the NCSC and the FCA. Each name signals a different interest and a different set of pressure points.

At the point of decision, the mechanics matter: who reviews, what criteria they use, and what penalties follow.

Operational questions will dominate: Will reviews be advisory or binding? Will the process be public or classified? How will companies appeal a block or a delay? You want clarity, because vagueness becomes leverage for those with resources. If agencies can halt releases without transparent standards, the review regime will behave like a chokepoint where small risks trigger big stops.

I’m watching for three things: the exact language of any executive order, which agencies are named, and whether CAISI is reactivated as a central vetting body. Each detail will change incentives for Anthropic, OpenAI, Google, and dozens of startups racing to ship models.

Let him cook — but don’t assume the kitchen stays the same. Which side wins: faster commercial rollouts, or a slow‑moving national review that reshapes who gets to build next‑generation models?