Trump’s ‘Light Touch’ AI Framework: National Policy with Few Rules

They handed me a three-page memo and a half-smile. You could feel the shrinkage: policy squeezed into a sheet the size of a grocery list. I read it and thought: this is the closest thing to no rules at all.

I’m going to tell you what’s inside, who wins, and who’s being asked to swallow the bill. You’ll get the parts that matter for regulation, for companies like OpenAI, Google, Meta and Microsoft, and for anyone worried about kids, copyright, or harmful outputs.

The framework itself is posted by the White House as a three-page PDF; the language is intentionally light and, in places, protective of the major AI players.

In the West Wing, aides handed reporters a three-page memo.

The document reads like a checklist written for Capitol Hill rather than a regulatory roadmap. It presses Congress to pass laws on a few narrow points: age-verification measures for minors, mechanisms for copyright holders to license material to model trainers, and a national standard that would outflank state laws. I read it as a strategic nudge—enough to steer lawmakers without provoking a full fight.

What does the framework actually propose?

Short answer: a light-touch playbook. It endorses “age assurance requirements” similar to elements of the Kids Online Safety Act, asks Congress to create licensing pathways for training data while also saying the administration doesn’t consider training on copyrighted material unlawful, and urges federal preemption to avoid a patchwork of fifty different state laws. It also includes a clause that resembles a Section 230-style shield for AI developers, arguing states shouldn’t be able to penalize developers for third parties’ unlawful uses of their models.

State capitals had been drafting stronger measures at the same time.

California, New York and other states were already writing tighter AI rules—some aimed at consumer protections, some at company transparency. The framework’s preemption language is a direct counter: “Congress should preempt state AI laws that impose undue burdens,” it says, pushing for one national standard instead of a fifty-state patchwork. Politically, that’s an appeal to large tech firms that prefer uniformity; legally, it raises fights over federal authority and local innovation.

Will this override state AI laws?

Not automatically. The memo asks Congress to pass laws that would preempt state measures that the federal government views as burdensome. You should expect powerful industry lobbying—OpenAI, Google, Meta and Microsoft all have skin in this game—to press for national uniformity. But many members of Congress have their own proposals, and state legislatures aren’t going to sit still; this will be litigated and fought in committees and courtrooms.

Engineers at OpenAI, Google and Meta are watching the liability language closely.

The closing lines are the most consequential for developers: a sentence that reads like Section 230 rewritten to bind the states. It says states shouldn’t penalize AI developers for unlawful third-party conduct involving their models. That’s a broad protection that could limit remedies for victims of defamation, fabricated sexual content, or other harms caused by model outputs.

Are AI companies shielded from liability under the proposal?

Not completely, but the direction is clear. The administration frames the proposal as stopping state enforcement, not granting an absolute federal indemnity. Still, if Congress accepts that framing, companies could gain powerful defenses against state-level prosecutions and civil suits. Given Trump’s past criticism of Section 230, this pivot—supporting a Section 230-like shield for AI—is striking. It’s a political trade: regulatory certainty for industry, and less room for state-level accountability.

I don’t think this memo was meant to be a finished product. It’s a signaling device aimed at Congress and the lobbyists who move in and out of both chambers. It’s also a market nudge—encouraging licensing schemes so rights holders can monetize training data while softening the legal ground beneath class-action lawyers and state attorneys general.

Two images help explain the posture: one, a single sticky note attempting to hold back a fast-growing stream; two, a broad brushstroke meant to paint every coastline the same color. Both metaphors point to the same tension—you either give companies room to operate, or you give citizens a way to hold them to account.

For you as a consumer or a policymaker, the stakes are tangible: protections for minors, the ability of artists and authors to get paid for their work, and whether victims of harmful outputs can get recourse. I’ll watch how the bill language changes in committee, who writes the carve-outs, and which tech giants end up writing checks to move the needle.

If Congress follows this memo, we may see a national law that favors big-tech platform models and licensing markets over state experimentation—but will that keep America competitive with China while protecting civil liberties, or simply hand the largest players another layer of immunity?