Think back to May: as Congress grappled with a major budget bill, Senator Ted Cruz introduced a contentious proposition—an unprecedented ten-year moratorium preventing states from regulating artificial intelligence (AI). The proposal marked a pivotal moment for tech policy.
Many observers viewed this as a disastrous move, given the outsized influence a handful of tech giants already exert on our economy. Their data centers consume more energy than entire communities of households, their data harvesting infringes on creators' rights, and their products have been linked to widespread job displacement and even emerging clinical mental health harms. Given Congress's inability to enact effective consumer protections or market regulations, why would we restrict states—the very entities, like California and Massachusetts, that have actually acted to protect consumers—from addressing these urgent matters?
In response to Cruz's proposal, seventeen Republican governors rallied against it in a letter, which helped lead to its rejection in a rare, nearly unanimous bipartisan Senate vote. However, the issue is back on the table: House Republican leaders have suggested inserting it into the annual defense spending bill, and recent leaks have revealed the Trump administration's plans to enforce this state regulatory ban through executive action, raising alarms across the states that have begun to stand up for their constituents.
This renewed proposal appears driven by a mix of conservative ideology, financial interests, and fears surrounding competition with China.
Proponents argue that allowing states to regulate AI would create a confusing patchwork that could hinder the innovation necessary to compete globally. The AI sector has spent heavily on lobbying to promote this narrative. But citizens should scrutinize it. Shouldn't we prioritize state representatives who answer to our interests over letting a few monopolistic entities dictate Washington's regulations?
In addition, the debate here isn't just ideological; it's becoming increasingly partisan. Figures like Vice President J.D. Vance caution against "progressive" states controlling AI's trajectory, highlighting the growing polarization on this issue. Nonetheless, it's crucial for both Democrats and Republicans to recognize their shared interest in safeguarding consumers from Big Tech's potential overreach. For instance, Republican Senator Marsha Blackburn articulated the risks of Cruz's original moratorium, emphasizing that it would allow tech companies to further exploit vulnerable populations.
Addressing the concern that a patchwork of state regulations is hard to navigate, it’s essential to note that numerous industries adapt to local regulations—automobiles, toys, food, and pharmaceuticals—and most manage this successfully. The AI industry, with its immense resources, has shown it can comply with stricter regulations abroad, including in the EU.
The unique advantage states hold rests in their agility and connection to local needs, allowing them to explore regulations tailored to their communities. This experimentation is vital, especially in such a rapidly evolving field as AI.
Moreover, regulations should act as catalysts for innovation rather than limitations. Protective measures don’t stifle a company’s ability to improve but instead guide that innovation to benefit the public. Regulations related to drug safety, for example, ensure that new medicines are both effective and safe.
Importantly, the need for regulation is underscored by the concentration of power within the trillion-dollar AI industry—a concern that’s evident in the discussions surrounding governance and equity. As we argue in our book, Rewiring Democracy, the lack of decisive congressional action on AI has made states increasingly critical as the only effective tools for managing this growing influence.
Rather than stifling state-level regulation, the federal government should empower these efforts to channel AI innovation positively. If concerns about private sector performance linger, government involvement can foster the development of AI models that serve the public good, much like the approaches seen in Switzerland, France, and Singapore, which prioritize transparency and community benefit.
What if you’re uncertain about the federal government handling AI? You’re not alone in that sentiment. States can serve as better incubators for public interest innovations, given their proximity to the people and their trusted role in delivering essential services. This local trust and responsiveness can provide a suitable ground for testing various approaches that could eventually guide broader federal policies.
In sum, the call for a moratorium on state-level AI regulation should be met with robust resistance. Not only do states have the resources to engage in meaningful regulation, but they also have the inherent capacity to safeguard their citizens’ interests amidst a rapidly changing landscape. Let’s harness innovation to benefit everyone, not just a privileged few.
Wondering how these changes might affect your community? Stay informed and involved as states define their regulatory roles in AI. By engaging with local policy, you can help shape the conversation around technology and its impact on society.