Why a decade-long freeze on all state AI regs doesn’t make sense
COMMENTARY: Picture waking up to news that every state AI regulation in the country, covering privacy, bias, and even healthcare disclosures, has been put on ice for the next decade.
That’s the reality if Section 43201 of President Trump’s “Big, Beautiful Bill” becomes law.
It has passed the House and, just the other day, the Senate, so it could become law right before the July 4th weekend.
The provision would impose a 10-year national moratorium on new AI regulations, stripping states and local governments of the ability to govern AI systems in one blow. Proponents hail it as a masterstroke that eliminates a patchwork of conflicting laws, and tech companies call the moratorium necessary to turbocharge AI innovation. Yet this sudden freeze also risks dismantling essential guardrails that help protect organizations, workers, and consumers.
The case for the freeze
Advocates argue that a decade-long moratorium on state regulation will deliver national consistency, letting startups sidestep the hassle of managing dozens of data-privacy and labeling requirements, reducing compliance costs and speeding up R&D. Unfettered access to enormous datasets and high-performance computing resources could accelerate climate-tech breakthroughs, from AI-based carbon sequestration to real-time wildfire forecasting. The pause would also strengthen U.S. global leadership, letting American companies race ahead without being slowed by state legislators or playing catch-up as China, the EU, and India unveil their own AI roadmaps.
Although, in principle, the moratorium could unleash high-impact breakthroughs in biotech, medicine, and energy, moving at that pace without controls looks less like a carefully regulated test track and more like a high-speed pursuit.
The risks of unregulated AI
When innovation outpaces oversight, the consequences are often severe. Misinformation and deepfakes can take root, with AI-generated videos indistinguishable from reality swaying elections, fueling hate speech, and destroying reputations in a matter of hours.
Unchecked models trained on biased data can perpetuate algorithmic bias, resulting in discrimination in hiring, lending, or law enforcement and entrenching social inequalities. Customers subjected to unfair loan denials or misdiagnoses will lose faith, slowing AI adoption in critical industries. Automating routine tasks, from customer service to legal research, will bring waves of job losses in the absence of adequate social safety nets and retraining programs.
Why state regulations matter
State-level AI laws are still in their infancy, but they address pressing concerns such as consumer protection. For example, California’s privacy rules and Illinois’s biometric-data statutes limit unauthorized AI profiling and surveillance. Maryland and New York mandate human oversight, requiring a “human in the loop” for high-stakes decisions such as medical diagnoses or parole assessments. Washington state protects workers by prohibiting automated workplace monitoring without consent, preserving worker dignity.
Under the proposed moratorium, these tailored safeguards vanish overnight, replaced by voluntary corporate codes of conduct that lack teeth. States have historically served as “laboratories of democracy,” piloting creative solutions to local issues. Stripping them of regulatory authority risks silencing that grassroots innovation when it comes to ethical AI.
How to build responsible AI amid a regulatory freeze
Instead of mandating a 10-year, across-the-board regulatory freeze, a more viable solution would adopt a dynamic model, grounded in federal standards, reinforced by vigorous oversight, and supplemented by responsible industry practices, to preserve both innovation and protection. Here’s how we can do it:
Formulate a federal AI baseline framework: With no oversight at the state level, we need a federal framework for AI that defines minimum standards of privacy, fairness, and transparency, requires impact assessments for high-risk uses of AI, and mandates disclosure whenever AI contributes to a decision affecting fundamental rights or essential services.
Promote voluntary industry safeguards: Incentivizing firms to adopt transparency labels on AI-generated material, third-party certifications for appropriate AI use, and behavioral-use licenses that restrict harmful applications can serve as temporary protection until formal regulation catches up.
Create regulatory sandboxes: Federally approved AI sandboxes would let selected AI applications be tested under real-world conditions while remaining subject to close observation and ethical review.
Encourage AI literacy: In the absence of external regulations, companies need to reinforce internal governance by educating employees on AI-usage policies and data handling. AI literacy should be built into the employee handbook, starting with onboarding, and organizations should offer workshops on explainability, bias, and prompt engineering.
Develop an acceptable AI-use policy: Create clear, accessible policies that outline responsible AI use. Specify which AI tools are approved and the contexts in which they may be used. Assign ownership and accountability, including the roles responsible for approval, monitoring, and addressing misuse. The policy should also cover data-usage rules, compliance requirements, and penalties for AI misuse.
Take a security-centric approach to AI: A security-first AI strategy ensures that protection is not an afterthought but an inherent part of the AI development cycle. That means embedding protections across the entire lifecycle, from data gathering and model training through deployment and ongoing monitoring.
A 10-year moratorium on regulating AI could accelerate innovation, but it could also create long-term costs that are difficult to reverse. The technology we build today will largely determine how we respond to crises, serve communities, and establish trust. Without guidance, we risk chaos. It’s time to play the long game by creating AI that is not just fast and powerful, but governed, accountable, and designed to serve society.