The United States finds itself at the most consequential inflection point in technology governance since the passage of the Telecommunications Act of 1996. Three decades after that landmark legislation reshaped how America regulated communications technology, the country faces an equally transformative challenge: constructing a regulatory architecture for artificial intelligence that balances innovation with safety, economic competitiveness with worker protection, and national security with civil liberties. The decisions made between now and the 2028 presidential election will establish the foundational framework — or the conspicuous absence of one — that governs AI development for a generation.
The Executive Order Era: Governance by Presidential Directive
American AI governance has been characterized by an unusual reliance on executive authority rather than legislative action. The pattern began accelerating in 2023 with Executive Order 14110, which established the most comprehensive federal AI framework to date. That order required developers of powerful AI systems to share safety test results with the government, directed the National Institute of Standards and Technology to develop evaluation standards, and created reporting requirements for companies training models above certain computational thresholds.
What made the executive order approach both necessary and fragile was the same underlying reality: Congress could not — and still cannot — achieve consensus on comprehensive AI legislation. The political dynamics are genuinely complex. Democrats generally favor stronger regulation but are split between technology-friendly moderates who worry about stifling innovation and progressives who demand aggressive worker protections and algorithmic accountability. Republicans are divided between national security hawks who want to regulate AI to counter China and free-market advocates who view any regulation as innovation-killing government overreach.
The result has been a patchwork of executive actions that shift dramatically with each administration. The regulatory pendulum has already swung twice in three years. The Biden administration’s comprehensive framework was partially rolled back by executive action in early 2025, with the current administration favoring industry self-regulation and voluntary commitments over mandatory compliance requirements. This volatility itself has become a governance problem — companies cannot plan compliance strategies when the regulatory ground shifts every two to four years.
The State Laboratory: 50 Experiments in AI Governance
With federal legislation stalled, states have become the primary laboratories for AI regulation. As of early 2026, thirty-seven states have enacted some form of AI-related legislation, creating a complex and sometimes contradictory patchwork of requirements that companies must navigate.
California’s approach has been the most influential, building on the state’s long history as a technology regulatory pioneer. The California AI Transparency Act, which took effect in January 2026, requires disclosure of AI-generated content in political advertising, mandates algorithmic impact assessments for systems used in employment decisions, and establishes the nation’s first state-level AI incident reporting requirement. The law has effectively become a national standard for many companies that find it simpler to comply universally than to maintain separate systems for California and non-California users.
Colorado has taken a different path, focusing specifically on algorithmic discrimination in insurance and financial services. The Colorado AI Act requires companies using AI in high-risk decision-making to conduct annual bias audits and provide consumers with the right to appeal AI-made decisions. Illinois extended its pioneering Biometric Information Privacy Act to cover AI systems that process facial recognition and voice data, creating some of the strictest AI privacy requirements in the country.
Texas, by contrast, has pursued what Governor-aligned legislators call a “pro-innovation” approach, enacting legislation that preempts local AI regulations and limits the circumstances under which individuals can sue over AI-related harms. The Texas model has attracted significant AI industry investment, with several major AI companies establishing or expanding Texas operations in response to the regulatory environment.
The divergence between state approaches has created what legal scholars describe as a “regulatory fracture” — a situation where the same AI system may be legal, restricted, or effectively prohibited depending on the state in which it operates. This fracture has intensified calls for federal preemption, but the same congressional paralysis that prevented federal legislation in the first place makes preemption equally unlikely before 2028.
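The compliance burden the "regulatory fracture" creates can be made concrete with a small sketch. The encoding below is a deliberately simplified, hypothetical summary of the regimes described above (the obligation names and the per-state flags are illustrative assumptions, not a legal mapping); it shows why a company deployed in several states often ends up with the union of all their obligations — and why complying with the strictest state nationally can be the simpler choice:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StateAIRules:
    """Illustrative, simplified summary of one state's AI requirements."""
    state: str
    disclosure_for_political_ads: bool = False   # e.g., California
    employment_impact_assessments: bool = False  # e.g., California
    incident_reporting: bool = False             # e.g., California
    annual_bias_audits: bool = False             # e.g., Colorado
    consumer_appeal_right: bool = False          # e.g., Colorado
    biometric_consent: bool = False              # e.g., Illinois
    preempts_local_rules: bool = False           # e.g., Texas

# Hypothetical encoding of the regimes sketched in this article -- not legal advice.
RULES = {
    "CA": StateAIRules("CA", disclosure_for_political_ads=True,
                       employment_impact_assessments=True, incident_reporting=True),
    "CO": StateAIRules("CO", annual_bias_audits=True, consumer_appeal_right=True),
    "IL": StateAIRules("IL", biometric_consent=True),
    "TX": StateAIRules("TX", preempts_local_rules=True),
}

def obligations(states):
    """Union of obligations across every state where a system is deployed."""
    merged = set()
    for code in states:
        rules = RULES[code]
        for name, value in vars(rules).items():
            if name != "state" and value:
                merged.add(name)
    return sorted(merged)

# Deploying in two states already yields five distinct obligations.
print(obligations(["CA", "CO"]))
```

Nothing here captures real statutory detail; the point is structural — each additional state monotonically grows the obligation set, which is exactly the dynamic that makes a single nationwide compliance posture attractive.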
NIST and the Soft Power of Standards
While the legislative and executive branches have struggled with AI governance, the National Institute of Standards and Technology has quietly emerged as perhaps the most influential American voice in AI policy. NIST’s AI Risk Management Framework, first published in 2023 and updated through 2025, has become the de facto standard for AI safety evaluation not just in the United States but globally.
The framework’s influence stems from its pragmatic approach. Rather than prescribing specific technical requirements that rapidly become obsolete, NIST established a principles-based framework that organizations can adapt to their specific contexts. The framework’s four core functions — Govern, Map, Measure, and Manage — provide a vocabulary and structure for AI risk management that has been adopted by regulators, industry groups, and international bodies alike.
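The four core functions really do serve as an organizing vocabulary in practice. A minimal sketch of that structure, assuming a simple risk-register layout (the register shape and the example activity notes are this sketch's own invention, not NIST's):

```python
# The NIST AI RMF's four core functions, used as the top-level keys
# of a toy risk register.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

def new_register():
    """An empty register: one activity list per core function."""
    return {fn: [] for fn in RMF_FUNCTIONS}

def log_activity(register, function, note):
    """Record a risk-management activity under one of the four functions."""
    if function not in register:
        raise ValueError(f"not an RMF core function: {function}")
    register[function].append(note)

# Illustrative entries for a hypothetical hiring-decision model.
reg = new_register()
log_activity(reg, "Govern", "assign an accountable owner for the hiring model")
log_activity(reg, "Map", "document intended use and affected applicant groups")
log_activity(reg, "Measure", "run a bias benchmark on held-out applicant data")
log_activity(reg, "Manage", "define a rollback trigger if disparity exceeds threshold")
```

The value of the framework is visible even in this toy: any activity that cannot be placed under one of the four functions is a prompt to ask what it is actually contributing to risk management.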
NIST’s work on AI evaluation methodology has been equally significant. The institute’s benchmark suite for large language model evaluation, developed in collaboration with academic researchers and industry partners, has become the standard reference point for assessing model capabilities and safety characteristics. When policymakers discuss whether an AI system is “safe enough” for a particular application, they are increasingly referencing NIST benchmarks and evaluation criteria.
The limitation of the NIST approach is that standards are voluntary. Companies can and do cite NIST alignment in their marketing and regulatory filings while implementing the framework selectively. Without mandatory compliance requirements backed by enforcement authority, the framework remains a recommendation rather than a regulation — influential but ultimately toothless against bad actors.
The 2028 Election as Regulatory Catalyst
The 2028 presidential election cycle is shaping up to be the most significant catalyst for AI policy action since the technology entered mainstream consciousness. Three converging forces are creating pressure for substantive policy development that has been absent in previous cycles.
First, the economic disruption caused by AI automation has moved from theoretical projection to lived experience for millions of American workers. The Bureau of Labor Statistics estimates that AI-driven automation contributed to approximately 2.8 million job displacements in 2025, concentrated in customer service, data entry, content creation, and basic legal and financial analysis. These displacements have created a constituency demanding policy action that did not exist in previous election cycles.
Second, the use of AI in political campaigns themselves has become a major issue. The 2024 election cycle saw the first widespread deployment of AI-generated political content, including synthetic audio, manipulated video, and AI-written campaign communications. While the actual impact on election outcomes remains debated, the public perception that AI threatens electoral integrity has created bipartisan support for at least some form of election-specific AI regulation.
Third, the international competitive landscape has shifted dramatically. The European Union’s AI Act, fully effective since August 2025, has created the world’s most comprehensive AI regulatory framework. China has implemented its own extensive AI governance regime. The United States’ relative regulatory vacuum has become a point of international concern, with some foreign governments and companies expressing reluctance to adopt American AI products that have not been subject to comparable regulatory scrutiny.
The leading primary candidates in each major party have been forced to articulate AI policy positions far more detailed than anything required in previous cycles. The policy proposals range from comprehensive federal AI legislation modeled on the EU AI Act to market-based approaches emphasizing voluntary industry commitments and targeted interventions for specific high-risk applications.
The Institutional Gap: Who Regulates AI?
Perhaps the most fundamental challenge in American AI governance is institutional. The United States does not have a dedicated AI regulatory agency, and the existing regulatory infrastructure was designed for a pre-AI world. The Federal Trade Commission has emerged as the most active federal regulator, using its authority over unfair and deceptive practices to pursue cases involving AI-driven discrimination, deceptive AI marketing claims, and AI-related data privacy violations.
But the FTC’s authority is limited and contested. The commission’s enforcement actions are reactive rather than proactive — the FTC can penalize companies for AI harms after they occur but has limited authority to establish prospective rules preventing harms before they materialize. Multiple legislative proposals have called for establishing a dedicated federal AI agency, but disagreements over the agency’s scope, authority, and budget have prevented any proposal from advancing.
The Department of Commerce, through NIST and the Bureau of Industry and Security, has played an increasingly important role in AI governance, particularly in areas touching national security and export control. The semiconductor export restrictions targeting China have been among the most consequential AI policy actions of the past three years, affecting global supply chains and the pace of AI development worldwide.
The Department of Defense and intelligence community represent another critical institutional actor. The military’s adoption of AI for autonomous systems, intelligence analysis, and logistics has raised profound questions about accountability, international humanitarian law, and the pace at which AI capabilities are being deployed in national security contexts.
Looking Ahead: The 2028 Regulatory Landscape
The regulatory landscape that will exist on Inauguration Day 2029 depends on outcomes that remain deeply uncertain. However, several structural trends are likely to persist regardless of election outcomes.
The state-level regulatory patchwork will continue to expand, creating increasing compliance complexity for companies operating nationally. At least five additional states are expected to enact significant AI legislation before the 2028 election, further intensifying pressure for federal action.
International regulatory convergence will continue to shape American policy options. As more countries adopt comprehensive AI frameworks, the cost of American regulatory divergence will increase. Companies operating globally will face growing pressure to meet the most stringent standard among the jurisdictions where they operate, effectively importing international requirements into American practice.
The technical pace of AI development will continue to outstrip regulatory capacity. Frontier AI capabilities that seem hypothetical today will be commercially deployed before any comprehensive regulatory framework can be enacted and implemented. This temporal mismatch between technological capability and regulatory response has been a defining feature of AI governance and shows no signs of resolving.
What remains to be determined — and what the 2028 election will substantially influence — is whether the United States pursues a comprehensive federal framework that provides regulatory certainty and international alignment, or continues with the current fragmented approach of executive orders, state laws, and voluntary industry commitments. The stakes extend far beyond regulatory process. The architecture of AI governance that America builds — or fails to build — in this period will shape the trajectory of the most transformative technology of the twenty-first century.
This analysis reflects conditions as of March 2026. Regulatory developments continue to evolve rapidly. USA 2028 AI updates its regulatory tracking database continuously and publishes revised analysis as material changes occur.