Global AI Market: $842B ▲ 34.2% | US AI Investment: $312B ▲ 28.7% | AI Patents Filed: 187,400 ▲ 41.3% | NVIDIA Market Cap: $4.2T ▲ 12.8% | AI Regulatory Acts: 47 ▲ 18.5% | AGI Prediction Index: 72.4 ▲ 8.1% | US Compute Capacity: 2.8EF ▲ 56.4% | AI Job Displacement: 14.2M ▲ 22.6% | AI Safety Funding: $18.7B ▲ 67.3% | Election AI Budget: $2.4B ▲ 340% |

Election Technology and AI: How Artificial Intelligence Is Reshaping the 2028 Presidential Race

Analysis of AI's growing role in election infrastructure, campaign technology, voter targeting, deepfake threats, and the regulatory response as America prepares for the most AI-influenced presidential election in history.

The 2028 United States presidential election will be the first in which artificial intelligence is not a novel curiosity but a fundamental infrastructure layer. AI systems will touch virtually every aspect of the electoral process — from how candidates identify and communicate with voters, to how campaigns are financed and organized, to how election officials process ballots and verify results, to how citizens consume political information and form opinions. The question is no longer whether AI will influence the 2028 election but whether the institutional, regulatory, and technological safeguards being developed can maintain democratic integrity in an environment where the tools of persuasion, deception, and mobilization have been transformed beyond anything previous democratic systems were designed to withstand.

Campaign AI: The Industrialization of Voter Contact

The use of AI in political campaigns has progressed from experimental novelty to industrial-scale deployment in less than four years. The 2024 election cycle saw the first widespread adoption of AI tools for voter targeting, message generation, and donor solicitation. The 2026 midterm campaigns refined these techniques. By the time serious 2028 presidential campaigns launch their operations — most already have, unofficially — AI will be embedded in every layer of campaign technology.

Modern campaign AI operates across several dimensions simultaneously. Voter modeling has evolved from demographic segmentation and historical voting pattern analysis to individualized prediction models that estimate the probability that a specific registered voter will turn out, their likely candidate preference, and their susceptibility to particular message frames. These models draw on an unprecedented breadth of data: voter file records, consumer purchasing data, social media activity, mobile location patterns, and — increasingly — proprietary data acquired from data brokers who aggregate information from hundreds of sources.
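The individualized turnout prediction described above reduces, at its simplest, to a logistic score over per-voter features. The sketch below is purely illustrative: the feature names, weights, and bias are assumptions for demonstration, not values from any real campaign model.

```python
import math

# Hypothetical feature weights for a per-voter turnout model.
# Names and values are illustrative assumptions, not real campaign data.
WEIGHTS = {
    "voted_last_general": 2.1,    # prior turnout is the strongest predictor
    "voted_last_primary": 1.4,
    "years_registered": 0.05,     # small boost per year on the rolls
    "contacted_by_campaign": 0.3,
}
BIAS = -2.0

def turnout_probability(voter: dict) -> float:
    """Logistic score: P(turnout) = sigmoid(w . x + b)."""
    z = BIAS + sum(WEIGHTS[k] * voter.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A habitual voter scores far higher than a newly registered one.
frequent = {"voted_last_general": 1, "voted_last_primary": 1, "years_registered": 12}
new_reg = {"years_registered": 1}
```

Real voter models use far richer features and learned weights, but the output is the same: a single probability attached to each registered voter, which the campaign then uses to ration contact resources.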

Message generation has been similarly transformed. Where campaigns once relied on teams of human copywriters to produce email solicitations, direct mail pieces, and social media content, AI systems now generate thousands of message variants optimized for different audience segments. A single fundraising email may exist in dozens of versions, each calibrated for the recipient’s predicted income level, issue priorities, emotional triggers, and communication preferences. The generation is automatic, the testing is automatic, and the optimization cycle runs continuously.
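The continuous test-and-optimize loop described above is, in essence, a multi-armed bandit over message variants. A minimal epsilon-greedy sketch, with invented variant names and response counts, might look like this:

```python
import random

def pick_variant(stats, epsilon=0.1, rng=random):
    """Epsilon-greedy selection over message variants.

    stats maps variant_id -> (sends, responses). With probability epsilon
    we explore a random variant; otherwise we exploit the variant with
    the best observed response rate so far.
    """
    if rng.random() < epsilon:
        return rng.choice(list(stats))
    return max(stats, key=lambda v: stats[v][1] / max(stats[v][0], 1))

def record_send(stats, variant, responded):
    """Update counts after each send so the next pick uses fresh data."""
    sends, responses = stats.get(variant, (0, 0))
    stats[variant] = (sends + 1, responses + int(responded))

# Illustrative counts for three hypothetical subject lines.
stats = {"subject_a": (100, 7), "subject_b": (100, 12), "subject_c": (100, 3)}
```

Production systems use more sophisticated allocation (Thompson sampling, contextual bandits keyed to the recipient model), but the loop is the same: send, measure, shift volume toward whatever is working.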

Phone banking and canvassing operations have incorporated AI voice synthesis and conversation management. AI systems can conduct voter contact calls that are indistinguishable from human callers for routine interactions — confirming voter registration, assessing candidate preference, and delivering scripted persuasion messages. These systems handle thousands of simultaneous conversations and adapt their approach based on real-time conversational cues. The regulatory status of AI-conducted voter contact calls remains ambiguous in most jurisdictions, a gap that campaigns have aggressively exploited.
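The conversation management behind such calls can be thought of as a branching script: the system delivers a prompt, classifies the voter's reply, and transitions to the next node. The toy state machine below is a stand-in for those systems; the flow, prompts, and yes/no classification are all invented for illustration.

```python
# Minimal scripted call flow, branching on the voter's response.
# Real systems classify free-form speech; here replies are just "yes"/"no".
CALL_FLOW = {
    "intro": {"prompt": "Hi, can we confirm you're registered to vote?",
              "yes": "preference", "no": "register"},
    "register": {"prompt": "Would you like registration information?",
                 "yes": "end", "no": "end"},
    "preference": {"prompt": "Do you support our candidate?",
                   "yes": "end", "no": "persuade"},
    "persuade": {"prompt": "May we share a short message on the economy?",
                 "yes": "end", "no": "end"},
}

def run_call(responses):
    """Walk the flow with a fixed list of replies; return prompts delivered."""
    state, delivered = "intro", []
    for answer in responses:
        if state == "end":
            break
        node = CALL_FLOW[state]
        delivered.append(node["prompt"])
        state = node.get(answer, "end")  # unrecognized reply ends the call
    return delivered
```

The regulatory ambiguity noted above turns partly on this structure: a call that never leaves its script resembles a robocall, while one that adapts mid-conversation behaves more like a live canvasser.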

The Deepfake Challenge: Synthetic Media and Electoral Integrity

The threat that AI-generated synthetic media poses to electoral integrity has been extensively discussed since at least 2019, but the practical reality has evolved far beyond early predictions. The quality of AI-generated video, audio, and imagery has improved to the point where detection by human observation is unreliable, and even automated detection systems face rapidly increasing error rates as generation techniques improve.

The most significant development has not been the creation of perfectly convincing deepfakes of major candidates — though such content has been produced and circulated — but the wholesale industrialization of synthetic political content at lower quality thresholds. The majority of AI-generated political content is not attempting to pass as authentic footage of real events. Instead, it takes the form of AI-generated “news reports” from fictitious outlets, synthetic “person on the street” interviews, AI-voiced commentary over real footage, and manipulated clips that alter timing, context, or emphasis without creating wholly fabricated content.

This category of synthetic content is harder to detect, harder to regulate, and arguably more influential than obvious deepfakes. A fabricated video of a candidate making a statement they never made can be fact-checked and debunked (though the debunking never reaches everyone who saw the original). A subtly manipulated clip that takes a real statement out of context, or an AI-generated “analysis” from a professional-looking but nonexistent media outlet, operates in a gray zone where fact-checking is more difficult and the psychological impact is more insidious.

The regulatory response to synthetic political content has been fragmented and largely reactive. The Federal Election Commission issued guidance in 2024 requiring disclosure of AI-generated content in paid political advertising, but the guidance covers only a fraction of the channels through which synthetic content reaches voters. Social media platforms have implemented varying content labeling requirements, but enforcement is inconsistent and the platforms’ economic incentives are not well-aligned with aggressive content moderation during high-traffic election periods.

State-level responses have been more aggressive but geographically limited. California, Texas, and Minnesota enacted laws specifically addressing AI-generated election content, with penalties ranging from civil fines to criminal prosecution for the most egregious cases. However, the interstate nature of digital content distribution means that content created in an unregulated jurisdiction can reach voters nationwide, and enforcement against anonymous or foreign-origin content remains practically impossible.

Election Administration: AI in the Machinery of Democracy

Beyond campaigns and content, AI is increasingly integrated into the administrative infrastructure of elections themselves. This integration offers genuine benefits — and introduces risks that election officials are still learning to assess.

Voter registration maintenance has been one of the earliest and most consequential applications. AI systems analyze voter rolls to identify probable duplicates, deceased registrants, and individuals who have moved out of jurisdiction. These systems are substantially more efficient than manual review processes and can identify patterns that human reviewers would miss. However, the accuracy of AI-driven voter roll maintenance is politically contested, with critics arguing that aggressive list maintenance disproportionately removes eligible voters from registration rolls — particularly voters who are young, mobile, or members of minority communities.
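The duplicate-detection step at the heart of list maintenance typically scores record pairs on field similarity. The sketch below, using Python's standard-library fuzzy matcher, encodes one conservative design choice: require an exact date-of-birth match before any fuzzy name comparison, since matching on names alone is exactly how eligible voters who merely share a common name get wrongly flagged. Records, fields, and weights are illustrative.

```python
from difflib import SequenceMatcher

def match_score(rec_a, rec_b):
    """Weighted similarity between two registration records in [0, 1].

    Exact date-of-birth agreement is a hard gate; name and address
    similarity are then blended with illustrative weights.
    """
    if rec_a["dob"] != rec_b["dob"]:
        return 0.0
    name_sim = SequenceMatcher(None, rec_a["name"].lower(), rec_b["name"].lower()).ratio()
    addr_sim = SequenceMatcher(None, rec_a["address"].lower(), rec_b["address"].lower()).ratio()
    return 0.7 * name_sim + 0.3 * addr_sim

a = {"name": "Jonathan Q. Smith", "dob": "1984-03-12", "address": "14 Elm St"}
b = {"name": "Jon Smith", "dob": "1984-03-12", "address": "14 Elm Street"}
c = {"name": "Jonathan Q. Smith", "dob": "1991-07-04", "address": "14 Elm St"}
```

Where the score threshold is set, and whether flagged matches trigger removal or merely a mailed confirmation notice, is precisely the politically contested part.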

Ballot processing and tabulation remain largely non-AI systems, and for good reason. The combination of high stakes, low error tolerance, and intense public scrutiny makes election tabulation a domain where the transparency and auditability of traditional systems outweigh the efficiency gains that AI might offer. However, AI is increasingly used in adjacent processes: automated signature verification for mail-in ballots, electronic poll book management, and real-time monitoring of election-day operations for anomalies that might indicate equipment failure or security incidents.

Election security monitoring represents perhaps the most important and least publicly visible application of AI in election administration. The Cybersecurity and Infrastructure Security Agency has deployed AI-powered monitoring systems that analyze network traffic patterns, social media activity, and other data sources to identify potential election interference operations in near-real-time. These systems proved valuable during the 2024 cycle in identifying coordinated inauthentic behavior on social media platforms and detecting attempted intrusions into election infrastructure.
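At its simplest, the anomaly detection underlying such monitoring flags measurements that deviate sharply from a baseline. The sketch below is a deliberately minimal stand-in — a z-score test on per-interval request counts with invented numbers — not a description of CISA's actual tooling.

```python
import statistics

def traffic_anomalies(counts, threshold=3.0):
    """Return indices of intervals whose count deviates from the mean
    by more than `threshold` population standard deviations."""
    mean = statistics.fmean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

# Nineteen quiet intervals, then a sudden spike (illustrative values).
baseline = [121] * 19 + [6400]
```

Deployed systems layer many such signals (traffic shape, account-creation bursts, posting synchrony) and feed flagged windows to human analysts rather than acting automatically.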

The Information Environment: AI-Mediated Political Reality

The most profound impact of AI on the 2028 election may not be in any specific application but in the cumulative transformation of the information environment in which democratic deliberation occurs. The combination of AI-powered content generation, AI-curated content distribution, and AI-influenced content consumption has created an information ecosystem that is qualitatively different from anything that existed in previous election cycles.

AI recommendation algorithms determine which political content reaches which voters through social media feeds, news aggregators, and search results. These algorithms optimize for engagement metrics that do not necessarily align with informational value or democratic health. Content that provokes emotional reactions — outrage, fear, enthusiasm, contempt — receives preferential distribution because it generates more interaction, regardless of its accuracy or its effect on political deliberation.
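The misalignment described above can be made concrete with a toy ranking function. The engagement weights below are invented for illustration, but they capture the structural point: any scorer that pays more for comments and shares than for passive approval will surface reaction-provoking content ahead of sober material, with no term anywhere for accuracy.

```python
def rank_by_engagement(posts):
    """Order posts by a naive predicted-engagement score.

    Weights are illustrative assumptions; note that nothing in the
    objective measures accuracy or informational value.
    """
    def score(p):
        return 1.0 * p["likes"] + 3.0 * p["comments"] + 5.0 * p["shares"]
    return sorted(posts, key=score, reverse=True)

# Two hypothetical posts with invented interaction counts.
feed = [
    {"id": "policy-explainer", "likes": 200, "comments": 10, "shares": 5},
    {"id": "outrage-clip", "likes": 150, "comments": 120, "shares": 90},
]
```

Real ranking systems are learned rather than hand-weighted, but they are trained on the same interaction signals, so the selection pressure is the same.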

The volume of political content has exploded. AI generation tools enable individuals, campaigns, interest groups, and foreign actors to produce political content at scales that would have required large organizations and significant budgets just five years ago. A single operator with modest technical skills can generate thousands of unique political messages, distribute them through automated accounts, and test their effectiveness in real-time — all for a fraction of the cost of traditional political communication.

This volume increase has overwhelmed traditional mechanisms for quality control in political information. Fact-checking organizations, already under-resourced relative to the volume of claims requiring verification, cannot keep pace with AI-generated content. Journalistic institutions that historically served as gatekeepers and arbiters of political information have lost both the audience share and the institutional authority to perform that function effectively. The result is an information environment in which the signal-to-noise ratio has deteriorated dramatically, and voters must navigate an unprecedented volume of content with diminished institutional support for assessing its reliability.

Regulatory Frameworks for 2028

The regulatory response to AI in elections is evolving rapidly but remains far behind the pace of technological development. Several frameworks are under active development or implementation as the 2028 cycle approaches.

At the federal level, the AI in Elections Act — introduced in both chambers of Congress but not yet enacted as of early 2026 — would require disclosure of AI-generated content in all political communications, establish federal standards for AI use in election administration, and create a reporting mechanism for AI-related election interference. The bill’s prospects are uncertain, with support from both parties on specific provisions but disagreement on scope and enforcement mechanisms.

The Federal Election Commission has expanded its guidance on AI in campaign communications, requiring campaigns to disclose AI-generated content in FEC filings and establishing guidelines for the use of AI in voter contact operations. However, the FEC’s enforcement capacity is limited, and the agency’s history of partisan deadlocks raises questions about whether guidance will translate into meaningful enforcement during a heated presidential campaign.

International coordination on election AI governance has also advanced. The G7 AI governance framework, agreed in 2025, includes specific provisions for election integrity that participating nations have committed to implement. The framework establishes common standards for AI-generated content labeling, mutual recognition of election interference indicators, and information sharing protocols for cross-border election-related AI threats.

Preparing for November 2028

The 2028 election will test democratic institutions and processes in ways that no previous election has. The technology is advancing faster than regulation, the threat landscape is more complex than the defensive infrastructure, and public trust in information systems has eroded to levels that make democratic deliberation genuinely difficult.

Preparation for this environment requires action on multiple fronts. Regulatory frameworks must be enacted and operational before campaign season reaches full intensity — which means federal action is needed by mid-2027 at the latest. Election administrators need training, resources, and technical support for AI-related threats that many jurisdictions have not yet assessed. Platforms must develop and implement content integrity measures that are effective at scale. And citizens need tools and education for navigating an information environment that is designed to manipulate at least as much as it is designed to inform.

The stakes are not abstract. The 2028 election will determine the direction of AI policy itself, creating a recursive dynamic in which the technology being governed is also reshaping the process by which governance decisions are made. Getting this right — or failing to — will echo far beyond a single election cycle.


USA 2028 AI maintains a dedicated Election Technology Tracker monitoring AI-related developments in campaign technology, election administration, and information environment regulation. The tracker is updated weekly during active campaign periods.