Should Election Deepfakes Be Illegal?

Election deepfakes — AI-generated content that falsely depicts candidates saying or doing things they never did — are proliferating. Whether to ban them involves difficult free speech tradeoffs.

The Problem: Realistic Deception at Electoral Scale

Election deepfakes (AI-generated audio, video, or images depicting candidates or electoral officials saying or doing things they never said or did) are among the most acute AI threats to democratic processes. Unlike traditional political advertising, which must identify its sponsor and is at least understood as advocacy, a convincing deepfake video of a candidate appearing to confess to a crime, express a racist view, or announce withdrawal from a race can spread virally as apparent news before it is identified as fabricated. The combination of visual realism, rapid social media distribution, and the time pressure of electoral cycles creates conditions in which deceptive deepfakes could meaningfully influence outcomes.

The concern is not theoretical. The 2024 New Hampshire primary saw a robocall using an AI-generated voice clone of President Biden telling Democrats not to vote. Deepfake audio and video of politicians making false statements have circulated in elections in multiple countries, including Slovakia, Indonesia, and the United Kingdom. As the technology improves and the cost of production falls toward zero, the barrier to deploying an electoral deepfake is primarily willingness to deceive rather than technical or financial capacity.

The policy question is urgent precisely because the problem is tractable in some respects but raises genuine constitutional concerns in others. Unlike many AI harms that are diffuse or long-term, specific electoral deepfakes can be identified, their intended deception can be assessed, and their timing relative to elections can be determined. This makes targeted legal response more feasible than for diffuse harms like misinformation generally. The First Amendment constraints on that legal response, however, are significant — the government's power to restrict even false political speech is limited, and election deepfake laws must navigate those limits carefully.

Historical Context: Propaganda, Doctored Photos, and Manufactured Speech

The use of fabricated or manipulated media in political campaigns has a history far predating AI. Political propaganda in the 20th century routinely used manipulated images, selectively edited quotations, and manufactured scenarios to damage opponents and advance political agendas. Stalinist propaganda erased purged officials from historical photographs. Cold War intelligence services planted fabricated documents and photographs in foreign media. American political campaigns used selectively edited audio recordings to make opponents' statements appear to mean something different from what they actually said.

The 'Daisy ad' of 1964 — which implied that Barry Goldwater would cause nuclear war by cutting from a child counting flower petals to a nuclear countdown — was not literally false, but it created a profoundly misleading impression through emotional association and editing rather than direct statement. The Willie Horton ad of 1988 manipulated racial fear through selective presentation of true facts. 'Swiftboating' — the coordinated campaign of misleading statements about John Kerry's Vietnam service in 2004 — used factually questionable claims to damage a political opponent. None of these techniques involved AI, but all involved deliberate manipulation of voters' perceptions through false or misleading communication.

Existing law has never fully prohibited deliberate deception in political advertising. The Supreme Court has held that false statements of fact are not categorically excluded from First Amendment protection (United States v. Alvarez), that political speech occupies the core of First Amendment protection (Buckley v. Valeo), and that the government cannot impose content-based restrictions on political speech without a compelling justification (Citizens United v. FEC). AI-generated deepfakes in elections are a new form of an old problem, but they arrive amid established First Amendment doctrine that significantly limits the government's response options.

The Satire Tradition and Why It Complicates the Law

Political satire has been a cornerstone of democratic culture since at least the era of pamphlets and political cartoons. Satirists from Jonathan Swift to Saturday Night Live have depicted political figures saying and doing things they never said or did, often in highly realistic and damaging ways. Political cartoons regularly depict politicians engaged in metaphorical actions — swallowing money, wielding axes, sleeping through crises — that are understood as commentary rather than factual reporting. The legal protection for political satire is robust: in Hustler Magazine v. Falwell (1988), the Supreme Court held that even deeply offensive satirical parody cannot give rise to liability unless it would be understood by a reasonable audience as asserting actual facts.

The Hustler precedent creates a significant complication for election deepfake legislation. A law that prohibits AI-generated content depicting candidates saying things they didn't say would potentially prohibit: a comedian's satirical video showing a candidate in an obviously absurd scenario; a journalist's illustrative video demonstrating what a candidate's proposed policy would mean in practice; an opposition researcher's legitimate parody demonstrating a candidate's rhetorical contradictions. The sweep of a broadly written election deepfake ban could capture these protected uses.

Legislators have attempted to navigate this problem by writing laws that require deceptive intent, prohibiting only AI-generated content that is intended to deceive voters and that a reasonable viewer would not recognize as satire or parody. The distinction is clear in principle but difficult to apply in practice: the same video can be obvious parody to some viewers and convincing misinformation to others, particularly when it circulates without context on social media, where provenance is unclear. Courts reviewing election deepfake laws will need to determine whether they are precise enough to protect satire while still reaching deceptive content.

The Free Speech Argument Against Bans

The most powerful First Amendment arguments against election deepfake bans focus on the difficulty of defining deceptive political speech in ways that don't sweep in protected expression. False statements of fact in political contexts receive First Amendment protection under Alvarez — the government cannot criminalize lying in political contexts simply because the statement is false. To be unprotected, a false statement must cause specific identifiable harm (defamation), occur in a specific context that takes it outside First Amendment protection (fraud, perjury), or meet the Brandenburg standard for incitement. Election deepfakes, absent a specific showing that they constitute defamation or fraud, may retain First Amendment protection even though they are demonstrably false and intended to deceive.

Broad election deepfake prohibitions also risk chilling legitimate political expression. Documentary filmmakers, investigative journalists, and political satirists all work with audio and video of political figures, and all sometimes use techniques (reconstruction of events, illustrative animation, composite audio from multiple speeches) that might fall within a statutory definition of AI-generated content depicting a candidate. A law drafted broadly enough to reach truly deceptive deepfakes might also reach these creative and journalistic practices, and uncertainty about its scope would chill speakers who cannot afford the legal risk of testing an ambiguous statute.

The viewpoint-neutral application of election deepfake laws is another concern. In theory, a law prohibiting deceptive AI-generated content about candidates would apply equally to all parties and candidates. In practice, enforcement decisions are made by officials who may have political interests in selective application. A law prohibiting deceptive deepfakes about candidates, enforced by a state attorney general who is also a candidate or a partisan official, creates obvious potential for politically motivated selective prosecution. The First Amendment is particularly protective of political speech precisely because the risk of government suppressing political opposition is highest in electoral contexts.

Disclosure vs. Prohibition: A Middle Path

The alternative to prohibiting election deepfakes is requiring disclosure: mandating that AI-generated content depicting candidates be labeled as such, so viewers can adjust their assessment of its credibility. Disclosure requirements impose a lighter First Amendment burden than prohibitions: speakers retain the ability to create and distribute the content; they are merely required to identify its artificial origin. The government's interest in informed voter decision-making may be sufficient to justify disclosure requirements even in the political speech context.

Several states have enacted disclosure-based approaches. California's AB 730, for example, prohibits distributing materially deceptive media of a candidate within 60 days of an election unless the content carries a disclosure label, in effect a labeling mandate backed by a prohibition. Laws in Michigan, Washington, and other states impose disclosure requirements without outright prohibition. The Federal Election Commission has considered requiring disclosure of AI-generated content in paid political advertising. These disclosure requirements face fewer First Amendment objections than prohibitions, but they raise their own practical challenges: disclosure labels are easily stripped when content is reshared, and many viewers may not change their assessment of content based on an AI disclosure label.

The effectiveness of disclosure requirements in reducing the harm from election deepfakes is empirically uncertain. Research on warning labels and misinformation suggests that labels can reduce belief in labeled content under some conditions, but the effect sizes are often small and labels are frequently absent when viral content circulates through informal sharing. A disclosure label that appears on the original post may not appear on the screenshots, re-uploads, and derivative content that constitute most viral spread. More robust technical approaches — mandatory content provenance standards, cryptographic authentication of authentic media, and platform-enforced watermarking of AI-generated content — may be more effective than disclosure labels, though each raises its own implementation and First Amendment concerns.
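To make the authentication idea concrete, the sketch below signs a media file's hash at the point of capture and verifies it later. It is a minimal illustration only, assuming Python's third-party cryptography package; the function names and workflow are invented for this example and are not drawn from any actual provenance standard.

```python
# Minimal sketch of cryptographic media authentication (illustrative only).
# Assumes the third-party "cryptography" package: pip install cryptography
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_media(private_key: Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    """Sign the SHA-256 digest of the media at the point of capture."""
    digest = hashlib.sha256(media_bytes).digest()
    return private_key.sign(digest)

def verify_media(public_key: Ed25519PublicKey,
                 media_bytes: bytes, signature: bytes) -> bool:
    """Return True only if the media is byte-identical to what was signed."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

# Usage: a camera vendor signs at capture; a platform verifies on upload.
key = Ed25519PrivateKey.generate()
original = b"...raw media bytes..."
sig = sign_media(key, original)
assert verify_media(key.public_key(), original, sig)             # authentic
assert not verify_media(key.public_key(), original + b"x", sig)  # any edit fails
```

Any change to the bytes breaks verification, including a reshared screenshot or a platform re-encode, which is both the strength of this approach and its practical limit: authentic media is routinely re-encoded in distribution, so real provenance systems must track permitted transformations rather than demand byte-identity.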

Current Laws, Pending Legislation, and What Comes Next

The legal landscape for election deepfakes is evolving rapidly. As of 2025, approximately 20 states have enacted laws specifically addressing election deepfakes, with widely varying scope and penalties. The earliest laws, enacted in 2019-2020 (California, Texas), focused on deceptive AI-generated content distributed with the intent to influence an election within a specified period before the election. Later laws have broadened in scope, some addressing AI-generated content more broadly, some adding civil remedies alongside criminal penalties, and some imposing obligations on platforms to remove identified election deepfakes.

At the federal level, the DEFIANCE Act, which passed the Senate in 2024, addresses deepfake non-consensual intimate imagery (NCII) rather than election deepfakes. The AI Transparency in Elections Act and several other proposed federal measures would require disclosure of AI-generated content in political advertising subject to FEC reporting requirements. The FEC has considered rulemaking on AI-generated political advertising under existing campaign finance law but has not adopted new rules. None of these federal measures constitutes comprehensive election deepfake regulation.

The rapid evolution of deepfake technology means that legal frameworks will need to adapt continuously. As AI generation capabilities improve, the distinction between AI-generated and authentic content will become less technically reliable as a basis for regulation. Content provenance standards — technical systems that authenticate the origin of media at the point of capture and track modification through distribution — may offer a more durable regulatory foundation than laws targeting specific technical methods of content generation. The broader challenge is developing regulatory frameworks robust enough to protect democratic integrity without becoming tools for suppressing legitimate political expression — a balance that the current fragmented state-by-state approach has not yet achieved.
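Standards such as C2PA implement this idea by attaching a chain of signed records to the media, each committing to the file's current state and to the record before it. The toy sketch below captures the core mechanism in plain Python, with hashes standing in for signatures; the record fields and verification logic are simplifications invented for illustration, not the actual C2PA manifest format.

```python
# Toy provenance chain: each record commits to the media's current hash and
# to the previous record, so any undisclosed modification breaks the chain.
# Simplified illustration only; not the C2PA manifest format.
import hashlib
import json

def _hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def new_chain(media: bytes, device: str) -> list[dict]:
    """Start a chain at the point of capture."""
    return [{"action": "captured", "by": device,
             "media_hash": _hash(media), "prev": None}]

def record_edit(chain: list[dict], edited: bytes,
                action: str, by: str) -> list[dict]:
    """Append a record for a disclosed edit, linked to the prior record."""
    prev = _hash(json.dumps(chain[-1], sort_keys=True).encode())
    return chain + [{"action": action, "by": by,
                     "media_hash": _hash(edited), "prev": prev}]

def verify(chain: list[dict], media: bytes) -> bool:
    """Chain must be internally linked and end at the media's current hash."""
    for a, b in zip(chain, chain[1:]):
        if b["prev"] != _hash(json.dumps(a, sort_keys=True).encode()):
            return False
    return chain[-1]["media_hash"] == _hash(media)

raw = b"frame data"
chain = new_chain(raw, device="camera-01")
cropped = raw + b" (cropped)"
chain = record_edit(chain, cropped, "crop", by="editor-app")
assert verify(chain, cropped)          # disclosed edit: chain checks out
assert not verify(chain, raw + b"!")   # undisclosed change: verification fails
```

Because each record commits to its predecessor, a disclosed edit extends the chain while an undisclosed one invalidates it; the open regulatory questions are who holds the signing keys and how to treat the large majority of circulating media that carries no provenance data at all.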