Are Deepfakes Protected Speech?

Deepfakes occupy an uncertain legal territory — some may be protected satire or fiction, while others constitute fraud, defamation, or non-consensual intimate imagery.

What Are Deepfakes and Why Do They Matter?

Deepfakes are synthetic media — images, video, or audio — generated by artificial intelligence to depict people saying or doing things they never said or did. The term combines 'deep learning' (the AI technique used) with 'fake,' and it encompasses a range from sophisticated near-indistinguishable face-swap videos to cruder manipulations that skilled observers can detect. As AI image and video generation technology has improved rapidly, the quality and accessibility of deepfake creation have increased while the cost and technical skill required have decreased dramatically. A convincing deepfake video that required a specialized research team to produce in 2018 can now be created by an individual with consumer software in minutes.

Deepfakes raise distinctive legal and ethical challenges because they deploy a real person's likeness, voice, and identity to communicate false messages. Unlike traditional written misinformation, which requires the audience to imagine or accept that a person said something, deepfake video and audio appear to show a public figure actually saying or doing the depicted thing. The evidentiary power of video ('seeing is believing') makes deepfakes potentially more persuasive than equivalent text-based misinformation, and more difficult to rebut even after debunking.

The categories of harmful deepfakes are distinct and raise different legal issues. Non-consensual intimate imagery (NCII) deepfakes create false sexual content depicting real people without their consent — a category that primarily affects women and that has generated the most extensive legislative response. Electoral deepfakes depict candidates or political figures in false but realistic scenarios designed to influence elections. Commercial deepfakes impersonate celebrities or business figures to commit fraud. Harassment deepfakes target private individuals with realistic false depictions to intimidate or damage their reputation. Each category raises different questions about what legal regulation is appropriate.

Historical Origins of Synthetic Media and Manipulation

The manipulation of images and media to create false impressions has a history as long as photography itself. Victorian-era photo composites placed people in scenes they never inhabited. Stalinist propaganda routinely erased purged officials from historical photographs. The famous 'kissing sailor' photograph from V-J Day appeared to capture a romantic reunion but in fact showed a spontaneous, unposed kiss between strangers. Film editing techniques from cinema's earliest days could create false impressions of events and sequences. Digitally manipulated images became commonplace in graphic design and advertising long before AI-generated deepfakes emerged.

The 2017 emergence of the term 'deepfake' coincided with the public distribution of AI tools that made convincing face-swap video feasible for non-specialists. The initial widespread use of the technology was in the creation of non-consensual pornographic imagery featuring celebrities — a use case that demonstrated both the technology's capacity for harm and the gap between the speed of technological development and the legal frameworks available to address it. The term quickly expanded beyond this specific use to encompass all AI-generated synthetic media depicting real people.

The history of political manipulation through manipulated media provides important context for the current deepfake debate. Doctored photographs were used in political propaganda throughout the 20th century. Selectively edited video clips have long been used to misrepresent politicians' statements. The differences between these traditional manipulations and AI-generated deepfakes are matters of degree rather than kind — deepfakes are more realistic, more accessible to create, and more scalable — but the underlying challenge of ensuring that political discourse is grounded in authentic representations is not new. This historical continuity is relevant to legal analysis: laws targeting deepfakes specifically may need to be distinguished from the broader category of manipulated media that already existed.

Legal Status: First Amendment Protections and Limits

Deepfakes exist in complex legal territory, with their First Amendment status depending heavily on their content and context. The general principle is that realistic synthetic depictions of real people saying or doing things they never said or did can constitute defamation (if they are false statements of fact that damage reputation), fraud (if they are used to obtain money or property), or intentional infliction of emotional distress (if they are outrageous and cause severe harm) — categories that fall outside First Amendment protection regardless of the medium.

The harder cases involve deepfakes that are satirical or clearly fictional: AI-generated videos that exaggerate or parody real figures in obviously unrealistic ways. Political satire predates the First Amendment and has long been core protected expression, and the Supreme Court held in Hustler Magazine v. Falwell (1988) that even deeply offensive parody and satire cannot give rise to emotional distress liability unless it can reasonably be understood as stating actual facts. A clearly labeled satirical deepfake of a politician in an absurd scenario is distinguishable from a realistic deepfake intended to deceive audiences about actual events.

The challenge is that deepfakes can be highly realistic while still being intended as parody, and the line between protected satire and a defamatory false statement of fact does not draw itself. A deepfake video of a politician appearing to confess to a crime is false, potentially defamatory, and potentially election-influencing, yet it might also be intended and received as obvious parody. Courts applying existing defamation law must determine whether a reasonable audience would understand the depiction as factual or fictional, a determination that becomes harder as deepfake technology makes fictional depictions more realistic.

Non-Consensual Intimate Imagery: The Strongest Case for Regulation

Non-consensual intimate imagery (NCII) deepfakes — also called 'revenge porn' deepfakes — represent the category where the case for legal prohibition is strongest and the First Amendment objections are weakest. These deepfakes create realistic sexual imagery depicting identifiable people who have not consented to such depictions, typically targeting women with the intent to humiliate, harass, or coerce them. The harm is direct, personal, and severe: victims experience significant psychological harm, damage to professional reputation, and persistent online presence of the imagery that is extremely difficult to remove.

The First Amendment analysis for NCII deepfakes follows the framework courts have applied to non-AI intimate imagery. In United States v. Stevens (2010), the Supreme Court held that new categories of unprotected speech cannot simply be declared by weighing harms against value; they require grounding in the historical tradition of unprotected expression. NCII deepfakes can therefore be argued to fall within existing unprotected categories: they are false statements of fact (the depicted sexual activity never occurred), they constitute harassment or intentional infliction of emotional distress, and they are distributed without consent in ways that violate reasonable privacy expectations.

Forty-eight states and the federal government have enacted laws targeting non-consensual intimate imagery, and a growing number of these laws specifically address AI-generated NCII. Many state laws impose criminal penalties for creating or distributing NCII without consent, and the proposed federal SHIELD Act would do the same. Courts have generally upheld these laws against First Amendment challenges, recognizing that the personal privacy interests at stake and the absence of any legitimate expressive value in non-consensual sexual imagery justify prohibition. The NCII context thus represents the clearest current case for deepfake regulation, resting on the most solid First Amendment foundation.

Election Deepfakes and the Special Case of Political Speech

Deepfakes in electoral contexts are particularly concerning because they combine high potential for harm with the highest level of First Amendment protection — political speech. Election deepfakes that depict candidates making false statements, engaging in illegal conduct, or expressing views they do not hold can meaningfully influence elections, particularly if distributed shortly before an election when debunking efforts have limited time to reach voters.

Legal responses to election deepfakes have emerged rapidly. Minnesota, Texas, California, Georgia, and several other states have enacted laws restricting deceptive AI-generated content about candidates in the period leading up to elections. These laws generally require disclosure when AI-generated content depicts candidates, prohibit distribution of AI-generated content intended to deceive voters about a candidate's statements or actions, and in some cases impose criminal penalties. The Federal Election Commission has considered petitions seeking rules that would require disclosure of AI-generated content in political advertising.

The First Amendment concerns are significant. Satire and parody of political figures, including realistic parody, have been protected expression throughout American history. AI-generated satirical content depicting politicians in fictional scenarios is protected political speech; the question is how to distinguish it from deceptive election misinformation. Courts reviewing election deepfake laws will need to determine whether the laws are narrowly tailored to reach only speech that is both false and specifically intended to deceive voters, not merely realistic fiction, and whether they rely on disclosure requirements rather than outright prohibitions that chill protected satirical speech.

State Laws, Federal Gaps, and the Regulatory Race

The legal landscape for deepfake regulation in the United States is a patchwork of state laws with significant gaps at the federal level. As of 2025, approximately 40 states have enacted some form of deepfake-related legislation, primarily in the NCII and election contexts. These laws vary significantly in their scope, penalties, and definitions — some address only AI-generated deepfakes, others address digitally manipulated media broadly; some impose criminal penalties, others provide civil remedies; some require deceptive intent, others focus on non-consensual distribution regardless of intent.

Federal legislation has advanced more slowly. The DEFIANCE Act, which passed the Senate in 2024, would create a federal civil remedy for victims of non-consensual AI-generated intimate imagery. The NO FAKES Act has been proposed to address unauthorized AI-generated depictions of individuals more broadly, covering voice and likeness in commercial contexts. These proposals reflect ongoing Congressional engagement with deepfake harms, but no comprehensive federal deepfake law has been enacted.

The platform dimension of deepfake regulation involves the interaction between platform policies and legal requirements. Major platforms have adopted policies restricting deepfakes, particularly in electoral and NCII contexts, but enforcement is inconsistent and technically challenging: AI-generated imagery is increasingly difficult to distinguish from authentic imagery using automated detection tools. The arms race between deepfake generation and deepfake detection means that platform enforcement will continue to lag behind the technology's capabilities. Proposed solutions, including mandatory watermarking of AI-generated content, content provenance standards such as C2PA, and authentication requirements for media depicting public figures, are technically promising but raise their own First Amendment and privacy concerns.
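To make the provenance approach concrete, the following minimal Python sketch checks whether a JPEG file appears to carry embedded C2PA provenance metadata. It assumes the C2PA convention of embedding manifests as JUMBF boxes inside JPEG APP11 segments, and it is a heuristic presence check only, not a conformant validator: genuine verification requires parsing the full JUMBF structure and cryptographically validating the manifest's signatures, for example with the open-source c2pa SDKs.

    # heuristic_c2pa_scan.py -- illustrative sketch, not a conformant C2PA validator.
    # C2PA provenance manifests are carried in JPEG APP11 marker segments as JUMBF
    # boxes; this script only checks whether such a payload appears to be present.

    import struct
    import sys

    APP11 = 0xFFEB  # JPEG marker that carries JUMBF/C2PA payloads
    SOS = 0xFFDA    # start-of-scan: entropy-coded image data follows

    def jpeg_segments(data: bytes):
        """Yield (marker, payload) for each metadata segment before the image data."""
        if data[:2] != b"\xff\xd8":
            raise ValueError("not a JPEG file")
        pos = 2
        while pos + 4 <= len(data):
            marker, length = struct.unpack(">HH", data[pos:pos + 4])
            if marker >> 8 != 0xFF or length < 2:
                return  # malformed stream; stop rather than misparse
            if marker == SOS:
                return  # image data begins; no more metadata segments
            yield marker, data[pos + 4 : pos + 2 + length]
            pos += 2 + length  # marker (2 bytes) plus the length-field span

    def has_c2pa_manifest(path: str) -> bool:
        """Heuristically report whether any APP11 segment looks like a C2PA manifest."""
        with open(path, "rb") as f:
            data = f.read()
        return any(
            marker == APP11 and (b"c2pa" in payload or b"jumb" in payload)
            for marker, payload in jpeg_segments(data)
        )

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            found = has_c2pa_manifest(path)
            print(f"{path}: {'provenance metadata found' if found else 'none found'}")

The sketch also illustrates a structural limit of provenance schemes: metadata of this kind can be stripped simply by re-encoding the file, so its absence proves nothing about authenticity. That asymmetry is one reason provenance standards are generally paired with detection tools and platform policy rather than relied on alone.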