Deepfakes: Synthetic Media and the First Amendment
Deepfakes — AI-generated synthetic video and audio — present a spectrum of First Amendment questions. Political satire deepfakes are likely protected; non-consensual intimate deepfakes clearly cause harm; election deepfakes designed to deceive voters fall somewhere in between.
A deepfake is a synthetic video, audio, or image created using AI — typically to realistically depict a person saying or doing something they never said or did. As the technology has improved, deepfakes have become harder to detect and easier to create, raising urgent free speech and legal questions.
The First Amendment analysis depends heavily on the type of deepfake:
Satire and parody: A deepfake clearly presented as satire — depicting a politician in an obviously exaggerated fictional scenario — is likely protected expression under Hustler Magazine v. Falwell (1988) and the long tradition of political satire. Because the fictional nature of the content is clearly signaled, it makes no false assertion of fact and therefore cannot support a defamation or fraud claim.
Non-consensual intimate imagery (NCII): Deepfake pornography depicting real people without their consent causes severe harm to its subjects. Most states have passed laws specifically targeting NCII deepfakes. These laws face First Amendment challenges, but they are generally drawn narrowly enough to survive scrutiny because they target the non-consensual creation and distribution of the imagery rather than a broader category of expression.
Election deepfakes: Realistic deepfakes designed to deceive voters — depicting a candidate withdrawing from a race, making policy announcements, or committing crimes — pose a serious threat to electoral integrity. Several states have passed laws targeting them. Their constitutionality depends on how narrowly they are drawn: they must reach deliberately deceptive content while leaving satire and parody protected.
Reputation deepfakes: Realistic deepfakes falsely depicting private individuals committing crimes or engaging in embarrassing conduct may constitute defamation — false statements of fact presented as true. Existing defamation law can reach this category without any new legislation.
The key constitutional line runs between clearly fictional synthetic media, which is protected, and synthetic media designed to deceive audiences into believing something false actually occurred, which may be unprotected as fraud or defamation.