What Are True Threats?
True threats are statements that place a reasonable person in fear of violence or harm. They are not protected by the First Amendment, but the line between threats and protected speech is often contested.
True threats are a category of speech that falls outside the First Amendment's protection because it communicates a serious intent to commit unlawful violence against a specific person or group. The doctrine reflects the principle that the government may protect people from genuine fear of violence without suppressing ordinary hyperbole, political rhetoric, or artistic expression. The challenge — which has occupied courts for decades — is drawing the line between speech that actually threatens and speech that uses threatening language without communicating actual intent to act.
Not every threatening-sounding statement is a true threat. 'I could kill you for saying that' in casual conversation is hyperbole; a note delivered to a specific person describing how and when violence will occur against them is a true threat. Between these poles lies an enormous range of cases: angry social media posts, rap lyrics, political rhetoric, and protest speech that uses violent imagery without, arguably, communicating sincere intent to commit violence.
True threat doctrine intersects with stalking, harassment, domestic violence, and hate crime law in ways that make it practically significant beyond abstract First Amendment theory. Restraining orders, protective orders, and harassment injunctions often turn on whether communications constitute true threats. The doctrine also governs criminal prosecutions for threatening communications under federal and state law — and the standard courts use to distinguish protected speech from criminal threats has significant implications for how broadly the criminal law can reach.
Historical Development of the Doctrine
The Supreme Court first used the term 'true threats' in Watts v. United States (1969), a case involving a young man who said at an anti-war rally: 'If they ever make me carry a rifle the first man I want to get in my sights is L.B.J.' He was convicted under a federal law prohibiting threats against the President. The Supreme Court reversed, holding that the statement — made in the context of political protest, conditional, and met with laughter — was political hyperbole rather than a true threat. The brief per curiam opinion established that true threats are a distinct unprotected category but gave little guidance on how to identify them.
For decades after Watts, courts applied an objective standard: would a reasonable person receiving the communication regard it as a serious expression of intent to commit violence? This standard protected speakers from convictions based on idiosyncratic victim interpretations while allowing prosecution of communications that objectively communicated violent intent. It also meant that a speaker could be convicted even if they subjectively intended no threat — creating tension with the principle that criminal punishment generally requires a culpable mental state (mens rea) as well as a prohibited act.
The objective standard was challenged in cases involving domestic violence and stalking, where communications that might seem ambiguous to an outside observer are understood by the targeted person — who knows the sender's history of violence — as genuinely threatening. Courts developed the concept of the 'reasonable recipient' — asking whether a person with the victim's knowledge and experience would find the communication threatening — as a modified objective standard that takes context into account.
Counterman v. Colorado (2023): The Mental State Requirement
Counterman v. Colorado (2023) is the Supreme Court's most important true threat decision in decades, addressing whether the First Amendment requires proof of the speaker's subjective mental state in addition to objective threatening content. Billy Counterman sent thousands of Facebook messages to a musician he had never met, including statements suggesting he had been watching her, implying she should be dead, and describing wanting to harm her. The musician experienced genuine fear, cancelled performances, and eventually sought a stalking protection order. Colorado prosecuted Counterman under a true threats statute that applied an objective standard, without requiring proof that Counterman knew his messages would be perceived as threatening.
The Supreme Court held 7-2 that the First Amendment requires proof of at least recklessness — that is, the speaker must have consciously disregarded a substantial and unjustifiable risk that the communication would be viewed as threatening. A purely objective standard, the majority held, would chill protected speech by creating liability for speakers who use threatening language without understanding how it would be received. Requiring at least recklessness provides a 'buffer zone' that prevents speakers from being held criminally liable for unintentional intimidation.
Justices Thomas and Barrett dissented, arguing that requiring proof of subjective mental state goes beyond what the First Amendment requires and makes it harder to protect victims of harassment and stalking. The dissenters argued that the historical unprotected status of threats did not include any mental state requirement, and that protecting speakers who are reckless rather than merely negligent about causing fear was an unwarranted extension of First Amendment protection. The Counterman decision will affect prosecution of threatening online communications for years to come.
Rap Lyrics, Art, and Ambiguous Expression
One of the most contested areas of true threat doctrine involves artistic expression — particularly rap lyrics — that uses violent imagery as a rhetorical and creative device. Rap has a long tradition of using violent, threatening, and hyperbolic language as artistic expression, social commentary, and cultural performance. Prosecutors in dozens of cases have introduced defendants' rap lyrics as evidence of threatening intent or even as the threat itself, generating significant controversy about whether true threat doctrine is being applied consistent with its First Amendment limits.
In State v. Skinner (2014) and a series of similar cases, courts have grappled with whether a rapper's violent lyrics about identifiable people — even when posted online where the target could see them — constitute true threats or are protected artistic expression. The Counterman decision's recklessness standard provides some protection for artists who use violent imagery without subjective awareness that their audience would interpret it as a genuine threat, but the line between protected rap lyrics and a true threat remains contested and contextually dependent.
The rap lyrics issue has a racial dimension that critics of aggressive prosecutions emphasize: violent imagery in rap is not treated the same as violent imagery in country music, heavy metal, or literary fiction. Studies have shown that mock jurors rate threatening content as more literal and less artistic when told it is rap rather than another genre. Several states have considered or enacted 'Rap Music on Trial' legislation limiting the admissibility of rap lyrics as evidence, reflecting a legislative judgment that the genre's expressive conventions were being misread as literal threats.
Online Threats and Coordinated Harassment
The internet has transformed the practical significance of true threat doctrine. Social media, anonymous messaging, and the ability to direct communications at specific targets at scale have created a harassment and threat environment that the doctrine's framework — developed in cases of individual threatening communications — struggles to address. Online threats can be directed simultaneously at thousands of people, coordinated across platforms, and delivered from anonymous or pseudonymous accounts that make identification and prosecution difficult.
Doxing — the publication of private personal information (home address, workplace, family members' identities) accompanied by rhetoric encouraging violence — has become a significant online threat vector. Whether doxing combined with implicit or explicit calls for violence constitutes a true threat under Counterman's recklessness standard is unsettled. The communications may be structured to avoid explicit statements of personal intent while predictably leading followers to direct threats or actual violence at the identified target.
Targeted online harassment campaigns — coordinated efforts by groups of users to send threatening and harassing messages to a specific target — raise collective action problems that individual true threat doctrine does not resolve well. No single message in a coordinated campaign may meet the true threat standard, but the aggregate effect on the target may be as fear-inducing as a single clear threat. Whether the First Amendment permits broader regulation of coordinated threatening behavior remains an open question that Counterman does not fully resolve.
AI, Synthetic Voice, and Future Threat Doctrine
Generative AI raises questions for true threat doctrine that courts have not yet fully addressed. A human sender who uses AI tools to generate threatening messages arguably satisfies the Counterman recklessness standard, since the sender directed the creation of threatening content with awareness of how it would be received. AI can also generate synthetic audio or video of a person appearing to make threats against another — a form of attributed threat that may cause the same fear as an actual threat.
Concern about the volume of AI-generated threatening communications is already emerging. When automated tools can generate thousands of personalized threatening messages directed at specific individuals, the capacity for harassment at scale grows dramatically. Determining the mental state behind AI-generated threats — when the actual sender is a human directing an AI tool — will require courts to extend Counterman's framework to human-AI interactions that were not contemplated when the doctrine developed.
The most difficult future cases may involve AI systems that generate threatening content without deliberate human intent — large language models that produce threatening outputs in response to prompts not specifically designed to generate threats, or AI-generated content in social media contexts where automated amplification directs threatening material toward specific targets. Whether liability attaches to the deploying company, the user, or neither in these circumstances is an unsolved problem at the intersection of true threat doctrine and AI governance.