Should Foreign Disinformation Be Censored?

Should the government or platforms suppress foreign disinformation campaigns targeting American democracy?

Russia, China, Iran, and other foreign governments operate disinformation campaigns targeting American public opinion. The question of how to respond without suppressing legitimate speech is one of the most difficult in contemporary free speech law.

The Case for More Speech

The critical distinction in foreign disinformation debates is between restricting based on content and requiring transparency about source. A law that says 'this political message is false and therefore prohibited' is content-based censorship. A law that says 'political messages funded or produced by foreign governments must be labeled as such' is a source-transparency requirement that enables informed public judgment without suppressing any message. The First Amendment permits the second and is deeply suspicious of the first — regardless of whether the content is foreign or domestic in origin.

The practical danger of content-based foreign disinformation suppression is that 'foreign disinformation' is a category that government officials will expand to cover inconvenient domestic speech. The definition of what counts as foreign influence is porous: a domestic outlet that publishes analysis consistent with foreign government interests, an American academic who repeats claims that originated with a foreign intelligence operation, a social media user who shares content that was first produced by a foreign-funded account — all of these can be characterized as vectors of foreign disinformation, and have been in government and platform enforcement actions. The category has significant potential for abuse against legitimate domestic dissent.

FARA, existing espionage statutes, and fraud laws already provide substantial tools against covert foreign political operations. The Internet Research Agency defendants were indicted under existing law for identity fraud and failure to register as foreign agents — not for the political content of their messages. Expanding government authority to suppress political messages based on foreign origin without requiring proof of deceptive conduct creates serious free speech risks without clear security benefits.

The Case for Restriction

Foreign governments operating covert influence campaigns against American elections are not exercising speech rights protected by the First Amendment — they are conducting covert operations against the United States. The Supreme Court has recognized that First Amendment protection does not extend to foreign governments on the same terms it extends to American citizens and residents. Bluman v. FEC (2012) upheld a ban on political contributions and expenditures by foreign nationals, with a three-judge district court panel (in an opinion by then-Judge Kavanaugh, summarily affirmed by the Supreme Court) holding that the government's compelling interest in democratic self-governance justifies restrictions on foreign political participation that would be impermissible as applied to citizens.

The Internet Research Agency's operation during the 2016 election — documented in detail in the Mueller Report and the Senate Intelligence Committee's assessment — was not political speech. It was a covert intelligence operation that created hundreds of fake American political organizations, organized real political rallies under false pretenses, spent millions of dollars on targeted advertising without disclosing the source, and was designed to deepen social division and damage specific candidates. The scale of the deception, the foreign government sponsorship, the false-identity conduct, and the covert character of the operation collectively place it outside any reasonable understanding of protected speech.

AI capabilities have made these operations dramatically cheaper and more effective. The same operation that required hundreds of Russian operatives in 2016 could be run by a fraction of that workforce with generative AI in 2024 or 2026. Government and platform responses have not kept pace. The question is not whether foreign governments have a right to conduct covert influence operations — they do not — but how to counter those operations without creating tools that get turned against domestic speech.

Historical Context

The United States has regulated foreign political influence since the Foreign Agents Registration Act of 1938, enacted in response to Nazi propaganda operations in the United States. FARA requires agents of foreign principals engaged in political activities to register and disclose their activities and funding. It was rarely enforced for decades, but prosecutions and registration demands increased sharply after investigations into the 2016 election documented the scale of Russian influence operations.

The TikTok national security debate represents a newer iteration of the foreign influence concern: whether a social media platform with a Chinese parent company constitutes a foreign influence vector regardless of its content. The Supreme Court's 2025 decision in TikTok Inc. v. Garland upheld the forced-divestiture law on national security grounds, with the Court distinguishing TikTok's situation from direct speech restrictions. Foreign government ownership of communications infrastructure raises distinct concerns from content regulation — concerns about data access, algorithmic control, and structural influence rather than the specific messages transmitted.

First Amendment Context

The First Amendment does not protect foreign governments' speech rights on the same terms as domestic speakers. Bluman v. FEC (2012) established that Congress may prohibit foreign nationals from making political expenditures in U.S. elections. But the application of that principle to foreign disinformation is more complex: when foreign propaganda enters American discourse and is repeated and amplified by American citizens who may not know its origin, suppressing the content requires suppressing speech by American citizens about political matters — which receives the highest First Amendment protection.

Murthy v. Missouri (2024) addressed a related question: whether government pressure on social media platforms to suppress content — including content claimed to be foreign disinformation — constitutes unconstitutional viewpoint discrimination. The Court held that the plaintiffs lacked standing because they could not trace their injuries to the government's communications with the platforms, but the broader question of when government anti-disinformation efforts become censorship by another name remains live. The constitutional line runs between transparent, legally authorized enforcement actions targeting deceptive conduct and informal government pressure on platforms to suppress disfavored political content.

Internet & AI Implications

Social media platforms have become the primary venue for foreign disinformation operations because they combine global reach, microtargeting capability, and algorithmic amplification that multiplies the impact of small initial investments. The Internet Research Agency spent approximately $100,000 on Facebook ads in 2016; the same investment with generative AI in 2026 could produce an order of magnitude more content, distributed through more sophisticated synthetic personas, reaching more targeted audiences.

Platforms have developed detection systems for coordinated inauthentic behavior — patterns of account activity suggesting automation, foreign origin, or coordinated action. These systems are imperfect and have misidentified legitimate political organizing as inauthentic behavior. The detection gap between AI-generated influence operations and platform identification capabilities remains substantial, and the asymmetry between the cost of production and the cost of detection favors the attacker. Government-platform information sharing about known foreign operations — without requiring platforms to suppress based on content alone — is the current operational model, with its own accountability gaps.
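The core idea behind coordinated-inauthenticity detection can be illustrated with a toy heuristic. The sketch below is a deliberately simplified assumption, not any platform's actual system: it flags groups of distinct accounts that post near-identical text within a short time window. All names (`flag_coordinated`, the post tuple format, the thresholds) are hypothetical, and as the text notes, real systems using signals like this have misidentified legitimate organizing, since authentic campaigns also coordinate messaging.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post record: (account_id, timestamp, text).
# Toy signal: many distinct accounts posting the same text close together.

def flag_coordinated(posts, min_accounts=3, window=timedelta(minutes=10)):
    """Return (text, accounts) pairs where at least `min_accounts` distinct
    accounts posted identical (normalized) text within `window` of each other."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text.strip().lower()].append((ts, account))

    flagged = []
    for text, events in by_text.items():
        events.sort()  # time order
        start = 0
        for end in range(len(events)):
            # shrink the window from the left until it spans <= `window`
            while events[end][0] - events[start][0] > window:
                start += 1
            accounts = {a for _, a in events[start:end + 1]}
            if len(accounts) >= min_accounts:
                flagged.append((text, accounts))
                break  # one flag per text suffices for this sketch
    return flagged

posts = [
    ("a1", datetime(2024, 1, 1, 12, 0), "Vote early!"),
    ("a2", datetime(2024, 1, 1, 12, 2), "vote early!"),
    ("a3", datetime(2024, 1, 1, 12, 5), "Vote early! "),
    ("a4", datetime(2024, 1, 2, 9, 0), "Unrelated post"),
]
print(flag_coordinated(posts))  # flags "vote early!" across a1, a2, a3
```

The design choice worth noting is that the heuristic keys on behavior (timing and repetition across accounts), not on what the message says — the same distinction the surrounding policy discussion draws between conduct-based and content-based intervention.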

Free Speech Atlas Editorial View

Foreign government covert influence operations against American democracy are not speech; they are hostile actions by foreign powers against the self-governance of American citizens, and the government has both the authority and the obligation to counter them. FARA enforcement, fraud prosecutions for identity deception, intelligence countermeasures against known operations, and mandatory disclosure requirements for foreign-origin political content are all legitimate tools.

The critical constraint is that targeting must be based on the deceptive, covert, foreign-government-controlled nature of the operation — not on the political content of the messages. When enforcement actions slide from targeting Russian front organizations to pressuring platforms to remove domestic political accounts based on claimed foreign connections, the tool has been converted from a democratic protection into a censorship mechanism. That line has been crossed in practice, and its maintenance requires institutional constraints on how government communicates with platforms about content removal.

The TikTok structural concern — foreign government control of a communication platform — is constitutionally distinct from content regulation and may justify structural remedies that would be impermissible as content restrictions. The two issues should not be conflated in policy or law.