Can Private Companies Censor Speech?

Private companies have broad legal authority to restrict speech on their platforms. The First Amendment binds only governments. But the cultural and democratic stakes of private speech control extend well beyond that legal baseline.

The Legal Reality: Private Companies and the First Amendment

The foundational principle of American First Amendment law is that the Constitution's speech protections apply to government action, not private conduct. A private company — whether a social media platform, an employer, a publisher, or a shopping mall — can restrict speech on its property or within its operations without violating the First Amendment. The Bill of Rights was designed to protect individual liberty against government tyranny; it was not designed to require private actors to be neutral arbiters of speech.

This principle produces outcomes that many find counterintuitive. A private employer can fire an employee for their social media posts; a private platform can ban users for their political opinions; a private bookstore can refuse to stock books with which it disagrees; a private university can impose speech codes that a public university could not constitutionally enforce. In each case, the private actor is exercising its own rights — to set conditions of employment, to curate its platform, to choose what to sell, to set its educational environment — without constitutional constraint.

The practical consequence is that most speech restriction in contemporary American life occurs through private action rather than government censorship. The debates about social media content moderation, workplace speech codes, and campus speaker disinvitations involve private actors making choices that are largely unconstrained by constitutional law. This has led some scholars and advocates to argue that a coherent commitment to free expression requires norms and laws that go beyond the Constitution's protection against government action — that private censorship, at sufficient scale and power, is as threatening to expressive freedom as government censorship.

Historical Origins of the Private-Public Distinction

The state action doctrine — the requirement that a constitutional violation involve government action rather than purely private conduct — has deep roots in American constitutional law. The Civil Rights Cases (1883) established that the Fourteenth Amendment's equal protection guarantee applied only to state action, not private discrimination — a ruling that limited Congress's power to reach private racial discrimination and that shaped constitutional law for a century. The parallel principle for the First Amendment — that it constrains only government, not private actors — has never been seriously questioned as a constitutional matter.

The doctrine's history includes a significant exception: the company town doctrine established in Marsh v. Alabama (1946), where the Supreme Court held that the First Amendment applied to a private company that owned and operated an entire town as its property. Because the company town performed all the functions of a government — providing services, maintaining public order, operating public spaces — the Court held that it could not exercise those governmental functions while excluding First Amendment protections. The company town doctrine suggested that private actors performing sufficiently public functions might be subject to constitutional constraints.

Pruneyard Shopping Center v. Robins (1980) held that states could require private shopping malls to allow signature gathering and leafleting, without violating the mall owner's First Amendment rights — because the state was not compelling the mall to carry specific political messages it opposed. The decision suggested some flexibility in how states could regulate speech on private property. But the Court has generally declined to extend the company town doctrine to other private entities, maintaining a relatively clear line between government and private action for First Amendment purposes.

The State Action Doctrine and Its Limits

The state action doctrine is not absolute — courts have recognized circumstances where private actors' conduct is sufficiently entangled with government to constitute state action. 'Entanglement' theories find state action when private actors exercise powers traditionally reserved to government, when the government provides significant encouragement or authorization for the private conduct, or when the government and private actor are so closely involved that the private conduct is essentially governmental.

Attempts to apply state action doctrine to social media platforms have generally failed. In Prager University v. Google (2020), the Ninth Circuit rejected the argument that YouTube, as a dominant public communications platform, performed a governmental function that would subject it to First Amendment constraints. The court held that private property does not become the functional equivalent of government simply because it is widely used and important. The 'public function' exception to state action doctrine requires that the private actor perform a function that is traditionally and exclusively performed by government — like running elections or administering a city.

Some scholars argue that the state action doctrine was developed in an era when private power and government power were more clearly separable than they are today. When a handful of private companies control the digital infrastructure through which most public discourse occurs, the theoretical distinction between private and government censorship may not capture the practical reality of speech suppression. These arguments have influenced legislative proposals to impose First Amendment-like constraints on dominant platforms, but courts have not accepted them as a basis for extending the First Amendment itself.

The Practical Concern: Private Power over Public Discourse

Even without a First Amendment violation, private company control over speech raises serious concerns about expressive freedom. The most immediate concern involves the concentration of platform power: a small number of companies control the digital infrastructure through which most public discourse occurs. When these companies make content moderation decisions, their decisions do not affect one person's expression in isolation — they shape the information environment for hundreds of millions of people. The scale of this power has no historical precedent in the private sector.

Employer speech restrictions represent a different dimension of private censorship. Most American employees work in at-will employment relationships — they can be fired for any reason not prohibited by law. Political speech is not a protected characteristic under federal anti-discrimination law (though a handful of states prohibit political discrimination in employment). The result is that employees face significant chilling effects on their expressive freedom: a tweet criticizing a company's management, a social media post expressing controversial political views, or a letter to the editor on a contested public issue can result in termination with no legal remedy.

Publishing and media concentration create further private speech constraints. When a small number of publishing houses control access to major book markets, when a handful of record labels dominate music distribution, when a few media conglomerates control most of the nation's news outlets, private editorial decisions about what to publish or broadcast shape public discourse in ways that aggregate to something like censorship power. The First Amendment prohibits none of this, but a commitment to genuine expressive pluralism requires confronting these private power concentrations as well as government censorship.

Proposed Reforms: From Common Carrier to New Rights

The mismatch between the scale of private platform power and the limits of First Amendment protection has generated numerous proposals for reform. Common carrier status — treating dominant platforms as public utilities required to provide non-discriminatory service, as telephone companies have historically been required to do — would impose neutrality obligations that limit platforms' ability to restrict speech based on viewpoint. Proponents argue that platforms large enough to constitute essential communication infrastructure should be regulated like other utilities; opponents argue this would force platforms to carry harmful, illegal, and abusive content they currently moderate.

Congressional proposals have ranged from narrowing Section 230 immunity, to encourage more careful moderation, to eliminating the immunity entirely as a way of creating accountability for harmful content. The EU's Digital Services Act takes a different approach — imposing process and transparency obligations rather than changing content liability rules — requiring platforms to provide explanations for moderation decisions, maintain appeal processes, and submit to audits. The UK's Online Safety Act creates duty-of-care standards requiring platforms to protect users from certain categories of harmful content.

State-level legislation has attempted to fill the gap. California's AB 587 requires large social media companies to report on their content moderation practices. New York's proposed legislation would require platforms to have neutral policies. These state approaches face constitutional challenges: to the extent they burden platforms' editorial choices, they may conflict with the protection for platforms' editorial discretion that the Supreme Court recognized in Moody v. NetChoice (2024).

AI, Automation, and Private Censorship at Scale

Artificial intelligence has dramatically expanded the scale and opacity of private content decisions. AI moderation systems make billions of decisions daily about what content to allow, restrict, flag, or remove — decisions that are largely invisible to users, difficult to audit, and implemented without the transparency and due process that formal government censorship would require. The combination of private action (no constitutional constraint) and AI automation (no meaningful human review at scale) creates a censorship capacity that dwarfs anything a government bureaucracy could achieve.

AI has also enabled new forms of private speech suppression that operate below the threshold of explicit removal decisions. Algorithmic demotion — reducing the distribution of content without removing it — affects expression without triggering the notice and appeals processes that formal removal decisions require. Demonetization — removing advertising revenue from content that remains technically available — creates economic pressure on speakers without technically silencing them. These softer censorship mechanisms are harder to study, document, and contest than explicit removal decisions.
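The tiered structure described above — removal, demotion, demonetization, each with different visibility to the speaker — can be sketched in a few lines. Everything here is illustrative: the risk thresholds, tier names, and the rule that only explicit removal triggers notice are hypothetical assumptions, not any platform's actual policy.

```python
# Illustrative sketch of a tiered enforcement pipeline driven by an
# upstream classifier's risk score. Thresholds and the notice rule
# are hypothetical, chosen only to show the structure of "soft"
# enforcement: demotion and demonetization act on content the user
# never learns was acted upon.

from dataclasses import dataclass


@dataclass
class Decision:
    action: str            # "remove" | "demote" | "demonetize" | "allow"
    ranking_weight: float  # multiplier applied in feed ranking
    monetized: bool        # whether ads still run against the content
    user_notified: bool    # only explicit removals trigger notice/appeal


def enforce(risk_score: float) -> Decision:
    """Map a classifier's risk score (0.0-1.0) to an enforcement tier."""
    if risk_score >= 0.9:
        # Explicit removal: the only tier with notice and appeal.
        return Decision("remove", 0.0, False, user_notified=True)
    if risk_score >= 0.7:
        # Demotion: content stays up but circulates far less.
        return Decision("demote", 0.1, True, user_notified=False)
    if risk_score >= 0.5:
        # Demonetization: economic pressure without removal.
        return Decision("demonetize", 1.0, False, user_notified=False)
    return Decision("allow", 1.0, True, user_notified=False)
```

The point of the sketch is that three of the four tiers leave `user_notified` false: the enforcement is real, but it happens below the threshold at which formal process attaches.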

The deployment of AI in private employment decisions creates parallel concerns for workplace speech. Automated monitoring tools that track employees' social media activity and flag posts for management review have made it practical for employers to monitor off-duty employee speech at scale. The chilling effect of knowing that one's social media activity is subject to automated employer surveillance may suppress speech that employees would otherwise feel free to engage in — a private enforcement mechanism that produces results similar to what government censorship would achieve, without any constitutional constraint.
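The monitoring pattern described above can be reduced to a minimal sketch: scan employee posts, flag matches, queue them for management review. The watch list and keyword matching here are hypothetical simplifications — real tools use ML classifiers rather than keyword sets — but the scan-flag-review structure is the same.

```python
# Minimal sketch of automated employee-speech monitoring. The watch
# terms and keyword matching are hypothetical stand-ins for the ML
# classifiers real tools use; the structure (scan, flag, queue for
# management review) is what matters.

WATCH_TERMS = {"layoffs", "strike", "management", "unsafe"}


def flag_posts(posts: list[str]) -> list[str]:
    """Return the posts containing any watched term, queued for review."""
    flagged = []
    for post in posts:
        words = {w.strip(".,!?").lower() for w in post.split()}
        if words & WATCH_TERMS:
            flagged.append(post)
    return flagged
```

Even this toy version shows why the chilling effect does not depend on any post actually being acted on: the employee's rational response to knowing such a filter exists is to avoid the watched topics entirely.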