Should Platforms Be Liable for User Speech?

Should social media platforms be legally responsible for harmful speech posted by users?

Section 230 of the Communications Decency Act gives platforms broad immunity from liability for user-posted content. Reforming or repealing it could make platforms legally responsible for harmful speech — with major implications for free expression.

The Case for More Speech

Section 230 of the Communications Decency Act (1996) provides that platforms shall not be treated as the publisher or speaker of third-party content. This single provision, just 26 words, created the legal foundation for the modern internet. Without it, platforms would face a choice between two untenable options: review everything before it is published (eliminating the open, user-driven nature of the internet) or host no user content at all. Section 230 enabled the user-generated content model that made the internet valuable for free expression.

Liability would predictably produce over-moderation, not better speech. If platforms faced legal liability for user speech, their rational response would be to remove any content that might expose them to a lawsuit, which means removing speech that is controversial, political, or about sensitive topics, whether or not it is actually harmful. Large platforms could absorb the cost of massive content review operations; small and new entrants could not, entrenching dominant incumbents and eliminating competitive alternatives.

The analogy to publishers does not hold. Traditional publishers like newspapers are liable for what they publish because they actively select, edit, and present content. Platforms like Twitter or YouTube receive hundreds of millions of posts and uploads every day, far more than any imaginable human workforce could review. Holding platforms liable as publishers for content they did not select or promote treats passive hosting as equivalent to active editorial endorsement, a category error with enormous practical consequences.

Zeran v. America Online (4th Circuit, 1997) correctly interpreted Section 230's purpose. When Kenneth Zeran was harassed by anonymous posters using AOL's platform, he sought to hold AOL liable. The court held that Section 230 immunized AOL — not because AOL had done nothing wrong, but because imposing liability on platforms for user content would chill the development of an open internet. The ruling has shaped internet law for nearly 30 years and reflects a considered policy judgment about the tradeoffs between platform accountability and internet openness.

The EU's Digital Services Act offers a cautionary contrast. Europe's more demanding platform liability framework has produced visible differences: European platforms are more conservative in what they permit, more aggressive in removing borderline content, and less hospitable to political and controversial speech than their American equivalents. Whether this is a feature or a bug depends on your speech values — but it is a real consequence of a different liability framework.

The Case for Restriction

The immunity Section 230 provides is far broader than its original purpose required, and Congress has never updated it to reflect the emergence of trillion-dollar platforms with sophisticated recommendation and amplification systems. A law designed to protect early internet bulletin boards from liability for user posts should not provide unlimited immunity for platforms that algorithmically amplify content, harmful content included, in order to maximize engagement.

The complete immunity removes all accountability incentives. If a platform cannot be sued for hosting harassment, disinformation, or incitement no matter how foreseeable the harm, it has no legal incentive to address these harms when doing so would cost money. The result is platforms that are highly efficient at generating engagement and advertising revenue but that internalize no cost for the social harms their amplification systems create.

Algorithmic amplification is not passive hosting. When a platform's recommendation system takes a piece of harmful content and actively distributes it to millions of users, the platform is doing something qualitatively different from passively hosting user speech. The argument that platforms are mere conduits — like telephone companies that transmit whatever users say — does not accurately describe the relationship between platform recommendation systems and user experience. A platform that algorithmically amplifies content it knows to be false or harmful is more like a publisher than a telephone company.
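
To make the distinction concrete, the toy Python sketch below contrasts the two roles. The Post structure, the sample data, and the ranking rule are hypothetical illustrations, not any platform's actual system: a chronological feed limited to followed accounts merely hosts the rumor, while an engagement-ranked feed actively pushes it to users who never asked for it.

```python
# Toy illustration only; Post, the sample data, and the ranking rule are
# hypothetical and not any real platform's system.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: int               # seconds since epoch
    predicted_engagement: float  # estimated clicks/shares per view

def chronological_feed(posts, followed):
    """Passive hosting: only accounts the user chose to follow, newest first."""
    return sorted((p for p in posts if p.author in followed),
                  key=lambda p: p.timestamp, reverse=True)

def engagement_ranked_feed(posts):
    """Active amplification: every post competes, ranked by predicted
    engagement, whether or not the user asked to see its author."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

posts = [
    Post("friend", "vacation photos", 1_700_000_200, 0.02),
    Post("stranger", "inflammatory rumor", 1_700_000_100, 0.35),
]
print([p.text for p in chronological_feed(posts, followed={"friend"})])
# ['vacation photos']  -> the rumor is merely hosted, never shown to this user
print([p.text for p in engagement_ranked_feed(posts)])
# ['inflammatory rumor', 'vacation photos']  -> the rumor is actively promoted
```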

FOSTA-SESTA (2018) showed that targeted Section 230 modifications are possible. Congress amended Section 230 to remove immunity for platforms that knowingly facilitate sex trafficking, carving out a specific, severe category of harm rather than disturbing the immunity wholesale. A similar approach, removing immunity for narrow, well-defined categories of serious harm, is constitutionally and practically more defensible than either eliminating Section 230 outright or preserving the current unlimited immunity.

Historical Context

Section 230 was enacted in 1996 as part of the Communications Decency Act — a broader piece of internet regulation that the Supreme Court largely struck down in Reno v. ACLU (1997) on First Amendment grounds. The Section 230 immunity provision survived because it was not content-based regulation but a liability framework.

The provision's authors, Representatives Chris Cox and Ron Wyden, designed it primarily to solve a specific problem: Stratton Oakmont v. Prodigy (1995) had held that Prodigy was liable as a publisher for user content because it had moderated some content — suggesting that any moderation exposed platforms to publisher liability for everything they failed to catch. Cox and Wyden wanted to encourage moderation without penalizing it. The immunity was a means to that end, not the primary goal.

In the nearly 30 years since its enactment, Section 230 has been interpreted more broadly than its authors likely intended, sheltering platforms from liability even when they actively curate and recommend content. Both political parties have criticized it: conservatives allege it enables platform suppression of conservative speech, progressives argue it protects platforms from accountability for harmful content. This unusual bipartisan criticism has driven repeated legislative proposals, but apart from the narrow FOSTA-SESTA carve-out, no broader reform has passed.

First Amendment Context

The First Amendment intersects with Section 230 reform in complex ways. Platforms have their own First Amendment rights — including the right to moderate content as editorial discretion and the right not to be compelled to carry speech they find objectionable. Moody v. NetChoice (2024) confirmed that platform curation involves protected editorial activity.

This means that some Section 230 reform proposals have First Amendment problems of their own. A law that required platforms to carry all lawful user speech — removing their discretion to moderate — would potentially compel platforms to host speech they object to, raising First Amendment concerns under Hurley and Tornillo. Conversely, imposing liability for failure to remove harmful speech might force over-moderation that harms speakers' rights.

Gonzalez v. Google (2023) presented the Supreme Court with a direct question about whether algorithmic recommendations lose Section 230 protection — and the Court declined to address it, deciding on narrower grounds and leaving the central question unresolved. The doctrinal framework for distinguishing protected platform editorial activity from actionable amplification of harmful content remains underdeveloped.

Internet & AI Implications

AI has introduced new dimensions to the Section 230 question that the provision's 1996 framers could not have anticipated. When a platform's AI system generates text completions, image suggestions, or content summaries in response to user input, the platform is contributing to the content in ways that blur the line between hosting and creating. Several lower court cases have considered whether AI-generated or AI-assisted content retains Section 230 protection, with inconsistent results.

AI-powered recommendation systems also challenge the passive-hosting framing more directly than human-curated algorithms. When a recommendation AI learns from engagement signals that extremist or inflammatory content drives more engagement and promotes it accordingly, the platform's algorithmic choices are making substantive content decisions at massive scale. Whether this is "editorial discretion" protected by both Section 230 and the First Amendment, or active harmful amplification subject to accountability, is the central unresolved question in platform liability law.
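
A minimal sketch of that feedback loop, assuming invented categories, click rates, and a simple reinforcement rule rather than any deployed recommender, shows how a system trained only on engagement drifts toward inflammatory content without anyone deciding to favor it:

```python
# Minimal engagement-feedback loop; categories, click rates, and the
# reinforcement rule are invented for illustration.
import random
from collections import defaultdict

CATEGORIES = ["news", "hobby", "inflammatory"]
CLICK_RATE = {"news": 0.05, "hobby": 0.08, "inflammatory": 0.20}  # assumed

weights = defaultdict(lambda: 1.0)  # learned propensity to show each category

def recommend():
    """Pick a category in proportion to its learned weight."""
    total = sum(weights[c] for c in CATEGORIES)
    r = random.uniform(0, total)
    for c in CATEGORIES:
        r -= weights[c]
        if r <= 0:
            return c
    return CATEGORIES[-1]

def simulate(rounds=20_000):
    for _ in range(rounds):
        shown = recommend()
        if random.random() < CLICK_RATE[shown]:  # user engaged
            weights[shown] += 1.0                # reinforce what got clicks
    return dict(weights)

print(simulate())
# The "inflammatory" weight dwarfs the others: the system never decided
# to favor harmful content, it only learned that such content pays.
```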

Free Speech Atlas Editorial View

Section 230 reform should be targeted and evidence-based rather than driven by political frustration with specific platform decisions. The provision's core function — preventing platforms from being liable as publishers for the full range of user-generated content — is sound and essential to internet openness. Eliminating it would predictably produce a worse speech environment, not a better one.

The case for targeted reform is stronger. When platforms' algorithmic systems actively amplify content they know to be harmful — not merely hosting it but promoting it to maximize engagement — the passive-hosting immunity rationale no longer applies cleanly. A carefully drawn modification that distinguishes passive hosting from active algorithmic amplification, and removes immunity for the latter in cases of specific well-defined harms, is a more defensible approach than either full immunity or full liability.

The EU's experience under the Digital Services Act is worth watching. If European platforms develop effective accountability systems without the predicted collapse of internet openness, that would provide evidence relevant to American reform debates. If the DSA instead produces the over-moderation and speech chilling its critics anticipate, it would strengthen the case for the Section 230 approach.