The Internet and the New Free Speech Landscape
The internet created the most powerful free speech technology in history and the most complex regulatory challenges. From Reno v. ACLU to Section 230 to Moody v. NetChoice, the arc traces an ongoing struggle to apply First Amendment principles to the digital public square.
The internet did not begin as a public phenomenon. For much of its early existence, it was a tool for academic researchers and defense contractors, operating through institutions that enforced their own speech norms. The commercial internet of the early 1990s — Usenet newsgroups, early bulletin board systems, and online services like America Online and CompuServe — introduced the first generation of platform speech problems.
The foundational legal question arose quickly: when a platform hosts user speech, is it a publisher (responsible for content it knows about) or a distributor (responsible only if given specific notice of illegal content)? In Stratton Oakmont v. Prodigy (1995), a New York court held that Prodigy's moderation of its bulletin boards made it a publisher, liable for defamatory posts by users. The decision created a perverse incentive structure: platforms that moderated at all were liable; platforms that moderated nothing were safe. Congress responded with Section 230 of the Communications Decency Act (1996), shielding platforms from liability for user-generated content regardless of whether they moderated. Section 230 became the legal foundation of the internet we know — enabling the development of social media, user-generated platforms, and content sites without platforms facing ruinous litigation exposure.
Zeran v. AOL (4th Cir. 1997) was the first major Section 230 decision, confirming that platforms could not be held liable even after receiving specific notice of defamatory content. The decision set the pattern for broad immunity that has persisted ever since, though it has generated ongoing controversy about whether platforms exploit Section 230 to escape accountability for harmful content they profit from.
Reno v. ACLU (1997) addressed the constitutional dimension. Congress had included broad restrictions on "indecent" and "patently offensive" online speech in the Communications Decency Act. The Supreme Court struck these provisions down, holding that internet speech receives the same full First Amendment protection as print media, not the reduced protection applied to broadcast. Justice Stevens's opinion, echoing the district court's description of the internet as "the most participatory form of mass speech yet developed," declined to extend the broadcast scarcity rationale that had justified FCC content regulation.
The litigation over the Child Online Protection Act (COPA, 1998) extended this line: the law was enjoined immediately, twice reached the Supreme Court, and was ultimately struck down after a decade of litigation, confirming that the government faces a heavy burden in justifying online speech restrictions even when the stated purpose is protecting children.
By the 2010s, social media platforms — Facebook, Twitter, YouTube — had become so central to public discourse that debates about content moderation took on the character of debates about government censorship, even though platforms are private actors not bound by the First Amendment. The 2016 election and Russian Internet Research Agency disinformation campaign raised alarms about platform responsibility for politically manipulative content. The COVID-19 pandemic produced intense pressure on platforms to remove health misinformation, and documented cases of government officials urging platforms to take down content raised "jawboning" concerns about covert government censorship through private intermediaries.
The deplatforming of President Trump following January 6, 2021, crystallized the debate. Conservative-led legislatures in Florida and Texas enacted laws sharply restricting large platforms' ability to remove, deprioritize, or deplatform political speech and speakers, overriding the platforms' own editorial discretion. The Supreme Court's decision in Moody v. NetChoice (2024) vacated lower court rulings and remanded, but made clear that platforms' editorial and curation decisions receive substantial First Amendment protection — while leaving many fundamental questions unresolved.
Gonzalez v. Google (2023) asked whether Section 230 immunized Google for algorithmic recommendations of terrorist content. The Court sidestepped the Section 230 question, resolving the case on other grounds and leaving the existing immunity framework intact. These decisions collectively signal an internet speech law that remains in flux — with the basic questions about platform power, government pressure, and the meaning of the "public square" still actively contested.