Can AI Violate Free Speech?
AI systems built by private companies cannot violate the First Amendment — but the aggregate effect of AI-mediated speech may raise serious free expression concerns.
The First Amendment Answer: AI and the State Action Requirement
The threshold First Amendment question about AI and free speech is the same as for any private actor: does the First Amendment apply? The First Amendment prohibits government from abridging free speech — it does not, as a constitutional matter, constrain what private AI companies do with their systems. When OpenAI decides that ChatGPT will not produce certain content, when Google's search algorithm demotes certain results, or when Meta's AI moderation removes certain posts, these are private actor decisions that the First Amendment does not govern, regardless of their scale or effect on expression.
The state action analysis gets complicated when government is involved in AI companies' operations. If the government pressures AI companies to suppress certain speech — as occurred with some platform moderation decisions around COVID-19 and election misinformation — that pressure may constitute government action sufficient to trigger First Amendment scrutiny. Murthy v. Missouri (2024) addressed government communications with social media platforms about content moderation, with the Supreme Court holding that plaintiffs lacked standing to challenge those communications. The underlying question of when government pressure on private AI companies constitutes unconstitutional censorship by proxy was not definitively resolved.
Government-owned or government-operated AI systems present clearer First Amendment issues. An AI system deployed by a government agency to respond to citizen inquiries is the government speaking — and government speech raises its own First Amendment considerations about viewpoint, accuracy, and accountability. An AI system deployed by a public university, a government-operated hospital, or a state court system to filter or moderate communications implicates First Amendment protections that would apply to those institutions' human-operated equivalents.
Historical Context: From Telephone Companies to AI Assistants
The question of whether communication technology companies have First Amendment obligations to their users has been debated throughout the development of modern communications infrastructure. Telephone companies were subjected to common carrier regulation — required to connect all calls without regard to content — based on their monopoly position in communications infrastructure. Courts held that this did not violate the telephone companies' First Amendment rights because telephone companies did not engage in editorial functions; they were passive conduits rather than active speakers.
Internet service providers' First Amendment obligations were debated extensively in the 1990s and 2000s, with courts generally holding that ISPs, like telephone companies, performed conduit rather than editorial functions and could be subjected to certain content-neutral carriage requirements without First Amendment conflict. The argument was that conduit carriers are categorically different from editorial speakers, and that their commercial infrastructure function justified regulation that would be unconstitutional if applied to newspapers or broadcasters.
AI systems are categorically different from telephone networks and ISPs in the relevant sense: they are not passive conduits but active participants in generating, filtering, and shaping communication. An LLM that crafts a response to a user's question is engaging in something much closer to editorial judgment than a telephone company that connects a call. This distinction — between conduit and editor — is central to determining whether AI systems can claim First Amendment protection against regulation, and whether they have First Amendment obligations to users whose speech they restrict.
The Practical Concern: AI as a Chokepoint for Expression
Even if AI companies have no First Amendment obligations to users as a matter of current constitutional law, the practical concern about AI as a chokepoint for expression is real and growing. As LLMs become primary tools for research, writing, and information access, the design choices embedded in those systems — about what topics they will address, what perspectives they will present, what information they will provide — shape users' access to information in ways that have no historical parallel.
A student using an AI assistant to research a controversial topic receives information filtered through the AI's training and alignment choices. A professional using an AI writing tool creates content shaped by the AI's content policies. A user asking a search AI about a medical condition, a legal question, or a political issue receives answers that reflect the AI system's implicit and explicit choices about what information to surface. When AI systems are designed to avoid controversy, refuse to take positions on contested questions, or default to official consensus views, they produce a homogenizing effect on information access that could reduce intellectual diversity at scale.
The concern is not merely theoretical. Studies of LLM behavior have documented systematic patterns in how different AI systems handle contested political and social topics — some consistently presenting one-sided or sanitized views of sensitive subjects, others refusing to engage with entire topic areas. As these systems become more capable and more widely used, the speech policy implications of these design choices grow proportionally. The question of who makes these design choices — and what process governs them — is becoming as consequential as the question of what the choices are.
Government AI and First Amendment Obligations
Government deployment of AI in contexts affecting expression raises First Amendment issues more directly than private AI company decisions. Public universities, government agencies, public schools, and courts that deploy AI systems to moderate communications, filter information, or generate official responses are subject to First Amendment constraints in ways that private companies are not. When a public university's AI content moderation system removes student speech based on viewpoint, it raises the same First Amendment issues as a human administrator removing the same speech.
Government AI systems that process citizens' communications — for security screening, fraud detection, or public safety monitoring — implicate Fourth Amendment concerns alongside First Amendment concerns. The chilling effect of knowing that government AI is monitoring one's communications may suppress political expression even without any specific enforcement action, in ways that parallel traditional surveillance chilling effects. The scale and opacity of AI surveillance — which can process far more communications than human analysts — amplify these concerns.
The use of AI in government decision-making that affects speech-related rights raises procedural due process concerns as well as First Amendment concerns. When an AI system automatically suspends a government employee's security clearance based on their communications, or flags a nonprofit organization for political activities based on AI analysis of its public statements, or generates a criminal record assessment that affects a person's ability to engage in certain expressive activities, the opacity and potential bias of AI decision-making raise constitutional issues that courts are only beginning to address.
Regulatory Landscape: What Rules Govern AI and Speech
The regulatory landscape governing AI and expression is fragmentary and evolving rapidly. In the United States, no comprehensive federal framework governs what AI companies may or must do regarding content. Section 230 of the Communications Decency Act protects platforms from liability for user-generated content and for good-faith moderation decisions, but its application to AI-generated content — where the AI itself produces the speech rather than merely hosting user speech — is unsettled. The Federal Trade Commission has asserted authority over deceptive AI practices under existing consumer protection law, but this does not directly address free speech concerns.
The EU AI Act, which entered into force in 2024, regulates AI systems by risk level, with specific requirements for high-risk AI applications. AI systems used in content moderation at scale are subject to transparency and accountability requirements, though the primary framework for platform AI moderation is the Digital Services Act rather than the AI Act. These EU frameworks do not apply directly to US-based AI companies operating in the US, but they set global standards that influence corporate behavior and may serve as models for future US legislation.
State-level AI regulation in the United States has addressed specific aspects of AI and expression — several states have passed laws requiring disclosure of AI-generated political advertising, and California has passed legislation addressing AI-generated deepfakes in electoral contexts. These piecemeal state approaches do not constitute a comprehensive framework, and their interaction with the First Amendment rights of AI companies under Moody v. NetChoice is not yet fully tested in courts.
Future Questions: AI Speakers, AI Rights, and Democratic Control
The most profound questions about AI and free speech lie ahead. As AI systems become more capable of generating sophisticated, contextually appropriate expression, questions arise about whether and when AI systems should be considered speakers with their own expressive interests — and, relatedly, whether users of AI systems have derivative free speech interests in the AI's outputs that government regulation might burden. These questions have no established answers in current doctrine and are the subject of active scholarly debate.
The democratic governance question may be the most urgent. The companies that build and align large AI systems are making choices — about what information is and is not accessible, what perspectives are and are not represented, what topics are and are not discussable — that were previously distributed across many actors. Whether this concentration of expressive power in a handful of AI companies requires democratic intervention, and what form that intervention should take, is a question that current law and institutions are not designed to answer.
The international dimension adds further complexity. AI systems developed in the United States operate globally, encountering different regulatory requirements and different cultural norms about speech in different jurisdictions. AI systems developed in China operate under government requirements to promote Communist Party positions and suppress political criticism. As AI becomes a primary communication and information tool globally, the design choices embedded in dominant AI systems will shape information access worldwide in ways that transcend any single country's regulatory framework — raising fundamental questions about who should govern global AI speech infrastructure and how.