AI and 1A: Is Artificial Intelligence Protected by the First Amendment?

Key takeaways:
- The First Amendment protects the free speech rights of people, corporations and other legal entities from government restriction, but no court has found that AI programs themselves have those rights.
- AI-generated content is generally afforded the same First Amendment protections as content created by people, corporations and other legal entities, but it is also subject to the same established limits on speech.
- Efforts to regulate AI-generated content through disclosure or labeling requirements must be narrowly tailored to avoid violating the First Amendment.
- Liability for harmful AI-generated content depends on who created or shared the content. Platforms may be held responsible if the content is generated by their own AI tools.
Artificial intelligence is everywhere, generating text, images and videos on our phones and across the internet. As with every new medium of communication, from the printing press to the internet, AI raises pressing questions for courts and lawmakers about how the First Amendment applies to speech created, shared or shaped by these technologies.
Does AI have First Amendment rights?
The First Amendment limits the government’s power to punish or restrict speech.
When it comes to AI, the First Amendment rights at stake are held by the people, corporations or other legal entities who create and use AI; AI is a tool, not an independent entity, so it has no free speech rights of its own. People who use AI to create and disseminate speech are generally covered by the First Amendment and can raise it as a defense if someone tries to hold them criminally or civilly responsible for that speech. Courts have consistently found that the First Amendment protects speech regardless of the tools and media used to create and distribute it. Generating text or art with AI is still part of a person’s own expressive activity and a form of speech. So while AI cannot assert its own rights, using AI technology to produce or share speech, in all its different forms, is protected as part of one’s free expression.
The First Amendment also safeguards individuals’ rights to collect information as part of that expressive process. Generative AI, or AI that learns from existing data and uses that data to create new content, such as using a batch of stories to craft a new story, often serves as a tool for gathering and shaping such information.
What are some exceptions to First Amendment protections for AI-generated content?
AI-generated content — text, audio, images or video created by an AI program, such as ChatGPT — is treated similarly to human-generated content under First Amendment law. If AI writes an article or creates an image, that content is considered “speech.” The government cannot outlaw speech simply because it was produced by an algorithm or AI system. Any law singling out AI-generated expression would likely be viewed as a content-based regulation. Because content-based speech regulations target the message being conveyed, they face the highest level of review from courts. Courts will uphold such regulations only if they are necessary to serve a compelling government interest and are narrowly tailored to achieve that interest.
AI-generated speech is treated the same as human speech in this regard, including whether it falls into one of the traditional categories of unprotected speech (such as defamation, true threats, incitement of imminent lawless action, fraud or obscenity). For example:
- The government may punish or regulate AI-produced speech that amounts to a direct call to commit immediate, lawless action such as violence (known as “incitement to imminent lawless action”) or constitutes a true threat, where the speaker clearly intends to place someone in fear of bodily harm or death.
- Defamation, speech that harms a person’s reputation through either written (libel) or spoken (slander) expression, is not protected by the First Amendment. This is true regardless of whether the speech was produced by AI or a person. Defamation lawsuits have already been filed over false statements produced by generative AI, and in one recent case a court in Georgia found the AI company not liable.
In cases where the content created by AI falls into a category of unprotected speech, a person who has been harmed by that speech may try to hold multiple people or corporations responsible, including the person or company that: a) created the AI program, b) made it available for use, and/or c) intentionally used it to generate the content. Liability will ultimately depend on the facts of each case.
What about deepfakes?
Deepfakes are fabricated or manipulated images, video and/or audio content. As with other expressive tools, deepfakes are generally protected under the First Amendment when used by people to create or convey lawful speech. However, as with all content created by AI, that protection does not extend to deepfakes that fall into established categories of unprotected speech, such as defamation, fraud, incitement or true threats. Deepfakes may also be unlawful when their use violates state-based rights like the right of publicity. Courts evaluate these cases based not on how realistic the deepfake is, or whether it was created by AI, but on whether the content causes a type of harm that the law recognizes.
RELATED: Are Deepfakes Protected by the First Amendment?
Regulating AI content
Policymakers are exploring ways to address the challenges of AI-generated content. One idea is a disclosure or labeling requirement: for example, laws could require AI-generated media to be labeled as such. The aim of such rules is to inform viewers and prevent deception, so people know when they’re viewing deepfakes or reading bot-written text.
From a First Amendment perspective, disclosure requirements must be handled carefully. A required label is a form of compelled speech, or the government telling speakers they must say something or include certain information. Courts have been wary of required labels, especially for political or creative content. Any law forcing labels on AI-generated speech would need to serve a compelling interest (like preventing real harm or deception) and be narrowly tailored to accomplish that aim. If the law is too broad or vague, it could end up chilling lawful speech. For example, California’s attempt to regulate political deepfakes was blocked by a judge as overly broad.
Encouraging companies and platforms to label AI content voluntarily is an alternative. AI developers and platforms can choose to watermark AI-generated content, for example, without legal compulsion. Because such labeling is voluntary rather than required by the government, it does not raise the same constitutional concerns.
Ultimately, disclosure laws for AI content might be possible, but they have to be tightly focused on preventing concrete harms or they risk being struck down by courts as unconstitutional.
Are platforms and websites responsible for AI-generated content on their sites?
When AI-generated content causes harm, a key question is who can be held responsible: Is it the user who prompted the AI, the company that created the AI model or the platform where the content appears?
Platforms and websites are generally not responsible for content posted by users, including AI-generated content. Section 230 of the Communications Decency Act protects websites and social media platforms from being treated as the publisher or speaker of content created by another person. This means that if a user posts AI-produced content on a platform like Facebook or YouTube, the platform is typically not liable for that content.
However, Section 230 protection does not apply if the platform itself created, developed or materially contributed to the content. For example, if the AI content comes from a tool owned by the platform, such as X’s Grok or Snapchat’s My AI, courts would likely find that Section 230 does not shield the platform from liability.
If a person uses an AI tool to write a blog comment and then posts it, that user and/or the company that created the AI tool — not the website hosting the comment — may be held responsible for the content. Whether the user, the AI company or both are responsible will depend on the specific facts, the content created and the new legal theories that courts are still developing.
For example, in May 2025, a court in Georgia ruled in favor of OpenAI, the maker of ChatGPT, in a defamation case. OpenAI had been facing a lawsuit over something its chatbot said about a radio show host. The court ultimately said that the radio host, a public figure, couldn’t prove OpenAI either knew the statement was false or recklessly ignored the possibility that it could be false (also known as “actual malice”), nor could he demonstrate damages related to the defamatory statement.
While AI doesn’t have First Amendment rights, people who use AI to create or share speech do, and that speech is generally protected just like speech created with any other tool. At the same time, AI-generated speech is subject to the same limits as human speech, and courts are actively working through how existing legal principles like defamation, incitement and compelled labeling apply, and who is held liable for speech created or distributed with the help of AI.
This article was updated on June 4, 2025. It may be updated with future developments.
Ashkhen Kazaryan is a fellow for the First Amendment at Freedom Forum.