Are Deepfakes Protected by the First Amendment?
Many fear that deepfake videos created with artificial intelligence will be used to manipulate the outcome of future elections. Sound far-fetched?
Ask Ali Bongo about the effect deepfakes can have on a country’s leadership. The mere suspicion of a deepfake video resulted in a coup attempt against his presidency in the African nation of Gabon.
For several months in 2018, President Bongo made no public appearances and no televised speeches. Many in the country feared he was seriously ill or dead.
Finally, the government announced that he had suffered a stroke while out of the country but was recovering. President Bongo appeared on video soon after to deliver his annual New Year’s address to the nation.
Many people thought Bongo looked strange in the footage. People thought they were being duped by a deepfake government propaganda video. A week later the country’s military attempted a coup, citing the video as the reason.
The video was real, not a deepfake. President Bongo had, in fact, recovered from the stroke. But the mere suspicion that the government was using manipulated video was enough to destabilize an entire nation. (Bongo was eventually toppled by a coup in 2023.)
Now, many observers fear deepfakes will sway voters in the United States during the 2024 election. There have already been election-related deepfakes, such as robocalls in which an AI-generated imitation of President Joe Biden’s voice falsely told New Hampshire voters ahead of the Jan. 23 primary that voting in that contest would prevent them from voting in the general election in the fall.
Seeking to avoid this disruption, the U.S. Congress and state legislatures are considering – and some already have passed – legislation to address the use of deepfakes in elections and beyond. It’s a laudable goal, but is it constitutional? Here’s everything you need to know about deepfakes and the First Amendment.
What are deepfakes and cheap fakes?
Deepfakes and cheap fakes are faked or manipulated images, audio or video content.
Specifically, a deepfake is fake audio or video generated when an AI program is fed a series of real examples of a person’s image and voice and uses them to produce something new that looks real. The result is someone being portrayed as saying or doing something that never actually occurred.
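To make that mechanic concrete, here is a minimal, conceptual sketch in Python with PyTorch of the shared-encoder, per-identity-decoder design that early face-swap deepfakes popularized. The network sizes, stand-in training data and variable names are illustrative assumptions, not any real tool’s code.

```python
# Conceptual sketch (not a working deepfake tool): one encoder shared by
# two identities, plus one decoder per identity. All shapes are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the shared latent space."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()                          # shared by both identities
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (sketch): reconstruct person A with decoder_a (and likewise B
# with decoder_b), so the encoder learns identity-neutral face structure.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real training crops
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The "swap": encode a frame of person A, decode with person B's decoder,
# producing B's face performing A's expression and pose.
fake_b = decoder_b(encoder(faces_a))
```

The design choice that matters is the shared encoder: because it must reconstruct both people, it learns pose and expression in a form that transfers between identities, which is what makes the final swap look natural.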
A cheap fake is like a deepfake but involves more hands-on manipulation. Examples include simple – often crude – face swapping, strategic editing of real footage, and speeding up or slowing down video or audio.
Two famous examples of cheap fakes involve U.S. Rep. Nancy Pelosi when she was speaker of the House. In one, video from the State of the Union address was edited to make it appear that Pelosi ripped up a copy of then-President Donald Trump’s speech in front of a former Tuskegee Airman – which many cited as an act of disrespect toward Trump and the airman. She did rip up the speech – only it was an hour later, after the State of the Union had ended. In the other, actual video footage was slowed by 25% to make it look like she was slurring her words, giving the impression that she was drunk or incapacitated.
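The slowed-down video illustrates how little tooling a cheap fake requires. As a minimal sketch, the widely available ffmpeg tool (assumed to be installed) can apply the same 25% slow-down from a few lines of Python; the file names here are hypothetical placeholders.

```python
# Sketch of the "slowed by 25%" manipulation described above, using
# Python's standard library to drive ffmpeg. File names are hypothetical.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "speech_original.mp4",
    "-filter:v", "setpts=1.333*PTS",  # stretch video timestamps to ~75% speed
    "-filter:a", "atempo=0.75",       # slow the audio to match, deepening and slurring speech
    "speech_slowed.mp4",
], check=True)
```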
Both cheap fakes and deepfakes attempt to distort reality, both rely on real images, and both can be hard to spot. But deepfakes are typically harder to detect and, thanks to AI, easier to create.
The concept of manipulation is not new. Since ancient times, rulers have erased rivals by chiseling their names and likenesses out of existence. Dictators have used edited images to control people for as long as there have been photographs, radio and television.
Made-up content has long been used for entertainment, satire and education, too.
Now, deepfakes created with AI are increasingly convincing.
Are deepfakes protected by the First Amendment?
There is ongoing debate over whether faked or manipulated videos are protected by the First Amendment.
Those saying “no” argue that AI is not protected by the First Amendment. The First Amendment protects people from government infringement on freedom of speech. AI programs aren’t people. They can’t speak. They just respond to prompts fed to them by people.
But the prevailing view is that the First Amendment protects the people who use AI to create deepfakes. The First Amendment protects speech, not speakers. Real people create deepfake content. AI and other technologies – like the printing press, photography, video cameras, radio, television and the internet – are used to create and distribute speech.
The First Amendment right to access information could also protect deepfakes. In 1965, the Supreme Court, in Lamont v. Postmaster General, reviewed a law requiring the U.S. postmaster general to hold mail delivered from a foreign country and designated as “Communist political propaganda” until the intended recipient signed for it. The court declared that the law violated the First Amendment. As Justice William Brennan explained, even if the First Amendment does not protect the foreign sender of political propaganda, it protects the right of the recipient to receive and read it.
In addition, lying is protected by the First Amendment unless it causes actual harm.
When do fake or manipulated videos lose First Amendment protection?
There are several categories of speech that fall outside the protection of the First Amendment. Deepfakes could fall into some of these categories.
- Defamation. Deepfakes may falsely portray someone in a way that harms their reputation. People harmed by deepfakes of themselves may be able to sue the creator of the deepfake for defamation.
- Extortion. Deepfakes could portray someone in a compromising position – such as appearing to cheat on a significant other or appearing in “revenge porn” – and be sent to that person with the threat that the faked video will be posted online unless payment is made.
- False advertising. Deepfakes can be used to falsely portray the effectiveness of a product or to make it seem as if the product is endorsed by a celebrity or trusted person.
- Fraud. The robocalls impersonating President Biden show a common use of deepfakes: making people falsely believe they are following instructions from a valid and trusted source. The faked calls could be considered voter suppression or interference. Similarly, in 2019, the CEO of a British energy firm received a call from someone who sounded like the CEO of his firm’s parent company. The “boss” told him to transfer almost $250,000 to a Hungarian company. He did so, not knowing he was being defrauded.
- Incitement to imminent lawless action. Deepfakes could be used to incite a crowd to illegal activity. It could be illegal to create a video of a fictional disaster, attack or criminal act – or of a real person giving an inflammatory speech – with the explicit intent of provoking a crowd to react violently.
- Obscenity. Deepfakes depicting child sexual abuse or “revenge porn” – creating a sexual image of a real person with the intent to harm their reputation – could be illegal obscenity.
- Perjury. Deepfakes could be used to manufacture false testimony in a video deposition, court testimony or other statements made under oath.
- Right of publicity. Deepfakes often manipulate celebrity likenesses, especially for commercial purposes, which violates “right of publicity” laws in several states.
- True threats. Deepfakes could be used to convey a true threat, a statement that is intended to make the recipient fear for their safety.
Do state laws limiting AI violate the First Amendment?
A law that bans AI or deepfakes entirely would violate the First Amendment. If challenged in court, the government would likely have to show that the law is necessary to prevent specific harms caused by deepfakes and that a ban is the least restrictive way to protect against those harms. While deepfakes present many concerns, an outright ban is an extreme measure. There are many other ways, short of a ban, to address these concerns.
Many federal and state bills take a more specific approach.
The Protect Elections from Deceptive AI Act was introduced by a bipartisan coalition of senators in September 2023. If passed, it would prohibit distribution of “materially deceptive” deepfakes relating to federal candidates or issues.
According to the Business Software Alliance, more than 400 AI-related bills were introduced at the state level in 2023. Some raise more First Amendment questions than others.
Some bills would ensure that users know when and how AI is used by requiring a watermark or other visible icon on deepfakes. Such “compelled disclosure” often violates the First Amendment. To align with the First Amendment, laws compelling disclosure would need to apply in more specific situations than to all AI-produced content. For instance, if the goal of the bill is to avoid misinformation that might sway an election, the bill could apply only during election season. At least 39 states are considering a bill of this type. Laws with civil, not criminal, penalties would be the soundest from a First Amendment perspective.
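Complying with such a disclosure rule would be technically trivial for creators. Here is a minimal sketch, assuming the Pillow imaging library, of stamping a visible disclosure label on an image; no bill specifies exact code, so the file names and label wording are illustrative assumptions.

```python
# Sketch of a visible "AI-generated" disclosure label of the kind some
# bills contemplate, using Pillow. File names and text are assumptions.
from PIL import Image, ImageDraw

img = Image.open("generated_portrait.png").convert("RGB")  # hypothetical file
draw = ImageDraw.Draw(img)

# Draw a solid banner along the bottom edge, then the disclosure text.
banner_height = 28
draw.rectangle(
    [0, img.height - banner_height, img.width, img.height],
    fill=(0, 0, 0),
)
draw.text(
    (8, img.height - banner_height + 6),
    "AI-GENERATED CONTENT",
    fill=(255, 255, 255),
)

img.save("generated_portrait_labeled.png")
```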
The most First Amendment-friendly bills tend to focus on how the government uses AI or protects consumers and the public. Some of these state laws and proposed bills:
- Create statewide working groups to ensure that government use of AI occurs in collaboration with private stakeholders.
- Protect people from unauthorized collection and use of private information and data by AI.
- Require that AI systems do not discriminate and do not contribute to different treatment of people based on race, color, ethnicity, sex, religion, or disability in areas such as education, housing and employment.
Why might we want to protect deepfakes?
Not all deepfakes are harmful. Some are even helpful. Among the purposes deepfakes can serve are:
Creating movies, video games and entertainment
Hollywood has long used deepfakes in movies. So has the video game industry. The use of green screen or redubbing in post-production is now commonplace for fictional content. Deepfakes can also be used for action sequences that might have put actors or stunt doubles in danger, making productions more efficient and safer.
Computers can also create content faster than humans can. Engineers can rapidly create a library of voices and likenesses for use in games, movies and educational videos. Over time, production expenses go down, meaning prices for the finished products can decrease as well.
And sometimes the deepfake is the entertainment. Deepfakes are often satire, like a viral video that swapped Dwayne “The Rock” Johnson’s face onto Dora the Explorer.
Virtual teaching tools
For instance, medical students can now explore a deepfake “corpse” in anatomy class.
Israeli company Canny AI made a video that showed a fake Mark Zuckerberg bragging about manipulating data stolen from Facebook users. But the purpose was only partially to satirize Zuckerberg; it was also to spark a discussion about Facebook’s data policies and urge the company to do better. Another Zuckerberg deepfake was inserted into an advertisement advocating for more oversight of the technology industry.
Deepfakes are even used to combat deepfakes. Actual footage of Richard Nixon was combined with a synthesized delivery of the contingency speech written in case the moon landing failed. The resulting deepfake still tricked some viewers into thinking it was real – and the video then became the basis of a teaching tool.
Media ethics
The growth of deepfakes has led some journalists to be more transparent to avoid public skepticism. These journalists take extra steps to explain how they captured an image or video or provide additional documentation of authenticity.
How can people be aware of possibly fake or manipulated video?
Identifying deepfakes requires the same principles of media literacy as evaluating any other content. Approaching audio and video content with a degree of skepticism before accepting it as real and sharing it with others can help prevent being duped.
In addition to researching the content’s source and verifying producers as real and trusted, some deepfake-specific tips include:
- Focus on the face. The mannerisms of the “person” in the video may be inconsistent. They may blink too much – or too little (a sketch of an automated blink check follows this list). Or perhaps they look just a little too perfectly airbrushed – or too wrinkly.
- Listen to the audio. The audio may also betray a deepfake video. The person’s voice may not match up with their appearance. Or the audio and video won’t sync up at all.
- Look at the lights. Deepfake lighting, especially shadowing, may be inconsistent – or nonexistent. Colors may bleed or blend in a strange way.
- Reflect on reflections. Reflections in a mirror or window in deepfake videos may seem warped – or nonexistent.
- Details matter. In deepfakes, things like jewelry, buttons, shirt patterns, teeth and hair might be out of place or weirdly inconsistent.
- Examine the edges. Deepfakes often show flickering around objects, especially where a person blends with the background, which indicates that the background has been changed or artificially created. The same would be true of flickering where a person’s head meets the rest of their body.
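Some of these cues can even be checked programmatically. Below is a rough sketch, in Python, of an automated version of the blink tip above: it tracks the eye aspect ratio (EAR) frame by frame and estimates a blink rate. It assumes the OpenCV, dlib and SciPy libraries plus dlib’s separately downloaded 68-point landmark model; the 0.21 threshold and the file names are illustrative assumptions.

```python
# Rough sketch: count blinks via the eye aspect ratio (EAR) and compare
# the blink rate to the human norm. Threshold and file names are assumptions.
import cv2
import dlib
from scipy.spatial import distance

def eye_aspect_ratio(eye):
    # Ratio of the eye's vertical openings to its width; it dips during a blink.
    a = distance.euclidean(eye[1], eye[5])
    b = distance.euclidean(eye[2], eye[4])
    c = distance.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical input
blinks, eyes_closed, frames = 0, False, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        # Landmarks 36-41 and 42-47 outline the left and right eyes.
        ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
        if ear < 0.21:            # commonly used, tunable blink threshold
            if not eyes_closed:
                blinks += 1
            eyes_closed = True
        else:
            eyes_closed = False

minutes = frames / (cap.get(cv2.CAP_PROP_FPS) or 30) / 60
rate = blinks / minutes if minutes else 0.0
print(f"{blinks} blinks, about {rate:.1f} per minute (humans average roughly 15-20)")
```

No single cue is conclusive; a heuristic like this only flags a clip for closer human review.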
What’s the bottom line on deepfakes and the First Amendment?
Deepfakes present the same challenges that all new technologies have presented: The technology outpaces our ability to keep up and is manipulated for bad purposes. But new communication technologies, from the printing press to deepfakes and their cheap fake cousins, are protected by the First Amendment because they can produce new, innovative and valuable speech that drives us forward.