How to Spot a Deepfake: Tips, Tools and More
There’s no shortage of people or news articles discussing artificial intelligence and the role of deepfakes in spreading false, misleading or out-of-context information.
Deepfakes are not a new concept, and there can be positive and helpful uses for AI images, video, audio and text. What is new is the readily available technology to make deepfakes, the number of people who can do so (essentially everyone with an internet connection) and the speed at which they spread.
The stakes for deepfake technology and its ability to fool people are high — especially in an election year. Therefore, understanding how to spot a deepfake is critical to ensuring people have accurate and relevant information so they can make informed decisions.
In this post, we highlight ways to detect and debunk deepfakes and offer methods for thinking about the information you see — and share — every day.
Discover how to spot a deepfake with these tips, tools and games
1. Stop. Then think. Then act.
Vanilla Ice famously asked everyone to “Stop, collaborate and listen,” and that’s key to spotting deepfakes and AI-generated content.
When seeing information and content online, you can:
- Stop sharing before you know more.
- Collaborate with verification tools and online educational resources.
- Listen to your gut and the advice of researchers and fact-checkers.
The SIFT method from University of Washington information researcher Mike Caulfield has been around for years to help people refrain from sharing bad, misleading and false information like "fake news" articles online. You can apply the SIFT tenets to spotting and sharing deepfakes:
- Stop.
- Investigate the source.
- Find better coverage.
- Trace claims, quotes and media back to the original context.
The point of the SIFT method, and of every other approach to spotting deepfakes, is not to stop them from ever being created. It's to stop you from being fooled by them and from spreading them further.
More than any method can teach, the best defense against deepfakes is your own intuition and critical eye. Don’t underestimate the value of your gut instinct and simply stopping to ask yourself whether something seems “off” with the video, image, audio or text in front of you.
Arming yourself with a process and the knowledge to think critically about AI-generated content and deepfakes is the first step in deciding whether sharing a piece of content with your close-knit family or your vast social network online is a good idea.
2. Use deepfake and AI detection tools
There are many resources for checking images, video, audio and text to help determine if they are AI-generated. Some tools to try include:
1. Google Reverse Image Search and TinEye
- Purpose: Image verification.
- What is it? A basic way of seeing the history of an image online, where else it appears, and possible ways it’s been manipulated.
- Who's behind it? Google has reverse image search built into its Chrome browser: just right-click on any image or visit images.google.com. TinEye is a private Canadian company that launched its reverse image search engine in 2008. (For a way to automate these lookups, see the sketch after this list.)
2. InVID
- Purpose: Image and video verification.
- What is it? A free tool with a web browser plug-in for detecting AI-generated images and videos.
- Who's behind it? A group of researchers at European agencies and universities.
3. DeepFake-O-Meter
- Purpose: Image, video and audio verification.
- What is it? An open-source tool that gives the probability that an uploaded file is fake. Using the tool requires a free account and login.
- Who's behind it? Researchers at the Media Forensics Lab at the University at Buffalo, the State University of New York.
4. WinstonAI
- Purpose: Text and image verification.
- What is it? A tool that lets users paste in text to detect whether it was likely produced by ChatGPT or another text-generating tool. It also includes image detection. Premium services require a paid subscription.
- Who's behind it? A private company. Many other AI text and plagiarism detection tools are available as well.
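If you check images often, you can script the reverse image searches from tool No. 1 instead of clicking through menus. Below is a minimal Python sketch. One caveat: the query-string patterns for Google Lens and TinEye are assumptions based on how their public search pages currently accept image URLs; neither is a documented API, and either could change without notice.

```python
# Minimal sketch: open reverse image searches for a publicly hosted image.
# Assumption: the "uploadbyurl" and "url" query patterns below reflect the
# current public search pages for Google Lens and TinEye. Neither is an
# official API.
import webbrowser
from urllib.parse import quote

def reverse_search(image_url: str) -> None:
    """Open Google Lens and TinEye reverse image searches in the default browser."""
    encoded = quote(image_url, safe="")
    webbrowser.open(f"https://lens.google.com/uploadbyurl?url={encoded}")  # assumed pattern
    webbrowser.open(f"https://tineye.com/search?url={encoded}")            # assumed pattern

reverse_search("https://example.com/suspicious-image.jpg")
```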
These are just a few of the many resources available to help detect AI-generated content and spot deepfakes. But as with any tool, they aren't guaranteed to work every time, and even the best deepfake detection tools can misjudge content.
What they do provide is a way to slow down the process of spreading deepfake content and help you question whether it’s wise to share.
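For readers comfortable with a little code, most of these detection services follow the same general pattern: upload a file, get back a probability. Here is a rough Python sketch of that pattern. Everything vendor-specific in it (the endpoint URL, the "media" field name and the "ai_probability" response key) is a hypothetical placeholder, not the actual API of any tool listed above.

```python
# Illustrative sketch of the upload-and-score pattern that detection
# services generally follow. The endpoint, request field and response
# key below are hypothetical placeholders, not any real vendor's API.
import requests

def ai_probability(path: str, api_url: str, api_key: str) -> float:
    """Upload a media file to a (hypothetical) detection service and
    return the reported probability that it is AI-generated."""
    with open(path, "rb") as f:
        response = requests.post(
            api_url,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"media": f},  # hypothetical field name
            timeout=60,
        )
    response.raise_for_status()
    return float(response.json()["ai_probability"])  # hypothetical key

# Treat the score as a signal to slow down, not a verdict: a mid-range
# probability means "unverified," so look for corroborating coverage
# before you share.
```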
3. Play deepfake and AI detection games
The following tools are meant to help you practice how to spot a deepfake using what’s available to you: your critical thinking and analytical skills. They’re made by researchers, companies and news organizations working to root out and slow the spread of deepfakes that aim to deceive.
1. Which Face is Real?
- What is it? Compare two faces side by side. One is a real human and the other is generated by AI. Click on the real one. It's sometimes harder than you think, but it trains your eye for what a generated face looks like when set directly next to a real person's photo.
- Who's behind it? University of Washington researchers Jevin West and Carl Bergstrom.
2. Spot the Troll
- What is it? A game that presents real items from social media and asks you to decide whether each came from a legitimate person or organization's account or from an internet troll. Playing helps you learn the common ways troll farms, sometimes sponsored by foreign governments, use fake content to confuse people online.
- Who's behind it? Clemson University’s Media Forensics Hub.
3. Odd One Out
- What is it? A timed game to spot the AI-generated artwork among four options. One is fake and the others are real pieces of art created by humans. It is ideal for kids in school settings.
- Who's behind it? Google’s Arts and Culture staff in conjunction with the Google News Initiative.
4. Detect Fakes
- What is it? Choose whether five images presented to you are real or AI-generated. With each question you can share how confident you are in your answer. At the end you can see how you did compared to other users and share why you thought the images were AI-generated.
- Who's behind it? Researchers Negar Kamali, Angelos Chatzimparmpas, Jessica Hullman and Matt Groh at Northwestern University’s Kellogg School of Management.
5. Spot the AI Images
- What is it? A side-by-side comparison of a real photo or artwork created by a human with one generated from an AI text prompt. The resource also includes additional tips and tools for spotting deepfakes and other AI-generated content.
- Who's behind it? PCMag.
6. Can You Trust Your Eyes?
- What is it? A resource for better understanding and detecting misinformation and disinformation, particularly in an election year. It includes five questions asking you to decide whether images are real or fake.
- Who's behind it? Axios, a national news outlet that started in 2017 with this ethos: “The world needed smarter, more efficient coverage of the topics shaping the fast-changing world. We pledged to put our audience first, always.”
7. Test Yourself: Which Faces Were Made by AI?
- What is it? Test yourself by selecting which faces among multiple options were made by AI. The results explain how the images were generated and offer tips on how to spot deepfakes and AI-generated images in the future. (May require a subscription or a free login with email.)
- Who's behind it? The New York Times.
The legality behind deepfakes
You might wonder, if deepfakes and AI-generated images and videos are a problem, why can’t the government just ban all of it? The answer is that such content, even if unclear in its origins or intent, could be protected by the First Amendment.
Like other forms of speech, the images and videos you see online are likely protected by the First Amendment, which prevents the government from punishing the creator or publisher, apart from narrow exceptions like obscenity or defamation. The government might also be able to regulate a deepfake if it can show that a specific harm would result from it and that the punishment is the most narrowly tailored way to avoid that harm. A full ban on deepfakes is likely too broad to meet this standard.
Similarly, people have a First Amendment right to receive, view, interact with and share all kinds of information online, including AI-generated images, video, audio and text. But before sharing, you might ask yourself: Just because you may have a legal right to do so, should you share potentially false or misleading information?
That’s a question everyone must answer for themselves — one that no tool can detect and no AI can generate for you.
Scott A. Leadingham is a Freedom Forum staff writer.
Editor's note: Some of the tools discussed here have been taught through the Google News Initiative in partnership with the Radio Television Digital News Association and Society of Professional Journalists, for which the author has been a paid trainer.