There are some things that are obviously First Amendment issues and there are others that just as obviously aren’t. Did you get arrested for criticizing the mayor of your town? That’s a First Amendment issue. Did you get kicked out of your book club because you said Malcolm Gladwell was overrated? That’s harsh, but it’s not a violation of your constitutional rights. The First Amendment prevents the government from censoring or punishing your speech, but it doesn’t apply to private organizations.
There’s another category of issues that are technically outside the scope of the First Amendment but that have an outsized impact on what some might consider the purpose of the First Amendment: ensuring that citizens can access information about the world around them, hear different viewpoints and share their ideas. Call them First Amendment-ish issues.
Take the decisions that Facebook, YouTube and Twitter regularly make about what types of content to allow on their platforms. As private companies, they’re free to create their own policies, censor posts and ban users as they see fit (that is, in fact, their First Amendment right). But, as the watchdog group OnlineCensorship.org puts it, “We treat these platforms as a ‘public sphere,’ using them to discuss issues both controversial and menial, to connect with friends far and near, and to engage in activism and debate. … We have seen before how powerful social media platforms can be in inspiring protests, fostering political movements and even influencing elections.” Now that so many of our conversations take place online on these platforms, the companies behind them have the power to influence the public discourse in a way that our government can only dream of.
The First Amendment-ish implications become even thornier when you consider fake news and misleading content. Censoring a fake news item may run counter to the spirit of free speech, but allowing it to spread undermines the purpose of a free press. Freedom of the press doesn’t exist to protect journalists so much as to benefit the public at large. It exists because the framers of our Constitution thought the public had the right to know about what the government and other powerful institutions were doing. When there’s so much confusion about what information you can trust and what you can’t, that whole system is compromised.
As technology grows more sophisticated, the potential for this confusion will only increase. Some developments, like deep fake video technology, will make it harder to debunk misinformation. Others, like artificial intelligence-enabled chatbots, will make it harder to identify real political expression. Today’s fairly simplistic chatbots already have the ability to skew online discussions (it’s estimated that a fifth of all tweets about the 2016 presidential election were published by bots). Tomorrow’s bots will be able to take things to the next level.
According to The Atlantic’s Bruce Schneier, “Soon, AI-driven personas will be able to write personalized letters to newspapers and elected officials, submit individual comments to public rule-making processes and intelligently debate political issues on social media.” While this army of civically engaged bots might serve as a shining example to the rest of us, Schneier points out that their capacity to drown out any actual debate on the internet will pretty much obliterate the marketplace of ideas that a functioning democracy requires.
This is not a call to ban these technologies so much as a call to keep the principles of free expression in mind as we develop regulations for them. As a society, we need to take a stand on clear-cut First Amendment matters, but we should also keep a close watch on those First Amendment-ish issues; to ignore them is to ignore the reason we have a First Amendment in the first place.
Lata Nott is a fellow for the First Amendment at the Freedom Forum. Follow her on Twitter at @LataNott.