With Elon Musk in the news trying to buy Twitter, the subject of content moderation and censorship is hot again. Recently, social media saw the birth of a whole new language dubbed "Algospeak," which employs code words and phrases to keep posts from being removed, demoted in rank, or demonetized by automated content moderation systems.
Early internet users bypassed word filters in chat rooms and forums with alternative spellings or "leetspeak," swapping letters for characters that resemble them in shape. Now users are bending their language further as discussions pass through algorithmic content-delivery systems. For example, instead of "dead," it is becoming common to say "not alive." When encouraging fans to follow them elsewhere, people post "blink in lio" for "link in bio."
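The classic leetspeak trick can be sketched in a few lines. This is a purely illustrative example, not any platform's real filter or encoder; the substitution table below is one common convention among many.

```python
# A typical leetspeak substitution table: each letter maps to a
# visually similar glyph (digit or symbol). Tables like this vary;
# this one is just an illustrative convention.
LEET = {"a": "4", "e": "3", "i": "1", "l": "|", "o": "0", "s": "5", "t": "7"}

def to_leet(text: str) -> str:
    """Replace each letter with its look-alike glyph, leaving other
    characters untouched -- the same move early chat users made to
    slip past exact-match word filters."""
    return "".join(LEET.get(ch.lower(), ch) for ch in text)

print(to_leet("elite"))  # -> 3|173
```

Because the output no longer matches the banned string character-for-character, a filter that compares exact words sees nothing to flag, even though any human reader decodes it instantly.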
Partly accelerated by the pandemic, more and more users on social media are twisting their tongues as discussions of important events are increasingly filtered. Algorithmic delivery systems seem to have had an unprecedented impact, giving rise to a new, internet-driven form of Aesopian language: one that conveys an innocent meaning to outsiders while carrying a concealed meaning for informed insiders.
Social media platforms complain that although they try to uphold free speech, practical issues prevent them from doing so. The world has changed. In the past decade, the internet went from a "frontier" where people would go "to be free" to the place where the entire world lives. It has become the main battlefield for all our culture wars. Moderating it increasingly means upsetting everyone, an enormous task that can only be performed by AI-driven algorithms that make inhumane, sometimes even dystopian judgments that cannot be appealed.
Ángel Díaz of UCLA School of Law, who studies technology and racial discrimination, was quoted in a recent Washington Post article as saying: "The truth is that tech companies have been using automated tools to moderate content for a very long time, and even though this is touted as complex machine learning, it's often just a list of words they think is problematic."
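The kind of keyword blocklist Díaz describes can be sketched in a few lines. This is a hypothetical toy, not any platform's actual system; the blocklist contents and the `is_flagged` helper are invented for illustration.

```python
# Hypothetical blocklist filter: a bare list of "problematic" words,
# matched token by token. Real systems are fancier, but Díaz's point
# is that many reduce to roughly this.
BLOCKLIST = {"dead", "kill"}

def is_flagged(post: str) -> bool:
    """Flag a post if any whitespace-separated token, lowercased and
    stripped of trailing punctuation, appears in the blocklist."""
    tokens = post.lower().split()
    return any(tok.strip(".,!?") in BLOCKLIST for tok in tokens)

print(is_flagged("my favorite character is dead"))       # -> True
print(is_flagged("my favorite character is not alive"))  # -> False
```

The second post sails through untouched: exact-match word lists are trivially sidestepped by the very substitutions Algospeak is built on, which is the failure mode the next quotes describe.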
The article goes on to quote Evan Greer, director of Fight for the Future, a nonprofit digital rights advocacy group, who said that trying to eliminate certain words on platforms is misguided.
"One, it doesn't actually work," she said. "People who use platforms to organize real harm are pretty good at figuring out how to get around these systems. Two, it causes collateral damage to legitimate speech."
Moderating human speech at the scale of billions of people across dozens of languages, while grappling with humor, sarcasm, local context, and slang, cannot, Greer argues, be done by simply down-ranking certain words.
"I think this is a good example of why aggressive moderation will never be a real solution to the harms we see from the business practices of big tech companies," she said. "You can see how slippery this slope is. Over the years, we've seen more and more misguided demands from the general public for platforms to remove more content quickly, no matter the cost."
As Algospeak becomes more common and enters popular culture, even the substitute words get flagged, forcing users to become ever more creative to avoid getting caught in the filters. Social media is shaping and reshaping our language. It's a never-ending game of cat and mouse.