Fighting Against Algospeak

Lesbians Who Tech & Allies
4 min read · May 10, 2022


The coded language developed to beat the algorithms is changing our language, and not for the better.

With the news of Elon Musk buying Twitter, the hot topics of free speech and content moderation have made their way into the headlines again. Some feel that his promise of “open source code” will bring transparency to one of the world’s top social networking platforms — others fear that Musk may view “free speech” as an online free-for-all, facts be damned.

But what isn't speculation and is a ✨ very real ✨ problem impacting the cyber-sphere is how users are employing code words to bypass community guideline filters.

It’s a tactic that’s been coined “algospeak.” Algospeak is a language barrier tactic employed by users when talking about a topic that is closely watched or highly monitored by content moderators (and bots) of the respective platforms they are posting on. Algospeak both helps communities to avoid being silenced — and forces communities to weather the storm of online hate by preventing the hate speech from being monitored or flagged by moderators.

It is difficult to pinpoint exactly when algospeak was first deployed online, but it picked up popularity in February 2017, when Felix Arvid Ulf Kjellberg, aka "PewDiePie", the most-subscribed YouTuber at the time, came under fire for posting videos that YouTube deemed anti-Semitic hate speech. Following this controversy, one would think that PewDiePie would lose his channel and be banned from the YouTube platform. But banning him would have meant risking the loss of PewDiePie's 111M+ subscriber base.

To keep advertisers away from hate speech without losing large audiences, YouTube's solution was to program artificial intelligence bots to target language that could trigger demonetization or account suspension — and it didn't take long for content creators to figure out which words the bots were looking for. Rather than risk their channels or their livelihoods, YouTubers started using code terms, or algospeak, in place of the flagged words.

Since then, most social media platforms have adopted the same practice of content moderation. Flagging hateful or controversial keywords is the easiest, least expensive way to "moderate" content — but make no mistake, the truly harmful content isn't being caught. Instead, homophobes, transphobes, and racists alike are skirting the algorithm by using algospeak to speak their hate in code.

Algospeak is now used by many social media users to talk about certain topics without fear of their posts being taken down. But in the process, many of the LGBTQIA+ community's own words are being taken down.

Lesbian :: “Le dollar bean” or “le$bian”

As proud queer folks mobilizing the tech industry, you know we were troubled when the algorithm stopped letting us say "lesbian" and relegated us to saying "le dollar bean" or "le$bian" in its place. This happens on TikTok, Instagram, and Twitter. In fact, many of our emails go directly to spam because of the name "Lesbians Who Tech & Allies": the algorithm automatically flags the word "lesbian" and labels it adult content. In turn, the opportunity to build community is taken away. In its place, internet trolls and digital hate crusaders can use algospeak to spread false information and openly spread hate. Anything that would be flagged as harmful when said in plain terms is now fair game thanks to algospeak.
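To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of exact-match keyword filter described above. The blocklist, the is_flagged function, and the sample posts are all illustrative inventions, not any platform's actual moderation code.

```python
# Hypothetical illustration only; not any platform's real moderation code.
# A naive exact-match blocklist of the kind that algospeak trivially defeats.

BLOCKLIST = {"lesbian"}  # words this toy filter treats as "adult content"

def is_flagged(post: str) -> bool:
    """Flag a post if any blocklisted word appears as an exact token."""
    tokens = (token.strip(".,!?") for token in post.lower().split())
    return any(token in BLOCKLIST for token in tokens)

posts = [
    "Proud lesbian in tech!",          # community speech: gets flagged
    "Proud le$bian in tech!",          # same meaning, coded: slips through
    "le dollar bean meetup tonight",   # algospeak phrase: slips through
]

for post in posts:
    print(f"{post!r:38} -> {'FLAGGED' if is_flagged(post) else 'allowed'}")
```

The asymmetry is the whole problem: people using the word in good faith get flagged, while anyone willing to swap in a dollar sign or a coded phrase sails through untouched.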

Media platforms have a duty to uphold free speech. When Elon Musk takes control of Twitter, we ask that he not ban the words of our community — we ask that he ban algospeak. It's time for content moderators to truly moderate content and prevent online hate while allowing digital communities to thrive.

We encourage social platforms to invest in better AI and more human capital to moderate their platforms. Important topics need to be addressed with the correct words and accurate terminology. Our community should not be relegated to back-door discussions while hate-fueled algospeak pushes us to the confines of the interwebs.

Allow us to use the hard-won words of our community, and take away the shield of algospeak that is used to do harm, organize hate, and harass marginalized users online.

Read More About Algospeak & Algorithm Bias ::

⚡️ Algorithms and Bias, Explained

⚡️ How AI Systems Undermine LGBTQ Identity

⚡️ Why Artificial Intelligence is Set Up to Fail LGBTQ People

⚡️ I’m a Trans Woman — Here’s Why Algorithms Scare Me

⚡️ To Stop Algorithm Bias, We First Have to Define It

Love the newsletter? Subscribe to the byte here!


Written by Lesbians Who Tech & Allies

Lesbians Who Tech & Allies is a community of LGBTQIA+ women, nonbinary, and trans folks in and around tech (and the people who love us).
