Twitch’s New AutoMod Tool Helps Filter Out Harassment

A few days ago, the game streaming service Twitch introduced a new tool that helps streamers filter out inappropriate chat messages. According to several streamers who spoke to Kotaku, the new tool has been a “game changer.” AutoMod is Twitch’s new tool aimed at assisting with chat moderation and broadcast support. Last month, we reported on a study that found women streamers on Twitch face significantly more objectification than their male counterparts. The study, conducted by the Indiana University Network Science Institute over a five-year period, concluded what was already fairly obvious but nonetheless appalling.

The growing problem even prompted Twitch’s Senior Vice President, Matthew DiPietro, to release a public statement addressing streamers’ concerns.

We’re dedicated to improving our policies, products, and features to offer broadcasters the tools and flexibility to manage their channels how they see fit and to protect themselves against harassment and other inappropriate behavior.

It looks like Twitch made good on that promise this Monday, when it released AutoMod. The new tool uses machine learning and natural language processing algorithms to filter out inappropriate messages. When someone sends a message, AutoMod receives it first; if the tool flags the message, it is held back from chat until a channel moderator reviews it and either approves it for other viewers to see or deletes it. AutoMod can also detect misspellings of prohibited words, inappropriate strings of emotes, and other symbols or characters that are often used to evade filtering.
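The hold-for-review flow described above can be sketched roughly as follows. This is a hypothetical illustration, not Twitch’s actual implementation: the machine learning classifier is stubbed out with a simple keyword check, and all names are invented for the example.

```python
# Hypothetical sketch of AutoMod's hold-for-review flow (not Twitch's real code).

PROHIBITED = {"badword", "slur"}  # stand-in for the real ML/NLP classifier


def is_flagged(message: str) -> bool:
    """Stand-in classifier: flag a message if it contains a prohibited word."""
    return any(word in message.lower().split() for word in PROHIBITED)


class Channel:
    def __init__(self):
        self.chat = []   # messages visible to viewers
        self.held = []   # messages awaiting moderator review

    def send(self, message: str) -> str:
        """Route a new message: post it, or hold it for review if flagged."""
        if is_flagged(message):
            self.held.append(message)
            return "held for review"
        self.chat.append(message)
        return "posted"

    def review(self, message: str, approve: bool) -> None:
        """A channel moderator approves a held message into chat, or drops it."""
        self.held.remove(message)
        if approve:
            self.chat.append(message)


channel = Channel()
channel.send("hello everyone")  # posted immediately
channel.send("you badword")     # held until a moderator decides
```

The key design point the article describes is that flagged messages are never shown to viewers before a human moderator signs off on them.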

But before you ask what exactly counts as “inappropriate” language, Twitch has left that decision entirely to broadcasters themselves. Broadcasters can establish their own baseline for what they consider acceptable language and get around-the-clock chat moderation. There are four levels of filtering that streamers can select from to configure their channel, based on four categories: identity, sexual language, aggressive speech, and profanity.

AutoMod language categories:

Identity Language: Words referring to race, religion, gender, orientation, disability, or similar. Hate speech falls under this category.
Sexually Explicit Language: Words or phrases referring to sexual acts, sexual content, and body parts.
Aggressive Language: Hostility towards other people, often associated with bullying.
Profanity: Expletives, curse words, and vulgarity. This filter especially helps those who wish to keep their community family-friendly.

What AutoMod catches at each level:

Level 1: Removes hate speech only.
Level 2: Also removes sexually explicit and abusive language.
Level 3: Removes even more hate speech and sexual language.
Level 4: All of the above, plus profanity and mild trash talk.
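The cumulative levels above can be modeled as a simple mapping from level to the set of categories filtered. The category labels here are our own shorthand for illustration; Twitch exposes these levels as channel settings, not as a public API.

```python
# Illustrative model of AutoMod's cumulative filtering levels
# (labels are invented shorthand, not Twitch identifiers).

CATEGORIES_BY_LEVEL = {
    1: {"identity"},
    2: {"identity", "sexual", "aggressive"},
    3: {"identity", "sexual", "aggressive"},  # same categories, stricter thresholds
    4: {"identity", "sexual", "aggressive", "profanity"},
}


def filtered_categories(level: int) -> set:
    """Return the categories AutoMod filters at a given level (empty if off)."""
    return CATEGORIES_BY_LEVEL.get(level, set())


# A higher level never filters fewer categories than a lower one.
assert all(
    filtered_categories(lo) <= filtered_categories(hi)
    for lo in range(1, 5)
    for hi in range(lo, 5)
)
```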

When you send a message that AutoMod flags, you’ll see a notification that it is being held for review, and you’ll be notified again once it has been approved or denied.

The new tool is currently available on Twitch in English, while beta versions are available in Arabic, Czech, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, and Turkish. For more information on how to enable AutoMod, check out Twitch’s website here.

Ramiro Gomez: Sci-fi junkie, film fanatic, book nerd, and (on a good day) a decent gamer.