Riot Games is beginning its crackdown on disruptive behavior in the form of in-game voice chat moderation. Following Riot Games' statement earlier this year on combating toxicity and disruptive behavior in its games, the developer recently announced upcoming testing of Valorant's new in-game voice chat moderation in a blog post on June 24, 2022.
For Riot, voice evaluation would help "collect clear evidence that could verify any violations of behavioral policies before we can take any action" and give players clarity as to why a particular action resulted in a penalty. This development was made possible by an update to Riot Games' Terms of Service, which now allows the recording and evaluation of in-game voice communications when a report of disruptive behavior is submitted.
Tests for the new initiative start on July 13, in North America and in English only, to train Riot's language models. Once the technology is ready, Riot plans a beta launch later this year. Before it reaches that point, Riot wants to be confident in the tech's effectiveness and be able to account for any mistakes that occur. Once the beta tests begin, Riot will start evaluating voice chats for disruptive behavior. The new voice chat moderation won't be exclusive to Valorant: as the blog post alludes, Riot plans to bring the moderation system to its other games in the future.
Riot is aware of the hiccups this new tech could bring, noting the many growing pains it will likely face in development. Even so, with the promise of "a safer and more inclusive environment" for players, Riot deems the technology worth its challenges. Until then, Riot will continue developing it in the hope of reducing toxicity and encouraging a healthy social environment for its player base.