Use AI and machine learning to make moderation easier

Machine-learning AI should suspend racist and sexist players for what they say in TEXT CHAT and VOICE CHAT. Configure it to produce more false negatives than false positives, and run the experiment.
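A minimal sketch of what "more false negatives than false positives" means in practice: you raise the classifier's decision threshold so it only acts on high-confidence cases. The function name, threshold value, and scores below are all invented for illustration, not anything Blizzard has described.

```python
# Hypothetical sketch: biasing a toxicity classifier toward false negatives
# by raising its decision threshold. Names and numbers are made up.

def flag_for_action(toxicity_score: float, threshold: float = 0.95) -> bool:
    """Only act when the model is very confident the message is toxic.

    A high threshold trades recall for precision: some toxic messages
    slip through (false negatives), but few clean ones are punished
    (false positives).
    """
    return toxicity_score >= threshold

# Example model outputs (probability that a message is toxic):
scores = [0.30, 0.70, 0.96, 0.99]
print([flag_for_action(s) for s in scores])  # -> [False, False, True, True]
```

The trade-off is one number: lower the threshold and you catch more offenders but punish more innocents; raise it and the reverse.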

AI has given us ChatGPT; it can generate art, read X-rays, and run spam filters. It should be able to do this effectively as well. If there aren’t enough humans to do this, then why aren’t we using robots and technology to handle the volume?

1 Like

I’m imagining a Blizzard employee sitting there for eight hours a day saying random racial slurs over and over again to see if the accuracy and precision are above a certain threshold.

2 Likes

Blizzard was literally working on this for Overwatch three years ago, and it was even rolled out.

https://www.pcgamer.com/overwatch-toxicity-has-seen-an-incredible-decrease-thanks-to-machine-learning-says-blizzard/

“Part of having a good game experience is finding ways to ensure that all are welcome within the worlds, no matter their background or identity,” Brack says in the video. “Something we’ve spoken about publicly a little bit in the past is our machine learning system that helps us verify player reports around offensive behavior and offensive language.”

It was talked about heavily at this GDC.

https://schedule2019.gdconf.com/session/building-abusive-chat-detection-systems-with-deep-learning/861266

It was given by Ryan Brackney (Sr. Data Scientist, Blizzard Entertainment)

Now, he also left Blizzard at the end of 2019, so make of that what you will; it could be that not much has moved on since then. But they did mention work in that space when they talked about Defense Matrix.

See Defense Matrix activated! Fortifying gameplay integrity and positivity in Overwatch 2 - News - Overwatch

For years, our team has used machine learning to detect and prevent disruptive behavior, cheating, and disruptive text chat. Our detection methods leverage multiple systems, including your in-game reports, to identify behavior that drives down the quality of the in-game experience. We’re expanding our detection capabilities by introducing audio transcriptions in the following weeks after launch.

2 Likes

Slippery slope. It starts off as “racist and sexist folks”, and then the tech sits around waiting for another problem to tackle, like “bald people” or “anti-cat folks” (a big deal for the cat café Team 4), and pretty soon it inflates into all kinds of foul-speech policing, free-speech violations, data-privacy and invasiveness concerns, etc.

Those are non-adversarial and/or static feedback environments.

Much of AI just breaks down when actors are actively spoofing it (spam filtering is comparatively static; there isn’t much of a feedback loop). Humans in the bnet ecosystem will find ways to create noise, obfuscate, and evade. I’ve written essays on why data-based MMR rankings can’t work as well as random SR systems.

There are adversarial examples where ML classifiers are straight-up defeated, or made so sensitive that they effectively DoS everyone with false positives, and that is just in the domain of voice recognition. There are limit laws on what can be learned, recognized, and spoofed that apply universally.
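To make the evasion point concrete, here is a toy sketch of why naive text filtering is trivially spoofable. The blocklist token is a placeholder, not a real slur, and the filter is deliberately simplistic; real systems are more robust, but the cat-and-mouse dynamic is the same.

```python
# Hypothetical sketch: a keyword blocklist catches the plain token
# but misses trivial obfuscations. "slurword" is a placeholder.

BLOCKLIST = {"slurword"}

def naive_filter(message: str) -> bool:
    """Return True if any whitespace-separated token is blocklisted."""
    return any(tok in BLOCKLIST for tok in message.lower().split())

print(naive_filter("you are a slurword"))      # True: caught
print(naive_filter("you are a s1urword"))      # False: digit swap evades it
print(naive_filter("you are a s l u r word"))  # False: spacing evades it
```

Each patch the defender ships (normalizing digits, collapsing spaces) invites the next obfuscation, which is exactly the adversarial feedback loop that static domains like spam filtering mostly avoid.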

1 Like

But usually it doesn’t go like that.

They have had it for 3-4 years, and I’ve not been banned for cat jokes yet.

Humans in the bnet ecosystem will find ways to create noise, obfuscate, and evade.

Unlikely for most of them.

1 Like

They already scan text and voice chat after you make a report of chat abuse.

Then use it and suspend people. I have reported people and have screenshot and video evidence, which they don’t ask for. I don’t get a notification that it worked, which is disappointing.

They need to quit being wimps. Have a dedicated team (not fluff marketing like Defense Matrix), and if you send one unprovoked mean message with little ambiguity in context, you will NEVER have voice or text chat again for the rest of your life. Preferably, all reported people would only play with other reported people.

2 Likes

Oh no, it’s already a snowball on full slip.

Defense Matrix isn’t the tip of the iceberg. It’s the entire avalanche.

1 Like

Once it gets out that Blizzard is using something like this, people will spam chat with all sorts of things that just make the AI racist.

1 Like

The AI doesn’t learn how to be human. It learns what indicates a disallowed behavior and then takes the appropriate action.