"We Will Train Machines to Ban Offmeta Heroes!"

So, they will try to detect patterns of false reporting (that is roughly what prediction is). For example, the new system may be more tolerant if the stated reason for the report is an off-meta hero pick. Also, it's oversimplistic to assume a single classifier, since they would likely use a combination. For example (a toy sketch follows the list):
Naive Bayes to detect abusive chat
Frequency and cluster analysis (WNN) to detect spam
Vector autoregressive (VAR) models to detect short- vs long-run relationships
Nearest neighbours to unify those attributes in multidimensional space
Neural networks to predict
A decision tree to make the final binary decision (ban or not)
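
To make that concrete, here is a toy sketch of the combination idea using scikit-learn; every message, label, and side-signal in it is invented for illustration:

```python
# Toy sketch of combining several model outputs, assuming scikit-learn.
# All messages, labels, and side-signals here are made up.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier

# 1) Naive Bayes on chat text (toy training data).
chat = ["you are trash uninstall", "gg well played", "report this idiot", "nice shot"]
label = [1, 0, 1, 0]                       # 1 = abusive, 0 = fine
vec = CountVectorizer()
X_chat = vec.fit_transform(chat)
nb = MultinomialNB().fit(X_chat, label)

# 2) Stand-ins for the other signals (spam frequency, report trends, ...).
rng = np.random.default_rng(0)
spam_score = rng.random(len(chat))         # pretend frequency/cluster output
report_trend = rng.random(len(chat))       # pretend VAR-style trend output

# 3) Stack everything; a decision tree makes the final binary call.
toxicity = nb.predict_proba(X_chat)[:, 1]
features = np.column_stack([toxicity, spam_score, report_trend])
final = DecisionTreeClassifier(max_depth=3).fit(features, label)
print(final.predict(features))             # 1 = ban candidate, 0 = leave alone
```

In practice each signal would come from its own pipeline, but the shape is the same: several weak scores, one final classifier.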

Of course, what I just described is more like how a masochist would implement it, but I believe you get the point: it's not that easy.


Thanks!!!

I work in this area. So I know what the goals are, and what can be achieved in a reasonable time / cost.


They don’t have to go all crazy with it, and I expect they will not.

I doubt they will. If they can clear up a bunch using JUST Bayes plus a bad-word list which they will review, then they will use that.

At least to start with, they don't need a system which catches all the bad cases; they don't even need to catch all that many. They just don't want a lot of false positives.
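
Something like this toy sketch is all that version needs; the word list and the confidence threshold are made up:

```python
# Toy version of "JUST Bayes plus a reviewed word list": only flag when a
# listed word appears AND the model is very confident, so false positives
# stay rare. The word list and threshold are invented.
BAD_WORDS = {"uninstall", "trash", "idiot"}

def flag_for_review(message: str, toxicity_prob: float, threshold: float = 0.9) -> bool:
    has_bad_word = any(word in BAD_WORDS for word in message.lower().split())
    return has_bad_word and toxicity_prob >= threshold

print(flag_for_review("you are trash", 0.95))   # True  -> human review queue
print(flag_for_review("gg well played", 0.95))  # False -> ignored
```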

People who they want to catch will give them PLENTY of opportunities to do so.

That's an understandable concern, but if it has the ability to double-check, and they now have a dedicated human team to assist with it, then I just think we're jumping the gun a little early is all. Plus, did you see that bit at the end about "expanded social features"? A guild system, finally? :o

Well, better to raise awareness than not. Nothing wrong with that.

I’m not sure what this means.

This

Talk about reading what you want to see. First, there is no need for any complicated algorithm to identify off-meta heroes. Any programmer can code that in a few minutes - “If Hero X = Torb, Sym, etc. then do [whatever]”. It says that this algorithm will work together with player reports, not on the basis of player reports.
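
For illustration, that few-minutes version really is just a lookup; the hero list and the down-weighting here are placeholders, not anything Blizzard has described:

```python
# The "few minutes" off-meta check, literally. The hero list and the
# down-weighting factor are placeholders.
OFF_META = {"Torbjorn", "Symmetra"}

def adjusted_report_weight(hero: str, weight: float) -> float:
    """Down-weight reports against off-meta picks instead of auto-actioning."""
    return weight * 0.5 if hero in OFF_META else weight
```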

And in any event, if their idea of justice is majority rule, the current (stupid) automated report system is already doing that job, no need for any complicated algorithms to detect one-tricks since people report them at the drop of a hat anyway.

Honestly, this sounds like run-of-the-mill PR talk to me: "Hey, look, we are doing something using the latest technologies, trust us!". Maybe they are, but maybe they are just trying to reassure us with platitudes. Or this machine learning will be used for simpler cases, like players barely exiting spawn and moving from time to time just to avoid the inactivity kick (a toy heuristic for that is sketched below). I very much doubt it will be used for subjective issues like one-tricking. And if it is, it will probably be used to detect the players who falsely report others (hopefully).
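
The spawn case is the kind of thing a dead-simple heuristic catches. A toy sketch, with every threshold invented:

```python
# Toy heuristic for "moves just enough to dodge the inactivity kick":
# low total distance plus most samples near spawn. Every threshold here
# is invented for illustration.
import math

def looks_afk(positions, spawn, max_dist=50.0, spawn_radius=10.0, spawn_frac=0.9):
    """positions: list of (x, y) samples over a match; spawn: (x, y)."""
    travelled = sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))
    near_spawn = sum(math.dist(p, spawn) < spawn_radius for p in positions)
    return travelled < max_dist and near_spawn / len(positions) > spawn_frac

# A player oscillating a couple of metres outside spawn trips both checks.
wiggle = [(1.0, 0.0) if i % 2 else (1.2, 0.0) for i in range(100)]
print(looks_afk(wiggle, spawn=(0.0, 0.0)))  # True
```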

To be honest, I believe they will merely implement a predictive algorithm to catch the false positives and nothing more. Then that SWAT-like team will review the potential type I errors, and as time passes they will adjust it, i.e. the main banning algorithm will become less and less strict, aiming for the maximum number of bans with the fewest possible errors.
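
Something like this loop, say; a sketch where every name and number is invented:

```python
# Sketch of the review-then-loosen loop: auto-action only above a strict
# threshold, route borderline cases to the human team, and loosen the
# threshold only while reviewers confirm almost every call. All numbers
# are made up.
def route_case(ban_prob: float, threshold: float) -> str:
    if ban_prob >= threshold:
        return "auto_action"             # high confidence: act
    if ban_prob >= threshold - 0.2:
        return "human_review"            # borderline: the review team decides
    return "dismiss"

def adjust_threshold(threshold: float, review_precision: float) -> float:
    """Loosen a notch only when the human team confirms >99% of actions."""
    return max(0.70, threshold - 0.01) if review_precision > 0.99 else threshold

threshold = 0.95
for weekly_precision in [0.995, 0.998, 0.97]:   # made-up reviewer stats
    threshold = adjust_threshold(threshold, weekly_precision)
print(round(threshold, 2))                      # 0.93: loosened twice, then held
```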

Oh, maybe it is a guild system, I don't know. It would be interesting, though.

Machine learning algorithms are good for alleviating the pressure of managing hundreds of thousands of reports. My concern is that the decision making will be left to them more than to humans.

And I agree that the current system is terrible.

I don't think that is the case. I saw a job ad for data scientists with natural language processing skills at Blizzard a while back, and I thought this was coming.

Soon after that, they started pushing for people to use their reporting system far more (to gather data, I am thinking).

If I wasn't halfway around the world, I would have gone for the job in a heartbeat.

I am sure they will use some sort of machine learning to assess reports and/or player behavior. I just doubt it will be a major factor; there is just not enough incentive for that. Isn't the main complaint toxicity in voice chat? What can machine learning do about that? As for one-tricking, Blizzard has been happy to straddle the fence for a while, and I very much doubt they will want to piss off either side of this debate.

They said it will be used for abusive chat and gameplay sabotage. And the second is related to off-meta picks and that whole one-trick debate.

I can answer that! I was on a project to do EXACTLY that, and the results were REALLY interesting.

The system was EXTREMELY good at detecting people going toxic. Whining, trash-talking, toxic people all sound somewhat similar.

If I played you a bunch of people going toxic in another language, you would spot it right away.

The systems didn’t even need to decode what people were saying.

We thought it would fail on the humorously named “Angry or just Russian” data set we had.

Yes, it is exactly what it sounds like. Not exactly PC, but a problem we wanted to make sure it didn’t have…

Getting MANY waveforms from a set of compressed voice streams onto the graphics card to be processed in real time was harder (or at least took longer) to write than the AI itself.
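
For the curious, the language-agnostic part usually means prosody rather than words. Here is a generic sketch of that idea with librosa, not the actual pipeline from that project:

```python
# Language-agnostic "sounds angry" features: pitch and loudness statistics,
# no speech recognition involved. A generic sketch, not the project's code.
import librosa
import numpy as np

def prosody_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    f0, _, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)   # pitch contour
    rms = librosa.feature.rms(y=y)[0]                      # loudness envelope
    return np.array([
        np.nanmean(f0), np.nanstd(f0),   # raised, unstable pitch
        rms.mean(), rms.std(),           # shouting is loud and spiky
    ])
```

Feed those into any classifier, and the "Angry or just Russian" set is exactly the right sanity check, since nothing above depends on the words themselves.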


In Jeff’s message to Korea (don’t worry, he speaks in English), he clarifies that they’re using machine learning to identify toxic chat messages, and makes it sound like the machine learning systems bring toxic chat messages to the attention of human moderators rather than acting on their own.

That’s fantastic news!

Yes it is… Oh, if you want a harder dataset to train natural language classification on (since I think you are in this area), one which is funny as all hell…

Fox or Onion. It is the headlines of the articles…

If you are after a HUGE dataset which you can do a LOT of good with… GDELT. It is AMAZING… Most amazing dataset ever.

Well, if it can stop abusive voice chat, I am all for it.

I just finished watching the video, and he didn't say anything like that.

He said that they "can have verified toxic chat actioned", but that doesn't necessarily mean it's verified by humans. I hope it is, but it's ambiguous.

And chat is one thing they can actually check and actually record. The main problem here is the gameplay sabotage topic: how the reporting system can be abused on that front, and how false reports can slip into the system as valid.

Yeah, I'm not sure how the voice servers are set up in Overwatch. The GPUs are expensive, but they would only need to sample little bits of the voice streams, so they should be able to run over quite a few of them.

We didn't have the 1080s we have now; I should rerun a bunch of stuff tomorrow to see how many streams it can run at once on a single card.
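
If anyone wants to run the same back-of-envelope test, here is a minimal timing sketch in PyTorch; the model and shapes are toy stand-ins, not the original Julia code:

```python
# Rough throughput test: how many one-second voice chunks can a tiny conv
# net score per second on one GPU? Model and shapes are placeholders.
import time
import torch

model = torch.nn.Sequential(
    torch.nn.Conv1d(1, 16, kernel_size=9, stride=4),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool1d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 2),                      # toxic / not toxic
).cuda().eval()

batch = torch.randn(256, 1, 16000, device="cuda")  # 256 one-second chunks @ 16 kHz
with torch.no_grad():
    model(batch)                                 # warm-up
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(20):
        model(batch)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{20 * 256 / elapsed:.0f} chunks/sec")    # ~ streams it keeps up with in real time
```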

I LOVE writing in Julia :slight_smile: (language we were doing all this work in)