100% agree.
These are the cases I think they will get right. I’m talking about the cases that look like “gameplay sabotage” in the algorithm’s eyes, when in fact it’s just people collectively reporting off-meta heroes because they know they can trick the algorithm.
Honestly.
I’d SAY we had a system for detecting it, but I’d just put in something which detected Mei walls in spawn as the game started, plus environmental deaths happening RIGHT after a hero got teleported.
It would give the impression we had a working system, which would cut down the cases out of the gate.
I’d then start pulling the gameplay stats on the people who drop 3+ ranks in a short period of time, since they should give you a clear signal.
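To make that concrete, here’s a rough Python sketch of those placeholder checks (event names, fields, and thresholds are all made up, not anything Blizzard actually logs):

```python
# Hypothetical sketch of the rule-based "placeholder" checks described above.
# Event names, fields, and thresholds are assumptions, not real telemetry.

from dataclasses import dataclass

ROUND_START_WINDOW = 30.0   # seconds after round start that still count as "in spawn"
TELEPORT_GRACE = 5.0        # env death within this many seconds of a teleport looks suspicious

@dataclass
class Event:
    t: float          # seconds since round start
    kind: str         # "mei_wall", "teleport", "environment_death", ...
    player: str
    in_spawn: bool = False

def match_needs_review(events: list[Event]) -> bool:
    """Flag a match for human review when either crude heuristic fires."""
    last_teleport_time = None
    for e in events:
        if e.kind == "mei_wall" and e.in_spawn and e.t <= ROUND_START_WINDOW:
            return True                       # walling teammates into spawn at round start
        if e.kind == "teleport":
            last_teleport_time = e.t          # remember when someone just got teleported
        if e.kind == "environment_death" and last_teleport_time is not None:
            if e.t - last_teleport_time <= TELEPORT_GRACE:
                return True                   # died to the environment right after a teleport
    return False
```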
So, they will try to detect patterns of false reporting (that’s roughly what prediction is). For example, the new system may be more tolerant if the reason given in the report is an off-meta hero. Also, it’s oversimplistic to assume which classifier they will use, since it would be a combination. For example (a rough sketch of how these could fit together follows the list):
Naive Bayes to detect abusive chat
Frequency and cluster analysis (WNN) to detect spam
Vector autoregressive models (VAR) to detect short- vs long-run relationships
Nearest neighbours to unify those attributes in multidimensional space
Neural networks to predict
A decision tree to make the final binary decision (ban or not)
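Purely to illustrate (toy data, and these are my model choices, not anything Blizzard has described), a chain like that could be glued together roughly like this in scikit-learn:

```python
# Illustrative only: one way a "combination of classifiers" could be wired together.
# Models, features, and data are placeholders.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy data: chat logs, per-player numeric features, and labels (1 = ban-worthy).
chat_logs = ["gg wp", "uninstall you trash", "nice shot",
             "kys idiot", "one more round?", "report this thrower"]
report_features = np.array([[0, 1], [7, 4], [0, 0], [9, 6], [1, 0], [5, 2]])  # e.g. reports this week, past warnings
labels = np.array([0, 1, 0, 1, 0, 0])

# Stage 1: Naive Bayes over chat text (abusive-language score).
vec = CountVectorizer()
X_text = vec.fit_transform(chat_logs)
nb = MultinomialNB().fit(X_text, labels)
nb_score = nb.predict_proba(X_text)[:, 1]

# Stage 2: nearest neighbours over the numeric report features.
knn = KNeighborsClassifier(n_neighbors=3).fit(report_features, labels)
knn_score = knn.predict_proba(report_features)[:, 1]

# Stage 3: a decision tree takes the stacked scores and makes the final binary call.
# (In reality the stacker would be trained on held-out predictions; this is just the shape of it.)
stacked = np.column_stack([nb_score, knn_score])
final = DecisionTreeClassifier(max_depth=2).fit(stacked, labels)
print(final.predict(stacked))  # 1 = escalate/ban, 0 = leave alone
```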
Of course, what I just described is more like how a masochist would implement it, but I think you get the point: it’s not that easy.
Thanks!!!
I work in this area, so I know what the goals are and what can be achieved within a reasonable time and cost.
They don’t have to go all crazy with it, and I expect they will not.
I doubt they will. If they can clear up a bunch of cases using JUST Bayes plus a “bad word list which they will review”, they will use that.
At least to start with, they don’t need a system which catches all the bad cases - they don’t even need to catch all that many. They just don’t want a lot of false positives.
The people they want to catch will give them PLENTY of opportunities to do so.
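Something this simple would already cover a lot of ground (word list and training data invented for the example; the point is the high threshold to keep false positives down):

```python
# A minimal sketch of the "just Bayes plus a reviewed bad-word list" approach.
# The word list and the training data here are made up for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

BAD_WORDS = {"trash", "idiot", "uninstall"}   # placeholder list a human team would curate

train_messages = ["gg wp everyone", "you are trash uninstall",
                  "nice ult", "idiot thrower report him"]
train_labels = [0, 1, 0, 1]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(train_messages, train_labels)

def flag_for_review(message: str, threshold: float = 0.8) -> bool:
    """Flag a chat message when it hits the word list or the classifier is very confident."""
    if any(w in message.lower().split() for w in BAD_WORDS):
        return True
    return clf.predict_proba([message])[0][1] >= threshold
```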
That’s an understandable concern, but if it has the ability to double-check and they now have a dedicated human team to assist with it, then I just think we’re jumping the gun a little is all. Plus, did you see that bit at the end about “expanded social features”? Guild system finally? :o
Well, better to raise awareness than not. Nothing wrong with that.
I’m not sure what this means.
This
Talk about reading what you want to see. First, there is no need for any complicated algorithm to identify off-meta heroes. Any programmer can code that in a few minutes - “If Hero X = Torb, Sym, etc. then do [whatever]”. It says that this algorithm will work together with player reports, not on the basis of player reports.
And in any event, if their idea of justice is majority rule, the current (stupid) automated report system is already doing that job; there’s no need for any complicated algorithm to detect one-tricks, since people report them at the drop of a hat anyway.
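Literally a few minutes of code, something like this (hero list and the rule are placeholders):

```python
# The "few minutes of code" version: identifying off-meta picks is trivial.
# The hero list and the report-combination rule are placeholders.

OFF_META_HEROES = {"Torbjorn", "Symmetra", "Bastion"}

def is_off_meta_report(reported_hero: str, report_count: int) -> bool:
    """Off-meta pick plus player reports; deciding what to do next is the hard part."""
    return reported_hero in OFF_META_HEROES and report_count > 0
```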
Honestly, this sounds like run-of-the-mill PR talk to me: “Hey, look, we are doing something using the latest technologies, trust us!”. Maybe they are, but maybe they are just trying to reassure us with platitudes. Or this machine learning will be used for simpler cases, like players barely exiting spawn and moving from time to time just to avoid the inactivity kick. I very much doubt it will be used for subjective issues like one-tricking. And if it is, it will probably be used to detect the players who falsely report others (hopefully).
To be honest, I believe they will merely implement a predictive algorithm to catch the false positives and nothing more. Then that SWAT-like team will review the cases that are potential type I errors, and as time passes they will adjust it, i.e. the main banning algorithm will become less and less strict, so they get the maximum number of bans with the fewest possible errors.
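In rough Python, that loosening loop could look something like this (all the numbers are invented, it’s just the shape of the idea):

```python
# Rough sketch of the gradual-loosening idea: start strict, send flagged cases to a
# human review team, and only lower the ban threshold while the confirmed
# false-positive rate stays low. All numbers here are invented.

def adjust_threshold(threshold: float, reviewed: list[tuple[float, bool]],
                     max_fp_rate: float = 0.02, step: float = 0.01) -> float:
    """reviewed = [(model_score, human_confirmed_guilty), ...] for the last review batch."""
    flagged = [(score, guilty) for score, guilty in reviewed if score >= threshold]
    if not flagged:
        return threshold
    fp_rate = sum(1 for _, guilty in flagged if not guilty) / len(flagged)
    if fp_rate <= max_fp_rate:
        return max(0.5, threshold - step)   # few type I errors: ban a bit more aggressively
    return min(0.99, threshold + step)      # too many type I errors: tighten back up
```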
Oh, maybe it is a guild system, I don’t know. It would be interesting though.
Machine learning algorithms are good at alleviating the pressure of managing hundreds of thousands of reports. My concern is that the decision-making will be left to them rather than to humans.
And I agree that the current system is terrible.
I don’t think that’s the case. I saw Blizzard advertising jobs for data scientists with natural language processing skills a while back, and I thought this was coming.
Soon after that, they started pushing people to use the reporting system far more (to gather data, I’m thinking).
If I weren’t halfway around the world, I would have gone for the job in a heartbeat.
I am sure they will use some sort of machine learning to assess reports and/or player behavior. I just doubt it will be a major factor; there is just not enough incentive for that. Isn’t the main complaint toxicity in voice chat? What can machine learning do about that? As for one-tricking, Blizzard has been happy to straddle the fence for a while, and I very much doubt they will want to piss off either side of that debate.
They said it will be used for abusive chat and gameplay sabotage. And the second is related to off-meta picks and that whole one-trick debate.
I can answer that! I was on a project to do EXACTLY that, and the results were REALLY interesting.
The system was EXTREMELY good at detecting people going toxic. Whining, trash-talking, toxic people all sound somewhat similar.
If I played you recordings of people going toxic in another language, you would spot it right away.
The system didn’t even need to decode what people were saying.
We thought it would fail on the humorously named “Angry or just Russian” data set we had.
Yes, it is exactly what it sounds like. Not exactly PC, but a problem we wanted to make sure it didn’t have…
Getting MANY waveforms from a set of compressed voice streams onto the graphics card to be processed in real time was harder (or at least took longer) to write than the AI itself.
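For anyone curious, the general shape of that kind of pipeline (NOT our actual code, just an illustrative PyTorch sketch with made-up sizes): decoded waveforms get batched to the GPU, turned into spectrograms, and scored by a small network, with no speech-to-text anywhere.

```python
# Hedged sketch of a prosody-only toxicity scorer: batch decoded voice waveforms
# onto the GPU, compute spectrograms, score "sounds toxic" per speaker.
# Architecture and sizes are illustrative guesses, not the real system.

import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE = 16_000
device = "cuda" if torch.cuda.is_available() else "cpu"

# Mel spectrogram stands in for whatever acoustic features the real system used.
to_mel = torchaudio.transforms.MelSpectrogram(sample_rate=SAMPLE_RATE, n_mels=64).to(device)

class ToxicityNet(nn.Module):
    """Tiny CNN mapping a spectrogram to a single 'sounds toxic' score."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x)).squeeze(-1)

model = ToxicityNet().to(device).eval()

# Stand-in for decoded voice-chat streams: 8 players x 2 seconds of mono audio.
waveforms = torch.randn(8, SAMPLE_RATE * 2, device=device)

with torch.no_grad():
    mels = to_mel(waveforms).unsqueeze(1)   # (batch, 1, n_mels, frames)
    scores = model(mels)                    # one 0..1 score per speaker
print(scores.cpu())
```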
In Jeff’s message to Korea (don’t worry, he speaks in English), he clarifies that they’re using machine learning to identify toxic chat messages, and makes it sound like the machine learning systems bring toxic chat messages to the attention of human moderators rather than acting on their own.
That’s fantastic news!