"We Will Train Machines to Ban Offmeta Heroes!"

Edit:

I am glad this is the case and that I am completely off with my estimations; this is good news!

However, I have a few questions.

Does this mean “toxic chat” is the only thing verified by human beings? Usually people don’t single out one case when a statement applies to everything else as well.
If all report options were verified by human beings, you would say something less specific than one particular report option, like “before applying actions, human beings verify their validity”; you wouldn’t specify that it applies to “toxic chat” in particular.

Does “toxic chat” include voice chat?

What about gameplay sabotage? Is that verified by human beings?


I don’t know if people realize it, but machine learning implies exactly that: it can be trained for anything. And if the majority of the community condemns off-meta heroes and reports them for “gameplay sabotage”, that will train the algorithm to treat reported off-meta picks as actual gameplay sabotage and action them, regardless of whether any sabotage actually happened.
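
To make that concrete, here is a minimal sketch of how a classifier trained directly on report-derived labels would inherit the reporters’ bias. Everything in it is invented for illustration; it is not Blizzard’s actual system, and the feature names and data are made up.

# Hypothetical sketch: a classifier trained on raw community reports.
# All feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per player: [picked_off_meta_hero, objective_time_low]
# Label: 1 if the community reported them for "gameplay sabotage".
# If the community reports off-meta picks, the labels encode that bias.
X = np.array([
    [1, 0], [1, 0], [1, 1], [1, 0],   # off-meta players, mostly normal stats
    [0, 1], [0, 0], [0, 0], [0, 1],   # meta players, some with poor stats
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # reports track the pick, not the play

model = LogisticRegression().fit(X, y)

# An off-meta player with perfectly normal stats still gets flagged:
print(model.predict_proba([[1, 0]])[0][1])  # high "sabotage" probability

The model never sees whether sabotage happened; it only sees who got reported, so it reproduces whatever pattern the reports contain.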

There have already been cases where machine-learning chatbots were trained by the masses, with hilarious (and not in a good way) results, like saying “Hitler was right” and other abominations, and there is a danger that the OW algorithm will mimic some horrible practices from the OW community.

If there were an algorithm trained to detect abusive posts on forums, it could flag this post as inappropriate because I quoted the chatbot in the link that said “Hitler was right”, since the algorithm is not that sensitive to context.

“Machine learning” isn’t some magical detector; it does what it is trained for, and it adopts the most common things people do if it is set up that way. If people condemn off-meta, so will the system. If people condemn Torb players, so will the system. It will mimic the majority of the community.

Don’t get me wrong, I love the idea of a smart agent that detects bad behaviour and abusive chat; I just don’t have faith that mimicking the community won’t bring very bad results.

20 Likes

Hmm, you’re right, but we don’t know any specifics. It could look for single words or small word combos in text chat, as a sort of low-hanging-fruit style of banning people. If someone is saying the N word, that’s pretty easy for a computer to pick up; there’s basically no context where that could be OK to say.

I understand what you’re saying about context (for other words) and about it learning the wrong thing, but I’m wondering what you think a one-trick consistently puts in chat that might identify them to such an algorithm? I’m not saying there’s nothing, I just come up a bit blank when I think about it myself. I’ve met one-tricks who don’t chat a single word; what could an algorithm pick up there?
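
For the low-hanging-fruit case, no learning is even needed; a plain denylist match would do. A hypothetical sketch (the word list and function name are invented placeholders):

# Hypothetical sketch of "low-hanging fruit" chat filtering:
# flag messages containing unambiguous slurs, no ML required.
DENYLIST = {"slur1", "slur2"}  # placeholders for unambiguous slurs

def flag_message(message: str) -> bool:
    """Return True if any denylisted word appears in the message."""
    words = (w.strip(".,!?") for w in message.lower().split())
    return any(w in DENYLIST for w in words)

print(flag_message("gg wp"))            # False
print(flag_message("you are a slur1"))  # True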

1 Like

If the majority of the community condemns something, it’s probably not something that is healthy for the game.

7 Likes

Back in WW2, horrible things happened, and despite an entire nation’s effort to condemn another ethnic group, we know it wasn’t good for the world. The same goes for slavery and countless other examples from history: the majority isn’t always right.

6 Likes

Do you actually think the way they’re going to train the AI is to just have it comply with the majority of the reports? Do you really think they are that dumb?

I assume the way they’ll train it is by having it observe which reports humans action and which they ignore, then test it for a while on their internal servers… and release it if they find the results reasonable.
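
That would mean training on the moderators’ decisions rather than on raw report volume. A hypothetical sketch (all names and numbers invented):

# Hypothetical sketch: train on what human moderators actually did
# with reports, not on the raw reports themselves.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per reported player: [report_count, picked_off_meta_hero]
X = np.array([
    [9, 1], [8, 0], [7, 1], [6, 0],   # heavily reported players
    [2, 1], [2, 0], [1, 1], [1, 0],   # lightly reported players
])
# Label: 1 only if a human moderator upheld the report. Here the
# moderators dismissed pure "off-meta" reports, so the model learns
# that hero choice by itself does not decide the outcome.
y = np.array([0, 1, 0, 1, 0, 0, 0, 0])

model = LogisticRegression().fit(X, y)

# Same report count, different hero pick: the human-derived labels,
# not the report volume, drive the prediction.
print(model.predict_proba([[8, 1]])[0][1])
print(model.predict_proba([[8, 0]])[0][1])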

6 Likes

I think if you polled the entire community (i.e. the world as a whole) you would get a different response.

And are you seriously comparing those who don’t like getting one-tricks on their team to actual n*zis??

5 Likes

I’m pretty sure it can also be trained not to. Machine learning is not some AI, well, not yet. It does what the people managing it want it to do. If some variable causes it to do something they don’t like, they could remove the variable or change how that variable is weighted in the process.
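
In practice that can be as simple as dropping the offending input before training. A hypothetical sketch (the feature names are invented):

# Hypothetical sketch: if "hero pick" drives unwanted actions,
# the operators can exclude that column before training.
import numpy as np

FEATURES = ["report_count", "picked_off_meta_hero", "objective_time"]
X = np.array([
    [9, 1, 40],
    [2, 1, 110],
    [8, 0, 15],
])

# Drop the feature the operators don't want the model to see.
keep = [i for i, name in enumerate(FEATURES) if name != "picked_off_meta_hero"]
X_filtered = X[:, keep]
print(X_filtered)  # the model now only sees behaviour, not hero choice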

Here is a context where this is okay.

Using the actual N word (I won’t say it because of the CoC) is not acceptable; you should say “the N word” when discussing it, to show that it is unacceptable.

You see?

Yeah that’s what I’m saying. That’s fine for an AI to pick up as bad because there is no ok context to say it.

If I worded it poorly so it sounds like I said the opposite, I feel very embarrassed. I will edit it to make it clearer.

1 Like

I just showed you an example where it is okay to say it.

You said it just fine without saying it, so I would argue you don’t need to say it to get your point across.

1 Like

Relax, this isn’t Skynet lol

I’m sure there are checks and balances on what the algorithm can and cannot ban.

Such as

if (sabotage_report && stats_low) {
    ban();
} else {
    ignore();
}

Or other safety nets could be in place

(Also, I doubt the real code is that simple; I am just trying to illustrate the idea.)

1 Like

No, my point was that the majority isn’t always right. It doesn’t have to be related to WW2; it can be any other horrible example from history.

2 Likes

You have a point; the difference is that we both know what we are talking about, and that isn’t always the case. If the person you are talking with doesn’t understand what the discussion is about, because of their young age or not being a native English speaker, it will be hard to explain what you mean without using the word at least once.

So maybe this system just detects the players that others don’t want to play with and has them queue into each other. It doesn’t have to be a ban. Just let them one-trick in each other’s games.
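
A hypothetical sketch of that kind of soft matchmaking pool (the threshold and names are invented, not anything Blizzard has described):

# Hypothetical sketch: route frequently-avoided players into the
# same queue instead of banning them. The threshold is invented.
AVOID_THRESHOLD = 0.30  # fraction of recent teammates who avoided the player

def pick_queue(avoid_rate: float) -> str:
    """Assign a matchmaking pool based on how often others avoid the player."""
    return "avoided_pool" if avoid_rate >= AVOID_THRESHOLD else "default_pool"

print(pick_queue(0.45))  # 'avoided_pool': one-tricks get matched together
print(pick_queue(0.05))  # 'default_pool'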

Wall Street dinosaurs ostracized quants (mathematicians and engineers) in the 80s because finance was “about guts”, not math. They thought those nerds should stick to administrative tasks behind their computers.

By the 90s, quants had taken over every trading desk. Today, you can’t get a serious finance job without graduate or post-graduate studies in math and/or engineering, and a plain finance major generally leads only to administrative roles.

Things like that happen all the time, because no matter how much people love a good underdog story, the reality is nobody ever roots for the underdog until they’ve gone above and beyond what should normally be considered a success.

I’m not saying underdogs (off meta players) are secret geniuses, but I highly question your idea of enforcing mob mentality as the sole truth.

5 Likes

Yes that is a very fair point :slight_smile:

I can only hope that Blizzard will step up their customer service to investigate cases like this if they happen, because I think it’s in the nature of a game with a huge community to have to cut corners like this, but I definitely understand your point about context.

Maybe we have to ask what’s the lesser of two evils: fewer reports actioned because humans have to pore over everything, or more reports actioned but some of them actioned wrongly? (Rhetorical, because I don’t know the answer!)

Well, they said how they are penalizing: gameplay sabotage is penalized with a suspension, and here are the new changes regarding abusive chat.

And to top it off, you could even “not know” that you are using abusive chat, which is baffling to me. Look:

Which implies the machine knows better than you what you are doing.

Also, yeah, don’t use that as an example, because I’m no history buff, but the Overwatch community is a multicultural one spanning several countries,

not a country weakened by WW1 and taken over by a totalitarian dictatorship.

1 Like

That is not how machine learning works. It is given the rules of the game and the goal (in game-theory terms); the rest is trained. There are no hand-written rules for how to detect things; the detection itself is what gets trained.
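
The distinction, roughly: a hand-written rule is fixed by a developer, while a learned detector is whatever the training data makes it. A hypothetical sketch (all numbers invented):

# Hypothetical contrast: hand-written rule vs. data-driven detector.
import numpy as np

# Hand-written rule: fixed by a developer, never moves with the data.
def rule_based(damage_per_min: float) -> bool:
    return damage_per_min < 100  # hard-coded cutoff

# "Learned" cutoff: derived from data, so biased reports move it.
reported_dpm = np.array([80, 90, 300, 320])  # damage/min of reported players
learned_cutoff = reported_dpm.max()          # data-driven, not hand-written
print(learned_cutoff)  # 320: reports against good players raised the cutoff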

1 Like