Yes it is… Oh, if you want a harder dataset to train natural language classification on (since I think you are in this area), one which is funny as all hell…
Fox or Onion. It is the headlines of the articles…
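If you want a feel for what a baseline on it might look like, here is a minimal bag-of-words naive Bayes sketch in plain Julia. The function names, labels, and data layout are made up for illustration, not anything from the actual dataset.

```julia
# Minimal sketch: bag-of-words naive Bayes for "Fox or Onion"-style headline
# classification, in plain Base Julia. Labels and inputs here are hypothetical.

tokenize(s) = split(lowercase(replace(s, r"[^a-zA-Z0-9 ]" => " ")))

struct NBModel
    logprior::Dict{String,Float64}             # log P(class)
    loglik::Dict{String,Dict{String,Float64}}  # log P(word | class), Laplace-smoothed
    vocab::Set{String}
end

function train(headlines::Vector{String}, labels::Vector{String})
    classes = unique(labels)
    vocab   = Set{String}()
    counts  = Dict(c => Dict{String,Int}() for c in classes)  # word counts per class
    totals  = Dict(c => 0 for c in classes)                   # total words per class
    for (h, y) in zip(headlines, labels), w in tokenize(h)
        push!(vocab, w)
        counts[y][w] = get(counts[y], w, 0) + 1
        totals[y] += 1
    end
    V = length(vocab)
    logprior = Dict(c => log(count(==(c), labels) / length(labels)) for c in classes)
    loglik   = Dict(c => Dict(w => log((get(counts[c], w, 0) + 1) / (totals[c] + V))
                              for w in vocab) for c in classes)
    return NBModel(logprior, loglik, vocab)
end

function predict(m::NBModel, headline::String)
    classes = collect(keys(m.logprior))
    scores  = Dict(c => m.logprior[c] for c in classes)
    for w in tokenize(headline)
        w in m.vocab || continue        # ignore words never seen in training
        for c in classes
            scores[c] += m.loglik[c][w]
        end
    end
    best = classes[1]                   # return the highest-scoring class
    for c in classes
        scores[c] > scores[best] && (best = c)
    end
    return best
end
```

Then `model = train(headlines, labels)` followed by `predict(model, some_headline)` gives you a dumb baseline to beat before trying anything fancier.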
If you are after a HUGE dataset which you can do a LOT of good with… GDELT. It is AMAZING… Most amazing dataset ever.
I just finished watching the video, and he didn’t say anything like that.
He said that they “can have verified toxic chat actioned”, but that doesn’t necessarily mean it’s verified by humans. I hope it is, but it’s ambiguous.
And chat is one thing they can actually check, and one thing they actually record. The main problem here is the gameplay-sabotage category: how the reporting system can be abused on that front, and how false reports of it can slip into the system as valid.
Yeah, I’m not sure how the voice servers are set up in Overwatch. The GPUs are expensive, but they would only need to sample little bits of the voice streams, so they should be able to handle quite a few of them.
We didn’t have the 1080s we have now; I should rerun a bunch of stuff tomorrow to see how many streams a single card can run at once.
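Roughly, the arithmetic for “how many streams per card” looks like this if you only classify short clips from each stream. A sketch with made-up numbers, not real measurements:

```julia
# Back-of-envelope sketch, assuming the model only classifies a short clip from
# each stream every so often. Every number here is a placeholder.

sample_every      = 30.0   # each stream gets a short clip sampled every 30 s
inference_seconds = 0.05   # hypothetical time for one card to classify one clip

# A card spends inference_seconds per stream per sampling window, so roughly:
streams_per_card = sample_every / inference_seconds
println("≈ ", round(Int, streams_per_card), " concurrent streams per card")
```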
I LOVE writing in Julia (the language we were doing all this work in).
Oh it comes in, it comes into it at every step. They have to deal with laws in different countries all the time. Look, loot boxes are now classed as gambling in some countries, so they have to change how the game works in those specific countries.
There is no reason this wouldn’t apply to voice chat as well.
As for streaming: yeah, they could process it on the fly, but when the system tells you this person is toxic, how are you going to verify that if you don’t have the voice chat? Are you assuming the system is flawless? How will it be corrected when it is wrong? They would have to record it, and they don’t, for whatever reason.
Legal absolutely comes into it. In many jurisdictions, recording an electronic voice conversation without the participants knowing is treated as wiretapping and is illegal without their consent. Blizzard would have to prominently warn people they were being recorded; it couldn’t just be tucked away in a terms of service document somewhere. And different jurisdictions have very different requirements in their laws.
I’m a little disappointed that they’re only combating “toxicity” and not SR manipulators.
Yesterday I went against a 3 stack of derankers, complete with meme battletags. They were in “tryhard engaged” mode for my game, of course.
These are the players who need to be removed from the system and who are the true detriment to my player experience. I can and do mute toxic players (and report them) so that I can at least forget about them and keep trying my best, but there’s nothing I can do when I get into a game against deranked Diamonds who decide it’s stomping time.
So it seems to me that, based on the sum of what we know,
A) Blizzard is implementing some kind of machine learning to help make reporting more effective.
B) They have hired at least one person who knows how to work with this stuff (i.e., they are not just flipping a switch and letting the system run amok).
C) They have expressed in the past that they do not want people reported or banned for hero choice alone. The uneven implementation of this in the past has been due to reports operating on sheer volume, with no way to verify their accuracy. (Almost like it would be helped by automated detection of a player making a reasonable contribution to the team… perhaps a machine that could learn what innocent players look like… )
D) The statements about “protecting customers from false reports” and “recognizing that anyone can have a bad day” indicate that the devs are already thinking about mitigating possible undeserved consequences, and that they’re not interested in being draconian over minor or one-off offenses. Of course it doesn’t mean that such things will never accidentally happen, but it does mean they aren’t glibly implementing systems without caring about the fallout.
E) Blizzard is interested in making money and retaining customers, which is severely undermined by an unsupervised ban-bot being allowed to spuriously chuck people out of their customer base.
I think it’s totally fair to express hope or ask about what kind of safeguards the new system might have. But I do think it’s not very generous to raise those concerns on the assumption that it wouldn’t even be on Blizzard’s radar (“I don’t know if people realize”; “[the system] will mimic the majority of the community”; etc.).
The report system has already been at the mercy of the majority of the (loudest, most toxic) community; I’m pretty optimistic that incoming changes are being designed to blunt that, rather than intensify it.
“People” as in us, those who will be under the system, not people from Blizzard. That was obvious, but you wanted to strawman me.
And if you read that sentence to the end, it says “if it is set up that way”.
It is dishonest to do what you did. If you want to show what I said, quote it with its context and in its entirety; don’t rip words out of the sentence.
The current reporting system was designed to remove toxicity and griefers, but along the way it ended up removing some one-tricks and some off-meta players. You can have faith in the system if you want; I prefer having results and being proactive to avoid bad situations, hence why I made this topic.
Of course Blizzard intends to do the best thing possible. Unfortunately, that doesn’t always happen.
I…I don’t think Blizzard bans people for wanting to play Sombra? I think the thing you’re thinking of is just the constant issue with the “one trick” debate this community runs into all the time.
You aren’t wrong. But as you know, putting automation into a process usually makes that process faster and more efficient, and I doubt people would like to hear that banning off-meta players could, as a possibility, become faster and more efficient too.