So it seems to me that, based on the sum of what we know,
A) Blizzard is implementing some kind of machine learning to help make reporting more effective.
B) They have hired at least one person who knows how to work with this stuff (i.e., they are not just flipping a switch and letting the system run amok).
C) They have expressed in the past that they do not want people reported or banned for hero choice alone. The uneven enforcement of that policy has been due to reports operating on sheer volume, with no way to verify their accuracy. (Almost as if it would be helped by automated detection of a player making a reasonable contribution to the team… perhaps a machine that could learn what innocent players look like.)
D) The statements about “protecting customers from false reports” and “recognizing that anyone can have a bad day” indicate that the devs are already thinking about mitigating possible undeserved consequences, and that they’re not interested in being draconian over minor or one-off offenses. Of course it doesn’t mean that such things will never accidentally happen, but it does mean they aren’t glibly implementing systems without caring about the fallout.
E) Blizzard is interested in making money and retaining customers, which is severely undermined by an unsupervised ban-bot being allowed to spuriously chuck people out of their customer base.
I think it’s totally fair to express hope about, or ask about, what kinds of safeguards the new system might have. But it seems ungenerous to raise those concerns on the assumption that they wouldn’t even be on Blizzard’s radar (“I don’t know if people realize”; “[the system] will mimic the majority of the community”; etc.).
The report system has long been at the mercy of the loudest, most toxic portion of the community; I’m fairly optimistic that the incoming changes are being designed to blunt that influence rather than intensify it.