How unrealistic would a screenshot reporting system be?

It’s unclear how the system handles slurs.

In theory, detecting them shouldn’t be a problem; they could be caught and dealt with automatically, without needing a report at all.
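For what it’s worth, the “in theory” part really is trivial: a basic wordlist filter is a few lines of code. The hard part is obfuscation, which is probably where an automated system falls down. A minimal sketch in Python (the blocklist entries are placeholders, not anything from an actual system):

```python
import re

# Hypothetical blocklist; a real system would use a much larger,
# maintained list plus normalization for leetspeak and spacing tricks.
BLOCKLIST = {"slur1", "slur2"}

def contains_slur(message: str) -> bool:
    """Naive check: lowercase the message and look for exact word matches."""
    words = re.findall(r"\w+", message.lower())
    return any(word in BLOCKLIST for word in words)

# Exact matches are trivially caught...
print(contains_slur("slur1 get out"))      # True
# ...but simple obfuscation slips straight through, which is one
# reason a plain wordlist isn't enough on its own.
print(contains_slur("s1ur1 get out"))      # False
print(contains_slur("s l u r 1 get out"))  # False
```

That gap between exact matches and obfuscated ones is presumably why they’d bother with anything fancier.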

They have been experimenting with machine learning for this.

In practice, you aren’t the first person to mention that repeated racist slurs don’t appear to trigger a ban, which suggests something is off if an automated system is in use.

(It’s possible that, if the person was banned separately, you wouldn’t get the report notification.)

They have also stated that they verify all toxic chat, though people don’t necessarily believe that.

I don’t know of anyone who has tested it much.
