I'm not really sure what you are angry about here, because you don't seem to be arguing against the main contention that bias of whatever form is a factor in constructing these data sets. Is it just that, for you, bias is inevitable and we simply have to toughen up?
In general I am angry about censorship ambitions and the futile, misguided attempts to get rid of hate by banning it from the internet. Even though I am aware that hate can reproduce through a simple scheme that leads to mutual radicalization, previous attempts to contain it have all made the situation worse. Yet correction is nowhere in sight; it is as if people are acting out the definition of insanity by committing the same mistakes over and over.
But I am not that angry, and I don't think anything beyond general disapproval can be read out of my comment.
I would assume that the gig workers did not care as much about hate speech as some academics do, and so did not flag content as expected. This discrepancy is then declared to be bias. Fine, be that way...
> The Google researchers suggest that ‘[the] disagreements between annotators may embed valuable nuances about the task’.
On that I agree with the researchers, but I would propose that any disputed annotation (hate speech: yes/no) should fall back to 'no', resolving the dispute. Otherwise, only asking the target will provide any additional understanding. Perhaps asking a supreme court would too, but that is not feasible, and not even the highest courts are infallible.