Now that such a model exists, I think we can expect lobbying to make it mandatory. For the children...
Once adopted, this will lead to an increase in randomly locked-out accounts due to model false positives: when you screen for something very rare, even a very accurate test means most flagged items are false positives (base rates, Bayes, etc.).
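A quick back-of-the-envelope sketch in Python (the prevalence, hit rate, and false positive rate below are made-up numbers, purely to illustrate the base-rate effect):

    # Even a very accurate classifier flags mostly innocent content
    # when the target class is extremely rare.
    prevalence = 1e-6    # assumed fraction of uploads that are actually CSAM
    sensitivity = 0.99   # assumed P(flagged | CSAM)
    fpr = 1e-4           # assumed P(flagged | innocent), already very good

    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * fpr
    precision = true_pos / (true_pos + false_pos)

    print(f"P(actually CSAM | flagged) = {precision:.2%}")
    # -> about 0.98%, i.e. over 99% of flagged items are false positives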
This blog has had a few interesting articles on the limitations of these technologies:
https://www.hackerfactor.com/blog/index.php?/archives/971-FB...
While those posts discuss the currently used hash-based approaches, a classification model will have similar problems.
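To make the hash side of that concrete, here's a toy "average hash" in Python, a much cruder cousin of the perceptual hashing schemes those posts discuss (real systems like PhotoDNA are far more sophisticated; the 4x4 grids here are stand-ins for downscaled grayscale images):

    def average_hash(pixels):
        # One bit per pixel: is it brighter than the image's mean?
        flat = [p for row in pixels for p in row]
        avg = sum(flat) / len(flat)
        return tuple(p > avg for p in flat)

    img_a = [[200, 200,  10,  10],
             [200, 200,  10,  10],
             [ 10,  10, 200, 200],
             [ 10,  10, 200, 200]]

    # A visually different image with the same bright/dark layout:
    img_b = [[255, 140,   0,  60],
             [130, 250,  50,   5],
             [ 20,  60, 255, 130],
             [  0,  55, 140, 250]]

    print(average_hash(img_a) == average_hash(img_b))  # True -- a collision

Any representation robust to resizing and re-encoding is necessarily lossy, and that lossiness is exactly where false matches come from.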
It's obviously the right thing to say it's an undesirable job, but with the right support, I actually think it could be a good one. You'd be able to go home each day knowing that you played a part in reducing the harm to these children and others, bringing the perpetrators to justice, and getting everyone involved the help they need.
I remember reading somewhere that the folks working for the companies Facebook outsources its moderation to suffer from serious psychological problems.
Back when Orkut was a thing, Google did this one weekend with internal employees. Some co-workers participated; unsurprisingly, they all said it was _very_ disturbing.
I'm honestly not sure why we can't have those 0.01% actually do the job. Like I get the optics are terrible, but were I standing at the gallows, I'd prefer my hangman enjoy his job; less total suffering in the world that way.
I think it's a holdover from a puritan mindset: work that needs doing but is unsavory (slaughterhouse workers, exterminators, executioners, and, well... this) is only okay if the person doing it feels bad the whole time.
For starters: how do you find & verify the 0.01% (or whatever) of decent people who do not find the "CSAM Verification" job horrible?
With how easy so-called AIs are to maliciously mis-train, there are major issues with having "non-decent" people doing this job. Whereas the homicide-enjoying hangman is not making judgement calls on society's behalf.
> how do you find & verify [...] decent people who do not find the "CSAM Verification" job horrible?
I think the distinction is:
Some people are okay with the job because they're pedophiles.
Others are okay with the job because they're insensitive to violence.
"Decent" is slightly moralistic and blurs the picture. The people can be absolute assholes, and that's okay, as long as they're not personally motivated to collect and spread CSAM material. So we're really looking for a bunch of psychopaths (in the most well-intended meaning of the word) indifferent to children. I think it's possible to make a qualifying test for both criteria.
Anyone who actually wants to be a hangman (or police, military, president, etc) should be immediately forbidden from applying for the job. The desire to wield power disqualifies anyone from actually wielding it wisely.