
New AI tool detects child sexual abuse material with ‘99% precision’ - dsavant
https://thenextweb.com/neural/2020/07/31/new-ai-tool-detects-child-sexual-abuse-material-with-99-accuracy/
======
alexhaber
Is this claiming to detect 99% of CSAM or claiming to distinguish between CSAM
and non-CSAM with 99% accuracy?

Relevant:
[https://bohemian.ai/blog/99-accuracy-99-lie/](https://bohemian.ai/blog/99-accuracy-99-lie/)
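
To make the base-rate point from that article concrete: a classifier
that is right 99% of the time on both classes still produces mostly
false positives when the target class is rare. A back-of-envelope
sketch in Python, with every count hypothetical:

    # Hypothetical numbers: why "99% accuracy" can still mean that
    # most flagged items are false positives when the class is rare.
    total = 1_000_000                # images scanned
    positives = 100                  # assume 1 in 10,000 is actually CSAM
    negatives = total - positives    # 999,900 benign images

    accuracy = 0.99                  # correct 99% of the time on both classes
    true_positives = positives * accuracy           # 99 caught
    false_positives = negatives * (1 - accuracy)    # 9,999 benign flagged

    precision = true_positives / (true_positives + false_positives)
    print(f"precision: {precision:.1%}")  # ~1.0% -- most flags are wrong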

~~~
jdm2212
> Thorn, the non-profit behind Safer, says it spots the content with greater
> than 99% precision.

Sounds like it's claiming 99% precision, i.e. 99% of what it flags is a
true positive.

Recall probably isn't great, because (a) if it were great they'd brag about it
and (b) you don't get to 99% precision without sacrificing some recall.

Seems reasonable to aim high on precision, though, to avoid burying NCMEC in
misclassified data and also to avoid wrongly banning users' content.
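
To pin down the two metrics (the counts below are invented; only the
99% precision figure comes from the article):

    # Hypothetical confusion-matrix counts illustrating the tradeoff.
    tp = 990    # real CSAM correctly flagged
    fp = 10     # benign content wrongly flagged
    fn = 2000   # real CSAM missed

    precision = tp / (tp + fp)  # of everything flagged, how much is real
    recall    = tp / (tp + fn)  # of all real CSAM, how much gets flagged

    print(f"precision: {precision:.1%}")  # 99.0% -- the quoted figure
    print(f"recall:    {recall:.1%}")     # 33.1% -- could be this low, unreported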

~~~
suizi
[https://sg.news.yahoo.com/2019-05-31-sex-lies-and-surveillance-fosta-privacy.html](https://sg.news.yahoo.com/2019-05-31-sex-lies-and-surveillance-fosta-privacy.html)
I don't trust Thorn.

~~~
jdm2212
What in that article specifically makes you not trust Thorn? They seem to me
like they're trying to do good work -- preventing child exploitation -- that
necessarily involves partnering with organizations whose values won't always
exactly line up with theirs.

------
L_226
This is great, and I am going to use it in my side project. The main question
I have is who has responsibility for the 1% of CSAM not detected? Myself,
as the service inadvertently hosting the material, or Thorn, for not having an
"accurate enough" algorithm? I suppose they just put it in their SLA terms or
something.

------
jchw
I am a bit confused; I may have missed it, but I don’t see a mention of
Project Arachnid, which is similar based on the description.

[https://projectarachnid.ca/en/](https://projectarachnid.ca/en/)

