No. The CSAM (Child Sexual Abuse Material) scanning compares hashes of photos about to be uploaded to iCloud against hashes of a specific set of known images of missing and exploited children maintained by NCMEC (National Center for Missing and Exploited Children). It is not a machine learning model looking for nudes or similar, and it is not a generalized screening. If enough matched images are found, the images are flagged for manual verification. If the manual verification confirms that the images match specific images in the NCMEC database, law enforcement is informed.
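For illustration only, here is a minimal sketch of that flow in Python. This is my own construction, not Apple's NeuralHash or private-set-intersection protocol; the hash values and threshold constant are made up. It only shows the gating described above: nothing reaches human review until the number of matches crosses a threshold.

    # Minimal sketch: threshold-gated matching against a fixed hash set.
    # KNOWN_HASHES and MATCH_THRESHOLD are illustrative placeholders.
    KNOWN_HASHES = {"hash_a", "hash_b", "hash_c"}   # stands in for the NCMEC-derived hash set
    MATCH_THRESHOLD = 30                            # illustrative; the real threshold is Apple's choice

    def flag_for_review(upload_hashes):
        """Return the matched hashes only if the match count crosses the threshold."""
        matched = [h for h in upload_hashes if h in KNOWN_HASHES]
        if len(matched) >= MATCH_THRESHOLD:
            return matched   # only at this point would manual verification happen
        return []            # below threshold: nothing is flagged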
Be aware that almost all cloud providers screen photos. Facebook reported 20 million images in 2020, Google reported half a million. Dropbox, Box, and many, many others report images. See https://www.missingkids.org/content/dam/missingkids/gethelp/... to see a complete list of companies that screen and report images.
The other thing Apple announced, which is completely separate from the CSAM photo scanning, is additional parental controls for the Messages app. If a parent opts in for their under-13 children, a machine learning model will look for inappropriate material and warn the child prior to showing the image. The child is also told that their parent will be notified if they view the image anyway. For 13-18 year olds whose parents opted in, the teen is warned first about the content. If the teen continues past the warning, the image is shown and no further action is taken. Parents are not notified for children 13 and over. As I said, this is a parental control for pre-adult kids. It requires opt-in from the parents and has no law enforcement implications.
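A toy sketch of that decision logic, based on my reading of the announcement rather than anything Apple has published as code. The function name and return values are made up; it just encodes the age and opt-in rules above.

    def message_image_policy(age, parent_opted_in, image_flagged_by_classifier):
        """Return (warn_child, notify_parent_if_viewed) for an incoming image."""
        if not parent_opted_in or not image_flagged_by_classifier:
            return (False, False)   # feature off, or nothing detected: image shows normally
        if age < 13:
            return (True, True)     # warn the child; parent notified only if the child views it anyway
        if age < 18:
            return (True, False)    # warn only; no parental notification for teens
        return (False, False)       # adults are not covered by this feature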
I am not sure the right questions are being asked.
1. Who is adding these photos to NCMEC?
2. How often are these photos added?
3. How many people have access to these photos - both adding and viewing?
Everyone is focused on Apple and no one is looking at NCMEC. If I wanted to plant a Trojan horse, I would point everyone towards Apple and perform all of the dirty work on the NCMEC end of things.
Exactly. An unknown mechanism adds hashes to a database run by an NGO, subject to exactly what conditions?
This initiative makes me extremely leery of black boxes, to the extent that any algorithm standing between a subject and an accusation had damned well better be explainable outside the algorithm; otherwise, as a jury member, I am bound to render a "not guilty" verdict.
Their system needs real images in the training phase, because they are building the system that produces the hashes. Someone at Apple must confirm that the correct photos are indeed being flagged, at least in the beginning.
We don't really know how adding new hashes works. Does NCMEC have the whole hashing algorithm and just drag-and-drop new images into it? Hopefully it's not like that.
No chaos. The photos would be reported, reviewers would say "that's weird" since the false positive was obviously harmless and the industry would eventually switch to a different hash method while ignoring the false positives generated by the collision. If there were a flood of false positive images being produced the agencies would work faster to come up with a new solution, not perform mass arrests.
Right. Kind of like how copyright violations on YouTube are double checked and the humans say “that’s weird” and deny the request. Or maybe they will just report everything and let the law work everything out. If they’re innocent they have nothing to worry about, right?
I don't understand how most people are still willing to "trust the system" when it's evident that this type of mechanism keeps failing time and time again.
And in the example you gave we are talking about Google, not some early-stage understaffed startup.
Any local match causes a “safety voucher” to be uploaded along with the encrypted image. The voucher contains a (fragment of a) decryption key. If that fragment is combined with enough of its buddies from other vouchers, Apple gets to decrypt the image.
More precisely, once they have sufficient vouchers, Apple gets to decrypt the contents of the safety vouchers, which contains a low resolution, grayscale copy of the original image. Safety vouchers don't give Apple access to your photo library.
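To make the threshold mechanism concrete, here is a toy Shamir-secret-sharing sketch in Python. This is my own illustration of the general threshold technique, not Apple's voucher cryptography; the field size, threshold, and share count are arbitrary. It demonstrates the property being described: with fewer shares than the threshold the key cannot be reconstructed, and at the threshold it can.

    # Toy Shamir secret sharing over a prime field. Each "safety voucher"
    # would carry one share; below the threshold the key is unrecoverable.
    import random

    PRIME = 2**127 - 1  # a large prime field for this toy example

    def make_shares(secret, threshold, num_shares):
        # Random polynomial of degree threshold-1 with constant term = secret.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        f = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, num_shares + 1)]

    def recover_secret(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for xj, yj in shares:
            num, den = 1, 1
            for xm, _ in shares:
                if xm != xj:
                    num = (num * -xm) % PRIME
                    den = (den * (xj - xm)) % PRIME
            secret = (secret + yj * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    key = random.randrange(PRIME)                    # stand-in for the voucher decryption key
    shares = make_shares(key, threshold=30, num_shares=100)
    print(recover_secret(shares[:30]) == key)        # True: threshold reached
    print(recover_secret(shares[:29]) == key)        # False (overwhelmingly likely): too few shares

In the design described above, each per-image share lives inside an encrypted voucher, so the server learns nothing until enough matching vouchers have accumulated; this sketch only shows the threshold property itself.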