They don’t have the images, but they do have “visual derivatives”. https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...:

“The device creates a cryptographic safety voucher that encodes the match result. It also encrypts the image’s NeuralHash and a visual derivative. This voucher is uploaded to iCloud Photos along with the image.

[…]

Once more than a threshold number of matches has occurred, Apple has enough shares that the server can combine the shares it has retrieved, and reconstruct the decryption key for the ciphertexts it has collected, thereby revealing the NeuralHash and visual derivative for the known CSAM matches.”
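
To make the threshold mechanism concrete: what's being described is a Shamir-style threshold secret-sharing scheme, where each matching voucher carries one share of an account-level decryption key, and the server can only recover that key once it holds enough shares. A rough, illustrative Python sketch of just that idea (names like split_secret/reconstruct are mine, and this ignores the private-set-intersection layer Apple wraps around it):

    # Illustrative sketch only, not Apple's construction: split a key into shares
    # over a prime field; fewer than `threshold` shares reveal nothing useful.
    import secrets

    PRIME = 2**127 - 1  # a large prime field for the demo

    def split_secret(secret, threshold, num_shares):
        # Random polynomial of degree threshold-1 with the secret as constant term.
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
        shares = []
        for x in range(1, num_shares + 1):
            y = 0
            for c in reversed(coeffs):   # Horner evaluation of the polynomial at x
                y = (y * x + c) % PRIME
            shares.append((x, y))
        return shares

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret).
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    key = secrets.randbelow(PRIME)
    shares = split_secret(key, threshold=3, num_shares=5)
    assert reconstruct(shares[:3]) == key    # any 3 shares suffice
    assert reconstruct(shares[1:4]) == key

Below the threshold the server simply doesn't hold enough points to interpolate the polynomial; past it, reconstruction is exact, which is why the quoted paragraph says the NeuralHashes and visual derivatives of the matching images then become readable.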

https://www.apple.com/child-safety/pdf/Security_Threat_Model... is even clearer:

“The decrypted vouchers allow Apple servers to access a visual derivative – such as a low-resolution version – of each matching image.

These visual derivatives are then examined by human reviewers”



