
Apple's "child sexual abuse image" monitoring proposal would do the same thing. Easy enough to implement at the device level--the authorities provide a list of forbidden hashes, any matching file goes away.


Apple's CSAM detection is based on hashes of known images. Removing photos people shot themselves is a very different task. How would you detect those? Maybe just based on geolocation?


Once it gets shared, the authorities notice and add its hash to the banned list. Same as the CSAM stuff.



