The original implementation also involved sending a "safety voucher" with each photo uploaded to iCloud, which contained a thumbnail of the photo as well as some other metadata.
The vouchers were encrypted, and could only be decrypted if there were, I believe, 30 independent matches against their CSAM hash table in the cloud. At that point the vouchers could be decrypted and reviewed by a human as a check against false positives.
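For anyone curious how a "decrypt only after ~30 matches" rule can even work, here's a rough sketch of the general idea in Python. This is plain Shamir secret sharing over a prime field, not Apple's actual PSI/threshold construction; the prime, the function names, and the 30-share threshold are just assumptions for illustration. Each matching voucher would contribute one share of the key protecting the visual derivatives, and only once the threshold of shares exists can the key (and so the vouchers) be recovered.

    # Toy sketch only: plain Shamir secret sharing, not Apple's actual scheme.
    # Each "matching voucher" would carry one share of the decryption key.
    import random

    PRIME = 2**127 - 1   # arbitrary large prime field for the sketch
    THRESHOLD = 30       # the rumored ~30-match threshold

    def split_key(key, n_shares, t=THRESHOLD):
        # Random degree-(t-1) polynomial with the key as the constant term;
        # each share is one point on that polynomial.
        coeffs = [key] + [random.randrange(PRIME) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
                for x in range(1, n_shares + 1)]

    def recover_key(shares):
        # Lagrange interpolation at x = 0 reconstructs the constant term.
        key = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            key = (key + yi * num * pow(den, -1, PRIME)) % PRIME
        return key

    key = random.randrange(PRIME)            # stand-in for the voucher key
    shares = split_key(key, 40)
    assert recover_key(shares[:30]) == key   # 30 matches: key recoverable
    assert recover_key(shares[:29]) != key   # 29 matches: (almost surely) not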
It sounds like with a raw byte hash they might be able to match a photo against a list of CSAM hashes, but they wouldn't be able to do the human review of the photo's contents because of E2E.
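To make that concrete, a toy version of matching on raw byte hashes would look something like this (the blocklist entry is fabricated, it's just sha256(b"test")). The point is that the matching side only ever sees a yes/no on a digest, never the image contents, which is exactly why human review breaks down under E2E.

    # Toy sketch: matching a photo's raw byte hash against a blocklist.
    # Under E2E the server could learn "this hash is on the list" without
    # ever being able to decrypt and look at the photo itself.
    import hashlib

    known_bad_hashes = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def matches_blocklist(photo_bytes: bytes) -> bool:
        return hashlib.sha256(photo_bytes).hexdigest() in known_bad_hashes

    print(matches_blocklist(b"test"))              # True
    print(matches_blocklist(b"some other bytes"))  # False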
That would be interesting. Then all someone has to do is generate images that collide with the ones in the CSAM hash database and AirDrop them to someone, and suddenly that person is the target of a federal investigation. I remember someone posting, about a year ago, a bunch of strange-looking images that produced those collisions. If it's all E2E, then all Apple sees is a matching hash and can't do any further review other than referring it to law enforcement.
Someone mentioned here (I haven't confirmed it) that Apple is stopping the CSAM scanning. That makes sense, because there's nothing they could reasonably do even if they found matching hashes. It seems unlikely they'd report those findings to the police if there's no way to manually review the contents first.