Apple's "child sexual abuse image" monitoring proposal would do the same thing. Easy enough to implement at the device level--the authorities provide a list of forbidden hashes, and any matching file goes away.
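A minimal sketch of what that device-level matching step looks like, assuming the blocklist is a set of plain SHA-256 digests (Apple's actual system uses a perceptual hash, NeuralHash, plus cryptographic blinding; the placeholder hash and directory below are illustrative only):

```python
import hashlib
from pathlib import Path

# Hex digests supplied by "the authorities" (placeholder value, not a real entry)
FORBIDDEN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_and_remove(photo_dir: Path) -> None:
    """Delete any file whose hash appears on the blocklist."""
    for path in photo_dir.rglob("*"):
        if path.is_file() and file_sha256(path) in FORBIDDEN_HASHES:
            path.unlink()  # "any matching file goes away"

if __name__ == "__main__":
    scan_and_remove(Path.home() / "Pictures")
```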
Apple's CSAM detection is based on hashes of known images. Removing photos that people shot themselves is a very different task. How would you detect those? Maybe just based on geolocation?
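To make the difference concrete, here is a rough sketch of the geolocation idea: flag a photo if its EXIF GPS coordinates fall inside a forbidden bounding box. The box coordinates and the reliance on EXIF metadata are my assumptions for illustration; nothing like this is part of Apple's proposal, which matches hashes, not locations.

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

# Hypothetical restricted area as (min_lat, min_lon, max_lat, max_lon)
FORBIDDEN_BOX = (51.0, 30.0, 51.5, 30.5)

def _to_degrees(dms, ref) -> float:
    """Convert EXIF degrees/minutes/seconds rationals to a signed float."""
    deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -deg if ref in ("S", "W") else deg

def photo_location(path: Path):
    """Return (lat, lon) from EXIF GPS data, or None if absent."""
    gps = Image.open(path).getexif().get_ifd(0x8825)  # GPSInfo IFD
    if not gps or 2 not in gps or 4 not in gps:
        return None
    lat = _to_degrees(gps[2], gps.get(1, "N"))  # tag 2: latitude, tag 1: N/S
    lon = _to_degrees(gps[4], gps.get(3, "E"))  # tag 4: longitude, tag 3: E/W
    return lat, lon

def taken_in_forbidden_area(path: Path) -> bool:
    loc = photo_location(path)
    if loc is None:
        return False  # no GPS metadata: nothing to match on
    lat, lon = loc
    min_lat, min_lon, max_lat, max_lon = FORBIDDEN_BOX
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon
```

The obvious gap is the last comment: anyone who strips or never records GPS metadata is invisible to this kind of check, which is part of why it is a much harder problem than matching known hashes.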