If this were a hash collision with some abuse image, then... why can I still see it? I don't get how something can be bad enough to warrant an instant temp ban but not bad enough to hide from the public
So suppose this is a hash collision. I don't really think Twitter is at fault here: any solution you design has a false-positive rate. I am curious how they will resolve it, though.
Spoken like a person who does not consider the sheer volume Twitter (and companies like it) have to deal with.
The company would go bankrupt if they hired enough people to manually review every flagged post. They also can't just leave abusive images up... so this is the compromise.
I’ve always wondered about this: with a hash table, we always check that the key actually matches the target, precisely because of hash collisions. Shouldn't the same be done here? On an image hash match, either run an image similarity check or, if the original image can't be stored (it's so bad it must not be retained), use a secondary hash to double-check?
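A minimal sketch of the idea in the comment above, using standard-library hashes as stand-ins: the blocklist stores two independent hashes per banned image rather than the image itself, a match on the fast (here deliberately truncated) primary hash is only a candidate, and the secondary hash confirms it before any action is taken. This mirrors the key comparison a hash table performs on lookup. Caveat: a secondary exact hash only confirms byte-identical copies; real perceptual-hash systems (e.g. PhotoDNA) match near-duplicates, so they would need a similarity check instead. All names here are hypothetical.

```python
import hashlib

def fast_hash(data: bytes) -> str:
    # Stand-in for a fast/perceptual hash; truncated so that
    # collisions are at least plausible in this toy example.
    return hashlib.sha256(data).hexdigest()[:8]

def secondary_hash(data: bytes) -> str:
    # An independent hash family: colliding in BOTH hashes
    # is far less likely than colliding in one.
    return hashlib.blake2b(data).hexdigest()

def build_blocklist(banned_images):
    # Store two hashes per banned image instead of the image itself,
    # so the original never has to be retained.
    return {fast_hash(img): secondary_hash(img) for img in banned_images}

def is_flagged(data: bytes, blocklist) -> bool:
    expected = blocklist.get(fast_hash(data))
    if expected is None:
        return False  # no candidate match at all
    # Primary hash matched: confirm with the secondary hash
    # before banning, instead of acting on the candidate alone.
    return secondary_hash(data) == expected
```

With this scheme, a primary-hash collision on an innocent image falls through the secondary check and is not flagged, at the cost of one extra hash computation per candidate match.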