
If you have the whitelist, you compare the hash on the backend to known hashes. That's how "Safebrowsing" things work, no reason why it couldn't work here too.

You'd have to know which site each hash relates to, so that you know who to give the money to. So the hashing wouldn't gain anything, since the system only works if they can reverse the hash.

It can work for Safebrowsing because they can just have a gigantic list of hashes and it doesn't matter which specific site the user is visiting, only whether it's any site in the list. But for Flattr to work, you need to know which specific site it is.

If you produce a list of hashes, you can reverse those hashes simply by comparing them against the hashes of known URLs.
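The "reversal" is just a precomputed lookup table. A minimal sketch in Python (the whitelist entries are hypothetical):

```python
import hashlib

# Hypothetical plaintext whitelist the service already holds.
known_urls = ["example.com", "news.ycombinator.com", "twitter.com"]

# Precompute a reverse-lookup table: hash -> URL.
reverse_table = {hashlib.sha256(u.encode()).hexdigest(): u for u in known_urls}

def reverse_hash(h):
    """'Reverse' a hash by looking it up against hashes of known URLs."""
    return reverse_table.get(h)

h = hashlib.sha256(b"twitter.com").hexdigest()
print(reverse_hash(h))  # twitter.com
```

So hashing only protects URLs that the list holder cannot guess; anything already on the list is trivially recovered.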

This still isn't private for any URLs in the list, but it does anonymize any URLs that get transmitted in case your whitelisting goes bad and allows everything. It also makes it clear that you're only transmitting exact matches, not fuzzy matches, and that only the URL itself, with no query parameters, is transmitted, which also matters for privacy. (Otherwise any URLs in redirects are also hoovered up.)
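One way to make that guarantee explicit is to canonicalize before hashing. A sketch, assuming SHA-256 and a host-plus-path canonical form (the exact normalization is my assumption, not anything Flattr specifies):

```python
import hashlib
from urllib.parse import urlsplit

def url_fingerprint(url):
    """Hash only host + path; query string and fragment never enter the hash."""
    parts = urlsplit(url)
    canonical = parts.hostname + parts.path
    return hashlib.sha256(canonical.encode()).hexdigest()

# Both variants hash to the same value, so session tokens in query
# strings or fragments never leave the browser.
a = url_fingerprint("https://example.com/article?session=SECRET")
b = url_fingerprint("https://example.com/article#utm_source=mail")
assert a == b
```

Because the canonicalization happens before hashing, an auditor reading the client code can verify exactly which parts of a URL can ever be transmitted.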

I haven't looked at the Flattr code, but if it's regex matching without going through proper URL parsing, I also wouldn't trust it not to go 'bad' at some point; there are too many edge cases, from the tricky to the mundane, such as matching on http://example.com#twitter.com , etc.
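That edge case is easy to demonstrate: a naive substring or regex match "finds" twitter.com in the fragment, while a real URL parser correctly reports the host as example.com. A sketch in Python:

```python
import re
from urllib.parse import urlsplit

url = "http://example.com#twitter.com"

# Naive regex match: wrongly "detects" twitter.com in the fragment.
print(bool(re.search(r"twitter\.com", url)))  # True

# Proper URL parsing: the actual host is example.com.
print(urlsplit(url).hostname)  # example.com
```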

A side benefit of hashes for Safebrowsing, though perhaps a drawback for this use case, is that they also effectively let you hide the whitelist from your end users: the client can compare hashes locally without the plaintext list ever being shipped or any lookup leaking to the server.
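A toy version of that client-side check, with a made-up two-entry list (real Safebrowsing adds hash-prefix tricks on top, omitted here):

```python
import hashlib

def h(url):
    return hashlib.sha256(url.encode()).hexdigest()

# The server ships only hashes; the client never sees the plaintext
# whitelist, and no lookup is transmitted at check time.
hashed_whitelist = {h(u) for u in ["example.com", "news.ycombinator.com"]}

def is_whitelisted(visited_url):
    """Client-side membership test against the hashed list."""
    return h(visited_url) in hashed_whitelist

print(is_whitelisted("example.com"))  # True
print(is_whitelisted("twitter.com"))  # False
```

The trade-off the comment points at: this opacity is good when the list is threat intelligence, but bad for Flattr, where users might reasonably want to audit which sites can receive their money.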
