Obfuscation: A User’s Guide for Privacy and Protest (mitpress.mit.edu)
59 points by molsongolden on Sept 17, 2015 | 15 comments



I'm not sure how smart it is to give people advice on privacy, and on which software to use to avoid surveillance, via a traditional book. I'd much rather get my advice from EFF's Surveillance Self-Defense or TTC's Security In-a-Box, because they're online and can be updated easily if a critical bug is discovered in any given popular privacy or security tool. Capabilities change over time, and when they do, you can't update a book as easily as you can update a website. If someone relies on this book's recommendations and thinks they're secure when they're not, for whatever reason, that's incredibly harmful.

Reading the overview, though, the book also seems to be about the history of obfuscation and of protesting laws (?), so I guess it could be useful for people who are seeking that information, but I wouldn't recommend this or any book that acts like a tutorial to someone who's trying to learn more about online privacy.


Would it be less ironic if they published the information on a website supported by targeted ads? There aren't many privacy-friendly options to monetize content these days without resorting to donations.


Well, privacy and content monetization don't play well together. The only option I can think of would be a blog with a single ad, à la "Daring Fireball", but you need a huge number of hits to make that a viable business, as J. Gruber does.


It's certainly not a tutorial; it's written by academics, so the venue is pretty typical.


The same author created AdNauseam, which clicks on every ad you encounter instead of selectively blocking them: http://www.ethanzuckerman.com/blog/2014/10/06/helen-nissenba...

Obfuscation won't slow down data scientists. To see this in action, open your Gmail account and notice that there's (likely) no spam in your primary inbox. Spammers spend considerable resources obfuscating themselves from Google's filters, yet they still fail.


But what if the user actually "read" the spam?

Of course "fake behavior" could be dected if you have known real behaviour to compare it to.That also means that it is possible to simulate "real behavior" given the relevant data.

This is a win-win situation: ad networks get to fight fake traffic, and fake traffic becomes indistinguishable from real human traffic.
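
A minimal sketch of the detection side, assuming you have a reference sample of inter-click gaps from known-real sessions (the names, numbers, and toy data here are purely illustrative):

    import numpy as np
    from scipy import stats

    def looks_fake(observed_gaps, real_gaps, alpha=0.01):
        # Two-sample Kolmogorov-Smirnov test: could both samples
        # plausibly come from the same timing distribution?
        statistic, p_value = stats.ks_2samp(observed_gaps, real_gaps)
        return p_value < alpha

    # Toy data: real users pause with roughly log-normal gaps between
    # page views, while a naive bot clicks at fairly uniform intervals.
    rng = np.random.default_rng(0)
    real_gaps = rng.lognormal(mean=2.5, sigma=1.0, size=500)
    bot_gaps = rng.uniform(5.0, 15.0, size=500)

    print(looks_fake(bot_gaps, real_gaps))   # expected True: timing is too regular
    print(looks_fake(rng.lognormal(2.5, 1.0, 500), real_gaps))  # expected False

The flip side is exactly the point above: anyone holding enough real traces can fit those same distributions and sample from them instead of testing against them.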


Why isn't this in the public domain?


It's also quite hypocritical to sell a book about privacy that is encumbered with DRM.


Aren't they two different things? The open-software community has, in general, been a big supporter of human rights, but every author or artist should be able to decide how they sell or give away their creations. (Am I missing some important point?)


Maybe in theory, but not when the DRM tracks when you're reading the book:

https://www.adobe.com/privacy/ade.html


It depends on the DRM. If it's the type that identifies the user and reports back on them, then it is ironic. But if the DRM is user-neutral and doesn't report back to anyone, then you are correct.


The ebook version from MIT Press uses Adobe Digital Editions. It is also available (at a higher price) in Kindle format: http://www.amazon.com/-/dp/B0135G71BG


I was already thinking about a whole industry that could be spawned by this premise. You could hire people who would inject "noise" into the internet on your behalf, and you could ask (other) people to measure the signal-to-noise ratio that remained.


It should be possible to add this kind of obfuscation noise algorithmically. Real human activity isn't random, but it does follow certain patterns.
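
A rough sketch of what such a generator might look like, assuming human-like browsing can be approximated by log-normal pauses plus a day/night cycle (the decoy URLs and parameters are purely illustrative, not from the book):

    import math
    import random
    import time
    import urllib.request

    # Purely illustrative decoy URLs; a real tool would draw from a much
    # larger, topic-diverse pool.
    DECOY_URLS = [
        "https://example.com/news",
        "https://example.org/weather",
        "https://example.net/sports",
    ]

    def human_like_pause(hour):
        """Log-normal pause between requests, stretched at night so the
        decoy traffic roughly follows a diurnal activity pattern."""
        base = random.lognormvariate(3.0, 1.0)              # ~20 s median pause
        diurnal = 1.0 + 2.0 * abs(math.cos(math.pi * hour / 24.0))
        return base * diurnal                                # longest around midnight

    def emit_noise(requests=10):
        for _ in range(requests):
            url = random.choice(DECOY_URLS)
            try:
                urllib.request.urlopen(url, timeout=10).read(1024)
            except OSError:
                pass  # it's only decoy traffic; failures don't matter
            time.sleep(human_like_pause(time.localtime().tm_hour))

    if __name__ == "__main__":
        emit_noise()

The hard part, as the comments above note, is making this noise statistically indistinguishable from real behavior to someone who holds large amounts of genuine traffic data.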


At least for specific instances (e.g. "reputation management") such services have existed for a while.



