What if everyone believed that law-abiding citizens should use postcards for their mail? If a nonconformist tried to assert his privacy by using an envelope for his mail, it would draw suspicion. Perhaps the authorities would open his mail to see what he's hiding. Fortunately, we don't live in that kind of world, because everyone protects most of their mail with envelopes. So no one draws suspicion by asserting their privacy with an envelope. There's safety in numbers. Analogously, it would be nice if everyone routinely used encryption for all their email, innocent or not, so that no one drew suspicion by asserting their email privacy with encryption. Think of it as a form of solidarity.
I'm seriously thinking of having this printed up on business cards (1st paragraph on the front, 2nd on the back) so I can just whip them out and hand them to people. By the time they've gotten done saying, "Well, if you have nothing to hide..." it'll be in their hands.
You disapprove of mandatory cavity searches before air travel or entering a school? Sounds like you're a drug dealer.
The real issue here is the power imbalance combined with the lack of oversight and accountability. When I talk to NSA supporters, that's what I try to emphasize.
Did someone seriously say that to you? Did you rebut with "would you give $1 to end poverty globally?"
That's why the laws must change: to protect the people who want to do the right thing.
But in any case, I'm not sure if your example is analogous, because $1 from a person is incrementally helpful, whereas having that person's phone data is almost certainly not. This brings me back to the grandparent poster's anecdote -- I don't think that the NSA would have been able to stop 9/11 just by sorting through everyone's texting and phone calls. It just seems infeasible to really deal with all of that information unless the attackers were very obvious through their conversations -- correct me if I'm wrong.
But that's not the way it is perceived among the general populace. All the relevant authorities - police, government, NSA/CIA/TSA/... - tell you that this surveillance is important and continues to save lives. And they should know, right? After all, they have the classified facts.
This narrative is also supported by our modern crime shows like NCIS where dangerous and resourceful terrorists threaten to kill thousands of people every few weeks and the heroes are kept from doing their work by petty bureaucratic restrictions like search warrants. Oh, and of course they save the world in the end, because they are awesome. In the back of their heads viewers know this is hyperbole. But it nonetheless changes the images that come to mind when people think about fighting terrorism.
We need to remind ourselves of the other side of the discussion at every step, because it doesn't help if we live in our little bubble of consent and do not hone our arguments against the views of those that we need to convince.
The inverted argument is that, if a known criminal gets sent a letter through the mail, should the government be allowed to open it? Or is privacy that absolute?
The fact that both extremes (this and the postcard argument) seem a bit silly shows that some sacrifice of privacy has to be made.
People send postcards all the time precisely because for the most part they don't care about privacy in their mail. In my personal life I can't think of a single example where I sent mail that I wanted to keep private (outside of maybe mail that had my SS#). I think most people are rather indifferent.
"If you hide your mail inside envelopes, does that mean you must be a subversive or a drug dealer, or maybe a paranoid nut"
The main function of the envelope is to tie a bunch of papers together so they don't get separated in the mail.
The real problems arise if data is put together and interpreted against your will and probably without your knowledge. Imagine an ad network that combines your location profile, your Payback data and name/address and sells that to an insurance company, which decides not to have you as a customer because you are probably overweight (drives by car, no visits to any places associated with sport, buys a lot of food). Or imagine an automated alarm that is triggered when you try to enter the country, because you travel with three coworkers from Iraq and you have written an email to your friend telling him you intend to "devastate the USA" (meaning to beat the American branch of your company in a friendly soccer game after work). You cannot rectify these misunderstandings, as nobody is telling you why you now always get the special attention of the TSA or why your insurance application is denied.
This is the stuff I really dread in the current focus on big data.
A great article on the same topic: https://chronicle.com/article/Why-Privacy-Matters-Even-if/12...
Are you so sure about this?
There's always a reason: it's too bothersome, we find the password prompts annoying, our friends don't use it, "it's not worth it for the unimportant stuff", "they'll get me anyway if they really want to". We complain about the NSA snooping, but we can't be bothered to properly encrypt our E-mail, even though the tools are right there, in front of us. For free.
If you're on a Mac, use https://gpgtools.org — there is really no excuse, it's so simple and straightforward to use. I'm sure there are even easier solutions on Windows and Linux.
For email clients, you can use Thunderbird or Evolution, both of which can handle PGP. There's also a new Chrome/Firefox extension that brings PGP to webmail interfaces, such as GMail: http://www.mailvelope.com/ ... because of a Firefox limitation, it's kind of slow on Firefox right now, but that should get fixed in Firefox 27. It also lacks advanced features (e.g. PGP/MIME, or syncing with key servers), but those are coming.
Linux in general has the best security-related tools. Ubuntu's graphical installer, for example, gives you the option to encrypt either your $HOME directory or your entire hard drive (with dm-crypt). When formatting a USB drive, you also have the option to encrypt it. And if you synchronize your files on Dropbox or Google Drive, you can quickly create an encrypted folder by means of Encfs / gnome-encfs. Personally, I went all in on encryption. My hard drive, my USB drives, my Dropbox, my Google Drive storage, all encrypted. I also link to my PGP public key in my email signature, which is cool, as my personal website serves links only through HTTPS.
I actually don't really understand the differences between PGP/INLINE and PGP/MIME, but they have given me headaches, because some clients don't support both.
Evolution's devs refused to support anything other than PGP/MIME for a long time (seems to be fixed now):
K9 cannot read PGP/MIME
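Roughly, the two formats differ in where the ciphertext lives: inline PGP pastes the ASCII-armored blob straight into the plain-text body, while PGP/MIME (RFC 3156) wraps it in a multipart/encrypted container with a separate control part, which is why a client that only speaks one format shows garbage or a mystery attachment for the other. A minimal sketch of the two shapes using Python's stdlib email package (the armored blob here is a placeholder, not real ciphertext):

```python
from email.message import EmailMessage
from email.mime.multipart import MIMEMultipart
from email.mime.application import MIMEApplication

# Placeholder standing in for real ASCII-armored ciphertext from gpg.
ARMOR = "-----BEGIN PGP MESSAGE-----\n...\n-----END PGP MESSAGE-----\n"

# PGP/INLINE: the armor simply *is* the text/plain body.
inline = EmailMessage()
inline["Subject"] = "hello"
inline.set_content(ARMOR)

# PGP/MIME (RFC 3156): multipart/encrypted with two fixed parts:
# a tiny application/pgp-encrypted control part, then the ciphertext.
mime = MIMEMultipart("encrypted", protocol="application/pgp-encrypted")
mime.attach(MIMEApplication(b"Version: 1\n", "pgp-encrypted"))
mime.attach(MIMEApplication(ARMOR.encode(), "octet-stream"))

print(inline.get_content_type())  # text/plain
print(mime.get_content_type())    # multipart/encrypted
```

A client that only understands inline PGP sees the PGP/MIME message as an empty mail with an octet-stream attachment, which matches the K9/Evolution incompatibilities above.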
Here's what I blogged after trying it a few months ago:
I suspect the way most people on Windows will use PGP is by downloading GPG4Win and Enigmail. The fact that this is two separate things to install is a showstopper for a lot of people. Installation of Enigmail requires right-clicking on a link to save it, then locating the file in Windows Explorer and dragging it into an open Thunderbird window. I can do that and you can do that, but it's arcane wizardry for a significant portion of the population.
Also, the GPG installer appears to randomly crash under certain circumstances on Windows 8.
That's just installation. Then you have to get people to generate a key using a password that's different from the one they use to login to their email server. I suspect most users would have a difficult time using the GPG interface; it's very reminiscent of desktop Linux applications a decade ago.
There's more, but it's too long for here: http://jamesgecko.com/everyone-should-use-pgp-but-its-kinda-...
Encryption for mail will remain practically zero until someone implements an extremely simple, integrated user experience. It's one of those things no normal person will acquire unless you ship it with Windows and make it totally transparent.
Any suggestions for what to do on a mobile device to pair with desktop mail?
Cryptography software was subject to export controls at the time... so they printed it out, and scanned it in Europe to create a 'clean' version of the software for use abroad (PGPi). They continued to do this for new releases of PGP until the export controls were lifted.
Must have made for a nasty devops relationship...
As I keep repeating, the next 9/11 will not come in the form of six people hijacking our planes at our airports and hitting our buildings; it will come from Chinese or other government hackers stealing our own data, neatly stored and organized in the NSA's locker rooms. And that's why it's a horrifying idea for anyone to have all the information on an entire nation within one database (I know it's a bit more complicated than that).
Since this nation, like most others, bases its order on organized information, you can imagine an entire country in full economic lockdown once all your credit card numbers, all your social security info and all your other data leak out to third parties or pirates.
Where do you draw the line between an individual's assertion of privacy over confidential information and the premise of intellectual property, where encryption is used to achieve goals traditionally perceived as less noble, such as DRM?
Where does the distinction emerge, that my social security number and HIV status deserve protection, but an episode of Game of Thrones does not?
I'm not asking this question because I don't understand the distinction. I'm asking this question because it's not often discussed.
If information wants to be free, then what is our objective measure, to define the reason why we might choose to confine some, all or none of our information, especially when employing encryption?
If I were to send you a GPG-encrypted email, it's a relationship of mutual trust: I know that you have full access to the information, and can do with it what you want. However, the ISP/NSA/GCHQ/KGB do not have access, because they are naturally not trusted.
DRM is different: if I distribute content to you using DRM, I am saying that I do not trust you to have the unencrypted content. Furthermore, DRM implies control over the hardware and software used to play content.
If HBO intended to only share Game of Thrones with their friends and not the public, then it's their right to encrypt it and only make it available to those people. If their intention is to share it with me because I paid for it, it's a little bit ridiculous to give me access to the decryption key (perhaps embedded in a blu-ray player I own) and then tell me how and when I'm allowed to use it under penalty of law.
DRM constitutes an attack on the loyal functioning of someone else's machine, in a way that the owner of that machine is not meant to counteract. That is distinct from merely having an encrypted file, which involves no such compromise of the loyalty expected of whatever computer it happens to be on at any particular time (indeed, encryption rather assumes the file will end up on an unfriendly machine in some form or another, else there'd be no point).
I refuse to tolerate the idea of placing a high-powered machine in my living room, if that machine won't tell me what it's doing or who it's talking to. In that moment I feel completely justified in my attempts to break encryption, read the information collected about me, and reverse engineer or destroy the device.
This is an aspect that OP's article doesn't cover: consumer products bundled with mandated encryption that operates against the interest of the end user, designed to eavesdrop on the user, capturing their ambient behavior as a matter of course, and to keep them in the dark about the information collected, whether accelerometer data, GPS data, or channel-changing habits.
This is the inverse of Phil Zimmermann's goal. Pervasive, continuous observation and telemetry, especially that which occurs without the consent or awareness of the observed.
But ultimately, the content is being defended with encryption.
If there's a dossier in an office somewhere, because I visited a doctor, I would want the clerks charged with handling that dossier to exert control over it. If they're saving it to disk on a server in a basement IT closet, I'd want it encrypted.
Meanwhile, if I was an employee at a production company, producing a television episode, I'd probably be expected to exert the same degree of control over the fragments of unfinished content, as they trickle in from the soundstage and are finished up in post production, but then, they undergo a different type of encryption, when locked up in DRM for release to paying subscribers.
The question is more an ethical one, not a technical one.
If we have encryption as a tool we might use, when is its use respectable, and when is its defeat admirable, and why? Why is it cool when we crack DVDCSS, but awful when the NSA breaks into Google and bypasses their SSL firewall? Are we just hypocrites who want to have their cake and eat it too, or is there an appreciable measure of difference between the two?
I expect my doctor and the testing center to control the dissemination of my protected healthcare information.
The production company and the TV station expect their viewers to control the dissemination of their artistic works.
Is one expectation less reasonable than the other?
It's not so much about breaking encryption as about attacking a static target.
I think Phil Zimmermann's assumption (at least, within the context of this article) is that if you're attacking a sample of encrypted information, that act alone comes with the implicit understanding that you're probably acting against the interests of the person who locked the information away in the first place. Whether that person or group of people is worthy of the attack is its own judgement call.
Broadcasting or reselling should be covered by legal protections against those acts.
Does that mean you can only give one copy to one person? If not, then can you give one copy to as many people as you like? If that's true, what's wrong with using bittorrent to do all of that quickly and more easily?
I'd rather see a more nuanced legal approach. Ideally, a more balanced and fair system would probably lead to a greater respect for the law overall.
Off the top of my head: running a few torrents is, and should be, ignorable. Running a huge number of them should earn a small fine, like a traffic ticket. Any kind of commercial operation or reseller should be shut down.
I'm actually fine with public trackers being outlawed, as long as search engines aren't neutered. If someone finds a tracker, and passes some kind of vetting process to gain access, it's clearly not publicly accessible so shouldn't be subject to legal investigation without a warrant.
Just my armchair policy on a lazy Saturday :)
Even today, the distribution of cryptography is not totally open. And even if you use cryptography, you may have to decrypt your data at a court's request, so it's more or less pointless against your own state.
Regardless of the state's ability to compel decryption, you should still encrypt everything, because it gives you at least partial control. Your options are much, much better if the government can't build their case without your cooperation. Even if a judge compels decryption, your lawyer can try to get the order rescinded before you comply, or at worst stall for a while; you can claim you forgot your password and risk an obstruction charge, if that risk is smaller than the risk of the charges prosecutors could bring after gaining access to your decrypted content; and so on.
If you leave everything plaintext, you have no possibility of option or control; the government gets access to the whole kit and caboodle instantly.
The reason computers and networks are useful to us is because they can instantaneously create perfect copies of long strings of numbers, which programs interpret to have specific meanings. If we have a use case where perfect, rapid proliferation may be problematic, computational digitization should be considered a suspect mechanism for the implementation of that use case.
tl;dr: implement [at least most of] your steganography the old-fashioned way.
"Hey, look, this is not encryption, we're just dumb!"
"Hi. I'm Jesus!"
"Wow, Jesus? Nice to meet you, can I have your public key?"
I don't trust any of you :)
I don't care whether a Linux kernel release is signed by a guy who has government-issued ID that says "Linus Torvalds" on it. I care whether it is signed by the guy who started the whole thing more than 20 years ago.
There would be value in a modified web of trust where, when you receive an e-mail from hipaulshi, you can be certain that it's the same person you replied to on HN. Who cares what her (or his) real name is? In a sense, what you really want for most internet-centric communication is something like certificate pinning.
Of course things are different when you're talking about people who you know in person. But for those you don't have to organize key signing parties.
And even for people I know in the real world, I would be happy with a solution that just pins the very first public key it sees as long as nothing obviously suspicious is happening. This suffices for 99.9% of use cases. 
 No, this is not sufficient if you know that a government is out to get you. But if it helps to move us into an encryption-by-default setting, it is still a win for the overall ecosystem.
The alternative would be to confirm the fingerprint of the public key out of band, but lots of entities aren't already authenticated to each other (i.e. I can authenticate my best friend but not Amazon) and don't have a safe/practical way to check each other's public keys.
This is exactly the reasoning behind proposals such as TACK.
Edited to add: Keep in mind that external verification basically never happens, as much as cryptographers want it to happen. In the real world, you have a choice between no verification at all and a pinning-based verification which is weaker than the theoretical ideal, but makes MitM'ing significantly harder and much more likely to be detected for everybody at no user interface cost.
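The pinning idea above is sometimes called trust-on-first-use: record the fingerprint of the first key you see for a correspondent, and flag any later change. A hypothetical sketch (the class, names, and storage are made up for illustration, not any real client's implementation):

```python
import hashlib

class KeyPinStore:
    """Trust-on-first-use: pin the first public key seen per identity."""

    def __init__(self):
        self._pins = {}  # identity -> pinned fingerprint

    @staticmethod
    def fingerprint(public_key_bytes: bytes) -> str:
        # A hash of the key material stands in for a real PGP fingerprint.
        return hashlib.sha256(public_key_bytes).hexdigest()

    def check(self, identity: str, public_key_bytes: bytes) -> bool:
        """True if the key matches the pin (or is the first one seen).
        False signals a key change, i.e. a possible MitM."""
        fp = self.fingerprint(public_key_bytes)
        pinned = self._pins.setdefault(identity, fp)
        return pinned == fp

store = KeyPinStore()
print(store.check("hipaulshi", b"key-A"))  # True: first use, key gets pinned
print(store.check("hipaulshi", b"key-A"))  # True: matches the pin
print(store.check("hipaulshi", b"key-B"))  # False: key changed, raise alarm
```

Note the trade-off stated above: the very first key is accepted blindly, so an attacker present at first contact wins, but every later MitM attempt becomes detectable at no user-interface cost.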
ADD: Fuck this statist cesspool.
Side note: maybe there could be a way to irreversibly hash one's fingerprint or DNA sample into an electronic signature
1. A person has only one genetic code in their body.
2. A person's genetic code never changes.
Most of the time, this doesn't matter, but if we're talking about taking hashes for security, suddenly it does.
Obviously no one has the capability to do this, so they present you instead with "most likely" results. Not the level of proof I'd be looking for in a court of law.
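There's a further wrinkle in hashing a biometric directly: a cryptographic hash is designed so that even a one-bit difference in the input yields a completely unrelated digest, while two readings of the same fingerprint or DNA sample are never bit-identical. A quick stdlib illustration (the "readings" are made-up toy data):

```python
import hashlib

# Two "readings" of the same sample, differing by one character,
# as noisy biometric measurements inevitably do.
reading_a = b"ACGTACGTACGTACGT"
reading_b = b"ACGTACGTACGTACGA"

digest_a = hashlib.sha256(reading_a).hexdigest()
digest_b = hashlib.sha256(reading_b).hexdigest()

# The digests are entirely unrelated: no fuzzy matching survives a hash,
# so "most likely" biometric comparisons can't happen on hashed values.
print(digest_a == digest_b)  # False
```

So a raw hash of a fingerprint or DNA sample would reject the legitimate owner on almost every attempt; schemes that try to fix this (fuzzy extractors and the like) exist, but they're a research area, not a drop-in signature.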
I'm thinking identical twins. Got anything for #4?
And this is part of why authentication and identity are very difficult things to do right, mostly because very few people have thought about what it is they're verifying.
If I publish a public key and say it belongs to me, 'Bob Smith', the only practical use that has is that you can verify that a future message signed by 'Bob Smith' was signed by someone with access to the same private key as the guy who originally published the public key. Any assumption about who 'Bob Smith' actually is, and who that corresponds to in the real world (what other identities do they assert), and also that 'Bob Smith' is a single entity, are simply assumptions.
It's impossible to pin a human down to a single, guaranteed verifiable, non impersonatable and non revocable identity. 'Documents issued by men with guns' isn't foolproof, but we use it as a trust anchor mostly because everyone else does, and we don't have much alternative.
Maybe it's not advanced by HN standards, but you can do things that most email users would struggle with.
The BBC had political vetting. http://www.cambridgeclarion.org/press_cuttings/mi5.bbc.page9...
Special Branch spied on an elected member of parliament http://news.bbc.co.uk/1/hi/programmes/true_spies/2378459.stm
Unions in the 70s had extensive networks of police informants and undercover officers and agents provocateurs. https://www.wsws.org/en/articles/2002/12/spie-d10.html
That union scrutiny activity turned into blacklisting, meaning many workers were denied jobs. A secret McCarthy-esque activity. This blacklisting was so severe it led to laws being brought in around data protection. Those laws originally only covered computer systems. Because the blacklists were run on index cards, the laws were changed to cover more forms of data retention. http://www.bbc.co.uk/programmes/b02xcn7d
Abuses of surveillance are common, and long lasting.
George Orwell foresaw the potential for abuse as far back as 1948, when he wrote "1984" <https://en.wikipedia.org/wiki/Nineteen_Eighty-Four>.