Why I Wrote PGP (1999) (philzimmermann.com)
257 points by numlocked on Nov 30, 2013 | 109 comments



Perhaps you think your email is legitimate enough that encryption is unwarranted. If you really are a law-abiding citizen with nothing to hide, then why don't you always send your paper mail on postcards? Why not submit to drug testing on demand? Why require a warrant for police searches of your house? Are you trying to hide something? If you hide your mail inside envelopes, does that mean you must be a subversive or a drug dealer, or maybe a paranoid nut? Do law-abiding citizens have any need to encrypt their email?

What if everyone believed that law-abiding citizens should use postcards for their mail? If a nonconformist tried to assert his privacy by using an envelope for his mail, it would draw suspicion. Perhaps the authorities would open his mail to see what he's hiding. Fortunately, we don't live in that kind of world, because everyone protects most of their mail with envelopes. So no one draws suspicion by asserting their privacy with an envelope. There's safety in numbers. Analogously, it would be nice if everyone routinely used encryption for all their email, innocent or not, so that no one drew suspicion by asserting their email privacy with encryption. Think of it as a form of solidarity.

I'm seriously thinking of having this printed up on business cards (1st paragraph on the front, 2nd on the back) so I can just whip them out and hand them to people. By the time they've gotten done saying, "Well, if you have nothing to hide..." it'll be in their hands.


My go-to is something along the lines of:

You disapprove of mandatory cavity searches before air travel or entering a school? Sounds like you're a drug dealer.


I don't think you understand why people approve of what the NSA is doing, which is why that's usually going to be an ineffective argument. The NSA promises a safer country, and all we have to do is let them do a little bit of surveillance behind the scenes. We won't even notice -- it's not intrusive in the same way that a cavity search is. An NSA supporter once asked me if I would have given up the metadata on my phone if it would have prevented 9/11 and saved ~3000 lives. I wouldn't, for reasons that have been discussed on HN repeatedly, but I think it's still a fair question.

The real issue here is the power imbalance combined with the lack of oversight and accountability. When I talk to NSA supporters, that's what I try to emphasize.


> An NSA supporter once asked me if I would have given up the metadata on my phone if it would have prevented 9/11 and saved ~3000 lives. I wouldn't, for reasons that have been discussed on HN repeatedly, but I think it's still a fair question.

Did someone seriously say that to you? Did you rebut with "would you give $1 to end poverty globally?"


The real issue is close to this. The real issue is that once the organization is empowered, it falls to an individual to say "no" to improper surveillance. That's what's wrong with the "there are good people behind it and we're all on the same team" argument. When it becomes your job to prevent an attack, and you have the authority to spy more than you think is morally correct, do you spy anyway? Or are you willing to let 3000 people die and face the world knowing you could have prevented it, but chose not to on principle?

That's why the laws must change: to protect the people who want to do the right thing.


To play devil's advocate, if everyone in the U.S. gave $1 to a fund to help the world's poor, that'd be $314 million in aid. Nothing to scoff at, even though it wouldn't end global poverty.

But in any case, I'm not sure if your example is analogous, because $1 from a person is incrementally helpful, whereas having that person's phone data is almost certainly not. This brings me back to the grandparent poster's anecdote -- I don't think that the NSA would have been able to stop 9/11 just by sorting through everyone's texting and phone calls. It just seems infeasible to really deal with all of that information unless the attackers were very obvious through their conversations -- correct me if I'm wrong.


This argument is not a fair question. It is a straw man. The debate has never been about the validity of seizing the metadata on one phone with reasonable suspicion. It is about warrantless wholesale surveillance of hundreds of millions of innocent people, who are being forced to give up essential freedoms on the remote, off-hand chance that, if the stars align a certain way, "the authorities" may stop a hypothetical criminal act. It is about a trade-off where the costs to everyone are exceedingly high and the benefits to anyone are exceedingly low.


> if the stars align a certain way, "the authorities" may stop a hypothetical criminal act.

But that's not the way it is perceived by the general populace. All the relevant authorities - police, government, NSA/CIA/TSA/... - tell you that this surveillance is important and continues to save lives. And they should know, right? They're the ones with the classified facts.

This narrative is also supported by our modern crime shows like NCIS where dangerous and resourceful terrorists threaten to kill thousands of people every few weeks and the heroes are kept from doing their work by petty bureaucratic restrictions like search warrants. Oh, and of course they save the world in the end, because they are awesome. In the back of their heads viewers know this is hyperbole. But it nonetheless changes the images that come to mind when people think about fighting terrorism.

We need to remind ourselves of the other side of the discussion at every step, because it doesn't help if we live in our little bubble of consent and do not hone our arguments against the views of those that we need to convince.


They had more information than they needed to prevent 9/11. They didn't connect the dots. How would one more dot have helped?


But the dots they had came from somewhere, right? And those were collected through some surrender of privacy.

The inverted argument is that, if a known criminal gets sent a letter through the mail, should the government be allowed to open it? Or is privacy that absolute?

The fact that both extremes (this and the postcard argument) seem a bit silly suggests that there's some sacrifice in privacy to be made.


It's a hypothetical


I've found that this doesn't work quite as well because modesty (the desire not to be seen 'naked') tangles up with privacy and creates a "that's different..." that seems difficult to quickly dismiss.


I think the postcards example hurts his argument.

People send postcards all the time precisely because for the most part they don't care about privacy in their mail. In my personal life I can't think of a single example where I sent mail that I wanted to keep private (outside of maybe mail that had my SS#). I think most people are rather indifferent.

"If you hide your mail inside envelopes, does that mean you must be a subversive or a drug dealer, or maybe a paranoid nut?" The main function of the envelope is to tie a bunch of papers together so they don't get separated in the mail.


So it would be OK if your bank statements came rubberbanded together (or in transparent plastic "patriot envelopes") with your balances showing (for your convenience)? How about credit card statements showing what you owe? The results of that HIV test maybe?


No no no, if you argue like this you have already chosen the wrong battleground for this debate. Your email and metadata are not open to everyone, only to a "trusted" third party: law enforcement. People (rightly) assume that the stuff most people want to keep private from their peers (conflicts with their spouse, sexual orientation and kinks, financial status, health issues) isn't of interest to the NSA's data analysts.

The real problems arise when data is put together and interpreted against your will, and probably without your knowledge. Imagine an ad network that combines your location profile, your Payback data, and your name/address, and sells it to an insurance company that decides not to have you as a customer because you are probably overweight (travels by car, never visits places associated with sport, buys a lot of food). Or imagine an automated alarm that is triggered when you try to enter the country, because you traveled with three coworkers from Iraq and wrote an email to a friend saying you intend to "devastate the USA" (meaning to beat the American branch of your company in a friendly soccer game after work). You cannot rectify these misunderstandings, because nobody tells you why you now always get the special attention of the TSA, or why your insurance application was denied.

This is the stuff I really dread in the current focus on big data.

A great article on the same topic: https://chronicle.com/article/Why-Privacy-Matters-Even-if/12...


> People (rightly) assume that the stuff most people want to keep private from their peers (conflicts with their spouse, sexual orientation and kinks, financial status, health issues) isn't of interest to the NSA's data analysts.

Are you so sure about this?

http://www.huffingtonpost.com/2013/11/26/nsa-porn-muslims_n_...


That doesn't bother me at all. Other than your medical information, I'm pretty sure the government already knows all that information. And I'm not really sure how them knowing my HIV status leaves me at a disadvantage.


Well, they are currently fixing the part about not knowing your medical information. People are far too willing to give up their privacy if they can save a buck.


The ability to read words and figures plainly exposed isn't restricted to the government.


It might if they ever decide that HIV positive folks are too expensive to keep treating.


This is the case with every insurer (unless one operated blind and didn't know what it was cutting checks for?) but government at least answers to anti-discrimination standards. If this became normal practice among insurers, then you'd be entirely screwed.


Except that's not in the realm of reality for the US...


I think there's a misunderstanding. The author doesn't mean actual postcards, like holiday greetings; those really are sent in the open, and nobody cares. The author means sending letters without envelopes.


How about asking people how they would feel if curtains were made illegal?


…and yet so many of us (yes, even the tech geeks) do not use it.

There's always a reason: it's too bothersome, we find the password prompts annoying, our friends don't use it, "it's not worth it for the unimportant stuff", "they'll get me anyway if they really want to". We complain about the NSA snooping, but we can't be bothered to properly encrypt our E-mail, even though the tools are right there, in front of us. For free.

If you're on a Mac, use https://gpgtools.org — there is really no excuse, it's so simple and straightforward to use. I'm sure there are even easier solutions on Windows and Linux.


For Linux there's Seahorse, the Gnome app for managing PGP and SSH Keys. It comes installed by default on Ubuntu. Generating a key and synchronizing it with a key server couldn't be more painless.

For email clients, you can use Thunderbird or Evolution, both of which can handle PGP. There's also a new Chrome/Firefox extension that brings PGP to webmail interfaces such as GMail: http://www.mailvelope.com/ ... Because of a Firefox limitation it's kind of slow on Firefox right now, but that should be fixed in Firefox 27. It also lacks advanced features (e.g. PGP/MIME, or syncing with key servers), but those are coming.

Linux in general has the best security-related tools. Ubuntu's graphical installer, for example, gives you the option to encrypt either your $HOME directory or your entire hard drive (with dm-crypt). When formatting a USB drive, you also have the option to encrypt it. And if you synchronize your files with Dropbox or Google Drive, you can quickly create an encrypted folder by means of EncFS / gnome-encfs. Personally, I went all in on encryption: my hard drive, my USB drives, my Dropbox, my Google Drive storage, all encrypted. I also link to my PGP public key in my email signature, which is cool, as my personal website serves links only over HTTPS.


Yeah it looks good in theory, until you start really using gpg and promote it amongst friends and contacts...

I actually don't really understand the differences between PGP/INLINE and PGP/MIME but it has given me headaches, because some clients don't support both.

Evolution's devs refused to support anything other than PGP/MIME for a long time (seems to be fixed now):

http://mozdev.org/pipermail/enigmail/2010-July/012617.html

K9 cannot read PGP/MIME https://code.google.com/p/k9mail/issues/detail?id=5864


The Thunderbird-with-PGP experience on Windows is atrocious, absolutely disgraceful.

Here's what I blogged after trying it a few months ago:

I suspect the way most people on Windows will use PGP is by downloading GPG4Win and Enigmail. The fact that this is two separate things to install is a showstopper for a lot of people. Installation of Enigmail requires right-clicking a link to save it, then locating the file in Windows Explorer and dragging it into an open Thunderbird window. I can do that and you can do that, but it's arcane wizardry for a significant portion of the population.

Also, the GPG installer appears to randomly crash under certain circumstances on Windows 8.

That's just installation. Then you have to get people to generate a key using a password that's different from the one they use to login to their email server. I suspect most users would have a difficult time using the GPG interface; it's very reminiscent of desktop Linux applications a decade ago.

There's more, but it's too long for here: http://jamesgecko.com/everyone-should-use-pgp-but-its-kinda-...


I use it wherever I can, but encryption is something that requires the participation of all communicants. The only option for an unwilling or unable participant is to be excluded from the mail completely and contacted in a separate channel (preferably Real Life). This, combined with the fact that encryption requires effort, makes it very, very hard to propagate.

Encryption for mail will remain practically zero until someone implements an extremely simple, integrated user experience. It's one of those things no normal person will acquire unless you ship it with Windows and make it totally transparent.


gpgtools is great. It integrates very nicely with Mail.app.

Any suggestions for what to do on a mobile device to pair with desktop mail?


I was just starting out with PGP today, and found iPGMail to be useful on iOS.


One of the best stories from the PGP saga is that Zimmermann worked with MIT Press to publish the entire PGP source code in machine-readable print. This allowed him to argue that his scary-dangerous cryptosystem was in fact protected under the First Amendment as free speech.


It gets better: they not only printed it out, they also scanned it!

http://www.pgpi.org/pgpi/project/scanning/

Cryptography software was subject to export controls at the time... so they printed it out and scanned it in Europe to create a 'clean' version of the software for use abroad (PGPi). They continued to do this for new releases of PGP until the export controls were lifted.

Must have made for a nasty devops relationship...


To all the people and law enforcement officials who live by "if you don't have anything to hide, you shouldn't worry about your privacy", I say: that's your own primitive rationalization. I hide/encrypt/lock my stuff because I DO NOT TRUST you to handle my private info, and therefore my life. Most government organizations cannot keep their own stuff private, never mind my personal info. The same applies to the lock on my house: I don't lock my door because I sit at home building an atomic bomb and don't want officials to know about it; I lock my door because you (LE) do a pathetic job of keeping my house safe from burglars.

As I keep repeating, the next 9/11 will not come in the form of a few people hijacking our planes at our airports and hitting our buildings; it will come from Chinese or other state-sponsored hackers stealing our own data, neatly stored and organized in the NSA's locker rooms. That's why it's a horrifying idea for any one agency to hold all the information on an entire nation in a single database (I know it's a bit more complicated than that).

Since this nation, like most others, bases its order on organized information, you can imagine the entire country in a full economic lockdown once all your credit card numbers, social security info, and other data leak out to third parties or pirates.

http://en.wikipedia.org/wiki/Nothing_to_hide_argument


So, no one ever dares to ask this, but I'm going to because I never see mention of it crop up in these sorts of discussions.

Where do you draw the line between an individual's assertion of privacy over confidential information and the use of encryption to achieve goals traditionally perceived as less noble, such as enforcing intellectual property through DRM?

Where does the distinction emerge, that my social security number and HIV status deserve protection, but an episode of Game of Thrones does not?

I'm not asking this question because I don't understand the distinction. I'm asking this question because it's not often discussed.

If information wants to be free, then what is our objective measure, to define the reason why we might choose to confine some, all or none of our information, especially when employing encryption?


The real question here is not one of information freedom, but one of trust.

If I were to send you a GPG-encrypted email, it's a relationship of mutual trust: I know that you have full access to the information, and can do with it what you want. However, the ISP/NSA/GCHQ/KGB do not have access, because they are naturally not trusted.

DRM is different: if I distribute content to you using DRM, I am saying that I do not trust you to have the unencrypted content. Furthermore, DRM implies control over the hardware and software used to play content.


I draw the line between things that are intended to be private, and things that are intended to be publicly available.

If HBO intended to only share Game of Thrones with their friends and not the public, then it's their right to encrypt it and only make it available to those people. If their intention is to share it with me because I paid for it, it's a little bit ridiculous to give me access to the decryption key (perhaps embedded in a blu-ray player I own) and then tell me how and when I'm allowed to use it under penalty of law.


If HBO wants to encrypt stuff on a system they own, or transmit it encrypted, that's their business. However, a computer system should always serve its owner. And the only way you can do effective DRM (leaving content viewable to me, but only under your conditions) is by subverting my computer from my control; otherwise I'll just be able to read the key.

DRM constitutes an attack on the loyal functioning of someone else's machine, in a way that the owner of that machine is not meant to counteract. That is distinct from merely having an encrypted file, which represents no such compromise to the loyalty expected of whatever computer it happens to be on at any particular time (indeed, encryption sort of assumes the file will end up on an unfriendly machine in some form or another, else there'd be no point).


I agree. In that scenario, I feel absolutely no need for restraint in attacking someone else's encrypted information. Namely, when that someone else is an uninvited eavesdropper.

I refuse to tolerate the idea of placing a high-powered machine in my living room, if that machine won't tell me what it's doing or who it's talking to. In that moment I feel completely justified in my attempts to break encryption, read the information collected about me, and reverse engineer or destroy the device.

This is an aspect the OP's article doesn't cover: consumer products bundled with mandated encryption that operates against the interest of the end user, designed to eavesdrop on the user, capture their ambient behavior as a matter of course, and misinform the user about the information collected, whether it be accelerometer data, GPS data, or channel-changing habits.

This is the inverse of Phil Zimmermann's goal: pervasive, continuous observation and telemetry, especially that which occurs without the consent or awareness of the observed.


DRM and encryption aren't the same thing. I don't think anyone would complain that Game of Thrones is stored in an encrypted format on its servers, or transmitted using encryption to broadcasting stations. Conversely, I doubt you would want your social security number or HIV status stored under DRM - you might lose access to that information in the future.


I know that DRM isn't explicitly defined as encryption, but encryption is usually the apparatus that achieves the goal of DRM. In most modern situations DRM schemes involve carefully implemented encryption.

But ultimately, the content is being defended with encryption.

If there's a dossier in an office somewhere, because I visited a doctor, I would want the clerks charged with handling that dossier to exert control over it. If they're saving it to disk on a server in a basement IT closet, I'd want it encrypted.

Meanwhile, if I was an employee at a production company, producing a television episode, I'd probably be expected to exert the same degree of control over the fragments of unfinished content, as they trickle in from the soundstage and are finished up in post production, but then, they undergo a different type of encryption, when locked up in DRM for release to paying subscribers.

The question is more an ethical one, not a technical one.

If we have encryption as a tool we might use, when is its use respectable, and when is its defeat admirable, and why? Why is it cool when we crack DVD CSS, but awful when the NSA breaks into Google and bypasses their SSL? Are we just hypocrites who want to have our cake and eat it too, or is there an appreciable measure of difference between the two?


Encryption is a tool, like a knife. The use of it is what makes an action useful or not: to cut a steak, or to stab an innocent person.


Apples to oranges. Game of Thrones episodes are inherently meant for distribution, while my emails and your HIV report are not.


Both are intended for controlled distribution.

I expect my doctor and the testing center to control the dissemination of my protected healthcare information.

The production company and the TV station expect their viewers to control the dissemination of their artistic works.

Is one expectation less reasonable than the other?


The 4th Amendment implies that conversations about anything, including the bits of a Game of Thrones episode, should be free from snooping, at least when those conversations are encrypted. Law enforcement can get a warrant when they have sufficient evidence of illegal activity.


Yeah, Phil Zimmermann's article mostly addresses intercepting vulnerable transmissions over a wire or through the airwaves.

Not so much about breaking encryption, and attacking a static target.

I think Phil Zimmermann's assumption (at least within the context of this article) is that if you're attacking a sample of encrypted information, that act alone comes with the implicit understanding that you're probably acting against the interests of the person who locked the information away in the first place. Whether that person or group is worthy of the attack is its own judgment call.


Personally I feel trading a single copy of any IP with a friend or individual should be covered under fair use.

Broadcasting or reselling should remain subject to legal penalties.


That seems so right, but it gets fuzzy so fast as to be impractical.

Does that mean you can only give one copy to one person? If not, then can you give one copy to as many people as you like? If that's true, what's wrong with using bittorrent to do all of that quickly and more easily?


Nothing, in my opinion. Making protected IP easily and publicly available rails pretty hard against the copyright legal framework, it's almost incompatible.

I'd rather see a more nuanced legal approach. Ideally, a more balanced and fair system would probably lead to a greater respect for the law overall.

Off the top of my head: Running a few torrents is, and should be, ignorable. Running a huge amount of them should be a small fine, like a traffic ticket. Any kind of commercial operation or reseller should be shut down.

I'm actually fine with public trackers being outlawed, as long as search engines aren't neutered. If someone finds a tracker, and passes some kind of vetting process to gain access, it's clearly not publicly accessible so shouldn't be subject to legal investigation without a warrant.

Just my armchair policy on a lazy Saturday :)


Wow, that's one scenario I didn't even consider: the government making it illegal to use encryption. Blows my mind that you could some day be considered a criminal for using PGP.


Actually, the use of cryptography with computers was illegal at one point. Here in France, the strength of SSH was capped by law.

Even today, the distribution of cryptography is not totally open. Oh, and even if you use cryptography, you have to decrypt your data at the request of the courts, so it's more or less pointless against your own state.


For the record, mandatory decryption is an unsettled issue in the U.S. and conflicting rulings have been issued. It's entirely possible that a judge in a given encryption case will rule that the accused's right not to self-incriminate as defined in the Fifth Amendment precludes mandatory decryption.

Regardless of the state's ability to compel decryption, you should still encrypt everything, because it gives you at least partial control. Your options are much, much better if the government can't build its case without your cooperation, even if a judge compels decryption: your lawyer can try to get the order rescinded before you comply, or at worst stall for a while; you can claim you forgot your password and risk an obstruction charge, if that risk is smaller than the risk of the charges prosecutors could bring with access to your decrypted content; and so on.

If you leave everything plaintext, you have no options and no control; the government gets access to the whole kit and caboodle instantly.


Chaffing and winnowing[1], as proposed by Rivest, can handle this. It'd be interesting to use this as a trunking mechanism to bundle many conversations together.

[1] http://en.wikipedia.org/wiki/Chaffing_and_winnowing
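The trick is easy to sketch: every packet carries a serial number, a bit, and a MAC, and only holders of the shared key can tell wheat from chaff. A toy Python illustration (the key, packet shape, and bit-level granularity here are my own simplifications, not Rivest's exact format):

```python
import hashlib
import hmac
import os
import random

KEY = b"shared secret between sender and receiver"  # illustrative key

def mac(serial: int, bit: int) -> bytes:
    """Authenticate a (serial, bit) pair with the shared key."""
    return hmac.new(KEY, f"{serial}:{bit}".encode(), hashlib.sha256).digest()

def chaff(bits):
    """For each message bit, emit the real packet (valid MAC) and a
    chaff packet carrying the opposite bit with a random, bogus MAC."""
    stream = []
    for serial, bit in enumerate(bits):
        stream.append((serial, bit, mac(serial, bit)))    # wheat
        stream.append((serial, 1 - bit, os.urandom(32)))  # chaff
    random.shuffle(stream)
    return stream

def winnow(stream):
    """Receiver keeps only packets whose MAC verifies, sorted by serial."""
    wheat = [(s, b) for (s, b, m) in stream
             if hmac.compare_digest(m, mac(s, b))]
    return [b for s, b in sorted(wheat)]

bits = [1, 0, 0, 1, 1, 0]
assert winnow(chaff(bits)) == bits
```

Note that nothing is ever "encrypted" in the legal sense: every packet is sent in the clear, and confidentiality comes only from the eavesdropper's inability to check the MACs.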


Could there be an encryption mechanism that outputs valid sentences? If anyone tried to ban encryption, they'd have to argue that your text is encrypted because it doesn't make much sense to them.


One example is Bananaphone [0], which is designed to help Tor traffic look more like normal traffic to evade DPI.

[0] https://lists.torproject.org/pipermail/tor-dev/2013-November...


Computer-based steganography is destined to fail in the same way DRM is destined to fail. If a computer program can interpret the data, someone else can reverse engineer the formats until the solution is found. It might slow things down slightly, but it is not likely to be effective alone.

The reason computers and networks are useful to us is because they can instantaneously create perfect copies of long strings of numbers, which programs interpret to have specific meanings. If we have a use case where perfect, rapid proliferation may be problematic, computational digitization should be considered a suspect mechanism for the implementation of that use case.

tl;dr: implement [at least most of] your steganography the old-fashioned way.


Yes, there even was such a tool posted on HN some time ago. I can't remember its name, but basically it got its corpus from twitter data. Maybe someone else can post a link?


Such steganography is sort of possible, though it's hard to make it not look like the output of a markov babbler.
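A toy sketch of the word-choice approach in Python: each secret bit selects one of two interchangeable words per slot, so the cover text parses as English. The word table is invented for illustration, and the one-bit-per-word capacity plus the skewed word statistics are exactly why such output tends to look like a markov babbler:

```python
# Each slot offers two interchangeable words; the secret bit picks one.
SLOTS = [("the", "a"), ("quick", "sly"), ("fox", "cat"),
         ("jumped", "leapt"), ("over", "past"), ("the", "a"),
         ("dog", "hound")]

def hide(bits):
    """Encode a bit sequence as a grammatical-looking sentence."""
    assert len(bits) <= len(SLOTS), "toy table limits message length"
    return " ".join(SLOTS[i][b] for i, b in enumerate(bits))

def reveal(text):
    """Recover the bits by looking up each word's position in its slot."""
    return [SLOTS[i].index(w) for i, w in enumerate(text.split())]

msg = [0, 1, 1, 0, 1, 0, 1]
cover = hide(msg)          # e.g. "the sly cat jumped past the hound"
assert reveal(cover) == msg
```

Real systems (Bananaphone included) use large corpora and language models instead of a fixed table, which raises capacity but makes the statistical fingerprint problem harder, not easier.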


I and my friends regularly sound exactly like the output of a markov babbler.

"Hey, look, this is not encryption, we're just dumb!"


So... anyone want to do a PGP key signing party in San Francisco? I asked on HN before, but no one seemed interested back then. I hope some meetup organizer can pick this up.


Here's the survey to see if people are interested in the meetup. Please register if you are interested.

https://docs.google.com/forms/d/19p-OY-Q5VKtVD2BVt8QVHMCcBO9...


Unfortunately only 2 people have actually signed up so far. So I'm now pretty sure PGP is not as popular as it seems for verification purposes. :/


"Hello! I'm Fish!"

"Hi. I'm Jesus!"

"Wow, Jesus? Nice to meet you, can I have your public key?"

I don't trust any of you :)


This highlights a huge problem of the PGP/GPG web of trust: It mostly solves the wrong problem.

I don't care whether a Linux kernel release is signed by a guy who has government-issued ID that says "Linus Torvalds" on it. I care whether it is signed by the guy who started the whole thing more than 20 years ago.

There would be value in a modified web of trust where, when you receive an e-mail from hipaulshi, you can be certain that it's the same person you replied to on HN. Who cares what her (or his) real name is? In a sense, what you really want for most internet-centric communication is something like certificate pinning.

Of course things are different when you're talking about people who you know in person. But for those you don't have to organize key signing parties.

And even for people I know in the real world, I would be happy with a solution that just pins the very first public key it sees as long as nothing obviously suspicious is happening. This suffices for 99.9% of use cases. [0]

[0] No, this is not sufficient if you know that a government is out to get you. But if it helps to move us into an encryption-by-default setting, it is still a win for the overall ecosystem.
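The pin-the-first-key-seen idea is only a few lines. A minimal sketch in Python (the identity strings and fingerprints below are made up for illustration):

```python
# Trust-on-first-use (TOFU) pinning: remember the first key fingerprint
# seen for a correspondent and flag any later change.
pins = {}  # identity -> pinned fingerprint

def check(identity: str, fingerprint: str) -> str:
    if identity not in pins:
        pins[identity] = fingerprint   # first contact: pin silently
        return "pinned"
    if pins[identity] == fingerprint:  # same key as before: fine
        return "ok"
    return "KEY CHANGED - possible MITM"

assert check("hipaulshi", "AB12 CD34") == "pinned"
assert check("hipaulshi", "AB12 CD34") == "ok"
assert check("hipaulshi", "ZZ99 FF00") == "KEY CHANGED - possible MITM"
```

This is essentially what SSH's known_hosts file does, and it gives exactly the property described above: continuity of identity, without ever asking who the person "really" is.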


You need an authoritative, external real name because without it, all you can conclude is that if you are being MITMed, it's by the same entity that MITMed all your other connections with that person.

The alternative would be to confirm the fingerprint of the public key out of band, but lots of entities aren't already authenticated to each other (i.e. I can authenticate my best friend but not Amazon) and don't have a safe/practical way to check each other's public keys.


In practice, 99% of use cases do not have to worry about a MitM attack that is successful 100% of the time. The probability of a MitM attack remaining successful for all your interactions (and that's what would be required for it to go undetected) drops exponentially under a lot of (though admittedly not all) attack scenarios.

This is exactly the reasoning behind proposals such as TACK.

Edited to add: Keep in mind that external verification basically never happens, as much as cryptographers want it to happen. In the real world, you have a choice between no verification at all and a pinning-based verification which is weaker than the theoretical ideal, but makes MitM'ing significantly harder and much more likely to be detected for everybody at no user interface cost.
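To make the exponential claim concrete: if each interaction independently carries even a small chance of exposing an active MitM (the 5% figure below is assumed purely for illustration), the odds of the attack staying hidden collapse quickly.

```python
# Probability that an active MitM stays undetected across n interactions,
# assuming each interaction independently exposes it with probability 5%.
p_detect = 0.05

for n in (1, 10, 50, 100):
    undetected = (1 - p_detect) ** n
    print(n, round(undetected, 4))
# 1 -> 0.95, 10 -> 0.5987, 50 -> 0.0769, 100 -> 0.0059
```

The independence assumption is the weak point (a MitM at your ISP sees every connection the same way), which is why this argument holds under many, but not all, attack scenarios.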


No better way than to do it yourself! Create the meet up and submit it here. I'd be up for it.


Yes, I would definitely be interested


I'm curious, how would you verify an attendee's identity?



Government ID/picture ID checked against a PGP keyserver, to make sure you are who you say you are. So you have to be physically present at the party with your key and ID; that's how the trust network is built.


Yes, because pieces of paper given out by men with guns are the font of all truth.

ADD: Fuck this statist cesspool.


Odd. I don't recall anyone at the DMV forcing my license on me at gunpoint.


Would you be more comfortable giving your fingerprint or DNA sample to prove your identity?

Side note: maybe there could be a way to irreversibly hash one's fingerprint or DNA sample into an electronic signature


Falsehoods programmers believe about DNA:

    1. A person has only one genetic code in their body.
    2. A person's genetic code never changes.
etc. In the spirit of "falsehoods programmers believe about names" and "... about addresses".


Those are solvable problems though. Or are you saying 23andme doesn't work?


No, they aren't. I did not say "A person's tested genetic code never changes", I said exactly what I meant, which is that genetic codes can be changed. Retroviruses do it. Further, on a grand scale, chimeras [1] exist, with human chimeras often entirely unaware of it (while my priors would suggest I am not one, I can't even come close to proving that to you with the rather meagre medical tests I've had done on me), and on a smaller scale, small mutations in cells can easily propagate out into the body over time, even before we ignore the matter of cancer.

Most of the time, this doesn't matter, but if we're talking about taking hashes for security, suddenly it does.


23andMe is not going to provide you with an "authoritative" DNA "fingerprint", because how would they tell that this genetic sequence in this chromosome from this cell is the legitimate one, while a contradictory sequence from another cell is not?

Obviously no one has the capability to do this, so they present you instead with "most likely" results. Not the level of proof I'd be looking for in a court of law.


3. A person's DNA is unique

I'm thinking identical twins. Got anything for #4?


The problem with that, of course, is that someone's DNA or fingerprint isn't a secret. There's no reason why I couldn't take your fingerprint, embed it into a signature and claim to be you.

And this is part of why authentication and identity are very difficult things to do right, mostly because very few people have thought about what it is they're verifying.

If I publish a public key and say it belongs to me, 'Bob Smith', the only practical use that has is that you can verify that a future message signed by 'Bob Smith' was signed by someone with access to the same private key as the guy who originally published the public key. Any assumption about who 'Bob Smith' actually is, and who that corresponds to in the real world (what other identities do they assert), and also that 'Bob Smith' is a single entity, are simply assumptions.

It's impossible to pin a human down to a single, guaranteed verifiable, non impersonatable and non revocable identity. 'Documents issued by men with guns' isn't foolproof, but we use it as a trust anchor mostly because everyone else does, and we don't have much alternative.
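To make the 'Bob Smith' point concrete, here is a small sketch using the third-party Python `cryptography` library (an assumption; any signature scheme illustrates the same thing). Verification only proves that a message came from whoever holds the matching private key; it says nothing about who 'Bob Smith' actually is:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Anyone can generate a key and claim to be 'Bob Smith'.
bob = Ed25519PrivateKey.generate()
imposter = Ed25519PrivateKey.generate()

bob_public = bob.public_key()  # this is what 'Bob Smith' publishes

msg = b"Wire the money to this account."
sig = bob.sign(msg)

# Verification succeeds: the signer holds the same private key as
# whoever originally published bob_public. That is ALL we learn.
bob_public.verify(sig, msg)  # raises InvalidSignature on failure

# A signature from a different key fails against Bob's published key...
try:
    bob_public.verify(imposter.sign(msg), msg)
except InvalidSignature:
    print("imposter rejected")
# ...but nothing stops the imposter from publishing their OWN key under
# the name 'Bob Smith'. Binding a name to a key is the hard part.
```

In other words, the crypto gives you key continuity for free; everything beyond that (real names, single entities, non-impersonation) has to come from somewhere outside the math.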


The thing about a fingerprint or even a DNA sample, in this use case, is that you send your signature ahead of time and verify yourself physically at the party. Does your fingerprint hash match your fingerprint? That is more difficult (but not impossible) to spoof. At the end of the day, this discussion stems from the notion that government IDs are unreliable as a means of verification. Granted, but what we are trying to achieve in practice is the preservation of privacy and data. I was trying to point out that true identification of a person can be at odds with our ultimate goal of privacy, since we have to give up a piece of data to prove we are who we say we are, and I was contemplating whether there is a way to make that happen technically without sacrificing PII.


I see what you mean - you could probably do something like publish your public key and then publish a signed copy of your fingerprint hash. Anyone else could do the same thing, but an imposter wouldn't be able to convincingly produce your fingerprint on demand when physically present. At least, not without a lot of funding and cleverness.


So you are tying the key to fingerprints or other biometric, but how do you tie that to an identity e.g. a name?


Any hashing function would do it, but you leave your fingerprints and DNA everywhere. It would be easy to get access to them and copy them.


That's fine, it's a public key :)


I believe proving identity can only be done using a private key.


I think you mean indifferent bureaucrats without guns.


I am very interested.


[deleted]


I don't know yet. I am putting up a survey to see if people are interested. I posted the link in the same thread and also in the 'new' section.


>Advances in technology will not permit the maintenance of the status quo, as far as privacy is concerned. The status quo is unstable. If we do nothing, new technologies will give the government new automatic surveillance capabilities that Stalin could never have dreamed of. The only way to hold the line on privacy in the information age is strong cryptography


I just started experimenting with PGP this week and I was amazed at how easy it was. This page was all I needed to get started http://serverfault.com/questions/489140/what-is-a-good-solut...
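For anyone curious what "easy" looks like, here is a scripted round trip with the gpg command line tool. The email address is a placeholder, a throwaway keyring is used so nothing touches your real keys, and the `--quick-gen-key` syntax assumes GnuPG 2.1 or later:

```shell
# Use a throwaway keyring for the demo so we don't touch your real one.
export GNUPGHOME="$(mktemp -d)"

# 1. Generate a key pair non-interactively (no passphrase, for the demo).
gpg --batch --passphrase '' --quick-gen-key demo@example.com default default never

# 2. Export your public key so correspondents can encrypt mail to you.
gpg --armor --export demo@example.com > mykey.asc

# 3. Encrypt a file so only the key's owner can read it.
echo 'hello, world' > secret.txt
gpg --batch --yes --trust-model always --encrypt --recipient demo@example.com secret.txt

# 4. Decrypt it with the private key (writes the plaintext to stdout).
gpg --batch --decrypt secret.txt.gpg
```

In everyday use you would of course protect your key with a passphrase and import your correspondents' keys (`gpg --import theirkey.asc`) rather than encrypting to yourself.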


Yeah, the command line tool was really easy when I used it as part of my project https://github.com/abemassry/wsend-gpg


It's easy for a developer, sure. We need to make it Dropbox-easy for the general population, though.


I'm not a developer. I just learned HTML/CSS so I'm a wannabe but there's nothing that I know less about than command-line interfaces.


Don't discredit your knowledge here. You knew or learned:

    - How to install GPG
    - How to open a terminal
    - How to change directories
    - About file extensions

Maybe it's not advanced by HN standards, but you can do things that most email users would struggle with.


Isn't that gpg?


As far as I'm aware, GPG is an implementation of OpenPGP.


But what if GPG is in fact PG in PGP GP Open G? PGP P¿


GPG, as I understand it, is an independent implementation of the OpenPGP standard rather than a fork of PGP's code, and the GnuPG project is part of the OpenPGP Alliance.


My heartfelt thanks, Phil, if you happen to be/visit here. Not just for the software, but for the accompanying philosophy and enlightenment, and dedication to same. The impact upon me, personally, has been significant. It is something I in turn attempt to share with those who matter to me, personally and professionally.

Regards


It's scary how this whole snooping thing started way back when the internet was nascent.


It started before that. This has been a function of the NSA from its beginning.

http://en.wikipedia.org/wiki/Project_SHAMROCK


British Special Branch said they wanted an informant on every street.

The BBC had political vetting. http://www.cambridgeclarion.org/press_cuttings/mi5.bbc.page9...

Special Branch spied on an elected member of parliament http://news.bbc.co.uk/1/hi/programmes/true_spies/2378459.stm

Unions in the 70s had extensive networks of police informants and undercover officers and agents provocateurs. https://www.wsws.org/en/articles/2002/12/spie-d10.html

That union scrutiny activity turned into blacklisting, meaning many workers were denied jobs. A secret, McCarthy-esque activity. This blacklisting was so severe it led to laws being brought in around data protection. Those laws originally only covered computer systems; because the blacklists were run on index cards, the laws were changed to cover more forms of data retention. http://www.bbc.co.uk/programmes/b02xcn7d

Abuses of surveillance are common, and long lasting.


It actually started with the British, whose spooks taught it to the US's J. Edgar Hoover before WWII. Read "Enemies: A History of the FBI" by Tim Weiner <http://www.theguardian.com/books/2012/mar/30/enemies-fbi-his... for the definitive historical overview of how domestic spying and its abuse got started.

George Orwell foresaw the potential for abuse long ago, writing "1984" in 1948 <https://en.wikipedia.org/wiki/Nineteen_Eighty-Four>.


Wow... the whole article seems so relevant today.


It was never irrelevant. The problem has simply scaled up over time.


This particular version doesn't seem to be from 1991 as the current title suggests since it speaks of 1994 in the past tense.


Yes. It claims to be updated in 1999.


PGP isn't trivial, but thanks to https://encrypt.to/ everybody can now send encrypted PGP messages.



