
The last thing you should start with is "it's all in libsodium". You didn't understand the point. "how do you get Alice's public key, but that's hard with any sort of public key cryptography." That sentence makes absolutely no sense; it's a public key, why would it be hard to send? The problem is people are like "just use product X, you don't have to understand it". Marketing wanketeering is what makes crypto more difficult in the first place. No one gives a shit about shipping your product. "Just use libsodium" is an easy way to add to the problem.

The hard part about getting Alice's key is making sure that it's actually Alice's.

thanks from me and probably 10k+ more people for saying that :)

I think most would just hardcode it or a reference to it (API call to secrets management) because setting up PKI for a single encrypted channel feature is pretty demanding and also more chances to get something wrong.

If you can do that it seems to me like you could just use symmetric crypto, but I'm really not an expert on this.

> If you can do that it seems to me like you could just use symmetric crypto

No, because with symmetric encryption you'd have to protect the key against theft and tampering (i.e., against reading and writing by a third party), while you only have to protect an asymmetric public key against tampering, which is a lot easier in practice.

(also not an expert, just my understanding of it)

No, in this case you’d still have a private key you’d need to protect.

The private key is on a server you have physical and logical control over. The public key, however, is potentially on millions of consumer devices – some hacker reading out the public key is a question of when, not if.

Asymmetric key exchange is done with a private key on both endpoints. For things like HTTPS the “client” key is ephemeral. But if you are using the keys for authenticated communication, which I think is what this thread is about, both keys are vital (think: client-side certs).

I think most people would recommend symmetric encryption unless it's not possible. With asymmetric keys, each person still needs to protect their private key.

Let's say you deploy 100 or more IoT devices, and need them to communicate with your server. It's your server, so you can hard code a key in the devices. Now you have two choices:

Choice 1: use symmetric keys. This means one key per device, which you have to manage. Quite cumbersome. You could instead have one symmetric key for everyone, but then if only one IoT device gets compromised (which over time is a virtual certainty) the whole crypto system would be compromised.

Choice 2: use public/private key pairs. One for the server, and one for each device. Now the system is only broken when the server key is compromised. If a device gets compromised, the attacker merely learns that device's own key (plus the server's public key), and can impersonate that particular device only.

The main advantage of choice 2 vs choice 1 is that with Choice 2, you can use the same server key for everything. You'd still use a protocol with ephemeral keys, but you wouldn't have to manage many many keys. And if the IoT devices are untrusted (that is, they are assumed compromised or anonymous), the whole system only has to manage one key.

Now sure, you could still make it more performant by keeping symmetric keys around. And you'd have to perform fast key erasure (replacing the key by a hash of the key) from time to time to ensure forward secrecy, but with public keys around the symmetric key can act as a cache, which can safely be erased whenever you restart or update your system.
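The "replace the key by a hash of the key" idea is only a few lines; here's a toy sketch with a placeholder key (a real system would derive the initial key from a key exchange, not hard-code it):

```python
import hashlib

def ratchet(key: bytes) -> bytes:
    # Fast key erasure: replace the current symmetric key with a hash of
    # itself. Because the hash is one-way, old keys cannot be recovered
    # from the new one, which protects past traffic (forward secrecy).
    return hashlib.sha256(key).digest()

session_key = b"\x01" * 32       # placeholder; normally from a key exchange
old_key = session_key
session_key = ratchet(session_key)  # old_key can now be securely discarded
assert session_key != old_key
```

After each ratchet step, the only thing worth stealing from memory is the current key, which can't decrypt anything sent before the last rotation.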

If I'm following this thread properly, the important difference here is that with symmetric crypto you'd also have to make sure nobody else saw the key. With public keys, it doesn't matter if someone snooped.

I thought Diffie-Hellman key exchange covers the snooping part, at least over the channel.

I guess it doesn't protect against something attacking and snooping on your machine though.

DH by itself protects against passive attackers, but most "snoopers" aren't passive. To securely exchange keys over an untrusted network, you usually want an authenticated key exchange, which is more complicated than DH.
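The "DH protects against passive snoopers" claim can be seen in a toy finite-field Diffie-Hellman. The parameters below are deliberately simple and insecure (real deployments use vetted groups or elliptic curves); the point is only the shape of the exchange:

```python
import secrets

# Toy parameters: p = 2**127 - 1 is a Mersenne prime, g = 3.
# INSECURE — illustration only.
p = 2**127 - 1
g = 3

a = secrets.randbelow(p - 2) + 2    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 2    # Bob's secret exponent
A = pow(g, a, p)                    # sent in the clear
B = pow(g, b, p)                    # sent in the clear

# A passive snooper sees p, g, A and B, but cannot feasibly compute this:
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # both ends derive the same secret
```

Note that nothing in the exchange tells either side *who* they agreed the key with, which is exactly the gap an authenticated key exchange fills.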

With DH, both public keys affect the randomness of the shared secret. If the app on the client generates a random DH key pair for every session, and uses a public DH value of the server that is pinned in the app, the encryption is authenticated and safe to use.

If there are no public keys pinned to clients (say secure messaging apps like Signal where each user generates their own keys), users need to check the public key fingerprints to make sure there's no MITM attack taking place.
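A fingerprint check of the kind described here boils down to hashing the key material and comparing the digests out of band. A minimal sketch, with hypothetical key bytes (Signal's actual "safety numbers" are computed differently; this is just the general shape):

```python
import hashlib
import hmac

def fingerprint(public_key: bytes) -> str:
    # A short hex digest that two users can compare over a separate
    # channel (in person, over the phone, via a trusted post).
    return hashlib.sha256(public_key).hexdigest()[:16]

pinned_fp = fingerprint(b"servers-real-public-key")  # shipped with the app
seen_fp = fingerprint(b"servers-real-public-key")    # presented at runtime
mitm_fp = fingerprint(b"attackers-public-key")       # what a MITM presents

# Constant-time comparison is a good habit even for public values.
assert hmac.compare_digest(pinned_fp, seen_fp)       # match: proceed
assert not hmac.compare_digest(pinned_fp, mitm_fp)   # mismatch: abort
```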

The public key fingerprints that need checking are important because they get introduced in 3DH, which is an AKE. Like 'tptacek mentioned.

At the end of ephemeral DH Bob has successfully agreed a random session key with _somebody_. Maybe Bob hopes it's Alice. No-one else can snoop on their conversation, but the trouble is that neither Bob, nor the other party (which might be Alice) are sure who they're talking to. In particular Mallory might be in the middle having conducted two separate DH agreements one with Alice and one with Bob.

So very likely, unless Bob is comfortable with this situation, he still needs a mechanism to find out who he's talking to. On the upside, he does now have an encrypted channel on which to continue the work.
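Mallory's two-separate-agreements trick is easy to demonstrate with a toy finite-field DH (insecure parameters, purely illustrative):

```python
import secrets

# Toy parameters — INSECURE, illustration only.
p, g = 2**127 - 1, 3

def dh_keypair():
    priv = secrets.randbelow(p - 2) + 2
    return priv, pow(g, priv, p)

a_priv, a_pub = dh_keypair()    # Alice
b_priv, b_pub = dh_keypair()    # Bob
m_priv, m_pub = dh_keypair()    # Mallory, sitting on the wire

# Mallory substitutes her own public value in both directions, completing
# one DH agreement with Alice and a separate one with Bob.
key_with_alice = pow(a_pub, m_priv, p)
key_with_bob = pow(b_pub, m_priv, p)

# Each honest party happily derives a key — shared with Mallory.
assert pow(m_pub, a_priv, p) == key_with_alice
assert pow(m_pub, b_priv, p) == key_with_bob
```

Alice and Bob each completed a perfectly valid key agreement; the math gives them no hint that the party on the other end was Mallory.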

At scale the only practical answer is an Authority, a Trusted Third Party: people _so_ trustworthy that Alice, Bob and maybe even Mallory agree that they know who is who. In one sense this is so hard it might be impossible. But then again, maybe it works anyway?

If you don't need scale, for example maybe you're a conspiracy of a few dozen people trying to bring down the Authority, then you have lots of other options depending on your circumstances including Out of Band verification and the Socialist Millionaires Protocol.

If you are a college kid and convinced that everybody on your Facebook friends list, and everybody on their Facebook friends lists, is a fundamentally good person - but that the Authority is a shadowy conspiracy against you all, you can use the Web of Trust, right up until the guy who once lived with a friend of your cousin's housemate steals your life savings and leaves you in a bathtub filled with ice with a hole where one kidney used to be.

Not hard at all. I send my public key over gmail. Recipient adds it to authorized_keys. I answer "yes" to whatever partially human-readable question ssh asks me to trust the server's key on first use. Now I'm in.
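The "answer yes on first use" flow described here is trust-on-first-use, the ssh `known_hosts` model. A minimal in-memory sketch with hypothetical host and key values (real ssh stores the full host key on disk, not just a fingerprint):

```python
import hashlib

known_hosts: dict[str, str] = {}    # host -> key fingerprint

def tofu_check(host: str, public_key: bytes) -> bool:
    # Trust on first use: remember the first key a host presents,
    # and only complain if it later changes.
    fp = hashlib.sha256(public_key).hexdigest()
    if host not in known_hosts:
        known_hosts[host] = fp      # first contact: trust and record
        return True
    return known_hosts[host] == fp  # later contacts: must match

assert tofu_check("example.org", b"key-1")      # first use: accepted
assert tofu_check("example.org", b"key-1")      # same key: accepted
assert not tofu_check("example.org", b"key-2")  # key changed: reject
```

TOFU doesn't authenticate the very first contact — it only guarantees you're talking to the same party as last time, which is often good enough in practice.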

The difficulty you are describing assumes a user base of cryptography pedants who make assumptions about third parties that don't matter to 99% of non-technical users (nor even technical users in many cases).

But when it matters, boy are the consequences dire.

One example: you work for the American government, and you witness something very wrong, very illegal going on. You'd better be sure, when contacting Laura Poitras, that you are indeed contacting Laura Poitras, and not some counter-intelligence operative from the NSA.

And it has to work even if you don't have Ed Snowden's skills. Without reliable crypto the rest of us can use, people will get caught, arrested, tortured, killed, blackmailed… just for speaking up.

Maybe we don't want reliable crypto to be widely available. Maybe we want to have mass surveillance. But that's another debate. (Personally, I'd rather everyone have reliable crypto, and I'm willing to make wiretapping impossible in the process.)

Your comment sort of implies that there are complicated solutions to the key finding problem that are better than the simple ones. But then it doesn't bother establishing that argument.

How to beat Laura Poitras publishing a public key all over the place?

I swept a whole host of issues under the rug, not all of which are related to key finding. Take forward secrecy (one of the bigger ones). The internet works this way: when you send a message, it gets delivered to the recipient, and a copy is sent to the NSA.

Without forward secrecy, getting Laura Poitras' key would enable the NSA to read all past communications. They only have to seize her computer while it's still on, and the key is still in memory somewhere, or compel the poor journalist to give up her keys (possibly using that "non-invasive" waterboarding torture, justified by suspicion of helping terrorists).

Now if Laura kept the decrypted messages on her laptop, forward secrecy wouldn't do anything; but if she properly deleted them, it would be a shame if the messages were nevertheless at the mercy of the attacker.


As for key finding, well… the simple solutions do work pretty well. Snowden, for instance, didn't find Poitras' keys lying around on the internet. He asked someone he trusted to give him the right key.

How are you sure it's her key? That's the real problem.

The one that gets published on multiple social media accounts, a personal website and in the New York Times?

It isn't really ambiguous.

For instance, Snowden had someone tweet a key fingerprint: https://theintercept.com/2014/10/28/smuggling-snowden-secret...

That's a pretty good way of making sure, because you defer trust to the intermediaries. While it definitely works for high-profile cases like this, it is obviously not scalable to larger audiences.

Maybe we shouldn't take security advice from folks with no need for security that is obvious to them.

Ok. Take it from the maintainer of GPG:


Hint: TOFU is a lot like what I described above, with the added usability that you don't have to type "yes" every time like a chump.

But you could just encrypt something with it, ask Alice to decrypt it with her private key, and ask her if she was able to.

Oh god but how do you ask her without a guarantee that she's really who said yes?

Even if you meet Alice in real life to ask, how can you be sure the meeting isn't a dream or a simulation and the Alice before your eyes isn't a chosen plaintext attack by a cosmic man in the middle?!

The entire science of cybersecurity is bankrupt and founded upon untenable foundations!!

That is a ridiculous straw man and I'm pretty sure you are aware of this. At some point, there is trust involved. You balance the credibility of authentication guarantees based on the level of trust required for the transaction you're making.

If your threat model includes the possibility of yourself being simulated, I don’t envy you

"Shipping [your] product"? How do I, for example, make money off of libsodium?

The point of the article is all the ways RSA blows up. libsodium addresses that style of problem. Saying there are other problems ("how do we agree on a key") may be true, but isn't responsive to the discussion.

It’s hard to verify that the key you received is actually from Alice, and not from Eve, who is sitting on the network between Alice and Bob.

If that sentence makes absolutely no sense to you, it's because this article assumes a very basic understanding of cryptography.

When a medical doctor prescribes you a treatment plan and drugs, do you claim it's all due to marketing wanketeering, and that this is why healthcare is difficult? When they say something that doesn't make sense to you, do you argue with them from a position of ignorance?

Same goes for crypto. It's fine to learn the concepts of how to use a trusted library, but you really aren't likely to understand the underlying tradeoffs and mathematics.

People here aren't just saying "just use product X", they're saying, "learn the concepts of product X and use that". That's about as good as it's going to get for any specialized, complicated domain.

> When a medical doctor prescribes you a treatment plan and drugs, do you claim it's all due to marketing wanketeering ...

Yes, of course it is. Doctors know a lot, but they're not competent to assess the pharmaceutical literature.

So here's how it works. In med school, drug manufacturers provide free equipment, food, liquor, etc. And push their drugs. Because once you have someone in the habit of prescribing some drug, they tend to keep prescribing that drug.

Also, you pay influential doctors to deliver talks at conferences, praising your drug(s). You also pay ghostwriters to draft papers, which said influential doctors can submit for publication.

And last, you pay sexy, charismatic young things (of both sexes) to visit doctors' offices, pushing your drugs, and giving away food and stuff.

There's also a scientific backstop in medicine to keep unethical commercial interests from causing unbounded harm.

It is possible (likely, even) to have experts with legitimate opinions that are scientifically valid and have measurable outcomes that also correspond to commercial interests. I.e., "it helps people and makes money".

That model has worked for over a century. Yes, it's under attack by unethical people (more than in the past?), but I'm not sure what the alternative would be.

This isn't a recent issue. It may be less of an issue than it was a decade or so ago.

For example, I recall reading that well over half (maybe 70%-90%) of the medical literature on Neurontin (gabapentin) was ghostwritten advertorial spam funded by Warner-Lambert (later Pfizer).

Medical doctors have given me ignorant, incoherent or harmful advice many times.

When someone says something that doesn’t make sense to you, do you just put whatever they’ve handed you into your mouth without asking questions?

It's fine to ask questions to improve your understanding. But it's not fine to argue from a position of ignorance, obfuscate facts, and to project false pretences. This is how we get anti-vaxxers and the broader death of expertise.

A medical doctor prescribing a treatment plan and drugs would be akin to a trusted coworker crypto expert telling me to "do x with libsodium" without further explanation.

On the other hand, getting crypto advice from random bloggers and HN commenters is akin to getting a medical treatment plan and drugs from random bloggers and HN commenters.

Except that these aren't random bloggers and HN commenters, there are some world-renowned security experts on this very thread saying "do x with libsodium".

If you can't be bothered to read their profiles or understand who they are, I refer back to my point about arguing from a position of ignorance.

> When a medical doctor prescribes you a treatment plan and drugs, do you claim it's all due to marketing wanketeering, and this is the problem of why healthcare is difficult? When they say something that doesn't make sense to you, do you argue with them from a position of ignorance?

To be honest, this is literally the level of discourse of most Hacker News discussions about healthcare.

And it would be the correct level of discussion.

I have doctors in the family. The scariest stories of "marketing wanketeering" in medicine I hear come from them.

To posit a strawman slippery slope for exposition, would your doctors in the family say the entire medical profession and science can't be trusted, and we should just stop vaccinations, nutritious diets, fitness regimens, cancer treatments, surgical procedures, etc. altogether?

Obviously not. So, where is the line drawn of what professional opinions are or are not trusted?

There's no doubt that "marketing wanketeering" happens in all complex domains. Any "Market for Lemons" (i.e. a market with information asymmetry - a domain so complicated or obfuscated that consumers can't understand its fundamentals) will be exploited by charlatans. This is why we have professional (imperfect but functioning) backstops such as medical scientific research and the security/crypto research community.

OP was claiming that not even the professionals on this thread can be trusted not to be "wanketeers" for a free/open source library, with no evidence, or even a hint of moderate understanding of the problem domain (i.e. why it's hard to distribute a public key), or desire to learn. Perhaps they were just frustrated with the complexity of the domain, but flaming people trying to help as being "wanketeers" is rather fatalist.

> And it would be the correct level of discussion.

Arguing from a position of ignorance when people say something that doesn't make sense to you is literally how anti-vaxxing happens.

Just because arguing from a position of ignorance can sometimes produce outcomes which align with your personal anecdotes doesn't make it an intellectually valid method of discourse.

Where do you see a position of ignorance? People on HN do know doctors, talk to doctors, have doctors in their families, and some even are doctors themselves.

Anti-vaxxer beliefs aren't caused by people questioning the first medical advice they get from a medical professional when it doesn't sound right to them. Anti-vaxxer beliefs come from either not verifying and going with your gut, or verifying and then ignoring what you've learned.

Doctors are humans and make mistakes sometimes, and your own health is your own responsibility. So is the safety of your own application, so you shouldn't plug in someone else's crypto if you don't feel comfortable with it, but should instead try to understand the domain as much as you need to start feeling comfortable.

> not verifying and going with your gut

...which is exactly what "arguing from a position of ignorance" means. Once you attempt to verify medical advice (in good faith) you are no longer ignorant.
