RSA is a fragile cryptosystem (trailofbits.com)
708 points by ingve 14 days ago | 341 comments



I'm a developer, not a cryptographer, and like most crypto articles, this post is mostly gibberish to me. I really tried to follow it in the beginning, but then I gave up, scrolled to the "what should you use instead" section, and realized I didn't understand that either.

If this article is "easy" crypto, and I shouldn't roll my own crypto, then what should I do? "Just use elliptic curve crypto but make sure you choose the right curve! Here's an abbreviation for you, libsodium has it so you're all set" doesn't comfort me much. I don't have a good intuition of what a curve even is and why my sensitive data can't just go straight ahead like all the other data. I mean, when I call into libsodium with parameters I do not understand, then aren't I, for all intents and purposes, rolling my own crypto anyway? How is any of this of help?

If I want to allow my customers to send me encrypted data via an untrusted channel, then what do I do? I need public key crypto for that, right? Basically, I want the public key cryptography version of Coda Hale's classic blog post about password hashing [0], does that exist?

EDIT: thanks for all the comments folks, great insights! I particularly found [1] by CiPHPerCoder to be a great starting point (akin to Coda's password article) for all kinds of scenarios. I feel much more confident about this stuff now than I did 2 hours ago.

[0] https://codahale.com/how-to-safely-store-a-password/

[1] https://paragonie.com/blog/2017/06/libsodium-quick-reference...


It's all in libsodium. For example, below is a PyNaCl API usage fragment, abridged from their docs [1]. No crypto parameters, nonces, or authentication details are exposed. Modern crypto is really relatively easy to use.

(What's never trivial is key management: how do you get Alice's public key in the first place? But that's hard with any sort of public-key cryptography.)

  # Generate Bob's private key, which must be kept secret
  skbob = PrivateKey.generate()

  # Bob's public key can be shared freely
  pkbob = skbob.public_key

  # Alice generates her key pair the same way
  skalice = PrivateKey.generate()
  pkalice = skalice.public_key

  # Bob wishes to send Alice an encrypted message so Bob must make a Box with
  #   his private key and Alice's public key
  bob_box = Box(skbob, pkalice)

  # This is our message to send (binary blob of data)
  message = b"Kill all humans"

  # Encrypt our message
  encrypted = bob_box.encrypt(message)


[1] https://pynacl.readthedocs.io/en/stable/public/


The last thing you should start with is "it's all in libsodium". You didn't understand the point. "how do you get Alice's public key, but that's hard with any sort of public key cryptography." That sentence makes absolutely no sense; it's a public key, why would it be hard to send? The problem is people are like "just use product X, you don't have to understand it". Marketing wanketeering is what makes crypto more difficult in the first place. No one gives a shit about shipping your product. "Just use libsodium" is an easy way to add to the problem.


The hard part about getting Alice's key is making sure that it's actually Alice's.


thanks from me and probably 10k+ more people for saying that :)


I think most would just hardcode it or a reference to it (API call to secrets management) because setting up PKI for a single encrypted channel feature is pretty demanding and also more chances to get something wrong.


If you can do that it seems to me like you could just use symmetric crypto, but I'm really not an expert on this.


> If you can do that it seems to me like you could just use symmetric crypto

No, because with symmetric encryption you'd have to protect the key against theft and tampering (i.e., against reading and writing by a third party), while you only have to protect an asymmetric public key against tampering, which is a lot easier in practice.

(also not an expert, just my understanding of it)


No, in this case you’d still have a private key you’d need to protect.

The private key is on a server you have physical and logical control over. The public key, however, is potentially on millions of consumer devices – some hacker reading out the public key is a question of when, not if.

Asymmetric key exchange is done with a private key on both end points. For things like HTTPS the “client” key is ephemeral. But if you are using the keys for authenticated communication, which I think is what this thread is about, both keys are vital (think: client-side certs).

I think most people would recommend symmetric encryption unless it's not possible. With asymmetric keys, each person still needs to protect their private key.

Let's say you deploy 100 or more IoT devices, and need them to communicate with your server. It's your server, so you can hard code a key in the devices. Now you have two choices:

Choice 1: use symmetric keys. This means one key per device, which you have to manage. Quite cumbersome. You could instead have one symmetric key for everyone, but then if even one IoT device gets compromised (which over time is a virtual certainty) the whole cryptosystem would be compromised.

Choice 2: use public/private key pairs. One for the server, and one for each device. Now the system is only broken when the server key is compromised. If a device gets compromised, the attacker merely learns the server's public key and that device's own key pair, and can impersonate that particular device.

The main advantage of choice 2 vs choice 1 is that with Choice 2, you can use the same server key for everything. You'd still use a protocol with ephemeral keys, but you wouldn't have to manage many many keys. And if the IoT devices are untrusted (that is, they are assumed compromised or anonymous), the whole system only has to manage one key.

Now sure, you could still make it more performant by keeping symmetric session keys around. And you'd have to perform fast key erasure (replacing the key with a hash of the key) from time to time to ensure forward secrecy, but with public keys around the symmetric key can act as a cache, which can safely be erased whenever you restart or update your system.
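To make "fast key erasure" concrete: it's just overwriting the current key with a one-way function of itself. A minimal stdlib sketch (the key material and ratchet function here are illustrative, not a production KDF):

```python
import hashlib

def ratchet(key: bytes) -> bytes:
    """Replace the current symmetric key with a hash of itself.
    Because the hash is one-way, compromising the new key does not
    reveal the old one, so past traffic stays unreadable."""
    return hashlib.sha256(key).digest()

# Hypothetical initial session key (in practice, output of a key exchange).
key = bytes(32)

old_key = key
key = ratchet(key)   # after this, `old_key` should be securely erased

assert key != old_key
assert len(key) == 32
```

Real implementations also have to actually wipe the old key from memory, which is harder in garbage-collected languages than this sketch suggests.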


If I'm following this thread properly, the important difference here is that with symmetric crypto you'd also have to make sure nobody else saw the key. With public keys, it doesn't matter if someone snooped.

I thought Diffie-Hellman key exchange covers the snooping part, at least over the channel.

I guess it doesn't protect against something attacking and snooping on your machine though.


DH by itself protects against passive attackers, but most "snoopers" aren't passive. To securely exchange keys over an untrusted network, you usually want an authenticated key exchange, which is more complicated than DH.

With DH, both public values affect the randomness of the shared secret. If the app on the client generates a random DH key pair for every session, and it uses a public DH value of the server pinned to it, the encryption is authenticated and secure to use.

If there are no public keys pinned to clients (say secure messaging apps like Signal where each user generates their own keys), users need to check the public key fingerprints to make sure there's no MITM attack taking place.
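Roughly, a fingerprint check is just hashing the public key and comparing in constant time. A minimal sketch with the standard library (the key bytes and display format below are made up; real apps define their own formats, e.g. Signal's safety numbers):

```python
import hashlib
import hmac

def fingerprint(public_key: bytes) -> str:
    """Render a short, human-checkable fingerprint of a public key."""
    digest = hashlib.sha256(public_key).hexdigest()
    return " ".join(digest[i:i + 4] for i in range(0, 16, 4))

def same_key(seen: bytes, expected: bytes) -> bool:
    # Constant-time comparison avoids leaking where the keys differ.
    return hmac.compare_digest(seen, expected)

alice_pk = b"\x01" * 32   # hypothetical public key bytes
mitm_pk = b"\x02" * 32

assert same_key(alice_pk, alice_pk)
assert not same_key(mitm_pk, alice_pk)
```

The hard part isn't the code: it's getting users to actually compare the fingerprint over some channel the attacker doesn't control.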


The public key fingerprints that need checking are important because they get introduced in 3DH, which is an AKE. Like 'tptacek mentioned.

At the end of an ephemeral DH exchange, Bob has successfully agreed on a random session key with _somebody_. Maybe Bob hopes it's Alice. No one else can snoop on their conversation, but the trouble is that neither Bob nor the other party (which might be Alice) is sure who they're talking to. In particular, Mallory might be in the middle, having conducted two separate DH agreements, one with Alice and one with Bob.

So, very likely, unless Bob is comfortable with this situation he still needs a mechanism to find out who he's talking to. On the upside, he does now have an encrypted channel on which to continue the work.
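The Mallory-in-the-middle scenario is easy to demonstrate with a toy finite-field DH. The tiny textbook parameters below are ONLY for illustration; as noted elsewhere in this thread, never use hand-rolled multiplicative-group DH in practice:

```python
import secrets

p, g = 23, 5   # absurdly small textbook group, illustration only

def keypair():
    sk = secrets.randbelow(p - 2) + 1
    return sk, pow(g, sk, p)

# Alice and Bob each run plain, unauthenticated DH...
a_sk, a_pk = keypair()
b_sk, b_pk = keypair()

# ...but Mallory intercepts both public values and substitutes her own,
# completing one DH agreement with Alice and a separate one with Bob.
m_sk, m_pk = keypair()

alice_secret = pow(m_pk, a_sk, p)        # Alice thinks this is shared with Bob
bob_secret = pow(m_pk, b_sk, p)          # Bob thinks this is shared with Alice
mallory_with_alice = pow(a_pk, m_sk, p)
mallory_with_bob = pow(b_pk, m_sk, p)

assert alice_secret == mallory_with_alice
assert bob_secret == mallory_with_bob
# Nothing in the math tells Alice or Bob that Mallory is in the middle.
```

The assertions pass every time: the protocol completes "successfully" for all three parties, which is exactly why authentication has to come from outside the key exchange itself.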

At scale the only practical answer is an Authority, a Trusted Third Party: people _so_ trustworthy that Alice, Bob, and maybe even Mallory agree that they know who is who. In one sense this is so hard it might be impossible. But then again, maybe it works anyway?

If you don't need scale, for example maybe you're a conspiracy of a few dozen people trying to bring down the Authority, then you have lots of other options depending on your circumstances including Out of Band verification and the Socialist Millionaires Protocol.

If you are a college kid and convinced that everybody on your Facebook friends list, and everybody on their Facebook friends lists, is a fundamentally good person - but that the Authority is a shadowy conspiracy against you all, you can use the Web of Trust, right up until the guy who once lived with a friend of your cousin's housemate steals your life savings and leaves you in a bathtub filled with ice with a hole where one kidney used to be.


Not hard at all. I send my public key over gmail. Recipient adds it to authorized_keys. I answer "yes" to whatever partially human-readable question ssh asks me to trust the server's key on first use. Now I'm in.

The difficulty you are describing assumes a user base of cryptography pedants who make assumptions about third parties that don't matter to 99% of non-technical users (nor even technical users, in many cases).


But when it matters, boy are the consequences dire.

One example: you work for the American government, and you witness something very wrong, very illegal going on. You'd better be sure, when contacting Laura Poitras, that you are indeed contacting Laura Poitras, and not some counter-intelligence operative from the NSA.

And it has to work even if you don't have Ed Snowden's skills. Without reliable crypto the rest of us can use, people will get caught, arrested, tortured, killed, blackmailed… just for speaking up.

Maybe we don't want reliable crypto to be widely available. Maybe we want to have mass surveillance. But that's another debate. (Personally, I'd rather everyone have reliable crypto, and I'm willing to make wiretapping impossible in the process.)


Your comment sort of implies that there are complicated solutions to the key finding problem that are better than the simple ones. But then it doesn't bother establishing that argument.

How to beat Laura Poitras publishing a public key all over the place?


I swept a whole host of issues under the rug, not all of which are related to key finding. Let's take forward secrecy (one of the bigger ones). The internet works this way: when you send a message, it gets delivered to the recipient, and a copy is sent to the NSA.

Without forward secrecy, getting Laura Poitras' key will enable the NSA to read all past communications. They only have to seize her computer while it's still on, and the key is still in memory somewhere, or compel the poor journalist to give up her keys (possibly using that "non-invasive" waterboarding torture, and justifying it with suspicion of helping terrorists).

Now, if Laura kept the decrypted messages on her laptop, forward secrecy wouldn't do anything, but if she properly deleted them, it would be a shame if the messages were nevertheless at the mercy of the attacker.

---

As for key finding, well… the simple solutions do work pretty well. Snowden, for instance, didn't find Poitras' keys lying around on the internet. He asked someone he trusted to give him the right key.


How are you sure it's her key? That's the real problem.

The one that gets published on multiple social media accounts, a personal website and in the New York Times?

It isn't really ambiguous.

For instance, Snowden had someone tweet a key fingerprint: https://theintercept.com/2014/10/28/smuggling-snowden-secret...


That's a pretty good way of making sure, because you defer trust to the intermediaries. While it definitely works for high profiles like this, it is obviously not scalable to larger audiences.

Maybe we shouldn't take security advice from folks with no need for security that is obvious to them.

Ok. Take it from the maintainer of GPG:

https://lwn.net/Articles/464137/

Hint: TOFC (trust on first contact) is a lot like what I described above, with the added usability that you don't have to type "yes" every time like a chump.
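The trust-on-first-contact idea, like ssh's known_hosts, fits in a few lines. A minimal sketch (the store, return strings, and host names here are all illustrative):

```python
import hashlib

# Minimal trust-on-first-use store: remember a key's fingerprint the
# first time we see a host, and complain loudly if it ever changes.
known_keys = {}

def check_tofu(host: str, public_key: bytes) -> str:
    fp = hashlib.sha256(public_key).hexdigest()
    if host not in known_keys:
        known_keys[host] = fp            # first contact: remember the key
        return "trusted-on-first-use"
    if known_keys[host] == fp:
        return "ok"
    return "KEY CHANGED: possible MITM"  # like ssh's scary warning

assert check_tofu("example.com", b"A" * 32) == "trusted-on-first-use"
assert check_tofu("example.com", b"A" * 32) == "ok"
assert check_tofu("example.com", b"B" * 32) == "KEY CHANGED: possible MITM"
```

The weakness, of course, is the very first contact: if Mallory is in the middle then, her key is the one you pin.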


But you just encrypt something with it and ask Alice to decrypt it with her private key and you ask her if she was able to.

Oh god but how do you ask her without a guarantee that she's really who said yes?

Even if you meet Alice in real life to ask, how can you be sure the meeting isn't a dream or a simulation and the Alice before your eyes isn't a chosen plaintext attack by a cosmic man in the middle?!

The entire science of cybersecurity is bankrupt and founded upon untenable foundations!!


That is a ridiculous straw man and I'm pretty sure you are aware of this. At some point, there is trust involved. You balance the credibility of authentication guarantees based on the level of trust required for the transaction you're making.

If your threat model includes the possibility of yourself being simulated, I don’t envy you

"Shipping [your] product"? How do I, for example, make money off of libsodium?

The point of the article is all the ways RSA blows up. libsodium addresses that style of problem. Saying there are other problems ("how do we agree on a key") may be true, but isn't responsive to the discussion.


It’s hard to verify that the key you received is actually from Alice, and not from Eve, who is sitting on the network between Alice and Bob.


If that sentence makes absolutely no sense to you, it's because this article assumes a very basic understanding of cryptography.


When a medical doctor prescribes you a treatment plan and drugs, do you claim it's all due to marketing wanketeering, and this is the problem of why healthcare is difficult? When they say something that doesn't make sense to you, do you argue with them from a position of ignorance?

Same goes with crypto. It's fine to learn the concepts of how to use a trusted library, but you really aren't likely going to understand the underlying tradeoffs and mathematics.

People here aren't just saying "just use product X", they're saying, "learn the concepts of product X and use that". That's about as good as it's going to get for any specialized, complicated domain.


> When a medical doctor prescribes you a treatment plan and drugs, do you claim it's all due to marketing wanketeering ...

Yes, of course it is. Doctors know a lot, but they're not competent to assess the pharmaceutical literature.

So here's how it works. In med school, drug manufacturers provide free equipment, food, liquor, etc. And push their drugs. Because once you have someone in the habit of prescribing some drug, they tend to keep prescribing that drug.

Also, you pay influential doctors to deliver talks at conferences, praising your drug(s). You also pay ghostwriters to draft papers, which said influential doctors can submit for publication.

And last, you pay sexy, charismatic young things (of both sexes) to visit doctors' offices, pushing your drugs, and giving away food and stuff.


There's also a scientific backstop in medicine to keep unethical commercial interests from causing unbounded harm.

It is possible (likely, even) to have experts with legitimate opinions that are scientifically valid and have measurable outcomes that also correspond to commercial interests, i.e., "it helps people and makes money".

That model has worked for over a century. Yes, it's under attack by unethical people (more than in the past?), but I'm not sure what the alternative would be.


This isn't a recent issue. It may be less of an issue than it was a decade or so ago.

For example, I recall reading that well over half (maybe 70%-90%) of the medical literature on Neurontin (gabapentin) was ghostwritten advertorial spam funded by Novartis.


Medical doctors have given me ignorant, incoherent or harmful advice many times.

When someone says something that doesn’t make sense to you, do you just put whatever they’ve handed you into your mouth without asking questions?


It's fine to ask questions to improve your understanding. But it's not fine to argue from a position of ignorance, obfuscate facts, and to project false pretences. This is how we get anti-vaxxers and the broader death of expertise.

A medical doctor prescribing a treatment plan and drugs would be akin to a trusted coworker crypto expert telling me to "do x with libsodium" without further explanation.

On the other hand, getting crypto advice from random bloggers and HN commenters is akin to getting a medical treatment plan and drugs from random bloggers and HN commenters.


Except that these aren't random bloggers and HN commenters, there are some world-renowned security experts on this very thread saying "do x with libsodium".

If you can't be bothered to read their profiles or understand who they are, I refer back to my point about arguing from a position of ignorance.


> When a medical doctor prescribes you a treatment plan and drugs, do you claim it's all due to marketing wanketeering, and this is the problem of why healthcare is difficult? When they say something that doesn't make sense to you, do you argue with them from a position of ignorance?

To be honest, this is literally the level of discourse of most Hacker News discussions about healthcare.


And it would be the correct level of discussion.

I have doctors in the family. The scariest stories of "marketing wankateering" in medicine I hear are from them.


To posit a strawman slippery slope for exposition: would your doctors in the family say the entire medical profession and its science can't be trusted, and that we should just stop vaccinations, nutritious diets, fitness regimens, cancer treatments, surgical procedures, etc. altogether?

Obviously not. So, where is the line drawn of what professional opinions are or are not trusted?

There's no doubt that "marketing wankateering" happens in all complex domains. Any "Market for Lemons" (i.e. a market with information asymmetry - a domain so complicated or obfuscated that consumers can't understand its fundamentals) will be exploited by charlatans. This is why we have professional (imperfect but functioning) backstops such as medical scientific research and the security/crypto research community.

OP was claiming that not even the professionals on this thread can be trusted to not be "wankateers" for a free/open source library, with no evidence, or even a hint of moderate understanding of the problem domain (i.e. why it's hard to distribute a public key), or desire to learn. Perhaps they were just frustrated with the complexity of the domain, but flaming people trying to help as being "wankateers" is rather fatalist.


> And it would be the correct level of discussion.

Arguing from a position of ignorance when people say something that doesn't make sense to you is literally how anti-vaxxing happens.

Just because arguing from a position of ignorance can sometimes produce outcomes which align with your personal anecdotes doesn't make it an intellectually valid method of discourse.


Where is the position of ignorance here? People on HN do know doctors, talk to doctors, have doctors in their families, and some even are doctors themselves.

Anti-vaxxer beliefs aren't caused by people questioning the first medical advice they get from a medical professional when it doesn't sound right to them. Anti-vaxxer beliefs come from either not verifying and going with your gut, or verifying and then ignoring what you've learned.

Doctors are humans and make mistakes sometimes, and your own health is your own responsibility. So is safety of your own application, so you shouldn't plug in someone else's crypto if you don't feel comfortable with it, but instead try to understand the domain as much as you need to start feeling comfortable.


> not verifying and going with your gut

...which is exactly what "arguing from a position of ignorance" means. Once you attempt to verify medical advice (in good faith) you are no longer ignorant.


FWIW the cool API here that PyNaCl exposes (but not every libsodium impl) is automatically selecting the nonce for you.


Indeed, the documentation mentions it and does expose it, but I abridged it out to show how trouble-free the default implementation is.


Taking you seriously for a moment. You don't need to understand elliptic curve mathematics. As with any other public key crypto system there will be a Private Key which you must never reveal to anyone else, and a Public Key which everybody can be told. They're bits. You don't need to care what those bits mean, if you find yourself using software which expects you to care what the bits mean that's too low level, bail out and find higher level software.

Unfortunately Coda Hale's post makes a bunch of assumptions. Reasonable assumptions for a lot of the audience, but they're still assumptions. So an equivalent post has to either make equally broad assumptions OR it has to ask you a bunch of questions.

Not crypto questions, but just questions about what you actually want to achieve by having public key crypto. That's going to dictate what you should do. In particular Public Key crypto almost invariably ends up with questions about Identity because of the Public Keys, how do I know this is Bob's Public Key? Do I care about that? Who is "Bob" anyway?

Example #1: For a web browser, Identity means the Internet DNS hierarchy decides what hosts are named, and the Web PKI provides us with an Authority that says this public key belongs to news.ycombinator.com. It's up to the end user to decide whether that's Hacker News or the Disney Company.

Example #2: For Signal the answer is that Identity is defined only locally by a user's own "address book" or similar features and they get to decide whether this contact is really "Dad" or "skrebbel" or "Jenny Smith", Signal promises to tell you if "Dad" doesn't have the same public key any more (which is suspicious), but paternity tests or verifying in person that "Dad" isn't just a proxy are your problem.

If in fact you "want to allow my customers to send me encrypted data via an untrusted channel" in 2019 you should just set up a secure Web Server, with a Fully Qualified Domain Name your customers know (e.g. write it on business cards, or the sides of vans, or a huge building) with a form on it, where they can submit things. Done. Choose a web server you like and go look at Mozilla's instructions for securing Web Servers.


Wish I could upvote this twice. A form on a secure web page covers both recipient authentication and secure ingress pretty well. Definitely way more approachable for both users and developers than encrypted email or other such channels.


> Mozilla's instructions for securing Web Servers.

I believe you're talking about:

https://wiki.mozilla.org/Security/Server_Side_TLS and/or https://infosec.mozilla.org/guidelines/web_security


> If in fact you "want to allow my customers to send me encrypted data via an untrusted channel" in 2019 you should just set up a secure Web Server, with a Fully Qualified Domain Name your customers know (e.g. write it on business cards, or the sides of vans, or a huge building) with a form on it, where they can submit things.

That assumes the customer's browser is trusted, and not configured to silently allow all encrypted connections to be intercepted, decrypted and potentially modified by someone else in the network, which is unfortunately still too common.


Do you have ideas for a more secure platform than this that the average person can actually use?


> by someone else in the network

That just means someone else on the network securely sent you stuff...



Disclaimer: I co-authored the Latacora one. I think "key size" is something maybe protocol designers need to be worried about, but it's still about two hops above how knobs-set-for-safety I like recommendations I make to be for the median programmer.


> I think "key size" is something maybe protocol designers need to be worried about, but it's still about two hops above how kobs-set-for-safety I like recommendations I make to be for the median programmer.

From the article:

> The most important thing to keep in mind about cryptographic key sizes in 2019 is they don't matter nearly as much as the general public likes to think.

:P

The "general public" I'm referring to is the set of people who aren't cryptography protocol designers.


Thanks for sharing.

Also, this[1] article helped me visualize how ECC works. It's a great read.

[1]: https://arstechnica.com/information-technology/2013/10/a-rel...


We wrote a blog post for that: https://latacora.micro.blog/2018/04/03/cryptographic-right-a...

To answer your specific comments: firstly, you don't need public key crypto. If both you and your sender can agree on a secret key, you can use secret key crypto just fine. For that, use libsodium's secretbox.

If you really need public key crypto for some reason, libsodium's box is what you want.

What's the "untrusted channel" here? Is this really about encryption in transit, or encryption at rest?


Suddenly I’m more confused. How can you agree on a secret key, remotely, without some foundational source of trust? I don’t trust the default CA packages anymore, given the prevalence of untrustworthy vendors like Comodo.


Yes, you need some foundational source of trust, but you were going to communicate that public key one way or another. If you don't trust that medium (let's say it's WebPKI TLS and an adversary gets a bad cert from Comodo somehow), the adversary could just lie about the public key too. A MITM adversary that's in a position to unseal and reseal libsodium secretbox messages can also do that with libsodium box.

(For everyone else: just because I'm playing along with this misissuance scenario doesn't mean I don't think ~everyone should trust WebPKI.)


He’s saying that if you can get away with generating something like a random API key and just having the client save that, that’s much safer than using public key cryptography.


Minor clarification:

> that’s much safer than using public key cryptography.

It's much harder to screw up than using PKC (thereby making it safer). It's something that is really important for UX - you want to make the easiest path the safest path.


No. Don't go looking for reasons to use public key where the underlying design doesn't necessitate it.

In a private scenario (i.e. most B2B usage), you can communicate the key through a different channel. One option would be to write it down on a piece of paper, and fly someone to the other business to deliver it. Now both sides have a copy of the key, and transmission is guarded by your trust in your employee to not skim the key.

Mail is another option, depending on how secure you think that is.


It's much safer to just send the public key over whatever medium and then use an authenticated channel to verify the authenticity of said public key.

How often do you have a reliable tamper resistant authenticated channel that isn't also secret?

I might be confident that the bad guys won't realise what's going on and somehow break into my email system, replacing your public key with their own, in the, say, six hours it takes to receive and act on the message; whereas I can't be reasonably confident that all my mail systems, and all backups of those mail systems, will remain unbreakable for the foreseeable future, or until I've deleted all trace of the symmetric key.

Cryptography is often not about "uncrackable" but about confidence in security over a foreseen interval.

On the 4th of June 1943 the details of Operation Neptune ("D-Day") were a priceless secret of the Allied Command, a break that allowed Berlin to know the plan could have resulted in the invasion force being expected by a heavily prepared and well-reinforced German defence. Those who made it back to England alive would have no reward for their efforts.

By the 7th of June 1943 plans for Neptune were a historical curiosity of little military value.


Good comment, but I couldn't help notice that you got the date (6/6/1944 != 6/4/1943) wrong for Operation Neptune/Overlord: https://en.wikipedia.org/wiki/Operation_Overlord

D'oh. I swear I had the year correct (it's supposed to be a few days before the invasion because the Germans wouldn't be able to respond instantly) in an earlier draft. Thanks for the correction.

And I did mean Neptune, the Overlord plans would have remained valuable for several weeks as they included details of the immediate objectives in Normandy and how resupply would be done, Neptune was just the invasion and related actions.


Establishing a secret key and a secure channel does not require trust. It just requires a pair of keys so that you can be sure that the message gets to the other party unintercepted and unmodified.

However, just because you have a channel that can't be MitM-ed doesn't mean you can actually trust whoever is on the other end. The problem of "how do you know you trust the other end" requires some foundational source of trust, whether that's trusting the transportation medium (that the person you handed the USB key to is actually who he/she claims to be), the issuer (could be yourself), and so on.


Would Diffie-Hellman (still) be a good way of exchanging secrets? ECDH?


Don't freelance your own kex protocol. Don't use multiplicative group DH. Don't use the NIST p-curves to implement ECDH, which will require you also to implement point validation to avoid invalid curve attacks. You want something like Curve25519 DH, but depending on your application, you probably also want an authenticated key exchange, which a naive Curve25519 protocol isn't going to give you.


> Don't freelance your own kex protocol.

Considering it took me 8 months to not even finish my own, making several serious mistakes in the process, even though I wrote my own goddamn crypto library and am standing on Noise's shoulders…

I tend to agree.

That said, we still have a problem: `crypto_box()` is nice, but it doesn't have any kind of forward secrecy, identity hiding, or key compromise impersonation resistance. Sure, offline protocols can't really have that (the best you can do is use signatures to avoid the key compromise impersonation issue, but then you lose some deniability). Online protocols, however, can.

But then what do we do? We could use Noise (the Noise Explorer now generates code in several languages), but you at least have to choose a pattern. Which one? How can you even notice that XK1 maximises client anonymity, and IX (properly used) maximises server security? Even if you know that, which are you going to use?

And I'm not even talking about implementing your Noise pattern, should existing implementations not suffice (maybe you want to leverage a custom crypto library optimised for some embedded processor). Noise isn't simple. A serious drawback in my opinion.

This, if I guess correctly, is what led you to recommend that horror that is TLS for client/server security. Because the burden of choosing a reasonable TLS library, of properly configuring that TLS library to only use TLS 1.3 with a limited set of ciphers (possibly just Curve25519 and ChaPoly)… still remains less error prone than picking up a Noise protocol. <PKI matters conveniently left out>

That situation is deeply unsatisfactory. It's all well and good to tell people "don't make your own kex", but if you don't at the same time point them to an existing kex, they will be less likely to follow your advice. I mean, I didn't.


Depending on context the answer is "absolutely" or "absolutely not", so that's hard to answer. I definitely wouldn't tell you to use finite-field Diffie-Hellman in any context, or any derived cryptosystems (ElGamal or IES), no.


I might be misreading, but it seems like you advise against IES. If so, does that extend to ECIES? I ask because the article itself suggests ECIES.


"Finite-field" is the operative phrase in that sentence. I wouldn't tell you to use IES, as in the thing where you do g^xy mod p. I would tell you to use libsodium's box, which is a specific implementation of something sorta like ECIES, though.

The confusion is that IES/ElGamal sometimes means a family of systems and sometimes someone literally means FFDH + AESCTR or something. The abstract concept is fine, but you should use a hardened implementation.


Diffie-Hellman is what you use if you trust that the public key belongs to who you think it does. In many cases, that's totally fine. It'll either be hosted on their server, or even hard coded into the client application you're communicating from. RSA + DH is the attempt to deal with an untrusted public key.

^^^ I know this chap, he knows what he's talking about.


If you really want a minimum implementation that you can understand for educational purposes, perhaps you could try djb's original Curve25519 library to compute a shared-secret. It's extremely simple, a Diffie-Hellman exchange can be implemented within 30 lines of C by calling the library. I think nobody really uses it, but it's the reference implementation of the Curve25519 paper. WARNING: This implementation is probably not secured against side-channel attacks on a modern CPU, offline-use only. https://cr.yp.to/ecdh.html

1. Computing secret keys. Inside your program, to generate a 32-byte Curve25519 secret key, start by generating 32 secret random bytes from a cryptographically safe source: mysecret[0], mysecret[1], ..., mysecret[31]. Then do

     mysecret[0] &= 248;
     mysecret[31] &= 127;
     mysecret[31] |= 64;
to create a 32-byte Curve25519 secret key mysecret[0], mysecret[1], ..., mysecret[31].

2. Computing public keys. To generate the corresponding 32-byte Curve25519 public key mypublic[0], mypublic[1], ..., mypublic[31], call

     curve25519(mypublic,mysecret,basepoint);
where the constant basepoint is 9 followed by all zeros:

     const unsigned char basepoint[32] = {9};
3. Computing shared secrets. Given someone else's Curve25519 public key hispublic[0], hispublic[1], ..., hispublic[31], call

     curve25519(shared,mysecret,hispublic);
Done, you've just got a shared secret. Then you can use standard tools like OpenSSL or "gpg --symmetric" to encrypt the data using AES with that key.
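For what it's worth, the clamping in step 1 can be mirrored in a few lines of Python, with the reason for each bit operation spelled out. This is just the key-generation arithmetic, not a curve implementation:

```python
import secrets

# Curve25519 secret-key "clamping", as in step 1 above.
sk = bytearray(secrets.token_bytes(32))  # 32 cryptographically random bytes
sk[0] &= 248    # clear the low 3 bits: the scalar becomes a multiple of 8
                # (the curve's cofactor), neutralizing small-subgroup points
sk[31] &= 127   # clear bit 255 so the scalar stays below 2**255
sk[31] |= 64    # set bit 254: a fixed top bit means a fixed number of
                # Montgomery-ladder steps, which helps constant-time code
assert sk[0] % 8 == 0 and not (sk[31] & 128) and (sk[31] & 64)
```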

But seriously, just use libsodium. It has received extensive audits, its abstraction model is easy to use and understand, while mistakes are hard to make.


The article suggests to "use libsodium", with Curve25519.


Yes, so I googled "Curve25519 libsodium" and I still have absolutely no clue how to get this going.

I mean the first hit starts with this sentence: "Ed25519 keys can be converted to X25519 keys, so that the same key pair can be used both for authenticated encryption (crypto_box) and for signatures (crypto_sign)."

I have no idea what do to next.


I think this would be a good starting point for what you are trying to achieve: https://libsodium.gitbook.io/doc/public-key_cryptography/aut...

The first example shows both sides of the communication. The public keys are shared (but you want to make sure that you have the correct one, for example by meeting the recipient in person and comparing the keys). The library defaults in libsodium are already chosen to be secure (using Curve25519 and secure algorithms).


That's really nice, thanks. I wonder how my Google fu failed me here. Bookmarking!


Maybe it's not your Google-fu. Maybe it's because the article isn't a video on YouTube or some sort of social media post, nor is the page heavily weighted with ads. It's a web page full of useful, on-topic information, and the current algo at Google seems to rank those types of pages lower. At least that's how it feels.



If I write something that uses libsodium, am I violating the near universal "don't roll your own crypto" admonishment, or does libsodium sufficiently hide all the scary dangerous cryptographic stuff so that mere mortals can safely use it?

Yes. However, it never hurts to test your code.

Assuming you're a C-programmer, read the libsodium docs first. https://download.libsodium.org/doc/public-key_cryptography/s...

If you're using higher level language, use a library that provides bindings for it https://download.libsodium.org/doc/bindings_for_other_langua...

By using libsodium, you're not rolling your own crypto. Rolling your own crypto would mean

- trying to find new one-way functions for public key crypto

- trying to implement RSA from a textbook

- trying to implement RSA-OAEP from papers, RFCs, books, etc.

Using a library is not anywhere near those. There are other ways to fail cryptography too, from not doing public key authentication, to storing private keys in insecure places.

So it's highly recommended you take time to read a book on the topic. The best modern book currently available is https://www.amazon.com/Serious-Cryptography-Practical-Introd...


> By using libsodium, you're not rolling your own crypto.

That's debatable. Libsodium does not have a proper authenticated key exchange. The key exchange does the job somewhat, but it has worse security properties than a properly crafted interactive protocol.

Problem is, designing your own key exchange protocol is more delicate than most construction. I know, I designed my own, and made several serious mistakes in the process (one of which voided an important security property).

Granted, rolling your own constructions is generally less error prone than rolling your own primitive. But it still shouldn't be done without at least having followed and fully understood an introductory course in cryptography (I recommend https://www.crypto101.io/). I mean, I did quite a bit more than that, and I still don't fully trust myself.


TL;DR - you're mostly avoiding the admonishment by using libsodium

There are still some dangerous things you can do (e.g. mismanage keys, reuse nonces).

I wrote a library called Halite for PHP developers that wraps libsodium and makes it even harder to misuse. My philosophy was, "You shouldn't even need to know what a nonce is to use it securely."

https://github.com/paragonie/halite

From elsewhere in the thread, PyNaCl takes a similar approach. So how dangerous "just using libsodium" is, with respect to the "don't roll your own crypto" guidance, depends a little bit on which binding you're using.

Also, a lot of tasks might require a specific protocol (PAKEs, VPN protocols, searchable encryption, etc.) that libsodium isn't suitable for.
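The "you shouldn't even need to know what a nonce is" idea boils down to the wrapper picking a fresh random nonce itself and shipping it with the ciphertext, so the caller can't reuse one. A toy stdlib sketch of that shape (the `seal`/`open_` names are hypothetical, and the XOR keystream from keyed BLAKE2b stands in for a real AEAD; messages are capped at 64 bytes, BLAKE2b's maximum digest):

```python
import hashlib
import secrets

def seal(key: bytes, msg: bytes) -> bytes:
    """Encrypt msg; the nonce is chosen internally and prepended."""
    nonce = secrets.token_bytes(24)
    stream = hashlib.blake2b(nonce, key=key, digest_size=len(msg)).digest()
    return nonce + bytes(a ^ b for a, b in zip(msg, stream))

def open_(key: bytes, sealed: bytes) -> bytes:
    """Recover msg from nonce || ciphertext."""
    nonce, ct = sealed[:24], sealed[24:]
    stream = hashlib.blake2b(nonce, key=key, digest_size=len(ct)).digest()
    return bytes(a ^ b for a, b in zip(ct, stream))
```

The point of the design: because `seal()` never accepts a nonce parameter, the nonce-reuse footgun simply isn't in the API.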


Wow, your site is a goldmine! Really loving how you make this subject matter accessible for average nerds like me.


Thanks!

I wish I were a little better at SEO so average nerds would have an easy time finding it.


I just read the article a second time and I really like it. This is precisely the starting point I was craving, and if I had known about it I might not have written this rant :-)

My only SEO advice is patience. Your articles are great, people will link to it increasingly often.

That said, why the sole PHP focus? The starting point you just linked to is pretty much independent from PHP, and there are libsodium bindings in plenty of languages. If your goal is simply to make good crypto easier to do for "normal" developers, then the PHP association may do more harm than good (particularly because PHP is not very hip right now).


I'm mostly to blame for PHP adopting libsodium in version 7.2. Focusing on PHP was pretty on-brand at the time that was written.

I'm going to need to update it (Argon2id is now the default), so I might generalize the code (or make a tab system to switch between examples).


Then have another developer or consultant handle that part of your project. It seems like you misread a non-existent imperative in the article that "anyone can crypto with these 3 easy steps". They made no such claim.


Where do those consultants/developers come from? Are they born from Crypto eggs with all the knowledge in their brain? Or how do those mighty people come to be?


There are good formal cryptography courses, though some of them focus on aspects that aren't relevant to what I'd call "cryptographic engineering". For example, differential cryptanalysis is very interesting, but basically nobody except the IOTA community is silly enough to try and implement their own primitives, so that's not a common problem.

I authored Crypto 101. One of my cofounders co-authored Cryptopals and founded the cryptographic practice at Matasano (as well as co-founding Matasano itself ;)). So for cryptographic engineering, I think the answer is "historically oral, and increasingly written, tradition"?


I would argue that the comment was more about long term crypto vs one choice.

A random person on the internet advising which curve to use (which I don't even know what that is) can be insecure as well.

And even if it's a secure choice now, it may not be in a few years.


> A random person on the internet advising which curve to use can be insecure as well.

First, the advice does not come from "a random person".

* The recommendation of Curve25519 was originally reached by its author after evaluating all available elliptic-curve standards under a set of well-defined, objective criteria. The research is published here [0].

* It's not a one-man claim. This conclusion has been peer-reviewed by many leading researchers and institutions over the past 10 years. There has been an enormous amount of discussion in various communities, including cryptographic developers working on community projects like OpenBSD [1], OpenSSH [2], and Tor [3], industry leaders such as Apple [4], Google [5], Microsoft [6], and CloudFlare [7], as well as Internet institutions such as the IETF [7]; now it's officially part of TLSv1.3.

Nearly everyone agreed the security claims of Curve25519 are valid and that it offers many desirable properties compared to previous standards.

> And even if it's a secure choice now it may not be in a few years.

The theoretical security margin is comparable to any other 256-bit ECC encryption, yet it offers a more conservative and robust design against known attacks than other curves, including a good complex-multiplication field discriminant D against potential speedups to Rho Method, immunity against invalid-curve attack, indistinguishability from uniform random strings, and allowing easier, more robust implementations of constant-time (anti side-channel) addition and multiplication.

In other words, in theoretical cryptography, there is no known, computationally-feasible way to attack it. Perhaps in the future someone could find a way, but ECC has been thoroughly analyzed in the past 20 years and it's reasonable to believe a major weakness is very unlikely. "You cannot be sure that it cannot be attacked in the future" is not valid logic in applied cryptography. The valid logic is "You only choose something with a high-level of confidence".

Currently, the only way to attack is using a quantum computer, but when they come, all the public-key algorithms deployed on the Internet are vulnerable without exceptions (it has been claimed that ECC needs fewer qubits to crack than RSA, that's true. But when quantum computers are large enough to attack ECC, attacking RSA is only a matter of time in the short-term. Giving up ECC entirely today and facing all the potential problems of RSA with long keys for 10-20 years, just to buy a few years of time from quantum attackers is a very questionable sacrifice). Therefore, the research on Post-Quantum Cryptography has already started [8], with possible candidates like McEliece and NTRU. They are expected to replace current standard in the next decade.

> which I don't even know what that is

WE DO NOT CELEBRATE IGNORANCE.

References:

[0] https://safecurves.cr.yp.to/

[1] http://www.openbsd.org/papers/bsdcan-signify.html

[2] https://tools.ietf.org/html/draft-ietf-curdle-ssh-curves-08

[3] https://gitweb.torproject.org/torspec.git/tree/tor-spec.txt?...

[4] https://www.apple.com/business/docs/iOS_Security_Guide.pdf

[5] https://www.chromestatus.com/feature/5682529109540864

[6] https://www.microsoft.com/en-us/research/wp-content/uploads/...

[7] https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/

[8] https://en.wikipedia.org/wiki/Post-quantum_cryptography


What I have learned about crypto is that there are people who understand it well. And those people are not random strangers; they are security experts.

And even if you've found good, solid, well-supported security advice from reputable sources, if it's 10 years old... you are still insecure.

Therefore, modern, up-to-date information from reputable sources is the only true way to stay secure. It's a running battle, not a static one.

I feel you have assumed I meant something entirely different.


The "Cryptographic right answers" series is an interesting page, that tells you what to do while still having a little bit of technical information.

There are multiple ways to "send encrypted data via an untrusted channel", none of which is perfect, but by descending order of ease of installation and use I would propose this:

- Use Signal or Wire to send that data through messages

- Use Keybase

- Setup an HTTPS server (the default configuration of caddy (https://caddyserver.com/) is good enough)

- Use the "box" function from NaCl, or libsodium, or Go's reimplementation. Those are all libraries, so it will require actual coding. But those libraries are all made so that misuse is very hard, and you don't need to know or even understand how the crypto works to be able to use it


I know this late to the party, but no one has responded to this:

> If this is article is "easy" crypto,

This article isn’t “easy” crypto. If you expect to understand it, you should already understand how (mathematically) RSA works.


> If this is article is "easy" crypto, and I shouldn't roll my own crypto, then what should I do?

X25519, Ed25519, XSalsa20-Poly1305--you just minimized your footguns.

Unless you have a different library accessible, use TweetNaCl or one of its wrappers--now you've minimized your library footguns: https://tweetnacl.cr.yp.to/

If the operation you want to do isn't part of that library, you need to ask yourself: "Do I truly understand the security implications of what I am trying to do?" And then not do it.

There are still lots of implementation footguns (leaking values in memory, side channel attacks, etc.), but those 2 choices put you so far ahead of the pack that you can probably survive until you have enough money to actually pay a cryptographer to come review your stuff.


> XSalsa20-Poly1305

I'd suggest ChaCha20-Poly1305 in the AEAD construction defined in RFC 7539[1] (which is what most new libraries implement). ChaCha20 is more performant and these days is more widely used -- though the underlying construction is very similar and both were designed by Bernstein et al.

[1]: https://tools.ietf.org/html/rfc7539


That basically sums up my experience reading this.


OK, so how do you use libsodium in ssh?

Elliptic curve cryptography (ECDSA) is what's used in Bitcoin (Secp256k1).

Try reading up this guide for a great intro into application level crypto... https://tozny.com/security-guides/

This is a good guide for which one to choose:

https://safecurves.cr.yp.to


Tl;dr

Curve25519 for 128-bit security, to use with 128/256-bit symmetric cipher.

X448 for 224-bit security to use with 256-bit symmetric cipher.

-

For symmetric ciphers choose any of the three below:

- ChaCha20-Poly1305

- Salsa20-Poly1305

- AES-GCM


Crypto can indeed be intimidating for ordinary developers. Discussions tend to be dominated by very technical arguments about algorithms, math, and "my crypto is better than yours" type arguments. Not always that productive. I tend to look at things like "how easy is this to use", "are good implementations available", and "what are smart people saying that I should probably listen to".

The consensus of not doing your own crypto is decades old at this point and still good advice but also a bit of a cliche. I don't come across a whole lot of projects actually doing their own algorithms. The opposite actually. People just blindly copy paste dependencies and code they think is good enough.

Despite the tone of the article (and its length), the advice is actually sound. RSA has been around a long time; it can be quite complicated to integrate and definitely has a few pitfalls, mainly because it leaves a lot of things as an exercise to the developer and thus plenty of room to do it wrong.

I recently came across ECC and the curve they recommend in the article in Stellar (which uses it) and had to write a bit of code to figure out how to work with keys; this turned out to be very easy. What I like here is that from a developer point of view there's not a lot to configure here and when you use it as a black box that's a good thing. Basically key generation is generating a single random number. Which of course you should do using a secure random generator. You then calculate the public key from this secret. End of story. You probably want some tests around this to verify that you can work with some known key but this basically either works or doesn't.

The only thing I found slightly worrying was the lack of standard/commonly used implementations for different languages. For example, the Stellar Java SDK pulls in a library (net.i2p.crypto:eddsa) from the i2p project at version 0.3.0. I actually fixed a minor bug at some point to at least get rid of a fork of this that they were using, but this suggests to me that there might be some issues lurking out there with not-so-great implementations.

The discussion for the relevant ticket to implement this in OpenJDK suggests that there are some not so trivial aspects to a proper implementation: https://openjdk.java.net/jeps/324. This only landed in v11. If you are targeting e.g. Android (like Stellar is), you'd need some third party implementation.

I've looked around and there are a few alternate implementations at this point for java and other languages but nothing that I would consider very mainstream yet. So, ECC curve 25519 looks easy to use but the implementations are all over the place in terms of maturity. That in itself should still be considered a red flag and sort of goes against the advice of not rolling your own crypto; which is effectively what you are doing when pulling in other people's libraries in really early versions.


You should read Bruce Schneier. He generally explains why you should care. His book is pretty good too. The first third is general theory before he gets into implementations, and it's worth buying even if you never read past the halfway point.

Nitpick:

This includes some attacks that I don't think really ever occur in the real world (bad private exponent size, Franklin-Reiter) and misses one of the great RSA attacks, Bleichenbacher's "rump session" e=3 signature attack, about which there was an excellent NDSS paper (also being presented at Black Hat) just this year.

So I guess I'm saying: RSA is even worse than this blog post suggests.

https://www.ndss-symposium.org/wp-content/uploads/2019/02/nd...


From the article: “When e = 3, or a similarly small number, many things can go wrong.“

This seems to be an oblique reference to the attack you were referring to.


The textbook e=3 attack is Hastad's broadcast attack (on textbook RSA), which is much simpler than the e=3 signature verification problem. (So the article is right that _lots_ of things go wrong, 'tptacek's example is just more exciting and mine is easier to grok :-))
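To make the textbook e=3 failure concrete, here is a self-contained toy in Python (arbitrary toy primes just to build a big modulus; only the public operation is exercised, so no decryption key is needed). With no padding and a short message, m^3 never wraps past n, and an integer cube root recovers the plaintext:

```python
def icbrt(x):
    """Integer cube root: smallest v with v**3 >= x (binary search)."""
    lo, hi = 0, 1 << (x.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 < x:
            lo = mid + 1
        else:
            hi = mid
    return lo

p = 2 ** 89 - 1            # toy Mersenne primes, only to get a large n
q = 2 ** 107 - 1
n = p * q                  # ~196-bit modulus
m = int.from_bytes(b"secret", "big")   # short message, no padding
c = pow(m, 3, n)           # "textbook RSA" public operation with e = 3
# m**3 (~2**141) is far below n, so the modular reduction never happened:
recovered = icbrt(c)
assert recovered.to_bytes(6, "big") == b"secret"
```

Proper padding (OAEP) prevents exactly this: it makes the padded m large and random-looking, so m^3 always wraps the modulus.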


Is that covered in the last paragraph of the public exponent section? I think Ben linked to an older ImperialViolet explanation over that paper, which maybe we should fix.


Oh, you did, and you linked to Cryptopals! Sorry. Not a good nitpick.

No worries! That paper is a great resource that we'll insert into the post somehow.

While the rule for devs is, "never roll your own crypto", what is the authoritative resource for what we should use?


It depends on what you're trying to accomplish.

Rule of thumb is: just use TLS 1.3 if you can. That includes using TLS 1.3 in mutual auth mode if you need to validate both client and server identities. It comes with a lot of PKI for free to help address the "how do you know you can trust the other guy on the line?" problem. If you want the client/server to trust a restricted list of servers/clients, most clients and servers have that capability.

If SSL/TLS won't suffice and you just need to create an end to end "thing" with proof of identity, then libsodium is probably fine.

If you're looking for a more general purpose crypto library where you need to mix and match algorithms, I'd recommend something like Google Tink (https://github.com/google/tink), or the high-level interface to the native APIs that come with the OS. On OS X that's the native security framework (https://developer.apple.com/documentation/security), and on Windows that's the .NET cryptography library (https://docs.microsoft.com/en-us/dotnet/api/system.security....).

If you need even more flexibility and are willing to do the research to avoid the footguns, there's always OpenSSL/LibreSSL, but that comes with a huge footgun warning.


> It comes with a lot of PKI for free to help address the "how do you know you can trust the other guy on the line?"

I would hesitate to trust a high stakes system (like, controlling a regional power grid) with TLS's default PKI. There are hundreds of certificate authorities out there, all of which can vouch for everything. Compromising one certificate authority is enough to mount a man in the middle attack.

Granted, this is not a trivial attack, and TLS as it is now is mostly sufficient for your regular online shopping. More critical applications however need a dedicated PKI.


Absolutely, which is why I included the extra point:

If you want the client/server to trust a restricted list of servers/clients, most clients and servers have that capability.

If you are running a regional power grid, TLS as a technology is still fine, but you should absolutely not trust the default PKI.


Oops, missed that, sorry.

> Trail of Bits recommends using Curve25519 for key exchange and digital signatures. Encryption needs to be done using a protocol called ECIES which combines an elliptic curve key exchange with a symmetric encryption algorithm. Curve25519 was designed to entirely prevent some of the things that can go wrong with other curves, and is very performant. Even better, it is implemented in libsodium, which has easy-to-read documentation and is available for most languages.

(it's unclear from the text whether ECIES is implemented in libsodium or only Curve25519; the internet seems to believe NaCl (/libsodium) implements its own superior variant of ECIES called crypto_box)


ECIES is part of a family of things that are all more or less just ElGamal with a symmetric primitive attached. libsodium implements one version of that with Curve25519 + secretbox (which in turn is XSalsa20.) It's really mostly just superior in the sense that it's a) fully specified, including test vectors b) all the knobs twiddled for safety, c) incidentally, very performant.


I don't think ECIES uses any PKI. The setup is: given a DH pubkey P, generate a DH ephemeral key Q, do the DHKE to derive k, encrypt your message m, then send (Q, E_k(m)).
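That (Q, E_k(m)) structure can be sketched with throwaway finite-field parameters. This is a toy for the shape of the protocol only: real ECIES uses an elliptic-curve group and a real authenticated cipher, not this XOR stand-in, and the `ies_encrypt`/`ies_decrypt` names are hypothetical:

```python
import hashlib
import secrets

P_MOD, G = 2 ** 127 - 1, 3          # toy DH group (Mersenne prime modulus)

def keygen():
    x = secrets.randbelow(P_MOD - 2) + 1
    return x, pow(G, x, P_MOD)       # (secret, long-term pubkey P)

def ies_encrypt(their_pub, msg):
    eph = secrets.randbelow(P_MOD - 2) + 1   # fresh ephemeral secret
    Q = pow(G, eph, P_MOD)                   # ephemeral public key Q
    shared = pow(their_pub, eph, P_MOD)      # the DHKE step
    k = hashlib.sha256(shared.to_bytes(16, "big")).digest()  # derive k
    return Q, bytes(a ^ b for a, b in zip(msg, k))           # (Q, E_k(m))

def ies_decrypt(my_secret, Q, ct):
    shared = pow(Q, my_secret, P_MOD)        # same DH value from the other side
    k = hashlib.sha256(shared.to_bytes(16, "big")).digest()
    return bytes(a ^ b for a, b in zip(ct, k))
```

Note there's no ElGamal-style public-key encryption anywhere: the asymmetric part is purely a key agreement, and the message itself is handled symmetrically.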

It doesn't, but I also didn't say it did?

My bad, I meant to say that it doesn't use any public key encryption like ElGamal. Or at least, I've only ever seen ECIES implemented with ECDH + symm enc

I was sort of surprised that he wrote a whole blog post about what's wrong with RSA and never mentioned that it doesn't provide perfect forward secrecy - if the private key is ever compromised, every communication that was ever secured using that key can be decrypted after the fact (if we're nitpicking).


Technically not true. You can generate an ephemeral RSA public key.

Is this different with other algorithms, such as the one implemented by libsodium?

It's different with all Diffie-Hellman key agreements (which I _think_ libsodium's key exchange API uses, although it's sort of difficult to tell for sure from the docs). Diffie-Hellman key agreements generate a new key on the fly and then throw it away so that there's nothing to compromise later.
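A minimal illustration of the ephemeral idea (throwaway finite-field parameters for readability; real protocols use X25519):

```python
import secrets

P, G = 2 ** 127 - 1, 3                 # toy DH group, not secure parameters
a = secrets.randbelow(P - 2) + 1       # Alice's ephemeral secret
b = secrets.randbelow(P - 2) + 1       # Bob's ephemeral secret
A, B = pow(G, a, P), pow(G, b, P)      # public values, exchanged in the clear
k_alice, k_bob = pow(B, a, P), pow(A, b, P)
assert k_alice == k_bob                # both sides hold the same session key
del a, b   # discard the ephemerals: a later compromise reveals nothing
           # about this session, which is the forward-secrecy property
```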

Yes, of course. It uses X25519 which is an elliptic curve Diffie-Hellman exchange.

It's in the video linked in the post.

I generally enjoy the content that Trail of Bits puts out, but the harsh tone and use of expletives seems only for the sake of attention to a non-issue. RSA, the algorithm, is generally safe for real-world use. It is possible to use RSA securely (we've been doing it). However, there is indeed an issue.

Let's define the issue: poor understanding of cryptography leads to poor implementation in practice.

This issue applies to any crypto algorithm, not just RSA and ECC. If everyone dropped RSA tomorrow in favor of ECC, would all of our problems be solved? No.

How to solve this issue?

1. Increase understanding of general cryptography. It is a dense subject.

2. Make implementation of cryptography easier. This is slightly more difficult though, as it depends on your use-case.


The specific tone of this post is purposefully bombastic to call attention to the fact that RSA is uniquely difficult to implement and utilize correctly. Newer cryptographic constructs use various methods to reduce the probability of incorrect implementation and improve misuse resistance. Are they perfect? Of course not! But we're improving all the time with signature algorithms like ed25519, nonce-misuse-resistant (NMR) constructions like SIV, etc.


I don’t think the problem is that RSA is harder to implement correctly than, say, Curve25519. The problem is that it’s much easier to implement almost correctly.

> 2. Make implementation of cryptography easier. This is slightly more difficult though, as it depends on your use-case.

No, this is actually much easier than increasing general understanding.

Just get the experts (which is a much smaller pool of individuals than the general populace) to commit to designing things like WireGuard instead of OpenVPN, sodium instead of OpenSSL, PASETO instead of JWT, etc.


Unless I am stupid, these are all implementation-specific problems (e.g. bad key selection, bad key size selection, exploits due to coding errors, etc.) and this does not tell us anything new about prime number factorisation.

If this assumption is correct, why not just recommend a key size and some checks for generation than recommending elliptic curves? Elliptic curves have their own problems especially due to their complications around exploits of specific curve properties but more importantly the fact that programmers won't understand what they are calling in their code.

I understand that libsodium has been here now since 2005, but are we really at the point where we should stop using RSA just because a lot of Salesforce/Oracle/Azure/<insert boring job> developers are misusing RSA? I guess this question is really about whether we are at the anecdote stage or the herd immunity risk stage that OP mentions.


It doesn't need to tell you anything new about prime number factorization because practical RSA cryptosystems get broken with far more boring bugs than prime number factorization. RSA padding bugs aren't really "coding errors": RSA padding is just as critical to RSA security as the hardness of number factoring.

Furthermore, they don't just tell you to use some random curve thing: they tell you to use a specific implementation with knobs twiddled for security. You could work hard to maybe get everything right, or you can just do this thing and everything will be set for safety for you.


So, to be Devil's Advocate, does libsodium also provide an RSA implementation?

There is no place to use RSA instead of Diffie-Hellman. DH provides forward secrecy, and the ECC variants are much faster and use shorter keys for equivalent security. They are harder to implement in a wrong way.

No. Why would it?

If you only use RSA to sign and not encrypt, then you lose the padding oracle attack and you're left with... a ton of other issues still. But the Bleichenbacher family of attacks is really devastating, so I'd tolerate some RSA digital signature usage. But please, stop using RSA key transport!!

This comment reminds me of a rant I listened to once about implementing bsddb. It went, basically: Everything you need to know to correctly, reliably, use bsddb is somewhere in their documentation. But the chances you have of correctly using the library, without reading, understanding, and remembering every line of that documentation, is basically 0.

Which I guess explains why RedHat had corruption issues with their package database for, what, a few years before they figured it out? If RedHat needs a few years to get it right for a core component of their core product, who can be expected to?

This RSA discussion makes me wonder if it is in the same boat.


Yes, these are implementation specific problems. I think the article wasn't saying "just use ECC instead", it was saying "use ECC instead because it has a widely recognized and standardized implementation path that's proven secure, in contrast to RSA."

The biggest issue he describes is that, when implementing RSA, most of the parameter choices are necessarily private to the developer, meaning the dev is expected to supply good parameters on her own, when "good" means in terms of heavy math that no non-cryptographer can be expected to know. It's not just key size--there's a bunch of bad choices a dev can make, many of which are hard to recognize as bad choices.

Secondarily, many of those bad choices are driven by implementing RSA in constrained environments, where they're a tradeoff to achieve performance goals--so known-good techniques or algorithms are actually rejected.

ECC, by contrast, is almost all public parameters, so the security community can provide (and already has) known-good parameters that are provably secure, and a dev can simply choose a known-good configuration for their implementation to go with a well-regarded library like libsodium.


Are you also saying that an ECC public key necessarily provides those details that may remain private with RSA?

Not op, but that is partially the case. The idea is that in RSA, many of the parameters lie with key-generation. Both in how the private exponent is chosen, and how the primes that form the modulus are chosen. Unless the implementation is totally borked, those choices aren't visible from the key.

Meanwhile with ECC the main choice is which curve to use. This is visible in the public key. Hence, you can just check whether it is a curve that is generally accepted. For picking your private key, you only need a single integer chosen uniformly from a specific range. Whilst that is not totally trivial to do up to cryptographic standards, it is a lot easier than e.g. 'pick a pair of primes at random that meets these requirements.'


The argument is that the interface is too complicated for non-experts to parameterize securely.

Consider the libsodium example: https://libsodium.gitbook.io/doc/public-key_cryptography/aut.... The crypto implementation parameters are handled under the hood.


My takeaway from this article is that the problems are indeed implementation-specific, but that whilst the cryptographically-secure values represent a small subset of the possible values, it's still too large a selection to provide a generalised list of what values are suitable for purposes. They do admit that RSA can be implemented securely in theory, but that the range of possible implementations is so vast that it's incredibly difficult to know if the parameters chosen are vulnerable without a maths degree and some very deep analysis.

On the other hand, those of us who write code that uses crypto are generally beaten about the head with 'NEVER ROLL YOUR OWN CRYPTO' and use library implementations right from the start. These are libraries that I assume are indeed written by people with maths degrees and have done careful analysis of the outputs. I know for a fact that I wouldn't trust any encryption I could write myself in an hour.

The biggest obstacle I find with ECC is that it's relatively easy to judge security by RSA key size (assuming correct implementation) but surprisingly difficult to compare the two: if I want RSA-2048-equivalent security, what curve size am I supposed to use? ECC was also heavily promoted as offering "RSA-equivalent security" while using less CPU, which made it particularly suitable for mobile/embedded systems, without necessarily implying it was an improvement over RSA.


> I know for a fact that I wouldn't trust any encryption I could write myself in an hour.

That's what I would trust the most. Not that such a thing actually exists (ChaCha20 is one of the closest), but if I can implement a primitive in an hour, and it is vetted by the community, then peer-reviewed implementations like libsodium are much more likely to be correct.
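For a sense of scale, ChaCha20's core operation, the quarter round, is just a handful of add-rotate-xor steps. A sketch following the RFC 8439 description (use a vetted library for real work):

```python
def rotl32(x, n):
    # rotate a 32-bit word left by n bits
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def quarter_round(a, b, c, d):
    # Add-rotate-xor on 32-bit words. No data-dependent branches or table
    # lookups, which is a big part of why it's hard to get wrong.
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 16)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 12)
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 8)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 7)
    return a, b, c, d
```

The full cipher is essentially this round applied to a 16-word state in a fixed pattern, which is why "implementable in an hour and vetted by everyone" is a plausible description.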

> The biggest obstacle I find with ECC is that it's relatively easy to judge security by RSA (assuming correct implementation) but it's surprisingly difficult to compare the two;

Then don't. Compare them with hashes instead. Oversimplifying things a bit, breaking a curve of a given size is about as hard as finding a collision of a hash of a similar size. So, Curve25519, which uses about 255 bits, is about as secure as Blake2b/256, or HMAC-SHA256. That is, about 128 bits of security.

Note however that this is more secure (absent quantum computers) than 128-bit ciphers like AES-128, because hash collisions and elliptic curves aren't as vulnerable to multi-key attacks, and success rates drop off faster when an attacker tries to get lucky with fewer computational resources than a full attack requires.


> these are all implementation specific problems

And ECC isn't always implemented (or even standardized) securely (https://cr.yp.to/newelliptic/nistecc-20160106.pdf) either.


Since 2013, not 2005.

From my point of view, RSA has been damaged by too many bad implementations and standards, even though it is a legitimate cryptographic algorithm. I even co-authored a paper[1] with Shamir himself that ended up recommending, in 2019, not to use RSA.

I'm currently writing a book targeted at developers[2] and I'm wondering how much I should write about RSA.

There are two cryptographic primitives (types of algorithm) exposed by RSA:

* Encryption

* Signature

The signature algorithm has two adopted standards, usually named RSA PKCS#1v1.5 and RSA-PSS. The latter is more recent and provides a proof of security, but everyone still uses the former. The former hasn't been broken and is still pretty solid. Most internet certificates are signed using RSA PKCS#1v1.5, I believe.

The encryption algorithm (also used to perform key exchanges) is the problem. It too has two adopted standards, usually called RSA PKCS#1v1.5 (same name as the signature scheme, I know...) and OAEP. OAEP is more recent and quite secure, but nobody seems to use it. Instead the former is still widely used in applications, and is often the default you get when you ask for RSA in many cryptographic libraries. Unfortunately it was broken by Bleichenbacher in 1998 and the attack is practical. There have been many attempts to "fix" it, and they have been repeatedly broken as well. So don't expect the library you use to implement it correctly.
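The structural weakness Bleichenbacher's attack exploits is RSA's malleability combined with a padding check that leaks information. A toy-number sketch of the malleability half (nothing here resembles a real key size):

```python
# Toy textbook-RSA numbers, for illustration only.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

m = 42                      # victim's plaintext
c = pow(m, e, n)            # victim's ciphertext

# An attacker who multiplies the ciphertext by s^e mod n (no key needed)
# gets the server to decrypt s*m mod n instead of m. In the 1998 attack,
# the server's "is the PKCS#1 v1.5 padding valid?" response to thousands
# of such mauled ciphertexts is enough to recover m entirely.
s = 7
c_mauled = (c * pow(s, e, n)) % n
assert pow(c_mauled, d, n) == (s * m) % n
```

OAEP is designed so this kind of algebraic tampering produces garbage that fails decryption entirely, rather than a related plaintext.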

[1]: https://eprint.iacr.org/2018/1173 [2]: https://www.manning.com/books/real-world-cryptography


> The former hasn't been broken and is still pretty solid. Most internet certificates are signed using RSA PKCS#1v1.5 I believe.

FYI, RSA PKCS#1v1.5 signatures can be broken due to trivial implementation errors. [1]

[1]: https://www.cs.purdue.edu/homes/schau/files/pkcs1v1_5-ndss19...


Thanks! I hadn't seen that paper :)

From a cursory glance, all of these implementations are in C, so it seems like a systemic C issue, not an issue with RSA.

But I might be wrong because I've yet to read the paper.


I'm confident you were aware of the bug in that paper, David, since it's just the e=3 cube-root attack on unvalidated P1v15 padding.

Are you talking about the broadcast attack? This paper is about signatures, and it seems different from Bleichenbacher's signature forgery.

(But again, haven't read the paper, also don't remember how bb signature forgery works)

(I'll read the paper but right now I'm in Hawaii doing snorkeling)


This is in Cryptopals! You were a Cryptopal! How did you get through that without implementing the e=3 sig attack?

The bug is straightforward: RSA implementations don't verify all the bits in the padding, but rather "parse" it to find the digest, and then verify that. But there are, of course, bajillions of potential padded signature block representations that contain any given digest, since the block is so much bigger than the digest. For e=3, and for particularly naive implementations (like Firefox's, at the time) you can almost literally just formulate the signature block you want, then take its cube root to forge a signature.
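A minimal sketch of that forgery (the "verifier" below is deliberately naive in the same way; the modulus is omitted because the forged cube never wraps a real 2048-bit n):

```python
import hashlib

def icbrt(n):
    """Smallest s with s**3 >= n, by binary search."""
    lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 < n:
            lo = mid + 1
        else:
            hi = mid
    return lo

KEY_BYTES = 256  # pretend 2048-bit modulus, e = 3
# Fixed ASN.1 DigestInfo prefix for SHA-256, from PKCS#1
ASN1_SHA256 = bytes.fromhex("3031300d060960864801650304020105000420")

def forge(message):
    # The block a sloppy verifier accepts: 00 01 FF 00 || DigestInfo || hash,
    # followed by ~200 bytes it never looks at.
    prefix = b"\x00\x01\xff\x00" + ASN1_SHA256 + hashlib.sha256(message).digest()
    target = int.from_bytes(prefix.ljust(KEY_BYTES, b"\x00"), "big")
    # Rounding the cube root up only disturbs the low (ignored) bytes,
    # so sig**3 still begins with our chosen prefix.
    return icbrt(target)

def sloppy_verify(sig, message):
    # "Decrypt" with e = 3, then PARSE the padding instead of comparing
    # against the one valid encoding -- this is the bug.
    block = (sig ** 3).to_bytes(KEY_BYTES, "big")
    if not block.startswith(b"\x00\x01"):
        return False
    i = 2
    while i < len(block) and block[i] == 0xFF:
        i += 1
    if i >= len(block) or block[i] != 0x00:
        return False
    i += 1
    digest = block[i + len(ASN1_SHA256): i + len(ASN1_SHA256) + 32]
    # BUG: never checks that the digest sits at the END of the block
    return (block[i:i + len(ASN1_SHA256)] == ASN1_SHA256
            and digest == hashlib.sha256(message).digest())

sig = forge(b"attacker-chosen message")   # no private key involved
```

The fix is equally simple: reconstruct the single valid padded block for the message and compare it byte-for-byte against the decrypted signature.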


Oh right. Thanks for the reminder :)

Sorry to disappoint, I did not do all of the Cryptopals :P Filippo actually has a good blog post on that attack IIRC.


You have, in fact, disappointed me! :|

(There are some set-7 problems I haven't done yet, for whatever that's worth. But e=3 sigs are a big one!)


Alright I will do it :D

Honestly, if you can find a broken implementation (or just write one; do the RSA "decrypt" of the signature block, and then just use a constant offset to get to the digest bytes), you should be able to knock it out just from the description I provided in like, 30 minutes.

The issues are not due to C but rather to failing to verify the PKCS#1v1.5 format, for example skipping verification of the padding or metadata. This allows attackers to insert garbage data into signatures, which leads to successful signature forgery.

Curve25519 and Curve448 should be strongly favored if you decide to start using ECC. Not unlike RSA, elliptic-curve cryptography has a lot of edge cases, math pitfalls, and side-channel traps.

* Your implementation produces incorrect results for some rare curve points.

* Your implementation leaks secret data when the input isn't a curve point.

* Your implementation leaks secret data through branch timing.

* Your implementation leaks secret data through cache timing.

Curve25519 and Curve448 are designed with secure implementation in mind, so they naturally tend to be more immune to these pitfalls. There is no invalid-curve attack on Curve25519, every random 32-byte string is a valid public key, secret-dependent branching can be avoided easily, etc.

https://safecurves.cr.yp.to/


It's not simply that they were designed to be secure - they also use mathematically justifiable constants.

See https://en.wikipedia.org/wiki/Nothing-up-my-sleeve_number.

There's actually a counterexample on that wiki page - the highly complex constants used in the NIST curves, for which no derivation path was publicly provided.


DJB himself gave a talk about "nothing up my sleeve" curves, which can still have a backdoor. The limitation section on the page you linked to says it best:

Bernstein, et al., demonstrate that use of nothing-up-my-sleeve numbers as the starting point in a complex procedure for generating cryptographic objects, such as elliptic curves, may not be sufficient to prevent insertion of back doors. If there are enough adjustable elements in the object selection procedure, the universe of possible design choices and of apparently simple constants can be large enough so that a search of the possibilities allows construction of an object with desired backdoor properties.


> There's actually a counter example on that Wiki page - the highly complex numbers used in the NIST curves, for which no path to derivation was publicly provided.

It's both. The security of curves is evaluated according to a set of criteria:

1. Is the cost to attackers of the Rho method above 2^100? (security margin against a known attack)

   - NIST P-Curves: Yes.

   - Curve25519/448: Yes.
2. Does the curve resist additive and multiplicative transfers, to avoid potential attacks via index calculus? (security margin against a known attack)

   - NIST P-Curves: Yes.

   - Curve25519/448: Yes.
3. Is the absolute value of the curve's complex-multiplication field discriminant D larger than 2^100, so attackers cannot benefit from possible speedups to the Rho method? (security margin against a known attack)

   - NIST P-Curves: Yes.

   - Curve25519/448: Yes.
Notably,

   - Bitcoin Curve (secp256k1): No.
But that doesn't mean Bitcoin is broken; it just decreases our confidence that it would still be secure if HUGE speedups to the Rho method were found for a practical attack. Finding one seems unlikely, though.

4. Are the curve's coefficients mathematically justified? (security criterion against backdoors)

   - NIST P-Curves: No (!)

   - Curve25519/448: Yes.

   - Bitcoin Curve (secp256k1): Okay.
For example, the coefficients for NIST P-256 (secp256r1, prime256v1) are generated by hashing a random seed:

    c49d3608 86e70493 6a6678e1 139d26b7 819f7e90
Nobody knows what it is. NIST has its own defense: this random seed is only used as the input of SHA-1, and since cryptographers agree SHA-1 is one-way, nobody, not even the NSA, could have chosen the output coefficients first and worked backwards. They called it "verifiable randomness". But there is still room for doubt among sophisticated cryptographers, e.g. see this highly amusing, satirical paper with lots of insight on the issue: How to manipulate curve standards: a white paper for the black hat https://bada55.cr.yp.to/bada55-20150927.pdf
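The one-way direction of that argument can be sketched like this (the standard's actual derivation, per ANSI X9.62, hashes the seed with SHA-1 through a longer procedure; this only illustrates the claimed relationship):

```python
import hashlib

# Seed published for NIST P-256 (same hex as above, spaces removed)
SEED = bytes.fromhex("c49d360886e704936a6678e1139d26b7819f7e90")

# The curve coefficient is derived from a hash of the seed, so NIST can
# argue nobody picked the coefficient first and inverted the hash to get
# a plausible-looking seed...
h = int.from_bytes(hashlib.sha1(SEED).digest(), "big")

# ...but the bada55 paper's point is the converse: nothing stops a
# designer from grinding through MANY seeds until the *derived* curve
# has a weakness only they know how to exploit.
```

Curve25519 sidesteps the whole debate by deriving its constants from explicit minimality criteria instead of an unexplained seed.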

The coefficients for secp256k1 are not fully explained either, but nothing looks fishy and the rationale is generally considered acceptable.

And keep in mind that when Satoshi Nakamoto was developing Bitcoin, secp256k1 was the best choice from this perspective among the available, well-accepted standard curves; Satoshi was obviously an experienced hacker and seems to have made a careful decision.

5. Does the construction of the curve allow "simple" multiplication and addition? (security criterion against implementation pitfalls)

   - NIST P-Curves: No.

   - Curve25519/448: Yes.

   - Bitcoin Curve (secp256k1): No.
Curve25519 supports a mathematical technique called the Montgomery ladder, which enables practical constant-time, side-channel-resistant implementations. The NIST curves, on the other hand, make this difficult.
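The ladder's trick is performing the same sequence of operations for every scalar bit and selecting results with a branch-free conditional swap. A sketch of that structure, using plain modular exponentiation in place of real curve arithmetic:

```python
def cswap(swap, a, b):
    # Branch-free conditional swap; swap must be 0 or 1.
    mask = -swap                 # 0, or -1 (all ones)
    t = mask & (a ^ b)
    return a ^ t, b ^ t

def ladder_pow(base, scalar, mod, bits=255):
    # Montgomery-ladder structure demonstrated on exponentiation: one
    # squaring and one multiplication per bit, regardless of the bit's
    # value, with cswap choosing which register receives which result.
    # Real Curve25519 code does the same dance with x-only point math.
    r0, r1 = 1, base % mod
    for i in reversed(range(bits)):
        bit = (scalar >> i) & 1
        r0, r1 = cswap(bit, r0, r1)
        r0, r1 = (r0 * r0) % mod, (r0 * r1) % mod  # identical work each step
        r0, r1 = cswap(bit, r0, r1)
    return r0
```

Because the operation sequence never depends on secret bits, the timing and branch pattern leak nothing about the scalar; on the short-Weierstrass NIST curves, getting an equivalent uniform addition law is much more delicate.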

6. Does the curve still offer strong security guarantees even if an attacker "twists" it, e.g. by fooling a program into choosing a point that is not on the actual curve? (security margin against known attacks and implementation pitfalls)

   - NIST P-Curves: Yes.

   - Curve25519/448: Yes.

   - Bitcoin Curve (secp256k1): Yes.
7. Are representations of curve points indistinguishable from uniform random strings? (usefulness in various protocols)

   - NIST P-Curves: No.

   - Curve25519/448: Yes.

   - Bitcoin Curve (secp256k1): No.
Therefore, we conclude that Curve25519/448 is more robust than the NIST curves.


I don't know any serious researcher who thinks the P-curves are backdoored (I sort of doubt even Bernstein thinks they are). They date back to before fetishized justifications for curve parameters were the norm, and it's not all that surprising to see "drawn from a random seed" as a design rationale. The problem with the P-curves is that (1) it's harder than necessary to implement constant-time arithmetic on them and (2) they require point validation to use safely.

What you've written here is basically a sort of semi-accurate editorialized summary of safecurves. Safecurves is a cool site, but you should let it stand on its own, I guess.


Here's what happened:

I initially posted the parent comment, it's just the introduction of SafeCurves with its original URL, and I cited that Curve25519 has advantages of better treatment of timing side-channels and invalid curves in its design. I never mentioned "backdoor" at all because I thought it was unimportant.

Then user "yarg" replied to my comment, saying the point of Curve25519 is that it uses mathematically justifiable constants. To me, the author seemed to be implying (1) that the security advantages I cited were not that important, and (2) that the main concern is that the NIST curves have "backdoors".

So as a response, I gave a somewhat editorialized summary of SafeCurves' findings for those who didn't want to read the whole thing. "yarg" seemed interested in the backdoor issue, so I added some personal commentary on the concern about backdoors in the NIST curve coefficients, and the Internet folklore about why Bitcoin used that unusual Koblitz curve.

But what I wanted to communicate was that it isn't all about backdoors, so I summarized all the criteria in SafeCurves to make the point. For example, I said Curve25519 allows "simple" multiplication and addition, which makes constant-time implementation easier than on the NIST curves.

I think it's fair enough, do you?


I wasn't attempting to negate the worth of any of the known security and efficiency properties, I was simply adding that there's significant value in understanding how the curve came to be.

Something can be provably secure against all known attacks and still be back-doored.

I said "not simply", implying that there are other considerations, not "not", which would have implied that what you'd stated was unimportant.

As to the justification for my opinion, that was based on Bruce Schneier's comments regarding the numbers - which he made long before the Snowden revelations.

I'm not saying that the numbers are back-doored, but the situation gives a reasonable justification for pause.


I wouldn't look to Bruce Schneier for insight about curves.

As a serious question, why not? Isn't Schneier basically a crypto deity?

No. More of a crypto hack and policy wonk.

Come on, the man's a legend!

Schneier designed Blowfish and was involved in the design of Twofish and Skein. When I was starting out, his book "Applied Cryptography" was both inspirational and informative, and I wouldn't be surprised if it was one of the most successful cryptography books ever written.

Schneier has also been blogging about security in general for decades; yes, sometimes he necessarily discusses government policy as a result, but I don't see what is wrong with that, or why that would make him a "hack" or "wonk".


Blowfish isn't good, and it's important never to use it (as a cipher; bcrypt is fine). But that aside: Schneier has had a weird aversion to elliptic curves, has never (that I know of) written anything technical about them, claims to "not trust the math" behind them (whatever that means), and my suspicion is that he simply hasn't studied them in any depth. All I said was: I wouldn't look to Schneier for information about curves.

Both of those are basically famous for being his work. They didn’t fare super well in the competitions they were submitted to. And they were collaborations, regardless.

His books are inspirational and somewhat informative. That means he is a successful author of cryptography books, not a successful cryptographer per se.


Blowfish predates AES (and I believe he designed that one solo), Twofish was an AES finalist, and more recently Skein was a SHA-3 finalist. I'd say his cryptography credentials check out.

> Come on, the man's a legend!

He’s a self-promoter. A valuable life skill to learn at some point is how to differentiate those who are legendary due to the awesomeness of their own feats, and those who manufacture legend from mediocrity.


That's pretty harsh, given his contribution to the field.

FWIW, I met him briefly at a developer conference in Norway a few years ago, and he actually seemed quite humble.


Note that #7 is not actually true anymore, and actually hasn't been for quite a while.

A paper from Mehdi Tibouchi [1] shows how to represent points on essentially any curve as uniform random bitstrings.

Another paper by Aranha, Fouque, Qian, Tibouchi, and Zapalowicz [2] gives an even more efficient construction for curves over binary fields.

(As an aside, there are really good reasons not to use curves over binary fields. Discrete log has recently been getting easier much faster over GF(2^k) than over GF(p).)

[1] https://ia.cr/2014/043

[2] https://ia.cr/2014/486


The hardness of discrete logs over fields doesn't have much to do with the hardness of elliptic curve discrete logs. At this stage, I don't think we have any evidence that curves over binary fields are less secure than over prime fields, especially for cryptographically relevant curve sizes.

(The situation is different for pairing-friendly elliptic curves, of course, but that's a different kettle of fish).


Thanks for the update!

Indeed. If you're not a cryptographer then you shouldn't be rolling your own crypto. That includes selecting low-level libraries, etc. You want a whole cryptosystem that's made by experts, preferably one that's open to auditing.


This exactly. I wish the original article had this conclusion instead of trying to tell people how to "do it right", especially when the problem is people not understanding cryptography to a sufficient level in the first place.


How does one become a cryptographer then?

Do the people who perpetuate this meme with no basis not realise the clear and present danger it presents to humanity?

Yes certainly don't use homerolled crypto in production.

More importantly do not dissuade human beings with talent to roll their own, play with others codebases, and in general have a good fucking time with cryptography.

Straight out of the counter-intel handbook is how it looks when people repeat this bullshit ad nauseam.

Everyone on Earth should roll their own crypto. Don't listen to this nonsense, the real world needs you.


> Everyone on Earth should roll their own crypto. Don't listen to this nonsense, the real world needs you.

This is an interesting exercise to help understand cryptography, but if you think cryptography is subtle or magical, you haven’t started looking at cryptanalysis. There are nearly countless possible side channel attacks. This is why we don’t roll our own practically speaking. It can take advanced degrees in mathematics to merely give ourselves some sense of assurance that our home rolled system is safe.

A friend of mine did roll his own security. A decade ago he invented a new homomorphic encryption algorithm (a type of encryption that allows meaningful operations on ciphertext without revealing secrets). His first step? Getting advanced degrees in mathematics and at least a year of peer review to test against possible attacks. Only years later did he found a company on it. Unfortunately, while the system was believed secure, the zero-trust design took too long and too much battery to generate keys on phones.


I'm using the phrase in the sense of "create or implement your own, play with it", not "actually use it for critical applications".

The world badly needs more cryptographers and more eyes on code, whether from a purely academic background or a digital-tinkerer background. People everywhere should be encouraged to frolic in the ciphers.


Is there a link to the paper? As far as I'm aware, there is no practical FHE scheme in existence.

Any reference to their work?


> > If you're not a cryptographer then you shouldn't be rolling your own crypto.

> How does one become a cryptographer then?

> Yes certainly don't use homerolled crypto in production.

That is the point. The glib phrase is repeated to people who are working on production code, or on something in danger of one day soon being in production (we've all seen things go from proof of concept to production with insufficient reworking; it happens far too often). In that context it is correct to keep restating it.

If you are explicitly trying to understand cryptography, then hopefully you are aware of the higher level dangers while you are poking around the lower level details and until you are ready for prime-time you are able to keep your exploratory code from going anywhere near production, at least not without proper scrutiny.

It is like the first two rules of optimisation:

Rule 1: Don't.

Rule 2 (for advanced programmers only): Don't yet.


> How does one become a cryptographer then?

Basically: study it in depth (especially the mathematics). Focus on cryptanalysis for a while, and find some flaws in existing implementations.

The moment you really start to fix those flaws, you probably know enough to implement a cryptosystem in a reasonably safe manner.

Even then, there likely will be many flaws in your implementation. So, you need other cryptographers to review it, several sets of eyes looking for flaws. Oh, and don't forget to make sure someone reviews for side channel attacks. (and some other things that I am probably forgetting right now)

In the end, the cryptographers that implement cryptosystems are a rare breed. These generally include people that are good at both the hard mathematics involved and the low-level programming knowledge that it requires to prevent timing attacks and the like, and even some hardware knowledge.

Who am I to say this? I have studied information security with some focus on crypto, including cryptanalysis and implementation of cryptosystems. Followed some MSc-level courses on implementation specifically, would not touch implementing a cryptosystem myself for production.


Quality reply, thankyou.

> So, you need other cryptographers to review it, several sets of eyes looking for flaws.

That's a real problem with no easy solution, for every "new" cryptographer trying to implement a cryptosystem you need a dozen half-capable eyes on what they are doing. For anything used widely you need hundreds.

Creating more half-capable eyes is a good thing.


> How does one become a cryptographer then?

Get an advanced maths degree. Minor in cryptosystems. Spend a few years looking at old cryptosystems. Understand how they were designed, what their weaknesses were, and how new systems are designed. Read a lot of papers. Try to write your own system and ask peers to point out the glaringly obvious holes you didn't notice.

> More importantly do not dissuade human beings with talent to roll their own, play with others codebases, and in general have a good fucking time with cryptography.

The consequences of lots of people using shit crypto is that a lot of people become less secure, and won't have any idea about it. At the same time, us saying "don't roll your own" won't actually stop anyone who has a burning desire to play with crypto. So we're definitely going to keep dissuading people from rolling their own.


While I want to commend this comment, it all seems to have lit a bonfire of programmer-tier anger, so I will still retaliate:

People aren't using shit crypto; virtually no one uses anything other than what is highly vetted or what Google/Apple/MS/etc. tell them to use.

What is the actual argument here? That some $randompeople play around with new cryptosystems and then share them with friends? The sheer derision directed at my original comment is astounding. Let the kids play with cryptography, who the fuck cares; I hope they roll all of the crypto themselves, the world will be a better place. Who exactly is saying it should replace the current elliptic curves decided so wholesomely for us all by NIST?

The simple, clear fact is that Western countries do not like modern cryptography and are actively campaigning against it. Let's pretend we have some dignity left as human beings, please. The West used to represent something; now it feels like we are taking notes from oppressive regimes and playing catch-up.


> People aren't using shit crypto

From Why You Should Stop Using Telegram Right Now (2016) (https://gizmodo.com/why-you-should-stop-using-telegram-right...):

  According to interviews with leading encryption and security experts,
  Telegram has a wide range of security issues and doesn’t live up to its
  proclamations as a safe and secure messaging application. [...] 

  Telegram did what’s known as “rolling their own encryption,” which is
  widely considered to be a fatal flaw when developing encrypted messaging apps.

Every time there's debate over Telegram's encryption, the shill argument "it hasn't been broken in the wild now has it" pops up. This is fundamentally flawed thinking. The end-to-end encryption is most likely reasonably safe (no glaring holes were pointed out by experts, except the IND-CCA problem). The real problem is that Telegram uses its secret chats as a poor excuse to justify the lack of E2EE for practically everything else: "Just use secret chats if you need end-to-end encryption."

1. Telegram's E2EE is not on by default, therefore 99% of users don't use it.

2. Telegram's E2EE does not surface key authentication, so ~90% of the people using it never check for MITM attacks; therefore the majority of E2EE sessions are useless against active attackers.

3. Telegram's E2EE does not work across devices, so the majority of people who use secret chats also use non-secret chats, because the desktop client doesn't support them.

4. 100% of Telegram's group conversations can be eavesdropped by the server, because Telegram doesn't have E2EE for group chats.

Complaining about possible cribs in how Telegram assembled its protocol from cryptographic primitives is an insignificant problem compared to the fact that the entire protocol is fundamentally FUBAR, so glaringly obviously that you can't even fill out a CVE form for it.

If Signal had vulnerability where 100% of group conversations were not properly end-to-end encrypted, every newspaper in the world would publish something about it. However, with Telegram it has been spun as a "feature".

Another big problem is Telegram has been mentioned by hundreds of publications as "Secure apps like Signal, WhatsApp and Telegram".

To experts it's like hearing news spout "Great writers like Leo Tolstoy, Paulo Coelho, and Stephanie Meyer", or "Great bunker materials like reinforced concrete, steel, and MDF".

Repeat the claim often enough and anyone will form mental associations between the three, but when you actually find out what they're about, you can't believe your ears.


Additionally, Telegram uses MTProto instead of TLS. What they should be doing is MTProto in addition to TLS. But, no.

Actually, lots of people _are_ using shit crypto. The reason the message keeps being repeated is that it hasn't been effective enough yet. Believe me, all of us have better things to do with our time; we'd stop saying this if it were all actually fixed.

My previous employer was encrypting customer data (in a project I didn't work on) with RSA. Yes, they were actually using RSA itself to encrypt the user data. If you're thinking "Oh, and that's bad because RSA right?" then "No", that's not actually why - again, I direct you to our main thesis: Stop hand-rolling crypto, this is dangerous and you are going to hurt yourself.

But even if we restrict "people" to just my mother and sister, ordinary users with common hardware and software then that's often still people using shit crypto.

Popular libraries like OpenSSL are full of garbage-fire shit crypto. Lots of it is "optional", but did you turn that option off? Does my sister know where the option is? No. Some of it exists because people are trying to do very hard things with no margin for error, but as this article suggests, you can solve that by not doing the very hard things any more. Doing RSA key exchange with TLS _safely_ is very, very hard. Not doing it at all is easy. So just don't do it.

Cryptography is not like folk dancing or pottery, where it's OK to be fairly bad at it so long as you had a good time.

This is like heart surgery. We don't encourage everybody to "have a go" at heart surgery and hope maybe some of them will do a good job, that would be crazy. People spend years acquiring enough skills to even _find out_ whether they'd actually be any good as a heart surgeon, and some don't make the cut.

What does that look like? Cryptanalysis. Probably starting with a Mathematics degree, but it is possible to get there from another numerate background and a LOT of study.

That's where anybody at all serious - certainly this century and arguably going back to the middle of the twentieth century - starts. They analyse existing crypto systems and they find new problems. They start out a bit shy: hey, did anybody notice that X is actually a member of a Boze-Lechstein inverted group here? Doesn't that mean we could use the Stross-Baxter formula to find X in O(n) time? And after a few successes (and maybe one or two setbacks where they made an embarrassing mistake) they get a reputation, so that others in the field show them exciting new things. Hey, you're the guy who first saw Stross-Baxter-Watts, take a look at our SHA-4 contender which relies on a related approach, see any problems?

After you've got a few years in cryptanalysis, maybe, if you feel up to it, you can start to propose new ideas. Your colleagues will respect you enough to take a look, and the first few will definitely get shot down. Ha, you forgot that the Benford-Barnes hypothesis doesn't apply to correlated members, so your new cipher has vast swathes of hard-to-detect weak keys. Not good, better luck next time. But maybe, if you're good, you will eventually make something good enough that people remember it when it doesn't make the cut for a competition. You are now "famous".

Notice how none of this was just some random guy in his bedroom having an idea and inventing a brand new cryptosystem? That's because that doesn't work. It did two centuries ago. If your adversaries are from the late 18th century, you should definitely go try that approach. But the adversaries got a lot better.


The answer is the same as when this always comes up: mentally append "for realsies". The advice is meant to suggest that people not publish code or run services with incompetent crypto.

Nobody is saying not to learn and play. They're saying don't set off a rocket while staring down the engine. The reason why it has to be said is that the consequences of doing so when writing crypto are not so impressively immediate or dramatic, but just as devastating to the outcome (and maybe cryptographer, depending on adversary).

> Straight out of the counter-intel handbook is how it looks when people repeat this bullshit ad nauseam.

The CIA does not care if you play with libsodium.


> The CIA does not care if you play with libsodium.

Cryptographers are easy meat for lions.

Care to place a long-standing bet on that? 20yrs or so? We can find a solid reasonable middleman here surely?

They very much do care. I'm willing to bet on it becoming public knowledge within that timeframe. Keen?


How is the CIA going to stop you from downloading libsodium and doing whatever you want with it?


That was most certainly not the point made here.

Go contribute to libsodium, go contribute to any reasonably well used crypto library, you've now got a big red dot on your back from an assortment of governments in perpetual conflict.

I truly worry for anyone participating in these open source codebases and not being aware of how much they are targets.

Take care of yourselves friends. Please.


> I truly worry for anyone participating in these open source codebases and not being aware of how much they are targets.

So when you extorted people to play with other's crypto codebases in the previous post, were you trying to make them into targets?


Exhorted, you probably meant!

Yeah, thanks.

This reminds me of the Niemöller's poem. IIRC it went something like

First they came for the A2017U1s, or they would have, except he never opposed the wrongdoing.


Huh?

Nobody is telling you that you can’t play around with crypto in your personal projects; you’re attacking a strawman. What they mean is this:

> Yes certainly don't use homerolled crypto in production.


Don't even prototype with homerolled encryption.

You should know how easily prototypes end up in production!


No. Either you want to become a cryptographer - then you'll need to learn: implement toy algorithms, toy with libraries, etc. But you'll need some level of proficiency before putting anything into production.

The real code contributions of cryptographers anyway tend to be adding some very specific details to libraries (like their latest invention).

Now the general advice is that if you want to skip the 15 years it takes to become a cryptographer and you just want to send a message, there are ways.


I think the track record of novel cryptography created by people inexperienced in the field indicates it’s something intelligence services would like people to do more of if possible. Are you sure you’re not working for the CIA?


First, start with a site that will teach you some surprising things about crypto that you may have not thought possible: http://cryptopals.com/. That is a good start.

And the NSA is recommending that we not use ECC anymore, either: https://threatpost.com/nsas-divorce-from-ecc-causing-crypto-...


There are some weak curves that may be vulnerable. This is mostly conjectured to be about quantum computing. However, the solution they offer is RSA 3072. No thanks, I'll stick with ECC. We'll know pretty much instantly when ECC gets cracked, as the whole crypto space will collapse like a balloon. You could also be more cynical and assume that the NSA has a backdoor for even huge RSA keys and just wants to push people to something they pwn.

Given what we know about the NSA, do you have to be 'cynical'? I think a backdoor is even likely.

Koblitz and Menezes took a long look at that - they believe it's possible, but not necessarily likely: https://eprint.iacr.org/2015/1018.pdf

The known motivations of the NSA, and the deep involvement they had in designing and promoting SHA and its derivatives, give reasonable grounds to doubt their benevolence.

Look at the amount of trouble the NSA went into to create Stuxnet. They stole digitally signed certificates from big corporations, completely undetected.

If a known burglar installed your home security system, even if the security system was from a well known and trusted manufacturer would you trust that your house was actually secure while you were on vacation?


What's this supposed to mean? SHA-2, which is an NSA design, is one of the best cryptographic hashes we have. SHA-1 fell, but so did MD4 and MD5, the other mainstream hashes of that era.

The NSA might have a big quantum computer and know that they are the only ones who are even close, hence pushing people towards RSA

ECC is vulnerable to Shor’s algorithm as well.

Ahh ok, interesting. I stand corrected

Yes, you shouldn't use the NSA-infested P-256 & family curves. They're right about that.

To anybody put off by the clickbaity title, as I initially was — the article is actually informative and well-written.


Same here. Especially considering the 3 men are alive and may read the title, it's a bit rude.


Fully agreed, (alive or not!) and commented to the same effect before I saw yours.

'RSA is easy to get wrong, stop using it' would be better. More descriptive too.

https://news.ycombinator.com/item?id=20382895


> The common denominator in all of these parameter attacks is that the domain of possible parameter choices is much larger than that of secure parameter choices.

Parameter selection isn't any easier in elliptic curve land. In fact, by the point that you can make an educated choice in elliptic curves, I believe you can also make a reasonable choice in RSA parameters.

You have to:

1. Pick a curve. Which one? You'll have to know. This article tells you to use Curve25519. Do you know if they're right? Not really. Is a Koblitz curve over GF(2^m) a better choice? You sure can't tell. You'll have to rely on other people's guides, e.g. [1].

2. Pick a library that supports the curve. This can be easy if you're on the beaten path, especially in C. This can be very hard on off-beat languages, which are generally stuck wrapping OpenSSL (and then typically have a weird interface to match as a cause of having to wrap C).

3. Interoperability concerns: Sometimes you don't actually control both halves of the software. You'll also have to pick something that plays nice with other things.

And all of this is putting the cart before the horse. Programmers must first understand what kind of algorithm they need, only then they can make a choice of the actual cryptographic solutions, e.g. Crypto101[2]. I firmly believe that a basic working knowledge of cryptography is a hard requirement for a programmer that touches code going into production.

If by some ungodly anti-miracle you're stuck implementing your curve yourself, you'd better be hiring external help. Ideally, however, you can use an existing, respected library. What is a "respected" library? Well, now we're back to “you'll have to know”. Expect to spend upwards of a dozen hours getting to grips with the basics.

(Incidentally, public key cryptography has always been an implementation nightmare. ARX ciphers don't scare me as much and most hash functions seems fairly reasonable to implement, but public key cryptography is when the bignum math hell comes to bite you—and with elliptic curves, you also get “moon math”, as the article so aptly puts it, to go with it.)

[1] https://news.ycombinator.com/item?id=20382244

[2] https://www.crypto101.io/


Oh, hey, [2], that's me, glad you liked it.

FWIW: while I clearly subscribe to the notion of cryptographic education, I also think that we should give people high-level recipes. Why are they the right ones? Yep, you gotta trust me. I'm fine with both of those existing ('tptacek and I co-authored our Cryptographic Right Answers doc from last year) because they have different audiences.


Hey, just wanted to chime in Crypto101 was the ~first book I read on crypto and it was really well written. Kudos for your work.

> Parameter selection isn't any easier in elliptic curve land. In fact, by the point that you can make an educated choice in elliptic curves, I believe you can also make a reasonable choice in RSA parameters.

The differences, according to the article:

1. RSA looks easy, so developers are tempted to roll their own, not so with ECC

2. almost all RSA parameters are necessarily secret, making it harder to peek at supposedly well-designed systems to learn what to select (or even how to select); ECC parameters are public, so other users can be used as examples


The CII best practices badge ( https://bestpractices.coreinfrastructure.org ), which identifies key best practices for OSS projects, doesn't forbid the use of the RSA algorithm. In many situations forbidding the use of RSA (when using a public key cryptosystem) is completely impractical today. RSA is really popular, and a lot of OSS projects must be compatible with systems and organizations that use RSA.

Besides, it appears to me that the primary problem is people who roll their own RSA implementations. As the article says, "In circumstances where people implement their own RSA, all bets are off in terms of using standard RSA setup procedures, and developers will frequently do strange things...". It also later shows a picture where the disastrous choice is "rolling your own RSA". Yes, there are a lot of ways to screw up an RSA implementation (as noted in the article), but that's less important if it's implemented by people who know what they're doing.

The CII badge does specifically address the problem of people who roll their own crypto. Specifically, its "crypto_call" criterion says: "If the software produced by the project is an application or library, and its primary purpose is not to implement cryptography, then it SHOULD only call on software specifically designed to implement cryptographic functions; it SHOULD NOT re-implement its own." https://github.com/coreinfrastructure/best-practices-badge/b...

Full disclosure: I lead the CII Best Practices badge project.


> it appears to me that the primary problem is people who roll their own RSA implementations

And why do you think they do? The article has an explanation for this: RSA looks easy enough to implement. If you tell people to use RSA, many will roll their own, even if they don't know what they're doing.

And to be honest, after having read this article, I don't trust even the properly peer reviewed implementations of RSA. And if compatibility is an issue, I'll first try to convince whoever I must convince to switch to Curve25519, which is simpler to implement, more secure, and faster in most cases. (Of course, one should still use an existing implementation.)


There are many advantages to Curve25519, but in many cases you need to use RSA. E.g., when you're trying to talk to arbitrary websites securely, you don't get to choose what was used to sign their certificate. There are many circumstances where you have to build systems that work with other systems, and ignoring that doesn't make it go away.

I think the key advice is to use an existing crypto library. A non-expert who tries to implement ECC themselves will almost certainly screw it up as well. Sure, it is a little less likely, but it is still possible.


For any of you who want to develop a deeper understanding of what is mentioned in the blog post, I can recommend the book Serious Cryptography by Jean-Philippe Aumasson. It's one of the best and most accessible crypto books that have recently been written.

https://nostarch.com/seriouscrypto


> TLS 1.3 no longer supports RSA

This isn't entirely true, is it?

AFAICT it no longer supports RSA as a key-exchange algorithm as earlier versions did, but it certainly supports RSA authentication. The only systems that used that for TLS I've known in the last 15(ish) years were government systems that specifically wanted to prevent PFS in their logged connections.


According to the F5 telemetry report (and my own anecdotal evidence), non-PFS suites were a lot more popular than that, the majority as little as 5 years ago :-)

[0]: https://www.f5.com/content/dam/f5/f5-labs/articles/20180423_...


Definitely. There are loads of EDCO people in that data.

(EDCO is Nalini Elkins' outfit, the Enterprise Data Centre Operators group - banks and that sort of thing, they tried pointlessly lobbying the TLS Working Group to get RSA Key Exchange back into TLS 1.3)

I don't have very good contacts inside that industry any more (I now work for a startup) so I'll be interested to see what if anything they _actually_ do about this over the medium term. Trying to promote "alternatives" to TLS? Deploying DH where the "random" private values are actually from a fixed or pseudorandom source under their control? No idea yet.

It is definitely true that some big name real world "security" companies will go out to a big corporation and tell them to mandate RSA key exchange in order to make the "security" product work because they want to do MITM. They're too cheap to do all the engineering to actually ship the session keys over the wire so they want you to just upload the RSA Private Key into their "security" product.

I got this mandate to require RSA Key Ex (at my old job for a B2B app I wrote) and was like "That ain't going to work, this here is a mutually authenticated TLS session". And I ended up in a bunch of calls in which more and more senior people from the "security" product team were brought on until they eventually found someone who actually knew how it works to say that yes, of course you can't MITM a mutually authenticated session without assistance from both parties, so the product can't work.


Total agreement re: groups of bigcos trying to sabotage TLS, but I think a lot of traffic was de facto using RSA because of default configurations, not because of a serious mandate. Otherwise, I don't think you'd see such a PFS uptake in the last 5 (as opposed to 15) years.

Current status is eTLS which, if I understand it correctly, is just static DH. Unclear why they aren't just disclosing session keys/premaster secrets per SSLKEYLOGFILE, but my money is on "organizational incompetence" (as yours is, I'm guessing from your B2B app experience :-)).

[0]: https://www.etsi.org/deliver/etsi_ts/103500_103599/10352303/...


Disclosing secrets per session requires a lot more work on the endpoints. Especially because it is an active service. That also leaves a lot more attack-surface on the end-points. It also creates quite a bit of sensitive network traffic. Things get more hectic if you want 'live' TLS-inspection.

As such, I can definitely see why TLS inspectors don't like the myriad of opportunities for mistakes created by disclosing secrets at the end-points.

The counter-argument would be that, if you really need TLS-inspection your security should be tight enough that disclosing the secrets should not be the hardest thing you are implementing. Moreover, neither should the world compromise its own encryption to make your targeted (and rarely required) ability to break encryption safer.


> developers often choose e = 3

Never seen this in all my failed life. It is basically 65537 everywhere I have seen RSA used. That actually led me to Fermat numbers and why they are used because it looked like a conspiracy to me.

I also never generated p,q or d manually and just trust some tools from some crypt-weirdos. So it is 1 parameter I have to remember, right?

I am only a user of RSA and would never implement it myself. But prominent web services mostly supply RSA parameters for their encrypted content. Some have shifted to ed25519, but it is still very rare in my experience.
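As a concrete illustration of why exponent choice matters: here's a toy (deliberately insecure) sketch of the classic low-exponent attack on textbook RSA with e = 3. The primes and message below are made up for the example; real implementations avoid this with proper padding (OAEP), regardless of e.

```python
# Toy illustration (not real RSA!): why e = 3 plus no padding is dangerous.
# With textbook RSA, if the message m is small enough that m^3 < n, then
# c = m^3 mod n is just m^3, and anyone can take an integer cube root
# to recover m without ever touching the private key.

def icbrt(n: int) -> int:
    """Integer cube root by binary search."""
    lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

# Two tiny, insecure primes stand in for a real modulus.
p, q = 1000003, 1000033
n, e = p * q, 3

m = 1234                 # short secret, e.g. a PIN
c = pow(m, e, n)         # m**3 < n, so no modular reduction ever happens

recovered = icbrt(c)
print(recovered)         # prints 1234: plaintext recovered without the key
```

With OAEP padding the effective message is full-width and randomized, so the reduction mod n always kicks in and this shortcut disappears.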


> it looked like a conspiracy to me

Not a conspiracy ; ). It has to do with the exponentiation. Most fast math libraries exponentiate quickly by squaring and multiplying: if you want to compute, say, x^23, you can either multiply x by itself 22 times, or you can compute (((x^2)^2·x)^2·x)^2·x: four squaring operations and three multiplications, for a savings of 15 operations. You can do this factoring easily based on the binary representation of the exponent - in this case, 23 is 10111. So I'm going to square x four times (one fewer than the 5-bit length of the representation) and additionally multiply the result by x every time I encounter a one bit after the leading one. Now, if I want to speed this up, it's in my best interest to choose an exponent with the fewest one bits possible - preferably just a one at the beginning and a one at the end, so now I only have to square _n_ times and multiply once more. Numbers whose binary representation is a one, then all zeros, then a one have the form 2^n+1 - and the prime ones among them are Fermat primes, like 65537 = 2^16+1.
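The square-and-multiply idea above can be sketched in a few lines of Python (an illustrative version only - real libraries use constant-time code so the exponent doesn't leak through timing):

```python
# Left-to-right square-and-multiply: walk the exponent's bits from the
# most significant end, squaring for every bit and multiplying by x for
# every one-bit. Fewer one-bits in the exponent = fewer multiplications.

def modexp(x: int, exp: int, mod: int) -> int:
    result = 1
    for bit in bin(exp)[2:]:           # MSB first
        result = result * result % mod # square for every bit
        if bit == "1":
            result = result * x % mod  # multiply only on one-bits
    return result

# e = 65537 = 0b1_0000_0000_0000_0001: only two one-bits, so only two
# multiplications on top of the squarings -- hence its popularity.
print(modexp(7, 65537, 11 * 13))       # matches pow(7, 65537, 143)
print(bin(65537).count("1"))           # 2
```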


So what is it? Just stop using RSA completely or stop rolling your own RSA? The article sends mixed signals on this, I completely understand the latter as bad but I have used the RSA framework within .NET to sign messages before and I don't recall it being hard at all.


I think the argument is to stop using it.

The problem with RSA is that it's easy to understand and easy to implement well enough to work in the sense of "my test cases verify and reject my good and bad signatures", but extremely hard to implement correctly with respect to a host of subtle number theoretic attacks against prime generation, key selection, and bit banging that have been successful in the wild over the past few decades. Given the number of attacks that have been found and their subtlety, it seems likely that there may be additional holes yet to be found in any given implementation.

There are other options that may be harder to understand and implement, but the trade off is that if you get them to work at all, you have probably done it safely (or more likely used a library written by a cryptographer that did it correctly) - Digital Signature Algorithm (DSA) for signing and Elliptic Curve DSA (ECDSA) and Elliptic Curve Diffie-Helman (ECDH) for key agreement which allows for encryption (via a symmetric method).
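To show the shape of the key agreement mentioned above, here's a toy finite-field Diffie-Hellman sketch (tiny made-up parameters, no authentication - purely illustrative; real systems use X25519 from a vetted library):

```python
# Toy Diffie-Hellman key agreement: both parties end up with the same
# shared secret, which then keys a symmetric cipher. The group below is
# toy-sized and this sketch is NOT secure -- it only shows the shape.
import secrets

p = 4294967291          # largest prime below 2^32 (far too small for real use)
g = 2

alice_secret = secrets.randbelow(p - 2) + 1
bob_secret = secrets.randbelow(p - 2) + 1

alice_public = pow(g, alice_secret, p)   # sent over the wire
bob_public = pow(g, bob_secret, p)       # sent over the wire

# Each side combines its own secret with the other's public value:
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)

print(alice_shared == bob_shared)        # True: same symmetric key material
```

ECDH is the same dance with elliptic curve point multiplication in place of modular exponentiation.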


The conclusion is to stop using RSA entirely. The article argues that RSA parameters are so easy to screw up that even if you're not implementing the algorithm yourself, you're probably introducing security holes by choosing parameters that leave you open to vulnerabilities you've never heard of, which will allow attackers to retrieve secret keys. Instead, you should use Ed25519, the signature scheme over Curve25519. If you use that algorithm (obviously not rolling it yourself, but using a well-tested open-source implementation), you're much less likely to introduce vulnerabilities accidentally through bad parameter choices.


Yes. Stop entirely. Developers in fact MUST NOT use ANY crypto primitives directly. "If you're typing the letters A-E-S into your code, you're doing it wrong." You WILL do something subtly (or not so subtly) wrong and it won't be secure.

Use high-level constructs designed and peer-reviewed by cryptographers who know what they're doing.

Basically: use libsodium.


I really would like to use libsodium, but I use Java. Looking at https://libsodium.gitbook.io/doc/bindings_for_other_language... it seems that there are 5 different ports. The question is: which one is safe to use?

One implementation docs says:

"I'm experiencing some issues on Windows. Do you have any idea?

I'm sorry but I'm completely clueless about Windows environment, but if you have any suggestions or PR changes. They will be more than welcome."

Can I trust them? How do I know there are no some Windows-specific issues in this implementation?

Another library listed on this page does not have a single release? Could it be trusted?

The next one has been abandoned and moved into the Apache Foundation incubator. Good, but will it be maintained? Maybe use this one, or maybe Lazysodium, which has a nice GitHub page?

If I were to verify whether any of those ports is a proper one, I would probably have to become a real crypto expert.

Isn't it safer to learn and understand how to use, say, the AES algorithm, as it is provided as part of Java? There are a few important config choices, but it still looks easier than analyzing someone's libsodium port?


> Isn't it safer to learn and understand how to use, say, AES algorithm

No.

AES by itself is just a function that provides symmetric encryption of one fixed size block. You can't do anything with "just AES". Bad libraries will throw a choice of block cipher modes at you, and tell you to pick CBC or CTR. Then you won't even realize that you have no integrity checking at all on these messages. If you do, you'll maybe add an HMAC, and end up with encrypt-then-MAC or MAC-then-encrypt and ughhhh.

Just use a high level library that provides idiot-proof APIs with descriptive names like "secret box".
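For illustration, this is the integrity bookkeeping (tag the ciphertext, constant-time compare before touching it) that a secret_box call hides from you, sketched with Python's stdlib. The ciphertext bytes are a stand-in, since the point here is the composition, not the cipher:

```python
# Encrypt-then-MAC sketch: the tag is computed over the *ciphertext*, and
# the receiver verifies it (constant-time compare!) before decrypting.
# A secret_box API does this, plus the encryption and nonce handling,
# in one call -- so you can't forget a step or get the order wrong.
import hashlib
import hmac
import secrets

mac_key = secrets.token_bytes(32)
ciphertext = b"...bytes from some cipher..."   # stand-in; stdlib has no AES

tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()

# Receiver side: verify BEFORE doing anything with the ciphertext.
ok = hmac.compare_digest(
    tag, hmac.new(mac_key, ciphertext, hashlib.sha256).digest())
print(ok)                                      # True for an untampered message

tampered = ciphertext + b"!"
print(hmac.compare_digest(
    tag, hmac.new(mac_key, tampered, hashlib.sha256).digest()))  # False
```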

> If I am to verify if any of those ports is a proper one probably I would have to really become a crypto expert.

These things aren't ports!!! They're simple FFI bindings. Take a quick look at the code (heck, the readmes say things like "Java Native Access" already, this should ring a bell).


If you have portability issues, I happen to have written a crypto library¹ in portable C99/C++. It has been tested on a stupidly wide range of platforms, and is simple enough that you could bind to the higher-level constructions yourself. Platform specific issues are pretty much impossible. Oh, and it's just 2 files (one header, one source file), pretty easy to just paste them into your own project.

As for the maintenance, it hardly needs any. Should I get hit by a bus tomorrow, you'd still have something useable today. If you want client server security, I'm currently working on Noise like protocols, and should add that during the summer. After that, my library should be mostly frozen.

Now there is this thing where you might not trust yourself enough to decide whether you should trust me…

1: https://monocypher.org/


Honestly on Java, I would stick to Bouncy Castle. It's tried and tested, has regular releases, good documentation and is regularly maintained.

It's not equivalent in any way. Bouncy Castle is a kitchen sink library that provides everything except for a high-level, modern construction based, idiot proof secret_box/crypto_box/crypto_sign API.

Should we not be using bcrypt anymore?

bcrypt is a password hashing algorithm. It's totally fine, but newer better ones exist (scrypt and now Argon2). libsodium provides them.
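As an aside, Python's stdlib has exposed scrypt since 3.6 (backed by OpenSSL), so a memory-hard password hash needs no third-party code at all. The cost parameters below are illustrative, not a recommendation - libsodium's crypto_pwhash picks sane defaults for you:

```python
# Memory-hard password hashing with stdlib scrypt. Store salt + digest;
# to verify, re-derive with the stored salt and compare in constant time.
import hashlib
import hmac
import secrets

password = b"correct horse battery staple"
salt = secrets.token_bytes(16)   # unique random salt per password

# n/r/p trade CPU and memory cost; tune for your hardware (illustrative values).
digest = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

# Verification: same salt, same params, constant-time comparison.
candidate = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
print(hmac.compare_digest(digest, candidate))   # True
```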

With libsodium you still have to choose parameters, though; in that respect it's not that different from other crypto libraries (including .NET).


What parameters do you have to choose? Apart from the nonce, which is confusing for most people who don't read the documentation, and the computational params for password hashing (unavoidable), there are no parameters to choose (no curves, no hash functions, no cipher modes, etc.)


    crypto_box_easy(ciphertext, MESSAGE, MESSAGE_LEN, nonce, bob_publickey, alice_secretkey)
Where do you see a place for parameters?

Even if there was some choice, it would've been a choice between known good options, no footguns.

