Post-quantum encryption contender is taken out by single-core PC and 1 hour (arstechnica.com)
122 points by adrian_mrd on Aug 2, 2022 | 53 comments



> In the US government's ongoing campaign to protect data in the age of quantum computers, a new and powerful attack [...] highlights the risks involved in standardizing the next generation of encryption algorithms.

I don't think that follows from the published attack at all.

As Bruce Schneier wrote nearly a quarter century ago, "Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break."[0] The only way to get good cryptosystems is to get as many cryptographers as possible, outside of the ones who invented them, to attack those cryptosystems in as many ways as they can think of.

But people come up with cryptosystems all the time. How do researchers decide which ones are worth attacking? And, how do we know that enough researchers have spent enough time attacking any given cryptosystem to conclude that, if none of them have found a flaw by now, then it's plausible that the system might hold up for another 10-20 years or so in widespread use?

One of the best ways we've come up with is to have people submit a bunch of proposals to a standards agency (like NIST), which then, with advice from academia and industry, creates a shortlist of likely candidates and finally advertises those as the ones that as many people as possible should try to break. And anyone who does find a weakness gets to publish a paper about it and receive a bunch of recognition - rather than finding a weakness in a system no-one cares about and getting nothing.

This is the kind of result that standardising encryption algorithms is meant to produce. It's meant to leverage widespread expertise to find the weaknesses in systems before they start to get widespread use.

[0] https://www.schneier.com/crypto-gram/archives/1998/1015.html...


> I don't think that follows from the published attack at all.

Maybe not - but from a layman's perspective, the fact that a supposedly quantum-computing-proof algorithm isn't even classical-computing-proof does sound worrying.

Imagine someone said a cage is tiger-proof but then a house cat goes and destroys it in an hour :)

I guess the field will take a long time to mature, which is OK since practically useful quantum computers are probably far from being available.


> then a house cat goes and destroys it

or just squeezes out between the bars. "Wait, that's cheating..."


Yes, it's worrying, but math is shockingly empirical because new and surprising discoveries happen (and one did in this case, via a novel relationship between genus-1 and genus-2 curves). Which means the risk mentioned isn't limited to these particular new systems for cryptography. But I'll take your side to some extent - there are provably unbreakable systems such as one-time pads (with heavy trade-offs!) and other angles that we should probably consider using more than we do.


In some sense, the field of classical cryptography has never matured. Outside of some very simple systems of limited applicability (e.g. one-time pads), the field has never really moved beyond 'propose a cipher, and act as if it is secure until proven otherwise'. For all we know, all of the algorithms that are weak to quantum computers are also vulnerable to classical ones.


If the fact that no-one has yet come up with a way to either solve a particular mathematical problem (e.g. factorisation in polynomial-or-better time), or to prove that no such solution exists, means that a field has never matured, then you could say that the entire field of mathematics has never matured, because e.g. the Goldbach Conjecture has never been either proven or disproven.

Taking that line of reasoning further, given Gödel's Incompleteness Theorem you could say that mathematics as a field will never have matured.


It doesn't seem elucidating to make an analogy between a thing (math) and its own subset (crypto).

Worse, crypto is essentially defined by the added requirement that it at least not be the weakest link in keeping the jewels locked in the safe. In that sense the addition of axioms becomes a liability in a way it isn't in broad math.

Put another way, math as a field was able to mature because it lacked this hard requirement that crypto has. If mathematicians suddenly decided to add this jewel-keeping requirement to math broadly, then, bam, math instantly becomes the same "immature" field that crypto currently is.


Well, math was that "immature" field back in the 16th century:

https://www.quantamagazine.org/the-scandalous-history-of-the...


That's not how it works. The central tenet is that you consider a cipher secure if you can prove mathematically that breaking that cipher is equivalent to solving a difficult problem, such as the factorization of a large composite number.


We don't even know if difficult problems exist, at least not in a form that is useful to cryptography (i.e. problems that become easy once you know a small shared secret).

We have some problems (factorization and discrete log) that no one has found a way to efficiently solve on a Turing machine; but we don't really have a solid reason to think these problems are difficult. All we know is that our best mathematicians have been stuck on them for generations.

The bigger problem is that even these reductions to assumed-difficult problems represent only a small portion of cryptography. We "prove" things like AES and SHA-512 are secure by publishing them and waiting a few years to see if anyone publishes a break. No meaningful reduction to well-researched problems, just a hope that we won't discover we missed something after making it a standard.
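
To make the "easy once you know the secret, hard without it" shape of these problems concrete, here's a toy discrete-log sketch in Python (parameters are deliberately tiny and purely illustrative; real deployments use groups that are hundreds of digits wide):

    # Forward direction: modular exponentiation is fast even for big exponents.
    # Backward direction: recovering the exponent (the discrete log) has no
    # known efficient classical algorithm for well-chosen large groups.
    p = 1000003              # a small prime modulus, toy-sized on purpose
    g = 5                    # a fixed base
    x = 765432               # the secret exponent

    y = pow(g, x, p)         # easy: square-and-multiply, ~log2(x) steps

    def brute_force_dlog(g, y, p):
        """Naive discrete log: try exponents one by one until one matches."""
        acc = 1
        for k in range(p):
            if acc == y:
                return k
            acc = (acc * g) % p
        return None

    k = brute_force_dlog(g, y, p)
    assert pow(g, k, p) == y  # feasible at this toy size, hopeless at real key sizes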


The tiger / housecat analogy is great for this context! Even if a tiger fit in the housecat's cage it could easily rip it apart, while a housecat could slip between the bars of a tiger's cage easily.


I read all this more as

> We should try to come up with Tiger-proof cages

> This one might work

> Nope, it fails in this way

> Ok, onto the next idea....


Technicalities aside, I really appreciate people who are humble and say things like "I lump myself into the category of those many researchers who work in cryptography but do not understand as much mathematics as we really should".

Be humble enough to admit your mistakes or shortcomings and get back to work trying to learn from what went wrong; we need this kind of person.


>The attack has no impact on the four PQC algorithms selected by NIST as approved standards, all of which rely on completely different mathematical techniques than SIKE.

This was an algorithm that was in the process of being evaluated. It seems like the process worked the way it should have and is weeding out the algorithms that don't work. There is nothing wrong with trying an idea that doesn't work out!


Still makes a fun story, though. And, if nothing else, it should serve as a warning: even professional crypto experts make weak crypto when they're trying their hardest. Don't roll your own!


It's worth noting that NIST already announced that the likely winners of the contest were CRYSTALS-Kyber and CRYSTALS-Dilithium: https://www.feistyduck.com/bulletproof-tls-newsletter/issue_.... It sounds like it's just waiting for some i's to be dotted and t's to be crossed.

Their concern that something that made it through multiple rounds turned out to be so easily broken is very valid, but as of today I wouldn't refer to SIKE as a "contender" anymore.


Different category. Now the favorite is Classic McEliece. http://classic.mceliece.org/ (djb et al)


There is still an ongoing round 4. There will likely be another NIST program for signature schemes with smaller signature sizes.

Falcon and SPHINCS+ were also standardized btw, not just Dilithium.


My impression is that critical properties of these algorithms are not sufficiently understood/proven (in contrast to e.g. RSA), and that these are therefore open to yet unknown attacks. Be careful with those secrets...


Surprising contrast, since the security assumption of RSA is also not proven.


Prime factorization, matrix math, and elliptic curves are the primary "difficult" math problems we have for implementing encryption.

If there are any others, I'd like to know.


Its properties are known. It’s safe until someone can efficiently factor large numbers.


That statement relies on the unproven assumption that there's no faster way to solve the RSA problem than to factor the modulus.
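
For the direction that is known, here's a toy sketch (tiny, wildly insecure numbers, purely illustrative): once you can factor the modulus you get phi(n), and the private exponent falls out immediately. The unproven part is the converse, i.e. that inverting m -> m^e mod n actually requires factoring.

    from math import gcd

    p, q, e = 61, 53, 17        # toy parameters, illustrative only
    n = p * q                   # public modulus
    phi = (p - 1) * (q - 1)     # only computable once you know p and q
    assert gcd(e, phi) == 1
    d = pow(e, -1, phi)         # private exponent via modular inverse (Python 3.8+)

    m = 42
    c = pow(m, e, n)            # "encrypt" with the public key (e, n)
    assert pow(c, d, n) == m    # "decrypt" with the recovered d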


But RSA will 100% be broken by quantum computers. So you're weighing the possibility of new algorithms being broken against the certainty of RSA being broken.


I just want encryption and hashing algorithms that will last for a human lifetime (ie. 100 years).

There are plenty of real world things I can be pretty confident won't exist in 100 years... For example we won't be able to make an atom-perfect cloning machine. We won't have solved teleportation for people, we won't have cured all diseases.

Well, I just want an encryption algorithm that won't have been broken by then.


Scale some algorithms by increasing the key size and/or the number of rounds.

You have to consider making it cost-prohibitive, in that target future, for a near-state actor to break it using classical computing, QC (maybe; it's still an unsolved manufacturing problem), FPGAs, and ASICs.

10M rounds of AES512 might or might not be enough.
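
As a back-of-envelope sketch of the "cost prohibitive" arithmetic (the guess rate below is a made-up assumption, not a projection of any real attacker):

    # Expected time to search half a key space at an assumed guess rate.
    # Real attacks are rarely pure brute force; the point is the exponential
    # blow-up you buy with every extra key bit.
    SECONDS_PER_YEAR = 31_557_600
    guesses_per_second = 1e18              # arbitrary assumed aggregate rate

    for key_bits in (80, 128, 256):
        years = 2 ** (key_bits - 1) / guesses_per_second / SECONDS_PER_YEAR
        print(f"{key_bits}-bit key: ~{years:.3g} years on average")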


This one is going to suck for the other crypto, Bitcoin/Ethereum/etc. SIKE had the best performance in terms of public key size, under 400 bytes. The next smallest, I think, is HQC, which is about 2k. So now signed blockchain transactions will be at least 5x bigger.


Note that SIKE is not a digital signature scheme, it's for key exchange. Hence SIKE cannot be used for signing Bitcoin transactions but can be used for encrypting communication over the wire.

For digital signatures, the 3 standardized schemes are Falcon, Dilithium, SPHINCS+.

Falcon/Dilithium can be used in blockchains though they are much larger compared to elliptic curve ones. SPHINCS+ is way too large.

There is likely going to be a newer NIST program for signature schemes with smaller signature size.


makes me wonder how many SIKE engineers have NSA connections...


Probably very few or none of them; cryptography getting broken is common. There is no hint that this weakness was introduced on purpose.

Now, did the NSA find the attack on their own, keep it secret, and hope for standardization? One can only speculate.

Also, the NSA does not always try to backdoor everything like they did with DUAL_EC_DRBG. For instance, they modified the S-boxes of DES during its standardization and refused to elaborate on why. The reason was found later, when researchers invented differential cryptanalysis and discovered that the NSA-modified S-boxes were resistant to it while the originals were not: the NSA had actually made DES stronger on purpose.

https://arstechnica.com/information-technology/2013/09/the-n...


Especially with that name...


What would happen if the rest of the PQC finalists get broken by barebones classical attacks like SIKE was?


Anyone that's using PQC as more than a curiosity would presumably take a belt-and-suspenders approach by using multiple rounds of different PQC as well as classical cryptographic algorithms, hopefully carefully selected so that they rely on different mathematics and therefore do not share the same theoretical weaknesses.
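
A minimal sketch of what that can look like at the key-derivation step, assuming you already hold shared secrets from one classical exchange (say X25519) and one PQ KEM; the helper name and the plain SHA-256 combiner here are illustrative assumptions, not a standard construction:

    import hashlib

    def combine_shared_secrets(*secrets: bytes, context: bytes = b"") -> bytes:
        """Hash several independently negotiated shared secrets into one
        session key; it stays secret as long as any one scheme holds up."""
        h = hashlib.sha256()
        h.update(context)
        for s in secrets:
            # Length-prefix each input so different splits can't collide.
            h.update(len(s).to_bytes(4, "big"))
            h.update(s)
        return h.digest()

    # Hypothetical usage:
    # session_key = combine_shared_secrets(ecdh_secret, pq_kem_secret,
    #                                      context=b"handshake-v1")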


> Tech consultancy Booz Allen Hamilton has warned that China will soon plan the theft of high value data, so it can decrypt it once quantum computers break classical encryption.

https://www.theregister.com/2021/11/29/china_quantum_ai_offe...


People would try to invent new better algorithms, presumably.


It's unlikely. There's a diversity of mathematical approaches represented, including some whose underlying systems have been studied for the last 50 years.


Depends on whether it happens before or after they start using them to secure important data. Before - it's no big deal, just the safeguards working as they should. After - that would be bad.


Why is there an autoplaying, unrelated Blade Runner video in the middle of the article? What is the point?


So the selection process is working, it seems


At first I figured the algorithm was a troll, especially given its acronym.


Are you a native English speaker from the West Coast of the United States? I would guess so. It is a subtle linguistic reference that is lost on most of the rest of the world.

Or as the Californian youth used to say: SIIIIIIIKE

It turns out that it was appropriately named!


I don't think you need to be from the West Coast to understand people saying SIKE. It's a pretty common phrase across the US. I'm from Colorado and I heard it a fair amount growing up. Agree on the appropriate name, though; it was my first thought when reading that it had been broken.


That’s nice to know, I didn’t realize that it wasn’t purely a matter of regional dialect. Thank you.


Looking through the list of authors:

Florida Atlantic University

Amazon

IBM Research

Microsoft, Microsoft, Microsoft, more Microsoft

LinkedIn

Louisiana Tech University

Texas Instruments

yikes... these are who we already rely on every day to keep our data, our services, and our software safe and secure.


I don't get your point. Cryptography (especially new protocols such as SIKE or anything PQ) must undergo public scrutiny for years or decades, and sometimes gets destroyed in the process; that's how it works. Coming up with a usable public-key scheme that isn't trivially insecure is already incredibly difficult.

Also, while the attack here is devastating, there is no blame to put on the SIKE authors: the field is immensely vast, complex, and abstract, and no one can know everything. The attack relies on a somewhat recent result that is definitely not textbook classical crypto.

Furthermore, the cryptosystems used by these companies are the ones that have been tested and standardized over the years, not the brand-new algorithms designed by their cryptographers (as new algorithms must be scrutinized first).

Also, as another commenter suggests cryptographers are not the ones in charge of the actual implementation at their companies.

Finally, you are conflating cryptography and security.


The particular attack is very specific and not really relevant to the implication of the last line in this comment.

"One unexpected facet of the attack is that it uses genus 2 curves to attack elliptic curves (which are genus 1 curves). A connection between the two types of curves is quite unexpected. To give an example illustrating what I mean, for decades people have been trying to attack regular elliptic curve cryptography, including some who have tried using approaches based on genus 2 curves. None of these attempts has succeeded. So for this attempt to succeed in the realm of isogenies is an unexpected development."


I doubt the researchers who invented this encryption algorithm are security engineers in charge of keeping those companies' software safe.

Even if they were the same people, the skill sets of each job are quite different. Being flawed at inventing new algorithms doesn't mean you're flawed at applying state of the art security to existing software.


I would go so far as to say that making an algorithm that doesn't work doesn't make you bad at inventing new algorithms. Trying ideas that might not work is part of science.


Agreed - I didn't mean to say the researchers are flawed (negative results are part of the process, as you say). I just meant that, even if they were, it wouldn't really matter for the OP's concern.


Not that I have any form of sympathy for these companies, but I think it is safe to assume that, when it comes to cryptography, what's done in research labs and what's in production at any given time are not at all the same thing, even within the same company.


> Louisiana Tech University

I think you found the offender.


What do you mean?


Oh, it's just a dig at Louisiana, where my family is from. It's fun to call them stupid (they're not).



