If you'd like to hear someone who can barely do long division† discuss this vulnerability with one of the leading isogeny cryptography researchers and the world's most isogeny-enthusiastic cryptography engineer, have I got a podcast for you:
https://securitycryptographywhatever.buzzsprout.com/1822302/...
There's even a transcript, if you want to read things like:
So I watched the, uh, I watched Costello's tutorial, like the, the broadcast he did for, um, for Microsoft. And I kind of worked my way through the, the tutorial paper. So like, is it, is it true that like, in sort of the same sense we're...
† Looks back and forth around the room furtively
Just listened to that episode yesterday - it was great, thanks! Listening to Deirdre's description of how it got ported to Sage and accelerated in short order - so she could break it in a few minutes on her laptop, in Python - totally reminded me of the line from Iron Man.
"Tony Stark Was Able To Build This In A Cave! With A Box Of Scraps!"
You need to be able to perform long division to take quotients in algebraic structures, e.g. polynomials. Yes, Wolfram Alpha can probably do it, but not always.
You don't need to understand long division to be able to do that. Long division is over the integers. It's got a weird notation, and I never learnt it.
There isn't any reason for 99.9% of mathematicians - even in abstract algebra - to know long division.
As a kid who chronically forgot to bring her calculator to school for about 12 years straight, I can now do long division well enough that I sometimes do it rather than get up to fetch my phone from the other room. It's also how I do division in my head.
I did pursue mathematics academically but I don't know if it ever came up in university math; more so in physics and computer science.
EDIT: apparently what I do is "short division", which is just long division but you don't write down all the steps.
After looking it up on Wikipedia and seeing the various notations used (most of which are awful IMO) I get the hate on long division, though I never felt anything like that.
It may be because it is familiar to me, but the notation I learned in school in Austria feels superior or at least less irritating. I guess German schools use the same.
Why the heck write the divisor first?
This is news to me that people can't/don't do long division. It's simple. I use it regularly to estimate quotients--just run the process to a couple places in your head.
It's much easier in base 2 than in base 10. Similar to the multiplication tables, the table for base 2 is just the 2x2 matrix at the top left corner of the 10x10 times tables.
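To make the "easier in base 2" point concrete, here's a minimal Python sketch (the function name is just illustrative) of shift-and-subtract division, which is the base-2 version of the schoolbook algorithm: each quotient digit is simply "does the divisor fit, yes or no", so there's no times-table guesswork.

    def binary_long_division(n, d):
        """Base-2 schoolbook long division (shift-and-subtract).
        Each quotient digit is just 'does d fit into the running remainder?',
        so the only 'times table' needed is 0 and 1."""
        assert d > 0
        quotient, remainder = 0, 0
        for bit in range(n.bit_length() - 1, -1, -1):
            remainder = (remainder << 1) | ((n >> bit) & 1)  # bring down the next bit
            quotient <<= 1
            if remainder >= d:                               # quotient digit is 1
                remainder -= d
                quotient |= 1
        return quotient, remainder

    assert binary_long_division(1000003, 97) == divmod(1000003, 97)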
He's wrong about the "cryptographic agility" stuff, at least the way he's framed it. But then, another place where I'd part company with him is about the urgency for getting PQC slotted into real-world systems.
Why do you think he's wrong? He's essentially saying IT systems need to be designed with the expectation that cryptographic algorithms will be attacked and may need to be replaced, as I understand it.
If by "crypto agility" one means "we should research diverse cryptographic primitives and constructions so that we can be ready if something we rely upon breaks", nobody disagrees with that. But that's not what Schneier means.
What he says instead is that "it’s vital that our systems be able to easily swap in new algorithms when required". That approach has a virtually unbroken track record of failure. It demands negotiation, which introduces bugs, and even after you get past that, it doesn't work: you literally always end up with downgrade attacks (see, for instance, the DNSSEC work at Black Hat this year). Sometimes those downgrade attacks introduce vulnerabilities for parties that would never have even attempted to use the legacy crypto.
Doesn't every major cryptosystem have multiple ciphersuites, though?
There are things like SSL, SSH and GPG, TrueCrypt, BitLocker, /etc/passwd, ntpsec - even Git is trying to upgrade its hashes from SHA-1 to something longer. There are only a handful of exceptions, like TOTP.
Isn't it a must-have feature? Or has the feature become less important than it was 25 years ago when those protocols were being designed?
Yes, and every one of those major cryptosystems has been a debacle, in large part because of the negotiations imposed by ciphersuites. It is not a must-have feature; it's a feature cryptography engineering best practice is rapidly beginning to recognize as an anti-feature. See WireGuard for an example of the alternative: you version the whole protocol, and if some primitive you depend on has a break, you roll out a new version --- which, historically, you've effectively had to do anyways in legacy protocols.
If you have multiple WireGuard versions, in a migration setting, you also need to do some negotiation at the start, no? Wouldn't that be potentially vulnerable to downgrade attacks as well?
So the migration looks like "upgrade the client or you won't be able to connect to the server any more"? What if you use the client to talk to multiple servers, some that use the old version, some that use the new version? Maybe via a config variable adjustable per server? Then you do out of band version negotiation, and you might get away with this in the VPN setting, where entering arcane config vars is commonplace, but not in e.g. the TLS setting.
I guess that's thanks to the fact that WireGuard is a new system and new systems have little legacy bloat. Maybe the WireGuard author had golden hands, and the system is perfect, and indeed it is quite good, but I think instead that WireGuard will eventually require a new version. Then one such solution will have to be chosen.
No, it's pretty widely recognized that WireGuard is in a sense a repudiation of "agility". You can look at, for instance, the INRIA analysis/proof paper to see how a bunch of disinterested cryptographers describe it: "many new secure channel protocols eschew standardisation in favour of a lean design that uses only modern cryptography and supports minimal cryptographic agility."
If you want to say "minimalist agility is good and you're just saying maximalist agility is bad", that's fine, we're just bickering about terms. But that's pretty obviously not what Schneier is talking about.
All the works I've read of Schneier have given me the impression of the above definition, "support multiple cryptographic primitives and do not be overly coupled to a single primitive."
"The moral is the need for cryptographic agility. It’s not enough to implement a single standard; it’s vital that our systems be able to easily swap in new algorithms when required."
Do you have a link to something that in your mind represents what Schneier is talking about?
A modern cryptosystem wouldn't be designed to swap in new algorithms; it would pick a single set of algorithms and constructions, and version the whole system. Which is how WireGuard works: you can't run AES WireGuard, or WireGuard with the standard P-curves.
I see what you mean. I'm not seeing that in this particular article, but without getting sidetracked by whether he's said that before - I think there are two flavors of agility here.
You're arguing against runtime agility. That makes sense; in fact, the more runtime agility you have, the less design-time agility you have, because all that dynamic negotiation is quite the constraint - and as you point out, a source of flaws.
Even negotiation has flavors. Classic TLS tried to make supporting old ciphers "secure", but even negotiation in which you assume the attacker _can_ control the cipher can be useful - the point then being not to do so long-term, but merely as a technique to allow non-instantaneous roll-out world wide. There's a difference between trying to support but never use legacy stuff and also refusing to specify the current gold standard, and merely having a protocol that supports multiple ciphers only to the extent necessary to occasionally replace a single old configuration with a single new one.
I'm not sure whether there's a huge difference between merely supporting research diversity, and having non-runtime but design-time agility. Perhaps those are the same thing. I guess it gets interesting where encryption primitives aren't just ephemeral communication tools, but a more intrinsic part of the software (such as in signing, and especially in stuff like blockchains) - is there a way to have at least the option of swapping primitives without weakening the guarantees today?
To me, it just means that the formats shouldn’t be hardcoded to a single algorithm (e.g. like Git is to SHA-1), but should allow for different (including future) algorithms, like e.g. X.509 certificates do. I do work in the electronic signature space, where this is important, and there are no negotiation scenarios there.
I don't think that being able to easily swap to new algorithms means negotiation. Signal e.g. can easily switch to other encryption schemes, just as Whatsapp shipped e2ee in a few weeks. After some time, you are forced to upgrade the app in order to be able to use it.
Well, I agree generally; if we are talking about these new, basically experimental, post-quantum algorithms, I feel like the balance shifts a bit since they are still so new (on the other hand, maybe using them is just premature at this point).
Sort of off-topic, but it reminds me of the first auto shop I worked in. I had just gotten certified on a new laser alignment tool and had a chip on my shoulder for almost a week, until an old timer manually dialed in an alignment that checked out perfect on my shiny new gizmo. I'd never felt so humbled in my life and spent that whole summer practicing manual alignments.
My anecdotal usage is a combination of these two. Until I saw your provided definition, I would have described "chip on my shoulder" as meaning a sense of superiority that leads me to be rude or difficult to interact with.
Yeah, pretty off-topic, but I'm glad you shared the story. It reminded me of a story my dad told about a similarly skillful shop owner he'd run into. My dad passed away a few years ago, so I'm grateful for you reminding me of a nice thing.
"To me what is most surprising is that the attack seemingly came out of nowhere,” says cryptographer Jonathan Katz at the University of Maryland at College Park, who did not take part in this new work. “There were very few prior results showing any weaknesses in SIKE, and then suddenly this result appeared with a completely devastating attack—namely, it finds the entire secret key, and does so relatively quickly without any quantum computation."
The funny bit about this is that the principles that broke SIDH were in the literature --- they owe to a late-1990s theorem† by Ernst Kani, a mathematician in Ontario. We spoke to Steven Galbraith about this (I wouldn't know who Kani was if I hadn't read Galbraith, just to be clear) and he'd even talked to Kani long before any of this came out. But Kani isn't a cryptographer and apparently isn't even especially interested in cryptography, so the dots didn't get connected until much later.
† That's the "25-year-old theorem" from the article.
+1, funny -- but for the sake of non-native English speakers, FTR, it's pronounced the same as "psych" and is colloquial for a sarcastic "ha-ha, just kidding"
For additional cultural context, I think this usage was popularized by Eddie Murphy in Delirious (NSFW language: https://youtu.be/Ft4kEk5CHrE, you'll want to listen to at least 2:15)
Given that Eddie is telling a story from his youth it's probably fair to say it was in common use 10+ years earlier - which is consistent with my vernacular at the time.
Further, "sike" has become a common spelling when used to indicate "not really." At least, according to urban dictionary and other online dictionaries.
If you're going to address non-native English speakers, using acronyms like FTR (I suppose that refers to for-the-record, but one can never be sure on the Internet... could easily be something like for-the-retarded in some circles) may not be the best idea, BTW (by the way :D).
> To me what is most surprising is that the attack seemingly came out of nowhere,
This wasn't my understanding at all. The specific issue in isogeny-based cryptography which the attack exploits has been a source of worry in the cryptographic community for a while, and is exactly why NIST put SIKE in the "for further consideration & cryptanalysis" category when making their standardization decisions.
You KNOW they first had to do this in the normal way (large scale, distributed servers)... and cracked it in like a second. Then for grins, the engineer HAD to say "I wonder if I could do this on my old Mac mini". And it worked.
And for the embarrassment of the original design, the story, and clickbait... they did it on that old machine.
They used an Intel Xeon CPU E5-2630v2; it's in the paper. What if, in the process of crafting the attack on their old workstation PC, they found that it was possible to break low key sizes very quickly and scaled up from there to a practical attack? Or maybe they have quite the competency in mathematics and realized their attack was not that computationally expensive.
>Ran on a single core, the appended Magma code breaks the Microsoft SIKE challenges $IKEp182 and $IKEp217 in about 4 minutes and 6 minutes, respectively. A run on the SIKEp434 parameters, previously believed to meet NIST’s quantum security level 1, took about 62 minutes, again on a single core. We also ran the code on random instances of SIKEp503 (level 2), SIKEp610 (level 3) and SIKEp751 (level 5), which took about 2h19m, 8h15m and 20h37m, respectively.
Mathematicians do not have funding for "large scale". A 10-year-old mid-range server is exactly the kind of system I would expect Magma to run on in the average case. Perhaps even just a desktop PC.
Source: worked with algebra researchers using Magma.
I was being a bit facetious, but not by much. Maybe because they're mathematicians and had found a theorem - but a pen tester wouldn't have.
It costs less than a few hundred bucks to run numerous multi-compute AWS spot instances for cracks on large dictionaries with high hash rates, on random-seed password lists (where each password has its own seed).
If they were trying to crack a quantum-safe scheme that, by design, a classical computer shouldn't even be able to solve (except potentially via a hole opened by some theorem), you'd think they'd start higher.
The authors of the attack are primarily theoreticians. The only implementation they provided was a SageMath one. I don’t think there was ever any distributed impl for this algorithm.
I wonder if someday we will see someone generating all the bitcoins on a laptop. As the article says, the math behind most cryptosystems was never proven to be unbreakable; it is just believed to be so because no one has managed to show otherwise.
As of right now, Bitcoin depends pretty heavily on SHA-256, and with hash functions being quite important cryptographic primitives, there's always ongoing work on breaking them (tremendous upside for anyone who can manage to break a common one), so it's pretty feasible that it will eventually be broken. (We've already seen the fall of MD5 and SHA-1 in recent-ish years.)
However, cryptocurrencies are a human system as much as they are a computing technology. If weaknesses start being discovered in SHA-256 or the EC signing of Bitcoin, then in all likelihood they'd just fork the chain and upgrade the hash or signing mechanism.
It’s pretty unlikely that SHA-2 will ever be broken in a way which actually has a meaningful security impact on Bitcoin, especially considering that almost every value in the system is sha2(sha2()), which nullifies a lot of attacks against hashes that need careful control of the input. Some newer tools in the system use a single hash (it’s unclear why a double one was used in the first place), but all the same it remains highly unlikely.
Complete breaks of ECDSA are likely to be devastating as many keys in the data are re-used hundreds of thousands of times, but a weakening of it can be mitigated by moving to a new signature standard, which isn’t even consensus incompatible due to the upgradability built into the script language.
Consensus compatibility is nice, but Bitcoin has a unique problem: those old signatures own coins. Phasing out a signature algorithm means confiscating the coins in question, as the rightful owner will no longer be able to spend them. Leaving them open would just let private actors break wallets to confiscate the coins themselves, with the added bonus that burnt or lost coins could be recovered, effectively increasing the supply of coins on the market. And Bitcoin has a lot of early-adopter money supply locked up behind dead hard drives - it would crash the market.
Other systems that use ECDSA don't have this problem because they rely on CAs and central authorities. For things like, say, the TLS PKI; if you miss the flag date to change ciphers you aren't forever locked out of your domain. Your site just goes down until you bother to upgrade your servers and rotate keys.
Is there any known/stated policy as to how to handle phasing out a signature algorithm in Bitcoin?
No idea, but I could imagine something where you have a period of time on the chain where both signature types are allowed, and people can just migrate their coins by transferring from the old wallet to one based on the new signature system. Then after a certain cutoff, the chain can phase out the old signature system entirely.
Of course this only works if the old system is just "weak" rather than "broken". There is no way to recover if the signature system is completely broken, but if ECDSA is broken then we have more to worry about than just Bitcoins.
Not only that, but bazillions of coins are dead. My 4 bitcoins that were lost to a hard drive failure a decade ago are gone, and “reanimating” them wouldn’t even be theft.
> especially considering that almost every value in the system is sha2(sha2())
Why does this give a meaningful improvement? Is this just security through obscurity? Presumably if this had significant benefits, SHA-2 would have been defined this way to start with, right? Or is it just that other users will be broken before this "double strong" version, so that you have more warning? But isn't SHA-2 defined as a number of rounds anyway?
It’s a historical thing people used to do to guard against length extension attacks, but it’s irrelevant where it exists in Bitcoin, for example as branches in a Merkle tree where every input is of a fixed length (another hash). For Bitcoin a good portion of all the CPU time involved in verifying is just doing hashes of hashes, so it just is what it is.
It has the side effect of making some attacks where you need control of certain bytes of the input (see the MD5 collision tools) harder, because you’ve now got to find an exploit which makes it through both hashes.
'breaking' a hash can mean many different things.
Among many others, two types of attacks are:
- specific prefix/suffix data can be created to force a collision.
- a message of exactly N bytes can quickly create a collision.
Both attacks would be reported as a hash being broken.
But it's assumed to be unlikely that a well-designed hash would have a flaw that breaks the hash in both ways.
Keyword being assumed. But with good reason.
AFAIK SHA has only been broken by a variable-length suffix attack.
With sha2(sha2()) it would have to be broken both ways.
Basically, many cryptographic hashes support fixed-length hashes of variable message lengths by breaking the message into blocks, chaining their hashes* and taking the final hash.
The weakness here is if you know the length of a prefix and its hash, you can generate more valid hashes of messages that contain the unknown prefix but with custom suffixes. This is relevant if you use the hash for authentication (i.e. MAC) as it allows producing certain types of custom messages that would also validate.
However, this has largely been a non-issue for a long time now, as it takes very little tweaking of the protocol (the stuff being authenticated) to make adding suffixes useless. Double hashing is one such mitigation, because the outer hash is now working over a fixed-size input, meaning that to attack it you'd need to attack the signed message itself instead of just appending to it.
*: This approach of chaining hashes of blocks is also used in other contexts that you may be familiar with ;)
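For readers who want to see the shapes being discussed, here's a rough Python sketch of the vulnerable secret-prefix construction versus the double-hash variant. The secret and message are hypothetical; the length-extension attack itself needs access to SHA-256's internal state, so it isn't reproduced here (and in practice you'd just use the stdlib hmac module).

    import hashlib

    secret = b"hypothetical shared secret"

    def naive_mac(message):
        # sha256(secret || message): vulnerable shape. Knowing this tag and
        # len(secret), an attacker can extend `message` and compute a valid
        # tag for the extended message without ever learning `secret`.
        return hashlib.sha256(secret + message).hexdigest()

    def double_mac(message):
        # sha256(sha256(secret || message)): the outer hash runs over a
        # fixed-length digest, so appending to `message` no longer lines up
        # with the outer hash's input -- the mitigation described above.
        inner = hashlib.sha256(secret + message).digest()
        return hashlib.sha256(inner).hexdigest()

    print(naive_mac(b"amount=10"))
    print(double_mac(b"amount=10"))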
Even if you don't reuse keys you will be vulnerable the moment you do the first transaction - it will be the miner who sees your public key first. Even if you mine your own transactions you will be vulnerable, because the block could be orphaned, and anyone could then see your public key.
That's the one actual value of bitcoin / cryptocurrencies: we can be fairly sure that no one has broken the hashing algorithms used (1). The valuation of these coins is such that not just any individual, but even a state actor would just go for it.
(1) to such a degree that it would allow the attacker to create new blocks at will.
Assume 1 bitcoin is $50k; break it only once a day and that's over $15 million per bitcoin of block reward each year. I doubt many governments could resist.
I mean if you break either the hash or the signature, then there are bigger fish to fry than just Bitcoin. You'd essentially be on your way to breaking much of the crypto used to secure significantly more valuable information -- the kind of information you measure with human lives rather than dollars.
Actually, if you (or more likely a nation state actor) did break either the hash or signature, you'd be crazy to reveal that fact on something as trivial as Bitcoin. That'd be like breaking ENIGMA and just using it to publish the German weather reports lol.
That's true, it's a critical risk to Bitcoin, Ethereum, etc. The digital signature scheme protecting Bitcoin accounts uses elliptic curves, which can also be broken using quantum computers. Transactions can be forged using broken accounts.
It's fairly easy to underestimate the time required to change a non-quantum-resistant scheme to a quantum-resistant one.
To protect Bitcoin from quantum computers, the blockchain has to be forked as early as possible, with all blocks re-signed with quantum resistant digital signature schemes. Devil is in the details though.
The Doge Protocol project will fork Bitcoin and move it to a quantum resistant hybrid scheme.
This is inevitable for all coins with published public keys, which all the early coins are. So, e.g. all of Satoshi's coins. For coins where the public key is hashed, perhaps not inevitable.
Depending on how you look at it, it's either the built-in doomsday for Bitcoin, or it's a built-in multi-billion dollar bounty for exposing the existence of a quantum computer capable of cracking the private key given a Secp256k1 public key (that's the EC curve bitcoin uses).
That's why I like simple hash-based cryptographic algorithms such as Lamport one-time signatures. Hash-based algorithms are broadly believed to be quantum-resistant, and this makes sense intuitively because hashing destroys information.
The statefulness of Lamport signatures adds some implementation and usability hurdles but IMO the simplicity and intuitiveness of the algorithm makes it worthwhile.
Source: I worked on a quantum-resistant blockchain which is based on Lamport signatures and Merkle Signature Trees (for key reuse) - https://capitalisk.com/
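For anyone curious how simple the core idea is, here's a rough, textbook-style Python sketch of a Lamport one-time signature over a SHA-256 digest (no key management, and a key pair must never sign more than one message; this is not the Capitalisk implementation, just the classic scheme):

    import hashlib, os

    def keygen():
        # 256 pairs of secret preimages, one pair per bit of the message digest.
        sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
        pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
        return sk, pk

    def digest_bits(message):
        d = hashlib.sha256(message).digest()
        return [(d[i // 8] >> (i % 8)) & 1 for i in range(256)]

    def sign(sk, message):
        # Reveal one preimage per bit; the other half of the key stays secret.
        return [sk[i][b] for i, b in enumerate(digest_bits(message))]

    def verify(pk, message, signature):
        return all(hashlib.sha256(s).digest() == pk[i][b]
                   for i, (s, b) in enumerate(zip(signature, digest_bits(message))))

    sk, pk = keygen()
    sig = sign(sk, b"hello")
    assert verify(pk, b"hello", sig)
    assert not verify(pk, b"goodbye", sig)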
We're discussing key exchange mechanisms, not signatures. The distinction is important: KEMs are what we need now if QC is a real threat, because they're what enable us to protect traffic from retroactive decryption.
The encryption side of things does appear to be a lot more challenging. I like solutions which allow complexity to be side-stepped somehow but I'm not aware of anything like that for encryption.
It was designed by engineers to be quantum resistant without enough deference to mathematicians who could see that it was not resistant to more conventional approaches.
That is true only in the banal sense that any number-theoretic crypto break can be attributed to paying insufficient attention to mathematical research, but isn't true in any useful sense, since a small army of mathematicians worked on isogeny Diffie-Hellman.
> It was designed by engineers to be quantum resistant without enough deference to mathematicians who could see that it was not resistant to more conventional approaches.
I.e., they didn't take enough time and money to consult with every cryptography and mathematics expert.
The idea that encryption algorithms need to be well tested over time is why many security experts do not trust Telegram's encryption algorithm (MTProto).
Not really...? Quantum stuff is real, there are real quantum computers that have been demonstrated to really do quantum operations. They're not close to being usable to break crypto yet, but it certainly makes sense to get ahead of it.
> There wouldn't be so much effort going into bridging air gapped systems if even traditional encryption could be trusted...
These are completely different problems. Encryption just keeps information confidential. By itself, it offers no _security_ guarantees. Even the strongest encryption would be moot against a keylogger. Crypto can be (and is being) used to provide some security, like via signed code, secure processors, and the such, but security is a multi-tiered thing -- you want all the protection you can get, and like keeping data encrypted at rest, air-gapping is just yet another layer of protection.
I'm not a believer (I'm not qualified to have an opinion but neither is almost anyone else here) in PQC, but to be clear, the logic behind moving forward on PQC is straightforward: everybody acknowledges that there are no known useful QC attacks on cryptography, nor really any on the horizon, but adversaries can easily stockpile terabytes of recorded network conversations today and keep them around to break when QC attacks do work.
If you think QC attacks are 20 years away from real-world demonstrations, then conventional cryptography has a 20-year ceiling, which would be a hair-on-fire analysis in any other context. How long are you willing to bet conventional cryptography will hold out? 50 years is also too short by cryptographic standards. And 50 years is a long time. You willing to bet 100 years? I am, but, like, nobody should listen to me on this.
This is also why KEMs are a priority over signatures for PQC deployment.
A larger number of qubits allows us to do effective quantum error correction. The idea is to group multiple physical qubits into one logical qubit, think of it as redundancy.
So what's the number of logical qubits we have achieved working practically, then? Is this scalable, or is it going to require exponentially more physical qubits for each additional logical qubit?
Quantum error correction has been experimentally demonstrated for a single logical qubit, e.g. [0][1]. Even though there might be implementations of multiple such qubits, we're still very much in the "Noisy Intermediate-Scale Quantum" era.
Generally, the number of physical qubits scales linearly with the number of logical qubits.
And assuming there exists math that can't be solved easily by quantum computers, which supposedly solve all math solution-finding problems easily.
Surely it makes no sense to adopt encryption no one but a few individuals of questionable motives understand, to protect against a technology that is a long way from even proven yet. IMHO.
Anything else requires several leaps of faith that should be nowhere near "in use" encryption - research is of course a very different story, but stories like this are hardly confidence inspiring.
Wait, isn't the point of post-quantum crypto to be as good as existing crypto but also secure against known quantum attacks like Shor's algorithm and factoring? I don't think the goal is to trade off anything for defenses against quantum attacks.
If anything, these stories should be more confidence-inducing. They show that the rollout is conservative and that the system works: a PQC algorithm has a flaw, and it gets found. FWIW, the way existing traditional crypto is proven safe is pretty much the same -- get a bunch of people to work on attacks and weed out the bad stuff.
The problem they are trying to solve is how to do encryption in a world when finding solutions to any and all math problems is quick and easy.
And this article only reinforces the idea that the solutions they are coming up with are just obfuscation that is at best no harder than existing problems.
Even theoretically, quantum doesn't make _all_ math problems easy... Can't they just avoid the things (like factoring) that are potentially vulnerable and just use other math?
The way quantum computers are purported to work is that they search a problem space simultaneously for all solutions, then spit out the correct solution.
It could be a factorisation problem, or any other.
For cryptography: find x and y such that f(x, y) = z, given z.
That is what "post quantum computing" means, AIUI. It starts with x and y in all possible values of x and y, then spits out only the values that give z.
All encryption is only as strong as the difficulty of finding x and y given only z.
AIUI anyway. Well aware I could have been misled - FUD.
I think it also has to be a problem that is able to be constructed in a way where incorrect solutions destructively interfere with each other while correct ones don't.
Think of it as being able to simultaneously calculate a bunch of inputs but only being able to report the "sum" of those calculations to you. So to make it useful you'd need to be able to reconstruct problems such that the incorrect answers when computed cancel each other out. Otherwise your desired answer would just be mixed in with garbage and you won't be able to get anything useful out.
It's not actually that easy to make useful quantum algorithms that work and there's only a handful of them around...
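A toy illustration of that cancellation in plain Python (no quantum library, just a simulated single-qubit amplitude vector): applying a Hadamard gate twice returns the qubit to |0>, because the two paths into |1> carry opposite amplitudes and cancel.

    import math

    # 2x2 Hadamard gate acting on a single simulated qubit's amplitude vector.
    H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
         [1 / math.sqrt(2), -1 / math.sqrt(2)]]

    def apply(gate, state):
        return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

    state = [1.0, 0.0]        # start in |0>
    state = apply(H, state)   # equal amplitude on |0> and |1>
    state = apply(H, state)   # the two contributions to |1> cancel out
    print(state)              # ~[1.0, 0.0]: measuring gives |0> with certainty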
Yes, and that is a solved problem for all encryption already; you need that to find any key in encryption.
What makes that hard now is that the search takes so much time.
In a theoretical post-quantum world that search won't take much time.
So the only way I see that post-quantum encryption can be secure is if it is impossible to guess a solution and test it for correctness - which, for whatever reason... never seems to get addressed.
> My super cynical view is that the whole genre of "quantum safe" cryptography is being promoted to try and encourage adoption of weak encryption
Not a cryptographer, but surely if you're worried about this then you could first encrypt your data using classical algorithms and then encrypt the output of that via the PQC algorithms, to produce a ciphertext that is at least no less safe than the classical encryption alone.
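A rough sketch of that layering, using the third-party `cryptography` package's Fernet for both layers (the keys here are freshly generated stand-ins; in reality the inner one would come from a classical exchange and the outer from a PQC KEM):

    from cryptography.fernet import Fernet

    classical_key = Fernet.generate_key()  # stand-in for a key from, e.g., ECDH
    pqc_key = Fernet.generate_key()        # stand-in for a key from a PQC KEM

    # Inner layer: classical encryption of the plaintext.
    inner = Fernet(classical_key).encrypt(b"attack at dawn")
    # Outer layer: the newer, less-battle-tested scheme wraps the inner ciphertext.
    outer = Fernet(pqc_key).encrypt(inner)

    # Decryption peels the layers in reverse; an attacker has to break both.
    plaintext = Fernet(classical_key).decrypt(Fernet(pqc_key).decrypt(outer))
    assert plaintext == b"attack at dawn"

Deployed hybrids usually feed both shared secrets into one KDF rather than nesting ciphertexts, but the security intuition is the same: the attacker has to break both parts.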
Quantum-safe crypto isn't FUD. NIST's steadfast refusal to specify a dual system, especially given their historical laundering of NSA back doors, is super questionable, but there exist (at least one that I know of) cryptosystems that have no exploitable bias. The problem is the impractically large key sizes. AFAICT a lot of PQC work is trying to reduce the key sizes to something reasonable.
Horseshit. It's literally not NIST's job to design a "dual system"; the project was to standardize PQC constructions, not whole protocols. Everybody that deploys PQC anywhere is going to deploy "dual systems". This complaint is like claiming NIST is corrupt because they didn't standardize an authenticated key exchange along with SHA-3.
It is literally NIST’s job to define the standards that people are meant to use.
What you’re saying is that NIST not considering a dual system standard is fine because no one would consider relying solely on the standardized PQC algorithms and would obviously implement their own version of a dual system, only with less understanding of potential pitfalls or analysis for weaknesses.
No. Once again: the NIST PQC competition is a project to standardize post-quantum cryptography constructions. It's not a protocol competition, any more than the AES and SHA-3 competitions were.
This is literally spelled out on the competition page. I'm having trouble seeing how anyone could have any confusion about this. It literally says: do hybrid systems if you want, that's outside the scope of this competition.
How would it even have made sense to pursue hybrid systems in this competition? Like how would that have actually worked?
It really isn't, when the annual budget states spend on crippling cryptographic standards dwarfs the salaries and tuition of everyone in the global academic maths community combined.
There is seemingly random interconnectedness in math, meaning that governments probably can't just throw money at some problem and force themselves much deeper than academia. For example you can hire 100 number theorists and ask of them to solve factorization (stupid example, i know), but it just might happen that the key insight to solving it comes from some random dude working in some seemingly disconnected problem in combinatorial algebra or something.
Does Tor use SIKE/SIDH? The proposals I've seen for PQC Tor all seem to run a PQC construction alongside a conventional one (the only sane way to do this right now), and so, no, a break in the PQC wouldn't let you recover sessions.
> One reason SIKE’s vulnerability was not detected until now was because the new attack “applies very advanced mathematics—I can’t think of another situation where an attack has used such deep mathematics compared with the system being broken,” says Galbraith. Katz agrees, saying, “I suspect that fewer than 50 people in the world understand both the underlying mathematics and the necessary cryptography.”
The smartest people in the world are always Americans. No wonder the rest of the world can't make things on their own, and always steal American technology!
It's a little unlikely. It's a code-able exploit with a big payoff, which is right in the wheelhouse, but then there's this that Steven Galbraith had to say about how the exploit works:
*What is this magic ingredient?*
It is a theorem by Ernst Kani about reducible subgroups of abelian surfaces.
*Is there a simple way to explain the magic ingredient?*
Nope. Go learn about Richelot isogenies and abelian surfaces.
As I understand it, even by number-theoretic cryptographic standards, the math here is abstruse. The challenges I think have done pretty well sticking to things where writing the exploit pays off with good intuitions. I guess "don't reveal auxiliary torsion points when exchanging details of an isogeny graph walk" is a useful intuition, maybe.
Even before being broken, the abstruse mathematics is one of the reasons to not go with SIKE or Rainbow. It’s not surprising they are broken.
NTRU is the easiest of the NIST PQC finalists to understand, and will probably beat Kyber because even a relatively new-to-cryptography programmer will be able to understand it and implement it.
You can see why people love it, though; the nuts and bolts of SIDH are extremely elegant. Like, it's a neat trick. I don't understand Richelot isogenies and abelian surfaces and can't speak to the elegance of the break; it's the break that exceeds the threshold for "abstruse" (of course, the abstruse mathematics of things that break cryptosystems do make the underlying cryptosystem abstruse! you have to grok them to use it!)
Can someone remind me why Merkle trees of Lamport signatures aren't the solution for post-quantum asymmetric signing? Sure, the signatures are huge, but they're secure unless you can trivially invert the hash function.
IIUC, NIST did not select a winner for post-quantum public-key encryption, but rather winnowed the field to 4 potential candidates, and this is one of them.
No, it selected the CRYSTALS-Kyber KEM and then proceeded for an additional round to consider alternatives/understudy KEMs, of which SIKE was one potential one.
No, but an insider can leak your security-through-obscurity, at which point any publicly known ramifications that were overlooked are easily exploited.
I think the beginning of the title led me to believe it would say "10-Year-Old Kid", as I was hoping for some sort of "emperor has no clothes" situation, or brilliant amateur insight.
Yet I still think there's a good "look beyond your strengths" lesson here.
Probably it's because of the word "hacked". I mostly saw it used to mean the [human] activity of designing an approach, rather than executing code. Computers do not hack. (Unless we speak of AI, maybe.)
What is the possibility that cryptowallets can succumb to these kinds of attacks? How is it possible that Satoshi's wallet has still, after so many years, not been hacked using a brute force mechanism?
It's because the search space (the number of possible keys to guess) is astronomical: 2^256.
If you had a billion people, each person owning one billion computers, each computer capable of guessing a billion keys per second, then a billion years would still not have exhausted one-billionth of all the possible keys.
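A quick back-of-the-envelope check of that claim in Python:

    guesses_per_second = 10**9 * 10**9 * 10**9   # people x computers x guesses/sec
    seconds = 10**9 * 365 * 24 * 3600            # a billion years
    keyspace = 2**256

    fraction_searched = guesses_per_second * seconds / keyspace
    print(fraction_searched)   # ~2.7e-34, i.e. far less than one billionth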
That makes sense. So his wallet will be broken in about 10 years plus 10 years marking the time for advancement of CPU and time for brute force to spend calculating with the advanced CPU.
I'm sure many people are trying. On a slightly related note - I wonder what the market effect would be on bitcoin (and possibly crypto as a whole) if anyone managed to transfer money out of the wallet, would it crash the currency?