Did Schnorr destroy RSA? Show me the factors (sweis.medium.com)
287 points by sweis 48 days ago | 207 comments



Isn't it typical to release the paper first, for peer vetting, ahead of any actual working proof?

It seems like the only reason for the "put up or shut up" reactions is that "destroys RSA" comment in the submitted abstract...which isn't in the actual paper.


The sum-of-three-cubes announcement was made with just a tweet.

https://twitter.com/robinhouston/status/1169877007045296128

It's easier to drum up support for your paper when you have a quick way to prove to the community of mathematicians that your results are golden.

EDIT: The original webpage: http://math.mit.edu/~drew/sumsofcubes.html

As you can see, the sum-of-cubes announcements are very terse. Ultimately pointing to the following link: https://share.cocalc.com/share/900eec7e-0710-4e2f-a03a-dba01...

That kind of website / tweet is a "drop the mic" moment. It really makes people pay attention.


So this is why I’ve suddenly been getting new interaction on that old tweet! I’m glad I saw this comment, because I was quite confused.


That's not how science works. Yes, if your algorithm is simple enough that you can create an implementation, then it is good to produce a working version. But implementing the algorithm may be more complicated than writing the paper. That doesn't mean the implementation is impossible.


That's exactly how maths works, specifically in the cases where the claim is easily backed by a demonstration. A famous example:

https://en.wikipedia.org/wiki/Frank_Nelson_Cole

If you 'destroyed RSA' through better factorization, all you have to do is start publishing factors of RSA challenge numbers.

Matthew Green has a fun thread about other ways to approach this along with an interesting "real talk about factoring" sidebar by Nadia Heninger:

https://twitter.com/matthew_d_green/status/13669500931784990...
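Cole's demonstration is the template here: a claimed factorization is verifiable with a single multiplication, even though producing it famously took him "three years of Sundays". A quick sketch of the check, using his 1903 factorization of the Mersenne number 2^67 - 1:

```python
# Frank Nelson Cole (1903): 2^67 - 1 is composite.
# Verifying the claim is one multiplication; finding the factors
# took "three years of Sundays".
N = (1 << 67) - 1            # the Mersenne number M67
p = 193_707_721
q = 761_838_257_287
assert p * q == N
print(f"{p} * {q} = {N}")
```

The same one-line check works for any RSA challenge number: anyone claiming to have "destroyed RSA" can settle it by publishing two integers.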


> That's not how science works.

This isn't science, it's math. As the article mentions, there is an 862-bit RSA challenge that hasn't been factored yet. Factoring it should be possible on commodity hardware if the claims in the paper are true. So why not just do it? The test of success is simple: either you win the challenge or you don't.


For a skilled researcher it may be an order of magnitude easier to write the whitepaper than to code the implementation.


There are example factorizations around page 11 if I'm reading this correctly. Haven't run them through yet because I'm refreshing my atrophied linear algebra knowledge and walking through some of the source papers, so some work is in there.


> But it may be more complicated to implement the algorithm than writing a paper

Then why would I trust it? You don't need to write code, you need to write an example.

As Linus Torvalds says: talk is cheap, show me the code

Academia is full of "paper scientists" that put out papers but produce nothing of value.

Academia is also full of postgraduate students who would be more than willing to collaborate and put proof-of-concept code alongside the paper.


If you really think your paper destroys RSA, I think there are ethical questions the authors must decide before publishing it.

In particular, I think the right process would be:

1. Give some brief description of the result (eg factoring numbers in O(...)), and some proof (eg a factorisation of the next rsa semiprime, possibly more) that convinces people that your claims are true

2. Wait a while for people to have the chance to not be burned

3. Publish the paper

Instead, the authors seem to be going for:

1. Publish the paper with a provocative abstract.

2. Wait to see who implements the algorithm first.

It doesn’t seem the best idea to me, but what do I know?


Unlike a zero day, there still remain a number of important factors (haha) needed to actually break a large number. But crypto systems are special because they rely on trust. The mere sign of weakness is sufficient to kill that trust. A sufficiently resourced state actor may even be ahead... we don't know.


You're going to be killed or kidnapped between steps 1 and 3.


The pursuit of knowledge should not be subject to any annoying effects said knowledge may have on people hedged against such knowledge becoming available. That’s an anti-liberal recipe for a rather dark society. I don't see the point in tone policing people generating knowledge just to remind them that sometimes knowledge is inconvenient.


"tone policing"? What do you think that phrase means? Who was talking about "tone"?

Do you believe in "responsible disclosure" [1] of security vulnerabilities? How does your stated philosophy apply or not apply to ethics around disclosure of discovered software security vulnerabilities? Is that different?

[1] https://cheatsheetseries.owasp.org/cheatsheets/Vulnerability...


Technically, the poster you replied to could be said to have been tone policing with the assertion that the abstract was deliberately "provocative".

I mean, I get it, it's not the most straightforward mental leap, but I can understand the sentiment.

And as far as responsible disclosure goes, no, the responsible thing to do is to notify everyone at once. Keep in mind if it is right, this means that nation state actors have just been equipped with a roadmap to potentially cracking a lot of banked ciphertext, probably faster than anyone else.

You don't sit on that kind of thing, even if it does mean some corporate actors get burned.

If the only thing saving your rear was an RSA key... Take notice: the clock may have just been significantly advanced. Be you nation-state, corp, or someone who'd just prefer to remain in the shadows.

Suffer thee not information asymmetry to live lest you carry the blood of those you sacrifice on the altar of your limited disclosure. It also hedges against you getting disappeared and suppressing whatever other people you shared it with that remain to keep something so relied upon from being realistically entertained.

I mean, cmon, how long has everyone been joking they'd hate to be the person who discovered how to break RSA, because we all know it would lead t

<SIGNAL_LOST>


Sorry that is such a bullshit statement.


Can you explain?


Pursuit of knowledge can be noble, but it can also be horrific. Granted, this isn't horrific, but if RSA is literally "destroyed" then it has potential to do harm, even if we stop using it immediately (which is expensive by itself). Ethics governs good science, and math is no exception


>I don't see the point in tone policing people generating knowledge just to remind them that sometimes knowledge is inconvenient.

If you found a simple way to kill all of mankind, that could be mitigated by waiting a week to publish while safeguards were implemented, is it wiser to publish immediately and risk someone killing all of mankind or to notify proper groups and then publish later after it won't kill everyone?

Maybe there's some nuance in these things. Ignoring effects of knowledge is not wise.


For something of this magnitude, I think the expected behavior would be to delay the release of the paper until the ecosystem had time to adapt. To prove the paper is valid (and to assert precedence) you would offer to factor several "nothing up my sleeve" numbers -- like sha512("schnorr1") + sha512("schnorr2").

As it is, if the algorithm presented is valid then this potentially compromises currently operating systems.
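The "nothing up my sleeve" construction above can be sketched as deriving challenge integers deterministically from public seed strings, so the challenger can't have planted numbers whose factors they already know. The seed strings and the 1024-bit size here are illustrative, not from any real challenge:

```python
import hashlib

def challenge_number(seed: str, bits: int = 1024) -> int:
    """Derive a `bits`-long integer from a public seed by concatenating
    SHA-512 outputs (a sketch, not any real challenge format)."""
    out = b""
    counter = 0
    while len(out) * 8 < bits:
        out += hashlib.sha512(f"{seed}/{counter}".encode()).digest()
        counter += 1
    n = int.from_bytes(out, "big") >> (len(out) * 8 - bits)
    return n | (1 << (bits - 1))    # force the full bit length

n1 = challenge_number("schnorr1")
n2 = challenge_number("schnorr2")
assert n1.bit_length() == 1024 and n1 != n2
```

Numbers built this way are almost never semiprimes, so a real challenge would ask for any nontrivial factor rather than a full RSA-style factorization.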


I do not agree that so-called “responsible disclosure” would or should be the expected behavior. I do understand how someone accustomed to corporate bug bounties and private security mailing lists may think so, though. Full disclosure is a perfectly reasonable strategy especially when we’re operating in the academic realm. Industry always takes years to catch up to academia anyway.


If this paper makes it possible to produce a working RSA cracker in a month, much of high-value IT infrastructure is under imminent threat.

Yes, you can replace your SSH keys with elliptic ones, and maybe adjust your TLS accepted algorithms. Even this is not always easy or cheap.

But other things that may rely on RSA (or triple RSA) may have trouble upgrading fast, and upgrading them at all is going to cost a lot.


Yeah, but the industry has also known for DECADES that RSA will likely be broken eventually. Not having reacted yet is not the researcher's fault.

We know that quantum computers can break it too, yet nobody really acts on it with any urgency. If there is suddenly a breakthrough and we reach that state within a year, there will be no time to adapt either.

It all boils down to the general corporate attitude of not fixing catastrophic problems without precedence. We see the same with climate change and once it hits hard, it will be too late to adapt.


I asked a few years ago (I don't remember the specifics) what the recommended post-quantum crypto was, and basically the answer was that there wasn't any vetted, proven post-quantum crypto. So I don't think you can blame the industry here.


There are some promising experiments. All of them are much worse than the algorithms we use today (including RSA) in practical terms, except that Shor's algorithm doesn't apply to them, and so they specifically fulfil the criterion of post-quantum public key crypto.

[For symmetric encryption quantum computers would only matter if they were pretty fast/cheap and we didn't have ready-to-go 256-bit symmetric crypto, but we do]

OpenSSH actually has an implementation of a reasonable contender for SSH. Google have experimented (in Chrome builds) with some of these contenders for TLS too. What you would likely want to do - since by definition these are relatively untried algorithms and so might be unsafe against adversaries with an existing not-at-all-quantum computer - is combine with an existing algorithm, most likely elliptic curve based, but RSA would be possible, under a "swiss cheese" model where you're only dead if an adversary penetrates all the layers.

But like I said, much worse. Given that there aren't any suitably large quantum computers (and it's always possible that we'll eventually figure out we just can't build usefully large quantum computers, just like we eventually found out that while you can travel faster than sound you can't travel faster than light) it would make no sense to deploy this today, even though it continues to make sense to do research Just In Case.
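The "swiss cheese" combination described above can be sketched as deriving one session key from several independently obtained shared secrets, so an adversary must break every layer. The two "shared secrets" below are random stand-ins, not real X25519 or post-quantum KEM outputs:

```python
import hashlib
import hmac
import os

def combine_secrets(ss_classical: bytes, ss_pq: bytes, context: bytes) -> bytes:
    """HKDF-like sketch: mix two shared secrets into one session key.
    Intuition: an attacker who recovers only one input learns nothing useful."""
    return hmac.new(ss_classical + ss_pq, context, hashlib.sha256).digest()

# stand-ins for an elliptic-curve shared secret and a PQ KEM shared secret
ss_ecdh = os.urandom(32)
ss_kem = os.urandom(32)
key = combine_secrets(ss_ecdh, ss_kem, b"transcript-hash")
assert len(key) == 32
```

Real hybrid deployments (like the OpenSSH and Chrome experiments mentioned above) use carefully specified KDFs, but the layering idea is the same.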


Wouldn't surprise me if there are classified quantum computers crunching numbers as we speak tbh.


I would be very surprised. That would involve keeping some huge leaps in basic research under wraps.


Could you expand on "much worse?" Are they slower, are they harder to implement?


Just to expand the other answer you got.

Slower, and in a different complexity class from the classical ones (e.g. O(n^2) instead of O(n); I don't know what exact complexities the top candidates have right now, as a function of key length).

Larger messages, in that a signature requires tens or hundreds of MB instead of a few kB.

Larger public keys, again hundreds of MB instead of a few kB.

Larger private keys, as in a few to tens of MB instead of half a kB to a few kB.

But I guess the most relevant "much worse" is that people have much less confidence that those algorithms are secure, and they are complex enough to hide huge traps that make implementing RSA look like a walk in the park.


You seem to be way outdated. A lot of work has been put into reducing key sizes. Dunno about signatures, but for key exchange SIKE (https://sike.org/) uses keys of a few hundred bytes, comparable in size to RSA.


Sure, SIKE is not so much bigger than existing approaches, but it is much slower than they are.

Some of the other choices aren't so much slower but are far bigger, for example McEliece systems.

There's lots of opportunities to make different trade-offs - at least if all of them survive a bit more scrutiny by smart motivated opponents - but they're all generally worse than what we have now - except that they resist Shor's algorithm.


That's true, a quick Google search tells me that an optimized SIKE implementation is ~30x slower than an optimized X25519 implementation. Still, according to this document, running time of SIKEp751 is ~25ms: https://sike.org/files/SIDH-spec.pdf

I don't think that's a problem for end users, you are not constantly generating keys. It will be a problem for servers handling thousands of connections per second, but I'm sure dedicated HSMs will appear if there is a need for them.

In any case, I'm not an expert in crypto, just a poor sysadmin-by-accident who likes reading about the latest security developments so the bad guys don't pwn my servers. And as you said, engineering is always full of trade-offs, let's see what the NIST PQC standardization process will decide.


slower

larger messages

larger public keys

larger private keys


Not only quantum computers. Every cryptographic method relies on the fact that there is no easy way to decide NP problems. And we don't really know whether P != NP (in fact, according to polls, about a quarter of researchers disagree), or, in case there actually is a polynomial algorithm, whether its exponent will make it intractable in practice.

So there might be DOOMSDAY in the future, where all cryptography will cease to work, because somebody just figures a way to decide NP problems quickly enough.


>Yeah, but also the industry knows for DECADES that RSA will likely be broken eventually.

Interesting. Reference?


It's generally accepted for almost all crypto protocols that it's going to be broken eventually (a few rare cases are provably secure, one-time pads and not much else).

If nothing else, quantum computers should break RSA in particular (the algorithm is already known and just waiting for hardware) and the writing has been on the wall there for a long time.


> It's generally accepted for almost all crypto protocols that it's going to be broken eventually

Just like it was generally accepted that god exists. Those claims are of similar strength.

> If nothing else, quantum computers should break RSA in particular

Quantum computers with enough qubits do not exist and it's absolutely not obvious whether they will exist at all.


Cryptographic protocols have attacks designed against them all the time.


> triple RSA

Yeah, no. 3DES (Triple DES) is/ was a thing, but Triple RSA is not.


That was the joke.


> If this paper allows to produce a working RSA cracker in a month

The blog post claims that people have been trying to reproduce his results for two years, though.


Note that (despite the ill-considered claim about RSA in the abstract) this isn't security research. It's a maths paper in a very important area of number theory. It doesn't disclose a patchable flaw in RSA---it claims that RSA is based on an unproven conjecture that turns out to be false. That RSA is vulnerable to fast prime factoring has been known since it was first implemented. That's not fixable, so delay serves little purpose, especially with a (if it turns out to be true) seminal result in number theory.


If you find something that can break all cryptography in the world, then I think your best option (even for your personal security) is to release everything publicly.


A one time pad can not be broken.


I always enjoy the reaction when I tell students that we've had provably unbreakable encryption since the 19th century.


And yet any time someone proposes to actually implement it, the response is always negative: "don't roll your own encryption" or "sharing and storing the pads is infeasible" or some such.

Seems like we should have figured out a way by now to use one-time-pad encryption by default for critical paths, even if that requires new industries to distribute pads and guarantee their security.


OTP is perfectly secure, but (for many cases) perfectly useless. Transmitting the key safely is exactly as hard as transmitting the message safely.

They're useful in exactly one situation: when you have a temporary secure communication channel, and a long-term insecure channel. Then you can use the temporary channel to pre-share a lot of key material (say, a 1TB micro SD card carried covertly) and then use that for future messages. But that scenario is very rare.
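For concreteness, the pre-shared-pad scheme is just XOR with truly random key material that is used once and destroyed. A toy sketch (with os.urandom standing in for a properly couriered pad):

```python
import os

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """XOR message with pad; the same function encrypts and decrypts."""
    assert len(pad) >= len(data), "the pad must never be shorter than the message"
    return bytes(d ^ p for d, p in zip(data, pad))

msg = b"attack at dawn"
pad = os.urandom(len(msg))       # must be truly random and NEVER reused
ct = otp_xor(msg, pad)
assert otp_xor(ct, pad) == msg   # round-trips with the same pad
```

Reusing even one byte of pad breaks the security proof, which is exactly the key-distribution burden described above.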


The rarity of that scenario is dictated by there rarely being a need for the security it offers. But that, in turn, is a function of our knowledge of cryptography, and may change over time. Who knows; perhaps someday we'll see something like what Vinge described in AFutD:

"Our main cargo is a one-time cryptographic pad. The source is Commercial Security at Sjandra Kei; the destination is the certificants' High colony. It was the usual arrangement: We're carrying a one-third xor of the pad. Independent shippers are carrying the others. At the destination, the three parts would be xor'd together. The result could supply a dozen worlds' crypto needs on the Net for ..."


Yup, that bit from Fire Upon the Deep is exactly what I was thinking of when I mentioned that bit about industries to safely transmit OTP data.

I don't really think the parent comment understands that there are creative ways around the difficulty of sharing secure pads. We don't need it for all data; but I think Vinge does hint at a totally viable means of sharing, and a scenario in which it's practical.


Quantum computing may eventually force us to move to OTPs. Of course it's going to be a pretty long time before we'll have quantum computers that can work across an 800-bit field.


It won't. Symmetric ciphers are holding up just fine against quantum computers.

Key exchanges would be annoying, but they are even more so for one-time pads.

But it will never come to even that: there are plenty of quantum-safe asymmetric cryptosystems around.


Yeah. I take up the key exchange problem (and Diffie-Hellman) as well as the difficulty of achieving true randomness.


I've made MATLAB scripts that pull true randomness out of the less-significant figures of the output of an audio microphone listening to static... it's not impossible.

If we were serious about it, sharing flash drives and even using couriers could make it work pretty well.


I haven't read the paper or anything, but if the expected adaptation is to drop support for RSA, no reasonable amount of time will make it a seamless transition.

There are so many devices and pieces of software stuck on RSA that a heads-up of, say, 5 years would still result in a colossal mess; may as well have the mess now.


RSA has published the same set of "come and break me" keys for years, decades I think. Breaking those public (in both senses!) keys would be a good start.


You probably understand how serious this is, so many people are going to become very emotional as this strikes at the very foundations of the Internet and digital security as we know it. The reactions I've seen so far do seem very emotional and this will only become much, much worse if there's a PoC which is produced.


What does emotion have to do with it? For many years, every time someone claims to have broken RSA the response is the same: here is a test case you (the alleged RSA breaker) should be able to crack if your claims are correct. Put up or shut up. That seems like a fair, just-the-facts response.


Internet and digital security? This strikes at the very foundations of number theory.


Ehhhhhhh. I'm not sure I'm aware of anything in number theory that would substantially change if a fast factoring algorithm were discovered, outside of its applications to cryptography. It'd be very likely that a fast factoring algorithm exposed some unusual structure about the natural numbers, I guess?


I was exaggerating for effect, but yeah. The existence of a polynomial-time prime factoring algorithm suggests the existence of a useful predictive pattern in the distribution of primes. That would break the Twin Prime Conjecture and possibly refute Riemann. Undermining those starts tearing the foundations apart. I'm almost certain that won't have just happened.


We already know there is plenty of structure in the distribution of primes, so I don't see why finding some that can be exploited by a clever algorithm makes that much of a difference to foundational concerns. In fact, I'm pretty sure that even a proof of P=NP, which obviously subsumes fast factoring, wouldn't have any intrinsically interesting non-algorithmic consequences for proofs in number theory that don't directly assume factoring is hard (but this could just be ignorance on my part); all the supposedly absurd consequences people point to like the PH collapsing are more or less restricted to computer science.


> is that "destroys RSA" comment in the submitted abstract...which isn't in the actual paper

I think it is - https://eprint.iacr.org/2021/232.pdf


Ah, I see. It's been removed in a newer revision of the paper. https://www.math.uni-frankfurt.de/~dmst/teaching/WS2019/SVP9...


The newer 12-page version on the preprint server has a PDF creation date of 3/3/2021, 11:32:56 AM, created with pdfeTeX-1.30.4.

https://eprint.iacr.org/eprint-bin/getfile.pl?entry=2021/232...

The previous (reportedly wrongly uploaded) version is from 12/5/2019, 9:10:13 AM created with pdfeTeX-1.30.4.

https://eprint.iacr.org/eprint-bin/getfile.pl?entry=2021/232...

The university website version is from 3/5/2020, 12:00:19 PM created with pdfTeX-1.40.15.

These dates & times are MM/DD/YYYY & CET.

A co-editor of the Cryptology ePrint Archive confirmed the submission on twitter:

https://twitter.com/Leptan/status/1367103240228261894


Other way around I think. Your link is an old version of the paper. The one on eprint was just updated today with a version of the paper that adds the "destroys RSA" line and removes the "work in progress" line (put it in the wayback machine to see the version that was there yesterday without the claim of destroying RSA)


That times out for me, I guess www.math.uni-frankfurt.de is now getting more attention than usual ;-)

Here is a version in the Google cache, it has an old date on it "work in progress 04.03.2020" and does not contain the "destroys RSA" sentence: https://webcache.googleusercontent.com/search?q=cache:E0L-S3...


This is math. The working proof _is_ the paper. It just takes a long time to refute if there’s a subtle error. “Putting up” ISN’T proof but it will sure get a lot of important people to drop what they’re doing and check the paper faster.


I dunno, isn't a demonstration of a few factors proof that someone somewhere has broken RSA, or has a machine much more powerful than we'd expect to exist? It's not proof that any of the math in the paper is correct, but it doesn't have to be. Even if there's a subtle error in the proof-as-written it doesn't matter if the implementation actually works.


All that is true.

But if the author of the paper (or even someone else with a credible reputation) were to publish factors and say they used this method, then it's useful evidence.


I'd agree with that if he had merely tweeted about RSA. However, he put the claim to "destroy RSA" in the abstract. Claims made in the abstract do need to be substantiated, but RSA isn't discussed in the main text at all.


"Putting up" isn't proof that an RSA decryption algorithm exists, but it is proof that someone has found a way to reverse RSA either way.

The latter is much more important.


This is math claiming an algorithm that is faster.

For an algorithm, asking for results is not that weird. It certainly fits within the purview of a paper. Moreover, it would be really strong evidence for the claims made in the paper. It is much easier to check than the proofs.


I haven't looked into the details of the speed improvements... it's often the case that big-O improvements in complex problems come with bigger constant factors. It's entirely reasonable that this new algorithm is slower for factoring 384-bit numbers but faster for 4096-bit numbers.

I wouldn't be surprised if a demonstration that pushes the current publicly known record for largest factored RSA modulus costs hundreds of thousands or millions of dollars even with this new algorithm, and the algorithm is also slower than other methods for, say 384-bit RSA.

It may be difficult even for Claus-Peter Schnorr to get that kind of budget for a demonstration before his paper gets traction.


How broken does this claim RSA is? SHA-1 was known to be broken for a long time before an actual collision was produced, for example.[1]

[1]https://en.wikipedia.org/wiki/SHA-1#Attacks


An attack is anything that makes it take less than a brute-force effort (of 2^80 operations). A 2^63 effort was really expensive in 2007. But by 2015 computation was maybe 2^5 times cheaper, and there was an attack that was 2^5 cheaper.

I believe this breakthrough could be quite a bit bigger, because it claims to change the cost from exponential to polynomial, so the speedup is likely a much bigger change.


The whole paper is linked, https://eprint.iacr.org/2021/232.pdf


And the article says it would take around a thousand CPU core years to break 800-bit RSA. Maybe Schnorr needs to publish first to get the budget for that.


> the article says it would take around a thousand CPU core years to break 800-bit RSA

That's for previous algorithms, not the one described in the paper.


He doesn't need to demonstrate it with an 800-bit number. Given the claims made about how much faster this is, the effect should show up even in fairly small numbers.


That can be solved within hours on a GPU cluster... Looking at a single core is usually not a great idea when you have billions of cores available.


Only if your algorithm is conducive to parallelism based speedup. Amdahl's law still applies.

The paper mentions some gains being possible through parallelism with one of the algorithms their work is based on, but also mentioned most prior art is not effectively parallelizable across discrete machines.


The factors could be included in the manuscript as an example.


It is in the actual paper.

Also, without that sentence nobody would be trying to read the paper, so props to Schnorr who understands how to create the buzz :P


Yep. A paper has to substantiate any claims made in the abstract, and this failure already undermines my confidence that the result is correct.


Here we have Léo Ducas testing Schnorr's new method in Sage: https://github.com/lducas/SchnorrGate

Apparently, "[t]his suggest that the approach may be sensible, but that not all short vectors give rise to factoring relations, and that obtaining a sufficient success rate requires much larger lattice dimension than claimed in [Sch21]."


CP Schnorr is emeritus professor from Frankfurt university. He is respected for his work in cryptography. He has, pun intended, nothing to prove but still works and furthers research.

Yes, claiming that "this breaks RSA" is bold, but this implementation suggests there is some advance toward it in the paper.

Therefore, signaling that this is a "scandal" via the suffix "gate" seems inappropriate.

Apart from that kudos for the implementation to Ducas!

Calling it the "Schnorr attack" would imply that the outcome of it is still uncertain. And it also would sound way cooler ;)


I recommend you contact Ducas to tell him about your concerns directly. I do not know him personally as I first heard about this from his public Twitter account: https://twitter.com/DucasLeo

Just to make sure you get Ducas's main argument, I quote him here again: "Personal study (unfortunately, never written down cleanly) of this approach suggested me that this approach requires solving SVP in dimensions beyond reasonable, leading to a factorization algorithm much slower than the state of the art. My impression is that this is the consensus among experts having spent some time on it as well."

So it seems like the conclusion is clear-cut contrary to what you were suggesting.

Also wouldn't the name "Schnorr attack" lead to people thinking of attacks on Schnorr signatures instead?


Good point on the signatures.


This take is rather naive. Those RSA factoring records were set by a large international team of researchers, using well-established algorithms and decades of work on implementing those methods as fast as possible.

The blog post says the paper mentions 8.4e10 operations for factoring, but I can't find that number in the paper anywhere. The post then states: "The 800-bit claims would be 36 bits of work." I don't know what that means.

[edit]: the numbers are in the new version (https://eprint.iacr.org/2021/232). I was looking at the old version uploaded yesterday.


> The post states that 800-bit claims would be 36 bits of work. I don't know what that means.

From the article: "Schnorr’s paper claims to factor ... 800-bit moduli in 8.4·10¹⁰ operations"

2^36 ~= 8.4·10¹⁰, so I guess "36 bits of work" means 2^36 operations. Analogous to how a password with 36 bits of entropy would require 2^36 guesses. My first time encountering the phrase "bits of work" as well, though.
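The conversion is just a base-2 logarithm of the claimed operation counts (figures taken from the paper's abstract):

```python
import math

ops_400 = 4.2e9   # claimed operations for factoring N ~ 2^400
ops_800 = 8.4e10  # claimed operations for factoring N ~ 2^800

print(math.log2(ops_400))  # about 32 "bits of work"
print(math.log2(ops_800))  # about 36.3 "bits of work"
```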


It's in the abstract:

>Our accelerated strong primal-dual reduction of [GN08] factors integers N ≈ 2^400 and N ≈ 2^800 by 4.2·10^9 and 8.4·10^10 arithmetic operations.


Increasing the length by a factor of 2^400 only increased the amount of work by a factor of 20? Staggering, if true in general.


Yeah, that's what got me. Doubling the length of the key only requires a single order of magnitude more work? If that turns out to be true, I'm going to need to revise my beliefs about how the universe works. In particular, information theory and thermodynamics, because multiplying two numbers together doesn't preserve information about what the factors were. Or at least pretty much everyone thought so. (Caveat: if the values of primes turn out to have a predictable pattern, that could provide the needed information. However, that would mean that the Riemann Hypothesis is false, and that'd be an even more astounding result.)


Thing is, RSA keys are rather 'sparse' because they are the product of two primes. There aren't that many primes, so there aren't that many numbers that only have two (proper) divisors.

Hence, if you look at the strength of the currently best-known attack on RSA keys, you see that the key strength grows quite slowly as the keys get larger. This is purely from how sparse prime numbers are. From [1], which quotes NIST in 2015, we see (both columns are in bits):

    Strength  RSA modulus size
      80        1024
     112        2048
     128        3072
     192        7680
     256       15360

> Because multiplying two numbers together doesn't preserve information about what the factors were. Or at least pretty much everyone thought so.

Technically the information is still there; it was just thought to be very hard to extract. This paper claims an easier way to extract it.

[1] https://crypto.stackexchange.com/questions/8687/security-str...
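Those NIST figures track the heuristic cost of the general number field sieve, L_N[1/3, (64/9)^(1/3)]. A rough sketch that reproduces the shape of the table (the constant and the missing o(1) correction mean it lands within roughly ten bits of the NIST values, on the high side):

```python
import math

def gnfs_bits(modulus_bits: int) -> float:
    """Approximate log2 cost of GNFS: L_N[1/3, (64/9)^(1/3)].
    A heuristic sketch, not the NIST estimation methodology."""
    ln_n = modulus_bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

for bits in (1024, 2048, 3072, 7680, 15360):
    print(bits, round(gnfs_bits(bits)))
```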


Thanks very much.


Actually, you're only increasing the length of the number by a factor of 2, since 2^400 is a 400-bit number.

If true, it's still leaps and bounds ahead of anything we have today, though.


Fair point. "Increasing the key [not the length of the key] by a factor of 2^400".


2^36 "operations" can still take a lot of time if each operation is multiplying two giant numbers, unless the meaning of "operation" is somehow normalized to mean e.g. 64-bit integer operations.


> 2^36 "operations" can still take a lot of time if each operation is multiplying two giant number

It took me 3.3 years of actual computation time to do about 2^46 multiply+modulo operations on 2048-bit numbers on a Core i7. 2^36 operations on 2048-bit numbers should be doable in a day on an eight-year-old CPU.

P.S: that was on a single core, as the problem I solved was explicitly constructed not to be parallelizable.
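For anyone who wants to sanity-check that estimate, here's a rough sketch in Python. Bignum multiplication in CPython is much slower than the optimized GMP/assembly code a real attack would use, so treat the extrapolation as an upper bound:

```python
import random
import time

# Time a batch of 2048-bit modular multiplications, then
# extrapolate to the 2^36 operations discussed above.
bits = 2048
mod = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # 2048-bit odd modulus
a = random.getrandbits(bits) % mod
b = random.getrandbits(bits) % mod

n = 100_000
start = time.perf_counter()
for _ in range(n):
    a = (a * b) % mod
elapsed = time.perf_counter() - start

per_op = elapsed / n
hours = per_op * 2**36 / 3600
print(f"~{per_op * 1e9:.0f} ns per mulmod; 2^36 of them in ~{hours:.1f} h")
```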


It didn't take long for custom ASICs for mining bitcoin to emerge. It wouldn't take long for custom ASICs to do these kinds of operations a lot faster than on a general purpose CPU to emerge.


Supposing the paper does describe a more efficient factorisation algorithm, that does not imply that factoring an 800-bit prime (as the author of this article suggests) would be cheap.


Factoring an 800-bit prime is definitely cheap. ;-)


In fact, I'll do it right now for you for only one dollar per bit.


For those of us less familiar with cryptography and RSA in general: what are the implications if RSA is broken? What are the mitigations that would need to occur in its place?


If RSA-2048 is practically broken or breakable:

The public web and code signing PKIs collapse overnight. Most certificate authorities use RSA-2048 either for the roots or intermediates. The HN site not only uses an RSA-2048 key in its own certificate, the CA issuing that certificate and the root CA issuing the intermediate also do.

All data transmitted without forward secrecy on most web sites is compromised. Most websites nowadays use forward secrecy and/or ECDSA, but data sent years ago may still be of value (e.g. passwords) and become decryptable now.

Any data (e.g. backups, past e-mails) encrypted using RSA keys is at risk.

Any authentication system relying on RSA keys has a problem. This can include systems like smartcards or HSMs that are hard to update, software or firmware updates, etc. Banking too.

Edit to add - if RSA-1024 is practically breakable but RSA-2048 is not: some systems that relied on RSA-1024 have a problem. These should be rare, but sometimes legacy doesn't get updated until it becomes an absolute emergency. Everyone realizes that RSA-2048 is only a matter of time, that time is running out quicker than expected, and starts upgrading to ECDSA with more urgency. This will likely take a long time due to legacy hardware.


Surely I'm not the only one who read this and thought "I wonder how long the NSA have known this result, and how much better their internal attacks are than public academic results? I wonder how much of their 'full take' internet backbone archive has been decrypted and keyword mined?"


There was a quote in a newspaper (I unfortunately forget which one) about four years ago about a massive breakthrough in encryption by the NSA, post-Snowden. There were enough subtle hints about it. My working assumption had been that it was RSA-related. I noticed, for example, that some interesting organisations changed their guidelines about its usage in the past three years or so.


If it is what I think it is, then it's commonly believed that they broke commonly used Diffie-Hellman parameters, allowing them to break any connection encrypted using those.

The parameters can, in theory, be safely used by everyone, and generating them is relatively expensive. But because a few of these parameters were extremely widely used, and they were only 1024 bits strong, it is believed that a gargantuan effort to break them was worth it and the NSA did it.


Which organizations changed their guidelines?


This has been speculated since logjam was discovered.


>All data transmitted without forward secrecy on most web sites is compromised.

Forward secrecy does not protect against broken cryptography, so this is more about what methods were used and how much a new technique like this affects them.


True, but it does protect against broken RSA, because RSA is not used to encrypt the actual data. That's probably using AES.


If you break RSA then you get the AES session key. You don't have to break the AES.


Nope. Whilst that's how TLS_RSA_WITH_AES_128_CBC_SHA works, this not how Forward Secrecy enabled suites like TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 work. Most sites (and certainly any sites that think they're "secure") thus are not affected in this way.

In TLS 1.3 all suites (such as TLS_AES_128_GCM_SHA256) have forward secrecy so it isn't even explicitly called out.

In these modern modes (and in other modern protocols like SSH) the two peers agree random ephemeral keys (these days with Elliptic Curve Diffie Hellman) and long term private keys are only used to sign things to prove who you're talking to over the resulting securely encrypted connection.

So if you break RSA you can forge those signatures but you can't decrypt messages sent to and from the legitimate owner of the keys, those were, as your parent explained, secured with AES and not RSA. You would need to perform a live active attack, a MitM to interpose between the real server and its clients so as to decrypt all messages in transit.


The point of forward secrecy is that past keys are not recoverable from future communications. You need to store the whole communication history to have any chance.


Unless you break the encryption. Then you get the past keys.

Forward secrecy only protects against the exposure of private key material. It does not protect against broken cryptography as it depends on the cryptography to keep old messages private. That's because it works by forgetting the session keys. If you can derive those session keys again then it is of no value.


Nope. A session key is normally not created by one party and sent to another, it's generated by something like Diffie-Hellman in which the long-living RSA keys are only used for authenticity verification. Diffie-Hellman requires discrete logarithms rather than factorization (and then there are more modern methods).
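A toy sketch of that ephemeral exchange (with a deliberately tiny 64-bit demo prime, nowhere near real parameter sizes): the long-term RSA key never touches the shared secret; in TLS it would only sign the A and B values below.

```python
import hashlib
import secrets

p = 0xFFFFFFFFFFFFFFC5  # the largest 64-bit prime (2^64 - 59); demo only
g = 5

a = secrets.randbelow(p - 2) + 1   # Alice's ephemeral secret
b = secrets.randbelow(p - 2) + 1   # Bob's ephemeral secret

A = pow(g, a, p)  # sent Alice -> Bob (signed by the server's long-term key)
B = pow(g, b, p)  # sent Bob -> Alice

# Both sides derive the same secret without ever transmitting it.
shared = pow(B, a, p)
assert shared == pow(A, b, p)
session_key = hashlib.sha256(shared.to_bytes(8, "big")).digest()
print("session key:", session_key.hex()[:16], "...")
```

Breaking RSA lets you forge the signatures (enabling live MitM), but recorded traffic only contains A and B; recovering the session key from those is the discrete logarithm problem, not factoring.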


You always have to obtain the communication you want to decrypt; that's not the interesting part of the problem.

The interesting part is using a weakness in one part to help decrypt a different part.


Also any proprietary/ancient SSH implementation only supporting RSA that you'll find in all kinds of boxes.


Are alternatives already available that we would need to swap to in order to be secure again?


"Broken" generally isn't a binary event in cryptography.

It's a continuum from "impossible to do with all the time and energy of the universe and the most advanced computers we have" to "my commodity hardware can crack it in a few minutes".

The same goes for fears of quantum computing breaking current cryptography. It goes from effectively impossible to "yeah, we could break it with a few years of constant computation, which is plenty of time to switch to quantum resistant schemes".


Well, that's generally true, but sometimes breakthroughs do happen overnight. It's not impossible.


Yup. That's why I say generally.

Even if the paper is correct it seems to fall into the 'moving down the continuum' category.


> "Broken" generally isn't a binary event in cryptography.

If there were, for example, a way to glean a private key without factoring the modulus, I think we'd all agree that this amounts to "breaking" the system insofar as that it changes the applicability of the hardness assumption.

On the other hand, simply achieving a faster way to factor the modulus is, at best, part of a continuum as you say.


> which is plenty of time to switch to quantum resistant schemes

That's not how you treat broken cryptography. If your data is already collected and stored encrypted by a third party which still holds value after several years, you're already in bad shape.


Depending on exactly how broken, a large portion of traffic on the internet would lose almost all of its security guarantees (privacy, resistance to forgery). It would also be a huge effort, and take forever to get everyone to fix it. That's about as close to "movie disaster" level as it gets.


1. We kinda knew RSA has an expiration date due to quantum computers. Assuming the paper is true, this just brought the expiration date far closer to us.

2. Major issue is going to be webpki and replaying govt captured encrypted communications.

3. There are a lot of abandoned servers out there that use RSA. There is a lot of code signing that uses RSA. There is just a lot of identity proven on the web that uses RSA to prove the identity. It's going to be a clusterfuck of identity. Again, assuming the paper means RSA is just completely broken.


> 1. We kinda knew RSA has an expiration date due to quantum computers.

Only if you somehow "know" quantum computing is ever going to be practically realized. It may never be.


There's no real big theoretical problems in the quantum computer building space. There's problems of scale, and funding, and the usual growing pains of a new industry, but scale went from 7 to 24 fairly quickly and all it took was more money. If I gave IBM $10T, they could build me a 1024-qbit computer. Once it gets cheaper, which is the current problem, I don't see any reason why Azure Quantum (for example) wouldn't simply decrease in price to where it can be used practically.


>There's no real big theoretical problems in the quantum computer building space

The current quantum computers are just on the edge of what we can simulate classically, so we can't yet rule out the possibility that realizing a quantum computation requires an exponential amount of energy in the number of qubits. (Though it should be noted that quantum mechanics predicts that this will not happen.)


> it should be noted that quantum mechanics predicts that this will not happen

There is a possibility that QM will break somewhere, but I wouldn't consider this very probable...


This couldn't be further from the truth. If more money could get IBM to more qubits, they would be doing it. Fundamental problems with scaling have held them back for ages. The only people making any progress on scaling are D-Wave, and their model is never going to have any impact on factoring anyway.


>There's no real big theoretical problems in the quantum computer building space.

I don't think this is true...


This is true - in general each bit you add to a quantum computer means the noise floor needs to be twice as low, so difficulty scales exponentially. That is unless a threshold is reached where error correction scales faster.


Does this technique for factorization by Schnorr have any implications for any other cryptographic methods as well (if confirmed)?


There are lots of alternative constructions. ECC, for example.

1024-bit and higher RSA is still unfactorable, so I don't think anyone will be attacking RSA directly any time soon.


ECC is considered even less quantum resistant than RSA because the key lengths are so short.


But for now, it's more important to ask whether ECC is vulnerable to some variant of Schnorr's attack, which uses conventional computers. We already had an algorithm to break RSA on quantum.


Well, the most obvious problem for the average lay person is that the movie "Sneakers" would suddenly gain relevance again, and most of the cast are too old (or too dead) to make a sequel.

Also, that whole thing about lots of computer encryption tech suddenly being effectively insecure.


He didn't claim it is broken. Only that it can be broken 2x faster than before. RSA 4096 as recommended by the FSF is still secure. RSA 2048 might be breakable by the NSA. But so far we are at 800-1000 at risk.


> He didn't claim it is broken.

But then there's that line: "This destroys the RSA cryptosystem" in the abstract of the paper.


Yes, because RSA 2048 is broken, not RSA 4096. Almost everything uses 2k, nobody uses 4k. The cryptosystem infrastructure would be broken. Not talking about the possible backdoors in the ECC NIST curves. Thankfully we have the djb curves there.


This would massively break basically all traditional public-key crypto I think (depends a bit on whether it extends to elliptic-curve or just integer-based RSA [edit: meant to say whether the algorithm can be adapted to solving discrete logarithms over elliptic curves]). It would be the biggest crypto thing to happen in the last 30 years at least.

The mitigation would be to move to experimental post-quantum crypto systems immediately (quantum computers have all the fuss because they can break rsa).

This is basically an unbelievable result. Without actually providing some factored numbers I am very doubtful.

[I have not read paper]

Edit: as pointed out below, I may have gotten overexcited. Still an incredible result if true.


There is no such thing as "elliptic-curve RSA".


> This would massively break basically all traditional public key crypto I think (depends a bit on if it extends to elliptic-curve or just integer-based RSA).

"A bit"? A lot more than a bit. A world.

And on the surface, since it appears to be a factoring system, rather than a general purpose discrete log solver, the consequences, while incredible, are far more limited than the picture you paint. If this is even true; a matter over which I'm skeptical.


As stated in your comment, elliptic-curve cryptography relies on the discrete logarithm, but this algorithm is a method for factoring integers, with ideas similar to the Quadratic Sieve algorithm (https://en.wikipedia.org/wiki/Quadratic_sieve).

It does not extend to breaking elliptic-curve cryptography, for the same reason that the Quadratic Sieve does not extend to elliptic-curve crypto: the underlying math problem is different (factorisation vs discrete logarithm).


OK, here is a brief overview for people:

To factor a number N (assumed to essentially be the product of two very large primes), find 'short' lattice vectors [0] using LLL [1] (and BKZ reduction? [2]) that yield many relations of the form:

    (u_i) = p_{i,0} * p_{i,1} * ... * p_{i,n-1}
    (u_i - v_i * N) = q_{i,0} * q_{i,1} * ... * q_{i,n-1}
where p,q are small primes.

Numbers that have all their prime factors less than some bound B are said to be "B-smooth". In the above, (u_i) is p_{i,n-1}-smooth and (u_i - v_i * N) is q_{i,n-1}-smooth.

Construct many u_i and (u_i - v_i * N), so much so that you can create a product of primes, r_i, of the form:

    r_0^{2 b_0} * r_1^{2 b_1} * ... * r_{n-1}^{2 b_{n-1}} = 1 mod N
where each b_i are integers.

Since all exponents (2 b_i) are even, the product is a perfect square, giving a square root y of 1 mod N (take y to be the product of the r_i^{b_i}), which can differ from ±1 since N is composite. Since y^2 = 1 mod N, we get (y-1)(y+1) = 0 mod N. If y is not ±1 mod N, then (y-1) and (y+1) each share a nontrivial factor with N and we've successfully factored.
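To make that finishing step concrete, here's a toy demonstration where the nontrivial square root of 1 is found by brute force rather than via the lattice relations:

```python
import math

# N = 91 = 7 * 13, small enough to search exhaustively. Schnorr's
# method (like the quadratic sieve) builds a nontrivial square root
# of 1 mod N from smooth relations; here we just search for one.
N = 91
y = next(y for y in range(2, N - 1) if y * y % N == 1)  # excludes 1 and N-1
p = math.gcd(y - 1, N)
q = math.gcd(y + 1, N)
print(y, p, q)  # -> 27 13 7
```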

The trick is, of course, finding the smooth numbers. To do this, a lattice basis is made such that you find a short integer relation of the form

    a_0 ln(p_0) + a_1 ln(p_1) + ... + a_{n-1} ln(p_{n-1}) ~= ln(N)
where ~= means "approximately equal to".

u is chosen as the product of p_i^{a_i} over the primes with a_i > 0, and v as the product of p_i^{-a_i} over the primes with a_i < 0. The hope is that (u - v*N) is also p_{n-1}-smooth, which, as far as I understand, is what most of the math in the paper is trying to justify.

The main innovation here, as far as I can tell, is that Schnorr is fiddling with the 'weighting' of the main diagonal when constructing the lattice basis. I interpret this as basically trying to randomize the initial lattice basis so that the chances of getting a different integer relation (for eventual construction of u,v) is more probable.

I've been confused about this for over a decade as variants of this algorithm, and Schnorr's work in general, have been well published. For example, there's a paper from 2010 on "A Note on Integer Factorization Using Lattices" by Antonio Vera which discusses Schnorr's [3] construction.

Is Schnorr trying to shout louder so people will listen or is there something else fundamentally flawed with this type of algorithm?

Just a word of warning: LLL solves polynomial factorization in polynomial time (given a polynomial with integer coefficients, find its factor polynomials, also with integer coefficients) [4] and has been used to break other (now very old) cryptosystems [5]. If there's a candidate approach to solving integer factoring, lattice reduction (LLL, PSLQ, etc.) is it.

I know of fplll, a standalone (FOSS) implementation of LLL and some extensions (BKZ, etc.) [6].

[0] https://en.wikipedia.org/wiki/Lattice_reduction

[1] https://en.wikipedia.org/wiki/Lenstra%E2%80%93Lenstra%E2%80%...

[2] https://www.newton.ac.uk/files/seminar/20140509093009501-202...

[3] https://arxiv.org/pdf/1003.5461.pdf

[4] https://en.wikipedia.org/wiki/Factorization_of_polynomials#F...

[5] https://web.eecs.umich.edu/~cpeikert/lic13/lec05.pdf

[6] https://github.com/fplll/fplll


OK, a little more context. Sophie Schmieg has a twitter thread discussing some of these issues:

https://twitter.com/SchmiegSophie/status/1367197172664389635

They mostly mirror the article this post is under, namely "show me the factors". Schnorr claims spectacular run times but it isn't clear he provides actual data from runs he's done. Schnorr also doesn't discuss complexity considerations in the detail they deserve, while focusing on basic details that I suppose wouldn't normally show up in a paper like this.


Thanks for summarizing, and talking about what's novel here.

In the paper Schnorr suggests that this algorithm factors 800-bit integers in ~10^11 operations [36 bits], whereas the Number Field Sieve uses ~10^23 [76 bits]. Does that 76-bit figure represent the current state of the art, more or less?

Also, since the paper talks only in terms of specific sizes of integers, I assume there's no claimed asymptotic speedup over existing methods?


LLL is effectively a polynomial-time algorithm, though the exponent is large (it's like O(n^5) or O(n^6), also depending on bit length, etc.), so it might fall into 'galactic algorithm' [0] territory.

The runtime depends on the frequency of primes, which is how you know you can run the algorithm and have a good chance of finding a "B-smooth" number. So that frequency might decrease too fast as to make it not a polynomial time or quasi-polynomial time algorithm.

In terms of my own opinion, practical algorithms with large-exponent polynomial runtimes have a way of becoming increasingly more efficient, so this isn't usually a big blocking point, especially not for long. For example, the interior point method eventually lent itself to faster implementations. In terms of the frequency of primes, I suspect this is also not a big slowdown factor.

Anyway, like I said, I've been confused about this point for a long time. It seems like this method is providing effectively a polynomial time algorithm for integer factoring. I read this paper and others by Schnorr as "this is 'efficient' in the theoretical sense but the polynomial exponent is so large as to make it computationally infeasible right now" but then I don't know why this hasn't been bigger news. Maybe because the algorithm has a randomness component to it?

I don't know. If anyone is wiser than me, I would appreciate some enlightenment.

[0] https://en.wikipedia.org/wiki/Galactic_algorithm


My impression was the big optimization was the success rate threshold to guide the enumeration processing more quickly to the actually correct vector, rather than wasting cycles on smaller iterative improvements, but I'm still digesting the paper though, and my linear algebra intuition is so rusty, I may need a tetanus shot from using it, and what little number theory I've got doesn't amount to enough to get a shoe shine in 1930.


If someone wasn’t a cryptographer, but does occasional security tasks at work, what is the takeaway? RSA needs to be 4096 or higher now, or that similar techniques in the future might make RSA a bad choice altogether?


Don’t worry - yet. This is either a nothingburger, or it’s going to be a nightmare for everyone, all at once (ever dealt with web PKI? you will get a chance if this is true)

But there’s no real current takeaway until we know if this approach works, and if so how extensible it is to RSA, especially 2048 bit RSA.


There are plenty of techniques in the past that make RSA a bad choice altogether.

If you are going with it anyway, yeah, 4k bits is a safe choice for making it reasonably secure right now (2k being a bare minimum), but remember, attacks always get better, never worse, and RSA has a fair share of possible attacks.


Devil's advocate: Posting the factors requires implementation work, then optimization, then a manageable but possibly still not trivial amount of resources and time - and likely a lot of trial and error. It is perfectly conceivable that a paper would be published before the implementation is actually better than a slower but heavily optimized approach. (I don't even try to understand the paper, but I've seen a mention that it's a storage tradeoff, which may make it a very different kind of optimization problem.)

Do we know that the paper is definitely from Schnorr? (Edit: The article claims its provenance is confirmed). The "destroys the RSA cryptosystem" claim is now part of the paper. While anyone can make mistakes, I would expect such claims to be at least somewhat carefully checked before releasing them.

Either way, I expect that we'll see either a retraction/correction or factors within weeks.


This was my exact argument: https://news.ycombinator.com/item?id=26323951

Should be trivial to show a working proof on a smaller-than-usual RSA number if "this really destroys RSA".


I can imagine a certain pure-theorist mindset being confident enough in their reasoning, but not yet their coding, to report this first. Or, strategically holding definitive proof back as a hammer to deploy once the doubters reveal themselves.

Why not let others do the rote reduction-to-practice?

Why not create an example where your theory was correct, & your reputation was on the line, that took a little while to resolve – but when it does, it does so definitively in your favor, so you are more trusted in future pre-definitive-verification pronouncements?

(I don't know enough about Schnorr-the-person to know if this fits his personality, but I can imagine such personalities.)


This was heavily discussed yesterday. (Edit: this next bit was out of date:) It seems the provenance of the paper and the 'destroy' claim are unclear.

“This destroys the RSA cryptosystem” - https://news.ycombinator.com/item?id=26321962 - March 2021 (140 comments)


Nah it’s confirmed as from Schnorr.

He even reiterated his belief it leaves RSA rekt.

All that’s left is the veracity of the attack.


Ok thanks! I was out of date.


An example, by hand, from the paper's author, where he uses this algorithm to factor a number, would be great. Even a small number that's easy to factor by brute force would be enough to actually prove that his claims are true. We'll do the code implementation and run it against RSA challenge numbers, and see if this is a prank or the real deal.


Crypto noob question: Wouldn't it be prudent to switch to something like ECDSA (hearsay that it is stronger) if there was even a hint that this was possible?

If a major government got wind that such work was going on, wouldn't it be prudent to publish before you are disappeared? I assume high-profile crypto research people are spied on.


I know a lot of programming languages, but I have never wrapped my head around math notation.

Question for someone who is familiar with math notation: was the abstract of this article easy to understand?

For me, the abstract seems like code but with no commentary explaining what each block does. But I could be mistaken.


For context: I'm a computer science MSc student.

The notation is easy to understand (and as far as mathematical notation goes, really quite tame). I don't know what a nearly shortest vector of a lattice is in this context, but I do understand everything else. Note that this means I have no idea how the actual method works, but I can understand what's being claimed.


Not an expert at all, but you can think of lattices as evenly-spaced grid points in a vector space. Given a set of basis vectors b0..bn and arbitrary integers a0..an, the points a0*b0 + ... + an*bn form the lattice generated by b.

You can have a "good basis" where the norms for b are low, or an equivalent "bad basis" with the same lattice points but with high norms. That's one hard problem (lattice reduction), but there are polynomial-time approximations.

The shortest vector problem, iirc, is to find the shortest nonzero vector in the lattice.
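A tiny 2D illustration of those definitions: the lattice generated by a "bad" (long, nearly parallel) basis still contains very short vectors, which LLL finds approximately in polynomial time; here we just brute-force it, which is hopeless in high dimensions.

```python
import itertools
import math

# A "bad" basis: two long, nearly parallel vectors.
b0 = (1, 50)
b1 = (0, 51)

# Brute-force the shortest nonzero lattice vector a0*b0 + a1*b1.
best = min(
    ((a0 * b0[0] + a1 * b1[0], a0 * b0[1] + a1 * b1[1])
     for a0, a1 in itertools.product(range(-60, 61), repeat=2)
     if (a0, a1) != (0, 0)),
    key=lambda v: math.hypot(*v),
)
print(best)  # a vector of length sqrt(2), e.g. (-1, 1)
```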


The first half of the abstract is more akin to declaring the data types and structures used, and the second half is mostly a very high level summary of the overall method and results. It's not supposed to be interpreted like code. It's just setting up the context you need to start interpreting the meat of the paper, and giving you a heads-up about what background topics to Google if anything in the abstract sounds unfamiliar.


> For me, the abstract seems like code but with no commentary explaining what each block does. But I could be mistaken.

You are mistaken. (Pretty much) all of mathematics is written as natural language, and those symbols are just abbreviations for stuff that could also be written as words. If I read those sentences out loud, another could write them back down and arrive at something that looks the same.

That's why all of mathematical notation is embedded in sentences - they are part of the sentence and can be read as such.

Further, that is really basic notation that a first-semester student of any STEM discipline should be able to read, though I wouldn't expect them to know what a lattice is, or some of the other terminology.


I'd love a "cheat sheet" or dictionary for mathematical notation. I don't know how to pronounce half of the embedded symbology, let alone what rules apply. It seems so esoteric and arbitrary sometimes, though I recognize it's most certainly not.


> It seems so esoteric and arbitrary sometimes

It kind of is since a lot of it grew organically. Often it isn't even consistent across authors/countries. It wasn't really "designed".

The German Wikipedia has a decent overview of symbols and I often use it as a cheat-sheet for LaTeX, whereas the English equivalent of that article has a better explanation of basic notation.

https://en.wikipedia.org/wiki/Glossary_of_mathematical_symbo...

https://de.wikipedia.org/wiki/Liste_mathematischer_Symbole?o...

Even if you don't speak German, the article might be useful because you can follow the links to the individual articles next to the symbols, then change the language back to English. Or use it as a LaTeX reference if you're like me and have trouble remembering some of the more wonky abbreviations it uses. Of course that article isn't comprehensive, but most of the stuff that is missing is very domain-specific.


Thanks for the info. I finished calculus years ago but am very rusty now. I am reviewing the notation links provided, the material is dense, but it is understandable.


This is perfect, can't believe I didn't go to Wikipedia right away (duh!) -- thank you. Into the rabbit hole I go.


For someone who took a matrix calculation (linear algebra) course like me, it was kinda understandable.


You will not be able to understand the notation if you do not understand the math.


> the provenance of the paper has been confirmed: it is indeed Schnorr.

What I read is that someone contacted Schnorr over email to get this confirmation.

I’m not saying the confirmation is wrong. And I’m not saying email cannot convey information.


Well, it's definitely suspect now that RSA is broken.


So somebody cracked RSA and took over gmail.com domain just to fake him being Schnorr.


Beautifully done.


"Email cannot convey information"? Baloney. It does all the time.

You seem to mean something different from what your words say...


“I’m not saying...”

see?


Did you edit that, or did I misread it?


Actually come to think of it that’s a good question. Didn’t edit it adversarially if you know what I mean, nor long after posting, but there may have been a race condition as I was tweaking wording immediately after first saving. I see the edit indicator is there but I don’t recall if that exact portion was changed in the moment.


What does "36 bits of work" mean, sorry?


My naive assumption would be, takes 2^36 cpu operations


2^36 arithmetic operations is what is claimed. That's not quite the same as CPU operations, because we only have 64-bit CPUs with up to 512-bit vector instructions, but we're talking about factoring 800-bit numbers. So we need to allow for several CPU instructions to implement each of the required arithmetic operations.


If so, 2^36 ~ 7 x 10^10, so a few seconds on GHz processors.


Yeah, it would be great if they could translate that into core-years to match the references they listed.


I'm guessing it's shorthand for the order of magnitude of the work: log2(8.4E10) = 36.3 bits' worth of operations.
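Quick check of that reading of "bits of work":

```python
import math

ops = 8.4e10                      # the claimed op count for 800-bit moduli
print(f"2^{math.log2(ops):.1f}")  # -> 2^36.3
print(2**36)                      # -> 68719476736, i.e. ~6.9e10
```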


Factors or gtfo


Get the factors out


Is prime factorisation used in SHA256? Would I be able to solo mine from my CPU again?


No, to both questions. Implementing SHA256 is actually quite easy: no more than 50 lines of code. My current implementation, which I use personally, is under 100 lines of code including variable declarations (not just executable lines of code).

https://en.wikipedia.org/wiki/SHA-2
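And on the mining question: Bitcoin's proof of work is just repeated double SHA-256 (pure shifts, rotates, XORs and 32-bit additions), so there's no modulus for a factoring result to attack. A minimal sketch, with a stand-in header:

```python
import hashlib

header = b"example block header"  # stand-in; real block headers are 80 bytes
digest = hashlib.sha256(hashlib.sha256(header).digest()).hexdigest()
# Miners vary a nonce in the header until this value falls below a target.
print(digest)
```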


No. Prime factorization is used for public key cryptography, not hashing.


That's really all there is to it. Pudding, proof, etc.


I am no cryptographer. I did implement, from the paper, Yao's "socialist millionaire" cryptographic protocol but... It was only a few lines of code and a very simple (to understand, not to come up with) paper.

Now I just looked at that Schnorr paper and, well, I can tell you that I'm not going to be the one implementing it : (


I wonder if Schnorr is going senile like Atiyah.


Reminiscent of Craig Wright's claim to be Satoshi.

It doesn't matter what you claim with words if you can't back it up with cryptographic evidence.

Shut up and prove you've done (or can do) the work.


Are you really comparing a con artist with one of the most famous cryptographers?


Dear lord no. I can see how it might come across like that.

More drawing attention to the wider theme that we generally should not take people at their word when we have the option to demand proof of work that can't be faked or mistaken.

Don't trust. Verify.


These are not trivial algorithms to implement, and the other factorization records require months of work from implementation experts. It's not an easy task, and theory work stands independently of implementation effort.


Still, if this new algorithm could threaten 1024-bit RSA using 10,000 computers for 10,000 days after a 10,000x speed-up from optimisation, it should be able to solve the RSA-896 factoring challenge with a single computer in a single day without optimisation, shouldn't it?

After all, 2^896 is 38 orders of magnitude smaller than 2^1024.


The claimed number of operations is low enough that demonstrating the algorithm in practice does not require a highly optimized implementation.


The first thing this will be used for is stealing Bitcoin and other cryptocurrency, I predict. So watch out for your wallets.


Bitcoin wallets don't use RSA, but ECDSA.


I was referring to the possibility of man in the middle attacks on Bitcoin applications.


Once people figure out the math to break bitcoin then they can transfer bitcoin from any address.

I don't know why people don't bring this up more often. It will likely happen long before quantum computers make it possible.


Because, simply, it's not true. I'm curious though what attack vector you're thinking of, especially one that's not related to quantum computers. Are you worried that the ECDSA public-key cryptosystem employed by Bitcoin will be broken, such that the private keys could somehow get derived easily from the public ones? If so, that still wouldn't let an attacker "transfer bitcoin from any address," since the addresses themselves are hashes of the public keys. So people would have to stop re-using addresses to receive bitcoin multiple times, since once an address has been the sender in a transaction, and its public key revealed, it would become vulnerable.
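A simplified sketch of that last point (real addresses hash with SHA-256 then RIPEMD-160 and add a checksum and encoding; this toy uses SHA-256 only, and the pubkey bytes are a stand-in):

```python
import hashlib

pubkey = bytes.fromhex("02" + "11" * 32)           # stand-in compressed pubkey
address = hashlib.sha256(pubkey).hexdigest()[:40]  # toy "address"
print(address)
# Even an attacker who could invert ECDSA learns nothing from the
# address alone: the public key stays hidden until the first spend.
```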


I'm skeptical. The paper is too tough for me to digest without spending days/weeks/lifetimes focusing on it (and there are many who can do it much faster obviously). But I think that if RSA is materially broken, we'll know it from movements in the ground (eg, sudden mysterious forged signatures) by the time a paper is published.

I don't think such a secret could be kept for more than a few minutes without someone immediately proceeding to weaponize it.



