An Almost-Secret Algorithm Researchers Used to Break Thousands of RSA Keys (algorithmsoup.wordpress.com)
420 points by williamkuszmaul 31 days ago | 109 comments

This describes research published in 2012 by Arjen Lenstra et al. (https://eprint.iacr.org/2012/064.pdf), which relied on a scalable n-way GCD algorithm that Lenstra's team thought best not to explain to readers (in the hope that the attack wouldn't be quickly replicated for malicious purposes). Coincidentally, another team (Nadia Heninger et al., https://factorable.net/) published extremely similar research in a similar timeframe, without withholding details of that team's GCD calculation approach.

The Heninger et al. paper explains quite a lot about where the underlying problems came from, most often inadequately seeded PRNGs in embedded devices. As the linked article mentions, other subsequent papers have also analyzed variants of this technique and so there's not much secret left about it.

If people are interested in learning about the impact of common factors on the security of RSA, I created a puzzle based on this which you can try out; it also includes an explanation of the issue for programmers who are less familiar with the mathematical context: http://www.loyalty.org/~schoen/rsa/

Notably, my puzzle uses a small set of keys so you can do "easy" pairwise GCD calculations rather than needing an efficient n-way algorithm as described here (which becomes increasingly relevant as the number of keys in question grows).
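For a small key set like that, the pairwise approach really is just Euclid's algorithm on every pair of moduli. A minimal sketch (the moduli below are tiny illustrative values, not real keys):

```python
from itertools import combinations
from math import gcd

def find_shared_factors(moduli):
    """Return (i, j, p) triples where moduli i and j share the prime p.

    O(n^2) gcd computations -- fine for a handful of keys, which is why
    a batch algorithm only matters once n gets large. (Identical moduli
    would need separate handling; this flags proper shared factors.)
    """
    hits = []
    for (i, n1), (j, n2) in combinations(enumerate(moduli), 2):
        g = gcd(n1, n2)
        if 1 < g < n1:  # a shared prime factor breaks both keys
            hits.append((i, j, g))
    return hits

# Toy example: the first two moduli both reuse the prime 101.
print(find_shared_factors([101 * 103, 101 * 107, 109 * 113]))  # [(0, 1, 101)]
```

Once a shared prime p is found, each affected modulus factors immediately as p * (n // p), and the private key follows.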

I attempted to do your puzzle but the link was blocked by my employer's firewall:

> URL: http://www.loyalty.org/~schoen/rsa/

> Block reason: Violence/Hate/Racism

So... what exactly is loyalty.org all about?

"www.loyalty.org itself is the server for the web site of Californians for Academic Freedom, the group I founded to oppose the California loyalty oath, which is still a non-negotiable requirement for anyone who wants to work for the State of California -- including student employees of the University of California. "


That's the most controversial thing about the page. In my view it's not a big deal to have that stance but some people don't like others who rock the boat when it comes to the status quo.

My activism on this issue for the past decade and a half has basically consisted of corresponding for a few minutes each year with some new person who objects to signing the loyalty oath. I doubt that the site blocking has anything to do with this.

I was curious about the contents and contentious points of the oath, so I went looking, and one of the files you link to is a 404:


Catch-22 should be required reading cf. "The great loyalty oath crusade"

I don't know why my web site was categorized that way by your employer's firewall.

Thanks, I'll look into that.

Could you tell me the brand? I would like to test a few sites that I believe are much worse than loyalty.org.

I'm guessing it's a false positive from using an IP similar to a sketchy site. It looks like the IP points to Linode.

For what it's worth, I enjoyed the puzzle when I completed it three years ago. It's a practical exercise in cracking RSA keys, followed by a small classic code-breaking puzzle.

I really doubt that schoen's page is any issue. Perhaps you can get to it via the Wayback Machine.

My experience with these firewalls/blacklists is that they have quite a few random false positives. Requests to fix them one by one seem to be a recurring theme on our company chat.

I would leave a company that used an internet blocker. They constantly get in the way of real work and benefit no one. The only valid use case I see is attempting to stop idiots downloading malware and installing it on company computers.

In more "traditional" companies that have software dev departments, they are extremely common in my experience, so switching companies wouldn't help much if you're in such a field.

If 90% of the company employees only use IE and MS Word and are actually likely to install malware, and the last 10% are software engineers/data analysts/numerical simulation people, the latter are gonna have to work within/around these types of constraints.

Part of the reason I don't work at such places. I wouldn't want to be somewhere where I'm treated like an idiot.

I guess we all find different optima. To me it's not really being treated like an idiot, it's "we need to have a policy that takes into account that most of the people are idiots, you smart guys work around it if necessary".

I don't know where you work, but off-hand I would postulate that working at "such places" also has advantages over where you work (e.g. better work-life balance, living in an area where property prices are not insane, etc.). As I said, we find different optima in these tradeoffs.

On a personal device it's reasonable to use a VPN.

On a personal device, it's reasonable to use LTE.

Not always feasible. Not everyone has the cash to spend on data or lives in an area with good wireless coverage.

It might be the case that some people employ VPN, or SOCKS proxies over ssh to their home computer, or similar schemes. I wouldn't know.

Maybe a particularly simplistic auto-categoriser mistook the few eighty-eights (88s) on the root page[0] for their use as Nazi symbolism[1]?

[0] http://www.loyalty.org/

[1] https://en.wikipedia.org/w/index.php?title=88_(number)&oldid...

For context, Seth Schoen is not only very tolerant, one of the least racist USAns I know, but also Jewish.

Mine too, interesting. Which firewall does your employer use? Here it is SonicWall.

It appears to be a personal website; I believe it is miscategorised.

Nice puzzle. Feels like a good exercise for stretching the brain.

This website does a great job explaining how encryption works and how it's vulnerable to bad RNGs. Well done.

This idea has since been applied to several other domains. Last year we had a look at all archived RSA keys of Tor relays: https://nymity.ch/anomalous-tor-keys/

We found several thousand relays that shared prime factors (most were part of a research project), ten relays that shared moduli, and 122 relays that used a non-standard RSA exponent, presumably in an attempt to manipulate the Tor network's distributed hash table, which powers onion services.

Another group did a talk on this at DEF CON 26 last year: https://research.kudelskisecurity.com/2018/08/16/breaking-an...

They analyzed over 340 million keys from the web.

> As part of the presentation given at DEF CON 26, one of the outputs was Kudelski Security’s Keylookup application. On this site, you can submit your own public keys and have them tested against our dataset. We will let you know if your key is vulnerable to Batch GCD and ROCA attacks. If your key is in our database, we will be able to give you an answer immediately, if it is not, you may have to wait a bit until the tests complete.

> https://keylookup.kudelskisecurity.com/

I don't think ROCA is related to the vulnerability in the article?

Correct, but the main point of their talk and research was focused on Batch GCD processing of hundreds of millions of keys, with ROCA analysis done in addition.

What are the characteristics of those who generated an RSA key sharing a prime factor? Can they be linked back to a few bad CSPRNG implementations?

What are practical steps to be responsible about it?

It's contrived, but I just imagine that if I'm generating some particularly important keys, that I should somehow find a way to give /dev/urandom a kick of some kind. Even if that were possible, it's more likely to make things worse than better. Still, it makes me a little paranoid to even hear about theoretical weaknesses -- especially like collision attacks. I have no idea how long it takes for the CSPRNG to get properly seeded. Does it take a microsecond after booting? 10 minutes? A day?

At the time, there was uncertainty about the root cause, but yes, I think it's been traced back to a set of specific CSPRNGs.

You do not need to give urandom a kick of any kind; once the KRNG is seeded, urandom will for all intents and purposes perpetually feed you secure random bytes. It's likely your distro already goes through some effort to make sure urandom is seeded by the time you start up a shell.

Some RNGs use the time of day in milliseconds as a seed; I guess those are easy to brute-force. I guess it's all about the size of the seed and its randomness!?

This is probably the most famous issue about that phenomenon:


You could say that our understanding of PRNGs has improved a bit since then.

A recent thread about brute-forcing PRNG states in a game:


I would have thought https://github.com/g0tmi1k/debian-ssh would be the most famous issue in many people's memories of poor (read: absent) PRNG use. ;)

One would hope no RSA key generation software is so stupid as to use that kind of RNG. OTOH, apparently 0.2% of RSA keys were generated by something effectively that dumb.

A lot of people over the years have gotten the message that "use /dev/urandom and forget about it" is the final, end-all, be-all for secure random number generation.

In fact, this ideology (and that's what it is - an ideology) has been trumpeted right here on HN, in some cases by people who repeatedly seem to comment on topics that they don't fully understand. Security is hard, but there's also a high reputational value on being perceived as an authority on the topic. As a result, there are some nuggets of "wisdom" that require asterisks next to them, including this one.

Even though "just use /dev/urandom" is almost always true, it isn't always true. In fact, the universe of cases where some form of blocking entropy is needed (and again, this is a very tiny set) is growing, not shrinking.


How does a blocking CSPRNG mitigate this attack?

Again, it makes no sense to say that a CSPRNG can start "running low" on entropy.

Here's what djb has to say about this:

    Cryptographers are certainly not responsible for this superstitious nonsense. Think about this for a moment: whoever wrote the /dev/random manual page seems to simultaneously believe that

    (1) we can't figure out how to deterministically expand one 256-bit /dev/random output into an endless stream of unpredictable keys (this is what we need from urandom), but

    (2) we _can_ figure out how to use a single key to safely encrypt many messages (this is what we need from SSL, PGP, etc.).

    For a cryptographer this doesn't even pass the laugh test.
(Unless you're talking about the early boot seeding problem that /dev/urandom has on Linux, which is a very real problem.)

For reference, here's the classic source I think you're referring to: https://www.2uo.de/myths-about-urandom

The point is not about "running low on entropy", it's about the possibility of not having enough to begin with. At boot.

Part of the problem is that neither /dev/urandom nor /dev/random on linux do what most people want. /dev/urandom never blocks even right after you've spun up a fresh machine that has not seeded the PRNG, while /dev/random blocks very conservatively, to the point where it is not useful for certain things.

The proper approach for high-volume random numbers is probably to seed a userspace PRNG from /dev/random but that's extra work, particularly in a concurrent program.

Please do not seed a userspace RNG from /dev/random. Most major crypto RNG attacks trace back to userspace RNGs. If you can trust /dev/random to provide a seed for your userspace RNG, then by definition you can, from that point on, trust urandom as well.

(getrandom(..., 0) is probably the right long-term solution).
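For reference, that pattern is directly available from Python, which exposes the syscall on Linux as os.getrandom() (3.6+). A minimal sketch; the fallback branch is my assumption for non-Linux platforms, where os.urandom() already behaves as "block once until seeded, then never":

```python
import os

def initial_seed(nbytes=32):
    """Read key material with getrandom(..., 0) semantics."""
    if hasattr(os, "getrandom"):
        # flags=0: block only until the kernel CSPRNG has been seeded
        # once, then return immediately forever after. Never pass
        # GRND_RANDOM here.
        return os.getrandom(nbytes, 0)
    # Non-Linux: os.urandom() is seeded early by the OS.
    return os.urandom(nbytes)

key = initial_seed()
print(len(key))  # 32
```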

BTW, I've read that before (don't use userspace RNG). Can you point me to a few problems that arose because of userspace RNG?

Yeah, next time we're drinking.

Now I remember about forking and VM cloning issues.

Also, you live a bit far from me, but see you at Black Hat maybe :D

> (getrandom(..., 0) is probably the right long-term solution)

Glad to see you've come around.


First, man - every time I have replied to you on HN, our discussion has turned into a flame war of one variety or another. I don't have time for that at this moment (nor the inclination for it, ever). Ideally, I'd just like to get along with you. We have many common friends; I'm confident they'll tell you that I'm a pretty relaxed guy and easy to get along with. And I'm pretty careful about refraining from stepping into discussions unless I have a good enough command over the material to avoid making false statements or giving bad advice. So I'm just asking - please just be civil. Let's stop clogging HN with attacks and instead just discuss (and achieve consensus on) best practices. Agreed?

Now: as recently as 5 months ago, when you and I discussed the matter, you replied to a post that expressly suggested `getrandom(..., flags=0)` (what you called the "blocking variant", although it's not that simple) and dismissed it in favor of /dev/urandom (and also strangely referred to security.stackexchange as "the wrong stackoverflow boards"). [0]

This was bad advice. It's good to see that you have come around to the position taken by the Python core team and just about everybody else that using getrandom(..., 0) is a best practice in this situation.

0: https://news.ycombinator.com/item?id=17786496

I assumed from context that he meant "GRND_RANDOM" as the "blocking variant" in that thread, which is different from flags=0.

That's not at all what that thread says, and this is an extremely weird and unwelcome addition to this thread.

it was hilarious though

I think that the approach taken in Python as of PEP524[0] gets it about right. And it achieves a very similar level of exposure and performance across platforms as well.

0: https://www.python.org/dev/peps/pep-0524/

> Part of the problem is that neither /dev/urandom nor /dev/random on linux do what most people want.

Agreed, but `getrandom(..., 0)` largely does. And thus, Python's `urandom` is also a good fit for very nearly every use case for randomness inside business logic.
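Concretely, post-PEP 524 the idiomatic pattern for application-level randomness is os.urandom() or the secrets module, both of which draw from the same getrandom()-backed source. A minimal sketch:

```python
import os
import secrets

# os.urandom() on Python 3.6+/Linux calls getrandom(..., 0): it blocks
# until the kernel CSPRNG is seeded, then never blocks again.
session_key = os.urandom(32)            # 256 bits of raw key material

# secrets wraps the same source with convenience helpers for tokens.
api_token = secrets.token_urlsafe(32)   # URL-safe text token
session_id = secrets.token_hex(16)      # 16 bytes -> 32 hex characters

print(len(session_key), len(session_id))  # 32 32
```

The point of using these rather than a hand-rolled userspace PRNG is exactly the one made upthread: the platform-specific seeding logic lives in the OS and the interpreter, not in your application.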

> The proper approach for high-volume random numbers is probably to seed a userspace PRNG from /dev/random...

At a high level, I don't agree. I think that for high-volume random numbers the best solution is nearly always a trustworthy hardware RNG.

I also think that, while it's possible to properly seed a userspace PRNG from /dev/random, it's rarely a good idea because 1) it's (at least relatively) difficult to do without introducing other vulnerabilities, and 2) if you trust /dev/random as a seed, then you have other viable options in every case.

If you have a need to use a userspace PRNG - and more importantly, the wherewithal and self-awareness to do it properly - then you are almost certainly in a position to need to seed it from an oracle other than your system's /dev/random.

For what it's worth, now everyone is doing that in Docker.

PuTTYgen, being a Windows application, assumes that a mouse is connected and prompts the user to move the mouse to generate entropy during key creation.

By comparison, ssh-keygen documents the SSH_USE_STRONG_RNG environment variable -- but then recommends against its use (!) since it can cause key generation "to be blocked until enough entropy is available".

> "to be blocked until enough entropy is available"

which is fine. The idea that entropy is a consumable resource is a crypto myth that needs to die.

It's not consumable, but it can be in short supply right after boot. So /dev/random should block until there's sufficient entropy, but never after that.

If I recall correctly, at one point, the network interface was used as a source of entropy. Then someone demonstrated that sending the right sequence of network packets to a machine would let you control the key that got generated. So they removed it.

Then folks discovered -- in production -- that some cloud computing environments just don't get any other new entropy after boot, and so instances would hang on generating SSH host keys.

Some folks went to /dev/urandom. Other folks decided to seed instances with entropy from another computer (with fancy names like "cloud entropy service"). And then someone had to decide how that machine gets entropy (like plugging in an FM radio into the mic jack).

> some cloud computing environments just don't get any other new entropy after boot

For environments like that, I think Haveged is the general approach these days. Latest dev code (revived project) is now here:


It's the (officially blessed, I think) continuation from the original Haveged:


The problem with "block until you have enough entropy" is that the amount of entropy you have is literally unknowable.

How can that be true? The kernel at least, knows how much you have and knows to block consumption until it crosses the desired threshold.

The kernel knows how many bytes you've fed into it. But it has no fundamentally sound way of computing a lower bound on how much entropy those bytes contained; the best approaches involve calculating an upper bound and then saying "well, hopefully it's at least X% of this".

RDRAND is a thing

What's the alternative on ARM?

IMO a bit clickbaity of a title --- I thought there was a recent breakthrough in integer factorisation, when in fact this is really attributable to insufficient randomness.

A bigger problem for many readers might be that this is a new post, mainly summarizing research from 2012. So it's not really appropriate to say "(2012)", but it also doesn't announce new discoveries since that time.

On this topic, I recall reading that earlier computers (of the '80s) used radio tuners tuned to nothing to generate random numbers. Of course, you need some way to randomly choose the frequency, but then what comes out should be pretty unpredictable.

I believe Random.org uses an approach similar to this. What is so special about this approach that we couldn't install it as a card in a desktop for example?

Modern desktops have had RNGs built into their chips since Ivy Bridge. Beyond that, they have plenty of decent sources for random numbers such as network traffic. Similarly, mobile devices can use sensor noise to create random numbers pretty well.

The devices people are concerned with are things like embedded devices and sometimes virtual devices.

A zener diode costs $0.01 and gives pretty good randomness. You mix in network traffic, user input, and other sensors when available, and you are golden.

Hi, I attempted to recreate the algorithm in the post:


This is very interesting. I wonder what key generators were used to create the insecure keys.

See https://factorable.net/weakkeys12.extended.pdf, which includes some identifications of some of the responsible devices. (Two groups of researchers published independently on this issue at nearly the same time.)

Bottom left of page 9 for RSA, and the middle left of page 10 goes through which manufacturers were the most common.


Devices that create TLS and SSH keys just after boot when there is not enough entropy.

That's not a great takeaway.

The takeaway is: Linux's /dev/random and /dev/urandom interfaces are both broken, in the sense that neither is reliable for embedded developers during certain early boot conditions. Some of that was maybe worse in 2012 than today, but the fundamental interface properties have not changed.

Tl;dr: Use getrandom() instead of /dev/[u]random. Do not use GRND_RANDOM. Do not use GRND_NONBLOCK.

But maybe better pre-2009:

> Surprisingly, modern Linux systems no longer collect entropy from IRQ timings. The Linux kernel maintainers deprecated the use of this source in 2009 by removing the IRQF_SAMPLE_RANDOM flag, apparently to prevent events controlled or observable by an attacker from contributing to the entropy pools.

> Although mailing list discussions suggest the intent to replace it with new collection mechanisms tailored to each interrupt-driven source [21], as of Linux 3.4.2 no such functions have been added to the kernel.

> The removal of IRQs as an entropy source has likely exacerbated RNG problems in headless and embedded devices, which often lack human input devices, disks, and multiple cores. If they do, the only source of entropy—if there are any at all—may be the time of boot.

Removing something from an entropy pool because it could be used by an attacker seems really odd to me.

It may have been inspired by arguments like https://blog.cr.yp.to/20140205-entropy.html (although that one was published five years later than the change we're talking about).

That would be rather misguided though. It's not like a device is going to know the exact time the interrupt handler routine would run, hash it, and go back "whoops, better not fire an interrupt yet because it won't be on the right tick!"

No. The fundamental problem — random blocks arbitrarily after seeding, and urandom does not block even before seeding — existed prior to the 2009 change.

For keys that are generated during early/first-boot conditions, couldn't one wait until /proc/sys/kernel/random/entropy_avail is greater than 256 and then read from urandom? IIRC getrandom still isn't universally available on SoC kernels.

I was 1/2 way through reading your comment and was thinking, "Yeah, but they can use the getrandom() that was ported from OpenBSD", but you obviously beat me to it. Take your upvote!

OpenBSD's interface is getentropy(), but yeah, basically the same idea! I believe Linux was the second actor and intentionally made getrandom() a superset of the getentropy interface (i.e., no EINTR below 256 bytes).

As a result, writing a getentropy() shim around getrandom is feasible; FreeBSD and Linux (glibc) have done so.

Yes, it was ported from OpenBSD. Ted literally calls it out in the commit:


> The authors hint in a footnote that at the heart of their computation is an asymptotically fast algorithm, allowing them to bring the running time of the computation down to nearly linear; but the actual description of the algorithm is kept a secret from the reader

How could something like that pass peer review? Their claim is effectively unable to be reproduced.

If I'm reading the numbers right, even using the slow way, you'd expect to break on the order of tens of keys per day. Thus the claim that "two out of every one thousand RSA moduli [...] offer no security" is easily verified. The secret algorithm doesn't compute secret data that can't be verified.
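And the withheld algorithm is no longer secret in any case: the approach published by Heninger et al. (and described in Bernstein's batch-GCD writeup) computes all the gcds in quasi-linear time with a product tree and a remainder tree. A minimal sketch with toy moduli, not an optimized implementation:

```python
from math import gcd

def batch_gcd(moduli):
    """For each n_i, return gcd(n_i, product of all other moduli)."""
    # Product tree: level 0 holds the moduli; each higher level holds
    # products of adjacent pairs, ending with the product N of everything.
    tree = [list(moduli)]
    while len(tree[-1]) > 1:
        prev = tree[-1]
        tree.append([prev[i] * prev[i + 1] if i + 1 < len(prev) else prev[i]
                     for i in range(0, len(prev), 2)])
    # Remainder tree: push N back down, reducing modulo the square of
    # each node, until we have N mod n_i^2 for every modulus.
    rems = tree[-1]
    for level in reversed(tree[:-1]):
        rems = [rems[i // 2] % (n * n) for i, n in enumerate(level)]
    # (N mod n_i^2) // n_i equals (N / n_i) mod n_i, so this gcd is
    # nontrivial exactly when n_i shares a prime with another modulus.
    return [gcd(n, r // n) for n, r in zip(moduli, rems)]

# Toy example: the first two moduli share the prime 101.
print(batch_gcd([101 * 103, 101 * 107, 109 * 113]))  # [101, 101, 1]
```

The squaring trick in the remainder tree is what lets one division per node replace the quadratic number of pairwise gcds.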

It’s not clear to me that the journal it was published in is peer reviewed. Not all are.


It’s useful to have such places to publish things, but unfortunate that it’s not clear whether it’s peer reviewed.

A number of CS publications (or non-CS publications with custom software) don't include the source/enough exact parameters/... to reproduce their results.

Maybe the reviewers had access.

Anyone know of a service (https://factorable.net/keycheck.html was discontinued) or code that would help identify particularly weak keys?

Actually, factorable.net does have code. I'm planning to analyze the 1200 user keys I have around to see if any are weak. From what I can tell, you can just convert the public keys into hex and feed them into their program.
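The conversion step is straightforward with just the stdlib. A hedged sketch for OpenSSH "ssh-rsa AAAA... comment" lines (the wire format is length-prefixed keytype, exponent, modulus; PEM-encoded keys would need different parsing):

```python
import base64
import struct

def ssh_rsa_modulus(pubkey_line):
    """Extract the RSA modulus n from an OpenSSH public key line."""
    blob = base64.b64decode(pubkey_line.split()[1])
    fields = []
    while blob:
        # Each field is a 4-byte big-endian length followed by the data.
        (length,) = struct.unpack(">I", blob[:4])
        fields.append(blob[4:4 + length])
        blob = blob[4 + length:]
    keytype, e, n = fields
    assert keytype == b"ssh-rsa"
    return int.from_bytes(n, "big")  # any leading 0x00 pad is harmless

# Usage: hex(ssh_rsa_modulus(open("id_rsa.pub").read()))
```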

Yes, but it will only batch-factor the 1200 keys you'd be feeding it with. For such factorization attacks to work best, you need a dataset as large as possible.

Have you tried keylookup?


Tried a few; it doesn't seem to handle more than one key well. I'm worried that 1500 submissions would cause a DoS. After a few it stopped giving me results, saying computations were queued.

Could you use a similar idea to go after bitcoin keys? If so, you may not be able to crack any particular key, but you could steal the bitcoins from the ones you did crack.

As sowbug and tptacek mention, there are no common-factor attacks against Bitcoin keys. However, that doesn't mean that the PRNGs used in cryptocurrency implementations aren't a concern. The most famous incident was probably this


(you can find subsequent journalism about the effects of this if you're interested).

There have also been other cryptocurrency PRNG attacks that weren't as high-profile as this issue.

I wish I had remembered the "Biased Nonce Sense" paper (which I heard about last week but didn't read!), which someone else mentions downthread at



(This is a nonce reuse issue rather than a common-factor vulnerability, but one reason for nonce reuse can also be CSPRNG seeding problems.)

Java SecureRandom: a userland CSPRNG!

This is JS. The (current-ish) Java ones use the OS-provided facility.

With a tiny number of exceptions, Bitcoin private keys are just random sequences of 256 bits. They don't involve primality. So unless a shared CRNG is truly awful, there shouldn't be any reason why keys would cluster around certain values.

Which is basically the thesis of the underlying "Ron Was Wrong Whit Was Right" paper --- that discrete log systems are safer than RSA. Since then, there's been more work done on the pitfalls of prime generation, perhaps most notably the "Prime And Prejudice" paper, which is really excellent and more interesting technically than this article:


New systems should generally avoid RSA, for this reason among several others.

Isn't the point of "Prime and Prejudice" that primality tests that are perfectly adequate for RSA key generation are inadequate to verify the primality of adversarial inputs, in protocols like negotiated DHE or whatnot?

Dammit. I just wanted a cool link to Prime and Prejudice!

You're right, of course.

> generally avoid RSA, for this reason among several others.

What are the others?

After decades of problems with seeding RNGs, why isn't there an electronic circuit that gets a seed from quantum noise or something like that? The circuit could be part of the CPU or support chips.

After all, amplifiers are always trying to increase the signal-to-noise ratio, and the basis of the reliability of digital circuits is avoiding the noise. Instead, a circuit can amplify the noise and sample it.

There is. RDRAND for x86/x64 has been in all Intel/AMD for several years.

Most ARM SoC have some equivalent device, but they are nonstandard and require driver support.

Even the TPM chips in basically every desktop, laptop, and server for over a decade have hardware RNG. Again driver support is needed.

The problem is that cheap "blue plastic boxes" may not have a hardware RNG, nor will virtual machines or containers. Writing code to figure out what RNG is available and how to use it is a nightmare, so few people do it.

This is why most security people say “use the OS CSPRNG always”. That way user-space code doesn’t have to carry all the platform specifics with it. And presumably integrating the hardware RNG can be done once at the OS layer.

I don't know why this is still a problem these days, let's just use lava lamps for entropy: https://blog.cloudflare.com/lavarand-in-production-the-nitty...

> But since they both used the same program to generate random prime numbers, there’s a higher-than-random chance that their public keys share a prime factor

