
The strange story of “Extended Random” - dankohn1
https://blog.cryptographyengineering.com/2017/12/19/the-strange-story-of-extended-random/
======
colmmacc
The IETF TLS working group has included a relevant suggestion in the TLS1.3
draft RFC:

[https://github.com/tlswg/tls13-spec/commit/465de0e189b2b5909...](https://github.com/tlswg/tls13-spec/commit/465de0e189b2b59090d0eac0acbc42942af9ca77)

The idea is simple: use two different instances of a secure RNG. One for
"public" material that can be seen on the wire, which would include server and
client random data. Another for "secret" material such as the private parts of
DH parameters. With that setup, it's hard to impossible for DUAL_EC-style
attacks to work. Even if the RNG is "broken", there's nothing public to go on
to hint at what the state might be.

We've been doing this in Amazon s2n since the first version, and I believe
BouncyCastle does it too. Hopefully we'll see more implementations pick up the
technique.
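
The split can be sketched in a few lines. This is a toy HMAC-SHA256 keystream generator standing in for whatever DRBG a real implementation uses (s2n's actual generator is not shown here); the point is only that the two instances share no state:

```python
import hmac
import hashlib
import os

class HmacDrbg:
    """Toy HMAC-SHA256 keystream generator (illustrative only,
    not NIST SP 800-90A and not a real TLS implementation)."""

    def __init__(self, seed: bytes):
        self._key = seed
        self._counter = 0

    def random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            out += hmac.new(self._key, self._counter.to_bytes(8, "big"),
                            hashlib.sha256).digest()
            self._counter += 1
        return out[:n]

# Two separately seeded instances: an attacker who can model the
# "public" generator learns nothing about the "secret" generator's state.
public_rng = HmacDrbg(os.urandom(32))  # client/server randoms, visible on the wire
secret_rng = HmacDrbg(os.urandom(32))  # DH private values, never visible

client_random = public_rng.random_bytes(32)
dh_private = secret_rng.random_bytes(32)
```

Even if a Dual_EC-style backdoor let an attacker recover the public generator's state from `client_random`, the secret generator's seed was drawn independently, so nothing about `dh_private` follows from it.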

~~~
koolba
> use two different instances of a secure RNG. One for "public" material that
> can be seen on the wire, which would include server and client random data.
> Another for "secret" material such as the private parts of DH parameters.

Wow, that's an incredibly simple solution. I feel like that general approach
(where possible, split client-facing vs. internal state) could be applied in
many other places as well.

Are there any problems with the additional initial entropy requirements to
maintain separate RNGs (two per connection)? Or is it globally static,
internal vs external?

~~~
blattimwind
RNG state entropy is not a resource that is exhausted by using the RNG.

But even if it were, and assuming entropy limitations are relevant in this
context (which I'd argue they are not), the total entropy requirement would
not change by merely deriving the same set of values in another way. You might
only need more bytes (!= entropy) from your root RNG to seed two separate
RNGs, in which case you could just stretch the same number of bytes you'd
normally request using e.g. HKDF.
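
The stretching step is literally HKDF-Expand from RFC 5869: one draw from the root RNG, two domain-separated seeds out. A minimal sketch (the `info` labels are arbitrary, chosen here for illustration):

```python
import hmac
import hashlib

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """HKDF-Expand (RFC 5869) using HMAC-SHA256."""
    out, block = b"", b""
    counter = 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

root = b"\x42" * 32  # stand-in for a single 32-byte draw from the root RNG
public_seed = hkdf_expand(root, b"public rng", 32)
secret_seed = hkdf_expand(root, b"secret rng", 32)
# Same number of bytes requested from the root RNG as before, yet two
# seeds that are computationally independent of each other.
```

The distinct `info` strings domain-separate the outputs, so learning one seed gives no shortcut to the other without inverting HMAC.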

But again, it doesn't make much sense to think of entropy as something you can
count or use up, really. There is no way to tell, and just a few bits (say
128) of entropy would be sufficient for almost anything...

~~~
cesarb
> RNG state entropy is not a resource that is exhausted by using the RNG.

I thought so too, but a couple of years ago I was writing a comment on LWN
([https://lwn.net/Articles/642839/](https://lwn.net/Articles/642839/)) and it
suddenly clicked.

Entropy is a measure of unpredictability, that is, if you have 32 bits of
entropy, there are 2^32 possible states. Suppose you generate an 8-bit value
from that, assuming a function which uniformly mixes your state. For each of
the possible 256 output values, on average 2^24 of the possible states would
generate the same output value. Therefore, once you know the output value, you
can discard the other 2^32 - 2^24 possibilities; only 2^24 possible states
remain, so you now have only 24 bits of entropy left.
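
This counting argument can be checked directly with a toy generator small enough to enumerate. Here the state is 16 bits and the output is 8 bits, so observing one output should leave roughly 2^16 / 2^8 = 256 candidate states:

```python
import hashlib

STATE_BITS = 16

def output(state: int) -> int:
    """8-bit output derived from a toy 16-bit RNG state via a one-way mix."""
    return hashlib.sha256(state.to_bytes(2, "big")).digest()[0]

observed = output(12345)  # the attacker sees one 8-bit output
candidates = [s for s in range(2 ** STATE_BITS) if output(s) == observed]
# len(candidates) is close to 256: observing 8 output bits cut the
# state entropy from 16 bits down to roughly 8 bits.
```

With a real 256-bit state the same reduction happens, but enumerating the surviving candidates is exactly the brute-force search the next paragraph rules out.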

The reason this is not a problem in practice is that, since the output
function is one-way, an attacker would have to enumerate all possible states
to find which ones would generate the same outputs. This becomes unfeasible
once you have enough entropy in your state (256 bits or more).

(The whole trick with Dual-EC was that, if you know the secret key, the output
function is not one-way. Supposedly, either the key was discarded once
generated, or the points were generated at random and not from a secret key,
but nobody really believes that.)

~~~
tptacek
In reality, all mainstream CSPRNGs are essentially just stream cipher
keystreams (PRFs). Some of them run AES in CTR mode, some of them run HMAC.
The commonality is that the "raw entropy" (from system timings and whatnot) is
run through what is essentially a KDF and fed to the PRF.

To believe that tapping 128 bits of "entropy" from /dev/urandom is equivalent
to "depleting" 128 bits of entropy from the "system entropy pool", you must
also believe that encrypting 16 bytes with AES-CTR somehow materially weakens
the key.

Obviously, that's not happening. We don't "run out of key" as we encrypt with
a PRF. As Dan Bernstein puts it, securely expanding small secrets into very,
very long secrets is one of the core functions of cryptography. Look at the
key schedule of any block cipher for a very basic example. We reject ciphers
in designs based on their inability to encrypt _2^64 blocks_ worth of data
without potentially creating artifacts.

~~~
cesarb
> To believe that tapping 128 bits of "entropy" from /dev/urandom is
> equivalent to "depleting" 128 bit of entropy from the "system entropy pool",
> you must also believe that encrypting 16 bytes with AES-CTR somehow
> materially weakens the key.

As long as none of the 128 bits of the plaintext is known, we're also adding
128 bits of unpredictability, so the net result is zero.

If some of the plaintext bits are known, in theory the unpredictability should
decrease. For an extreme example: if all 128 bits of plaintext are known, and
all 128 bits of ciphertext are also known, there should be a single 128 bit
AES key that maps that input to that output, so the entropy left in the key
would be zero.

However, there is no known way to find that single key other than through a
brute force search through all the 2^128 keys, so it's still safe. That also
applies to the CSPRNGs: even if the entropy has decreased to zero, as long as
an attacker has to brute force through a large enough number of possible
states to take advantage of that fact, it's safe. It doesn't change the fact
that, when the entropy left is zero, there is a single state which could lead
to the observed outputs.

Yes, this means that after the entropy in the CSPRNG has passed the point
where brute force is no longer viable, the amount of entropy left on the pool
becomes a useless measurement. What matters is how much work an attacker would
have to do, not how much theoretical entropy is left.

~~~
tptacek
I've had a couple beers and am having trouble holding all these paragraphs in
my head.

But, at the risk of being repetitive, because maybe this simplifies things:

Generate a random 128 bit key by hashing something. This is what the Linux
KRNG does with raw entropy.

Initialize AES-128 with that key and start running it in CTR mode. Encrypt a
file with known contents, like /etc/ttys. That's ~1300 bytes of known
plaintext.

Now, recover the AES-CTR keystream (just XOR /etc/ttys against the AES-CTR
ciphertext).

Is it your contention that, having done that, you've placed yourself closer to
guessing the underlying 128 bit AES key?

If not, you see also why drawing out 1300 bytes of Linux KRNG randomness
doesn't help you predict future values. The randomness you read from
/dev/urandom is fundamentally just PRF keystream.

If you could look at AES-CTR keystream and use it to make an advantaged guess
at the underlying AES key, AES would be grievously broken.
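
The experiment above is easy to run. Since the Python standard library has no AES, this sketch substitutes HMAC-SHA256 in counter mode as the PRF; the structure (and the argument) is the same as for AES-CTR:

```python
import hmac
import hashlib
import os

def prf_keystream(key: bytes, n: int) -> bytes:
    """Counter-mode keystream from a PRF (HMAC-SHA256 standing in for AES-CTR)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hmac.new(key, counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:n]

key = os.urandom(16)  # the hidden 128-bit key
plaintext = b"known file contents, like /etc/ttys" * 40  # ~1400 bytes known
stream = prf_keystream(key, len(plaintext))
ciphertext = bytes(p ^ s for p, s in zip(plaintext, stream))

# The attacker XORs the known plaintext against the ciphertext...
recovered = bytes(p ^ c for p, c in zip(plaintext, ciphertext))
assert recovered == stream
# ...and now holds ~1400 bytes of keystream. Using that to guess `key`
# would mean inverting the PRF, which is precisely the break the
# construction is designed to prevent.
```

Holding the keystream is no different from holding /dev/urandom output: in both cases you have PRF output, not the key, and no known path leads from one to the other short of brute force.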

------
otakucode
All I could think of when I read 'why would the government spy on the
government' was the CIA director having to admit and apologize for the CIA
hacking into the servers holding documents being reviewed by the Congressional
committee investigating them for their torture program... Different agencies
in the government certainly have incentive and historical precedent for spying
on and manipulating the others.

------
felber
The last part of the article made me think: Are there any printers whose
software is open?

Printers were always a black box for me that wasted ink and did not work when
I needed them...

~~~
s3krit
I'm afraid I can't answer your question directly, but printers being made
proprietary was one of the things that led Richard Stallman to start the
GNU project, and the fight to get companies to provide source code for the
products they were selling to users.

> In 1980, Stallman and some other hackers at the AI Lab were refused access to
> the source code for the software of a newly installed laser printer, the Xerox
> 9700. Stallman had modified the software for the Lab's previous laser printer
> (the XGP, Xerographic Printer), so it electronically messaged a user when the
> person's job was printed, and would message all logged-in users waiting for
> print jobs if the printer was jammed. Not being able to add these features to
> the new printer was a major inconvenience, as the printer was on a different
> floor from most of the users. This experience convinced Stallman of people's
> need to be able to freely modify the software they use.

[https://en.wikipedia.org/wiki/Richard_Stallman#Events_leadin...](https://en.wikipedia.org/wiki/Richard_Stallman#Events_leading_to_GNU)

------
weinzierl
Do these printers use _Dual_EC_DRBG_ too (in addition to _Extended Random_ ),
to make them actually vulnerable?

> Ironically, the printers are now the only thing that still exhibits the
> features of this (now deprecated) version of BSAFE. This is not because the
> NSA was targeting printers. Whatever devices they were targeting are
> probably gone by now. It’s because printer firmware tends to be obsolete and
> yet highly persistent. It’s like a remote pool buried beneath the arctic
> circle that preserves software species that would otherwise vanish from the
> Internet.

Pretty chilling, but on the other hand: if this were true, it should be
possible to prove it by digging through firmware archives and actually finding
these targeted devices.

~~~
tptacek
This is a really good question. Is it published anywhere that this version of
BSAFE enabled Dual_EC?

~~~
adzm
> From 2004 to 2013, the default CSPRNG in BSAFE was Dual_EC_DRBG

~~~
tptacek
I believe you, and think I remember reading this before. BSAFE is batshit.

------
cesarb
The annoying thing is that, if they change the key_share extension to another
number (and mark the current one as "reserved"), they'll have to run the TLS
1.3 experiments all over again (to see if there's anything on the new number),
delaying TLS 1.3 even more.

------
gumby
> In 2013, the Snowden revelations revealed the existence of a campaign to
> sabotage U.S. encryption systems.

Is the Information Assurance Directorate even active anymore? It's
interesting that they played a role (perhaps unwitting) in reducing security,
or at least that their efforts to improve things were surfed by the people who
gave RSA $10M.

Back in the days when Brian Snow ran it, I actually thought the IAD was on the
side of the good guys, especially when we learned (via IBM) that their changes
to the s-boxes had improved things.

------
NelsonMinar
Does the IETF still take cryptographic advice from the NSA?

~~~
garmaine
Yes. It’s also hard not to, given the revolving door and the incestuous nature
of the industry.

~~~
tptacek
There is no "revolving door" between the industry and NSA, or really between
the industry and any part of the USG. This is something you've made up.

~~~
garmaine
I actually work in this space, and yes it is often the case that you find
people on standards bodies with prior experience at major government IT
contractors with classified gaps in their CVs, and/or former private sector
people who rotate into those same government contracting positions. These also
tend to be the people making more ... weird ... proposals that are likely
backdoors.

~~~
tptacek
Point to one such proposal, and we'll look at the resume.

