I have never heard of Jitterentropy but it sounds vaguely like previous attempts at RNGs.
/r/crypto had a good discussion about this topic a while ago: https://www.reddit.com/r/crypto/comments/9dln0v/whats_the_pr...
My intuition is that this is dangerous and will likely result in security bugs in the future.
> Using the Jitter RNG core, the rngd provides an entropy source that feeds into the Linux /dev/random device if its entropy runs low. It updates the /dev/random entropy estimator such that the newly provided entropy unblocks /dev/random.
This is a red flag. Entropy doesn't run low. https://www.2uo.de/myths-about-urandom
I'm calling it now: Don't use VeraCrypt.
They made a very questionable decision based on the sort of ignorance that leads people to use /dev/random and haveged rather than RtlGenRandom (Windows), getrandom(2) (new Linux), or /dev/urandom (old Linux).
Cryptography engineering requires care and this sort of ignorance tends to undermine secure implementations.
You can see your current entropy estimate, which is probably high due to having a keyboard, with this command:

  cat /proc/sys/kernel/random/entropy_avail

You can watch this number drain by reading from /dev/random continuously.
Haveged (http://www.issihosts.com/haveged/) has existed since 2003 and has lots of documentation and discussion of its randomness.
Entropy running low is not an actual problem. AES-CTR doesn't "run out of key".
If your OS's "entropy estimator" is producing small numbers and your userspace applications are using /dev/random, yes, that will degrade your performance. That's the actual problem.
The solution is for the developers of your software to stop using /dev/random, wholesale.
Saying that the actual problem is "entropy running low" is like saying "water is flammable". That might be true in extreme cases, but isn't in the general case.
Once the generator has been seeded (once), sure, read from urandom and don't worry.
The main issue is trying to read from it at boot time.
> If you're on an ancient Linux kernel, you can poll /dev/random until it's available if you're uncertain whether or not /dev/urandom has ever been seeded. Once /dev/random is available, don't use /dev/random, use /dev/urandom. This side-steps the "/dev/urandom never blocks" concern that people love to cite in their fearmongering. This is essentially what getrandom(2) does.
This is outlined here as well: https://paragonie.com/blog/2016/05/how-generate-secure-rando...
This is what randombytes_buf() does in libsodium on older Linux kernels.
This is actually the best of both worlds: Although /dev/urandom on Linux will happily give you predictable values if you try to read from it before the RNG has been seeded on first boot, once /dev/random is "ready" to be read from once, you know that the entropy pool powering /dev/urandom has been seeded. And then you can guarantee that /dev/urandom is secure and nonblocking henceforth.
If you can't just use getrandom(2), do the "poll /dev/random, then read /dev/urandom" dance and even the fearmonger's favorite issue to cite becomes a non-issue.
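The dance described above can be sketched in a few lines. This is an illustration of the idea (wait for /dev/random to become readable once, then read only /dev/urandom), not a hardened implementation; the function names are mine:

```python
import os
import select

# Sketch of the "poll /dev/random, then read /dev/urandom" dance for
# older Linux kernels that lack getrandom(2). Function names are
# illustrative, not from any particular library.

def wait_until_seeded():
    # Open /dev/random non-blocking and wait until it is readable once.
    # Readability implies the pool behind /dev/urandom has been seeded.
    fd = os.open("/dev/random", os.O_RDONLY | os.O_NONBLOCK)
    try:
        select.select([fd], [], [])  # blocks until /dev/random is readable
    finally:
        os.close(fd)

def secure_random_bytes(n):
    wait_until_seeded()
    # From here on, /dev/urandom is safe to read and never blocks.
    with open("/dev/urandom", "rb") as f:
        return f.read(n)

key = secure_random_bytes(32)
```

Note that /dev/random is only polled, never read, so no "entropy" is consumed from the blocking pool.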
1. /proc/sys/kernel/random/read_wakeup_threshold is 64 by default [1,2], but the kernel only considers the pool initialized with >128 bits. So in an early userspace you're likely to be reading from an uninitialized /dev/urandom after waking up from poll if you're not also checking that the entropy count is >128.
2. /dev/random could be a symlink to /dev/urandom, so the call to poll
would return as soon as possible.
3. The system could only be providing /dev/urandom.
So if you're going to have to check the entropy count anyway, a less convoluted approach would be to repeatedly retrieve it via the RNDGETENTCNT ioctl on a /dev/urandom file descriptor and sleep while it hasn't reached >128 bits yet.
Don't take my word for the feasibility of this fallback method, as I'm not a cryptographer or an implementer of cryptographic interfaces. Instead, consider BoringSSL (Adam Langley et al.), which does almost the same thing in its /dev/urandom fallback.
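The ioctl approach can be sketched like this. The RNDGETENTCNT value below is the Linux `_IOR('R', 0x00, int)` encoding; the rest is an illustration of the polling loop, not BoringSSL's actual code:

```python
import fcntl
import os
import struct
import time

# Sketch: poll the kernel's entropy estimate via the RNDGETENTCNT ioctl
# on a /dev/urandom fd, sleeping until it exceeds 128 bits. Illustration
# of the approach described above, not BoringSSL's actual code.
RNDGETENTCNT = 0x80045200  # _IOR('R', 0x00, int) on Linux

def entropy_count(fd):
    # The ioctl writes the current entropy estimate (in bits) into buf.
    buf = fcntl.ioctl(fd, RNDGETENTCNT, struct.pack("i", 0))
    return struct.unpack("i", buf)[0]

fd = os.open("/dev/urandom", os.O_RDONLY)
try:
    while entropy_count(fd) <= 128:
        time.sleep(0.1)
    # Pool is credited with >128 bits: /dev/urandom is safe from here on.
finally:
    os.close(fd)
```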
You are observing the misguided attempts of fixing this non-problem.
To me with my overly vivid imagination, the fast forking of TrueCrypt to VeraCrypt looked like a hasty power grab by DGSE. Pure speculation, though.
In contrast, VeraCrypt developers did not seem to understand what a TPM is when I tried to discuss the topic with them.
It's a valid statement in context, and using it as a dependency is not a red flag.
(Though VeraCrypt doesn't seem to get enough randomness to be safe if jitter fails, which is pretty worrying.)
Having an answer to this question is going to require research, which might take time.
The keyfiles bug mentioned was previously disclosed against 7.0a: https://cyberside.net.ee/truecrypt/misc/truecrypt_7.0a-analy...
If you have an SLA that guarantees true randomness, and a certain response time or availability, you cannot really afford to block indefinitely while Linux builds up more entropy via its natural mechanisms.
Augmenting with haveged is pretty common, all it does is add more sources of randomness. I was hoping that jitter entropy (which seems kinda like the same thing) would alleviate the need to install one more package in these cases, but it's not clear. I will have to try it out sometime.
/dev/random doesn't provide "true randomness".
/dev/random and /dev/urandom provide the same kind of randomness. The difference is that, for 99.999% of developers, /dev/random blocks for no good reason.
All haveged does is pollute the kernel entropy pool to make /dev/random less unstable in production.
What you care about is whether the RNG is securely seeded or not. Once it is, you don't care anymore. Think of an RNG like a stream cipher (which is what many of them are under the hood). Key AES-CTR with a 128 bit random key and run it to generate all the "random bytes" you will ever realistically use in any usage scenario. AES never "runs out of key". In the same way, a CSPRNG never "runs out of entropy".
We regularly re-key the system CSPRNG (by updating it, in the kernel, with "more entropy"), but not because entropy is depleted; rather, because it provides a measure of future security: if our machine is compromised, but only briefly, we don't want attackers to permanently predict the outputs of the RNG.
What you want are random/urandom interfaces that block at system startup until the RNG is seeded, and never again. What you do not want are userland CSPRNGs and CSPRNG "helpers"; those aren't solving real problems, have in the past introduced horrible security vulnerabilities, and perpetuate confusion about the security properties we're looking for.
Sign one X.509 certificate or ten million of them; the same initial secure dose of "entropy" will do just fine.
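The "stream cipher keyed once" idea can be made concrete with a toy counter-mode generator. This uses SHA-256 instead of AES purely to stay in the standard library; it is an illustration of the principle, not the kernel's actual construction, and not something to deploy:

```python
import hashlib

# Toy counter-mode generator: one fixed-size seed, effectively unlimited
# output. Illustration of "a CSPRNG never runs out of entropy" -- NOT the
# kernel's construction and NOT for production use.
def drbg(seed: bytes, nbytes: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

seed = bytes(32)            # in reality: 256 random bits from the kernel, once
stream = drbg(seed, 1_000_000)  # a megabyte of output from one 32-byte seed
```

The whole megabyte (or terabyte) carries exactly the entropy of the 32-byte seed, and that's fine: predicting any of it still requires guessing the seed.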
What does Gutmann say in 2019 about /dev/urandom vs /dev/random?
Which of the two do JP Aumasson (author of Serious Cryptography and inventor of several cryptographic algorithms used today, including BLAKE2 and SipHash), Dan Bernstein (Salsa20, ChaCha20, Poly1305, Curve25519, Ed25519, etc.), Matthew Green (professor associated with the TrueCrypt audit), et al. prefer in their own designs?
I can promise you the answer is /dev/urandom. Why do they prefer /dev/urandom? Because of the reasons outlined in the article I linked (which, unlike the mailing list post you linked, is occasionally updated with corrections).
It's not really that complicated: Use /dev/urandom.
If you're on an ancient Linux kernel, you can poll /dev/random until it's available if you're uncertain whether or not /dev/urandom has ever been seeded. Once /dev/random is available, don't use /dev/random, use /dev/urandom. This side-steps the "/dev/urandom never blocks" concern that people love to cite in their fearmongering. This is essentially what getrandom(2) does.
If you're on a recent Linux kernel, you can say "just use getrandom(2)" instead of "just use /dev/urandom", but the premise of the discussion is whether to use /dev/random or /dev/urandom not which of all possible options should be used.
See also: https://paragonie.com/blog/2016/05/how-generate-secure-rando...
The belief that /dev/random must somehow be better than /dev/urandom is, frankly, security theater.
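On a recent kernel the whole question disappears. Python, for instance, exposes getrandom(2) directly as os.getrandom (Linux-only, Python 3.6+), with os.urandom as the portable equivalent; a minimal sketch:

```python
import os

# getrandom(2) blocks only until the kernel RNG is seeded, then never again.
# os.getrandom is Linux-only; os.urandom is the portable fallback and uses
# getrandom(2) internally on kernels that have it.
if hasattr(os, "getrandom"):
    key = os.getrandom(32)
else:
    key = os.urandom(32)
```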
Entropy is a measure of unexpectedness present in a system. It is not a property of the data you have, but of the physical process that generates some signal, called information content. In other words, a given password doesn't have entropy, but the process that generated it does. What this measures is how unexpected an outcome is on average. If a process produces the same outcome every single time, it has no entropy, because you know exactly what it will do. The best you can do is a 50% chance that any bit will be 0 or 1 - you can't get more unexpected than that. Failure to achieve something close to this is called having bias, and the result is that you must go directly to the NSA. Do not pass go, do not collect 200 dollars.
Then you have functions called deterministic random bit generators (DRBGs). These are deterministic - they behave the same way for a given set of input parameters - yet they produce a stream of output that is sufficiently random-looking for cryptographic use. They can produce very large volumes of it: up to 2^44 blocks of output, for example, from NIST's CTR_DRBG before you have to abandon the stream and start again.
The entire sequence of random bits has exactly the same entropy as the seed that started it. I said data doesn't have entropy and I stick by it: the process has expanded from "sample environmental data" to "sample environmental data, shove it through a function good enough for cryptographic pseudorandomness, then generate 2^44 nice big 128-bit blocks of data from that". The _entire output_ has the same entropy, as it is part of the same process.
I'm not saying the Linux kernel does this exactly - I'm using it as a simple explanation for what I'm about to say. Once you've got entropy up to some nice amount like 256 bits, you can use it as the seed for a deterministic random bit generator. What you get is a huge stream of randomness, almost certainly unique to you, never to be seen again. And you only needed a very small amount of real entropy from hardware to get so much you will never run out.
Of course, there is nothing stopping you from "reseeding". You constantly have more entropy available and there's no harm mixing more into what you're generating (more entropy available comes from doing some more sampling of the environment and throwing this in).
Entropy is never used up. The randomness you get from /dev/random is not better. The randomness from a €1000 quantum random number generator is not better. Randomness doesn't rot like eggs, and it is not known to the state of California to cause cancer. In fact, once you've generated a key, it isn't random any more. It's known. It's static. What you got is some data that was generated with a certain amount of "surprisingness" in the process involved in getting it. If you repeat the process, you get more. There are obviously limits to the amount you get for a given amount of data, because there are only finitely many possible outcomes, so it can only be so surprising on average. So you ask for enough that there are so many possible outcomes that enumerating them all would be as hard as brute-forcing AES by trying every possible key (we would not have enough energy to do this if we converted the entire mass of the earth to energy to power a classical computer, let alone to do it before the heat death of the universe) - approximately speaking, that will have more than enough entropy.
Throwing away those generated seeds because a single application has used them is extraordinarily wasteful and stupid, because there is no need to wait. You can use that seed and the massive number of "permutations" it generates in the DRBG pretty much forever - unless, of course, you happen to need 2^44 or so 128-bit AES keys (that's over 2000 AES keys for everyone on earth). In that case, you can start generating them while you wait for another seed, in your own time. You can probably nip to the beach or leisurely read a book. You'll still have enough time for your random process to give you another seed generated with a suitably entropic process about a billion times over, and then you've got so much cryptographic-quality random material it's practically leaking out of your computer all over the floor!!!! Bits! Bits everywhere? Have you ever spilt rice? Kinda like that!
I endorse the above answer by Scott entirely, especially the bit about using /dev/urandom. This applies to any desktop or laptop computer and probably most smartphones. The only time you can have problems is in getting the initial seed in the first place, and that only happens in embedded contexts with little variation in their runtime and no way to observe environmental noise. Even there /dev/urandom is preferred, because getting more seeds is hard work and basically means more time waiting for data. So /dev/urandom is objectively the better choice.
Perhaps you don't care, but OpenSSL's new random number generator is based on NIST's CTR_DRBG. It samples entropy from the environment using the kind of estimation I've glossed over here (it follows the NIST Special Publication's entropy estimation methods). It then generates a master CTR_DRBG for the application context, and every SSL context object gets its own CTR_DRBG seeded from the master one, which is also a valid thing to do! I told you! Randomness all over the kitchen floor like the bag of rice I just dropped, so much of it you can't even vacuum it all up again, how did it get in my washing exactly!!!????
Stop it. Cease and desist using /dev/random.
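The per-context chaining described above can be sketched with the same toy hash construction. Again, this is just an illustration of the idea of seeding child generators from a master one, not OpenSSL's actual CTR_DRBG code:

```python
import hashlib

# Toy sketch of per-context DRBG chaining: a master generator, seeded once
# from the environment, hands each new context its own independent seed.
# Illustration only -- not OpenSSL's actual implementation.
def expand(seed: bytes, counter: int) -> bytes:
    # Derive a child seed from the master seed and a counter.
    return hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()

master_seed = bytes(32)  # in reality: entropy gathered once at startup
child_seeds = [expand(master_seed, i) for i in range(3)]  # one per context
```

One well-seeded master is enough: each child seed is unpredictable without the master seed, so no context ever needs to touch /dev/random.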
So was snake oil.
Dangerous in what way? If it's properly mixed into the pool, it shouldn't make the pool more predictable.
Is JitterEntropy actually a CSPRNG or just a PRNG?
Is it fork-safe?
Is VeraCrypt's implementation secure?
I originally proposed BLAKE2 for this purpose on their issue tracker, circa 2014.
Page 15 describes it in detail.
I vaguely remember after it was announced that the project was disbanded, that there was one single commit from the authors whose only change was a textual replacement of "United States" with "USA" everywhere it appeared.
Coming from the highly anonymous and secretive TrueCrypt group, this was interpreted in the most scandalous way possible, i.e. that the US govt was somehow strongarming them. I'm not claiming that part is true; I'm just curious if my memory is correct about the whole incident.
It's pretty likely that Paul Le Roux was behind TrueCrypt.
He worked on E4M, but I can't imagine that he did TrueCrypt, given the stuff he was doing at the time (post-E4M). It simply doesn't fit. Also, from the Wikipedia link:
"Le Roux himself has denied developing TrueCrypt in a court hearing in March 2016, in which he also confirmed he had written E4M."
> he [liked] the video game Wing Commander 
Somewhere between 1990 and 1995, I recall that Windows PCs in South Africa shipped with a games CD. You would get Ultima (immediately banned by your parents), Wing Commander, Sea Wolf and other games, and this would be your staple for a while. When Warcraft 2 (or it may have been Starcraft) came out, I think it cost about R 350 (USD 90 in 1995), so you had to be pickier about which games you bought than today. One should mention the rand's historic dissonance between exchange rate and PPP, but whatever.
I remember buying some shitty Disney game for the same amount and regretting it instantly (I played Warcraft 2 instead). It was not until 1999 that I saw games "in bulk" with Tiberian Sun, ReVolt and a multitude of other games being shared on DC.
https://en.wikipedia.org/wiki/Paul_Le_Roux, edited for sensational language. Note that he had the console version, but I am not sure which console that would be. The PS1 was the first major console in South Africa (apart from "video games", i.e. Famicom or NES clones), but I may stand corrected.
I feel really bad for his kids. When I was studying in Asia, I met a few mixed-race kids who had racist expat parents, and they often had major identity issues. Those who didn't stay in the expat bubble would shun their "Western side" and try to completely blend in with the locals.
Something straight out of a complex spy thriller.
> WARNING: Using TrueCrypt is not secure as it may contain unfixed security issues
Which if you are into conspiracy theories can be rendered as
Using TrueCrypt is Not Secure As => using TrueCrypt is NSA
and the Wikipedia article is accurate:
Additionally, TrueCrypt has been out there and well known for a long time. It even had a pretty decent audit done. Even with its flaws, this is a strong foundation to build on. Any brand-new undertaking is pretty much guaranteed to have more flaws. Thus we should definitely focus on improving VeraCrypt. It would also be easier to audit just the changes from TrueCrypt to VeraCrypt, as opposed to a whole new program.
I used a workaround suggested on the VeraCrypt forums: renaming the Windows boot loader, replacing it with the VeraCrypt one, and pointing VeraCrypt at the renamed copy. This meant that any Windows update that touched the boot loader would break the whole arrangement, and to fix it I needed to boot from a USB stick.
I think I most recently tried on version 1.22. It does sound like this version has some relevant fixes though.
Another machine couldn't even update to 1809 without decrypting. After updating and re-encrypting, it refuses to mount any system favorite volumes. VeraCrypt is a mess. Or rather, Windows keeps breaking things.
Probably this doesn't even have to do with VeraCrypt: after a bit of research I found some posts. It seems this error is caused by my SSD... I guess I have to reinstall my Windows.
Doesn't make sense to me, cos my laptop is running a 2015 version of Windows 10. I refused to update it. Yet Calculator still works there.
It's possible that you're right and mine only works cos I did a lot of firewalling.
The added advantage is that you can easily do backups of it.