Dear God. The CSPRNG situation on Linux is deeply depressing.
/dev/urandom is useless because it spews non-random data if it hasn't been seeded yet.
/dev/random is useless because it starts blocking if you try to read too much data from it, because of a mistaken belief that a properly seeded CSPRNG can run out of entropy.
Plus they're both slow as hell, so people try to implement their own PRNGs, often with bugs in the generator or the seeding, leading to security issues.
Meanwhile the BSDs have handled this correctly for years. But inexplicably, instead of actually fixing /dev/(u)random, the Linux engineers decided to add a new getrandom() syscall, which implements the correct behaviour of only blocking if the PRNG hasn't been seeded yet.
So finally with getrandom() Linux has a way to securely generate random data without unnecessarily blocking, and now Linus seems to be floating the idea to break it again!
The kernel has plenty of ways to securely seed a PRNG at boot on modern systems; IRQ timings, multicore tricks, sensor data, etc. Run some statistical tests on it to ensure you have a couple hundred bits of randomness and you're done.
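A crude sanity check along those lines can be sketched as a monobit frequency test over the harvested bytes (a toy illustration, not one of the kernel's actual health tests; `os.urandom` stands in for the harvested entropy):

```python
import os

def monobit_ok(data: bytes, tolerance: float = 0.01) -> bool:
    """Crude monobit frequency test: roughly half the bits in a
    random sample should be set. Passing is a necessary but nowhere
    near sufficient condition for randomness."""
    total_bits = len(data) * 8
    ones = sum(bin(b).count("1") for b in data)
    return abs(ones / total_bits - 0.5) <= tolerance

sample = os.urandom(4096)  # stand-in for IRQ timings, sensor data, etc.
print(monobit_ok(sample))
```

Note that such tests can only reject obviously broken sources (stuck-at-zero, stuck-at-one); they cannot prove the input actually contains a couple hundred bits of entropy.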
Yes, getrandom() works pretty much the "right" way. But the problem is that it can still block during boot, indefinitely. And nobody really wants their computer to just stop working because the kernel can't guarantee that the entropy isn't theoretically "bad". Real users do not want this. But it happens.
The root of this is security paranoia. Security people didn't want the RDRAND instruction to be trusted. systemd didn't credit the entropy pool when mixing in the saved seed file from the previous boot; only very recently did it gain an option to credit it. These things are all mixed into the pool, and on any desktop machine /dev/urandom is absolutely fine, but security-expert pressure has forced these systems not to trust that real entropy has been added from the many sources that are already implemented. You might be surprised how many people make this problem go away by running haveged, which provides very dubious entropy.
The recent AMD issues have shown that you certainly shouldn't trust rdrand blindly. Even using it after running some statistical tests would still have blocked the kernel bootup on affected machines.
> And nobody really wants their computer to just stop working
I would rather have my computer stop working than generate cryptographic keys from an all-1s seed.
Of course filling the entropy pool should keep making progress, no matter how slowly, e.g. via the jitter RNG, and eventually unblock the system. But as long as there isn't enough entropy available for userspace, it shouldn't pretend there is.
My personal favorite is:
# rngd -r /dev/urandom -o /dev/random
But does anyone know what Windows and macOS (and iOS, for that matter) do to "warm up" entropy?
Since 2014, however, macOS has expected to get a random seed from the bootloader, which in turn gets it from rdrand if available, or some complicated timer stuff if not. I'm not sure how secure the latter is, but as of Mojave, there are no longer any supported Macs without rdrand, making the issue moot...
FWIW the OpenBSD folks first implemented getentropy() and recommended that Linux do the same: device nodes cause various issues (e.g. chroots, attacker-controlled FD exhaustion, …), so having a reliable syscall is extremely valuable.
Sadly the Linux folks way over-engineered the thing with all kinds of tuning knobs so that you can get any preexisting behaviour you want: by default getrandom will read from the "urandom source" but block until the entropy pool has been initialised once.
However, you can also:
* GRND_RANDOM to have it read from the "random source" and block if "no random bytes are available"
* GRND_NONBLOCK to have it never block on lack of entropy, whether from the entropy pool not being initialised at all (!GRND_RANDOM) or because "there are no random bytes" (GRND_RANDOM), in which case it can also fail (getentropy should only fail if you give it an invalid buffer address or you request more than 256 bytes)
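Python exposes the syscall and both flags via `os.getrandom` (Linux-only, Python 3.6+), which makes the different behaviours easy to probe; a sketch:

```python
import os

# Default: read from the "urandom source", but block until the
# entropy pool has been initialised once (getentropy()-like).
data = os.getrandom(16)
assert len(data) == 16

# GRND_NONBLOCK: fail with EAGAIN (BlockingIOError in Python)
# instead of blocking. This only trips in early boot, before the
# pool is initialised.
try:
    data = os.getrandom(16, os.GRND_NONBLOCK)
except BlockingIOError:
    data = None

# GRND_RANDOM: read from the "random source"; combined with
# GRND_NONBLOCK it can fail whenever "no random bytes are available".
try:
    data = os.getrandom(16, os.GRND_RANDOM | os.GRND_NONBLOCK)
except BlockingIOError:
    data = None
```

(On kernels 5.6 and later the "random source" was reworked, so GRND_RANDOM behaves essentially like the default path once the pool is initialised.)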
https://www.openbsd.org/papers/hackfest2014-arc4random/mgp00... — also prompted by the LibreSSL project complaining about the lack of a safe way to get good random data
Does it? Under what circumstances? Where can I read about it?
> When read during early boot time, /dev/urandom may return data prior to the entropy pool being initialized. If this is of concern in your application, use getrandom(2) or /dev/random instead.
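On Linux you can also inspect the kernel's own bookkeeping: /proc/sys/kernel/random/entropy_avail reports the current entropy estimate in bits (a sketch; the exact semantics of this counter have changed across kernel versions, and since 5.6 it typically just reads 256):

```python
def entropy_estimate() -> int:
    """Return the kernel's current entropy estimate in bits,
    as reported via procfs (Linux-only)."""
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        return int(f.read())

print(entropy_estimate())
```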
How do the BSDs handle it correctly?
I'm not a cryptography expert, but this suggestion doesn't look right.
Edit: IMO the main problem is the lack of a forward progress guarantee for entropy generation, even if there are suitable sources for entropy in the system.
edit: to clarify, occasionally not using /dev/random when it may block is not actually a security issue (in most cases)
It is used as one source among many to seed the PRNG, so I think the answer is no.
> RSA and DSA can fail catastrophically when used with malfunctioning random number generators ... network survey of TLS and SSH servers and present evidence that vulnerable keys are surprisingly widespread ... we are able to obtain RSA private keys for 0.50% of TLS hosts and 0.03% of SSH hosts, because their public keys shared nontrivial common factors due to entropy problems, and DSA private keys for 1.03% of SSH hosts, because of insufficient signature randomness ... the vast majority appear to be headless or embedded devices ...
In the extreme case, this means you can run a PRNG with a fixed seed indefinitely, which is definitely wrong because such a PRNG will necessarily loop.
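The failure mode is easy to demonstrate with any deterministic PRNG: two instances started from the same fixed seed emit the identical stream, so any "keys" derived from them collide across hosts (a toy illustration using Python's non-cryptographic random module):

```python
import random

a = random.Random(1)  # imagine an all-1s "seed"
b = random.Random(1)  # a second host with the same broken seeding

key_a = a.getrandbits(128)
key_b = b.getrandbits(128)
assert key_a == key_b  # both hosts "generate" the same key
```

This is exactly the mechanism behind the shared-factor RSA keys quoted above: independent devices seeded identically at first boot produced overlapping key material.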
That might not be feasible to exploit, however.
Given infinite time, energy, and computing power, yes. Given computers made out of matter and running on energy for use by meat-based intelligences, no.
This is really analogous to saying "technically a 256-bit encryption key is brute-forceable". In fact, this is so close to being the actual underlying situation it's barely even an analogy.
What might those be on, say, a freshly booted RasPi that hasn't even brought up much of userspace besides systemd?
Like inserting arbitrary reads of any available SSD or hard disk that has already been spun up, or something better if possible.
And in newer userspace, just properly save and restore a seed.
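A minimal sketch of what that save/restore step might look like (hypothetical path and size; real implementations like systemd's random-seed service also worry about file permissions, and crediting the entropy requires the root-only RNDADDENTROPY ioctl rather than a plain write):

```python
import os

SEED_FILE = "/var/lib/misc/random-seed"   # hypothetical location
SEED_SIZE = 512                           # bytes

def save_seed(path: str = SEED_FILE) -> None:
    """At shutdown: persist fresh random bytes for the next boot."""
    seed = os.urandom(SEED_SIZE)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.write(fd, seed)
    finally:
        os.close(fd)

def restore_seed(path: str = SEED_FILE) -> None:
    """At boot: mix the saved seed back into the pool. Writing to
    /dev/urandom mixes the bytes in but does NOT credit entropy."""
    with open(path, "rb") as src, open("/dev/urandom", "wb") as pool:
        pool.write(src.read())
```

The important detail is writing a *fresh* seed at every shutdown, so a cloned or snapshotted image doesn't replay the same seed twice.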
Anyone know if the audio for >=2016 era macbooks will ever be made to work? https://github.com/Dunedan/mbp-2016-linux
- MacBook 2015
- MacBook 2016
- MacBook 2017
- MacBook Pro 2016
- MacBook Pro 2017
Mind that MacBook Airs and MacBook Pros from 2018 onward still don't have upstream keyboard and touchpad support.
1. The GVE host side will never see the light of day.
2. It will never be used outside of Google's DCs.
3. The hardware that backs it is very likely of Broadcom origin, with their virtualisation API.
4. It made it in! Unlike dozens of attempts by other companies to do the same for their proprietary in-DC hardware. How does one do that?
EDIT: see sibling comment about wireguard and 5.4, apparently not slated for it this time but maybe 5.5.
With this he has simply copied an article from Omgubuntu and spammed it via his splog, even putting his own name on it.
The link should be changed to the correct Omgubuntu link to credit original and up to date content and not lazy copy and paste spammers.
Had sort of given up on it ever being resolved, but sounds like a patch got into 5.3 that may fix it. Installed 5.3-rc5 last night. Guess I'll wait and see if it actually resolves the issue.
sudo modprobe uinput
Wireguard won't be in before 5.5.
5.5 should release in early February 2020, so it might be in Ubuntu 20.04.
That's about it. I wouldn't use Ubuntu for Navi though. Some rolling distro makes more sense.
How is IPv4 address availability up to the Linux kernel?
Paragraph 6, s/Chromebook harder/Chromebook hardware/
Spellcheck is not a proofreader.
A dependency tracking system with mix & match of versions of subcomponents of the kernel that either work together or not looks like a nightmare to me.
Isn't quite a large percentage of the Linux kernel in the form of loadable modules, such that a given running system will only be running a fraction of the total code?
But maybe such a solution could work with a micro-kernel architecture too. It seems that a micro-kernel and its components could also be gathered in a same code base. You would get the kind of changelog the first commenter of this thread mentioned though. It seems we are speaking of two different things here: code management and kernel architecture.
I have professional experience in large organizations that use monorepos, and have been impressed with the results.
A competent engineering team should have a way to verify that all the important components of their system are controlled so that a different version can't sneak in. This could be through the revision control system, or it could be through some design notes and a verification activity.
But you have to do something to make sure your system is built from a known-working set of artifacts. And in a lot of cases your software configuration extends to your build system. I've been burned before by build system and host operating system changes that result in the software configuration of my embedded images changing and introducing bugs.
Additionally, pretty much all microkernels that support GPUs (which are very few) have about as much in-kernel driver code as monolithic kernels, since GPUs have their own MMUs and managing that MMU is about the only thing either wants to do in kernel space.