What do you want the behaviour to be when there isn't enough entropy to provide high-quality random numbers? For cases where you need the random number right now, you probably just want the best-quality number available (urandom). For cases where it's an important random number you'll be using for a long time, like SSH/SSL key generation, you probably want to block until more entropy is available (random).
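To make the first case concrete, here's a minimal C sketch (my own illustration, not from the thread) of the "need it right now" path: just read the bytes you want from /dev/urandom, which won't block once the kernel pool has been initialized.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    unsigned char buf[16];

    /* /dev/urandom returns bytes immediately; it never blocks after
     * the kernel CSPRNG has been seeded once. */
    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL) {
        perror("fopen /dev/urandom");
        return EXIT_FAILURE;
    }
    if (fread(buf, 1, sizeof buf, f) != sizeof buf) {
        fprintf(stderr, "short read\n");
        fclose(f);
        return EXIT_FAILURE;
    }
    fclose(f);

    for (size_t i = 0; i < sizeof buf; i++)
        printf("%02x", buf[i]);
    printf("\n");
    return EXIT_SUCCESS;
}
```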
There's no such thing as a "low quality random number" from a CSPRNG. The outputs of a CSPRNG are either insecure, because the CSPRNG hasn't been seeded properly, or they're secure --- for all intents and purposes, forever. That's the problem with the old version of the Linux man page.
The new version of the man page resolves this problem: it says outright that urandom is the preferred interface, that /dev/random is obsolete, and that applications that run during early boot should use the getrandom(2) system call interface instead.
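For that early-boot case, a minimal sketch of the system call interface (assuming Linux 3.17+ and glibc 2.25+, which provides the wrapper in `<sys/random.h>`):

```c
#include <stdio.h>
#include <sys/random.h>

int main(void) {
    unsigned char key[32];

    /* flags = 0: draw from the urandom source, but block until the
     * kernel CSPRNG has been properly seeded at least once. This is
     * exactly the early-boot behaviour /dev/urandom can't give you. */
    ssize_t n = getrandom(key, sizeof key, 0);
    if (n != (ssize_t)sizeof key) {
        perror("getrandom");
        return 1;
    }

    for (size_t i = 0; i < sizeof key; i++)
        printf("%02x", key[i]);
    printf("\n");
    return 0;
}
```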
> The outputs of a CSPRNG are either insecure, because the CSPRNG hasn't been seeded properly
In what sense is this not a "low quality random number"? Every CSPRNG I've seen will output numbers that pass many statistical tests for randomness even if seeded with e.g. zero - is that not a "low quality random number" in the usual sense of those words?
What they are getting at is that thinking of this in terms of the "quality of the randomness" is the wrong mental model, one that leads you right up the garden path; so stop thinking about it that way. Discard that model.
The randomness has the same quality. It's the same pseudo-random number generation algorithm. The difference is that in one case the world knows your seed value and can predict anything you do that derives from pseudo-randomness, while in the other case the world does not know your seed value, and even if it were somehow discovered, you are regularly re-seeding anyway, so the world cannot predict your actions.
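A toy sketch of why "quality" is the wrong lens (illustrative only, not a real DRBG; assumes OpenSSL's SHA256 is available, build with -lcrypto): block i of the output is SHA256(seed || i). The stream looks statistically random whatever the seed is; the only thing that changes the security is whether the world knows the seed.

```c
/* build: cc toy.c -lcrypto */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <openssl/sha.h>

/* Toy counter-mode generator: block i = SHA256(seed || i).
 * Fully deterministic in (seed, counter); NOT a real CSPRNG. */
static void toy_block(const unsigned char seed[32], uint64_t counter,
                      unsigned char out[SHA256_DIGEST_LENGTH]) {
    unsigned char input[32 + sizeof counter];
    memcpy(input, seed, 32);
    memcpy(input + 32, &counter, sizeof counter);
    SHA256(input, sizeof input, out);
}

int main(void) {
    /* A seed the whole world knows: all zeros. The output below still
     * passes casual statistical tests, yet anyone can reproduce it. */
    unsigned char seed[32] = {0};
    unsigned char block[SHA256_DIGEST_LENGTH];

    for (uint64_t i = 0; i < 2; i++) {
        toy_block(seed, i, block);
        for (int j = 0; j < SHA256_DIGEST_LENGTH; j++)
            printf("%02x", block[j]);
        printf("\n");
    }
    return 0;
}
```

Run it twice and you get the same "random-looking" stream both times; swap in a secret seed and the bytes are indistinguishable statistically, but now nobody else can compute them. That's the entire difference.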
Is there a good reason why /dev/random AND /dev/urandom exist?