> No. There isn't even a 1% use case where this is useful. I'm not saying "your effort is better spent elsewhere". I'm saying "doing silly things to get better random numbers is just as likely to harm your security as to help it".
High-security and low-subversion systems aren't allowed to have complex crap like Linux in the TCB. They also prefer to offload RNG and crypto onto dedicated hardware. Examples would be NSA's Type 1 certified products and my own gear from the past.
The mere requirement of knowing every execution and failure state in the TCB eliminates Linux + /dev/urandom instantly. A few FSMs plus some analog circuits can be verified by eye, by hand, and with formal verification if desired.
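To make that concrete, here's a minimal sketch of the kind of TCB component I mean: a tiny health-monitor FSM for a dedicated hardware RNG, loosely in the spirit of the repetition-count tests in NIST SP 800-90B. Everything here is hypothetical (the names, the thresholds, the structure); the point is the size of the state space, not the particular check.

    #include <stdint.h>

    /* Hypothetical health-monitor FSM for a dedicated hardware RNG.
     * Three states, one input bit per step: small enough that every
     * execution and failure path can be enumerated by hand or fed to
     * a model checker. Contrast with /dev/urandom, whose effective
     * state space is the whole kernel. */
    typedef enum { ST_OK, ST_SUSPECT, ST_FAILED } rng_state_t;

    /* Flag a stuck-at fault: too many identical bits in a row. */
    #define STUCK_LIMIT 64

    typedef struct {
        rng_state_t state;
        uint8_t     last_bit;  /* 2 = sentinel, no bit seen yet */
        uint32_t    run_len;   /* length of current run of identical bits */
    } rng_monitor_t;

    static void rng_monitor_init(rng_monitor_t *m) {
        m->state    = ST_OK;
        m->last_bit = 2;
        m->run_len  = 0;
    }

    /* Feed one raw bit (0 or 1) from the analog source; returns the
     * current state. Behavior is a pure function of (state, last_bit,
     * run_len, bit), so it is exhaustively checkable. ST_FAILED is
     * absorbing by design: a detected fault latches until reset. */
    static rng_state_t rng_monitor_step(rng_monitor_t *m, uint8_t bit) {
        if (m->state == ST_FAILED)
            return ST_FAILED;

        if (bit == m->last_bit) {
            m->run_len++;
        } else {
            m->last_bit = bit;
            m->run_len  = 1;
        }

        if (m->run_len >= STUCK_LIMIT)
            m->state = ST_FAILED;       /* stuck-at fault detected */
        else if (m->run_len >= STUCK_LIMIT / 2)
            m->state = ST_SUSPECT;      /* degraded; keep watching */
        else
            m->state = ST_OK;

        return m->state;
    }

A component like this has a handful of states and one input bit per step, so every (state, input) combination can be enumerated by hand or handed to a model checker. That's what "knowing every execution and failure state" buys you, and it's exactly what a kernel-sized TCB can never offer.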
> I don't doubt that there are regulatory regimes with broken rules that require people to add harmful complexity to systems in order to satisfy rubber-chicken security requirements, but I'm going to call those rules what they are.
They actually force simpler designs and catch more defects, per every report ever done on one. The last survey I read had 96% reporting a great increase in QA metrics, with almost half saying the cost/time hit was negligible, especially if you use automation. Reason? Virtually no time spent debugging broken components or integrations. The Windows and Linux stuff I've seen you trust has a horrid track record, on the other hand: so many preventable errors, lack of POLA, covert channel analysis is about impossible... the list goes on. The processes you "call out" lead to none of that, as they force discipline on the developers and reviewers.
Now, the certification bodies, the paperwork focus, etc. can be harmful. It's why I got private evaluations for my stuff. I saw Sentinel do the same for HYDRA using NSA, and Secure64's SourceT (medium assurance) got a positive review from Matasano for its POLA and resilience. And that's COTS. Sirrix too, using the Nizza architecture with a tiny TCB. So the basic principles are there, the improvements are often dramatic, several companies are doing it, and the rest have no technical excuse for using weaker methods.