I've admittedly never had the 'millions of requests per second' clients, but the number of available threads has never been remotely the problem.
Back in 2006/2007 we had one app maxing out the IIS thread pool, but that was back when the cap was around 100 threads (as far as I can remember, might be wrong), and the actual underlying problem was slow SQL queries. The temp fix was to up the IIS thread count, but once we fixed the SQL problem, we didn't need those extra IIS threads any more.
And even then, what does implementing async/await get you that you couldn't fix with a load balancer and a web farm? A web farm is cheaper, easier and more effective. I think there's a Jeff Atwood piece somewhere on Coding Horror, or maybe a Joel one, to that exact effect: it's cheaper to throw hardware at a problem than dev time. There's a certain scale at which async/await actually helps, and the vast majority of C# web apps won't ever get close to it.
Every client I've taken on with performance problems was fixed by monitoring SQL and fixing the bad queries. Or it was a stupid loop calling moderately expensive SQL that ran the same query over and over. Or something that could easily be cached. Or the same query being called several times down a stack where you could just pass the result down. One of those clients even had a previous dev who went async/await happy for extra 'performance', and it did zip. I ended up ripping most of it out; as I said, it infects code up and down the stack and complicates it. All the 'optimized' async code was taking 1-2 milliseconds to run while the problem SQL was taking 5-10 seconds.
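To be concrete about the "pass the result down" fix: the pattern is language-agnostic, so here's a minimal sketch in Python (all names are made up for illustration, not from any client's code). Before the fix, each layer re-runs the same lookup; after, you query once at the top and hand the result to whoever needs it.

```python
class FakeDb:
    """Stand-in for a database connection; counts how often we query."""
    def __init__(self):
        self.calls = 0

def get_user(user_id, db):
    # Pretend this is the 5-10 second SQL query.
    db.calls += 1
    return {"id": user_id, "name": "example"}

def render_header(user):
    # Takes the already-fetched result instead of re-querying.
    return f"Hello {user['name']}"

def render_sidebar(user):
    return f"User #{user['id']}"

def handle_request(user_id, db):
    user = get_user(user_id, db)            # one query at the top...
    return render_header(user), render_sidebar(user)  # ...passed down

db = FakeDb()
handle_request(42, db)
print(db.calls)  # the expensive query ran exactly once
```

If `render_header` and `render_sidebar` each called `get_user` themselves, one page view would cost three identical queries; the fix is just plumbing, no async required.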
In a web app I can see the need for async/await if you're doing lots of slow external requests.
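That's the one case where it genuinely pays off: several slow external calls that can overlap instead of running back to back. A rough sketch (Python's `asyncio` rather than C#, and `asyncio.sleep` standing in for real HTTP calls):

```python
import asyncio
import time

async def fetch(name, delay):
    # Stand-in for an external HTTP request that takes `delay` seconds.
    await asyncio.sleep(delay)
    return name

async def main():
    start = time.monotonic()
    # Three "external requests" run concurrently, not sequentially.
    results = await asyncio.gather(
        fetch("a", 0.1),
        fetch("b", 0.1),
        fetch("c", 0.1),
    )
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
# Total wall time is roughly one delay (~0.1s), not the sum (~0.3s).
```

The same shape in C# would be `await Task.WhenAll(...)` over three `HttpClient` calls; the win comes from overlapping I/O waits, which is exactly what a CPU-bound or SQL-bound app doesn't have.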
I've seen far more issues with people not understanding how to use static and how it applies in a web application.
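The classic static mistake is putting mutable per-request state in something shared by every request hitting the process. A tiny illustration (Python class attributes behave like C# static fields here; the `Cart` example is invented):

```python
class Cart:
    # BUG: class-level ("static") attribute, shared by every instance —
    # in a web app, that means shared by every request in the process.
    items = []

    def add(self, item):
        # Looks like instance state, but mutates the one shared list.
        self.items.append(item)

request_a, request_b = Cart(), Cart()  # two independent "requests"
request_a.add("book")
print(request_b.items)  # request B sees request A's data
```

The fix is the same in any language: initialize per-request state in the constructor (`self.items = []` in `__init__`, or a non-static field in C#) and reserve statics for genuinely immutable or thread-safe shared data.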