AFAIK storage is not the system bottleneck it used to be.
We always want more, but network and cores are relatively plentiful.
If we could magically (and safely) modify the software stack, which areas could give 2x or 3x improvements?
I'm being totally serious. Backends are generally fast, and the backend engineers are performance-minded.
Front end engineers are not as cognizant of performance (somewhat necessarily, since arguably they have a harder problem to solve). Back in the mid-2000s Gmail/Maps/Reader days, Google had a lot of great JS talent, but it seems to have ceded some of that ground to Facebook and Microsoft.
If you have heard Steve Souders speak, he always mentions that he was a backend guy, until he actually measured latency and realized that the bottleneck is the front end. That was at Yahoo, but it's very much true for Google too.
As far as scheduling goes, that was just one example of an important systems problem that hasn't been solved with machine learning. Not saying it can't be, of course. Just that this is a research direction and not a deployed system.
It's also important to note that there are plenty of other feedback-based/data-driven algorithms for resource management that are not neural nets. If neural nets work, then some simpler technique probably works too.
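To make "simpler technique" concrete, here's a minimal sketch of one such feedback-based approach: an exponentially weighted moving average (EWMA) that predicts each task's next resource demand from its recent history, plus a greedy placement step. All names (`ewma_predict`, `pick_core`) are illustrative, not from any real scheduler.

```python
# Hypothetical sketch of a simple data-driven resource manager:
# predict demand with an EWMA, then greedily place tasks on cores.
# Not any production scheduler's actual algorithm.

def ewma_predict(history, alpha=0.3):
    """Predict the next observation as an EWMA of past samples."""
    if not history:
        return 0.0
    estimate = history[0]
    for sample in history[1:]:
        # Newer samples get weight alpha; older state decays by (1 - alpha).
        estimate = alpha * sample + (1 - alpha) * estimate
    return estimate

def pick_core(task_histories, free_capacity):
    """Place tasks (largest predicted demand first) on the core with
    the most remaining headroom. Returns {task: core_index}."""
    predictions = {t: ewma_predict(h) for t, h in task_histories.items()}
    capacity = list(free_capacity)
    placement = {}
    for task in sorted(predictions, key=predictions.get, reverse=True):
        core = max(range(len(capacity)), key=lambda c: capacity[c])
        placement[task] = core
        capacity[core] -= predictions[task]
    return placement
```

The point is just that the feedback loop (measure, predict, act) carries most of the value; whether the predictor is an EWMA or a neural net is often secondary.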
Well sure, we all want a God compiler.
Or are there also plenty of server-to-server scenarios?
e.g. devs using a remote build system?
Or a local workload that accesses a mostly-remote DB?