
Can you please give some examples of resource mis-allocations?!

AFAIK storage is not the system bottleneck it used to be. We always want more, but network and cores are relatively plentiful.

If we could magically (and safely) modify the software stack, which areas could give 2x or 3x improvements?




As far as Google goes, the easiest place to get better end-user latency/performance (2x-3x) is ... fixing the JavaScript.

I'm being totally serious. Backends are generally fast, and the backend engineers are performance-minded.

Front-end engineers are not as cognizant of performance (somewhat necessarily, since arguably they have a harder problem to solve). Back in the mid-2000s, in the Gmail/Maps/Reader days, Google had a lot of great JS talent, but it seems to have ceded some of that ground to Facebook and Microsoft.

If you have heard Steve Souders speak, he always mentions that he was a backend guy, until he actually measured latency and realized that the bottleneck is the front end. That was at Yahoo, but it's very much true for Google too.

http://stevesouders.com/bio.php
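
To make "fixing the JavaScript" concrete, here is a minimal TypeScript sketch (mine, not from the comment above) of one classic front-end fix: batching DOM reads before DOM writes so the browser computes layout once instead of on every iteration. The ".row" selector and the sizing logic are hypothetical.

    // Hypothetical example of avoiding "layout thrashing".
    const rows = Array.from(document.querySelectorAll<HTMLElement>(".row"));

    // Slow: alternating reads (offsetHeight) and writes (style.height)
    // forces a synchronous reflow on every loop iteration.
    function resizeRowsSlow(): void {
      for (const row of rows) {
        const h = row.offsetHeight;       // read: triggers layout
        row.style.height = `${h + 10}px`; // write: invalidates layout
      }
    }

    // Faster: do all reads first, then all writes, so layout runs once.
    function resizeRowsFast(): void {
      const heights = rows.map((row) => row.offsetHeight); // reads only
      rows.forEach((row, i) => {
        row.style.height = `${heights[i] + 10}px`;         // writes only
      });
    }

The win comes from turning N forced reflows into one, which is exactly the kind of change that doesn't require any backend work.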

I would like to see a machine learning system rewrite JavaScript code to perform better and make UI more usable. I believe that's beyond the state of the art now, but it's probably not out of the question in the near future.

-----

As far as scheduling goes, that was just one example of an important systems problem that hasn't been solved with machine learning. Not saying it can't be, of course. Just that this is a research direction and not a deployed system.

It's also important to note that there are plenty of other feedback-based/data-driven algorithms for resource management that are not neural nets. If neural nets work, then some simpler technique probably works too.
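
As a concrete (and hypothetical) illustration of that simpler-technique point: additive-increase/multiplicative-decrease, the feedback rule behind TCP congestion control, applied here to a made-up concurrency limit.

    // Sketch of an AIMD feedback controller for a concurrency limit:
    // grow slowly while work succeeds, back off sharply on overload.
    class AimdLimiter {
      private limit = 10;
      private readonly minLimit = 1;
      private readonly maxLimit = 1000;

      onSuccess(): void {
        // Additive increase: probe for spare capacity one slot at a time.
        this.limit = Math.min(this.maxLimit, this.limit + 1);
      }

      onOverload(): void {
        // Multiplicative decrease: halve the limit when the system pushes back.
        this.limit = Math.max(this.minLimit, Math.floor(this.limit / 2));
      }

      currentLimit(): number {
        return this.limit;
      }
    }

TCP has run on this kind of rule for decades, which is the sense in which "some simpler technique probably works too."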


> I would like to see a machine learning system rewrite JavaScript code to perform better

Well sure, we all want a God compiler.


Network latency right now is the biggest issue we have. If we could magically (using your term here!) get computational resources and data dramatically closer to end users, it would easily give a 2x or 3x improvement. Doing this safely in this case means doing it consistently, and being able to solve things like safe replication of large data sets. I dunno how to do it, but you asked, and that's the biggest thing I can think of.
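
For what it's worth, a toy sketch (all names hypothetical) of the "get data closer to users" idea is a read-through cache with a TTL; the hard part the parent alludes to, keeping replicas consistent, is exactly what this naive version punts on.

    // Toy read-through edge cache: serve a nearby copy while it is fresh,
    // otherwise pay one round trip to the distant origin. A real system
    // needs invalidation/replication to stay consistent.
    type Fetcher<V> = (key: string) => Promise<V>;

    class EdgeCache<V> {
      private store = new Map<string, { value: V; expiresAt: number }>();

      constructor(private fetchFromOrigin: Fetcher<V>, private ttlMs: number) {}

      async get(key: string): Promise<V> {
        const entry = this.store.get(key);
        if (entry && entry.expiresAt > Date.now()) {
          return entry.value; // fast path: local copy, no WAN latency
        }
        const value = await this.fetchFromOrigin(key); // slow path
        this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
        return value;
      }
    }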


So backend-to-frontend latency?

Or are there also plenty of server-to-server scenarios?


After re-reading your comment, are you referring to end-users inside the enterprise perimeter?

e.g. devs using a remote build system? A local workload which accesses a mostly-remote db?



