Hacker News

Networks are thousands of times faster in bandwidth, but they don't have thousands of times less latency. Software is actively working to overcome the limitations of latency, such as with HTTP/2, HTTP/3, and TLS 1.3.



For a page load, why should latency matter once you get below 50 ms? The problem isn’t latency, the problem is that the software stack makes tons of round-trip requests to display some text and images. The modern web makes X11 seem like it was designed by a demoscene coder.


> The problem isn’t latency, the problem is that the software stack makes tons of round-trip requests to display some text and images.

From the post you replied to:

> Software is actively working to overcome the limitations of latency, such as with HTTP/2, HTTP/3, and TLS 1.3.

The whole point is to eliminate those round-trips, and just stream content to the browser at the limit of its available bandwidth.
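As a rough illustration of what those protocols buy you: before the first request byte can even leave, a connection pays a handshake cost measured in round trips. The RTT counts below are the standard ones (TCP handshake: 1; TLS 1.2: 2; TLS 1.3: 1; QUIC folds transport and TLS 1.3 into a single round trip); the numbers and code are an illustrative sketch, not measurements from the thread.

```python
# Sketch: connection-setup delay before the first HTTP request can be sent,
# as a function of network round-trip time (RTT), for different stacks.
# Handshake round trips: TCP = 1, TLS 1.2 = 2, TLS 1.3 = 1,
# QUIC (HTTP/3) combines transport + TLS 1.3 into one round trip.

HANDSHAKE_RTTS = {
    "HTTP/1.1 + TLS 1.2 over TCP": 1 + 2,
    "HTTP/2   + TLS 1.3 over TCP": 1 + 1,
    "HTTP/3 over QUIC (fresh connection)": 1,
}

def setup_delay_ms(rtt_ms: float, round_trips: int) -> float:
    """Delay spent on handshakes before the first request byte leaves."""
    return rtt_ms * round_trips

for stack, rtts in HANDSHAKE_RTTS.items():
    print(f"{stack}: {setup_delay_ms(50, rtts):.0f} ms at 50 ms RTT")
```

At a 50 ms RTT that is 150 ms of pure setup for the old stack versus 50 ms for HTTP/3, before any content moves; that gap is exactly the round-trip elimination being argued for.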


X11 doesn't even do any image compression. X11 seems fast because everyone uses the DRI and/or MIT-SHM extensions to hack around the fundamental brokenness of the protocol, at the cost of network transparency. :)


`ssh -X` generally disagrees with you.


I much prefer the Web to that on a slow network connection.


Try the `-C` option as well (I think that's the one): it enables compression on the SSH connection, and it makes a HUGE difference over slow network links.


> For a page load, why should latency matter

> The problem is that the software stack makes tons of round-trip requests

Latency matters because of all of those round-trip requests. Each individual request incurs at least 2x the one-way network latency (one trip out to the server, another trip back). If browsers did not try to run parallel requests, then all those round trips would sum up to a substantial overall delay.
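The arithmetic behind that claim can be sketched as follows. The request count, RTT, and concurrency figures here are illustrative assumptions, not numbers from the thread; the model simply treats each dependent request as costing one full round trip.

```python
import math

def serial_load_ms(rtt_ms: float, num_requests: int) -> float:
    """Total delay if every request waits for the previous one.
    Each request costs one full round trip: latency out, latency back."""
    return rtt_ms * num_requests

def parallel_load_ms(rtt_ms: float, num_requests: int, concurrency: int) -> float:
    """Total delay with `concurrency` requests in flight at once.
    Overlapping requests share a round trip; only the 'waves' sum."""
    waves = math.ceil(num_requests / concurrency)
    return rtt_ms * waves

# Example: a page with 100 sub-resources at 50 ms RTT.
print(serial_load_ms(50, 100))       # 5000 ms if fully serial
print(parallel_load_ms(50, 100, 6))  # 850 ms with 6 parallel connections
```

The 6-connection figure mirrors the classic per-host connection limit in HTTP/1.1-era browsers; it shows why parallelism (and, further, HTTP/2-style multiplexing) is what keeps the round-trip count from dominating page-load time.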


> If browsers did not try to run parallel requests, then all those round trips would sum up to a substantial overall delay.

That describes zero browsers. And "it's not loading parallel enough" is very much a software stack problem and not a network problem.

We could shove full pages over the wire in 200ms if we tried harder.


> Latency matters because of all of those round-trip requests.

Well then, since it's a physical impossibility to do away with them, we'd better start working on improving the speed of light.



