
Why is the Internet so Slow? - okket
https://blog.apnic.net/2017/06/19/why-is-the-internet-so-slow/
======
Lagged2Death
_If the 3x slowdown from the infrastructure were eliminated, each round-trip-
time being 3x faster would affect all the protocols above, and we could
immediately cut the latency inflation from ~37x to around 10x, without any
protocol modifications._

The page this article is published on makes 87 separate network requests to 19
different domains. Even with a great deal of the content cached, it takes 7
seconds to reload over my WiFi today. And that's with an ad blocker; without
one, the page makes more requests to more domains and transfers more stuff,
more slowly.

It seems likely that cutting bloat would have a much bigger impact on
responsiveness than infrastructure upgrades. Cutting bloat would produce
benefits even (especially) in times and places where signals or infrastructure
are marginal. Cutting bloat could mean your old phone or tablet computer could
browse the web tolerably instead of ending up in a landfill. Cutting bloat
from a website today produces a benefit, for all visitors, worldwide, _today_.
Improving infrastructure may be a good investment, but cutting bloat is a
force multiplier.

The responsiveness of the web has been adjusted to optimize the number of
eyeballs that see ads. If a site is too sluggish, people leave, but a site
that's too fast is leaving money on the table. How many advertising, tracking,
and affiliate domains would a web page like this connect to, if latency were
cut by two thirds?

~~~
thinkloop
As much as ads are annoying, this has nothing to do with the article. One
man's bloat is another man's content. No need for this nag to be top comment
on HN for an article about why data isn't moving at the speed of light.

~~~
rthomas6
It's not ads! Jquery.com, bootstrapcdn.com, cloudflare.com, cloudfront.net,
google-analytics.com, fontawesome.com, Wordpress, Twitter, Facebook, Pocket,
rating-widget.com, and typography.com are all used to put the content on this
website, and none of them are ads.

~~~
ouid
but this comment was

~~~
rthomas6
Buddy I hate all that shit. I use uMatrix and block as much as I can while
still seeing the real content. In fact that's how I know what the domains are.

~~~
ouid
I was joking. I think it wasn't a terribly good joke though, so sorry.

~~~
thinkloop
Jokes don't work on HN for some reason (I liked/upvoted it)

------
zkms
Since the OP compared Internet latencies to the speed of light in glass...look
up 802.1CM/IEEE1904.3 or "CPRI over Ethernet"/"Radio over Ethernet"! Those
describe techniques for conveying LTE _baseband_ signals (digital I/Q-samples)
over Ethernet -- so the radios can live far away from the equipment that
speaks the physical / MAC / radio-resource-control layers of LTE. There are
strict latency/jitter requirements involved (the phone and the eNB run a very
fast automatic-repeat-request protocol between each other, too much latency
breaks the loop) -- and large (but predictable) data rates involved.

Read
[https://drive.google.com/file/d/0B6Xurc4m_PVsZ1lzWWoxS0pTNVE...](https://drive.google.com/file/d/0B6Xurc4m_PVsZ1lzWWoxS0pTNVE/view)
for a general introduction about latency in Ethernet/IP networks and
[http://www.ieee802.org/1/files/public/docs2016/cm-chen-front...](http://www.ieee802.org/1/files/public/docs2016/cm-chen-fronthaul-bandwidth-analysis-0716-v01.pdf) for an introduction to the "radio over Ethernet" business.

Even if you're not trying to send radio baseband data over ethernet networks
-- there's lots of noticeable queuing going on, _everywhere_. It's not just
the unmanaged and pathological "bufferbloat" queues (that occur when a device
has a network link much faster than its other network link) -- there's little
queues that are used for tasks like:

* frame aggregation

* holding onto data until a shared medium is idle

* for scheduled systems like DOCSIS or LTE, holding onto packets until an uplink grant has been given -- or holding onto packets until a station's downlink grant/timeslot becomes active

* inter-device communication (usually with ring buffers or the like)

* any processing/forwarding that isn't of the cut-through flavour usually involves queuing at least a single frame

* context switching / memory copying / interrupts

Retransmission mechanisms can introduce latency as well -- if there's no
special channel for acknowledgements, they'll use up airtime, and if data
needs to be retransmitted, it might induce delays for never-transmitted data
that's waiting for its first transmit opportunity.

In general, unless you're actively being paid to stomp out every imaginable
source of latency, there'll be little queues living everywhere in your system
-- from TCP's queues right on down to the CPU's store buffers -- because
queuing is the natural way to cope with subsystems that produce/consume at
different rates.
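
A toy illustration of that last point: even a well-behaved queue between a fast
producer and a slower consumer turns a rate mismatch into waiting time. A
minimal Python sketch (the rates and burst size are made up, purely
illustrative):

    # Toy model: a burst of packets arrives faster than the link can drain
    # them, so each later packet waits behind the backlog (queueing delay).
    
    LINK_RATE_PPS = 100_000              # link drains 100k packets/s -> 10 us each
    SERVICE_TIME = 1.0 / LINK_RATE_PPS
    
    def burst_delays(n_packets, inter_arrival):
        """Per-packet queueing delay (seconds) for a burst arriving at a fixed rate."""
        delays = []
        next_free = 0.0                      # time the link becomes idle again
        for i in range(n_packets):
            arrival = i * inter_arrival
            start = max(arrival, next_free)  # wait if the link is still busy
            delays.append(start - arrival)
            next_free = start + SERVICE_TIME
        return delays
    
    # 1000 packets arriving twice as fast as the link can send them:
    d = burst_delays(1000, SERVICE_TIME / 2)
    print(f"last packet waited {d[-1] * 1e6:.0f} us")   # ~5000 us, purely from queueing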

~~~
eru
If you are interested in bufferbloat and TCP, Google's BBR is a really
interesting attempt at solving part of the problem. See
[http://queue.acm.org/detail.cfm?id=3022184](http://queue.acm.org/detail.cfm?id=3022184)

BBR is a sender-only modification to TCP that basically solves the problem of
large buffers.
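
As an aside, on Linux you can experiment with BBR on a single socket without
touching system-wide settings. A rough Python sketch (this assumes a recent
kernel with the tcp_bbr module available and socket.TCP_CONGESTION exposed,
i.e. Python 3.6+ on Linux):

    import socket
    
    # Ask the kernel to use BBR for this one socket. Non-privileged processes may
    # also need "bbr" listed in net.ipv4.tcp_allowed_congestion_control.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
    except OSError:
        print("BBR not available here; keeping the kernel default")
    
    print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))

System-wide, the usual route is sysctl: net.ipv4.tcp_congestion_control=bbr,
typically paired with net.core.default_qdisc=fq.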

~~~
pcl
There was a great article + discussion about BBR a few weeks ago:
[https://news.ycombinator.com/item?id=14298576](https://news.ycombinator.com/item?id=14298576)

------
panic
If you're thinking, "it's fast enough!" \-- remember that a system's
performance determines what you're able to build on top of it. If GPU
designers had said "it's fast enough" ten years ago, for example, modern
neural-net AI may never have happened. It's easy to imagine doing the same
things faster, but it's harder to picture the new things that a lower-latency
Internet would make possible.

~~~
IshKebab
I don't think anyone that has ever used Skype, VoIP, Facetime, etc has thought
"this is fast enough".

~~~
viraptor
Actually, I think ~10mbit is fast enough. You can easily use Hangouts/Skype
with good quality video on that kind of link. The problems are elsewhere: it's
not stable enough, and it's not responsive enough.

Anything faster is just a cherry on top. Sure, files download faster, you can
stream 4k, etc. But I'd say that's a luxury which you could choose. Lower
speeds are "enough" for a lot of people.

~~~
manmal
That's like saying 640K ought to be enough for everyone. Once we have stable
1GBit connections on all landlines and mobile devices, we might eg stop
hoarding data on our disks altogether. Apps (if they still exist) could be
downloaded on the fly when the user chooses to start them. Lifelike VR/AR
communications could be enabled by this. Teleworking could become a no-
brainer, from any location imaginable (perhaps with telepresence). I will be
able to run a dev machine on some EC2 instance and VNC into it with an
unnoticeable lag, so I won't need serious CPU horsepower in my laptop anymore.
That's what I could think of off the top of my head..

~~~
viraptor
I don't think that's comparable to 640k. The things you mention are either
what I'd call cherry on top (VR/AR meetings? Why?) or already possible:
teleworking is completely doable today on low bandwidth and mid latency - I'm
doing it every day. VNC lag issues are mostly exactly what I mentioned: you
already have enough bandwidth - it's the latency that affects you.

Besides, high-detail remote desktop is a small, specialised segment,
relatively speaking. If you're a developer, most of the time you work with
text. If you're an office worker, most of the time you work with fairly static
apps like Office. Sure, you can't really work with remote Blender at the
moment, but we're getting into improvements for the long tail now - which is
great. But once we're past 10mbps, improving latency will help more people
than improving bandwidth.

------
infinisil
ETHZ is developing a new kind of internet architecture: [https://scion-architecture.net/](https://scion-architecture.net/)

It's deployed in some places and working already. Citing from the webpage:

> SCION is the first clean-slate Internet architecture designed to provide
> route control, failure isolation, and explicit trust information for end-to-
> end communications.

Also it will reduce latency, increase throughput and more, as mentioned in the
papers.

The project has really been getting traction in recent years, it may just
become our new internet base layer.

Currently the people at ETHZ are working on verifying router hardware
mathematically to guarantee SCION's properties.

~~~
jerardope
Very interesting approach. I've read about SCION before. I believe, however,
that its goal is primarily security. The reduced latency claim simply comes
from the fact that they try to restrict routing in such a way that, for
example, traffic with source and destination both in the US will never leave
the US because of some weird routing policies. I'm not sure about this.
Somehow, it seems to be against the principles of a truly free internet.

~~~
infinisil
Yeah, speed isn't the primary focus, more of a byproduct

I don't understand what this has to do with a free internet; routing doesn't
have any effect on that.

~~~
giobox
I think the poster means "free" as in data travelling freely regardless of
State/national borders, as the internet largely does today.

------
exabrial
Because every time we get faster connections, we download more to the client
and force more client-side processing. Angular and friends are just causing a
worse user experience, not a better one.

We also don't take advantage of DNS enough. We should/could use it to pass
along information about the client's ISP to locate a server one or two hops
away from the client. DNS could easily solve the IPv4 problem with SRV records,
but instead, we allow service location to happen at the TCP level.
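
To illustrate the SRV idea: an SRV record carries target host, port, priority
and weight, so a client can learn where a service lives in one lookup. A
hypothetical sketch using the third-party dnspython library (the service name
is made up):

    import dns.resolver  # third-party package: dnspython (>= 2.0 for resolve())
    
    # An SRV record maps a service name to (priority, weight, port, target host),
    # so the client learns host *and* port in one lookup.
    # "_api._tcp.example.com" is a made-up name, purely for illustration.
    answers = dns.resolver.resolve("_api._tcp.example.com", "SRV")
    for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
        print(f"connect to {rr.target.to_text()} port {rr.port} "
              f"(priority {rr.priority}, weight {rr.weight})")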

The internet _could_ be fast, but it's not :/

~~~
grovegames
How is Angular making the web slower? An Angular project can sit somewhere in
the 130kb range, with lighter frameworks like Vue sitting at 30kb. These are
smaller than most images or gifs used on the web. So the trade-off is one
image on your page, or a robust RIA experience with much more maintainable
code. If you build it correctly, you're decreasing the number of bytes on the
wire significantly when using client-side frameworks, over re-rendering each
and every page on the server for every interaction. I also maintain a large
vanilla JavaScript application that was written before I arrived; the
JavaScript weighs in at almost 5MB, and we have a general no-touch policy on
it because of how fragile the vanilla JavaScript is. It was poorly written,
but even a poorly written Vue app would come in at much less than 5MB of JS.
I just don't buy this as the reason the web is slow.

~~~
_jal
> So the trade-off is one image on your page, or a robust RIA experience

If I had the choice, I'd take an additional image in that trade, thanks. The
'rich' 'experiences' I've been offered so far are inferior to bog-standard web
pages. I'll grant you that it is friendlier to coders whose applications fit
the model. Beyond that, they don't work without Javascript[1]. The "richness"
is usually useless animation and similar, frequently employed because it is
there, rather than actually adding any value (Anyone remember the Jquery
animation explosion?). It breaks a lot of automation; see the JS comment.
These are all things that are important to me.

I get that folks are fine with losing me as a user/customer, and that's their
choice. But they should know why, thus my explanation.

[1] Which means I usually go elsewhere when I encounter it, because I default
to leaving it off.

~~~
ovao
If I understand you right, your contention seems to be more with animation and
'fluff' rather than with MV* frameworks like Angular specifically. To which
I'll simply say: in some cases animation is as important to the user
experience as an additional image would be, and in some cases the additional
image just contributes more 'fluff' than the animation or interactivity would.
There are many dimensions to the problem (and the article only hints at some
of them on the protocol side), and pinning JavaScript frameworks as the cause
strikes me as unfair.

What you're really asking for is for the components being used, be they
images, animations or frameworks, to be used with a deliberate purpose. Which
I think is what everyone is asking for, and what good developers and designers
strive for.

~~~
_jal
You're actually picking up the smaller of my two complaints. If everything
else about Angular were sensible, I'd probably grumble about dumb overuse of
trendy visual primitives, but that's about it. And I absolutely agree that,
used sensibly, animations and whatnot are very useful. This complaint
basically boils down to wishing all web designers were actually as good as a
lot of them think they are.

You missed my complaints about the uselessness of Angular apps without a JS
interpreter. That's important, because it means they're basically worthless
for consumption by nonhumans. (Sure, I can run Chrome headless, but that's an
entirely different can of worms that plays poorly with pipelines, not to
mention an enormous, absurd runtime for what should be a trivial file
transfer.)

Web automation on a personal scale is enormously useful. Angular breaks the
underlying assumptions that make it work for sites that don't go out of their
way to accommodate it. I get that may be an unintended benefit to some people
who want to be control-freaky about how their offerings are used, but it
enormously reduces the value of the web.

A connection just hit me - it is similar-ish to way back when, when some
misguided designers wanted to publish PDFs on the web instead of HTML. They
wanted full layout control, at the cost of basically everything else. Angular
is similar, in that doing anything the creator didn't anticipate is very
difficult, despite complying with the letter of web standards.

And also, as I said, I default to browsing with JS off. When I hit a blank
page, I curse "Angular" and go back to the search engine. So that's annoying,
but I've yet to encounter an Angular site I can't live without.

------
3pt14159
If you want an easy speedup and you use a modern version of nginx (which you
should for many reasons, chiefly performance and security), add these lines to
your nginx config file:

    
    
        # Only support HTTPS and enable http2,
        # which will allow you to multiplex requests.
        # In a separate config do a 301 from normal HTTP
        # for all paths / subdomains.
        listen 443 ssl http2 default_server;
        listen [::]:443 ssl http2 default_server;
    
        # Disable Nagle's algorithm: send small writes immediately
        # instead of waiting to coalesce them into larger packets.
        tcp_nodelay on;
    
        # Send only full packets where possible (TCP_CORK on Linux;
        # only takes effect together with sendfile).
        tcp_nopush  on;
    

It reduced our page load time by around 75%, and more for people accessing the
site from the other side of the world.
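
For anyone curious what those two directives do at the socket level: they
correspond to the TCP_NODELAY and TCP_CORK socket options on Linux. A rough,
purely illustrative Python sketch of the same knobs:

    import socket
    
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    
    # tcp_nodelay -> TCP_NODELAY: disable Nagle's algorithm, so small writes go
    # out immediately instead of waiting for an ACK or a full segment.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    
    # tcp_nopush -> TCP_CORK (Linux): hold partial segments until they are full,
    # so e.g. response headers and the start of the body share a packet...
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)
    
    # ...and uncork at the end so the final partial segment is flushed at once.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)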

~~~
zucchini_head
I'm not very experienced with these nginx settings. Could you explain how this
improves speed so much for you?

It seems that you've followed [1] by the similarity of what that suggests to
what you've suggested. tcp_nodelay seems to force it to not wait 0.2 seconds,
which nginx does so that it doesn't send lots of really small packets. I can
see this increasing speed by 200ms (a lot), but it increases network traffic
as a cost.

tcp_nopush seems to be about optimising each packet sent, reducing the total
size of them.

Interesting stuff though!

[1] [https://t37.net/nginx-optimization-understanding-sendfile-tc...](https://t37.net/nginx-optimization-understanding-sendfile-tcp_nodelay-and-tcp_nopush.html)

~~~
3pt14159
So I'm more of a machine learning / cyber security guy than a sys admin, but
with that proviso, here is my understanding:

Of course you'll have to check your own site for your own performance issues,
but while travelling from Toronto to Vilnius I found our site _much_ slower
and investigated why. We have a webapp running Ember with a mix of largish
assets, like images, and smallish assets, talking to a JSON API-standard API
written in Rails and proxied through nginx (responses are usually around 1kb
of JSON, but this gets compressed with HTTP/2).

In reality waiting 200ms is an enormous cost in itself, and it carries a
hidden one: larger packets are easier to lose. If you're serving mostly
browsers, traffic is more and more often going over WiFi or cell tower
networks with at least 1% packet loss, and when the resend happens you're
stuck with that 200ms delay again.

That's my understanding, but the real truth is "it just works for me when
testing around the world with real-world use cases, try testing it out
yourself and please let me know if you learn something new".

~~~
the8472
Nagle's algorithm is not a network level delay. If data is lost then it is
retransmitted as soon as the sender learns about the loss. Nagle only delays
application level sends.

The problem with webapps is a compounding effect of many requests.

a) dependency chains (asset A loads B loads C) mean the browser only knows to
load C once B has been loaded, so the latencies (including nagle) add up.
http/2 push is supposed to help with that.

b) statistics. your 95th percentile latency is irrelevant if you're loading
hundreds of assets and want to know when the site has finished loading.
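
Point (b) is easy to underestimate: if each request independently beats its
own 95th-percentile latency 95% of the time, the chance that the whole page
does shrinks fast. A back-of-the-envelope sketch (asset counts made up):

    # Chance that *every* request on a page beats its own 95th-percentile
    # latency, assuming (unrealistically) independent requests.
    for n_assets in (10, 50, 100, 200):
        p_all_fast = 0.95 ** n_assets
        print(f"{n_assets:4d} assets -> whole page 'fast' {p_all_fast:.3%} of the time")
    
    # 10 assets -> ~59.9%, 50 -> ~7.7%, 100 -> ~0.6%, 200 -> ~0.004%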

------
lucozade
Did I read this correctly? Is it saying that a general internet connection,
without specialist low latency connectivity, is only 37x slower than its
theoretical minimum?

That's astonishingly good. It's great that they think they can do better but
bloody hell.

It's indistinguishable from magic.
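
For a sense of what that 37x means in absolute terms, here is a quick sketch
(the distances are made up; the 37x multiplier is the article's median
figure):

    # Speed-of-light lower bound on RTT vs. the article's ~37x inflation figure.
    C_KM_PER_S = 299_792           # speed of light in vacuum
    FIBER_FRACTION = 0.66          # light in glass travels at roughly 2/3 c
    
    def rtt_ms(distance_km, fraction_of_c=1.0):
        return 2 * distance_km / (C_KM_PER_S * fraction_of_c) * 1000
    
    for km in (100, 1000, 5000):
        print(f"{km:5d} km: c-limit {rtt_ms(km):6.2f} ms, "
              f"in fiber {rtt_ms(km, FIBER_FRACTION):6.2f} ms, "
              f"37x the c-limit {37 * rtt_ms(km):7.1f} ms")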

~~~
threepipeproblm
You're saying that achieving 2.7% of the theoretical efficiency is good?

Many car engines have an efficiency of around 20% for example, as compared to
the maximum theoretical efficiency of 67%. If car engines performed only well
enough to meet your standard of amazingness, your gas costs would be 7 or 8
times higher. Or if you used a modern Toyota engine for comparison, something
like 14 times higher.

Obviously I took an example from a different sphere for comparison... if you
really find this to be an astonishing result can you elaborate?

~~~
the8472
Your car is not operating at 20% efficiency of theoretical limits if you take
the whole energy pipeline into account. You know... fusion, sunlight,
plankton, oil, refining, combustion, mechanical energy. On the other hand
speed-of-light to actual ping times is an end-to-end efficiency calculation.
Plankton in particular is terrible at converting energy.

sun -> multi-junction PV -> battery -> electric motors run circles around those.

~~~
krallja
> Plankton in particular is terrible at converting energy.

Says the person who can't even photosynthesize!

------
jpl56
Let's add to that all the fuss that goes into delivering us ads based on our
recent browsing... Improving physical delays will not speed up browsing!

~~~
FussyZeus
I get amazed every time I'm configuring a new install of an OS, because the
Internet is shit until I install uBlock. Then suddenly it's back to normal.

------
agumonkey
No mention of bufferbloat; the mitigation has been in mainline Linux for a few
releases now. If all devices used these patches or applied the same logic,
could it also work?

------
sporkenfang
Javascript frameworks. Try timing the loading of a text-only or text/html/css
site, then your favourite site built with React or Angular.

------
kingosticks
The idea that browsing the modern web transfers only "tens of kilobytes of
data" is optimistic. I think you can argue that bandwidth constraints in the
real world are still real. Improving the underlying stack is one thing and
would obviously bring improvements, but browsing speed won't be one of them:
if you make responses faster, they'll just stuff more garbage into those
responses.

------
lobster_johnson
Tangentially, I have been struggling, for several years, with Safari being
extremely slow at opening sites, compared to almost instant loading in Chrome.
Anyone seen this?

I'm always on fast networks (100mbps or so), but I find myself waiting 4-5
seconds or more before Safari even shows signs of connecting, even to things
like Google and YouTube. It seems to be limited to HTTPS sites, and seems to
have been introduced when Apple started using a new MacOS TLS/SSL framework
called something like "Apple secure transport" (I forget exactly) a few MacOS
versions back, and it's almost certainly not DNS (I get the same issue with
Google DNS as with my provider's).

I don't even know how to debug this, since I don't have a command line client
that goes through the same TLS/SSL framework code as WebKit. Curl is fast, for
example.

~~~
jrnichols
I've used Safari since the very beginning, and I haven't really noticed this,
no. Extensions, perhaps?

------
fredley
TL;DR: TCP and DNS, which is what you would expect. If you want light-speed
internet, send UDP packets with a laser.

~~~
IndrekR
Or any form of electromagnetic wave passing through a medium with a low
dielectric constant, like microwave communication through the atmosphere.

Speed of propagation in typical cable is around 66% of c. That is not too bad.
More latency is probably caused by buffering.

Speed of light is just too low. We should get a lobby-group working on it and
the limiting laws of physics.

~~~
the8472
You really shouldn't be complaining about the speed of light. We're leaving a
lot on the table by sending the data along the surface of earth instead of
through it.

With neutrino-based signalling you eliminate both the lower propagation speed
and the suboptimal path.

~~~
sgift
Has neutrino-based signalling been achieved or is it still a research topic?
Last time I read about it it was more a case of "if we could do this it would
be great, but we have no idea how to do it" and a quick Google search only
brought a Forbes article from 2012 which had a similar sentiment.

~~~
dukwon
> Has neutrino-based signalling been achieved or is it still a research topic?

An accelerator- or reactor-based neutrino experiment can detect whether the
source is on or off, so in principle you can send binary information. This has
been demonstrated using NuMI/MINERvA [1]. It's more of a novelty than anything
practical.

[1] [https://arxiv.org/abs/1203.2847](https://arxiv.org/abs/1203.2847)

------
WhoBeI
"Why are we so far from the speed of light?"

Because we don't know how to build an optical router with the same
capabilities as the ones we use today and the alternative is to give up on net
neutrality. I'm ignoring endpoint performance and protocol bloat.

There is work being done in quantum computing that might eventually lead to
some CPU-like device operating on photons alone, and hollow-core fiber optics
also seems to be an active field. Work on upgrading the various protocols,
together with limiting network requests, seems to be the way to go until our
technology catches up.

------
Yellow_Boat
"Slow" internet is not so bad (especially when it is not really THAT slow).
Limitations in connectivity force designers and developers (therefore large
companies) to optimise things and save resources. Internet speed growth rate
could be 5% (compounding) would be enough in order to provide a friendly
environment for new frameworks, technologies in the field of network and web
to benefit all.

~~~
AndrewGaspar
Speedtest.net claims an increase in average internet speeds of 40% in 2016:
[http://www.speedtest.net/reports/united-states/](http://www.speedtest.net/reports/united-states/)

~~~
eric_h
I'm curious how much of that is actual infrastructure improvements and how
much is ISPs increasingly prioritizing traffic to speedtest.net.

------
nspattak
Because applications are written with the "if it is not unacceptable, it is
acceptably fast" mindset.

------
cathleenv87
You should move to Romania.
[https://motherboard.vice.com/en_us/article/jp5aa3/why-romani...](https://motherboard.vice.com/en_us/article/jp5aa3/why-romanias-internet-is-so-much-faster-than-americas)

~~~
baybal2
A lot of latency inflation over the past 10 years came from an increase in L3
switching over L2. Nowadays you can have over 10 IP-switching hops in DCs and
within the last mile.

You also get more hops thanks to some backbone networks abandoning MPLS in
favour of native routing/packet labeling.

In Romania, as I understand it, networks are largely free of L3 switching
overhead thanks to a large portion of the networks running pure L2.
~~~
vbernat
Most switches are routing switches and the same pipeline is used for both L2
switching and L3 routing. There is no meaningful latency difference between
switching and routing unless the switch has the ability to completely disable
L3 routing. The typical latency is 500ns in store-and-forward mode
(cut-through falls back to store-and-forward in most situations). If L2
processing could be done in 300ns, the difference is 200ns. With 10 routers,
this is 0.002 milliseconds. Not something noticeable. Even if you compare with
cut-through switching, which could lower the latency to 50ns, that's still
only a 0.005 millisecond penalty.

~~~
baybal2
My measurements give me from 10ms to 50ms for IP routing on mainstream
hardware (for the whole route).

L2 switching is faster than IP routing because even cheap "smart" L2 hardware
comes with hardware acceleration, but a typical non-core router for a mid-tier
ISP is usually a consumer-grade, PC-hardware-based router.

~~~
vidarh
Do you mean microseconds here? ms is milliseconds, and that just does not make
any sense. E.g. I'm 7 (visible) hops away from the nearest Google DNS, and
roundtrip pings to 8.8.8.8 are between 7ms and 8ms.

~~~
baybal2
Yes, milliseconds. The difference there is because the routers above you are
most likely carrier-grade equipment like Cisco gear with ASIC-based switching.
This is what you can expect from a big ISP in a big city, and less likely in
networks serving more remote suburbs.

~~~
vbernat
An EX2200, a routing switch you can buy for about $500, is also ASIC-based.
Linux on a server has no problem routing with a latency on the order of 100ns.
I don't see what kind of equipment would route on the order of 10ms. That's
just too huge.

~~~
baybal2
I meant 10ms for the whole route.

> 100ns

Under load with any sizeable routing table?

~~~
vbernat
Route lookup with 1 core with a full view on Linux is on the order of 100ns.
[https://vincent.bernat.im/en/blog/2017-ipv4-route-lookup-lin...](https://vincent.bernat.im/en/blog/2017-ipv4-route-lookup-linux#performance_1)

Linux is able to handle about 5 Gbps of traffic of small packets (64 bytes)
with one core. This gives a budget of about 100ns per packet to keep pace. You
would need to find a Linux router 10000 times slower than this to get a 1ms
latency.
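
The arithmetic behind that budget, for anyone who wants to check it (a trivial
sketch; it ignores Ethernet framing overhead):

    # 64-byte packets at 5 Gbps: how much time does one core get per packet?
    link_bps = 5e9
    packet_bits = 64 * 8
    packets_per_s = link_bps / packet_bits        # ~9.8 million packets/s
    ns_per_packet = 1e9 / packets_per_s           # ~102 ns
    print(f"{packets_per_s / 1e6:.1f} Mpps -> {ns_per_packet:.0f} ns per packet")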

~~~
baybal2
Then, I must be getting something wrong

------
NelsonMinar
I appreciate the big picture they've presented. I think an important next step
is to measure more precisely. Specifically to look at how QUIC and HTTP/2
impact real world performance. After reading the paper it's not clear they've
looked at that at all.

------
kv85s
Answer: Advertising

~~~
nkrisc
The current common implementation of advertising. You can serve ads that don't
noticeably slow anything down.

------
chrisweekly
High-Performance Browser Networking is a phenomenal resource. Highest possible
recommendation. [https://hpbn.co](https://hpbn.co)

------
duke360
it it really interesting but simply "ignore the technical challenge for now"
makes the Whole discussussion pointless, as the problem here ARE the technical
challenges that a zero latency (above the theoretical limit) network
implies... Anyway it is good to know the numbers: in my opinion a 37x factor,
in this case is not so bad

------
IanKelling
CDF = cumulative distribution function

People who care about how fast the internet is are not the same set as people
who know statistics acronyms, and they don't show up in a simple web search.
You know how else we can make the internet better and faster? Publishing
articles with software and licenses that make it easy for people to improve
and share your article.

~~~
enthdegree
If you are unfamiliar with basic statistical concepts it is hard to believe
you will be able to provide any compelling contributions to making the
internet faster.

~~~
DamonHD
No, that really doesn't follow.

Some classes of issues will be better solved if the solver understands stats.
M/M/1, distributions, percentiles, etc, yes. If only I understood them...

But my local major London paper somehow needs 4MB of crap and >60s to display
a 2-paragraph story. I don't need anything as subtle as "stats" to tell them
that they're holding it wrong, and why I avoid their site like the plague.

------
shmerl
ISPs should come out of the stone age, and should start using fiber optics
everywhere.

------
RichLewis007
Why are you in such a hurry?

------
gmny
If the Internet moved as fast as the fastest thing in existence it would be
faster. That's brilliant.

~~~
ankitnawlakha
Internet Explorer might face hard times! :D

