
It's the Latency, Stupid (1996) - ColinWright
http://rescomp.stanford.edu/~cheshire/rants/Latency.html?HN_report=1
======
pasbesoin
I ran into this when some moron management -- sold by greedy, moron vendors --
wanted to share an IBM tools (Rational, etc.) installation between the U.S.
and India (primarily -- also some other locations).

Damned chatty software and in many small and synchronous/sequential pieces. It
ground to an utter halt and was, in its default configuration, entirely
unusable. Quelle surprise.

I'd argued for testing over deliberately hindered (latency et al. injected)
network links, before proceeding or at least before placing dependency upon
the rollout.

It says a lot that even in this fairly technical group -- a quite significant
development shop within a large organization -- no one seemed to get it. Well,
one or two did, privately, but there was no effective means to rock the boat.

I'm not sure what conclusion to draw from this, other than that persistently
over many years, our profession as a whole (as opposed to some more effective
parts) has refused to "get" latency.

~~~
pekk
I don't know which profession you mean; your example is a failure of consumers
of technology to "get" latency more than it is a failure of producers to get
it.

~~~
pasbesoin
I guess I mean some union or intersection of information systems and
information technology.

The enormous private employment sphere -- a substantial part of it being
significantly sized corporations -- where so many graduates go off to have
"careers". It's not "sexy" employment, and careers aren't what they used to
be, but there's still a lot of it around. (U.S. perspective)

In other words, Scott Adams could probably write a clever strip or three about
latency. Perhaps he has.

------
kokey
When I read things about latency I always somehow get reminded of the story of
the 500 mile e-mail:
[http://www.ibiblio.org/harris/500milemail.html](http://www.ibiblio.org/harris/500milemail.html)

~~~
hmsimha
I liked this story, but there were a few things I didn't quite understand. How
did they manage to get latency to roughly the speed of light? And furthermore,
wouldn't emails be routed through a series of intermediary SMTP servers,
likely closer than 500 miles in distance anyway?

~~~
sampo
I remember seeing (a long time ago) a further comment where he confesses that
he didn't remember the details any more when he wrote the story, so he
just inserted some numbers.

------
zbowling
This is a problem that most people don't understand. The wireless at our
office does 256 Mb/s over 802.11ac, but the wireless hardware adds about 5 to
15 ms of latency over gigabit ethernet.

For gaming this is death, where it's not bandwidth that kills you, it's
latency.

I wonder how this could be worked into the "internet of tubes" analogy to
explain it to people?

~~~
jacquesm
Well... how to make this visible? If you imagine a tube that is just wide
enough to contain a single marble and you fill that tube with marbles all the
way then you can also imagine that if you push one new marble in on one end
_instantaneously_ another one will pop out the far end. It's almost as if the
marble you put in came out the other end. So even though the electron (the
marble) moved very slowly the signal propagated extremely fast.

Wireless hardware shares a chunk of the spectrum, much like cell phones share
a chunk of the spectrum. So a node is not always transmitting the way a full-
duplex ethernet connection is. Your packets wait for their slot to open up and
then they are transmitted. The same happens on the way back. So the throughput
can be extremely high but the latency can be murderous at the same time. And
that's assuming the packet made it in one piece and did not have to be re-
transmitted, in which case the lag will be even longer.

I hope that explains it clearly enough.

~~~
seacious
A slight nitpick that I mention only because I just learned this recently. The
marble on the other end wouldn't come out instantaneously. The time it would
take, after starting to push the new marble in, for the marble on the other
side to start coming out is determined by the speed of the pressure wave that
would propagate through the marbles. You can see a really cool demonstration
of this principle here:
[http://a.gifb.in/092011/1317143770_slinky_dropped_in_slowmot...](http://a.gifb.in/092011/1317143770_slinky_dropped_in_slowmotion.gif)

~~~
taejo
> speed of the pressure wave that would propagate through the marbles

IOW, if it's not clear, the speed of sound in glass (assuming the marbles are
made of glass).
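For a sense of scale, here is a back-of-the-envelope sketch; the ~5,000 m/s figure for sound in glass and the 1 m tube length are illustrative assumptions:

```python
# Rough comparison: how long a "push" takes to traverse 1 m of marbles
# (a pressure wave at roughly the speed of sound in glass) versus how
# long an electrical signal takes to cross 1 m of copper cable.
SOUND_IN_GLASS = 5_000        # m/s, ballpark longitudinal speed (assumed)
SIGNAL_IN_COPPER = 2e8        # m/s, roughly 2/3 the speed of light

length = 1.0                  # metres

marble_delay = length / SOUND_IN_GLASS      # pressure wave through marbles
signal_delay = length / SIGNAL_IN_COPPER    # electrical signal in a cable

print(f"marble tube: {marble_delay * 1e6:.0f} us")   # 200 us
print(f"copper wire: {signal_delay * 1e9:.0f} ns")   # 5 ns
```

So the marble analogy understates the real thing by a factor of tens of thousands, but the qualitative point stands: the carrier moves slowly, the signal does not.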

------
dtaht
Stuart Cheshire is the godfather of the bufferbloat project.

With algorithms like fq_codel we've reduced induced network latencies by
orders of magnitude...

Two examples of fq_codel in action:

100Mbit:

[http://snapon.lab.bufferbloat.net/~d/nuc-client/results.html](http://snapon.lab.bufferbloat.net/~d/nuc-client/results.html)

Fixing a cable modem:

[http://burntchrome.blogspot.com/2014/05/fixing-bufferbloat-on-comcasts-blast.html](http://burntchrome.blogspot.com/2014/05/fixing-bufferbloat-on-comcasts-blast.html)

I enjoyed reading all the analogies in this thread. Stephen Hemminger and I
try to explain the latency problems involved in queueing with demos using
water bottles in various configurations.

[http://www.bufferbloat.net/projects/cerowrt/wiki/Bloat-videos#Dave-Taumlhts-Water-Videos-November-2012](http://www.bufferbloat.net/projects/cerowrt/wiki/Bloat-videos#Dave-Taumlhts-Water-Videos-November-2012)

------
nextweek2
This problem, like IPv6 deployment, has a solution, but ISPs are wary of
changing the network.

The bufferbloat issue has been worked on for the last 20+ years, and the CoDel
algorithm is a great step toward fixing it.

Try it out for yourself on Linux. To turn on ECN, which helps feed back to the
AQM, add the following line to /etc/sysctl.conf:

  net.ipv4.tcp_ecn = 1

To change the Active Queue Management scheduler, add the following line to
/etc/rc.d/rc.local:

  tc qdisc add dev eth0 root fq_codel

replacing eth0 with the name of the computer's network card.

For Windows (all these settings require the “Run As Administrator”
permission), to show the current state of the TCP settings:

  netsh int tcp show global

To turn on ECN:

  netsh int tcp set global ecncapability=enabled

To turn on CTCP:

  netsh int tcp set global congestionprovider=ctcp

~~~
greedy_buffer
On Windows 8.1 I was unable to configure CTCP using this command.

Running this under an admin Powershell reported CTCP:

  (Get-NetTransportFilter | Where DestinationPrefix -eq '*' | Get-NetTCPSetting).CongestionProvider

From here:

[http://social.technet.microsoft.com/Forums/windows/en-US/51b6d5e2-19b2-4b71-afdd-313ce4d2fe8f/how-can-i-enable-compound-tcp-ctcp-on-windows-8-81?forum=w8itpronetworking](http://social.technet.microsoft.com/Forums/windows/en-US/51b6d5e2-19b2-4b71-afdd-313ce4d2fe8f/how-can-i-enable-compound-tcp-ctcp-on-windows-8-81?forum=w8itpronetworking)

------
djf1
I just did the same ping from Stanford to MIT:

90.340 ms

It's interesting to me that the latency has not improved since 1996. I guess
it's probably hard to maintain (let alone improve) latency as network
complexity greatly increases.
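Part of the answer is that the speed of light in fiber puts a hard floor under that number. A quick sketch, where the ~4,300 km great-circle distance and a perfectly direct fiber path are assumptions (real routes are longer):

```python
# Best-case round-trip time between Stanford and MIT over fiber,
# assuming a ~4,300 km great-circle path and light at ~2/3 c in glass.
C = 299_792_458               # speed of light in vacuum, m/s
FIBER_FACTOR = 2 / 3          # light in fiber travels at roughly 2/3 c

distance_m = 4_300e3          # assumed great-circle distance, metres
one_way_s = distance_m / (C * FIBER_FACTOR)
rtt_ms = 2 * one_way_s * 1e3

print(f"theoretical floor: {rtt_ms:.1f} ms")   # ~43 ms
```

Against a ~43 ms floor, a measured ~90 ms leaves only about a factor of two of routing, queueing, and path-indirectness overhead to squeeze out.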

~~~
virtuallynathan
In the past few years, tech like 100G coherent DWDM has reduced latency by
10-20% on longhaul fiber links, as dispersion-compensation fiber is no longer
needed.

Other companies have simply opted to pull the slack out of the fiber to
decrease latency. It's basically "you get what you pay for".

------
chetanahuja
The post from 1996 is relevant again because the same story is repeating
itself over mobile connections. Note how the mobile networks never advertise
the latency (round-trip time) of the network, always the bandwidth. I can't
find the reference now, but I've seen a rule of thumb that says that across
data-transfer technologies (whether they move data between disk and memory,
DRAM and CPU, or two nodes on the internet), the advancements in bandwidth go
approximately as the square of the advances in latency. E.g., a newer radio
protocol for mobile networks might have quadruple the bandwidth of the
previous technology but will most likely only improve the round-trip time by a
factor of 2. (Subject, of course, to the fundamental speed-of-light limit on
signal propagation: you're never going to get a 2x improvement in
coast-to-coast ping latency over fiber-optic cable over what's already
achievable, something like 0.6c.)

~~~
micro-ram
Isn't that why we used to have streaming protocols like Z-Modem that didn't
ACK every packet or at least did so asynchronously?

~~~
mobiplayer
TCP does that, and it's part of why latency matters: the window of
unacknowledged data in flight has to cover the bandwidth-delay product, so the
higher the latency, the bigger the window needs to be to keep the pipe full :)
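The window TCP needs to keep a link busy is the bandwidth-delay product; a quick sketch with illustrative example numbers:

```python
# Bandwidth-delay product: the number of bytes that must be "in flight"
# (sent but not yet acknowledged) to keep a link fully utilized.
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    return bandwidth_bps / 8 * rtt_s

# 100 Mbit/s link at 80 ms RTT: ~1 MB must be in flight.
print(bdp_bytes(100e6, 0.080))   # 1000000.0

# The same link at 5 ms RTT needs a much smaller window.
print(bdp_bytes(100e6, 0.005))   # 62500.0
```

The numbers are made up for illustration, but they show why long-latency paths need large windows (and window scaling) to reach the advertised bandwidth at all.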

------
fpgeek
Absolutely. For instance, improved latency is the big advantage of LTE for me.
The extra bandwidth is nice, but I hardly ever push that limit. It's the
latency improvement (~200ms to ~50ms, IIRC) that makes the connection
qualitatively different.

------
Gracana
A little off topic here, but that name, Stuart Cheshire, seemed familiar.
Where would I have heard of him before? From some old game I used to play, I
thought. RoboWar? No... Bolo!

Great little game on the Macintosh. Networked game with tanks and pillboxes
and resources and alliances and a "little green man" who you could command to
run from your tank to harvest trees and place mines and buildings and roads.
Lots of interesting mechanics packed into one little game from 1987. You could
even write "AI" modules to help you out or run the game entirely. I think it
used token ring networking, but maybe that was only when it was played on
AppleTalk.

[http://en.wikipedia.org/wiki/Bolo_%281987_video_game%29](http://en.wikipedia.org/wiki/Bolo_%281987_video_game%29)

Text and pictures describing how to play:
[http://bishop.mc.duke.edu/bolo/guides/bolomanual/](http://bishop.mc.duke.edu/bolo/guides/bolomanual/)

------
PaulHoule
One of the core ideas of corporate ideology is that nine women can have a baby
in a month. It's the "throughput delusion".

Customers don't feel throughput, they feel latency. In fact, they don't feel
average latency, they feel worst case latency.

------
lilpirate
I have a connection from a tier-1 network in India. It's a 6 Mbps dedicated
line (a luxury for most) connected with fibre. ping 8.8.8.8 shows a round-trip
time of ~40 ms. How's that possible, considering the calculations given in
this article?

~~~
kalleboo
Google has a node close to you. Try a traceroute.

------
AnthonyMouse
This article is misleading. The fundamental flaw is that it's treating the
causes of latency as a black box while enumerating several ways to ameliorate
the causes of a bandwidth shortfall.

For example, suppose the reason your modem's latency is so terrible is that
it's a software modem and the software is poorly written. If you use a more
efficient algorithm then the latency goes down. Suppose the reason for the
high latency is that you have a router receiving packets from a fast
connection and sending them out through a slow connection, and the router
continually has a queue full of packets that have to be sent in front of
yours. In that case you can either increase the bandwidth of the slower
connection so that packets don't get backed up in that router, or at least
reduce the size of the router's buffer.
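The buffer case is easy to quantify: the delay a full buffer adds is its size divided by the drain rate of the slow link. A sketch with illustrative numbers (the 256 KB buffer and 1 Mbit/s uplink are assumptions, not from any particular device):

```python
# Queueing delay added by a standing queue draining through a slow link.
# A 256 KB buffer sounds modest, but on a 1 Mbit/s uplink every packet
# behind it waits about two full seconds.
def queueing_delay_s(buffer_bytes: int, link_bps: float) -> float:
    return buffer_bytes * 8 / link_bps

delay = queueing_delay_s(256 * 1024, 1e6)
print(f"{delay:.2f} s of added latency")   # 2.10 s
```

This is the bufferbloat arithmetic in miniature: either raise the link rate (the denominator) or shrink the buffer (the numerator), and the added latency falls proportionally.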

~~~
brisance
The person who wrote it is Stuart Cheshire, who invented Bonjour. I think he
knows what he's talking about.

For example: say you need to send a message from Earth to Mars, which is
currently 0.794 AU away. That means it takes light 6.603 minutes to travel the
distance. It doesn't matter how many routers or satellites you put along the
path, it will take at least that long; having more routers is just going to
make things worse. That's why he mentioned the speed of light in fiber versus
the actual latency. Of course, you could wait for the planets to come closer,
since they are on different orbits around the sun, but that's a different
argument altogether.
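The arithmetic behind that figure checks out:

```python
# One-way light time from Earth to Mars at a separation of 0.794 AU.
AU_M = 1.495978707e11         # metres per astronomical unit
C = 299_792_458               # speed of light, m/s

distance_m = 0.794 * AU_M
minutes = distance_m / C / 60
print(f"{minutes:.2f} minutes")   # about 6.6 minutes
```

No amount of bandwidth along the path changes that number, which is exactly the article's point.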

Or maybe until we get networking based on quantum entanglement but I don't
know enough about the subject to make any comments on that.

~~~
Tyr42
As for quantum entanglement, it's not going to help. Think of it like a pair
of dice that always sum to 7 when they are independently rolled. Yes, that
requires spooky action at a distance, but no, you can't transmit information
faster than the speed of light using the dice to communicate.

------
stelfer
"Latency Lags Bandwidth", Patterson 2004
[http://dl.acm.org/citation.cfm?id=1022596](http://dl.acm.org/citation.cfm?id=1022596)

------
known
[http://theos.in/windows-xp/free-fast-public-dns-server-list/](http://theos.in/windows-xp/free-fast-public-dns-server-list/)

------
cientifico
I thought that the speed of electricity -- actually, of the electrons in a
cable -- for consumer use was around 1 cm/s.

~~~
ColinWright
Signals travel at about 30 cm/nsec; in coax, about 2/3 of that. The actual
electrons themselves travel at about 1 mm/sec.

Sort of.

You can find out more with web searches, but those are broadly the numbers.
Broadly.

[http://www.physlink.com/education/askexperts/ae69.cfm](http://www.physlink.com/education/askexperts/ae69.cfm)

[https://uk.answers.yahoo.com/question/index?qid=201304161316...](https://uk.answers.yahoo.com/question/index?qid=20130416131646AASIrg8)

~~~
Retr0spectrum
Why is it slower in coax?

~~~
zackmorris
One more way to think about it is the propagation delay of a transmission
line, or actually the opposite of that, which is the velocity factor,
proportional to 1/sqrt(LC):

[https://en.wikipedia.org/wiki/Wave_propagation_speed](https://en.wikipedia.org/wiki/Wave_propagation_speed)

Coaxial cables are great for blocking interference because one conductor is
inside the other, so external electric fields can't penetrate. But they have a
high parasitic capacitance because the outside conductor has a large surface
area.

So if capacitance is causing too much propagation delay, you can use twisted
pair. It also avoids interference, but by a different mechanism: the wires
constantly change position in relation to external electric fields.

[https://en.wikipedia.org/wiki/Twisted_pair](https://en.wikipedia.org/wiki/Twisted_pair)

The problem with coax for cable TV though is that for high frequencies, losses
can be high because the capacitor begins to approximate a conductor (and TV
uses really high frequencies!). So cable length becomes a limitation. Then you
use fiber optic cables (I find total internal reflection somewhat miraculous,
like an optical analog to superconductors).

Also in computer chips, parasitic capacitance is a problem because capacitance
grows with area and shrinks with distance. But the conductors are so close to
one another that even a small area results in a large capacitance. That's why
the latency to memory can be orders of magnitude higher than say, cache.
Optical interconnects would help here as well.

It's been 15 years since I studied this stuff though, so if I misremembered,
somebody please correct me!
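The 1/sqrt(LC) relation above can be plugged in directly. A sketch using ballpark per-metre values for RG-58-style coax; the 250 nH/m and 100 pF/m figures are assumptions, not datasheet numbers:

```python
import math

# Propagation velocity of a transmission line: v = 1 / sqrt(L * C),
# with L and C taken per unit length.
L_PER_M = 250e-9    # henries per metre (assumed, RG-58-ish)
C_PER_M = 100e-12   # farads per metre (assumed, RG-58-ish)

C_LIGHT = 299_792_458                    # speed of light, m/s
v = 1 / math.sqrt(L_PER_M * C_PER_M)     # ~2e8 m/s
print(f"velocity factor: {v / C_LIGHT:.2f}")   # 0.67
```

That lands right on the commonly quoted ~0.66 velocity factor for solid-polyethylene coax, which matches the "about 2/3 of 30 cm/nsec" figure upthread.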

~~~
jpmattia
> _But they have a high parasitic capacitance because the outside conductor
> has a large surface area. So if capacitance is causing too much propagation
> delay, you can use twisted pair._

The capacitance is not really "parasitic", it is the circuit element
representation of the electric field which is propagating. The dynamic
electric field gives rise to a magnetic field, which is the inductive element
in the expression 1/sqrt(LC) that you listed.

Twisted pair is no different: There is a capacitive element and inductive
element per unit length. As it turns out, the capacitance varies as the log of
the distance between the wires, so you can twist the wires and not change the
capacitance very much. There used to be lots of "twinax", which kept the two
wires at a fixed distance, but the separation plastic was mostly a PITA. (The
pics I can find on google don't show the old flat plastic twinax that folks
used to use. You can still find that stuff hanging off old television
antennas.)

