I ran into this when some moron management -- sold by greedy, moron vendors -- wanted to share an IBM tools (Rational, etc.) installation between the U.S. and India (primarily -- also some other locations).
Damned chatty software, in many small, synchronous/sequential pieces. It ground to an utter halt and was, in its default configuration, entirely unusable. Quelle surprise.
I'd argued for testing over deliberately hindered (latency et al. injected) network links, before proceeding or at least before placing dependency upon the rollout.
It says a lot that even in this fairly technical group -- a quite significant development shop within a large organization -- no one seemed to get it. Well, one or two did, privately, but there was no effective means to rock the boat.
I'm not sure what conclusion to draw from this, other than that persistently over many years, our profession as a whole (as opposed to some more effective parts) has refused to "get" latency.
I don't know which profession you mean; your example is a failure of consumers of technology to "get" latency more than it is a failure of producers to get it.
I guess I mean some union or intersection of information systems and information technology.
The enormous private employment sphere -- a substantial part of it being significantly sized corporations -- where so many graduates go off to have "careers". It's not "sexy" employment, and careers aren't what they used to be, but there's still a lot of it around. (U.S. perspective)
In other words, Scott Adams could probably write a clever strip or three about latency. Perhaps he has.
I liked this story, but there were a few things I didn't quite understand. How did they manage to get latency to roughly the speed of light? And furthermore, wouldn't emails be routed through a series of intermediary SMTP servers, likely closer than 500 miles in distance anyway?
I remember seeing (a long time ago) a further comment where he confesses that he didn't remember the details any more when he wrote the story, so he just inserted some numbers.
This is a problem that most people don't understand. The wireless at our office does 256 Mb/s over 802.11ac, but the wireless hardware adds about 5 to 15 ms of latency compared to gigabit ethernet.
For gaming this is death, where it's not bandwidth that kills you, it's latency.
I wonder how this could work in the "internet of tubes" analogy to explain to people?
If you just mean explaining latency and not the difference between ethernet and wireless there are lots of physical instances of latency.
You want hot water from the tap. If you haven't used it in a while or have poorly insulated pipes it could take seconds or even minutes before hot water starts flowing through the tap after you've turned the knob.
Traffic. While a section of highway (say a 3-lane x 40' section) may see 30 cars every minute, a particular car that just started 60 miles away, traveling at 60 mph, is still going to take an hour to arrive. Even though tons of other traffic is flowing through constantly, that particular vehicle (message) is still in transit.
Echos are probably easy to explain this with, and are directly analogous to the sort of ping-pong, back-and-forth that network communication uses. You shout out a message and it can only travel at the speed of sound. It'll take seconds to cross the ravine and the same time to return to you. Shout 10 messages and 10 messages return in the same order all delayed by the same amount.
I had that and about 10 other examples (supersonic aircraft, hearing a car before it passes through your field of view, etc.). They were eaten by an accidental command-w.
I have a number of technical friends, but also a lot of non-technical friends. Being a nerd, however, I tend to bring up technical topics amongst my non-technical friends more often than is fair. I've found that many of our CS topics (networking, OS, performance, security, etc.) have good physical analogs to at least convey a 10,000' view. Often (like latency) there are some almost perfect analogies that everyone has experienced or witnessed. I'm not turning them into programmers, but they'll understand the terms better.
> I wonder how this could work in the "internet of tubes" analogy to explain to people?
Pretty simple. Water takes time to flow from one end of the pipe to the other. If you are trying to fill a huge tank, doesn't really matter how long it takes the water to get to you. If you are trying to fill a hundred tiny cups, it does matter because you are shutting the tap off and on again pretty frequently.
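The tank-versus-cups point can be put in numbers with a toy model (my own illustration, with made-up figures): each request costs a fixed latency plus size/bandwidth, and that fixed cost is paid once per request.

```python
# Toy model: total time = requests * (latency + per_request_size / bandwidth).
# Filling one big tank amortizes the latency; a hundred tiny cups pay it
# a hundred times over. All figures below are hypothetical.
LATENCY_S = 0.100          # 100 ms of latency per request
BANDWIDTH_BPS = 1_000_000  # 1 Mb/s link

def transfer_time(total_bits, requests):
    per_request = total_bits / requests
    return requests * (LATENCY_S + per_request / BANDWIDTH_BPS)

print(round(transfer_time(1_000_000, 1), 2))    # -> 1.1  (one big tank)
print(round(transfer_time(1_000_000, 100), 2))  # -> 11.0 (a hundred tiny cups)
```

Same total data, ten times the wall-clock time, purely because latency is paid per request.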
Well... how to make this visible? If you imagine a tube that is just wide enough to contain a single marble, and you fill that tube with marbles all the way, then you can also imagine that if you push one new marble in at one end, instantaneously another one will pop out the far end. It's almost as if the marble you put in came out the other end. So even though the electron (the marble) moved very slowly, the signal propagated extremely fast.
Wireless hardware shares a chunk of the spectrum, much like cell phones share a chunk of the spectrum. So a node is not always transmitting the way a full-duplex ethernet connection is. Your packets wait for their slot to open up and then they are transmitted. The same happens on the way back. So the throughput can be extremely high but the latency can be murderous at the same time. And that's assuming the packet made it in one piece and did not have to be re-transmitted, in which case the lag will be even longer.
A slight nitpick that I mention only because I just learned this recently: the marble on the other end wouldn't come out instantaneously. The time it would take, after starting to push the new marble in, for the marble on the other side to start coming out is determined by the speed of the pressure wave that propagates through the marbles. You can see a really cool demonstration of this principle here: http://a.gifb.in/092011/1317143770_slinky_dropped_in_slowmot...
Latency is the time taken to travel the distance between the client and the server.
Look at http://eyes.nasa.gov/dsn/dsn.html: for Voyager the latency would be 0.74 days; for the Mars Reconnaissance Orbiter, 6.6 minutes.
Someday people might play live action video games with someone on Mars with very little latency, but right now it's not even theoretical without a wormhole through space/time.
I enjoyed reading all the analogies in this thread. Stephen Hemminger and I try to explain the latency problems involved in queueing with demos using water bottles in various configurations.
This problem, like IPv6 deployment, has a solution, but ISPs are wary of changing the network.
The bufferbloat issue has been worked on for the last 20+ years, and the CoDel algorithm is a great step toward fixing it.
Try it out for yourself on Linux:
To turn on ECN (to help feed back congestion signals to the AQM), add the following line to the file /etc/sysctl.conf:
net.ipv4.tcp_ecn = 1
To change the Active Queue Management scheduler, add the following line to the file /etc/rc.d/rc.local:
tc qdisc add dev eth0 root fq_codel
replacing eth0 with the name of the computer's network card.
For windows:
(All these settings require the “Run As Administrator” permission)
To show the current state of the TCP settings:
netsh int tcp show global
To turn on ECN:
netsh int tcp set global ecncapability=enabled
To turn on CTCP:
netsh int tcp set global congestionprovider=ctcp
On Linux 3.13 and later it's easier to just set net.core.default_qdisc to fq or fq_codel in /etc/sysctl.conf, which will enable it for all your devices, including those with hardware multi-queues. Or, at the command line:
sysctl -w net.core.default_qdisc=fq # if you are a host
sysctl -w net.core.default_qdisc=fq_codel # if you are a router
(but this requires that each interface be taken up and down. Better to just stick it in sysctl.conf and forget about it)
It's interesting to me that the latency has not improved since 1996. I guess it's probably hard to maintain (let alone improve) latency as network complexity greatly increases.
In the past few years, tech like 100G coherent DWDM has reduced latency by 10-20% on long-haul fiber links, as dispersion-compensation fiber is no longer needed.
Other companies have simply opted to pull the slack out of the fiber to decrease latency. It's basically "you get what you pay for".
The post from 1996 is relevant again because the same story is repeating itself over mobile connections. Note how mobile networks never advertise the latency (round-trip time) of the network, always the bandwidth.
I can't find the reference now, but I've seen a rule of thumb that says that in all data-transfer technologies (whether they be moving data between disk and memory, between DRAM and CPU, or between two nodes on the internet), the advancements in bandwidth go approximately as the square of the advances in latency. E.g., a newer radio protocol for mobile networks might have quadruple the bandwidth of the previous technology but will most likely only improve the round-trip time by a factor of 2. (Subject, of course, to the fundamental speed-of-light limit on signal propagation: you're never going to get a 2x improvement in coast-to-coast ping latency over fiber-optic cable over what's already achievable, something like 0.6c.)
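The rule of thumb is easy to apply: if bandwidth improves by a factor k, expect latency to improve by only about sqrt(k). A tiny sketch (the numbers are hypothetical, purely to illustrate the rule):

```python
import math

def expected_latency_gain(bandwidth_gain):
    """Rule of thumb: latency improves roughly as the square root
    of the bandwidth improvement (bandwidth ~ latency-gain squared)."""
    return math.sqrt(bandwidth_gain)

# A hypothetical radio upgrade with 4x the bandwidth of its predecessor:
print(expected_latency_gain(4.0))   # -> 2.0 (round-trip time only halves)
print(expected_latency_gain(16.0))  # -> 4.0
```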
Absolutely. For instance, improved latency is the big advantage of LTE for me. The extra bandwidth is nice, but I hardly ever push that limit. It's the latency improvement (~200ms to ~50ms, IIRC) that makes the connection qualitatively different.
A little off topic here, but that name, Stuart Cheshire, seemed familiar. Where would I have heard of him before? From some old game I used to play, I thought. RoboWar? No... Bolo!
Great little game on the Macintosh. Networked game with tanks and pillboxes and resources and alliances, and a "little green man" whom you could command to run from your tank to harvest trees and place mines and buildings and roads. Lots of interesting mechanics packed into one little game from 1987. You could even write "AI" modules to help you out or run the game entirely. I think it used token ring networking, but maybe that was only when it was played over AppleTalk.
I have a connection from a tier-1 network in India. It's a 6 Mbps dedicated line (a luxury for most), connected over fibre. ping 8.8.8.8 shows a round-trip time of ~40 ms. How is that possible, considering the calculations given in this article?
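One way to see why ~40 ms is plausible: a measured RTT puts an upper bound on how far away the responder can be. A rough sketch (assuming signals travel at ~2/3 c in fibre, and noting that 8.8.8.8 is an anycast address, so the reply typically comes from a Google node near you rather than from across the world):

```python
# Upper bound on responder distance implied by a measured RTT, assuming
# ~2/3 the speed of light in fibre and ignoring queueing/processing
# delays (so the real distance is smaller still).
C_FIBER_KM_S = 200_000  # ~2/3 of 300,000 km/s

def max_one_way_distance_km(rtt_seconds):
    return rtt_seconds / 2 * C_FIBER_KM_S

print(max_one_way_distance_km(0.040))  # -> 4000.0
```

So a 40 ms RTT means the server answering you is at most ~4000 km away -- entirely consistent with an anycast node inside or near India.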
This article is misleading. The fundamental flaw is that it's treating the causes of latency as a black box while enumerating several ways to ameliorate the causes of a bandwidth shortfall.
For example, suppose the reason your modem's latency is so terrible is that it's a software modem and the software is poorly written. If you use a more efficient algorithm then the latency goes down. Suppose the reason for the high latency is that you have a router receiving packets from a fast connection and sending them out through a slow connection, and the router continually has a queue full of packets that have to be sent in front of yours. In that case you can either increase the bandwidth of the slower connection so that packets don't get backed up in that router, or at least reduce the size of the router's buffer.
The person who wrote it is Stuart Cheshire, who invented Bonjour. I think he knows what he's talking about.
For example: say you need to send a message from Earth to Mars, which is currently 0.794 AU away. This means it takes light 6.603 minutes to travel that distance. It doesn't matter how many routers or satellites you put along the path; it will take at least this amount of time. Having more routers is just going to make things worse. That's why he mentioned the speed of light in fiber and the actual latency. Of course, you could wait for the distance between the planets to shrink, since they are on different orbits around the sun, but that's a different argument altogether.
Or maybe until we get networking based on quantum entanglement but I don't know enough about the subject to make any comments on that.
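The Earth-Mars figure above checks out: one-way light time is just distance divided by c. A quick sketch (the constants are standard values, not from the thread):

```python
# Minimum one-way delay from orbital distance: latency is bounded below
# by distance / speed of light, no matter how the message is routed.
AU_KM = 149_597_870.7    # kilometres per astronomical unit
C_KM_S = 299_792.458     # speed of light in vacuum, km/s

def light_minutes(distance_au):
    return distance_au * AU_KM / C_KM_S / 60

print(round(light_minutes(0.794), 3))  # -> 6.603
```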
As for quantum entanglement, it's not going to help.
Think of it like a pair of dice that always sum to 7 when they are independently rolled. Yes, that requires spooky action at a distance, but no, you can't transmit information faster than the speed of light using the dice to communicate.
> The person who wrote it is Stuart Cheshire, who invented Bonjour. I think he knows what he's talking about.
Appeal to authority. Having experience and being mistaken are not mutually exclusive.
> For example: say you need to send a message from Earth to Mars, currently it is 0.794 AU away. This means it takes light 6.603 minutes to travel this distance.
Now you're talking about the physical limit. That's something else. If you want to talk about physical limits, then you have the same thing for bandwidth: the Shannon limit.
But the physical limit is not what causes a 56k modem to have a 100ms latency. And just as you can run more cables to mitigate the Shannon limit, you can move the endpoints closer together to mitigate the latency of the speed of light. Obviously moving planets closer together to reduce latency is not very practical, but neither is running fiber optic cables to Mars to allow more bandwidth than you can get out of radio.
Not at all. In the context of the date of the article he addresses the physical limitations in place at the time, implying devices that don't come close to the physical limitation can see improvement.
>No matter how small the amount of data, for any particular network device there's always a minimum time that you can never beat. That's called the latency of the device. For a typical Ethernet connection the latency is usually about 0.3ms (milliseconds -- thousandths of a second). For a typical modem link the latency is usually about 100ms, about 300 times worse than Ethernet.
Furthermore, he directly addresses the possibilities of improving things in software, including buffering. Maybe not the fantasy algorithmic improvements you're speculating about but concrete design improvements in the modem stack available at the time.
>Because when you use the Geoport adapter the modem software is running on the same CPU as your TCP/IP software and your Web browser, it could know exactly what you are doing. When your Web browser sends a TCP packet, there's no need for the Geoport modem software to mimic the behaviour of current modems. It could take that packet, encode it, and start sending it over the telephone line immediately, with almost zero latency.
You're correct that many of our current latency issues are giant buffers developed to chase a peak-throughput marketing number instead of the low latency numbers that directly affect the (IMO) most interesting uses of the network: realtime interactive applications like communication and gaming.
> I thought that the speed of electricity, actually electrons in a cable for a consumer use was around 1cm/s.
Folks who worry about the velocity of electrons when discussing the velocity of signal propagation in a coax cable have not fully understood Maxwell's Equations.
The fields are responsible for the propagation, and the geometry + dielectric determine the speed of the fields in the cable to first order. If those fields were in the visible, as they are for some fiber, then folks would talk about the speed of light in the coax. Since they are not in the visible, we talk about the speed of the TEM mode instead.
EDIT: Some other folks are pointing out that the speed of propagation in coax is about 2/3 the speed of light. This has nothing to do with the velocity of electrons in the metal.
Most coax has a central conductor, an insulator (made of PTFE, i.e. Teflon), and an outer conductor (the shield). The speed is almost entirely determined by the index of refraction of the Teflon, which is approximately 1.5. The propagation speed is 1/index (approximately), so the reciprocal of 3/2 gets you to 2/3.
Broadly speaking, as a signal goes down a conductor it has to build a magnetic field, and as it does so the increasing magnetic field creates an electric field that opposes the motion of the signal.
One more way to think about it is the propagation delay of a transmission line, or actually its opposite, the velocity factor, which is proportional to 1/sqrt(LC).
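Plugging typical coax values into 1/sqrt(LC) recovers the ~2/3 c figure mentioned above. The per-metre L and C below are rough RG-58-style numbers assumed here for illustration, not taken from the thread:

```python
# Velocity of a transmission line from its per-unit-length inductance L
# and capacitance C: v = 1 / sqrt(L * C).
import math

C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s

L_per_m = 250e-9           # henries per metre (typical RG-58, assumed)
C_per_m = 100e-12          # farads per metre (typical RG-58, assumed)

v = 1 / math.sqrt(L_per_m * C_per_m)   # ~2e8 m/s
print(round(v / C_VACUUM, 2))          # -> 0.67 (velocity factor)
```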
Coaxial cables are great for blocking interference because one conductor is inside the other, so external electric fields can't penetrate. But they have a high parasitic capacitance because the outside conductor has a large surface area.
So if capacitance is causing too much propagation delay, you can use twisted pair. It also avoids interference, but by a different mechanism: the wires constantly change position in relation to external electric fields.
The problem with coax for cable TV though is that for high frequencies, losses can be high because the capacitor begins to approximate a conductor (and TV uses really high frequencies!). So cable length becomes a limitation. Then you use fiber optic cables (I find total internal reflection somewhat miraculous, like an optical analog to superconductors).
Also, in computer chips, parasitic capacitance is a problem because capacitance grows with area and shrinks with distance. But the conductors are so close to one another that even a small area results in a large capacitance. That's why the latency to memory can be orders of magnitude higher than, say, to cache. Optical interconnects would help here as well.
It's been 15 years since I studied this stuff though, so if I misremembered, somebody please correct me!
> But they have a high parasitic capacitance because the outside conductor has a large surface area.
> So if capacitance is causing too much propagation delay, you can use twisted pair.
The capacitance is not really "parasitic", it is the circuit element representation of the electric field which is propagating. The dynamic electric field gives rise to a magnetic field, which is the inductive element in the expression 1/sqrt(LC) that you listed.
Twisted pair is no different: There is a capacitive element and inductive element per unit length. As it turns out, the capacitance varies as the log of the distance between the wires, so you can twist the wires and not change the capacitance very much. There used to be lots of "twinax", which kept the two wires at a fixed distance, but the separation plastic was mostly a PITA. (The pics I can find on google don't show the old flat plastic twinax that folks used to use. You can still find that stuff hanging off old television antennas.)
Actually much, much less than that. http://en.wikipedia.org/wiki/Drift_velocity#Numerical_exampl... Even for a large current in a small wire, you only get average speeds of about 1cm every 40 seconds. Luckily, that isn't the speed at which signals move.
Imagine that each of us is holding one end of a 10 foot metal pole. I transfer a message to you by pushing on the bar. You don't need to wait for my end to reach you before you can tell that it has moved, because you can feel the entire rod moving.
In the same way, the receiver of a signal doesn't need to wait for the electrons to move all the way, because pushing on some of the electrons makes the entire mass of electrons move.
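For scale, the standard drift-velocity formula v = I/(nqA) gives numbers in line with the figure quoted above. The wire diameter and current here are my own assumed values for illustration:

```python
# Electron drift velocity v = I / (n * q * A).
# Assumed values: copper's free-electron density, a 1 mm diameter wire,
# and a 3 A current (a lot for such a thin wire).
import math

n = 8.5e28        # free electrons per cubic metre in copper
q = 1.602e-19     # electron charge, coulombs
d = 1e-3          # wire diameter, metres
A = math.pi * (d / 2) ** 2   # cross-sectional area, square metres
I = 3.0           # current, amperes

v = I / (n * q * A)   # ~2.8e-4 m/s, i.e. roughly 1 cm every ~36 seconds
print(v)
```

The signal, by contrast, propagates at a large fraction of c, because it is carried by the fields, not by the electrons themselves.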
I've always felt that what we used to call the Shannon Limit (more accurately, the Shannon-Hartley Theorem: https://en.wikipedia.org/wiki/Shannon%E2%80%93Hartley_theore...) was more useful in conversations like this than any reference to the speed-of-light or motion of particles in a medium.
Note the difference between electricity and a signal: a signal is information; electrons are just particles that flow through a wire.
If you slowly push a train, the other end starts moving at the exact same speed at (almost) the exact same time -- well, in theory; in practice it still depends on a certain fraction of the speed of light.