Minor note---you can get much tighter theoretical minimum latencies than the ones listed in your table.
1. The table's mins divide straight line distance by the speed of light. This gives you the time it takes for light to travel in a straight line from, say, London to New York. However, your "real latencies" are roundtrip ("ping") latencies. Thus, you need to multiply all the theoretical latencies by two.
2. Data does not travel through a fiber optic cable at the speed of light. This is because light actually bends around the cable when transmitted. These are called cosine losses, and mean the light travels roughly 5/3 the actual cable distance. So, multiply again by 5/3. (This is why HFT firms use microwave links for long distances.)
If you multiply the theoretical minimums by 3.33, you'll see that they're very close to the actual latencies you're observing. New York -> London becomes 62.7 ms optimal, so you're only 13% slower than the theoretical minimum.
Here on the west coast, I typically see within 10% of the theoretical min for data going over the Seattle -> Japan submarine cables.
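For anyone who wants to reproduce the arithmetic, here's a quick Python sketch. The city coordinates are approximate and I'm using the 5/3 fibre factor claimed above (the usual figure quoted for silica is ~1.5), so treat the output as ballpark:

```python
import math

C = 299_792_458  # speed of light in vacuum, m/s

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points on Earth, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

dist_km = great_circle_km(40.71, -74.01, 51.51, -0.13)  # New York -> London
one_way_vacuum_ms = dist_km * 1000 / C * 1000           # straight line, at c
rtt_fibre_ms = one_way_vacuum_ms * 2 * (5 / 3)          # roundtrip, fibre factor

print(f"{dist_km:.0f} km, {rtt_fibre_ms:.1f} ms theoretical RTT")
```

That lands right around the 62.7 ms figure for New York -> London.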
What do you mean by cosine losses? I have done research in fibre optics for more than a decade and have never heard that term. I also have no idea what you mean by light bending around the fibre. A simple explanation of light propagation in multi-mode fibres can use geometric optics to explain how light propagates along the fibre by reflecting off the interface between core and cladding. However, this simple picture does not apply to single-mode fibre (which all long-distance connections are) and also does not easily explain the group velocities in fibre.
The reason that light travels slower in fibre is that the refractive index of silica is about 1.5, while it is 1 in vacuum (in reality it's a bit more complicated: it's the group index that counts, which is also approx. 1.5, however).
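As a quick sanity check, assuming a group index of ~1.5 you get a handy rule of thumb of roughly 10 ms of roundtrip time per 1000 km of fibre:

```python
C = 299_792_458          # speed of light in vacuum, m/s
n_group = 1.5            # approximate group index of silica fibre

v = C / n_group          # propagation speed in fibre, ~200,000 km/s
ms_per_1000km_rtt = 2 * 1_000_000 / v * 1000  # roundtrip over 1000 km, in ms

print(f"{v / 1000:.0f} km/s, {ms_per_1000km_rtt:.1f} ms RTT per 1000 km")
```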
1. Submarine cables can't go in a straight line, since they've got to, you know, go down to the bottom of the ocean. (Which, you may have heard, is quite deep.) Also, a cable with a length of several thousand miles tends to have some slack.
2. Your packets may take a very curvy route from one city to another, even when they're not geographically that distant. This may be because your ISP is bad (and has poor/limited routes), geographic or geopolitical concerns, or just because of the way the Internet infrastructure is built. On the US's west coast, I often experience latencies 60%+ slower than the theoretical minimum when accessing servers in the central or eastern US. (e.g. SF -> Des Moines, IA at 70ms).
For point 1, is the depth of the ocean significant compared to the distances traversed? My back-of-the-envelope math suggests it’s less than half a percent for a cable from New York to England. (6 miles down + 6 miles up, divided by rough great circle distance of 2600 nautical miles.)
I would think a bigger factor would be that the cables (IIRC) don't go in straight lines (which you did allude to).
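Checking that envelope in Python (the 6-mile depth and 2600 nautical mile figures are the ones from the comment above):

```python
NM_TO_MILES = 1.15078                 # statute miles per nautical mile

surface = 2600 * NM_TO_MILES          # rough great-circle distance, NY -> England
extra = 6 + 6                         # miles down plus miles back up (worst case)
ratio = extra / surface

print(f"{ratio:.2%}")                 # well under half a percent
```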
Correct. The ocean bottom is relatively flat once away from the continents. Also, a submarine cable actually has a very slight advantage over a surface cable, as at several thousand feet down it is following a slightly smaller-radius curve. I'm waiting for an HFT firm to bore a literally straight hole between London and New York. Then we'll know that HFT has gone too far.
It's important to remember your packets are in the network layer (probably, when you send them, even the transport or application layer? I'm not so familiar with the higher layers of the OSI stack).
So you are still quite a bit removed from the physical layer. Your packet will likely go through several electrical-to-optical and optical-to-electrical conversions; there will probably be some electrical switches, plus multiplexers, all of which contain buffers. Then there is forward error correction in the physical layer, which also requires buffers, etc.
And you're obviously right that for many reasons the "straight path" might not be the path that is being taken, or even the fastest one.
Bottom line: estimating ping time from geographic distance is very rough. However, the longer the distance through an uninterrupted link (i.e. a submarine cable), the better your estimate. If you sit in a Google datacentre which is directly connected to their fibre backbone and ping a machine in a similar datacentre in a different country, you will get quite close numbers, I imagine (I don't work for Google). On the other hand, if you sit somewhere in the mid-west at home and ping a server in e.g. NY or LA, not so much.
Debugging high ping at home once, mtr (or something similar; I think it was a graphical tool) showed the first 5 or 6 hops all in the ISP's network and taking about ⅔ of the time: going from the UK to the USA and then leaving their network in Amsterdam, IIRC, only to terminate at another data centre in the UK. Pretty crazy.
Ha, it only just struck me that could have been an NSA-type routing issue!?!
That reminds me of the bug in Google Maps where a route from southern Sweden to Norway suggested driving through the Chunnel and taking a car ferry across from Scotland.
>so you're only 13% slower than the theoretical minimum.
Yes. Not to mention those fibres aren't exactly a straight line. There is extra distance from laying the fibre route. Being only 13% over is very close to the practical optimum.
That is why I asked [1] whether we will have Hollow Core Cable [2] soon, where we get close to the real speed of light.
At the moment they are nowhere close to being ready for wide deployment or large-scale commercial drawing (try to find some videos of modern fibre drawing; the speed is absolutely insane).
Obviously the HFT crowd are very interested in these, but they are willing to pay the premiums. The next area is probably datacentres, where latency is very important as well, and these fibres already provide losses similar to multi-mode fibres at ~900 nm wavelengths.
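Back-of-the-envelope on the potential gain, assuming the group index drops from ~1.5 (solid-core silica) to ~1.0 (hollow core; real hollow-core fibres sit slightly above 1):

```python
n_silica_group = 1.5   # approximate group index of solid-core silica fibre
n_hollow = 1.0         # idealised hollow core, light travelling at ~c in air

reduction = 1 - n_hollow / n_silica_group  # fractional latency reduction

print(f"about {reduction:.0%} lower latency over the same route")
```

Roughly a third off the propagation delay on the same route, which is why HFT is willing to pay for it.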
Assuming these cables cost more, or require equipment that costs more, I wouldn't expect this on last mile connections. ISPs simply don't care that much. DSL providers typically run connections with settings that add 15+ms of round trip. My DSL provider runs PPPoE for fiber connections where fiber is available, etc. When I was on AT&T fiber, it was still about 3-5 ms round trip to the first hop. It's been a while since I've experienced cable to know how they mess it up.
If there's significant deployment of the cable in long distance networks, eventually that should trickle down to users. It would probably happen faster if there were competitive local networks, but regardless, a significant drop in latency across a country or ocean can be big enough to justify some expense.
Well spotted! I have corrected issue #1 you noticed, a very silly mistake, thank you!
#2 is great background! But these cosine losses are, I suppose, not a theoretical limit but a limitation of fibre optics, so I won't include that (but I will link to your comment!).