
The death of transit? - colinscape
https://blog.apnic.net/2016/10/28/the-death-of-transit/
======
dsr_
Well, no, transit isn't dead. But when your traffic volume rises to one of the
top N -- let's say, N approximates 10 -- sources/destinations of the entire
Internet, you discover it's cheaper to run your own global networks.

And that's what Google, Facebook, and Amazon, at the very least, have done:
bought fiber, hired network engineers, and designed things that work
efficiently for them. If YouTube is 90% of Google's traffic, it's not
surprising that Google's network looks like a CDN. Amazon wants to
interconnect their AWS datacenters to lower their internal traffic costs.
Facebook wrote a new routing protocol (Open/R).

~~~
nowprovision
Not sure this is the case: from Amazon data centres to other Amazon data
centres, in most cases you go across NTT, Tata, etc. Google, on the other
hand, is different -- e.g., Taiwan to Ireland is all Google network. Spin up
VMs and traceroute.
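To make that concrete, one rough way to read the result: collect the hop
hostnames from the traceroute output and check whether the middle hops stay in
one provider's reverse-DNS domain. The sample output and hostnames below are
made up for illustration, not real traceroute data:

```python
# Hedged sketch: parse traceroute-style output and check whether the
# intermediate hops sit inside a single provider's network, judging
# only by reverse-DNS names. Hostnames here are illustrative.
sample = """\
 1  gw.example.net (10.0.0.1)  0.4 ms
 2  core1.sin.net.google.com (108.170.240.1)  1.2 ms
 3  core2.dub.net.google.com (108.170.241.1)  160.3 ms
 4  instance.europe-west1.c.example.internal (10.132.0.2)  161.0 ms
"""

def hop_hosts(traceroute_text):
    """Extract the hostname column from each traceroute line."""
    hosts = []
    for line in traceroute_text.splitlines():
        parts = line.split()
        if len(parts) >= 2:
            hosts.append(parts[1])
    return hosts

def all_single_network(hosts, suffix):
    """True if every transit hop's reverse DNS ends in `suffix`."""
    middle = hosts[1:-1]  # ignore the first (local) and last (target) hop
    return all(h.endswith(suffix) for h in middle)

print(all_single_network(hop_hosts(sample), "google.com"))  # prints True
```

As the reply below notes, this heuristic says nothing about overlay transport
that never shows up as a hop.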

~~~
mino
Careful: the lack of middle hops in a traceroute isn't necessarily a sign that
there are no transit providers. A carrier or enterprise will often rent
overlay (e.g., MPLS) transport capacity, which won't appear in your
traceroute.

Although, in Google's specific case, you are probably right and this does not
apply.

------
convolvatron
Scattering caches or CDN nodes around the internet has obvious value.

It's a pretty big jump, though, from caches to a world where the internet is
structured like a cable TV service with an architecturally designated 'head
end'.

Many internet architects seem to get very excited about losing end-to-end
connectivity, and I can never figure out why. I guess it allows one to raise
larger barriers to entry(?).

~~~
jlgaddis
I'm one of those who values end-to-end connectivity, and I can remember when
it was the rule instead of the exception.

There was a time -- before the rise of NAT -- when one could directly
establish connections to others across the Internet without having to jump
through hoops or rely on workarounds (port forwarding, UPnP, a third
party(!), etc.) to do it.

In general, as a network engineer, I dislike anything (especially NAT) that
breaks end-to-end connectivity, simply because of the inherent problems that
arise as a result.

In addition, some of the DDoS attacks we've seen recently would be a lot
easier to mitigate if NAT weren't a thing (e.g., as an ISP, I could easily
shut off a specific misbehaving device instead of having to cut a customer's
entire access).

~~~
z3t4
I hope that NAT disappears with IPv6. But almost all devices come with only
one network interface, assume they're on a LAN, and still need to access the
Internet ... How can that be done without NAT?
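Roughly, IPv6 answers this by giving a single interface several addresses at
once -- a link-local one for the LAN and one or more global ones for the
Internet -- with no translation in between. A quick sketch using Python's
ipaddress module; the addresses are from the link-local and documentation
ranges, purely illustrative:

```python
# Sketch: one IPv6 interface carries multiple addresses with
# different scopes simultaneously, so no NAT is needed to reach
# both the LAN and the Internet. Addresses are illustrative only.
import ipaddress

addrs = [
    ipaddress.IPv6Address("fe80::1"),       # link-local: LAN-only scope
    ipaddress.IPv6Address("2001:db8::42"),  # globally routable scope
]

for a in addrs:
    scope = "link-local" if a.is_link_local else "global"
    print(a, scope)
```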

~~~
dx034
I'm not sure if I'd want that, for reasons of privacy and security. At the
moment, your device's IP (phone, computer, laptop) is usually shared with
other devices, due to the scarcity of IPv4. If this gets dropped, couldn't
some providers get the idea to statically assign IP addresses to each device?

Most people wouldn't know how to rotate IP addresses of their devices even if
it was possible. Having one static address (or even a subnet) for each device
seems like the worst thing that could happen to privacy.

I could also imagine that having all phones directly exposed would make
vulnerabilities much worse. It's bad enough that the recent attacks were
possible because cameras were exposed via UPnP; as far as I know, it isn't
currently easy to build a large botnet of smartphones just because you know of
a vulnerability in their network communication.
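Worth noting: IPv6 privacy extensions (RFC 4941) address part of the
static-address concern by having hosts periodically regenerate a random
interface identifier, so the full address rotates even though the /64 prefix
stays fixed. A rough sketch of the idea; the prefix below is from the
documentation range, not a real allocation:

```python
# Sketch of the idea behind IPv6 privacy extensions (RFC 4941):
# instead of a stable, hardware-derived interface identifier, the
# host periodically generates a random 64-bit one, so the full
# address changes over time while the /64 prefix stays the same.
import ipaddress
import secrets

def temporary_address(prefix: str) -> ipaddress.IPv6Address:
    """Combine a /64 prefix with a freshly random interface ID."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64, "expects a /64 prefix"
    iid = secrets.randbits(64)  # random low 64 bits
    return ipaddress.IPv6Address(int(net.network_address) | iid)

# Two consecutive temporary addresses share the prefix but (almost
# surely) differ in the low 64 bits.
a = temporary_address("2001:db8:1234:5678::/64")
b = temporary_address("2001:db8:1234:5678::/64")
print(a, b)
```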

~~~
nine_k
AFAICT smartphones are mostly exposed already: they usually get an IPv6
address, along with some IPv4 connectivity behind the phone company's NAT.

OTOH I just tried to ping6 my phone, and got a 'no route to host'. I wonder if
it's my wired connection's problem, or a security measure from the phone
company's side.

------
omegaworks
Fascinating. The growth of containerized architectures will facilitate this
transition away from high-latency, centrally-located services. At a high
level, this means users on a particular continent will see their continent's
'shard' of data much more quickly than off-continent shards. I wonder if we'll
start to see prioritization of locally-available data in algorithmic content
aggregators (Facebook, Reddit, G+) because of this.

I also wonder if different regions will impose restrictions on building these
duplication services in hopes of promoting the growth of their own content-
producing industries. I mean, China is already doing this with their firewall.
Maybe we'll see the WTO grow to prevent this kind of manipulation.

With the proliferation of PaaS providers (Firebase, for example), I wonder if
there will be transparent ways to proactively structure data access so as to
minimize latency.

