
The Internet's on Shaky Ground - raju
http://www.internetevolution.com/author.asp?section_id=708&doc_id=166793&
======
lsc
But that's the thing: worse _is_ better. The Internet doesn't need to be re-
designed for every new type of technology that comes around. Sure, you can do
a 'content-centric' setup on top of IP; at the most hackish level, just set up
a squid cache with big disks at the head of your network. (At a more expensive
level, you can put your content on Akamai or the like.) But you don't have to
be content-centric, and this is good. How would you do VoIP over a content-
centric network?
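
The squid-cache hack above amounts to a transparent caching proxy at the
network edge. A minimal sketch of what that `squid.conf` might look like
(the directives are standard Squid; the paths, sizes, and address range are
illustrative, not a tuned production config):

```
http_port 3128 intercept                       # transparently intercept outbound HTTP
cache_dir aufs /var/spool/squid 512000 16 256  # ~500 GB of on-disk cache ("big disks")
maximum_object_size 4 GB                       # let large media objects be cached
cache_mem 2048 MB                              # in-memory cache for hot objects
acl localnet src 10.0.0.0/8                    # clients on the internal network
http_access allow localnet
http_access deny all
```

Popular content then gets served from the local disks instead of re-fetched
over the upstream link, which is the "content-centric on top of IP" effect
with zero changes to the network architecture.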

The decentralized and hackish nature of the Internet is precisely why it has
become so successful. Personally, I would argue that 'worse is better' is the
only way to go when you need the cooperation of many people, as it requires the
least agreement. More complex, more "perfect" systems can sometimes be better,
but they usually need to be created and operated by a single entity. That
would be completely incompatible with what makes the Internet great.

------
iigs
_You needn’t do more than attempt to watch a streaming video on a busy office
LAN or oversubscribed DSL circuit to understand that even the best-served
markets for Internet connectivity are struggling to keep up with demand for
networked content._

I don't know; I think the worst aspect of my video browsing experience at home
is my ISP's second-guessing of YouTube's bandwidth and streaming parameters,
which ends up in their DPI shapers and causes YouTube to underrun.

If the point is "see, if the Internet were awesome you wouldn't need those
shapers," I guess I can't disagree, but experience tells me to trust the one
smart guy at the far end of the network to figure it out rather than rely on
the people in between, who generally seem to mess it up.

~~~
tptacek
That's called "End-to-End Arguments in System Design," and if you Google
that you'll get one of the most important CS papers ever written. I agree with
you, of course.

~~~
MaysonL
And when I did, guess what came up as number 2 hit? Your comment!

~~~
tptacek
That's fucked up right there. They ought to fix whatever's making that happen.

------
tptacek
Immediate caveats: Alex is a smart guy, and Van Jacobson (Peace be upon him)
is the Interpope. I can only have an imperfect understanding of what VJ is
saying with Content-Centric Networking, and I am extrapolating some of what I
knew him to have been saying in the late '90s.

From the promotional material, Content-Centric Networking appears to be the
combination of three major ideas in networking:

* _Multicast_ , a failed experiment from the network layer but still a possibility outside it: simply the idea that each network hop can be 1-N instead of 1-1.

* _Overlay_ , an idea that started with IRC and TIBCO (which I think invented content-addressed networking): the idea that we can build networks with ambitious service models on networks with simple ones.

* _Peer to Peer_ , the idea that if we write software that's smart enough, we can repurpose all the existing services to build networks instead of deploying new ones.

On their own, these are all good ideas. Clearly, some combinations of them are
effective too. But I don't think there's a new Internet you can synthesize out
of them. There are some really hard problems in here that I haven't seen
compelling solutions to:

* Overlayed or not, multicast gets unwieldy as more addressable content is added. The web scales because Pitchfork Media doesn't need to know anything about Ars Technica. The services that need to know everything are all contemplating building floating offshore data centers to hold it. This is the problem that will keep IP multicast from ever happening: NSPs filter prefixes below (what is it now?) /19, and multicast wants a globally routed address for _every web page_.

* The only proven group reliability strategies are lossiness and forward error correction (a la BitTorrent), but FEC is drastically less effective for short, near-transactional content than it is for large streams of data, and lossiness doesn't work for CNN.

* The failure modes of Internet-scale, telco-reliable P2P (err, "self-organizing networks") are totally unknown.

* Group security is harder than unicast security; for example, the group equivalent to SSL/TLS involves protocols and algorithms like key sharing that are currently exotica in industry.

* Unless all you're trying to do is stream video --- in which case, why not just invest in Akamai? --- there are too many application service models to design a single network architecture to. Who knew 140-character limits were going to be a feature in 2008? Who knew AOL IM was going to crush IRC?
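
The FEC point above can be made concrete with the simplest possible scheme:
one XOR parity packet per group lets a receiver repair any single loss. (A toy
sketch, not anything VJ proposed; real codes like Reed-Solomon tolerate
multiple losses. Note the fixed overhead per group --- for short,
transactional content the extra packet and the wait-for-the-group delay
dominate, which is the objection in the bullet.)

```python
# Toy single-loss FEC: XOR all data packets into one parity packet.
# Illustrative only; assumes equal-length packets and at most one loss.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    """Append one parity packet (XOR of all data packets)."""
    return packets + [reduce(xor_bytes, packets)]

def recover(received):
    """received: encoded group with exactly one entry replaced by None."""
    lost = received.index(None)
    # XOR of everything that survived reconstructs the missing packet.
    repaired = reduce(xor_bytes, (p for p in received if p is not None))
    out = received[:]
    out[lost] = repaired
    return out[:-1]  # drop the parity packet, return the data packets

data = [b"aaaa", b"bbbb", b"cccc"]
sent = encode(data)       # 3 data packets + 1 parity packet
sent[1] = None            # simulate losing one packet in transit
assert recover(sent) == data
```

Losing the parity packet itself is also fine: the surviving data packets XOR
back to the parity value, and `recover` still returns the original data.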

VJ has been talking about these ideas for over a decade, and other absurdly
smart people have been working there too --- Paul Francis and Frans Kaashoek
come to mind --- and I'm sure we're going to get lots of cool stuff out of
this line of thought. I doubt we'll get a new Internet.

And if we do, it'll be running directly on top of the old Internet, so we
better hope it's not _too_ shaky, Alex.

~~~
seiji
"I can only have an imperfect understanding of what VJ is saying with Content-
Centric Networking, and I am extrapolating some of what I knew him to have
been saying in the late '90s."

So you just made all of that up?

If you'd like to see the actual points (which say nothing of multicast and
don't proclaim "group security" to be scary and magical), check out
<http://video.google.com/videoplay?docid=-6972678839686672840>

It's about providing network access to everyone from mud huts in villages to
gigabit connected apartments in Tokyo. It's about making networking with
content retrieval work over any medium, anyplace, anytime.

It's about being able to drop "a box of Internet" someplace and having
reasonable access to content.

We could have a workable solution to what Van Jacobson describes now, but
mobile devices and most computers are still too caught up in privacy and
network lockdown. Being "connected" doesn't mean just having Internet access.
It means being able to connect to everything around you and interact
meaningfully without having to go through service providers and gatekeepers.

~~~
tptacek
Of course I didn't. I read the promotional material at PARC, and I spent years
obsessing about every paper Jacobson and his team posted at ftp.ee.lbl.gov, so
I'm familiar with the general subject area. I also helped crater a VC-funded
startup in this area. What I did not do was sit through a Google video of VJ
explaining all the implications of this Great New Idea, which is why I started
my comment with a caveat.

Thanks for telling me what it's about, though. I'm glad the mud huts in
villages will have working content retrieval anyplace, anytime. Maybe we can
get them clean running water soon afterwards.

