

The Fragile Internet - VladVuki
http://vukicevic.blogspot.com/2009/08/fragile-internet.html
The new Internet is based on a set of principles that depend on susceptible bottlenecks such as Twitter and Facebook. Disruptions to these bottlenecks would have global ramifications.
======
gdp
How is this particularly different to any other time in the history of the
commercial internet?

The scale of attack required to completely bring down any of the large
services we use on a daily basis is so large that it could otherwise be
focused on the DNS infrastructure, or the internet infrastructure of a
decent-sized country. Much smaller attacks have been used in the past to take
out large IRC networks (hence disrupting communications for hundreds of
thousands of users). Mail bombs, spam attacks on newsgroups, internet worms,
etc. This has all been going on for years. If anything, companies like
Facebook and Google are better equipped to deal with the threat than most,
because they have the resources to do so as high-profile targets (note that I
don't include Twitter in this category, because they have had trouble staying
up under normal loads).

So I don't think this highlights the fragility of "the new internet", it's
just the fragility of the internet in general. An attack on the scale required
to take down Facebook would be able to cause similar amounts of disruption
almost anywhere it was targeted.

If anything, I think the rise of rapidly provisioned cloud computing
resources is probably making the internet _less_ fragile, in that a
well-designed application can scale proportionally to its (legitimate) user
base within hours or days, not weeks or months as it used to.

~~~
VladVuki
The point is that a full-scale disruption isn't necessary for an impact to be
felt around the globe - tr.im is the prime example of that. It doesn't even
have to be a malicious change.

It's more about the structure that has evolved - an inverse pyramid of
communication vs. the traditional web. At the bottom we have services like
Facebook and Twitter with a plethora of applications on top. This puts greater
pressure on those foundational services to protect themselves, and some have
done a better job than others (e.g. Facebook vs. Twitter).

~~~
gdp
But URL shorteners aren't really that much more fragile than non-shortened
URLs. Sure, the scale of the problem could be larger, but URLs go away all the
time. I doubt more than a small fraction of the shortened URLs that would
break if the shortening service went offline would have outlived the service
anyway.

And how do additional layers create more fragility, when the most fundamental
layers (e.g. ISP infrastructure, DNS) are just as vulnerable to attack as the
application frameworks implemented on top of them?

~~~
VladVuki
If a URL shortener goes down, it's likely that all of the URLs it serves go
down with it. I don't think we've had a system like that before. If bit.ly
went down, half of Twitter would become meaningless.

A DNS or ISP disruption can usually be more localized. I would argue that
it's much easier to disrupt a Twitter or a Facebook than it would be to take
down multiple DNS servers or multiple ISPs. It would take fewer resources and
less time to create a global disruption by taking down Twitter than it would
to take down an ISP.

~~~
gdp
What about the days when large-scale free hosting providers ruled the
internet? Geocities?

An infrastructure disruption at a large colocation provider would probably
cause just as much inconvenience to users. Attacks against root name servers
can cause (and have caused) huge amounts of disruption.

I understand your point is about people building upon single points of
failure, but my point is that there have always been single points of failure
within the internet. I don't think conceptualising Facebook as a single point
of failure is particularly accurate either, given the distributed nature of
its implementation. You're more likely to get localised failures of Facebook
than a complete outage, which carries roughly the same amount of risk as any
other situation I've described.

As for URL shorteners, it seems apparent that there might be a business
opportunity in client-side hashing with a server-side implementation for those
without client support. That way you remove the point of failure, which I
agree is a good thing, even if we disagree about how important that point of
failure is.
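
A minimal sketch of what the client-side half could look like, assuming the
short code is simply a deterministic base62-encoded hash of the target URL
(hypothetical scheme, not any existing shortener's API):

    import hashlib
    import string

    ALPHABET = string.digits + string.ascii_letters  # 62 characters

    def short_code(url, length=7):
        # Derive the code from the URL itself, so any client (or a fallback
        # server) regenerates the same code without asking a central service.
        digest = hashlib.sha256(url.encode("utf-8")).digest()
        n = int.from_bytes(digest[:8], "big")
        code = []
        while n and len(code) < length:
            n, rem = divmod(n, 62)
            code.append(ALPHABET[rem])
        return "".join(reversed(code)) or ALPHABET[0]

    print(short_code("http://vukicevic.blogspot.com/2009/08/fragile-internet.html"))

Resolution would still need some party that has seen the full URL before, but
the code-to-URL mapping would no longer be held by a single service.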

~~~
VladVuki
Geocities is a good example of a past bottleneck - we agree there.

Facebook might not be a perfect example - I think Twitter is probably better
since it's more of a platform.

