
Idea: a legal botnet CDN - marcamillion
http://marcgayle.com/2010/01/08/a-legal-botnet-billion-dollar-startup-idea/
======
gojomo
There were a gaggle of companies trying similar things with bandwidth and/or
computer cycles in the 1999-2001 era.

RedSwoosh was eventually acquired by Akamai. BitTorrent still plugs its
'BitTorrent DNA' offering as a P2P CDN alternative. Kontiki was bought by
VeriSign... but later spun back out.

There was some speculation Adobe was adding P2P CDN capabilities to Flash
Player in 2007, based on some combination of Kontiki technology and that of
another P2P VOIP company they'd acquired, Amicima. (See
<http://gigaom.com/2007/01/24/adobe-and-its-p2p-ambitions/>.)

But none of these have taken off in a giant way. There may yet be a magic
formula, but many variants have already been tried.

~~~
kierank
Mainly because all the software developed for P2P CDNs so far has been
cumbersome, unlike its file-sharing counterparts. RedSwoosh, in my opinion,
was the least cumbersome, but having used it I never actually saw it use
P2P; it relied on its main server instead. Kontiki (used by the BBC, Sky
and Channel 4) was a joke too: it was difficult to remove, users couldn't
easily control its bandwidth, and it forcibly started on boot. When you
removed it, it put itself back in the startup list again.

------
brazzy
ISPs could not stay profitable if end users started reselling bandwidth
while paying flat-rate fees. They'd be forced to enforce volume caps.

~~~
jerf
As you say: second-order effects. If user home broadband connections are being
sold cheaply enough to be able to be resold for a profit, the ISPs will either
raise the price to recoup this (the value of the connection has gone up to you
and therefore you will be willing to pay more), or cut out the middleman and
directly provide these services with their bandwidth (forbidding end users
from participating), pocketing the money.

This entire plan hinges on the ISPs doing nothing in response. And the
responses the ISPs would make aren't even "evil" or anything, it would just be
rational market behavior on their part.

Furthermore, there would be major quality issues. Clever programming and
design could make this work with enough redundancy in the system that the
content wouldn't disappear even as people log in and out, _but this does
not come for free_. You _must_ pay a price somewhere else: increased
latency (one of the most likely outcomes; many P2P systems make this
tradeoff), decreased bandwidth, or grotesque content-replication
requirements that go a long way toward eating up the capacity of the
system. You can often trade one kind of quality for another, but you can't
increase the total quality available, and a home consumer broadband
connection is a low-quality connection for this purpose. And the quality of
home connections aggregates distinctly sublinearly, especially since so
many of them are badly asymmetric.
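
To make the tradeoff concrete, here's a back-of-envelope sketch in Python.
Every number in it is an assumed, illustrative figure, not a measurement:

    # Rough estimate of usable capacity in a P2P CDN built from home
    # connections. All numbers are illustrative assumptions.
    nodes = 100_000          # participating home connections (assumed)
    upload_mbps = 1.0        # asymmetric uplink per node (assumed)
    availability = 0.3       # fraction of nodes online at any moment (assumed)
    replication_share = 0.5  # uplink spent re-copying content as nodes churn
    overhead_share = 0.2     # protocol chatter and retransmits (assumed)

    raw = nodes * upload_mbps
    usable = raw * availability * (1 - replication_share - overhead_share)
    print(f"raw aggregate uplink:    {raw:,.0f} Mbps")
    print(f"usable serving capacity: {usable:,.0f} Mbps")  # ~9,000 of 100,000

Under these assumptions the headline number shrinks by an order of
magnitude before a single byte reaches a user.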

This is a terrible idea and anybody going into it will lose everything they
invest, excepting the IMHO faint possibility that some interesting
technology could emerge that could eventually be repurposed for something
sane. (I call this a faint possibility because such technology would then
face competition from technologies purpose-built for whatever that later
domain turns out to be, and given the likely gap between the initial design
constraints and the final ones, you'd probably end up with a very inferior
solution.)

~~~
milkshakes
I still don't understand why this is a terrible idea. Is there absolutely no
way for ISPs to react to this in such a way that they save money rather than
lose it?

If decreasing traffic between providers doesn't save money, then wouldn't
there be some savings at least from decreasing traffic between subnets?

~~~
brazzy
CDNs don't scale down that well. The likelihood that there is demand for
one particular piece of data within the same subnet (and thus the value of
having it there) decreases dramatically as the subnet gets smaller.
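
Concretely, if each host independently holds a given object with
probability p, the chance that at least one other host in an n-host subnet
has it is 1 - (1-p)^(n-1), which collapses as n shrinks. A quick Python
sketch (the value of p is an arbitrary assumption):

    # Probability that at least one other host in the subnet holds a
    # given object, for an assumed per-host probability p.
    def local_hit_probability(subnet_size: int, p: float) -> float:
        return 1 - (1 - p) ** (subnet_size - 1)

    for n in (10, 100, 1000, 10000):
        print(f"{n:5d}  {local_hit_probability(n, 0.001):.4f}")
    # 10 -> 0.0090, 100 -> 0.0943, 1000 -> 0.6319, 10000 -> 1.0000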

As the article itself points out, some CDNs already do this with large ISPs,
directly. This kind of thing may increase and get more sophisticated, but it
will never include the users themselves as middle men.

~~~
marcamillion
I agree with milkshakes on this one. Brazzy...you are right. CDNs (in their
current form) don't scale well.

A properly executed P2P CDN can scale much better, IMHO.

As for not including users themselves as middle men, I must disagree. It
makes economic sense for the ISPs to want to convert bad P2P traffic into
good P2P traffic. That's their goal...all the while monetizing that good
P2P traffic.

Put it this way: the ISPs do nothing except sign a revenue-share agreement
and possibly change their TOS to allow customers to resell their bandwidth.
For every GB resold, the ISP makes money.

So in essence, they are converting bad P2P traffic into good P2P traffic
and making a killing - the economics work in their favor.
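
As a rough Python sketch of that model - every figure below is a made-up
assumption, and the sign of the margin depends entirely on them:

    # Hypothetical revenue-share arithmetic for one participating user.
    gb_resold_per_month = 50   # GB served from the user's uplink (assumed)
    cdn_price_per_gb = 0.05    # $ the CDN charges content owners (assumed)
    isp_share = 0.5            # ISP's cut of each dollar (assumed)
    isp_cost_per_gb = 0.02     # ISP's marginal cost of the traffic (assumed)

    revenue = gb_resold_per_month * cdn_price_per_gb * isp_share
    cost = gb_resold_per_month * isp_cost_per_gb
    print(f"revenue ${revenue:.2f}  cost ${cost:.2f}  "
          f"margin ${revenue - cost:.2f}")
    # revenue $1.25  cost $1.00  margin $0.25 - the margin flips negative
    # if the ISP's cost per GB exceeds its cut ($0.025/GB here).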

~~~
brazzy
To the ISP there is no "bad" or "good" P2P traffic, just costs vs. income.

And why would an ISP want to share revenue on bandwidth rather than get all
the revenue themselves, since they are ultimately providing it?

------
icefox
In a similar vein to DoubleClick in the '90s vs. AdSense. Back in the day
they would count each ad install, done via an ad team, and they had several
thousand or something. Along comes Google with a little JavaScript you
insert into any page, with little to no pre-screening, and they get
millions of installs.

Rather than setting up and maintaining a box at each site, just put up a
VMware image for people to run :)

~~~
marcamillion
Exactly...so I would love to see someone do it =)

~~~
milkshakes
why don't you?

~~~
marcamillion
Very good question. The truth is that the timing wasn't right for me when I
originally thought about it, but depending on how the discussion goes here
I might reconsider.

------
Pahalial
Given the global pushback against ISPs from the content industry and
assertions they're knowingly profiting at the expense of content - including
various threats of legal or legislative action - do you really believe any of
them would let you profit at their expense? Such a system would be the very
first thing throttled into oblivion by any packet shaping.

The ISPs would naturally (and rightly) demand a profit-sharing agreement in
exchange for unblocking this app. Their share would have to cover the costs of
their bandwidth - and at that point, why not just do as every CDN worthy of
the name has already done and put servers in their network?

While this is a nice thought experiment, I frankly don't see it being any more
successful this time around than previous attempts at the same. Perhaps less
so, as network administrators everywhere now have more experience disrupting
P2P and ISPs are more vigilant.

------
wallflower
In case you missed it, HighScalability recently had a long, well-written,
thought-provoking essay on planet-scale distributed systems:

"Notice how global, how plentiful, and how fast the flashes flicker. It looks
as if the whole world is afire, every ember burning with computation. That's a
planet full of compute power. With millions of available nodes botnets wield
more collective processing power than the world's top 10 supercomputers.

Now imagine if all this power was available to your application. Take a
moment...welcome back."

http://highscalability.com/blog/2009/12/16/building-super-scalable-systems-blade-runner-meets-autonomic.html

------
m_eiman
The article is a bit hazy on some rather important details, such as "are you
going to use P2P for downloads" and "how will you route traffic to the nodes".

Using P2P means custom client software, which means it can only be used for
very specific things like application updates.

I'm assuming that there will need to be some central point of contact that
knows what content is available where - how will this be distributed and
scaled?

There's also the question of how to handle nodes dropping out of the network.
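
To make those questions concrete, here's a toy Python sketch of such a
central point of contact: a tracker that maps content IDs to the nodes
claiming to hold them and forgets nodes whose heartbeats stop. All names
and timeouts are illustrative assumptions:

    import time

    HEARTBEAT_TIMEOUT = 30.0  # seconds of silence before a node is dropped

    class Tracker:
        def __init__(self):
            self.last_seen = {}  # node_id -> timestamp of last heartbeat
            self.holders = {}    # content_id -> set of node_ids

        def heartbeat(self, node_id, content_ids):
            self.last_seen[node_id] = time.time()
            for cid in content_ids:
                self.holders.setdefault(cid, set()).add(node_id)

        def lookup(self, content_id):
            """Return nodes believed to hold content_id, dropping stale ones."""
            now = time.time()
            live = {n for n in self.holders.get(content_id, set())
                    if now - self.last_seen.get(n, 0) < HEARTBEAT_TIMEOUT}
            self.holders[content_id] = live
            return sorted(live)

    tracker = Tracker()
    tracker.heartbeat("node-a", ["video-123"])
    tracker.heartbeat("node-b", ["video-123"])
    print(tracker.lookup("video-123"))  # ['node-a', 'node-b']

Of course, this tracker is itself a single point of failure, which is
exactly the distribution-and-scaling question.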

~~~
marcamillion
This article wasn't meant to spell out everything, just to prime the
discussion pump in an effort to flesh out the idea from a 50,000-foot view.

But to clear up any misconceptions, I am not proposing a new P2P
application that people use to download stuff. I am essentially proposing a
new protocol that replaces RTMP (the Flash media streaming protocol) and
other media streaming/distribution protocols.

Routing the traffic is all a software issue. Same thing with nodes dropping
out of the network. BitTorrent (the protocol) has solved many - if not all
- of those issues.

~~~
m_eiman
_I am essentially proposing a new protocol that replaces RTMP (the Flash
media streaming protocol) and other media streaming/distribution protocols._

Ah. So you just need to either convince Adobe to add that to Flash, or have
the users run an application that can act as a proxy between it and RTMP.

It's not a question of whether it's possible; the problem is that it's hard
to sell the concept to a website once you mention the "oh, and every one of
your users will need to install this piece of software to be able to access
the site" detail. It'll probably be a reasonable trade-off for some sites
with devoted users, but for sites with a lot of random users who visit once
or only rarely, it'll probably make more sense to pay the higher price of a
normal CDN.

Or maybe you intend to run the P2P/RTMP proxy on the nodes instead?

Either way, the problem with nodes dropping out is handling it gracefully
without relying on custom software on the user side. Since your nodes are
going to be pretty unreliable, your users will need to be able to switch
between nodes without interrupting a large download. For smaller files
they'll just need to be able to restart the transfer, which is a bit
easier.
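
For large files over plain HTTP, standard Range requests are enough to
resume from a different node mid-transfer. A minimal Python sketch - the
mirror list and failover policy are assumptions, and a real client would
also verify it got a 206 Partial Content response before appending:

    import urllib.request

    def download_with_failover(mirrors, dest_path, chunk_size=64 * 1024):
        """Try each mirror in turn, resuming at the current byte offset."""
        offset = 0
        with open(dest_path, "wb") as out:
            for url in mirrors:
                try:
                    req = urllib.request.Request(url)
                    if offset:
                        # Ask the next node to start where the last one died.
                        req.add_header("Range", f"bytes={offset}-")
                    with urllib.request.urlopen(req, timeout=10) as resp:
                        while True:
                            chunk = resp.read(chunk_size)
                            if not chunk:
                                return offset  # finished
                            out.write(chunk)
                            offset += len(chunk)
                except OSError:
                    continue  # node dropped; the next mirror resumes
        raise IOError(f"all mirrors failed at offset {offset}")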

I'm assuming that you want to use the CDN for serving various web content,
for which HTTP should do just fine for just about everything. You probably
don't even need RTMP; progressive downloading with a hack to allow
non-linear access works just as well for most uses.

------
marcamillion
I would love to see PG add this as an RFS. Just a thought.

------
dnsworks
One of the big problems with this is that DOCSIS is a shared-bandwidth
protocol. It's actually better for a lot of data to stream into a DOCSIS
network from the outside than for data to originate from within that
network, even if it's just going to other peers on the same network. I have
a friend at Charter who says that the biggest network engineering problem
they face is the saturation of the "uplink frequencies" by users running
BitTorrent.
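
Some rough shared-segment arithmetic in Python - channel counts, rates and
subscriber numbers are ballpark assumptions from the DOCSIS 2.0 era, not
Charter's figures:

    # Why the shared upstream saturates first on a cable segment.
    downstream_mbps = 38 * 4  # e.g. four ~38 Mbps downstream channels (assumed)
    upstream_mbps = 27 * 1    # often a single ~27 Mbps upstream channel (assumed)
    homes_on_segment = 500    # subscribers sharing the node (assumed)

    print(f"downstream per home: {downstream_mbps / homes_on_segment:.2f} Mbps")
    print(f"upstream per home:   {upstream_mbps / homes_on_segment:.2f} Mbps")
    # 0.30 Mbps down vs 0.05 Mbps up: a handful of sustained uploaders
    # can consume the whole shared upstream.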

