
BitTorrent's Bram Cohen Patents Live Streaming Protocol - ninthfrank07
http://torrentfreak.com/bittorrent-s-bram-cohen-patents-revolutionary-live-streaming-protocol-130326/
======
yk
Usually I would give someone like Bram Cohen the benefit of the doubt, but this
paragraph:

    
    
        “We want people to use and adopt BitTorrent Live. But
        we aren’t planning on encouraging alternative 
        implementation because it’s a tricky protocol to 
        implement and poorly behaved peers can impact everyone. 
        We want to ensure a quality experience for all and this 
        is the best approach for us to take,” Cohen told 
        TorrentFreak. 
    

So they are relying on the patent system to ensure that every peer is playing
nicely? In a p2p protocol? What could possibly go wrong?

~~~
eurleif
It might be that they don't need _every_ peer to play nice, but a quorum of
peers. If that were the case, a client with a buggy implementation could
quickly gain popularity and mess things up, but it would take an attacker
with a botnet to do it intentionally.

It also looks like they're making this free as in beer:

> Bram Cohen explains that the patent is in no way going to restrict users’
> access to the new protocol, quite the contrary. BitTorrent Live will be
> available to end users for free, and publishers who are using the service
> and hosting it on their own will not be charged either.

~~~
wyager
>It also looks like they're making this free as in beer:

I have no interest in free beer. I prefer free speech.

------
huhtenberg
I really like Bram, but what _is_ revolutionary here?

I was pretty heavily involved in the p2p field back in the early '00s and I've
read extensively on the subject. Even back then there was _plenty_ of research
into peer-casting, including peer clustering by proximity metrics. The idea is
bloody obvious, and it all inevitably boils down to constructing an efficient
and resilient overlay network, which is a very well researched domain. The
reason there's not much of it implemented is that there was always a simpler
(read: dumber) solution that worked just as well in practice. Think YouTube
vs. Joost.

Can anyone with a more recent exposure to p2p stuff comment on whether this is
indeed an innovation or is it just a PR spin on a patent application?

~~~
hack_edu
The patent is about breaking peers out into small sub-swarms that look after
each other. The healthier peers in each group focus on keeping their group
healthy while also keeping up with the larger swarm. The greater BitTorrent
protocol doesn't do this and, judging by the patent filing, other p2p
streaming applications don't either, which is why they fall down under high
demand.
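The grouping idea described above can be sketched roughly in a few lines (a
hypothetical illustration only, not the patented algorithm; the group size and
random assignment are assumptions):

```python
import random

def assign_subswarms(peers, group_size=8):
    """Partition a swarm into small sub-swarms of roughly group_size
    peers each. Peers exchange data primarily within their group, with
    a few cross-group links to keep up with the larger swarm."""
    peers = list(peers)
    random.shuffle(peers)  # spread fast and slow peers randomly across groups
    return [peers[i:i + group_size] for i in range(0, len(peers), group_size)]

swarm = [f"peer{i}" for i in range(20)]
groups = assign_subswarms(swarm, group_size=8)  # e.g. groups of 8, 8, and 4
```

The appeal is that a misbehaving peer can only directly disrupt its own small
group rather than the whole swarm.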

~~~
synctext
> other p2p streaming applications don't either, hence why they fall down upon
> high demand.

This patent does _not_ claim anything novel. The idea of "clubs of peers" or
groups has been around for a while.

For example, this well-known paper from Torino, Italy, written a few years
ago: "overlays are maintained by peers organized in clusters that represent
sets of collaborating peers", <http://dx.doi.org/10.1109/TMM.2010.2077623>

Disclaimer: my research team is part of the open IETF Internet Standard on P2P
streaming which is directly competing with this patented technology,
<http://datatracker.ietf.org/doc/draft-ietf-ppsp-peer-protocol/>

Does anybody else have an even older citation of prior art?

~~~
bramcohen
The traditional overlay approach involves a bunch of full trees, and uses
multiple trees as a way around dealing with leaf nodes being unutilized. My
approach uses multiple groupings, which do not overlap, with screaming within
them, and does something completely different for the last hop. They're
completely different architectures. I have trouble taking seriously any paper
which says that it makes heavy use of multiple description coding. If you have
congestion control, skips should be an extreme and bad event.

------
lifeisstillgood
1\. We learn that the intelligence will always live at the edges of the
network - in software, not in routers

2\. We learn that network providers are just utilities. And should start to
act like them. Net neutrality is merely one item in the list

3\. We learn that multi-cast is back

4\. We learn that the total volume of data sent is the same even if it is sent
efficiently from local to local - the networks will need upgrading to handle
the volume - and the utilities had better learn to accept that they are
capital businesses again and stop trying to do marketing

5\. And we learn that YouTube will rule the world

------
bbarrows
After working at BitTorrent on the Live team for almost two years (I recently
left for a job that fit me a little better), mostly on the software supporting
the actual core protocol, which Bram himself largely works on, it is awesome
to see Live finally being released to the public. I cannot wait to see how the
public will use it (I am sure anyone with an imagination can think of a
thousand ways a P2P live streaming protocol could be useful and powerful). It
will be interesting to see how usable people find it and whether they end up
adopting it or sticking with the current RTMP server-client architecture.

One thing I always found interesting while working on Live is that although in
some ways live video streaming seems like a more or less undeveloped field,
there is actually already a super successful P2P live video streaming
implementation called PPLive that is BIG in China
(<http://en.wikipedia.org/wiki/PPLive>).

Another interesting thing to check out, if this live video streaming stuff
interests you, is that some guys recently proposed a live video streaming
protocol VERY similar to Live's:

<http://tools.ietf.org/html/draft-ietf-ppsp-peer-protocol-02>

Just compare the BitTorrent Live Protocol and this proposal..

One interesting thing Bram's implementation does is actually speed up and slow
down playback depending on the latency and on whether Live figures you need to
buffer more or can afford a shorter delay/buffer. Talk to Bram and you'll
quickly figure out he is obsessed with low latency.

This ends up being funny in practice too, as you will notice when watching a
stream that the playback speeds up and slows down while you watch.

This is detailed in the patent, but I did not see anything similar in the PPSP
protocol.

~~~
synctext
> Live Video streaming is more or less an undeveloped field there is actually
> already a super successful P2P live video streaming implementation called
> PPLive that is BIG in China

Indeed, so the situation for the 'future of TV' is now:

\- 1+ million users of proven technology (PPLive)

\- patented technology released after 5 years of dev work (BitTorrent)

\- Open Source reference implementation of an upcoming open IETF Internet
standard (PPSP)

> One interesting thing that Brams implementation does is actually speed up
> and slow down the playback of traffic depending on the latency

> This is detailed in the patent but I did not see anything similar in the
> ppsp protocol..

Why link the network with the codec? From an architecture viewpoint I would
consider this a 'layering violation'. VLC has supported dynamic playback speed
for many years: <http://forum.videolan.org/viewtopic.php?t=50581>

Why is live streaming not more popular? In my opinion, due to lack of quality.
If we put the average upload capacity of Internet users at 800 kbps, that is
the maximum donation you get. User donations limit the bitrate and quality of
the live stream. Video quality at 800 kbps is unacceptable on HD laptop
displays and 1080p televisions. As Prof. Keith Ross wrote many years ago, we
need upload-view decoupling
(<http://cis.poly.edu/~ross/papers/VUDSystemMini.pdf>). For HD-quality live
streaming with P2P, users also need to donate bandwidth when not watching.
Unfortunately, going beyond T4T is an open scientific problem.
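The capacity argument above is simple arithmetic; a back-of-envelope sketch
(my own simplification, ignoring protocol overhead and the source's own
upload capacity):

```python
def sustainable_bitrate(avg_upload_kbps, watchers, idle_donors=0):
    """Total upload donated by all peers, divided by the number of
    viewers, bounds the sustainable stream bitrate."""
    total_upload_kbps = avg_upload_kbps * (watchers + idle_donors)
    return total_upload_kbps / watchers

sustainable_bitrate(800, 1000)        # pure tit-for-tat: capped at ~800 kbps
sustainable_bitrate(800, 1000, 1000)  # idle donors double it to ~1600 kbps
```

This is why upload-view decoupling matters: only non-watching donors can lift
the stream bitrate above the average viewer's upload capacity.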

Disclaimer: I'm part of the PPSP streaming team. Note that -02 is outdated;
latest: <http://tools.ietf.org/html/draft-ietf-ppsp-peer-protocol-06>

Shameless plug, Open Source competitor: <https://github.com/Tribler/libswift>
An Android view/inject client is available.

~~~
AnthonyMouse
> For HD quality live streaming with P2P, users also need to donate bandwidth
when not watching.

How is that compatible with being live? To supply bandwidth for a live stream
you would have to be downloading it simultaneously with uploading it, so if
that node is not watching the video then it would be more efficient to have
the node it's receiving from and sending to just connect directly.

I suppose if some subset of nodes can upload at a rate faster than the bitrate
then you can gain some efficiency by consuming all of their available
bandwidth, but unless those nodes have several times more upload bandwidth
than the stream bitrate, that would tend to be pretty inefficient.

It seems like the better solution is just to increase the upload capacity for
the average peer -- either through political pressure on the telcos to expand
capacity, or through market pressure by just enforcing T4T and then offering
both high and low quality streams but only offering high quality streams to
the users with sufficient _upload_ capacity to keep up with them, which would
spur those who can to subscribe to a more expensive internet package with
greater upload capacity.

>Unfortunately, going beyond T4T is an open scientific problem.

It seems like something bitcoin-like could work pretty well: Make it so that
to get a download credit you either do some serious computation that requires
a nontrivial amount of computing resources, or you upload to someone who has
credits, and then they lose them and you get them. Which is basically T4T with
accounting, except that it scales better because you can adapt to shortages
and surpluses of credits by adjusting the amount of computing resources
necessary to generate new credits.

It also solves the T4T bootstrap problem. Peers that newly join the network,
or who had credits but spent them on something nobody else wants and so can't
earn any upload credits because there is no one to download what they have,
can crunch for credits instead of uploading and get back into the network.

Any reason you can think of why that wouldn't work?
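The credit scheme sketched above can be made concrete in a toy form. This is a
hypothetical, centralized sketch (a real system would need decentralized
accounting to prevent double-spending; the class, difficulty parameter, and
hashing scheme are all my own assumptions):

```python
import hashlib

class CreditLedger:
    """Toy ledger for upload credits: T4T with accounting, plus a
    proof-of-work path so new peers can bootstrap without uploading."""

    def __init__(self, pow_difficulty=2):
        self.credits = {}
        self.pow_difficulty = pow_difficulty  # leading zero hex digits required

    def record_upload(self, uploader, downloader, amount=1):
        """Transfer credits from the downloader to the uploader."""
        if self.credits.get(downloader, 0) < amount:
            raise ValueError("downloader has no credits to spend")
        self.credits[downloader] -= amount
        self.credits[uploader] = self.credits.get(uploader, 0) + amount

    def mint_by_work(self, peer, nonce):
        """Bootstrap path: grant one credit for a proof-of-work solution.
        Raising pow_difficulty adapts to surpluses of credits."""
        digest = hashlib.sha256(f"{peer}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * self.pow_difficulty):
            self.credits[peer] = self.credits.get(peer, 0) + 1
            return True
        return False

# A new peer crunches for its first credit, then spends it on a download:
ledger = CreditLedger(pow_difficulty=1)
nonce = 0
while not ledger.mint_by_work("newpeer", nonce):
    nonce += 1
ledger.record_upload(uploader="seeder", downloader="newpeer")
```

The difficulty knob is the adaptive part: when credits are scarce, minting is
cheap; when they are plentiful, minting gets expensive and uploading becomes
the only practical way to earn them.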

~~~
Nimi
Bitcoin pretty much requires that everyone in the network be aware of all the
transactions, to make sure no one can double-spend. I guess the overhead for
using a broadcast transaction for every T4T iteration would be quite large,
but maybe that could work. An interesting thought anyway, thanks for bringing
this up.

------
guard-of-terra
Chinese SOPCast has done this (live streaming via p2p) for years.

People use it to watch sports, for example.

~~~
hack_edu
But does it do it reliably and well? I get nothing but buffering problems;
hell, I'm rarely even able to get up to speed enough to view the stream. Even
if I get on, the quality is too variable and skips abound.

It looks like BT's angle (different from SOPCast) is to break up the large
swarms into smaller groupings that are loosely connected to each other.

~~~
TillE
If you have high upload bandwidth (10Mbit or so), Sopcast works great. If
you've watched some of Bloodzeed's streams, you know how good it can be -
basically flawless HD, delayed by about a minute from the original broadcast.

Unfortunately, Sopcast relies on a centralized service and there was a major
crackdown on football streams that started during Euro 2012, and now all the
really good ones are gone.

In my experience, competitors like TorrentStream are terrible.

------
lifeisstillgood
Actually this strikes me as a fascinating opportunity.

This underlines how dead DRM is. But it also gives a new opportunity to
provide a service for the majority of people.

BT provides some large % of all home ADSL routers in the UK, and they slice
off a % of each router for their "wifi-anywhere" service - it's roaming for BT
subscribers: you park outside my house, you get to use my cordoned-off router
bandwidth, and vice versa.

Now the majority of problems will come from "poorly-behaved clients" - but if
the majority of clients are simply running in the background on most routers,
most clients will be well behaved.

~~~
rayiner
> This underlines how dead DRM is.

Nah, this will just be used to make DRM-ed services like Steam and Netflix and
Hulu and Spotify faster.

And users will love it.

~~~
michaelochurch
rayiner: got your InMail but LinkedIn isn't letting me reply. Email address is
michael.o.church at google's email service.

------
adamduren
Does anyone know what a screamer protocol is? There doesn't seem to be much
information on it.

~~~
mikegioia
I think this might be it, but I only read the intro:
<http://www.guralp.com/documents/SWA-RFC-SCRM.pdf>

------
archivator
It doesn't look particularly revolutionary, though I haven't examined the
graph properties of the swarm that would result from this system.

That said, a side interest of mine is investigating applications of fountain
codes to video streaming. There are a few papers out there and I'm slowly
building up the knowledge (and courage) to implement something in that area..

------
yawniek
Zattoo did the same thing 5 years ago. They replaced it with a
server-to-client system for various reasons.

Bandwidth is too cheap now, and with HLS we have a technology that not only
works on almost all devices out of the box (Flash, iOS, Android) but is also
cacheable at various levels.

Nice technology, but I guess it will occupy a niche.

~~~
marssaxman
What is HLS?

~~~
pixelcort
HTTP Live Streaming

------
ollybee
"it’s a tricky protocol to implement and poorly behaved peers can impact
everyone" Or, to put it another way, it's an unstable protocol where users
could hold broadcasters to ransom.

~~~
jbk
Or maybe "we need revenue because we are bleeding money, so we'll try to
monetize this one" ?

~~~
missing_cipher
"Bram Cohen explains that the patent is in no way going to restrict users’
access to the new protocol, quite the contrary. BitTorrent Live will be
available to end users for free, and publishers who are using the service and
hosting it on their own will not be charged either."

Seems like it's free. Maybe they'll offer a service where they host for
publishers.

~~~
jbk
Free for users does not mean free for publishers.

If there are open implementations, there can be free publishers.

------
DomBlack
Link to patent application: <http://www.scribd.com/doc/132418122/Bittorrent-Live-Patent>

------
woah
It will be interesting to see BitTorrent litigating a patent suit.

------
pbrumm
Is a patent like this likely to be granted? I thought algorithm patents were
hard to get now.

~~~
jacques_chester
Given the publicity, it's going to have a rocky time. Pretty much anyone has
standing to raise an objection to a patent, so you can expect that
competitors, anyone with a threatened business model, etc., will be raising
objections all over the place.

------
unwind
The article says "screaming" a number of times when I guess it should say
"streaming". Or is "screaming" used in some technical sense with BitTorrent? I
did Google it but came up empty.

~~~
mikegioia
_In the most basic mode of operation, a Scream client sends a UDP request
packet to a Scream server at a regular interval. The Scream server transmits
GCF blocks with some additional information to any clients that have sent a
recent request. The usual port number (both TCP and UDP) for Scream is 1567_

This was the best I could find: <http://www.guralp.com/documents/SWA-RFC-SCRM.pdf>

~~~
unwind
Aha, okay, so it is an actual protocol. Thanks!

