

Bandwidth breakthrough promises to boost bandwidth tenfold - cpeterso
http://www.technologyreview.com/news/429722/a-bandwidth-breakthrough/

======
blrgeek
This is very interesting. Error-correcting codes are already used widely, from
ECC RAM to HDDs to CDs. These are generally applied to static data, i.e. the
data is not streaming: you know up front the data that is going to be written
and protected.

Protecting streaming data packets up to 1500 bytes in length by using previous
and later packets is something I've never even imagined. Kudos to the
scientists behind it!

The magnitude of the gains is interesting - perhaps partly due to avoiding
retransmissions at the MAC level, and partly due to avoiding the TCP
retransmissions that would otherwise be triggered across the entire network.

They seem to be keeping the mechanism under wraps and licensing it out under
NDA - that makes me a bit sad and skeptical about when we'll get to actually
benefit from this - but these scientists have definitely earned their
reward...

I wonder what encoding they're using, and whether it weakens the error
detection within a single packet.

~~~
Jabbles
This kind of idea is already used quite extensively in video conferencing (at
the UDP layer), where it is known as "forward error correction". If enabled,
every n packets are XORed together, allowing the receiver to recover from a
single lost packet in that group, at the cost of a 1/n increase in bandwidth.
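
A toy sketch of that XOR scheme in Python (the 1500-byte size and zero-padding
are just for the example; real FEC schemes also carry sequence numbers and
explicit lengths so the receiver knows which packet is missing and how long it
was):

    # Toy XOR FEC: every group of n data packets gets one parity packet (their XOR).
    # Any single lost packet in the group can be rebuilt; two or more losses cannot.
    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def make_parity(packets, size=1500):
        parity = bytes(size)
        for p in packets:
            parity = xor_bytes(parity, p.ljust(size, b"\x00"))
        return parity

    def recover_lost(group_with_gap, parity, size=1500):
        # group_with_gap: the received group, with None where the lost packet was
        missing = parity
        for p in group_with_gap:
            if p is not None:
                missing = xor_bytes(missing, p.ljust(size, b"\x00"))
        return missing  # still zero-padded; a real scheme carries the true length

    group = [b"alpha", b"bravo", b"charlie"]
    parity = make_parity(group)
    assert recover_lost([b"alpha", None, b"charlie"], parity).rstrip(b"\x00") == b"bravo"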

I am interested in the size of the gains as well, and hope more details come
out.

~~~
blrgeek
Presumably video packets have the advantage of being fixed in size and
repeating at regular intervals.

Dealing with irregularly sized packets at irregular intervals is a more
interesting problem :)

~~~
vr000m
In video, unless you are talking about MPEG-2, packets are not of equal size.
There are I (intra) and P (predicted) frames. I-frames are large, spatially
compressed frames, while P-frames are small and temporally compressed.

Beyond that, depending on motion compensation, the size of the video frames
may vary: high motion creates larger frames and low motion creates smaller
frames.

Frame size also depends on resolution, so if you vary that you get even higher
variability in frame sizes.

Furthermore, if you vary the frame rate, packet generation times become
irregular. The same logic about motion applies here: if there is low motion,
reduce the frame rate; if there is high motion, increase the frame rate.

Typically, audio has equal-sized packets and regular packetization intervals
(though even audio has exceptions, for instance silence suppression in
telephony).

------
paulsutter
Nothing new here. Digital Fountain was using Tornado codes ten years ago to
eliminate the need for retransmits and work around TCP's crap retransmit logic
[1], and at Orbital Data we patented numerous techniques to vastly improve
retransmit performance while maintaining the TCP wire format.

Most of all, the article implies that the system bandwidth is increased with
these techniques. In fact, individual client bandwidth can be increased but
the system bandwidth is unchanged.

[1]
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.24....](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.24.921&rep=rep1&type=pdf)

------
Rhapso
This is not really a new idea; based on the info in the article we can infer
what they are doing.

Here is a paper on the principle at work:
[http://www.sciencedirect.com/science/article/pii/S0165168405...](http://www.sciencedirect.com/science/article/pii/S0165168405002215)

Strictly speaking you cannot compress data by expressing it as the solution to
a system of linear equations, but if you are willing to make the system
under-determined (too few equations and too many variables) you can. It looks
like all that is going on is that they compress and send the current packet's
worth of data this way, along with half of each adjacent packet's worth. This
comes to slightly less than double the original size because the data is
compressed. That way, if we lose a packet we can reconstruct it using the data
from the adjacent packets.
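
A toy numpy sketch of the erasure-recovery side of that linear-equation view
(not the scheme from the article, just the general principle, and using floats
where a real system would use a finite field such as GF(256)):

    import numpy as np

    rng = np.random.default_rng(0)
    k, m = 4, 5                      # 4 original packets, 5 coded packets on the wire
    packets = rng.integers(0, 256, size=(k, 8)).astype(float)  # toy 8-byte packets

    # Encode: each coded packet is a random linear combination of the originals.
    coeffs = rng.standard_normal((m, k))
    coded = coeffs @ packets

    # Channel: any single coded packet may be lost (here, index 2).
    keep = [0, 1, 3, 4]

    # Decode: k independent combinations remain, so solve the square linear system.
    recovered = np.linalg.solve(coeffs[keep], coded[keep])
    assert np.allclose(np.rint(recovered), packets)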

It works out to be a trade-off of reliability against memory usage and
processing time. For wireless sensor networks (the specialty of one of the
labs at my college, where I ran across this technique) it is a really good
idea. If you lose more than one packet there is no gain, so in any situation
where packet loss is bursty and tends to span more than one packet this will
not be of any use; its applicability to WiFi and cellular is therefore iffy.

------
Medard
The codes here are not like conventional FECs, such as block codes (think
Reed-Solomon and their ilk) or rateless codes (think Fountain codes and the
like). Such codes can be used over UDP readily. Using such codes over TCP is
quite difficult.

Besides the usual difficulties, traditional codes have a structure that
precludes their composability. There are several issues with this. One is that
they cannot be re-encoded without being decoded. The ability to re-encode
blindly is crucial to achieving capacity. For a theoretical treatment of this,
I would refer you to <http://202.114.89.42/resource/pdf/1131.pdf>. The article
is rather mathematical, but the abstract provides the gist of the results: "We
consider the use of random linear network coding in lossy packet networks. In
particular, we consider the following simple strategy: nodes store the packets
that they receive and, whenever they have a transmission opportunity, they
send out coded packets formed from random linear combinations of stored
packets. In such a strategy, intermediate nodes perform additional coding yet
do not decode nor wait for a block of packets before sending out coded
packets. Moreover, all coding and decoding operations have polynomial
complexity. We show that, provided packet headers can be used to carry an
amount of side-information that grows arbitrarily large (but independently of
payload size), random linear network coding achieves packet-level capacity for
both single unicast and single multicast connections and for both wireline and
wireless networks. "

The benefit of this composability, which we had first shown theoretically, is
illustrated for TCP in the following paper that recently appeared in the
Proceedings of the IEEE:

[http://dandelion-patch.mit.edu/people/medard/papers2011/Netw...](http://dandelion-patch.mit.edu/people/medard/papers2011/Network%20CodingMeets%20TCP-%20Theory%20and%20Implementation.pdf)

The theory of network coding promises significant benefits in network
performance, especially in lossy networks and in multicast and multipath
scenarios. To realize these benefits in practice, we need to understand how
coding across packets interacts with the acknowledgment (ACK)-based flow
control mechanism that forms a central part of today’s Internet protocols such
as transmission control protocol (TCP). Current approaches such as rateless
codes and batch-based coding are not compatible with TCP’s retransmission and
sliding-window mechanisms. In this paper, we propose a new mechanism called
TCP/NC that incorporates network coding into TCP with only minor changes to
the protocol stack, thereby allowing incremental deployment. In our scheme,
the source transmits random linear combinations of packets currently in the
congestion window. At the heart of our scheme is a new interpretation of
ACKs: the sink acknowledges every degree of freedom (i.e., a linear combination
that reveals one unit of new information) even if it does not reveal an
original packet immediately (...) An important feature of our solution is that
it allows intermediate nodes to perform re-encoding of packets, which is known
to provide significant throughput gains in lossy networks and multicast
scenarios.
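
As a toy illustration of the two ingredients described above (random
combinations over the sender's window, and acknowledging degrees of freedom),
here is a sketch over GF(2) with the coefficient vector carried alongside each
coded packet; a real implementation would use coefficients from a larger field
such as GF(256):

    import random

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(window):
        # Source: XOR a random subset of the equal-sized packets currently in the
        # window, and ship the coefficient vector (one bit per packet) as metadata.
        coeffs = [random.getrandbits(1) for _ in window]
        if not any(coeffs):
            coeffs[0] = 1                  # avoid a useless all-zero combination
        payload = bytes(len(window[0]))
        for c, pkt in zip(coeffs, window):
            if c:
                payload = xor_bytes(payload, pkt)
        return coeffs, payload

    def recode(coded):
        # Intermediate node: mix coded packets it already holds WITHOUT decoding,
        # by XORing payloads and XORing their coefficient vectors the same way.
        coeffs = [0] * len(coded[0][0])
        payload = bytes(len(coded[0][1]))
        for c, p in random.sample(coded, max(1, len(coded) // 2)):
            coeffs = [a ^ b for a, b in zip(coeffs, c)]
            payload = xor_bytes(payload, p)
        return coeffs, payload

    def degrees_of_freedom(coeff_vectors):
        # Sink: the rank of the received coefficient vectors over GF(2) is the
        # number of degrees of freedom it can acknowledge, even before any
        # original packet is decodable on its own.
        pivots = {}                        # leading-bit position -> reduced row
        for vec in coeff_vectors:
            row = int("".join(map(str, vec)), 2)
            while row:
                lead = row.bit_length() - 1
                if lead in pivots:
                    row ^= pivots[lead]    # cancel the leading bit, keep reducing
                else:
                    pivots[lead] = row     # a new independent combination
                    break
        return len(pivots)

Once the rank equals the window size, the sink can decode the whole window by
Gaussian elimination; until then it simply acknowledges each increase in rank
as it arrives.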

The exciting news that prompted the piece in TR is that we now have an
implementation "in the wild", in a simple Amazon proxy, which required
significant engineering and works remarkably well.

The article refers to the issue of composability by pointing out that there
would be an advantage if it were built "directly into transmitters and routers,"
she says. The more places you build this, the better. Dave Talbot also
provided a link to work that our group has done with researchers at Alcatel-
Lucent <http://arxiv.org/pdf/1203.2841.pdf> which touches on this point to
show the very important energy savings that could be obtained by implementing
network coding in different parts of the network.

------
ripperdoc
This seems to be a paper from the leader of the study, giving some more
details:
[http://www.mit.edu/~medard/papers2011/Modeling%20Network%20C...](http://www.mit.edu/~medard/papers2011/Modeling%20Network%20Coded%20TCP.pdf)

------
sly010
TCP relies on lost packets for congestion control. They improve their
bandwidth by removing that signal. I really wonder what happens when you
deploy something like that at scale.

My guess is that if you don't implement this carefully, it might actually
cause congestion problems under certain circumstances. Regardless, getting a
reliable connection for yourself by tunnelling and FEC is nice.

~~~
regularfry
From the paper linked elsewhere in this thread:

    
    
        Note that network coding only masks random erasures, and allows 
        TCP to react to congestion; thus, when there are correlated losses, 
        TCP/NC also closes its window.
    

Sounds like they thought of that.

------
vrodic
So this is basically forward error correction (FEC) optimized for TCP
specifically?

------
chmike
A simple way to implement this on top of the existing IP stack is to use a
proxy split in two: one half on the mobile device and the other half on a
static host connected to the Internet over a stable connection.

Between the two proxy halves, use UDP communication hardened from the
packet-loss point of view.

The idea is not original, since I (at least) had it long ago but never bothered
implementing it. Maybe it is already implemented in IPsec. Things are easy if
packets are lost randomly. If they are lost in bunches, then things may become
more difficult.

I'm surprised, however, that phone data networks or WiFi don't have such
functionality built in.

------
bifrost
This will do nothing for lossless networks, so I'm not sure this is really all
that interesting. There is too much money being spent on infrastructure to
tolerate lossy networks...

~~~
ripperdoc
All networks can have packet loss, and the ones we use more and more (the
wireless ones) certainly have a higher packet loss rate. Not sure what you are
suggesting should be done?

~~~
koenigdavidmj
Right, but 802.11 has built-in acknowledgments, preserving the illusion of a
reliable network to higher layer protocols (TCP being the one where it matters
here).

