

Increasing wireless network speed by 1000%, by replacing packets with algebra - o1iver
http://www.extremetech.com/computing/138424-increasing-wireless-network-speed-by-1000-by-replacing-packets-with-algebra

======
tomstokes
Summary: TCP throughput drops dramatically when packet loss is present. This
technique uses forward error correction to compensate for packet loss,
resulting in higher effective throughput over lossy links. Many WiFi and
cellular connections are lossy, so this would be helpful in those cases.

They haven't improved the underlying link rate at all. In fact, the FEC
overhead is going to reduce the effective link rate. However, in some edge-
case high packet loss scenarios, the reduced packet loss will more than make
up for the reduced effective link rate.
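For a rough sense of the tradeoff, here's a back-of-the-envelope sketch using the well-known Mathis et al. square-root approximation for TCP throughput (illustrative numbers, not measurements from the article; the 10% overhead and residual-loss figures are assumptions):

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. approximation: throughput ~ (MSS / RTT) * sqrt(3/2) / sqrt(p)."""
    return (mss_bytes / rtt_s) * math.sqrt(1.5 / loss_rate)

# Hypothetical lossy wifi link: 1460-byte MSS, 30 ms RTT, 2% packet loss.
raw = mathis_throughput(1460, 0.030, 0.02)

# Suppose FEC burns 10% of the link rate but masks almost all radio loss,
# leaving only 0.01% residual loss visible to TCP.
coded = 0.9 * mathis_throughput(1460, 0.030, 0.0001)

ratio = coded / raw  # the coded link wins by roughly an order of magnitude
```

Because TCP throughput scales with 1/sqrt(loss), even a large fixed FEC overhead is easily repaid once loss is high enough.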

~~~
mrb
Yep. This appears to be nothing more than FEC. Maybe they used LDPC or LDGM,
which are superior to traditional Reed-Solomon codes.

I remember doing research on FEC codes back in 2003-2004 for developing a
protocol for sending large files over satellite links to multicast recipients
when I was working for SmartJog.

~~~
jwm
> I remember doing research on FEC codes back in 2003-2004 for developing a
> protocol for sending large files over satellite links to multicast
> recipients when I was working for SmartJog.

Interestingly, this tool looks like it is useful for the problem you describe:

<http://www.udpcast.linux.lu/satellite.html>

~~~
mrb
Yup. I remember looking at udpcast. I did not select it because, amongst other
things, it did not support encryption. And I don't think we could use IPsec.

------
wmf
Google, don't speculate. The primary source appears to be
<http://www.mit.edu/~medard/papers2011/Network%20CodingMeets%20TCP-%20Theory%20and%20Implementation.pdf>
and similar papers from the same authors.

------
guelo
Ugh, remember when academia helped create the internet by releasing free
publicly available unlicensed RFCs?

Edit: Not RFPs, duh

~~~
tptacek
I'm not sure I remember how each successive advancement in signaling from
802.3-coax through 10gbE through gigabit wireless was driven by public RFCs.

~~~
guelo
That's a good point, since most networking standards were developed under the
IEEE, which was always more industry-oriented than the IETF. I think they're
both mostly industry players at this point. But still, there is a rich history
of free contributions from academia, which I think is rarer today.

------
oconnor0
See the original discussion & better article at
<https://news.ycombinator.com/item?id=4686743>

------
zackbloom
TL;DR They are using error correcting codes across packets to limit the impact
of losing some.

------
Inufu
This looks like Fountain codes: <http://en.wikipedia.org/wiki/Fountain_code>

Basically, you split your data into blocks and XOR random subsets of blocks
together; the client can recreate the data by solving the system of equations
describing which blocks were XORed with which.

A good tutorial is here:
<http://blog.notdot.net/2012/01/Damn-Cool-Algorithms-Fountain-Codes>

And a fast implementation: <http://en.wikipedia.org/wiki/Raptor_code>
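As a concrete illustration of the XOR idea, here is a toy Python sketch (not the scheme from the article): the subsets are fixed here so the example is deterministic, whereas a real fountain code picks them pseudo-randomly from a degree distribution.

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks, subsets):
    """Each coded symbol is the XOR of the source blocks in one subset."""
    symbols = []
    for subset in subsets:
        payload = bytes(len(blocks[0]))
        for i in subset:
            payload = xor(payload, blocks[i])
        symbols.append((set(subset), payload))
    return symbols

def decode(symbols, n_blocks):
    """Peeling decoder: repeatedly solve symbols that cover one unknown block."""
    known = {}
    pending = [(set(s), bytearray(p)) for s, p in symbols]
    progress = True
    while progress and len(known) < n_blocks:
        progress = False
        for subset, payload in pending:
            for i in list(subset):        # substitute already-recovered blocks
                if i in known:
                    payload[:] = xor(payload, known[i])
                    subset.discard(i)
            if len(subset) == 1:          # exactly one unknown left: solved
                i = subset.pop()
                if i not in known:
                    known[i] = bytes(payload)
                    progress = True
    return [known.get(i) for i in range(n_blocks)]

data = b"abcdefghijkl"                    # 3 source blocks of 4 bytes each
blocks = [data[i:i + 4] for i in range(0, len(data), 4)]
subsets = [{0}, {1}, {2}, {0, 1}, {1, 2}, {0, 1, 2}]
symbols = encode(blocks, subsets)
survived = [symbols[0], symbols[3], symbols[4], symbols[5]]  # symbols 1, 2 lost
recovered = decode(survived, len(blocks))  # all three blocks come back
```

Six symbols for three blocks is 100% overhead, of course; real codes use sparse, carefully chosen degree distributions so only slightly more symbols than blocks are needed.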

------
stephengillie
The linkbaity title links to a neat, proprietary way to reduce wireless packet
loss.

------
potkor
It's the job of the 802.11 L2 to hide correctable packet loss from L3, so most
radio link loss isn't passed on to TCP. This could just as well read "the wifi
L2 error correction doesn't try hard enough".

This problem is very common, but it wants fixing at L2, not in TCP. Turning up
the FEC at L2 would reduce its capacity even further, though, since more of the
bandwidth is taken up by the FEC (as is also true of this TCP-level FEC).

3G gets it wrong at the other extreme: it pretends to always have 0% packet
loss; your packets just sometimes show up 30-100 seconds late, in order.

------
delinka
I see lots of comments talking about FEC. That's not how the article reads to
me. Granted the author (or I?) may be completely out in left field, but here's
my take on what it says:

Let's suppose you have a mathematical process that outputs a stream of
[useful] data. The description of the process is much, much smaller than the
output. You can "compress" the data by sending the process (or equation)
instead. Think π. Do you transmit a million digits of π or do you transmit the
instruction "π to a million digits"? The latter is shorter.

Now, reverse the process: given an arbitrary set of data, find an equation (or
process) that represents it. Not easy for sure. Perhaps not possible. I recall
as a teenager reading an article about fractals and compression that called on
the reader to imagine a fractal equation that could re-output your specific
arbitrary data.

If I've totally missed the article's point, please correct me, but explain why
it also talks about algebra.

EDIT: I re-read and noticed this: "If part of the message is lost, the
receiver can solve the equation to derive the missing data." I can see the FEC
nod here.

Guh. I guess I'm blind tonight. "Wireless networks are in desperate need for
forward error correction (FEC), and that’s exactly what coded TCP provides." I
cannot for the life of me understand why they'd need to keep this a secret.

------
mikepurvis
There's a company in Ottawa implementing a similar idea, but based on carving
up packets and then inserting one or more redundant XOR packets (RAID-style).
Their name for it is IPQ: <http://liveqos.com/products/ipq/>

They have a patent on this: <http://www.google.com/patents/US20120201147>

(Disclosure: I was an intern there in 2009, when it was IPeak Networks.)
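The RAID-style idea is easy to sketch (hypothetical Python, assuming equal-length packets; not LiveQoS's actual implementation): one XOR parity packet per group lets the receiver rebuild any single lost packet in that group.

```python
def parity(packets):
    """XOR all packets together (they must be the same length)."""
    acc = bytes(len(packets[0]))
    for pkt in packets:
        acc = bytes(a ^ b for a, b in zip(acc, pkt))
    return acc

group = [b"pkt0", b"pkt1", b"pkt2"]
redundant = parity(group)                # the extra "RAID" packet on the wire

# Lose one packet; XORing the survivors with the parity packet rebuilds it,
# because every other packet cancels itself out of the XOR.
lost = 1
survivors = [p for i, p in enumerate(group) if i != lost]
rebuilt = parity(survivors + [redundant])
```

With n data packets per group this costs 1/n of the bandwidth and tolerates one loss per group, so group size tunes the overhead/robustness tradeoff.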

------
Jabbles
Has anyone done an experiment to see what simply duplicating every TCP packet
sent over wireless does? If you're in a situation where you're limited by
random packet loss and not by raw bandwidth, I imagine it could help...

Obviously this is a much weaker and less efficient solution than what is
proposed in the paper, but it would be trivial to implement. I believe netem
allows you to simulate this.
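Back of the envelope (assuming independent losses, which real radio links only approximate since loss is often bursty):

```python
# With every packet sent twice, a packet is lost only if both copies are lost.
p = 0.05                 # 5% independent loss per transmission
dup_loss = p ** 2        # ~0.25% effective loss
overhead = 0.5           # but half the usable bandwidth is gone, always
```

The fixed 2x cost is what coded schemes avoid: they can dial redundancy up or down to match the observed loss rate.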

~~~
jasonwatkinspdx
There was an RTS game in the late '90s that did something like this. It was
peer-to-peer, with peers broadcasting their input events to each other and
running mirrored simulations in lockstep. The bandwidth used was trivial,
since you can encode a keypress or mouse click in just a couple of bytes, but
any packet loss would degrade the game for all peers. So they simply repeated
the last packet's input data in successive packets. No algebraic coding or any
other attempt to save bandwidth, since the data was already trivial.

------
leif
Is this likely to be any different from introducing a little error-correction?

Also, how were we not doing this already?

Also, I need a writer. Whoever wrote this up made it sound WAY cooler than
when I explain error correcting codes.

~~~
lmm
As it says, in wired networks packet loss is almost entirely due to
congestion, so adding error-correction is unnecessary - better to just
throttle down when you see it.

TCP is old and reliable and there are massive costs associated with switching
to anything else. I think that's why we weren't doing this yet, and I think
that's why this initiative will fail too.

------
lovamova
This is one of those posts where the comments on the site are better than the
article submitted.

It also means less power consumption on mobile phones. There will be no need
to increase signal power to get better speed or voice quality.

------
dnos
I'm not up-to-date on networking technologies, but it's surprising to me that
some sort of error correction hasn't already become standard.

I wonder if something along the lines of old-school parity files would work in
the packet world? Basically, just blast out the packets, and reconstruct any
that were lost using the parity data sent with the other packets.

------
codex
Isn't the downside of FEC encoded packets increased latency? Instead of
sending each packet immediately, don't you need to accumulate n packets to
encode as a group? Or does the math allow incremental encoding? Simple parity
is incremental, but the FEC on DSL lines always added 40ms of latency.

~~~
loeg
40ms extra latency on wifi is nothing compared to packet loss.

~~~
codex
Really? My wifi RTT to the metropolitan core is 15 ms, and over LTE it is 47
ms. An unconditional extra 40 ms of RTT is not a worthwhile tradeoff.

I imagine the higher latency was used to offset lower bandwidth due to FEC
encoding loss, by amortizing it over more bits.

------
codex
What's funny here is that wireless networks already use FEC at the physical
layer. This just adds more, less conservative, FEC higher up the stack, for
apps where it makes more sense to reduce throughput and increase average-case
latency in order to avoid worst-case latency and worst-case throughput.

------
liamzebedee
My friend did something similar to the concepts in this article, except he
applied it to audio compression. It was basically finding patterns in the
audio and transforming them into equations. Interesting stuff.

------
Schwolop
What a remarkably easy to read article.

------
Snapps
Captivating title...

