
A Fairer, Faster Internet Protocol - naish
http://www.spectrum.ieee.org/print/7027
======
Retric
Seems to me the simplest solution at the local level with an oversubscribed
pipe is round-robin transmission per IP address. If A wants to send 1000
packets, B wants 100, and C wants 10, then A gets 10 while B gets 10 while C
gets 10, then A gets 100 while B gets 100, then A gets 890. This ignores
how the users at A are using the network and still gives low-latency bandwidth
to those applications that need it.
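The per-IP round-robin idea can be sketched as a quantum-based scheduler: each sender keeps a queue, and each turn drains at most a fixed quantum of packets per nonempty queue. This is a minimal illustrative sketch (the function and quantum value are hypothetical, not anything an ISP actually runs):

```python
from collections import deque

def round_robin_drain(queues, quantum=10):
    """Drain per-sender queues round-robin, `quantum` packets per turn.

    `queues` maps a sender ID to a deque of packets; returns the order
    in which packets would go out on the wire.
    """
    order = []
    while any(queues.values()):
        for sender, q in queues.items():
            # Each sender gets at most `quantum` packets per round.
            for _ in range(min(quantum, len(q))):
                order.append((sender, q.popleft()))
    return order

# The comment's scenario: A has 1000 packets, B has 100, C has 10.
queues = {"A": deque(range(1000)),
          "B": deque(range(100)),
          "C": deque(range(10))}
schedule = round_robin_drain(queues)
```

With these numbers, C finishes in the first round and B after ten rounds, after which A has the pipe to itself, matching the 10 / 100 / 890 split described above.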

PS: I don't think this is how cable companies handle their local loop.

~~~
rglullis
You do know that this is not how TCP works, right?

It's not that a node says how many packets it wants to send and the
endpoint decides how many it is going to consume.

In reality, each node just sends the packet and waits for the ACK from the
other endpoint. So, if A has 1000 packets, it's going to try to send them,
_all of them_. So will B and C.

The problem is, if the endpoint is only able to handle 1000 packets per unit
of time, some of them will be dropped. But there is no way for an endpoint to
say "Oh, A is clogging the pipe, I'll drop from it". The endpoint just gets
what it can and ACKs.
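The behavior being described is essentially a shared tail-drop FIFO: the queue fills in arrival order and never looks at who sent what. A minimal sketch (names and the capacity of 1000 are taken from the discussion, purely for illustration):

```python
from collections import deque

def tail_drop(arrivals, capacity=1000):
    """Shared FIFO: accept packets in arrival order until full, then drop.

    The queue never inspects the sender, so whoever sends fastest
    fills it first -- there is no "A is clogging the pipe" decision.
    """
    queue = deque()
    dropped = []
    for sender, pkt in arrivals:
        if len(queue) < capacity:
            queue.append((sender, pkt))
        else:
            dropped.append((sender, pkt))
    return queue, dropped

# A blasts 1000 packets before B's 100 and C's 10 arrive:
arrivals = ([("A", i) for i in range(1000)]
            + [("B", i) for i in range(100)]
            + [("C", i) for i in range(10)])
accepted, dropped = tail_drop(arrivals)
```

Here every accepted packet is A's and all of B's and C's traffic is dropped, which is exactly the unfairness the parent comment's round-robin scheme is trying to avoid.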

As the article makes pretty clear, the problem is not bandwidth, it's
_congestion volume_.

~~~
Retric
Generally speaking, TCP is an endpoint protocol that runs over IP. Switches
don't need to know anything about TCP to route TCP traffic. Anyway, I was not
talking about TCP; I was referring to the simplest way for a high-bandwidth
ISP to deal with oversubscribing their network without pissing people off.
They don't need to know anything about the IP packets you're sending, be they
UDP, TCP, or whatever.

His suggestion is that people who use less bandwidth on average should get
better bandwidth when things get congested, but your network is only affected
by the last few seconds of traffic, so letting people have all the bandwidth
for new connections seems like a bad idea. (Comcast was doing something like
this and it ended up messing with a lot of low-bandwidth apps.)

PS: Networking equipment has internal buffers, so it can buffer 20 and only
20 packets from each user / network and then round-robin transmission of
those packets down the line. The advantage of this is that if you're sending a
small stream of data you're going to have low latency, and if you're trying to
flood the pipe you and only you will get a high number of packets dropped.
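That per-user buffer cap can be sketched in a few lines: each user gets a bounded queue, and arrivals beyond the cap are dropped without touching anyone else's buffer. The limit of 20 comes from the comment above; the function names are hypothetical:

```python
from collections import deque

BUF_LIMIT = 20  # per-user buffer size from the comment

def enqueue(buffers, user, packet):
    """Buffer a packet for `user`, dropping it if that user's buffer is full.

    Returns True if the packet was buffered, False if it was dropped.
    Only the flooding user's own packets are ever dropped.
    """
    q = buffers.setdefault(user, deque())
    if len(q) >= BUF_LIMIT:
        return False
    q.append(packet)
    return True

buffers = {}
# A small stream fits entirely; a flood loses everything past the limit.
small_ok = [enqueue(buffers, "small", p) for p in range(5)]
flood_ok = [enqueue(buffers, "flood", p) for p in range(100)]
```

The small sender keeps all 5 packets buffered while the flooder is capped at 20, so drops land only on the user causing the congestion.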

------
bprater
The article is too dense and the font too tiny. Anyone have a two-sentence
synopsis?

However, I love the pixel art. We need more pixel art in this world!

~~~
kaens
Check View -> Page Style if you're using Firefox.

