
Van Jacobson Denies Averting Internet Meltdown in 1980s (2012) - kercker
https://www.wired.com/2012/05/van-jacobson/
======
Animats
I'd been working on network congestion in 1984-1985, and wrote the classic
RFCs on this.[1][2] I did the networking work at Ford Aerospace, but they
weren't in the networking business, so it was a sideline activity. Once we had
the in-house networking working well, that effort was over. By 1986, I was out
of networking and working on some things for a non-networking startup, which
turned out very well.

There was much computer vendor hostility to TCP/IP, because it was vendor
neutral. DEC had DECnet. IBM had SNA. Telcos had X.25. Networking was seen as
an important part of vendor lock-in. Working for a company that was a big
buyer of computing, I had the job, for a while, of making it all talk to each
other.

Berkeley BSD's TCP was so influential because it was free, not because it was
good. It took about five years for it to get good. Pre-Berkeley
implementations included 3COM's UNET (we ran that one, after I made heavy
modifications), Phil Karn's KA9Q version for amateur radio, Dave Mills'
Fuzzball implementation for small PDP-11 machines, and Mark Crispin's
implementation for DEC 36-bit machines. The first releases of Berkeley's TCP
would only talk to Berkeley TCP, and only over Ethernet. Long haul links
didn't work, and it didn't interoperate properly with other implementations.
(The initial release of 4.3BSD would only talk to some systems during
even-numbered 4-hour periods because Berkeley botched the sequence number
arithmetic. I spent 3 days finding that bug, and it wasn't fun. Casts to
(unsigned) had been misused.)
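
For readers who haven't chased this class of bug: TCP sequence numbers are
32-bit and wrap around, so "a comes before b" has to be decided on the signed
difference, not on a plain unsigned comparison. A minimal sketch of the
correct and broken forms, with helper names invented here rather than taken
from the BSD source:

    #include <stdint.h>

    /* correct: the signed difference wraps properly across 2^32 */
    static int seq_before(uint32_t a, uint32_t b)
    {
        return (int32_t)(a - b) < 0;
    }

    /* broken: a plain unsigned compare is wrong for half the sequence
       space, so two stacks agree only part of the time */
    static int seq_before_broken(uint32_t a, uint32_t b)
    {
        return a < b;
    }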

The Berkeley crowd liked dropping packets much more than I did. I used ICMP
Source Quench messages to tell the sender to slow down, rather than
dropping packets. I was more concerned with links with large round-trip times,
because we were linking multiple company locations, while Berkeley was, at the
time, mostly a local area network user. So they had much lower round trip
times.

I'm responsible for the term "congestion collapse" and devised "fair
queuing". I also pointed out the game-theory problems of datagram networks -
sending too much is a win for you, but a loss for everybody. This was all in
1984. Today, we have "bufferbloat", which is a localized form of congestion
collapse, fair queuing is widely used (but not widely enough), and we have
enough core network bandwidth that the congestion is mostly at the edges.
Today's hint: if you have something with a huge FIFO buffer feeding a
bottleneck, you're doing it wrong. Looking at you, home routers.
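
The core idea of fair queuing is simple enough to sketch: keep one queue per
flow and service the queues round-robin, so a flow that stuffs a huge FIFO
only delays itself. The sketch below is an illustration of that idea, not the
RFC 970 algorithm verbatim, and the types and names are invented:

    #define MAX_FLOWS 64

    struct pkt { struct pkt *next; /* ... headers and payload ... */ };

    /* one FIFO per flow instead of one big shared FIFO */
    struct flow_q { struct pkt *head, *tail; } flows[MAX_FLOWS];

    /* transmit scheduler: at most one packet per flow per round */
    struct pkt *fq_dequeue(void)
    {
        static int cursor = 0;
        for (int i = 0; i < MAX_FLOWS; i++) {
            struct flow_q *q = &flows[(cursor + i) % MAX_FLOWS];
            if (q->head) {
                struct pkt *p = q->head;
                q->head = p->next;
                if (q->head == NULL)
                    q->tail = NULL;
                cursor = (cursor + i + 1) % MAX_FLOWS;
                return p;
            }
        }
        return NULL; /* all flow queues empty */
    }

A real implementation hashes packets to flows, bounds per-flow queue depth,
and accounts for packet size (deficit round-robin), but the fairness comes
from the per-flow queues.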

Back then, I realized that fair queuing could be turned into what's now called
"traffic shaping", but decided not to publish that because it would provide
ammunition for the people who wanted to charge for Internet traffic. There
were telco people who assumed that something like the Internet would have
usage billing. This could easily have gone the other way. Look up "TP4", an
alternative to TCP pushed by telcos. That was supported by Microsoft up to
Windows 2000.

Berkeley broke the Nagle algorithm by putting in delayed ACKs. Those were a
bad idea. The fixed ACK delay is designed for keyboard echo and nothing else.
When a packet needs an ACK, Berkeley delayed sending the ACK for a fixed time,
in hopes that it could be piggybacked on the returned echoed character packet.
The fixed time, usually 500ms, was chosen based on human keyboarding speed.
Delaying an ACK is a bet that a reply packet is coming back before the sender
wants to send again. This is a lousy bet for anything but classical Telnet.
Unfortunately, I didn't hear about this until years after it was too late,
having moved to PC software.
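
For what it's worth, the usual application-side workaround for that
interaction today (not something discussed above) is to turn off the Nagle
algorithm on sockets doing small write-write-read exchanges, at the cost of
more small packets on the wire. A minimal sketch using the standard BSD
sockets option:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* disable the Nagle algorithm on one connected TCP socket */
    static int disable_nagle(int sock)
    {
        int one = 1;
        return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                          &one, sizeof one);
    }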

UNET was expensive; several thousand dollars per machine. BSD offered a free
replacement. So 3COM exited TCP/IP and went off to do "PC LANs", which were a
thing in the 1980s.

John Nagle

[1] [https://tools.ietf.org/html/rfc896](https://tools.ietf.org/html/rfc896)
[2] [https://tools.ietf.org/html/rfc970](https://tools.ietf.org/html/rfc970)

~~~
Taniwha
I think there's another piece of context about TCP/IP vs. everything else
(SNA, BNA, DECNet, X25, ISO/whatever etc):

TCP/IP was based on datagrams at the bottom layers, and expected them to be
dropped if something went wrong. Everyone else had a phone-system mindset:
worldwide, the PTTs charged for phone calls, the assumption was that they'd
charge for reliable data connections (by the minute or by the byte), and the
protocols essentially mirrored that. You talked to a supplier who did all the
low-level work for you and essentially placed a call to a remote machine.
Under IP you just sent datagrams and most of them got there.

I think that TCP/IP 'won' (and it wasn't obviously going to win at the time)
because:

1) the connection to the wider internet was very simple, and essentially
stateless

2) it let people innovate everywhere else up the stack

3) no company (or country) owned it

~~~
Animats
Basic truth: the only reason pure datagram networks work is that long-haul
fiber optic backbone bandwidth became so cheap. If long-haul bandwidth cost
even 0.01% of what it cost in the 1980s, we'd be doing this very differently.
We still can't deal with congestion in the backbone.

I didn't see that coming, and I thought things were going to have to be far
more efficient.

------
jgrahamc
Here's Van Jacobson's 1988 paper on slow start and other congestion avoidance
algorithms. It's really worth reading to understand what was happening:
[https://cs162.eecs.berkeley.edu/static/readings/jacobson-congestion.pdf](https://cs162.eecs.berkeley.edu/static/readings/jacobson-congestion.pdf)

It's personally fascinating to me because in about 1984/5 I was working on a
local area network and with a friend 'invented' a connection-oriented protocol
that used counters to spot dropped packets (because of Ethernet collisions)
and request retransmission. We successfully overloaded the network of about 16
machines using this algorithm as it went crazy retransmitting and upping the
collision rate, and we began working on very similar algorithms to fix this
(but didn't get that far because we had A levels to do).

~~~
tptacek
I've always wondered how much of this stuff really comes from Raj Jain and
CUTE, which I think predates it and has many of the same ideas in it.

~~~
jgrahamc
I know that Petr and I were working totally in isolation, just trying stuff
out. We did what we could, but it sure would have been helpful to have
some clues :-)

------
mhandley
The original 1988 Congestion Avoidance and Control paper is still well worth
reading:
[http://ee.lbl.gov/papers/congavoid.pdf](http://ee.lbl.gov/papers/congavoid.pdf)

The observations about timers and ack-clocking are just as relevant today.

That's not to say that 1988-style Additive Increase Multiplicative Decrease is
perfect as a congestion control scheme. There are lots of issues, ranging from
not working well on paths with high bandwidth-delay product, to being overly
sensitive to non-congestive packet loss, to causing buffer bloat. But I don't
think there's any doubt that the Internet survived growing both in traffic and
in hosts by many orders of magnitude, survived all the underlying technologies
being replaced multiple times, and survived massive changes in applications,
all in part because TCP does a reasonable job of matching offered load to
available capacity.
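
For context, the 1988-style behavior referred to above boils down to a few
lines. This is an illustrative sketch of the textbook Tahoe-era scheme, in
units of segments and with variable names of my choosing, not code from the
paper:

    double cwnd = 1.0;         /* congestion window, in segments */
    double ssthresh = 64.0;    /* slow-start threshold, in segments */

    void on_ack(void)          /* called once per ACK received */
    {
        if (cwnd < ssthresh)
            cwnd += 1.0;          /* slow start: window doubles per RTT */
        else
            cwnd += 1.0 / cwnd;   /* additive increase: +1 segment per RTT */
    }

    void on_loss(void)         /* retransmit timeout taken as congestion */
    {
        ssthresh = cwnd / 2.0;    /* multiplicative decrease */
        cwnd = 1.0;               /* restart from one segment (Tahoe) */
    }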

------
drewg123
Van is at Google, and has been for quite some time. He's been working on BBR
at Google. [https://www.networkworld.com/article/3218084/lan-wan/how-google-is-speeding-up-the-internet.html](https://www.networkworld.com/article/3218084/lan-wan/how-google-is-speeding-up-the-internet.html)

~~~
wereHamster
> They say their acceleration method, called bottleneck bandwidth and
> roundtrip (BBR) propagation time, measures the fastest way to send data
> across different routes and is able to more efficiently handle traffic when
> data routes become congested.

------
th-ai
Van Jacobson
[https://en.wikipedia.org/wiki/Named_data_networking](https://en.wikipedia.org/wiki/Named_data_networking)

~~~
gorgonical
I am actually a research scientist working on NDN (Named-data Networking). Van
certainly was the founder of NDN and is often consulted to remind us about his
vision for the network, but the torch really has been passed to the other PIs;
most notably Lixia Zhang[1], another early-Internet pioneer. It's very
exciting work, and a lot of things need re-inventing when you totally invert
the networking protocol like that.

[1]
[https://en.wikipedia.org/wiki/Lixia_Zhang](https://en.wikipedia.org/wiki/Lixia_Zhang)

~~~
th-ai
Hi, I'm working some with Christos in Fort Collins, finding a way to get NDN
into a consumer product. Email me please! :)

