
MBONE: The Multicast Backbone (1994) - Lammy
https://sites.cs.ucsb.edu/~almeroth/classes/S99.290I/art1.html
======
gatestone
IP-level multicast routing turned out to be too difficult at large scale.
There are several somewhat subtle reasons and mis-incentives, perhaps the
most pressing being scheduling, timing, and bandwidth synchronization between
receivers. TCP does not work there...

The problem was solved by application level "multicast", or content delivery
networks. You can keep your TCP and all your regular clients.

~~~
convolvatron
TCP doesn't work, but the multicast version is a real classic of IETF
engineering, and I am really just looking for an excuse to post it here

[http://ccr.sigcomm.org/archive/1995/conf/floyd.pdf](http://ccr.sigcomm.org/archive/1995/conf/floyd.pdf)

------
nailer
Ooh I remember mbone! At the time I was a kid and didn't have an internet
connection at home, but I remember Wired talking about it back when they
covered tech in detail.

Does anyone on HN know what happened to it?

~~~
mhandley
I spent a decade working on multicast, helped write some of the second
generation multicast conferencing tools, and co-authored a number of the
standards. Generally, the issue was that multicast requires per group or per-
source state in routers, and router memory was a scarce resource. This made it
hard to scale globally. It also made forwarding problems hard to diagnose,
taking up operators' time. These costs, plus an unclear connection between who
benefitted from multicast and who paid for it made it difficult for core ISPs
to justify. And that meant that commercial content or application providers
needed to implement a plan B in case multicast wasn't available, and if that
worked well enough, they didn't bother with also doing multicast.
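
The per-group, per-source state problem described above can be illustrated with a toy forwarding table (a hypothetical sketch, not real router code): every active (source, group) pair needs its own entry listing outgoing interfaces, so state grows with the number of sessions and cannot be aggregated the way unicast prefixes can.

```python
# Toy illustration of multicast forwarding state: keyed per
# (source, group), one entry per active pair.
mcast_fib = {}  # (source, group) -> set of outgoing interfaces

def join(source, group, iface):
    """A downstream receiver reachable via `iface` joins (source, group)."""
    mcast_fib.setdefault((source, group), set()).add(iface)

def forward(source, group, in_iface):
    """Interfaces a packet from (source, group) is replicated to."""
    return mcast_fib.get((source, group), set()) - {in_iface}

# Three groups from one source, receivers behind two interfaces:
for g in range(3):
    join("10.0.0.1", f"224.1.0.{g}", "eth1")
    join("10.0.0.1", f"224.1.0.{g}", "eth2")

print(len(mcast_fib))  # 3 entries, one per (source, group) pair
print(forward("10.0.0.1", "224.1.0.0", "eth1"))  # {'eth2'}
```

With thousands of concurrent sessions this table grows linearly in 1990s router memory, which is the scaling cost being described.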

Still, RTP and SIP came directly out of the MBone, and they bootstrapped the
whole Internet multimedia phenomenon we're all enjoying today while working
from home.

~~~
q3k
I wouldn't exactly say I 'enjoy' using SIP.

~~~
mhandley
Sorry about that! It did rather morph from our nice simple initial design into
a monster once it met the real world.

~~~
organsnyder
As everything is wont to do.

------
pipingdog
I remember the MBONE. I was working at a National Lab, and had an SGI Indy
workstation with an IndyCam, so I spent some time setting it up.

One day I forgot to close the app when I left for the day, and came back the
following day to a NastyGram from Van Jacobson, chastising me for using up
bandwidth with an image of my darkened office door for 16 hours.

Good times.

~~~
fit2rule
I had a similar setup with my Indy, and for fun left it pointed at a Triops
tank I'd set up .. people thought it was just an empty tank of water, until a
week later there were epic fights between the banana shrimps and the triops
babies...

Good times, until I got kicked off MBONE and ended up on CuSeeMe instead .. ;)

------
threeme3
Good memories. In 1994, about 5 years after Steve Deering published RFC1112,
IP multicast got implemented in our amateur packet radio network spanning the
Netherlands [1]. There were about 20-30 network nodes acting as Local Access
Points for user access, interconnected via interlinks. The nodes implemented
the AX25 packet radio protocol at rates of 1200-9600 baud; on top of that you
could transport IP datagrams, hence these nodes were part of the 44.0.0.0
AMPR.ORG Internet network. Some nodes had an Internet gateway through a local
organisation or university, but these gateways were mainly used to link the
subnets of the 44.0.0.0 network and connect the AX25 networks (via AXIP and
wormholes).

The nodes here in the Netherlands used multicast to distribute link
information and IP autorouting. It was entirely possible for multiple parties
to join a multicast group somewhere in the network, and then stream UDP frames
across the network.
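
Joining a group and streaming UDP frames over it works the same way today; a minimal sketch using Python's socket API (the group address and port are arbitrary choices from the 224.0.0.0/4 IPv4 multicast range, and the example keeps everything on the loopback interface so it stays self-contained):

```python
import socket
import struct

GROUP = "224.1.1.1"  # arbitrary group in the IPv4 multicast range
PORT = 5004

# Receiver: bind the port and ask the kernel to join the group
# (on a real network this emits an IGMP membership report so the
# nearest router starts forwarding the stream).
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                   socket.inet_aton("127.0.0.1"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
rx.settimeout(2)

# Sender: any ordinary UDP socket can transmit to the group address;
# the TTL bounds how far the datagrams propagate (1 = stay local).
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
              socket.inet_aton("127.0.0.1"))
tx.sendto(b"hello, group", (GROUP, PORT))

data, addr = rx.recvfrom(1500)
print(data)
```

The CELP audio experiment mentioned above is essentially this, with codec frames as the UDP payload.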

A very fun experiment at the time was to send CELP-compressed (1-2 kb/s) audio
packets through the multicast network, making it possible to have a
conversation with multiple people spanning a distance larger than the radio
horizon. The latency and packet loss disrupted good operation, but it more or
less worked.

[1]
[https://www.jj1wtk.jp/nos/history.html](https://www.jj1wtk.jp/nos/history.html)

------
gumby
I have been disappointed that multicast isn't used by the mass video chat apps
(e.g. Zoom), but after talking to some of the folks "in the know" it looks
like you can't even count on it being supported in all the devices between you
and your endpoint.

It seems like it would be good for that and great for "cord cutting" live apps
like sports broadcasts. Of course the majority of "last mile" carriers are
also TV providers, so they would prefer to charge for that separately over
their existing physical plant.

------
zajio1am
I wonder if multicast could be reclaimed as a layer in application-specific
overlay networks. E.g. if a videoconference tool wanted to join a session, it
would create a stateless IP tunnel to an access point, and then just run
regular multicast RTP over that.

It would allow using existing tools and infrastructure to scale that, and
would move forwarding and distribution out of applications into a common
layer. But it has the disadvantage that there is no common API for
applications to set up ad-hoc transient IP tunnels, as that is usually a
privileged operation.

~~~
convolvatron
that's kind of what the mbone was, a tunnel overlay. it just took a lot of
fussing around for too many years.

the only reason that a tunnel is privileged is that it creates a kernel
interface. given that we can't really use tcp anyway, there isn't any reason
why the whole stack can't live in user space on a generic UDP port.

you would have to have pim or torrent style discovery/rendezvous machinery

aside from efficiency, I wonder what the use case is?

------
watertom
One of the early commercial products that used multicast was InSoft's
Communique and their streaming InTV product from 1993.

