
The End of the Router? (2016) - signa11
https://medium.com/hyperscale-routing/the-end-of-the-router-e4d769aea60f
======
candiodari
There are posts about this on a regular basis, every 2 years or so. Every now
and again someone looks at Cisco IOS and Juniper JunOS and the various smaller
ones and decides that they just don't work well, because they have subtle
bugs, no real tests, etc. (Never mind that both of these pieces of software
predate TDD by decades, with code in them, much older than that, that has
proven incredibly reliable.)

There is no more value in posts like this than there is in the regular
academic "we should reimplement the linux kernel in 10000 lines of code" shit
that also comes by. Every now and then someone even actually does reimplement
it, but it never quite replaces the linux kernel because ... well the linux
features are mostly there for a reason. By the time you have an actual
workable kernel, it's also far beyond what most developers seem to consider
reasonable complexity boundaries.

And so it is for those networking features. In reality there is a reason for
all those protocols. They (well, some of them) also work incredibly well, and,
crucially, the protocols that are required from a router all have global
footprints and interoperability between dozens of vendors, which is not
optional. There is a reason for the complexity, and there is a cost to cutting
out complexity, a cost we won't be willing to pay.

Just to be clear: I don't deny that there is value in trying to reimplement
these critical infrastructure programs from time to time. But the value in
that is mostly in improving your personal understanding and abilities; such
reimplementations cannot reasonably replace the infrastructure we use. And if
they can, well, then they are also very complex.

What can we say? The real world is complex. Mostly ... for good reasons. So
if you want to work/program in this space, you'll need to deal with the
complexity.

~~~
inopinatus
This didn't read to me like a proposal to simplify routing software by simply
ripping out protocols, but to refactor the design of the control plane and use
more modern engineering processes than the writer saw at previous workplaces.
If I read between the lines correctly, he wants to build a microservices-based
router for the high end network operator market.

The writer is a multiple RFC author, IETF protocol chair and former
distinguished engineer at Juniper, i.e. they already do "work/program in this
space" and "deal with the complexity", so criticism on that basis might fall a
little flat.

------
microcolonel
Seems to imply that because there are features in the software, it would be
preferable to get a build with fewer features for the sake of quality; but
that makes no sense, since removing features from your build isn't going to
improve the features you keep. A lot of this post doesn't make sense. :-P

> _Hardly understandable if you consider “state of the art” Developmental and
> operational models like DEVOPS._

This, and some other sentences, are gibberish. "DEVOPS" is not an operational
model.

> _Lack of micro-services architecture renders technical debt possible._

This of course implies that routers can reasonably be constructed as systems
of microservices, which to me seems like it would sacrifice basic performance
targets if taken literally. If he means node-based routing, sure, that's
basically how things work anyway. More facilely, it seems to imply the
converse, that microservices render technical debt impossible, which is
ridiculous.

> _However, it gets challenged using new concepts like SDN and NFV, which
> promise much faster network adoption, automated control, reduced time-to-
> revenue, which all are good business solutions._

More gibberish.

This is probably not an honest article, but a professional services marketing
wankpiece. That's too bad, because frankly it seems like some of the ideas are
worth exploring, and I would be more convinced and excited for his company if
this were less manipulative and more direct.

~~~
whatupmd
As an operator at a networking vendor, this article really rang true. To me the
whole article was a cry to rethink embedded systems design. He opened with the
relationship between feature bloat and hardware performance in the networking
space. Won't your service be more performant if you avoid an STP check on a
routed network? Yet because of history and system design, even if you don't use
STP your hardware will perform the check, because of the system architecture.
Just take a walk through an IP or Ethernet header and look at how many of those
protocols you actually use. Yet even if you avoid using them, you can't
'disable' these checks in hardware. Refactoring seems to be hard or cost-
prohibitive for network vendors, so it is avoided (at least until the next
hardware generation is released). Even when engineering identifies an
architecture problem, most managers will not take the risk of proposing a
refactor. Maybe because it's more cost-prohibitive in an embedded system?
Maybe because the engineering teams implementing 20-year-old protocols like
ARP, Ethernet, STP and OSPF are in that position because they are inexpensive?
His bit about DevOps is just trying to point out that QA for networking gear
running well-established protocols should not be a hard problem, yet for some
reason gear still gets released with the same bugs you'd see in 2000. Why is
this?
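
A rough sketch of the "walk through the header" point, in Python (the frame
bytes and dispatch table are hand-built for illustration, not any vendor's
actual parse path): the classifier has to account for every protocol it knows
about, whether or not your network uses it.

```python
import struct

# Minimal Ethernet header walk: dispatch on EtherType, with the 802.3/LLC
# (length-field) case that carries things like STP BPDUs. Even a pure
# routed-IP network pays for all of these branches.

ETHERTYPES = {
    0x0800: "IPv4",
    0x0806: "ARP",          # still in the dispatch path on a pure-IP network
    0x8100: "802.1Q VLAN",
    0x86DD: "IPv6",
    0x8847: "MPLS",
}

def classify(frame: bytes) -> str:
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    if ethertype < 0x0600:  # a length, not an EtherType: 802.3/LLC framing
        return "802.3/LLC (e.g. an STP BPDU)"
    return ETHERTYPES.get(ethertype, f"unknown 0x{ethertype:04x}")

ipv4_frame = bytes(12) + struct.pack("!H", 0x0800) + bytes(20)
stp_frame = bytes(12) + struct.pack("!H", 0x0026) + bytes(38)
print(classify(ipv4_frame))  # IPv4
print(classify(stp_frame))   # 802.3/LLC (e.g. an STP BPDU)
```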

------
fulafel
This seems to indicate that, in some respects, internet complexity has stopped
growing:
[https://www.isc.org/network/survey/](https://www.isc.org/network/survey/)

The BGP table still grows, but the growth changed from exponential to linear
around 2012: [https://bgp.potaroo.net/](https://bgp.potaroo.net/)

------
signa11
fta 'Critics of router-based network services argue, that it’s adding and
never removing features, keeps driving up the cost of the routers to a point
where advances of Moore’s law are not passed down to customers any longer.'

this is exactly right. for multiple reasons, network equipment manufacturers
had to have their _own_ 'stickiness' in network equipment. and the canonical
way to do that is via feature-bloat. fwiw, see this video:
[https://www.youtube.com/watch?v=abXezfJsqso](https://www.youtube.com/watch?v=abXezfJsqso)
around 6:00(min) mark, and also around 10:00 (min) mark. the last one being
very instructive.

james-hamilton's assertion that '...the network is anti-moore...' is also to
the point. imho, it is a situation brought on by the vendors themselves. which
of course is the reason why you see, to a large extent, the big three
(amazon/google/facebook) leveraging their s/w expertise to design and manage
their networks way more efficiently than what would have been possible
otherwise.

all of these, coupled with the fact that x86 h/w is now capable enough for
very high speed packet-io seems really bad for network equipment vendors.

------
eleitl
What the network needs is precomputed paths. Let the end devices, or at least
the network edge, compute the routes. The core should be just a dumb crossbar,
with the switch bits selected directly by the header and consumed in the
process.

This would even work with purely photonic relativistic cut-through.
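
The idea above can be sketched as strict source routing: the edge precomputes
the whole path, and each core node just reads the next output port from the
header and consumes it. The topology and encoding here are made up for
illustration, not any real protocol.

```python
# Toy "dumb crossbar": a hop pops the leading port number off the header
# and uses it to pick the output link. No routing state in the core.

def forward(header, fabric, node):
    """One hop: consume the leading port number, return (next_node, rest)."""
    port, *rest = header            # the switch bits are consumed here
    return fabric[node][port], rest

# fabric[node][port] -> neighbouring node, for a 3-node toy topology
fabric = {0: {0: 1, 1: 2}, 1: {0: 2}, 2: {}}

node, header = 0, [0, 0]            # path precomputed at the edge: 0 -> 1 -> 2
while header:
    node, header = forward(header, fabric, node)
print(node)  # 2
```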

~~~
bostik
> _What the network needs is precomputed paths._

You mean like ... PSTN?

~~~
signa11
> What the network needs is precomputed paths.

>> You mean like ... PSTN?

it seems to me that OP is perchance referring to MPLS...

edit-001 : re-worded.
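
for the curious, the MPLS reading can be sketched as label switching: the
control plane installs per-router label tables ahead of time, so core
forwarding is a single lookup plus a label swap. router names and label
values below are made up for illustration.

```python
# per-router label table: incoming label -> (outgoing label, next router)
lfib = {
    "P1":  {100: (200, "P2")},
    "P2":  {200: (300, "PE2")},
    "PE2": {300: (None, None)},   # egress pops the label
}

def switch(router, label):
    """One label-switched hop: swap the label, return (new_label, next_hop)."""
    return lfib[router][label]

# ingress PE1 pushed label 100 and handed the packet to P1
label, router = 100, "P1"
path = ["PE1", router]
while True:
    label, nxt = switch(router, label)
    if nxt is None:               # egress reached, label popped
        break
    router = nxt
    path.append(router)
print(path)  # ['PE1', 'P1', 'P2', 'PE2']
```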

------
verri
The “lack of loosely coupled components” argument is kind of sad considering
that some unnamed network equipment vendor initially started off with QNX-
based firmware, which inherently enforces such a model.

