The brunt of the work is done in ASICs.
It's more readily apparent if you've taken an empty Cisco ASR9010, started from a blank-slate configuration with a new pair of RSP440 route/service processors (the control plane), and then incrementally added linecards/interfaces.
Mid-to-large-sized routers are designed in such a way that everything is either N+1 or 1+1 redundant and hot-swappable; it's like having a four-engined airplane on which you can replace an engine in mid-flight.
People should know that Cisco takes this one fact and then extends and over-hypes it. The gap between ASICs and CPUs is not as large as Cisco makes it out to be; their marketing implies a CPU isn't even an option on most of the network. Cisco also has an overbroad notion of how necessary their particular routers and switches are, when in many places in the network, off-the-shelf hardware, CPUs and software would do fine (for much cheaper).
For the past few years, people have been developing software routing solutions like Quagga, BIRD, XORP and OpenBGPD/OpenOSPFD. They have seen some heavy production use ( http://www.openbgpd.org/users.html ). OpenBGPD even teased out problems in Cisco routers: some standards-compliant packets OpenBGPD sent out would crash Cisco routers.
People appreciate the openness, flexibility, extensibility and price of these solutions.
ASICs are faster than CPUs in the core of the core, but don't let that fact get you ensnared in Cisco marketing hype and FUD. People are paying a lot of money to Cisco on the low and medium end for equipment that could be replaced much more cheaply with off-the-shelf hardware and free and open software.
The Cisco guy is going to say 'Sure! We can cover everything!'
The OSS off-the-shelf guy is going to say 'Well, that depends on the growth.'
At that point, most of the businesses just throw some money at Cisco so they don't have to worry about anything.
Though this "depends" is quite a high bar in reality. I know a regional internet provider that runs purely on software routers on commodity x86_64 servers. You need to be quite a large company (or operate in a very specific domain) to ever hit this level of traffic.
The control plane, which is some general-purpose CPU running Linux, VxWorks (Cisco) or FreeBSD (Juniper), is what builds the FIB and actually populates those ASICs in the forwarding plane. The ASICs are basically just lookup tables that rely on data pushed to them by the control plane. In other words, they rely on computation/preprocessing done elsewhere.
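As a toy sketch of that division of labor (Python, with all names invented; real FIBs live in TCAM/ASIC tables, not dicts): the control-plane side selects best routes and flattens them into a table, and the forwarding-plane side does nothing but longest-prefix-match lookups against it.

```python
import ipaddress

# Hypothetical sketch: the "control plane" picks the best route per prefix
# (e.g. from several routing protocols) and pushes a flattened table; the
# "ASIC" side then does nothing but longest-prefix-match lookups on it.

def build_fib(rib):
    """rib: list of (prefix, next_hop, admin_distance) tuples.
    Keep only the best (lowest-distance) route per prefix."""
    best = {}
    for prefix, next_hop, distance in rib:
        net = ipaddress.ip_network(prefix)
        if net not in best or distance < best[net][1]:
            best[net] = (next_hop, distance)
    return {net: nh for net, (nh, _) in best.items()}

def lookup(fib, dst):
    """Forwarding-plane side: a pure longest-prefix-match lookup."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in fib if addr in net]
    if not matches:
        return None
    return fib[max(matches, key=lambda n: n.prefixlen)]

rib = [
    ("10.0.0.0/8", "203.0.113.1", 20),   # e.g. learned via BGP
    ("10.1.0.0/16", "192.0.2.9", 110),   # e.g. learned via OSPF
    ("0.0.0.0/0", "198.51.100.1", 1),    # static default route
]
fib = build_fib(rib)
print(lookup(fib, "10.1.2.3"))   # → 192.0.2.9 (most specific match wins)
```

The whole point of the split is that `lookup` involves no protocol logic at all, which is why it can be baked into hardware.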
network-controller is a recent phenomenon and it would be quite interesting to see if this is what was meant. FWIW, a bunch of Ericsson hardware for cellular access ran Erlang in the control plane, e.g. the MME/SGSN.
Disclaimer: I work at Cisco
What conf-d/tail-f brings to the table is the distributed nature of two-phase commit for device configuration, as well as representation of the operational and runtime state of these devices, which has some appeal to it.
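A minimal sketch of what a two-phase (validate, then commit) configuration push buys you; the device class and config model here are invented for illustration, and the real ConfD machinery is far richer. The idea: if any device rejects the candidate config in phase one, nothing is applied anywhere.

```python
# Hypothetical sketch of two-phase configuration commit: every device first
# validates the candidate config ("prepare"); only if all of them accept it
# does phase two apply it. Otherwise the push aborts and nothing changes.

class Device:
    def __init__(self, name, max_mtu=9216):
        self.name = name
        self.max_mtu = max_mtu
        self.running = {}          # the "running config"
        self.candidate = None

    def prepare(self, config):
        # Phase 1: validate only; do not touch the running config.
        if config.get("mtu", 1500) > self.max_mtu:
            return False
        self.candidate = config
        return True

    def commit(self):
        # Phase 2: atomically promote candidate to running.
        self.running = self.candidate
        self.candidate = None

    def abort(self):
        self.candidate = None

def two_phase_commit(devices, config):
    if all(d.prepare(config) for d in devices):
        for d in devices:
            d.commit()
        return True
    for d in devices:
        d.abort()
    return False

fleet = [Device("edge1"), Device("edge2", max_mtu=1500)]
ok = two_phase_commit(fleet, {"mtu": 9000})
print(ok, fleet[0].running)   # edge2 rejects MTU 9000, so nobody changed
```

With a one-phase push, edge1 would already be running the new config when edge2 failed, leaving the network half-configured.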
source: have worked for csco quite a number of times :)
edit: slight clarification on conf-d capabilities.
Most of the actual Configuration Database (CDB) is written in Erlang, which also explains the statement made by Cisco during the conference.
> What conf-d/tail-f brings to the table is the distributed nature of two-phase commit for device configuration, as well as representation of the operational and runtime state of these devices, which has some appeal to it.
Another thing that has quite some appeal, I think, is that by using a single YANG data model the daemon can actually synthesize the structures of the CLI, Web UI and NETCONF northbound APIs.
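To illustrate the idea only (this is not YANG or ConfD syntax, just a toy schema invented for the sketch): declare the model once, then derive both the CLI help text and a NETCONF-ish XML payload from it.

```python
# Toy illustration of several "northbound" views derived from one data
# model. The schema below is an invented mini-model of an interface; in
# the real system this role is played by a YANG module.

import xml.etree.ElementTree as ET

SCHEMA = {
    "name": {"type": "string", "doc": "interface name"},
    "mtu": {"type": "int", "doc": "maximum transmission unit"},
    "enabled": {"type": "bool", "doc": "admin state"},
}

def cli_help(schema):
    """Synthesize CLI usage text from the model."""
    return "\n".join(f"  set {k} <{v['type']}>  # {v['doc']}"
                     for k, v in schema.items())

def to_xml(schema, values):
    """Synthesize an XML config payload from the same model."""
    root = ET.Element("interface")
    for key in schema:                 # the schema drives the structure
        ET.SubElement(root, key).text = str(values[key])
    return ET.tostring(root, encoding="unicode")

print(cli_help(SCHEMA))
print(to_xml(SCHEMA, {"name": "ge-0/0/0", "mtu": 1500, "enabled": True}))
```

Because every interface is generated from the one model, the CLI, Web UI and NETCONF views can't drift out of sync with each other.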
Again, biased because of Cisco employment.
NETCONF (RFC 6241) operations on most routers (not just Cisco's) are handled by an Erlang application.
Some years ago the best NETCONF implementation was from Tail-f Systems. Cisco and Juniper licensed the software from them...
... and Cisco bought the company for the NETCONF and orchestration/automation part.
Routers fall over pretty regularly if you have enough of them, so you already have things like MPLS FRR (fast reroute) to deal with that.
Sounds like this is for NETCONF, which is a provisioning system. Its failing is more akin to Chef crashing on a server from a cosmic ray: kinda sucks, but you restart it and probably no harm done.
The programmed routes go into hardware on the bigger systems, and either way there is no Linux kernel routing involved at all.
Edit: if I'm not mistaken, Linux is actually used on some platforms for routing the out-of-band management interface, but not transit traffic.
Mainframes, because they have saner systems programming languages than C.
Apple devices, because part of the network stack requires their Embedded C++ dialect.
Are you claiming that PL/X is saner than C? What is your reasoning?
The PL/8 papers related to IBM's RISC research are quite interesting, from before they decided to go with UNIX on RISC as the best way to recover their money, thus switching to C.
Also, not all mainframes were written in PL/X variants; there is also NEWP, which already had the notion of unsafe blocks in the late 60s.
(Probably it is way too late for this, but if IBM had been willing to publicly release PL/S tools back in the 1970s, or at least not use their lawyers to stop others from independently doing that, PL/S and its descendants could have become much more significant systems programming languages.)
UNIX and C only became widespread because, contrary to IBM, DEC, Wang and many others, AT&T was forbidden to sell UNIX, so they started offering it alongside its source code for a symbolic price of $100, which was effectively free given the prices of other systems.
In any case, that doesn't make C technically better than the alternatives outside AT&T walls, it was just cheaper to get hold of.
But IBM wasn't willing to sell its customers a PL/S compiler.
One reason was that (up until the OCO announcement in 1983) IBM gave its customers the PL/S source code, but not giving them a PL/S compiler discouraged customers from changing the source code, since they couldn't recompile it. Customer modifications had to be made in assembler, which was a lot harder to maintain.
Another reason was to make life harder for competing vendors selling IBM-compatible mainframes, such as Amdahl, Fujitsu and Hitachi.
The parent's point was simply that it's not that surprising that it's effective in this space, when it was targeting that space to begin with.
So I guess you'd choose Go if you want to build a local service, especially one with some amount of concurrency (e.g. network daemons), or CLI tools (fast startup). You'd use Erlang if you want to build a system which really should not die, especially if it so should not die that you're willing to spread it over multiple physical machines.
Distribution is likewise built into the runtime and language: most of the distribution-related BIFs are not only part of the "erlang" module/namespace but also part of its "prelude" (auto-imported in all modules by default).
So no, it is absolutely nothing like kubernetes in relation to go.
Yes, mostly, but the thing is that OTP was possible at all because of how the portions provided by the language and the runtime play together. You cannot replicate OTP as it is in Go, because Go's runtime is lacking and its communication model doesn't match.
AKA Go does shared-memory multitasking, while Erlang does shared-nothing multitasking.
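A rough illustration of the two models, in Python standing in for both languages (everything below is invented for the sketch): threads mutating one shared counter under a lock, versus a worker that owns its state and is reachable only through a message queue. Real Erlang processes share nothing at the VM level; the queue-fed worker here only simulates that discipline.

```python
import threading, queue

# --- shared-memory multitasking (the Go-style model) ---
# Several threads mutate one variable, so a lock is needed for correctness.
counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:                 # without this, updates can be lost
            counter += 1

threads = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                     # 4000

# --- shared-nothing multitasking (the Erlang-style model) ---
# The state lives inside one "process"; others can only send it messages.
def counter_process(inbox, outbox):
    state = 0                      # private: no lock needed, nobody else sees it
    while True:
        msg = inbox.get()
        if msg == "get":
            outbox.put(state)
            return
        state += msg

inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=counter_process, args=(inbox, outbox))
worker.start()
for _ in range(4000):
    inbox.put(1)                   # the only way to affect the state
inbox.put("get")
print(outbox.get())                # 4000
worker.join()
```

Both arrive at the same answer, but the second version has no lock and no shared variable, which is why an Erlang process can crash (or live on another machine) without corrupting anyone else's state.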
Since it's not like I'm trying to sell this as a solution to all immutable data requirements, I didn't worry too much about that detail. In the end it's all about effort, anyhow; with the right invocations in Haskell I can mutate "immutable" data just fine. So even an interface in the package that declared it is still nontrivial protection against accidentally mutating it.
Isn't it a lot simpler to just say that Go has no feature support for immutable data? The fact that you can hide accessors in a package is small comfort if I have to read all the source code in that package to know the actual semantics.
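The point generalizes beyond Go; here is a quick sketch in Python (not Go, and not code from the thread) of why accessor-hiding is only protection by convention: unless the accessor copies defensively, it leaks a mutable reference to the "immutable" value.

```python
# Language-agnostic sketch: hiding the field behind an accessor (an
# unexported field in Go, a property here) does not make the data
# immutable. If the accessor hands back the underlying mutable object,
# callers can still mutate it.

class Config:
    def __init__(self, servers):
        self._servers = list(servers)   # intended to be read-only

    @property
    def servers(self):
        return self._servers            # leaks the mutable list!

    @property
    def servers_safe(self):
        return tuple(self._servers)     # defensive copy: actually read-only

cfg = Config(["a.example", "b.example"])
cfg.servers.append("evil.example")      # "immutable" data mutated anyway
print(cfg.servers)                      # ['a.example', 'b.example', 'evil.example']
```

Which is the parent's complaint in a nutshell: to know whether `Config` is really immutable, you have to read the package's source, not just its interface.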
EDIT: Yep, we're actually both right! From the Wikipedia page:
> The name "Erlang", attributed to Bjarne Däcker, has been presumed by those working on the telephony switches (for whom the language was designed) to be a reference to Danish mathematician and engineer Agner Krarup Erlang as well as a syllabic abbreviation of "Ericsson Language".
BEAM and Erlang give you tooling that you can’t get from any other language, regardless of how it’s deployed. Using Kubernetes doesn’t make up for any of that...it just gives you a way to deploy containers.
Apples to oranges.
You'd think that this knowledge retrieval/storage problem could have a more elegant solution built by the HN folks. Example solution: topic X is stored in a category and ranked to the top, and its next 'commit' or successor is chained on, kind of like how unroll threads work on Twitter, but with long HN threads instead.
Think a 'git tree' of knowledge descending down, with merges too maybe...
Engh... maybe that's part of the charm of HN: the old-school-style message board.
Erlang/Elixir give you a ton of safety guarantees inside the VM, something many other languages never bothered to offer.
Huh? That's an incredibly stupid comment.
The only thing that powers the internet is smart people, especially those who don't spend their time on pointless discussions and can instead focus on analyzing the problem and finding the right tools and solution for it.
Please don't call yourself a software engineer or developer if you have a programming-language bias.
Elixir was a wrapper for people who already knew Ruby.
Which is why most Elixir users come to it from the Ruby/Rails community.
It was created by José Valim, a Rubyist and Rails committer, who was looking to improve the Ruby/Rails concurrency story:
> Back in 2010, I was working on improving Rails performance when working with multi-core systems, as our machines and production systems are shipping with more and more cores. However, the whole experience was quite frustrating as Ruby does not provide the proper tool for solving concurrency problems. That's when I started to look at other technologies and I eventually fell in love with the Erlang Virtual Machine.
>
> I started using Erlang more and more and, with experience, I noticed that I was missing some constructs available in many other languages, including functional ones. That's when I decided to create Elixir, as an attempt to bring different constructs and excellent tooling on top of the Erlang VM.
People start comparing different languages in different areas and use cases, and instead of discussing the main subject, keep saying "oh no, my language is more popular" or "that language is written in C, so C is the best"...
Routing tables are in the control plane.
ASICs really only do "layer 2" forwarding. (This is not an entirely proper abstraction, but close enough anyway.)