load balancing? check.
stateful load balancing? check.
HSM-enabled ssl-termination? check.
hardware accelerated ssl-termination? check.
NG firewall? check.
compiled Tcl scripts (iRules) so you can program something insanely complicated? check.
ISP-sized NATs? check.
Plus, way more configuration knobs and options than you'd ever want at each network layer. Like, come up with a load balancing scheme where TLS 1.2 clients using ChaCha20-Poly1305 get sent to a specific pool of servers while everything else goes to another pool, except for clients trying to use QUIC who are coming from a specific IP range. They go to yet another set of servers.
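To give a flavor of what that scheme looks like in practice, here's a hypothetical iRule sketch. The pool names and the IP range are made up, and QUIC (UDP/443) would normally land on a separate virtual server anyway, so the address check stands in for that case:

```tcl
# Hypothetical iRule; pool names and 203.0.113.0/24 are illustrative.

when CLIENT_ACCEPTED {
    # Clients from the special range go to their own pool outright.
    if { [IP::addr [IP::client_addr] equals 203.0.113.0/24] } {
        pool quic_exception_pool
        return
    }
}

when CLIENTSSL_HANDSHAKE {
    # Route TLS 1.2 + ChaCha20-Poly1305 clients to a dedicated pool.
    if { [SSL::cipher version] eq "TLSv1.2" &&
         [string match -nocase "*chacha20*" [SSL::cipher name]] } {
        pool chacha_pool
    } else {
        pool default_pool
    }
}
```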
Maybe a better way to think of it is that it's a single device for tweaking anything L3-L7 for your server and parts of your network.
(used to work for f5, too, but i'm not sure how specific i can get with the nda).
As the industry continues to put its weight behind NFV and SDNs, along with the rise of IDNs, do you see network appliances keeping their share of the market against those solutions? I believe edge networks might continue to require these appliances for WAF, firewall/DPI (and other things I don't know about)... but that'd be a niche?
Obviously they won't go away, but network appliances definitely won't keep their share because not everyone needs them as SDNs get better. I see the SDN and IDN as mostly solving multivendor integration issues and making it easier to configure at least semi-complicated networks, which doesn't make them a drop-in replacement for many of the problems f5 is trying to solve. For certain network loads they might achieve performance parity, too.
One of the draws of an f5 box for a large customer is that instead of having like 5 vendors or OSS technologies to maintain for load balancing, ssl-termination, hardware-accelerated/-hardened encryption/decryption, SAML, firewalls, etc., you have one company's product (that has hopefully been designed to work well with itself) doing all of that, configured from one location. If you don't have that multivendor orchestration headache to worry about, then massive network appliances like BIGIP aren't a value add over having a couple vendors.
Another draw for BIGIP is doing things at the speed of packet flow, or nearly so, even for VM containers and even for fully encrypted SSL. If you don't have to squeeze every last microsecond of latency or bit of bandwidth out of your 10Gb/s or 100Gb/s fiber connection, BIGIP isn't a value add over SDN. If you only care a little bit, then an SDN could be way cheaper than BIGIP because you can configure it to do what you need at lower hardware and support costs.
For the people who care about both the multivendor issue and performance, they're always going to have hardware dedicated to networking, even if they use SDNs or IDNs, because they need that dedicated compute to achieve their goals. Sustained 10Gb/s connections are no joke, let alone 40Gb/s or 100Gb/s. Same with tens of thousands of simultaneous SSL connections. All of a sudden you need dedicated ASICs/"raw compute" and RAM to keep up with the firehose of packets.

Plus, network appliances will begin to integrate with SDNs and IDNs, so for customers on the border between needing an appliance and getting by with an IDN or SDN and more manpower, the form of the network appliance will change, but they're still going to have hardware down in their server room, or compute instances in their cloud, dedicated to networking infra. You want SSL-termination? You need compute. Hardware-accelerated or -hardened SSL termination? You need specialized hardware. Firewalls? Compute. SAML? Compute. Complicated NATs? Compute. If you've got a couple BIGIPs in your server room, your network is complicated enough and/or bandwidth-heavy and/or latency-sensitive enough that you're going to have nearly as many racks dedicated to giving your SDN enough compute as you have for network appliances.
BIGIP isn't valuable because it's a great router or switch. It's valuable because of how much it does on top of that in a single server/VM, and how well it does it. And most of what it's great at are not things that an SDN will solve. Sure, the configuration tweaking will have parity, and maybe load balancing performance (though having seen how BIGIP achieves it, especially for complicated setups, I kinda doubt it). But if BIGIP integrates with SDNs or IDNs even just a little, then what could happen is that people on the borderline just get slightly smaller BIGIPs, offload some of the tasks where BIGIP overlaps with SDNs/IDNs, and the BIGIP becomes just another node in the SDN. If BIGIP goes all in on SDN and IDN, then you might even see people buying larger BIGIPs to orchestrate their overall SDN and IDN.
L4-L7 load balancing, distributed DNS, SSL offload, WAF, DPI, data centre firewall and other things. With a nice WebUI to configure all that.
The Tcl iRules let you hook into pretty much any stage of the request or response, L4-L7, at FPGA speeds, to do whatever you want with the request/response data.
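As a minimal, made-up example of those hook points (pool name and headers are illustrative), an iRule can touch the request on the way in and the response on the way out:

```tcl
when HTTP_REQUEST {
    # Runs for every client request before it reaches a pool member.
    HTTP::header insert X-Forwarded-Proto "https"
    if { [HTTP::uri] starts_with "/admin" } {
        pool admin_pool
    }
}

when HTTP_RESPONSE {
    # Runs for every server response before it goes back to the client.
    HTTP::header remove Server
}
```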
It's a very powerful product.
I also work at F5, and used to work on the FPGA. This is unfortunately not true for TCL iRules. The FPGA basically only operates on L2-4, L7 is all software.
There was some talk about doing L7/iRules in an FPGA but prototypes never produced compelling enough performance gains to make it worth it.
I challenge that assertion!
For simple things it's adequate, but the fact that you can SSH in is also helpful, as there's a RHEL/CentOS base to work on. We were able to get Let's Encrypt working with a bash-only ACME client (dehydrated) in short order.
Heck, run Ansible on it:
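For context, the dehydrated flow is roughly this (domain and deploy step are illustrative; dehydrated is pure bash, which is why it runs fine on the appliance's CentOS-based host OS):

```shell
# Sketch of a dehydrated run; the domain and the deploy step are made up.
echo "www.example.com" > domains.txt      # domains to request certs for
./dehydrated --register --accept-terms    # one-time ACME account registration
./dehydrated --cron                       # issue/renew certs for domains.txt
# a deploy hook would then import the new cert/key into the BIG-IP config
```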
If it weren't for the need for remote backups, email and such would be hosted there as well, and you could run a company on one of these with no access to the public internet at all. Accounting, finance, etc: all of it.
Or maybe that’s Citrix.
The earlier versions were based on FreeBSD 2.2.6.
Way way back they used to be on BSD. Then they moved to centos for cough cough cough (possibly NDAed) cough cough.
It's not a huge deal which because the host OS isn't anywhere in the dataplane.
Here's the super versatile Colm Mac explaining what AWS does at L4: https://www.youtube.com/watch?v=8gc2DgBqo9U
Google has been very open about their network infrastructure, here's a nice summary from 2015: https://ai.googleblog.com/2015/08/pulling-back-curtain-on-go... and not mentioned in that blog... their NetworkLoadBalancer, Maglev: https://cloud.google.com/blog/products/gcp/google-shares-sof... (AWS equivalent of which would be HyperPlane: https://atscaleconference.com/videos/networking-scale-2018-l... allegedly based on S3's load balancer).
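For a flavor of what Maglev does differently, its core trick is a consistent-hashing lookup table: each backend gets a preference permutation over table slots, and backends take turns claiming their next free preferred slot. A toy sketch, with an arbitrary stand-in hash and a tiny table (the real thing uses a large prime table size and hashes the connection 5-tuple):

```python
import hashlib

def _h(s: str, seed: int) -> int:
    # Stand-in hash for illustration; the hash function choice is not the point.
    return int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)

def maglev_table(backends, m=13):
    # m should be prime and much larger than len(backends) in practice.
    n = len(backends)
    offset = [_h(b, 1) % m for b in backends]
    skip = [_h(b, 2) % (m - 1) + 1 for b in backends]
    next_idx = [0] * n                # position in each backend's permutation
    table = [None] * m
    filled = 0
    while filled < m:
        for i in range(n):
            # Backend i claims its next preferred slot that is still free.
            while True:
                c = (offset[i] + next_idx[i] * skip[i]) % m
                next_idx[i] += 1
                if table[c] is None:
                    table[c] = backends[i]
                    filled += 1
                    break
            if filled == m:
                break
    return table

def pick_backend(table, five_tuple: str):
    # Per-packet path: hash the connection identity, index the table.
    return table[_h(five_tuple, 0) % len(table)]
```

The round-robin fill is what gives near-even load splitting, while the per-backend permutations keep most slots stable when a backend is added or removed.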
The long version is, "varying degrees of horror"
The data plane is where the high-speed logic and the data itself travel. That's where you do the multiple-100Gb/s software-defined networking, with crazy fast chips doing it.
And the control plane has interconnects to program the data plane chips with the rules you want. So the data never hits the control plane at all. It's kind of like a water faucet, where the knob doesn't touch the water but controls the floodgates.
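The faucet analogy can be sketched as a toy flow table (all names here are made up for illustration): the control plane installs match/action rules, and the hot path is then a pure table lookup that never consults the controller.

```python
class DataPlane:
    def __init__(self):
        self.flow_table = {}          # (src_prefix, dst_port) -> action

    def install_rule(self, match, action):
        # Called by the control plane only; packets never take this path.
        self.flow_table[match] = action

    def forward(self, packet):
        # Hot path: rule lookup per packet, like a TCAM match in hardware.
        for (src_prefix, dst_port), action in self.flow_table.items():
            if packet["src"].startswith(src_prefix) and packet["dst_port"] == dst_port:
                return action
        return "drop"                 # default action

class ControlPlane:
    def __init__(self, dp):
        self.dp = dp

    def apply_policy(self):
        # "Turning the knob": program the floodgates, never touch the water.
        self.dp.install_rule(("10.0.", 443), "send_to_tls_pool")
        self.dp.install_rule(("10.0.", 80), "send_to_http_pool")

dp = DataPlane()
ControlPlane(dp).apply_policy()
print(dp.forward({"src": "10.0.1.5", "dst_port": 443}))  # send_to_tls_pool
```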
We were getting slammed on duty for the product and we were looking at ways of getting the appliances built locally using licensed F5 software, as the software itself wasn't as heavily taxed as the physical hardware. Everything in it seemed fairly commodity, except for the big F5 logo on the front.
It was an appliance that worked fantastically well. One deployment had an uptime of over ten years.
You know, Cisco's IOS XR is built on Linux, but all the real parts are behind their private kernel modules running on ASICs and FPGAs; traffic doesn't even touch the TCP/IP stack of the OS. Cisco ASAs have Celeron/Atom CPUs, which obviously couldn't handle the specified loads on their own.