If you start a company and open-source your core/clients, your product becomes part of AWS, and AWS runs you into the ground. If you mix in proprietary licenses to protect yourself, AWS forks your core, adds its own open-source-licensed clients, then runs you into the ground (and, as a bonus, you lose open-source contributors/supporters, who may fork your core themselves).
I remember reading Google's system design papers in an undergrad class: they publish only the top-level architecture of the core systems they use, and only after 3-5 years of use, once they have moved on to something better. After all this (Docker/Redis/Elastic/Nginx), I think that might be the best path forward. You can provide the benefits of open source and recognition for the architects without losing your competitive advantage. Open-sourcing your core product seems too idealistic.
This is not true. It throws out the Google-specific parts of Borg (like integration with Google's service discovery, load balancing, and monitoring systems) and improves a number of things compared to Borg. For a good reference on the evolution of Borg into Kubernetes, I recommend the recent Kubernetes Podcast interview with Brian Grant: https://kubernetespodcast.com/episode/043-borg-omega-kuberne...
> Google themselves don't use it
This is not true, and the reasons why it hasn't replaced Borg are related to the integrations I mentioned above (which will take time to integrate or replace) and the zillions of lines of Borg config that have built up over the years, rather than concerns that people outside of Google would have (production-worthiness, reliability, etc.)
(Disclaimer: I worked on Borg at Google, and now work on Kubernetes at Google.)
Disinformation is worse than no information.
What's good for the bottom 90% of tech companies probably isn't for the top 10%.
When all you have is a hammer, everything starts to look like a nail.
Implementing the whole "DevOps" idea becomes a whole lot easier when developers no longer even have the concept of a snowflake server. And yes, k8s has a ton of overhead and is pretty complex to get into at first, but it all makes sense. There are many points that could be criticised about it, and it's far from perfect, but having a standardised way of deploying any application has been a massive game changer in the development environments I've been thrown into.
Source/disclaimer: I'm a consultant who has seen quite a few k8s/openshift fuckups and success stories, at both large and small scale.
Exactly. Instead of something simple that scales for 90% of everyone's needs, we get the solution the biggest enterprises want, which then filters down from top to bottom. And that's true of almost everything in tech.
Google open-sourced Dataflow as Beam - https://beam.apache.org
Think of Autopilot as automation that tweaks a pod's requests/limits according to what it actually needs, in order to reduce waste and thus improve cluster utilization.
(I _think_ this no longer qualifies as secret after https://github.com/kubernetes/kubernetes/issues/44095)
That said, k8s is quite extensible and it would definitely be possible to add such a component as a controller.
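For the curious: the community has since built roughly this idea as a controller, the Vertical Pod Autoscaler in the kubernetes/autoscaler project. A minimal sketch of the object you'd create, with illustrative names and the API as I remember it (check the project for the current version):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa             # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app               # the workload whose requests/limits get tuned
  updatePolicy:
    updateMode: "Auto"         # let the controller apply new requests itself
```

In "Auto" mode the controller rewrites pod requests as usage data comes in, which is roughly what Autopilot is described as doing internally.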
Without Google using, validating, and releasing those designs, we might have been stuck with MPI and NFS for a lot longer.
Sure, MPI might blow MapReduce out of the water in terms of number-crunching, but it is also way harder to use.
They did open-source HHVM, React, GraphQL, and Cassandra, which are the closest things I can see to a secret sauce.
That situation is so rare that someone meeting those criteria could be tracked by the very lack of that data.
Well, it looks like you just have to assemble known parts in a specific order to make something users like.
Running a service on AWS requires two goods: high-margin computing resources that Amazon really wants to sell, and the software to turn those computing resources into solutions to business problems. Solving the business problem is worth a fixed dollar amount to be split between the two, so the cheaper the software is, the more money Amazon's customers can afford to spend on AWS.
So the final equilibrium is that Amazon ends up funding open-source solutions and profits from them through increased AWS margins.
But they also figured out that by making their free-as-in-beer tools also free-as-in-freedom open source, they'd lift all clouds, not just their own.
As the dominant player, Amazon loses from anything that reduces vendor lock-in. That's why precious little of Amazon's cloud tooling is released on GitHub.
With Kubernetes, Google made the opposite calculation. As a challenger, it's to Google's benefit to open-source cloud tooling like Kubernetes. Even though their competitors benefit, Google benefits more from standardized tools that reduce switching costs.
AWS still beats GCP in most other ways though (imho), so it's far from a slam dunk. But it has opened a door for Google.
It may not be obvious to everyone, but you don't have to use a license that allows AWS to run you into the ground. Take a look at the API Copyleft License: https://github.com/kemitchell/api-copyleft-license
Also, it's not clear this would prevent AWS from running you into the ground. Amazon is more than happy to publish the source code of what they run internally; they make their money off operations and not software, so they're perfectly happy to commoditize software. The copyright holder is the one trying to make an "open core" business. AWS can just reimplement it.
> you must contribute all software that invokes this software's functionality..
So that rules out all proprietary operating systems, databases, and 3rd-party services. But why stop there? Give us your CPU microcode.
Look at Confluent/Kafka. The PMC is stuffed with Confluent employees and they behave that way. They aren't acting like an Apache project, they are only acting in their own self interest. Any new ideas get the run around, and only their ideas see the light of day. I don't even know why they bother being open source, except maybe to get the free development and bug fixes.
If your worst nightmare is that a big company like Amazon uses your software, then perhaps your business model wasn't really for the cloud era. Selling licenses is to the VC crowd what sequels are to Hollywood: comfort food that everyone knows how to handle but simultaneously knows isn't going to be the future.
Linux isn't worse off because it is used by AWS and Google. Quite the contrary.
A company invests money and engineers in building commercial tooling, which you then pay for because there is added value. You are not paying for the open source - which is freely available. How is that scummy?
The "comply with the license" part is the problem you're not seeing, using open source as SaaS is a loophole not a license feature.
An example: I license something as open source, which means that if you don't pay for it (assuming there's an option for that), you are bound by the license I provided, which means all further work has to be open source (under the same license I used) and source has to be provided with the product. In an ideal world this would mean either:
1. We both get paid
2. We both contribute to open software which is available to anyone
But in our world it means that technically I'm not selling software but a service, so I don't have to do shit. So the result (with scummy companies) is the following:
1. I don't get paid for software critical to your business
2. No one gets the benefit of the new product created despite my license
This is entirely untrue. If it were the case that 'I don't have to do shit', then why doesn't someone else do it too? Running a service takes a WHOLE LOT of work, and writing the software is, in many cases, the easy part.
We know this is true because whenever there is a conflict with a software license, the big cloud vendors just re-write it themselves.
I'm still wary, though. I could imagine that if the resulting fines or forced source releases from violating the GPL weren't a strong enough deterrent, you could win by using a GPL-licensed product for long enough, parrying attacks from the EFF/FSF until you complete a rewrite of the product underneath, then paying the fine/contribution to the EFF/FSF while toppling the original company. If the company is big enough, and can afford enough good lawyers, there may be legal ways to get around laws.
A copyright holder can't get anything more from you by using the GPL. Infringement is infringement. The difference is that a company is usually happy to settle in exchange for a properly paid license, and an open source hacker instead is happy to settle in exchange for complying with the license. You're always free to take it to court and pay the damages from infringement, but you're not going to end up with a valid license in either case, so you'll have to stop distributing the software.
The actual difference, probably, is that you're a (moral) competitor of the open source hacker, whereas if you're not a competitor of the proprietary company, they have no interest in undermining the secrecy of your code or causing you to go out of business, even if they could potentially force that if they went to court. They're likely to consider "pay us a percentage of revenues" a win condition.
Also, lawyers are not like Pokemon. You cannot beat the opposing team's lawyers by having more and stronger lawyers. You can certainly lose by having bad lawyers, but you can only be guaranteed to win by being in the right.
> You can't actually be guaranteed to win by being in the right. You can be in the right and lose.
What he's obviously saying is that there is a seriously decreasing marginal benefit to more expensive (and presumably competent) lawyers.
It's better to have competent lawyers and be right than have amazing lawyers and be wrong.
Obviously there are shades of grey, nuisance lawsuits are a thing, etc.
It's probably better stated as "lawyers are not _always_ like Pokemon". Sometimes they very definitely are.
This is a copyright license at its heart. It is a contract between the copyright owner and the service owner. The end user is just a third party.
I don't think many developers are aware that Ghostscript has an AGPL license, and I've heard that the commercial license costs $25k per year. It's very easy to just `apt-get install ghostscript` when you want to work with PDFs (e.g. with imagemagick), but this violates the AGPL license when you are running a SaaS application.
There are some permissively-licensed libraries (Apache 2.0) that provide similar functionality, such as PDFBox, or PDF.js + Node.js.
Also, it's 2019 - Artifex, it's OK, you can publish your prices (https://www.artifex.com/licensing/)
Only if you modify the Ghostscript source code.
But I also know that the AGPL is usually adopted because the authors want to sell commercial licenses, even to users who have not modified the source. Artifex is very explicit about this intention on the licensing page of their website.
It depends on the definition of “modification” and “derivative work”. Artifex is adamant that any software using Ghostscript is a derivative work, and the copyleft will apply, so all of your source code must also be released under the AGPL license. This is especially true if your software cannot function without Ghostscript.
If you are only distributing your application (i.e. not a SaaS app), then you could make Ghostscript an optional plugin that people can manually install (like LAME for Audacity). But a SaaS app provides access to the application over the network, so you cannot use Ghostscript without a commercial license, or without releasing your application's source code.
I didn’t see anything about Hancom modifying the Ghostscript source. It was the fact that they distributed Ghostscript along with their own application, and their application depended on Ghostscript for some functionality. That was enough to trigger the GPL copyleft, so they were violating the terms of the license and had to settle out of court. The AGPL means that you would be violating the license by providing access to your app over a network.
I'd imagine that the challenge with the AGPL is catching and suing the non-compliant services.
I am curious whether there are any practical observations, one way or the other, about the AGPL’s enforceability.
Nobody is suggesting that an AGPL licence violation would result in a criminal penalty, but if you got caught you might be forced to release any changes you made to the source code. And you could lose the ability to use the software in the future; if your business relied on it, you might be screwed.
Personally I think the AGPL is stupid, but people are free to pick whatever license terms they like for their copyrighted works. Whether I like it or not is irrelevant.
Personally, I find "you are licensed to use the software on your internet-connected computer without releasing your modifications; however, if you open a port and offer it as a service, you are not" to be entirely ridiculous. It seems to me that a license can't (or at least shouldn't) hinge on what other software I am or am not running on my computer (e.g. a webserver).
That is why I asked. I’m all for copyleft (despite the fact that its validity hinges on an inherent affirmation of the validity of the concept of intellectual property), but I passionately hate the AGPL because I think it is an unjust infringement upon my freedom as a user. It’s like saying “you have a license to use this software as long as you don’t run a browser that accesses porn on the same machine”. I think it oversteps the boundaries of copyright-as-designed.
So anyone who refuses to pay for their tools will eventually either lose them or be content with lesser ones.
It is certainly not confusing if you learn it by rote and remember the rule. It is burned into my brain, but I see the mistake a lot, so I assume it is hard for some people to remember. For me, its vs. it's is something I still have to keep looking up.
Open source software is clearly a net positive overall. However, is it a net negative for the industry when enterprise developers rely on open source software without demanding that their company provide financial support for it? How is that different from a company relying on free labor from something like unpaid internships?
Speaking for my own open source projects: there are already better, cheaper, and easier alternatives to my software. I'm already paying out the ass for something I could just download. I'm doing it for reasons that I can't, or won't, buy. And I know that sounds cliche because it is cliche, but passion is cliche.
We're programmers and hackers here. Just like a hot rod enthusiast who spends $200k and 3 years building a car he could buy from a catalogue for $75k, we don't pay as much attention to cost/benefit relationships as we'd like to think.
Not every piece of open source software diminishes the value of enterprise software developers, just as not every unpaid internship reduces the value of entry-level labor; however, both systems can easily (even unintentionally) be abused by businesses.
At my company, an intern conducted research on scalability, creating tools to measure and monitor the software along the way. So not only does the company have some neat monitoring tools now, the intern actually found a bottleneck and improved the product. The company is now offering her a contract...
Caring about each other doesn't have much to do with it.
Where they make open source software for the enterprise and provide support services around it.
I don't want to be subjected to a constant drip of emails, calls, and meeting requests from a salesperson who is desperate to make their numbers and has put me in their sales automation pipeline.
Give me a ballpark estimate, I can go to whomever is needed, and we can go from there. I've never run into a case where "I don't know how much this costs" is an appropriate answer to give a manager.
The polar opposite of this would be Atlassian, who publishes every price and doesn't negotiate at all. At least it's easy to deal with...
Untrue if you license multiple products from them at scale.
"How much you got?"
I understand we might be too small to matter to them if we aren't ready to dump thousands of dollars per month into their bank account, but it does make me cautious about what is going to happen to them when their investment money runs out.
It's probably one of the next companies to be acquired in the coming years.
I can't prove any of this, you'd have to take my word for it! I guess if we ever go public in N years (not saying we are, but _if_), you'll see for yourself on historical results. :)
(I have no affiliation with HashiCorp.)
The only source I found was https://www.hashicorp.com/blog/2017-year-in-review which just says the company "can be successful", not that it is making money.
From Mitchell himself: at the very least, they are growing very quickly. For all the complaining about "pricing pages", I think HashiCorp is doing the right thing. Focus on selling to the big dogs who can give you a sustainable business, and don't cloud your pipeline with smaller shops who wish to shop around.
So here we are, Slacking Postman collections back and forth on our current-year MacBooks.
Open source thrives where different entities develop the software together for mutual benefit, without a single company trying to push its own roadmap or grab all the revenue.
See Linux, see different Apache projects, see PHP etc.
But when such projects take on tens/hundreds of millions of funding, it is inevitable that the technology becomes secondary to paying back the investors 10x, no matter what. Ironically most of that kind of funding seems to go towards sales and marketing rather than R&D. Usually core committers are only a minority of employees in such companies and things get worse when that no longer includes the C-level executives.
This creates a lot of friction when somebody inevitably undercuts such projects on quality, features, or price. This is inevitable because all software eventually becomes a commodity. Your fancy-pants DB clustering solution might be shit hot this year, but you can bet there will be half a dozen projects imitating what you did within a few years.
This is basically what happened to MongoDB. It's all about diversifying, "adding value", proprietary extensions, etc. for their paying customers instead of doing what they were good at for all their users. And courtesy of the license, copyright transfers and outside contributions dry up, and it's all on the company to do everything in-house. Great as long as there's money, but when that dries up it creates problems. Meanwhile, projects like PostgreSQL and others provide more or less drop-in replacements, because they can and because there is interest from users and developers in having that. Apparently they are still doing fine in terms of share price. Best of luck to them, but I probably won't be using it.
Most healthy OSS projects out there have licenses that are well understood from a legal point of view and battle-tested in years/decades of use. Some have quirks that need working around (e.g. the classpath exception for GPL v2); others are fine as is (e.g. Apache 2.0). They also have a plurality of copyright holders spread over many companies, which makes re-licensing impractical. Most such projects have a core of developers who are typically employed by a big company taking an interest in the project. Having key people in key projects is of strategic importance to them and ensures their interests are taken care of.
The whole point of OSS is commoditization: pooling resources between companies and individuals otherwise unlikely to collaborate, to get things done better than each of them would be likely to achieve by themselves. That's why most operating systems these days are largely made up of open source software, much of which has had multiple generations of developers working on it. Most of the build tooling around that, same thing. Apple, MS, Google, they all ship mixes of proprietary code and OSS code. Quite a lot of this stuff can be traced back to the early days of Unix. Almost every big Fortune 500 software company out there pays people to contribute to and represent them in OSS projects that are vital to their business. Even the less popular ones, like Oracle, actually contribute a lot. That's not charity; it's key to their success.
MS just retired two generations of their in-house browser in favor of an open source project primarily backed by Google, with significant Apple contributions from back in the WebKit days. If you had to choose two competitors for MS, those would probably be at the top of your list. Why did they do this? Browsers are a commodity, and they were negatively differentiating with their in-house efforts (as demonstrated by world + dog installing something else). They tried to fix it (Edge) and it didn't work out. All of the surviving browsers are now built around open source projects. I think Edge will probably go down as the last non-OSS browser to be widely used.
I use open source components, libraries, and tools for almost everything I do. I love GitHub. I share code there myself. Most of the stuff I depend on has neither VCs nor much corporate funding behind it, and it's fine. Some of it does have VC funding, and it's also fine. My life would be hell if I had to reinvent all those wheels.
I agree funding OSS development is key, but I don't agree that it needs to come primarily from companies that own the software and sell licenses+support. That's not how most OSS software I use works; it instead thrives on companies using it and paying people to contribute. Nginx is one of many software packages that I use. I don't think I'll ever pay for licensing or support, because frankly they are relatively unimportant to me. As for the dozens of npm dependencies and their hundreds of transitive dependencies: nope, not a cent. I would probably consider SaaS solutions when it makes sense, as I have done with e.g. MariaDB and Elasticsearch. But mostly OSS works because it is free as in speech and beer.
In the case of nginx, there are dozens of OSS web servers out there. It's just one of many moving parts I need to worry about. I'll pick whatever is cheap and convenient.
But apparently, Oracle is a bad guy for doing this, and Google is applauded for stealing Java.
Can't have it both ways.
The benefits of FOSS are largely non-monetary. I know as entrepreneurs and professionals that might be a hard line-of-thought to default to, but I think it is extremely short-sighted (and borderline ignorant) to judge the merits of free software by its profitability.
This was 2011, so I'd hope newer F5 gear has gotten past that.
I hope NGINX doesn't suffer too much under this new ownership.
In the world I live in, far more commercially supported open source runs outside of AWS than runs ON AWS, and more runs ON AWS than AWS has copied and usurped.
As somebody who has no knowledge of that part of the business (Amazon Web Services in production), could you elaborate on that in a few lines or point me to some articles? Thank you.
A few months ago, AWS launched a MongoDB fork.
(There's also https://aws.amazon.com/corretto/ , a long-term-support version of the JDK, because Oracle is getting more aggressive about Oracle JDK licensing.)
Isn’t this a great win?
Was the volunteers' impact really that big? Nginx was created by a single developer, who is now NGINX, Inc.'s CTO, and was then developed pretty much by Nginx employees only.
Search through the change log for "thanks to" http://nginx.org/en/CHANGES and you'll see a lot of contributions. Two people who stand out as frequent contributors are Piotr Sikora and Maxim Dounin (who went on to actually work at Nginx!).
And this does not show the mailing list discussions and bug investigations or the documentation maintained by the community in the early days.
[This post is not intended to mean this sale is bad, just to highlight some of the awesome community contributors]
I don't think there is a future in open source enterprise software where trade secrets are hidden from the public.
I can recommend having a look at https://varnish-cache.org/ - while its performance might not be 100% up to par with nginx in some (very, very high-end) scenarios, it has many other fortes that nginx (at least in its FOSS release version; I've never used nginx plus) just cannot match, in my experience. Seeing `varnishlog` and `varnishtest` in action is alone worth spending a day or two exploring it.
I don't have the wherewithal to get at these arguments from any angle (and I'm certain I'm missing many others).
Studious maintenance of the base code and of branches/PRs is what builds tried-and-true code bases, and I hope we aren't torn away from that practice by this buyout; but only time will tell...
nginx is fine, but there are now other options that work just as well.
Which only means that NGINX will get even better over time.
What are the contractual consequences if they don't keep that commitment? If the answer is "none", then it's not a commitment.
proxy_pass, for example, will only resolve a hostname at the time the configuration is parsed, unless you use a convoluted variable hack. This was a serious issue, requiring you to restart your fleet if a backend server changed IPs. The bug fix for this was implemented only in Pro and sold as "DNS for Service Discovery."
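For anyone bitten by this, the variable hack in the open-source version looks roughly like the following (resolver IP and hostname are placeholders):

```nginx
server {
    listen 80;

    # Placeholder resolver; re-resolve cached answers at most every 10s.
    resolver 10.0.0.2 valid=10s;

    location / {
        # Putting the upstream in a variable forces nginx to resolve the
        # name at request time through the resolver above, instead of
        # once when the configuration is parsed.
        set $backend "http://backend.internal.example.com";
        proxy_pass $backend;
    }
}
```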
But yeah, hopefully the community around it is solid enough to make that a possibility. I'd really prefer to not go back to Apache...
Edit: sorry, I should have just checked your profile.
I'm sure that's not the only feature it's missing compared to nginx. Envoy is not very comparable to nginx, in my opinion... but I also wouldn't reach for varnish as an nginx alternative either.
"There is more to life than increasing its speed."
No. Memory safety is a vanishingly small subset of all bugs and security problems. PHP is memory-safe, for example. Where has that gotten us?
> "There is more to life than increasing its speed."
Not if you're a computer.
The number of security vulnerabilities due to PHP's crappiness is two orders of magnitude greater than all of nginx vulnerabilities combined.
Yet PHP is a memory-safe language.
Memory safety won't fix anything by itself, it will just shuffle the shit into some other place.
Now if you're claiming that if you take the nginx developers and force them to use Rust they'll somehow start writing better code, then that's a valid point, although I seriously doubt it's realistic or even true.
On another note, F5's poorly written code is the reason TLS 1.0 is considered insecure (using a variant of the POODLE attack), among other major security lapses.
Interesting to also see what AWS is doing in response to some of the more complicated licensing agreements, specifically Elasticsearch:
The challenge for nginx was that they raised VC capital, so they were in a forcing function: either grow revenue or get acquired. They could have remained an independent OSS product forever, but alas, no more.
load balancing? check.
stateful load balancing? check.
HSM-enabled ssl-termination? check.
hardware accelerated ssl-termination? check.
NG firewall? check.
compiled Lua/Tcl (I forget which) scripts so you can program something insanely complicated? check.
ISP sized NATs? check.
Plus, way more configuration knobs and options than you'd ever want at each network layer. Like, come up with a load balancing scheme where TLS 1.2 clients using ChaCha20-Poly1305 get sent to a specific pool of servers while everything else goes to another pool, except for clients trying to use QUIC who are coming from a specific range of IPs; they go to yet another set of servers.
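To give a taste, the cipher half of that scheme could be expressed as a Tcl iRule along these lines. This is a sketch from memory, so treat it as pseudo-config: the pool names are made up, and the exact version/cipher strings vary by TMOS release.

```tcl
# Send TLS 1.2 clients that negotiated ChaCha20-Poly1305 to their own pool;
# everything else falls through to the default pool.
when CLIENTSSL_HANDSHAKE {
    if { [SSL::cipher version] eq "TLSv1.2" &&
         [SSL::cipher name] contains "CHACHA20-POLY1305" } {
        pool chacha_pool
    } else {
        pool default_pool
    }
}
```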
Maybe a better way to think of it is that it's a single device for tweaking anything L3-L7 for your server and parts of your network.
(Used to work for F5 too, but I'm not sure how specific I can get with the NDA.)
As the industry continues to put its weight behind NFV and SDNs, along with the rise of IDNs, do you see network appliances keeping their share of the market against those solutions? I believe edge networks might continue to require these appliances for WAF, firewall/DPI (and other things I don't know about)... but that'd be a niche?
Obviously they won't go away, but network appliances definitely won't keep their share because not everyone needs them as SDNs get better. I see the SDN and IDN as mostly solving multivendor integration issues and making it easier to configure at least semi-complicated networks, which doesn't make them a drop-in replacement for many of the problems f5 is trying to solve. For certain network loads they might achieve performance parity, too.
One of the draws of an f5 box for a large customer is that instead of having like 5 vendors or OSS technologies that they have to maintain for load balancing, ssl-termination, hardware-accelerated/-hardened encryption/decryption, SAML, firewalls, etc., you have one company's product (that hopefully has been designed to work well with itself) doing all of that, configured from one location. If you don't have to worry about that multivendor orchestration headache, then massive network appliances like BIGIP aren't a value add over having a couple of vendors.
Another draw for BIGIP is doing things at the speed of packet flow, or nearly so, even for VM containers and even for fully encrypted SSL. If you don't have to squeeze every last microsecond of latency or bandwidth out of your 10Gbps or 100Gbps fiber connection, BIGIP isn't a value add over SDN. If you only care a little bit, then an SDN could be way cheaper than BIGIP, because you can configure things to do what you need for lower hardware and support costs.
People who care about that multivendor issue and about performance are always going to have hardware dedicated to networking, even if they use SDNs or IDNs, because they need that dedicated compute to achieve their goals. Sustained 10Gbps connections are no joke, let alone 40Gbps or 100Gbps. Same with tens of thousands of simultaneous SSL connections. All of a sudden you need dedicated ASICs/"raw compute" and RAM to keep up with the firehose of packets.

Plus, network appliances will begin to integrate with SDNs and IDNs. So for customers on the border between needing an appliance and getting by with an IDN or SDN and more manpower, the form of the network appliance will change, but they're still going to have hardware down in their server room, or compute instances in their cloud, dedicated to networking infra. Want SSL termination? You need compute. Hardware-accelerated or hardened SSL termination? You need specialized hardware. Firewalls? Compute. SAML? Compute. Complicated NATs? Compute. If you've got a couple of BIGIPs in your server room, your network is complicated enough and/or bandwidth-heavy and/or latency-sensitive enough that you're going to have nearly as many racks dedicated to giving your SDN enough compute as you do to network appliances.
BIGIP isn't valuable because it's a great router or switch. It's valuable because of how much it does on top of that in a single server/VM, and how well it does it. And most of what it's great at are not things that an SDN will solve. Sure, the configuration tweaking will reach parity, and maybe load balancing performance too (but having seen how BIGIP achieves it, especially for complicated setups, I kinda doubt it). But if BIGIP integrates with SDNs or IDNs even just a little, then what could happen is that people on the borderline just get slightly smaller BIGIPs and offload some of the tasks where BIGIP overlaps with SDNs/IDNs, and the BIGIP becomes just another node in the SDN. If BIGIP goes all in on SDN and IDN, you might even see people buying larger BIGIPs to orchestrate their overall SDN and IDN.
L4-L7 load balancing, distributed DNS, SSL offload, WAF, DPI, data centre firewall and other things. With a nice WebUI to configure all that.
The Tcl iRules allow you to hook into pretty much any stage of the request or the response L4-L7 at FPGA speeds to do whatever you wanted to the request / response data.
It's a very powerful product.
I also work at F5, and used to work on the FPGA. This is unfortunately not true for TCL iRules. The FPGA basically only operates on L2-4, L7 is all software.
There was some talk about doing L7/iRules in an FPGA but prototypes never produced compelling enough performance gains to make it worth it.
I challenge that assertion!
For simple things it's adequate, but the fact that one can SSH in is also helpful, as there's a RHEL/CentOS base to work on. We were able to get Let's Encrypt working with a bash-only ACME client (dehydrated) in short order.
Heck, run Ansible on it:
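For example, a hypothetical play that treats the box's CentOS base as an ordinary SSH host and pushes a renewed Let's Encrypt certificate (hostname and file names are placeholders; /config/ssl/ssl.crt/ is where BIG-IP keeps certificates, but verify on your version):

```yaml
- hosts: bigip.example.com        # placeholder inventory hostname
  become: yes
  tasks:
    - name: Copy the renewed certificate into place
      ansible.builtin.copy:
        src: certs/example.com.pem
        dest: /config/ssl/ssl.crt/example.com.pem
        mode: "0644"
```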
If it weren't for the need for remote backups, email and such would be hosted there as well, and you could run a company on one of these with no access to the public internet at all. Accounting, finance, etc: all of it.
Or maybe that’s Citrix.
The earlier versions were based on FreeBSD 2.2.6.
Way, way back they used to be on BSD. Then they moved to CentOS for cough cough cough (possibly NDAed) cough cough.
It's not a huge deal which, because the host OS isn't anywhere in the data plane.
Here's the super versatile Colm Mac explaining what AWS does at L4: https://www.youtube.com/watch?v=8gc2DgBqo9U
Google has been very open about their network infrastructure, here's a nice summary from 2015: https://ai.googleblog.com/2015/08/pulling-back-curtain-on-go... and not mentioned in that blog... their NetworkLoadBalancer, Maglev: https://cloud.google.com/blog/products/gcp/google-shares-sof... (AWS equivalent of which would be HyperPlane: https://atscaleconference.com/videos/networking-scale-2018-l... allegedly based on S3's load balancer).
The long version is, "varying degrees of horror"
The data plane is where the high-speed logic lives and the data travels. That's where you do the multiple-100Gbps software-defined networking, and it's crazy-fast chips doing it.
And the control plane has interconnects to program the data plane chips with the rules you want, so the data never hits the control plane at all. It's kind of like a water faucet, where the knob doesn't touch the water but controls the floodgates.
We were getting slammed on duty for the product and we were looking at ways of getting the appliances built locally using licensed F5 software, as the software itself wasn't as heavily taxed as the physical hardware. Everything in it seemed fairly commodity, except for the big F5 logo on the front.
It was an appliance that worked fantastically well. One deployment had an uptime of over ten years.
You know, Cisco's IOS XR is built on Linux, but all the real parts are behind their private kernel modules running on ASICs and FPGAs; traffic doesn't even touch the TCP/IP stack of the OS. Cisco ASAs have Celeron/Atom CPUs, which obviously couldn't handle the specified loads on their own.
When I was visiting Taiwan and then Hong Kong in the late 1990s, a local said, "There are no software companies here." Everything with money had a hardware sale associated with it; software was just pirated as much as possible, end of story. No one would pay for software if they did not have to. It may be a "Western" thing to have companies and people that can live from writing software alone.
What exactly is going on with open source licencing? Is anybody violating open source licences?