Hacker News
Nginx to Be Acquired by F5 Networks (nginx.com)
1057 points by eduren on March 11, 2019 | 383 comments

So...what is the future of enterprise open source? Is there a future for enterprise open source?

If you start a company and open source your core/clients, your product becomes part of AWS, and AWS runs you into the ground. If you mix in proprietary licenses to protect yourself, AWS forks your core, adds in open-source licensed clients, then runs you into the ground (and you lose open-source contributors/supporters as a bonus who may fork your core themselves).

I remember reading Google's system design papers in an undergrad class: they publish only the top-level architecture of core systems they use, and only after 3-5 years of use, once they have moved on to a better system. After all this (Docker/Redis/Elastic/Nginx), I think that might be the best path forward. You can provide the benefits of open source and recognition for the architects, but not lose your competitive advantage. Open-sourcing your core product seems too idealistic.

My understanding is that Google has recently deviated from this strategy. The result of the strategy you mention is that the industry standardized on other companies' implementations of ideas that came from Google: Hadoop (MapReduce), HDFS (GFS), ZooKeeper (Chubby), and more. For examples of newer open source projects that see more active maintenance from Google, see Kubernetes and TensorFlow.

But K8s is not a development of Google software. It is developed specifically for the public, it throws out all of the interesting parts of Borg, and Google themselves don't use it, or barely do. As for that other stuff it seems to have worked out fine for Google: they describe obsolete technologies and the outside world develops hideous analogs of those and uses them for decades. Hadoop for example is just an unbelievably bad implementation of map reduce as it was ten years ago and is laughable compared to what Google replaced it with. HDFS is a joke of GFS which Google turned off eight years back. It's really remarkable the way the industry is essentially self-disabling in this regard. Meanwhile Google does not burden itself with trying to adopt every idea they read in a paper, and maintain a significant cost and efficiency advantage by doing so.

> it throws out all of the interesting parts of Borg

This is not true. It throws out the Google-specific parts of Borg (like integration with Google's service discovery, load balancing, and monitoring systems) and improves a number of things compared to Borg. For a good reference on the evolution of Borg into Kubernetes, I recommend the recent Kubernetes Podcast interview with Brian Grant: https://kubernetespodcast.com/episode/043-borg-omega-kuberne...

> Google themselves don't use it

This is not true, and the reasons why it hasn't replaced Borg are related to the integrations I mentioned above (which will take time to integrate or replace) and the zillions of lines of borg config that have built up over the years, rather than concerns that people outside of Google would have (production-worthiness, reliability, etc.)

(Disclaimer: I worked on Borg at Google, and now work on Kubernetes at Google.)

Unfortunately we can't discuss the parts of Google's platform that aren't in Kubernetes on this forum. If we could, I think I could defend my statement reasonably well. But perhaps you just don't think that the pieces I would mention qualify as interesting.

go/-link or it didn't happen.

well my secret document says you're wrong and i'm right.


Partial information is better than none.

aphorism rejoinder:

disinformation is worse than no information.

Implementation matters to google, more than to say the average company that uses Hadoop. At "Google-scale" small imperfections become huge imperfections.

What's good for the bottom 90% of tech companies probably isn't for the top 10%.

I am a consultant working on Hadoop installations across the globe. On average I'm usually able to save 70% of disk usage and 30% of overall cost by changing the defaults to reasonable values and by migrating companies out of HDFS to something like S3. I have spent the majority of my career (10 years) working on Hadoop and I can tell you that it is a terrible piece of software with insane inefficiency all over the place. If you switched over all the Hadoop installations on Earth at once to something more reasonable, it would be visible on the global CO2 production chart quite a bit. What is good for the bottom 90% of companies is a financial question, not a technological question. It is a bit unfortunate that Hadoop is so popular and nobody cares about efficiency, not even the Hadoop vendors (maybe with the exception of MapR, which is not open source).
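To give a concrete sense of the kind of knob-turning involved, here is an illustrative ops sketch (paths and policy values are hypothetical, not from any real engagement; erasure coding requires Hadoop 3+):

```shell
# Default HDFS stores every block 3x (200% storage overhead).
# Reed-Solomon erasure coding (Hadoop 3+) stores the same data
# with roughly 50% overhead instead:
hdfs ec -setPolicy -path /data/warehouse -policy RS-6-3-1024k

# Or simply lower replication on expendable datasets:
hdfs dfs -setrep -w 2 /data/scratch
```

Which lever is safe depends on the workload; erasure-coded data is cheaper to store but more expensive to reconstruct on node failure.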

> This find | xargs mawk | mawk pipeline gets us down to a runtime of about 12 seconds, or about 270MB/sec, which is around 235 times faster than the Hadoop implementation.
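The shape of that fan-out/reduce pipeline can be sketched with plain awk (mawk is a faster drop-in where available). This toy version just counts lines across files in parallel, not the article's actual workload:

```shell
# Build a tiny dataset to run the pipeline over.
mkdir -p /tmp/pipeline-demo
printf 'a\nb\nc\n' > /tmp/pipeline-demo/1.txt
printf 'd\ne\n'    > /tmp/pipeline-demo/2.txt

# Fan out one awk process per file (up to 4 in parallel),
# then reduce the partial counts with a second awk.
find /tmp/pipeline-demo -name '*.txt' -print0 \
  | xargs -0 -n1 -P4 awk '{ n++ } END { print n }' \
  | awk '{ total += $1 } END { print total }'   # prints 5
```

No coordination service, no JVM startup, no HDFS round trips: for data that fits on one machine, the kernel's pipe buffer is the whole "cluster".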


Using hadoop/spark for <2gb of data seems like a terrible idea.

When all you have is a hammer everything starts to look like a nail.

Well this is great until you need more nodes. :) I am talking about the same scalability while maintaining a much lower ecological and financial footprint.

Is there a good open source alternative that meets the HDFS use-case (i.e. file or blob storage, rather than a KV store designed for point lookups)? Or is tuning the HDFS defaults the best you can do without migrating onto someone's cloud platform?

I'd argue Kubernetes isn't the best choice for the bottom 90%. There's a lot of companies you could describe as tech and many are doing just fine in the old world of manual application provisioning.

I don't agree at all. While the large majority doesn't need the scalability it offers, everyone can benefit from all the other stuff that applying the 'best practices' brings. The problem is that many people don't stick to the best practices and don't know how to build containerised applications.

Implementing the whole "DevOps" idea just becomes a whole lot easier when developers don't even have the concept of their snowflake server anymore. And yes - k8s has a ton of overhead and is pretty complex to get into at first, but it all makes sense. There are many points that could be criticised about it, it's far from perfect, but having a standardised way of deploying whatever application has been a massive game changer in the development environments I've been thrown in.

Source/disclaimer: I'm a consultant that has seen quite a few k8s/openshift fuckups and success stories, both on large and small scale.

>I'd argue Kubernetes isn't the best choice for the bottom 90%.

Exactly. Instead of having something simple that scales for 90% of everyone's needs, we have the solution that most enterprises want, which filters down from top to bottom. And that's true of almost all things in tech.

Google offers their most recent infrastructure as a service in GCP, e.g. Cloud Dataflow, and it hasn't exactly taken the world by storm. Industry standards matter, even if they are inferior implementations; the differential is just not that big.

Dataflow is based on OSS Apache Airflow. I don't know how well it's doing in the wild, but every IT admin I've worked with is super excited to use it.

Airflow and Dataflow are not related.

Google open sourced dataflow as Beam - https://beam.apache.org

Dataflow itself isn't open source. Beam is not open-source Dataflow; however, you can use the Beam SDK with Dataflow as a runner.

Ah, apologies, my bad.

Can you mention some of the interesting parts of Borg that are missing in k8s?

among other things, I miss Autopilot and generally the extensive machinery to help with massive capacity planning.

Think of Autopilot as an automation that tweaks a pod's request/limits according to what it actually needs in order to reduce waste and thus improve cluster utilization.

(I _think_ this no longer qualifies as secret after https://github.com/kubernetes/kubernetes/issues/44095)

That said, k8s is quite extensible and it would definitely be possible to add such a component as a controller.
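For what it's worth, the community Vertical Pod Autoscaler project is a controller in roughly this spirit, adjusting pod requests from observed usage. A minimal manifest sketch, assuming the VPA controller is installed in the cluster and a Deployment named `my-app` exists (both are assumptions for illustration):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"   # allow the controller to evict and resize pods
EOF
```

It is of course a far cry from the full Borg machinery described above, but it shows the extension point exists.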

Well, nobody should be using MapReduce now, and HDFS now is a lot better than 8 years ago.

Without Google using, validating, and releasing those designs, we might have been stuck with MPI and NFS for a lot longer.

How many GMMs can MapReduce do? What about lattice quantum chromodynamics performance? MPI and Lustre exist for a reason: MapReduce isn't great for all problems.

MR never claimed to be great for all problems. Its main selling points were big data and ease of use.

Sure, MPI might blow MR out of the water in terms of number-crunching, but it is also way harder to use.

I know some people who develop on top of Tensorflow; from my conversations with them, Tensorflow's moat is Google making a lot of breaking changes by incorporating a lot of new functionality. I've also heard complaints that the online documentation isn't terribly great for triaging not-happy paths, to the point where you kind of have to just dig through the source code to figure out what's going on. Also, if you want to poach the maintainers, you would somehow have to poach them away from Google (which isn't happening, since ML at scale is something Google does best, and is an ongoing field of research). You can't become as good at using Tensorflow as Google is by simply forking the project.

Does that deviation line up with Google's entry into offering cloud computing?

Pretty much. That's when it became clear that Google would have to support whatever is popular outside Google.

Gonna just chime in and mention Yahoo for Hadoop (yes I know Big Table was Google), and Zookeeper. Great tech started at Yahoo, but they didn’t (unlike today) make too much of a fuss/self-pat-on-the-back about it

Those projects were initiated by Yahoo, but the designs are taken directly from Google papers. Hadoop (HDFS and Map/Reduce) was based on the GFS and Map/Reduce papers. ZooKeeper was based on the Chubby paper.

Note that none of the successful large companies are actually open sourcing their 'secret sauce'. There is no open source version of Google's search engine, Facebook's social network etc. It's only supporting tools and infrastructure they release (i.e. commoditization of their complements or dependencies.) A product company open sourcing their core product is committing suicide on the product front. Sure, there will be services companies who may benefit (Red Hat or providers like AWS etc.) but the company that actually produces the software rarely will.

I don't think Facebook has any "secret sauce". The integration and scale is certainly impressive, but there's no part of the functionality which is at all mysterious to me. Unlike, say, Google Search, everything on Facebook appears pretty straightforward.

They did open-source HHVM, React, GraphQL, and Cassandra, which are the closest things I can see to a secret sauce.

Their ad-targeting infrastructure likely has some secret subtleties to it.

Meh, not really. Cookies all the way down.

They can and do jump the incognito mode regarding ad complaints. I have proof.

You’ll be surprised how many ways there are to track you even in Incognito

nah... after working in Internet Marketing I am convinced the only way to not be tracked is to simply not use any device or credit card.

That situation is so rare that someone meeting those criteria could be tracked by the lack of that data.

Other companies do this too, so it doesn’t qualify as secret sauce.

These comments so much remind me of the infamous Dropbox comment when they launched: https://news.ycombinator.com/item?id=9224

Well, looks like you just have to assemble known parts in specific order to make something users like.

To expand on this: these companies are not open sourcing their data. Facebook no longer gives you access to the social graph. Google doesn't have an open API for their search results or search volume.

Providing an API doesn't mean open sourcing the data. Because the API provider can very well restrict who can use it, and how it is used, and throttle it.

There absolutely is a future for enterprise open source. It's not through direct monetization, but rather driving down the price of complements for existing product lines.

Running a service on AWS requires two goods: high-margin computing resources that Amazon really wants to sell, and the software to turn those computing resources into solutions to business problems. Solving the business problem has a fixed dollar amount to be split between the two, so the cheaper the software is, the more money Amazon's customers can afford to spend on AWS.

So the final equilibrium is that Amazon ends up funding open-source solutions, and profits off it from increased AWS margins.

Amazon figured this out and surrounded their high-margin computing resources with a dizzying array of free-as-in-beer tools.

But they also figured out that by making their free-as-in-beer tools also free-as-in-freedom open source, they'd lift all clouds and not just their own.

As the dominant player, Amazon loses from anything that reduces vendor lock-in. That's why precious little of Amazon's cloud tooling is released on Github.

With Kubernetes, Google made the opposite calculation. As a challenger, it's to Google's benefit to open-source cloud tooling like Kubernetes. Even though their competitors benefit, Google benefits more from standardized tools that reduce switching costs.

This! Kubernetes in some ways was a shot in AWS’s direction. It gives companies a reason to switch to Google. GKE is still far more stable and mature than EKS.

AWS still beats gcp in most other ways though (imho), so it’s far from a slam dunk. But it has opened up a door for Google.

Interesting read on this topic by Stratechery - https://stratechery.com/2016/how-google-cloud-platform-is-ch...

> If you start a company and open source your core/clients, your product becomes part of AWS

It seems not obvious to everyone, but you don't have to use a license that allows AWS to run you into the ground. Take a look at the API Copyleft License: https://github.com/kemitchell/api-copyleft-license

This is not a FOSS license in the sense of the Debian Free Software Guidelines or Open Source Definition: it compels you to publish your changes, even if you're only making internal use. Free software / open source licenses do not require that. Most obviously, it fails the "desert island" and "dissident" tests of https://people.debian.org/~bap/dfsg-faq.html . (There are plenty of perfectly fine licenses that are not DFSG/OSD licenses, but it's not what most people mean by "open source," and importantly it will be impossible to get this software into a mainstream Linux distro, so you won't have the sort of adoption that actual open source licenses get you.)

Also, it's not clear this would prevent AWS from running you into the ground. Amazon is more than happy to publish the source code of what they run internally; they make their money off operations and not software, so they're perfectly happy to commoditize software. The copyright holder is the one trying to make an "open core" business. AWS can just reimplement it.

Oh boy, just skimming through that license and I can see a bunch of lawyers having a good giggle and a field day over it.

> you must contribute all software that invokes this software's functionality..

So that rules out all proprietary operating systems, databases, 3rd party services, but why stop there? give us your CPU microcode.

Enterprise open source is bleak. There aren't very many enterprise open source projects with pure intent anymore, they are all looking for the big payday, which unfortunately companies like Google and Facebook have encouraged.

Look at Confluent/Kafka. The PMC is stuffed with Confluent employees and they behave that way. They aren't acting like an Apache project, they are only acting in their own self interest. Any new ideas get the run around, and only their ideas see the light of day. I don't even know why they bother being open source, except maybe to get the free development and bug fixes.

Obviously open source isn't going away. It's just that VC struggles to understand the implications, still, and the past half a decade has been imprinted deeply by all the cheap money.

If your worst nightmare is that a big company like Amazon uses your software then perhaps your business model wasn't really for the cloud era. Selling licenses is to the VC crowd what sequels are to Hollywood, comfort food that everyone knows how to handle but simultaneously knows isn't going to be the future.

Linux isn't worse off because it is used by AWS and Google. Quite the contrary.

I agree. Open Source software is not a good way to build a moat, and investors should take this into account. It's good for companies that focus on building employees, teams, and customer relationships, and keeping them strong at all times. They can't fall back on exclusivity of their code, hosting, or support offerings. The ideal Open Source company would probably be something like Red Hat.

You could AGPL and sell proprietary exceptions. Amazon won't touch AGPL code.

Others are happy to provide AGPLv3 code as SaaS, look at MongoDB's issues. You can build a ton of tooling around said code to enhance performance, adding features and billing, all while not modifying the core AGPLv3 code and thus avoiding the need to contribute back. This is a scummy business practice, but technically legal.

Why is it a scummy business practice?

Because your core business feature is done by someone else and you just take it and use it for your own profit without giving anything back. Do you need a definition of what scummy means?

No need to be rude. I know what scummy means. It's just not clear to me why this is considered scummy. Anyone is welcome to take the open source and benefit from it as long as they comply with the license. This is a very fundamental aspect of open source.

A company invests money and engineers in building commercial tooling, which you then pay for because there is added value. You are not paying for the open source - which is freely available. How is that scummy?

I apologize for the brash response.

The "comply with the license" part is the problem you're not seeing, using open source as SaaS is a loophole not a license feature.

An example being, I license something as open source which means if you don't pay for it (assuming there's an option for that), you are bound to follow the license I provided which means all further work has to be open source (the same license I used) and source has to be provided with the product. In an ideal world this would mean either:

1. We both get paid

2. We both contribute to open software which is available to anyone

But in our world it means: technically I'm not selling software but a service, so I don't have to do shit. So the result (with scummy companies) is the following:

1. I don't get paid for software critical to your business

2. No one gets the benefit of the new product created despite my license

Hence, scum.

> technically I'm not selling software but a service so I don't have to do shit.

This is entirely untrue. If it were the case that 'I don't have to do shit', then why doesn't someone else do it too? Running a service takes a WHOLE LOT of work, and writing the software is, in many cases, the easy part.

We know this is true because whenever there is a conflict with a software license, the big cloud vendors just re-write it themselves.

Using someone else's work and making money on it without contributing anything back (code, funding) is morally wrong.

So there's "Server Side Public License" based on GPL.

Maybe. At least our company won't touch GPL code; as one colleague described to me, if you violate a proprietary license, a company will come after you for money, while violating a GPL license gets the EFF involved who will come after you for your source code.

I'm still wary, though. I could imagine if the resultant fines or source code releases from violating a GPL license weren't a strong enough deterrent, you could win by using a GPL-licensed product enough, then parry off attacks from EFF/FSF until you do a complete rewrite of the product underneath, then pay the fine/contribution to EFF/FSF while toppling the original company. If the company is big enough, and can afford enough good lawyers, there may be legal ways to get around laws.


(Usually not the EFF; more likely Software Freedom Conservancy or someone. EFF is more digital rights and privacy stuff, not copyright.)

A copyright holder can't get anything more from you by using the GPL. Infringement is infringement. The difference is that a company is usually happy to settle in exchange for a properly paid license, and an open source hacker instead is happy to settle in exchange for complying with the license. You're always free to take it to court and pay the damages from infringement, but you're not going to end up with a valid license in either case, so you'll have to stop distributing the software.

The actual difference, probably, is that you're a (moral) competitor of the open source hacker, and if you're not a competitor of the proprietary company, they have no interest in undermining the secrecy of your code or causing you to go out of business, even if the could potentially force that if they went to court. They're likely to consider "pay us a percentage of revenues" as a win condition.

Also, lawyers are not like Pokemon. You cannot beat the opposing team's lawyers by having more stronger lawyers. You can certainly lose by having bad lawyers, but you can only be guaranteed to win by being in the right.

You can't actually be guaranteed to win by being in the right. You can be in the right and lose.

>> Also, lawyers are not like Pokemon. You cannot beat the opposing team's lawyers by having more stronger lawyers. You can certainly lose by having bad lawyers, but you can only be guaranteed to win by being in the right.

> You can't actually be guaranteed to win by being in the right. You can be in the right and lose.

What he's obviously saying is that there is a seriously decreasing marginal benefit to more expensive (and presumably competent) lawyers.

It's better to have competent lawyers and be right than have amazing lawyers and be wrong.

Obviously there are shades of grey, nuisance lawsuits are a thing, etc.

Well his intention is clear. But stating you are guaranteed to win if you are in the right strikes me as naive at best and woefully neglectful of reality at worst.

It's probably best stated like "lawyers are not _always_ like Pokemon". Sometimes they very definitely are.

Yeah, my phrasing was sloppy: you're not guaranteed to win whenever you're in the right. But being in the right is a precondition of being guaranteed to win (along with having competent lawyers, a competent judge, etc.).

The only claimants to your source code are those to whom you've distributed binaries built from GPL sources. Anyone else can pound sand.

If you were the author of a GPL piece of code whose licence was violated, and the EFF came to you and said "we'll pay for the lawyers and in exchange we get publicity", most people would say yes.

Not the case for AGPL: then it's everyone who has access to your online service built with the code.

I don't think it works like that.

This is a copyright license at its heart. It is a contract between the copyright-owner and the service owner. The end user is just 3rd party.

Wait, isn't the whole point of AGPL that the "user" entitled to the source code is now the service user, i.e. potentially everyone?

That's true, but GGP comment mentioned AGPL presumably for a reason :)

You actually can't magically lose your rights to code you created. You could rewrite your code so it doesn't depend on gpl code if you found you had included code you had no right to.

I mean if you're planning to violate the license then does it really matter what the license is? Your company sounds shady if you evaluate software licenses based upon how much it will hurt when you violate them.

There's a difference between "planning to violate the license" and worrying about what happens if you do because someone didn't pay attention.

Well, most enterprises wouldn't: https://blog.dgraph.io/post/relicensing-dgraph/

Has the AGPL ever successfully held up in court?

Yes, Artifex sued Hancom over their use of Ghostscript [1]. (The article says GPL, but Ghostscript has an AGPL license.) The judge denied a motion to dismiss the claims, and they reached a settlement for an undisclosed amount.

I don't think many developers are aware that Ghostscript has an AGPL license, and I've heard that the commercial license costs $25k per year. It's very easy to just `apt-get install ghostscript` when you want to work with PDFs (e.g. with imagemagick), but this violates the AGPL license when you are running a SaaS application.
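To make the trap concrete, a typical server-side invocation looks like the sketch below (illustrative; `input.pdf` is a placeholder), and nothing about the command hints at the licensing question:

```shell
# Rasterize a PDF to PNGs on the server. This quietly depends on
# AGPL-licensed Ghostscript.
gs -dBATCH -dNOPAUSE -sDEVICE=png16m -r150 -o page-%03d.png input.pdf
```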

There are some permissively-licensed libraries (Apache 2.0) that provide similar functionality, such as PDFBox [2], or PDF.js [3] + Node.js.

[1] https://www.fsf.org/blogs/licensing/update-on-artifex-v-hanc...

[2] https://pdfbox.apache.org/

[3] https://mozilla.github.io/pdf.js/

$25k seems a bit on the high end - https://ironpdf.com/licensing/

Also, it's 2019 - Artifex, it's OK, you can publish your prices (https://www.artifex.com/licensing/)

> but this violates the AGPL license when you are running a SaaS application

Only if you modify the Ghostscript source code.

From what I was reading, I think your interpretation might have been the intention of the people who wrote the AGPL license.

But I also know that the AGPL license is usually adopted because they want to sell a commercial license, even if the source has not been modified. Artifex is very explicit about their intention on the licensing page of their website.

It depends on the definition of “modification” and “derivative work”. Artifex is adamant that any software using Ghostscript is a derivative work, and the copyleft will apply, so all of your source code must also be released under the AGPL license. This is especially true if your software cannot function without Ghostscript.

If you are only distributing your application (i.e. not a SaaS app), then you could make Ghostscript an optional plugin that people can manually install (like LAME for Audacity.) But a SaaS app provides access to the application over the network, so you cannot use Ghostscript without a commercial license, or without releasing your application’s source code.

I didn’t see anything about Hancom modifying the Ghostscript source. It was the fact that they distributed Ghostscript along with their own application, and their application depended on Ghostscript for some functionality. That was enough to trigger the GPL copyleft, so they were violating the terms of the license and had to settle out of court. The AGPL means that you would be violating the license by providing access to your app over a network.

Surely it doesn't need to—either you accept the license or you don't accept the license. If you don't accept the license, you can't use the software.

I'd imagine that the challenge with the AGPL is catching and suing the non-compliant services.

If the source is available, I absolutely can use the software—regardless of license. The only thing preventing me from doing so would be either criminal or civil law, and while IANAL, I don’t believe there are criminal penalties for license violations. The copyright holder would have to sue me, and prove that I violated the license, and then they might be able to get a court to force me to stop, pay them, or both.

I am curious whether there are any practical observations, one way or the other, about the AGPL’s enforceability.

Obviously you can use the software. Of course you'll probably get away with a license violation if you're not shouty about it. Similarly, I can use a pirated copy of Windows XP too, with effectively zero risk of getting caught.

Nobody is suggesting that an AGPL licence violation would result in a criminal penalty, but if you got caught you may be forced to release any changes you made to the source code. And you could lose the ability to use the software in future—if your business relied on it you might be screwed.

Personally I think the AGPL is stupid, but people are free to pick whatever license terms they like for their copyrighted works. Whether I like it or not is irrelevant.

The way you get forced is by someone filing a civil suit. My question was has anyone done that and been successful. It looks like the answer is yes, although it was settled out of court and not adjudicated. I am curious if a judge has ever found the AGPL to be a valid license constraint.

Personally, I find “you are licensed to use the software without releasing your modifications on your internet connected computer, however if you open a port and offer it as a service, you are not” to be entirely ridiculous. It seems to me that a license can’t (or at least shouldn’t) hinge on what other software I am or am not running on my computer (eg a webserver).

That is why I asked. I’m all for copyleft (despite the fact that its validity hinges on an inherent affirmation of the validity of the concept of intellectual property), but I passionately hate the AGPL because I think it is an unjust infringement upon my freedom as a user. It’s like saying “you have a license to use this software as long as you don’t run a browser that accesses porn on the same machine”. I think it oversteps the boundaries of copyright-as-designed.

agpl will not save you if they choose to re-implement the solution.

Nor will anything else, I reckon.

That's basically how the first GPL software came into being -- as a (partial) reimplementation of commercially licensed unix.

keeping the system proprietary will make it harder for them to understand what you are doing, but that would also be a very limiting move in the enterprise software space.

But will anyone else?

What happened is the realisation that FOSS developers also have to pay their bills, and idealism only takes you so far.

So anyone that refuses to pay for their tools will eventually either loose them, or be content to use lesser ones.

“Loose” is the opposite of “tight,” “lose” is the opposite of “find.” Easy to remember because the opposites have the same number of letters.

It always amazes me how often I see this spelling error.

Given that lose is pronounced looze and loose is pronounced looce I can see why it’s confusing.

I learned how to spell it in early grade school--maybe eight years old--so I don't understand what's confusing about it but I'm an educated man.

Maybe because not everyone is a native speaker that learnt how to spell it properly at eight years old.

It's confusing if you try to apply phonetics or any kind of logic to it, as an ESL person might, or many native speakers for that matter.

It is certainly not confusing it you learn it by rote and remember the rule. It is burned into my brain, but I see the mistake a lot so I assume it is hard to remember for some people. For me its vs. it's is something I still have to keep looking up.

Let's not let the minority become the rule. I have only had one person ever come back to me claiming they were a non-native English speaker when having made the mistake over many years of correcting people.

then maybe you should learn that a lot of HN readers are not native English speakers.

None of the people I've encountered--save one--ever came back to me saying they were not.

Very impressive...

In fairness it feels rather arbitrary and counterintuitive of a spelling rule.

Thank you for this public service.


What is interesting is that we are seeing people in other industries shamed for hiring people in unpaid or underpaid jobs and even people shamed for taking those jobs. I wonder if that is something that will ever happen in the software industry.

Open source software is clearly a net positive overall. But is it a net negative for the industry when enterprise developers rely on open source software without demanding that their company provide financial support for it? How is that different from a company relying on free labor from something like unpaid internships?

I think a lot of it truly is passion. Interns have goals that typically don't align with the company they go to work for. That's why they take crap pay. They get to not care and in return the company doesn't have to care about them! Everybody wins, and it's a choice all around. So it's mutually agreeable that the intern will put in some effort and get some reward.

Speaking for my own open source projects: there are already better, cheaper, and easier alternatives to my software. I'm already paying out the ass for something I could just download. I'm doing it for reasons that I can't, or won't, buy. And I know that sounds cliche because it is cliche, but passion is cliche.

We're programmers and hackers here. Just like a hot rod enthusiast spends $200k and 3 years building a car he could buy in a catalogue for $75k; we don't pay as much attention to cost/benefit relationships as we'd like to think.

How is having a passion for a particular piece of software different than having a passion for a specific job or company? Plenty of people have enough passion for an internship that they are willing to take it for free. However society has been discouraging that over recent years because that route is only available to people who have outside financial support and it can easily be abused by employers. Open source software has the same potential drawbacks.

Not every piece of open source software is diminishing the value of enterprise software developers just like not every unpaid internship is reducing the value of entry level labor, however both systems can easily (and even unintentionally) be abused by businesses.

The interns' goal is to learn the trade. The company's goal is to attract talent and/or get an ROI.

In my company an intern conducted research on scalability - creating tools to measure and monitor the software in the meantime. So not only does the company have some neat monitoring tools now, the intern actually found a bottleneck and improved the product. The company is now offering her a contract...

Caring about each other doesn't have much to do with it.

Exactly right. It's an amazing thing in the software industry that developers want to be highly paid themselves while expecting free work from other software developers.

I guess it's the "eat or be eaten" attitude. Give someone a chicken that lays golden eggs, and they will kill the chicken.

I think Hashicorp[0] has nailed the open source enterprise model perfectly.

They make open source software for enterprises and provide services and support around it.

[0]: https://www.hashicorp.com

I love Hashicorp products, however, all of their pricing behind "Contact Sales" is a major turn off.


I don't want to be the subject of a constant drip of emails, calls, and messages from a salesperson who is under constant pressure to hit their numbers and puts me in their sales automation pipeline.

Give me a ballpark estimate, I can go to whomever is needed, and we can go from there: I've never run into a case where "I don't know how much this costs" is an appropriate answer to give a manager.

I agree, but since so many companies still do this, the math must work out such that having sales staff work out the prices results in higher profits even including lost sales to people like yourself who won't jump through the hoops.

The polar opposite of this would be Atlassian, who publishes every price and doesn't negotiate at all. At least it's easy to deal with...

> Atlassian [...] doesn't negotiate at all.

Untrue if you license multiple products from them at scale.

It is quite hard to A/B test. And the salespeople making the bonuses are often part of setting the strategy.

> since so many companies still do this, the math must work out

survivorship bias

"How much is it?"

"How much you got?"

Pretty much this. Not Hashicorp, but another vendor I was speaking to initially gave me a $10,000 a month quote that we got knocked down to $500 a month after some negotiating.

This has been my experience with I'd say 80% of the "Enterprise SAAS" outfits I've interacted with.

Was it Confluent?

It’s likely because they give different pricing based on who’s asking, which is normal for products that aren’t commodities.

Have to agree with you. At least their site says "Get Pricing" vs just "Pricing" and it says "Contact Us".

Hashicorp is one of those companies I want to support, but all of their pricing and enterprise details are behind a "Contact sales" button, so it is really hard to get to know their offerings and their pricing.

I understand we might be too small to matter for them if we aren't ready to dump thousands of dollars per month into their bank account, but it does make me cautious about what is going to happen with them when their investment money runs out.

I really dislike when companies don't put pricing on their website:

1. I need to spend hours or days just to understand whether I should consider it or walk away.

2. It feels like they may give different prices to different customers, and we have to haggle as at an Asian market.

3. If the price is kept secret, who knows what else is hidden from us.

They barely have any revenue and they're struggling.

It's probably one of the next companies to be acquired in the coming years.

I'm one of the founders of HashiCorp. We don't publicly talk about exact numbers, but we broke through 9 figures last year, and a very low % of that is support (it's mostly enterprise software). That also isn't using any accounting tricks (such as a ton of multi-year deals). We're doing very well. We're in no talks to be acquired, either.

I can't prove any of this, you'd have to take my word for it! I guess if we ever go public in N years (not saying we are, but _if_), you'll see for yourself on historical results. :)

Congrats on the success! Given your enterprise traction, though, how do you guys intend to avoid the problem Elastic is facing from AWS?

Fantastic to hear you’re still crushing it.

HashiCorp is doing VERY well revenue-wise. They are not struggling. Where did you hear they are struggling?

(I have no affiliation with HashiCorp.)

Instead of taking either of your words for it, are there any actual direct sources on Hashicorp's earnings?

The only source I found was https://www.hashicorp.com/blog/2017-year-in-review which just says the company "can be successful", not that it is making money.


From Mitchell himself - at the very least they are growing very quickly. For all the complaining about "pricing pages", I think hashicorp is doing the right thing. Focus on selling to the big dogs who can give you a sustainable business and don't cloud your pipeline with smaller shops who wish to shop around.

Hashi should honestly set up a donations page, we'd love to throw them some money because we use their entire suite, but the pricing is clearly geared towards whales.

Do you have inside knowledge of this? I wouldn't be surprised; open-source enterprise software is incredibly hard, but Terraform Enterprise for example is a really solid offering and not too expensive. It is actually very doable even for small (< 5 people) startups.

I have always wondered what this enterprise support means. So if a big company starts using Vagrant, they would dole out 100k for a support contract? I find this hard to believe, but what do I know.

You would be amazed at the spread between big companies who decide "we won't pay $20 for support for something critical!" and other companies who decide "OMG, we can't run /bin/bash without a support contract and someone to yell at if it goes wrong!"

My enterprise company won't even get us Postman Premium to the tune of ~$150 per seat per year...

So here we are slacking postman collections back and forth on our current year mac books.

Did you consider creating a testing app to simulate the customer experience? It can be a single HTML+JS file in a git repo, one for each major feature.

Developing an open source product and trying to build a business around it is a tough ride, as competitors who do less development can offer services more cheaply.

Open source thrives where different entities develop the software together for mutual benefit, without a single company trying to push its roadmap or grab all the revenue.

See Linux, see the various Apache projects, see PHP, etc.

I'd say this is more of a problem with VC funded OSS companies treating a commodity as an investment that needs to be milked where a few individuals working efficiently might otherwise be able to make a very comfortable living doing e.g. support and consulting. Many smaller projects are surviving just fine like this.

But when such projects take on tens/hundreds of millions of funding, it is inevitable that the technology becomes secondary to paying back the investors 10x, no matter what. Ironically most of that kind of funding seems to go towards sales and marketing rather than R&D. Usually core committers are only a minority of employees in such companies and things get worse when that no longer includes the C-level executives.

This creates a lot of friction when inevitably somebody else undercuts such projects in terms of quality, features or price. This is inevitable because all software eventually becomes a commodity. Your fancy pants DB clustering solution might be shit hot this year but you can bet there will be half a dozen projects imitating what you did within years.

This is basically what happened to MongoDB. It's all about diversifying, "adding value", proprietary extensions, etc. for their paying customers instead of doing what they were good at for all their users. And courtesy of the license, copyright transfers and outside contributions dry up, and it's all on the company to do everything in house. Great, as long as there's money, but when that dries up it creates problems. Meanwhile projects like PostgreSQL and others provide more or less drop-in replacements, because they can and because there is interest from users and developers in having that. Apparently they are still doing fine in terms of share price. Best of luck to them, but I probably won't be using it.

Most healthy OSS projects out there have licenses that are well understood from a legal point of view and battle-tested in years/decades of use. Some have quirks that need working around (e.g. the classpath exception for GPL v2), others are fine as is (e.g. Apache 2.0). They also have a plurality of copyright holders spread over many companies that makes re-licensing impractical. Most such projects have a core of developers that are typically employed by a big company taking an interest in the project. Having key people in key projects is of strategic importance to them and ensures their interests are taken care of.

The whole point of OSS is commoditization: pooling resources between companies and individuals otherwise unlikely to collaborate, to get things done better than any of them would likely achieve by themselves. That's why most operating systems these days are largely made up of open source software, much of which has had multiple generations of developers working on it. Most of the build tooling around that, same thing. Apple, MS, Google, they all ship mixes of proprietary code and OSS code. Quite a lot of this stuff can be traced back to the early days of Unix. Almost every big Fortune 500 software company out there pays people to contribute to and represent them in OSS projects that are vital to their business. Even the less popular ones like Oracle actually contribute a lot. That's not charity; it's key to their success.

MS just retired two generations of their in-house browser in favor of an open source project primarily backed by Google, with significant Apple contributions from back in the WebKit days. If you had to choose two competitors for MS, those would probably be at the top of your list. Why did they do this? Browsers are a commodity and they were negatively differentiating with their in-house efforts (as demonstrated by world + dog installing something else). They tried to fix it (Edge) and it didn't work out. All of the surviving browsers are now built around open source projects. I think Edge will probably go down as the last non-OSS browser to be widely used.

I use open source components, libraries, and tools for almost everything I do. I love Github. I share code there myself. Most of the stuff I depend on has neither VCs nor much corporate funding behind it, and it's fine. Some of it does have VC funding and it's also fine. My life would be hell if I had to reinvent all those wheels.

I agree funding OSS development is key, but I don't agree that it needs to come primarily from companies that own the software and sell licenses + support. That's not how most OSS software I use works; it instead thrives on companies using it and paying for people to contribute. Nginx is one of many software packages that I use. I don't think I'll ever pay for licensing or support, because frankly they are relatively unimportant to me. As for the dozens of npm dependencies and their hundreds of transitive dependencies: nope, not a cent. I would probably consider SAAS solutions when it makes sense, as I have done with e.g. MariaDB and Elasticsearch. But mostly OSS works because it is free as in speech and beer.

In the case of nginx, there are dozens of OSS web servers out there. It's just one of many moving parts I need to worry about. I'll pick whatever is cheap and convenient.

Yeah, it's almost as if it makes sense to copyright the API to prevent unlicensed usage of decade long investments.

But apparently, Oracle is a bad guy for doing this, and Google is applauded for stealing Java.

Can't have it both ways.

Gitlab and Hashicorp are going pretty strong. Maybe they get acquired though.

I think the issue here is that you're viewing the merit of open-source entirely based on the immediate profitability of the software.

The benefits of FOSS are largely non-monetary. I know as entrepreneurs and professionals that might be a hard line-of-thought to default to, but I think it is extremely short-sighted (and borderline ignorant) to judge the merits of free software by its profitability.

This is incredibly deft on F5's part. What are the alternatives to F5's products? Cloud products, and home rolled Nginx configs.

I imagine haproxy might get more popular if F5 does anything to hobble the open source part of nginx.

haproxy is amazingly functional and has done a fine job of evolving over the years. At a particular $DEFUNCTCLOUDVENDOR we implemented an haproxy-based ELB-ish solution with home-grown control logic to replace an F5 installation whose configuration size grew unwieldy (the F5 config parser was falling over in their LBs) while we were nowhere near the F5's touted connection/throughput numbers. F5 wasn't really willing to work with us on licensing, so we implemented a system based on haproxy VMs that worked because we were more than a bit overprovisioned on switching capacity and were able to shard that configuration base more effectively over more VMs.

This was 2011, so I'd hope newer F5 gear has gotten past that.

18 months ago I was in a shop that used F5 and it was a nightmare. Some of the best network admins I've ever worked with and they couldn't nail down why the load balancer did the things it did, when it did them. It was bad enough that the config had more exceptions than rules just so that we could try to eliminate anomalies.

I hope NGINX doesn't suffer too much under this new ownership.

It may get embedded tcl.

*If you start a company and open source your core/clients, your product becomes part of AWS, and AWS runs you into the ground.*

In the world I live in, far more commercially supported open source runs outside of AWS than runs ON AWS, and more than AWS has copied and usurped.

> your product becomes part of AWS, and AWS runs you into the ground.

As somebody who has no knowledge about that part of the business (Amazon Web Services in production), could you elaborate on that in a few lines, or point me to some articles? Thank you.

Just today, AWS announced a fork of Elasticsearch: Open Distro for Elasticsearch - https://news.ycombinator.com/item?id=19359602

A few months ago, AWS launched a MongoDB fork.

For context, Elasticsearch merged their proprietary add-ons into the main repos https://www.elastic.co/blog/doubling-down-on-open and MongoDB relicensed to a not-quite-open-source license that compels you to release code for your entire infrastructure if you're running MongoDB as a service https://www.mongodb.com/licensing/server-side-public-license... . If you make your money on support and not open core, it's difficult for Amazon to do anything to you; e.g., Red Hat is doing just fine despite Amazon Linux being a thing.

(There's also https://aws.amazon.com/corretto/ , a long-term-support version of the JDK, because Oracle is getting more aggressive about Oracle JDK licensing.)

The company sold for 650 million so it isn’t a bad outcome. They took open source code made better by tons of volunteers, added their proprietary bits and were successful and sold the company.

Isn’t this a great win?

> They took open source code made better by tons of volunteers

Was the volunteers' impact really that big? Nginx was created by a single developer, who is NGINX, Inc.'s CTO now, and then developed pretty much by Nginx employees only.

Very big, yes! No doubt Igor deserves the majority of the honour for Nginx, but volunteer contributors were a big part early on, in patches, investigations and community support.

Search through the change log for "thanks to" http://nginx.org/en/CHANGES and you'll see a lot of contributions. Two people who stand out as frequent contributors are Piotr Sikora and Maxim Dounin (who went on to actually work at Nginx!).

And this does not show the mailing list discussions and bug investigations or the documentation maintained by the community in the early days.

[This post is not intended to mean this sale is bad, just to highlight some of the awesome community contributors]

Support, certifications, and subscriptions all seem to work fine for a number of companies.

I don't think there is a future in open source enterprise software where trade secrets are hidden from the public.

is there such a thing as available source, closed license?

Yes. Source code is copyrighted and you can make it available under any licence you like, including a proprietary one.

Or use copyright licences which the big tech companies are scared of. Affero GPL or some of the more radical "copyfarleft" licences.

It can be. I am interviewing quite a bit lately, and among startups there seems to be a renewed interest in doing devices, probably because it is harder to pull off the commoditization trick or to re-implement the solution when hardware is involved.

Or keep open source and do enough antitrust that businesses must compete on other axes.

Source-available commercial software might be a more viable route to go.

I guess that signals it's time for nginx users to check out possible alternatives - just in case, if things turn out for the worse.

I can recommend having a look at https://varnish-cache.org/ - while its performance might not be 100% up to par with nginx in some (very, very high-end) scenarios, it has many other fortes that nginx (at least in its FOSS release version; I've never used nginx plus) just cannot match in my experience. Seeing `varnishlog` and `varnishtest` in action is alone worth spending a day or two exploring it.
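
For readers who haven't seen it: `varnishtest` drives scripted mock backends and clients against a real Varnish instance. A minimal sketch in the VTC format (illustrative only, not run here; consult the varnishtest docs for the exact syntax of your Varnish version):

```
varnishtest "Smoke test: a backend response reaches the client"

server s1 {
    rxreq
    txresp -status 200 -body "hello"
} -start

varnish v1 -vcl+backend { } -start

client c1 {
    txreq -url "/"
    rxresp
    expect resp.status == 200
} -run
```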

Given how widely used nginx is, wouldn’t a community maintained fork be the most likely outcome of things changing for the worse?

The problem, of course, when all contributions are "limited" to those within the company, is that the talent and expertise required to keep things going exist only in the company itself, and not in the overall open source community. So when the company behind the tech "goes away", all the knowledge does too. It's for that reason that open source totally under the control of one company is so dangerous. Sure, you have the ability to fork, but in reality such a fork is almost assuredly doomed to fail. You really are beholden to that company.

The antithetical case here has precedent too (node/io.js among others): "org realizes its share of the base tech has outlived its 'welcome' to no real financial benefit". But I guess a more pressing question is: can a corporate entity be "hands-off" enough that its subsidiary has room to work as effectively as it did prior to its acquisition?

I don't have the wherewithal to get at these arguments from any angle (and I'm certain I'm missing many others).

Studious maintenance of the base code and branches/PRs builds tried-and-true code bases, and I hope that we're not torn away from that practice by this buyout; but only time will tell ...

Then again most nginx users are going to be fine with haproxy/traefik in front of good old apache or lighttpd. Or even just apache.

I've replaced most of my nginx instances with traefik, except those just serving static files. The configuration is more straightforward and the Let's Encrypt integration is a lot more elegant than nginx's.

The last two companies I've worked for used træfik and haproxy, and just this last weekend I converted my home clusters from nginx to træfik.

nginx is fine, but there are now other options that work just as well.

According to the public statements, F5 is committing to maintain the current level of resources NGINX has allocated to their open source programs, to keep the same dev team involved, to keep licensing as it currently is, not change any of the repositories on Mercurial and GitHub and to keep the NGINX brand.

Which only means that NGINX will get even better over time.

Every company that has ever acquired anything says this exact same thing. Then the deal closes and it’s back to bottom line numbers.

In this case, how does F5 make or lose money from Nginx?

I guess this[1] image from the F5 article gives an idea of where nginx fits from their perspective. F5 is a huge company (~4,000 employees) that likely already has thousands of boxes running nginx. If you're going to continue using nginx as a critical piece of your business, you might as well secure its future.

[1]: https://www.f5.com/content/dam/f5-com/page-assets-en/home-en...

> committing to maintain the current level of resources NGINX has allocated to their open source programs

What are the contractual consequences if they don't keep that commitment? If the answer is "none", then it's not a commitment.

I’ve seen that a lot of times. What I haven’t seen a ton of is the acquired company still existing 5 years later.

Which is an interesting statement, given that everyone I've run into associated with F5 - local reps or users - has been relentlessly hostile to "open source crap".

NGINX was already paywalling bug fixes once they launched Pro.

proxy_pass for example will only resolve a hostname at the time the configuration is parsed, unless you use a convoluted variable hack. This was a serious issue requiring you to restart your fleet if a backend server changed IPs. The bug fix for this was implemented only in Pro and sold as "DNS for Service Discovery."
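
For reference, the "convoluted variable hack" in the open source version looks roughly like this (the resolver address and backend hostname are hypothetical placeholders):

```nginx
resolver 10.0.0.2 valid=30s;   # re-resolve upstream DNS every 30s

server {
    listen 80;
    location / {
        # Putting the upstream in a variable forces nginx to resolve
        # the name at request time via the resolver above, instead of
        # only once when the configuration is parsed.
        set $backend "http://app.internal.example:8080";
        proxy_pass $backend;
    }
}
```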

Put a load balancer between your reverse proxy and your backends. Problem solved.

What do you think nginx was being used for?

How’s Apache doing these days? Haven’t looked in a while

Nowhere near as fast with concurrent connections.

That is simply untrue. Apache 2.4.x w/ the Event MPM is just as fast and actually has lower latency. That FUD has long, long since been disproved.

Hopefully if it declines we can have a MariaDB-type fork take its place.


But yeah, hopefully the community around it is solid enough to make that a possibility. I'd really prefer to not go back to Apache...

Incrementing all letters : "nginx" => "ohjoy"
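
(For the curious, the shift checks out in a line of Python:)

```python
# Shift every letter of "nginx" forward by one codepoint.
word = "nginx"
shifted = "".join(chr(ord(c) + 1) for c in word)
print(shifted)  # → ohjoy
```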

You win

Pronounced "en-genie", with an oil lamp as the logo?

Oh boy do I have a bridge to sell to you!

We've had good experiences with https://www.envoyproxy.io.

I really like h2o (https://h2o.examp1e.net) as an nginx replacement: very nice software, a lot of features, extensibility using mruby, and amazing perf.

h2o is neat, although its next version has sat in beta for something like 9 months.

The same nginx I was using yesterday, I have a license for tomorrow.

You already have a license for it today, too.

Are you the person behind caddy or is your username just a coincidence?

Edit: sorry, I should have just checked your profile.

Curious, why not envoy (https://www.envoyproxy.io/)? It's open source, used/supported by a variety of companies, and has more features than NGINX.

Envoy doesn't support static file serving: https://github.com/envoyproxy/envoy/issues/378

I'm sure that's not the only feature it's missing compared to nginx. Envoy is not very comparable to nginx, in my opinion... but I also wouldn't reach for varnish as an nginx alternative either.

Envoy is not a web server, it's a load balancer. If you want to serve static files, it will be better to use other software. Furthermore, it is a very complex problem for HTTPS servers, because the Linux kernel doesn't have full-featured TLS support for the sendfile syscall. I implemented a daemon which is used as in-memory storage for static files and a client for our proprietary web server. Trust me :)

The replacement should be written in a memory-safe language. Including the TLS library.

"There is more to life than increasing its speed."

Does it start with an "R" and rhyme with "repeated-so-often-its-been-ground-into-the-dust"?

Go has the advantage in having a native mature TLS stack.

It's not a memory safe language though.

> The replacement should be written in a memory-safe language. Including the TLS library.

No. Memory safety is a vanishingly small subset of all bugs and security problems. PHP is memory-safe, for example. Where has that gotten us?

> "There is more to life than increasing its speed."

Not if you're a computer.

I'm not sure if you are trolling or not. Just in case: all rce vulns in nginx have been memory safety bugs: https://www.cvedetails.com/vulnerability-list/vendor_id-1004...

Read what I posted again.

The number of security vulnerabilities due to PHP's crappiness is two orders of magnitude greater than all of nginx vulnerabilities combined.

Yet PHP is a memory-safe language.

Memory safety won't fix anything by itself, it will just shuffle the shit into some other place.

Now if you're claiming that if you take nginx developers and force them to use Rust they'll somehow start writing better code, then that's a valid point. Although I'm in extreme doubt that it is realistic or even true.

Idunno, it seems fine to me. There are plenty of HTTP daemons that get the job done though, and if you have exacting requirements, there's nothing particularly difficult about writing your own specialized one (generally you don't have to write every feature of NGINX).

Does varnish support SSL? Last I checked, I had to run nginx in front of it for https.

With everything that is going on with open source licensing this certainly can create a bit of worry down the line.

Interesting to also see what AWS is doing in response to some of the more complicated licensing agreements, specifically Elasticsearch: https://aws.amazon.com/blogs/opensource/keeping-open-source-...

The challenge for nginx was that they raised VC capital, so they were in a forcing function: either grow revenue or get acquired. They could have remained an independent OSS product forever, but alas, no more.

Could this simply be about losing hardware sales to this newfangled open source? A big F5 rig used to go for $175k, vs $30k-ish for nginx on plain hardware.


A big F5 rig is basically Linux with a CLI/admin interface on top.

(I work for F5.) This is not true - Linux is really just userspace to run our own data-plane code.

Serious question: what does it actually do in non-corp speak, though?

It's an inline swiss-army-network appliance that can do a fuckton of things at the speed of packets or nearly so up to 100Gbs.

load balancing? check.

stateful load balancing? check.

ssl-termination? check.

HSM-enabled ssl-termination? check.

hardware accelerated ssl-termination? check.

firewall? check.

NG firewall? check.

compiled Lua/Tcl (I forget which) scripts so you can program something insanely complicated? check.

SAML? check.

ISP sized NATs? check.


Plus, way more configuration knobs and options than you'd ever want at each network layer. Like coming up with a load balancing scheme where TLS 1.2 clients using ChaCha20-Poly1305 get sent to a specific pool of servers while everything else goes to another pool, except for clients trying to use QUIC who are coming from a specific range of IPs; they go to yet another set of servers.
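
(A policy like that could be sketched as an iRule, roughly along these lines. This is a hedged sketch, not a tested config: the pool names and IP range are hypothetical, and the event and command names should be checked against the official iRules documentation.)

```tcl
# Hypothetical sketch: route by negotiated TLS cipher, with an
# override for a specific client IP range.
when CLIENT_ACCEPTED {
    # IP::addr ... equals supports CIDR matching on the client address.
    if { [IP::addr [IP::client_addr] equals 203.0.113.0/24] } {
        pool special_pool
    }
}
when CLIENTSSL_HANDSHAKE {
    # Inspect the cipher chosen during the client-side TLS handshake.
    if { [SSL::cipher name] contains "CHACHA20" } {
        pool chacha_pool
    } else {
        pool default_pool
    }
}
```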

Maybe a better way to think of it is that it's a single device for tweaking anything L3-L7 for your server and parts of your network.

(used to work for f5, too, but i'm not sure how specific i can get with the nda).


As the industry [0] continues to put its weight behind NFV [1] and SDNs [2] along with the rise of IDNs [3], do you see network-appliances keeping up the share of the market against those solutions? I believe @Edge network might continue to require these appliances for WAF, Firewall/DPI (and other things I don't know about)... but that'd be a niche?

[0] http://opennetworking.org/

[1] https://www.opnfv.org/

[2] https://opencord.org/

[3] https://www.apstra.com/

Not gonna lie, that question is almost not something I'm qualified to answer, since I was more focused on specific SSL technologies/integrations, but I'll have a go.

Obviously they won't go away, but network appliances definitely won't keep their share because not everyone needs them as SDNs get better. I see the SDN and IDN as mostly solving multivendor integration issues and making it easier to configure at least semi-complicated networks, which doesn't make them a drop-in replacement for many of the problems f5 is trying to solve. For certain network loads they might achieve performance parity, too.

One of the draws for an f5 box for a large customer is that instead of having like 5 vendors or OSS technologies that they have to maintain for load balancing, ssl-termination, hardware-accelerated/-hardened encryption/decryption, SAML, firewalls, etc. you have one company's product (that hopefully has been designed to work well with itself) to do all of that that's configured from one location. If you don't have to worry about that multivendor orchestration headache, then massive network appliances like BIGIP aren't a value add over having a couple vendors.

Another draw for BIGIP is doing things at the speed of packet flow, or nearly so, even for VM containers and even for fully encrypted SSL. If you don't have to care about squeezing every last microsecond of latency or bandwidth out of your 10Gbs or 100Gbs fiber connection, BIGIP isn't a value add over SDN. If you only care a little bit, then an SDN could be way cheaper than BIGIP, because you can configure things to do what you need for lower hardware and support costs.

The people who care about that multivendor issue and performance are always going to have hardware dedicated to networking, even if they use SDNs or IDNs, because they need that dedicated compute to achieve their goals. Sustained 10Gbs connections are no joke, let alone 40Gbs or 100Gbs. Same with tens of thousands of simultaneous SSL connections. All of a sudden you need dedicated ASICs/"raw compute" and RAM to keep up with the firehose of packets.

Plus, network appliances will begin to integrate with SDNs and IDNs. So for customers on the border between needing an appliance and getting by with an IDN or SDN and more manpower, the form of the network appliance will change, but they're still going to have hardware down in their server room, or compute instances in their cloud, dedicated to networking infra.

You want SSL termination? You need compute. Hardware-accelerated or -hardened SSL termination? You need specialized hardware. Firewalls? Compute. SAML? Compute. Complicated NATs? Compute. If you've got a couple of BIGIPs in your server room, your network is complicated enough and/or bandwidth-heavy enough and/or latency-sensitive enough that you're going to have nearly as many racks dedicated to your SDN, so that it has enough compute, as you do for network appliances.

BIGIP isn't valuable because it's a great router or switch. It's great because of how much it does on top of that in a single server/VM, and how well it does it. And most of what it's great at are not things that an SDN will solve. Sure, the configuration tweaking will have parity, and maybe load balancing performance (but having seen how BIGIP achieves it, especially for complicated setups, I kinda doubt it). But if BIGIP integrates with SDNs or with IDNs even just a little, then what could happen is that people on the borderline are just going to get slightly smaller BIGIPs and offload some of the tasks where BIGIP overlaps with SDNs/IDNs, and the BIGIP will just be another node in the SDN. If BIGIP goes in on SDN and IDN, then you might even see people buying larger BIGIPs to orchestrate their overall SDN and IDN.

Thanks a lot!

Pretty much everything. :)

L4-L7 load balancing, distributed DNS, SSL offload, WAF, DPI, data centre firewall and other things. With a nice WebUI to configure all that.

The Tcl iRules allow you to hook into pretty much any stage of the request or the response L4-L7 at FPGA speeds to do whatever you want to the request / response data.

It's a very powerful product.
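For readers who haven't seen one, an iRule is a small Tcl snippet bound to a virtual server that fires on documented traffic events. A minimal illustrative sketch (event name and `HTTP::uri` / `pool` commands are from the public iRules documentation; the pool names here are made up):

```tcl
# Illustrative iRule: route requests to different backend pools by URI.
# HTTP_REQUEST is a standard iRule event; api_pool/web_pool are
# hypothetical pool names.
when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/api/" } {
        pool api_pool
    } else {
        pool web_pool
    }
}
```

The same event model covers things like header rewriting, redirects, and connection-level decisions, which is where much of the product's flexibility comes from.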

> any stage of the request or the response L4-L7 at FPGA speeds

I also work at F5, and used to work on the FPGA. This is unfortunately not true for TCL iRules. The FPGA basically only operates on L2-4, L7 is all software.

There was some talk about doing L7/iRules in an FPGA but prototypes never produced compelling enough performance gains to make it worth it.

I learned something new today - thanks.

"..a nice WebUI"

I challenge that assertion!

> I challenge that assertion!

For simple things it's adequate, but the fact that one can SSH in is also helpful, as there's a RHEL/CentOS base to work on. We were able to get Let's Encrypt working with a bash-only ACME client (dehydrated) in short order.

Heck, run Ansible on it:

* https://www.ansible.com/integrations/networks/f5

* https://github.com/F5Networks/f5-ansible
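For a sense of what that looks like, here is a sketch of a playbook using module names from the f5-ansible project linked above (the host, credentials, pool name, and member address are all placeholders, and parameter names may differ slightly between module versions):

```yaml
# Hypothetical playbook: create a pool and add a member on a BIG-IP.
# bigip_pool / bigip_pool_member are modules from F5Networks/f5-ansible;
# every concrete value below is made up for illustration.
- hosts: localhost
  connection: local
  tasks:
    - name: Create an HTTP pool
      bigip_pool:
        name: web_pool
        lb_method: round-robin
        monitors:
          - /Common/http
        provider: &bigip
          server: bigip.example.com
          user: admin
          password: "{{ bigip_password }}"
          validate_certs: no

    - name: Add a member to the pool
      bigip_pool_member:
        pool: web_pool
        host: 10.0.0.10
        port: 80
        provider: *bigip
```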

And you probably have a point, but it's all relative to configuring all that stuff by hand in the CLI, or worse using some other enterprise vendor's attempt at a "usable" UI ... ;)

It is the corporate internet. The internet exists in the box, as far as your employees / security model are concerned. (As far as the security model is concerned, There Are Always Bugs - this is a feature.)

If it weren't for the need for remote backups, email and such would be hosted there as well, and you could run a company on one of these with no access to the public internet at all. Accounting, finance, etc: all of it.

And that userspace is based on a really old version of FreeBSD.

Or maybe that’s Citrix.

Were you thinking of Checkpoint (Nokia) IPSO[0]?

The earlier versions were based on FreeBSD 2.2.6.

[0] https://en.wikipedia.org/wiki/Check_Point_IPSO

Recent F5 systems are running CentOS 6.

Centos 7 starting around version 14.

Way way back they used to be on BSD. Then they moved to CentOS for cough cough cough (possibly NDAed) cough cough.

It's not a huge deal which because the host OS isn't anywhere in the dataplane.

Citrix NetScaler does use FreeBSD, not sure which version off the top of my head.

It's way more than that. Yeah the base OS on the machine is Linux, but there's a lot of extra stuff they do all the way down to the hardware layer.

So what does the likes of AWS (for example) use for load balancing for data coming into its various private clouds... f5s?

Here's the ever excellent Eric Brandwine explaining what AWS does at L3 wrt VPC: https://www.youtube.com/watch?v=3qln2u1Vr2E

Here's the super versatile Colm Mac explaining what AWS does at L4: https://www.youtube.com/watch?v=8gc2DgBqo9U


Google has been very open about their network infrastructure, here's a nice summary from 2015: https://ai.googleblog.com/2015/08/pulling-back-curtain-on-go... and not mentioned in that blog... their NetworkLoadBalancer, Maglev: https://cloud.google.com/blog/products/gcp/google-shares-sof... (AWS equivalent of which would be HyperPlane: https://atscaleconference.com/videos/networking-scale-2018-l... allegedly based on S3's load balancer).

The short version is "magic".

The long version is, "varying degrees of horror"

It used to actually be a BSD rig with a CLI/admin interface on top.

The control plane can be a Raspberry Pi. It's what allows you to modify the rules for how you route. It's the interface.

The data plane is where you have high-speed logic and data traveling. That's where you do the multiple-100Gbps software-defined networking, and it's crazy fast chips doing it.

And the control plane has interconnects to program the data plane chips with the rules you want. So the data never hits the control plane at all. It's kind of like a water faucet where the knob doesn't touch the water but controls the floodgates.
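The faucet analogy can be sketched in a few lines of purely illustrative Python: the control plane only writes entries into a forwarding table, and the hot path only reads them, so packets never transit control-plane code (in real gear the "table" lives in TCAM/ASIC memory, not a dict):

```python
# Toy model of the control-plane / data-plane split described above.
# All names here are invented for illustration.

class DataPlane:
    def __init__(self):
        self.table = {}  # dst prefix -> output port

    def forward(self, dst: str) -> str:
        # Hot path: a single table lookup, no policy logic.
        return self.table.get(dst, "drop")

class ControlPlane:
    """Slow path: could run on something as small as a Raspberry Pi."""
    def __init__(self, dp: DataPlane):
        self.dp = dp

    def install_route(self, dst: str, port: str) -> None:
        # Programs the data plane; packets never pass through here.
        self.dp.table[dst] = port

dp = DataPlane()
cp = ControlPlane(dp)
cp.install_route("10.0.0.0/24", "eth1")
print(dp.forward("10.0.0.0/24"))      # known route -> eth1
print(dp.forward("192.168.0.0/16"))   # no rule installed -> drop
```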

That may be the case now, but ages ago it was just a fancy BSD box as far as I could tell. I used to be a reseller for their product circa 2000 and I wasn't aware of any proprietary hardware inside, just a really lean networking stack.

We were getting slammed on duty for the product and we were looking at ways of getting the appliances built locally using licensed F5 software, as the software itself wasn't as heavily taxed as the physical hardware. Everything in it seemed fairly commodity, except for the big F5 logo on the front.

It was an appliance that worked fantastically well. One deployment had an uptime of over ten years.

It's not about that. An FPGA-based engineering solution that digests packets at high PPS behind the interface is what it does.

You know, Cisco's IOS XR is built on Linux, but all the real parts are behind their private kernel modules running on ASICs and FPGAs; traffic doesn't even touch the TCP/IP stack of the OS. Cisco ASAs have Celeron/Atom CPUs which obviously couldn't handle the specified loads on their own.

> everything that is going on with open source licensing..

When visiting Taiwan and then Hong Kong in the late 1990s, a local said "there are no software companies here".. everything with money had a hardware sale associated; software was just pirated as much as possible, end of story.. no one would pay for software if they did not have to. It may be a "western" thing to have companies and people that can live from writing only software.

It's also why software as a service has become common everywhere, like Adobe Photoshop and Microsoft Office. It's harder to pirate, plus there are ongoing revenues.

>With everything that is going on with open source licensing

What exactly is going on with open source licencing? Is anybody violating open source licences?

Cash, apparently. I like how Wilson Sonsini advised F5 in the purchase while firm heir Peter Sonsini invested in Nginx corp. Good times for old money in the Valley.

I thought F5 was primarily a Seattle based company? I've known more F5 employees growing up in Seattle than Amazon employees at least.

On another note, F5's poorly written code is the reason TLS 1.0 is considered insecure (using a variant of the POODLE attack), among other major security lapses.

POODLE was also an issue with other ADC vendors, I think it links back to where those ADC vendors got their initial TLS code, how the industry went about implementing it, etc.

F5 has a few-hundred-person office in San Jose, as well as a decent number of people in Tel Aviv.

Sure but it's not engineers who are getting paid in this deal, it's VCs and lawyers.

How does a random person become part of this scene?

You can't. You've just got to be good with money, have a small number of children and die with more money than you started with. If the children repeat that then after a few generations they can be part of the "scene".

Plenty of money for (from what I understand) essentially a service company.

Service company with very valuable IPR..

Isn't Nginx open source?
