It always kills me when very successful companies don't buy software from other companies.
I remember being at a lunch with a prospective client who really loved our technology. About halfway through, he said he would really love to purchase our software, but the CEO doesn't allow them to use anything but OSS. What do they make? Non-OSS software.
Just blows my mind.
Qt is pushing hard for commercial licensing (which I heard prevents you from using the open-source version), putting L/GPL FUD on their websites, and trying to track users of their installers more.
EDIT: I've just realized that I want a revenue-limited trial, rather than a time-limited one. I basically want the Prosperity license, but with the ability to say "you have to pay me if your company makes more than $100k in annual revenue". Is there a license like that?
I've emailed the License Zero people, hopefully they'll do something for that.
The "that" in question was "copyleft if your project is open, pay us if your project isn't open". For that, try the AGPL or GPL, depending on your use case and customers, and then sell alternate licenses for people who don't want to make their own code open.
> Maybe the Prosperity license?
That isn't an open source license, despite its efforts to be ambiguous on that front. That and the even worse "commons clause" are exactly the kind of license that motivated the latter half of my original comment at https://news.ycombinator.com/item?id=24005833
What's wrong with a license that's non-OSI "open" but gets developers paid from large companies to develop otherwise open software?
> The model of "copyleft if your project is open, pay us if your project isn't open" is one where I have no problems or concerns, and will happily use the open version and recommend that people building something proprietary purchase a paid license. Nor will I typically worry about the motives or future of the project unless I have some other reason to. And the KDE Free Qt Foundation means I never have to worry about Qt going proprietary.
Software under a proprietary license is something I can't build other Open Source software on top of and expect other developers to use and collaborate on. I don't want a forked ecosystem of proprietary-with-source-available software; I want to actually collaborate on Open Source software.
With Open Source, I'd feel confident that if we had to manage it ourselves, or fork it and add a patch, or get a third-party to develop a patch, or work with a dozen others with the same needs we have and collaborate on it, we can do so. It's reasonable to build an ecosystem or community or company around. You cannot replicate that with any non-open license; by the time you're done granting all the necessary rights, what you'd have is a non-standard-but-Open-Source license, at which point you'll get more traction if you use an existing well-established Open Source license.
I don't really care about encouraging the development of more proprietary software, whether or not it happens to have source available. There are already well-established models for getting people to pay for proprietary software. If someone is looking for a funding model for Open Source, and what they find is "turn it proprietary and generate FUD that it's as good as open", that's a failure. And when people are looking for Open Source and they find proprietary-with-source-available, it undermines decades of careful work and explanations by the FOSS community, and generates substantial confusion.
It's your software, and ultimately your choice how to license it. Various companies have tried the "source-available but still proprietary" model. Just please don't call it open or contribute to the FUD around proprietary-with-source-available licensing.
Speaking from experience, when encountering software under a proprietary-but-source-available license that tries to act like it's open, the response from companies that actually deal in Open Source is not "Ah, OK, let's pay them and use it", it's "yikes, get this away from us, how did this even end up in consideration, let's either use something Open Source or use something actually proprietary, not something by someone who is either confused about the difference or hopes other people will be". (The set of engineers and lawyers who deal with software licensing professionally, at various companies, tend to talk about it with each other.)
I develop multiple popular libraries that thousands of people use, yet I've never seen a single cent from them, which is fine for me because I don't develop them to make money. However, it's really hard to foster an ecosystem when companies who extract millions of dollars of value from FOSS don't feel like they need to give back.
Many ways of monetizing it don't misalign incentives. As a user, I don't value support less just because software is more reliable; on the contrary, I trust the software in higher-value contexts because it's reliable, and in those contexts I need the support more. I don't value "host this for me" less just because the software is easy to install and configure (because I still don't want to be the system administrator if I don't have to). And "please develop this feature for me" has great alignment of incentives.
> However, it's really hard to foster an ecosystem when companies who extract millions of dollars of value from FOSS don't feel like they need to give back.
You're a lot less likely to get paid for software that's under an all-permissive license (e.g. MIT or Apache or BSD). It's unfortunate that so much of the ecosystem has settled around permissive licenses; with such licenses, your best strategy for making money may be "use the software to get hired somewhere or to build reputation for consulting". There's a reason companies love permissive licensing, and subtly (or unsubtly) discourage copyleft. Strong copyleft gives you more options to monetize something, either now or in the future.
That said, I also do agree that there need to be more good ways of funding Open Source.
In general, it's been my experience that the closed-source enterprise-only crap that most companies push for is exactly that: crap. I suspect it's because those features are treated as a business expense, and thus built to keep costs low. Almost every time, those features are underwhelming and buggy.
If it's OSS, at least I can contribute a patch; if the thing is popular, likely someone else has already fixed it.
Enterprise support is a fucking joke; they will delay, delay, and delay. If you push hard enough, they say "it's on the roadmap" without giving any guarantee of when it will be fixed. The only time enterprise support has really worked well is in my org that got the best support package for GCP. GCP's support for urgent issues and product feature requests has been somewhat reliable and predictable. Much more than literally any other enterprise vendor (I'm looking at you, Okta).
Can you give an example of a product feature request that succeeded via that support channel?
- we requested a way to prohibit the provision of ILBs on shared VPC subnets without explicit grants; GCP introduced a role for that
- we had issues with the number of compute instances being too high on our VPC and reaching limits because of the number of VPC peerings. In the past, every GKE cluster created a new peering. We have a bunch of GKE clusters, and as we added more, the max number of instances we could provision was reduced significantly. GCP introduced and fast-tracked a feature that enabled all GKE clusters to share a single peering rather than create one peering per GKE cluster.
There’s a bunch more. But on this front I have been a happy customer.
Even for code listings, I at the very least had to buy the medium they came on.
I think there is a valid point you're making for smaller companies that are providing both open-source and commercial versions of software, but I don't think Nginx is a great example of that.
In the particular case of nginx, I can tell you their reputation is not great in adapting to the users’ needs.
But more abstractly, I don't actually agree with that sentiment. I see more of a responsibility to give back in the form of patches and collaboration than throwing $ at the problem. I see the nginx approach to open source simply as a business tactic, no different from Windows Home/Pro customer segmentation, except Home is free for tactical reasons to kill off other competition. It is a calculated business move; if your business model sucks (which it obviously did not in nginx's case), that does not imply others are acting less than ethically or that they should pay you out of pity. (That said, it might be strategically important for them to keep your head above water so you survive, for their own benefit as your customer, but that'd be a different angle.)
I suppose the difference between free software vs open source is also relevant to this discussion, and I could relate to your sentiment when facing the former much more than the latter.
implying that just because your open source project is being used, it is entitled to funding that pays the project maintainers' bills.
I think patches and contributions are a form of bill paying.
(The above assumes the concern is that funding is the problem today; I don't quite see it that way [for instance, I strongly suspect Nginx has made more money than DBX so far, so who are we to say who's been more successful; market cap ain't everything], but it's a hypothetical to think about.)
Moreover, supporting a project does not equate to supporting its existing maintainers. It could mean taking some partial ownership, including the review side, and having some developers on your own payroll. Seems like that's how the big projects are done most of the time. The Open Core model we are focusing on is a niche, and arguably more akin to freemium products than free software as a thing with communal ownership.
OSS is not a business model; it more closely matches charities and non-profits, and runs on the donations and altruism of its users.
I find it annoying that people here keep saying that a company _should_ pay for their open source software usage just because they have money to do so. They don't have an obligation. They could donate - and some do - but it is in no way required of them, regardless of how much value they derive from using said OSS.
Open-core projects, which have a somewhat useless core and a paid-for 'enterprise' version, are not, in my eyes, proper OSS projects, but instead a way to market a proprietary product.
Linux via the ongoing lawsuit with BSD back then, and GCC because UNIX vendors started charging for their compilers, with GCC being the only alternative available.
However, everyone needs to pay their bills, hence the push for non-copyleft licenses; thus, in a couple of years, GPL-based software will either be gone or under dual licenses.
You already see this happening with BSD/MIT-based alternatives to Linux in the IoT space: NuttX, RTOS, Azure RTOS, Zephyr, Google's Fuchsia, ARM's mbed, Arduino, ...
What on earth does this tirade even mean? Every business lives 'forcefully' and fights for survival. Sometimes it comes with values, e.g. we don't use child labour in the DRC to mine thallium, fairtrade, organic, etc. OSS is one of those values.
Is there a business that lives 'effortlessly'?
Because then there is going to be a very thin selection available.
As Churchill once said: "If Hitler invaded hell I would make at least a favourable reference to the devil in the House of Commons."
Not all companies, of course; and to be clear, I think such a company structure is a problem itself and agree with you.
As for investing in open source, Dropbox is trying to do that when possible; for example, we (along with Automattic) sponsored HTTP/2 development in Nginx.
- Open-Core::: Features are not added to core, as they want people to upgrade.
- Consulting::: Ease of use is ignored, as if it's too easy people won't need consultants.
- Sponsoring Goals::: Software is almost held at ransom, until goals are reached.
The best way to help open-source software is to donate or contribute code... if you're trying to maximise profits, then just make it proprietary.
Some problems can only be made so easy. Some problems require custom work. Sometimes you need paid support not because the product is low-quality but because you need to know that you can call someone at 3am because your service is down. There are lots of reasons to have consulting.
> - Sponsoring Goals::: Software is almost held at ransom, until goals are reached.
You're assuming the work would get done one way or another. Sometimes people have many other things they could be doing, and they need to justify spending more time on a project than they already do. Or sometimes, people have a fixed amount of time but they're happy to prioritize things people want and will pay for.
(No argument about open-core; that definitely has problems.)
Other great approaches include hosting the software as a service. Depending on the nature of a project, many people may want a service whose primary value proposition is "we'll host this for you so you don't have to maintain and administrate it".
Paid support surely is, as you say, about calling someone at 3am and having them look into an incident.
From my experience, that's not about helping you get the most out of the product, and a hand in tailoring it to your needs - that's the consulting part, and is usually paid for separately (and at much higher rates).
I totally disagree. Red Hat patches/maintains things regardless of whether people pay for it. Everything is always available open source. There are numerous derivatives of RHEL that get these for example.
The money you pay for Red Hat stuff is for support. There are always free-as-in-speech and free-as-in-beer alternatives of red hat products.
So much of what happens in sufficiently large organizations has nothing to do with profit maximization. Think confederacy of dunces, not a conspiracy of greedy evil geniuses.
I'm a bit confused by this - I work for HAProxy Technologies and we do have an enterprise product. Many of our customers contribute code directly into the community and we backport those features into the latest enterprise stable version. This means they do not have to wait until the next shipping cycle to take advantage of a new feature. There's also a large community & knowledge base around HAProxy.
Your reasoning may be right when dealing with "closed source enterprise software" but it doesn't line up when we start talking about open source/open core.
I don't know what their volume licensing is like, but at $2500/server list price, costs add up quickly.
If you buy something, or worse, have to pay license fees on a regular basis, your earnings will be smaller.
We live in a world that is driven by economic growth so the ultimate goal is to maximize profit.
Of course this has a moral aspect to it as well, and I see it, but in this case I think it is not outrageous enough to be something on the scale of a scandal.
Many businesses use ideas or products for free to start a successful enterprise that earns a lot of money.
We Germans are very risk-averse (I hate that sometimes).
Like selling tap water in bottles.
However, for-profit companies are not here to do what's "correct"; they're here to make money for their investors. If I had decision-making abilities at Nginx, I'd be conducting a comprehensive review of the free OSS offering and cutting back its features and overall value with extreme prejudice.
Dropbox never paid because it COULD avoid paying. If you have an enterprise, paid version of your OSS product, it has to be impossible for an enterprise to use it for free.
Why? Most enterprises, especially ones that aren't tech firms, are going to shell out for enterprise support even if there are no additional features. Crippling the community version doesn't necessarily help enterprise sales; it can reduce overall mindshare, reducing enterprise traction, or, worse yet, mean that a third-party downstream edition with richer open-source features becomes dominant and its creator gets "your" enterprise support contracts.
I don't feel this is true.
Also, an enterprise that's large would want some features that are irrelevant to a small shop. For example, single-sign-on integration with various providers.
Just think about the commercial success of SUSE Linux.
While partially true, this is overly reductive. Companies can and often do take actions that serve goals beyond "increase upcoming quarterly profits".
And besides, if that were to happen, people would just get behind some other open source web server and push that.
If said company has an unknown track record, then doing business with them is risky.
What if the company goes out of business in the near future? Or gets acquired (actually, I think a lot of infra companies' end goal is to get acquired)? What if they raise the price all of a sudden? How extensible/customizable is their solution?
Trust is the key here. If I am in the position to buy software from somewhere and cost isn't the primary concern, the money would go to a known/stable figure in the industry.
Also, if you're going to China, take a disposable phone and a laptop that is clean and can be wiped on return.
Even if your SaaS was OSS, they could still deny you access, as you're inhabiting their server, not your own.
Just look at the JS ecosystem. Everything is for free. But also shitloads of crap. A lot of libraries left unmaintained.
If you're going to sell software that has an OSS variant, make sure the commercial experience actually outshines the free one.
That said, you'd be amazed at how much of the man pages is written by Red Hat but isn't attributed, so nearly everybody on every distro benefits from our documentation without realizing it.
Also if I can ask, is your product also closed source (in any nature at all), but made with open source components?
Is this trend happening primarily because devs are pushing for gRPC over REST? What benefits does Envoy offer over Nginx if you're still a REST-based service? I am not fully convinced of the operational overhead that Nginx brings.
From my point of view Nginx feels "old". It's a C codebase without a great amount of abstractions and interfaces, and instead having a bunch of #ifdefs here and there. Unit-tests and comments are not to be found. Build via autotools.
Envoy looks as modern as it gets for a C++ codebase - apart from maybe the lack of using coroutines which could be interesting for that use-case. It uses C++14, seems to be extremely structured and documented, has unit-tests, uses Bazel for builds, etc.
So I think the experience for anyone being interested in working on the code will be very different, and people that prefer the style of project A will have a very hard time with the style of project B and the other way around.
"As modern as it gets"? Very, very far from it. Everywhere I looked it was all-over public virtual functions. It looked, more than anything, like Java, which is essentially, more or less, C++92 with some bells on.
The code might be OK, but, as with typical Java code, everywhere I looked was boilerplate, hardly any of it doing any actual work. I would hate for somebody to look at Envoy and think that was what good, modern C++ code looks like.
Virtual functions are a good answer to certain problems that come up once in a while; in C, for such a problem, you would use function pointers. Inheritance is a pretty good answer to certain problems that come up a little more often.
But neither is a good answer to any organizational need, and a big project that reaches for virtual functions and inheritance as first resort makes me shiver.
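To make the comparison concrete, here's a minimal sketch (all names invented for illustration) of the same runtime-dispatch problem solved both ways:

```cpp
// Minimal sketch of the comparison above; names are invented for illustration.
#include <cstdio>

// C-style customization point: a plain function pointer.
typedef void (*on_event_fn)(int code);
void run_with_callback(on_event_fn cb) { cb(42); }

// C++-style customization point: a small virtual interface.
struct EventHandler {
    virtual ~EventHandler() = default;
    virtual void on_event(int code) = 0;
};
void run_with_handler(EventHandler& h) { h.on_event(42); }

struct Logger : EventHandler {
    void on_event(int code) override { std::printf("event %d\n", code); }
};

int main() {
    // A capture-less lambda decays to a plain function pointer.
    run_with_callback([](int code) { std::printf("event %d\n", code); });
    Logger logger;
    run_with_handler(logger);  // virtual dispatch picks Logger::on_event
}
```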
Is this unanimously good? I've heard both praise and horror, never used it myself.
Bazel requires a lot of upfront effort, but the power of a (programmatically accessible/modifiable) dependency graph and a common build/test system across all the languages is very hard to overestimate.
Are you sure?
This is never going to end as more things shift towards being core APIs that allow you to write code instead of configure things. It's not even configuration-as-code, it's just code managing configuration files.
edit: I think my comment comes across as maybe kinda rude. My beef with Envoy is that the documentation is _extremely_ complex, and I've repeatedly asked 'How do I get started with xDS?' and been pointed to the spec, which took some time to read through, and when I asked others about how to set up LDS/RDS/CDS/SDS I was met with something like 'what are these things...? just use xDS,' which led me to a lot of frustration. This has been my experience each time trying to approach Envoy and xDS.
I ended up reading the specs and found them very clear, and wrote my own xDS implementation: https://github.com/jrockway/ekglue/blob/master/pkg/xds/xds.g... I did this after reading the source code for the most popular xDS implementations and finding myself horrified (you know the popular xDS implementation I'm talking about). Now I have a framework for writing whatever xDS server I desire, and it can be as simple or as complex as I want it. For example, for my use cases, I'm perfectly happy with a static route table. It is very clear what it does, so I have that. What annoyed me was having to configure the backends from Kubernetes for every little service I wanted to expose to the outside world. So I wrote ekglue, which turns Kubernetes services and endpoints into Envoy clusters and Envoy cluster load assignments. This means that I never have to touch the tedious per-cluster configs, and still get features like zone aware load balancing. And I don't have to take on complexity I don't want -- the woefully under-specified Kubernetes Ingress standard, service meshes, etc. (I also plan to use ekglue for service-to-service traffic because xDS is built into gRPC now... just haven't needed it yet. It's great to use the same piece of software for two use cases, without having to maintain and read about features I don't need.)
TL;DR: take a look at the spec. It's really well thought out and easy to implement. Just don't cut-n-paste from Istio because they got it really wrong.
I looked up the current state of the xDS code, and there's a lot more of it than I remember. The EndpointDiscoveryService based gRPC balancer is here: https://github.com/grpc/grpc-go/blob/master/xds/internal/bal.... It appears to balance similarly to Envoy; locality-aware with priorities.
(That doesn't surprise me, because I don't remember any field in the ClusterLoadAssignment proto that sends load information back to the client. Health, yes; load, no. But I could easily not remember it being there because it hasn't been something I've tried to implement.)
But yeah, the way to look at endpoint discovery is like DNS. DNS can return multiple hosts, and clients will spread the load by picking one at random (sometimes, if you're lucky). EDS is similar to this, but is a streaming RPC protocol instead of connectionless UDP, so it's theoretically easier to operate and monitor.
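A toy sketch of that analogy (all names invented): DNS answers one query with a set of addresses, while an EDS-style API is a subscription that pushes a fresh endpoint set whenever membership changes:

```cpp
// Toy sketch of the DNS analogy above; all names are invented.
#include <functional>
#include <string>
#include <vector>

using Endpoints = std::vector<std::string>;  // "host:port" strings

// DNS-style: one-shot resolution; clients must re-query to notice changes.
Endpoints resolve_once(const std::string& service) {
    return {"10.0.0.1:8080", "10.0.0.2:8080"};  // stubbed answer
}

// EDS-style: a subscription; the server pushes a fresh endpoint set on every
// membership change over a long-lived stream (easier to operate and monitor).
void watch_endpoints(const std::string& service,
                     std::function<void(const Endpoints&)> on_update) {
    on_update(resolve_once(service));  // stub: deliver one initial "push"
}
```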
The other xDSes do more things -- CDS lets you discover services (that EDS then gives you endpoints for). RDS lets you make routing decisions (send 10% of traffic to the canary backend). SDS distributes TLS keys (and other secrets). ADS aggregates all of these xDSes into one RPC, so that you can atomically change the various config versions (whereas requesting each type of stream separately would only be "eventually consistent"; atomicity matters when a route table change adds new clusters: the client is guaranteed to see the new cluster at the same time the route to it becomes available).
It is all somewhat complicated but very well designed. This reminds me that I want to look more deeply into gRPC's support of xDS and add some integration tests between gRPC and ekglue.
One really starts to feel old at such occasions.
- treat messaging as a first class concept, not something to hide & abstract away.
- do not attempt to implement polymorphism in a messaging protocol. Do not bind your messaging protocol to a programming language's type system (they serve different purposes).
- bake fundamental monitoring & maintainability concepts into the protocol (e.g. intermediaries must be able to understand what responses are errors).
- have a well understood, simple backwards and forwards compatibility story.
All of this is stuff we didn't understand in RMI or CORBA or SOAP etc. REST was a great wakeup call, both in simplicity and some of the messaging protocol concepts (such as error modelling). It is missing the application level binding - there's just no good reason why you wouldn't have a statically checkable method/request/response type binding.
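To illustrate what that binding buys you (types invented for the example): the compiler can reject a call whose request or response shape is wrong, which a hand-rolled REST client only discovers at runtime:

```cpp
// Invented types illustrating a statically checkable method/request/response
// binding: mismatched shapes fail at build time, whereas hand-rolled
// JSON-over-REST calls only fail at runtime.
#include <string>

struct GetUserRequest  { int user_id; };
struct GetUserResponse { std::string name; };

class UserService {
public:
    virtual ~UserService() = default;
    // The method signature *is* the contract; generated RPC stubs look
    // roughly like this.
    virtual GetUserResponse GetUser(const GetUserRequest& req) = 0;
};
```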
I am a bit wary of whether gRPC will go overboard again in complexity. We'll see.
Apparently DCE IDL now comes in proto files.
What newcomers did not bother to understand is why we were using those formats in the first place.
Rest assured, maybe in 20 years we will be introducing this cool RPC protocol based on YAML or something. Thankfully, by then I should be retired.
And even within IDLs, we've made major progress. Compare the mess of SOAP's data type system, various attempts at inheritance and polymorphism in SOAP and CORBA, pointers in CORBA etc.
Protocol Buffers are good enough to make us forget the traumas caused by CORBA.
Operationally, there are many differences (esp. around Observability) but if I were to distill it down to one thing it is a clean separation between data- and control-plane. This basically means that it was designed to be automated and the automation layer (xDS) itself runs just like any other normal service in production.
Who knows. All I know is, it’s exhausting, and ultimately it’s terrible for the end user. We have no idea what we’re doing when we pull in a new dependency like this. There’s tiny corner cases we don’t think about, and those get passed on to the user.
Innovating is fun, but exhausting in aggregate.
(Disclosure: We use Envoy as part of Ambassador, and so of course we're big fans!)
What I wonder about is how you distribute the traffic at the higher level. I imagine there are separate clusters of Envoys to serve different configurations/applications/locations? How many datacenters does Dropbox have?
I was running a comparable setup in a large company, all based on HAProxy, there was a significant amount of complexity in routing requests to applications that might ultimately be in any of 30 datacenters.
* First level of loadbalancing is DNS. Here we try to map the user to the closest PoP based on metrics from our clients.
* User to a PoP path after that mostly depends on our BGP peering with other ISPs (we have an open peering policy, please peer with us!)
* Within the PoP we use BGP ECMP and a set of L4 loadbalancers (previously IPVS, now Katran) that encapsulate traffic and DSR it to L7 balancers (previously nginx, now mostly Envoy.)
Overall, we have ~25 PoPs and 4 datacenters.
He also contributed a lot to the Envoy migration, migrating our desktop client to it and adding perf-related things like TLS session tickets' lifetime to SDS.
"we have an open peering policy"
That's a bit of a lie, given you have a minimum 50 Mbps requirement before you even consider a peering request.
I would call that Selective, not Open!
Caddy is the closest, but still not near enough.
There is this small segment of the market that we operate in that requires thousands of TLS connected domains to be hosted behind a dynamic backend. It's services like Tumblr, Wordpress.com, or any other hosting service where you can get a "custom domain" to point to your own blog or site.
NGINX - No.
Apache - Nope.
Caddy - Can do (but needs lots of workarounds)
Envoy - Nope.
Everyone focuses on a few hand-coded domains and no automatic TLS. Maybe this part of the market is too small anyway. Sigh.
I'm just saying that it's not something that is documented well or purpose built for that scenario.
The SDS (Secrets Discovery Service) supports this, and is touched on in TFA: https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overv...
You provide a gRPC service that can return the keypair needed for any host, with host config also being dynamic.
> A traditional problem with SSL in combination with name based virtual hosting has been that the SSL connection setup happens before the HTTP request. So at the moment lighttpd needs to send its certificate to the client, it does not know yet which domain the client will be requesting. This means it can only supply the default certificate (and use the corresponding key for encryption) and effectively, SSL can only be enabled for that default domain. There are a number of solutions to this problem, with varying levels of support by clients.
Then, the best approach seems to be the following:
> Server Name Indication (SNI) is a TLS extension to the TLS handshake that allows the client to send the name of the host it wants to contact. The server can then use this information to select the correct certificate.
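With OpenSSL, for instance, that looks like registering a servername callback and swapping in the right certificate once the client's SNI arrives; a minimal sketch, where find_ctx_for_host is a hypothetical lookup against your cert store:

```cpp
// Minimal sketch of per-domain certificate selection via OpenSSL's SNI
// callback. find_ctx_for_host is hypothetical; a real server would consult
// a certificate store or an ACME-backed cache.
#include <openssl/ssl.h>
#include <string>

SSL_CTX* find_ctx_for_host(const std::string& host);  // hypothetical lookup

static int sni_callback(SSL* ssl, int* /*alert*/, void* /*arg*/) {
    const char* host = SSL_get_servername(ssl, TLSEXT_NAMETYPE_host_name);
    if (host == nullptr)
        return SSL_TLSEXT_ERR_NOACK;  // client sent no SNI: use default cert
    if (SSL_CTX* ctx = find_ctx_for_host(host))
        SSL_set_SSL_CTX(ssl, ctx);    // swap in the per-domain cert/key
    return SSL_TLSEXT_ERR_OK;
}

void install_sni_callback(SSL_CTX* default_ctx) {
    SSL_CTX_set_tlsext_servername_callback(default_ctx, sni_callback);
}
```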
Our initial use-case was ingress for Docker Swarm - after a foray into k8s with the "traditional" nginx ingress and its rather hackish Let's Encrypt contraption.
I briefly looked at Caddy 2 - but wasn't able to find any out-of-the-box tricks for listening to Docker messages and dynamically configuring sites in a sane way.
Do you use custom code and configure Caddy via the API?
Like this? (Am not a Docker user, but I know this is an insanely popular solution) https://github.com/lucaslorentz/caddy-docker-proxy
There's also a WIP ingress controller: https://github.com/caddyserver/ingress/
I don't think hitch (formerly stud) supports acme challenges, but that's where I'd start.
That's... definitely a stretch.
> Nginx performance without stats collection is on par with Envoy, but our Lua stats collection slowed Nginx on the high-RPS test by a factor of 3. This was expected given our reliance on lua_shared_dict, which is synchronized across workers with a mutex.
That factor of 3 is quite large to me. Maybe you put all your stats in lua_shared_dict?
You don't need to synchronize the stats every time. Since collection typically happens at a per-minute frequency, you can keep the stats in a Lua table and synchronize them into lua_shared_dict once every 5-10 seconds.
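Sketched in C++ terms (the OpenResty analogue would be a per-worker Lua table flushed into lua_shared_dict by a timer), the pattern looks like this; names are illustrative:

```cpp
// Sketch of the pattern described above: increment worker-local counters on
// the hot path (no lock), and periodically flush totals into shared,
// mutex-guarded storage (the role lua_shared_dict plays in OpenResty).
#include <mutex>
#include <string>
#include <unordered_map>

thread_local std::unordered_map<std::string, long> local_counters;

std::mutex shared_mu;
std::unordered_map<std::string, long> shared_counters;

void incr(const std::string& key) { ++local_counters[key]; }  // per request

void flush() {  // every 5-10 seconds per worker, not per request
    std::lock_guard<std::mutex> lock(shared_mu);
    for (const auto& kv : local_counters)
        shared_counters[kv.first] += kv.second;
    local_counters.clear();
}
```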
It looks like the Nginx being compared is configured the way it has been for years and is not up-to-date. The company I worked with used a single virtual server to handle all traffic and routed it with Lua code dynamically. The upstream is chosen by Lua code too. There is no need to reload Nginx when a new route/upstream is added. We even implemented an 'Access Log Service'-like feature so that each user can have her favorite access log (by modifying the Nginx core, of course).
However, I don't think this post is incorrect. Where Envoy surpasses Nginx is that it has a more thriving developer community. More features have been added to Envoy than to Nginx in recent years. Not only that, open discussion of Nginx development is rare.
Nginx is an old, slow giant.
* C interface for stats, so we would have access to them from C code.
* Instrument all `ngx_log_error` calls so we would have access not only to per-request stats but also to various internal error conditions (w/o parsing logs.)
That said, we could indeed just improve our current stats collection in the short term (e.g., as you suggested, with per-worker collection and periodic lua_shared_dict sync.) But that would not solve the long-term problem of lacking internal stats. We could even go further and pour all the resources that went into the Envoy migration into nginx customizations, but that would be a road with no clear destination, because we would be unlikely to succeed in upstreaming any of that work.
Any pointers on how to achieve this for someone just starting out with Lua and OpenResty? I have the exact same thing (lua_shared_dict) for stats collection, and would love to learn a better way.
It's a pity that the community never evolved with nginx's growth and success.
* for deployment we do not need to maintain a pool of stateful boxes with files on them and keep these files in sync.
* for development, engineers now have a programmatic interface for managing their static assets.
Compared to NGINX though, it was relatively easy to push these fixes upstream. Community is very welcoming to outside contributions.
Things that may've helped:
-- Configuration definition (e.g. protobufs.)
-- More focus on observability: error metrics (instead of logs), tracing, etc.
-- gRPC control plane.
-- C++ module development SDK.
-- (ideally) bazel.
Some dataplane features like gRPC JSON transcoding, gRPC-Web, and http/2 to backends.
I use Caddy a lot and it's perfectly fine for my scale, but at Dropbox's scale, maybe Go wouldn't be enough for the ingress part?
I am more curious what makes the author think Caddy "might not be the best choice for a general purpose ingress/egress proxy" (there were no other qualifications to that statement, but no evidence to support it either).
Part of reducing dependencies in my own software was a conscious decision to minimize future CVE exposure.
Might as well use C then (with some hand-written asm sprinkled in where the compiler gets confused and doesn't see an obvious optimization) for that. And I'm not even being sarcastic here (I wish I was though)
* Lyft (of course)
Plus the support from major cloud providers: Google, Microsoft, and Amazon.
- Nginx served us well for almost a decade. But it didn’t adapt to current development best-practices
- Operationally Nginx was quite expensive to maintain
- Observability and monitoring
I've always looked at it (esp. with the STL) as kind of a "Swiss-Army-Chainsaw": you were going to shoot your eye out. Maybe that view is old and things are better - but I learned a while back that sending a young gun into a C++ application's code-base would lead to a world of pain.
Maybe that learning is no longer accurate? What do you think?
Platform wars are over-ish. We have the same compile targets. What they call "Undefined Behavior" is relegated to ... well ... platforms we are not supporting.
C is fast - simple - easy(ish) to learn, and easy to "fuzz" in testing.
I can't speak to Lua - but C++ looks like a mine-field (to me).
Why do you declare that C++ is "better"? (Seriously interested - I don't even know enough these days to have a debate. I just gave up on the C++ hell-hole years (decades?) ago, and maybe should have kept up)
Overall our experience is that C++ code is smaller, simpler to write/read, and has a smaller chance of mistakes than equivalent logic written in C.
Of course, many of these points are relevant only for relatively modern C++ (C++11/C++14 or newer); before that, the cost/benefit ratio was much less clear.
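One concrete example of that point: RAII removes a whole class of cleanup bugs that equivalent C has to handle by hand on every exit path. A minimal sketch:

```cpp
// Minimal sketch of the RAII point: the custom deleter guarantees fclose runs
// on every exit path, which equivalent C code must remember to do manually.
#include <cstdio>
#include <memory>

void read_config(const char* path) {
    std::unique_ptr<FILE, int (*)(FILE*)> f(std::fopen(path, "r"), &std::fclose);
    if (!f)
        return;          // nothing opened, nothing to clean up
    char buf[256];
    while (std::fgets(buf, sizeof buf, f.get()) != nullptr) {
        // ... parse a line; an early return here would still close the file
    }
}   // fclose runs automatically here
```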
In my case - C (as a language) had a smaller footprint, and if the targets were limited, it was easier to learn, to lint and to code-inspect.
Admittedly, this was mostly before C++14. I guess this might be a case of "once bitten, twice shy".
I value yours.
You have different experiences than I do - so our conclusions will differ.
That said - I "feel" as if C++ is a dangerous serpent of a language. Maybe I need to spend 6 months re-acquainting myself in complex environments with more developers than just me, and re-evaluate that presumption on a medium-size project.
This has to do with nginx not having the required features (features locked behind the paid edition, or gRPC nonexistent), forcing them to develop plugins in Lua to compensate (the only supported language), and Lua is too slow for this sort of stuff (processing Gbps of traffic in real time is no joke).
EDIT: BTW -- I am not going to argue with Lua throughput. I'm still not sure what the thinking was there (maybe time-to-prototype?) - but C plugins run faster than Apache's do. By, like, a lot. (And I like Apache! ...Having used it since 1996)
Of course, you can serve static assets using Envoy, and maybe even connect a FastCGI app without very much hassle. But it's quite a bit less straightforward.
This is also a good web server. Configuration is done in YAML. Also, it claims to be very fast.
Besides that, it looks like the move was significantly driven by gRPC and protobuf. No surprise here; gRPC really doesn't work well with plain HTTP infrastructure. Once a company starts using the Google stack, they have to move to more of the Google stack to make it usable.
As for the middleboxes, using gRPC-WEB allowed us to switch Desktop Client App to gRPC even behind firewalls/IDSes that do not speak HTTP/2 yet.
As for the HAProxy, Dropbox used to use (circa 2013) it specifically for loadbalancing, but we eventually replaced it with our Golang proxy. That said, recent HAProxy improvements (v2.0+) make it quite an awesome dataplane and an excellent loadbalancer!
On a more practical note, each nginx `reload` will double the number of workers, almost doubling memory consumption and significantly increasing CPU usage (all TCP connections need to be re-established, TLS handshakes re-done, etc.) So there are only so many reloads you can do in an hour.
If you are talking about the ability to select a certificate on the fly via `ssl_certificate_by_lua_block`, we are not aware of such functionality. If you are missing something, I would highly encourage you to discuss it with the community on GitHub!
From Oleg Guba, Traffic Team TL, co-author, and person driving the deployment:
* ListenerFilters + NetworkFilters are flexible enough, that some of the custom logic could be just moved to the config.
From Ruslan Nigmatullin, our head Envoy developer:
If you are talking more about a custom verification code there is already couple of ways to do that:
* Client TLS auth Network Filter: https://www.envoyproxy.io/docs/envoy/latest/configuration/li...
* Alternatively, if you are writing C++ extension you can use Network::ReadFilter, Network::ConnectionCallbacks.
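For a sense of what such an extension looks like, here's a hedged sketch; the interface and method names follow Envoy's include/envoy/network/filter.h, but treat the details as illustrative rather than a drop-in filter:

```cpp
// Hedged sketch of a custom Envoy network read filter, per the extension
// points mentioned above; details are illustrative, not a drop-in filter.
#include "envoy/buffer/buffer.h"
#include "envoy/network/filter.h"

namespace Example {

class InspectFilter : public Envoy::Network::ReadFilter {
public:
  Envoy::Network::FilterStatus onNewConnection() override {
    return Envoy::Network::FilterStatus::Continue;
  }

  Envoy::Network::FilterStatus onData(Envoy::Buffer::Instance& data,
                                      bool /*end_stream*/) override {
    // Custom verification logic would inspect `data` here, either continuing
    // the filter chain or closing the connection via callbacks_.
    return Envoy::Network::FilterStatus::Continue;
  }

  void initializeReadFilterCallbacks(
      Envoy::Network::ReadFilterCallbacks& callbacks) override {
    callbacks_ = &callbacks;
  }

private:
  Envoy::Network::ReadFilterCallbacks* callbacks_{nullptr};
};

}  // namespace Example
```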
You don't just load up a million certs as files and restart the server (though I do know a company that does something like this, but man, it's quite brittle).
Dropbox works for her :D
Personally, I prefer Dropbox. I found problems with OneDrive. The Google Drive client was always hit and miss and I could not rely on it. iCloud is not cross-platform (afaik). Dropbox has worked wherever I needed it.
Dropbox is more expensive but I prefer to have my files in Dropbox (as a separation of concerns) rather than have a single tech company control every aspect of my life.
My experience with the 'average Joe' is that Dropbox is easy and it works. Yes, they might save a couple dollars switching to OneDrive but Dropbox still offer a good product. Will Dropbox survive long term? I certainly hope so. I have no affiliation, aside from being a customer.
For the extra buck-or-two per user per month - I just like the fact that it "just works" for most people and little tech support. (Although I do miss the RSS feed on events that they removed that helped me keep track of all of the "stuff" "the people" were doing with "all the files". I'm sure there was a reason - but that was actually the only feature that made me think that they and Box.com might be comparable in that area)
Of course it does. I am not sure why you think it doesn't.
Another sign of this anti-pattern is the hyper-focus on the green-field solution instead of thinking about simpler solutions (better DevOps tooling, hiring C++ developers to rewrite the slower Lua code, etc.)
For $9.99 you get all that plus 2TB of storage with Google One. Dropbox has a minimum of 3 users for their business plan, but with 1 user on G Suite for $12/mo I get unlimited storage and all the goodies I mentioned before.
In fact, I think that drives a lot of these "what is better" debates in threads here. Some people go "Google Drive is better, because I get 2TB/month for a flat-fee that bundles the other services" (I do that plan too - I just subscribe to everything - to match the client's work flow. Where it really sucks is that I commercially subscribe to 3 (4?) commercial video conferencing systems).
I am not going to choose to save $5 when it stands in the way of me making $100. I find that thinking impoverishing, and time-wasting, and frankly stupid.
Even if I have to pay a designer $1000 to remake my slides after the content is settled for a client trying to pay me $15K, so that they can raise $2.5M, I'll gladly pay it! That doesn't seem to be the mindset here? (Or maybe I'm just coming across the people that shill for Vultr over Digital Ocean (or God-forbid - AWS!) - instead of focused on velocity of earnings. Maybe I just come across the wrong posts)
But, that's me - and I suspect I am not the majority here.
When you cost your business hundreds of $k/yr, who cares about $5-100? That's less than the cost of the free coffee and snacks!
Not to mention that USD 10 is a variable amount of money. Today, USD 10 is 25% more expensive than it was last year (and a couple of months ago, it was almost 50% more expensive than it was last year). And there's also the hassle of paying in a foreign currency (and not having common payment methods like boleto bancário available), and the annoying tendency of transactions from another country to be blocked as suspicious by the credit card provider.
On a website that started out discussing business ideas and how to turn them into profitable endeavours, it is a bit surreal that every time someone brings up commercial products, one gets endless posts about free-beer alternatives and how such software is doing a disservice to the community.