MongoDB Wire Protocol Specification License (github.com/mongodb)
81 points by aleksi 3 days ago | hide | past | favorite | 95 comments

Is the wire protocol copyrightable? If not, then this license only affects the _document_ itself, not downstream implementations that may _use_ the document.

And if there is a differently-licensed _implementation_ of the protocol, one would assume that it is possible to implement (or just use directly) that implementation without any reliance on this specific specification document.

In other words: IANAL but I don't think this will accomplish what they want (which is Amazon re-implementing a wire-compatible-with-MongoDB database)

As an example: I (probably) can't scan and redistribute an entire cookbook, but I can bake cookies from a recipe in the cookbook and sell them.

If the "You may not use the material for commercial purposes" clause applies to the wire protocol itself, does MongoDB itself get an exception to that for Atlas/MongoDB Enterprise?

The copyright owner/holder can do whatever they want. That's essentially how you're able to have things like Gitlab Enterprise or CockroachDB Enterprise use their open source base in an extended not-open-source premium product.

It does require you manage outside contributions to the open source part carefully and get approval for those contributions to land in the not-open-source part.

The license is for the published version available publicly only. There's no reason why it can't be separately licensed to an individual/entity to grant commercial use on a case by case basis. This is no different from negotiating a contract - the license defines the terms under which you are allowed to use the provided software.

It seems a bit pointless to publish a protocol with a restrictive license.

The basic purpose of a protocol is interoperability, yet the restrictive license works directly against that.

I also wonder about the reach of this kind of license. It's a license on the published work. So suppose I read this, whole-heartedly agree to the terms of the license, and use it to create a MongoDB Interop framework. I include attribution and release it under BSD.

Am I good?

I did not use the work for a commercial purpose and provided attribution, etc.

Now suppose some commercial cloud provider picks up my framework and uses it to implement MongoDB interop. Aren't they good too? They followed my license. They may not even be aware of the protocol specification, much less have viewed or used it in any way.

So what exactly has the protocol specification license accomplished?

(I am not a lawyer, so I'm sure I don't understand, but I'd like to.)

The purpose is probably to allow alternative clients but not alternative server implementations.

Is the wire protocol exclusive to server<->server communication, or is it used for server<->client communication? If the latter applies, then alternative clients would not be any more or less free of restrictions than alternative servers would be.

Note that I'm not addressing whether the wire protocol's license affects server/client implementations — that's being discussed in a different top-level thread — I'm only addressing whether a difference could exist between server/client.

(I am not your lawyer, please do not treat social media comments as legal advice, etc.)

There’s the ShareAlike part of the license - presumably that means your derivative work must be licensed the same?

Yes, perhaps that's it.

In that case, in my scenario I would not be good releasing my library as BSD.

To my mind, writing a library according to a specification is not a derivative work of the specification. (I think a derivative work would be something like an annotated version of the specification, or a new specification that adopted parts of the original specification.) But like I say, I'm no expert on this so could very well be wrong.

Out of curiosity, why did Mongo get so popular? I used it in university when it was hyped and it was pleasant to get started with. Querying your data is so inconvenient, though, that I switched to SQL, and I'm confused about what made experienced developers switch from SQL to Mongo in the past. Is it just because you can scale up more easily when your database gets massive?

Like NodeJS and many web technologies, almost everything I read about it years ago turned out to be hype and not based on facts (e.g., NodeJS being faster for large numbers of requests than traditional backends is not true, but it was widely repeated on the first 10 pages of Google search results)

edit: very well said, thanks for all the replies!

Mongo popped up around the time node started growing more in popularity. Around 2012, 2013 I started hearing it everywhere, and it felt like a hard pressed marketing attempt to make Mongo the database for node, and by extension the database for the internet.

The web-scale meme came from them actually trying to position themselves like that, and it partially worked: Mongo is not only still around, but the MEAN/MERN stack remains prevalent, at least in (tech) pop culture and social media.

To this day my opinion is that Mongo is a snake oil PR firm with a "database" and I have not yet seen anything that has convinced me otherwise.

Because like NodeJS it has a low barrier to entry. This is by far the most important feature for rapid product uptake.

I think a lot of it is how easy it is to insert, query, and update nested objects compared to a normalised SQL database. I know that Postgres' JSON support has come on a lot in the meantime, though.

Rapid application development.

During which, data is naturally hierarchical and schema-less (i.e., not defined, changing).

Mongo makes it easy just to "save your dictionary"

It's really just a "SQLite for hashmaps", and should have remained that.

More specifically, it's an mmap()ed series of linked lists of BSON documents, with some relatively simple B-tree indexing, a little journaling, and an unfortunate tendency to drop data on the floor.

Mongo hasn't used the MMAP storage engine for a long time. In fact it looks like it has even been removed now:


WiredTiger, the default storage engine, is LSM trees with B-trees for indexes.

Not LSM, all B-trees, but the rest is correct.

Not sure of the current state of Mongo but WT certainly has LSM Trees:



Junior developers are terrified of their application running slow, and not having the technical aptitude required to debug and fix the performance problem.

Additionally, Junior developers do not care about data integrity (the customer's problem), or operational complexity (the SRE's problem).

Finally, Junior developers consider software to be much like bread. Older software is bad (such as SQL relational databases), and newer software is good (NoSQL).

Combine all three of these flawed systems of thinking, and you have MongoDB adoption.

... also, before Kafka, the de facto queue broker was ActiveMQ, which was a pain to deploy, so many people used Mongo as a queue.

(Just finished their M121 course for my newish job).

Their aggregation pipelines are pretty pleasant to use. You basically get a bunch of data transformation code that you can avoid writing and you can work with your data without it going out on the wire.

I would probably be able to do most of that in SQL, and I'm sure someone who really knows SQL can do everything it can do, but I suspect the query involved would be significantly gnarlier. :)

Not sure how the two would compare performance-wise, but I do know that you can’t use “$lookup” (Mongo’s equivalent of joins) with sharding, which strikes me as quite unfortunate for scalability.
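For a concrete (if toy) picture of what a pipeline does, here is a sketch in Python: the pipeline document below uses MongoDB's real $match/$group/$sum operators, but run_pipeline() is an invented in-memory stand-in for the server that handles only those operators; in real use you'd send the same pipeline to a driver's aggregate() call.

```python
# Toy in-memory evaluator for a MongoDB-style aggregation pipeline.
# The pipeline document matches MongoDB's real $match/$group syntax;
# run_pipeline() is a hypothetical stand-in for the server.

def run_pipeline(docs, pipeline):
    for stage in pipeline:
        if "$match" in stage:
            criteria = stage["$match"]
            docs = [d for d in docs
                    if all(d.get(k) == v for k, v in criteria.items())]
        elif "$group" in stage:
            spec = stage["$group"]
            key_field = spec["_id"].lstrip("$")   # e.g. "$status" -> "status"
            groups = {}
            for d in docs:
                g = groups.setdefault(d[key_field], {"_id": d[key_field]})
                for out_field, expr in spec.items():
                    if out_field == "_id":
                        continue
                    # this sketch only supports $sum of a field reference
                    src = expr["$sum"].lstrip("$")
                    g[out_field] = g.get(out_field, 0) + d[src]
            docs = list(groups.values())
    return docs

orders = [
    {"status": "shipped", "qty": 2},
    {"status": "shipped", "qty": 3},
    {"status": "pending", "qty": 1},
]

# Same shape you'd pass to db.orders.aggregate([...]) in MongoDB:
pipeline = [
    {"$match": {"status": "shipped"}},
    {"$group": {"_id": "$status", "total": {"$sum": "$qty"}}},
]

print(run_pipeline(orders, pipeline))  # [{'_id': 'shipped', 'total': 5}]
```

The rough SQL equivalent would be `SELECT status, SUM(qty) FROM orders WHERE status = 'shipped' GROUP BY status`.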

You don't use a screwdriver to drive a nail, and you don't use a hammer to drive a screw. It's good to use the right tool for the job. In many instances, a relational database (often SQL) makes a lot of sense and is the right tool for the job. For some workflows, a NoSQL/non-relational database design can be nice. Document databases, K/V stores, etc. can be more optimized for those specific use cases, which can lead to better experiences. That said, lots of relational databases are quite powerful and quite performant, so unless you really need the particular use cases the NoSQL databases focus on, it may just be better to use the tooling available in your relational database.

There are a few instances where I've really preferred using a document database. Things like logs are often really useful to have in a document database like MongoDB. In one collection I can stick all kinds of logs, and the fields in each document vary depending on what kind of thing I'm logging. If I were to have a column for every field I might have in the log message, each row would have a ton of null columns. If I wanted to add a new field to easily search on, I might have to add a new column. A "schemaless" design can then be very useful. Of course this is true of pretty much any document database and not necessarily only MongoDB, and these days some relational databases have decent JSON document functionality as well. There are always certain kinds of tradeoffs to be made when choosing one database technology over another.
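As a sketch of the null-column point (all field names here are invented): in a document store, each log entry carries only the fields relevant to it, yet you can still query on any field, whereas one wide relational table would be mostly NULLs.

```python
# Hypothetical log documents: each carries only the fields it needs.
logs = [
    {"level": "error", "msg": "disk full", "disk": "/dev/sda1", "free_mb": 12},
    {"level": "info",  "msg": "user login", "user_id": 42, "ip": "10.0.0.7"},
    {"level": "warn",  "msg": "slow query", "query_ms": 5300},
]

# Query by any field without a predefined schema: find slow queries.
slow = [d for d in logs if d.get("query_ms", 0) > 1000]
print(slow)  # [{'level': 'warn', 'msg': 'slow query', 'query_ms': 5300}]

# The single-wide-table alternative: every row carries every possible
# column, mostly None/NULL.
columns = sorted({k for d in logs for k in d})
rows = [tuple(d.get(c) for c in columns) for d in logs]
```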

Another thing I've found useful with MongoDB in particular is GridFS, which has been pretty neat to use. These days there are other systems for handling binary blob storage, but as mentioned earlier, there are always tradeoffs with choosing any particular technology.

Reading why RethinkDB failed may provide additional context as to how Mongo was successful.


I believe it became popular due to the following:

It had a flexible data model, as there was no predefined schema. The selling point was rapid prototyping.

The JSON-based documents meant developers could work in objects without an ORM. If I remember correctly, Mongo called this "fixing the data impedance mismatch." Sigh.

It had decent performance as long as your working set fit in RAM, as it was basically an mmap()'d file.

It was easy to get up and running.

It was in the right place at the right time during the original NOSQL hype cycle.

IMO, nodejs today is a reliable tech for many different kinds of server side workloads, I wouldn't compare it to mongodb.

I wouldn't say mongo is unreliable, and I enjoy node a lot. It's just that, years ago, everybody said it was amazing for certain things, and it wasn't based on anything

SQL relies on joins. Joins are fine for reasonably sized databases, but on internet scale joins have miserable performance. "Document" based databases such as Mongo are designed to avoid joins.

MongoDB has automatic sharding, which is very important for horizontal scaling of writes. For MySQL you have to do manual sharding which is extremely hard.

And finally, document oriented databases are schemaless, which means there is no downtime when you add/remove fields.
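As a sketch of what "designed to avoid joins" means in practice (names invented): related data is embedded in a single document so one read returns everything, at the cost of the denormalization issues raised elsewhere in the thread (the author's name is copied into every post and must be updated everywhere if it changes).

```python
# Normalized, relational style: two "tables", joined at read time.
users = {1: {"name": "alice"}}
posts = [{"user_id": 1, "title": "hello"}]
joined = [{"author": users[p["user_id"]]["name"], "title": p["title"]}
          for p in posts]

# Document style: embed the author so a single read returns everything,
# no join needed -- but "alice" is now duplicated into every post.
post_doc = {"title": "hello", "author": {"name": "alice"}}

print(joined[0]["author"], post_doc["author"]["name"])  # alice alice
```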

This entire post sounds like the "MongoDB is web scale" video that is now a decade old. "Internet scale" is meaningless jargon. "Joins are slow" is a meme based on now-ancient MySQL performance.

Schemaless / document databases have a place and a purpose, but nine times out of ten the data we're dealing with day to day has a known and rigid structure that changes infrequently.

That’s the theory; in practice, “no schema” in my recent painful experience translates to “move the schema into the apps”, which really sucks when you have a bunch of interdependent services owned by different teams.

And you still need to do joins to enrich data from different collections.

I want Postgres back. :(

It’s exactly that: shifting the problem from schema-on-write to schema-on-read. It’s like dynamically vs statically typed. Personally, I don’t like it at all; none of your problems have magically disappeared, you just shifted them to runtime (or the application layer, as you say).

In many document databases these days you can have required fields and types enforced by the database to ensure all documents in the collection have at least a certain schema.
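As an example of that, MongoDB lets you attach a $jsonSchema validator when creating a collection, after which the server rejects non-conforming writes. The validator document below follows that real syntax (field names invented); check_doc() is just a toy approximation of the server-side check for illustration, not a MongoDB API.

```python
# A $jsonSchema validator of the kind you'd pass to db.createCollection()
# in MongoDB; field names here are illustrative.
validator = {
    "$jsonSchema": {
        "required": ["email", "age"],
        "properties": {
            "email": {"bsonType": "string"},
            "age": {"bsonType": "int", "minimum": 0},
        },
    }
}

# Toy stand-in for the server-side check (handles only the rules above).
def check_doc(doc, validator):
    schema = validator["$jsonSchema"]
    if any(f not in doc for f in schema.get("required", [])):
        return False
    types = {"string": str, "int": int}
    for field, rules in schema.get("properties", {}).items():
        if field not in doc:
            continue
        if not isinstance(doc[field], types[rules["bsonType"]]):
            return False
        if "minimum" in rules and doc[field] < rules["minimum"]:
            return False
    return True

print(check_doc({"email": "a@b.c", "age": 30}, validator))  # True
print(check_doc({"email": "a@b.c"}, validator))             # False (age missing)
```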


> really sucks when you have a bunch of interdependent services owned by different teams.

That's true. The strictness of SQL lets you use SQL as the integration point. Without it you'd have to write a service on top as the integration layer.

> “move the schema into the apps”

Exactly. There is always a schema, there has to be. It's just a question of whether it can easily be seen or if it is buried in a thousand places in the code.

> but on internet scale joins have miserable performance.

Please define internet scale. Joining 10s of billions of rows using clickhouse here.

Clickhouse is OLAP. We are talking OLTP.

>For MySQL you have to do manual sharding which is extremely hard.

Vitess [1] and PlanetScale [2] ?

[1] https://vitess.io

[2] https://www.planetscale.com

>""Document" based databases such as Mongo are designed to avoid joins."

That is not specific to document databases; it is an approach taken by NoSQL datastores generally. It's known as denormalization, which has its own share of issues that then need to be addressed.

HN is reacting as if this licensing change was made today, but it was made back in 2012, when they licensed their specification under a Creative Commons non-commercial variant. Today's linked commit is merely a text blurb reminding human beings of that license.

Exactly. I decided never to use MongoDB after that. I was already gravitating back to relational DBs anyway.

MongoDB was nice for fast iteration on new projects, i.e. no alter-table scripts required, but in the end it wasn't really an improvement. The data in the DB became hard to manage since different records in a collection could have different fields, due to no consistency or enforcement. Mongoose helped some with that, but then the schema just lived in the application layer instead of the DB. I also found myself needing to do multiple queries to do "joins" across multiple collections, so it ended up being less efficient. There was also the default setting of write-and-forget, i.e. not waiting for a confirmation that the data actually was written. Who writes a database like that? Performance-focused but not data-focused. Anyway, I liked the idea of having JS on the client / server (Node.js) / DB, but after some time I came back to relational DBs.

A company I worked at was hounded by MongoDB to the point where the sales rep turned up unannounced and talked his way to the dev department. He was quickly escorted off site.

We had a far worse experience. We were thinking about upgrading to the Enterprise edition. The sales reps took all our details and several months later complained to our federal client using the details we provided making bs claims about security of the Community edition and licensing to them. All under the guise of "you didn't respond to us about the price quote".

We lost face because of that, although finally the client understood that it was all bs. Stay away from MongoDB and if you have to contact them make sure they sign a strict NDA and don't reveal any details to them.


This is pretty terrible.

We had a pretty bad experience with our company as well, being bounced along nearly half a dozen sales reps (each of whom insisted on having a meeting) before actually managing to get a quote, which was ludicrously high for the one or two features we wanted. For anyone who IS looking for MongoDB enterprise, just use PSMDB (https://www.percona.com/software/mongodb/percona-server-for-...) instead. OSS version of all the enterprise features of MongoDB.

Wow, I'd love to hear more details about this if you can share them.

Does the linked license https://creativecommons.org/licenses/by-nc-sa/3.0/us/ really mean you cannot read it, and then write a piece of software that implements it?

My reading of the license implies it covers _the document_, like a book or a song could be licensed that you cannot modify and redistribute the results, or maybe not redistribute at all: not what you make with the information you gain from reading it.

https://wiki.creativecommons.org/wiki/NonCommercial_interpre... makes me think the license is about _copyright_, not about what you make with the document. The page also has the explanation about "Explanations of NC do not modify the CC license".

Even if the NC clarification somehow held true, I still don't see how it would prevent someone from writing, e.g., an MIT-licensed library implementing the interface (the non-commercial-use bit), after which someone else just takes it and uses it for whatever, within the limitations of the MIT license.

I think yours is the widely agreed-upon interpretation of Creative Commons. I am likewise bewildered by their interpretation — it seems that they are suggesting anyone who reads this document is forever poisoned and thus cannot design or implement Mongo-compatible software. To my humble legal knowledge (not a lawyer), that sounds pretty specious at best.

I wonder if MongoDB's interpretation of Creative Commons is even correct. Typically, restrictions apply to reuse of the specific material, not use of the knowledge derived from the material.

"It is difficult to get a [person] to understand something, when [their] salary depends on [their] not understanding it." --Upton Sinclair

This appears to be the licence for the documentation on the wire protocol, not the actual wire protocol itself. The protocol would essentially be a part of MongoDB itself or, as the Google vs Oracle case decided, likely not something that can be licenced at all.

At least that's my understanding. Am I missing something here? It's fairly reasonable for a company to licence their docs with a non-commercial licence like this.

The "You may not use or adapt this material for any commercial purpose, such as to create a commercial database or database-as-a-service offering" language suggests that they think the license covers more than just distributing the document, which I believe is an unusual interpretation of what a spec copyright allows the author to prevent.

Lawyer repellent (I hope): The quote comes from https://github.com/mongodb/docs/commit/50e48200cde7e2eaffdc6... , and anyone who receives this comment may copy it as they like.

I think they just stuffed up the wording. I think the whole "can't use it to create a commercial database" means you can't use the document itself (i.e. "this material") as part of your own commercial database offering but you can absolutely adapt the protocol and share the document that Mongo created under the same terms (i.e. share-alike).

Because it's already in source form and isn't "compiled in" into a derived form I don't think it has the same potency from an infectious license perspective as GPL does. If you bundle this document as part of your commercial database I think they can't do diddly squat as long as you make clear that the document itself is still under the CC sharealike license.

I (not a lawyer) understand copyright law to mostly prohibit us from copying the spec document without permission, and then the license provides permission for some copying of the document.

I'm pretty sure the authors really want to prevent competing implementations that make money for someone else, and the wording expresses their intent. I believe that copyright on the document won't let them stop a company that can afford decent lawyers from implementing the spec, but it certainly doesn't make me any more likely to touch anything related.

It's pretty explicitly written that they mean for this to apply to the specification, not just the documentation.

> You may not use or adapt this material for any commercial purpose, such as to create a commercial database or database-as-a-service offering.

It'll be exciting to see Oracle vs Google turn into Mongo vs AWS, which is clearly where this is eventually going.

The specification itself is documentation.

edit: I mean, it’s not that I strongly care, but why not explain if you disagree? How exactly is a specification not a form of documentation?

The relevant definition for spec would probably be:

> a detailed description of the design and materials used to make something.

That sounds an awful lot like documentation.

Maybe you are implying that you can’t read it and then make your own implementation of that reading, but that’s not true. IANAL, but I’d be willing to bet my lawful ass it means you can’t reproduce the document for your own commercial implementation. Not that you can’t reference it. I don’t think “derivative” is that strong.

Presuming the wire-protocol specification contains things like schemas for Protobuf/Avro/etc. data formats (the kind of formats where most compiled-language libraries for them do compile-time code-generation against the schema to create efficient codec modules), it would be very hard to make a conforming implementation that doesn't breach copyright by embedding those schemas byte-for-byte.

How would making a byte-compatible protobuf by simply looking at the existing one be different from, say, creating metric-compatible fonts, or, for example, pretty much exactly what Google did in the Oracle lawsuit? (Copy them verbatim because they shouldn’t be copyrightable...)

Even if you subscribe to the belief that the protobufs are eligible for protection under copyright law, which is fair since it seems to be the current understanding even if it is relatively new, I still don’t understand how creating compatible specs without reversing is illicit. It’s fair use, no? I thought interoperability was a valid claim for fair use.

If you reverse-engineered the protocol (either from scratch, or by looking at existing implementations) and happened to (independently) come up with an identical looking data schema, that would _not_ be protected by copyright.

The original version of MongoDB valued speed over durability, which was not a good look for a database. Since then I have never been able to trust them, even though I've been told repeatedly that "they have changed now".

They have great marketing and appear to solve problems for some customers, but they also seem to cause major problems for customers.

So I'll just keep advocating to stay away from it.

What does this mean for the current version of Mongo, which is far more reliable? AFAIK there are some issues with transactions, especially in the case of sharding. But disregarding transactions, it has many levels of write concern. I'm not sure regarding bugs, but it claims to have different write concern levels that could suit different scenarios. So AFAIK it is reliable now, if used correctly.

what does this have to do with the linked article?

Just another reason why mongodb is trash - does it even go without saying anymore?

Every single company I've ever worked for was crushed by the unreliability of Mongo. Their ultra-expensive consulting is also a ripoff: in one case the guy came, suggested a bunch of stuff w.r.t. changing up our queries and indexes, and left, then a day later the database exploded and we had to roll back everything he suggested. We tried again piecemeal, which eventually led to the same thing happening again. We ended up spending the cash to train the engineers and admins to do the tuning ourselves, which turned out to be completely different from the garbage the consultant suggested. Let me emphasize that this consultant was from the MongoDB company, not some third party. Completely incompetent company at all levels.

Mongodb exists to extort money from idiots on their cloud offering where costs balloon out of control. It was easily 40% of our monthly infrastructure bill and what we got out of it definitely did not reflect that.

We refused to pay them. Don't ever give Atlas money.

We use mongodb in my current client's cloud product. I agree mongo sucks big time. It has nothing to offer for our relational data (SQL, please) or for our little search index (Elastic, or Postgres, please). Except for the database, the rest of our architecture is quite alright.

What I do not agree on is that their Atlas product is bad. It has a very nice and helpful dashboard. Atlas is quite solid when used from Google Cloud (downtime/slow performance just once in 2 years), and their consultant was very helpful. Not really fast response times, but it was not urgent. Also, their offering is not super expensive, but we only have 50-100gb of data. Consultancy was good value for money.

Don't give Atlas money, because you probably want a relational database for relational data, and use Elastic (or Postgres if small scale) for search or statistics. MongoDB also sucks with scaling; it suffers from the same issues as normal databases, except that they don't call it a problem. Which means you, the developer, should fix it. It also sucks with scaling because expertise on using Mongo is scarce.

Almost done at this client. Learned a lot about mongo. Would never recommend it to another client. Would recommend running managed sql databases by a cloud provider if their offering was as good as atlas.

Do you or anyone else have any opinions of Elastic’s Cloud offering?

I did my master's research project at university on the issues with MongoDB's distributed consensus protocol (or lack thereof). One of my friends suggested the title "Distributed Cisterns".

That said, I think that the wire protocol is probably ~fine for a schemaless document store if that's what you want. I know that Apple implements MongoDB with FoundationDB, so that there are much stronger guarantees behind it, but they can still use MongoDB drivers in various languages, and that seems reasonable.

Can you share the pdf? Thanks.

Different author but definitely worth a read:



Older discussion for an older version of mongodb:


I have worked for a couple of companies that swear by Mongo, with similar results.

Mongo is just a poorly designed piece of software you shouldn't trust for any mission critical service, unless you are willing to dedicate a lot of resources to constantly put out fires.

Remember in the early 2010s when every bro on the Internet became an expert on scalability because he used MongoDB? https://youtu.be/b2F-DItXtZs (sort of like how Rust users are security experts today)

I've been working on building a better database than MongoDB and it's called redbean. https://redbean.dev/ It supports web-scale NoSQL because you can use the executable pkzip structure as a document object store. It also embeds SQLite for SQL too.

Why do they swear by it then?

Sometimes tech companies have the wrong people at C-level. These people constantly make bad decisions, and have their subordinates pay for them.

Investors are often oblivious to these details. If the company is not doing as well as expected due to crippling tech debt, high turnover rates, dumb decisions, etc. the CEO will come up with an excuse (since COVID, this is easier than ever).

Because they drank the kool-aid that joins are bad (so implement them in your application instead!)....oops, mongo has joins now. It's still trash.

Out of curiosity from somebody watching, what makes you say it's still trash?

Because they don't understand RDBMSes and how to tune them for performance, and have bought into cargo-cult thinking.

I am the tech lead for a large-scale trade processing system based on MongoDB at one of the largest banks in the world. We have billions of documents and tens of terabytes of data, which is all processed both daily in huge batches as well as in real time, for various reasons (regulatory, for example).

MongoDB is an immature product, yes.

But it is good at some things.

As long as you learn what things it is good at and what things it is not good at, it can be a quite viable solution, depending on your problem.

Learn and plan accordingly.

> As long as you learn what things it is good at and what things it is not good at, it can be a quite viable solution, depending on your problem.

My experience is that the things that mongo is "good at", there are competing products that are just as good. Mongo downsides - such as not giving a crap if you lose data - make it a non starter when there are so many better products that actually protect your data.

> My experience (...)

So what is your experience? I have stated mine.

> My experience is that the things that mongo is "good at", there are competing products that are just as good.

So what does that mean? Nothing.

Every product is a set of compromises. The one that is suitable doesn't need to be perfect in every (or any) respect. It just needs to have the set of compromises that suits your project.

> Mongo downsides - such as not giving a crap if you lose data - make it a non starter when there are so many better products that actually protect your data.

Databases do not (typically) "protect" data. There are some databases that make it impossible to remove data once stored, but in general, if you count on your database to prevent data loss, you are just waiting for a junior engineer to make a blunder and remove half your database, whether it is MongoDB or Oracle.

What all this means is that you cannot count on any database to prevent data loss and you have to organize some kind of way to protect your data. This usually means some kind of backup, snapshotted replica, redo log, etc.

> Databases do not (typically) "protect" data.

This is a laugh. There are decades of work and research put into databases, making sure that what you put in comes out correctly and consistently. Things which Mongo referred to as "not web scale" for years.

Obviously you can lose data when you make stupid mistakes or God decides to smite you - that's not what I was referring to. Pretty good strawman though.

But somehow they are fifth in the DB-Engines ranking [1]. So despite all the hate on HN, they are quite widely deployed. The same goes for MySQL as well.

[1] https://db-engines.com/en/ranking

Just because something is popular doesn't make it good. Especially in web development where even the fads have fads. Managers and non-techs also eat up the sales pitch and force it on teams that don't know any better.

"Don't ever give Atlas money."

Atlas works fine and it's reliable. On the consultant thing I can't comment, but MongoDB as it is today is a good DB.

Same story with Datomic. My conclusion is that databases are hard and require lots of time and resources to be reliable.

Is there an alternative that is easy-ish to migrate to?

This feels restrictive, especially in comparison to PostgreSQL, where you have projects like CockroachDB that utilize the PostgreSQL wire protocol because of how well documented it is.

PostgreSQL is organized as a non profit [1], Mongo is a for profit enterprise attempting to stem cloud providers from providing a wire protocol compatible service without compensating them for their work [2].

[1] https://www.postgresql.org/about/donate/ ("PostgreSQL is an affiliated project of Software in the Public Interest. Funds donated to PostgreSQL are used to sponsor general PostgreSQL efforts. These funds are managed by the Fund raising group.")

[2] https://www.computerweekly.com/news/252455700/AWS-pushes-Mon... ("AWS pushes MongoDB compatible alternative as licences change")

If the courts enforce this, then it is basically death to adversarial interoperability.

I assume that's the idea, to try and stop AWS from keeping their "Aurora Like" version of Mongo updated. See https://aws.amazon.com/documentdb/

It's useful to compare MongoDB's text with another vendor known for restrictive licensing, namely Oracle. Here's the legalese on the MySQL internals documentation. It has restrictions on dissemination but not actual use.


It does not restrict you from building applications that derive knowledge from MySQL internals documentation.

I think if you know your product is good, you're not too worried about someone developing a drop-in replacement for it. The opposite is also true.

It is my understanding that AWS very nearly killed ElasticSearch. If I were [working at] MongoDB, I wouldn't want that to happen to my product, either.

I'm perfectly OK with the "you can't run this code as a service" licenses, but when you get to restricting the spec about the wire format, I think other factors are at play.

Like, if Amazon wants to spend millions of dollars writing their own version of MongoDB that is drop-in compatible for existing MongoDB applications, that is great for customers. You, the customer, now have two choices -- options for when things don't work out with one implementation. When you use license agreements to restrict that ability, I think it's a statement about the quality of your product -- you think Amazon can do it better than you, so you're going to use the legal system to prevent them from trying. As a database user, that is a signal for me to stay away. It means that when I'm having a scaling emergency there is only one way to fix it -- give most of my revenue to one company. That's scary.

Yes, Amazon did almost kill Elasticsearch. I sometimes wonder if the set of circumstances translates 1:1 to other companies. The products have similar names -- AWS has the Elastic Compute Cloud, and then there's this Elastic Search. That's not their product? Nope, just an unfortunate naming choice. And, Elasticsearch was particularly difficult to run at the time, so a hosted option was clearly valuable. And finally, Elasticsearch had a lot of company-killing problems: basically not doing what it said it did. https://aphyr.com/posts/317-call-me-maybe-elasticsearch At the time, Elasticsearch was losing acknowledged writes. That's a company killer if your only product is a database.

Could this be targeted against people who implement the MongoDB API without being MongoDB? See https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-int...

AWS benefits heavily from a rich foss ecosystem, but foss systems also make it easier for users to migrate to similar platforms. If they were to disrupt the foss ecosystem enough, do we enter a world of heavy reliance on proprietary cloud-based systems?

If you were intending to create a profitable business, and really wanted your product to be foss, and Amazon might just fork your project, honestly, what are your options? I'm honestly curious, it seems like such a difficult situation

FOSS has many licenses; you can choose strict ones.

Which specifically would help here? From a four-freedoms point of view, where anybody can use or modify the software for any purpose, it seems tricky.


Wiki quote:

> GNU Affero General Public License is a modified version of the ordinary GNU GPL version 3. It has one added requirement: if you run a modified program on a server and let other users communicate with it there, your server must also allow them to download the source code corresponding to the modified version running there.

>The purpose of the GNU Affero GPL is to prevent a problem that affects developers of free programs that are often used on servers.

The AGPL guarantees the four freedoms and specifically prevents (cloud) companies from running a proprietary fork.

