Never, ever, ever use MongoDB (cryto.net)
182 points by joepie91_ on July 19, 2015 | 116 comments



Been there, done that. Spent over a year trying to make Mongo behave like a decent DB, and it would still fail to do so.

Actually, I wrote a small "goodbye letter" to MongoDB on HN in the past:

"You got your chance, Mongo, and you screwed it up.

I tried getting the most out of you. I treated you like a princess, I indulged you, your wishes became my wishes and your thoughts were my thoughts. I stopped listening to all that criticism around you and thought of you as of a misunderstood child, even as you would refuse to do even the easiest tasks one could imagine.

I gave you everything there is to give, but you broke my heart. You left me in the most critical moments. I trusted you, but you would go your own way. Tears were shed and countless sleepless nights were to follow.

Remember that night you disappeared without leaving a sign? I sent you messages which got a response only after many hours. You didn't give a damn about my needs. Once, you told me: "it's not me, it's you". And I believed you, I truly did.

It's over now. It's been some time without you, and I'm getting better. I have discovered, that not everyone is like you. Some DBs care, they really do. You can trust them, they give you their everything.

I'm still struggling falling in love again, but it's getting better. Don't write me back. Goodbye."

Here's the link: https://news.ycombinator.com/item?id=8990754


If you like laptop stickers:

https://pbs.twimg.com/media/CF9XvchWYAEKzTp.jpg

Source .sketch file is in:

https://github.com/mikemaccana/stickers

The joke is Alex Sexton's; I licensed the font, designed the stickers, and printed them. Apparently a couple have found their way to Mongo HQ.


If your site has less traffic than Wikipedia, use some relational DB. If your site has more traffic than Wikipedia, you already have a dev team that understands big data.


Now with PostgreSQL 9.4 there isn't really any reason to use MongoDB (JSONB is better suited than BSON). For any other use case, CouchDB.


Sadly, Meteor uses Mongo and only Mongo.

And we are using Meteor (which is based on Node).

Two levels: Node and Mongo == two levels of sadness.

(let the flames commence)


Ember.js with PouchDB and CouchDB gives me similar functionality (I think) with more proven technologies. Each time I see state/data automatically synchronize across browsers, I'm amazed. And your apps work offline; not entirely sure if Meteor provides that.


Aah interesting, I have been planning to give Meteor a spin. Any reason for supporting only Mongo? Is it because libraries are yet to come out?

Edit: punctuation.


Meteor relies heavily on two things:

1) Mongo's ability to have a live cursor that updates when the query returns new objects, or when objects currently being returned change.

2) Minimongo, which lives on the client, so code can run queries without worrying whether it's executing on the client or the server.

If Meteor weren't so tightly tied to Mongo, I would hands-down recommend it to a company doing Node.js development.

Meteor is much easier than Node to grok (no callback hell, no promises). Meteor reads top to bottom the same as all the other languages out there. Node reads like a twisted spaghetti hairball that your cat vomited up.


There are a few. Mongo has excellent integration with Hadoop and is becoming very popular in the big data analytics space. Likewise it is gaining traction in the EDW space as a result of their partnerships.

Also, it still remains one of the simplest databases to set up and use, making it my go-to for hacks/spikes.


> Also, it still remains one of the simplest databases to set up and use, making it my go-to for hacks/spikes.

It's not hard to make it easy to do a thing in the wrong way - and that's exactly what MongoDB does. It doesn't make you set up authentication or table schemas, so it looks 'really easy to set up'.

In reality, though, you're wasting hours to save 10 minutes. Because at a later point, your database will get broken into (if it can even be called "breaking in", without authentication), or you'll end up corrupting your data because two of your applications disagree over what the current object schema is.

To know how easy something really is, you need to compare how hard it is to set it up correctly. And once you do that, MongoDB falls far behind.
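For what it's worth, closing the most glaring of those gaps is a small config change; a minimal sketch of the relevant mongod.conf fragment (surrounding options omitted):

```yaml
# Enable access control, which MongoDB's default "easy" setup leaves off.
# Users and roles still have to be created before this is useful.
security:
  authorization: enabled
```

The point stands, though: "easy" setup that skips this step is easy precisely because it's incomplete.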


I'm all for using tools that don't make bad things easy, but I think threeseed is saying that for hacks and spikes, the "later point" you're referring to might not matter. (I agree that those are the only jobs I'd use Mongo for, too.)

Of course, if you're at a company where those spikes find a way of turning into production code, then it's a different story.


Realistically, though, almost always at least some portion of your hacks makes it into production. And if you're saving 10 minutes for 5 hours lost, then even at a going-into-production rate of 10%, it's still not worth it.

Hacks that remain hacks in 100% of cases are rare :)


I've always thought it would be a great idea to create an explicit "prototyping database" that has limitations that actually prevent you from putting it in production, no matter how small your use-case. Things like "erases random rows after 24 hours" or "default row limit of 100; can increment this by 100 once per day by solving a CAPTCHA."


I think my experience is different from yours (I don't let spikes make it past my own branch), but as far as production goes, we're on the same page!


What exactly are you talking about?

Why would I care about having my database broken into, or even my data corrupted, for hacks/spikes? The very definition of these is that they are throwaway, designed to test an idea before rebuilding in something more suitable.

And yes, MongoDB is schemaless. That doesn't mean your data is going to automatically corrupt itself. You can just define your schema in some shared library. I can just as easily corrupt PostgreSQL if I change data types without updating the ORM.


> Why would I care about having my database broken into, or even my data corrupted, for hacks/spikes? The very definition of these is that they are throwaway, designed to test an idea before rebuilding in something more suitable.

No. There's no rule that says a prototype must be thrown away - often, it can be built upon further. See also https://news.ycombinator.com/item?id=9913563

> And yes, MongoDB is schemaless. That doesn't mean your data is going to automatically corrupt itself. You can just define your schema in some shared library.

This doesn't work if there is no shared library, for example if you have two separate components in different languages both using the database. And at that point you're (poorly) reinventing a schemaful database anyway.
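To make that concrete, here's a hypothetical minimal version of such a "shared library schema" in Node (all names invented), along with its fundamental weakness:

```javascript
// A hand-rolled "schema" shared between components: field name -> type.
const userSchema = { name: "string", age: "number" };

// Validate a document against the schema before writing it.
function validate(schema, doc) {
  return Object.entries(schema).every(
    ([field, type]) => typeof doc[field] === type
  );
}

console.log(validate(userSchema, { name: "Ada", age: 36 }));   // true
console.log(validate(userSchema, { name: "Ada", age: "36" })); // false

// The catch: every component, in every language, must remember to call
// this before every write. The database itself never enforces it, so a
// single forgetful caller is enough to poison the collection.
```

Which is exactly the "poorly reinventing a schemaful database" point: the validation a relational database does centrally now has to be reimplemented and kept in sync in every client.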

> I can just as easily corrupt PostgreSQL if I change data types without updating the ORM.

A good ORM will abide by your database schema - and no, you can't "corrupt PostgreSQl" by "not updating your ORM". It will, at most, error out because you're trying to work with non-existent columns.

In a schemaless database, there's no such thing as "non-existent columns", thus you can quietly corrupt data without realizing it.
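That failure mode is easy to demonstrate. This sketch uses a plain Map as a stand-in for a schemaless collection (no real MongoDB involved; all names are invented):

```javascript
// Two hypothetical components disagree about the field name for a user's
// email. In a schemaless store, both writes succeed silently; with a
// schema, the second write would be rejected.
const store = new Map(); // stand-in for a schemaless collection

function componentA(userId, email) {
  store.set(userId, { ...store.get(userId), email });
}

function componentB(userId, email) {
  // Written later, by someone else, with a different field name.
  store.set(userId, { ...store.get(userId), emailAddress: email });
}

componentA(1, "a@example.com");
componentB(1, "b@example.com");

// The document now quietly carries both fields; readers using either
// name see stale or missing data, and nothing ever errored.
console.log(store.get(1));
```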


Actually, following proper Agile process, you do throw your prototype away.

And yes, you can corrupt PostgreSQL. Change the data types without updating your ORM.


So we allow for bad tools through the means of slavish doctrine? Brilliant.


And how exactly do you think that will corrupt anything?


>Also, it still remains one of the simplest databases to set up and use, making it my go-to for hacks/spikes.

Eh, it's more complex and less secure than SQLite, which, if you want, can store JSON as string values and provides full-text search.


As simple as sqlite with a decent ORM? Just curious here, I've never tried Mongo.


PG 9.5 with partial updates; for now, PG's jsonb support is inferior if you want to perform partial modifications.


There are no partial updates in 9.5. There likely never will be in Postgres, due to MVCC.


My guess is that mirekrusin was referring to the jsonb_set() function, not to the storage level.

On the WAL level there are partial UPDATEs, by the way.
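For reference, a document-level partial update via jsonb_set() in 9.5 looks like this (table and path are hypothetical); the row value is still rewritten wholesale at the storage level:

```sql
-- Change only data->'address'->'city' within the jsonb column.
UPDATE profiles
SET data = jsonb_set(data, '{address,city}', '"Berlin"')
WHERE id = 1;
```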


I've used MongoDB recently for the first time, for an internal project. Being schemaless, I believe it did speed up development time significantly and allowed me to add stuff more gradually, but you could say that's just a way to indulge a certain laziness of design. Now that the project has matured, stepping back I can see how a relational database would be a better fit, but if I had used an RDBMS from the beginning I might not have "shipped" quite as quickly. Swapping it out for a real RDBMS will mean changing a couple of classes anyway, nothing dramatic.

So uhm, I don't agree that it's not good for prototyping; if your final code ends up having a decent abstraction level, it should not matter what you picked on day 1 anyway.


> but if I had used a rdbms from the beginning I might have not "shipped" quite as quickly.

I doubt that. As mentioned in the article, migrations with rollbacks basically solve this problem, and let you iterate through schemas quickly.
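A migration of the kind referred to is just a schema change plus its inverse; a hypothetical up/down pair:

```sql
-- up: try out the new schema
ALTER TABLE users ADD COLUMN nickname text;

-- down: roll it back if the idea doesn't pan out
ALTER TABLE users DROP COLUMN nickname;
```

Iterating on a schema this way is cheap precisely because every step is reversible.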

> Swapping it out for a real rdbms will mean changing a couple of classes anyway, nothing dramatic.

In theory, perhaps, but I have yet to see that in practice. I do freelance code review and tutoring for a living, and in every case I've seen, a significant amount of work would be needed to move to a different database.


> As mentioned in the article, migrations with rollbacks basically solve this problem

Maybe so, but as I said above, they are another thing to do at a time when things are changing fast and one is more worried about checking whether the application "makes sense" than proving its correctness, so to speak.

> I do freelance code review and tutoring for a living, and in every case I've seen, a significant amount of work would be needed

Well, duh: if this weren't the case, you wouldn't have been involved in the first place :) What company calls consultants to review code they know how to fix themselves in a few minutes?


The article mentions CouchDB and other schemaless document DBs, which should be equally good for prototyping but suck less.


I don't understand why a schemaless database is so great for prototyping. Most of the pain from schema changes is when you have a lot of code or a lot of users depending on an existing schema, so it's hard to change (or at least, hard to change while maintaining uptime). Running ALTER TABLE on a small dataset with almost no users is trivial.
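Indeed, on a small dataset with no users, the "scary" schema change is a one-liner (hypothetical table):

```sql
-- In Postgres, adding a nullable column without a default is a
-- metadata-only change, regardless of table size.
ALTER TABLE users ADD COLUMN phone text;
```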


Schemaless is just one aspect; the other is clustering/replication. Not sure about MongoDB, but CouchDB has masterless peer-to-peer replication built in. That is hard to bolt on later on top of Postgres and other databases. Just saying that's another factor in play, not just schema vs. no-schema.


That's not really useful to prototyping.


Yes it is, if you are prototyping a distributed service.

Unless "if it works on my laptop, it will work distributed across 50 nodes as well" is something you believe in (you shouldn't, without very good reasons).


A lot of ALTER angst comes from MySQL, where any table alteration results in a full table lock and copy. In most other RDBMSes, column addition and some type changes are a quick metadata change.



It might be trivial, but it's another thing to do, which simply isn't there otherwise.


If you're trying to use MongoDB as a relational database, and don't understand its limitations and strengths, of course it will be terrible.

MongoDB is good if:

a) Your data model fits the document model. Usually that means you'll be querying only one collection at a time.

b) The access pattern of your application is a lot of reads and few writes. MongoDB has a collection-level lock, so it sucks for concurrent read/write operations.

c) If you need ACID support, your operations must be restricted to a single document.
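To illustrate (a) and (c): a hypothetical order document where everything needed lives in one place, so a single-collection query serves reads and single-document atomicity is enough for updates.

```javascript
// One order = one document in one "orders" collection. Reading or
// atomically updating the order touches exactly one document.
const order = {
  _id: "order-1001",
  customer: { name: "Ada", email: "ada@example.com" },
  items: [
    { sku: "A-1", qty: 2, price: 9.5 },
    { sku: "B-7", qty: 1, price: 24.0 },
  ],
};

// The total is derivable from the document alone: 2 * 9.5 + 1 * 24.0
const total = order.items.reduce((sum, i) => sum + i.qty * i.price, 0);
console.log(total); // 43
```

If the same data instead needed to be joined across customers, orders, and inventory, the fit described above disappears.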

If your problem fits within the restrictions above, you probably won't run into many problems. We've been using Mongo to store our authentication info, and to store payment transactions. No major problems so far, but our needs fit nicely into what MongoDB can provide.

Most of the rants I see are from people that got burned trying to use MongoDB as a drop-in replacement for a relational database.


Relationships in MongoDB are commonly stored as ObjectID references, and (at least on node) the default Mongo client (Mongoose) makes you write schemas to enforce these.

I think Mongo is awful too, and I'm sure there are problems with this approach - I'd love for someone with more DB knowledge to go into details - but saying there are 'no relations' seems to be an oversimplification.


Mongoose is a client-side library. It emulates relations and schemas client-side, as I also pointed out in the article. This also means that the database cannot optimize for relations and schemas, as it doesn't even know they exist.

Mongoose is also very much not the default MongoDB client in Node.js - it's a third-party client library by Automattic. The official client according to the MongoDB documentation is `mongodb` (https://www.npmjs.com/package/mongodb), which does not feature relations.


I have used and am aware of node-mongodb-native (I wrote http://stackoverflow.com/questions/19546561/node-mongodb-err... where we discovered it wrapped and threw away all exceptions in callbacks in the stable production version), but Mongo (the company) has recommended Mongoose. The language now is vaguer than it was: 'you can use it natively if you want, or use Mongoose' http://docs.mongodb.org/ecosystem/drivers/node-js/

Thanks for your excellent explanation and totally understood re: not being able to optimise for relationships on the server.


I actually didn't even notice Mongoose being mentioned on that page, to be honest, given that the instructions were for `mongodb`. I'd imagine that neither would a developer skimming the page :)


Mongoose isn't the default Node.js MongoDB driver, though it is very popular.


MongoDB is awful for development.

I once started consulting on a Rails app project which used MongoDB. But I was never able to get productive: I had my development environment all set up, but then the code wouldn't work with the development database. I pinged the other dev:

"Oh, I made changes to the data model. I'll zip up and email you a new copy of the dev database."

This continued for a couple of iterations, and I had to quit the project because I could never get work done consistently.


I thought it couldn't get worse than dynamic languages, but then they invented document databases.


I, well, one of my companies, have been using Mongo for a while without any problems. I cannot comment on its reliability, but as a business tool, in the niche area we operate in, it has served us well. I've got nothing against relational databases, but they are not a solution to every single problem. Heck, why not decide first if you can use flat files? Do you really need to query and join? Not all problems require that sort of thing.


> I, well, one of my companies, have been using Mongo for a while without any problems. I cannot comment on its reliability, but as a business tool, in the niche area we operate in, it has served us well.

The issues I listed provably exist. That you haven't run into them yet doesn't change that - and quite likely, you might not even know whether you've run into them.

Losing data or having your database open to the wide internet, for example, are two scenarios that you will likely not be aware of unless you explicitly test for them.

> I've got nothing against relational databases, but they are not a solution to every single problem.

And if you'd read my post carefully, you would've seen that I make no such claim. They do solve many problems, however.

> Heck, why not decide first if you can use flat files?

Flat files are rarely the correct solution. Race conditions ahoy!

> Do you really need to query and join? Not all problems require that sort of thing.

Many do. And even if they don't, MongoDB is still not the right choice.


Postgres 9.4 even has `jsonb` and GIN indexes, which in a lot of ways provide nicer schemaless storage than some of the NoSQL databases do. You're also free to mix in traditional relational tables (since `jsonb` is a data type like any other), or even efficiently join two tables based on a JSON containment operator. I haven't looked into it _too_ deeply, but what I've seen so far looks really neat.
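A sketch of that mix (table and payload shape invented):

```sql
-- Relational columns and schemaless jsonb side by side.
CREATE TABLE events (
  id      serial PRIMARY KEY,
  created timestamptz NOT NULL DEFAULT now(),
  payload jsonb NOT NULL
);

-- A GIN index makes containment queries on the jsonb column fast.
CREATE INDEX events_payload_idx ON events USING gin (payload);

-- Containment query served by that index:
SELECT id, created FROM events WHERE payload @> '{"type": "signup"}';
```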


If nothing else psql has a nice tabular output. Sometimes I go out of my way to import something into a local pg instance just so I can format things using psql.


Whoa, this is a worrisome attitude.

1. The bugs that MongoDB historically suffers from are those that are unlikely to occur for most people, but that cause the most suffering when they do. This is a combination of them being incredibly hard to replicate, and, more insidiously, of the system continuing to work; it just has a small data corruption. This can obviously lead to huge problems down the line. The conclusion is firstly that just because you haven't seen a problem doesn't mean there isn't one, but also that just because there isn't a big problem for you now doesn't mean there won't be one in the future.

2. Databases have some of the most demanding testing around, due to the important requirements placed on them. There's a really good talk floating around from a guy at FoundationDB. The sorts of bugs that keep cropping up in MongoDB are the sorts of bugs that such testing would find, which is a very bad smell. Thus, one should only resort to MongoDB if no other solution is reasonable.

But in reality, for most tasks something like Postgres (with JSON additions) or CouchDB etc is equally suitable and easy to use. This means that most new users of MongoDB can actually afford to use a better/less buggy database, they just choose not to.

3. Flat files are generally a silly idea, mostly because of the additional tooling that databases give you. ORMs make using a database much, much easier than using flat files, and databases usually allow easy analytics. They also typically give better error messages, better performance, better extensibility, etc. If it comes time to move your data into the cloud, you'll have a much easier time of it with a database. It's a win-win in almost every case.


If you don't need to QUERY, just use a file system.


As a smaller setup with far less traffic than a large-scale real-time application, what are the drawbacks of just using a basic MySQL installation? We've hit over 20k views per day on our internal wiki with no signs of coming close to any limits with either our hardware or the SQL stack.


MySQL is okay. I still think Postgres is better, but the more severe of the problems MySQL had in its early days have been fixed, and honestly it's good enough for most sites. Certainly it's far better than MongoDB.


If it works for Facebook...


How would one use Meteor.js without MongoDB though? Are you saying not to use Meteor.js too? I was thinking about maybe toying with the idea of making a fully reactive RethinkDB package for Meteor.js.


You don't.

In fact it's the reason I'm not using Meteor, which is a shame because it looks really good. The optimistic UI feature is something I really want, and have had to implement my own custom solution for. I've nothing against Mongo per se, I just need a relational database.

Since most apps are likely better suited to using a relational database, I'm curious as to Meteor's original decision to go with Mongo. Would love to know their rationale behind that choice if anyone can enlighten me / point me towards any articles?


Definitely never use a technology that's married to a single backend.


I recommend setting aside these scare stories about MongoDB after you read the instructions on how to check that writes succeed, and look into replication. Sure, MongoDB has its problems, but a few years ago I helped a customer run MongoDB at very large scale.

Meteor.js is a fantastic development system (if you can live with needing sticky sessions and you need its features).


There is a wrapper for Postgres: https://github.com/austinrivas/meteor-postgresql


Yeah... but you have to give up many of the reasons for using Meteor (the dynamic refresh, etc.).

The package is really just a simple wrapper around an npm library.

Because Meteor is Node-based, any DB that Node has libraries for can be "used"... at the expense of giving up most Meteor features.


IIRC, Postgres can notify clients of changes, so you should be able to implement the missing features - that is to say, the limitations are artificial at this point.
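Presumably that refers to Postgres's LISTEN/NOTIFY; a minimal sketch (channel name invented):

```sql
-- One session subscribes to a channel:
LISTEN table_changes;

-- Any other session (or a trigger fired on writes) publishes to it,
-- with an optional payload identifying what changed:
NOTIFY table_changes, 'users:42';
```

A Meteor-style live-query layer could, in principle, be built on top of notifications like these.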


"you should be able to implement the missing features" -- me?? Not hardly; I don't work at Meteor.

Take it up with the Meteor.com people.


Meteor 1.2 will have SQL support [0] (and the discussion [1])

[0] - http://info.meteor.com/blog/whats-coming-in-meteor-12-and-be... [1] - https://news.ycombinator.com/item?id=9811583


There's a card on Trello asking for this as a future enhancement (I haven't followed the discussions for a while, though). https://trello.com/c/Gf6YxFp2/42-sql-support

However, I believe there's a few existing options for using postgres - https://github.com/numtel/meteor-pg

http://www.meteorpostgres.com/


There's a MongoDB JS API driver that talks to Postgres, IIRC. Postgres' 'json' data type is also apparently faster than Mongo's. Anyone played with it?


Or: You could just RTFM and avoid all these issues.

For example: the message warning you that 32-bit builds are not safe for storing more than 2GB of data is larger than the download button itself. If you don't see this, you should probably not be in charge of storing any data anyway.

Many other issues have been resolved with the 3.0 release or are documented very clearly.


1. There was no such warning for a long, long time.

2. It's a dumb limitation, and an architectural issue as far as I'm concerned.

3. It still doesn't justify silently throwing away data for years.

4. Many of the points on my list remain unaddressed, and likely never will be addressed.

EDIT: Additionally, the typical system administrator doesn't install MongoDB from the 'Downloads' page, but uses a package manager, and thus never sees the warning even if it's there.

Warnings like this belong in all the relevant spots in the documentation, not just on a download page.


That 32bit startup warning has been in place since 2009. It was in the README in 2008 (IIRC, pre-1.0).

https://github.com/mongodb/mongo/commit/b3322b86a014683018ac...

There's plenty in Mongo to complain about, but grousing about the 32-bit limitation just makes you look like you're bad at reading introductory documentation.


This doesn't explain the data loss consequences (remember the client default!), nor was it clearly visible on the download page - you'd have to explicitly look for it.

This was the case until as late as 2014: https://web.archive.org/web/20140704182658/http://www.mongod...

And again, the 'download' page or README is not sufficient for such a warning, nor is a startup message a reliable place to put it (because of initscripts and such). It should have been in all the relevant places in the documentation, instead.


Unacknowledged writes were a bad default. I think everyone with a shred of honesty will happily admit to that.

It was certainly in the documentation, in the blog posts, in the startup logs, and in the README for quite a long time. If you missed it, it's probably because you weren't paying attention. Again, there are plenty of good reasons to dislike MongoDB, but this isn't one of them, and harping on it substantially degrades your credibility.


> It was certainly in the documentation, [...] in the startup logs, and in the README for quite a long time.

All the wrong places, as already stated. These are not the places where developers actually look. I provided some better example locations here: https://twitter.com/joepie91/status/622920351731843072

> [...] in the blog posts [...]

Certainly not there. Go look around for MongoDB tutorials from the timespan when unacknowledged writes were the default; almost none of them remark on the 32-bit limitation.

> If you missed it, it's probably because you weren't paying attention.

Nonsense. It simply wasn't in the locations where people actually look. You can easily verify that using the Wayback Machine (and in fact, this is still mostly the case today).

> Again, there are plenty of good reasons to dislike MongoDB, but this isn't one of them, and harping on it substantially degrades your credibility.

If pointing out extremely dangerous past negligence from the MongoDB developers "degrades my credibility" in your view, then you have some very strange ways to determine credibility.


> All the wrong places, as already stated. These are not the places where developers actually look.

If you're not reading READMEs, installation notes, and server logs and instead depend on marketing splash pages to highlight the limitations of the technologies you use, you're a bad developer. Sorry.

> Certainly not [in blog posts]

http://blog.mongodb.org/post/137788967/32-bit-limitations

> It simply wasn't in the locations where people actually look. You can easily verify that using the Wayback Machine (and in fact, this is still mostly the case today).

I first started using Mongo in 2010 according to my gitlogs (after it having been on my radar for some time before that), and I very distinctly remember reading about the 32-bit limitation well before I'd ever installed it, because we were also using Varnish at the time, which has similar issues on 32-bit systems because it too uses (or at least, at that time used) memory-mapped files.

The claim that the limitation was not documented well, or was hidden, or was generally unknown at that time is just simply false.


The word 'loss' does not appear on that page. There's a world of difference IMO between a rejected operation (which is what I would assume from that description – that the address space exhaustion would be detected and the database would stop accepting writes) and irreversible data loss.


That's a consequence of their unacknowledged writes default - any write which failed for any reason wouldn't result in a rejection as far as the client is concerned (because the default was to shove the write request into the pipe and then continue without waiting for acknowledgement). As I mentioned earlier, most everyone acknowledges (heh) that as a bad default.

If you were checking getLastError on your writes, you would see the write fail when you ran out of address space.


I guess any substantially-advanced failure mode is bound to be multi-factorial. Makes sense; thanks for the reply.


> Unacknowleged writes were a bad default.

Bad defaults? It was defended as "this is OK because the system is distributed", and there was a joke about it helping ace all the silly benchmarks. So it wasn't "oops, we forgot to uncomment the fsync line in the code". It was a deliberate decision.

It was deceptive, and that is why many people hate MongoDB and wouldn't let it get even close to their data - because they have a track record of lying about the capabilities of their system.


Also don't forget that almost every driver ignored the defaults and used acknowledged writes anyway.


If you're on 32-bit systems, you're even more out of luck. With the release of version 3.0, MongoDB dropped commercial support for 32-bit systems. You can still get binaries, but without commercial support they'll probably stop making those for 32-bit systems soon enough.


> The message warning you that 32-bit builds are not safe for storing more than 2GB of data is larger than the download button itself. If you don't see this, you should probably not be in charge of storing any data anyway.

That's all well and good, but reporting writes as successful and silently failing when your data ticks over 2 GB, instead of throwing errors or doing something useful, is more what the complaint is about. Obviously, you could set up monitors on your data storage and alarm if it reaches 1.75 GB, but if you're mature enough to do that, you're mature enough to use a real database solution.


A database built by people who are careless with my data is not a database worth paying attention to, even when particular problems are remedied.


The author is cataloguing some of Mongo's "paper cuts," i.e., issues for which warnings and known workarounds exist but that really make the development experience less pleasant.

For MongoDB's actual limitations, read aphyr's Call Me Maybe series.


Right. I am so sick and tired of MongoDB defenders saying "read the manual" after aphyr has found multiple faults in MongoDB that directly conflict with the documentation.


To be fair, aphyr analyzed Mongo's clustering behavior and found a lot of problems. Which has been a consistent refrain in the series - apparently clustering a datastore is hard. ElasticSearch and RabbitMQ fare poorly as well.

Also, it's not like Postgres even tries to offer clustering. So what are you comparing Mongo to? There's no reason to believe that a single-node Mongo instance will be any less reliable than a single-node MySQL or Postgres. At least, nobody has presented that argument.


Mongo is actually great for very small datasets. But they sell it as a clustered solution (mongodb.org is currently featuring a giant banner that reads, "Agility, scalability, performance. Pick three"), so it's fair to judge them on that.


I agree, but I'm getting the sense that some people (including the OP) are using the CMM article to claim that Postgres is more reliable. Postgres doesn't have this kind of clustering feature at all. Not a fair comparison.


I actually started learning NoSQL on MongoDB but later switched over to CouchDB and never looked back since.

Seems like a lot of other companies are switching too (e.g. Viber). Here's a video from their tech team explaining the reasons behind the switch: https://www.youtube.com/watch?v=R5JpRrMJVIA


I did the MongoDB class they offer, but was always scared to use it in production (after reading so many bad experiences). And it's really too bad; I find JSON just so much more pleasant to deal with than SQL.


Apples and oranges. JSON is a format (Object Notation), SQL is a query language. Some databases support queries of JSON structured data using SQL.


Sure. But I'm just stating which I'd rather deal with. Not claiming equivalency....

Does MongoDB allow SQL queries? I didn't know that, but then again, I guess I don't care either. I haven't looked at Mongo in a couple of years after deciding not to use it. I do recall liking the query syntax from the class though.

I haven't messed with JSON features in PostgreSQL yet and have no idea how they work. That's next.


Not even a hackathon?


MongoDB has THE WORST vendor lock-in imaginable, so only use it if you know that you could throw everything away.

Source: my last employer used MongoDB, and switching to an RDBMS would have forced us to rewrite ~80% of the codebase.


About the 80% rewrite: it is a poor-abstraction problem rather than a vendor lock-in one.


Abstracting over your database only makes sense if you're writing something that's lowest common denominator. If your application runs equally well on MongoDB and something else you either are running poorly on both or you are spending a lot of resources on writing your abstraction layer.


Yeah, that contributed, but it wasn't any worse than most other CRUD apps I've seen.


"you know that you could throw everything away" - to be fair, that's like 99% of all hackathons I've been to.


If your data really doesn't matter, you might as well just use ElasticSearch as a document store; it's a lot easier anyway.


I've built real-world products with both ES and Mongo as primary datastores. ES is a hundred times more complicated, with a really awkward query syntax and a lot of unintuitive behavior. And, from the Call Me Maybe series, ES is no more reliable than MongoDB in a cluster.

ES is amazing for what it does, but its use cases don't overlap heavily with Mongo (or RDBMSes).


I doubt ElasticSearch is easier to set up and get started with than MongoDB, especially if you're coming from limited knowledge about either.


ElasticSearch is trivial to set up, especially if we're talking about a small project like a hackathon.

It's not without its problems but ES's first-fifteen-minutes story is pretty good.


ElasticSearch takes less than 5 minutes to set up, and you literally just perform HTTP requests against it. In my opinion it is much easier than setting up MongoDB.


Use Redis, then.


Actually, Redis is as safe as PostgreSQL. Read the documentation, please :)


Why?


ES is wicked easy to set up and use, but it's not recommended as a primary data store because of split-brain scenarios, IIRC.


OP was talking about hackathons.


They've sponsored tons of hackathons, so it's super popular there. But to me, a big part of hackathons is learning something new, and MongoDB is apparently just not worth learning.


This points at a bigger issue, to be honest: it seems MongoDB is popular purely because of marketing and intentional hype.

Evidence for those claims that MongoDB is "fast" never materialized, nobody really knew where the claim came from, yet it was constantly repeated. They sponsor a lot of hackathons, and so on.



I think people only use it because it stores JSON and lets you query it arbitrarily on demand. It appears to be as good as CouchDB, but with on-demand queries; as good as Postgres, but with JSON (!). It appears to be quick and easy and the right tool for the job.

Also, when people say NoSQL they actually mean MongoDB.


> as good as Postgres, but with JSON (!).

PostgreSQL has supported JSON for a good while now, I've even explicitly linked to it in my post. And no, MongoDB is certainly not 'as good' - many of the problems listed simply do not exist in PostgreSQL.


I know. I'm just trying to say what I think people in general think about MongoDB.


> Also, when people say NoSQL they actually mean MongoDB

That's just not true. There are tons of NoSQL solutions that are well known. Redis, for example.


In fact it's quite literally the opposite of true.

Edit: In the sense that there are many, many well-regarded and used NoSql databases.


Cassandra, Couch, Redis, Riak, Dynamo...


Right, I know that, but thank you for listing.


Those are the databases that most people mean when they say NoSQL.


You're not authorized to speak in the name of "most people".


Postgres, MySQL, RethinkDB...



