Hacker News
MongoHQ releases SSD powered databases, autoscaling beta (mongohq.com)
46 points by mrkurt 1607 days ago | 26 comments

MongoHQ is awesome, and their control panel is hands down the best among the MongoDB PaaS providers, but their pricing is prohibitively expensive.

For example, you can get a 2-core VM with 2GB of memory and 20GB of SSD-backed storage from DigitalOcean for $20 a month. So buying three VMs from DigitalOcean for a replica set totals $60 a month. Essentially the same thing at MongoHQ is $500 a month.

I completely understand that you pay more for using a PaaS, but that much more?

(I'm one of the MongoHQ founders)

It's less about the hardware and more about the DBA service. Most companies shouldn't be spending engineering time setting up DBs and don't have the expertise to run them when they're "catastrophically successful". We do know what to watch for, and have on-call DBAs who respond to issues within about 10 minutes, 24x7.

You're also paying for backup, full-time monitoring, and support - which, while not as nice as having a full-time DB ops person on staff, is really nice. The MongoHQ guys have done seamless DB migrations for us and suggested architectural improvements, all with amazingly short response time. For now, we're happy paying them for that service.

I am going to echo mrkurt's sentiments on this one. The cost is really more about management and the ability to buy the opportunity to scale without real friction, less about the cost of the backing services. In general, that's what has made AWS so successful, and why PaaS is gaining in popularity. Granted, it's not always trouble-free, but for a large portion of the industry it alleviates a HUGE pain point, which IMHO is worth its weight in gold.

Ultimately, MongoHQ are pros and you're paying for their time and expertise. I am pretty sure that if you were going to buy consulting time from them, it would cost a lot more than what they charge for their service.

With a username like "nodesocket", I assume you are not their target audience as you are just as comfortable rolling your own and getting your own 10gen support contract.

Actually I've used MongoHQ as well as MongoLab for development, but for http://commando.io it made sense to just purchase three virtual machines on SoftLayer and set up MongoDB ourselves. Backups were quite easy as well; a few hours later we had a working bash script which stored backups locally, as well as synced them to S3.
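For anyone curious, the flow is simple enough to sketch. Here's a minimal Python version of the same idea (our actual script was bash; hostnames, paths, and the bucket name below are made up):

```python
#!/usr/bin/env python3
"""Sketch of a roll-your-own MongoDB backup: dump locally, compress,
copy the archive to S3. All paths and names are illustrative."""
import datetime
import subprocess

def backup_commands(host="localhost",
                    backup_dir="/var/backups/mongo",
                    bucket="s3://example-bucket/mongo-backups"):
    stamp = datetime.date.today().isoformat()
    dump_dir = f"{backup_dir}/{stamp}"
    return [
        # --oplog gives a point-in-time snapshot on a replica set member
        ["mongodump", "--host", host, "--oplog", "--out", dump_dir],
        # compress the dump directory before shipping it off-site
        ["tar", "czf", f"{dump_dir}.tar.gz", dump_dir],
        # copy the archive to S3 (assumes the AWS CLI is configured)
        ["aws", "s3", "cp", f"{dump_dir}.tar.gz", f"{bucket}/{stamp}.tar.gz"],
    ]

if __name__ == "__main__":
    for cmd in backup_commands():
        subprocess.run(cmd, check=True)
```

Run it from cron on a secondary so the dump doesn't add load to the primary.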

This pricing is absolutely insane for anyone with a decent-sized dataset. It'd cost about a quarter million dollars a year to store a terabyte of data?!

Can somebody explain why mongodb storage space is always so expensive and limited on mongodb hosting services? (Not just these SSD plans)

And the plans also start with so little space.

Because it's more profitable for them to charge more.

And they're catering to people who either (a) don't know how cheaply you can run a database or (b) know how expensive it is to run a database well.

The number of times I've seen a database go down in production, either because its solitary mirror failed with no one noticing or because it wasn't considered "mission critical" enough to have a mirror... you're not just paying for the hardware, you're paying for the guys managing the hardware. If you're a company with databases and you don't have someone on staff whose primary job is making sure your databases still exist from day to day, one day they won't.

This. We have a small team for a particular project I'm on and we were initially hosting mongo ourselves. After reading so many horror stories, we felt much more comfortable outsourcing configuration, management, and backups until we get to the point that we can have a dedicated DB person.

Given that you were motivated by "horror stories", did you also consider changing data stores?

That's fair. Yes, we did consider changing. But we haven't had any issues ourselves or a real need for a structured database at this time. If we do in the future, transitioning won't be extremely painful because most of our historical data isn't critical; it would mainly just be user accounts.

For us, saying, "oh shit, a handful of other people had a bad experience – we should drop everything and move our data to postgres" would have been a bad decision and a waste of time/money.

Thanks for the reply. I'm a pro-RDBMS bigot, but path dependency is path dependency. In your position I would probably make the same decision.

As far as I can tell they don't offer either SSL or SSH tunnelling, so there's no way you can securely connect to the database remotely. Without those you're sending your password and data unencrypted over a public network.

By default Mongo uses MD5 for password hashing (with the username as a salt), so it's not secure to use over a public network.
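To spell that out: the legacy challenge-response auth derives its shared secret with a single MD5 pass, which is cheap to brute-force offline if someone sniffs the handshake. A rough sketch of how that digest is computed (as I understand the scheme; values are illustrative):

```python
"""Sketch of MongoDB's legacy challenge-response (MONGODB-CR style) digest.
Shows why sniffing the unencrypted handshake is dangerous: the secret is
just one MD5 of username:mongo:password. Inputs here are illustrative."""
import hashlib

def password_digest(username, password):
    # the stored/derived secret: MD5(username + ":mongo:" + password)
    return hashlib.md5(f"{username}:mongo:{password}".encode()).hexdigest()

def auth_key(nonce, username, password):
    # what the client sends back: MD5(nonce + username + digest);
    # an eavesdropper who captures nonce + key can brute-force offline
    combined = nonce + username + password_digest(username, password)
    return hashlib.md5(combined.encode()).hexdigest()
```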

(if your server is co-located on the same network as MongoHQ, then you can probably work around this using network access rules)
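Failing that, the usual workaround is an SSH tunnel to some box that can reach the database over a private network, so the Mongo wire traffic never crosses the public internet in the clear. A sketch (host, user, and ports are made up):

```python
"""Sketch of the SSH-tunnel workaround: forward a local port through SSH
to the remote mongod, then point the client at localhost. The hostname,
user, and ports below are illustrative."""

LOCAL_PORT = 27018

def tunnel_command(user="deploy", remote_host="db.example.com",
                   local_port=LOCAL_PORT, remote_port=27017):
    # -N: no remote command; -L: forward local_port to the remote mongod
    return ["ssh", "-N", "-L",
            f"{local_port}:localhost:{remote_port}",
            f"{user}@{remote_host}"]

def mongo_uri(local_port=LOCAL_PORT):
    # clients connect to the tunnel endpoint, not the public address
    return f"mongodb://localhost:{local_port}/mydb"
```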

We do offer an SSL option if you email support@mongohq.com.

My impression has been that Mongo has great performance as long as your working set is totally in RAM, after which it craters (due to the use of MMAP'd files as the core storage engine and relying on the OS's paging algos). To what extent is this true on SSDs? Can you get away with a higher working-set-to-RAM ratio?

We've generally found that 1GB of RAM per 10GB of data is best for common Mongo setups. SSDs make a huge impact on the to-disk performance, which does help with the working set.

SSDs also help with writes and can help mitigate the infamous global write lock problems in Mongo.

Any tests on ZFS?

I don't understand all the hate about MongoHQ's pricing. I run a Mongo replica set in production on bare EC2 and it works well, but there is a lot of reading/research involved in setting it up correctly and handling any Mongo issues that arise. Nothing stops you from doing that, and 10gen even provides a beautiful AWS CloudFormation template to make it easy to provision a replica set with EBS RAID. However if I were not a senior-level Linux engineer I would totally go for a managed service for anything that was revenue-generating. Assuming it's not a personal/bootstrap situation, spending a few hundred bucks a month is better than risking downtime, or just as bad, scaling pains. The value in MongoHQ and others is their people, not their hardware.
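For scale, the actual replica set setup is only a few lines once the instances exist; it's everything around it (monitoring, failover drills, backups) that eats the time. A sketch of the config you'd pass to replSetInitiate (hostnames are made up):

```python
"""Sketch of a three-member replica set config like the EC2 setup
described above. Hostnames are illustrative; the pymongo call is
commented out so this runs without a live cluster."""

config = {
    "_id": "rs0",  # replica set name, must match mongod's --replSet flag
    "members": [
        {"_id": 0, "host": "mongo1.example.com:27017"},
        {"_id": 1, "host": "mongo2.example.com:27017"},
        {"_id": 2, "host": "mongo3.example.com:27017"},
    ],
}

# To actually initiate it, against one of the members:
# from pymongo import MongoClient
# MongoClient("mongo1.example.com").admin.command("replSetInitiate", config)
```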

It's not the pricing I don't like, it's the relationship between pricing and storage space. I doubt when I go from a 0.5GB database to a 1GB database that they have to double their support team as well.

Heroku's postgres is just as expensive as (or more expensive than) mongohq/mongolab, but they don't limit storage at all (except on the dev-tier plans).

<-- MongoHQ Cofounder here

Database storage space correlates pretty well to the management complexity, at least with our customer base.

Offering unlimited storage on a DB is somewhat disingenuous; there are effective resource limits that matter long before disk space, and those (RAM/IO capacity) are the things that cost actual money.

Is there a possibility of forgoing EBS storage, using instance storage, and creating incremental backups to S3 using a secondary server in a replica set? Is there a reason this would be a bad idea? It seems like it would cost a lot less.

Agreed. I have to do things like store large objects in Amazon S3 when I'd really rather just stick them in GridFS. The pricing is just way too high to do anything else.

GridFS is great for convenience, but blobs are not Mongo's strength. Large files should almost always be stored in something like S3, regardless of price.

We've used MongoHQ extensively for the last 2 years and have always found them to be excellent. The team is both incredibly dedicated and knowledgeable. They are the experts in anything MongoDB related. If you're thinking of using MongoDB you should use them. No questions asked.
