MySQL v PostgreSQL: Which have you used and why?
25 points by babul on May 8, 2008 | hide | past | favorite | 43 comments



We started with MySQL and switched to PostgreSQL when some unit tests started to fail after doing a minor MySQL upgrade. Something like going from 5.0.1 to 5.0.2. We tracked this down to MySQL giving completely different result sets for the same (not very complex) query after the small upgrade.

We never looked back. PostgreSQL is really powerful and we use it in a clustered and soon sharded setup. Works well for us.


How is PostgreSQL clustering going? I heard that was a pain. What's your setup?


Slony. It is far from perfect, but at least it's a good, stable way to keep a backup database ready.


That's crazy that there were changes to query results from such a minor revision. I haven't experienced anything like that, but maybe my unit testing could be better.


What did you use, and how were you doing the unit tests (e.g. tools/processes)? Thanks.


Simple JUnit tests for a Repository style class that does database interaction. I can't remember the exact query or details anymore. Just that we were extremely surprised to see that switching back and forth between minor releases gave different results.


We use postgresql at http://massify.com

I don't have enough experience with MySQL to be objective. I chose PostgreSQL because it had a feature set relatively on parity with Oracle and the other big boys. So far I haven't been disappointed.

Well, except for tool support. I'm still looking for a good schema diff tool for it (something as brilliant as SQL Compare by Red Gate) and a good ER tool (something as simple but functional as Enterprise Manager for SQL Server).

Otherwise, I really haven't considered looking at anything else.


Have you tried DbVisualizer? I don't think it does schema diff, but it covers quite a few other features in Enterprise Manager and can extract ERDs from your schema. It's relatively cheap too (or free with limitations) and works with any database with a JDBC driver.

There's also Aqua Data Studio. I've never used it, but it looks pretty awesome. A little pricey, though.

http://dbvis.com/products/dbvis/ http://www.aquafold.com/


Haven't tried DbVisualizer.

Huge fan of Aqua Data Studio though, great tool.


That's pretty much my exact same story. I came from an Oracle shop and wanted the same power that Oracle offered, but without paying $250k/yr in licensing fees. Postgres came the closest, and I've been really happy with it so far.


>and a good ER tool

HappyFish (http://www.polderij.nl/happyfish/gallery.html) is elegant, simple, inexpensive and powerful and works with Postgres and MySQL. I have run it under wine on linux.


I've been looking at tools too. My friend recommends http://www.datanamic.com because his lecturer at Imperial says it is good, but I have yet to try it.


I rather like Enterprise Architect as an ER tool.


I've recently been through this decision process. I had v1.0 of http://shellshadow.com using MySQL. I ran into one bug caused by my assumption about how MySQL was handling unique constraints, and from there started to look at Postgres again. It turned out my decision to move to Postgres, which I put in production last month, was more about the timing of pg 8.3 coming to maturity than absolutes against MySQL.

Here are a few points that tipped things to Postgres:

1 - Constraints and complex queries. MySQL handles simple selects fast but has somewhat unpredictable behavior on complex queries, including counts. Postgres generally scales well in proportion to the complexity of the query.

2 - Postgres 8.3 came out. This was a big deal, as it cleans up lots of things that went into 8.x.

3 - BSD-style engineering. Everything from docs to source code to makefiles is more rigorous with old-school BSD projects. FreeBSD and Postgres are clean!!! I don't want to hammer on the Linux/GNU/GPL camp too much, but to be honest, that camp is stable because of a massive user base and companies putting in effort to make it stable. It's a sledgehammer approach. In general I find projects like FreeBSD and Postgres follow more rigid engineering processes. I like this in my DB ;)

4 - Full-text search built into Postgres 8.3. MySQL has FTS, sort of. Postgres prior to 8.3 had it as an add-on. You can run external full-text search engines like Lucene or Ferret, or maybe even better, Sphinx, but this adds one more "engine" to manage. Most real-world FTS use cases need their results filtered in the context of user prefs and sorted by other DB criteria. This means that for many FTS use cases, it is very desirable to just give the DB one big complex query and have it return things all sorted out. Using an external FTS system, you usually end up taking your FTS results back to the DB for further queries to filter and sort them, which adds to app code complexity. With Postgres 8.3, you can do this in ways that MySQL cannot begin to compete with.
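To make point 4 concrete, here's a sketch of the kind of single query 8.3's built-in tsearch makes possible; the table and column names (articles, body, owner_id) are made up for illustration:

```sql
-- Full-text match, an ordinary relational filter, and ranking in ONE query.
-- Hypothetical schema: articles(id, title, body, owner_id).
SELECT a.id, a.title
FROM articles a
WHERE to_tsvector('english', a.body) @@ to_tsquery('english', 'backup & replication')
  AND a.owner_id = 42                      -- per-user filtering in the same pass
ORDER BY ts_rank(to_tsvector('english', a.body),
                 to_tsquery('english', 'backup & replication')) DESC
LIMIT 20;
```

With an external engine like Sphinx, the WHERE clause filtering would typically become a second round trip back to the database.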

Hope this rationale helps. I am very happy with my decision to switch from MySQL to Postgres, and the process was not painful.

good luck, Jon www.shellshadow.com


I thought about that too, and the timing of the new release is a big factor in the decision making.

A year ago it would have been MySQL, just because everyone else who is big uses it (e.g. PayPal, Google, Amazon, etc.), but now I really think PostgreSQL is the better choice and will be going ahead with it until I can think of a reason not to.

It is now just as fast, but much more robust.

Thanks


> has somewhat unpredictable behavior on complex queries. Including counts.

By default, MyISAM compresses its primary key: http://www.mysqlperformanceblog.com/2006/05/13/to-pack-or-no...

So you get bad performance when the primary key is accessed in descending order: http://bugs.mysql.com/bug.php?id=17847

If this corrupted, you get a different row count when you order by ascending or descending key. I've seen this in the wild. Unfortunately, this possibility has been repeatedly missed by developers: http://bugs.mysql.com/bug.php?id=28542 http://bugs.mysql.com/bug.php?id=28658 http://bugs.mysql.com/bug.php?id=35605

MySQL is a nice bit bucket but I'll continue using Postgres for anything important.


While Postgres seems nice, MySQL is still the choice of the majority of high-traffic sites that absolutely need to scale. I'm not saying Postgres is bad, but its only huge-volume user is Skype. It simply doesn't have as many huge, high-traffic users such as craigslist, eBay, Ticketmaster, Facebook, Digg, del.icio.us, Flickr, LiveJournal, Wikipedia, YouTube, etc.

Those sites need to scale an order of magnitude (or several) greater than sites like CD Baby or Macworld.

Just look at: http://www.postgresql.org/about/users

Other than Skype, it doesn't have sites as heavily used as the ones I mentioned that go for MySQL. Postgres isn't bad, but it just hasn't been tested at the same level. Could it do better at that level? It IS possible. I prefer to go for a sure thing.


That was probably because, a few years back when those sites were launched, MySQL was much faster. Now PostgreSQL is as fast (in many cases even faster) BUT much more robust out of the box (no corruption worries like with MySQL) and more feature-rich (FTS etc. is better by default).

I think today, when building, the choice is not as one-sided in favour of MySQL.


Reddit is also Postgres, if you consider that high-volume.


Did not know this -- very cool.

One of the only reasons I still use MySQL is that PostgreSQL would break some Drupal sites I deployed. I miss the ease of obtaining relational integrity under Postgres (although pgsql's EXPLAIN can be horrifying compared to MySQL's).

Have scaled up to saturating an OC12 with Postgres (17 web servers) and doing 30 million hits/month with MySQL, so it's not like I care that much, but MySQL has a nasty tendency to piss me off in corner cases, and I'm getting tired of it.

Definite pros and cons to both. I've seen people stick with (and patch) a particular point release of MySQL (e.g. 3.23 for AdWords at one point) just because the errors were consistent and they were no longer doing much to the schema. Never seen that happen with PostgreSQL.

Jeremy Zawodny's tools (mytop, etc.) are invaluable if you have to get a MySQL shop to scale up (eg. without migrating a metric fuckload of existing SQL), btw.


  pgsql's EXPLAIN can be horrifying compared to MySQL's
Really? I'm curious if you could explain what you found unintuitive about PG's EXPLAIN.

BTW, there's a "ptop" (pg_top) project for PostgreSQL as well.


psql's EXPLAIN basically shows you the execution plan, IIRC. MySQL's EXPLAIN provides the information about what keys will be hit for which joins in a more compact fashion.


An interesting question is what they would choose if they could do it again today.



I had some problems initially getting the question posted here. The second post asks a similar question with a slight variation. I am really interested in this at the moment, so sorry if this seems to be a double posting. Thanks.


I initially wanted to settle on MySQL for the sake of simplicity. But got bitten twice by it and have since changed my mind.

a) Once I had to store a large chunk of time series data in a MySQL 'TEXT' field. Got errors in the application and could not figure out where they were coming from.

Turned out, MySQL has a 65,535-character limit for the TEXT type, and I should have used LONGTEXT. MySQL has simply bitten the tail off my data and happily stored the rest without any errors or warnings. OK, I should have RTFM but still, the idea of a database corrupting my data without any warning makes me very uncomfortable...
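For what it's worth, MySQL can be told to reject oversized values instead of silently truncating them; a minimal sketch, assuming you control the session and an illustrative samples table:

```sql
-- Make MySQL raise an error on truncation instead of clipping data silently.
SET SESSION sql_mode = 'STRICT_ALL_TABLES';

-- With strict mode on, this fails loudly if the value exceeds TEXT's
-- 65,535-character limit, instead of storing a truncated copy.
-- (samples and @big_time_series are hypothetical.)
INSERT INTO samples (payload) VALUES (@big_time_series);
```

The catch is that strict mode is off by default in MySQL of this era, so you have to know to turn it on.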

b) The second issue I've discovered is simply the ability to make 'hot backups' to SQL text files. pg_dump will produce a consistent backup from a working DB server; mysqldump requires you to stop the server or lock its tables, etc. A 'hot backup' of MySQL databases can be achieved through other methods/tools, but at the cost of greater complexity.


"MySQL has simply bitten the tail off my data and happily stored the rest without any errors or warnings. OK, I should have RTFM but still, the idea of a database corrupting my data without any warning makes me very uncomfortable..."

That is just insane. I heard about it a couple of months ago and wondered what those folks were smoking when they wrote it like that.

The hot backups thing never crossed my mind, but it's very interesting. Of course, it relies on transaction consistency, and MySQL has the option (shivers) of not having it, hence the locking demands.

From what I know (I am a PostgreSQL guy too), when you use a transaction-aware storage engine in MySQL, these behaviours are gone (along with any performance edge it may have over PostgreSQL, but that's another story), so you may want to give it a try.


> the ability to make 'hot backups' to SQL text files. pg_dump will produce consistent backup from working db server, mysqldump requires you to stop the server or to lock its tables etc. 'Hot backup' of mysql databases can be achieved through other methods/tools but at the cost of greater complexity.

Run a slave, stop it, do mysqldump --single-transaction --master-data yourdbname off the slave to get a hot backup, and then restart the slave to get hot copies without shutting down writes. (I am going to assume that you round-robin your data into memcached from the read slaves if you are worried about a read lock on the master)

Not saying that you shouldn't use Postgres, but if you have to work with MySQL, the technique above produces consistent hot backups from 4.0.18 onwards.


Started with MySQL, switched to Postgres for the latest project, but the latest project doesn't have any significant traffic yet, so I can't give you a field report.

MySQL can have fairly unpredictable performance characteristics if you're not an expert in it. It certainly can scale, but it's very easy to kill your performance if you're indexed wrong or have the wrong table type or wrong cache settings or run into table locking issues. (And these aren't theoretical issues; we've run into some of them at past MySQL-based websites.)

I don't have the time to spend debugging cache settings and EXPLAIN SELECTs and locking issues, so I went with Postgres, which supposedly has many fewer pitfalls "out of the box". Been happy with it so far, for development, but haven't needed to scale it yet.


PostgreSQL here, for all the reasons stated by other commenters, with an additional proviso: we were a Postgres shop when I arrived and nobody has ever seen a reason to change.

Sometimes we'll think wistfully of more OTS solutions for Oracle and MySQL, in a grass-is-greener sorta way, then we see stuff like this: http://bugs.mysql.com/bug.php?id=11472


I've been using PostgreSQL since 2004 on http://ourdoings.com/

I've been watching free-software databases since 1997, when people at MIT were using msql and I thought mysql would be a better choice. PostgreSQL looked like it had the potential to be better than mysql, but it wasn't fast or stable yet.

A few years later I tried PostgreSQL 7 and it was great.


Our main product uses MySQL 3. We are unable to upgrade it to version 5 because of a key violation problem. Only later versions will use MySQL 5, so we are not able to use UDFs or stored procedures. When developing another strategic product, Ricardo Banffy used PostgreSQL with very good performance and nice UDFs and stored procedures.


PostgreSQL for many reasons. If you use MySQL check out http://sql-info.de/mysql/gotchas.html to stay on top of potential problems.


We switched from MySQL to Postgres because MySQL does not scale. MySQL rebuilds all indexes for InnoDB tables (and all other storage engines) on any trivial "alter table" even though InnoDB indexes refer to records using the primary key. So just adding a column to a big table means taking down your database for hours, even though the rebuilt indexes will be exactly the same as before the rebuild. So dumb... We are happy with Postgres.


Postgresql

1. License less confusing

2. Great mailing list support


There is also enough commercial support for when you have to bring the marines in to fix stuff.


I have used both, MySQL has the edge on friendliness but Postgres is overall the better database. A dual or quad Opteron system with a decent amount of RAM and well-tuned Postgres should be able to handle queries sufficient to drive a web site at 50 to 100Mbps continuous bandwidth, assuming no ridiculous queries.


I tried MySQL like 2 years ago and out of the box it preferred to insert a known-invalid "000000" date rather than aborting loudly.

The last thing I want my DB to do is corrupt my data silently. There were other braindead "philosophy" problems but I don't remember what they are.

Been a happy user of Postgres ever since.


Both. We use Ruby on Rails both on Windows and Linux. MySQL and PostgreSQL sometimes work on one, sometimes on the other. If one does not work we just switch to the other. Often small revision changes in the DB or in Rails will break one or the other.


MySQL. Simple to setup for non-complex websites. And I wonder if MySQL's rise to billion-dollar acquisition was fueled by the growth of PHP (read: non-CGI) websites.


I just ran into a case where I needed a FULL OUTER JOIN and MySQL let me down. It looks like PostgreSQL would have saved me from UNIONing two queries.
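The UNION workaround looks roughly like this (table and column names are made up); a LEFT JOIN unioned with the mirror-image RIGHT JOIN approximates the FULL OUTER JOIN MySQL lacks:

```sql
-- Emulating FULL OUTER JOIN in MySQL: keep unmatched rows from both sides.
-- Hypothetical schema: users(id), orders(id, user_id).
SELECT u.id AS user_id, o.id AS order_id
FROM users u LEFT JOIN orders o ON o.user_id = u.id
UNION
SELECT u.id AS user_id, o.id AS order_id
FROM users u RIGHT JOIN orders o ON o.user_id = u.id;
```

UNION (as opposed to UNION ALL) deduplicates the matched rows that both halves return; in PostgreSQL the whole thing collapses to a single FULL OUTER JOIN.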


We are building our product on Postgres, because we like the SQL features that MySQL doesn't have, such as constraints and checks, or really much of anything relational.
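As a small example of the kind of declarative integrity meant here (the accounts table is illustrative): Postgres enforces CHECK constraints, while MySQL of this era parsed the syntax but silently ignored it:

```sql
-- Postgres rejects any INSERT/UPDATE that violates the CHECK.
-- MySQL at the time accepted this DDL but did not enforce the constraint.
CREATE TABLE accounts (
    id      serial  PRIMARY KEY,
    owner   text    NOT NULL,
    balance numeric NOT NULL CHECK (balance >= 0)
);

INSERT INTO accounts (owner, balance) VALUES ('alice', -5);  -- fails in Postgres
```

So the same schema silently loses an invariant when ported to MySQL, which is exactly the kind of surprise being complained about in this thread.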


MySQL. Easy to setup replication and good tool support.



