
MySQL - Do Not Pass This Way Again - craigkerstiens
http://grimoire.ca/mysql/choose-something-else
======
viraptor
I'm really not sure what to think of that article. On the one hand, I
definitely agree with it and I've experienced many issues with MySQL.

On the other, there are so many... strange points, it's hard for me to trust
the author about the parts that are new to me. Things I've found weird so far
are:

\- "my favourite example being a fat-fingered UPDATE query where a mistyped =
(as -, off by a single key) caused 90% of the rows in the table to be
affected," - if I ever run "rm -rf . /" fat-fingering the space, I'm going to
blame myself only - not fileutils or bash - this has nothing to do with the
database

\- (about backups) "Unless you meticulously lock tables or make the database
read-only for the duration," - this is not trivial, but logging onto slave and
doing "FLUSH TABLES WITH READ LOCK, sync, snapshot, UNLOCK TABLES" is not
rocket science either. And it's well documented on their "backup methods"
page.

\- "It's unrealistic to expect every single user to run SHOW CREATE TABLE
before every single query, or to memorize the types of every column in your
schema, though." - ... yeah... we shouldn't ask them to remember the syntax
either - just keep guessing until you get everything right ;)

\- "Foreign keys are ignored if you spell them certain, common, ways" -
another case of "I want to use the wrong syntax, but still get the right
answer"

I really wish he had limited himself to hard facts - the main idea of the
article wouldn't suffer at all. There are enough things to hate in MySQL
without going into the subjective and "inconvenient, but still ok" parts.
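For reference, the slave-snapshot procedure from the second point above looks roughly like this. This is only a sketch: the actual snapshot command depends entirely on your storage stack (LVM is assumed here purely for illustration), and the volume name is made up.

```sql
-- In one client session on a replica (never the master); the read lock
-- holds only while this session stays open.
FLUSH TABLES WITH READ LOCK;
-- From a shell, while the lock is held:
--   sync                                   # flush OS buffers to disk
--   lvcreate --snapshot --name dbsnap ...  # or your filesystem's snapshot tool
UNLOCK TABLES;
-- Then mount the snapshot, copy the data files off, and drop the snapshot.
```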

~~~
epo
The first thing you found weird relies on selective quotation: you omitted
"because of implicit string to integer conversion". The second is an apologist
"but it's well documented" defence. Your third point changes the subject -
he was criticising implicit type conversion, which you again ignore. The fourth
point is not about wanting to use the wrong syntax; it is about MySQL accepting
valid syntax and then ignoring it.

So your whole post is nothing more than simple fanboy apologism.
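For what it's worth, the class of typo being debated is easy to reproduce. The sketch below uses SQLite via Python rather than MySQL (the relevant behavior is shared: an arithmetic expression in WHERE is truthy whenever it is nonzero); the table and column names are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, "user%d" % i) for i in range(1, 11)])

# Intended: WHERE id = 5 (update one row).
# Typo:     WHERE id - 5 (the '-' key sits next to '=').
# "id - 5" is nonzero, and therefore truthy, for every row but one.
cur = conn.execute("UPDATE users SET name = 'oops' WHERE id - 5")
print(cur.rowcount)  # 9 of 10 rows affected
```

The article's deeper complaint is that MySQL's implicit string-to-integer conversion widens the set of typos that silently succeed; the arithmetic variant shown here is just the easiest one to demonstrate portably.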

~~~
viraptor
I'm about as far from being a MySQL fanboy as reasonably possible, so no,
that's not it. What I'm trying to say is: mixing subjective and objective
criticism makes the argument weaker. Don't complain that you lost your data
due to a typo (the statement worked as designed, you had backups, this was a
test database and you'd never do that in production anyway - right?). A
complaint about the lack of consistency and failure to adhere to standards is
much stronger, and still true.

------
dangrossman
I appreciate this well-argued piece of persuasive writing for not choosing
MySQL, but the premise is surprising -- I don't recall seeing, on Hacker News
or elsewhere, any writeups from companies that chose MySQL then ran into
significant problems they had to architect around, nor writeups from companies
that chose MySQL then had to rip everything out to switch to something in the
same family of solutions (Postgres, Oracle, etc).

~~~
jes5199
I worked at a large startup that was trying to migrate from MySQL to Postgres
because of how long table migrations take on MySQL. Adding a column to a
table in MySQL tends to be stop-the-world for the whole server, and it can
take hours. They had some pretty serious workarounds for that, and
some of them were "try to never change the database structure". There was some
thought that there were other problems that Postgres would solve, some
advocacy happened here: [http://corner.squareup.com/2011/06/postgresql-data-
is-import...](http://corner.squareup.com/2011/06/postgresql-data-is-
important.html)

~~~
reinhardt
Percona's OSC [1] pretty much fixes the migration pain without locking.

[1] [http://www.percona.com/doc/percona-toolkit/2.1/pt-online-
sch...](http://www.percona.com/doc/percona-toolkit/2.1/pt-online-schema-
change.html)

~~~
tene
We recently spent a few weeks trying to make it work in production, and
couldn't. We always ran into deadlocks due to gap locking and auto_increment
columns.

After a few weeks of failure, we finally gave up and did it the slow way:
take a slave out of rotation, do the alter offline, catch up on replication,
and fail over. Repeat as needed.

Percona's OSC is great for the cases where it works, but there are still many
cases where it doesn't.

------
zzzeek
this is my favorite MySQL "decision", that the GROUP BY keyword by default
(that is, unless you turn it off with the late-added magic flag
ONLY_FULL_GROUP_BY) will gladly select an essentially "random" (well, the
first row based on INSERT order, which in SQL is as good as random) row for
you:

    
    
        mysql: create table data (token_a varchar(10), token_b varchar(10));
    	Query OK, 0 rows affected (0.05 sec)
    
        mysql: insert into data (token_a, token_b) values ('A', 'A');
    	Query OK, 1 row affected (0.01 sec)
    
        mysql: insert into data (token_a, token_b) values ('A', 'B');
    	Query OK, 1 row affected (0.00 sec)
    
        mysql: insert into data (token_a, token_b) values ('B', 'B');
    	Query OK, 1 row affected (0.00 sec)
    
        mysql: insert into data (token_a, token_b) values ('B', 'A');
    	Query OK, 1 row affected (0.00 sec)
    
        mysql: select * from data group by token_a;
    	+---------+---------+
    	| token_a | token_b |
    	+---------+---------+
    	| A       | A       |
    	| B       | B       |
    	+---------+---------+
    	2 rows in set (0.00 sec)
    

Note here that the value we get for "token_b" depends on which row was
inserted first. The second "token_b" for each "token_a" (as well as any
number of other rows that might follow it for that "token_a") is just
discarded.

The scary thing is that I semi-regularly come across applications in Very
Important Industries that have large amounts of SQL that _rely_ upon this
behavior of "picking any old row" for you, rather than selecting a MAX() or
MIN() of some column and then joining to a subquery of the GROUP BY +
aggregate....because joining to a subquery in MySQL also performs like crap.
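SQLite happens to accept the same bare-column GROUP BY, so both the arbitrariness and the deterministic MAX() rewrite mentioned above can be sketched without a MySQL server, reusing the schema from the transcript:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (token_a TEXT, token_b TEXT)")
conn.executemany("INSERT INTO data VALUES (?, ?)",
                 [("A", "A"), ("A", "B"), ("B", "B"), ("B", "A")])

# Bare-column GROUP BY: token_b is taken from an arbitrary row per group.
arbitrary = conn.execute(
    "SELECT token_a, token_b FROM data GROUP BY token_a ORDER BY token_a"
).fetchall()
print(arbitrary)  # two rows, but each token_b is whatever the engine picked

# Deterministic rewrite: name the aggregate you actually want.
deterministic = conn.execute(
    "SELECT token_a, MAX(token_b) FROM data GROUP BY token_a ORDER BY token_a"
).fetchall()
print(deterministic)  # [('A', 'B'), ('B', 'B')]
```

In this single-extra-column case MAX() alone is enough; with more columns you would join back to a subquery of the GROUP BY + aggregate, which is exactly where MySQL's subquery performance becomes the second problem.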

~~~
tveita
I have encountered situations where this is convenient, and I have yet to see
it cause any bugs or problems.

In the places I have seen it used, the 'arbitrary' column typically has the
same value for the entire group, e.g. for efficiently selecting distinct texts
based on their hash values.

PostgreSQL has a similar feature using SELECT DISTINCT ON:
[http://www.postgresql.org/docs/9.2/static/queries-select-
lis...](http://www.postgresql.org/docs/9.2/static/queries-select-lists.html)
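The PostgreSQL form, for comparison, makes the chosen row explicit. Shown here against the data table from the GROUP BY example upthread; a sketch, not tested against any particular version:

```sql
-- DISTINCT ON keeps the first row of each group, where "first" is
-- defined by the ORDER BY, so the choice is explicit rather than arbitrary.
SELECT DISTINCT ON (token_a) token_a, token_b
FROM data
ORDER BY token_a, token_b DESC;  -- per token_a, keep the largest token_b
```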

~~~
zzzeek
> In the places I have seen it used, the 'arbitrary' column typically has the
> same value for the entire group, e.g. for efficiently selecting distinct
> texts based on their hash values.

I agree that case is convenient. But I've seen it being used to actually pick
a random row, where a different row would have a different result (such that
the output of the program would definitely be different), and it's clear the
developers who wrote it didn't fully understand this was going on. I wrote it
up in a report for this particular client but they didn't seem to want to
touch this particular very old and venerable code.

------
sadmysqluser
Ronald Bradford's [http://www.slideshare.net/ronaldbradford/my-sql-
idiosyncrasi...](http://www.slideshare.net/ronaldbradford/my-sql-
idiosyncrasies-that-bite-otn) is worth reviewing for anyone who runs MySQL.

I especially like how he explains SQL_MODE bit by bit and ends up recommending

    
    
            SQL_MODE =
               ALLOW_INVALID_DATES, ANSI_QUOTES, ERROR_FOR_DIVISION_ZERO,
               HIGH_NOT_PRECEDENCE, IGNORE_SPACE, NO_AUTO_CREATE_USER,
               NO_AUTO_VALUE_ON_ZERO, NO_BACKSLASH_ESCAPES, NO_DIR_IN_CREATE,
               NO_ENGINE_SUBSTITUTION, NO_FIELD_OPTIONS, NO_KEY_OPTIONS, NO_TABLE_OPTIONS,
               NO_UNSIGNED_SUBTRACTION, NO_ZERO_DATE, NO_ZERO_IN_DATE,
               ONLY_FULL_GROUP_BY, PAD_CHAR_TO_FULL_LENGTH (5.1.20), PIPES_AS_CONCAT,
               REAL_AS_FLOAT, STRICT_ALL_TABLES, STRICT_TRANS_TABLES
    

I also recommend "MySQL 5.1 vs. MySQL 5.5: Floats, Doubles, and Scientific
Notation" [http://blog.mozilla.org/it/2013/01/17/mysql-5-1-vs-
mysql-5-5...](http://blog.mozilla.org/it/2013/01/17/mysql-5-1-vs-
mysql-5-5-floats-doubles-and-scientific-notation/) for anyone working with
non-integer numerics.
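For anyone wanting to apply a mode list like the one above, it can be set per session or in the server config. A minimal sketch (the short list here is illustrative; trim or extend it to what your application actually tolerates):

```sql
-- Per session:
SET SESSION sql_mode = 'STRICT_ALL_TABLES,ONLY_FULL_GROUP_BY,NO_ZERO_DATE';

-- Or persistently in my.cnf, under [mysqld]:
--   sql_mode = "STRICT_ALL_TABLES,ONLY_FULL_GROUP_BY,NO_ZERO_DATE"
```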

------
danso
Ugh, maybe my Internet is slow, but I can't seem to get the page to load.
Maybe there should be an unofficial rule that if you're going to write a
worthwhile article about databases and (I assume, since I can't yet read it)
performance, you should enable caching on your blog.

Edit: here's the raw text version stored on github
[https://raw.github.com/ojacobson/grimoiredotca/master/wiki/m...](https://raw.github.com/ojacobson/grimoiredotca/master/wiki/mysql/choose-
something-else.md)

~~~
eksith
They couldn't have expected the page to end up on HN in the first place.
Though, I agree, caching couldn't hurt.

Loaded for me just now, albeit after 30 seconds.

------
16s
MySQL is the Visual Basic of SQL databases. Anyone can set one up and use it.

The problem is that many non-technical people use MySQL and then think they
know all about DBs. Ask them what ACID is, or about foreign key constraints.
You'll get blank stares. If you know what those things are and value them, you
probably don't use MySQL.

~~~
taligent
I really don't know where to begin with such a stupid comment.

How about you start with listing which of these companies needs educating:
<http://www.mysql.com/customers/>

Facebook ? Twitter ? Amazon ? Flickr ?

~~~
jacques_chester
In some of these cases MySQL is not being used as an RDBMS.

And in most of those cases it looks like path dependency rather than a
selection based on the merits of this or that database.

Either way, this is actually an argument from authority ("So and so use Brand
X, therefore Brand X has positive qualities Y").

I've sat in on presentations with DB2 engineers who bragged about the data
centre IBM runs for UPS. ~12 billion transactions per day (and that was 5
years ago). Does UPS make DB2 better or worse than MySQL?

Answer: it's irrelevant. DB2 and MySQL would need to be picked on their
merits, not their users.

~~~
LaGrange
It does invalidate claims that using product X is going to inevitably cause
you huge problems once you "grow enough". I've seen a lot of places grow a lot
while using MySQL, and the problems aren't really that different or more
serious than in places using Postgres or Oracle.

In fact, I know of enough places moving from Postgres to MySQL due to growing
pains.

~~~
jacques_chester
Right, but we're not talking here about scale. We're talking about ...
trustworthiness, I suppose. Dependability.

What this article demonstrated is that MySQL will often give a false sense of
security. That's not what you want from a system ostensibly meant to provide a
number of important technical guarantees.

Multi-master is the one tickbox feature where MySQL is still clearly in front.
Once the PostgreSQL team finish going about inbuilt multi-master in their
usual meticulous, stepwise fashion (my WAG is that it'll land by 9.6), there
really won't be any good technical reasons left to use MySQL.

~~~
LaGrange
But I didn't mean "grow" in an exclusively performance sense. I meant the
whole package (as you grow, the requirements for reliability also ultimately
show up, at least in some parts of the business). Even my current job, which
involves a lot of MySQL, has important chunks of Must Never Go Away Or Else
data, with a
lot of complex insert/update action going on them, and somehow it's working
fairly well for us.

~~~
jacques_chester
One of the persistent themes of the linked piece is that MySQL simply conceals
many kinds of fault. "Somehow it's working fairly well for us" may be right.
Or it might not. Either way, MySQL isn't going to warn you.

~~~
LaGrange
With default settings. I agree that keeping them as default is fairly bad, but
in a growing scenario you need people who know the defaults very well anyway.
After all, quiet database silliness is only one of the many ways you can
quietly corrupt huge swaths of data.

It's similar to Rails - the defaults save you time, and in Rails's case are
arguably reasonable, but if you don't know how and why they work, they are
going to bite you. There is no escaping knowing about the complexity, but
you may escape typing it out every single time.

~~~
jacques_chester
This is a mechanism vs policy argument.

Defaults count. Elsewhere I've said that I prefer to start with rigid
guarantees and relax them. Default policy matters because in practice:

* The documentation isn't read.

* Even when it's read, the documentation may be incomplete.

* When it's read and complete, the crucial segment may be skimmed.

* When the documentation is complete and the crucial segment was read, it may have been misunderstood because of unclear writing.

* When the documentation is complete and the crucial segment was clearly written and read, it may be forgotten later on.

Then a new DBA or programmer arrives, and the whole thing starts all over
again.

Safety mechanisms that require active effort above the baseline configuration
do not work very well. Saying "there is a great mechanism" does not describe
the actual properties of the actual system. The default policy is the policy
that counts, because a single omission drops you straight back to it.

Windows XP, buffer overflows, botched system deployments and so on all have a
common property: they require positive effort by humans in order to rise above
their baseline safety/security/reliability profile. There is no failsafe --
they only work by constant vigilance.

I don't see that as a good thing.

------
Happymrdave
I worked at one data-heavy startup where things were on MySQL and even with a
lot of consulting by Percona, eventually it just couldn't keep up with our
needs, and the project was ported to PostgreSQL. I've worked on other projects
that were less data intensive and MySQL worked fine though.

If given a choice, I'll take PostgreSQL any day, but I do understand that
people are hesitant to change databases when they don't _need_ to. If you are
encountering trouble, though, by all means, move on.

------
ybrs
i read until the backup process and gave up reading further. for cold backups
of online databases you should use percona's xtrabackup - it's open source,
free, and works perfectly: <http://www.percona.com/doc/percona-xtrabackup/> if
you are using mysql, a simple google for "hot backup mysql" will lead to it. i
guess the author didn't even bother to search.

besides, if you care about staying online you need to use replication and
connect a couple of slaves, then back up from one of the slaves - which is
common practice for all databases, not just mysql. if you are trying to dump
from the master without slaves, good luck with any database.

~~~
zorlem
XtraBackup has its limitations with respect to locking when you've got a mix
of InnoDB and MyISAM tables.

The problem with taking a backup from a MySQL slave is that the data is not
guaranteed to be identical to the master, thanks to the subtle problems and
peculiarities of the MySQL master-slave replication (some of them are
described in the OP). For precisely this reason I install an automated job
that periodically checksums the tables and sends the results to the DBA role.

------
mscarborough
This person doesn't even offer a solution?

How is it that people who have blogs that take 30 seconds to load continue to
give performance advice that gets upvoted?

The funny thing is that this blog's performance is based on some cookie. If I
reload in Chrome? 2 sec. If I reload in "Incognito Chrome", it's again really
slow.

So seriously, just stop with these authoritative blog posts when you don't
even know what you're talking about.

~~~
jacques_chester
It's not performance advice, though.

It's a list of known problems with MySQL, most of which violate the "Principle
of Least Surprise".

Some of the problems have performance implications. The query planner stuff,
for example.

> _The funny thing is that this blog's performance is based on some cookie. If
> I reload in Chrome? 2 sec ... just stop ... when you don't even know what
> you're talking about._

It may interest you to learn that web browsers have local caches. Incognito
Mode does not have such a cache, and so must refetch pages from scratch on
each view.

~~~
mscarborough
If you have to be a jerk about it, it may interest you that relatively static
blog content can be cached on the server.

I'm genuinely interested to learn which use cases are more dependent on query
planning than caching methodologies.

~~~
jacques_chester
> _I'm genuinely interested to learn which use cases are more dependent on
> query planning than caching methodologies._

I'll reply to this separately, it's a good discussion to have.

My basic problem is that caching comes with a coordination cost and I prefer
the originating data source to be as performant as possible.

My own use case is a small Wordpress multisite installation. Even with a
relatively trivial amount of traffic and site data, it behaves abysmally on
some simple requests. The linked article seems to explain why -- the query
planner ignores indexes on certain kinds of joins. The same sort of joins as
the Wordpress Recent Comments Widget.

Now, I can and have worked around this by using multiple layers of caching.
There's the MySQL query cache, some memcache (PHP opcode caches would in
theory be quicker but I've never been satisfied with their stability) and of
course pumping gzipped HTML to disk for nginx to serve directly without
hitting PHP or MySQL.

But like I said, caching comes with a coordination cost. One of my sites is
used less like a blog and more like a chat room. Hundreds of comments per
hour, every single one of which causes the query cache to be pruned of _the
exact query I most need to cache_ in the first place.

That is: I need to cache this query because they talk so much. But they talk
so much that the cache is not that helpful.

What _would_ be helpful is if MySQL was a bit smarter about using the indexes
I put there in an apparently useless bit of chicken-waving.

~~~
mscarborough
As you described it, the query/caching strategy sounds too complicated.

mysql proxy might help you out, or mysql triggers, or plenty of easy indexing
strategies.

i agree, caching strategies are not easy, but that is not the fault of
databases. IMHO keeping your data layers separate is best.

if you have a multi-site installation, set some kind of prefix within the app
that makes sense to you for each application, before it goes into whatever
cache.

this will work for you whether it is shared memory, memcached, or some kinda
file situation.

~~~
zorlem
_> As you described it, the query/caching strategy sounds too complicated._

But that was his whole point - needlessly complicating an otherwise simple
setup to get around limitations in the DB engine.

 _> mysql proxy might help you out, or mysql triggers, or plenty of easy
indexing strategies._

I genuinely fail to see how mysql-proxy [1] (which has a slew of long-
standing, unfixed problems of its own), or triggers, would (elegantly) help in
this situation.

[1] <http://dev.mysql.com/doc/refman/5.1/en/mysql-proxy.html>

_edit: formatting_

------
matt2000
I'd be very interested to hear from people using other databases on whether
their DB of choice is much better. I've been using MySQL for a while, have
been burned by a few things, but figured it was mainly my fault. If indeed
there are better options I'd love to hear the details.

(Just to be clear, I know about other databases, I'm just not sure if any are
that much better in real-world use.)

~~~
saurik
Many of the complaints I hear people make about the entire concept of an
"RDBMS" (often then to motivate why the NoSQL solution they decided to start
using is better) are actually MySQL-specific issues that do not affect
PostgreSQL (or Oracle, or usually SQL Server; I only mention PostgreSQL, as
you wanted a concrete experience); one key example is "if you want to change
your schema, it requires locking the entire system and rewriting the table".
No: the schema is just metadata; you should be able to do these things under
first-class transactions, and PostgreSQL supports this just fine.
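Transactional DDL is easy to demonstrate without a server. The sketch below uses SQLite via Python (which also treats schema changes as transactional), purely to illustrate the idea; it makes no claims about any particular server's internals:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions by hand
conn.execute("CREATE TABLE t (id INTEGER)")

# Schema changes participate in transactions, so a botched
# migration can simply be rolled back.
conn.execute("BEGIN")
conn.execute("ALTER TABLE t ADD COLUMN extra TEXT")
cols_mid = [row[1] for row in conn.execute("PRAGMA table_info(t)")]
conn.execute("ROLLBACK")
cols_after = [row[1] for row in conn.execute("PRAGMA table_info(t)")]

print(cols_mid)    # ['id', 'extra']
print(cols_after)  # ['id'] -- the DDL was rolled back
```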

~~~
rosser
To be fair, there are cases where schema changes in PostgreSQL require re-
writing the table, too. Like, for example, when you _change the data type of
an existing column._

Otherwise, you're good, though.

~~~
saurik
Yes, but that case comes up less often and isn't what I see NoSQL people
complaining about ;P. (Also, when I've listed that as an explicit caveat
recently, I often get corrected that they changed the ramifications of that
recently, and it only sometimes has to happen, so I figured I'd just
explicitly list the situations where you clearly should never have needed to
rewrite the table.)

Generally, though, this is related to being able to do table changes under
transaction locks: I have often enjoyed being able, within an atomic
transaction, to replace a table with a view over that table, or to make
modifications to indexes and columns that I then roll back if there's a
mistake.

------
contingencies
Used MySQL for 10+ years. I found it great for most purposes. I don't feel I
left MySQL because of MySQL's failings, but because of those of all monolithic
RDBMSes.

~~~
jacques_chester
Have you tried other RDBMSes?

~~~
contingencies
OK I'll bite: Yes. Ultimately my problem with the old monolithic RDBMS is
architectural.

Q: "How do you get 24x7x365 service on a mission-critical system that depends
on a huge monolithic RDBMS datastore, preserving the capacity for RDBMS major-
version software upgrades (protocol, on-disk format, etc.) and no downtime
even between major versions?"

A: "While you could umm and ahh about it for awhile, basically, you don't.
It's too big. Decentralization, plan-to-fail, clustering ('private cloud',
hah), and commodity hardware are your friends. Embrace them and enjoy lower
blood pressure."

If you don't have high availability requirements, RDBMS can still be a great
and simple solution. Right tool for the job. (I have simply personally drifted
away from RDBMSes as the requirements of the systems I tend to work on have
grown. My current rule of thumb is roughly: 'Make all storage backends service-provider
abstracted (swappable, easy to benchmark, unit test, etc.). When choosing a
backend for a storage implementation - if you can't SQLite it, don't SQL it.')

~~~
jacques_chester
Any central service is going to be a point of failure. This isn't unique to
RDBMSes (the other classic SPOF is message buses).

The idea that RDBMSes are unsuitable for HA applications is ... well to be
charitable, I'll call it "inaccurate".

Most of the techniques that are used for HA were first invented for
conventional databases and/or their best friends, mainframes and midrange
systems.

I agree however that it is a question of picking the right tool for the job.

I'm a data-safety bigot. I require a great deal of talking down from my tree.
These days I can understand that, yes, OK, a consistent and durable model of
Facebook comments probably isn't really that important.

But I will bet folding money that Facebook stores their financial data in a
huge monolithic RDBMS datastore, like Amazon, Microsoft, Google and Apple do.

~~~
contingencies
I agree with the historical point. But I think you miss mine: show me an open
source RDBMS (I don't want to drop $annual_profit_margin on Oracle) with
major-version upgrade capable HA while under load. My view: it doesn't exist.
You could try to make it happen somehow, but it would be a huge project on its
own.

On your somewhat loaded financial example, as someone who is designing some
financial systems at the moment, I would instead argue that 'eventual
consistency' is actually the de-facto model within the vast majority of
business accounting, globally (credit card chargebacks, taxation systems,
international (or domestic in the US) bank transfers, invoices/accounts
receivable, etc.). Simultaneously truly real time and truly atomic
requirements, particularly at scale, are rare.

~~~
jacques_chester
Accountants _invented_ the concept of a queue of transactions causing
predictable updates. They also invented eventual consistency in the form of
special and general ledgers.

However, ACID is four requirements and they're all still valuable and useful
defaults. Atomicity comes from double-entry bookkeeping, consistency from the
idea that data is meaningless without structure, isolation from the demands of
consistency, and durability because people get grumpy when you tell them that
umpteen jillion dollars may ... or may not ... have been recorded.

Like I said, I'm a data-safety bigot. I greatly prefer to start with a safe
default and then relax the guarantees. Retrofitting safety is harder,
especially when you don't have a uniform statement of what your data _is_.

For something like a blog, an RDBMS is probably overkill. And MySQL got its
start in life because it so thoroughly (silently) relaxed the standard
guarantees that it was much faster than anything else.

> major-version upgrade capable HA while under load

The strategies are the same as for NoSQL.

Either you stop the world, or you run versions in parallel and drain traffic
from the old versions.

I also find it slightly hilarious that just casually upgrading something
without exercising great caution is seen as a good idea.

------
dendory
I started 7 years ago with SQLite and still have all my sites and webapps
running on that. Works wonderfully for me. Meanwhile I must have seen close to
a dozen data storage systems become popular then be replaced by the next big
thing, from MySQL to NoSQL and everything in between.

~~~
TazeTSchnitzel
I love SQLite. It's simple, it doesn't make a fuss, and it does what you'd
expect. Better yet, since it's a single-file application database instead
of a database running on a server, support for it is built into Python, and
using it is as simple as an import statement and .connect() - no server to
configure.
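Those two lines, spelled out for anyone who hasn't used it (with an in-memory database instead of a file, to keep the example throwaway):

```python
import sqlite3

# No server, no configuration: the database is a file (or :memory:).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (body TEXT)")
conn.execute("INSERT INTO notes VALUES ('hello')")
print(conn.execute("SELECT body FROM notes").fetchone()[0])  # hello
```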

------
exabrial
>>It's good enough. No it ain't. There are plenty of other equally-capable
data storage systems that don't come with MySQL's huge raft of edge cases and
quirks.

Actually, it is good enough. Good enough to power billions of websites. Good
enough not to pay for Oracle or DB2, or to cram some half-finished NoSQL
mess in where a relational database works better.

MySQL isn't an end all, but please, don't pretend that NoSQL holds all the
answers.

~~~
zzzeek
what's wrong with this picture?

    
    
        Oracle/DB2/SQL Server/$$$ <------------ (?) ------------- MySQL --------> NoSQL
    

someone's missing the elephant in the room.

~~~
robfig
Which elephant is in your room?

------
pippy
I've been hearing bad things about MySQL, so I've been avoiding it as of late.

So far my experience has been subpar.

PostgreSQL is pedantic with data insertion, almost to a fault. This costs me
development time. (Also I have no idea what my users will do, and I'd rather
have faulty data inserted than none at all. If it's for a client asking about
a product, this could cost money). Yet purists claim this is a great feature.
It's also about twice as slow as MySQL (admittedly this is likely due to the
maturity of the environment I'm working in).

I personally like PostgreSQL. However I see it more as a guilty nerd pleasure
rather than a development time effective solution.

MSSQL is very nice, and my experience with it has been the best. Microsoft's
tools are top quality. You'll find yourself very productive creating advanced
SQL views, mirroring, and snapshots. However, MSSQL reeks of vendor lock-in: I
had to virtualise the MSSQL tools, and getting the drivers to work on Linux
took almost two days of googling. Despite the ease of use, the vendor lock-in
doesn't make MSSQL worth it.

~~~
jeffdavis
"PostgreSQL is pedantic with data insertion, almost to a fault."

The PostgreSQL philosophy isn't about being "pedantic", but it is very
different from MySQL. I assume that you have much more experience with MySQL;
maybe you are trying too hard to use postgres in the mysql way rather than the
postgres way?

Personally, this is usually why I don't think it's a good idea to migrate.
The entire project's development always has lots of built-in assumptions about
how the DB will be used, so the new system is almost never quite the right
fit. Migration can only work in trivial cases, or when you have the right
expectations.

~~~
pippy
Yes, I've been using MySQL in the PostgreSQL way. I learned to use databases
on Oracle, which is similar to MySQL.

~~~
jacques_chester
> _oracle databases, which is similar to MySQL._

How did you reach that conclusion?

------
wereHamster
If you must use MySQL, at least switch to MariaDB. Those Oracle folks cannot
be trusted anymore, not even with a toaster.

~~~
fatbird
But you can trust the fork started by the guy who sold it to Sun in the first
place, then followed it to Oracle, then led the exodus from Oracle to
capitalize on anti-Oracle feeling by starting a fork?

Riiiiiiiiiight....

~~~
taligent
Then there is always the Percona or Twitter or millions of other forks around.

~~~
jacques_chester
Percona Server is good. The ability to monitor is great and the supporting
tools are very helpful.

------
apapli
I originally used MySQL when learning Rails because it was so simple to
set up on my Mac.

The only reason I migrated (quite early I may add) is that at the time Heroku
pretty much mandated I move to postgres.

I'm glad I made the move, but I'd say awareness of the alternatives is the
limiting factor. The brand awareness MySQL has is pretty big compared with
many others. I wonder how much impact Heroku's decision to support postgres
has helped those similar to myself drop mysql.

------
nnnnni
"I'm going to rant against MySQL, but I'm not going to suggest a better
alternative."

~~~
jmix
In what entitled universe do you live where a guy who carefully and
patiently points out problems is also obligated to solve every single one of
them?

Also, do you really need someone to spell out the alternatives to MySQL? There
are too many to list.

~~~
nnnnni
The article would have been much more credible if it had said something
like "try PostgreSQL instead". It has nothing to do with entitlement.

~~~
jmix
He does not need to be able to point to an extant, better alternative for his
criticisms to be "credible."

BTW, I can't believe you're implying that his post is not credible. The
practical outcome of your demand for a solution is to shut down legitimate
criticism.

------
jacques_chester
This is a great article.

But I really wish it had sources for each of the claims. I would be interested
to read the relevant documentation, because some of these directly describe
problems I've had with running a Wordpress installation.

And I've been blaming Wordpress for it. There's possibly a big _mea culpa_
brewing; but I'd really like to look at the specifics.

~~~
lucb1e
Oh just blame Wordpress. My self-written software has no performance issues on
MySQL, but Wordpress loading times are over two seconds with a default
installation.

~~~
jacques_chester
> _Oh just blame Wordpress._

I usually do. But in this case it looks as though MySQL's underwhelming query
planner might be at fault.

~~~
toast0
MySQL's query planner is actually pretty good, but there are some things that
are just not going to be fast, most of which are documented, and almost all of
which can be seen with DESCRIBE SELECT ...; if it uses a temporary table, that's
probably necessary, and it's definitely going to be slow once you have enough
rows. If WordPress uses any of these things, it's not MySQL's fault.

~~~
sergiosgc
This may have been true ten years ago. Not anymore. The MySQL query planner is
dumb as a rock. Easy example: instead of referring to a table in FROM, try
using a subquery with SELECT * from the same table. It's an easy reproduction
of the planner failing to process the query tree. It should produce the same
execution plan; instead, it results in a temporary table being created as an
identical copy of the original table.

Before you attack the query itself: note that it is only an example.
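Spelled out, the reproduction looks like the following, where t and id are placeholders for any table of your own; on the MySQL versions being discussed, EXPLAIN on the second query reports a DERIVED step that materializes the temporary copy:

```sql
-- Direct reference: the planner can use indexes on t.
EXPLAIN SELECT * FROM t WHERE id = 1;

-- Semantically identical, but the FROM-clause subquery is materialized
-- into a temporary table before the WHERE filter is applied.
EXPLAIN SELECT * FROM (SELECT * FROM t) AS t_copy WHERE id = 1;
```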

~~~
michaelmior
Fortunately there are big improvements coming to the specific case of
subqueries in MySQL 5.6.3 [1]. MariaDB (a MySQL fork) also has several further
optimizations [2].

[1] [http://dev.mysql.com/doc/refman/5.6/en/from-clause-
subquery-...](http://dev.mysql.com/doc/refman/5.6/en/from-clause-subquery-
optimization.html) [2] <https://kb.askmonty.org/en/subquery-optimizations-
map/>

------
paul_f
There is a bit of an elitist attitude in this point of view. Of course MySQL
is not perfect; that's a straw man, nobody is claiming it is.

For 99+% of all applications that need a simple database, it is more than
"good enough".

------
redegg
This looks similar to the MongoDB FUD from a year or two ago. Nevertheless, I
don't like MySQL and prefer PostgreSQL for all my projects.

~~~
thanasisp
That's a detailed and well-documented article, sir. Nothing like FUD.

------
billrobertson42
The article lost me at this.

> Already on MySQL? Migrate.

Got a silly little thing in the corner running just fine on MySQL. Go spend
time on it? No.

------
hpaavola
Instead of writing these rants, go and build a WAMP-like package with
PostgreSQL or another alternative. That might actually do some good.

~~~
zorlem
_> Instead of writing these rants, go and build a WAMP-like package with
PostgreSQL or another alternative. That might actually do some good._

You mean like WAPP, the package installer provided by BitNami [1]?

I don't see a problem with a well-argued rant; IMO it's quite helpful, as
it sparks discussions like this one, lets people improve their knowledge and
make better-informed decisions, and, not least, pushes a vendor to work
towards improving their software.

[1] <http://bitnami.org/stack/wapp>

