mysql> select 0 = 'banana';
+--------------+
| 0 = 'banana' |
+--------------+
|            1 |
+--------------+
1 row in set, 1 warning (0.00 sec)
pg# select 0 = 'banana';
ERROR: invalid input syntax for integer: "banana"
LINE 1: select 0 = 'banana';
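(Why MySQL considers them equal: in a numeric comparison it implicitly casts the string to a number, and a string with no leading digits casts to 0, so 0 = 'banana' holds. A quick sketch making the same coercion explicit, with a hypothetical query of my own; the exact warning text and counts vary by MySQL version:

mysql> select cast('banana' as signed) as b, '42banana' + 0 as n;
+---+----+
| b | n  |
+---+----+
| 0 | 42 |
+---+----+
1 row in set, 2 warnings (0.00 sec)

Note the leading-digit prefix is silently used and the rest discarded, again with only a warning.)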
To suggest the former approach is "stupid" is to fail to explain how it became so wildly popular. I allege that it is actually brilliant... marketing.
MySQL marketed itself to developers by being not just free, but easy to work with. Need a database? Want to learn SQL? We won't bore you or slow you down with Codd and those pesky rules maaaan. It'll just work!
For the vast majority of use cases, it did exactly that.
Meanwhile, Postgres was essentially sending the stodgy, old-fashioned message that databases are a hard problem requiring careful up-front consideration so that one doesn't run into problems down the road, be they scale/performance or more severe, like (even non-catastrophic) data loss.
That this message was, and still is, correct, just isn't very compelling in a rapid-prototyping, fail-fast world. Unfortunately for those of us whose job it is to deal with the eventual consequences, it's too late by the time we're brought in.
I think we've seen a similar effect with many of the "NoSQL" datastores, as well. They gain initial popularity due to being easy, lightweight, and lightning fast, but, as they mature and gain real-world use, we see articles that are shocked (shocked!) that there is data loss going on in this establishment and that, yes, if you need certain tedious features (like data integrity) from that stodgy old database world, you'll have to sacrifice that lightning-fast performance.
 aka flexibility
 aka standards adherence or even pedantry
 though I've never quite understood why it would be so impractical to switch RDBMSes when there's so often an ORM in place and no MySQL-specific features in use