There are a few pain points with postgres that have been holding it back. The lack of Windows support has been mentioned (fortunately that's in the past now). Incredibly cumbersome (you could say broken) unicode support makes postgres difficult to use in environments that aren't exclusively English-speaking. And mysql is faster for simple things.
On the other hand, postgres is much faster for complex queries, and its support for SQL features is second to none.
Please see my other comment in this thread. There's the LIKE + indexes issue, and there's the fact that postgres supports only one character set for the entire database, whereas mysql lets you choose on a per-column basis.
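For comparison, here's roughly what mysql's per-column character sets look like (a sketch; table and column names are made up for illustration):

```sql
-- MySQL: each string column can carry its own character set
CREATE TABLE messages (
  id      INT PRIMARY KEY,
  subject VARCHAR(200) CHARACTER SET latin1,
  body    TEXT         CHARACTER SET utf8
);
```

In postgres the encoding is fixed per database at CREATE DATABASE time, so mixing encodings like this within one table isn't possible.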
The problem is that for locales other than the "C" locale, the LIKE operator will not use indexes unless you explicitly create the index with the text_pattern_ops operator class. However, if you do that, other comparison operators will not use that index. Effectively, you have to create two indexes on the same column: one using text_pattern_ops and one using the default operator class.
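Concretely, the two-index workaround looks something like this (hypothetical table and index names):

```sql
-- Hypothetical table for illustration
CREATE TABLE users (name text);

-- Index that LIKE 'prefix%' can use in a non-C locale
CREATE INDEX users_name_like_idx ON users (name text_pattern_ops);

-- Default-operator-class index for <, <=, >, >=, = and ORDER BY
CREATE INDEX users_name_idx ON users (name);
```

So the same column ends up indexed twice, doubling the write and storage cost, just to cover both kinds of queries.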
I know this is not a bug, so it's debatable whether it should be called broken. Let me call it a broken design instead.
[Edit] And there's another workaround that lets you avoid the second index: use text_pattern_ops and express all comparisons as regular expressions, even equality. This solution may have other performance drawbacks; I'm not sure.
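The idea in the edit above would look roughly like this, assuming only the text_pattern_ops index exists (names made up; whether the planner actually uses the index for these rewrites depends on the version and the pattern being anchored):

```sql
-- Equality rewritten as an anchored regular expression
-- (assumes the value contains no regex metacharacters)
SELECT * FROM users WHERE name ~ '^alice$';
-- instead of: WHERE name = 'alice';

-- Prefix search rewritten the same way
SELECT * FROM users WHERE name ~ '^ali';
-- instead of: WHERE name LIKE 'ali%';
```

Only left-anchored patterns can use a btree index at all, so this rewrite trick helps with equality and prefix queries but not with arbitrary substring matches.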
Trying to understand the issue: I suppose text_pattern_ops is a special kind of index optimized for LIKE searches? Then why should other queries use that index? Serious question, since you call the design broken: what would be the proper way to do it, in your opinion?