Materialized path is a good tool for trees that won't be deeply nested, and for tables that won't grow particularly large. However, because the path is stored in a string, read operations are not particularly efficient. It's a good tool to know, but it doesn't scale as well as nested sets to very large datasets. That being said, it's much less complicated to implement than nested sets and is more efficient than an adjacency list for many types of data.
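For reference, here's a minimal sketch of the kind of table this implies (the names and path convention are just illustrative; here the path column holds the slash-separated ids of a node's ancestors plus its own id):

    CREATE TABLE nodes (
        id   INTEGER PRIMARY KEY,
        name VARCHAR(100) NOT NULL,
        -- e.g. '/1/4/9/': ids from the root down to this node
        path VARCHAR(255) NOT NULL
    );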
I'm not so sure about inefficiency; in particular, most databases that I've seen can use an index for string-prefix queries, which means its performance ought to remain acceptable even for large datasets. (Assumption: subtree queries are the ones you care about making fast.)
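For example (a PostgreSQL-flavored sketch with made-up names; other databases typically just need a plain B-Tree index on path, and in PostgreSQL a pattern operator class or C collation is needed for LIKE to use it):

    CREATE INDEX idx_nodes_path ON nodes (path varchar_pattern_ops);

    -- left-anchored prefix match: the whole subtree under node 4,
    -- answerable from the index because the pattern starts with a literal prefix
    SELECT * FROM nodes WHERE path LIKE '/1/4/%';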
Also: inserts are practically free compared to the nested-set approach in the linked article, where they require updating the left and right numbering for every following node!
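Concretely, adding a leaf is a single-row write: the new node's path is just its parent's path plus its own id, and no other row is touched (continuing the sketch above):

    -- add node 9 under the node whose path is '/1/4/'
    INSERT INTO nodes (id, name, path)
    VALUES (9, 'new child', '/1/4/9/');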
Another nice property: if you make the entries in the materialized path column constant-width (e.g. by zero-padding), an alphabetical sort on that column will give you a depth-first dump of the tree -- the exact order you'd want for, say, a comment thread or a table of contents.
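A rough illustration, assuming each id segment is zero-padded to a fixed width (say 4 digits, so paths look like '/0001/0004/0009/'):

    -- depth-first dump: parents come right before their children,
    -- and siblings sort in id order because the segments are fixed-width
    SELECT name, path
    FROM nodes
    ORDER BY path;

Without the padding, '/1/10/' would sort before '/1/2/' and the ordering breaks down, which is why the constant width matters.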
When I've implemented materialized paths in the past, I ran into issues with the maximum allowed length of indexable string types (which limits the tree depth), but that was long ago, in the late '90s. :) I think it's a very nice, albeit imperfect, way of storing certain types of trees, especially ones that are mostly insert-and-query-only.
How are reads not particularly efficient? If you use rooted paths in your lookup, then even a "LIKE '/a/b/c%'" query for all descendants will effectively utilize the index. I think this would be good for deeply nested trees as well. As Zach implied, the downside of this approach is moving subtrees. Unless you have a very volatile tree structure, this should be perfect for most uses.
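For what it's worth, moving a subtree isn't impossible, just uglier: it's one UPDATE that rewrites the prefix of every affected path. A sketch, using PostgreSQL-style string functions (substr/length/||; the exact functions vary by database) and the numeric-id convention from above:

    -- re-parent node 4 (old parent path '/1/', new parent path '/2/7/')
    UPDATE nodes
    SET path = '/2/7/' || substr(path, length('/1/') + 1)
    WHERE path LIKE '/1/4/%';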
Because B-Tree indexes perform orders of magnitude better on small lookup values, like say an integer, than they do on large (and, even worse, variable-length) strings. There are a number of factors that contribute to this, but two big ones are: first, the raw computation time it takes to compare two strings is much larger than the time to compare an integer that matches the register size of the machine; second, the depth of the B-Tree depends on the key size used for the lookup, since larger keys mean fewer entries fit per page.
As I said above, if you are using the materialized path for the type of problem it's best at solving, the speed differences won't matter so much. But that's primarily because the trees aren't particularly deeply nested and/or the tree table itself isn't overly large. So in essence you are trading computational complexity for ease of use on smaller sets of data. In many cases that's exactly what's needed. On the other hand, if you will be modeling very large trees, or will have a huge number of them, nested sets are more efficient in terms of encoding and storage, as well as lookups and retrievals. The downside is that nested sets are more complex to set up and work with, and they make the structure of your trees harder to understand.
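To make the contrast concrete, here's roughly what the nested-set equivalents look like (the table name and lft/rgt columns are just the usual convention, and the numbers are made up):

    -- subtree of node 4: every node whose lft falls inside node 4's interval
    SELECT child.*
    FROM nodes_ns AS parent
    JOIN nodes_ns AS child
      ON child.lft BETWEEN parent.lft AND parent.rgt
    WHERE parent.id = 4;

    -- inserting as the rightmost child of a parent whose rgt is 7
    -- means renumbering everything to the right of that gap first
    UPDATE nodes_ns SET rgt = rgt + 2 WHERE rgt >= 7;
    UPDATE nodes_ns SET lft = lft + 2 WHERE lft > 7;
    INSERT INTO nodes_ns (id, lft, rgt) VALUES (42, 7, 8);

The subtree query is compact and fast, but the insert shows why writes get expensive and why the structure is harder to eyeball than a path string.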
IMHO, it's important not to fall into the trap of thinking that one technology/tool/solution/data structure will solve all of your problems. It's good to know the pros and cons of different solutions and which problems they are most efficient at solving.