I am just learning Haskell, but I like the fact that I can return very large structures (even infinitely large ones) and then operate on them much later, and it magically pipelines everything behind the scenes.
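For example (a minimal sketch of my own, not from the comment above): you can define an infinite structure up front and only the prefix you actually consume ever gets built.

```haskell
-- Infinite list of primes via naive trial-division sieve, defined once,
-- consumed lazily much later. Only the demanded prefix is ever evaluated.
primes :: [Integer]
primes = sieve [2..]
  where
    sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

main :: IO ()
main = print (take 10 primes)  -- forces just the first 10 elements
```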
This is not to say that Haskell is always more performant. From the little I've done, it seems you need an awareness of when thunks will take up more memory than the evaluated results of your expressions, and how to write code accordingly.
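The standard illustration of that (my sketch, not the commenter's): summing a large list with the lazy `foldl` can accumulate a chain of unevaluated thunks, while the strict `foldl'` keeps memory flat.

```haskell
import Data.List (foldl')

-- Lazy foldl can build a long chain of (+) thunks before anything is
-- evaluated (especially without optimization), blowing up memory:
sumLazy :: Integer
sumLazy = foldl (+) 0 [1..10000000]

-- foldl' forces the accumulator at each step, so memory use stays constant:
sumStrict :: Integer
sumStrict = foldl' (+) 0 [1..10000000]

main :: IO ()
main = print sumStrict
```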
SQL has a magical compiler. It's actually a lot more magical in some ways than GHC, because good implementations use statistics on the actual input data and a sophisticated cost-based optimizer. And SQL is certainly suitable for large projects.
I mostly program in C, so you don't have to convince me that predictability of code fragments has its benefits.
But it has a lot of costs, too. Sometimes good performance fundamentally requires a lot of complexity, so if your compiler is not helping you (being "magical"), then that complexity is forced onto the programmer. Once the programmer takes on that complexity, they also take on the burden of additional maintenance and the additional cost of adding new capabilities, forever.
So you have to consider whether you really get a net increase in predictability by using more predictable (but less helpful) tools. People complain about garbage collection, virtual memory, and NUMA for the same reasons, but those are ubiquitous now -- dealing with the annoyances simply takes less effort than managing the memory yourself.
You could argue that laziness, in particular, is not worth the trade-off (though it has worked out pretty well for SQL).
I'm not really a GHC expert, but that doesn't seem like a fundamental problem that can't be solved, and I assume it is (to some extent) solved already.
Having a solid grasp of the fundamentals gets you quite far.
The new code generator is also pluggable, so you can add new optimizer passes easily (e.g. by implementing textbook algorithms).
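For context, here's a minimal sketch using GHC's Core plugin interface (the usual hook for adding custom optimizer passes; whether this is the same hook as the "new code gen" mentioned above is my assumption, not something stated in the comment):

```haskell
module SayHello (plugin) where

import GHC.Plugins  -- GHC >= 9.0; older releases expose this as GhcPlugins

-- Register an install function that gets to edit the Core-to-Core pipeline.
plugin :: Plugin
plugin = defaultPlugin { installCoreToDos = install }

install :: [CommandLineOption] -> [CoreToDo] -> CoreM [CoreToDo]
install _ todos = do
  putMsgS "Hello from a custom Core pass!"
  -- A real optimizer pass would insert a CoreDoPluginPass into this list;
  -- here we just return the existing pipeline unchanged.
  return todos
```

Compiling a module with `-fplugin=SayHello` would then run this hook as part of the optimizer pipeline.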