I used to think that way. But it only took me half a decade of professional development (and reading/discussing in circles like HN) to see that it's a fallacy. A programming language cannot be perfect because it's made for humans, who are not perfect (and it runs on computers, which arguably aren't perfect either).
Accepting the fact that there's no singular, perfect way to express a given idea has been freeing, actually. I used to constantly chase the dragon, getting hung up on refactoring and refactoring and refactoring, trying to attain that perfect description. I still get caught up in it sometimes. A little bit of that spirit makes for better code. But you have to be able to pull yourself away.
I think the author definitely agrees that there's no "singular, perfect way", based on their use of the phrase "Pareto optimal". Instead, the author is claiming that some ways are strictly better than others, and that they want to use techniques that are strictly better than their current ones.
That is not how I interpreted the post, but I do (mostly) agree with it. I think "strictly better" is a real thing that exists, though I do also think it gets over-applied a lot of the time.
That is, I think, one of the fairer criticisms of Haskell. I don't personally mind, but I respect those who do.
That said, I wouldn't want 10M more users right now. The Haskell ecosystem has tons of breaking changes at the moment, and I think that is important. More users would freeze things in place until we have the tooling to support both breaking changes and tool-assisted migrations. Only then would I want rapid growth in popularity.
(Compare Rust's "backwards compat is really important" ... as if they could get everything right the first time, as if that won't go the way of C++. Thank god they relented a bit with "editions".)
Haskell and other FP languages are a completely different paradigm, but it's true that unless there is very good tooling that can transparently migrate old code across breaking changes, every language will go through that transition. We can't be prophets; nobody knows precisely what the future holds. Python learned this the hard way: it was designed without Unicode, and adding support later led to the 2-to-3 transition disaster. Java was lucky in 1996: Unicode was already popular by then, so it was baked into the language and JIT infrastructure.
But at present in Haskell, I've tried both cabal and stack to compile the elm 0.19.1 compiler and pandoc, and found it really hard due to GHC version incompatibilities. I hope the situation in Haskell improves so that GHC can compile old and new code transparently, without too much fiddling and pain.
Isn't Rust's workaround their "edition" system? It seems to be working pretty well for them, but it sounds like it could end up being a massive maintenance headache.
Having worked with Vladislav Zavialov on both proposals and GHC itself, I can say he's great to work with and has excellent taste in design.
I don't know anything about the author; maybe he was being hyperbolic when he used the word "perfect" (though it didn't sound like it). I was just taking the opportunity to point out a lesson that I've learned.
Words like yours imply that we are "good enough" and anything better is "too costly", and I don't think that is remotely true.
For example, a foreach eliminates a whole class of problems that a for loop introduces by using a reference to an item instead of a counter variable (which risks array bounds errors, etc). And higher-order functions eliminate a whole class of problems that foreach introduces, by helping the user to think declaratively and allowing for things like composability and parallelization. These are the "low hanging fruit" of programming languages and it astounds me that a lot of people haven't even made it to foreach yet.
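That progression can be sketched in a few lines of Haskell. This is a minimal, hypothetical example (the `sumSquares` names are mine, not from the thread): the first version does manual index bookkeeping, the second expresses the same computation with higher-order functions.

```haskell
-- Index-based "for loop" style: manual counter and bounds bookkeeping,
-- with the usual risk of off-by-one and out-of-bounds mistakes.
sumSquaresIndexed :: [Int] -> Int
sumSquaresIndexed xs = go 0 0
  where
    n = length xs
    go i acc
      | i >= n    = acc
      | otherwise = go (i + 1) (acc + (xs !! i) ^ 2)

-- Higher-order style: no counter, no bounds. Each element is handled
-- independently, which is what makes composition (and, in principle,
-- parallelization) straightforward.
sumSquares :: [Int] -> Int
sumSquares = sum . map (^ 2)

main :: IO ()
main = print (sumSquares [1, 2, 3])  -- prints 14; both definitions agree
```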
What really gets me though is that compilers could trivially analyze side effects and transform code to use these better abstractions. We should be able to write a for loop and end up with higher-order functions in the compiled code if the outcome is equivalent. Then we could trivially parallelize code and be running orders of magnitude faster than we are now.
In fairness, this stuff is much easier in functional programming (FP) languages. So then, why don't imperative languages compile down to FP internally?
These simple examples show some of the fundamental prerequisites that software engineering somehow missed. And I think that saying all programming languages are roughly equivalent caused us to accidentally overlook some obvious truths.
Writing this all out has shown me that my ideal language would probably piss a lot of people off. So maybe you are right after all!
You need COW (persistent) data structures like in Clojure? They're here, in unordered-containers. You need SIMD processing of vectors? It's here, in the vector package. Parallel processing? Parallel strategies.
Channels like in Go? You bet, they're on Hackage, in the stm package. Green threads are much greener in Haskell than in Erlang and Go (roughly half the overhead of Erlang's).
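The persistent-update idea is easy to demonstrate. As a sketch I'm using `Data.Map` from the containers package (which ships with GHC) rather than unordered-containers, but the behavior is the same: "updating" returns a new version while the old one stays valid, with the unchanged structure shared between them.

```haskell
import qualified Data.Map.Strict as Map

-- Hypothetical example values, just to show persistence.
m0, m1 :: Map.Map String Int
m0 = Map.fromList [("a", 1), ("b", 2)]
m1 = Map.insert "c" 3 m0  -- a new map; m0 is untouched

main :: IO ()
main = do
  print (Map.size m0)  -- the old version still has 2 entries
  print (Map.size m1)  -- the new version has 3
```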
I keep repeating that what is usually a language feature in regular languages, is often just a library in Haskell.
Which, as you rightly noted, pisses many people off.
> In fairness, this stuff is much easier in functional programming (FP) languages. So then, why don't imperative languages compile down to FP internally?
I prefer FP languages, because code written in them is more readable to me: recursion is often more comprehensible than looping while at the same time being more general, immutability lowers my anxiety about tracking the values of variables/bindings, and generally "reasoning via equality" is easier. I can even do it with a piece of paper. Good luck programming with pen and paper using an imperative language. So that's my preference.
But... the benchmarks that everybody's seen indicate that things like garbage collection (pretty much a must with higher-order functions), disregard for modern CPU cache-locality rules (a lot of pointer indirection), and all sorts of other things I haven't the slightest idea about are costly for performance. So much for "[If we just converted everything to FP, t]hen we could trivially parallelize code and be running orders of magnitude faster than we are now."

Also: parallelizing is a lot more complicated, performance-wise, than "I'll run it on n threads to do it n times faster". Sometimes you will slow things down this way. It's weird, but when you start measuring things, you find that your intuition is wrong all the time. We could probably attribute it to all the complexity accumulated in the lower layers (CPU, OS) that we do not understand.
That said, converting to FP is kinda what we do (but not really). After all, a lot of compilers use SSA to make analysis of dependencies between variables easier.
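The SSA-to-FP correspondence fits in a couple of lines: an imperative sequence that keeps reassigning one variable becomes a chain of fresh, immutable bindings (the `x1`/`x2`/`x3` names are hypothetical, just mirroring SSA's subscripts):

```haskell
-- Imperative pseudocode:  x = 1; x = x + 2; x = x * 3
-- SSA / functional form: every "assignment" introduces a fresh binding,
-- so data dependencies are explicit and nothing is ever overwritten.
ssaExample :: Int
ssaExample =
  let x1 = 1
      x2 = x1 + 2
      x3 = x2 * 3
  in x3

main :: IO ()
main = print ssaExample  -- prints 9
```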
I was very specifically disputing the phrase, "I believe that a perfect programming language with perfect tooling...can exist". I am only making the case that "perfect" is not attainable (or even real), even though it can often feel like it might be in the pure world of code. We can keep our eyes upward, and always be making things better, but we will never "arrive". Because of this, qualities like expressiveness and safety - while valuable - always have to be balanced against realities like human nature and project constraints. A feverish compulsion can overtake some of us (or at least me) when we feel like we're very close to true perfection in our work, so it's important to be reminded that you will never quite get there. The desire for refinement has to leave room for other priorities.
This kind of snub is weird.
I only meant that this low latency garbage collector has stronger positive implications in concurrent web stuff than in many other domains.
I merely meant that this is a welcome upgrade that specifically benefits web applications, even if it carries a minor computational performance penalty.
[Disclaimer, I work at the company behind this.]
For most software, the SDLC is a stream, not tightly scoped like a library or utility. I haven’t seen migration or upgrade plans, docs aren’t really there, links rot extensively, and the strangeness budget is a blank check. It’s not for most developers and it’s not because most developers are “lazy” or “dumb”. I would have loved to use Haskell for my personal projects, but it didn’t satisfy my requirements and I’m sticking to Python.
For starters, in Haskell Stack, there’s no uninstall option. You use shell to rm a package. https://github.com/commercialhaskell/stack/issues/361
That’s nope territory for most developers.
There is an 'Idris 2' being developed, too (https://github.com/edwinb/Idris2), which is an entirely new compiler with a new type theory (although not entirely dissimilar), which just makes it even harder to use for real things (again, IMO).