I think the focus on Haskell is missing the mark a bit.
Haskell is about programming language researchers and enthusiasts having an excellent example language to try out ideas. Over the years it has turned into a production-ready platform.
As the PDF shows, there are many languages spun off from Haskell, and it doesn't even mention them all. Then there are language features in C#, Java, TypeScript, etc. that have come across from Haskell.
I think the problem with Haskell as a mainstream language is that it requires a lot of upfront investment to get used to. C# -> Ruby is 10 hours to be productive, 100 hours to be reasonable. C# -> Haskell is ten times that.
A lot of stuff to get your head around. There are complex topics from category theory to metaprogramming. Sure, you can ignore it, until you want to use a web framework that uses those concepts; then it's on you.
Instead of Haskell, we should be thinking of programming language research and how to get the best ideas into our mainstream languages. How do we make immutability easy and ergonomic and get rid of nulls in C#? Can we have guarantees in our programs? There are even features not in Haskell proper, like Liquid Haskell, that would be interesting to have in C# or TypeScript.
This is what is needed if we want the Haskell ideas to be popular. That might not be what other people want.
I was in a Haskell team at Google, and have trained groups at various companies professionally. From that experience, for a normal developer with working experience in Java, Python, or C++:
* It takes 2-3 weeks of full-time onboarding (half with a coach, half self-study) to work on a typical industrial Haskell project.
* It takes around 3 months of full-time participation in such a project to know a wide array of libraries and feel ready to tackle almost anything.
The companies involved were happy to make this time investment for increased productivity over the time they expected their employees to stay with them.
On an individual level, it makes sense that if you're just getting into programming, and you have the choice between investing 2 weeks into Python to be productive, or 3 months for Haskell, you pick Python. It's rational. But after you've been in it for a few years, and you realise you want to do it for another 40 years, investing 3 months for life-long increased productivity (even if it's just a few percentage points; most feel the effect is larger) suddenly becomes a great deal.
Counterpoint on "complex topics":
After 6 years of industrial Haskell, I know nothing about category theory. It has not prevented me from using web frameworks.
Everything you said is from the perspective of being not only in a Haskell job but in a Haskell team. Your 3 months is 5 years for someone trying to learn on their own in their spare time. And then they need this to get a Haskell job, given the competition for such jobs and the queue of super smart people lining up for them. And the pay cuts are brutal. You are an outlier, having worked on a Haskell team at Google: the rare triad of using Haskell at work, presumably being well paid, and having that mentorship from other team members.
> After 6 years of industrial Haskell, I know nothing about category theory. It has not prevented me from using web frameworks.
For someone in a Haskell team, maybe this is possible. You don't have to try and pick apart the online knowledge. For example, a lot of libraries use lens. I want to understand lens? It is not simple: check out the diagram on https://hackage.haskell.org/package/lens. "Yeah, but it's just X Y Z." Great; I've been to many FP meetups, and seen a lot of stuff online, and there is no simple help for this stuff.
You're making it seem harder than it really is. Yes, it's hard if the package documentation is all you have, but that's the hard way to learn. There is other unofficial online documentation, and there are tutorials (of diverse quality). There's also a book (https://leanpub.com/optics-by-example) that I find adequate for a reasonable learning experience (I think I would have saved much time if I had started learning lens with this book). And you don't need to know the entire lens package to use it. The most commonly used part of lens (Lens proper, not Prisms, Isos, etc.) is only a small fraction of the whole thing.
> Your 3 months is 5 years for someone trying to learn on their own in their spare time.
I learned the key parts of Haskell in 3 months of my spare time at university (where I had much less spare time than I have now in industry). And that was at a time when learning Haskell was much harder than it is now; today there are a lot of really detailed books, tutorials, and videos available, tooling works out of the box with few surprises, and error messages are way better.
Of course learning in your spare time does not give you as much practical experience and feedback as you get on a job, but it gets you enough to get into such a job.
> You don't have to try and pick apart the online knowledge.
Not any more.
* Step 1, buy a beginner book and work through it.
* Step 2, work through FP Complete's [1] Applied Haskell Syllabus (https://www.fpcomplete.com/haskell/syllabus/). This will make you comfortable with most day-to-day data structures, techniques, testing, benchmarking and so on.
* Step 3, practice building some small applications that you think companies will actually need. Web servers, input validation, streaming data processing, and so on. It doesn't have to be fancy or use "cool" techniques, it just has to be /useful/.
* Optional Step 4, for extra hireability: Become better-than-average in a specific topic. This could be testing, performance, developer tooling, documentation, web stuff, anything that gets your juices flowing. Do a bit of open-source work in that area. Go to some meetups or conferences to see who's hiring, what they are doing, and what might be useful for them
> I want to understand Lens?
Again, you don't need to do that to get a job. I've conducted ~ 40 technical interviews for filling Haskell positions for various companies, and none required this. What is required is that you can do normal /useful/ things, with the high amount of correctness, clarity, refactorability, and reliability that Haskell provides.
Haskell, like mathematics, knitting, and C++, offers near infinite avenues of special topics that you can deep-dive into if you want. People talk about those at meetups because they excite them. But you don't have to do that to build useful software or get a job.
[1] This is the consulting company as part of which I gave most of the aforementioned training. It makes most of the training material publicly available.
I've used lens for years and years, but don't fully grok it. Who cares? Lens is also like category theory, you don't need to understand the details in order to use it.
In general I found C++ every bit as difficult to learn as Haskell. Mastering C++ takes half a dozen books, although that number is starting to come down as the language gets a bit cleaner. Haskell takes 2-3. The last is just the optics book, if you want to learn lenses, which are worth it.
Anecdata point: I'm comfortable with lens at a very basic level, and if you have a good grasp of "normal" Haskell you can just compose lens operators and use combinators by playing the usual "type tetris" game of "what fits here?" and eventually you pick up a few idioms that are enough for most normal lens usage.
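To make "normal lens usage" concrete: a minimal sketch, assuming the lens package (the record and field names are made up for illustration).

    {-# LANGUAGE TemplateHaskell #-}
    import Control.Lens

    data Point = Point { _x :: Int, _y :: Int } deriving Show
    makeLenses ''Point  -- generates the lenses x and y

    main :: IO ()
    main = do
      let p = Point 1 2
      print (p ^. x)        -- view a field: 1
      print (p & y .~ 5)    -- set a field: Point {_x = 1, _y = 5}
      print (p & x +~ 10)   -- modify a field: Point {_x = 11, _y = 2}

The "type tetris" part is that each of these operators has a type that only fits together one way, so you can often guess and let the compiler confirm.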
There are many people (at least among the developers I know) who need to understand things fully before they 'can' use them. One of my friends has been mentally stuck for 30 years on a platform because he cannot use what he does not fully understand. He tried Haskell (a few years ago), but instead of going through exercises or just trying to implement something, he read a book, noticed he did not understand basic things (monads and such...), and quit. He cannot touch a computer unless understanding arrives first. He is an extreme case, but many (especially non-entrepreneurial, in my experience) people I know have this in some way.
So they're essentially non-functional with today's technology stack? I think it's safe to declare the entire thing essentially incomprehensible to a single person. Sure - you can understand all the basics, and achieve expertise in a lot of it, but modern software and computing is essentially fractal in its complexity. Even when you think you understand, it often ends up being an abstraction which is re-written by some routine further down, deep within some area which raises even more questions.
Why would they be non-functional with today's technology stack? The whole stack is complex, but everything depends in general on very basic principles (e.g. basic knowledge about parallelism, how compilation works, and hardware knowledge: how parallelism is handled, why we need asynchronous jobs and how they work, etc.). Any complex piece of software can be decomposed into basic principles, and very often without a lot of work. The only software that requires more time to understand is the kind that solves physical or mathematical problems, IMO, as you need to understand the math underlying it. But again, this is not an impossible task.
I would personally have trouble naming all of the pieces in the flushing mechanism and being able to put them together if I asked someone to buy them for me. That seems to be the same for many people I know.
Not being able to name something doesn't mean you don't understand it, even at a profound level; the name is only a convention for the underlying mechanism. It suffices to repair one WC, one time, to have a pretty good idea of how it works, without much effort.
It's easy to do. You should get at least some more handy friends, or become one yourself: it will broaden your understanding of your environment, and most things are very easy to fix and change.
It is not so consistent; I guess when they think about it consciously, they want to understand it. For all the rest (more low-level things, a WC, etc.) they, inconsistently, do not care. It is not really for me to change their behaviour; I just know that some of them are really crippled by this.
I've always been somewhat skeptical of Haskell due to the scarcity of things you could point to and say "that was made with Haskell and it made it so much better". It's usually the same three pieces of software people point to, none of which are particularly noteworthy.
With Rust, by contrast, which is ostensibly much less general-purpose and hasn't been around as long, there's already quite a collection, and it's easy to see why it was chosen for those things.
Haskell does have an issue of people using it being more interested in doing clever things and researching topics than building cool applications.
Haskell jobs do exist, I have one of them. The big problem is that there are more people who want to do those jobs than there are jobs, so they seem very scarce.
But anyway, I have heard there are also a number of investment banks that use Haskell as their “secret weapon” and do not advertise it, in fact those who could speak to details are all under NDA.
Personally I am hoping to help with this issue by building cool open source things in Haskell (and purescript)
I was skeptical for years for all the reasons you said, but I finally decided to set my skepticism aside and see for myself, and oh boy is Haskell great. It has a lot of problems, but I consider myself at least 2x more productive in Haskell.
Both Standard Chartered and Barclays loudly advertise that they use Haskell.
A quick web search turns up their job offers on Reddit, and both also send people to conferences such as Haskell eXchange, to talk about their team tasks, structure, and size. Example: [1]
Their code bases were discussed on HN before, e.g. [2].
Standard Chartered also funded the development of GHCs low-latency, non-stop-the-world, incremental GC, for 2 years until it was recently released [3].
Yes, Standard Chartered does have its own strict Haskell scripting language (Mu). But it mostly wraps C/C++ quant libraries; it's used as an alternative to Python (because Python is a bad idea at scale). GHC is used too, and is what Mu was built in.
I'm not sure if you're being facetious or asking a genuine question. The answer is "yes", with the exception of the word "actually". The "eager subset" was written to target a pre-existing runtime, which before then only supported an in-house, fairly poorly-designed, functional language called Lambda. Some of the team there wanted to be able to use Haskell instead (understandably).
For stuff that doesn't require that runtime (as convenient as it is) they just use familiar old lazy GHC. My understanding is that they are progressively moving more and more to GHC, although my knowledge is five years out of date.
> I'm not sure if you're being facetious or asking a genuine question.
A genuine one. I remember a presentation by a bank about their use of Haskell, and wasn't sure if it was Standard Chartered. I remember they had the subset, and mentioned they basically had to do a lot from scratch to interface with various systems.
> use Haskell as their “secret weapon” and do not advertise it, in fact those who could speak to details are all under NDA.
I remember hearing this said about Perl back in the day! Language X is so awesome businesses use it in secret because they don't want their competitors to discover the secret to their success.
Not saying it couldn't happen, but you can say it about any language and it can't really be disproven.
Right. It's frustrating. I'm not under NDA with them, but I don't want to get anyone into trouble, and even if I did name names I don't think anyone would back it up.
This is why we need more publicity-visible Haskell success stories.
If the "secret weapon"-theory is true, then I guess the companies loudly proclaiming they use Haskell could just be trying to trick their competitors into using it...
Yeah I mean you can take it a lot of ways lol. I think some companies are just secretive about their practices and others are open.
Really, the thing that convinced me to dig into Haskell was that there were so many things I wanted to learn, but so much of the extra information had its roots in Haskell topics. So I basically said, "to achieve my goals I need to know Haskell, even if I never use it for anything practical".
Slowly, as I learned more, I realized it is great. I wish I had chosen to learn Haskell 10 years ago instead of just two years ago.
True, but then the multiplier for C# is either nonsense or a false comparison: either going between near-identical languages, or writing code of very different quality and utility.
There are plenty of cases where a broken notebook is about as good as a spreadsheet, and languages suited/typical to that use have a psychological victory that results in apples to oranges comparisons.
My first job was at a FAANG, and it took me 3 months to be productive in C# working on a compiler. Everyone said 3 months was expected, so 3 months for Haskell is nice.
Can you share what kind of system your Haskell team worked on at Google? I thought Google projects were limited to a few blessed languages. Once upon a time that was Java, C++, and Python but I guess that would now include Go and Rust (in parts of Fuchsia and possibly Chrome).
Ex-googler here: While the "blessed languages" still exist in a way, it's mostly about not really needing to explain the language choice when considering/pitching a project. Choosing a different language (such as Haskell) is allowed, sure, but it'll be one more hurdle to clear when trying to get go-ahead compared to just using, say, Java.
Incidentally, I am learning Haskell right now. And fwiw don't have a CS degree.
I think the difficulty of Haskell is overrated. It's just different. The crowd who complain about Haskell being hard despite x years experience are the same people who speak English and think Mandarin is hard.
And yet 1.7+ billion people speak Mandarin, and most linguists agree that English is actually the harder language owing to its more branched roots.
Haskell is what happens when you go full lambda calculus, and the dedication to referential transparency really forces you down some interesting paths/abstractions. It's also worth noting that other functional languages like OCaml and lisp do not force purity so the idioms are not as extreme.
The impedance mismatch essentially boils down to Turing Machine vs Lambda Calculus and as such you don't find many familiar friends when switching camp (to begin with, at least!)
The Haskell idioms are very different and the consequences of referential transparency ripple throughout the way you need to think about computation.
It's also worth noting that Haskell is a language laid bare: many of the features and abstractions are plain old functions, which means you can "just" duck into them, implement them yourself, or read them as libraries. But what we often forget is all the abstractions we needed to learn when grokking "programming" (which was probably imperative-style programming) the first time round. Most people have moved on from the time when they struggled to conceptualise what an object is, how methods work, or even working with mutable state. When you get to design patterns, some of the implementations are quite complex the first time round. E.g. the visitor pattern/double dispatch is not particularly simple when you sit down and manually expand the execution of those calls, and yet when you use it, it's quite simple.
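To make the visitor comparison concrete, here's a tiny sketch (my own toy example): the dispatch that a visitor interface encodes by hand in an OO language is just a function pattern matching on a sum type in Haskell.

    -- What the visitor pattern encodes manually in an OO language,
    -- pattern matching on a sum type gives you directly:
    data Shape
      = Circle Double
      | Rect Double Double

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h

    main :: IO ()
    main = mapM_ (print . area) [Circle 1.0, Rect 2.0 3.0]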
When we program with objects, most of us don't stop to think about vtables anymore;* we just use the object to solve our immediate problem.
The more I see of Haskell, the more confident I am that it just uses a very different set of abstractions that you can likewise use and forget about, except you get the benefit of an HM type system and referentially transparent code. It's also nice that this somewhat forces you towards a functional-core, imperative-shell style of architecture, where your business logic is pure, mostly happy-path code, and your boundary is a bunch of imperative monadic glue.
I think at the end of the day FP makes the hard things easy and the easy things hard. But as the Turing Machine and Lambda Calculus are equivalent then being adept at both schools is like having twice the arsenal for dealing with a given problem.
*naturally when performance becomes a problem you start to unravel abstractions.
> Haskell is what happens when you go full lambda calculus, and the dedication to referential transparency really forces you down some interesting paths/abstractions. It's also worth noting that other functional languages like OCaml and lisp do not force purity so the idioms are not as extreme.
Clojure encourages functional style code through its immutable data structures and a set of (mostly) referentially transparent functions that operate on them.
But Clojure is still very much imperative. Atoms are easily accessible in the stdlib, and side effects can be called inline and hidden within functions. This hides IO from the perceived return type.
Haskell differs here because it forces you down the path of "everything explicit" and "everything referentially transparent". Essentially, monads emerge as a result of this, but their use definitely feels unergonomic in Clojure.
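A small sketch of what "everything explicit" buys you (toy names, my own example): any effect shows up in the return type, so a Clojure-style hidden side effect simply can't masquerade as a pure value.

    -- The IO in the type says "this talks to the outside world":
    askName :: IO String
    askName = getLine

    -- A pure function; there is no way to sneak IO in here:
    shout :: String -> String
    shout s = s ++ "!"

    main :: IO ()
    main = do
      name <- askName
      putStrLn (shout name)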
I'd say it took roughly 3 months on my own to learn enough Haskell to write a web server that does what I want. So I'd expect with professional coaching a smart person would end up on a decent level in 3 months, ready to put code to production.
"Compare that to Erlang's half a day to a day for onboarding,"
Erlang's a half day for onboarding if you're onboarding a Haskell programmer; not because Haskell programmers are that awesome, but because they've already had to learn the hard things involved in Erlang to work with Haskell [1]. Otherwise, that's ludicrous nonsense. Most programmers will need several hours just to figure out how to write the equivalent of lists:map in the new, immutable Erlang world, more hours to figure out how to write things non-trivially with it, and a good couple of weeks at a bare minimum to figure out how to structure applications in an OTP world, then some more time to figure out how to debug it.
It's not especially harder than learning SQL or the latest Javascript framework or other things that aren't Algol-descended languages, but it's nowhere near that easy.
[1] I went the other way. I found Erlang an excellent introduction to Haskell; you get to absorb some of the challenges like working in pervasive immutability in an environment where you're not also swallowing a complicated type system or strict effects separation.
It's odd how the various smaller companies I referred to, many of which are normal enough that you may never have heard of them, get summarised as "It was Google". I only mentioned it to also include the largest org I know where onboarding onto a Haskell project was not a problem.
I think he meant "bad sample" in the sense that those programmers aren't representative of abilities of average programmers. Ie, he thinks Google programmers are top programmers.
I can't speak for everyone, but for me the type system is not what made Haskell difficult. I loved it. What made it difficult was laziness. I'd find myself always trying to reason about execution (even though most of the time you probably shouldn't). Laziness might even be fine in a world built around it, but for a language to be practical it has to interface with systems that are not designed that way (numerous external libraries developed in strict languages). This isn't to say laziness is bad, or even worse; just that for a typical engineer switching from strict languages it's a large mental tax.
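For example (a classic toy case, not the parent's code): foldl lazily builds millions of nested thunks before doing any addition, and you need to know to reach for the strict foldl' instead.

    import Data.List (foldl')

    main :: IO ()
    main = do
      -- foldl' forces each intermediate sum as it goes: fine.
      print (foldl' (+) 0 [1 .. 10000000 :: Int])
      -- foldl builds ten million unevaluated thunks first, and can
      -- exhaust memory when compiled without optimization:
      print (foldl  (+) 0 [1 .. 10000000 :: Int])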
The problem is that while you can adapt and bolt some functional programming techniques onto existing languages like C#, bringing the power of the type system to an existing language is all but impossible. There are a few areas where it has started to sneak in, like nullable types. But from my naive perspective you just can't bolt on an expressive type system like that, let alone a dependently typed one.
I find the type system and the errors difficult. The basics are easy (like you would use in Elm for example) but once you are talking about types that are related to other types, GHC extensions and all that, I quickly run into inscrutable errors that might take 1-2 hours of Googling to resolve.
Basic Haskell is lovely though. It's why I liked Elm initially although it does take it a bit too far in the basic direction.
The problem with Haskell is that its purity and thirst for abstraction mean more and more complex type definitions and ideas: monad transformers (yes, I know they are supposed to be 'easy') and GADTs, for example. And if you want to use libraries, you need these concepts merely to be able to read the example code. It's table stakes, like knowing what "npm i" means when looking at a Node module's docs.
JavaScript, on the other hand, for all its warts, allows you to cheat by wrangling directly with the data. This has a lot of issues, but it is a lot easier to grok, to the point where non-programmers use it as a scripting language. Sometimes people do some weird shit with proxies/promises, and you'll get that type of code in any Turing-complete and useful language, but on the whole most libraries do keep it simple and scrutable.
GHC's type errors can be daunting, but the complexity of the errors usually scales quite linearly with the complexity of your code. That's good, it keeps you in line.
> The problem with Haskell is it's purity and thirst for abstraction means more and more complex type definitions and ideas [...]
If that's the case, something has gone wrong. Proper abstraction means having complex definitions behind simple types!
I felt that way too. I remember the first time I tried to make a lazy list of random numbers in Haskell. It didn't work because of course the random number generator is stateful which conflicts with laziness. It was a disappointing moment for sure.
> the random number generator is stateful which conflicts with laziness
It's not difficult to create a stateful computation in a non-strict runtime environment. Laziness is compatible with (local) state. Haskell's random package is designed around explicitly threading the PRNG state through the computation:
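(The snippet appears to have been lost in formatting; here is a minimal reconstruction consistent with the description below, with names of my own choosing:)

    import Data.List (unfoldr)
    import System.Random (StdGen, newStdGen, random)

    -- An infinite lazy list of random Ints, threading the generator
    -- state explicitly from one element to the next:
    randomInts :: StdGen -> [Int]
    randomInts = unfoldr (Just . random)

    main :: IO ()
    main = do
      gen <- newStdGen
      mapM_ print (take 10 (randomInts gen))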
Since the function passed to `unfoldr` always returns a `Just` value, the resulting lazy list never ends. If you're not familiar with `unfoldr`, it's essentially the opposite of `foldr`: instead of applying a reduction function to a list it produces a list by iteratively applying a generator function to a seed—in this case the PRNG state.
As it happens, this function already exists as `System.Random.randoms`. You use it like this:
    import System.Random (newStdGen, randoms)

    main :: IO ()
    main = do
      xs <- randoms <$> newStdGen
      mapM_ print (xs :: [Int])
There are related functions `randomR` and `randomRs` which take a lower and upper bound in addition to the state.
If you were trying to achieve this effect with `randomIO` or `randomRIO`—perhaps to avoid explicitly threading the state—then you will run into difficulties. Not because of laziness; rather the contrary. The problem is that IO actions are strict by default, so you can't easily combine them to create a lazy list. You would need to use something like `System.IO.Unsafe.unsafeInterleaveIO` to defer the generation of the random values until they were actually demanded, but then you're mutating the shared, global PRNG state each time an item is evaluated from the list and the result would not be deterministic or repeatable even if you started with a known seed.
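To illustrate that caveat concretely, a sketch of the `unsafeInterleaveIO` approach (emphatically not recommended for real code):

    import System.IO.Unsafe (unsafeInterleaveIO)
    import System.Random (randomIO)

    -- Each element is generated only when demanded, mutating the shared
    -- global PRNG state at evaluation time, so results aren't repeatable:
    lazyRandoms :: IO [Int]
    lazyRandoms = unsafeInterleaveIO $ do
      x  <- randomIO
      xs <- lazyRandoms
      pure (x : xs)

    main :: IO ()
    main = do
      xs <- lazyRandoms
      mapM_ print (take 5 xs)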
There's no reason that you can't do this. Of course, you're not going to be able to just simply return a list of random numbers. But you can get a Rand [Int] or whatever and then evaluate it in an IO monad using evalRandIO.
On the one hand, of course you can. But I do understand how it might seem tricky if you're too focused on trying to glue together `Gen a`s (or even `MonadRandom m => m a`s), principally with Monad.
To the grandparent, assuming the trouble was as I understand it, the idea is to split your state and advance one new state down the lazy list while the rest of the computation uses the other new state.
Eliding constraints for brevity, if you have:
    split  :: g -> (g, g)
    random :: g -> (a, g)
then you can say
    lazyRandomList :: g -> ([a], g)
    lazyRandomList oldState =
      let (listState, newState) = split oldState
      in (makeList listState, newState)
      where
        makeList inState =
          let (value, nextState) = random inState
          in value : makeList nextState
When you match on the outer tuple, the split gets forced, but `makeList listState` is still a thunk. You can go ahead and use `newState` all you want without forcing it.
Once you force it, you have a cons, with two thunks that can be forced independently. The tail evaluates to a similar cons produced from a different state. A lazy list.
It's true that it's infinite, but you can fix that (if you want to) by taking the head (perhaps a random number of elements); it's easier to control the distribution of length that way than with some fixed chance you produce nil at each step, although that approach works too.
Personally, I don't like C#'s nullable types as they are right now. I would prefer them to be just a library type instead of a language construct. With the language construct there seems to be no way to use fmap/bind (Select/SelectMany in LINQ) on nullable types, while that would be trivial if they had used an Option<T> type.
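For anyone unfamiliar with the LINQ analogy, this is what that chaining looks like with a real option type; a small Haskell sketch with Maybe (the half function is a made-up example):

    import Text.Read (readMaybe)

    half :: Int -> Maybe Int
    half n = if even n then Just (n `div` 2) else Nothing

    main :: IO ()
    main = do
      print (fmap (+ 1) (Just 4))        -- Select / fmap: Just 5
      print (readMaybe "42"   >>= half)  -- SelectMany / bind: Just 21
      print (readMaybe "oops" >>= half)  -- short-circuits: Nothing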
Is laziness at all related to IO/effects libraries in Scala like cats-effect or zio? Like where you have to switch your mental model to view a program as something that just describes an eventual outcome but doesn't actually "do" it until some last-minute side effect thing that is bolted on?
In a lazy-by-default language (Haskell) you absolutely have to use that kind of approach, because you can't write code where effects are important any other way. You have no control over evaluation order (other than dirty hacks), so you can't rely on it to control effect sequencing.
Of course in a strict language with a decent type system like Scala you eventually find you want to use that style anyway. But you can come to it in your own time and see the benefits for yourself rather than having it forced on you.
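In Haskell an IO value is exactly such a description; nothing happens until it's reached from main. A tiny sketch:

    main :: IO ()
    main = do
      let greet = putStrLn "hello"  -- a value describing an effect; nothing printed yet
      greet                         -- running the description
      greet                         -- and again; it's an ordinary, reusable value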
Personally, I found precisely the type system the most challenging part. It still is. The whole type level programming is something you just don't have in almost any other language so that's completely novel to me.
> Instead of Haskell we should be thinking of programming language research and how to get the best ideas into our mainstream languages.
I see a lot of this happening in Kotlin and Rust (not too mainstream, but clearly more mainstream than Haskell). Also, FB putting its weight behind Reason/OCaml clearly shows the movement. To be fair, sdiehl did mention in the preso that copying these concepts over to other languages takes 10-20 years.
> How do we make immutability easy and ergonomic and get rid of nulls in C#?
The problem here is: you don't. To some extent you have to get these things right from the start. The father of null, Tony Hoare, now considers it his $1B mistake. Why? Because this is hard to fix.
> Can we have guarantees in our program.
Sure you can. C# gives you some typing guarantees. Now pattern-matching switch statements (with exhaustiveness checking) combined with sum types are coming: this will help! But the guarantees that Haskell (and Idris/Elm/PureScript) bring are next level, and impossible to replicate in a language that neither encodes purity in the type system nor has type classes.
> I see a lot of this happening in Kotlin and Rust (not too mainstream, but clearly more mainstream than Haskell).
But only at the most superficial level. It was heartbreaking to hear Kotlin ignore decades of Haskell and Scala experience when it couldn't be condensed into a 10-line example. They've adopted a fundamentally broken approach to representing absence and they'll regret it in 10 years, but it's already too late to fix it. Heck, look at Go ignoring Java's experience with not having generics.
Nested nullable types collapse to the same type, meaning it's very easy for generic code to contain really subtle bugs. If you have some generic code that holds a T? and assumes that whenever it is null there is no T, that assumption will be almost correct, and the cases where it fails are very easy to miss during testing.
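Concretely (a toy example of mine, sketched with Haskell's Maybe, since that's exactly the distinction T? throws away):

    import qualified Data.Map as Map

    -- A table whose values are themselves optional:
    table :: Map.Map String (Maybe Int)
    table = Map.fromList [("known-empty", Nothing), ("present", Just 1)]

    main :: IO ()
    main = do
      print (Map.lookup "missing" table)      -- Nothing: the key isn't there
      print (Map.lookup "known-empty" table)  -- Just Nothing: key present, value absent
      print (Map.lookup "present" table)      -- Just (Just 1)
      -- In Kotlin, both "missing" and "known-empty" would collapse to a bare null.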
Also, it's very common to start with option-style code and, as your code evolves, need to refactor it to either/result-style code (which includes a reason why something was not present); indeed, I'd say this almost always happens in code that lives long enough. In Kotlin this is unnecessarily difficult, because it's impossible to write code that works with both nullable types and an either/result-like type, and since null has special language-level support you can't even use the same syntax.
For me, there are two killer reasons for Kotlin to handle nullability as it does: smooth host platform interoperability and zero overhead. The existing huge mass of JVM libraries use null, so by embracing that rather than adding a new wrapper object like Scala's Option (or even JDK 8+'s Optional), Kotlin works smoothly with all those libraries. And with some annotations, Kotlin's nullability support can be applied to those existing libraries. As for zero overhead, an Option or Optional is another object, and the overhead of that object can't be completely optimized away by the JIT. And both of these concerns apply to all of Kotlin's target platforms.
> The existing huge mass of JVM libraries use null, so by embracing that rather than adding a new wrapper object like Scala's Option (or even JDK 8+'s Optional), Kotlin works smoothly with all those libraries.
Recent JVM libraries have (rightly) moved away from null. Using e.g. JDK8 streams from Kotlin is very cumbersome (noticeably more cumbersome than using them from Scala). The special treatment of interop also creates a bunch of special cases in the language that break your usual assumptions: extracting a common method call won't always work, adding explicit types can change behaviour. It's one of those things that looks like a good idea in the small but creates more problems in the large.
> And with some annotations, Kotlin's nullability support can be applied to those existing libraries.
Again this is a case of Kotlin refusing to learn from history. Look at what happened with the JSR308 checker and adding nullness annotations to existing libraries. Someone adds the annotations and everything seems great, then other people make changes to the library and don't maintain the annotations and they become lies.
> As for zero overhead, an Option or Optional is another object, and the overhead of that object can't be completely optimized away by the JIT.
People who care about zero overhead wouldn't be using Kotlin in the first place. The raison d'etre of the JVM is safer programming with fewer crashes, even if that means a little performance overhead some of the time. (For the record you're wrong: in the cases where the JVM stack-allocates an option or optional the memory pattern is exactly the same as if you'd used a nullable value instead. But that's not the point).
> People who care about zero overhead wouldn't be using Kotlin in the first place.
Depends on the target platform. When developing for Android, one can't entirely escape the JVM (or rather, ART), and developing entirely in a JVM language eliminates JNI overhead. And I'm guessing ART doesn't have all the advanced optimizations of HotSpot.
I'll respect your opinion but sign up for the "not what other people want" crowd.
I started learning Haskell last week. I have a small rest service I need to write and I want to try something new. Why Haskell?
Because I miss seeing excellence/purity in computer languages. Nearly 10 years ago I jumped out of the Smalltalk balloon. It was one of the only languages where Object Oriented felt like a paradigm shift (CLOS and Self were also purist and paradigmy). It was sadly headed for the pile of lost languages.
I have spent the last many years polyglotting: learning and plying Python, Ruby, Objective-C, some C++, Kotlin, Swift, Dart (both 1 and 2), Lua, Go, and even a smattering of JavaScript (plus CoffeeScript). It's been a journey through language populism: language features pulled from multiple disciplines and mashed together with no real rhyme or reason other than whatever crowd or favorite feature the steering committees cater to at any given point. Consistency and guiding principles seem to be a thing of the past.
I'm too new/naive on the Haskell journey to know what lies ahead. I've discerned that it seems to be one of the gold standards of the "pure functional" paradigm. And at 49, I may be too old to be enlightened by another paradigm shift.
But what I yearn for is computer languages that feel like the design elements all work together so that the whole is greater than the sum of the parts, instead of languages with a checklist of "good ideas" that thrown together create a whole that is lesser than the sums of their features.
Will Haskell take me there? Time will tell. Wish me luck.
Haskell is probably the exact opposite of what you are looking for. It's basically a pile of every academic's pet language feature. I would suggest you try Common Lisp or Clojure, which will probably be closer to what you are looking for.
Language extensions do not pose a problem for the best-in-class composability that Haskell offers, and which the OP is interested in, at least for practitioners of the language.
Are you absolutely sure these language extensions are as bad as you think they are?
Let's look at what they asked for:
> But what I yearn for is computer languages that feel like the design elements all work together so that the whole is greater than the sum of the parts, instead of languages with a checklist of "good ideas" that thrown together create a whole that is lesser than the sums of their features.
- computer languages that feel like the design elements all work together so that the whole is greater than the sum of the parts
+ check
- instead of languages with a checklist of "good ideas" that thrown together create a whole that is lesser than the sums of their features
+ the key here is whether you end up with "a whole that is lesser than the sums of their features". Haskell's language extensions do not undermine the composability the language offers, so this is not as big a problem as you make it out to be here
that said, I think it does require a bit of a 'paradigm shift' to learn Haskell well, and if you're not up to it, Clojure might be a better fit. But I don't think that shift would be driven, for the OP, by the language extensions so much as by learning about laziness and the lambda calculus.
Don't get me wrong - I really do love Haskell. And I don't think the language extensions are bad.
I just mean that Haskell doesn't feel like a coherent, commercial language to me. It is basically a bunch of PhD theses strung together on top of a research language, with some of them being very interesting and incredibly useful. This is perfectly fine for what it is, but it also means the overall language and standard library were not designed with a "batteries included" mentality that lets people get going easily and quickly in a commercial setting.
I suggested CL/Clojure because those are both interesting lambda-calculus-based languages similar to Haskell (without the typing, unfortunately), with great, mature libraries for building any commercial project imaginable.
there's a moment with the language where you go from trying to do things the way you would have done them in another language and instead, your brain adapts, and you learn to do them the functional way. because it's Haskell, you learn that you can quickly simplify things -- a little glue, a better understanding of the various functions you can pull out from the standard library, etc. -- and something that seemed incredibly complex reduces (like, for a very strong definition of like, algebra) to a few simple terms. many parts of the language come together and make it all click in this way: the syntax, the automatic currying of functions, the purity, and the type system. you start to look at all code differently. you start to see it as a bunch of puzzle pieces that can be fit together just so. and when you go back to try and use something else, you find yourself reflexively reaching for tools that aren't there, or that, removed from their original context, no longer have the support that made them quite so useful or so pleasant in Haskell. you even feel it going to other functional languages -- sticking all these parentheses into this lisp is obscuring the simple details, the puzzle pieces extended with decoration that makes them fit together less well.
i don't mean to suggest Haskell is without flaws. but i do think you'll experience the paradigm shift as something concrete. there are almost no languages like it, after all.
None of the languages that let you achieve certain top qualities (top performance, proven correctness, concurrency, compactness, etc.) are trivial or "feel natural": Haskell, Rust, APL / K / Q, Erlang, hell, even C++ and Scala (though these two are unnecessarily bloated). You've got to study and internalize the concepts that underlie them, and change your way of thinking.
This is because reality is not intuitive either.
Take natural numbers. They are utterly intuitive. Three-year-olds can grasp them. But as you try to calculate more and more complex things in the real world, you discover the need for counter-intuitive concepts like zero, negative numbers, fractions, even the aptly named irrational numbers. If you try to calculate √3 using only the intuitive natural numbers, your solution will be rather approximate.
Same thing with many "intuitive" languages, from Basic-80 to ES6. You can solve a number of problems with them, but many solutions end up inexact and full of holes. In a lot of cases, this is acceptable, or seen as acceptable.
But when you need tools for a precise solution, you have to study some "advanced concepts" (another word for math) and take a language like Haskell that supports them. Sometimes you end up with something as unwieldy as your average three-story analytic formula, but this is often not a shortcoming of the language; it is the nature of your problem domain, described correctly.
I'd definitely argue that no language ever will "feel natural" unless it fundamentally changes the metaphors of computation. Humans just don't think in sufficiently "hard-boundary" terms for programming as currently phrased to come naturally. Granted, culture has made up for a lot of that - in the UK, children are expected to be able to perform arithmetic, which presumably helps a lot in instilling the appropriate style of yes-no exact thinking - but I don't think the programming mindset aligns well at all to what was in the ancestral environment. Our minds are classifying machines built to do Just Well Enough in a world where everything is shades of grey; programming is the art of making the world concretely align to a well-specified model.
When we tell a person to do something, we always give them some freedom in how to do it. When we tell a computer to do something, we don't. If one were to instruct a human being to go to the grocery store in absolutely full micromanaged detail, it would look equally unnatural ("Lift your right knee. Swing your right foot forward. Put down your right foot...") Sort of like that videogame QWOP.
Ah, I wonder if “intuitive” is really as much of a feature as it’s trumpeted to be (under the assumption that intuition could be a crutch used to “get to places we’ve already been”)
> Instead of Haskell we should be thinking of programming language research and how to get the best ideas into our mainstream languages.
I feel like that's what the Scala people are doing. Yet the end result is that this kind of Scala code is just as slow to compile as Haskell, and just as hard to learn for newcomers as Haskell.
The difficulty lies in the concepts enabled by the language, not the language itself. Go read the GHC user guide, especially the part about language extensions, and you'll finish in an afternoon. Yet people are always coming up with new ideas and abstractions that use these language features in new ways. Haskell is an especially rich platform for new ideas and abstractions to flourish. Lens is somewhat difficult to learn, yet in its simplest form it can even be used without any language extensions. (You won't be able to name the Lens or Traversal type itself without RankNTypes, but you just need to expand the synonyms; in other words, it's just sugar.)
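To back up that lens claim: a concrete lens sits at an ordinary rank-1 type, so a sketch like the following compiles with no extensions at all (primed names so as not to clash with the real library):

    import Data.Functor.Const (Const (..))
    import Data.Functor.Identity (Identity (..))

    -- A lens onto the first component of a pair, at a plain rank-1 type:
    _1' :: Functor f => (a -> f b) -> (a, c) -> f (b, c)
    _1' f (a, c) = fmap (\b -> (b, c)) (f a)

    view' :: ((a -> Const a b) -> s -> Const a t) -> s -> a
    view' l = getConst . l Const

    over' :: ((a -> Identity b) -> s -> Identity t) -> (a -> b) -> s -> t
    over' l f = runIdentity . l (Identity . f)

    main :: IO ()
    main = do
      print (view' _1' (1 :: Int, "x"))        -- 1
      print (over' _1' (+ 1) (1 :: Int, "x"))  -- (2,"x")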
> A lot of stuff to get your head around. There are complex topics from category theory to metaprograming. Sure you can ignore it - until you want to use a web framework that uses those concepts, then it's on you.
I'm convinced this is just FUD. Every language's main web framework has a bunch of complex concepts to learn; if you want to use Rails you're going to have to learn about ActiveRecord, metaclasses, and whatever it is that Rails uses for routing. By the time you've become productive in an enterprise-scale codebase, you've learnt just as much as if you were using Haskell; the only difference is that the knowledge is less transferable.
> Instead of Haskell we should be thinking of programming language research and how to get the best ideas into our mainstream languages.
I believe Scala is already such a language.
> There are even features not in Haskell proper like Liquid Haskell that would be interesting to have in C# or Typescript.
There are many advanced type system features that are very hard or mathematically impossible to implement in mainstream languages. The reason Haskell can maintain its novelty is its purist and minimalist approach to many aspects of the language.
For example, Hindley-Milner languages cannot add subtyping. But they can easily infer the parameter types and the return type of a recursive function, while Scala cannot, despite being very advanced.
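For instance (a toy definition, but the inference is real), GHC works out both the parameter and result types of a recursive function with no annotations:

    -- GHC infers: len :: Num t => [a] -> t
    len [] = 0
    len (_ : xs) = 1 + len xs

    main :: IO ()
    main = print (len "hello" :: Int)  -- 5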
> C# -> Ruby is 10 hours to be productive, 100 hours to be reasonable. C# -> Haskell is ten times that.
> would be interesting to have in C# or TypeScript
While I tend to agree, many teams have also been burnt by these languages, because multiple paradigms actually make things even harder. It's easy to be productive, but as people climb the tower of advanced types, the code diverges, and the idioms people speak diverge too. It's not as easy as we thought to be reasonable. Many people in the Scala community eventually found it's better to use only the functional part rather than mix everything.
The Stream API would be my first thought, but I am not as familiar with Java. Maybe I should have said "ideas"; I'm not sure there are any language-level changes behind the Stream API.
> Stream API would be my first thought but I am not as familiar with Java.
Can you name any design characteristics in Java's stream API that came from Haskell, rather than were already common in many languages when Haskell itself was designed? I can't.
But Java doesn't do stream fusion as you're describing.
It does conventional inlining and combining loop bodies, which can in some trivial cases achieve the same effect, but these are techniques you'd find in any compiler or even an 80s text book - they didn't come from Haskell.
Here's the limitation of Java's 'fusion' - you'll notice there's no pass to do this in Hotspot - it just happens due to the inlining and GCM passes.
The original paper on Java generics as we know them today only mentions Haskell twice, both times in passing.
Once in reference to something that they didn't do (algebraic types), and once in reference to that both Haskell and their work were being inspired by an earlier common idea (System F) from the early 1970s.
But those were popular and common before Haskell. Why would we say that lambdas spun off from Haskell to Java, rather than they were spun off from ML, or Miranda, or from Lisp?
I can't say really, but my recollection is that the last decade of heightened interest in FP has had Haskell as a fixture, and was significantly sparked by the need to program in a multi-core world; parallelization is made significantly simpler in the context of pure FP and controlled side effects.
I hear you though. I was mostly a Lisp person and only really got into Haskell in the last couple of years. Even still, I would have said that the FP zeitgeist had Haskell as its central example of FP.
I think it's safe to say that the inspiration for Java lambda almost certainly did not come from the "FP zeitgeist" you're describing, but simply from other JVM languages like Scala and Clojure, which showed how useful they were and how they could be done nicely on the JVM.
> Instead of Haskell we should be thinking of programming language research and how to get the best ideas into our mainstream languages. How do we make immutability easy and ergonomic and get rid of nulls in C#?
F# pretty much plays this role within the .NET ecosystem. Almost all of the important advancements in C# over the last few years originated in F# (and FP in general).
> How do we make immutability easy and ergonomic and get rid of nulls in C#? Can we have guarantees in our program. There are even features not in Haskell proper like Liquid Haskell that would be interesting to have in C# or Typescript.
Will any of that be enough to bring the refactoring powers of Haskell? I bet not: you need the entire type system for that, and if you bring in the entire type system, your language will become as hard to learn as Haskell.
Without the refactoring powers, you are stuck again into the old school "design it well or you'll suffer" development cycle.
I love Haskell, but personally I don't want Haskell ideas to be popular in a given language until they're actually feasible there. Yes, lazy evaluation and tail-call recursion are awesome; at least they are in Haskell. Outside of Haskell, it's just a fun trivia fact that some other language does it.
I've got a blog post in the oven about tail-call recursion and how to use it in F# to write your loops more safely. By using recursive (possibly side-effectful) functions instead of iteration, you make the decision about "whether or not to loop" explicit in code, making it much harder to make e.g. certain classes of off-by-one errors. When it's an explicit decision to loop, by calling yourself with an amended immutable state parameter, it often sticks out like a sore thumb when you loop wrongly (e.g. by forgetting to update your state, or by looping one too many times).
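The same idea sketched in Haskell rather than F# (a toy countdown; the "state" is just n):

    -- Each iteration is an explicit recursive call with an amended state:
    countdown :: Int -> IO ()
    countdown 0 = putStrLn "done"
    countdown n = do
      print n
      countdown (n - 1)  -- the decision to loop again is visible right here

    main :: IO ()
    main = countdown 3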
I tried twice to get started with Haskell. I understand the language well enough, but the standard prelude gave me problems. I forget the details, but the first attempt involved reading ASCII strings from file. I had some issue with the standard string type and after a bunch of searching and trying other things, nothing worked. The second time I tried to use sockets. After a few days, I discovered sockets were both broken and deprecated.
Haskell needs to replace the standard prelude and burn the old documentation.
I have used dozens of languages and never really had any problems.
But the simplest of tasks in Haskell immediately put up roadblocks.
I'm sure if I had sufficient reason I could get past these problems, but when easy things are not easy, one loses interest fast.
> I discovered sockets were both broken and deprecated.
Just in case anyone gets the wrong end of the stick, "Haskell sockets" are both working and supported (quotation marks because there is no single blessed implementation: sockets are just a (few) library(ies)). When people want help choosing a library, I point them to https://haskelliseasy.readthedocs.io/en/latest/, which is an opinionated, curated list.
There are alternative preludes you could use instead that provide a much better experience. I recommend taking a look at relude [0].
Could you share a bit about your experience with sockets in haskell? I had the chance to use sockets in haskell on multiple occasions and they worked as expected.
There's some amount of truth to this. The I/O facilities that ship with Haskell are in many regards lacking. Fortunately, the community has created some fantastic libraries to help do I/O much better.
It's been a while since I was involved in that space, but at the time you had things like the Text library to properly handle human text, ByteString to handle plain 8-bit arrays, and Conduit and Pipes as two libraries for streaming I/O (both network and file) with resource usage guarantees.
First, there are a number of alternative preludes around: rio, foundation, and relude, to name a few. The standard prelude, OTOH, isn't going anywhere anytime soon, because throwing it away would break too much existing code.
I'd love to help you with the roadblocks, but I need more specifics to do that.
I've been writing Haskell professionally for 5 years now, and as a hobbyist even longer, and this presentation just doesn't resonate with my experience and view of the future.
That said I've taken my focus off working in it professionally. It's such a chore to have to constantly justify it to newly-externally-hired management and inevitably get a top-down directive to rewrite. I'm done relying on managers and corporations to drive Haskell.
My focus now is freedom & independence. I've got plenty of huge, impactful personal projects & Haskell advancements I'd like to undertake in the next 5-15 years, and nothing even sniffs Haskell as a tool for building without thinking or exerting energy. Maybe my issue is that I love Haskell but am abjectly not a community member.
What kind of production systems are a good fit with Haskell? I've wanted to learn Haskell but keep getting distracted by other shiny things that are more directly applicable to my job.
Haskell won't give you high interactivity (millisecond response times) or C-like performance. It will make it hard to control the low-level details of your memory and execution path. It will also not bring a lot of gain if you only have homogeneous data (as in machine learning).
If you have a problem where intermediate performance (on the level of Java and .NET, better than Python or JS) is OK and people won't notice 100ms extra here and there, Haskell is viable. If it's not focused on low-level IO and deals with complicated stuff, odds are it's your best option right now.
I do not have real-world experience with Haskell, aside from little toy projects, but I have a lot of experience with other functional languages in the ML family and Scheme.
Idris 2 looks appealing, but they should have also mentioned other approaches like Fstar, Lean and Z3. I quite like Fstar since they are delivering a big verified code base, as implied by the name of the effort: Project Everest [1].
While theorem proving is the most general approach to verification, manual proofs are tedious, and full automation may never be completely possible.
I would love to see someone pushing for a different approach. A toolkit where you can create DSLs with limited expressiveness, which in turn make it easy to build and verify things. The equivalent to Alan Kay's Viewpoints Research Institute efforts to build a minimal computation system, but with some verification guarantees.
Those are interesting approaches. I'm pretty pumped about Lean. It may just not have fit into his narrative and he can't talk about everything. Fstar and Lean are more children of Coq and ML (at least syntactically) to my understanding than they are children of Haskell. They're cousins. Although everyone influences everyone of course.
>A toolkit where you can create DSLs with limited expressiveness, which in turn make it easy to build and verify things.
This is a common approach in the Coq ecosystem: writing DSLs for building things, and custom automation for them. See for instance papers by Adam Chlipala.
I wonder if increasing bloat is a permanent thing? Will software ever transition to a cycle of simplification? Has it ever gone through a simplification cycle in the past?
Physical engineering has some limits that push back on complexity. You can only cram so many gears into a Swiss watch. An item can only have so many parts before manufacturing becomes a nightmare.
What will push back on the complexity of software? Will it ever "break" in a way that we just do things differently?
And it would be strange to think that something like haskell could save us, when the compiler itself is such a beast. If something saves us, it will probably look more like FORTH or lisp or Prolog.
> Has it ever gone through a simplification cycle in the past?
Not as a whole, but there have been some significant attempts. Maybe the most radical one I can think of is Chuck Moore's Forth systems. Every new version is simpler than the previous one. Arthur Whitney has done something similar with APL, A+ and k, constantly trying to remove non-essential features. A more mainstream example is Unix and Plan 9, the 7th version of Unix was simpler than previous ones, and Plan 9 was much simpler than Unix. There are also initiatives like suckless, which try to write simpler software.
I am sure there are many more similar examples. Trying to remove bloat and complexity is a very important goal for many developers. Unfortunately, the real world usually gets in the way (colorForth has quite limited hardware support, Plan 9 never included a full-featured web browser, suckless software is not too friendly for novice users...).
> Has it ever gone through a simplification cycle in the past?
Yes; whenever smaller but less powerful hardware replaced larger machines, entire bloated old platforms were discarded or ignored by a generation of new users.
The fact that many students learn Haskell in University and then don't use it in production isn't as much of a problem as the presentation seems to suggest. Programming in a pure functional language is a school of thought, a way to solve problems that's markedly different from OOP or imperative styles.
In my education as a programmer Haskell has a special place and the way I use filter, map and reduce in production JS code is a small reflection of that.
Ideas from Haskell have cross-fertilized into many other languages; you can clearly see it in Rust, for example.
These slides simply assume that Haskell as-is is progress, and then bemoan the "fashions, whims, and tastes" of an industry slow to realise that. But to measure the "timescale of progress" you need to consider the duration not from the time an idea first appears until mass adoption, but from the time the idea is proven and market-ready. GCs were widely adopted within 5 seconds of having acceptable performance and not being packaged in runtimes with other significant downsides. The adoption rate of technology, in general and in software in particular, is largely consistent with what we'd expect from traits in selective environments. Ideas that trickle slowly through fashions (genetic drift?) are those that aren't adaptive (yet?).
"Everyone should build GHC from source at least once...You will learn a lot about how the sausage gets made."
A decade ago a friend paid a local company to build him a many-core compute server. I was fairly certain they'd under-spec'd the power supply. My favorite stress test was to build 32 copies of GHC at once, daisy-chaining each new copy to build another. Keep it running.
I love the clear eyed approach to what Haskell's future may look like. I look forward to seeing the fresh ideas the programming language community continues to bring forward in this area. Does anyone know if these slides came from a talk that was recorded?
> "The barrier to entry for GHC development remains impossibly high"
Could not agree more. I've put in a couple of merge requests to the GHC Haskell compiler, and it's very difficult to even figure out what's going on, never mind make a meaningful contribution.
When will Haskell get a different spokesperson? Stephen isn't effective in this role. These posts have jumped the shark to the point where, when I see a big Haskell post on HN and see stephendiehl.com, it's just time to tune out.
For Haskell to succeed, it has to move away from a rigid commitment to various pure functional paradigms and move to eager execution by default. But Stephen does not help with this, preferring instead to dig in further, expecting the rest of the world to change itself to accommodate the strictures under the false pretense that the strictures are better.
Haskell’s real failure to launch is not because of complex tooling issues between cabal & stack, not because of a big mess of compiler pragmas to get basic String functionality, not due to complexity of understanding monads or type classes.
The problem is that the stated benefits of pure functional programming are not actually benefits, at least not when writing business software.
The slides (rather arrogantly) say we're at the blood-letting and leeches stage of software. Well, claiming Haskell is a solution is like inventing epicycles to explain orbits or elevating alchemy to a science. It wastes everyone's time with artificial complexity substituted for advanced capability.
Right. If you want to talk about your language, there are two effective ways:
1. Talk about it in isolation: we enjoy Haskell, here's where we think it should go.
2. Place it in the context of the wider software ecosystem, as in these are the pros and cons, barriers to adoption etc.
What isn't effective is doing #2 without really engaging with reality. If you just assume, contrary to evidence, that Haskell is the right way and that the reason it's not adopted more is that everyone insists on doing things the wrong way, you will not convince anyone who doesn't already believe this fairy tale. Worse, you'll ensure you won't help yourself, because you'll focus on the issues you wish you had rather than the issues you actually have. Self-reflection is supposed to help you, and it only helps if it's a long, hard look in the mirror. The approach of "things aren't going the way we hoped, but our assumptions could not possibly be wrong, so we'll just examine everyone else's" is not generally useful; it's nothing but a waste of time, because you know you'll end up exactly where you started.
The types of posts that get downvoted on here are at times maddening.
This, to me, is the only reply thus far that addresses what I also perceive to be the fundamental problem with Haskell: it doesn't provide any new convenient tools to solve real-world problems, while requiring one to learn a bunch of abstractions that are useless as defaults anywhere else in industry (monads, laziness, immutability).
However, the problem with the parent post is that it presumes Haskell isn't useful because of X, when clearly it's useful for many, and changing X would make it not Haskell. That sort of comment isn't useful.
I think for as long as I have known Haskell (1992-) there has been interest in and a desire for a strict variant of Haskell. In 2020 it seems there are a lot more options for that, which is great, but they aren't Haskell.
As a seasoned C++ programmer who occasionally plays with Haskell, here is a question:
Has Haskell finally solved the problem of how to do destructive updates or not?
If it cannot do destructive updates, I am not interested. The kind of software I am called on to build always requires destructive updates for performance reasons; and secondly, I am so used to assignment and all the design patterns around it that I don't think I should bother with something that does not provide it.
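For what it's worth, Haskell does allow controlled destructive updates, for example via the ST monad, where the mutation is real but provably invisible from outside. A minimal sketch (the function name is mine):

    import Control.Monad.ST
    import Data.STRef

    -- An imperative-style loop over a mutable accumulator. runST
    -- guarantees the mutation can never be observed from outside.
    sumLoop :: [Int] -> Int
    sumLoop xs = runST $ do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef' acc (+ x)) xs
      readSTRef acc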
I've been paid to write Lisp most of my career. Most new languages I check out seem like some variant of C or a dumbed-down version of Lisp. Haskell is the only one that seems like an actual upgrade from Lisp, in that there are things you can do in Haskell that would be quite difficult to shoehorn into Lisp without changing Lisp in fundamental ways. (A statement I never thought I would make before I found Haskell.) The full panoply of monads for example. Automatic currying for another. A true lambda-functional language that does not need parentheses and makes no compromises for their absence for yet another.
All this does not make me want to abandon Lisp for Haskell; Lisp is still more practical for most of my needs. But if I had to write a parser I'd probably first write it in Haskell and then translate it to Lisp, because Haskell expands the way you think about hard programming problems -- even beyond the realm to which Lisp expands it.
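To give a flavour of why Haskell suits parsers so well: the core of a parser-combinator library is only a few lines. A minimal sketch, with names of my own rather than from any particular library:

    newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

    -- Consume one character satisfying a predicate.
    satisfy :: (Char -> Bool) -> Parser Char
    satisfy p = Parser $ \s -> case s of
      (c:cs) | p c -> Just (c, cs)
      _            -> Nothing

Real libraries layer Functor, Applicative and Monad instances on top of this shape, which is where the composability comes from.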
For me it's been the opposite. Rust was the first programming language I learned back in 2014, and it's what I've mainly been using until recently. Rust is still my go-to for situations where performance is key, but my most used language today has become Haskell.
Rust is very nice, but when you've used it for long enough you start noticing the warts. I especially miss Higher Kinded Types from Haskell, and I'm constantly reminded of the lack of Functor and Monad etc. At some point I simply realized that for most applications, the generally lower performance of Haskell and the GC is really not a problem at all. I mean, it's not like it's Python slow; it's just a bit slower than C/Rust. The more advanced type system of Haskell is just so nice that I feel it's worth it.
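For anyone who hasn't hit this wall: higher-kinded types let a class abstract over the type constructor itself, which Rust's traits currently cannot express. A minimal sketch (Container is a made-up name; in practice this is just Functor):

    -- 'f' has kind * -> *: the class abstracts over the container
    -- itself, not just the element type.
    class Container f where
      cmap :: (a -> b) -> f a -> f b

    instance Container Maybe where
      cmap _ Nothing  = Nothing
      cmap g (Just x) = Just (g x)

    instance Container [] where
      cmap = map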
As a Rustacean, I find that silly. There's a lot to love about Haskell.
I'd be interested in hearing from you Haskellers what it is about Rust that renders Haskell obsolete. As it stands, I can only construct straw men, and I'm not really willing to do that.
I've used both extensively, even on some similar projects. Biggest thing I've noticed with Rust is that—often by necessity—it's hard to design abstractions that don't leak implementation details. That's a fundamental property of how Rust uses its types to manage resources, and it can definitely pay off for some kinds of code with particular performance requirements. In practice, though, it also makes it much harder to write reusable abstractions, especially with anything involving first-class functions and polymorphism.
The upshot is that the two languages feel very different, encourage totally different programming patterns and give you different high-level trade-offs. They're far more complementary than substitutable, and I expect my future projects will mix and match both.
Everyone else pretty much said it for me. To elaborate more personally: the distilled expression of pure functions as _the_ basic entity of modular engineering is such a huge selling point for me. Lazy evaluation is also surprisingly valuable in this vein, because I can compose components -- pass the output of a component to the input of another -- with the assurance that I'll only pay for that computation if it's actually requested.
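A small sketch of that pay-only-when-demanded composition (expensive and firstOver are made-up names for illustration):

    import Data.List (find)

    -- Stand-in for a costly per-element computation.
    expensive :: Int -> Int
    expensive x = x * x

    -- 'map expensive' only runs for as many elements as 'find'
    -- actually inspects; this even terminates on infinite lists.
    firstOver :: Int -> [Int] -> Maybe Int
    firstOver limit xs = find (> limit) (map expensive xs)

For instance, firstOver 100 [1..] returns Just 121 after squaring only the first eleven numbers.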
I do deeply appreciate Rust in how it supports serious functional programming idioms without requiring a GC. The value of immutable-by-default and single-ownership semantics has been infectious far beyond my direct use of Rust. But those very same features of Rust also make the pure function just a little more clunky in that environment.
Oh, and being literally unable to perform I/O (or any other computational effect) outside of the right context is such a clear win for me. Very few languages, even in the functional world, have such a strict separation. The architectural and mental benefits are outstanding.
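Concretely, a minimal sketch of that separation:

    -- No IO in the type, so this function provably performs no I/O;
    -- the compiler rejects any attempt to sneak a putStrLn in here.
    greet :: String -> String
    greet name = "Hello, " ++ name ++ "!"

    -- Effects are confined to IO and only run when reached from main.
    main :: IO ()
    main = do
      name <- getLine
      putStrLn (greet name)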
Some concepts take on a surprisingly simple existence in Haskell. I recently discovered the Yoneda and codensity transformations, so frequently discussed in Haskell... in the context of a Visitor pattern on a tree structure in Java. Visitors are literally Yoneda-ified sum types, and I could only make that connection (and understand the ramifications and benefits) after seeing how it naturally arises in both languages.
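For the curious, the Yoneda encoding itself is tiny in Haskell. A sketch of the standard shape (helper names are mine, though equivalents exist in common libraries):

    {-# LANGUAGE RankNTypes #-}

    -- Yoneda f a is isomorphic to f a; a Visitor is essentially
    -- this shape specialised to a sum type.
    newtype Yoneda f a = Yoneda { runYoneda :: forall b. (a -> b) -> f b }

    liftYoneda :: Functor f => f a -> Yoneda f a
    liftYoneda fa = Yoneda (\k -> fmap k fa)

    lowerYoneda :: Yoneda f a -> f a
    lowerYoneda y = runYoneda y id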
>Very few languages, even in the functional world, have such a strict separation.
It's catching on. Nim has the noSideEffect pragma (enabled by default if a function is defined with "func" rather than "proc"), and D has a "pure" annotation. Both are compiler-checked.
Rust doesn't even have higher-kinded types, let alone kind polymorphism. That means no proper do notation, so look at the complete mess of macro-based syntaxes you have for things like error propagation, async or transactions. Things like recursion-schemes are impossible, and the borrow checker is kind of ad hoc, meaning the control-flow keywords are special cases and you often can't do what should be a very simple substitution when you want to add an effect (e.g. if -> ifM is not possible, because there is no way to abstract over the monad and the borrow checker wouldn't be able to handle it if there were).
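For contrast, the monadic if alluded to above is a two-liner in Haskell. A minimal sketch (equivalents of this ifM ship in common utility libraries, but it is defined here to be self-contained):

    -- Lift 'if' over a monadic condition: no macros, no special cases.
    ifM :: Monad m => m Bool -> m a -> m a -> m a
    ifM cond t e = cond >>= \b -> if b then t else e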
Rust may be the language that has finally dragged mainstream programming kicking and screaming into the 1970s (it's more or less the equal of Standard ML). It still has a ways to go to catch up with Haskell.