Learning Haskell is no harder than learning any other programming language (williamyaoh.com)
290 points by nuriaion 10 days ago | 460 comments





The biggest lie about Haskell is that it's easy to learn. No it's not, and I do use it at work. Sure, it's not THAT difficult to get a basic understanding until you get to the usual Functor, Applicative, Monad stuff, which you can understand if you imagine them as context bubbles. Once you put something into a side-effect bubble (IO), you cannot take it out, so you're obligated to work inside of that bubble. This analogy should get you far enough. You're now ready to build toy projects.
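
(For readers new to the "bubble" analogy, here is a minimal sketch using only the Prelude and Data.Char of what working inside the IO bubble looks like; there is no safe function of type IO a -> a to escape it.)

    import Data.Char (toUpper)

    main :: IO ()
    main = do
      line <- getLine                 -- bind the result inside the IO bubble
      let shouted = map toUpper line  -- pure code still applies to the bound value
      putStrLn shouted                -- the final result stays wrapped in IO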

But even if you finish the Haskell Book (http://haskellbook.com), which is like 1300 pages, you're still going to be unable to contribute to a serious code base. Anyone who says otherwise is lying. Now you have to understand at least 20 language extensions, which you find randomly at the top of files as {-# LANGUAGE ExtensionHere #-}. Now you have to understand how to really structure a program, as either a stack of monad transformers, free monads, or anything else. Then you get into concurrency, and to do that you have to understand how Haskell actually works, what non-strict evaluation does, etc. etc. Otherwise you're going to get some nasty behaviour.
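
(A hedged sketch of the kind of structure being described: an extension pragma at the top of the file, plus a small ReaderT-over-IO transformer stack. It assumes the mtl package; the Config/App names are made up for illustration.)

    {-# LANGUAGE ScopedTypeVariables #-}  -- the pragma syntax mentioned above
    import Control.Monad.Reader

    data Config = Config { appName :: String }

    type App a = ReaderT Config IO a

    greet :: App ()
    greet = do
      name <- asks appName                       -- read from the shared environment
      liftIO (putStrLn ("hello from " ++ name))  -- lift a plain IO action into App

    main :: IO ()
    main = runReaderT greet (Config "demo")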

You think I'm done? Let's get to Lens. You can use Lens after a relatively short time of reading the docs. But to understand Lens? Very few people actually understand Lens.
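
(A hedged sketch of everyday lens usage, assuming the lens package and a made-up Person/Address record; as the comment says, using it like this comes quickly, while understanding the implementation is the hard part.)

    {-# LANGUAGE TemplateHaskell #-}
    import Control.Lens

    data Address = Address { _city :: String } deriving Show
    data Person  = Person { _name :: String, _address :: Address } deriving Show
    makeLenses ''Address
    makeLenses ''Person

    main :: IO ()
    main = do
      let p = Person "Ada" (Address "London")
      print (p ^. address . city)                -- view a nested field
      print (p & address . city .~ "Cambridge")  -- copy with the field updated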

Don't get me wrong, Haskell has spoiled me, and I don't really want to touch any other language (though I still like Clojure, Rust, Python, Erlang). Once you get past all that, the language is a joy to use.


> You think I'm done? Let's get to Lens.

Lens is a library, not part of Haskell the language, and it's not particularly widely used. If you are going to conflate the ecosystem with the language, then we could equally talk about the complexity of dependency injection frameworks, AbstractSingletonProxyFactoryBeans, aspect-oriented bytecode weaving, and enterprise Java beans when discussing how much "easier" Java programming is.

I have programmed both Java and Haskell professionally on multiple codebases. I honestly found more ad-hoc and incidental complexity in the Java world. At least the Haskell libraries were generally consistent in the abstractions used. Java the language also has a lot of hidden complexity, for example its memory model (required knowledge for concurrent programming).


Yes, you should consider all of those when you talk about how hard Java is to learn. Java is a notoriously complex language and one should expect to put a lot of hours into learning it.

Haskell has a different kind of difficulty. One must expect to feel dumb for a long time, because it's composed of genuinely hard concepts. Those concepts are much simpler than Java's, but they aren't any quicker to learn.


Understanding the Java memory model properly is not easier than anything Haskell throws at you.

Not to refute your findings, but it seems unfair to call commonly chosen aberrations not hidden complexity while calling the Java memory model hidden. The only thing you can compare is the complexity inherent to the task versus the complexity added by the choice of implementation, and that comparison is subjective and depends on familiarity.

I was calling out the Java memory model and concurrent programming in Java as being very complex. It's inherent to the language itself and almost impossible to change. I was only calling it "hidden" complexity because most people do not see an issue when writing single-threaded programs. I genuinely believe concurrent programming is easier in Haskell.

And also, using the java.util.concurrent package sidesteps most foot-guns involved in rolling your own synchronization.

Perhaps most, but not all. Even something as simple as a date formatter in Java is not thread-safe.

That was fixed with the Java 8 package java.time.format, https://docs.oracle.com/javase/8/docs/api/java/time/format/D...

Ok bad example, but it shouldn't have taken them 15+ years to fix it.

SingletonFactory is an oxymoron.

Edit: oh wow, it actually exists


That's not oxymoronic at all.

You have a Factory<T> interface. Depending on T and context, it may be reasonable to provide an implementation of Factory<T> that returns a singleton, something taken from an object pool, or a new instance of an object.

I haven't worked in an IoC-heavy world for a while, but a SingletonFactory is neither silly nor an anti-pattern.


> SingletonFactory is an oxymoron.

It's not, you still need to create the one instance when it is first needed. It's possibly excessive abstraction, but it's not an oxymoron.


I've come to believe that "excessive abstraction" is practically a synonym for "object oriented programming". It takes an incredible amount of discipline to not abstract unnecessarily in an OO shop.

It's not just excessive abstraction, separating Singleton and SingletonFactory can even reduce encapsulation since you can't make the Singleton constructor private.

You can if it's in the same file.

A factory method returns instances, but there is no obligation for those instances to be created on demand or each time the function is called. Some Singleton implementations only initialize the Singleton object at first call, for example, and other implementations initialize the object at startup.

Come on it only took me 3 semesters of category theory and I was ready to use ‘do’

Thinking that you need to learn "Category Theory" in order to write Haskell is like saying you need to read George Boole's 'Investigation of the Laws of Thought' (1854) in order to write a conditional statement.

In other words, it's not.

"Category Theory" is for mathematicians and people interested in highly abstract mathematical theory. It doesn't help you write Haskell programs.


Never did I assert the need to learn category theory to write Haskell. I was speaking of my own experience, and if you can understand the underlying category and type theory that goes on behind the scenes implicitly and can write Haskell programs without ever thinking of the theory behind them, that is awesome. Personally I don’t like languages that feel like magic, so I strive to learn what is behind the abstractions, and for me that meant getting a deeper understanding of category theory. Furthermore, I feel that as a computer scientist the more math I can learn and absorb, the deeper my understanding grows on a multitude of subjects. Sorry if my post came off as flippant; I was simply trying to use humour to accentuate the difference in the conceptualization of Haskell vs. a non-pure procedural language.

You need it for a deeper understanding of type classes and algebraic data types. You can get by without it but I would say your understanding of things like a "functor" will be flawed.

This is not true. For example, the Typeclassopedia contains all you need to know about functors and doesn't go into detail about the underlying Category Theory.

Agreed. Having worked closely with them, I would say such Haskell luminaries as Simon PJ, Lennart Augustsson and Neil Mitchell have not "learned category theory" (and I don't think I'm being too offensive to them if I state that). In fact the only figure of highest repute in the community who puts much stock in category theory is Edward Kmett.

Excerpt from the Typeclassopedia:

"The wise student will focus their attention on definitions and examples, without leaning too heavily on any particular metaphor. Intuition will come, in time, on its own."

The Typeclassopedia admits that the intuition is missing. I would go on to argue that, going from the definition alone, you would think that a functor is anything that is mappable and that fmap simply maps a function across a functor to produce a new functor.

The intuition that cannot be grasped without some category theory is that fmap actually lifts a regular function between ordinary types into a function between functors. A functor is more than just fmap.
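
(To make the "lifting" reading concrete, here is a tiny Prelude-only example: the same plain Int -> Int function is lifted by fmap into Maybe, lists, and Either.)

    bump :: Int -> Int
    bump = (+ 1)

    main :: IO ()
    main = do
      print (fmap bump (Just 41))                       -- Just 42
      print (fmap bump [1, 2, 3])                       -- [2,3,4]
      print (fmap bump (Right 9 :: Either String Int))  -- Right 10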


You draw the wrong conclusion from that excerpt. It does not say "intuition cannot be grasped without some category theory". It is saying that you need to see many concrete examples to learn something abstract. This is how it works in basically all of education, and there isn't an easy substitute. Learning "category theory" will not replace your need to see concrete examples to grok something inherently abstract. You'll have to go through the concrete examples anyway.

Those concrete examples aren't visible until you do category theory. That is what I'm saying. You are drawing the wrong conclusions.

There is literally not enough information in the definition of the type class Functor, or in examples of its use, for a programmer to truly understand the concept of a functor. That is the conclusion I am deriving. It is not wrong. You are wrong. What's going on is you are deriving a conclusion convenient to your viewpoint.

Sure, you can get by programming Haskell without category theory, just like you can program without knowing the notion of an algorithm. However, in both cases you are worse off without the knowledge.


I was able to use `do` with only C programming experience under my belt and a little LYAH. Cmon now.

do notation still bugs me because it obscures what's really going on. I'd rather work with monads directly, even though it requires more verbosity.
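
(For the curious, the desugaring is mechanical; here is the same action written with do-notation and with the underlying >>= it expands to.)

    withDo :: IO ()
    withDo = do
      line <- getLine
      putStrLn (reverse line)

    withBind :: IO ()
    withBind = getLine >>= \line -> putStrLn (reverse line)

    main :: IO ()
    main = withDo >> withBind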

How do you even take 3 semesters of category theory?

By failing twice.

>Sure, it's not THAT difficult to get a basic understanding until you get to the usual Functor, Applicative, Monad stuff, which you can understand if you imagine them as context bubbles. Once you put something into a side-effect bubble (IO), you cannot take it out, so you're obligated to work inside of that bubble. This analogy should get you far enough. You're now ready to build toy projects.

The analogy with Promises, which by now everybody knows, is quite useful for getting the idea, even if it's pedantically flawed because Promises don't fit this or that criterion of Monads...


> which by now everybody knows

I wouldn't make that assumption. Between people who program in languages that don't offer promises, and people who use promises but still get them wrong, there's not exactly what I'd call a strong base of understanding.


> Anyone who says otherwise is lying.

Anyone who has experienced things you didn't should be discredited, because obviously your personal take is the definitive take about it.


But that is true for many other languages of a similar caliber; Rust and C++ are also extremely complicated!

This. Remember that to be functional in Python, C++, and many other languages, you don't need to learn most (let alone all) of their concepts. That is not true for Haskell, where if you know a lot but not everything, you are likely to run into trouble quickly. Example from my own experience: I've read 1/5 of "The Haskell Book" and know the essentials pretty well, but throw mature Haskell code (as in "something people made for a real purpose") at me, let alone monads, and chances are I'm blown away.

But if I throw mature Python/C++ code at you, with concepts you don't know, how is it easier?

Those Haskell threads are full of exaggerations. I don't know/use a lot of Haskell concepts and I can still produce software with it. You can be just fine with IO and passing everything as arguments. Which is, well, what the article is talking about.


It's easier because Python/C++ have huge communities at this point and Haskell does not. It's also easier because most people learn some variety of Java/Python/etc as their first language. So languages with similar structures and conventions are easier to grasp.

Every time someone has suggested I learn Haskell the discussion goes similar to suggesting I learn German. Sure German from a language perspective has some advantages over English in some situations. Some even argue that it's an objectively better language. But I live in Pennsylvania and speak English as a first language. Outside of moving to Germany/Switzerland/Austria, how do the advantages of German provide enough benefit for me to invest the massive amounts of time to become fluent?

Sure if we could turn back the clock on Computer Science education and have everyone learn lisp as their first language maybe we'd all be avid Haskellers these days and be better off for it. But given how history went it is "harder" to learn and less productive to work in due to external factors alone, regardless of how intrinsically easy/hard the language may be (which is entirely subjective).


There's still plenty of value in learning Haskell even if you're not going to use it daily.

I'd go as far as saying it's essential for anyone who likes programming beyond just a profession. Same with lisp.

You don't need to learn all of the language extensions or how to architect serious applications with free monads, as the OP said, but it's very useful knowledge and one of the pedestals from which all other languages should be judged.

Plus you'll understand why Idris and dependent types are an interesting future development in safety and language design. While also understanding the source and inspiration of many features in far more popular languages like JS and Rust. And there may be a real future in it via PureScript and other similar projects.


Why do you write Python and C++ in one line? C++ is an extremely hard and esoteric language. Python is a very easy language. They are like two opposite points.

C++ can be quite manageable in a professional setting, where you get Qt or boost. Then it's like any other programming language.

I think the major trouble for beginners with C++ is that you can't do anything out of the box, like work with files, create a directory, or perform an HTTP request. Whereas Python or Java are ready to use.


Eh, I strongly disagree with this characterization. I did C++ for a few years at Google and am now back to developing in it, and I'd say it's probably my favorite language to work in, given a codebase that sets the right constraints on it. But that doesn't change the fact that it is replete with unintuitive footguns even for those comfortable/experienced with it, in a way that Python or Java absolutely isn't.

The closest thing I can think of in Python is passing a mutable object (like an empty list) as a default param value. C++ is littered with bug-prone landmines like that.


[flagged]


>Python is anything but easy, it is the same caliber as C++, just lacking implementations able to execute as fast.

Yeah, that's stretching it to absurdity.

Python is easy to get started, and easy to adopt any of the extra features (e.g. slots, metaprogramming, async, etc) piecemeal. And easy to read most codebases.

Haskell is not easy to get started, not easy to adopt the extra features piecemeal, and not easy to read most codebases.


Easy to get started, undoubtedly yes.

Easy to be a Python black belt certainly not.

While it looks piecemeal to adopt metaprogramming, decorators, multiple inheritance, slots, operator overloading, extension of built-in types, generators, and async/await, their combined use in the hands of clever programmers is anything but easy.

Doing Python since version 1.6.


>Easy to be a Python black belt certainly not.

Perhaps, but that's a different goalpost...

Plus it's still easier to be a Python black belt than a Haskell one...

>Doing Python since version 1.6.

Doing Python since 1.5 (at university) and 1.6 professionally. I remember when it didn't have half of its current constructs...


> Haskell is not easy to get started

Agreed

> and not easy to read most codebases.

Hmm ... debatable. I have to dig in to the implementation of Python libraries and Haskell libraries regularly. I'm much more confident that I'll come away understanding the latter than the former!

> not easy to adopt the extra features piecemeal

I can't see any evidence of this. Can you name a few Haskell features that can't be adopted in the absence of other features?


I didn't say that they "can't be adopted", but that it's "not easy" to adopt them (and specifically it's not as easy as Python or even Java, C#, Lua, whatever) -- because they come with a bigger mental burden...

OK ... can you name a few Haskell features that can't be adopted easily in the absence of other features?

How do you functionally manipulate an indexed mutable structure like a vector?

How about the common task of CSV file manipulation? Database access? (Simplest of FFI.) Not reopening nor reparsing the file every time you want to do something? At the same time, with known upper bound on memory use?

Note how almost none of standard CRUD and web stuff is easy to write in Haskell from scratch and libraries do not help a lot.

You always end up in some variant of IO monad, typically multiple incompatible ones at the same time, making the pure functional nature of the language moot. You get to glue the various kinds of IO explicitly.

Haskell feels like writing a CPU (high level state machines everywhere), except with less readability and more verbiage than Verilog. The propensity of Haskell programmers to abbreviate everything and add redundant "helper" functions under different names does not help.

Programming is stateful because the world has state, and Haskell's handling of state is annoying at every step, even for someone versed in it.
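
(For reference on the vector question above, a minimal sketch using the vector package: a purely functional bulk update with (//), and a locally mutable update with modify/write that still yields a pure value. This is only an illustration, not a claim about any particular codebase.)

    import qualified Data.Vector as V
    import qualified Data.Vector.Mutable as MV

    main :: IO ()
    main = do
      let v          = V.fromList [10, 20, 30, 40 :: Int]
          functional = v V.// [(1, 99)]                      -- copy with index 1 replaced
          inPlace    = V.modify (\mv -> MV.write mv 2 77) v  -- ST mutation, pure result
      print functional  -- [10,99,30,40]
      print inPlace     -- [10,20,77,40]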


Firstly, the question was very specifically to coldtea to help flesh out his/her claim that it is "not easy to adopt the extra features piecemeal". So far that claim doesn't seem to have been substantiated.

Secondly, are you really saying you believe that Haskell doesn't have all these features? That there aren't good ways of mutating arrays, writing CRUD apps, combining effects, etc.? Presumably then, your jaw would hit the floor if I could demonstrate that everything you believe is false.


>Python is anything but easy

There's plenty of shade you can throw at the good old snake, be it package management, performance/speed, or formatting peculiarities, but this one is the most unlikely criticism I can think of.

What makes it not easy in your mind?


Python allows for very creative programming; every feature looks easy in isolation, but when used together they can open the door to some head-scratching.

I think "easy" is the biggest bait in the industry.

Python is a language where, by default, a typo is a potential runtime error. That's far from easy in my book.


Ever mistyped an output file name in Haskell? This is one of those myths about static typing that really gets under my skin. I worked in a large, hardcore Haskell production environment for several years. There was no difference in the amount of “silly typo breaks something later at runtime” mistakes, none, between that and the decade or so of production Python experience I have.

These bugs enter your system and manifest in such weird ways that it always will be the job of unit and integration testing, not static typing, to catch them. Not with modeling states in the type system. Not with phantom types. Just nope. Frankly, to me this is what distinguishes a senior engineer from junior engineers in statically typed languages. Do they understand that the language design faculties don’t actually protect them, abandon the misguided idea of encoding protection into the language’s special faculties, and instead put that effort towards making the testing infrastructure easy to understand, easy to update, and very fast to run?


> There was no difference in the amount of “silly typo breaks something later at runtime” mistakes, none, between that and the decade or so of production Python experience I have.

This is fascinating. You are basically the only person I know who has used Haskell extensively who claims this. Have you considered writing up your experience as a blog post (or even more formally as a technical report)? I think it would be extremely helpful to the programming community and particularly the Haskell sub-community for you to share your point of view.


Briefly, there is something very similar to Amdahl’s Law for parallel speedup but for removing the thin layer of defects checkable by static typing. Most defects in any real system aren’t like that, to such a degree that the whole correctness bottleneck is concentrated so heavily in unit and integration testing and the extra language complexity, extra lines of code for type annotation or registry of type system designs, slow compile times or constraints on mutation imposed by the static typing don’t pay for themselves through meaningful defect reduction. It’s like the cost of shipping data to a GPU. The efficiency gained by processing it in parallel on the GPU device must be much greater than the transport cost, or it’s not worth it.

But in terms of me ever wanting to write this up with rigorous technical examples, I mean, just look at the level of discourse and tribal downvoting in a thread like this.

Even setting aside that this experience was spread across a quantitative trading company and in a large public financial technology company, meaning I definitely can’t publicly share a lot of details of those systems (which adds tons of required effort to convert examples into totally isolated tutorial-like standalone samples), why would anyone with a valuable technical dissenting opinion about Haskell want to open themselves up to that kind of religious backlash?

It’s demoralizing and discouraging for me even just in a thread like this one, where I’m just some mostly anonymous commenter talking subjectively about my experience in small comments.

There’s no way I’m sticking my neck out on a big technical blog post or technical paper about why leveraging a static type system doesn’t meaningfully reduce defects in real systems.

Also to be clear, I think static typing is fine. Some people enjoy it a lot or have clever ideas about using it for expressiveness. Some people also write amazingly concise dynamically typed code that covers a huge variety of use cases in a safe way with pretty much no overhead code to register anything at all about those use cases. People are free to choose their tools and whatever gets a job done is totally fine.

The part I find disingenuous is that it seems like only the static typing zealots are trying to come up with a reason to think a certain way of doing things strictly dominates or supersedes a different way of doing things, and it’s totally disingenuous to act like the benefits of static typing on defect rates would be such an argument for “universal” applicability of one certain paradigm.


To be fair, you are expressing an allegedly subjective account of your personal experience in what to me sounds like an overly assured and conclusive manner, coupled with expressions like "static typing zealots". It reads a bit like you are implying that there are only zealots, and then there is this account you're sharing, which is the definitive truth.

It similarly turns me off from trying to discuss this constructively, even though my experience of Haskell is almost the complete opposite of what you're saying here. I guess that leaves space for only low-effort discussion and people happy to hear that Haskell turns out to not be worth it after all.


My opinion is not allegedly subjective, it is subjective. I don’t expect anyone to do anything with the comments I write. They don’t prove anything, but someone may find it useful to hear that a person with experience decided to have a dissenting opinion of Haskell in practice.

I will say, however, that just as I mentioned in my comment, I’m willing to say static typing is fine. There are lots of tools in a toolbox. It is one of them.

I don’t believe a lot of commenters who seek out this discussion would give a similarly charitable view of dynamic typing, and in my real life experience, these are people who superficially dismiss projects written in dynamic languages, especially Python, on parochial grounds not rooted in reality.

In other words, I see a lot of people in the Python community in real life saying, “Haskell is cool, you can do expressive things in it, but it makes certain other things hard and so for a wide range of tradeoffs I wouldn’t pick it.” But I see people in Scala, Haskell, Clojure, F# etc., communities saying, “Python is crap, so unsafe, so many bugs, it’s just a categorically wrong way to design and write programs.”

So the discussion is (in my experience) extremely asymmetrical along these programming religion lines.


> I don’t believe a lot of commenters who seek out this discussion would give a similarly charitable view of dynamic typing, and in my real life experience, these are people who superficially dismiss projects written in dynamic languages, especially Python, on parochial grounds not rooted in reality.

Fine, but that's a criticism of the people not the language. I'm interested in the latter and not really in the former, unless you're trying to say that they are somehow linked.


> “Fine, but that's a criticism of the people not the language.”

I totally agree, and my dissenting opinion of Haskell is not based on anything about people or communities, just on ergonomics of using it and working on a big legacy codebase of it in production.

I mentioned the asymmetry of people who can be zealots about only one paradigm being The One True Way only in response to the parent comment I was responding to.


> It similarly turns me off from trying to discuss this constructively, even though my experience of Haskell is almost the complete opposite of what you're saying here. I guess that leaves space for only low-effort discussion and people happy to hear that Haskell turns out to not be worth it after all.

Let's hope there's another alternative: that those of us with seemingly opposite experience and opinion to mlthoughts2018 can encourage him/her to share more so that we can all learn something beneficial to our lives.


Yes. Especially because I am sure Haskell and similar languages have failure modes, in which the seemingly magical sauce I've personally experienced might not work, for one reason or another. I do believe that mlthoughts2018 worked in such an environment/codebase, and it would be extremely useful to figure out what variables are involved in that.

Thanks for your comments. There do seem to be three languages sure to engender disparate opinions: Lisp, Prolog, and Haskell. I suppose that their advocates can be forgiven for their enthusiasm. They are all quite remarkable languages.

I am unqualified to assess the benefits of an advanced type system; after all, I've only worked through examples in a few Haskell books. I've never used Haskell professionally. Haskell is a lovely language. Its compiler is a remarkable achievement of computer science, mathematics, and engineering. Simon Peyton Jones deserves the notable accolades and awards that he has received.

In my opinion, Haskell's most important contribution is in pushing the state of the art of programming languages forward. Is it practical? Yes, that too, but after all these years, it hasn't really become popular because being "practical" wasn't the main goal for Haskell. Haskell was designed to explore the non-strict functional landscape. Haskell's designers made good choices and were able to expand our understanding of non-strict functional programming (e.g. see Miranda [1]).

In the past I did years of research in program verification, so I'm naturally skeptical of the widely repeated claim that "Once your code compiles it usually works" (it's even on the haskell.org site). In what universe is this true? Verification that a program meets its specifications is quite difficult. In general, no compiler for Haskell can even verify that a program will terminate (the Halting Problem). I don't believe that real programmers are using Haskell's type system to formally verify total correctness (which includes freedom from deadlock, etc.) or even partial correctness (the weaker condition that if a program produces an answer that it is the correct answer).

I frequently write Python programs that work the first time; of course they are little scripts. It isn't dynamic typing that is keeping my programs from working more often. Consider the errors that I do make, syntax errors are caught by my IDE, I don't count those kinds of errors as real bugs. Next there are "type" errors, these aren't really troublesome even when I'm not using Haskell. I can find these almost immediately by testing or even using the REPL. (Every so often, I've heard of, say, a ruby program crashing once deployed because there is an untested path through the code that has a type mismatch between an argument and a function parameter. Haskell would catch this type of defect at compile time. That's good.) However, the really troublesome bugs are more subtle. Do the distributed parts have some kind of race condition? Am I handling the various spans of data within some vector correctly? Can an index touch memory outside of my memory segment? Is the floating point arithmetic doing what I think it should be doing? Have I translated the mathematics of wavelet compression correctly? Do I understand the Vandermonde matrix used in fast decoding of Reed-Solomon error correction codes--I don't! Haskell might be able to help with some of these harder bugs if the concepts can be properly represented in the type systems, but I believe that what mlthoughts2018 is saying is that this is often too hard to be worth it.

Haskell is a pioneering approach to programming, and the next frontier could be dependent types (see [2]). My own feeling is that someday programming will involve a dialog with a proof checker while coding. Writing proofs is hard, it seems harder to me than writing the program, so having an AI assistant that aids with the proof checking might make it more useful than simply struggling to encode a proof in the program's (dependent) types (see the Curry–Howard correspondence[3]).

[1] http://miranda.org.uk

[2] https://serokell.io/blog/why-dependent-haskell

[3] https://en.wikipedia.org/wiki/Curry–Howard_correspondence


My two cents here.

In my experience, dynamic typing has not caused unforseen bugs at run time.

What it does do is make large codebases extremely difficult to reason about, as you get very little information from the code about what types are needed or received where, or about program flow.

Where I work, the managers decided that everything shall be python or ruby. So we have some 10,000+ line codebases, which are very hard to reason about. Including industrial control programs. "garbage collection pauses? What are those?"


This just happens in every programming language. I can tell you because the large Haskell systems I worked on were also incredibly hard to reason about. The analog of garbage collector pauses was accidental misuse of eager evaluation, but buried in misdirection through a sequence of specialized implementations that get called due to type class instances.

Big codebases becoming ugly messes is sociological and pressured by bureaucracy. It is not something that stricter language designs can seriously mitigate, even a little. Meanwhile, very disciplined and experienced teams can avoid it in virtually any programming language.

Some of the cleanest and safest huge software systems I’ve ever worked with were written in C, C++ or Python. Also some of the worst huge systems I’ve seen were written in C, C++ or Python.


I find getting as much tooling as possible that will tell you about types is pretty important with a large code base. For example in the Python code getting type annotations along with mypy up and running tends to be a big win.

Mypy certainly helps, but it is not that powerful, and it completely falls flat if you use an old library.

That's a shame. There are numerous voices clamouring "Haskell's too much effort for its benefits to be worth it". You are basically the only voice saying "Its benefits aren't even benefits". It feels like you could really add something beneficial. If only there were some middle ground between carefully considered and reasoned critique and vague and unsubstantiated sniping on message boards, but so be it.

FWIW, I didn't read "its benefits aren't even benefits". I read something more like "static typing helps with some problems, but those problems are just a sliver of the real problems." I also heard something like "static typing isn't worth the ceremony to me". Also, a general frustration with the ability of people of dissenting opinions to communicate meaningfully with each other.

I've not worked on Haskell but while reading about it I've always been suspicious that its type system is actually effective at preventing integration bugs. It's nice to hear this echoed.

I'd love to read more about this experience (good and the bad).


Well, I'm just pointing out that "easy" is not something that can be attributed to a programming language based on cute, pseudo-code like samples online.

You don't see "Python is easy" examples with full test suite attached to them, explaining that, well, you are in for a ride without those.

As for your comment: I can believe that production breaking typos may have been at a similar level. I don't believe that the effort to reach that level was the same, though.


In terms of mental concepts I will maybe give you Rust - lifetimes and the borrow checker take some getting used to. But C++ doesn't really have any complicated concepts. Sure, it has a lot of features, and some of them have complicated edges (ADL, template metaprogramming, etc.), but most application code rarely uses those things.

I think the complexity of C++ comes from the edge cases, unexpected interactions between features, conflicting syntax, and bad assumptions made over its long history.

I mean, the rule of three is just insane. How is anyone supposed to anticipate that behavior?


I’m not sure lifetimes are that much more complicated than the way C++ does implicit type coercion of user classes, resolution of compile-time polymorphic functions, namespaces or SFINAE. I do agree it’s more like death by a thousand paper cuts rather than the torso-cleaving katana of the borrow checker. However, I tend to believe any C++ codebase of consequence will run into some of this stuff, unless practices that avoid all the pitfalls are meticulously followed. Which implies a rather thorough understanding of said complexities.

Those are all advanced C++ features. Lifetimes smack you in the face 5 minutes into "Hello world".

Maybe not complicated in terms of how abstract it is, but pretty much everything in C++ is very complicated in terms of all the little rules and exceptions and subtle interactions between behaviors and compilers leveraging UB to do insane things. You can do a lot without thinking about it but if you want to have a precise understanding of things prepare to have to read a ton of rules. It's a language lawyer's dream language.

You may have a point with Rust, but you don’t have a point with C++, and certainly not with other mainstream languages like Python, Scala, Java, C#, and others.

In C++, you can focus on subsets of the language that eschew whole huge paradigms (like templating or inheritance) and you’re roughly no worse off in terms of the scope of practical programs you can write with basic software patterns.

You can’t do this in Haskell. You really do have to learn all the complex hierarchy of paradigm-committing patterns and use nearly all of them nearly all the time, making it much harder to learn than C++ even though C++ is complicated.

Languages can be complicated in different ways, and Haskell is unique in that you have to engage with every complicated aspect of it nearly all the time.


> You can’t do this in Haskell. You really do have to learn all the complex hierarchy of paradigm-committing patterns

Err, no you don't. The only slightly unusual concepts Haskell 98 has (from a functional programming point of view) are monads, type classes and laziness. Haskell 98 is a perfectly decent language to write computer programs in. In fact it's even perfectly decent if you largely avoid monads and type classes!

The "you can focus on subsets of the language" claim is no weaker for Haskell than it is for C++.


> “The only slightly unusual concepts Haskell 98 has (from a functional programming point of view) are monads, type classes and laziness.“

Exactly. The way they manifest in Haskell requires huge time investment before you can write basic programs. For example, how do you write dynamic dispatch in Haskell 98?

If you give an answer that either (a) involves exotic use of type classes or (b) say “don’t desire dynamic dispatch in a functional paradigm and instead restructure the whole program to avoid needing it” then you’ve proven my point.

The fact that you think of monads, type classes and laziness as being “three” concepts (when really they unpack into way more top-level concepts, especially the first two) means you have a huge blind spot. Your experience makes you think of them as self-contained things, but they aren’t, and even just those three things yield huge complexity sprawl in Haskell.
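
(For readers following the dispatch question, a hedged Haskell 98 sketch of the record-of-functions style that is often offered as an answer; the Logger names are hypothetical, and this is just one point in the design space being argued about, not a verdict on it.)

    data Logger = Logger { logMsg :: String -> IO () }

    stdoutLogger :: Logger
    stdoutLogger = Logger { logMsg = putStrLn }

    prefixLogger :: String -> Logger
    prefixLogger p = Logger { logMsg = \msg -> putStrLn (p ++ msg) }

    run :: Logger -> IO ()
    run l = do
      logMsg l "starting"  -- which implementation runs is decided by the value passed in
      logMsg l "done"

    main :: IO ()
    main = do
      run stdoutLogger
      run (prefixLogger "[app] ")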


I'm sorry, I'm not sure what this has to do with the "you can focus on subsets of the language" claim being no weaker for Haskell than it is for C++. Perhaps you can clarify?

Your claim now seems to be "Haskell contains a great deal of complexity" or "You can't implement dynamic dispatch in Haskell" which are different things entirely.

(FWIW I'm not quite sure what dynamic dispatch is or whether I've ever needed it, in Haskell, C++ or Python)


Suppose you wanted to write professional software in Haskell using only the IO monad and modules of functions. How would you do it? I don’t believe it can actually be done in a serious way.

Suppose you’re using C++ but you don’t want any objects, exceptions or templating. Ok, you’ll be fine. It will be a lot like C, but you’ll be fine. Nothing will be substantially harder to solve, design or implement.


> Suppose you wanted to write professional software in Haskell using only the IO monad and modules of functions. How would you do it? I don’t believe it can actually be done in a serious way.

People write professional software in OCaml and Scheme so I hardly believe your suggestion is impossible.

> Suppose you’re using C++ but you don’t want any objects, exceptions or templating. Ok, you’ll be fine. It will be a lot like C, but you’ll be fine.

Sure, and the same applies to Haskell, except with Standard ML instead of C.

I'm not suggesting that one would be particularly productive like that, only that the level of complexity of Haskell is about the same order as the level of complexity of C++, and one can reduce one's subset of Haskell (all the way down to SML if necessary) in the same way that one can reduce one's subset of C++ (all the way down to C if necessary).

> Nothing will be substantially harder to solve, design or implement.

That surely can't be the case. If it were then objects, exceptions and templating would never have been implemented.


I must be missing the point here because if you wanted to “write ... Haskell using only the IO monad and modules of functions” you could simply write your modules (Haskell supports modules) and then write your IO. Done.

Is that what you mean? Or is this really about something else?


> You think I'm done? Let's get to Lens.

Couldn't you also say that about C? If you think K&R C is enough of an introduction, wait until you see the Linux kernel!


Or those 200+ UB use cases.

I agree. I've never used Haskell at work (apart from some toy projects I did completely on my own as proof-of-concepts and/or quick-fixes) but I did spend a fair amount of time learning it myself, and that's what it took to get to the point where I could understand what was going on in "real" projects, let alone contribute at that level.

And, yes, Lens is where I stopped. I grokked the basic mechanism and used the library in some limited cases but a real understanding of the whole zoo of Lens-related types always eluded me. Doubtless I could have figured it out given time, but working on my own toward my own goals it was a real slog. So I started learning Rust instead. :)

Oh, and TH is an absolute nightmare. Just putting that out there. No, I don't know how I'd do it better.


As someone who is very interested in the experience of people learning Haskell I'd like to ask a question. Why did you stop learning Haskell after lens eluded you, rather than just turning your attention to some other aspect of Haskell instead?

(After all, I've been interested in lens since it first came out seven or so years ago. I'd consider myself an expert in that type of technology and yet there are still parts I don't understand!)


I felt that I'd reached a point of reasonable proficiency with the language, and since I'd been learning it mostly for my own entertainment I decided to turn my attention to some other new hotness (Rust, IIRC) rather than slog through increasingly more difficult and obscure (to me) features of the language with diminishing returns for my personal projects.

Entertainment is maybe not the right word, though it was definitely entertaining. Learning Haskell was the beginning of a personal renaissance in my approach to programming, and as a self-taught programmer (and professional software engineer looking to expand my horizons) that was a huge deal.


> I felt that I'd reached a point of reasonable proficiency with the language

Ah, well that puts quite a different spin on things!


What, you think I was wrong? :)

(that very well may be the case)


No, not at all! I previously thought you meant "Haskell was hard for me to learn because I couldn't even learn lens" when you meant "I learned a lot of Haskell but couldn't be bothered to learn lens"

The article is called "you are already smart enough to WRITE Haskell" not "LEARN Haskell".

I think the point is that Haskell is quite powerful and useful without learning the whole of it. Basically, nobody knows the whole of it.

This is a perspective worth considering. The bones of Haskell are great, and its academic origins (god bless them) have it running around in knight's armor and a tutu (or something).

You could most likely use it effectively on a properly advised, disciplined team ("we don't use language extensions").

It might also get dressed up in other clothing and be what we're all using some day. The power and expressivity seem immense.

Imagine, for example, if the Rust documentation team got a hold of it. Holy crap!


I've only ever read half of LYAH and a few chapters of a couple other books. I don't know how Lens works under the hood. I've been writing Haskell for pay for several years with no problem. The rest of the stuff I learned on the fly by reading Haddocks, misc articles, and plugging away with ghci, and by sketching ideas by hand (e.g. writing my own state monad for learning).
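
(The "write my own state monad" exercise mentioned above usually looks something like this sketch: a newtype plus the three instances needed to use do-notation.)

    newtype State s a = State { runState :: s -> (a, s) }

    instance Functor (State s) where
      fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

    instance Applicative (State s) where
      pure a = State $ \s -> (a, s)
      State sf <*> State sa = State $ \s ->
        let (f, s')  = sf s
            (a, s'') = sa s'
        in (f a, s'')

    instance Monad (State s) where
      State sa >>= f = State $ \s -> let (a, s') = sa s in runState (f a) s'

    get :: State s s
    get = State $ \s -> (s, s)

    put :: s -> State s ()
    put s = State $ \_ -> ((), s)

    tick :: State Int Int
    tick = do
      n <- get
      put (n + 1)
      return n

    main :: IO ()
    main = print (runState (tick >> tick) 0)  -- (1,2)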

There is plenty to agree with here! But learning Haskell isn't the biggest issue for me; I work in a large org, and getting people to want to learn it with me or care at all is the biggest roadblock. The first thing they are looking for is a nice IDE and tooling experience, among many other things.

I used to want that too. Spacemacs has pretty good support out of the box if you’re willing to use stack. Vim w/ the haskell-vim-now collection is good too.

But soon after you realize that all you need is any text editor and a terminal (to run ghci(d)).


May I ask you about your drafting/recruitment process regarding Haskell?

You can say this about most languages. C++, Forth, Prolog, Rust, lisp, etc.

Most languages are easy to learn, and that's what the article is talking about. Mastery is a whole other topic.


Yes yes yes on the serious code base. Or even a non-serious code base: the problem is that everyone else's code uses these fancy features. So, for example, you read the docs to some open source library, and all the examples use these 20 language extensions. So congratulations, you can't tell what the code does without learning them.

> you're still going to be unable to contribute to a serious code base.

How is this different from Ruby with Rails, or Erlang and OTP? Or Python and TensorFlow?

> Now, you have to understand at least 20 language extensions which you find randomly at the top of files {-# LANGUAGE ExtensionHere #-}.

I absolutely agree it can be super irritating when you find a new extension that radically changes syntax. I have made jokes and complaints you can find in my comment history on this very site about it.

But I don't think this is very different from C# or C++. Folks tend to exclude and create style guides for their language. At least Haskell has the decency to label these features explicitly.

Most of the really exciting libraries of 2018-2019 (fused-effects and polysemy come to mind, but there are many others) don't use terribly exotic extensions. My favorite web framework, Spock, also doesn't use anything too exotic (and in fact uses nearly identical extensions to the more popular Servant).

So I think that the community is moving forward with a consensus on what the valuable and expected extensions are. What we could do to improve this is make the consensus more accessible to newcomers, both by talking about it (and not just in the context of Alexis's great extensions post last year [0] or Chris Martin's suggestions and awareness-raising efforts (e.g., [1])). Hopefully the community can agree to raft a bunch of uncontroversial extensions together and say, "This is GHC 2020, we just agree this is the default language spec unless you tell ghc otherwise."

> You think I'm done? Let's get to Lens. You can use Lens after a relatively short time of reading the docs. But to understand Lens? Very few people actually understand Lens.

Is this any different from ANY data structure library and the majority of its consumers, though? I still run into senior software engineers with amazing histories who still don't understand things like "What is the asymptotic runtime of a modern sorting algorithm?", or "What alternatives to cardinality estimation could we use here other than the default library bloom filter?", or, more frustratingly to me, "Why is this HAMT not in fact constant-time access, even in practice, for your specific use case? Yes, they're rad, all praise Bagwell, but it's the wrong structure for this case."

We could write a whole thing about sane uses of lens and ways to resist its excesses. Maybe now that I have finally quit twitter, I will do that this year.

[0]: https://lexi-lambda.github.io/blog/2018/02/10/an-opinionated...

[1]: https://twitter.com/chris__martin/status/1102457521380442112


You don’t need to understand the internals of a thing to use the thing.

This has not been my experience of using technology effectively. Without an understanding of the implementation details, you inevitably use something inefficiently or for not quite the right purpose.

I cannot imagine anyone using a database effectively on any significant amount of data without understanding indexes, how different joins work, why join order is important, what effect join orders have on performance, etc. Get to a certain scale and it's not enough to know about indexes; you need to understand the structure of b-trees, disk I/O performance, how CPU cache performance affects b-tree navigation even when index is cached in memory, how to use compound indexes effectively to reduce random access through the index, etc.

The constraints of CPU and memory never go away, and if you're trying to scale something, you're going to be limited on either or both of those resources. That in turn forces you to understand execution and memory behaviour of the abstractions you're working with. All abstractions leak when pushed.


> I cannot imagine anyone using a database effectively on any significant amount of data without understanding indexes, how different joins work, why join order is important, what effect join orders have on performance, etc.

But conversely you probably didn't have to understand what filesystem the database runs on, whether it is in a RAID array, whether the network connection was over Ethernet or T1, etc.

All abstractions leak. The question is how leaky they are. In my experience Haskell abstractions are much less leaky than most.


I did and do; when performance at the database level isn't right, you need to look into OS stats, I/O stats, disk stats (the SAN is an area ripe for unexpected contention, RAID5/6 act like a single spindle and inhibit random access in ways that RAID1 or RAID10 don't, stripe length in RAID5/6 bloats the size of small writes, etc.), but I had to stop somewhere :)

> but I had to stop somewhere

Why? You seem unsatisfied with any of the previous layers of abstraction, so where does it end? When can I truly call myself a user of a database?


When you don't have to fix it when it stops servicing requests, or worry about scaling workloads.

When you treat it as a black-to-grey box, not a grey-to-white box.


So I'm confused - how many years did you have to study before you felt confident enough in your grasp of all the underlying concepts to write your first "Hello World"?

Hello World doesn't push much to the limits.

I’m not much of a haskeller, but friends’ war stories indicate that laziness leaks quite a bit.

Thunk leaks happen, but they aren't that scary. They all pretty much have one of a handful of root causes.

It's definitely a wart of laziness, but it's also pretty easy to avoid.

As far as abstraction goes, laziness doesn't leak. In fact, it's a big reason Haskell performance composes. The head . sort example is contrived, but it holds true in more complicated & useful examples as well!
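
(A small, less contested illustration of laziness letting performance compose, using only the Prelude: the infinite list is only forced as far as the consumer demands.)

    firstBigSquare :: Integer
    firstBigSquare = head (dropWhile (< 1000) (map (^ 2) [1 ..]))

    main :: IO ()
    main = print firstBigSquare  -- 1024, after evaluating only 32 squares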


Aha, yes! You are right. But then so does strictness. See https://augustss.blogspot.com/2011/05/more-points-for-lazy-e... for some examples.

Laziness is a definitely a double-edged sword, with sharp edges.


I am in the Database-as-a-Service world.

I have to understand all the intricacies of the system, so people who choose to treat databases as black boxes don't have to. There is only so far this abstraction holds.

Stateless (immutable) code scales horizontally. ACID[1] doesn't.

[1] https://en.wikipedia.org/wiki/ACID


Fun aside: a coworker recently had a fun issue where the database server would halt due to running out of disk space... except the disk had over 100 GB left.

Turns out NTFS has a maximum number of file fragments a file can have, and if it exceeds that it will refuse to grow the file (for obvious reasons).

Hardly everyday stuff though.


> I cannot imagine anyone using a database effectively on any significant amount of data without understanding…

You'd be surprised just what proportion of systems running in the market operate on amounts of data you would not deem "significant".

And as I said in another comment, Haskell doesn't try to pretend that computations run with no hardware constraints.

> All abstractions leak when pushed

Yeah. But we might have wildly different ideas for where that boundary is.


I wouldn't be surprised because I seek employment in areas where my skills are valuable.

This is interesting. I work as a data engineer but never had to really understand how joins or indexes work, because I work 99% of the time with denormalized data. I also do not know much about b-trees. I think you come from the database developer point of view rather than the database user point of view.

There is also another aspect to this: even if I do not understand joins or b-trees, I can measure performance, so I can figure out which combination of joins is faster. The reason that I prefer performance testing over getting to know the theoretical background for certain things is that in many cases your performance also depends on the implementation details, not only on which tree data structure backs your implementation.


It's exactly because the performance depends on the implementation details that when you understand implementation details, it can guide what you test and help find an optimization peak faster - or come up with a theory of another peak more distant, and journey through the performance valley to find it.

Implementation details follow the contours of the fundamental algorithms and data structures. You understand the outline of the implementation details - the bones, if you will - from that. And you can dive into the source (or disassembler - I've done that on Windows) to fine tune knowledge of other specifics, with e.g. gdb stack traces on the database engine when it is abnormally slow to focus attention.

Without having gone to battle together over some performance problems, or me writing significantly longer battle-story posts, we're probably not going to be able to communicate effectively here.


<rant> I work with a team that has a 16 TB database, who doesn't understand how it works. They seem really surprised when their queries run slow. </rant>

Do you struggle to use a computer due to not having an expertise in semiconductor physics?

This is exactly right. I use lenses all the time, but I have absolutely no idea how they're actually implemented, nor do I need to know.

This is abstraction. If there's one thing Haskell does well it's abstraction.

EDIT: It's really bizarre. We see these same responses to all the Haskell-or-Idris-or-whatever threads -- I wonder if there's some imposter syndrome going on where "I can't immediately read/write Haskell" somehow morphs into "Haskell is useless". IME it's really rare for people who actually program in Haskell to have serious issues with the language. Yes, there are issues from a smaller ecosystem, package management was bad (Stack fixed that), etc. etc., but there are very few fundamental problems with the language. Something so small as just pattern matching is a huge increase in productivity. Thankfully, quite a few languages have adopted pattern matching these days (Scala, Rust, TS, maybe even C++23?).

(The really big payoff comes from granular effects, but I'm sure the rest of the world will realize in about 20-30 years' time. The Erlang people already have, albeit in a different way.)
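
(As a tiny illustration of the pattern matching being praised above, in Haskell terms: each shape of the data is handled explicitly, and the compiler can warn when a case is missing. The Payment type is made up for the example.)

    data Payment = Cash Double | Card String Double

    describePayment :: Payment -> String
    describePayment (Cash amount)     = "cash: " ++ show amount
    describePayment (Card num amount) = "card ending " ++ lastFour num ++ ": " ++ show amount
      where lastFour s = drop (length s - 4) s

    main :: IO ()
    main = mapM_ (putStrLn . describePayment) [Cash 5, Card "4242424242424242" 20]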


People look at weird syntax and discussions about things that have no resemblance to the problem they are facing, and conclude it must not be useful for anything real. Yes, those problems have no resemblance to any real problem because of abstraction, but most people's experience with abstraction is in a Java-like language where nothing good ever gets out of it.

You don’t need to understand the internals of a thing, until you do. Everything works fine as described in documentation until it doesn't for your use case. You might be lucky and find help from stackoverflow, otherwise you need someone who really grok it.

Haskell's internals are actually pretty easy to inspect!

- Haddock hyperlinked source makes it easy to understand libraries if you need to dig into internals

- ghci makes it easy to interactively learn the language - and a new codebase!

- Haskell can dump the stages of its codegen pipeline (Core, STG, Cmm)

- Profiling and Event logging exist and are easy to use

- You can even write an inspection test to assert certain behavior holds. For instance, that no allocations occur or that some abstraction (e.g. Generic) is truly erased by GHC's optimizations


Sure and at that point you need to learn those internals. That's just the way it is with everything, no?

When things go wrong, you might not have time to do so. When my project started to leak memory at an enormous rate, I was able to find the issue quickly enough. But if I didn't know how all those things work, I would spend weeks or months learning all those things. Restarting the application every 10 minutes for a week is not a good idea.

How is this different for Haskell than with anything else? If you want to be an expert in something, anything you do need to put in the work and learn it inside-out. There is no royal road. I fail to see how this is specific to Haskell...

It boils down to the steepness of the learning curves.

If I am in the business of system reliability, I will choose the language with shallower rabbit holes.

Abstraction layers are great for builders and terrible for fixers. I am both, so I need to strike a balance.


I agree with you, I don't think that it's different for Haskell.

There was this piece of common knowledge floating around a number of years ago about how you need to know at least 1 level of abstraction beneath you well, and have a working knowledge of the second one below it, to use your tools effectively. I don't recall where the advice floated around or came from but it was something along those lines, and it's pretty true.

I’m not sure I agree with that.

How does this work in the context of CSS? Do people making websites need to understand how WebKit paints the screen?

The word “effectively” seems rather arbitrary here too.


I actually think you do need to understand rendering logic to some extent to use CSS effectively.

For example, I have seen many people have a hard time understanding why it is trivially easy to align an element to the top of the screen but tricky to align something to the bottom of the screen - something which would be symmetric and equally simple in a typical application GUI framework.

But understanding how layouts are generated makes this clear.


IMO the conceptual level below CSS is in the design sphere - the box model itself. You would not believe the number of people who hack together properties until they get something that looks right, without understanding the box model, who think they're really good frontend developers.

I feel like being able to find an exception doesn't mean the rule is invalid?

How many exceptions should I find to invalidate the rule?

Enough to show it's at least on the same order of magnitude as the number of situations where the rule does hold.

Only if you have infinite memory. In theory there's no difference between folding left and folding right (if associative); in practice there is a right way and a wrong way.
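A minimal sketch of the usual GHC example (the list size is arbitrary):

    import Data.List (foldl')

    -- Lazy left fold: builds a long chain of (+) thunks before anything is
    -- evaluated (at least without optimizations), so memory use can grow
    -- with the length of the list.
    lazySum :: Integer
    lazySum = foldl (+) 0 [1 .. 10000000]

    -- Strict left fold: forces the accumulator at each step and runs in
    -- constant space.
    strictSum :: Integer
    strictSum = foldl' (+) 0 [1 .. 10000000]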

> In theory there's no difference between folding left and folding right

There's no difference in practice either for a sufficiently small dataset.

> in practice there is a right way and a wrong way

Sure, but that's true of all technologies.

Yes, Haskell can't help you escape the limitations of our world — or indeed our hardware — but it doesn't pretend to either.


> There's no difference in practice either for a sufficiently small dataset.

As a non-Haskell user, just for reference what's "sufficiently small"?


It depends on your needs. This is neither constrained to Haskell specifically nor functional programming more generally.

If you were building a website for your local Italian restaurant, what would your needs be? Do you need an ElasticSearch cluster to handle customer menu item queries? Do you need a database at all?

In Haskell's case it's best to avoid lists entirely, as they're _usually_ not the optimal data structure. But best for whom? Does the beginner care that a set operation would be more effective than a list operation for their given use-case?


Not sure how that's an answer to my question?

In my day job, I frequently generate records returned from a database along with local changes to be posted later, and say compute the sum of one of the columns. That sounded like something I'd use folding for, with my limited knowledge, so I was just curious at which point (order of magnitude) I'd have to worry about doing it this way or that.

But if lists are not to be used, what should I use for the above? And will the data structure you propose be fine with either fold?


Again, it depends. A left fold might be fine (and likely will be). Lists might be fine (and likely are). I don't know anything about the size of your data, or about the business you're writing software to support (who knows, maybe you're writing a FPS game, or some real-time aviation software!).

At this point (taking into account the extent of your knowledge as you yourself described it), my advice would be to just use basic folding and lists. At some point in the future, if you observe some performance issue in exactly that area, you might remember this and think "hmm, memory consumption is a bit excessive here; maybe I'll try a foldr instead", or "finding the intersection of these two big lists is a little slow; maybe I'll convert them both to sets instead?"
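To make the summing case concrete (the Row type and its fields are invented for illustration, not taken from your codebase), a strict left fold is the usual way to sum a column without building up thunks:

    import Data.List (foldl')

    -- Hypothetical shape of a row fetched from the database.
    data Row = Row { rowId :: Int, amount :: Double }

    -- Sum one "column" across all rows; foldl' keeps the accumulator
    -- evaluated, so memory stays constant regardless of list length.
    totalAmount :: [Row] -> Double
    totalAmount = foldl' (\acc r -> acc + amount r) 0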


>There's no difference in practice either for a sufficiently small dataset.

So what happens in practice with unfortunately large datasets?

>Yes, Haskell can't help you escape the limitations of our world — or indeed our hardware — but it doesn't pretend to either

Then what is Haskell's value-proposition when it comes to solving real-world problems?


> So what happens in practice with unfortunately large datasets?

You take a different approach. Haskell provides plenty of options.

> Then what is Haskell's value-proposition when it comes to solving real-world problems?

There are many. You can consult your favourite search engine to learn more.


>You take a different approach. Haskell provides plenty of options.

So do other languages. Why is Haskell special in this regard?

>There are many. You can consult your favourite search engine to learn more.

None that seem to address the problem of diminishing returns.


> So do other languages. Why is Haskell special in this regard?

I never suggested other programming languages don’t also have value. Again, if you want to educate yourself further on a specific technology’s benefits, I invite you to make use of a search engine instead of sea-lioning on a technical forum.

> None that seem to address the problem of diminishing returns.

That’s your opinion. Nobody is forcing you to like Haskell. You are free to just ignore it.


Another item is the crazy number of compiler pragmas you have to use to literally modify the meaning of the syntax just to get to a minimally viable state to work on a real project.

> But to understand Lens? Very few people actually understand Lens.

That's a lie. The basics of optics can be taught to even new Haskell programmers in an hour or so. Don't start in the deep end with generic optics with scary signatures like (Profunctor p, Functor f) => p a (f b) -> p s (f t). Start with something concrete like (String -> IO String) -> (User -> IO User) and then introduce the type variables one at a time. I've taught the basics of lenses, traversals, folds and other useful optics many times.
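A rough sketch of that first step (the User type and its fields are made up here):

    data User = User { userName :: String, userAge :: Int }

    -- The concrete starting point from above: update a user's name with an
    -- IO action and get back the updated user.
    nameIO :: (String -> IO String) -> (User -> IO User)
    nameIO f u = fmap (\n -> u { userName = n }) (f (userName u))

    -- Generalise IO to any Functor and you have the usual lens shape.
    name :: Functor f => (String -> f String) -> (User -> f User)
    name f u = fmap (\n -> u { userName = n }) (f (userName u))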


Coincidentally: it's also the only programming language I know of where someone has written a lengthy blog post about how I'm, in fact, not too dumb to comprehend it.

I didn't think about how hard Haskell is because I never forced myself to learn it. It just didn't interest me. With Java or C# I can make so many things with minimal friction. The other language that people talk about being hard is Rust. I am going to assume there are blog posts about it not being hard.

I like that with Go or Erlang everything I learned 5 or more years ago has still stuck with me. With D I can be effective quickly. With Rust I struggle a bit. Rust is probably great for building a web browser, but doing backend web development feels like way more work than Go or even Python (CherryPy). With Haskell I don't even remember a darn thing anymore.


Haskell was used in the first programming class at my university and that was excellent. Some people that already knew how to program had to throw their preconceptions out the window. So everyone was either a novice at programming entirely or at least a novice at functional programming. It wasn't a problem at all. I'd argue most students wrote better code in that class than they did in the imperative/OO classes that followed. Especially the students that weren't (and likely never did become) programmers.

The thing about imperative and OO programming is that it's hard to do well. I honestly haven't seen more than 1 in 10 developers write "good" OO code even after 10 or 15 years as professionals. Large-scale OO is a cognitive load that requires extreme focus, skill and talent. I prefer functional (or OO using an extreme functional discipline, like all immutable types etc.) because I don't have that talent, focus and skill.

My point isn't that Haskell should be the language of choice. I think it's a great language, but I think e.g. laziness makes it too hard to reason about performance and behavior. Today I'd recommend F#, I think.


And how many people dropped out of CS because of that first Haskell course? I saw plenty of people struggling with Python initially and I know exactly how they felt later when we had to learn Haskell.

I think if I had to learn Haskell first, then I would've just given up at the start.


Where I did my CS degree (Imperial College London), Haskell is the first language taught (before Java and C).

To my knowledge across 4 years there, nobody ever dropped out because of the Haskell.

People usually struggled with the maths instead.

People also did not have more problems getting Haskell concepts right than they did with linked-list modification in Java.


They might not drop out specifically because of Haskell, but I don't see how it wouldn't be a contributing factor. The Haskell course here was pretty much just a struggle for almost everyone that took it.

Mind the opposite: finishing a bachelor's without ever having to deal with a functional language.

I've met a lot of people (at work and post-grad) who have graduated from college without an idea of what a functional language is or the concepts behind one. I don't blame them, but it's a pity to find so many workarounds in legacy code which could have benefited from functions as parameters and less state in general.


I think they deliberately did some really “rough” math and CS classes up front because they knew some would drop out and they preferred them to drop in the first few weeks so someone could take their spot.

Yeah, my university did something similar. The very first computer science course was a bog-standard Java course - this was often an elective that engineers or science folks took also. The first CS-specific course used Haskell, and it probably turned off 75% of the people that tried taking it.

My university (Gothenburg University) did this for the CS course. Haskell does a very good job of levelling the playing field, since it was new even to those who had prior programming experience. As I recall, about 40 people out of 70 dropped out during the first semester.

I really hate the concept of "weed out" courses. That is the opposite of what colleges should be doing. We had plenty of them in the sciences almost 30 years ago when I started. We need to make everyone welcome in the sciences, not weed people out with overly complex 101 courses.

We had Haskell in college too, almost 30 years ago, and even then it was one of the hardest courses at school. This was before the IO monad, just as the language was being created in the early 90s.


Anyone who cannot do a Haskell class probably shouldn't become a software engineer. Better software that way.

Let's accept your basic premise, that it provides a meaningfully strong positive filter.

We only get better software if the people who don't pass the class proceed to not write software, as opposed to writing software anyway without the training available in the following courses (and presumably writing worse software) or being replaced in industry by others who could not have passed that filter.

If neither of those happens then we presumably will have better software. We will also have less software. It's not clear whether that trade-off is the right one.


Seems a bit harsh.

Of course it's harsh! But perhaps also true?

Or perhaps not so true.

I believe (based on exactly zero hard data) that FP fits the way some people's minds work, and procedural fits how other (many more) people's minds work. The ability to pass a Haskell class is not the same as the ability to program competently in at least one language.


Life is full of weed out exercises.

I managed to get through some of them, and was weeded out of others. That's life.


To be clear: I don't think anyone actually dropped out because of a tiny Haskell class. A lot of people did drop out because they hated math and failed the first couple of courses.

Lisp (Scheme) was chosen for the same reasons at my uni, but I think it is a superior choice, as the syntax is so small and it's very appropriate for teaching.

Programming in any language or paradigm is hard to do well. Since the vast majority of the software that we use is written in the imperative and OO paradigms, it is probably safe to say that it doesn't require any special developer power as you claim.

> I don’t have that talent, focus and skill.

Same. I'm perfectly happy to fall into the pit of success[1].

1. https://www.youtube.com/watch?v=US8QG9I1XW0


> Haskell was used in the first programming class at university and that was excellent.

I don't think getting started in haskell is that hard. If you're just doing theoretical exercises like building data structures and their related operations then it might even be easier than other languages.

Building a bigger application that does things like create a mini game or act as the backend for a web service, you know the cool stuff, becomes a lot harder.


Yes. I think we wrote a guessing game and a miniature parser etc. The concept of "web service" wasn't invented, sadly.

That just reads as:

"You are already smart enough to write Haskell"... if you accept that you are too dumb for most of Haskell and are therefore okay with being cut off from most of its ecosystem.

Digging into a dependency due to a bug or custom feature is something that usually happens on any bigger project of mine. If I have to expect that I won't be able to work in a dependency's codebase because it will most likely contain (multiple) concepts that I won't be able to understand, then that's a big no-no.


I have only done Haskell during college, and my friend is doing Haskell for a web backend in production, so I'd say I have a pretty good feel for the skill gap in Haskell. There are many concepts that he tries to explain to me which I don't get 100%, but it really does feel like all I need is more time. It was like that at first with monads, and then phantom types, row types, HKTs etc. I'd say the biggest deterrent for people is that it takes more than 1 blog post and 1 hobby project to comprehend everything. It may take 1-2 years to learn all the concepts. To me the beautiful thing is that the concepts learned aren't some language quirks but instead general programming/math concepts which you simply cannot think about in simpler languages like C#.

With all that, I'm still heavily put off from Haskell by all the tooling, and I just can't let go of the comfort of working in Visual Studio. In 10 years I hope either C# gets some of my favorite things from Haskell like HKTs and better inference, or Haskell gets better tooling and a broader ecosystem. My bet is on the former, but Gafter keeps postponing HKTs spec after spec.

If you feel like messing around with Haskell again for an afternoon then install VSCode with ghcide (and a syntax highlighting plugin ofc). You might get surprised ;)

Are you using that? Doing Haskell on vscode has been a painful experience for me.

Yep, my whole team has switched to VSCode+ghcide and so far we're in love with it. Mind you, ghcide is brand new, so this luxury of having a 21st-century Haskell IDE did not exist even just a few months ago. Before, we used ghcid with whichever editor (e.g. Vim/Emacs/VSCode).

Thanks, I'll switch to it now based on your recommendation. Desperately want a decent Haskell editing setup, the bad tooling is killing my productivity and making me consider quitting Haskell.

Edit: Ok, so I tried it. Template Haskell, which my projects use heavily, seems to break it. And it does not even report syntax errors in some files. Seems like yet another dead end for me as an editor setup, unfortunately.


Ah yes, TH. That's imho the last biggie in the way of total awesomeness. See this ticket: https://github.com/digital-asset/ghcide/issues/34

Got it, I'll keep an eye on this. Thanks for this recommendation!

Have you written up a blog post about this? I think it would be really useful!

Agreed! For years I used Emacs with various Haskell tools, but after I tried VSCode and ghcide, that is almost all I use anymore.

Most people drive cars, and yet have very little understanding of how they actually work internally. As long as the interfaces are well designed, this works fine for most.

I say the same is true for Haskell.


That's fine if you use your car for commuting (= casual projects with no special requirements) but not if you want to compete on the race track.

Of course, ideally you can keep out of the dependencies, and a lot of people don't even think about digging into them and unnecessarily limit themselves, but as I said in my initial comment, every bigger project I worked on involved tweaking dependencies in one way or another.


To run with your analogy — which I don't find particularly constructive anyway — people who are just starting to drive don't start by competing on the race track.

With no prior experience with lisp, I was able to get comfortable in a clojure-only codebase in less than 3 months.

That’s great. Not sure what point you’re making though :)

I'd say rally driving is the best analogy. Not only do you have to drive the car at peak performance, you also have to perform ad hoc repairs halfway through a stage

Imagine you spent a career learning the piano. Would you expect to pick up the violin as if you were an expert?

Would it be reasonable to declare violin sucks! I've played piano for 30 years, vibrato is so hard it must be wrong, tell me why I should learn the violin?

Actually probably a lot will come with you, your dexterity and music theory will be quite useful, as will your ear.

But really, the instruments are very different and if you want to level up your sense of pitch or really learn how to sing a melody, the violin can be great for that...


Violin is definitely much harder to pick up than piano though, regardless of experience with other instruments. Obviously mastery is hard for both, but getting a basic pleasant melody out of a piano is far easier than doing the same with a violin.

Haskell is rather similar to a violin in that respect.


>Would you expect to pick up the violin as if you were an expert?

Rightly or wrongly, in programming we do expect this. We keep saying things like how languages are only tools, and how a term like "Java programmer" is as absurd as "Casio mathematician".

If you have effortlessly switched between imperative OO languages many times before, it's easy to think that any language you can't quickly learn must be because it's fundamentally and abnormally difficult.


Are you replying to the wrong link? The link is about programming fyi

I'm trying to draw analogy to maybe something less inflammatory.

Most of us here are programmers, a lot of us professionals.

Music is the output of musicians, as perhaps code is the output of programmers.

But music comes in all kinds of flavours: classical vs jazz vs pop

Performance vs composition

Recital vs improvisation

And then choice of instrument within any of those areas.

I feel a lot of people misplace their aversion to Haskell because they fail to recognize how different Haskell is.

It's a bit like when English speakers think Chinese is a "hard" language. It's not, linguistically it's actually a pretty simple language (whether the script is easy/hard is up for grabs...)

Actually Chinese is just different from English, so you have few anchors and familiar friends to base things on, and it feels hard because you're starting afresh, like a child. And yet children in China manage quite fine learning Chinese....

Meanwhile, ask a Spanish speaker about their experiences learning Portuguese...


No one has ever been able to explain to me why I should use Haskell instead of something else. I get answers about idempotence and list comprehension and strong typing which are great tactics but I never get the sense that they fit into an overarching strategy for how my life will be made easier by using Haskell.

I know Carmack has presented a case to code in a pure functional style here https://www.gamasutra.com/view/news/169296/Indepth_Functiona... but that somewhat precludes the necessity of switching to another language from C++. Further, he advocates for a new keyword 'pure' to assist the compiler like const does, perhaps not knowing that const doesn't actually help the compiler out in practice.


> but I never get the sense that they fit into an overarching strategy for how my life will be made easier by using Haskell.

I would add that the elitist signaling that is rampant in functional programming in general and the Haskell community in particular is also not helpful.


As a counterpoint: when someone promotes any of the "new wave" (even though Haskell isn't new) languages like Haskell, Go, or Rust for certain use cases, it's become too common lately for people to dismiss it as evangelism (or indeed, elitism), and that is also unfortunate in my opinion.

I never got a sense of evangelism or elitism from Go advocates

Yeah, they don't need to evangelize it, they just have the critical mass to use it everywhere and the rest of us have to deal with their garbage.

Absolutely. "I'm just not smart enough to use mainstream languages" - "In fact we both know you are very smart. But, as we've just learned, also not competent enough to hide your elitist signaling in subtlety".

Wait, how is saying “I’m just not smart enough to X” elitist signaling?

If I tell someone "I'm not smart enough to use php", I'm admitting a weakness. I genuinely don't have the mental capacity to keep details straight when using it. If I tried to use php at a job, I'd move at such low velocity that I'd get PIP'd.

Or like saying “I can’t write code without automated tests.” there are genuinely people who, if they try to write code without tests get stuck for hours and don’t know how to move forward.

I'd think saying this is the same sentiment. I'm willing to believe that the Haskell community is elitist in other ways. But how is "I'm not smart enough to X" an expression of that?


"My brain is 100% pure logic so I can't deal with the unholy messes of the unwashed masses. In fact I'm so smart that I didn't even notice that many many things are much harder to write in this language that requires you to think about unrelated math things, because I solve them for breakfast"

How does "I'm not smart enough" turn into "I'm so smart"?

I didn't think it was that subtle, but different people have different sensors I guess.

It doesn't seem subtle. It seems like a dramatic transmutation from one meaning ("I have a need") into another totally different meaning ("other people are less worthy of respect").

I don't know, if you say "I'm just not smart enough" as a mathematician that is more interested in Functors and Monads and Yonedas than solving actual engineering problems, then there is definitely deception and signaling going on. IMO.

So, I agree that it is totally possible and maybe common for someone to ignore the business impact of their work. More engineers should read https://www.kalzumeus.com/2011/10/28/dont-call-yourself-a-pr... and understand that ultimately their role in their organisation is to Cut Costs/Risks or Grow Revenue/Impact. But having a commercial or UX mindset doesn't make your brain more effective at using any given tool.

Imagine a programmer who, when she studies Haskell, discovers that the way it resembles category theory provides really effective mental affordances to her memory. She feels that Haskell's formalism makes it easier for her to have a clear mental model. When she writes and reads Haskell, she can predict what that code does to data and her predictions are largely correct. She feels confident in her understanding. This means that if she has a business goal she needs to fulfil, she's confident she can estimate that task, communicate about with stakeholders its complexity. and execute it.

Imagine that this programmer, when she starts to study PHP, does not find similar affordances. She finds it difficult to build a conceptual structure of it in her head. She finds that she forgets things or overlooks things when she tries to write PHP. If she tries to predict how a piece of code she writes in it behaves, she frequently is wrong. She feels very nervous about it. Consequently, when asked to plan adding a feature to a PHP task, she doesn't feel she's got a good enough grasp of her tools to answer. She worries about the risk to the business of her blowing past estimates and making errors in production.

She tells someone "I don't think I'm smart enough to write PHP".

1) Is this statement a lie? 2) Is this statement elitist? 3) Is she a person who could exist?

What if her positive feelings about Haskell lead her to evangelise it in a way that ignores the fact that another person's brain might work in a different way to hers. She claims to this other person that Haskell is easy.

4) Does that statement make her elitist?

-----

My answers: No, No, Yes, Kinda yea.


"My coding practises are so good, I can't even write it the bad way anymore."

GP complains that nobody can explain to him why he should switch to Haskell, but when somebody tries, that's "elitist signaling"?!

You know what, use whatever makes you happy, while we continue to avoid success at all cost.


It depends on how it's done. If the explanation is "Because strongly-typed FP is the One Right Way, and you're stupid to use anything else", that's elitist signalling. If it's "because these certain kinds of errors simply go away, without other kinds of errors becoming more common", then it's not.

Yeah, the elitist attitude around the language is a shame. Haskell is actually pretty easy to learn and teach, and is very useful in certain domains.

Functional programming is just one of the paradigms, and useful in its own domain. It's not an end-all solution to everything.


> Haskell is actually pretty easy to learn and teach

What are you basing this on?


I must concede that it is my own experience

This is what has killed Scala. It was a fine pragmatic hybrid language which perfectly combined FP and OOP. Now it is attacked by hordes of type astronauts who will look down on you if you are not using Tagless Final, Cats, or whatever new fad is now in vogue.

If you (or anybody else) haven't given up on Scala just yet, have a look at ZIO

https://github.com/zio/zio/

It's a fresh, and pragmatic too, take on how to do pure functional programming, but without advanced concepts (like higher-kinded types).

It could be (and maybe even has been) ported over to other languages besides Scala.


Yes, ZIO is really interesting. Unfortunately, its author has recently been ousted from the community for political reasons (and there is too much drama in the Scala community in general). I only hope that the dust will settle someday.

Just wait to see where it will take Kotlin with its Scala refugees.

https://arrow-kt.io/


The people making this library aren't Scala refugees - they're a Scala shop that uses Kotlin on Android only, so they decided to bring over their idioms.

I stand corrected, but it doesn't change the fact of what is coming to Kotlin as well.

> a fine pragmatic hybrid language which perfectly combined FP and OOP

Ah, so Common Lisp then?


Really? Might I ask which places you are visiting where you see this behaviour? I have rarely seen elitist behaviour in r/haskell or freenode #haskell. Mostly I see people complaining about elitism on HN.

I guess that's the main problem.

And look, defining a problem functionally works great in 10% of cases, but it complicates some 40% of other cases (very imprecise numbers).

The world is not functional, as it turns out. And a lot of those cases where you can write functionally don't gain much from a performance or correctness perspective compared to the procedural version


> The world is not functional

What does this even mean? If the world is not functional, then what is it? Is the world procedural? And what world are we talking about? Our planet, physically? Are you then discounting the worlds of mathematics and logic? You don’t gain performance? What performance? Program execution speed? Development pace and time to market?

Compared to the procedural version? Is your view that procedural programming is inherently better — for any definition of the word — regardless of context? Would SQL queries be easier to write if we told the query planner what to do? Is the entire field of logic programming — and by extension expert systems and AI — just a big waste of time?

So many vague aphorisms which do nothing to further the debate. And Haskellers are the ones getting called “elitist”!


>> The world is not functional

> What does this even mean? If the world is not functional, then what is it?

The world is full of special cases, of corner cases, of exceptions, of holes, of particular conditions, of differences for first or last element, and other irregularities; it is also full of sequential events. Applying a function to a starting set, to produce a resulting set or value is very lean with clean nice mathematical objects and relations, but not with many real-world problems.

> And Haskellers are the ones getting called “elitist”!

Well, one may say that answering with questions could fit the bill...


> The world is full of special cases, of corner cases, of exceptions, of holes, of particular conditions, of differences for first or last element, and other irregularities; it is also full of sequential events.

Yes. All of which are modelled in Haskell in a pretty straightforward manner. I’d argue Haskell models adversity like this better than most languages.

> but not with many real-world problems.

Haskell is a general purpose language. People use it to solve real world problems every day. I do. My colleagues do. A huge amount of people do.

> Well, one may say that answering with questions could fit the bill...

I see. So trying to define the terms to enable constructive discourse is elitist. Got it.

If you want me to be more concrete and assertive, fine. No problem. Here we go.

You are wrong.


> The world is full of special cases, of corner cases, of exceptions, of holes, of particular conditions, of differences for first or last element, and other irregularities; it is also full of sequential events.

True

> Applying a function to a starting set, to produce a resulting set or value is very lean with clean nice mathematical objects and relations

True

> but not with many real-world problems.

Debatable, but in any case a non-sequitur. Are you sure you're talking about functional languages as they're used in reality?

I once wrote a translator between an absurdly messy XML schema called FpML (Financial products Markup Language) and a heterogeneous collection of custom data types with all sorts of "special cases, corner cases, exceptions, holes, etc.". I wrote it in Haskell. It was a perfect fit.

https://en.wikipedia.org/wiki/FpML


It's not vague, processors are still procedural. Network, disk, terminals, they all have side effects. Memory and disk are limited.

SQL queries are exactly one of those cases where functional expression of a problem outperforms the procedural expression, and that's why they're used where it matters.


Are you suggesting functional programming is somehow so resource intensive that it is impractical to use? Or that functional programming makes effectful work impractical?

Because neither of those are true.

You've conceded that SQL queries are one case where a functional approach is more ergonomic (after first asserting that the world is not functional, whatever that means). Why aren't there other cases? Are you sure there aren't other cases? One could argue that a functional approach maps more practically and ergonomically to the majority of general programming work than any other mainstream approach.


> Are you suggesting functional programming is somehow so resource intensive that it is impractical to use? Or that functional programming makes effectful work impractical? Because neither of those are true.

No, I'm saying the functional abstraction doesn't work or is clunky in a lot of cases and that Haskell's approach of making it the core of the language is misguided (Lisp is less opinionated than Haskell for one).

> One could argue that a functional approach maps more practically and ergonomically for the majority of general programming work than other mainstream approach.

They could argue, but they would be wrong.

SQL works because it's a strict abstraction on a very defined problem.

Functional is great when all you're thinking about is numbers or data structures.

But throw in some form of IO or even random numbers and you have to leave the magical functional world.

And you know why SQL works so fine? Because there are a lot of people working hard in C/C++ doing kernel and low-level work so that the "magic" is not lost on the higher level abstractions. And can you work without a GC?


> I'm saying the functional abstraction doesn't work or is clunky in a lot of cases and that Haskell's approach of making it the core of the language is misguided

Functional programming certainly doesn't work literally everywhere, but to say Haskell's design is "misguided" is your opinion, and it's one that some of the biggest names in the industry reject. How much experience do you have designing programming languages? Or even just building non-trivial systems in Haskell? Judging by the evident ignorance masked with strong opinions I'd say around about the square root of diddly-nothing.

> But throw in some form of IO or even random numbers and you have to leave the magical functional world.

Wrong. Functional programming handles IO and randomness just fine.

> And you know why SQL works so fine? Because there are a lot of people working hard in C/C++ doing kernel and low-level work so that the "magic" is not lost on the higher level abstractions

Are you suggesting there aren't a lot of people working hard on GHC? Because if you are — and you seem to be — then again you would be wrong.


> processors are still procedural

Exactly. I don't know why Haskell fanboys insist on abstracting us so far from the machine. Declarative programming is really unhelpful because it says nothing about the order in which code executes.

We live in a mutable physical universe that corresponds to a procedural program. One thing happens after another according to the laws of physics (which are the procedural program for our universe). Electrons move one after another around circuits. Instructions in the CPU happen one after another according to the procedural machine code.

The universe would never consider running 2020 before 2019 and a CPU will never execute later instructions before earlier instructions.

Haskell fanboys talk about immutable data structures like it's beneficial to go back in time and try again. But it's a bad fit for the CPU. The CPU would never run some code and then decide it wants to go back and have another go.


You're saying a lot of wrong things about CPUs. CPUs do execute instructions "out-of-order", and they do speculative execution and rollback and have another go. Branch prediction is an example.

All of this is possible only with careful bookkeeping of the microarchitectural state. I agree the CPU is a stateful object. But even at the lowest level of interface we have with the CPU, machine code, there are massive gains from moving away from a strict procedural execution to something slightly more declarative. The hardware looks at a window of, say, 10 instructions, deduces the intent, and executes the 10 instructions in a better order, which has the same effect. (And yes, it's hard for me to wrap my head around it, but there is a benefit from doing this dynamically at runtime in addition to whatever compile-time analysis.) In short, it is beneficial to go back and have another go.

This was demonstrated also in https://snr.stanford.edu/salsify/. If you encode a frame of video, but your network is all of a sudden too slow, you might desire to encode that frame at a lower quality. Because these codecs are extremely stateful (that's how temporal compression works), you have to be very careful about managing the state so you can "go back and have another go".

I am less confident about it, but what you say about the universe also seems wrong. What physical laws do you know take the form of specifying the next state in terms of the preceding state? And literally many of them are time-reversible.


Thanks. It was an attempt at parody but apparently I didn't lay it on thick enough. I'll try to up my false-statements-per-paragraph count next time.

What you wrote about CPUs many people believe, and many simpler CPUs operate like that. So it was difficult for me to detect as parody. Sorry that I missed it! It's funny in hindsight.

Not sure what the parody was in computing 2020 before 2019.


> Exactly. I don't know why Haskell fanboys insist on abstracting us so far from the machine.

I believe it is because abstractions are the way we have always made progress. Is the C code that's so close to the machine not just an abstraction of the underlying assembly, which is an abstraction of the micro operations of your particular processor, which in turn is an abstraction of the gate operations? The abstractions allow us to offload a significant part of mental task. Imagine trying to write an HTML web page in C. Sure it's doable with a lot of effort, but is it as simple as writing it using abstractions such as the DOM?

> We live in a mutable physical universe that corresponds to a procedural program. One thing happens after another according to the laws of physics (which are the procedural program for our universe).

You just proved why abstractions are useful. "One thing happens after another" is simply our abstraction of what actually happens, as demonstrated by e.g. the quantum eraser experiment [1][2].

[1] https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser

[2] https://www.youtube.com/watch?v=8ORLN_KwAgs


>I believe it is because abstractions are the way we have always made progress.

>The abstractions allow us to offload a significant part of mental task

Edge/corner cases in our abstractions are also how propagation of uncertainty[1] happens. You can't off-load error-correction [2].

[1] https://en.wikipedia.org/wiki/Propagation_of_uncertainty

[2] https://en.wikipedia.org/wiki/Quantum_error_correction


I'm not sure what you mean by "You can't off-load error-correction". In the case of classical computing, we do off-load error-correction (I don't have to worry about bit flips while typing this). In the case of quantum computing, if we couldn't offload error-correction, an algorithm such as Shor's wouldn't be able to written down without explicitly taking error-correction into account. Yet, it abstracts this away and simply assumes that the logical qubits it works with don't have any errors.

> Instructions in the CPU happen one after another

https://en.wikipedia.org/wiki/Superscalar_processor

> a CPU will never execute later instructions before earlier instructions

https://en.wikipedia.org/wiki/Out-of-order_execution

> The CPU would never run some code and then decide it wants to go back and have another go

https://en.wikipedia.org/wiki/Speculative_execution


Actually, it's been the FP fanboys in general, all the way back to those IBM mainframes where Lisp ran.

Even C abstracts the machine, the ISO C standard is written for an abstract machine, not high level Assembly like many think it is.

Abstracting the machine is wonderful; it means that my application, if coded properly, can scale from a single CPU to a multicore CPU coupled with a GPGPU, distributed across a data cluster.

Have a look at Feynman's work on the Connection Machine.


> The universe would never consider running 2020 before 2019 and a CPU will never execute later instructions before earlier instructions.

Except thanks to a compile-time optimization.


And ooo-execution that happens at _runtime_.

This 1000 times.

The reason SQL (really - relational algebra) works so well is precisely because relational data is strongly normalized [1].

But the data is only a model of reality, not reality itself. And when your immutable model of reality needs to be updated strong normalisation is a curse, not a blessing. The data structure is the code structure in a strongly-typed system[2]

Strong normalisation makes change very expensive.

[1] https://en.wikipedia.org/wiki/Normalization_property_(abstra...

[2] https://en.wikipedia.org/wiki/Code_as_data


> The world is not functional, as it turns out.

Have you watched the talk "Are we there yet?" by Rich Hickey? He makes a convincing case, referencing the philosophy of Alfred North Whitehead, that "the world is functional".

http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hic...


I like hickey but a lot of his examples are data driven domains where we can indeed model the world in functional or accounting-like terms.

Where this falls short is domains that are extremely ill-suited to keeping track of past states over time (his identity idea) simply because it would break performance or be hard to model in those terms, say simulations, game development, directly working with hardware and so on.

Much of his argument relies on the GC and internal implementation being able to optimise away the inefficiencies of the functional model that needs to recreate new entities over and over, but this simply is not always enough.

This also is very obvious if you look into the domains where Clojure or other declarative functional languages have success, it's almost always business logic / data pipeline work.

edit: And in fact a lot of his most salient points aren't really as much about functional programming as they are about loose coupling and dynamic typing. A lot of his criticism of state and identity in languages like Java isn't related to Java not being functional, it's related to Java breaking with the idea of late binding and treating objects as receivers of messages rather than "do-ers".


> it would break performance or be hard to model in those terms, say simulations, game development, directly working with hardware and so on

Performance seems to be the problem there more than the model not fitting. And Rich’s answer would probably be that Clojure isn’t the right tool for those tasks in the same way that any general-purpose GC’d language isn’t.

Modeling a game or sim as a successive series of immutable states sounds great to me, so I’m curious where you see a mismatch with them. Abstracting hardware properties this way sounds useful too, but I’ve not worked with it enough to comment.


>Modeling a game or sim as a successive series of immutable states sounds great to me, so I’m curious where you see a mismatch with them.

Because I don't really think the functional model describes it in an intuitive way. You can model a sim or game at a high level like worldstate1 -> worldstate2 etc., but it doesn't really tell you much, because often you don't really care what the state was a second ago anyway, in particular not in its entirety, and because at that high level of abstraction you don't really get any useful information out of it, so there's no point in tracking it in the same way it makes sense to track a medical history or a bunch of business transactions.

Rather instead of thinking of games or sims as high level abstract transitioning states we tend to reason about them as a sort of network of persistent agents or submodules and that lends itself much closer to a OO or message based view of the world.

I think in many systems that are highly complex and change very incrementally reasoning about things in terms of functions doesn't really tell you much. You can for example reason about a person as like say a billion states through time but there's not much benefit to it at least to me.


That makes sense, and the actor model (well, entity-behavior, which is fairly close) has been what I reached for in the past when doing game dev.

I agree that looking at a whole worldstate at once is unlikely to be useful. But I do see great value in keeping state changes isolated to the edges of a system, and acting upon it with pure functions. If you have the means to easily get at the piece of state tree that you’re interested in, you can reason more clearly about how each piece of a simulation reacts to changes by just feeding the relevant state chunk to it and seeing what comes back. That takes more work to set up or repeat if each agent tracks its own internal state.

I haven’t used Clojure to make a game before, so I’m speculating here. I find myself avoiding internal state by habit lately, though of course “collections of relevant functions” are valuable tools. I just lean towards using namespaced functions instead of objects with methods.


A lot of what's done in software eventually ends up getting hardware support. Our current von-Neumann/modified-Harvard hardware architectures are old and it's precisely thinking outside these lines that will lead to non-local maxima.

In a functional world, persistence (e.g. memory) cannot be a thing.

The case is convincing, but it requires you to reject your own, human, faculties.

Makes the use/expression of language rather awkward when you can't remember any words.


> In a functional world persistence (e.g memory) cannot be a thing.

Why not?

To give a functional Haskell flavoured example, I can use the Reader and Writer monads (which are functional) to create a whole bunch of operations which write to and read from a shared collection of data. That feels a lot like memory to me.

Indeed, the Reader monad is defined as:

> Computations which read values from a shared environment.

I just don't understand the whole "you can't have memory / order of operations / persistence / whatever else" as an argument against functional concepts when they have been implemented in functional ways decades ago. The modern Haskell implementation of the Writer monad is an implementation of a 1995 paper.
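As a small, hedged sketch of the kind of shared reading and log writing meant here (the Config type and the log strings are made up; the functions come from the mtl package):

    import Control.Monad.Reader (Reader, ask, runReader)
    import Control.Monad.Writer (Writer, tell, runWriter)

    -- Read-only "memory": every computation sees the same shared environment.
    newtype Config = Config { greeting :: String }

    greet :: String -> Reader Config String
    greet who = do
      cfg <- ask
      pure (greeting cfg ++ ", " ++ who)

    -- Write-only "memory": computations append to a shared log.
    step :: Int -> Writer [String] Int
    step n = do
      tell ["doubling " ++ show n]
      pure (n * 2)

    demo :: (String, (Int, [String]))
    demo = ( runReader (greet "world") (Config "hello")
           , runWriter (step 21) )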

Edit: it looks like the person I responded to doesn't actually want to have a reasonable discussion, but for anyone else reading along, it's entirely possible to have functional "state" or "memory" - what makes it functional is that the state / memory must be explicitly acknowledged.

Trying to dismiss functional computation in this way is essentially a no true Scotsman; functional computation is useless because it can't do X (X being memory or persistence or whatever), but when someone presents a functional computation that does do X, it's somehow not a "real" functional computation precisely because it does X. Redefining functional computation as "something that can't do X" doesn't help anyone, and doesn't actually help with discussing the pros and cons of functional programming, since you're not actually discussing functional computation but some inaccurate designed-to-be-useless definition mislabeled as functional computation.


>what makes it functional is that the state / memory must be explicitly acknowledged

Isn't state/memory assumed by default when talking about computation?

What kind of computations you can perform on a Turing machine without a ticker tape?


Lambda calculus, for example, has no ticker tape.

What kind of computations can you perform with Lambda calculus without any input variables?

What are you applying your α-conversion and β-reduction operations to?


[flagged]


If I am Euthyphro, then you are Socrates - no?

I thought Socrates was the troll.


What is this thing that you reading from and writing to?

It sounds mutable.


I’m sorry if immutability is a new concept to you, but you can read from one state, and write to a new state.

There you go. No mutation!
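A tiny sketch of what that looks like (the World type here is invented): each "write" is a function from the old value to a new one, and the old value is never touched.

    data World = World { tick :: Int, score :: Int }

    -- Produce a new World with one field changed; the original is untouched.
    step :: World -> World
    step w = w { tick = tick w + 1 }

    -- Sequencing "writes" is just function composition.
    after3 :: World -> World
    after3 = step . step . step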


Writing state is the definition of mutation.

It's what all data storage systems do.

An immutable data store sounds pretty useless.

https://en.wikipedia.org/wiki/Persistence_%28computer_scienc...


It’s fine for you to think that. That’s not going to stop other people from finding these concepts useful though.

Once again, you are free to ignore the things you don’t understand. Your trolling is mostly harmless.


>you are free to ignore the things you don’t understand

Is that why you are ignoring the complexity behind persistence/mutability/state?

Even the Abstract Turing machine has ticker tape you know...


Interesting point. Lambda calc is equivalent, where the expression can be seen as equivalent to the ticker tape. So when a reduction occurs in lambda calculus, it's just like a group of symbols being interpreted and overwritten. But the thing is, no one has to overwrite the symbols to "reduce" them. It's just a space (memory) optimization. Usually to calculate something (you're fully correct by the way) we do overwrite symbols. The point of using immutable style is simply to make sure that references don't become overwritten. It sucks to have x+2=8 and x+2=-35.4 within the same scope, right? Especially when the code that overwrote x was in a separate file in an edge case condition you didn't consider.

Thanks for your input, Euthyphro!

My knack for spotting performative contradictions didn't sway you towards Descartes?

Actually, according to quantum physics, the world is functional. The Universe can be described completely by a function (the universal wave function) and is completely time-reversible; you can run it forwards or backwards because it has no side-effects.

The collapse of a wave function into a single state that apparently happens on observation is as far as we know non-deterministic. The wave function itself is a probability amplitude. So I think it's a misinterpretation to say that "the universe can be described completely by a function" when everything we can actually know about the universe is observable and has necessarily collapsed from a probability amplitude of possible states into the observed state.

My interpretation is the exact opposite. Newtonian physics suggests the fundamentally deterministic world that QM can't because in the latter the only thing that is deterministic is probabilities.


Collapse of a wave function occurs when one wave becomes entangled with another. If I shine a photon on a particle to measure it, and the particle emits a photon back to me, I am now entangled with that particle through mutual interaction. But to a distant observer, there is no wave collapse. If our universe is purely quantum and exists alone (not being bumped into by other universes), then its wave function would just be evolving unitarily according to its Hamiltonian - totally deterministically (the wave function, that is).

I don't know what it means for the universe to be "purely quantum", or what it means to be a "distant observer" yet somehow not interacting. A wave function in itself is not observable. Is it real? Definitional. Is it a physical phenomenon? Dubious. All we know is that as a model it is consistent with what we observe. What actually can be measured (and can reasonably be considered real and physical in that sense) can only be predicted probabilistically using quantum mechanics. It's like saying that I can perfectly predetermine the outcome of a dice roll—it's between one and six.

By that standard every programming language is functional, since the compiled program is a function.

Since I got downvoted let me elaborate: it isn’t relevant that the universe is a function. What matters is whether the universe can be modeled better as the composition of pure functions or as some stateful composition (e.g. interacting stateful objects).

Programming languages aren’t about the final result but about how you decompose the result into modular abstractions.


You're just stating claims without any evidence.

> I get answers about idempotence and list comprehension and strong typing which are great tactics but I never get the sense that they fit into an overarching strategy for how my life will be made easier by using Haskell.

First, I suppose those three things fit together in the sense that they make list comprehensions safe and fast.

But as you say, Haskell isn't the only language that gives you that.

I think the killer feature of haskell is that the community went through ridiculous contortions and pain to get rid of side effects.

Which means that Haskell programs exclude certain kinds of bugs - so it's not just that you can program in a functional style, but that you can be fairly certain there's no "shifting sands" under your functions.

On the other hand, I see many haskell programmers say that the powerful type checker forces them to think a bit differently, and perhaps harder, when writing - with the result that once the type signature fits, the function tends to "just work".

Anyway, I never did really enjoy Haskell - not as much as Standard ML anyway. Sadly there doesn't seem to be an SML with good real-world libraries, so I've not really been using that either, outside of university :/

Perhaps looking at one of the few popular haskell utilities, and see if you feel Haskell is a good fit? I'm honestly not certain myself.

https://github.com/jgm/pandoc/blob/master/src/Text/Pandoc/Re...


Haskell's type system works best for me in allowing fearless changes, even significant ones to the core of a code base. The compiler won't let me overlook something.

A comprehensive test suite would achieve the same thing, but I'm rarely confident that my test suites are truly comprehensive.


If you're having trouble justifying the switch to Haskell and are primarily concerned with systems programming, I'd recommend try out a language like Rust. Many of the features present in Rust were inspired by Haskell, and there is quite a bit of overlap between the communities.

As far as Haskell itself goes I've written a little in the past in comments on here about why it's useful:

https://news.ycombinator.com/item?id=20111321

https://news.ycombinator.com/item?id=20260095

These days, when I want to start a new project, the decision on whether to use Haskell or Rust is the biggest consideration I make -- the expressiveness, safety and correctness Haskell can provide versus the raw speed, ease of build, and memory safety Rust provides.


Now I don't know Haskell particularly well but I have found another language I'm focusing on, that people basically have the same question about.

For me, the quest is to be able to build more with less effort, with fewer bugs, and to make things more maintainable for the future. For this, I've found Clojure (of the 10+ languages I've professionally worked in) to be the best one, but like all the rest of the languages, it doesn't fit every single use case. But most of them, so far at least.

I could go on and on giving you arguments for/against, but I think these goals are the same for most programmers who aim to learn/use some of the lesser known languages.


I think it has a lot to do with taste; you really need to like typing and abstracting everything. I do that in C# (no stringy types unless I'm in a blistering hurry) because I get paid to program C# and because I actually find it very beneficial, so it's not limited to Haskell; Haskell just taught me to take it to extremes, which, in my opinion, is what you are basically asking about. That 'elitist' community in Haskell teaches a lot of abstractions that, once practically understood, give me a great amount of power (which, for me again, means a far better understanding of large codebases and easy adding of features). For me that answers the question you ask, as a lot (most) of these abstractions are just very hard to move to other languages (not impossible, which is why we see more and more of them, but not at the speed the Haskell community introduces them).

Short answer: Because of the features you mentioned, You will be able to write code which is correct, self documenting, and achievable in fewer lines of code[.].

Your life won't be easier if you solve problems, because solving problems is hard. But you get a lot of benefits from using any language which has features similar to Haskell's, for the above reasons. If you are into programming language research, compilers, theorem provers etc., Haskell comes with enough idioms, features and tools that it is a viable language for the tasks at hand.

[.] A lot of languages rightly claim this, like Lisps and modern Lisp derivatives, but Haskell brings with it a top-of-the-line type system, (imo) clearer syntax and tooling than OCaml, and decent support from industry and academia.


It’s self documenting if you consider a type signature to be documentation.

In addition to type signatures, the whole syntactical outlook resembles mathematical definitions.

This comment misses the point of the article, which argues against an urban legend that Haskell is too difficult "unless you are this tall" (a PhD, an E.T., a superhuman). I also had a shot at explaining why people are already tall enough in a short "comic strip" a while back [0]. I think the article does a good job of reassuring readers that the urban legends are a myth.

Now, there are many reasons why one might want to use Haskell: for instance, to have fun, to understand some pattern better, to write expanding-brain memes, or because they are good at solving problems with it. It is fine if your reasons do not intersect with other people's reasons. You can try to find out whether Haskell matches your reasons in some other essays [1,2,3]. Maybe it is true for many people that their needs and desires are entirely covered by their own toolset, but as a curious/optimistic person I find it incredibly pedantic to say I'll never ever need to learn or use something (there's different goodness in everything).

Personally, practicing Haskell led me to appreciate the importance of, and the trade-offs involved in, isolating and interleaving side effects. Similarly, it gave me some vocabulary to articulate my thoughts about properties of systems. Both are super important in the large, when you architect software and systems. There are other ways to learn that (e.g., a collection of specialized languages), but at least for me, Haskell helped me build an intuition around these topics. That said, the prevalence of negative and derailing comments in discussions about Haskell can be demotivating (but our industry is like this).

[0] https://patrickmn.com/software/the-haskell-pyramid/
[1] https://www.snoyman.com/blog/2017/12/what-makes-haskell-uniq...
[2] https://www.tweag.io/posts/2019-09-06-why-haskell-is-importa...
[3] http://blog.vmchale.com/article/functional-haskell


"Idempotence, list comprehension and strong typing", efficient communication is done succinctly through terminology.

Idempotence eliminates the need for a whole bunch of retry and error checking code, which themselves are often highly stateful and susceptible to race conditions and bugs.
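
A toy sketch of why retries become safe (the names here are made up for illustration): an idempotent update applied twice leaves the system in the same state as applying it once, so a client can retry after a timeout without double-applying anything.

    import qualified Data.Map.Strict as M

    -- Setting a key to a value is idempotent, unlike e.g. incrementing it.
    setBalance :: Ord k => k -> Int -> M.Map k Int -> M.Map k Int
    setBalance acct amount = M.insert acct amount

    -- setBalance "a" 10 (setBalance "a" 10 m) == setBalance "a" 10 m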

List comprehensions can allow for certain guarantees about parallelization, which is becoming ever more relevant as processors scale with threads and cores rather than clock speed. All without explicit programmer intervention.
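
A minimal example: the comprehension states what the result is, not the order in which elements must be produced, and it is that declarative reading which leaves room for the elements to be evaluated in parallel.

    -- Squares of the even elements; no loop order is prescribed.
    squaresOfEvens :: [Int] -> [Int]
    squaresOfEvens xs = [ x * x | x <- xs, even x ]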

Strong typing means the compiler can check types at compile time rather than at runtime; this (largely) eliminates entire classes of errors before they happen. If errors happen at compile time, the programmer fixes them; if errors happen at runtime, the user gets annoyed.
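
A trivial sketch of the kind of mistake that gets caught before the program ever runs:

    addAges :: Int -> Int -> Int
    addAges x y = x + y

    -- addAges 3 "five"   -- rejected by the compiler: a String is not an Int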

These are just trivial examples of some of the benefits of these techniques; note how they eliminate not single errors but entire classes of errors. Selecting the right technologies means fewer bugs, fewer headaches for users, fewer support calls, and less firefighting, and THAT is how your life will be easier with Haskell, for example.


Idempotence means spending days thinking about how to rewrite large parts of your application to fit "stateful" actions in (for example, logging?) or how to make your list processing actually fast (hint: it doesn't work with lists). Idempotence makes it extremely hard to control when something actually happens or how much memory is used.
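
(To be concrete about the "not lists" hint: the usual move is to rewrite the hot path against unboxed arrays; a minimal sketch, assuming the vector package's Data.Vector.Unboxed, would look like this.)

    import qualified Data.Vector.Unboxed as VU

    -- Same shape as the list version, but backed by a flat unboxed array.
    sumSquares :: VU.Vector Int -> Int
    sumSquares = VU.sum . VU.map (\x -> x * x)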

Strong typing means you'll spend your time waiting for the compiler to finish, or thinking about how to unwrap this monad stack in a way that doesn't suck. It also means large parts of your app will have very unstable interfaces with way too many dependencies. You'll constantly be fighting with cabal or whatever, you'll often have to do busy work to keep up with external dependencies, and you'll constantly be tempted to represent a different or additional subset of your invariants in the type system using the newest fancy language extension (which slows your compiles down even further and may have subtle interactions with some of the other extensions you're already using).


> Strong typing means you'll spend your time waiting for the compiler to finish

You can see this is false by running ghc in type-check-only mode ("-fno-code"). Type checking is less than 10% of compile time. If Haskell takes a long time to compile it has nothing to do with types.
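
For example (the module name here is hypothetical):

    $ ghc -fno-code Main.hs   # type-check only; skips code generation and linking
    $ ghc Main.hs             # full compilation, for comparison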

(and you mean "static typing")


I can't try out your suggestion right now because I've basically quit this ecosystem. But I would be careful about making any generalizations if the user can do significant computations at compile time.

> (and you mean "static typing")

Let's not split hairs. And I think "strong" is actually what I want to say (maybe better: "advanced" or "complex"). C also has "static typing" and I explicitly don't mean a simple type system like that.


> I would be careful about making any generalizations if the user can do significant computations at compile time.

Indeed, and with some language extensions type checking may never terminate! But library code only has to be compiled once, and if you write code that takes a long time to type check, that's on you. Bog-standard type-safe Haskell 2010 code type checks in the blink of an eye.

If your argument is "advanced type system features are too seductive for people to avoid" then I'd be more inclined to agree with you. But that's not what you said.

> > (and you mean "static typing")

> Let's not split hairs.

Well, Python has strong typing and doesn't take long to compile. Perhaps you meant "some advanced features of Haskell's type system". If so I'd agree with you. But don't discourage people from Haskell by making false statements about it and then accuse me of splitting hairs.


Ok, I appreciate the refinements, and I think we agree.

> Idempotence means spending days thinking about how to rewrite large parts of your application to get in "stateful" actions (for example, logging?) or how to make your list processing actually fast (hint, it doesn't work with lists).

Idempotence isn't a golden hammer; it's not suitable for every task. It is suitable for one-way purchase code, for increasing robustness over faulty networks, and for initializations. It's not suitable for logging.

It's not suitable for making list processing fast either (that was about the guarantees of list comprehensions; I think you got confused there).

> Idempotence makes it extremely hard to control when something actually happens or how much memory is used.

No it doesn't; that's totally wrong. Idempotence doesn't dictate execution time or memory use. It's not an implementation detail; it's an abstraction level higher than that: a system strategy. This sounds like you're confusing idempotence with lazy evaluation.
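
To illustrate the lazy-evaluation point: memory behaviour in Haskell hinges on strictness, not on whether an operation is idempotent. A minimal sketch using foldl versus foldl' from Data.List:

    import Data.List (foldl')

    lazySum, strictSum :: [Int] -> Int
    lazySum   = foldl  (+) 0   -- may build a long chain of unevaluated thunks
    strictSum = foldl' (+) 0   -- forces the accumulator as it goes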

> Strong typing means you'll spend your time waiting for the compiler to finish,

Compilers are really fast these days and only recompile the changes. "Waiting for the compiler to finish" hasn't been a problem since the '90s, even on larger codebases (100,000+ LOC).

Also: if you can't be bothered to wait for the compiler to check your code, then the entity that ends up doing it will be your users, in production. So maybe you can spend the time you saved not waiting for the compiler answering the support tickets coming in?

> thinking how to unwrap this monad stack in a way that doesn't suck

Subjective, no examples. This is just whining.

> It also means large parts of your app will have very unstable interfaces with way too many dependencies, you'll constantly be fighting with cabal or whatever, you'll often have to do busy work to keep track with external dependencies, you'll constantly be tempted to represent a different or additional subset of your invariants in the type system using the newest fancy language extension (that slows your compiles down even further, and may have subtle interactions with some of the other extensions you're already using)

Unstable interfaces with way too many dependencies are not a language problem; they're a system design problem. It sounds like you have inexperienced software architects making poor decisions.


Wow, either this language has significantly transformed since I last worked in it 3 years ago, or you need a reality check.

Won't reply to everything you've said, just one little thing I remember vividly. I was trying to get 300-400 lines of basic OpenGL setup code to work. Each time I made a little change, it took 10+ seconds to compile. I went with the minimal types needed to interface with the OpenGL library, which was nothing fancy by Haskell standards, just a "straightforward" C wrapper.


GHC is fairly slow for a compiler, and Haskell has no separation of interface and implementation, so small changes can have big consequences.

Template Haskell is the worst for it, but I did have a code generator produce code that took about 25 GB of memory to compile recently, which was very, very slow on a 16 GB machine ;)


It's just a solid mature programming language.

There are plenty of those, though, and most others seem to make both getting started and doing practical things a lot easier.

It's actually very easy to get started with Haskell. Just install the Haskell Platform and you have the environment.

I don't mean installing it, I mean learning the language as a beginner (already a programmer). I've had a couple of goes at it in years past, without success.

If you are into books, I recommend http://haskellbook.com/

After reading it, the language just clicked with me. Although it is aimed at complete beginners, it is a great read nonetheless.

Doing practical stuff like webdev, apps, etc. is a bit trickier, partly because Haskell is not the first language of choice for these kinds of projects and thus lacks much-needed documentation on getting started and on best practices in those areas.


From that article:

"I do believe that there is real value in pursuing functional programming, but it would be irresponsible to exhort everyone to abandon their C++ compilers and start coding in Lisp, Haskell, or, to be blunt, any other fringe language."

Even Carmack takes a subliminal jab at languages on the basis of popularity from time to time, lol.


For me, the reason is rather simple: every single program I wrote in Haskell, once it successfully compiled, ran correctly.

If someone from the 50s told you "no one has ever been able to explain to me why I should use stored-program computers instead of my cables and plugboards", what would your answer be?

Maybe your answer will be that the stored program is an abstraction over the cables and plugboards, so you can be more productive.

Today, there are several additional layers of abstraction on top of that, functional programming is one of them.

