The Cult of the Haskell Programmer (wired.com)
55 points by vector_spaces 16 days ago | 83 comments



Unfortunately not a very well-informed article. Not outright wrong, it just sounds like the 100th retelling of The Iliad.

Haskell might have something to do with spreading FP more widely, but it's hardly the origin of the movement. No mention of ML. No mention of Lisp, APL, Miranda, ...


I get a bit annoyed when people focus too much on the "functional" aspect when discussing Haskell. True, it's a significant part of it, but there are other functional languages going back as far as Lisp. What sets Haskell apart from these languages is its strong typing, lazy evaluation, and algebraic data types (along with pattern-matching syntax to support them), plus many other small language features which together make it a really nice language.


Some other features I really miss from Haskell when programming in other languages are the dedicated function composition and application operators, curried functions by default, and support for partial application of operators.
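For anyone who hasn't used them, a minimal sketch of those three features (the function names are just illustrative):

    import Data.Char (toUpper)

    -- composition (.) and point-free style
    shout :: String -> String
    shout = map toUpper . reverse

    -- curried by default, so partial application is free
    add :: Int -> Int -> Int
    add x y = x + y

    addThree :: Int -> Int
    addThree = add 3

    -- operator sections: partially applied operators
    halve :: [Double] -> [Double]
    halve = map (/ 2)

    -- the dedicated application operator ($) avoids parentheses
    main :: IO ()
    main = putStrLn $ shout "hello"   -- prints "OLLEH"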


What truly sets Haskell apart is how much the language shoehorns you into being an architecture and abstraction astronaut


I program Haskell every day and don't feel like I've been shoehorned into being "an architecture and abstraction astronaut". I do feel I've been gently nudged away from worst practices[1] though.

Could you elaborate more about what you mean, maybe giving some examples?

[1] https://www.haskellforall.com/2016/04/worst-practices-should...


Maybe it's just scarring from working with Haskellers who are writing C++. The project has been delayed probably months due to shitty abstractions. Has never met a self-imposed (generous) deadline. They keep breaking the contract up the stack that I depend on. Why they didn't just choose C for this project is infuriating.

Literally was in a meeting where a new hire was given the green light because "she was a language lawyer but not enough of a language lawyer to get in the way". Fuck that.


Do you mean you worked with people who were Haskell enthusiasts but were programming in C++?


I don't think it "shoehorns" you into that. It certainly doesn't stop you from doing it. That's the thrust of this page:

https://pages.cpsc.ucalgary.ca/~robin/class/449/Evolution.ht...

Where a simple program goes through levels of increasing complexity before arriving at:

    fac n = product [1..n]


That doesn't set Haskell apart at all. Rust is the same. You have to first understand what programming patterns are strongly discouraged because they don't work well with the borrow checker (for good reasons!) and therefore best avoided.


This article doesn't touch on what I consider to be the central feature of Haskell: it is lazily (more precisely non-strictly) evaluated. A lot of the other aspects of it, such as monadic IO, follow from that decision.


How does monadic IO follow from laziness?


If you start with an impure lazy programming language, you will very quickly discover that you may as well make it pure and handle side effects through an IO type. Specifically, it would be far too hard to perform effects in the correct order without an IO monad. I wrote up an explanation:

http://h2.jaguarpaw.co.uk/posts/impure-lazy-language/


If you start with a pure and strict programming language, how will you handle side effects?


Side effects like IO are very difficult to handle in a pure language. I believe Simon Peyton Jones' argument was that, if it's strict, there is too much of a temptation to say, "screw it", and abandon purity.

https://www.microsoft.com/en-us/research/publication/wearing...


Much respect for Simon Peyton Jones, but empirically, that's not true: see Idris, PureScript, Elm, and probably many others.


Those were all designed after monadic IO was introduced in Haskell. The ability to mark IO operations in types (and the do notation) was a game-changer.


The same way as a pure and lazy programming language. That is to say, the IO monad. This abstraction works for both lazy and eager evaluation because it's internally just passing a token from function to function.

When you have something like `doThis >> doThat` (two actions run sequentially), the first action results in a token (RealWorld#) that must be passed into the second action. Even though the evaluation order of the two actions would otherwise be undetermined in a lazy language (but determined in a strict one), the token being passed around imposes a definite order.
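GHC's actual IO type is essentially a function `State# RealWorld -> (# State# RealWorld, a #)`. A hedged, compilable simulation of the idea (MyIO and bind here are stand-ins, not the real machinery):

    data World = World   -- stand-in for GHC's RealWorld#

    newtype MyIO a = MyIO (World -> (World, a))

    -- sequencing threads the token through both actions
    bind :: MyIO a -> (a -> MyIO b) -> MyIO b
    bind (MyIO f) k = MyIO $ \w0 ->
      let (w1, a) = f w0     -- first action consumes w0, yields w1
          MyIO g  = k a
      in g w1                -- second action needs w1: ordering is forced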


That was kind of my point, but thanks for laying it out nicely, I guess :)


In a lazy language, you have significant difficulty ensuring the order of operations, specifically printing a prompt and then reading the answer. Before someone (Moggi?) realized that a category-theoretical monad with specific rules made IO sequencing trivial, the state of the art (IIRC) was world-passing. (With world-passing, you have a value that represents the state of everything outside the program; any IO operation absorbs the current world value and returns a new one. Only the most current value actually works.)

I don't know if it is still the case, but the last time I poked around, in GHC the IO monad was implemented using the older world-passing code.


Take this bit of pseudocode:

    a = readLine()
    b = readLine()
    print(b)
    print(a)
What order are the lines executed in?

In a strict language, the answer is obvious. In a non-strict language, line 4 has a data dependency on line 1 so always executes after it, ditto lines 3 and 2. But how the two groups get interleaved is completely unpredictable, so, if you’re really committed to non-strict evaluation, you need a way to force data dependencies between all four lines such that order of evaluation is forced. Once you achieve that, you have a bunch of ugly-looking machinery that’s peppered across your whole codebase.

Monadic IO (and do-notation in particular) gives us a way to write this sort of code without feeling the urge to gouge our eyes out.
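Concretely, the pseudocode above becomes the following in Haskell, where the do block desugars into nested >>= calls, so each line is data-dependent on the one before it:

    main :: IO ()
    main = do
      a <- getLine
      b <- getLine
      putStrLn b
      putStrLn a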


Laziness requires immutability to work well, and that means you need to represent mutations such as IO operations in an immutable cover like the IO monad.


You've got it backwards. Laziness requires mutation to work well. Laziness is just a thunk (a closure) that can be overwritten by a value at a later time. It is impossible to implement laziness if you cannot mutate memory: if the address of the unevaluated closure and the evaluated value cannot be the same, you have to update references to the closure into references to the value, but you cannot do that if you can't mutate memory!

People always assume garbage collectors in Haskell might somehow be different due to immutability. But due to laziness it works just like Java garbage collection because laziness requires mutability.
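A hedged sketch of that implementation detail, modeling a thunk as a mutable cell (roughly the shape of GHC's call-by-need, minus all the real-world details):

    import Data.IORef

    data Cell a = Thunk (() -> a) | Value a

    -- forcing overwrites the closure with its value, in place, so every
    -- existing reference now sees the result: that's the mutation
    force :: IORef (Cell a) -> IO a
    force ref = do
      cell <- readIORef ref
      case cell of
        Value v -> pure v
        Thunk f -> do
          let v = f ()
          writeIORef ref (Value v)
          pure v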


I don't quite understand. You don't update any references when thunks are evaluated, GHC's runtime does. The runtime shields you from any mutation, doesn't it? (Unless you explicitly ask it to let you mutate a value in memory, of course. But that goes beyond the normal evaluation of thunks.)


Oh yes. I just realized we are talking past each other. I'm talking about the implementation detail of laziness. The mutation is internal and usually invisible. The only knob that I know of to adjust it is -feager-blackholing to control whether evaluation results in two mutations or one (except when switching threads: always two in that case).


Mhm, I see now. Perhaps immutability without the laziness would also lead to monadic IO?


Raku (raku.org) was implemented by a team who were encouraged to cut their teeth on Haskell. The first complete implementation of a Raku parser, Pugs, was written in Haskell. This tradition shows: Raku offers a natural path for Haskell coders to come to Raku and still use many of their familiar Haskell idioms. Here is the Haskell -> Raku guide: https://docs.raku.org/language/haskell-to-p6.

Raku balances imperative, functional and OO styles in a unified syntax. Functional programming primitives include currying, lazy and eager list evaluation, junctions, autothreading and hyperoperators (vector operators).


It’s a great language for sure. I do worry about the bus factor though, especially given they’ve not had a major release since the most prolific contributor left after some community issues.


What were those community issues?


Raku has hit some community issues over the last few years, from the (justifiable) friction over squatting on Perl's versioning back when it was called Perl 6, to some other issues from time to time.

Since the name change to Raku some years ago, the tussling over the future of Perl has died down to nothing.

On the other issues: well, many OSS communities suffer from falling-outs between individuals at times. All the same, the Raku community is healthy and large enough to be making slow but steady progress toward the release of v6.e, with a lot of cool features already in preview.

It's a supportive and active community, and I'd invite you to come over to Discord or IRC and say hi.

here are the stats on git rakudo since 20 April…

_Excluding merges, 7 authors have pushed 133 commits to main and 181 commits to all branches_


If you're learning Haskell and want an exercise that motivates monads and more, this is great:

https://github.com/lsmor/snake-fury

You build a snake game in stages.



I've dabbled with Haskell. I like it a lot, but I want to like it more and I just can't. I don't come from a CS background, I'm pretty good mathematically, and the syntax is just beautiful to me. I remember when I first understood recursion instead of loops, I was amazed.

But the whole no side effects thing... I just can't wrap my head around monads. I understand the benefits of no side effects, I still can't figure out how to write code, test it, improve, you know, iterate over it. It feels to me like you need to have your entire program understood and specified before you can write a single line of code.

That, and it seems like you need to have a complete understanding of the entire standard library just to get anything done. When I go into Haskell community groups to get some help, I'm usually met with walls of things I don't understand, which is why I'm there in the first place.

If I could get over these two roadblocks I really think I'd write everything I could in the language. I hope to do that some day soon, but it's really hard when the community expects you to already get it. I just want to know how to do IO without a complete refactor, and to be able to read some documents on a library and start using it.


> I'm pretty good mathematically, and the syntax is just beautiful to me.

Ironically, I think the syntax probably is causing some of your problems here. Haskell seems to have a problem where the syntax is so dense that simple operations get confusing. And the strong typing pulls a lot of the learning pain up front. You might be better off learning functional programming independently of Haskell's type system to get a feel for how to do it, then come back and get the types correct and learn the advanced tricks.

If you want an unsolicited tip: don't bother trying to "understand monads", there isn't much there to understand - they are what they are (it doesn't get much more basic than the Maybe monad). Look at the code that uses monads - they're an answer to being unable to solve problems the imperative way. The point of a monad is that programmers want to write code in a certain way, and without monads there would be a monad-shaped hole in the code that needed to be filled. If the code were written in a dynamic imperative style, monads wouldn't be so interesting.
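For instance, the Maybe monad is nothing more than short-circuiting on Nothing. A minimal sketch (sumTwo is a made-up example):

    import qualified Data.Map as M

    sumTwo :: M.Map String Int -> Maybe Int
    sumTwo m = do
      x <- M.lookup "a" m   -- a Nothing here short-circuits the whole block
      y <- M.lookup "b" m
      pure (x + y)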

A chunk of monads are just obviously necessary and simplifying once you start using ... whatever the Haskell equivalent of a pipe or threading macro is ... and make the mistake of trying to use functions designed for an imperative style.


Hi, thanks for sharing this. I'm actively trying to understand the roadblocks to onboarding new users to Haskell, and trying to fix them.

If you'd like to share any more about your experience I would really value hearing about it. Perhaps you could link to forum discussions you've been involved in, documentation you found confusing, or programs you've written that you got stuck on.

You're also welcome to contact me privately if you like. My contact details are in my profile.


Monads are easy. They're just a design pattern that provides a formal abstraction over operation sequencing with additional computation. If you just accept them as they are, rather than attempting to grok them by approaching them by analogy ("monads are like burritos"), you will make your life much easier.

That's the secret to Haskell. There's no "getting it". There's just using it.


I TA’ed a Haskell course for four years, and I came to the conclusion that what made it so difficult to learn was the implicit mixup between the newtype syntax/overloading and the purpose of even using them. (The newtype syntax is necessary to differentiate between multiple instances of the typeclass; if you threw away the newtype syntax and overloading, >>= could only mean one thing, making it comprehensible but less useful. Taking both steps at once loses most people.)

Motivating the use of monads in FP requires solving a bunch of problems without them, and realising that your life would be so much easier if you abstracted out the commonality at the function-composition level. But there isn’t time for that.

When you skip the motivation, the explanation becomes “Welcome to FP. We’re gonna jump through these hoops because math is beautiful, also, most of the syntax you’re typing gets thrown away at compilation, good luck and fmap fmap fmap!”


> They're just a design pattern that provides a formal abstraction over operation sequencing with additional computation.

You are right. But to someone who doesn't already know what a monad is, these words don't mean much, because they are very abstract.

I personally started to understand monads when I learned why monads are useful. They are a tool. And if someone just hands you a tool without any explanation what to use the tool for, you will most likely not be able to do something with that tool.

Monads are useful because they make it possible to compose a sequence of computations in such a way that failure/abort anywhere in the sequence is handled gracefully. This is relevant because computer programs can fail all the time, due to bad data or bad network connections. Monads allow for elegant error handling and fall-through.
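A minimal sketch of that graceful fall-through, using Either for the error case (safeDiv and calc are made-up names):

    safeDiv :: Int -> Int -> Either String Int
    safeDiv _ 0 = Left "division by zero"
    safeDiv x y = Right (x `div` y)

    calc :: Int -> Int -> Int -> Either String Int
    calc a b c = do
      r <- safeDiv a b   -- a Left here aborts the rest and propagates
      safeDiv r c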


Try using it for concrete problems you have today, start small. I used it to write a small calorie counting web app and some other tiny projects and picked it up that way.


Well, I wrote a series of functions doing a bunch of mathematical calculations, making lists and the like. But when I went to 1) read a file, 2) output results to a file, I ran into some trouble. This was for a real use case I had. Writing the code to do all the calculation was really enjoyable and pretty easy to pick up. Actually turning it into a useful program, not so much.

I used no libraries and wrote all the math myself, because when I went to investigate libraries, none of them seemed to explain what they do in language someone who didn't already know would understand. When I went on Stack Exchange and other community spaces I got similar results.


It's normal to hit some walls, usually from lack of/difficulty in finding the right kind of intro material. (For example, with all respect to my esteemed sibling poster, their advice is overcomplicated.) I encourage you to try again. Laziness can get in the way of I/O, especially in small interactive programs. And Haskell can be written incrementally, once you know a few tricks. We'll be happy to help with "useful program" tips in chat (http://matrix.to/#/#haskell:matrix.org).


That's exactly why I went through the exercise of translating Kernighan and Plauger's Software Tools in Pascal into Haskell.

https://web.archive.org/web/20171020034308/https://crsr.net/...


For reading line by line from stdin you can do something like

    import qualified Data.Text as T
    import qualified Data.Text.IO as T
    import Data.Set (Set, insert)
    import System.IO (isEOF)

    -- maybe you're reading things into a Set or something named acc
    go :: Set T.Text -> IO (Set T.Text)
    go acc = do
      eof <- isEOF
      if eof then pure acc
      else do
        line <- T.getLine
        T.putStrLn ("LINE IS " <> line)
        go (insert line acc)
There's hGetLine, hPutStrLn and hIsEOF for file handles; use withFile (https://hackage.haskell.org/package/base-4.19.1.0/docs/Syste...), which will close the handle when you exit the block. There are many more ways of doing this stuff of course, but the above will work fine for most things without any special libraries.
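For example, a sketch of the withFile pattern (printAllLines is a made-up name; the handle-based functions are the real ones from System.IO and Data.Text.IO):

    import System.IO (withFile, IOMode (ReadMode), hIsEOF)
    import qualified Data.Text.IO as T

    printAllLines :: FilePath -> IO ()
    printAllLines path = withFile path ReadMode go
      where
        go h = do
          eof <- hIsEOF h
          if eof then pure ()
          else do
            line <- T.hGetLine h
            T.putStrLn line
            go h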

In general, use Data.Text for anything that you don't consider binary data (human-readable text as well as code, config files and such). Use Data.ByteString if you're thinking of it as pure encoded bytes.


> When I went on Stack Exchange and other community spaces I got similar results

Could you post some examples of interactions you had on Stack Exchange? I'm interested in improving the onboarding experience for new users, and knowing what to avoid will be helpful.


That's why I like OCaml and Racket (or Scheme). Functional, but you don't need to jump through hoops to do basic stuff like I/O.


When you grok the IO type ("monads") it stops feeling like jumping through hoops, and then it isn't any harder to write imperative code in it than in any normal, imperative language.

Or at least that was my experience.


Haskellers think I/O is an important effect and want to manage when it's allowed. It's allowed in the IO context, which happens to be the default:

    main = do
      putStrLn "hello world"
      more

    more = putStrLn "hello again, still in IO"
If you want to, you can break the rule and do I/O from non-IO functions, by using trace:

    import Debug.Trace
    
    add :: Int -> Int -> Int
    add a b =
      trace ("adding " ++ show a ++ " and " ++ show b)
      (a + b)
or unsafePerformIO:

    import System.IO.Unsafe
    
    add :: Int -> Int -> Int
    add a b = unsafePerformIO $ do
      appendFile "debug.log" ("adding " ++ show a ++ " and " ++ show b ++ "\n")
      return (a + b)
Not exactly white-hot hoops of fire? :)


Because everything can do IO and launch missiles there :) And there's no way to limit it (except maybe algebraic effects introduced recently? I'm pretty sure nobody uses them anyway)


Shoot me an email; it's on my profile. I can help. I honestly was where you are a few years ago, and worked through it.

A few direct observations based upon what you said:

- Working with monads takes some practice and understanding. There are several things that need to be learned to work with them, none of which are really directly addressed by educational materials. My advice here is to find someone who is willing to help you work through direct, concrete problems you're having, and/or work through a well-designed resource, such as haskellbook.com. You will still likely need another resource to talk to about issues tho.

- Re: entire program understood before writing any code: I am not sure specifically what you mean here, but I think you're talking about how making a change in Haskell can necessitate quite a large refactoring, where in another language it might be a trivial change. This is... indeed sad at times, but otoh, it is inextricably linked with _other_ notions in Haskell: functions should be total, side effects should be isolated, etc. So you _will_ need to do a refactor if you need to introduce side effects to a new part of your program. This is how you get the benefit of having isolated effects.

- Re: understanding the entire standard library: yeah, this is part of the unfortunate reality. Haskell is an active research language, and that research has borne fruit. So practices have changed, which means there are a lot of legacy gotchas around. Not only that, but things that you might do trivially in another language may require using seemingly esoteric functionality in Haskell (Traversable, imo, is the canonical exemplar). Overall this means that learning Haskell is a much larger effort than it could otherwise be.

Thus, I think there is certainly room for a haskell successor that addresses these issues, however, this doesn't exist yet, and haskell is still the best option we have at this point.


You might benefit from this video [1] on the "Functional Core, Imperative Shell" pattern in Ruby (a very object-oriented language), which sounds like one of the things you are having trouble with.

The talk is about segmenting IO and mutation from the rest of your program. The core of your program should be just pure functions and the IO part of your program should be "at the edge".

For example, consider a game loop. Game loops tend to have a while loop like this:

    function loop() {
      while (should_not_close_window) {
        update();  // run the update function, which updates mutable state
        draw();    // draw from the mutable state
      }
    }

If we wanted to segment IO, we would return an immutable type/value from the update function instead like:

    function loop(old_state) {
      // update returns a new state computed from the old state
      const new_state = update(old_state);
      // draw still uses IO, because drawing is inherently imperative
      draw(new_state);

      // this tail call replaces the previous while loop; the program
      // leaves the loop when new_state.should_not_close_window is false
      if (new_state.should_not_close_window) {
        loop(new_state);
      }
    }

The key thing is to turn as much of your program as possible into functions that take a value and return a new value, and let the IO code exist only for operations which cannot be done without IO (like drawing).

There are some cases where you might find yourself wanting to perform IO inside the update call but that can be handled by setting some state (inside your root state object) which the boundary of your program interprets as "do IO at the boundary".

For example, let's add an async handler to the loop (async often being IO like disk writes).

    // function to handle async messages
    function handleAsync(old_state) {
      for (const msg of old_state.msgs) {
        execMsg(msg);  // pattern match on and execute this specific message
      }
      // return the same state, except msgs is now the empty list
      // (because the msgs have all been handled)
      const new_state = remove_msgs(old_state);
      return new_state;
    }

Here the msgs type could be a list of variants like `WriteFile of string | MaximiseScreen`.

The main loop will be modified by putting a call to this handleAsync function after update, like this:

    function loop(old_state) {
      const mid_state = update(old_state);
      const new_state = handle_async(mid_state);
      draw(new_state);

      if (new_state.should_not_close_window) {
        loop(new_state);
      }
    }

The majority of your code should be in the pure update() function which returns a new value (this example doesn't explain anything about pure functions at all) and IO should be minimised.

Generally, don't do IO (except at the boundary/root of your application). When you want to do IO, add some kind of message/datatype value to your state object and do the IO when your state object bubbles up, returning to the root of your application.
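For comparison, a minimal Haskell sketch of the same skeleton (all names made up; draw is the only place IO happens):

    data GameState = GameState { shouldClose :: Bool }

    update :: GameState -> GameState   -- pure: all the real logic goes here
    update s = s

    draw :: GameState -> IO ()         -- IO stays at the edge
    draw _ = pure ()

    loop :: GameState -> IO ()
    loop old = do
      let new = update old
      draw new
      if shouldClose new then pure () else loop new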

The talk likely explains this better but I hope my attempt at giving an explanation was helpful.

[1] https://www.destroyallsoftware.com/talks/boundaries


A bit of an off topic, but I think FP is a flawed concept in practical terms - it focuses on disallowing local mutation, but usually does nothing to prevent global mutation (which usually happens when the program interacts with its environment via network/OS calls etc.). The resulting programs are just as crash prone as if they were written in Java.

Why did nobody go for the actually useful sweet spot - allowing procedural/local mutation, but making the procedures themselves 'pure' and taking care to control side effects on the program scale?

This would allow for things like actually useful time travel debugging and runtime code modification while keeping the performance and familiarity of procedural languages.


> allowing procedural/local mutation, but making the procedures themselves 'pure' and taking care to control side effects on the program scale?

This is what Haskell does by encouraging you to bundle the part of your code that does mutation into, for example, a state monad, and pass the result of the monadic computation to other pure functions.
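A minimal sketch, assuming the mtl package's Control.Monad.State (tick is a made-up example):

    import Control.Monad.State

    -- the "mutation" is confined to this State computation
    tick :: State Int Int
    tick = do
      modify (+ 1)
      get

    -- run the stateful part, then hand the pure result onward
    result :: Int
    result = evalState (tick >> tick >> tick) 0   -- 3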


This would be a great place for any Haskell programmers to insert their own thoughts. I like the look of Haskell myself but I've spent more time with Rust because it seems more practical.


Rust is indeed more practical than Haskell, I wouldn't deny that. But Rust lacks full support for higher-kinded types. Exploring a few type classes (known as traits in Rust) like Applicative, Foldable, Traversable can really open your mind. An exaggerated quip by Haskellers is that there are entire libraries on NPM that simply implement the single function `traverse`. While that's an exaggeration, it really helps to look at a few dozen things the single `traverse` function can do to realize its generality.
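To make that concrete, one of the simplest instances: with the Maybe applicative, traverse turns "parse every element, fail if any parse fails" into a one-liner (parseAll is a made-up name):

    import Text.Read (readMaybe)

    parseAll :: [String] -> Maybe [Int]
    parseAll = traverse readMaybe

    -- parseAll ["1", "2", "3"] == Just [1, 2, 3]
    -- parseAll ["1", "x", "3"] == Nothing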


Well, we have entire libraries in Haskell that implement `traverse` for a specific data type, too. The nice thing in Haskell is that the language is strong enough that we can tell the compiler that all those disparate implementations have something in common.


I just thought of one more thing. I really like -fdefer-type-errors in GHC and wish Rust had something similar.

Both Rust and Haskell are very opinionated in the way the program is organized. Adding a seemingly small feature can often result in larger-than-expected refactoring. (In Haskell, one might need a new monad transformer layer or to convert non-monadic code to monadic; in Rust, one might have to rethink ownership patterns or add Rc in many places.)

In GHC, type errors can be deferred until run time, until the point where the expression containing the type error is forced. If the expression containing the type error is never evaluated, the program works; if it is, you get an exception raised. This combines especially well with laziness: for example, one can find the length of a list even though evaluating the elements of the list would raise runtime exceptions. This is useful when I'm halfway through a large refactoring, when there are plenty of errors remaining, but I want to run my test suite to determine whether the parts without compiler errors actually pass their tests.
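A minimal sketch of what the flag buys you (the flag is real; the example is contrived):

    {-# OPTIONS_GHC -fdefer-type-errors #-}

    broken :: Int
    broken = "oops"   -- a type error, deferred to run time

    main :: IO ()
    main = print (length [broken, broken])
    -- prints 2: length never forces the elements,
    -- so the deferred error is never raised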


As long as you remember to turn it off after you're finished refactoring. :-)

(Yeah, I'm the guy with the co-worker who kept turning off -Wall in C++ because "nobody has time for that".)


If you like another language that's a bit like Haskell-but-more-practical, give OCaml a try. They even have a for-loop in OCaml!


There's... forM_ in Haskell. That's basically a for loop, right?


Yes, you can implement your own loops in Haskell and OCaml (and Scheme and even the GCC-variant of C).

However, I meant specifically that OCaml is so 'pragmatic' that they have a for-loop as a built-in syntactic element in the language. They also have mutable variables built-in. In eg Haskell those are 'only' available as part of the standard library.


Sorry, bad attempt at humor. I knew what you meant, I was just poking fun at forM_ for being bizarre and unergonomic


What do you find bizarre and unergonomic about it?


If you're quite used to Haskell, it's nbd. If you're coming from another language with for loops and someone tells you it's a for loop, you might be surprised that you need to use $ and the loop variable comes after the sequence

    for x in range(10):
        print(x)

    forM_ [0..10] $ \x ->
      putStrLn (show x)


You don't need the $ anymore with `BlockArguments`.
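A minimal sketch with that extension enabled:

    {-# LANGUAGE BlockArguments #-}

    import Control.Monad (forM_)

    main :: IO ()
    main = forM_ [0 .. 10] \x ->
      print x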

But overall, Haskell isn't really any worse here than Rust's for-each, or even C's weird for-loop (where you just have to rote-learn what the three parts in the header of the loop do). I agree it's worse than Python's or Rust's vanilla for-loop.

I mostly like Python's syntax; I just wish they would declare their indentation syntax to be sugar for a curly-brace syntax (like Haskell does), and make their statements and blocks give values, like Haskell and Rust do.

(Of course, I have some other things I don't like in Python. But its syntax is by and large fine.)


I'm convinced the people who came up with Haskell syntax were on some sort of bath salts.


I think what you're complaining about is not syntax, but lack of syntax. There's nothing syntactically special about this "for loop"! What's the benefit of this, then? Firstly, less to learn. If you already understand $ and lambdas, then you understand this structure, and can reuse your knowledge in many places, for example

    withTempFile "foo" $ \tmpFile -> do
       <do something with tmpFile>

    flip fmap myList $ \e -> do
      <do something with each element e>
And you can even create your own constructs. You're not limited to the ones that are built in.

(The [0..10] is actually syntax, and I think that's quite unfortunate. `range 0 10` would have been just as good.)


> (The [0..10] is actually syntax, and I think that's quite unfortunate. `range 0 10` would have been just as good.)

Well, Haskell is in fact flexible enough that you can define that range function.

But one reason to make it syntax is that you can define things like [0..], which would be somewhere between awkward and impossible to do with a single `range` function in Haskell.
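A hedged sketch of such a function, built from ordinary list combinators:

    range :: Int -> Int -> [Int]
    range lo hi = takeWhile (<= hi) (iterate (+ 1) lo)

    -- range 0 10 == [0,1,2,3,4,5,6,7,8,9,10]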



The reason it looks the way it does is that forM_ is just a function like any other, not a keyword in a dedicated syntactic construct.


Yep, and then there's the $, \ and putStrLn. As a non-Haskell programmer, none of those mean anything to me.


$ is short for “put everything to the right inside parentheses” and \x -> y defines an anonymous function.

Why do you expect to understand the syntax of a language you never studied?


> $ is short for “put everything to the right inside parentheses”

There's, you know, also the actual parentheses that everyone already knows.

> Why do you expect to understand the syntax of a language you never studied?

Because most languages have a syntax that you can grasp almost immediately, since they conform to a common syntax, rather than invent everything all over again. Then you can focus on the actual features of the language and are not bogged down by syntax.


It doesn't take more than two minutes to understand what $ and \x -> y do, and if you start writing out those parentheses, you will also realize why $ is part of idiomatic Haskell.


You know, perhaps Haskell isn't the right language for you. That's fine. Why should it conform to your syntactic preferences?


> There's, you know, also the actual parentheses that everyone already knows.

Those are also available, and are legal syntax in Haskell.

> Because most languages have a syntax that you can grasp almost immediately, since they conform to a common syntax, rather than invent everything all over again.

I hope you don't count C's `for(something, something; something; something) {` as part of that intuitive bunch?

Btw, Haskell didn't re-invent the wheel for most of its syntax: they stuck to the common and already established syntax of the ML family of languages. ML is from 1973. (For comparison, Wikipedia says C is from 1972.)


> Those are also available, and are legal syntax in Haskell.

I don't especially like the idea of making multiple notations for the same thing, to be honest.

> I hope you don't count C's `for(something, something; something; something) {` as part of that intuitive bunch?

Not my favorite, but at least widely used. I find the style in for example Rust the most easy on the eye.

> Btw, Haskell didn't re-invent the wheel for most of its syntax

Yeah I actually suspected this was the case.


> I don't especially like the idea of making multiple notations for the same thing, to be honest.

I can appreciate that sentiment, but I'm afraid the designers of Haskell had a different approach there.

There are often a few different ways to express something. But at least they can often be understood as thin syntactic sugaring of one into the other. It's not like C++ or Perl (or even Ruby with its different types of closures) where you have tons of overlapping ways to express something, all subtly different.

> I find the style in for example Rust the most easy on the eye.

I'm not sure about easy-on-the-eye, but I find Rust mostly quite bearable, too.


> > I don't especially like the idea of making multiple notations for the same thing, to be honest.

> I can appreciate that sentiment, but I'm afraid the designers of Haskell had a different approach there.

It's also very difficult to achieve in practice. According to the Zen of Python "There should be one-- and preferably only one --obvious way to do it" but I don't think even Python lives up to that.


Yes, it's only aspirational in Python, too.

Your profile says

> Emacs/Lisp forever

I think of Haskell as the declarative statically typed lisp I’ve always wanted.


then you've never really wanted a lisp...


Why not? If your comparison is eg C, Haskell and Lisp share a lot in common.


Could you elaborate on the "more practical" bit?


More jobs I could get with it, more open source tools or projects I could work on with it.



