
It's not nonsense, though. But IMO it's not the language itself, but the libraries that are impractical. The naming is the worst. If Haskell's authors really wished to create a practical language, they would never have allowed things like ">>=", "\", "." or "!!" into the standard libraries. It's not the first programming language to use lots of arbitrary signs, but it's certainly the worst I've ever seen. The only conclusion I can draw from all this is that Haskell's written by people who love memorization and don't care much about explaining to others.


Have you ever tried doing any non-trivial work in Haskell? I've used it professionally, and spoken with a lot of other people who have, and exactly zero people have expressed an experience that the naming of operators has caused the language to be less practical for them. Some think it's less aesthetically pleasing than it could be, but that's not a practical concern.

Furthermore, the specific operators you cite are confusing examples, given that ">>=" can be elided using do notation, "\" looks enough like a lambda that it took me a while to remember what you were referring to, and "!!" is rarely used. A slightly more reasonable argument for your case would be to cite the operators used in the lens library, but those are all aliases for regularly-named functions.

There are some arguments against Haskell in terms of practicality, but this is not one of them.
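To make the do-notation point concrete, here's a small sketch (the `halve` functions are made up for illustration): the two definitions below are the same computation, written once with explicit (>>=) and once with do notation, which GHC desugars into the first form.

```haskell
-- Two spellings of the same computation in the Maybe monad:
-- explicit (>>=), and do notation (which desugars to it).
halveBind :: Int -> Maybe Int
halveBind n =
  Just n >>= \x ->
    if even x then Just (x `div` 2) else Nothing

halveDo :: Int -> Maybe Int
halveDo n = do
  x <- Just n
  if even x then Just (x `div` 2) else Nothing
```

So anyone bothered by (>>=) can simply write the second form and never type the operator at all.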


> I've used it professionally, and spoken with a lot of other people who have, and exactly zero people have expressed an experience that the naming of operators has caused the language to be less practical for them.

This is a pretty self-selecting set. One of the reasons I don't use Haskell professionally is that I hate its operator-naming culture, and Haskell isn't big enough in industry yet that it has lots of involuntary users (like C++ or PHP or Java does).


As I said, you can definitely raise aesthetic objections. You're free to hate operators all you want, though it does strike me as a peculiar basis on which to decide what tool to use. My point was whether there are actual practical implications, though, and I don't think the self-selected nature of Haskell professionals has any bearing on that. I frequently hear many other objections from people working in the language, so it's not a case of rose-colored glasses in which we think everything about the language is fine. It's simply a case of operator naming not being very important, from a practical perspective. To the extent that it is important, most people who actually use the language think it's a benefit, not a detriment.


Well right, but what I'm saying is that the people who actually use the language are the kind of people who like the operators. I know Latin, and a language whose functions were named in Latin would not be a practical impediment to me, but I wouldn't say that such a language would be practical in general. Haskell's operators aren't on that level, but I do think if you introduced it in a company where people were hired to write Python/Ruby/C/whatever, there'd be people who found them a practical problem.


Again, whether or not people like the operators is irrelevant. Aesthetic opinions about syntax don't have a material effect on productivity. As I said, there are people using the language professionally who have an aesthetic distaste for the use of operators in, e.g., the lens library (i.e. your purported selection effect is empirically false). Those people will still admit that this has little practical effect, and certainly not enough to make a language choice based on it.

If there were a language written in Latin, and it was useful for some reason, people would quickly write aliases for all the Latin functions, named in English. This is very easy to do in Haskell: if your argument held, you would expect to see aliases for all of the operators. With a few exceptions (again, lens), this isn't the case.

You're choosing where to live based on the color of the neighbor's bikeshed, against evidence from previous tenants that the current color is quite pleasant.


It's not about liking, it's about looking at a piece of code and going "I understand all that" vs. "What do all these signs and single-letter variables mean?". Naming matters.


Yes, it's very important that people who work in the language be able to read and understand it easily. It is not, however, important that people who do not work in the language be able to do so. The most persistent critique of Haskell is that it is hard to understand for people who haven't learned it. This is as reasonable as critiquing Java because it is hard to understand for people who do not know how to program. You're welcome to claim that Haskell has a larger barrier to entry, but that is a fundamentally different claim from the claim that operators make the language harder to read for practitioners. I have been attempting to refute the latter claim, using my experience discussing the issue with said practitioners. If you want to discuss the former claim, I'm happy to do that as well, but you have to acknowledge that it is a separate claim and be clear about what it is you actually mean.


>Yes, it's very important that people who work in the language be able to read and understand it easily. It is not, however, important that people who do not work in the language be able to do so.

Well, I guess we disagree there. And remember that some people may only work a little bit in a language, and it matters, to them, how much time they need to get into the groove with a language. Once you work enough in a language you get used to all the weirdness anyway, which is how you end up with people claiming C++ and PHP are actually kinda neat languages. The only good judges of language syntax are people who have never seen that syntax before.


I think the exact opposite: the only good judges of syntax are people who understand the semantics deeply enough to understand whether the syntax accurately reflects them.

It's all well and good to say that people should be able to bounce around between languages within a paradigm. A Rubyist should be able to read Python, Perl, and PHP, sure. A Java programmer should be able to read C++ and C#. Claiming that people should be able to read any programming language, though, means that languages aren't allowed to explore significantly different semantics. A JavaScript programmer isn't going to be able to pick up something written in Prolog or Forth and quickly understand exactly what's going on, and that's as it should be. The whole point of having multiple paradigms is that they can express different ideas and ways of approaching problems.

So, I agree that an SML or OCaml programmer should be able to "get into the groove" with Haskell quickly. In my experience, this is usually the case.


It's not just syntax. If the semantics and design patterns are different, then no matter if you call it (>>=) or bind, you won't be any closer to understanding the code; this is the "people who do not work in the language" part.

Would you expect to understand Forth or other concatenative languages even if all the operators are named?

The new set of semantics and patterns is the part that takes time to understand. The syntax is not really the actual stumbling block.


>"!!" is rarely used

We just finished the lists chapter in Haskell Programming and we never even mentioned that operator. It's a silly function ^_^
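It's also partial: (!!) crashes on an out-of-range index, and a total replacement is only a few lines, which is one more reason it's easy to avoid. A quick sketch (the name `safeIndex` is made up):

```haskell
-- A total alternative to (!!): returns Nothing instead of crashing
-- on negative or out-of-range indices.
safeIndex :: [a] -> Int -> Maybe a
safeIndex xs n
  | n < 0     = Nothing
  | otherwise = case drop n xs of
      (y:_) -> Just y
      []    -> Nothing
```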


There's a nice writeup describing the simplicity of Haskell syntax. [0]

  Haskell has 21 reserved keywords that cannot be used
  as names for values or types. This is a relatively low
  number (Erlang has 28, OCaml has 48, Java has 50, C++
  has 63).

In particular, some of the operators you mention represent such fundamental and common tasks that it makes sense to have a shortened form. I'm in agreement that descriptive names are preferable in general, but not always. Haskell syntax is quite simple and predictable (I say that as a former Rubyist); it only appears strange in the beginning because there are so many new concepts to learn.

[0] https://github.com/kqr/gists/blob/master/articles/simple-syn...


I'm learning Haskell right now for fun and don't personally have any opinions on the naming of operators. However, I will say that "." is as far as I can tell a practically named one, because it is just composition which in mathematical notation is also a dot.


Many confusing operators certainly exist (especially in Control.Arrow), but the ones you have mentioned are the worst possible examples you could give. All of those have a clear meaning, and are extremely useful.

    f . g y . h z a . f'
is certainly much clearer than

    f `compose` g y `compose` h z a `compose` f'
The same goes for (>>=)


It's clearer to me, but I still have no idea what's going on. This, on the other hand, is a syntax made by people who thought it through:

  with Ada.Text_IO; use Ada.Text_IO;
  procedure Hello is
  begin
    Put_Line ("Hello, world!");
  end Hello;
Not that it's of comparable complexity, but even though I know what the following code does, I still can't understand it. It's basically obfuscated:

  module Main where
  import Control.Monad
  import Control.Concurrent
  import Control.Concurrent.STM
 
  main = do shared <- atomically $ newTVar 0
            before <- atomRead shared
            putStrLn $ "Before: " ++ show before
            forkIO $ 25 `timesDo` (dispVar shared >> milliSleep 20)
            forkIO $ 10 `timesDo` (appV ((+) 2) shared >> milliSleep 50)
            forkIO $ 20 `timesDo` (appV pred shared >> milliSleep 25)
            milliSleep 800
            after <- atomRead shared
            putStrLn $ "After: " ++ show after
   where timesDo = replicateM_
         milliSleep = threadDelay . (*) 1000
   
  atomRead = atomically . readTVar
  dispVar x = atomRead x >>= print
  appV fn x = atomically $ readTVar x >>= writeTVar x . fn


What, a Haskell program that launches 3 threads and coordinates them with inter-thread communication is harder to read than an Ada Hello World? Who would have thought!

The equivalent in Haskell to your Ada program is:

main = putStrLn "Hello, World!"

I won't even take the time to write the Ada equivalent of the Haskell program you posted.


I'm no good at Ada, but I like the way its syntax tries to guide you through reading the program. Now for something of comparable complexity in Clojure:

  (def x (ref 1))

  (defn increment [i]
    (if (> i 0) 
      (do
        (dosync
          (alter x inc)
        )
        (Thread/sleep 1)
        (increment (- i 1))
      )
    )
  )

  (defn decrement [i]
    (if (> i 0)
      (do
        (dosync
          (alter x dec)
        )
        (Thread/sleep 1)
        (decrement (- i 1))
      )
    )
  )

  (defn printref [i]
    (if (> i 0) 
      (do
        (dosync
          (println (format "in printref %d" @x))
        )
        (Thread/sleep 1)
        (printref (- i 1))
      )
    )
  )

  (future
    (increment 10)
  )

  (future
    (printref 15)
  ) 

  (future
    (decrement 10)
  )
Isn't this much nicer? It's not immediately obvious that x is an atomic variable, but aside from that it's a lot better than the Haskell example. It took me on the order of 2-3 hours to go from never having touched a Lisp to writing this.


Here is the Haskell equivalent of your Clojure code:

    import Data.IORef
    import Control.Concurrent

    increment _ 0 = return ()
    increment x i = do
        alter x succ
        threadDelay 1000
        increment x (i - 1)

    decrement _ 0 = return ()
    decrement x i = do
        alter x pred
        threadDelay 1000
        decrement x (i - 1)

    printref _ 0 = return ()
    printref x i = do
        val <- readIORef x
        putStrLn ("in printref " ++ (show val))
        threadDelay 1000
        printref x (i - 1)

    main = do
        x <- newIORef 1

        forkIO (increment x 10)

        forkIO (printref x 15)

        forkIO (decrement x 10)

        threadDelay 100000

    -- This is just a helper to more closely match the clojure
    alter x fn = atomicModifyIORef' x (\y -> (fn y, ()))
I'd argue the Haskell is even nicer.


No this is not much nicer.


It's absolutely shocking that you're able to easily understand a literal hello world example in a language in the dominant paradigm, but not able to easily understand a significantly more complex example in a language from a different paradigm that you haven't taken the time to learn. Absolutely shocking.


See my reply to marcosdumay. It's just altering an atomic variable. It doesn't need to look so cryptic.


There are certainly times when operators seem to be overused (I'm thinking of Lens in particular). But I think this criticism is overstated. One of the things that makes it more desirable to use operators rather than named functions is that due to type classes, the meaning of the operators will change with what context they're being used in. Another is that sometimes there really isn't a great name to be found; an example is the `<* >` operator in the Applicative class (space put in for formatting). Once one becomes familiar with the operators, it's much easier to read something like `doThing1 >> doThing2 >> doThing3` than `sequence doThing1 (sequence doThing2 doThing3)`.
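To illustrate the context-dependence: (>>) is a Monad class method, so the very same operator sequences actions in IO, short-circuits in Maybe, and repeats its right side in the list monad. A small sketch:

```haskell
-- One operator, three meanings, all chosen by the monad in use.
maybeChain :: Maybe Int
maybeChain = Just 1 >> Just 2     -- plain sequencing: Just 2

maybeStop :: Maybe Int
maybeStop = Nothing >> Just 2     -- short-circuits: Nothing

listChain :: [Int]
listChain = [1, 2] >> [10, 20]    -- right side runs once per left element
```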

I'm not sure why you chose the two examples you did. The Haskell version of the Ada program you wrote is as simple as can be:

    hello = putStrLn "Hello, world!"
While I'm sure an Ada program that did what the code you pasted does would be of comparable complexity to the Haskell version. And being familiar with how monadic functions work lets me guess pretty well what the code does, despite having very little knowledge of the libraries involved.

    main = do -- atomically create a new transactional variable init'd to 0
              shared <- atomically $ newTVar 0
              -- atomically read the variable and print it
              before <- atomRead shared
              putStrLn $ "Before: " ++ show before
              -- Fork a thread where we show the variable and sleep 25 times
              forkIO $ 25 `timesDo` (dispVar shared >> milliSleep 20)
              -- Fork a thread where we add 2 to the variable and sleep 10 times
              forkIO $ 10 `timesDo` (appV ((+) 2) shared >> milliSleep 50)
              -- Fork a thread where we subtract 1 from the variable and sleep 20 times
              forkIO $ 20 `timesDo` (appV pred shared >> milliSleep 25)
              -- sleep 800 ms in the main thread
              milliSleep 800
              -- read the variable and print it
              after <- atomRead shared
              putStrLn $ "After: " ++ show after
     where -- define some convenience functions
           timesDo = replicateM_
           milliSleep = threadDelay . (*) 1000
   
    atomRead = atomically . readTVar -- perform an atomic read
    dispVar x = atomRead x >>= print -- read then print what was read
    appV fn x = atomically $ readTVar x >>= writeTVar x . fn -- read, apply a function and then write


"One of the things that makes it more desirable to use operators rather than named functions is that due to type classes, the meaning of the operators will change with what context they're being used in."

I don't understand what distinction you're making here. Named functions can also be members of typeclasses (and frequently are - return, mempty...)
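For instance, mempty is a plain named function in the Monoid class, and it is every bit as context-dependent as any operator:

```haskell
-- mempty's meaning is chosen by the expected type, exactly like an
-- operator whose meaning comes from a type class.
emptyInts :: [Int]
emptyInts = mempty                  -- [] for lists

emptyText :: String
emptyText = mempty                  -- "" for String

combined :: String
combined = "ab" <> mempty <> "cd"   -- mempty is the identity for (<>)
```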


Personally, I believe Haskell syntax is a work of art. Learning how it fits together with currying is extremely satisfying. Also, the meaning of all the operators you mention, with the exception of (>>=), is immediately clear from their types.

    (.) :: (b -> c) -> (a -> b) -> (a -> c)
It is clear that it takes two functions, and chains them together to create a new function

    f . g = \x -> f (g x)
So

    double (addOne 3)
is equivalent to

    (double . addOne) 3
Similarly, (!!) has type

    (!!) :: [a] -> Int -> a
It is immediately obvious from the type that it accesses the object at a particular index in a list, so

   ['a', 'b', 'c'] !! 1 == 'b'
Also, the syntax complements currying extremely well

    f g h x
is equivalent to

   (((f g) h) x)

This allows for some very neat things.

   addOne :: Int -> Int 
   -- addOne 3 == 4

   map :: (a -> b) -> ([a] -> [b]) -- which is equivalent to '(a -> b) -> [a] -> [b]'
map is an extremely neat function, and is used in many languages. It applies a function to every element of a list, producing a new list.

Now, there are two ways to use map

    map addOne [1, 2, 3, 4] == [2, 3, 4, 5]
However, the above is equivalent to

   (map addOne) [1, 2, 3, 4]
From this we see there is another way to use map

   addOneList :: [Int] -> [Int]
   addOneList = map addOne

   -- addOneList [1, 2, 3, 4] = [2, 3, 4, 5]

Note how map was partially applied. In Haskell, map can be seen as doing two things. One is taking a function and a list, and applying the function to every element in it to produce a new list. However, you can also see map as a function transformer, taking an ordinary function, and converting it into a function that works on lists!



Haskell's record/struct syntax is probably the worst of any language.

C style:

    a.b.c = 1;
Haskell:

    let b' = b a
        c' = 1 + c b'
        b'' = b' { c = c' }
    in c' { b = b'' }
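To make that concrete with actual (made-up) record types, the C-style `a.b.c = 1` spelled out with nothing but plain record-update syntax looks like this:

```haskell
-- Hypothetical nested records, to show the update boilerplate in full.
data C = C { cVal :: Int } deriving (Show, Eq)
data B = B { bC :: C }     deriving (Show, Eq)
data A = A { aB :: B }     deriving (Show, Eq)

-- The C-style "a.b.c = 1" with vanilla record updates:
setC :: A -> A
setC a = a { aB = (aB a) { bC = (bC (aB a)) { cVal = 1 } } }
```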


It is bad, but couldn't you also write:

    a { b { c = 1 } }


It's (b . c .~ 1) with lens. Or a { b = (b a) { c = 1 } } without.


lens attempts to solve that. Though it does so at the cost of unreadable types.


What exactly do you feel is obfuscated?

There's $, backticks, >>=, >>, ++, ., and <-.

All of these are very frequent things you use in Haskell, and deserve short notation. Besides >>, they're all learned in day 0-1 of Haskell.
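For reference, each of those fits in a line or two (the definitions below are contrived examples, not taken from the quoted program):

```haskell
composed :: Int
composed = negate . abs $ (-3)    -- (.) composes functions; ($) drops parentheses

joined :: String
joined = "foo" ++ "bar"           -- (++) appends lists (Strings are lists of Char)

infixed :: Int
infixed = 10 `div` 3              -- backticks turn any named function into an operator

chained :: Maybe Int
chained = Just 1 >> Just 2 >>= \x -> Just (x + 1)  -- (>>) sequences, (>>=) binds

action :: IO ()
action = do
  v <- return composed            -- (<-) binds a result inside do notation
  print v
```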


Are you actually going to respond to the fact that you compared Hello world in Ada to something significantly more complex in Haskell?

How could you ever think that is a fair comparison?


In C or C++, there are even more unfamiliar operators:

  >>= += <<= -= *= ...
  ?: ! ~ ^ % ...


Preaching to the choir. I hate those languages. If I had to choose between coding C or Haskell for the rest of my life I would definitely choose Haskell.


Would you say these languages are impractical because of their stylistic choice to use operators?

I think people have a preconception that "Haskell is not practical" -- and then anything that they do not find appealing becomes a source of its impracticality. Despite the fact that the same traits are shared with vastly practical languages.


Yes, I do find their use of operators impractical. In particular the pointer syntax drives me up the wall; I never seem to get it right. As does the printf syntax, come to think of it. Why are integers referred to with "%d"? I can't come up with a reason. And dynamically allocating function pointers on the heap (calloc) requires completely batshit insane keyboard manoeuvres. It would have been a lot better if it were typechecked, which is why I would prefer Haskell.

I can vaguely sense somewhere in my memory that <<= and its ilk are bit shift operators. How am I supposed to know that? Fuck it. It's bad library design.

I like Python. How do you append to aList? Answer: aList.append(anElement). Scala, on the other hand, seems to believe that ":+" is an acceptable append syntax. The compiler won't let me do stupid stuff with it, which is good, but I would prefer it if errors were caught by English proficiency instead of rote knowledge/the compiler. I think that's a very powerful distinction.


You would prefer errors to be caught by a human needing to exercise their English-proficiency over errors being caught by the compiler? Seriously? That can't actually be what you mean.


Apples and oranges. In situations where Haskell can do the job well enough, I would recommend Haskell over C, because it's just more secure. However, they are still completely different beasts. You use different tools for different purposes. Try Haskell in an embedded environment, for example.



