Haskell ArgumentDo Proposal (haskell.org)
52 points by adamnemecek on July 4, 2016 | 26 comments



Hmm, this doesn't sit well with me.

    atomically do
      v <- readTVar tv
      writeTVar tv $! v + 1

    withForeignPtr fptr \ptr -> c_memcpy buf ptr size
Since whitespace is used for function application ("f x"), code like this would require mental effort on my part to realise that, for example, "withForeignPtr" isn't being called with 7 arguments.

I really don't see why so much effort is spent on infix syntax, precedence rules, fully-applied "$", etc. (not to mention the associated bikeshedding), all to remove the requirement of a few parentheses.

Whilst I can perhaps entertain the notion that Lisp-style, fully-parenthesised syntax might annoy some people (personally I find it quite pleasing), I think this level of syntax-fiddling is a solution worse than the problem.

Keeping track of precedence rules is the kind of task computers are good at but humans not so much; I don't know why we invent these schemes to impose on ourselves, when we could just wrap a couple of parens around the tricky bits. (In fact, I do this; but hlint tells me off!)


People spend time on this because Haskell syntax is deeply unpleasant to a lot of people, including myself. I love the capabilities and expressiveness of Haskell, but I find the syntax annoying, and it hasn't gotten better even coming up on three years of active Haskell programming.

Just in the last week, I've twice forgotten the $ before a do block or a final-argument lambda.

I think it's worth solving these (superficial?) syntax issues even if they only affect the comfort of a third of all Haskell programmers.


> People spend time on this because Haskell syntax is deeply unpleasant to a lot of people, including myself.

I can understand that the syntax may be unpleasant, but I don't think having to memorise even more precedence rules makes a language more pleasant.

When I read and write Haskell, my default "mode" is to see whitespace as function application, i.e. if I see "A B" then I assume "A" is being applied to "B". Hacks like "A $ B" interfere with this, since I have to mentally back-track and re-parse them as "($) A B".
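
For reference, "$" isn't special syntax at all; it's an ordinary library function, defined in the Prelude as:

    infixr 0 $

    ($) :: (a -> b) -> a -> b
    f $ x = f x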

> Just in the last week, I've twice forgotten the $ before a do block or a final-argument lambda.

This is exactly the kind of thing I was lamenting. Rather than (ab)using "$", why not do the more natural thing, which every other language does, and use parentheses for grouping?

Instead of "forgetting the $" in

    atomically do
          v <- readTVar tv
          writeTVar tv $! v + 1
Why not use parentheses for the job they're designed for, and write:

    atomically (do
          v <- readTVar tv
          writeTVar tv $! v + 1)
Likewise, for

    withForeignPtr fptr \ptr -> c_memcpy buf ptr size
You can use parentheses and never "forget the $"

    withForeignPtr fptr (\ptr -> c_memcpy buf ptr size)
Parentheses are a universally understood syntax for grouping, they're supported in editors (e.g. finding matching pairs, checking if they're balanced, etc.), they always work in the same way, have no interference with other constructs or edge-cases, etc.

Don't get me wrong, the "$" function is really useful, for example in "map ($ arg) [func1, func2, func3]", but I don't see the point of abusing it to avoid parentheses. I do sometimes use it myself, but whenever there are multiple infix functions around, I'll tend to use "redundant" parentheses for grouping.
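
A concrete toy example of that legitimate use, in GHCi:

    -- Apply each function in the list to the same argument 10:
    > map ($ 10) [(+ 1), (* 2), subtract 3]
    [11,20,7]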


Syntactically significant indentation doesn't mix well with parentheses. (That's why Python doesn't have multiline lambda, for example.) And Haskell's rules for indentation are much more complicated than Python's, so the interactions with parentheses are even worse. I'm not sure there's a single person who really understands all the edge cases. It's a complete clusterfuck; they'd better start afresh. C-style parentheses and curly braces are so much better for parsing, copy-paste, automatically generated code, etc.


> C-style parentheses and curly braces are so much better for parsing, copy-paste, automatically generated code, etc.

Firstly, to get this out of the way, you shouldn't be using significant indentation when generating code unless you have a good reason; use parentheses, curly braces and semicolons instead https://en.wikibooks.org/wiki/Haskell/Indentation#Explicit_c...
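
For illustration, explicit layout looks like this (a trivial made-up example; it's equivalent to the indented version and safe to generate mechanically):

    main :: IO ()
    main = do { putStrLn "one"
              ; putStrLn "two"
              ; putStrLn "three"
              }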

Now, I wouldn't say parentheses and offside rules "don't mix well", but I'll grant that there are interactions to keep in mind.

It's true that Haskell's indentation edge-cases are unfathomable, although thankfully I've never run into any. I have no strong preferences either way regarding significant whitespace, as long as there's an "escape hatch" for code generation (i.e. the braces and semicolons I linked to above).

Regarding Python, its lambdas aren't "single line", they're "single expression"; that expression can cover as many lines as you like (e.g. see my SO answer http://programmers.stackexchange.com/a/252546/112115 ).

When learning Haskell, after being a long time Python programmer, I used to think Haskell's indentation rules were complicated. These days, I find it awkward to indent Python code, and end up second-guessing the interpreter a lot.

Maybe that's because I'm fond of using 2D code layout and vertical alignment to indicate relationships between lines, which is quite natural in Haskell. Python's indentation seems to be limited to counting blocks, so it's hit or miss whether adding extra spaces to align things will cause it to choke or not.
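
For example, guard alignment like this (a made-up snippet) is idiomatic Haskell; Python's grammar has no comparable notion of alignment, only block depth:

    grade :: Int -> String
    grade score | score >= 90 = "A"
                | score >= 80 = "B"
                | otherwise   = "C"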


I wonder if anyone is working on a full rework of Haskell's syntax, à la Facebook's Reason for OCaml? I too am a big fan of the semantics but not so much the syntax of Haskell, and it would be interesting to see what a more... current-day-mainstream... syntax (a Rust-like braces-and-semicolons, expression-based style, perhaps) would do for it.


This is what I'm doing with plastic [1], though it also has some semantic differences: strict instead of lazy, monads become objects.

[1] https://github.com/DanielWaterworth/plastic


That doesn't compile to Haskell.


I'm surprised that you expected it to.

edit: Having looked at Reason more closely, I now see where the confusion came from. Sorry, I didn't mean to deceive you.


There was Liskell, but unfortunately it seems to be unmaintained :(


This would be absolutely amazing actually.


Wait till you see a Rust-style datatype polymorphic over 4 parameters, and use that type over and over again. Haskell syntax may not make much sense if you're coming from an Algol-inspired syntax, but it does make sense for expressing Haskell's own patterns; otherwise you end up with Scala type-lambda madness.

Both PureScript and Agda have this.
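
To make that concrete, a hypothetical sketch (all names invented): a 4-parameter type whose constructor partially applies with no extra syntax, which is exactly where Scala needs type lambdas and angle-bracket syntaxes have no spelling at all:

    -- A state-and-environment transformer over four parameters:
    newtype Pipeline env err st a = Pipeline (env -> st -> Either err (a, st))

    -- Applying the first three parameters needs no special syntax;
    -- in Scala, this instance head would require a type lambda:
    instance Functor (Pipeline env err st) where
      fmap f (Pipeline g) = Pipeline (\env st -> fmap (\(a, s) -> (f a, s)) (g env st))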


I'm curious, what sorts of issues arise with complex types that Haskell-like syntax handles better? (Genuinely curious here -- I've written a little Haskell but am by no means an expert...)

IMHO a Rust-style type `Option<HashMap<String, Vec<u32>>>` is a little noisier than the Haskell-style type `Maybe (Map String [Int])`, but not fatally so. Maybe there are much worse cases though?


The big one is `->`; all the 'template argument'-inspired syntaxes for this are terrible. Once you realize you want operators (and polymorphic operators, at that!) in your types, the `<u32>` syntax falls apart.

I am not a fan of C#'s `Func<a,b>` at all.
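
To illustrate the contrast (the C# spelling below is approximate): in Haskell the arrow is itself an infix type operator, so higher-order signatures nest without ceremony:

    -- Haskell:
    compose :: (b -> c) -> (a -> b) -> a -> c
    compose f g x = f (g x)

    -- C# (roughly):
    --   Func<A, C> Compose<A, B, C>(Func<B, C> f, Func<A, B> g)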



The problem is that ideas like this make the syntax even less readable.


It increases the regularity of the syntax (infix application is no longer a special case), and it reduces $ noise, which does nothing for readability anyway.

I have been using Haskell for around a decade and hate every bit of infix-operator spam I have to throw in to appease this syntactic irregularity.
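
Concretely, this is the kind of noise I mean. A sketch (untested, since the proposed extension isn't implemented yet):

    import System.IO

    -- Today, the lambda needs a $ (or parentheses):
    today :: IO ()
    today = withFile "log.txt" WriteMode $ \h ->
      hPutStrLn h "hello"

    -- Under the proposal, the trailing $ disappears:
    proposed :: IO ()
    proposed = withFile "log.txt" WriteMode \h ->
      hPutStrLn h "hello"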

Other than this, I quite like Haskell syntax, but I remember initially hating it. It took many weeks to get used to it.


I'm curious as to what the author means when discussing whether the extension makes the language "more regular". I thought regularity of languages was a binary thing; either a language is or isn't regular. And since all languages with recursive grammars are not regular (if I'm correct in that statement), why does it matter?

Regarding this particular extension: it seems more or less fine to me. I'm not sure it's really worth the effort just to remove a single character (usually), but it doesn't seem like a bad thing overall. I think the resulting syntax would be quite familiar to Ruby programmers and could make it easier for newcomers to grok the syntax.


Regular as in "uniform", not as in "regular expression". The question is whether this extension conceptually feels like adding an edge case to the syntax or removing one. Does it make the syntax simpler over all?


That's what I thought too, but if you read the page they're talking about the number of nonterminal symbols in the same sentence...


That's a decent measure of how complex a grammar is. It also strongly hints that the language in question is at least context-free: you probably wouldn't use the word "nonterminal" to describe a regular language. (Unless you were specifying a language that happened to be regular as a context-free grammar, I suppose.)


They are not referring to regularity in the formal grammar sense, but as constructs being used uniformly under different contexts.


> Cons

> 2. Contributes to a proliferation of extensions that other tools must support. (NB: This is just a parser change so should be easy for all tools to support.)

In Haskell, are libraries isolated in the sense that third-party libraries can use their own language extensions without affecting the programs that import them? Or does the requirement get pushed up the chain to all code using them?

Or is this "con" just about the syntax you see when using other libraries' APIs, or about some GHC versioning/dependency risk?


Language extensions are generally enabled on a per-file basis. This complaint seems to be about tools that work with Haskell source code (such as an IDE): if such a tool encounters a file with this extension enabled, it has to know what effect the extension has on parsing, or it may mis-parse the file.

As an example, here is the first example config file for XMonad [0]. Note the "LANGUAGE" pragma, which enables a number of extensions for this file.

[0] https://wiki.haskell.org/Xmonad/Config_archive/adamvo%27s_xm...
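
For a self-contained illustration, a hypothetical module enabling the extension from the article ("ArgumentDo" is the name used in the proposal) would start like this:

    {-# LANGUAGE ArgumentDo #-}
    module Example where

    import Control.Concurrent.STM

    increment :: TVar Int -> IO ()
    increment tv = atomically do
      v <- readTVar tv
      writeTVar tv $! v + 1

Any tool that parses this file without knowing about the pragma will choke on the "atomically do".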


Makes sense, thanks. There are so many extensions that I could see this adding a lot of overhead for tooling.

Still, I love the experimental nature of the language that this promotes. A worthy tradeoff, given that Haskell is typically an experienced developer's language, and its users (should) know how to handle such responsibility.


> This is just a parser change so should be easy for all tools to support.

I have nothing against changing the grammar, but I would never say it's "just a parser change". Haskell has no standard parser/AST/pretty-printer (e.g. like read/write for Lisp, or even "eval" in dynamic languages), so the only "portable" way of representing Haskell code is using raw strings, and cobbling together parsers, pretty-printers (or, heaven forbid, regexps) as required by the situation. With so many string-of-Haskell transformers around, changing the grammar is bound to require lots of work for tool and library makers.

The de facto parser for Haskell is the one used by GHC, since almost all Haskell code is written solely for compilation by GHC with no regard for other parsers. Unfortunately, it's very hard to reuse components of GHC for anything other than compiling code.

GHC does offer an API for invoking its parser, type-checker, etc. but it will crash (saying "the impossible happened") unless it's configured just right for that piece of code (setting "DynFlags", include directories, extensions, package databases, etc.). In practice, the only way to set those options correctly is to use the author's Cabal file, which limits us to either invoking the monolithic "ghc" command and hoping it has an option for what we need, or having our code pass all of those Cabal-provided commandline options to the GHC API, at which point we're basically reimplementing the "ghc" command.
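
For reference, the happy path looks something like this (a minimal sketch, using the ghc and ghc-paths packages, and assuming the default DynFlags happen to suffice, which for real packages they usually don't):

    import GHC
    import GHC.Paths (libdir)   -- ghc-paths supplies the libdir of the GHC we built against

    main :: IO ()
    main = runGhc (Just libdir) $ do
      dflags <- getSessionDynFlags
      _ <- setSessionDynFlags dflags    -- flags must be set before loading anything
      target <- guessTarget "Foo.hs" Nothing
      setTargets [target]
      _ <- load LoadAllTargets          -- parse, typecheck and load the module
      return ()

All of the "configured just right" pain lives in that dflags value.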

There are other parsers, like haskell-src-exts, TemplateHaskell quasiquotation grammars, etc. but they either require per-input configuration (which we can only get from Cabal files, which are tailored to GHC's particular options), or else they flat out fail for a large proportion of existing code (e.g. due to preprocessors like CPP).

I ran into this myself when trying to make a tool for extracting ASTs of Haskell functions. In the end I went down the "hope ghc has an option" path, and wrote a Core-to-Core optimisation plugin to spit ASTs onto stderr. This is unfortunate since the ASTs are of GHC Core rather than Haskell, and it's very inefficient since it runs parts of the compilation pipeline that are completely unnecessary :(
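
For anyone curious, such a plugin boils down to something like this (a simplified sketch against the GhcPlugins API of current GHCs; mine writes to stderr instead of going through putMsg):

    module DumpCore (plugin) where

    import GhcPlugins

    plugin :: Plugin
    plugin = defaultPlugin { installCoreToDos = install }

    -- Prepend our pass so it sees the Core before the optimiser rewrites it.
    install :: [CommandLineOption] -> [CoreToDo] -> CoreM [CoreToDo]
    install _ todos = return (CoreDoPluginPass "dump-asts" pass : todos)

    -- Pretty-print every top-level Core binding; the module passes through unchanged.
    pass :: ModGuts -> CoreM ModGuts
    pass guts = do
      mapM_ (putMsg . ppr) (mg_binds guts)
      return guts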



