GHC 8.0.1 is available (haskell.org)
190 points by patrickmn on May 21, 2016 | 36 comments

  * The introduction of the DuplicateRecordFields language extension, allowing
    multiple record types to declare fields of the same name
Holy hell. Is this the end of the Haskell record field names problem?

One of Haskell's great miseries has been that, because record field accessors are declared globally, you couldn't define records with fields of the same names:

  data Person = Person { name :: String, age :: Int }
  data Object = Object { name :: String, id :: UUID } -- error! `name` taken

Edit: Info on the extension at: https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedReco.... Looks like it creates ambiguity in some cases.
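For illustration, here's a minimal sketch of what the extension permits (the `UUID` alias is a stand-in for a real UUID type, and the disambiguation behaviour shown follows the wiki page linked above):

```haskell
{-# LANGUAGE DuplicateRecordFields #-}

type UUID = String  -- stand-in for a real UUID type

data Person = Person { name :: String, age :: Int }
data Object = Object { name :: String, id :: UUID }  -- now accepted

-- Construction is unambiguous, since the constructor fixes the fields:
alice :: Person
alice = Person { name = "Alice", age = 30 }

-- A bare use of `name` is ambiguous, but a pushed-in type signature
-- resolves it:
personName :: Person -> String
personName = name
```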

As a recent beginner to Haskell, are there other big outstanding issues with GHC to watch out for?

The record issue is a widespread inconvenience, but ultimately nothing more than an inconvenience. Most people just prepend the type name to their fields: 'person_name' and 'object_name'.
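That workaround looks like this in practice (a sketch; `UUID` is a stand-in type):

```haskell
type UUID = String  -- stand-in for a real UUID type

-- Prefixing each field with its type avoids the selector clash:
data Person = Person { person_name :: String, person_age :: Int }
data Object = Object { object_name :: String, object_id :: UUID }
```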

Some other issues might be:

* The Prelude is hard to update for compatibility reasons, so it clashes with modern Haskell somewhat. A lot of projects will roll their own prelude and add 'NoImplicitPrelude' to the project options.

* There's also this presentation: https://secure.plaimi.net/~alexander/tmp/pres/2016-05-11-why...

* Deploying Haskell programs to older corporate servers is doable, but not at all obvious. Stick with C, Bash, and older Perls (5.8.8 is on my router) for maximum portability, if you expect to deploy to servers with a 10-year-old image.
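On the first point, the custom-prelude pattern might look roughly like this (a sketch: with 'NoImplicitPrelude' nothing is in scope by default, so every module imports a curated replacement instead; the curated list is simulated inline here, but a real project would put it in its own module, e.g. a hypothetical `MyPrelude`):

```haskell
{-# LANGUAGE NoImplicitPrelude #-}

-- With NoImplicitPrelude nothing is in scope automatically, so we
-- pick exactly what we want from the standard Prelude.
import Prelude (Bool (..), IO, String, error, putStrLn, (++), (==))

greeting :: String -> String
greeting n = "hello, " ++ n
```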

The record issue is more than just an inconvenience; I consider it a genuine issue. Having to add two lines of imports (the unqualified type name, plus the qualified record module to get the accessors) for every record type you use is a pain. When you work with data-heavy code this ends up as hundreds of lines of imports (see the amazonka packages, for instance, where each request has a record type).
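The pattern in question, sketched with hypothetical module names (a fragment, not a compilable standalone file):

```haskell
-- One import for the type name, one qualified import for the clashing
-- accessors; repeat for every record module you touch.
import Data.Person (Person)
import qualified Data.Person as Person
import Data.Object (Object)
import qualified Data.Object as Object

describe :: Person -> Object -> String
describe p o = Person.name p ++ " / " ++ Object.name o
```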

Even if it doubles the number of import statements, I still don't see how that is more than an inconvenience. Having said that, the record issue alone is not the only reason to need qualified imports (as they are also used to resolve collisions of functions that are not auto generated from records).

What would be nice is if Haskell would allow any name collision so long as it could use the type system to disambiguate the function.

Your argument generalizes to "The lack of any feature that can be worked around is an inconvenience", which, okay, but it's not a very useful statement. By that definition, lots of really important things are inconveniences.

> What would be nice is if Haskell would allow any name collision so long as it could use the type system to disambiguate the function.

That would need very careful design to work well with type inference, I suppose?

Idris[1] already has a working system for this, and its type system is more complicated (it has much more support for dependent typing). It allows this principle for regular function names as well.

[1]: http://www.idris-lang.org/

Thanks! I'm glad we have a working model we can probably copy from.

Most people just prepended the type in their fields: 'person_name' and 'object_name'.

Coincidentally, that practice is still visible in some C structures today because very early C compilers had a similar limitation:


Regarding deployment, why not just link statically? http://stackoverflow.com/a/5953787/309483
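The approach in that answer boils down to passing static-link flags through to the linker via GHC (a sketch; note the glibc caveat raised in the replies: some glibc functionality still loads shared objects at runtime even in a "static" binary):

```shell
# Link the executable statically; -optl passes flags to the linker.
ghc -O2 --make Main.hs -optl-static -optl-pthread -o main
file main   # should report "statically linked"
```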

This isn't portable with glibc. (It isn't for C/C++ programs, either).

You can build GHC with musl libc and then make truly portable static binaries but it isn't that easy.


You can't link glibc statically for technical reasons. In some sense, this is the only problem I had. If there was a way to use an alternate libc, it would probably eliminate this issue entirely.

I know there was some work to do a musl and/or Alpine Linux build; probably easy to google. Also I'm moderately certain that GHC on FreeBSD and Mac doesn't need glibc ;)

Already built for Alpine Linux, and I have it set up to use a stage 2 bootstrap: you build off Debian, then build locally to produce an Alpine apk.


I updated it to 8.0.1 last week; it's submitted upstream but not yet accepted, because Alpine Linux 3.4 is about to come out.

If you want to try it now you can docker pull mitchty/alpine-ghc:8.0

Targeting musl works out of the box with the musl toolchain, and once you have a musl build you can use it to build another GHC with musl as the host.

I haven't tried building a statically linked musl ghc, and I honestly don't know how, but it'd be great to figure out. Any pointers? Speaking of static musl builds, that can be a good choice to create a single bindist of GHC that works on CentOS, Debian, Alpine, and any other Linux distro. Seeing how Docker defaults to Alpine images, it could be beneficial in that regard as well.

We've deployed modern Haskell onto RHEL 6.5 systems at work, and that's not too bad (just some dynamic-loader library path hackery at worst).

There was a large (but informal) survey done on /r/haskell (one of the largest Haskell communities) last month about "Why Haskell sucks". The results were collated into a presentation linked below. It's a very detailed review that covers just about everything "bad" about Haskell (though I think it views each issue rather optimistically).

Presentation: https://secure.plaimi.net/~alexander/tmp/pres/2016-05-11-why...

EDIT: Also, if you find the colors on the slide show intolerable, the page source is well-formatted and readable. Content starts at line 200.

About the edit: you can also just disable JS and read it as Markdown source if you like.

Here are some useful Haskell extensions:


The ambiguity can be resolved by type annotations, so I think that this won't cause much pain in practice.
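Concretely (a sketch using GHC 8.0's DuplicateRecordFields; `UUID` is a stand-in type):

```haskell
{-# LANGUAGE DuplicateRecordFields #-}

type UUID = String  -- stand-in for a real UUID type

data Person = Person { name :: String, age :: Int }
data Object = Object { name :: String, id :: UUID }

-- `name` alone is ambiguous here, but an inline annotation on the
-- selector picks the right one:
label :: Person -> String
label p = (name :: Person -> String) p
```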

I actually quite like this limitation. It encourages you to separate data structures into their own modules and use qualified names to access their member data, which I think is often the right idea.

If only Haskell were more like OCaml in this regard and allowed you to, e.g., nest modules, instead of insisting on "One File = One Module".

It goes in that direction with https://ghc.haskell.org/trac/ghc/wiki/Backpack

I've never encountered a situation in which the imposed One File = One Module convention was problematic. I also find it greatly helps when reading new Haskell code bases. I think it's generally a sign of good project design when there are lots and lots of small files, each with one data structure and a few associated functions.

For most of what I want, being able to introduce additional namespaces would be good enough. Making them modules as well could open the doors to some neat techniques.

(I don't need multiple namespaces per file so much for the finished program, but it's handy when developing.)

It's OK... but I think we need a more comprehensive solution. Anonymous record types are really what we want.

    * Significant improvements in error message readability and 
      content, including facilities for libraries to provide custom
      error messages, more aggressive warnings for fragile rewrite
      rules, and more helpful errors for missing imports.
I'm really excited about this. Any push towards more decipherable error messages should be huge in increasing adoption (which leads to more awesome libraries and opportunities to use it for our day jobs).

I still see plenty of errors that remind me of this post. https://izbicki.me/blog/error-messages-in-ghc-vs-g%2B%2B.htm...
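The custom-error-message facility mentioned in the release notes is GHC.TypeLits.TypeError; a minimal sketch (the classic can't-compare-functions example) looks like:

```haskell
{-# LANGUAGE DataKinds, FlexibleInstances, UndecidableInstances #-}

import GHC.TypeLits (ErrorMessage (..), TypeError)

-- Any attempt to use (==) on functions now fails at compile time with
-- the custom message below instead of a generic "No instance" error.
instance TypeError ('Text "Functions cannot be compared for equality.")
      => Eq (a -> b) where
  _ == _ = error "unreachable"
```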

The FPComplete guys will probably have it up on Stackage in a few days. You can still use it by adding the tarball to your stack.yaml, and maybe adding the 'allow-newer: true' flag:


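For reference, the stack.yaml additions might look roughly like this (a sketch: field names follow stack's documentation, and the bindist URL is illustrative; check downloads.haskell.org for the real one):

```yaml
resolver: ghc-8.0.1
allow-newer: true
setup-info:
  ghc:
    linux64:
      8.0.1:
        url: "https://downloads.haskell.org/~ghc/8.0.1/ghc-8.0.1-x86_64-deb8-linux.tar.xz"
```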
I'm looking forward to trying out the new debugging support: gdb was unusable with older GHCs.

Implicit callstacks are cool: remember that you can hide the parameter inside your application's main monad:

    type MyApplicationM a = (?l :: CallStack) => StateT ConnectionPool IO a
I've certainly run into a lot of the other stuff too. Good release all-around!
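Expanding that trick into a self-contained sketch (the StateT/ConnectionPool stack is replaced by plain IO here, and the names are illustrative):

```haskell
{-# LANGUAGE ImplicitParams, RankNTypes #-}

import GHC.Stack (CallStack, prettyCallStack)

-- The implicit CallStack parameter hides inside the application's
-- monad type synonym, so call sites never mention it explicitly.
type AppM a = (?loc :: CallStack) => IO a

logHere :: (?loc :: CallStack) => String -> IO ()
logHere msg = putStrLn (msg ++ "\n" ++ prettyCallStack ?loc)

action :: AppM ()
action = logHere "in action"
```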

> I'm looking forward to trying out the new debugging support: gdb was unusable with older GHCs.

Indeed; it should be slightly better now but as the documentation says, there are still plenty of rough edges. I certainly wouldn't recommend it for day-to-day use. I have a patch set in the works which ought to fix the remaining issues so hopefully 8.2 will finally be usable.

However, in my experience the implicit callstack functionality, along with the ability to provide callstacks from profiling information, greatly reduces the need for DWARF unwinding for debugging (it may still be quite useful for low-cost profiling, on the other hand).

The new Sphinx-based documentation has many code sample boxes where the content overflows: the rectangle is smaller than the text, which doesn't fit into the rendered HTML/PDF. Is this a common issue with Sphinx?

Hmmm, I've noticed this in the PDF output but haven't yet seen anything similar in the HTML output. Could you point me at a specific example?

Rebuilding the latest release now; I was referring to previous RCs. It may very well be just the PDF output. I'll check Sunday and report here.

One example: 3.2.8, the second code box about hsc2hs.

HTML of the same section is fine.

There must be a way to fix this in the pipeline. I mean, you can't hunt down every instance and manually adjust the text.

I don't see the problem on this page[1]. In any case this would be an issue with the builder that turns the doctree into HTML, PDF or whatever.

[1]: https://ghc.readthedocs.io/en/latest/8.0.1-notes.html#hsc2hs

As I wrote, HTML is fine, PDF is not.
