Developing GHC for a Living (serokell.io)
167 points by jacobwg 5 months ago | 55 comments



> I’m an idealist, and I believe that a perfect programming language with perfect tooling (or, at least, Pareto-optimal language and tooling) can exist. This is in contrast to people who just see languages as tools to get the job done. For them, a language is probably as good as the libraries available for their tasks.

I used to think that way. But it only took me half a decade of professional development (and reading/discussing in circles like HN) to see that it's a fallacy. A programming language cannot be perfect because it's made for humans, who are not perfect (and it runs on computers, which arguably aren't perfect either).

Accepting the fact that there's no singular, perfect way to express a given idea has been freeing, actually. I used to constantly chase the dragon, getting hung up on refactoring and refactoring and refactoring, trying to attain that perfect description. I still get caught up in it sometimes. A little bit of that spirit makes for better code. But you have to be able to pull yourself away.


> the fact that there's no singular, perfect way

I think the author definitely agrees that there's no "singular, perfect way", based on their use of the phrase Pareto-optimal. Instead, the author is claiming that some ways are strictly better than others, and that they want to use techniques that are strictly better than their current techniques.


"orders-of-magnitude higher developer productivity", to quote the author. We aren't talking quibbling over the last few crumb like a microbenchmarks game, but a revolution of the means of production with profound economic implications.


> Instead, the author is claiming that some ways are strictly better than others, and that they want to use techniques that are strictly better than their current techniques.

That is not how I interpreted the post, but I do (mostly) agree with it. I think "strictly better" is a real thing that exists, though I do also think it gets over-applied a lot of the time.


I would still just let him be; people like him are the reason some cool stuff exists.


Not to mention a ton of the cool stuff (or decision to skip non-cool stuff) in your [practical language] came from experimentation in projects like Haskell.


There are a lot of unacknowledged tradeoffs in tools like Haskell. For example, the value of Haskell itself is balanced against the ecosystem value of having 10M people using the same tools. Getting 10M people to choose Haskell is actually a go-to-market strategy problem that could be solved by a charismatic leader, but I don't hear anybody from that community talking about that.


To be clear: I'm not making any statements about Haskell itself (though I think you make a good point). A certain amount of idealism can be a good thing. It's perfectionism that's dangerous, and there's a fine line between the two.


> ecosystem value of having 10M people using the same tools

That is, I think, one of the fairer criticisms of Haskell. I don't personally mind, but I respect those who do.

That said, I wouldn't want 10M more users right now. The Haskell ecosystem currently has tons of breaking changes, and I think that is important. More users would freeze things in place before we have the tooling to support both breaking changes and tool-assisted migrations. Only then would I want rapid growth in popularity.

(Compare Rust's "backwards compat is really important"... as if they could get everything right the first time... as if that won't go the way of C++. Thank god they relented a bit with "editions".)


You are absolutely right that adherence to backward compatibility will eventually lead to situations similar to C++ or C. Rust and Go will eventually face the same issues C++ and C face today, once they reach a similar critical mass (which is itself a question mark for Rust).

Haskell and other FP languages are a completely different paradigm, and it's true that unless there is very good tooling that can transparently migrate old code to new code across breaking changes, every language will go through that transition. We can't be prophets who know precisely what the future holds. Python learned this the hard way: it didn't have Unicode when it was designed, and adding support later turned into the 2-to-3 transition disaster. Java was lucky in 1996, since Unicode was already popular by then and it was baked into the language and JIT infrastructure.

But at present, I tried both cabal and stack to compile the Elm 0.19.1 compiler and pandoc, and found it really hard due to GHC version incompatibilities. I hope the situation in Haskell improves so that GHC can compile old and new code transparently, without too much fiddling and pain.


> Rust and Go will eventually face the same issues

Isn't Rust's workaround their "edition" system? It seems to be working pretty well for them, but it sounds like it could end up being a massive maintenance headache.

https://doc.rust-lang.org/nightly/edition-guide/editions/ind...


Absolutely. After Stack happened and GHC started a faster release cycle, it felt like the system was breathing again. 10M more users would be detrimental at this point, because the language is actually moving.


I doubt you can appeal to ten million people without turning your language into just another Blub dialect. There's a joke that Haskell is navigating between "avoid success-at-all-costs" and "avoid success, at all costs".


But it only took me half a decade of professional development to feel closer to the goal than ever before. Thanks, Haskell job.

Having worked with Vladislav Zavialov on both proposals and GHC itself, I can say he's great to work with and has excellent taste in design.


All I'm saying is that literal perfection is an asymptote. It's well and fine (great, even) to try and move towards it, but a person will be both happier and more productive once they accept that it's ultimately unreachable.

I don't know anything about the author; maybe he was being hyperbolic when he used the word "perfect" (though it didn't sound like it). I was just taking the opportunity to point out a lesson that I've learned.


I think "perfection" is understood to be a limit or non-unique. But where we are with tooling today, industry wide, is so completely abysmal that I don't think the behavior of shooting for absolute perfection or something in reach is that different.

Words like yours imply we are "good enough" and anything better is "too costly", and I don't think that is remotely true.


That was not my intent.


Glad to be wrong, then.


What did you think he meant by referring to the "Pareto principle"? Did you not know, and just decide not to look it up?


I tend to disagree, only because I think that certain things are fundamentally better.

For example, a foreach eliminates a whole class of problems that a for loop introduces by using a reference to an item instead of a counter variable (which risks array bounds errors, etc). And higher-order functions eliminate a whole class of problems that foreach introduces, by helping the user to think declaratively and allowing for things like composability and parallelization. These are the "low hanging fruit" of programming languages and it astounds me that a lot of people haven't even made it to foreach yet.
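To make that progression concrete, here is a minimal sketch in Haskell (illustrative only; the function names are made up): an index-based loop with the bounds bookkeeping a counter forces on you, a "foreach"-style recursion over elements, and the higher-order fold that names the pattern outright.

    -- index-based "for loop": the counter and bounds are our problem
    sumFor :: [Int] -> Int
    sumFor xs = go 0 0
      where
        n = length xs
        go i acc
          | i < n     = go (i + 1) (acc + xs !! i)  -- an off-by-one here means a crash
          | otherwise = acc

    -- "foreach": each element arrives directly, no counter to get wrong
    sumForeach :: [Int] -> Int
    sumForeach []       = 0
    sumForeach (x : xs) = x + sumForeach xs

    -- higher-order function: declarative, composable, parallelizable
    sumFold :: [Int] -> Int
    sumFold = foldr (+) 0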

What really gets me though is that compilers could trivially analyze side effects and transform code to use these better abstractions. We should be able to write a for loop and end up with higher-order functions in the compiled code if the outcome is equivalent. Then we could trivially parallelize code and be running orders of magnitude faster than we are now.

In fairness, this stuff is much easier in functional programming (FP) languages. So then, why don't imperative languages compile down to FP internally?

These simple examples show some of the fundamental prerequisites that software engineering somehow missed. And I think that saying all programming languages are roughly equivalent caused us to accidentally overlook some obvious truths.

My perfect language would have the embarrassingly-parallel vector processing of something like MATLAB, with the copy-on-write data handling of something like Clojure and Redux, with complex concurrency handled by lock-free atomic functions and channels/streams like Go/Elixir/Erlang, with all-immutable variables to encourage higher-order functions instead of object-oriented programming, with the syntactic sugar of Python for slices etc., with the homoiconicity of Lisp and the speed of C. Basically, it would look like immutable Javascript transpiling to Lisp.

It would do without things like monads (or have some clearer handling of them), so that the user can always think in terms of synchronous, blocking, one-shot execution with no side effects, and the code can be viewed as a tree that could be dropped directly into a genetic algorithm. In my head, code looks like a mix between this and a spreadsheet. Then 90% of my time now goes to converting that to whatever mediocre language and framework I'm stuck using.

Writing this all out has shown me that my ideal language would probably piss a lot of people off. So maybe you are right after all!


What you described is actually already here, and it's called... Haskell!

You need COW things like in Clojure? They're right there in unordered-containers. You need SIMD processing of vectors? It's in the vector package. Parallel processing? Parallel strategies.
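For instance, with the parallel package, turning a map into a multicore map is roughly a one-word change. A minimal sketch (function names made up; compile with -threaded and run with +RTS -N):

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- sequential version
    squares :: [Int] -> [Int]
    squares = map (\x -> x * x)

    -- parallel version: same result, but elements are evaluated across cores
    squaresPar :: [Int] -> [Int]
    squaresPar = parMap rdeepseq (\x -> x * x)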

Channels like in Go? You bet, they're on Hackage, in the stm package. And green threads are much greener in Haskell than in Erlang and Go (about half the overhead compared to Erlang).
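A minimal sketch of the Go-style producer/consumer pattern with a TChan (names here are just for illustration):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.STM (atomically)
    import Control.Concurrent.STM.TChan (newTChanIO, readTChan, writeTChan)
    import Control.Monad (forM_, replicateM_)

    main :: IO ()
    main = do
      chan <- newTChanIO
      -- producer on a green thread, like `go func() { ch <- i }()`
      _ <- forkIO $ forM_ [1 .. 5 :: Int] $ \i ->
        atomically (writeTChan chan i)
      -- consumer: block on the channel five times
      replicateM_ 5 $ atomically (readTChan chan) >>= print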

I keep repeating that what is usually a language feature in regular languages, is often just a library in Haskell.

Which, as you rightly noted, pisses many people off.


>What really gets me though is that compilers could trivially analyze side effects and transform code to use these better abstractions. We should be able to write a for loop and end up with higher-order functions in the compiled code if the outcome is equivalent. Then we could trivially parallelize code and be running orders of magnitude faster than we are now.

> In fairness, this stuff is much easier in functional programming (FP) languages. So then, why don't imperative languages compile down to FP internally?

I prefer FP languages because code written in them is more readable to me --- recursion is often more comprehensible than looping while also being more general, immutability lowers my anxiety about tracking the values of variables/bindings, and in general "reasoning via equality" is easier. I can even do it with a piece of paper. Good luck programming with pen and paper in an imperative language. So that's my preference.

But... The benchmarks that everybody's seen would indicate that things like garbage collection (pretty much a must with higher-order functions), disregard for modern CPU cache-locality rules (a lot of pointer indirection), and all sorts of other things that I haven't the slightest idea about are costly for performance. So much for "[if we just converted everything to FP, t]hen we could trivially parallelize code and be running orders of magnitude faster than we are now". Also: parallelizing is a lot more complicated with regard to performance than "I'll run it on n threads to do it n times faster". Sometimes you will slow things down that way. It's weird, but when you start measuring things, you find that your intuition is wrong all the time. We could probably attribute that to all the complexity accumulated in the lower layers (CPU, OS) that we don't understand.

That said, converting to FP is kinda what we do (but not really). After all, a lot of compilers use SSA[1] to make analysis of dependencies between variables easier.

[1]: <https://en.wikipedia.org/wiki/Static_single_assignment_form>
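As a tiny illustration of that correspondence (a sketch, not any particular compiler's IR): under SSA every reassignment gets a fresh name, which is exactly a chain of let-bindings in a functional language.

    -- imperative:          SSA / functional form:
    --   x = 1                x1 = 1
    --   x = x + 2            x2 = x1 + 2
    --   y = x * 3            y1 = x2 * 3
    ssaExample :: Int
    ssaExample =
      let x1 = 1
          x2 = x1 + 2
          y1 = x2 * 3
      in  y1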


Indeed, one would hope that the author knows enough category theory to know that choices of syntax are arbitrary and that choices of semantics are forced; any Turing-complete language will do, and the rest is optimization and trying to convince cooked rocks to do arithmetic.


I wrote this... jeez, that was a while ago... on more or less this topic:

https://journal.dedasys.com/2006/02/18/maximizers-satisficer...


Lots of people are misreading what I said here. I'm not "against" "idealistic languages". I haven't done much Haskell, but I suspect that I'd love it; I just haven't had time to take it for a spin. I think it's great that people are trying to improve the state of the art, I think there's lots of improvement still to be made, and I think Haskell is a focal point of that improvement.

I was very specifically disputing the phrase, "I believe that a perfect programming language with perfect tooling...can exist". I am only making the case that "perfect" is not attainable (or even real), even though it can often feel like it might be in the pure world of code. We can keep our eyes upward, and always be making things better, but we will never "arrive". Because of this, qualities like expressiveness and safety - while valuable - always have to be balanced against realities like human nature and project constraints. A feverish compulsion can overtake some of us (or at least me) when we feel like we're very close to true perfection in our work, so it's important to be reminded that you will never quite get there. The desire for refinement has to leave room for other priorities.


I think you missed the point. Haskell, keeping up with research, allows for further research. That research allows you to explain your ideas better and explore. It could save you refactoring time if you could explain it right the first time.

This kind of snub is weird.


In terms of GHC's evolution, this looks really promising to me: http://www.well-typed.com/blog/2019/10/nonmoving-gc-merge/


Yeah that is a monumental addition imo. You can find forum posts from just a few years ago with this on their wishlist. Cool to see it finally happen.


Have you tried this yet in practice? I expect it makes Haskell more suitable for web stuff, since that's a domain where low latency is often more important than computational performance.


Web stuff is pretty popular in Haskell already (I'd say "Web Stuff" is probably the biggest portion of production Haskell users these days), but then again it's also popular with many languages where people use throughput-oriented GC strategies vs low-latency ones. "Web Stuff" is a weirdly large class of applications these days, since it's basically "large scale server applications" that people are really talking about. But it will definitely close a big hole where one existed before -- there are users who had to abandon Haskell implementations of their server-side products because of it.


Yes, I'm also using Haskell for web APIs. It's certainly a great fit for that purpose.

I only meant that this low latency garbage collector has stronger positive implications in concurrent web stuff than in many other domains.


In what way do you think Haskell is bad for web use? I can't think of any runtime performance metric in which it doesn't crush Ruby or Python, and those are pretty popular for web apps.


Poor phrasing on my part. Certainly not bad in any respect for web stuff.

I merely meant that this is a welcome upgrade that specifically benefits web applications, even if it carries a minor computational performance penalty.


Ah, I see what you mean. It's a domain in which you're happy to give up a couple of percentage points of throughput to bring down the 99th-percentile latency, which that option will do. Not necessary, but it will be nice.


Sure, but does it have web support like Django or Rails? There's more to it than performance when it comes to web dev!


Yes, it certainly does: https://github.com/obsidiansystems/obelisk

[Disclaimer, I work at the company behind this.]


That looks pretty cool, I’ll try it this weekend!


But a low-latency GC won't solve that problem.


Web support...?


He means an ecosystem where 99% of the work needed to build yet another CRUD app is done for you.


I mean the web eco-system that allows you to develop web applications quickly.


Not yet, since it's only on the bleeding edge of GHC. I have some games I've been building that I'll probably try it with, though.


For reference, as I wasn't aware myself, GHC stands for Glasgow Haskell Compiler.


Overuse of esoteric acronyms is OOMBPP.


One of your biggest pet peeves?


Isn't that OOYBPP? :P


You got it! (YGI)


no relation to the LHC


Which is the LLVM Haskell Compiler

https://github.com/Lemmih/lhc


I would say I have a beginner's understanding of Haskell (proof: https://bytes.yingw787.com/posts/2020/01/30/a_review_of_hask...) and IMHE Haskell is an academic research language for good reason.

For most software, the SDLC is a stream, not tightly scoped like a library or utility. I haven’t seen migration or upgrade plans, docs aren’t really there, links rot extensively, and the strangeness budget is a blank check. It’s not for most developers and it’s not because most developers are “lazy” or “dumb”. I would have loved to use Haskell for my personal projects, but it didn’t satisfy my requirements and I’m sticking to Python.

For starters, in Haskell Stack there's no uninstall option; you use the shell to rm a package. https://github.com/commercialhaskell/stack/issues/361

That’s nope territory for most developers.


Yep, Stack has an atrocious UX that does nothing but cause progressively increasing amounts of pain. I wish someone would just write the Cargo or NPM of Haskell already.


What happened to Idris? As someone who knows absolutely nothing about Haskell, I thought Idris was essentially Haskell+dependent types?


It is, and it still exists... it's also a brand new compiler, instead of one with nearly 30 years of development on it, and a brand new ecosystem, instead of having the massive amount of hours that Haskell has in it. Idris is really nice imo, but Haskell+Dependent Types will likely be usable sooner.

There is an 'Idris 2' being developed, too (https://github.com/edwinb/Idris2), which is an entirely new compiler with a new type theory (although not entirely dissimilar), which just makes it even harder to use for real things (again, IMO).


How hard is it to start working on a compiler like this?


[flagged]


I believe it’s a reference to the dot operator:

    .
The parens around it are Haskell notation for using an operator in prefix position, which is often how operators are displayed in a “standalone” context such as this.
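For anyone following along, a quick illustration of the same operator in both positions (function names made up for the example):

    -- infix: (.) composes two functions right-to-left
    describe :: Int -> String
    describe = show . negate        -- \x -> show (negate x)

    -- prefix: wrapping the operator in parens makes it an ordinary function
    describe' :: Int -> String
    describe' = (.) show negate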



