Calling Purgatory from Heaven: Binding to Rust in Haskell (well-typed.com)
129 points by g0xA52A2A 11 months ago | 89 comments



Is it just me or was the high-water mark for interest in Haskell & FP in general around 2016-2017?

There was lots of discussion & debate around static vs dynamic, clojure vs haskell, oop vs fp, new languages vs established languages, conferences devoted to FP ideas, etc...

The whole debate ecosystem seems to have subsided around when Rust really started taking mindshare. Now, I'm not implying here that Rust resolves these issues at all. It's just a co-occurring event on the timeline imho.

Although, I think it's probably obvious that devs who have free time spend their attention on what's trendy. Rust probably took some wind out of the sails of Haskell, but I'm not sure where trendy is moving these days. Maybe AI, but do devs really program against large AI models, or just play around with ChatGPT? If not, what are they interested in with regard to tools that materially affect their career?

What happened to all the debate about langs? What happened to interest (measured in frequency of online discussion) in FP? Is Rust still trendy in dev mindshare? Are these debates over/resolved?


What happened was multiple factors:

1. Web3 hired a lot of these people and so they had less time to work on this stuff. Shame to spend that much on a dead end but eh

2. Scala died with Big Data. It is still around and all, but no one cares anymore, which emptied the room. It also happened that the whole implicits experiment for polymorphism, which Scala was really supposed to explore, did not pan out that well.

3. Effects progressed but... mostly out of view. OCaml shipped them with its multicore, we are seeing good work on the academic side, you see Verse wanting them, etc. Same thing with linear types.

4. Dependent types ... Never really crossed to the realm of production. And Idris and co are mostly "complete" so it slowed down

5. Oh, and monad interest, mostly fueled by Scala, died slowly. Effect handlers seem to be a nicer solution in practice to most of this stuff.

6. TypeScript killed a lot of the need for advanced stuff, same with Python and Ruby shipping their stuff too. Meanwhile Rust and Elixir showed you did not need the really up-there stuff to have results in prod.

In the end, what happened is that a lot of the highly abstract stuff was driven by "hype domains" that died, while more pragmatic but limited implementations burgeoned and absorbed some of it. The rubber met the road, and that tamped a lot of people down.

There is still work being done, but rn it is more at the "experimental language" stage. Think Rust in the mid 00s.

Oh and Rust mindshare is still growing. A lot. A looooot.


Regarding tools affecting their career... right now, pretty limited. We mostly see movement around JS and some stuff around Rust for all kinds of applications. Outside of this, this generation burned out maintaining stuff. And most of the (small) set of people interested in low level are already consumed by keeping the existing projects running. We have a dire lack of investment in low-level tooling.


It's really too bad, too. Most of the languages we've settled on have glaring problems, but I guess "critical mass" doesn't really care. Python's packaging is a mess (and it has a host of other problems), JavaScript has almost as much baggage as Java with much worse design decisions, over-complicated tooling, etc. (but arguably some of the best runtime implementations in existence). Rust is good, but has sharp edges and a steep learning curve.

Both Rust and JS have adopted some FP concepts that are pretty nice, so there's that.


Python's packaging mess has nothing to do with the language itself, certainly not from the perspective of programming language theory.


As I understand it, it's related to the decision to make the entire interpreter part of the public C extension interface which drove everyone to write C extensions for anything that was remotely performance sensitive (and the breadth of the extension interface and the pervasiveness of C extensions in the Python ecosystem made it really difficult to make substantial backwards compatible performance improvements in the interpreter). Since so much of the ecosystem is C, Python package management has had to try and figure out how to distribute arbitrary versions of C software to arbitrary target systems which is really hard in large part because C projects don't have any standard build tooling or dependency management. As far as I know, no one has figured out how to do this well given Python's constraints (the best language package managers just absolve themselves from Python's original sin--pushing the ecosystem toward FFI rather than making the core runtime model fast enough for native code).


None of the above is an argument as to why pip, poetry, virtual envs, and half a dozen other tools and methods are needed to manage packages with Python. I'd go so far as to say that this isn't even a language-level issue so much as an implementation detail, albeit a rather important one. However, for some context, Python first came out in 1991. Back then, nobody was really focusing on making anything run well on multiple cores; the best you could get was multiple CPUs in somewhat esoteric systems. With that in mind, it's far easier to understand the design decisions behind a programmer-friendly (and hardware-hostile) language from the 90s.


Tcl was born three years earlier.

The AOLServer, which was heavily multithreaded and based on Tcl, was developed in 1995: https://en.wikipedia.org/wiki/AOLserver

You can also easily guess which of the two got i18n right earlier.

The AOLServer was made possible by the decisions to have an API for creating separate interpreters (including highly constrained safe ones), to force extensions to keep per-interpreter state, and to make interpreters communicate by messaging at the Tcl level. The result was that if your extension worked with several interpreters in single-threaded execution, it would also work with several threads running in parallel. The same goes for plain Tcl code, which does not even know which thread it gets executed on.

Thus, Python developers did not do any looking around.


Tcl also has a simple but effective package management architecture that has required specification of package versions from the start, so there is no version hell.

Later, a fully functional multiplatform binary package server/client was developed for Tcl [0]. It died of neglect while pythonistas spent (are spending) years trying to build something satisfactory.

[0] https://wiki.tcl-lang.org/page/Teapot


Yeah, this matches up pretty much exactly with my understanding of the issue. Python as a language is "pretty good" but far from perfect, and it's really unfortunate that C's packaging/building ecosystem is in such a horrible state.


> it's really unfortunate that C's packaging/building ecosystem is in such a horrible state.

This is a recurring theme in my career. It's one of my cornerstone grievances with Nix as well (any time I need to write a Nix package I end up having to package some obscure C dependency that my package transitively depends on). :)


It is related, but not in a way that matters much. My main issues with Python are related to the language itself, primarily around duck typing, the lack of good lambdas, import side effects, etc.


Rust has a steep learning curve, but I'm not sure what its "sharp edges" are? Certainly it has fewer sharp edges than a fully dynamic language? FWIW, I'm not a Rust evangelist or anything; my background is largely Python, Go, C/C++ with a smattering of Rust and JS/TS.


I would say the sharp edges aren't unexpected runtime behavior — compiles => works as expected, barring off by one errors — but rather things like “I just spent an hour figuring out the best way to build a tree out of nested HashMaps because the borrow checker didn't want me to do overlapping mutable borrows” or “what's the right way to be agnostic with regard to owned or borrowed data in my struct/function”. I don't consider these to be part of the learning curve because the problems are inherently more complicated than just “how do I use rust”.
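
A minimal sketch of the kind of thing I mean (the names are made up, and this is exactly the code the compiler rejects):

    use std::collections::HashMap;

    fn main() {
        let mut children: HashMap<String, Vec<String>> = HashMap::new();
        children.insert("root".to_string(), vec!["a".to_string()]);
        children.insert("a".to_string(), vec![]);

        // Two live mutable borrows into the same map: rejected with E0499
        // ("cannot borrow `children` as mutable more than once at a time").
        let root = children.get_mut("root").unwrap();
        let a = children.get_mut("a").unwrap();
        a.push("b".to_string());
        root.push("a2".to_string());
    }

Nothing here is unsound or even surprising once you know the rules, but restructuring the data (indices, arenas, splitting the map) to satisfy them is where the hour goes.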


Conventionally we use "sharp edges" to mean things which are dangerous in a surprising way. For example freshly cut paper has a sharp edge, it's very easy to cut yourself painfully on a sheet of blank paper. The things you've described don't threaten to be dangerous or surprising. They're not good but they don't strike me as "sharp edges".


I see what you mean. As the sibling noted, I think you're using "sharp edges" in an unconventional way, but Rust definitely has a sort of unpredictable developer experience. Some things go really fast and the programmer finds themselves thinking, "wow! this is as easy as Python/Go/whatever!" and other times something that seems to be easy fails to compile and you get a compiler error that says you need to fix one seemingly small thing, so you try and fix that small thing and you get another seemingly-easy-to-fix compiler error and you just keep pulling on that thread until you've forgotten the problem you were originally setting out to solve. (:

I don't think this is a "sharp edge" (maybe "indeterminately long threads to pull") but it definitely is an unpleasant experience. If anyone knows of any good names for this category of frustration, I'd be happy to hear them!


Rust is also huge in crypto. It’s the DSL or inspired the DSL for a few chains and there is a lot of Rust work happening on Ethereum.


At least for me, Rust gives me the parts I like about FP (sum types, generics, type inference, limited mutability, first class functions) without the annoying parts (bad tooling, poor documentation, odd syntax, laziness, mediocre ecosystem, very circuitous routes to mutability and IO).

Basically I suspect the majority of FP users liked the type system and general guarantees of FP but didn’t care about the ideological parts like purity or laziness so when a language popped up that gave you those guarantees but also had better UX and could bind easily to C or C++, they hopped on.
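
For a taste of the "parts I like", a tiny, purely illustrative sketch of my own (not from anywhere in particular): a sum type plus pattern matching, with the compiler checking every case is handled.

    enum Shape {
        Circle { radius: f64 },
        Rect { w: f64, h: f64 },
    }

    fn area(s: &Shape) -> f64 {
        // `match` must cover every variant or the compiler complains.
        match s {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { w, h } => w * h,
        }
    }

    fn main() {
        let shapes = vec![
            Shape::Circle { radius: 1.0 },
            Shape::Rect { w: 2.0, h: 3.0 },
        ];
        let total: f64 = shapes.iter().map(area).sum();
        println!("{total}");
    }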

Also, it’s much easier to sell Rust to management as C++ but with fewer bugs. Who doesn’t want fewer bugs?


All of that describes me. I'd add that I really like Rust syntax. Ok, maybe not the syntax itself, but the semantics of the decorations. It trades a little up-front pain, in the form of "c'mon compiler, why do I need to tell you about this?!", to save a lot of pain later when revisiting mostly forgotten code: "oh I see, the return pointer is tied to the lifetime of argument 2, not the others" (as opposed to "WTF are the invariants here again? time to go hunting").
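
A toy example of what those decorations buy you (function and names are made up):

    // The signature says the returned slice borrows from `haystack` ('a),
    // not from `needle` ('b), so a future reader doesn't have to go hunting.
    fn find_line<'a, 'b>(needle: &'b str, haystack: &'a str) -> Option<&'a str> {
        haystack.lines().find(|line| line.contains(needle))
    }

    fn main() {
        let text = String::from("alpha\nbeta needle gamma\ndelta");
        let hit;
        {
            let pattern = String::from("needle");
            // `pattern` may die right after this call; only `text` has to outlive `hit`.
            hit = find_line(&pattern, &text);
        }
        println!("{:?}", hit); // Some("beta needle gamma")
    }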


Do trends matter? At the end of the day, software is interesting for its own sake, but not useful. If the goal is to create useful tools (i.e., a custom hammer for your particular nail), does it matter what language is used in their creation?

I was writing Haskell in 2011. I'm writing Haskell now. I'll keep writing Haskell until I find something I like better. I also write C++, Python, R, and a dozen other languages when the mood or the need strikes. Some of my favorite bits of software are written in Delphi or poorly hacked together C. They're closer to rocks than Estwings, but they do the job (and have done since the '90s).

There will always be trends. There will always be floods. Floods can take you interesting places, but having a well-sheltered hole or a firm grip on something solid is probably the better long-term solution if survivability is the goal. In the end, COBOL devs are worth more than ever.


Trends are the tides of culture, and I'd argue that culture matters overall. If you're using a lang/toolset that's widely out of fashion and also isn't widely used by slower moving institutions, you're the cultural equivalent of a monk in a mountainside monastery copying down texts for future generations: your work may hold cultural influence down the line, but for your lifetime, you're gonna be largely isolated and unappreciated unless you happen to get extremely lucky.


Unless one does enterprise consulting for companies whose main business is unrelated to shipping software.


No, I'd still say that people in those circumstances are highly isolated from programming culture at-large. I'm not saying it's not a viable career, I'm saying it's sacrificing cultural relevance.


I think a lot of Haskellers just got tired of arguing lol. I know I do so way less than back then. But I am writing Haskell more than ever nowadays (even at my dayjob!)


If we're talking HN trends then I'd say Haskell comes back around roughly every 6-7 years. There was a big push around 2008-2010, again in 2016-2017, so... maybe next year we'll start seeing it come back around?

I work in Haskell full time now and have for more than a few years at this point. The ecosystem is small because there aren't any network effects propping up Haskell's popularity. We don't have large corporations like M$, Google, etc pouring dosh into GHC development, tooling, etc. It mainly survives on the community of dedicated folks working in academia, their spare time, and contributions from the small (growing!) pool of companies investing in it. Progress happens but it's slow.

This is one factor that can contribute to the cyclic nature of FP trends, Haskell specifically; a handful of influential people discover it, learn a bunch in their free time, write about it, and then when it fails to catch on they move on.

Another factor that contributes is... well network effects. Potential new programmers aren't rushing out to learn Haskell/OCaml/F# because there aren't a whole lot of jobs using it, there aren't a lot of courses teaching it, and there aren't many people recommending it. It also means that established language ecosystems are free to adopt ideas and features from the FP community in their own languages which further prevents people from leaving their ecosystem and adopting another language. Sure, C# may not be a great functional programming language but it's good enough and you don't have to fully buy in: you can use FP patterns when it feels appropriate and OOP ones when that works better (C# has the advantage of the .NET runtime which F# uses and the two can interop well... further preventing any reason to leave that space).

The set of Haskell programmers isn't empty. It's filled with people who are rather dedicated and passionate! And sometimes people new to Haskell join, learn something, and leave for various reasons. Some stay.

As for Rust well... if you look at just HN, again, I think you see these hype cycles. The early-mid 2010's the big trend was "X written in Go." Now it's, "X written in Rust." You don't see that happen much with OCaml/Haskell/F#... probably, again, because of the aforementioned effects. Either the pool of candidates re-writing existing tools is small enough that they can't break through the current hype-cycle or they're not re-writing those things and are carrying on with their work.


Microsoft Research and Facebook have certainly paid the salaries of key GHC contributors.


Anyone know why dependent types aren't a thing in practice? They seem to eliminate a bunch of restrictions induced by phase separation. Easier proving of compile time invariants, seems good.

All the demos I've seen of Idris suggest the compile time cost is substantial. So my best guess at present is the languages take too long to compile but that doesn't seem like it should be a deal breaker.


> Anyone know why dependent types aren't a thing in practice?

Because full dependent types imply no phase separation, as you say. Modern languages like Rust and Zig have introduced more powerful facilities for doing custom things at compile time, and perhaps dependent types can ultimately be viable in the compile-time-only subset of such languages; but we're a long way from dispensing with that phase separation altogether.


They are also quite complex for most developers.


Just a guess with no evidence, but I think a core premise of the FP movement was that mutability was the source of all of our problems. And what Rust has shown is that maybe mutability was fine all along, and it was more specifically "mutability + aliasing" that was the source of all of our problems.
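
Rust's rule in one toy example (mine, not anyone's canonical formulation): mutation on its own is fine, but mutating something while another reference can still observe it is rejected.

    fn main() {
        let mut ys = vec![1, 2, 3];
        ys.push(4); // plain mutation: perfectly fine

        let mut xs = vec![1, 2, 3];
        let first = &xs[0]; // aliasing borrow into xs
        xs.push(4);         // mutation + aliasing: rejected with E0502, because
                            // the push could reallocate and leave `first` dangling
        println!("{first}");
    }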


FP isn't Haskell, there are FP languages with support for mutability.


These debates will resurface thanks to new languages like Austral, which attempt to place Rust's overall feature set (affine types, borrowing, interior mutability, safe vs. unsafe subset etc.) on a simpler foundation and make it less ad-hoc. There's also a lot of work along these lines in recent PL research. This will also make it a lot easier to explore better and more elegant interactions with known features from Haskell or FP-like languages more generally. And then some future version of Haskell might ultimately gain the ability to express Austral programs (if not full-blown Rust) more or less seamlessly.


Anecdotally, I was toying around with Haskell around 2015 but concluded that while learning it deeply would teach me important lessons about software, I was likely never going to use it for work, and so I just dedicated myself fully to learning Rust. It's not necessarily about any debate or trendiness for me, it's just a boring choice that helps me get things done.


As Alan Kay cleverly puts it, informatics is a fashion industry.

Every new fad is followed by folks that want to monetize conferences, books, training, consulting gigs,...

Eventually everything that mattered is said and done, a new generation comes along, and a new cycle starts.


I switched from Haskell to Rust at about that time, so that's one datapoint.


Anything you can share about the reason?


The best title ever on HN.


Yeah. But which one is heaven?!


Well, given that this is clearly intended as a followup to https://dl.acm.org/doi/pdf/10.1145/317765.317790, it'd be surprising if Heaven didn't mean Haskell.


Haskell and COM, hilarious


> In this blog post, we will consider how to call functions written in Rust instead: not quite hell, but not quite heaven either.


Clearly Haskell since you are binding to Rust in Haskell.


Calling Rust purgatory on HN and getting away with it. What a time to be alive.

Although I guess that makes C++ hell, which is appropriate.


I'll take Rust over Haskell all day long. Even though Haskell does rock.


This is because you keep state. You are fallen.


“Church vs state”, a century-old conflict.


Start hopping on rails, enjoy life and see everything going by for the circus it is


I made a lot of money off of Rails but finally switched to Clojure because of the gem pain.

So many gems are C or system wrappers and require very specific versions of Ruby and whatever underlying libraries are called. That's fine on brand new dev projects, but it leads to a lot of unexpected web searches when it's time to ship code or return to a project you haven't touched in 6 months.

Pinning your gems helps with some of this but every time it popped up I thought ugh...here we go again. Eventually that pain made me look for a better way.

One of the oft-undersold features of Clojure is that the culture is more careful about breaking changes.


> Start hopping on rails, enjoy life and see everything going by for the circus it is

I thought this might be a quote from the movie Trainspotting (1996). Seems to have a similar spirit to some of the things from that movie.


jump to elixir. enjoy life and get great runtime perf too!


> This is because you keep state. You are fallen.

The Rust philosophy is that there is nothing wrong with mutating state, as long as nobody can ever see you do it. :-)
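
A toy illustration of that philosophy (my own example): a value-in, value-out interface that mutates freely inside, where no caller can ever observe it.

    fn sorted_unique(input: &[i32]) -> Vec<i32> {
        let mut v = input.to_vec(); // private copy; the mutation below is invisible outside
        v.sort();
        v.dedup();
        v
    }

    fn main() {
        let data = [3, 1, 2, 3, 1];
        println!("{:?}", sorted_unique(&data)); // [1, 2, 3]
        println!("{:?}", data);                 // caller's data untouched: [3, 1, 2, 3, 1]
    }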


So Catholicism.


"Forgive me Borrow Checker, for I have sinned"


As always, depends on what you're writing :)


I don't really get what's so good about Haskell.


Local reasoning everywhere via lazy evaluation.

Principled abstractions enabling unparalleled composition.

Usable effects libraries like effectful.

Easy concurrency and parallelism.

Fearless refactoring.

STM.


I wouldn't plug effectful here - MTL and a few others are usable as well, or not using an effect library at all.


I'm unconvinced that suboptimal versions of Haskell, such as mtl with its n+1 instances or very strict interpretations of simple Haskell, have the ability to meaningfully move the popularity needle of Haskell forward.

With that view it makes sense to highlight either "Haskell v2", my ideal flavor of Haskell, without reservation, or some mix of the two.

There was a time I wouldn't have mentioned effectful, for exactly the reasons you allude to, but... along with the reasons cited, I suppose I no longer care that much, in a way.


Already ADTs do it for me when comparing to other popular languages :)


> local reasoning everywhere via lazy evaluation

Doesn't lazy evaluation mean memory/complexity issues could manifest far away from the problematic code?


occasionally but usually not

the reasoning is about correctness and program behavior

Haskell is still the only mainstream language that truly delivers on "understand the part without needing to consider the whole." Others can with work and discipline. With Haskell, you usually have to work hard to get in that level of quagmire (and I've seen and fixed plenty of quagmires. I've seen people complain about code too and just be wrong when I got my hands dirty for like an afternoon.)


Pithy answer: In theory, but very often doesn't matter in practice.

I think the sibling response has a good answer FYI.


I mean this sounds like a lot of buzzwords, which is usually the answer I get when I ask why I should use Haskell.

What would it do to make my life easier?


> I mean this sounds like a lot of buzzwords, which is usually the answer I get when I ask why I should use Haskell.

That's what makes it a hard sell I guess?

> What would it do to make my life easier?

It gives you those advantages (which I'll explain more in a moment) everywhere, meaning you can depend on those things in your code and in others'.

Similar to how it's difficult to explain why it's insanely useful that emacs is plain text everywhere.

Now let's see if I can demonstrate how just one of these could make your life easier... Local reasoning.

A good definition:

> Local reasoning is a property of some code wherein the correctness of the code can be inferred locally under specified assumptions, without considering prior application state or all possible inputs.

Give me a second to think up a good example. Have a desired language or code example in mind that doesn't use local reasoning?


I'm not sure I know what local reasoning is.

Most of the code I write needs to take in a chunk of floating point data a few thousand points long, do a lot of maths on it, and emit it back out again, as quickly as possible, over and over.


What is there not to get? You see, a monad is just a monoid in the category of endofunctors,


It's cos it's all up in that category theory bro.


What is assembly then?


That’s what both heaven and hell are actually written in by a dude in pajamas.


Obligatory XKCD: https://xkcd.com/378/


"The Earth was without form and void and darkness was upon the face of the deep"


"The Earth was without form and void and darkness was upon the face of the heep"


I'm torn between appreciating your wit and lamenting my own missed opportunity :P

Beautifully done.


What dwells beyond the darkness.


Java


Unproductive


Based


Based in facts? Based in fiction?

No, just based.


Based in 64


Heh - my first chuckle of the morning :-)


To me Haskell looks more like “The Good Place” (season 1)

If this was calling Rust from Lisp, on the other hand…


Lisp has already walked the 8th circle of hell to write the first Malbolge program.

Calling a C stub written in Rust using Lisp's C FFI is just your average Tuesday in comparison.
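
For what it's worth, the Rust side of such a stub really is tiny. A minimal sketch (the function name is made up; build it as a cdylib and load it with your Lisp's C FFI of choice):

    // Exported with an unmangled name and the C ABI so any FFI can find it.
    #[no_mangle]
    pub extern "C" fn add_i64(a: i64, b: i64) -> i64 {
        a + b
    }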


Implementing Scheme or Lisp in Rust is probably a path to zero-cost data movement between the two. Write the garbage collector in the abstractions Rust understands, use roughly the same data types on both sides. That might be worth having.

edit: and there are lots already, though for each one it is difficult to tell where it lies on the spectrum from vapourware to production


A Common Lisp implementation in Rust would have the potential to be quite the contender to SBCL.


Why? I don't think of SBCL's problems as being particularly related to the language it's implemented in, but a lot more related to the complexity of making an optimizing non-JIT compiler for a language as complex as CL.


The codebase is old and in C. A Rust version gives new Rust devs a chance to cut their teeth on building a Lisp and learning about Lisp as well. Plus we could get interop with crates. Imagine if CL code could call Rust crates, for instance.


The codebase is old and almost entirely in Lisp; parts of the runtime are in C, but they're also not the parts that are getting worked on, because they're not where the bugs are (because so much of it is in Lisp).

Unless Rust has gotten runtime reflection without me hearing about it, I don't think writing the compiler in Rust gives you any special powers in interfacing with Rust. You'd get a lot more of a win from parsing the DWARF of a library when you dlopen() it, and using the type information that exposes to be able to use the full Rust ABI rather than the C ABI.

EDIT: Actually, no, even with runtime reflection, that would only help an interpreter, not a compiler.

That still wouldn't give you the ability to perform template instantiations at runtime though -- for that, you'd need to ship the Rust compiler with your Lisp _applications_, since you might be able to invoke a Rust template with arguments at runtime that hadn't been seen before.

If you're looking at work on building a more modern Lisp compiler, check out SICL or Clasp, though as far as I'm aware, those are both early-ish projects.


CLASP may be newer, but it is hardly more 'modern'. There is only limited adoption of CLASP in the Lisp community, because its advantage (excellent integration into C++) is only interesting for a small group of users. At the same time its disadvantages (complex implementation, much slower than SBCL in both development and application, less compiler sophistication for the Lisp user, ...) limit its adoption. CLASP is a special-purpose CL with the aim of easily interfacing with large libraries of C++ code. It may be newer and also a great piece of engineering, but it is no competitor to faster implementations. SBCL can compile itself in roughly a minute on a current machine - something like CLASP takes a lot longer. With SBCL we already have the much faster and more responsive implementation.


Why would one implement Common Lisp in Rust? Most implementations are largely written in Common Lisp itself, plus some runtime (memory management, threads, OS interface, ...) underneath. The runtime can be specially written in C (maybe Rust) or it could be an existing VM, like a JVM. Writing Common Lisp itself in Rust or C or whatever makes little sense. For the runtime it makes sense, since it needs to interface with the underlying hardware and the operating system.



