Rust vs. Haskell (serokell.io)
271 points by aroccoli on Feb 14, 2023 | 184 comments



This article paints Rust and Haskell as very similar languages, but when I saw the title I was thinking about how they are fundamentally different. Rust is technical and exposes/requires you to understand the reality of computation, while Haskell hides it away and lets you program in a system based on lambda-calculus (you don't even have strict order of evaluation!) Interesting to see that they are very similar regardless.


Quite a few of Rust's features were borrowed from or inspired by ones in Haskell - albeit often modified to be more suitable for systems programming (and systems programmers!).

Most Haskellers I know are quite fond of Rust :)


I am (or used to be) a Haskeller, and I just can't get over how spectacularly ugly Rust's syntax looks compared to Haskell or SML, so I can't finally force myself to learn it. Call me shallow, but it's true.


I love how Haskell splits the type definition and function definition into two lines. It’s so readable that way. Also enables outlining your program first with just type definitions, and then going back and filling in the function definitions later.


No, you're not shallow. Rust's syntax has put me off a lot. I was looking to learn an alternate Erlang VM language and liked Gleam, which originally had a Haskell/ML syntax, but they've decided it's a popularity contest, so they chose to go to an Algol/C/Rust-looking syntax. Shame. I was learning LFE (Lisp Flavoured Erlang), because Lisp, but then Elixir took off because all of the Ruby programmers took to it (and it's got a lot of other stuff going for it, but it's closer to Ruby than LFE is!).


I share this opinion. Haskell was my hobby language for 2 years until Rust. I still miss the brevity of Haskell and point-free style in particular.


I think Rust's features were actually inspired by OCaml rather than Haskell. Rust was originally written in OCaml, and OCaml feels a lot more similar to Rust than Haskell does.

Of course Haskell is heavily influenced by ML so it also has a lot of the same features.


Traits are definitely inspired by Haskell typeclasses (according to Graydon), but you are right that many other similarities are also shared with ML languages.


> I think Rust's features were actually inspired by OCaml rather than Haskell

The old alpha/beta rust docs specifically referenced Haskell all over the place, and I don't recall them mentioning OCaml.

> OCaml feels a lot more similar to Rust than Haskell does

I disagree very strongly with this. OCaml doesn't even have type classes/traits! Rust lacks polymorphic variants, functorial modules, etc. It's really nothing like idiomatic OCaml, whereas I think it's pretty reasonable to describe Rust as "what Haskell devs would make if they weren't allowed to heap allocate". Maybe they'd also add a few more gizmos to the type system :)


There is at least one SML implementation that computes memory allocations at compile time, somewhat similar to how Rust does it.


It doesn't have to be one or the other :). OCaml and Haskell are two great languages that the Rust designers were familiar with.

The big thing IMO that makes Rust feel like Haskell is the pervasive use of traits and `#[derive(...)]` which is directly analogous to the pervasive use of typeclasses and `deriving` in Haskell.


IMO, that's too superficial to compare them. So what if both have a bunch of similar looking constructs? Rust is imperative, and sequential, and nearly everything is mutable, hence the borrow checker. That makes programming in Rust rather different.


Haskell controls mutable data by pervasive immutability.

Rust controls mutable data by the borrow checker system: allow sharing OR mutation but not both at the same time.
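A rough sketch of what that rule looks like in practice (my own toy example):

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0]; // shared borrow of v
        // v.push(4);      // error: cannot borrow `v` as mutable
        //                 // while the shared borrow `first` is still live
        println!("{}", first);
    }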


I get what GP is implying. The way you program (and think) is entirely different if you are working with immutable data structures and functions versus safe mutability.


Rust is immutable by default. You have to deliberately opt in for mutability. When you don't opt in to mutability the programming styles are very similar at a basic level.

First, the type system is very similar, with the capability of building sum and product types with recursive values. Second, pattern matching over sum types is also very, very similar.
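For example, a recursive sum type plus a match, much like a Haskell `data` declaration and `case` would express it (a toy sketch, not from the article):

    enum Expr {
        Num(i64),
        Add(Box<Expr>, Box<Expr>),
    }

    fn eval(e: &Expr) -> i64 {
        match e {
            Expr::Num(n) => *n,
            Expr::Add(a, b) => eval(a) + eval(b),
        }
    }

    fn main() {
        let e = Expr::Add(Box::new(Expr::Num(2)), Box::new(Expr::Num(3)));
        assert_eq!(eval(&e), 5);
    }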

Rust is like the bastard child of C++ and Haskell.


Rust's new claim is now that references aren't immutable, they're shared by default. There is no concept of immutable data in Rust. All data is mutable in various ways, and that is a property of the binding, not the data, from the Cell interior-mutability euphemism to shadowing in the same frame (`let x = 2; let x = x + y;` is valid even though x isn't declared as mutable). The mental gymnastics a Rust parishioner has to go through to shout down other languages, while theirs fills up with the same concepts, is growing by the hour.


You can build "product types with recursive values" in any language that remotely descends from C or Algol. Pattern matching is not wide spread, but --while nice to have-- it's syntactic sugar.

> When you don't opt in to mutability

Rust's whole shtick is safe mutability.


>You can build "product types with recursive values" in any language that remotely descends from C or Algol.

>You cut off the part where I said sum AND product types. std::variant arrived after C++11 (in C++17), and tagged unions are not traditionally part of Algol-style languages. Even nowadays, the only time I see variant is from some Haskell engineer stuck on a C++ job.

>Pattern matching is not wide spread, but --while nice to have-- it's syntactic sugar.

No, pattern matching is a safety feature. Most Rust code should be using pattern matching as much as possible. In fact, there's a language called Elm whose claim of ZERO runtime exceptions rests largely on exhaustive pattern matching. See: https://elm-lang.org/

Don't believe me? You know about programming's biggest mistake, right? Null? Well, C++ sort of got rid of it by discouraging its use, but even with optionals the issue is still there, because optionals can still crash your program.

With Rust, optionals never crash unless you deliberately ask them to via unwrap. If you use pattern matching exclusively to unwrap optionals, a crash is virtually impossible. Try it, and try to see how pattern matching prevents optionals from crashing your program in Rust, while in C++ the lack of pattern matching means an empty optional can crash when you try to access it.
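A toy sketch of what I mean (names made up): matching on the Option forces you to handle the empty case, so the only crashing path is the one you explicitly opt into with unwrap:

    fn first_char(s: &str) -> Option<char> {
        s.chars().next()
    }

    fn main() {
        match first_char("") {
            Some(c) => println!("got {c}"),
            None => println!("empty string, handled explicitly"),
        }
        // first_char("").unwrap(); // the only way to crash: opting in
    }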

>Rust's whole shtick is safe mutability.

Yes, but at the same time it's opt-in. Algol-family languages are mutable by default, and you have to opt in to immutability via keywords like const. Rust offers extended safety via single-use mutable borrows and single mutable ownership, but this is opt-in. You should avoid using the keyword mut as much as possible.


In most Haskell codebases most code is written imperatively and executes sequentially.

It is very rare that anyone is actually using the truly advanced techniques like knot tying or TARDIS monads which are truly impossible in other languages.


I think painting Rust and Haskell as similar languages is very misleading.

Haskell has a garbage collector for a reason.

You can't program naturally with closures unless you have a garbage collector, because closures can introduce cyclic references.

Also, Haskell has a type inference system that allows e.g. the IO monad to work seamlessly with the rest of the language.

EDIT: and laziness of course.


Rust has closures and does not use a GC. The borrow checker just makes sure that a closure never outlives its captures or any of its references. The most prominent difference between Rust and Haskell is that Haskell uses lazy evaluation throughout; it's the language's unique selling point. Rust doesn't even have generators in the stable variety of the language yet, although they are internally used to implement async/await.


There is nothing "natural" about working with Rust closures. If you want to take a losing fight against the borrow checker, they are the best way to do it.

(Rust has your C-like, run-of-the-mill function pointers too. People should emphasize those more because, unlike closures, those work very well.)


Closures work very well in Rust and are a core aspect of the standard library. The 'Iterator' trait adapters, for example, make very heavy use of closures, as do the combinators on 'Option' and 'Result'. Closures are also how you spawn threads. They are absolutely ubiquitous and quite natural.

In contrast, function pointers are very rarely used.

Are they equivalently nice in every way to closures in Haskell? Of course not. But I think your comment is swinging way too far in the opposite direction.


I think the word ‘natural’ in this context is too vague. If you’re coming to Rust from C++, sure, Rust’s closures feel quite natural. If, on the other hand, you’re coming to Rust from Haskell then they’ll feel wholly unnatural and confusing.


Sure it's vague. We're expressing opinions here. :-) But it's important to push back here. Closures are absolutely natural enough in Rust that a huge swath of very fundamental APIs in Rust rely on them. If they weren't natural in at least some sense, it would be absolutely bonkers to do that.

I came to Rust from Haskell (among other languages). I was a little confused by closures, but I attributed that to the fact that I started writing Rust before 1.0 when closures were far far far less convenient than they are today. (This is back when there were 'proc' closures.) Now I find them extremely natural.


Wait, they got rid of proc closures? Hahaha. Shows how long it’s been since I last looked at Rust. Okay, now they do look quite natural!


Indeed. My (and by no means am I the only one) mental model of closures is really simple. A closure is a struct with no name, and the free variables of the closure are fields in the struct. If the closure is annotated with 'move', then those fields are owned, otherwise the fields are borrowed and the lifetime parameter on the struct is the shortest of the bunch. Then, calling the closure is just calling it with 'self' (FnOnce), '&self' (Fn) or '&mut self' (FnMut).

Everything else pretty much falls into place.
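Spelled out, that mental model looks roughly like this (a hand-written sketch of what the compiler generates; the struct name is made up, and actually implementing the Fn traits by hand needs nightly, so this uses a plain method instead):

    // `let add_x = move |y| y + x;` behaves like:
    struct AddX {
        x: i32, // captured by value because of `move`
    }

    impl AddX {
        // conceptually the `Fn` implementation: takes `&self`
        fn call(&self, y: i32) -> i32 {
            y + self.x
        }
    }

    fn main() {
        let add2 = AddX { x: 2 };
        assert_eq!(add2.call(5), 7);
    }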

The other thing that was added since you've looked at Rust is that you can return unboxed closures because Rust has gained the ability to express anonymous types directly by naming a trait it implements:

    fn adder(x: i32) -> impl Fn(i32) -> i32 {
        move |y| y + x
    }

    fn main() {
        let add2 = adder(2);
        assert_eq!(7, add2(5));
        assert_eq!(10, add2(8));
    }


Your example indeed perfectly demonstrates the subtlety of closures in Rust. Now let's imagine I want to return a nested closure, e.g., `impl Fn(i32) -> impl Fn(i32) -> i32`, or something like that:

    fn adder(x: i32) -> impl Fn(i32) -> impl Fn(i32) -> i32 {
        move |y| move |z| y + x + z
    }
Alright, I can't. I can do that for a single closure, but can't for two or more of them. Here's why we have `#![feature(impl_trait_in_fn_trait_return)]` -- in Nightly, among many other such features.


My example didn't demonstrate the subtlety here, but it inspired your example, which of course does demonstrate some subtlety. And as I acknowledged in another comment, I of course agree there are subtleties to closures in Rust! And I agree there are useful nightly features that unlock additional use cases. Rust gives you enough rope to hang yourself with. For example, if you're okay with a heap allocation (like I assume you might be in a language with "non-broken closures" lol), then you don't need nightly Rust:

    fn adder(x: i32) -> impl Fn(i32) -> Box<dyn Fn(i32) -> i32> {
        move |y| Box::new(move |z| y + x + z)
    }

    fn main() {
        let add23 = adder(2)(3);
        assert_eq!(10, add23(5));
        assert_eq!(13, add23(8));
    }
It's almost like "fundamentally broken" (not just "broken" but "fundamentally broken") and "has subtlety" are two totally different things. What a fucking revelation.


That’s really nice. One of my favourite uses for returning closures is to separate a function into ‘configuration’ and ‘execution’ stages. It becomes extremely useful when you have a ton of parameters but only a few change between typical invocations.


I understand your point and perhaps I'm in fierce agreement.

OTOH, things like parser combinators are much more ergonomic in Haskell than Rust.


Well yes... For many reasons. Including automatic currying and the fact that eta reductions work seamlessly in Haskell and less so in Rust.


Closures in Rust are fundamentally broken. See [1] for the discussion.

[1] https://hirrolot.github.io/posts/rust-is-hard-or-the-misery-...


Lol. "fundamentally broken" and yet I've been using them with little to no friction for pretty close to a decade now. If that's what you think "broken" means, then, well, you've completely devalued the word. This right here is what we call sensationalism folks.

Now if you said, "Rust has some rough points at the intersection of generics, closures and async programming," I'd say that's absolutely true!


> "fundamentally broken" and yet I've been using them with little to no friction for pretty close to a decade now.

This might rather confirm my point, since engineers using a specific programming language quite often have a contorted picture of how code in other languages is written, as they become more and more acquainted with their main tool. If we compare Rust closures with those of ML languages, it becomes pretty clear how natural closures are in ML and tricky in Rust.

Not quite a sensation either to anybody who happened to use closures in a typical Rust code, which, apparently, happens to have quite a lot of generics and async!


The sneer is strong with this one.

> This might rather confirm my point

It might, but it doesn't, because I don't only use Rust. Notice also how you've moved the goalposts from "fundamentally broken" (sensationalist drivel) to "how natural closures are in ML and tricky in Rust" (a not unreasonable opinion with lots of room to disagree on the degree of "natural" and "tricky").


That's not just closures, but a confluence of closures, async, and more.


The example provided in the post -- yes. But I can basically demonstrate other language features that are not working with closures, such as generics -- you just can't have generic closures in the same way as you have generic functions, even in a synchronous code. With closures, you have a pretty limited scope of what you can do, considering the rest of the language, so I think it'd be pretty useless to consider closures as a separate mechanism that shouldn't interact with the rest of the language.


> But I can basically demonstrate other language features that are not working with closures, such as generics -- you just can't have generic closures in the same way as you have generic functions, even in a synchronous code.

Generics and closures work just fine:

    use std::ops::Add;

    fn adder<T: Copy + Add<Output=T> + 'static>(x: T) -> impl Fn(T) -> Box<dyn Fn(T) -> T> {
        move |y| Box::new(move |z| y + x + z)
    }

    fn main() {
        let add23 = adder(2)(3);
        assert_eq!(10, add23(5));
        assert_eq!(13, add23(8));
    }
You keep making sensationalist generalizations. It's trivial to demonstrate that closures and generics work together just fine, as I've done above. Are there subtleties and complexities and other things that can make certain cases or areas tricky? Absolutely. But that's not the same as "not working."


Well, your example is not a generic closure, since inside `adder`, you already have a generic type `T`; it's introduced by the `fn` keyword. If we try to do the same for a closure, we would face a type error. This issue has been discussed on SO [1].

I haven't claimed that closures are not working, since well, they are working, but under a very limited set of circumstances. Again, I see nothing sensational, since the trickery of using closures has been discussed elsewhere.

[1] https://stackoverflow.com/questions/34814423/possible-to-def...


> Well, your example is not a generic closure, since inside `adder`, you already have a generic type `T`; it's introduced by the `fn` keyword. If we try to do the same for a closure, we would face a type error. This issue has been discussed on SO [1].

Of course it's generic. If you were to write the type out for the closure, it would be generic over 'T'. (Whether that type ever actually gets written out that way or not is a different matter.) As far as I can tell, what you're saying is that Rust doesn't support higher-rank polymorphism. Which is true (for types, but not for lifetimes). But that's not the same as "generics don't work with closures."

> I haven't claimed that closures are not working

You've said:

> Closures in Rust are fundamentally broken.

(which was completely unqualified and not to a "very limited set of circumstances")

and (emphasis mine)

> But I can basically demonstrate other language features that are not working with closures, such as generics


this comment made my day.


> The borrow checker just makes sure that a closure never outlives its captures or any of its references.

The point is that the borrow checker forces you to program in a fundamentally different style than idiomatic Haskell.


You are not forced into any design choice. You can use explicit reference counting to enable closure captures to add to object lifetime, or even a lightweight tracing GC implementation (such as the recently developed Samsara https://redvice.org/2023/samsara-garbage-collector/ ) to collect cycles as Haskell does. Of course, Rust makes the performance tradeoffs involved quite clear, even though the implementations it can use are quite possibly leaner than Haskell's.


Infix operators and currying give Haskell completely different ergonomics to Rust, even if you could largely do the same thing in both languages.


You should look at what you can do with Rust's type inference too; it's not too far away from Haskell, at least superficially. For example using the return type to deduce type parameters for a method like let samples: Vec<_> = iterator.collect();


But this is the most trivial example of inference. Rust has neither a generic implementation of higher-kinded types nor global inference for the specific cases it does support, like GATs.


Is it the most trivial? C++ doesn't support it, and it has type inference using `auto`.

Note that in my example the type information flows from the variable to picking which generic method is called, which is the reverse of the information flow in what I would call the most trivial case - when the variable gets its type from the method that was called. (This is trivial: let x = 1i32;)


> For example using the return type to deduce type parameters for a method like let samples: Vec<_> = iterator.collect();

> C++ doesn't support it

How exactly is it different?

    #include <iostream>
    #include <vector>
    
    int main() {
        std::vector<int> v1 = {1,2,3};
        auto             it = v1.begin();
        std::vector      v2 (it + 1, it + 2);
    
        for (auto i: v2) { std::cout << i << ' '; }
    }


C++ auto can only do forward type-inference, i.e. the RHS must have a definitive type and this type is what `it` will end up having in your example. In the Rust example from above:

  let samples: Vec<_> = iterator.collect();
The RHS by itself has the type "B for any B that implements FromIterator<I> where I is the Item type of the given iterator". This cannot be assigned directly to a variable without any type hint (like with C++ auto) because it's an entire class of types and not a specific type. However, in the provided example, the Rust type checker can use the provided clue to narrow this down to an exact type, Vec<I>, without requiring an explicit statement of the type I (instead the underscore serves as a placeholder).
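A toy example of that flow (mine): the same `.collect()` call produces different containers purely from the annotation on the variable:

    use std::collections::HashSet;

    fn main() {
        let make_iter = || (1..5).map(|x| x * 2);
        let as_vec: Vec<_> = make_iter().collect();     // Vec<i32>
        let as_set: HashSet<_> = make_iter().collect(); // HashSet<i32>
        println!("{:?} {:?}", as_vec, as_set);
    }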


Sorry, I still don't get how it's different from C++, and my point wasn't about `auto` but rather about `std::vector v2`:

    std::vector v2 (v1.begin() + 1, v1.begin() + 2);

> However, in the provided example, the Rust type checker can use the provided clue to narrow this down to an exact type

The Rust example is incomplete: by the time its RHS gets to `.collect()`, the `iterator` has to be bound to a particular item type by some concrete `impl Iterator` in the surrounding context. This is pretty much the same thing as template argument deduction for class templates in C++17 [1][2], and in C++20 it got extended to generic concepts and constraints [3]:

    #include <numeric>
    #include <vector>
    #include <iostream>
    #include <concepts>

    template <typename T> 
    requires std::integral<T> || std::floating_point<T>
    constexpr auto avg(std::vector<T> const &v) {
        auto sum = std::accumulate(v.begin(), v.end(), 0.0);        
        return sum / v.size();
    }

    int main() {
        std::vector v { 1, 2, 3 };
        std::cout << avg(v) << std::endl;                                      
    }
Note that nowhere in the snippet do I specify the type explicitly except for the container of type vector.

[1] https://en.cppreference.com/w/cpp/language/template_argument...

[2] https://devblogs.microsoft.com/cppblog/how-to-use-class-temp...

[3] https://www.cppstories.com/2021/concepts-intro/


I think they meant trivial in comparison to Haskell's type inference, which you had started off comparing against, not C++.


Also Haskell is getting linear types, which gets us the best of both worlds, GC productivity and low level performance when needed.


Every talk I've seen on Haskell linear types suggests that at best they can do some specific computations more efficiently (especially with arrays) but are not a good fit in Haskell to significantly elide GC.


> but are not a good fit in Haskell to significantly elide GC

That's because the entire Haskell ecosystem would have to be rewritten, which seems insurmountable.


They are on the early days of integrating them, so how could we expect them to refactor GHC all the way already?


What would be an example of significant elision in this context?


Rust is the closest thing to Haskell that I can run on a processor without an MMU.

There's a Pareto frontier of "best available language for a given project", and I think Haskell dominates the portion of the Pareto frontier where you don't have super tight physical/memory/real-time constraints, and Rust dominates the portion where you do. This is by virtue of their similarity.


They are extremely similar.

Rust just happens to heavily use linear types. Haskell lets you use both. If you write your Haskell program using linear types you get borrow checker semantics.
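Rough sketch of the linear-types side (assuming GHC 9+ with -XLinearTypes; my own example): the `%1 ->` arrow promises the argument is consumed exactly once, which is the borrow-checker-ish guarantee I mean:

    {-# LANGUAGE LinearTypes #-}
    module LinearSwap where

    -- The pair must be consumed exactly once; dropping it or using a
    -- component twice is a compile-time error.
    swap :: (a, b) %1 -> (b, a)
    swap (x, y) = (y, x)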

The main difference is memory management. However, Haskell could likely be written to be manually memory-managed. There are manually memory-managed OCaml variants.

Haskell is not typically used for systems programming.

> you don't even have strict order of evaluation!)

People say this but I'm not sure people understand it. Haskell evaluation order is exotic but deterministic in all cases unless you introduce threads, which bring non-determinism into any language, including Rust.


I see Rust as heavily influenced by ML, perhaps even descended from it loosely, just with C family syntax. This seems to be a commonly expressed view? Rust has everything ML has, pretty much. Rust's concept of ownership/RAII was first implemented in an experimental ML variant, one that didn't need garbage collection. (The first version of Rust was implemented in OCaml, perhaps not coincidentally.) And of course, ML and Haskell are close cousins, if not siblings.


> you don't even have strict order of evaluation

You do, with `seq`. And a lot of Haskell programming in practice is deciding between lazy vs strict programming.

The only lazy feature commonly used by Rust programmers is iterators, but they are just lazy lists (that are immediately discarded after being forced).


> As a result, both languages have steep learning curves compared with other languages.

Well yes and no. In a way the features such as immutability and algebraic data types are things you should know about as a software developer even if your current language means you can't use them at the moment.

My 16-year-old son has learned Rust coming from a Python-at-school background and is now writing small games in the Bevy game engine.


> yes and no

Having to understand monad transformers or another kind of effect system to get anything working is a heavy load that's unnecessary in other languages.


You need to use the IO monad to get anything working, true. Monad transformers? Not at all.

I'd say knowing how to use monad transformers is the barrier between intermediate and expert Haskell programmers.


You don’t even really need to understand monad transformers in general to use just the ‘ReaderT IO’ pattern, and if you want to go crazier, free monads are both easier to understand, and more flexible.


I'd say using transformers is more the barrier between beginner and intermediate. It's not impossible to do serious work without them but I'm not sure it's worth doing.


> It's not impossible to do serious work without them but I'm not sure it's worth doing.

How do you think the folks on Java and C# find their work worth doing on their end in that case?


Because they aren't using Haskell? Haskell more or less creates all the problems that transformers solve.


Such as what exactly? One can literally define the entire application state as a collection of IORefs and pass it around explicitly to actions defined universally as IO, to emulate the default state of the art in $mainstreamLang. The users of $mainstreamLang find it worthy and fulfilling and probably don't know a thing about transformers. Then why hold the work done in Haskell to a different standard of worthiness?


I'm not making any points about users of mainstream languages. They have no use for transformers. Using a lot of global IORefs is possible but relies on a trick using unsafePerformIO; it's definitely not in the spirit of Haskell to work this way, though it is sometimes needed.


I get this point and I agree with it, but I also object to your suggestion that without transformers the work isn't worth doing. My objection has to do with the seemingly different standards and expectations applied to users of mainstream languages. In my opinion they should be the same, even if Haskell allows for safer and better results with transformers: they are not universally required and not every Haskell developer is expected to use them. If `unsafePerformIO` does the trick for them at the moment, so be it: they won't be worse off than Java devs.


If you aren't going to learn to write Haskell in the spirit of Haskell, why learn it at all? Learning a language also involves learning the community's norms and accepted practices.


What problems does Haskell create, and how do monad transformers solve them? @jeremyjh


The main problems are immutability and the inability to perform I/O outside the IO monad. To address immutability you have monads like Reader, Writer, State which can give you a similar experience to what you would have with mutable data and/or global variables. For example with the State monad you can have code like:

  flip runState 0 $ do
    value <- get
    put (value + 1)

The problem comes when you also want to do IO with that state; if your code is in the IO monad, it has no access to the State monad and can't call those methods. There are ways to work around that, but monad transformers give you a StateT which can be applied to any base monad, including IO, giving you the capability of both.
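A minimal sketch of that, assuming the mtl package (names made up): a counter in StateT Int IO, so the same block can both update state and print:

    import Control.Monad.State (StateT, get, put, runStateT) -- from mtl
    import Control.Monad.IO.Class (liftIO)

    countAndPrint :: StateT Int IO ()
    countAndPrint = do
      value <- get
      liftIO $ putStrLn ("current value: " ++ show value)
      put (value + 1)

    main :: IO ()
    main = do
      ((), final) <- runStateT (countAndPrint >> countAndPrint) 0
      print final -- prints 2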


Well, IO is State, cf. https://hackage.haskell.org/package/transformers-0.6.1.0/doc...

You set up your pipeline in terms of monads, and then when you want to execute it you run it in IO. You can generalize IO to MonadIO or define an alias to some transformer pinned at IO. It adds computational overhead. Transformers are a solution to monad composition, which isn't naturally possible.


Yes, and they are essential to working in serious, modern Haskell. You could never be an intermediate without knowing how to use them.


Would it be possible to give an example of monad composition that shows both State and IO in one unit of computation?


Is there an example of StateT applied to IO that's not too complex for Python/C#/C++ devs?


None. Transformers are a solution to the composability of monads.


Java code is always in the IO and Except monads.


I would say using them is beginner->intermediate. Writing them is intermediate->advanced.


I'd agree in general and almost made this same point but it depends on what you are doing. If you write a transformer that is essentially just a specific kind of Reader (like ResourceT), then you can just follow existing examples and not really have to understand how to write all the instances yourself.


That's a good distinction


You don't need to know that to write Haskell.

Effect systems and monad transformers are advanced topics. They're possible to use in any language, yet most people do not and can still write software.

Certainly no need for them in Haskell to be successful. You can just program at the lower levels of abstraction common in other languages.

The only difference is that Haskell's community tends to emphasize abstraction


I hear this opinion every so often but I just don't think it's really the case. Many libraries out there expect you to use Monad transformers. You might try to keep MTL out of your own application code, but you'll find that the rest of the ecosystem favors it. You can still use these libraries without MTL, but it's often not ergonomic and therefore not really an option.


There's no way to even get a job without understanding monad transformers.


If true, that's sad, and hopefully effectful will bring that to an end: https://github.com/haskell-effectful


steep learning curve usually means the opposite of what the author wants to say:

"The common expression "a steep learning curve" is a misnomer suggesting that an activity is difficult to learn and that expending much effort does not increase proficiency by much, although a learning curve with a steep start actually represents rapid progress."

https://en.wikipedia.org/wiki/Learning_curve

When writing, consider "challenging learning curve" instead (I'd love other suggestions too).


If essentially everyone uses it to mean one thing, that's what it means now. That's how definitions work.


Does that apply to “Hacker” in Hacker News?


Sure. Language is contextual. If you find yourself explaining why hacker isn't really hacker to the people that you talk to _about_ hacker news, then yes it is actually impeding communication


I always imagined a mountain climb. So steep learning curve would mean daunting, high effort, high "drag" (gravity), but if you undertake it and push hard, you do, in fact, gain tons of utility (rather than proficiency per se) in a short amount of time.


Easy fix, put time on the y-axis.


You can consider a function with a steep slope from either perspective. The defining factor is the context that function exists in.


Great article.

Rust struct and Haskell records work pretty much the same way, too:

  // Rust
  Item { name, price }
  Item { name: name, price: price }
  match item {
    Item { name, .. } => todo!(),
  }
corresponding to

  -- Haskell
  Item { name, price }
  Item { name = name, price = price }
  case item of
    Item { name, .. } -> undefined
Haskell has some opt-in flexibility wrt. packing and unpacking of field names [1].

[1]: https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/reco...


FWIW if you’re just “unpacking” a structure `match` / `case` is unnecessary syntactic overhead, you can just use a regular let:

    // rust
    let Item { name, .. } = item;

    -- haskell
    let Item { name } = item in …
(Note: the haskell version requires NamedFieldPuns, or RecordWildCards for something like Rust’s version)


There's one important difference: aren't Rust structs unboxed by default, whereas in Haskell they are boxed by default?


This is true. Though I believe GHC is pretty aggressive about unboxing (when possible), but don't quote me on that.

Another possible gotcha is that by default Haskell records introduce a named accessor function into the surrounding scope. So defining two records with a `name` field next to each other is an error
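A toy sketch of that gotcha (mine): each field also becomes a top-level accessor function, so the second `name` below is rejected unless you opt in to DuplicateRecordFields:

    {-# LANGUAGE DuplicateRecordFields #-}
    module Records where

    data User = User { name :: String }
    data Item = Item { name :: String, price :: Int }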


One important thing Rust and Haskell have in common is the nearly complete absence of an endeavour to unify things. As more fancy type system features get added, both languages become quite complex, thus increasing the learning curve and compiler complexity.

This is quite aptly expressed by the recent paper on dependent types [1]:

> Previous versions of Haskell, based on System Fω, had a simple kind system with a few kinds (*, * → * and so on). Unfortunately, this was insufficient for kind polymorphism. Thus, recent versions of Haskell were extended to support kind polymorphism, which required extending the core language as well. Indeed, System FC↑ [30] was proposed to support, among other things, kind polymorphism. However, System FC↑ separates expressions into terms, types and kinds, which complicates both the implementation and future extensions.

Later in the paper, they show how some of the most recent "fancy" features of Haskell can be achieved in a more economic framework based on first-class types. Unfortunately, systems programming (as in C/C++) puts strict constraints on a programming language, and currently it's not quite clear what's the best way to integrate dependent types with systems programming. In the coming years, I expect to see some forms of castrated dependent types tuned for systems programming (e.g., types dependent only on indices).

[1] https://www.researchgate.net/publication/308960209_Unified_S...


> currently it's not quite clear what's the best way to integrate dependent types with systems programming.

Dependent types dispense with the phase separation between compile-time and run-time code, which is inherent to system languages. So you can easily have dependent types in a systems language as part of compile-time evaluation, but not really as a general feature. It would work quite similar to the "program extraction" feature in current dependent-language implementations, which does enable improved type checking because you can express arbitrary proofs and check them reliably as part of compilation.


Yeah, the main problem with dependent types being used in low-level programming is that they're now able to perform arbitrary computation. That's why I think castrated dependent types might do the trick, since if you accept only integer indices as parameters to types, then you can boil down type checking to constraint solving without actually performing arbitrary computation.


Strictly speaking, dependent types cannot really perform arbitrary computation (as usually implemented) because general recursion is restricted. And most real-world programs, especially systems programs, need general recursion. So, for all the supposed generality of dependent types, you end up with two quite separate feature sets for compile-time and run-time execution that can only share a comparatively trivial common subset.


You can have general recursion with dependent types though. Theorem provers prohibit it because otherwise it'd render the system logically inconsistent, which is far less severe in programming than in mathematics. But I agree that a great deal of expressivity of dependent types would be lost if they're applied to systems programming, which is somewhat unfortunate.


> if we have a map and want to do some operation on a slightly modified map, we can have a value that keeps the old map but also works with the new map (without much performance cost).

What black magic is this? Is the article just glossing over the cost of a copy or does Haskell do something weird here to avoid the copy while retaining both versions?


The second. Because everything in Haskell is immutable, it can take a _lot_ of liberties in terms of letting structures share parts of one another, in the case where one version is derived from the other.

Conversely, in the case where something like a list is modified in entirety (e.g. with a `map` function), if the compiler can determine that the original is no longer needed, it can run the map operation in place - much like you might do on an array in C - avoiding the need for a second copy of the structure in-memory.


The immutability guarantees let you build some really powerful constructs. Check out the world of immutable/functional data structures: https://en.wikipedia.org/wiki/Purely_functional_data_structu...


> In computing, a persistent data structure or not ephemeral data structure is a data structure that always preserves the previous version of itself when it is modified. Such data structures are effectively immutable, as their operations do not (visibly) update the structure in-place, but instead always yield a new updated structure. The term was introduced in Driscoll, Sarnak, Sleator, and Tarjan's 1986 article. (Making Data Structures Persistent - https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.133...)

From https://en.wikipedia.org/wiki/Persistent_data_structure

Clojure leverages the same data structure for (at least) four basic types; list, map, vector and set, and the following article explains it well with a pretty graph/picture too: https://practical.li/clojurescript/clojure-syntax/persistent...



I don't know about Haskell particularly but the underlying implementation of data structures in pure languages is sometimes more complicated to enable this sort of thing. For example, the new updated copy of the map may refer to the "old" map for most of its data, thereby making it cheap to have both.


So copy-on-write basically?


Yes, but with granularity at the "node" level instead of being across the whole container. Imagine a linked list.

If you prepend an element to an existing list, no copy is necessary. It's just a new head with the tail being the original list. That's an easy case.

If you add a node in the middle of the list you can still share the unchanged tail between the two slightly different prefix sequences.


Copy-on-write generally refers to copying everything on every write; this is a persistent data structure, which can do updates while only copying a small part of itself. Same purpose as COW, but more efficient for larger data structures we want to make lots of small updates on.


Well, yeah, only there ain't any writes (and therefore, no copying).

The simplest example would be with linked lists, I suppose: you have a list A->B->C, then you take the B->C sublist and prepend X to it, so you now have X->B->C, but A->B->C is still around, and "B->C" part is shared between those.
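A toy sketch of that in Haskell (mine): both lists exist at once, and the ["B", "C"] tail is one shared structure in memory:

    main :: IO ()
    main = do
      let bc  = ["B", "C"]
          abc = "A" : bc
          xbc = "X" : bc
      print (abc, xbc)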


No, because you don't copy the whole structure. You create a new partial structure.

E.g. let's say you create a map with keys A and B. You then insert a new key C. In memory you might have two objects: one is the original map with A and B, and the other has "key C and a ref to the first map".


No, there is no "write" because the old value never changes. It's "create an overlay on write".


I can't speak for Haskell as I haven't looked at their implementation, but avoiding copies is pretty typical of immutable data structures. Because each component of the data structure is immutable, you can safely point to any part of the structure that you don't need to change rather than copy it.

For example, in a basic binary search tree implementation of a map (using C-ish syntax for those who don't know a functional language):

    struct Node {
      String key;
      int value;
      Node left;
      Node right;
    }

    Node set(Node n, String key, int value) {
      if(n == null) {
        return new Node { key = key, value = value, left = null, right = null };
      }

      if(key < n.key) {
        return new Node {
          key = n.key,
          value = n.value,
          left = set(n.left, key, value),
          right = n.right
        };
      }

      if(key > n.key) {
        return new Node {
          key = n.key,
          value = n.value,
          left = n.left,
          right =  set(n.right, key, value)
        };
      }

      return new Node { key = key, value = value, left = n.left, right = n.right };
    }
A perfectly-balanced tree with a depth of 5 has 1 + 2 + 4 + 8 + 16 = 31 nodes. If you call the above function on such a tree, the worst-case scenario is that the key doesn't exist, so it creates new copies of the 5 nodes along its search path plus a new sixth node. 26 of the original 31 nodes are reused and referenced by the newly-created map. The percentage of nodes reused only improves as the perfectly-balanced tree gets larger.

Of course, if this is your implementation of set(), the tree won't be perfectly-balanced, so a production implementation of a tree-based map needs tree-rebalancing (as well as memory-reordering and compacting for cache locality). These extra constraints typically mean less of the tree can be re-used, but the percentage of nodes which can be reused remains high.


The wording is terrible, but the simple common Haskell way is to use overlays/diffs.

You have an object. When you want to apply a patch, you create a new object that contains just your patch, plus a reference to the old object. DiffArray is the simple common example. It's fast enough when the diffs are small, but terrible when there are many diffs in series, creating a deep stack of references.

It's not obscure. $PATH and the /bin,/usr/bin, /usr/local/bin, $HOME/bin dirs on Linux work the same way.


A compiler can do a huge amount of black magic when the order of evaluations is not guaranteed.

And GHC exploits that liberty.


When I first learned Rust I'd been writing Haskell in serious side projects for four years, and had used C++ at work as well as in a game project, so I was quite familiar with it (I'm not sure C++ is ever comfortable). Practically every feature in Rust other than the borrow checker was thus already known to me, and I still found Rust quite a significant learning curve, though to be fair I probably also learned about some things in C++ that looked safe but weren't.


Surprised to see no mention of higher kinded types. Rust has generic associated types but they're not exactly the same and are more limited in their functionality than true HKTs.


HKTs in Haskell are quite castrated, too. Haskell only lets you define HKTs via data type declarations, while, in general, HKTs can be seen as just functions "one level up": instead of operating on the term level, they operate on the level of types. Type checking then becomes undecidable, since you need to be able to perform arbitrary computation on types during type checking. Not a huge problem for Rust though, since its type checker is already undecidable.


About Haskell's function type annotations: "But it usually results in a warning, and adding a type signature is a good practice."

I prefer Rust's way of doing this. When you're a beginner it ensures that you're passing and returning the proper type to and from the function, you kind of have the function as a guard which ensures that you're using the correct types.


> it ensures that you're passing and returning the proper type to and from the function

You don't need it for all function declarations though, there are many trivial cases where type signatures don't add value. Consider that you probably would prefer most of the variables within a function to be inferred for you by the compiler. A similar thing could be said about the most of the functions within a given module. Important interface methods can be defined explicitly though, for extra clarity and self-documenting purposes.


pub functions: should be explicit

module functions: if I chose, I'd let them be derived, but I'm relatively indifferent

inner functions: should be allowed to be derived, IMO. They're very infrequently used so this choice doesn't have much impact.


You can use closures as functions with type inferred signatures


Closures capture variables so have significant differences compared to inner functions. Capture can affect lifetimes so the difference is significant in Rust.


Closures don't have to capture variables. If they don't then they're equivalent to functions.


That's what I thought too, but I fixed some compiler errors by turning a closure into an inner function. Probably a PEBKAC, but...


Perhaps it's due to the fact that you can't really have type-generic closures in Rust: once the compiler has inferred one type, the closure cannot be used with another type. In contrast, inner functions can have any number of additional type parameters.
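A toy example of the difference (mine): an inner function keeps its own type parameter, while the closure's type gets pinned at its first use:

    fn main() {
        fn identity<T>(x: T) -> T { x }
        let _a = identity(1u32);
        let _b = identity("hello"); // fine: a fresh instantiation

        let id = |x| x;
        let _c = id(1u32);
        // let _d = id("hello"); // error: the closure was already
        //                       // inferred to take a u32
    }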


I prefer to have a choice. You could have a compiler that doesn't run if you haven't eaten a balanced breakfast either. Haskell's type system is designed around type inference (of the Hindley-Milner kind); if it can infer the type, I'd like to have the option to let it do so.

It's certainly good practice to write out your types, and often your editor can do it for you, but there are tons of little places where it's a waste.


In "serious" Haskell code, almost all top-level definitions (i.e. functions) are given types - for exactly the reason you specify.


I find type errors in HM typing systems can sometimes be quite obscure and misleading, due to omnipresent type inference. Writing type signatures can save you from that. For example, even if the type signature is quite "obvious", you can erroneously plug in some other type into the function, which triggers the type system to underline the function being defined instead of the function call.


Rust only half-asses this though. Lifetime annotations are part of the type of the function but can be elided if unambiguous. For beginners and generally people who read code that's confusing. I don't think there's anything to be gained by writing a few less characters and editors can easily add them as well.


There's crucial nuance here.

Lifetime annotations aren't elided based on there being only one unambiguous answer. Rather, there are three simple rules[1] that look at the function signature and in simple cases give what you probably want. If those simple rules don't match your use case you need to define manually; the compiler doesn't get clever.

This means that if you want to take a reference to an array and return a reference to an item in an array, for example, elision will work fine. But if you take a reference to a key and look up a reference to the value in a global map you need to write it by hand, even though the compiler could pretty clearly guess the output lifetime you want.

This preserves a crucial feature: you can know the exact signature of a function by only looking at the signature definition, you don't need to look into the body.

Lifetime elision isn't like inferring argument types. It's like defaulting integer literals to i32 unless you specify otherwise.

See https://doc.rust-lang.org/nomicon/lifetime-elision.html
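A quick sketch of the two cases (my own example, not from the Nomicon): the first signature matches an elision rule, the second doesn't, so the output lifetime has to be written out:

    // One input reference: elision ties the output to it automatically.
    fn first(slice: &[i32]) -> &i32 {
        &slice[0]
    }

    // Two input references: the rules give no answer, so we annotate.
    fn pick<'a>(a: &'a str, _b: &str) -> &'a str {
        a
    }

    fn main() {
        println!("{} {}", first(&[1, 2, 3]), pick("x", "y"));
    }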


Are 4 'a, 'b, etc. really more readable?


As a systems programmer any mention of the word Haskell next to the word Rust sends shivers down my spine. Haskell is the king of yak shaving abstractions and is almost the polar opposite in all the ways that made C the most practically successful language on the earth.

I do not want abstractions where they aren't needed. I want control, simplicity, a clear correspondence between what I'm writing and what logical assembly I'll be generating (logical, not physical). Most of all I want my code to be stupidly clear to the next person reading it. Systems programming isn't like writing a Java app, the domain is complicated enough that there's no room for abstraction astronauts to create problems where there are none.

I am still very wary of Rust. I have used it and will continue to use it, but it still teeters on being too complicated to be useful in the same way as C.


I've helped write some very abstract combinator-y Rust that compiled down to an embedded program that runs with only 4K RAM. Get over it.

> and what logical assembly I'll be generating

The problem with Rust and C is not worrying about what assembly I'll get, it is that one has very little control over the layout of the stack. It's the last implicit data structure and that's a PITA for highly resource constrained programming.


With all due respect, writing one program with a small memory footprint doesn't disprove my point. I'm referring to writing very large, complex programs that interact directly with hardware - kernels, boot loaders, firmware, all of which must be as clear as possible. And I say this with many years of experience dealing with other companies' badly written device drivers and firmware.

I absolutely need to worry about what assembly I get. I am often checking my assembly or looking for optimizations in my assembly. And I'm doing this across multiple architectures.


> I'm referring to writing very large, complex programs

Then the power of good abstractions should be all the more useful!

> that interact directly with hardware

OK if you are using various privileged instructions, writing things with weird calling conventions like interrupt handlers, etc. Then the assembly matters.

But, and this may sound heretical, this stuff is the boundary of the lower level code; its interface with the hardware. For the interior, the precise assembly once again doesn't matter.

> And I say this with many years of experience dealing with other companies' badly written device drivers and firmware.

There is a lot of crap low level code out there, yes. And I would say a chief mistake of crap code is not properly respecting layers of abstraction. The low level world needs more http://langsec.org/ where we precisely and mechanistically document the interface, rather than half-assing it with reference implementations.

Just as user-space weird instructions get intrinsics exposed to spare the user from writing inline assembly, libraries like https://docs.rs/x86/latest/x86/ should be maintained by the device manufacturer or ISA consortium. They should be unit-tested with an emulator too.

If we properly do all this legwork to fix the foundation, the rest of the low-level software will practically write itself compared to today, and be accessible to a much larger pool of programmers.


Rust is the bastard child of C++ and Haskell.

So it doesn't have the simplicity of C, it tries to give you as many abstractions as possible while still maintaining the zero cost philosophy.

I would say Rust is easier than C++ and easier than Haskell.


Being easier than C++ or Haskell isn't really saying much at all. Neither of them has improved code readability, and both are notorious for their learning curves.


It is saying a lot because no other language offers that combination of performance and high level abstraction.

The speed of C++ with high level language features of haskell. And none of the safety pitfalls of C++ either. That's a lot.

When compared with Haskell the benefits aren't clear. When compared with C++, Rust is clearly better when viewed strictly from a language standpoint.


But C++ is an outlier in systems software. Systemd is written in C. Grub is written in C. The Linux kernel is written in C. Firmware is written in C. Sure, there are some kernels that use a bit of C++, but it's an outlier, not the norm. C++ never proved that a higher-level language was so worthwhile that we should drop everything and write C++. C++ only dominates in performance-critical applications where there is a bridge to higher-level languages and you're interfacing with software.

The strongest argument for Rust is its safety features. Abstractions have made C++ an unusable mess that even C++ developers complain about on a constant basis.


No, C++ is considered systems software. I largely agree with you, except for the claim that C++ is an outlier in systems software. It's only an outlier for GNU/Linux.


Would not a better comparison be OCaml vs. Rust? In the end Rust WAS written in OCaml, and is heavily influenced by it.


That would not be a comprehensive comparison, but:

- OCaml is definitely tailored for more "high-level" tasks, such as writing a programming language or a theorem prover (there are many of them in the OCaml world, and even Rust was initially written in OCaml, as you've mentioned). OCaml has a GC, which might be a problem under certain circumstances.

- Rust has a far better ecosystem despite being a younger language. You can just compare the number of packages on crates.io and OPAM.

- OCaml _sometimes_ has some more fancy type features, such as functors (modules parameterized by other modules), first-class modules, GADTs (Generalized Algebraic Data Types), algebraic effects, and the list could go on. It doesn't have type classes or an ownership system though.

- OCaml is more ML-like, while Rust quite often feels C-like. For example, you have automatically curried functions in OCaml and the omnipresent HM type inference.

- OCaml has a powerful optimizer called Flambda [1], which is designed to optimize FP-style programs.

Having written some code both in OCaml and Rust, I can say they are in a lot of aspects quite similar though. They both take a more practical POV on programming than Haskell, which affects language design quite evidently.

[1] https://v2.ocaml.org/manual/flambda.html


Early alpha/beta rust docs mentioned Haskell all the time as a point of comparison, but I don't recall them ever mentioning OCaml.

Also, Rust's idiomatic type system usage is way closer to Haskell's than it is to OCaml's (with typeclasses, no polymorphic variants, no module functorization, etc.)


A recentish /r/haskell discussion[1] had some brief thoughts.

[1] https://old.reddit.com/r/haskell/comments/zk2u6k/what_do_has...


Another recent article compared the expressiveness of several languages for a particular project, including Rust and Haskell[0] as well as C++, Python, Scala and OCaml. That comparison was for a complex task undertaken by adept users of each language. The variance was within a factor of 2 either way, with the exception of one team whose choices led to much more typing.

[0] https://news.ycombinator.com/item?id=34684325#34688865


    let nums = take 10000000 naturals
    print $ (sum nums, length nums)
> Because the nums list is used for both sum and length computations, the compiler can’t discard list elements until it evaluates both.

Now that makes me wonder, if I write something like

    print $ (sum (take 10000000 naturals), length (take 10000000 naturals))
will it run in constant memory? I think it ought to, but are there mechanisms in GHC optimizer to prevent extraction of CSEs that would cause huge increases in memory consumption?


The solution to this is a package like https://hackage.haskell.org/package/foldl , which lets you fuse multiple traversals of a lazy list into a single pass and get the laziness right.
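A rough sketch of what that looks like (assuming the foldl package): sum and length are combined applicatively into one Fold, so the list is traversed in a single pass and can be collected as it goes:

    import qualified Control.Foldl as L

    main :: IO ()
    main = print (L.fold ((,) <$> L.sum <*> L.length) [1 .. 10000000 :: Int])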


It's both solvable and easy to have blowups by accident.


Yeah, sadly this summarizes my Haskell experience. I would hate to do code reviews in Haskell.


Can someone explain why/whether a language like Rust requires mutability? Wouldn't it be better if we could just have the compiler guarantee that functions marked as such are doing TCO, so algorithms would look better and had the full benefit of persistent data structures?


The price for immutability is pretty high. Haskell garbage collection has a lot of work to do.

When you say 'algorithms would look better' this is pretty subjective. Graph-based algorithms or in-place algorithms don't look too good in Haskell.


In-place maybe not, but what language would be more suitable for graph based algorithms? Of course it depends on how you represent the graph, but writing something like breadth-first search is quite nice and easy to implement in Haskell, in my experience.


This is maybe just me, but I find any kind of self-referential data structure a bit awkward to define in Haskell. The thing mentioned in http://wiki.haskell.org/Tying_the_Knot .


I generally express graphs as a `Map k (Set k)` or similar, with extra data as desired. Using keys and lookups means you don't need self-references. I guess in a way this approach sacrifices some purity, but I find it works very well.
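A minimal sketch of that representation (mine), using containers:

    import qualified Data.Map.Strict as Map
    import qualified Data.Set as Set

    type Graph k = Map.Map k (Set.Set k)

    -- Neighbours are found by lookup rather than by chasing self-references.
    neighbours :: Ord k => Graph k -> k -> Set.Set k
    neighbours g k = Map.findWithDefault Set.empty k g

    main :: IO ()
    main = do
      let g = Map.fromList [(1, Set.fromList [2, 3]), (2, Set.fromList [3])] :: Graph Int
      print (neighbours g 1)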


Rust is designed for maximal performance, so it's designed to be close to machine language, where mutability is an unavoidable reality. That's pretty much the whole story.


> Can someone explain why Rust has mutability?

Because its target domain requires reliable efficiency.

> Wouldn't it be better if we could just have the compiler guarantee that functions marked as such

As such what?

> are doing TCO

TCO is worthless, and Rust does not have TCE, nor does it care about TCE.

> so algorithms would look better and had the full benefit of persistent data structures?

Rust does not care much about persistent data structures; there are crates which provide them, but the standard library does not.

Rust is not a functional language.


"Worthless" is a bit strong. The Rust devs want to add TCO/TCE, but there are still problems left to solve. The "become" keyword is already reserved in anticipation of guaranteed tail calls being added.

https://github.com/rust-lang/rfcs/issues/2691


> "Worthless" is a bit strong.

No, TCO is worthless; it's an unreliable optimisation.


Compiler optimizations are best-effort. -O3 gets you 90% of the optimization for 10% of the work of writing hand-rolled assembly. Individual optimizations are not guaranteed, and sometimes even regress in newer compiler versions, but everyone uses them because overall programs run much faster with optimizations than without. I don't see how TCO is any different. Would you say all optimizations are worthless because they're not guaranteed?


> Would you say all optimizations are worthless because they're not guaranteed?

Context is relevant, the context here is language semantics, not things going faster.

TCO is fine to make things go faster, like inlining, unrolling, LICM, ....

TCO is hot garbage when you rely on it for program correctness.


Weird how none of that context was present in your first two iterations of the claim.


If given explicit support in a language, then no, it's not. Especially with the keyword approach Rust is taking, the compiler will guarantee the semantics.

Do you believe the exec system call is similarly unreliable at replacing your process image? Amazing that Unix systems work at all.


> If given explicit support in a language then no it's not. Especially with the keyword approach rust is taking, the compiler will guarantee the semantics.

Then it's TCE not TCO.

> Do you believe the exec system call is similarly unreliable at replacing your process image?

Exec is entirely unlike TCO. If exec was a syscall which sometimes did and sometimes did not replace your process image, then it'd be like TCO, and it'd be similarly unreliable.


I think it would have helped if you'd defined your terms earlier. Wikipedia, for example, doesn't distinguish between the terminology TCO and TCE: https://en.wikipedia.org/wiki/Tail_call


>"Can someone explain why Rust has mutability?"

Why not? Why this "there can only be one" mentality?


Others have already said why Rust has mutability, but in the spirit of the question I would like to share https://koka-lang.github.io/koka/doc/index.html which I think looks a bit like Rust if it had very limited mutability. (It has a "sufficiently advanced compiler" - an implementation of the Perceus static reference counting - that still makes the resulting code extremely fast)


You mean mutability annotations or allowing mutability at all?

Because you can't simply disallow mutability in an imperative language.

But about annotations, you annotate your code to let the compiler know what you meant to do. If just looking at what you do was enough, C would be a safe language. You improve it by giving more information to the compiler, so it can check if you are doing what you meant to.

(But I don't know what relation you saw with TCO. It's not a related concept.)


Maybe because otherwise you need a 'sufficiently smart compiler' that can extract the right amount of performance from the underlying hardware, which is mutable. Also, I don't know if you can implement persistent data structures without a GC and with the Rust type system.

Maybe because it's designed to replace existing, familiar imperative and mutable languages.


> Also I don't know if you can implement persistent data structures without a gc and with the rust typesystem.

Well, you can use Rc/Arc (that's what Bodil's im-rs does, for instance), but essentially yes: by design and purpose a persistent data structure has confused ownership, and thus requires a separate layer handling memory reclamation.


I love the ideas behind both these languages, even though in their approach they are complete opposites.


Very informative. Thanks!


"What do you use Haskell for?"

Most important question about Haskell


I know it is common to think that Haskell is used only in academia and side/weird projects, but there are a decent number of companies using Haskell - e.g. we use Haskell in production for developing a DSL / web framework for building web apps (https://github.com/wasp-lang/wasp)!

I participated in teaching students Haskell at my alma mater this year, and "what can Haskell be used for" was a common question, with the genuine expectation that the answer would be that it's limited to specific use cases. I would answer that it can be used anywhere languages like Java, C#, Go, and similar can be used -> it is a general-purpose programming language with a garbage collector! And while somewhat harder to learn due to abstractions we are not used to, it is a delight to express business logic in it once you get to know it well.

The biggest factor for deciding if Haskell is a good fit for the problem is probably ecosystem support -> are there enough libraries and tools to support efficient development in the specific problem domain? In our case, we are building a compiler/transpiler, and Haskell is well-known for great support in that area, so it was a no-brainer. We were actually also considering Rust, but we had no need for that level of memory control and decided instead to go with a language where we don't have to think about that (Haskell).


> The biggest factor for deciding if Haskell is a good fit for the problem is probably ecosystem support

And hiring too, or being prepared to train new hires / let them climb a steeper learning curve than, say, hiring someone with Python experience for a Ruby app.

Rust has reached that critical mass, I think, and got past the chicken-and-egg issue of experienced people to hire & companies interested in hiring them (to work on a Rust codebase). Against the odds, I think: there are plenty of similarly up-and-coming languages you hear about that haven't (D, Zig), or have only in a niche (F#, Swift, Kotlin - the last two I include mainly because Go could so easily have gone the same way, just the one Google pushed for K8s plugins and GAE applications, not used generally as it is despite being a general-purpose language).


Anecdotally, hiring Haskell developers seems far easier than you’d expect. There are a lot more people out there who want to use Haskell than there are Haskell jobs. Having some training is good- you don’t want to lose out on talented people because they haven’t used Haskell before, but it’s not like everything you know goes out the window when you learn Haskell either. With some help most people should be pretty proficient in a couple of months, and you can have people learn the language in parallel with learning the codebase and business domain, so you aren’t really losing that much time in practice.


I can confirm this! We had no problem getting a very decent number of applications for a Haskell position, both from very experienced Haskell devs and from junior Haskell devs, all very motivated.

As for getting somebody on board -> we hired a couple of senior/intermediate devs who had no or only introductory knowledge of Haskell, and all of them so far got up to speed in a month or so, while learning not just Haskell but also the rest of the codebase at the same time - so the normal learning process at a new company. So I wouldn't say learning Haskell was an issue for them at all, but I am certain that a big factor here was that they are generally experienced in other languages. That said, we do keep our codebase pretty tidy and simple (no super crazy Haskell features).


What concrete, user facing, revenue generating product have you built using Wasp?

How is Wasp more convenient than other frameworks? How do I hire people that know it or can train themselves to use it?


I like to give pandoc and xmonad as examples


And shellcheck!


I didn't even know it was written in Haskell. Very cool tool that does very hairy stuff.



