Hacker News
Haskell and Rust (fpcomplete.com)
148 points by pmoriarty 74 days ago | 129 comments



Kinda surprised not to see HKTs[0] mentioned, which, AFAICT, are what really set Haskell's type system apart from Rust's. Without HKTs you can't express a Functor typeclass/trait[1].

[0] https://github.com/rust-lang/rfcs/issues/324

[1] https://www.reddit.com/r/rust/comments/2av5tv/why_does_rust_...


Aren't Generic Associated Types (in the pipeline for Rust) broadly equivalent to HKT as found in Haskell?


I'm kinda bad at these things so I'm not sure. GATs are a form of higher-kinded polymorphism but I'm not sure if they're really equivalent to HKTs.

GATs are still missing anyway. Not sure how close they are to being implemented.


Not particularly close. The feature seems to be progressing, but slowly, and it looks deprioritized (which is understandable: Rust has a bit of a reputation for being hard to learn and too feature-heavy, so it's natural for the team to focus on stabilizing the language instead of adding new features).


They’re needed for async fn in traits, which feels like a pretty natural extension to async fns, so there’s a really good motivating case for them.


I think the comparison dwells on superficial similarities. Rust's type system is absolutely necessary to ensure a certain kind of safety -- namely, a guarantee of no undefined behavior -- under the requirements Rust adopted for itself in order to address certain domains (like having no tracing GC). This safety (or something very close to it) exists in every so-called "managed" language anyway, be it Python, Java or Haskell, regardless of their type systems, so the goal of the type system in Rust and in Haskell -- at least as far as safety goes -- is obviously very different.

Moreover, such safety is not at all what is normally considered "correctness" in the field of software correctness; the term normally refers to functional correctness, i.e. that the program satisfies the specification, not that it doesn't have undefined behavior. On that front, there seems to be no evidence, nor even hints, that either Haskell or Rust do better, or worse, than other languages developed in recent decades. It's a myth, albeit one that's often repeated in some circles.

That's not to say that type systems cannot be hugely valuable, and sometimes even necessary. For Rust, the type system is what ensures undefined-behavior-safety. In other languages it has many other advantages as well. Some type systems are even intended for correctness. But Rust's type system, and certainly Haskell's, are not. Both are interesting though expressively "weak" type systems, of the kind that has not been found to correlate with higher correctness. So focusing on that as the point of comparison completely misses the mark.


Lack of undefined behavior may be a rather low bar to clear with respect to functional correctness, but it absolutely is a bar you have to clear. A program with undefined behavior is trivially not compliant with the specification.

Furthermore, Rust's type system is absolutely not only designed to prevent undefined behavior. Dereferencing a null reference in Java or C# is not undefined behavior, but it is still a (tremendously common) bug. Dereferencing a null reference in Rust is not possible because null references do not exist.

Affine typing allows Rust programs to statically guarantee that only routines applicable to an object's current state can be invoked; in most other languages this can only be checked via runtime assertions.

Unique ownership not only guarantees lack of data races but allows programs to statically prevent race conditions more generally, including those that in other languages may occur even in the absence of any concurrency (see eg. Java's ConcurrentModificationException).

Rust enums and exhaustive pattern matching statically ensure that all possible values of a type are accounted for. Again, not doing so might not directly lead to undefined behavior, but it would be a compliance failure nonetheless, and again, Rust has your back there.


> but it absolutely is a bar you have to clear.

I don't think a program must be free of undefined behavior before you can significantly reduce bugs. I think most people would prefer a program with five undefined-behavior bugs and no other bugs to one with zero undefined-behavior bugs but 50 other bugs (assuming all those bugs are of similar severity).

Also, even clearing that bar doesn't require doing so with the type system (e.g. Python doesn't have undefined behavior, either). It also doesn't mean that clearing it gets you very far with respect to functional correctness.

> Affine typing allows Rust programs to statically guarantee that only routines applicable to an object's current state can be invoked; in most other languages this can only be checked via runtime assertions.

Yes, but that doesn't mean that the end result is more correct, and, indeed, there is no evidence to suggest that it is. Software correctness is an extremely complicated topic, one that I have written extensively about [1], and just pointing out that one technique can eliminate a certain class of bugs is almost never a guarantee that that technique is superior to other techniques, even if they are partial.

[1]: https://pron.github.io/


"Also, even clearing that bar doesn't require doing so with the type system (e.g. Python doesn't have undefined behavior, either)."

If Python doesn't have a type system, why does it report type errors like this?

  >>> "foo" / "bar"
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  TypeError: unsupported operand type(s) for /: 'str' and 'str'


In programming language theory and formal systems in general, "types" mean what is often called "static types." In any event, that's what I'm referring to.


In the places where correctness tends to matter, like safety-critical domains, undefined behavior is unquestionably worse. It's open ended latent risk and also impacts the soundness of attempts to deal with residual risk.

Depending on the nature and surface area of the undefined behavior, you can't necessarily know that your program has "no other bugs" if you can't trust the soundness of the rules for writing your program to begin with. You'll have to test it externally with something that you can depend on the soundness of, which is just moving the same work to somewhere else.


Of course, if you want to ensure the correctness of your software using techniques that rely on the lack of undefined behavior in the language, then you'd use a language without undefined behavior (although this doesn't require ensuring that in the type system). But that's not usually how it works. Even when using, say, deductive proofs [1], you can prove your program has no undefined behavior -- just as you prove it has no logic bug (with respect to the spec). So it's not worse. To give you an example from mathematics, if you need to prove that, under some conditions, 1/x > 10 knowing x ≥ 0, you'll need to prove that x < 0.1 and that x ≠ 0, and the latter is not "worse" just because 1/0 is not defined while 1/0.1 is, and it's not harder than proving x ≠ 0 when 1/0 is defined (as it is, e.g., in Coq and Isabelle). Similarly, if you want to prove that, in C, a[i] < 10, you will need to prove, among other things, that i ≥ 0 and that i < the length of the array, and the fact that a[i] could be undefined if the latter is false does not make that part of the proof harder, nor does it reduce the soundness of the result if it is true.

I.e. the mere existence of undefined behavior in the formal system does not negate the soundness of the entire system. At least it doesn't in arithmetic and it doesn't in C. In fact, C is currently the language with the best formal verification tools. Of course, that is out of necessity, but it's absolutely false that the presence of undefined behavior in the language breaks the possibility of sound formal deductions.

It is true that C does make some local logical inferences more tedious, and some of that is due to undefined behavior, but that's a whole other discussion.

[1]: Deductive proofs are very rarely used in real safety-critical systems, and they're especially unpopular even compared to other formal methods such as sound static analysis and model checking. Those two are more commonly used in safety-critical systems, but the most common assurance tool is tests.


Their popularity has more to do with how the certification regimes are created, which itself has next to nothing to do with the particular efficacy or relative strength of a given verification approach.

I'm also not sure what you're disagreeing with exactly. The reason all those assistive tools for C exist is that C is popular for orthogonal reasons, but also that its undefined behavior is a known source of danger and uncertainty. That's why compilers undergo qualification, for instance. The undefined behavior is characterized, catalogued, and codified to turn it into de facto defined behavior.


> The undefined behavior is characterized, catalogued, and codified to turn it into de facto defined behavior.

No, this is not true. Again, the fact that a[i] is undefined for values of i greater than or equal to the array's length does not necessarily make guarantees about the value of a[i] more difficult when it is defined, and it certainly does not impact the soundness in that case. Verification methods can -- and do -- deal with undefinedness without making it "de facto defined". Indeed, even in arithmetic 1/0 is not defined, and yet Peano arithmetic is fine. Formal logic in general does not require every expression to be defined (in the sense of having a certain specific value) in order for it to be sound.

I agree that undefined behavior is a problem, but it is not necessarily worse than other problems when correctness is concerned, and it is not true that it affects the soundness of inferences. It can certainly be tricky or confusing, though, and it is certainly a source of incompatibilities, which is why most languages try to avoid it or control it (Rust has undefined behavior inside unsafe, and Zig can dis/allow it depending on compilation level).


> No, this is not true.

A whole bunch of the tests and the developer interface agreement that come with your purchase of an IEC/ISO-qualified C compiler are about establishing what that instance of the compiler does with the undefined-behavior corners of C. The whole point is to have evidence of turning unknowns into knowns for a defined set of deployment constraints.

> Again, the fact that a[i] is undefined for values of i greater than or equal to the array's length does not necessarily make guarantees about the value of a[i] more difficult than if it were defined, and it certainly does not impact the soundness in cases where i is within bounds.

That's because you know the surface area of what's defined and what isn't already. If the behavior for a[i] were undefined for all cases of a[i], then you wouldn't be using a[i] at all because you'd have no idea what to expect about its behavior.

> I agree that undefined behavior is a problem, but it is not necessarily "worse" than other problems

You seem to be making this argument on the basis of taking for granted that you know something about what is defined already. Here's how to know that undefined behavior is worse. Say I asked you to write a program for your own pacemaker in a language where _all behavior_ is undefined. What do you do? It will be impossible to avoid killing yourself except entirely by accident, unless you resort to external validation methods.

The assurances that are possible in things you're describing are predicated on having some known a priori model for the domain of what is defined and what isn't, and the various certification/qualification methods required to use certain technologies & tools in domains where these things tend to matter focus on keeping engineers inside the bounds of what's defined and reducing the scope of what is undefined for a given configuration in-use.


> The whole point is to have evidence of turning unknowns into knowns for a defined set of deployment constraints.

Maybe, but undefined behavior still doesn't kill soundness.

> That's because you know the surface area of what's defined and what isn't already.

Sure, but in every programming language most of what you do is defined, even in C.

> Say I asked you to write a program for your own pacemaker in a language where _all behavior_ is undefined. What do you do? It will be impossible to avoid killing yourself except entirely by accident, unless you resort to external validation methods.

It's impossible to write any program in a language where all behavior is undefined. Such a language, by definition, has no semantics, and so no notion of a program. In fact, a language with no semantics of any kind is not a language at all.

> The assurances that are possible in things you're describing are predicated on having some known a priori model for the domain of what is defined and what isn't,

This is true for all programming languages.


> It's impossible to write any program in a language where all behavior is undefined. Such a language, by definition, has no semantics, and so no notion of a program. In fact, a language with no semantics of any kind is not a language at all.

So, it's necessary to remove a sufficient amount of undefined behavior before something is even usable in a coherent way? Which is another way of saying, you have to have defined enough behavior to be able to even get to the point of knowing whether or not the programmer is the one responsible for the "bug".

Undefined behavior seems to be necessarily worse than programmer error, since to be able to determine the latter you have to have removed some part of the former.

:shrug:, going to get back to work.


Undefined behavior means that certain expressions, like 1/0 in arithmetic, have no meaning; i.e. they are grammatically correct but nonsensical. For something to be a language, it is necessary for it to be able to state things that do have a meaning. But if some of its sentences have no meaning, it does not necessarily impact those that do. You don't need to "remove" anything. By default, sentences people say have meanings, but not all of them do. To understand the sentence "I like pizza" you don't need to "remove" any undefinedness, just as you don't need to "remove" undefinedness to determine 2 + 3 = 5 even though some arithmetical sentences are meaningless, like 1/0 < 5.

It's like asking which is "worse", saying "England is a country in Africa" (false) or "Love ate a circle" (meaningless)? Both are "bad" in their own way.

But undefined behavior can certainly be inconvenient, because checking that a sentence is "well-formed", i.e. grammatically correct, is not sufficient to establish that it is meaningful: some grammatically correct sentences are nonsensical. But it's certainly not catastrophic, because mathematics has a lot of undefined sentences, and we still use it successfully.


Having read this entire exchange I can say with absolute certainty that I did not understand it at all. I did however enjoy it very much.


> but allows programs to statically prevent race conditions more generally

Wrong, wrong, wrong! A race condition is an error in program logic. It can't be prevented by a type system, which is limited to local, syntax-based analysis. Race conditions can trivially occur, e.g. in channel- or message-passing-based models of concurrency, where there's nothing that could cause a data race.


Wrong, wrong, wrong. As I said, race conditions caused by non-exclusive mutable access (such as modifying a collection while iterating over it) are trivially prevented by forbidding non-exclusive mutable access! Which is what Rust does. Not all race conditions can be prevented, obviously. But that does not mean that none of them can. I'm not sure what you mean by "local, syntax-based analysis" given that type systems are neither local nor syntax-based. Do you perhaps have a failure of imagination of what sorts of invariants a powerful type system is able to enforce?


Do linear and/or dependent types change this statement?


You can always write nonsensical code like this (a runnable Rust sketch; the same point holds in any language):

    use std::sync::Mutex;

    static COUNTER: Mutex<i32> = Mutex::new(0);

    fn increment() {
        // Lock, copy the value, unlock.
        let temp = *COUNTER.lock().unwrap();
        // Another thread may run increment() here; its update will be lost.
        let new = temp + 1;
        // Lock again and write back the possibly-stale value.
        *COUNTER.lock().unwrap() = new;
    }

Exclusive access to the counter is guaranteed and therefore memory safety has been maintained, but reads and writes may still be interleaved and cause incorrect results.


Your point is a good one: a memory-safe program might still be wrong.

I think the argument is that it's inadequately typed. I.e. the program type-checks only because the semantics aren't captured properly.

If you had a type system that captured that tempcounter was derived from an earlier version of counter, then it could enforce that the assignment of counter in the second lock does not depend on a previous version of counter. Runtime enforcement would be trivial in most languages by making counter a monad that essentially carried a DAG around to detect this sort of problem. I'm pretty sure though that you could enforce this at compile time with dependent types in something like Idris.


> A program with undefined behavior is trivially not compliant with the specification.

Undefined behavior means doing something the language spec does not specify, so the compiler is free to do anything. There are plenty of programs that exploit the programmer's knowledge of a particular compiler's actual, unspecified behavior as an optimization.

One example is using an int variable after an overflow. Post-overflow, the int's value is the result of undefined behavior, but in practice it is predictable for a given compiler and platform. It is possible to write a program that implements its own spec correctly while exploiting out-of-spec behavior of its compiler under the hood.

Much like how it's possible to wrap unsafe Rust code with a safe interface.


Relying on undefined behavior because your compiler happens to behave a certain way today on one specific machine is NOT the same as wrapping unsafe with safe. Next version of the compiler can exploit the undefined behavior and simply remove the code as an optimization, or even worse the optimization is already in your current compiler and only happens under some strange circumstances you are not aware of.

Unsafe code blocks still have defined and predictable behavior, but you have to guarantee the memory safety.


> There are plenty of programs that exploit the programmer's knowledge of a particular compiler's actual, unspecified behavior as an optimization.

There are plenty of programs that have bugs.

Programmers who exploit undefined behavior on purpose are reckless.


> There are plenty of programs that exploit the programmer's knowledge of a particular compiler's actual, unspecified behavior as an optimization.

And then they break when the compiler changes its behavior…


It's worth noting that gcc and llvm treat C integer overflow as a can't-happen condition (unless you supply an explicit compiler flag to change overflow semantics).


The point about null references is a very good example. In both Rust and GC'd languages whose type system differentiates between null and non-null pointer types, it is impossible to define a type like Vector<T>, where T doesn't have a default constructor, without resorting to some form of type cast.

In Rust, as well as GC'd languages like TypeScript, dereferencing a null reference is possible, in the same sense, because a type cast or unsafe code may have been incorrect.


> In Rust, as well as GC'd languages like TypeScript, dereferencing a null reference is possible, in the same sense, because a type cast or unsafe code may have been incorrect.

True, but type casts are unsafe code. You'll never see a null pointer so long as you stick to the safe language subset.


> the term normally refers to functional correctness, i.e. that the program satisfies the specification, not that it doesn't have undefined behavior

A program that has undefined behaviour is not correct, unless the spec has undefined behaviour (why would it?).

> On that front, there seems to be no evidence, nor even hints, that either Haskell or Rust achieve better correctness (or the same correctness more cheaply).

I don't know what you mean by evidence, but Haskell has a stronger and more expressive type system than say Java, it doesn't have null pointers, it has referential transparency, and simpler, safer concurrency and parallelism builtins. The fact that you can make more general assumptions about pieces of code which you don't necessarily fully understand makes writing safer and correct code easier.

> But Rust's type system, and certainly Haskell's -- both interesting but very expressively "weak" type systems -- have not been found to be correlated with higher correctness.

Are you saying there have been studies that show Haskell's type system has not been correlated to higher correctness than say Java or Python, or are you saying you are unaware of any such studies? Also, Haskell's type system is certainly more expressive and strong than Java's or Python's.


> A program that has undefined behaviour is not correct, unless the spec has undefined behaviour (why would it?).

Yes, but there are many ways in which a program can be incorrect, and eliminating one of them does not mean that you've eliminated a significant amount of problems; it doesn't even mean that the amount of incorrectness you've eliminated is bigger than what you would have eliminated by spending your time on other approaches.

> The fact that you can make more general assumptions about pieces of code which you don't necessarily fully understand makes writing safer and correct code easier.

Except this does not seem to be the case, as evidence does not suggest that Haskell programs have significantly fewer bugs or are produced faster.

> Are you saying there have been studies that show Haskell's type system has not been correlated to higher correctness than say Java or Python, or are you saying you are unaware of any such studies?

I am saying that there is no evidence to suggest Haskell programs are more correct, and that at least one study has tried to find such correlation but did not.

> Also, Haskell's type system is certainly more expressive and strong than Java's or Python's.

It is, but it is still very weak.


> Except this does not seem to be the case, as evidence does not suggest that Haskell programs have significantly fewer bugs or are produced faster.

What evidence?

> It is, but it is still very weak.

It's strong relative to the most popular programming languages in use today.


> What evidence?

The same kind of evidence that does not suggest that eating lettuce reduces baldness.

> It's strong relative to the most popular programming languages in use today.

On an expressivity scale, as far as functional correctness is concerned, if Python is 1, Java is 2 and Agda is 100, Haskell would be at maybe 2.5. Sure, stronger, but not enough to suggest it has some real impact, and, indeed, none has so far been found.


Interesting. What are some other languages that would be above 2.5 on your scale?


The jumps in expressivity are not very gradual, and the reason for that is the exact same one as computational expressivity growing "suddenly". PDAs are much more expressive than DFAs, and TMs are drastically more expressive than PDAs.

Liquid Haskell would be a big jump (say, to 20), and anything more than that would put you close to, or at, 100.

But note that type systems are far from the only way of expressing correctness properties. Eve is untyped, but its ability to express correctness properties is much higher than Haskell's. Also, even type systems that are designed for functional correctness do not necessarily do a good job at it compared to other approaches.


> unless the spec has undefined behaviour (why would it?).

Because maybe you're specifying a partial function. The code to handle the special cases of invalid inputs can sometimes have dramatic effects on code size and performance, sometimes too much to bear for code that should never run in the first place; allowing for garbage output to be produced for garbage input instead of a result that says "this is garbage input" could well be acceptable.

Furthermore, there are some places where we don't actually know how to specify the intended behavior in the first place. Data races are perhaps the best example here; we have yet to find a satisfying specification for data races that is more constraining than "anything goes," and the two most well-known attempts (Java and C++) have both been widely regarded as failures. Pointer aliasing is another such area: if requiring pointer provenance isn't acceptable, it's surprisingly hard to come up with a reasonable specification.


> The code to handle the special cases of invalid inputs can sometimes have dramatic effects on code size and performance, sometimes too much to bear for code that should never run in the first place; allowing for garbage output to be produced for garbage input instead of a result that says "this is garbage input" could well be acceptable.

That seems solvable by refactoring your types a little. Also, some languages do not allow partial functions, such as total functional programming languages.

> we have yet to find a satisfying specification for data races that is more constraining than "anything goes," and the two most well-known attempts (Java and C++) have both been widely regarded as failures.

What about nested data parallelism[1] or pure parallelism such as in Haskell?

[1] https://www.classes.cs.uchicago.edu/archive/2016/winter/3200...


Would pointer provenance still be such a big deal if one-past-the-end pointers were disallowed? All the interesting examples I’ve seen are based on one-past-the-end pointers.

I assume something analogous could happen with pointers to freed memory that are converted to integers, but that seems easier to resolve.


> unless the spec has undefined behaviour (why would it?)

If your code relies on e.g. integer wrapping as implemented by CPU integer instructions, then your code, by design, inherently has undefined behavior for architectures that implement CPU integers using something other than two's-complement. (A lot of crypto code is invalid on these architectures!)


If your code relies on twos-complement integer arithmetic wrapping, you should use a function/operator/language that guarantees that behavior. Then, the function/compiler/interpreter can use the CPU instruction on platforms where it exists and emulate it otherwise.


Sure. Can you name the relevant intrinsic in any pre-modern language? It's nice that Rust has overflowing_add, but what were you supposed to do in the 90s? GCC's __builtin_add_overflow only appeared in 2014. Before that, there was this: http://kqueue.org/blog/2012/03/16/fast-integer-overflow-dete..., where, hilariously:

> GCC will optimize away the check (c < 0)

...and the suggested solution to this was to use special, third-party "safe math" libraries written in target-specific assembly.

Which meant you weren't really getting the feature we're looking for (which is portable code that will compile anywhere, using either native two's-complement wrapping arith when available, or a shimmed emulation of it when not), because you were locking yourself into a whitelist of targets that the library had been written to support.

Now, mind you, that's just C. "Using two's complement arithmetic" might be something you can get a guarantee on from, say, Ada, or Fortran, but if you can, I've never heard about it. Anyone know?


Optimization often requires knowledge of the behavior at a level of abstraction lower than the one you're currently working at, and exploiting leaky abstractions. Taking away the assumption of two's complement makes code more portable at the cost of making a lot of optimizations impossible. It's a tradeoff, not an absolute rule.

As soon as undefined behavior is guaranteed by a spec, it's no longer undefined, so your statement reduces to "you should never rely on UB." You can say that, but people will still do it because it makes their programs faster.


I beg to differ. Signed-overflow and such could easily be defined as returning an implementation-defined value. No UB is required.


UB exactly means that it's up to the implementation to decide the behavior because the language spec doesn't. "Implementation-defined" == UB.


No, there's a distinction. "Implementation-defined" means that every implementation is free to do what it likes, but it must document what it does. Undefined behavior is illegal to have in your program, period, and means the implementation has no restrictions on what it can do, including being inconsistent.


No it doesn’t. Implementation-defined means that the implementer must know and document the behaviour (define it). Undefined behaviour may well be behaviour that the implementer does not understand at all (for example it may invoke other undefined behaviour from another part of the system or an underlying system).


> Are you saying there have been studies that show Haskell's type system has not been correlated to higher correctness than say Java or Python, or are you saying you are unaware of any such studies?

On the contrary! Consider for instance: "Total Haskell is Reasonable Coq"[1]

1. https://arxiv.org/abs/1711.09286


This has absolutely no bearing on the matter. The fact that some simple propositional logic statements could be proven using Haskell if Haskell were total, does not mean it has a significant impact on correctness (which requires predicate logic) even if Haskell were total, and it certainly doesn't mean that it's more effective at increasing correctness than Python.

And, BTW, Coq is not a particularly effective tool for writing correct software, either. The largest software whose correctness was largely verified in Coq (CompCert) is positively tiny compared to standard software (it's less than 1/5th of jQuery), and the cost was huge. So Coq's "correctness score" of increased correctness per unit of effort is not particularly stellar compared to alternatives.


>I don't know what you mean by evidence, but Haskell has a stronger and more expressive type system than say Java

My first experience with running real Haskell code was installing Leksah - which is a Haskell editor written in Haskell itself.

I initially installed an older version by accident. It presented me with some window saying it would try to download something. Then it closed, spewing a cryptic error message into the console. I could not restart it, because the program immediately hit the same error on startup.

I then installed the latest version. That one loaded... only to consistently crash when I tried to create a file.

This is what happens when software is written by people who don't understand the difference between theoretical "correctness" and real-life reliability.


The type system of Rust does much more than just ensure "undefined-behavior-safety", and increased type safety in any language often does wonders for functional correctness.

If I had a penny for every time someone forgot to check the return code or errno in a C program, I would be a millionaire by now. This class of error is simply not possible under Rust's type system. I'd classify this as a matter of functional correctness; it has absolutely nothing to do with undefined behavior or anything like that, hence Rust does improve functional correctness.

Another common error I see in C#, for example, is that because of the GC everybody just passes objects around wildly without thinking about ownership. Eventually there is a reference to some object everywhere, often in multiple threads, and you have no idea who the real owner is and who is just borrowing. Every reference is mutable, so it is often very difficult to understand who is mutating it. Rust's immutability by default, with only one mutating reference allowed at a time, would prevent this class of error as well. (This isn't necessarily an error in itself, but usually this spaghetti eventually leads to functional errors.)
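A small illustration of that default (function names invented for the example):

```rust
// Shared (&) borrows are read-only and may coexist; an
// exclusive (&mut) borrow is the only way to mutate, and the
// compiler rejects any aliasing mutable reference at compile
// time, so "who is mutating this?" has exactly one answer.
fn bump_first(scores: &mut Vec<i32>) {
    if let Some(first) = scores.first_mut() {
        *first += 10;
    }
}

fn total(scores: &[i32]) -> i32 {
    scores.iter().sum()
}
```

The signature alone tells a reader whether a function can mutate its argument, which is precisely the information that goes missing in the C# spaghetti scenario.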

I do see your point though, and agree that it would be very interesting to see statistical evidence of how well this translates to productivity and end-user-perceived functional correctness. It could for example be that all such errors are weeded out during testing anyway, which leads us down the endless rabbit-hole debate of dynamic vs. static languages. Though in my experience, the closer to compilation you can find an error the better. Especially in larger teams, where types not only help you but also describe the intent of the code for others.


> I'd classify this error as functional correctness

It isn't. Functional correctness is about ensuring that the code matches the spec. Preventing those kind of bugs could have a positive or a negative impact on functional correctness, but they are not, in themselves, functional bugs.

> Such class of error is simply not possible in Rust's type system.

This doesn't have much to do with correctness, though, and it is a common mistake when discussing correctness (a subject I have written extensively about https://pron.github.io/). I am not saying that eliminating a certain class of bugs is bad; it might actually be good (although it might also be bad), but it's far from simple. It is absolutely not the case that if one language eliminates a certain class of bugs and another one doesn't, then the first leads to more correct programs. This implication is just wrong, both in its internal logic as well as empirically. To get some intuition for why it's wrong, consider that when naively writing a program on paper, you introduce 100 bugs. When you actually program it, you must use a variety of methods, including tests, to get rid of them. Now, if you use a language that eliminates 10 of them with its type system, it does not mean that given the same amount of effort, you'll catch more bugs in that language than in the other. It's possible (indeed, we know it happens when we work with more powerful type systems like dependent types) that it's harder to write the program in the first language, and in the time it took you to eliminate those 10 bugs -- i.e. getting the program to compile -- you'll have found 20 in the other language. And it's certainly possible that overall there just wouldn't be much difference.

Again, I am not saying eliminating certain classes of bugs is bad. I am saying that it is absolutely not true that doing so is a certain path to better correctness.

In any event, Rust's type system does not directly increase functional correctness. It does eliminate some bugs that could cause functional correctness issues, but whether the approach's cost/benefit is significantly better or significantly worse than, say, Java or Python is very, very far from clear.


> java

Java does not enforce thread safety at compile time. I would take Rust over java just for that.
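Concretely, what that compile-time enforcement looks like (a toy counter, not production code): shared state handed to threads must be wrapped in a synchronization type, or the program won't build.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Passing a bare &mut u32 to several threads is a compile
// error; the Send/Sync traits force the counter behind
// Arc<Mutex<_>>, so the unsynchronized version of this bug
// never makes it past the compiler.
fn parallel_count(threads: u32, per_thread: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let result = *counter.lock().unwrap();
    result
}
```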

Disclaimer - work heavily in Java by day, and have written some cli tools in Rust.


Rust doesn't enforce "thread safety", just the absence of a simple kind of data race. Much of that is because data races could result in undefined behavior in Rust when there's a race between freeing memory and accessing it. This kind of race cannot happen in Java, either.


Have you read Ghosts of Departed Proofs? I agree there haven’t been enough studies. Sufficiently advanced type systems do enable us to express specifications that can check our implementation. It’s not on the level of model checkers or proof assistants but it is an awfully effective tool for constructing programs that are functionally correct with respect to their specification, faster.


Yes, I have, but the question here isn't whether types can be used to verify any correctness property, but whether, overall, there is some correlation between the type system and functional correctness, and there doesn't seem to be one. The question is why this is the case. Perhaps it is because types are not any more effective in their cost/benefit than other ways of increasing correctness. Papers like that are driven by what can be verified with a chosen approach, not by cost/benefit and comparison with other approaches. In other words, they show some property without assigning it a relative value.

Pointing at such papers when discussing correctness relative to other languages is like telling people they'd be much better off working at company A rather than company B, because company A gives out free lunches. Even if company B doesn't, it might offer free dinner, and even if it doesn't the salary can be higher, and even if not, it's unclear how big of a difference A's lunch is. In other words, correctness is very complicated, and there is just no way to look at one property or another and make a conclusion. Indeed, there does not seem to be a big difference in practice.


In the absence of enough studies I find it difficult to draw any hard conclusions.

I think type systems, such as Haskell’s, strike a good balance between formal and informal specifications such as unit tests or diagrams of boxes and arrows.


> In the absence of enough studies I find it difficult to draw any hard conclusions.

Me too, which is why I say that it does not seem to make a big positive or negative difference (big differences are easy to find; there haven't been enough studies about the correlation between eating lettuce and baldness so we cannot draw any hard conclusion about there being a correlation or not, and should continue assuming that it's likely not a big cause for baldness nor a definitive cure).

> I think type systems, such as Haskell’s, strike a good balance between formal and informal specifications such as unit tests or diagrams of boxes and arrows.

I agree. Haskell's type system is OK, and I think most other languages also strike a balance that is roughly equally good, albeit not necessarily through the type system, which is why Haskell doesn't appear to be doing much worse or much better at the bottom line. So I definitely see no good reason to suggest Haskell should lead to drastically lower correctness, and if someone said they cared about correctness and want to use Haskell, I wouldn't think it's an exceptionally bad choice.

I think (simple) type systems have many good benefits (code organization, documentation, tooling, AOT-compilation performance), which is why I like them, but correctness just isn't one of them. And why should it be? Theory certainly does not suggest simple types should have a big impact on correctness. The strength of the Haskell type system is such that for most uses of it, for any well-typed Haskell program, there is a very small finite-state program that satisfies the same type, yet clearly most of what the program does -- as well as its code -- lies well beyond its type-equivalent small FSM. So any argument that says that the type would still have a big impact on correctness despite the theory saying it shouldn't would also need to explain why other techniques, like tests without a type system, aren't at least as effective (cost/benefit-wise).


> I think (simple) type systems have many good benefits (code organization, documentation, tooling, AOT-compilation performance), which is why I like them, but correctness just isn't one of them

Most of those impact correctness (amongst other benefits), otherwise we wouldn't care about them. None of the studies that have been carried out tested the long-term benefits of types, which is precisely where types would shine, so I cannot agree with your conclusions in this thread.


> Most of those impact correctness (amongst other benefits), otherwise we wouldn't care about them.

Well, it doesn't seem like they do because, again, we can't find significant correctness differences. But I care about them because they improve my subjective experience (enjoyment). I don't know if it's universal, though. I care a great deal about the art I enjoy, but I don't make any universal claims about it improving productivity or something. And I find any such claims -- just like unsubstantiated claims about programming languages -- to be just like claims about homeopathy: I understand and sympathize with the psychological mechanism that causes people to make them, but they are unlikely to have any large real-world validity.

> None of the studies that have been carried out tested the long-term benefits of types, which is precisely where types would shine, so I cannot agree with your conclusions in this thread.

It's not a "conclusion." Until we find evidence that lettuce cures baldness -- or causes it -- we assume it doesn't.

But it's even stronger than that. We don't need difficult studies to detect large effects. Large effects are often found even without studies at all. If eating lettuce increased or decreased baldness by 50%, we would have noticed. Especially in software, which is a huge economic undertaking and therefore has powerful selective pressures, adaptive techniques have very clear adoption dynamics, of the kind we've seen with unit testing and even garbage collection. Yet we don't quite see them with Haskell. Haskell doesn't even spread rapidly inside organizations that already use it. On the other hand, it doesn't rapidly disappear, either, as we'd expect from a highly maladaptive practice. Its dynamics is similar to that of a neutral trait.

So the scientific reality is that Haskell in all likelihood does not have a large negative or positive effect on any important bottom-line metric, at least until proven otherwise. This is the current working assumption, and it's great news for Haskell fans: Haskell is probably a reasonable choice, and not significantly worse than others. Looks like the FDA would consider it a safe nutritional supplement.


> Well, it doesn't seem like they do because, again, we can't find significant correctness differences

Like I said, none of the studies conducted so far actually measure this in any realistic fashion. If type consistency, documentation, code organization, and tooling had no impact on the functional correctness of the resulting programs, we wouldn't bother with any of it.

> It's not a "conclusion." Until we find evidence that lettuce cures baldness -- or causes it -- we assume it doesn't.

No, we assume nothing. Why would you even bother making assumptions? Certainly you can advance a hypothesis, but that's not an assumption.

> Especially in software, which is a huge economic undertaking and therefore has powerful selective pressures

I disagree. Software development is a comparatively meager economic undertaking compared to most other human endeavours, as you need only a computer and a human brain.

Furthermore, the selective pressures you speak of aren't really driven by functional correctness ("worse is better", first mover advantage), so I frankly expect to see little correlation between economic success of a software company and the functional correctness of its software products, which in fact seems to be the case. There are plenty of reasons why the effects of types may not be visible in the current data.

> So the scientific reality is that Haskell in all likelihood does not have a large negative or positive effect on any important bottom-line metric, at least until proven otherwise.

Haskell has plenty of other baroque features that might mask the benefits of type checking. We're talking about types in general, not Haskell specifically.


> Like I said, none of the studies conducted so far actually measure this in any realistic fashion.

That doesn't change the fact that our current working hypothesis is that there is no large positive or negative impact. And, again, studies are not required to find large impact.

> If type consistency, documentation, code organization, and tooling had no impact on the functional correctness of the resulting programs, we wouldn't bother with any of it.

I don't understand the logical deduction here. Is functional correctness the only metric we care about?

> No, we assume nothing.

We certainly don't assume that Haskell has any correlation with increased correctness, something the article does.

Also, assuming the null effect as a working hypothesis is practically the same as assuming nothing. In either case, if we want to make a choice (e.g. on correctness) we cannot take the variable (e.g. language) into account. Similar to how you don't consider the effect of lettuce on baldness, even though nothing has conclusively determined either a positive or negative effect.

We never treat the existence and nonexistence of effects as symmetrical in practice, or we wouldn't be able to make any decision. That's what we mean when we say that if there's no evidence for an effect the working hypothesis, i.e. what we use to make decisions, is that there isn't one. Our working hypothesis is, then, that Haskell does not have a significant impact on correctness, and so the article is wrong.

> Furthermore, the selective pressures you speak of aren't really driven by functional correctness ("worse is better", first mover advantage), so I frankly expect to see little correlation between economic success of a software company and the functional correctness of its software products, which in fact seems to be the case.

Wait, but before that you said that if there was no impact on functional correctness then we wouldn't do what we do. You can't have it both ways. I say something different: Haskell's adoption dynamics show that it does not have a large impact overall on the bottom line that the industry cares about. Now, you can take back what you said before about functional correctness being why we do things, but even then some measure of functional correctness is clearly important to the software industry, so we can say that Haskell does not seem to make achieving the level of correctness the industry cares about significantly harder or easier than other languages in common use -- or that if it makes this harder then it makes other things easier, and if it makes it easier then it makes other things harder.

> We're talking about types in general, not Haskell specifically.

Types have a very wide range, as do non-type-based software assurance methods. I think we need to be more specific. But also remember that we're talking about comparisons, so it's not enough to say that types achieve X, but that types achieve X more effectively than other ways of achieving X. X > 3 is no evidence whatsoever in favor of X > Y without additional information.


> And, again, studies are not required to find large impact.

They are if the impact only shows up over time, for instance, because it's cumulative. Smoking unquestionably has the largest impact on your chance of developing lung cancer, but it still took decades of careful study to demonstrate that causation beyond a reasonable doubt. Nowhere near this level of study has been applied to the benefits of static typing (really, no question in CS has enjoyed a comparable level of empirical study, which is a shame).

>> Furthermore, the selective pressures you speak of aren't really driven by functional correctness ("worse is better", first mover advantage), so I frankly expect to see little correlation between economic success of a software company and the functional correctness of its software products, which in fact seems to be the case.

> Wait, but before that you said that if there was no impact on functional correctness then we wouldn't do what we do.

There's no inconsistency between those two assertions. As an overly-reductive example, if my company's revenue consists of 70% inertia from first-mover advantage and 30% from having a product with fewer bugs than my competitors, then functional correctness is not the primary source of revenue. As long as I satisfy the minimal viable product threshold, inertia will keep me alive.

Even so, as professionals we presumably ought to be motivated to minimize our contribution to operational costs and maximize our contribution to revenue. If improved code organization, improved documentation, and improved tooling had literally no impact on functional correctness, then we'd be wasting considerable resources for no observable benefit. There's some slack in there for making your employees' lives more enjoyable, but not enough to account for how much we focus on these other qualities.


> but it still took decades of careful study to demonstrate that causation beyond a reasonable doubt

Not really. The correlation was immediately seen once they looked, because the effect was so pronounced [1]. It then took about seven years, during which virtually every study showed conclusive results, for it to become consensus. This is not the case here.

> Nowhere near this level of study has been applied to the benefits of static typing

First, typing isn't one thing, which does complicate matters a bit. Second, various studies came up with either null result, inconclusive results, or small-effect results. This is not a smoking situation.

> If improved code organization, improved documentation, and improved tooling had literally no impact on functional correctness, then we'd be wasting considerable resources for no observable benefit.

Why? Maybe it benefits development speed? Why did you decide that developers do things for many reasons, but do these particular things just for correctness? And anyway, it's not like types make documentation and organization redundant; they just make them somewhat easier for me, probably because I'm not used to working with untyped languages.

In any event, any claims of correlation between programming language choice and correctness are certainly unsubstantiated, and people should stop repeating what is, at least currently, a myth.

[1]: https://www.theguardian.com/news/2005/jun/02/thisweeksscienc...


Until we find evidence that lettuce [...]

You keep saying that but everyone knows the foundational work here is in pickles, not lettuce:

https://web.archive.org/web/20190324004223/http://jir.com/pi...


The goal of Haskell type system is to have Rust as a library.

For example: http://hackage.haskell.org/package/liboleg-2010.1.10.0/docs/...

One can implement linear types and Rust's borrow checker in Haskell as a DSL. That's the main difference: in Haskell, you describe a system where different parts have different properties, use type-checked DSLs to express them, and use the type checker to ensure that everything fits together.


Isn't safety actually part of functional correctness?

Safety is inherently a part of accurately implementing a program's specification, even if it's not always obvious. For example, a 1-in-a-billion race condition can invalidate the functional correctness of a program, and (Type|Memory) safety is one piece of preventing these types of errors.


I don't understand this argument. Most other languages also give you safety from undefined behavior. Also, an undefined behavior bug is not necessarily more severe than a logic bug.


Undefined behavior is different from data races and memory safety bugs, though, right? I'm not saying it's more severe, but it's definitely part of the puzzle.


Which languages would you say have expressively "strong" type systems?


Agda. ATS. Idris. Liquid Haskell would be medium-strength.


Quick meta question.

Does this kind of article/content work for getting consulting clients?

I am not sure I would pick them if I ever needed a Rust project done, but maybe I am not the target audience.


Well they're primarily a Haskell shop IIRC so it makes sense?

Also making content for getting clients absolutely works, it's the same concept behind big/small company engineering blogs.


The idea of a ubiquitous need for significant enforced correctness assurance is one of the worst things in software. Correctness is a resource, like time or memory. Some applications need an extremely high degree of correctness; some don't (and can even fail badly, usually through delay or rigidity, if correctness is over-provisioned).

The presumption that every project should begin by adhering to patterns that guarantee heavy spending on correctness is a bad kind of premature optimization / premature abstraction.

Rust & Haskell are great tools in the rare cases where you do need extreme correctness. But in my experience most situations do not benefit from spending that cost (the ways Haskell & Rust enforce program structure are a huge cost) to get correctness, and you can get approximately just as much correctness with pretty low-touch unit & integration tests in most languages, without the heavy costs of round-trip compile times, static typing, forced (instead of selective) management of mutability and ownership, and so on.


> the ways Haskell & Rust enforce program structure is a huge cost

In my experience, it is the opposite of a cost - "worrying about types" doesn't slow me down. Instead, types help guide me, help reason about my code, and help documentation. Coding with types is, for me, faster than without.

In fact, I find myself worrying about types the most in dynamic languages - I'm never sure what I can pass to a function, or what a certain thing is, so I have to manually review what a simple type signature or inferred type would tell me.
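As a small illustration of that point (the function is made up), a signature in a typed language answers "what can I pass?" before you ever read the body:

```rust
// The signature alone documents the contract: a slice of any
// ordered type goes in, and the index of the largest element
// comes out -- or None for an empty slice. No manual review
// of the body required to know what's accepted.
fn position_of_max<T: Ord>(items: &[T]) -> Option<usize> {
    items
        .iter()
        .enumerate()
        .max_by(|(_, a), (_, b)| a.cmp(b))
        .map(|(i, _)| i)
}
```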


I agree. People forget that you can't avoid types. Types are the shape of the data. Whether they are static or dynamic/unityped, you still have them, and you have to deal with them. I'd much rather deal with them upfront, statically, at compile time. I find it works much better for me, and it helps me model my program. The compiler is a tool I use to ask questions about my program, and we work together to build the shape.


> It's the shape of the data.

This is as far as types are required to go. (And not even strictly -- sometimes it makes sense to switch on the shape dynamically.) The hype around types in contemporary languages goes far beyond mere shape, and that can lead to unnecessary program structure which can block further development.


This seems like too narrow a reading of OP's point. Rust imposes not only types, but also a particular tree-like program structure which makes it awkward to express lists, upward references, and other constructs.


Let's say I'm only responding to that narrow reading, and am not objecting to a wider interpretation.


If it was just simple type annotations, type synonyms for domain modeling, I’d agree. It’s not at all though. Just consider how extremely hard it is to set up systems of dynamic dispatch in Haskell.

I do however think the “types help me reason about code faster” argument is a lot of bollocks. Years of experience working in large enterprise Haskell & Scala codebases made me believe this is not true at all.


When is "dynamic dispatch" a business requirement? You clearly are trying to put square pegs in round holes for no reason other than habit.

Passing in a value containing a function isn't hard, but I very rarely want to do it.


> Just consider how extremely hard it is to set up systems of dynamic dispatch in Haskell.

What? Make a record of function values. Done.

You can also go with an existentially quantified type with type class constraints if you just want the moral equivalent of [Show _], say.
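The same record-of-functions trick works on the Rust side of this thread's comparison too; a sketch with invented names:

```rust
// Dynamic dispatch as a plain record of function values: the
// behavior travels with the data. (Box<dyn Trait> is the more
// idiomatic Rust route, but the record version shows how
// little machinery the pattern actually needs.)
struct Greeter {
    greet: Box<dyn Fn(&str) -> String>,
}

fn loud_greeter() -> Greeter {
    Greeter {
        greet: Box::new(|name| format!("HELLO, {}!", name.to_uppercase())),
    }
}

fn polite_greeter() -> Greeter {
    Greeter {
        greet: Box::new(|name| format!("Good day, {}.", name)),
    }
}
```

Callers pick a `Greeter` at runtime and invoke `(g.greet)(...)` without knowing which implementation they got.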


I do find types helpful in smoke testing greenfield code where it needs to deal with structured data and references. But definitely the more complex approaches to typing have sharply diminishing returns. The static invariants I start caring about in complex scenarios are more along the lines of contracts or structured concurrency, and I don't want to reason about those with types.


> and you can get approximately just as much correctness with pretty low-touch unit & integration tests in most languages

You can get about the same amount of correctness in Haskell or Rust as you can in typical languages by omitting all unit tests.

This is where people see a certain sliver of the Haskell or Rust community pursuing increased correctness with diminishing marginal returns and assume it must represent the whole community. But didn't those same communities advertise their languages with that magical feeling of "if it compiles then it probably works"? Plenty of people use only simple language features, write no unit tests, and achieve a level of correctness similar to software in other languages with unit tests. But they aren't as visible, because efficient, boring software engineering doesn't get blog posts.

Besides that, I actually totally agree with your first paragraph. Not all software needs to be totally correct. But my argument is that if you know you don't need total correctness, it will be less work to achieve a certain level of correctness in Haskell or Rust than in typical languages.
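Much of that "less work" comes from encoding invariants in types so that whole categories of tests never need writing; a toy typestate sketch (all names mine):

```rust
// A connection that can only send after authentication: the
// "send before login" bug is unrepresentable, so there is no
// test to write for it and no runtime check to pay for.
struct Disconnected;

struct Authenticated {
    user: String,
}

impl Disconnected {
    // Consumes the unauthenticated state; the old handle is gone.
    fn login(self, user: &str) -> Authenticated {
        Authenticated { user: user.to_string() }
    }
}

impl Authenticated {
    fn send(&self, msg: &str) -> String {
        format!("{} -> {}", self.user, msg)
    }
}
```

Trying to call `send` on a `Disconnected` value simply doesn't compile, which is the kind of correctness you get "for free" rather than through test effort.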


The “if it compiles it probably works”-tagline is certainly truthful in my experience, even without unit tests. I can imagine scenarios where you use Rust in a “non-rustic” way where this might not be true, but this goes against the grain of the language and is usually not very enjoyable to do.

These correctness guarantees and tests have the nice side effect of helping you isolate problems, and writing bigger projects in Rust is for that reason alone more comfortable than, say, in Python.


>The “if it compiles it probably works”-tagline is certainly truthful in my experience, even without unit tests. I can imagine scenarios where you use Rust in a “non-rustic” way where this might not be true, but this goes against the grain of the language and is usually not very enjoyable to do.

Same situation in Haskell with same provisos. It's very profitable for my coworkers and I to use Haskell or Rust even if it's a slapdash prototype. Saves time now and time later.


In my admittedly limited experience with Rust, writing tricky data structure library code, “if it compiles it works” was ludicrously far from true. That’s not to say that the type system didn’t help immensely with correctness, but what helped the most by far was property-based testing, which works just as well in dynamically-typed languages.
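For readers unfamiliar with the technique: a property-based test asserts a law over many inputs rather than one hand-picked example. A hand-rolled miniature (real crates like quickcheck or proptest add random generation and shrinking):

```rust
// Property: reversing twice is the identity. Checked across a
// batch of inputs instead of a single example case.
fn reverse<T: Clone>(xs: &[T]) -> Vec<T> {
    xs.iter().rev().cloned().collect()
}

fn round_trip_holds(cases: &[Vec<i32>]) -> bool {
    cases.iter().all(|xs| reverse(&reverse(xs)) == *xs)
}
```

Nothing here depends on static types, which is the parent's point: the same property can be stated and checked in a dynamically-typed language just as well.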


If you worked on a low level data structure library I’d assume your experience is quite normal, as you are not really using the type system as much — this however changes a lot if you program on a higher level.

I have used Python for years now and I am convinced my Python code has more errors than my Rust code, just because certain errors that Rust wouldn't even compile are non-trivial to detect.


> and you can get approximately just as much correctness with pretty low-touch unit & integration tests

This isn't accurate. It both vastly overstates the domain & value of unit tests and vastly understates what's possible to assure using types [1].

1) https://blog.auxon.io/2019/10/25/type-level-registers/

disclaimer: I am the CEO & co-founder of the linked company, but it nonetheless shows some very powerful uses of Haskell/Rust-style types to provide assurances you couldn't get with even PBT, let alone simply unit tests.


This seems to be using advanced macros, some sort of Church-like encoded integers, and a special error rewriter to simulate integers as types.

Wouldn't this be much simpler in C++ with its non-type (value) template parameters?


Can you explain what parts of that article illustrate your counterargument? I don’t see anything in it that demonstrates overvaluing correctness from testing, and the domain modeling (eg Register) in the post seems juvenile and very unnecessary, certainly not creating any sort of expressiveness or correctness benefit, merely just some organizational preference of the writer of the code, which might even be counter-productive in the minds of people tasked to refactor it later.


Re: Unit Tests. To say that they're not exhaustive is an extreme understatement. You test the happy-path, maybe a few sad-paths you can think up, and if you're a really fastidious unit-tester, then you add future sad-paths you discover (often post deployment) to your test scenarios. They cover the narrowest slice of the input domain and the assertions are against the narrowest slice of the output domain. They're _just barely_ better than nothing for establishing some correspondence between requirements and implementation.

One could graduate to property-based tests to attempt to do better than this by being able to specify a more global behavioral/constraint property that the implementation must uphold across all possible inputs. However, except in the case of really small state spaces, the input domain is still just a sampling of potential inputs (though perhaps many thousands rather than a small handful that would be represented by unit testing). You're probably 2-3 orders of magnitude closer to meaningfully establishing a correspondence between your requirements and your implementation this way, but purely by way of having created a kind of brute-force-ish automated generative unit testing system.

Re: The article posted. The final API exposed to the end-user is very simple and fairly idiomatic for interacting with registers (the approach could be applied to other kinds of resources as well). It's no more, and probably even less, baroque for such a task than a typical C API, but with much higher assurances provided to the consumer of the API. The compiler is more-or-less enforcing correct access to the underlying hardware model, so that you don't do something dangerous in any random corner of your program, like incorrectly write a value meant for one register to another, flub a mask, or misalign a shift. The article shows a way to get strong assurances with pretty much no extra effort on behalf of a consumer of `bounded-registers` because the crate authors baked the assurances into the crate via types.

How would you cover the domain of all possible combinations of register layouts, field layouts, field to register relationships and field values using unit testing and enforce those constraints from the bottom of the stack all the way to the top? How many unit tests would it be, and what assurances would you be able to claim, and how would you avoid the run-time cost of constantly checking and/or clamping values, and how could you compose those assurance claims with others further up the stack "by construction"?
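The core move can be sketched in a few lines. This is a hypothetical analogue, not the linked crate's actual API: the article predates Rust's const generics and builds its bounds from type-level numbers, but the idea of a checked-at-construction bounded field is the same.

```rust
// A value proven in range at construction: holding a
// Field<MAX> is itself the proof that value <= MAX, so
// downstream code needs no runtime clamps or re-checks.
#[derive(Debug, PartialEq)]
struct Field<const MAX: u32>(u32);

impl<const MAX: u32> Field<MAX> {
    // The only constructor; out-of-range values are simply
    // unobtainable as a Field<MAX>.
    fn new(value: u32) -> Option<Self> {
        if value <= MAX {
            Some(Field(value))
        } else {
            None
        }
    }

    fn get(&self) -> u32 {
        self.0
    }
}

// A 3-bit mode field: only 0..=7 are constructible.
type Mode = Field<7>;
```

Instead of unit-testing every register write site for range errors, the constraint is enforced once, at the type, and composes upward "by construction."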


It's honestly not a big cost once you become an advanced user of it. I've been getting paid to do pure FP for like 5 years and the language rarely gets in the way. You can pick how statically correct you want to be, and refactoring in one direction or the other tends to be rather mechanical. And when it isn't, there's typically something interesting there anyways.


By the same token, tests and the “heavy costs” of testing round trips should also be considered resources.


Yes, for sure. But unlike language features that enforce correctness, you can choose what and how to test to suit your use case. Haskell for example doesn’t support gradual typing. You must always pay the static typing cost regardless of whether you need it.


Gradual typing does allow a more graded trade-off between types and tests.

Weak typing doesn’t really allow you to make that trade-off, since testing is going to be directed by failures and you don’t have the option to use types instead. Although in theory you have discretion with regards to tests, in practice you have to write a lot of them. Kind of like how interfaces and `Object` let you write less specific types in theory but in practice you end up writing them anyways.


> Haskell for example doesn’t support gradual typing

Sure it does. It’s possible to gradually make the contracts of your Haskell functions more precise by more finely specifying the behaviour in their types.

I think what you mean, though, is that the lower end of the well-typed spectrum is much higher than most dynamic language users would be comfortable with. This is true in practice, but in principle there’s no reason everyone couldn’t just use Dynamic everywhere. It just turns out that’s not particularly useful.


https://typesandkinds.wordpress.com/2016/01/22/haskell-as-a-...

I wonder if the extension proposed in this blog post would fit the bill and be attractive to certain non-Haskellers?


It doesn't work. Tangentially, I had an interesting experience reading this blog post:

* This is an amazing idea! Why did I never think of it?

* Oh, it’s never actually going to work ...

* ... and I said so much as a comment on the blog post three years ago


Well I suppose it's honestly not a huge value proposition anyways imo


I don't agree, but I wanted to say to everyone else:

Please don't downvote coherent comments where someone obviously put an effort into expressing their argument just because you disagree. This is perfectly valid discussion.


That's not how HN works. There's no 'don't downvote to disagree' rule. There is a 'don't go on about downvoting' one, though.


I doubt it will surprise you, but many disagree with you. Using Haskell isn't a burden - on the contrary, it's far easier to get stuff working correctly. The type checker looks over your shoulder and stays out of the way until you make a blunder. It's not uncommon for things to work as soon as they pass the compiler. I work in C every day, and that is not the typical experience there.

I'm hugely interested in Rust, but I'm disappointed that they felt the need to rename all concepts and adopt an ugly syntax. I assume it was done to attract the C++ crowd.


I find the point about "ugly" syntax interesting, because I'm generally quite indifferent to that sort of thing, but my experience tends to indicate that it's very important to people.

Facebook created ReasonML, which is just a different syntax for OCaml (I think the AST parsed from it is interchangeable with one parsed from OCaml), purely because it's less alienating to people who are used to writing C-style code.

I'd say though, that generally, people who are interested in "niche" programming languages are less likely to be put off by syntax than people who have only been exposed to the more mainstream languages, which is why their preferences are less likely to be the ones reflected whenever mass adoption is a goal of language designers.


Additionally, at least in (GHC) Haskell, you can recover most of the flexibility typical of dynamic languages via use of undefined values, partial functions, the PartialTypeSignatures extension, and the defer-type-errors flag. Personally, I've found these tools to be very useful when prototyping, exploring/experimenting in a REPL, or beginning a refactor, and the process feels very akin to development without unit tests (or tests turned off/ignored/whatever), especially in the progression of incrementally adding back compile-time-checks and watching my mistakes surface.


Great advice, I'll try these features out while prototyping in Haskell.


Probably the other way around. When I first started using Rust it still had LWT and segmented stacks. I was excited that it seemed like I'd finally have a sort of "systems" Erlang. Over time the only consistent large nexus of Rust users using it "in anger" were the compiler team itself and the Servo team. Lots of those folks were, or had been, working on pretty significant C++ code bases, and over time their needs and perspectives seemed to (kind of necessarily) take hold.


Would you say that's still the case? I've also been hoping that Rust would see a kind of Actor-oriented ecosystem emerge, and while there's definitely projects out there, none of them really support distributed computation in the same way you'd get from Erlang. At least yet, I'm still hopeful.


You want a Pony.


Yeah, Pony is a really interesting language. Too bad it doesn't get more traction.


Rust is designed to replace C++.

That "ugly syntax" is the de facto standard in the industry. No one, comparatively, would use Rust if it went the Haskell/ML way.


Lots of people in the industry are using Python, which has a syntax very different from C++.


I’d say the syntax is pretty much the same, besides using indentation instead of braces for defining blocks.


Quality does not waste time, quality saves time.

That's a mantra from production engineering, but it applies just as well in any engineering activity, even when constructing abstract machines.


> The presumption that every type of project should fundamentally begin with adherence to patterns guaranteeing high resource usage of correctness is a bad type of premature optimization / premature abstraction.

Yes. Devs are often found on the four corners of the internet screaming things that could be extrapolated to:

"Undefined Behavior is unacceptable regardless of the project!"

"Garbage Collection is the devil!"

"How dare you code in a language with nulls?"

"Can you even sleep knowing your code might have race conditions?"

Sure, I'd love to code in a stable language with no nulls, no GC, no UB and no race conditions, while still being as easy to learn and as quick to onboard for the average dev as C#/Java/Go/Python, and having a large community with tons of libraries available. No such language exists yet, so tradeoffs have to be made.


"Undefined Behavior is unacceptable regardless of the project!"

This is an ambiguous statement as to whether it is about avoiding officially defined "undefined behavior" or about not programming in a language that has it. I agree that the former is impractical, but I think the latter is good advice.

If a language defines something as undefined, and if we agree that permits nasal demons, then undefined behavior is unthinkable, intolerable, yet inevitable. It's a spiral into insanity, because this philosophy is self-contradictory and those who claim to believe in it do not believe in it at the same time.

But if a language simply doesn't define everything, then nasal demons are off the table and we can all assume that we are reasonable people who are part of an advanced civilization that deals with exceptions and the unexpected on an ad hoc basis.


I wonder if TypeScript shares more features with Haskell than Rust does (at least in the type system).


It's actually farther from Haskell than Rust is.


Why? It has union, intersection, recursive and conditional types.


Adding to sibling comment: structural typing vs traits is kind of a game changer. The way I use them is very different. TS still feels like OOP (since it's just a formalization over JS's implicit native typing). Rust and Haskell feel mostly the same to me wrt how I think about my code (abstraction through capabilities).

Intersection types are very different. In Haskell/Rust it means "this generic type implements traits A, B and C". In TS again it's about structure "it has at least the properties shared by types A, B and C". This means the intersection of A, B and C is all of A, B and C in Rust/Haskell, while in TS it can be a completely different type that only has their shared props, which makes no sense in Haskell/Rust since traits/typeclasses are disjoint (their intersection is always empty).

All of them have recursive types (though in Rust the recursion must be boxed explicitly to make the type sized) and union types.

Neither Rust nor Haskell have conditional types. This is really cool from TS: types are functions! Rust/Haskell are purely nominal.


Interesting, but I don't really get the difference about intersection types. (abstraction through capabilities) What differentiates an intersection type from a mixin? Secondly, I get that intersection types in TypeScript are more permissive, but they are mostly functionally the same, aren't they?


Traits are similar to mixins; this Stack Overflow answer explains it better than I could:

https://stackoverflow.com/questions/925609/mixins-vs-traits

Sorry about the intersection confusion, I mixed up things.

Intersection generic types are types that adhere to a set of traits (all of them, completely). Since traits don't overlap (even if their method names are the same), sitting at their intersection requires each one to be implemented explicitly.

E.g. if you have an interface A { length: number } and an interface B { length: number, size: number } in Rust/Haskell you need to implement both explicitly for your type, while in TS your type just has to adhere to B's shape to be both.

TL;DR: it's mostly about the "structural vs nominal" + "traits vs interfaces".

But yeah they're functionally the same indeed, sorry for the confusion.


Neither Haskell nor Rust have subtype polymorphism, TypeScript does. Haskell and Rust have type classes (traits), TypeScript does not. Etc.


Just to add since I find it an interesting nitpick: Rust has subtype polymorphism in lifetimes.


The example in the polymorphism section feels strange to me. Why convert from Option to Result?


IMHO conversions between various “error types” are quite common when gluing together different APIs. Like when you are chaining a handful of Result-returning calls and there’s one in the middle that returns errors using Option.


This is exactly right. We habitually lift `Maybe`/`Option` values into a more explicit and comprehensive `Either`/`Result` type in our programs. It's pretty common to have an error type hierarchy in the program too, so you could find yourself chaining from one `Either e` to another. There are libraries in Haskell and Rust for regularizing/code-gen'ing these patterns.


We see influences from Haskell everywhere.



