Hacker News

Okay, before I say 'you win', can you show me this in Haskell:

   ; -- the program --

   (defn add* [x y] (+ x y 1))

   (add* (read) (read))

   ; -- repl demo --
   3
   40.1
   44.1


I tried this, because my first thought was that it was much the same as the first example. Certainly Haskell doesn't have a variadic addition function by default, but I don't think that's the biggest concern here (variadic addition is easily just summing a list). The surprising part was entering both an "integer" and a float to be read. You certainly can parse floats from input, but `(+) <$> readLn <*> readLn` doesn't do it, even though at first glance it seems like it should. If you want to handle fractional values, you have to hint to the type checker that you want to do so. I do think this example is perhaps too narrow: it highlights only dynamic number conversion, which is, depending on the situation, both a blessing and a curse. I will concede, though, that if you want dynamic number conversion, Haskell isn't going to give you that easily.
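For what it's worth, here is a small sketch of why that expression falls over (my reconstruction, with a hypothetical helper name): with no annotation, defaulting resolves the `(Read a, Num a)` constraints to `Integer`, so the input line `40.1` fails to parse at runtime. Pinning the type to `Double` makes both lines parse, since `Read Double` accepts `3` as well as `40.1`.

```haskell
-- addBoth is a hypothetical pure helper standing in for the readLn
-- version; read at Double, both "3" and "40.1" parse successfully.
addBoth :: String -> String -> Double
addBoth x y = read x + read y

main :: IO ()
main = print (addBoth "3" "40.1")
-- the interactive equivalent: (+) <$> readLn <*> readLn :: IO Double
```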


In the absence of other information, GHC's numeric defaulting resolves an unconstrained numeric type to Integer, so we have to specify the type we actually want. Before you convince yourself that's an obvious loss for Haskell, bear in mind that you have to specify the type in only one place in your whole program and it will propagate everywhere else, even if your program is 10M lines.

    Prelude> add_ x y = x + y + 1
    Prelude> add_ <$> readLn <*> readLn :: IO Float
    3
    40.1
    44.1
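To make the propagation claim concrete, here is a standalone sketch (my own example, not from the thread): the helper carries no signature at all, and the single `Double` annotation at the use site fixes every type in the program.

```haskell
-- No signature here: GHC infers add_ :: Num a => a -> a -> a.
add_ x y = x + y + 1

-- The one annotation in the program; it propagates back into add_,
-- instantiating it at Double.
result :: Double
result = add_ 3 40.1

main :: IO ()
main = print result
```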


But this is the point.

My Clojure program is valuable/effective/per-spec without the type burden. So the type burden has slowed my delivery.

And that "one single place" is for this trivial toy program. Now imagine the impedance across the delivery of a real business app.


I was thinking about this sentiment:

  And that "one single place" is for this trivial toy program. Now imagine the impedance across the delivery of a real business app.

That seemed very incongruous with my experience. I think the problem is that, in my experience, the number of type annotations necessary scales sub-linearly with the amount of code. This is because Haskell (and other languages with what one might consider pleasant type systems) generally has full type inference. So while you might need the explicit type for the toy example, scaling that example up means only a handful of additional explicit types, and possibly a net reduction.

This comes up in a concrete manner in your example. The add function is almost too small to be typed appropriately: there isn't enough information for GHC to constrain the choice of numeric representation, so it picks one. If you were to embed it in a larger program with more specific operators, no types would be necessary.

  Prelude> let add = (+) <$> readLn <*> readLn
  Prelude> let foo = (/) <$> add <*> add
  Prelude> foo
  3
  30.1
  40
  10.2
  0.6593625498007968

So it's quite possible that the "type burden" becomes smaller as your system grows.


What you did here is the very opposite of what real-world development looks like. Here, you picked a `foo` so that the type checker could find consistency.

Typically, in the real world, the customer or product manager asks for the thing-of-value. The job is not to find a `foo` that reduces the type annotations. The job is to add business value to your system, to your code.

The probability that the world asks for something that fits so nicely into your type taxonomy is low. The real world and the future are often more creative than our invented taxonomies support. In those cases, what you need to do is go fix up your types/taxonomy so that consistency ensues again. Whether or not that results in fewer type annotations doesn't ultimately matter. It's the work you had to do (the game you had to play) to make the type checker happy.

Note: The implicit assumption behind what I'm saying is that we want to create less work for ourselves, not more. If we don't care about work/time/efficiency, then my arguments in this thread are all moot/baseless.


I did no such thing. I picked a `foo` to prove a point: That the type inference engine is more than capable of inferring every type in a reasonably sized program, and the toy examples we are dealing with are below a minimum threshold. It had nothing to do with satisfying the type checker or finding some "consistency". Furthermore, the larger point was simply that the "type burden" is often not a burden at all and can in fact be completely invisible to you the user. Look at https://hackage.haskell.org/package/turtle-1.4.4/docs/Turtle... and see how very few of the examples have explicit type annotations, and when they do they exist for explanatory and documentation purposes only.

Please don't assume the motivations or intentions behind my actions and then attempt to use that to prove a point.

To address that point explicitly: in general, if requirements change then the code needs to change. Neither Clojure nor Haskell will make that fundamental truth go away. You say you want to create less work for yourself when this happens. I'd like to propose that there are different kinds of work, with differing qualities.

When you receive a requirement change, the first step is to determine what it means for the system as it stands. This is by far the hardest step and is where the majority of the "hard" work lies. The next step is to change the system to fit the new requirement. This is primarily a mechanical task of following the directions you laid out in the first step, and at this point it's preferable to have some tool or process, even several, to make sure you do it correctly.

Tests, types, and code review are the three major tools I've seen aimed at this problem. Tests help ensure you've specified the changes correctly by forcing you to write them down precisely. Types help ensure you've implemented the changes correctly by forcing consistency throughout the system. Code review helps ensure you've implemented the changes in a manner that is understandable (maintainable) by the rest of your team.

Note also that tests require an investment in writing and maintaining them; types (with global type inference) require no upfront investment but can require some maintenance; and code review requires the participation of team members of similar skill level to you. They all require some level of expertise to be effective. It seems shortsighted to throw away static types when they are, in my opinion, the cheapest of these aids to correct implementation.


> Tests, types, and code review are the three major tools I've seen aimed at this problem.

I did a job for a defense contractor once where the entire team of ten was brought into a room to code review, page by page, a printed-out copy of the latest written code. Whether this was rational, I'm not sure, but I'll grant that in mission-critical systems you don't want to take any chances, and you're willing to be far less efficient to make sure you don't deploy any bugs.

I've been at a few companies where one of the engineers discovered code coverage tools and proposed that the team maintain 100% coverage of their code. These were mainstream industry software businesses that could afford some minor bugs here and there, so no one thought this was a good idea. Most engineers I know doing industry programming think that trying to sustain a specific coverage percentage is a bad idea; they say you should write your tests with specific justification and reason.

So no reasonable person I know would advocate a blanket mandate to do code reviews like the one above, or to hit some specific metric on automated tests. And yet the general tendency of statically typed programming languages is to enforce the usage of types through and through. That should strike us as interesting, considering that code review and automated tests are far more likely to root out important bugs than a type checker.

I'm not arguing against type verification. I agree with you that it is one tool among many. It's the weakest of the three tools you mentioned, and yet we are still somehow arguing about whether it should be (effectively) mandated by the programming language.

Why mandate type verification and not unit test coverage? Why not mandate code review coverage? Because, when mandated, they cost us more time for the benefit they bring.

My main argument is that type verification is a tool that should be used judiciously not blindly across the board.


> This should be really interesting, considering that Code Review and Automated Tests are far more likely to root out important bugs than a type checker.

Not to bring up the tired argument again, but as far as I know there isn't proof for this. Nor is there proof that types are better at catching bugs than tests or code review. The best we can hope for is anecdata. I've got my fair share that shows types in a good light and I'm sure you've got your fair share that shows tests in a good light.

That being said, global type inference is in my opinion a fundamental game changer here. No longer are you required to put effort into your types up front. It is the only one of those three tools with that property. This makes it trivial to get "100% coverage" with types. That is why people argue for static type coverage.
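As a small illustration (my own sketch, not from the thread) of what that "free coverage" looks like: no binding below carries a signature, yet GHC infers and checks a type for every one of them, which you can confirm with `:type` in GHCi.

```haskell
-- With zero annotations, inference gives:
--   halve :: Fractional a => a -> a
--   total :: Fractional a => [a] -> a
--   main  :: IO ()
halve x = x / 2
total xs = sum (map halve xs)
main = print (total [3, 40.1])
```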

Additionally, Haskell (GHC, specifically) has a switch, `-fdefer-type-errors`, to defer type errors to runtime, which means you can compile and test without 100% coverage.
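A minimal sketch (my own) of what that switch buys you: the module below contains a genuine type error, but with the flag (usable as a module pragma) it still compiles, with a warning, and the error only surfaces at runtime if the ill-typed binding is actually forced.

```haskell
{-# OPTIONS_GHC -fdefer-type-errors #-}

-- A genuine type error: a String where an Int is declared.
broken :: Int
broken = "not an int"

-- The rest of the module runs normally as long as 'broken'
-- is never evaluated.
main :: IO ()
main = putStrLn "still testable"
```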


Really nice observation! Thanks.


I'm going to assume that if you take 10 times longer to respond to this than it took me to respond to you, then my "orders of magnitude" claim has been proven.

(...friendly joke, of course.)


If using Clojure means that one no longer needs sleep then I concede immediately.



