Hacker News
Is Typed Clojure worth the trouble? (juxt.pro)
79 points by jonpither on June 22, 2015 | 60 comments

A slightly unrelated question: Are there any plans to make the compiler "aware" of the type annotations, so we could get performance improvements? Basically, Typed Clojure going a route similar to asm.js.

For Clojure, I know there has been talk about this. Clojure lets you insert type hints, which allows the compiler to avoid reflection in certain cases, improving both runtime and compile speed. Typed Clojure should be able to insert type hints automagically for you, since you provide the type information anyway, although it doesn't do that today.

For ClojureScript there really isn't any performance optimization to be done, as far as I know. asm.js doesn't really help out high-level functional languages like ClojureScript.

I believe ClojureScript only supports boolean type hints/performance optimization. Other type hints are "supported" but don't do anything.

Yeah. They are not really necessary in a JavaScript VM that is optimized for this sort of language anyway.

This is not entirely accurate; the ClojureScript compiler actually propagates type information internally for further optimization. And nothing precludes a Typed ClojureScript which feeds even more information to the compiler.

I am impressed by the fact that Typed Clojure is a lot of things in addition to language design -- library development, editor/tool development, even community engagement and evangelism.

> Clojure codebases typically rely on tests to verify the correctness of the code.

OT, but this is a pet peeve of mine. Tests aren't verification, and they cannot demonstrate correctness (without being exhaustive). Clojure codebases typically rely on tests to mitigate the risk of runtime errors.

> Tests aren't verification, and they cannot demonstrate correctness (without being exhaustive).

Agreed, but it's important to note that there's very little that can demonstrate correctness. Most type systems by themselves don't; as Pierce describes it in TAPL, "[a] type system is a syntactic method for automatically checking the absence of certain erroneous behaviors" (emphasis mine). It's possible that, in a dependent type system, those certain behaviours will be sufficiently many that you can prove the correctness of a program specification, but even a rich system like Haskell's usually doesn't offer such guarantees (just consider `tail []`; or, for an example that's not just a Prelude wart, `take n xs`).

It's difficult to demonstrate full functional correctness. However, it is pretty easy to demonstrate partial correctness. Most tests -- even if they were completely exhaustive -- correspond to partial correctness rather than functional correctness.
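To make that distinction concrete, here's a hypothetical sketch (the function name is invented): a test suite can pass on every input it covers and still only establish partial correctness, because the bug lives in the untested region of the domain.

```typescript
// Hypothetical illustration: a clamping function with a subtle bug.
function clampPercent(n: number): number {
  // Bug: negative inputs should clamp to 0, but pass through unchanged.
  return n > 100 ? 100 : n;
}
```

Tests for 50 and 150 both pass, so the function is "correct" over the inputs exercised; `clampPercent(-5)` still returns -5.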

I like how:

- first, the ClojureScript REPL whines at smelly coercions out of the box

  cljs.user=> (+ 1 "2" nil)
  WARNING: cljs.core/+, all arguments must be numbers,
      got [number string] instead. at line 1 <cljs repl>
  WARNING: cljs.core/+, all arguments must be numbers,
      got [number clj-nil] instead. at line 1 <cljs repl>
- second, the author kind of hints you should be using schemas & coercion already (so as to desmellify)

- third, you can typecheck a codebase incrementally

Also, I really thought I'd see that quote about 100 functions operating on 1 data structure being easier to grok than 10 functions operating on 10 different data structures. Of course, it's nicer when that 1 data structure (or even the 10) has a schema instead of being void *.

Doesn't Clojure already have optional static type annotations like Common Lisp? I thought it did when I looked through the tutorials.

In Common Lisp you can define pretty much any type (with deftype); you can define algebraic types and even value-dependent types. Once you add full type declarations to your code, you will get warnings for inconsistent and violated type declarations.

Type checkers are an automated solution to the problem of not having good teammates. If you don't have that problem, you won't benefit that much from type checking and might have many other huge advantages.

That's nonsense, frankly. I've worked with an excellent team of Haskellers before, and I don't think we'd have been anywhere near as good without the type system. I'd agree that having an awesome team gives you a lot of other advantages, and actually FP is a huge opportunity for startups to build that team. FP tends to attract the kind of people who make up a great team. It's for this reason that I think the conclusions of the flub paradox article are wrong: http://steved-imaginaryreal.blogspot.co.uk/2015/06/the-flub-...

Thanks for the downvotes; I was referring to things beyond FP. A type system is a set of early restrictions (as opposed to late, runtime, or late-bound restrictions). But now that you mention it, I'm curious whether you think the type checker let you and your team reach the goals of the project faster or not.

Sometimes restrictions are useful...

Aside from that, in Haskell the type system isn't usually a restriction. Experts in Haskell tend to use the type system to guide them towards a correct solution.

I'm not sure it's possible to say that we couldn't have built something quicker in another language or stack. What I will say is that the underlying abstraction we built our system on (streams of events) is very well suited to a Haskell implementation, and I can say with some certainty that building a solution as "correct" as the one we built would have been extremely challenging. The code also held together longer than has been my experience in some other languages and communities.

I had a system in C, with a goal of extremely low latency. This involved a small collection of threads, each serving a different role. Project goals would periodically change, and I would find myself needing to move some functionality from one thread to another. With some creativity, I was able to enlist the help of the type checker in assuring that a particular function ran only on a particular thread. This sped me up tremendously, as my tooling would point me to inconsistencies rather than having to reason them out myself.
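The same trick -- enlisting the checker to confine functions to a particular thread -- can be sketched in TypeScript with a branded token type, though the original was C. All names here are hypothetical: a function that must run on the I/O thread demands a token value that only that thread's entry point ever constructs.

```typescript
// A token only the I/O thread's main loop can mint.
type IoToken = { readonly __brand: "io-thread" };

// Only this entry point constructs an IoToken, so code holding one
// is, by construction, running inside the I/O thread's body.
function ioThreadMain(body: (t: IoToken) => void): void {
  body({ __brand: "io-thread" });
}

// Requiring the token means the compiler rejects any call site
// that doesn't have one in scope (e.g. the compute thread).
function readSocket(_token: IoToken): string {
  return "data";
}
```

Moving a function between threads then means changing its token parameter, and the compiler points out every call site that needs to follow.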

> I'd agree that having an awesome team gives you a lot of other advantages, and actually FP is a huge opportunity to build that team for start ups. FP tends to attract the kind of people who make up a great team.

Yes, absolutely. It's amazing how overlooked this fact seems to be, by many. Languages aren't that hard to learn, so excluding the really awful ones (e.g. Java for anything other than code that has to be on the JVM and has to be fast) the "familiarity" aspect is minuscule. Programmers learn new languages quickly. What does matter is the community and the quality of programmer you'll attract based on the tools you use and the signals that your tools send about who makes decisions.

Why is Java one of the really awful ones? I know Java is not the best language, and it's not my favorite language, but I wouldn't say it's really awful.

Two words: no functions.

Java has functions; one word: "static".

You can't pass a static method as an argument to another static method. You can't write a static method that returns a static method. You can't partially apply a static method. Static methods are not functions.

Yeah, it doesn't have closures.

I think Kyllo was referring more to first-class/higher-order functions. I believe Java 8 added support for lambda expressions, but it's nowhere near the same thing.

I think so too, and I agree about the power of being consistent with first-class things (functions in this case).

Java 8 does have functions as values/lambdas

That's the lack of higher-order functions and the lack of first-class functions. Anyway, I think higher-order functions and method references were added in Java 8, along with lambda functions.

As much as I love dynamic languages, I have to disagree friend. :( Static typing confers more benefits than just enforcing a self-documenting type check on your code. It adds a level of expressiveness about datatypes to match the expressiveness of the algorithm itself. This is especially true in languages like SML or Haskell where the type system lets you describe the data you're working with in a pretty precise way.

Dynamic langs have lots of good qualities to them, but every tool has a drawback. Tests, documentation, good team dynamic, etc. hedge against some of the disadvantages of a dynamic language. They aren't a panacea.

I don't think you understand the benefits of type checkers. Here are some:

- Catching at compile time errors that would otherwise happen at runtime

- Performance

- Occasionally, it can enable making your code mathematically, provably correct

- Maintainability and flexibility in the sense that the code is easier to read by future hires

- And the most important advantage of all: automatic refactorings. Without them, the code base rots because developers are afraid to refactor; doing it without errors in a dynamically typed language requires a lot of tests, which nobody really has. Even renaming a function cannot be done safely in a dynamically typed language; it requires the oversight of a human.

I know of at least 3 dynamically typed systems that can rename a method automatically across the whole system in one click, and move instance variables to a superclass, and do other tricky things like that, before Eclipse was in the womb. I'm talking about doing those refactorings in 1999.

Even the smartest teammates in the world won't protect a program from that idiot who wrote all the code in my personal projects. Good type systems minimise the damage that he can do.

I get your point: type systems are a protection against bad hires, perceived as a defense mechanism for the code. But things should never get anywhere close to that point.

Type checkers are an automated solution for having a human brain.

Of course, so are calculators :-)

Just like hand-washing is for doctors who aren't cleanly people.

That is static typing FUD. If it takes you a few days or weeks to learn to use the type checker and it saves you from expensive runtime errors later, then what's the problem?

The problem is the overhead in adding typing. The time taken to learn the type system is a one-off but static typing has running costs in slowing down development, discouraging exploratory programming, extra cognitive load in reasoning about the type system, and "false positives" in disallowing otherwise correct programs. It does nearly eliminate a class of bugs - this is wonderful, but the bugs it does eliminate tend to be ones that are fairly obvious to find.

What about the cognitive load of chasing down things like null reference errors, which in some strongly typed languages don't even exist? And no, they're not always fairly obvious to find, especially when you're fixing somebody else's code.

The keyword being "some". I've rarely had any problems with null/nil reference errors in Clojure, because nil is treated both as false and as the empty list, so most of the time you do the right thing in case of nil. You also don't call methods on objects, but send values to functions, which eliminates a large source of NPE right there.

Clojure is also a compiled language, so things like calling a function with the wrong arity, or calling a variable/function that doesn't exist, are caught at compile time. These are, by far, the most common runtime bugs I've experienced in JavaScript and Python, due to spelling mistakes in and out of refactoring.

> most of the time you do the right thing in case of nil

I can't remember where, but I was just reading something yesterday to the effect that a solution that works most of the time is worse than a 'solution' that never works: at least you'll notice the latter quickly, whereas you might not notice the former until it's buried so deep in your code that you've forgotten the hidden assumptions it involves.

Well, that can't be taken seriously. Real life works most of the time; only theoretical thought experiments work all the time. Can you show me a clock that has worked for more than 10K years? They are still trying to invent one (and we don't have it yet).

I agree, and Skinney made a similar point yesterday (https://news.ycombinator.com/item?id=9759571). However, I think that an excessively literal reading misses the spirit of the quote (as I (probably mis)remember it). I elaborated in my reply https://news.ycombinator.com/item?id=9760336 .

Incidentally, your question about the 10K-year clock (to which I think it is impossible to give an answer today, even if one exists, since clockmaking has not yet been practiced for 10K years!) reminds me of anecdote #3 in http://www.netfunny.com/rhf/jokes/91q3/oldanecd.html .

By that logic, most object oriented languages like Java, C# and C++ are fundamentally flawed as calling a method only works most of the time.

Besides, code in general only works most of the time. Static typing does not protect you from bugs.

> Static typing does not protect you from bugs.

But it does. It doesn't protect you from all bugs, but nobody claimed it did.

True, I should have specified what I meant. The thing is, static typing doesn't necessarily protect you from more bugs than a dynamic language. For instance, Clojure is less prone to NPEs because you don't call methods on objects, and because a lot of your program manipulates data structures, and nil is treated as an empty data structure. Of course, a language like Rust, which doesn't have null, does protect against NPEs. It's all relative.

Reading this again makes me realize that I'm wrong. Static typing will safeguard you against certain types of bugs that a dynamic language won't. The point I was trying to make was that a static language doesn't necessarily protect you against a certain type of bug that you may rarely experience in a dynamic language.

I didn't call anything flawed, fundamentally or otherwise; but I take your point.

I think that there is an important distinction between 'almost-there' solutions.

Suppose that you have a piece of code that is specified to work in a certain way. Because code, and the hardware on which it runs, is made (at least indirectly) by humans, it will fail under some conditions; it only satisfies its specification. I think that we are all comfortable with this kind of "works most of the time".

By contrast, consider 'smart' products—for example, the auto-complete on your phone. This also, after a bit of training, works most of the time, and can make some brilliant inferences. However, I think that most people can agree that the "most of the time" for auto-completion is qualitatively different from the "most of the time" for specified software: it is reasonable to rely on the latter, but not, or at least not nearly as much, on the former (http://www.damnyouautocorrect.com).

Oops, I left an important word out:

> Because code, and the hardware on which it runs, is made (at least indirectly) by humans, it will fail under some conditions; it only satisfies its specification.

This sentence was supposed to end "it only satisfies its specification some of the time." I was not going for Knuth-ian irony about proving vs testing (https://en.wikiquote.org/wiki/Donald_Knuth#Sourced).

I know you didn't, but if a language has a core semantic that only works "most of the time," and "most of the time" is worse than "never," then you have a language that is worse than no language at all, which means it is fundamentally flawed. Yes, I probably did enjoy my logic courses at university too much.

But they are. As in, they fail to handle the null case properly.

Do you have any evidence to prove this? I've read a few papers that disagree with your opinions...

That's the thing though, there are lots of papers that support both sides of the argument out there. For example, take this large-scale study of GitHub projects: http://macbeth.cs.ucdavis.edu/lang_study.pdf. In the study, Clojure is right up there for correctness with hardcore statically typed languages like Haskell.

There's plenty of software written in both typing disciplines in the wild. Yet nobody has managed to conclusively demonstrate that software written in statically typed languages is produced faster, with fewer overall defects, or at lower maintenance cost. The very fact that we're still having these debates says volumes, in my opinion.

The majority of arguments regarding the benefits of static typing appear to be rooted squarely in anecdotal evidence.

I agree with this comment of yours, but the first one has a few overstatements, and that prompted my reply. Also, if you feel that strong types slow you down, I suggest you try a gradually typed language where you can start out without types, do the exploring, and add the types once you know what you want. I've tried this in TypeScript and it was a fairly good experience.
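As a rough sketch of that workflow (function and field names here are invented): during exploration the shape is left loose with `any`, and once the idea settles, the same code is locked down with a concrete interface.

```typescript
// Exploratory phase: no commitments about the shape of the input.
function parseUserLoose(json: any): any {
  return { name: json.name, age: json.age };
}

// Later, once the shape has settled, the types are nailed down.
interface User {
  name: string;
  age: number;
}

function parseUser(json: { name: string; age: number }): User {
  return { name: json.name, age: json.age };
}
```

The body never changed; only the annotations did, which is the appeal of gradual typing: the compiler's scrutiny is dialed up after the prototype, not before it.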

It wasn't my intention to overstate the benefits of dynamic typing, just to point out that static typing isn't just a free lunch. I think yogthos sums up the situation pretty well.

Clojure also takes this approach with core.typed: https://frenchy64.github.io/2015/06/19/gradual-typing.html

I personally haven't found a place where this is necessary yet, but who knows what might happen in the future. :)

Probably based on personal experience. For me, whenever I start a project from scratch, I usually start with the most dynamic environment I can find, then hack away until I have something that resembles the functionality I need, before locking it down. In the beginning of a project I don't care if something is 100% correct; I just want to see what my idea looks like, and what I actually need. I don't want to deal with synchronizing my types with a database schema, or making sure the JSON I send to the server satisfies the type system. I really don't care about interfaces or abstract classes or whatnot, I just want to retrieve the value I know to be in the object I got. Then, when I have a working-ish prototype, I can start nailing it down with types, assertions, tests, and contracts.

Of course, people are different. Some people do all of this on paper before writing a single line of code. Others plan by making types and interfaces, then proceed to implement those. I just like making prototypes.

"Some people do all of this on paper before writing a single line of code. Others plan by making types and interfaces, then proceed to implement those. I just like making prototypes."

Of these, I sound the most like you: think just a little, then sit down and start coding to get a feel for things.

Yet I find a good type system essential (or at least conspicuously missing when I write Python). There is a notion that, if one has static type checking, one has to get the types right before writing code. There's no reason that has to be the case. Get a sketch, start writing code, refine your sketch. With type inference, the sketch can even be somewhat incomplete and still help me find where my assumptions clash.

Don't know why people are downvoting you, your approach is perfectly valid.

That's a really important point. Lots of time sunk into satisfying the compiler could have been spent on exploratory programming. Humans are fundamentally explorers, not conformists.

Nice duality you exposed right there!

More like it's FKF (Fear of the Known and Familiar). Most (if not all) programmers have experience with static typing, and most of those experiences were not pleasant, which is why a lot of people don't like static typing. People start to hate it even more when it gets shoved down their throats, i.e. "You absolutely must use static typing, otherwise your software is unreliable and buggy".

Static typing requires discipline; it is no magic cure for bugs that will automatically make your programs correct, in the same way that forcing everything to be in a class in Java does not automatically produce readable, modular, and reusable code -- it's just restrictive and annoying.

The disciplines required for static typing to be effective can be employed in dynamic typing, but merely having a static type system does not mean the discipline will be developed. For a static type system not to be restrictive (like Java's or C++'s), it has to be flexible, i.e. it has to allow types of varying degrees of vagueness. If I am an inexperienced programmer with no discipline, nothing stops me from picking the vaguest type possible and making a complete mess. Type errors do not prevent all bugs; in fact they prevent only a minimal subset of bugs. Most bugs are logic errors involving an ill-thought-out algorithm, or data errors caused by invalid user input, which often can only be caught dynamically, at runtime.
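The "types miss logic errors" claim is easy to demonstrate with a hypothetical pair of functions (names invented): both have exactly the same type signature, so the checker accepts both, yet only one is correct.

```typescript
// Both implementations share the type (xs: number[]) => number,
// so the type checker cannot tell them apart.
function average(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function averageWrong(xs: number[]): number {
  // Logic error: divides by length + 1 -- still perfectly well-typed.
  return xs.reduce((a, b) => a + b, 0) / (xs.length + 1);
}
```

Catching the second one takes a test (or a proof), not a type annotation, which is why the two techniques complement rather than replace each other.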

There are too many things wrong with this comment for me to address any subset of them coherently...

I'll leave it at responding to the bit I do agree with, which is that many people have had poor experiences with poor checkers for poor type systems.

That's a good observation
