Hindley-Milner in Clojure (lispcast.com)
142 points by ericn on Jan 13, 2014 | 77 comments



Here's the Hindley-Milner implementation (in Haskell) from a toy compiler project of mine. It was really enlightening to write it and surprisingly simple.

https://github.com/rikusalminen/funfun/blob/master/FunFun/Ty...

This was also the first time I used monad transformers, and nearly the first time I used a non-IO monad (I'd used ST and Parsec before). If you compare my code with the book source (Peter Hancock's type checker in Peyton Jones' "The Implementation of Functional Programming Languages", link in a source code comment), my version using monads is a lot easier to follow than the original, which was written in a pre-Haskell functional programming language called Miranda with no monads.

The type checker is a "pure function": it has inputs and outputs but no side effects. Yet in the code you need to 1) generate unique "names" and 2) bail out early on type errors. I solved this problem using the Error and State monads. The Miranda code used an infinite list of numbers for unique names and cumbersome tricks to handle type errors.
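
Here's a minimal sketch of the approach, if anyone is curious (names like TC, freshName and typeError are made up for illustration; it assumes the mtl library, using Either for the error side):

    import Control.Monad.State (StateT, get, put, evalStateT)
    import Control.Monad.Except (throwError)

    -- A type-checking monad: Int state for a fresh-name counter,
    -- Either for bailing out early with a type error.
    type TC a = StateT Int (Either String) a

    -- Generate a unique type-variable name by bumping the counter.
    freshName :: TC String
    freshName = do
      n <- get
      put (n + 1)
      return ("t" ++ show n)

    -- Abort type checking; Either short-circuits everything after this.
    typeError :: String -> TC a
    typeError = throwError

    runTC :: TC a -> Either String a
    runTC tc = evalStateT tc 0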


Nice job! Seems like you could have put the TypeEnv in a ReaderT as well; that might make things slightly easier.



I find it curious that the Venn diagram seems to indicate that a sizable subset of people who are familiar with type theory don't advocate either static typing or dynamic typing.


I would suspect that a large fraction of them are mathematicians and not particularly interested in how people program.


I wouldn't say I'm intimately familiar with type theory, but I'm certainly old slash grumpy enough to not be an "advocate" of anything except learning both and thinking for yourself about the problem at hand.


CS departments are where people tend to be exposed to type theory. The sorts of languages that tend to be taken seriously in such departments [yes, I know there are exceptions] tend to be statically typed. Java and C++ just have more gravitas than Ruby and JavaScript, and Lisps are for hippies.


I'm familiar with type theory but I'm not a proponent of either of them. Sometimes it's better to use static typing and sometimes it's better to use dynamic typing. However, most of the time I would prefer static typing.

I think I'm in the subset you're mentioning?


I love type theory, but I think a language like Sage[1] is probably the most interesting way forward.

[1] - http://sage.soe.ucsc.edu/


That's a somewhat unfortunate name to choose for a new language given the existence of this well-established system:

http://www.sagemath.org/


Why the 'but'? There seems to be quite some type theory involved in Sage.


I really like the idea behind Sage but I found it kind of funny that in their test cases all the ".out" files are empty because none of the programs actually do any IO. No hello-world for you :P


How do you feel about dependently typed systems?


Sage has dependent types so he probably likes them.


Ah, I misread the intro blurb.


I think that was the joke :)


I think the joke is rather the small overlap between people who advocate dynamic typing and people who know type theory.


Which doesn't mean much. It could be that people who would be interested in learning type theory in the first place would prefer static typing regardless, with a similar argument for the case of dynamic typing.


That's not relevant to the original point, which was that the fact that dynamic typing enthusiasts don't know type theory makes debate on the subject uninteresting.

Case in point: just look at https://news.ycombinator.com/item?id=7054960 , where someone who apparently likes dynamic types lists a ton of studies done by other people who like dynamic types where modern dynamically typed languages are tested against the best static types that the early 60's have to offer.

The only part I'd change about that diagram is move the "knows type theory" area way down, so that the majority of static typing enthusiasts are not covered either (but so that the proportion of static typing enthusiasts that are covered is much greater than the proportion of dynamic typing enthusiasts who are covered.)

It's depressing to talk about types when living in a world where both sides of the fence consist mostly of people who think static typing means Java, and where new, "exciting" statically typed languages like Go can have a type system that completely ignores all development that has happened in the past 50 years.

Modern static types are actually good, actually useful, and only ever bother you when not bothering you would mean that your code can crash at execution time.


Not necessarily a response to your comment, but I think you underestimate just how much the static/dynamic preference rests on fundamental psychology.

For example, generally speaking I only use basic data structures (lists or hash tables), so introducing types would be over-engineering (note: I'm familiar with e.g. Haskell's type system). For me the idea of typing implies a programming style complicated enough to require it. It's only at a larger architectural scale that I think typing pays off.

That said, C# is still my favorite production language.


Disagree about the idea that those who are unfamiliar with type theory prefer dynamic typing.

Typing preferences are usually due to trends in language usage having little to do with knowledge.

Plenty of Java programmers use static typing without ever having to understand type theory.

But look at the history of language designers/implementers:

Dan Friedman

Gilad Bracha http://www.infoq.com/presentations/functional-pros-cons

Guy Steele

Rich Hickey

All of these guys have worked on static languages, have a keener understanding of type theory than most, and yet they seem to promote dynamic languages at least when it comes to their pet languages.


I'm not disagreeing with you (that a lot of people go with what's trendy or what they already know), but I wouldn't put Gilad Bracha in the list of knowledgeable people. I've seen the talk you are linking to and it's not very impressive... he sounds mostly whiny to me. In his own blog, when he writes about functional programming or type theory, he gets called out by the people who really know about it.


I would agree, I don't see why Gilad Bracha is on that list.


Gilad Bracha has no idea what the hell he's talking about.

Rich hasn't worked on static languages and I'm not familiar with him having done anything in type theory. He wanted a nicer, practical Lisp first and foremost. A helpful compiler wasn't high on his list of priorities.

Guy Steele's most recent work has involved functional, statically typed programming languages: http://en.wikipedia.org/wiki/Fortress_(programming_language)

One of Friedman's most recent books http://www.ccs.neu.edu/home/matthias/BTML/ was on ML which is a statically typed, functional programming language.

The smart people that weren't using static types back in the 70s and 80s weren't using them because the statically typed languages available back then were fuckin' awful except for ML and Miranda.

We can do a lot better as programmers these days. Stop giving yourself an excuse to not learn new things.


FWIW, I don't think the diagram is claiming that people who are unfamiliar with type theory tend to prefer dynamic typing, rather that people who prefer dynamic typing tend to be unfamiliar with type theory.


The most useful takeaway from that graph is the insight into how many type theory aficionados look at the world.


Why not both?

The GHC Haskell compiler has a -fdefer-type-errors flag. The SBCL Common Lisp implementation has an option to turn type warnings into errors. Extending -fdefer-type-errors and creating better type checkers for dynamic languages could achieve the best of both worlds.
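
A tiny illustration of what -fdefer-type-errors already gives you (a hypothetical module, not from any real project):

    {-# OPTIONS_GHC -fdefer-type-errors #-}

    -- With the flag, this ill-typed binding only produces a compile-time
    -- warning; the type error is raised at runtime, and only if 'oops'
    -- is actually forced.
    oops :: Int
    oops = "not an int"

    main :: IO ()
    main = putStrLn "runs fine as long as oops is never evaluated"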


The actual reason for "why not both" is that it's actually very hard to do and is still something of an open problem. Some things are easy to do statically but hard to check dynamically (for example, parametric polymorphism or type-based overloading and name resolution) and some things are easy to check dynamically but hard to specify using static types (for example, array bounds checking).


You can do those things that are easy to do statically at development/testing time and those things that are easy to do dynamically at runtime.

Some things, like overloading based on return values, don't work dynamically, but you could either choose not to have them or resolve them at development time. I would like to see languages where dynamic vs. static is a continuum and programmers can decide how much they want on a case-by-case basis.


> You can do those things that are easy to do statically at development/testing time and those things that are easy to do dynamically at runtime.

But then you lose the ability to freely mix and match the static and dynamic code. For example, if you want to write a dynamic implementation for some parametrically polymorphic code there must be a way to check at runtime that the dynamic implementation is respecting the type that you assigned to it (there are ways to do that but they are all very tricky).

The list of tricky things also grows very quickly once you put some thought into it. You mentioned return-type polymorphism, but there are also many other things that are hard to check dynamically (record field names) and lots of things that dynamic code likes to do (like mutability and subtyping) that force extra complexity into the static type system, or that can be used to subvert the static types if you are not careful.


>(for example, array bounds checking)

It's relatively easy to do this statically compared to bringing the benefits of Haskell to a unityped language. We just don't typically program in languages designed around dependent types.

Dependent types aren't hard.

Agda, Idris, Coq for any who are curious as to what that looks like.

Total functional programming is cool ^_^
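
For a taste of what this looks like when approximated in GHC Haskell (a sketch using GADTs and DataKinds; Agda and Idris express the same thing more directly):

    {-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

    data Nat = Z | S Nat

    -- A list indexed by its length; the length lives in the type.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- A total head: the type only accepts non-empty vectors, so no
    -- runtime bounds check is needed and no case can be missed.
    vhead :: Vec ('S n) a -> a
    vhead (VCons x _) = x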


You are right. The biggest problem I was trying to point out is that you need to make your static type system much more complex in order to even state some things that are easy to check dynamically. Also, dependent types on their own don't solve the problem of interfacing with dynamic code.


I agree. Good dynamic languages (Common Lisp, Smalltalk, Factor) have a lot going for them, but static typing is also really nice. A mix between the two (preferably something that starts out as a dynamic lang and slowly moves towards being static) would be great (Common Lisp kinda does this with optional static typing).


CL's type system has a few warts that keep it from doing static typing really well.

For a language that truly has optional static types, see Shen[1].

[1] http://shenlanguage.org/


Dynamic typing is just a special case of static typing where there is only one type!


While this is technically correct (the best kind of correct), it also trivializes the argument and somewhat misses the point. For example, in a dynamic language all values belong to a single type and carry a tag at runtime to identify their category. Static languages also do that, but on a more restricted scope - you aren't allowed to accidentally mix strings and integers in Haskell but you can accidentally mix empty/non-empty lists (getting a runtime exception on `head`) or zero/nonzero numbers (breaking division). You can sort of fix this if you go up to dependent types, but then the type system gets much more complicated and it's not always easy to statically create the proofs for everything, so you start to see the appeal of dynamically checking some of the stuff.


> Haskell but you can accidentally mix empty/non-empty lists (getting a runtime exception on `head`) or zero/nonzero numbers (breaking division).

This is due to Haskell's support for partial functions. It does induce some weakness in the type system but it brings the benefit of making the language Turing-complete. As for these specific examples, they would be solved if the libraries in question were rewritten.
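
For example, a NonEmpty list type is exactly such a rewrite: its head is total because emptiness is ruled out by the type. A minimal sketch (assuming a Data.List.NonEmpty module like the one in the semigroups package; firstElem is just an illustrative name):

    import Data.List.NonEmpty (NonEmpty(..))
    import qualified Data.List.NonEmpty as NE

    -- NE.head is total: the (:|) constructor forces a first element,
    -- so "empty list" is unrepresentable and no exception can occur.
    firstElem :: NonEmpty a -> a
    firstElem = NE.head

    example :: Int
    example = firstElem (1 :| [2, 3])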


The problem is that you might be forced to add dependent types to the language to be able to safely define some of those partial functions. This is not a trivial matter since dependent type systems are even more complicated and might demand extra work (proofs) from the programmer.

IMO, there are situations where you are better off doing trivial runtime checks instead of trying to write complex proofs that might themselves contain bugs and that are tightly coupled to your current implementation and choice of data structures.


Even when this is true, this special case of static typing gives you exactly none of the advantages of general static typing.


Yes, of course. I'm a big proponent of static typing, or (as Bob Harper likes to call it) typing! This is in contrast with unityped languages.


An idealized, strongly typed dynamic language can actually give you the guarantee that all syntactically valid programs have well-defined runtime behavior. This is a huge advantage over weakly typed languages that you also get out of the idealized statically typed language.

The issue is, outside of languages with very baroque type systems or monstrosities like Java's sandbox, static typing often winds up decaying into weak typing, which offers much weaker behavior guarantees than real-world dynamic languages.


That's not quite true: there are many types, but the type checking is delayed into run time. Though your point of view may also be correct with respect to definitions (if you assume that types exist only at compile time).


> That's not quite true: there are many types, but the type checking is delayed into run time.

Runtime checks are not type checks[0], they are class checks.

[0]. https://existentialtype.wordpress.com/?s=dynamic+typing


You mean in Bob Harper's opinion?

Dynamic type checking is a useful concept when classifying languages, but it might not make sense under certain narrow definitions of what a type system should be.


> You mean in Bob Harper's opinion?

Yes, shocker of shockers, I agree with Bob Harper. The term type comes from type theory which is an entire discipline (as far as computer languages are concerned) based on the idea of propositions as types. If types are propositions, then what are programs themselves? Proofs of those propositions, of course, with the type-checker doing the work of checking the proofs. With so-called dynamically typed languages you do not have that. What you do have is dynamic dispatch based on the class of a value. This has nothing to do with types!


The term type actually originates way before that; the 19th-century meaning is "category with common characteristics", which is how we use the word in normal discourse (Python is a type of programming language...).

Bob Harper has adopted a very narrow definition of type based on one discipline, type theory, but the word "type" itself is much more general than that. I definitely use the word "type" when talking about my Python programs, even if there are no static type theoretic types to be seen.

In a dynamically typed language, your proofs are checked at run-time, which has everything to do with types! Getting a type error at run-time is much better than having the program keep going and produce a wrong result (if it doesn't core dump via memory corruption first).

Dynamic dispatch doesn't even play into it.


> Getting a type error at run-time is much better than having the program keep going and produce a wrong result (if it doesn't core dump via memory corruption first).

To me that's akin to an airplane ditching in the ocean rather than slamming into a building. Sure, it saved a lot of lives but it'd have been better to solve the problem while it was still on the ground.


Whether you think ensuring safety later rather than earlier is a good or bad thing is kind of irrelevant; the fact is that one can still technically check type properties at run-time. Pragmatically speaking, even Haskell must rely on a few run-time checks to ensure complete type safety (e.g. match not exhaustive exceptions...).


No. That sounds more like an untyped language.


All this talk about formal type theory, but where are the references to the relevant studies? Where's the data? The few studies[1][2][3][4] I've found are inconclusive one way or the other, and none of them focus on error rates. I found another conversation about how to go about studying error rates in dynamically vs. statically typed languages, but all I really found was this article studying the effect of hair style on language design[5].

[1] http://pleiad.dcc.uchile.cl/papers/2012/kleinschmagerAl-icpc... - maintainability

[2] http://dl.acm.org/citation.cfm?id=2047861&CFID=399382397&CFT... - development time

[3] https://courses.cs.washington.edu/courses/cse590n/10au/hanen... - development time, take 2

[4] http://pleiad.dcc.uchile.cl/papers/2012/mayerAl-oopsla2012.p... - usability

[5] http://z.caudate.me/language-hair-and-popularity/


I glanced over the studies you linked to and in all of them, the languages used as examples of static typing are Java, C and C++. There's no mention of type inference or any languages that have a more advanced static typing scheme like ML or Haskell. A lot of it seemed to be a "Java vs. Ruby fight" with a slight bias towards the latter in the authors.

To joke and exaggerate a little, those studies seem to be done by people who belong to the "proponents of dynamic typing" and not "familiar with type theory" bin of people in the Venn diagram in the OP.


I agree completely, that's why I used the word "inconclusive." Still, it's the only real data we have on static vs dynamic typing. Until we get better data, everything is just opinion and preference. Well informed opinions, in the case of those familiar with type theory (which I am not), but opinion nonetheless.


I'm afraid that when it comes to hard science, the static vs. dynamic typing discussion will remain as "inconclusive" for a long time.

The testing methodology in (some of) the studies above involved test subjects working on toy prototype programs for a short period of time. In my opinion, this will favor dynamic typing. If there's a simple problem with a quick'n'dirty solution, even I often prefer Python to Haskell.

The real advantages of static typing become apparent only when a project matures, as time and effort are spent on maintenance and refactoring. In dynamic languages it is rather easy to break "old" code by making changes to "new" code that subtly change some types, and you have to rely on unit testing to catch this at run time. This class of errors is caught by a sane type checker before you even start running your tests.

So my somewhat informed opinion is that it is very hard to get "conclusive" evidence of the superiority of static typing because it's a very difficult thing to objectively measure in a short period of time. I say "superiority" because it's my opinion that static typing is a little better for most, but not all, applications.


Are you trolling? I seriously can't tell...


> I think you should implement Hindley-Milner in the language of your choice for a small toy λ-calculus.

Did this a little while ago (as a stepping stone to building an inference system for a more complicated calculus).

https://gist.github.com/jrslepak/6158954


As an aside, if you just want curried functions in Clojure, try poppea.

https://github.com/JulianBirch/poppea


Noob question: doesn't partial provide curried functions?


A 'true' curried function in Clojure doesn't need partial because it overloads on the arity of the function. It automatically partially applies itself if it's called with too few arguments.

E.g. you have a function f that takes two arguments, x and y. If the function is truly curried, then (f x y) is a full function call of f that returns a value, while (f x) returns a function of one argument, as if you had used partial. You could create a (two-argument) currying helper-function with the following:

  (defn curry-2 [f]
    (fn
      ([x] (partial f x))
      ([x y] (f x y))))
Basically with a currying function, (f x y) and ((f x) y) are equivalent calls, without the need for partial.

The reason this isn't more pervasive is because it doesn't play well with optional parameters, &rest parameters, arity overloading, or other ways in which the number of arguments to a function might vary (Haskell permits currying by not allowing variable param lists). Clojure has a couple of library functions that use this pattern (notably, the reducer library), but it's not ubiquitous.


Indeed, that's exactly what poppea does. The code is a generalisation of the code in the reducers library.


Partial application and curried functions are subtly different.

Consider a "+" operator that does currying:

    ((+ 5) 10)
If the plus operator is not currying, that's either an arity or type error. You're either not supplying enough arguments to +, or you're trying to do this:

    (5 10)
Partial application can be explicit, without currying:

    ((partial + 5) 10)


I'm familiar with type theory and (often) a proponent of dynamic typing.

It depends on what you're doing. If you're building cathedrals-- high-quality, performance-critical software that can never fail-- then static typing is a great tool, because it can do things that are very hard to do with unit testing, and you only pay the costs once in compilation. There are plenty of use cases in which I'd want to be using a statically typed language like OCaml (or, possibly, Rust).

If you're out in the bazaar-- say, building a web app that will have to contend with constant API changes and shifting needs, or building distributed systems designed to last decades without total failure (that may, like the Ship of Theseus, have all parts replaced) despite constant environmental change-- then dynamic typing often wins.

What I like about Clojure is that, being such a powerful language, you can get contracts and types and schemas but aren't bound to them. I like static typing in many ways, but Scala left me asking the question, any time someone insists that static typing is necessary: which static type system?


I've heard this argument a lot, and I disagree. If your software is changing a lot, that is where types really shine. Refactoring is a breeze when you have types: you just change the code you want to improve, and all use sites are pointed to by the compiler. This is taken from daily experience: I work on a code base that is about 25K lines of Haskell and 35K lines of Javascript. Refactoring the Haskell is a pleasure. Refactoring the Javascript is something we dread, and always introduces bugs, some of which might linger for up to a year.


I've actually had this experience a couple of times on a project like upgrading a library from one release to another when APIs were updated. The compiler would often "show the way" by highlighting every mistake between version x and y.

I would go so far as to say one of the most evil things a person can do when designing an API for statically typed languages is using the equivalent of System.Object unnecessarily.


Exactly. A strong type system enables more rapid iteration than a weak one, just like writing unit tests. I was a fan of dynamic languages until I used Haskell and Scala, and saw how powerful types could be when they allowed you to express intention rather than getting in your way.

Now I relegate dynamic languages to writing single file scripts or less.


I think on top of this the self-documenting properties of a nice statically typed system push back the onset of code ossification quite a lot. I always trust my type documentation while I tend to be a skeptic of documentation that's more than a few months old unless it's being actively refactored regularly.


A strong type system can give you a lot of "free" documentation and I love it! I long for a search engine like Hoogle in other languages (http://www.haskell.org/hoogle/). Most functions are very obvious from their signature and name.

My favorite example are two functions with the signatures:

    fun1: a -> a
    fun2: Int -> Int
You might think the first is more powerful because it takes any type and returns that type, but ultimately that function can only be the identity function (barring unusual things like exceptions, undefined values and runtime introspection). With the second function it could do all sorts of things. It might increment, it might decrement, it might divide by two and round down if the argument is a multiple of three.
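
To spell that out in code (a sketch; the bodies are just examples that fit the signatures):

    -- By parametricity, a total function at this type cannot inspect or
    -- fabricate values of the unknown type a, so it must be the identity.
    fun1 :: a -> a
    fun1 x = x

    -- At a concrete type the signature constrains almost nothing.
    fun2 :: Int -> Int
    fun2 n = if even n then n `div` 2 else n + 1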

I'll go further to say that it's very frustrating in languages like C++ and Java to have to deal with statics and objects. They have a lot of hidden state that isn't obvious from the type signature of their methods. In particular, the fact that some methods must be called before others—e.g. initializers—can make the behavior and side-effects non-obvious.


I agree. I define empty interfaces and the types to go with them for prototyping, and it makes building my prototype so much quicker. And with a better final product, too. So IMO static (or optional/gradual) typing can even be better in that way, but only if you think in types rather than hacking away til it works.


Hum, no.

I use Clojure at work and the single biggest drain on my productivity is a lack of a sensible, static type system. Yearn for Haskell big time.

I'd use Rust if I had to really get down and dirty, otherwise Haskell would be my first choice. OCaml isn't that great at all in practice.


Have you looked into Prismatic's Schema?

I fully agree that if you're working on a large project, you'll need something that is "type-like" in terms of enforcing interface integrity. Where there's an open question is how necessary compile-time typing is for most problems. (Few would disagree with the claim that types are a good thing.)

Clojure (esp. without types) seems to favor smaller projects (libraries over frameworks) and modularity in the extreme, almost implicitly. When you get "into the large" and need checks like types or contracts, there are various libraries that are available.

I like Haskell a lot, but my experience going between the dynamic and static styles of programming and languages (these things are as much about coding style as the language itself, which is why Scala can be beautiful or horrid depending on how it is used) is that the grass often seems greener on the other side. Haskell's great in many ways, and designed by some of the most brilliant computer scientists and logicians alive right now, but it's not without some painful warts (although my knowledge is 3-4 years out of date).


Yeah, it's not interesting.

I use Clojure a lot at home and work. I'm an active participant in the community. I've also discussed core.typed with Ambrose to my satisfaction.

If you do Clojure web development, the odds aren't terrible that you're using a library I've either made or worked on. (Korma, lib-noir, Selmer, luminus, Revise, bulwark, blackwater, trajectile, clj-time, brambling)

https://github.com/bitemyapp?tab=repositories

I'm tired of tracking down type errors in Clojure.

I'm tired of increased source->sink distances in runtime errors compared to compile-time errors.

I want to be able to refactor my code fearlessly, period, end of story, with static assurances.

Record (product) types let you statically verify the schematic use of data. I'd rather have that work statically so that I can minimize source->sink distance. That obviates the need for a "schema" library.

I'm a very active Clojure user, I end up having to explain the same things over and over as to why I'm moving my stuff over to Haskell.

The most thorough way to go is to do what I did and just learn Haskell to decide for yourself. Don't try to paper over the problems with 1/4 solutions.


What you're saying makes a lot of sense. Clojure has a lot to recommend it (especially the JVM, in business) but there are a lot of benefits to Haskell.

The "source->sink" problem is a pain, I agree. It's one of those dangers of macros and metaprogramming that seems to be difficult to resolve. Generally, I only run into nastiness there, though, when I'm trying to do things that would be very hard to do in statically-typed languages.

What I've noticed in Scala and Ocaml is that people end up hacking the compiler (see Jane Street's "with sexp" and "with fields"). That, to me, has all the negatives that come from macros and dynamic typing. A compiler that does static typing is great, but if it ends up being hacked, then all bets are off and I'd rather use macros. (I'm playing Devil's Advocate here; I know that most web apps aren't going to require compiler hacks, but most companies, given enough time, will find reasons that they need to hack the compiler.)

I'm curious about your experiences with Haskell. What negatives have you found in the language? (I like it a lot, but haven't used it for anything big.) How strong is the story for the web? What are the build tools like; are they mature, or obviously in need of work (as in, say, Ocaml or Scala)?


>would be very hard to do in statically-typed languages.

I doubt these things apply to Haskell, which is part of my point. It offers a lot of rope for the self-hanging if you want, but the defaults (static, safe, strong, immutable, pure, lazy) are the best place to start.

There are a hundred things people think "only dynamic languages can do" that you can do in Haskell.

I have yet to get anybody to contrive something you can't do in Haskell that you can in other languages.

This is a mere glimpse, but should give you an idea:

http://hackage.haskell.org/package/base-4.6.0.1/docs/Data-Dy...

^^ Uni-typed languages are a subset of proper type systems. This is a practical application of that idea.
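
A small sketch of what that embedding looks like in practice:

    import Data.Dynamic (Dynamic, toDyn, fromDynamic)

    -- A "dynamically typed" heterogeneous list: each value is boxed
    -- with a runtime representation of its type.
    mixed :: [Dynamic]
    mixed = [toDyn (1 :: Int), toDyn "hello", toDyn True]

    -- Recovery is a checked cast that can fail, just like in a dynamic
    -- language, so it returns a Maybe.
    firstAsInt :: Maybe Int
    firstAsInt = fromDynamic (head mixed)   -- Just 1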

You don't generally hack GHC unless you're an enthusiast, although that option is certainly available to you and it's much easier than you'd expect.

Negatives? More people using it would improve library coverage, but the libraries are very good. The web story is better than Clojure. Scala web vs. Haskell web entirely and utterly depends on whether you're happy with Scala's Play Framework or assorted uber-micro-frameworks. If you're not and would prefer a more diverse and componentized ecosystem of libraries, Haskell's is better. You can go whole-hog with a single framework like Snap or Yesod, but their individual components are eminently reusable libraries. Yesod even checks the damn URLs in your templates at build-time.

The interactive workflow in GHCi is legitimately better than Clojure's. I'm not exaggerating. It's not as good as the glory days of Common Lisp + swank + Emacs, but it's better than Clojure + Emacs + nrepl.

Bonus? Has a fucking debugger - unlike Clojure.

Native binaries, multiple quality vector libraries that support CUDA. Burgeoning but respectable family of bioinformatics libraries.

Negatives? It forces you to think and learn. There's a ramp-up time, but the other side of the first hump or two has some pretty serious leverage. It doesn't have the library diversity of Perl or Python, but the building blocks are incredible (Parsec, attoparsec, Aeson, etc).

Another negative? You'll have even less patience with poorly designed static languages.

It's worth noting that I'd sooner use Clojure than OCaml, but I'd sooner use Haskell than Clojure.

I don't fetishize type systems in general, I just really find Haskell very pleasing and productive.

The build tools are mature and quite nice now that Cabal has sandboxing built in by default. Put it this way, I don't cringe at all when I use cabal the way I did with Scala's sbt.

Another difference between Haskell and Scala is that Haskell uses a relatively conservative core augmented by potentially unsound/unsafe extensions like GADTs. http://en.wikibooks.org/wiki/Haskell/GADT

What this helps with is it means you only have to think about the "core" of the language at any given time, but if somebody is using magic like Template Haskell (macros), you get that documentation in the form of a pragma at the top of the file. I quite like it.

The laziness is also important; it obviates the need for 80-90% of how macros get used in languages like Clojure. It makes functions themselves more general and useful.
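
A tiny example of that point: with laziness, control flow is just an ordinary function where a strict Lisp would reach for a macro (sketch):

    -- 'myIf' is a plain function; laziness means only the branch that
    -- is returned ever gets evaluated.
    myIf :: Bool -> a -> a -> a
    myIf True  t _ = t
    myIf False _ e = e

    main :: IO ()
    main = putStrLn (myIf (1 < 2) "then branch" (error "never forced"))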

Another negative? A sufficiently "specialized" application of Haskell might mean you're on the frontier. But the same is true of Clojure. I don't find that really matters nor do I find the situation truly improves by using something like Java or Ruby.

I'd rather just read a white paper, sip tea, think, and then sit down to write the code in a language that won't waste my time.

I would add that Haskell's sweet spots extend from Java -> Python -> Clojure. I wouldn't use it for systems programming - for that I'd use Rust.


I wish I could give this more than 1 upvote. Really good response.

My Haskell experience is a few years out of date (and not as deep as my OCaml knowledge) but it sounds like the language has advanced pretty far.

I'll still be using Clojure for the next 3-4 years (my company uses the JVM heavily, and Clojure still beats Scala IMO) but I'll have to look into the current state of things in Haskell.

Out of curiosity: what were your experiences with OCaml, and why didn't you like it?


Interesting insights. Would love to read more about this. Usually the arguments are "I'm a bad programmer like everyone else so I need verification" or "the world is dynamic".


The good arguments for static typing are much different than "I'm a bad programmer".

Google "Theorems for free". One of the most life-altering CS papers I've read. :-)


Douglas Crockford is a proponent of dynamic typing. (At least from what I read in the beginning of "JavaScript: The Good Parts".)


What's the relevance of that point? He doesn't appear to be mentioned in the post.





