Why OCaml, why now? (2014) (spyder.wordpress.com)
185 points by lelf on Feb 14, 2015 | 129 comments



Nice post! There are a lot of good reasons to use OCaml over Haskell which are more compelling than “JavaScript”, though. A few of them are:

1. modularity (and now that they have added generative functors à la SML, you can have true abstraction)

2. benign effects: in Haskell "proper", you do not have effects; rather you have "codes" for effects, which get interpreted into effects by the RTS; this rules out the possibility of, e.g., using effects to implement a semantically pure interface. On the other hand, OCaml has actual effects, which can be used in an open-ended way to implement all sorts of functional interfaces (see the sketch after this list).

3. strictness: arguments abound about whether laziness or strictness is better; for me, it comes down to the fact that with some pain, you can embed laziness in a strict language with effects, but you cannot embed full-on ML-style strictness into a language like Haskell; moreover, strictness-by-default permits safe uses of benign effects.
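
To make point 2 concrete, here's a minimal sketch (my own, with made-up names) of a benign effect in OCaml: a memo table that is mutated internally but invisible through the interface, so callers can treat the function as pure.

    (* The hash table is mutated on each call, but no caller can
       observe it: the function behaves as a pure one. *)
    let memoize f =
      let table = Hashtbl.create 16 in
      fun x ->
        match Hashtbl.find_opt table x with
        | Some y -> y
        | None ->
            let y = f x in
            Hashtbl.add table x y;
            y

    let square = memoize (fun n -> n * n)

And for point 3, a sketch of embedding laziness in a strict language: OCaml's built-in lazy/Lazy.force (a memoizing thunk) is enough to build call-by-need structures such as infinite streams.

    (* An infinite stream: the tail is a suspended computation. *)
    type 'a stream = Cons of 'a * 'a stream Lazy.t

    let rec nats n = Cons (n, lazy (nats (n + 1)))
    let head (Cons (x, _)) = x
    let tail (Cons (_, t)) = Lazy.force t

    (* head (tail (nats 0)) = 1; the rest is never forced. *)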

I'd call Haskell an expression-oriented programming language, since the types end up classifying expressions and algorithms (i.e. the particular "effects" you used get put into the type). Whereas I'd say (OCa)ML is a value-oriented language, since values (canonical forms) are considered separately from general expressions (canonical and non-canonical forms); moreover, implementation details don't end up in the types, so you can really consider them to be classifying values and functions (i.e. equivalence classes of algorithms, not algorithms themselves). This is largely orthogonal from strictness vs laziness, but as soon as you add partiality in, strictness becomes the only tractable way to have canonical-form-based meaning explanations for the judgements of the theory.

P.S. My day job is writing Haskell. (In case the Pedagogical Brethren wish to come and "correct" me.)


Adding my own two cents:

- I like that with OCaml you can choose if you want your code to be functional or imperative, and both are equally well-supported. I am writing a compiler in OCaml, and I am able to use the API described in Peyton Jones & Lester for pretty printing while still using imperative algorithms from the Dragon Book. This sometimes feels like cheating, but not having to figure out how to transform imperative code into functional code while maintaining the same complexity constraints is really nice.

- OCaml code is usually a lot more "boring" than Haskell code, which is nice when reading other people's code. Let me explain: I am T.A.'ing an intro to compilers class where some students use Haskell and some use OCaml. The Haskell styles of the students vary a lot: some prefer point-free style, some used Parsec and others Attoparsec, some passed state around manually while others used a state monad, etc. The OCaml submissions, on the other hand, were a lot more homogeneous: ocamllex for the scanner, menhir for the parser, an AST definition almost identical to mine, and code generation that was also very similar.

- This may just be me, but I have had fewer problems with opam than with cabal. Also, the merlin tool and its integration with Emacs are really good and give me the kind of minimal, out-of-my-way IDE experience that I am looking for.


I generally prefer programming with immutability, but I certainly appreciate the ability to use "benign effects" in my programs. Besides the case for performance, I have applied a strange combination of benign effects, GADTs, and functors to generate a sort of "proof of type equality" between two values passed into a framework, neither of which is known to the framework because they're passed in by two separate clients. At that point, I had the ability to reason about two values with arbitrary and potentially distinct types as either not being the same (None), or being the same (Some (x, y), with x and y having the same type).

I have no clue if there's a more elegant way to do this (edit: there probably is), but even I (as a n00b) was able to figure out how to do this by using benign effects. There's only a single mutation in this library - but it was such a critical one that made everything else possible.
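
For the curious, the usual OCaml encoding of such a proof is a GADT equality witness. A minimal sketch of just the witness type and its use (this leaves out the functor-plus-mutation machinery described above for actually producing the witness):

    (* Matching on Refl teaches the type checker that 'a and 'b
       are the same type within that branch. *)
    type (_, _) eq = Refl : ('a, 'a) eq

    (* Transport a value across a proof of type equality. *)
    let cast : type a b. (a, b) eq -> a -> b =
      fun eq x -> match eq with Refl -> x

A framework holding an (a, b) eq option can then report None for distinct types, or Some Refl when the two client types coincide.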

I'm curious about the reason for preferring generative functors over applicative functors. It seems like both could have valid use cases. Could you point me to a writeup that explains why you believe generative functors are superior?


After numerous discussions with Bob Harper (and reading the work of Derek Dreyer, who has been the biggest proponent of applicative functors), I have come to understand that there are only two compelling use-cases for applicative functors:

1. higher-order functors

2. modular type classes

Higher-order functors are kind of cool, but IMO Standard ML does not seem to be suffering too much from the lack of them. I'm not too interested in them, since they get super gnarly super fast, and they happen to be pretty much the main use-case for applicative functors. I suspect that most use-cases of higher-order functors in OCaml could be reformulated to be first order, with a lot less monkeying around in the module system. There may be compelling use-cases though.

The other use-case is possibly a version of modular type classes that behaved a bit more like Haskell's. The idea is that if functors are going to be applied automatically during elaboration to provide something like type classes, you'll get MkWelp(S) called in multiple places, and you would prefer that any type members of the resulting structure be compatible. Applicative functors would do this.

I am not too convinced by this use-case, though I could see that people would find it useful.

In all other cases, generative functors have the correct semantics. Pretty much the whole use-case of putting abstract types in a functor is that you can then reason intensionally about them (i.e. distinguish them based on their access path). This is super useful, for instance, if you have a notion of "index" or something and you want to prevent yourself from using indexes from one table in another one, or something along those lines.

This is what the Scala people mystifyingly call "path dependent types". It's just generative abstraction.

So maybe it is an interesting feature to have applicative functors, but these should be added post facto, and generative functors should be the default. OCaml now supports generative functors if you add an extra () parameter; it's strange syntax, but it's good enough for me! :)
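
To illustrate the index example with that syntax, a minimal sketch (names are mine): because the functor is generative, each application mints a fresh abstract type, so indexes for different tables cannot be confused.

    (* The () parameter marks the functor as generative (OCaml >= 4.02). *)
    module MakeIndex () : sig
      type t
      val of_int : int -> t
      val to_int : t -> int
    end = struct
      type t = int
      let of_int i = i
      let to_int i = i
    end

    module UserId = MakeIndex ()
    module OrderId = MakeIndex ()

    let u = UserId.of_int 1
    (* let bad : OrderId.t = u   <- type error: the types are distinct *)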


I think I disagree with this. Your functors should generally be pure, and for pure functors applicative is a nicer semantics.

Consider the case of a set implemented as a binary tree. The type of such a set should be parametrised by the type of the elements and the ordering used. With applicative functors this is the case as your set type will be `Set(O).t` where `O` is a structure containing the type and the ordering. With generative functors each individual set type is abstract -- so the type itself is not parameterised by the ordering.

You could consider this to be just an example of what you are calling "Modular Type Classes", but there doesn't need to be a system of implicit module parameters for it to be useful.
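
For a concrete sketch of this with the stdlib (whose functors are applicative by default): two applications of Set.Make to the same argument produce compatible types, which is exactly the sharing being argued for.

    module S1 = Set.Make (String)
    module S2 = Set.Make (String)

    (* Applicative semantics: S1.t and S2.t are both Set.Make(String).t,
       so values flow freely between them. *)
    let s : S2.t = S1.add "x" S1.empty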


lpw25:

> Consider the case of a set implemented as a binary tree. The type of such a set should be parametrised by the type of the elements and the ordering used. With applicative functors this is the case as your set type will be `Set(O).t` where `O` is a structure containing the type and the ordering. With generative functors each individual set type is abstract -- so the type itself is not parameterised by the ordering.

You make a good point about this not being strictly about type classes. But I'd say that the issue is only a problem in the presence of implicit resolution, since otherwise, you can just bind the result of the functor to a structure once and be done with it. It becomes an issue with type classes, because you don't have the choice to share a single structure during elaboration.

IMO, the generative version is still better for most use-cases (pure or not), but it's nice that you can have both in OCaml.


>> since otherwise, you can just bind the result of the functor to a structure once..

My understanding was the same as lpw25's. I wanted to use applicative functors in the same way type classes (or OCaml's modular implicits) would make use of them, but without type classes' implicit resolution (i.e., specifying the modules explicitly). I agree there isn't too much difference from doing the same with generative functors, but the main difference, as you said, is that I would need to find a place (some appropriately accessible location in the namespace) at which to bind the result of these generative functor applications, and that's just an extra point of friction (if I understand correctly). As you said, it's not the end of the world either way, because we can do either.

Thanks for the help, jonsterling/lpw25.


The first chapter of Dreyer's thesis "Understanding and Evolving the ML Module System" (http://www.mpi-sws.org/~dreyer/thesis/main.pdf) talks about generativity.


> 3. strictness: arguments abound about whether laziness or strictness is better; for me, it comes down to the fact that with some pain, you can embed laziness in a strict language with effects, but you cannot embed full-on ML-style strictness into a language like Haskell; moreover, strictness-by-default permits safe uses of benign effects.

I've seen this claimed before, but I'm not sure I can subscribe to any sense in which this statement is true. It's been known since at least John Reynolds that you can make programs evaluation-order oblivious by means of a CPS transformation: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.110..... Just as thunkification is a program transformation that simulates call-by-name in a strict language, the call-by-value CPS transformation simulates call-by-value in a non-strict one.

Moreover, Haskell has strictness annotations, so strict programming is possible if you pepper all the arguments of all your functions with strictness annotations and you use only strict datatypes. In fact in the future a language pragma will do this for you: https://ghc.haskell.org/trac/ghc/wiki/StrictPragma. That's less invasive than a CPS transformation.


Nope! You may be able to get evaluation order out of it, but the main reason I care about strictness is not CBV evaluation order. It is the following:

1. Reasoning by induction: not possible with a lazy function space

2. Non-pointed types: also not possible in Haskell

3. Compatibility with a proper treatment of effects

It's great that you can do strictness annotations in Haskell! But they don't accord one the above facilities at all.


> 2. benign effects: in Haskell "proper", you do not have effects; rather you have "codes" for effects, which get interpreted into effects by the RTS; this rules out the possibility of, e.g., using effects to implement a semantically pure interface. On the other hand, OCaml has actual effects, which can be used in an open-ended way to implement all sorts of functional interfaces.

So, to be all pedantic and stuff... You've always got unsafePerformIO, first off. This is used in Debug.Trace (https://hackage.haskell.org/package/base-4.7.0.2/docs/src/De...) to allow printf debugging in pure code. This is also used in some Haskell libraries to provide restricted effects in monads other than IO. But if you don't want to use unsafe* functions, you can always use ST to get direct access to mutable memory in a safe way. If you use runST within that interface, then you can present a pure interface to users.


1. That's why I qualified it with "proper". Of course there is unsafePerformIO, but in the presence of laziness, this is really unsafe; as opposed to ML, where doing IO may harm your ability to reason about stuff, but it won't be unsafe.

Debug.Trace is also unpredictable; actually, it's perfectly predictable if you understand laziness, but it's still not what's really wanted in most cases.

2. ST is a single instance of an effect that can be interpreted into "pure" code. There are plenty of other effects that don't work like this in Haskell, not to mention the problem of composing them together.

So thanks for chiming in! But what you have said is not really that different from what I have said.


I would love to talk to you a little about these kinds of things. I've been digging more seriously into OCaml the last few weeks and would love some signposts for making the Haskell->OCaml transition.


Hi! Please feel free to email me at any time. I can't promise that I will know everything you want to know (since I only know very little), but I would be happy to help with what I do know. jon [at] jonmsterling [dot] com.

BTW, I looked at your OCaml stuff yesterday, and it seemed pretty cool.


Could you say something about why `unsafePerformIO` is unsafe in the presence of laziness?

I know `unsafePerformIO` can be used to violate the type system in combination with `IORef`s, for example, but I don't see what laziness has to do with it.


In the presence of laziness, it can be very difficult to reason about what code executes when. When you don't know when your unsafe IO happens, things can get out of order or have other unforeseen side effects.


With regards to 2., could you briefly mention some other instances? I have no idea what I should be thinking of.


Thanks! (I'm the author). I focussed on JavaScript because it's what I know, it's my day job, and I wanted to share a bit of my research while trying to introduce AltJS to it.

I wasn't (and am still not) really enough of an expert in either Haskell or OCaml to talk about other differences; including a bunch of dot points I don't understand wouldn't help anyone :)


Hi! Thanks for writing the post. I didn't mean to criticize you for talking about what you know; I hope I was able to help give a broader perspective on some of the other issues at hand too. :)


>1. modularity (and now that they have added generative functors à la SML, you can have true abstraction)

Could you share some information regarding Haskell and its 'modularity' problem (vs. the ML family of languages)? I'm fond of the way SML projects can be structured. How are people solving this using Haskell? Are there any interesting solutions for creating modular Haskell applications/systems I can see today?

Thanks for your comment.


There is a weak form of modularity which can be achieved by using two parts of Haskell:

1. hiding constructors in files

2. type classes

But it doesn't really begin to approach the kind of structuring that is possible in an ML-like system. Unfortunately, the issue is very (needlessly) controversial, and I don't think I want to get dragged into it here.

I'll mention, though, that Haskell does have one form of modularity which ML doesn't really have, which is the fact that you can write algorithms separately, compose them together after the fact, and expect to get reasonable performance in most cases. This is because of two things: Haskell is non-strict, and GHC has pretty good fusion. In ML, you often end up manually fusing things together in order to get good performance, so composition can be a bit more difficult.
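
A hypothetical OCaml illustration of that manual fusion: composing stdlib traversals allocates an intermediate list, so one often rewrites the pipeline as a single pass by hand.

    (* Composed: allocates the intermediate filtered list. *)
    let doubled_evens xs =
      List.map (fun x -> x * 2) (List.filter (fun x -> x mod 2 = 0) xs)

    (* Manually fused: one traversal, no intermediate list. *)
    let doubled_evens_fused xs =
      List.rev
        (List.fold_left
           (fun acc x -> if x mod 2 = 0 then (x * 2) :: acc else acc)
           [] xs)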


> I'll mention, though, that Haskell does have one form of modularity which ML doesn't really have, which is the fact that you can write algorithms separately, compose them together after the fact, and expect to get reasonable performance in most cases.

I'd actually argue the opposite. MLton is one of the best whole program optimizing compilers I have ever used. There are virtually no penalties for abstraction. OCaml has a bit of trouble here, from what I've heard, but I've never personally run into serious performance problems as a result of abstraction.


sgeisenh — wow! How cool. If that is the case, then that's awesome, and makes me very happy.


Awesome response, but....

> (In case the Pedagogical Brethren wish to come and "correct" me.)

Seems very unnecessary and is a big departure from the Haskell community I'm personally used to. On top of that, it's not the tone I'm used to on HN for the most part either.

Apologies for going off topic, just felt obliged to chime in.


Hey, sorry... You're right, I shouldn't have said this. Over the past few days, certain parts of the Haskell community have been particularly vicious (toward me and others), and I have been feeling a bit beleaguered. I shouldn't have brought it here though. Cheers.


> The first question I usually get ... is “Why not Haskell?”.

> The answer is JavaScript.

My current favorite language for this is Elm[1]. It compiles to JS, is based on Haskell, and differs from it in a few simple ways[2].

Elm doesn't let you use the tons of available Haskell libraries, and makes it slightly painful to integrate with JS. But it's the coolest thing for the web so far. And here's why:

It uses a completely declarative state machine to model a fully interactive and complete GUI! This is the dream React.js and Om didn't even know they were heading towards! And yes, it is as good as it sounds.

[1]: http://elm-lang.org/

[2]: http://elm-lang.org/learn/FAQ.elm

EDIT:

Also, Elm fixed Haskell's awful syntax record name collision problem, and awesomely too!


Purescript[1] is also gaining some traction and even has a pretty nice looking book out. [2]

It has some changes however; most notably it's strictly evaluated rather than lazy, and there are differences in how it handles type variables. [3]

Personally though, I've grown more fond of F# than Haskell; I only wish the documentation for Websharper and Funscript was more solid.

[1] http://www.purescript.org/ [2] https://leanpub.com/purescript/read [3] https://github.com/purescript/purescript/wiki/Differences-fr...


Elm is actually 3 things rolled into one:

1. a language very similar to Haskell

2. a set of runtime libraries

3. a technique for modeling a GUI as a declarative state machine

PureScript only competes with #1 and #2 here. Where Elm really shines is #3, which #2 helps with a lot.

Technically #3 could be done in any language. But it really helps to model it in such an expressive language with immutable data and no side-effects (except via Signals).


I won't argue that Elm does anything other than an exemplary job at #3, but it is worth noting that there are some interesting efforts to build similar functionality in PureScript libraries:

https://github.com/paf31/purescript-thermite

https://github.com/kRITZCREEK/reactive-psc

https://github.com/bodil/purescript-signal

https://github.com/michaelficarra/purescript-demo-mario

https://github.com/mechairoi/purescript-frp-rabbit

To me, this is the benefit of PureScript - yes, you have to do a little more work, because you don't get these things for free from the compiler and tools, but you gain complete control over your application structure. You're not forced to work in some ambient Signal category.


You can probably implement Elm in PureScript pretty trivially, AFAICT. AFAIK Elm doesn't have any static guarantees about lack of space leaks or lack of time leaks. Or, maybe it just lacks higher-order signals? (Which would mostly prevent those sorts of things happening.) I found Neel Krishnaswami's papers on deciding/preventing these things statically very interesting. I haven't found any implementation of his ideas, but they seem pretty solid: http://www.cs.bham.ac.uk/~krishnan/


Time to plug Kris' blog post about Haskell in the browser: http://blog.jenkster.com/2015/02/a-brief-and-partial-review-...

TLDR: he thinks elm's the most practical approach right now.


Could anyone summarize how well PureScript and Elm treat sourcemaps/debugging in the browser? For a while I thought that js_of_ocaml didn't support sourcemaps, but it turns out, one of my dependencies wasn't compiled with debug flag (-g) and I was able to get a pretty good sourcemaps/debugging experience in Chrome dev tools once I fixed that issue. Is there something js_of_ocaml can learn from PureScript/Elm's JS compilation toolchain? Or should we look to CLJS (which I'm also really excited about) as the best example?


I can't really comment on source maps in Elm, but the approach in PureScript has been to generate clean, readable JS which is debuggable directly. Source maps are on the roadmap, but not really a priority right now. I haven't heard any complaints about the ability to debug compiled PureScript yet.


That's a nice approach for debugging! If the mapping is close enough, I don't mind reading the JS output. I'm curious about the general approach to compilation, though. It seems like a statically typed language could take advantage of the knowledge of types to generate an even more efficient version of the program that uses typed arrays and views (though, yes, it would require implementing a garbage collector unless relying on some kind of WeakMap in the JS engine). I've heard of garbage collected languages compiling to LLVM which would allow Emscripten to assist you, but I've also heard that LLVM has a really hard time with GC languages.


Right now, the translation is very direct. My basic rule of thumb is - only perform those optimizations which the user opts into. Some things are standard though, like a few inlining rules and tail call elimination, but the plan is to provide a rewrite rules engine so that the developer can be as fine-grained as they like when it comes to optimizations.


As far as I can tell Elm is more of a GUI focused language, when I was evaluating AltJS languages I was looking for something a bit more low level. The JS and DOM bindings of js_of_ocaml were what really sold me although I didn't mention this in the post.


hey didyoucheckthe, I'm working on a new programming language that I think you may be interested in if you like Elm. Couldn't find your email in your profile, but mine is cammarata.nick@gmail.com. I'd love to show you.


I think the author doesn't give enough credit to things that OCaml has that Haskell doesn't have: a powerful module system (ie, functors), polymorphic variants/subtyping, etc.


There's a bunch of other nice features of OCaml such as named arguments, fast compile times, strictness, c-types, and reasonable records. To achieve (some of the feature in) Elm style "structural subtyping" records in OCaml you use the more verbose "object" keyword which is just a record with row polymorphism. Most choose to stick with standard records because they compile to more efficient code.
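
As a sketch of that object-based encoding (my example): the row variable `..` in the inferred type means "any object with at least these methods", so objects with extra methods are accepted.

    let get_x o = o#x   (* inferred type: < x : 'a; .. > -> 'a *)

    let p2 = object method x = 1 method y = 2 end
    let p3 = object method x = 1 method y = 2 method z = 3 end

    let sum = get_x p2 + get_x p3   (* both accepted *)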

I think the ML module system (in OCaml and other languages) is very powerful because it allows you to abstract and operate not only on types but also on values simultaneously. F#, Haskell, and most others lack this but it would be great if they were to officially adopt it. OCaml is also very easy to learn and I seriously recommend Real World OCaml - it's free online and excellent.


Agreed on the ML module system. It's a damn shame that Haskell has this anemic probably-historical-accident namespacing-only module system that we're stuck with. On the plus side, there seems to be some real impetus to implement the Backpack system, though there are still unexplored points in the design space behind Backpack, especially wrt. type classes.

FWIW, I think type classes are actually what make Haskell more appealing to me than O'Caml, even though O'Caml's modules technically subsume type classes (for most purposes anyway). In practice it just gets too verbose.

That, and enforced purity.


One thought on the verbosity of type classes vs. ML modules/functors: As a framework developer, if I have a more powerful abstraction (supposing ML functors/modules really are more powerful), then even if abstracting is more verbose, it might result in an even better end-developer experience, if that explicitness is limited to the core internal part of the framework.

In other words, that verbose application of functors etc, might only need to be written once in the internals of the framework, but could enable more powerful features for users of the framework with little or no additional verbosity. This is just one thing I've noticed happen in a very specific case and it might not be true in general. Really learning Haskell type classes is still on my list of things to do, so please forgive me if I've misspoken.


That was actually my main objection to modules vs. typeclasses: The burden often seemed to land on the users of libraries. Typeclasses are usually effortless as a user, though there may be a multitude of sins[1] hidden behind them.

EDIT: Don't get me wrong, there are also advantages to being able to explicitly declare two structurally identical modules as different, but in practice I find that newtypes suffice.

[1] FlexibleInstances, UndecidableInstances, etc. :)


Total agreement about most of those things, but I want to point out that Haskell has some amount of row typing available via libraries like Vinyl, and it certainly has c-types.


The most recent Vinyl version (0.5) https://hackage.haskell.org/package/vinyl-0.5 winds up being a REALLY nice balance of flexibility, good type inference, and a few other things.

It's actually simple enough that for a work project I decided it would be simpler to write a custom version of the same data structure just to avoid extra deps (and because I needed some slightly bespoke invariants).


:) Thanks! And yes, I don't know about Anthony, but my intention has always been for Vinyl to be a proof-of-concept for what happens when you try to make a clean, minimal & well-factored HList experience; my motto is, “Now build your own Vinyl”.


Thanks for reminding me. I saw a recent records proposal for Haskell that seemed to hit all the marks including performance (it wasn't Vinyl). I seriously hope OCaml considers a similar approach to records in the future, but in the mean time, "out of the box" records in OCaml are sound and reasonable.


Named arguments may be just "syntax sugar", but they're one of the biggest things I miss in Haskell. They make library functions more consistent, and they also make it much easier to write point-free code because you don't need to resort to combinators like "flip" or "." as much.


I believe they are slightly beyond "syntax sugar". If I'm mistaken, I'd love to see the equivalent of what they desugar to. Some of the nuances that I believe OCaml's named arguments get right (and what greatly distinguishes them from passing a record) may require additional work in the type system beyond simple desugaring to something like records. Someone please correct me if I'm wrong.

- Partially applying arguments. You can apply one named argument, and get a function that expects the remaining named arguments. This is pretty great though confusing when you see it for the first time.

- Defaults for omitted arguments. If the caller doesn't specify an argument, you can define what should be used instead. This is kind of like the opposite of subtyping on record arguments. With structurally subtyped record arguments, you can pass a record that has more information than a certain minimum set of labeled fields. But with named optional arguments with defaults, you can pass less than a certain maximum number of labeled fields.
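
A small sketch of both behaviours (the function is made up): ?(port = 8080) declares an optional argument with a default, and labeled arguments can be supplied one at a time, in any order.

    let make_addr ?(port = 8080) ~host () =
      Printf.sprintf "%s:%d" host port

    (* Partial application by label: still awaits the final (). *)
    let on_example = make_addr ~host:"example.com"

    let () =
      print_endline (on_example ());                      (* example.com:8080 *)
      print_endline (make_addr ~port:9090 ~host:"a" ())   (* a:9090 *)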


Also, provable performance properties are a big plus, which follows from making lazy evaluation optional rather than mandatory. This aspect is explained quite well by Robert Harper:

https://existentialtype.wordpress.com/2012/08/26/yet-another...

(although he doesn't mention OCaml, and probably has other, even better future languages, in mind)


Naw, Harper is SML-all-the-way (AFAICT).

Btw, Haskell is technically not "lazy", it's "non-strict".

Now, I think he has a good point, but I don't think there's any general consensus within the FP community on which one of non-strict/strict is "better". Personally, I don't think there's a "right" answer.

Earlier on in my career, I would have said that "non-strict/strict" should have been part of the type of a term, but after non-trivial experience with O'Caml and Lazy.t, I'm not so sure. I'm definitely sure that it led to an absurd proliferation of incompatible interfaces.

And, as SPJ has opined, laziness forces you to be honest about side effects, which is not a trivial thing!


One thing I find interesting is that our industry is basically a gigantic sea full of Java/C++ programmers.

Within that ocean, there is a small portion of Functional Programmers who have come to a realization that a set of finer grained abstractions would improve the industry productivity at large.

Within that small subset, you have a set of people who believe that static type systems aren't ready for widespread use or are too cumbersome for expressive programming in many cases. While the other part of the functional camp are convinced that modern type systems (usually the ML variety) are more than sufficient for expressive programming.

Within the ML static typing camp you have people who adamantly claim that in order for static type systems to be sufficiently expressive, you must have a particular kind of polymorphism, or that this kind of polymorphism must be implicit - not explicit. Or they might form a stance along the lines of strictness vs non-strictness.

Then within the strictness camp, for example, some people will form a stance that your language must be formally specified (SML) as opposed to having only a reference implementation (OCaml).

At the end of the day, we're left with a handful of people who share our exact opinion about what languages would in theory make the industry more productive.

Meanwhile, the enormous sea of industrial engineers are still using Java/C++.

It can end up looking like people debating which particular brand of natural spring water should be given to millions of people who are thirsty in the desert.

I'm not sure I'm saying anything helpful, and I totally understand having a stance on any one of these issues. I'm not accusing anyone of being too focused on these nuances. They are important questions, and I'm thankful people way more studied than I am take the time to report their findings on the tradeoffs. But personally, I also try my best not to lose sight of the fact that we are in a giant sea of people who would benefit from exploring virtually any of the functional paradigms/languages.


I won't claim any special knowledge, nor do I have any actual solid research to back up my intuitions.

I can certainly recognize the feeling that O'Caml makes you more productive from when I first discovered it, but that was mostly just because of algebraic datatypes. (And pattern matching which, while not terribly useful in general circumstances, is hugely useful in practical CRUD-like applications.) Polymorphic variants also made the "pro" list.

The biggest boost to my productivity I've ever felt(!) was when explicitly separating different types of effects. (Not just "pure vs. impure", but "uses-network" vs. "uses-filesystem" vs. "pure"... which is why I'm currently sticking with Haskell: it enforces that kind of discipline.)

(I'm sure there'll be something better coming along any day now, but...)

Mostly, I hope the problem is just a lack of (appropriate) advocacy and education. There's also an absurd amount of inertia which is due to sheer entrenched interests/industries.

EDIT: ... or maybe it's a generational thing. After all this kind of thing happens in every other fast moving discipline without terribly rigorous theoretical underpinnings[1], e.g. medicine or biology.

[1] Don't get me wrong, CS is basically math which is unassailable, but we still have no idea how to (reproducibly) produce stable/well-functioning software.


(Note, not being critical of you, just placing this here because I was thinking about it recently)

Maybe because CS is so young, there is a tendency to confuse theory and practice.

Whether a particular language makes one more productive isn't math, it's engineering. When we talk about how monads might allow you to separate concerns and relieve a mental load -- we are talking engineering. When we talk about how monads compose, we are talking about the math/science that enables the engineering. They are both related, just like mechanics is related to mechanical engineering, but they aren't the same and they have different concerns.

Be wary when you start thinking in terms of "entrenched interests". It's tempting to go down that road, but the reality is that those "entrenched interests" actually have good engineering reasons to be that way[1]. It isn't like a million other programmers haven't noticed that FP-style programming offers some benefits -- but often the benefits end up not out-weighing the drawbacks in the languages, ecosystems, and practical performance and hardware concerns.

This should be obvious, but I think people start muddying the waters -- especially when they focus on the ideological purity of their programming language. Programming languages are tools. The most popular ones are engineering tools -- and there are some not so popular ones that are tools for exploring the math behind the language itself. There are too many tradeoffs to have a language which occupies both spheres successfully.

In particular, the FP advocates go round and round on this issue. Just because something is elegant mathematically, does not mean it's good engineering practice. Haskell, for example, can be practical, but it struggles between the math and the reality of limited machines and human cognition[2]. Likewise, when you start talking about SML vs OCaml, you're talking engineering, not math -- and possibly a language tailored to engineering vs math.

You see this in languages like C++ too, where practicality starts giving way to a kind of semi-mathematical, yet totally non-scientific, dogma about how programs should be constructed based on their respective committee-designed[3] standard libraries.

[1] not always, but more than people really give others credit for.

[2] I still maintain Haskell is a write-only language, like an opposing pole to perl. Not (completely) because of the language itself, but because of the culture surrounding it which glorifies one-liner lambda calculus/laziness tricks over engineering pragmatics.

[3] e.g. compromises made in absence of any real on-the-ground engineering constraints.


I won't address all your points, but I think [1] deserves special attention: I think we can all agree that if you want to write software that will let you land a small vehicle on Mars, then you don't want/need the opinion of a theoretician, you just need $1B and a team of extremely disciplined programmers who will ADHERE TO PROCESS. Then you impose so much process that they either leave or prevail. What we're speculating about here (at least I think we are?) is whether this is a sustainable model for general development and whether we can do better. Even if we can't get better runtimes from FP languages, could we perhaps make programs which generate better C programs than the elite programmers who were chosen for this particular mission? (I think we can. It has very little to do with humans, but a lot to do with the fact that programs are very meta, in that we can create programs that generate programs that generate programs ad infinitum. If we can get our specifications right, the rest becomes trivial.)

A Mars lander program director explicitly said that he chose C because it was what he was familiar with. (I'll edit and post a link if I can find the video.) For him it wasn't so much about language as about process (6 different industrial-strength linters, etc.).

> Just because something is elegant mathematically, does not mean it's good engineering practice. Haskell, for example, can be practical, but it struggles between the math and the reality of limited machines and human cognition[2]. Likewise, when you start talking about SML vs OCaml, you're talking engineering, not math -- and possibly a language tailored to engineering vs math.

That's the thing I would dispute, but it's hard to convince people who aren't already drinking the Kool-Aid, as it were. Compared to compiler-assisted reasoning about side-effects, the difference between SML and O'Caml is completely trivial.

Your [2] is just absurd :). Clearly, you don't have to understand the body/implementation of a function, just its type. :)

More seriously, I'd be interested if there's a particular experience that soured you on FP (or perhaps Haskell, in particular)...?


> What we're speculating about here (at least I think we are?) is if this is a sustainable model for general development and if we can do better.

Here is the kind of difficulty I'm talking about. Instead of thinking about this on a continuum from Aeronautics & medical (where people could die) to yet another throwaway web TODO app (e.g. "general dev"), there is this tendency to say that they are "completely different" in some way. Let's be clear, the theories do help both. Side-effect free programming is useful. But you cannot take theory and just map it directly to the real world with no caveats. Just because Haskell tracks side-effects, doesn't make it superior to C in every context.

The Mars Rover shows this clearly -- C was chosen because someone was used to it, yes. But the part you didn't catch was the implicit decision that it makes no sense to use a compiler and ecosystem you don't understand the caveats to when you need to understand all the caveats to build a successful system.

Similarly, the TODO app developer isn't using Haskell because Haskell doesn't have anywhere near the libraries Python or even Go does, and building and deploying Haskell programs is somewhat of a chore compared to those. It's no contest, Go is superior to Haskell. :-) It's also one of the reasons almost anything is superior to C/C++ in this very same domain. :-) It's not about being "entrenched", because Go is way, way younger than haskell -- it's about the priorities the language designers and developers for that language have -- ie. the culture.

> If we can get our specifications right, the rest becomes trivial.

This is what I mean about theory and practice. Note the big "if" there. I totally understand the sentiment, and I wish it were true. I even have my own meta-programming based language in the wings I'd like to release some day.

But the reason I've stalled a bit is I've never seen this work in practice because our minds (and specs) tend to paper over the devilish details. I'm not saying we shouldn't use meta-programming, I'm saying that it should not become dogma. Invariably, you get caught up in details. This kind of domain specific "meta-programming" is a great bootstrap technique, but doesn't appear to be "the way" programs should be written.

The OMeta folks created a TCP stack that compiles (almost) directly from the specification. But that was an academic exercise. How many special cases do you think they cover? Does anyone really believe that the stack in question is anything but a way to bootstrap a more robust/performant implementation later on?

Similar for the PyPy folks. Yes, you can JIT compile a python interpreter, but how many years have they worked on special cases for that, and how much farther would they have gone if they hadn't used the meta-circular approach and just addressed the real problem to begin with[1]?

> Compared to compiler-assisted reasoning about side-effects, the difference between SML and O'Caml is completely trivial.

I think as a general statement, this is true, but is a poor way of thinking about things. You're not comparing OCaml to SML, you're comparing it to C++ or Java. If all the language brings is side-effect reasoning, it's not enough, because I can add decorators to C++ code to do what you're asking.

Even with "side-effect handling" these languages don't assist you with all of the side-effects someone actually cares about. Is there a decoration for runtime speed, for memory consumption, not just for IO, but for the amount of IO? Is it even predictable? These are the real things people care about, not just whether a function peers into some global state somewhere (although that is an important thing to track, it isn't the biggest issue, IMO).

> Your [2] is just absurd :). Clearly, you don't have to understand the body/implementation of a function, just its type :)

Is this sarcastic? :-) I mean, anyone with a reasonable amount of experience knows that this is not true most of the time.

It's not completely false -- one doesn't always have to look at the implementation of the operating system facilities or even the standard library. But in code that is less than tangential to what you're working on, you certainly do end up having to understand how it's implemented. As a developer, you spend more time reading code others wrote than writing it.

> More seriously, I'd be interested if there's a particular experience that soured you on FP (or perhaps Haskell, in particular)...?

I'm not soured on FP as a concept (which could mean many things, but I just mean state-awareness), just Haskell. The <space> function-call operator combined with currying in particular seems like an advancement to rubyists[2] and those who think succinctness is a virtue above all others[4], but it is an engineering disaster -- completely unreadable at the call site unless you know the arity of every function by heart.[3]

But maybe I've just read bad code... I'm not dismissing that possibility. :-)

[1] The problem with python was never optimizing plain python (look at JS as an example) or the "GIL"; the problem was (stupid) performance requirements about the GIL from Guido, and later on, C extension API compatibility.

[2] Do not get me started on the "domain specific language" shit-fest that ruby (and descendants like Groovy/Gradle) have unleashed on the world. At least stack overflow gets money and page views from it, I guess.

[3] As much as people like to put down smalltalk (cum obj-c) keyword argument syntax, it is a revelation when you're maintaining code (and I'm sorry Swift seems likely to drop it as a default).

[4] Is Arc (PG's lisp) used anywhere significant, other than for this website?


I agree especially about the Smalltalk-like keyword syntax. People not used to it probably can't appreciate the clarity it brings to code. You understand the code better - and faster - because you can see from each calling site what each method call does: it indicates the meaning of each argument clearly and concisely enough that you actually start using it for that purpose. So to understand what a piece of code does, you mostly don't have to look up the definitions of the methods being called.

It takes longer to write but makes reading/understanding much faster. When you write the code it is obvious in any language what your calls "mean", but not so obvious 3 months later.

Writing without keyword syntax is as if all our emails just referred to "him" and "her" and "they" and "there", never mentioning the proper nouns of who or what or where we are actually talking about. Everybody understands what we are saying at the time those emails are written. But for another person, or for you yourself later, trying to understand what a specific email is actually saying would be rather difficult. Relying on the POSITION of an argument, rather than its name, in the calling context is like saying "that argument which is the 3rd".


> laziness tricks

When your language is lazy by default, is it really a trick to take advantage of that?

Do you consider this a trick?

    take 1 [5..]
How about this Fibonacci definition?

    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
What about this extensible fizzbuzz example?

    fizzBuzz i = if null desc then show i else desc 
        where desc = concat [label | (j,label) <- tags, 0 == rem i j] 
              tags = [ (3,"Fizz"), (5,"Buzz"), (7,"Baz") ]   
   
    main = mapM_ (putStrLn . fizzBuzz) [1..120]


Focus on why you are trying to accomplish something, not how you are accomplishing it.

Yes, these are tricks, because "lazy" means you're storing thunks and significant partial results with no outward indication of such. Just because you don't see it, doesn't mean it's not there. I could say the same about C's allowance of global state, or spurious use of recursion without a depth guard.

To your examples:

A trick, because who would ask for a constant from an infinite list that could be generated numerically in a fraction of the time?

A trick, because for random queries of large n, this is not the optimal way to compute it (the optimal way is more complex). It also entangles memory allocation and the computation. Note how Haskell is "side-effect free" for fibs, yet somehow memory allocation, and the time it takes, is not considered a side-effect when I ask for "fibs !! 2000000". The C iterative solution is trivial, and will execute faster (see the sketch after these points).

A trick, because no one cares about fizz-buzz being extensible, and this is not particularly clear to the reader compared to an ordinary if/switch/case expression.
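
(To make the comparison concrete, a strict iterative version really is short; a sketch in OCaml, the thread's language, using plain int, which overflows long before n = 2000000.)

    (* Strict, tail-recursive, constant space: no thunks retained. *)
    let fib n =
      let rec go i a b = if i = n then a else go (i + 1) b (a + b) in
      go 0 0 1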


Can I please quote your "debating which particular brand of natural spring water should be given to millions of people who are thirsty in the desert" when I get a chance :-)? It is a great summary of the problem with functional programming.

As a person with academic background interested in real-world FP (using F# in my case), this is exactly why I'm not very enthusiastic about most functional programming papers that appear in conferences like ICFP - they might be solving fun problems, but I'm not convinced they are problems that actually matter if we're going to ignore the main issue.

I think interesting functional libraries that demonstrate how to apply FP to some interesting problem can help here - for example, the paper on financial DSLs (http://research.microsoft.com/en-us/um/people/simonpj/papers...) or the original paper on Functional Reactive Programming (http://conal.net/papers/icfp97/).


No need to ask permission to quote me, but thanks for asking anyways. I definitely agree that shifting focus towards industrial use cases is what we need.


I guess it's more difficult than shifting the focus towards industrial use cases. Industrial use cases are a great thing, but it is something that the industry has to provide :-) (we tried to do something like that for F# here: http://manning.com/petricek2 (sorry for a shameless plug!)).

But many people contributing to functional programming (in one way or another) are in academia. They are not the best people to contribute industrial case studies - but I think there is still a lot to be done there too! The nice thing about the FRP paper is that it is really just a fun (and very simple and somewhat impractical) example, but it is nice inspiration showing (what was back then) a novel use of functional programming. Some of the more recent academic work around FP lacks this kind of creativity...


The financial DSL paper is a good example as although it was Haskell based, Jane Street eventually chose OCaml for production.


I agree that this was the situation a while ago. But, recently, I've seen very encouraging signs that the wider programming community is waking up to the value of FP. Languages like Scala and Clojure, whatever else you think about them, will hopefully act as gateway drugs.


Hi from "the sea". I think the Functional Programmers crowd overestimates how much better languages can really improve software development.

Projects in the industry would be massively improved by: more focus on quality (eg. decoupling), technical debt awareness, enhanced communication, enhanced architecture, internal engineer mobility, more thoughts given on social dynamics and productive work environments.

It pains me to say this but supposedly "bad" languages like C++/Java (which are actually really well done) are not the bottleneck for software development. It is only a distant factor among many that lead to software being the permanent tragedy it is in our era.


Agreed mostly, but I beg to differ with your conclusion that this makes the choice of language insignificant.

The point of modern languages (call them functional or not) is to eradicate whole classes of bugs. Also, it is about making it simple to define good interfaces between components.

So at least your points "focus on quality" and "enhanced architecture" are directly influenced by the programming language. Of course you should train your people to focus on quality, but you also shouldn't make it too hard to get things right in the first place. Of course you should train your people to develop a sense for good architectures, but you should also provide a formal language that makes it more natural to express their architectural and design decisions.

That way, your training can focus 100% on the real issues, rather than 5% on the real issues and 95% on how to apply them in your programming language, as you need lots of workarounds, wrapper classes and so on. And you don't only have to write them - others also have to read that bunch of mess and reduce it to the "real point" in their heads. This may be considered a nice mental exercise, but in the end it's just boring, prone to errors/misunderstandings, and, finally, a waste of time.


> more focus on quality (eg. decoupling)

> technical debt awareness

> enhanced architecture

I think all three of these can be improved and alleviated somewhat by better languages.

Some languages most certainly have cultures that promote properly taking care of all of these as central principles.

It's a bit shortsighted to think that since projects are hard for reasons other than language choices (too), we should just ignore languages altogether.

At some point that'd just mean you end up realizing you could be even better with better tools, so why not start now even though everyone hates each other and can't talk to one another?


Toolchains also matter and OPAM (the package manager) has gone from strength to strength since this post. It's the basis for the OCaml Platform which combines a number of useful tools and libs into a coherent workflow (making development much more productive).


Also, merlin is worth mentioning explicitly. It finds and highlights errors during editing (in emacs, vim and co), does autocompletion, shows types, and since recently can also write pattern matches automatically.

http://the-lambda-church.github.io/merlin/destruct.ogv


OPAM still doesn't run natively on Windows though :(


I'm the author, and to be honest I agree. I was still fairly new to OCaml (and ML in general) at the time, so I tried to focus on what I knew rather than rattle off a list of things I had heard were better but didn't really understand.


Haskell has subtyping; it just doesn't use it a whole lot. That said, you can make arbitrary subtyping hierarchies using it if you like.


Actually Haskell doesn't have subtype polymorphism, at least not Haskell 98 (but maybe there is some new GHC extension nowadays that I'm missing - please elaborate if that is what you refer to).


It has width subtyping on type-variable constraint sets. For instance, `forall a . a` is a subtype of `forall a. C a => a`, and the compiler will automatically make the conversion by adding the unnecessary constraint. You also get the whole covariance/contravariance bit arising (solely) from function types.

This is a subtyping relation and you can treat it like one (and even abuse it quite a lot if you like), but since it's not key to think of it that way to understand Haskell's polymorphism, most of the literature just ignores it.


IMO the key advantage is that OCaml is easier to learn and much more pragmatic.


Unless start-ups or top companies [0] start adopting OCaml, I doubt its rise will be meteoric. Take go-lang, for example. It performs no better than Java on the JVM, but is gaining tremendous traction because Google is putting all its weight behind it. I believe C# gets far less credit than it deserves... and MSFT knows exactly what it's doing by open sourcing it. I digress. I feel one is better off investing time in Clojure or Haskell (if functional programming is what one is after), both of which are gaining a lot more traction in the industry and the academia compared to OCaml.

I can speak from what I observed here at Amazon... which did try to make use of Erlang (due to its excellent concurrency model) at one point, but gave up (SimpleDB is powered by Erlang, but its successor DynamoDB is JVM based). At least Erlang had its chance... the OCaml community within Amazon is non-existent. Whereas Clojure (plenty of traction within, due to STM on the JVM, I guess) has plenty going for it, as do Node.js and go-lang. All of these platforms are popular due to the strong community presence both within the company and at other tech powerhouses.

I'd also like to point out that the current crop of volunteers behind ocaml.org [1] are doing a really great job of evangelising OCaml.

[0] https://ocaml.org/learn/companies.html and http://clojure.org/Companies and https://wiki.haskell.org/Haskell_in_industry

[1] https://ocaml.org/about.html


> Take go-lang, for example. It performs no better than Java on JVM, but is gaining tremendous traction because Google is putting all its weight behind it.

That's definitely a large factor. But let's not forget that Go already took off when it was barely beyond being a 20% project. Go also fills some niches, three particular ones I can think of are:

- People who like Java, but favor the UNIX approach of small programs over the JVM.

- People who like Python or Ruby, but need more performance.

- People who like C, but want garbage collection and some extra safety in some of their projects.

Add to that that Go is trivial to learn for anyone with a Java, C, or C++ background. Of course, it takes some time to learn all the idioms, but someone who knows languages with C-like syntax can start writing Go programs productively within a day.


> People who like Java, but favor the UNIX approach of small programs over the JVM.

Ah, that must be why the golang toolchain does not support bloated enterprise features like dynamic linking, so a hello world binary ends up almost 2MB. Go UNIX!


You mean top companies like, say, Facebook? Or startups like Esper? I'm not really sure what your comment is trying to say, because it comes across as circular ('they're popular because lots of people use them').

Edit: seems you've added the links I was about to point to.


It doesn't take much to boot up a community in a large company like Amazon or Facebook as long as it brings value. See the FXL, Haxl, and React work at Facebook. (I removed my comments on Amazon as it's hard to gauge what's ok to mention in public)


Now that .NET is going cross-platform, I see a great future for F# (another member of the ML family). It's got multiprocessor support too.


I think most people who wanted a cross-platform, VM-based, corporate-ecosystem-integrated ML derivative with good concurrency support have already found Scala :P.


The problem with Scala is that you don't just get ML -- you have to accept everything else that comes along -- so F# can be a more comfortable choice.


It might be easier to accept the additional parts of Scala, because the creators didn't design them to be a tacked-on, intentionally horrible feature. (OO in F#/OCaml, anyone?)

Plus, sane typeclasses and higher-kinded types. :-)


I agree and I think F# would look a lot more attractive to functional programmers if it had some type of polymorphism other than the C#-like generics and traditional OO subtyping.

I can understand not choosing to go the ML route "down the functor rat hole", as one of its developers said. But then it also chooses not to do the simpler type classes either. Apparently a proper implementation would need CLR changes, which is a downside of reified generics: they are baked in as C#/VB imagined them. That leaves a language that conceptually is very close to C#/VB, with a somewhat different syntax.

I find either of typeclasses or an ML-like module system far preferable for expressing abstractions than traditional ("left-biased") OO, even if OO can (clumsily) get the same results in other ways.

Or I may have missed some developments in F# since I last looked. I really hope so, F# does have a lot going for it.


I've looked but not touched Scala. My impressions are A: Rather verbose. B: Carries a lot more OO/Java baggage. C: I've heard that the performance can be underwhelming.


A: Sometimes there is a small price to pay for gracefully unifying OO and FP (for example using sealed trait/case classes for ADTs, instead of having standard OO classes + a special additional ADT feature).

But it doesn't really matter. Some pieces might be slightly more verbose, but Scala is more concise in the long run, not because of saving some syntactic stuff here and there, but due to being able to express things which just aren't expressible in OCaml or F#.

If you want some shorter, more ML-like syntax, just pick a different parser https://github.com/lihaoyi/Scalite and have fun.

B: Unlike in OCaml/F#, OO is not "baggage". Scala took the good parts of OO and made them even better. OCaml/F# feel like the language creators tried to make a point about disliking OO by making OO support terrible on purpose. Scala demonstrates that having good FP support doesn't mean OO support need to be terrible.

Some of the stuff for Java support is a bit annoying, though, but doesn't matter in practice.

C: Performance is on par with Java. Scala can leverage the most powerful JIT compilers and the best garbage collectors known to man. Does OCaml have working support for concurrency yet?


Is there somewhere one can follow the Mac and Linux ports? I'm stoked about F#/OCaml too.


Have you tried Mono and MonoDevelop? I have used C# with Mono for back-end work (non-GUI) and it works pretty well so far. I think they have F# support as well.


I can confirm that MonoDevelop works reasonably well for F#, but the documentation for F# on Mono isn't great. Coming from OCaml, I love F# as a language, but the tooling isn't as good as it is for OCaml. Might be better on Windows with VS, but I haven't looked closely at that.


F# seems like a very nice language, but it won't really be compelling outside of the Microsoft bubble until it's divorced from the thick layer of Microsoft .CORP Brand Identity 2010, Visual Studio Visual Studio Visual Studio, and posts from Microsoft Outreach Engineers.

Compare these two Stack Overflow "getting started with" answers for Haskell (http://stackoverflow.com/a/1016986) and F# (http://stackoverflow.com/a/11974625). With the Haskell answers I don't know or care what platforms, operating systems, editors, or IDEs they are using. With the Microsoft answer it's like a dystopian scene filled with Branding, Branding, and more Branding.

Personally, I spend a lot of time experimenting with languages and ecosystems, and the ".NET" platform is the only one where I feel the need to put on a plaid shirt and Dockers with a Windows Phone on my belt and schedule a conference call with a committee at Microsoft to tell them to give it a rest already.


Mac:

    brew install mono

Run the F# console/REPL:

    fsharpi

Compile files:

    fsharpc file.fs

Not sure about Linux, but it should be just as simple.

And as with Haskell and cabal, you have FAKE ( http://fsharp.github.io/FAKE/ ), which isn't exactly the same, but is similar.

The main difference between those two answers is that for Haskell, the author pointed to generic Haskell resources, while the F# one pointed to Windows ones. I've been developing with F# for a while on the Mac and have found no issues whatsoever. There is also MonoDevelop/Xamarin if you want a more complete IDE, which is better than anything I found for Haskell (but there may be new stuff I'm missing, not having messed with Haskell in a while), but SublimeText and Vim work fine.


One thing I don't like about OCaml is that I always find myself writing the same things, like "to_string" functions for my variant types (although there must be some ways to alleviate this burden).
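To illustrate (a made-up type, but representative of what I keep writing by hand):

    type color = Red | Green | Blue

    (* Entirely mechanical, yet written out manually: *)
    let color_to_string = function
      | Red   -> "Red"
      | Green -> "Green"
      | Blue  -> "Blue"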

Also, when your programs use abstract data types you lose the benefits of pattern matching. In that case, I'm happier with languages like Go or Ada with a friendlier syntax.


If you have an abstract data type but want pattern matching, try Scott encoding.

For instance, here we have Option

    module Option = struct
      let scott (some : 'a -> 'r) 
                (none : 'r) 
                (opt  : 'a option) =
        match opt with
        | Some a -> some a
        | None   -> none
    end
For Option, since it's non-recursive, the Scott Encoding and the recursor/inductor/Church Encoding are identical. Here's a linked list, though

    module LL : sig
      type 'a t
      val fold  : ('a -> 'r -> 'r) -> 'r -> ('a t -> 'r)
      val scott : ('a -> 'a t -> 'r) -> 'r -> ('a t -> 'r)
    end = struct      
      type 'a t = Cons of 'a * 'a t | Nil
      let rec fold cons nil = function
        | Cons (h, t) -> cons h (fold cons nil t)
        | Nil         -> nil
      let scott cons nil = function
        | Cons (h, t) -> cons h t
        | Nil         -> nil
    end
Anyway, the pattern should be clearer now. These provide effectively "functionalized" pattern matching which you can apply whenever you need. In particular, you can think of these as expressing a (potentially partial) "view" of the abstract type. For instance, my linked list might not have been a linked list exactly but instead some kind of tree, yet `scott` and `fold` let me expose a "view" of that tree as though it were a linked list.
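For instance, assuming the module also exported some way to build values (say cons/nil, not shown above), consumers could program against the views alone:

    (* length traverses everything, so it uses the fold view *)
    let length xs = LL.fold (fun _ acc -> acc + 1) 0 xs

    (* head needs only one unrolling, so the Scott view suffices *)
    let head xs = LL.scott (fun h _ -> Some h) None xs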


If you look at this from an "expression problem" point of view, this version with the records-of-functions is very similar to OO programming. But without inheritance, classes and so on.


I'm very much a fan of that POV. I think OO has a lot of mythology, but the technology is relatively similar to something like OCaml or Haskell. Classes are essentially just functions which return "structures".
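A tiny OCaml sketch of that claim (invented example): the "class" is an ordinary function, and the "object" is the record of closures it returns.

    type counter = { bump : unit -> unit; get : unit -> int }

    let make_counter () =
      let n = ref 0 in
      { bump = (fun () -> incr n);   (* private state lives in the closure *)
        get  = (fun () -> !n) }

    let () =
      let c = make_counter () in
      c.bump (); c.bump ();
      assert (c.get () = 2)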

Inheritance and open-recursion/dynamic binding are usually what remain. I'm not terribly sure I miss them.


Indeed classes are great for these kinds of encodings. It is one of the use cases for which classes are the nicest approach in OCaml.

For example, see this section of Real World OCaml:

https://realworldocaml.org/v1/en/html/classes.html#open-recu...


> Also, when your programs use abstract data types you lose the benefits of pattern matching.

In some cases you can also use "private ADTs". Modules seeing such a type can use pattern matching on them but cannot apply the constructors.
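A quick sketch of what that looks like (invented example): a private variant lets clients destructure values without being able to forge them.

    module Nat : sig
      type t = private Z | S of t   (* matchable, but not constructible *)
      val zero : t
      val succ : t -> t             (* the module controls construction *)
    end = struct
      type t = Z | S of t
      let zero = Z
      let succ n = S n
    end

    (* Clients may pattern match... *)
    let is_zero = function Nat.Z -> true | Nat.S _ -> false
    (* ...but writing Nat.S Nat.zero outside the module is a type error. *)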


PPX lets you generate show functions automatically. As for abstract types, you can use "private" if you need to pattern match outside the module but want to ensure that the structure can only be created by your module.
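For example, with the ppx_deriving plugin (assuming it's installed through opam), the to_string boilerplate from the comment above reduces to one attribute:

    type color = Red | Green | Blue [@@deriving show]
    (* generates show_color : color -> string (plus a pp_color printer) *)

    let () = print_endline (show_color Blue)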


I honestly don't get how people can claim that Haskell is unusually hard to learn. I started learning Haskell with zero functional programming experience, and after getting over that initial pure+functional learning curve (which took two or three weeks of casual learning), it was smooth sailing. Haskell is actually a very simple language compared to popular languages like python or C++; you just have to do a bit of thinking to get out of the procedural/imperative/impure mindset.

I especially don't understand how people get tripped up by laziness. 99.9% of strict code will work with no changes in a lazy environment. The rest usually just needs a slight tweak to avoid memory leaks.

To anyone considering undertaking the modest effort required to learn Haskell, I completely recommend it. My code in every language has tangibly improved.


Keep in mind that writing something like this is not actually going to be perceived as helpful to most people who may have difficulty with Haskell. Consider a couple of possibilities.

First, you may be unusually quick, gifted, clever -- whatever you might want to call it. In this case, the new concepts in functional programming might come easy for you. That's great, but to people for whom they don't come so easily, this could easily be considered discouraging, or else bragging.

Second, consider that maybe you haven't advanced as far in learning Haskell as you think you have. In my estimation Haskell can take you up an abstraction ramp, that has no clear and obvious "end." As a research language, higher up that curve is some really mind-blowing stuff. So unless you're SPJ or beyond, I wouldn't be quick to claim that Haskell isn't "unusually hard to learn."

Third, you may be at a stage in your life where you can put a lot of free time into learning, but recognize that not everyone is there. In my twenties, I worked much more than full time, and in my spare time worked on yet more programming, teaching myself other languages and environments. Really, in retrospect, it's fortunate I didn't damage my physical or mental health more than I did. But not everyone is in that boat now. In particular, people who are older and have "work/life balance", who are professionals in some form of software development, may find that their employment doesn't offer them much in the way of opportunities to learn another language, and the life part of that work/life balance doesn't allow much time to do so either.

Me, I'm somewhere partway up that ramp and proud of how far I've come (I feel like I can use at least simple monads now) but still very aware that there is a lot I don't fully grok (currently trying to get my mind around arrows). And I agree, I completely recommend it, but just maybe don't be so glib to claim it is not hard.


Regarding your first point, it doesn't even require the claimant to be better in any particular sense - it could be that their prior knowledge was simply a better fit to the new concepts.

Regarding the second, I don't think it makes sense to consider "difficulty in learning Haskell" to be "difficulty in learning all of everything anyone's done with Haskell." Even SPJ has (or at least recently had) a pretty shallow understanding of Kmett's lens library. The two important questions are 1) how much do I need to learn to get things done in Haskell, and 2) how much do I need to learn to participate usefully in the Haskell community. 1 is less than 2, which is less than what you described.

"There is more stuff to learn that can make you even more productive, but you can get by without it and still be productive" often gets treated as a negative, and I find that strange...


Learning Haskell is not out of the question for most devs. The real question is whether they're willing to let go of a lot of their programming experience so as to admit a different style of programming.

It's more akin to studying than anything else. I've had evenings where I've been unable to proceed because I can't write one line. It's OK; it's part of learning. No one said it would be easy.

Beginner's mind is key.


>Keep in mind that writing something like this is not actually going to be perceived as helpful to most people who may have difficulty with Haskell

No, but hopefully it will help people who have yet to learn Haskell see that not everyone thinks it's particularly difficult. I'm not interested in convincing the people who've tried and given up on it.

>That's great, but to people for whom they don't come so easily, this could easily be considered discouraging, or else bragging.

So what do you propose? That I pretend it's really hard for me, even though it's not? What effect do you think that's going to have?

>So unless you're SPJ or beyond, I wouldn't be quick to claim that Haskell isn't "unusually hard to learn."

You don't have to be SPJ to learn Haskell (or talk about how hard it is). Do I have to be Stroustrup to pass judgement on how hard it is to learn C++?

>In particular people who are older and have "work/life balance," ... and the life part of that work/life balance doesn't allow much time to do so.

I obviously don't mean that Haskell is easy for every single person in the world to learn; I mean that, as far as languages go, it's relatively easy enough to learn. If I didn't have enough time to learn Haskell, I also wouldn't have enough time to learn C++ or Python.

> but just maybe don't be so glib to claim it is not hard.

Again, what do you propose? Lie and tell people that it's super hard, so I don't hurt their feelings if they can't figure it out?


> So what do you propose? That I pretend it's really hard for me, even though it's not? What effect do you think that's going to have?

> Again, what do you propose? Lie and tell people that it's super hard, so I don't hurt their feelings if they can't figure it out?

I don't think anybody is suggesting lying about your own personal experience with Haskell. Rather, your posts state that it was easy for you and that it should be relatively easy for everyone.

I think that when programmers from other languages are having trouble figuring out how to write a particular program in Haskell, whether it be due to documentation, the behavior of the code execution, or even language syntax, it doesn't make sense to claim that it should be relatively easy for them.

In my (limited) experience with Haskell, there can be hang-ups that someone with experience in strictly-evaluated languages isn't expecting. [1]

[1] https://wiki.haskell.org/Iteratee_I/O#The_problem_with_lazy_...


Maybe consider a little humility first.


Does it demonstrate a lack of humility to be honest about how difficult something was for me? I'm sorry my truthful assessment offends you. If humility means pretending things are harder than they are, then I guess I don't have any humility.


Whether we find it offensive or not isn't relevant. Your comments are degrading to those who find it difficult, and you are actively hostile when they complain or we point this out.

Your attitude is one I see too often in the FP community, and well-meaning or not it holds us all back.


>Your comments are degrading to those who find it difficult

Again, what do you propose I do about it? Should I never say anything positive about anything, lest I offend someone who had a bad experience?

>Your attitude is one I see too often in the FP community

Which attitude is that? Optimism?


> Should I never say anything

Well if you can't understand why your phrasing turns a positive idea into making somebody feel bad, probably.

> Which attitude is that? Optimism?

A version of it, yes. The "if you don't find it easy you are the problem" attitude.


You seem kind of hung up on the "easy for you" part, which really isn't the part anyone cares about. I can't even comprehend how you came to the conclusion that you are being asked to lie.

Maybe a more constructive (humble?) approach would be to show others why it was easy for you, subjective as that may be, and how they can achieve faster comprehension levels based on your own experiences.


I'd like to see examples that start with a Java or JavaScript program, then try to translate it to Haskell and document the thought process of figuring out how the Haskell solution is, and needs to be, different.

Haskell aims to be a terse language, right? That's one thing that makes it difficult to "read", and if you can't read it, it's hard to learn it.

Think about learning to read and write Chinese when all you know is English. The only way to do it is to have a textbook that shows you sentences in both English and Chinese. I don't think it is necessarily "difficult" to learn Chinese, but you need learning materials targeted at an English speaker.


That's exactly why most FP articles and tutorials around the net didn't help me, but Dan Grossman's "Programming Languages" Coursera course did.

If (when) I ever learn enough to talk about why OCaml is better in detail, it will be with concrete examples not buzzword bingo :)


The language itself is not very big. The category-theory based abstractions built on top of it, on the other hand... Just look at the lens library.


That's fair. But I don't think it's reasonable to lump learning about lenses (or other abstractions that aren't part of the prelude) into learning Haskell.


Judging the ecosystem is fair in the same way that Java is lumped together with the reams of J2EE, Spring, and AbstractFactoryFactory-style libraries that it yields.

For better or worse, switching to Haskell does entail asking yourself how hard it'll be for your team to become comfortable with things like http://learnyouahaskell.com/functors-applicative-functors-an...


That's a good point. I suppose people do judge Java by the popular Java libraries and their styles. However, I don't really consider those things when talking about how difficult Java is to learn.

Learning Haskell may well be easier than using it, although the same is true for most languages.


But you don't learn Java just to learn Java, usually. You learn it in order to solve a problem. Which means working with the ecosystem. And personally, I'd rather use libraries I can understand if needed.


Except that lens is being used by a quickly growing chunk of the ecosystem:

http://packdeps.haskellers.com/reverse/lens

(Yes, I am guilty too, but I inherited the dependency via Chart ;).)


From your website, you appear to be a college student. Perhaps your expectations and uses are different than commercial developers.


College students often feel at the time that they are completely overloaded with work and barely getting through (in some cases that is true, especially students who are working or non-traditional). But looking back, it is pretty clear that although I worked hard in school, I could have worked so much harder, and also that never in my life again will I be in an environment that was so _supportive_ of just learning. Not just in classes; in fact, my college experience was _especially_ supportive of things learned outside of classes. And also, at 20, 21, 22 -- there are very good reasons those are the traditional college years. Our brains are just able to _soak_ up ideas so quickly at that age. They aren't wise, they can't necessarily integrate everything, but we are _sharp_ then! (Usually...)


I am also a commercial developer; I work full-time during the summer to pay for college. My expectations don't really change much. Of course, everyone has different expectations.


That's a pretty sad indictment of commercial developers. Or perhaps just the pressures put on said developers :/.


I wanted to like OCaml, mainly for working with Unikernels. When I looked at the syntax it just looked very arcane, which was discouraging.


Might come in handy in case you want to help yourself get over the seemingly cryptic syntax: http://rigaux.org/language-study/syntax-across-languages/


The syntax is certainly very wonky but you get used to it. Installing merlin (vim / emacs plugin) helps a lot too.


If it's about what's well-supported for compiling to Javascript, does scala.js becoming non-experimental change the landscape?


I knew about scala.js when writing the post (although it was still experimental at the time) but I didn't mention it because I don't like Scala. It looked promising when I first saw it, but then I had to write some production code in it.

My opinion now is that if you're forced to use the JVM then Scala is one of the best options. But that's a low bar, given the choice I would pick something else.


Is there anywhere that I can track the status of the multicore runtime and modular implicits?




