Hacker News
OCaml for the Masses (2011) (acm.org)
304 points by alanfranz 5 months ago | 223 comments



I for one can't wait for typed FP to become mainstream. I think the prevalence of scripting languages in large systems today is a quirk of history.

In the early 2000s, as the web was growing into an application delivery platform, the only practical statically typed language was Java. It didn't have generics or lambdas at the time, and was infested with the sprawl of J2EE. Choosing Ruby/Python over Java was, at the time, an act of rebellion and a show of technical superiority. To quote pg: "if they wanted Perl or Python programmers, that would be a bit frightening-- that's starting to sound like a company where the technical side, at least, is run by real hackers."

The static type system in Java is object-oriented: the only way to create a type is to create a class. I believe it was limited in abstraction power, and everyone attributed that limitation to the rigidity of static types. Java has come a long way since, but the sense that dynamic typing is superior to static types lingers.

But OCaml's static types are a completely different beast. There is a kind of "procedural" static typing in Go, C, and to some extent C++, where primitives are typed and you can construct structs and unions without much ceremony. Then there is object-oriented static typing in Java and C#, which traces its origin back to Simula through C++, where types and classes are the same thing. Then you have functional static typing, where types describe data and nothing else. Almost all typed FP languages use a form of Hindley-Milner inference, support algebraic data types and pattern matching, have generics (parametric polymorphism was invented by Milner for ML in the 1970s), and allow for code organization and encapsulation through modules and opaque types.

Algebraic data types alone are a tool for thought like no other. Thanks to ADTs, you'll start reifying concepts that would otherwise have gone implicit in your codebase. The languages are solid; the ecosystems are vibrant but small, and that can only be fixed with more people. Come on in, the water is fine!
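To make "reifying concepts" concrete, here's a small sketch (in Rust, whose enums are ADTs; all the names are made up). Instead of an implicit convention like "payment type is a string, and card payments also have a last4 field somewhere", the concept becomes a type the compiler checks:

```rust
// A domain concept reified as an algebraic data type (a Rust enum).
#[derive(Debug, PartialEq)]
enum Payment {
    Cash,
    Card { last4: String },
    Voucher { code: String, remaining_cents: u64 },
}

// Pattern matching must handle every case; adding a new variant later
// turns every non-exhaustive `match` into a compile error.
fn describe(p: &Payment) -> String {
    match p {
        Payment::Cash => "cash".to_string(),
        Payment::Card { last4 } => format!("card ending {}", last4),
        Payment::Voucher { code, .. } => format!("voucher {}", code),
    }
}

fn main() {
    let p = Payment::Card { last4: "4242".to_string() };
    println!("{}", describe(&p));
}
```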


I agree with the points in your post and I also want a mainstream typed FP (something with a clear happy path so I don’t need to guess about what prelude or string types or standard library or etc to use). The biggest impediment I see is that these language communities are more concerned about the abstract mathematical properties of the language and things like tooling get neglected.

I would love to see a typed FP equivalent to Go—everything is super simple/small learning curve (I should be able to meaningfully contribute to a project after a day or so), the standard library is decent, package management just works, no guessing about what test lib to use, everything compiles to a single static binary by default (dead simple deployments), no-fuss documentation generation, great concurrency/parallelism story, straightforward profiling and optimizations (e.g., if my program is slow because I’m allocating too much in the hot path I can trivially preallocate), etc. These are the things that matter for real world projects—mathematically elegant type systems really are just gravy, which is why Go has been able to be so successful in spite of its flat-footed type system.^1

^1: if you go over to r/programming, no one can figure this out because everyone believes you can't ship software in a language without monads or generics, so Go's popularity must be due to Google marketing.


Personally, I see Elm as an interesting development in that direction, i.e. a humane FP language, with the author relentlessly striving for simplicity and ease of use (in my opinion, successfully). That said, it's currently limited to the web dev/"frontend" domain (as a JS replacement).


>Personally, I see Elm as an interesting development in such a direction, i.e. humane FP language, with the author relentlessly striving for simplicity

Red is another language that is or was striving for simplicity (or resisting the complexity that seems to be the norm in tech nowadays). In fact that was supposed to be one of its main goals. I felt sad when they went down the ICO route for funding, though.

Not sure how FP-ish Red is, though. Rebol, which Red is based on, is supposed to be homoiconic. I like Rebol and Red.

The Rebol and Red interpreters (and Red is a compiler too) are quite small, and so are the EXEs Red produces, which is another plus. Much smaller than some other languages.


>or resisting the complexity

I should say, sometimes seemingly unnecessary complexity.


I'd say Rust is the closest thing. Cargo is fantastic, super ergonomic and with an excellent package ecosystem. Rust doesn't have the pretense of purity or immutability, but honestly, you can very easily write pure functional Rust (and it'll save you a lot of pain with lifetimes). Performance is excellent, of course.


I agree that Rust is the exception--it has a fantastic tooling story, but it's actually the _language_ that holds it back for the sorts of applications I tend to write. The performance and even correctness benefits it affords me over Go are just too small for the cognitive burden it imposes (and yes much of the gap closes with time--I've been kicking the tires on and off for 5 years now--but every year the returns diminish and it's looking like the gap will remain significant). Hopefully I'll be proven wrong in the next few years.


Would you mind elaborating on the cognitive burden you find Rust imposes?

I find that after roughly two years of writing Rust, I can architect and re-architect big projects fairly clearly, and the type system gives me strong reassurances that refactors haven't screwed things up.

Not to mention the advantage of even simple things like enums, which most languages (including Go) tend to lack.


> Would you mind elaborating on the cognitive burden you find Rust imposes?

Millions of little decisions with respect to which lifetimes to use, which types to use, which type of pointer to use, whether to pass by ref or by copy, and so on.

> I find that after roughly two years of writing Rust, I can architect and re-architect big projects fairly clearly, and the type system gives me strong reassurances that refactors haven't screwed things up.

> Not to mention the advantage of even simple things like enums, which most languages (including Go) tend to lack.

I don't doubt it. I think these are really cool properties of Rust. They just don't pay for the cognitive burden, since the applications I write aren't critical systems (I can afford _some_ unsafety, but I can't afford to slow my development process).

To put it differently, if I write in Go, I can quickly ship a feature with a high degree of confidence that it's overwhelmingly correct, and the few errors that do slip into production can be quickly fixed because I can iterate so quickly. If I write in Rust, I will eventually ship a feature with a very high degree of confidence that it's overwhelmingly correct, but even if I find zero bugs in production, the time spent shipping that first iteration in Rust is much larger than the time it would take me to ship _and_ iterate on bugfixes in Go. I'm sure the extent to which this is true shrinks as I get more experience with Rust, but there's a law of diminishing returns at play, and no indication that the gap will ever vanish entirely.


What kind of code have you been writing? For most Rust code that I write and read, I find almost no lifetime annotations are needed.


I'd be curious to hear about your edit -> recompile -> run experience with Rust. My typical dev process is edit -> auto-recompile in the background -> refresh the browser / whatever I need to do to see how my edits affected the thing I'm working on. Rust's compilation speed seems like a non-starter for me.


Depending on the project I'm working on, I do need a server class machine for running benchmarks, but that might not be the case for most projects (I work with heavy cryptographic things, so tests sometimes take a while to run).

However, for compilation errors, feedback is instant, because cargo check (which runs everything except codegen) is almost instantaneous. I've also set up neovim to run it on every save (via a plugin), so I don't have to leave my editor for this. I know that VS Code and IntelliJ can also do this.


Try to write GUI applications without sprinkling the code with Rc<RefCell<...>> everywhere, due to callbacks and access to widget fields.


I have heard about Rust. But I wonder if you can do something that an OO programmer (like me :) would call "string subclasses", so you do not manipulate a generic type called String, but a meaningful subclass such as "FirstName", "LastName", or "TownName".


You wouldn’t do it as a “subclass”, as Rust doesn’t have classes or inheritance.

You’d use the “new type pattern”, which is like a named 1-tuple:

  struct FirstName(String);


is this type interchangeable with String?


I'm not the person you asked, but I don't think domain-specific user-defined types (whether defined as subclasses or subranges (see Pascal) of existing language types, or another method as Steve said), are supposed to be automatically interchangeable or equivalent to the existing type they are derived from (without a cast or conversion of some kind), otherwise it would defeat the purpose, which is better type or range checking.


Not exactly. That is, you can't pass it directly to a function that takes a String, but if we made the inner string public, we could extract it with two characters (.0). If we don't want to make it public, we can write an accessor as well, etc.
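A minimal sketch of both options (the names and the accessor are hypothetical, not a standard API):

```rust
// Newtype wrapper: a distinct type with a String inside.
struct FirstName(String);

impl FirstName {
    // Accessor, for when we'd rather keep the field private.
    fn as_str(&self) -> &str {
        &self.0
    }
}

fn greet(s: &str) -> String {
    format!("Hello, {}!", s)
}

fn main() {
    let name = FirstName("Ada".to_string());
    // A FirstName can't be passed where a String is expected,
    // but the inner value is two characters away:
    println!("{}", greet(&name.0));
    // ...or reachable via the accessor:
    println!("{}", greet(name.as_str()));
}
```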


As far as I understand, Rust is strongly typed, has generics, and does no type erasure. I also suppose String is some kind of primitive type. Would it be possible to make it parametrizable, so we can manipulate a String<FirstName> as a String, but the compiler will never confuse it with (for example) a String<LastName>?


String is a library type, not a primitive type.

You could write your own string parameterized in this way, but it would be incompatible with the standard library String and so you’d need the same conversion steps.
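As a sketch of what "parameterized in this way" could look like with today's Rust: a wrapper over String distinguished by a zero-sized phantom tag (all names here are made up for illustration):

```rust
use std::marker::PhantomData;

// Zero-sized tag types.
struct FirstName;
struct LastName;

// A string wrapper distinguished only by a phantom type parameter;
// the tag exists at compile time and costs nothing at runtime.
struct Tagged<T> {
    value: String,
    _tag: PhantomData<T>,
}

impl<T> Tagged<T> {
    fn new(value: String) -> Self {
        Tagged { value, _tag: PhantomData }
    }
    fn as_str(&self) -> &str {
        &self.value
    }
}

fn print_first(n: &Tagged<FirstName>) {
    println!("first name: {}", n.as_str());
}

fn main() {
    let first: Tagged<FirstName> = Tagged::new("Ada".to_string());
    let _last: Tagged<LastName> = Tagged::new("Lovelace".to_string());
    print_first(&first);
    // print_first(&_last); // compile error: Tagged<LastName> is not Tagged<FirstName>
}
```

As the parent says, though, Tagged<T> is still its own type, not the standard library String, so the same conversion steps apply at the boundary.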


What would happen if the Rust core team decided to parametrize String? Would it work with legacy code? (that creates something like String<?> and consume String<?>).


It cannot be done in a backwards compatible way with today’s language features.


Love Rust, but there's room for garbage collection over borrow checking if we're talking about replacing scripting languages. It would be hard for a decent statically typed FP language to be slower than Ruby, for example.


Rust is lovely - but you definitely do have to guess about what string types to use!


I think that’s more of a problem (or at least used to be one) in Haskell. In Rust it mostly boils down to how much speed or ergonomics you want.


This isn't really true for OCaml. I daily compile large OCaml programs which are linked to C programs, we run valgrind on them, they are fully integrated with GCC and autotools and so on, ship in tarballs to end users who can run ‘./configure && make’ etc. In case that all sounds very Unix-specific, I have also in the past cross-compiled OCaml programs on Linux and Windows and Mac OS X at the same time.


I wouldn't consider a home-grown autotools/gcc/valgrind setup to be "simple", especially when it comes to dependency management (assuming you care that dependencies don't conflict and that your code links against the right version of the dependency and deterministic/reproducible builds and so on). If you can afford the personnel to manage all of that, then that's great, but for those of us who can't, it's nice to be able to run `go build` or `go test` with zero configuration.


Why would you think OCaml can't do that as well? opam lets you build from source with "zero configuration". We don't use it because we have invested heavily in autotools and want to integrate with many other programming languages, but the option is there.


I'm a little hazy on the particulars because I never got terribly deep into OCaml and haven't tried it in a while, but IIRC, OPAM is a package manager (and according to https://opam.ocaml.org/doc/Packaging.html, it requires a configuration file). The most popular build system for OCaml is jbuilder (since renamed to Dune), which also requires a configuration file (formatted as S-expressions, to make things more confusing for newcomers), and I think that usually bottoms out in a Makefile to boot (or maybe it's the other way around, and OPAM packages usually call into Make, which calls into the build system?).

In any case, it's a far cry from `go build`.


Man, I agree 100%. I wonder if it would be possible to get this by compiling an Elm/SML-like language to Go?

[Edit: I'd add a few things to your feature list: really fast, OCaml-speed compilation, along with a utop-like REPL]


+1 regarding your additions to my list! With respect to compiling an SML-like language to Go (I've thought about this a lot), it would probably get you a long way, but the performance would suffer because FP langs allocate like crazy and Go's GC is not optimized for lots of fine-grained allocations. And there would still be a long tail of things that didn't work quite right: stack traces would need to be modified to map to lines in your source program (instead of the emitted Go code), documentation generation and linking would need to be patched, dependency fetching would need to understand and distinguish between packages in $MYLANG and Go packages (so it invokes the $MYLANG->Go compiler appropriately), and so on. That said, it would certainly be a great proof of concept!


>Go's GC is not optimized for lots of fine-grained allocations

Why is it not? Hadn't heard of this point before. Is it because of catering to the most common use cases, which are not fine-grained, maybe?


Go has value types and they're used pervasively. Where an OCaml or Python or Haskell list is O(n) allocations, a Go slice is 1 allocation (values are laid out side-by-side in memory). This allows Go to have a simple, concurrent GC with pause times on the order of 1us (compared to 10-100ms, as is common for compacting generational collectors) as well as great (CPU) cache properties (things are more likely to be collocated in memory, so you're more likely to have cache hits).


Got it, thanks. The Go team has been working on improving Go's performance, over time, I've read. One other such example I read about is that for compilation, where package A depends on B and B depends on C, the metadata for C is stored in the header of B, so when A is being compiled, it only needs to read B, not C - and there are other related optimizations on the compilation front, IIRC. I thought that was a good optimization.

I wish the EXE/binary sizes of Go programs were smaller, though I know they've done some work on that too. Coming from a C background originally, I was a bit surprised when I first saw that the size of a simple hello-world-style Go binary was ~1 MB (some years ago). Equivalent Free Pascal and D binaries are a lot smaller, IIRC, from what I tested at the time (~100 KB or less). Heck, the Red interpreter / compiler (and the Rebol interpreter before it) fit in ~1 MB. Don't mean to denigrate the efforts of the Go team at all (even apart from the fact that as a long-time Unix guy, Rob Pike, Ken Thompson, et al are among the people I look up to); I know language translation technology is a hard field, plus the scope of Go may be much more than that of Rebol / Red; just speaking from the POV of a user, hoping for smaller binary sizes.


> Got it, thanks. The Go team has been working on improving Go's performance, over time, I've read. One other such example I read about is that for compilation, where package A depends on B and B depends on C, the metadata for C is stored in the header of B, so when A is being compiled, it only needs to read B, not C - and there are other related optimizations on the compilation front, IIRC. I thought that was a good optimization.

AFAICT that's a compiler optimization, not a runtime optimization. But yes, generally the Go team is working to improve performance.

> I wish the EXE/binary sizes of Go programs were smaller, though I know they've done some work on that too. Coming from a C background originally, I was a bit surprised when I first saw that the size of a simple hello-world-style Go binary was ~1 MB (some years ago). Equivalent Free Pascal and D binaries are a lot smaller, IIRC, from what I tested at the time (~100 KB or less). Heck, the Red interpreter / compiler (and the Rebol interpreter before it) fit in ~1 MB. Don't mean to denigrate the efforts of the Go team at all (even apart from the fact that as a long-time Unix guy, Rob Pike, Ken Thompson, et al are among the people I look up to); I know language translation technology is a hard field, plus the scope of Go may be much more than that of Rebol / Red; just speaking from the POV of a user, hoping for smaller binary sizes.

Go binaries contain _everything_, while C programs (usually) statically link against libc (glibc is on the order of 10mb by itself). Go's runtime is also quite a lot more complex than C's--it comes with a scheduler, a reflection system, and a garbage collector. These are one-time costs--the binary for a program 10 times the size of hello world is pretty much the same size as the hello world binary. If you really want, you can dynamically link a Go program, but I can't imagine a scenario where this is practical.


>AFAICT that's a compiler optimization, not a runtime optimization. But yes, generally the Go team is working to improve performance.

Right, it is a compile time one.

>Go binaries contain _everything_, while C programs (usually) statically link against libc (glibc is on the order of 10mb by itself).

Did you mean "dynamically" (for C) [1]? Also, I'm not too sure if Go links all dependencies statically by default, if that is what you implied above. IIRC I read somewhere recently that some things are linked statically, and some dynamically - I need to check that.

[1] I need to check it for C too, because things have changed since I last used C a lot.

>Go's runtime is also quite a lot more complex than C's--it comes with a scheduler, a reflection system, and a garbage collector. These are one-time costs--the binary for a program 10 times the size of hello world is pretty much the same size as the hello world binary.

Agreed on the points about the other stuff Go has that C does not, like a GC, reflection, etc. And that those are one-time costs. But they are still costs, and if you create lots of small binaries, that cost adds up. Disk space is cheap nowadays, but depending on the use case and your environment (e.g. a small or embedded system, cost constraints if shipping many copies of a h/w + s/w product or appliance, etc.), that cost might be significant. Whereas with some of those other languages that I mentioned above, those costs can be quite a bit less, times the number of copies of the binary or appliance that you ship. That's all that I meant. That's not to say that we should not use Go, though, of course. As always, the overall picture and all factors need to be taken into account, e.g. the benefits of Go's language features vs. some of those others, team knowledge of the various language contenders, even availability of needed libraries, time to market, etc. etc.


> Did you mean "dynamically" (for C) [1]?

Yeah, I did. Good catch.

> Also, I'm not too sure if Go links all dependencies statically by default, if that is what you implied above. IIRC I read somewhere recently that some things are linked statically, and some dynamically - I need to check that.

By default, Go does dynamically link to some things--it will prefer to use the system DNS resolver if one is available, for example. But you can disable this by setting `CGO_ENABLED=0` at compile time. If you do this, a Go program needs nothing more than a Linux kernel (no system libraries, not even libc) and whatever other files it requires (certificates for SSL, for example).

> I need to check it for C too, because things have changed since I last used C a lot.

I'm sure you can make statically-linked C programs which are smaller. For example, the musl libc implementation is quite small and designed for static linkage. But lots of libraries break (often in bizarre and hard-to-debug ways) if they don't link against glibc specifically, so it's not a panacea.

> But they are still costs, and if you create lots of small binaries, that cost adds up. Disk space is cheap nowadays, but depending on the use case and your environment (e.g. small or embedded system, cost constraints if shipping many copies of a h/w + s/w product or appliance, etc., that cost might be significant.

Agreed, but again, Go does allow for dynamic linkage in those cases so you can conserve disk space. There's no doubt that if you're golfing, C allows for smaller binaries than Go, but Go can meet 99.9% of real-world requirements when it comes to size (if you're at the extreme end of what Go can support, you've probably already ruled out Go for other reasons).

There are still many reasons to pick other languages besides Go--if you're working on a hard real-time system, if you're working with a team of people who all prefer a different language, if you're working in a domain (iOS app dev, for example) for which there are no good Go libraries, if you're willing to trade all else for extreme performance or extreme correctness, etc. But Go is actually a pretty good little language for most things that aren't at those extremes.


>By default, Go does dynamically link to some things--it will prefer to use the system DNS resolver if one is available, for example. But you can disable this by setting `CGO_ENABLED=0` at compile time. If you do this, a Go program needs nothing more than a linux kernel (no system libraries or even libc) and whatever other files it requires (certificates, for SSL for example).

Ah, sounds similar to what it says in the osso.nl link I posted below. Good to know.

>I'm sure you can make statically-linked C programs which are smaller. For example, the musl libc implementation is quite small and designed for static linkage. But lots of libraries break (often in bizarre and hard-to-debug ways) if they don't link against glibc specifically, so it's not a panacea.

Didn't know this, interesting. But actually, I didn't mean making statically-linked C programs smaller by using leaner libraries like musl. I just meant that for equivalent code, the resulting C binary (even when using the default libc) was smaller by a lot than the Go binary. And so were the Free Pascal and D binaries smaller than the Go binary, but they were both slightly larger than the C binary. But it was only an anecdotal observation based on writing a few small programs in C, Go, Free Pascal, and D, and comparing the resulting binary sizes. Not a thorough scientific study.

>Agreed, but again, Go does allow for dynamic linkage in those cases so you can conserve disk space.

Got it.

>There's no doubt that if you're golfing, C allows for smaller binaries than Go, but Go can meet 99.9% of real-world requirements when it comes to size (if you're at the extreme end of what Go can support, you've probably already ruled out Go for other reasons).

It wasn't about golfing, it was about equivalent code in the two languages, and the relative sizes of the resulting binaries. But good last point (about 'other reasons').

>There are still many reasons to pick other languages besides Go--if you're working on a hard real-time system, if you're working with a team of people who all prefer a different language, if you're working in a domain (iOS app dev, for example) for which there are no good Go libraries, if you're willing to trade all else for extreme performance or extreme correctness, etc. But Go is actually a pretty good little language for most things that aren't at those extremes.

Agreed. In that sense it is somewhat like Python, which is sometimes described as "the second-best language for almost any domain", an interesting description that I read only somewhat recently. I wouldn't necessarily fully agree with that description; it is probably the best language, or at least tied for best (with some other language), in certain areas. Similarly, there are probably areas where Go is, if not the best language, one of the best, for certain kinds of tasks, such as the command-line utilities and web and network server applications that it is often used for.


I googled briefly about dynamic vs. static linking of Go binaries, didn't find much useful info; will try more again later and post again here if I find anything. Will also do some experiments with compiling and linking and see if that gives the needed info.

Update: Found this page which seems to make sense:

https://www.osso.nl/blog/golang-statically-linked/


Haskell can do value-type allocations; not everything is a list.


Unless 90% of Haskell allocations are value types on the stack, this is irrelevant pedantry.


Do not optimize what a profiler hasn't proven to matter.

The features are there, 90% of the cases it doesn't matter for the typical use cases.

When it does matter, they are within reach of any savvy Haskell developer.


F# touches on most of these points, doesn't it?


Last time I tried using it, it was unclear what build system to use and I had a hard time finding the "happy path" (what test library to use, how to set up my project file to include a test target, etc). It also seems to have this weird FP/OOP dichotomy where some things are methods on a class and others aren't and that affects how you call things and so on. I didn't get far enough to really be able to articulate my problem, but it seems quite complex (albeit much of the complexity probably exists to support interop with other .Net things). I think most of this could be addressed with documentation "How to do X in F#" (I'm sure someone has a writeup somewhere, but I couldn't find any such document around the time I tried it).


F# for fun and profit is an excellent resource


Yeah, I remember trawling through those and I found them really useful and accessible; unfortunately I don't remember the details about why they didn't help me get over the hump (I'm sure if I stuck with it long enough I could have; I just have limited free time).


Seconded. It helped me clear up some FP concepts.


FP and OOP are orthogonal... You can have purely functional objects


I fail to see how this would work. An object is a combination of state and methods acting on that state, and purely functional means no mutable state.
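One way the two can coexist: keep the "data plus methods" packaging, but make the methods pure, returning a new value instead of mutating. A Rust sketch (names are made up):

```rust
#[derive(Debug, Clone, PartialEq)]
struct Counter {
    count: u32,
}

impl Counter {
    fn new() -> Self {
        Counter { count: 0 }
    }

    // A "method" in the OO sense, but pure: no mutation,
    // it returns a fresh Counter instead.
    fn incremented(&self) -> Counter {
        Counter { count: self.count + 1 }
    }
}

fn main() {
    let a = Counter::new();
    let b = a.incremented().incremented();
    // The original is untouched; each call produced a new value.
    println!("{:?} {:?}", a, b);
}
```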


> The biggest impediment I see is that these language communities are more concerned about the abstract mathematical properties of the language and things like tooling get neglected

Is it really so? I am only asking. A mathematical model allows deeper insight into structure (backed by hundreds of years of prior mathematical thought), and we can then sugar it appropriately for users. Just like civil engineers, who learn Strength of Materials etc. mathematically before they design the bridges we drive over daily.

Tooling is one of my gripes also. There was a time, decades ago, when I couldn't get beyond compiling the simple hello world example. This has gotten much better after many years. Same with Haskell. Is this due to difficulties in getting the necessary financial support?


I don’t know for sure, but I’ve never had a project fail for lack of monads or functional purity, but they’ve often failed because I couldn’t figure out the tooling quickly enough or there was some critical library missing or the documentation was shit or etc. And the economics of bridges and web apps aren’t comparable (I can’t redeploy a bridge in an hour and no one dies if one of my routes 500s when the user passes unexpected input).


I feel the pain as I read your line. One great disaster in my career early on was my going into Haskell, and also Prolog, after frustration with Java (remember Java Beans etc?) in an enterprise project. Whereas Perl5 (this was in the biotech/genomics industry) and later Python did the real work. Still I cannot forget the impressive functional power while coding Ocaml.


Projects fail because of bad code quality and architecture all the time. Monads and functional purity are simple tools that help tame and manage these issues.

I've seen a number of projects (often in JavaScript or Python) fail because of poor effect management—but if you don't have much experience with functional programming, you'd see it as yet another project failing because of bad development practices or lack of discipline.

With experience, it became pretty clear that what we needed were not iron-willed, self-disciplined developers but just better tools, of which effect management (ie monads) and functional programming are some of the most powerful.


This seems like a reasonable hypothesis, but I don't perceive a clear signal indicating that projects which use monads or pure FP are more successful than their counterparts. I think your hypothesis actually jibes with mine and with my observation from the previous sentence--that monads and functional purity could help, but if the language neglects things like documentation, high-quality libraries, and the tooling to productionize and operationalize your code, then it's a non-starter.


At least in the case of Haskell, it is explicitly so. Haskell is a language that is meant to support programming language research, and you can see that manifest itself all up and down the design of its toolchain. It's a woefully under-appreciated factor in Haskell's famous learning curve.

OCaml less so, but the Jane Street monoculture means that, if it's not itchy for a trading firm, it's not going to get scratched. Trading's an odd industry, tech-wise. There are aspects of the OCaml ecosystem that eliminate any desire of my own to use the language, aspects I would have seen as neutral or even strongly positive when I worked at a trading firm. (At least during working hours.)

So, yeah, I do think that one of the big things that's missing from typed FP right now is a strong language that has similar semantics to an ML or (preferably) Haskell, but with a much more pragmatic focus. F# is really nice if .NET is an option for you, but I'd sure like to see something that is statically compiled and can produce C-friendly binaries, so that it can play nice with a wider variety of pre-existing codebases.


As far as F# is concerned, there is Mono AOT and IL2CPP, and eventually CoreRT and .NET Native might support F# native compilation as well.


Out of curiosity, how do these compare to `go build`? I'm jaded by years of C and C++ which of course can statically link, but doing so in practice is tedious and error prone and still often not possible (e.g., many popular libraries can't be statically linked or it's infeasible to do so).


Well, Xamarin applications for iOS use Mono AOT and Unity gameplay code gets compiled to native code for deployment on game consoles and iOS, optionally on Android.

As for compilation time, Go's compilers use very basic optimizations, and as such they tend to compile fast, but hardly impressively so for anyone who has used TP, C++ Builder, Delphi, Modula-2, or similar languages.


Some Borland language tools' compilation speeds were insane. I remember them claiming compilation speeds of hundreds of KLOC per second for Borland C++ and Delphi, and anecdotal experience bears that out. Even TP used to compile really fast. I read somewhere that early TP versions were written in hand-tuned assembly.


Sorry, I read your comment before my coffee and misread "native compilation" as "static compilation", so I'm conflating two distinct things.


> There are aspects of the OCaml ecosystem that eliminate any of my own desire to use the language that I would have seen as neutral or even strongly positive when I worked at a trading firm.

Can you give some examples? Really curious.


No support for multi-core parallelism. Strings are sequences of 8-bit characters, so crappy Unicode support. Weak support for non-Unix OSes.


I'm curious as to why you'd consider them advantages in that specific line of work. I can see the lack of threads as an advantage in that it doesn't let you write multithreaded code with shared state, and forces you into some kind of multiprocessing with message passing. But why would you prefer strings as 8-bit-clean, or weak support for non-Unix?


The lack of multithreading is the stronger positive.

I'm guessing 8-bit strings are a minor positive, on the basis that you really don't need to deal with non-ASCII text in that kind of setting, and there's probably some performance benefit to not supporting it. Even if you stick to characters from 7-bit ASCII in your own data, if the strings are technically UTF-8 or UTF-16, then the fact that it's a variable-length encoding means that more string operations will involve costly branch instructions. And if UTF-32, your strings will be 4x as big as they need to be.

Not working on non-Unix is neutral. There's no benefit to it, but nothing was going to be running on Windows, anyway, so there'd have been no harm in it, either.


Crystal is very close to your list.


Crystal is interesting-- I've followed it at a distance-- but from what I understand, it's not really functional. Persistent data structures aren't a first-class citizen, and it seems to prefer blocks to lambdas. Also, (unrelated to the functional thing), the compiler isn't super fast. I think a really fast edit -> save -> compile -> reload time is one of the defining features of OCaml and Go.


> it seems to prefer blocks to lambdas

What does this mean?


Crystal prefers these constructs [0] `foo.map {|x| x + 1}` where the stuff between the squiggles is called a "block". I come from Ruby, so maybe Crystal got this one right, but Ruby blocks, at least, are a more limited, less general-purpose approach to lambdas (by which I mean anonymous, first-class functions + closures) [1].

[0] https://crystal-lang.org/docs/syntax_and_semantics/blocks_an...

[1] https://docs.microsoft.com/en-us/dotnet/csharp/programming-g...


Crystal has lambdas w/closures (called Proc) with `->{ 1 }` syntax [0]. Because of the Ruby-like syntax, where a bare `foo` is a call if `foo` is a function, you need to invoke those lambdas explicitly with `foo.call` - so you can distinguish between passing them around as values (`foo`) and calling them (`foo.call` or `foo.call(arg1, arg2, ...)`).

I've played with it a bit, haven't seen problems with compile times, they were actually pretty good.

The nice thing about Crystal is that it is extremely easy to use: terse code, easy macros, fast executables, algebraic types, a good packaging story - in general, very good ergonomics.

I hope it'll pick up one day.

[0] https://crystal-lang.org/docs/syntax_and_semantics/literals/...


"I for one can't wait for Typed FP to become mainstream as soon as possible."

This will never happen until there is a typed FP language that has solid tooling, documentation, solid libraries for most common programming tasks, a welcoming and active community, and at least one of: a killer app or framework (Rails for Ruby), a killer platform (Objective-C/Swift for App Store dev), or an underserved domain it targets (Julia for scientific computing).

The existing typed FP language implementations (OCaml, Haskell, SML, not sure of F#) are full of cruft and/or are just lacking with respect to one or more of the above.

Haskell comes the closest I think, but that may be because I've been using Haskell for decades, and so I am biased in its favor (and I'd still hesitate before using it on a large-team + long-timescale project).

Try writing a 'mainstream' application in any of the above, and you'll see what I mean. Yes, you can write a typical web app/CRUD app/mobile app in (say) Haskell, but to be honest, Yesod (say) is no Rails.

Reason might be 'the one' I guess, at least in bringing TypedFP to the javascript ecosystem, especially with FB's corporate backing.

Rust gets/is getting most of the above right, but it isn't really an FP language.

OCaml or Haskell becoming mainstream will never happen, imho. If there is a mainstream typed FP language in the future, it is yet to be invented (or is being invented right now! touch wood).

What is more likely to happen is some FP features moving into mainstream languages. Not sure if that counts as "typed FP becoming mainstream", but it's likely to be a long wait ;-)

I honestly don't see any typed FP language ever having the success of say, Python, for at least a few decades yet.

(all imho, ymmv, as it should)


The killer app for OCaml is front-end webdev with Reason. Most new large-scale Javascript projects are using gradual typing with TypeScript, and the experience is not the same as having types natively. It gives you the safety and rigidity of types, but not as much the ability to think with types. These projects also toy with immutability, using a library like Immutable.js, whose boxed data structures need explicit handling and conversion to and from native types; they need to pick a state management library like Redux, which implements a version of variants through constants and manual dispatch; and they need large test suites to check for things that types would've caught, and so on. A typed functional language eliminates almost all of these issues and grants more powers of abstraction. If not Haskell or OCaml, I suspect Elm has a good shot at it.
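To make the "variants instead of constants and manual dispatch" point concrete, here is a minimal sketch; the `action` type and `reduce` function are invented for illustration, not taken from any real Redux port:

```ocaml
(* A Redux-style counter reducer, with the action set as a variant.
   The compiler checks the match for exhaustiveness: add a new
   constructor to [action] and every unhandled match is flagged. *)
type action =
  | Increment
  | Decrement
  | Set of int

let reduce (state : int) (action : action) : int =
  match action with
  | Increment -> state + 1
  | Decrement -> state - 1
  | Set n -> n

let () =
  let final =
    List.fold_left reduce 0 [ Increment; Increment; Set 10; Decrement ]
  in
  Printf.printf "final state = %d\n" final  (* prints "final state = 9" *)
```

No string constants, no default branch that silently swallows unknown actions.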

I'm all for new languages! If I remember correctly, the ideal one is Lispy with Hygienic macros, ML-like type system baked in, multi-core, good stdlib, and tools that don't stand in your way.


"The killer app for OCaml is front-end webdev with Reason."

I'll make a $1000, 10-year bet with you that in 2028, if browsers still exist, ReasonML (or its successors) will be nowhere near mainstream status (which is the point we are discussing here). I like Reason, or at least the idea of it.

Javascript itself might incorporate more static typing in 2028 than today's version does, but I suggest the 'advances' will be relatively minuscule.


I think the problem here is that OCaml/ReasonML is a relatively large language (the flip side of the nice syntactic sugar), and there's a paucity of material to get people into ReasonML if they haven't touched FP before (which, for FP to take off, is the target audience).


The situation is definitely not ideal, but things are happening. A few helpful resources:

Nik Graf has a free ReasonML course on Egghead.io, which has a lot of positive reviews from the community. https://egghead.io/courses/get-started-with-reason

Dr. Axel Rauschmayer has written "Exploring ReasonML and functional programming" which covers the language comprehensively. http://reasonmlhub.com/exploring-reasonml/

Web Development with ReasonML by J. David Eisenberg, published by Pragmatic Bookshelf, is in the works. I read a few sample chapters and they're excellent for beginners: https://pragprog.com/book/reasonml/web-development-with-reas...

Jane Street wrote a set of 24 runnable exercises that teaches OCaml, and we ported it to ReasonML. It has been used for workshops and individual study and will get you up to speed on the paradigm quickly. https://github.com/protoship/learn-reasonml-workshop

You also have textbooks on OCaml, a lot of them openly accessible, including the venerable Real World OCaml. They are all listed in the awesome-ocaml repository: https://github.com/rizo/awesome-ocaml#books

There is vramana's awesome-reasonml repository that covers similar ground for Reason itself: https://github.com/vramana/awesome-reasonml


Going to plug a quick thing here :) I run a project called Byteconf - we have a free ReasonML conference coming up this Friday, streamed on Twitch: https://www.byteconf.com/reason-2018


> there's a paucity of material to get people into ReasonML

Believe it or not, it's even worse if you want to target native code and not JS.


I agree that JavaScript isn't going to change significantly, but fortunately it can be treated as a compiler's target platform, not only a source language.

We're not going to see any language supplant JavaScript in the browser. Dart got nowhere, and that was with the backing of Google. But there are already toolkits out there that compile down to JavaScript. GWT, for instance.


>We're not going to see any language supplant JavaScript in the browser. Dart got nowhere, and that was with the backing of Google.

Never say never (again. With apologies to Ian Fleming). Just because Dart couldn't do it (even with Google), doesn't mean that nothing else can. I, for one, hope something better does supplant JS.

https://en.wikipedia.org/wiki/Never_Say_Never_Again

https://en.wikipedia.org/wiki/Ian_Fleming


I hold hope, but not certainty!


I'm doing the ES6 + Immutable thing right now in some React apps. (I'd prefer to use TypeScript, but the codebases are too large to convert.)

It's an improvement over plain JS, but also a nightmare. For example, it's hard to create truly immutable classes. I ended up with this pattern:

  import { Map } from 'immutable'  // Immutable.js persistent Map

  class Foo {
    #props
    
    constructor(props) {
      this.#props = Map(props)
    }

    get someField() {
      return this.#props.get('someField')
    }

    withSomeField(value) {
      return new Foo(this.#props.set('someField', value))
    }
  }
It works, but it's annoying. Then you have, as you say, all the coercion/conversions between Immutable and plain JS values.

TypeScript would help, but I think I'm going to look into Reason, too. I think I heard that it supports interoperating with ordinary JS. The problem with TS, last I tried it, was that while it supports JS, there are subtleties (like the lack of support for default imports) that mean it's not fully backwards compatible. You can't just run a JS codebase with the TS compiler and expect it to work, as I understand it.


You didn't mention Scala. It has access to all Java libraries, and it doesn't force you into a single fixed mindset like Haskell's purity at all costs.

That said, it doesn't seem to have taken off, and it doesn't really look like it will. Not sure why, though I hear a lot of complaints about compile speeds.


I've been a Scala developer since 2012.

Not sure what "taking off" means in the context of programming languages, but for me it's been popular enough since 2012.

Several big companies use it and I've introduced Scala (and FP in Scala) at about 3 companies thus far ... it isn't a tough sell if the company is Java-friendly.

We also have a great, active community developing libraries. As a shameless plug here are the projects I've been working on:

1. https://monix.io

2. https://github.com/typelevel/cats-effect

Scala is one of the best languages for FP these days. Note I'm not saying that it's perfect. We can certainly improve it and a lot of improvements are coming in Scala 3.


Are you hiring in Bucharest office?


I use Scala quite a bit, and I just can't bring myself to regard it as a functional language. It's a kitchen sink language ("multiparadigm" when I'm in a polite mood) that happens to support a lot of nice things from FP, but with a distinctly object-oriented flavor.

There are add-on libraries like ScalaZ and Cats that are meant to make it easier to follow a more thoroughly functional style, but, even with those, I can't shake the feeling that I'm swimming against the current if I try to take a functional-first approach.


I've been doing Scala full-time for several years, so as far as I can see it's taken off pretty well. I can't really claim it has a welcoming community though (though many of the most unpleasant people are those who've apparently quit the language, but still hang around and hassle people).


"you didn't mention Scala"

my hypothesis (and it is only that) is that trying to unite FP and OO, as Scala tries to do, leads to unnecessary complexity in learning and usage. Also see 'tooling'.

That said, many impressive + widely deployed systems have been built with Scala, which is more than other typed FP languages can claim.


OCaml also tries to unite FP and OO; it is right there in its name.


Yes and no. It's possible to ignore the OO features and use OCaml purely as a functional language, in a way that you can't really do in Scala.


Compile speeds are a problem of the past. When you use Scala 2.12+ together with incremental compilation via a modern Zinc-based compile server, compiling Scala after each change takes a few seconds at most, sometimes less than a second. And Dotty (Scala 3) should improve compile times even further.


Rust isn't a FP language in the strict, traditional terms of Haskell and OCaml, but it's pretty damn close. If people started switching to Rust, they'd probably get 80% of the gains (concurrency, excellent typing, non nullability, etc.). If Rust gains traction, I wouldn't exactly complain; don't let perfect be the enemy of good, after all.


Could Kotlin or Swift do it? I like Python/Flask/Sqlalchemy for backend web dev, but I long for a modern, statically typed FP alternative that’s equally (or nearly as) practical and accessible. Just a pipe dream?


Kotlin doesn't belong in these discussions, no exhaustive pattern matching.


The language is not set in stone.


There is hardly anything FP about Kotlin.


Have you tried Mypy? It certainly has limitations, but it provides some static typing to Python.


I've tried mypy, but it's still too immature (creates more issues than it solves). Specifically it's buggy and very limited (IIRC support for recursive types hadn't landed and many popular libraries--like SQLAlchemy--can't be annotated or stubbed).


What about F#?


F# is pretty close. The tooling isn't up there with C# but it is close.

If Microsoft doubled the budget for tooling support, I think it would get there quickly, and that wouldn't be a ton of money.


It is more than just the budget.

F# isn't properly part of the .NET team, hence only VB.NET and C# get all the goodies.

They didn't even take F#'s requirements into consideration when doing .NET Native and the whole VS refactoring story, leaving the F# team to play catch-up on their own.


Not trying to contradict you, but I think C# is special in that it has absorbed a lot of things from the academic functional programming world and presented them in a nice 'friendly' form for the masses; i.e., LINQ; first-class functions via delegates, Func<>, and Action<>; and now tuples and pattern matching in C# 7 (among many other things).

I know mutability being the default (and nullable reference types) still lets one shoot oneself in the foot, but I'm always impressed when I find more of the nice FP ideas baked into the language. It's a pretty good compromise for me.


Functions in C# aren't really first class. A static method, for example, isn't a Func< > or an Action< >, and neither are normal methods, both of which are prevalent. To say that C# has first-class functions is a stretch.

And on anonymous methods (delegates), Func< > and Action< >, that there's even a difference between the 3 shows C#'s age and some of its quirks.

> now tuples and pattern matching in C# 7

Note that pattern matching in C# is a far cry from what you normally get in FP languages. First of all, C# does not have ADTs, which means it cannot do exhaustiveness checking, making pattern matching only superficial syntactic sugar.
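To make the contrast concrete, here's a small OCaml sketch of what exhaustiveness checking buys you (the `shape` type is invented for illustration):

```ocaml
(* With an ADT, the compiler knows every constructor of [shape].
   Delete the [Square] arm below and you get a compile-time
   "this pattern-matching is not exhaustive" warning, with the
   missing case spelled out. C#'s switch can't do this without ADTs. *)
type shape =
  | Circle of float            (* radius *)
  | Rect of float * float     (* width, height *)
  | Square of float            (* side *)

let area = function
  | Circle r -> Float.pi *. r *. r
  | Rect (w, h) -> w *. h
  | Square s -> s *. s

let () = Printf.printf "%.1f\n" (area (Rect (2., 4.)))  (* prints 8.0 *)
```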

Tuples are nice though.

> academic functional programming world and presented them in a nice 'friendly' form

This is sort of a marketing myth. C# is not an FP language and I rarely see actual FP built in C# (same goes for Java).

In my opinion you should try out the languages in the "academic functional programming world". Because you'll then notice that C#'s way of doing things is not "friendly", unless by that you mean half baked and annoying ;-)


I totally agree that FP in C# isn't super nice (its clear that it's bolted on) but to match the style of my co-workers, I can't do much more than sneak in a few little things here or there. The bulk of the code is clearly OOP, and I've accepted that, but at least I can have a little fun with some LINQ and a few extra vars that I don't mutate. :)


True :-)


FP is a style of programming and not a language. You can write imperative code in Haskell.


My cursory experience with C# was very pleasant too, and Erik Meijer managed to sneak some monads into the language through LINQ when no one was looking (https://queue.acm.org/detail.cfm?id=2024658)

Java 8 was a joy to work with as well - FP is filtering down through lambdas and map, filter, fold et al.


Great article, thank you for sharing!


Not contradicting your other excellent points in any way, but I'd argue the Java type system is better described as "class oriented" rather than object oriented.

OOP is a rather wide umbrella of techniques, involving inheritance, abstraction, encapsulation, etc, all of which can be supported in ways very different from the "let's build everything on the concept of classes" approach taken by Java.


> I think the prevalence of scripting languages in large systems today is a quirk of history.

Disagree. This is happening because software development has become an enormous industry, and is now subject to Sturgeon's Law.

If the bulk of software development really was about correctness and quality rather than rapid development and low skill barriers, we'd be seeing a lot more Ada and OCaml, and a lot less JavaScript. But here we are, and it's not going to change just because someone makes a technical breakthrough in the design of functional programming languages.


JS's popularity is the canonical example of "a quirk of history". It is popular solely because until roughly now, it has always been the _sole_ language for the enormously popular web platform.

Better proxies would be Go and Ruby since both occupy the same niches; however, this doesn't support your point as Go has grown much, much faster than Ruby.


I've been mesmerized by FP and algebraic typing ever since I tried Scala. But now I end up wondering why anybody would choose something other than Clojure or Scheme.

Nevertheless, I currently use Python because 1. it has a really great and lively library ecosystem (Pandas, Numpy, Keras and PyQt especially) and 2. everybody codes at least some Python, so whatever I write is actually going to be hackable by everybody.

I just hope Reason is going to be more successful than F# in going mainstream and will actually replace JavaScript for React developers at least. I also hope an option to force "type hints" (I put them all over the code) is going to be introduced in future versions of Python, and that Hy language support (Hy is a Lisp over Python, like what Clojure is for Java) is going to be introduced in future versions of PyCharm.


>I also hope an option to force "type hints" is going to be introduced in future versions of Python

Have you checked https://mypy.readthedocs.io/en/latest/command_line.html#unty... ?


Looks curious, thanks. Nevertheless, it (or a part of it) has to become part of the CPython standard and get switched on by default at some point for the quality of the ecosystem's code to start increasing, by means of more Python coders being gently pushed toward type-mindfulness. I also couldn't find a flag that would raise warnings when a value of a non-matching type gets assigned to a variable with a defined type hint yet allow everything else - IMHO such a flag would be the most useful.


Rust is actually a contender in this space as well [0]. It is not something that is widely touted, maybe because of a fear that an FP label would give Rust an academic tinge?

This is one of the reasons I will dedicate this year's Advent of Code [1] to learning Rust.

[0]: https://science.raphael.poss.name/rust-for-functional-progra... [1]: https://adventofcode.com/


My vote is still for OCaml, if for no other reason than the compiler speed.


And memory management productivity.

I agree GC-free memory management is necessary, but it should be constrained to certain high-performance niches which cannot cope with GC + value types.


Nice. Now the big question is what language to pick, because if there are many small communities around each language it won't help the cause.

I see Elixir (also mentioned in this thread) getting some traction. As is often the case, its success is mainly due to a single project with a specific application, namely web programming with Phoenix. Just as back then with Ruby and Rails.

Probably F# would be a better candidate for mainstream adoption with a vast .NET ecosystem and the open source release of .NET Core.


From my biased perspective, it could be OCaml thru Reason.

The .NET/Microsoft ecosystem is a no-go for a lot of people, and F# doesn't have functors. In a statically typed functional language, you don't have something like Ruby's mixins to share code, nor OO-style interface inheritance. Functors fill this gap by letting you mint new modules out of existing modules, and they are a powerful way of achieving modularity. Simon Peyton Jones's critique is that while OCaml's modules are very powerful, the power-to-cost ratio is not very favourable. And it is true: there is a bit of a learning curve, but I think it can be eased as the community grows.
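Here's a minimal functor sketch (the `ORDERED`/`MakeInterval` names are invented for illustration):

```ocaml
(* A functor is a function from modules to modules: give it anything
   matching the ORDERED signature and it mints a new interval module,
   filling the code-sharing role of mixins or interface inheritance. *)
module type ORDERED = sig
  type t
  val compare : t -> t -> int
end

module MakeInterval (O : ORDERED) = struct
  type t = O.t * O.t
  (* Normalize so the lower bound always comes first. *)
  let make lo hi = if O.compare lo hi <= 0 then (lo, hi) else (hi, lo)
  let contains (lo, hi) x = O.compare lo x <= 0 && O.compare x hi <= 0
end

module IntInterval = MakeInterval (struct
  type t = int
  let compare = compare
end)

let () = assert (IntInterval.contains (IntInterval.make 1 10) 5)
```

The same `MakeInterval` can just as well be applied to a string or date module, and each application gets its own distinct, type-checked interval type.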

For good or for bad, Javascript is the lingua franca of computing today if adoption is any measure, and anything that goes against its norms faces a strong downdraft (try getting Ruby+Opal mainstream adoption). But the updraft can be useful (I'm probably the only person on the internet who doesn't understand React Hooks yet, which were announced just a few weeks ago). Reason is Javascript++: the syntax is very close, and it compiles to vanilla JS with great interop with npm. The promise of gradual adoption is, I think, a good marketing lever. We'll see in a few years.

There is also Elm, which is just so good for building rich web applications on the browser. It is a language with wonderful aesthetics, clear long-term vision, and avoids importing the complexity of academic research, which is often the case in other typed functional languages.

But you don't really need to pick a language more than you pick Typed FP. F#, Haskell, and PureScript are great as well - each has different tradeoffs. But once you understand the set of features underpinning Typed FP you can move around languages with relative ease, and that's what I think matters most.


The Facebook ecosystem is a no-go for a lot of people.

Reason seems interesting and I'm glad they shed some of the OCaml syntactic baggage to make it more ergonomic for programmers, but it's very unlikely that it'll ever take off. Typescript already has the statically-typed JS market share, and that won't change.

Once WebAssembly becomes a mainstream, viable thing, I think we'll see a couple of new languages come out that don't have the baggage of legacy like OCaml or Javascript, and could possibly flourish.


Languages need a reason (OS SDK, framework, use case) to flourish.

Mainstream picks languages based on use cases, not the other way around.


> Being a statically typed language, you don't have something like Ruby's mixins to share code, nor do you have OO's interface inheritance.

But you do, though. You have open object types, and you can write generic functions in terms of those types, which effectively serve the role that interfaces (or traits) play in other languages.
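A small sketch of what that looks like (the `describe`/`user` names are invented for illustration):

```ocaml
(* OCaml object types are structural: [describe] accepts any object
   with at least a [name : string] method - the [..] is an open row,
   trait/interface style, with no class declarations required. *)
let describe (x : < name : string; .. >) = "Hello, " ^ x#name

let user = object
  method name = "Ada"
  method age = 36
end

(* [user] has an extra [age] method, but still matches the open type. *)
let () = print_endline (describe user)  (* prints "Hello, Ada" *)
```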


"For good or for bad, Javascript is the lingua-franca of computing today "

a very strong claim. There is a lot of 'computing' outside the browser.

lingua franca of front end web dev, I grant you, (almost by definition 'front end web dev' is mostly javascript).


Yes you're right. I thought Javascript had become the most popular language out there, but TIOBE claims otherwise.


> The .NET/Microsoft ecosystem is a no-go for a lot of people

Why? It's not 2005 - 2009 anymore, when there was this Mono push in Linux-land (remember Banshee? :) ).


Functors can be implemented via classes, interfaces and generics.

Is it 100% the same?

No, but it is pretty close for practical purposes.


Elixir is not statically typed. It's definitely interesting but you don't get a lot of the same safety as you do with ML.


In fairness, a lot of the 'safety' in BEAM languages is delegated to processes. It's ok for a process to crash.


> In early 2000s, as the web was growing to be an application delivery platform, the only practical statically typed language was Java.

We also had Eiffel, Delphi, VB, Component Pascal, Oberon back then.

But Sun was pumping money into it and offering the JDK free as in beer.


There was also SML, which is not so different from OCaml.


I can't wait until code augmented with general formal proofs (not just types) becomes mainstream in e.g. kernel modules.


> Choosing Ruby/Python over Java was at the time an act of rebellion and a show of technical superiority.

Not much of a rebellion, it was just that back then you had instant access to Python's REPL by just typing "python" in any Linux distro terminal. A Python "program" could have been just a simple script_that_does_stuff.py file run from the same terminal with:

> python script_that_does_stuff.py

On the other hand Java was a total mess.

Source: me, a Python programmer for 13+ years who directly referenced this (famous at the time) "Python is not Java" blog post [1] during my first interview for a programmer job

[1] https://dirtsimple.org/2004/12/python-is-not-java.html


Give me Algebraic Data Types or give me death.


I hope it never happens. Ocaml is now in that sweet spot where it's big enough for real work, but can ignore mainstream fashions and keep out the inevitable accompanying "how to program when you cannot" crowd.

The last thing Ocaml needs is mainstream attention. FB's "Reason" being a case in point: "Since I don't understand what I'm doing, can you at least make the syntax look familiar?"

Let the masses stick to writing UX-driven apps in Javascript. It's where the money is anyway, which seems to be the primary motivation for "software engineers" these days.


I absolutely abhor this elitism. If something is truly worth it, it should be accessible to the "masses". The polish and QA investment mass adoption provide make for much more solid products. It's not like Ocaml would be sold in stores, next to Coke Light.


Why do you abhor elitism? It's elitism (not groupthink) which yields progress. For instance, da Vinci, Newton, Einstein, etc. and all the Nobel prize winners were all outstanding elitists.

> If something is truly worth it, it should be accessible to the "masses"

OCaml is accessible to the masses, for 10-20 years already.

I doubt that the masses will ever switch to FP languages (OCaml, Haskell, Lisp, ...), since the classical lightweight languages (Javascript, PHP, ...) provide everything they need for their basic stuff. Only a few need more, and they have a free choice of several nice niche languages which don't need to become mainstream.


> Why do you abhor elitism? It's elitism (not group thinking) which yields progress. For instance, da Vinci, Newton, Einstein, etc. and all the Nobel price winners were all outstanding elitists.

We're here because of mass progress, not because of a few "chosen people". The "great man" theory of history fell by the wayside a century ago.


> We're here because of mass progress

What is "mass progress"? There are two options: a) let the masses learn to use the inventions of individualists (which happened with C/C++, PHP and Python for instance), or b) let the inventions be customized to the mediocre capabilities of the masses.

You want option b), obviously. I wouldn't call that "progress".


"The polish and QA investment mass adoption provide make for much more solid products."

That's pretty much untrue unless by polish you mean "pretty but pointless IDEs and buzzword/checkbox compliance."

Ocaml is already very accessible, just on its own terms.


I guess this is a sort of:

> Unix is user-friendly — it's just choosy about who its friends are.

> Anonymous, in The Art of UNIX Programming (2003) by Eric S. Raymond

At the end of the day, each community chooses its path. If Ocaml makes you happy how it is now, more power to you :)


> Ocaml is already very accessible, just on its own terms.

i.e., "ocaml is already very accessible to people who know ocaml"


Hey, author of the linked post here.

A few thoughts, since some things have changed since that post was written:

First, the tooling limitations that I mentioned in the article have gotten a lot better. In particular:

Merlin now provides IDE-like functionality for your editor of choice (including VS Code, vim, and emacs).

Also, Dune is an excellent build system for OCaml that does an enormous amount to simplify the build process, and tie a bunch of different tools in the ecosystem together. One great thing about Dune is it does a lot to unify the experience we've long had inside of Jane Street with the open-source OCaml experience. It's really a big upgrade.

We've also made some progress on debugging tools, like the spacetime allocation profiler. There's also active work on making GDB/LLDB debugging in OCaml really first class.

Also, OCaml has had some major industrial uptake. Notably, Facebook has several major projects built in OCaml (Hack, Flow, Infer) as well as their own syntactic-skin-plus-tooling on top of OCaml, in the form of Reason. Reason has gotten a lot of traction in the webdev world, which is awesome. Bloomberg and Docker are some other big names that have real dependencies on OCaml, along with some names you probably don't know, like Ahrefs, LexiFi, and SimCorp.

People sometimes feel like Jane Street is the only real user of OCaml, so they imagine that Jane Street's needs are the ones that drive the language priorities. So, the thinking goes, if you're not a trading firm, you should look elsewhere. But this is the wrong picture. First, there are other serious users, as discussed above. Besides, the community doesn't just roll over and do what we say. If you don't believe it, go and see how often our PRs to OCaml get rejected.

And even our interests in the language have grown beyond what you might imagine a trading firm would care about. We use OCaml for building traditional UNIX system software, like MTAs, for designing hardware (via HardCaml), and for building dynamic browser-based applications (via Incr_dom).

For sure, there are still challenges to being a minority language (and there's still no multicore GC, despite some exciting progress). But I believe OCaml is an even better choice than it was in 2011 when I wrote the article.


And, shameless plug, if you're interested in seeing what it's like using a functional language at scale for solving real problems, well, you can apply...

https://www.janestreet.com/programming/


I'm 27 years old and I haven't prioritized my career as much as I could have. For example I've joined teams that allow me to work remotely from countries of interest instead of searching for ones that had the best environment for growth as a software developer.

I'm not unhappy at all with the choices I've made so far, but my goals in life are changing and it's making me wonder what it takes to be accepted into companies like Jane Street.

Skill wise I think I'd have to start as a junior developer, which I don't mind, but perhaps having 4-5 years of working experience would disqualify me for those entry level roles?


You mentioned HardCaml in your other post, but I don't see any hardware positions on the website. Are you guys really doing hardware too?


Right now, we're looking to hire an FPGA engineer in London. Our hardware team is still small, but I expect we'll be doing more hiring as time goes on.

https://www.janestreet.com/join-jane-street/apply/ldn/full-t...


Why Mercurial instead of git?


Seems like a nice and interesting company to work for! Except I honestly wonder ... how do you explain to your friends and family what kind of work you do, and how it makes the world a better place? (besides "making the market more efficient")

The answer to this should be placed more prominently on the website, imho.


You built your own MTA in Ocaml?


Hilariously, yes.


If people want to move towards functional programming, I would recommend Elixir as the first step.

There is a fantastic web framework called Phoenix built in Elixir. This provides a way that you can immediately build something useful, and it teaches you how to use Elixir well: the documentation is good, the generated code that you start working with is a great example of how to use the language properly, and when you start understanding the design of the framework it's a great example of functional systems design (e.g. the Plug.Conn structure that gets passed around).

Not having to deal with a full ML (or Haskell) like type system takes away a large barrier to entry, but you still have to change your mindset about how you implement things. And then the dialyzer is there, when you later want to start using gradual typing.


Elixir is not statically typed. Thus you lose a lot of what makes OCaml great; a whole class of errors caught by the compiler before you run your program.

I'd say the type system isn't that much of a barrier to entry. You're likely used to one from any existing statically typed language; the only new things are sum types, polymorphic types & type inference. These aren't _that_ hard to get your head round but really improve the safety of your program.
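To make the "new things" concrete, here's a tiny illustrative sketch (mine, not from the thread) showing all three at once in OCaml: a sum type, a polymorphic type parameter, and full type inference.

```
(* A polymorphic sum type: 'a is a type parameter, and a tree is
   either a Leaf or a Node carrying a value and two subtrees. *)
type 'a tree = Leaf | Node of 'a tree * 'a * 'a tree

(* No annotations anywhere: the compiler infers
   val size : 'a tree -> int.
   Dropping a constructor from the match would trigger a
   non-exhaustiveness warning. *)
let rec size t = match t with
  | Leaf -> 0
  | Node (l, _, r) -> 1 + size l + size r

let t = Node (Leaf, 1, Node (Leaf, 2, Leaf))
```

The same `size` works unchanged on an `int tree` or a `string tree`; that's the polymorphism Milner's inference gives you for free.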


My point was about barrier to entry. My experience is admittedly with Haskell rather than OCaml, but while I did get my head round these things to some degree, I was still a long way from being productive. And this is despite having come from a background of C++.

The debate on static vs dynamic is far from settled, but I think you would be hard pressed to make the case that Python has no place. In this case, I'm suggesting that Elixir is a great way of getting people into functional programming and learning the value, while being immensely useful in its own right.

Learning that value is a good first step. I now find it immensely depressing every time I go from my Phoenix web-app back to the Android app. Kotlin is better than Java, but it's still a mess of objects and interfaces, and having to construct 5 things when it could just be passing a map to a function. The result is that I will now more aggressively pursue FP options when I'm looking at technology decisions (e.g. bucklescript is on my 'to investigate' list).


Haskell has a famously high barrier to entry that has little to do with its type system or functional nature. As a pure lazy language you have to learn a lot of concepts (What does pure mean? What does lazy mean? What is a monad?) before you can be productive. Extending this into intermediate usage is difficult too; how do I begin to combine these monads I now understand?

I see your argument. If your aim is to get into functional programming then Elixir/Erlang, even Lisp/Scheme is fine. I don't think it's the functional nature of OCaml and Haskell that make them powerful; I think those parts of the language are great, yes, but the type system (static, algebraic, inferred) is the real secret sauce.


If you want to teach FP you should use the very first and most simple FP languages --- Lisp and Scheme. Despite their extreme simplicity, they provide an extremely expressive power (through their macros) which is still unmatched in most other languages.


These languages are horrible, filled with parentheses and not fun. I had to learn these in school, like many other people, and I stayed away from FP languages for a long time because of that (like most people).

Try to learn Erlang or Ocaml, it's fun.


I wouldn’t characterize either as FP, though—both of them still actively encourage maintenance of mutable state and use of macros.

They may be “simple” in that they are very analyzable but you’re teaching a whole bag of skills, and the most FP part of lisp/scheme is just the closure.


I'm currently looking for the language and framework to use for a project, and I'd like to go with a functional language.

I really, really want to do my next project in Ocaml, but... I find the ecosystem seriously lacking. You find Ocaml libs for a lot of needs, but a lot of those are unmaintained and have their last commit a couple of years ago. I'm afraid that choosing Ocaml would mean spending quite some time on libraries I need, and less on the app I want to develop.

There's F# and the SAFE stack, but I don't feel home there. A lot of docs/libs still are (or have quirks due to having been) Windows specific, and joining the most popular f# community communication channels requires you to join the F# Software Foundation....

Then there's Scala, with functional programming and access to Java's ecosystem. But I prefer the ML style of Ocaml and F#.


Another issue with OCaml is that the community is small (although helpful). If you have very specific questions (esp. on more recent development), it may be hard to find help on Stack Overflow for instance.

Besides, the language has become quite complex: monad-based concurrency and error handling, functors, objects, GADTs, PPX... it's cool from a language point of view, but as a developer it can get overwhelming. You don't have to use all of this in your project, but you may still have to deal with existing code that is quite complex.


The people at https://discuss.ocaml.org/ are quite helpful.

> it's cool from a language point of view, but as a developer it can get overwhelming

If you consider OCaml's features overwhelming, why don't you consider a much simpler language (TypeScript etc.)?

I am learning OCaml myself right now, and I only use the features which I currently need. If I need more, I will learn OCaml's suitable features then. Btw, I like OCaml, after having tested Haskell for a while. OCaml feels like a really practical, usable "Haskell light". The compilation speed is outstanding (like LuaJIT).


I've been doing F# for 2 years now, ONLY on OSX. I use .NET Core & Xamarin currently. So far I can do a fairly complex project with dozens of external libraries, for iOS/Android/WebAPI with PostgreSQL, Sqlite. Deploy with docker on Ubuntu.

Today I say F# is good enough on *nix.

Also doing a little of Rust but the productivity with F# is far higher (you need to ramp up on Rust for a while!)


Last I checked, F# on Mac still requires mono for development, its repl is super janky, and its tooling is all over the map. Have things improved in those areas in the last year or so?


Mono is required, but all my libs are on .NET Core now, so it's only there as part of the deal. I use Visual Studio for Mac.

The repl is not great, even on Windows. F# is not the kind of language for a great repl experience, imho. I rarely use the repl anyway (the only one I use regularly is the Python one)


Conversely, Windows does not look like a well-supported platform for OCaml (in fact, it looks worse now than it was a few years ago). There's no official Win32 package or installer; the website just directs you at third party ones. And you have a choice of an experimental OPAM build that requires Cygwin (and needs a forked repo to install packages from, that upstream package authors don't necessarily maintain), or OCPWin that is stuck at v4.01.

It's not an arrangement that inspires confidence - it feels like the community in general really doesn't care about Windows, and it's a few people trying to keep it afloat there - but if they step back, it won't stay afloat.


My ML of choice is F#. I came from OCaml because I wanted better tools, libraries and an improved syntax.

The only place where I suffer from Windows-only libraries is UI and graphics in general (I don't do web development so I cannot comment on that); otherwise development on Linux feels good (first via mono and now dotnet core).

Without the heaviness of dotnet (and, in particular, its project files), it would be perfect.


F# seems kind of unfriendly to the macOS environment. Looking at https://fsharp.org/use/mac/ it looks like you have to install a ton of things to get started. What about F*?

Now if I can get away from Ocaml's weird syntax, I think this is still a good deal.


I don't have a Mac, but it looks like incomplete documentation. Since VS Code and dotnet core both work on Mac, you should be able to follow the Linux instructions from these videos without changes: https://youtu.be/-uqbk0v7qxo?list=PLlzAi3ycg2x0TScJb7czq7-4L...

F* is intended for proofs; probably not what you want if you are looking for a general programming language.


As you have experience in both, are there Ocaml things you really miss in F#? How long have you been developing in F# (and which type of apps, if I may ask)?


FYI F# started as an OCaml implementation for .NET. It quickly evolved into an independent language, as that was easier to do, but a good part of it is straight up OCaml.


I have been using F# for data science, developing algorithms (where ML truly shines) and most of my scripting needs for 3 years now (it is not my main language as, these days, I need to instrument some C++).

As remify said, F# came from Ocaml: it lacks modules, GADTs and other things, but you can easily translate most Ocaml to F#. For me the syntax was a net win.

The thing I miss is the Graphics module of Ocaml's std. I used to build quick visualizations with it and I have not found a good F# equivalent that would run flawlessly on Linux (the relevant section of mono's std was very buggy the last time I tried using it).


But now there is no need to rely on Mono, right? Now that Microsoft has released dot net for other OS.


When the first version of dotnet core went out, parts of it such as system.drawing were just empty shells. Nowadays dotnet core is all you need (happily F# seems to be developed first and foremost by the community and not by Microsoft).


> You find Ocaml libs for a lot of needs, but a lot of those are unmaintained and have their last commit a couple of years ago.

Not sure what issues you ran into, but this isn't necessarily a bad thing. I had a similar experience with Elixir, where some libraries had been last touched two years ago, but did the job with no issues whatsoever.


This is something I see and feel often. If something is untouched, that doesn’t mean it’s broken or, for that matter, unmaintained. It can simply mean it was done correctly from the get-go. Think of code as a formula: how many formulas are there that haven’t been touched in hundreds of years, because they are correct?

The desire for libraries that are constantly in flux stems from the “move fast and break things” mentality, mixed with the continuous flow of breakage in mutable languages. If you move slow and do things well, you can often build something reliable with a fraction of the effort.


I would agree. I'm happier using a stable, rarely changing library than something that is in constant flux and changing every week. Really, that kind of breakneck pace signals to me that people are just throwing stuff at the wall to see what sticks, and reduces my confidence in the quality of the product...


Well, one example is oauth in ocaml. I find this https://github.com/jaked/ooauth but it has an open issue from 2013 titled "Fix compilation on recent ocamlnet, gcc".

I have the feeling I encounter a lot of these situations when I look at libraries I would need for a project. It's rather uncommon I see a very active project in Ocaml (even if there are some indeed).


Give clojure a shot. It's mature, expressive and you have access to the JVM.


> Sometimes, the elegant implementation is a function. Not a method. Not a class. Not a framework. Just a function. - John Carmack

This mirrors a discussion I've had with my boss a number of times. Most of our products are written in C++, and we complain about how often developers (usually coming from Java) think that everything needs to be inside a class hierarchy. The procedural parts are still there for a reason: the CPU is procedural. A procedure or function is a closer mapping to what the CPU is actually going to do when the code runs.

On the other hand, I've never done serious work with a pure functional language like OCaml. I would welcome the chance, though I wonder if one finds the same kind of dogmatism as in the OOP world...("it must be a pure function!")


Haskell is dogmatic. Everything must be lazy. Everything must be immutable. Everything must be pure.

Ocaml (and SML) aren't particularly dogmatic. They are both eager, so reasoning about performance is much easier. Most things are immutable by default, but you can make mutable data structures in both. Rather than insisting on pure functions, SML and Ocaml allow side effects in functions and have the `unit` type for functions without a return.

The type system is actually strong (e.g., no implicit casts) and null pointer exceptions simply do not exist. Multi-threading is the big weakness. OCaml has been promising multicore support for at least a decade without it landing in mainline. Likewise, most SML variants don't have support (though PolyML does, along with a couple of others).
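As a rough illustration of the non-dogmatic style described above (my own sketch, not from the thread): OCaml is immutable by default, but you can opt into mutation with a `ref` cell, and side-effecting functions honestly return `unit` instead of pretending to be pure.

```
(* A mutable reference cell: the opt-in escape hatch from
   immutability. Its type is int ref. *)
let counter = ref 0

(* A side-effecting function. It returns unit, OCaml's
   "no meaningful value" type, rather than claiming purity. *)
let incr_and_log () : unit =
  counter := !counter + 1;
  Printf.printf "counter is now %d\n" !counter

let () =
  incr_and_log ();
  incr_and_log ()
```

Note the extra syntax (`ref`, `:=`, `!`) around every mutation: mutation is allowed, but it's visible at a glance rather than being the default.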


The first one, about everything having to be lazy, isn't true. You can define strict data types or functions. The other two are though.


You can still have mutable references and impure functions in Haskell, but it's not the "path of least resistance" like in other languages. The IO type marks impure functions.

Also, one can use STRef if they want a pure function that is internally implemented with mutable values; that is, all the side effects are self-contained.

http://gamasutra.com/view/news/169296/Indepth_Functional_pro...

"a function can still be pure even if it calls impure functions, as long as the side effects don't escape the outer function"


> I would welcome the chance, though I wonder if one finds the same kind of dogmatism as in the OOP world...("it must be a pure function!")

Nah, OCaml is a very practical functional language, that's why it features mutable state. You just have to opt-in, so it just encourages [im]mutability as a default, as it should.

Edit: fixed typo!


Do you mean "encourages immutability as a default"?


Yes, that's what he meant ;). In OCaml, references are always explicit (different syntax) and mutable fields in records must be declared explicitly as mutable.

However, OCaml does not bother you too much (as compared to Haskell for instance) when using side effects like printing.
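A quick illustrative sketch of the record part (mine, not from the thread): fields are immutable unless declared `mutable`, and only declared-mutable fields accept the `<-` assignment.

```
(* Only `balance` is declared mutable; `owner` is immutable. *)
type account = {
  owner : string;
  mutable balance : int;
}

let a = { owner = "alice"; balance = 100 }

let () =
  a.balance <- a.balance + 50;   (* fine: field declared mutable *)
  (* a.owner <- "bob"               would be a compile-time error *)
  Printf.printf "%s: %d\n" a.owner a.balance
```

And as the comment says, printing (`Printf.printf`) needs no monadic ceremony; the side effect is just allowed.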


Since the 90s it's driven me crazy that people don't recognize the flexibility of having data and functions separate when needed.


You are in for a surprise with how plain functions interact with ADL (argument-dependent lookup) in C++, versus code inside classes.


I'm personally having a lot of fun learning Rust, which seems to have been largely inspired by OCaml. Algebraic data types and pattern matching are a revelation!


Fun fact: the Rust compiler was initially written in OCaml.


I would love to see something more like SML, with OCaml compile times, and a Go-like standard library. If that could be accomplished, I think we would see statically typed functional programming go mainstream.


Python makes it easy to start a program. OCaml is easy to finish one.


ReasonML is going to bring Ocaml to the masses. Massively popular React apps can be created from ReasonML. Ocaml + syntactic sugar = ReasonML.


The masses won’t be coming to ocaml unless it comes to them. ReasonML seems like a much more likely approach to succeed to me.


What seems to be working is Rust.


Rust is a very different beast when it comes to development speed. When compromises must be made, its design decisions always favor runtime speed over developer friendliness. That results in rather unwieldy APIs compared to the alternatives in dynamic languages.

As a replacement for C/C++ this makes perfect sense, as a replacement of JS/Ruby/OCaml, not so much.


Any particular examples of “development unfriendliness” in Rust?

I would also say safety/correctness usually win over runtime speed concerns if no other factors are involved as far as Rust design decisions go, at least that’s how it has typically been.


OCaml belongs with C++ and Rust for compile times...


How so? OCaml compiles way faster, and you don't actually need to compile it. You can sort of use it as a scripting language without any compile step to begin with.


How so?

OCaml has bytecode + native code compilers, both much faster than heavily templated C++ code or plain Rust code.


Great writing.

By the way, how is it that HN readers always heavily promote OCaml related threads?


I upvoted the article as OCaml is one of those “aspirational” languages that I’ve always had an interest in, but never found the time to explore. I’d like to hear more about it, might prompt me to pick it up one day.


The nice thing about OCaml (and its MS cousin F#) is that it's a hybrid language. Know FP? Fine, you can code that way. Know OO? Fine, you can code that way. It meets you where you are.


I would even argue that for OO, OCaml is a better object-oriented language than most dedicated OO languages. It nicely decouples classes from types, for example, so that the subclass diagram needs not correspond to the subtyping diagram (i.e. you can reuse code via inheritance without following LSP subtyping rules, but the type system will still prevent any unsafe use of such hierarchies).
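A small illustrative sketch of that decoupling (mine, not from the thread): OCaml's object types are structural, so a function typed against a set of methods accepts any object providing them, with no class declaration or shared hierarchy required.

```
(* Inferred type is roughly:
   < name : string; sound : string; .. > -> string
   i.e., "anything with these two methods", not "instances of
   some particular class". *)
let describe thing = thing#name ^ " says " ^ thing#sound

(* An immediate object: no class at all, yet it type-checks
   against `describe` purely by its method set. *)
let dog = object
  method name = "dog"
  method sound = "woof"
end
```

Because compatibility is checked structurally, you can reuse code via inheritance without the subclass relation having to mirror the subtyping relation, which is exactly the decoupling mentioned above.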


This article is very good and OCaml is an excellent language that is underappreciated (IMHO). I used Caml-Light when I was a student and I loved it.


>> How is it that HN readers always heavily promote OCaml related threads ?

Because functional is still the flavor of the decade.


I want to spend more time in OCaml, but without threads it's a tough sell. Modern performance requires parallelism with shared memory, and the current mainline solutions don't offer that.

Where threads aren't part of the equation, OCaml has become suddenly popular. That's telling.


https://pl-rants.net/posts/go-and-ocaml-scalability/ has an interesting take where they found multi-process OCaml performing better than goroutines. I don't know how to interpret those measurements correctly, so any help there would be appreciated.


This experiment has "embarrassingly parallel" data, i.e., there is little to no benefit to sharing data between computations. It's not surprising that multiprocess does well on this, but this is not common in most real world applications. Imagine a simple chat server, where every request needs to read and write shared state. Even when the application doesn't seem to have shared state, any in-memory caching requires it.


"most real world applications" is a meaningless phrase, I doubt anyone can quantify "real world applications" one way or the other

also the result is still interesting, why wouldn't goroutines scale just as well, but apparently didn't?


That's fair, but perhaps I can rephrase it to say that embarrassingly parallel applications are a well known edge case and not the norm.


That makes sense. Thanks.


Did you try Lwt/Async? I'm an OCaml novice and I'm curious how far one can get with them.


Neither Lwt nor Async allows for parallel execution because of the GIL. They are akin to async/await in Python.


Correct. For parallel execution you can use parmap or a submodule of ocamlnet that I never remember


Elm is the language that showed me a functional language can be amazing. It clicked in my head like no other language. I did a substantial web app in it, and came to appreciate it more and more. I only wish it were available as a general purpose language. I used to call Elm a gateway drug to OCaml, but I have a hard time swallowing a lot of OCaml syntax. Reason seems to be a good way around that. I'm also keeping an eye on the Grain language.


Is F# still being used/developed, or has MS started to leave it out to dry?


The problem with Reason is that it doesn't automatically understand third-party JavaScript codebases. It's a pain to write correct type bindings so a Reason program can use an existing JS codebase.


is there any NON-garbage-collected functional programming language? Something suitable for real-time or deterministic timing usage?


http://intuitionistic.org/ ... seems barely alive


Rust.


These kind of blog posts are one of the reasons why I love the internet, and why Hacker News is such a great place.


It seems like Rust would have been an even better choice, had it been available at the time they chose OCaml.


Rust may be a very good choice for systems programming. However, safety-critical software can also be written in other languages which don't need a borrow checker -- Ada and SPARK for instance, or even in C with verification tools (Frama-C etc.). Most developers also don't need Rust's "feature" of not having a garbage collector, since they are not involved in systems programming. As for me, what makes OCaml attractive is its functional nature combined with a very practical imperative syntax. There is no steep learning curve like in Rust, and OCaml's compilation speed is staggering.


I am still waiting for a friendly backend language similar to Elm. OCaml is hard and has missed its timing.


You might find this interesting: https://wende.github.io/elchemy/

IMO OCaml is not any harder than Elm.


OCaml is not for the masses, it never has been



