Somewhat off topic: I'm very interested, but as so often before with these kinds of texts I'm at least somewhat stumped by the terminology. I can almost read this, and I've done two-and-a-half-ish courses of Haskell.
I've wasted so much energy feeling stupid when reading texts that I don't understand. But most of the time not understanding is not about "intelligence" but simply about not having the right meanings assigned to the right words.
You're not stupid, you just don't have enough structured data yet. I wish I had that kind of insight and believed in it when I started out in university. Not that I'm completely convinced even now.
I first learned Haskell so I could understand some of the papers I wanted to read that used Haskell for examples. Even when doing so, I realised that I would eventually have to learn ML, but haven't gotten around to doing it. I think it's a bit like literary researchers who have to learn ancient Greek or ancient Chinese to get to the bottom of things.
It's funny that when I grew up I was always praised for being clever, so I never once doubted it. However, when I chose to learn Japanese and moved to Japan, suddenly many people assumed I was stupid -- because I could never quite understand what was going on. It's been an interesting experience. Which is it? Am I clever or stupid?
Eventually I've come to this point: clever or stupid makes no difference. The only important questions to ask are "Do I understand?" and "What do I need to do to understand?" It's also made me realise that relying on my supposed cleverness has actually been a source of problems in my life that I had not thought about before (because I'm clever and clever is good, right?).
It's interesting to read this comment because I experienced nearly the same thing. Thought I was clever because everyone told me it was so, was admittedly quite arrogant about my technology skills, learned Japanese, moved to Japan, and could barely explain anything to my programming peers at the level of a five year old.
It took several years of forced humbleness to erode the arrogance, stop thinking I was so clever, and simply apply myself to the task of learning the necessary Japanese to explain myself correctly.
What I generally see with people new to learning Haskell (or similar) is that they approach it with their bag full of imperative language cleverness, and find the concepts and terminology hard to grasp because they refuse to become humble for a bit and examine it as a paradigm different from the one they know.
To be fair I was no different until I finally started understanding it, but looking back, learning Haskell was similar to my Japan experience. It became much easier once I buckled down, learned the parts that didn't make sense, and most importantly began applying them.
Your thoughts on cleverness reminded me of this quote from David Foster Wallace's Infinite Jest.
Talent is sort of a dark gift ... talent is its own expectation: it is there from the start and either lived up to or lost....
Here is how to avoid thinking about any of this by practicing and playing until everything runs on autopilot and talent's unconscious exercise becomes a way to escape yourself, a long waking dream of pure play.
The irony is that this makes you very good, and you start to become regarded as having a prodigious talent to live up to...
That's my experience too. An amazing number of "hard" things are just "ohhh, that's what they meant": basically a communication problem when reading others. What they see is not what you see, and here lie most of the problems.
For instance, I failed linear algebra in college, even though I loved CGI and 4-dimensional matrices are the bread and butter of 3D transforms. So I was eager to master it. I tried for 10 years reading a math textbook and made no progress. I did lots of programming, which helped me get a feel for what notation could be used for, and I also bought a stupid linalg book on Amazon. Their wording and approach were different. All of a sudden everything became trivial. When I go back to the older book I understand what they mean now, but it hid so much meaning. I needed to acquire a culture of mathematical notation. Some kids might develop it naturally, or be able to grow into it with the help of teachers; I wasn't able to at the time, so I was left in the dark.
Now most advanced abstract subjects feel different. It's gone from mental cramp to tiny things that MUST HAVE meaning, and I just have to change my perspective and I'll see it too. And weirdly it applies to CS fields too. After that I could finally read fat compiler books full of succinct notation. Combinatorics became reachable, same for fancy physics. The symbols carry almost no fear now. And in a way I'm still a bit sad that I didn't know that before; it feels like a huge waste, not just for me but for other kids too.
Lastly, going back to programming, Haskell and the like. After struggling to grok continuations and non-determinism, I got a very, very different picture of computing altogether. And I realize how crazy bad imperative-first programming courses are. I haven't forgotten how impatient students can be and how effective a simple statement sequence with visual output can be... but gosh, how far it sets you back in trying to approach different and vastly more useful ideas (say, Prolog).
It was the final nail in the coffin. Linalg was a limit case of something that was "hard", foreign and serious. The book itself was not everything, that would be dishonest; I must say that reading about FP for years also brought my mind along. It took me off the material side of things: no more thinking about setting memory bits, looking at abstractions instead (it doesn't really matter how functions and types are implemented below; the semantics and algebra are the focus). That said, the difference in mindset and perspective in that book (Gareth Williams, Linear Algebra, 5th edition) showed me that a topic can be approached in many different ways. What group A says about a domain may not speak to your brain, while group B's perspective will feed your brain just the right way. What is hard is mostly "impedance mismatch", not a limitation in the reader.
Slightly related: when doing the first Coursera proglang MOOC, I remember newbies crying on IRC because meta-evaluation made no sense to them, while some of the people who had gone through this before could discuss it without trouble. I remember the odd feeling of being completely incapable of explaining that "simple" thing to the newcomers. Some actual mathematicians say that math never makes sense, you just get used to it, so maybe don't run from one book to another and try osmotic learning instead. Read some, think about it. At times I just note what eludes me without overthinking it and come back to it later; maybe my lower brain made sense of it overnight. One other thing: the newbies were very impatient, and that's another thing to avoid. It's just an odd balance between patience and boredom versus impatient excitement and pain.
One particularly cool (and surprising, to me at least) trick is defining interpreters that carry out composable optimization passes on a DSL without needing an intermediate representation: http://okmij.org/ftp/tagless-final/course/optimizations.html
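(For anyone who hasn't seen the style: here's a minimal sketch of my own, not code from Oleg's course. The "program" is a term polymorphic over a typeclass, and each interpreter, including each optimization pass in the linked course, is just another instance, so no intermediate representation is needed. The class and function names below are made up for illustration.)

{-# LANGUAGE FlexibleInstances #-}

-- A toy tagless-final DSL: the program is any term polymorphic in `repr`.
class ExpSym repr where
  lit :: Int  -> repr
  neg :: repr -> repr
  add :: repr -> repr -> repr

-- Interpreter #1: evaluate to an Int.
instance ExpSym Int where
  lit n   = n
  neg     = negate
  add a b = a + b

-- Interpreter #2: pretty-print.
instance ExpSym String where
  lit     = show
  neg a   = "(-" ++ a ++ ")"
  add a b = "(" ++ a ++ " + " ++ b ++ ")"

-- One program, written once, interpreted many ways.
example :: ExpSym repr => repr
example = add (lit 8) (neg (add (lit 1) (lit 2)))

main :: IO ()
main = do
  print    (example :: Int)     -- 5
  putStrLn (example :: String)  -- (8 + (-(1 + 2)))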
The trouble with this style is that your sublanguage program is no longer a value, so you can't reason about it or treat it as one. Instead you've created, effectively, an imperative method that accepts a bunch of callbacks that implement effects, and runs those.
I mean, the imperative programmers already knew they could write programs full of effects without monads. But there's a reason we regard that as bad style. As far as I can tell this piece is just a longwinded reinvention of imperative programming?
Yes, the program is written in terms of callbacks, just like monadic programs are written in terms of "return" and "bind" callbacks. This style isn't any more imperative than monadic programming. You could port this code to Haskell (using typeclasses instead of modules) and it'd work just as well.
Tagless final style programs are indeed written in terms of "return" and "bind" callbacks, but programs written with "vernacular monads" or free coproduct style aren't - they're plain old values and can be inspected as such. That's the style I prefer.
Thank god. I've felt like we've been getting closer to reaching the peak of the hype curve for what I'd probably call 'monadic pure functional programming', i.e. Haskell, Scalaz, etc. It'll have its hardcore advocates forever, of course, but I think we might be finally getting to the point where people realise 'wait, this sure is an absurd amount of complexity to represent some fairly simple functions, surely there must be a better way?'.
The majority of internet conversations about monads never get past the most shallow introduction. The most important feature of monads is that they can be composed via monad transformers. A la carte languages and algebraic effects are nice, but monad composition is more powerful.
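(For readers who haven't met transformers: a minimal sketch of what "composed via monad transformers" means in practice, using the standard transformers package. The stack and names here are my own toy example, not anyone's production code; note the `lift` that a later comment complains about.)

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (StateT, evalStateT, get, put)

-- Two effects composed with a transformer: State layered over IO.
counterDemo :: StateT Int IO ()
counterDemo = do
  n <- get                                  -- State effect
  lift (putStrLn ("count = " ++ show n))    -- IO effect, lifted into the stack
  put (n + 1)

main :: IO ()
main = evalStateT (counterDemo >> counterDemo) 0
-- prints "count = 0" then "count = 1"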
Also, as someone in the Haskell industry, I think you're wrong about trends. The last 5 or so years have been an accelerating trend of Haskell adoption. Of course it is still dwarfed by most other languages, but the number of multi-million dollar contracts I've seen executed in Haskell has been multiplying.
As a relatively tiny data point, I've introduced Haskell at a small telecom doing $350M/yr in revenue. This kind of environment screams "Enterprise", but as it turns out the vast majority of problems that need to be solved can be handled quite gracefully with Haskell.
It's an incremental improvement on what's gone before, sure, but it does offer a bunch of genuinely useful facilities that a lot of programming languages just don't have. I'd never want to go back to a language that was missing HKT, missing libraries of generic operations on monads, missing libraries of generic operations on recursive datastructures via fixed points, or missing safe derivation of operations on structured data via typeclass derivation or similar.
The most important feature of monads definitely isn't composition, because they don't compose well at all. Monad composition is the worst feature, by far, because it doesn't exist.
>Also, as someone in the Haskell industry, I think you're wrong about trends. The last 5 or so years have been an accelerating trend of Haskell adoption. Of course it is still dwarfed by most other languages, but the number of multi-million dollar contracts I've seen executed in Haskell has been multiplying.
That's my point: that's the hype cycle, and it's now peaking as people realise what a lot of effort it is to actually use monads in reality with the awful complexity of monad transformer stacks and such.
Can you clarify what you mean by "they don't compose at all"? It feels like you've had some tough experiences with monads (monad transformers), and I agree, they are hard to get into.
The composition of two monads doesn't always form another monad, unlike Functors and Applicatives.
Simple example: a list containing functions ([Reader r a]) forms an Applicative just fine, since Applicatives compose. However, you can't satisfy the laws needed to form a Monad.
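(A concrete sketch of this point, my example rather than the parent's, using Data.Functor.Compose from base: composing two Applicatives always gives an Applicative, but base has no general Monad instance for Compose, precisely because that would require a distributive law.)

import Data.Functor.Compose (Compose (..))

-- A list of readers, wrapped as a composed functor: effectively [Int -> Int].
listOfReaders :: Compose [] ((->) Int) Int
listOfReaders = Compose [(+ 1), (* 2), subtract 3]

-- The Applicative (and Functor) instance for Compose exists for any two
-- Applicatives, so fmap, <*>, pure, etc. all still work.
shifted :: Compose [] ((->) Int) Int
shifted = fmap (+ 100) listOfReaders

-- But there is no `instance Monad (Compose f g)` in base: to write
-- join :: Compose f g (Compose f g a) -> Compose f g a you would need a way
-- to swap the layers, g (f a) -> f (g a), which arbitrary monads don't provide.
main :: IO ()
main = print (map ($ 10) (getCompose shifted))  -- [111,120,107]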
You can compose monads whenever you have a distributive law for them. Usually there's an obvious one. In your example there's a distributive law that uses the same parameter for every function, this allows you to compose the monads.
When there are several possible distributive laws, or none, that's important. It means that either the interactions between the two effects are subtle and need to be further specified, or they are completely incompatible.
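(A sketch of one way to read this, assuming the parent means the composition nested as r -> [a], i.e. ReaderT r []: the law that "uses the same parameter for every function" is just sequenceA specialised to functions, and with it the composed join threads one environment through both layers. My example, not the parent's.)

-- The distributive law: turn a list of readers into a reader of lists,
-- feeding the same environment to every element.
distrib :: [r -> a] -> (r -> [a])
distrib fs r = map ($ r) fs        -- this is sequenceA specialised to functions

-- With that law, the composition nested as r -> [a] (ReaderT r []) has a
-- join that shares the environment across the outer and inner layers.
joinRL :: (r -> [r -> [a]]) -> (r -> [a])
joinRL f r = concatMap ($ r) (f r)

main :: IO ()
main = print (distrib [(+ 1), (* 2)] 10)   -- [11,20]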
Algebraic effects can only handle effects that trivially commute. So monad composition is a finer, thus more expressive operation.
This reminds me of the 'fine-grained permissions' vs. 'broad permissions' security models. The former is more expressive, more powerful, etc. It also is technical, complicated and confusing. The latter is less powerful and less expressive, but people actually use it, because it's simple and easy to understand.
Expressing effects using monad transformer stacks might be more expressive and allow finer-grained distinctions, or whatever (please elaborate what you mean, I'm interested), but it's syntactically ugly (lift), it's confusing, and it's not clear that the expressiveness you get is practically useful.
In contrast, algebraic effects might 'only handle effects that trivially commute' (again, going to need more detail on this), but they're easy to understand and easy to use. They're much more likely to be a practically usable effect system that real programmers can really use in the real world in a way that doesn't convolute code with details of how you compose effects unduly.
The benefit of working with typeclasses has always been the ability to see and abstract out patterns that occur through Functors, Applicatives, Monoids, et al. I think ML is much more hyped up than monadic pure functional programming.
>The benefit of working with typeclasses has always been the ability to see and abstract out patterns that occur through Functors, Applicatives, Monoids, et al.
Which leads to code that needs to be puzzled out, rather than code that flows naturally. 'What does [arbitrary-monad-operationM_] mean for SomeMonadTransformerStack again???' is all too common in my experience.
I really think that's false abstraction, like an 'object' superclass in OO languages.
There aren't that many different compositions, and they're mostly just variants of map and fold for different contexts.
And once you learn them, it's a lot easier to see what code using them does at a glance, rather than reading the corresponding for loop to work out what it does, and whether it has any side effects.
I disagree. It's a lot harder to see what code using them does at a glance, because the names are meaningless. 'forM' means what, exactly? Nothing. It means nothing until you apply it to a particular monad.
It's trivial to read a loop, because it's right there. It's trivial to tell if it has any effects, because they're notated right there as algebraic effects.
>'forM' means what, exactly? Nothing. It means nothing until you apply it to a particular monad.
It means you're mapping over something and collecting the results. Yes, you need a little more information to understand exactly what you're mapping over or what "collect" means in this context, but this is still more information than `for` gives you without reading further.
Although I find in practice it's usually fairly obvious what `forM` means in context, because you're likely writing a lot of code working with the same monads.
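(A toy illustration of that point, my own example: the same forM, two different monads, and only the meaning of "collect the results" changes.)

import Control.Monad (forM)

main :: IO ()
main = do
  -- In IO: run one action per element, collect the results in order.
  lengths <- forM ["foo", "ab", "x"] $ \s -> do
    putStrLn ("measuring " ++ s)
    return (length s)
  print lengths                                   -- [3,2,1]
  -- In Maybe: collect results only if every step succeeds.
  print (forM [1, 2, 3 :: Int] (\n -> if n < 4 then Just (n * 10) else Nothing))
  -- Just [10,20,30]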
>they're notated right there as algebraic effects.
I'm not sure what you mean by this. Unless you're using some kind of advanced imperative language with effect types, they're not "right there", you have to infer them from the code.
Whereas with Haskell you usually have a type signature which tells you what the effects are.
I don't understand this comment, almost every compiler and language in existence uses monads for computation. Monadic sequencing is the foundation of almost every CPU. Most languages are utterly dependent on them.
Haskell and other pure languages let you compute values independent of the monad they're computed in, which is very useful. Of course, this is all conceptual; eventually everything is translated to the underlying CPU monad and its implicit join.
I'll point out again that this is very reminiscent of the OOP fad and "everything is an object". "Everything is a monad" is similar.
"It's all just the CPU monad" is just.. inane. It's like saying 'C functions are pure functions that implicitly take and return the world'. Maybe? But at that point 'pure' has lost all of its information content.
Honestly, I feel like Haskell and OCaml are severely over-engineered and focus far too much on nuance; these concepts and ideas can be expressed more concisely and legibly in mathematical notation than in the programming languages used as examples here.
Haskell is an incredibly simple language with only a few core constructions. You have ADTs/pattern matching and lambda abstraction/application and that's basically it. Everything else in the standard language is (usually straightforward) syntactic sugar around those two concepts. This includes the vast majority of data structures, monads, etc. All the other things that people use exist as (individually) extremely simple extensions that you can learn in isolation in a few minutes.
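(A tiny illustration of the "sugar over those two concepts" claim, my own example: do-notation desugars to ordinary lambda abstraction and >>= application.)

-- do-notation...
sugared :: IO ()
sugared = do
  line <- getLine
  putStrLn ("you said: " ++ line)

-- ...is plain sugar over lambda abstraction and >>= application.
desugared :: IO ()
desugared = getLine >>= \line -> putStrLn ("you said: " ++ line)

main :: IO ()
main = sugared >> desugared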
Haskell is possibly the least over-engineered language out there; it's almost as if laziness (no pun intended) was a guiding principle when building the language.
OCaml is more complicated, but I'd still consider it vastly simpler than C++ or Java (as long as you ignore OCaml's object layer).
Python: What if everything was a dict?
Java: What if everything was an object?
JavaScript: What if everything was a dict *and* an object?
C: What if everything was a pointer?
APL: What if everything was an array?
Tcl: What if everything was a string?
Prolog: What if everything was a term?
LISP: What if everything was a pair?
Scheme: What if everything was a function?
Haskell: What if everything was a monad?
Assembly: What if everything was a register?
But in any case, there are typically lots and lots of details (Haskell is no exception), and they matter. And also, it takes a lot of experience to get anything of quality done!
I like what Jonathan Blow calls this pattern: "Big Agenda language". It might make a programmer's life harder if the language is so opinionated. (Start here: https://www.youtube.com/watch?v=TH9VCN6UkyQ)
He specifically named Haskell, Java, and a few others, as being guilty of that, and being less usable for sticking to ideological purity.
I agree that Haskell is made more difficult than necessary by sticking to purity so tightly [0]
Having said that, Haskell is very much not built around the question "what if everything was a monad". Monadic IO wasn't even a thing until Haskell 1.3
[0] Although this is, in large part, forced by its unfortunate choice of being lazy. It has also, arguably, forced advances in PL research that would have been avoided had Haskell added first class support for imperative programming.
Being lazy is what makes the language so elegant. Even a high level game developer (Carmack?) said, at one point, that his ideal language would be lazy by default.
My impression is that laziness forced people to invent new abstractions to deal with IO, because you can't rely on the easily predictable evaluation order of eager evaluation to provide a good enough solution. And it's those abstractions that make Haskell elegant.
However now that we've invented them, while some things benefit from laziness, eager evaluation is probably a better default.
I would disagree with your impression. Laziness made lots of things very elegant but things like IO initially very hard. Eventually elegant solutions were found for IO as well.
An example of the elegance is a simple function that takes a list of some type which can be ordered and picks the "largest" element:
max :: Ord a => [a] -> a
max = head . sortBy (flip compare)  -- sortBy is from Data.List; flip compare sorts largest-first
The type declaration says the function "max" takes a list of some type a which is orderable, and will return one element. The implementation says: first sort the list, then return the first entry of the sorted list. In a strict language this would be a lot of wasted effort, but in a lazy language (depending on how the sort is implemented) what ends up happening is basically the same steps you would have to do to pick a max value. The unnecessary sorting of the rest of the list cannot happen, because that result is thrown away before it can be accessed.
If you asked someone to describe how to get the largest entry in a list, the most concise way to describe it would be to sort the list and then take the first entry. In Haskell the most concise way to describe something is often the correct implementation as well, and that's what I call elegance.
I consider it very inelegant to sort a list only for extracting the largest element. Usually sorting refers to a O(n log n) operation, which is a waste.
Yes, for some sorting implementation taking the first element of the sorted list may actually amount to a linear time operation given that the sorting is carried out lazily, but that's not very obvious and may fail on you fast. Bad idea from an engineering standpoint.
There is a better solution in Haskell, which is
getMax $ mconcat (map Max [1,2,3,0::Int])
Or more abstractly
findMax :: (Ord a, Bounded a) => [a] -> a
findMax = getMax . mconcat . map Max  -- Max and getMax come from Data.Semigroup
You need the ::Int so that it's bounded (by the smallest representable value, which Int does have, but Integer does not have).
Now, if you actually try getting this version to work, you will notice there is a lot of conceptual overhead involved for such a small thing. And it bugs me that you have to use the "Bounded" machinery just because the type system doesn't know that the list is non-empty (which I assume here).
So I ended up with it after fiddling with foldr1, mappend and so on, noticing that the underlying basics have changed AGAIN in Haskell...
Really, I consider the following C thing to be so much clearer and better engineering:
int getMin(int *a, int n)
{
    int r = a[0];
    for (int i = 0; i < n; i++)
        if (r > a[i])
            r = a[i];
    return r;
}
Apart from the fact that in practice there is usually enough context that you will never need such a function, because you get the minimum in constant time with some other clever trick...
>I consider it very inelegant to sort a list only for extracting the largest element.
Which I explicitly stated would not be happening in my example. In reality, you couldn't just use "sort"; you'd need to make sure it was a sort with the right behavior. But the point wasn't to show how to get the largest element in a list, but rather to show how non-strict (i.e. "lazy") evaluation can make the simplest code actually correct. Obviously this doesn't apply in every possible case, but if you compare it to e.g. C, where that is pretty much never the case, it's a win in elegance IMO.
>just because the type system doesn't know that the list is non-empty (which I assume here).
In modern Haskell there is a type (NonEmpty) where the list is statically known to be non-empty.
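(For reference, a sketch of what that looks like with Data.List.NonEmpty; my example, mirroring the earlier findMax. Once the type guarantees non-emptiness, the Bounded constraint disappears, because sconcat only needs a Semigroup rather than a Monoid.)

import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE
import Data.Semigroup (Max (..), sconcat)

-- No Bounded needed: a non-empty list never needs an identity element.
findMax :: Ord a => NonEmpty a -> a
findMax = getMax . sconcat . NE.map Max

main :: IO ()
main = print (findMax (1 :| [2, 3, 0 :: Int]))   -- 3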
> noticing that the underlying basics have changed AGAIN in Haskell...
Not sure what you mean here, are you complaining about standard library changes? I personally hope Haskell keeps periodically breaking backwards compatibility until they fix more of the ugly parts of the standard. I'd hate to see it go the way of Java and be stuck with a hideous e.g. file access model because that's what the very first release had.
>Really, I consider the following C thing to be so much clearer and better engineering:
I find such a function not remotely clear and definitely not better engineered.
* It's using a "for loop" so it could be a filter, fold, map or any combination of those. I can't know without reading it.
* The function this proposes to replace was able to find the max of any type which can be ordered. This proposed replacement can only do ints. Given the radically reduced scope, the typing is sufficient, but you won't have to add many features before the C compiler simply cannot help you anymore. Haskell's type system is much more powerful, stopping just short of the more advanced dependent type applications.
> Which I explicitly stated would not be happening in my example. In reality, you couldn't just use "sort", you'd need to make sure it was a sort with the right behavior.
I'm pretty sure most experienced software architects would not approve this as a robust approach. Notice that this is a property that actually matters, and it's not captured in the type system (most things aren't).
>>just because the type system doesn't know that the list is non-empty (which I assume here).
>In modern haskell there is a type where the list is statically known to be non-empty.
Sure, go ahead and add another type to make everything more incompatible... no, this approach doesn't work out in practice. There are many more invariants than you can hope to capture with new types in any practical project.
> I personally hope Haskell keeps periodically breaking backwards compatibility until they fix more of the ugly parts of the standard. I'd hate to see it go the way of Java and be stuck with a hideous e.g. file access model because that's what the very first release had.
I absolutely agree with you that everybody should strive to improve their own stuff. But with public, standard libraries, it's a different thing. You simply can't afford to have a hard dependency on stuff that breaks randomly. From an economic perspective, it's much, much simpler to just write your own getMin routine without any dependencies at all. It's not rocket science. And the practical gains by some vague idea of "type safety" for this little procedure are nearly zero.
> I find such a function not remotely clear and definitely not better engineered.
> * It's using a "for loop" so it could be a filter, fold, map or any combination of those. I can't know without reading it.
This is just not true. Almost any beginner to programming can understand this immediately; it takes very little effort for almost anyone. Contrast that with the Haskell version (the more realistic one, the one I listed): the average time to ramp a beginner up to understanding the Haskell version, or for anyone already competent to puzzle that ad-hoc thing out, is much higher.
Do not think that just because there is a for loop this is somehow wrong. There is a simple statement below the for loop and everyone can immediately spot what happens there (it's a "fold", if you like to think in these terms. That was not hard). Any suggestion to the contrary, that's just zealotry.
> * The function this proposes to replace was able to find the max of any type which can be ordered. This proposed replacement can only do ints. Given the radically reduced scope, the typing is sufficient, but you won't have to add many features before the C compiler simply cannot help you anymore. Haskell's type system is much more powerful, stopping just short of the more advanced dependent type applications.
Sure, thanks, I know that. See, I don't care. I don't need that in my own practice. If I want a routine which returns the max integer, I write this. If I really needed to write a generic routine (which I don't), I would use C++ or Python or whatever other standard procedural approach. Nothing wrong with that. Any of these is more understandable, more deterministic in runtime behaviour, more modular (fewer dependencies), easier to write, easier to maintain, etc.
>I'm pretty sure most experienced software architects would not approve this as a robust approach.
What approach? Not caring which sort is used? I explicitly stated that in actual practice I wouldn't do that. I simply took a quick example to make a point, and you're turning it into an interrogation. Look, you don't like Haskell. Fine, don't use it. I'm not cashing checks from your choice of programming language; use whatever you like.
No, I actually mean the approach where you explicitly pick a "sort" implementation that would yield a linear-time smallest-element when evaluated lazily. I don't know if a popular sort with this property exists, but anyway, relying on that is too clever. It's not obvious and also fragile (maybe at some point the sort changes its constant factors or even its asymptotic behaviour?).
If you want the smallest element, get the smallest element by scanning the list. No big deal.
Well, I'm sorry for turning this into almost an emotional issue. I guess I am the one who is cashing checks here. It does annoy me: I have seen way too much advocacy that makes it seem like this vague idea of "safety by type-safety" is the only issue that comes up in software development. It is not. It doesn't help all that much for correctness, and other aspects like modularity, efficiency, portability, compiling speed, development speed etc. are at least equally important for the vast majority of projects.
(I could say I've "wasted" a lot of time for investing in Haskell before learning that these other qualities matter as well, but I would not go this far. It's been a good investment, even if it's not a practical language)
It is no coincidence that most Haskell projects never take off. Just look around at the software in existence. What you can find is mostly little tinker projects. When I look at some higher-profile libraries for example for web programming, they focus way too much on some perceived gains from very advanced type system tricks, but they are basically too hard to use in a practical setting. The big links page on Haskell.org has mostly toy projects and broken links. When I look at the open projects page at toptal.com I can see exactly one Haskell project, and it is about refactoring a completely unmaintainable Haskell codebase!
There are areas, like compilers, where Haskell seems to do quite well, but for the most part it is impractical. Even most Haskellers don't disagree that it's mostly good for expanding your mind, exploring mathematical ideas etc.
> I actually mean the approach where you explicitly pick a "sort" implementation that would yield a linear-time smallest-element when evaluated lazily
Agreed. It's quite painful to see Haskell advocates sharing this example. It's far too clever and lacks robustness. It's a cute triviality, not an example of what the language is actually good for.
> I don't know if a popular sort with this property exists
Well yes, Haskell's sort function has this property. It's a merge sort.
> If you want the smallest element, get the smallest element by scanning the list. No big deal.
Yes, or just use the built in min function.
> I have seen way too many advocating that makes it seem like this vague idea of "safety by type-safety" is the only issue that comes up in software development.
It's a shame you've seen too much advocacy of that, because that's not all that Haskell's good for. It's also good for
> modularity, efficiency, portability, ..., development speed
>It's not obvious and also fragile (maybe the sort changes at some point its constant or even asymptotic behaviour?)
Tome pointed out that "sort" in Haskell is a merge sort, but I have to say I strongly disagree with this point. I was a hardcore C++ proponent for a long time, and the reason for that was the STL. What made the STL great, for me, was that the performance was effectively part of the spec of the container types.
So it's true, if you wanted to implement "max" as I did in my example you couldn't use a sort that was simply "sort as efficiently as possible" or some such. You would need a function known to be "mergeSort" or similar. I don't view this as fragile, it's higher level programming: building upon blocks with known correct behavior. It is not trivial either, but I'm not sure engineering can be.
>Well, I'm sorry for turning this into almost an emotional issue.
If you're arguing in good faith, then all is well. There are just a handful of trolls out there that have to go out and trash Haskell wherever they see it mentioned, so if the conversation develops that kind of feel I tend to back out.
>I have seen way too many advocating that makes it seem like this vague idea of "safety by type-safety" is the only issue that comes up in software development.
For me, the reason I switched to Haskell was simply time: I need as much help from the system as I can get. The more the system can find bugs for me the better. But, of course, this only works if you go "all in". You need to make an effort to push as much information as you can into the type system where the compiler can help you (I find this part similar to how I programmed C++). You need to make an effort to make invalid programs impossible to compile. If one doesn't do these things then they probably won't get enough out of the language to make it worth it.
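(A small, generic sketch of what "pushing information into the type system" can mean; hypothetical names, not from any particular codebase: a value that can only be built through a validating smart constructor gives every downstream function its invariant for free.)

-- Hypothetical example: imagine the raw Email constructor is not exported,
-- so the only way to obtain an Email is through the validating constructor.
newtype Email = Email String deriving Show

mkEmail :: String -> Maybe Email
mkEmail s
  | '@' `elem` s = Just (Email s)
  | otherwise    = Nothing

-- Any function taking an Email can rely on the invariant; code that forgets
-- to validate simply doesn't type-check.
sendWelcome :: Email -> IO ()
sendWelcome (Email addr) = putStrLn ("sending welcome mail to " ++ addr)

main :: IO ()
main = mapM_ sendWelcome (mkEmail "user@example.com")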
>It doesn't help all that much for correctness, and other aspects like modularity, efficiency, portability, compiling speed, development speed etc. are at least equally important for the vast majority of projects.
For correctness, it depends very much on how successful one is in encoding their program into the type system.
I find modularity to be good, but it tends to be a community problem more than a language problem: for whatever bizarre reason, Haskell programmers tend to not like imports and/or dependencies. Amusingly, one of the most popular packages in Haskell (lens) mostly ignores these conventions.
Efficiency can be done, but it's effort. I find Haskell quite good here: you can do the simplest thing and it will often be good enough. If benchmarking tells you that it's too slow then you have a lot of opportunity to make things much more efficient without losing all of the elegance.
Portability, I find also good. Not the most portable language that exists but workable.
Compiling speed... I don't see Haskell ever competing with languages like e.g. Go on compile speed. Ocaml is pretty fast though. :)
Development speed, IMO, depends on having a good IDE. Some members are working on this but Haskell doesn't have the community of more popular languages to get this one completed and polished. I would expect Haskell development with an IDE on the level of Visual Studio+Resharper to be the fastest of typed languages because e.g. refactoring would have so much better information to work with (assuming proper use of the type system).
>It is no coincidence that most Haskell projects never take off.
Well, I doubt that is a coincidence but I suspect there are a lot of reasons. Haskell doesn't have the popularity of e.g. Java so it won't have the amount of libraries, the robustness, etc. For me, Java is a classic example of what community focus can do: the language is trivial. It's not powerful at all, all it has are classes, functions and interfaces basically, yet so many people have been working with it for so long it has libraries and software for most anything one can think to do with it. There are tools to deal with its horrible verbosity, etc.
For me, I have no expectation of Haskell to compete directly with Java. My only expectation is that Haskell would get to a state at least as good as Java with a much smaller community (though, obviously, larger than it currently has).
>they focus way too much on some perceived gains from very advanced type system tricks, but they are basically too hard to use in a practical setting.
I'm not sure on this. The type system for e.g. Servant is so good you can implement trivial servers basically with the types alone. Of course, the critical question is: is this the place most people are having difficulty in web development? If it's not, then this effort, cool as it is, will obviously have a limited effect on anything.
>When I look at the open projects page at toptal.com I can see exactly one Haskell project,...
Is this not likely to be a consequence of community size? Community size probably has some relation to the language, but, for me, it's hard to nail down. Haskell works differently from other languages, so that, on its own, will exclude a lot of people no matter how good it is. Java took C++'s well-known syntax and got rid of anything remotely difficult to use, so that's an easy path to popularity.
>Even most Haskellers don't disagree that it's mostly good for expanding your mind, exploring mathematical ideas etc.
This could be, I don't know what most Haskellers want with it. I know there are companies that use it in production and I use it as a productive language.
Indeed. I would argue that a better line for Haskell would be "What if everything was a value?" Basically the IO monad is a way to construct imperative programs by composing side-effect free values.
And indeed, much of the subsequent work in Haskell has been materialising even more details to avoid the hidden side-effects that cause the IO monad abstraction to break down.
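(A tiny sketch of the "composing side-effect-free values" reading, my own example: IO actions are ordinary values you can put in a list and rearrange purely; nothing runs until the final composition is executed.)

-- A list of IO actions, built and rearranged as plain values.
greetings :: [IO ()]
greetings = map (\name -> putStrLn ("hello, " ++ name)) ["world", "HN"]

-- Pure manipulation of the values, then one composed program.
program :: IO ()
program = sequence_ (reverse greetings)

main :: IO ()
main = program   -- prints "hello, HN" then "hello, world"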
I agree with you, and disagree with my quote! There are various of these types of lists on the web and I just picked the first I could find, arguably not the best.
Being pretty familiar with Haskell and programming in Scala nowadays, I think Haskell's dogmatic nature makes it much easier to program in! Consistency minimizes conceptual overhead when programming by a lot.
> Haskell is an incredibly simple language with only a few core constructions.
The number of constructs has absolutely nothing to do with what people mean by "simplicity of language", although language designers are often confused about this point because they obviously care about the number of constructs. For example, a Turing machine, or even some cellular automata like the game of life, are also extraordinarily simple (much more than Haskell), and yet no one would think to call programming in them simple, and the programming patterns that would emerge if we were forced to, while no doubt very clever, would likely very much deserve the term "over engineered". What matters, then, is not how a language is built, but what programming patterns emerge over time in programs that are written in that language.
You have Haskell and OCaml backwards. Certainly Haskell's semantics might seem simpler, but that doesn't make it simpler to program in, which seems to be what you're saying.
It's also important to distinguish modern Haskell from what's specified in the Haskell standards. Modern Haskell is basically 100% dependent on GHC's language extensions [1], and as soon as you factor these in the language is vastly more complex than OCaml.
This also shows up with equality and comparison, numeric operators, etc. (which, beyond being syntactically inconvenient, also makes data structures trickier to use, usually via a somewhat convoluted functorized data structure module).
There are also some things about the language that break composability. For example, the presence of mutable references means you can't compose polymorphic functions (sounds weird, I know), which is inconvenient.
It basically boils down to less ability to front-load work by creating good abstractions.
Since I am calling out the responses to you, I feel like I should do the same here.
> All the other things that people use exist as (individually) extremely simple extensions that you can learn in isolation in a few minutes.
If by "learn" you mean you can type in something that is the correct syntax, sure... but in terms of actually learning how to use them to model problems? Come on...
> Haskell is possibly the least over-engineered language out there
Javascript is pretty under-engineered. But more seriously there are several real contenders here in Smalltalk, older Scheme standards, and Go (though again that may be considered under-engineered).
I take issue with your claim that OCaml is more complicated than Haskell. It has a much simpler core language (roughly Hindley-Milner + ADTs + GADTs + modules) whereas Haskell has Hindley-Milner, ADTs, GADTs, Backpack, type families, type classes (plus several mutually inconsistent extensions related to them), newtypes, "roles", and more.
This is a weird comment since you've purposefully included many Haskell extensions that are in no way part of the "core" language.
Hindley-Milner, ADTs, type classes, and laziness are core, i.e. part of Haskell 2010. GADTs aren't even part of core Haskell. So I guess it is simpler than OCaml after all!
I intended the phrase "core" to refer to aspects of the language which are commonly used, leaving aside less frequently used or more obscure GHC extensions like existential quantification, functional dependencies, implicit params, etc, etc.
Which language features that I mentioned do you not think are commonly used in Haskell?
Backpack, for example, was _literally_ only released in the latest version of GHC. Do you just glance over the subreddit occasionally or something? That would explain your list.
You list things that aren't part of the Haskell standard and just claim that they are "core" while not even listing all of the _actually core_ features of OCaml -- all of its OO parts.
You're leaving out a ton of complexity present in ocaml, including first-class mutable references and all their consequences (including broken polymorphism under eta-reduction), order-dependent evaluation, side effectful imports, etc.. Extensions are also common in production ocaml code.
I agree the typeclass stuff in Haskell can get complicated, but the more abstruse typeclass features are used about as much as the similarly abstruse ocaml object system. It's usually quite straightforward, just like ocaml modules are usually (but not always) quite straightforward.
A compiler takes one language (usually higher level) and transforms it into another (usually lower level). GHC, like C, Go, or Rust compilers, compiles to assembly, a very low-level and complicated language.
One would expect that a system which transforms a simple high-level language into a complicated low-level language would grow in proportion to how simple and high-level the source language is. Thus, a very simple, high-level language ought to have a larger compiler than a more complicated, lower-level one.
Indeed, we see this in real life. Writing a haskell compiler is a fairly big undertaking whereas a C compiler could be written in a few days. This is because C is complicated, but low-level, and Haskell is simple, but high-level.
Let's not confuse the simplicity of translation with the simplicity of semantics.
I believe a toy Haskell compiler could be made with comparable effort to toy C compilers. Admittedly, I cannot find people making toy Haskell compilers.
What makes Haskell higher-level than C? Garbage collection does not really complicate the compiler if you just use BoehmGC.
You can use a simple subset of assembly instead of the full complicated set. Toy C compilers do this, e.g. by treating x86 as a stack machine.
Hopefully other people more knowledgeable will answer, but there are a couple things that are pretty obvious off the top of my head:
- lazy evaluation
- type inference
- ridiculous amounts of programmer tools (it *derives* type classes for you!!!)
- lots of experimental ideas that are in the compiler but that people don't really use (it's a research tool after all).
The language is simple from a conceptual standpoint. Not everything in the compiler is there simply to support expression of an idea; quite a lot of it is there to help the programmer.
I think there is something to be said for the idea of writing a similar language which has a less monolithic set of tools (although, to be fair, I have not looked at the architecture of GHC at all, so maybe it is really beautiful under the hood). The point is that I don't think you can necessarily equate simplicity of expression with the complexity of the tools.
Simple languages often require complex compilers if they are to run fast. You need to do more analysis of the program structure to squeeze out performance.
This post has nothing to do with the Haskell or OCaml languages as such. These languages simply happen to be the most obvious ones in which to do language research. Oleg is describing an embedded language. One could embed the language in C++, Python, Java, or any other language. It's just easier and more concise to express nuanced concepts in OCaml, Haskell, and other functional languages.