I've wasted so much energy feeling stupid when reading texts that I don't understand. But most of the time not understanding is not about "intelligence" but simply about not having the right meanings assigned to the right words.
You're not stupid, you just don't have enough structured data yet. I wish I had that kind of insight and believed in it when I started out in university. Not that I'm completely convinced even now.
It's funny that when I grew up I was always praised for being clever, so I never once doubted it. However, when I chose to learn Japanese and moved to Japan, suddenly many people assumed I was stupid -- because I could never quite understand what was going on. It's been an interesting experience. Which is it? Am I clever or stupid?
Eventually I've come to this point: clever or stupid makes no difference. The only important things to ask are "Do I understand?" and "What do I need to do to understand?". It's also made me realise that relying on my supposed cleverness has actually been a source of problems in my life that I hadn't thought about before (because I'm clever and clever is good, right?)
It took several years of forced humbleness to erode the arrogance, stop thinking I was so clever, and simply apply myself to the task of learning the necessary Japanese to explain myself correctly.
What I generally see with people new to learning Haskell (or similar) is that they approach it with their bag full of imperative language cleverness, and find the concepts and terminology hard to grasp because they refuse to become humble for a bit and examine it as a paradigm different from the one they know.
To be fair I was no different until I finally started understanding it, but looking back, learning Haskell was similar to my Japan experience. It became much easier once I buckled down, learned the parts that didn't make sense, and most importantly began applying them.
Talent is sort of a dark gift ... talent is its own expectation: it is there from the start and either lived up to or lost....
Here is how to avoid thinking about any of this by practicing and playing until everything runs on autopilot and talent's unconscious exercise becomes a way to escape yourself, a long waking dream of pure play.
The irony is that this makes you very good, and you start to become regarded as having a prodigious talent to live up to...
"What makes you tick?" "What makes you scared?"
If math makes you tick and doesn't scare you, you will eventually learn a lot of it. It's not the whole story, but I believe it accounts for a lot of it.
For instance, I failed linear algebra in college, even though I loved CGI and 4x4 matrices are the bread and butter of 3D transforms, so I was eager to master it. I tried for 10 years reading a math textbook and made no progress. I did lots of programming, which helped me get a feel for what notation could be used for, and I also bought a "stupid" linear algebra book on Amazon. Its wording and approach were different, and all of a sudden everything became trivial. When I go back to the older book I understand what they mean now, but it hid so much meaning. I needed to acquire a culture of mathematical notation. Some kids might develop it naturally, or be able to grow it with the help of teachers; I wasn't able to at the time, so I was left in the dark.
Now most advanced abstract subjects feel different. It's gone from mental cramp to tiny things that MUST HAVE meaning, where I just have to change my perspective and I'll see it too. And weirdly it applies to CS fields as well. After that I could finally read fat compiler books full of succinct notation. Combinatorics became reachable, same for fancy physics. The symbols carry almost no fear now. In a way I'm still a bit sad that I didn't know this before; it feels like a huge waste, not just for me but for other kids too.
Lastly, going back to programming, Haskell and the like. After struggling to grok continuations and non-determinism, I got a very, very different picture of computing altogether. And I realize how crazy bad imperative-first programming courses are. I haven't forgotten how impatient students can be and how effective a simple statement sequence with visual output can be... but gosh, how far it sets you back in trying to approach different and vastly more useful ideas (say, Prolog).
One particularly cool (and surprising, to me at least) trick is defining interpreters that carry out composable optimization passes on a DSL without needing an intermediate representation: http://okmij.org/ftp/tagless-final/course/optimizations.html
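A minimal sketch of the trick (my own toy example, loosely in the spirit of that page, not Oleg's actual code): in a tagless-final DSL every interpreter is just a type-class instance, and an "optimization pass" is an interpreter whose output is another tagless-final term, so passes stack on top of any other interpreter with no intermediate AST:

```haskell
-- Toy tagless-final DSL: a term is anything polymorphic in `repr`.
class Expr repr where
  lit :: Int -> repr
  neg :: repr -> repr
  add :: repr -> repr -> repr

-- Interpreter 1: evaluate to an Int.
instance Expr Int where
  lit = id
  neg = negate
  add = (+)

-- Interpreter 2: pretty-print.
instance Expr String where
  lit = show
  neg e = "(-" ++ e ++ ")"
  add a b = "(" ++ a ++ " + " ++ b ++ ")"

-- An "optimization pass" as an interpreter: push negations inward and
-- cancel double negation. Its output is another tagless-final term, so
-- it composes with any other interpreter -- no intermediate representation.
newtype PushNeg repr = PushNeg { unPush :: Bool -> repr }
-- the Bool records whether we are under an odd number of negations

instance Expr repr => Expr (PushNeg repr) where
  lit n   = PushNeg $ \under -> if under then neg (lit n) else lit n
  neg e   = PushNeg $ \under -> unPush e (not under)
  add a b = PushNeg $ \under -> add (unPush a under) (unPush b under)

term :: Expr repr => repr
term = neg (neg (add (lit 1) (lit 2)))

main :: IO ()
main = do
  putStrLn (term :: String)              -- prints (-(-(1 + 2)))
  putStrLn (unPush term False :: String) -- prints (1 + 2)
  print (unPush term False :: Int)       -- prints 3
```

The optimized term can then be fed to the evaluator, the pretty-printer, or yet another pass, which is what makes the passes composable.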
I mean, the imperative programmers already knew they could write programs full of effects without monads. But there's a reason we regard that as bad style. As far as I can tell this piece is just a longwinded reinvention of imperative programming?
Also, as someone in the Haskell industry, I think you're wrong about trends. The last 5 or so years have been an accelerating trend of Haskell adoption. Of course it is still dwarfed by most other languages, but the number of multi-million dollar contracts I've seen executed in Haskell has been multiplying.
>Also, as someone in the Haskell industry, I think you're wrong about trends. The last 5 or so years have been an accelerating trend of Haskell adoption. Of course it is still dwarfed by most other languages, but the number of multi-million dollar contracts I've seen executed in Haskell has been multiplying.
That's my point: that's the hype cycle, and it's now peaking as people realise what a lot of effort it is to actually use monads in reality with the awful complexity of monad transformer stacks and such.
Simple example, if you have a list containing functions ([Reader r a]) it forms an Applicative fine as they compose. However you can't satisfy the conditions needed to form a Monad.
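A quick sketch of that point using `Data.Functor.Compose` from base: composing the list functor with Reader (`(->) r`) gives a perfectly good Applicative, but base provides no `Monad (Compose f g)` instance, because joining the composition would require a distributive law that doesn't exist in general:

```haskell
import Data.Functor.Compose (Compose (..))

-- [r -> a], viewed as the composition of [] and ((->) r).
type ReaderList r = Compose [] ((->) r)

-- The Applicative works fine: list choices pair up, readers compose.
pairs :: ReaderList Int (Int, Int)
pairs = (,) <$> Compose [(+ 1), (* 2)] <*> Compose [subtract 3]

runAll :: ReaderList r a -> r -> [a]
runAll (Compose fs) r = map ($ r) fs

-- runAll pairs 10 == [(11, 7), (20, 7)]
-- A Monad instance would need a law of shape (r -> [a]) -> [r -> a],
-- but the length of the inner list can depend on r, so no lawful
-- candidate exists.
```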
When there's several possible distributive laws or none, that's important. It means that either the interactions between the two effects are subtle and need to be further specified or they are completely incompatible.
Algebraic effects can only handle effects that trivially commute. So monad composition is a finer, thus more expressive operation.
Expressing effects using monad transformer stacks might be more expressive and allow finer-grained distinctions, or whatever (please elaborate what you mean, I'm interested), but it's syntactically ugly (lift), it's confusing, and it's not clear that the expressiveness you get is practically useful.
In contrast, algebraic effects might 'only handle effects that trivially commute' (again, going to need more detail on this), but they're easy to understand and easy to use. They're much more likely to be a practically usable effect system that real programmers can really use in the real world in a way that doesn't convolute code with details of how you compose effects unduly.
Which leads to code that needs to be puzzled out, rather than code that flows naturally. 'What does [arbitrary-monad-operationM_] mean for SomeMonadTransformerStack again???' is all too common in my experience.
I really think that's false abstraction, like an 'object' superclass in OO languages.
And once you learn them, it's a lot easier to see what code using them does at a glance, rather than reading the corresponding for loop to work out what it does, and whether it has any side effects.
It's trivial to read a loop, because it's right there. It's trivial to tell if it has any effects, because they're notated right there as algebraic effects.
It means you're mapping over something and collecting the results. Yes, you need a little more information to understand exactly what you're mapping over or what "collect" means in this context, but this is still more information than `for` gives you without reading more.
Although I find in practice it's usually fairly obvious what `forM` means in context, because you're likely writing a lot of code working with the same monads.
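To make "collect means different things in different monads" concrete (a toy example of my own):

```haskell
-- mapM/traverse means "map and collect", where "collect" is the
-- monad's notion of sequencing. In Maybe it means "all or nothing":
checkAll :: [Int] -> Maybe [Int]
checkAll = mapM (\x -> if x > 0 then Just x else Nothing)

-- checkAll [1, 2, 3]  == Just [1, 2, 3]
-- checkAll [1, -2, 3] == Nothing

-- In the list monad the very same combinator enumerates combinations:
bits :: Int -> [[Int]]
bits n = mapM (const [0, 1]) [1 .. n]

-- bits 2 == [[0,0],[0,1],[1,0],[1,1]]
```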
>they're notated right there as algebraic effects.
I'm not sure what you mean by this. Unless you're using some kind of advanced imperative language with effect types, they're not "right there", you have to infer them from the code.
Whereas with Haskell you usually have a type signature which tells you what the effects are.
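For instance (a hypothetical snippet, assuming the `transformers` package that ships with GHC; `Config` and `logLine` are made-up names): the signature alone says this function can read a `Config` and perform IO, and nothing else:

```haskell
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Reader (ReaderT, asks, runReaderT)

newtype Config = Config { verbose :: Bool }

-- The effects are visible in the type: read-only access to a Config,
-- plus IO. A caller knows this cannot, say, mutate shared state.
logLine :: String -> ReaderT Config IO ()
logLine msg = do
  v <- asks verbose
  liftIO (if v then putStrLn msg else pure ())
```

Running it is just `runReaderT (logLine "hello") (Config True)`.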
Haskell and other pure languages let you compute values independently of the monad they're computed in, which is very useful. Of course, this is all conceptual; eventually everything is translated to the underlying CPU monad and its implicit join.
What on earth are you modelling a CPU as? What thing a CPU does corresponds to join?
"It's all just the CPU monad" is just.. inane. It's like saying 'C functions are pure functions that implicitly take and return the world'. Maybe? But at that point 'pure' has lost all of its information content.
Haskell is possibly the least over-engineered language out there; it's almost as if laziness (no pun intended) was a guiding principle when building the language.
OCaml is more complicated, but I'd still consider it vastly simpler than C++ or Java (as long as you ignore OCaml's object layer).
I'm not sure it's the least over-engineered language.
What you say is like saying that C is simple because there is only memory and functions!
Or, to take from for example https://argumate.tumblr.com/post/118013166244/python-what-if...
Python: What if everything was a dict?
Java: What if everything was an object?
C: What if everything was a pointer?
APL: What if everything was an array?
Tcl: What if everything was a string?
Prolog: What if everything was a term?
LISP: What if everything was a pair?
Scheme: What if everything was a function?
Haskell: What if everything was a monad?
Assembly: What if everything was a register?
I like what Jonathan Blow calls this pattern: "Big Agenda language". It might make a programmer's life harder if the language is so opinionated. (Start here: https://www.youtube.com/watch?v=TH9VCN6UkyQ)
He specifically named Haskell, Java, and a few others, as being guilty of that, and being less usable for sticking to ideological purity.
Having said that, Haskell is very much not built around the question "what if everything was a monad". Monadic IO wasn't even a thing until Haskell 1.3
 Although this is, in large part, forced by its unfortunate choice of being lazy. It has also, arguably, forced advances in PL research that would have been avoided had Haskell added first class support for imperative programming.
Being lazy is what makes the language so elegant. Even a high level game developer (Carmack?) said, at one point, that his ideal language would be lazy by default.
However now that we've invented them, while some things benefit from laziness, eager evaluation is probably a better default.
An example of the elegance is a simple function that takes some type which can be ordered and picking the "largest" one:
max :: Ord a => [a] -> a
max = head . sortBy (flip compare)
If you asked someone to describe how to get the largest entry in a list, the most concise way to describe it would be to sort the list and then take the first entry. In Haskell the most concise way to describe something is often the correct implementation as well, and that's what I call elegance.
Yes, for some sorting implementation taking the first element of the sorted list may actually amount to a linear time operation given that the sorting is carried out lazily, but that's not very obvious and may fail on you fast. Bad idea from an engineering standpoint.
There is a better solution in Haskell, which is
getMax $ mconcat (map Max [1,2,3,0::Int])
findMax :: (Ord a, Bounded a) => [a] -> a
findMax = getMax . mconcat . map Max
Now, if you actually try getting this version to work, you will notice there is a lot of conceptual overhead involved for such a small thing. And it bugs me that you have to use the "Bounded" machinery just because the type system doesn't know that the list is non-empty (which I assume here).
So I ended up with it after fiddling with foldr1, mappend and so on, noticing that the underlying basics have changed AGAIN in Haskell...
Really, I consider the following C thing to be so much clearer and better engineering:
int getMin(int *a, int n)
{
    int r = a[0];
    for (int i = 1; i < n; i++)
        if (r > a[i])
            r = a[i];
    return r;
}
Which I explicitly stated would not be happening in my example. In reality, you couldn't just use "sort", you'd need to make sure it was a sort with the right behavior. But the point wasn't to show how to get the largest element in a list but rather to show how non-strict (i.e. "lazy") evaluation can make the simplest code actually correct. Obviously this doesn't apply in every possible case, but if you compare it to e.g. C which is pretty much never the case, it's a win in elegance IMO.
>just because the type system doesn't know that the list is non-empty (which I assume here).
In modern Haskell there is a type for lists that are statically known to be non-empty.
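Indeed, here is a sketch using `Data.List.NonEmpty` and `Data.Semigroup` from base: with a non-empty list, `sconcat` only needs a Semigroup, so the Bounded constraint disappears:

```haskell
import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE
import Data.Semigroup (Max (..), sconcat)

-- sconcat :: Semigroup a => NonEmpty a -> a, so no Bounded needed:
-- non-emptiness is in the type, and Max only needs Ord as a Semigroup.
findMax :: Ord a => NonEmpty a -> a
findMax = getMax . sconcat . NE.map Max

main :: IO ()
main = print (findMax (1 :| [2, 3, 0 :: Int]))  -- prints 3
```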
> noticing that the underlying basics have changed AGAIN in Haskell...
Not sure what you mean here, are you complaining about standard library changes? I personally hope Haskell keeps periodically breaking backwards compatibility until they fix more of the ugly parts of the standard. I'd hate to see it go the way of Java and be stuck with a hideous e.g. file access model because that's what the very first release had.
>Really, I consider the following C thing to be so much clearer and better engineering:
I find such a function not remotely clear and definitely not better engineered.
* It's using a "for loop" so it could be a filter, fold, map or any combination of those. I can't know without reading it.
* The function this proposes to replace was able to find the max of any type which can be ordered. This proposed replacement can only do ints. Given the radically reduced scope, the typing is sufficient, but you won't have to add many features before the C compiler simply cannot help you anymore. Haskell's type system is much more powerful, stopping just short of the more advanced dependent type applications.
I'm pretty sure most experienced software architects would not approve this as a robust approach. Notice that this is a property that actually matters, and it's not captured in the type system (most things aren't).
>>just because the type system doesn't know that the list is non-empty (which I assume here).
>In modern Haskell there is a type for lists that are statically known to be non-empty.
Sure, go ahead and add another type to make everything more incompatible... no, this approach doesn't work out in practice. There are many more invariants than you can hope to capture with new types in any practical project.
> I personally hope Haskell keeps periodically breaking backwards compatibility until they fix more of the ugly parts of the standard. I'd hate to see it go the way of Java and be stuck with a hideous e.g. file access model because that's what the very first release had.
I absolutely agree with you that everybody should strive to improve their own stuff. But with public, standard libraries, it's a different thing. You simply can't afford to have a hard dependency on stuff that breaks randomly. From an economic perspective, it's much, much simpler to just write your own getMin routine without any dependencies at all. It's not rocket science. And the practical gains by some vague idea of "type safety" for this little procedure are nearly zero.
> I find such a function not remotely clear and definitely not better engineered.
> * It's using a "for loop" so it could be a filter, fold, map or any combination of those. I can't know without reading it.
This is just not true. Almost any beginner to programming can understand this immediately; it takes hardly any effort for almost anyone. Contrast that with the Haskell version (the more realistic one, the one I listed): the average time to ramp a beginner up to understanding the Haskell version is much higher than the average time anyone already competent needs to figure out this ad-hoc thing.
Do not think that just because there is a for loop this is somehow wrong. There is a simple statement below the for loop and everyone can immediately spot what happens there (it's a "fold", if you like to think in these terms. That was not hard). Any suggestion to the contrary, that's just zealotry.
> * The function this proposes to replace was able to find the max of any type which can be ordered. This proposed replacement can only do ints. Given the radically reduced scope, the typing is sufficient but you won't have to add many features before the C compiler simply cannot help you anymore. Haskell's type system is much more powerful, stopping just short of the more advanced dependent type applications.
Sure, thanks, I know that. See, I don't care. I don't need that in my own practice. If I want a routine which returns the max integer, I write this one. If I really needed a generic routine (which I don't), I would use C++ or Python or whatever other standard procedural approach. Nothing wrong with that. Any of these is more understandable, more deterministic in runtime behaviour, more modular (fewer dependencies), easier to write, easier to maintain, etc.
What approach? Not caring which sort is used? I explicitly stated that in actual practice I wouldn't do that. I simply took a quick example to make a point and you're turning it into an interrogation. Look, you don't like Haskell. Fine, don't use it. I'm not cashing checks from your choice of programming language; use whatever you like.
If you want the smallest element, get the smallest element by scanning the list. No big deal.
Well, I'm sorry for turning this into almost an emotional issue. I guess I am the one who is cashing checks here. It does annoy me: I have seen way too much advocacy making it seem like this vague idea of "safety by type-safety" is the only issue that comes up in software development. It is not. It doesn't help all that much for correctness, and other aspects like modularity, efficiency, portability, compiling speed, development speed etc. are at least equally important for the vast majority of projects.
(I could say I've "wasted" a lot of time for investing in Haskell before learning that these other qualities matter as well, but I would not go this far. It's been a good investment, even if it's not a practical language)
It is no coincidence that most Haskell projects never take off. Just look around at the software in existence. What you can find is mostly little tinker projects. When I look at some higher-profile libraries for example for web programming, they focus way too much on some perceived gains from very advanced type system tricks, but they are basically too hard to use in a practical setting. The big links page on Haskell.org has mostly toy projects and broken links. When I look at the open projects page at toptal.com I can see exactly one Haskell project, and it is about refactoring a completely unmaintainable Haskell codebase!
There are areas, like compilers, where Haskell seems to do quite well, but for the most part it is impractical. Even most Haskellers don't disagree that it's mostly good for expanding your mind, exploring mathematical ideas etc.
Agreed. It's quite painful to see Haskell advocates sharing this example. It's far too clever and lacks robustness. It's a cute triviality, not an example of what the language is actually good for.
> I don't know if a popular sort with this property exists
Well yes, Haskell's sort function has this property. It's a merge sort.
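Concretely (relying on `Data.List.sort` being a lazy merge sort, as documented in base):

```haskell
import Data.List (sort)

-- Because sort is lazy, head . sort only forces enough merge steps to
-- produce the first element, so "smallest" costs roughly O(n)
-- comparisons in practice rather than the full O(n log n) sort.
smallest :: Ord a => [a] -> a
smallest = head . sort

main :: IO ()
main = print (smallest [3, 1, 2 :: Int])  -- prints 1
```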
> If you want the smallest element, get the smallest element by scanning the list. No big deal.
Yes, or just use the built in min function.
> I have seen way too much advocacy making it seem like this vague idea of "safety by type-safety" is the only issue that comes up in software development.
It's a shame you've seen so many advocating that, because that's not all that Haskell's good for. It's also good for
> modularity, efficiency, portability, ..., development speed
(not "compiling speed"! :) )
Tome pointed out that "sort" in Haskell is a merge sort, but I have to say I strongly disagree with this point. I was a hardcore C++ proponent for a long time, and the reason for that was the STL. What made the STL great, for me, was that the performance was effectively part of the spec of the container types.
So it's true, if you wanted to implement "max" as I did in my example you couldn't use a sort that was simply "sort as efficiently as possible" or some such. You would need a function known to be "mergeSort" or similar. I don't view this as fragile, it's higher level programming: building upon blocks with known correct behavior. It is not trivial either, but I'm not sure engineering can be.
>Well, I'm sorry for turning this into almost an emotional issue.
If you're arguing in good faith, then all is well. There are just a handful of trolls out there who go out and trash Haskell wherever they see it mentioned, so if the conversation develops that kind of feel I tend to back out.
>I have seen way too much advocacy making it seem like this vague idea of "safety by type-safety" is the only issue that comes up in software development.
For me, the reason I switched to Haskell was simply time: I need as much help from the system as I can get. The more the system can find bugs for me the better. But, of course, this only works if you go "all in". You need to make an effort to push as much information as you can into the type system where the compiler can help you (I find this part similar to how I programmed C++). You need to make an effort to make invalid programs impossible to compile. If one doesn't do these things then they probably won't get enough out of the language to make it worth it.
>It doesn't help all that much for correctness, and other aspects like modularity, efficiency, portability, compiling speed, development speed etc. are at least equally important for the vast majority of projects.
For correctness, it depends very much on how successful one is in encoding their program into the type system.
I find modularity to be good, but it tends to be a community problem more than a language problem: for whatever bizarre reason, Haskell programmers tend not to like imports and/or dependencies. Amusingly, one of the most popular packages in Haskell (lens) mostly ignores these conventions.
Efficiency can be done, but it's effort. I find Haskell quite good here: you can do the simplest thing and it will often be fast enough and correct. If benchmarking tells you that it's too slow, then you have a lot of opportunity to make things a lot more efficient without losing all of the elegance.
Portability, I find also good. Not the most portable language that exists but workable.
Compiling speed... I don't see Haskell ever competing with languages like e.g. Go on compile speed. OCaml is pretty fast though. :)
Development speed, IMO, depends on having a good IDE. Some members are working on this but Haskell doesn't have the community of more popular languages to get this one completed and polished. I would expect Haskell development with an IDE on the level of Visual Studio+Resharper to be the fastest of typed languages because e.g. refactoring would have so much better information to work with (assuming proper use of the type system).
>It is no coincidence that most Haskell projects never take off.
Well, I doubt that is a coincidence, but I suspect there are a lot of reasons. Haskell doesn't have the popularity of e.g. Java, so it won't have the amount of libraries, the robustness, etc. For me, Java is a classic example of what community focus can do: the language is trivial. It's not powerful at all; basically all it has are classes, functions, and interfaces. Yet so many people have worked with it for so long that it has libraries and software for almost anything you can think to do with it. There are tools to deal with its horrible verbosity, etc.
For me, I have no expectation of Haskell competing directly with Java. My only expectation is that Haskell gets to a state at least as good as Java's with a much smaller community (though, obviously, larger than it currently has).
>they focus way too much on some perceived gains from very advanced type system tricks, but they are basically too hard to use in a practical setting.
I'm not sure about this. The type system for e.g. Servant is so good you can implement trivial servers basically with the types alone. Of course, the critical question is: is this the place most people are having difficulty in web development? If it's not, then this effort, cool as it is, will obviously have a limited effect on anything.
>When I look at the open projects page at toptal.com I can see exactly one Haskell project,...
Is this not likely to be the consequence of community size? Community size probably has some relation to the language but, for me, it's hard to nail down. Haskell works differently from other languages, so that on its own will exclude a lot of people no matter how good it is. Java took C++'s well-known syntax and got rid of anything remotely difficult to use, so popularity came easily.
>Even most Haskellers don't disagree that it's mostly good for expanding your mind, exploring mathematical ideas etc.
This could be, I don't know what most Haskellers want with it. I know there are companies that use it in production and I use it as a productive language.
And indeed, much of the subsequent work in Haskell has been materialising even more details to avoid the hidden side-effects that cause the IO monad abstraction to break down.
The number of constructs has absolutely nothing to do with what people mean by "simplicity of language", although language designers are often confused about this point because they obviously care about the number of constructs. For example, a Turing machine, or even some cellular automata like the game of life, are also extraordinarily simple (much more than Haskell), and yet no one would think to call programming in them simple, and the programming patterns that would emerge if we were forced to, while no doubt very clever, would likely very much deserve the term "over engineered". What matters, then, is not how a language is built, but what programming patterns emerge over time in programs that are written in that language.
You have Haskell and OCaml backwards. Certainly Haskell's semantics might seem simpler, but that doesn't make it simpler to program in, which seems to be what you're saying.
Having worked professionally with both, I'd say OCaml has a gentler learning curve, but you have to do more work forever after getting over the curve.
Why do you suppose that is? Lack of type classes/overloading?
List.serialize (Option.serialize Int.serialize) my_list
There are also some things about the language that break composability. For example, the presence of mutable references means you can't compose polymorphic functions (sounds weird, I know), which is inconvenient.
It basically boils down to less ability to front-load work by creating good abstractions.
Re:composing polymorphic functions, I'm not sure I follow. Do you mean the value restriction?
> All the other things that people use exist as (individually) extremely simple extensions that you can learn in isolation in a few minutes.
If by "learn" you mean you can type in something that is the correct syntax, sure... but in terms of actually learning how to use them to model problems? Come on...
> Haskell is possibly the least over-engineered language out there
Hindley-Milner, ADTs, type classes, and laziness are core, i.e. part of Haskell 2010. GADTs aren't even part of core Haskell. So I guess it is simpler than OCaml after all!
Which language features that I mentioned do you not think are commonly used in Haskell?
You list things that aren't part of the Haskell standard and just claim that they are "core" while not even listing all of the _actually core_ features of OCaml -- all of its OO parts.
I agree the typeclass stuff in Haskell can get complicated, but the more abstruse typeclass features are used about as much as the similarly abstruse ocaml object system. It's usually quite straightforward, just like ocaml modules are usually (but not always) quite straightforward.
I see your claim repeated often on HN, but I think it's disingenuous.
One would expect that a system to transform a simple high-level language into a complicated low-level language would grow in proportion to how simple and high-level the source language is. Thus, a very simple, high-level language ought to have a larger compiler than a more complicated, lower-level language.
Indeed, we see this in real life. Writing a Haskell compiler is a fairly big undertaking, whereas a C compiler could be written in a few days. This is because C is complicated but low-level, and Haskell is simple but high-level.
Let's not confuse the simplicity of translation with the simplicity of semantics.
What makes Haskell higher-level than C? Garbage collection does not really complicate the compiler if you just use BoehmGC.
You can use a simple subset of assembly instead of the full complicated set. Toy C compilers do this, e.g. by treating x86 as a stack machine.
- lazy evaluation
- type inference
- ridiculous amounts of programmer tools (it *derives* type classes for you!!!)
- lots of experimental ideas that are in the compiler but that people don't really use (it's a research tool after all).
I think there is something to be said for the idea of writing a similar language which has a less monolithic set of tools (although, to be fair, I have not looked at the architecture of GHC at all, so maybe it is really beautiful under the hood). The point is that I don't think you can necessarily equate the simplicity of expression with the complexity of the tools.
It also implements a huge number of optional language extensions.