As such, it is from a position of extreme ignorance that he speaks of the uselessness of type checking and inference.
Claiming Smalltalk has the best closure syntax shows he doesn't understand call by need. Haskell defines easier to use control structures than Smalltalk.
Claiming patterns don't give exhaustiveness checks, while ignoring their extra safety, shows Gilad doesn't understand patterns.
Claiming monads are about particular instances having the two monad methods, when they are about abstracting over the interface, shows Gilad doesn't understand monads.
Claiming single argument functions have the inflexibility of identical Lego bricks shows he doesn't understand the richness of function types and combinators.
In short, Gilad sounds to me very much like a charlatan who'd benefit greatly from going through LYAH (Learn You a Haskell).
- He claimed that tail recursion could be seen as the essence of functional programming. How so?
- He complained that tail recursion has problems with debugging. Well, tail recursion throws away stack information, so it should not be a surprise. You don't get better debug information in while loops either. And you can use a 'debug' flag to get the compiler to retain the debug information (at the cost of slower execution).
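A small sketch of where that information goes, using an illustrative accumulator-style function (the names here are made up):

```haskell
-- Tail-recursive sum: each recursive call replaces the previous frame,
-- so there is no stack of pending frames left for a debugger to show --
-- the same information loss a while loop has.
sumTo :: Int -> Int
sumTo = go 0
  where
    go acc 0 = acc
    go acc n = go (acc + n) (n - 1)  -- tail call: nothing left to do after it
```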
- His remarks about Hindley-Milner being bad are bizarre. Exactly what is his argument?
- His claims about pattern-matching are equally poor. Yes, pattern matching does some dynamic checks, and in some sense is similar to reflection. But the types constrain what you can do, removing large classes of error possibilities. Moreover, typing of patterns can give you compile-time exhaustiveness checks. Pattern matching has various other advantages, such as locally scoped names for subcomponents of the thing you are matching against, and compile-time optimisation of matching strategies.
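A minimal sketch of both points in GHC, using an invented data type for illustration:

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns #-}

-- A small sum type; the names bound in each clause (r, w, h) are
-- locally scoped to that clause.
data Shape = Circle Double | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h
-- Deleting either clause makes GHC emit a compile-time
-- "Pattern match(es) are non-exhaustive" warning.
```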
- He also repeatedly made fun of Milner's "well-typed programs do not go wrong", implying that Milner's statement is obviously nonsense. Had he studied Milner's "A Theory of Type Polymorphism in Programming", where the statement originated, Bracha would have learned that Milner uses a particular, technical sense of "going wrong" which does not mean the complete absence of any errors whatsoever. In Milner's sense, well-typed programs do indeed not go wrong.
- He also criticises patterns for not being first-class citizens. Of course first-class patterns are nice, and some languages have them, but there are performance implications of having them.
- His critique of monads was focussed on something superficial, how they are named in Haskell. But the interesting question is: are monads a good abstraction to provide in a programming language? Most languages provide special cases: C has the state monad, Java has the state and exception monad etc. There are good reasons for that.
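A minimal sketch of that "abstraction vs. special cases" point: one function written once against the Monad interface, reused at instances that other languages hard-code (the function names below are invented for illustration):

```haskell
-- Written once against the Monad interface:
pairUp :: Monad m => m a -> m b -> m (a, b)
pairUp ma mb = do { a <- ma; b <- mb; return (a, b) }

-- It works unchanged for the failure monad...
maybePair :: Maybe (Int, Int)
maybePair = pairUp (Just 1) (Just 2)

-- ...and for the error monad, the special case Java bakes in as exceptions:
eitherPair :: Either String (Int, Int)
eitherPair = pairUp (Right 1) (Left "boom")
```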
- And yes, normal programmers could have invented monads. But they didn't. Maybe there's a message in this failure?
It is instructive to read Bracha's blog too, mostly for the comments where readers refute a lot of what he claims.
His argument against Hindley-Milner seems to be that "he hates it", and that type errors are sometimes hard to understand. It is true IMO that they are hard to understand (even though, like everything in programming, you get better with practice), but what is the alternative? Debugging runtime errors in production?
He also presents Scala as a successful marriage between OOP and FP, but in reality this is a controversial issue. Some of the resistance to Scala (witnessed here in Hacker News, for example) is due to it trying to be a jack of all trades and master of none. Scala's syntax is arguably _harder to read_ than that of other FP languages.
Some of his "funny" remarks sounded mean-spirited to me. Nobody in his right mind claims that FP invented map or reduce, for example.
The only point of his talk I somewhat agree with is that language evangelists are annoying. Oh, and that "return" is poorly named.
He pointed out that a more nominal type system is a solution: when you give meaningful names to your types, the error messages become clearer and are not full of long inferred types that reveal potentially confusing or unimportant implementation details.
More importantly, I think the reason why error messages are sparse and not meaningful in languages with Damas-Hindley-Milner is that nobody bothered to improve the situation. And the reason why nobody bothers is that it's simply not a problem in practice. Any even moderately experienced programmer can easily detect and fix typing errors as they are given in Haskell, OCaml, F#, Scala etc.
Monads are not only a good abstraction, they are essential* if we are to move away from haphazard construction. Normal programmers have invented them many times; as the truism goes, you may well have "invented" them yourself.
Remember when function pointers seemed tricky and unnecessary? Remember when closures seemed tricky and unnecessary? Yeah. One day you're going to see monads in the same way.
* - Functional programming => programming with pure functions.
I'm somewhat curious about why the industry has such an aversion to simulating things in our minds, especially since this seems to be one of the arguments employed against monads in this speech: that it basically couches something known in an odd name that is not known. Isn't this just saying that it is bad because it confuses the simulator that is the reader?
That said, the live coding aspect is something that I am just now learning from Lisp with Emacs. Being able to evaluate a function inline is rather nice. It is somewhat sad, as I still wish I could get a better vote in for literate programming. (Betraying my appeal to the human factor more so than the mechanical one.)
1. entering the context (pure :: a -> m a)
2. collapsing nested contexts into one (join :: m (m a) -> m a)
Together with some coherence laws that ensure that these operations do exactly, no more or less, than entering the context and collapsing nested instances of it.
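Spelled out for Maybe, a minimal illustration (the helper names `enter`, `collapse`, and `bind` are made up here):

```haskell
import Control.Monad (join)

-- pure enters the context; join collapses nested contexts.
enter :: a -> Maybe a
enter = pure                  -- enter 3 == Just 3

collapse :: Maybe (Maybe a) -> Maybe a
collapse = join               -- collapse (Just (Just 3)) == Just 3
                              -- collapse (Just Nothing)  == Nothing

-- The usual bind (>>=) is recoverable from fmap and join:
bind :: Monad m => m a -> (a -> m b) -> m b
bind m f = join (fmap f m)
```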
The video goes on to display an environment where you do not have to simulate the code in your head.
This progression seems somewhat interesting to me. As does the desire to not have to simulate code in your head.
But none of that has anything to do with monads.
I am taking issue with the video's critique of monads. Wherein it is claimed that monads manage to take a common and understandable behavior and make it laughably impossible to explain to people by giving it a weird name. Essentially, the problem with monads is one of it being difficult to "simulate" under the name "monad" for many individuals.
This part, I actually feel makes sense and resonates well. Simply follow the progression in the video and see how "FlatMappable" becomes less and less intuitive as it is given worse and worse names.
The part that is interesting to me, is how this then progresses into a point on how programmers should not have to simulate the code in their head. Now, I realize there is a big difference between "should not have to" and "is difficult to intuitively do so". Still seems an odd progression, though.
If you don't want to discuss something, then don't post. You are not making any sense, and calling people trolls does not help at all.
At no point was I trying to describe or discuss monads. That is something a response to me thought I was trying to do. When referring to "simulating" a system, I was referring to where the video refers to the process of reading "dead code" in a text editor. There is a large rant on monads in the video where the argument appears to be that the problem is strictly with the name. The reason given is that it takes something understood and hides it behind non-obvious names. I extrapolated this to mean that it makes the program and the idea "hard to simulate" for the coder reading the code.
As time goes on I'm finding it more and more frustrating to try and maintain code that relies entirely on anonymous and structural constructs without any nominal component. Yes, I do feel super-powerful when I can bang out a bunch of code really quickly by just stacking a bunch of more-or-less purely mathematical constructs on top of each other. . . but as the story of the Mars Climate Orbiter should teach us rather poignantly, when you're trying to engineer larger, more complex systems it turns out that meta-information is actually really useful stuff.
I wasn't familiar with the Mars Climate Orbiter case, but a cursory reading suggests one of the causes was a type error (confusing newtons with pound-force).
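As a sketch of how nominal wrappers make exactly that confusion a compile-time error (the types and functions below are invented for illustration; 1 lbf is about 4.44822 N):

```haskell
-- Structurally both are just Double, but the nominal wrappers make
-- newton/pound-force confusion a type error instead of a lost orbiter.
newtype Newton     = Newton Double     deriving (Eq, Show)
newtype PoundForce = PoundForce Double deriving (Eq, Show)

toNewtons :: PoundForce -> Newton
toNewtons (PoundForce lbf) = Newton (lbf * 4.44822)

fireThruster :: Newton -> String
fireThruster (Newton n) = "impulse: " ++ show n ++ " N"
-- fireThruster (PoundForce 100)  -- rejected by the compiler
```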
For example, I strongly prefer F# to its cousin OCaml largely because F# uses nominal typing and OCaml uses structural typing. I've also got some misgivings about being overly reliant on type inference. Both structural typing and advanced type inference are admittedly incredibly convenient. What worries me is that they also seem to be incredibly convenient as ways to obfuscate the programmer's intent w/r/t types and their semantics.
In any case, there is certainly valid criticism of FP, but Bracha's just isn't it. My impression is that the guy -- as clever as he may be in other areas -- barely understands FP, and makes disparaging remarks about things he isn't familiar with. Read his blog; every assertion he makes is shown to be incorrect or misleading by people who do understand FP, like Tony Morris or (very politely) Philip Wadler himself.
That said, he's a terrible presenter. His smarmy style was really off-putting, and his motives a little sketchy. He spends a good portion of the talk slamming just about every language in existence except for the two he works on (Dart and Newspeak). It seemed very disingenuous and I don't need another ranting nerd spouting venom about why something's not very good in that holier-than-thou tone. I would have rather had a straightforward talk showing the strengths and weaknesses than the bitter tone this had.
As an aside, Scala is not unique in marrying an FP approach with an OO system. CL has had CLOS, IMO one of the better implementations of "OO" outside of Smalltalk, for much longer than Scala.
Definitely watch this!
As an aside, CLOS multimethods resemble Haskell's multiparameter type classes (except CLOS is dumber: you cannot provide any guarantee that the same types will provide two or more common operations) more than they resemble anything else also called "object-oriented".
The best descriptor I can find to date (of CL) is, "programmable programming language," which allows it to encompass almost every desired feature one may need; including many that fall under the FP umbrella which may be where the confusion stems from.
However, one of the opening points of the talk was that "FP" is not a rigorously defined term and is subject to interpretation, which leads to bikeshedding over language features and a lot of hype.
I believe it also leads to a lot of misplaced faith in the purity and completeness of mathematics (it's almost as if the popular notion of FP is being reborn as a modern Principia Mathematica).
CL obviously cannot be called an, "FP," language since its inception seems to predate the popular notion of the term. Scala may suffer in the same way due to its reliance on the JVM and the expression semantics it has carried over from Java. However many of the features one tends to associate with modern FP languages (though not all) are present in both languages.
As for your aside, how so? Perhaps a discussion we can have over email if you're interested. You sound smart. However I don't understand your statement and would like to know more.
CLOS multimethods do not "belong" to an object or even to a class declaration. Particular implementations of generic methods are declared globally, just like Haskell type class instances. Although, as Peaker noted, type classes can dispatch on any part of the type signature. It is impossible to make a CLOS multimethod with signature:
(SomeClass a b) => String -> (a, b)
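For contrast, a small Haskell sketch of such a signature (the class and instance are invented for illustration): instance selection here is driven by the *result* type expected at the call site, which no value-based dispatch like CLOS's can do.

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}

-- Dispatch happens on the result components a and b,
-- not on any runtime argument value.
class SomeClass a b where
  parse :: String -> (a, b)

instance SomeClass Int Bool where
  parse s = (length s, null s)

-- The call site's expected type picks the instance:
example :: (Int, Bool)
example = parse "hello"
```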
Sorry, I never check email. But I am almost always on Freenode. My nick is pyon.
> You sound smart.
Not really. The regulars in #haskell - now they are frigging smart.
> Not really. The regulars in #haskell - now they are frigging smart.
Don't sell yourself short.
I thought it was brilliant because Gilad provides a humble deconstruction of common myths and claims of the FP culture. He is skeptical and I didn't find any of his conclusions to be dismissive: he walks through the reasoning behind his opinions. I certainly didn't find any point where I thought he was ignorant of the subject of which he was speaking. And if you listen to his opening remarks about "deconstruction," and his conclusion do note that he points out some FP concepts that are useful and should be exploited more. He was there to break through the hype and I think he was successful.
He argued with a joke from a comic and lost. Even he would laugh in your face at the notion that there was anything humble about his talk.
>He is skeptical and I didn't find any of his conclusions to be dismissive
That is precisely the opposite of reality. He doesn't even understand functional programming, he is thus not skeptical, he is dismissive.
>He was there to break through the hype and I think he was successful.
The fact that both he and you believe there is "hype" is indicative of the problem. "Hey, you should learn things and improve your skills" is not hype.
Most of what he says is outright wrong. He talks about Smalltalk inventing all of this FP stuff that was in ML before Smalltalk-76 "invented" it. He pretends Smalltalk predates FP, except, again, ML predates Smalltalk-76, and Smalltalk-72 didn't have the features he is talking about. He talks about things "FP languages can't do" that I do all the time in Haskell with no issues. He repeats the oldest, most worn-out fallacious arguments that have been debunked over and over, and pretends that since nobody is allowed to interrupt the talk to correct him, his arguments are correct. Everything about his talk is an example of the exact opposite of what you suggest it is. If you want someone to convincingly lie to you about how FP isn't all that, look to Erik Meijer. Gilad sucks at it.
Regarding Haskell: The points he makes against obtuse names based in category theory are valid, but then again, Haskell has its roots in research programming languages. Math-based terminology makes more sense for an academic audience.
No, they aren't. When you have a class of "things" that doesn't have a name most people are familiar with, you are left with two options. Either choose a name people are familiar with, but which is wrong and misleading. Or choose the correct name and people have to learn a name. Are we seriously so pathetic as an industry that learning 3 new technical terms is a problem?
The first is that they hide the meaning. For example, "Monoid" is a really scary term, and explaining it further as "something with an identity and an associative operation" really doesn't help much either. Calling it instead "Addable" or "Joinable", and explaining it instead as "things with a default 'zero' version, and which have a way to add two of them together", while perhaps not a perfect definition, would be much more intuitive for the majority of people.
That brings me to the second problem I see, which is that the esoteric terminology in Haskell creates a barrier between those who understand it, and those who don't, and contribute to a sense of Haskell culture being exclusionary and cult-like, which discourages cross-talk.
Criticizing Hindley-Milner, on the other hand, I'm confused by. It's such a useful and powerful system. I suppose it can make compiler errors more obscure at times, but you get used to reading them and they aren't so bad. Hindley-Milner isn't just a type inference system; it's a typing system which allows for the most general typing to always be used, so that the functions one writes are as general as possible, encouraging modularity and code reuse.
"Monoid" will be very informative to anyone who learned it from mathematics.
A "Monoid" is a type which supports an associative operation (`m -> m -> m`) and a neutral element (`m`) that acts as its identity.
"Addable" suggests it is "addition". Does this mean it is commutative? For the sake of precision, I'd hope so! (But monoids need not be commutative.) Does this mean it has a negation? No. So it is not "addition"; why use a misleading name for the sake of some false sense of "intuition"?
The actual explanation of what a Monoid is precisely is so short and simple, it makes no sense to try to appeal to inaccurate intuitions.
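A short illustration using standard Haskell (the `Sum` wrapper is from base's Data.Monoid):

```haskell
import Data.Monoid (Sum (..))

-- A Monoid is exactly: an associative (<>) plus an identity mempty.
-- Lists form a monoid under concatenation:
listExample :: [Int]
listExample = [1, 2] <> mempty <> [3]      -- [1,2,3]

-- Integers form *two* different monoids (addition and multiplication),
-- one reason "Addable" would be a misleading name for the abstraction:
sumExample :: Int
sumExample = getSum (Sum 2 <> Sum 3)       -- 5
```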
Monad is simple and hard, but Monoid is simple and easy.
My point isn't really specifically about monoids; they're just an example of what often goes on in Haskell, which is that people put theory before practicality and mathematical (and hence often esoteric) definitions before practical, real-world definitions. Like I've said a few times, this isn't incorrect at all. Nor is it surprising given Haskell's origins, nor is it without purpose since it deepens your understanding of what's going on in the language. It's just a simple fact that the mathematical jargon is a turn-off to newcomers and those who don't feel they want to be forced to learn math while they're programming, or might think they're incapable of doing so.
As it turns out, I'm not one of those people; I love the mathematical side of Haskell and I love that I've learned what a Monoid is and developed an interest in type theory, category theory and all kinds of other things. But not everyone is like that, and that's the point I'm making.
There is no such term, that is the point. Offering up misleading terms that do not convey a sense of their meaning is much worse than a word that is unfamiliar.
>My point isn't really specifically about monoids; they're just an example of what often goes on in Haskell, which is that people put theory before practicality and mathematical (and hence often esoteric) definitions before practical, real-world definitions.
But it isn't an example of that. It is quite bizarre to see people insist that this goes on, and give examples that do not support that claim, while remaining fully convinced of their proof.
>As it turns out, I'm not one of those people; I love the mathematical side of Haskell and I love that I've learned what a Monoid is and developed an interest in type theory, category theory and all kinds of other things. But not everyone is like that, and that's the point I'm making.
You don't need to be like that, that is the point we're making. I am not a math person. I am not a CS person. I am a high school drop out who taught himself to code in PHP and C. I learned haskell just fine. I learned monoids and functors and monads just fine. I am no more mathematically inclined now than I was before. I know nothing of category theory, and care nothing of it. They are very general abstractions that do not reflect a narrow, specific use case, and thus do not benefit from a word that describes some narrow, specific use case.
Some people will call me Satan now, and others will jump on what I said and say, "Hell yeah, the world is full of dumb blub programmers." But both those groups are misunderstanding me.
I think a programmer who cannot understand this level of abstraction can still do plenty of valuable things as a programmer. I would not call them dumb. They may be -- and many are -- fabulously creative, driven, capable and highly productive.
There's a certain type of programmer who is more comfortable with abstraction and whose brain is more wired to deal with these amorphous, unnamed concepts. The same kind of brain wiring is needed to go far with mathematics.
But as FP becomes more prominent, this is going to become a dividing issue. Some will not make the transition, or will do so only partially. I think it's great to try to communicate better where possible, but even the best communication is not going to completely erase the issue.
My 3rd grade daughter learns about associativity and identity. Is it too much to expect adults to not get all defensive over 3rd grade terminology?
This is just my opinion, of course.
A very large part of our job is to apply abstractions. I don't often hear lawyers complaining about how accessible the name of some law is, or from doctors about how accessible the name of some disease is. I've never heard an American football player say "We should call the pistol formation something else. Calling it pistol is potentially confusing". They just learn what a pistol formation is and carry on.
As programmers, abstraction is a very large part of our job. We owe it to ourselves to learn the basics and to improve our abilities with respect to our craft, even though sometimes it's hard.
In particular, it's easy to define a reverse monoid for any (non-commutative) monoid such that append becomes prepend. It's easy to construct monoids which have different spatial properties, like Diagrams' "stacking" monoid (they have many others, too; see this entire paper http://www.cis.upenn.edu/~byorgey/pub/monoid-pearl.pdf). It's also easy to construct monoids which don't have any spatial sense at all, like set union.
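That reverse monoid already exists in base as Data.Monoid's `Dual` wrapper; a tiny sketch:

```haskell
import Data.Monoid (Dual (..))

-- Dual swaps the argument order of (<>), turning append into prepend:
prepended :: String
prepended = getDual (Dual "world" <> Dual "hello, ")   -- "hello, world"
```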
What is '+'? Addition? Modular addition? Logical OR? Concatenation? Sometimes, any of these.
There will always be more concepts than distinct labels, since the space of concepts grows combinatorially (like a power set) over the set of words.
I don't have the perfect answer, but "append" certainly seems like choosing a specific concept, rather than trying to come up with a more general name.
We are even more pathetic than that. If the underlying concepts are misleading but evoke a warm and fuzzy sense of familiarity (objects), we will accept them wholeheartedly. If the underlying concepts are mathematical, we will reject them as disconnected with our everyday needs.
"""My next linguistical suggestion is more rigorous. It is to fight the "if-this-guy-wants-to-talk-to-that-guy" syndrome: never refer to parts of programs or pieces of equipment in an anthropomorphic terminology, nor allow your students to do so. ...
I have now encountered programs wanting things, knowing things, expecting things, believing things, etc., and each time that gave rise to avoidable confusions."""
For most people, yeah, I think monads are a big hurdle. They look intimidating to outsiders.
I still have to admit that I like the approach Haskell has taken. Sure, it's harder to grasp the concepts if you don't have a background in math, but it's not like monads, monoids, arrows, and functors were thrown in there just to be pretentious. There's a whole lot of useful theory surrounding those concepts that can be used to the programmer's advantage.
After all, compilers don't compile; they translate.
Perhaps try going through "Write Yourself a Scheme" which uses monads from the outset, or look at "Monad Transformers Step-by-Step" (be warned though, it starts off mostly simple but then makes a sudden and somewhat jarring leap forward). Try to implement a stack with "push" and "pop" monadic operations (use this as a starting point: http://brandon.si/code/the-state-monad-a-tutorial-for-the-co... but keep in mind it's much more important to WRITE the code, and play with it, than to try to understand how it's all working just via explanations).
For what it's worth, here's my ten cent explanation of monads:
A monad is an interface for containers. A type that implements this interface must have two methods: `return`, which inserts a value into a container, and `bind`, which says what should happen when we use the value in one container to create a new container.
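Made concrete for Maybe, probably the simplest such "container" (the names and values below are purely illustrative):

```haskell
-- return puts a value in the container:
wrapped :: Maybe Int
wrapped = return 3                              -- Just 3

-- bind (>>=) uses the contained value to build a new container:
chained :: Maybe Int
chained = wrapped >>= \x -> return (x + 1)      -- Just 4

-- ...and says what happens when there is no value to use:
shortCircuit :: Maybe Int
shortCircuit = Nothing >>= \x -> return (x + 1) -- Nothing
```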
However, I was able to gain a solid understanding of monads by working through a number of those tutorials. Over that period of time, I came to suspect, and then eventually confirm, that I had previously created my own monad for a particular purpose in C# (my case was checking the value of something in an XML tree, an attribute of an element of an element of an element, where the attribute might be missing, or its parent element might be missing, or its parent, and so on. So it was much like Bracha's ".?" sugar example).
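A rough sketch of that pattern in Haskell, with a hypothetical toy tree standing in for the XML case (all names here are invented):

```haskell
import qualified Data.Map as M

-- A toy tree: each node has named children and named attributes.
data Node = Node
  { children :: M.Map String Node
  , attrs    :: M.Map String String
  }

-- Each lookup may fail; (>>=) short-circuits on the first Nothing,
-- just like chained ".?" null-propagation.
attrPath :: Node -> Maybe String
attrPath root =
  M.lookup "config" (children root) >>= \a ->
  M.lookup "server" (children a)    >>= \b ->
  M.lookup "port"   (attrs b)
```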
So, monad tutorials actually (eventually) made the concept clear to me. It's fun to make fun of them, and they deserve to take a little heat, but we have to give them credit where credit is due too.
But in the end I think for most people 'understanding' the concept of monads is just something that is not to be had within a couple of hours. It takes a little bit of patience thinking about them and using them for a while.
very 'operational', zero magic, good to get your hands dirty.
Dans' tutorial you linked is one of my favorite too.
How about "functor," "monad," "applicative" etc.?
Since a reader of that code cannot easily tell whether the number and types of arguments are correct, one has to rely on the type checker to ensure everything works out.
However, this is more of a criticism of ML syntax than of currying – all things are good in moderation.
As a practical consideration, this rarely if ever becomes an issue, and if it does, the type checker will tell you straight away.
Type annotations can make clear what isn't intuitively clear with a function's signature, and since the correctness of the type checker is rigorously proven, I don't see anything particularly wrong with "relying" on the type checker.
There is nothing wrong with relying on the type checker, except that it tends to add cognitive overhead.
It's very natural, and as Lisp/Scheme/Racket shows, it's perfectly fine in a dynamically typed context as well.
Bracha's critique, as usual, is missing the point.
For some reason it seems the parent post would like to specifically flag the case where a function application results in a value that is not itself a function? Seems quite strange to me.
OTOH, pipeline operators make a good case for currying. There really is something nice about being able to write
sliceOfBread
|> smearWith peanut-butter
|> smearWith jelly
|> topWith sliceOfBread
|> cutInHalf
|> eat

rather than

eat(cutInHalf(topWith(sliceOfBread, smearWith(jelly, smearWith(peanut-butter, sliceOfBread)))))
fun x |> f = f x;
x |> f = f x
let (|>) x f = f x;;
I don't follow this. My understanding is that currying is pure syntactic sugar: it's a cheap way to express partial application.
What am I missing?
So why complicate the language with native support for a feature whose encoding on top of one argument functions is concise, elegant, and works well?
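A two-line illustration of that encoding in Haskell (function names invented): the "sugared" and desugared forms denote the same function, and partial application falls out for free.

```haskell
-- Multi-argument functions are just nested one-argument functions:
addSugar :: Int -> Int -> Int
addSugar x y = x + y

addDesugared :: Int -> Int -> Int
addDesugared = \x -> \y -> x + y

-- Partial application is then just ordinary function application:
increment :: Int -> Int
increment = addSugar 1
```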
I only know too well. Every time I write stuff like
std::bind(foo, bar, _1, _2, baz, _3, _4);
What about poison? Rabies? Rabies in moderation actually sounds quite appealing.