In my own opinion, I think the author would be better served by sticking with it for a little while longer.
And, to head off one common strand of argument at the pass: no, as a Haskeller and FP enthusiast, I very strongly do not think pure, statically-typed, globally-type-inferred programming is perfect or some kind of One True Way. Does it have warts? Of course it does, and don't believe anyone who tries to tell you otherwise.
But the pure FP way of thinking has a lot to offer, and seeing all its idioms through an imperative, C-family-language lens is counterproductive.
A good thing to keep in mind here is the "Principle of Charity": when faced with an unfamiliar spin on something you know how to do in a different way, it's useful not to approach it with the intent of disproving it. Existential Comics has a guide[0] to learning philosophy as an amateur that talks about this.
Eh, I think the OP had more of an argument. Saying something is a "non-argument" is not a useful rebuttal!
I completely agree about the lack of names. I based my current language project around Zephyr ASDL [1], a DSL used in Python to describe abstract syntax with the model of algebraic data types. And it does force you to use names, which I very much like.
I've looked at a lot of language implementations and algorithms in ML, and the first thing I read is the abstract syntax. And the lack of names always bugs me.
Also, the lack of namespacing. In Rust, the enum variants are scoped to the name of the sum type. But in ML you will often see people using explicit prefixes like in C, to provide your own namespacing. (At least, I saw this in the early implementations of the Rust compiler itself in OCaml.)
I agree with you on namespacing: I've found myself having to add prefixes myself when working with a bunch of similar datatypes with identically-named constructors (e.g. terms and types in a compiler have many similarities).
Saying "non-argument" and ending it there is not useful, but in this case it's shorthand for something along the lines of "not even wrong", and I think that's a valid answer for some (not all!) of the points in the OP.
To pick one example: the author gives an example of an eta-reduced function and then says:
let fn1 a b c d = ... do something ...
let fn2 a b c = fn1 a b c
fn2 a b c d
> and you're like ...wtf... where the hell does d come from.
(Or their criticism of partially-applied functions, which is essentially the same thing.)
This is a "non-argument": the only thing that is being made clear here is that the author hasn't spent sufficient time learning the idioms of an unfamiliar style of programming. Yes, horrible partial-application like
f = (g .) . ($) . (. (. h))
or whatever is always a bad idea, but saying that writing
map f
is always worse than
map (\x -> f x)
is much more likely to be a symptom of unfamiliarity than a deep, cutting critique of functional programming style.
The whole thing was obviously driven by unfamiliarity and stubbornness in clinging to the familiar, but your comparison of "map f" vs "map (\x -> f x)" is not equivalent to his comparison of "array.map(fn)" to "array.map(x => fn(x))".
The Haskell versions are equivalent, but the JavaScript versions are not, due to the arity issues raised in the comments.
If you do raise that, then it's no longer an argument against partial application or "high[er]-order functions", since you can't really call `fn` partially applied if `fn` and `x => fn(x)` aren't equivalent.
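To make the arity point concrete, here is a minimal JavaScript sketch: `Array.prototype.map` invokes its callback with `(element, index, array)`, so a function that accepts more than one argument behaves differently when passed directly than when wrapped in a single-argument lambda.

```javascript
// map calls its callback with (element, index, array), so parseInt
// receives the index as its radix argument when passed directly.
const direct = ["10", "10", "10"].map(parseInt);
// parseInt("10", 0) -> 10, parseInt("10", 1) -> NaN, parseInt("10", 2) -> 2

// Wrapping in a one-argument lambda discards the extra arguments.
const wrapped = ["10", "10", "10"].map(x => parseInt(x));

console.log(direct);  // [ 10, NaN, 2 ]
console.log(wrapped); // [ 10, 10, 10 ]
```

This is exactly why `array.map(fn)` and `array.map(x => fn(x))` are not interchangeable in JavaScript, while `map f` and `map (\x -> f x)` are in Haskell.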
> Also, the lack of namespacing. In Rust, the enum variants are scoped to the name of the sum type. But in ML you will often see people using explicit prefixes like in C, to provide your own namespacing. (At least, I saw this in the early implementations of the Rust compiler itself in OCaml.)
Well then that was badly written OCaml code. Instead of prefixing, you should use the (fantastic) OCaml module system. Also, OCaml has type-based disambiguation support for records and variants.
Speaking of lack of names, OCaml supports "inline records" nowadays:
type foo =
| Bar of { this : int ; that : float }
| Baz of baz
How does record access work? Is it via generated record accessor functions (pretty crappy, used by Haskell) or does OCaml now have true row types (as in PureScript (and perhaps Elm))? Can you have multiple types with the same record field name?
The "inlined" record can only be accessed locally and cannot be returned as a single value (see the manual: http://caml.inria.fr/pub/docs/manual-ocaml/extn.html#sec271). In general, multiple types can have the same record fields/constructor names. The typechecker uses type information to figure things out in case of conflicts.
For example:
type t1 = { foo : int ; bar : int }
type t2 = { foo : string ; baz : int }
let x = { foo = 2 ; bar = 2 }
let y = { foo = "foo" ; baz = 3 }
let f x = x.foo ^ "x" (* In case of doubt, uses the last one defined *)
let g (x : t1) = x.foo + 1 (* Uses the types to find the right one *)
let h { foo ; bar } = foo + bar (* Figure things out using the fields *)
(Of course this is an extreme example, good code should be clearer than that :p)
When a field "foo" comes from a type t in a module M (M.t), you can either use type disambiguation as above or qualify accesses: x.M.foo. All this also works with ADTs.
There is a deep desire in the OO world to make custom, domain specific ontologies for every domain.
The FP approach attempts to do the opposite: create as few new names as possible (or when we do create them, they should not be exclusive unless they are truly domain specific).
A modern way these meet in the middle is called "named row polymorphism" as demonstrated in PureScript.
All three of these approaches have tradeoffs. It's not a very good idea to say any of them are uniquely suited even to a domain.
I think you're talking about a different issue regarding names (maybe because the original post got taken down.)
I think he was just referring to lack of names in type declarations in OCaml.
type expr =
| Sum of int * int
| Negation of int
A sibling comment pointed out that this is now possible:
type expr =
| Sum of { left: expr ; right: expr }
| Negation of { child: expr }
The former style looks OK for this simple example, but I sympathize with the OP because "real" type declarations are more complicated, and that's where the problem with lack of names surfaces.
You can imagine C WITHOUT names in structs, but WITH destructuring assignment of structs. That means you have to write the same names every time you destructure! That leads to inconsistent programs.
-----
But I know what you are talking about with regard to names. I would say it is more of a Lisp approach than an FP approach in general. (There seems to be a resurgence of statically typed languages in FP in the last decade, so I distinguish them from Lisp. There is a good talk called "Observations of a Functional Programmer" which I link in this post [1], which talks about the schism.)
Anyway, I have heard Rich Hickey (of Clojure) argue passionately against domain-specific ontologies, and I get his point. Another way I've heard it said is that "Java is like legos where none of the pieces fit together" (I think from Steve Yegge).
Those criticisms are totally valid, but it's a different issue than the one OP was complaining about. If anything, OCaml has the same problems as Java. You need to "jump out of" the language to do metaprogramming, whereas in Lisp, metaprogramming is just programming.
That is essentially what my blog post is pointing out.
Thanks for the pointer on PureScript -- I will check that out.
------
Another thing I came across recently was libraries in statically typed FP languages to do simple visitor tasks.
In Lisp this is not something "special". It doesn't require a named solution or "innovation". I don't use Lisp for other reasons, but I appreciate this aspect of it.
> I think you're talking about a different issue regarding names (maybe because the original post got taken down.)
> I think he was just referring to lack of names in type declarations in OCaml.
> type expr =
> | Sum of int * int
> | Negation of int
> A sibling comment pointed out that this is now possible:
> type expr =
> | Sum of { left: expr ; right: expr }
> | Negation of { child: expr }
It's difficult for me to work out the specifics here, but I think my point stands. In general, you just don't add nested ontologies the way you do in the OO world. There's a lot to be said about that with lenses, but there's strong incentive in both Haskell and Ocaml to use module-level separation and minimal accessors.
As for type constructors, I think this was one of Rich Hickey's complaints from his now-infamous interview too, and yeah, in some cases it can be confusing. This is in the same sense that argument order for methods without required names (and combinators like -> vs ->>) can be confusing as well. Style problems exist with every syntax!
But from my perspective, it's pretty rare to not use record syntax or lenses AND be pattern matching on very deep groups of Things In Sum Types. The incentive to not do this is very high now.
My perspective is not so uncommon in the circles I run in; We Avoid Doing That Because It's Easier Not To Do It: the Musical. When folks point this out, it feels somewhat like arguing with a time traveler who hasn't really examined the state of the art and common techniques in 2017. It'd be like me demanding lambdas be added to Java or suggesting Javascript needs a better lexical scoping assignment primitive. It was a problem, but now we have new problems.
> There is a deep desire in the OO world to make custom, domain specific ontologies for every domain.
Well, yes. A big selling point of OO was to allow big services companies to have crazy turnover, where everyone working on a codebase can be fired and new hires can start being productive on it the next day. So everything has to be documented and named. Write the class you're assigned and don't do more; you're just a microscopic cog in the machine.
The lack of canonical names for parameters to data constructors is usually addressed by convention, and can present real problems for reading unfamiliar codebases. There are solutions, but this is a legitimate complaint and saying "OO also has some nameless things" doesn't help. A more helpful response would point at lenses and tools for jumping to datatype definitions, where fields' purposes should be documented.
Functional "mutation" tracking is subsumed by dataflow tracking, which is necessary in every programming language. The rebuttal is basically right here.
Partial application is occasionally useful, and in languages where it isn't built in, it's easy to wrap multi-argument functions in closures to curry them by hand. The comparison to overloading rests on visual similarity and has no real bearing on whether curried function calls are hard to understand.
The notes on higher-order function calls and state-passing are okay, but it would be nice to note the ways state-passing can be avoided even in pure code in many functional languages.
Lack of methods is a real annoyance, not because the syntax is important, but because methods enable type-dependent dispatch. The right response here is to point at typeclasses and traits, which are how ad-hoc polymorphism is usually done in functional programming languages. In Rust, for example, traits provide methods with the same syntax as "OO" classes.
Cons/linked lists are O(n) access, and there are many situations where avoiding significant slowdowns involves roundabout code transformations or "reverse; process; reverse" nonsense. Both random-access mutable containers and immutable functional ones are useful, and there are applications where the constant slowdown factor due to indirection (often more so than the logarithmic factor) associated with pure data structures is prohibitive.
Pooh-poohing early return rather than pointing at tools in functional languages (do-notation) for writing similarly expressive code is ignorant at best, and disingenuous at worst.
The problem of H-M error messages bringing up two parts of a program and saying they conflict, rather than pointing at one part of the program and saying it's a mistake, is a well-known drawback. There are other points in the design space of type inference and typechecking that may better align with what programmers expect when assigning "blame" for type errors.
In a way, C++ conceptually uses higher-order functions for polymorphic runtime dispatch too, because it uses vtables, which are lists of function pointers: essentially, the object doing the dispatch is a HOF that calls one of the functions it has a pointer to.
All this is just made explicit, and the boilerplate OOP mechanisms stripped away, when you just directly pass first-class functions around.
I guess I just don't understand the connection between first class functions and multiple dispatch. I have a function "add()" that works differently for strings and numbers. How can I replicate that behavior with higher order functions?
The key idea is delegating the type-specific operations to specific implementations which handle them, and then statically (at compile time) choosing which specific implementation you're calling.
In OCaml (ReasonML) you have to pass in the implementations manually, but Haskell and Scala have the ability to automatically choose the implementation based on the types of the arguments, so that it looks dynamic even though it's static and type-safe.
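A rough sketch of that difference in JavaScript (all names here are illustrative): generic code takes a "dictionary" of type-specific operations as an explicit argument, which is essentially the manual implementation-passing style described above; Haskell's typeclasses and Scala's implicits automate the choice of dictionary from the argument types.

```javascript
// Each "dictionary" bundles the type-specific operations, like a tiny
// module or typeclass instance.
const intOps = { add: (a, b) => a + b, zero: 0 };
const strOps = { add: (a, b) => a + b, zero: "" };

// Generic code receives the implementation explicitly; in Haskell or
// Scala the compiler would pick intOps/strOps based on the types.
function sum(ops, xs) {
  return xs.reduce(ops.add, ops.zero);
}

console.log(sum(intOps, [1, 2, 3]));       // 6
console.log(sum(strOps, ["a", "b", "c"])); // abc
```

The dispatch is resolved at the call site, statically, rather than by inspecting values at runtime.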
IMO there are a lot of needlessly downvoted comments on HN these days. A few months ago I was trying to stem that tide and upvoted downvoted comments when I saw no reason for those downvotes. Many times the downvoting looks completely random, or at the very least like "I disagree" without any attempt at a counter-argument.
However, my voting rights were removed, and when I asked HN by contact email I was told I was behaving like a "troll". The reasoning: I upvoted comments other people had downvoted, so I must be out to cause trouble! I gave up that account and don't care any more; I also hardly ever participate any more. I think the HN site admins contribute to this problem by calling it "trolling" to do what I still think was reasonable. I did not upvote a single comment that was in any way, shape or form objectively bad: none had insults, none were cheap shots, none were too short or useless, and none even voiced troubling opinions. I still don't understand why they were downvoted in the first place. The site admins' opinion seems to be "you have to live with downvotes", and, at least in my case, when you try to work against what I think is clear downvote abuse, you are assumed to be a "troll".
Okay, I don't think voicing a negative opinion about the site is going to go down well... but that's okay. I think this site would be better off with no voting at all, given that a sizable number of downvoted comments don't deserve it at all.
FWIW I've always been of the opinion that HN would be better off without downvoting.
Most arguments for downvoting I've heard revolve around the desire to maintain a high signal to noise ratio, but in my view, the marginal increase in SNR downvoting offers over upvoting and flagging alone isn't worth the chilling effect it has on unpopular speech.
Even without downvoting, popular comments will rise up to the top and inappropriate comments will be flagged to oblivion.
Downvoting is just an additional layer of censorship on comments that express unpopular opinions. While the first layer of censorship (from popular comments getting bumped up by upvoting) can be justified because of its high marginal contribution to SNR over a system without any censorship mechanism other than flagging, I don't think the same can be said about downvoting in a system where upvoting and flagging already exists.
I feel downvoting has steadily been turning the HN comments section into yet another uninteresting, groupthinking hivemind with nothing provocative to offer, and will continue to do so until it gets reined in. And this is speaking as someone who doesn't get downvotes all that often.
I also work at Facebook. These comments do not reflect any internal wisdom at Facebook. The article should probably be retitled "...from a junior front end engineer."
Facebook uses Haskell internally for a handful of projects[0] and its C++ library, Folly[1], uses a lot of functional programming idioms (which are actually used extensively internally).
Christopher Chedeau's original post clearly doesn't represent Facebook's official stance on functional programming (nor does it ever purport to do so from what I can see), but I'm not sure that describing him as "a junior front end engineer" is appropriate either.
This goes into a bigger problem in the industry. I see people with inflated titles because they have X years of experience. But if you delve deeper, you will realize their knowledge of programming hasn't really improved in X-1 years. They were never curious enough to learn outside their comfort zone, like how a crawler works, how distributed transactions are handled, and why FP offers a different approach to programming. They are comfortable in their little dugout.
They will ace the interviews because they remember all those algorithms by heart and know the right people. It's just a sad state of affairs. There are more qualified people out there who just don't have the same opportunities because they didn't start their careers at the "right" firm or go to the "right" school.
I'd say yes, but I don't understand the enthusiasm around React. It solves a very specific problem for Facebook: it prevents the refresh rate of a page generated from many, independently developed pieces from getting clobbered by one misbehaving piece. If you don't have that problem, classic MVC or full FRP are preferable.
Regardless of your personal thoughts, it’s one of the most widely adopted open source projects of all time. And yet you’re asserting that one of its original creators (to say nothing of his other projects) qualifies as no more than a junior engineer?
> it’s one of the most widely adopted open source projects of all time.
Nope. The BSD network stack? The Linux kernel? Apache? React is part of the small, insular world of frontend development, where there are very few that aren't junior engineers.
Depends how you define "adopted." In this case I personally was defining it as how many people are touching the code. You seemed to define it as how many systems rely on the code.
At the end of the day, we are all people using computers. We interact with code at all levels of the stack. In any given request cycle, a person is interacting with the hardware layer, application layer, operating system layer... and those of dozens of other external systems connected to the internet. Each layer, each system, is just as important as the others. Together they form the experience that is delivered to the user by the computer.
So I don't think you can place a value on the "importance" of a piece of software; all software should be created equal, so to speak. Therefore you cannot evaluate the "adoption" of a piece of software based on some multiple of its importance. You can only rely on the number of people that touch that software as an indicator of how many other people, or systems, might rely on it. If many people, and therefore systems, rely on a piece of software, then I would consider it "widely adopted."
Software has developers and consumers. Both are important.
React, Linux, BSD, Apache... they are all software projects. You cannot compare them based solely on what type of software they are. You need to look at the codependencies they create and the value they provide. Their value comes from the product that developers create, and consumers use. Their codependencies arrive when the consumers are also developers, who incorporate the code into their own project with its own consumer users.
Software with developer consumers is not necessarily more important than software with end user consumers. Neither can exist without the other.
So I guess I'm saying it was presumptuous of me to describe react with the superlative of "most widely adopted" software project (of all time), but I would put it in a class parallel to that of those projects you mentioned. Really, in one class -- software.
When I first came out of school and resented such condescension, I wondered if it would seem more acceptable later in my career; instead, it now only seems clearer that it isn't.
Meritocracy is one of the things that makes our field special. It's not perfect, we have our biases, but wearing shorts or tennis shoes to work is not a statement of rebels. It's a statement, in my view, of less respect for formalities and more respect for people who think, do, innovate, and advance the state of something.
Those people are often the senior and experienced ones, but it's really not much of a surprise when they're not.
Not to mention, OCaml in a bunch of places. Most significantly, the Messenger.com frontend codebase is about half OCaml and their bug reports dropped significantly since the conversion.
It shouldn’t read junior engineer, it should read one engineer’s opinion. It’s your argument that’s weak, if you must resort to using his rank as a point of persuasion.
Hey, I'm sorry I misrepresented Facebook here. This was a small rant I did in response to a comment on my latest blog post. This is not extremely well thought out.
Don't apologize, you have the right to an opinion. What is not normal here is people attacking you on this very thread for your seniority (or lack thereof); that is absolutely shameful.
Good luck, you did nothing wrong. I think this kind of "rant" is better suited to a blog, where it's clear that it's just your opinion and not your employer's.
Chiming in with others that it's not worth it to apologize. I appreciated the points in your post, and the comment you're replying to is just ridiculous. Differing opinions are one thing, condescending trash isn't worth anyone's time.
This isn't really worth reading for most people, I think. It may be actively misleading to newcomers to FP (I would count myself as one). It might be useful for people who want to promote FP to see the kinds of issues people new to FP face, but not really, because they've already seen this a bunch of times. I think the author inadvertently reveals the source of their problems with the following comment:
> I also do not believe that functional programming is "a completely different paradigm". I view it more as a bunch of programming patterns that are lumped together under the functional programming umbrella because they work well together.
FP only started to make sense to me once I discarded all my OOP/Imperative ideas. I can see why refusing to do so would lead to thinking a lot of it doesn’t make sense (having been through that myself).
On the other hand, programming is programming. I work in many languages, both functional and imperative. Often I'll use functional paradigms in imperative languages, and sometimes I have to use imperative escape hatches in functional languages. I have no issues flipping from one mode of thinking to the other. Use the best tool for the problem and all that. It is absolutely possible to look simultaneously at both functional and imperative languages and compare them with respect to readability, debuggability, etc. I think criticism of the OP should be focused on comparing along these axes, not vague "think functionally" truisms.
Also, I will say that some of his complaints apply more to OCaml specifically than functional languages in general.
My own 2 cents is that the universe of functional languages could take a lesson or two from how much simplicity boosted Go's success, even though I think Go took it a bit too far.
What you are saying is not what the GP is talking about, and not what the article shows either.
If you think primarily in imperative algorithms while programming in an FP language, or primarily in functional algorithms while programming in an imperative language, you will only guarantee that everything goes wrong and works badly.
Modern languages do make it possible to mix in some bits from other paradigms, but they certainly do not all accept the same code.
As for the article, the problem seems to be even more superficial. The OP seems to have internalized the series of rules necessary to program in JavaScript, and is understandably afraid of breaking them in his OCaml code. That is a natural problem to have when learning a second language, and it can only get worse when the two languages are this different.
I've been playing with more complex languages like Rust and Pony (neither really functional, but with advanced type systems) and have to say writing some Go more recently has been a pleasure :D But it really is a trade off between whether you need to prioritize correctness or development speed.
Regarding your comment that Go maybe took it a bit too far: I agree... in the functional world, I found a language that I really enjoyed and that maybe fixes more FP problems: Elm! Quite easy to read and write Elm code, it's a shame that it only exists on the front-end (it compiles to browser JS), would be really cool if someone ported it to the backend as well.
I think the article might be worth reading from a how-not-to perspective. It seems very much like the author is applying paradigms from object-imperative languages to a functional language and meeting resistance.
It's perfectly fine to not know how to express things idiomatically when learning a new language that uses an unfamiliar paradigm, but a critique written from that perspective is only useful in that it's a list of indicators that you're not using the language effectively.
It's a shame that the post has been taken down, as there were some good criticisms there, and I suspect this comes from dealing with a larger codebase. I work on F# (similar to OCaml) at Microsoft. I learned a few things that made my life easier:
* Annotating your parameter types is a life saver with large, complicated codebases. No type annotations is cute and convenient in the short-term, but horrible when you need to understand some code that sorta doesn't work 6 months later.
* Write stupid code. I recently had to muck about in some old code that called into a function that called into a function created from another function that produced a 5-tuple of partially evaluated functions. This function called into another partially evaluated function, and then another, until it crossed an assembly boundary. Clever code like that is impossible to understand if you didn't write it yourself. Luckily, languages like F# and OCaml make it really easy to write stupid code.
* Write code that makes tooling "light up". This is usually also stupid code.
There are ways you can write F# code that make any kind of IDE tooling useless. It's not fun to deal with.
As a strong proponent of functional languages, I would have to agree these are some good criticisms. FP solves many problems, but one it cannot solve is code readability. Readability will always be a problem no matter what paradigm you work in. This is why I both appreciated and disliked this article. It brings up many valid points about how FP can be (and often is) written illegibly. But on the flip side, don't throw the baby out with the bath water. Just make the functional code more readable.
> Write stupid code
Also really agree with this. As engineers our job is not just to write something that works, it's to write something that can also be read and understood by others. This is why simple solutions to problems are praised over complex solutions.
Can you elaborate on why you think purity is overrated? I find the destruction of local reasoning by impurity to be one of the biggest drawbacks of non-functional languages.
I think something that is impure when it needs to be is easier for me to maintain than something that really goes out of its way to be pure. Especially when the set of cases something should handle is large.
On the other hand, I think that immutable data is rarely difficult to deal with, and saves my stupid self from all kinds of bugs that I'd probably write.
At the same time, when a language is 100% pure you gain some interesting abilities, like being able to rewind and replay the state of your program, or to optimize code by replacing function calls with computed values (memoization).
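As a small illustration of the memoization point, here is a sketch in JavaScript (the `memoize` helper and `square` function are made up for the example). The cache is only sound because the wrapped function is assumed to be pure: same input, same output, no side effects.

```javascript
// Memoization is only safe for pure functions: replacing a call with a
// cached value assumes the call has no effects and a stable result.
function memoize(fn) {
  const cache = new Map();
  return x => {
    if (!cache.has(x)) cache.set(x, fn(x));
    return cache.get(x);
  };
}

// Count underlying calls to observe the caching.
let calls = 0;
const square = memoize(x => { calls += 1; return x * x; });

console.log(square(4), square(4)); // 16 16
console.log(calls);                // 1 -- the second call hit the cache
```

In a 100%-pure language, the compiler can apply this substitution itself, because purity is guaranteed rather than assumed.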
What stops a 90%-pure language from detecting pure functions and doing the exact same optimizations?
I'm not saying that it's decidable in all cases (I can't come up with a meaningful counter-example either), but usually a 90% good-enough solution is better than a 100% solution that makes you go out of your way to do very simple things (like printing to stdout).
I think the opposite. I feel that OOP became such a mess exactly because of those "compromises" made in mainstream OOP languages (i.e. Java, C++, etc). I expect the same from mainstream FP implementations.
Such "compromises" just ruin the whole carefully elaborated framework of the paradigm, introducing breaches that get filled with workarounds. And voila, meet the new Java.
Re: type annotations, it's a best practice in OCaml (and at least possible in F#) to provide interface files for your modules. The interface files capture all the type info and documentation, and the implementations stay succinct. Best of both worlds.
I've never been attracted to the idea of partial application. Without it a function is either in state 0 or 1. It's not called or it has been called.
With partial application in the mix, functions are in an almost indeterminate state (to the programmer typing at the keyboard even if some compilers know).
In functional styles, programmers are encouraged to "just pass a function".
The f(x,y,z) gets passed around in an unknown state of f(maybe needs x, maybe needs y, maybe needs z).
This would be complicated even if it was the same function with a declining arity as it was passed around. Several languages allow named arguments which just adds to the signature complexity.
It's hard to see the benefits of the complexification.
Most pedagogical examples of partial application occur in tight loops where the complexity isn't obvious. Once a function is passed outside of that tight loop, I wonder about its actual utility vs. the negative effects.
Yeah, I totally agree, partial function application certainly complicates things. Instead of `called` vs `not called` like you said, a function can be in any number of states depending on its number of arguments.
I will give you my two cents on its benefits though and let me know if you agree or disagree. I would compare it to taking a step up on the ladder of abstraction. Computers are powerful because they are programmable. If we take the example of a value like an integer, in a computer we can make it anything we want (0, 1, 2, 3, 4 etc). Functions are how we program a value. We say for example
function speed(distance, time) {
  return distance / time;
}
The value of the speed depends on these two variables (distance and time) which are programmable.
Partial function application is just a continuation of this idea. It's obvious that `values` are programmable, but why can't functions be programmable as well? After all, we gained a lot of power from letting values be programmable; maybe the same will be true for functions. Using the previous example, let's say we want a function to calculate the speed of runners in the 100m dash. We could write a new function for this
function speedOf100mDash(time) {
  return 100 / time;
}
but we already have logic for calculating the speed, and we don't want to duplicate it. So the idea of having a `programmable` speed function starts to look a little better.
Obviously, this plays into your criticism of 'pedagogical examples' being simple, but I think the idea is even more valuable with more complicated code. Why? Because with simple examples it's easy to duplicate code without much consequence: the definitions of speed and division are not changing anytime soon. With more complex functions, however, we absolutely want to avoid code duplication, because if we write a complex function multiple times it's more likely that the implementations will be written differently or eventually diverge from one another.
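The "programmable function" idea can be sketched as a manually curried version of the hypothetical speed function: supplying the distance produces a new, specialized function that still awaits the time.

```javascript
// A curried speed: each argument "programs" the function a bit more.
const speed = distance => time => distance / time;

// Specialized functions are derived from the general one rather than
// duplicating its logic.
const speedOf100mDash = speed(100);

console.log(speedOf100mDash(10)); // 10 m/s
console.log(speed(200)(20));      // 10 m/s, same logic reused
```

If the underlying formula ever changes, only `speed` needs updating; every partially applied version picks up the fix.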
I'm really into partial application when you wish to mock some data for testing, and the caller chain of that test is shallow. That is:
* function under test takes in a data as its last parameter
* partially applied version of this function is used in a test
* testing function constructs fake data
* testing function calls the partially applied version
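The pattern in those bullets might look like this in a minimal Python sketch (all names here are hypothetical, just for illustration):

```python
from functools import partial

# Hypothetical function under test: configuration first, data last.
def render_report(title, rows):
    return f"{title}: {len(rows)} rows"

# The test partially applies the configuration once...
render = partial(render_report, "Daily")

# ...then constructs fake data and calls the partially applied version.
fake_rows = [{"id": 1}, {"id": 2}]
assert render(fake_rows) == "Daily: 2 rows"
```

Because the data is the last parameter, the test only has to vary the one thing it cares about.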
It's also nice for code reuse in libraries, but I'm generally less inclined to use it unless I know that the thing I'm working on has dead-simple requirements and won't change much.
I'm taking down this post. I just posted this as a side comment to explain a sentence on my latest blog post. This wasn't meant to be #1 on HN to start a huge war on functional programming... The thoughts are not well formed enough to have a huge audience. Sorry for all the people reading this. And please, don't dig through the history...
There's nothing wrong with jotting down some thoughts about your own blog post, however half-baked. The person who screwed up here was the person who posted it on HN and then clickbaited it up by adding 'engineer from Facebook'. That's just a poopy thing to do.
Never be ashamed of learning. Sure, some people will think you're an idiot, but the real idiocy is judging the person instead of his arguments. Just fuck everyone that thinks someone is worth less for having a different opinion.
FWIW, I think you made some cogent points. I don't know why people get so upset at trenchant criticisms of beloved programming methodologies. From the outside, it often looks like Stockholm syndrome.
I don't have a lot of experience with functional programming and none with OCaml. I do have a little real world experience programming in Scala, and what I found was that my Scala code was more readable than my Java code, because I could use Scala's much greater expressivity to make the code more clear. The code in the Lift web framework was quite difficult to read, way worse than Java code, however. This has been a while so I don't remember the specifics, but I remember thinking that the Lift developers belonged to the "Look ma, no hands!" school of functional programming. Instead of splitting some logic up into four statements with some local variables, you'd have a giant four line statement that was all one giant expression with a lot of nesting.
I'm just writing JavaScript these days, and not in a particularly functional style. I do write some functional code weekly if not more frequently, in Bash. I start out with one command, and then incrementally build out a pipeline where I extract, and filter, and reformat to get some data into a format I can understand. The thing to note is that this code has to be built incrementally and tested at every step. The code written this way is not remotely readable or maintainable when I get done with it, but that's OK, because I'm usually going to throw it away after using it a few times. On the rare occasions that I need to immortalize this kind of code in a permanent script I have to put a fair amount of effort into refactoring it into something I can understand and modify in the future.
It kind of seems to me (but keep in mind that I am no expert) that we have some problems that are not so much with functional programming languages as they are with functional programming culture. There is a tremendous emphasis on being concise, but being more concise does not, in general, make code more readable or understandable.
I wouldn't take Lift code as representative of Scala culture, much less FP culture in general. That project had a lot of problems, both culturally and technically.
I'm sure that's right. Also, I should add that while I was never a Lift fan, it did get the job done, and worked reasonably well for the limited stuff I needed to do on that project. It's just the best example I can draw on from personal experience. I think it's illustrative of problems I've seen with functional programming "in the small" in JavaScript, Java, and numerous online examples in a variety of languages.
But I have to admit, unlike Haskell it seems to be broadly used.
FB rewrote a good part of their messenger in Reason, which is basically OCaml. As far as I know, React was originally prototyped in OCaml? Also Rust's first compiler was written in it.
After reading about stuff like Haskell and PureScript for years, OCaml and Reason feel like a breath of fresh air. Somehow people using them can convey the ideas of functional programming much better than the Haskell/PureScript crowd. The guy who writes "F# for fun and profit" (F# is based on OCaml too) manages to explain concepts really well.
It's interesting you say this, because a lot of measures (often objective but seldom well-designed) suggest that OCaml has a smaller and more specialized userbase than Haskell.
Which is not bad at all. But people keep saying OCaml is somehow more mainstream when the numbers suggest otherwise.
Not sure how long the GP has been programming functionally. I've onboarded engineers from purely imperative backgrounds to our Scala/cats/monad transformer stack and they haven't gotten stuck on these issues. Good things to keep in mind as we onboard more folks.
I for one would be more interested in seeing solutions to the problems OP has from a professional developer, rather than reading smug answers about how he missed his path to enlightenment.
I find this hard to read because some of his arguments are just subjective rants about style.
His argument about "mutations" and "passing values around" describes exactly what makes FP great: it prevents side effects, which make bugs hard to trace when coding concurrently.
This makes me worry for Facebook if this is the quality of functional programmer they employ. Nearly every point of this rant shows an ignorance of even the most cursory parts of the subject (eta expansion, typeclasses, partial application/currying). Did OP bother to pick up a single book in the process of learning FP?
Just a small point: it's easy to write a purely functional list with logarithmic time random access. They're called "random access lists", and Chris Okasaki detailed them in his thesis on purely functional data structures.
More complicated structures like finger trees or whatever else provide similar benefits.
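For the curious, here's a minimal sketch of Okasaki's skew-binary random-access list in Python: a purely functional list with O(1) cons and O(log n) random access. The representation is a list of (weight, tree) pairs, where each tree is a complete binary tree `(value, left, right)` and the weight is its node count.

```python
def cons(x, ts):
    # If the first two trees have equal weight, link them under a new root.
    if len(ts) >= 2 and ts[0][0] == ts[1][0]:
        w, t1 = ts[0]
        _, t2 = ts[1]
        return [(1 + 2 * w, (x, t1, t2))] + ts[2:]
    return [(1, (x, None, None))] + ts

def lookup(i, ts):
    # Skip whole trees, then descend into the one containing index i.
    for w, t in ts:
        if i < w:
            while w > 1:
                value, left, right = t
                if i == 0:
                    return value
                w //= 2                  # weight of each subtree
                if i <= w:
                    t, i = left, i - 1
                else:
                    t, i = right, i - 1 - w
            return t[0]                  # leaf
        i -= w
    raise IndexError(i)

# Build [0, 1, ..., 9] by consing values front-first; nothing is mutated.
xs = []
for v in range(9, -1, -1):
    xs = cons(v, xs)
assert [lookup(i, xs) for i in range(10)] == list(range(10))
```

Every `cons` returns a new spine that shares almost all structure with the old one, so earlier versions of the list remain valid.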
It's sad that the author has felt the need to take this down after the HN-induced whirlwind of comments. Some of the replies could have been a lot more constructive / supportive. For someone starting out with FP (myself included) this sort of thing is really off-putting. Things like "point-by-point" rebuttals are for debates and attacks which affect one personally, not for responding to someone's opinions on a programming language / scheme.
Classes are a really nice way to group functions together
Spicy take. There's been a big crusade against this in the Python community lately, but honestly I don't understand the reason people hate it so much. Sure, it's called a "class" when you really want a "namespace" or something, but who cares what it's called?
Because if everything is a class, then nothing is a class! These are definitely pet peeves of mine:
(1) Classes used as pure namespaces. Just use a module. Modules are namespaces. You can have auto-complete on names in modules.
(2) Methods that don't use 'self'. These should be free functions.
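A tiny Python illustration of both peeves (the names are hypothetical):

```python
# Peeve (2): a method that never touches `self` is really a free function
# wearing a class costume.
class Parser:
    def tokenize(self, text):      # `self` is unused
        return text.split()

# Better: a plain function at module level. The module already provides
# the namespace, and callers get the same auto-complete.
def tokenize(text):
    return text.split()

assert Parser().tokenize("a b") == tokenize("a b") == ["a", "b"]
```

The class version forces every caller to construct a stateless object just to reach the function.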
However I don't think he's arguing that classes should be used solely as namespaces. Using OOP does seem to reduce the number of global names in a language though.
> `class` has `self`, which is some kind of implicitly shared state that will bite you (harder to reason about, test and maintain).
For reasoning about and maintaining, you can just think of `self` as an extra parameter to the function that gets bound to the thing to the left of the dot. In some languages, like Python, this is how it actually works, so the testing argument doesn't apply either:
>>> class C:
...     def __init__(self, x):
...         self.x = x
...     def f(self):
...         print(self.x)
...
>>> c = C(3)
>>> c.f
<bound method C.f of <__main__.C object at 0x7fdb6b9b9240>>
>>> c.f()
3
>>> C.f
<function C.f at 0x7fdb6b9b1ea0>
>>> C.f()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: f() missing 1 required positional argument: 'self'
>>> C.f(c)
3
> For reasoning about and maintaining, you can just think of `self` as an extra parameter to the function that gets bound to the thing to the left of the dot. In some languages, like Python, this is how it actually works, so the testing argument doesn't apply either:
In Python it isn't implicit, but it's still shared state, and so still hard to test and maintain. You can't test the `f` in your example without having to pass in the whole big bundle of state that self tends to be, even if you do so explicitly.
> In Python it isn't implicit, but it's still shared state, and so still hard to test and maintain. You can't test the `f` in your example without having to pass in the whole big bundle of state that self tends to be, even if you do so explicitly.
cies said structs were ok, what's the difference in this regard? 'Shared state' doesn't usually refer to explicitly passing a data structure into a function as an argument; it's operating on shared state only in the same way that any function which takes for example a string or a hash does.
> cies said structs were ok, what's the difference in this regard?
Small, appropriately scoped structs are ok. Passing a big "god struct" around to functions that are only going to use a couple of fields is bad.
> 'Shared state' doesn't usually refer to explicitly passing a data structure into a function as an argument; it's operating on shared state only in the same way that any function which takes for example a string or a hash does.
Passing a mutable value into a function still counts, I think.
Indeed. Structs are specific; self is "the whole shebang". And while (annoyingly) explicit in Python, it is common OO practice to pass self to all non-static members, so de facto the self is mostly there. Writing member methods as static (or class methods, as they're called in Ruby land) is a great way to make code a bit more "functional".
At some point you simply do not use self anymore and the class is merely a namespace with some unused features.
Can't you just use records and named user-defined types?
* Hard to track mutations
I feel he's mixing up imperative and OOP. It's not even every OOP language that has the self-referential `this` keyword. Basically, I struggle just as much with this in imperative and OOP languages. The fix in FP is that you don't design things to be mutated in the first place, so you don't need to track it.
* Partial evaluation
Strange name; I think he meant partial application. Anyway, OCaml is actually curried, which differs from explicit partial application in that it's implicit. I agree with him a little here: don't abuse this, but the feature does come in very handy once in a while. I think that just comes down to learning good practice, like in all languages; you still need discipline.
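The distinction is easier to see in Python, where neither is implicit, so both styles have to be spelled out (a toy example, names hypothetical):

```python
from functools import partial

# Ordinary uncurried function.
def add(x, y, z):
    return x + y + z

# Currying by hand: a chain of one-argument functions. In OCaml every
# function behaves like this automatically.
add_curried = lambda x: lambda y: lambda z: x + y + z
assert add_curried(1)(2)(3) == add(1, 2, 3) == 6

# Partial application: explicitly fixing some arguments of the
# uncurried form.
add_1_2 = partial(add, 1, 2)
assert add_1_2(3) == 6
```

In a curried-by-default language, `add 1 2` silently does what `partial(add, 1, 2)` does here, which is both the convenience and the confusion.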
* High order function
I really don't understand this one. Just look at the doc-string or the function signature if you're confused about what arguments the higher-order function takes.
* Passing values around
I find reduce way better. How weird is it that sum is mutated outside the loop? I've seen plenty of imperative code where the mutated accumulator isn't declared near the loop, or is reused from one loop to another.
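A side-by-side sketch of the two styles in Python:

```python
from functools import reduce

nums = [1, 2, 3, 4]

# Imperative style: the accumulator lives (and is mutated) outside the loop,
# so any code between its declaration and the loop can interfere with it.
total = 0
for n in nums:
    total += n

# With reduce, the accumulator is local to a single expression and cannot
# leak into, or be clobbered by, surrounding code.
assert reduce(lambda acc, n: acc + n, nums, 0) == total == 10
```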
* Lack of methods
I think he's confused with namespacing? Also, methods are an OOP concept, not imperative; an FP/OOP language could easily have them. Anyway, I find methods are one of the worst things in mainstream OOP. That's why all modern OOP langs are adding traits. Methods are closed, so a type can't be extended to support more of them. They also get abused as namespaces, when really methods should only operate on the class fields.
* Lists
Well, not all FP languages are built on the same data structures, so that might be OCaml-specific. I'm sure there are libraries to fill the void.
I think a lot of these grievances, substantial or no, are addressed pretty well in Clojure. Early return comes to mind. I do it all the time in the form of a loop macro "spitting out" a value instead of recurring. I know I felt more handcuffed in other functional languages, where I had to get weird to do what I wanted...but I don't think it has to be this way.
The pop culture of software engineering is tragic. Legitimate complaints about ergonomics and performance (linked lists are bad) are mixed here with pitiable ignorance (the problem with higher-order functions is... passing functions by name rather than making the inner application explicit? which is addressed by... eta-abstracting them, while not explicitly applying the resulting closure?). The author confuses partial application for partial evaluation, when the actual complaint seems to be about functions being curried by default: https://en.wikipedia.org/wiki/Currying#Contrast_with_partial...
There are legitimate complaints about the annoyances of threading state through pure functions, but these have been made much more cogently by other sources within the "functional programming" community: Haskell has no end of blog posts about the utility of do-notation to provide the appearance of mutability while leaving state-manipulation implementable in pure, easily-unit-tested code, and early return can also be captured with monadic notation.
The naïve rejoinder to this approach is "why use a pure language if you're just going to emulate mutation?". The answer is that it lets you reuse the pure computations that transform state in other parts of your code without having to create dummy mutable objects for them to modify, and you have guarantees that unrelated state is not perturbed; this enables purely local reasoning which keeps code easy to understand.
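The flavor of the idea can be sketched even in Python: each step is a pure function from a state to a (value, new state) pair, and a tiny runner threads the state so the call site reads almost imperatively. (This is only the skeleton of what do-notation/State provides; all names here are hypothetical.)

```python
# Each "action" is a pure function: state -> (value, new_state).
def push(x):
    return lambda stack: (None, [x] + stack)

def pop():
    return lambda stack: (stack[0], stack[1:])

def run(steps, state):
    # Thread the state through each step; no step mutates anything.
    value = None
    for step in steps:
        value, state = step(state)
    return value, state

value, final = run([push(1), push(2), pop()], [])
assert (value, final) == (2, [1])
# The original empty stack is untouched: every step built a new list.
```

Because `push` and `pop` are pure, they can be unit-tested by feeding them a state and inspecting the pair they return, with no mutable fixture to set up.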
Presumably this is to be considered worthwhile reading because the author has a job at Facebook. It's shameful that we implicitly support the cult of corporation-worship that assumes companies wielding great social power also confer supernatural programming skill or insight on their employees (by powers of selection or mere post-hoc association).
Languages are competing for attention and mindshare. It's important to consider new user experiences both from a marketing as well as a product improvement standpoint. Right now, javascript is winning.
> if there's a bug in the first call site, then the inference engine is going to assume that it is correct and going to raise an error in the second callsite and the definition. So you've got crazy error messages and it's super hard to track down that the first callsite is responsible.
Is there a way to write OCaml code to minimize the risk of this?
I'm gonna have to defend vjeux here a bit. Disclaimer: I work on Reason (and help manage its community), a layer on top of OCaml, and targeted (well, cited) in the post. I've known vjeux for a while too. If he's accused of being a "junior engineer" then I don't know what the heck most of us are doing.
Lack of names: this deserves to be solved. OCaml in particular pays a bit more attention to it than others. Labeled arguments, inline records, objects, variant constructors, etc. are all solutions to this. Tuple's usually the target of criticism when it comes to lack of names, and I do think the criticism is mostly valid. The convenient-ness of a language feature dictates its usage. When you can trivially return an obj/map from js/clojure you wouldn't resort to returning an array of 2 elements. But when these alternatives are heavier in a language (ocaml objs are rather heavy visually and have no destructuring; records need upfront declaration), you do see a bit more of the proliferation of tuples. This can be solved, but since it's deemed an "uninteresting" problem, it stays as one of the low-hanging fruits. In general, though, I've come to appreciate OCaml's pragmatism regarding these matters. It does try to solve these language usability concerns. The other offender is parametric types, but the situation is the same in typed JS solutions.
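The tuples-vs-names tradeoff shows up in any language; a small Python illustration (hypothetical names):

```python
from typing import NamedTuple

# Anonymous pair: cheap to return, but the positions carry no meaning,
# and callers must remember which slot is which.
def minmax_tuple(xs):
    return (min(xs), max(xs))

# The record needs an upfront declaration...
class MinMax(NamedTuple):
    lo: int
    hi: int

# ...but every use site is self-documenting.
def minmax_record(xs):
    return MinMax(lo=min(xs), hi=max(xs))

assert minmax_tuple([3, 1, 2]) == (1, 3)
assert minmax_record([3, 1, 2]).hi == 3
```

When the declaration is lightweight, people reach for names; when it's heavy, tuples proliferate, which is exactly the convenience-dictates-usage point.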
Hard to track "mutations": tangentially related, but in a parallel universe where FP is pervasive, I can see how folks might say "hey, this pattern of passing a self-like argument first is used so often, we should create a new syntax for it (dot access) and optimize it specially for perf & tooling". Anyway, uniformity by definition erases distinguishability; sometimes the distinguishability is appreciated, e.g. for perf and ease of grepping. Note how recent JS syntactical features are almost always faster than their polyfilled equivalents (obj spread, async, generator, arrow function).
Partial evaluation: the "monad" of the beginner FP experience, basically, in terms of social effects. From watching the community for so long, currying often seems to elicit a period of "this is weird -> oh I get it -> this is the greatest thing and I'll violently defend it against naysayers -> you know what, it's not all great; it's fine". Currying in an eager, side-effectful language is actually troublesome to implement & use. Some compilers don't optimize it, or worse, don't even get it semantically right. Won't cite examples.
Higher-order functions: same problem in JS. But yeah, combined with partial app this isn't immediately clear: `Foo.bar(baz(qux))`. What's the arity of `baz`? More importantly, at what "stage/state" of the function's logic are we at now? For that specific example, you can argue that the name `x` isn't that much more indicative (which ties back to the first point). But these are the exceptions rather than the norm. I'm sure people are fine reading `map(foo)`. The general point's still valid.
===========
I'll stop here because I'm bored, but you can see how these things aren't black and white once everyone just takes a deep breath, thinks a little, and finds a way to communicate a bit more nicely with each other. In some of the above points, I'm playing the devil's advocate because I feel it's needed to balance the overwhelmingly negative sentiment. Sorry if my emotions come through a bit here, but it's sad to see that some of the less polite replies come from FP folks who barely started FP, through ReactJS/React Native, and got overly excited to finally find a target to criticize in an act of catharsis, without realizing they're criticizing the co-author of said frameworks. Look, people are watching; disregarding whether the author's points are right, you'll be judged on how you react to them. And your collective reaction is a good assessment of how resilient the paradigm is against the real world's sometimes nitpicky, sometimes serious criticisms. The best engineers I've worked/am working with are able to cite tradeoffs and admit that their paradigm isn't perfect. It's an indicator that you've finally "got it", that you're able to assess a subject's nuances rather than seeing it as a binary thing.
I'm a bit frustrated to see that vjeux's post had to be retracted and that he had to apologize. Imagine the potential improvements we could have collectively made had these issues not been casually/emotionally dismissed. Now once again the gist and this reply will be forgotten and we'll have to move on and count on word-of-mouth to propagate solutions to these criticisms rather than codify it somewhere like the programmers we should be. On the other hand, I am glad to see that most of the harsh replies don't come from the Reason community. Ultimately, I wish the community to learn to welcome newcomers, to learn _how_ to educate (and not just what), to understand FP's tradeoffs, to stay mature to get work done.
Pretend you're the CEO of Functional Programming. Isn't this precisely the sort of user feedback you want to hear, to gain traction and marketshare? Your competitor, Javascript, is winning.
I don't think it's particularly helpful. There's not much to be done, apart from write documentation explaining why what he thinks is bad is actually good, I suppose.
https://gist.github.com/vjeux/cc2c4f83a6b60d69b79057b6ef651b...
[0]: http://existentialcomics.com/blog