This is exactly backwards. OOP is about nouns, about what things are. As Steve Yegge put it, functional programming is about verbs, about what your program does to the data passing through it.
Steve's original post, "Execution in the Kingdom of Nouns," is much clearer: http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...
"Your code isn't just a linear sequence of instructions, but a tree of nested expressions."
This is also off the mark. If you're working in a Lisp, sure, you're directly manipulating abstract syntax trees. But this is a mark of a homoiconic language, not a functional one.
"Functions are ordinary mathematical objects, just like an integer is."
Also wrong. In a functional language, functions are first-class objects, meaning they have an existence independent of classes. But they're something different from integers, which are usually defined as primitives in the language.
Instruction sequence vs Expression tree: it's not just Lisp. OCaml and Haskell apply as well. We don't manipulate the syntax tree, but that doesn't mean it isn't there. For instance, the following is a valid OCaml or Haskell expression:
if condition1
then foo
else bar (if condition2
          then baz
          else wiz)
which a C programmer would write as:
condition1 ? foo : bar(condition2 ? baz : wiz)
Functions as ordinary mathematical objects. I'm just saying that functions belong to the bucket of mathematical objects. As do integers. And sets. And lists… "First class" just means we treat them as such. From https://en.wikipedia.org/wiki/First-class_function
> In languages with first-class functions, the names of functions do not have any special status; they are treated like ordinary variables with a function type.
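Concretely, that "no special status" looks something like this (a minimal sketch; the names `increment` and `applyTwice` are mine, not from the article):

```haskell
-- A function bound to a name, like any other value.
increment :: Int -> Int
increment x = x + 1

-- A function that takes another function as an ordinary argument.
applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)

main :: IO ()
main = print (applyTwice increment 40)  -- prints 42
```

Nothing in the syntax distinguishes passing `increment` from passing an integer.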
This is both misleading and false. Functions, by their very nature, "do something."
Perhaps you're referring to the immutability of data in a functional language? If so, say so.
"We don't manipulate the syntax tree, but it doesn't mean it isn't there."
A statement so ambiguous as to be meaningless. All compiled languages have a "syntax tree" lurking in the background. It just so happens that Lisp and its variants let you manipulate that tree directly, rather than writing in code that gets converted to that tree at compile time.
Perhaps you're referring to the fact that in Haskell, you often don't control the order in which code is executed, unlike in an imperative language where you have explicit control?
"...just saying that functions belong to the bucket of mathematical objects. As are integers. And sets."
Nope. The quote you use has it right. The names of functions are variables, whose value is looked up like any variable. This lets you pass functions as arguments to other functions, or use functions as the return value for other functions. Functions are most definitely not just like integers in a functional language.
On the other points, let me remind you that a function is a subset of the Cartesian product of its domain and its co-domain. How does that do anything? How is that any less ordinary than a mere set? Really, the only reason we feel that functions do something is that we generally perform only one action with them: looking up the result, given a parameter.
I insist because it is precisely this supposedly special, holy, first-class status that makes functions scary. Remember the primal fear you felt when your high school teacher told you that there is an operator that can compose functions together? Neither do I. But it did make clear at that point that functions aren't that special. If they were, how could I write "f∘g" just like I would "x+y"? How could I write "f" alone, without an "(x)" right next to it?
Functions do have their specificities, and they are special in the sense that they are a tremendously useful, wickedly powerful concept. But they're still mathematical objects. Making them first class in a programming language just lifts restrictions that were there only because they made the language easier to implement.
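In Haskell this is literal: "∘" is spelled `.` and sits between two functions exactly the way `+` sits between two numbers (a small sketch; the function names are mine):

```haskell
double :: Int -> Int
double x = x * 2

increment :: Int -> Int
increment x = x + 1

-- Composition used exactly like any other binary operator,
-- the way f∘g is written in math.
incrementAfterDouble :: Int -> Int
incrementAfterDouble = increment . double

main :: IO ()
main = print (incrementAfterDouble 5)  -- prints 11
```

Note that `incrementAfterDouble` is defined without any "(x)" next to it, just as one writes "f∘g" on its own.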
Perhaps the article would be clearer if you stated up front that you're trying to talk about the mathematical underpinnings of the functional programming paradigm, and not how imperative and functional programming languages differ in practice? From a practical programming standpoint, your characterization of functional programming is both incorrect and confusing.
"In the Haskell code, the data flows from right to left, instead of from top to bottom"
I find this article vague, imprecise, and forced. It's like those "let's do OOP for the sake of doing OOP" articles, with OOP swapped for functional programming.
Also, I didn't mean to advocate FP, at least not there. I have written it for the poor C++/Java programmer who is forced to decipher an OCaml script written by a jerk who since left the company. (Disclaimer: I would be the jerk.)
Not just you; approx. 90% of the software industry suffers from it (big IMHO here, of course).
I think too much emphasis is placed on paradigms and magic approaches, instead of talking about how things actually are.
Something like this:
"Is C++ a big pile of mess?
Is Haskell a much better designed language, even if its implementations are slower because the higher level of abstraction clashes with the underlying hardware currently in use?"
Also, what you are trying to describe with all those top-bottom-left-right and similar analogies is the fact that in FP languages you mostly work with expressions, while the building blocks of imperative languages are statements.
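A quick illustration of that expression/statement distinction (my example, not from the article): in Haskell, `if` is an expression that yields a value, so it can sit anywhere a value can.

```haskell
-- 'if ... then ... else' is an expression: it produces a value,
-- so it can appear inside a function call or a binding,
-- where an imperative language would need a statement.
sign :: Int -> String
sign n = if n >= 0 then "non-negative" else "negative"

main :: IO ()
main = putStrLn (sign (-3))  -- prints "negative"
```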
I don't really have the time to continue here; maybe I will write a blog post myself too :).
I'm a C programmer, and I've gone through the article a couple of times without enlightenment. This isn't necessarily the fault of the article, but I thought I'd offer the places where I couldn't follow. Perhaps they will be useful for understanding a beginner's viewpoint.
int square(int n) │ square :: Int -> Int
Now it is possible to write the Haskell program so that it
reads naturally from left to right
-- before │ -- after
compute n = baz (bar (foo n)) │ wiz n = bar (foo n)
│ compute n = baz (wiz n)
(.) :: (b -> c) -> (a -> b) -> (a -> c)
g . f = λx -> g (f x)
be expressed literally (see g . f vs λx -> g (f x));
-- A List is either
data List a = Empty -- an empty node,
| Cons a (List a) -- or a cons cell
inc-all l = map (λe -> e + 1) l
It is not obvious from the syntax, but Haskell functions
only have one argument.
Multiple arguments are emulated by returning functions.
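For instance (a sketch of that point; the names are mine): a "two-argument" function is really a one-argument function that returns another function.

```haskell
-- 'add' looks like a two-argument function, but its type is
-- really Int -> (Int -> Int): one argument in, one function out.
add :: Int -> Int -> Int
add x y = x + y

-- Supplying only the first argument yields a new function.
addFive :: Int -> Int
addFive = add 5

main :: IO ()
main = print (addFive 37)  -- prints 42
```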
inc-all (Cons e l) = Cons (e + 1) (inc-all l)
map :: (a -> b) -> List a -> List b
dbl-all (Cons e l) = Cons (e * 2) (dbl-all l)
Note that the first argument is a function, hence the (a -> b) between parentheses.
Very simple, but without the fundamentals I just gave,
one hardly stands a chance at deciphering it.
Despite this long list, I appreciate that you wrote the article. Much better to have something possibly flawed than nothing at all. Maybe one day I'll actually understand it!
Types are capitalized. Lower-case types in signatures are placeholders in polymorphic functions. The signature
(.) :: (b -> c) -> (a -> b) -> (a -> c)
The type signature deals only with types; it is not connected to the names of the function parameters.
Lambda expressions are written without types; they can be inferred (i.e. the return type is not some void*, but is exactly determined by the compiler). A lambda expression in lambda calculus notation would be written "λx.(+ x 1)". Haskell slightly modifies this, and allows infix operators: "\x -> x + 1" (the \ looks a bit like λ). Functions whose names consist of punctuation are infix by default.
About the data expression: this constructs a new type (akin to a typedef). The "|" operator is just used in a declarative fashion. Every time you write Empty in your source code, that thing is a List. The expression "list = Cons foo Empty" would create a list of one element (foo). A cons cell is a pair; this terminology is popular in Lisp. These confusing data type constructors allow pattern matching, as you have seen in the definition of "map": if the list is Empty, return Empty; if the list is a Cons of an element and another list, then return a Cons of (the result of our function applied to that element) and (the rest of the list mapped) – recursion.
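Spelled out in code, that description of map looks like this (a sketch consistent with the article's List type; I've named it map' and added deriving clauses so results can be printed and compared, which the article's version doesn't have):

```haskell
-- The article's List type, with deriving added for printing/comparison.
data List a = Empty
            | Cons a (List a)
            deriving (Eq, Show)

-- One equation per constructor: this is pattern matching.
map' :: (a -> b) -> List a -> List b
map' _ Empty      = Empty
map' f (Cons x l) = Cons (f x) (map' f l)

main :: IO ()
main = print (map' (+ 1) (Cons 1 (Cons 2 Empty)))
```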
Now, about your questions…
foo :: Int -> Float
Oh, and type names are capitalized in Haskell. "list" was a typo.
λx -> blah_blah
(Oh and in Haskell, we actually use '\', not 'λ'.)
A cons cell is just a place where you store two values. Here this particular cons cell contains an element, and a pointer to the rest of the list. I reckon I was going way too fast on that one.
In type declarations, the arrow is right associative. So,
foo :: a -> b -> c
is the same as
foo :: a -> (b -> c)
> Would be great if you offered an example where the argument and return type differed. As is, I don't know which order the Haskell declaration uses.
In Haskell, the last (rightmost) type is the return type, e.g. countLetters :: String -> Int means it takes an argument of type String and returns a value of type Int. Also, types are chained together, so for example foobar :: String -> Int -> Bool is a function that takes two arguments, the first being a String, the second an Int, and it returns a Bool; and foobarbaz :: String -> Int -> Bool -> Char is a function that takes three arguments, the first a String, the second an Int, the third a Bool, and it returns a Char. This syntax makes a lot of sense when exploiting "higher-order functions" and "currying". I mention this only to provide keywords for your personal search.
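To see that chaining concretely, here's a runnable sketch using the hypothetical foobar signature above (the body is my invention, just to make it compile):

```haskell
-- The hypothetical foobar from above: String -> Int -> Bool.
foobar :: String -> Int -> Bool
foobar s n = length s == n

-- Currying in action: supplying only the String
-- leaves a function of type Int -> Bool.
checkHello :: Int -> Bool
checkHello = foobar "hello"

main :: IO ()
main = print (checkHello 5)  -- prints True
```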
> If this is essential at this point, it should be fleshed out more. Otherwise it feels like a distracting aside and out of place in a short introduction.
In my view this is an aside.
> Not clear why the left side is (.). Is this because it's non-alpha? Because it's infix? Something else? Order to read right hand side still unclear. Also don't know if reuse of a, b, and c as variables is significant.
To write a function whose name consists of punctuation characters, Haskell syntax demands that it be wrapped in parentheses, and such functions are automatically infix. You can use any function infix by wrapping it in backticks, e.g. x `f` y is the same as f x y, which means (f x) y, which means: apply the function f to the argument x, and then to the argument y.
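For example (a tiny sketch; the function name is mine), the same function called both ways:

```haskell
divides :: Int -> Int -> Bool
divides d n = n `mod` d == 0

main :: IO ()
main = do
  print (divides 3 9)    -- prefix application: prints True
  print (3 `divides` 9)  -- the same call, infix via backticks: prints True
```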
'a', 'b' and 'c' are type variables, similar to type variables in Java and C# when dealing with generics. They can stand for any type. So, 'a' could be String or Char or Bool or any other type. As could 'b' and 'c'. So 'a' may or may not be the same type as 'b' or 'c'.
I'm not sure I can make it any clearer, but note that the signature of (.) has two arguments and a single return type. The first argument is (b -> c), which means it is a function that takes a single argument of type 'b' and returns a value of type 'c'. So the first argument of (.) is a function. The second argument is (a -> b), which is also a function. And the return type, (a -> c), is yet another function.
Personally I think (.) is a very hard function to grok and certainly far too complex for the first higher order function you see, exponentially so if you're still trying to understand Haskell syntax.
> Would help to define a "cons cell". Is Cons a data type or a function here? Is there a standard for capitalization? Not clear to me how the 'a' on the left side relates to the two on the right. Is 'data' a keyword, a type, or something else?
data is a keyword. It's how you define a data structure in Haskell. In this case, it's defining a data structure called List which will wrap elements of type 'a'. In other words, 'List Int' is a type, 'List Bool' is a type, 'List Char' is a type. List is broadly the same as a generic in Java or C#, where List<String> is a type and List<Bool> is a different type. Cons is a data constructor (and yes, it's standard to capitalise data constructors), which you can think of as being a function Cons :: a -> List a -> List a. It takes two arguments, an 'a' and a 'List a', and returns a 'List a'. Here the type variable 'a' does something, as Haskell ensures that if the first argument is an Int, then the second must be a List Int and the result will be a List Int. If the first argument is a String, however, then the second must be a List String and the result will be a List String.
> I think a gloss of what 'λe' actually means would help
λe means "create an anonymous function that takes an argument called e" (its type is inferred). And (λe -> e + 1) is an anonymous function that adds one to whatever value is passed to it.
Seriously, check out Learn You a Haskell.
Perhaps you meant one of these:
Left "the author has a shiny new keyboard with a λ key"
Right "they really really like to copy and paste"
I don't think Haskell is really the best choice of language for demonstrating these FP concepts simply because Haskell's syntax is a little cryptic to those who haven't studied it. Lisp would have been a more readable choice since the syntax is trivial.
Also, you didn't touch on recursion, which is a key concept in FP. The idea of having a function call itself repeatedly in order to loop through a data structure, instead of iterating over the data structure by mutating a counter variable, is a huge paradigm shift for people who are new to FP.
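For example (my sketch, not from the article): summing a list by recursion, with no counter variable anywhere.

```haskell
-- The base case ends the "loop"; the recursive call advances it.
-- Nothing is mutated, unlike an imperative counter-driven loop.
sumList :: [Int] -> Int
sumList []       = 0
sumList (x : xs) = x + sumList xs

main :: IO ()
main = print (sumList [1, 2, 3, 4])  -- prints 10
```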
Also, if you look at other popular technology like OOP, they were pushed even harder than functional programming. When I was initially learning programming, I read a ton of books and articles all advocating it. Similarly, I've seen more people advocating Python than Haskell, and that worked too.
So maybe it's not enough by itself to make something popular, but it seems necessary, or at least very helpful. There's certainly no reason to stop trying!