For all those hating on this blog post: It's short, but it doesn't have to be long to be useful, and apparently it is useful given a substantial chunk of the comments here. If it wasn't news to you then ignore it, otherwise appreciate that not everybody is quite as enlightened as you are.
HN has a tremendous reach these days, from pioneers in the field to total (and often clueless) newbies. Fogus felt compelled to write down his thoughts, someone else saw fit to post it and as of this moment 100+ people thought it was useful enough to vote it up. If that's not validation enough for you then remember that time when you spent half a day tracking down an endless loop and you had to ask a more seasoned programmer to point it out to you.
Fogus is a respected member of the community because he tends to write down what he thinks in a way that others much further down on the totem pole can grasp it too, his articles are not upvoted because he's 'well known' but mostly because people find them useful.
If you feel like this article was 'devoid of content' or whatever slur you feel like directing at it then please, write a better one.
Content is not determined by the quantity of words on a page.
Some of the deeper links between linguistics and programming can be quite eye opening and it does not require a large number of words to put those down. If you find one of these, especially through your own work (as in 'from the trenches') then that should carry some weight with you.
If you're doing this independently and re-discovering something that apparently a lot of people such as you have already found out or learned about elsewhere then indeed it is devoid of content. But I'll bet that it wasn't the case for the majority of those that read it.
Lots of programming wisdom is so terse that we have acronyms for it (DRY for instance), that does not mean there is no content there.
Anyway, enough said. I think it is worth reading, and worth 'grokking' for want of a better word, in case you had not already discovered this. Naming stuff is one of the harder things in programming. That different streams of programming should lead to different groups of words being used to describe the code should probably come as no surprise, and yet I find myself amazed that there are such deep connections.
For me, the big confusion comes from the fact that if you simulate people, you almost inevitably have to deal with data about people. How can those two be mutually exclusive?
Without a clear definition or concrete examples of what the OP meant by "simulate people" versus "data about people", the post is almost devoid of information (to me; apparently, a lot of people understood what OP meant).
For me, the big confusion comes from the fact that if you simulate people, you almost inevitably have to deal with data about people. How can those two be mutually exclusive?
They're not mutually exclusive: the inverse is not necessarily true. This is basically the entire point of the article. When you're just working with data, don't use constructs which are meant for simulation. It's a subtle way of saying "Don't model data about people (or any other piece of pure information) with mutable objects; prefer values instead."
Most 'objects' as they exist in today's OOP have methods, which makes them both nouns and verbs. Idiomatic code in most of today's mainstream OO languages doesn't make heavy use of closures; JavaScript is the main exception.
Anyway, the closures vs. methods debate has been going on for decades and is not a closed topic[1], and the answer certainly depends on the language you're using.
I don't believe this is a nitpick, but fully encapsulated OO provides only nouns and verb phrases that have a pre-specified noun. This is different from data structures (nouns) with collections of functions (verbs) that can act on many data structures in a consistent way.
What you say works well if you already have your data nicely divided into different kinds of objects. However a useful technique with closures is a dictionary of closures allowing you to map things that may be of the same type to different behavior. Wrapping those in objects so that you can use method dispatch can be a lot of useless work.
As a concrete example I point you at http://www.perlmonks.org/?node_id=34786 where I demonstrated how to transform a markup language into HTML through the technique of grabbing a token, then passing it to a closure that is associated with that token to consume a bit of the document. Porting that closure-based code to an OO style would be a nightmare.
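The same dictionary-of-closures idea translates directly to other languages. Here is a minimal Python sketch (the tokens and handlers are made up for illustration; this is not the PerlMonks code): each token maps to a closure that renders its bit of the document, and dispatch is just a dictionary lookup rather than method dispatch on a class hierarchy.

```python
def make_translator():
    # Each token maps to a closure; adding a new behavior means adding
    # an entry, not defining and wiring up a new class.
    handlers = {
        "*": lambda text: f"<b>{text}</b>",
        "_": lambda text: f"<i>{text}</i>",
    }

    def translate(tokens):
        html = []
        for token, text in tokens:
            # Look up the behavior by token; fall back to plain text.
            handler = handlers.get(token, lambda text: text)
            html.append(handler(text))
        return "".join(html)

    return translate

translate = make_translator()
print(translate([("*", "bold"), (None, " and "), ("_", "italic")]))
# → <b>bold</b> and <i>italic</i>
```

The point is that things of the same type (tagged text fragments) get mapped to different behavior without any wrapper objects in sight.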
Verbs/functions are easier to compose and recompose; they're easier to reason about (there is no "state" in verbs; verbs are declarative.)
Nouns (assume: with behavior; not just data -- otherwise you are in FP-land again) are harder to compose and recompose, harder to reason about (state transitions).
Consider calculating age. Assume general FP and general OOP ways of solving the problem. Then:
ageOf(p)...
...ageOf() can potentially be applied (or modified to apply) to different value types.
...self-evidently does not mutate p or cause a state transition (assuming FP).
p.ageOf()
...can only be applied to value types/classes that support that operation.
...may or may not mutate p/cause a state transition of p.
One is much easier to reason about and reapply when the world changes.
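To make the contrast concrete, here is a hedged Python sketch of the two shapes (all names are made up for illustration):

```python
from dataclasses import dataclass
from datetime import date

# FP style: a free function over immutable values. It self-evidently
# cannot mutate its argument, and it applies to any value with a
# birth_year field, not just people.
@dataclass(frozen=True)
class Person:
    name: str
    birth_year: int

@dataclass(frozen=True)
class Building:
    address: str
    birth_year: int  # year built

def age_of(thing, today=date(2013, 1, 1)):
    return today.year - thing.birth_year

# OO style: the same operation as a method, tied to one class. Nothing
# in the signature tells you whether calling it mutates the receiver.
class MutablePerson:
    def __init__(self, name, birth_year):
        self.name = name
        self.birth_year = birth_year

    def age_of(self, today=date(2013, 1, 1)):
        return today.year - self.birth_year

print(age_of(Person("Ada", 1815)))           # 198
print(age_of(Building("10 Main St", 1900)))  # 113
```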
For a start when we're talking about closures versus objects, we're not necessarily talking about true functional programming. Closures can have and manipulate internal state. Pretending that they don't ignores real world usage where they do.
However let's limit ourselves to functional techniques. Whenever someone talks about something in the very abstract and says "easier to reason about" I mentally insert "for you". Certainly it isn't true for everyone or every situation.
A basic principle in programming is that starting at the hardware level we build abstraction layers that let us think about things in higher level ways, hopefully ultimately in a way that matches the terms that the end application uses. End applications have concepts like account balances, user status, and other such stateful pieces of information. If you're going to have state, then working around it by pretending you don't will cause confusion. Pick your favorite 5 Monad tutorials for proof.
Consider as an example the use case, "Provide a form to let us mark spammers, and block them from our site." For all the theoretical advantages of avoiding state, this is going to be easier to model in a system where users have a piece of state known as is_spammer.
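A minimal sketch of that use case, with the state modeled explicitly (the names here are made up for illustration):

```python
# Explicit mutable state: the user carries an is_spammer flag that the
# moderation form flips, and the rest of the site consults.
class User:
    def __init__(self, name):
        self.name = name
        self.is_spammer = False  # the piece of state in question

def mark_spammer(user):
    user.is_spammer = True  # what the "mark spammer" form does

def can_post(user):
    return not user.is_spammer  # what the rest of the site checks

u = User("mallory")
assert can_post(u)
mark_spammer(u)
assert not can_post(u)
```

The purely functional alternative would thread a new immutable `User` value (or a set of banned names) through every call site, which is doable but, for this use case, arguably harder to model.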
This post is nearly devoid of information. At least an example of what the author means by modeling data about people vs. simulating people might have helped. Most problems I encounter don't relate to people at all, but maybe I'm just supposed to replace 'people' with 'things'?
I'm puzzled why this is currently at the top of HN. I'm assuming the author must be someone well known who I've missed out on, as that seems to be the only way posts devoid of value push forward around here.
I'm not criticizing you at all. You're of course free to write whatever you want. HN's tendency to upvote stuff from well known people just because they are well known is a little annoyance I have with the site.
If that were the case for me then anything I would write would get upvoted in bulk, but that's just not been the case. I wrote it because I thought it might be useful to some people and presumably it has been for those who've upvoted it. On the other hand, those who have found it worthless have expressed so very strongly in this comment thread. So it goes.
I do agree with bluehex: an example would definitely have helped. It seems you have a new perspective on the topic, which is great, but the current version sounds more mystic than informative. Here is my most confusing question: why are you bringing people into the equation? Are these end-users? Or programmers? If they are end-users, why do they care what programming paradigm is used?
The idea about mixing OO and FP at different levels is a good one. I've had similar experiences.
I still don't understand. If you're writing SimCity ("simulated people") you should use OOP, and if you're writing genealogy software ("data about people") you should use functional programming!? I assume I misunderstood what you meant, because otherwise I have no idea how you arrived at that conclusion.
No need to apologize. How do you factor in relationships (between objects)? I feel that with the different kind of relationships between objects, the kind of operations would change, and therefore different paradigms would provide different strengths.
I do think it is the case for certain high profile people (the mega super stars). Regardless, it's not important enough to warrant a drawn out debate here. Sorry to see you're getting such a binary reaction on this post. Like I said before, my concern with it is not aimed at you.
Yeah, "devoid of value" is a little harsh. I apologize for that. But it's not up to the blog author to ensure their post has enough value. In this context that's the job of HN.
It's definitely a very small post (I expected more when I opened it) but I assumed that people upvoted it because they were interested in the discussion that it would produce. There's definitely plenty to say on the subject of functional vs. object-oriented programming, even if this article doesn't happen to say much.
I do it fairly often depending on the topic. There are topics for which I am more interested in the discussion than in the actual article.
For an article like this I'm split. I'm a passionate programmer and would love to read a good article on the tradeoffs between functional and object-oriented styles. I'd also love to read a good discussion about the two styles.
For some topics, say, a site being down, I don't really care about the discussion in most circumstances.
For many other topics, I'm more interested in a good discussion about the topic than I am in any particular article on the topic. It's harder for me to articulate which topics those are, but I know them when I see them.
Simula is regarded as the first object-oriented language, and the purpose of the language was simulation.
The idea of using an object oriented approach to simulation goes back to at least the sixties.
The way I read his post is that working on sets of information tends to be easier in functional languages, while event-based simulation tends to be easier in object-oriented languages.
I always thought that FP and OO were orthogonal concepts and never considered them mutually exclusive. To me, the antithesis of FP would be imperative programming. Maybe I'm missing something?
CLOS, OCaml, Scala, I can think of many examples of languages that include components of both FP and OO. Or maybe we're simply considering that OO means this special paradigm used in C++/C#/Java (that really only claims to be the true and only OOP by the commercial entities backing these languages)?
PS: I googled around a bit, this Clojure article [1] seems relevant.
Actually, FP and OO are mutually exclusive to some degree:
OO comes from the imperative end of the spectrum (after all, objects are an abstraction over mutable state), whereas FP comes from the declarative end (a pure function is a relation between sets and mutability is a foreign concept).
However, most programming languages are impure and support both paradigms, so you can get away with thinking about OO and FP as orthogonal ways to reason about and structure your code; language syntax and semantics may of course favour one concept over the other...
You can have pure objects, depending on how you define 'object'. If your conception of object-oriented programming involves mutable objects changing state in response to messages... then yes, FP and OO are strongly at odds. However, if your conception of object-oriented programming involves information hiding via packaging functionality into units called 'objects' that expose public operations which are written in terms of some manner of implicit or explicit 'self' parameter... why not? That's what OCaml has, after all, and there was even an O'Haskell at one point[1].
Unfortunately, discussions on the merits of FP versus OOP tend to be asinine just because every participant has their own personal definition of both 'functional programming' and 'object-oriented programming', often based on implementations in a particular language. How does Haskell's idea of FP compare to Java's idea of OOP? What about Agda versus Python? Erlang versus Smalltalk? Does FP require purity, or types, or pattern matching?[2] Does OOP require inheritance, or private members, or classes? Without knowing these details, who can even tell what schools are being advocated or why?
[1]: In OCaml, objects can contain mutable values but they must be explicitly marked as such. An object with no mutable values is effectively a pure object, and there are many good reasons to use such an object—analogous objects are useful even in other languages not traditionally considered functional, e.g. new PureObject().someMethod().otherMethod() creates a series of objects which themselves needn't expose a stateful interface at all.
[2]: I'm not asserting that FP requires these things, but I have seen discussions where people clarify that languages can't be functional without function composition or algebraic data types or typeclasses or what-have-you, usually working off an informal definition of 'functional language' based on their language of first exposure.
Now you have made me curious. Do you have any pointers to some source material about "pure objects"?
I find it especially interesting because under normal circumstances an object with no mutable state is no different from a collection of methods (which can be encapsulated in different ways of course; say as an OCaml module).
So I am curious as to what advantage an object system that allows only pure objects provide.
The William Cook paper 'On Understanding Data Abstraction' gives a basic formulation of pure objects. Crucially, an 'object' needn't be what the language itself calls an object, e.g. I could write up a 'set object' in Haskell, which does not have objects:
```haskell
data Set elem = Set { contains :: elem -> Bool }

insert :: (Eq elem) => elem -> Set elem -> Set elem
insert key set = Set { contains = c }
  where c key' = if key == key' then True else contains set key

union :: (Eq elem) => Set elem -> Set elem -> Set elem
union set1 set2 = Set { contains = c }
  where c key = contains set1 key || contains set2 key

emptySet :: Set elem
emptySet = Set { contains = (\ _ -> False) }

-- mySet corresponds to {1,2,3}
mySet :: Set Int
mySet = insert 3 (insert 2 (insert 1 emptySet))

-- myVal is True
myVal :: Bool
myVal = contains mySet 2
```
Depending on your definition of object-oriented programming, this could be considered an object or something else. William Cook argues that it's an object because access depends only on a public interface, i.e. I could add another implementation
```haskell
fromList :: (Eq elem) => [elem] -> Set elem
fromList list = Set { contains = c }
  where c key = key `elem` list
```
which uses a different representation underneath the surface, but the previous implementations of union and insert will work with it because it exposes the same interface... which is not the case with ML/OCaml modules, which must select a single implementation. This is a simplified version of the example Cook himself uses, so I urge you to read the paper if you want to understand more. (On the other hand, people in the Smalltalk/Ruby/&c camp will say, "No, of course that's not object-oriented—you're not sending messages!" So... it's a complicated debate.)
If you have only pure objects, you can still perform open recursion and leverage structural polymorphism (including row variables). Additionally, your objects are just values and, because they aren't mutable, can be held for backtracking.
For instance:
```ocaml
class virtual biggable x = object val x = x method x : int = x method virtual embiggen : biggable end
let o = object (self) inherit biggable 0 method embiggen = {< x = x + 1 >} end
let q f o = object (self) val x = o#x method x = x method embiggen = f {< x = x * 2 >} end
let a = q (q (fun x -> object inherit biggable x#x method embiggen = {< >} end)) o#embiggen
;;
# a#x;;
- : int = 1
# a#embiggen#x;;
- : int = 2
# a#embiggen#embiggen#x;;
- : int = 4
# a#embiggen#embiggen#embiggen#x;;
- : int = 4
```
> In OCaml, objects can contain mutable values but they must be explicitly marked as such.
Or in Scala, where you have to mark an object's fields either as `val` (immutable) or `var` (mutable).
Like /u/Fishkins said, the Scala course in Coursera gives some examples of the use of immutable objects ( https://www.coursera.org/course/progfun ). I am sure they are used in a lot of places in Scala code, not only in Odersky's course.
OOP is not about hiding or otherwise dealing with mutable state. It can be used that way, but that's a perverted idea of what OOP is.
OOP is about subtype polymorphism [1] by means of runtime single-dispatch [2] and modularity, as in abstract modules or components that help with decoupling, a notion supported explicitly by SML, or exemplified beautifully in the Cake-pattern used in Scala.
For me, OO means stateful objects communicating by message passing, and same as asdasf, I consider it a way to structure imperative code. I don't consider this a perversion - it goes back to Kay.
Your definition is of course an equally valid one that might be more appropriate in certain contexts - it just doesn't fit my mental model of computation equally well.
I've developed the opinion, based in large part on Kay's thinking, that the biggest wins of OO are better represented as a form of protocol design. The point of impedance comes in when you discover that not all protocols map cleanly to OO systems - some are synchronous and others asynchronous, some require more polymorphism than others, etc. Async message passing happens to be one of the more flexible mechanisms for protocol design, but it's only been relatively recently that industry languages have been considering it more carefully, and then it's framed as "concurrency" and not a general architectural consideration.
Likewise I see FP as a reactionary mechanism to write fewer and simpler protocols, because when the data is immutable the problem can usually be greatly simplified.
Other than that I would agree with him though.
Subtype polymorphism + runtime (single or multiple) dispatch.
Multiple dispatch is one of the things I really miss about Common Lisp. I do like OOP, but often the most intuitive way to think about something is multiple dispatch, and trying to shove it into a single dispatch model can produce some really ugly code. I'm pretty sure that is the main reason there are so many anti-OOP zealots in the world.
Also, I didn't realize Dylan had multiple dispatch. I should take a look at it sometime.
OOP is most definitely defined by encapsulation of state into objects that have identities. There are many ways to deal with messaging, on the other hand, as well as polymorphism, which are all compatible with what OO is from a design perspective. Otherwise, the anthropomorphic properties of the object are no longer sustainable, and its noun-ness disappears.
The cake pattern has as much to do with encapsulation as it does crosscut composition.
The antithesis of FP is imperative. And OOP is imperative. Imperative programming encompasses both procedural and object oriented programming. Those are simply two different ways to structure an imperative program.
I'd hesitate to say that FP is on the same level as logic programming, or to say that it is declarative.
It really depends on what you mean by FP. If you refer to lambda-calculus or to say, the head/tail decomposition so common in FP, then those are inherently sequential and not really declarative. If you refer to functors / monads, then you're on to something, but then again, OOP doesn't really imply imperative.
Good point, I think the common tendency to refer to procedural programming as either procedural or imperative interchangeably got me doing the same thing with functional and declarative.
I read that paper a couple years ago and it changed the way I see object-oriented programming. All of a sudden, the dependency inversion principle just clicked.
I hope I hit the upvote button instead of the downvote button. I'm typing this on a shaky train and my browser will only zoom in so far.
I'm curious, has anyone ever implemented a run loop in a functional style? I know there are recursive approaches to passing the time deltas and what not, but it seems rather difficult to make video games with a purely functional approach... Can anyone offer any insight?
The author's explanation was incredibly clear and I feel like I got a handle on it!
Just 'lift' the relevant game state and pass mutations along in a similar way to how you might pass exceptions along with a promise-based system... Then it eventually feeds to a makeSounds() or an updateAnimations() or whatever! That way the code can keep all related functionality in one place, instead of spreading sound or animation related methods all over the code base, encapsulated in each object with its own logic! Brilliant!
And pass the previous frame state to the same functional pipeline, recursively... Now I need to make an OOP vs FP game loop/physics engine in javascript and compare and contrast the approaches! (Feel free to leave the votes alone on this one, or down vote to oblivion, because I am just using HN for note keeping so I leave the thought in context...)
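For what it's worth, a tiny sketch of that idea in Python: the loop becomes a fold over a stream of time deltas, and each step maps an immutable old state to a new one without mutation (the "physics" here is made up for illustration).

```python
from functools import reduce

def step(state, dt):
    # Pure step function: (position, velocity) in, new tuple out.
    x, vx = state
    return (x + vx * dt, vx)

def run(initial_state, deltas):
    # Equivalent to the recursive formulation (run on the tail with the
    # stepped state), but expressed as a fold to avoid deep recursion.
    return reduce(step, deltas, initial_state)

final = run((0.0, 2.0), [0.5] * 4)  # 4 frames of 0.5s at velocity 2.0
print(final)  # (4.0, 2.0)
```

A real game would of course thread a richer state value and feed each frame's state to the renderer, but the shape of the loop stays the same.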
Caveat lector: I'm not familiar with the current state of the art in gamedev, so this may not be new or interesting.
The gist of this is that you prefer composition over inheritance, which is a bit more functional than an OO inheritance hierarchy. It's not strictly functional and it's definitely not purely functional. But it's a step in that direction, as it lends itself to viewing your system as a series of transformations over data rather than a system of actors.
The second piece is also possibly only tangential, as it does have state. But it shows an interesting way to incorporate concepts germane to FP like continuations, and it does use a homegrown Lisp as its scripting language.
I've recently experimented with using functional reactive programming to handle the events of a game. I've found it allows for a greater degree of abstraction than an event loop with callbacks, in the same way that higher-level functions like map/filter/reduce provide an abstraction over iteration.
One insight is that functional programming and pure functional programming are very different beasts. As an example, Clojure does not aim to be pure; Haskell does. In Clojure, the focus is on immutability, and state is modeled as a succession of immutable values. Example: the employees in a company are represented as an immutable value. A place (a ref) stores the current immutable value, and you can always ask this ref for that value, and set it to a new one. But the value itself is always immutable. In Haskell, everything must be pure, and monads are used to model things such as random number generators, I/O, and other side effects.
I find this post suggestive. The title—"from the trenches"—makes the important point that this is a reflection from experience on what seems to work best where. That's more valuable and harder to come up with than it sounds, because if you identify with a paradigm then you don't easily notice it getting in your way.
I would like to hear a more detailed treatment of the distinction between "dealing with data about people" and "simulating people". If these overlap (and surely they do), then you get into a circularity where your favorite paradigm leads you to see a problem its way, just as much or more than the problem nudges you toward a paradigm for solving it.
On the other hand, my experience matches the OP's that OO's sweet spot is close to its origins in simulation--where you have a working system that exists outside your program, your job is to model it, and--critically--you can answer questions about how it works empirically. When you aren't simulating anything, OO is cumbersome because it requires you to name and reify concepts that become "things" in your mind; these ersatz "things" continually draw attention to themselves and get between you and the problem.
I don't think you have to choose one or the other. They are not mutually exclusive. Furthermore, I still use Structured programming ideas along with OOP and FP.
Kinda weak post, though what he said in point four vibes with me, where he argues for using OO on the lower level with a functional API / abstraction.
This is a really underutilized design strategy. Much too often architecture is discussed in OO vernacular, and as a result is made much too complicated. I think we'll come to see OO as a low-level technique more and more.
Functional programming works great for processing data - you don't want to change the data, and there isn't a lot of state. You largely write various transforms and filters, which is right in FP's wheelhouse.
On the other hand, simulation is quite stateful, often has tons of little details that you don't want to expose to the world (encapsulation), the thing that you are simulating usually has a collection of varying behaviors that you can represent via polymorphism or composition, they have attributes, and the 'engine' is often easier to write if you use dynamic (virtual methods), static (generics), or duck polymorphism to get your things to do their, well, thing.
Which is not to say that either is undoable or really hard in the other paradigm, but if I am simulating something I am often thinking of 'objects' and doing things to them; the OO paradigm helps you program about that in a very expressive way. Ditto for FP and data processing - there I'm thinking about filtering and transforming data.
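A tiny illustration of the two mindsets (toy code, all names made up):

```python
# Data processing reads naturally as transforms and filters over values:
records = [{"name": "a", "score": 3}, {"name": "b", "score": 9}]
top = [r["name"] for r in records if r["score"] > 5]

# ...while a simulation reads naturally as stateful things doing things,
# with the details hidden behind a small interface:
class Boiler:
    def __init__(self):
        self.temperature = 20.0  # hidden, evolving state

    def heat(self, joules):
        self.temperature += joules / 100.0  # toy physics, not real units

b = Boiler()
b.heat(500)
print(top, b.temperature)  # ['b'] 25.0
```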
That is just my view of the author's article, with a caveat. I've seen terrible, beastly simulations done in OO that basically assumed that single inheritance is how you spell OO. A single type hierarchy bites you so hard: 'Oh, I'll make everything a widget with draw/eat/process/whatever methods'. Then requirements change, and widgets in this subtree need to do something you thought only widgets in that other subtree would do, the next requirement has you needing to reflect on who can do what, and so on. You end up with a huge Blob object at the top of your hierarchy, or you endlessly end up with huge blocks of RTTI/reflection to figure out what kind of thing you are dealing with, or otherwise try to work around the problem that the world is not easily decomposed into a single hierarchy.
In practice I find all of this terribly reductive. Need some small thing whose behavior varies in well defined ways? Use polymorphism. Have a bunch of stateful stuff that you need to keep track of? Encapsulation is nice. Want to perform some operation on a large collection of data? Look to transform(). Need to make a lot of inferences and conclusions? Perhaps logic programming will work there. Need to provide a framework where people can build larger systems? Hopefully you are using component or services based ideas. And so on. A programmer should have a bunch of tools handy, and use the best one for the job. On the other hand, just about every person I interview opines that we should inherit for re-use, and I silently despair.
Have a bunch of stateful stuff that you need to keep track of? Encapsulation is nice.
I think this one is open for debate. There are other models for managing state which make things simpler to reason about than classical OO with encapsulation. One such example is Clojure's epochal time model.
I think everything I wrote is a matter for debate. It's impossible to capture all of software design in two paragraphs. Ten minutes ago I was writing some code to display a baseball game - a quick Python hack, not a game or anything. To me a class plus a bit of encapsulation for things like players, ball, the field is 'just right' in a Goldilocks way. A bigger problem, a different domain, and your link may make a lot more sense. My real point was that you pick and choose based on your needs, not that some incredibly hastily written list is immutable and not open to argument.
I would also suggest you (perhaps through equal haste in writing) made a category error. Encapsulation != OO. For example, I can achieve encapsulation in C just by putting variables in my .c file, and not distributing the .c, but only the headers and a lib. I am not trying to nitpick, but wondering if 'classical OO' is part of your assumption.
In any case, I have never programmed in Clojure, and know nothing about epochal time models. It looks interesting enough, but is it a tool I can readily reach for if I am programming in C++, Python, or what have you? Will others understand it? Googling provides only a dozen or so relevant links. I think all in all I stand behind "Encapsulation is nice". It is nice, it is not the only way or necessarily the best.
I would also suggest you (perhaps through equal haste in writing) made a category error.
Yes, indeed it was haste. Where I really want to draw a distinction is between values and mutable objects (which may or may not use encapsulation). Encapsulation is a leaky abstraction when applied to mutable objects because the hidden state of some object may impact other parts of the system in various ways. Values (and functions of values) are a much sounder abstraction because they are referentially and literally transparent.
know nothing about epochal time models
The epochal time model is a mechanism used to coordinate change in a language which otherwise uses only immutable values (e.g. Clojure). It provides the means to create a reference to a succession of values over time. This means that any one particular list is immutable but the reference itself is mutated to point to different lists over time. The advantage of this is that these references can be shared -- without locks, copying or cloning -- because the succession of values is coordinated atomically.
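A rough Python sketch of that idea (this is an illustration, not Clojure's actual implementation; the lock here merely stands in for Clojure's lock-free atomic swap):

```python
import threading

class Ref:
    """A reference to a succession of immutable values over time."""

    def __init__(self, value):
        self._value = value            # always an immutable value (e.g. a tuple)
        self._lock = threading.Lock()  # stands in for an atomic compare-and-swap

    def deref(self):
        # A stable snapshot of the current epoch; safe to share without
        # copying, because the value itself can never change.
        return self._value

    def swap(self, f, *args):
        # Advance the reference from the current value to f(current, ...).
        with self._lock:
            self._value = f(self._value, *args)
        return self._value

employees = Ref(("alice",))
snapshot = employees.deref()  # observers keep this epoch's value
employees.swap(lambda t, name: t + (name,), "bob")
print(snapshot)           # ('alice',) - unchanged
print(employees.deref())  # ('alice', 'bob')
```

Any particular value is immutable; only the reference moves from value to value, so readers never need locks at all.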
Having seen some in the MS camp suggest that F# is a good candidate for data work (vs. C#) for similar reasons, are there languages in the cross-platform crop whose strengths lean towards data (string) processing (vs. app development)?
OCaml becomes interesting given the above due to syntax similarities. Scala/Clojure due to Java library support(?). Haskell because it seems to get a good amount of attention.
F# is a pretty nice language, but you are indeed still in Microsoft land. That said, mono has gotten pretty good over the years.
As far as data processing goes, I've seen Scala get some attention in the data game, because you can easily write Hadoop jobs, and you can fairly easily interface with libraries like ATLAS for matrix work. However, working in a functional style on the JVM with lots of data can bite you sometimes, particularly with GC hiccups where the GC just wasn't designed to work efficiently with functional resource usage.
Personally I think Haskell is very well suited to data work. You get pretty good speed out of the box, concurrency and parallelism are baked in, and the library support is pretty extensive. It's also quite cross-platform. In addition to that, you've got brilliant folks like Simon Peyton Jones working on stuff like automated stream fusion, which can mean huge speedups. To me the main disadvantage to using Haskell for data processing is the relatively long compile times. Since "data science" is typically fairly interactive, until you are pretty good at Haskell you might find yourself waiting a long time to realize you've done some silly things. It's really a situation where things like IPython's HTML notebooks are invaluable.
Sidenote: As far as string processing goes, I've found Haskell's Parsec library to be simply great for building parsers from combinators in a very natural way.
You know what has a lot less information than the blog entry? All the comments about how little information it has.
Well... and this post, now.
I always come to the comments to throw a little balance or extrapolation into my reading, and it's disappointing to find people mostly just complaining.
Got a few interesting nuggets, though. So not all bad.
The next really big thing will be a language that does not mix things up.
I.e. a language that keeps a clean orthogonality between the three dimensions of a program --state, functionality, and event handling-- not favoring one over the others.
The hard part is to make hierarchical modularization mechanisms work simultaneously along all three dimensions.
We need hierarchy to manage complexity. Complex functions must be decomposed into component functions. Complex data structures must be decomposed into component data objects. And event streams must be decomposed into manageable chunks. Recursively to some degree.
Maybe deep nesting is only a problem because your language does not let you nest all three easily.
I use Haskell and C++ primarily. Haskell has local functions, ADTs, hierarchical modules, and good support for streaming and concurrent operations. C++ has (limited) local functions and lambdas, classes, nested namespaces, iterators, and...well, let’s not talk about imperative concurrency.
These are (mostly) good tools, but I still avoid hierarchy more than one level deep—in my experience it just introduces more complexity than it saves.
Actually, I would disagree with you, based on (1) my personal work and (2) discussions with a variety of other people as I thump the "not all programming needs to be imperative languages" drum.
First, working in a single language allows you to accumulate what, for lack of a better word, I call "IP". Components/libraries/frameworks; a body of work. Having a single language gives you the leverage of previously written code solving prior problems in a debugged fashion.
Second, having a single language allows easier social operation; people can review each other's work, and a common body of knowledge can form around the language under common use, which is difficult to maintain for multiple languages simultaneously.
Third, having a language which is a bit of a melting pot allows idioms to be used in which people are comfortable with their specific idiom - OO/FP, etc.
Personally, the only point of contention with FP vs OOP is immutability, or more directly, managing state. The distinction between 'simulating people behavior' and 'measuring people behavior' is probably intentionally vague to spur the imagination, but it's quite apt.
Many languages implement multiple flexible looping/filtering structures to discourage shoving everything into for/while loops (which are susceptible to confusing and messy continue, break, goto, and yield statements strewn about). Furthermore, the rationale behind Clojure[1] is that stateful objects are a new kind of spaghetti code, and managing state across scopes requires breaking mental boundaries.
I think the larger problem is referential transparency: most OO languages allow two objects that represent the same states to have distinct identities, even if compare-by-value claims they're equal.
Of course, it's not impossible to have referentially transparent objects. I wrote such a language (Reia)
> most OO languages allow two objects that represent the same states to have distinct identities, even if compare-by-value claims they're equal.
You can do this in languages that offer only compare-by-value by attaching a unique ID to every object. Of course, you may have to write your own equality relation if you want compare-by-value semantics in addition to compare-by-identity semantics.
This kind-of hearkens back to simulation I think. That is, if two objects have the same properties then it might be useful to identify them as distinct things because they are in fact distinct entities. That they happen to have the same properties is a matter of fidelity only.
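A minimal Java sketch of the identity-vs-value distinction being discussed (the `Point` class is invented for illustration): two objects can agree under compare-by-value while remaining distinct entities under compare-by-identity.

```java
import java.util.Objects;

public class IdentityDemo {
    // A value-style class: equals() compares state, not identity.
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        @Override public boolean equals(Object o) {
            return o instanceof Point
                    && ((Point) o).x == x && ((Point) o).y == y;
        }
        @Override public int hashCode() { return Objects.hash(x, y); }
    }

    public static void main(String[] args) {
        Point a = new Point(1, 2);
        Point b = new Point(1, 2);
        System.out.println(a.equals(b)); // true  -- same state
        System.out.println(a == b);      // false -- distinct identities
    }
}
```

The grandparent's unique-ID trick is just this in reverse: in a compare-by-value-only language, adding an ID field recovers identity semantics on demand.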
Or Rust. Or D. Or Ruby. Or C#. Or Dylan. Or Lua. Or OCaml.
Lots of languages have concepts from multiple paradigms. What is the correct mix of concepts is up for debate.
I honestly believe that Rust has the potential to become the perfect mix for me, but OCaml and D are pretty good second places with the benefit of exponentially greater stability/maturity.
Sure, a lot of those languages (Ruby, C#, Lua) borrow some FP ideas, but I think sprinkling in some FP ideas is different from F#, OCaml, and Scala, where FP seems to be more front-and-center.
My litmus test for whether a language really successfully marries OO and FP is: how well does it blend pattern matching syntax and inheritance?
OCaml - pattern matching covers the core data types (variants, tuples, records) but not objects, so object-oriented code ends up having a dramatically different style
Scala - any syntax that deals with types quickly becomes so complicated that you can't explain it to non-experts
I don't know much about F#.
I've heard that Clojure's core.match grants pattern matching over abstract interfaces in a nice style. It's not an official part of the language yet, though.
What is complex about Scala's pattern matching? x matches the value x, x:A matches instances of A, A(x) matches if A.unapply(x) evaluates to a Some, and _ matches anything. Erasure throws a wrench into it, but otherwise it's pretty straightforward.
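To make the unapply protocol concrete, here is a hypothetical Java analogue (the `Celsius` class and names are invented for illustration): a static extractor returning `Optional` plays the role of Scala's `unapply` returning `Some`/`None`.

```java
import java.util.Optional;

public class UnapplyDemo {
    static final class Celsius {
        final double degrees;
        Celsius(double degrees) { this.degrees = degrees; }
        // The analogue of Scala's unapply: Some(value) on a
        // successful match, None otherwise.
        static Optional<Double> unapply(Object o) {
            return o instanceof Celsius
                    ? Optional.of(((Celsius) o).degrees)
                    : Optional.empty();
        }
    }

    public static void main(String[] args) {
        Object x = new Celsius(21.5);
        // Scala's `case Celsius(d) => ...` desugars to roughly this:
        Celsius.unapply(x).ifPresent(d -> System.out.println("matched " + d));
    }
}
```

Seen this way, the protocol really is simple; the complexity people complain about in Scala tends to come from the type-level machinery around it, not from matching itself.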
I've always viewed OO as a nice optional UI for most programmers. Giving someone an object to mutate and call methods on seems to click for most people. I tend to go for a more functional style internally for the composability and maintainability properties you gain.
For actor-based programming in Clojure (complete with real lightweight processes, selective receive and an Erlang-like API), look no further than Pulsar:
FP makes an excellent backend -- architecturally speaking -- for OO and REST front-ends, and a most excellent front-end for relational systems. (Haskell, for example, would be a dream language for sprocs.)
One thing that bugs me in the imperative vs. functional debate is that we (functional programmers) often show people a side-by-side comparison with 20 lines of imperative code, 6 lines of functional code, and say, see, the functional code is so much better! Well, to us, it is. Filtered through our biases, it's much more intuitive.
I think we're wrong on one thing. Personally, I think imperative code is more intuitive. It matches how people think of operational behaviors.
For a 20-line program that will never grow, I'd rather see an imperative straight-line solution. FP wins when things get a lot more complex: various special cases and loops within loops. Stateful things compose poorly. Debugging a 300-line inner for-loop is no fun. Functional programming, done right, means people factor long before it gets to that point.
Factories and Visitors prove what hellishness comes into being when one doesn't have functional primitives like first-class functions, but this does not make the superiority of FP intuitive or obvious.
Now, OO is so variable in definition that it's very hard to know exactly what people mean. There's the good OOP (Alan Kay's vision; encapsulate complexity) and then there's bad OOP (overuse of inheritance, auto-generated classes, Factories and Visitors).
One can conceive of core.async as an OOP win in Clojure; lots of complexity (macroexpand a go block at some point) is being abstracted behind the simpler interface of a channel.
> Personally, I think imperative code is more intuitive. It matches how people think of operational behaviors.
The masses agree with you, apparently. That's why they sit in their cubes all day writing the same god damn re-implementation of map over and over again:
List<String> result = new ArrayList<>();
for (int i = 0; i < input.size(); i++) {
    result.add(input.get(i).toString());
}
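For contrast, the same transformation written as a map (here a sketch using Java streams, with invented names) collapses the loop into a direct statement of intent:

```java
import java.util.List;
import java.util.stream.Collectors;

public class MapDemo {
    static List<String> stringify(List<Integer> input) {
        // "Map this list with toString" -- the thought, written directly.
        return input.stream()
                .map(Object::toString)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(stringify(List.of(1, 2, 3))); // [1, 2, 3]
    }
}
```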
I don't think recursion itself is difficult for most people to learn. The hard part is translating recursive functions into procedures. The imperative coding is what makes it confusing.
That's another problem with imperative programming; it tends toward DRY violations.
The solution using map is more intuitive, once you know what map is.
I agree with you, as well, that the functional style should be used far more often than is the case. I just think that one shouldn't assume that our way of doing things is inherently more intuitive, even if it is most often better.
The code that the functional programmer produces is a more direct expression of the programmer's thoughts. The author thinks "I want to map this list using this transformation" (although perhaps not in those exact words). The author is never thinking "I want to create a new list, then for each index between zero inclusive and the list size exclusive, apply this transformation, append the result to the list I created, and then jump back to the call site providing that list as a return value." The imperative programmer is first thinking functionally and then translating that thought into procedures instead of directly writing expressions that match the thought (and like you said, it's just because they haven't learned the word "map" yet). Isn't that pretty much the definition of "unintuitive"? First thinking one thing, and then requiring conscious effort to think another thing instead?