Programming without objects (falkoriemenschneider.de)
126 points by wkornewald 1127 days ago | 131 comments



What this article seems to miss is part of the raison d'être of Object Oriented Programming. It's not just about how you encapsulate state and how you act on that state. Forget the exact way the type system works, or what extension methods are, or even what polymorphism is.

The big advantage of OO is that it acts as a distillation of how humans think. We're accustomed to thinking in terms of 'things that do stuff'. What OO provides is essentially a skeuomorphic element to your code, where your basic units have some resemblance to real items and concepts. This makes it a lot easier to reason about large codebases, and makes them easier to document (in theory). It means the most important thing you need in order to understand how a piece of business logic works is knowledge of the business. You don't need to know anything about the data, or how the application is composed, in order to follow the use case being implemented. If you know the concepts and their behaviour in the problem domain, you will be able to make sense of how the code is written.

FP has its place, as does procedural code, but advocating either of them as a complete replacement for OO is short-sighted. Claiming FP is a solution to OO's shortcomings is too. Each has its place. If anything, we should be working towards next-generation models, that combine the advantages of both and mitigate the downsides of both.


> The big advantage of OO is that it acts as a distillation of how humans think.

Honestly, while I think OO programming in the broadest sense does that, I think class-oriented OOP languages (what the article mostly focuses on), particularly statically-typed class-oriented languages in the C++/Java lineage, don't do a great job of either supporting that intuition or facilitating the application of that intuition to the construction of correct, maintainable computer programs. They aren't useless in that respect--there is a reason why they flourished--but there's also a reason that, after being an important programming paradigm for several decades and a dominant one since at least the late 1980s, there's been a whole lot of movement away from traditional class-oriented OOP in newer languages.

I think that we're seeing a model--which is mostly viewed as being within the FP tradition but which has learned a lot from OOP languages and supports a lot of OO thinking--emerge (and that this article points to a lot of its elements) that provides a framework that better matches the intuition than class-oriented OOP does, while also better supporting building correct, maintainable, easy-to-reason-about systems.


>don't do a great job of either supporting that intuition or facilitating the application of that intuition to the construction of correct, maintainable computer programs

My first thought is that this is dependent on implementation. It is possible to write classes in a way that aligns with intuition, but it is sometimes hard to do that, and even if you're great at OOP it is hard to do consistently. I think the Smalltalk message-sending way of thinking has huge value because it is easy to reason with, and facilitates this intuition.

That said, I do see tremendous value in FP, and I'm encouraged by the elements of FP that I've seen popping up in Swift. So I guess ultimately I do agree with you. I'd like to see OOP continue to flourish, but borrow elements from the Functional style that make it very difficult to write fragile code.


I'm exactly in your camp. I would love to see more functional influence in OO, and more careful thought about what to use at what time.


> The big advantage of OO is that it acts as a distillation of how humans think.

I would consider this a disadvantage, not an advantage. Human thoughts are imprecise and carry rich but ambiguous connotations. The major advances of mathematics in the 19th century (or thereabouts) onward are closely tied to the divorce of notation and natural language.

The ancient version of the Pythagorean theorem is something like, "The area of a square whose side is the length of a hypotenuse of a right triangle is equal to the sum of the areas of squares whose sides are equal in length to the other two sides." A modern version is more like, "Let a, b, c be the lengths of sides of a right triangle, where c is the length of the hypotenuse. Then a^2 + b^2 = c^2." You can see as time goes on, the text becomes simultaneously clearer, more concise, and more abstract.

This is why natural language programming is doomed to failure. Others have expressed the point better than I do (notably Dijkstra). But you can look at the examples of how inheritance in class-based OOP systems work and you will get sick just looking at them: examples of Dog inheriting from Animal, and Penguin from Bird (but penguins don't fly), et cetera. The real disadvantage here is that once people start thinking about taxonomy in general (which class-based OOP encourages), you're not thinking about programming any more.

The difficulty people have with even simple problems like "how do you describe squares and rectangles in a computer program" is another good illustration of how the class-based object-oriented system of thinking is unsuited for solving computational tasks, at least compared to the alternatives.
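To make the squares-and-rectangles point concrete, here is a minimal sketch in OCaml (names invented for illustration) of one alternative: model shapes as immutable data, so "squareness" becomes a property rather than a subtype, and the classic mutation trap never arises.

```ocaml
(* A square is not a subclass here; it is just a rectangle whose
   sides happen to be equal. *)
type rect = { width : float; height : float }

let is_square r = r.width = r.height

(* "Resizing" returns a new value, so widening a square simply
   yields a non-square rectangle; no invariant is broken. *)
let with_width w r = { r with width = w }

let () =
  let s = { width = 2.0; height = 2.0 } in
  assert (is_square s);
  let r = with_width 3.0 s in
  assert (not (is_square r));
  assert (r.height = 2.0)
```

Whether this counts as "solving" the modelling problem or sidestepping it is exactly the debate in this thread.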


http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...

Steve Yegge's "Execution in the Kingdom of Nouns" has some interesting commentary on the idea of OO being in any way how humans think. The allegory has some harsh things to say about Java, which are not my intention in suggesting this link. The observation that objects tend to be nouns while functions tend to be verbs is very interesting.

/* footnote 3 is hilarious */


> The big advantage of OO is that it acts as a distillation of how humans think.

Citation needed.

Browsing around Google Scholar for variations of "object oriented empirical comparison" shows a huge body of research comparing various OO approaches to each other, but very few comparing OO approaches to anything else. Those which I have been able to find compare OO to procedural code, and find either no significant difference in comprehension levels, or that procedural code is easier to understand (ie. closer to "how humans think") than OO:

An empirical study of novice program comprehension in the imperative and object-oriented styles

http://ftp.cs.duke.edu/courses/fall00/cps189s/readings/p124-...

Assessing the cognitive consequences of the object-oriented approach: A survey of empirical research on object-oriented design by individuals and teams

http://arxiv.org/pdf/cs.HC/0611154

An exploratory study of program comprehension strategies of procedural and object-oriented programmers

http://www.ptidej.net/courses/inf6306/fall11/slides/11%20-%2...

I've only been able to find one source comparing OO with functional programming, which didn't measure comprehension. Instead, it compared code quality metrics between C++ and Standard ML. Most showed no significant difference, except for SML taking longer to run its tests (it also had more tests), having higher code and library re-use and having a larger number of errors per 1000 lines (although the same number of known errors overall):

Comparing programming paradigms: an evaluation of functional and object-oriented programs

http://eprints.soton.ac.uk/250597/1/report3_Harrison_95.pdf


The most commonly cited proponents of this viewpoint are Rosson and Alpert (http://dl.acm.org/citation.cfm?id=1455754, behind a paywall). The first study you linked refers to them and a few others explicitly, and uses their research as a basis. It's difficult to find much more than those, because this kind of 'programming philosophy' is rarely under this level of academic scrutiny.

Aside from official sources, it's a statement that I didn't think needed much citation. It's absolutely fair to criticize it, but it's a widely-held belief that seems to hold true (at least anecdotally).


The reason I was initially skeptical of these kinds of claims is because:

(1) Many of those people making it weren't too experienced in reasonable alternatives.

(2) Many developers I know (also anecdotally) spent years studying OOP to get to the point where they can use most of its higher-level patterns effectively, and judge FP based off a single class or toy project built in some Lisp derivative, or what they heard from a friend.

(3) Those people making the opposite claim actually had a fair bit of experience with both methodologies, since it is hard to avoid OOP.

In other words, it seems like the claim "OOP fits the brain better" is a prevailing meme, largely among those who have not tested the claim effectively, while "FP is pretty good and we need to lean on it more" is fairly popular with those who have actually tested the claim, instead of relying on hearsay.

Obviously, my personal experience shouldn't convince you that your statement is wrong. (I was a big fan of OOP who devoted quite a bit of time learning software engineering with both classes and prototypes before trying out FP and finding out just how much OOP was silly and could be replaced with simpler, more composable FP ideas). Nor do I think that all of the claims made by the FP people are right. But realize that your statement DOES require significant evidence to back its claims, and that resorting to populism gives it relatively little real support.


"The big advantage of OO is that it acts as a distillation of how humans think."

I first encountered functional programming in the 1980s on my Computer Science degree course a few years before I encountered OOP (CLOS and then C++) and I'm not convinced that OOP is fundamentally closer to "how humans think" than FP. Most programmers these days were taught an OOP language first so think that is most natural - but I don't think there is anything fundamental about that.


You should try explaining some business logic to a non-technical person some time. If you are able to properly model the problem domain, non-technical people have a much easier time understanding the flow of the code and how it works if it's written in an OO way. This is precisely because they recognize the concepts and ideas they are familiar with and can already reason about. They know that 'a Person pays their Bills by asking a Bank to deposit Money'. This means they can make sense of code that is structured in the same way. FP, for all its advantages and elegance, doesn't offer that same correspondence.
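As a rough sketch of what that correspondence can look like in code (using OCaml's object system here, though any OO language would do, and with hypothetical names), the domain sentence maps almost word for word onto the structure:

```ocaml
(* 'a Person pays their Bills by asking a Bank to deposit Money' *)
class bank = object
  val mutable balance = 0
  method deposit amount = balance <- balance + amount
  method balance = balance
end

class person (b : bank) = object
  method pay_bill amount = b#deposit amount
end

let () =
  let acme_bank = new bank in
  let alice = new person acme_bank in
  alice#pay_bill 100;
  assert (acme_bank#balance = 100)
```

A non-technical reader can follow `alice#pay_bill` without knowing anything about the representation of money or accounts, which is the claim being made above.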


> The big advantage of OO is that it acts as a distillation of how humans think.

Objects are always a struggle. I see students new to programming take as long as their junior year before they even start to "really get it" and design reasonable classes.

Rolling identity, state, values, types, functions, polymorphism, modules, resource management, and who knows what else into a single thing is inherently going to result in something pretty damn complicated, compared to trying to keep those concerns orthogonal and thus only pulling in those that are needed for the task at hand.

I would certainly say that OOP-based software engineering has vast advantages in certain cases, but my hypothesis would be that its "ease" advantage is more one of mindshare than something inherent in the human mind. This hypothesis, however, is worth extensively testing... for a discipline that relies on stretching human cognitive performance to its limits, we spend far too little time actually figuring out exactly what limits those are.


The big advantage of OO is that it acts as a distillation of how humans think. We're accustomed to thinking in terms of 'things that do stuff'. What OO provides is essentially a skeuomorphic element to your code, where your basic units have some resemblance to real items and concepts. This makes it a lot easier to reason about large codebases, and makes them easier to document (in theory).

It seems to me that you're sort of confusing OOP with programming interfaces.

An interface describes a model of how to use a certain piece of code, and indeed a good one can be almost "obvious" and begin to feel like a "natural" way to reason with it. But it's all about the interface: not about what kinds of language constructs the interface is composed of.

Interfaces can be done in object oriented programming too, but for some reason (possibly exactly because of the resemblance to real items and concepts, and how fond humans are of those) most interfaces in OOP languages are utter horror.

This could be because it's so easy, almost fundamental, to create "things that do stuff", i.e. classes, in OOP that there will be a lot of them. The design pattern craze kind of formalized that for good, which is why we've all enjoyed our share of abstract factory builder singleton visitor, or whatever.

In contrast, you tend to see really good interfaces surprisingly often in... C.

As a disclaimer, this is a big reason why I like the C language: it forces you to think simple, because it offers so many guns to shoot yourself in the foot (and a lot of other places too) that you need to focus on what's essential to your program. With an easier but inevitably more complex language, the lull times in the day of programming seem to produce lots of implementation and interfacing complexity that a C programmer would never dare to attempt. C programmers also use paradigms associated with OO, but only where appropriate, because it's a bit of a hurdle to implement those in C.

I've also observed that there's a vague parallel between complex (object oriented) interfaces vs. simple (C) interfaces and Alan Perlis' notion that it is better to have 100 functions operating on one data structure than 10 functions operating on 10 data structures. Good OO interfaces tend to be really simple and short, even to the point that they're not particularly OO anymore.

The OO, procedural, FP paradigms are mostly orthogonal and can be combined freely in sensible amounts where needed, but they're also orthogonal to concepts such as encapsulation, polymorphism, type hierarchies, and inheritance which themselves aren't tied together either.


My intention was focused entirely on object oriented modelling and programming. Interface design is a tool to do that, but in and of itself does not provide the correspondence with the way humans think. The core of OO is to have independent objects that combine behaviour and data. Whatever your type system, whatever your language constructs, that core is what provides the advantage. I was considering mostly the basic ideas of OO modelling in general rather than whatever way you choose to achieve that.


> skeuomorphic

No, skeuomorphism involves ornamental details, which are at best hints about how something is to be used.

How you break-down the responsibilities of your system and name them is much much more important than that.


I agree it's not the best word for it, but what I meant is that the elements are made to look and behave similar to their real-life or conceptual analogues. Think of it mostly as the software UI skeuomorphism: we are able to recognize the functions of the software faster because it is designed to look like something we know.


> The big advantage of OO is that it acts as a distillation of how humans think. We're accustomed to thinking in terms of 'things that do stuff'.

Let's not confuse "how humans think" with "how humans ought to think to solve problems effectively". There is little evidence that OO is any better than other principled approaches to programming.


'skeuomorphic' - did you mean 'anthropomorphic'?


I didn't. I meant 'skeuomorphic' as in 'designed with cues that correspond to reality, to indicate corresponding functions and behaviour'. It's a bit more nebulous when applied to real code, since we're thinking in terms of concept and behaviour, but the same thought applies.


> The big advantage of OO is that it acts as a distillation of how humans think.

With how widespread OO is now and in the last decades, how much it is taught and how important it is in a fair share of popular programming languages (or even mandatory, for all practical purposes), this point might just be a self-fulfilling prophecy (if I'm using that expression correctly).


The way OOP is taught often doesn't bring up object thinking. But if you believe it is not natural, try thinking mathematically (without nouns or names as unique aliasable identifiers). Or without isa or hasa relationships. Our minds have 50,000+ years of language expertise, and only a couple thousand for formal non-linguistic equational reasoning (where things only have structure and are unnameable).


> The way OOP is taught often doesn't bring up object thinking. But if you believe it is not natural, try thinking mathematically (without nouns or names as unique aliasable identifiers). Or without isa or hasa relationships.

Is-a and Has-a relationships don't require class-based OO structures to express in a language. E.g., both membership in an abstract group sharing a common interface (is-a) and composition (has-a) relationships are readily expressed in FP languages like Haskell quite directly, or in languages that are more like traditional OO languages but which separate interface from implementation rather than combining them as is done in class-based OO.

The OO approach to programming, and thinking about domains, has broad utility, but the particulars of static, class-based OOP are not necessary to realize that utility.
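As a hedged sketch of that claim (all names invented for illustration), here is how both relationships can look in OCaml, where the interface, a module signature, is separate from the implementations: "is-a" is membership in the group of modules matching the signature, and "has-a" is plain composition.

```ocaml
(* The interface: anything matching SHAPE "is-a" shape. *)
module type SHAPE = sig
  type t
  val area : t -> float
end

(* Circle is-a SHAPE because it implements the signature... *)
module Circle : SHAPE with type t = float = struct
  type t = float                      (* radius *)
  let area r = 3.14159265 *. r *. r
end

(* ...and so is Square, with a different representation. *)
module Square : SHAPE with type t = float = struct
  type t = float                      (* side length *)
  let area s = s *. s
end

(* "Has-a" is just composition: a labelled box has a side length. *)
type box = { label : string; side : float }

let () =
  assert (Square.area 3.0 = 9.0);
  assert (Circle.area 1.0 > 3.14);
  let b = { label = "crate"; side = 2.0 } in
  assert (Square.area b.side = 4.0)
```

No class hierarchy appears anywhere, yet both relationships are expressed.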


> but the particulars of static, class-based OOP are not necessary to realize that utility.

The design goal of an OOPL is to support object thinking. FPLs generally have other goals, and some of their design philosophies reject object thinking altogether. You'll often see lots of objects in ML or Scheme libraries, but the languages are not really optimized for this way of solving problems.

Haskell, as a real pure FPL, is really the antithesis of object thinking. You cannot express nominal is-a relationships in Haskell very easily at all: it is all structural (and type classes don't get around that), and it supports equational rather than name-based reasoning. You could add names to Haskell via GUIDs or impure language features, but that is not really the way to program in Haskell.


> The design goal of an OOPL is to support object thinking.

Yes, my point is that class-based OOPLs aren't the only way to do that (and in some respects create some unnecessary problems with that), which is among the reasons that many newer languages -- even ones that are not particularly functional languages, and which have OO roots -- are not class-based.


A "class" is quite useful to programmers vs. some of the alternatives; even JavaScript is (syntactically at least) moving away from prototypes to having real class constructs. I prefer traits (the mixin-like scala kind, not the field-free smalltalk kind), which are basically classes enhanced with linearized multiple inheritance. There are also Beta-style virtual classes, which are quite useful, and there are even languages that support dynamic inheritance (via prototypes, predicates, or on demand).

The OOPL design space is huge. I wish there was more activity there, but all the new hot languages either do little to innovate on OO constructs, or emphasize the functional (or worse: try to push type classes as OO).


I'm not so sure. There's some truth to what you're saying, but by "is a", statically typed OO languages generally mean something much different than what we mean by that in human languages. I find OO's obsession with hierarchy and taxonomy to be profoundly unintuitive.


There is much more to OOP than Java.


Something like Smalltalk or Ruby is certainly closer to my preferences. The focus on taxonomy is definitely most obvious and most painful in languages with static typing and fewer dynamic features.

Can you suggest an OO language that really avoids the issue, though? It seems inherent in the notion of inheritance to me, even interface-only inheritance. Something like Haskell's typeclasses or Rust's traits seems to me like an easier way to model concepts from the real world.


Believe it or not, Scala can be quite powerful since it supports mixin-like traits. I'm not sure about "avoiding" taxonomy though, I find variants useful, but I also like to play with layers (think modularized features of variants).

Type classes aren't really OO. I mean, they allow for some non-nominal subtyping that meshes with purity, but I would argue that OO is really about the names: OO thinking is really just a way of naming everything in your problem, while type classes mostly keep with name-free equational reasoning.


I really need to check out Scala! Thanks for the reply.

(And I'm also interested to hear more about OO as naming.)


That's interesting. Could you say something more about OO being about naming everything?


Object thinking is about naming things, naming meta things, and then using those names to describe relationships. E.g. Fido is a dog, and a dog is an animal, because I said so. Names are incredibly useful as abstractions: I know Fred; if I see him wearing different clothes and a haircut, I know he is still Fred; my other friends can tell me things about Fred; Fred can have a criminal history. This is only possible because Fred has a name; otherwise, he would just be a stateless blob.

Note that nothing is really proven with names, just asserted. This is why theoreticians dislike them. Equational reasoning gets away from names by working purely with structure. Of course, we can apply labels to structure (since our brains are so reliant on names for reasoning), but they have no special meaning and we are careful not to let them bias the results.

Most programs involve heavy doses of object thinking, even if the language is not specifically OOP. It takes a real genius (or a Vulcan...joking) like the high-end Haskell crowd (SPJ, Conal Elliott, etc.) to leverage equational reasoning where most would otherwise use object thinking.


Then again... capturing things with hierarchical, permanent, non-context dependent hasa / isa relationships is very, very far removed from how we've used language for the past 50k years.


Can you give me an example? I'm at work right now in a hierarchical, permanent, non-context dependent environment and is-a, has-a relationships abound. We deal with the complexity of the world by organizing it into hierarchies.


Language is incredibly fuzzy... and for good reason.

Creating fixed structures that span across groups of people is incredibly hard. Just take a look at how much work/debate has gone into the taxonomy of life.

Apart from that, relationships are context-dependent. I can have a father, but if he passes away I still have one in some sense but not in another.

I can also have ideas... or friends which can be mutual or not. Heck... even "I" isn't fixed. If I'm sleepwalking it's me but not really me.

My point was: do you know of any OOP language which can fluently handle this without having the object metaphor break down?


No language can encode exactly how we think. We aren't going to get there until we have hard machine intelligence (and then we aren't programming anymore). But we definitely think with objects, and OOP languages are simply trying to exploit that (we can definitely debate about how well!).

OO thinking has no problem with stateful reasoning, your father can be currently dead and previously alive.


This article, like many that cheer functional programming, falls into a certain cognitive bias, that prevents it from seeing what OO is good at.

Alan Kay wrote "The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be."

To start to see what this means, consider the annoying String / Data.Text split in Haskell. String is very much in the "leave data alone" mindset, baring its guts as a [Char]. Now you're stuck: you can't change its representation, you can't easily introduce Unicode, etc. This proved to be so rigid that an entirely new string type had to be introduced, and we're still dealing with the fallout.

Great and growable systems! The large scale structure of our software, decomposed into modules, not just at a moment frozen in time, but in the future as well. We are tasked with thinking about relationships and communication.

So here is how "the better way" is introduced:

> Data is immutable, and any complex data structure...

There's that insidious FP bias: to immediately dive into data structures, to view all programs as a self-contained data transformation. So reductionist! It's completely focused on those "internal properties and behaviors" that we were warned about above.

I would end here, but I just couldn't pass this up:

> To come up with a better solution [for dispatching], Haskell and Clojure take very different approaches, but both excel what any OO programmer is commonly used to.

"Any OO programmer?" No way! OO as realized in truly dynamic languages exposes not just a fixed dispatch mechanism, but the machinery of dispatch itself, i.e. a metaobject protocol:

"...there are not only classes for workaday tools, such as UI widgets, but there are also classes that represent the metastructure of the system itself. How are instances themselves made? What is a variable really? Can we easily install a very different notion of inheritance? Can we decide that prototypes are going to work better for us than classes, and move to a prototype-based design?"

This is far richer than the anemic switch-style dispatch that Haskell's guards and pattern matching provide. For example, try modifying a Haskell program to print a log every time a string is made. You can't!

I'm not familiar with Clojure but I'll bet its object model has its roots in CLOS. Whether or not you call it "object oriented," CLOS is solidly in the spirit of having a dynamic meta-structure.


I think you and the author have posed a false dichotomy.

I avoid "traditional" OO in my own work for some of the same reasons the author points out, not least of which is that traditional classes are a kitchen sink.

But many of the ideas of OO; notably extensionality (what the author incorrectly calls intensionality), I could never do without. I agree with you, that exposing the innards of my data structures is a crime: not only do I lose control over their construction and use (including defining equality), but I'm restricted from ever modifying the structure.

But nothing in FP prevents hiding structure. You can see it all the time in OCaml: a module signature will declare an opaque, possibly parametric, type, as well as a set of operators over that type. The internal structure of that type is never exposed. All creation and use, and ideally comparisons (though it is unfortunately not enforced in OCaml) must go through the module's API.
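A minimal sketch of the OCaml pattern described here (names invented): the signature declares an abstract type t, so clients can only touch values through the module's operators, and the representation stays free to change.

```ocaml
module Counter : sig
  type t                       (* opaque: representation hidden *)
  val zero : t
  val incr : t -> t
  val value : t -> int
end = struct
  type t = int                 (* could later become a record, etc. *)
  let zero = 0
  let incr n = n + 1
  let value n = n
end

let () =
  (* Clients can only build a Counter.t through the API; writing
     (5 : Counter.t) would be a type error outside the module. *)
  let c = Counter.(incr (incr zero)) in
  assert (Counter.value c = 2)
```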

(Module signatures, it should be noted, may be shared by multiple implementations, permitting compile-time dispatch.)

Yet while maintaining opacity, I am free to dispense with the excess baggage an OO class entails: run-time dispatch; a single "self" (i.e. the "friend" problem); that abomination known as inheritance; all these things I need no longer worry about, and my code can be cleaner and more efficient.


I suspect some of the problems that many people have with OO originate from C++ and related languages such as Java. These languages aren't really OO in the Alan Kay sense of the term[1]. They are languages with classes, polymorphic inheritance, and object-style binding of methods to structures, but they do not feature "everything is an object with message passing".

By comparison, you really see a lot more of the utility of OO in languages like Smalltalk or possibly Ruby[2], where you can extend everything. I now tend to write my Ruby (despite it being a multi-paradigm language) in the manner that you describe: FP style, with objects hiding the details.

Of course, all of these languages have their strengths and weaknesses and OO isn't useful for everything. I just think OO has gotten a bit of a bad reputation from some of the languages that chose to label themselves OO even when their implementation was only superficial. This bad reputation may lead to dismissal of the whole idea, producing the false dichotomy you mention.

Incidentally, the lack of strict OO (or any language style) in Ruby is what I really like about the language. You can be strict OO if you want, but you can also use classic (C-style) imperative programming when it makes more sense (or FP, or whatever).

[1] http://c2.com/cgi/wiki?AlanKaysDefinitionOfObjectOriented

[2] Regrettably, OCaml is one of those languages that is still in my "looks interesting, I should learn that" queue, so I cannot speak to how it implements OO.


OCaml's OO is IMHO not very interesting, beside the concept of "functional objects", which really ought to exist without the rest of the Java-style OO baggage. (Briefly: methods can easily return a copy of an object with some fields modified; and anonymous, structurally typed objects may be constructed.) Otherwise it is standard Java/C++ fare (albeit more streamlined and with better typing).

On the other hand, OCaml is worth learning for the module+type system alone. Every other language could benefit from its ideas; the only language I've seen that's comparable is Coq (which bases its module system on OCaml's). (And the module and type system really work in tandem: there are advanced mechanisms for type structure hiding that aid forward compatibility.)
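For readers unfamiliar with the "functional objects" mentioned above, a tiny sketch: the {< ... >} expression returns a copy of the object with some fields overridden, leaving the original untouched.

```ocaml
(* An immediate (anonymous, structurally typed) object. *)
let p = object
  val x = 0
  method x = x
  (* {< ... >} builds a copy of self with x replaced. *)
  method move dx = {< x = x + dx >}
end

let () =
  let p' = p#move 5 in
  assert (p'#x = 5);
  assert (p#x = 0)   (* the original object is unchanged *)
```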


The module system in OCaml sounds very nice (and we all know what the "O" is for!). But there's still a bias towards a sort of static-ness in FP. For example, the use of abstract data types where a Java programmer might use a class hierarchy. Clients cannot extend an ADT: I can't make my own List in Haskell and pass it off to a function.

Regarding the OO "excess baggage," I would respond that what is "excess" depends on the nature of the system. I can understand dismissing that stuff when your program is self-contained. When the only code at play is your own, when you can statically enumerate every type, function call site, etc, it may be hard to see the value in those features.

My project is a shared library, and so is dynamically linked with code written by other teams, perhaps years ago, or even yet-to-be-written. The system is thus not my program in isolation, but an intimate collaboration between my component and client components. Runtime dispatch, inheritance, reflection, and even occasional mucking with meta-objects are the tools we use to cooperate. This is a type of extensibility that Haskell doesn't even try to support. I don't know about OCaml here.

(Alan Kay called this the "negotiation and strategy from the object’s point of view.")


In the same way that many recommend programming to interfaces rather than concrete classes in static OO languages, programming to typeclasses rather than concrete types (when you can't avoid that kind of dependency and write completely generic code) is an important recommendation in Haskell.


> The module system in OCaml sounds very nice (and we all know what the "O" is for!). But there's still a bias towards a sort of static-ness in FP. For example, the use of abstract data types where a Java programmer might use a class hierarchy. Clients cannot extend an ADT: I can't make my own List in Haskell and pass it off to a function.

Depends on what your function accepts. If it explicitly takes a list, you're screwed, but then it clearly was never intended to be generic. If it accepts something Foldable or Traversable, just make sure your data structure has an instance for those type classes.

In OCaml, you can have objects and inheritance if you absolutely want to, but you can get a lot out of structural typing before going there. If you want extensible ADTs, you can, but you need to plan for it by using polymorphic variants [1] at the expense of some safety.

1: https://realworldocaml.org/v1/en/html/variants.html - scroll down to the "Polymorphic variants" section


> My project is a shared library, and so is dynamically linked with code written by other teams, perhaps years ago, or even yet to be written.

Constructing a component architecture is the goal of many approaches, and shared libraries are one expression of that ideal. When a library is compatible with the calling application and the OS, it can closely approximate an ideal component.

However, components, applications and OSs are not static but constantly changing. In order for a library to be a component used by many other entities, the library must be continually (at least frequently) curated to remain compatible with all the other components it cooperates with.

While the point of a library is to abstract an API so users of the library don't have to think about how it's implemented, the creator of the library must consider those details very deeply.

Whatever techniques or languages are used to create a library, OO, FP, both or neither, the most important consideration is that its source code is clear, concise, logical, and understandable. The library I create today will decay if not maintained, and if I'm not around, how easily can someone pick up where I left off?

The inevitable tricks employed to make procedures or methods work in real code will not be obvious to our successors. Good ideas, even embodied in obsolete code, can be useful if clearly expressed and adequately explained. Thorough documentation transforms the work into lasting value.


> (Module signatures, it should be noted, may be shared by multiple implementations, permitting compile-time dispatch.)

With first-class modules, you can even get runtime dispatch: just build a new module dynamically, selecting the concrete implementation depending, say, on a command-line parameter.


> This proved to be so rigid that an entirely new string type had to be introduced, and we're still dealing with the fallout.

There are a lot of things wrong with, say, Haskell '98 from the perspective of a modern Haskell programmer. Strings are one, but monads aren't applicative functors, it took us a long time to figure out how we wanted to write monad transformers, lazy I/O is terrible and we should use conduits or whatever instead. But you picked strings. This example does not help your point for the following reasons:

1. You can't just change the implementation of the string type without messing up someone's program. For a fantastic example, look at the recent change to Oracle's string type in Java. In theory, the interface is the same. In practice, it made a bunch of people mad.

2. You can encapsulate data in Haskell. Look at the "Text" data type, and ignore Text.Unsafe which exposes the gory innards. This is module level encapsulation, which is just as good as class-level encapsulation (better, actually, since it's more flexible). You could replace Text with a UTF-8 implementation or a UTF-32 implementation or some magic implementation that switches between the types, and you wouldn't break consumers of the Text interface.
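A sketch of what that module-level encapsulation looks like (MyText below is a made-up miniature, not the real Data.Text internals): hide the constructor behind the module's export list, and only the abstract interface leaks out.

```haskell
-- In a real file this would carry the export list
--   module MyText (MyText, pack, unpack, append) where
-- which exports the type abstractly (no constructor), so callers
-- can never see or depend on the representation.

newtype MyText = MyText String  -- representation is free to change

pack :: String -> MyText
pack = MyText

unpack :: MyText -> String
unpack (MyText s) = s

append :: MyText -> MyText -> MyText
append (MyText a) (MyText b) = MyText (a ++ b)
```

Swap the `String` inside for a UTF-8 byte array and reimplement the three functions, and no consumer of the interface can tell the difference, which is the encapsulation claim made above.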

> For example, try modifying a Haskell program to print a log every time a string is made. You can't!

This is a really contrived example. First of all, logging to a file creates strings itself, and presumably you wouldn't want to log those strings too. Second, this is something you'd do with a debugger; you wouldn't actually do this to a program.

Besides, if you had access to the string implementation (which I'm assuming here is Text, because that's what most people use), you could just put some kind of unsafePerformIO call in front of uses of the Text constructor, and since the Text constructor isn't exported from the Text module, you're done.
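To spell that out with a sketch (MyString is hypothetical; Debug.Trace's trace is a thin wrapper over unsafePerformIO): with a private constructor, routing every construction through one logging function is trivial.

```haskell
import Debug.Trace (trace)

-- Hypothetical abstract string type; in a real module the
-- MkMyString constructor would not be exported, only
-- mkMyString and unMyString.
newtype MyString = MkMyString String

-- The only way to build a MyString: log (to stderr), then construct.
mkMyString :: String -> MyString
mkMyString s = trace ("created: " ++ s) (MkMyString s)

unMyString :: MyString -> String
unMyString (MkMyString s) = s
```

Since nothing outside the module can call MkMyString directly, every MyString in the program passes through the logging smart constructor.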


> 1. You can't just change the implementation of the string type without messing up someone's program.

Yeah, you can. 'NSString' is in fact a class cluster that provides different implementations/representations. Well, it used to be on OS X, before it was changed to be a wrapper for a single CoreFoundation representation.

In GNUstep and Cocotron, I think they still use the older class-cluster implementation, and programs are portable between these implementations.

Polymorphism, baby :-)


Is there a good reason why lazy I/O is terrible? It seems like the ideal solution for async-heavy programs.


It's harder to reason about resource usage with lazy IO. For example, when is it safe to call hClose to close a file handle?

     do
        f <- openFile "file.txt" ReadMode
        s <- hGetContents f
        hClose f
        print s
Since hGetContents is lazy, it only reads data from the file when you print s. But by that point the file has already been closed!

If you think about it, it's a bit similar to the tradeoffs between garbage collection and reference counting.
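For the record, the usual fix is to force the contents while the handle is still open. A sketch, using withFile and evaluate from base (the helper name is mine):

```haskell
import System.IO (withFile, hGetContents, IOMode (ReadMode))
import Control.Exception (evaluate)

-- Read a whole file strictly: forcing the length walks every
-- character before withFile closes the handle, so the string
-- is fully in memory by the time the handle goes away.
readWholeFile :: FilePath -> IO String
readWholeFile path =
  withFile path ReadMode $ \h -> do
    s <- hGetContents h
    _ <- evaluate (length s)
    return s
```

This trades laziness for predictable resource usage, which is exactly the tradeoff being described.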


http://www.reddit.com/r/haskell/comments/1e8k3k/three_exampl...

Tekmo

I highly recommend reading these slides by Oleg:

http://okmij.org/ftp/Haskell/Iteratee/IterateeIO-talk-notes....

They are his old annotated talk notes and they give a really thorough description of real problems that lazy IO causes with lots of examples.

Edit: Here's a select quote from the talk:

> I can talk a lot how disturbingly, distressingly wrong lazy IO is theoretically, how it breaks all equational reasoning. Lazy IO entails either incorrect results or poor optimizations. But I won’t talk about theory. I stay on practical issues like resource management. We don’t know when a handle will be closed and the corresponding file descriptor, locks and other resources are disposed. We don’t know exactly when and in which part of the code the lazy stream is fully read: one can’t easily predict the evaluation order in a non-strict language. If the stream is not fully read, we have to rely on unreliable finalizers to close the handle. Running out of file handles or database connections is the routine problem with Lazy IO. Lazy IO makes error reporting impossible: any IO error counts as mere EOF. It becomes worse when we read from sockets or pipes. We have to be careful orchestrating reading and writing blocks to maintain handshaking and avoid deadlocks. We have to be careful to drain the pipe even if the processing finished before all input is consumed. Such precision of IO actions is impossible with lazy IO. It is not possible to mix Lazy IO with IO control, necessary in processing several HTTP requests on the same incoming connection, with select in-between. I have personally encountered all these problems. Leaking resources is an especially egregious and persistent problem. All the above problems frequently come up on Haskell mailing lists.

Oleg is a good guy to listen to.


Great comment, provides good food for thought.

The core FP idea is to focus on immutable data and data transformations. This is the minimal set of concepts one needs to juggle to get computations going. When modules communicate, they need to pass data and identify the transformations, so there is no dichotomy here between FP and OO (!). Especially if you think of method tables as data.

The String / Data.Text split in Haskell is an artefact of Haskell's ecosystem. It is not a conceptual hurdle, but rather an implementation detail. It is not too hard to imagine a different FP ecosystem where one can readily substitute different implementations under the same immutable data structure API, all with very explicit parametrization of the data transformations. All of (1) immutability, (2) a simple data API, (3) polymorphism and (4) explicitness are important. Note that OO systems encourage (3), while FP systems encourage (1), (2) and (4).

Code as if you have immutable data and apply data transformations; tune performance by using the best implementations under the common simple data API. The question becomes how to build a system where all of these are ergonomic to use. IMHO, Haskell is not quite it; languages like Dart / C# offer better ergonomics.

The other example is also thought provoking. In a system with polymorphism support, it's relatively straightforward to supply one's favorite String implementation, including one that prints a log on every String construction. The question is how to provide the new module to clients, which is reminiscent of dependency injection, but concrete implementations of DI tend to be bad magic. In an explicit style, this would be realized by making modules functors of other modules and explicitly passing in the method tables:

  -- Runnable Lua sketch; this minimal String module stands in
  -- for a real string implementation.
  function String()
    return {
      new = function(x) return x end,
      concat = function(a, b) return a .. b end
    }
  end

  function Foobar(string)
    return {
      foo = function(x)
        return string.concat(x, string.new('abc'))
      end
    }
  end

  function main1()
    local string = String()
    local foobar = Foobar(string)
    foobar.foo(string.new('xyz'))
  end

  -- Same String module, but every new() logs its argument.
  function LoggingString()
    local base, logging = String(), {}
    for k, v in pairs(base) do logging[k] = v end
    logging.new = function(x)
      print(x)
      return base.new(x)
    end
    return logging
  end

  function main2()
    local string = LoggingString()
    local foobar = Foobar(string)
    foobar.foo(string.new('xyz'))
  end
But it takes discipline to write the above and not sprinkle the code with String().new(...) everywhere, which defeats the purpose.


How does FP, most notably Haskell, not encourage polymorphism? This statement is just plain wrong.

Type classes are the definition of polymorphism. And you can write very generic, abstract, polymorphic code using these constructs. It's not any language's fault if you write your code expecting only concrete types. By this standard it's Java's fault if the developer isn't using generics. No it's not; it's the developer's fault. The mechanisms are there. Use them.


I'm not hacking Haskell very often, but now and then I get the vibe from the community that:

a. Avoid typeclasses until strictly necessary, http://www.reddit.com/r/haskell/comments/1j0awq/definitive_g...

b. Haskell has no first class instances. http://joyoftypes.blogspot.com/2012/02/haskell-supports-firs...

> The ability to have only a single instance for each class associated with a type makes little theoretical sense. Integers form a monoid under addition. Integers also form a monoid under multiplication. In fact, forming a monoid under two different operations is the crux of the definition of a semi-ring. So, why does it make any sense at all that in Haskell there can be at most one definition of “Monoid” for any type?

<removed useless stuff>
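Worth noting: Haskell's stock workaround for exactly that Monoid objection is the newtype trick. Int can't have two Monoid instances, but Data.Monoid's Sum and Product wrappers are distinct types, each with its own instance:

```haskell
import Data.Monoid (Sum (..), Product (..))

-- Integers under addition: the Monoid instance of Sum uses (+)
additive :: Int
additive = getSum (foldMap Sum [1, 2, 3, 4])                -- 10

-- Integers under multiplication: Product's instance uses (*)
multiplicative :: Int
multiplicative = getProduct (foldMap Product [1, 2, 3, 4])  -- 24
```

The quoted criticism still stands in a sense: you select the instance by choosing a wrapper at each use site, rather than both monoids living on Int itself.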


> String is very much in the "leave data alone" mindset, baring its guts as a [Char]. Now you're stuck: you can't change its representation, you can't easily introduce Unicode, etc.

You're conflating the fact that Haskell had poor modularity when it was first conceived and String first defined, with the claim that only OO can provide the necessary modularity.

Clearly ML modules provide and always provided the necessary modularity to abstract over string representations, but there's no OO in most MLs. And now with support for ML modules as first class values, we don't need objects for modularity at any level of programming.


Every OOP developer is on a journey, they just don't know it. Some of them will never make it. But some will reach a point of realisation where writing well-designed software comes naturally to them, because they've inadvertently stumbled upon the core concepts of functional programming. It then requires them to realise that what they've found is just FP, and a further minor step to actually learn a more appropriate language. Once this developer makes that jump he reaches a new plane of development happiness and a feeling of power over OOP practitioners, because his level of productivity has magnified by about 3x, enabling him to allocate brain power to more important issues. That's just called progress though.

Having said that, I dislike articles like this because they shout too loudly. Just use a proper hybrid OO-FP (ala F# / Scala etc) language and be done with it. These languages are designed for business productivity - not academic box ticking. Everybody happy.


What arrogant hogwash.

I was exposed to functional programming my first year at university (it was used in the introductory courses) and quickly noticed that contrary to the hype (similar to yours), functional programs tended to be more bloated with trying to work around limitations of the less expressive programming model/language and not particularly more robust.

That doesn't mean certain aspects can't be nice to have ('let' is kinda nice), but even that is mostly for programming-in-the-small.

Your unsubstantiated claim of 3x productivity increase is, er, "interesting".


> was exposed to functional programming my first year at university (it was used in the introductory courses) and quickly noticed that contrary to the hype (similar to yours), functional programs tended to be more bloated with trying to work around limitations of the less expressive programming model/language and not particularly more robust.

In other words, when you first tried a functional programming language you tried to write imperative code in it, and the result ended up bloated and fragile? I'm not surprised.

Guess what, trying to write pure FP in Java by chaining together static methods and "Function<A, B>" objects would end up with bloated and fragile code too; but that's not a valid criticism of OO.


> when you first tried a functional programming language you tried to write imperative code in it,

Guess again. Did I mention the arrogance and the unwarranted assumptions?


I learned Haskell at university too. That doesn't prove anything though, does it? At the time I found it useless and plain wrong ... it was much later that I learned to appreciate it and value what it brings to the table ... much, much later.


We didn't have Haskell in 1989. As I wrote, there are things I value in FP languages in general, and more specifically Backus's FP calculus inspired me to come up with Higher Order Messaging[1].

It's nice to have language support for functional style (let, for example) where that is appropriate for the problem at hand, but you can write in that style without the language support easily enough.

On the other hand, when FP style is not appropriate for the problem at hand, it really, really gets in the way, and that's the case a lot of the time. Many if not most problems (outside of writing compilers for FP languages) don't really fit the functional style, and have to be made to fit.

Experienced devs will choose appropriate tools for the problem at hand. Me, I like adaptive tooling that I can bend to fit the problem, which is why I like dynamic OO languages, internal DSLs and Domain Modeling[2] in general.

In fact, I think the current tools are still a little too inflexible for this, which is why I am creating a language to address some of these issues: http://objective.st

FP seems to be more about bending the problem to fit the tooling, which I guess may work for a specific kind of mindset.

[1] http://en.wikipedia.org/wiki/Higher_order_message

[2] http://www.amazon.com/Domain-Driven-Design-Tackling-Complexi...


Adaptive tooling. You mean like an FP ML then, a la OCaml / F#.

I said I like hybrid OO-FP languages for business productivity, rather than concentrating on meaningless software architecture astronautics like that "Blue Book" you linked. I've read it, yes, starting from chapter 11, and while I took it on board I find many of its ideas and practices completely toxic now. Just like Gang of Four patterns and the inane extent to which OOP inherently relies upon them.

Then you go off on some academic rant about "well in 1989" (no one cares) and HPC computing (no one cares, it's hardly relevant either). FP has progressed a lot since 89 but you're seemingly too old and set in your purely OOP ways to realise it. Carry on. But take a look at a modern OO-FP multi paradigm language and feel enlightened.

If you carry on down the path of ranting about FP because, shock horror, yes it is slower than imperative code then you'll look even sillier. Not that anyone really cares about some randomer making himself look silly by poo-pooing a whole programming language paradigm whilst paradoxically claiming he always likes to choose the right tool for the job. I guess your jobs have just never been varied enough then?

PS: Higher order messaging is just function composition with presumably a dash of actors. Congrats on reinventing a functional programming concept. But it perfectly illustrates the ignorance so prevalent in individuals that only know OOP and will attack anything that isn't OOP.


>academic rant about "well in 1989" (no one cares)

Funny, my age seemed to be important when I was "young and inexperienced". Now "no one cares"...and I am "too old and set in my ways". Which is it? Both? Does my age matter or not? Both again?

Hint: if your conclusion remains the same, but your reasons for that conclusion are this inconsistent, then your conclusion is almost certainly not supported by those reasons. In the words of Popper, an "immunized" theory, meaning it is immune to falsification by empirical evidence.

Anyway, as you gain experience, you will probably appreciate the wisdom of domain modeling. Or remain ignorant. Not sure how the GoF Pattern book got into this discussion, but note that it is largely a description of workarounds for non-dynamic OO languages.

The 6x slower performance was relevant for "Data Parallel Haskell", because performance is pretty much the only reason for doing that sort of parallelism in the first place, as I explained. HPC was relevant because they had been doing the thing that was claimed "impossible" by SPJ in languages not like Haskell...in FORTRAN (which I hope we can agree is not all that much like Haskell).

Had you paid attention, you would have noticed that I use functional techniques when appropriate. I just don't buy the inflated claims, which have been consistently inflated and consistently unsupported by evidence for well over 2 decades now ... and object to arrogant ignorance such as that which you have amply displayed and continue to display.

To call HOM derivative is not exactly a deep insight, when I very specifically told you that it was derivative (and the papers are also very clear about that). However, you display fundamental misunderstanding of not just HOM (which could be forgiven), but also OOP and FP: HOM is exactly not "function composition". OOP languages have had higher order functions ("blocks" in Smalltalk) for decades, and these can be and have been composed quite easily. The point of HOM is that the first order mechanism in an OOPL is messaging, so having functions as the higher order mechanism is inconsistent. HOM creates a HO mechanism that is based on messaging instead of functions, hence HOM. Actors are an unrelated concept.


Cheeky bugger ain't you? "Had you paid attention"? "As I gain experience"? You mean on top of the 20 I already have, and which I clearly used to better effect than yourself as I'm not still rolling around believing the DDD Blue Book is the be-all-end-all silver bullet of software development. I took it on board, and kept some of its ideas in my toolkit, but really it is a book all about over-engineering for those that don't know any better. Domain modelling gives the impression of UML diagrams and all that lark, is that you? It isn't me. My domain models are honed over time as the project evolves. You act like domain models can't be done in FP. My domain models that I write in F# are sodding impressive and they only use a quarter of the lines of code required by Obj-C, Java, C# and similar ilk.

HOM? Don't make me laugh. It's just more shit for the objective world to work around limitations of the languages. A message is just an object that can be stored and queued somewhere, and potentially serialized. That's not special.

- Loves dynamic languages. Check.
- Loves OO. Check.
- Loves DDD Blue Book. Check.
- Invents "new" programming design patterns all by himself believing he is the sole inventor. Check.
- Dabbled with an FP language in '89 and hasn't touched FP ever since. Check.
- Created his own shitty programming language to try to improve upon OOP's limitations. Check.
- Dares to talk down to anyone more experienced or that disagrees with him. Check.

Yeah, I wouldn't employ you either.


I know it's always tempting to argue vagaries with trolls, but if you have the time I would very much appreciate it if you could give a short answer to my good faith question here:

https://news.ycombinator.com/item?id=8339654


> Many if not most problems ... don't really fit the functional style, and have to be made to fit

Some examples would be helpful here.


Hmm...didn't realize this was (or could be) a serious question.

Anything with state comes to mind. The text field I am typing this into, for example. I type on my keyboard and the state of the text field changes, and after I hit "reply", the state of the page changes with a comment appended. Before/after.

Yes, you can implement this by creating a completely new page containing the concatenation of the old page with the comment, but externally (when you visit the URL), the state of that page has changed. So if you choose to implement the problem in a functional style, you have to bridge that gap somehow between that style and the problem at hand.

Any sort of document processing done on computers in Word, Excel (regarding the document itself, not the one way dataflow constraint program inside), OpenOffice, PowerPoint, Pages, Keynote, Quark XPress, InDesign, Photoshop, Illustrator etc. People use these programs to change the state of documents. That is the purpose of these programs.

Anything that interacts with the world, for example the user interface.

Or Wikipedia. Pages change, sometimes because there is new information, sometimes because something in the world has changed. Or most any other web site.

Really, the world (a) has (a lot of) state and (b) that state is changing incessantly. It is not (a) stateless or (b) immutable.

But don't take it from me: "In the end, any program must manipulate state. If it doesn't, there is no point in running it. You hit go and the box gets hotter" - Simon Peyton-Jones. https://www.youtube.com/watch?v=iSmkqocn0oQ&t=3m20s


Thank you for the food for thought.


It's pretty rubbish food he just gave you there. This is why F# has the mutable keyword. If you are so desperate to use it, that is. Most good programmers try to avoid it.

It's funny that this guy claims fitting the problem to suit FP is a bad thing, but fitting the problem to suit OOP is seemingly a good thing. There is no difference really. All problems have to be made to fit your tooling and practices in some way. The difference is how much squashing is required, and the two or three second decision it takes to select the right tool. Only performance optimisations really warrant falling back to mutation of state, other than IO of course. The default should always be immutable. Don't let some brain-dead OOP-only troll deter you from seeing the light.

He is speaking in riddles, like pseudo-academics love to. Seriously, he is suggesting you can't have an editable text box in a GUI written in a language like Scala, F# or OCaml? What a moron. (This link will prove particularly embarrassing to a certain person here: http://www.scala-lang.org/api/2.10.2/index.html#scala.swing....) He is arguing an argument that doesn't even exist here, but one that only exists in his own head.

Pure FP, much like pure OOP, is utter shit and painful. The sweet spot is reached by mixing the two and using a multi-paradigm language, ala F#, Scala, OCaml, etc.


Don't worry, you haven't reached the right level of experience yet. You'll get there, though you have some attitude problems to overcome first like most young programmers.


LOL! My 1st year university course was 1989. When were you born?

Interestingly, that is exactly what I see in the current generation of FP hypesters. Grandiose claims based on very limited experience. Aka: know so little, assume so much.

But don't worry, it happens to the best. My favorite is still the Simon Peyton-Jones talk on Data Parallel Haskell, where he can't help himself, but just has to make the snide comment "this is only possible in a language like Haskell". Hand raised in the auditorium: The HPC community has been doing this for years. In FORTRAN. But carry on...

(And of course, the results are dismal: 6 cores required to match sequential C! Considering that this sort of concurrency [there are others] is exclusively for better performance, that is a damning result).

Despite these, er, issues, it is an absolutely fascinating talk, and the techniques look quite worthwhile. After all, they've been in use in the HPC community for some time :-)

https://www.youtube.com/watch?v=NWSZ4c9yqW8


I wonder why the parent comment is downvoted. He's dead on.


nbevans made a post dripping with condescension and arrogance. mpweiher pointed that out. nbevans replied with... more arrogance and condescension. And you wonder why he got downvoted?

He's also (at least partly) wrong, and so are you. FP has its place. But what people like you and nbevans never seem to realize is that the rest of the programming world is neither stupid nor ignorant. Yes, there are better ways of working than the ways that we work. Yes, we don't know all of those ways, even though some people do. But that's true for you, too. There are people who are ignorant about FP. There are also people who are both smarter than you and more experienced than you, who do not use FP for good reasons. But you and nbevans arrogantly tell us that once we learn enough, we'll see that you're right. Your inexcusable assumption that everyone who disagrees with you has to be both wrong and ignorant is why you're getting downvoted.


OOP people are too sensitive!

I still use OOP just not as much as I have another tool in the box (FP) that is sometimes (often?) more appropriate.


I made the switch to FP. I first learnt Basic, then Delphi, then C, then Java, then C#, then Python, then JS, and now Scala. Python is still my favourite and I really like Scala too. Anyway, I could write big stuff in any typed language, OO or otherwise. I don't get what the revelation about FP is. If you want bullet-proof code, get the model checker out and work in state machines. That is by far the biggest revelation I have had in my career. FP doesn't save me much time or buy much safety. It doesn't solve concurrency. It's usually terser, which is nice. That's about it. Python is a massive productivity boost but it doesn't scale. Scala scales but isn't that productive IMHO.


I'm not going to claim it solves concurrency, but Haskell's combination of STM, expressive types, and immutable data by default make it the nicest language I have used for concurrency. (I have heard similar praise of Clojure as well, but I don't have any experience with it.)


I found Scala to be hugely more productive than Python.


You don't sound arrogant and condescending at all.

Here is the truth: in a few years, you'll realize that OOP is actually vastly superior to FP. It's okay if you don't believe me right now, you haven't progressed enough yet. Maybe you will, maybe you won't. If you succeed, you'll look back on FP and you will wonder how you could ever tolerate such an inferior programming model.


If you think you have found the silver bullet then you are still on the journey too.


I don't think that. I think I found a big productivity boost and the ability to focus the mind on more important issues than trying to repetitively trick an imperative language design into doing what I want.

Always looking for the next step on the journey. (It isn't Go)


It's also possible that they're on a journey to Go, not FP.


I think that contrasting OOP with FP brings too many implicit assumptions to the discussion. The value of immutability, for example, seems to be orthogonal to the technique used to structure a computation and is more closely related to the problem to be solved.

In my opinion OO languages popularized the idea of having separate internal and external representations by providing language constructs that made it practical. But they also promoted the coupling of data with the methods to manipulate it -- this is not a necessary characteristic of OO but it is common in popular implementations. This association of data and methods turned out to be a limiting factor in the face of changing (maybe unknown) requirements. The flexibility of more dynamic run-times that allow to mutate this relationship during execution (monkey patching) was not a satisfactory solution as it inhibits static analysis. In my experience this is generally the main motivation when looking into alternatives.

Modeling computations as dynamic data threaded through a series of statically defined transformations seems like a sensible solution to the issue. It also brings additional benefits (e.g.: easier unit testing) and makes some constructs unnecessary (e.g.: implementation inheritance). This approach is commonly used in FP languages and I think is the main reason why they are contrasted as alternatives.

Since it's not always possible or desirable to re-write a project, sometimes the technique is dismissed because it is confused with the languages that favor it. The relative lack of resources explaining how to use FP techniques in OO languages doesn't help either.

Separating the techniques from the implementations has practical value and it allows evolving existing bodies of work.


One reason I think Objects and classes get a bad rap is the mutability and lack of intentionality that comes from getters and setters.

When the innards basically are laid bare by setters, you lose a lot of control of state and flow and object lifecycle that really hurts good design.

Looking at objects that need some kind of validation run on them, in many cases the validations aren't run every mutation but rather on some kind of save or persistence event, long after the validity of the object should have been checked.

OO can lead to great design, and there are some great techniques in FP, but average software written with either is probably terrible.


The point of getters and setters (rather than having public fields) has always been to control how an object's users access its internals and how state is changed, if at all. The idea that you should write them as a ritual or as boilerplate is just bananas.

In this discussion of major programming paradigms it is important to realise a couple of things: that this is a very old discussion, and that there is no clear winner. The productivity and usefulness of a language is ultimately shown when it is put to the test of big practical development projects. Currently business is dominated by the object-oriented languages (C#, Java, JavaScript, Objective-C, C++, etc.) and probably for good reason: the dominant problems that software spends its line count on seem to be things like user interfaces and interfacing with other "platforms/paradigms" such as relational databases, web services or data files. Object-orientation has arguably shown itself to solve these problems well.


That's a pretty underwhelming case against objects, and equally underwhelming in favor of functional programming.


Agreed. The problem is, what you're asking him to do is really hard.

Why was OOP considered a good idea? Before OOP, programs were just structs and functions that operated on them. But as programs got larger, that approach broke down. Someone on a team of programmers would create an instance of a struct, but would initialize it in a way that some function (written by a different programmer) would regard the struct as having an inconsistent state. Or some function, which was meant to only be a helper function for other functions that were the API, would be called directly by some ignorant and/or lazy programmer. The result was increasing chaos as programs got larger.

OOP was viewed as the fix to that kind of problem. It was somewhat successful as a fix, too.

So the problem with most of these pro-FP articles is that they show you small examples. But the pre-OOP style worked fine on the small examples, too. In fact, almost any style works on small examples.

And showing that I can write the same thing in fewer lines by using FP isn't the answer, either. It's like saying that your algorithm has a smaller scaling constant than mine does. That's fine, but first let's talk about whether your algorithm is O(n^2) and mine is O(n log n). That is, if your approach scales worse than mine for large problems, then showing me that your way is more efficient for small problems completely misses the point.

But to build a convincing case for that, you'd have to do something like having a team of competent OOP programmers write a large-ish program (say, 10 person-years of work), and have a team of competent FP programmers write the same thing, and report on the results. Oh, yeah, and have the teams maintain the programs for five years. That could be a convincing paper, because it would address the actual issues.


This is the argument I would use against FP. It's an open question -- to some degree.

Couple of notes. First, have you seen what real-world large-scale OO looks like? Dude, it ain't pretty. We're swimming in projects that are staffed at 10x or 20x the number of coders they probably really need, and a lot of time is spent in support activities.

Second, maybe the real question here isn't one of scale, it's how the model falls apart under strain. A good encapsulated system should offer you a bit more defensive programming protection than an FP one. But if you're using composable executables, while testing at the O/S level? Meh.

I know why we went to OO. And let's not forget OOAD, a beautiful way of joining analysis and design about the problem with the solution itself. I'm of a mind that OOAD might live on even once the world moves to mostly FP.

I think maybe that OOP was a stopgap position; a place to try to build a firewall against the encroaching overstaffing associated with Mythical Man-Month management. Now that programming is a commodity, however, we're seeing diminishing returns. Don't know.

I am much less convinced of the "We need OO for large scale projects" argument now than I was two years ago. I expect this trend to continue. We might be able to solve the scaling problem with things like cyclomatic complexity limits on executables, or DSL standards. Not sure of the answer here, but I think we're going to find out.


Programming is a commodity? In a generation I hope so - now?

And to be fair I think that OOP and FP are the wrong paradigms for handling bad management of a large project. Open source methodologies are eating into that (I have just started at a major Fortune 500 with effectively a huge open source project inside its employees) and even without that a good focus on risk management will help.

In short, talking about your problems and taking early steps to resolve them will do more to fix a project than moving from OOP to FP.


> I know why we went to OO. And let's not forget OOAD, a beautiful way of joining analysis and design about the problem with the solution itself. I'm of a mind that OOAD might live on even once the world moves to mostly FP.

Never heard of it, but sounds interesting.


This is quite a good article on exactly that:

http://simontcousins.azurewebsites.net/does-the-language-you...

The same application written in C# and F# by the same programmers (the C# project took 5 years, the F# project 1). It in no way proves anything, but it's food for thought.

Personally I've had huge wins moving from OO to functional (Haskell and F#).


I've seen similar studies. I wonder, however. Functional programming is somewhat esoteric, so a random selection of programmers who practice FP are likely to be more dedicated to the craft of programming in general. I think the truly interesting question to ask would be, "Can we make software better by having the same programmers use functional programming?"


The second time you write the same thing is always a lot faster than the first time.


It's very hard to tell from that brief, offhand comment whether there's any substance behind it. I'd be delighted to hear more of your thoughts.


The sole argument in this article seems to be that OO systems introduce more complexity than is justified, which results in larger code bases than an FP-based system would have, which increases maintenance costs.

Then the article goes on to show some toy example solutions in FP style, without really touching on the challenges that don't show up until larger problems.

I can write a great Account balance summer program in Python OO too, and it'll be pretty, simple, and readable...


I'd really like someone on any side of this debate (and there are certainly more than two; for example, some people are advocates of "FP in the small, OO in the large") to write an article that does describe how their approach handles the challenges of designing and maintaining a large system.

I think such articles are rare because they're much harder to write than something like this. In complex systems it becomes very difficult to explain all of the tradeoffs and constraints that have led to the design you've ended up with, and it becomes harder to evaluate that design.

FWIW, at this point I agree with the author that modern FP has strong advantages over OO. One reason I feel this way is because extensive experience with large OO systems has shown me lots of ways in which OO causes problems. However, I admit that I don't have a similar level of experience with large FP systems. I'm sure that as FP gets more popular and more large FP systems get built, we'll find plenty of things to complain about on that side of the fence too.


So strong, pure FP coding will lead to a naturally decomposed system of small pieces -- once the refactoring is done. There are no large pieces. That's the beauty of it. I believe that the premise of your question is in error.

The sucky part is that there is no guarantee that you will ever get there. A bad programmer or two and you've got a mess. Large FP systems crucially depend on high-quality coding. There is no place for everything to go, that's what the coders figure out!

Contrast that to an OO system, where things go where they naturally belong, but you really don't know what the algorithm is. Hell, you can spend days just wiring stuff up and putting stuff in place before the actual "real" code finds a home. But you always have a plan for where things go.

I don't think you can find a large, complex FP project because I think all the good complex FP projects are clusters of small executables.


What was the premise of my question that you thought was in error? I'm guessing the answer is in your last paragraph: there are no large, complex FP projects because such a large project inherently isn't good FP. We'll have to agree to disagree on that; in my opinion, some projects just don't cleanly decompose into a set of small, manageable subprojects.

So moving beyond that: if it's really true that large FP systems depend on high quality coding, I think FP is doomed.

One aspect of large systems is that you're no longer able to depend on consistently high-quality coding, because even if all the coders involved are highly skilled, there are new people being added to the project all the time and old people leaving. Knowledge and context gets lost, and new people write code that makes sense locally but doesn't fit the needs of the project as a whole. That's just reality. And even the experienced coders on the project lose the ability to consider the whole thing at once after a while. There is a limit to how much modularity and encapsulation can help with that, although they're very useful tools.

In a large scale project, it's really important to consider how features of the language and tooling and ecosystem help or hinder you in managing those kinds of problems. That's the sort of thing I think we could use more discussion about. And I feel completely opposite from you here - when it comes to dealing with imperfect coding and imperfect coders, I believe that modern FP languages have better solutions than modern OO languages. I think FP's popularity is only going to grow, exactly for that reason. But I also know there are places where current FP languages need work, or where the paradigm may be a poor fit, and I think it won't be clear where all of the weak points are until we've got more experience as a community with large FP projects than we have right now.


> I don't think you can find a large, complex FP project because I think all the good complex FP projects are clusters of small executables.

That's certainly one (optimistic) conclusion. Another could be that FP is not suitable to large, complex projects.


I'm not sure what you mean by "large" but Jane Street has apparently millions of lines of OCaml written. Of course, I'd argue that the fact that PHP is inherently unsuitable for anything complex doesn't keep Facebook from having a gigantic amount of it. The difference is that they had to write a type checker for it :)

I think the truth is more simple. Traditional OO is what is taught, traditional OO languages have tons of libraries, and there are tons of legacy code in traditional OO. You can easily find OO programmers. Nobody ever got fired for making OO systems, even when they end in barely-maintainable horrors full of mutable state.


> I'm not sure what you mean by "large" but Jane Street has apparently millions of lines of OCaml written.

OCaml is a nice, pragmatic hybrid of imperative, OO and FP. Adding sporadic side-effects to some component (1) will not force the rest of your program interacting with it into some monad. I guess there's a reason they didn't use Haskell :P

(1) For example, you want to compute on-line summary statistics, where the input is run-time configurable, i.e., items can come from a file, network or memory stream.


That's great - if you can do it. The Unix design philosophy has held up well over the years.

But what you're doing is building small pieces that communicate with each other (via pipes, files, databases, or something similar). That looks almost like an OO design (pieces that communicate with each other over defined interfaces, hiding their internals from each other), except that the inter-object communication channel is both more inefficient and more impoverished in what it can express.


The rich typing of most OO languages and frameworks means that the "defined interfaces" are usually many and varied, and the system is less composable and reconfigurable as a result.

Unix pipes work so well in part precisely because the medium of exchange is so unstructured, with every "module" speaking the same language. You may need to massage the medium between two modules, but guess what, we have other modules like cut and sed and awk, that are not only able to transform the medium so that modules can be attached to one another, but themselves only had to be written once.

I think the Unix pipe pattern of architecture works very well in the large, and you see things very close to it elsewhere. C#'s Linq is fundamentally based on transforming iterator streams - little different, architecturally, than Unix pipes. The Rack middleware stack in Rails has a similar structure - every module has a single method, and recurses into the next step in the pipeline, and gets a chance to modify input on the way in and output on the way out. Both get their power by using fundamentally the same "type" on all the boundaries between modules, rather than module-specific types. It's the very antithesis of a language like Java, which even wants you to wrap your exceptions in a module-specific type.
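The "same type on every boundary" idea can be sketched with generator stages in Python (an analogy, not code from the comment; stage names are invented): each stage maps an iterable of lines to an iterable of lines, so stages compose like Unix pipes:

```python
def grep(pattern):
    # stage factory: returns a lines -> lines transformer, like `grep PATTERN`
    def stage(lines):
        return (line for line in lines if pattern in line)
    return stage

def upper(lines):
    # another stage speaking the same "medium": lines in, lines out
    return (line.upper() for line in lines)

def pipeline(source, *stages):
    # wire the stages together the way a shell wires pipes
    for stage in stages:
        source = stage(source)
    return list(source)
```

Because every boundary carries the same type, any stage can be inserted, removed, or reordered without touching the others -- the property the comment attributes to cut, sed, and awk.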


I found your comment accidentally extremely funny. It's also illustrative of the problem here. I decided to reply not in order to goad you but to try to make some sense to the other OO folks reading along. Hopefully I can disagree and add some nuance without sounding like an asshole.

"That looks almost like an OO design"

Yes. Yes it does. You can only move data so many ways. I've got pipes, you've got messages. Life is good.

"except that the inter-object communication channel is both more inefficient and more impoverished in what it can express"

Really wanted to call bullshit on you here. If it's working, then somehow the efficiencies and paradigm of construction have overcome all these limitations, no? Lots of loaded words here. Are OO paradigms richer in terms of expressiveness? Gee, I don't know. You could say so. But in my mind it's an uninformed opinion. It's all pretty much the same.

Many times OO folks get really frustrated when they start learning FP. I know I did. The sample code did silly things like sort integers. Everything was simple, trivial, academic. Where's the real code? I would wonder. I'd read three books and we'd never get around to building a system.

Looking back, what I missed was that I was already looking at the real code. It was my mindset of wanting all of this expressiveness, efficiency, and richness of expression that was preventing me from seeing a very important thing: we were solving the important problem!

Instead, I had a very fine-tuned idea of how things should look: this goes here, that goes there. This is obviously an interface, we should always use SOLID, and on and on and on and on. I had a feel for what good OO looks like. It's a beautiful, rich thing. Love it.

But this kind of thinking not only was not useful in solving FP problems, it consistently led me down the wrong path in structuring FP solutions, which was weird. I would look at things as all being the same -- when I should have been looking at the data and the functions.

Guy I know asked online the other day "What's the difference between microservices and components?" My reply "Everything is the same, but there's a difference in how you think about them. A component plugs in, usually through interfaces. A service moves things, usually through pipes."

If you're looking at a service as being another version of a set of objects passing messages, you're thinking about system construction wrong. Wish I could describe it better than that. It was something I struggled with for a long time.


> Hopefully I can disagree and add some nuance without sounding like an asshole.

I think you succeeded.

And I think you're right that OO thinking is probably not going to lead you to a good FP design. Why should we expect it to? (And you're probably also right that OO programmers, unthinkingly, do expect it to.)

Perhaps what I should have said is this: The architecture you're coming up with looks somewhat like Object-Oriented Analysis and Design (OOAD), even if it's implemented with FP rather than OOP.

On to this line: "except that the inter-object communication channel is both more inefficient and more impoverished in what it can express."

There's two kinds of efficiency in play here: programmer efficiency and machine efficiency. In many cases, it makes more sense (now) to worry about programmer efficiency - we're not pushing the machines all that hard. But if I do care about machine efficiency, I can get more of it with a single app than I can with a series of apps connected by pipes, because I don't have to keep serializing and de-serializing the data. Should you care? Depends on your situation. So that's the efficiency argument.

Expressiveness: This chunk of code is expecting a floating-point number here. If it gets that via a (typed) function call, it can demand that the caller supply it with a floating-point number. If it gets it via a pipe, it can't. All it can do is spit out an error message at runtime.

[Edit: fixed typo.]


> we were solving the important problem!

Can you elaborate please? What exactly was the important problem? Was the important problem turning huge codebases into trivial problems?


All this FP revolt is starting to remind me of Whitehead's and Russell's attempt to invent a formal system that would be paradox-free...


Martin-Löf Type Theory doesn't actually have any paradoxes in it.


Well, it does—Girard found one. It also has lots of troubles with equality.


The Equality point focuses on Java's issue with string equality but doesn't mention that other OO languages, such as C#, let you overload == in order to provide the type of equality you want.


The OO being addressed here is the statically-typed variant made popular by C++ and its followers. Many of the points made (classes being types, interfaces, type variables, etc.) do not apply to dynamic OO languages in the Smalltalk vein.

There's even a footnote referencing Kay-style message passing OOP, but it suggests that message passing languages are not "available today in the mainstream". There are several major OO languages today based on message passing, so I don't know how that claim is justified.


Notably Ruby and Objective C.


Objective-C is probably the saddest story of any programming language.

Absolutely fantastic libraries, incredibly easy to write high-level abstractions over very low-level C code, and completely useless outside of one platform.

I wish projects like GNUStep would get more love.


I've been taking a similar journey in my blog, where I talk about the difference in thinking in FP and OOP. ( http://tiny-giant-books.com/blog/real-world-f-programming-pa... ) I use C# and F#.

I was at a Code Camp a few years back where one of the speakers was introducing F#. He was looking at a map function or some such on the screen and muttered something like "Well, you know, you can see the C# this compiles down to. It's all just a while statement. So there's really not much here."

At the time, I was concerned that he missed the point.

It's tough moving from 20 years or so in OOP over to FP. Whatever you do, you don't want to give folks the impression that you're just a fanboy for some new latest whizbang tech eye candy. Yet it's important to convey that there's something different going on here too. Yes, it's all just ones and zeroes, but while true, that observation is not important.

You reach a point where you say "You know, an object is just a function with closures. A function is just an object. It's all the same"
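That observation has a classic demonstration; here's a hedged Python rendering, where a closure over local state plays the role of an object with a single method:

```python
def make_counter():
    count = 0  # the captured variable is the "private field"

    def increment():
        nonlocal count
        count += 1
        return count

    return increment  # the returned function is the "method"
```

Two counters made this way don't share state, just like two object instances -- which is the whole "a function with closures is an object" punchline.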

Yeah, it's all the same. But it's not. Just like that C# guy, you understand that at the end of the day all we have is state machines, but you missed the part about how thinking about things in various ways is better or worse for creating certain types of solutions.

This author tries to make the case for FP by taking apart the wild and wooly world OOP has become, where you're not just coding a solution, you're creating an extension to the type system for your very own stuff. Very cool stuff. But I think once you go down that path, you're arguing the wrong thing.

Thinking about problems in pure FP leads to small, composable functions. These group into small, composable executables. These can be scheduled and pipelined across many different O/Ses, networks, and machines. This land of unix-like coding has a multi-decade track record and solid tools that any administrator can use and understand. Thinking in terms of object graphs almost inevitably leads to very complex and nuanced solutions, with lots of hidden dependencies, that only work under certain conditions, and where you may need an expert or two around to help out when things go wrong.

FP is not nirvana. Nothing is. But it is very refreshing to have it solve the same old problems in a much less complex way. I don't see any future except for pure FP -- although my money says it might be 20-30 years until the industry catches on. Now's a good time to get an early start!


I wonder how much of the "small, composable functions" nature of FP can be attributed to the types of programs you write in functional languages, and the types that you don't.

Unix-like tools such as grep (or ghc) are very much like pure functions: programs that accept input and produce output data. It's not surprising that they lend themselves well to FP techniques. But other programs, like the web browser I'm using now, have lots of "inputs" and "outputs." There are many knobs that can be turned, and output goes to screen, disk, network, other programs...

I suspect these programs have a larger essential "hairiness." grep only has to search text. But Find in a text editor has to show a progress bar, cancel button, intermediate result count, etc. These features are intimately intertwined with the algorithm itself, and that's often hard in FP. Try writing a Haskell function that does text find/replace with live progress reporting. It's not easy, and it ends up looking a bit like Java.

Note that the land of unix-like coding isn't very good at UIs either!


Great question!

I'm finding that FP tends to shave the "hairiness" off things, many times in ways I had not anticipated.

UIs are a completely different animal. I've done a lot of UI stuff in the OO world in the past, and some in C/C++. The couple of apps I wrote in F#? I ended up doing a kind of functional-reactive thing. I really like the FRP paradigm for UI work, but I need a lot more experience to say anything useful about it. One of the things I started doing was setting up derived types from Win32 objects. Looking back, with that kind of attitude I was probably headed down the wrong road.

A web browser, eh? that's very interesting. One of my projects does some screen scraping. I found that scraping could be done in a pipeline -- get the page, score the sections, run some rules, do some QA, etc. Each stage did some work and left things for the next stage. But, of course, I was processing many pages at the same time. Rendering one page for a user sitting in front of a screen is a completely different scenario. I think.

Writing a pure FP browser would be a hoot.


> I suspect these programs have a larger essential "hairiness." grep only has to search text. But Find in a text editor has to show a progress bar, cancel button, intermediate result count, etc. These features are intimately intertwined with the algorithm itself, and that's often hard in FP. Try writing a Haskell function that does text find/replace with live progress reporting. It's not easy, and it ends up looking a bit like Java.

Ah, I suspect you're not familiar with things like conduits/pipes/iteratees/etc. These allow you to express and compose those kinds of dirty things that you're looking for--incremental input/output, progress reporting, etc. So you could write a text search function, compose it with a progress reporter that reports how quickly it traverses input, and then report the output progressively, and dispose the whole thing at any time. Haskell's type system is fairly awesome at letting you do things like this, enough that people are still discovering neat tricks decades later.
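This isn't actual conduit/pipes code, but the shape of the idea can be sketched with a plain Python generator that interleaves match and progress events, letting the consumer render progress incrementally or abandon the stream at any point:

```python
def search(lines, needle):
    # Yields ("match", n, line) on hits and ("progress", n) after each line,
    # so the consumer decides how to report progress or when to cancel --
    # the search logic itself stays oblivious to the UI.
    for n, line in enumerate(lines, start=1):
        if needle in line:
            yield ("match", n, line)
        yield ("progress", n)
```

A consumer can stop iterating after any event (cancel button), count progress events (progress bar), or collect matches as they arrive (intermediate result count) -- without the search function knowing any of that exists.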


> Thinking in terms of object graphs almost inevitably leads to very complex and nuanced solutions, with lots of hidden dependencies, that only work under certain conditions, and where you may need an expert or two around to help out when things go wrong.

Somehow nobody complains about this when coding in Erlang. Now do the analogy of actor ~ object.

Besides, why do you think that it's easier to think about large function composition graphs rather than large object graphs?


First I'll comment on footnote #1 in the text:

> "Here, I refer to OO concepts as available today in the mainstream. Initially, objects were planned to be more like autonomous cells that exchange messages, very similar to what the Actor model provides. [...] "

There is absolutely nothing in modern OO languages (Java, C++, C#) preventing you from designing systems as if object instances were actor instances, with method invocation corresponding to message passing. His text therefore criticizes the widespread (mis)understanding of OO, including his own.

Second, he equates "concurrency" with "shared memory, mutation-based concurrency". Well, there's message-passing too, and it works perfectly fine in OO programs.

---

His problems with OO stem exactly from the reductionist approach of OO=encapsulation+polymorphism. If you make the object ~ actor conceptual jump, you'll suddenly get a new perspective on how to use objects in program design.

(In the actor model, actors do not share state and are conceptually immutable. However, there's a "become" operation which the actor can use to change its future behavior on incoming messages, in effect giving you means to implement a memory cell -- not that you'd really want to do it.)
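The parenthetical about "become" can be made concrete with a toy, single-threaded sketch (invented names, no real actor framework): an object that swaps its own message handler, which is enough to implement a memory cell:

```python
class Cell:
    # A toy "actor": send() dispatches to the current handler, and
    # become() changes how all future messages are handled.
    def __init__(self):
        self._handler = self._empty

    def become(self, handler):
        self._handler = handler

    def send(self, message):
        return self._handler(message)

    def _empty(self, message):
        kind, *args = message
        if kind == "put":
            value = args[0]
            # future "get" messages now answer with the stored value
            self.become(lambda m: value if m[0] == "get" else None)
```

Note that no field is ever mutated to store the value -- the "state" lives entirely in which handler the actor has become, which is the point of the original remark.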


Another article that fails to see that mainstream FP languages are actually multi-paradigm.

First of all, the article gets a few points wrong regarding OO programming.

Single dispatch is not a synonym for OOP: Common Lisp, Dylan, and Julia are all examples of languages with multiple dispatch.

Second, unless Erlang, Miranda, Caml Light, Standard ML, Scheme, or pure Lisp are being used as examples of FP, most certainly the language will have support for some type of OOP.

The only difference is whether the language is functional-first or object-first, i.e., which paradigm is usually the one to reach for first.

I guess we need to have a few blog posts with UML examples mapped to OCaml, F#, Clojure, Common Lisp, Haskell.

At the very least it would make these comparison posts focus on pure FP languages, I guess.


It does not follow that a poor OO implementation in one language (Java) means all OO is poor (and, in fact, the article agrees with this when it decides Haskell is an ok way to do OO).

Yes, every concept in OO can be accomplished without OO, and in many cases OO concepts are reduced implementations of more general ones; polymorphism, for example, is a simplified kind of type-based function dispatch. The power comes from the consistency and from the simplification itself. Instead of having multiple implementations of polymorphism that have to be developed and managed by multiple teams, you have one that is developed and managed by the compiler and understood by everybody who writes the language.

Java was very opinionated about certain aspects of OO, missing the mark of why OO is valuable in some places. It also made some very poor decisions in the implementation of its VM (int/Integer, ==/.equals) that made things worse. Finally, living in the Kingdom of Nouns[1] just sucks. But this feels like a Lisper's rant against Java more than a true critique of object-oriented programming.

1. http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...


One thing where I think OOP is particularly elegant compared to FP is refinement.

Say I have a class with five methods on it. I realize I need to create a new class that has almost the same behavior: I want to reuse four of these five methods but the fifth one is different. This is a very common problem.

This is trivial to do in OOP: create a base class, override the method whose behavior you need to change, done.

I've never found an FP language that makes this as elegant.
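For what it's worth, one common FP-flavoured answer (sketched here in Python with invented names, so it's an analogy rather than any particular language's idiom) is to represent the "class" as a record of functions and derive the variant by replacing a single entry:

```python
from dataclasses import dataclass, replace
from typing import Callable

@dataclass(frozen=True)
class TextOps:
    # the "class" is just a record of behaviours
    clean: Callable[[str], str]
    shout: Callable[[str], str]

base = TextOps(clean=str.strip, shout=str.upper)
# "override" exactly one behaviour; the other is reused unchanged
quiet = replace(base, shout=str.lower)
```

Whether this is as elegant as subclassing is a matter of taste; Haskell-style record-update syntax and OCaml's functors offer variations on the same move.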


It seems to me that everyone is missing the point behind object-orientation.

Object-orientation sucks for everything except user interfaces. If you don't believe me, try writing a UI library without objects and see what happens. UI is OOP's best (and only compelling) use case.

For all other algorithms classical data structures/ADTs are much better. If you don't believe me, try writing an OOP compiler.


I write a lot of interactive SVG based user interfaces in JavaScript and always default to using "classes" to represent composable elements in the UI. I've recently been wondering about whether writing UI code with a functional language would even be possible. When it comes to data manipulation though I will always use underscore.


My thesis is that by trying to write functional/imperative UI code you will create your own ad-hoc OOP system.


I think you've confused "intensional" and "extensional".

("Intensional" means "identity determined by structure"; "extensional" means "identity determined by behavior". You seem to use them in the exact opposite sense.)


I'm not a good programmer but I've never understood OOP. For me it feels much more logical to work with pure functions with consistent output based on input. At work I'm told to use more OO but it feels really enforced and not intuitive at all. The only time it makes sense for me is in game development where you have many entities of the same type that all have their own states.

I feel I should learn OO properly and maybe try Java or something where you have to code OO. On the other hand I also feel I should go with my gut feeling, forget about OO and just learn FP instead. It feels wrong though to "skip" OOP seeing most serious programmers seem to have a background in it. What do you think?


The secret about real world Java applications is they tend not to be very object oriented at all, in the way OOP is taught in textbooks.

Interfaces are very heavily used, along with dependency injection (boy, do Java programmers love their dependency injection).

Inheritance is now widely discouraged. "Java Beans" are everywhere, which are glorified structs breaking every single rule of encapsulation.

Java runs fast, is garbage collected, has great support for multithreading, great Unicode support, and many other advantages. But it's certainly not very object oriented, especially in the Alan Kay sense.


What do large-scale applications look like in Haskell or Clojure? Why can't both co-exist? Can we have a humanized OO language that compiles to something immutable?


Pattern matching seems very limited compared to subclass-based method dispatch. Using pattern matching requires you to know every case in advance. But most "standard" OO languages allow subclasses to be created without touching the original base class. This allows injecting new behavior into existing systems. AFAIK, pattern matching alone can't do this.


That's because it is limited. It's not meant to be a way to provide polymorphism. One of the things it does provide is knowledge of every possible case, and a reminder from the compiler if you missed one. This is not possible in many OO languages.

FP languages have different ways to inject new behavior. You could for example define a function a -> (a -> Int) -> Int. This function now works for any type for which you can also provide the "interface" a -> Int.
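The a -> (a -> Int) -> Int shape translates directly into any language with first-class functions; a minimal Python rendering (function names invented):

```python
def twice_size(value, size):
    # `size` is the injected "interface": any value -> int function will do,
    # so new behaviour is added by passing a different function, not by
    # editing a pattern match or subclassing.
    return 2 * size(value)
```

Any type works as long as someone can supply the accompanying function, which is the injection mechanism the comment describes.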


You could also define a typeclass Intable, and a function (Intable a) => a -> Int, which will work on any type that implements Intable.
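A loose Python analogue of that typeclass, using typing.Protocol for structural "instances" (Intable and to_int are invented names mirroring the comment, not a real API):

```python
from typing import Protocol

class Intable(Protocol):
    # the "typeclass": any type with a to_int method qualifies
    def to_int(self) -> int: ...

def double(x: Intable) -> int:
    # works for any type that structurally implements Intable
    return 2 * x.to_int()

class Score:
    def __init__(self, points: int):
        self.points = points

    def to_int(self) -> int:
        return self.points
```

Unlike a Haskell instance declaration this is checked structurally (and only by the type checker, not at runtime), but the open-extension property is the same: new types opt in without touching `double`.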


The whole OOP vs FP argument is like a food vs drinks argument. I am empowered and capable of using both. What is the problem?



