What's wrong with Object-Oriented Programming and Functional Programming (yinwang0.wordpress.com)
227 points by roguelynn on Nov 12, 2013 | 142 comments



This is a low quality article from someone who, given his qualifications, should know better.

The criticisms of functional programming languages range from trivially true (you can't be purely functional and have side effects) to incorrect (it is certainly not difficult to create a circular data structure in Haskell - it's easy given laziness).

The criticism of OOP is verging on nonsensical. Of course functions can be objects. A general definition of an object (following Cook somewhat) is something that satisfies an interface along with some form of dynamic dispatch. There is no reason why a function can't fall under that definition. The distinction between “fundamental” and “derived” isn't a technical argument, it's pseudo-philosophical junk. As several others have pointed out, the fact that Java doesn't have proper first-class functions is also utterly irrelevant. In fact, it is possible to program in a very pure OOP manner in any language with proper closures.

If the author is representative of the quality of researchers working on programming languages, it's no wonder the field seems stagnant.


> This is a low quality article from someone who, given his qualifications, should know better.

It really depends on which audience this article is targeting (do you believe that S. Peyton Jones speaks in monads to his mother?). If this article was written in response to a discussion among undergraduates in a TA session, it is perfectly acceptable, and it even provides links to advanced material for the curious.

> A general definition of an object (following Cook somewhat) is something that satisfies an interface along with some form of dynamic dispatch. There is no reason why a function can't fall under that definition.

Yes, but why have that definition if you already have functions as built-in types? Conversely, if there's a need for functions as built-in types, why force programmers into using that clunky alternative? That's pretty much the point made by the article: don't corner yourself into one paradigm when others might be better at some tasks.

> As several others have pointed out, the fact that Java doesn't have proper first-class functions is also utterly irrelevant. In fact, it is possible to program in a very pure OOP manner in any language with proper closures.

Careful, the wording is a bit lacking here imho: you join two different ideas and make it seem as if the second one validates the first, when that's not the case.

> If the author is representative of the quality of researchers working on programming languages, it's no wonder the field seems stagnant.

That suggests you probably don't follow the field much. Anyway, this article doesn't fit your standard (see first point), ergo this man's whole work doesn't, ergo the whole field doesn't? That's twice too much of a stretch from someone who isn't very careful in his own argumentation.


> It really depends on which audience this article is targeting...

I disagree. Low quality articles like this are part of the reason why many undergrads are full of strongly held opinions about things they don't know much about. They think you can dismiss a huge paradigm with a glib one-liner: "functions are not objects".

> Yes, but why have that definition, if you already have functions as builtin types?

Because we can't talk about objects without having a definition of what they are? I have no idea what point you're trying to make here. I'm just trying to be clear about my terminology.

> Careful, the wording is a bit lacking here imho: you join two different ideas and make it seem like the second one validates the first, while it's not the case.

Think about it a little bit and you'll see why they are actually very related. His statement about Java is meant to imply that proper higher order functions and OOP are in opposition in some way (at least I assume that's his point, it's still not clear to me because that is obviously incorrect). I am saying that proper higher order functions alone are enough for a very pure form of OOP, so there is clearly no opposition.

> That suggests you probably don't follow the field much. Anyway, this article doesn't fit your standard (see first point), ergo this man's whole work doesn't, ergo the whole field doesn't? That's twice too much of a stretch from someone who isn't very careful in his own argumentation.

I'm going to ignore the digs at me and elaborate on what I meant. The author wrote an article about programming languages that was full of elementary errors, both in logic and in technical details. If he can't make a simple argument correctly, how am I supposed to have faith that his research isn't similarly full of errors? It's easy to disguise sloppy thinking in technical writing.


> I disagree. Low quality articles like this are part of the reason why many undergrads are full of strongly held opinions about things they don't know much about.

It doesn't happen only with undergrads (but that doesn't make the whole profession guilty of it all the time).

> They think you can dismiss a huge paradigm with a glib one-liner: "functions are not objects".

I don't think the article dismisses any paradigm whatsoever. It dismisses the entrenchment in one paradigm at the expense of any other. That's the main point. The explanation given is perhaps a little careless; I'll concede that.

> Think about it a little bit and you'll see why they are actually very related.

Relation does not imply causation, which I perceived in your wording, but I might have been mistaken. Was I?

> His statement about Java is meant to imply that proper higher order functions and OOP are in opposition in some way

I don't think he meant that, but rather that Java's refusal to include functions as "primitive types", driven by its need for OO purity, was a mistake. The Runnable interface (or any interface containing a single method) is an example of that problem (though there are other reasons which may support the existence of that interface).

> I am saying that proper higher order functions alone are enough for a very pure form of OOP, so there is clearly no opposition.

That's debatable. I totally agree with you that FP lets you implement OO. Likewise, OO lets you implement closures easily (there's a duality there, obviously, as in algebra vs coalgebra). However, I think that OOP (or FP) as a full-fledged paradigm included in a language - and not just a library built on top of closures (or closures on top of objects) - makes a lot of sense: it lets the language incorporate syntactic sugar for all the specific constructs related to OO, and it also enables the compiler to perform better code analysis (though I suspect that should be possible with a library-based implementation too, but that would require either the compiler to know about it, or a very flexible - plugin-like - mechanism to include enhancements as libraries as well). I don't think that the latest evolution of Java to include FP is a coincidence. Many languages have already been mixing these paradigms with good success.
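To make the closures-to-objects direction concrete, here is a minimal sketch in Haskell (the Counter type and its field names are invented for illustration):

  import Data.IORef

  -- An "object" as a record of closures over shared private state:
  -- an interface with dynamic dispatch, built from functions alone.
  data Counter = Counter { incr :: IO (), current :: IO Int }

  newCounter :: IO Counter
  newCounter = do
    ref <- newIORef (0 :: Int)
    return Counter { incr    = modifyIORef ref (+ 1)
                   , current = readIORef ref }

Each call to newCounter produces an independent "instance"; callers see only the interface, never the IORef.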

> I'm going to ignore the digs at me [...]

Good. Let's get that out of the way. I'm glad we're having a saner discussion.

> If he can't make a simple argument correctly, how am I supposed to have faith that his research isn't similarly full of errors?

I understand your point better now. Initially, I was afraid you were having the very same issue.


This looks like an attack on functional programming, mainly. And if you follow Haskell closely, I think the people building it are well aware that the world is actually full of side effects. But the thing is that if they push the "pure" ideology to its limits, they will uncover a number of useful things along the way. And that is the whole reason to stick to it. Maybe you don't find monads useful, but that doesn't mean there is no value in composing part of your programs in this way. Otherwise you are stuck with the same old ways of doing things.

So going pure has its benefits for all of us, but this does not mean that we should stick with something just because. Unfortunately, the answer to most questions in programming/software development is: it depends.


> This looks like an attack on functional programming, mainly.

I don't know. People call many languages (F#, Clojure...) functional even though they have no separation between pure and impure code. So it's mostly an attack against Haskell.

Which is not entirely unwarranted, I have to say. While monads are fairly straightforward, stacks of monad transformers are frankly a pain, and error handling is miserable. That does not make the whole idea useless, but I think the way forward is to separate the effect system from the type system. However, hand-waving it away with "just use static analysis" is not, I think, a panacea. The type-level separation is very useful documentation for the programmer and helps to organize your code (in fact, it's usually a good idea to do that even in OO languages). And "just use static analysis" does not take into account the extremely powerful things purity gives you: early termination without exception handling, powerful and easy parallelism for pure code, etc.
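To make the transformer-stack complaint concrete, here is a minimal sketch, assuming mtl (Config, App and runApp are invented names):

  import Control.Monad.Reader
  import Control.Monad.Except

  data Config = Config { verbose :: Bool }

  -- Just two stacked effects already mean a tower to thread everywhere.
  type App a = ReaderT Config (ExceptT String IO) a

  action :: App ()
  action = do
    cfg <- ask
    when (verbose cfg) (liftIO (putStrLn "running"))
    throwError "boom"   -- dispatched to the ExceptT layer

  runApp :: Config -> App a -> IO (Either String a)
  runApp cfg m = runExceptT (runReaderT m cfg)

Every extra layer adds another run function and more lifting at the edges, which is exactly where the pain sets in.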


> But the thing is that if they push the "pure" ideology to its limits they will uncover a number of useful things along the way.

For example new elegant approaches to parallelism/concurrency that would be very difficult in an impure language.


Indeed. For instance, something like

http://hackage.haskell.org/package/parallel-3.1.0.1/docs/Con...

would be very difficult to implement in a language that is not referentially transparent and lazy.
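For instance, here is a minimal sketch using `par` and `pseq` from Control.Parallel (fib is just a stand-in workload); sparking x speculatively is safe only because both expressions are pure:

  import Control.Parallel (par, pseq)

  fib :: Int -> Int
  fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

  -- Evaluate x in parallel with y, then combine; no locks, no races,
  -- because neither expression can perform effects.
  parSum :: Int -> Int -> Int
  parSum a b = x `par` (y `pseq` (x + y))
    where
      x = fib a
      y = fib b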


From a mathematical/theoretical point of view, this is an interesting article, but I think it misses the larger point of programming language paradigms. Yes, the PL researchers have their reasons for creating languages that are object oriented, functional, procedural, imperative, declarative, logical, etc. but those are rarely, if ever, the reason that programmers choose these languages.

Ultimately, the main purpose of a programming language is to convey intent to other programmers. If calling an object a function just because it has a `__call__` method makes it easier to convey intent, then it doesn't really matter that the object is not really a function. Personally, I'd be interested in any research into the ability to convey intent using these different paradigms.


In the case of natural language, I would agree with a statement similarly worded to yours above, "The main purpose of a ... language is to convey intent to other(s)". (Sorry for butchering, I promise I'm getting to a point =) )

I'm not sure I'd agree with respect to programming languages, however. These are artificial languages designed to convey a limited set of ideas, with varying degrees of abstractions over various concepts in electrical engineering and mathematics. Given this, the purpose of a programming language isn't just to communicate with others, it's also to shape how you approach problems.

Anecdotally, learning the strict functional programming paradigm is what allowed me to pass an extremely difficult technical interview recently. The emphasis on dataflow and "do what I mean, not what I say" truly changed how I thought and approached problems. Previously, my years of experience in imperative languages had led me down a road that was a bit too missing-the-forest-for-the-trees when it came time to solve a difficult, interview-style algorithmic question.


The author should have mentioned Smalltalk when talking about pure OOP. Not mentioning Smalltalk suggests the author didn't really research pure OOP, but only tasted some OO-style languages. I really don't understand how the author can argue about the drawbacks of pure OOP using Python (which is more procedural) or Scala (which is more functional).

Also, in the middle of the text, the author frequently refers to most OO languages, which has nothing to do with the topic - extreme, pure OO languages.

The mention of Haskell is fairly agreeable, but that drawback is already acknowledged on the Haskell website, and that's why Haskell supports impure operations.

I don't understand how a person with a Ph.D. can write such a poor text.


This is a gross mischaracterization of functional programming and basically attacks a straw man--one that's lamentably common when talking about functional programming.

I'm going to repost a comment I wrote on the blog. It's long and really needs editing, but I hope it gets my thoughts across. I think the part about OOP is also misguided, but it's so obviously tacked on to a rant about functional programming that I just ignored it.

Haskell’s “purity” is not about getting rid of side-effects but about controlling side-effects. It’s making side-effects first-class citizens: now you can write code that talks about having or not having them!

You can still have side-effects however you want, you just have to be explicit about it. To some extent, the fact that this uses monads is incidental: all that’s important is that there is some IO type, some ST type and so on–the fact that they all form monads is almost an implementation detail. That’s why some of the most exciting Haskell features–STM, the IO manager and so on–are all about effects. Clearly, Haskell more than acknowledges effects, so the entire diatribe about ignoring the existence of side-effects is attacking a straw man.
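(To make "code that talks about effects" concrete, here is a minimal sketch - the function names are invented, but the distinction is exactly what the types express:)

  -- The signature says no effects are possible, ever:
  wordCount :: String -> Int
  wordCount = length . words

  -- Here the type admits effects, and visibly so:
  fileWordCount :: FilePath -> IO Int
  fileWordCount path = fmap wordCount (readFile path)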

Besides, you can’t simply replace a first-class system for managing effects with static analysis, without essentially reproducing the same restrictions. How would you do something like Haskell’s deterministic parallelism model or reliable STM or the very aggressive loop fusion (and general rewriting) Haskell uses? There’s a reason you don’t see these things done nearly as well in any other languages: all of these fall apart as soon as you introduce side-effects, so you need some way to help the programmer ensure things like this are only used safely.

And this is exactly how types like IO and ST help make code safer. Sure, within an ST block, you have stateful code that’s just as hard to analyze. But you can guarantee that this does not leak outside the block. Similarly, functions can rely on their inputs not doing anything untoward however they’re used. This allows you to explicitly state the assumptions about your inputs: how is this a bad thing? In turn, this makes writing code that takes advantage of these properties much easier: you can ask that your inputs do not cause side-effects in a way that’s self-documenting and easy to verify. Then you’re free to re-evaluate your inputs or call functions however many times you want, as concurrently as you want. At the extreme, this can even be used for security purposes: see Safe Haskell.
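(A minimal sketch of that guarantee, using the standard Control.Monad.ST and Data.STRef:)

  import Control.Monad.ST
  import Data.STRef

  -- Mutation on the inside, a pure interface on the outside:
  -- runST's type guarantees the state cannot leak past the block.
  sumST :: [Int] -> Int
  sumST xs = runST $ do
    ref <- newSTRef 0
    mapM_ (\x -> modifySTRef ref (+ x)) xs
    readSTRef ref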

Sure you can write pure functions in any language. And you can write side-effecting procedures in Haskell too. But the difference is that Haskell lets you be explicit about whether you want side-effects or not–it’s just another part of your interface. And this is how types like IO and ST help make your code easier to think about: any code that is not in a type like IO or ST can only depend on its arguments, making all the dependencies more explicit. (Note, again, how this is all independent of “monads”–it’s all about effects, and the types just happen to form monads.) This does not make static analysis too much easier, but that was never the point–the goal is to make the code easier to think about, and knowing that there are no hidden state dependencies certainly does that. A static analyzer can follow data flow easily, but it requires quite a bit of thinking for the programmer to do the same!

The core motivation for managing effects à la Haskell is not “mathematical purity”: it’s software engineering. We want code that is easier to think about, has better guarantees and is more modular. The goal is to make code less complex by removing hidden dependencies between distant parts of your program (mutable state) and the effect of evaluation on the meaning of your program (side-effects in general). You can refactor and move around most Haskell code without worrying about breaking its surroundings because any dependencies are explicit. You can extract something into a function, consolidate multiple function calls into one or split one into multiple and generally change your code up quite a bit without worry–these actions cannot change the code’s correctness because effects are managed for you.

Ultimately, functional programming like Haskell is not just normal programming with side-effects outlawed. Instead, it’s a different basis for programming which allows you to manage side-effects explicitly. In this light, papers like “solved problem but with monads” are entirely reasonable: they’re about bringing things over to this new basis. And this goes the other way too: there’s a reason why you don’t see good STM, deterministic parallelism, stream fusion (and rewrite-rule optimizations in general), anything like DPH and anything like Safe Haskell in other languages.


The way Haskell manages side effects is inspired, and I don't think anyone can say that Haskell is not a beautiful and coherent language. I only dabbled in Haskell a bit so I may be wrong, but I think that the problem with the way it manages side effects is that it does so through lazy evaluation, and lazy evaluation is hard to wrap your head around.

So I think that if there is one big problem with Haskell, it is this: you say Haskell is about "software engineering". Indeed, the guarantees it provides may assist with software engineering, but those guarantees aren't free. They require programmers to think and program solely within Haskell's lambda calculus and lazy evaluation framework, which is neither the way people think nor the way computers do (the latter is important when doing performance analysis on your code). This constrained framework also takes its toll on software engineering, then. The question is, therefore, when are the guarantees provided by Haskell worth their price? I think there are cases where they certainly are, and cases where they certainly aren't.

(A tangential point: when thinking about software engineering, there are vital issues that have little to do with the choice of the language. For example, a language would have to be truly magical for me to forsake the JVM's vast ecosystem, runtime linking, and runtime profiling and monitoring capabilities – that's why I won't use GHC in my projects, but may certainly give Frege a try)


Dabbling really is not enough to show you the problems with your assumptions. You are right that there is a cost, but it is really paid once up-front by each programmer.

I could have written this same comment five months ago, before I started down the long dark tunnel of doom which is going beyond LYAH and trying to write a real application. I think it was two months before I saw a light at the end of that tunnel - and only recently have I really begun to feel as productive in Haskell as I previously was in Ruby. At this point, the costs you speak of are paid up and I do not incur them when I write new code. Yes, the type system does impose a lot of structure and some boilerplate, but the benefits are profound. It's the sort of statement I did not really fully credit before experiencing it myself, but I will say that it is profoundly amazing how frequently my code works perfectly once it compiles. This is particularly true when refactoring. I remember sometimes despairing of refactoring some code in Ruby because I didn't want to fix all the tests... in Haskell, once my tests are compiling they are passing, and the compiler errors are usually very easy to understand and fix (after several months of head-banging frustration).


I don't doubt you, but your experience also provides little evidence. Have you done profiling and performance analysis yet? Have you tried working in a large team on a large project?

Different languages have different strengths. Some excel at quick and dirty prototyping or small projects/small teams. Others make a 50+ developer team more manageable. Some have good runtime performance, some have a shorter development time, while others have better runtime monitoring and maintenance tools. No language excels in all of these.


Certainly my experience is insufficient. I have only dabbled in profiling and performance analysis - enough to prove it can be done but that it can be time-consuming and that sometimes you may have to give up some elegance/purity in favor of performance. However the performance I get from naive code is so good (so far) that I have not yet actually had to delve into it.

I am dubious of Haskell's prospects in large teams but I was completely dubious of Haskell in general (like you) before I had experience in it. Perhaps after experience with large teams I would be more bullish on this point but I haven't had that experience so far. I am quite optimistic about the size of the problem space that a small, skilled team (say 5 developers) could solve with Haskell.


> Different languages have different strengths.

On the other hand, some things are better in all important respects than something else. Merely stating that among similar things "different ones have different strengths" is not much of a counterargument.

(This comment is not related to any language or technology in particular.)


I also started studying Haskell recently. In my view, it's great from a mathematical perspective, but it has its problems.

Monads sort of force you to make plumbing visible, and the result is not so neat. For example, consider a big program that has two modules, where module A calls module B to do something. Now, later, you want to add logging to the application. In normal languages, you can just call logging functions (which do IO) from within module B; module A doesn't have to know a thing. In Haskell, though, you have to wire the IO monad (or some other monad that encompasses the logging) all the way from A to B. Or say you now want to access a database from module B. Again, you have to wire your DB access up from A, because that's where the entry point is.

In normal languages, plumbing like log access, configuration, and DB access can be reached from any place via global variables or singletons, without imposing a dependency on the main module (or other modules). In Haskell, it's like a military installation - every interaction with the outside world has to go through the main gate. I am not really sure there is any benefit to it, but there is certainly the downside that the plumbing becomes visible in Haskell.

If there were a way to declare something as "plumbing" and have it always available (but still explicitly declared as part of the function signature), without having to pass it along everywhere, that would be a great compromise, I think. It could even make programs more type safe, because instead of passing an RWS or IO monad everywhere, you could make functions depend on just a DB monad for database access, for instance.

Or you could then configure the plumbing for specific modules, something like dependency injection.

Maybe I am missing something; I tried to find articles about how to write large-scale programs in Haskell, but no one really seems to explain this.


I don't think the "Xy monad will taint all your code" stands.

I used to think that too, but if you have monadic code M and pure code P, and you need to tie P to M (say, at a third call site C), you just lift P into the monad at C, and that's it. P stays pure; C of course gets monadic, but only because it _is_ monadic.

Now logging: I guess people overpanic about this. There are two separate sides of logging, I think:

1) Effect logging: If you want to log effects (going to send the email, etc), you are already in IO, no worries.

2) App logic logging: This is more like debug logging to verify that your logic works and flows as expected. If you need this in pure code, throw in a Writer monad for logging the stuff (you can discard it if not needed; see the sketch after this list).

2a) Eventually you'll get somewhere where you have IO, so you can dump the aggregated logs if you want.

2b) Or just use unsafePerformIO to send to a logging thread (beat me with a stick).

As a bonus, for app logging you might use a proper ADT for your log statements instead of strings, which is great for testing, and even greater for persisting (in JSON, protobuf, whatever) and later inspection.
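Here is a minimal sketch of option 2, assuming mtl's Control.Monad.Writer (step and demo are invented names):

  import Control.Monad.Writer

  -- Pure code that also records a log of what it did.
  step :: Int -> Writer [String] Int
  step x = do
    tell ["step got " ++ show x]
    return (x * 2)

  -- runWriter yields the result together with the aggregated log;
  -- dump the log once you reach IO, or simply discard it.
  demo :: (Int, [String])
  demo = runWriter (step 3 >>= step)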


I am aware of lifting, but the question is, if you have monadic code and pure code in different modules (or you need to go through a function which was previously pure), where do you put the lift? If you put it outside the module, you break modularity. If you put it inside, well, then you might as well make the functions monadic in the first place. Basically, if you have functions in a module API (which may itself be pure) that may eventually end up calling impure functions, you have to provision for that somehow, either in the module by making them monadic, or in the caller via lifting. Either way, it's not as clean as it could be.

But I thought about it some more, and it seems to me that parametrizing the functions over the outside world is actually not that bad; it's a kind of dependency injection, and seems fine. What is really problematic is returning all the IOs (or other monads) from them, especially since you cannot curry return values the way you can input parameters. So if that could be replaced by some other mechanism, it would be helpful.

But I didn't know about unsafePerformIO; it sounds like it can be helpful in some cases.


I agree there isn't a lot written that explains this, but I have inferred and used the following pattern. I build a massive monad tower that includes the different things I need, like a reader with configuration data, a ResourceT/process monad, a logging writer, etc. However, I rarely use that monad in my signatures; I generally use a more restricted typeclass. For example, I have a ConfigReader typeclass that does exactly and only what you would expect. I have a typeclass for writing to the database, for calling distributed-process, etc. Now, because some of those require MonadIO, I do end up with a lot of code that could theoretically do any IO. That could be restricted at the cost of creating more and richer typeclass interfaces myself, but I do not think it is necessary for the most part.
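For concreteness, here is a minimal sketch of that pattern (ConfigReader and getConfig are illustrative names; the IO instance is just a stub):

  -- A narrow capability class instead of exposing the whole tower.
  class Monad m => ConfigReader m where
    getConfig :: String -> m (Maybe String)

  -- This function can read configuration but do nothing else.
  greeting :: ConfigReader m => m String
  greeting = do
    name <- getConfig "appName"
    return ("Hello from " ++ maybe "unknown" id name)

  -- The application's real monad supplies an instance; a stub in IO:
  instance ConfigReader IO where
    getConfig _ = return (Just "demo")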


> but I think that the problem with the way it manages side effects is that it does so through lazy evaluation, and lazy evaluation is hard to wrap your head around.

Haskell manages side effects by making assertions about whether a function has side effects part of its type. Lazy evaluation isn't quite orthogonal to that, but the connection goes in the other direction from what you imply. Because evaluation order in a non-strict language can be complex to reason about, such languages become impractical if unrestricted side effects are allowed. In other words, restricted side effects helps non-strictness, but non-strictness is not necessary to restrict side effects.


Though as SPJ described in "Wearing The Hair Shirt", non-strictness might be necessary to motivate language designers to sufficiently restrict side effects...


> but I think that the problem with the way it manages side effects is that it does so through lazy evaluation.

This is not so.

> and lazy evaluation is hard to wrap your head around.

technically Haskell is non-strict, not lazy. (a + (b * c)) evaluates + then * instead of * then +. Also, strictness annotations can change this behavior.

> They require programmers to think and program solely within Haskell's lambda calculus and lazy evaluation framework, which is neither the way people think nor the way computers do

This is not supportable. You might be more familiar with strict evaluation, but don't pretend it's a feature of humanity. All runtimes come with assumptions.


This is another problem with Haskell: comments like this. But in order to be helpful, let me explain why.

> technically Haskell is non-strict, not lazy.

Now, see, I don't care. Neither does anyone, really, other than PL researchers. I mean, I can care in my spare time if I like spending it on PL research, but when I write 2 MLOC of software for a large customer, I couldn't care less whether "technically" it's "non-strict" or "lazy". As far as I, the programmer, am concerned, it's lazy. If one must be this familiar with PL jargon in order to program Haskell, then that is a problem.

> but don't pretend its a feature of humanity. All runtimes come with assumptions.

Again, I'm not trying to make a provable statement (how does that joke go? You can tell someone is a mathematician if everything they tell you is all true and all irrelevant). We're talking software engineering, right? So what percentage of production code anywhere in the world is written in an eager (strict, whatever) language? If you tell me it's less than 99.999%, then you're being dishonest. 99.999% is a "feature of humanity". If your point is that education can change people's habits and ways of thinking, I say you're absolutely right. Go for it, and we'll talk again in 15 years.

> All runtimes come with assumptions.

Again, true but irrelevant. Some assumptions are more familiar and therefore feel more "natural", and some are less so.

I wasn't dissing Haskell. It's a very impressive and elegant language. I was only pointing out that while it has some advantages from a software engineering perspective, it also has some disadvantages.


Laziness is an implementation detail that permits equational reasoning and a declarative programming model; the only reasons it's even useful to be aware of Haskell's evaluation strategy are 1) to know that it isn't strict (if you're already a programmer), and 2) to solve and anticipate space leaks in production code (if you're using it for Real Work).

Anyway, you seem pretty sure (http://www.idlewords.com/2005/04/dabblers_and_blowhards.htm) of your opinion, so it's probably not worth further discussion.


I don't think I've expressed an opinion because I don't think I have one (I have strong opinions on Scala, but they don't apply to Haskell). I'm just pointing out what seem to be major obstacles to Haskell adoption in the industry. I'm not saying it's not worth the effort because I really don't know. I'm just saying it's not obviously worth the effort. Clearly, the data isn't there yet because there is very little use of Haskell in the industry. This may be unfortunate – or not – but we just don't have enough information to tell yet.

Haskell's roots in academia often steer the discussion towards theoretical PL, which seriously hurts Haskell adoption. I actually like tikhonj's comment because it focused on practicality rather than jargon, so in response I merely pointed out that Haskell is not 100% pure gain in practical terms. That does not mean we shouldn't all adopt it, it just means that the jury is still out.


The laziness-vs-non-strictness part pretty much nailed one problem beginners may have with the Haskell community.

On the other hand, I don't think your 99.something is a feature of humanity; it is the result of the last decades of mainstream programming development. It took me less than a year of on-and-off Haskell hobby fiddling to find functional and non-strict less awkward to think in than imperative and strict.


I think his point is that from an adoption perspective, whether or not it's actually a genetic feature of humanity or merely might as well be is basically irrelevant. Either way, it's a giant hurdle from a practical perspective, especially a commercial one.


> So what percentage of production code anywhere in the world is written in an eager (strict, whatever) language?

How is this relevant?

> If you tell me it's less than 99.999%, then you're being dishonest. 99.999% is a "feature of humanity".

Make all && and || strict in all C, C++, Java, C# code, and chaos will reign.


&& and || are evaluated by first looking at the left operand, then at the right. That seems pretty strict to me. You're thinking of their short-circuit nature, that the right operand is not always evaluated; I believe that that is a different issue than strict vs. lazy.


Consider '||' as a normal function, and consider ||(f(x), g(x)). If this were strictly evaluated, we would compute f(x) and g(x), then pass them as arguments to ||. Instead, we compute f(x), pass it to ||, and compute g(x) only if it is needed. This is lazy evaluation.


Well, the Boolean operators in C-like languages are NOT normal functions, which is the point. Furthermore, what is strict about them is the order in which their operands are evaluated, which is a different aspect than the one you mention: whether all arguments are evaluated before considering the function. Consider e.g., notUnderAttack() || enableDoomsdayDevice(). A lazy language is free to decide that it's more optimal to evaluate the second operand first.


I don't think a lazy language can necessarily make that decision. Consider the simple implementation of ||:

  ||(f,g){
    if (eval(f)) then return true
    if (eval(g)) then return true
    return false
  }
A lazy language will not evaluate a function until it knows it needs it. In this implementation, there is no way of knowing that g will be needed until after computing f, so f will be computed first. Having said that, it is possible for an optimizing compiler to realize that the order doesn't matter and make a decision on which one to check first, however that is an optimization that the compiler would need to prove does not change the results.

This seems like a good example of why purity is really beneficial to lazy evaluation.


> Well, the Boolean operators in C-like languages are NOT normal functions, which is the point.

"Not normal", because the language is strict, and there is no way to make a lazy function on your own, even when it is so tremediously useful.

In other words, the C standards committee decides which functions may be lazy. As a Haskell programmer, you decide.
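A minimal illustration in Haskell (myOr is an invented name; the real (||) is defined the same way):

  -- An ordinary user-defined function gets short-circuiting for free
  -- from non-strictness; no special case in the language is needed.
  myOr :: Bool -> Bool -> Bool
  myOr True  _ = True
  myOr False b = b

  -- myOr True undefined evaluates to True: the second argument
  -- is never demanded.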


> technically Haskell is non-strict, not lazy.

What do you mean by that?

> (a + (b * c)) evaluates + then * instead of * then +.

AIUI,

    take 2 [1, 2, digit_of_pi 1000000000]
is forbidden from calculating the billionth digit of pi. That's a pretty strict (play on words intended) kind of laziness, even if not the theoretically maximal definition.


> ... with the way it manages side effects is that it does so through lazy evaluation

Modern Haskell manages effects through its type system, not through lazy evaluation.

> They require programmers to think and program solely within Haskell's lambda calculus and lazy evaluation framework, which is neither the way people think nor the way computers do

Haskell doesn't require you to think in the lambda calculus any more than C requires you to think in register allocations, it's a low level implementation detail that effectively gets abstracted away. Nor are you required to think solely in terms of pure code, Haskell 2010 has a whole variety of ST Monad solutions where you can effectively do whatever pointer manipulations you want and still remain pure with respect to the rest of your program. I take serious issue with the description of Haskell being "constrained", it has reached a compromise of safety and power that I have yet to see in any other language.


> the way it manages side effects is that it does so through lazy evaluation

This is actually not true. Lazy evaluation forced Haskell to stay pure, but it is purity that provides the tools for constraining effects. You could have a strict Haskell-like language that manages effects in the same way, indeed such languages exist.


> You could have a strict Haskell-like language that manages effects in the same way, indeed such languages exist.

I'm unaware of any strict purely functional language with monads. Could you give an example? Thanks!


Idris is one example.


Disciple[1], and I believe "Mu" Haskell (the Standard Chartered dialect), though I don't know for sure because it's not open source.

[1]: http://disciple.ouroborus.net/


Isn't the side effect returned as an IO type which is later evaluated by the runtime? (I'm talking about Haskell)


Yes, but you can create the thing of type `IO a` strictly.


Yes, but when are they executed? When is the byte written to the file?


Well, you certainly can't start executing until the `IO a` thing has been evaluated, but there aren't any other constraints.

Something that can confuse people (and I'm not sure if this is the case with you or not) is that an `IO a` may/probably will contain a closure. So evaluation and execution are interleaved, but not because of laziness.


Closures can be a way of implementing laziness, so I'm not sure you've shown it's "not because of laziness" - though certainly it's not because the language is lazy by default.


You can picture functions with the IO type as something to compose a program out of. This program is then run by the runtime system of your implementation, and that's "when the byte is written".

Here's a nice explanation of Haskell IO: http://stackoverflow.com/questions/13536761/what-other-ways-...
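A tiny sketch of that picture:

  -- An IO value is a first-class description of effects; composing
  -- IO values builds the program the runtime executes from main.
  greet :: IO ()
  greet = putStrLn "hello"

  main :: IO ()
  main = greet >> greet   -- the bytes are written only when this runs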


> They require programmers to think and program solely within Haskell's lambda calculus and lazy evaluation framework, which is neither the way people think nor the way computers do (the latter is important when doing performance analysis on your code). This constrained framework also takes its toll on software engineering, then.

I disagree. People don't automatically think in certain terms unless they're taught to.

I personally never took CS in University, and I'm completely self taught as far as programming goes. Haskell feels quite natural, so does Lisp. Java and C feel foreign.

Maybe if someone's had imperative languages drilled into their head for 4+ years Haskell will feel foreign, and maybe that's why mathematicians, financial people and others who don't quite fit the typical programmer paradigm are into Haskell (or so I hear - I always hear about Haskell and OCaml being for 'academic' types).

To me, Haskell has nice syntax, makes sense, is fast, and feels like a dynamic language for the most part with its type inference and great interpreter...


Sounds like C# and the .NET platform might fit the criteria in your tangential point.


Thanks tikhonj for the very well articulated response. I wish that this thread had been pointing directly to your comment rather than the blog post that you responded to!


It looks like your comment has been "moderated" off the blog. Such a disappointment.


With regard to declaring the lack of side effects in an interface, I'd just like to mention that C++ is quite good at this too, and I think it's an important feature of C++ that is often overlooked.

Yes, it's true that with casting and so on you are not actually ensuring anything when you declare a function const like you are with Haskell, but you announce to other programmers who will use your code that:

1) There will be no observable state changes

2) That the function is thread safe

This is a very useful thing for a language to support in its function declarations, and other languages could do well to learn from it.


That is not true. In C++ you can always retrieve the current time, store parameters in a database, print to stdout, or retrieve a global variable without changing the interface. Merely announcing purity to other programmers does not solve this problem: announcements can be wrong, missing, incomplete, and maybe most importantly the compiler doesn't know about them.


Heck, mere "announcements" might even be quite useful, especially since it is slightly compiler supported. But const in C++ does not announce purity, nor does it announce thread-safety. I just announced that it won't observably change non-mutable members, that's it.


D actually has a better model for pure functions than C++. See http://dlang.org/function.html#pure-functions


Modern C/C++ compilers can detect a subset of pure functions and perform more aggressive optimisations where possible. There's even an `__attribute__ ((pure))` in GCC for marking pureness explicitly.


I think the point of the argument about functional programming is that you can't defend a stance of absolute rejection of all things non-functional, because that kind of world could not exist. Similarly, an all-object world does not make sense, either.

His other point about OOP, though, that it doesn't have first-class functions -- that's equivalent to complaining that it isn't functional enough, which is not supposed to be the point of OOP, anyway. There's a lot you could criticize about OOP, but he doesn't make that strong of a case here. He's sort of treating it as a mirror image of fp, but it came from a different world with different values and it's not on the same footing.


    Have you noticed how easy it is to implement circular 
    data structures or random number generators in C? 
    The same is not true for Haskell.
I might be wrong, but isn't this a circular data structure (or what they are referring to)? http://www.haskell.org/haskellwiki/Tying_the_Knot

    In a language like Haskell, where Lists are defined 
    as Nil | Cons a (List a), creating data structures 
    like cyclic or doubly linked lists seems impossible.
    However, this is not the case: laziness allows for 
    such definitions, and the procedure of doing so is 
    called tying the knot.


Yes.

  oneTwo = 1 : 2 : oneTwo
is a circular singly-linked list containing 1 and 2. If you wanted to change the 1 to a 4, though, you would not be able to do so without creating an entirely new list. You can use Vector or Array (i.e. fixed-size containers) to make yourself a circular buffer, but the implementation will be very similar to what you would do in C.
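And for the doubly linked case, here is a minimal tying-the-knot sketch (the Node type is invented for illustration):

  -- prev and next refer to each other; laziness lets us name the
  -- nodes before they are fully built.
  data Node a = Node { prev :: Node a, val :: a, next :: Node a }

  twoRing :: (Node Int, Node Int)
  twoRing = (a, b)
    where
      a = Node b 1 b
      b = Node a 2 a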


Wow. That looks hard. I bet you stayed up all night working on that. ;-)

Seriously: sometimes things are hard because you don't know the language all that well. Consider that possibility.


Just to be clear: the advice of "consider that possibility" is intended for the author of the article. quchen clearly knows the "easy" way to do this kind of thing with Haskell.


The author states the "extreme" case of OOP is "everything is an object" and the extreme case of FP is "purity."

The problem with this analogy is that everything being an object doesn't buy you much, whereas in Haskell the type system helps you keep track of what's pure and what's not. So having a type system that keeps track of purity buys you something: lots of pure and extremely easy to reason about code. Non-deterministic (impure) code is equally difficult to reason about in any language; isolating it from pure code just reduces the amount of code which is difficult to reason about.


It looks like the author prides himself on reinventing ideas. I've read a handful of his other posts, and he's "reinvented" the Y combinator, the proof of the halting problem, and other such things. It appears that, perhaps partially by a perpetual mishap with diction and connotation, the author claims that his reinventions are original ideas, his opinions are fact, or that his insights are somehow deeper or his arguments are better.

I see this happen all the time, with myself and with others, and I think it's a common thing in mathematics (the author is in CS, so he might not get exposed to this as much as I do). I've seen undergrads finally realize that calculus makes sense and explain how much deeper their understanding is now that they've really thought about it, and I smile and nod my head. I myself have touted my own deep understanding like it's not completely trivial to somebody with years of experience.

I think this article shares that sentiment. The author has arrived, of his own accord, at what we all consider a dead and beaten horse, and is shouting his thoughts from his proverbial mountaintop: no paradigm is perfect for every situation.

Best of luck to him for all his thoughts, but he'd better learn when and where to curb his enthusiasm if he wants others to take him seriously. For the record, this doesn't apply to his personal blog. I expect sooner or later he'll have a post talking about how Haskell's pure style is all about giving the sculptor more clay.


So does the author deny the existence of any domain where these paradigms would be "right"? Or are they universally "wrong"? Also, he states a view or philosophy - "everything is an object" - and then says "this is wrong because I disagree; functions are not objects". Wow. If you remove the commentary about Scala and Python, which is irrelevant to his philosophical thesis, what is his argument? And why is it not just a matter of having a different definition?


This article is remarkably low on arguments, despite being so long. Just a few of its gems:

> OOP is wrong because of its definition of an "object", and its attempt of trying to fit everything into it. When that goes to the extreme, you arrive at the notion "everything is an object". But this notion is wrong, because:

> There exists things that are not objects. Functions are not objects.

This has two problems:

1. It's factually wrong. Smalltalk functions are first-class objects.

2. It fails to explain why that would be wrong (assuming it were, you know, true).

The author does give the (poor) examples of Python and Scala, which - in his opinion - incorrectly call a function an object, because only __call__ and apply are the "function objects" one is trying to define, and they were "kidnapped" and jailed inside their wrapping objects. IMO, this is ontologically incorrect. The same can be said of any object's definition: if I have an object called PlaneVector, the "vector object" itself would actually be only its x and y properties, which I have kidnapped and jailed into my object.

The fact that this structure (the object I'm defining in my programming language) is not the same as its counterpart in the world of ideas (i.e. the plane vector in maths) should be fairly obvious, despite using the same generic term (i.e. object) to denote both.

Behold, then, the straw man:

> Most OO languages also lack correct implementations of first-class functions. An extreme is Java, which doesn’t allow functions to be passed as data at all. You can always wrap functions into objects and call them “methods”, but as I said, that’s kidnapping. The lack of first-class functions is the major reason why there are so many “design patterns” in Java. Once you have first-class functions, you will need almost none of the design patterns.

There are a lot of things wrong with Java, of course, which does not mean that they are also issues of object-oriented programming. Inheritance (sorry...) doesn't work both ways: the fact that a Lexus is prohibitively expensive doesn't mean moving vehicles are prohibitively expensive.

I have less of a problem with the author's treatment of functional programming. What he's essentially trying to argue is that:

> Simulating them [side effects] with pure functions is doomed to be inefficient, complicated and even ugly. Have you noticed how easy it is to implement circular data structures or random number generators in C? The same is not true for Haskell.

IMO, this is true, but the isolation of side effects doesn't have to happen only by the ugly means he outlines next (i.e. monads) in every application. A typical CRUD application the web kids like to write can be written so that the only side effects involved are in fact modifications of the database, in which case the side effects are confined to SQL.

I also think there's an ontological error in this argument. Its foundation is that FP tries to "ignore" side effects even though they are real, but the examples the author gives are several layers of abstraction below. C also woefully ignores some of the side effects in the silicon (e.g. if you go low enough, calling a function with no side effects that does not even alter the state of the program will, in fact, alter the state of the underlying silicon, and quite substantially so), and I think it's reasonable to assume that we can't quite move the debate into which of those should be ignored or not.

All languages abstract the silicon away, and it's not only good that they do, it's what they were built for. I honestly don't miss hardwiring relays.


>> There are a lot of things wrong with Java, of course, which does not mean that they are also issue of object-oriented programming. Inheritance (sorry...) doesn't work both ways: the fact that a Lexus is prohibitively expensive doesn't mean moving vehicles are prohibitively expensive.

But isn't the main point of the article that OOP just doesn't always fit the problem domain, that sometimes you want to model something in a way where forcing it into objects (nouns and verbs) only adds complexity without benefits?

I don't think the article condemns OOP in any way except when it becomes a religious dogma that everything needs to be an object, as it is in Java. I've written many different types of programs in various languages, and I can only agree that sometimes OOP simply isn't what you are looking for. Think about streaming data processing, for example, or something reactive like a network service. Surely some parts of the outside interface, or the 'glue' between the outside interface and the actual number crunching or request processing, can be modeled using OOP, but at the core there's all kinds of processing on blocks of data or asynchronous IO going on that doesn't need objects and methods, and where they don't add to code clarity or stability in any way (on the contrary, even).

The article basically perfectly describes why I very much prefer programming in Python or C++/Objective-C over Java: they allow me to mix and match different programming paradigms for different problems.


> I don't think the article condemns OOP in any way except when it becomes a religious dogma that everything needs to be an object, as it is in Java.

But Java doesn't make everything an object. The obvious one is that 'primitive types' aren't objects (int, boolean, float, etc.), but Java also has hard-wired control structures (for, while, if, try, etc.) which aren't OO (compare to Smalltalk, for example, where "ifTrue:" is a method of Boolean objects, "do:" is a method of collection objects and "timesRepeat:" is a method of number objects). Also, its classes and methods aren't objects, as they are in Smalltalk, Python, etc.

I would argue that Python's first-class classes and first-class functions/methods make it much more 'religiously OO' than Java.

Just because all code must be in a class doesn't mean that everything in Java is an object. I don't know why people keep saying that.


My favorite example in Java is String instances, which are objects, but adding them with + is not a method call, because Strings are also primitives, sort of; hence + is either eliminated (if adding two constant values) or the code gets translated into a concatenation powered by an ad-hoc StringBuilder instance. The JVM doesn't even pretend that a static method was invoked. It simply pukes some bytecode.

For example, there is no interface in Java that you can use to add stuff, because from the language's point of view, adding integers is totally different from adding Strings, which is also totally different from adding BigIntegers, which is totally different from adding BigDecimals. Well, to be pedantic, String addition is in fact concatenation, as Strings are in fact lists of chars. I know, not fair to put them in the same bucket.

But, but, wait for it - BigDecimal and BigInteger actually do extend the Number class. But apparently numbers in Java's vision are not even monoids, let alone rings (because, duh, you also have multiplication). Let's not even mention that the real numbers have a natural ordering. Like, seriously, how can one get this so wrong?

Coupled with the explicit typing required and the lack of type classes or anything similar, it means that you simply can't write functions that operate on numbers, as there is no such thing, and you can't implement it yourself.

People that bitch about Java being too OOP, missed a couple of lessons on their way to enlightenment.


Just once I'd like to see someone lament the "not everything should have to be an object" using as an example a language where everything actually is an object.


> But isn't the main point of the article that OOP just doesn't always fit the problem domain, that sometimes you want to model something in a way where forcing it into objects (nouns and verbs) only adds complexity without benefits?

Perhaps that's the point of the article, but the fact that a programmer might want to use a paradigm in an unfit manner is arguably not a problem of the paradigm.

OOP is fairly foreign to my daily work, so I'm not too attached to it, but I think a lot of the criticism it receives is unfair. It gets a lot of crap because Java and C++ implement it incompletely (and the part that they do implement is done quite poorly), and people naturally think it's a problem of the paradigm itself. I think 90% of the "OOP is bad because..." arguments are routinely handled with "Yeah, Smalltalk actually solves that by..." and should actually be phrased as "Java is bad because...".


> not a problem of the paradigm

Correct. The main problem I see with OOP is that some people advocate it as the only correct way to write maintainable code, which, of course is completely wrong. In fact, OOP doesn't translate well to some problems. There are also problems where FP doesn't work as nicely as other paradigms. In short, there is no silver bullet and advocates of any paradigm should be more open about that.


> Correct. The main problem I see with OOP is that some people advocate it as the only correct way to write maintainable code, which, of course is completely wrong.

Yes, my opinion is similar. There are problems that naturally lend themselves to being modeled using objects, and others which have to be beaten into submission. Not having to beat them into submission with half-witted OOP implementations like C++ does help, but it is not always sufficient to take away the sensation of "unnaturalness", if you don't mind the invented word.


Everything in Java is not an object, and the author never made this claim. Languages he referenced where everything is an object are Python and Scala.


Searching for the main point, when the arguments brought forth to support the conclusion are wrong, does not make sense.

From elementary logic, A => B tells you nothing when A is false, because falsity implies anything. And I know that productivity and all the other traits that matter haven't been formally proven for the paradigms we talk about, but in our flame wars we could at least pretend to be scientists.


This article's thesis is pretty much "both object-oriented and functional paradigms have limitations which become apparent when you push them to ridiculous extremes". Which is neither an interesting thesis nor one that is in dispute (except maybe among a very small number of people). But to be fair that thesis makes for far less linkbaity a title than "What’s Wrong with OOP and FP".


> A typical CRUD application the web kids like to write can be written so that the only side effects involved are in fact modifications of the database, in which case the side effects are confined to SQL.

Well, writing the response is a side effect as well, so no. But the business logic itself can (usually) be pure, even if it's sandwiched between effectful layers (web and SQL). And looking at how a well-crafted CRUD application is architected, the business layer is usually made up of singleton services holding references to a few persistence-related singletons - nothing that could not easily be made functional.


Indeed; I'm used to writing systems software, so user interaction tends to elude me :).

That being said, I would dare say that the business layer is the one which would benefit the most from an FP perspective. I can think of dozens of bugs in my code that originated in my inability to correctly keep track of what was otherwise needlessly exposed state.

Being many layers closer to the silicon, I don't actually use any functional programming language for my work (it's C and Assembly all the way...), so I can't speak for using one. But applying some functional framework to my code proved immensely useful once I started doing it.

(At the risk of sounding like a hipster, that was actually before FP suddenly became cool. I tried learning Haskell once and failed miserably; I was only smart enough to learn some Common Lisp.)


Indeed, many languages would benefit simply from having modifiers that reverse the typical use of "const". Make everything immutable / side-effect-free by default, and add a "mother may I" keyword that allows mutation / side-effects without giving a compile error.

Hopefully, programmers would learn to avoid writing code that requires the "stomps-all-over-shit" keyword except when they really do need it (assuming tail call elimination for simple loops).
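Python has no such modifier, but as a loose approximation of the immutable-by-default style, a frozen dataclass makes mutation a deliberate, visible act:

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)   # "const by default"
    class Point:
        x: float
        y: float

    p = Point(1.0, 2.0)
    # p.x = 3.0               # raises dataclasses.FrozenInstanceError
    q = replace(p, x=3.0)     # "mutation" becomes constructing a new value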


One of D's improvements upon C/C++ is its ability to enforce "deep const-ness".


Well, that's the theory. In practice I have yet to see a big CRUD app like Facebook rewritten in a functional style, and I would be very interested to see how that magically dissolves the inherent complexity of such a beast.

Until then, I'll continue to think that functional is shiny and sexy, but that the proven and pragmatic way to keep complexity in check is still the good old, boring OOP way.


As other commenters have said, the point isn't to eliminate complexity. Some problems are actually inherently or irreducibly complex! The goal is to find ways to manage the complexity. Abstraction barriers are a trivial example[0]. Type systems are another approach, hopefully foisting the complex management of datatype correctness onto the compiler rather than the programmer.

None of these approaches necessarily makes the problem itself less complex. But as with a type system, there are ways in which a language can help, or hinder. A language which adds too much complexity incidental to the problem you're trying to solve is an example. TFA is groping towards this point, but does a poor job arguing it.

The proposition is that a functional language's abstractions, conventions, and patterns may make it easier to manage (not dissolve) complexity. It's hard to argue against the claim that referentially transparent functions, for example, are easier to reason about. Whether that advantage holds up in the large, as a project scales, is unknown to me.
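As a toy illustration of the referential-transparency point (Python, not from the article):

    counter = 0

    def impure_next():
        global counter      # hidden state: two identical calls, two answers
        counter += 1
        return counter

    def pure_next(n):
        return n + 1        # referentially transparent: a call can be replaced
                            # by its value anywhere, which keeps reasoning local

    assert pure_next(41) == pure_next(41)    # always holds
    assert impure_next() != impure_next()    # order and history now matter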

[0]: http://mitpress.mit.edu/sicp/full-text/sicp/book/node29.html


I don't think anybody is claiming that functional programming makes complexity magically go away. The claim is that functional programming (especially when backed by a strong static type system) makes several categories of issues essentially disappear. That's a much more reasonable claim to make.


And a very powerful property. Never mind conserving CPU or memory; the slowest component in any system is the wetware programming the system: you. I always try to use tools and techniques that eliminate the possibility of whole categories of bugs. I use scope to control use-counting when possible; I use as strong a type as is helpful in whatever tool I'm using; and so on.


Chill out people. The man is saying that the pure object-oriented model and the pure functional programming model have corners where they are lacking. He's not saying it's a bad idea to write object-oriented programs or pure functions.

If you have ever needed a little helper function in Java and wondered why you had to stick it in a static method in a class, I think you'll understand his point. Similarly, many of the Lisp programs I've seen end up with a fair amount of procedural code.

Basically, he's asking you to take a step back and appreciate what you have (and don't have), rather than what you think you have. This would lead to more interesting conversations.


> If you have ever needed a little helper function in Java and wondered why you had to stick it in a static method in a class, I think you'll understand his point.

But that's a flaw in Java, not OOP, and it's not a consequence of Java being a pure OOP language (which it isn't).

If one wants to criticize OOP, the language to focus on should be Smalltalk.


He's also making accusations without providing proof for them, and casting each paradigm in a negative light (extreme friendliness, extreme helpfulness, extreme eating all sound negative) without even defining what exactly "extreme OOP" or "extreme FP" are.


I can't speak for the author, but it seems he presented his argument about OOP poorly. I'm not sure it follows the same line of thought I have about OOP, but I regard OOP more as a mental model than a practical one. In that view, it's not that everything is an object; rather, there are nouns and there are verbs, and everything is to be described in terms of that relationship. I prefer to think and work in a component-based approach, though. Functional programming, on the other hand, is a set of rules over your mental model. Ultimately, I think, it boils down to which mental model suits you best, which makes the whole thing a matter of subjective reasoning. But that's how I think about it; it doesn't have to be true.


I think the author also gets it wrong on the FP side, as he forgets the impure ones, like ML and the Lisps, where side effects are accepted.

This type of article is also nonsense in a time and age when mainstream languages are going multi-paradigm and it is up to developers to choose the best paradigms to model the application's architecture.


> I think the author also gets it wrong on the FP side, as he forgets the impure ones, like ML and the Lisps, where side effects are accepted.

That is not in conflict with his argument. He does not argue that object-oriented programming or functional programming are wrong, just that taking a paradigm to an extreme (e.g. pure functional programming) is bad.

He is implicitly arguing for ML and Lisp and against Haskell, since e.g. ML is a functional language that recognizes that the world is mutable by allowing mutable data structures.

(Not that I agree - one could argue that Haskell acknowledges impurity even more by making it part of the type system.)


> Smalltalk functions are first-class objects

What about the functions inside those objects? And the functions inside that function object? It's like a chicken-and-egg problem. I neither agree nor disagree with him; what he's trying to say is that there is a difference between methods and functions. Methods (1) are functions (2) wrapped inside an object. (1) and (2) are different. He's just trying to say that "everything is an object" can't be taken rigidly. At least that's what I understood.
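Python, incidentally, makes the (1)/(2) distinction observable: a method is just an ordinary function stored on a class, plus the machinery that binds it to an instance:

    class Greeter:
        def greet(self):                # (1) a method...
            return "hi"

    def greet(obj):                     # (2) ...and a plain function
        return "hi"

    g = Greeter()
    assert g.greet() == greet(g) == "hi"
    # The method *is* a function under the hood; the class merely stores it:
    assert Greeter.greet is Greeter.__dict__["greet"]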


It's been a long time since I had anything to do with Smalltalk, so I may be shitting you -- unfortunately, I'm also on the run and cannot properly check the following statement, so please take this with a grain of salt -- Smalltalk does not have named functions that aren't methods. In other words, if something takes arguments, returns things and has a name, it's always a method of an object (i.e. it's a message to which an object responds). So while (in abstract terms) a method (1) is different from a function (2), Smalltalk only has (1). You can't define a "function" outside the scope of an object the way you can in C++. You could define a class that has nothing but a long list of static methods, not modeling any kind of logical abstraction, but that's considered bad style.


Reminds me of this paper;

"The Interactive Nature of Computing: Refuting the Strong Church-Turing Thesis" -- http://cs.brown.edu/people/pw/strong-cct.pdf

"The theoretical nature of computing is currently based on what we call the mathematical worldview. We discuss this worldview next, contrasting it with the interactive worldview."


The paper's author should be slapped with any CS book that contains the theorem IP = PSPACE.


This is an interesting article.

It is the first critique of functional programming I've read from someone who clearly knows their shit.

I plan to look at miniKanren more when I get the chance.

I've done a fair amount of math, though likely less than the author, and I have a similar impression: the things that are hardest to understand aren't necessarily the best tool for every job, despite their beauty. And beautiful abstraction for its own sake can be as dangerous an anti-pattern as excess hacks and simplicity.


> It is the first critique of functional programming I've read from someone who clearly knows their shit.

The critique is 'Haskell takes FP to an extreme and I show this is true because someone tried to implement miniKanren and needed Oleg to do it'.

That is not really a strong argument.

Also note that he isn't really arguing against functional programming, but pure functional programming.

> the things that are hardest to understand aren't necessarily the best tool for every job, despite their beauty.

Haskell in itself is a very simple language. Most functor and monad instances are also easy to understand or use. I agree that there is a tendency in the Haskell community to put abstraction on abstraction, especially when the abstraction is new. But in the end, most of the Haskell packages that became widely-used (bytestring, text, vector) are easy to understand and use.


> The critique is 'Haskell takes FP to an extreme and I show this is true because someone tried to implement miniKanren and needed Oleg to do it'.

>That is not really a strong argument.

It's worse than that. The linked paper is about implementing minikanren as a monad transformer, which is a lot trickier than just porting it over, regardless of language. It's probably a lot easier in Haskell than in Scheme though.

Furthermore, if pure FP is useless, then monad transformers are useless too. But then the argument becomes "this useless thing is really tricky to implement in Haskell, therefore Haskell is useless". The only reason you would want Oleg's minikanren to be easy to implement is if you actually want pure FP. So the argument kind of negates itself.


I think all this blog post really highlights is that you can get a PhD without understanding OOP or functional programming.


Nothing wrong with OO or pure FP. They combine very well in languages like Scala. The author complains that "in Scala, functions are just objects with a method named apply" and also complains about the "lack of first-class functions" in Java. In fact, these languages support clean modeling of algorithms using high-level concepts. The target is a microprocessor that understands low-level imperative code with side effects. If you look only at the first steps of the transformation, it may always look wrong. The distance between the code and the execution may make the program more difficult to analyse, but that is a normal price for abstraction. The only thing that is wrong is that the author is looking at high-level languages with a focus on low-level semantics. The fact that "int" is not really an object in many languages is becoming more and more an implementation detail.


This person is pursuing a PhD in Computer Science?

"If you look deep into them, monads make programs complicated and hard to write, and monad transformers are just ugly hacks."

Wow. Just... wow.


Monad transformers are ugly, ugly, ugly (though not really hacks, since they are based on sound mathematics.)


I know of this person in several ways. Here is a short version of the author's educational background: he pursued a PhD at Tsinghua University in China but failed (or chose to quit). Then he got an offer from Cornell and quit within one (or two?) years. After that, he tried to pursue a PhD at Indiana University in the programming languages area but failed again. Now he has a master's degree and a job. In many ways, he is really not a researcher in the programming languages field.

I don't want to judge someone by his or her educational background. But repeatedly saying "I'm a PhD, my research is about programming languages, and my advisor is some guy you should believe" really doesn't help his argument. To be honest, I can hardly find the main argument he wants to make. If he just wants to say that FP and OO both have their pros and cons, then I totally agree, and it really doesn't take such an article to state that simple fact. But the author seems to have other things to say, which to me are quite vague, and I can hardly get them.


I think that OP mistranslates "extreme functional programming" as "we never have side effects and compose functions".

Also, what exactly is "extreme functional programming"? Only using functional paradigms and abstract data types instead of OOP paradigms and objects? Is "extreme functional programming" not taking advantage of do notation, the ST monad, etc., and only programming in pure functions, thereby accomplishing nothing?

I think the author should have defined what he believed to be extreme OOP and extreme FP before going on such a diatribe (mostly against FP, and Haskell in particular, since other languages don't separate side effects).


Functional programming wouldn't be useful if side-effects didn't exist. So, arguing that functional programming isn't useful because:

> There exists things that are not pure. Side-effects are very real.

Is nonsensical.


"The lack of first-class functions is the major reason why there are so many “design patterns” in Java. Once you have first-class functions, you will need almost none of the design patterns."

Now this is a very bold statement. I'm dying to hear more about how you can get rid of design patterns with first-class functions. Does "getting rid" mean that sharing knowledge is no longer needed and programming becomes art as we all head toward the rapture?



No, what it means is that instead of documenting pattern templates that you have to fill in with the bits that pertain to your use case, you can provide library functions (or, in some languages, library macros) that take well-defined arguments and do what the pattern is intended for, rather than providing boilerplate fill-in-the-blanks Mad Libs code.

Instead of books of design patterns, you have code libraries with APIs, and you don't have to rewrite the code following the pattern for each new project.

The less expressive the language is, the more need there is for template-style design patterns. The more expressive the language is, the more patterns can be implemented in reusable code.
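For example, the Strategy pattern collapses into "pass a function"; a Python sketch with invented names:

    # Strategy without the pattern: the strategy is just a function argument.
    def checkout(items, pricing):
        return sum(pricing(item) for item in items)

    regular = lambda item: item["price"]
    sale    = lambda item: item["price"] * 0.8   # 20% off

    cart = [{"price": 10.0}, {"price": 5.0}]
    assert checkout(cart, regular) == 15.0
    assert checkout(cart, sale) == 12.0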


FactoryFactoryFactory classes, most dependency injection frameworks, anonymous classes, the various one-method interfaces (such as event handlers/observers), and probably more become unnecessary when you have first-class functions.


Note that it also becomes easier when class types (generally speaking) are available.

Also, one interesting aspect of interfaces is that they act as tags, allowing for stricter formal contract checking (as in: am I registering a function to this Observable which is meant to deal with its events, or is it just a coincidence that the function signature matches?).


I am currently experimenting with Spring's Java-based config (as opposed to XML-based). It seems to me that a lot of those factories are created to be able to add complex behavior in the XML configuration. I almost want to say that it is XML injection that is causing such complexity.


No no no, I wasn't trying to say that the Factory pattern in itself is useless, or dependency injection frameworks, or similar. These are all useful, time-tested things. However, in a language with first-class functions, all of these cease to be "obscure patterns that you need a certificate to think of and design" and simply become "just another day of life". For example, you don't need to create a FactoryFactoryFactory class with appropriate functions and a whole framework around it; you simply write a function that takes the appropriate parameters and returns the appropriate object, and pass it around. Similarly, you don't need a complicated dependency injection framework; you just pass parameters to a function and it gives you what you want. It's that simple. Still, occasionally some advanced IoC functionality can be useful, but much less frequently.

To see this in action, try googling "factory pattern in Python". You will find hardly any results.
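A minimal sketch of why (names invented; in Python the class itself is already a callable, i.e. a factory):

    class DevDb:  pass
    class ProdDb: pass

    def make_app(make_db):      # "dependency injection" = passing a callable
        return {"db": make_db()}

    app1 = make_app(DevDb)             # the class itself is the factory
    app2 = make_app(lambda: ProdDb())  # or any zero-argument callable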


Sigh. Yet another discussion full of trolling about the pros and cons of FP vs. OO.

Guys, listen to Simon Peyton Jones himself: "Haskell is useless". He's discussing stuff with Erik Meijer in this video:

http://www.youtube.com/watch?v=iSmkqocn0oQ


His comment was of course tongue in cheek. He wouldn't spend 25 years of his life working on it if he thought it was useless.


You didn't actually watch this video did you?


If we accept the author's argument, then what is the "correct" type of language? A hybrid? A functional-OOP-CES-actor-message-passing language?

In all seriousness, I do think there is definitely room for something better than what we have today. I remember a quote saying that "compilers are built for computers, not humans", and I'm starting to agree with that more and more. Our minds are primitive, and OOP, functional programming, and threading are simply asking too much of them (proof: all software has bugs). So while the author might have called out OOP and functional in particular, I think programming languages as a whole are lacking.


I was reminded of Anton van Straaten's "koan" on closures vs. objects with venerable master Qc Na [1] ... but I digress. This post is actually a troll post designed to re-ignite fiery discussions about which is the superior PoV - functional or OOP - and I'm going to bite.

The only point I'm willing to grant is the somewhat sane, zen-like advice of "don't fall in love with your models".

Regarding OOP, I recall Alan Kay saying something to the effect of "OOP is not about objects. It's all about messages." which nicely points to the dualism between the two points of view.

    > There exists things that are not objects. Functions are not objects.
Well, objects are any "thing" that you can talk about, toss about in your head. Functions are certainly such "things" and count as "objects" in that sense. If we accept that "monotonic" is an adjective, i.e. an attribute of a noun, then the function described as "monotonic function" is an "object".
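In a language that takes "everything is an object" seriously, this is literal; in Python, for instance, a function is an ordinary object:

    def monotonic(x):
        "A monotonic function."
        return 2 * x

    assert isinstance(monotonic, object)            # the function is itself an object
    assert monotonic.__doc__ == "A monotonic function."
    monotonic.adjective = "monotonic"               # it even carries attributes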

    > There exists things that are not pure. Side-effects are very real.
Um, how so? One notion that characterizes side-effects is irreversibility. You cannot unlaunch a rocket, for instance, and so launching counts as a "side effect" of the code "launch rocket". Despite that, when we look at the basic equations of quantum mechanics, they are all purely unitary! I mean, not only are they perfectly reversible, quantum "information" cannot be destroyed or even copied. So with fundamental physics we have the opposite question: how the hell do we get "side effects" out of this much purity at the core?

    > Also purely functional languages cause a huge cognitive cost. 
I call pure BS on this section. "Huge cognitive cost"? For whom? Wouldn't "structured programming" have imposed huge cognitive costs on former "goto programmers"? It is more honest to just say "I don't understand monads, so I won't use them."

    > If you look deep into them, monads make programs complicated and hard to write
Again, "complicated" for whom and "hard" for whom? Is there some objective sense in which such declarations can be made? I, for one, have benefited greatly from learning about monads, reading monadic code and recognizing the pattern in existing code.

    > You can write pure functions in any language, but the important thing is, you should be allowed to use side-effects too.
Why? Who's to say you "should" this or "should not" that?

    > Everything starts to do harm when they are pursued to the extreme.
I'd say the exact opposite actually. Both OOP as well as the functional perspective here illuminate the programming world only when taken to the extreme. Until then, folks just keep arguing about what is functional and what is oop and what isn't. Haskell and Smalltalk/Self have done this favour for us by adopting these extreme dual perspectives.

[1] http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/m...


The Alan Kay reference is from his OOPSLA'97 talk [2], where he says that he's "apologized profusely over the last 20 years for introducing the term 'object oriented'" and suggests the Japanese notion of "Ma", or "the unseen stuff that goes between objects", as what is important.

(edit) [3] is a quote from a communication with Kay in 2003 -

"OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them."

[2] https://www.youtube.com/watch?v=oKg1hTOQXoY (around 38minutes)

[3] http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay...


It seems that the author has deleted the original article and posted a reply to critical comments from Haskell school: https://yinwang0.wordpress.com/2013/11/09/oop-fp/

And some content of the original article was posted to: https://yinwang0.wordpress.com/2013/11/16/pure-fp-and-monads...


Is there a blend between the two? I really like loops...


Let go of your loops and learn to map, flatMap, and fold! Once I really figured out how to use those effectively, I've written far fewer indexed for loops. I tried to explain this philosophy in a recent and unheralded SO answer: http://stackoverflow.com/questions/19720830/is-it-safe-to-mo...
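A small Python illustration of the shift, with functools.reduce playing the role of fold:

    from functools import reduce

    xs = [1, 2, 3, 4]

    # Indexed loop: index bookkeeping plus a mutable accumulator.
    total = 0
    for i in range(len(xs)):
        total += xs[i]

    # The same computation as a fold: no index, no visible mutation.
    assert reduce(lambda acc, x: acc + x, xs, 0) == total == 10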


That looks sweet but I can't help but wonder: How are those implemented internally? It seems like it would be loops?


It depends on the language and library. You can implement them, or any loop, or indeed any computing construct, with function application alone. This is what the lambda calculus [1] and combinatory logic [2] are all about.

In actual languages, it depends. In languages without tail-call optimization [3] (e.g. Python and JS), a loop is probably the most efficient way; with it, the choice isn't particularly important. In a purely functional language, loops indexed by mutable variables aren't on the table, so you have to go the recursive route.

A basic flatmap on a list is pretty easy to define without for loops in JS (although I would definitely recommend just using a library for all sorts of practical concerns):

    function flatMap(list, f) {
        if (list.length) {
            return f(list[0]).concat(flatMap(list.slice(1), f));
        } else {
            return [];
        }
    }

    flatMap([1,2,3], function (x) { return [x, x]; });
    // -> [1, 1, 2, 2, 3, 3]
[1] http://en.wikipedia.org/wiki/Lambda_calculus

[2] http://en.wikipedia.org/wiki/Combinatory_logic#Combinatory_l...

[3] http://en.wikipedia.org/wiki/Tail_call


Though poorly articulated, I am with the author that OOP done wrong (e.g., Java) is horrible. It leads to FactoryFactoryFactory and a lot of insanity!


Use the tool that is apt for the job. Programming languages and paradigms are tools. Tools are not greater than end products - period.


Part of me likes the bit where he wrote "Don’t fall in love with your model".

Another part of me feels like that's saying "Don't fall in love with your future spouse".


I don't know about you, but I find these debates a little depressing. A language will always have a certain flavour and angle to how it produces or executes machine code. A language will always abstract things so they can be better understood by a human brain.

There will always be a problem with orienting a language in a certain way, because there is no such thing as perfect when you design something; there are only compromises. A language's orientation is one choice among many, and it will fit certain cases better than others.

Just be thankful to have working languages, and if you're not, try to make one that either fits what you want or has both OOP and functional features. I can't understand how people can criticize programming styles and languages without trying to understand why this or that language is already great.

And by the way, you can't really weigh how good a language is; it's like benchmarking English or Mandarin or Spanish or French.

I'd just like to know how functional programming and OOP solve certain similar problems, and when it's better to use one or the other. Because let's be honest, programming style has nothing to do with theory; you can only judge when you have to engineer something.


I thought this was going to be one of those one-word websites that said "Nothing". There's nothing wrong with either OOP or FP, only something wrong when people misapply them.


This ignores the rise of effect systems, which take monadic effect-handling and place it into the realm of static analysis, where it belongs.

Disciple is the future.


Just http://www.scala-lang.org/ and everybody's happy :)


I'm putting a lot of time and effort into mastering Scala, because I think it's the future, paradigmatically speaking, though the muddled syntax and the complexity of the language may doom it. Either way, I think it's the vanguard of the revolution. I was really psyched about Pyret (http://www.pyret.org/) when I read about it here over the weekend, and I hope it represents the future of how we program.


Come to #Scala and #Scalaz on Freenode IRC, it's a great learning resource. (REPL in the chat so you can get realtime help with code).


Complete nonsense.


This was solved long ago, without taking any extremes. In short, a language should be mostly functional, which means that when you really need to overwrite a value, you just do it. In well-researched languages such as Scheme you have set!; in CLs you have setf.

All the monadic stuff (remember that a monad is nothing but an ADT - an abstract data type) is already an extreme, because one must structure the code in a certain way. However, one could write monads in Scheme; there is absolutely nothing special about them.

I think that the monad madness is of the same nature as the over-engineering madness that plagues OOP - the wasting of time and effort on the construction of vast, meaningless class hierarchies.

On the other hand, finding a balance is the most difficult task. Scala, it seems, is close enough, but the ugliness of static typing (parameterized-type polymorphism) is still there. At the same time, it avoids mutation whenever possible and uses persistent data structures and first-class functions, which results in much more reasonable, predictable and, as a consequence, reusable and manageable code (modularity with almost no side-effects).

Yet another point is that there are literally tens of implementations of OOP features for CLs and even Schemes, which might suggest to you that OOP is just a set of conventions and rules - how to reference and dispatch methods - nothing but pointers and procedures, which are the most fundamental abstractions in CS.

The big ideas from Lisps - that everything is a reference, and that values have types (tags) rather than variables (you don't need Nothing to be a subclass of everything) - together with first-class functions without side-effects are a good-enough set of features for a programming language.

The point is that the so-called "best of both worlds" was discovered long ago in classic Lisps, and it could loosely be called a mostly-functional language.


You can't have monads in Scheme due to the lack of a type system. What you can have are instances of monads, but the point of using the concept of a monad is to abstract over it, and to have functions that work with any possible instance.


Functions that work on "any monad" in Scheme can accept the monad operations as an extra parameter. A macro can hide this, like

    (with-monad m
       (monad-operation data))
See https://github.com/clojure/algo.monads for an implementation of this in Clojure.


> You can't have monads in Scheme due to the lack of a type system.

You can have monads in almost any language. The lack of static typing means you don't have static type safety with them, but monads aren't any different than anything else in that regard.

> What you can have are instances of monads, but the point of using the concept of a monad is to abstract over it, and to have functions that work with any possible instance.

How does not having static typing prevent you from abstracting over monads? You can still write functions that work with any possible instance of a monad in languages without static typing.
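A Python sketch of that "extra parameter" style (invented helper names): the monad is just an explicit record of unit and bind, doing the dispatch that Haskell would resolve through type inference:

    # Two monad instances as plain dictionaries of operations.
    maybe_m = {"unit": lambda x: x,
               "bind": lambda m, f: None if m is None else f(m)}

    list_m = {"unit": lambda x: [x],
              "bind": lambda m, f: [y for x in m for y in f(x)]}

    def lift2(monad, g):
        # Generic over *any* instance: the dictionary supplies the operations.
        unit, bind = monad["unit"], monad["bind"]
        return lambda ma, mb: bind(ma, lambda a: bind(mb, lambda b: unit(g(a, b))))

    add = lambda a, b: a + b
    assert lift2(maybe_m, add)(2, 3) == 5
    assert lift2(maybe_m, add)(None, 3) is None
    assert lift2(list_m, add)([1, 2], [10]) == [11, 12]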


> How does not having static typing prevent you from abstracting over monads?

They were probably talking about the way Haskell can figure out which monad you're in through type inference. As an example, `return :: a -> m a` dispatches on the type of the function's return value, which doesn't map cleanly to a dynamically typed implementation. You need to be explicit about the monad you're in if you don't have the compiler helping you out.


That's true (and, heck, it's true even of many static languages that support monads; that feature of Haskell is due to its better-than-most type inference, not just to having static typing), but I don't see how that limits the capacity for abstraction. Certainly, you may need an additional explicit parameter in some cases rather than implicitly supplying the information via type inference, but that doesn't change the level of abstraction.


http://www.pvk.ca/Blog/2013/09/19/all-you-need-is-call-slash...

Think again: Paul Khuong walks you through implementing some monads using call/cc.



Genuinely wondering: does this provide type safety?


No, that would happily evaluate "(sequence_ ((display 'a) nothing))".


I think that, despite all the mistakes in the article, everyone agrees that extreme OOP and extreme FP are almost equally bad, and that a balance between them gives the best results.



