
I grasped the concept of OOP pretty easily. Can't say the same about FP.



People say this, and I'm not sure it's actually true. People grasp some basic use of structs with methods on them pretty quickly and using that to do polymorphism. Most programmers I know haven't grasped how to use a system of message passing objects to make the semantics of the underlying language programmable at runtime.


To me OOP is about using "templates" to create "objects", which map nicely to real world entities/concepts. So a class is like a function, but unlike a function it can do more than define an input to output mapping. It groups together multiple related functions and variables, and allows "objects" to exist and be interacted with. These "templates" can be hierarchical and/or have more than one "base template", combining all their "features". This type of thinking feels natural to model real world objects and tasks.

The above is my "layman's" understanding of OOP (I've never taken programming classes, but I have been using OOP extensively in the code I write to do my machine learning research).
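That mental model can be sketched in a few lines of Python (the class names here are invented purely for illustration):

```python
class Instrument:                      # a "base template"
    def __init__(self, name):
        self.name = name

class Saw(Instrument):                 # inherits the base template's "features"
    def cut(self, workpiece):
        return f"{self.name} cut {workpiece}"

saw = Saw("table saw")                 # the template stamps out an "object"
print(saw.cut("plank"))                # the object groups data and behavior
```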

Can someone describe FP in a similar manner to someone like me? Because every couple of years I stumble upon another "intro to FP", and it just does not click for me. When exactly should I use it? How "monads" or "functors", or whatever else is important in FP, map to the real world, and real tasks? Maybe I just don't write the type of code that would benefit from it.


> This type of thinking feels natural to model real world objects and tasks.

There is some truth to this, but programs don't store data, they just process it. We already model the data in our data store. If we're accessing data from a DB or receiving it in a defined structure like JSON or XML, the data has (or at least should have) been modeled and classified already. What's the advantage of doing it again? The other issue I have with this core OOP concept of modeling things in human terms is that while humans think like this, computers don't. It's just a machine. OOP just seems to create complex abstractions that don't really correlate to how a computer actually works.

That being said, I don't think the purist approach to FP is a magic solution to our coding woes either. The pure FP style definitely has its uses, but I would never recommend that someone build something like a web app in Haskell or Erlang.

Personally I favor a hybrid style. Just use what works and makes sense for the task. Don't restrict yourself to some arbitrary set of self-imposed "rules" that define how you code. Any style of programming is equally capable of producing "bad" code, OOP and FP included.

If my task is to write a program that adds 2 numbers together the correctness of my coding style in relation to my chosen paradigm is irrelevant if the answer to "2 + 2" isn't "4".


I feel the same way. As hobbies I like to work with my hands on things like carpentry and metalworking. OOP maps pretty well to how I think about accomplishing step by step tasks on physical objects:

Step 1: Use this tool (instance of a class) to perform that modification (call a mutating method) to affect this workpiece (another class instance). Step 2: Now use that tool...

So if I’m building a cabinet, my mind plans out something like:

  saw.cut(wood1);
  saw.cut(wood2);
  wood3 = glue_bottle.apply(wood1,wood2);
  sander.sand(wood3);
Etc.

I’ve always thought of a program as just a more elaborate series of step by step commands. Computer, do this to this thing; now do that to that thing (which may, through a subroutine, mean doing other nested tasks). I can’t wrap my mind around accomplishing a step by step process with functional programming. Just haven’t made that mental leap yet, probably because I’m so used to thinking the other way.


> I’ve always thought of a program as just a more elaborate series of step by step commands...Just haven’t made that mental leap yet, probably because I’m so used to thinking the other way.

Yes, it's a hard mental leap the first few times. And there's another leap to logic programming that is almost as large as the jump to functional programming.

Perhaps a physics analogy would be useful.

A physicist may use a local, kinematic view: the object is here and has this velocity, and this force on it, so in a fraction of an instant that will make it move this much, and in another fraction of an instant it will move this much more...

A physicist may also use a global view: if I consider the set of all possible paths that start at point A and end at point B, what distinguishes the one an object will actually take?

Both are useful, and things that are obvious in one are usually quite obscure in the other.


For a simple case like this with no diamond-shaped dependencies, the direct functional expression is inside-out rather than top-to-bottom:

  sand(
    sander, 
    glue(bottle, 
      cut(saw, wood1), 
      cut(saw, wood2)))


Now write that code with tools having durability. In OOP that is trivial and requires no code changes elsewhere.


> To me OOP is about using "templates" to create "objects", which map nicely to real world entities/concepts.

That's a really commonly held view, and I think it goes off the rails in several fundamental ways:

1. The templates as a separate language feature are unnecessary, a vestigial inheritance from previous languages. One of the things I really like about JavaScript is that it took Self's object system, so I don't need to have a class to get an object. Sometimes it's useful to have a function to generate a family of similar objects. Sometimes it's not.

2. Objects (values in the language) mapping to real world entities/concepts is a dead end. This was actually realized by the database community by 1970, with the relational model. In a relational database, you define relations which may be aspects of an entity, but except in simple cases the entity exists only as a smeared out set of aspects across tables. And further, there need not be an entity that all those aspects describe, or there may be a complex web of entities and different sets of aspects may pick out contradictory, overlapping members of that set. I have found the terminology of phenomenology (suggested reading: Sokolowski, 'Introduction to Phenomenology') as a really good vocabulary for thinking about this.

3. The original motivation for OOP was very different. Consider that you're exploring an idea for a new programming paradigm, approach to AI, something that expresses itself in a mathematical formalism. One of the bottlenecks of CS and AI research was implementing these formalisms as programming languages. OOP came from the observation that you can express these formalisms in terms of message passing actors, and message passing actors are a straightforward thing to implement. So if you have a programming system that's built on message passing actors and you want to try a new formalism, you can add a new arrangement of message passing actors to the same, live system to implement your formalism, which is a lot faster than writing a whole new compiler or interpreter.

> Can someone describe FP in a similar manner to someone like me?

The impetus for FP comes from Backus's Turing lecture "Can programming be liberated from the von Neumann style?" Backus's observation was that thinking about programming in terms of "do A, then do B, then do C" doesn't scale to intellectual complexity very well, and he pointed out that the obvious way to scale is to have some kind of "combining forms" that let you combine hunks of programs, and that a somewhat restricted form of function is particularly easy to combine.

The simplest version that we all know is a shell pipeline. This treats each command as a function from string to string and uses | to compose those functions. We take it for granted that each shell command can't mess with or look at the inner workings of the other ones in the pipeline (a restricted form of function that is easy to combine).

In bash you would write

    ls . | grep pig | wc -l
In Haskell you would write

    getDirectoryContents "." >>= print . length . filter (isInfixOf "pig")
(note that . and the pipeline here compose right to left, the reverse of the shell version)
The next step is map, filter, and fold. It turns out that fold is as powerful as loops, so those three together are a complete set of programming primitives, and they're super easy to combine.
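As a sketch of how far those three primitives go, here is the pipeline redone in Python with map, filter, and a fold (functools.reduce), with no explicit loop; the file names are made up stand-ins for a directory listing:

```python
from functools import reduce

names = ["pig.txt", "dog.txt", "pigpen"]   # stand-in for a directory listing

# filter keeps the matching names, map turns each hit into a 1,
# and the fold sums them up: the whole "count matches" strategy
# is built from the three primitives.
count = reduce(lambda acc, x: acc + x,
               map(lambda _: 1, filter(lambda n: "pig" in n, names)),
               0)
print(count)  # 2
```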

And then you start climbing up the ladder of abstraction, looking for similarly easy sets of primitives that you can express stuff in and combine them. Monads, applicative functors, arrows, lenses...these are all structures that people have found while building such sets of primitives. There's nothing magical about them from that point of view. It's kind of like the fact that sequences (lists, arrays) show up everywhere in procedural programming.
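To make "monad" a bit less magical: one of the simplest examples is the Maybe/Option pattern, which is just a combining form for computations that can fail. A minimal Python sketch (the name bind follows Haskell convention; the helper functions are invented for illustration):

```python
def bind(value, f):
    """Apply f to value, but short-circuit if value is None."""
    return None if value is None else f(value)

def parse_int(s):
    return int(s) if s.isdigit() else None

def half(n):
    return n // 2 if n % 2 == 0 else None

# The chain stops at the first failure, with no if-None checks in sight:
print(bind(bind("42", parse_int), half))   # 21
print(bind(bind("41", parse_int), half))   # None (41 is odd, half fails)
```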


I appreciate your answer.

> programming in terms of "do A, then do B, then do C" doesn't scale to intellectual complexity very well

I think that's the bit I'm struggling to understand. You gave an example of shell pipe, but that's easy to implement with simple procedural programming:

  f = ls(".")
  g = grep(f, "pig") 
  h = wc(g, mode="lines")
  print(h)
This code is easy to understand, and the advantage of doing it in Haskell is not clear. I guess it's not "intellectually complex" enough to demonstrate the power of FP. Is there some example where I'd study it for a bit, then realize "Yes, this is a nifty way to do X", where X is some task that is cumbersome (or dangerous, or ...) if done in a procedural or OOP style of programming? Because I definitely had a moment like that when I was looking at OOP examples.


That's a really clear statement of what you're after, and I'll try to provide a good answer.

When you write a procedural program, there are larger patterns or strategies that you use again and again, such as the simple pipeline you show above. Or you repeat until a condition is met, or you iteratively adjust a value. Any competent procedural programmer deploys these strategies with barely a thought. They aren't things in the language that can be manipulated and combined. And, truthfully, a lot of this has been dragged into what we think of as procedural languages today.

So consider a procedural implementation of listening on a socket and on each connection reading the request and sending a response. It's straightforward to write in a procedural language, but if we want to reuse that behavior, we need to parameterize it over the handler. That may seem obvious today, and it's a trick that C programmers have known forever (see qsort in the stdlib). But imagine setting up the whole language like that, making all the strategies that we toss out so blithely in procedural code, into higher order functions.

This pays off because you can debug and perfect the strategy and the particular parameters for behavior separately, which turns out to be a really useful separation. Get the socket listener code working where the response is echo, then plug in your web app's handler.
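A minimal sketch of that separation in Python (serve, echo, and web_app are invented for illustration; a real version would sit on an actual socket loop):

```python
def serve(requests, handler):
    """The reusable strategy: take each request, return the handler's response."""
    return [handler(req) for req in requests]

def echo(req):                 # debug handler: prove the plumbing works
    return req

def web_app(req):              # then swap in the real behavior, unchanged plumbing
    return f"200 OK: hello, {req}"

print(serve(["ping", "pong"], echo))     # ['ping', 'pong']
print(serve(["alice"], web_app))         # ['200 OK: hello, alice']
```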


> Most programmers I know haven't grasped how to use a system of message passing objects to make the semantics of the underlying language programmable at runtime.

Can you please elaborate on this?

By “message passing” are you referring to ObjectiveC-style virtual calls - or to message-passing in the context of OOP loose-coupling for disseminating events (i.e. broadcasting) instead of using strong references to call event-handlers?


Objective-C's virtual calls are exactly the message passing I mean. They're a direct copy of Smalltalk's.


You could frame OOP as a single concept: an abstraction that can store mutable state inside itself. Though that doesn't really capture all OOP has to offer.

FP, on the other hand, is more like a set of tools. You can use many of those tools in many OOP languages. In this way FP is a lot like the procedural programming paradigm, which is also a set of tools.

And that is why

>I grasped the concept of OOP pretty easily. Can't say the same about FP.

doesn't really hold. Not because FP is difficult, but because it isn't a single thing to grasp.


Ok. Can you give me an example of any FP objects/tools which are useful? By useful I mean it would make my life (as a coder of short research oriented programs) easier. Similar to how learning OOP made it easier to keep track of all the moving parts in my code?


One example is streaming. Streams let you process nearly unbounded amounts of data, because they work on one element at a time as it comes in. Streams come from the FP world.
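In Python this is what generators give you. A sketch of processing an arbitrarily long stream one element at a time (the numbers source here stands in for, say, a file or socket):

```python
def numbers():                    # an endless source; nothing is materialized
    n = 0
    while True:
        n += 1
        yield n

def take(k, stream):
    """Pull just the first k elements off a lazy stream."""
    return [next(stream) for _ in range(k)]

evens = (n for n in numbers() if n % 2 == 0)   # still lazy: nothing computed yet
print(take(3, evens))   # [2, 4, 6] -- only six numbers were ever generated
```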

Function chaining also originates in the FP world, and it is used heavily in modern languages like Python.

You mentioned research, so a good example of this is the difference between Spark and MapReduce. Spark was introduced to optimize processing of large data sets by streaming them; MapReduce does not stream.

For a heavier example, a few quite powerful data structures come from the FP world. For instance, this CppCon talk introduces one that gives near-O(1) time for all operations (insert, delete, lookup, ...) and can be used in a thread-safe way: https://youtu.be/sPhpelUfu8Q
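The trick behind structures like that is persistence through structural sharing: an "update" builds a new version that shares most of its memory with the old one, so old versions stay valid, which is what makes them safe to share across threads. A toy cons-list sketch of the idea in Python (immutable tuples standing in for the real tree nodes):

```python
def cons(head, tail):
    return (head, tail)          # an immutable node; tail is shared, not copied

old = cons(2, cons(1, None))     # the list [2, 1]
new = cons(3, old)               # "prepend" 3 without touching old

print(new)                       # (3, (2, (1, None)))
print(old)                       # (2, (1, None)) -- old version still intact
```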


Because the OOP concept doesn't map to how a computer actually operates on data, and FP maps much more closely. It's certainly much easier in a functional language to optimize for the CPU cache and to support concurrency and parallelism.

OOP was designed for humans, and not computers; of course it is going to be easier.


I strongly disagree. How a computer actually operates on data is the CPU's opcodes, which: 1) have no notion of what a function is, 2) are quite happy to deal with global variables, 3) are perfectly willing to mutate data anywhere, 4) are willing to branch to arbitrary locations.

That's about as far from FP as you can get.


And OOP is closer?

Are you saying CPUs are object oriented?

Or maybe you just latched onto the comment about FP and decided to argue against it in isolation.

I am saying that CPUs are NOT object oriented and that functional programming is closer to how a CPU operates than OOP is, and therefore much easier for a compiler (and a compiler author) to reason about and to optimize.


> And OOP is closer?

No.

> Are you saying CPUs are object oriented?

No.

> Or maybe you just latched onto the comment about FP and decided to argue against it in isolation.

Yes, because it was flat-out wrong. FP is not in any way closer to how a CPU operates. (I mean, the rest of your post had problems, too, but that one bit was glaringly wrong.)

> I am saying that CPUs are NOT object oriented

That I will agree with.


Question:

Which functional programming languages are you specifically referring to with that statement?


All programming languages were designed for humans and not computers, outside of assembly and machine code, and some higher level languages that are tightly coupled with a specific machine architecture.

Computers are inherently procedural. Their code is inherently "unstructured". Structured Programming was developed for humans, for the maintenance and communication of programs by and to humans. And that's just basic procedural programming. Functional and OO programming are even more divorced from the underlying machine. And that's not a problem, that's a good thing for most programs that need to be written.

EDIT: Actually, even assembly was designed for humans. It's just that there's usually a 1-to-1 mapping between an assembly language and the underlying machine code.


Even machine code these days is pretty divorced from the corresponding hardware ops. A modern processor usually has to look up what to do when it receives a machine code instruction.



