Why I Prefer Functional Programming (morgenthum.dev)
141 points by allenleein 34 days ago | 156 comments



I think these sorts of examples dodge the real issue. Of course you can show a (perhaps needlessly) verbose procedural example of an algorithm that lends itself perfectly to FP and then demonstrate how FP is far more concise.

But the real issue, in my view, is what happens when the algorithm is not ideally suited to FP, otherwise we're back to the cute inheritance hierarchies in OOP textbooks.

Say we're dealing with something like the UK share matching rules [1], where you can't simply process your items in sequential order (because the cost of shares sold on a particular day may depend on the cost of shares purchased on or after that day).

The open question in my mind is whether pure functional languages scale well to problems of real life complexity, ugliness and the occasionally crucial optimization requirement.

Does FP fall apart completely in those circumstances, or does it just degrade to the point where you would have been better off with procedural code all along? I don't have an answer to that, but it may depend on how dogmatically the purity constraint is enforced.

[1] https://www.accaglobal.com/uk/en/technical-activities/techni...


> The open question in my mind is whether pure functional languages scale well to problems of real life complexity.

I came across this website [0] on news.yc way back when: It has a nice one-to-one mapping of GoF design patterns in C++/Java to Clojure.

> The open question in my mind is whether pure functional languages scale well to problems of real life complexity

OCaml [1] really shines in this respect; I've used it in a project involving static code analysis. Facebook's ReasonML is based on it. The rust-lang compiler was once written in OCaml. And then there's amirmc's unikernel.org / mirage.io, as well.

[0] https://mishadoff.com/blog/clojure-design-patterns/

[1] https://ocaml.org/learn


Agreed. My first thought was that you can do the exact same thing as the Haskell code in C++ using <algorithm> from the STL.

The biggest advantage I find in functional languages is that they force you to write code that's more testable and modular. I now write almost every OOP class with all of its functionality in private static methods, with the public interface methods mostly just gathering the arguments for those private static methods.
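A rough Haskell analogue of that discipline (a sketch with hypothetical names): the entry point only gathers inputs and performs I/O, while all the real logic lives in a pure, independently testable function.

    -- thin impure shell: gathers the input and prints the result
    report :: FilePath -> IO ()
    report path = do
      contents <- readFile path
      putStr (formatReport contents)

    -- pure core: all the logic, trivially testable in isolation
    formatReport :: String -> String
    formatReport = unlines . map ("* " ++) . lines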


Haskell is perfectly capable of expressing imperative algorithms and mutable state. Such algorithms are still expressed using pure functions, pure functions that return IO actions.
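For instance, a minimal sketch (using Data.IORef from base) of an imperative-style accumulation loop, expressed as a pure function returning an IO action:

    import Data.IORef

    sumLoop :: [Int] -> IO Int
    sumLoop xs = do
      acc <- newIORef 0                        -- allocate a mutable cell
      mapM_ (\x -> modifyIORef' acc (+ x)) xs  -- imperative-style updates
      readIORef acc                            -- read off the final value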


And x86 assembler is perfectly capable of expressing pure functions. Such algorithms are still expressed using subroutines, subroutines that save and restore all state to its start state.

The question isn't whether a language can express an algorithm, Church and Turing already proved that. The question is whether the language can do so scalably. We want to see nontrivial examples of FP applied to messy, side-effect-laden domains in a way that rivals other representations for clarity and maintainability.


Depends on what you mean by "scalable". The fact that Haskell can track the use of effects and mutable state in its type system might suggest that it allows such techniques to scale better than in a language that does not. Haskell also has syntactic and compiler support for imperative code, so no encoding is necessary; I don't think your analogy of writing FP in assembly is fair. Writing imperative code in Haskell is much the same as in Java.
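As a small sketch of that tracking: with the ST monad, a function can use real mutation internally while its type still advertises a pure result.

    import Control.Monad.ST
    import Data.STRef

    -- mutation inside, but the type says the result is pure
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      ref <- newSTRef 0
      mapM_ (\x -> modifySTRef' ref (+ x)) xs
      readSTRef ref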


I'm not saying that Haskell is as bad at imperative as assembly is at functional, just that being able to accomplish all tasks in a single-paradigm language does not constitute proof that the language is suitable for all tasks.

I love FP concepts, and I've enjoyed my experiments with Haskell. What I object to is that FP advocates don't push FP as a valuable addition to a larger toolbox, they push it as a single paradigm to rule them all. For daily work, I prefer a multi-paradigm language that allows me to choose a programming model that matches the domain well.

(FWIW, I have the same problem with OOP advocates who try to squeeze everything into a class. Some things are better modeled as pure functions.)


I don't think modern Haskell is a single paradigm language. As I said above, it has good support for imperative programming and lots of imperative code has been written in Haskell. Even the GHC compiler is full of imperative code and uses algorithms with mutation where it makes sense (e.g. unification).


A simple thing like a DNS resolve is much simpler in non FP. FP shines when it's more complex, but not when it's simple. I find Go-lang a reasonable crossover where you could do a lot of FP in it where it is needed and keep it simple and boring in the rest.

It's like me as an engineer: Don't use me for simple tasks because I will make them really complex.


> I find Go-lang a reasonable crossover where you could do a lot of FP in it

When people talk about FP, they usually include things like pattern matching and generics, neither of which golang has. Not to mention it actively prohibits chaining functions that return errors, because of the botched way it decided to handle errors as a product type instead of the correct way, as a sum type.

golang is a very imperative language with very little going for it.
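To make the sum-versus-product point concrete, here is a minimal Haskell sketch (with hypothetical validation functions): because Either is a sum type, a result is an error or a value, never both, so fallible steps chain with (>>=) and stop at the first failure.

    parseAge :: String -> Either String Int
    parseAge s = case reads s of
      [(n, "")] -> Right n
      _         -> Left ("not a number: " ++ s)

    checkAdult :: Int -> Either String Int
    checkAdult n
      | n >= 18   = Right n
      | otherwise = Left "too young"

    validate :: String -> Either String Int
    validate s = parseAge s >>= checkAdult  -- stops at the first Left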


tbh if you have the ability to easily treat functions as first class citizens in a language then it's fair enough to say you can code functionally in it.

I think the attitude "X is inherently a functional language" is being replaced by the idea that there are clusters of language properties with labels like "functional" or "object-oriented" or "array-based" or whatever, and that a language may overlap with bits of various clusters to a greater or lesser degree.


There's more to the admittedly fuzzy-edged concept of functional programming than function application.

I'd agree with you that the feature list is more important than a vaguer title. That's something Robert Harper talks about: http://www.cambridgeblog.org/2017/05/what-if-anything-is-a-p...

But the selection of features has limited value if the selection isn't coherent and the features don't compose. Within both FP and OOP there are languages whose design choices represent coherent choices with synergy... and ones that don't.

Go's feature set doesn't provide that coherence and synergy for functional abstractions. Frankly, it's an awkward obstacle in that direction. Beyond that, its feature set is deliberately designed to place arbitrary limits on abstraction.


Thanks, this is a measured response that's made me think a little deeper and refine my opinions.


Agreed, it's still relatively possible to invent stuff that is missing though. It will grow :)


Not without having a proper way to handle errors, and generics at the very least.


I think this most effectively demonstrates why I like a lot of OOP: it can be verbose. This example function is relatively illegible:

  alignCenter :: [String] -> [String]
  alignCenter xs = map (\x -> replicate (div (n - length x) 2) ' ' ++ x) xs
      where n = maximum (map length xs)
One of the most verbose languages I've used, Objective C, has made this a best practice. Despite the brackets (which scare people off), it can be some of the easiest to read code.


I think this is a question that's orthogonal to OOP-versus-functional-programming, but is rather about programming style. Haskell programmers like being terse, but there's also nothing stopping you from writing Haskell like this:

    alignCenter :: [String] -> [String]
    alignCenter lines =
      let maxLineLength = maximum (map length lines) in
      [ leftPadding ++ line
      | line <- lines
      , let leftoverSpace = maxLineLength - length line
            leftPadding = replicate (leftoverSpace `div` 2) ' '
      ]
You could have verbose and easy-to-read functional code—you don't see this often in Haskell by convention, but you might in OCaml or in Scheme—and you could also have terse OOP code filled with single-letter variable names (less so in Objective C by virtue of the method call syntax, but I've definitely seen code like this in Java and JavaScript.)


Functional programming seems to let people lapse into "point-free style" fairly easily, which (imho) has a much greater chance of becoming rapidly unreadable. Not to say FP can't be readable as you've demonstrated very well.

https://en.wikipedia.org/wiki/Tacit_programming


Point free can totally be abused. It's also, in my opinion, often very useful for clarity.

It's my experience that, especially with FP code, often the important things to write down are the stages/steps in a transformation and that the intermediate outputs do not, by themselves, hold much semantic meaning.

Pointfree lets you write just those steps. Moving to "point-full" coding forces you to give names to these semantically meaningless intermediates. This can sometimes just be noise.
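A tiny Haskell illustration of the difference:

    -- point-free: only the stages of the pipeline are written down
    countWords :: String -> Int
    countWords = length . words

    -- "point-full": the intermediate gets a name it doesn't really need
    countWords' :: String -> Int
    countWords' s = let ws = words s in length ws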

Terseness is also just shocking sometimes. It can take time to get used to it. I feel like when I read more verbose languages I skim each block of code numerous times and collect the meaning iteratively. When I read Haskell, I depend upon each name having distinct and important semantic meaning, I leverage type information heavily, and when I do read something I read each word carefully.

Totally different styles, but I've found a strong preference for terseness.


This may be more common with Haskell specifically, because Haskell makes currying so easy and has tools like http://pointfree.io/ (which I’ve definitely abused before).


Point-free style is possible in languages like Haskell and SML, where the syntax makes it practical. Haskell and SML make it easy; Scala does not. If you made an OOP language with the right syntax features, you could have point-free style in that language too. You can do point-free style in Python with just a little effort. It's orthogonal to the FP/OOP debate.


Maybe true, but writing readable code is a difficult thing in any language. No matter what language you're using, you're going to have to come up with a set of coding standards in the long run.


Honestly...how is that more readable? It's more verbose sure but the first example is much more readable and straightforward. Here, you have to remember Haskell's list comprehension syntax and you have to scan up and down a few times to keep track of the variables.


> …you have to remember Haskell's list comprehension syntax…

I get the point you're trying to make, but this feels like a huge stretch. Haskell's list comprehension syntax is straightforward and in many cases a terse and useful way of implementing complex functionality. You'll find multiple uses of it in the base libraries; notably, catMaybes is implemented in terms of list comprehension:

    catMaybes :: [Maybe a] -> [a]
    catMaybes ls = [x | Just x <- ls]
I'm all for reducing cognitive load, but Haskell has such a sparse syntax and this specific feature is so useful (and honestly, so consistent with the rest of Haskell's syntax) that arguing it's some kind of unnecessary cognitive burden feels ridiculous to me.


Yeah I mostly agree, but I mentioned it because I spent some time trying to figure out how the leftPadding variable in the let block can be referenced outside the let block. It's been a couple of years since I've used Haskell.


The main benefit of example 2 IMO is that some variables got readable names. E.g. n became maxLineLength and x became line.

But bad naming practices have nothing to do with FP per se. Although it does seem to be fashionable in part of the FP community to have everything as terse as possible including variable names.


To me, code with terse variables is much more readable, since it's much easier to keep track of the variables when scanning the code. When I read that piece of code above, I didn't try to ascertain meaning from the variable names; it's arguable whether I (or anyone) should, since coders can assign variables arbitrary names.


I honestly think a good amount of legibility is a matter of familiarity. I've used Haskell for a couple of years, and I didn't find this code particularly illegible (if I had to guess, this is probably more easily-understandable to me than the equivalent OOP formulation).

That's not to dismiss your criticism, not at all; rather, I think that the challenge would decrease with experience.


To underscore the point that legibility is as much a matter of familiarity as anything else: I think a big difference between an "OOP language" like the above poster seems to prefer and Haskell has a lot more to do with the English-language keyword operators that most contemporary OOP languages favor versus Haskell favoring more mathematical notation.

There exist "OOP Languages" like OG Smalltalk, Self, Io that might make for better comparisons to Haskell by virtue of being closer to the simpler syntax and general focus on more "mathematical" operators. Just as there exist more functional languages that use much more of an English keyword approach than Haskell.

(Some forms of Lisp/Scheme are so English keyword-forward with micro-DSLs that they may be a better example in a lot of posts like these for comparing "contemporary OOP languages" and "functional languages". Plus there's all the bits of hybridized "functional languages" embedded directly inside contemporary "OOP" languages, such as LINQ syntax in C#. It's interesting to me how many of these sorts of articles jump straight to Haskell.)


To me this looks like perl golfing, but for some reason perl golfing is bad but writing extremely terse functional programs is not.

Personally I like to combine both approaches. In swift I would write something like this:

    let l = ["abc", "ab", "abcdef", "abcdefgh"]
    let width = l.reduce(0, { max($0, $1.count) })
    let centered = l.map({
     (line: String) -> String in 
      var padding = (width - line.count) / 2
      return String(repeating:" ", count: padding) + line
    })


Obligatory C#:

    var input = new string[] { "abc", "ab", "abcdef", "abcdefgh" };
    var maxLength = input.Select(e => e.Length).Max();
    var output = input.Select(e =>
    {
        var padding = (maxLength - e.Length) / 2;
        return new String(' ', padding) + e;
    });
Most modern `oop` languages these days all support functional constructs and achieve the same thing in the same amount of code and style. Language and OOP/functional style are not mutually exclusive anymore, which this article seems to overlook by comparing both languages and styles at the same time.


This is what the Java would look like if it were written by someone who knows Java:

    List<String> alignText(List<String> texts) {
      int maxLength = texts.stream().mapToInt(String::length).max().orElse(0);
      return texts.stream().map(text -> {
        var spaceCount = (maxLength - text.length()) / 2;
        return " ".repeat(spaceCount) + text;
      }).collect(Collectors.toList());
    }


/nit Move the lambda to a private method and replace it with a method reference and it'll be just perfect


Good feedback. +1


I tend to avoid doing a Select followed straight by a Max when you can just pass the select function directly into the max. Also multiline lambda expressions feel ugly but that's probably just me! This is what I would do these days:

    // note: an extension method like this must be declared in a static class
    string pad(this string e, int padLength) => new String(' ', padLength) + e;
    var input = new string[] { "abc", "ab", "abcdef", "abcdefgh" };
    var maxLength = input.Max(e => e.Length);
    var output = input.Select(e => e.pad((maxLength - e.Length) / 2));


Sadly C# is still missing a few important ML features that make functional programming more difficult than in a true ML.


It would be interesting if you could mention which feature/s you think it most sorely lacks. Some of the recent syntax additions have made it really easy to write concise functional code. I don't even use curly braces much anymore. What about F#?


Personally, I’d replace

  width = l.reduce(0, { max($0, $1.count) })
with

  l.map { $0.count }.max()


Much better indeed!


"To me this looks like perl golfing"

I think that's preposterous and totally unfounded.

The identifiers have names like "alignCenter", "replicate", "maximum" and "length". Those are all actual full English words, nothing remotely obfuscated about them.

"map", "++" for concatenation, and "x/xs" for an arbitrary list of things are all idiomatic so can be kept short because they are used so often, like pronouns in English.

Lastly, your program looks very similar (almost isomorphic) to the Haskell version, so I think it's just dumb you use it as an argument to criticize the Haskell version as "extremely terse".


Ok.

My problem with functional programming written like this (I admit that this example is too short) is that once there are too many functions combining with other functions, it is very easy to lose the thread of what is happening. My comparison with perl is more to the point that these programs are way easier to write than to read.

Again, this example is too simplistic to illustrate my point, but what I like about combining imperative and functional programming is that it is easier to keep track of what is happening by using intermediate variables that hold the state and thus break the chain of functions.


That is why being terse, while following the rule of not going too far with nesting and point-free style, is so valuable. I don't share your difficulty in following the flow of the program, as usually the name of the function, its type, and its documentation say all you need to know about it.

That said, I find it difficult to follow what is happening when several variables are used, as that usually means something has gone wrong with the complexity of the function. A function should say much with very little, and control flow should be instantly clear. The only exception would be a procedure, which do-notation allows one to express.

Imperative programming is already available within Haskell through do-notation, and its composition is that of imperative code. Mixing functional style with something else makes it much harder to make use of combinators and other such forms, which ends in messier code (in my experience). If one could go pure functional without do-notation emulating the "C monoid", it would be rather nice.


It does look like APL or Perl without enough understanding of Haskell (which I know very little about). I am guessing you have some experience writing Haskell code, so it looks clear to you.

The example used idiomatic one or two letter variable names (x, n, xs) which I found difficult to read. The rewritten example was much clearer to me because I could see the variables.


> $0, $1


> for some reason perl golfing is bad but writing extremely terse functional programs is not

Because Haskell has types checking that you're not doing anything completely stupid.


and in Kotlin

    fun alignLines(lines: List<String>): List<String> {
        val maxLength = lines.map { it.length }.max()!!
        return lines.map { " ".repeat((maxLength - it.length) / 2) + it }
    }


This opinion is flameworthy and stereotypes heavily: what seems to happen in FP is that the overall community is substantially math-IQ smarter than the imperative-language communities. Alas, that ALSO means the overall community loses social-IQ in the process.

This leads to:

1) higher barrier to entry for the general programmers, in language semantics, documentation, examples...

2) a tendency to overabstract, underdocument, and write clever code

3) write code that is obvious to the person who wrote it (at least for a few months), but being CODE is tougher for people to unpack.

4) people don't collaborate in packages, so they tend to be single-hero projects that are abandoned, and, since FP code is a bit harder for typical programmers to parse, they don't get adopted/unorphaned. Thus the library code beyond the standard library gradually degrades.

5) I won't say that FP is the only domain of religious zealotry in language minutiae (syntax, etc), but it does seem to have a higher proportion or much louder zealots.

I've been in the industry for 30 years, and while it could just be mental ossification of age, nothing seems to have changed since the usenet days with Scheme/LISP, despite the fact the toolchains are now free and downloadable, and despite some of the smartest people in the world preferring them.

The fact that Rust is gaining some momentum with its fairly alien memory management is yet another example to me that the IQ barrier of purer FP (no infix, lots of recursion, etc) is just too high to surmount.


3) write code that is obvious to the person who wrote it (at least for a few months), but being CODE is tougher for people to unpack.

Exactly this.


> 3) write code that is obvious to the person who wrote it (at least for a few months), but being CODE is tougher for people to unpack.

One could have a decent debate about much of what you say. This is the only one where I utterly object. The Haskell code I write is far easier to come back to months later than the Python code I write.


Maybe we just need to have more math training for people who want to become Computer Programmers.


That's kind of another aspect of the social-IQ divide.

Programming is very democratic and blue collar as white collar jobs go: you can get stuff done without a lot of formal education (OR the formal certification of (ahem) REAL engineering professions).

So either you ivory-tower and sniff at the lower classes and use FP and higher tools, or you "get stuff done" actually making tools (like the dude who made the OS X package manager getting stiff-armed by Google).

The old CS vs no CS divide actually fissures quite dramatically in alignment with FP vs no FP.


I suck at more advanced math. I am simply not able to grasp it. However I am very good at logic and systems thinking, which makes me a good programmer for certain kinds of tasks, provided I can use imperative programming. Force me to use functional programming (like my current Angular/RxJS assignment) and I'm a lousy programmer.


This deserves a blog post of its own. You are on to something here that needs to be acknowledged!


Opinions only backed by ~"I know due to my experience" but nothing else are not even "flameworthy".


And this opinion is backed by your experience?


No, by basic rules of intellectual intercourse.


As a lisp programmer who never wrote in Haskell, the snippet you quoted is crystal clear to me, while with explicit loops I always have to double check, because I am pattern matching the code ("ah, a map or reduce was meant") but I still have to double check assignments and the abort condition (off by one, incrementing the wrong index, etc).


I write Lisp but I have no idea what /x means in the example above for instance.


That's a syntax knowledge problem, not a readability problem. \x -> introduces a lambda function with a parameter x, like (x) => in javascript etc.


The "\" (which is "\" rather than "/", as already pointed out) is just an ascii approximation of λ (just like "u" is used to approximate μ).


It means a 1-argument lambda, where that one argument is named x


I think this is definitely a problem with a lot of functional code styles, so when working functionally myself I try to be quite verbose and explicit within functions, spacing things out in recognizable patterns and using lets to capture intermediate computation steps.

I love side-effectless code, but I hate the assumption that side-effectless code is maximally perfect when on a single line - don't use variables for variable usages, use labeled pre-computations to clarify what you're actually doing to try and maintain readable self-documenting code.

(all that said, the example above is... sort of a weirdly terrible one like most string manipulation tasks are, string manipulation usually has a high ratio of computation to significant design decision making)


I'm finding that in Javascript, I really enjoy map() and filter(), but map() caused some friction on code reviews at one place, from someone I don't think had any FP background. They are best when they are one-liners, of the form .map((val) => someFunction(val)) or .filter((val) => val.length > 1)

reduce() on the other hand is always illegible to me. I get pretty grumpy when I am 'forced' to use a reduce because the code calls for it. It just looks like I'm trying to write obfuscated code.

Often I'll just unroll it to a forEach.

This code structure in your example reminds me of some Python code I was reading recently (Norvig's Sudoku solver). Putting the conditional at the end of the line fights the way humans process data. Quite often we are trying to filter lines of code that could be involved in a problem and saying things like "do an action (if this condition is true)" increases the amount of short term memory I have to use to process the function. That's bad ergonomics.


"but map() caused some friction on code reviews at one place"

Someone not familiar or comfortable with "map" should not be working as a professional programmer today. Most languages now support "map" or something very similar.

Try to move to a different team or different company with actual professional developers.


> Someone not familiar or comfortable with "map" should not be working as a professional programmer today.

I completely disagree - you are measuring the quality of a programmer on a single dimension.

I have worked with very weak programmers that produce gold e.g. great at pulling a team together, great at producing outcomes that clients love, great at focusing on features that sell.

I have also worked with great programmers that just stick to what they know - they don't know map() because they concentrate on being productive rather than continually chasing the next greatest language or library.

I have also worked with technically awesome programmers that produce absolute crap e.g. struggle to communicate, struggle to make good engineering compromises, struggle to understand requirements. One smart guy was so creepy no women could work with him which meant he was actually pretty useless - I saw one friend who worked as a consultant there hide under her desk to avoid him!


I almost fully agree. A professional programmer, how can such a person not know about stuff like map, filter and reduce? It can only happen when they never cared to learn programming languages, paradigms or concepts of various kinds. This would betray an attitude of not continuing to learn. Even if they have not come into contact with other programming concepts, how can any professional not have heard anything about map-reduce stuff at some point somewhere? It seems very unlikely to happen without learning-resistance or disinterest towards learning or informing oneself.

However, I would exclude junior software developers from this, as they might have just come from university, where they might not have learned about this at all, depending on the university. Still computer programming is then what they do as "profession", so we would have to count them as "professionals".


That was more of a "it's not universal" comment. Sometimes the right solution is self obvious, other times it needs PR.

If I write a piece of code so others don't have to, then it's a service I'm providing. If they don't 'get' the code then the problem is not always with them. It's important to watch for patterns in the questions or complaints you get. There are often multiple acceptable ways to solve the same problem and everything goes smoother if the one you use doesn't trip people up.

In this case I pointed him to some documentation on filter and map.


I've been noodling with Julia recently and the dot-syntax is really surprisingly ergonomic for transforming hunks of data. It has the benefit of making vectorization easier for the compiler, but I enjoy the syntax.

https://docs.julialang.org/en/v1/manual/functions/#man-vecto...


That's a cool trick but I'd like an operator with more pixels for that behavior.

When I used to track new languages, I ran into one where the . was implicit. You could change a member of an object from a single value to an array and it would iterate over all of the values. So you could do an information architecture that was 1:1 and change to 1:many later and things would still work.


If you think of 'reduce' as instead 'accumulating' a value, then it takes a function that adds a single element to the accumulation, and a base accumulation to use if the list is empty, and a list, and returns an accumulation of all of the elements of the list in a directional (in this case right-to-left) order.

(accumulate plus 0 lst) is equivalent to (sum lst)

(accumulate multiply 1 lst) is equivalent to (product lst)

(accumulate (lambda (x acc) (cons (fn x) acc)) '() lst) is equivalent to (map fn lst)

(accumulate (lambda (x acc) (if (fn x) (cons x acc) acc)) '() lst) is equivalent to (filter fn lst)

and so forth. The essential insight is that reduce/fold/accumulate reduces the problem of accumulation to a base accumulation and a function that only has to add a single item (i/o)nto the accumulated value. Using them to directionally processes a container is idiomatic in a functional style.
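The same equivalences in Haskell, where a fold needs only a base accumulation and a one-element step (a minimal sketch):

    -- map and filter recovered from foldr
    mapF :: (a -> b) -> [a] -> [b]
    mapF f = foldr (\x acc -> f x : acc) []

    filterF :: (a -> Bool) -> [a] -> [a]
    filterF p = foldr (\x acc -> if p x then x : acc else acc) []

    -- sum and product are folds with different base accumulations
    sumF, productF :: [Int] -> Int
    sumF     = foldr (+) 0
    productF = foldr (*) 1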


C++ even calls it std::accumulate. (There’s a std::reduce in C++17, but it’s just std::accumulate with the restriction that the operation is commutative.)


> reduce() on the other hand is always illegible to me.

I used it recently in ruby to write an sql query using Rails' ORM and it seemed quite nice to me. Something like:

  foos.to_enum.with_index.reduce(initial_query) do |query, (foo, i)|
    query.joins(...).where(...)
  end


I agree it can be hard to read at first, but after learning the syntax, it's not hard to read. It's also about what the language allows you to do (or prevents you from doing) that determines, to me anyway, the utility. I love Objective-C but it allows you to do a lot of things that can lead to bad code (mostly related to state and hidden side-effects). FP overall is a paradigm that helps prevent you from doing that and helps you rethink and describe problems declaratively.


Haskell is very terse, but that doesn't mean this is true for every FP language. The code below is in Gerbil Scheme and should be readable even for OOP programmers.

    (def (alignCenter strings)
      (def n (apply max (map string-length strings)))
      (def (align s)
        (let* ((count (round (/ (- n (string-length s)) 2)))
               (whitespace (make-string count #\space)))
          (string-append whitespace s)))
      (map align strings))
Here is the same code in JS, which is almost identical.

    function alignCenter (strings) {
      const n = Math.max(...strings.map(s => s.length))
      const align = s => {
        const count = (n - s.length) / 2
        const whitespace = ' '.repeat(count)
        return whitespace + s
      }
      return strings.map(align)
    }


Once you become used to reading it, it doesn't seem like a big deal. But Haskell has a steep learning curve if you have never used functional programming languages before so it may take a while to pick up a few things.


What are some of the better ones to start with? I've tried Erlang and F#, but the hardest part is figuring out whether I'm writing code that's too procedural.


If you're looking for statically typed FP, honestly, start with Elm. You can do it in a weekend it's so conceptually contained. Then F# and OCaml are highly regarded. Avoid Haskell - worst bang for your buck on seeing returns on learning. It has some interesting concepts, but is a terrible language for learning statically typed functional programming.

If you're looking for dynamically typed FP (ex. LISP), go with Clojure, or, as a lesser recommendation, possibly the Racket flavor of Scheme.


I don't think you should judge it based on the syntax. With UFCS (Uniform Function Call Syntax), or a pipeline operator, it could easily be rewritten to something like:

    xs.map(\x -> (div (n - length x) 2).replicate(' ') ++ x)

This is just pseudocode obviously, but you get the point I hope. There is nothing fundamental to FP about this particular syntax.

Also to be honest, I don't find the original unreadable at all. But that may be because I have more exposure to this style of programming? FP doesn't somehow remove the inherent complexity in tasks like string manipulation.


I don't really like 'where n' afterwards instead of say 'let n' at the beginning, but I'm not very familiar with haskell (really just enough to know the first line is a type signature) and I found this ok. Maybe objective C is better though.


You can write code without ever using `where ...` :)

It's just how mathematicians are used to expressing their version of "top down problem solving". It's also similar to what you do when you define a function/method before the smaller ones it calls and before the even smaller ones they call (using what's called "function hoisting" in Javascript, for example).

Now, if you really dislike "top down problem solving", you might have an allergic reaction to `where ...` which applies the same pattern but does it down even to the micro level of small functions of few lines...
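For what it's worth, here is a hypothetical side-by-side sketch of the two orderings, the same function written "top down" with where and "bottom up" with let:

    mean :: [Double] -> Double
    mean xs = total / count
      where total = sum xs
            count = fromIntegral (length xs)

    mean' :: [Double] -> Double
    mean' xs =
      let total = sum xs
          count = fromIntegral (length xs)
      in total / count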


My first thought, too. OTOH, with indentation like in the example I could live with it.


Indentation is always like in the example. Haskell has either semantic whitespace or braces everywhere, and nobody uses the braces.
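For instance, these two definitions are the same function; virtually all real code uses the layout version (a minimal sketch):

    -- layout (whitespace) version
    greet name = prefix ++ name
      where prefix = "hello, "

    -- explicit-brace version; legal, but nobody writes this
    greet' name = prefix ++ name where { prefix = "hello, " }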


What is illegible about that?!

It just says, in code: "replace each line of text with itself prefixed by a number of spaces equal to half of the difference between its length and the length of the longest line in the text".

It's like the most obvious way to think about this particular problem!

Now, how well this scales to different problems, and especially to problems where mutation is a natural way to think about things... that's a different problem and part of the reason I'm not that much in love with extremist functional programming. Mutation has its place and the monadic abstraction is something I dislike.

But for simple examples like this pure FP rocks and is very readable and intuitive!


As usual in FP code, the order of your words is completely different from the order things appear in the code. Also, most developers aren't used to concise function declarations (yeah, even with JS pushing them), which further adds to the problem.

This is not something people with no experience of FP can read easily. Now, calling it illegible strongly implies that people with no FP experience are the metric one should code for, which is perfectly valid if you are at a company aiming to hire cheap coders, but not something to brag about.


Order of words is not that relevant, in English I can say instead of the above:

"take each line of code [map ... xs] and replace it with [\x -> ...] a number of spaces equal to half the maximum line width [replicate (div (n - length x) 2) ' '] prepended to it [... + x]"

All languages have passive voice and other tools so you can make the order of words be whatever you'd want for customizable emphasis while keeping meaning the same.

Sure the order for arguments for `map` may not be the most intuitive, but that doesn't make things harder. See ReasonML for an example of functional programming with regular C-like syntax.

Now, if you want to see hard to read functional code, look into code abusing point free style (I'm not even sure there is a fine way to use it at all...), plus mind bending types (usually to satisfy some monadic style abstractions), plus over-currying stuff all over the place.

But the snippet above is only unreadable for people who stubbornly refuse to invest a couple hours of their time into learning a new notation for things! Heck, you can even translate it almost 1-to-1 to modern Javascript, or even add an extra function name and some more sensible variable names to make things more readable while keeping it functional:

    function alignCenter(lines) {
      let width = Math.max(...lines.map(ln => ln.length));
      let getPadding = len => ' '.repeat(Math.max(0, (width - len) / 2));
      return lines.map(ln => getPadding(ln.length) + ln);
    }
...the above is more like something I'd actually let pass code review in real life :)


But the example you posted has more to do with whether a language or author has the aesthetics of preferring succinctness and symbols (Perl-like) to words (Ruby-like).


TXR Lisp:

  (defun align-center (strings)
    (let* ((maxl (find-max [mapcar len strings])))
      (mapcar (op fmt "~^*a" maxl) strings)))


> I think this most effectively demonstrates why I like a lot of OOP: it can be verbose.

Verbosity is not inherently good. In fact, I think verbosity is inherently bad. Have you read much first-year programmer code? It's absurdly verbose at the cost of legibility.

The real issue is clarity. Your code should be sufficiently verbose that its purpose is self-evident, but it should not be overly verbose such that your screen is cluttered with meaningless junk (see: Java).

---

To me, there is a fundamental distinction between the intents of functional programming and imperative programming that a lot of these articles either gloss over or miss completely.

Imperative programming is about describing to the computer a procedure by which to accomplish a goal.

Functional programming is about manipulating the relationships between data directly such that you transform your given input into the desired output.

If you want to understand what a program does, then imperative code is going to be better to read. But if you want to understand what a program means, then (well-written) functional code is going to be better. This is also why so many functional languages have strong static type systems: to better enable the programmer to express programs' meanings outside of the implementation.

However, as others have mentioned, string manipulation is always kind of hairy anyway, so reading this function will result in understanding what it does instead of what it means (unless you just read the signature, which is actually what I do a lot of the time). My thought process of reading this particular function without prior context would be something like:

- The function is named `alignCenter` and takes a list of strings and gives back a list of strings... so the strings are being centered amongst themselves. I know they aren't being centered relative to anything else because there are no other inputs to the function — which I would not know in an imperative language, where there could be hidden state somewhere.

- It maps some function over the list of strings. I assume that function will do the centering.

- This inner function takes a string and does something to it. Let's investigate.

- We replicate a space character some number of times and prepend the resulting string to the input string. (I think the author's reliance on operator precedence is disappointing here, as it detracts from the clarity IMO. I would rather put the `replicate` call in parens before the `++`.)

- The number of spaces is determined by dividing by two some other number less the length of the string.

- That number is the length of the longest string.

So: to center a list of strings, we indent each string by half the difference between its length and the maximum length among all the strings. This seems like a reasonable way to center a group of strings. (Notice that I did not use words like "first", "then", etc. which indicate a procedure. Instead, I have described the relationships of the strings.)

(Of course writing it all out makes it seem like a longer process than it is, but in reality it probably took me 10-15 seconds to go through everything and figure out what was going on.)

---

I think this isn't a great example for the author to have chosen. My go-to example of a beautiful functional solution is generating the Fibonacci sequence.

A "good" imperative solution uses iteration and might look like:

    def fib(n: int) -> int:
      a = 0
      b = 1
      while n > 1:
        t = a
        a = b
        b = t + b
        n -= 1
      return b
If you were to just glance at this code without being told what it does (and if you'd never seen this particular implementation of the algorithm before), you would probably need to write out a couple test cases.

Now, the Haskell solution:

    fib :: Int -> Int
    fib n = fibs !! n
      where fibs = 0 : 1 : (zipWith (+) fibs (tail fibs))
We can easily see that the `fib n` function simply retrieves the nth element of some `fibs` list. And that list? Well it's `[0, 1, something]`, and that something is the result of zipping* this list with its own tail* using addition — which very literally means that each element is the result of adding the two elements before it. The Fibonacci numbers are naturally defined recursively, so it makes sense that our solution uses a recursive reference in its implementation.

*Of course, I'm assuming the reader knows what it means to "zip" two lists together (creating a list by performing some operation over the other two lists pairwise) and what a "tail" of a list is (the sub-list that simply omits the first element).
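As a tiny GHCi illustration of those two operations:

    ghci> zipWith (+) [0, 1, 1] [1, 1, 2]
    [1,2,3]
    ghci> tail [0, 1, 1, 2]
    [1,1,2]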

To me, this data-relationship thing often makes more sense than a well-implemented imperative solution. I don't care to explain to a computer how to do its job; I care to think about how to move data around, and that's what functional programming is all about (to me).

Of course, this is just my subjective opinion, but I think there's some merit to it. I'd like to hear your thoughts!


> Verbosity is not inherently good.

Agreed.

> In fact, I think verbosity is inherently bad.

Here I disagree. To take it to an absurd extreme: LZW compress your source code. Hey, it's less verbose! But that's not a net win.

Instead, I think that there is an optimal value of terseness. More verbose than that, and you waste time finding the point. More terse than that, and you waste time decoding what's going on.

Now, what is "optimal" is going to depend on the reader, both on their experience and their preference. With experience, certain idioms are clear, and don't require thought. The same is true of syntax. (Both Haskell and C become more readable with experience.) But some people are still going to prefer (and do better reading) a more terse style, and others are going to prefer a more verbose style.


I think we have different interpretations of what it means to be "verbose", which is why I instead directed my previous comment towards "clarity".

Wiktionary gives the following definition of "verbose" [0]:

> Abounding in words, containing more words than necessary; long-winded.

My point is that adding words for the sake of adding words is bad, always. It's one thing to say "My code tends to be on the more verbose side of things" and another to say "I prefer writing very verbose code." You should always be seeking to make your code as concise as possible while maintaining clarity.

It's that "while maintaining clarity" bit that's the tricky bit, really. On this, I think we agree. I always try to make my code as short and direct as it can be, but never at the cost of clarity. For example, I don't use cute inlined tricks unless they're idiomatic (or used everywhere in the code and explained in at least a couple places). I try to strive for clarity instead of verbosity.

[0] https://en.wiktionary.org/wiki/verbose


> You should always be seeking to make your code as concise as possible while maintaining clarity.

Absolutely.

> It's that "while maintaining clarity" bit that's the tricky bit, really. On this, I think we agree.

Yes.


You chose a very bad example for "imperative code". It seems artificially complicated.

    def fib(n: int) -> int:
        prev, cur = 0, 1
        for _ in range(n-1):
            prev, cur = cur, prev + cur
        return cur
I think most humans will find this variant more readable and understandable than the functional one.


Ah yeah, I think you're absolutely right: this is much better. It's been a while since I've implemented Fibonacci iteratively haha. Thank you for pointing this out. :)

Personally, I still find the functional version a bit more direct. The iterative code still relies on me understanding what the program is "doing"; it requires me to hold state in my head and think through "okay, now `prev` has this value and `cur` has this other value" to reason about what's going on.

I would be interested to find some people with minimal programming experience and show them imperative vs functional implementations of simple functions or algorithms and see what they prefer. Maybe I'll try to do a small study on that or something.


Fibonacci is usually introduced as "the next term is the sum of the two previous terms" (sometimes with the story about rabbits or whatever). There's two obvious ways to implement this:

  1. Direct recursion
  2. Keep track of the two previous terms
If you start with (1), you run it and it takes exponential time, and so you should instead remember the two previous terms, leading you to (2). When you go with (2), you either use 2 mutable variables and a loop in the imperative version, or you have 2 accumulators and tail recursion in the functional version... which is the same thing, since a tail recursive function is just a named loop.
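For comparison, a sketch of (2) as a tail-recursive Haskell function, where the two accumulators play the role of the two mutable variables:

    fib :: Int -> Int
    fib n = go n 0 1
      where
        go 0 a _ = a                     -- done: return the current term
        go k a b = go (k - 1) b (a + b)  -- shift the window forward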

There's nothing about laziness or self-referencing an incomplete structure here, which is just a Haskell thing. Taking the tail of an incomplete structure, in particular, is indirect and hard to understand.

If you want to demonstrate laziness, you can still do it directly by doing something like:

  fibs a b = a : fibs b (a + b)
  fib n = fibs 0 1 !! n


A list comprehension in python would be more comprehensible than either AND less verbose.


Arguably, list comprehension in Python is already functional programming :).


Once you start nesting them, they become a nightmare to read. So you might as well mix in some imperative-style intermediate variables.


As someone strongly favoring a functional or even purely functional approach, I strongly dislike the flood of pro-functional-programming blog posts that attack the straw man of "imperative programming wrapped in classes". I don't want to believe that is what experts of the paradigm consider object-oriented programming. I would love to read an honest comparison of both, judging benefits and cost.


Well said. I think a lot of people are drawn to Haskell because the abstractions are so well integrated. I remember reading the Prelude and muttering "wow so true" all the way. But then I got stuck when designing a first program.

OOP to me is not imperative control flow but about managing responsibilities. What does this object know, what can I ask it to do, what does it need for the job? If I look at my program at the level of for-loops I'm lost in the brush. And granted, sometimes I do miss Haskell's expressiveness in the brush.


> I don't want to believe that is what experts of the paradigm consider object-oriented programming.

Once you get to the scientific literature on the subject (where actual experts reside), it tends to be much more common to see conclusions like "there isn't any formal difference between OOP and FP languages", or that Haskell in particular "implements a strict superset of OOP".

The problem is that FP and OOP are ill-defined concepts.


Formal differences don't capture how awkward something can be to use. For applied programming this matters. Maybe I can do functional programming in Java, but if it's much more verbose than Haskell and the type checker occasionally falls over then it's not a good choice.


In what sense is Haskell a "strict superset" of OOP?


In the sense that you can recreate all the usual OOP syntax and behavior in Haskell. Which people in practice only do to a very limited extent, because it's not very useful to import those concepts into a language aimed at dealing with pure functions.


I don't see why "Object Orientated Programming" requires for loops, over map / streams (which Java has).

Is there a definition of "OOP" which requires using for/while? This example doesn't really show any OOP at all.

A better example would be to show a case where OOP would be useful, say having a base class and deriving it several times (iostreams, for example). Show me how FP does in an area where (traditionally) OOP is considered strong.


Agreed. Ruby is a very fundamentally OO language, but its Enumerable interface is more comprehensive than many FP languages' equivalents (in addition to the standard map/reduce/select, it has methods for iterating over permutations of elements, n-sized slices, etc, all of which can be made lazy), and because it's an interface you can use it with custom data types easily. But FP enthusiasts often speak as if the ML and Lisp language families have a duopoly on higher-order functions. It's odd.


Having for/while loops definitely isn't fundamental to OOP at all. Languages like Erlang and Racket capture the essence of OOP fairly well, yet you don't need to use loops to write programs in them.


It depends what you mean by "OOP". Generally most mainstreams programming languages are mixed: that is they support more than one paradigm (e.g. procedural, class based OOP, functional, etc).

Though I've noticed that programmers these days often equate OOP with classes. Therefore anything with classes is OOP whereas anything without isn't. This comes up a lot with languages, such as Rust, that are OOP but lack the class keyword.


> I don't see why "Object Orientated Programming" requires for loops

Yeah, I think he (and most people who talk about FP vs. OOP) actually use OOP as a short-hand for "not FP": Scala is actually very object-oriented and very functional at the same time; they're not mutually exclusive.


Functional Programming is not just for Haskell; modern Java has lots of pretty decent options so I feel like their Java version of the code is a bit of a straw man. Here is how a more modern code style in Java 8 (which is years old now) might look. Notice that the logic for this problem is 7 lines of code, with one line being the closing brace.

https://gist.github.com/haroldl/aee6a407a01131345fc4ecb1b9c9...


You can write mutation-style Java tersely too...

    void alignCenter(String[] text) {
         int maxLength = 0;
         for (String line : text) maxLength = Math.max(maxLength, line.length());
         for (int i = 0; i < text.length; ++i) {
             text[i] = " ".repeat((maxLength - text[i].length()) / 2) + text[i];
         }
    }

No lambdas to be found.


Or just use Scala


Scala is a pretty immutability-functional-happy language. My point was that the mutation/readonly functional/oop comparison was unfair.


I prefer functional programming because referentially transparent functions are easier to reason about and test. Brevity of control flow hasn't brought me much benefit.


Yeah, in his Turing Award lecture introducing FP, Backus made a big deal of the ability to reason about programs mathematically. I've been playing with a functional language, Joy, that combines (IMO) the best features of Forth and Lisp, and one of the neat things about it is that you can derive functions from simple (almost geometric) syntactic manipulations, as the whole language is just referentially transparent functions. ( http://joypy.osdn.io/notebooks/index.html )


Unfortunately nowadays the majority of my functions are not referentially transparent.


Couldn't agree more! ... I just had to laugh to myself at how this, THIS is why functional programming is so... rarely adopted. Like, "referentially transparent functions"?! What happened to `website.run.now!`?!


Referential transparency is a handy thing to think about in OOP as well as functional programming. If you provide the same parameters to a function, will you always get the same result? If yes, and if the function is 'pure', e.g. does not result in some side effect like state manipulation, you could replace the function itself with the output of that function. If so, and the function is computationally expensive, you can memoize the function, which is a specific form of caching where you auto cache the results by the parameters passed to the function.

This is a handy chain of reasoning in any language, and in a more dedicated functional language might be supported as a concept in the language itself.

Like in OOP, you don't have to have that term memorized to actually use a functional language, just remember the pattern.
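For instance, here is a standard Haskell sketch of that memoization idea, caching the results of a pure function in a lazily built list:

    -- each result is computed at most once, then read back from the list
    memoFib :: Int -> Integer
    memoFib = (map fib [0 ..] !!)
      where
        fib 0 = 0
        fib 1 = 1
        fib n = memoFib (n - 1) + memoFib (n - 2)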


website.run.now is referentially transparent in Haskell

(or rather, to mention the one I use, Happstack.Lite.serve: https://hackage.haskell.org/package/happstack-lite-7.3.6/doc...)


I like functional programming too, but I'm not sure it is for the same reasons. Anyway, I think what the author describes as object oriented programming is actually imperative programming. In purely object oriented languages, such as Smalltalk, control flow can be encoded in objects too (for example, Boolean can be an abstract class with two methods, ifTrue and ifFalse, that take a callable object, and True and False can be subclasses that implement these methods by appropriately calling, or not calling, their arguments).
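As a minimal sketch of that Smalltalk idea transplanted into Haskell: booleans as plain functions that themselves select a branch (a Church encoding), with no built-in if required.

    -- true and false each carry their own "ifTrue/ifFalse" behavior
    true, false :: a -> a -> a
    true  t _ = t   -- the "ifTrue" branch
    false _ f = f   -- the "ifFalse" branch

    -- usage: true "yes" "no" == "yes", false "yes" "no" == "no"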


Dear functional programmer, please do not slam on languages you do not even know how to program in. To be honest, I have no idea what this code is even supposed to do and which idea of "centering" it satisfies. Whether this mess of Java or the mess of Haskell the blog writer created is more readable is up to you. C# has been functional forever and Java caught up a lot.

  int maxLength = Stream.of(text).map(String::length).max(Integer::compareTo).get();
  IntStream.range(0, text.length).forEach(i -> text[i] = " ".repeat((maxLength - text[i].length()) / 2) + text[i]);
(This becomes significantly more readable if you change the contract from passing an array of Strings to passing a list and returning a list. Since functional programmers are so in love with immutability: Java now provides immutable lists in the standard lib.)


Functional programming is all nice until you need to debug someone else's code. (Maybe it's just because I have done a lot more of it in the imperative style, but I am pretty sure that's the case for the majority of programmers.)


> Obviously, composition over inheritance strives a bit against one of the original key concepts of OOP - which is inheritance.

I don't understand why people keep pushing this idea of OOP as requiring classes and inheritance, when it very much does not.

Also, it's really easy to say that FP is better than OOP, or the other way around, when people keep comparing their favourite with a straw-manned version of the other.


I assume you're referring to the definition of OOP coming from the Alan Kay quote: “OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things."

This definition includes nearly every language with modules and polymorphism, and depending on what counts as message passing might exclude statically typed languages, languages without a defined model of concurrency that disallows shared mutable state, or languages without message-passing library support.

If that's what canonically counts as OOP, there's no straw-manning going on here, the discussion is just a comparison against a pretty well-defined category of programming languages with different features that are often described as and understood to be object-oriented.

Whenever people bring this up I often hear that Erlang is supposed to be the canonical object-oriented language, and if that's the case, I'm not sure how we could call this discussion straw-manning, because it's just talking about totally different languages.


What I dislike about functional programming is that when objects are not supported or they are avoided, I have seen poorly documented and complex hashes/collections data structures that end up reinventing the oop wheel (and the problem the wheel was solving in the first place).

It feels like a false choice and you can have rich FP capabilities in a language that supports objects and relationships between objects.


> complex hashes/collections data structures that end up reinventing the oop wheel (and the problem the wheel was solving in the first place)

I don't think I know what you mean here...Typically wheel-reinventing in FP languages happens because the stuff in the OOP/Imperative languages require a lot of mutation, which doesn't really jive in FP land for a whole plethora of reasons.

> It feels like a false choice and you can have rich FP capabilities in a language that supports objects and relationships between objects.

Sure, I'm partial to F# myself, but at some level I get frustrated when people try and mix paradigms too much. C++ has this problem of trying to be everything, and as a result it is very hard to read large C++ codebases. Scala isn't quite as bad, but it has similar problems.


Then you should look into Scala


Scala is fine, but do you not feel that it tries to do too much sometimes?

Like, I like Scala if I'm the only person writing it; I basically write Haskell while still having access to all of Java's libraries, but I absolutely dread having to collaborate with people using Scala.

A coworker of mine (whom I respect very much) said it pretty well once: "I write Scala...but in a Java accent"...he doesn't use the functional features of the language, and basically just writes Java without semicolons. When we try and collaborate on stuff, it becomes difficult because of the sometimes-conflicting way we write code.

Personally, for functional-on-the-JVM, I prefer Clojure. You still have access to all the Java libraries, but it's a bit more decisive about what it wants to be: a functional, dynamic Lisp on the JVM. While you can write OOP-style code in Clojure using records and protocols, it's often frowned upon and not always 100% intuitive... virtually every Clojure tutorial you'll find is written in a pure-ish functional style.


It was in response to the parent's point about the lack of OOP in FP.

I think this is a problem with any language that supports multiple paradigms and multiple ways to skin a cat.

Enforcing guidelines is a way to combat that. Of course, you might also just opt for a more specific language.


The problem with coding guidelines is that they actually are really difficult to enforce, especially during crunch-times. To quote Carmack, if the compiler allows something, it will end up in the codebase at some point.

Of course, this is also true of Clojure; if you follow this logic, people will make use of bad JVM libraries simply because they're available. F# can do unsafe mutation, and even a purer language like Haskell allows binding into C.

The difference is in how idiomatic and easy it is to do the bad thing. Clojure has the "right" versions of the main data structures as first-class syntax, making it unlikely that you'd call into a Java library until needed, in which case hopefully you're using the correct one; while calling into Java isn't difficult, it is made explicit, and it does feel a bit unnatural as a result. Since Scala attaches no syntactic stigma to the "bad" Java conventions, it's common for people to do things in a bad way (e.g. using `var` everywhere instead of `val`).


You made that comment twice in this thread, but you haven’t said why people should look into Scala.


It's multi-paradigm. Want FP? Do it. Want OOP? Do it. Want both? Do it.


I like FP, but I'm cautious, as there aren't many big applications (open source or private) written that way. Am I wrong?


You don’t need to go full FP to experience the benefits. You can just apply FP principles on a small scale to sections of your code, where appropriate.

As for “many big applications”, it depends on what your threshold for “many” is. There are a number of firms that use e.g. Haskell, F#, or ML privately, e.g. Jane Street, Galois. For whatever reason, functional programming seems to be more popular in finance.


Two years ago, CircleCI posted about how they deploy their 100,000 lines of Clojure backend code and 30,000 lines of ClojureScript code to Google App Engine.

Also, Walmart has been using Clojure "at scale" since before 2015. Based on a related tweet, the project at that time had 66 modules, 3000+ files, and 560,000+ lines of Clojure. At last measure, Walmart was the largest US retailer at $374 billion in annual revenue. I'd say their dependence on Clojure is a pretty good validation of the language's capability.


I may just not be familiar enough with them, but I think FP tends to be a bit weaker when it comes to supporting code contracts, which are a godsend when working on large projects. That said, most FP languages also have very strong type checking, which can be used to the same end.


FP is pretty big in finance (not in trading platforms where speed reigns, though)


Is it just me, or is iterative, non-functional code usually easier to debug? Or are the tools just a lot more mature?


I started programming (like a lot of people) after I bought one of those "Learn C++ FAST!" books (I don't remember the actual title).

A large chunk of this book was about the object-oriented part of C++, comparing it to the equivalent C, and argued that since the only two paradigms in existence are OOP and imperative, you should always use OOP. (NOTE: I'm paraphrasing; that was the tl;dr as I remember it from 18 years ago.)

Later, when I was 19 (around 2010), I was on an IRC board and someone was talking about how cool Haskell was, and when I asked him to explain why it was better than C++, he went into elaborate detail about how "C++ is total bullshit because you have to mix your types with your functions". He then went into further elaborate detail (which went completely over my head at the time) about how Haskell's typechecker made C++'s look like "dogshit".

Maybe I am too suggestible, but at that moment I agreed with him, and have kept that mentality ever since. I fundamentally do not like attaching methods to my structs/records/whatever, since I think that forces strong and unnecessary coupling. I do really hate the type systems of Java and C++ because I think they're restrictive and don't actually help.

Obviously opinions vary; a lot of very smart people really like Java and C++, but I seriously cannot personally understand why. I feel like I get more done quicker with Clojure than I ever could with Java, and I have trouble comprehending anyone who says otherwise.

I'm not trying to start any kind of flame war; if you like OOP we can still be friends, I'm not judging you as a person, I just disagree with your language choice :)


> I do really hate the type systems of Java and C++ because I think they're restrictive

Can you give an example of how the C++ type system restricts you? Is it just lack of some inbuilt mechanisms/syntactic sugars or is there something that you can't actually model with it?

> I think that forces strong and unnecessary coupling

How do you model/maintain invariants of a set of data in FP? That's the part that I don't understand yet.


I haven't touched C++ for several years at this point, so bear with me a bit, but in the category of "just really annoying, not really restrictive", there is the fact that you have to constantly re-type the types for everything, like `MyType x = new MyType()`, but my understanding is that has been addressed by the `auto` keyword.

Is there some equivalent to a monad in C++? As in, something that will "pollute" the function so that, if it's calling something with side effects, that's reflected in the type? I have no doubt that you could stitch something together with templates, but in Haskell, out of the box, your functions don't have side effects if they don't return `IO`.
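To illustrate the shape of what I mean, a toy Java sketch (entirely hypothetical, and unlike Haskell nothing here is compiler-enforced, so it's a convention rather than a guarantee):

  import java.util.function.Function;
  import java.util.function.Supplier;

  // An IO<A> is a description of an effect that yields an A. A method
  // returning IO<A> advertises "this may do I/O" in its type; methods
  // that don't return IO stay pure by convention.
  final class IO<A> {
      private final Supplier<A> effect;
      private IO(Supplier<A> effect) { this.effect = effect; }

      static <A> IO<A> of(Supplier<A> effect) { return new IO<>(effect); }

      // Sequencing: anything composed with an effect is itself an effect,
      // so the IO type "pollutes" everything downstream of it.
      <B> IO<B> flatMap(Function<A, IO<B>> f) {
          return new IO<>(() -> f.apply(effect.get()).effect.get());
      }

      A unsafeRun() { return effect.get(); }  // effects only fire here
  }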

Of course, this wouldn't be a sign of it being "restrictive", just a feature that's not in there. Maybe I'm being a bit unfair to C++ by tying its type system to Java's, which is terrible.

One thing that I really dislike is the fact that, in order to make class X part of interface Y, you need to have access to X's source, or extend it. I find this incredibly annoying, and with Haskell, you can attach any type to a typeclass by providing its implementation. I know C# has this with its extension methods, but AFAIK C++ doesn't have any equivalent.
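The closest plain-Java equivalent I know of is passing the "instance" around by hand as a separate object, roughly like this (all names here are mine):

  // Typeclass-style retrofitting: java.net.URL gets Pretty behaviour
  // without being modified or extended, but the instance has to be
  // passed explicitly instead of resolved by the compiler.
  interface Pretty<T> {
      String pretty(T value);
  }

  class UrlPretty implements Pretty<java.net.URL> {
      public String pretty(java.net.URL u) { return "<" + u + ">"; }
  }

  static <T> void printPretty(Pretty<T> instance, T value) {
      System.out.println(instance.pretty(value));
  }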

> How do you model/maintain invariants of a set of data in FP? That's the part that I don't understand yet.

I don't really know what you mean by that; you can restrict the allowed input types with typeclasses or existential quantifiers, but I'm not sure that I'm answering your question.


You almost never use `new` in modern C++, and you would never use it as you suggest because it wouldn't compile, you would simply say:

     MyType x;
C++ is nothing at all like Java.



Sorry, you'd need `MyType* x` to use the `new` keyword; it's been a while. It doesn't really detract from my point though: in order to heap-allocate something you have to use a pointer, and you end up doing something not that dissimilar from Java.

> C++ is nothing at all like Java.

Is that supposed to be a joke? Java was marketed specifically towards C++ engineers...


Javascript was marketed towards Java programmers, and those languages are pretty different from each other too.


I would say that C++ is closer to Java than JavaScript is; it's fairly easy to copy-paste C++ code into a Java file, changing a small amount of syntax and getting rid of the manual memory management.

Try copy-pasting JS into a Java file, and there are semantics there that are completely foreign; you can't simply translate it.


I agree that JS and Java are very different; that was my point.

I disagree about Java and C++. If I have a C++ file that uses RAII or references, for example, it won’t translate to Java easily. And those aren’t unusual, they’re both common techniques.


> in order to heap-allocate something you have to use a pointer

Usually the heap allocation is hidden behind a handle.


Not in the libraries I usually have to touch.

I'd already be happy if I were allowed to rewrite some of them in C++11.


> How do you model/maintain invariants of a set of data in FP? That's the part that I don't understand yet.

If you tell me how you do it in not-FP then I'll tell you how to do it in FP!


FYI, methods being attached to structs/whatever is a property of single-dispatch object systems, not multiple-dispatch object systems. The coupling/attachment relationship derives from single-dispatch systems privileging the first argument of the method.
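A quick Java illustration of that privileging (hypothetical Shape/Circle names):

  // Java is single dispatch: the method body is chosen dynamically from
  // the receiver's runtime type, but the overload is chosen statically
  // from the argument's declared type.
  class Shape {
      String collide(Shape other)  { return "shape vs shape"; }
      String collide(Circle other) { return "shape vs circle"; }
  }

  class Circle extends Shape {
      @Override String collide(Shape other)  { return "circle vs shape"; }
      @Override String collide(Circle other) { return "circle vs circle"; }
  }

  // With Shape a = new Circle(), b = new Circle():
  // a.collide(b) returns "circle vs shape" -- dynamic on a (the privileged
  // first argument), static on b, so the Shape overload is picked.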


> I fundamentally do not like attaching methods to my structs/records/whatever, since I think that forces strong and unnecessary coupling.

I see it almost exactly backwards from that. I like being able to make it so that nobody other than a select set of functions can modify my structs/records/whatever. To me that prevents unnecessary coupling.
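Concretely, something like this (a toy example, names mine):

  // The invariant (balance never negative) holds because only these
  // methods can touch the field; nothing else can couple to it.
  final class Account {
      private long balance;  // cents; invariant: balance >= 0

      void withdraw(long amount) {
          if (amount < 0 || amount > balance)
              throw new IllegalArgumentException("bad withdrawal");
          balance -= amount;
      }

      long balance() { return balance; }
  }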


I still don't get the point of immutability. Sure, state change is a problem. How about a log-like data structure that stores all modifications of the data?

I'm sticking to procedural coding for the next 10 years, and I will try my best to unwash young coders from POOP.


That seems like an implementation detail. If you have an append-only log, you can always rely on each entry to never change. That's a kind of immutability. If you hold on to a reference to the log at a particular time, it will never change out from under you.
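A minimal sketch of that kind of structure (a persistent linked log; all names mine):

  // "Adding" returns a new head that shares the old entries, so any
  // reference you already hold never changes out from under you.
  final class Log<T> {
      private final T entry;     // null only in the empty sentinel
      private final Log<T> rest;
      private Log(T entry, Log<T> rest) { this.entry = entry; this.rest = rest; }

      static <T> Log<T> empty() { return new Log<>(null, null); }

      Log<T> add(T entry) { return new Log<>(entry, this); }  // old log untouched

      T latest() { return entry; }
  }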


There should be an alternative to both extremes - complete mutability and complete immutability.


> I still don't get the point of immutability

If you're the sole developer, it's probably not a big deal. If you're working with a group of people, immutability is the only way to maintain your sanity.


As compared to POOP, sure. How about as compared to Go? Go is a procedural language.



