Functional Programming For The Rest of Us (defmacro.org)
340 points by netvarun on March 20, 2012 | 67 comments



Kudos to the piece for being well written, readable, and clear. But I'm worried about a "for the rest of us" that starts off with first Plato and then lambda calculus.

The problem with "the rest of us" is that these concepts aren't inherently compelling to that audience. Disclaimer: I'm not one of the "rest of us". I'm learning Haskell just because I enjoy having to think in a new way.

A lot of FP explanations start off with immutability. After Plato and lambda calculus, that's where this one goes, too. I think that's because it's a relatively simple concept, and it's kind of mind-blowing that you could do something useful without incrementing i in a for loop. And then the follow-up example is normally how you can replicate a for loop in functional programming. Which is, again, neat.

But if you aren't into neat things for their own sake, you've got to ask: why? Why replicate a for loop when you could just use one? (Answer: you don't really replicate for loops in FP.) They aren't asking why in the sense that they're curious. They're asking why in the sense of "Why would anyone do that? That's stupid."

If you aren't into having your mind blown for no apparent reason (some aren't), the only really compelling thing about the first several pages of the piece is the "Benefits of FP" section. That's the motivation to learn these crazy new concepts. It should really come first. I think it should come first in any discussion of FP for people who aren't familiar with it.

Because most people need a motivation to learn. "Your programs will be faster because concurrency becomes much easier" is pretty good motivation. Sadly, "because it's neat" generally is not.


This article is another one of those that confuse functional programming with pure programming (which is, I believe, by definition also functional).

You can do a lot of functional programming in a language where values are mutable. I use it all the time in Python: the map and filter functions, list comprehensions, zip, ... For me, functional programming means having first-class functions that can be passed to/from other functions. A standard library of common functional combinators helps, but you can always write your own.
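Rolling your own combinators really is just a few lines in any language with first-class functions. A minimal sketch (in Haskell, since that's where most of this thread ends up; nothing beyond the Prelude):

    -- hand-rolled versions of two standard combinators
    myMap :: (a -> b) -> [a] -> [b]
    myMap _ []     = []
    myMap f (x:xs) = f x : myMap f xs

    -- filter, written in terms of another combinator (foldr)
    myFilter :: (a -> Bool) -> [a] -> [a]
    myFilter p = foldr (\x acc -> if p x then x : acc else acc) []

    main :: IO ()
    main = print (myMap (* 2) (myFilter even [1 .. 10 :: Int]))  -- [4,8,12,16,20]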

Funny, but some of the most often mentioned functional languages, such as OCaml and Scala, allow liberal use of mutable variables. They just make it a bit harder/more explicit, so the programmer has to give it more consideration.


You must have read a different article than the one I read.

"Functional programming is a practical implementation of Alonzo Church's ideas. Not all lambda calculus ideas transform to practice because lambda calculus was not designed to work under physical limitations. Therefore, like object oriented programming, functional programming is a set of ideas, not a set of strict guidelines. There are many functional programming languages, and most of them do many things very differently. In this article I will explain the most widely used ideas from functional languages"

Nowhere does he say that functional programming is pure programming.


> You can do a lot of functional programming in a language where values are mutable.

Sure, you can do whatever you want, but if the language does not support things like tail call optimization and non-imperative function definition, writing code in a functional style turns out to be a lot slower (due to non-tail-optimized function call overhead) and less safe (due to potential side effects in function definitions) than writing it in a functional language with explicit support for these facilities.
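To make the TCO point concrete, here is a minimal Haskell sketch of the tail-recursive "loop" in question:

    -- a tail-recursive "for loop": the recursive call is the last thing done,
    -- so a TCO-capable compiler turns it into an actual loop with no stack growth
    sumTo :: Int -> Int
    sumTo n = go 0 1
      where
        go acc i
          | i > n     = acc
          | otherwise = go (acc + i) (i + 1)  -- tail call

    main :: IO ()
    main = print (sumTo 1000000)  -- with -O, GHC also keeps the accumulator strict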


That's true, but this functionality is perfectly compatible with mutability.

Various flavours of Lisp have been at the forefront (CMA: among 'popular' languages) in making this functionality available, but I don't know of any (CMA: 'popular') pure flavour of Lisp. (Maybe Clojure -- but its preference for 'immutable by default' is quite different from not having mutable values at all!)


OK - I checked Wikipedia:

In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data.

Sounds like exactly what I was thinking - map is not a mathematical function, because it can interact with global state (you can apply map twice to the same data and get different results - for example, if the function you are mapping uses 'time') - and thus using it is not 'functional programming'. If your first-class functions were mathematical functions, then we could agree - but map is not one.


"map is not mathematical function - because it can interact with the global state (you can apply map twice to the same data and get different results"

I don't think I quite understand you. I think when you say "mathematical function" you mean "pure function". And when you say "global state" I assume you mean "impure operations".

But (from my understanding) map is purely functional and doesn't do anything impure like IO. It's purely a mathematical mapping from a collection of input values to a collection of output values, based on applying a function to each input value in turn. You seem to be suggesting that "map f x" evaluated twice over the same data can give different values, and that map is therefore not "mathematical".


OK - I can agree with the interpretation that the 'impurity' is not inside map, but rather in the argument it takes. You can reconcile this with the mathematical definition of a function by saying that some 'programming functions' take an implicit 'state' argument, and that map is not one of them.
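That reconciliation can be made literal - a minimal Haskell sketch, with the 'state' argument passed explicitly:

    -- an impure-looking counter, rewritten as a pure function from the current
    -- state to a (result, new state) pair
    type Counter = Int

    tick :: Counter -> (Int, Counter)
    tick n = (n, n + 1)

    main :: IO ()
    main = do
      let (a, s1) = tick 0
          (b, s2) = tick s1
      print (a, b, s2)  -- always (0,1,2): same inputs, same outputs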

It still does not resolve the definitional problem: is functional programming programming with pure mathematical functions, or programming with first-class functions (not in the mathematical sense)? Which definition is more useful? Which is more widely used?


I'm reading this differently than you are:

> functional programming is a programming paradigm

NOT a programming language. What I think this means is that your usage of map is such that the computation is what matters, not mutating state. I can do that just as well in Python, JavaScript, OCaml, Scala, Clojure, and Haskell. In other languages too, yes, but not nearly as easily (due to lack of closures/complicated function types/no first-class functions/...).

If you were to insist that functional programming is just math (no mutable state), then even Haskell and Mathematica are far from functional.


I insist that what matters is how much purity you have, because that is the more widely used definition (as supported by my Wikipedia quote) and also what gives the code its nice computational properties; what does not matter is whether you use first-class functions (not in the mathematical sense).


This is a single introductory sentence. You shouldn't read that much into it.

With that reasoning you could claim addition is impure, because time() + 1 changes over time. map is definitely a pure function. If you apply it to impure arguments, of course the result will be impure. The point of FP is to avoid using 'time', not 'map'.
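A minimal Haskell sketch of exactly this point (the getCurrentTime import is from the standard time package):

    import Data.Time.Clock (getCurrentTime)

    main :: IO ()
    main = do
      -- map with a pure function: evaluating it twice always gives the same value
      print (map (+ 1) [1, 2, 3] == map (+ 1) [1, 2, 3])  -- True
      -- the impurity lives in the clock, and the types push it into IO
      ts1 <- mapM (const getCurrentTime) [(), (), ()]
      ts2 <- mapM (const getCurrentTime) [(), (), ()]
      print (ts1 == ts2)  -- almost certainly False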


That's a good argument - but it does not really resolve the definitional problem. Some people think 'functional programming' means 'programming with mathematical functions', while others think it means 'programming with first-class functions'. I could take either definition - but if Wikipedia uses one of them, that's a good argument that more people use the 'mathematical functions' definition. I also think 'mathematical functions' is the more useful definition, because it implies referential transparency and probably other interesting properties that are usually cited as advantages of functional programming.


> Wikipedia uses one definition

Let me repeat: you used an introductory sentence as a "definition" of FP and are overinterpreting it. If you read further you will see:

> Functional programming languages, especially purely functional ones...

So Wikipedia distinguishes functional programming from purely functional programming. In fact there is a separate article on purely functional programming, which says that destructive modifications are "excluded", not just "avoided".

Functional programming does not have clear boundaries. You can get the advantages of functional programming even if you introduce _a bit_ of FP style into a very imperative program. Conversely, you can write in a very imperative style in Haskell.


My point is that the definition of functional programming is the purity you mention, not the use of first-class functions. Pure means that something has more of the thing you want, and impure means it has less - so pure functional programming has more 'functional programming' in it than standard functional programming does. Sure, completely pure functional programming is impractical - the languages we use in practice are functional to varying degrees - but that is exactly the point: practical languages are more or less functional, and how functional they are depends on how much they rely on state manipulation, not on whether they use higher-order functions or have first-class functions.

Defining functional programming as programming with first-class functions is a consistent definition, and it could be used were it not for the fact that more people use the other definition (at least, that is what the Wikipedia definition and other online material lead me to believe).


Is Prolog a functional language? It avoids state manipulation but has no first-class functions, and I think not. There are several characteristics of functional languages - including first-class and higher-order functions, reliance on immutability, declarativity, etc. Choosing just one of them to measure how "functional" a language is would be a mistake, IMO.


I think by "rest of us" he meant nerdy, curious programmers, not academic PhD mathematicians.

I don't know your background, maybe you are deep in the academic world, but if not, it's likely you are the "rest of us" he is addressing.


With that in mind, I recently wrote a series of posts about integrating functional programming concepts into everyday Javascript programming (and AS3, which is nearly identical). The last two, in particular, might be interesting even for programmers who already understand the basics.

http://www.alanmacdougall.com/functional-series/

I'd really like to see these techniques emerge into the spotlight, instead of just being voodoo magic hidden deep in the guts of frameworks.


Very interesting posts in there, from an interesting point of view.


I agree. It’s fun for someone like me who likes learning new stuff for its own sake, but I think the “Rest of Us” might feel a bit left out without more practical examples. But hey, that’s the same mistake I made when I last wrote about concatenative programming. Hindsight’s 20/20.


Aside: I am maybe 50-60 hours into my first Haskell project, using Yesod. I have read about half of "Real World Haskell" and about half of "Learn You a Haskell".

Other than this, all of my experience is in imperative languages (C++, Python, PHP).

One thing that comes up is "if your program compiles, it's almost certain to be correct", and it has been true again and again for me. It's an odd, eerie feeling, and honestly I haven't quite grokked how it happens.

I don't completely understand what I am doing - a combination of not thinking like an FPer, not knowing the idioms of Yesod, and not being entirely familiar with Haskell's syntax - so I am sometimes stuck on how to get the type that I need.

So I will sometimes try a couple of things almost randomly; then it compiles, and it basically always does what I want it to do.


I think the "if it compiles, it works" effect has to do with Haskell's type system being much more expressive than that of most static languages, and with most of your code being declarative and pure.

Purity helps because your program will necessarily be composed of small, self-contained modules (usually functions) that have no implicit dependencies.

The type system helps because you can encode a lot of the code's intent in type signatures, which prevents code that does not match that intent from compiling.

Monads are a good example. Whenever operations have some special execution requirement (e.g. ordering, for IO), you may be able to express the requirement as a monad, and then any operation outside that monad can't accidentally mix with it.

In addition, because monads allow you to enforce how operations are composed, there is no way for the programmer to forget to adapt the output of operation 1 so that it works as the input of operation 2. That's abstracted by "bind".
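A minimal sketch of that last point, using nothing beyond the Prelude:

    -- (>>=) ("bind") threads the result of one operation into the next,
    -- so the glue between operation 1 and operation 2 can't be forgotten
    main :: IO ()
    main =
      getLine >>= \name ->            -- operation 1 produces a String...
        putStrLn ("Hello, " ++ name)  -- ...which operation 2 consumes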

I'm a Haskell newbie myself, but there are a lot of mind-bending tidbits around the Internet on how to encode intent in various ways using the type system. It's not the most powerful system imaginable for that purpose, but as far as practical languages go, it's probably at the top.


Agreed; after getting used to working with Haskell's type system, I've found it kind of painful to use languages with weaker, less expressive type systems (which, unfortunately, is basically every practical language other than Haskell). The ability to turn all manner of bugs into compile-time errors via the typechecker is indispensable. Once you start digging into GHC extensions like GADTs and type families, it gets even better. I never feel nearly as confident about the correctness of code I write in, say, Python, even with a full test suite.


When I know what I'm programming, I agree, but I tend to find strong static type systems get in my way for exploratory programming. If I'm trying out some music-synthesis idea, for example, the #1 thing I want is to hear some sound ASAP, even if the code only runs for certain choices of parameters and crashes after 30 seconds. Then, I'll fix it later if it sounds promising.

I find that in Lisp, for example, I'm able to do partial, tentative refactorings, where I sort-of change something as an experiment, just enough that it works in one example case, but don't completely refactor everything to be consistent with the change yet, because I'm not sure if I really want the change or not. That kind of thing is really hard to push through in languages that care more about static types, and I often find myself thinking something along the lines of: yes I know that function there isn't updated with the new type signature, but I'm only planning to try this out initially with one choice of parameters, and I know that in that case the function isn't even called, so please just run the damn program!


When this happens to me (using Haskell), I just quickly do:

   otherfunc = undefined   -- stub: typechecks as any type; crashes only if called
   otherfuncReal :: ...
   otherfuncReal = ...
The best exploratory programming I've seen in Haskell also does this for types, to some extent; you know you're writing music, so you start with something like:

   data Music -- stubbed in data type with no constructor
Then maybe you decide it involves a series of notes with duration, so you change it:

   data Music = Music [(Note, Duration)]
   data Note -- stub
   type Duration = Int


> which, unfortunately, is basically every practical language other than Haskell

Is this also true of OCaml? I have experience with neither, but my understanding from reading up on this is that OCaml and Haskell are roughly equivalent - and, if anything, OCaml is even more expressive.

Anyone with real world experience who can comment on this?


It has been proven that you can express certain things with ML-style functors that are difficult or impossible to express with standard Haskell-style type classes. But almost everybody uses GHC and turns on lots of type system extensions in their projects, which is going to make the issue a lot murkier, and on top of that ML-style functors require a bit more finger work and explicitness. Ultimately, it's going to come down to preference.

They're roughly equivalent the way Python and Ruby are roughly equivalent, or Common Lisp and Scheme. From any other linguistic paradigm, yeah, they're extremely close. But from one to the other, it feels like there is a tremendous gulf.

A lot of the benefits tmhedberg is talking about come from I/O being a monad in Haskell and the lack of "assignables," which constrains your ability to mix side-effects and state changes with pure code. Of course, you can defeat it if you really try (MVars and unsafePerformIO) but it's not the kind of thing you would do accidentally. And you could achieve a lot of the same benefits in OCaml by writing monadic standard libraries and never using refs.

So, if by "expressive" you mean the type system allows you to express complex relationships, Haskell is more expressive. But if by "expressive" you mean the language allows you to express calculations in more ways, OCaml is more expressive. In my experience, "expressiveness" is usually a foil for some other attribute and doesn't usually illuminate the debate much.

In the end, they're just tools, albeit with overlapping purposes. OCaml is easier to learn and compiles faster, Haskell raises your confidence in the code and has cooler tricks. Both are fascinating and stunningly better than lots of other options. If I had to choose which one to learn today, I'd look at some example code in both and ask myself which one I want to be staring at for the next six months.


I can't answer you with any real authority, since I haven't used OCaml. But from what I understand, it lacks type classes, which are absolutely central to the richness of Haskell's type system. Whether or not it makes up for this in other ways, I can't say.
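For readers who haven't met them, a minimal sketch of what a type class looks like (plain Haskell, no extensions):

    -- overloading resolved by the compiler: the instance is picked by type
    class Pretty a where
      pretty :: a -> String

    instance Pretty Int where
      pretty n = "the number " ++ show n

    instance Pretty Bool where
      pretty b = if b then "yes" else "no"

    main :: IO ()
    main = do
      putStrLn (pretty (42 :: Int))
      putStrLn (pretty True)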

OCaml is also strict by default, with optional laziness, whereas Haskell is the opposite. This could be seen as either an advantage or a disadvantage, but either way, it is a very fundamental difference.

If you are considering learning one or the other, I'd advocate for Haskell purely on the basis of it being the closest to mainstream of all the static functional languages. There is a huge variety of libraries available on Hackage, and the community is very active, welcoming, and friendly. Haskell (GHC in particular) is also the laboratory in which the majority of interesting FP research is currently being done, if you care about that at all.

Or just learn both languages, so you can compare the benefits and disadvantages for yourself. :)


OCaml has a powerful module system which can be used in a way quite similar to typeclasses (https://ocaml.janestreet.com/?q=node/37).

The main difference is the lack of purity, which means that the side effects of a function aren't part of a function's type. It also doesn't have the wide variety of extensions which you can use to do really crazy stuff (like verifiably correct red-black trees, https://github.com/yairchu/red-black-tree/blob/master/RedBla...).



On the other hand, mixing multiple monads is sheer hell, and lazy evaluation makes speed and memory usage difficult to predict.
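For the unfamiliar, "mixing monads" usually means monad transformer stacks; a minimal sketch of the shape of the thing, assuming the mtl package:

    import Control.Monad.IO.Class (liftIO)
    import Control.Monad.State (StateT, evalStateT, get, put)

    -- State mixed with IO: every IO action has to be lifted into the stack,
    -- which is a taste of the plumbing being complained about
    counter :: StateT Int IO ()
    counter = do
      n <- get
      liftIO (putStrLn ("count: " ++ show n))
      put (n + 1)

    main :: IO ()
    main = evalStateT (counter >> counter) 0  -- prints "count: 0" then "count: 1"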

Try Disciple sometime, or Scala!


> not the most powerful system imaginable for that purpose

Had to ask: what is more powerful - Agda, Coq, something else?


A full dependent type system is nominally “the most powerful system imaginable”, but it’s also hopelessly problematic. You can’t be sure that type inference or type checking will halt in the general case, although you can infer some value-dependent types (see “Dependent Type Inference with Interpolants” by Unno & Kobayashi).

In practice this means you need manifest type signatures, otherwise the type of a function would be “lol iunno ¯\(°_o)/¯”. Then again, Haskell requires signatures in some cases, such as for Rank-N types, or when you run up against the Dread Monomorphism Restriction. So yeah, Agda and Coq are awesome, but most useful as what they’re designed to be—proof assistants—rather than general-purpose languages.

tl;dr: A constrained system such as Haskell’s, with extensions for aspects of dependent typing, is more practical than actual dependent typing.
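A small taste of those "aspects of dependent typing" in GHC (this particular encoding is my sketch, using standard extensions):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- a list whose length is tracked in its type
    data Nat = Z | S Nat

    data Vec (n :: Nat) a where
      Nil  :: Vec 'Z a
      Cons :: a -> Vec n a -> Vec ('S n) a

    -- total by construction: calling vhead on an empty Vec is a type error
    vhead :: Vec ('S n) a -> a
    vhead (Cons x _) = x

    main :: IO ()
    main = print (vhead (Cons (1 :: Int) Nil))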



If you add some QuickCheck specifications, programming by Brownian motion ("try a couple things almost randomly") will work even better.
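A minimal sketch of such a specification, assuming the QuickCheck library:

    import Test.QuickCheck (quickCheck)

    -- a property pinning down intent: reversing a list twice is the identity
    prop_revRev :: [Int] -> Bool
    prop_revRev xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_revRev  -- checks 100 random lists by default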


I'm in pretty much the same boat. I feel kinda dirty playing with the type signatures to see what compiles, but if it compiles in Haskell it probably works.


I was disappointed because I was hoping it was a new piece -- in general, I have found coffeemug's writing well worth my time.


Functional programming for the rest of us: please structure your code, to the extent that it is possible and practical, around functions whose return values depend only on their arguments.


I found this article very useful, and helpful in de-mystifying the world of functional programming. Are there other articles that cover functional programming theory in plain words like this?


Glad you asked! I did this in about 7 minutes, but I could spend a week talking about how the Lisp, ML, Clojure, Scala, and Haskell crowds approach this. http://www.reddit.com/r/haskell/comments/qxmh3/what_are_some...

http://stackoverflow.com/questions/tagged/functional-program...

especially this one (Harrop, Ramsey, and Brian are FP heavyweights):

http://stackoverflow.com/questions/485418/distinctive-traits...


I'll add an additional question to this one: are there any good sources of small, "real world" app examples available? I realize I could look at, say, the HN codebase, but I'm curious if there are smaller examples available, too.


Is it me, or does "A Walk in the Park" sound like Sheldon Cooper from The Big Bang Theory?

"Fire up the time machine. Our walk in the park took place more than two thousand years ago, on a beautiful sunny day of a long forgotten spring in 380 B.C. Outside the city walls of Athens, under the pleasant shade of olive trees Plato was walking towards the Academy with a beautiful slave boy. The weather was lovely, the dinner was filling, and the conversation turned to philosophy."

Big Bang: http://www.youtube.com/watch?v=lDxv9hSKdY4


Other way around, I believe. The date on this article is 2006, BBT started late 2007.


Nice article, but it's a bit irksome that the MessageHandler example drops the conditional and doesn't replace it - the code at the end of that section isn't equivalent to what he started with.


This is a much clearer explanation of the benefits than most of the other attempts I have seen.


A great read. Just a couple of things that jumped out at me:

“In functional languages automatic analysis of functions and finding good candidates for concurrent execution is as trivial as automatic inlining!”

It isn’t exactly trivial. If you can evaluate anything concurrently, then you still have to figure out what’s worth the overhead of spinning up a thread to run concurrently. And many operations are inherently sequential even if they aren’t imperative or side-effectful—parsing, for example. Still, the gist is that concurrency is easiest in a pure language and a functional style, and I have no bones to pick with that.

“…infinite data structures, something that's much more complicated in a strict language.”

Infinite structures are still pretty easy to construct in strict languages. For example, you can use ranges of the sort in D’s std.range library, as described in Alexandrescu’s “Iterators Must Go!” talk.


Of course benchmarking is essential, but the Control.Parallel library decouples expressions to run in parallel (called sparks) from threads, and manages the whole shebang for you. You more-or-less say x `par` y and the runtime will try to evaluate x on another core if one is free. The RTS statistics will tell you how many sparks got "converted" to run on another CPU and whatnot, so you can compare and see whether you actually got an improvement, but it is a bit easier than spinning up dedicated threads, assuming you're parallelizing pure computations. For true concurrency, you still need to deal with threads.
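A minimal sketch of a spark, assuming the parallel package (Control.Parallel):

    import Control.Parallel (par, pseq)

    -- spark `a` for parallel evaluation, force `b` here, then combine
    parSum :: [Int] -> [Int] -> Int
    parSum xs ys = a `par` (b `pseq` (a + b))
      where
        a = sum xs
        b = sum ys

    main :: IO ()
    main = print (parSum [1 .. 1000000] [1 .. 1000000])
    -- build with ghc -threaded -rtsopts; run with +RTS -N -s to see spark counts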


Interesting, well-written article.

However, I was surprised to learn of the existence of a logician named Haskell Curry. The name struck me as the punch line to a joke only told and appreciated by the uber-nerds in the halls of elite university mathematics departments.


Good article; cleared a few things up for me.

One thing I am still a bit hazy on though. He mentions that continuations could be used in the context of a web app to maintain state between HTTP requests.

I'm not quite clear how this would actually work. Let's say we have a counter we want to increment on each request, and something like this (in pseudocode):

    doHTTPRequest(requestVars) {
        renderPage(requestVars)
        pause()
        return requestVars + 1
    }

    doHTTPRequest(doHTTPRequest)

Is that approximately how this works?

In which case, how would it work for doing more than 2 requests in a row?


Node.js is built around a lot of continuations, if I recall correctly. It's very common in event-driven computing.

I believe the basic way to do it is that the function that processes the original request returns both the output and a continuation; then, the next time the user connects, the server uses the continuation instead of the original function.

    function doHTTPRequest(requestVars, renderPage)
    {
        // renderPage returns a pair: [page content, continuation for the next page]
        var pair = renderPage(requestVars);
        var content = pair[0];
        var renderNextPage = pair[1];

        return [content, function (newRequestVars) {
            return renderNextPage(newRequestVars);
        }];
    }
It's a little awkward, and maybe not the best example of CPS. But basically what happens is: the request gets handled and returns the output plus something to do next. Then you return your output along with some additional computation that will get executed later - in this case, a function that renders a different page. That computation hasn't been evaluated yet, but it can be passed around and evaluated later. Note that this whole function could be replaced by renderPage, but I wanted to make things more explicit. A good language would let you write

    doHTTPRequest = renderPage;
Also, your config would probably specify that it should call renderPage, so that function would get passed around or held by your controller/dispatcher until the actual doHTTPRequest is called.


What you're describing is CPS, but that's not how continuations would typically be used in a webapp - CPS is really a compiler technique. (I've always thought it was crazy that node.js forces you to do by hand something that should be done by the compiler.) A language that supports full first-class continuations, for example some Scheme implementations, allows you to suspend the current execution (including the entire call stack), store it somewhere, and start executing it again at a later point in time. So your web container can start running a thread of execution for a particular request, suspend it after it writes its response, then resume it when it receives the next request in that web session. It allows you to write code like:

    function handleSession() {
      renderFormPage()   // writes response with HTML for form
      suspend()          // wait for next request in the session
      processForm()      // process the form data - notice this is the same function
    }
It's a very powerful technique and can make web programming very intuitive since you can have a single linear flow for a complex web interaction.

Check out Chris Double's tutorial on this:

http://double.co.nz/scheme/modal-web-server.html

Or SISC-Web for a simple but complete implementation:

http://siscweb.sourceforge.net/overview.html


Thank you, that has made it at least slightly clearer to me, I think. :)

So what you basically need in the core of your server is some data structure (possibly keyed by the client's IP address and TCP port number?) that contains a bunch of functions with prepopulated arguments (curried functions?).

The prepopulated arguments are basically the state set by the previous request. When a new request comes in you somehow match it up with your data structure and execute?

I guess the continuation part is to force the I/O to happen at the right point, i.e make the next request depend on the execution of the first?

Of course Javascript is not really functional so that needn't be explicit.



Anyone who liked this read might also like "Gödel, Escher, Bach: An Eternal Golden Braid". It's such an interesting book, and it teaches through very simple means.


> Programmers are procrastinators. Get in, get some coffee, check the mailbox, read the RSS feeds, read the news, check out latest articles on techie websites, browse through political discussions on the designated sections of the programming forums. Rinse and repeat to make sure nothing is missed. Go to lunch. Come back, stare at the IDE for a few minutes. Check the mailbox. Get some coffee. Before you know it, the day is over.

You're fired.


So you'd rather have some dinosaur sitting at a computer? Someone who doesn't care about programming, doesn't care about the latest or newest ways to do things? Doesn't care about improving themselves?

I must admit, I'm like this _sometimes_. But that's only because I'm using PHP :) If you have a problem with your developers doing this, you're doing something wrong.


Just a nitpick: Erlang is not an academic language; it was designed in industry, for industrial use.


After racking my brain for years trying to figure out why shitty "design patterns" have been winning while functional programming (despite its superiority to typical object-obfuscated architecture) remains on the back-burner, I've come to a few conclusions.

1. The name "functional programming" doesn't sell us; it just sounds impractical. What we do isn't really "functional programming" in a purist sense. We need side effects all the time. We just have an understanding of the need to manage state properly (not eliminate it). We need a better term for the hybrid functional-preferred/imperative-when-appropriate/very-occasionally-OO style that good engineers use, but all I can come up with (for how we "functional" programmers do things, as opposed to the enterprise hoi polloi) is "non-shitty programming", and that sounds biased.

2. Usually, when we showcase FP, it looks like we're hawking complexity. We get into tail recursion and abstract data types right away. In fact, proper use of FP reduces complexity a great deal. For example, Scala is actually less complex than Java by far. On the other hand, it's hard to present Scala to a Java programmer without it appearing that one is trying to shove more complexity into his workflow, because it's only after a few months of learning how FP works (and that you don't need classes to program, you just need functions, because that's what algorithms actually are) that people realize how much simpler code becomes when done properly, and that functional programming (unlike <managerially-empowered flavor-of-the-month X> in Java) does provide life-long, substantial boosts in productivity and product quality. Until people get to that point, they'll feel like we're just shoving a tough learning curve on them.


> Scala is actually less complex than Java by far.

I'm sorry, but I can't let this one go by. I can accept the argument that FP may be less complex once you get used to it (not sure I agree, but let's go with it), and immutability by default is great, but just because Scala and/or its surrounding community allow or encourage this style doesn't mean that Scala-the-language is less complex than Java-the-language. That's a statement bordering on the absurd - I would say that Scala is demonstrably, massively more complex than Java. Many of the things that make Scala great are extremely complicated, both theoretically and practically. That doesn't stop it being great, but it does make it complex.


If I recall correctly, the Scala spec is about one-third the size of the Java spec.


If that's true, then Scala was simply specified less precisely than Java. As lemming said, Scala is a massive superset of Java's functionality.

Off the top of my head, I can rattle off a slew of functionality in Scala that's not in Java:

* Regexp literals
* Partial/curried functions
* For comprehensions
* Argument defaults
* Case classes (and switches thereon)
* Tuples
* Combinators
* XML interpolation
* Mixins
* Fine-grained visibility

Caveat: I'm no Scala expert. Some of this may be in the libraries and not the language proper.

Now... what Java functionality can you name that's not in Scala?


Not that this diminishes your point, but many of these features (e.g. case classes and pattern matching, for comprehensions) actually make real-world code less complex.

You're right that every bit of complexity Java has also trickles into Scala. For example, Java's broken generics certainly add undesired complexity (type erasure, wonky types) to Scala. Still, I think that idiomatic Scala generally is less complex per unit functionality delivered. It's true that "full Scala" must be more complex than Java because it contains Java as a subset, but I think that more can be accomplished in Scala without making code complex or verbose to the point that it becomes a problem.

In other words, to do comparable things requires far less complexity in Scala than in Java, because Scala delivers the best bang-for-buck by picking the most efficient abstractions. For example, Java concurrency requires a JVM expert to get it right (did you know that doubles and longs aren't thread-safe?) whereas Scala concurrency just requires someone who has a basic understanding of the Actor model.


Such a wonderful article, sullied by Java-based examples.


I wrote this article off at about the point where it makes incorrect comparisons between immutable values and final variables. For someone who has read it: is the rest of the article worth reading, or even accurate?

There's certainly a need for articles like this, targeting an audience of Java devs, if only they were correct. I haven't found many.


Yes, it's worth reading and it's basically accurate, although I'm not sure the accuracy is really as important as the general big picture, since there aren't that many examples.

Is there something specific that you can point out where the article is actually incorrect? There are only a few paragraphs in the beginning that mention final variables (and those only use Java primitives). I agree that there's a distinction to be made but I don't know how useful that would be in this kind of article.

If you're going to be splitting hairs over immutable values and final variables, you're probably not the target audience. On the other hand, Java devs who can't be bothered to finish a 20-page article probably aren't either.

FWIW, I think that Rich Hickey's Clojure videos and some of his more general talks on state, time and identity are probably some of the best "intros to FP for Java devs" out there.


To be fair to the grandparent, the difference between immutable data structures and final variables in imperative languages is large and fundamental. If the author makes this analogy (and I don't know, because I haven't read the article recently), then it is going to be extremely confusing for some people.


The article doesn't make this analogy, that was my point. It doesn't really delve into immutable data structures at all, and just discusses FP in broad strokes. This isn't really a bad thing, since I don't think it was meant to be a deeply technical article.


/me adds this article to my pile of articles to read



