What's the benefit of learning a PURE functional programming language, as opposed to just using a language which has adapted the best bits and pieces from the functional programming paradigm?

Given that you want to write code that sees "real world" use, and is used to handle data and events from the real world. To me, the line between optimized code and intellectual curiosity sometimes blurs.




> What's the benefit of learning a PURE functional programming language

1. It makes it easy to learn how to structure a program in a pure way, which is hard to do in languages that offer you an easy way out.

2. Since "everything" is pure, writing tests is easier.

3. You know for certain that if you discard the result of a function call, the side-effects it would normally trigger are skipped as well (see the sketch after this list).

4. A program where all side-effects are guaranteed to be pushed to the boundaries is a program that's easy to reason about.
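
For point 3, here's a minimal Haskell sketch (hypothetical names): an effect is just a value until it is sequenced in a do block, so a result you discard can't drag hidden side-effects along with it.

    logLine :: String -> IO ()
    logLine = putStrLn

    main :: IO ()
    main = do
      let _skipped = logLine "never printed"  -- bound but never sequenced: the effect does not run
      logLine "printed"                       -- only sequenced actions actually happen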

> a language which has adapted the best bits and pieces [...]

Languages that have adapted the best bits and pieces from X, Y and Z tend to be worse than a language designed specifically for X, Y and Z.

For instance, Java supports functional programming but functional programming languages are much better at it because they were designed for that specific paradigm. In the same vein, sure you can write pure programs in F#, but not as easily as in Haskell, which was designed for doing just that.

> and is used to handle data and events from the real world

Pure code really only means (in practice) that side-effects are controlled, which is generally very helpful. It forces you to structure programs in a way which makes it easy to pinpoint where data is coming in, and where data is going out. It also makes for easier testing.

Being able to know, definitively, the answer to "will calling foo perform a network request" without having to read the source for foo is quite nice, especially when dealing with third-party code.

All this said, I probably wouldn't begin with Haskell. A language like Elm is much better suited for learning to write pure programs.


Agree with all the reasons, but number 1 is really the most important:

> 1. It makes it easy to learn how to structure a program in a pure way, which is hard to do in languages that offer you an easy way out.

When there's an escape hatch, you will reach for it at some point. It helps with getting things done, but you never end up really confronting the hard things when you have that, and the hard things are an important part of the learning/benefit.


The problem with Haskell is that it's slow and memory-heavy (and OCaml is the same, but worse). F# and Scala (and Clojure?) are pretty much the only reasonably usable FP languages.


Where are you getting your info from?

Typical OCaml programs, when compared to similar C++ programs, would be slower but use less memory.

F# and Scala are both OCaml in disguise. I don't know what you mean by "reasonable"... but, if the idea is "easy to reason about", then these two don't particularly stand out much.

Languages that are easy to reason about would be generally in the category where you need to do fewer translations before you get to the way the program is executed (i.e. bytecode adds an extra step, thus making a language harder to reason about). Also, languages with fewer primitives are easier to reason about, because the program text becomes more predictable.

In general, "functional" languages are harder to reason about when compared to imperative, because computers inherently don't work in the way the programs are modeled in "functional" languages, so there will be some necessary translation layer that transforms an FP program into a real computer program. There are people who believe that FP programs are easier to reason about due to the lack of side effects. In my experience, the lack of side effects doesn't come close to compensating the advantages of being able to map the program to what computer actually does.

All kinds of behind-the-scenes mechanisms in the language, such as the garbage collector, make the reasoning harder too, in a sense. We pretend that GC makes reasoning easier by taking a mental shortcut: we pretend that it doesn't matter when memory is freed. But, if you really want a full picture, GC adds a whole new layer of complexity when it comes to understanding a program.

Yet another aspect of reasoning is the ability of the reasoner to act on their reasoning. I.e. the reasoning might be imperfect, but still allow one to act (which is kind of the human condition, the way we are prepared to deal with the world). So, often, while imperative programs cannot be formally reasoned about easily, it's easy to informally reason about them well enough to act on that reasoning. "Functional" programs are usually the reverse: they are easier to reason about formally, but they are very unnatural to the way humans reason about everyday stuff, so acting on them is harder for humans.

"Functional" languages tend to be more in the bytecode + GC + multiple translations camp. And, if forced to choose with these constrains, I'd say Erlang would be the easiest and the best designed language of all the "popular" ones. SML would be my pick if you need to get into the world of Haskell, but deciphering Haskell syntax brings you to the boil.


> Languages that are easy to reason about would be generally in the category where you need to do fewer translations before you get to the way the program is executed

This is a very interesting definition of "easy to reason about".

To me, "easy to reason about" means that it's easy for me to figure out what the intent of the code is, and how likely it is that the code does what it was intended to do.

How it translates to the machine is irrelevant.

Now, if you work in an environment where getting the most out of the machine is crucial, then I understand. In my domain, though, dealing with things like allocating and freeing memory makes it harder to see what the code is supposed to do. As a human, I don't think about which memory to store where and when that memory should be forgotten, I just act on memories.

Functional languages, then, tend to be high level enough to not expose you to the workings of the machine, which lets me focus on what I actually want to do.


Heh, no.

You are suggesting to replace FP languages with powerful type systems that perform marginally slower than C# and Java (and can access their ecosystems) with a language that is dynamically typed and performs, in most situations, marginally slower than PHP and marginally faster than Ruby.


Every language is both statically and dynamically typed. But the more correct way of saying this is "dynamically or statically checked". Types don't appear or disappear when a program runs. The difference is in what can be known about types and at what stage.

What programmers actually care about is this:

How can we check more and sooner in a way that requires less mental energy on the side of the programmer to write?

In other words, we have three variables we want to optimize for: how much is checked, how much is checked before execution, and how much effort it takes to write the check. When people argue for "statically or dynamically typed languages", they generally don't understand what they argue for (or against), as they don't have this kind of mental model in mind (they just learned the terms without a clear understanding of what they mean).

And neither do you.

So, I don't really know what you mean when you say "dynamically typed". Which language is that? Are you talking about Erlang? SML? What aspect of the language are you trying to describe?

NB. I don't think either C# or Java have good type systems. My particular problem with these is subtyping, which is also a problem in OCaml and derivatives such as Scala or F#. It's not a solution anyone wanted, it's a kludge that was added into these systems to deal with classes and objects. So, if we are going after good type systems... well, it wouldn't be in any language with objects, that's for sure.

NB2. Unix Shell has a great type system. Everything is a string. It's a pleasure to work with, and you don't even need a type checker! For its domain, it seems like a perfect compromise between the three optimization objectives.


What's the benefit?

You start to see functions as self-contained things, as lego blocks. All the logic of the function is there in the function. It only works on values it receives as inputs (it can't read global variables). It only outputs its results (it doesn't assign them to some other global variable that you have to track down).

This makes your code modular. You can add a function in a chain of functions, if you want to perform an extra transformation. Or, you can replace a function with a different one, if you want to change something about the logic.
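
A minimal sketch of what I mean (hypothetical functions): each step only reads its input and returns its output, so you can insert, remove or swap a stage without touching the others.

    import Data.Char (toUpper)

    cleanup :: String -> String
    cleanup = unwords . words        -- normalise whitespace

    shout :: String -> String
    shout = map toUpper

    addBang :: String -> String
    addBang s = s ++ "!"

    process :: String -> String
    process = addBang . shout . cleanup   -- the whole pipeline is just composition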


Is there a benefit if you're already familiar with writing functions like that? Is it wrong for me to expect that most programmers are already familiar with functions that only use their inputs, but treat that style as significantly more optional?

I wrote "pure functions" for a minute there, but that's not the same thing: a function that only uses its inputs can modify an object, while a pure function would have to return a new object. But, similarly, I bet that a lot more people know about pure functions than have any working knowledge of Haskell.


It seems you only focused on one of the conditions I mentioned.

You have to follow both rules: the one about inputs and the one about outputs.

This is like a contract. If you enforce it throughout your program, you gain some guarantees about your program as a whole.


I was looking at both rules, and specifically I was using the long version where you said "it doesn't assign them to some other global variable that you have to track down". If you pass in a mutable object then that's not "some other global variable".

If I interpret "It only outputs its results" in a very strict way, that still allows having out parameters and in/out parameters, the latter of which can break purity.

Though you can break purity with just inputs:

  define f(o): return o.x
  let a = {x=1}
  f(a)
  a.x = 2
  f(a)
If you meant to describe pure functions then that's fine; that's why I addressed pure functions too. But I don't think your original description was a description of pure functions.


So, another definition of a pure function is that, for a particular input it will always return the same output.

Your example respects the rule:

    f({x=1}) == 1
    f({x=2}) == 2
But it's true that the two rules I gave are not enough to make a function pure. Because I didn't say anything about I/O. So, a function that follows the rules about inputs and outputs, could still do I/O and change its outputs based on that.

Starting from the question that gave birth to this whole thread: "What's the benefit of learning a PURE functional programming language..."

The other benefit is that such a language forces you to be explicit about I/O. It does it in such a way that even functions that do I/O are pure. The good part is that, if you use it long enough, it can teach you the discipline to be explicit about I/O and you can use this discipline in other languages.

For example, this is how I see these principles being used in Python:

https://elbear.com/functional-programming-principles-you-can...


> Your example respects the rule:

Every definition of purity I can find that talks about objects/references says that if you pass in the same object/reference with different contents then that's not pure.

Your version differs from mine on that aspect. It passes two unrelated objects.

> Starting from the question that gave birth to this whole thread: "What's the benefit of learning a PURE functional programming language..."

I interpret saying a language is "purely functional" as being more about whether you're allowed to write anything that isn't functional. I can talk about BASIC being a "purely iterative" language or about "pure assembly" programs, without any implication of chunks of code being pure.


I gave it some more thought.

I now believe that learning a language like Haskell (or Elm or PureScript) forces you to see your program as pipes that you fuse together.

It's not just functions. Haskell has only expressions and declarations. That means, for example, that you are forced to provide an `else` when you use `if`. The idea is that you have to keep the data flowing. If a function doesn't provide a meaningful value (so it returns nil, None), you have to handle that explicitly.
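
A minimal sketch of what that looks like (hypothetical functions): `if` is an expression so the `else` is mandatory, and a possibly-missing value is a Maybe whose Nothing case you have to spell out.

    describe :: Int -> String
    describe n = if n >= 0 then "non-negative" else "negative"

    firstEven :: [Int] -> Maybe Int
    firstEven xs = case filter even xs of
      (x:_) -> Just x
      []    -> Nothing

    report :: [Int] -> String
    report xs = case firstEven xs of
      Just x  -> "found " ++ show x
      Nothing -> "no even numbers"   -- the missing-value case is handled explicitly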

And, btw

> Your version differs from mine on that aspect. It passes two unrelated objects.

Those two objects are not unrelated. They have the exact same structure (an attribute named "x"). So they could be considered two values of the same type.


I mean that the identity is unrelated. Yes, you can say they're the same type. But I'm actually passing the same object in. If f evaluated lazily, it could return 2 from both calls. Something like:

  define f(o): return o.x
  let a = {x=1}
  n = f(a) // n is not evaluated yet
  a.x = 2
  m = f(a)
  return n + m // returns 4


Ok, you're probably proving the point that purity also requires immutability. I'm not sure, as I haven't considered all the implications of Haskell's design.

My two rules about inputs and outputs are more like heuristics. They can improve code organisation and probably also decrease the likelihood of some errors, but they don't guarantee correctness, as you're pointing out. They're shortcuts, so they're not perfect.

Edit: If I remember right, it's laziness that requires immutability. I think I read something about this in the Haskell subreddit as an explanation for Haskell's design.


Even without laziness, you can get similar problems if f creates a closure or returns something that includes the parameter object.


Ok, the Wikipedia definition of a pure function is stricter than what I was saying, and I think it covers the issues you mentioned:

https://en.wikipedia.org/wiki/Pure_function


Yeah, this is a common source of confusion with closures in Python. Example on Stack Overflow:

https://stackoverflow.com/questions/233673/how-do-lexical-cl...


Doesn't that example also show a kind of laziness?

I say this because the second answer to that question offers the solution of using `i` as a default argument when defining the function. That forces its evaluation and fixes the problem.


It's just name shadowing.

Copying the code they wrote:

  for i in xrange(3):
      def func(x, i=i): # the *value* of i is copied in func() environment
          return x * i
      flist.append(func)
That could also be written "def func(x, foo=i): return x * foo". It's just copying i's value to another variable: each time the def statement runs inside the loop, the current value of i (0, 1, or 2) is evaluated and bound as foo's default.

It's not evaluating a thunk representing i, which is how lazy variables are evaluated in Haskell.


Ok, I had a better look at the code and I realised that it doesn't follow the rule I was talking about, namely having the function only work on values it receives as inputs.

I think that's why I don't use closures, because they read values from the environment. Their only use case (that comes to mind) can be solved with partial application, which is safer.
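
For what it's worth, a minimal Haskell sketch of that alternative (hypothetical names): each function in the list gets its factor through partial application instead of closing over a shared loop variable.

    scale :: Int -> Int -> Int
    scale factor x = x * factor

    funcs :: [Int -> Int]
    funcs = map scale [0, 1, 2]   -- each element is `scale i` with its own captured factor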

Oh, and I wasn't using laziness in the Haskell sense, but more in the general sense of deferring evaluation.


A purely functional language is pretty universally taken to be one where the language enforces function purity for all functions [perhaps with some minor escape hatches like Haskell's unsafePerformIO].


> Is it wrong for me to expect that most programmers are already familiar with functions that only use their inputs

They'll experience no friction when using Haskell then. Haskell only refuses to compile when you declare "Oh yeah I know functions from other languages this is easy" but then do some mutation in your implementation.


> They'll experience no friction when using Haskell then.

The question was what benefit you'd get from learning a functional language, though. Existing knowledge making it easier to switch to a functional language is the inverse of that.

And there's no assumption they'll actually be making things in Haskell, so easy switching isn't by itself a benefit.


Yeah I can't really follow these threads.

I saw:

> What's the benefit of learning a PURE functional programming language, as opposed to just using a language which has adapted the best bits and pieces from the functional programming paradigm?

I also saw:

  Though you can break purity with just inputs:

  define f(o): return o.x
  let a = {x=1}
  f(a)
  a.x = 2
  f(a)
I don't know if that's the tail-end of a reductio ad absurdum which is trying to demonstrate the opposite of what it stated. Either way, to be clear, the above would be rejected by Haskell (if declared as a pure function).

I guess if you learn a functional language "which has adapted the best bits and pieces from the functional programming paradigm" then you might think that the above is broken purity, but if you learn a "PURE functional programming language" then you wouldn't.


The topic is what you would learn from a pure functional language.

A) You can learn and enforce full purity in other languages. B) You could also learn and adapt just the idea of clean inputs and outputs to those other languages.

Both of those are valid answers! It's very hard to be completely pure if you're not currently using Haskell.

The way they worded things, I wasn't sure which one they meant. They were describing option B, but I didn't know if that was on purpose or not.

So I responded talking about both. Complete purity and just the idea of clean inputs and outputs.

That code snippet is not some kind of absurd argument or strawman, it's there to demonstrate how the description they gave was not a description of purity. It's not aimed at the original question.


This is all myth. People don't write Haskell because they read why other non-Haskellers don't write Haskell, based on what other non-Haskellers wrote.

> a language which has adapted the best bits and pieces from the functional programming paradigm?

Why write in a statically-typed language when dynamically-typed languages have adapted the best bits and pieces from statically-typed languages?


> Why write in a statically-typed language when dynamically-typed languages have adapted the best bits and pieces from statically-typed languages?

Unfortunately, dynamically-typed languages haven't adapted the best bit from statically-typed languages: that all types are enforced at compile-time!


Yep, that's the parallel I was going for.

Functional languages give you the same output for the same input, and almost-functional languages ... probably give you the same output for the same input?


Don't think of it as being all pure code; think of it as tracking in the type system which parts of your code may launch the missiles and which parts can't. Given the following program,

    main :: IO ()
    main = do
      coordinates <- getCoords
      let trajectory = calcTrajectory coordinates
      launch trajectory

    getCoords :: IO Coordinates
    getCoords = -- TODO

    launch :: Trajectory -> IO ()
    launch = -- TODO

    calcTrajectory :: Coordinates -> Trajectory
    calcTrajectory = -- TODO

I can look at the types and be reasonably certain that calcTrajectory does no reads/writes to disk or the network or anything of that sort (the part after the last arrow isn't `IO something`); the only side effect is perhaps to heat up the CPU a bit.

This also nudges you in the direction of a Functional Core, Imperative Shell architecture https://www.destroyallsoftware.com/screencasts/catalog/funct...


>as tracking in the type system which parts of your code may launch the missiles

Given that Haskell is lazy by default, there are a million ways to shoot yourself in the foot through memory leaks and performance issues (which is not unlike the problems the IO type attempts to make explicit in that domain), so I never really understand this kind of thing. Purity doesn't say much about the safety or semantics of your code. By that logic you might as well introduce a recursion type, and now you're tagging everything that is recursive, because you can easily kill your program with an unexpected input in a recursive function. To me this is just semantics you have to think through anyway; putting it into the type system just ends up creating more convoluted programs.
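
For instance (a standard example, not tied to any code in this thread), the lazy foldl quietly builds a chain of unevaluated thunks where the strict foldl' does not:

    import Data.List (foldl')

    leaky :: Int
    leaky = foldl (+) 0 [1 .. 10000000]    -- lazy accumulator piles up thunks

    fine :: Int
    fine = foldl' (+) 0 [1 .. 10000000]    -- strict accumulator, constant space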


FYI, I think you meant functional core, imperative shell.


haha yes, thanks!


Haskell interfaces with the real world. ST allows for mutability in a pure context.
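
For example, a minimal ST sketch (illustrative only, not from the repos below): mutation is local to runST, and the function as a whole stays pure.

    import Control.Monad.ST
    import Data.STRef

    sumST :: [Int] -> Int
    sumST xs = runST $ do
      acc <- newSTRef 0                       -- a mutable cell, visible only inside runST
      mapM_ (\x -> modifySTRef' acc (+ x)) xs
      readSTRef acc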

https://github.com/serprex/Fractaler/blob/master/Fractaler.h... a fractal renderer I wrote in high school; it has mouse controls for zoom / selecting a variety of fractals

https://github.com/serprex/bfhs/blob/master/bf.hs a Brainfuck interpreter which mostly executes in a pure context, returning stdout along with whether the program is done or should be reinvoked with character input. The Brainfuck tape is implemented as a zipper.

Your program may exist in the real world, but most of it doesn't care about much of the real world.


There's a video game on steam you can buy with real dollars built in Haskell.

I work full-time writing Haskell. Fintech stuff. No shiny research going on here.

I've written some libraries and programs on my stream in Haskell. One is a client library for Postgres' streaming logical replication protocol. I've written a couple of games. Working on learning how to do wave function collapse.

Believe it or not, functional programmers -- even ones writing Haskell -- often think about and deliver software for "real world" use.


> There's a video game on steam you can buy with real dollars built in Haskell.

Link? Story?



> Given that you want to write code that sees "real world" use, and is used to handle data and events from the real world.

Real world? As opposed to what?

Is there any benefit to answering such polemical questions as if they are not rhetorical?


As opposed to the abstract academic world?

The only time I had contact with Haskell was in university, and I did not find it appealing back then, nor do I now, nor have I ever seen a program that I use written in it.

So learning a bit of pure Haskell might have been beneficial for me to become a better programmer, but I still fail to see it as being more than that: an academic language. Useful for didactic purposes, less so for actually shipping software.


> nor have I ever seen a program that I use, written in it

The only mass market Haskell software that I know of is Pandoc. Others like Shellcheck and Postgrest are popular in their niche.

I am not sure that Haskell is faring worse than other programming languages at its level of popularity, like Julia, Clojure or Erlang.


Pandoc seems useful, but maybe "mass market" is a bit of an overstatement?

And since many programmers like myself had to learn Haskell, I think Haskell should have had a better head start and be in a better position, if it really were so useful for "real world" use cases.

But please don't take this as an attack on Haskell. I have nothing against the language or its users, and I did not suffer because of it in university; I am just curious about the appeal. Because I love clean solutions, but I also want to ship things. So part of me is wondering if I am missing out, but so far I don't see much convincing data. (But I am also mainly interested in high performance and real-time graphics, and Haskell is really not the best here.)


Pandoc is the standard for markdown conversion. Check out the comments in this recent thread (or pretty much any thread where markdown is mentioned):

https://news.ycombinator.com/item?id=40695628

https://hn.algolia.com/?q=markdown


I don't think markdown conversion is a mass market application, but maybe personally I will indeed use it soon, so that would be something, I guess.


You originally asked about a program that you use (or would use?) written in Haskell. Someone brought up Pandoc, the Swiss Army knife of Markdown and similar formats, and every programmer uses Markdown in some capacity. Then you chose to fixate on the phrase "mass market" software, as if that was relevant to your original claim: a program that a programmer would use.

Which demonstrates my point. Someone with this attitude has already dug their heels in and made up their mind.


I am not a user of the language (although I learned it like you). I just came to chime in that (a) there is at least one very popular software written in Haskell and (b) Haskell seems to ship a good amount of software for its popularity.

Haskell never got the "killer framework" like Rails or Spark that would have allowed it to become more mainstream, even though it was taught in universities all over the world.


"Haskell never got the “killer framework” like Rails or Spark that allowed to become more mainstream"

But why is that the case?

Thinking about writing a "killer framework" with Haskell gives me a headache. Doing UI in Haskell? Event loops? Callbacks? Is that even possible without doing awkward workarounds?


Haskell has Yesod, which is Haskell's Rails. It's a batteries-included web app scaffold. You still need to understand monads, though. But any Haskell shop with web apps is using that.

There’s also scotty and servant for web server stuff.

There’s Esqueleto and Persistent for doing postgreSQL database queries.

And so on.
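
To give a feel for the style, a minimal sketch assuming the scotty package (illustrative only, not an endorsement of any particular framework):

    {-# LANGUAGE OverloadedStrings #-}
    import Web.Scotty

    -- A tiny web server with one route.
    main :: IO ()
    main = scotty 3000 $
      get "/hello" $
        text "Hello from Haskell"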


Yesod seems interesting indeed.

Even though they are biased:

"From a purely technical point of view, Haskell seems to be the perfect web development tool."

But I skimmed the tutorials and can say I am really not surprised that it did not take off.

The perfect web development tool is simple, in my opinion. Yesod isn't.


I took a look at Yesod and it looks more like Haskell's Sinatra, and it comes 6 years later than Rails, in 2010. By 2010 a simple web framework is table stakes, no huge differentiator.


I can't speak for others, but I never really understood the benefits of functional programming when my language pretty much allowed unbounded mutation anywhere. I would say there's a chance for impure languages to impede you in learning what functional programming is about (or at least my experience with F# and OCaml did not really help as much as it otherwise could have, I think).

Your mileage might vary, but I've heard advice from others to learn Haskell and "go off the deep-end" because of people citing similar reasons.


In a way, one benefit is the whole ecosystem / culture / idioms built on top. Haskellers went further in that direction than most languages (except maybe scalaz and some hardcore typescript devs).


Pure functional programming doesn't preclude side-effects like IO; it makes side-effects explicit rather than implicit!

At my previous job, we used pure functional programming to ensure that custom programs only had access to certain side-effects (most importantly, not IO). This meant that it was trivial to run these custom programs in multiple environments, including various testing environments, and production.
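
One common shape for this (an illustrative sketch, not our actual code) is to write custom programs against a narrow effect class instead of raw IO, and pick the interpreter per environment:

    -- The only effect custom programs are allowed to use.
    class Monad m => HasLog m where
      logMsg :: String -> m ()

    -- Programs only see HasLog, so they cannot perform arbitrary IO.
    job :: HasLog m => Int -> m Int
    job n = do
      logMsg ("processing " ++ show n)
      pure (n * 2)

    -- Production interpreter; tests could interpret into a pure monad instead.
    instance HasLog IO where
      logMsg = putStrLn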


In my experience, I only really learned how to write small functions after Haskell. The discipline it forces on you is good training.



