Can functional programming be liberated from the von Neumann paradigm? (2010) (conal.net)
161 points by stevekrouse 38 days ago | 82 comments



The question is valid if you intend to stick with one paradigm and keep it pure. For any practical purpose, that's useless crap.

Nothing wrong with writing the gist of your program in a functional way, keeping things immutable, allowing for orderless execution and lazy evaluation. But because your program interfaces with the real, consequential world, it must realise all that computation at the pivot points where there is I/O, where it suddenly matters that the lazy evaluation is complete.

Have a procedural loop to drive your events and other practical stuff and, from there, launch functional computations to calculate a completely or partially new state for your program. Keep things mostly immutable but do mutation in certain specific locations where it makes sense.
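Something like this, as a minimal sketch (Haskell, with a made-up Event type and a bare Int for the state):

    -- An imperative driver loop at the edge; pure state transitions inside.
    data Event = Tick

    step :: Event -> Int -> Int        -- the pure core: old state -> new state
    step Tick n = n + 1

    main :: IO ()
    main = go 0
      where
        go n = do
          line <- getLine              -- the impure edge: I/O happens here
          if line == "quit"
            then print n               -- realise the computation at the boundary
            else go (step Tick n)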

Clojure is good at drawing the line where the programmer sees fit. Multi-paradigm has always beaten pure ideologies hands down when it comes to delivering useful programs that actually work without making programmers do mental gymnastics for the sake of purity. (They should save their mental gymnastics for the real-world problem they're solving.)

You don't want to write all of your program in object-oriented, procedural, functional, declarative, or whatever paradigm you have a thing for. They all do some things well and suck at others. Use them like tools, the right one for the right job.


Paragraphs 1, 2 and 5 literally repeat objections the author tried to address, without mentioning his answers or why they succeed or fail. You've completely failed to engage with the piece.


> Multi-paradigm has always beaten pure ideologies hands down when it comes to delivering useful programs that actually work without making programmers do mental gymnastics for the sake of purity.

Don't think anyone is arguing with that (even the author of this piece). But the functional patterns that contribute to useful programs today are the direct result of the kind of pure theoretical thinking here.

To me, your argument could be used against any theoretical research in any field. None of it is of practical use until it is.


> Don't think anyone is arguing with that (even the author of this piece)

The author is openly and explicitly arguing for a single paradigm, and he clearly stated in the article that he is in favor of it and is calling for people to do the mental gymnastics for the sake of purity.

> the functional patterns that contribute to useful programs today are the direct result of the kind of pure theoretical thinking here.

FP isn't purely theoretical, so I'm not sure what you mean. The foundations haven't changed considerably in decades, and it has been possible and practical to write high-level FP code to, say, compute factorials or whatever, for decades.

FP is great as a high-level design tool to simplify and reduce mutable state. But there's no evidence that taking it to the limit with human interaction and hardware is even possible, or what that would mean, or why it would be useful.

> your argument could be used against any theoretical research in any field. None of it is of practical use until it is.

But you neglect to mention that much theoretical research never becomes practical, and some is proven impossible. Practicality is and always will be a useful and valid argument, even if it’s occasionally wrong.


> for the sake of purity.

The post was about choosing a representation for the sake of composition. In this case, the composition happens to be pure because it is mathematical, but the simplest composition was the actual goal, not just any kind of purity.


The post discusses weak vs strong composition, not simple vs complex. The goal is FP for I/O so that composing is pure, so that it’s strong, whatever that means.

We already have simple composition of programs with Von Neumann I/O, so I’m not sure I can agree with you.

To that end, I found his example of number composition quite unconvincing. He said we use binary and not strings for composability. That's not true: numbers represented as strings are no less composable. The only real downside is that they're less memory- and time-efficient.


Genuine question: what are some accessible examples of real-world programs that use multiple paradigms "well"?


Scala maybe? It does both OO and functional.


That's somewhat subjective. If you look at Python pandas or GNU R programs, those would be paradigm hybrids (mostly good ones). I personally enjoy coding so much more if I can use the right tool for the job.


I like Elixir/Erlang's 'weird' approach to this, where it primarily uses an FP approach for the code, but an OO-like approach for the runtime (keeping state in processes). The OO being perhaps closer to Alan Kay's original ideas (as I understand it) is nice too.

I'm not experienced enough to judge how well it does all this though.


Many C# programs now leverage a lot of functional capabilities through LINQ. Even more so in F#.

C# is more imperative with functional mixed in, and it does little to stop you from doing functional "evils".

F# is more functional, with the ability to mix in imperative code and mutation, and it stops you from doing functional "evils" unless you say you want to be evil.


Past performance is no guarantee of future outcomes.

If a non-imperative paradigm enables the creation of a different architecture that consistently beats the von Neumann adaptations we can create, you can bet people will go out of their way to write everything in that paradigm, whatever mental gymnastics it requires.

Besides, most of the programming world has been imperative-only from the beginning until just a few years ago. I don't know where you see all that multi-paradigm success.


Beats it by how much? Computers today are basically an accretion disc of suboptimal decisions made for the sake of backwards compatibility. You would have to demonstrate a huge leap in performance to wipe the slate clean.


I'm not sure by how much. Currently there is another thread here about people going out of their way to push all the crazy network protocol handling into userspace, in each application. Developers seem quite OK with that, so much so that I feel out of place bothering about the complexity.

We also have a lot of people rewriting legacy stuff in Rust, just for a bit of extra confidence.

Currently, people have gone out of their way to create artificial-intelligence-guided real-time optimizers that convert von Neumann code into graph-machine code on x86, because it performs better. But it gets very close to optimal usage of the graph machine, so there is no need for most people to actually learn it.


>Past performance is no guarantee of future outcomes.

No, but it's the best predictor of them.

> If a non-imperative paradigm enables the creation of a different architecture that consistently beats the von Neumann adaptations we can create, you can bet people will go out of their way to write everything in that paradigm, whatever mental gymnastics it requires.

Well, if wishes were horses beggars would ride. Let's have someone build and showcase this "different architecture" before we drop the current one.


I think the piece is about programming models, not computer architecture. The idea might suggest alternative architectures, but that’s not what the post is about.


This piece is hard to evaluate: it seems more like an expression of hope or a manifesto than something I can engage with.

It argues that the idea that there is an ineliminable imperative core in our programming model is just an assumption today, and I agree.

But that doesn’t mean it’s right or wrong, just that we have to wait to prove or disprove that assumption. People should keep trying to disprove it, but I can’t fault the majority of developers for continuing to take it as an assumption.


Not at all programming related, but I found this quote incomprehensible:

"Belief, as I use the word here, is the insistence that the truth is what one would “lief” or wish it to be. The believer will open his mind to the truth on the condition that it fits in with his... "Faith, on the other hand, is an unreserved opening of the mind to the truth, whatever it may turn out to be. Faith has no preconceptions; it is a plunge into the unknown. Belief clings, but faith lets go. In this sense of the word, faith is the essential virtue of science, and likewise of any religion that is not self-deception." – Alan Watts

Faith, as every human on earth uses it, is a belief that that person has opted to exempt from questioning. It means, in fact, the opposite of his usage, and all belief systems are necessarily self-deception: stories we make up and tell ourselves to render sensory soup into a coherent narrative in which we have a part.


I tried to find a little more context on the quote. Note that Alan Watts was a major popular figure for bringing zen Buddhism to western audiences. Turning things like definitions on their heads is definitely in line with that goal. :)

> We must here make a clear distinction between belief and faith, because, in general practice, belief has come to mean a state of mind which is almost the opposite of faith. Belief, as I use the word here, is the insistence that the truth is what one would “lief” or wish it to be. The believer will open his mind to the truth on the condition that it fits in with his preconceived ideas and wishes. Faith, on the other hand, is an unreserved opening of the mind to the truth, whatever it may turn out to be. Faith has no preconceptions; it is a plunge into the unknown. Belief clings, but faith lets go. In this sense of the word, faith is the essential virtue of science, and likewise of any religion that is not self-deception.

The previously missing sentence and the very beginning of your quote seem to address your comment: "in general practice, belief has come to mean a state of mind which is almost the opposite of faith. Belief, as I use the word here, ..."

My impression is that belief, according to Watts, has become an assertion of a specific hypothesis: I will open myself up to this idea if and only if the idea fits with my preexisting ones. If you are testing a hypothesis, this is a poor attitude to have, and it can affect how you perform the test: the test could be wrong but agree with your preconceptions. This is how "belief clings".

Faith, as the "essential virtue of science", according to his definition would be detachment from the outcome of testing a hypothesis, "whatever it may turn out to be", with "no preconceptions". You have a question, and you are employing a method, and the outcome remains unknown until you complete the test (and perhaps even after). This is how faith "lets go".

Religion of "self deception" is one where those practicing it only open their minds to any particular idea "on the condition that it fits with" their preexisting ideas. Religions that do not deceive themselves must then be religions that open themselves to new ideas, whether or not those new ideas agree with ideas they already hold.


> Faith as every human on earth uses it is a belief that that person has opted to exempt from questioning.

Now that's a belief. Doesn't seem like you're open to questioning it though.


Understanding and modelling IO is a big part of why Clojure took off like it did. It's functional, it's pure-ish, but it has an understanding of time, concurrency and IO baked in. When you do IO, time matters; take a look at how Datomic models this (incrementally).

But even from the early days, Rich Hickey said that no pure functional system exists, as even an empty HD emits heat, which is a form of IO. I think we should listen to Hitchens and abandon our "fantasy of purity" when it comes to writing real software.


I don't think that's what functional purity means. Actually I think there are a lot of huge misunderstandings about this.

It's kind of like, think about grammars of human languages. Obviously human communication involves commands. But you might still have a good reason to desire a grammar that is "purely functional" in its mathematical formulation.

Pure functional programming is about establishing a certain level of formal discourse where expressions involving applications of functions reduce to values. It's not a fantasy where effects don't exist—it's a way of being precise about effects in a particular style of formalism.

These expressions should be able to describe all kinds of systems, as indeed Haskell's IO type is extremely capable when it comes to describing multithreaded sequential computations involving mutable state and external effects.

Conal is suggesting that the IO type is a bit crude compared to the full potential of "functional reactive programming" which is for sure a powerful paradigm.


I'm just going by the formal definition of purity:

1. Its return value is the same for the same arguments (no variation with local static variables, non-local variables, mutable reference arguments or input streams from I/O devices).

2. Its evaluation has no side effects (no mutation of local static variables, non-local variables, mutable reference arguments or I/O streams).

As soon as I/O is involved, functional purity is technically broken, and since any real-world system will have I/O, I consider the quest for purity a fantasy in the same vein as developers who invest a lot of resources in "proving" their functions.


Functional purity isn't broken by Haskell-style I/O. In Haskell, I/O is not done through side effects, and I/O effects are not functions, so your points 1 and 2 are not violated. It's not a "fantasy." You just don't understand it.
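To illustrate (a minimal sketch, using only Prelude functions):

    -- An I/O action is a first-class value describing an effect;
    -- constructing it performs no I/O.
    greet :: IO ()
    greet = putStrLn "hello"

    -- A list of actions is still just data. Effects happen only when the
    -- runtime executes what main composes, in the order it composes them.
    main :: IO ()
    main = sequence_ [greet, greet]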


The post is really talking about choosing the best kind of composition to solve problems with (purity was a by-product of being mathematical).

You guys are arguing over the monadic IO that he dismisses in the first couple of paragraphs.

All he really cares about is achieving program composition.


As someone who knows next to nothing about Clojure, what does an "understanding of time, concurrency and IO baked in" actually involve, compared to something like Haskell or OCaml?


Clojure has more than a handful of data types which allow you to mutate/change values. You have 'ref', which allows for multiple observers and manipulators that all see a discrete timeline. Think of a bank transaction where two withdrawals running in parallel would yield a negative balance; refs prevent that. Refs are mostly for synchronicity.

There are also atoms and agents to look at, but the overall perspective is that the world is as the world is, and those who observe it always do so with some delay. The discrete time models allow you to have consistent views of the world from multiple observers.

This is at the heart of Datomic's data model. Nothing ever changes. If you commit a fact like "Fred likes pizza" at 2pm, Fred will always be known to have liked pizza at 2pm. If you later commit that Fred likes apples, that will only be true from the point in time at which you commit it, but you'll still be able to see that Fred used to like pizza.

The IO is a side note, but living on the JVM, Clojure has native access to everything from files to streams, and you can interact with them just like Java people do. There are a few helpers like with-io to manage multiple simultaneous writes, but they're rarely used.

Haskell, in this regard, has the IO monad, which is briefly commented on in the article. It's sort of a trick, and as I understand it Haskell has no ambition of being the go-to concurrency language. It has recently had something like refs/STM added, though; I haven't looked into it much.


Thanks for the explanation. Sounds like Clojure allows mutation and side effects all over the place, but that it has some nice libraries for handling immutability and sequencing in various ways so that you can write consistent programs. That sounds nice, but also not particularly unique or interesting.

The IO monad in Haskell is no trick, but I can understand why you would think that.
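For the record, the bank-withdrawal scenario from upthread is exactly what Haskell's STM handles. A minimal sketch (using the stm package):

    import Control.Concurrent.STM

    -- 'check' retries the whole transaction until the balance suffices,
    -- so no interleaving can observe or produce a negative balance.
    withdraw :: TVar Int -> Int -> STM ()
    withdraw account amount = do
      balance <- readTVar account
      check (balance >= amount)
      writeTVar account (balance - amount)

    main :: IO ()
    main = do
      account <- newTVarIO 100
      atomically (withdraw account 40)
      atomically (withdraw account 40)
      readTVarIO account >>= print   -- prints 20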


No, I'm sorry, then I didn't explain it correctly. I was commenting specifically on the data types which can be mutated, but only in a thread-safe/concurrency-safe manner.

Everything else, including the entirety of the core language, is immutable and functional. You have to go out of your way to mutate.

It's a functional Lisp tightly integrated with the JVM, and it compiles to both Java bytecode and JavaScript. I think that makes it unique.


So immutable, but side-effectful. As far as I can see, there's no effects system for I/O. For example, a function like print doesn't seem to return anything, if I understand the documentation correctly (which isn't easy, because the documentation (e.g. [1]) seems to completely lack signatures, so the only way to tell is to look at the code!). It seems that to print an array of lines to stdout, you use something like doseq, which is like a "for each" function that is side-effectful. This is how OCaml works, too.

[1] https://clojure.github.io/clojure/clojure.core-api.html#cloj...


Almost correct.

The signature is there, eg.

> Usage: (print & more)

'& more' means 'any number of arguments stuffed into a sequence called more'

    (print [1 2 3])
    ;; prints [1 2 3] and returns nil

There are a number of ways to print an array of lines, and doseq is one. It's designed for statements that have no return value.


I saw that, but the signature has no types and no indication as to what the return value might be. If you look at the documentation for functions that do have return values, there's no signature either, just a fairly vague English description of what it returns. Dynamic duck typing all the way, no type inference.

Anyway, my point was: no effects system. There's no way to determine whether something has side effects or to prevent them. Very far from Haskell's purity. I wouldn't criticize Haskell for a "trick" that enables the impressive feat of maintaining purity with side effects (while at the same time maintaining strict enforcement of types throughout) when Clojure doesn't even have any kind of effects management.


Was about to mention I heard about this in the future of coding podcast[0] last week, then realised this was posted by Steve (who runs the podcast).

[0] : https://futureofcoding.org

Strongly recommend anyone interested in such topics (programming languages, rapid prototyping, HCI, low-code platforms, etc.) to check out his work. Steve is doing some really great work as an individual researcher!


Cf. http://conal.net/papers/compiling-to-categories/

Abstract

> It is well-known that the simply typed lambda-calculus is modeled by any cartesian closed category (CCC). This correspondence suggests giving typed functional programs a variety of interpretations, each corresponding to a different category. A convenient way to realize this idea is as a collection of meaning-preserving transformations added to an existing compiler, such as GHC for Haskell. This paper describes such an implementation and demonstrates its use for a variety of interpretations including hardware circuits, automatic differentiation, incremental computation, and interval analysis. Each such interpretation is a category easily defined in Haskell (outside of the compiler). The general technique appears to provide a compelling alternative to deeply embedded domain-specific languages.


I keep hoping to read something about bypassing the von Neumann bottleneck[0].

[0]: https://whatis.techtarget.com/definition/von-Neumann-bottlen...


> A person convinced of (and perhaps ego-invested in) the nonexistence of new possibilities is a person actively resisting their opportunity to discover possibilities — too stuck in the obvious to discover new truths.

Since this article is almost 10 years old... has any progress been made in demonstrating the existence of pure functional I/O? Reading files and writing to the console is one thing; I'm more curious about human input, GUIs and language. Think about apps like Photoshop, Super Mario Bros, or Sublime Text. The ways we want to interact with them seem more fundamentally sequential. Not only can I not see how to make them purely functional, I can't see what that would even mean or why it would be desirable.

> Hand-waving is an important factor in staying stuck in impossibility thinking, since rigorous argument uncovers unconscious limiting assumptions.

You have to admire the effort to discredit any rational thinking before it starts.


> Reading files and writing to the console is one thing

> Think about apps like Photoshop, Super Mario Bros, or Sublime Text.

Well, there's literally no difference between reading from a file or from a queue of requests from the user on any level of abstraction.

E.g. you could (functionally) apply a list of operations to an image in Photoshop that the user has supplied you with.


There is a big difference between reading files and human interaction.

You don't normally have to output intermediate state while reading a file, but you do with human interaction. Human interaction is usually responding and reacting to the intermediate state of the program. Think a bit more about Super Mario Bros.

A file can be read as a single chunk of memory, and it doesn’t need to consist of a queue of sequential operations. We can decide and design what files are and what’s in them. An array in a file doesn’t need to be read one number at a time.

Human interaction, unlike a file, cannot happen all at once, needs intermediate feedback, and cannot be reordered or easily reformulated into another representation, because it’s responding to the intermediate state of the system.


But we can imagine a program that reads a file byte-by-byte and where some internal state of the program changes depending on those bytes. This program could even exit when given three “#” characters, which might mean you need an internal counter for each “#”.

In that sense Mario isn’t really that different!

(Subtly, a purely functional Mario game has more inputs than just the controller. You’ll need a clock signal for the Koopas to walk around at a given pace, but that’s not too different from a file of sequential numbers that’s read at a particular speed.)
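To make that concrete: a purely functional game can be a fold of a pure step function over its input history. A minimal sketch, with made-up Input and GameState types:

    data Input     = ButtonA | ButtonB | Tick   -- clock ticks are inputs too
    data GameState = GameState { marioX :: Int }

    step :: Input -> GameState -> GameState
    step ButtonA s = s { marioX = marioX s + 1 }
    step ButtonB s = s { marioX = marioX s - 1 }
    step Tick    s = s                          -- Koopas would move here

    -- The whole game, as a pure function of its input history:
    run :: [Input] -> GameState
    run = foldl (flip step) (GameState 0)

The same input list always yields the same state; the I/O layer's only job is to supply the list as it arrives.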

Backstory: I used to think that front-end JavaScript programming was too stateful and inherently messy to be compared with, say, handling HTTP requests. But the Elm language (and to a lesser extent React/Redux) has shown me that the messiness is accidental. You actually can apply the techniques of purely functional programming to create arbitrarily sophisticated GUI programs. Here's a simplified version of Mario in Elm: https://github.com/avh4/elm-mario


No, you missed the point.

The human input depends on the intermediate state, where the file input does not.

Manufacturing a case where you output the intermediate state as you read a file is missing that crucial dependency.

You must also assume that the human input cannot be predicted before the intermediate state is known. Assuming that it’s a function of time is cheating.

> You actually can apply the techniques of purely functional programming to create arbitrarily sophisticated GUI programs.

No, you can't. That's exactly what this article is talking about. React is a state machine under the hood. You can handle each request in a functional way, but you can't build the request processing behind the scenes in a functional way; it's not currently possible.

However, from your point of view, all of your code can appear to be functional, so some of the comments here are, reasonably, asking the question, why does it matter?


> The human input depends on the intermediate state, where the file input does not.

Not sure what you mean by “depends” here. If I’m at a game controller I can smash buttons for years without caring what the state of the game is. It’s up to the game to make sense of those button presses and do the right thing.


It's a two-way dependency. You are talking about the game responding to the human, but I'm talking about the human responding to the game. Both need to happen, they are entangled, which is why you fundamentally can't separate them and treat them as an abstract function of time, nor can you re-order the events (which may be necessary for pure functional interaction).

Just because you can smash buttons randomly doesn’t prove anything, that’s not normally how people play games. Normally, they see what’s happening and respond in order to succeed.


Ok that’s something else then. I thought we were just talking about crafting a program that accepts user input (sensical or otherwise) and draws the right graphics on a screen.

If you want a complete cybernetic framework to model the player and the game, I wouldn’t know where to begin.


I don’t want a model of the player at all. (And I don’t think it’s possible to make one that will solve this problem.) We are talking about how to increase the scope of FP to include the I/O layer, because it currently ends at the I/O interface. I’m arguing that it might be fundamentally, perhaps provably impossible to have pure FP when there’s unpredictable human interaction, due to the coupled dependency on intermediate state.


Imagine a simple game where the player has to adjust a dial to tune in an old FM radio manually by holding either button A or button B. The game needs to provide feedback to the user as the tuning happens, in the form of more or less noise, so that the user can stop when the station is found.


So what is an example of an intermediate state in a human being?


What do you mean? I’m talking about the program’s intermediate state, not the human’s. Humans respond to the program, then give it new inputs, then the program outputs a new intermediate state, then the human responds again, and so on. This loop is fundamentally different than reading a static file that doesn’t change or depend on the state of the program.


Well, then swap a file for an ethernet socket and voila - it's unpredictable, too.


You are still missing the critical piece.

The reason a file is different than a human is that everything in the file is known in advance, whereas the human reacts to the program. It’s not just about unpredictability, it’s also about the dependency.

The human input is not decided until the program displays its intermediate state, and when the human does send input to the program, the input is based on what the human sees, which is the program's intermediate state. So, the human input depends on the intermediate state.

A file input does not depend on the program’s intermediate state. The file input is a static queue of data that doesn’t change. If you change the program, the file won’t change, but the human input will. A queue of commands in a file is fixed, and showing the program’s output doesn’t affect which command comes next from the file.

The human input is a function of time and also of the program’s intermediate state, and lastly of the human’s whims. These things force the program’s evaluation loop with human input into a specific order that cannot be evaluated or re-ordered or simplified in advance, unlike the file.

Is that making more sense now? All of this is exactly what the author of the article was talking about when he discussed weak vs strong composition. Do you understand what he meant by functional composition being stronger than Von Neumann composition, and why?


I don’t know a lot about Haskell or pure functional programming. Composing entire programs sounds really desirable though, and while imagining what that would look like I came up with this:

Suppose we had a runtime comprised of objects, and execution was represented as messages sent between objects, as in Smalltalk. Unlike Smalltalk, however, objects would never be created or destroyed. All of our program’s logic and behavior would be represented as a pure transformation on every message between objects. In pseudo-Haskell:

  -- A message is a receiver identifying an object, like “Program”, and a selector, perhaps containing data
  myProgram :: Msg -> Msg
  myProgram (Program {begin}) = Window {createWindowWithName="myWindow"}
  myProgram (Program {windowCreatedWithName=name}) = (Window {windowWithName=name, setTitleTo="Hello world"})
  myProgram (Program {windowWithName=name, didChangeTitle=title}) = (Console {print=title})
  myProgram msg = msg
Here the runtime begins execution by trying to send the message [Program begin], but myProgram([Program begin]) evaluates to [Window createWindowWithName:"myWindow"], so that is sent instead. The “Window” object, when sent [Window createWindowWithName:String], both puts a new window on the screen, and sends the message myProgram([Program windowCreatedWithName]), and so execution continues.

It would be easy to compose programs written using this paradigm, and not hard to imagine what composition of executable programs would mean. For example, to log the name of every window created to the console:

  logWindowName :: Msg -> Msg
  logWindowName (Program {windowCreatedWithName=name}) = Multicast {messages=[Program {windowCreatedWithName=name}, Console {print=name}]}
  finalProgram = logWindowName . myProgram 
(Here “Multicast” is supposed to be an object that can take several messages and, with no notion of order, send them out all at once.)

We could add yet another object representing mutable state, with messages describing changes to it and responses containing data stored there. Without using something like this to explicitly model it, I don't see any explicit sequencing of instructions as in the IO monad.

Are there obvious problems with this approach, or has anything very similar been tried?


Interesting. So would there really be only a fixed set of existing objects, and the interaction layer of the program would simply be a function that is applied to messages passed between the objects?

The message that actually comes out gets sent to the object, which generally does impure things?

Would there be any way to define such objects? (Any such definition would of course basically just be a message translator like the program function, but if users cannot define such an object I'd imagine some real pushback, or at least some complaints.)

Without a built-in object to handle, say, MyFavoriteDatabase, doing the equivalent via a message to Network, and its reply message back to Program, would feel a lot like completely useless boilerplate (especially since I would need to transform the high-level query into something low-level like raw packet contents before I could send it to Network, and do the reverse on the reply, while a MyFavoriteDatabase object could do all that for me).


How about

  print :: a -> a
or some variant.

Or

  print :: a -> (a -> b) -> a
where (a->b) is a function that extracts some value 'b' out of 'a' and that value 'b' is printed.

I mean, pretty straightforward: id functions in your compositions don't affect the overall program.

This is only for the O part of IO. Input is a different problem.


These are pretty close to the debug statements in Haskell.

This is partially problematic because of lazy evaluation. It is not clear when the print-statement will actually print. This could even lead to a re-ordering of the print statements.
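Assuming that refers to Debug.Trace, a minimal sketch of the demand-driven ordering:

    import Debug.Trace (trace)

    -- trace :: String -> a -> a emits its message only when the wrapped
    -- value is forced, so message order follows demand, not source order,
    -- and values that are never demanded never print at all.
    main :: IO ()
    main = do
      let x = trace "evaluating x" (2 + 2 :: Int)
          y = trace "evaluating y" (3 * 3 :: Int)
      print y          -- forces y: "evaluating y" is emitted first
      print (x + x)    -- forces x once (sharing): "evaluating x" comes after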

Another issue occurs when output is piped to input. In bash you could do something weird like:

    foo -i bar.txt >> bar.txt
where -i is an input argument. I have to admit that I can't come up with a reason to run the above command. Really, maybe just ignore this part and consider the ordering issue.


I just don't get the idea that "sequential" is somehow a bad thing? The whole concept of an algorithm/procedure to perform a computation is about performing a series of operations in a specific order. You have to think about the operations and their order as well to come up with an algorithm.


I believe that this:

> I just don't get the idea that "sequential" is somehow a bad thing?

and this:

> The whole concept of an algorithm/procedure to perform a computation is about performing a series of operations in a specific order.

are different manifestations of the same thing: you are thinking about programming in sequential terms only. When you think about a programming problem, you try to solve it with a sequence of operations. You should try something like Haskell, something that forces your mind to think differently; it will help teach your mind to see programming another way.

Such a "sequential thinking" is acceptable, but you probably not as good as you might be at thinking about programming problems. Functional programming helps to think about code one level higher, without distractions and compications caused by "linearization" of your thoughts to fit them into imperative paradigm. The main reason behind functional paradigm is an ability to prove statements about code, to be able to reason about code.

Throwing the idea of sequence out of your reasoning about code is a further abstraction from the underlying machinery, like transistors or the machine instruction set. Like any abstraction, it gives you both benefits (you can think about more complex problems using the same amount of your brain's computational power) and downsides -- you become more restricted by the paradigm itself, though I cannot explain what these restrictions are, because I can feel them sometimes but do not know how to verbalize those feelings.

You can reason about code even if you think that any program is a sequence of operations, but you can do it better if your idea of a program is wider than a "series of operations in a specific order". Moreover, some of that reasoning can be done by a special program, like a compiler or a static analyzer.


> You can reason about code even if you think that any program is a sequence of operations, but you can do it better if your idea of a program is wider than a "series of operations in a specific order".

Can you provide a concrete example of this? Otherwise it is just a hand-wavy thing to say (of which there are way too many in the programming world).


I'm not sure an example can possibly work here. The best example would be something that is impossible to reason about in imperative terms but possible in functional terms. So... we need a sufficiently difficult task, one that will cause problems with understanding. And this task should be easy enough to write as an example in an HN comment. A contradictory setup, isn't it?

I could use an artificial task as an example, but the trouble is that if someone thinks in imperative terms only, then when he tries to think about a task he will frame it in imperative terms, see it as an imperative problem, and fail to see it as a functional one. It is not because he is stupid or has some other disability; it is just the way the human mind works. It is like people saying "what is so good about language X, if you can do it all in Y?" or "why does one need OOP languages, if you can do OOP in C?" The way we think about problems defines how we see problems, and what we see as problems.

But let us try.

    qsort [] = []
    qsort (x:xs) = qsort [ y | y <- xs, y < x ] ++ [x] ++ qsort [ y | y <- xs, y >= x ]
This is a functional definition of quicksort. It is possibly the worst definition from a performance standpoint, but I like it for the clarity with which it expresses the algorithm, and it demonstrates the point. I remember the very first time I saw quicksort in C: I spent maybe half an hour trying to figure out how that code was supposed to work. Here you can see a compressed version which has nothing but the very idea of quicksort:

Quicksort of an empty sequence is an empty sequence. Quicksort of a sequence starting with x is a concatenation (operator ++) of three lists: the quicksorted list of items from the rest of the sequence (the sequence without the first element, which is designated x) that are less than x, the one-element list [x], and the quicksorted list of the remaining elements (those greater than or equal to x).

Of course, if you want quicksort to sort in place, you need to think about small details like traversing two pointers, one from the front of the sequence and one from the back, swapping elements, and not forgetting about the chosen pivot... Yeah, reality is more complicated than that; sometimes you cannot get rid of complexity.

Functional compilers are constantly searching for optimization techniques to get rid of all these small details, and in some ways they are succeeding, though none is good enough to generate machine code that sorts in place from the high-level code above. But I tried to demonstrate the idea of abstracting away details and how it helps you understand code and reason about it, and I believe the example shows that.


My question was about a "sequence of things to do", which is a different concept than "imperative programming". Sequence means A should happen before B, and this is something you have to keep in mind and then encode in the target language. For example, in the C language A could be a statement that modifies some data and B could be another statement that uses the updated data. In Haskell, A could be an expression that produces a value, and that value is used in expression B. In both cases A has to be "done" before B, and you need to keep this in mind while writing the code.

As far as imperative vs functional goes: remove the assignment operator from an imperative language and you will end up with a functional language without bells and whistles (then you will have to turn conditionals into expressions, and use recursion instead of loops).


> In Haskell, A could be an expression that produces a value, and that value is used in expression B. In both cases A has to be "done" before B, and you need to keep this in mind while writing the code.

Yes and no. Take B(A(x)): Haskell can evaluate A(x), get the result, then call B, passing the result as an argument. Or it can call B, passing a closure as the argument, and only when the calculation of B cannot continue without knowing the value of A(x) is A(x) calculated. In some sense A(x) was done before B -- before B was finished -- but on the other hand the calculation of A(x) started after the calculation of B had started.

> My question was about a "sequence of things to do", which is a different concept than "imperative programming".

I'm not sure that I can see a difference, but nevertheless, in the example above there is no sequence of operations. There is an almost declarative definition of a quicksorted list. There are multiple ways to linearize it into a sequence of operations, and even if there were only one way, I wouldn't mind. It is the task of the compiler to make this linearization, not mine.


Not sequential: you can treat the data as a set (in the mathematical sense) and solve your problem as a set operation, e.g. "I want all the numbers greater than one". It makes the code easy to read, but it also allows you 1) not to deal with it until really needed (lazy eval) and 2) to parallelize the solution onto n threads on m cores. Think list comprehensions/generators in Python. Cons: you need more memory than you have data.

Sequential: you deal with one item at a time; you can handle an infinite amount of data with a finite amount of memory. Cons: you can't parallelize, you evaluate things early that you don't need, and you might need to go through the loop more than once.
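A minimal sketch of the non-sequential style (in Haskell, since the thread's other examples use it):

    -- "All the numbers greater than one", stated as a property, not a loop.
    bigOnes :: [Int] -> [Int]
    bigOnes xs = [ x | x <- xs, x > 1 ]

    main :: IO ()
    main = print (take 3 (bigOnes [0 ..]))   -- laziness: only [2,3,4] is computed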


Order is not always necessary. For an example, just consider why SIMD instructions and GPUs are so powerful: they can do the same operation on a lot of data at the same time. A different example would be the async/await pattern. Independent calculations can proceed concurrently. So no, a specific order of every operation is not part of the definition of an algorithm. Indeed, if that were the case, CPUs could not do out-of-order execution optimizations.


What you are describing is that you can design an algorithm where some parts of it can be executed in any order, so that you can run them in parallel to gain performance from multiple cores; but overall, when coming up with the algorithm, you still have to figure out the correct sequence.

Async/await is sequential where the operations are asynchronous (you don't move on to the next async operation until the previous one is done, given that the second async operation depends on the first operation's result).


If the sequence is non-essential to the problem, it adds the accidental complexity of all sequence-related bugs to the system.

See the out of the tar pit paper https://blog.acolyer.org/2015/03/20/out-of-the-tar-pit/


AFAIK, the tar pit paper is about how "shared mutable state" is problematic. Not sure what that has to do with sequentiality.


The paper argues specifically against Prolog because it is not a "pure" control-agnostic language.

  a = 1; 
  b = 2;
The programmer does not need to concern themselves with the order of these two assignment statements, and when languages impose an order it adds unnecessary complexity to the system (especially when thinking about concurrency).


It is about complexity; state (not necessarily shared) is the main source of complexity addressed, but the fundamental point isn't specific to state. Sequentiality often adds unnecessary complexity which complicates reasoning, especially for things which can be conceived of as set operations.


Shared mutable state is just one cause of accidental complexity. In section 4.2 of the paper, they also talk about "Complexity caused by Control".

Quoted:

> The problem with control is that very often we do not want to have to be concerned with this. Obviously — given that we want to construct a real system in which things will actually happen — at some point order is going to be relevant to someone, but there are significant risks in concerning ourselves with this issue unnecessarily.


Some difficult problems are easier to solve if you don't have to think about them sequentially. One problem in my current work involves getting records from events one at a time, building predictive algorithms from those, and building another set of algorithms based on those.

If you model it sequentially, it takes a lot of work to make sure the right data is calculated in the right order. But you can also model it as a set of lazily evaluated streams, and that makes life a lot easier – because order of evaluation isn't part of the domain, it's essentially an implementation detail.

ETA: another way to think about it is that this lets the compiler manage evaluation order by default, while you're free to focus on other stuff (with the option to manage it manually if you need to). This is a similar type of benefit to garbage collection.


Things are only sequential when you're programming in imperative style. Haskell, for example, supports plenty of algorithms that don't have a specific order in which things happen. Prolog, too. And Halide. Also, Oz. There are plenty around, and they are both useful and fascinating.


Not sure Prolog is a good example here; my experience with it involved having to spend essentially as much time reasoning about how the unification/backtracking algorithm was going to visit different clauses and facts (and how I might want to cut/control that) as an imperative programmer might think about the order in which statements are executed. Maybe more, considering the process was less linear and obvious.

(Not that I didn't like Prolog -- I did -- I just felt far from freed of mundane concerns of execution order.)


Haskell is lazy; you can write a bunch of expressions and use the results of those expressions in other expressions, and so on. For example:

    foo :: Int -> Int
    foo a = let z = a * c
                b = 10
                c = 20
            in b + c + z

I used c in z, but c is defined later. The language allows you to define the expressions in any order, but internally it has to figure out the dependencies between the expressions and evaluate them in a specific order; and while writing the code and coming up with the logic of the algorithm, you still have to think about what you want to compute first and what to compute next.


Various parts of an algorithm/procedure are related to each other in various ways. Sometimes those relationships describe a sequence. Other times they do not. For me, this is the essence of functional programming: a means to describe how the many small parts of an algorithm are related to each other in the whole (in a natural way).

If the computation truly is one that is most practically described as a sequence of steps, that is easy to do. In fact, in some sense this is what the infamous "Monad" concept is all about.


Having the power to order instructions introduces the possibility of having bugs that have to do with the order of operations.

Getting rid of order gets rid of those bugs, like how restricting types with type checking gets rid of all type errors.


[flagged]


I somewhat doubt your assessment. Aside from the fact that the article references parts of the paper, my impression of the author is that he's the kind of person who might have actually read the paper. Cf. http://conal.net/cv.pdf


[flagged]


(Stop commenting on the downvotes dude, people will pile on for it. Also, who cares? "Ignore the bird, follow the river.")

> FP reflects an algebra of programs, not traditional functional programming. It doesn't suffer from these problems that the author is stating.

You're so right. (Not about Conal Elliott not having grokked the paper, though; see below.) The Joy programming language is much closer to Backus' FP than Haskell, for example, and working with it is much more like deriving mathematical equations than any other language I've tried (FWIW). (Some examples here: http://joypy.osdn.io/notebooks/index.html)

He's very much on the right track though!

http://conal.net/papers/compiling-to-categories/

Here he's converting Haskell code to a point-free style that is very similar to Joy, then instantiating the code over different Categories to get multiple actual programs/outputs from a single expression: "a variety of interpretations including hardware circuits, automatic differentiation, incremental computation, and interval analysis."


Now these are some cool resources! Thank you for providing them.

I'm really starting to enjoy Joy, J, FP, and other languages based on a similar combinator-like basis. I'll have to read over Compiling to Categories, because that certainly is something that catches my eye.


Cheers!

I have a hunch that the Categorical programming paradigm will prove to be really cool. I hope it doesn't take decades to get into common use, eh?

You might also like this version of Joy implemented in Prolog: https://osdn.net/projects/joypy/scm/hg/Joypy/blobs/tip/thun/...

Check out the "compiler" at the bottom.


As far as I can see, Conal does not state that FP has problems. As I read it, he sees a promise in Backus’s lecture that is not realized by contemporary functional programming.


[flagged]


You could consider investigating why your comments are so consistently flagged on this website. We're here to read informative things, not snark about what so-and-so knows or does not know. So please just teach us about Backus' FP instead of going on an information-free tirade.


> This site simply doesn't allow discourse anymore. Not unless you already agree with somebody. Shame.

Hi, I really like discourse. Especially with people who have strong opinions in a field I'm learning about. Notably, 'functional programming' is such a field, and you seem like such a person.

However, statements like the above kind of turn me off from discourse. It heavily suggests there will be friction in the conversation. Thus, I would ask you to leave out the comments. Your points will still be made, and made more concisely. Moreover, it improves the chance someone will actually reply.

This is doubly so when responding to downvotes, rather than responding to negative replies. Because downvotes don't look nearly as aggressive as negative replies. Moreover, the person reading your response is probably not the one who downvoted.


Forgive them, for they know not what they do.

The people running this site though. The whole thing is designed to make life very difficult for anyone who steps out of line. And yes, it does seem to get worse and worse from here.


Is this about FRP?




