Can programming be liberated from the von Neumann style? (1978) (acm.org)
169 points by tosh 44 days ago | 129 comments




Booch: Your functional programming work was unsuccessful.

Backus: Yes.

Booch: Let's dwell upon that for a moment, because there were some other papers that I saw after your Turing Award lecture ["Can Programming Be Liberated from the von Neumann Style", 1977], which was also a turning point. I'd describe it as a wake-up call to the language developers and programmers, saying there's a different way of looking at the world here. The world was going down a very different path. Let me pursue that point of why you think it didn't succeed.

Backus: Well, because the fundamental paradigm did not include a way of dealing with real time. It was a way of saying how to transform this thing into that thing, but there was no element of time involved, and that was where it got hung up.

Booch: That’s a problem you wrestled with for literally years.

Backus: Yeah, and unsuccessfully.

http://archive.computerhistory.org/resources/access/text/201...


The rest of the interview provides interesting context there. He retired in 1991, and, since then, he seems to have been extremely uninterested in keeping up to speed with any newer programming languages.

Meanwhile, Haskell 1.0 entered the world in 1990, and the Haskell community figured out how to use monads to solve the time problem in 1993. So it's not necessarily that the problem wasn't solved; it just wasn't solved by him, and he perhaps wasn't aware of the solution.


I would really appreciate it if someone took the time to enlighten me as to how monads solve the problem.


Check out the Tackling the Awkward Squad paper that I mentioned elsewhere. It's about as lucid an explanation of the what and the why as I've ever seen, by one of the original authors of Haskell.

https://www.microsoft.com/en-us/research/publication/tacklin...


With pure code alone you cannot really do much of practical value. You can compute values, solve relational or logic problems, do math, etc., but you only end up with results at the end of the computation. Even printing those results, or making anything follow a specific sequence, may require monads if the language is pure enough. Pure lazy code simply won't do anything on its own, and the evaluation order of lazy execution is variable, even undefined. Purity taken too far won't do anything useful by itself.

So you use monads to add that magic when kicking off evaluations, often in specific sequences. They form code blocks that are explicit about having side effects, and so they are often used to separate pure code from impure code; the magic a monad adds could be anything one would like. It's kind of like an impure add-on to the existing language, but one that is explicit and precise about the taints on purity. Each monad does something different, so they are kind of like different add-ons.

However, monads can also be used purely; they are a more general tool than just the famous IO monad. The concept of monads is reused for abstraction, and it's useful that so many things are solved by reusing the same general pattern. To explain all of that would require more context on the language itself, i.e. Haskell. I'm sure someone with more experience has a short and easy explanation! :)

There are StackOverflow pages and such that have good explanations for those with a bit of familiarity with Haskell.
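
In the meantime, here is a minimal Haskell sketch of that pure/impure split (the names are just illustrative): the pure function has no IO in its type and no notion of time, while the do-block lives in IO, and its do-notation pins down the order in which the effects happen.

    -- Pure: no side effects, no notion of time, just values to values.
    discriminant :: Double -> Double -> Double -> Double
    discriminant a b c = b * b - 4 * a * c

    -- Impure: the IO type marks this block as effectful, and do-notation
    -- fixes the order in which the effects run.
    main :: IO ()
    main = do
      putStrLn "Enter a, b, c on one line:"
      [a, b, c] <- map read . words <$> getLine
      print (discriminant a b c)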


Comonads encode time, but the first good paper I know of that describes this is from 2006.


Well, the interview never really addresses this clearly and directly, but my sense is that the biggest problem he was grappling with was dealing with situations where things have to happen in a certain order. The formulation of functional programming that he presented in Can Programming Be Liberated from the von Neumann Style? is, at least to my reading, pure and lazy. For a while at least, nobody really knew how to deal with things where time matters in even the most basic way - ensuring that they happen in a certain order, or even at all - in a pure and lazy language. The Haskell community figured out how to use monads to crack that problem in the early 1990s. Haskell 1.0 didn't have monadic IO; what it had instead was, from what I've read, kind of an unwieldy hack.

SPJ covers these sorts of issues in some detail in his paper Tackling the Awkward Squad: https://www.microsoft.com/en-us/research/publication/tacklin...


IO solves the "time problem" by just encoding a von Neumann machine in the functional language, which is less like a liberation than a peace treaty.


Maybe my imagination is too small, but what else could a solution look like? You want sequencing, and IO gives you exactly that.
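
And the "encoded von Neumann machine" reading is visible in the usual semantic model of IO: an action is conceptually a function from the state of the world to a result plus a new world, so sequencing is just world-threading. A toy sketch in Haskell (not GHC's actual definition, though GHC's IO really is a primitive state transformer over a RealWorld token):

    data World = World          -- stand-in for the entire outside world

    newtype IO' a = IO' (World -> (a, World))

    -- Sequencing is world-threading: the second action can only run once
    -- the first has produced the next world, so the order is forced.
    bind :: IO' a -> (a -> IO' b) -> IO' b
    bind (IO' m) f = IO' $ \w0 ->
      let (a, w1) = m w0
          IO' k   = f a
      in  k w1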


stream, signal, and differential programming


Perhaps. But I think that you're also wandering away from the plot. Clinging to our contemporary way of thinking about this stuff isn't really going to help us understand what he was grappling with 30, 40 years ago. It's more likely to muddy the waters.

I don't see any evidence in there that Backus was looking for anything quite so lofty as the stuff you're bringing up. While it's true he wrote the paper that kicked off the whole idea behind FP, we've also got to keep in mind that, as he repeatedly mentions in the interview, he was a lifelong Fortran programmer who, in marked contrast to most fans of FP nowadays, held a real affinity for Fortran. It sounds to me like encoding von Neumann bits into the functional language so that they're there when you need them is exactly the sort of thing he was looking for. To my reading, his Turing lecture would seem to suggest the same.


Did Clean have uniqueness typing from its inception ('87)? That's another way to deal with time.


Well, more generally speaking, comonads encode costate, i.e. context, but I haven't yet seen them being used to represent time. Do you have a link to the paper?


http://cs.ioc.ee/~tarmo/papers/essence.pdf - also, it needs to be updated for the cofree comonad


Thank you!


I feel like Hagino had talked about coinductive data types in the early '90s.


Could you provide a link or example?


Which paper?


Moggi solved it in his thesis in ‘88.


Reactive-paradigm programming seems to have handled this problem in a satisfactorily functional way. But these higher reactive layers are almost always built on von Neumann-paradigm languages like JavaScript. So long as bare metal looks more like a Turing machine than a Church calculus, we will struggle to take the functional paradigm from top to bottom.


> So long as bare metal looks more like a Turing machine than a Church calculus

This will always be the case. The Turing machine is simply a much better model for implementing a machine capable of executing programs. The lambda calculus (and math in general) is obviously a better tool for _reasoning_, but the reason that reasoning in it is so powerful is precisely that you don't have to consider the execution cost of operations in order to reason about them.

Think about all of the common set theory operations. It is trivial to write PrimeNumbers ⊆ NaturalNumbers; however, think about _computing_ the result of this expression. Generating prime numbers is not a simple computational task, and even if it were, you could only ever verify the claim for a finite subset of the natural numbers, since computation is inherently finite.
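
To put the same point in code: in a lazy language you can write down the infinite object, but any actual check is necessarily bounded. A hedged Haskell sketch, using the classic (inefficient) trial-division sieve:

    -- The subset claim is one line of math, but computationally we can
    -- only ever verify it up to some finite bound n.
    primes :: [Integer]
    primes = sieve [2 ..]
      where sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    subsetUpTo :: Integer -> Bool
    subsetUpTo n = all (`elem` [0 .. n]) (takeWhile (<= n) primes)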

It's all fun and games until you need to execute the program. The functional style is great in terms of expressivity, but you want a Turing machine at the bottom executing it if you ever want the program to finish in your lifetime.


I may not get your point - but a mathematical expression in itself is not functional - as far as I know. You have to give a definition that will likely be an infinite recursive function - which is strictly equivalent in what it can do to a Turing machine.


General recursive function, you mean.

The point is that creating hardware that executes a Turing machine or something like it is easy, but constructing hardware that executes a general recursive function is so hard that it has never been done.


My point was that a random mathematical expression will not be constructive in itself. And you are right that either general recursive functions or lambda expressions are required to be equivalent to Turing machines. And while I agree that recursive functions are hard to execute on hardware, certain aspects of them do show up in CPU design: out-of-order execution is sort of lambda-like.


I think you're a few levels ahead of me in the theory (at least), but it just blew my mind to think about the fact that it's easier to compute all the numbers than it is to just compute the prime numbers. :brain-explode-emoji:


The whole thing still runs in and has to interact with the real world where everything is governed by time and state. You can't avoid having the impedance mismatch somewhere in your stack.


The inability to deal with time and state is not fundamental to functional PLs (though possibly a difficulty for pure FPs). Erlang handles time, state, AND space (though isn't that just time?) fantastically. Perhaps unsurprisingly, the lead creator was a physics PhD (candidate? dropout? I don't remember). Concepts like giving up on absolute synchronicity, and reliable transmission across unreliable channels, permeate the feel of the (functional) system. It is decidedly not pure functional programming, though, as the choice of a functional system was more of a pragmatic / developer-ergonomics choice than a theoretical one.


Yes it is, and Erlang isn't a counterexample. As you noted, it's not a "pure" FP, and it is exactly the messaging elements that let it handle these things.

Q: Erlang "object-oriented" after all?

A: Yes.

http://erlang.org/pipermail/erlang-questions/2009-November/0...


In a talk he admitted that Erlang is the unification of OO and FP, although he didn’t realize it when he created the language.

He also mentioned Hoare's CSP as an inspiration, with the caveat that he didn't agree with the lock-step synchronization, as messages should be async. Makes sense from a physicist's perspective.


I'm sorry, I have a lot of respect for Joe Armstrong, but this is simply something he got wrong, probably because he didn't fully understand Smalltalk objects. Before he 'admitted' as much, he understandably thought OO meant Stroustrup-style OO. Then someone pointed out to him that originally OO meant Smalltalk OO, which admittedly looks vaguely like Erlang message passing. But there are critical differences.


Something can be both object-oriented and functional; they're not exclusive.

However, although Erlang's process model superficially looks like Smalltalk objects, if you design your Erlang program like a Smalltalk program, your performance will suffer and you will be in a world of hurt when stuff breaks. The point of message-passing concurrency in Erlang is to create failure domains, not to organize your code.

In short, please do not structure your Erlang program like Smalltalk OO, and it is definitely not what a contemporary programmer would call OO.


> something can be both object-oriented and functional. they're not exclusive

Indeed: not a "pure" FP, and it is exactly the messaging elements that let it handle these things

> However, although erlang's process model superficially looks like smalltalk objects

Did anybody claim Erlang objects were like Smalltalk objects?


When anyone claims Erlang processes are like objects, they are referring to the message-passing nature of the processes, which is directly analogous to Smalltalk objects. There is no other reasonable way to call Erlang object-oriented. I am aware that Joe Armstrong got swept up in this analogy. My point is that following this superficial similarity to its logical conclusion will result in shit code, so you really shouldn't make that analogy.

It's so bad there is an explicit admonition against doing it in the Elixir standard library docs.


> the message passing nature of the processes

Yes.

> which is directly analogous to smalltalk objects.

No. Definitely not "directly", because Smalltalk messages are synchronous, whereas Erlang messages are asynchronous.

Both are special cases of the general concept of "messaging", and arguably the Erlang instance more closely matches what Alan Kay had in mind. (See his description of objects and messages as lots of little autonomous computers exchanging messages).
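
The split is easy to see with two toy "message sends" in Haskell (a hedged sketch using bare channels and MVars rather than an actor library):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan
    import Control.Concurrent.MVar

    main :: IO ()
    main = do
      -- Erlang-style: drop the message in the mailbox and carry on.
      mailbox <- newChan
      done    <- newEmptyMVar
      _ <- forkIO $ do
        msg <- readChan mailbox
        putStrLn ("actor received: " ++ msg)
        putMVar done ()
      writeChan mailbox "hello"   -- asynchronous: returns immediately

      -- Smalltalk-style: a message send is a call that blocks on the answer.
      reply <- newEmptyMVar
      _ <- forkIO (putMVar reply (42 :: Int))
      r <- takeMVar reply         -- synchronous: wait for the reply
      print r
      takeMVar done               -- let the actor finish before exiting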

> I am aware that Joe Armstrong got swept up in this analogy.

Not sure what you mean with "swept up". He realised it is obviously true. Considering he created the language, he might have had an idea of what he was talking about.

> so you really shouldn't make that analogy.

You shouldn't naively equate Erlang messaging with Smalltalk messaging, but the only one who has done this here is you.

¯\_(ツ)_/¯


> Definitely not "directly", because Smalltalk messages are synchronous, whereas Erlang messages are asynchronous.

I think you have a problem comprehending English. Directly does not mean exactly. Most of the rest of what you wrote is basically rehashing what I am claiming.

> Considering he created the language, he might have had an idea of what he was talking about.

No shit. However, he did not know as much about OO, or the history of OO, because at the time it was picking up steam he was very much busy building Erlang. 2009 is around the time of the turning point in Joe's understanding of OO:

http://erlang.org/pipermail/erlang-questions/2009-November/0...

> "swept up"

Someone pointed out the full history of OO to him, he realized that his naive understanding of OO (basically, C++ objects) was not complete, and he quickly grabbed on to his new understanding. OK, great. It also sounds diplomatic, and now everyone who complained that his language was not OO, and therefore bad, can be shut up. What would you do?

> but the only who has done this here is you

Nope, so is Joe Armstrong.


Of course, but in the end everything is an abstraction. You can use functional reactive programming to expose a stateful API, for example; there’s nothing wrong with that.

And on the other end of the spectrum, if you go low level enough you realize that everything is stateful in the machine, so you will need to build (or already have) functional abstractions on top of that.

Bottom line, it’s almost impossible to have a single paradigm on all levels of your stack.


Monadic IO, uniqueness types (as in Clean), voluntarily working in a pure subset until the End of the World (Scala), effect-first languages, etc. We can impose actual semantics upon an ultimately imperative computer. Is it restrictive? Yeah, but that's, like, the point, man. Restricting what can happen allows us to reason about what might happen given a piece of 'random' code. In an imperative-only world, capabilities serve a similar function.

The fact that the world is made up of elementary particles subject to the 'laws' of thermodynamics has very little impact on the semantics we choose for our programming languages. (If you want to take your appeal to 'nature' that far.)


I was replying to a comment that specifically bemoaned that the low level hardware is not functional.


I will call reactive paradigms "satisfactory" after I get to see any real framework built on them that makes it easy to read and write large programs.

Yes, they are very promising. But since every single widely deployed language (and "widely" here includes things like Haskell) is a bad fit for them, we need some more development before that promise is fulfilled or fails.

(And by the way, the JS frameworks, popular as they are, cannot claim to be free from the von Neumann constraints. That requires a pure language.)


I agree. For me, it's when I'm working with something where I can get functionality like Differential Datalog [1] that I move over to a reactive-ish paradigm. It's half the answer, and there is a lot of state-related stuff in the other half.

There are probably other ways to provide the rest of the pieces: rules engines, some of those DSLs that describe stream graphs with composable nodes, other incremental-querying stuff I can't remember right now ...

[1] https://github.com/vmware/differential-datalog


I'm not sure it meets your criteria but R Shiny (https://shiny.rstudio.com/) is purely reactive.


I've always found Arrowized FRP to be really nice to work with.


The abstract concept of arrowized FRP is quite promising.

The practical implementations are either way too complex for everyday usage¹ or specialized to a narrow application. Also, arrows create a completely different sublanguage within any mainstream language that supports them; a sublanguage where things don't compose as nicely.

1 - You can't demand that people convert a simple procedure into a series of unrelated steps spread all around; that's a recipe for bad code. The fact that people do such things in unrelated domains is not a reason to have it here; it creates bad code there too.


Angular uses RxJS as a primitive. It's a breeze writing reactive components and applications that way.


I have yet to see even a medium-sized program stay "a breeze" due to a choice of this kind. Tons and tons of small parts that give the facade of easy-breeziness, but then the whole that they make up is left incomprehensible. I would be delighted to see counter-evidence.


Netflix uses RxJS for their UI.[0] Not the end-all be-all, but some evidence.

[0] https://www.youtube.com/watch?v=AslncyG8whg


Not entirely sure how I rank this evidence. Pretty sure the UI has gotten less to my liking as the years go by.

Still, definitely counts as evidence. Thanks!


This whole interview is on YouTube as well:

https://www.youtube.com/watch?v=dDsWTyLEgbk


Some past related threads:

Can Programming Be Liberated from the von Neumann Style? (1977) [pdf] - https://news.ycombinator.com/item?id=21855249 - Dec 2019 (32 comments)

Can Programming Be Liberated from the von Neumann Style? (1977) [pdf] - https://news.ycombinator.com/item?id=14673817 - June 2017 (3 comments)

Can Programming Be Liberated from the von Neumann Style? (1978) [pdf] - https://news.ycombinator.com/item?id=13210988 - Dec 2016 (107 comments)

Can Programming Be Liberated From The Von Neumann Style? (1977) [pdf] - https://news.ycombinator.com/item?id=12159792 - July 2016 (175 comments)

Can Programming Be Liberated from the von Neumann Style? (1978) [pdf] - https://news.ycombinator.com/item?id=10182712 - Sept 2015 (57 comments)

Can Programming Be Liberated From The Von Neumann Style? (1977) [pdf] - https://news.ycombinator.com/item?id=7671379 - April 2014 (72 comments)

Can Programming Be Liberated from the von Neumann Style? (classic) - https://news.ycombinator.com/item?id=768057 - Aug 2009 (34 comments)


Feature request for the submit page: live AI fuzzy suggestions of URLs to prevent dupe submits.


It's important to understand that this submission doesn't count as a dupe by HN's standards. On HN, reposts are fine after a year or so, and/or if an article hasn't had significant attention before (the latter isn't the case here, but often is). https://news.ycombinator.com/newsfaq.html

People post links to past discussions not because they're bad (look at all those reposts!) but because they're good (wow, interesting comments from the past!) - at least in the best case.

Because there's so much randomness in what gets noticed on /newest, overzealous dupe detection would prevent many good articles from ever getting discussed. That would be a poor trade for better deduplication! HN's dupe detector is therefore left porous on purpose (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). We will probably implement karma sharing amongst the various submitters of an article, though, at least when it's a first-time discussion.


Interesting. Maybe there ought to be a "Respin" HN item type which can quote and encapsulate a previous item with semantic links. That way, there would be fewer unconnected "dots" requiring manually posted links to previous items.


No thank you. It's perfectly fine to discuss something that's been discussed before. I get something new out of this paper every time I read it. We are all learning and growing every day. The conversation wasn't complete just because this has been discussed before.


Spreadsheets and FPGAs are two places where there's no focus on a "program counter".

Metamine was amazing... he had figured out how to safely mix procedural and reactive code in one place without going insane in the process:

https://web.archive.org/web/20201101065701/https://github.co...

I watched a set of videos on YouTube that were amazing, but now I can't find any links to them. 8( I would really like to know where it went.

[Edit: I found a fork of the repository at https://github.com/caminadab/metamine

Of course I forked that fork to have a copy for safety]


In Metamine there are two different modes of assignment

  x = y      is an equation, any time the expression y changes, x will be recomputed

  a := b     is an assignment, it only happens once, like we're all used to.
It is profoundly powerful to be able to use them both in one program. There appears to be a type associated with each: you can't assign to an equation variable, and vice versa.
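
Here's a hedged sketch of how those two modes could be modeled in Haskell with push-based cells (newCell, setCell, equate are made-up names; the real Metamine implementation surely differs):

    import Data.IORef

    -- A cell holds a value plus the recompute actions of its dependents.
    data Cell a = Cell { val :: IORef a, deps :: IORef [IO ()] }

    newCell :: a -> IO (Cell a)
    newCell v = Cell <$> newIORef v <*> newIORef []

    -- ":=" flavour: a one-shot write, which then pushes to dependents.
    setCell :: Cell a -> a -> IO ()
    setCell c v = do
      writeIORef (val c) v
      sequence_ =<< readIORef (deps c)

    -- "=" flavour: dst = f src, re-run every time src changes.
    equate :: Cell b -> (a -> b) -> Cell a -> IO ()
    equate dst f src = do
      let recompute = readIORef (val src) >>= setCell dst . f
      modifyIORef (deps src) (recompute :)
      recompute

    main :: IO ()
    main = do
      y <- newCell (1 :: Int)
      x <- newCell 0
      equate x (* 2) y              -- x = 2 * y   (equation)
      setCell y 10                  -- y := 10     (x recomputes)
      print =<< readIORef (val x)   -- 20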


Don't you accomplish nearly the same thing with memoized functions?

This seems to just provide a wrapper that obfuscates things


Memoized functions have to be pure (free of side effects).

This is more like a multithreaded application, where one set of calculations is kept updated by a background thread, without most of the attendant synchronization issues. There was some very clever InterBase-like generational tagging of data to prevent deadlocks.


Well, they don't strictly need to be pure, but it's generally a good idea if you want to stay sane debugging your system. On the second point, I know that in Clojure's re-frame this is handled by something called "subscriptions". It provides a sort of extension of memoization where functions subscribe to a state atom and you can create this dependency graph.

I'll be honest, I haven't used re-frame myself; I just came across the concept while using cljfx https://github.com/cljfx/cljfx/#subscriptions-and-contexts so I'm not quite sure if it's all updated asynchronously or on each render - but the framework is very powerful and simplifies application logic by an order of magnitude.

But in both re-frame and cljfx it's tied to the GUI rendering, which is a shame - because, as your comment suggests, this is a more general-purpose pattern. In-library, it remains a bit too verbose, with lots of boilerplate and API to remember, and it's all a bit too difficult to reason about at times. It seems like it should be something that's part of the language "runtime". However, in the end it still works through memoized functions, which is a bit more explicit than a second magical assignment operator :)

Anyway, a bit of a ramble.. thanks for the info :)


> second magical assignment operator

"magical assignment operator" - I love it. 8)


Svelte does something similar with its `$: x = ...` syntax. I wonder if it was inspired by this.


Pretty sure this is used in the likes of Metafont as well. It's a great tool where it's applicable - though oddly not that applicable in most programs, especially those of the "plumbing" variety.


This technique (with the same assignment syntax) is used in Make, perhaps the quintessential plumbing program.


Though Make is one of the quintessentially disliked versions of that type of plumbing program. :( Such that I'm not sure on which side of the evidence line it lands.


OPA's Rego has the same syntax and semantics


Metamine looks interesting, thanks!

Here's a tutorial video I found demonstrating programming the game Snake http://web.archive.org/web/20201014024057if_/https://www.you...

Found from a Reddit discussion http://web.archive.org/web/20201130115327/https://www.reddit...


> Spreadsheets and FPGAs are two places where there's no focus on a "program counter".

I'll give you spreadsheets, but while it's not the "focus" you will very often see softcores and lots of state machines being used in FPGAs. Some things are just easier to reason about and write that way.


Yes, softcores are often used to set up peripherals and do some light-duty computation, but the real-time aspect of an FPGA is something best set up in Verilog or VHDL, or that type of thing.


Metamine looks amazing. I don’t know if I’ve seen it before. Thanks for sharing. Now I’m really curious.


Even when this paper was published, Backus's comments toward APL did it a massive disservice.

Modern descendants of the language have freed it entirely from "expression/statement" splits, and the collection of operators has grown richer. More than any other family, APLs today represent the most successful realization of function-level programming as yearned for in this paper.
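
For reference, the paper's flagship definition is Def Innerproduct ≡ (Insert +) ∘ (ApplyToAll ×) ∘ Transpose. A hedged Haskell rendering of the same composition-of-combinators style (APL gets there even more tersely):

    -- insert/fold is sum, apply-to-all is map, and the transpose of a
    -- pair of vectors is zip.
    ip :: ([Double], [Double]) -> Double
    ip = sum . map (uncurry (*)) . uncurry zip

    -- ip ([1,2,3], [4,5,6]) == 32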


I'm too unknowledgeable to judge, but I consistently get the feeling when I interact with a new programming language that they all fail to encompass everything that a modern system is capable of, or that they provide mechanisms with such high overheads that they're no better, or even worse, than the programming languages of yesteryear.

One example of this for me is that you don't need special drivers to make your program run on a CPU; that would sound quite strange to most developers, I think.

But many high-end systems have full-blown GPUs on them that we basically never use unless they're specifically interfaced with for video purposes, scientific computing, etc.

Even then, you still have to use specific interfaces to access that hardware. It's not intrinsic to the language.

But say I wanted to run some generic routines across CUDA cores/stream processors: I have to explicitly move into the realm of GPGPU programming to do that, and then you have to write "compute kernel" code. It's not just... programming. It's fully terminology'd, its own thing.

There's no standard library for any programming language that automatically sends your routines to the GPU.

I don't know how all of this should be approached. I've been exposed to all of these systems, but they're all clearly isolated.

All I know is that it doesn't feel right. I fully engage development efforts feeling like I'm not using the full compute power I have available to me, and a part of that, I think, is that today we just don't think like developers did 50 years ago.

We have so much compute resource available to us that, I think, in some very fundamental ways--industry-wide--we no longer understand how to use these systems. I know I don't.

I'm consistently asking myself why certain operations I write take milliseconds for some reason instead of microseconds. That's tremendously concerning to me.


C++/AMP (https://docs.microsoft.com/en-us/cpp/parallel/amp/cpp-amp-ov...) is something along those lines. I dunno if it strictly satisfies your "intrinsic to the language" requirement - you still have to explicitly request GPU processing by using the appropriate parallel_* algorithms and restrict(amp) - but it's intrinsic in the sense that the compiler takes a C++ lambda and compiles it to run on the GPU.

But I don't think that you could reasonably make something that just automatically sends any random routine to the GPU, or at least not efficiently. The processing model is just inherently different.


While I agree that it is more difficult to distribute computational load between CPU and GPU than it ought to be, the GPU is not necessarily faster than the CPU for all workloads; it is very workload-specific. Most computations can probably be executed faster on the CPU.


Agreed, but I take the GP's argument to be that the languages aren't sufficiently expressive to allow us to communicate generically about workloads in a way that allows a system to correctly determine whether they would be faster on a CPU or not.


Correct. Another good example here is that, to my knowledge, developers don't do things like write directly to L1, L2, or L3; they tune their code so that those caches are used effectively.


What are your compiler, assembler, glibc, and the kernel, except specialized drivers for your CPU?

They are certainly more polished than the tools around GPUs or other I/O co-processors, but I don't see the difference.


To be honest, we don't even manage parallel computations on the CPU well. In most languages there is at most basic SIMD, with some autovectorization. But if you don't write your code with threads explicitly, you will only ever use one core.


Most (I assume you meant implementations) don't have even basic SIMD support like that. Even big and popular ones, such as JS, the JVM, Python, Lua, .NET, Go... A couple of those do have SIMD intrinsics.


The JVM will have very good vector instructions coming with Project Panama.


SIMD instructions were available on hardware running the JVM as early as Java 1.0 (Java 1.0 and the VIS instructions in UltraSPARC both date to around 1995). So if this Panama stuff comes next year, they got around to it in 27 years. I predict we'll have autovectorization in another 27 years, in 2048.


Autovectorization has been done by the JIT compiler since forever. But even state-of-the-art autovectorization is very finicky. What's coming in Panama is an abstract API over SIMD instructions that will reliably compile to the corresponding instructions, while providing a safe fallback in the form of a serial for loop on platforms that don't provide the means. It is not behind the times; it will be ahead of most languages.


Interesting!

This seems a good article on the developments in JVM autovectorization: https://inside.java/2021/01/27/extending-c2-autovectorizatio...


Isn't that something that Mono has had for ages?


You say we don't need drivers for the CPU, but there is still software that has trouble running on the M1. And there is plenty I can't get to run on a Raspberry Pi.


Quite arguably, we do - such a driver would be a "compiler". Most compilers do some really wild things to reshape your code to take (some) advantage of new processor features.

There's a huge amount of "brain offloading" that's deferred to the compiler guys, and in a lot of ways that's the point of having a driver as well: drivers adapt generic commands like "write a file to disk" to work for you on any kind of disk, and compilers adapt your C (or whatever) code to work on, e.g., any of the current Intel chips of the last N years. Within those there are quite a few breaking incompatibilities; I've had code compiled for one particular i7 CPU fail on a different one because it was taking advantage of new instructions only available on that chip. When I reconfigured my compiler to hit a wider compatibility target, the issue went away.

So yeah - there's a huge amount of knowledge "hidden away" there.


Yeah. I don't know why I only hinted at that argument. This was exactly what I meant.


Julia with ArrayFire seems to be straightforward, efficient and easy to use.


The thing in this paper that I see missing from most discussions of Functional Programming is the use of an algebra to derive programs.

I've worked a bit with the concatenative functional language Joy, and this sort of algebraic manipulation is easy and fun, and seems to permit (nearly) bug-free development.

Here's a very simple example of deriving a function for the quadratic formula:

https://joypy.osdn.io/notebooks/Quadratic.html

A more complex example deriving a combinator "treestep" to process tree datastructures:

https://joypy.osdn.io/notebooks/Treestep.html
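
For a taste of what such algebraic manipulation looks like in a more mainstream functional setting, here is a small equational derivation in Haskell (my illustration of the style, not one of the linked Joy derivations): the map fusion law.

    -- Claim: map f . map g == map (f . g)
    --
    --   (map f . map g) []     = map f (map g [])
    --                          = map f []                      -- map over []
    --                          = []                            -- map over []
    --   map (f . g) []         = []                            -- map over []
    --
    --   (map f . map g) (x:xs) = map f (g x : map g xs)        -- map over (:)
    --                          = f (g x) : map f (map g xs)    -- map over (:)
    --                          = (f . g) x : (map f . map g) xs
    --   map (f . g) (x:xs)     = (f . g) x : map (f . g) xs    -- map over (:)
    --
    -- Both sides agree on [] and reduce the (:) case to the same shape, so
    -- by induction they are equal - and the fused side traverses only once.

    unfused, fused :: [Int] -> [Int]
    unfused = map (+ 1) . map (* 2)
    fused   = map ((+ 1) . (* 2))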


I came across this paper recently while looking at Kitten (it was actually mentioned on HN before): http://evincarofautumn.blogspot.com/2012/02/why-concatenativ...

I think it really explains the idea. I always liked combinatory logic more than lambda calculus, although it hadn't clicked for me that concatenative languages are essentially the formalism for it (as opposed to ML-style syntax).

Although I would say functional programming is a somewhat orthogonal concept; concatenative languages can have functions with side effects, as Forth does.


Perhaps the universe is state-based, and not function-based...

Programming languages are a way to represent reality, and to manipulate it to your desire.

Object-oriented and procedural programming both use functions to manipulate state, while in pure functional languages you use functions and unnamed variables to represent state.

The problem is that when you communicate, you always do it with state and not functions, from APIs to storing data, as it allows two very different systems to understand each other without having the same internals. A Java web-based API/program and a Swift/Objective-C program running on a phone can easily communicate with each other via state (aka REST or SOAP APIs), and they don't have to know the internal state of each other.

You can save functions and pass them along, but the other party has to know how to run them, and has to have the same type of hardware in order to run them.

I really think pure functional programming is a mathematician's pipe dream, theoretical CS nerds trying to shoehorn math/algebra equations into being the default programming paradigm; for practical computer science problems the von Neumann style (statements and expressions) has won, as it is the most practical.

The only truly successful alternative to von Neumann-style stateful languages is really SQL and its derivatives, which are a declarative style of programming language. You can pass SQL queries around, and they will be understood by all kinds of systems. But ultimately, they all act on some kind of state/data in a table.


> The problem is when you communicate, you always do it with state and not functions.

I'm not sure that's true. A "callback" is a way of communicating with a function as the message. Callbacks are a way for systems to communicate while hiding their implementation details. (One could argue that the callback itself is a first-class object -- a piece of state -- but then we are nonetheless communicating with both state and functions.)


Callbacks are a signal that your operation is done and the data is ready... usually they carry some kind of state (e.g. success, error, the data read from disk/API, etc.).

They are usually used within one program itself, and not across systems. Something like the Firebase realtime database, or webhooks, are callbacks across systems, but they are always used to pass data/state, and not functions themselves.

Again, SQL and maybe regex (which is not a full programming language anyway) are the most successful alternatives to normal object-oriented/procedural programming.

Most functional languages have not become successful. I learned Prolog (a declarative, logic language) in the 90s, and I could see how it might be useful in some situations, but it is hard to do anything practical with it.

Some functional concepts, like map/reduce, have become popular in mainstream languages, as they are useful in certain scenarios, but pure functional languages have not.


This sounds like a byproduct of your natural language rather than a fact about the universe per se. Native American languages rather famously define things by function rather than by state (e.g. names like "runs with wolves"). In a different timeline where North America was never settled by Europeans, perhaps functional programming would be the default and the von Neumann style would be the weird thing nerds think about.


Well, if you go down to the machine code all functions are data, no?


Did you just compare passing data to passing functions?

Because absolutely nothing about transferring data is specific to the program language paradigm chosen.

Here's a task for you: Build a graph of objects holding state with your chosen OO language. Now write that state to disk and read it back in - guaranteeing a consistent state.

Oops, a thread mutated some of the state while writing to disk?

Damn it, need to add some synchronization stuff. In all my objects. Oh, now all my mutations need to go through a semaphore?

No, wait, I'll implement the Memento pattern for all my objects, that will help? (Nope, it won't; you still need to synchronize across threads.)

If you operate on immutable data, that exercise in a functional language is literally two lines of code.
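
For what it's worth, here is roughly what that looks like in Haskell (a hedged sketch; a production system would use a real serialization library and worry about crash-safe writes):

    -- An immutable adjacency-list graph: nothing can mutate it mid-write,
    -- so no locks or Memento machinery are needed.
    type Graph = [(Int, [Int])]

    saveGraph :: FilePath -> Graph -> IO ()
    saveGraph path = writeFile path . show

    loadGraph :: FilePath -> IO Graph
    loadGraph path = read <$> readFile path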


Of course, with pure functional programming with no state under the hood, you would have failed at your original challenge of "build a graph" without further constraints (it's rather hard to ensure totality or that every pointer is accessible when you allow arbitrary cycles).

I'm also not sure why you feel functional programming relieves you of the responsibility of making sure appropriate barriers are created between writes and reads to disk--message passing is more fundamental than programming paradigms, and you physically must make sure your messages synchronize somehow (and that you can recover from corrupted state, if there's a crash). Just writing once simply is not enough here! Also, without mutable state, how do I find the new head (where the appended stuff starts?). This has historically been a very annoying problem to deal with!

Not that I think most of these concerns affect us much during day-to-day programming, by the way! I just find this a particularly poor example of the benefits of FP. In pretty much any useful distributed system, there will be some moving parts that need to be synchronized (like the actual messages to the storage layer), some state that needs to be maintained (root information a bootloader can follow to restore the system), and some weird domain specific stuff that's very hard to encode in types (like crash recovery, in this case). This is true whether or not what you want to do is conceptually purely functional--someone has to make decisions at each of these points about whether and how to expose the inner workings of the system, often in a way that trades off against things like flexibility, space overhead, reliability, and performance. And those decisions have to be relayed to the computer via some programming language that can talk state (or embedded into the hardware).

Accepting that reality doesn't mean that everything has to be written in a "multiparadigm language" or anything like that, but it does mean being cautious about declaring that your favorite technique will automatically solve the issues plaguing another area of CS.


This is just a comment about implicit vs. explicit state. What you described is not inherent to OO, though of course it's much easier to have implicit state in OO.

I'm working in a codebase now that is very OO, but also is very diligent about having state be explicit. The exercise you described would also be 2 lines of code in this project.


Except, I would say, quantum computation, which is truly more natural considering the universe is quantum, definitely feels more functional-flavored.


> Perhaps the universe is state based, and not function based...

That didn't stop physics; why should it stop CS? No paradigm has won; the game is not over yet.


Richard Bird has pursued the algebraic approach in a lazy functional language context. He and Oege de Moor presented their work on the algebraic approach to lazy functional languages in textbook form in Algebra of Programming in the late '90s. Not terribly up to date, but still really great stuff, and a place to start if you want to understand the more recent work. That book can be found on the web. Don't try to buy it on Amazon - they have a highway-robbery price. If you need to buy it, you might try Google Shopping; I saw several copies for under $10.


Ah, that's a great book, thanks!


b sqr 4 a c * * - sqrt

Yeah no. This being "bug free" seems like a joke.


I don't see an objective reason this is more or less clear than

    sqrt(sqr(b) - 4 * a * c)
or

    (sqrt (- (sqr b) (* 4 a c)))
Honestly, by objective measures the traditional style is probably the worst. The Forth style makes it easy to see large stacks, because you'll see long sequences of literals. The sexp style makes it easier to work with subexpressions. Neither the Forth nor the sexp style requires remembering PEMDAS. The sexp style is easier to parse, and the Forth style doesn't even need a parser.

The only thing going for the traditional style is that it is an ugly approximation of mathematical notation - but that notation itself loses most of its value in the uglifying.


This is just a "minor usability issue". Seriously. In Factor, you can solve this with references to local variables, which under the covers manipulate the stack as needed. It can also typecheck, so you wouldn't be able to write the expression in a way which unbalances the stack.


John Backus won the Turing award in 1977 and gave a Turing Award Lecture with the same title. I was in the audience and it made a big impression on me. I had just finished my Masters degree in computer science and had decided to work for a few years before returning to grad school for more.

It was an interesting time to be in CS. To be fair, CS has remained interesting to me to this day. But back then, right out of school, I felt like I knew something about almost every area of CS, maybe not much but at least something. So everything seemed possible.

The field has become so large that even after a lifetime of working and studying in the field of CS, even a modest understanding of one slice of CS is all we can hope for. Nevertheless, it stays fascinating.


Given that the question is still being asked 43 years after the paper was published, my guess is the answer is "no".

I would not be surprised though if this paper is still posted in the HN equivalent in 2078.


Eric Normand has a great podcast episode where he talks through the paper, summarizing and commenting on it in a way that targets a more general audience.

https://lispcast.com/can-programming-be-liberated-from-the-v...


Joy [1] is a function-level stack-oriented (read: FORTH-y) programming language inspired by Backus's FP. Funny how these things end up connected.

[1]: https://en.wikipedia.org/wiki/Joy_(programming_language)


I'm under the impression that it must actually work the other way around: i.e., exotic non-von Neumann hardware requires new programming approaches.

Example: a systolic array, which you are only likely to encounter as an FPGA bitstream, by its very nature requires a different approach. Flow based programming is a better match for this specific system.

What we have seen since this paper was written is often the shoehorning of new concepts under control of standard von Neumann style programming, whatever the reasons may be, often including programmer familiarity.


This style of programming, with features such as unnamed arguments, sounds very difficult to understand. An important part of programming is thinking clearly by using a simple and explicit writing style. Clear code should be as close as possible to explaining the problem domain and how you would like to influence the outcome.


It most likely sounds difficult to understand because you are not familiar with it. Try working through the first few exercises in SICP; you'll see that it can be quite straightforward.


As someone who's spent many, many hours proving things in Coq, I concur with the original post--pointfree stuff is usually much less easy to read than the same function with named variables. A lot of that comes down to not being able to tell what's going on at all if you don't already know the arity of the functions you're using.


Nevertheless, most mainstream imperative languages (e.g. Java) have adopted lambdas, "fluent" style, async/await (which is a monad), and other functional features in recent years, and they continue to adopt them.
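
For instance, async/await transliterates almost one-for-one into Haskell's async package, with do-notation playing the role of the await sequencing (a hedged sketch; the trivial tasks are just placeholders):

    import Control.Concurrent.Async  -- third-party "async" package

    main :: IO ()
    main = do
      t1 <- async (pure (2 + 2 :: Int))  -- like: const t1 = startWork()
      t2 <- async (pure (3 * 3 :: Int))
      a  <- wait t1                      -- like: await t1
      b  <- wait t2
      print (a + b)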


On a half-serious note, the traditional 0-1 computing model is the origin of mental polarisation in our society, because such a model permits no shades of gray. A better model would use probabilistic bits, where each bit represents a probability density.


This reminds me of this passage from the satirical FeministSoftwareFoundation:

"The traditional binary foundation of 1s and 0s is deeply problematic: 1 is inherently phallic and thus misogynistic. Also, some 1s are 0s, and some 0s are 1s. It is not fair to give them immutable labels. Instead, we have 0s and Os as our fundamental binary logic gates. They symbolise/-ize the varying, natural, and beautiful differences of the female vaginal opening.

0 is to take the conventional value of 0.

O is 50% of the time 0, and 50% of the time 1. The determination of this depends on how the underlying logic feels at the moment."


Haven't heard "satire" that shite in ages lol


So there wasn't 'mental polarisation' in society before computational models were created? I don't think that's correct...


I await our enlightened future where we can run programs that have differing states that can be simultaneously true.

What one thread on my CPU calculates as true is its truth. Other threads might not agree, because it's not their truth.


Isn't that what quantum computing is on about?


If only ternary computing had won! https://en.wikipedia.org/wiki/Ternary_computer


I would like a system that produces “true,” “false,” and “enthusiastic” results. Enthusiasm left as an exercise for the implementor.


I prefer trulse


Design a computer that does this. If it works significantly better, you'd become a billionaire.


You meant a 0% serious note, right?

The 0s and 1s are in the human brain, friend, not the computers. You can make computing as probabilistic as your heart desires; people will not stop simplifying and caricaturing the world to fit their biases, not by one "bit".


That’s why I use only floats in my code, and a patched Python interpreter that replaces “if” with “seemslikely”.


#define true ((double)rand() / RAND_MAX < 0.99)


Adding the integers 2 and 2 together is definitely a grey area.


Maybe.


Ever heard of int?



