Booch: Let’s dwell upon that for a moment, because there were some other papers that I saw after your Turing Award lecture [“Can Programming Be Liberated from the von Neumann Style?”, 1977], which was also a turning point. Because I’d describe it as a wake-up call to the language developers and programmers, saying there’s a different way of looking at the world here. The world was going down a very different path. Let me pursue that point of why you think it didn’t succeed.
Backus: Well, because the fundamental paradigm did not include a way of dealing with real time. It was a way of saying how to transform this thing into that thing, but there was no element of time involved, and that was where it got hung up.
Booch: That’s a problem you wrestled with for literally years.
Backus: Yeah, and unsuccessfully.
Meanwhile, Haskell 1.0 entered the world in 1990, and the Haskell community figured out how to use monads to solve the time problem in 1993. So it's not necessarily that the problem wasn't solved; it just wasn't solved by him, and he perhaps wasn't aware of the solution.
Monads let you add "magic" while sequencing evaluations, often in a specific order. They make up code blocks that are explicit about having side effects, so they're often used to separate pure code from impure code, since the magic a monad provides can be anything you'd like to add. It's kind of like an impure add-on to the existing language, one that stays explicit and precise about the taints on purity. Each monad does something different, so they're like different add-ons.
However, monads can generally be used purely too; they are more general tools than just the famous IO monad, and the same concept is reused for all sorts of abstraction. It's useful that so many things are solved by reusing the same general pattern. Explaining all of that would require more context on the language itself, i.e. Haskell. I'm sure someone with more experience has a short and easy explanation! :)
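In the meantime, here's a minimal illustrative sketch (the names `double` and `askNumber` are made up, not from the interview or the paper): the types alone keep the pure parts separate from the effectful, time-ordered parts.

    -- A pure function: no time, no side effects, just a transformation.
    double :: Int -> Int
    double x = 2 * x

    -- An IO action: the type is explicit about side effects, and the
    -- do-block sequences them, supplying the "element of time" Backus
    -- said the paradigm lacked.
    askNumber :: IO Int
    askNumber = do
      putStrLn "Enter a number:"
      line <- getLine
      return (read line)

    main :: IO ()
    main = do
      n <- askNumber     -- effects happen in this order...
      print (double n)   -- ...while double stays perfectly pure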
There are StackOverflow pages and such that have good explanations for those with a bit of familiarity with Haskell.
SPJ covers these sorts of issues in some detail in his paper Tackling the Awkward Squad: https://www.microsoft.com/en-us/research/publication/tacklin...
I don't see any evidence in there that Backus was looking for anything quite so lofty as the stuff you're bringing up. While it's true he wrote the paper that kicked off the whole idea behind FP, we've also got to keep in mind that, as he repeatedly mentions in the interview, he was a lifelong Fortran programmer who, in marked contrast to most fans of FP nowadays, held a lifelong affinity for Fortran. It sounds to me like encoding von Neumann bits into the functional language so that they're there when you need them is exactly the sort of thing he was looking for. To my reading, his Turing lecture would seem to suggest the same.
This will always be the case. The Turing machine is simply a much better model for implementing a machine capable of executing programs. The lambda calculus (and math in general) is obviously a better tool for _reasoning_, but the reason that reasoning in it is so powerful is precisely that you don't have to consider the execution cost of certain operations to reason about them.
Think about all of the common set theory operations. It is trivial to write: PrimeNumbers ⊆ NaturalNumbers, however, think about _computing_ the result of this expression. Generating prime numbers is not a simple computational task, and even if it were, you can only ever verify it for a subset of the natural numbers since computation is inherently finite.
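A hedged Haskell sketch of that point (the names here are made up for illustration): it's easy to state the subset relation, but any actual computation can only ever check a finite prefix.

    -- An infinite list of primes via a simple (inefficient) sieve.
    primes :: [Integer]
    primes = sieve [2..]
      where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    -- "PrimeNumbers ⊆ NaturalNumbers" can only be checked for a finite
    -- prefix; the full claim is not decidable by running a program.
    checkPrefix :: Int -> Bool
    checkPrefix n = all (>= 1) (take n primes)

    main :: IO ()
    main = print (checkPrefix 1000)  -- True, but only for the first 1000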
It's all fun and games until you need to execute the program. The functional style is great in terms of expressivity, but you want a Turing machine at the bottom executing it if you ever want the program to finish in your lifetime.
The point is that creating hardware that executes a Turing machine or something like it is easy, but constructing hardware that executes a general recursive function is so hard that it has never been done.
Q: Erlang "object-oriented" after all?
He also mentioned Hoare's CSP as an inspiration, with the caveat that he didn't agree with the lock-step synchronization, as messages should be async. Makes sense from a physicist's perspective.
However, although erlang's process model superficially looks like smalltalk objects, if you design your erlang program like a smalltalk program, your performance will suffer and you will be in a world of hurt when stuff breaks. The point of message passing concurrency in erlang is to create failure domains, not to organize your code.
In short, please do not structure your erlang program like smalltalk OO, and it is definitely not what a contemporary programmer would call OO.
Indeed: not a "pure" FP, and it is exactly the messaging elements that let it handle these things
> However, although erlang's process model superficially looks like smalltalk objects
Did anybody claim Erlang objects were like Smalltalk objects?
It's so bad there is an explicit admonition against doing it in the elixir std lib docs.
> which is directly analogous to smalltalk objects.
No. Definitely not "directly", because Smalltalk messages are synchronous, whereas Erlang messages are asynchronous.
Both are special cases of the general concept of "messaging", and arguably the Erlang instance more closely matches what Alan Kay had in mind. (See his description of objects and messages as lots of little autonomous computers exchanging messages).
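To make the sync/async distinction concrete, here's a hedged sketch (in Haskell, to keep one language across the examples in this thread; the names are invented): a synchronous call blocks the sender until it has a result, while an async send just drops a message in a mailbox.

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.Chan (newChan, readChan, writeChan)

    -- "Smalltalk-style": synchronous; the sender waits for the answer.
    syncCall :: (Int -> Int) -> Int -> Int
    syncCall f x = f x

    main :: IO ()
    main = do
      mailbox <- newChan
      -- "Erlang-style": a receiver process with its own mailbox.
      _ <- forkIO $ do
             msg <- readChan mailbox
             putStrLn ("received: " ++ show (msg :: Int))
      writeChan mailbox 42        -- async send: no waiting for a reply
      print (syncCall (+ 1) 41)   -- sync call: blocks until the result exists
      threadDelay 100000          -- give the receiver time to run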
> I am aware that Joe Armstrong got swept up in this analogy.
Not sure what you mean with "swept up". He realised it is obviously true. Considering he created the language, he might have had an idea of what he was talking about.
> so you really shouldn't make that analogy.
You shouldn't naively equate Erlang messaging with Smalltalk messaging, but the only one who has done this here is you.
I think you have a problem comprehending English. "Directly" does not mean "exactly". Most of the rest of what you wrote is basically rehashing what I am claiming.
> Considering he created the language, he might have had an idea of what he was talking about.
No shit. However, he did not know as much about OO, or the history of OO, because at the time it was picking up steam he was very much busy building Erlang. 2009 is around the time of the turning point in Joe's understanding of OO:
> "swept up"
Someone pointed out the full history of OO to him; he realized that his naive understanding of OO (basically, C++ objects) was not complete, and he quickly grabbed onto his new understanding. OK, great. It also sounds diplomatic, and now everyone who complained that his language was not OO and therefore bad can be shut up. What would you do?
> but the only one who has done this here is you
Nope, so is Joe Armstrong.
And on the other end of the spectrum, if you go low level enough you realize that everything is stateful in the machine, so you will need to build (or already have) functional abstractions on top of that.
Bottom line, it’s almost impossible to have a single paradigm on all levels of your stack.
The fact that the world is made up of elementary particles subject to the 'laws' of thermodynamics has very little impact on the semantics we choose for our programming languages. (If you want to take your appeal to 'nature' that far.)
Yes, they are very promising. But since every single widely deployed language (and "widely" here includes things like Haskell) is a bad fit for it, we need some more development before that promise is fulfilled or fails.
(And by the way, the JS frameworks, popular as they are, cannot claim to be free from the von Neumann constraints. That requires a pure language.)
There are probably other ways to provide the rest of the pieces, rules engines, some of those DSLs that describe stream graphs with composable nodes, other incremental querying stuff I can't remember right now ...
The practical implementations are either way too complex for everyday usage¹ or specialized to a narrow application. Also, arrows create a completely different sublanguage in any mainstream language that supports them; a sublanguage where things don't compose as nicely.
1 - You can't demand that people convert a simple procedure into a series of unrelated steps spread all around; that's a recipe for bad code. The fact that people do such things in unrelated domains is not a reason to have it here; it creates bad code there too.
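For flavor, here's a minimal sketch of what that arrow sublanguage looks like in Haskell (`addMul` is an invented example, not from any of the libraries being discussed):

    {-# LANGUAGE Arrows #-}
    import Control.Arrow

    -- Arrow notation: a distinct sublanguage with its own plumbing
    -- (proc, -<) that plain function composition doesn't need.
    addMul :: Arrow a => a (Int, Int) (Int, Int)
    addMul = proc (x, y) -> do
      s <- arr (uncurry (+)) -< (x, y)
      p <- arr (uncurry (*)) -< (x, y)
      returnA -< (s, p)

    main :: IO ()
    main = print (addMul (3, 4))  -- instantiated at plain functions: (7, 12)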
Still, definitely counts as evidence. Thanks!
Can Programming Be Liberated from the von Neumann Style? (1977) [pdf] - https://news.ycombinator.com/item?id=21855249 - Dec 2019 (32 comments)
Can Programming Be Liberated from the von Neumann Style? (1977) [pdf] - https://news.ycombinator.com/item?id=14673817 - June 2017 (3 comments)
Can Programming Be Liberated from the von Neumann Style? (1978) [pdf] - https://news.ycombinator.com/item?id=13210988 - Dec 2016 (107 comments)
Can Programming Be Liberated From The Von Neumann Style? (1977) [pdf] - https://news.ycombinator.com/item?id=12159792 - July 2016 (175 comments)
Can Programming Be Liberated from the von Neumann Style? (1978) [pdf] - https://news.ycombinator.com/item?id=10182712 - Sept 2015 (57 comments)
Can Programming Be Liberated From The Von Neumann Style? (1977) [pdf] - https://news.ycombinator.com/item?id=7671379 - April 2014 (72 comments)
Can Programming Be Liberated from the von Neumann Style? (classic) - https://news.ycombinator.com/item?id=768057 - Aug 2009 (34 comments)
People post links to past discussions not because they're bad (look at all those reposts!) but because they're good (wow, interesting comments from the past!) - at least in the best case.
Because there's so much randomness in what gets noticed on /newest, overzealous dupe detection would prevent many good articles from ever getting discussed. That would be a poor trade for better deduplication! HN's dupe detector is therefore left porous on purpose (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). We will probably implement karma sharing amongst the various submitters of an article, though, at least when it's a first-time discussion.
Metamine was amazing... he had figured out how to safely mix procedural and reactive code in one place without going insane in the process:
I watched a set of videos on YouTube that were amazing, but now I can't find any links to them. 8(
I would really like to know where it went.
[Edit: I found a fork of the repository at https://github.com/caminadab/metamine
Of course I forked that fork to have a copy for safety]
x = y is an equation: any time the expression y changes, x will be recomputed.
a := b is an assignment: it only happens once, like we're all used to.
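A hedged way to model the difference in Haskell (purely illustrative; metamine presumably did something far cleverer under the hood):

    import Data.IORef (newIORef, readIORef, writeIORef)

    main :: IO ()
    main = do
      y <- newIORef (10 :: Int)
      -- x = y * 2 as an "equation": recomputed every time it's demanded.
      let xEquation = do v <- readIORef y
                         return (v * 2)
      -- a := y * 2 as an "assignment": evaluated once, then frozen.
      assigned <- do v <- readIORef y
                     return (v * 2)
      writeIORef y 50
      eq <- xEquation
      print eq        -- 100: tracks the change in y
      print assigned  -- 20: snapshot from assignment time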
This seems to just provide a wrapper that obfuscates things
This is more like a multithreaded application, where one set of calculations is kept updated by a background thread, without most of the attendant synchronization issues. There was some very clever InterBase-like generational tagging of data to prevent deadlocks.
I'll be honest, I haven't used reframe myself, I just came across the concept while using cljfx
so I'm not quite sure if it's all updated async or on each render - but the framework is very powerful and simplifies application logic by an order of magnitude.
But in both reframe and cljfx it's tied to the GUI rendering, which is a shame - b/c as your comment suggests, this is a more general purpose pattern. In-library it remains a bit too verbose, with lots of boilerplate and API to remember, and it's all a bit too difficult to reason about at times. It seems like it should be something that's part of the language "runtime". However, in the end it still works through memoized functions, which is a bit more explicit than a second magical assignment operator :)
Anyway, a bit of a ramble.. thanks for the info :)
"magical assignment operator" - I love it. 8)
Here's a tutorial video I found demonstrating programming the game Snake http://web.archive.org/web/20201014024057if_/https://www.you...
Found from a Reddit discussion http://web.archive.org/web/20201130115327/https://www.reddit...
I'll give you spreadsheets, but while it's not the "focus" you will very often see softcores and lots of state machines being used in FPGAs. Some things are just easier to reason about and write that way.
Modern descendants of the language have freed it entirely from "expression/statement" splits, and the collection of operators has grown richer. More than any other family, APLs today represent the most successful realization of function-level programming as yearned for in this paper.
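As a hedged illustration of what "function-level" means (in Haskell rather than an APL, and transcribing the classic inner-product example from Backus's paper into point-free style):

    -- Backus's inner-product example, built by combining functions
    -- instead of naming data: roughly (insert +) . (applyToAll *) . transpose.
    innerProduct :: Num a => [a] -> [a] -> a
    innerProduct = (sum .) . zipWith (*)

    main :: IO ()
    main = print (innerProduct [1, 2, 3] [4, 5, 6])  -- 32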
One example of this for me is that you don't need special drivers for making your program run on a CPU; that would sound quite strange to most developers I think.
But many high-end systems have full-blown GPUs on them that we basically never use unless they're specifically interfaced with for video purposes, scientific computing, etc.
Even then, you still have to use specific interfaces to access that hardware. It's not intrinsic to the language.
But say I wanted to run some generic routines across CUDA cores/stream processors: I have to explicitly move into the realm of GPGPU programming, and then write "compute kernel" code. It's not just... programming. It's fully terminology'd, its own thing.
There's no standard library for any programming language that automatically sends your routines to the GPU.
I don't know how all of this should be approached. I've been exposed to all of these systems, but they're all clearly isolated.
All I know is that it doesn't feel right. I go into every development effort feeling like I'm not using the full compute power I have available to me, and part of that, I think, is that today we just don't think like developers did 50 years ago.
We have so much compute resource available to us, I think in some very fundamental ways--industry wide--we no longer understand how to use these systems. I know I don't.
I'm consistently asking myself why certain operations I write take milliseconds for some reason instead of microseconds. That's tremendously concerning to me.
But I don't think that you could reasonably make something that just automatically sends any random routine to the GPU, or at least not efficiently. The processing model is just inherently different.
They are certainly more polished than the tools around GPUs or other I/O co-processors, but I don't see the difference.
This seems a good article on the developments in JVM autovectorization: https://inside.java/2021/01/27/extending-c2-autovectorizatio...
There's a huge amount of "brain offloading" that's deferred to the compiler guys, and in a ton of ways that's the point of having a driver as well; drivers adapt generic commands like "write a file to disk" to work for you on any kind of disk. Compilers likewise adapt your C (or whatever) code to work on, e.g., any of the current Intel chips of the last N years (within which there are quite a few breaking incompatibilities; I've had code compiled for a really particular i7 CPU actually not work on a different one, because it was taking advantage of new instructions only available on that particular chip - when I reconfigured my compiler to hit a wider compatibility target, the issue went away).
So yeah - there's a huge amount of knowledge "hidden away" there.
I've worked a bit with an FP concatenative language Joy and this sort of algebraic manipulation is easy and fun, and seems to permit (nearly) bug-free development.
Here's a very simple example of deriving a function for the quadratic formula:
A more complex example deriving a combinator "treestep" to process tree datastructures:
I think that really explains the idea. I always liked combinatory logic more than lambda calculus, although it didn't click for me that concatenative languages are essentially the formalism for it (as opposed to ML-style syntax).
Although I would say functional programming is a somewhat orthogonal concept; concatenative languages can have functions with side effects, like Forth does.
Programming languages are a way to represent reality, and manipulate it to your desire.
Object-oriented and procedural programming both use functions to manipulate state, while in pure functional languages you use functions and unnamed variables to represent state.
The problem is that when you communicate, you always do it with state and not functions. From APIs to storing data, state allows two very different systems to understand each other without having to share the same internals. A Java web-based API/program and a Swift/Objective-C program running on a phone can easily communicate with each other via state (i.e., REST or SOAP APIs), and they don't have to know the internal state of each other.
You can save functions and pass them along, but the other party has to know how to run them, and has to have the same type of hardware in order to run them.
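A hedged sketch of that point in Haskell (the `Order` type is invented for illustration): data has an obvious wire format, functions don't.

    -- State round-trips between systems via a textual wire format.
    data Order = Order { item :: String, qty :: Int }
      deriving (Show, Read, Eq)

    main :: IO ()
    main = do
      let wire = show (Order "book" 2)   -- serialize state
      print (read wire :: Order)         -- the other side reconstructs it
      -- show (+ 1)  -- won't compile: functions have no Show instance,
      --                so you can't ship behaviour the same way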
I really think pure functional programming is a mathematician's pipe dream, theoretical CS nerds trying to shoehorn math/algebra equations into being the default programming paradigm; for practical computer science problems, the von Neumann style (statements and expressions) has won because it is the most practical.
The only truly successful alternative to von Neumann, state-based languages is really SQL and its derivatives, which is a declarative style of programming language. You can pass SQL queries around, and they will be understood by all kinds of systems. But ultimately they all act on some kind of state/data in a table.
I'm not sure that's true. A "callback" is a way of communicating with a function as the message. Callbacks are a way for systems to communicate while hiding their implementation details. (One could argue that the callback itself is a first-order object -- a piece of state -- but then we are nonetheless communicating with state and functions.)
They are usually used within one program itself, and not across systems. Something like the Firebase Realtime Database, or webhooks, are callbacks across systems, but they are always used to pass data/state, and not functions themselves.
Again, SQL and maybe regex (which is not a full programming language anyway) are the most successful alternatives to normal object-oriented/procedural programming.
Most functional languages have not become successful. I learned Prolog in the 90s, and I could see how it might be useful in some situations, but it is hard to do anything practical with it.
Some functional concepts, like map/reduce, have become popular in mainstream languages, as they are useful in certain scenarios, but pure functional languages have not.
Because absolutely nothing about transferring data is specific to the programming language paradigm chosen.
Here's a task for you: Build a graph of objects holding state with your chosen OO language.
Now write that state to disk and read it back in - guaranteeing a consistent state.
Oops, a thread mutated some of the state while writing to disk?
Damn it, need to add some synchronization stuff. In all my objects.
Oh now all my mutations need to go through a semaphore?
No, wait I'll implement the Memento pattern for all my objects, that will help?
(nope, it won't, you still need to synchronize across threads)
If you operate on immutable data, that exercise in a functional language is literally two lines of code.
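A hedged sketch of those "two lines" in Haskell (illustrative only; production code would want atomic rename and the like): the graph is an immutable value, so no thread can mutate it mid-write.

    import Data.Map (Map, fromList)

    main :: IO ()
    main = do
      let graph = fromList [(1 :: Int, "a"), (2, "b")]  -- immutable snapshot
      writeFile "state.txt" (show graph)                -- consistent by construction
      restored <- read <$> readFile "state.txt"
      print (restored == (graph :: Map Int String))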
I'm also not sure why you feel functional programming relieves you of the responsibility of making sure appropriate barriers are created between writes and reads to disk--message passing is more fundamental than programming paradigms, and you physically must make sure your messages synchronize somehow (and that you can recover from corrupted state, if there's a crash). Just writing once simply is not enough here! Also, without mutable state, how do I find the new head (where the appended stuff starts?). This has historically been a very annoying problem to deal with!
Not that I think most of these concerns affect us much during day-to-day programming, by the way! I just find this a particularly poor example of the benefits of FP. In pretty much any useful distributed system, there will be some moving parts that need to be synchronized (like the actual messages to the storage layer), some state that needs to be maintained (root information a bootloader can follow to restore the system), and some weird domain specific stuff that's very hard to encode in types (like crash recovery, in this case). This is true whether or not what you want to do is conceptually purely functional--someone has to make decisions at each of these points about whether and how to expose the inner workings of the system, often in a way that trades off against things like flexibility, space overhead, reliability, and performance. And those decisions have to be relayed to the computer via some programming language that can talk state (or embedded into the hardware).
Accepting that reality doesn't mean that everything has to be written in a "multiparadigm language" or anything like that, but it does mean being cautious about declaring that your favorite technique will automatically solve the issues plaguing another area of CS.
I'm working in a codebase now that is very OO, but also is very diligent about having state be explicit. The exercise you described would also be 2 lines of code in this project.
That didn't stop physics; why should it stop CS?
No paradigm has won; the game is not over yet.
Yeah no. This being "bug free" seems like a joke.
sqrt(sqr(b) - 4 * a * c)
(sqrt (- (sqr b) (* 4 a c)))
The only thing going for the traditional style is that it is an ugly approximation of mathematical notation - but that notation itself loses most of its value in the uglifying.
It was an interesting time to be in CS. To be fair, CS has remained interesting to me to this day. But back then, right out of school, I felt like I knew something about almost every area of CS, maybe not much but at least something. So everything seemed possible.
The field has become so large that even after a lifetime of working and studying in the field of CS, even a modest understanding of one slice of CS is all we can hope for. Nevertheless, it stays fascinating.
I would not be surprised though if this paper is still posted in the HN equivalent in 2078.
Example: a systolic array, which you are only likely to encounter as an FPGA bitstream, by its very nature requires a different approach. Flow based programming is a better match for this specific system.
What we have seen since this paper was written is often the shoehorning of new concepts under control of standard von Neumann style programming, whatever the reasons may be, often including programmer familiarity.
"The traditional binary foundation of 1s and 0s is deeply problematic: 1 is inherently phallic and thus misogynistic. Also, some 1s are 0s, and some 0s are 1s. It is not fair to give them immutable labels. Instead, we have 0s and Os as our fundamental binary logic gates. They symbolise/-ize the varying, natural, and beautiful differences of the female vaginal opening.
0 is to take the conventional value of 0.
O is 50% of the time 0, and 50% of the time 1. The determination of this depends on how the underlying logic feels at the moment."
What one thread on my CPU calculates as true is its truth. Other threads might not agree, because it's not their truth.
The 0s and 1s are in the human brain, friend, not the computers. You can make computing as probabilistic as your heart desires; people will not stop simplifying and caricaturing the world to fit their biases, not by one "bit".