Liberated implies that today's computers are limited in what they can do, which I disagree with. But I can imagine special-purpose machines being constructed to carry out tasks faster than otherwise possible. If we depart from the von Neumann style of computers, we probably won't program these new machines in the traditional sense. I can see, for example, a computer designed to run an artificial neural network at the hardware level being trained on various datasets. In this case, training might be the new way of programming.
Today's computers are limited in what they can do by the constraints of their architecture. The von Neumann architecture has served us well up to this point, but going forward it's going to be a real handicap. But it isn't just the architecture that needs rethinking. As you say, 'we probably won't program these new machines in the traditional sense'. And you're probably right. Programming languages are also part of the problem. We've wasted so much time reinventing the wheel, over and over again. I don't know what form the next computing architecture will take, but I do know there will come a time when the notion of using a 'programming language' to get things done will seem quaint. Hell, I thought it was an archaic way of doing things 20 years ago, and it's why I didn't pursue a career as a programmer. The tedium was incompatible with my temperament!
You mean writing a series of logical instructions in order to solve problems is tedious? I wonder what future platform will be created that doesn't require an ordered series of logical steps to solve problems...
Well, even today there are computing devices that don't require an ordered series of logical steps (in the form of program code) to solve problems. So you don't need to wonder about it. The future is already here.
However, there could one day be a means to easily implement algorithms in hardware.
To clarify, I'm not thinking of ASICs or FPGAs, which of course already exist and have for quite some time. Neither is very 'easy', and the expense is real.
Just out of curiosity, do you honestly believe that the way we've been getting computers to 'do things' is never going to change?
> Well, even today there are computing devices that don't require an ordered series of logical steps (in the form of program code) to solve problems. So you don't need to wonder about it. The future is already here.
Could you give us some details about what you are talking about here, so that we don't have to play guessing games?
yeah, my bad. thought you were the guy above me. Apologies.
As for the career I chose, you can assume it wasn't anything to do with computers, lol. (Although I almost went the Cognitive Science/Computer Science route back in my younger days, before realizing I wasn't nerdy enough!)
I think you underestimated your nerdiness back then. Here you are, all those years later, responding passionately in a thread on rescuing programming from the von Neumann style.
Well if nobody is passionate about it, nothing will change. I'm sure I look like a fool for it, but someone has to.. might as well be me! :)
My interest in AI and computer architectures never went away.. I just decided not to turn certain interests into a career, and walked away from the chance to be gainfully employed as a programmer. At the time I felt there had to be a 'better way', and I've been trying to come up with that 'better way' ever since.
I'm probably somewhat bitter that my research (probably) won't get translated into something physically tangible and complete, as I'm running out of time. Cancer is unkind that way.
It's entirely possible that I've been chasing a pipe dream all these years though, and while that would be a shame I'm realistic enough to entertain that notion. I've already accepted death, so at this point I can accept this, too. But of course I don't really believe that is the case here. ;)
>Programming languages are also part of the problem. We've wasted so much time reinventing the wheel, over and over again
That's a good thing.
If we didn't we'd still use stone or wooden wheels -- like in ancient times.
People say "reinventing the wheel" when in actuality it's "improving the wheel".
CL is not 60's LISP, Rust is not C, C is not Algol, Haskell is not ML, and Go..., ok, Go is like Algol 68.
>I don't know what form the next computing architecture will take, but I do know there will come a time when the notion of using a 'programming language' to get things done will seem quaint.
This I feel is too hand-wavy ("there must be a better way").
A programming language is sort of like math -- logical instructions.
The scientific underpinning of programming is algorithms. And those are expressed by programming languages.
We cannot really replace algorithms with anything else, like we cannot get rid of equations in math.
We might abstract them more ("robot, write a program to do X for me") or less (some "visual programming" thing), but in the end, programming and algorithms are part and parcel of solving problems.
Wheels are the most effective way to transport things on land. So of course it's in our best interests to keep improving that wheel.
Certainly, language is useful when hashing out a problem on paper (be it pseudocode or math equation or whatever). The abstraction is convenient. But programming languages are not a requirement for computers to do their work. Languages exist for our benefit, not the machine's. One can use software to do work on a computing architecture, or that same algorithm can be implemented in hardware and run a hell of a lot faster.
The problem is that, while we can come up with all kinds of algorithms, we can't seem to agree on what programming language to use when implementing those algorithms. I guess I'm arguing that the languages are getting in the way of the algorithm, and it would be nice if there were a way to achieve something closer to a 1:1 relationship between algorithm and implementation. We're always going to be inventing new programming languages because we can't reach a consensus on what is the 'best' abstraction. It's become ridiculous at this point.
Algorithms are currently expressed by programming languages because they have to run on general purpose, programmable von Neumann architectures. I mean, yeah we could skip the language and just input machine code, but that would of course be ridiculous. That's why we use compilers.
If there were an easy way to implement algorithms directly in hardware, why would we bother with programming languages?
Take, for example, a supercomputer. If the kinds of work a supercomputer does could be implemented directly in hardware, I think it would be safe to assume that researchers would be all over that. The gains would be enormous.
If there were a hardware-only 'Travelling Salesman' algorithm that showed astronomical gains in efficiency compared to the conventional software implementation, don't you think that would be important and useful?
A computing architecture doesn't need to use a language.
The language is there for our benefit, not the machine's. I'm merely suggesting that all these (by now, countless) languages are getting in the way of efficiency. If we have something in mind that we want the computer to do, why not just do it directly in hardware, since that would be the most efficient route?
I don't 'feel similarly about written language', because written language is kind of important for human communication. Last I checked, human brains have very little in common with computers and our capabilities are not the same as a computer's. We are not computers, we're social organisms. So we need written language. One could even say that we need written language to do 'our work'. Society would fall apart without it.
But I guess if there were a better way to communicate or record/relay our thoughts to others, I'd be all for it.
You might be interested in [Likely](http://liblikely.org/), the programming language I've been working on. One of its core tenets is that "training == compilation". I haven't given much thought to departures from von Neumann style architectures, but with emerging/de-facto standards like OpenCL and LLVM I don't see any reason why, at least for computer vision algorithms (my domain), we can't train/compile to FPGA.
I don't think he meant to liberate the mechanism of computers, but rather our words and conceptions for talking about computer programs... which are then translated wherever.
More mundanely, see FPGAs. They can be thought of as "computers" (and you can build a von Neumann core on one), but are programmed using different languages (Verilog, VHDL) and a completely different paradigm that involves lots of parallelism.
There have been attempts to force imperative/functional paradigms into FPGAs, but they have not been well-received simply because it doesn't make much sense to do that.
Seems like Backus likes postfix syntax. Same goes for BNF, which puts cardinality markers at the end. Does anyone consider the first example easier to read? As a mere human, I find the syntax hard to follow. 1978 was the era when AI provers were solving theorems in hundreds of steps, which was impossible for a human to follow.
So, what are the non-von architectures to play with? GPUs? FPGAs? For GPUs, Cellular Automata are a lot of fun. For FPGAs, perhaps Petri Nets. A classic connectionist book is The Minds of Robots, by Culbertson.
What next? Light prisms? How the hell would you program one of those? Or perhaps the problem becomes how to create highly distributed pipelines?
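For what it's worth, the cellular-automata suggestion above is easy to sketch even off the GPU. Here's a minimal Haskell sketch of one step of an elementary 1D automaton (the wrap-around edges and the rule-as-an-Int encoding are my own choices for illustration); every cell updates independently from its neighbourhood, which is exactly why the model maps so naturally onto data-parallel hardware:

    import Data.Bits (testBit)

    -- One synchronous step of an elementary (Wolfram-style) 1D automaton.
    -- Each new cell depends only on its old left/centre/right neighbours,
    -- so all cells could be updated in parallel.
    step :: Int -> [Bool] -> [Bool]
    step _    []    = []
    step rule cells = zipWith3 next left cells right
      where
        next l c r = testBit rule (fromEnum l * 4 + fromEnum c * 2 + fromEnum r)
        left  = rotate (-1) cells          -- wrap-around neighbours
        right = rotate 1 cells
        rotate n xs = let k = n `mod` length xs in drop k xs ++ take k xs

On a GPU, `next` is essentially the per-cell kernel; the list plumbing is the only part that changes.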
Most of this talk is, in retrospect, embarrassingly wrongheaded, but I think it does point the way toward better approaches. Unfortunately, I still haven't finished writing the counterpoint to it that I began writing in 2007, so I haven't published it yet.
Thank you! I agree that the original paper is kind of all over the place, that many of its ideas turned out to be not very good, and that garbage collection is a huge productivity aid that covers most of the remaining benefits.
Another (possibly controversial) point is that "word-at-a-time" programming is sometimes the clearest solution to a problem, especially when you need to think about time and space complexity. The usual example is Knuth-Morris-Pratt. I wonder how Backus would express it?
The summary is that once you've computed the KMP state transition function, you compute its prefix sum under composition and then ask at which textual indices it assumes a final state. This abstract description of the KMP algorithm is sufficiently general to cover realizations with different time and space complexity, including not just the purely serial uniprocessor O(N)-time constant-space realization, but also potentially the 3× efficient SIMD realization described in the paper, efficient O(log N)-time parallel realizations and GPGPU realizations; and it is sufficiently specific to automatically derive an efficient realization.
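For concreteness, here is a minimal Haskell rendering of that abstract description (the names are mine, and this is the specification-level realization, not an efficient one): write down the automaton's transition function, scan it left to right over the text, and report the indices at which the final state is reached.

    import Data.List (findIndices, isPrefixOf, scanl')

    -- delta pat s c: length of the longest prefix of pat that is a suffix of
    -- (the length-s prefix of pat followed by c).  Written as a specification,
    -- so it is quadratic per step; efficient realizations compute the same
    -- function from precomputed tables.
    delta :: String -> Int -> Char -> Int
    delta pat s c =
      maximum [ k | k <- [0 .. length pat]
                  , take k pat `isPrefixOf` drop (length w - k) w ]
      where w = take s pat ++ [c]

    -- Scan the transitions over the text (the "prefix sum under composition",
    -- applied serially) and ask where the accepting state is reached; the
    -- results are 0-based indices of the last character of each match.
    matchEnds :: String -> String -> [Int]
    matchEnds pat txt =
      findIndices (== length pat) (tail (scanl' (delta pat) 0 txt))

Replacing the serial scan with a parallel scan over transition functions composed as a monoid is what yields the O(log N)-time parallel realization mentioned above.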
An even more interesting question, to me, is what kind of formalism would make it easy to rigorously (and perhaps even automatically) derive KMP and/or BM from the abstract definition of the string search problem and an objective of reducing computation time.
Maybe I should've just said Lambda Calculus, which is what I was thinking. The von Neumann architecture is the prevailing implementation of it because von Neumann machines prevail. However, functions operating on state with I/O don't require a CPU or main memory: LISP/Scheme have been directly synthesized to hardware. That means you can describe a non-von Neumann architecture with LISP in a way that produces actual hardware. So, since Lambda Calculus is a separate concept that admits non-von Neumann implementations, a Lambda Calculus implementation isn't inherently von Neumann, though it can be if so chosen.
That's what I was thinking. Unless those with more experience in Lambda Calculus, etc. tell me [with evidence] that Lambda Calculus is von Neumann, that so is any LISP/Scheme, and that it can never be implemented any other way.
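To make the 'separate concept' point concrete, here's a tiny hedged Haskell sketch of computation done by nothing but function application (Church numerals). There's no store and no program counter anywhere in the model itself; whether the thing underneath is a von Neumann machine is an implementation choice:

    -- Church numerals: the number n is "apply a function n times".
    type Church a = (a -> a) -> a -> a

    czero, cone :: Church a
    czero _ x = x
    cone  f x = f x

    -- Addition is just chaining the iterations together.
    cadd :: Church a -> Church a -> Church a
    cadd m n f x = m f (n f x)

    -- Only at the very end do we observe the result as an ordinary Int.
    toInt :: Church Int -> Int
    toInt n = n (+ 1) 0      -- toInt (cadd cone cone) == 2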
Stuff like Smalltalk wouldn't be possible if programming was turned into math (and Smalltalk is way too cool, letting you change any part of its image on the fly and redefine all the rules), and frankly I don't understand why mathematicians want to take over a completely different domain, computing. The relation between computing and math is more like the relation between physics and math. It's great to have multiple paradigms, so why restrict oneself to a functional one when there is so much more?
Note that this was the Turing Award lecture by John Backus, who created FORTRAN in the 1950s, and pretty much invented higher-level imperative programming. Before Backus, there was only machine code. Of course he was a mathematician: in those days “computing” didn’t exist as its own separate field. He can hardly be accused of “taking the domain over”.
> Stuff like Smalltalk wouldn't be possible if programming was turned into math
> and frankly I don't understand why mathematicians want to take over a completely different domain
I would like to challenge the assumption here: that programming (in whatever mode) can't be adequately described by mathematics. I don't have time to make a thorough argument, but consider this: instead of thinking of what programming would be like as some kind of math, think of what math would need to be like to adequately describe your favourite mode of programming. (Forget arithmetic, consider algebra, geometry.)
As for FP: these days it wants all the tools, and it is picking up whatever it can formalize. Some FP tools are uncommon or don't work well outside the paradigm. To me that makes all the difference: I'd feel more restricted working without FP than by being strictly within it.
>I would like to challenge the assumption here: that programming (in whatever mode) can't be adequately described by mathematics.
Well, the entire lesson of the Halting Problem and Rice's Theorem is that it can't. Indeed, Haskell loves to just cut through the bullshit about math and be inconsistent: bottom inhabits every type, and Type inhabits Type (in Dependent Haskell).
No fixed mathematical system is capable of formally verifying all programs, or even all interesting programs. You always need more bits of axioms than of program if you want to prove theorems about your programs (Chaitin's principle, proved formally by Calude).
What we can say for FP is that Fast and Loose Reasoning is Morally Correct, and that if we use nicely categorical constructions, then when our programs happen to terminate at all, those nice constructions describe their behavior.
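The 'bottom inhabits every type' point is easy to make concrete in Haskell; the following type-checks at every type precisely because the type system can't (per the theorems above) rule out nontermination in general:

    -- A value of every type, obtained simply by never terminating.
    bottom :: a
    bottom = bottom

    -- Read as a proposition, 'a' is "anything you like", and bottom "proves"
    -- it.  The price is that forcing the value loops forever.
    anyIntYouLike :: Int
    anyIntYouLike = bottom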
We don't actually need to butt into the Halting Problem or Rice's Theorem, though.
Ideally, we would have proven-to-terminate, non-Turing-complete DSLs that can be composed together to form more powerful DSLs. The problem is that we don't really know how to compose programs well, and when you demand the ability to peer into arbitrary executing code and modify it on the fly, you don't make it easier to learn how to do this.
If you were forced to use an architecture that separated things properly, you'd be forced to deal with those composition problems, rather than 'hacking' programs using a level of power that is unnecessary for the problem you hope to solve. (And you are hoping to solve a problem, not just run a program that fails to terminate.)
(I'm not arguing that this is the 'right way' to do things. It's just a way that should be considered properly.)
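A hedged Haskell sketch of the kind of proven-to-terminate, non-Turing-complete DSL meant above (the toy language is invented purely for illustration): the object language has no recursion or loops, so the evaluator is plain structural recursion and is trivially total.

    -- Only finite terms, no loops or recursion in the object language.
    data Expr
      = Lit Int
      | Add Expr Expr
      | Mul Expr Expr
      | IfZero Expr Expr Expr

    -- Structural recursion on a finite term: guaranteed to terminate.
    eval :: Expr -> Int
    eval (Lit n)        = n
    eval (Add a b)      = eval a + eval b
    eval (Mul a b)      = eval a * eval b
    eval (IfZero c t e) = if eval c == 0 then eval t else eval e

Composing several such DSLs without losing the termination guarantee is exactly the hard part the comment above is pointing at.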
> >I would like to challenge the assumption here: that programming (in whatever mode) can't be adequately described by mathematics.
> Well, the entire lesson of the Halting Problem and Rice's Theorem is that it can't.
And in mathematics, division by zero is undefined, and not all differential equations have analytical solutions. Does that teach us to abandon the rest of what the field provides?
> What we can say for FP is that Fast and Loose Reasoning is Morally Correct, and that if we use nicely categorical constructions, then when our programs happen to terminate at all, those nice constructions describe their behavior.
Not sure what you are trying to convey with the first "premise" you state, but I can say with certainty that everything after "and that if we use..." can be applied to just about any approach in making a program.
Structured Programming:
If we use nicely decomposed procedures, then when our
programs happen to terminate at all, those nice
procedures describe their behaviour.
Object Oriented Programming:
If we use nicely encapsulated classes, then when our
programs happen to terminate at all, those nice objects
describe their behaviour.
Declarative Programming:
If we use nicely declared rules and capabilities, then
when our programs happen to terminate at all, those
nice declarations describe their behaviour.
And (your example quoted here):
If we use nicely categorical constructions, then
when our programs happen to terminate at all,
those nice constructions describe their behavior.
That's a little extreme. I think most computations are best¹ written in a way that is amenable to mathematical treatment, and most of the useful ones can be. Just because a counter-example exists doesn't mean we should give up hope.
I find constructing programs mathematically makes life easier in the common case and possible in nearly all others, unless you insist on things like proofs of termination for programs that don't terminate (I don't).
1. 'best' because it gives us the means to reason about the code, and we know what kind of transformations we can perform on it, and do them abstractly without having to simulate it, potentially for infinite inputs, etc., etc. You know the drill.
That's a very strange interpretation of Rice's theorem. Just because properties of computable functions are undecidable doesn't mean they somehow "can't be described by mathematics". I'm also not sure why incompleteness is relevant here at all, unless you think describing something mathematically requires a complete formal system. In that case, we can't "describe" arithmetic "mathematically".
It's not that strange. We can write all kinds of program analyses that are decidable, but which are necessarily conservative, and we end up deciding what sort of conservativeness (which side we want to err on) in different situations. What this does mean is that you can't really write a Grand Unified Formal Verification framework and get some wonder-of-wonders benefit just by switching to functional programming.
I've also spent enough time around Haskell coders to notice that the instant you give them a fresh new language feature, they will push it right out to the limits where the compiler's conservative analyses no longer work and the programmer has to manually assert that the code is correct.
Nothing will really save the programmer from having to think clearly about their own code, and most language features designed to ostensibly help do so just enable compiler-abuse at a higher level of abstraction.
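A toy Haskell illustration of 'decidable but necessarily conservative' (the abstraction is mine, chosen for brevity): a sign analysis that must sometimes answer 'don't know', and where you pick which side to err on.

    -- Abstract values: track only the sign of an integer, or give up.
    data Sign = Neg | Zero | Pos | Unknown
      deriving (Eq, Show)

    -- Abstract addition.  Pos + Neg could be anything, so rather than
    -- guess, the analysis errs on the side of Unknown.
    plus :: Sign -> Sign -> Sign
    plus Zero s    = s
    plus s    Zero = s
    plus Pos  Pos  = Pos
    plus Neg  Neg  = Neg
    plus _    _    = Unknown

Every answer other than Unknown is sound, but no analysis of this kind can be both sound and complete for all programs; that's Rice's theorem showing up in practice, not a failure to describe things mathematically.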
That's correct in theory, but in practice I'd be very surprised to see a useful program that couldn't be verified by ZFC. The only natural way to get such programs is by encoding metamathematics, like trying to verify the consistency of ZFC itself.
>That's correct in theory, but in practice I'd be very surprised to see a useful program that couldn't be verified by ZFC.
I think you do too much easy theory if you think that, if you'll excuse my saying so. A lot of the firmware code I write at work is just not going to be formally verifiable in any practical way without scrapping it and rewriting in a language built around formal verification from day one.
Besides which, a Python interpreter is a very common sort of program to write, actually, and I don't expect that you can "just" do the equivalent of `Require Import ZFC` to verify its behavior properly.
Point taken. Isn't that just a limitation of current tools, though? What stops ZFC from proving sentences about Python interpreters, or Brainfuck programs, or whatever? I'm not aware of any theoretical roadblocks, apart from the difficulty of actually writing out the proof (which is admittedly huge).
>What stops ZFC from proving sentences about Python interpreters, or Brainfuck programs, or whatever?
Well, mostly a lack of written-down axioms about how these things actually work. The innate nondeterminism, the imperative operating environment, and the intensional behavior (in the sense that which implementation of a function you write actually matters) also make these things, let's say, research-frontier material to reason about formally.
And then there's the fact that ZFC isn't a very good language for programming in at all. Type theories are both proof-theoretically stronger and far easier to write anything like actual programs in.
Also, things like compilers and program analyzers are very common in "real-world" programming. They're just about the cutting edge of what we can do with formal verification today: it helps a lot that, as I understand things, when you write down an inductive type in a dependently-typed proof assistant, you are also writing down axioms (an induction principle) with which to prove theorems about it, and the dual for coinductive types. Then, between coinductive and inductive proofs that rely on either productive nontermination or eventually-terminating programs, we can write down most of what we actually want to code.
With the exception, of course, of proof assistants, where we can still only write a verified implementation of one system using a strictly stronger system. Gosh, if only someone was working on ways to tie that knot...
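The 'writing down an inductive type also writes down an induction principle' point has a non-dependent shadow even in plain Haskell (a hedged sketch; in a dependently-typed assistant the same declaration generates the full eliminator, e.g. Coq's `nat_ind`/`nat_rect`): declaring the type hands you its fold.

    -- Peano naturals, the textbook inductive type.
    data Nat = Z | S Nat

    -- The fold that comes "for free" with the declaration: to consume a Nat
    -- you only say what to do with Z and what to do with S.
    foldNat :: r -> (r -> r) -> Nat -> r
    foldNat z _ Z     = z
    foldNat z s (S n) = s (foldNat z s n)

    -- Example: addition by structural recursion on the first argument.
    add :: Nat -> Nat -> Nat
    add m n = foldNat n S m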
Bingo. But most people also like to kvetch and moan about the restrictions imposed by programming with only structural recursion and productive corecursion -- even though those are usually exactly what we want.
Especially if you get an easy way out for when you need it.
Like unsafePerformIO, but for calling not-proven-to-be-terminating functions. Or an explicit marker like the wrapping monads for side-effecting functions.
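A hedged Haskell sketch of that 'explicit marker' idea, essentially the delay monad from the coinduction literature (the name `Partial` and the fuel-based runner are my own choices for illustration): possibly-nonterminating calls get wrapped the way IO wraps effects, and the only escape hatch is deciding how long you're willing to wait.

    -- A computation that either has an answer now, or needs another step.
    data Partial a = Now a | Later (Partial a)

    instance Functor Partial where
      fmap f (Now a)   = Now (f a)
      fmap f (Later p) = Later (fmap f p)

    instance Applicative Partial where
      pure = Now
      Now f   <*> p = fmap f p
      Later q <*> p = Later (q <*> p)

    instance Monad Partial where
      Now a   >>= k = k a
      Later p >>= k = Later (p >>= k)

    -- The escape hatch: run with a fuel bound -- honest about the fact
    -- that the answer may simply never arrive.
    runFor :: Int -> Partial a -> Maybe a
    runFor _ (Now a) = Just a
    runFor n (Later p)
      | n > 0     = runFor (n - 1) p
      | otherwise = Nothing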
As someone who has to dip into exotics like relevance logic just to get closer to the mode my brain operates in while programming, I find that either the math is seriously undeveloped or undecidable in the areas that interest me, or it's just not capable of much help in what I set out to achieve. And I have extensive knowledge of functional programming, including proving correctness with Smullyan's tableaux on the fly, all the way from very primitive axiomatics up to complete arithmetic in predicate logic. Often, in those exotic logics (which I deem necessary), trivial things aren't provable, yet they seem to model actual human reasoning much better, and that is my main interest. With computing I am able to actually do some work in this area; with math, due to the aforementioned limitations, not so much.
Most mathematicians can't see beyond binary predicate logic :-(