
Von Neumann’s critique of automata theory and logic in computer science (1947) - adamnemecek
https://www.yodaiken.com/2019/02/20/von-neumanns-critique-of-automata-theory-and-logic-in-computer-science/
======
GlenTheMachine
Controls engineer here.

I've often wondered what a theory of computation based in differential
equations would look like. We desperately need one. Control theory gives us
some tools, but not nearly powerful enough ones.

It's fairly clear by now that AI, at least as applied to domains immersed in
the natural world, such as robotics and machine vision, is better thought of
as analog in nature. Ten years ago I'd go to robotics conferences and it would
not be unusual for me to be the only person in the room who knew that control
theory was a thing. Now it's nearly taken over the field. But the kinds of
developments we see in the field, both in terms of controls and in terms of
machine learning, are essentially experimental in nature. We try some stuff,
and see what works. Mathematical proofs, or even a mathematical understanding,
of these algorithms comes later, if at all.

To this day we have intuition about why deep networks work, but backing that
intuition up with rigorous mathematical analysis has been very challenging and
is quite incomplete. But when it works, it works far better than ideas rooted
in automata. It would be a tremendous benefit to the field if we could somehow
merge the practical power of differential equations and function approximators
with the theoretical power of the theory of computation.

~~~
projectileboy
This is slightly tangential, but if you haven’t already seen it, you should
check out Danny Hillis’ story of Richard Feynman working at Thinking Machines.
He worked out how big the buffers needed to be for the communications between
processors by modeling the bit streams with partial differential equations.
[http://longnow.org/essays/richard-feynman-connection-machine...](http://longnow.org/essays/richard-feynman-connection-machine/)

~~~
mav3rick
Fantastic read

------
lqet
> In studying the functioning of automata, it is clearly necessary to pay
> attention to a circumstance which has never before made its appearance in
> formal logic. Throughout all modern logic, the only thing that is important
> is whether a result can be achieved in a finite number of elementary steps
> or not. The size of the number of steps which are required, on the other
> hand, is hardly ever a concern of formal logic. Any finite sequence of
> correct steps is, as a matter of principle, as good as any other. [...] Thus
> the logic of automata will differ from the present system of formal logic in
> two relevant respects. 1. The actual length of “chains of reasoning,” that
> is, of the chains of operations, will have to be considered.

Interestingly, according to Wikipedia [0], Gabriel Lamé published a running
time analysis of Euclid's Algorithm [1] in 1844, more than a century before
von Neumann suggested this.

[0]
[https://en.wikipedia.org/wiki/Computational_complexity_theor...](https://en.wikipedia.org/wiki/Computational_complexity_theory#History)

[1]
[http://archive.lib.msu.edu/crcmath/math/math/l/l064.htm](http://archive.lib.msu.edu/crcmath/math/math/l/l064.htm)
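Lamé's bound can be checked directly: the number of division steps in Euclid's
algorithm is at most about five times the number of decimal digits of the
smaller argument, with consecutive Fibonacci numbers as the worst case. A
minimal Python sketch (illustrative only, names are mine):

```python
def gcd_steps(a, b):
    """Return (gcd, number of division steps) for Euclid's algorithm."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps

# Consecutive Fibonacci numbers, F(30) and F(29), hit the worst case.
g, steps = gcd_steps(832040, 514229)
digits = len(str(514229))        # 6 decimal digits
assert g == 1
assert steps <= 5 * digits       # Lamé's 1844 bound holds
```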

------
YeGoblynQueenne
If I understand this correctly, Von Neumann is saying that, in the real world,
computation will have to be efficient and robust, so techniques will have to
be developed to ensure that efficiency and robustness, and that such
techniques will necessarily have to be continuous in nature, because
combinatorics is too hard to be used for the job.

It's important to remember that this was written in 1947. So, a year before
Shannon's paper on information theory and several years before the beginnings
of complexity theory [1]. These are both discrete forms of analysis and they
both go a long way towards addressing Von Neumann's 1947 concerns.

I don't know to what extent Von Neumann considered his 1947 criticism
addressed by these subsequent advances. I'm going to go out on a limb, though,
and say that he would probably have found at least some satisfaction in them.

______________

[1] Shannon's _A Mathematical Theory of Communication_ paper was published in
1948. Wikipedia tells me that the foundations of complexity theory were laid
down in 1965 in a paper titled _On the Computational Complexity of Algorithms_
, by Juris Hartmanis and Richard E. Stearns.

[https://en.wikipedia.org/wiki/Computational_complexity_theor...](https://en.wikipedia.org/wiki/Computational_complexity_theory#History)

~~~
pron
Yes, I think that at least some of what Von Neumann is calling for is
complexity theory.

It's easy to forget how recent complexity theory is. While Hartmanis and
Stearns laid the foundations in 1965, its first concrete results came only in
'71 (Cook) and '72 (Karp), and it took _much_ longer for further results, and,
more importantly, their meaning, to sink in ( _at least_ until the '80s if not
later) among computer scientists who are not themselves computational
complexity researchers. In fact, some of the most _basic_ results complexity
researchers rely on are from the mid-'80s (circuit complexity), which makes
complexity theory one of the youngest fields in computer science. It is about
half the age of neural networks, and younger still than programming language
theory.

Some famous assertions/hypotheses/aspirational programs in computer science,
including by people like Dijkstra, Tony Hoare and Robin Milner, predate
complexity theory, and should be seen in a different light because of that.
And because it is so young compared to other branches of computer science, it
is sometimes ignored by researchers working in older disciplines, if only
because their own basic results predate complexity theory.

------
blt
Prescient. Nowadays, most computer scientists don't explicitly worry about
hardware failures, but we still love randomized algorithms.

Often it's easy to prove that a randomized algorithm has a desirable property
with probability 1 asymptotically, or with high probability in finite time. In
contrast, it's often hard to prove the analogous desirable property always
holds for a deterministic algorithm. In other words, we use a trick to make a
combinatorial problem more like an analysis problem, because analysis problems
are easier to handle -- exactly what Von Neumann was writing about!

Turns out, the power of analytical methods was its own motivation, and we
didn't need unreliable hardware to push us in that direction.
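As an illustration of that trade, here is a minimal sketch (Python; function
names are mine) of Freivalds' randomized check for matrix products. Each trial
with a random 0/1 vector catches a wrong product with probability at least
1/2, so k trials give a 1 - 2^-k guarantee in O(n^2) time per trial, where the
obvious deterministic check is a full O(n^3) multiplication:

```python
import random

def mat_vec(M, v):
    """Multiply matrix M by vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def freivalds(A, B, C, trials=20):
    """Probabilistically check that A @ B == C."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A(Br) with Cr: three O(n^2) products per trial.
        if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
            return False   # certainly wrong
    return True            # correct with probability >= 1 - 2**-trials

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]       # the true product
wrong = [[19, 22], [43, 51]]   # off by one in a single entry
```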

------
jhanschoo
Today, there are at least two fields that address just these issues. The field
of computational complexity investigates just how many operations are
required, depending on the problem.

I'm not too familiar with it, but I think control theory addresses the issue
of accounting for the possibility of error.

As for the other field: when von Neumann mentions analytical tools, he is
actually thinking of topological tools. To that end, type theory investigates
formal logic with much inspiration from topological tools.

------
segfaultbuserr
Related:

* Kurt Gödel's Letter to John von Neumann (1956)

[https://news.ycombinator.com/item?id=19281633](https://news.ycombinator.com/item?id=19281633)

Gödel was asking John von Neumann about the number of steps needed to solve
the Entscheidungsproblem. I asked on HN about its background, and I was told
it was one of the earliest discussions of computational complexity theory;
unfortunately, John von Neumann did not have a chance to reply.

Thanks for posting this; now I see the bigger picture: even without seeing
Gödel's letter, John von Neumann had already grasped the idea of computational
complexity independently, around the same time.

------
JacobiX
I think that Von Neumann raises three important issues; two of them are
largely solved, and one is not yet solved.

\- Hardware failures

\- Complexity theory

\- CS and formal logic are combinatorial rather than analytical, so we can't
use the powerful tools of mathematical analysis to solve those combinatorial
problems.

This reminds me of a remark of Paul Erdős about the Collatz conjecture:
"Mathematics may not be ready for such problems." EDIT: text format

------
pmoriarty
_" formal logic ... deals with rigid, all-or-none concepts"_

Von Neumann only had knowledge of classical logic, so he was not aware of
fuzzy logic, logics that deal with infinite truth values, and other non-binary
logics.

By recognizing this inadequacy in the formal logic that he knew, von Neumann
was unwittingly standing on the threshold of a new field. If only he had asked
the right question (i.e., what would a non-binary logic look like?), a
first-rate mind like his surely could have made some significant progress.

~~~
saithound
Von Neumann was thoroughly familiar with non-classical logic. He himself came
up with several variants (including quantum logic) in the 1930s.

------
fierro
can someone provide an ELI5 for this critique?

~~~
neel_k
The "normal" physical systems we build (say houses and hydraulic systems) are
mostly "continuous". This means small changes in the inputs generally result
in small changes in the behaviour of the system. So, for example, if the
screws on your kitchen cabinet doors are not tightened quite enough, the
cabinet doors might wobble a little -- that is, a small change resulted in a
small change.

On the other hand, systems based on digital logic are very "discontinuous". A
small change in an input can result in the system behaving vastly differently.
A single-bit error in a computer program can result in an if-then-else
switching from the true branch to the false branch, and thereby running an
entirely different piece of code. So your computer is a single bit flip away
from fintech entrepreneurs using your entire bank account to rent AWS time to
mine Bitcoin.
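A toy illustration of the contrast, in Python (the threshold and names are
invented for the example):

```python
def continuous(x):
    # analog-style response: nearby inputs give nearby outputs
    return 0.5 * x

def discrete(x):
    # digital-style response: a threshold selects a whole branch
    if x >= 127:
        return "true branch"
    return "false branch"

x = 127
x_flipped = x ^ 1   # a single-bit error: 127 -> 126

# The continuous system barely notices the flip;
# the discrete one runs an entirely different piece of code.
```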

In general, we are drawn to discrete logic because it's easy to write programs
to do different things based on different inputs. However, von Neumann was
worried about this, because no machine works perfectly -- every computer runs
with a certain probability of error, and computers run fast enough that even a
low probability of error can compound very quickly. And if a single small
error can result in radically different results, how can you trust the result
of a computer program?

In this note, written in 1947, he expressed the hope that if you could make
computers behave continuously, you would not need to worry about small errors,
because they would only have a small impact on the result.

However, a few years later von Neumann ALSO invented the alternative, modern
way of handling these issues! In his 1956 paper _Probabilistic Logics and the
Synthesis of Reliable Organisms from Unreliable Components_, he proved that it
was possible to use redundancy to drive the error probability arbitrarily low,
low enough that you COULD trust the output of a computation.
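The redundancy argument can be sketched numerically: if each copy of a
component fails independently with probability p, a majority vote over 2k+1
copies fails only when k+1 or more copies fail at once. A small Python sketch
(assuming independent failures, which is the idealization in the paper):

```python
from math import comb

def majority_error(p, copies):
    """Probability that a majority of `copies` independent parts fail."""
    need = copies // 2 + 1
    return sum(comb(copies, k) * p**k * (1 - p)**(copies - k)
               for k in range(need, copies + 1))

p = 0.01
single = majority_error(p, 1)   # 0.01: one unreliable part
triple = majority_error(p, 3)   # ~3e-4: triple modular redundancy
nine   = majority_error(p, 9)   # far lower still
```

Adding copies drives the failure probability down roughly like p^(k+1), which
is the sense in which redundancy makes the error "arbitrarily low".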

~~~
avinium
Great summary.

