
Kurt Gödel and the Mechanization of Mathematics - vo2maxer
https://www.the-tls.co.uk/articles/kurt-godel-incompleteness-theorems/
======
libraryofbabel
One of the most interesting things about Gödel's Incompleteness theorem must
surely be how _little_ effect it has had on the day to day practice of
mathematical research. It's one of the most profound mathematical results in
the last 100 years, and yet outside foundational fields like axiomatic set
theory (Cohen's work on the Continuum Hypothesis comes to mind) it's quite
possible to completely ignore the existence of undecidable statements.
Gödel's work had a deep impact on mathematicians' _philosophical_ views of
what they do, but that didn't carry down into the actual day to day business
of proving theorems.

Isn't that strange? Wouldn't you think that undecidable propositions would be
scattered through mathematics and turn up at curious and interesting moments
in different fields? But apparently that's not generally the case. The
Goldbach conjecture (Knuth) and of course the Riemann Hypothesis have been
mentioned as possible candidates (but you might have said the same thing about
Fermat's last theorem 30 years ago). If we did find a first-rank conjecture in
number theory or some other area that's undecidable in ZFC (or whatever
system) then we would at last be stepping through the door that Gödel opened
for us.

~~~
yters
Just like the conservation of energy has little effect on day to day practice
of physics, besides blocking off an entire unbounded fruitless field of
research into free energy.

In other words, impossibility results make it possible to have a practical
discipline where people are focused on feasible approaches instead of spending
all their time and money researching pie in the sky technologies that can
never work. It's like when alchemy turned into chemistry and astrology turned
into astronomy, because the fields became restricted to what is realistic.

Arguably the same still has to occur in other disciplines _cough_ AI _cough_.

~~~
ProfHewitt
With respect to Intelligent Systems, there are two important theoretical
results.

According to [Shamir et al. 2019]: “given any two classes C1 and C2, along
with any point x∈C1, and our goal is to find some nearby y which is inside C2.
Our ability to do so suggests that all the classes defined by neural networks
are intertwined in a fractal-like way so that any point in any class is
simultaneously close to all the boundaries with all the other classes.” Their
results suggest that gradient classifiers (aka "Deep Learning") could remain
forever fragile in ways that cannot be remedied by larger training sets.
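The flavor of this fragility shows up even in the simplest possible setting. A toy sketch (not Shamir et al.'s fractal construction, and with made-up weights): for a linear classifier, the nearest point of the opposite class lies only |w·x + b| / ‖w‖ away, so a tiny step along w flips the label.

```python
import numpy as np

# Hypothetical fixed linear classifier (weights chosen for illustration).
w = np.array([1.0, -2.0, 0.5])
b = 0.1
classify = lambda x: int(x @ w + b > 0)

x = np.array([0.0, 0.1, 0.0])                  # classified as 0
margin = abs(x @ w + b) / np.linalg.norm(w)    # distance to the boundary
y = x + 1.01 * margin * w / np.linalg.norm(w)  # step just past the boundary

# classify(x) == 0 but classify(y) == 1, with ||y - x|| ~ 0.044
```

Deep networks are of course not linear, but gradient-based attacks exploit essentially this local picture, which is why larger training sets alone don't obviously help.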

Also important are the results on Inference Robust Logic (e.g., speed bumps)
in the following:

[https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3428114](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3428114)

~~~
yters
This seems to be the case with all the widely used ML paradigms: NNs, decision
forests, and SVMs. They are favored for their high-accuracy models, which seem
to come at the cost of massive overfitting and hence fragility.

I used a human in the loop approach for comparison, and those models, while
not as accurate, seem much more robust. I think humans are better at
identifying the big-picture patterns, whereas ML approaches will micro-optimize
their way to a fragile approximation of the big-picture pattern.

Practically, besides likelihood of breakdown, this means these overfit ML
models are very susceptible to hacking. I predict once ML becomes more
commercially widespread, we'll see a big increase in ML hacking.

------
rocqua
> The First Incompleteness Theorem tells us that assuming PA is consistent,
> one can formulate propositions that cannot be proved (nor disproved) on the
> basis of PA, but which are nevertheless true.

As far as I know, that last clause was never a claim made by Gödel,
specifically the claim that these propositions are _nevertheless true_. This
follows quite simply from the assumption that either a proposition is true, or
its negation is true (i.e. the law of the excluded middle). But I recall
reading that Gödel, in his proofs, went to great lengths to avoid the law of
the excluded middle over infinite sets.

Indeed, (again AFAIK) Gödel's first incompleteness theorem still holds in
constructive mathematics (roughly, mathematics where the law of the excluded
middle cannot be used). As such, I think the addition that these propositions
are 'nevertheless true' actually hides a lot of complexity, because there are
philosophical positions in which these independent propositions are neither
true nor false.

And really, this is why we call them independent. The truth of these
propositions is not determined by the axiom system. It's a rather big
philosophical stance to say that the 'truth' of these propositions can be
determined outside of the axiomatic system. It seems to me that such a claim
presupposes an objective (as in, not dependent on an axiomatic system)
definition of what is true.

~~~
xamuel
There _is_ an objective definition of truth in any particular model. For any
particular model M of, say, Peano arithmetic, there is an objective,
unambiguous, perfectly well-defined notion of truth in that model (of course,
the definition of truth cannot itself be formalized within the model).
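To make "truth in a model" concrete: for the standard model of arithmetic, the truth of a sentence is defined by straightforward recursion on its structure (Tarski's definition). A minimal Python sketch, with a made-up tuple encoding of formulas and only bounded existentials so that evaluation terminates; allowing unbounded quantifiers is exactly what makes truth-in-ℕ non-computable.

```python
# Toy evaluator for arithmetic sentences in the standard model N.
# The tuple encoding below is invented for this sketch.
def ev(phi, env=None):
    env = env or {}
    op, *args = phi
    if op == 'num':  return args[0]
    if op == 'var':  return env[args[0]]
    if op == '+':    return ev(args[0], env) + ev(args[1], env)
    if op == '*':    return ev(args[0], env) * ev(args[1], env)
    if op == '=':    return ev(args[0], env) == ev(args[1], env)
    if op == 'not':  return not ev(args[0], env)
    if op == 'exists<':                      # ('exists<', x, bound, body)
        x, bound, body = args
        return any(ev(body, {**env, x: n}) for n in range(ev(bound, env)))
    raise ValueError(op)

# "There is an n < 10 with n*n = 49" -- true in N (witness n = 7)
phi = ('exists<', 'n', ('num', 10),
       ('=', ('*', ('var', 'n'), ('var', 'n')), ('num', 49)))
```

The recursion is completely mechanical, which is the point: truth in a fixed model is unambiguous even though (by Tarski's theorem) it cannot be defined inside the model itself.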

~~~
strbean
> There is an objective definition of truth in any particular model.

"In any particular model" - this is exactly the parent's point.

From Wikipedia, a restatement of the 1st incompleteness theorem:

> there are statements of the language of F which can neither be proved nor
> disproved in F

The truth of a proposition P, within F, means "P is an axiom of F, or can be
proved in F".

Saying "there are statements of the language of F which can neither be proved
nor disproved in F... but the statements are true nevertheless" is appealing
to some 'objective notion' of truth that transcends F.

~~~
xamuel
> The truth of a proposition P, within F, means "P is an axiom of F, or can be
> proved in F".

No, truth and proof are NOT the same thing. The system F needn't have any
notion of truth. For example, Peano arithmetic has no notion of truth. Peano
arithmetic has notions of zero, successor, addition, and multiplication--none
of truth.

Almost all competent mathematicians would agree that there exist theorems in
the language of PA which are neither provable nor disprovable in PA but which
are nevertheless true. Here, "true" means "true in the standard model of PA",
not "true according to PA" --- the latter isn't even meaningful because there
is no such thing as "truth according to PA".

~~~
strbean
Out of curiosity, could you give some examples of such theorems?

~~~
xamuel
1. The original example would be Gödel's own example, namely, the statement
(formulated in the language of PA) that PA is consistent, i.e., that PA does
not prove 1=0.

Less contrived examples are:

2. Goodstein's Theorem
[https://en.wikipedia.org/wiki/Goodstein%27s_theorem](https://en.wikipedia.org/wiki/Goodstein%27s_theorem)

3. A theorem stating that every computable strategy is a winning strategy in
the hydra game (see
[https://faculty.baruch.cuny.edu/lkirby/accessible_independen...](https://faculty.baruch.cuny.edu/lkirby/accessible_independence_results.pdf))

4. The strengthened finite Ramsey theorem, see
[https://en.wikipedia.org/wiki/Paris%E2%80%93Harrington_theor...](https://en.wikipedia.org/wiki/Paris%E2%80%93Harrington_theorem)
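Goodstein's theorem is easy to state computationally: write n in hereditary base-b notation (exponents also in base b, recursively), replace every b by b+1, subtract 1, and repeat with the next base. The theorem says the sequence always reaches 0, yet PA cannot prove it. A small Python sketch:

```python
# Hereditary base-change step: write n in base b with exponents
# themselves in hereditary base-b form, then replace every b by b+1.
def hereditary_bump(n, b):
    result, power = 0, 0
    while n > 0:
        digit = n % b
        if digit:
            result += digit * (b + 1) ** hereditary_bump(power, b)
        n //= b
        power += 1
    return result

def goodstein(n, max_steps=100):
    """First terms of the Goodstein sequence starting at n (capped)."""
    seq, b = [n], 2
    while n > 0 and len(seq) < max_steps:
        n = hereditary_bump(n, b) - 1
        b += 1
        seq.append(n)
    return seq
```

Starting at 4 the values climb for an astronomical number of steps before the eventual descent; the standard termination proof uses transfinite induction up to ε₀, which is beyond PA's reach.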

~~~
strbean
Thanks!

------
pron
> formalism: the idea that mathematics can be reconstructed in a content-free
> manner, or more strongly put, that mathematics itself is nothing but “a
> formal game of symbols”.

This is a common misrepresentation of Hilbert's Formalism. It doesn't state
that _no_ mathematical sentence has content beyond its mere symbols, but that
_not every_ mathematical sentence has content beyond its mere symbols.

Formalism was meant as a compromise between the two dueling philosophies of
mathematics of the time: Russell's Logicism, which essentially said that all of
mathematics can be derived from some self-evident, obviously true, logical
axioms, and Brouwer's Intuitionism, which essentially said that the truth of
mathematics stems from empirical confirmation that breaks down when infinities
are involved, and therefore some logical axioms cannot be taken to be true for
infinities. Hilbert tried to accept Brouwer's finitist philosophical position
without rejecting any useful axioms. Formalism, then, separates mathematical
sentences into two kinds: "real" -- those dealing with finite quantities and
can be tied to physical observation and prediction, and "ideal" -- those
dealing with infinities. Instead of rejecting the latter, Formalism says that
as long as they don't introduce inconsistencies that invalidate the former,
one could view them -- the _ideal_ sentences -- as having no content beyond
symbols, i.e. as not referring to anything "real".

~~~
neel_k
Brouwer was absolutely not a finitist! Indeed, he completely rejected the idea
that you could use formal logic to bound what mathematics could talk about.

A better (but still not entirely accurate) description would be that Brouwer
had an epistemic conception of mathematical truth: mathematicians become able
to assert mathematical truths as they engage in the process of constructing
mathematical objects. This is indeed a constructive process, but it isn't
finitistic -- for Brouwer mathematical reasoning is open-ended rather than a
closed set of rules.

So much like Goedel (who disagreed with Brouwer in many other respects),
Brouwer was totally committed to the idea that mathematical statements
actually mean things. And because they carry meaning, the mathematical process
and the properties of the mathematical objects created by a mathematician take
precedence over the rules of logic.

This is why Brouwer rejected classical logic -- not because of any finitistic
reason, but because (for example) classical logic permits disassembling the
continuum into its set of points. Since (for Brouwer) the point of a continuum
is that it is continuous, it followed that any logical principles (such as
classical logic plus set theory) that permitted you to break up the continuum
were fallacious. This is why he spent a lot of time on his so called "strong
counterexamples", such as his proof that intuitionistically all total
functions on the reals are continuous.

Notice that the direction of reasoning here is exactly the opposite of what a
finitist would accept: Brouwer used the properties of the continuum to justify
various finitary logical rules, rather than the other way around. The
continuum was real, and logic just a façon-de-parler.

I find this kind of thing really interesting because it illustrates that the
pretheoretical idea of truth can be formalized in many different ways, and
that the various formalizations are not all compatible with each other. We are
used to informal ideas like space having many different mathematical
realizations -- metric spaces, topological spaces, vector spaces, etc -- but
it is a little disconcerting (and a lot fascinating) to realize that logic and
truth exhibit the same plural nature.

~~~
pron
Hey Neel! Brouwer was a finitist in the sense that he did not accept
infinitist axioms. His epistemology indeed was one of "mathematicians become
able to assert mathematical truths as they engage in the process of
constructing mathematical objects," but the justification was empiricism.
I.e., his concept of establishing truth was empirical, and therefore
finitist. Here he is in "On the significance of the excluded middle in
mathematics", 1923 (in Heijenoort):

> Within a specific finite "main system" we can always test (that is, either
> prove or reduce to absurdity) properties of systems, that is, test whether
> systems can be mapped, with prescribed correspondences between elements,
> into other systems; for the mapping determined by the property in question
> can in any case be performed in only a finite number of ways, and each of
> these can be undertaken by itself and pursued either to its conclusion or to
> a point of inhibition [here is his mention of induction that I provide
> below].

> ... On the basis of the testability just mentioned, there hold, for
> properties conceived within a specific finite main system, the principle of
> excluded middle

> ... It is a natural phenomenon, now, that numerous objects and mechanisms of
> the world of perception, considered in relation to extended complexes of
> facts and events, can be mastered if we think of them as (possibly partly
> unknown) finite discrete systems that for specific known parts are bound by
> specific laws of temporal concatenation. Hence the laws of theoretical
> logic, including the principle of excluded middle, are applicable to these
> objects and mechanisms in relation to the respective complexes of facts and
> events, even though here a complete empirical corroboration of the
> inferences drawn is usually materially excluded a priori and there cannot be
> any question of even a partial corroboration in the case of (juridical and
> other) inferences about the past.

Note his use of "testability", "natural phenomenon" and "empirical
corroboration." So the source of truth is nature by way of empiricism, and
that demands finitism. Even his reasoning about the continuum was finitist: no
determination on order/separation can be made after examining a finite prefix
of the decimal expansion of a real number (see ibid §2). If he were alive
today, I have little doubt he would have been an ultra-finitist -- not any
constructivist! -- because, as it turns out, there is no empirical
justification (even in principle) to accept infinite induction. The very
arguments he uses to reject LEM must also reject unbounded induction: it is
not verified by nature.

(Here's what he says on induction:

> Here the principle of mathematical induction often furnishes the means of
> carrying out such tests without individual consideration of every element
> involved in the mapping or of every possible way in which the mapping can be
> performed; consequently the test even for systems with a very large number
> of elements can at times be performed relatively rapidly.

After the discovery of computational complexity he would have had no choice
but to be an ultra-finitist.)

As to the notion of truth, I completely agree, but here Alan Turing came up
with a completely different perspective. Whereas both Brouwer and Russell were
looking for Truth in mathematics (and Hilbert, too, although he was content to
confine Truth to the contentual "real" propositions and the finitist rules of
inference), Turing rejected that notion altogether. To him, mathematics was a
means whose intrinsic truth is irrelevant. It is valuable inasmuch as it is
useful, not because it is true (see
[https://mdetlefsen.nd.edu/assets/201037/jf.turing.pdf](https://mdetlefsen.nd.edu/assets/201037/jf.turing.pdf)).

~~~
neel_k
You are being a bit too quick to equate finitism and constructivism. Hilbert
was a finitist, but not a constructivist, and Brouwer was a constructivist,
but he was not a finitist.

Brouwer's analysis of the continuum depends heavily upon the notion of a "free
choice sequence" (aka a lawless sequence). The idea is that a creating subject
-- i.e., you or me -- is free to make an unbounded sequence of choices to
produce a sequence. At any finite time, you can only have made a finite number
of arbitrary choices, but if you wait long enough you can make any number of
arbitrary choices. Because these choices are arbitrary, free choice sequences
cannot be described as being generated by any finite rule.

This is a constructive process, if you imagine (for example) generating a
lawless sequence by throwing dice or from the clicks of a Geiger counter. But
it's not a finitist one, because there is no finite data from which a choice
sequence can be computed. This let Brouwer retain the idea that a
mathematician can only have considered a finite amount of information at any
point, without having to make any identification of the real numbers with
computable functions.

In terms of complexity, there's an interesting analogy between choice
sequences and the question of whether randomization gives you extra power --
the Miller-Rabin primality test lends immense force to the intuition that RP
is stronger than P, but we might be centuries from proving it.

Even if we set aside randomness, though, an ultra-finitist/complexity-aware-
logic is something that basically every logician I've ever talked to has
agreed should exist, except we basically don't know how to do it. It's
something that pretty clearly demands more creative insight than technical
grunt, and unfortunately imaginative leaps don't come on demand. :/

~~~
pron
We need to separate Brouwer's construction of the choice sequences from his
philosophical justification for them; in the transition between the two there
were just things he couldn't have known. His entire conception of choice
sequences was because the mathematician's mind is finite. He couldn't have
based his choice sequences on computation because the notion didn't exist yet,
but if it had, he wouldn't have needed "lawless sequences". There is just
nothing that requires them either in his chosen axioms or in his philosophy --
their salient feature is that one cannot predict their behavior, something
that, after Turing, _we_ know is true even for "lawful sequences" _but Brouwer
didn't_. I, therefore, do not think it is correct to say that he wasn't a
finitist because he relied on randomness. As far as he knew, randomness was
consistent with a finite mind, the finiteness is what mattered to him, and
anyway, he cared about undecidability, not randomness, but didn't know the
former can exist without the latter.

What _is_ essential to his philosophy is that the only justification for the
axioms can come from nature by way of empiricism, and is therefore finite. If
you accept Russell's, Brouwer's, and Hilbert's belief that mathematics contains
Truth, and if you then accept Brouwer's belief that the only source of truth
is nature by way of empiricism, you have no choice today but to reject both
LEM and induction. It is perfectly fine to work in constructive mathematics,
and, in the Turingian sense it is of value, but it is no longer justified by
Brouwer's philosophy of mathematical truth. Brouwer would have had to reject
today's constructive mathematics on the same grounds he rejected classical
mathematics: its axioms are unjustified by nature and empirical observation,
even in principle. In fact, I don't think there is _any_ philosophy of
mathematics that could distinguish between constructivism and classical
mathematics on any reasonable philosophical grounds today.

I would say that Brouwer's philosophy is (with today's knowledge)
constructivist and ultra-finitist. BTW, Brouwer was not opposed to formalism
as much as he was to logicism, and he even tried to propose a way to reconcile
the two ("Intuitionistic reflections on formalism", 1927).

~~~
neel_k
In modern terms, you can identify sequences and automata: a sequence is a
current state, plus a transition function giving you the next element of the
sequence and the next state. However, if every function is tracked by a
computable process, then the transition function must also be tracked by a
computable process, and so every infinite sequence must be generated by a
computable process.
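That identification can be written down directly; a minimal sketch in Python (an illustrative encoding, not anything of Brouwer's): a "lawful" sequence is just finite data, an initial state plus a finite transition rule.

```python
from typing import Callable, Iterator, Tuple, TypeVar

S = TypeVar('S')

# A lawful sequence: a current state plus a transition function that
# yields the next element of the sequence and the next state.
def unfold(state: S, step: Callable[[S], Tuple[int, S]]) -> Iterator[int]:
    while True:
        element, state = step(state)
        yield element

# Example: the Fibonacci sequence from the state (a, b) and one rule.
fib = unfold((0, 1), lambda s: (s[0], (s[1], s[0] + s[1])))
first10 = [next(fib) for _ in range(10)]
```

A choice sequence is precisely what cannot be packaged this way: no fixed `state` and `step` determine its future elements.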

This identification is precisely what choice sequences are used to rule out.
If a sequence is genuinely arbitrary, then you cannot describe it up front
with a finite amount of data and a finite rule for updating that data.
Instead, you have to imagine a mathematician adding a new axiom at each time
step about the next element of a choice sequence, and requiring that any
constructions they perform work uniformly regardless of what axioms may be chosen
in the future.

In this picture, the total amount of axiomatic data is actually going up over
time, and this is actually important for the results Brouwer proved. In
particular, without the axioms for choice sequences, you cannot derive the
strong counterexamples, because you need classically-false axioms to prove
classically-false theorems (like the continuity of all real-valued functions).

~~~
pron
> This identification is precisely what choice sequences are used to rule out.

But you don't _need_ to rule them out -- Brouwer certainly didn't. He just
wanted to rule out _decidable_ sequences. It's the undecidability that's
important to his result, not arbitrariness, and that's perfectly compatible
with automata. He just didn't know about computational undecidability and
thought that undecidability requires lawlessness, but it doesn't. In any
event, he didn't think randomness violates finitism. That only finite systems
can be the source of axioms has been a constant refrain of his. I see that
according to Wikipedia's definition of Finitism, he would be a "classical
finitist."

> In particular, without the axioms for choice sequences, you cannot derive
> the strong counterexamples, because you need classically-false axioms to
> prove classically-false theorems (like the continuity of all real-valued
> functions).

The continuity of all real-valued functions can be proven with just computable
sequences because of undecidability; you just can't do it if computational
undecidability hasn't been discovered yet. In fact, it doesn't even become
quite "classically false", but can be "embedded" in classical mathematics by
restricting the "all": all _computable_ real functions are continuous is a
classical theorem. AFAIK, all constructive theorems can be embedded in
classical mathematics by restricting them to computable entities. Do you have
an example of any result that relies on uncomputable sequences?

~~~
a1369209993
> It's the undecidability that's important to his result, not arbitrariness

Undecidability _is_ arbitrariness - to say that, for example, the axiom of
choice is independent of ZF is the same thing as saying that there is an
arbitrary choice[0] between ZF+C and ZF+!C. Similarly, the sequence[1]
G(PA),G(PA+G(PA)),G(PA+G(PA)+G(PA+G(PA))),... requires (at least) one
additional arbitrary bit per item.

0: not asserting anything about what _makes_ that choice, though.

1: where G(X) is the Gödel sentence, roughly "X cannot prove G(X)"

~~~
pron
What I meant was that Brouwer doesn't need lawless sequences; all the results
are valid with lawful sequences only. He just didn't know that because he
didn't know that lawful sequences can be undecidable. You can call it
arbitrary, but it doesn't have to be random, it just needs to be unknowable,
which is the case even if it's chosen by a computational rule. It is true that
most of the sequences of the kind you presented are not computable, but I
don't think any result of Brouwer's is affected.

------
alimw
> Note that if a system is inconsistent then everything is provable, thus if a
> system is consistent there must be a proposition which cannot be proved.

¬ ∀ (P Q : Prop), (P → Q) → (¬P → ¬Q)
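Spelled out (a sketch in Lean 4): explosion itself is valid, but the quoted inference pattern, passing from P → Q to ¬P → ¬Q, is not, and the formula above can be checked with the countermodel P := False, Q := True.

```lean
-- Explosion: from an inconsistency, every proposition follows.
example (P : Prop) (h : False) : P := h.elim

-- But the schema (P → Q) → (¬P → ¬Q) is not valid:
-- taking P := False and Q := True refutes it.
example : ¬ ∀ (P Q : Prop), (P → Q) → (¬P → ¬Q) := by
  intro h
  exact h False True (fun f => f.elim) (fun f => f) trivial
```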

------
lioeters
Along these lines, a book I've seen recommended here:

Computability: Turing, Gödel, Church, and Beyond. B. Jack Copeland, Carl J.
Posy, Oron Shagrir (2013) MIT Press

Description:

In the 1930s a series of seminal works published by Alan Turing, Kurt Gödel,
Alonzo Church, and others established the theoretical basis for computability.
This work, advancing precise characterizations of effective, algorithmic
computability, was the culmination of intensive investigations into the
foundations of mathematics. In the decades since, the theory of computability
has moved to the center of discussions in philosophy, computer science, and
cognitive science. In this volume, distinguished computer scientists,
mathematicians, logicians, and philosophers consider the conceptual
foundations of computability in light of our modern understanding. Some
chapters focus on the pioneering work by Turing, Gödel, and Church, including
the Church-Turing thesis and Gödel's response to Church's and Turing's
proposals. Other chapters cover more recent technical developments, including
computability over the reals, Gödel's influence on mathematical logic and on
recursion theory and the impact of work by Turing and Emil Post on our
theoretical understanding of online and interactive computing; and others
relate computability and complexity to issues in the philosophy of mind, the
philosophy of science, and the philosophy of mathematics.

~~~
ProfHewitt
Using Orders on propositions, the construction of Gödel's proposition
_I'mUnprovable_ is blocked: the mapping Ψ↦⊬Ψ has no fixed point, because ⊬Ψ
has order one greater than the order of Ψ (Ψ being a propositional variable).
Consequently, some other way had to be devised to prove inferential
incompleteness without using Gödel's proposition _I'mUnprovable_. A
complication is that theorems of strongly-typed theories of the natural
numbers and computation are not computationally enumerable, because there are
uncountably many instances of the induction axiom in each theory. However,
proof checking is computationally decidable :-) The complication can be
overcome using special cases of the induction axioms to prove inferential
incompleteness of the theories. See the following:

[https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3418003](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3418003)

------
darkmighty
> One way of describing the Incompleteness Theorems (1931) of the Austrian
> logician Kurt Gödel is to say that he proved, in the form of a mathematical
> theorem, that the possibility of a fully automated mathematics can never be
> realized

What Gödel apparently missed (although it's hard to tell without going over
his writings carefully) is that _humans, by the same criterion, can't "solve"
mathematical problems_. This is essentially a re-statement of the
halting problem (undecidability).

You can't expect to go to the best mathematician in the world (of course
that's not a thing, but say, for example, Terry Tao), give him an arbitrary
problem and
expect the problem to be solved within a fixed time. Or even some arbitrary
but finite time (even if he had infinite scratch paper). If he had an
obligation to solve the problem (within an axiomatic basis), there could be
problems that go unsolved. Any mathematician knows this -- there's no
guarantee the Riemann hypothesis will be resolved -- it could be within a
year, within a century, or never.

It's very attractive to think we have some kind of higher powers (that
transcend ordinary matter, or computers, or w.e.), but we do not and mysticism
is not necessary to explain our abilities.

~~~
calf
Yeah I was confused by that and maybe the article could have clarified this in
relation to the Church-Turing thesis.

------
aey
It's an open question whether it is even decidable to show that the P=NP
problem is decidable :). We may live in a universe where undecidable problems
exist and it may never be possible to show that they are undecidable. Which
imho is kind of beautiful. Even if we live in a simulation it is so
incomprehensible
that it might as well be God to us.

------
bjornsing
I always get a feeling articles like the OP (and there seem to be many) are
pushing some kind of mysticism instead of clearly explaining...

Can anybody give an example of mathematics that mathematicians consider
“true”, but which is not provable in ZFC?

If no such mathematics exist, then mathematics is essentially just a game of
symbol manipulation, right?

~~~
fipso_act
Sure. Most mathematicians think it's true that ZFC is consistent; but that
can't be proved in ZFC.

 _mic drop /Halmos block_

~~~
ProfHewitt
However, the consistency of ZFC can be proved in strongly-typed higher-order
theories that subsume ZFC.

~~~
bjornsing
What does that mean though?

Does it only mean that ZFC is consistent when subsumed within a strongly-typed
higher order theory, or does it mean that ZFC by itself is consistent?

~~~
ProfHewitt
The strongly-typed higher-order theory _Ordinals_ is consistent because it has
a model that is unique up to a unique isomorphism. Consequently, ZFC is
consistent because it is a special case of the theory _Ordinals_.

------
swayvil
> One way of describing the Incompleteness Theorems (1931) of the Austrian
> logician Kurt Gödel is to say that he proved, in the form of a mathematical
> theorem, that the possibility of a fully automated mathematics can never be
> realized.

Which suggests the following solution : limit our endeavors to that part of
mathematics that CAN be automatically proven (and drill down infinitely into
details thereof). And stub off the rest with good-enough axioms (aka holy
doctrine).

Crazytalk? Well it's basically how we as a culture handle pretty much
everything, epistemologically (and ontologically) speaking.

It could be very efficient. It could be a dystopian nightmare.

------
chrisshroba
What would be a good starting point to learn more about the topics described
in this article? I felt like I understood the gist, but a lot of things went
over my head.

~~~
munificent
Douglas Hofstadter's "Gödel, Escher, Bach" does a very slow, gradual walk
through Gödel's incompleteness theorem. If you can get into his writing style,
it's a good way to really "get" the points.

~~~
chrisshroba
Thanks for the recommendation! I actually just picked it up off a friend's
bookshelf yesterday and it's next on my list of books to read.

I appreciate your response!

------
neonate
[http://archive.md/iyvzm](http://archive.md/iyvzm)

------
dajohnson89
are there any practical impacts (direct or indirect) of these results?

~~~
yters
The computer-theory version of Gödel's result is the halting problem, and that
is applicable all over the place, in the same way that physical conservation
laws imply that a large variety of inventions are impossible.

For example, it is impossible in general to understand what an English
sentence says, because that sentence may describe the operation of an
arbitrary program, and understanding an arbitrary program is undecidable.

Another example, it is impossible to detect all variants of malicious
programs, because that requires understanding what all programs do, which
requires identifying whether an arbitrary program halts or not.

Yet another example, optimal machine learning is impossible, since that is
based on Solomonoff induction, based on Kolmogorov complexity, which is
uncomputable due to the halting problem.

And another off the top of my head, it is impossible to create a general
program-creating program, such that you give it a well-defined specification
and it spits out a program that meets the requirements, if the requirements
are satisfiable, since that requires solving the halting problem.
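All of these reductions bottom out in the same diagonalization. A minimal sketch in Python of why no total `halts(f, x)` can be correct (the names are illustrative; the looping branch is never actually executed here):

```python
# The diagonal argument: given any claimed halting decider, build a
# program that does the opposite of what the decider predicts about it.
def make_diagonal(halts):
    def diagonal(prog):
        if halts(prog, prog):   # decider says prog(prog) halts...
            while True:         # ...so loop forever instead
                pass
        return 0                # decider says it loops, so halt
    return diagonal

# Any total candidate is refuted on its own diagonal program. Take the
# (wrong) decider that always answers "halts":
always_yes = lambda f, x: True
diag = make_diagonal(always_yes)
# always_yes claims diag(diag) halts, but diag(diag) would loop forever;
# a decider that always answers "loops" fails symmetrically, and by the
# same construction so does every computable decider.
```

Each application above (malware detection, program synthesis, Kolmogorov complexity) works by showing that solving it would hand you exactly such a `halts`.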

Finally, humans, while not optimally satisfying all the above tasks, do quite
a decent job at these tasks, which is surprising if humans themselves were
some sort of algorithm. So, this strongly suggests the human mind is doing
some operation beyond computation, which in turn would imply that human
intelligence level AI is impossible in theory, and thus impossible in
practice. Furthermore, if the human mind is indeed doing something
uncomputable, this has technological implications that human in the loop
computation approaches are inherently more powerful than computation alone, a
vastly under researched concept, although arguably the secret to the success
of most AI/ML/datamining companies these days, such as Google, Facebook,
Amazon, and Palantir. And the software industry in general is proof of this
concept that humans are program generating processes performing much better
than the halting problem would lead one to expect if humans were indeed mere
computation.

Anyways, a whole bunch of impossibility implications, which have the
practical application of protecting your wallet from comp. sci. snake oil
salesmen, and the intriguing possibility of 'beyond computation' technology.

~~~
pure-awesome
> Finally, humans, while not optimally satisfying all the above tasks, do
> quite a decent job at these tasks, which is surprising if humans themselves
> were some sort of algorithm.

Whilst I agree with everything in your comment before this point, I don't
agree with this.

There is a very big difference between _almost_ achieving something and
_actually_ achieving it. For example, there exist algorithms which can
determine, for a large class of programs, whether a given program halts, or
which can estimate the probability that a program halts before some point,
even though they are unable to solve the general halting problem. So you
haven't really argued that humans are doing something computers can't do.
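For instance, restricting the class of programs makes halting decidable.
A minimal sketch (nothing here beyond the standard library): straight-line
Python with no loops and no calls must halt, so a sound-but-incomplete
decider for that class is easy to write:

```python
import ast

def surely_halts(source: str) -> bool:
    """Sound but incomplete: return True only when `source` is
    straight-line Python (no loops, no calls), which must halt.
    False means "don't know", not "doesn't halt"."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.While, ast.For, ast.Call)):
            return False
    return True
```

Real termination checkers (and, plausibly, humans) work the same way in
spirit: decide confidently on a restricted class, answer "don't know"
elsewhere.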

In addition, I take issue with your statement that humans "do quite a decent
job at these tasks". On the face of it, I agree that yes, (when sufficiently
motivated) (some) humans do indeed do a pretty good job of e.g. proving
mathematical theorems and writing programs that halt when they should.
Certainly better than the best computers we can build. But that "pretty good"
is a _far_ cry from the optimality referenced in the theorems you've stated.
And if you are using this "pretty good" to indicate that humans are better
than any possible algorithm, that argument doesn't follow through.

I think that human-level algorithms are in principle possible. They will have
to incorporate a lot of probability and approximate results and the like, and
likely will have something similar to "intuition" where the path from input to
output is not clear, whether to human onlookers or to the algorithm itself
(whatever that might mean). We already see hints of this today with our deep
learning algorithms. Not to say that we are necessarily close to achieving
this, or that we ever will, but on the other hand it might very well be just
around the corner!

~~~
yters
Sure, this observation is not a definitive proof; it is more of an
evidential argument.

There is also a class of deciders that can each decide halting for an
infinitely large class of programs which cannot be decided computationally,
while still not being complete. I'd say humans most likely fall in this
class, since they far exceed anything we can do with the best algorithms
we've dreamed up, yet also cannot solve everything.

Of course, you can always say there is a yet undiscovered algorithm to fill
the gap between the state of the art and human performance, but at some point
that hypothesis becomes less well supported by the evidence than the
uncomputable-mind hypothesis, and I'd say we are well past that point.

~~~
pure-awesome
Humans exceed our current computers in the same way that a large asteroid
impact exceeds our current nuclear weaponry. It's a matter of scale, not of
kind.

If we track the capability of computing machines from binary adders through
old-school chess AI, expert systems and robotics, to the neural networks
playing Go, chess, StarCraft, and Dota, we see an ever expanding sphere of
ability and influence - often achieving what humans had predicted was
impossible for computers.

> Of course, you can always say there is a yet undiscovered algorithm to fill
> the gap between state of the art and human performance

This implies a kind of asymptotic approach towards human intelligence, that
it's something we keep trying to approach but will never reach.

But beyond "filling the gap", I believe computers have the potential to shoot
past us in ability. And if they do, I think it's going to be a pretty shocking
experience, because the rate of increase in intelligence at that point will
be bigger than we expect - there's no reason to believe the exact human level
of intelligence is a special border, objectively speaking.

To claim that the evidence so far supports the uncomputable-mind hypothesis
seems to me like building tiny motors that get bigger and better each year,
and then claiming we will never beat an elephant in strength because no motor
up to that point has managed it. Yes, our metaphorical motors are still tiny;
yes, it takes extra engineering beyond the motor to actually push down a
tree; and indeed it's possible we will never build a big enough machine. But
to take each larger motor as somehow being evidence AGAINST us ever getting
there is a strange viewpoint.

~~~
yters
I don't see why you believe this. From my perspective, all the recent gains
in AI abilities are due purely to faster hardware. But, due to the
combinatorics of search spaces, exponential speed increases get you only
linear AI improvements. So, we'll hit peak AI in the near future, falling way
short of even emulating human abilities.

As it is, compare the processing, space, and energy a computer must expend to
perform a task equivalent to what a human does, and there is no comparison:
computers are orders of magnitude less efficient. The only reason there is
the appearance of parity is that we are not looking at what it takes
computers, on the back end, to accomplish these tasks.

It is as if we flipped a coin to create the works of Shakespeare, waiting the
many eons necessary for just the right sequence to occur. We then say a coin
is just as capable as Shakespeare, but only because we've ignored how much
effort the coin flipping took versus how much effort it took Shakespeare.

So, I'm not arguing our motors will not become bigger. I'm arguing that as
our motors become bigger, the returns diminish exponentially, far too rapidly
to get us to true human parity.

