
Church-Turing Thesis Cannot Possibly Be True [video] - espeed
https://www.microsoft.com/en-us/research/video/church-turing-thesis-cannot-possibly-be-true/
======
joe_the_user
This seems like really exotic stuff.

It's one of those talks that seems to talk about things I know but then
immediately jumps to theories and definitions I have never heard of.
"Sequential algorithms were axiomatized in 2000" - This "axiomatization" was
apparently performed by the author of the talk, my Googling says. All links
about things discussed in this talk just seem to lead to other texts and such
by this author. Might not be a good sign, but maybe it's just a very
hard-to-grasp realm.

Anyway, broadly, the Church-Turing _thesis_ is not a theorem and cannot be
proven true or false (so said my professors and Wikipedia). Effectively, it is
a definition of computation that has so far been accepted because nothing
fundamentally different and more powerful than a Turing machine has been
exhibited (there is a theorem that Turing machines, partial recursive
functions and the lambda calculus are equivalent - and then there's the jump
to "these are really general and I think that about covers it" but still, not
a theorem, not provable).

My hunch is the author is mixing a "higher-order" logic version of the
Church-Turing thesis into their argument. Which is basically "cheating":
switching to an effectively different question without making that clear.
However, I simply don't know. This is basically opaque to my mere MA-level
maths.

~~~
joe_the_user
OK, yeah, reading more [1], I think the author is indeed considering things
like computation on real numbers where, for example, the value of the square
root of two can only be approximated by a Turing machine but where a
ruler-and-compass construction can hypothetically construct it exactly.

So if you toss out the idea of computation involving digital computers, you
can exhibit something not (exactly) Turing computable. If you have a
chaotically behaving dynamic system and call it a computer, then you have
something where computation becomes something in the eye of the beholder.

I'm not sure what questions this answers but there you are.

[1]
[https://www.researchgate.net/publication/221512843_What_Is_a...](https://www.researchgate.net/publication/221512843_What_Is_an_Algorithm)

~~~
daveFNbuck
A Turing machine can compute the square root of two exactly. It can't
represent it exactly as a fixed point number, but that's not the only way to
compute something. There's a pretty standard definition of computable numbers
[1] and it includes the square root of two.

[1]
[https://en.wikipedia.org/wiki/Computable_number](https://en.wikipedia.org/wiki/Computable_number)
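
To make that concrete: "computable" means any requested digit can be produced
in finite time. A minimal sketch in Python, using only the standard library
(the function name `sqrt2_digits` is just mine for illustration):

```python
from math import isqrt

def sqrt2_digits(n):
    # floor(sqrt(2) * 10**n), computed exactly with integer arithmetic.
    # Every individual digit of sqrt(2) is reachable in finite time,
    # which is what makes sqrt(2) a computable number, even though the
    # full decimal expansion never terminates.
    return isqrt(2 * 10 ** (2 * n))

print(sqrt2_digits(5))  # 141421, i.e. 1.41421...
```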

~~~
joe_the_user
Sure, note I'm not trying to say the position being argued is right, I am
simply trying to describe it as well as I can understand it.

One might say that a "classical analogue" computer that "outputs" line
segments could output the square root of two, while a digital computer could
merely "represent" it.

That could be pure sophistry, especially since such classical analogue devices
never have truly infinite precision. But it seems like that's the sort of
thing being claimed here.

~~~
daveFNbuck
I'm having a hard time understanding the difference between outputting
something and representing it. Can you explain the difference, or is that just
your best understanding of the argument here?

~~~
rocqua
The point is that we can represent sqrt(2) symbolically (as I just did there),
but it cannot be written out in full as a binary (or decimal) expansion.

Computers can also work symbolically rather than based on some direct binary
representation. I think equality and especially comparison become more
difficult to determine, but it remains possible.
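
As a rough sketch of what "working symbolically" can look like, here is exact
arithmetic on numbers of the form a + b·sqrt(2) with rational a and b (the
class name `Q2` is made up for illustration):

```python
from fractions import Fraction

class Q2:
    """A number a + b*sqrt(2) with rational a, b, stored exactly."""

    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)

    def __mul__(self, other):
        # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2
        return Q2(self.a * other.a + 2 * self.b * other.b,
                  self.a * other.b + self.b * other.a)

    def __eq__(self, other):
        # Equality is exact: 1 and sqrt(2) are linearly independent over Q.
        return self.a == other.a and self.b == other.b

root2 = Q2(0, 1)
assert root2 * root2 == Q2(2)  # sqrt(2) squared is exactly 2
```

Ordering comparisons are indeed trickier (you have to reason about the sign of
a + b·sqrt(2)), but they stay decidable.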

~~~
vidarh
But this is merely a product of the choice of representation.

Here is a fully expanded representation: 1.

Of course my choice of base is a bit exotic, but the point is that no matter
what, we're working on symbols. We're just used to seeing some as more direct
analogues of numbers than others.

------
TimTheTinker
Overview pasted here:

————

The thesis asserts this: If an algorithm A computes a partial function f from
natural numbers to natural numbers then f is partially recursive, i.e., the
graph of f is recursively enumerable.

The thesis was formulated in the 1930s. The only algorithms at the time were
sequential algorithms. Sequential algorithms were axiomatized in 2000. This
axiomatization was used in 2008 to prove the thesis for sequential algorithms,
i.e., for the case where A ranges over sequential algorithms.

These days, in addition to sequential algorithms, there are parallel
algorithms, distributed algorithms, probabilistic algorithms, quantum
algorithms, learning algorithms, etc.

The question whether the thesis is true in full generality has been actively
discussed since the 1960s. We argue that, in full generality, the thesis
cannot possibly be true.

~~~
olliej
I’m still unclear on whether probabilistic algorithms are “algorithms” in the
strict sense: by that definition, an algorithm should have deterministic
behavior, which a probabilistic method does not have. But let’s say your
“random” is a deterministic PRNG -- does that then make the probabilistic
algorithm deterministic?

Basically the distinction can be best illustrated with bozo/bogo sort. The
definition many people use is:

1. Randomly shuffle the input

2. See if it is sorted; if not, go to 1.

The problem, as explained to me by an algorithms lecturer a long time ago, is
that the algorithm is not guaranteed to finish, and moreover the same input
may or may not result in it finishing on different runs.

Per that lecturer a “correct” bozo sort is

1. Generate a list of every permutation of the input

2. Iterate the list until you find a sorted entry and return it.

It has computable best-case, worst-case, and average-case running times (all
terrible). And it is guaranteed that the same input will always produce the
same behavior (terminate vs. not terminate) and the same output.

Obviously you could optimize this if you had a shuffle with the property that
repeatedly shuffling its own output is guaranteed to produce every ordering of
the original input. Then you would get closer to: shuffle; check; repeat. But
note that there is still no randomness involved :)

~~~
rocqua
See [1], which is about the 'complexity class' Bounded-error Probabilistic
Polynomial time. Similar to the P=NP conjecture, there is a BPP=P conjecture.
Also consider [2], which concerns variations of the Church-Turing thesis.
Notably, it mentions probabilistic computation.

To answer your original question, indeed probabilistic algorithms are an
extension of normal algorithms. One way to make the extension (and retain the
property that algorithms give the same output on the same input) is to add a
'random oracle'.

[1]
[https://en.wikipedia.org/wiki/BPP_(complexity)](https://en.wikipedia.org/wiki/BPP_\(complexity\))
[2]
[https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis#V...](https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis#Variations)
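
On the seeded-PRNG version of the question: fixing the seed turns a
probabilistic procedure into an ordinary deterministic function of its input.
A small sketch (the seed value 0 is an arbitrary choice of mine):

```python
import random

def monte_carlo_pi(n_samples, seed=0):
    # A "probabilistic" estimate of pi by sampling points in the unit
    # square. With the PRNG seed fixed, the same input always yields the
    # identical output, so this behaves like a classical deterministic
    # algorithm.
    rng = random.Random(seed)
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                 for _ in range(n_samples))
    return 4 * inside / n_samples

assert monte_carlo_pi(10_000) == monte_carlo_pi(10_000)  # deterministic
```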

~~~
kiriakasis
From a mathematical point of view, probabilistic algorithms simply manipulate
probability distributions instead of values.

The output is a random variable.

------
rain1
This is pretty misleading and click-baity! The formulation they use is quite
esoteric and not what people usually take the term to mean. The Church-Turing
thesis really just states that Turing completeness is the strongest type of
computational class we can expect to build (finite approximations of) in our
universe. And this does seem to be true. I hope the "exciting" title they
chose for the talk doesn't cause a lot of people to misunderstand this.

------
abdulhaq
A Turing machine can compute root 2 but it takes an infinite amount of time to
do so. A ruler and compass can do it near instantaneously, but it takes
forever to measure it.

~~~
comboy
It also takes an infinitely precise ruler and other tech which, at some level
of precision, is in no way more trivial than an infinite amount of time.

You can compute as many digits as you want, but when trying to measure it, at
some point Heisenberg (or Planck to be more precise) will stop you.

------
avodonosov
This talk is Yuri Gurevich's answer to Peter Shor's (famous for Shor's
algorithm) comment [1] regarding a previous Gurevich work. In that previous
work, Dershowitz and Gurevich prove the Church-Turing thesis formally, using
an axiomatisation of what an "algorithm" is, which captures the meaning
assumed in Church and Turing's time.

But Shor complains that their axiomatisation does not include probabilistic
or quantum algorithms.

In this talk Gurevich argues that, in the full generality of the term
"algorithm", the thesis cannot be true.

1 - [https://cstheory.stackexchange.com/questions/88/what-
would-i...](https://cstheory.stackexchange.com/questions/88/what-would-it-
mean-to-disprove-church-turing-thesis)

------
Verdex_3
There is a series of talks by Yuri Gurevich on Channel 9 where he talks about
algorithms. While I haven't watched this particular video (yet), I did watch
the other series a few times (3 videos IIRC). Anyway, in _those_ videos Yuri
seems to basically be saying that he doesn't think _all_ algorithms from now
to eternity will always be representable with turing machines. Which seems a
bit more reasonable than the title here (Church-Turing cannot be true). It's
an inverted form of the old adage that when an old scientist says something is
possible he's probably right, and when he says something is impossible he's
probably wrong: saying the Church-Turing thesis will never be irrelevant is
hedging your bets against human progress.

All that being said, I don't buy what Yuri is selling. Even quantum computing
can be simulated with a classical computer (with exponential slowdown), so I'm
not seeing what's going to turn out to be more fundamental at least in a
theoretical sense. Practically people might not care (yes technically you can
simulate that graphics card in lambda calculus, but practically you would
never do that and the guys building these things probably don't care about the
church turing thesis), but the effective computability definition feels pretty
effective. Either way, I'm not going to lose too much sleep over it. At the
end of the day I'm more interested in looking for ways to be more effective
and I'm not really interested in whether or not what I'm doing fits into the
current definition of effective computability.
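
To make the "simulated with exponential slowdown" point concrete, here is the
smallest possible state-vector simulation: a Hadamard gate on one qubit. For n
qubits the state vector has 2^n complex entries, which is where the
exponential cost comes from (all names here are mine):

```python
import math

def apply_hadamard(state):
    # One-qubit state vector: (amplitude of |0>, amplitude of |1>).
    # H maps |0> -> (|0> + |1>)/sqrt(2) and |1> -> (|0> - |1>)/sqrt(2).
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1 + 0j, 0 + 0j)
plus = apply_hadamard(zero)
# Equal probability of measuring 0 or 1:
assert abs(abs(plus[0]) ** 2 - 0.5) < 1e-12
assert abs(abs(plus[1]) ** 2 - 0.5) < 1e-12
```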

------
mwilcox
What about the Church–Turing–Deutsch principle?
[https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93...](https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93Deutsch_principle)

~~~
tlb
That one's pretty aspirational. At present, we have no idea how to build a
supercomputer that simulates exactly even a cubic nanometer-femtosecond of
space. It's tempting to believe that we might someday figure out that physics
is digital, but there is no workable theory for how.

~~~
kowdermeister
Not sure what your definition of simulation is, but there is already one done
on a supercomputer:

[https://www.youtube.com/watch?v=J3xLuZNKhlY](https://www.youtube.com/watch?v=J3xLuZNKhlY)

~~~
yorwba
That's a box of ≈21 cubic femtometers and the time resolution is one
yoctosecond. The difference between that and a cubic nanometer-femtosecond is
more than 25 orders of magnitude. And that's just simulating quantum
chromodynamics. Add in the other parts of the Standard Model and scaling the
simulation gets even more difficult.

~~~
kowdermeister
I'm pretty happy we can even simulate a box of ≈21 cubic femtometers :) Maybe
they should turn RTX on.

------
scythe
The talk begins:

ANNOUNCER: So it's my distinct pleasure, and privilege, to host Yuri for a
talk on a topic I have discussed with him since I was even a student: models
of computation. And through the years, we have interacted in various ways, on
both fundamentals of models of computation and then abstract state machine
applications, and at this point, it's also probably the last talk that Yuri
will give at this institution, as he is moving to Michigan next week. But I
have no doubts that there will be many opportunities to interact with Yuri in
the future. I hear that there's a birthday party coming up in a couple of
years, and it was always a pleasure to participate in these events. So,
without further ado, please go ahead.

YURI: Thank you very much. It's a pleasure to return for a day.

[First slide appears]

>My scholarly friend Nachum Dershowitz and I published a long paper

>"A natural axiomatization of computability and proof of Church's thesis",
_Bulletin of Symbolic Logic_ , 2008

>where we proved Church's thesis _for classical algorithms_.

>Here classical is not the counterpart to quantum as is common in quantum
literature. By classical algorithms we mean traditional algorithms from
antiquity to Turing's time, also known as sequential algorithms.

>The claim that the Church-Turing thesis cannot be true in general is implicit
in:

>"What is an Algorithm?" SOFSEM 2012, _Springer LNCS 7147_ (2012)

>The main goal of this talk is to clarify things and argue for the claim.

YURI: In 2008 Nachum Dershowitz, my very learned scholarly friend and myself,
we wrote this paper, we proved Church's thesis, and Turing's thesis, but on
very technical-- they are slight differences, it's a technicality -- for
classical algorithms. Now what do we mean by classical algorithms? Now quantum
people hijack the word "classical" \-- it became to mean non-quantum, but our
meaning was classical in the sense: algorithms from time immemorial,
certainly, until 1950s, 1960s. So, this is the algorithm that Church and
Turing had in mind when they put forward their theses. So, another name for
these classical algorithms was traditional algorithms, or sequential. I will
call them mostly sequential, because the first sort of non-classical, non-
traditional algorithms were parallel, and so there was this -- there was
traditional or non-sequential. You can generalize, and generalize, then it's
not completely obvious why it should be even true.

YURI: I had this argument implicitly published years ago, but as you see:

[Second slide appears]

>There is a problem with long scholarly papers: it takes time to read them,
and there is a tendency to skim.

>Peter Shor: The Dershowitz-Gurevich paper says nothing about probabilistic or
quantum computation. It does write down a set of axioms about computation, and
prove the Church-Turing thesis assuming those axioms. However, we're left with
justifying the axioms. Neither probabilistic nor quantum computation is
covered by these axioms (they admit this for probabilistic computation, and do
not mention quantum computation at all), so it's quite clear to me these
axioms are false in the real world, even though the Church-Turing thesis is
probably true.

>Nov, 22, 2010. [https://cstheory.stackexchange.com/questions/what-would-
it-m...](https://cstheory.stackexchange.com/questions/what-would-it-mean-to-
disprove-church-turing-thesis)

>The great hero of quantum computing has not read the paper.

===========================================================

I hope this will help you decide whether or not to make time to watch the
video, although I agree it would be helpful to have a complete transcription
available.

------
avodonosov
FWIW, Yuri Gurevich is famous for the Abstract State Machines formalism.

------
stevebmark
Why would you share this? It's incomprehensible.

------
olliej
Are there papers/slides anywhere?

I shall repeat my complaint about AV only material on HN: not everyone (wants
to/can/has time to) watch and listen to 10 minutes of reading spread out to 50
minutes.

It continues to frustrate me.

~~~
gabrielhn
Listen at 2x speed and skip ahead?

~~~
olliej
Doesn’t work for people who can’t listen.

Doesn’t work for people who can’t see.

Doesn’t work for people in environments where the above two are essentially
true.

25 minutes is still longer than 10 minutes (transcript, slides would be even
shorter although less informative), and I was being conservative about reading
speed.

~~~
avodonosov
People who can't see would be unable to read as well.

~~~
RugnirViking
However, if it were written in text, they could read it using a screen
reader, which is well-established technology by now.

This is not (as) available for a video if at least some component of the
lecture requires being able to see.

------
starpilot
I understood all of it.

------
austincheney
I am so not a math person. From a math perspective perhaps this is important.
From a computer perspective it completely misses the point.

This work from Alonzo Church gave us
[https://en.m.wikipedia.org/wiki/Lambda_calculus](https://en.m.wikipedia.org/wiki/Lambda_calculus)
which in turn provides recursive function abstraction and thus lexical scope.
While this model of computation is older than OOP, it currently seems to be
having a modern renaissance in its influence on language design.

~~~
austincheney
This many downvotes without any explanation is a true Reddit moment.

~~~
espeed
NB: Better to refrain from saying stuff like "it completely misses the point"
unless you are certain you understand the point beyond the level presented.

