
Why Philosophers Should Care About Computational Complexity (2011) [pdf] - alokrai
https://www.scottaaronson.com/papers/philos.pdf
======
Animats
That's a useful article. The easy parts include

\- The halting problem is decidable for systems with finite memory. That's
simple enough; a deterministic program with finite memory must either halt or
repeat a previous state (see the sketch after these two points). Now the
question is whether there's a faster way to determine the outcome than
running the program. That's a complexity problem, one of those where the
average case is much easier than the worst case. Many NP-hard problems are
like that.

\- The same scaling issue applies to the "Chinese room", which is an extremely
inefficient solution to the problem. The question is how much you can improve
on exhaustive enumeration.
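
A minimal sketch of the first point, modeling the program as a deterministic
step function over finitely many states (the names `step`, `start`, and the
`HALT` sentinel are illustrative assumptions, not anything from the article):

```python
# Floyd's tortoise-and-hare finds a repeated state without storing every
# state seen so far; with finite memory, no repeat means a halt is coming.

HALT = None  # assumed sentinel: step() returns None when the program halts

def halts(step, start):
    """True iff repeatedly applying `step` from `start` reaches HALT."""
    slow = fast = start
    while True:
        slow = step(slow)          # tortoise: one step
        if slow is HALT:
            return True
        for _ in range(2):         # hare: two steps
            fast = step(fast)
            if fast is HALT:
                return True
        if slow == fast:           # a state repeated: it can never halt
            return False

# Example: a mod-8 counter halts when it wraps to 0; adding 2 from an odd
# start never wraps, so it cycles forever.
print(halts(lambda s: (s + 1) % 8 or HALT, 1))  # True
print(halts(lambda s: (s + 2) % 8, 1))          # False
```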

Then it gets hard. I thought it was established that quantum computing is not
going to allow brute-force search (using multiple universes?) on problems like
factoring. Is that wrong?

~~~
goldenkey
Though the halting problem may be decidable for machines with finite memory,
we still don't know what the program that iterates the most states before
halting looks like. The issue is further clouded when you allow a program to
edit its own code, i.e. no code/data separation. In fact, compared with a
machine that separates code and data, the same machine without that
separation will be able to run for longer before halting, iterating more
states.

Now, this is a hunch based on research I have been working on: I believe that
the programs that iterate the most states before halting have symmetry in
time. The structure would mimic V, the von Neumann universe [1].

Here's how the programs would operate: iterate all possible configurations
for a finite subset of the memory.

Then iterate all possible _ways_ to iterate.

Then iterate all possible ways to iterate all possible ways.

In writing the program, one would repeat the nesting until the program's
memory requirements cannot support a halting condition, and then dial it back
until they do.
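
For concreteness, a sketch of the naive, statically nested version of those
levels (the parameters and the reading of "ways to iterate" as visit
orderings are my assumptions, not the parent's):

```python
from itertools import permutations, product

CELLS = 3            # assumed size of the finite memory subset
ALPHABET = (0, 1)    # assumed values a memory cell can take

def level1():
    # Level 1: iterate all possible configurations of the memory subset.
    yield from product(ALPHABET, repeat=CELLS)

def level2():
    # Level 2: iterate all possible "ways to iterate", read here as every
    # ordering in which the level-1 configurations could be visited.
    yield from permutations(list(level1()))

# Each further level ("all ways to iterate all ways ...") needs yet another
# statically written wrapper, so the source text grows with the nesting
# depth; that static growth is the inefficiency criticized below.
```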

The naive procedural method of a bunch of nested for loops will fail
spectacularly. Static code is simply an inefficient use of memory space. That
is what makes writing this necessarily polymorphic/self-modifying emulation
of V, in both time and memory, a challenge still unsolved. But I think we are
close.

The amount of nesting would be a measure of the memory requirements necessary
to iterate X configurations, a kind of constant representing the language's
specific relationship between code and data, an efficiency of sorts. This
constant would be determined by the language, but even more so by the limits
of computation in general.

[1]
[https://en.wikipedia.org/wiki/Von_Neumann_universe](https://en.wikipedia.org/wiki/Von_Neumann_universe)

~~~
whatshisface
If there were an algorithm that produced the busy beaver for a computer of a
certain size, you could run its output to see how long it runs, and thereby
compute the busy beaver function. The busy beaver function grows faster than
any computable function [1], so that's not possible.

[1] [https://en.wikipedia.org/wiki/Busy_beaver](https://en.wikipedia.org/wiki/Busy_beaver)

~~~
daveFNbuck
You couldn't use that to compute the busy beaver function, as the busy beaver
function asks about Turing machines, which have unlimited tape. You can solve
it if you have a finite amount of tape, but you'd never know whether one of
the machines that tried to exceed the limit would have produced a longer
output before eventually halting.
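
A sketch of the bounded-tape case, assuming the machine is given as a total
transition table (`delta` and the other names here are illustrative, and the
off-tape behavior is an assumption): a bounded tape admits only finitely many
configurations, so any run longer than that count must be looping.

```python
def halts_on_bounded_tape(delta, start_state, halt_states, tape_len,
                          n_symbols=2):
    """delta maps (state, symbol) -> (new_state, new_symbol, move), with
    move in {-1, +1}. Decides halting by the pigeonhole principle: more
    steps than configurations means some configuration repeated."""
    n_states = len({s for (s, _) in delta})
    bound = n_states * tape_len * n_symbols ** tape_len  # all configurations
    tape = [0] * tape_len
    state, head = start_state, 0
    for _ in range(bound + 1):
        if state in halt_states:
            return True
        state, tape[head], move = delta[(state, tape[head])]
        head += move
        if not 0 <= head < tape_len:
            return True  # assumption: running off the tape ends the run
    return False         # outlived every configuration: it loops forever
```

The gap the parent describes is exactly what this sketch cannot close: a
machine stopped at the tape boundary here might, with more tape, have gone on
to halt with a longer run.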

------
mlthoughts2018
Previously discussed on HN:

\-
[https://news.ycombinator.com/item?id=2861825](https://news.ycombinator.com/item?id=2861825)

\-
[https://news.ycombinator.com/item?id=2897277](https://news.ycombinator.com/item?id=2897277)

\-
[https://news.ycombinator.com/item?id=11913825](https://news.ycombinator.com/item?id=11913825)

~~~
fergie
Thanks! I missed it all of those times, so I am glad it was reposted, and
also glad that you have linked to the previous conversations.

------
danharaj
The _method_ of proof of P!=NP will undoubtedly tell us something deeply
fundamental about computation, and therefore logic, and therefore philosophy.
That complexity theory has proven so difficult thus far is in itself
philosophically interesting.

~~~
goldenkey
Seems directly related to philosophy via the Busy Beaver [1] function. Every
statement about all integers, e.g. Goldbach's conjecture, can map to a
predicate P(n). P(n) is either true for all integers or fails at some least
n. This least n must be bounded by BB(program_size(P)).
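
To make the mapping concrete, here is a sketch with Goldbach's conjecture as
the predicate (the encoding of n and the helper names are my assumptions):
the search program halts iff some instance fails, and a halting run takes at
most BB(its program size) steps, which is what bounds the least
counterexample.

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def P(n):
    # Goldbach instance: the even number 2n + 2 is a sum of two primes.
    e = 2 * n + 2
    return any(is_prime(p) and is_prime(e - p) for p in range(2, e // 2 + 1))

def least_counterexample():
    # Halts iff Goldbach's conjecture is false; any halting run, and hence
    # the least failing n, is bounded in terms of BB(program_size(P)).
    n = 1
    while P(n):
        n += 1
    return n
```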

We can look at the predicates that don't fail, and also take their negations,
~P(n), which clearly do fail at n=1. Together these predicates are all P such
that ∀n P(n) is true or ∄n P(n) is true. They take up a percentage of the
space of all predicates (an analog of Chaitin's constant [2]). This means
that in the distribution of all possible predicates, a certain proportion are
just (∞,1) and (1,∞) mappings, while the rest of the predicates have mappings
bounded by their program size. It seems that because of the pigeonhole
principle and the mappings already "wasted" on (1,∞) and (∞,1), the rest of
the distribution gets restricted to bounded levels of expressive power.

The literal allowance of statements that are trivially false, or true for
everything, limits the power of all of the remaining statements, which now
have to position themselves as finitary AND confined by that proportionality.

The universe is a very weird place.

[1]
[https://en.wikipedia.org/wiki/Busy_beaver](https://en.wikipedia.org/wiki/Busy_beaver)
[2]
[http://en.wikipedia.org/wiki/Chaitin%27s_constant](http://en.wikipedia.org/wiki/Chaitin%27s_constant)

------
tek-cyb-org
szabo

