
Post-human mathematics - subnaught
http://arxiv.org/abs/1308.4678
======
whatshisface
I think one of the greatest difficulties in completely automated mathematics
would be getting it to discover mostly those things which are _interesting_.
If computer mathematics is ever to diverge from human thinking, someone must
pay to keep the lights on. Since nobody would pay for random points in
theorem-space, it must have some way of figuring out what the human brains
down at the breaker box actually want. To me this looks a lot like the problem
of computer-automated storytelling, or even art.

Also, I don't think the alien-ness of computer proofs is a given. Perhaps
someday some psychologist or philosopher will work out exactly what our
cognition likes to work with, at which point you could write a proof compiler
that outputs those things.

That raises an interesting question: can all interesting proofs be built from
human-friendly steps? Maybe that's why we haven't worked out P vs. NP.

~~~
avn2109
>> "...random points in theorem-space..."

Question: Given an arbitrary powerful computer, is the nature of mathematics
such that in principle we could exhaust the points in theorem space by brute
force?

E.g. is there in principle a finite amount of math?

~~~
noamyoungerm
This is the type of question that quickly runs into problems wrt
incompleteness, but there are a few key points that show that there are
infinitely many theorems.

For any given statement P and any given theorem T, P OR T is a theorem,
meaning you can generate infinitely many theorems from one.
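To make that concrete, here's a toy Python sketch that treats theorems as plain strings; the particular statements are of course just placeholders:

```python
# Treat theorems as strings in some formal language. If T is a theorem,
# then "(T) OR (P)" is also a theorem for any well-formed statement P,
# so a single theorem yields infinitely many (ever longer) theorems.

def weaken(theorem: str, statement: str) -> str:
    """Form the disjunction of a known theorem with any statement."""
    return f"({theorem}) OR ({statement})"

# Hypothetical starting theorem and an arbitrary statement.
t = "1 + 1 = 2"
for _ in range(3):
    t = weaken(t, "0 = 1")
    print(t)
```

Each iteration produces a strictly longer theorem, so the process never repeats itself.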

More worryingly, there are infinitely many statements that are true but
impossible to prove. From Gödel's first incompleteness theorem, we know at
least one such statement G must exist. Any statement of the form G AND T,
where T is a theorem, is then also true, but can be proved only if you can
prove the unprovable G.

From a more general perspective, it's worth remembering that theorems can
always be described as a finitely-long string of symbols in some formal logic
system, and so running out of theorems to try and prove is like running out of
strings.
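So "exhausting theorem-space" would mean exhausting the countably infinite set of finite strings over a finite alphabet; a naive Python enumerator makes the point (the alphabet here is an arbitrary toy choice):

```python
from itertools import count, product

# Enumerate every finite string over a (toy) symbol alphabet,
# shortest first. The enumeration never ends: for any length n there
# are len(alphabet)**n strings, and n is unbounded.

def all_strings(alphabet):
    for length in count(1):
        for symbols in product(alphabet, repeat=length):
            yield "".join(symbols)

gen = all_strings("()~&v=x0S")  # toy logical alphabet
first = [next(gen) for _ in range(12)]
print(first)
```

Brute force can visit any given string eventually, but can never finish the list, which is the sense in which there is no "finite amount of math" to exhaust.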

~~~
grkvlt
Since each theorem has a Gödel-number, there must be a function f(x) -> {0,1}
where, when x is the number of an 'interesting' theorem, f(x) is 1, and
otherwise it is 0. So, we just need to find _this_ function (or rather, its
Gödel number) and then we can iteratively feed all the integers to it,
translating the interesting ones back into theorems.

This job is made _much_ easier by the fact that the proof of this
'interestingness' theorem has a Gödel-number of its own; call it F. And,
proving that something is interesting is obviously _also_ interesting. So,
just find the situation where:

    f(F) = 1

and we're done!

I'll collect my Fields medal later, thanks.

~~~
JadeNB
Your condition `f(F) = 1` doesn't seem to do anything to distinguish your
meta-theorem `F` from any other interesting theorem.

------
marcelluspye
> Is there a structure to mathematics which is independent of the human brain?

I would venture to say "no". I don't think humans will ever have no place in
mathematics, because the problems we deem "important" are often relatively
arbitrary. If a "post-human mathematician" starts spewing out thousands of
pages of mathematics a second, all in a form only a computer can understand,
no one will care. If a computer fells a tree in the woods, it doesn't make a
sound.

I wouldn't discount the possibility, though, that a future "creative" computer
manages to produce a proof indecipherable to humans of a theorem we care
about, at which point I think there will be quite a perturbation in the
mathematical community. If a computer proves the Riemann Hypothesis in such a
way that no one can understand it, but it spits out a Coq document that
everyone can load and verify, will people consider the problem solved?
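As a toy illustration of what "load and verify" means (in Lean rather than Coq, and with a trivial statement standing in for anything deep): the checker certifies the proof term mechanically, whether or not any human reads it.

```lean
-- A trivial machine-checked theorem. The checker accepts the proof
-- term by mechanical type-checking; no human understanding required.
-- The real artifact for a deep theorem would be vastly longer and
-- far less legible.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```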

~~~
DubiousPusher
>>I wouldn't discount the possibility, though, that a future "creative"
computer manages to produce a proof indecipherable to humans...

How would this be so? The sheer length of the proof rendering it unverifiable
in a human lifespan?

~~~
marcelluspye
I had written what now seems to me to be an unnecessarily long response to
this, but along the way I think I thought of an example to demonstrate my
point:

Shinichi Mochizuki spent several years compiling a large body of work which he
calls "Inter-Universal Teichmuller Theory", which implies a number of
important results in number theory, algebraic geometry, and other areas (if I
understand the gist of it). However, he has run into a wall in the
mathematical community, in that almost no one wants to spend the time to try
to understand what he wrote (I believe he estimated it would take someone
well-versed in the field around 6 months of study to get a grasp of it).

If a human could produce enough work in sufficiently dense terms that other
mathematicians don't want to touch it, I can imagine a computer could generate
exponentially more work in a much less human-readable format than Mochizuki
(albeit formally correct).

------
netcan
_Computers are used more and more but do not play a creative role_

It's always difficult to get concepts straight when talking about any of these
AI-ish questions. The thing that we describe before the fact as "intelligence"
is generally stuff we can't imagine mechanizing. Once it is mechanized and we
can see the mechanism, we don't really like to call it AI. I think it's the
same issue for "creative."

When humans do mathematics they look for theorems that are interesting
intuitively. We don't really understand what this intuition is. If a computer
does it, say along the way to solving some other problem, we will be able to
look into the mechanism, and it probably won't seem like intuition to us.

------
zitterbewegung
For mechanical theorem proving there is ACL2 which integrates machine learning
into the system. See
[http://arxiv.org/abs/1404.3034](http://arxiv.org/abs/1404.3034) also Coq has
machine learning add ons
[http://arxiv.org/abs/1410.5467](http://arxiv.org/abs/1410.5467)

~~~
nicklaf
That is quite interesting, thanks!

------
Strilanc
Almost all of this paper is dedicated to discussing whether or not computers
can be "creative". Then it mentions in passing that computer-discovered
theorems may be long and impenetrable (even if they are profound).

I guess I was expecting some attempted concrete task definitions of
creativity, such as "finds and proves statements with many useful implications
and applications", and discussion of how well existing theorem provers do at
those kinds of tasks. But instead of the "how might we achieve this, and what
will change?" paper I was hoping for, this is more of an "are we special?"
paper.

~~~
bweitzman
The author touches upon creativity being related to making educated guesses
about how to proceed with a proof.

------
ChuckMcM
Loved this quote -- _" Note, by the way, that a great mathematician is one who
does something new, not one who is good at doing again things that have been
done before."_

Same can be said of engineers.

~~~
technofire
No, precisely the opposite should be said of engineers. Engineering is about
reusing well-known techniques to achieve repeatable, predictable results whose
timelines can be estimated. Engineering is not art. When civil engineers build
new buildings or bridges (and I'm talking about your common building or
bridge, not some iconic project), do they try to innovate and undertake to do
things in some new-fangled way every time? No, because this would be extremely
costly and dangerous and would render useless much of the past work that's
been done testing materials and loads and so forth. When a method is known of
constructing a bridge or an overpass that is both safe and economical, a good
engineer should just keep doing that, with perhaps some adaptation as
appropriate on a case-by-case basis.

If software developers would stick with tried-and-true tools instead of
inventing new frameworks to solve the same old problems and immediately
obsoleting the "old" way of doing things each time, we'd have much less legacy
code and likely would be much better at estimating software development tasks,
and wouldn't have to solve the same problems over and over again.

~~~
joslin01
You're right about traditional engineers whose work has been going on for
quite some time now. We more-or-less figured out the "correct" way to build a
bridge or an on-ramp or most anything physical and tangible.

However, your last point is not valid, because it rests on the fallacy that we
ever got the framework problem "correct". Who is to say that Django is better
than RoR, or vice versa? Software is still a very new craft; you can't say
that about masonry. Actually, your fallacy extends even further: you're
assuming software is predictable. There are degrees of predictability, but
most programming starts out as a trek into the unknown. Sure, you might grab a
familiar lamp like [Framework X] to shed light along the way, but really you
don't know for sure what you're in for. If you did (or do), then you probably
spent an insane amount of time spec'ing everything out perfectly. I have no
problem with that, but there's always some dissonance between spec and code,
and the further out that spec gets, the greater the dissonance.

Software will always be kinda crazy in my opinion, but I reckon we're getting
better with these agile approaches that embrace the unpredictability. In the
TDD approach, you have to come up with tests first. This can usually give the
programmer or architect a much clearer picture than "step 1: build web app"
which will transfer over into their estimates.
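A minimal sketch of that tests-first discipline in Python; the function and its spec are hypothetical, just to show the shape:

```python
# Tests written first: they pin down what "slugify" must do before any
# implementation exists, which is where the clearer picture (and the
# better estimate) comes from.

def test_lowercases_and_hyphenates():
    assert slugify("Build Web App") == "build-web-app"

def test_single_word():
    assert slugify("Step") == "step"

# Implementation written only after the tests above existed
# (the function name and spec here are made up for illustration).
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

test_lowercases_and_hyphenates()
test_single_word()
```

The tests double as the spec: "step 1: build web app" gives you nothing to estimate against, while a failing test suite does.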

------
nemo1618

> The ability to speak was clearly favored by evolution, and the same might
> be said of the ability to count from 1 to 10.

Actually:
[https://en.wikipedia.org/wiki/Pirah%C3%A3_language#Pirah.C3....](https://en.wikipedia.org/wiki/Pirah%C3%A3_language#Pirah.C3.A3_and_linguistic_relativity)

------
jessriedel
Can someone give a better summary of the author's points? The abstract gives
an intro, but not much else.

~~~
jk4930
Human brains are severely limited, so the mathematics they invent is severely
limited: everything new comes in small, understandable steps that build on
huge piles of proven mathematics. A computer could create entirely new, huge,
complex ideas that we wouldn't understand, because it doesn't depend on making
small steps.

~~~
anentropic
is there any evidence that a computer has ever created a new 'idea' ?

~~~
bendbro
If human generated books, artwork, and behavior qualify as new ideas, why
don't computer generated things also qualify?

[https://www.youtube.com/watch?v=tCPzYM7B338](https://www.youtube.com/watch?v=tCPzYM7B338)
[https://www.youtube.com/watch?v=Cbb08ifTzUk](https://www.youtube.com/watch?v=Cbb08ifTzUk)

------
spooningtamarin
[https://agtb.wordpress.com/2012/04/01/automatic-proof-for-reimanns-hypothesis/](https://agtb.wordpress.com/2012/04/01/automatic-proof-for-reimanns-hypothesis/)

Love this one as a look into the future.

------
taber
The technology of mathematics is not theorems. Think of theorems like unit
tests: no matter what framework you use to get a result, it should match the
corresponding results that other approaches have yielded.
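The analogy can be made literal; here's a small Python sketch where a known identity plays the role of the unit test that any approach, whatever "framework" it uses, must pass:

```python
# Two different "frameworks" for the same result. However you compute
# it, the answer must match the theorem-as-test.

def sum_by_loop(n):
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_by_formula(n):
    return n * (n + 1) // 2  # Gauss's closed form

# The identity 1 + 2 + ... + n = n(n+1)/2 acts as the unit test:
# both approaches must agree on every input.
for n in (0, 1, 10, 100):
    assert sum_by_loop(n) == sum_by_formula(n)
```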

The technology of mathematics is words. Words that define the barriers between
abstract objects and their different properties. The set of words that a
mathematician uses to approach a problem is where progress is made.

Until a computer can conceptualize a problem outside of the words used to
describe it, it will never mimic this aspect of abstract thought.

------
tvural
How human mathematicians decide that a problem is "important" is not as
arbitrary as people seem to think. The work mathematicians choose to do can be
heavily influenced by fashion, but in general, a problem is important if its
solution would give insight into many other important, unsolved problems.
Given this definition, it wouldn't be difficult to get the computer's
definition of an interesting problem to align with ours.

------
api
They'd probably be more... computational:

[http://www.amazon.com/dp/1579550088/?tag=googhydr-20&hvadid=...](http://www.amazon.com/dp/1579550088/?tag=googhydr-20&hvadid=48305385595&hvpos=1t1&hvexid=&hvnetw=g&hvrand=2373967073518921741&hvpone=28.65&hvptwo=&hvqmt=b&hvdev=c&ref=pd_sl_4j8ledlqpp_b)

------
eveningcoffee
_you please enter digital certificate of virginity of grandmother , or some
such nonsense._

How did they get this kind of spam into arxiv.org?

------
tianlins
I think automated theorem proving replaces the "search function" of a
mathematician. But there is another part, which seems to be more important:
creative insight. For example, inferring and conjecturing a theorem from a
few "data points". Machines do not have such a capability yet.
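A caricature of that "few data points" step, in Python: a toy conjecture engine that guesses a polynomial degree by taking finite differences. Real conjecture-making is far beyond this, which is rather the point:

```python
# A crude "conjecture from data points": guess that a sequence is
# polynomial by checking whether its finite differences eventually
# become constant.

def differences(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

def conjecture_degree(seq):
    """Return the degree of a polynomial apparently fitting seq,
    or None if the differences never settle to a constant."""
    degree = 0
    while len(seq) > 1:
        if len(set(seq)) == 1:
            return degree
        seq = differences(seq)
        degree += 1
    return None

# 1, 4, 9, 16, 25 -> differences 3, 5, 7, 9 -> differences 2, 2, 2
print(conjecture_degree([1, 4, 9, 16, 25]))  # conjectures degree 2
```

It "conjectures" n^2 from five values but has no notion of whether the pattern is interesting, or why it holds.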

~~~
zenogais
I think creativity is more than just inference from limited data (which
computers can do also). It has to do with either discovering a way to connect
two or more seemingly unconnected things or discovering a new means by which
to connect two or more things.

Mechanically churning through a pre-defined possibility space doesn't seem
like a sure-fire way to produce either of these effects. Though it is, no
doubt, a great way to generate proofs that were previously prohibitively
expensive to produce.

------
golergka
While the theme seems interesting, I found the article itself pretty
underwhelming. It just describes the current state of affairs and then goes
into hypotheticals with less precision and imagination than many sci-fi
authors before.

------
underlings
So I guess this is it then. If people were wondering about what comes after
"postmodern" it's probably:

Posthuman

------
cLeEOGPw
As Mike Tyson said, "a computer might win a chess match, but he would lose in
a boxing ring".

~~~
nicklaf
That doesn't even make sense. If we're going to compare apples to apples,
you'll either have to remove the human's brain from his body and place it in
the ring with the computer, or give the computer a proper body.

I don't see the human winning in either case.

~~~
dazmax
If the rules are that you have to use a human body, good luck computer.

Considering what our brains were designed for, you could make the argument
that chess is just as unfairly biased in a computer's favor as controlling a
human body in a boxing match is in the human brain's favor.

~~~
gohrt
Chess was invented long before computers. It's not unfairly biased; it happens
to be a particular human endeavor that computers can (relatively) easily be
made much better at.

------
sevzi7
I'm already doing this with Java multicore. Is Java multicore human? No.

