

Are Mathematicians In Jeopardy? - amichail
http://rjlipton.wordpress.com/2011/02/17/are-mathematicians-in-jeopardy/

======
waqf
When mathematicians are obsolete, computer scientists will be obsolete. When
computer scientists are obsolete, we have by definition entered the
Singularity.

So my take is that he's not necessarily wrong, but if he's right then lack of
employment for mathematicians won't be our biggest concern.

~~~
tybris
To me, the singularity is hypothetical nonsense that shows a clear
misunderstanding of engineering problems. Computer systems have not become
much more sophisticated in the past 40 years; we have just made them bigger.
We keep running into the same mathematical and physical limitations, and those
will not suddenly disappear when machines can improve themselves. In fact,
even if such a machine were possible, it probably would not get much further
than a few generations, after which the limitations would have become orders
of magnitude more complicated relative to the sophistication of the machine.
(Welcome to engineering.)

~~~
JohnnyBrown
I'm not saying that the singularity isn't some kind of "nerd rapture", but it
may be a bit disingenuous to say computer systems haven't gotten much more
sophisticated. Sure, we're still using Von Neumann architecture and
transistors, but software has gotten massively more interesting in several
different directions.

------
tychonoff
Computing will continue to aid mathematical discovery: proving (useful)
theorems like the Four Color theorem, or even formulating (useful) theorems of
its own before proving them.

For example, in Einstein's Special Theory, there's not a lot of deep
mathematics (relative to the General Theory) so it's not too difficult to
imagine Watson taking the problem and axiomatizing (just to see what would
happen) that light speed is always constant (which is the key to everything
else that followed).

But what about Descartes's insight that geometry could be described by algebra
using a coordinate system, giving birth to analytic geometry? Or proving that
no formal system can prove all true statements of arithmetic (Gödel)? Or even
asking the question?

I think we're a long way off from doing something like that in a machine.

~~~
Panoramix
Indeed, mathematics requires a great deal of creativity. You could argue it is
a somewhat subjective discipline, since interesting questions receive vastly
more attention than random ones. And how do you define what is interesting?
There are infinitely many directions you can take -- which is fine for humans,
because they know they eventually have to end up with something their peers
find interesting as well. How would a machine do this? I feel this could be
akin to training a computer to enjoy music. While computers will continue to
become more and more involved in theorem proving, it's not clear to me that
they will be able to ask interesting questions within the next few years.

On the other hand, journal papers keep pouring in every day at rates that are
impossible to keep up with. In the future this situation will be hopeless in
the absence of some AI to guide us through that mess. But then again, papers
are written in a combination of natural language and mathematics.

------
ezyang
Automated proof tools are already causing headaches for mathematicians. The
four color theorem was proved by breaking the problem down into 1,936 cases
and then proving each of them individually. This story has a happy ending: the
result is now fairly well accepted and there have been numerous steps toward
simplifying the proof.

However, other automated proofs have not been so lucky: Hales's proof of the
Kepler Conjecture was 300 pages of mathematics and 40,000 lines of supporting
computer code, and the referees of the paper tried for four years to
understand it (they even set up a seminar series covering all of the
prerequisite material) before letting the authors know that they were unable
to verify the correctness of the proof because they "ran out of energy to
devote to the problem." (The Annals of Mathematics subsequently changed its
rules to disallow submission of computer programs.) The Flyspeck project aims
to, well, ask a program to verify the correctness of the proof. Ah, the times
we live in.

------
xyzzyz
>Is this a possible goal? Note the one key insight that I feel the need to
repeat: Thomas J. must not be perfect. It must not only put out correct formal
proofs. It must be allowed to put out papers like we do.

So, he not only wants correct formal proofs of difficult theorems, but also
insights and thoughtful conjectures? I think math is a red herring here, what
he really wants is strong AI -- you can put 'writers' or 'economists' in his
arguments and not much changes.

~~~
alextgordon
I struggled to parse that too. I think he _intended_ the negation of "It must
only put out correct formal proofs", but adding a "not" in there doesn't quite
have that effect.

------
tychonoff
Consider the famous 100-year-old paradox: this sentence is false.

Watson won't be able to assign a truth value to this grammatically correct
sentence, but would it have the insight that it had just stumbled upon a
philosophical problem, one that is solved by narrowing the notion of truth it
uses (Bertrand Russell)?

In other words, a machine may not be able to examine itself in the way that
humans can when certain problems arise.

~~~
cousin_it
Long ago, some people said computers would never beat humans at chess, and
they were wrong. A human brain contains no magical components; it runs on
physics. Whatever it does can in principle be done by some other machine that
runs on physics. I won't be surprised if someone finds a general way to
replace metamathematical insight with clever search-and-pruning over a large
but simply structured space, as happened with chess.
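As an illustration of what "clever search-and-pruning" means concretely, here is a toy alpha-beta search in Python over a hand-built game tree. The nested-list representation is purely illustrative, not how a real chess engine stores positions:

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Minimax with alpha-beta pruning over nested lists (leaves are ints)."""
    if isinstance(node, int):           # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:           # prune: opponent never allows this branch
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# A small hand-built tree, max player to move at the root.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree))  # → 5
```

The pruning never changes the answer; it only skips branches that provably cannot matter, which is exactly the kind of shortcut that made brute search competitive with human insight in chess.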

~~~
pnathan
That _is_ an assumption regarding the human brain.

Also, the limits of computation have been given an upper bound. If it's not
computable by a Turing machine with a finite memory tape (or anything else
that is mappable to a Turing machine with a finite memory tape), it's not
computable by a physical computer.

Steve Yegge started exploring the metaphysics of computer programs before he
stopped blogging; it makes for interesting reading.

~~~
cousin_it
There's a reason why my argument never mentioned Turing computability :-) If a
human brain uses quantum effects to get cognitive bonuses, you can build a
quantum computer that does the same. If the brain uses super-duper unified field
theory, you can build a machine to take advantage of that, and so on. The only
assumption I make about the human brain is that it doesn't run on _magic juju_
that's available _exclusively_ to humans by the orders of Cthulhu, and I think
that assumption is quite warranted.

(As far as I know, neurons can be approximated well enough by classical
computers and you don't even need to go quantum, but that's another story.)

~~~
signa11
moon-is-a-harsh-mistress quote:

Am not going to argue whether a machine can "really" be alive, "really" be
self-aware. Is a virus self-aware? Nyet! How about oyster? I doubt it. A cat?
Almost certainly. A human? Don't know about you tovarishch, but I am.
Somewhere along evolutionary chain from macromolecule to human brain self-
awareness crept in. Psychologists assert it happens automatically whenever a
brain acquires certain very high number of associational paths. Can't see it
matters whether paths are protein or platinum.

------
gaius
The fabled mathematician Shalosh B. Ekhad may have something to say about
that.

<http://www.math.rutgers.edu/~zeilberg/pj.html>

~~~
_delirium
The undercurrent where he's trying to expand the accepted definition of "by
trivial algebra" is interesting. Essentially Zeilberger argues that if you can
plug something in to Mathematica and it can solve it, then it's true "by
trivial algebra" by today's standards, and we should stop bothering to prove
such results by hand.

------
jules
Actually, there has been a lot of research in this area. It's called automated
theorem proving. There have been some successes, but nothing close to
replacing mathematicians.

If you compare it to chess, then the computer programs are still at the
beginner level. First-year math students are able to prove things that the
computer cannot, yet.

It's an intriguing area. Computer programs can perfectly well check whether a
proof is correct, so in principle all you have to do is enumerate all "proofs"
and pick out the valid ones. In practice the research is focused on doing this
search more efficiently and intelligently.
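The enumerate-and-check pattern can be sketched in miniature. The toy below enumerates propositional formulas over a single variable and keeps the valid ones by truth-table checking -- formulas standing in for proofs, since both share the key property that checking a candidate is mechanical while generating good candidates is the hard part. The encoding is an illustrative assumption, not any real prover's:

```python
from itertools import product

def formulas(depth):
    """Blindly enumerate formulas over p built from negation and implication."""
    if depth == 0:
        yield "p"
        return
    for f in formulas(depth - 1):
        yield f                       # smaller formulas count too
        yield ("not", f)
    for a, b in product(formulas(depth - 1), repeat=2):
        yield ("imp", a, b)

def evaluate(f, val):
    """Truth value of formula f when p has truth value val."""
    if f == "p":
        return val
    if f[0] == "not":
        return not evaluate(f[1], val)
    return (not evaluate(f[1], val)) or evaluate(f[2], val)  # material implication

def is_tautology(f):
    """Checking a candidate is mechanical: try every valuation of p."""
    return all(evaluate(f, v) for v in (True, False))

tautologies = [f for f in formulas(2) if is_tautology(f)]
print(("imp", "p", "p") in tautologies)  # → True
```

The checker is a few lines; the blind enumerator already blows up combinatorially at tiny depths, which is why the research effort goes into searching intelligently rather than checking.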

------
johnohara
Although I can't speak directly to whether mathematicians are in jeopardy, the
future certainly looks good for patients needing access to highly accurate
medical diagnostics.

Watson may represent the ultimate "second opinion."

~~~
bhousel
I see a lot of stories today about how IBM is planning to adapt Watson
technology for use in health care.

I absolutely believe that Watson could revolutionize health care, but not in
the way most people are suggesting. Despite what you see on House MD,
diagnosing mystery illnesses is not something that doctors struggle with.

However, the amount of money that Watson could recover in insurance and
medicare fraud would absolutely dwarf what he won on Jeopardy. There's the
real payoff...

~~~
_delirium
Diagnosing strange mystery illnesses isn't common, but not doing as well as
one could at diagnosing and treating _routine_ illnesses is more common.
Automating the diagnosis and treatment process would just be the next step in
the evidence-based medicine movement, which is already moving in the direction
of having doctors follow best-practice flow charts derived from statistical
data.

------
Tichy
There have already been cases of theorems being proven by computers. I don't
know how much of that was truly original work, but I remember (faintly) at
least one case where the computer discovered a previously unknown way to prove
an existing theorem.

Many years ago I was at a logic conference, and one speaker proposed making "a
computer can solve it automatically" the definition of trivial (as in "this is
trivial to solve").

------
s3graham
As with the intended commercializations of Watson, it seems like augmenting
humans will be more fruitful for mathematicians.

Specifically, a computer spitting out proofs of things that no one asked for
doesn't seem especially productive. Humans conjecturing something interesting
and getting computer-generated proofs or counterexamples seems more useful and
more plausible, though.
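A tiny historical example of that division of labor: Fermat conjectured that every number F_n = 2^(2^n) + 1 is prime, and a blind machine search is enough to recover Euler's counterexample at n = 5 (a sketch, not an efficient primality test):

```python
def is_prime(m):
    """Naive trial division -- fine for a toy search like this."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def first_counterexample(limit=6):
    """Search Fermat numbers F_n = 2^(2^n) + 1 for the first composite one."""
    for n in range(limit):
        f = 2 ** (2 ** n) + 1
        if not is_prime(f):
            return n, f
    return None

print(first_counterexample())  # → (5, 4294967297)
```

The human supplied the interesting conjecture; the machine's contribution is the grind (4294967297 = 641 * 6700417, the factorization Euler found by hand).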

------
mattmight
No.

Anyone who's tried to use a state-of-the-art automated theorem prover (like
ACL2) knows we have a long, long way to go.

They're still pretty dumb.

That said, I'm convinced that Gödel's Incompleteness Theorems hold only in the
limit -- the majority of propositions of human interest are probably amenable
to mechanical proving.

------
jgorham
While I don't doubt that in the future there may be some machine that can
crunch logic at a ridiculous pace, Watson still seems far off from the AI the
author here is alluding to. Machine learning of Watson's kind (classification
problems, feature selection, etc.) can take data generated from the world and
find interesting patterns in it to make predictions. This is nowhere near the
creation of new ideas, like Galois's insights into field theory or the work
Perelman contributed using Ricci flow. I haven't yet heard how to create a
machine whose thoughts "can wander" in a way that isn't hopelessly
inefficient. That would be a big leap toward this type of AI.

~~~
jeffcoat
Our current technology is a long way off -- but he's making a prediction (or
maybe more of a suggestion) for 15 years from now.

I wouldn't have guessed even 3 years ago that we were this close to having a
working Jeopardy player.

------
impendia
As a small-scale example of this, see the book "A=B".

<http://www.math.upenn.edu/~wilf/AeqB.html>

The authors reduce the proving of a wide class of combinatorial identities,
which were formerly regarded as extremely nontrivial, to a mechanical
calculation. This has been implemented in Maple. Input your favorite identity,
push the button, and watch the computer spit out a proof.
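The book's actual machinery (Gosper's and Zeilberger's algorithms, with Maple implementations) produces genuine proofs; the sketch below is only a numeric spot-check, in Python, of one identity from the class those methods dispatch automatically, the Vandermonde convolution:

```python
from math import comb

def vandermonde_lhs(m, n, p):
    """Left side of the Vandermonde convolution: sum_k C(m,k) * C(n,p-k)."""
    # math.comb(n, k) returns 0 when k > n, so out-of-range terms vanish.
    return sum(comb(m, k) * comb(n, p - k) for k in range(p + 1))

# Spot-check sum_k C(m,k) C(n,p-k) == C(m+n,p) on a few inputs.
for m, n, p in [(3, 4, 2), (5, 5, 5), (7, 2, 6)]:
    assert vandermonde_lhs(m, n, p) == comb(m + n, p)
print("identity holds on all spot checks")
```

The point of A=B is that the full symbolic proof -- not just these finite checks -- comes out of pushing the button.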

------
rubidium
> It [the Watson-esque computer for mathematics] must not only put out correct
> formal proofs. It must be allowed to put out papers like we do. And these
> must be right often enough to be interesting, but they need not be perfect.

This answers the question, I believe. Are all human mathematicians in
jeopardy? No.

Might we need fewer human mathematicians in the future while maintaining a
strong mathematical community? Likely.

~~~
shasta
Why, because Watson gave a bunch of laughably wrong answers? That's not how
the mathematician machine would work. It would combine brainstorming (probably
via massive parallel statistical search), at which people are currently much
better than machines, with rigorous proof checking, at which computers are
much better than people.

Still, in 15 years? I'll wager against this prediction.

------
Jun8
His proposal, Thomas J., is a brilliant idea! Sure, mathematical notation can
get much more complex than that used for natural language, but it has the
precision that natural language lacks in syntax ("I shot an elephant in my
pajamas. How he got into my pajamas I'll never know."), semantics ("Iraqi head
seeks arms"), pragmatics, etc., which I think may compensate for this
difficulty.

------
tybris
No, mathematicians will be the last ones to still have jobs once all others
have been replaced.

------
maeon3
If a software system becomes better than the best humans at building new,
innovative software, then there is going to be another "space race", but
instead of space it will be about knowledge/intelligence. Some country will
perfect the system and become the new world superpower overnight.

It means hold on to your hat, Dorothy, because Kansas is going bye-bye.

------
cparedes
All I have to say is, "Incompleteness Theorem."

Hilbert hoped for a method of generating theorems by just cranking a wheel
that churns out different logical forms. It turns out that we still need
verification to make sure that these things actually make _sense_. (Even
setting aside the incompleteness of any logical system that contains
arithmetic, we still have to figure out whether a certain logical form "makes
sense" to use.)

~~~
btilly
It is clear to me that you do not understand the Incompleteness theorem, the
original post, or both. I am guessing both.

The fact that there are results the computer can't prove is not a bar to its
being useful. After all, there are results we can't prove either.

What was discussed was not a perfect theorem-proving system, but rather a
system that was good enough to be competitive with humans.

~~~
cparedes
No, I think you're not granting enough charity to what I'm saying.

1\. The incompleteness theorems basically say, "if you have a logical system
that includes Peano arithmetic, even if the theory is consistent, it will be
an incomplete system." Also, "if you're trying to show that said system is
consistent within the system, then you have an inconsistent system." These
statements bring the Hilbert program to a dead halt.

2\. I'm not talking about a perfect theorem proving system. However, what I
_am_ saying is that you still need humans to verify the results - we do have
automatic theorem provers already, but we still need to make sure that the
steps are logical. Not only that, but the theorem needs to connote something
useful. Also, we tend to use axioms that we can't say are consistent within
the system (for instance, the axiom of choice; yet, mathematicians use it like
candy in order to show useful results in analysis, and weird, unintuitive
results in topology [slicing a sphere and producing two spheres with the same
volume.]) I understand that the article is saying, "yes, such a proof produced
by a computer still needs to be independently verified." To be honest, I think
a system that could make mistakes and whatnot is not necessarily what people
would want to see - reason being that mathematicians probably have not much
trouble trying to figure out the larger details and overall plan of attack.

3\. The reason why I'm doubtful that we'd ever see such an AI for proving math
statements is because the entire enterprise is still very much a creative one
- you not only have to generate the logical forms somehow, but also figure out
whether it gets you closer to your goal or not. I suppose you could have some
sort of heuristic that could show that a generated theorem or lemma could in
fact get you closer to your goal, but my god, math is filled with so many
potholes and garden paths that could lead one astray.

Don't get me wrong, it'd be awesome to see such a system - however, I'm very
doubtful that such a system could ever be created, and if it is created, that
it'd be actually useful to practicing mathematicians.

~~~
btilly
_No, I think you're not granting enough charity to what I'm saying._

Let's see.

 _The incompleteness theorems basically say, "if you have a logical system
that includes Peano arithmetic, even if the theory is consistent, it will be
an incomplete system." Also, "if you're trying to show that said system is
consistent within the system, then you have an inconsistent system." These
statements drop the Hilbert program to a dead halt._

Close, but not quite. The second incompleteness theorem says that a consistent
set of axioms describing arithmetic cannot prove its own consistency. An
inconsistent system proves absolutely everything. A consistent system is
welcome to attempt to prove its own consistency all it likes - it will just
fail.

It is true that this killed Hilbert's program as originally conceived.

 _I'm not talking about a perfect theorem proving system. However, what I _am_
saying is that you still need humans to verify the results - we do have
automatic theorem provers already, but we still need to make sure that the
steps are logical._

At some point, I fail to see why you need the humans. What value are humans
actually providing?

 _Not only that, but the theorem needs to connote something useful._

Mathematicians already have a concept of "useful" that is so far at odds with
the common understanding of the term that I honestly cannot make sense of what
"useful" actually means in this context. If the program is able to tackle
actual difficult research problems, and has heuristics that suffice for real
problems, then its notion of "useful" is probably good enough for practice.

 _Also, we tend to use axioms that we can't say are consistent within the
system (for instance, the axiom of choice; yet, mathematicians use it like
candy in order to show useful results in analysis, and weird, unintuitive
results in topology [slicing a sphere and producing two spheres with the same
volume.])_

Outside of logic, most of mathematics has agreed on the set of axioms to use,
namely ZFC. As for using axioms whose consistency we can't establish within
the system, that is absolutely necessary by the incompleteness theorem. Though
you picked an ironically bad example: Gödel proved that ZF is consistent if
and only if ZFC is consistent, and therefore the axiom of choice does not
affect the consistency of the axiom system. It might not describe the set
theory we want to describe, but it does not lead to contradictions.

 _I understand that the article is saying, "yes, such a proof produced by a
computer still needs to be independently verified." To be honest, I think a
system that could make mistakes and whatnot is not necessarily what people
would want to see - reason being that mathematicians probably have not much
trouble trying to figure out the larger details and overall plan of attack._

That was one of the weaker points made. I see the value of having heuristics
that can go astray. But if you've got this in a computer, hook the heuristics
up to a theorem prover that can fill in the details and come up with verified
theorems, and which can alternatively describe useful plans of attack.


Until people trust the computer, it absolutely needs to be double-checked. Any
piece of software can have horrible bugs that are hard to find. No matter how
confident its designers are, it needs verification.

 _The reason why I'm doubtful that we'd ever see such an AI for proving math
statements is because the entire enterprise is still very much a creative one
- you not only have to generate the logical forms somehow, but also figure out
whether it gets you closer to your goal or not. I suppose you could have some
sort of heuristic that could show that a generated theorem or lemma could in
fact get you closer to your goal, but my god, math is filled with so many
potholes and garden paths that could lead one astray._

Yes, it looks as far away from being possible now as beating Jeopardy looked
back when Deep Blue beat Kasparov. Based on my experience with mathematics and
computing I find myself in agreement with rjlipton that it is not impossible,
and in fact I wouldn't be surprised to see it happen within 15 years.

Here is an interesting back-of-the-envelope calculation for you. A human brain
has 100 billion neurons, each of which has 7000 connections and fires about
every tenth of a second. Let's suppose that emulation averages 1000 clock
cycles per synapse. The result is that simulating something as complex as the
human brain in real time should take on the order of 10^11 * 7000 * 1000 =
7*10^17 clock cycles per second. Watson was running at 8*10^12 clock cycles
per second. If Moore's Law continues to hold for 16 generations, which is 24
years, then a computer the size of Watson should be able to match the human
brain in real time.
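The arithmetic in that estimate is easy to redo (all the inputs below are the comment's own assumptions, not measured facts):

```python
import math

neurons  = 1e11    # assumed neurons in a human brain
synapses = 7000    # assumed connections per neuron
cycles   = 1000    # assumed clock cycles to emulate one synapse

brain  = neurons * synapses * cycles   # the comment's ~7e17 cycles/sec figure
watson = 8e12                          # Watson's stated clock cycles per second

# Moore's-law doublings needed for a Watson-sized machine to catch up:
doublings = math.log2(brain / watson)
print(brain, round(doublings, 1))  # → 7e+17 16.4
```

So roughly 16-17 doublings under those assumptions, matching the 16 generations quoted; at 18 months per generation that is the stated 24 years.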

Once we have the hardware, I think it is only a question of time until the
necessary software is available. And given the demonstrated power of
statistical analysis of large data sets on real problems (think Watson, Google
Translate, and the like), I think we're on our way to developing the
appropriate software as well.

 _Don't get me wrong, it'd be awesome to see such a system - however, I'm very
doubtful that such a system could ever be created, and if it is created, that
it'd be actually useful to practicing mathematicians._

I think that rjlipton is a little optimistic on the time frame. But I fully
expect to see it arrive in my lifetime. I have some trepidation about the
inevitable economic upheaval when it does happen.

