
Revolution in Mathematics: What Happened 100 Years Ago and Why It Matters [pdf] - tokenadult
http://www.ams.org/notices/201201/rtx120100031p.pdf
======
calhoun137
This revolution in mathematics literally led to the invention of computers.
I have been reading all about it in the book Turing's Cathedral[1] whenever
I go to the bookstore.

David Hilbert wanted to redo mathematics from square one, basing everything on
set theory, axioms, and rigorous proofs. In 1900, Hilbert gave his famous
address where he stated the Hilbert Problems. One of these was to prove that
mathematics is "complete" in the sense that every possible statement is either
provable or disprovable from the axioms.

In an effort to solve this problem, Alan Turing wrote his famous paper "On
Computable Numbers", where he conceived for the first time his "Turing
machines". Although his approach didn't work to solve the Hilbert problem,
it greatly inspired von Neumann, who at the time was pretty much working on
pure mathematics.

Gödel came along and proved that mathematics is not "complete", because
there are some statements about natural numbers which cannot be proved or
disproved from the axioms, but rather are undecidable.

Von Neumann was so upset by Gödel's result that he decided he no longer
wanted to work in pure mathematics, because if it's not complete then what's
the point anyway. Instead he decided to work on applied math and physics.
Since he loved the work of Turing, he wanted to build a physical Turing
machine to help out with computations, and he figured there would probably
be a lot of other uses for such a machine that no one had thought of yet,
and that would only become apparent once the machine was built.

While at the Institute for Advanced Study, von Neumann fought a hard
political fight to secure funding and permission to build one of the first
computers, the IAS machine (whose design was later copied by Los Alamos's
MANIAC, among others), to the horror of much of the faculty at the time,
including Einstein.

Since I am posting this at 4:30am in a thread that is 12 hours old, I will be
happy if someone reads it :P

[1]
[http://books.google.com/books/about/Turing_s_Cathedral.html?...](http://books.google.com/books/about/Turing_s_Cathedral.html?id=6sElRNGXWFIC)

~~~
stiff
This is interesting but not completely accurate. What is most mind-blowing
is that Turing machines were Turing's approach to formalizing proofs as a
means toward solving the Entscheidungsproblem. In fact he was successful in
"solving" the problem, in that he proved the impossibility of solving it,
later than Gödel but in a much clearer way - Gödel encoded theorems using
numbers in a quite crazy way. I also don't think Turing and von Neumann can
be given that much exclusive credit for inventing the computer; while they
had a huge impact, I think the idea of a computer was to some extent "in the
air" and there were many people working on it concurrently, sometimes
unaware of each other, see:

<http://en.wikipedia.org/wiki/History_of_computing_hardware>

~~~
calhoun137
It seems like the invention of the computer is one of the most controversial
"who got there first" things in all of science.

Turing's Cathedral has a ton of really interesting details about the early
history of computers, and LOTS of details about the political fight von
Neumann waged to even be allowed to work on the first computers at IAS. I
didn't realize how big a role he actually played, and I would encourage
people to read that book because it's pretty good.

------
mdlthree
The Revolution:

At the turn of the 1900s, the field of mathematics was changing. The old
school relied on physical intuition as a means of proof. The new school
found that an adherence to formal logic led to more reliable findings.

Most Relevant Sections:

 _In brief, traditionalists lost the battle in the professional community but
won in education. The failure of “new math” in the 1960s and 70s is taken as
further confirmation that modern mathematics is unsuitable for children. This
was hardly a fair test of the methodology because it was very poorly
conceived, and many traditionalists were determined that it would succeed only
over their dead bodies. However, the experience reinforced preexisting
antagonism, and opposition is now a deeply embedded article of faith.

Many scientists and engineers depend on mathematics, but its reliability makes
it transparent rather than appreciated, and they often dismiss core
mathematics as meaningless formalism and obsessive-compulsive about details.
This is a cultural attitude that reflects feelings of power in their domains
and world views that include little else, but it is encouraged by the
opposition in elementary education and philosophy.

In fact, hostility to mathematics is endemic in our culture. Imagine a
conversation:

    
    
      A: What do you do?
      B: I am a ———.
      A: Oh, I hate that.

Ideally this response would be limited to such occupations as “serial killer”,
“child pornographer”, and maybe “politician”, but “mathematician” seems to
work. It is common enough that many of us are reluctant to identify ourselves
as mathematicians. Paul Halmos is said to have told outsiders that he was in
“roofing and siding”!_

from pages 33-34.

 _Core methods such as completely precise definitions (via axioms) and careful
logical arguments are well known, but many educators, philosophers,
physicists, engineers, and many applied mathematicians reject them as not
really necessary._

from page 34

Why It Matters:

The sciences are put at risk by the reckless application of mathematics.

Education methods are weakened by the philosophical divide.

~~~
waps
Part of it is that exact mathematics runs afoul of several atheist articles
of faith. Depending on your interpretation, it either disproves the
"Jacobian" scientific model, or, more flexibly interpreted, it means science
is at best a subset of what governs the universe (assuming - yes, assuming -
it isn't entirely wrong). Gödel's theorem runs afoul of the principle that
science can explain everything. It answers the question of whether
mathematics directs and/or predicts the events in our universe: it doesn't,
and it can't. It's too "weak" a theory.

We're at a very exciting point in time where we know basic mathematics to be
wrong. It predicts a great many things, but there are problems that make it
thoroughly unsatisfactory. In a way it's like physics at the beginning of
the 20th century, with the black body radiation problem. Of course, we have
been in this less-than-satisfactory state for ~70-80 years now, with no
Einstein in sight ...

Gödel's theorem means that there's an infinite set of empirical truths
(i.e., simple experiments you can try out with marbles and bags) that are
completely unexplained by mathematics - and thus by every science built on
top of it.

Worse: this is not a fixable problem. Sure, we can fix it for specific
problems. Wherever we see an obvious leak (say the birthday problem, or
large cardinal number problems) it can be plugged with a new well-chosen
(or, more often, ill-chosen, like choice) axiom, but there are infinitely
many leaks, and the proof means that there's no plug that will stop any
significant number of them.

So right now, in the set of all empirically observable events, there's a set
that's explainable by science and a set unexplained by science. The
unexplained set is at least as large as the explainable set (and keep in
mind that's because both sets have been proven to have a cardinality of at
least the largest known cardinal number; given the actual definitions of
those two sets, I'd say the unexplained set is going to turn out to be
bigger).

~~~
loudmax
Interesting points, other than mathematics weakening straw-man atheism.

~~~
waps
Don't you think that the idea that science explains everything is at the
very core of atheism? Do you think my interpretation of Gödel as directly
contradicting this thesis is wrong?

------
stiff
Courant said much more, much better, in far fewer words:

 _There seems to be a great danger in the prevailing overemphasis on the
deductive-postulational character of mathematics. True, the element of
constructive invention, of directing and motivating intuition, is apt to elude
a simple philosophical formulation; but it remains the core of any
mathematical achievement, even in the most abstract fields. If the
crystallized deductive form is the goal, intuition and construction are at
least the driving forces. A serious threat to the very life of science is
implied in the assertion that mathematics is nothing but a system of
conclusions drawn from definitions and postulates that must be consistent but
otherwise may be created by the free will of the mathematician. If this
description were accurate, mathematics could not attract any intelligent
person. It would be a game with definitions, rules, and syllogisms, without
motive or goal. The notion that the intellect can create meaningful
postulational systems at its whim is a deceptive half-truth. Only under the
discipline of responsibility to the organic whole, only guided by intrinsic
necessity, can the free mind achieve results of scientific value…To establish
once again an organic union between pure and applied science and a sound
balance between abstract generality and colourful individuality may well be
the paramount task of mathematics in the immediate future._

~~~
jordigh
Courant is kinda saying the opposite here, by de-emphasising the need for
purely abstract formalisms, or by saying that they have to go hand-in-hand
with applications, or are worthless without applications. As a numerical
analyst, he of course regards "applications" of mathematics as essential to
its existence.

The thesis in the linked article is that we need to emphasise the importance
of purely abstract formalisms, not only in the context of possible
applications.

~~~
stiff
What I meant is that he described the same problem in a more complete way,
not that he just said the same thing more nicely. There are infinitely many
formal systems, and after all we choose only some particular ones; what
guides this choice? Isn't the existence of models in the real world one of
the most important factors in choosing which formalism is valuable to
develop? Courant, as I understand him, is saying that the tension between
the two approaches is vital to the development of mathematics.

There is also a nice lecture [1] from Vladimir Arnold, who was very much on
the side of intuition, where he raises an interesting, related point: in
today's mathematics all the praise goes to the people who prove theorems,
whereas perhaps the person who first _stated_ a theorem that is interesting
and relevant deserves at least as much credit.

[1] [http://www.msri.org/web/msri/online-
videos/-/video/showVideo...](http://www.msri.org/web/msri/online-
videos/-/video/showVideo/2714)

------
Tloewald
Fascinating paper. As a math graduate I had no idea that the switch to
axiomatic descriptions and formal logical proofs was so recent and
controversial, and no longer working in math I had no idea that "core"
methodologies (i.e. pure mathematics) are under threat (although this makes
perfect sense given grant funding priorities).

------
thebear
The author describes the old, pre-revolutionary way of doing mathematics as
"relying on intuition and physical experience." In certain fields, such as
algebra and number theory, the old way of doing it had another important
trait: to prove the existence of something, such as the greatest common
divisor or the unique factorization of a polynomial, one had to give an
algorithm to construct it. The new, axiomatic way of thinking, by contrast, is
content with showing that the assumption of non-existence leads to a
contradiction. The rise of symbolic computation over the last few decades
(Mathematica, Maple, etc.) can therefore be seen as a comeback of the old
style of mathematics. I call it Kronecker's revenge. With respect to math
education, well, considering the importance of computing, I am inclined to
think that we should pay _more_ attention to the 19th century way of thinking
rather than less.
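
A concrete sketch of that old style (my own illustration, in Coq since it
appears later in this thread): the existence claim for the gcd is witnessed
by Euclid's algorithm, which Coq's standard library ships as a computable
function.

    
    
        (* Existence by construction: Nat.gcd is Euclid's algorithm,
           so the "proof" can actually be run. *)
        Require Import Arith.
        Compute Nat.gcd 12 18.  (* prints: = 6 : nat *)
    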

~~~
cwzwarich
Most mathematicians are sympathetic to utilitarian constructive mathematics,
i.e. finding constructive proofs for the sake of actually computing the
results in applications, as opposed to doing so for philosophical reasons.
However,
nonconstructive mathematics (or at least mathematics that cares little for
avoiding nonconstructive arguments) will probably continue to dominate
mathematical research, since many major research areas now depend heavily on
its results as a basis for a conceptual framework.

~~~
defrost
You can get a pretty good feel for the practical level of sympathy by popping
down to your local math library and looking at the checkout histories of the
works of Brouwer [1] and Bishop [2].

[1] <http://plato.stanford.edu/entries/brouwer>

[2] <http://en.wikipedia.org/wiki/Errett_Bishop>

'Sparse' would be an understatement & poor old Errett got such wonderfully
backhanded reviews as:

> "Even those who are not willing to accept Bishop's basic philosophy must be
> impressed with the great analytical power displayed in his work."

and

> Bishop's historical commentary is "more vigorous than accurate".

~~~
cwzwarich
Well, both Brouwer and Bishop fall into the category of constructivism for
philosophical reasons. As I mentioned, this doesn't garner much sympathy from
ordinary mathematicians. Brouwer and Bishop wrote against the practice of
classical mathematics with considerable vitriol, as opposed to proposing
constructive mathematics as an additional lens for viewing classical results.
In the era of widespread use of computers to perform calculations, the
advantages of this latter point of view are obvious.

~~~
defrost
That's a fair assessment. Still, it's a shame their work more or less
petered out due to politics, personality, and philosophical disdain; had
they played better with others and taken a slightly different path, they
could well be better regarded than they are now.

------
jamestc
Mathematics education is a bit odd now for sure. I think if there was an
effort to teach kids the logical and even creative side of math early on,
they'd have an easier time building intuition for certain computational
aspects later on. As it stands, most people are unaware that math is no longer
predicated in reality, which seems nonsensical. You generally teach grammar
before (or along with) composition.

~~~
metacontent
I'm sorry, but could you please explain what you mean by "predicated in
reality"? I feel stupid, but I just don't understand what that means, and I'd
rather feel stupid than remain ignorant.

~~~
williadc
It means that a connection to the real world is not needed for mathematics to
exist. Another way of saying "predicated in" would be to say "based in" or
"required by."

------
platz
"To a first approximation the method of science is 'find an explanation and
test it thoroughly', while modern core mathematics is 'find an explanation
without rule violations'."

This kind of rings a bell for me concerning formal verification methods
(i.e. proofs), which seem to be more popular with functional programming
techniques.
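
For a minimal taste (my own sketch, not from the article) of "an explanation
without rule violations": a Coq lemma is accepted only if its proof violates
no inference rule.

    
    
        (* A machine-checked proof: the kernel verifies every step. *)
        Require Import Arith.
        Lemma add_comm_example : forall n m : nat, n + m = m + n.
        Proof. intros n m. apply Nat.add_comm. Qed.
    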

------
JDDunn9
I think physics is going in that direction too. The paper explains how math
went from experimental to formal proofs. Now, for better or worse, science
wants to extrapolate from quantum particles to entire universes.

Schrödinger's cat was originally proposed by Schrödinger to ridicule the
many-worlds extrapolation of quantum mechanics. Now it is often used to
support it. Scientists create multiple dimensions, universes, etc. based on
very little real-world data.

~~~
mdlthree
Schrödinger's cat is just like the old-school intuition that is subject to
errors. It is also why physics may have a better public image than math, due
to the greater availability of intuitive ideas for describing physical
phenomena. We wouldn't need a simultaneously alive and dead cat if people
were more educated about the idea of a random variable. Heck, five seconds
on Wikipedia leads to _"a random variable conceptually does not have a
single, fixed value (even if unknown); rather, it can take on a set of
possible different values, each with an associated probability."_

~~~
danbmil99
The paradox Schrödinger pinpoints in his famous thought experiment cannot be
explained by a simple appeal to random variables. Such variables would be
the "hidden variables" that Einstein and others attempted to find to explain
quantum entanglement.

Von Neumann thought he had proved that hidden variables were impossible, but
in 1964 John Bell introduced his inequality and showed that hidden variables
can only explain quantum entanglement if one gives up naive locality.

Things get hairier from there when you try to combine non-local theories (such
as Bohm's) with relativity.

It's complicated.

------
DanWaterworth
I'd like to point out that there is a difference between the law of the
excluded middle and proof by contradiction.

The law of the excluded middle says that for every x, either x or not x is
true. Proof by contradiction says that if given not x you can derive a
contradiction, then x is true.
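
The two principles can be written side by side in Coq (a sketch of mine; the
names are just illustrative):

    
    
        (* Law of the excluded middle: every proposition is true or false. *)
        Definition LEM := forall X : Prop, X \/ ~ X.
    
        (* Proof by contradiction: if (not X) yields a contradiction,
           then X holds. *)
        Definition PBC := forall X : Prop, (~ X -> False) -> X.
    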

~~~
stiff
He basically says that Hilbert relied on the law of the excluded middle by
using proofs by contradiction, which is valid, because if you do not accept
the law of excluded middle then proof by contradiction ceases to be a valid
proof method.

~~~
DanWaterworth
This is the sentence in question:

 _The first controversy occurred early in Hilbert’s career and concerned his
vigorous use of the "law of the excluded middle" (proof by contradiction)._

I don't see how you can interpret that as anything other than implying that
the law of the excluded middle and proof by contradiction are one and the
same thing.

 _if you do not accept the law of excluded middle then proof by
contradiction ceases to be a valid proof method._

I don't believe that this is true, because that would imply that you can
derive the law of the excluded middle from proof by contradiction.

~~~
stiff
This is explained here nicely:
[http://philosophy.stackexchange.com/questions/1890/what-
is-t...](http://philosophy.stackexchange.com/questions/1890/what-is-the-
difference-between-law-of-excluded-middle-and-law-of-non-contradicti)

~~~
DanWaterworth
The law of noncontradiction and proof by contradiction are different things.

~~~
stiff
In a logical system you have axioms, which are sentences that are true by
definition, and proof methods, which are rules of sentence transformation
that are assumed to preserve truth. The law of noncontradiction is an axiom
that states:

X and (not X) == False for any sentence X

If you start with a sentence y, and then via the proof methods arrive at a
sentence (not y), this is the same as saying

(y -> (not y)) == True

Which is equivalent to:

((not y) or (not y)) == True

Which is equivalent to:

(not y) == True

Now, if you accept the law of non-contradiction this implies that:

y == False

If you do not accept it, y can be both True and False at the same time and
the proof method is not valid anymore. Hence the proof method of reductio ad
absurdum, or "proof by contradiction" as you call it, is valid if and only
if this law holds.

~~~
DigitalTurk
The law of excluded middle is not sufficient for proof by contradiction. What
you actually need is ex falso quodlibet (principle of explosion).

For instance, paraconsistent logic has the law of excluded middle but not ex
falso quodlibet.

<https://en.wikipedia.org/wiki/Principle_of_explosion>

<https://en.wikipedia.org/wiki/Paraconsistent_logic>
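
As a Coq aside (my sketch): ex falso quodlibet is exactly the elimination
rule for False, so it can be stated and proved in one line.

    
    
        (* Principle of explosion: from a proof of False, anything follows. *)
        Lemma explosion : forall P : Prop, False -> P.
        Proof. intros P contra. destruct contra. Qed.
    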

\--

addendum

Specifically, notice that in classical logic the law of excluded middle and
the law of non-contradiction can be substituted for one another as axioms
(they are equivalent). However, that is not the case for all semantics.

~~~
stiff
I just wanted to explain the author's mental leap. I originally said: _if
you do not accept the law of excluded middle then proof by contradiction
ceases to be a valid proof method_. The law of excluded middle might not be
sufficient, but it is necessary for the reductio ad absurdum and
proof-by-contradiction methods. You are right, though, that I jumped too
quickly to the "if and only if"; the details get quite complicated.

~~~
DanWaterworth
My point is that you can add reductio ad absurdum as an axiom. So long as you
can't then derive the law of the excluded middle, your assertion is incorrect.

~~~
stiff
Reductio ad absurdum is:

(x->(~x))->(~x)

Law of excluded middle is:

x | ~x

Proof follows:

(x->(~x))->(~x)

By the definition of implication, this is equivalent to:

(~x) | ~(x->(~x))

By the definition of implication, this is equivalent to:

(~x) | ~((~x) | (~x))

By De Morgan's law, this is equivalent to:

(~x) | (x & x)

By the identity law, this is equivalent to:

~x | x

I am not a logician and I might not have gotten all the details right, but I
certainly think this is possible. I also have quite a good proof by
authority ;) in the form of an article by Alonzo Church:

[http://www.ams.org/journals/bull/1928-34-01/S0002-9904-1928-...](http://www.ams.org/journals/bull/1928-34-01/S0002-9904-1928-04516-0/S0002-9904-1928-04516-0.pdf)

~~~
DanWaterworth
Can you justify using material implication?

It's possible to create reductio ad absurdum in Coq:

    
    
        Definition reductioAdAbsurdum (X:Prop) (f : (X -> (X -> False))) (x:X) : False := f x x.
    

But not the law of the excluded middle.

Edit: It's much clearer in Idris:

    
    
        reductioAdAbsurdum : (x -> (x -> _|_)) -> (x -> _|_)
        reductioAdAbsurdum f x = f x x

~~~
stiff
I am assuming that we are talking about taking the law of excluded middle
out of classical logic, where everything else independent of that law still
holds, and I think implication being material is independent.

~~~
DanWaterworth
I mean material implication as in:

 _In propositional logic, material implication is a valid rule of replacement
which is an instance of the connective of the same name. It is the rule that
states that "P implies Q" is logically equivalent to "not-P or Q"._

<http://en.wikipedia.org/wiki/Implication>

~~~
stiff
Yes, I understand, but why do you think it needs justification? As far as I
understand, this is simply the definition of implication in classical logic;
for one thing, it follows from the truth tables of both expressions.

~~~
DanWaterworth
OK, but we're not talking about classical logic, otherwise the question about
whether the law of the excluded middle is derivable is uninteresting.

~~~
stiff
Well, that's exactly what people were considering and what the article is
referring too, a variant of classical logic where everything except the law of
excluded middle holds true. I don't think this is uninteresting, which
postulates are independent from one another is one of the core problems in
logic.

Nice flame, eot.

~~~
DanWaterworth
I didn't mean to flame, I thought we were having an interesting debate. I
was under the impression we were talking about logic systems in general
where reductio ad absurdum exists but the law of the excluded middle
doesn't. Anyway, thanks, I learnt much.

~~~
stiff
This would be a very peculiar logic, I think. Maybe you just come from a
very different background, and hence all the hair-splitting was necessary.
We wandered so far away from the original point that I lost patience, maybe
unnecessarily; in the end it really was an interesting debate for me too.

------
danbruc
<http://xkcd.com/435/>

------
tychonoff
Abraham Robinson's non-standard analysis
(<http://en.wikipedia.org/wiki/Non-standard_analysis>) used a modern
axiomatic approach to legitimize Isaac Newton's intuitive use of
infinitesimals, by constructing a superset of the real numbers that contains
numbers smaller in magnitude than any positive ordinary real
(infinitesimals) as well as numbers larger than every real (infinite).
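
For instance (my illustration, not from the linked page), Newton's
derivative of x^2 becomes legitimate: compute with an infinitesimal e, then
take the standard part st(), which discards the leftover infinitesimal:

    
    
        ((x + e)^2 - x^2) / e  =  (2*x*e + e^2) / e  =  2*x + e
        st(2*x + e)  =  2*x
    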

He then went on to prove new things with it.

And he started out as an engineer.

------
skybrian
Vague and unenlightening. I kept skimming through looking for the part where
it explains what this "revolution" is supposed to be, but it never actually
does. Edit: objection withdrawn.

~~~
jordigh
The revolution was clearly explained: it's the formalist viewpoint, most
famously expounded by Hilbert: "mathematics is a game played with
meaningless symbols following rules on paper." What this means is that we
can discuss mathematics discarding all personal biases and intuitions. We
can give personal meaning to these meaningless symbols and abstract rules,
but we don't need those biases when communicating with each other, nor when
we actually have to perform computations or deductions.

The example given at the end is fractions: you can think of fractions in
terms of pies and pizzas, but it's much more effective to ignore all of that
when it comes to actually performing computations, and simply to focus on
the "meaningless" symbols and the manipulations you do with them.

It's a pity that you tl;dr'ed this.

------
hazov
There's a paper that stresses how this transformation occurred well before,
in Germany, especially in Göttingen: "Klein, Hilbert, and the Göttingen
Mathematical Tradition" by David Rowe.

It makes a good point about how the changes initiated by Weierstrass and
Cantor, by throwing away geometry as a prerequisite for mathematical
thought, ultimately led us to think in axioms and algorithms, which is
basically how we do mathematics today. (The part about Cantor and
Weierstrass is not stressed in the paper, though.)

