
Doing Mathematics Differently - vinchuco
http://inference-review.com/article/doing-mathematics-differently
======
smilliken
This is a good article, and an important topic, but it mischaracterizes the
discipline of mathematics:

> the principle that mathematical truth is black or white and provides
> absolute certainty.

> Pure mathematicians like to think that they have absolute truth

Formal mathematics has no concept of absolute truth; this is left for
philosophy. It's just concerned with axioms and theorems (and their proofs).
Whether an axiom reflects reality or not is out of scope, which is the whole
point of their invention. As a philosopher you're welcome to debate them, and
as a pragmatist you're welcome to choose them (and be compelled to accept
their consequent theorems).

> Because it is easy for mathematicians to ignore Gödel’s proof. What lurks in
> their heart of hearts is a commitment to absolute truth, and a universal
> formal axiomatic theory for all of mathematics.

In one sense it's disheartening that we cannot prove or disprove every
proposition in every axiomatic system. This is an unreasonable expectation, as
Gödel proved, but it doesn't invalidate the theorems we _have_ proven and
disproven. Nor does it invalidate all of the programs we have written, even
though there are non-terminating programs and uncomputable numbers.

~~~
plutooo
"Formal mathematics has no concept of absolute truth"

This is false. The axioms don't have to be true. You can still talk about
their implications in absolute terms.

"Assuming a=0 implies a=0" is absolutely true. Regardless of whether _a_
actually is 0.

~~~
bachback
"The axioms don't have to be true."

No, not at all. Axioms are better thought of as universally accepted truths
which everything else depends on.
[https://en.wikipedia.org/wiki/Logical_atomism](https://en.wikipedia.org/wiki/Logical_atomism)

~~~
golergka
Axioms are only viewed as true because they are defined as such.

It's obvious when applying math to the real world. If you're talking about
objects on the Earth's surface, for example, Euclidean geometry will at first
get you good results, because the objects you're working with can be assumed
to satisfy its axioms. However, as your scale gets bigger, you'll have to
apply a more complicated geometric apparatus.

The easiest analogy for axioms is an interface in software engineering: you
don't care what the object really is, but as long as it exhibits certain
properties, you can prove theorems about it. Those theorems will hold for any
object with these properties, and will be exactly as true as the properties
are actually fulfilled by the real-life object.
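The interface analogy can be sketched in code. This is a minimal illustration, with all names hypothetical: a "monoid" is anything providing an associative `op` and an identity `e` (the axioms), and a function proved correct from those axioms alone (here, fast exponentiation by repeated squaring) then works for *any* object that satisfies them, exactly like code written against an interface.

```python
def mpow(x, n, op, e):
    """Combine n copies of x with `op`, assuming only the monoid axioms:
    `op` is associative and `e` is its identity. The correctness argument
    uses nothing else, so the "theorem" holds for every model."""
    result = e
    while n > 0:
        if n & 1:                # n is odd: fold in the current power of x
            result = op(result, x)
        x = op(x, x)             # square
        n >>= 1
    return result

# Integers under addition satisfy the axioms with identity 0 ...
print(mpow(3, 5, lambda a, b: a + b, 0))      # 15
# ... and so do strings under concatenation with identity "".
print(mpow("ab", 3, lambda a, b: a + b, ""))  # ababab
```

The same function serves both models because it only ever invokes what the "axioms" guarantee.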

------
GolDDranks
Just read this part:

> So we have a measure of the complexity of a formal mathematical theory A,
> and in theory A you cannot prove that any program is elegant that is larger
> in size than A’s complexity. That is what the paradoxical program P proves.

And thought: hmm. Sounds like, what was it called again? _Checks Wikipedia,
article on Kolmogorov Complexity._ Oh yeah, Chaitin's Incompleteness Theorem.
I wonder if it's mentioned in the article. CTRL-F - "Chaitin"

...the writer of this article is Gregory Chaitin.

------
curuinor
This is the Chaitin of Kolmogorov-Solomonoff-Chaitin complexity, if the
argument seems familiar.

I like Grassberger-Crutchfield-Young
([http://www.scholarpedia.org/article/Complexity](http://www.scholarpedia.org/article/Complexity),
look for Statistical complexity) complexity, because it can actually be
measured.

I had long suspected that Chaitin was a Leibnizian, but there you go. I like
thinking about the Principle of Sufficient Reason too, but I have long
suspected that the phenomenon of causality can be more simply explained by
positive feedback effects _only_.
([http://howonlee.github.io/2016/01/21/Poking-20At-20Causation...](http://howonlee.github.io/2016/01/21/Poking-20At-20Causation1.html))

~~~
joe_the_user
The thing is that any computable complexity measure allows one to
algorithmically produce an infinite sequence whose measured complexity seems
to go to infinity as it gets longer, but which, being the product of a
finite-length computer program, has finite algorithmic complexity.

On the other hand, you can prove that for "nearly all" finite sequences of
symbols, the algorithmic complexity is within a constant of naive statistical
measures.

~~~
dllthomas
How is that done? My naive approach would be "always pick the next symbol
that most increases complexity", but that doesn't seem guaranteed to diverge
and could easily be trapped in local maxima...
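That greedy idea can be sketched concretely. This is a toy, with zlib's compressed length standing in as a crude but computable complexity proxy (a hypothetical choice, not anything from the article): at each step, append whichever bit makes the prefix compress worst. As the parent comment notes, the output is generated by this short program, so its true algorithmic complexity stays bounded no matter how the proxy behaves.

```python
import zlib

def greedy_incompressible(n_bits: int) -> str:
    """Grow a bit string, always appending the bit that maximizes the
    zlib-compressed length of the prefix (ties go to '0')."""
    s = ""
    for _ in range(n_bits):
        s = max((s + b for b in "01"),
                key=lambda t: len(zlib.compress(t.encode())))
    return s

seq = greedy_incompressible(64)
print(len(seq), len(zlib.compress(seq.encode())))
```

Running it shows the proxy's weakness directly: the greedy choice can stall on ties and plateaus, which is exactly the "trapped in local maxima" worry above.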

------
calibraxis
Anyone fascinated by the (non-)intelligibility of the world might like one of
Chomsky's many talks on the subject. Such as
[https://chomsky.info/20060301/](https://chomsky.info/20060301/)

> In fact, if you look at the history of science seriously, in the seventeenth
> century there was a major challenge to the existing scientific approach. I
> mean, it was assumed by Galileo and Descartes and classical scientists that
> the world would be intelligible to us, that all we had to do was think about
> it and it would be intelligible.

> Newton disproved them. He showed that the world is not intelligible to us.
> Newton demonstrated that there are no machines, that there’s nothing
> mechanical in the sense in which it was assumed that the world was
> mechanical. He didn’t believe it — in fact he felt his work was an absurdity
> — but he proved it, and he spent the rest of his life trying to disprove it.
> And other scientists did later on. I mean, it’s often said that Newton got
> rid of the ghost in the machine, but it’s quite the opposite. Newton
> exorcised the machine. He left the ghost.

> And by the time that sank in, which was quite some time, it just changed the
> conception of science. Instead of trying to show that the world is
> intelligible to us, we recognized that it’s not intelligible to us. But we
> just say, ‘Well, you know, unfortunately that’s the way it works. I can’t
> understand it but that’s the way it works.’ And then the aim of science is
> reduced from trying to show that the world is intelligible to us, which it
> is not, to trying to show that there are theories of the world which are
> intelligible to us. That’s what science is: It’s the study of intelligible
> theories which give an explanation of some aspect of reality.

------
plutooo
Omega is an amazing number. Each digit in its binary expansion is either 0 or
1, yet it is impossible to formulate why it takes the value it does. It is
impossible to come up with a train of thought that explains it. There is no
explanation that can be written down on a piece of paper.

So we are left with the weird conclusion:

1\. It has the value it has for no reason.

2\. It has the value it has for a reason that is impossible to formulate.
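Omega is at least approximable from below, which a toy sketch can show. Everything here is a hypothetical stand-in: a "program" is a bit string read as a number n, and it "halts" when Collatz iteration from n reaches 1. Real Omega uses a universal prefix-free machine (which keeps the sum below 1; this toy machine does not), but the shape of the approximation is the same: each program found to halt contributes 2^-length, and raising the search bounds only ever adds terms.

```python
from fractions import Fraction
from itertools import product

def halts_within(bits: str, budget: int) -> bool:
    """Toy machine: 'run' the bit string by Collatz-iterating its value,
    declaring a halt if it reaches 1 within the step budget."""
    n = int(bits, 2)
    if n == 0:
        return False
    for _ in range(budget):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False

def omega_lower_bound(max_len: int, budget: int) -> Fraction:
    # Each halting program of length L contributes 2^-L, as in Chaitin's
    # sum. The bound is monotone in both arguments: more programs and more
    # steps can only reveal more halters, never retract one.
    total = Fraction(0)
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            if halts_within("".join(bits), budget):
                total += Fraction(1, 2 ** length)
    return total

print(omega_lower_bound(2, 100))  # 5/4 for this toy machine
```

The one-way character is the point: the bound creeps up forever, but no finite computation ever certifies how close it is, because that would require solving the halting problem for the stragglers.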

~~~
eli_gottlieb
Nah, Omega is just defined as a number which contains countably infinite bits
of algorithmic information. Since algorithmic information is equivalent to
thermodynamic information (and likewise relates to quantum information if you
head in that direction...), the whole "you can't calculate Omega" thing is
really just a way of saying, "You can't have an infinite-precision physical
measurement encoded into a finite physical system."

Chaitin makes far too much metaphysics of his work.

------
21
Somewhat related, a physics talk by Susskind about some interesting links
between the inside of a black hole and (quantum) computational complexity. The
complexity part starts at 20:00. He works with Scott Aaronson on this.

[https://www.youtube.com/watch?v=IuY4RMehdP8](https://www.youtube.com/watch?v=IuY4RMehdP8)

------
jleader
The author keeps talking about going through all possible proofs built from
the axioms, as if that's a thing that obviously could be done. However, if
your system is complicated enough to allow arbitrarily large combinations of
axioms in your proofs, then I don't understand how you can expect enumeration
of all proofs to be a finite process. Any statement that says "enumerate all
possible proofs" followed by "and then do X" is meaningless, because you will
never reach the "do X" step!

~~~
repsilat
He's not assuming finiteness, just countability.

An example of how this is useful: If it were provable that a program does not
halt, an enumeration of all proofs would find the proof that the program does
not halt. (If a program _does_ halt, it is necessarily provable that it does
so, for obvious reasons.)

Thus either:

\- we can solve the halting problem, or

\- there exists something that is true but not provable.

Thus the uncomputability of the halting problem implies Gödel's
incompleteness theorem. Proving the other direction can be done by similar
techniques.
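The argument can be sketched schematically. Everything below is a hypothetical stand-in, not a real proof system: running a program step by step is a semi-decision procedure for "halts"; if every non-halting program had a proof of non-halting, enumerating all proofs would be a semi-decision procedure for "does not halt"; interleaving the two would then decide halting, which is impossible.

```python
from itertools import count

def decide_halting(step, proofs, proves_nonhalt):
    """Dovetail two semi-deciders. `step()` advances the program one step
    and reports whether it has halted; `proofs` yields every proof of the
    theory in some order; `proves_nonhalt(p)` checks whether proof p shows
    the program never halts. If the theory proved every true non-halting
    statement, this loop would always terminate with the right answer."""
    for _ in count():
        if step():                       # semi-decider 1: just run it
            return True
        if proves_nonhalt(next(proofs)): # semi-decider 2: search for a proof
            return False

# Toy demo with fake programs and a fake proof checker:
ticks = count()
halts = decide_halting(lambda: next(ticks) == 5, count(), lambda p: False)
loops = decide_halting(lambda: False, count(), lambda p: p == 3)
print(halts, loops)  # True False
```

Since `decide_halting` cannot exist for real programs and a real proof system, the assumption fails: some program never halts, yet no proof of that fact exists.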

------
Ono-Sendai
Counterpoint (from my blog):

[http://forwardscattering.org/post/7](http://forwardscattering.org/post/7)

[http://forwardscattering.org/post/14](http://forwardscattering.org/post/14)

In summary, I don't think AIT offers an absolute measure of complexity, due
to having to choose the abstract machine.

That's not to say the ideas aren't interesting, or that this isn't a nice
article from one of the main figures in the field.

~~~
joe_the_user
I don't think your claims are correct.

If you read a formal definition of Chaitin-Kolmogorov complexity, you'll find
that the concept is essentially normalized by recursive function theory.
Using the Church-Turing thesis, you can see that any program in any language
can be simulated, at the cost of at most a constant additive overhead in
length, on a universal Turing machine (speed isn't considered in these
definitions, so architecture doesn't matter much). Chaitin-Kolmogorov
complexity considers the length of the shortest such program, and thus takes
its values as being defined up to an additive constant for a given choice of
machine.
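The normalization being described is the invariance theorem, which can be stated in one line: for any two universal machines U and V there is a constant c_{UV}, independent of the string x, such that

```latex
K_U(x) \;\le\; K_V(x) + c_{UV} \qquad \text{for all strings } x,
```

so any two choices of universal machine assign complexities that differ by at most a fixed constant, and results stated "up to a constant" don't depend on the choice.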

~~~
Ono-Sendai
Have a look at the second post; I address that there.

~~~
mafribe
I'm sorry to say this, but your second post also doesn't really address the
issue, and continues to misunderstand the purpose and definitions of
Kolmogorov complexity.

The fact that for any specific string S you can find a language L such that
the Kolmogorov complexity K(L, S) is 0 is not that interesting.

1\. For any specific L, you can only play this trick (hardcoding a target
string) for a finite number of strings. That means for essentially all strings
(all but a small finite number) the trick is irrelevant.

2\. There is no free lunch: the trick you use to bring the Kolmogorov
complexity K(L, S) down to 0 increases the complexity of L. This can be taken
into account; see "conditional Kolmogorov complexity".

The book "An Introduction to Kolmogorov Complexity and Its Applications" by Li
and Vitányi explains this, and much more in great detail, and is highly
recommended.

~~~
Ono-Sendai
Regarding point 2, I agree, but the question is how to measure the increase in
complexity - we are back at square one :)

~~~
mafribe
You measure it in the same way, using the idea of the shortest description
w.r.t. a fixed universal language. You then show that this is absolute up to
a constant. Any results produced that way, such as the incompressibility of
most strings, are independent of such constants, hence the choice of
universal language doesn't matter.

------
joe_the_user
"Gödel incompleteness is even unpopular among logicians. They are ambivalent.
On the one hand, Gödel is the most famous logician ever. But, on the other
hand, the incompleteness theorem says that logic is a failure."

I suspect Gödel himself didn't like the incompleteness theorem, for similar
reasons, being very much an idealist who even created (but didn't publish) a
modal-logic proof of the existence of God.

The thing is that if one takes the formalist position that mathematics is a
game played with pencil and paper (or computers), then the completeness and
incompleteness theorems mean that foundational questions are simply done.
Oppositely, the position that foundations people now have to take is that
even though you can create a universe compatible with any true-but-unprovable
proposition (like the continuum hypothesis or its negation), some of these
true-but-unprovable hypotheses are more plausible, aesthetically appealing,
or something, and those are the ones that should be considered "true" in some
ideal reality (I'm trying to crudely paraphrase Raymond Smullyan here).

------
js8
In a sense mathematics already is experimental, because we already have a
class of things, although not formally defined, whose truth or falsity we
simply don't know: hypotheses.

I am currently thinking about how to make an automated mathematician based on
lambda calculus, which would have its own notion of "beauty" and, based on
this, would try to select interesting definitions. It seems such a system
needs to have a notion of experiment.

Since computations are proofs, theorems are basically computations whose
result we already know, i.e. which have already been done (all with respect
to some axiomatic system, given by types).

Such a system also needs a notion of "economy", which lets it allocate
computational resources effectively. So it needs to be able to evaluate
things only partially, to avoid infinite loops etc.

This naturally leads to experimental approach, where you don't only know true
or false, but there is a wide spectrum of what you know about certain
statement (lambda expression).

------
gaur
What's the point of inserting untranslated French text into an English
article?

Is it supposed to just be window-dressing? Is it supposed to promote the
(outdated and highly dubious) notion that all educated people speak French? Is
it just for the author to show off? Whatever the reason, it's a highly
obnoxious practice and it doesn't improve the article.

~~~
Rexxar
The author probably just likes Leibniz a lot and prefers to quote him in the
original language of the text.

Would you have preferred Latin or German? Leibniz used those too.

~~~
gaur
> Would you have preferred Latin or German?

I could get by with German, but that's beside the point: the essay is written
for an English-speaking audience, so it's stupid to insert extended passages
of foreign text without providing translation.

~~~
jeffsco
Obviously nobody is forcing you to read it. Perhaps it's written for an
English speaking audience that doesn't mind skipping over some French here and
there.

