A Century of Controversy Over the Foundations of Mathematics (auckland.ac.nz)
55 points by mariorz on June 6, 2009 | 22 comments



Absolutely fascinating read. I think I feel the same way prehistoric man would have felt looking at nature. It's a daunting task, but it feels like our journey, like our ancestors', has only just begun; and it's getting more beautiful. I wanna hear Bach now.


Thank you, a very interesting lecture. The process physics he links to is maybe even more interesting. Everything based on random fluctuations... hmm, I'd better learn my statistics well!


I've been wondering about the Russell Paradox...

  S = the set of all sets not members of themselves
  x = 1 / 0
Isn't the problem for x the same as for S? Not all mathematical expressions are well-defined, and likewise not all "set expressions" are.

Aside: There was a wonderful quote in Jaynes' Logic of Science, decrying the kind of airy mathematics that Chaitin is doing...

Should one design a bridge using theory involving infinite sets or the axiom of choice? Might not the bridge collapse?


The problem is that there is no X for which "X / 0" is defined, at least in the sort of traditional mathematics you're talking about.

But "the set of all sets which meet criterion X" is well-defined and is an extremely common thing to encounter. Similarly, "X is a member of itself" is well-defined and is fairly common.

So you end up with this definition -- the set of all sets which are not members of themselves -- which just puts together these two common, well-defined (at least, we hope they're well-defined) concepts and ends up at a contradiction.


Mwell, I'm not really satisfied by your answer. Both "1" and "0" are well-defined expressions; division is also well-defined, but, in order to avoid logical contradictions, we cannot meaningfully define the division of 1 by 0. To me, that sounds exactly like what you described above, and I thought that the axiom of choice was the analog of the "you can't divide by 0" rule.


Maybe look at it this way:

There is no X for which "X / 0" is defined.

There are many, many, many X (in fact, potentially all X such that X is a set) for which "X is a member of itself" is defined.

In other words, division by zero universally leads to contradiction no matter what you put in for X. But self-membership is perfectly well-behaved for lots and lots of values, and only starts to get troublesome on certain particular cases. This is curious: why should something which works well in many, many cases end up at a contradiction based not on the logical form of the statement, but rather on the particular values we substitute in place of its variables?
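To make the troublesome case concrete, the particular value that breaks self-membership is the Russell set itself:

    R = { x : x ∉ x }
    R ∈ R  ↔  R ∉ R

Substituting R into its own membership criterion yields a statement equivalent to its own negation, using nothing beyond the definition.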


Yes, the problem is exactly analogous. And whenever you manipulate equations of rational expressions, or manipulate any equations by using division, you have to be careful that you don't accidentally invalidate your logic by inadvertently dividing by zero.

A classic example is http://www.math.utoronto.ca/mathnet/falseProofs/first1eq2.ht... — you get from a correct equation to 2 = 1 by simply canceling an innocuous factor (a² - ab) from both sides, the problem being that the innocuous factor happens to be equal to 0.
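From memory, the proof on that page runs roughly like this, with every step looking innocuous:

    a = b
    a² = ab
    2a² = a² + ab
    2a² - 2ab = a² + ab - 2ab
    2a² - 2ab = a² - ab
    2(a² - ab) = 1(a² - ab)
    2 = 1

The final cancellation divides both sides by a² - ab, which equals 0 precisely because a = b.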

You can avoid this problem by changing your reasoning procedure. Old, incorrect procedure. Given:

    xy = xz
conclude

    y = z
Corrected procedure: given:

    xy = xz
conclude:

    y = z ∨ x = 0
In many circumstances you can then immediately eliminate the x=0 case, e.g. when x is a numeric constant other than 0, or when x=0 → y=z by some other chain of reasoning. In other circumstances it turns out to complicate your proof enormously, and in cases like the fallacious proof I mentioned above, it actually makes the proof impossible.
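Applied to the fallacious proof above, the corrected rule only lets you conclude

    2 = 1  ∨  a² - ab = 0

and since a = b forces a² - ab = 0, the second disjunct is the one that holds; the 2 = 1 branch can never be isolated.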

So the surprising discovery that this basic set-creation operation — the set of all objects meeting some criterion — contained a pitfall analogous to that of division occasioned some consternation. What was the analogous correction to make to reasoning with set theory? And once that was settled, all of the proofs made under the old system had to be checked wherever they used this fundamental operation.
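For what it's worth, the correction that eventually stuck (Zermelo's separation axiom) restricts set creation to carving subsets out of a set you already have:

    Naive:       S = { x : P(x) }         for any property P
    Restricted:  S = { x ∈ A : P(x) }     only over an existing set A

Russell's property P(x) = "x ∉ x" is still allowed, but for each A it now only produces the members of A that aren't members of themselves, and the contradiction evaporates: you can only conclude that this subset is not itself a member of A.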


Great link! I've read about a lot of this stuff in textbooks, etc., but it all fits together so much better when put in its proper historical context like that. If I understand it correctly, his main result in algorithmic complexity theory implies that it is impossible to prove the absence of a pattern in data. Does this thus imply that there is no way for us to know when we know everything?
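The result I have in mind is Chaitin's incompleteness theorem: roughly, a formal system whose axioms can be described in about N bits cannot prove any true statement of the form

    K(x) > N + c

for a specific string x, where K is program-size (Kolmogorov) complexity and c is a constant depending on the system. Since "x has no pattern" amounts to "K(x) is large", that's where my gloss above comes from.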


"So 'this statement is false' is false if and only if it's true, so there's a problem."

I don't understand this. This seems to assume that the meaning of a phrase is contained within the phrase itself. But if you instead assume that the phrase is a pointer to meaning that is contained somewhere else then the paradox goes away.



Interesting. So what I was getting at was exactly what Tarski said. Except that he coined some cool words for his explanation, which I will steal for future use.


Except Tarski didn't really solve it, he just kinda swept it under the rug. The object language/metalanguage distinction introduces an infinite regress (since, the moment you want to talk about the metalanguage, you need a metametalanguage for those statements, which means you'll need a metametametalanguage, and so on).
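The regress looks something like this, since the "true-in" predicate for each level has to live one level up:

    L0:  snow is white
    L1:  "snow is white" is true-in-L0
    L2:  '"snow is white" is true-in-L0' is true-in-L1
    ...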

And then Godel blew everything up anyway :)


And that (the Humpty Dumpty reading, wherein the meaning and intent of words is subordinated to the reader's desire) is an assumption of the sort one has to make in order to "prove" that it is fundamentally impossible for a statement to be self-contradictory. Such statements are always introduced in intellectual discourse as self-referential; that is, the words "this", "I" and "me" refer to the statement itself, by definition. There is no ambiguity of the sort that one might find in Magritte's "Ceci n'est pas une pipe", where one may choose to infer "ceci" to refer to any of the depiction of the pipe, the painting as a whole, or just the written statement (none of which is a pipe as such).

Think of it this way:

If a stranger were to approach you on the street, say, "I am a liar," then walk away, you'd likely think that the fellow is a little depressed and perhaps a bit remorseful about a habit of lying whenever it suited him. If the same fellow, in the same circumstances, had instead said "I am lying to you right now" before wandering off, you're left with two choices -- seriously psychotic or smartass.

People are reluctant to think of systems such as language and mathematics as seriously psychotic, so a cottage industry has sprung up trying to prove that these systems are merely smartasses out to infect our internal source code repositories with Brainfuck or some other Turing tarpit. No matter how many times one invokes the demons of deconstructionism, these paradoxes are, and will remain, very real.


Well, that's a pretty large assumption. At least in American English, the phrase "this _____" is commonly understood to be self-referential in some manner unless some other context has been previously established. For example:

This sentence contains ten words and is a true statement.

If you haven't read Godel, Escher, Bach, now is the time.


Oh, come on, guys. Those who read www.thedailywtf.com are well aware that a logical statement can be:

a) true

b) false

c) file not found


Self-referential to the words themselves, or to the meaning of the words?


Depends on what sort of metaphysics of propositions you accept, whether you want to deal with an extensional or an intensional language, and a whole bunch of other stuff that generally only philosophers and some mathematicians have cared about.

If you're interested, though, let me know and I'll see what I can dig up for you from back when I was studying this stuff.


Either, and occasionally both, my favorite example being "this sentence no verb". There's a big difference between being able to find a way out of a given paradox and being able to generalize that insight to make statements about computability (or proofs, in the context of mathematics).


Is this correct in set theory?

    ω/ω = 1
    1/ω = ε
    ε·ω = 1
    ω = 1/ε

If we assume that for every positive number there is only one negative number, then this must be true: +∞/−∞ = 1


> Is this correct in set theory?

More or less:

http://en.wikipedia.org/wiki/Surreal_numbers#.22To_Infinity_...
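Roughly, in the surreals ω and ε are actual numbers rather than limits, defined (sketching) as

    ω = { 0, 1, 2, 3, ... |  }
    ε = { 0 | 1, 1/2, 1/4, ... }

and one can show ω·ε = 1, so ε = 1/ω and ω/ω = 1 as in the parent. The last identity in the parent would come out as −1 rather than 1, though, which is why I say "more or less".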


Why is 1/ω = ε? Would it not tend to 0? Perhaps I'm misunderstanding you; I don't really grok set theory.


Sure, it tends to zero, which is ε; ω being the largest and ε the smallest possible.

Never mind, just trying to understand Cantor's brilliant mind.



