Hacker News

I believe you're mistaken. There is value in axioms and axiomatic proofs: two different people will almost certainly have different notions of "obvious", and may even have different understandings of a mathematical problem. So a proof may be accepted by one person and rejected by another.

Given a set of axioms and rules of inference, it's possible to mechanically check a proof. It's not really possible to reliably check proofs otherwise.




I'm not saying anything about the ability to check proofs, or the value of axiomatic proofs in general, only that 2+2=4 specifically doesn't require an axiomatic proof in order to convince anybody that it is true. This is like saying that we need a rigorous theory of color in order to be convinced that black is darker than red.

Mathematicians didn't axiomatize the natural numbers in order to show that 1+1=2 or 2+2=4, or any other trivial arithmetical fact. They have never doubted those facts, and I don't know what "doubting 2+2=4" would even mean. In fact the entire process is reversed: they invented axioms that could form a formal basis for what we already knew to be true. If the Peano axioms proved that 2+2=6, they wouldn't be a valid axiomatization of the natural numbers. One cannot axiomatize the natural numbers without already assuming that all the basic arithmetic facts we know about them are true (otherwise they wouldn't be axiomatizing the natural numbers, but something else). Somebody who rejects 2+2=4 has a problem understanding human language, not proofs.
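To make the direction of fit concrete, here is a toy sketch (my own illustration, not anything from the thread): encode Peano-style numerals and the usual recursive definition of addition, and 2+2=4 falls out mechanically, precisely because the definitions were chosen to agree with ordinary counting.

```python
# Toy Peano-style naturals: Z is zero, S(n) wraps n in a tuple (successor).
# Illustrative sketch only; the point is that the axioms are chosen so that
# evaluation agrees with the counting we already trust.

Z = ()  # zero

def S(n):
    """Successor of n."""
    return (n,)

def add(m, n):
    """Peano addition: m + 0 = m,  m + S(k) = S(m + k)."""
    if n == Z:
        return m
    return S(add(m, n[0]))

two = S(S(Z))
four = S(S(S(S(Z))))
print(add(two, two) == four)  # True: 2 + 2 = 4 under these definitions
```

If `add` instead produced six here, we would not conclude that 2+2=6; we would conclude the encoding was wrong, which is exactly the reversal described above.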


I feel like this kind of attitude overlooks the fact that everyone has a different idea of "obvious", and that this kind of self-assured confidence is what froze geometry until about 100 years ago.

Realizing that axioms were switches to be turned on and off to generate new structures that may or may not be useful was an important step toward abandoning the most obvious and intuitive truths of geometry. Thus geometry has no concept of "true" outside of axioms, and "true" simply means internally coherent. Outside of formalization, "obviously true" is just confidence getting in the way.


> This is like saying that we need a rigorous theory of color in order to be convinced that black is darker than red.

You do, if you want to be right. The fact that you can get people to agree with you doesn't make you right, and red is frequently darker than black by some pretty normal definitions of "darker". Red and black are differentiated by the shape of their reflectance spectrum, not its amplitude.


You guys are basically arguing over Moore's here-is-one-hand problem.

https://en.wikipedia.org/wiki/Here_is_one_hand

pavelrub's point is that you sometimes have less reason to believe the axioms of your formalization than their derived consequences. We have better reason to believe the intuitive idea that 2+2=4 than we do any putative axioms of arithmetic. If we derived that 2+2=5 from some particular axioms of arithmetic, we would conclude those axioms were wrong (or rather, were not the proper system for formalizing 2-plus-2-ness) rather than conclude that 2+2=5.


Pure black is never lighter than any shade of red, by any definition of "darkness" or "lightness" that I'm aware of. The point here, again, is that the word "dark" didn't come into language from some rigorous theory of color; the process is reversed. A theory of color can never show that red is darker than black, because this would simply be a misuse of the word "darker", in the same way that no valid axiomatization of the naturals can possibly show that 2+2 != 4.


There is no pure black in the real world.


There is theoretical pure black in any modern representation of color. If the term confuses you, you can replace it with #000000. If the black vs red comparison still confuses you, you can replace it with #600000 vs #FF0000. You might also want to address my actual argument, instead of irrelevant technicalities.


How are you getting a "theoretical pure black" without having a rigorous theory of color? If you're just going to call some things black and some things red, you'll find that a lot of things that people call black are lighter than a lot of other things that people call red.


I’ve already told you what to do if the term “pure black” causes problems. The color represented by #600000 on my monitor is always darker than the color represented by #FF0000, regardless of the existence of any theory of color. If you claim that some formal theory of color convinced you of that, then it would mean that a different theory of color could conceivably convince you otherwise. But this is clearly impossible: we would say that any theory of color which shows that #600000 is lighter than #FF0000 is simply misusing the word “lighter”. This shows that our understanding of what “lighter” and “darker” mean is independent of any theory of color, and in fact this understanding is a precondition for the development of such a theory to begin with.


But none of this is true. #600000 on your monitor might be lighter than #FF0000 on your monitor if they're compared at different times, or if part of your monitor happens to be in the shade. Or if I have a weird viewing angle to your monitor.

If you want to argue about what #600000 and #FF0000 should look like, you're back to a rigorous theory of color.


No, because I can be very specific about the conditions of my monitor, without reference to any theory of color (I can just say: in the exact conditions my monitor is in at this moment, or in the conditions of a darkened room, where no external light source can make #600000 lighter than #FF0000). I can hypothetically also show you my monitor in person, in any conditions I choose.

In fact I don’t even have to talk about my monitor or those specific colors. We can talk about the color of space as seen from the ISS on the dark side of the Earth, versus the color of the sun when visible from the ISS. Or the color of my room at night vs the color of a candle flame. There are millions of examples we can think of without talking at all about any theory of color. I mean, it is a matter of fact that words such as “darkness” and “lightness” existed before the invention of rigorous representations of color, and that the numerical definitions of those words were designed on purpose to coincide with their existing meanings, and not with some new arbitrary property of color. So I’m not even sure what you are arguing against.

But the whole argument about colors is really unnecessary, I only used it because I thought it would be simpler to understand, but I might have been wrong. If you don’t think that it supports my argument about 2+2=4, feel free to ignore it and address the argument itself.


> There is theoretical pure black in any modern representation of color. If the term confuses you, you can replaces it with #000000. If the black vs red comparison still confuses you, you can replace it with #600000 vs #FF0000. You might also want to address my actual argument, instead of irrelevant technicalities.

So you're using a numeric representation of colors (RGB) to prove to me that black is darker than red.

By doing this, you're basically proving my point.

1) different people may have different opinions on "obvious" statements

2) the simpler the statement is, the easier it is to accept or reject it

If you give me two color plates, one black and one red, I might find people who disagree about which one is darker.

But if you give me photometric measurements, I will say that one is objectively darker with respect to a specific metric (e.g. visible photon energy flux, YCbCr luma, CIECAM02 lightness).
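For instance, one such metric makes the comparison fully mechanical. This is a sketch using the Rec. 709 / WCAG relative-luminance formula, which I'm substituting here for the metrics named above; for these particular colors any of them would give the same ordering.

```python
# Compare sRGB hex colors by relative luminance (Rec. 709 / WCAG formula).

def srgb_to_linear(c):
    # Undo the sRGB gamma curve; c is a channel value in [0, 1].
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color):
    # Parse "#RRGGBB" into three channels, linearize, and weight per Rec. 709.
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    r, g, b = map(srgb_to_linear, (r, g, b))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(luminance("#600000") < luminance("#FF0000"))  # True
print(luminance("#000000") < luminance("#600000"))  # True
```

With respect to this metric, #600000 is darker than #FF0000, and pure black is darker than both.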


> So you're using a numeric representation of colors (RGB) to prove to me that black is darker than red.

No, I’m not doing that at all - I’m simply creating a well-defined understanding between us about which colors we are talking about, so that there won’t be any confusion. If you were sitting next to me, I could show you some other two colors in person, with no reference to RGB or to any other numeric representation, and the exact same argument would stand.

No specific metric can ever show that #FF0000, as it is displayed on any reasonably well-balanced monitor, is darker than #600000. If somebody invented such a metric, we would say that this metric is either incorrect, or misuses the word “darker”. This would also be the case if no other metrics existed before it. Therefore it is clear that our understanding that #FF0000 is darker than #600000 is independent of any formal description of darkness, and comes prior to it.

And you are still avoiding, for some reason, my main argument, which had nothing to do with colors, and dealt specifically with 2+2=4.


But are people more likely to accept the axioms of Principia Mathematica (and the soundness of every logical step from page 1 to 300) than they are to accept the notion that 2 + 2 = 4 based on intuitive notions of what twoness, fourness and plusness are?


Of course they are more likely to believe their intuitions. They also believe that it makes no difference whether or not you swap doors in the Monty Hall problem, and don't believe that with only 23 people the odds of a shared birthday are more than 50%.
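The birthday claim, at least, can be checked exactly rather than trusted to intuition (a small illustration of my own, not from the thread):

```python
# Probability that at least two of n people share a birthday,
# assuming 365 equally likely days and independent birthdays.

def shared_birthday_prob(n, days=365):
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (days - k) / days
    return 1 - p_all_distinct

print(round(shared_birthday_prob(23), 4))  # 0.5073 -- already past 50%
print(round(shared_birthday_prob(22), 4))  # 0.4757
```

23 is exactly the point where the probability first crosses 50%, which is why the intuitive guess ("surely you need far more people") fails.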

To some extent, there is the problem. People trust their intuitions, and their intuitions are often wrong. That's why for some things we need proper proofs.


But proofs always come back to axioms, and on what basis do we accept axioms? That they sound intuitively right. So we've just kicked the problem upstairs a bit, we can't avoid using our intuition.

Personally I'm more likely to believe 2 + 2 = 4, something I can easily check to my own satisfaction using four objects, than I am to believe the Axiom of Choice.


We still use our intuitions, but now everyone knows the starting set of assumptions.

As for the axioms in use, I think the big reason they were chosen is that they lead to results we already wanted, or had already proved, to be true.

Another thing to keep in mind: not everyone works with the same set of axioms, which, as someone with a formalist[2] view of mathematics, I find interesting. For example, not everyone studying logic assumes the law of the excluded middle[1]. One consequence of dropping it is that you can no longer prove a statement by refuting its negation, i.e. the classic proof by contradiction.
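A concrete way to see this (my own sketch in Lean 4, not from the thread): the direction p → ¬¬p is provable constructively, while the converse, double-negation elimination, is what licenses proof by contradiction and requires a classical principle.

```lean
-- Provable without excluded middle: from p we can refute ¬p.
theorem dni (p : Prop) (h : p) : ¬¬p := fun hn => hn h

-- The converse is what licenses proof by contradiction, and it
-- requires a classical axiom:
theorem dne (p : Prop) (h : ¬¬p) : p := Classical.byContradiction h
```

Without access to `Classical` axioms, the second theorem has no proof, which is exactly the restriction described above.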

The axiom of choice is another example, with some mathematicians accepting it and others not. As a formalist, I don't have issues with this (as long as both sets of axioms are interesting and "intuitive"); other philosophies of math might.

[1] https://en.wikipedia.org/wiki/Law_of_excluded_middle [2] https://en.wikipedia.org/wiki/Formalism_(philosophy_of_mathe...


It comes down to levels of assurance. The rules of inference you are using are part of your design space. Mathematics at its core is about clever problem solving. When you encounter a problem you have to decide what you would accept as a proof, and this forces you to accept certain reasoning principles.

The caveat is that this is non-trivial and it's very easy to make people accept assumptions which are completely wrong. That's really the main reason to accept the axioms of set theory: People have been trying to poke holes in them for a hundred years and nobody has managed it yet. If you can use set theory (or something equiconsistent) to solve your problem, chances are that nobody will be able to call you out on a mistake.


Well, the whole idea is to pick simple axioms, so they're harder to get wrong. And every successful prediction that math makes based on the axioms is, in a sense, a verification of them.



