You have two choices here:

1. You assume "trivial arithmetic facts" as axioms.

Result: You have an infinite number of axioms. (Whee!) The likelihood that you have snuck in non-trivial assumptions is pretty high unless you are very strict about how you define "trivial" (which is probably as much work as just proving the trivial facts), and if you aren't strict, there's a high probability that some of your "trivial" facts are false.

2. You demonstrate that you can prove "trivial facts" in your system and you do so when needed by more complex proofs. The proofs of trivial facts are not necessarily trivial.

In neither case is your handling of "trivial facts" meaningless.

Quod erat demonstrandum.




From a mathematical point of view, one must be very careful, and we have abundant evidence of that.

However, we are fully justified in saying that if anybody came up with a mathematical system in which 2 + 2 != 4, we can dismiss it without having to do some sort of deep analysis of it. 2 + 2 = 4 is obvious. We can literally do it with 4 little objects right in front of us. *If we cannot accept that as obvious, we are hopelessly ignorant and have no reason to trust our fancy proofs, either.* (Italicized to emphasize my main point.) If you can't trust that, you certainly can't trust that the significantly more complicated number theory axioms do anything useful.

Note that 2 + 2 = 4 carries some implicit context when we say it without qualification, and subtly sliding in a context change is not a disproof. 2 + 2 = 1 modulo 3, but that's not what anybody means without qualification. Clearly we're operating on "the numbers I can hold in my hand" here, or some superset thereof. Note how I'm not even willing to say "the natural numbers" necessarily; it isn't obvious to me what some billion-digit number added to some other billion-digit number is. It's actually crucial to my point here that I'm not extending "obvious" out that far; I can only run an algorithm on that and trust the algorithm. But I'm just being disingenuous if I claim ignorance of 2 + 2. And being disingenuous like that tends to turn people off, and doesn't encourage them to try to learn more.


Let's say I have a new mathematical system (that I call the "brontosaurus"). It is made up of some fundamental elements ("a little end", "a big part in the middle", and "another little end") and some axioms ("and that's how dinosaurs are shaped"). Now, I claim that this is a complete formalization of all mathematics, such that if you prove some statement S involving quaternions, tesseracts, and turgid verbals, you can be assured that no falsehoods have crept in and therefore that S is true.

But in my system, it's not immediately apparent whether 2+2=4, if only because none of '2', '4', and '+' are part of the fundamental elements. So, how are you going to determine whether my system claims 2 + 2 = 4 or 2 + 2 != 4? You'll need to prove it, one way or the other (or both, in which case my system is screwed).

The proof that 2+2=4 has absolutely nothing to do with "claiming ignorance" and everything to do with whether or not you can "trust the algorithm".


Okay. So according to you, the explicit proofs of the following facts, using the Peano axioms, are all meaningful for mathematics:

1. 2+2=4

2. 3+2=5

3. 3+3=6

4. 4+3=7

5. 4+4=8

Is there any serious mathematical work that explicitly proved them all? Have those proofs influenced mathematics in any meaningful way, or in any way at all? Did they demonstrate something that we didn't know before?

I might agree that a single demonstration of something like 1+1=2, using the Peano axioms, might be educational (though not necessary) for somebody who tries to understand the axioms, but to claim that in general those proofs are meaningful is simply false. A thousand-page book filled with proofs of "n + m = l"-type statements will contribute absolutely nothing to mathematics.


Well, according to the Wikipedia page on Russell and Whitehead's Principia Mathematica[1], "54.43: "From this proposition it will follow, when arithmetical addition has been defined, that 1 + 1 = 2." —Volume I, 1st edition, page 379. (The proof is actually completed in Volume II, 1st edition, page 86, accompanied by the comment, "The above proposition is occasionally useful." - they go on to say "It is used at least three times, in 113.66 and 120.123.472.")" So that proof has been used on at least three occasions.

No, there are no books (that I know of) containing explicit proofs of "n + m = l" for all n and m up to some values. There wouldn't be any point: if you can prove 1+1=2 at all, then the generalization to n+m=l (where l is the "intuitive" value of n+m) should be easy enough to use directly. But my point is that, if you are constructing a proof in formal mathematics and find yourself at a step having to demonstrate 4+4=8, you have to either (a) assume 4+4=8 or (b) prove that 4+4=8, either manually or invoking some previous lemma or some such. There are no other choices. And option (a) seems to imply that you have an infinite number of axioms, at least one for each possible n+m=l---and that's kind of frowned upon in formal mathematics.
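
To make option (b) concrete, here is roughly what such a proof looks like in Lean 4 (my choice of system purely for illustration; the names N, add, two, and four below are hand-rolled rather than taken from any library):

    -- Peano-style naturals: a zero and a successor, nothing else.
    inductive N where
      | zero : N
      | succ : N → N

    -- Addition defined by the usual two Peano recursion equations.
    def add : N → N → N
      | n, .zero   => n                  -- n + 0       = n
      | n, .succ m => .succ (add n m)    -- n + succ(m) = succ(n + m)

    def two  : N := .succ (.succ .zero)
    def four : N := .succ (.succ (.succ (.succ .zero)))

    -- 2 + 2 = 4: both sides compute down to the same term, so the proof is
    -- just "run the definitions". 4 + 4 = 8 and the other sums in the list
    -- above go through in exactly the same way.
    example : add two two = four := rfl
Nothing beyond the two defining equations is assumed here; all rfl does is run them.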

Think of it as a programming problem. (Formal mathematics and programming have a great deal in common.) The required output includes the line

    n = 4
when n = 4. You explicitly have to print "n = 4" somehow, either by invoking

    printf("n = %d\n", n);
or by writing a function to print the decimal value of a variable or by having a big table of

    "n = 0",
    "n = 1",
    ...
for all values of n and printing the appropriate string from that list. You don't have the option of saying "That's trivial" and going on without doing anything.
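
If it helps to see that spelled out, here is the analogy as a small compilable toy (print_n and table are names I've made up just for this illustration, and all three branches print the same "n = 4"):

    #include <stdio.h>

    /* Option 2: write the decimal-printing routine yourself.
       (Toy version: only handles small non-negative n.) */
    static void print_n(int n) {
        char digits[12];
        int i = 0;
        do {
            digits[i++] = '0' + n % 10;
            n /= 10;
        } while (n > 0);
        fputs("n = ", stdout);
        while (i > 0)
            putchar(digits[--i]);
        putchar('\n');
    }

    /* Option 3: a finite table of precomputed strings, one per value of n. */
    static const char *table[] = { "n = 0", "n = 1", "n = 2", "n = 3", "n = 4" };

    int main(void) {
        int n = 4;
        printf("n = %d\n", n);  /* option 1: let the library do the work */
        print_n(n);             /* option 2: do the work yourself */
        puts(table[n]);         /* option 3: look it up */
        return 0;
    }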

Now, as for whether this kind of formalization, which necessarily makes all the details explicit, has any influence, I'm the wrong person to ask. I suggest Gottlob Frege, David Hilbert, Russell and Whitehead, Kurt Gödel, or Turing.

For me, I just have to note that all of the dependently typed programming languages I've played with have used Peano arithmetic for encoding the size of an array in the type system. As a result, the requirement of a proof that n+m=l (where l is the "intuitive" value...) has been encoded in the type of, say, array concatenation.
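
Very roughly, the shape is something like this Lean 4 sketch (invented names, not quoted from any particular language or library):

    -- A length-indexed vector: the length is part of the type.
    inductive Vec (α : Type) : Nat → Type where
      | nil  : Vec α 0
      | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

    -- Concatenation: the type alone promises that the output length is the
    -- sum of the input lengths. The index is written m + n rather than n + m
    -- because Lean's Nat.add recurses on its second argument; with n + m the
    -- same code does not elaborate until you invoke lemmas like 0 + m = m
    -- and succ n + m = succ (n + m), which are exactly the kind of "trivial
    -- facts" under discussion in this thread.
    def Vec.append : {α : Type} → {n m : Nat} → Vec α n → Vec α m → Vec α (m + n)
      | .nil,       ys => ys
      | .cons x xs, ys => .cons x (Vec.append xs ys)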

[1] https://en.wikipedia.org/wiki/Principia_Mathematica


> if you can prove 1+1=2 at all, then the generalization to n+m=l (where l is the "intuitive" value of n+m) should be easy enough to use directly

But the proof of 1+1=2 is itself trivial in PA. So this really doesn't mean much. In fact it seems like you are agreeing with me that having explicit proofs of m+n=l for m,n>1 is meaningless.

> But my point is that, if you are constructing a proof in formal mathematics and find yourself at a step having to demonstrate 4+4=8

I understand your point, but saying that there might be occasions where a proof of P can be meaningful is not the same as saying that a proof of P is meaningful. The former is almost a tautology, and can be said about practically any proof whatsoever.

> For me, I just have to note that all of the dependently typed programming languages I've played with have used Peano arithmetic for encoding the size of an array in the type system. As a result, the requirement of a proof that n+m=l (where l is the "intuitive" value...) has been encoded in the type of, say, array concatenation.

This doesn't show that those proofs are meaningful in mathematics.


> The proofs of trivial facts are not necessarily trivial

The proof that 2+2=4 is trivial given the Peano axioms.

I suspect you know this was the point being made.


Given the Peano axioms, yes. But most attempts at a formal axiomatization of mathematics don't start from Peano. (If you start from arithmetic, you may have difficulty proving set theory.) Frege started from Cantor's naive set theory, which blew up in his face. Russell and Whitehead[1][2] finally started from formal logic and some weird typed version of set theory that no one has ever explained to me and thus that I fear with superstitious dread. And that takes somewhere over 300 pages to get to "1+1=2", and the proof isn't actually completed until page 86 of volume 2 (according to Wikipedia).

Further, that the fact is trivially provable from Peano's axioms still doesn't mean the proof is meaningless.

[1] https://en.wikipedia.org/wiki/Principia_Mathematica

[2] http://www.storyofmathematics.com/20th_russell.html



