1. You assume "trivial arithmetic facts" as axioms.
Result: You have an infinite number of axioms. (Whee!) Unless you are very strict about how you define "trivial" (which is probably as much work as just proving the trivial facts), the likelihood that you have snuck in non-trivial assumptions is pretty high, and there's a good chance that some of your "trivial" facts are outright false.
2. You demonstrate that you can prove "trivial facts" in your system and you do so when needed by more complex proofs. The proofs of trivial facts are not necessarily trivial.
In neither case is your handling of "trivial facts" meaningless.
Quod erat demonstrandum.
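To make the two options concrete, here is a minimal sketch in Lean 4 (the names are mine, purely for illustration). Option 1 postulates the fact; option 2 proves it, and the proof is just computation:

    -- Option 1: postulate the fact. One down, infinitely many instances to go,
    -- and nothing stops a false "trivial fact" from slipping in the same way.
    axiom two_plus_two_ax : 2 + 2 = 4

    -- Option 2: prove it. Because + on Nat is defined by recursion on the
    -- successor, both sides reduce to the same numeral and `rfl` closes it.
    theorem two_plus_two : 2 + 2 = 4 := rfl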
However, we are fully justified in saying that if anybody came up with a mathematical system in which 2 + 2 != 4, we can dismiss it without having to do some sort of deep analysis of it. 2 + 2 = 4 is obvious. We can literally do it with 4 little objects right in front of us. *If we cannot accept that as obvious, we are hopelessly ignorant and have no reason to trust our fancy proofs, either.* (Italicized to emphasize my main point.) If you can't trust that, you certainly can't trust that the significantly more complicated number theory axioms do anything useful.
Note that 2 + 2 = 4 carries some implicit context when we say it without qualification, and subtly sliding in a context change is not a disproof. 2 + 2 = 1 modulo 3, but that's not what anybody means without qualification. Clearly we're operating on "the numbers I can hold in my hand" here, or some superset thereof. Note how I'm not even willing to say "the natural numbers" necessarily; it isn't obvious to me what some billion-digit number added to some other billion-digit number is. It's actually crucial to my point here that I'm not extending "obvious" out that far; I can only run an algorithm on that and trust the algorithm. But I'm just being disingenuous if I claim ignorance of 2 + 2. And being disingenuous like that tends to turn people off, and doesn't encourage them to try to learn more.
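For what it's worth, the context-change point is easy to exhibit in a proof assistant; a one-line Lean 4 sketch, using Fin 3 as a stand-in for arithmetic modulo 3:

    -- Same surface expression, different context: in Lean's Fin 3,
    -- where + is addition modulo 3, 2 + 2 computes to 1, not 4.
    example : (2 + 2 : Fin 3) = 1 := rfl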
But in my system, it's not immediately apparent whether 2+2=4, if only because none of '2', '4', and '+' are part of the fundamental elements. So, how are you going to determine whether my system claims 2 + 2 = 4 or 2 + 2 != 4? You'll need to prove it, one way or the other (or both, in which case my system is screwed).
The proof that 2+2=4 has absolutely nothing to do with "claiming ignorance" and everything to do with whether or not you can "trust the algorithm".
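As a concrete sketch of what "my system" might look like (this is my own Lean 4 toy encoding, not anything canonical): '2', '4', and '+' are all derived notions, and whether the system claims 2 + 2 = 4 is settled only by exhibiting a proof:

    -- The only fundamental elements: a zero and a successor.
    inductive N where
      | z
      | s (n : N)

    -- '+' is not primitive; it is defined by recursion on the first argument.
    def add : N → N → N
      | .z,   m => m
      | .s n, m => .s (add n m)

    -- '2' and '4' are just names for towers of successors.
    def two  : N := .s (.s .z)
    def four : N := .s (.s (.s (.s .z)))

    -- The system does claim 2 + 2 = 4, but we only know that by proving it.
    theorem two_add_two : add two two = four := rfl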
Is there any serious mathematical work that explicitly proved them all? Have those proofs influenced mathematics in any meaningful way, or in any way at all? Did they demonstrate something that we didn't know before?
I might agree that a single demonstration of something like 1+1=2, using the Peano axioms, might be educational (though not necessary) for somebody who tries to understand the axioms, but to claim that in general those proofs are meaningful is simply false. A thousand-page book filled with proofs of n + m = l type statements will contribute absolutely nothing to mathematics.
No, there are no books (that I know of) containing explicit proofs of "n + m = l" for all n and m up to some values. There wouldn't be any point: if you can prove 1+1=2 at all, then the general statement n+m=l (where l is the "intuitive" value of n+m) should be easy enough to prove once and invoke directly. But my point is that, if you are constructing a proof in formal mathematics and find yourself at a step having to demonstrate 4+4=8, you have to either (a) assume 4+4=8 or (b) prove that 4+4=8, either manually or by invoking some previous lemma or some such. There are no other choices. And option (a) seems to imply that you have an infinite number of axioms, at least one for each possible n+m=l, and that's kind of frowned upon in formal mathematics.
Think of it as a programming problem. (Formal mathematics and programming have a great deal in common.) The required output includes the line

    n = 4

You can either compute the value and print it,

    printf("n = %d\n", n);

or you can build in a table of canned strings,

    "n = 0",
    "n = 1",
    ...

with one entry for every value of n you could ever need. The printf is option (b): derive the fact on demand. The table is option (a): assume every instance separately, and no finite table ever covers them all.
Now, as for whether this kind of formalization, which necessarily makes all the details explicit, has any influence, I'm the wrong person to ask. I suggest Gottlob Frege, David Hilbert, Russell and Whitehead, Kurt Gödel, or Turing.
For me, I just have to note that all of the dependently typed programming languages I've played with have used Peano arithmetic for encoding the size of an array in the type system. As a result, the requirement of a proof that n+m=l (where l is the "intuitive" value...) has been encoded in the type of, say, array concatenation.
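For the curious, here is roughly how that encoding goes; a minimal Lean 4 sketch of my own (real libraries differ in the details):

    -- A length-indexed vector: the Peano-style length lives in the type.
    inductive Vec (α : Type) : Nat → Type where
      | nil  : Vec α 0
      | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

    -- Concatenation type-checks only because Lean can compute, from the
    -- recursive definition of +, that the result has the stated length.
    def append {α : Type} {n m : Nat} : Vec α n → Vec α m → Vec α (m + n)
      | .nil,       ys => ys
      | .cons x xs, ys => .cons x (append xs ys)

Note the result length is written m + n rather than n + m: Lean's + recurses on its second argument, so the equations above hold by pure computation in that order, while stating it as n + m requires an actual appeal to a proof that n + m = m + n. That is exactly the sort of "trivial fact" under discussion.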
But the proof of 1+1=2 is itself trivial in PA. So this really doesn't mean much. In fact it seems like you are agreeing with me that having explicit proofs of m+n=l for m,n>1 is meaningless.
> But my point is that, if you are constructing a proof in formal mathematics and find yourself at a step having to demonstrate 4+4=8
I understand your point, but saying that there might be occasions where a proof of P can be meaningful is not the same as saying that a proof of P is meaningful. The former is almost a tautology, and can be said about practically any proof whatsoever.
> For me, I just have to note that all of the dependently typed programming languages I've played with have used Peano arithmetic for encoding the size of an array in the type system. As a result, the requirement of a proof that n+m=l (where l is the "intuitive" value...) has been encoded in the type of, say, array concatenation.
This doesn't show that those proofs are meaningful in mathematics.
The proof that 2+2=4 is trivial given the Peano axioms.
I suspect you know this was the point being made.
Further, that the fact is trivially provable from Peano's axioms still doesn't mean the proof is meaningless.