I had an incorrect proof in a paper I wrote in Mathematics of Computation; it wasn't noticed until three years later, at which point I wrote another paper in the same journal with a (correct) proof of the result -- with the slight difference that the correct proof was five pages long, whereas the incorrect proof was five lines long.
The reason the result I claimed was not significantly wrong (the correct theorem required the additional assumption that nobody does floating-point arithmetic with less than five bits of precision) was that I discovered the result experimentally: When I was writing the first paper, I needed an error bound and wasn't sure what it should be, so I had my computer do millions of trials on random inputs and tell me the worst rounding error it encountered. Only after experimentally convincing myself that I had the right result did I try to come up with a proof.
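In spirit, the experiment was something like the following (a minimal Python sketch, not the code I actually used; it compares double-precision complex multiplication against an exact rational-arithmetic reference and reports the worst relative error seen, in units of the unit roundoff u):

    import random
    from fractions import Fraction

    def exact_product(a, b, c, d):
        # Exact (ac - bd, ad + bc): doubles convert to Fractions without error.
        a, b, c, d = map(Fraction, (a, b, c, d))
        return a * c - b * d, a * d + b * c

    def rel_error(a, b, c, d):
        # Product computed the ordinary way, in double precision.
        re = a * c - b * d
        im = a * d + b * c
        re_x, im_x = exact_product(a, b, c, d)
        num = (Fraction(re) - re_x) ** 2 + (Fraction(im) - im_x) ** 2
        den = re_x ** 2 + im_x ** 2
        return 0.0 if den == 0 else float(num / den) ** 0.5

    worst = 0.0
    for _ in range(1_000_000):   # exact arithmetic is slow; shrink this for a quick run
        a, b, c, d = (random.uniform(-1.0, 1.0) for _ in range(4))
        worst = max(worst, rel_error(a, b, c, d))

    u = 2.0 ** -53   # unit roundoff for IEEE double precision
    print("worst observed error: %.3f u" % (worst / u))

Exact rationals are slow, but they leave no doubt about the reference value you're comparing against.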
EDIT: I should add that the result in question concerned the maximum rounding error which could result from multiplying complex floating-point values -- so it's not exactly an esoteric problem which nobody would care about.
Would you mind sharing a link to your papers? From scholar.google I'm guessing this is the corrected version but it didn't list anything older. (http://www.daemonology.net/papers/fft.pdf)
I'd say it's because it's the field with (a) the strictest formal definition of what "correct" means, and (b) no dependence on experiment, and hence no data and no statistics.
Everything you need to determine whether or not a given proof is correct is either in the paper in front of you, in the literature, or implied by the literature. If you can't follow the implications, you ask the author of the proof to clarify it.
Compare this to research in, say, medicine, where there are always millions of variables and progress requires careful controls, a statistician, an enormous budget that supports many trials of the same thing, and hope.
My hypothesis is that this is also why so many mathematicians do great work when they're young: It's one of the few fields where you don't have to do everything N times, because there's no statistical error shrinking like one over the square root of N that you have to beat down by repetition. Once you've figured something out in math, it stays figured out. So all you need to become a mathematician -- besides your own mind -- is a library, the ability to read, and patience. (Although, in practice, nobody becomes a great mathematician without a teacher or two to guide them through the library. Even Ramanujan only found a couple of the books on his own.)
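To make the 1/sqrt(N) point concrete, here's a toy illustration (my own sketch, nothing from the thread): the error of an estimate built from N noisy measurements shrinks only like 1/sqrt(N), so quadrupling the number of trials merely halves the error.

    import random
    import statistics

    def mean_abs_error(n, reps=200, true_mean=0.0, sigma=1.0):
        # Average |sample mean - true mean| over `reps` simulated experiments,
        # each consisting of n noisy measurements.
        errs = []
        for _ in range(reps):
            samples = [random.gauss(true_mean, sigma) for _ in range(n)]
            errs.append(abs(statistics.mean(samples) - true_mean))
        return statistics.mean(errs)

    for n in (100, 400, 1600, 6400):
        # Each step quadruples n, so the printed error should roughly halve.
        print(n, round(mean_abs_error(n), 4))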