The random bit flip comment is of course silly, since anyone can mechanically verify the Coq proof; it doesn't have to be the people who created it. (Coq even ships a standalone checker, coqchk, for independently re-validating compiled proofs.) What does have to be assumed is the correctness of the Coq kernel, but every use of Coq adds evidence for that, not just this one.
And so I simply don't get why you're so hung up on Erdős's assumption of credible mathematicians making a credible proof. Are you seriously insinuating that Paul Erdős, of all people, was not aware of everything you're saying here?
This is true of any proof, not only computerised proofs. Sometimes maths journals publish incorrect proofs because a person made a mistake. I doubt an incorrect proof has ever been published on account of a hardware bit-flip.
I'd be more worried about the kind of deterministic failure that manifests in every run, such as a bug in the CPU (the famous Intel Pentium FDIV floating-point bug, say), or in the operating system, etc. This could be mitigated by running on a variety of platforms, I suppose.
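As a concrete sketch of that mitigation, here's a Python illustration; the operands are the well-known FDIV test case (real), while the "compare across platforms" framing is just a sketch of the idea, not an established procedure:

```python
def fdiv_residual() -> float:
    # Classic Pentium FDIV test case: on the flawed FPU this
    # residual famously came out as 256 instead of (essentially) zero.
    x, y = 4195835.0, 3145727.0
    return x - (x / y) * y

# A deterministic hardware or library bug reproduces on every run on
# the same platform, so re-running locally proves nothing. Running the
# same computation on independent CPUs, OSes, and runtimes and
# comparing results is what stands a chance of catching it.
assert abs(fdiv_residual()) < 1e-6, "FDIV-style discrepancy detected"
```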
Large randomly generated primes are produced by exactly this kind of randomized algorithm: a test (typically Miller-Rabin) with a step that, when run on a composite number, exposes it as composite with probability at least some constant p < 1. Repeating the step k times independently drives the chance of error below (1 - p)^k, i.e. exponentially small.
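For concreteness, a minimal Miller-Rabin sketch in Python (for this test the standard per-round bound is p = 3/4, so k rounds leave an error probability of at most 4^-k):

```python
import random

def miller_rabin_round(n: int) -> bool:
    """One round of Miller-Rabin: False means n is definitely composite;
    True means n passed this round. A composite number slips through
    with probability at most 1/4 per round."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    a = random.randrange(2, n - 1)  # random base
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(r - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False  # a is a witness: n is composite

def is_probable_prime(n: int, k: int = 40) -> bool:
    """k independent rounds push the error probability below 4**-k."""
    return all(miller_rabin_round(n) for _ in range(k))
```

With k = 40, is_probable_prime(2**127 - 1) returns True with an error bound of 4^-40, which is comfortably below any plausible hardware failure rate.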
As you say, deterministic, or at least highly correlated, errors would dominate the chance that the proof was wrong. But that's true of human proofs too.
I'm not sure that's right. The peer review process works well precisely because human mathematicians are likely to make (at least somewhat) different mistakes from each other.