Large randomly generated primes are produced by exactly this kind of randomized algorithm: it has a step that, when run on a composite number, detects the compositeness with probability at least some constant p < 1. Repeating the step makes the chance of error exponentially small.
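As an illustration (not from the comment itself), here is a minimal sketch of that pattern using the Miller–Rabin test: each round catches a composite with probability at least 3/4, so after k independent rounds the error probability is at most 4^-k. The function names and the round count are my own choices.

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin test: each round detects a composite with
    probability >= 3/4, so the error is at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

def random_prime(bits):
    """Draw random odd numbers of the given size until one passes."""
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(n):
            return n
```

Note that the per-round errors are independent because each round draws a fresh random base, which is exactly the independence assumption at issue in the discussion below.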
As you say, deterministic, or at least highly correlated, errors would dominate the chance that the proof was wrong. But that's true of human proofs too.
I'm not sure that's right. The peer review process works well precisely because human mathematicians are likely to make (at least somewhat) different mistakes from each other.