Somehow, I don't think that is going to fly.
Additionally, the statement about type checking and program correctness is not really correct.
Let's try another thought experiment: compile a Linux kernel on this beast. Are we supposed to be happy with sometimes getting the right answer? I am not sure they have thought this through.
Does anyone remember the early days of MySQL, when it was really, really, really fast because it didn't have locks? Some wiser heads said, "But it is often giving the wrong answer!" The reply was, "Well, it is really, really fast!" And we know how that came out.
Perhaps the expected output of this sea of devices is poetry, which, in the minds of those on the project, might require less precision. But even some poetry requires a lot of precision.
Consider sorting data, because that's what computer scientists do…
If you can sort your data on a sea of 1000 processors in O(n log n) with minimal errors (defined as out-of-order elements in the result), then check the result in O(n) across those same 1000 processors and fix the handful of problems with a couple of O(n) passes, you will beat an error-free sort that can only make use of 10 processors because of memory and lock contention.
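To make the check-and-fix step concrete, here is a minimal Python sketch. It assumes the fast parallel sort has already run and handed back a nearly-sorted list; the function name `repair_nearly_sorted` and the repair strategy (set aside the out-of-order strays, then reinsert them) are my own illustration, not anything from the proposal. With k strays, the repair costs roughly O(n + k log n), which is cheap when k is small.

```python
import bisect

def repair_nearly_sorted(data):
    """Repair the output of a fast-but-sloppy sort.

    One O(n) pass splits the data into an already-sorted spine and a
    small set of out-of-order strays; each stray is then reinserted
    with a binary search. Correct for any input, but only fast when
    the number of strays k is small.
    """
    spine = []
    strays = []
    for x in data:
        if spine and x < spine[-1]:
            strays.append(x)      # breaks the order: set aside
        else:
            spine.append(x)       # spine stays sorted by construction
    for x in strays:              # O(log n) reinsertion per stray
        bisect.insort(spine, x)
    return spine
```

For example, `repair_nearly_sorted([1, 3, 2, 4, 6, 5, 7])` finds the strays 2 and 5 and returns the fully sorted list.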
For your bank (which should certainly start charging you a transaction fee on deposits), it might take the form of processing batches of transactions in a massively parallel way, with a relatively fast audit step at the end to tell whether something went wrong. In that case the repair strategy might be as simple as splitting the batch and redoing each half.
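Here is a sketch of that split-and-redo idea in Python. Everything here is invented for illustration: `flaky_apply` stands in for the fast parallel pass (it occasionally drops a transaction), and the audit is a cheap O(n) conservation check, that the total balance change equals the sum of the deltas. Note that a sum-based audit can be fooled by errors that cancel out, so a real system would want a stronger invariant.

```python
import random

def flaky_apply(balances, txns, fail_rate=0.1):
    """Stand-in for the fast parallel pass: applies deposits and
    withdrawals, but (hypothetically) drops some transactions."""
    out = dict(balances)
    for acct, delta in txns:
        if random.random() >= fail_rate:
            out[acct] = out.get(acct, 0) + delta
    return out

def process(balances, txns):
    """Run the flaky pass, audit it, and on failure split the batch
    in half and redo each half."""
    result = flaky_apply(balances, txns)
    # O(n) audit: money must be conserved across the batch
    if sum(result.values()) - sum(balances.values()) == sum(d for _, d in txns):
        return result
    if len(txns) == 1:
        # Smallest unit: apply it deterministically instead of retrying
        acct, delta = txns[0]
        out = dict(balances)
        out[acct] = out.get(acct, 0) + delta
        return out
    mid = len(txns) // 2
    halfway = process(balances, txns[:mid])
    return process(halfway, txns[mid:])
```

With all-positive deltas any dropped transaction breaks the conservation check, so the recursion bottoms out at single transactions and the final balances come out right.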
I am kind of with Knuth on this, who seems to say that this whole parallelization/multicore business may be the result of a failure of imagination on the part of the hardware designers.