Whenever I see stuff about automated algorithms it reminds me of a movie. Can't remember the name or plot exactly, but they create a being or upload someone's consciousness, and the "AI" is solving all the problems the world faces until it reaches the last one: humans. It then begins to synthesize some airborne virus, or nukes, that would scour us from the earth. We basically created tech that would eventually optimize us out of existence as a cancer on the earth.
Edit: found it, Transcendence (2014). Also it seems my brain injected some random thoughts into that plot; it's not about that at all, but the point stands!
Formal verification is very good at proving that compiler transformations preserve semantics. Programming language semantics are pretty well specified (at least for some programming languages...), so the chance of bugs in the specification is low.
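To make "preserving semantics" concrete, here is a toy sketch in Python: a constant-folding pass over a tiny expression language. All names here are invented for the illustration; a real verified compiler (think CompCert) proves eval(fold(e)) == eval(e) for every expression e, whereas this only spot-checks a few.

```python
# Toy expression language: ('const', n) or ('add', lhs, rhs).

def eval_expr(e):
    """Reference semantics: evaluate an expression tree to an int."""
    if e[0] == 'const':
        return e[1]
    if e[0] == 'add':
        return eval_expr(e[1]) + eval_expr(e[2])
    raise ValueError(f"unknown node: {e[0]!r}")

def fold(e):
    """Compiler pass: collapse constant subexpressions."""
    if e[0] == 'const':
        return e
    if e[0] == 'add':
        l, r = fold(e[1]), fold(e[2])
        if l[0] == 'const' and r[0] == 'const':
            return ('const', l[1] + r[1])
        return ('add', l, r)
    raise ValueError(f"unknown node: {e[0]!r}")

# Semantics preservation, sampled (a formal proof covers all inputs):
expr = ('add', ('const', 1), ('add', ('const', 2), ('const', 3)))
assert eval_expr(fold(expr)) == eval_expr(expr) == 6
assert fold(expr) == ('const', 6)
```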
It can catch all kinds of bugs, but you have to ask the correct questions. So it comes down to defining what a bug is and the assumptions you use for testing. And therein lies the problem: what constitutes a bug? A function that adds two numbers but never terminates might be considered bug-free if we forget to specify that not giving an answer before the end of the universe is faulty behaviour. We humans are terrible at writing this kind of specification, so formal verification as a method just pushes the correctness problem from the code to the specification.
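The adds-but-never-terminates case can be sketched in a few lines of Python (all names illustrative). The partial-correctness spec below only says: IF the function returns a result, that result equals a + b. It says nothing about termination, so a function that loops forever satisfies it vacuously.

```python
def meets_partial_spec(fn, cases):
    """Check the spec on sample inputs; a verifier would prove it for all inputs."""
    return all(fn(a, b) == a + b for a, b in cases)

def add(a, b):
    return a + b

def add_forever(a, b):
    while True:   # never returns, so the implication
        pass      # "if it returns, then result == a + b" holds trivially

# The honest implementation passes the spec:
assert meets_partial_spec(add, [(0, 0), (1, 2), (-3, 5)])
# add_forever can never be rejected by this spec; only a total-correctness
# spec, which additionally demands termination, would flag it as buggy.
```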
If you stay within algebraic structures with well-known properties (which are themselves verified, for example), formal verification is all you need: you can be certain of being bug-free.
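As a hint of what "well-known, already-verified properties" looks like in practice, here is a sketch in Lean 4: the usual laws of natural-number addition ship as proved lemmas in the standard library, so anything built on top of them inherits those guarantees unconditionally.

```lean
-- Monoid-style laws for Nat addition, each discharged by an
-- existing library lemma rather than proved from scratch.
example (a b c : Nat) : (a + b) + c = a + (b + c) := Nat.add_assoc a b c
example (a : Nat) : 0 + a = a := Nat.zero_add a
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```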