The calculations needn't be performed by a machine. If they are performed by a machine, you have a safe language. If they must be performed by humans, you have an unsafe language, which is nevertheless a lot more usable than C and friends.
A major difficulty here is that undefined behaviour is a property of a particular execution of a C program, rather than a static property of the program itself. Tools that detect UB dynamically are useful, but they cannot demonstrate that there is no input for which a program will go wrong.
OTOH, machines excel at exhaustive enumeration and enforcing separation of concerns: that's why ML-style algebraic data types, pattern matching and parametric polymorphism are such a boon for high-level programming.
So there's room for both human and machine work in program verification. What we should be interested in is finding the right division of labor between humans and machines.
There are several things, like signed integer overflow, that often cannot be determined at compile time.