but what he means by A on the RHS, I think, is that you must take into account the fact that the A on the LHS might be zero. You can't really write this identity with a constant on the right, so he wrote the next best thing.
Unary representation is useless when speaking about complexity in number theory.
If you take a number N given in unary, convert it to binary and do trial division up to the square root, it will take O(N log N) time for conversion to binary and O(sqrt(N) * log(N)^2) for trial division (depending on your computational model, it could be O(N) and O(sqrt(N)) - I am counting bit complexity). In total, it's O(N log N). The runtime is dominated by reading the input! The complexity of trial division and the brilliant AKS algorithm is the same from this viewpoint.
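A minimal sketch of that point (helper names are mine, not from any library): with unary input, merely reading the input already costs on the order of N steps, so trial division and any fancier primality test look the same asymptotically.

```python
def unary_to_int(s: str) -> int:
    """Convert a unary string like '1111' to the integer 4.
    Just reading the input takes time proportional to N."""
    return len(s)

def is_prime_trial_division(n: int) -> bool:
    """Trial division up to sqrt(n): O(sqrt(n)) divisions,
    each on O(log n)-bit numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# The O(N)-ish cost of consuming the unary input dominates the
# O(sqrt(N) * polylog(N)) cost of the trial division itself.
print(is_prime_trial_division(unary_to_int("1" * 97)))  # prints True
```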
Even if you had an algorithm that did not have to convert to binary and could tell in time linear in the unary representation whether a number is prime, it would be interesting trivia but nothing worthy of a Nobel prize. In practice numbers are given in binary (or some other base > 1 number system). To use your algorithm, you would have to convert to unary, and that conversion alone already takes longer than trial division on the binary input.
I asked about x^n + 1 = z^n, which is a simpler special case with y=1. The system no longer recognized it as false. So the theorem was programmed in as thoughtless pattern matching. I think one day computers might become genuinely "creative" tools for mathematicians (as opposed to "computational" tools), but Mathematica's philosophy seems to be a dead end in this regard.
When I write a program, I don't want it to be just correct (true); I want it to be _provably_ correct; I want to be able to be convinced that it is correct, at least in principle, given enough time and the whole specification of the system. Programs which are correct, but not provably so, should not pass code review and might as well be lumped together with those which are wrong. It doesn't matter whether you are using a full-blown theorem prover or thinking about the code in your head; Gödel's theorems are not really relevant to programming, even when the code uses deep mathematics.
I'm not sure I agree with "theorems that must be verified at compile time can never account for data that are provided only after compilation". At compile time, you prove the assertion "for every x, the program outputs the correct answer for x". Now, you don't know that the user will enter, say, x=5 at runtime, but since you proved a theorem about _every_ x, this includes x=5. You cannot predict the future (the path that will be taken by the program), but once you prepare for all possible futures, you're safe.
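A minimal sketch of that idea (a made-up toy function, written in Lean 4 syntax): the theorem quantifies over every input, so it automatically covers whatever the user happens to enter at runtime, x=5 included.

```lean
-- A tiny "program" and a compile-time proof that it is
-- correct for *every* input, including ones seen only at runtime.
def double (x : Nat) : Nat := x + x

-- Proved once, before the program ever runs.
theorem double_correct : ∀ x : Nat, double x = 2 * x := by
  intro x
  unfold double
  omega

-- Specializing the universal theorem to the runtime input x = 5:
example : double 5 = 2 * 5 := double_correct 5
```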
>The problem with theorem provers like Agda and Coq is they are brittle to refactoring and thus hard to make changes to an app. Something seemingly small can ripple through the whole system.
I hear Idris is more reasonable in this regard and has better general purpose programming properties.
You can say the same of any strongly typed language, but in reality it's the opposite. If your program has a good design, it's easy to refactor.