Not when it's referentially transparent. In that case, there's no need to run the tests if a function or its children haven't changed. Of course, if that's the case, the tests are probably fast enough that running them won't matter much.
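For what it's worth, here's a minimal sketch of that idea in Python, assuming a pure function whose source (and whose callees' source) can be hashed to decide whether a cached test result is still valid. The names `fingerprint`, `run_or_skip`, and `CACHE` are made up for illustration, not taken from any particular test framework:

    # If a pure (referentially transparent) function and everything it calls
    # are unchanged, a previously recorded test result is still valid.
    import hashlib
    import inspect

    CACHE = {}  # fingerprint -> test outcome (persisted to disk in practice)

    def fingerprint(func, *dependencies):
        """Hash the source of a function plus the functions it depends on."""
        h = hashlib.sha256()
        for f in (func, *dependencies):
            h.update(inspect.getsource(f).encode())
        return h.hexdigest()

    def run_or_skip(test, func, *dependencies):
        key = fingerprint(func, *dependencies)
        if key in CACHE:
            return CACHE[key]   # source unchanged: reuse the cached result
        result = test()         # source changed (or first run): run the test
        CACHE[key] = result
        return result

    # Example: a referentially transparent function and a test for it.
    def add(a, b):
        return a + b

    def test_add():
        return add(2, 3) == 5

    print(run_or_skip(test_add, add))  # runs the test
    print(run_or_skip(test_add, add))  # skipped: same fingerprint, cached result

The key assumption is purity: the test's outcome depends only on the source of the function and its dependencies, so an unchanged fingerprint means an unchanged result.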
That's not the only way in which real hardware falls short of the theoretical. I've been burned by this at least half a dozen times in my career.
Not to mention things like caching and consistency bugs, either in hardware (e.g. CPU errata) or in the software stack implementing the abstraction.
Or stuff like reordering a piece of code that should be a no-op, but that causes two expensive operations to run simultaneously on the same die at runtime, resulting in a CPU undervoltage event that triggers a reboot. I ran into this exact problem about 6 months ago. We could argue about whether the CPU, motherboard, or power supply vendor was out of spec, but in the real world with deployed hardware that doesn't matter. You revert and run the code that doesn't trigger reboots under high workloads.
Having a test suite catch this before you deploy to production is a good thing.
Oh yes, because we can always be sure that a change in one place will never cause a test in another place to fail.