Simulation involves a lot of overhead. Computers and life can be built from the same stuff; therefore the simplest simulation of a lifeform must be more complex than the direct equivalent of that lifeform. QED.
That's not to say the simplest possible equivalent organism exists in that universe, but anything that could make a simulation should also be able to make that life form.
True for some values of "a lot", false for others.
What's undeniably true is that it incurs some overhead over NOT running a simulation.
But that doesn't prove that a simulated life form incurs overhead larger than its real life counterpart.
For one, there might not be any real life counterpart.
We say "simulation" here, but what we actually mean is "virtual world", which might simulate an actual world, or it might be its totally own thing (the same way I can chose to write a simulation of actual things, like e.g. "the Sims", or a simulation of a domain I only imagined). If, for example, as per TFA, our universe is a simulation, is doesn't mean that it actually simulated something else. Just that it's a simulation in itself.
So, "simulated" in this discussion means "not an organically created world made of some physical substrate, but consciously created/programmed by some advanced civilization".
So, the thing simulated could be totally unlike (in properties, physical laws, etc) what exists in the universe of those doing the simulation.
Second, a simulation (as we know it and practice it ourselves) usually has much less overhead than the real-life thing it simulates (when it does simulate some real-life thing). That's, like, its whole point. E.g. a weather model running on some supercomputer has some overhead, but nothing like that of the actual weather. Similarly, the Sims has some overhead, but nothing like what the equivalent real-life place and humans would have.
Where you seem to be confused is that you assume that: (a) a simulation must be of something that exists, and (b) a simulation must be perfect, i.e. 1:1 with the thing it simulates. Only then would your argument make sense.
But neither of those things is necessary; even our Earth and universe, if they are simulations, could be very crude models, running with very low resources, in a vastly more complex and powerful real universe.
(a) is not an objection because 'life form' is flexible. Any computing substrate could directly compute, say, a neural network vs. a simulation of a neural network. The simulation will always be slower, but the 'real' version running on an "FPGA" (or its higher-dimensional equivalent) imposes a direct mapping between what happens in the 'real' world and what is being computed. For example: if we use an FPGA that's hit by a cosmic ray, that bit is flipped; on the other hand, if you have an array of FPGAs and compare them, that decouples the 1:1 mapping, creating a simulation, but adds overhead.
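To make that decoupling-vs-overhead point concrete, here's a minimal Python sketch (the computation, fault rate, and bit width are all made up for illustration): a single "direct" unit exposes any bit flip from its physical substrate in its output, while running several independent copies and voting masks a single flip, at the cost of doing the work multiple times plus the comparison.

    import random

    def compute(x):
        # Some deterministic computation standing in for a neural-network step.
        return (x * 31 + 7) % 1000

    def direct(x):
        # "FPGA-style" direct execution: one physical unit, so a cosmic-ray
        # bit flip shows up directly in the result (the 1:1 mapping with the
        # substrate described above).
        result = compute(x)
        if random.random() < 0.01:               # injected hardware fault
            result ^= 1 << random.randrange(10)  # flip one random bit
        return result

    def redundant(x, copies=3):
        # Run several independent copies and take the majority answer.
        # A single flipped bit is masked, decoupling the result from any one
        # physical unit, but we pay `copies` times the work plus a vote.
        results = [direct(x) for _ in range(copies)]
        return max(set(results), key=results.count)

    print(direct(42), redundant(42))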
(b) a simulation must be perfect.
No, if you can get away with a less accurate simulation, you can also get intelligence from less computational power by using the same approach in the 'real' world.
Adding on to this with an analogy from simulating dynamics in cellular automata: most of the interesting models of physics we see exhibit high redundancy in both spatial and temporal locality.
A simple rule like Conway's Game of Life (not very physically realistic, but instructive because of how intimately it's been analyzed while still exhibiting relatively high complexity) shows remarkable compressibility using techniques such as the memoization in HashLife[0].
Even more striking is the potential for superspeed caching, where different nodes are evolved at different rates, often allowing _exponential_ speedups: patterns can be advanced for more generations than the timeframe of the universe we speculate about today for real physics.
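To illustrate just the caching idea (nothing like real HashLife, which memoizes quadtree nodes and doubles the step size at each level of recursion; this toy only caches whole-grid steps with Python's lru_cache), here is a sketch where a periodic pattern like a blinker becomes pure cache hits after its first cycle:

    from functools import lru_cache

    def neighbors(cell):
        x, y = cell
        return [(x + dx, y + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)]

    @lru_cache(maxsize=None)
    def step(live):
        # One Game of Life generation; `live` is a frozenset of (x, y) cells.
        # Because the argument is hashable, identical states are computed once.
        counts = {}
        for cell in live:
            for n in neighbors(cell):
                counts[n] = counts.get(n, 0) + 1
        return frozenset(c for c, k in counts.items()
                         if k == 3 or (k == 2 and c in live))

    # A blinker has period 2, so after two real computations every further
    # generation is a cache hit.
    state = frozenset({(0, 0), (0, 1), (0, 2)})
    for _ in range(10):
        state = step(state)
    print(step.cache_info())  # misses stay at 2 while hits keep growing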
No free lunch. HashLife takes more memory and only works well in low-entropy environments.
But consider: if you want to run a simulation 100 times using the same data, you can speed that up by just copying the output of the first run 100 times. That's not simulating the same mind 100 times; it's simulating the mind only once. HashLife and similar approaches don't increase your ability to compute unique mind states.
I'm not sure if that's entirely true. My analogy here would be how we observe classical mechanics or even just regular objects. We can predict the path a ball will take through the air in painstaking and incredibly accurate detail (more detail than we have tools to properly measure), but that doesn't require that we simulate all of the component pieces that make up the ball. We compress that by just calculating what the whole lot of atoms will do on average and substitute that in when we have millions of them and the error is negligible. That's essentially the basis of statistical mechanics.
Why could we not make a simpler simulation of a life form (than its 'direct' real implementation) with similar processes of compressing information about component pieces and making approximations where the magnitude of the error is smaller than the accuracy of the measurement?
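As a toy illustration of that kind of aggregation (all numbers are arbitrary): integrating every "atom" of a thrown ball separately and integrating only its centre of mass give essentially the same observable, while the aggregate version does a tiny fraction of the work, because every particle feels the same gravity.

    import random

    G, DT, STEPS = 9.81, 0.01, 100   # illustrative constants
    N_ATOMS = 1000                   # stand-in for the ball's particles

    def per_atom(v0):
        # The expensive "direct" version: integrate every particle separately.
        ys = [random.gauss(0.0, 0.001) for _ in range(N_ATOMS)]  # tiny offsets
        vys = [v0] * N_ATOMS
        for _ in range(STEPS):
            for i in range(N_ATOMS):
                vys[i] -= G * DT
                ys[i] += vys[i] * DT
        return sum(ys) / N_ATOMS     # observable: centre-of-mass height

    def aggregate(v0):
        # The compressed simulation: integrate only the centre of mass.
        y, vy = 0.0, v0
        for _ in range(STEPS):
            vy -= G * DT
            y += vy * DT
        return y

    # The two answers differ only by the negligible average of the initial
    # offsets, yet the aggregate version does N_ATOMS times less work.
    print(per_atom(5.0), aggregate(5.0))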
It seems to me that there is also a similar analogy to be made in the methods we can use to compress information. Is there a functional difference between a compression algorithm that is lossless and one that might nudge a single RGB value slightly in an entire image with millions of pixels, if our only tool for detecting the corruption were our eyes? What if we then used that lossy algorithm only in situations where we know the tools used to examine the results wouldn't be able to identify the losses, and used lossless compression only when such tools were available?
All this is to say that I believe you could feasibly create a perfect simulation of something more complex than the thing performing the simulation, with proper optimizations, but it would require certain stipulations, like knowing when to use various optimizations to avoid compromising the fidelity of the simulation. A simple example: simulating human history before the microscope was invented would allow us to constrain all approximations to have less error than would be visible to the most precise observer's means of measurement.
At that point you are not simulating physics, you are simulating minds. From a practical standpoint it's viable, but it introduces the possibility of noticing the simulation. To counter that you could add overhead to detect when something would notice the simulation, but that's not going to be cheap computationally.
Further, simulating worlds vs. keeping real people in pods to see those simulated worlds seems to favor people in pods. Especially if you alter the biology of those pod people to have real physical brains operating on some hardware and little else. Philosophically you can argue about simulations vs. "FPGA" boards running minds, but direct minds on "FPGA" boards still introduce direct impacts from the real world vs. pure simulation.
I do not believe it is accurate to say that what I am positing would only simulate a mind and not the world. It is simply saying that detail below a certain level is insignificant to the simulation of the world as a whole. The entire purpose of minimizing error isn't specifically to avoid detection, it's so that the error cannot be propagated between interactions and end up simulating something that is significantly deviant from the thing you're trying to simulate.
If I were to use the kinematic equations to simulate throwing a ball through the air, but my simulation only used 1 significant digit, rounding errors would quickly add up to produce a path for the ball that significantly deviates from the path we would get by treating the ball as made of quarks/atoms/molecules and painstakingly analyzing the forces on every single atom until the collection of them (i.e. the ball) reached the end of the throw. It is that deviation from the result that we need to avoid for our simulation to retain enough fidelity to be said to be simulating throwing a ball; otherwise we're just simulating some other interaction that doesn't really match what we would observe.
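As a rough sketch of that effect (Euler integration with made-up numbers, not a claim about any real simulator): rounding every intermediate value to one significant digit makes the computed trajectory drift visibly away from the full-precision one.

    from math import floor, log10

    def round_sig(x, sig):
        # Round x to `sig` significant digits (a crude low-fidelity simulator).
        if x == 0:
            return 0.0
        return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

    def throw(dt=0.01, steps=150, sig=None):
        # Euler-integrate a ball thrown at about 10 m/s and 45 degrees,
        # optionally rounding every intermediate value to `sig` digits.
        g, x, y, vx, vy = 9.81, 0.0, 0.0, 7.07, 7.07
        for _ in range(steps):
            vy -= g * dt
            x += vx * dt
            y += vy * dt
            if sig is not None:
                x, y, vy = (round_sig(v, sig) for v in (x, y, vy))
        return x, y

    print("full precision:     ", throw())
    print("1 significant digit:", throw(sig=1))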
Your post also makes the assumption that the simulator even cares if we notice that we're in a simulation. I don't think that this premise as a whole includes any assumption as to the motivation for our simulation (if we indeed were to be in one). For all we know, it could be a simulation to determine how long it takes to develop sentient life to a point that it can observe inconsistencies in its environment and deduce that it is in a simulation.
We just don't know anything about the 'real world' in this instance, and I think guesses there venture into the territory of being impossible to verify. It can still be fun to think about, but it can't really be based on any experimental evidence unless it were deliberately placed there by a hypothetical simulator.
That doesn't follow logically.
It's perfectly fine to be able to simulate life forms MORE efficient than the ones in your universe.