Hacker News

I'm not sure that's entirely true. My analogy here would be how we observe classical mechanics, or even just everyday objects. We can predict the path a ball will take through the air in painstaking and incredibly accurate detail (more detail than we have tools to properly measure), but that doesn't require simulating all of the component pieces that make up the ball. We compress the problem by calculating what the whole collection of atoms will do on average and substituting that in when we have millions of them and the error is negligible. That's essentially the basis of statistical mechanics. Why could we not make a simpler simulation of a life form (than its 'direct' real implementation) with similar processes of compressing information about component pieces and making approximations where the magnitude of the error is smaller than the accuracy of the measurement?
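To make that concrete, here's a toy Python sketch (not a real physics engine, and the particle counts and spreads are made-up numbers): in uniform gravity the dynamics are linear, so integrating ten thousand particles individually and integrating a single body at their average position give the same answer, up to the spread of the initial conditions. That's the kind of "compression" I mean.

```python
import random

G = 9.81    # gravitational acceleration (m/s^2)
DT = 0.01   # time step (s)
STEPS = 100

random.seed(0)

# "Full" simulation: integrate every particle of the ball separately.
particles = [{"y": 10.0 + random.uniform(-0.01, 0.01), "vy": 5.0}
             for _ in range(10_000)]
for _ in range(STEPS):
    for p in particles:
        p["vy"] -= G * DT
        p["y"] += p["vy"] * DT
mean_y = sum(p["y"] for p in particles) / len(particles)

# "Compressed" simulation: one body standing in for the whole cloud.
y, vy = 10.0, 5.0
for _ in range(STEPS):
    vy -= G * DT
    y += vy * DT

# The aggregate answer matches the per-particle average to within the
# ~0.01 m spread we gave the initial positions.
print(abs(mean_y - y))
```

The compressed version does 100 updates instead of a million and the discrepancy is bounded by the initial spread, which is exactly the "error smaller than the measurement" condition.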

It seems to me that there is a similar analogy to be made in the methods we use to compress information. Is there a functional difference between a lossless compression algorithm and one that might nudge a single RGB value in an image of millions of pixels, if our only tool for detecting the corruption were our eyes? What if we then used that lossy algorithm only in situations where we knew the tools used to examine the results couldn't identify the losses, and fell back to lossless compression only when such tools were available?
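A toy sketch of that idea in Python (the threshold is a stand-in assumption for human perception, not a real vision model, and the pixel values are arbitrary):

```python
# A toy "image": one million pixels, each an (r, g, b) tuple.
width, height = 1000, 1000
original = [(120, 80, 200)] * (width * height)

# "Lossy" copy: one channel of one pixel nudged by a single step.
corrupted = list(original)
r, g, b = corrupted[123_456]
corrupted[123_456] = (r + 1, g, b)

def eyes_can_tell(a, b, threshold=2):
    """Crude stand-in for perception: only per-channel differences of
    `threshold` or more count as noticeable."""
    return any(abs(x - y) >= threshold
               for pa, pb in zip(a, b)
               for x, y in zip(pa, pb))

def bitwise_identical(a, b):
    """The 'lossless' standard: exact equality."""
    return a == b

print(bitwise_identical(original, corrupted))  # False: the loss is real
print(eyes_can_tell(original, corrupted))      # False: but undetectable
```

Whether the loss "matters" is entirely a property of the measuring instrument: drop the threshold to 1 (a sharper instrument) and the same corruption becomes detectable.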

All this is to say that I believe you could feasibly create a perfect simulation of something more complex than the thing performing the simulation, given the right optimizations, but it would require certain stipulations, like knowing when each optimization can be applied without contaminating the fidelity of the simulation. A simple example: simulating human history before the microscope was invented would let us constrain all approximations to have less error than the most precise observer's senses could detect.




At that point you are not simulating physics, you are simulating minds. From a practical standpoint it's viable, but it introduces the possibility of noticing the simulation. To counter that you could add overhead to detect when something would notice the simulation, but that's not going to be cheap computationally.

Further, comparing simulated worlds with keeping real people in pods to see those simulated worlds seems to favor the people in pods. Especially if you alter the biology of those pod people to have real physical brains operating on some hardware and little else. Philosophically you can argue about simulations vs. "FPGA" boards running minds, but direct minds on "FPGA" boards still introduce direct impacts from the real world vs. a pure simulation.


I do not believe it is accurate to say that what I am positing would simulate only a mind and not the world. I am simply saying that detail below a certain level is insignificant to the simulation of the world as a whole. The entire purpose of minimizing error isn't specifically to avoid detection; it's so that the error cannot propagate between interactions and end up simulating something that deviates significantly from the thing you're trying to simulate.

If I were to use the kinematic equations to simulate throwing a ball through the air, but my simulation only used 1 significant digit, rounding errors would quickly add up and produce a path for the ball that significantly deviates from the path we would get by making the same calculation while treating the ball as made of quarks/atoms/molecules and painstakingly computing the forces on every single atom until the collection of them, being the ball, reached the end of the throw. It is that deviation in the result that we need to avoid for our simulation to retain enough fidelity to be said to be simulating throwing a ball; otherwise we're just simulating some other interaction that doesn't really match what we would observe.
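That compounding is easy to reproduce. A toy sketch in Python (`round_sig` is a helper invented for the illustration, and the throw parameters are arbitrary): rounding the whole state to 1 significant digit after every step doesn't just blur the trajectory — once an increment falls below the rounding granularity it is rounded away entirely, so gravity stops acting and the ball freezes in place, while the full-precision ball travels on.

```python
import math

G, DT, VX, VY0 = 9.81, 0.01, 10.0, 10.0

def round_sig(value, digits):
    """Round to a fixed number of significant digits."""
    if value == 0:
        return 0.0
    return round(value, -int(math.floor(math.log10(abs(value)))) + digits - 1)

def simulate(steps, sig=None):
    """Euler-integrate a projectile; if `sig` is given, round the whole
    state to that many significant digits after every step."""
    x, y, vy = 0.0, 0.0, VY0
    for _ in range(steps):
        vy -= G * DT
        x += VX * DT
        y += vy * DT
        if sig is not None:
            x, y, vy = (round_sig(v, sig) for v in (x, y, vy))
    return x, y

px, py = simulate(200)          # full precision: about 20 m downrange
cx, cy = simulate(200, sig=1)   # 1 significant digit: frozen at x = 1.0 m
print(px, cx)
```

After two simulated seconds the precise ball is roughly 20 m downrange; the 1-digit ball is stuck at 1.0 m because every subsequent increment of 0.1 rounds back to 1.0 — not a slightly wrong throw, but a qualitatively different interaction.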

Your post also assumes that the simulator even cares whether we notice that we're in a simulation. I don't think this premise as a whole includes any assumption about the motivation for our simulation (if we were indeed in one). For all we know, it could be a simulation to determine how long it takes sentient life to develop to the point that it can observe inconsistencies in its environment and deduce that it is in a simulation.

We just don't know anything about the 'real world' in this scenario, and I think guesses in that direction are impossible to verify. It can still be fun to think about, but it can't really be based on any experimental evidence unless that evidence were deliberately placed there by a hypothetical simulator.




