
Simulations are way easier to write than almost anything else (I'm a nuclear physicist who did this professionally for many years.) The reason why: physics. Or more specifically: conservation laws.

Conservation laws are the book-keeping of physics, and they act as global checks on correctness. If your code conserves energy, mass, particle number, charge, momentum, etc., it is probably correct in the ways that matter, because it's incredibly hard for a bug to get the physics wrong in materially interesting ways while still obeying the conservation laws.
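To make that concrete, here is a minimal sketch (mine, a toy, not anyone's production code) of the kind of global check I mean: a 1-D harmonic chain integrated with velocity Verlet, with total energy compared before and after. The system, constants, and names are all illustrative assumptions; the point is that almost any bug in the force or update loops shows up immediately as energy drift.

    #include <math.h>
    #include <stdio.h>

    #define N     32      /* particles in a 1-D harmonic chain */
    #define DT    1e-3    /* timestep */
    #define STEPS 100000

    static double x[N], v[N], f[N];

    /* forces for unit masses and unit spring constants */
    static void forces(void) {
        for (int i = 0; i < N; i++) f[i] = 0.0;
        for (int i = 0; i < N - 1; i++) {
            double s = x[i + 1] - x[i];     /* spring extension */
            f[i]     += s;
            f[i + 1] -= s;
        }
    }

    /* total energy: kinetic plus spring potential */
    static double energy(void) {
        double e = 0.0;
        for (int i = 0; i < N; i++) e += 0.5 * v[i] * v[i];
        for (int i = 0; i < N - 1; i++) {
            double s = x[i + 1] - x[i];
            e += 0.5 * s * s;
        }
        return e;
    }

    int main(void) {
        x[N / 2] = 1.0;                     /* pluck one site */
        forces();
        double e0 = energy();

        for (long t = 0; t < STEPS; t++) {  /* velocity Verlet */
            for (int i = 0; i < N; i++) v[i] += 0.5 * DT * f[i];
            for (int i = 0; i < N; i++) x[i] += DT * v[i];
            forces();
            for (int i = 0; i < N; i++) v[i] += 0.5 * DT * f[i];
        }

        /* the book-keeping check: this stays tiny when the dynamics are
           right, and blows up for almost any coding mistake */
        printf("relative energy drift: %.3e\n", fabs(energy() - e0) / e0);
        return 0;
    }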

I once wrote a rarefied gas dynamics simulator (modelling a few thousand interacting particles; I was curious about some non-equilibrium thermodynamics questions) that conserved energy to +/-1 bit in double precision. It also exercised a weird feature of some x86 processors: 80-bit internal registers backing 64-bit double-precision computations, which allowed the garbage in the extra bits to contaminate the LSB. The result was very slightly different behaviour in debug and release mode, which gave me fits for weeks. It is just incredibly hard to get that level of conservation and still get the physics wrong.

And then there are semi-analytic solutions: to check that gas dynamics code, I wrote a super-simple analog of the system in Perl based on approximate equations of motion, and got the same results. Because the underlying physics admits of different computational representations that are guaranteed to agree if both are correct, and because independent implementations are extremely unlikely to suffer from the same bugs, you can have a very high degree of confidence in their correctness.
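For anyone who hasn't done this, the cross-check pattern is dead simple. Here's a sketch in C rather than Perl (again mine, not the gas code): integrate a system one way, evaluate an independent closed-form representation the other way, and diff them. The frequency, timestep, and duration are made up.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double w  = 2.0;     /* angular frequency (illustrative) */
        const double dt = 1e-4;
        const long steps = 100000;
        double x = 1.0, v = 0.0;   /* x(0) = 1, v(0) = 0 */

        for (long i = 0; i < steps; i++) {
            /* velocity Verlet for x'' = -w^2 x */
            v -= 0.5 * dt * w * w * x;
            x += dt * v;
            v -= 0.5 * dt * w * w * x;
        }

        /* independent representation: the analytic solution x(t) = cos(wt) */
        double exact = cos(w * steps * dt);
        printf("numeric %.12f  analytic %.12f  diff %.2e\n",
               x, exact, x - exact);
        return 0;
    }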

So no: the results of simulation are not wrong 97% of the time. I've worked on one experiment where the modelling was wrong, and it took three different computational approaches to sort out, but that's a rarity.

My feeling, based on experience, is that these guys are onto something very interesting.



You are simply skeptical for different reasons than mine.

As a software professional, I have been explicitly instructed to alter a program solely for the purpose of making the output more palatable for customers and investors, at the expense of real-world accuracy. And I did it, because as much as I dislike dishonesty, I also hate searching for new jobs and thoroughly enjoy sleeping under a roof and not starving.

Did the simulation that you wrote have a check for conservation of professional ethics?

You have a great advantage in that subatomic particles are unable to lie to you. Software developers have a capacity for deception exceeding even that of accountants, and we are sometimes asked to use it in unethical ways. One might think that there are reasonable limits, but we still have electronic voting machines that are mysteriously unauditable, and software trading agents programmed to automatically front-run institutional investors.

While the 97% figure was simply made up to mirror the ancestor post, it is possible that all those programs that are not accidentally wrong are intentionally wrong. In your case, you can rule out ill intent for the software you wrote yourself, but as there are potentially hundreds of millions of dollars in funding at stake for this fusion "discovery", I would not discount it for any simulation that suggests this device will work.


...and which exercised the weird feature of some x86 processors that had something like 80-bit internal registers but did 64 bit double precision computations, and which allowed the garbage in the extra bits to contaminate the LSB, resulting in very slightly different behaviour in debug and release mode, which gave me fits for weeks...

FWIW, this is a compiler error. Most x86 processors have 80-bit extended floats, which allow 64-bit "pow" to be computed in software with low error (among other benefits). Some compilers "helpfully" compile for 80-bit mode with some combinations of compile options. They should never do this unless it's specifically asked for using a single command-line option, partly for the reason you discovered: it totally messes with replicability. It also messes with implementations of "log1p", Kahan summation, and other high-precision algorithms like double-double arithmetic.
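To make the Kahan case concrete, here's a minimal sketch (the data and constants are made up; the flags mentioned are the usual GCC ones). The compensation step is algebraically zero, so it only works if every intermediate is genuinely rounded to 64 bits; 80-bit register evaluation, or an optimizer allowed to reassociate (e.g. -ffast-math), can quietly defeat it.

    #include <stdio.h>

    #define N 1000000
    static double a[N];

    /* Kahan compensated summation */
    static double kahan_sum(const double *p, int n) {
        double s = 0.0, c = 0.0;   /* running sum, running compensation */
        for (int i = 0; i < n; i++) {
            double y = p[i] - c;   /* apply last step's correction */
            double t = s + y;      /* low-order bits of y are lost here */
            c = (t - s) - y;       /* recovers exactly what was lost -- but
                                      only if t, s, y are true 64-bit doubles */
            s = t;
        }
        return s;
    }

    int main(void) {
        for (int i = 0; i < N; i++) a[i] = 0.1;
        /* naive summation drifts by roughly 1e-9 here; Kahan stays within a
           couple of ulps, provided evaluation is pinned to 64 bits on x87
           targets (e.g. gcc -mfpmath=sse, or -ffloat-store as a blunt
           fallback) */
        printf("%.17g\n", kahan_sum(a, N));
        return 0;
    }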

If you ever come across this again, submit a bug report.


The question is, what did they simulate and how? A full-on dynamical plasma simulation of the entire machine is intractable, and every approximation technique has its blind spots.



