
That's my question too.

I wrote a multiplayer strategy game back in the mid-90s using floating point and it was deterministic. No problems. Maybe chip optimizations have affected this?

We looked at fixed point, but with careful scheduling we'd get zero FPU (x87) stalls on the float operations, so going fixed wasn't a real win. Staying with floats also gave us more registers to work with, so we didn't need the main stack as much, which made the asm easier to read.
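For anyone unfamiliar, this is roughly the kind of fixed-point arithmetic that was the alternative: a minimal 16.16 sketch in C with made-up names, not the code we actually shipped.

    #include <stdint.h>

    /* 16.16 fixed point: high 16 bits are the integer part,
       low 16 bits are the fraction. */
    typedef int32_t fix16;

    #define FIX_ONE (1 << 16)

    static fix16 fix_from_int(int x)       { return (fix16)(x * FIX_ONE); }
    static fix16 fix_mul(fix16 a, fix16 b) { return (fix16)(((int64_t)a * b) >> 16); }
    static fix16 fix_div(fix16 a, fix16 b) { return (fix16)(((int64_t)a << 16) / b); }

Same integer ops on every CPU, so determinism is trivial; the cost is managing range and precision by hand.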

Edit: typos



Achieving reliable reproducibility for floating point calculations is... difficult, to say the least. There are minor differences between hardware (x87 vs SSE is the most famous one, but there are others). Changing the compiler, its version, or its options may produce subtly different results (the most obvious example is the -ffast-math flag). An even bigger problem is the implementation of non-primitive (e.g. trigonometric) functions: your program will usually use an implementation from a system or vendor library, and those libraries differ in their underlying algorithms.
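To make that last point concrete, one way to check is to print the exact bit pattern of a libm result and diff it across toolchains, C libraries and flags; the low bits often disagree. A small C sketch, not tied to any particular project:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <math.h>

    int main(void) {
        double x = 1e6;              /* large argument stresses range reduction */
        double y = sin(x);
        uint64_t bits;
        memcpy(&bits, &y, sizeof bits);
        /* Diff this hex value across glibc/musl/MSVC, or with/without -ffast-math. */
        printf("sin(%g) = %.17g (bits 0x%016llx)\n", x, y, (unsigned long long)bits);
        return 0;
    }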


Deterministic in what sense? A given build is generally deterministic if it contains only one code path, but compiler optimizations mean different builds of the same code might produce different results for the same inputs, and the same code in two different functions can certainly behave differently. And if you wanted to take advantage of SSE2 on systems that had it, that would generally mean having two code paths that gave different results.
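A contrived C sketch of how the two code paths can disagree on the same expression (whether you actually see the difference depends on compiler, flags and optimization level, so treat it as an illustration only):

    #include <stdio.h>

    int main(void) {
        volatile double big = 1e308;       /* volatile defeats constant folding */
        /* The intermediate product overflows a 64-bit double but fits in the
           x87's 80-bit registers. Built with -mfpmath=387 this can print
           1e+308; with -mfpmath=sse (the x86-64 default) it prints inf. */
        double r = big * 10.0 / 10.0;
        printf("%g\n", r);
        return 0;
    }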


Your compiler or runtime might reorder operations, different machines might have DAZ and FTZ set differently, and some CPUs might helpfully offer you a bunch of extra bits of precision while others do not.
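The FTZ/DAZ part is easy to demonstrate on x86-64, where scalar doubles go through SSE. A small sketch (the constant is just an arbitrary subnormal):

    #include <stdio.h>
    #include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */
    #include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE */

    int main(void) {
        volatile double tiny = 1e-310;           /* a subnormal double */
        printf("default: %g\n", tiny * 0.5);     /* 5e-311 */

        /* Some runtimes, drivers and DSP libraries flip these flags behind
           your back; subnormal inputs and results then become zero. */
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);

        printf("FTZ+DAZ: %g\n", tiny * 0.5);     /* 0 */
        return 0;
    }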





