
Floating point numbers are a common source of issues in large open world games. Because of how a float is stored, there is more precision for values closer to the "origin" of the world (0, 0, 0), which means there is less precision for things that are very far away. This can result in issues like "z-fighting"[0], which I'm sure most gamers have seen, as well as physics problems.

One solution to address some of the precision issues (not necessarily z-fighting, as that operates in a different coordinate system) is to dynamically readjust the player's world origin as they move through the world. This way they always get the highest float precision for the things near them.

0. https://en.wikipedia.org/wiki/Z-fighting
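
Roughly, the rebasing looks something like this (a minimal sketch; the struct names and the threshold are made up, not from any particular engine):

    // Hypothetical floating-origin rebasing: when the player drifts too far from
    // the current local origin, shift every entity (and track the accumulated
    // offset) so the player is near (0, 0, 0) again and float precision stays high.
    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };
    struct Entity { Vec3 position; };

    struct World {
        std::vector<Entity> entities;
        Vec3 worldOffset{0, 0, 0};                       // total shift applied so far
        static constexpr double kRebaseRadius = 4096.0;  // threshold, tunable

        void maybeRebase(Vec3& playerPos) {
            double dist = std::sqrt(playerPos.x * playerPos.x +
                                    playerPos.y * playerPos.y +
                                    playerPos.z * playerPos.z);
            if (dist < kRebaseRadius) return;

            Vec3 shift = playerPos;                      // move the player back to the origin
            for (Entity& e : entities) {
                e.position.x -= shift.x;
                e.position.y -= shift.y;
                e.position.z -= shift.z;
            }
            worldOffset.x += shift.x;
            worldOffset.y += shift.y;
            worldOffset.z += shift.z;
            playerPos = {0, 0, 0};
        }
    };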




Is there a reason that game developers don't use fixed-point math for Cartesian coordinates?

A 64-bit integer and a 64-bit float both chop up your coordinate system into the same number of points, but with the integer, those points are equally spaced, which is the behaviour you'd expect from a Cartesian coordinate system (based on the symmetry group of translational invariance).

And even a 32-bit integer per axis gives fine enough resolution for a roughly four-kilometre world at one-micrometre granularity. With 64 bits per axis you can represent the entire solar system at about micrometre resolution, while maintaining equal resolution at any location, and exact distance calculations between any points no matter how close or how far.
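
A rough sketch of what that could look like, assuming one integer unit is one micrometre (the names and unit choice are just for illustration; __int128 is a GCC/Clang extension):

    // Illustrative fixed-point world position: one integer unit = one micrometre.
    // int64_t gives uniform spacing everywhere -- the gap between representable
    // points never grows with distance from the origin, unlike floats.
    #include <cstdint>

    struct FixedVec3 {
        int64_t x, y, z;    // micrometres
    };

    // Exact squared distance in square micrometres. __int128 (a GCC/Clang
    // extension) keeps the subtraction and the products from overflowing.
    inline __int128 squaredDistance(const FixedVec3& a, const FixedVec3& b) {
        __int128 dx = static_cast<__int128>(a.x) - b.x;
        __int128 dy = static_cast<__int128>(a.y) - b.y;
        __int128 dz = static_cast<__int128>(a.z) - b.z;
        return dx * dx + dy * dy + dz * dz;
    }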


Having asked this myself once, and tried to write it: it is hilariously slow to render. Graphics cards are float crunchers. Changing one's frame of reference is not trivial but isn't impossible, and it is much faster.


The rendering can be done relative to the camera position though, can't it?

So for the graphics you just subtract the camera coordinate from all world coordinates and cast the result to float; for the game physics and AI, you work directly in fixed point.
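
As a sketch (the types, micrometre unit, and scale factor are illustrative assumptions, not any engine's actual API):

    // Keep simulation coordinates in 64-bit integers, but hand the GPU small
    // camera-relative floats.
    #include <cstdint>

    struct FixedVec3 { int64_t x, y, z; };   // world position, micrometres
    struct FloatVec3 { float x, y, z; };     // camera-relative, metres

    FloatVec3 toCameraRelative(const FixedVec3& world, const FixedVec3& camera) {
        // Subtract in integers first: the difference is small and exact, so the
        // cast to float loses almost nothing for anything near the camera.
        constexpr double kMicrometresToMetres = 1e-6;
        return {
            static_cast<float>((world.x - camera.x) * kMicrometresToMetres),
            static_cast<float>((world.y - camera.y) * kMicrometresToMetres),
            static_cast<float>((world.z - camera.z) * kMicrometresToMetres),
        };
    }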


That is typically done with matrix transformations, which all end up in floating-point space anyway. Having to do integer-to-float transforms for everything to get you there is bad news.


You don't need to transform every vertex of the 3D model though. If you're rendering an astronaut on Mars, you just feed the graphics engine the relative position of the astronaut and the camera. The detailed rendering of the astronaut's eyebrows can all be done natively in floating point once you've calculated that offset.


I mean... maybe. I'm not up on it enough to say, though my intuitive answer is "it's not that simple." It's also just not how any existing stuff works. If you want to work with the ton of middleware, etc. that already exists, you work the way Unreal (or Unity, etc.) does.


Rendering is usually done with floating point on graphics cards, but I don't know if this is a requirement.


I've heard that the Star Citizen engine devs had to change all of their math operations in CryEngine from single to double precision to add support for seamless large worlds (the player-origin hack has its limits...). I don't even want to imagine what a nightmare that must have been.


I believe Unreal Engine is doing the same thing for its next major release to support large open worlds.


A former AAA game developer visualized this exact sort of glitch that a player sees due to mantissa imprecision.

https://www.youtube.com/watch?v=qYdcynW94vM


This manifested as the "Far Lands" in the Java edition of Minecraft: locations far from the origin run into the precision limits of floats, resulting in jittering and other rendering oddities.

https://www.youtube.com/watch?v=crAa9-5tPEI


Not just in games, but also in the real world. I've had issues implementing robotic mapping code caused by the naive use of GPS coordinates, and the solution was the same as what you described: dynamically adjusting the origin.
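
Roughly like this, for small areas (an equirectangular approximation with made-up names; real mapping code would use a proper local tangent-plane conversion):

    // Local-origin trick for GPS: instead of computing in raw lat/lon (degrees
    // are huge numbers, metres are tiny fractions of a degree), express positions
    // as metre offsets from a nearby, periodically re-chosen reference point.
    #include <cmath>

    struct LatLon  { double lat, lon; };   // degrees
    struct LocalXY { double x, y; };       // metres east/north of the local origin

    constexpr double kMetresPerDegLat = 111320.0;               // approximate
    constexpr double kDegToRad = 3.14159265358979323846 / 180.0;

    LocalXY toLocal(const LatLon& p, const LatLon& origin) {
        double metresPerDegLon = kMetresPerDegLat * std::cos(origin.lat * kDegToRad);
        return {(p.lon - origin.lon) * metresPerDegLon,
                (p.lat - origin.lat) * kMetresPerDegLat};
    }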



