The fixed_point C++ Class Template (github.com/johnmcfarlane)
45 points by ingve on Aug 14, 2015 | 21 comments



I was having trouble understanding why fixed-point arithmetic was particularly useful in game development. In fact, I was curious why fixed point representation was more useful than floating point at all...

Two links deep, I found [1]. The salient point being "...floating point representation gives you extra precision for numbers near zero, this extra precision comes at the expense of precision for numbers further out"

1. http://www.pathengine.com/Contents/Overview/FundamentalConce...


The most useful feature of fixed-point calculation is its deterministic nature. Determinism is necessary when multiple parties need to share the same state, but the state is too large to be exchanged between them.

The canonical example is RTS-style games, where there are hundreds of units but only the players' actions are sent to the other players. The results of those actions are computed independently by every player. Keeping all the players in sync requires determinism in the calculations, which floating point does not provide.


To add to this: when you're trying to minimize information flow among parties, you might have each player send information only to some other players, perhaps only the set that needs to know right now. Think of unit information sent from player A to player B that is in the 'fog of war' for player C, so you don't send it to her; later, when it becomes relevant, the accumulated state has to be sent to player C. I made this example up, but minimizing the amount of information sent over the network is quite common in latency-sensitive networked multiplayer games.

The short answer is that accumulated calculations deviate in floating point arithmetic, because floats don't behave like mathematical abstract numbers. One easy way to see this is that operations on floats are not associative, especially when you're working on numbers that differ greatly in magnitude. So you don't have the guarantee that (A * (B * C)) = ((A * B) * C). This can end up with state slowly deviating and it all falling apart.

The reason for this is fairly easy to see. Floats are represented as A * 2^B (A is the 'mantissa', B is the 'exponent'); in double precision, the mantissa has 52 bits and the exponent has 11, with 1 left over for the sign bit. So if you have a really small number, say X = 1.125125123 * 2^-22, you can store it with a lot of precision. If you have a really large number, say Y = 1.236235423 * 2^30, you can also store it with a lot of precision. But add the two together and you're basically going to throw out the smaller number entirely, because you don't have enough mantissa bits: (X + Y) just gives you Y.

Now take a third number, say Z = 1.5 * 2^-22, and try to compute Y + X + Z. Order matters: (X + Z) together might "bump" the sum up to the point where it is within a multiplicative factor of 2^52 of Y, so it has some impact on the total. But Y + X throws out X, and then (Y + X) + Z throws out Z. So (Y + X) + Z ≠ Y + (X + Z).

P.S. I didn't check the math but that's the basic argument. I could be off by one or two orders of magnitude, but the point stands that floats behave wackily if you care about correctness.
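
For a concrete check, here's a minimal standalone demonstration of the same effect (the constants are mine, chosen to sit just past the 53-bit boundary, rather than the X/Y/Z above):

    #include <cstdio>

    int main() {
        // 1e16 sits above 2^53, so its neighbouring doubles are 2 apart:
        // adding 1.0 on its own is rounded away, but 1.0 + 1.0 = 2.0 survives.
        double big = 1e16, tiny = 1.0;
        double left  = (big + tiny) + tiny;  // 10000000000000000
        double right = big + (tiny + tiny);  // 10000000000000002
        std::printf("%.1f\n%.1f\nequal: %d\n", left, right, left == right);
    }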


I'm confused at why you'd expect fixed point to avoid these problems. Consider

    // 16-bit values with 7 fractional bits, i.e. a resolution of 1/128
    auto pi      = fixed_point<int16_t, -7>(3.141592654);
    auto e       = fixed_point<int16_t, -7>(2.718281828);
    auto log2_10 = fixed_point<int16_t, -7>(3.321928095);

    std::cout << (pi * e) * log2_10 << std::endl;
    std::cout << pi * (e * log2_10) << std::endl;

These give 28.2422 and 28.2656 with the provided code, respectively.

For float16s, these give 28.375 and 28.359 respectively.

The correct answer is 28.368, so floats are much closer as well as being closer to each other.

---

In fact, your example doesn't justify the determinism requirement. If one is sending the accumulated values to C, one isn't worried about calculation error; you just need all the parties that do calculate it to do so in agreement.

There are several more targeted problems with floating point:

* C++ makes few guarantees about what floating point calculations do, and results can vary even between optimization levels. Languages like Python and JavaScript fix this by specifying well-defined behaviour.

* Often one only needs a certain level of detail, so the adaptive precision is just a waste. If you know the bounds of your map and the maximum zoom, a fixed precision gives more uniform calculations and often better precision.

* Fixed precision has obvious, rounding-free addition (see the sketch below).
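
As a concrete illustration of that last point (a hand-rolled sketch, not the library's API): when two values share the same exponent, fixed-point addition is just integer addition of the raw representations, with no rounding step anywhere:

    #include <cstdint>
    #include <cstdio>

    int main() {
        // Hand-rolled Q8.7: raw int16_t with 7 fractional bits (exponent -7).
        int16_t a = 402;                   // 402 / 128 = 3.140625
        int16_t b = 348;                   // 348 / 128 = 2.718750
        int16_t sum = a + b;               // plain integer add: exact, no rounding
        std::printf("%f\n", sum / 128.0);  // 5.859375 == 3.140625 + 2.718750
    }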


IEEE floating point with a specified rounding mode should be deterministic. Unfortunately the C and C++ standards allow calculations to have excess precision, removing this determinism. Other languages, notably JavaScript, do guarantee this determinism in the spec.


The idea that floating point isn't deterministic is patently false. If it weren't, that would mean every computer adds random noise to the calculation, which is nonsense. This myth seems to be propagated because fp is hard to work with and has a lot of quirks: its non-uniform division of the number line, the x87 extended precision mode, the various rounding modes... But none of that stuff is intractable.

(Many games have shipped with fp in networked code. see http://gafferongames.com/networking-for-game-programmers/flo... and look for previous discussion here on HN, reddit...)


I compare/contrast fixed point and floating point extensively in this article: http://blog.reverberate.org/2014/09/what-every-computer-prog...


Not every architecture has an FPU, and doing floating point in software is obviously pretty slow.


A two-byte fixed-point number is likely to be less expensive to deal with than a two-byte floating-point number. (Promotion and demotion cost is pretty high for small floating-point types (bit masking, shifting, OR-ing, etc.) compared to fixed-point promotion and demotion (a shift).)
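
A hedged sketch of why the fixed-point case is cheap (the formats and function names here are mine): widening just rescales the raw integer, with no sign/exponent/mantissa fields to unpack the way a float16-to-float32 conversion must:

    #include <cstdint>

    // Promote a Q8.7 value (int16_t, exponent -7) to Q17.14 (int32_t,
    // exponent -14): one multiply by 2^7, i.e. a shift.
    int32_t promote(int16_t raw) {
        return static_cast<int32_t>(raw) * 128;  // == raw << 7, safe for negatives
    }

    // Demotion truncates the extra fraction bits with an arithmetic shift.
    int16_t demote(int32_t raw) {
        return static_cast<int16_t>(raw >> 7);
    }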


I figured performance came into it also. It would be nice to see some performance numbers though.


Lots of good answers in the comments here.

Similar to the PathEngine concept: when compressing 3D meshes to 6 bytes per xyz vertex, I could use float16. But there's no point in having most of the precision focused at the center of the mesh; the verts are usually fairly evenly distributed, if not biased toward the outside edge of the bounding box.
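
A toy sketch of that kind of scheme (my own illustration, not the commenter's actual pipeline): quantize each coordinate to a uint16_t against the mesh's bounding box, so the 65536 steps are spread evenly across it:

    #include <cstdint>

    // Map x in [lo, hi] onto the full uint16_t range: three of these
    // per vertex gives 6 bytes per xyz.
    uint16_t quantize(float x, float lo, float hi) {
        float t = (x - lo) / (hi - lo);                     // normalize to [0, 1]
        return static_cast<uint16_t>(t * 65535.0f + 0.5f);  // round to nearest step
    }

    float dequantize(uint16_t q, float lo, float hi) {
        return lo + (q / 65535.0f) * (hi - lo);             // evenly spaced steps
    }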

Bit-level, cross-platform floating point determinism is harder than it should be. That and associativity are important in physics, financials, replays and distributed simulation.

The PlayStation 1 and Game Boy Advance didn't have FPU coprocessors, and software floating point was completely impractical. So you pretty much had to use fixed point, to the point that the PS1 did have a fixed-point vector math coprocessor! :D


I'm surprised they don't support C++11 user-defined literals[1]. That seems like a perfect fit for stuff like this.

[1] https://en.wikipedia.org/wiki/C%2B%2B11#User-defined_literal...


I don't know, they would have to encode two template arguments, one of which is a freely chosen integer, into a number of suffixes. I'd say, leave that to the user. The user can define literals for the type/exponent combinations they will use often.
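
For instance, a user could define something like this for one combination they use often (a sketch; the suffix name is mine, not the library's):

    // A literal for the Q8.7 format used in the example above, built on the
    // fixed_point constructor shown elsewhere in the thread.
    auto operator""_q7(long double v)
    {
        return fixed_point<int16_t, -7>(static_cast<double>(v));
    }

    auto pi = 3.141592654_q7;  // same as fixed_point<int16_t, -7>(3.141592654)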


Phew! I was thinking this was going to be a template metaprogramming example of a Y combinator...

Looks useful!


Yeah, I was prepared to see some Lovecraftesque template metaprogramming implementation of lambda calculus just to calculate the fixed point of a function.


The negative exponent bothers me. Every implementation I've used specifies either mantissa bits or exponent bits as a positive value. The extraneous '-' everywhere feels like a waste.


Given there are no exponent bits (all of the bits are effectively mantissa bits with no hidden bit), your plan doesn't seem to make sense.

The numbers are (integer_value * 2 ^ constant_exponent), such that all bits are dedicated to integer_value.
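
A worked example under that layout (the constants are mine):

    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    int main() {
        // All 16 bits are "mantissa"; there is no hidden bit.
        int16_t integer_value = 402;   // the only thing stored at runtime
        int constant_exponent = -7;    // baked into the type, not into the bits
        double value = integer_value * std::exp2(constant_exponent);
        std::printf("%f\n", value);    // 402 * 2^-7 = 3.140625
    }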


I don't understand your complaint... A waste of what? You'd rather have all values expressed in terms of just raw bits?


Fixed point arithmetic, as one of the number types, has very well-defined mathematical properties. The Computational Geometry Algorithms Library (CGAL) describes this. I think fixed point would be a special case of exact rational numbers. They could have a classification structure much like iterators do, except classified by whether they form a field, are double-constructible, and such.


I haven't read through SG14, but I'm wondering why the fixed point implementation doesn't provide bases other than binary. Fixed point math has uses well beyond game development, and honestly too many people just use floating point by default.

For example, a decimal base is useful for representing prices that are otherwise unrepresentable in binary (e.g. a penny).
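
A hand-rolled sketch of what a decimal base buys you (my own illustration; the library under discussion is binary): with a scale of 10^-2, a penny is exact, while the binary double 0.01 is not:

    #include <cstdio>

    int main() {
        // Prices as an integer count of cents: scale 10^-2, so $0.01 is exact.
        long long price_cents = 1999;        // $19.99, exactly
        long long total = 3 * price_cents;   // $59.97, still exact
        std::printf("$%lld.%02lld\n", total / 100, total % 100);
        std::printf("%.20f\n", 0.01);        // 0.01000000000000000021: a binary
                                             // double can't represent a penny
    }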


Not the "fix x = x" class, but a fixed point number class :-|





