

The fixed_point C++ Class Template - ingve
https://github.com/johnmcfarlane/SG14/blob/fp_docs/Docs/fixed_point.md#the-fixed_point-class-template

======
sdab
I was having trouble understanding why fixed-point arithmetic was particularly
useful in game development. In fact, I was curious why fixed point
representation was more useful than floating point at all...

Two links deep, I found [1]. The salient point being "...floating point
representation gives you extra precision for numbers near zero, this extra
precision comes at the expense of precision for numbers further out"

1.
[http://www.pathengine.com/Contents/Overview/FundamentalConce...](http://www.pathengine.com/Contents/Overview/FundamentalConcepts/WhyIntegerCoordinates/page.php)

~~~
Jyaif
The most useful feature of fixed-point calculations is their deterministic
nature. Determinism is necessary when multiple parties need to be in the same
state, with the state being too large to be exchanged between the parties.

The canonical example is RTS-style games, where there are hundreds of units
but only the actions of the players are sent to every other player. The
results of the actions are computed independently by all the players. Keeping
all the players in sync requires determinism in the calculations, which
floating-point calculations do not provide.

~~~
arjunnarayan
To add to this, when you're trying to minimize information flow among parties,
you might have each player only send information to some other players,
perhaps only the set that needs to know _right now_. Think unit information
from Player A to Player B: it's in the 'fog of war' for Player C, so you
don't send it. Later, however, the accumulated state needs to be sent to
Player C when it becomes relevant to her. I made this example up, but it's
quite common to minimize the amount of information sent over the network in
latency-sensitive networked multiplayer games.

The short answer is that accumulated calculations deviate in floating-point
arithmetic, because floats don't behave like abstract mathematical numbers.
One easy way to see this is that _operations on floats are not associative_,
especially when you're working with numbers that differ greatly in magnitude.
So you don't have the guarantee that (A * (B * C)) = ((A * B) * C). This can
end up with state slowly deviating and it all falling apart.

The reason for this is fairly easy to see: floats are represented as A * 2^B
(A is the 'mantissa', B is the 'exponent'). In double-precision floating
points, the mantissa is 52 bits, and the exponent has 11 bits (1 is left over
for the sign bit). So if you have a really small number, say X = 1.125125123 *
2^-22, you can store it with a lot of precision. If you have a really large
number, say Y = 1.236235423 * 2^30, you can also store this with a lot of
precision. But now add the two together, and you're basically going to throw
out the smaller number entirely, because you don't have enough mantissa bits.
So essentially (X + Y) gives you Y. Now if you had a third number, say Z = 1.5
* 2^-22, and you were trying to do Y + X + Z, then order matters: (X + Z)
together might "bump" the sum up to the point where it is within a
multiplicative factor of 2^52 of Y, so it has some impact on the total.
But Y + X throws out X, and then (Y + X) + Z throws out Z. So (Y + X) + Z ≠ Y
+ (X + Z).

P.S. I didn't check the math but that's the basic argument. I could be off by
one or two orders of magnitude, but the point stands that floats behave
wackily if you care about correctness.

~~~
Veedrac
I'm confused at why you'd expect fixed point to avoid these problems. Consider

    
    
    auto pi      = fixed_point<int16_t, -7>(3.141592654);
    auto e       = fixed_point<int16_t, -7>(2.718281828);
    auto log2_10 = fixed_point<int16_t, -7>(3.321928095);

    std::cout << (pi * e) * log2_10 << std::endl;
    std::cout << pi * (e * log2_10) << std::endl;
    

These give 28.2422 and 28.2656 with the provided code, respectively.

For float16s, these give 28.375 and 28.359 respectively.

The correct answer is 28.368, so floats are much closer as well as being
closer to each other.

---

In fact, your example doesn't justify this concern. If one is sending the
accumulated values to C, one isn't worried about calculation error; you just
need all parties that do calculate it to do so in agreement.

There are several more targeted problems with floating point:

* C++ makes few guarantees about what floating point calculations do, and they can vary even between optimization levels. Languages like Python and Javascript fix this by setting well-defined behaviour.

* Often one only needs a certain level of detail, so adaptive computations are just a waste. If you know the bounds of your map and the maximum zoom, a fixed precision gives more uniform calculations and often better precision.

* Fixed precision has obvious and rounding-free addition.

------
ginko
I'm surprised they don't support C++11 user-defined literals[1]. That seems
like a perfect fit for stuff like this.

[1] [https://en.wikipedia.org/wiki/C%2B%2B11#User-defined_literals](https://en.wikipedia.org/wiki/C%2B%2B11#User-defined_literals)

~~~
adsche
I don't know, they would have to encode two template arguments, one of which
is a freely chosen integer, into a number of suffixes. I'd say leave that to
the user: the user can define literals for the type/exponent combinations they
will use often.

------
Patient0
Phew! I was thinking this was going to be a template-metaprogramming example
of a Y combinator...

Looks useful!

~~~
Fede_V
Yeah, I was prepared to see some Lovecraftian template-metaprogramming
implementation of lambda calculus just to calculate the fixed point of a
function.

------
thwest
The negative exponent bothers me. Every implementation I've used specifies
either mantissa bits or exponent bits as a positive value. The extraneous '-'
everywhere feels like a waste.

~~~
Veedrac
Given there are no exponent bits (all of the bits are effectively mantissa
bits with no hidden bit), your plan doesn't seem to make sense.

The numbers are (integer_value * 2 ^ constant_exponent), such that all bits
are dedicated to integer_value.

------
adolgert
Fixed-point arithmetic, as one of the number types, has very well-defined
mathematical properties. The Computational Geometry Algorithms Library (CGAL)
describes this. I think fixed point would be a special case of exact rational
numbers. They could have a classification structure much like iterators do,
except classified by whether they form a field, are double-constructible,
and such.

------
uxcn
I haven't read through SG14, but I'm wondering why the fixed-point
implementation doesn't provide bases other than binary. Fixed-point math has
uses well beyond game development, and honestly too many people just use
floating point by default.

For example, a decimal base is useful for representing prices that are
otherwise unrepresentable in binary (e.g. a penny).

------
n_yuichi
Not the "fix x = x" class, but a fixed point number class :-|

