

0x5f3759df and the fast inverse square root (2012) - carljoseph
http://h14s.p5r.org/2012/09/0x5f3759df.html

======
spiritplumber
If anyone wants a fast atan2 that I wrote a while ago (2007) for making a
microcontroller navigate, it's at
[http://robots-everywhere.com/portfolio/math/](http://robots-everywhere.com/portfolio/math/)

~~~
GFK_of_xmaspast
how's it compare to a CORDIC?

~~~
spiritplumber
I wrote it for the Parallax Propeller, which is slow at division, so I tried
to keep those to a minimum (just one, which isn't too bad).

~~~
GFK_of_xmaspast
With a CORDIC the only divides should be by powers of 2 in the main loop and
taking out the gain factor at the end, all of which can be turned into
multiplies?
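
For what it's worth, that loop can be sketched in a few lines (a hypothetical illustration, not spiritplumber's code; floating point here for clarity, whereas a fixed-point Propeller version would replace the multiplies by 2^-i with right shifts and the atan() calls with a small precomputed table):

```rust
use std::f32::consts::FRAC_PI_2;

// Vectoring-mode CORDIC: rotate (x, y) toward the positive x-axis while
// accumulating the rotation angle. No divisions anywhere.
fn cordic_atan2(mut y: f32, mut x: f32) -> f32 {
    // Pre-rotate by +/-90 degrees so the remaining angle lies in
    // (-pi/2, pi/2), where the iteration converges.
    let mut angle = if x < 0.0 {
        let (nx, ny, a) = if y >= 0.0 {
            (y, -x, FRAC_PI_2)   // started in quadrant II
        } else {
            (-y, x, -FRAC_PI_2)  // started in quadrant III
        };
        x = nx;
        y = ny;
        a
    } else {
        0.0
    };

    for i in 0..16 {
        let k = 2.0f32.powi(-i);  // a right shift in fixed point
        let step = k.atan();      // a table lookup in a real implementation
        if y > 0.0 {
            // Rotate clockwise by atan(2^-i).
            let nx = x + y * k;
            y -= x * k;
            x = nx;
            angle += step;
        } else {
            // Rotate counter-clockwise by atan(2^-i).
            let nx = x - y * k;
            y += x * k;
            x = nx;
            angle -= step;
        }
    }
    angle
}
```

The vector's length grows by a fixed gain each pass, but since only the angle is wanted here, the gain never needs to be divided out.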

~~~
spiritplumber
Thanks, I'm going to go back and try it :)

------
thewarrior
TL;DR:

The fast inverse square root is based on the fact that the integer
representation of a floating-point number is a rough approximation of its
logarithm.

So convert the floating-point number to its integer representation, and you
now have its approximate logarithm. Halve that (the magic constant folds in
the negation and the exponent bias) and improve the result with some
Newton-Raphson.
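
The bits-as-logarithm observation is easy to check directly; a minimal sketch (exact for powers of two, and off by at most roughly 0.09 in between, which is the correction the article's σ term accounts for):

```rust
// Reinterpret the bits of a positive f32 as an integer. Dividing by 2^23
// (the mantissa width) and removing the exponent bias of 127 yields an
// approximation of log2(x).
fn approx_log2(x: f32) -> f32 {
    x.to_bits() as f32 / (1 << 23) as f32 - 127.0
}
```

For example, approx_log2(8.0) is exactly 3.0, while approx_log2(10.0) is about 3.25 against a true log2 of about 3.32.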

~~~
mkesper
> So convert floating point to its integer representation.

Would something dirty like this be feasible in Rust?

~~~
kaoD
AFAICT transmute would work:
[http://rustbyexample.com/staging/unsafe.html](http://rustbyexample.com/staging/unsafe.html)

That's what's great about Rust: you can write unsafe code when you need it,
and it's isolated in `unsafe` blocks for easy auditing.

EDIT: Didn't test it much, but it looks like it works.

    
    
    fn fast_inv_sqrt(x: f32) -> f32 {
        let f: f32 = unsafe {
            let i: i32 = std::mem::transmute(x);
            std::mem::transmute(0x5f3759df - (i >> 1))
        };
        f * (1.5 - 0.5 * x * f * f)
    }

    fn main() {
        println!("{}", fast_inv_sqrt(1.0));   //=> 0.998307
        println!("{}", fast_inv_sqrt(255.0)); //=> 0.062517
    }
    

EDIT2: Shorter, more readable version.
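
As an aside, later versions of Rust make the `unsafe` unnecessary: `f32::to_bits` and `f32::from_bits` (stabilized long after this thread) do the same bit reinterpretation in safe code. A sketch:

```rust
fn fast_inv_sqrt(x: f32) -> f32 {
    // Same bit trick, no unsafe: to_bits/from_bits reinterpret the
    // representation without any conversion.
    let f = f32::from_bits(0x5f3759df - (x.to_bits() >> 1));
    // One Newton-Raphson step on y = 1/sqrt(x).
    f * (1.5 - 0.5 * x * f * f)
}
```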

------
simias
I'm really impressed by the generalization to other powers, including the
regular square root. It's the first time I've seen that hack (even the author
mentions he found nothing on Google). I could definitely have used it when I
needed to compute square roots of fixed-point numbers with no HW support; it
looks significantly faster than the iterative digit-by-digit method.
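
For reference, the square-root variant of the hack looks roughly like this (a sketch using a commonly quoted magic constant, 0x1fbd1df5, not code from the article; a fixed-point version would shift the integer logarithm directly):

```rust
// Approximate sqrt: halve the bits-as-logarithm, then restore the
// exponent bias with a magic constant.
fn fast_sqrt(x: f32) -> f32 {
    let y = f32::from_bits((x.to_bits() >> 1) + 0x1fbd1df5);
    // One Newton-Raphson step for sqrt; costs a single divide.
    0.5 * (y + x / y)
}
```

The raw bit-trick result is within a few percent; the single Newton-Raphson step sharpens it to roughly four digits.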

------
hbbio
There is also a Wikipedia page for this:

[http://en.wikipedia.org/wiki/Fast_inverse_square_root](http://en.wikipedia.org/wiki/Fast_inverse_square_root)

------
tempodox
And the moral of this story is: Never trust any floating-point number whose
bits you haven't polished yourself.

------
jokoon
How does he adjust his σ value?

------
agumonkey
I'm rarely this amazed by an encoding.

------
KhalilK
I may be going on a rampage here, but "_It does contain a fair bit of math_"
really grinds my gears!

Of course it does; it is about square roots and inversions.

In most cases, in order to optimize an algorithm or solve a problem,
mathematics will do it for you, and that ranges from simple arithmetic
properties to change-of-basis theorems in linear algebra.

You just cannot tackle such subjects without expecting them to _contain a
fair bit of math_.

~~~
rtpg
Loads of things do not require math (in the sense of "doing stuff with
functions in R -> R", or multiplications/divisions).

Pretty much no sorting algorithm requires mathematical knowledge to work
through, nor do many data-structure algorithms, and neither do concurrency
issues.

This is, of course, so long as you consider math to be "doing stuff with
these numbers." Group theory and the like can help guide your results on a
more abstract level.

EDIT: What I meant by this is the sort of "number-y" math that people usually
mean when saying "warning: math ahead". I realise that abstract modeling is
also part of mathematics, but that's not what people usually _mean_ in the
context of what the parent post was discussing.

~~~
avz
> ...math (in the sort of "doing stuff with functions in (R -> R), or
> multiplications/division")

This is not what math is. It's closer to what calculus is, and math is a lot
more than calculus. Most mathematics is not about numbers at all.

BTW: Interesting results about sorting algorithms (e.g. the Ω(n log n) lower
bound for comparison-based sorts) require maths (specifically combinatorics,
permutations, etc.). Concurrency involves math too: see the famous
happens-before relationship, which involves notions of sets (of executions),
transitivity, partial ordering, etc.

~~~
vishnugupta
> See the famous happens-before relationship which involves notions of sets
> (of executions), transitivity, partial ordering, etc.

[OT] What's the probability of two comments with very similar views appearing
at the same time? :-) I commented almost exactly the same thing!

PS: Ducks from down-votes.

