Curiously enough, today, such a hack is completely pointless (at least on x86): the rsqrtps instruction can do 4 of these in just 3 clock cycles, with higher accuracy to boot.
Most modern instruction sets with remotely decent floating point support have a similar instruction, in large part because of the prevalence of hacks like this in the past.
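For anyone curious what the hardware route looks like in practice, here is a minimal sketch (mine, not from the thread) using the SSE intrinsic _mm_rsqrt_ps, which compiles down to rsqrtps and produces four approximate reciprocal square roots at once; it assumes an x86 compiler with SSE support:
#include <stdio.h>
#include <xmmintrin.h>  /* SSE intrinsics; _mm_rsqrt_ps maps to rsqrtps */

int main(void) {
    __m128 v = _mm_set_ps(16.0f, 9.0f, 4.0f, 1.0f);  /* four inputs packed into one register */
    __m128 r = _mm_rsqrt_ps(v);                      /* four approximate 1/sqrt values in one instruction */
    float out[4];
    _mm_storeu_ps(out, r);
    printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}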
Yes, but what a beautiful little hack it is. The fact that modern processors implement it in hardware is actually a tribute to this trick's ingenuity and elegance.
After all, to have major chip manufacturers implement your software hack in hardware is quite a compliment, assuming they use the same mechanism.
This particular hack, yes, but the general point stands. Sometimes hacks make for great code. The point is that this is good code:
float rsqrt(float x) {
    float h = 0.5f*x;
    int i = *(int*)&x;           /* reinterpret the float's bits as an int */
    i = 0x5f3759df - (i>>1);     /* the magic constant minus half the bit pattern */
    x = *(float*)&i;             /* back to float: a rough guess at 1/sqrt(x) */
    return x*(1.5f - h*x*x);     /* one Newton-Raphson step to refine the guess */
}
That's 3 local variables, 5 short statements... makes me wonder how quickly one could search/test all combinations of sensible 5-statement functions to find other such "hacks"...
Yeah, though at that point I'd start to worry about whether that's a fair L1 cache tradeoff, given that it only saves a single (integer!) instruction and requires 512 bytes of memory.
Actually it is not completely pointless today: some months ago I used it to speed up a force-directed algorithm which spent 80% of its time calculating normalized vectors between two points - and the fast inverse sqrt helped speed up the program by a factor of 2.5 to 3.0. The reason I used it is quite simple: in (unsafe) .NET code you can perform all the necessary arithmetic operations (pointer casts, bit shifts), but you are not allowed to emit arbitrary x86 instructions.
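For context, the hot path in that kind of code looks roughly like the C sketch below (the names are mine and purely illustrative; the commenter's actual code was unsafe .NET):
/* Normalize the direction vector between two points with the fast inverse
   square root, avoiding both a division and a sqrt call. */
static float fast_rsqrt(float x) {
    float h = 0.5f * x;
    int i = *(int*)&x;
    i = 0x5f3759df - (i >> 1);
    x = *(float*)&i;
    return x * (1.5f - h * x * x);
}

static void direction(float ax, float ay, float bx, float by,
                      float *dx, float *dy) {
    float vx = bx - ax, vy = by - ay;
    float inv_len = fast_rsqrt(vx*vx + vy*vy);  /* approximately 1/length */
    *dx = vx * inv_len;
    *dy = vy * inv_len;
}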
I remember when this was being discussed over at gamedev.net, where Chris Lomont came up with his version. The Wikipedia article explains it much more clearly though :-P
The history of this particular hack can be found over at Beyond3D:
They trace it all the way back to Greg Walsh, who apparently came up with the hack while working on the Titan graphics computer at Ardent Computer in the late 80s, and who got the idea while working with Cleve Moler, the author of MATLAB.
I hope someday to actually understand this. It seems to come up once a year or so, and each time I understand a bit more... so hopefully in a few years...
After all, if your first approximation is 'in the ballpark', you get a lot of precision in one go. Newton's method would run 'infinitely long' to reach arbitrary precision; normally the cutoff is some acceptable error.
By getting within the ballpark on the first try, you can get within acceptable error with one more approximation step.
(What counts as acceptable depends on the use of the result, of course.)
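To make the "one more step" point concrete, here is a small C sketch (not from the thread) that prints the relative error of the bit-hack seed and then after one and two Newton-Raphson steps:
#include <math.h>
#include <stdio.h>

int main(void) {
    float x = 2.0f;
    float h = 0.5f * x;
    int i = *(int*)&x;                  /* bits of x as an int */
    i = 0x5f3759df - (i >> 1);
    float y = *(float*)&i;              /* seed approximation of 1/sqrt(x) */
    float exact = 1.0f / sqrtf(x);
    for (int step = 0; step <= 2; step++) {
        printf("after %d Newton step(s): %.9f  (relative error %.2e)\n",
               step, y, fabsf(y - exact) / exact);
        y = y * (1.5f - h * y * y);     /* each step roughly squares the relative error */
    }
    return 0;
}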
If you Google for that magic number, you will find a large number of explanations, requiring various levels of understanding. No doubt there is one clear enough for you; perhaps stack a few of them.