Why a*a*a*a*a*a cannot be optimized to (a*a*a)*(a*a*a)? (stackoverflow.com)
153 points by ciupicri 1996 days ago | 29 comments



When I first read the headline, I definitely thought it was a regular expression which could be "optimized" to a+. I guess I'm more inclined to read "*" as the Kleene operator than as multiplication...


I'm glad to know that I'm not alone in doing this.


I think you need a font where the asterisk is not superscripted. Try Inconsolata.


Me too, but that's probably because of all those regular expressions like 'aa* b*' I had to read at uni :)


In my numerical analysis class, numerical stability was a major theme [1]. It was eye-opening how quickly rounding/truncation errors can bite you in seemingly trivial situations!

On the other hand, if I have a function that works on parameters in a specified range (say, 0.0 to 1.0) and returns results which are supposed to be correct to a specified accuracy, I would love to have the compiler do all the possible optimizations without having to specify a compiler flag (doing so for, say, just a single function can be quite annoying!). Maybe approaches like Haskell's type inference will eventually produce significantly faster code because they can do things like this?

In a somewhat related case, it would be awesome if the compiler recognized properties like associativity, distributivity, and commutativity of composite functions and rearranged things optimally for performance. It's really nice to see so much development in languages and compilers at the moment (LLVM, JavaScript, functional languages like Haskell, etc.) :)

[1] http://en.wikipedia.org/wiki/Numerical_stability


So you want something like GCC's -funsafe-math-optimizations flag.
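
A sketch of what that enables; the flag names are real GCC options, though whether a given GCC version performs this exact rewrite isn't guaranteed:

    /* pow6.c - compile with: gcc -O2 -funsafe-math-optimizations pow6.c
       (-ffast-math implies this flag, among others) */
    double pow6(double a) {
        /* with unsafe math optimizations enabled, the compiler is
           allowed to regroup this as (a*a*a)*(a*a*a): three
           multiplies instead of five */
        return a * a * a * a * a * a;
    }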


David Buehler's mixed-point computations can do that, I believe.


The short answer: Because floating point math is not associative.

http://en.wikipedia.org/wiki/Floating_point#Accuracy_problem...
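
A minimal demonstration in C (IEEE-754 doubles; the digits in the comments are what typical implementations print):

    #include <stdio.h>

    int main(void) {
        double a = 0.1, b = 0.2, c = 0.3;
        /* mathematically equal, but each operation rounds its result: */
        printf("%.17g\n", (a + b) + c);   /* 0.60000000000000009 */
        printf("%.17g\n", a + (b + c));   /* 0.59999999999999998 */
        return 0;
    }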


As a user of Stack Overflow only when Google brings me there, I'm curious as to why the answer received so many upvotes. It is certainly a good answer, but the score seems disproportionate to its utility.


Because it's on the front page of both Hacker News and reddit/r/programming.


I just feel sorry for the guy that answered. He's got +685 upvotes at the time of writing and no downvotes, so he should have ~7,000 points from this, but apparently SO imposes a daily cap of 200.


I think the daily cap is intended for situations just like this. So if you see someone with 5k+ reputation it's a good sign they've given good answers to a fair few questions (or asked good questions) over a period of time, rather than just got lucky with a question that found its way onto Twitter. Because reputation entitles you to do certain things like close questions, SO is careful about how you can earn it.


Yeah makes sense. Still, I'd be a bit gutted :)


There are other ways of dealing with "fake real numbers" (which is what doubles are).

One example is interval arithmetic. http://en.wikipedia.org/wiki/Interval_arithmetic
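
A minimal sketch of the idea; the struct and helper here are made up for illustration (real libraries such as MPFI use directed rounding modes rather than nextafter):

    #include <math.h>

    typedef struct { double lo, hi; } interval;

    /* Add two intervals, nudging each bound one ulp outward so the
       true mathematical result is always contained in [lo, hi]. */
    static interval iv_add(interval a, interval b) {
        interval r;
        r.lo = nextafter(a.lo + b.lo, -INFINITY);
        r.hi = nextafter(a.hi + b.hi, +INFINITY);
        return r;
    }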

None of them seem very mainstream: the predominant thinking is that floats are "the right way" to badly approximate real numbers, which works well enough except when you're working with pounds and pence, when "everyone" knows the right way is to use fixed point...

That's not going to help the above associativity problem, but it fits in the theme of "LAAAA LAAAA LAAAA let's gloss over the bits of floating point that don't follow the rules".



It can. But only for integer data types.

As already pointed out, for floating point data types the two versions could (and probably will) result in slightly different answers due to rounding issues.
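
For example, since unsigned multiplication is exact modulo 2^N, the regrouping is always valid there (whether a particular compiler actually performs it is a separate question):

    /* a*a*a*a*a*a == (a*a*a)*(a*a*a) for every unsigned value, so a
       compiler is free to use the cheaper grouping: */
    unsigned pow6u(unsigned a) {
        unsigned a3 = a * a * a;
        return a3 * a3;   /* three multiplies instead of five */
    }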


It's not just rounding issues: depending on the values involved the order in which you carry out FP operations can lead to radically different results.

In this particular case, the difference due to re-ordering of multiplications is likely to be relatively small, but it doesn't hurt to be aware of the potential problems.


> It's not just rounding issues: depending on the values involved the order in which you carry out FP operations can lead to radically different results.

Different operation orders giving different results is due to rounding issues. At the end of each individual operation the intermediate result needs to be stored in a floating point data type, and at this point it is rounded to fit the limits of that type. Changing the order of the operations to optimise in the way being talked about changes which intermediate results get rounded (and so changes the error at the end of the process). Some numbers "suffer" more from the rounding than others (0.5 can be represented exactly in binary; 0.6 cannot), so sometimes the difference between the values you get at the end of mathematically equivalent operations (equivalent until you consider the accuracy of intermediate results) will be much larger than at other times.

Of course, if all compilers optimised a*a*a*a*a*a to (a*a*a)^2 it would be OK, since for the same data type all code would give the same result. But as not all compilers do this optimisation, for consistency it is safer for none of them to do it unless the programmer explicitly codes it that way (and thereby takes responsibility for the fact that the "optimal" version may give different results from the unaltered version, rather than the compiler deciding that this is OK).

The same is true for any chained floating point operations. Changing a+a+a+a+a+a+a+a+a+a+a+a+a+a+a+a+a+a+a+a to a*20 (if one multiply is faster than 19 additions) will give different results.
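
An illustration of that last point; the exact digits depend on the platform, but with IEEE-754 doubles and a = 0.1 the two results do come out different:

    #include <stdio.h>

    int main(void) {
        double a = 0.1, sum = 0.0;
        for (int i = 0; i < 20; i++)
            sum += a;                 /* 19 additions, each one rounded */
        printf("%.17g\n", sum);       /* accumulated rounding error... */
        printf("%.17g\n", a * 20.0);  /* ...differs from one rounded multiply */
        return 0;
    }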


Why does it being different matter? Compilers can do all kinds of things that change answers.


Because consistency matters.

An optimisation that could change the result (even by the smallest possible amount) is not a safe optimisation. As a general rule, unless the build script tells it otherwise, a compiler won't perform an optimisation that could make the optimised version give a different result from the basic one.

Of course you may decide it is safe enough, in which case you might find your compiler has an option to tell it that you would like such optimisations to be considered. Stepping away from arithmetic for a moment, variable aliasing is another example of this. If your code could potentially update a value several ways through pointer jiggery-pokery, a compiler may decide certain register-based optimisations are not safe, and there are often directives you can give to tell the compiler that it can consider these optimisations because you are sure the code does nothing that would make them a problem (the compiler can't decide this for itself, as that would potentially take forever due to the halting problem).
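
In C the classic way to make that promise is restrict (C99); a small sketch:

    /* 'restrict' tells the compiler that dst and src never alias, so
       it can keep values in registers and reorder/vectorize the loop
       freely. Without it, a write through dst might change what src
       points at, forcing a reload on every iteration. */
    void scale(double *restrict dst, const double *restrict src,
               double k, int n) {
        for (int i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }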


They can't do it on code that doesn't invoke undefined behavior. This is not undefined behavior.


Obviously they can't do whatever they want, but they can decide whether to store variables in registers or main memory, they can change the order of a variety of operations, etc. Does C specify the order of evaluation for multiplication? If so, I wasn't aware of that. But you'll still have some compilers giving you 32-bit precision on intermediate results and some giving you 80 bits.


> they can decide if they want to store variables in registers or main memory, they can change the order of a variety of operations, etc.

As long as they follow the "as-if rule", which states that any transformation done to code that does not invoke undefined behavior must produce the same result as if the transformation were not done. Reordering floating point multiplications does not follow this rule.

(Watching stuff in a debugger doesn't count for this rule)


1. Are you saying that multiplications have a specified order? I asked that but didn't get a response.

2. The compiler is allowed to either leave the float inside the FPU or store it back to main memory after each operation, is it not? These give different results.

Somewhere around here I have a chart of the differences in floating point outputs of different GCC versions. And no, fastmath is not on.
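
On question 2: C99 at least lets you ask which behaviour you're getting, via FLT_EVAL_METHOD in <float.h>:

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        /*  0: intermediates kept at the operand type's precision
            1: float and double arithmetic evaluated as double
            2: everything evaluated as long double (e.g. x87 80-bit)
           -1: indeterminable */
        printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
        return 0;
    }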


Yes. Multiplication is left-associative. I'm not sure that C defines the precision of floating point operations, although I think it sets an upper bound. I may be wrong, and I think the C99 spec (optionally?) pushes compilers to conform to IEEE 754.
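
Concretely, the grammar fixes the grouping, and it is exactly this grouping the compiler must preserve for doubles:

    double five_mults(double a) {
        return a * a * a * a * a * a;          /* parsed as below */
    }

    double five_mults_explicit(double a) {
        return ((((a * a) * a) * a) * a) * a;  /* the same expression */
    }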


Is this on the front page of HN because of the many upvotes it got, or did it get the many upvotes because it is on the front page of HN?


It got many upvotes (or a small number of upvotes directly after being posted), which pushed it to the front page of HN, where it got greatly increased visibility and hence more upvotes.


Almost certainly the latter.


Because it brings down Stack Overflow?



