
From the introductory text: "The ultimate goal of vectorization is an increase in floating-point performance…". I find it a bit funny that by vectorizing floating-point operations we'll be able to calculate wrong answers faster ;-)



It's not a wrong answer, any more than writing down ⅓ as 0.333, multiplying that by 3, and discovering that the result is 0.999 and not 1.
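To make that concrete, here's a minimal sketch in plain Python (whose floats are IEEE 754 doubles): the error is introduced the moment 1/3 is written down with finite precision, not by the later arithmetic.

    third = 0.333              # 1/3 already rounded to three decimal digits
    print(third * 3)           # ~0.999, not 1: the damage was done at write time
    print(0.1 + 0.2 == 0.3)    # False: binary doubles can't represent 0.1 exactly
    print(0.1 + 0.2)           # 0.30000000000000004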

And there's nothing in vectorization that requires floating-point numbers; it's just that the algorithms that demand the most out of a computer tend to be large numerical algorithms, which use floating-point.
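A quick illustration of that (assuming NumPy is available, though any SIMD-friendly language shows the same thing): integer work vectorizes just as well, and it's exact.

    import numpy as np

    a = np.arange(1_000_000, dtype=np.int64)
    b = np.arange(1_000_000, dtype=np.int64)

    c = a + b     # elementwise integer add; NumPy runs this through SIMD loops
    d = a @ b     # integer dot product: exact, no rounding anywhere
    print(c[:3], d)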


> the result is 0.999 and not 1.

But 0.999... is 1.

Edit: Nevermind, I see you meant "writing down 1/3 as 0.333" literally.


They're not wrong answers, they're floating-point answers. What's wrong is the assumption that infinite precision will fit in a handful of bytes.


for many practical applications where we compute things, the input data isn't fully accurate; it's an estimate with significant error or uncertainty. when we consider how the outputs of computations are used, in many cases it isn't useful for outputs to be arbitrarily precise. maybe the output is used to make a decision, and the expected cost or benefit of that decision won't significantly change even if the output is wrong by 1%, for example.

one person's "wrong" is another person's approximation. by reducing accuracy we can often get very large performance improvements. good engineering makes an appropriate tradeoff.

there's even more scope for this kind of thing inside some numerical algorithms, where having a "pretty good" guess gives a massive speed boost, but the algorithm can be iterated to find quite precise answers even if the "pretty good" guess is off by 20%.
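the classic instance of that pattern is Newton's method; hardware reciprocal-square-root estimates refined by a Newton step or two work exactly this way. a toy sketch in plain Python (newton_sqrt is just an illustrative name):

    import math

    def newton_sqrt(x, guess, iters=4):
        # each Newton step for y^2 - x = 0 roughly doubles the number of
        # correct digits, so even a crude starting guess converges fast
        y = guess
        for _ in range(iters):
            y = 0.5 * (y + x / y)
        return y

    rough = 1.2 * math.sqrt(2.0)      # a "pretty good" guess, ~20% off
    print(newton_sqrt(2.0, rough))    # ~1.4142135623730951 after 4 steps
    print(math.sqrt(2.0))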

there's yet more scope for this if we consider that we're trying to mathematically model reality -- even if we perform all the math with arbitrary-precision calculations, the theoretical model itself is still an approximation of reality, so there's some error there. there's limited value in solving a model exactly when there's always some approximation error between the model and reality.

it's amazing we have such good support for high-performance, scientific-quality IEEE floating-point math everywhere in commodity hardware. it's a great tool to have in the toolbelt.


How is this relevant? This is more of a critique of floats than it is about vectorization.


Still irrelevant.



