
Is this the reason for the downvote?

First, I'd ask you to name, say, 1,000 applications where this would be acceptable. Games might be an obvious one.

OK, what else? Because, you see, without a significant ecosystem of applications that can tolerate these kinds of errors, the business case to fire up a foundry and make chips just can't be made.

Military? I don't know. You can justify even the most ridiculous things under the cover of military spending.

I would venture to say that the vast majority of applications that can deal with a 1% error will work just fine using integer math, which, if done correctly, can be far more accurate, as someone else pointed out. And I would further venture to say that they already use integer math.

In other words, in the context of the vast majority of applications, this is not a problem needing a solution.
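As a rough illustration of the integer-math point above, here is a minimal fixed-point sketch in Python (the Q16.16 format and the operands are my own arbitrary choices, not anything from the thread):

    # Fixed-point sketch: represent values as integers scaled by 2^16 (Q16.16).
    SCALE = 1 << 16

    def to_fixed(x):
        return int(round(x * SCALE))

    def fixed_mul(a, b):
        # Multiply two Q16.16 values and rescale back to Q16.16.
        return (a * b) >> 16

    a, b = to_fixed(3.25), to_fixed(0.875)
    print(fixed_mul(a, b) / SCALE)   # 2.84375, exact for these operands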

I've done lots of analog circuit designs (yes, analog... scary) where you creatively make use of the physics of the junctions to produce various non-linear transfer functions. It works well and can produce amazingly compact circuits that do what would otherwise require lots of digital circuitry. The errors, potential thermal drift, and process variations can kill you, though.

To give it a different spin, an FPU required an external chip 20 or 30 years ago. Today you have processors smaller than a fingernail running at multiple GHz with pipelined FPUs. My vote is for further development in the direction of faster and more accurate FP calculations rather than faster, less accurate (and full of other liabilities) FPUs.




"To give it a different spin, an FPU required an external chip 20 or 30 years ago. Today you have processors smaller than a fingernail running at multiple GHz with pipelined FPUs."

GPUs have so many FPUs (literally thousands) that their silicon footprint is quite large. Large SIMD FPUs, like those in Sandy Bridge, are also large and power-hungry, so much so that Intel actually has an (undocumented) feature where Sandy Bridge powers down half of the FPU if it hasn't been used recently.

With regard to accuracy, even Intel recognizes the value of being able to improve power and performance with lower-accuracy FP calculations: see, for example, their demo hardware here: http://www.theregister.co.uk/2012/02/19/intel_isscc_ntv_digi... .

"The FP unit can do 6-bit, 12-bit, and 24-bit precision and, by being smart about doing the math, it can cut energy consumption by as much as 50 per cent and - perhaps more importantly - you can do the math a lot faster because you only operate on the bits you need and don't stuff registers and do calculations on a lot of useless zeros."


Sure. Makes sense. The question really is about whether or not some of these applications can tolerate errors on the order of 1%. I contend that those applications are few and far between, particularly in this day and age. People don't want to go backwards.

For example, I can do photorealistic rendering of mechanical models from Solidworks. A 1% error in the output of these calculations would not be acceptable; the quality of the images would not be the same. Play with this in Photoshop and see what a 1% error means to images. I would have zero interest in a graphics card that ran faster and used less power but introduced a 1% error in calculations. That's an instant deal breaker.
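For a rough sense of scale, here is what a 1% multiplicative error looks like on 8-bit pixel data (a hypothetical sketch on random data, not any real rendering pipeline):

    import numpy as np

    # Hypothetical 8-bit grayscale image.
    img = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)

    # Perturb every pixel by up to +/-1%.
    gain = 1.0 + np.random.uniform(-0.01, 0.01, size=img.shape)
    noisy = np.clip(np.rint(img * gain), 0, 255).astype(np.uint8)

    # Worst-case shift is a couple of 8-bit levels per pixel.
    print(np.abs(noisy.astype(int) - img.astype(int)).max())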

I can see the competitors' marketing: "Buy the other brand if you want 1% errors".

Does a radiologist care about the processing of an MRI happening faster? Sure. Can he/she tolerate a 1% error in the resulting images? Probably not. He probably wants more detail and resolution rather than less. I wouldn't want him/her to be evaluating MRIs with a baked-in 1% error. I think it is ridiculous to think that this sort of thing is actually useful outside of a very narrow field within which errors would be OK.

I'm still waiting for someone to rattle off 100 to 1,000 applications where this would actually be acceptable. Without that, it is nothing more than a fun lab experiment with lots of funding to burn but no real-world utility.

Let's check in ten years and see where it's gone.


"Sure. Makes sense. The question really is about whether or not some of these applications can tolerate errors on the order of 1%."

I think the idea here is threefold:

1. The vast majority of computations do not need extremely high precision. A billion times more floating-point operations go into rendering video for games every day than into MRI processing (made-up statistic). Even now, this is why we have separate single- and double-precision formats: most operations only need single precision.

2. Applications that need a particular level of precision can be written with that level of precision in mind. If you know how much precision a particular operation provides, and you are aware of the numerical properties of your algorithms, you can write faster code that is nevertheless still as accurate as necessary (see the summation sketch after this list). Most of this isn't done by normal programmers, but rather by numerical libraries and such.

3. Many performance-intensive applications already use 16-bit floating point, even though it has very little native support in modern CPUs. AAC Main Profile even mandates its use in the specification. The world is not new to lower-precision floating point, and it has gained adoption despite its lack of hardware support on the most popular CPUs (see the half-precision sketch further below).
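Regarding point 2, as one concrete example of that kind of precision-aware trick (my own illustration, not something from the thread), compensated (Kahan) summation recovers most of the accuracy lost when naively accumulating many values at low precision:

    import numpy as np

    def naive_sum(values):
        total = np.float32(0.0)
        for x in values:
            total = total + x
        return total

    def kahan_sum(values):
        # Compensated summation: carry each addition's rounding error
        # forward instead of discarding it.
        total = np.float32(0.0)
        comp = np.float32(0.0)
        for x in values:
            y = x - comp
            t = total + y
            comp = (t - total) - y
            total = t
        return total

    data = np.full(1_000_000, 0.1, dtype=np.float32)
    print(naive_sum(data))   # accumulates visible rounding error
    print(kahan_sum(data))   # stays essentially at 100000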

The entire idea that this is "let's make ordinary computations less accurate" is a complete straw man and red herring that nobody actually suggested.
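And regarding point 3, for a quick feel of what 16-bit floats give up, here is a tiny half-precision sketch with NumPy (just an illustration of the format, not a claim about any particular codec or CPU):

    import numpy as np

    x = np.float32(3.14159265)
    h = np.float16(x)   # 11 significant bits -> 3.140625
    print(float(h), abs(float(h) - float(x)) / float(x))   # relative error ~3e-4

    # Half precision also gives up range: the largest finite value is 65504.
    print(float(np.finfo(np.float16).max))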


"... name, say, 1,000 applications where this would be acceptable...what else?"

Image processing is already a large market, but it is constrained by the fact that doing it on today's silicon is expensive.

If instead of "1000x faster" you read "1000x cheaper to do the same thing", you can imagine lots of computer vision / pattern (voice) recognition / noise reduction applications that become viable, and the market will be huge.


Yes! Or 1000x less power hungry.

Immediately obvious application: real-time moving-image recognition in your phone.



