I'd certainly like to have decimal FPU types too. I know IBM did some nice work, but the rest of the industry mostly ignored it, and I think it's a pity.
Still, I don't see where in scientific computing anybody would need it, given the nature of the problems being solved -- what in nature is decimal? When you calculate with base-2 FP you get better "resolution" and better partial results in an "absolute" sense (not in the "let me see the decimal digits" sense, of course). For the same reason, when you make a series of calculations, the error accumulates more slowly with binary. That's why base-2 FP has been used all these years. When you don't need to calculate money amounts, it is simply better.
But what are the examples where decimal is more "natural" for scientific computing?
To amplify this a bit: in binary, the worst-case relative and absolute rounding errors stay within a factor of two of each other, but in decimal they can differ by a factor of up to 10. IBM actually used base 16 on some hardware, but the factor-of-16 relative/absolute error disparity hurt, and it was eventually abandoned.
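To make that concrete, here is a small sketch of the "wobble" in Python; the 7-digit decimal precision and the probe values are my own picks for illustration, using the software decimal and fractions modules rather than any decimal hardware.

    # Sketch of the "wobble": for a fixed number of digits, the worst-case
    # *relative* rounding error swings by roughly a factor of the base
    # within each decade (base 10) or binade (base 2).
    from decimal import Decimal, getcontext
    from fractions import Fraction

    getcontext().prec = 7            # 7 significant decimal digits (arbitrary choice)

    def decimal_rel_error(s):
        exact = Decimal(s)           # the literal is stored exactly
        rounded = +exact             # unary plus rounds to the context precision
        return abs((rounded - exact) / exact)

    def binary_rel_error(x):
        rounded = Fraction(float(x)) # round the exact rational to the nearest double
        return abs((rounded - x) / x)

    # Decimal: values just above 1 and just below 10 share the same absolute
    # ULP (1e-6), so their worst-case relative errors differ by about 10x.
    print(float(decimal_rel_error("1.00000049") / decimal_rel_error("9.99999949")))  # ~10

    # Binary: the same comparison across one binade gives only about 2x.
    one_plus  = Fraction(1) + Fraction(1, 2**53)   # half an ULP above 1.0
    two_minus = Fraction(2) - Fraction(1, 2**53)   # half an ULP below 2.0
    print(float(binary_rel_error(one_plus) / binary_rel_error(two_minus)))           # ~2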
Right, and I can see why that's a problem in accounting, but why does it matter for scientific computing? I do a fair amount of stuff that could be called scientific computing, and I just use doubles. If I need to keep track of uncertainty or propagate errors, I normally use Gaussians as the representation (not "significant figures" as would be implied by using decimal).
It almost never matters in scientific computing. Doubles give us the equivalent of almost 16 digits of accuracy, and that's more precision than we know any physical constant to. You're right that the world isn't decimal, and switching to decimal encodings actually reduces the effective precision of any computation.
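For what it's worth, that "almost 16 digits" figure is easy to check from Python, where float is an IEEE 754 binary64 double on essentially every platform:

    # Checking the precision of a binary64 double from Python.
    import sys

    print(sys.float_info.dig)       # 15 -- decimal digits guaranteed to round-trip
    print(sys.float_info.mant_dig)  # 53 significand bits ~= 15.95 decimal digits
    print(sys.float_info.epsilon)   # ~2.22e-16 -- relative spacing of doubles near 1.0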
There's a reason they're called the natural numbers. Nature doesn't have to be decimal for decimals to be useful (the question that started this debate), it just has to be rational. Many many many parts of nature are rational, and sometimes we need to deal with them in scientific computing. DNA sequence processing comes to mind.
When the question is stated like this, then the answer is: it is simply more convenient for us humans, and the computer would "lie" to us less in a significant number of cases. We don't care that much about accumulated errors over long computations; we just don't expect our inputs to be interpreted as the "wrong" number the moment we enter them.
Think about it this way: you have a computer capable of billions of operations per second, with unfathomable capacity, and yet it lies to you as soon as you enter the number 16.1 in almost any program: it stores some other number, cutting off an infinite tail of binary digits! Why, you ask? The answer is "because otherwise it's not in the format native to the hardware."
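You can see exactly what happens with 16.1 from Python, where Decimal(x) prints the exact value the binary double actually holds; this is just an illustration of the point, nothing hardware-specific:

    # What the machine really stores when you type 16.1.
    from decimal import Decimal

    print(Decimal(16.1))     # 16.1000000000000014210854715202... (the nearest double)
    print(Decimal("16.1"))   # 16.1 -- a decimal type keeps the number as written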
So it should be native to the hardware. Just not because of "scientific computing," but for real-life, everybody's everyday computing. We need it for computers to "just work."
Yes, I was the one who questioned the "scientific" motive -- see the top of the thread! Still, on human grounds, I claim we really, really need it in hardware. It doesn't matter for "scientific computing"; it matters for us humans, as long as the decimal system is the only one we really "understand."
Any program that does a lot of computation has to use hardware-based arithmetic to be really fast, yet nobody expects the 0.10 they type to lose its meaning the moment it is entered. It is an absurd situation.
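A minimal illustration of the 0.10 complaint, using Python's software Decimal type as a stand-in for what a decimal-native machine would give you by default:

    # Cents in binary doubles vs. a decimal type.
    from decimal import Decimal

    print(0.10 + 0.20)                        # 0.30000000000000004
    print(0.10 + 0.20 == 0.30)                # False
    print(Decimal("0.10") + Decimal("0.20"))  # 0.30
    print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True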
I'm sorry, are you asking for what purpose a scientist would need to multiply by a power of ten? Converting between units in scientific notation? Doing base 10 logarithms? Calculating things in decibels?
The original post that started this thread was saying that chips should support decimal floating point natively in silicon instead of only base-2 floating point. Yes, those are different things: https://en.wikipedia.org/wiki/IEEE754#Basic_formats
Fascinating discussion. There are a couple of threads here, and they can be summed up as precision arguments and range arguments. I confess I'm friends with Mike Cowlishaw (the guy behind speleotrove.com), and he's influenced my thinking on this quite a bit.
So precision arguments generally come under the heading of how many significant digits you need, and if it's fewer than 15 or so you're fine using a binary representation. If it's more than 15 you can still use binary, but it's not clear to me that it's a win.
The second is range. If you're simulating all of the CO2 molecules in the atmosphere, and you actually want to know how many there are, and you want to work with the precise values of the ratios of N2, NO2, H2, O2, CO2, etc. in the atmosphere, then, as I understand it, you're stuck approximating. (For context, I was asking a scientist from Sandia National Laboratories about their ability to simulate nuclear explosions at the particle level, and wondering whether climate scientists could do the same for simulating an atmosphere of molecules, or better atoms, or even better, particles.) That is a problem with a lot of dynamic range in the numbers: adding 1 to .5866115*10^20 doesn't really change the value, because the precision runs out at that magnitude.
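Here's a quick sketch of that absorption effect with binary64 doubles; the count is just the figure from the paragraph above, not a real molecule count:

    # At ~10^20 the gap between adjacent doubles is in the thousands,
    # so adding 1 is absorbed entirely. (math.ulp needs Python 3.9+.)
    import math

    n = 0.5866115e20
    print(n + 1 == n)                      # True: the +1 vanishes
    print(math.ulp(n))                     # 8192.0 -- spacing of doubles at this magnitude
    print(58661150000000000000 + 1)        # Python integers, by contrast, count exactly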
And yes, you can build arbitrary-precision arithmetic libraries (I built one for Java way back in the old days), but if you're working with numbers at that scale, it gets painful without hardware support of some kind.
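For what it's worth, Python ships such a library in software (the decimal module), which makes the convenience-versus-speed point nicely; a sketch, with the working precision picked arbitrarily:

    # Software decimal arithmetic with a user-chosen precision.
    from decimal import Decimal, getcontext

    getcontext().prec = 40                 # 40 significant digits, far beyond a double

    big = Decimal("0.5866115E+20")
    print(big + 1)                         # 58661150000000000001 -- the +1 is not lost

Every operation goes through software, though, which is exactly where the "painful without hardware support" part comes in.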
In my day-to-day use I find that binary imprecision screws up navigation in my robots as they're trying to figure out where they are, but repeated re-localization helps keep the error from accumulating. And yes, it's a valid argument that dead reckoning is for sissies, but it's very helpful when you have limited sensor budgets :-)