This short interview is quite inspiring. Here is a choice quote:
"I did not bill Intel for consulting hours spent on those aspects of the i8087 design that were transferred to IEEE p754. I had to be sure, not only in appearance and actuality but above all in my own mind, that I was not acting to further the commercial interest of one company over any other. Rather I was acting to benefit a larger community. I must tell you that members of the committee, for the most part, were about equally altruistic. IBM's Dr. Fred Ris was extremely supportive from the outset even though he knew that no IBM equipment in existence at the time had the slightest hope of conforming to the standard we were advocating. It was remarkable that so many hardware people there, knowing how difficult p754 would be, agreed that it should benefit the community at large. If it encouraged the production of floating-point software and eased the development of reliable software, it would help create a larger market for everyone's hardware. This degree of altruism was so astonishing that MATLAB's creator Dr. Cleve Moler used to advise foreign visitors not to miss the country's two most awesome spectacles: the Grand Canyon, and meetings of IEEE p754."
It's unfortunate that Intel didn't consult Kahan earlier in the design process of the 8087, when they could have designed a base 10 floating point unit, as recommended by Kahan.
At least for spreadsheets and other accounting software, as well as most uses of JavaScript, IEEE 754-2008 decimal64 arithmetic is more appropriate than IEEE 754-1985 binary64 arithmetic. It astounds me how many professional programmers report bugs that aren't bugs but misunderstandings of binary floating point rounding. Excel has some ugly display hacks to hide some of the binary floating point number quirks from users, but this results in some numbers that display identically but are not equal according to the = operator.
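For anyone who hasn't been bitten by this, here is a minimal Python sketch of the kind of non-bug I mean (the standard decimal module standing in for IEEE 754-2008 decimal arithmetic):

    from decimal import Decimal

    # Binary64: 0.1, 0.2 and 0.3 each get rounded to the nearest
    # representable binary fraction, so the "obvious" identity fails.
    print(0.1 + 0.2 == 0.3)       # False
    print(0.1 + 0.2)              # 0.30000000000000004

    # Two distinct binary64 values can display identically once rounded
    # for output, which is roughly what Excel's display hacks rely on.
    a, b = 0.1 + 0.2, 0.3
    print(f"{a:.2f} vs {b:.2f}")  # 0.30 vs 0.30
    print(a == b)                 # False

    # Decimal arithmetic behaves the way the "bug" reporters expect.
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True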
Given the wide berth that programmers give to floating point number comparison and rounding, I wonder what would happen if a browser vendor shipped a dialect of ECMAScript/JavaScript that used decimal64s to implement Numbers. I haven't read any of the ECMAScript standards in full, but I imagine this would be a non-conforming variation.
The only benefit Base10 has over Base2 is that it's more easily understandable by beginners.
It's more complex to implement in hardware. It still has exactly the same precision issues that must be taken into consideration. It still cannot represent everything exactly (1/3 in base 10, for example, gets truncated).
Using base2 floating point is pretty much the same as using base2 integers.
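To make the 1/3 point concrete, a quick Python sketch (the decimal module here standing in for any finite-precision decimal format; 16 digits roughly matches decimal64):

    from decimal import Decimal, getcontext

    getcontext().prec = 16    # roughly decimal64's 16 significant digits

    third = Decimal(1) / Decimal(3)
    print(third)              # 0.3333333333333333 -- rounded, not exact
    print(third * 3)          # 0.9999999999999999, not 1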
> The only benefit Base10 has over Base2 is that it's more easily understandable by beginners.
Dr. William Kahan argues better than I can [1][2]. As far back as the 1970s, he tried to convince both Intel [3] and IBM [2] to implement base 10 floating point in hardware.
First of all, in an ideal world, all software is written and reviewed by people who have taken, aced, and remembered their discrete math courses. We don't live in that world. I work with a lot of bright people, but I've come to find that many of them carry a mental model in which floating point operations essentially return a value drawn from a low-variance Gaussian distribution around the correct value. It's not uncommon for me to see doubles compared like this:
if (abs( floor(x) - floor(y) ) < 1e-30) ...
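To spell out why that pattern misbehaves (my own sketch, ignoring the floor() calls, which only make it stranger): near 1.0 the gap between adjacent doubles is about 2.2e-16, so an absolute tolerance of 1e-30 is effectively an exact-equality test, and a relative tolerance is usually what was actually wanted:

    import math

    x = 0.1 + 0.2
    y = 0.3
    print(abs(x - y))                          # ~5.6e-17, far above 1e-30
    print(abs(x - y) < 1e-30)                  # False
    print(math.ulp(1.0))                       # ~2.22e-16, spacing of doubles near 1.0 (Python 3.9+)

    # A relative tolerance matches how binary64 rounding actually behaves.
    print(math.isclose(x, y, rel_tol=1e-12))   # True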
Secondly, almost no financial or accounting rules or regulations are drafted by people with any knowledge of binary floating point semantics, so the semantics of IEEE 754-2008 decimal numbers more closely match financial rules and regulations.
Thirdly, base 10 floating point numbers are less confusing for end users and require fewer workarounds, such as Excel's display hacks to prevent users from thinking floating point artifacts are bugs in Excel.[1]
I think IEEE 754-1985 is a wonderful and well thought-out standard. However, the IEEE 754-2008 committee didn't add decimal floating point on a whim. There are very compelling use cases.
[1] http://www.cs.berkeley.edu/~wkahan/ARITH_17.pdf
[2] http://grouper.ieee.org/groups/754/email/msg01831.html (search for Yorktown Heights)
[3] https://en.wikipedia.org/wiki/IEEE_754-1985#History
Edit: It was a bit harsh to begin with "Incorrect." Sorry.
Do you have a source for “they could have designed a base 10 floating point unit, as recommended by Kahan”? It is not in the article being discussed (only one occurrence of “decimal”), and I have a hard time believing that Kahan didn't believe that binary was the superior base and should be implemented first (and is so efficient that it should be implemented even if one builds a decimal system too).
Simulations of physical systems, for instance, do not care about the base. Base 2 offers the smoothest relative precision, and a very simple way to save the first digit (it need not be stored). In fact, I seem to remember a Kahan article saying that binary was superior to all other bases that could be considered for floating-point.
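As a concrete illustration of the "do not store the first digit" point (my own sketch, decoding the binary64 bit layout with Python's struct module):

    import struct

    def decode_binary64(x):
        # Reinterpret the 8 bytes of a double as sign / exponent / fraction fields.
        (bits,) = struct.unpack(">Q", struct.pack(">d", x))
        sign     = bits >> 63
        exponent = ((bits >> 52) & 0x7FF) - 1023   # remove the exponent bias
        fraction = bits & ((1 << 52) - 1)          # 52 stored bits; the leading 1 is implicit
        return sign, exponent, fraction

    # 1.5 is 1.1 in binary: only the ".1" is stored; the leading 1 comes for free.
    sign, exponent, fraction = decode_binary64(1.5)
    print(sign, exponent, hex(fraction))           # 0 0 0x8000000000000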
See the third citation in my reply to your sibling comment.
Granted, a personal exchange after an IEEE committee meeting is a weak citation, but it is consistent with Kahan's email stating that he had tried to convince IBM to implement decimal floating point in hardware approximately 30 years prior to 2005 (my second citation in the reply to your sibling comment).
> Given the wide berth that programmers give to floating point number comparison and rounding, I wonder what would happen if a browser vendor shipped a dialect of ECMAScript/JavaScript that used decimal64s to implement Numbers. I haven't read any of the ECMAScript standards in full, but I imagine this would be a non-conforming variation.
It apparently did not make enough of a difference for the Ada, C++, C#, Fortran, Java, Pascal, mawk, nawk and Lua variants with decimal floating-point to still be maintained after 2010.
Back around '81 or '82 I was a principal in a company that marketed a software implementation of IEEE 754 and associated libraries, developed by Rick James, formerly of CDC, and Bill Gibbons, formerly of HP. It had a small market among compiler writers because no hardware supported it other than the 8087. Even so, it was faster than the native 8087, but don't hold my feet to the fire if my memory is faulty. It was a very hairy specification to implement, because it included everything, even the kitchen sink. It was the kind of floating point that an astrophysicist would love. (I had several of those for roommates way back when.) Even by the '90s, I don't think any hardware manufacturer had implemented it in all its glory. If someone told me that no one has done so yet, I would not be surprised.
"I think it is nice to have at least one example -- IEEE 754 is one -- where sleaze did not triumph. CDC, Cray and IBM could have weighed in and destroyed the whole thing had they so wished. Perhaps CDC and Cray thought `Microprocessors? Why should we worry?' In the end, all computer systems designers must try to make our things work well for the innumerable ( innumerate ?) programmers upon whom we all depend for the existence of a burgeoning market."
Indeed, we programmers are very lucky that IEEE p754 won and became the ubiquitous standard.
A few years ago, I had the opportunity to speak with someone who was involved in p754, and he claimed that Kahan took advantage of the fact that a number of his students were on the committee to add unnecessary complexity to the standard.
I offer this anecdote not because I necessarily believe it's true, but because I think it offers more perspective on just how acrimonious the arguments over things like gradual underflow were.
I'm not a CPU designer, and so I can't speak to the merits of the case, but I do note that to this day, a large fraction of embedded CPUs do not have hardware support for denormals, but rely on the OS to implement it in software. My intuition is that this is not due to it being intractable, but rather the FPU being an afterthought on a lot of embedded processors.
Does anyone know if the current/recent processors from Intel and AMD still perform relatively slowly in number crunching with denormals and NaNs? I remember it used to be like hitting a brick wall performance-wise if these crept into your data...
AMD hasn't been slow on NaNs for quite a while, whereas until a few generations ago Intel was easily 100 times slower as soon as NaNs showed up in the data. The first Intel processors that don't stall are the Sandy Bridge ones, apparently.
Inf and NaN have been handled at speed at least as far back as Core (probably on P4 too, but I'm not certain, as I never really worked with those). [edit: I’m referring to SSE/AVX; I forget that there are people who still use x87, where this stall is still around].
Denormal stalls, which are on the order of 100x, started to go away for addition in SNB. (IIRC, AMD doesn't handle denormals at speed either, but I'd need to double-check that).
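If anyone wants to check their own hardware, here is a rough sketch using NumPy (results depend heavily on the microarchitecture and on whether FTZ/DAZ flush-to-zero modes were enabled when your libraries were compiled):

    import timeit
    import numpy as np

    n = 1_000_000
    cases = {
        "normal":   np.full(n, 1.0),
        "denormal": np.full(n, 1e-310),   # below ~2.2e-308, stored as subnormals
        "nan":      np.full(n, np.nan),
    }

    for name, data in cases.items():
        # Multiplying by 0.5 keeps denormal inputs denormal, so any slow path
        # is exercised on both inputs and outputs.
        t = timeit.timeit(lambda d=data: d * 0.5, number=200)
        print(f"{name:8s} {t:.3f}s")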
"I did not bill Intel for consulting hours spent on those aspects of the i8087 design that were transferred to IEEE p754. I had to be sure, not only in appearance and actuality but above all in my own mind, that I was not acting to further the commercial interest of one company over any other. Rather I was acting to benefit a larger community. I must tell you that members of the committee, for the most part, were about equally altruistic. IBM's Dr. Fred Ris was extremely supportive from the outset even though he knew that no IBM equipment in existence at the time had the slightest hope of conforming to the standard we were advocating. It was remarkable that so many hardware people there, knowing how difficult p754 would be, agreed that it should benefit the community at large. If it encouraged the production of floating-point software and eased the development of reliable software, it would help create a larger market for everyone's hardware. This degree of altruism was so astonishing that MATLAB's creator Dr. Cleve Moler used to advise foreign visitors not to miss the country's two most awesome spectacles: the Grand Canyon, and meetings of IEEE p754."
Also, please read "What every Computer Scientist should know about floating-point arithmetic" available at https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/02Numer....