And likewise they knew better than to use floating-point where it doesn't work.
Well, essentially, of course.
You mean 0.5-1
0-1 is about a quarter of the numbers, just as many as 1-inf
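A quick way to see that: for positive values, IEEE 754 doubles sort the same way as their bit patterns, and the bit pattern of 1.0 sits almost exactly halfway between 0 and infinity. A minimal Python sketch (the helper name is mine):

    import struct

    def bits(x):
        # Bit pattern of a double, as an unsigned 64-bit integer.
        return struct.unpack("<Q", struct.pack("<d", x))[0]

    one = bits(1.0)            # 0x3FF0000000000000
    inf = bits(float("inf"))   # 0x7FF0000000000000
    print(one / inf)           # ~0.4998: about half of the positive doubles lie in (0, 1)

Counting negative values too, (0, 1) holds roughly a quarter of all doubles, and [1, inf) another quarter.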
Traditional explanation done well:
The title reminds me of a friend of mine who attends a data science master's course without a computer science background. He asked me for help with his numpy homework on Bayesian probability theory. The numbers he got were so small that he got weird results. I immediately suspected float64 couldn't represent numbers that small, so I told him to use the longdouble dtype (it's 80 bits on Intel, I think) and the problem was solved.
Later he told me even the professor hadn't anticipated this case when assigning the homework, and many students couldn't finish it on time because of it.
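A sketch of the kind of underflow he was probably hitting (made-up numbers; note that np.longdouble is the 80-bit x87 type on x86 Linux/macOS, but is just float64 on Windows and most ARM builds):

    import numpy as np

    # 50 independent likelihoods of 1e-8 each; the true product is 1e-400.
    probs = np.full(50, 1e-8)

    print(np.prod(probs))                          # 0.0 -- underflows float64 (min normal ~1e-308)
    print(np.prod(probs.astype(np.longdouble)))    # ~1e-400 where longdouble is really 80-bit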
This is a very dangerous habit. If 64 bits isn't enough you are generally using the wrong representation.
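For the tiny-probability case, the "right representation" usually means working in log space rather than reaching for a wider float; a minimal sketch:

    import math

    log_probs = [math.log(1e-8)] * 50      # the same 50 likelihoods, but as logs
    log_posterior = sum(log_probs)          # about -921, comfortably inside float64's range
    print(log_posterior)
    print(math.exp(log_posterior))          # 0.0 -- so only convert back at the very end, if ever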
Relevant xkcd: https://xkcd.com/435/
Or as my friend put it: everything is just a trivial special case of either mathematics or physics. ;-)
"Die ganzen Zahlen hat der liebe Gott gemacht, alles andere ist Menschenwerk"
A verbose, fairly literal translation: "The dear God made the whole numbers; everything else is the work of man."
"How Java’s Floating-Point Hurts Everyone Everywhere"
p.s.: it's by William Kahan, who was the primary architect behind the IEEE 754-1985 standard for floating-point computation (and its radix-independent follow-on, IEEE 854).
Fortran, D, Factor, and SBCL likely have better support.
But anything where a compiler can reorder things is suspect if you need exact behavior, since even simple register spilling causes trouble (on Intel's 80-bit x87 registers), among other things compilers may do.
You don't need reordering to get bugs, either. Simply having a compiler replace (a-b)+b with a is a bug. Simply flushing an internal 80-bit floating-point register to a 64-bit memory slot, then loading it back, changes the result of a computation.
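The point about (a-b)+b is easy to check; in Python (which doesn't optimize it away, so the rounding is visible):

    a, b = 1e-16, 1.0
    print((a - b) + b)        # 1.1102230246251565e-16, not 1e-16
    print((a - b) + b == a)   # False -- so a compiler that rewrites this to plain `a` changes results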
Note the C++ spec specifically states that it does not require any particular underlying representation or even any particular level of accuracy. So many of these bugs are allowed.
The spec does not even require that compile-time evaluation of an expression match a runtime evaluation of the same expression (8.20.6). When you hit that in code you're going to be surprised.
C/C++ is not required to round-trip floats/doubles through printing and parsing. Stream formatting is allowed to be (and is) inexact, leading to different behavior when formatting the same number on different systems.
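For instance (Python here for brevity, but the digit-count issue is the same one C++ streams hit, which default to only six significant digits):

    x = 0.1 + 0.2                      # 0.30000000000000004
    print(float(repr(x)) == x)         # True: repr() emits enough digits to round-trip
    print(float("%.15g" % x) == x)     # False: 15 significant digits are not enough for a binary64
    print(float("%.17g" % x) == x)     # True: 17 significant digits always round-trip a binary64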
What made C/C++ numerics less of a mess was implementation-defined behavior that compiler vendors added under market pressure to deal with these issues, not anything in the spec.
C/C++ has notoriously been bad at all this, which is why the history of it is littered with such bugs, errors, undefined behaviors, and dozens of compiler switches to try and mitigate such behavior.
For example, many compilers notoriously replaced high-precision Kahan summation with low-precision regular summation, because optimization settings told the compiler that such transformations were allowed.
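Kahan summation is exactly the kind of code such "real-number algebra" optimizations destroy, because the compensation term is algebraically zero. A sketch in Python (where nothing optimizes it away):

    def naive_sum(xs):
        s = 0.0
        for x in xs:
            s += x
        return s

    def kahan_sum(xs):
        s, c = 0.0, 0.0
        for x in xs:
            y = x - c          # corrected next term
            t = s + y          # low-order bits of y are lost here...
            c = (t - s) - y    # ...and recovered here; algebraically c is always 0
            s = t
        return s

    data = [1.0] + [1e-16] * 1_000_000   # true sum is 1.0000000001
    print(naive_sum(data))               # 1.0 -- every tiny term is rounded away
    print(kahan_sum(data))               # ~1.0000000001 -- the compensation keeps them

A compiler allowed to assume real-number arithmetic may simplify c = (t - s) - y to c = 0, silently turning kahan_sum back into naive_sum.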
The state of the art in C/C++ is now passable, but it is still flawed, and has been historically very bad.
And all this met the C/C++ spec of the time: if the spec does not require IEEE 754 as the underlying representation, then it cannot restrict compilers to only IEEE-754-preserving transformations without making other representations behave badly.
A PhD student is very different from a professional developer with years of real-world experience in industry. It's a completely different job.
The title is highly misleading.
A "developer" is just someone who writes software and not necessarily a "professional developer" with years of experience in a particular kind of job like what you have in mind.
Moreover, I would even expect that the vast majority of "professional software engineers" who make a living strictly writing software, without any particular domain in focus, might actually fare worse on floating-point gotchas than scientific domain workers who are "software engineering dilettantes" but nonetheless do computation as their job.
>Perhaps not surprisingly, the most predictive factor is simply Contributed Codebase Size, the effect of which is shown in Figure 16.
>There is a gain of only about 2/15 compared to those who reported a codebase where floating point was not intrinsic or where they were not involved.
The title would be better suited if they included a higher number of HPC/ML/etc software engineers (post PhD) that work on larger scale projects in industry/labs.
Specifically, I disagree with this assessment:
>We believe that the combination of our recruitment process and the resulting background of the participants illustrated here suggest that our sample is a good representative of software developers who write code for, and in support of, science and engineering applications.
That's just outright ignoring all science and engineering applications outside of academia/academic research.
> The title would be better suited if they included a higher number of HPC/ML/etc software engineers (post PhD) that work on larger scale projects in industry/labs.
Their premise is that if folks in academia are having a hard time with it, then perhaps the training and tooling around floating point needs some reconsideration. That seems very reasonable to me.
There's been 50+ years of this reconsideration. Some problems are simply hard, and perhaps nothing will make it easier than to simply learn the nuances.
For some domains, there are some solutions, like BigInt to avoid overflow, but each comes with other tradeoffs.
For general floating-point, it's mathematically quite likely there is no better tradeoff. You cannot stuff infinite values into finite space.
What % of scientific code do you think is written outside the areas this paper surveyed? Having been on both sides, I find that the majority of numerical code, especially anything more complex than simple math (which is almost always written poorly outside academic areas), is written/developed by academics, and the results adopted as libs in engineering.
>"Arithmetic operations on signaling NaNs turn them into quiet NaNs with a different, but often similar, bit pattern. However, on some processors merely copying a signaling NaN also performs that conversion. In particular, copying a signaling NaN to return it to the calling method may perform this conversion."
It doesn't work in the pathological case, though, as any float has a fixed number of bits; and I see the article has been amended with a note.
On the one hand I find the results unsurprising: even people I know to have worked a lot on numerics often have only a rudimentary understanding of the intricacies of corner-case behaviour in floating point, and yes, that absolutely manifests as hitting a wall when something curious goes wrong. Mostly this results in head-banging until you discover that the piece of code going wrong is the numerical piece, and then very quickly you start looking at possible floating-point gremlins.
Having said that, though, this paper seems to have a very academic view of what HPC is. Even for people designing HPC systems, numerical optimization is rarely a huge chunk of the job, so it's probably not actually that important; I think the fact that few people have a good understanding reflects the fact that it's not often necessary.
Finally, while we're on the topic: does anyone know a good tool that lets me write an equation, specify the precision of the inputs/outputs, and get back the required precision of all the operators? I know Matlab has the DSP toolbox, but it has some serious limitations; I'm still in search of something fantastic.
If you were going to do this, though, I'd just use two ints acting as a single fixed-point number. Less glue logic and likely more precision. A float using all of the bits would likely be better than int + float as well, depending on how you configured it.
Apart from greater speed, floating point numbers don't really have any advantages over exact real numbers. Problems like the inability to compute discontinuous functions over the real numbers manifest as even worse problems over the floating point numbers.
I'm also a believer that anything beautiful is probably going to end up useful. I think exact real computation (as described in Type Two Computability Theory) is one such thing. Automatic differentiation was another thing I found beautiful, before I found out about its applications to neural networks.
The complexity theory of type-2 functionals is a theory which is natural to ask for, but after results by Cook, Bellantoni, Kapron, Seth, Pezzoli and others, has come to an impasse and now seems abandoned.
Regarding computability, it's not so bad. Basically a continuous operator is usually computable. Example: Integration is continuous, hence (likely, and in this case actually) computable.
With floating point numbers, you can solve quadratic equations, calculate sin/cos, calculate log/exp. These things have lots of applications.
No you can't.
> What's your point?
Floats really have lots of advantages over exact real numbers. Being able to compute transcendental functions is one of them. There're others.
They're freaking fast: my cheap laptop with an integrated GPU can do almost 1 TFLOPS. They're fixed-size, i.e. you can handle them without dynamic memory allocation.
They have disadvantages of their own, though: for instance, if two of them are equal then in general you can't determine that they are in finite time.
This is arguably a problem with floating point as well. Two computations F and G which ought to produce the same result can end up differing because of rounding errors, so equality testing is not advisable over floating point either. Also, inequality of real numbers is still semi-decidable, which is useful.
People keep bringing up these problems around not being able to compute discontinuous functions, but those problems still manifest themselves over the floating point numbers, in the sense that a small rounding error in the input can drastically affect the output. This obviously isn't a problem over the rational numbers, but a lot of operations like square-root are not possible over the rational numbers.
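The classic small demonstration of that failure mode is that floating-point addition isn't associative:

    print((0.1 + 0.2) + 0.3)   # 0.6000000000000001
    print(0.1 + (0.2 + 0.3))   # 0.6
    print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False: two routes to the "same" value disagree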
While both rationals and symbolics have their use, saying that floats don’t have any advantages over them is just wrong.
1. I did acknowledge that floating point numbers are faster. I expressly stated that they do have some advantages over real numbers.
2. I'm not talking about symbolic computation.
Yes you did. And then you said it’s their only advantage. It’s not the only one.
For example, in some environments, dynamic memory is undesirable. Especially since you mentioned physical sensors: many real-life sensors are not directly connected to PCs but instead are handled by MCU chips.
> I'm not talking about symbolic computation.
The numbers you’re talking about are pretty close to symbolics in many practical aspects. From Wikipedia article “Computable number”:
equivalent definitions can be given using μ-recursive functions, Turing machines, or λ-calculus
I see this all the time with developers who have zero embedded experience like you: you don't seem to understand that in many embedded environments, dynamic memory allocation is simply not possible. You allocate all memory at the start of the program, and that's it. So doing math the way you suggest is completely impossible.
And ultimately, I'm interested in beautiful things as well as practical things. I fully acknowledged that the idea here is impractical, though theoretically possible, with some very interesting theorems coming out of it. I don't want to always think like a hard-nosed practitioner. That's stupidly dogmatic.
Const-me is complaining that I didn't mention his pet interest, which is GPGPU. And then he said some totally irrelevant and obvious stuff (to anyone who knows CS), just like you did. This is why we can't have nice things.
I've been programming for a living for a couple of decades now, and I like writing efficient code. Not all ideas, however beautiful they might seem to some people, are good enough for actual software that people use to accomplish stuff. This idea is one of them. Spending thousands of CPU cycles to compute a=b*c is extremely inefficient in the vast majority of cases. Even a hardware-based implementation won't be efficient; see what happened to LISP machines.
Apparently, that thing that you find beautiful, I find impractical and therefore only applicable to a narrow set of problems and environments.
Speaking of environments: even on PC, I code for GPGPU as well. Due to extreme levels of parallelism, GPU-running code has almost no support for dynamic memory: there are append/consume buffers, but they're very limited and not actually dynamic, just a static buffer with a thread-safe cursor over it.
My experience is: those who should know about it, do. Of course, a lot of app or web developers don't. And I think that is okay.
The bigger issue is code designed for float32 (for performance reasons) being assumed valid for float64 data. And in these cases the question is always: what is the appropriate way to communicate to your user the intended use case of the function? When the writer is long gone, a user will not intuitively know. Whenever I have to use a 3rd-party library (ahem, PCL), I peruse the code and grep for float variables... If I see too many, I simply assume I need to recenter my float64 data to be near 0.0.
Not everybody has the experience for this diligence, so it appears as if developers don't understand.
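A toy illustration of why the recentring matters (made-up UTM-style coordinates; the float32 ulp near 500,000 is about 0.03):

    import numpy as np

    pts64 = np.array([500000.123, 500000.127])     # two points 4 mm apart, float64
    print(np.diff(pts64.astype(np.float32)))        # [0.] -- the gap vanishes in float32

    centred = pts64 - pts64.mean()                  # recenter near 0.0 before narrowing
    print(np.diff(centred.astype(np.float32)))      # [~0.004] -- the gap survives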
This is an active research area; ENS de Lyon has a lot of material available if you're interested in implementing floating point on FPGAs. There's also Vol. 2 of The Art of Computer Programming, where Knuth shows the reader how Babylonian mathematicians made extensive use of a sexagesimal (radix-sixty) positional notation that was unique in that it was actually a floating-point form of representation with the exponents omitted.
for(i = 0.0; i < 1.0; i += 0.1)
edit: My first Clojure program was the last time this got me--LISP on the outside, Java floats/ints/bigints on the inside.
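A minimal sketch of why that loop bites, with the accumulation written out in Python:

    i, iterations = 0.0, 0
    while i < 1.0:
        iterations += 1
        i += 0.1

    print(iterations)   # 11, not 10: after ten increments i is 0.9999999999999999, still < 1.0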
After all, every damn fool knows--even ones like myself who've implemented FP in assembly--that 0.1 + 0.1 + 0.1 = 0.30000000000000004.
Yes. Because you clearly don't understand what you're doing at all if you believe that equivalence relation is defined the same way for floating-points as it is defined for integers.
Floating-points aren't numbers. They're more like confidence intervals - and at a certain limit a confidence interval behaves just like an integer. But it's only a limit.
The fact that most users of programming languages don't want to keep track of that in their code only reinforces my belief that they must do exactly as the software guidelines tell them to.
In most SQL databases, fixed-point decimal types are predominant. Most LISPs have a numeric tower that automatically juggles fixnums, bignums, ratios, floats, and complex numbers. In the 8/16-bit days--before everything had a floating-point unit--it was not uncommon to store and calculate numbers directly in BCD: two digits to the byte, and all of the operations--including formatting--were very straightforward to implement. Many of the old business mainframes even have hardware instructions to pack/unpack/operate on BCD.
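A rough sketch of packed BCD (my own toy helpers, two decimal digits per byte), just to show how little machinery it takes:

    def bcd_pack(digits: str) -> bytes:
        digits = digits.zfill(len(digits) + len(digits) % 2)   # pad to an even digit count
        return bytes(int(digits[i]) << 4 | int(digits[i + 1])
                     for i in range(0, len(digits), 2))

    def bcd_unpack(data: bytes) -> str:
        return "".join(f"{b >> 4}{b & 0xF}" for b in data)

    packed = bcd_pack("190283")
    print(packed.hex(), bcd_unpack(packed))   # 190283 190283 -- the digits are readable even in a hex dump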
> that most users of programming languages don't want to keep track of that...
Why would I want to keep track of that when the language (or more accurately, the data type) will do it for me?
Original example was about steps, not termination. With FP, each tick label has to be post-formatted with a precision specifier to get the desired behavior. With BCD, casting to a string is sufficient (and super-simple to implement).
It's not going to happen much until we stop pushing floats as the standard solution for dealing with decimals. I'm pretty sure they are more commonly abused than used at this point, and most cases would be better off with fixpoints, rationals or bignums in increasing order of flexibility and complexity.
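In Python terms those alternatives are roughly decimal.Decimal, fractions.Fraction, and int, and they sidestep the usual surprises:

    from fractions import Fraction
    from decimal import Decimal

    print(0.1 + 0.1 + 0.1 == 0.3)                    # False -- the float surprise
    print(Fraction(1, 10) * 3 == Fraction(3, 10))    # True -- exact rational arithmetic
    print(Decimal("0.10") * 3)                       # Decimal('0.30') -- exact decimal fixed point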
Lisp and Perl6 get it more or less right. My own baby, Snigl, only supports integers and fixpoints so far. I might well add floats eventually, but I intend to hide them well enough not to encourage anyone who isn't motivated.
I'm curious, in that sentence, who is "we"? I cannot imagine a scenario where someone would be so technically involved to be aware of the difference between a floating-point and fixed-point decimal, but still decide to use floating-point in an accounting environment.
Are you talking about project managers? Architects?
That's what it means that we've established floats as the standard. You have to specifically choose not to use floats, look up your language's entirely non-standardized approach for representing exact decimals, and then maintain constant vigilance against entirely intuitive things like decimal literals.
I knew well enough, but the system was already approaching a million lines of semi-working floating point workarounds by the time. And since it was all my crazy idea, I would have been held responsible for every single rounding error from that point on, had I managed to get the ball rolling at all. Life is too short and too precious.
I was thinking language designers mainly; but user preferences are shaped by habits, which means that anything but floats will take more effort to communicate and popularize.
edit: Observe how popular this comment is for even floating (take that!) the idea. Plenty of people have serious issues when it comes to floats, that much is obvious. I'm guessing it's mostly avoiding change at any cost, so good luck to us all.
Integers with implied decimals are the way to go.
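A minimal sketch of the idea, storing money as integer cents and only inserting the decimal point when formatting:

    subtotal_cents = 110 + 220                                     # $1.10 + $2.20, exact
    print(f"${subtotal_cents // 100}.{subtotal_cents % 100:02d}")  # $3.30

    print(1.10 + 2.20)                                             # 3.3000000000000003 with floats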
But in 30 years, I have yet to come across a piece of corporate software that's not crap. Established companies, consultants, startups; all crap. Different kinds of crap, it's not all rounding errors; but none of it would survive in most open source projects.
Because quality is only an issue as far as it increases short term profits, and that's not a very successful heuristic for writing good software.
> This question asks whether in floating point 0.0/0.0 is a non-NaN value, which it is not. NaN generation is desirable here, since it will propagate to the output as a NaN and thus make the user suspicious.
Horrible nonsense. What is desirable is an exception so that it's loudly confirmed to the user that something has screwed up. We don't want to lead the user on with subtle hints and suspicions.
Even a dollar-store calculator from 1985 gets this right by locking up with an E on the display, requiring a clear.
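For what it's worth, one way to get that loud failure in numpy is to turn the IEEE "invalid" flag into an exception instead of a quiet NaN:

    import numpy as np

    np.seterr(invalid="raise", divide="raise")
    np.array([0.0]) / np.array([0.0])   # raises FloatingPointError instead of quietly returning [nan]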