> many if not most mathematical functions won't be exact anyway
That's actually a really interesting question - while this is obviously true for most functions which (in a mathematical sense) exist, I wonder if it's true for "all functions weighted by their use in computing applications"? That is - do boring old "addition, subtraction, and multiplication of integers" outweigh division, trigonometrics, etc.?
In 3D modelling/video games, almost certainly not. In accounting software...probably? Across the whole universe of programs: who could say?
You're missing the other major problem, which is that range is mutually exclusive with precision. The scientific community discovered a long time ago that exponential notation is the superior way to represent both very large and very small values, because the mantissa is shifted to the place where precision is needed most.
>In 3D modelling/video games, almost certainly not.
A 32-bit integer divided into a 16-bit whole part and a 16-bit fraction would be limited to representing values between -32768 and 32767, while also having worse precision than a 32-bit IEEE 754 float at values near 0.
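To put rough numbers on that (a throwaway sketch, assuming the hypothetical signed 16.16 layout described above, nothing standardized):

```python
# Throwaway sketch: a signed 16.16 fixed-point number is an int32 whose real
# value is the stored integer divided by 2**16.
import struct

SCALE = 2 ** 16

def from_fixed(q: int) -> float:
    return q / SCALE

# The whole-number range really is just the 16-bit part...
print(from_fixed(-2**31), from_fixed(2**31 - 1))   # -32768.0, ~32767.99998

# ...and the smallest positive step is the same ~1.5e-5 everywhere:
print(from_fixed(1))                               # 1.52587890625e-05

# A 32-bit IEEE 754 float, by contrast, still resolves much smaller values near 0:
print(struct.unpack('f', struct.pack('f', 1e-7))[0])   # ~1e-7 survives the float32 round-trip
```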
>In accounting software...probably?
Representing money in terms of cents instead of dollars removes the need for real numbers entirely, outside of "Office Space" scenarios where tracking fractions of cents over millions of transactions adds up to tangible amounts of money.
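If it helps, the scheme looks roughly like this (the prices and the half-up tax rounding are made up for illustration):

```python
# Integer-cents sketch: every amount is an int, so addition, subtraction, and
# multiplication by whole quantities are exact.
price_cents = 19_99                # $19.99
subtotal = 3 * price_cents         # 5997 cents, no rounding anywhere

# Rounding happens only where you explicitly choose it, e.g. 8.25% sales tax,
# rounded half-up to the nearest cent using only integer arithmetic:
tax_cents = (subtotal * 825 + 5_000) // 10_000   # 495 cents
total = subtotal + tax_cents                     # 6492 cents
print(f"${total // 100}.{total % 100:02d}")      # $64.92
```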
>Across the whole universe of programs: who could say?
Most computer programs don't need real numbers of any sort, and the ones that do need to be written by people who understand basic mathematical concepts like precision.
> A 32-bit integer divided into a 16-bit whole part and a 16-bit fraction would be limited to representing values between -32768 and 32767, while also having worse precision than a 32-bit IEEE 754 float at values near 0
...OK? I'm not sure how that relates to my supposition that trigonometric operations are likely to be more common in 3D modelling cases. I'm not arguing for or against any particular representation of numbers therein.
> Representing money in terms of cents instead of dollars removes the need for real-numbers entirely
I wasn't imagining dollars-and-cents, but rather rates - X per Y, the most natural way in which division arises in real life.
> Most computer programs don't need real numbers of any sort...
You're again arguing against a case I'm not making. I'm not making any claims about the necessity (or otherwise) of real numbers in programs, but simply wondering about the prevalence of particular operations.
> and the ones that do need to be written by people who understand basic mathematical concepts like precision.
A snide insult motivated by your own misunderstanding of my point. I understand precision, and it's irrelevant to my point.
I had thought this also had to do with GPU physics: at some level of precision in the float it matters, so one machine configuration will not precisely match another machine doing the same calculation. Or something like that.
Normally I would say that it is hard to tell, because it is. But I think in this particular case I have a reasonable argument: back in 2014, when Python added support for the matrix multiplication operator `@`, the proposal author did a survey and made a case for it [1]. And you can see that the exponentiation operator `**` is actually used more than division `/` even in non-scientific code. And as you've guessed, exponentiation won't be exact if its exponent is negative.
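To spell out that last point with plain Python 3 (just the builtin `**`, nothing from the survey itself):

```python
# With a negative exponent, integer exponentiation silently leaves the integers:
print(2 ** 3)     # 8        (int, exact)
print(2 ** -3)    # 0.125    (float)
print(10 ** -1)   # 0.1      (float, and not exactly one tenth in binary)
print(3 ** -1)    # 0.3333333333333333
```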
Since addition, subtraction, multiplication, and modulus are each used more than division and exponentiation _combined_ (and since not every use of those last two operators produces an "inexact" result), I think we can pretty clearly conclude that most usages of mathematical operators in these libraries will produce an "exact" result. (I'm hand-waving on the definition of "exact"; I don't think it's at issue here.)
Which is not, of course, a good justification for ceasing to worry about the problem, since a) those packages might not be representative of all libraries, and b) a small proportion of uses might result in a disproportionate amount of bugs.
> a small proportion of uses might result in a disproportionate amount of bugs.
This is what concerns me. Sure, using decimal floating point solves 0.1 + 0.2 = 0.3 (which I can't imagine ever writing in real code). But if you get used to that, then you start to expect 0.1*x + 0.2*x to be 0.3*x, and depending on what x is this may or may not be true. Maybe it works for all of your test cases (because your test cases are things like 2 and 10^-4), but then you accept some user input and start getting weird bugs (or infinite loops). There is no good solution besides expecting and preparing for rounding error.
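To make that concrete with Python's `decimal` module (default 28-digit context; the choice of 1/3 is just one example of a value that can't be stored exactly):

```python
from decimal import Decimal

# The headline literal case is indeed fixed:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True

# But any value that isn't a short decimal still gets rounded, and the error
# flows through later arithmetic just as it does with binary floats:
x = Decimal(1) / Decimal(3)
print(x)             # 0.3333333333333333333333333333
print(x * 3 == 1)    # False: it's 0.9999999999999999999999999999
```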
I'm trying to find the quote you're talking about, but I just see a comparison between the stdlib, scikit-learn, and nipy. And the import stats are just what was on GitHub in 2014; I think it is safe to say that most code is not publicly available on GitHub.
Though regardless of usage, I think that people doing stuff that needs floats are more likely to understand why they need them, and to be able to use them explicitly without much issue. By using Python, and most other high-level languages, we're already making sacrifices to make things easier to use and understand, and in Python specifically we're told that explicit is better than implicit, except for this.