Calculating the mean of a list of numbers (2016) (hypothesis.works)
271 points by GregBuchholz 27 days ago | 94 comments



If you're reaching these values then it's extremely likely that either:

(a) You're doing something wrong (usually, not standardizing your data), which means that you're not thinking about the fact that computers have finite numerical precision and adjusting your problem accordingly (e.g., when your numbers have a wide dynamic range).

Or, (b) your problem is pretty ill-conditioned and there's probably no solving it in any useful way unless you crank up the precision.

Almost every problem I've encountered of this form (whenever I've worked on optimizers, or, more generally, solving problems) is of type (a). There aren't many more choices if your problem is of type (b), since you're almost always losing digits somewhere in any implementation of a numerical algorithm.

The mean of two numbers is a particularly easy example to analyze, but I don't think that the implementation is "wrong" since every (even slightly more complicated) numerical algorithm will run into the same problems. I think this is a good post in its content, I just don't agree with the author's final interpretation.


Most of the issues discussed in the article are not from limits in precision; they're from overflows that arise from the limits of the exponent range. Finding the average of 8e+307 and 8e+307 is a "low precision" problem, and even naive methods to avoid overflow will not hit limits on precision in this case (e.g. 8e+307/2 + 8e+307/2).
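A quick Python sketch of that point (the function names here are made up for illustration), using the two values from the article, each just under half the double maximum of ~1.8e308:

```python
import math

a = 8.988465674311579e+307
b = 8.98846567431158e+307

def naive_mean2(a, b):
    return (a + b) / 2      # a + b overflows to inf before the division

def safe_mean2(a, b):
    return a / 2 + b / 2    # dividing first keeps intermediates in range

print(naive_mean2(a, b))    # inf
print(safe_mean2(a, b))     # a finite value between a and b
```

Dividing by 2 first can lose a bit for subnormal inputs, but for values this large it is exact.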

You're right that there are issues with precision in floating point math (the article touches on this a little), but the implementation is definitely wrong if it promises to work, as in the case of Python's statistics library (which correctly used the fractions module to store numbers of arbitrary size, but did the conversion to a fraction in the wrong order).


Sorry, I should’ve been more clear: when I said “precision” I meant any of several things related to floating point arithmetic including, but not limited to: actual numerical precision of operations, non-associativity of operations, overflows (which affects the latter case as well), etc. (Which is why I mention dynamic range in the GP.)

The point is that there is an “unwritten contract” between the user and the library: if we want any kind of performance, then we have to promise that the inputs are “well behaved” (in a very specific sense that varies from algorithm to algorithm). The user knows this and so does the developer, because otherwise most algorithms will spend most of the actual compute time converting problems to “fully general forms” that work on every input, but are hundreds if not thousands of times slower than the original.[0]

The point is that most people want “mean” to be fast, and a simple implementation of mean will work on almost every case, without needing to resort to arbitrary precision fractional arithmetic. The claim I make above is that the latter thing is almost never what people need, because if they did then almost every numerical algorithm is doomed from the start, whenever they perform operations that are more complex than “take the mean.” Now, if they really need this to be done for the huge numbers given (or whatever the case is), then there are libraries that will do this at a huge cost in performance. (But, if they really did need it—for example, when a problem cannot be rewritten in a simple way that would work with the available implementations—my bet is that this user would know).

Also apologies for mistakes, I’m currently on my phone walking to a meeting.

——

[0] that isn’t to say that if you can write a fast and general version, that you shouldn’t! Just that it usually incurs some overhead that is often unacceptable (see, e.g. using arbitrary precision fractions for Gaussian elimination).


Great explanation and detail. If one faces a badly conditioned problem, one must think about the error inherent to the inputs. Anything measured from real sensors is already corrupted by that perturbation matrix. Even in some perfect scenario like a model generated by CAD, the corresponding physical object is not going to match.

Numerical stability feels like this black art that programmers are scared of, but maybe it's less scary if you explain the core problem: nothing in the world is a real number, it's always a probability distribution.


> using arbitrary precision fractions for Gaussian elimination

Here's a way to get exact results from just standard precision, say, integers 32 bits long:

For Gauss elimination, that is, solving a system of linear equations, say m equations in n unknowns: we are given, in floating point, an m x n matrix A, an n x 1 unknown vector x, and an m x 1 right-side constant b. So we are looking for x so that

Ax = b

Now, multiply each equation by an integer, say, a power of 2, so that all the numbers in A and b are integers.

Next get a list of positive prime numbers, each, say, stored in an integer 32 bits long. Say we have p such prime numbers; so we have prime number i for i = 1, 2, ..., p.

Next, for each i, solve Ax = b using Gauss elimination, but in the field of integers modulo prime i. For division, use the extended Euclidean (greatest common divisor) algorithm to compute modular inverses. Yes, for this arithmetic we have to be able to form the 64-bit sum or product of two 32-bit whole numbers and then divide by a 32-bit integer and keep the 32-bit remainder; commonly we write a little assembler routine for this.

After doing that work p times (it could be done in parallel), we use the Chinese remainder theorem to put together the solution's rational numbers, whose numerators and denominators are multi-precision whole numbers. With those, we can get floating point approximations as accurate as we please.
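The reconstruction step can be sketched in Python (a toy, assuming Python 3.8+ for `pow(x, -1, m)`; in the real scheme the residues would come from the p modular eliminations, here we just verify the reconstruction on an integer):

```python
from math import prod

def crt(residues, moduli):
    # Chinese remainder theorem for pairwise-coprime moduli:
    # find x with x = r_i (mod m_i) for all i, unique mod prod(moduli)
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m) is the modular inverse
    return x % M

# toy check: recover a big integer from its residues mod three 32-bit-sized primes
primes = [2147483647, 2147483629, 2147483587]
n = 12345678901234567890
assert crt([n % p for p in primes], primes) == n
```

As long as the product of the primes exceeds the magnitudes of the numerators and denominators being reconstructed, the recovery is exact.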

But if we want to work in floating point arithmetic anyway, then there are three ways to do better:

(1) To start, in Gauss elimination, look down column 1 of A, find the row with the entry of largest absolute value, and swap rows to put that entry in row 1, column 1. More generally, after eliminating p - 1 rows, we will want a pivot element in row p and column p: by swapping rows, use the entry largest in absolute value among column p, rows p, p + 1, ..., m. (This is partial pivoting.)

(2) When forming an inner product, say,

u(1)v(1) + u(2)v(2) + ... + u(q)v(q)

form each product in double precision and add the double precision numbers. If we want to do a little better, then before adding, sort the numbers so that we are adding the smaller numbers first. Or recall:

"The main sin in numerical analysis is subtracting two large numbers whose difference is small.", etc.

(3) Once we have x so that Ax ~ b, find the difference d = b - Ax, then solve Ae = d for e and replace x by x + e, so that A(x + e) = Ax + Ae = (b - d) + d = b. So take x + e as the solution. After the initial Gauss elimination, solving Ae = d can go relatively quickly. This step can be done more than once.
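Step (3) in a minimal Python sketch (illustrative names; a direct 2 x 2 solve via Cramer's rule stands in for the Gauss elimination):

```python
def solve2(A, b):
    # direct 2x2 solve (Cramer's rule); stands in for Gauss elimination
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def refine(A, b, x):
    # one refinement step: d = b - Ax, solve Ae = d, return x + e
    d = [bi - sum(a * xj for a, xj in zip(row, x)) for row, bi in zip(A, b)]
    e = solve2(A, d)
    return [xi + ei for xi, ei in zip(x, e)]

A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 4.0]            # exact solution is x = [1, 1]
x0 = [1.2, 0.7]           # a deliberately sloppy starting point
x1 = refine(A, b, x0)     # lands (up to rounding) on [1.0, 1.0]
```

With exact arithmetic a single step fully corrects any error in x, since the system is linear; in floating point it knocks the error down, which is why the step can be repeated.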


In all of these cases it could also be argued that the mean is not a useful metric for the data.


For problem (b), this is the standard case for inverse problems, which are almost always ill-posed / ill-conditioned. There is now a vast literature on this subject, including many algorithms for solving them robustly.

"...incorrectly posed problems were considered ‘taboo’ for generations of mathematicians, until comparatively recently it became clear that there are a number of quite meaningful problems, the so-called ‘inverse problems,’ which are nearly always unstable with respect to fluctuations of input data." — H. Allison, Inverse Unstable Problems and Some of Their Applications (1979)


Hmm... I'm not sure I fully agree. Fundamentally, an ill-posed problem maps a large input space to a very small output space, which means that inverting it is impossible under any noise (in a very similar way to, say, the Nyquist-Shannon theorem)—unless you know something more about the noise (e.g., concentration phenomena in differential privacy) or the underlying signal (e.g., sparsity in compressed sensing), or you don't mind the noise in the parameters so long as the output signal is well-represented by your parameters (generally anything in ML).

A ton of these cases which are "usually" ill-posed, as in compressed sensing, physical design, etc., have some more structure which is encoded as regularizers or priors. For example, in compressed sensing, we know the input signal is sparse, so even though we have far fewer samples than the Nyquist rate would require, we know something else about the input signal, usually encoded as an l1-regularizer or other kind of sparsifying regularizer, making the resulting optimization problem well-posed.

That being said, I'm not sure how this immediately plays into (b). My claim is that you're pretty much hosed unless you increase the general precision/range of the algorithm, which is independent of the noise in the input, but rather depends on the actual implementation of the algorithm and the atoms the algorithm uses (sums, multiplications, etc., of two floating point numbers, for example). The case being that you have to increase the precision of the atoms which the algorithm uses (e.g., like in the article, switching to arbitrary-precision fractions when sums of large numbers are needed).

---

NOTE: I guess more generally, I'm defining ill-conditioning/numerical instability as a problem with the algorithm in the sense that, for a fixed input precision, perfect arithmetic (the "correct" result) will differ heavily from the result with truncated arithmetic (with floats, or whatever it may be).


I think that was the author's point to some extent: it's probably ok to ignore this problem, but you should know it exists.


An interesting article describing a useful framework.

In addition to the largest numbers that floating point can handle, a related issue can come up with numbers that are not near the edge.

A good student of floating point will notice that there is a good chance you will get a different result summing a list of floating point numbers if they are taken in a different order. This is due to the rounding error that can happen when you add a large number to a much smaller one. If you add up all the small numbers first, the intermediate sum can grow large enough that it isn't simply rounded away when added to the next number on the list. Thus, you should sort an array of floating point numbers and begin the summation with the smallest.
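A quick illustration of that order dependence in Python, with values chosen so the rounding is visible:

```python
big = 1e16          # ulp at this magnitude is 2, so adding 1.0 is rounded away
smalls = [1.0] * 1000

descending = sum([big] + smalls)  # each 1.0 vanishes against the running sum
ascending  = sum(smalls + [big])  # the 1.0s accumulate to 1000.0 first

print(descending)   # 1e+16
print(ascending)    # 1.0000000000001e+16
```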

Some careful thought is needed if numbers are important to your computation. While the author of this article makes important points about one aspect of this, they seem to shrug off further study of the referenced 30 page paper at https://hal.archives-ouvertes.fr/file/index/docid/576641/fil....

The authors of that paper, however, seem to misstate one vital point. They note that the mathematical formulas are "unstable", but when working with floating point numbers it isn't the mathematics that is unstable; it is the failure to appreciate how deeply modern floating point numbers are an approximation. Methods that fail to take this into account will be unstable.


> Thus, you should sort an array of floating point numbers and begin the summation with the smallest

Is this method more accurate than Kahan summation? And if so, is it worth the extra cost of sorting?


No, in fact it's slower than Kahan summation and less accurate. In particular, sorting doesn't help if you sum up a large number of similarly small values.


How is it less accurate?


Using avian's example of a list of numbers of similar magnitude, sorting the numbers will not really be any better than just summing them directly, but Kahan summation will do better.


>> In particular, sorting doesn't help if you sum up a large number of similarly small values.

Elaborating based entirely on my reading of the wikipedia article:

Kahan summation manually allots some extra bits to calculate with. You get extra precision because you computed with more bits of precision. Conceptually, you can do the same thing with integers:

Imagine I'm adding up a bunch of unsigned bytes. Ordinary addition will give me the lowest 8 bits of the sum, which is pretty much worthless. But in the analogue of Kahan summation, I can do better: I allocate 16 bits to hold my running result, and compute an accurate sum of, say, 0x02B7. Then I zero the low-order bits until the high-order bits fit in a byte: the sum after precision loss is 0x02B4. Then I report the sum: it's 0xAD (= 0x02B4 >> 2) with a left shift of 2. This is much better than the ordinary result of 0xB7.

(In that example, I needed to return an extra value, the left shift of 2. But floating-point numbers already include an indication of their order of magnitude, which integers don't, so in Kahan summation the extra return value is not necessary.)

So the quick answer to "why is Kahan summation more accurate" is that it's more accurate because you used more accuracy. Sort-and-add doesn't give you any more precision than usual, but it will lower your accumulated error.

The root issue that both these strategies attempt to mitigate is that when you add a large value to a small value, you end up losing precision from the small value. Sort-and-add will address that problem as to any two elements of the input -- if you have a list of values of which some are very large and some are very small, sort-and-add will sum up all of the small numbers into one large number before adding that sum to another large number. This reduces the number of times you would add a large number to a small number, compared to summing the input in a random order.

Kahan summation means you do all your addition at a higher level of precision. As such, it also addresses the (large+small) problem that occurs when an element of the input is small in comparison to the sum of all the input values, as opposed to being small in comparison to the maximum individual input value.

If your input consists of a medium amount of small numbers and a few large numbers, such that the sum of the input is well approximated by each of the large numbers within the input, sort-and-add should work fine.

If your input contains a large amount of small numbers, sort-and-add will fail: by the time your running sum has become large compared to each element of the input, every further addition will suffer from loss of precision -- you are adding a small number (the next input element) to a large one (the running sum).

If your input contains many large numbers, sort-and-add will fail for exactly the same reason it fails on many small numbers: each element individually is small compared to the total.
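Kahan summation in a few lines of Python, on exactly the case sorting can't fix (many equal small values):

```python
import math

def kahan_sum(xs):
    total = 0.0
    c = 0.0                      # compensation: rounding error carried forward
    for x in xs:
        y = x - c                # apply the correction from the last step
        t = total + y
        c = (t - total) - y      # what was actually added, minus what we meant to add
        total = t
    return total

xs = [0.1] * 1_000_000           # sorting is useless: every element is equal
naive = sum(xs)
kahan = kahan_sum(xs)
exact = math.fsum(xs)            # correctly rounded reference sum
```

Here `naive` drifts visibly away from the correctly rounded sum, while `kahan` stays within a few ulps of it.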


Thanks for the thoughtful reply.

But the Kahan method aside, the idea would be to start with the smallest numbers, not the largest.


Yes, I understand that.

>> if you have a list of values of which some are very large and some are very small, sort-and-add will sum up all of the small numbers into one large number before adding that sum to another large number

I don't quite follow the point you're making -- can you elaborate?


Intuitively:

1e100 + 1e-100 = 1e100

Because of rounding error.

(This is true for some sufficiently large and small exponent.)


You don't need to go that far:

  julia> 1e16 + 1 == 1e16
  true

  julia> 1e19 + 1000 == 1e19
  true


So what? That's true regardless of whether you add small numbers in before or after the big one. If you have a lot of small numbers that add to 1e-2, and a couple of big numbers that add to 1e+200, it is totally irrelevant what order you add the numbers in, because all of the small numbers together have exactly zero influence on the sum.

But we're talking about cases where the sum of the small numbers is large enough to be detectable when measured against the large numbers, even if no individual small number is that large.


A solution more accurate than sorting before adding is to place your numbers into a priority queue and repeatedly add the smallest two numbers and re-insert the result into the queue. This helps handle the case where you have many similarly valued numbers and your running sum becomes large enough relative to your numbers to cause the same rounding errors.
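A sketch of that idea with Python's `heapq` (for nonnegative inputs; with mixed signs you would order by magnitude instead):

```python
import heapq

def heap_sum(xs):
    # repeatedly replace the two smallest values with their sum, so each
    # addition combines numbers of comparable magnitude
    h = list(xs)
    heapq.heapify(h)
    while len(h) > 1:
        a, b = heapq.heappop(h), heapq.heappop(h)
        heapq.heappush(h, a + b)
    return h[0] if h else 0.0

# 1024 ones combine into 1024.0 before ever meeting the huge value, so
# their contribution survives; summed left-to-right from the huge value,
# every single 1.0 is rounded away
xs = [2.0 ** 53] + [1.0] * 1024
assert heap_sum(xs) == 2.0 ** 53 + 1024.0
assert sum(xs) == 2.0 ** 53
```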


Isn't a priority queue implicitly sorted?


Yes. I don't think your parent post meant to imply otherwise.

The priority queue approach boils down to "sort after every addition, instead of just once at the beginning".


Ah right, I read 'more accurate than sorting before adding' as 'without sorting before adding' and missed that it was more about rounding errors than the sorting.


It's amazing how many floating point edge cases emerge from mid-range values. Ordering can also help minimize the accumulation of rounding errors for similar reasons to those you pointed out.


I ran into a similar issue doing high accuracy dot-products. The solution was Kahan summation [1], which first adds the incremental value, then checks the result to see how much floating point error was introduced, then rolls that error into the next addition.

I suspect something similar could be done here, even if it would be a few times slower, to get an ulp-accurate mean regardless of the order/size of the individual elements.

1: https://en.wikipedia.org/wiki/Kahan_summation_algorithm


  This is the TXR Lisp interactive listener of TXR 214.
  Quit with :quit or Ctrl-D on empty line. Ctrl-X ? for cheatsheet.
  1> (defun mean (seq) (/ (sum seq) (len seq)))
  mean
  2> (mean '(8.988465674311579e+307 8.98846567431158e+307))
  ** out-of-range floating-point result
  ** during evaluation at expr-1:1 of form (sum seq)
Works for me; error is caught. Breaking news: floating-point has a limited range! Detailed story at 11 ...

Almost any non-trivial calculation that overflows can be massaged into something that avoids overflow. One important motivation behind floating point is to have enough practical range so that you don't run into overflow.

If this is an actual practical problem someone faces, the tool to reach for is wider floating-point, not to mess with the code. A bigger "E" than +307 is a thing.

Also, multi-precision floating-point is a thing.


Or one could cheat, :) by using Scheme exact numbers:

  #lang racket/base
  (define (mean seq) (/ (apply + seq) (length seq)))
  (mean '(#e8.988465674311579e+307 #e8.98846567431158e+307))
89884656743115795000[...]


Similar correct result from sbcl.


The pedantic point of the article is that an error in this situation isn't correct. The average of 8e307 and 8e307 is 8e307.


Agreed. My limited point was that detection of out-of-range is available out of the box in some languages.


Here is a fascinating post by Stefan Karpinski, one of the creators of Julia: https://discourse.julialang.org/t/array-ordering-and-naive-s...

He shows off a function named "sumsto" which takes a single positive double precision floating point number "x" as an argument.

"sumsto" always returns the same vector of 2046 floating point numbers -- just in an order so that naive left-to-right summation returns "x". Just changing the order of that vector lets you sum to almost any positive double precision number.

If you want to run the code, I didn't see where "realmax" was defined, but this works:

realmax() = prevfloat(typemax(Float64))


`realmax` got renamed to `floatmax` in 1.0; similarly, `realmin` got renamed to `floatmin` (they're both very much specific to floating point types).


You think that's bad? Theoretical math has "conditionally convergent series" [1]. Sum in order and it converges to a given value. Rearrange the terms and any real number is possible.

I have a hunch there could be a bit of relation between these things.

[1] https://en.wikipedia.org/wiki/Conditional_convergence


It's an interesting connection. Any finite collection of real values (represented in floating point or otherwise) has a definite correct sum. If you're constrained to floating point to represent that value, you should ideally round the true sum to the closest float and return that. Computing this true sum of a sequence of floating-point values is surprisingly hard, though. The state of the art seems to be Radford Neal's paper on superaccumulators from 2015:

https://arxiv.org/abs/1505.05571

Fortunately pairwise summation (Julia's default) is fast and fairly hard to trip up, although it certainly is possible to trick it.


As commented by others, this is the kind of function for which you want to use a numerically stable algorithm.

For that you can:

- sort the numbers (clearly not the most cost effective solution but easy),

- use a high precision accumulator (which can give you an exact result if you go far enough),

- use a compensated summation (such as Kahan summation, though the state of the art can do much better).

For extreme cases there are algorithms that can give you an exact result, my current favorite being Accurate Floating-Point Summation Part I: Faithful Rounding : https://www.researchgate.net/publication/228664006_Accurate_...


What about a "reduce" technique? Average the numbers in equal-sized chunks, then average those averages. You could even chunk the chunk averages and repeat the process as many levels down as you want to, and chunks could be as small as 2 each.

I guess this still assumes that the largest number in the original list is less than or equal to the maximum floating point value, but otherwise you stay roughly in the same space of precision as the original data.


I believe you have pairwise / cascade summation in mind:

https://en.wikipedia.org/wiki/Pairwise_summation
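The technique fits in a few lines of Python, and it works for any length, not just powers of two:

```python
def pairwise_sum(xs):
    # split-and-recurse: rounding error grows like O(log n)
    # instead of O(n) for a left-to-right loop
    if len(xs) <= 2:
        return sum(xs)
    mid = len(xs) // 2
    return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])

def pairwise_mean(xs):
    return pairwise_sum(xs) / len(xs)
```

Real implementations typically switch to a plain loop below some block size for speed rather than recursing all the way down.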


That produces an unavoidable special case when the number of elements in the list is a prime number.


Just pad the end of the list with 0s, but keep the original size of the list.


Unfortunately, by averaging the averages you skew the results. Average of averages does not produce the same result as averaging the whole list.


Average of equal size chunks' averages does. (mathematically)


Ah my bad. Thanks for that correction


It does if you weight the chunks correctly. Combine pairs of numbers and keep a weight for the leftover odd element. Either halve the odd element's weight every iteration, or fold it into the average once an iteration has an even number of elements.


Thanks for that, didn't know.


\left( \sum_{i=1}^m x_i/m + \sum_{i=m+1}^{2m} x_i/m \right) / 2 = \sum_{i=1}^{2m} x_i /(2m)


I think this works perfectly if your list has 2^n elements. Otherwise, you have to resort to multiplying by imprecise fractions.


You would only need one special case: if a list (top level or not) has 2n+1 numbers, weight the last one differently.


An 'online' mean calculation might resolve some of the common issues. Something like -

  def mean(ls):
      mu = 0.0
      for i, x in enumerate(ls):
          mu = i/(i+1) * mu + x/(i+1)
      return mu


Even better if you can afford to sort the data and use a high precision type for mu. The algorithm will not be online any more, and in Python floats are usually doubles in practice anyway. But in a constrained system where one has to work with float32, just using float64 for mu helps.


    mu = mu + (x - mu) / (i+1)
This is more natural when written in C:

    mu += (x - mu) / (i+1)


How much precision loss is to be expected here?


Good point. The precision loss will be pretty bad if the numbers added above have greatly different magnitudes, which could occur if the next number (x) is very different to the current mean (mu), or if the list is very long. The first issue could be partially mitigated by sorting the list first. If the list is very long, the above algorithm will necessarily add numbers of quite different magnitudes, but employing something like Kahan summation - https://en.wikipedia.org/wiki/Kahan_summation_algorithm , could partially mitigate that issue as well.


It is proportional to the number of terms in the sum. Kahan summation has a tighter error bound.


How is that different from the second example?


That you don't have to know the number of elements at the start. But I think the precision loss is even greater this way.


This article is both extremely insightful, and overly pedantic.

If you are not aware of how floating point values work, this is an excellent illustration of that, and you should spend some time learning about the risks and edge cases associated with working with them.

At the same time, enabling floating point exceptions on your platform is probably the most pragmatic way of dealing with these issues. Until you are in a domain where this level of effort is a worthwhile investment, over-engineering modules that use floating point is a waste of time.


This is one of the many reasons multiprecision floating point libraries exist. Most people would probably be better off with the cost of using them than dealing with any of this.


I wrote a Postgres extension to compute the mean of an array of floats. [0] I tried to avoid overflow problems by using an approach that finds the final mean for each element as it goes along. [1] But that is a lot of divisions, so there is a disappointing perf hit. (It's still a lot faster than SQL though. :-) I'd love to know if there is a faster way that is still correct. (I have some implementations for SSE2 and AVX that aren't published yet, but they still do all the divisions; what I really want is a different algorithm.)

[0] https://github.com/pjungwir/aggs_for_arrays [1] https://github.com/pjungwir/aggs_for_arrays/blob/master/arra...


The reason exponents have so many bits is basically to solve this problem. The exponent/mantissa bits of IEEE 754 double could have been 8 and 55, but I suppose the standard committee decided that some precision should be sacrificed to allow stupid values outside the range of meaningful physical numbers so that you never have to worry about the problem mentioned in this article. So if you pretend that doubles have 8 exponent bits as they probably should have been, then `sum(list) / len(list)` will always work for this set of "sane" values.

And if that isn't enough, you have subnormal numbers as a safety net as well, although using these kills performance on many FPUs.


"...precisely on a computer with big floating point numbers"


For probably the simplest solution to the given problem: if summing doubles, there is hardware in every x86 CPU that allows the biggest number during the computation to be 1.18e4932 (using 80 bits internally, 15 of them for the exponent, vs. only 11 in the 64-bit double). By just activating it, the sum can be done very fast with no need for any modification: first sum, then divide, converting to double at the end.

You should not use SSE for that however, but the old x87 "FPU" instructions.


And if you have to use the Microsoft or Intel C compiler for 64-bit Windows, you have to use assembly to reach that functionality, which is still in the CPU. With the open source compilers it's better: you can write C code that works with both GCC and clang.


How about…

using known stable numeric methods? Kahan-Babuška summation?


In ruby you need to use bigdecimal:

    precision = Float::DIG + 1
    require 'bigdecimal'
    a = BigDecimal(8.98846567431158e+307, precision)
    b = BigDecimal(8.988465674311579e+307, precision)
    mean = (a + b) / 2
    puts "Works" if mean > [a, b].min && mean < [a, b].max
    => Works
(edit: I used 0 first and it works fine but converts to precision 15 instead of 16 (which is the maximum on my computer) for some reason, I may file a bug for this)

Works for small floats too:

    smalls = [1.390671161567e-309, 1.390671161567e-309, 1.390671161567e-309].map { BigDecimal(@1, precision) }
    mean = smalls.sum / smalls.size
    puts "Works" if mean >= smalls.min && mean <= smalls.max
    => Works
One could also use BigDecimal("8.98846567431158e+307") but seems like the numbers are coming as floats.


So what's the correct code? I'll try my guess before reading the 30-page paper (I will add my comment after reading it).

My first guess: normalize. Find the largest exponent and divide all values by that power of 2; power-of-2 scaling loses no precision. Then divide all numbers by the length. Perform the same normalization again. Sum the numbers, then restore the exponent that was removed in the first step.
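The first guess can be sketched with `math.frexp`/`math.ldexp`, which scale by powers of two exactly (assuming the inputs don't span so many orders of magnitude that the scaled values underflow to subnormals, where bits would be lost):

```python
import math

def scaled_mean(xs):
    # scale everything down by the largest value's power-of-two exponent
    # (exact in binary floating point), average, then scale back up
    _, e = math.frexp(max(abs(x) for x in xs))
    scaled = [math.ldexp(x, -e) for x in xs]   # now every |x| < 1
    return math.ldexp(sum(scaled) / len(xs), e)

assert scaled_mean([1.0, 2.0, 3.0]) == 2.0
big = [8.988465674311579e+307, 8.98846567431158e+307]
assert math.isfinite(scaled_mean(big))         # naive sum(big)/2 would be inf
```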

My second guess (backup plan): use interval arithmetic. I don't remember the specifics, but I remember it is the safest form of floating point operations. But perhaps I should be careful about the performance penalty.


Who has actually run into this in practice? Though I’m sure a few people have I doubt it’s more than .001% of programmers or programs. And those that do run into it are probably aware of the challenges.


Calculating (a+b)/2 isn't even entirely trivial for integers - the trivial implementation overflows on large numbers, as one may find out when doing a binary search over a 4GB array.
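The classic binary search case, with 32-bit wraparound simulated in Python (the fix shown, `lo + (hi - lo) / 2`, is the standard one):

```python
I32_MASK, I32_SIGN = 0xFFFFFFFF, 0x80000000

def to_i32(n):
    # interpret the low 32 bits of n as a signed 32-bit integer
    n &= I32_MASK
    return n - 0x100000000 if n >= I32_SIGN else n

def mid_naive(lo, hi):
    return to_i32(lo + hi) // 2       # lo + hi wraps past 2**31 - 1

def mid_safe(lo, hi):
    return lo + (hi - lo) // 2        # never exceeds hi, cannot overflow

lo, hi = 2**30, 2**31 - 1             # plausible bounds in a multi-GB array
assert mid_safe(lo, hi) == (lo + hi) // 2
assert mid_naive(lo, hi) < 0          # the wrapped midpoint is garbage
```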


Also, their overflow avoidance technique fails on integers too, and for basically the same reason (lack of precision).

Suppose you want to find the mean of (1, 1, 1). If you divide by the length of the list first, you're going to compute sum(1/3, 1/3, 1/3), which reduces to sum(0, 0, 0). And 0 < min(1, 1, 1).


`(uint32_t) (((uint64_t) a + (uint64_t) b) / 2)` and then go on with your life. :P


This article highlights an issue with floating point numbers, a substantial use case for data scientists (and as such, I value the input).

How do REPLs and databases handle this edge case?


Almost always they ignore this kind of issue; the best you're likely to get is a mean() function that remembers to sort the input first. Most numbers are in a "human" range far from the limits.


If, when calculating an average, you actually reach overflow in a double without first losing so much precision that the calculation is worthless anyway, then some number in the list is bogusly big.


Or the list is just very long.


Not really. The time required to overflow that way is unrealistic. Also, I think you'll run into S + x = S; at that point your sum will stop climbing towards overflow.


Data science is squishy to begin with -- those wanting high performance are heading to fixed-point hardware-accelerated solutions where loss-of-precision is a given, so not having a fully accurate answer with high-precision floating-point along many steps to solving the problem doesn't seem like a big deal.


One option is to convert to a bignum representation, which is slow but works.


This is silly. If overflow is really a concern, use Welford’s algorithm to compute a running mean. It avoids the sum -> float overflow issue.
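The running-mean update referred to here (the mean-only piece of Welford's algorithm), sketched in Python; note that if elements near the extremes have opposite signs, the deviation `x - mean` can itself overflow, which this sketch assumes isn't the case:

```python
import math

def running_mean(xs):
    # update the mean by the scaled deviation of each new element;
    # no large intermediate sum is ever formed, so nothing overflows
    mean = 0.0
    for n, x in enumerate(xs, start=1):
        mean += (x - mean) / n
    return mean

assert running_mean([1.0, 2.0, 3.0, 4.0]) == 2.5

big = [8.988465674311579e+307, 8.98846567431158e+307]
assert math.isinf(sum(big) / len(big))   # naive mean overflows
assert math.isfinite(running_mean(big))
```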


These kinds of issues are why https://en.wikipedia.org/wiki/Extended_precision was widely used since the 80's, both in language implementations and FPUs/CPUs.

Python has Decimal for those who need correctness under all circumstances.


Aren't means recursive? It's just sum/count. So you just split the list into arbitrary sublists, such that the sum of each sublist is well under the infinity cutoff. Calculate the mean of each sublist, and then the mean of those means.

And you could add the split logic to calculating the mean of the means. Just as insurance.

So would that work?


No nasty tricks - these aren’t NaN or Infinity

OK but then the first example is a naive computation of the mean of two enormous numbers.

All this really illustrates is that you need to be aware of the range of your inputs and choose a solution that accomodates that.


And then there was Clojure...

   (defn average [numbers] (/ (apply + numbers) (count numbers)))

   (average [8.988465674311579e+307M, 8.98846567431158e+307M])
   => 8.9884656743115795E+307M


But by then you are not comparing apples to apples. Those are Java BigDecimals, and by that reasoning Java and any other language with access to arbitrary-precision numbers would also do the correct thing.


But still relevant, no?

You could say that the “hard problem” mentioned has been removed by those abstractions.


Why not convert to integers (saving the residuals), compute the mean of the integers via long division, and add the mean of the residuals (computed with an arbitrary precision sum)?


The article would have been more compelling if he showed at least one example where floating point errors blew up to the point where one would care.


rust might fare pretty well w/r/t preventing these overflow errors, because rust is pretty persnickety about overflow.


What you want to calculate in practice is an exponential moving average.


    reduce(lambda x,y: (x+y)/2.0, numlist)
Why wouldn't this work?


Because you're placing a higher weight on elements that come later in the list. Try it for the list [1, 2, 3, 4]

The mean is 2.5 if you sum them all and divide by 4.

With that method it is:

mean([1, 2, 3, 4]) = mean([1.5, 3, 4]) = mean([2.25, 4]) = 3.125


It doesn’t compute the average. Consider this test case: [0,0,0,0,0,0,4]. Your snippet would return an average of 2.


Just work out the math for N = 3 and you'll see why your solution is wrong.

X = Mean([a,b,c]) = (a+b+c)/3

Y = YourFunction([a,b,c]) = (((a+b)/2)+c)/2 = (a+b+2c)/4

X does not equal Y. It doesn't matter if the reduce is left or right, btw, it's still wrong.


What happens to the denominator when you have 3+ elements?


The function is not associative.



