As someone who has worked on building and testing numerical software, this seems like a great thing. Floating-point computations are effectively non-deterministic in practice: you (or the compiler) change something that doesn't seem like it should matter, and all of a sudden your result changes. Having more methods available that deterministically give you the correct result at little additional expense is great!
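For anyone who hasn't been bitten by this: a minimal made-up example (values are mine, not from any real workload) showing that floating-point addition isn't associative, so any reordering by the compiler or a parallel reduction can change the result:

```rust
fn main() {
    let a = 1e16_f64;
    let b = -1e16_f64;
    let c = 1.0_f64;

    let left = (a + b) + c;  // b cancels a exactly, then c survives: 1.0
    let right = a + (b + c); // c is lost in the rounding of b + c: 0.0
    println!("left = {left}, right = {right}");
    assert_ne!(left, right); // same inputs, different answers
}
```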
It uses a different approach from superaccumulators, which, as far as I know, were first explored for the purpose of determinizing parallel floating-point sums by the folks working on ExBLAS (https://exblas.lip6.fr/).
Was the software you worked on a general numerics platform, or something more specialised? I'm trying to learn more about the topic myself, from a finance perspective.
This technique seems like overkill for most finance applications - you can fit nearly any sensible currency amount into a 64-bit fixed-point number (or a 128-bit one if you're doing whole-economy calculations).
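For concreteness, a minimal sketch of what I mean (the amounts are made up; the convention here is i64 counts of the smallest currency unit, i.e. cents):

```rust
fn main() {
    // Hypothetical line items in cents: $19.99, $0.01, $1,000,000.00.
    let amounts_cents: [i64; 3] = [1_999, 1, 100_000_000];

    // Integer addition is exact and associative, so the total is identical
    // no matter how the sum is ordered or parallelized.
    let total: i64 = amounts_cents.iter().sum();
    println!("total = ${}.{:02}", total / 100, total % 100);

    // i64 holds roughly +/-9.2e18 cents; widen to i128 for
    // whole-economy aggregates.
    let big: i128 = amounts_cents.iter().map(|&c| i128::from(c)).sum();
    assert_eq!(big, i128::from(total));
}
```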
Sort/bucket by absolute value within some epsilon, then merge pairwise in a binary tree.
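Roughly like this sketch (I'm approximating the epsilon bucketing with a full sort on |x|; the tree merge is plain pairwise summation, whose rounding error grows like O(log n) instead of O(n) for left-to-right summation):

```rust
// Recursive balanced-tree (pairwise) reduction of a slice.
fn pairwise_sum(xs: &[f64]) -> f64 {
    match xs.len() {
        0 => 0.0,
        1 => xs[0],
        n => {
            let mid = n / 2;
            pairwise_sum(&xs[..mid]) + pairwise_sum(&xs[mid..])
        }
    }
}

fn main() {
    let mut xs = vec![1e16, 1.0, -1e16, 1.0, 1.0];
    // Sort by magnitude so values of similar size get paired first.
    xs.sort_by(|a, b| a.abs().partial_cmp(&b.abs()).unwrap());
    let tree = pairwise_sum(&xs);
    println!("tree sum = {tree}"); // 3.0; naive left-to-right gives 2.0
}
```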
Or split the range up to FloatMax into 32-2048 bins by magnitude. Start them all at zero. Add each incoming number to the corresponding bin. If a bin's sum goes out of that bin's range, zero the bin and add the sum to the bin where it now belongs. When you are done, keep folding the lowest-magnitude bin into the next highest.
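A loose sketch of that idea (my assumptions, not a fixed spec: f64 inputs, 64 bins keyed on the biased exponent, carries are not cascaded, and rounding within a bin still happens, so this improves conditioning rather than giving an exact sum):

```rust
const BINS: usize = 64; // e.g. one bin per 32 binary exponents

// Map a float to a bin via its biased exponent (0..=2047 for f64).
fn bin_of(x: f64) -> usize {
    let exp = ((x.to_bits() >> 52) & 0x7ff) as usize;
    exp * BINS / 2048
}

fn binned_sum(xs: &[f64]) -> f64 {
    let mut bins = [0.0_f64; BINS];
    for &x in xs {
        let b = bin_of(x);
        bins[b] += x;
        // If the running sum left this bin's magnitude range,
        // zero the bin and move the sum to where it now belongs.
        let nb = bin_of(bins[b]);
        if nb != b {
            let carry = bins[b];
            bins[b] = 0.0;
            bins[nb] += carry;
        }
    }
    // Final pass: fold each bin into the next-higher one, low to high,
    // so small contributions accumulate before meeting large ones.
    for i in 0..BINS - 1 {
        bins[i + 1] += bins[i];
    }
    bins[BINS - 1]
}

fn main() {
    let xs = [1e16, 1.0, 1.0, -1e16, 1.0];
    println!("binned sum = {}", binned_sum(&xs)); // 3.0
}
```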