
Good essay. The dynamic scoping / mutable global state issue, especially with respect to rounding modes, is a total nightmare. We've been trying our hardest to provide great support for rounding modes in Julia and it just doesn't work well. The trouble is that there are functions that need normal rounding regardless of the current rounding mode while other operations can safely respect the current rounding mode. There are still other functions for which neither is appropriate. A software-implemented sin function, for example, may completely break if used with an unexpected rounding mode, so you don't want that. But you also don't want to just completely ignore the rounding mode either. What you really want is a sin function that returns a correct result, rounded according to the rounding mode. But that means you really need five variants of the sin function and you need to dynamically dispatch between them based on the current rounding mode – i.e. global mutable state – which is, of course, quite bad for performance, not to mention the explosion of complexity. I've come to believe that the rounding mode concept as designed in IEEE 754 was only ever feasible to use when you were writing your own assembly code.
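
For concreteness, here is roughly the defensive dance a software implementation has to do just to not break: a minimal C sketch against the standard fenv.h interface (sin_guarded is a made-up name). Note that this only protects the implementation; the result still ignores the caller's rounding mode, which is the other half of the problem:

    #include <fenv.h>
    #include <math.h>

    #pragma STDC FENV_ACCESS ON

    /* Evaluate sin under round-to-nearest no matter what mode the
       caller has set, then restore the caller's mode afterwards. */
    double sin_guarded(double x) {
        int saved = fegetround();
        fesetround(FE_TONEAREST);
        double r = sin(x);   /* libm code assumes round-to-nearest */
        fesetround(saved);
        return r;
    }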

The essay is not quite right about the motivation for rounding modes – they are not intended to support interval arithmetic. When we got the chance to ask him about it, Kahan was quite negative about interval arithmetic, noting that intervals tend to grow exponentially large as a computation progresses. Instead, the motivation he suggested was that you could run a computation in all the rounding modes, and if the results in all of them were reasonably close, you could be fairly certain that the result was accurate. Of course, at the higher level the essay is exactly right: the fact that this motivation isn't explained in the IEEE 754 spec is precisely what allowed the misunderstanding to arise at all.




> But that means you really need five variants of the sin function and you need to dynamically dispatch between them based on the current rounding mode.

That would be one way to achieve the behavior you describe, but it is certainly not the only way, nor is it the best. You could, for example, do a computation that produces a result rounded to odd with a few extra bits (independent of the prevailing rounding mode), then round that result to the destination precision using the prevailing mode.
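
For illustration, here is the round-to-odd step in C, shown for a double intermediate and a float destination (round_to_odd_f is a made-up name; in the real technique the intermediate format just needs a couple of extra bits beyond the destination):

    #include <math.h>
    #include <stdint.h>
    #include <string.h>

    /* Round x to float precision with round-to-odd: if x is inexact
       in float, pick the neighbouring float whose last significand
       bit is 1. */
    float round_to_odd_f(double x) {
        float f = (float)x;                /* nearest-even first */
        if ((double)f != x && isfinite(f)) {
            uint32_t u;
            memcpy(&u, &f, sizeof u);
            if ((u & 1) == 0)              /* landed on the even neighbour */
                f = nextafterf(f, (double)f < x ? INFINITY : -INFINITY);
        }
        return f;
    }

Because the intermediate is rounded to odd, the final rounding to the destination precision, done in whatever mode prevails, agrees with rounding the exact value directly in that mode (given at least two extra bits in the intermediate).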

This assumes that you really want correctly rounded transcendentals anyway, which at present are sufficiently slow that you might as well dispatch based on rounding mode (despite the herculean efforts of the CRlibm crew). If faithful rounding is all that is required, even simpler methods exist that are extremely performant.


Yes, this is a very fair point – it's not like our transcendental function implementations are uniformly correctly rounded anyway – being within 1 ulp is good enough. I suppose I might be happy if there were a lexical flag for each floating-point instruction (i.e. a bit of the opcode) that chose whether to use the current rounding mode or the normal one. The trouble now is that you have no choice but to use the dynamically scoped one, which is problematic often enough that one is pushed towards just opting out of changing rounding modes altogether.


> Instead, the motivation he suggested was that you could run a computation in all the rounding modes, and if the results in all of them were reasonably close, you could be fairly certain that the result was accurate.

I actually thought that was the use case I was describing, though I would expect round-positive and round-negative to be enough. Don't the other rounding modes yield results within those bounds?


Consider something like

  a-(b+c)
To obtain the correct lower bound, you need to round the addition up and the subtraction down. This is also incredibly inefficient on current processors, as the floating-point pipeline is typically flushed after each mode change.
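
Spelled out in C, assuming the standard fenv.h interface (the volatiles are just to keep the compiler from folding the arithmetic across the mode changes):

    #include <fenv.h>

    #pragma STDC FENV_ACCESS ON

    /* Correct lower bound for a - (b + c): round the inner addition
       UP (overestimating b + c), then round the subtraction DOWN.
       Rounding both operations down would be wrong. */
    double lower_bound(double a, double b, double c) {
        int saved = fegetround();
        fesetround(FE_UPWARD);
        volatile double bc = b + c;
        fesetround(FE_DOWNWARD);
        volatile double lo = a - bc;
        fesetround(saved);
        return lo;
    }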


You're right. To be absolutely sure, I tried writing a little program that does the thing I describe in the post, https://gist.github.com/plesner/7a13b5c8d269315df645, and sure enough it breaks down completely:

  round default: 0.072817
  round up:      0.072703
  round down:    0.072931

Now I don't understand how you get a reliable result using rounding modes at all.


Interval arithmetic is tricky: you have to make sure you round in the most conservative direction for each operation. As other commenters mentioned, these bounds can end up being so large as to be useless.


The way I hear this is: you have two options.

- Either use interval arithmetic, which is tricky and may give you uselessly large bounds, but those bounds are guaranteed to hold.

- Or just try all the rounding modes, which is less likely to give you large bounds, but now you have no guarantee that those bounds say anything meaningful about your computation.

Or does the second case give meaningful guarantees?


Not strictly, no. From what I understand, Kahan's idea was that you run the computation under different rounding modes and see how the behaviour changes; see, for example:

http://www.cs.berkeley.edu/~wkahan/Stnfrd50.pdf


Thanks for the link, I hadn't seen that. I have two reactions.

Firstly, I find his example contrived. He wants to determine the accuracy of a black box by subtly tweaking how floating-point operations spread throughout that black box behave? It seems like an okay hook for ad-hoc debugging – if those flags are around anyway – but as the primary rationale for the entire rounding mode mechanism? That's not a lot of bang for a whole lot of complexity.

Secondly, this is exactly the kind of discussion around rationales and use cases I'm looking for. Even if I don't buy his solution, the problem still stands and might be solvable some other way, for instance through better control over external dependencies. Maybe something like Newspeak's modules-as-objects and hierarchy inheritance mechanisms could be applied here.


I agree. I've never heard of anyone else utilising this approach, and allowing programs to change the mode of other programs is a recipe for shooting yourself in the foot.

One of the most infuriating things about global rounding modes is that changing them is so slow, even in the cases where a different rounding mode would actually be useful. A nice example is Euclidean division: given x, y, find the largest integer q such that y >= q*x (in exact arithmetic). If you change the rounding mode to down, then floor(y/x) gets you q. However, the mode change is typically so slow that it is quicker to do something like round((y-fmod(y,x))/x).
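
Both routes in C for comparison (euclid_div_slow and euclid_div_fast are made-up names; assumes x > 0 and that q is representable):

    #include <fenv.h>
    #include <math.h>

    #pragma STDC FENV_ACCESS ON

    /* Route 1: with the mode set to down, y/x can never round up past
       the true quotient, so floor gives the right q. Correct, but the
       two mode changes dominate the cost. */
    double euclid_div_slow(double y, double x) {
        int saved = fegetround();
        fesetround(FE_DOWNWARD);
        volatile double q = floor(y / x);
        fesetround(saved);
        return q;
    }

    /* Route 2: no mode changes. fmod(y, x) is exact, so the quotient
       (y - fmod(y, x)) / x is within rounding error of the integer q,
       and round() snaps it back. */
    double euclid_div_fast(double y, double x) {
        return round((y - fmod(y, x)) / x);
    }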


Honestly this sounds like a perfect use case for monads – you write your computation with a dependency on the rounding mode, and you can either ripple that all the way up (to your whole program if need be), or specify an explicit rounding mode at a level where it makes sense. It would also allow sensible (if poorly performing) rounding for cases like the subtraction above. Has anyone implemented it like that?
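
Even without monads you can get the same shape by making the mode an explicit argument rather than ambient state; a C sketch of the idea (env_add is a made-up name):

    #include <fenv.h>

    #pragma STDC FENV_ACCESS ON

    /* The Reader-monad idea spelled out imperatively: each operation
       takes the rounding mode as a parameter instead of consulting
       hidden global state, so the dependency is visible at every
       call site and can be threaded up as far as you like. */
    double env_add(int mode, double a, double b) {
        int saved = fegetround();
        fesetround(mode);
        volatile double r = a + b;
        fesetround(saved);
        return r;
    }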


I think monads might somewhat alleviate the problem of writing five different versions of sin by hand, but you would still need some dynamic dispatch to handle the rounding modes, which according to Stephan is going to be a big performance hit.


How important is it to implement floating point arithmetic exactly as the standard specifies it? What if the same set of features is generally available, but not quite in the required way?


That depends on what you want. Some people don't care at all, but others have legitimate needs for a lot of the features described in the spec. So if you can at all, it's best to follow the spec, imo. That way languages don't do a lot of annoyingly similar but different things. And, as this essay points out, the spec is wildly successful – it is arguably the most successful spec of all time – so second-guessing it is usually a mistake. Kahan et al. have forgotten more about numerical analysis than most of us will ever know.


Factor has a library for setting up floating point environments that might interest you. I think we implemented all of the functionality you might need and it's nestable with dynamic scoping. I haven't needed to use it for anything though.

Code: https://github.com/slavapestov/factor/blob/master/basis/math...

Tests: https://github.com/slavapestov/factor/blob/master/basis/math...

Docs: http://docs.factorcode.org/content/article-math.floats.env.h...


A lot of analysis is based on IEEE 754; deviating from it would mean developers have to redo the analysis and change the finicky algorithms. I'm personally not able to do that: I nod at the research papers I read, but I would not be able to write them.


>> But that means you really need five variants of the sin function and you need to dynamically dispatch between them based on the current rounding mode

Just define a rounding behavior for the language and implement it that way. Don't claim full 754 support; just specify the strategy used by the language. A sin function should behave according to the design of that function and should be able to ignore any previous state of the FP hardware. I have not seen a language that directly supports setting the rounding modes, so language libraries can do what they like – you don't need to preserve or worry about something you don't offer the option to modify.


Check out my other comment below. Factor has with-rounding-mode and you can nest these modes.

Example:

    +round-up+ [
        1.0 3.0 /f double>bits .h
        +round-zero+ [
            1.0 3.0 /f double>bits .h
        ] with-rounding-mode
    ] with-rounding-mode
    ! output
    ! 3fd5555555555556
    ! 3fd5555555555555


Julia also supports this:

    with_rounding(Float64, RoundUp) do
        println(bits(1.0 / 3.0))
        with_rounding(Float64, RoundToZero) do
            println(bits(1.0 / 3.0))
        end
    end
    # output
    # 0011111111010101010101010101010101010101010101010101010101010110
    # 0011111111010101010101010101010101010101010101010101010101010101


With all the state and modes and such, is there a mode that just does, say: round to even, clamp to infinity, never produce NaN, and never trigger an exception? Because that's basically what a lot of people want – do the math and handle the extremes in reasonable ways without throwing exceptions.


In your "never produce NaN" arithmetic, what do you want sqrt(-1) to be, and why is that a better answer than NaN?


I want it to throw an exception – a language-level exception – and I'm fine with sacrificing a bit of performance (i.e. checking the flags after every operation) to do so. This is a better answer because it means I see the error where it actually happened, rather than getting a NaN out the end of my calculation and having no idea which particular computation it came from.
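
That flag-checking scheme is easy to sketch with C's standard fenv.h facilities (checked_sqrt is a made-up name, and exit stands in for a real language-level exception):

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    #pragma STDC FENV_ACCESS ON

    /* Turn the sticky FE_INVALID flag into an immediate error at the
       operation that raised it, instead of letting a quiet NaN
       propagate silently through the rest of the calculation. */
    double checked_sqrt(double x) {
        feclearexcept(FE_INVALID);
        double r = sqrt(x);
        if (fetestexcept(FE_INVALID)) {
            fprintf(stderr, "invalid operation: sqrt(%g)\n", x);
            exit(EXIT_FAILURE);
        }
        return r;
    }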


This is what we had before IEEE 754. Instead of having a closed arithmetic system, exceptional conditions caused a trap (the "language-level exception" of its day). It was a terrible situation, and the direct cause of several "software-caused disasters" that you may have learned about in an engineering class.


Having division by zero be an unchecked exception is a terrible idea, as you say.

But that's not what I want. I want a paranoid language. I want a language where potential division by zero is a checked exception. One where "a = b / c" won't even compile if c might be 0. One that won't compile if it can find an example of an input to a function where an assertion fires. I want one where there is no such thing as an unchecked exception. Or rather, one where you can explicitly override checked exceptions to be the equivalent of (read: syntactic sugar for) "except: print(readable exception trace); exit(<code>)" – but you need to explicitly override it to do so.

Would it be a pain to write in? Yes. But at the same time there's a lot of software that would be best written in this manner. A language where the language itself forces you to be paranoid.


> One where "a = b / c" won't even compile if c might be 0.

Dependently typed languages can provide this.


Can you give an example? I have yet to run into a language that doesn't require a proof of correctness, but will just attempt to find a counterexample.


Well, it sounds like what you're looking for is property-based testing. You can set up something like QuickCheck to run at compilation.


You need to provide a proof, but in e.g. Idris the language gives you the tools to make that proof quite easy.


[Citation needed]

Again: I am looking for a language that doesn't require you to provide a proof. I'm looking for a language that is a "logical extension" of what is currently available – that is, one that will attempt to find a counterexample on compilation and bail if it finds one.


But non-exhaustively? That exists already – plenty of languages will warn or error if they can tell you're dividing by zero, but don't catch every possible case.

Any working program will in some sense be a proof, by Curry-Howard. So I think asking to not have to provide a proof is backwards; what you want is a language that makes it easy to express the program and manipulate it as a proof.


Only because of inadequate language support (or, in the case of the Ariane 5, deliberately overriding the language support).


I know that's what a lot of people want, but I do not see how one can 'handle the extremes in reasonable ways' without producing NaNs. I have heard people advocate 0/0 = 0, but IMO that is NOT 'handling the extremes in reasonable ways'. What would you propose for 0/0?


More than that, a signaling NaN should raise an exception if used in a comparison. Otherwise, you get a bogus comparison and a bogus branch.


It isn't possible to never produce a NaN. 0/0 and inf-inf, for example, have no reasonable result that can be produced. And indeed, the spec defines those operations as resulting in NaN.



