The essay is not quite right about the motivation for rounding modes: they are not intended to support interval arithmetic. When we got the chance to ask him about it, Kahan was quite negative about interval arithmetic, noting that intervals tend to grow exponentially large as a computation progresses. Instead, the motivation he suggested was that you could run a computation in all the rounding modes, and if the results in all of them were reasonably close, you could be fairly certain that the result was accurate. Of course, at a higher level the essay is exactly right: the fact that this motivation isn't explained in the IEEE 754 spec is precisely what allowed the misunderstanding to occur at all.
That would be one way to achieve the behavior you describe, but it is certainly not the only way, nor is it the best. You could, for example, do a computation that produces a result rounded to odd with a few extra bits (independent of the prevailing rounding mode), then round that result to the destination precision using the prevailing mode.
This assumes that you really want correctly rounded transcendentals anyway, which at present are sufficiently slow that you might as well dispatch based on rounding mode (despite the herculean efforts of the CRLibm crew). If faithful rounding is all that is required, even simpler methods exist that are extremely performant.
I actually thought that was the use case I was describing, though I would expect round-positive and round-negative to be enough. Don't the other rounding modes yield results within those bounds?
round default: 0.072817
round up: 0.072703
round down: 0.072931
Now I don't understand how you get a reliable result using rounding modes at all.
- Either use interval arithmetic, which is tricky and may give you uselessly large bounds, but those bounds are guaranteed to hold.
- Or just try all the rounding modes, which is less likely to give you large bounds, but now you have no guarantee that those bounds say anything meaningful about your computation.
Or does the second case give meaningful guarantees?
Firstly, I find his example contrived. He wants to determine the accuracy of a black box by subtly tweaking how floating-point operations spread throughout that black box behave? It seems like an okay hook for ad-hoc debugging -- if those flags are around anyway -- but as the primary rationale for the entire rounding mode mechanism? That's not a lot of bang for a whole lot of complexity.
Secondly, this is exactly the kind of discussion around rationales and use cases I'm looking for. Even if I don't buy his solution, the problem still stands and might be solvable some other way, for instance through better control over external dependencies. Maybe something like Newspeak's modules-as-objects and hierarchy inheritance mechanisms could be applied here.
One of the most infuriating things about global rounding modes is that they are so slow in precisely the situations where changing the rounding mode would actually be useful. A nice example is Euclidean division: given x, y, find the largest integer q such that y >= q*x (in exact arithmetic). If you change the rounding mode to round-down, then floor(y/x) gets you q. However, the mode change is typically so slow that it is quicker to do something like round((y - fmod(y,x))/x).
Just define a rounding behavior for the language and implement it that way. Don't claim full 754 support, just specify the strategy used by the language. A sin function should behave according to the design of that function and should be able to ignore any previous state of the FP hardware. I have not seen a language that directly supports setting the rounding modes, so any language libraries can do what they like - you don't need to preserve or worry about something you don't offer the option to modify.
1.0 3.0 /f double>bits .h
1.0 3.0 /f double>bits .h
with_rounding(Float64, RoundUp) do
    println(bits(1.0 / 3.0))
end
with_rounding(Float64, RoundToZero) do
    println(bits(1.0 / 3.0))
end
But that's not what I want. I want a paranoid language. I want a language where potential division by zero is a checked exception. One where "a = b / c" won't even compile if c might be 0. One that won't compile if it can find an example of an input to a function where an assertion fires. I want one where there is no such thing as an unchecked exception. Or rather, one where you can explicitly override checked exceptions to be the equivalent of (read: syntactic sugar for) "except: print(readable exception trace); exit(<code>)" - but you need to explicitly override it to do so.
Would it be a pain to write in? Yes. But at the same time there's a lot of software that would be best written in this manner. A language where the language itself forces you to be paranoid.
Dependently typed languages can provide this.
Again: I am looking for a language that doesn't require you to provide a proof. I'm looking for a language that is a "logical extension" of what currently is available - that is, I am looking for a language that will attempt to find a counterexample on compilation and will bail if it can.
Any working program will in some sense be a proof, by Curry-Howard. So I think asking to not have to provide a proof is backwards; what you want is a language that makes it easy to express the program and manipulate it as a proof.