The essay is not quite right about the motivation for rounding modes – they are not intended to support interval arithmetic. When we got the chance to ask him about it, Kahan was quite negative about interval arithmetic, noting that intervals tend to grow exponentially large as your computation progresses. Instead, the motivation he suggested was that you could run a computation in all the rounding modes, and if the results in all of them were reasonably close, you could be fairly certain that the result was accurate. Of course, at the higher level the essay is exactly right: the fact that this motivation isn't explained in the IEEE 754 spec is precisely what allowed this misunderstanding about the motivation to occur at all.
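The diagnostic Kahan describes can be sketched with Python's decimal module (decimal arithmetic rather than binary IEEE 754, but the same rounding-mode machinery); the computation and the tolerance here are invented for illustration:

```python
from decimal import Decimal, localcontext, ROUND_DOWN, ROUND_HALF_EVEN, ROUND_UP

def partial_zeta2(mode, terms=50, prec=8):
    """Sum 1/i^2 at deliberately low precision under a given rounding mode."""
    with localcontext() as ctx:
        ctx.prec = prec
        ctx.rounding = mode
        total = Decimal(0)
        for i in range(1, terms):
            total += Decimal(1) / (Decimal(i) * Decimal(i))
        return total

# Re-run the same computation under several modes and compare.
results = {m: partial_zeta2(m) for m in (ROUND_HALF_EVEN, ROUND_UP, ROUND_DOWN)}
spread = max(results.values()) - min(results.values())
# If the spread is tiny relative to the result, rounding error is
# probably not dominating the computation.
```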
That would be one way to achieve the behavior you describe, but it is certainly not the only way, nor is it the best. You could, for example, have the computation produce a result rounded to odd with a few extra bits (independent of the prevailing rounding mode), then round that result to the destination precision using the prevailing mode.
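A sketch of that double-rounding trick using exact rationals rather than hardware floats (fixed-point `frac_bits` stands in for a floating-point significand; the helper names are mine): round to odd at an intermediate precision with at least two extra bits, and a second rounding to the destination precision gives the same answer as rounding there directly.

```python
from fractions import Fraction
import math

def round_to_odd(x, frac_bits):
    """Round x to a multiple of 2**-frac_bits; if inexact, pick the
    neighbour whose scaled integer is odd, so no information about
    which side of the grid point x fell on is lost."""
    scaled = x * 2**frac_bits
    if scaled.denominator == 1:
        return x                       # exact at this precision: keep it
    n = math.floor(scaled)
    if n % 2 == 0:
        n += 1                         # inexact: force the odd neighbour
    return Fraction(n, 2**frac_bits)

def round_half_even(x, frac_bits):
    # Python's round() on a Fraction is round-half-to-even
    return Fraction(round(x * 2**frac_bits), 2**frac_bits)

# Round-to-odd at p+2 bits, then round-to-nearest-even at p bits,
# matches direct round-to-nearest-even at p bits.
p = 4
x = Fraction(1, 3)
two_step = round_half_even(round_to_odd(x, p + 2), p)
direct = round_half_even(x, p)
```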
This assumes that you really want correctly rounded transcendentals anyway, which at present are sufficiently slow that you might as well dispatch based on rounding mode (despite the herculean efforts of the CRLibm crew). If faithful rounding is all that is required, even simpler methods exist that are extremely performant.
I actually thought that was the use case I was describing, though I would expect round-positive and round-negative to be enough. Don't the other rounding modes yield results within those bounds?
round default: 0.072817
round up: 0.072703
round down: 0.072931
Now I don't understand how you get a reliable result using rounding modes at all.
- Either use interval arithmetic which is tricky and may give you useless large bounds, but those bounds are guaranteed to hold.
- Or just try all the rounding modes which is less likely to give you large bounds, but now you have no guarantee that those bounds say something meaningful about your computation.
Or does the second case give meaningful guarantees?
Firstly, I find his example contrived. He wants to determine the accuracy of a black box by subtly tweaking how floating-point operations spread throughout that black box behave? It seems like an okay hook for ad-hoc debugging -- if those flags are around anyway -- but as the primary rationale for the entire rounding mode mechanism? That's not a lot of bang for a whole lot of complexity.
Secondly, this is exactly the kind of discussion around rationales and use cases I'm looking for. Even if I don't buy his solution, the problem still stands and might be solvable some other way, for instance through better control over external dependencies. Maybe something like Newspeak's modules-as-objects and hierarchy inheritance mechanisms could be applied here.
One of the most infuriating things about global rounding modes is that changing them is so slow precisely in the situations where doing so would actually be useful. A nice example is Euclidean division: given x, y, find the largest integer q such that y >= q*x (in exact arithmetic). If you change the rounding mode to round-down, then floor(y/x) gets you q. However, the mode change is typically so slow that it is quicker to do something like round((y - fmod(y, x))/x).
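A sketch of that fmod workaround in Python (the function name is mine). It works because fmod is exact in IEEE 754, so `y - fmod(y, x)` is very nearly an exact multiple of x and the final division lands on, or right next to, an integer:

```python
import math
from fractions import Fraction

def euclidean_quotient(y, x):
    """Largest integer q with q*x <= y (for x > 0), without touching
    the rounding mode: subtract the exact remainder first, then the
    division is close enough to exact that round() recovers q."""
    return round((y - math.fmod(y, x)) / x)

# Naive floor(y/x) can be off by one when y/x rounds up across an
# integer: 1.0/0.1 rounds to exactly 10.0 in double precision, but
# the true quotient of those two doubles is just below 10.
```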
Just define a rounding behavior for the language and implement it that way. Don't claim full 754 support, just specify the strategy used by the language. A sin function should behave according to the design of that function and should be able to ignore any previous state of the FP hardware. I have not seen a language that directly supports setting the rounding modes, so any language libraries can do what they like - you don't need to preserve or worry about something you don't offer the option to modify.
1.0 3.0 /f double>bits .h
1.0 3.0 /f double>bits .h
with_rounding(Float64, RoundUp) do
    println(bits(1.0 / 3.0))
end
with_rounding(Float64, RoundToZero) do
    println(bits(1.0 / 3.0))
end
But that's not what I want. I want a paranoid language. I want a language where potential division by zero is a checked exception. One where "a = b / c" won't even compile if c might be 0. One that won't compile if it can find an example of an input to a function where an assertion fires. I want one where there is no such thing as an unchecked exception. Or rather, one where you can explicitly override checked exceptions to be the equivalent of (read: syntactic sugar for) "except: print(readable exception trace); exit(<code>)" - but you need to explicitly override it to do so.
Would it be a pain to write in? Yes. But at the same time there's a lot of software that would be best written in this manner. A language where the language itself forces you to be paranoid.
Dependently typed languages can provide this.
Again: I am looking for a language that doesn't require you to provide a proof. I'm looking for a language that is a "logical extension" of what currently is available - that is, I am looking for a language that will attempt to find a counterexample on compilation and will bail if it can.
Any working program will in some sense be a proof, by Curry-Howard. So I think asking to not have to provide a proof is backwards; what you want is a language that makes it easy to express the program and manipulate it as a proof.
You might tuck a copy of these into your personal library in case IEEE purges them.
IEEE754 base document: http://www.validlab.com/754R/standards/754.pdf
IEEE754r draft: http://www.validlab.com/754R/drafts/archive/2006-10-04.pdf
Perhaps someone who has seen both versions can comment on how close these are to the closed versions.
It's not hard to find copies of the final standard online, but the availability issue is definitely something that the committee is aware of.
Basically, it encapsulates attributes and status flags into a thread-local "context" which you get/set through normal function calls. There's also the helpful "with" syntax which allows you to say "run this code block (and anything it calls, etc) with this context instead of the current one, then restore the current one on exit".
A sibling comment talked about a sin() example where you want to use an explicit rounding mode for your calculations, then apply the global rounding mode to the result.
Under this paradigm it would look something like:
with MySpecialContext(settings, etc) as ctx:
    check status flags in ctx
round result  # this uses the parent context
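That description matches Python's decimal module; here is a concrete version of the sketch above (the APIs are real decimal APIs, but the scenario is invented):

```python
from decimal import Decimal, localcontext, ROUND_CEILING, Inexact

with localcontext() as ctx:            # ctx starts as a copy of the current context
    ctx.prec = 4
    ctx.rounding = ROUND_CEILING       # explicit settings for the internal work
    q = Decimal(1) / Decimal(3)        # rounds under ctx
    raised = bool(ctx.flags[Inexact])  # status flag raised inside the block
# on exit the previous context is restored, flags and all
```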
It just seems so odd to spend so much effort to develop a public standard, then make it expensive and hard to use. Doesn't that defeat the entire point of having a standard?
In fact, dynamic rounding modes are not required at all by IEEE-754 (2008). The revised standard requires that languages provide a means to specify static rounding at "block" scope, which is a language-defined syntactic unit.
> (4.1) the implementation shall provide language-defined means, such as compiler directives, to specify a constant value for the attribute parameter for all standard operations in a block; the scope of the attribute value is the block with which it is associated.
> (2.1.7) block: A language-defined syntactic unit for which a user can specify attributes.
You can take "block" to mean whatever makes sense for your language: it could be a single arithmetic operation or it could be the whole program (though it's more useful if it isn't). It is recommended, but not required, that languages provide a means to access "dynamic" rounding modes as well, which correspond roughly to what most people think of when they think of IEEE-754 rounding modes as widely implemented, but again a huge amount of flexibility is left to the languages to choose exactly what scope and precedence rules make sense for their language.
Unfortunately, efficient hardware support for such fine-grained static rounding is still somewhat lacking in the commodity CPU world. On GPUs and some other compute devices it is quite natural (and "dynamic" rounding is sometimes quite a hassle). AVX-512 will bring support for per-instruction static rounding to mainstream CPUs.
When we look at flags, the situation is much the same. Languages completely specify the scope of flags. There is no requirement of mutable global state. For example:
> (7.1) Language standards should specify defaults in the absence of any explicit user specification, governing ... whether flags raised in invoked functions raise flags in invoking functions.
Like with rounding, current commodity CPUs make it easier to provide flags with thread scope, but IEEE-754 does not require it. Commodity hardware works the way it does because mainstream languages work that way. If a different model makes sense for your language, do that.
Finally, the concern about "exceptions" is entirely misplaced. "Exception" in IEEE-754 simply means "an event that occurs when an operation on some particular operands has no outcome suitable for every reasonable application," which is a rather different meaning than the way "exception" is understood in colloquial PL usage. Under default exception handling, which is all that IEEE-754 requires implementations to support, all that needs to happen in the case of an exception is for the implementation to raise the corresponding flag, the scope of which is (as previously discussed) up to the language to specify.
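Again using Python's decimal module as a stand-in: with the trap disabled (i.e., IEEE-style "default exception handling"), a divide-by-zero just raises a flag and delivers an infinity, with no language-level exception involved.

```python
from decimal import Decimal, localcontext, DivisionByZero

with localcontext() as ctx:
    ctx.traps[DivisionByZero] = False  # default handling: flag it, don't raise
    result = Decimal(1) / Decimal(0)   # delivers Infinity...
    flagged = bool(ctx.flags[DivisionByZero])  # ...and raises the flag
```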
I would encourage you to direct questions like these about the spec to committee members. If you work for a big company, a few members probably work with you. If you don't, most committee members are happy to answer questions, even from people they don't know.
The concerns about access to the spec itself and to the minutes are well-placed, and definitely something that the committee is aware of. (But mostly out of the committee's hands; it's up to IEEE to set pricing. Send them your comments!)
I deliberately never say that the rounding flags are global, my strawmen are that they're either lexically or dynamically scoped, both of which are problematic.
> You can take "block" to mean whatever makes sense for your language: it could be a single arithmetic operation or it could be the whole program (though it's more useful if it isn't).
What I'm arguing is that there is no interpretation of "block" that yields a satisfying result. I didn't consider applying it to individual operations because the whole motivation for the rounding mode mechanism is to keep the same mode in effect across multiple operations. The lack of hardware support suggests the same thing.
Maybe you can give an example of how you see this working where the attributes are more tightly scoped than per-thread?
> Finally, the concern about "exceptions" is entirely misplaced. "Exception" in IEEE-754 simply means "an event that occurs when an operation on some particular operands has no outcome suitable for every reasonable application," which is a rather different meaning than the way "exception" is understood in colloquial PL usage.
Fair enough, though that introduces a new problem: how do you implement FP exceptions if they're not language-level exceptions? But maybe if a reasonable solution can be found for the rounding flags, something like that would work for the exceptions too.
> I would encourage you to direct questions like these about the spec to committee members. If you work for a big company, a few members probably work with you. If you don't, most committee members are happy to answer questions, even from people they don't know.
I did, I raised some of these issues, including the licensing issue, with David Bindel a year ago.
The underlying problem is that programming languages and compilers want to model something like "add" as an operation which has two inputs, one output, and no side effects. The need to support flags conflicts with this. Declaring that flags don't cross function boundaries or any other boundaries doesn't make the problem go away.
That is a relief – this might actually make rounding modes usable. Unfortunately, it will only be usable on very new hardware, meaning that it's pretty hard to actually use in a language. Of course, as you point out, this is more of a hardware issue than a spec issue, but ultimately, I see one of the major responsibilities of a good spec to be ensuring that compliant hardware has all the primitives necessary to use features like rounding modes effectively. In my view, IEEE 754 has failed in this area. If a new version of the spec were to fix this, we would be in great shape – in 20 years.
Unsurprisingly, it's the hardware manufacturers who block most new requirements on hardware that IEEE-754 might want to add. They very much want the committee to standardize existing practice, and short of reforming the membership rules, there's very little that could be done to prevent them from blocking changes.
At the same time though, it's not designed that way accidentally or in ignorance of the problems it creates. IEEE-754 knew that programming languages wouldn't be very happy about global state, and chose to keep it because they believed it was still the best approach. In many other areas, IEEE-754 pushed against people who said it would be too hard to implement, and in retrospect they ended up being right in many cases. It's tempting to wonder if global mutable state really was too much of a tradeoff though, in retrospect.
For clarity, is 'they' referring to IEEE-754 or the people who said it was too hard to implement?
Can I copy it into my own spec or would that be copyright infringement? Can I write something myself that means the same? Or do I have to link to the spec and require my users to pay to read it?
What if your language doesn't have exceptions? What does it mean to provide a status flag?
This may have made sense at the time of the original spec in 1985, but it's mutable global state – something that has long been recognized as problematic and which languages are increasingly moving away from.
Program block? Attribute specification? Nothing else in my language works this way. How do I even apply this in practice? If attributes are lexically scoped, then rounding modes only apply to code textually within the scope – if you call out to a function, its operations will be unaffected. If on the other hand attributes are dynamically scoped, then it works with the functions you call – but dynamically scoped state of this flavor is really uncommon these days; most likely nothing else in your language behaves this way.
You could have written basically the same reply, but starting out with, "Hey, those are good questions. Because of my time in the academic world, I happen to know some of the answers. Let me see if I can help."
But instead you had to be a dick. That's undeniably fun, but contempt for people asking reasonable questions doesn't make them smarter; it just stops them from asking questions.
The guy has a reasonable background as a language guy. He was trying to deal with the spec; he's allowed to opine on it.
That you felt things? Those are your feelings. That you think he should be obliged to address your personal concerns preemptively? That's your problem, not his. If you want to know how deeply he's researched the issues, you could say, "Hey, have you looked at C's fenv.h?" Rather than just assuming the answer that lets you be a dick.