Sounds a bit like an argument for Checked Exceptions: If you're going to write code for one layer of abstraction, your code should only emit errors that match its own tier. (And not raw naked primeval stuff occurring many layers down.) ( http://imgur.com/iYE5nLA )
I feel like "unspecified" still sounds too lax. How about "implementation-defined"?
I'm thinking of scenarios like: "When this occurs, the Java Virtual Machine implementation may choose either X or Y. If it does Y, then it shall throw a Z exception."
Can you add a source link? I would be interested in looking it up.
It is one of the few languages that has denotational semantics as part of the language definition.
It was also one of the first languages to fix the null reference problem (called Void-safety in Eiffel speak).
Had the compilers been a bit cheaper, rather than targeted at enterprise customers, many more people might be enjoying writing safe, natively compiled software in Eiffel.
Undefined behavior means that some well-formed programs have semantic holes: the specification says it is up to you to ensure your program never reaches certain illegal states. If your program does, the language says "I told you not to go there. Now I can do whatever I want". The reason is that by assuming the program cannot reach those states, the compiler can generate better code (think bounds checking).
What Gödel's theorem says (and the more CS-specific version of it is Rice's theorem) is that you cannot have an algorithm that decides an arbitrary property of the semantics of a program. But if the language is safe, the property of being well-defined is trivial (all programs have it), and trivial properties are the only exceptions to the theorem.
Though if I'm missing the point you're making, it would probably be quite interesting to see it in more depth.
Nothing stops you from creating a computer language with absolutely no undefined behavior, except the general difficulty of fully defining behavior and the practical risk of overspecifying it to the point where it can't be implemented efficiently on real hardware. You also face the possibility that your model of the real hardware is flawed, if for no other reason than CPU bugs, so in practice you may not be able to manifest your fully-specified language 100.00000%. But it's not impossible in principle.
"Undefined" refers to certain properties of certain languages, like C, where an operation can produce any of an unenumerated set of behaviors, including behaviors that go well beyond normal program behavior. For example, in C, writing to a pointer that's pointing outside of allocated application memory can do things like change the current function's return address; in C++ it may change an object's vtable. The set of possible behaviors that can ensue is too large to enumerate and depends heavily on unforeseen factors.
What you're referring to is unspecified behavior. For example, in C, calling a function like so: foo(f(), g()) does not specify the order in which f and g are called, but the result can only be one of a small set of possibilities, and cannot cause undefined behavior (like changing the function's return address etc.). (The classic foo(i++, i++) is actually a bad example here: two unsequenced modifications of i are themselves undefined behavior in C.)
Creating a language with no undefined behavior is quite easy and is the norm in many high-level languages. Fully specifying the behavior of the language is harder, and you're right that often there is intentionally unspecified or implementation-dependent behavior to allow for efficient implementations on different platforms by the language's compiler/runtime, OS or hardware. For example, in Java, changing the values of non-volatile fields concurrently from multiple threads has an unspecified result, and this is intentional.
Maybe they should remain coupled - after all, they're intimately related, the error check is what makes the "unsafe" operation reasonable. For a program to remain correct it is vital that the error check remains adequately coupled to the undefined behaviour it's preventing - e.g. if an operation that would do something weird on overflow is being used, the link to our reason for believing that overflow can't happen in this case should be made explicit. It should be possible to do this in a way that has zero overhead in the final machine code (e.g. a richer type system at the LLVM bytecode level).
But this is not true: there are things that leak through and cause big headaches for the designers of Rust.
I think that the distinction should matter more if/when someone tries another implementation of the language.
Huh? This is quite a bit of a false dichotomy... Illegal operations could also result in unspecified behaviour (e.g. it's not specified what result an integer overflow gives, but the rest of the program must continue normally).
> unsafety of machine code
In what way is the machine code unsafe? AFAIK the CPU will always try to execute the code, the worst that can happen is some kind of a trap.
If it was decided that an overflow would generate an error, then it was a design decision to trap errors at that level. The program could crash so that everybody knows something is wrong.
If it was decided that an overflow would just wrap around, then it sounds like a refusal to trap the error. The program could continue in an unexpected state.
Maybe it's better to crash the program so you know something is wrong.
They explained in the very next sentence: "e.g. it's not specified what result an integer overflow gives, but the rest of the program must continue normally".
(In the C standard, "implementation-defined" is the explicit marker for things where a compiler must document its choice of semantics; merely "unspecified" behavior need not be documented.)
I agree with the thrust of the statement, but it isn't nearly as simple as stated. Compiler bugs exist, and the more standards your code has to pass through on the way to machine code (e.g. C spec => LLVM spec => x86 spec), the dicier this becomes, not to mention the assumptions about the APIs exposed to you by your OS and the libraries you use.
 The calculations needn't be performed by a machine. If they are performed by a machine, you have a safe language. If they must be performed by humans, you have an unsafe language, which is nevertheless a lot more usable than C and friends.
A major difficulty here is that undefined behaviour is a property of a particular execution of a C program, rather than a static property of the program itself. Tools that dynamically detect UB are useful, but will not demonstrate that there are no inputs for which a program will go wrong.
OTOH, machines excel at exhaustive enumeration and enforcing separation of concerns: that's why ML-style algebraic data types, pattern matching and parametric polymorphism are such a boon for high-level programming.
So there's room for both human and machine work in program verification. What we should be interested in is finding the right division of labor between humans and machines.
There are several things, like signed overflow of integrals, that cannot often be determined at compile time.
That works on the Linux command line, and in this text field on Linux:
Similar methods are available for Windows and Mac, I imagine.
Typing the more unusual characters has always been way easier on Macs. (Literally since Macs were invented.)
That used to be the case, but I find the Compose key in X is awesome: as others have noted, Compose / = → ≠ (and → is Compose - >)
Certainly longer than typing !=, but far easier than looking up hex values.
    PROCEDURE & Init(id: LONGINT);
    BEGIN
        IF id # NofPhilo-1 THEN
            first := id; second := (id+1) MOD NofPhilo
        ELSE
            (* last philosopher takes forks in reverse order to avoid deadlock *)
            first := 0; second := NofPhilo-1
        END
    END Init;