In the simplest case, think of your 'interpreter' as a program that translates programs into values. For example, 1 is a value and \x.x+1 (the lambda expression) is a value, but (\x.x+1)1 is not a value because it evaluates to 2 (or, at least, to 1+1).
Let's say that your interpreter does this a single small step at a time; so for instance, if you have* f(x,y) = x+y, then f(1,2) |--> (\x. x+2)(1) |--> (1+2) |--> 3.
Type safety goes like this: If x has type T then either x |--> x' or x is a value. Also, if x has type T and x |--> x' then x' also has type T.
The first property is "well-typed programs don't go wrong", and the second closes a loophole ("if you're a well-typed program, then you're never going to evaluate to anything that isn't also well-typed"), so we can never escape the first property by evaluating.
* using the product syntax because some people will get confused by f 1 2, but that's in fact what I mean.
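The two properties can be made concrete with a toy evaluator. A minimal sketch (the language and all the names in it are my own invention for illustration, not from the comment above): a two-type expression language whose `step` function satisfies progress and preservation by inspection.

```haskell
data Ty = TNum | TBool deriving (Eq, Show)

data Expr
  = Num Int
  | Bool Bool
  | Add Expr Expr
  | If Expr Expr Expr
  deriving (Eq, Show)

-- Values are the expressions that cannot step further.
isValue :: Expr -> Bool
isValue (Num _)  = True
isValue (Bool _) = True
isValue _        = False

typeOf :: Expr -> Maybe Ty
typeOf (Num _)    = Just TNum
typeOf (Bool _)   = Just TBool
typeOf (Add a b)  = case (typeOf a, typeOf b) of
  (Just TNum, Just TNum) -> Just TNum
  _                      -> Nothing
typeOf (If c t e) = case (typeOf c, typeOf t, typeOf e) of
  (Just TBool, Just tt, Just te) | tt == te -> Just tt
  _                                         -> Nothing

-- One small step. Progress: if typeOf e = Just t and e is not a value,
-- then step e = Just e'. Preservation: typeOf e' = Just t as well.
-- Ill-typed terms like Add (Bool True) (Num 1) get stuck (Nothing),
-- but they were never well typed, so neither property is violated.
step :: Expr -> Maybe Expr
step (Add (Num a) (Num b)) = Just (Num (a + b))
step (Add a b)
  | not (isValue a)        = (\a' -> Add a' b) <$> step a
  | otherwise              = Add a <$> step b
step (If (Bool True)  t _) = Just t
step (If (Bool False) _ e) = Just e
step (If c t e)
  | not (isValue c)        = (\c' -> If c' t e) <$> step c
step _                     = Nothing

-- Iterate to a value: Add (Num 1) (Num 1) plays the role of (\x.x+1)1.
eval :: Expr -> Expr
eval e = maybe e eval (step e)
```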
I'm not surprised, either. Siek's papers are always as accessible as they are insightful.
The problem is that you can sometimes distinguish, say, a segfault from an infinite loop, and any code which does that is pretty dangerous.
That's why "pure" exceptions are considered super taboo in Haskell. If you need exception passing then you should do it in something like IO/Either/Cont to contain that effect.
It's also why the few partial functions in Haskell's Prelude are all considered warts and are never for practical use:
head :: [a] -> a
tail :: [a] -> [a]
(!!) :: [a] -> Int -> a
Their total counterparts return Maybe instead:

head :: [a] -> Maybe a
tail :: [a] -> Maybe [a]
(!!) :: [a] -> Int -> Maybe a
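Possible implementations of those total signatures (the `safe*` names are mine, to avoid shadowing the Prelude; base's `Data.Maybe.listToMaybe` already covers the head case):

```haskell
-- Total versions of the partial Prelude functions: instead of crashing
-- on [] (or a bad index), they return Nothing.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x:_)   = Just x

safeTail :: [a] -> Maybe [a]
safeTail []      = Nothing
safeTail (_:xs)  = Just xs

safeIndex :: [a] -> Int -> Maybe a
safeIndex xs n
  | n < 0     = Nothing
  | otherwise = case drop n xs of
      (x:_) -> Just x
      []    -> Nothing
```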
Personally, I kind of wish we had `(/) :: Fractional a => a -> a -> Maybe a` sometimes. It'd get confusing with IEEE floats, though, since that type already contains the values Inf, -Inf, and NaN.
To do this, you need to define another decently large set of judgments for error propagation (such as "if x is an error and y : t then (x,y) is an error" and vice versa), but ultimately you can maintain type safety via progress and preservation while still accounting for exceptions as we know them.
On the topic of whether it would make ASM well-typed, I figure that the lack of array bounds checking would be one reason why ASM would have problems, but I haven't thought it through fully. However, I found a neat paper that tries to create a type-safe assembly language pretty similar to x86: http://www.cis.upenn.edu/~stevez/papers/MCGG99.pdf
edit: got grandparent's username wrong.
But... that's why I added the "In the simplest case".
If you're not familiar with the "progress + preservation" definition, then it's going to be rough going to understand anything else without lots of background in logic or proof theory, especially where programming languages meet software engineering. As a prime example, your comment ("include exceptions in the set of values") means something quite different to someone who has an intuitive grasp of the "progress + preservation" definition than to someone who does not.
See also Robert Harper's post and Michael Hicks's response on the article in question.
Which is a shame, because it's precisely this range (the gap between non-Liquid Haskell or OCaml and Java) which is most relevant to today's programmers choosing a programming language.
fun initialize(sm) -> sm'
where sm.state == uninitialized, sm'.state != uninitialized
fun ping_clients(sm) -> sm' where has_clients(sm)
fun has_clients(sm) -> bool
An even smarter compiler can, if it knows the definitions of these functions, determine statically whether a given predicate is guaranteed to produce the same result on a modified object (e.g. simply by checking which fields are read and which are modified).
However, arbitrary predicates are a different matter. Even in a language which allows arbitrary computation in types, there exist undecidable predicates, which may temper your desire for arbitrary ones.
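For what it's worth, the first constraint (initialize only on an uninitialized machine) can already be encoded in ordinary Haskell with phantom types. A sketch with invented names; note that the has_clients precondition on ping_clients is a value-level predicate, and this encoding does NOT capture it:

```haskell
{-# LANGUAGE EmptyDataDecls #-}

-- The machine's state lives in a phantom type parameter, so calling
-- initialize on an already-initialized machine is a compile-time error.
data Uninitialized
data Initialized

newtype SM state = SM { clients :: [String] }

newSM :: SM Uninitialized
newSM = SM []

initialize :: SM Uninitialized -> SM Initialized
initialize (SM cs) = SM cs

addClient :: String -> SM Initialized -> SM Initialized
addClient c (SM cs) = SM (c : cs)

hasClients :: SM Initialized -> Bool
hasClients = not . null . clients
```

`initialize (initialize newSM)` is rejected by the type checker, which is exactly the static guarantee the pseudocode above asks for.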
It seems to me to be better to make this a runtime check, which the compiler makes sure to call every time subtraction is applied to a value of that type.
It's a problem if you assert that the result of subtraction has the same type, because the non-negative integers are not closed under subtraction. If the definition of your data type relies on something that is internally incoherent, you have a problem; but you can define subtraction of non-negative integers in a way that is general, unproblematic, and doesn't require compile-time knowledge of specific values, e.g.:
data Nat = Zero | Succ Nat
data Neg = Neg Nat
subtract :: Nat -> Nat -> Either Nat Neg
I think that this is one example where the types don't speak completely for themselves. Is `Neg Zero` meant to stand for the integer `0` (as one might naïvely expect) or the integer `-1` (as one might expect from the pseudocode `Neg n = -(n + 1)`)? If the former, then what is `subtract Zero Zero`? (I.e., is it `Zero` or `Neg Zero`? I hope that we don't get into IEEE 754 (http://en.wikipedia.org/wiki/Signed_zero) territory here!) If the latter, then there's no ambiguity, but I think that it's fair to say that that meaning isn't clear just from the type!
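For what it's worth, here is one possible implementation under the second convention (`Neg n` stands for `-(n + 1)`), which resolves the ambiguity by construction; `subtract'` is my own name, to avoid clashing with Prelude's `subtract`:

```haskell
data Nat = Zero | Succ Nat deriving (Eq, Show)
data Neg = Neg Nat deriving (Eq, Show)  -- convention: Neg n means -(n + 1)

-- Left n is the non-negative result n; Right (Neg n) is the strictly
-- negative result -(n + 1). Under this convention there is no Neg Zero
-- ambiguity: subtract' Zero Zero is Left Zero, and Right (Neg Zero) is -1.
subtract' :: Nat -> Nat -> Either Nat Neg
subtract' n        Zero     = Left n
subtract' Zero     (Succ m) = Right (Neg m)
subtract' (Succ n) (Succ m) = subtract' n m
```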
As usual, I really suggest playing with Agda or Idris to get a feel for what sophisticated types actually look and feel like.
But it's not clear to me what happens when you try to compile and typecheck a program that attempts to subtract numbers provided as input at runtime.
What you'd need to do is something like
createProof : (n : Nat) -> (m : Nat) -> Maybe (n >= m)
This is one of the common misconceptions about the stronger-typed languages. I know because I had it before I learned Haskell. To say that you have a guaranteed non-negative number does not necessarily mean that if you subtract two such numbers the compiler must be able to prove on the spot that the result is non-negative.
What exactly it means depends on the language. Haskell is not particularly capable of representing that type (or at least, simple Haskell is not), so it'll happily let you write:
subtract :: NonNegative -> NonNegative -> NonNegative
when what you really want is:

subtract :: NonNegative -> NonNegative -> Maybe NonNegative
In Haskell, this is a runtime check... it is simply one you cannot ignore. It isn't an exception (very easy to forget to check for), you can't use the result (if there is one) until you unwrap it, and the language syntax for doing so tends to strongly encourage you to handle the error. You cannot subtract two such numbers without having it shoved in your face that it may fail. You may then choose not to check it, since the language can't actually stop you, but you will be doing so with full knowledge of that fact.
And in general, that's how a lot of the "bad computations" that a type system prevents are handled... they aren't "prevented" so much as "you are forced to deal with violations". For instance, imagine you are deserializing input from an external source... in a strongly-typed language, all the deserialization routines will use Maybe (or perhaps Either, with errors) pervasively, since deserialization can fail at pretty much any point. The language doesn't prevent failures from happening, because it can't. It prevents you from sweeping failures under the rug, accidentally or otherwise.
Thus, in Haskell, the first subtract example up above, while the compiler won't actually stop you from writing it, is incorrect; there will exist no implementation that doesn't behave badly for some inputs. In a dependently typed language, there exist safe implementations, but you will have to provide some sort of proof that it works. However, since in general it's difficult to prove, that would probably involve some local context, and it's not hard to imagine that in practice for such a simple thing you might still end up using the equivalent of the Haskell function. It just depends on what you have in hand.
In a way, programming in something like Go has a very similar feel in this way to programming in Haskell, where the fact that the error is constantly shoved in your face and you are forced to deal with it before moving on is the way it generally works. Haskell has a wide variety of clever ways of dealing with it, and Go mostly uses brute programmer force, but the feel is quite similar, IMHO.
So this is a perfectly well-defined notion of subtraction on non-negative numbers. It reflects the obvious fact that NonNegatives are not closed under subtraction.
I think a term like "vacuously type safe" is a good description of what Python/Ruby are under this set of definitions, so as to distinguish them from languages where type safety actually means something slightly stronger than "all syntactically valid programs are accepted by the null type system".
They are indeed safe; you won't get undefined behavior in the vein of C/C++. In fact, if they weren't, there wouldn't be any point to using them over more performant alternatives.
The only time as a Ruby developer I've ever encountered a segfault is when a library calls out to a C extension, and the problem is fixed by switching to a different gem. I'm safe in that whenever my program fails, the bug is in something I've written. I've never found an actual bug in Ruby itself.
Sure, I'm giving up a hell of a lot of performance for this safety, but it's more than worth it for my company's use case.
In fact, I can't even imagine a form of safety you could add to Ruby that isn't a Very Hard Problem, e.g. safety from race conditions. TDD in Ruby gives you a kind of certainty about the correctness of your code that is hard to get in other languages. If something breaks, just add another test.
TDD is pretty easy in many other languages, both dynamic and static. In some ways, static type systems can enhance TDD, by making it easier to test invariants that are beyond what the type system can statically verify: because static types can be used to generate test data automatically, you specify the universally quantified invariant that must hold and the testing framework generates numerous tests, instead of you specifying each test by hand. Both Haskell and Scala (and no doubt other languages) have testing frameworks that leverage the type system this way.
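For example, with Haskell's QuickCheck the invariant is written once and the framework generates the inputs from the argument type (the property name here is my own):

```haskell
import Test.QuickCheck (quickCheck)

-- Universally quantified invariant: reversing a list twice is the
-- identity. QuickCheck reads the argument type ([Int]) and generates
-- around 100 random lists to check it against.
prop_reverseInvolutive :: [Int] -> Bool
prop_reverseInvolutive xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseInvolutive
```

Scala's ScalaCheck works the same way, and both will print a counterexample when a property fails.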
There are, of course, other reasons to choose Ruby, perhaps because of Rails, large talent pool, code conciseness, etc.
Rather than debate this for the gazillionth time, I'll just point you at tel's article which breaks it down very nicely: http://tel.github.io/2014/07/08/all_you_wanted_to_know_about...
With a definition that broad, you really need to examine every extant approach to solving the problem, even if you don't want to go in that direction.
Other things are obviously language safety features, but just are not "types" in the sense in which some languages are "type safe".
Progress: if an expression is well typed then either it is a value, or it can be further evaluated (in the sense that "an appropriate evaluation rule exists").
Preservation: if an expression e has type t, and is evaluated to e', then e' has type t.
That's it, really (well, you can do a few extensions to allow errors such as division by zero, or take memory into account).
If your language has both of these properties, they tell you that typed expressions will behave nicely at run time.
And that's really the rub. You want a language which is type safe and possessing expressive types. Getting both is challenging.
Additionally, Ruby prevents e.g. calling an undefined method on an object. Given the way the interpreter works this comes trivially, but that does not mean it does not affect type safety.
To summarize: there are certain classes of type errors related to e.g. casting which would be dumb to allow in a (non-toy) dynamically typed language and others which come with no particular cost. This does not mean the type safety of the language is "vacuous" (though it may negate the need for an explicit "type checker").
If that's not the case, is "type-safe, pure, and functional" a good epithet to extend, say, Haskell's "type-safe" tag with many of its other features that weave well with the type system?
Typically, the marketing definition means "type safe and expressively typed" which outlaws trivial type systems and their trivial type safety.
Haskell also uses type inference rather than requiring type declarations, but that is not _directly_ related.
That's not totally static, but within the definitions of the article, it might be good enough to count as "type safety" without falling into the "null type safety" category.
I don't think it's any less precise a concept. Java's type system is less strict than Haskell's, has similar strictness to Python's or Ruby's (while being quite different in its static typing), and is more strict than C's or Perl's.
You have type-safe casts in Haskell, for instance, where the failure is reflected at runtime.
cast :: (Typeable a, Typeable b) => a -> Maybe b
In C++ with RTTI, IIRC it returns null. That's also "failure at runtime" (of a sort), but you have to check it.
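Usage of the Haskell cast looks like this (`asInt` is my own wrapper around the real `Data.Typeable.cast`):

```haskell
import Data.Typeable (Typeable, cast)

-- cast succeeds exactly when the source type and the target type
-- coincide; the Maybe result forces callers to handle a failed cast
-- explicitly rather than silently proceeding with a null.
asInt :: Typeable a => a -> Maybe Int
asInt = cast
```

So `asInt (3 :: Int)` is `Just 3`, while `asInt "three"` is `Nothing`, and the type checker won't let you use the wrapped value without unwrapping it first.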
How can that be so, if Python and Ruby provide escape hatches that allow C libraries to be called?
Perhaps what is meant that in the absence of using certain features, the core semantics is well-defined.
That said, the article did say that Java was type safe, and you can write that same expression in Java. I don't recall whether the result is undefined in Java.
Interesting. But is it meaningless? Isn't its very meaninglessness its meaning, since it has come to be known, by being bandied about far and wide, as an example of meaninglessness?
Maybe it's something more. Clearly, for all X, X cannot be at one and the same time both green and colorless. Similarly, ideas do not _literally_ belong to the class of things which can sleep. Finally, sleeping as an activity is never done furiously, as fury signifies intent, and whatever else may be true when one is asleep, it is certainly true that one lacks intent when asleep.
Thus packed into that sentence are three ways in which it _cannot_ be the case or refer to something _actual_ in the world. Is it in this way that it fails to have meaning and so will always continue to fail to have meaning?
> Let’s deconstruct this phrase and define its parts, considering the second part first.
Let's use the word analyse rather than deconstruct when that is the word needed.
1. "Isn't its very meaninglessness its meaning" is a confusion of reference and sense. The quoted sentence "Colorless green ideas sleep furiously", interpreted as an object in its own right, holds meaning as a common example in syntax and semantics, but if you "remove the quotes" and interpret its sense then we run into problems... and obviously that's what everyone is talking about when they bring it up.
2. "Is it in this way that it fails to have meaning and so will always continue to fail to have meaning?" Yep. If we assert that we're stating this sentence as an assertion or judgement about reality, then we would like to evaluate its "truth". Even without getting too mystical, it's a long-established sense of the definition of truth that it ought to somehow correspond with reality, and any notion of truth which does not is hypothetical or meaningless. This is encapsulated in Dummett's C and discussed by Per Martin-Löf here: https://michaelt.github.io/martin-lof/Truth-and-Knowability-...
Well, the way I use words it is not meaningless. It has a pretty clear meaning. Admittedly, that meaning may not describe any possible state of the world (although I could perhaps argue that), but I can only spot that because I know what the phrase means.