The Unsound Playground: Java and Scala's Type Systems are Unsound (livecode.ch)
206 points by theemathas on Nov 27, 2016 | 124 comments



It was unsound from the first version with respect to arrays. If B is a subtype of A, then Java allows you to pass as an argument (or return) an array of B where an array of A is specified in the type signature.

To see why this is broken, consider a method that takes an array of Animal and sets the first element to be a Cat instance. Java's type system will allow you to call this method with an array of Dog which will blow up at runtime on the assignment to the array element.

I noticed this immediately when I first started using Java (version 1.0.4!) as I had just read the classic 1985 OO type systems paper by Cardelli. I didn't have an OO background and was surprised that a "modern" language like Java would have this flaw in its type system given that the problem was well known years before.


This, imo, is actually a bigger deal than generics. Since type erasure does all kinds of "magic", it doesn't surprise me that some (T)-style casts end up somewhere in the OP's bytecode.

But the fact that the following is unsound (and has been for a while) will always remain a mystery to me:

  public class Main {
  
  	class Animal {}
  	class Dog extends Animal {}
  	class Cat extends Animal {}
  
  	public static void main (String[] args)  {
  		Main x = new Main();
  		Dog[] doges = new Dog[10];
  		x.t(doges);
  	}
   
  	public void t(Animal[] bar) {
  		bar[0] = new Cat();    // throws ArrayStoreException at runtime: bar is really a Dog[]
  		bar[0] = new Animal(); // also breaks, for the same reason
  	}
  
  }
The array's type seems completely tractable, so this could be caught by the compiler.


> The array's type seems completely tractable, so this could be caught by the compiler.

This isn't an error, it was a design decision: https://en.wikipedia.org/wiki/Covariance_and_contravariance_...

You could call it bad design, but fixing it would (imo) just make Java even worse than it already is.


My point had nothing to do with the merits of covariant types, just tractability. It seems that the array's type is tractable so the compiler should (at least) give a warning.


This would be fine if t's array bar only allowed mutation of the objects within the array, but didn't allow mutation of the array itself. Treating a Dog as an Animal works fine; however, you can only treat a Dog[] as a const Animal[], not a mutable Animal[].
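A minimal sketch of that distinction using generics instead of arrays (the class names here are just placeholders): a `List<? extends Animal>` plays the role of the "const Animal[]" view - reads are allowed, writes are rejected at compile time.

    import java.util.Arrays;
    import java.util.List;

    class ReadOnlyView {
      static class Animal {}
      static class Dog extends Animal {}
      static class Cat extends Animal {}

      // The generics analogue of a "const Animal[]": a list of any Animal
      // subtype may be passed in, but this method cannot insert into it.
      static Animal first(List<? extends Animal> animals) {
        // animals.add(new Cat()); // would not compile: writes are forbidden
        return animals.get(0);     // reads as Animal are always safe
      }

      public static void main(String[] args) {
        List<Dog> dogs = Arrays.asList(new Dog(), new Dog());
        System.out.println(first(dogs));
      }
    }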


A joke from my childhood:

  Man goes to doctor, 
  and says:
  
  "Gee, doc, it 
   really hurts 
   when I do this."

  Doctor replies:

  "Then don't do that."


Two men stand on a bridge. The first leans on the railing, which gives way and sends him over the edge.

Horrified, the second looks around and spots a civil engineer. Engineer shrugs and says, "Don't do that."


Ha ha. Funny joke. Now let's get back to discussing the future applications of JavaScript in air travel...

;).


"Don't do that"


On the positive side, these kinds of errors tend to blow up immediately when the code is executed. Assuming that you have a semi-decent test suite, it won't make it to production. The language type system won't save your butt but your unit tests will.

There is a deeper discussion here; if you are writing good automatic tests anyway, which I hope you are, then of how much value is a rigorous type system? :)


I agree that a comprehensive test suite is valuable, but I don't think it's safe to assume that errors like this will always be caught, no matter how good your tests are. It's not reasonable to write a test for every possible situation that could ever occur. In fact, it's a giant waste of time.

> of how much value is a rigorous type system?

It saves a lot of time on those tests you won't have to write and maintain just to ensure that you're getting your types correct. Strong static types are also good documentation. Some type systems (e.g. Haskell) can even explicitly indicate whether a procedure is pure or has effects. Static types also enable dispatch that can be resolved at compile-time, resulting in more efficient execution. And the list goes on.

Of course, advocates of languages without static types have heard and dismissed all these arguments before. I guess it's really just a matter of taste.


I've heard both sides of the static types vs dynamic types argument a lot of times. I don't think there will ever be any kind of universal agreement but I find the discussion to be interesting.

For the record, I personally think a language like Typescript hit a sweet spot. Not a lot of mandatory ceremony but you have the choice to add types where it matters, e.g. at interface level. Also the duck typing saves you from inventing class names for all kinds of almost-primitive types.


> if you are writing good automatic tests anyway, which I hope you are

"Don't write tests. Generate them."


Another fundamental flaw in my opinion is the cats-and-dogs analogy frequently used to highlight the use of inheritance in OO.

Cats and dogs being mammals is a taxonomy. It's data, not code. Unfortunately, I see OO programmers coding taxonomies all the time just for the sake of it, ending up with deep inheritance trees of classes without ever taking a step back and asking themselves why they do it in the first place (e.g. to share functionality or to reuse code).


Taxonomies are common in the real world, so it is natural to try to model them. If the language lacks sum types (the mathematical dual of product types, a.k.a. tagged unions) then programmers are forced into using inheritance to encode it. So I blame the language, not the programmers.


I understood JP's critique to be more about mixing behaviour with data than about alternative ways of modelling inheritance.


But Java strongly encourages the mixing of data and behaviour. There is only virtual method dispatch and switch on ints/enums for efficient case analysis. We therefore need complex encodings like the Visitor pattern, if we are to avoid conflating data and behaviour. So again, I blame the language not the programmers.
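For what it's worth, a minimal sketch of that encoding with hypothetical shape types (not from the article): the data is a closed set of cases, and each behaviour lives in a visitor rather than in the classes themselves.

    class Shapes {
      interface ShapeVisitor<R> {
        R circle(double radius);
        R square(double side);
      }

      interface Shape {
        <R> R accept(ShapeVisitor<R> v); // all case analysis goes through here
      }

      static Shape circle(double r) {
        return new Shape() {
          public <R> R accept(ShapeVisitor<R> v) { return v.circle(r); }
        };
      }

      static Shape square(double s) {
        return new Shape() {
          public <R> R accept(ShapeVisitor<R> v) { return v.square(s); }
        };
      }

      public static void main(String[] args) {
        Shape shape = circle(2.0);
        // The behaviour (area) stays out of the data classes entirely.
        double area = shape.accept(new ShapeVisitor<Double>() {
          public Double circle(double r) { return Math.PI * r * r; }
          public Double square(double s) { return s * s; }
        });
        System.out.println(area);
      }
    }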


Is this a defect or a tradeoff? Passing arrays of subtypes seems useful to me. I never experienced such an error, but I suspect this is quickly caught and shouldn't be a big problem.

But then I don't know anything about type system theory.


Without generics (this is java 1, remember) it seems like a reasonable tradeoff. Otherwise[1] you can't have array-modifying polymorphic code at all - how would you write a "reverse this array" function, for instance? You'd need one for every kind of array you want to reverse. Or a "copy" function - even if you're given a properly-sized-and-typed array to put the values in, you can't make a function that'd work on anything except a single type.

You could force every array-function-caller to cast their args to e.g. `Object[]`, but that seems like it'd require more work for even less safety. Though I'd love a read-only concept in Java, for read purposes :|

[1]: I think. If I'm wrong, someone please correct me!
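For comparison, once generics arrived, a single reverse covers every reference-array type; a minimal sketch:

    class ArrayUtil {
      // One reverse for all reference-array types; no covariance needed.
      static <T> void reverse(T[] a) {
        for (int i = 0, j = a.length - 1; i < j; i++, j--) {
          T tmp = a[i];
          a[i] = a[j];
          a[j] = tmp;
        }
      }

      public static void main(String[] args) {
        String[] words = {"unsound", "type", "system"};
        reverse(words);
        System.out.println(String.join(" ", words)); // prints: system type unsound
      }
    }
(Primitive arrays still need their own overloads, which is its own erasure-related wart.)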


You could define what an array accepts vs returns separately (input vs output). Admittedly, you're pretty much into generics (good ones, not what Java has) at that point.

Edit: Could use a better word for "good" that sounds less dismissive, but I couldn't think of one. Java's generics are lacking in certain areas, unfortunately.


Generics in Java are pretty complicated already, so much so that very few developers fully understand them. I'm not sure the gain in complexity would be worth the added safety.

I'd like to see examples in languages where this is possible, if there are any?


It is both.

Passing collections of subtypes where a read-only collection of supertypes is expected by the receiving code is fine, because the receiving code can't do anything that would break type safety.

But Java decided to only have a single array type, as a matter of expediency. But they also wanted covariance, because, as you point out, sometimes it's just convenient - more expediency. And so the trade-off was to enforce safety at runtime.

It's still a defect in a sense that the design itself could be made sound, if there were a read-only array reference type in Java.


Read-only is too strict. Can't reverse the array, or sort it.


Sure you can - you just need to copy it.

And I didn't mean to imply that array references should always be read-only. What I meant is that there should be a way to say "the referenced array cannot be mutated through this reference", like const* in C. 90% of the time, when you pass or return arrays, this is exactly what you want.


The existence of arrays at all is a trade off. The language would be more type safe and more consistent with OO best practices without arrays and using dynamic random access lists instead. But that would be too slow for some applications.


I think it is fundamental.

Just because D isa B it does not mean list(D) isa list(B).


What we need is

  listContaining(Dog) isa listContaining(Animal)
  listAccepting(Animal) isa listAccepting(Dog)


Which is precisely the idea of covariant getters / contravariant setters in some sounder languages.


IIRC, Scala has covariant/contravariant generics

(apologies for the bad terminology, I'm not too familiar with the details)


Support for "<? extends X>" and "<? super X>" is support for covariance / contravariance; invariant would be "<X>".

basically. the same concept applies in a number of areas, with varying support (e.g. Java doesn't support contravariance in its method overloading: http://stackoverflow.com/a/2501513/16295 )
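A small sketch of both wildcards together (the usual producer-extends / consumer-super pattern; the names here are made up):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    class Pecs {
      // src is read from (covariant), dst is written to (contravariant).
      static <T> void copyInto(List<? super T> dst, List<? extends T> src) {
        for (T item : src) {
          dst.add(item);
        }
      }

      public static void main(String[] args) {
        List<Integer> ints = Arrays.asList(1, 2, 3);
        List<Number> nums = new ArrayList<>();
        copyInto(nums, ints); // a List<Number> happily accepts Integers
        System.out.println(nums);
      }
    }
Calling it the other way around (copying Numbers into a List<Integer>) fails to compile, which is exactly the compile-time check that covariant arrays never had.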


> Is this a defect or a tradeoff?

I'm curious what line of thinking could prompt this question. Not all trade-offs are good choices just because they're trade-offs! Maybe it has something to do with the recent spate of language design choices justified with "it was a trade-off", as if the words "trade-off" were an incantation that puts a technical decision above criticism.


> Not all trade-offs are good choices just because they're trade-offs.

Indeed. But they're not automatically bad choices either. Are you saying this one is a bad trade-off?


Both defect and trade-off. I can definitely see value in being able to pass an array of Dogs and treat it as an array of Animals, but it means that the array of Animals is not really an array of animals (because certain properties of a real array of Animals actually cause the code to break) and as a result the language is a little dirtier for it. Like, that's pretty horrible to be honest.

What you really want in that instance is to be able to easily and explicitly cast the Dog[] to Animal[] before calling the method. A better language would make this easy instead. I think Java's Vector type can do it, so it surprises me that they allow that sort of type weirdness with arrays.


I think you're wrong about Vector


Yep, it's a huge hole. Arrays in Java should be Object[] or primitive[], not Animal[]. That's one of the reasons that object arrays should be avoided (something like ArrayList should be used instead).


Don't think the scenario perfectly fits because before generics you couldn't have a method that takes in an Array of Animal, just an Array.

Could a language resolve this issue without generics or some other contract feature (which Java did later add), or without limiting the scope of array definitions and their use?

May be good to note that Array<Dog> doesn't actually extend Array<Animal>, so there shouldn't be any polymorphic usage between them, and the return statement and method call scenarios are compile time errors with proper generics.


> before generics you couldn't have a method that takes in an Array of Animal, just an Array.

Arrays aren't generic types like ArrayList. Arrays are core language constructs parameterized by their element types. Array types are not erased in Java.
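A quick way to see that difference at runtime (a minimal sketch):

    import java.util.ArrayList;

    class ErasureDemo {
      public static void main(String[] args) {
        // Array element types survive at runtime...
        System.out.println(new String[0].getClass());  // class [Ljava.lang.String;
        System.out.println(new Integer[0].getClass()); // class [Ljava.lang.Integer;

        // ...but generic type arguments are erased.
        Class<?> strings = new ArrayList<String>().getClass();
        Class<?> ints = new ArrayList<Integer>().getClass();
        System.out.println(strings == ints); // true: both are plain ArrayList at runtime
      }
    }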


Ehm, no?

You can’t pass Dog[] where Animal[] is expected; it’s one of the more interesting properties of the Array, because it’s the only case where that doesn’t work.

For the exact reason you mentioned.



Looks like you missed the gist of the post, then. Code it up, you'll see it in action very clearly.


I believe you missed the GP's point. It's not that "you can't because the language stops you", it's that "you can't because it doesn't work".

That is, you're agreeing, not disagreeing.


I used IntelliJ to convert the Java code into Kotlin and the compiler came up with the following errors. It does seem to do a better job of handling types:

Error:(3, 39) Kotlin: Type argument is not within its bounds: should be subtype of 'U'

Error:(5, 21) Kotlin: Type inference failed: 'B' cannot capture 'in T'. Type parameter has an upper bound 'U' that cannot be satisfied capturing 'in' projection

Error:(5, 28) Kotlin: Type mismatch: inferred type is Unsound.Constrain<U, in T>? but Unsound.Constrain<U, ???> was expected


In related news: Radu Grigore "Java generics are Turing complete" POPL'17 Preprint: https://arxiv.org/pdf/1605.05274


Scala is broken by implicit constraints as well.

Makes me wonder how Dotty would hold up under similar conditions, or whether this sort of calculus on type constraints cannot guarantee a correct return.


The soundness of Dotty (or at least of DOT) was formally proven. In the process they also demonstrated exactly what makes Scala 2 unsound. Links: http://dotty.epfl.ch/blog/2016/02/03/essence-of-scala http://dotty.epfl.ch/blog/2016/02/17/scaling-dot-soundness.h...


A sound type system is one that prevents illegal operations. Using a null reference is an illegal operation, and Java doesn't prevent this. Hasn't Java's type system been unsound all along?


A sound type system is one that enforces certain runtime behaviors at compile-time.

Using a null reference is not a behavior either the Java or Scala type systems promise to prevent, just as Rust's type system does not promise to prevent panic!.

Java's type system is unsound if, say, without casting or other "unsafe" operations, we could have a compile-time Integer that is actually a runtime String.


So if I'm expecting my variable to contain a String, and I try to do Stringy operations on it, what difference does it make to me if that fails because it actually contained an Integer or actually contained a null? My program is just as meaningless either way. Maybe from a standpoint of what's officially promised by the runtime, it's sound, but that feels to me like moving the goalposts. In terms of useful guarantees, Java's type system is, let's just say, kind of useless, if "unsound" is technically incorrect.


(1) I 100% agree that null is a terrible, terrible idea https://www.lucidchart.com/techblog/2015/08/31/the-worst-mis...

(2) It's all relative. Most languages allow lots of invalid operations on data.

What if I'm expecting my variable to contain a non-empty list and I try to do non-empty list things to it and fail? Dependently-typed languages like Idris say everyone else is moving the goal posts by accepting this behavior in their type systems.


It's not necessarily a matter of "do these things behave the same at runtime on the JVM I ran my program on", it's a matter of semantics -- what these programs mean -- and that actually does have a direct impact on what happens at runtime, if you widen your horizons a little. Focus on the cause, not the effect. It's not really moving the goalposts, considering nobody was talking about dereferencing a `null` String in the first place, least of all, the paper itself. `null` is crucial to their argument, but injecting your own impressions about how "it's nothing new, because I had a runtime error before, take that, dumb dumb academics" is mostly irrelevant posturing, it's not what the paper is about.

Imagine suddenly if your setting was changed, and e.g. you were not running Java code on the JVM, but directly compiled to native code. It's very well defined what happens when you try to use a `String` but it's actually a `null`, even at the bare metal level - you get a NullPointerException. Easy. Catch the exception somewhere, or don't, it's all very straightforward.

And despite what you said in your post -- to the contrary, a program which dereferences `null` is completely well defined and totally meaningful! It is not nonsense from a semantics point of view. Does a program which dereferences null do anything useful, all on its own? No. But it absolutely is a program that has meaning and you can describe what will happen by looking at it (roughly speaking), because its behavior is fully defined.

What happens at the bare-metal level when you try to use this Unsound module to convert an `Integer` to a `String`? Well... Bad things will probably happen, up-to and possibly including things like memory corruption or outright termination, depending on how the implementation works. Maybe there is an optimization so small `Integer`s are represented with a different object layout than a large `String` is, at runtime. The runtime system will likely clobber itself if it tried to directly coerce these two different objects, with different sizes, fields, layouts, etc -- because like most typed languages, it would likely rely on the object type to describe memory layout (for things like the GC, so it knows where to find pointers). Classic memory corruption scenario.

The JVM itself is not broken by this problem, so it is "correctly" handled for most practical purposes, but only by that coincidence that the JVM retains type safety here. Despite that, there is still no true "meaning" to this program. It is, like the prior program, useless -- but the prior program was meaningful. This program is not.

Similarly, it's necessary to understand these kinds of corner cases if we want to do things like formalize compilers and semantics, and find better points in the design space that can ameliorate these problems (either by tooling, language revisions/changes, or entirely new languages).

For most end users, you're basically right, things like this will "only" manifest as a runtime error, and a runtime error is a runtime error is a runtime error. But the paper is about the actual semantics of Java the language, which is a much more broad topic, as opposed to the fact that yes, there's a new way to get a runtime exception.


I wouldn't say that the JVM works on this by coincidence. Java compiles generic code down to casts, which the JVM typechecks. The JVM is sound. One of the constraints when introducing generics was not changing the JVM.


> Does a program which dereferences null do anything useful, all on its own? No.

Yes it does. It produces an exception, which contains a stack trace. I've actually used that, when I had a program that was reaching some code, and I couldn't figure out what the path it took to get there. It was easier to create an exception, catch it, and print the stack trace than to fire up a debugger and set a breakpoint.


I'm not sure I understand the term "illegal operation" here. I would assume it means "an operation that isn't allowed", but in the case anything that is allowed (e.g. using null) would by definition not be illegal. Can you clarify what you mean by this term?


Dereferencing null is allowed at compile time but disallowed at run time; that's what makes it an illegal operation.


Ah, so the term in this case is just used to mean "something that causes a runtime error"? I think I misunderstood because the comment I was responding to said "using a null reference", which to me doesn't necessarily imply dereferencing it (although that's open to interpretation of course).


I think s/he means operations that would cause a runtime type error/exception by "illegal"


Illegal operation doesn't seem like an explicit enough definition. I'm not knowledgeable about type systems, but I figured a type system would be sound if there's no way to construct code where the result is undefined.


Exception in thread "main" java.lang.Error: Unresolved compilation problem: The method upcast(Unsound.Constrain<U,B>, B) in the type Unsound.Bind<U> is not applicable for the arguments (Unsound.Constrain<U,capture#1-of ? super T>, T)

at Unsound.coerce(Unsound.java:13) at Unsound.main(Unsound.java:17)


You're using the default example, but the error is mostly a red herring. The paper explicitly acknowledges this problem, Section 3 pg. 3:

> ... As a consequence, the method type-checks. Furthermore, even though type-argument inference in Java is undecidable, this type argument is identified by javac, version 1.8.0_25, which consequently accepts and compiles the code in Figure 1. However, constraint solving is generally a non-deterministic process, and type-argument inference itself is non-deterministic [35], so the Eclipse Compiler for Java, ecj version 3.11.1.v20150902-1521, and the current compiler for the upcoming Java 9, javac build 108, fail to type-check this code...

Use the `Unsound9` example, which should work in multiple versions. These discrepancies are merely due to some deficiencies in the way type inference happens in Java, but the unsoundness principle is still valid, and you can still construct invalid `coerce` functions.


What version of Java are you using? My understanding is that the Java language specification says that this code is valid. Some versions of Java "correctly" accept this code as valid, and some versions "incorrectly" don't accept this code, as an implementation quirk.


Exception in thread "main" java.lang.Error: Unresolved compilation problem: The method upcast(Unsound.Constrain<U,B>, B) in the type Unsound.Bind<U> is not applicable for the arguments (Unsound.Constrain<U,capture#1-of ? super T>, T)

at Unsound.coerce(Unsound.java:13) at Unsound.main(Unsound.java:17)

I get a compilation error when trying this in Eclipse on Java 8, but strangely, when I compile it outside of Eclipse, it goes through.


The headline is misleading as the type system is sound, but the general idea of generics is that the type bindings and constraints should result in a guarantee that there won't be any failing casts at runtime. And they're interpreting the failure to catch all detectable errors statically as a sign of unsoundness, which is not a correct assertion.

The Java language's half-assed generics implementation works by type erasure - so everything is boxed to Object. Because of that erasure my guess is that they're not handling inner class type binding correctly (maybe they're not able to? I don't know their compiler internals - fundamentally it _should_ be possible). If anything i'd argue that this is a compiler bug.

Anyway, the end result is a type safe (and entirely sound) runtime and language -- if it were not sound it would be possible to write code that _should_ trigger a type failure, either at compile or at runtime, but that doesn't. That is what it means to have an unsound type system; for example, all C-derived languages have unsound type systems. Remember that Java has no guarantees of full static safety (the ability to downcast Object to anything ensures that they can never be fully statically typed), so as long as you're guaranteed that the runtime will never let an invalid cast or conversion happen, it's still correct. Another way of triggering this would be to use introspection to instantiate an instance of a generic class. Because the VM is unaware of generics you'll just get an Object interface through which you could trigger the same faults.

Footnote: .NET actually gets generics right, it's not done through erasure but actual correct type bindings. The VM is itself aware of parametric types and instances of said types and so can statically verify the IL itself, so even if you make an unsafe language that targets .NET the VM would reject ill-formed/unsound IL before running it: this is the VM equivalent of a static failure: it's not waiting for a dynamic cast to fail at runtime.


I don't see the headline as misleading at all. There are two type systems in Java: the JVM's run-time type system, and Java's compile time type system. It's the latter that has been shown to be unsound. As the paper notes, it's fortunate that generics have been implemented with type erasure, because that is what saves the JVM from being unsound, too.

As for the problem itself: it really has nothing to do with inner classes (they are static in the example!). The problem is with this line:

    Constrain<U,? super T> constrain = null;
which leads the type system to assume that there is a type that is a superclass of T which is also a subclass of U (because of the 'B extends A' bound in the 'Constrain' class) - even though no such type could possibly exist! The evil trick is in the 'null' - even though a type with the stated constraints is impossible, and so no actual 'real' value of this generic type could exist, 'null' is part of all types, so the compiler can't see the problem.

It really is a problem in the type system as defined in the standard, not just a compiler bug.


Yes, it's all about nulls. I was curious to break it down from a Curry-Howard correspondence (Types as Propositions) point of view. The line

  static class Constrain<A, B extends A> {}
reads as

  for all propositions A and B, such that B implies A, proposition Constrain<A, B> holds.
While, the line:

  Constrain<U,? super T> constrain = null;
kind of says:

  Admit that there exists type X, such that T implies X, for which Constrain<U, X> holds. 
Even though no proof (read constructed instance) that X implies U is provided.

If we look at concrete execution instance with String and Integer:

Integer implies Number, Comparable<Integer>, Serializable, Object. None of those imply/extend String, which is required for a truthful Constrain<U, ? super T> proposition.

Basically, ex falso quodlibet :)

EDIT: spacing, formatting, wording.


Good point with the null, they have an example without the null keyword which is an even dirtier trick (because it is still null).

https://github.com/namin/unsound/blob/master/Nullless.java


Do you have a spec reference for that? (Not trying to be a dick - I just want to understand what the type system believes it should be doing here, as the value being assigned to a reference should not influence the type system behaviour.)


This is explained in the article in a bit more detail. The point is not that the type system does something with the actual value - the point is that it would be impossible to create any code that results in such a value (other than null), since then you would have to provide an actual type that is somewhere between Integer and String - and that would obviously not pass the type checking. Using null avoids that.

Section 4.5 in the language spec deals with parameterized types; 4.5.1 deals with wildcards. Of course, there is no example in the specs that clearly points out what happens in this example - otherwise this wouldn't have remained undiscovered for so many years :)

If you can show, using the rules in the spec, that the sample program shouldn't pass type checking, then you'll have proved the article wrong :-)


The JVM's type system is sound. Java's type system is not (in the sense that a program with no casts can lead to a ClassCastException). This is possible because in the process of compiling Java to the Java Bytecode, some casts (that should, in theory, never cause ClassCastExceptions) are inserted.

Relevant quote from the paper:

> When executed, a ClassCastException is thrown inside the main method with the message “java.lang.Integer cannot be cast to java.lang.String”. The reason is that the JVM does not directly support generics, implementing them through type erasure and type casts. This is fortunate since it causes the JVM to catch the error before it can directly cause severe problems, such as accessing memory associated with a field that is not actually present (though it might be possible that it can indirectly cause problems).


Java's type system is _sound_ as it guarantees that you will never use a variable of one type as another incompatible type. That is the soundness guarantee; it cannot get stronger than that, as the language allows down casts, and those are fundamentally unverifiable at compile time. Therefore the soundness of the complete type system /must/ consider the runtime behaviour.

This is literally a "the type system is unsound if we don't consider the required runtime semantics of the type system" paper.

Even ignoring this, the example they give is something that the type system correctly rejects, but what they want is a compile-time failure, which should be possible. I would say that, if anything, this is a compiler bug.


> Java's type system is _sound_ as it guarantees that you will never use a variable of one type as another incompatible type.

Is it though? I always thought it was unsound due to, at a minimum, mutable covariant arrays. It's definitely possible to have a type error at runtime without using a cast.


I think they are referring to the fact that you will get a type error at runtime, instead of silently corrupting data or other undefined behavior.


But the static type system is what is being evaluated here. This is like saying that python has a sound type system because the interpreter throws type errors when you fuck up.


Java allows this

  Integer a = 0;
  Object c = a;
  String b = (String) c;
This code compiles but produces a runtime exception. The problem here is that you are allowed to cast; this is perfectly valid code, and it is the developer's responsibility to verify that the cast is actually valid.

The problem with the code in the paper is that Java 1.5 introduced generic types, which are in principle meant to ease such situations, and it appears that there are some corner cases where they do not work as expected (to the more general populace).

For example consider this code

  static class Upcast {
    static <T extends String> String cast(T t) {
      return t;
    }
  }

  Integer a = 0;
  Object c = a;   
  String b = Upcast.cast(c);
This code does not compile, as we explicitly stated that we want a type that extends String. Writing the last line as

  String b = Upcast.cast((String)c);
will make the code compile, but it will produce a compiler warning and runtime exception.

The paper demonstrates a more complex indirection to avoid the compiler warning but still produce a runtime exception.


Just to further clarify, an _explicit_ cast operator is not necessary to create an implicit cast that might fail. The dumbest case is due to the covariant array semantics that say I can do this:

  Object[] foo = new String[10];
  foo[0] = new Integer(1);

which is the type error, even if the reported error is something akin to an "array store error" or something (it's been a long time since I did Java programming :D)


Isn't the fundamental problem here the `?` wildcard type? It allows expressing impossible types like "a super-integer that extends string", and the compiler just trusts you such type exists if you say so.

Of course the whole generics system in Java is basically just "syntactic sugar" to save you from writing a ton of casts, so it will just break at run-time.
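Roughly, the sugar boils down to something like this (a hand-written sketch of the idea, not the actual javac output):

    import java.util.ArrayList;
    import java.util.List;

    class Sugar {
      @SuppressWarnings({"unchecked", "rawtypes"})
      public static void main(String[] args) {
        // What you write:
        List<String> names = new ArrayList<>();
        names.add("duke");
        String first = names.get(0);

        // Roughly what it erases to: raw types plus an inserted checkcast.
        List raw = new ArrayList();
        raw.add("duke");
        String firstAgain = (String) raw.get(0);

        System.out.println(first + " " + firstAgain);
      }
    }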


With most type systems there's never a problem with being able to express nonexistent types. In some type systems, such "nonexistent types" itself are useful tools.

(Where by "nonexistent type" I mean "a type for which there cannot be an instance". "imaginary type" might be a better term, since the type itself exists in the type system.)

The problem here lies in the fact that Java lets you create an instance of any type you can write the signature for via `null`. This lies in conflict with the ability to specify nonexistent types (which is a relatively common ability in type systems and might not even be possible to excise without causing significant complexity).


Actually shamsmall (in the thread) shows that it is a compiler bug, because their compiler does reject it at compile time.


That doesn't mean anything, really. Eclipse's compiler does not guarantee 1-to-1 correctness/equivalence with `javac` or any other compiler, so it's entirely possible that Eclipse is wrongfully rejecting this example, when it should actually accept it like `javac` does. So it is a "compiler bug", but in the other direction.

The paper actually explicitly acknowledges this, Section 3 pg. 3, if you read closely. In fact there is an even bigger danger lurking, which is that Java has non-deterministic type inference. It's not even necessarily a "compiler bug" that the first example, Unsound.java, is rejected:

> ... As a consequence, the method type-checks. Furthermore, even though type-argument inference in Java is undecidable, this type argument is identified by javac, version 1.8.0_25, which consequently accepts and compiles the code in Figure 1. However, constraint solving is generally a non-deterministic process, and type-argument inference itself is non-deterministic [35], so the Eclipse Compiler for Java, ecj version 3.11.1.v20150902-1521, and the current compiler for the upcoming Java 9, javac build 108, fail to type-check this code...

Non-determinism means something very specific here: it does not necessarily mean the compiler randomly rejects something based on whether it's a random day of the week or because `rand() % 4 == 0`. It means the compiler has no sound method for inferring the type of a value, given multiple choices, so it "just" picks one, in an ill specified way. It could pick the first one or the second, and whichever type it picks will influence further constraint generation/inference. See this paper, section 7.2:

http://www.cs.cornell.edu/~ross/publications/tamewild/tamewi...

The Eclipse compiler only coincidentally rejects one of these programs due to this type inference problem. Other valid, unsound programs are accepted by it, and these programs are unsound due to the same basic principles the original (rejected one) used. The Unsound9.java example should still work.

The fact a compiler is rejecting a valid program (a bug in the eclipse compiler) is essentially orthogonal to the fact the program it rejects may show the type system is unsound (a bug in the language itself, so to speak).

Furthermore your parent post is still simply wrong. You say:

> Java's type system is _sound_ as it guarantees that you will never use a variable of one type as another incompatible type.

But this is literally the point of the paper. They cast Integer to String, you cannot get this through downcasting alone. It is unsound. They are able to write a function of type "a" to "b" for any types "a" and "b". It's almost the textbook definition of being able to break the type system, from the viewpoint of Java-the-language. This same "cast anything to anything else" is also considered a breaking of the type system in Haskell, too, and being able to write a similar "coerce" function is Very Bad.

In fact, it's very similar in theory to the way you might construct or leverage such a "bug" in Haskell. They are essentially constructing an argument to the compiler -- in the form of a type, which is how the compiler thinks -- that there is an equality between two arbitrary types A and B, so we can coerce between them. The compiler needs some evidence that this equality, this argument that A and B are the same, is legitimate. Otherwise, it rejects the argument, because it's not that gullible (it rejects the program as "this program failed to type check"). Because the argument to the compiler is made in the form of a type, the compiler considers "evidence" for that argument, to be a value of that type. So if your type makes the argument that "Integer and String are equal", then you need to represent that argument as a type, and create evidence for it by creating a value of that type. Then the compiler thinks your argument is legitimate. (This is a core central component to e.g. constructive mathematics/constructive computerized theorem proving)

`null` inhabits every type, so every type contains `null`. Even the type that says "String equals Integer" has `null` as a value. Thus, we can "prove" to the compiler that yes, we can turn an Integer into a String, we have the argument (the type) and the evidence (the value) to prove it! But the evidence is actually bogus.

I honestly suggest you give the original paper and the surrounding references a good reading. It all seems very sound and straightforward to me as an argument (with a bit of PLT background), and frankly I can't make heads or tails of most of your complaints in this thread, though if you're unfamiliar with the field, teasing out the subtle parts is probably a bit difficult.


> That is the soundness guarantee, it cannot get stronger than that as the langauge allows down casts, and those are fundamentally unverifiable at compile time.

An explicit downcast is allowed to fail at runtime. But if you can get a runtime cast failure in code that did not contain any cast, that's a soundness bug.


.NET doesn't get generics right, having type erasure is among the best features of Java.

The actual reason for why you feel the need for reification is because the type systems of both C# and Java are so poor that you end up in situations where you really need to do "isInstanceOf" checks and castings. But people are missing the forest for the trees here, because these are essentially holes in the type-system, only needed because the languages are not expressive enough. Because really, you don't really need to check whether something is an instance of "List<int>" at runtime, in a static language, unless your language sucks ;-)

As to why type erasure is among the best features of Java, that's because with reification it gets hard to support other languages. It's kind of ironic, but .NET CLR was once marketed as the multi-language runtime, but time has been on the JVM's side in that regard.

For example F# doesn't support higher-kinded types because the CLR doesn't support higher-kinded types and introducing support for HKT would mean for the F# compiler to basically do type erasure by itself, which would place it at a serious disadvantage versus the host language in terms of performance. Scala supports higher-kinded types, which means you can express in Scala types such as the Monad and the Applicative, which are essential for FP in static languages, without sacrificing performance or Java interoperability. Even OCaml supports higher-kinded types.

And this isn't just about static languages. Pick any dynamic language you can think of, take your pick from Ruby, Python, Clojure and Javascript and compare the performance of the implementations on the JVM versus the CLR. OK, it's sort of an unfair comparison, given the JVM implementations have had more resources invested in them, but you know, one of the reasons for why dynamic languages work better on top of the JVM is because the JVM's bytecode is basically dynamically typed, with the last barriers being brought down along with invokedynamic. But by introducing generics reification on the other hand you end up with more static info in the bytecode that dynamic languages have to work around.

The only clear advantage to reification is the specialization for primitives that the CLR is doing for things like the standard collections. But you know, this can be a job for the compiler, you don't actually need the runtime to do specialization (see Scala's miniboxing or Dotty) and even if the runtime does specialization, you don't need it to do .NET-like reification (at least the Java engineers claim that, having plans to do primitive specialization for Java 10 or something).


> having type erasure is among the best features of Java

Until you try to use something like a List<int> without wasting incredible amounts of memory. This is even worse in a VM with custom value types (like the CLR), which also have to deal with boxing.

> For example F# doesn't support higher-kinded types because the CLR doesn't support higher-kinded types and introducing support for HKT would mean for the F# compiler to basically do type erasure by itself, which would place it at a serious disadvantage versus the host language in terms of performance.

So the solution is to push the performance disadvantage into the VM so all languages are equally screwed?


My original message contains your acknowledgement and answers to your question. Please re-read it.

Cheers,


F# already does type erasure in some circumstances with Type Providers (erased vs. generated types).

F* runs on the CLR and has higher kinded polymorphism.

Also, while the JVM multi-language story is terrific, it's not like the CLI is severely wanting since it also has several world-class language implementations:

* C#

* VB (criminally underrated; it's not just C# with another costume)

* F#

* F* (real dependent types)

* C++/CLI

* Nemerle (like C#, but with real macros and pattern-matching)

* PowerShell

* IronPython


> Because really, you don't really need to check whether something is an instance of "List<int>" at runtime, in a static language, unless your language sucks ;-)

So, I hate reflection with a burning passion, but this is wrong. You may not like doing it, but there are tons of legitimate use cases for RTTI.

My primary case is for runtime "light-up" of advanced features. For instance, if you deploy an application to a series of platforms that provide non-overlapping sets of capabilities, a fantastic way of dealing with this problem is to use RTTI to check whether the instance of the object is some specialized case at runtime, then cast to the specialized object to use the specific features (e.g., GPS, touch screen) that are only available on that platform.

The problem with .NET reflection is not that this dynamism exists, it's that everything is bundled together without an opt-out. I would love to have reflection split into small pieces that I could pick and choose from -- maybe I get RTTI, but I can't instantiate arbitrary generic types at runtime, for instance. Or I can't do RefEmit to JIT dynamically generated IL.

These play havoc with the runtime, forcing it to support incredibly elaborate systems just for simple features, so providing some guarantees would make the runtime's job way easier while also providing developer flexibility.


> The problem with .NET reflection is not that this dynamism exists, it's that everything is bundled together without an opt-out. I would love to have reflection split into small pieces that I could pick and choose from -- maybe I get RTTI, but I can't instantiate arbitrary generic types at runtime, for instance. Or I can't do RefEmit to JIT dynamically generated IL.

Something like mirrors?


C++'s ability to selectively turn off features drives me crazy, as I either need to convince myself to live without RTTI and exceptions or live without most of the libraries that exist.

Especially given the mentality of micro-benchmarking code lines without profiling.

It is also the cause of quite a few C++ warts, because the library code needs to be written assuming those features can be disabled.


To be fair, there's absolutely nothing wrong with type erasure, except when you allow runtime reflection.

If there is no reflection, then it shouldn't be possible to observe whether or not type erasure is done.


It really depends at what level -- because the erasure is done at the language level rather than the VM you _have_ to have a pile of runtime casts, even when those casts are no-ops :-/

I agree it works, i just don't think it's as elegant :D

Of course eventually everything is running on an untyped CPU and everything is just an integer so it's all erased in the end. Unless you have one of those silly mid-90s java cpus right?

Or maybe a LISP-machine


Well, that's a detail of how the VM works. My only point was that type erasure is not inherently bad, as plenty of languages do type erasure, like Haskell and OCaml. Of course you can always create conditions where it could become a problem.

But the example with C# "doing it right" is a little odd in that case, since Microsoft had the opportunity to build something new, that didn't have the same issues the JVM had. And they did, in some ways, but that doesn't mean tracking type parameters at the VM level is the only way to do it "right".


> silly mid-90s java cpus

They are still around; here is one example.

http://www.imsystech.com/products/im3910-ic/


Even without full-blown reflection, the problem is that you don't have an "is instance of" operation for unerased types. Which is crucial to case/match as used in Scala and functional programming in general.
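Concretely, Java only lets you test against the erased shape (a minimal sketch):

    import java.util.Arrays;
    import java.util.List;

    class InstanceOfErasure {
      static String describe(Object o) {
        // if (o instanceof List<Integer>) ... // does not compile: the element type is erased
        if (o instanceof List<?>) {            // only the wildcard/raw form is legal
          return "some kind of List";
        }
        return "something else";
      }

      public static void main(String[] args) {
        System.out.println(describe(Arrays.asList(1, 2, 3))); // "some kind of List"
      }
    }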


It is not crucial for functional programming, unless your functional programming language uses types as variant tags.

Both Haskell and OCaml use type erasure, and I doubt anyone would claim they aren't functional languages.


If they store a variant tag, that means they don't erase the bit of info that's needed for case matching. My point is that it's crucial to be able to tell at runtime to which (sub)type a value belongs. Other type information that isn't necessary for that can still be erased.


I think we're in agreement, and just tripping over terminology. In most functional languages, values in a variant are just values in that variant. They don't each have their own type, which means the variant tag isn't actually type information.


You can observe it on the JVM when you realise you can't overload a function with arguments that differ only in generic type.


Yes, my point was that there's nothing inherently wrong with type erasure. You can of course create conditions that would be problematic, like the design of Java and JVM does.


Haskell erases its generics and its type system's "generics" are superior to C#'s in every way. Erasure is not a problem.


>.NET actually gets generics right, it's not done through erasure but actual correct type bindings.

Type erasure is one of the main reasons why there are so many different languages available on the JVM. It's also why there isn't a need for another runtime to run on top of the existing runtime just to support dynamic languages.


Proper type specialization has its downsides in compile time and binary size. Ideally you would have the option of doing one or the other depending on your situation.


Not if done correctly -- for a JITted language the IL does not increase in size for generic types as the IL still only has a single version (with the type parameters). The JIT can share the code for the majority of Object/Reference instantiations (strictly it could share all object code if it put appropriate type metadata into the in memory class representation).

For value types you do need different copies for the various sizes of type parameter, but even then your heap overhead is reduced because you no longer need to box every value. E.g. you no longer need to make an object to wrap an int. So i suspect on balance you reduce the total space requirements.


FWIW, this is exactly how .NET does this. Generic instantiations are shared between all reference types (at runtime, you can sometimes see this leaking into visible abstractions in form of System.__Canon pseudo-type in call stacks etc), but separate instantiations are produced for every value type.


TLDR: We always knew nulls could float around and cause NPEs, but many assumed they couldn't convince the compiler to unsafely convert, e.g., a String to an Integer. However, since null can inhabit any type, we can use null as a proof of any proposition, for example that there exists some supertype of Integer which is a subtype of String. Of course, if such a type exists we should be able to traffic an Integer to a String.

The example on the page is pretty straightforward. I encourage everyone to reason it out.


I suppose that means that it is 'null' which is evil in Java. If you're going to get rid of pointers, and you still want to have the option of a thing existing or not, you therefore must represent it with a list and a contract on the size of the list. (Zero entities, or one entity in the case of a 'single object reference.)

Otherwise you'd only be able to detect 'missing' objects by exceptions being thrown.


What you've described is precisely the Optional or Maybe type. Many folks worry about the efficiency, which is reasonable, but feels misguided. A nullable annotation would be sufficient for the safety we all crave and would have no runtime overhead (i.e. It would still represent missingness with null.)
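For reference, a minimal sketch of the boxed flavour using java.util.Optional; the annotation approach would instead keep the plain reference (and null as the runtime representation) and mark it with a checker-specific @Nullable:

    import java.util.Optional;

    class Lookup {
      // Missingness is visible in the signature; callers have to unwrap it.
      static Optional<String> findNickname(String user) {
        return "admin".equals(user) ? Optional.of("root") : Optional.empty();
      }

      public static void main(String[] args) {
        System.out.println(findNickname("admin").orElse("<none>")); // root
        System.out.println(findNickname("guest").orElse("<none>")); // <none>
      }
    }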


The problem is that then your Optional is a special kind of type that only works on reference types, and you either can't instantiate Optional<int> or it has radically different performance characteristics from most Optionals.


I can live with the notion of nulls. But I think having all (non-primitive) function parameters be nullable is a huge mistake. It is just very rarely what the writer actually wants. And explicitly checking for null for all object parameters is a huge pain.

The right solution, to my mind, would be to have function parameters be non-nullable by default. Unfortunately it is too late in the history of Java to switch to that behavior now. Too much code would break. So I would settle for being able to declare individual function parameters non-nullable as a standard part of the language.

I can't believe it would be all that hard to implement.


I recommend that you read section 6.2 (The Billion-Dollar Mistake) in the linked paper.

Most important part of that section:

> If our community wants to remove implicit nulls from industry, where appropriate, our community should make greater effort to appreciate the complex compromises that have to be made in practice so that our work can successfully transfer out of academia.


    <B extends A> A upcast(Constrain<A,B> constrain
As someone who doesn't know Java well, why is it that we need to pass a Constrain<A,B> into the function? We already know that for this method, B extends A


To allow B and A to be inferred based on the constrain passed in. Look at where upcast is called - it doesn't explicitly call bind.upcast<Something, Otherthing>(...), it just passes the values.


There have been some fantastic conversations on Hacker News about type systems. I've learned a lot about the different arguments for dynamic typing, versus strict typing, versus gradual typing. Java's type system may or may not have represented new thinking when it was under development in the early 1990s, but it certainly seems a bit dated in the year 2016. The limits of the Java type system have been discussed both here on HackerNews and also elsewhere, many times.

For those who want to consider arguments against Java's style of strict typing, 2 things I would recommend include "Agility & Robustness: Clojure spec, by Stuart Halloway":

https://www.youtube.com/watch?v=VNTQ-M_uSo8

He offers a chart that shows the strengths and weaknesses of strict typing versus unit tests versus the run-time checks offered by Spec. It's worth a look.

The discussions around gradual typing have been interesting, but to see how far the limits of this can be pushed, I would suggest everyone check out the Qi/Shen programming language:

https://en.wikipedia.org/wiki/Qi_(programming_language)

"Qi makes use of the logical notation of sequent calculus to define types. This type notation, under Qi’s interpretation, is actually a Turing complete language in its own right. This notation allows Qi to assign extensible type systems to Common Lisp libraries and is thought of as an extremely powerful feature of the language."

This next quote is from someone who has spent a long time experimenting with different Lisps:

"Qi (and its successor Shen) really push the limits of what we might call a Fluchtpunkt Lisp. I suspect it requires a categorization of its own. A few years ago I was looking for a Lisp to dive into and my searching uncovered two extremely interesting options: Clojure and Qi. I eventually went with Clojure, but in the intervening time I’ve managed to spend quality time with Qi and I love what I’ve seen so far. Qi’s confluence of features, including an optional type system (actually, its type system might be more accurately classified as “skinnable”), pattern matching, and an embedded logic engine based on Prolog, make it a very compelling choice indeed."

http://blog.fogus.me/2011/05/03/the-german-school-of-lisp-2/

Mark Tarver, who created Shen, posted a comment and then turned it into an essay here:

"The underlined sentence is a compact summary of the reluctance that programmers often feel in migrating to statically typed languages – that they are losing something, a degree of freedom that the writer identifies as hampering creativity. Is this true? I will argue, to a degree – yes. A type checker for a functional language is in essence, an inference engine; that is to say, it is the machine embodiment of some formal system of proof. What we know, and have known since Godel's incompleteness proof [9] [11], is that the human ability to recognise truth transcends our ability to capture it formally. In computing terms our ability to recognise something as correct predates and can transcend our attempt to formalise the logic of our program. Type checkers are not smarter than human programmers, they are simply faster and more reliable, and our willingness to be subjugated to them arises from a motivation to ensure our programs work. That said, not all type checkers are equal. The more rudimentary and limited our formal system, the more we may have to compromise on our natural coding impulses. A powerful type system and inference engine can mitigate the constraints placed on what Racketnoob terms our creativity. At the same time a sophisticated system makes more demands of the programmer in terms of understanding. ...That said, not all type checkers are equal. The more rudimentary and limited our formal system, the more we may have to compromise on our natural coding impulses. A powerful type system and inference engine can mitigate the constraints placed on what Racketnoob terms our creativity. At the same time a sophisticated system makes more demands of the programmer in terms of understanding. The invitation of adding types was thus taken up by myself, and the journey to making this program type secure in Shen emphasises the conclusion in this paragraph"

http://www.shenlanguage.org/library/shenpaper.pdf

I'm only quoting two favorites of mine, but of course I could post a hundred examples, all making a similar point. Java's style of strict typing is both weak and incomplete, and yet overly rigid at the same time. It's worth noting how many other strategies exist, that deliver more robustness, with greater flexibility.


Both the Qi and Shen pages on Wikipedia were deleted last year.

https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletio... https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletio...

Deletionpedia doesn't currently have them. The Internet Archive has snapshots:

http://web.archive.org/web/20141211005007/http://en.wikipedi... http://web.archive.org/web/20150102045719/http://en.wikipedi...

This is a long-standing problem (decade plus). Wikipedia deletionism interacts badly with research programming language pages. They tend to get deleted for "lack of notability", failing the test of "someone other than the people involved, wrote about it on a sufficiently high-profile piece of dead tree". And the pages face a recurrent threat of "heads you get deleted, tails we flip again in a few years". Some pages have been through deleted/recreated/deleted-again cycles. Some language communities manage to scrape together notability, others browbeat the deletion nominator, but it seems most pages get deleted. Fixing Wikipedia appears intractable.

Absent a wiki associated with something like LtU, creating an alternate wiki has been beyond the capabilities of the programming language research community. Which ends up reflected in balkanization - for example, people working on category theoretic type hierarchies in different languages, being unaware of each others' work. Shoemaker's children.


The day we lose the Internet, it will be many times worse than the burning of the Library of Alexandria.


In case it makes reading easier for people, here's the code with the "A, B, T, U" replaced with what they would expand to. The "that can't work" comes out pretty clear with the equivalent of "Integer-superclass which extends String" which is obvious nonsense.

    class Unsound {
      static class Constrain<AString, BSuperInteger extends AString> {}
      static class Bind<AString> {
        <BSuperInteger extends AString>
        AString upcast(Constrain<AString, BSuperInteger> constrain, BSuperInteger b) {
          return b;
        }
      }
      static <TInteger, UString> UString coerce(TInteger t) {
        Constrain<UString, ? super TInteger> constrain = null;
        Bind<UString> bind = new Bind<UString>();
        return bind.upcast(constrain, t);
      }
      public static void main(String[] args) {
        String zero = Unsound.<Integer, String>coerce(0);
      }
    }
I left the T, U, A, B as part of the name, so it's easier to map back to the original.

I wish people would stop insisting on single-character type variables. They have a purpose - name them meaningfully, like everything else.


I ran the example and I got the same result. The result that is provided, and the one I got, makes sense. Can someone explain to me what is "broken" about this?

Broken down, you create 3 classes:

- Unsound: contains a static generic method. Two types are requested (T and U). T is the parameter type and U is the return type. From now on T will be called CoerceInputType and U will be called CoerceOutputType.

- Constrain: a class that requests two generic types, A being the first type and B the second. A restriction is placed on B saying that B extends A. We will call A ConstraintType and B ExtendedConstraintType.

- Bind: a class that takes one generic parameter of type A. Here we will call A BindType.

What is happening here?

Your execution stack:

   - *Call* Unsound.coerce. Tell it CoerceInputType is an Integer and CoerceOutputType is a String. You pass into coerce a value of '0' with the type CoerceInputType (an Integer class) and accept a result with the return type CoerceOutputType (a String class) 
   - *Assign* In Unsound.coerce you define a type Constraint<CoerceOutputType, ANYTHING extends CoerceInputType>. This means that these type annotations will be propagated through anything this is passed into. 
   - *Assign* you create an instance of bind with the type CoerceOutputType.
What this currently looks like

   <Output, AnyType that extends Input> Output ourFunction(Input i) {
      return i
   } 
What happens: You get an exception as Integer is not castable to a String. You can look further down to the only link in this page to see where that cast is coming from in the bytecode.

The authors seem to be surprised that this will not execute? Or maybe that this doesn't? This is completely expected behavior. Maybe not to someone who hasn't worked in Java and is coming from another VM, but for me this is not surprising. Why? Here's a list:

The ways you can trigger a type change:

   1. Auto boxing
   2. Casting
For integers and strings, these two type conversions will not work. Here is a much more direct explanation of what's happening here:

   class Test {
      public static void main(String[] args) {
         String a = "Hello World";
         Integer b = 10;
         a = b;
      }
   }
Running this you will get the following error:

   test.java:5: error: incompatible types: Integer cannot be converted to String
   		a = b;
   		    ^
   1 error
This is because Integer cannot auto (un)box to String. This error happens at compile time. I'll get into why in a bit.

You might also ask, "But gravypod, what about an explicit cast, will that work?" No, it will not!

   test.java:5: error: incompatible types: Integer cannot be converted to String
   		a =(String) b;
   		            ^
   1 error
This will not work at compile time either.

Now for the next section: why will this not be picked up at compile time? Well, that's a hard one to answer; the basic answer is that the method doesn't exist yet. Or to be closer to the truth, the method's types don't exist yet. When you use generics you're creating a method that will work on a base set of types. There isn't necessarily hard type checking on generics though. These are mainly there for the programmer, not the runtime. The runtime will operate on any data and is just expecting casting to change between types.

When run though a decompiler this is the output of Unsound.class: http://hastebin.com/sitamomogu.java

All that has happened is the compiler couldn't follow the type annotations at compile time and it was left for runtime. This should be improved, but I don't see what is broken. The time of the error just shifted. This is the same thing that happens with null. The error is not at compile time when you are going to have an NPE; instead it is at runtime.

Here is example code that compiles perfectly fine:

   public class Test {
   	public static void main(String[] args) {
   		String a = null;
   		System.out.println(a.getClass().getSimpleName());
   	}
   }
When run you will get this exception:

   Exception in thread "main" java.lang.NullPointerException
   	at Test.main(Test.java:7)
I don't think that this is broken behavior as this is consistent with the way the JVM is meant to operate. Types are suggestions to the VM. This to me is a side effect of auto (un)boxing.

Maybe I'm wrong. I hope someone could tell me why if so.

Edit: Please read https://news.ycombinator.com/item?id=13051036 as this explanation is much more elegant and gets the idea across in a simpler way.


The unsoundness is more straightforward than that: https://news.ycombinator.com/item?id=13051659

Because `null` is passed, the compiler doesn't notice that the type constraints are irrational, leading to a violation - to satisfy them, it'd need an Integer superclass which is a subclass of String.

Arguably that's fine, since the null can't be used to do anything - it's perfectly "safe" to insert a `String x = null;` into a `new ArrayList<Integer>().add(x)` for instance, since it's just a null. But writing that code gives you a compiler error since it isn't allowed by the type system - writing the same thing via generics should too, but instead it fails at runtime.


There's no lack of clarity on why this happens. The point is that getting a ClassCastException with no explicit casts in the code is bad in and of itself. Java (and other similar languages) is normally assumed to be sound wrt types in the absence of explicit casts.


The code doesn't seem to change on click.

EDIT: Don't downvote, is anyone else having this problem?


Weird. Doesn't work for me either. ¯\_(ツ)_/¯

You can still get some of the code from the linked paper, though.


What browsers are you using? Can you file an issue?

Thanks.


Tried in latest Edge, Firefox (50) and Chrome (54) on Windows.

Clicking Scala gives me Scala. Clicking Java again gives me Java. But none of the variants underneath make any difference to the code.



Latest Chrome on Linux. Clicking on source code links doesn't change the example. Clicking on Java / Scala does change to the corresponding language.


Chrome on MacOS and Chrome on iOS


yes, just got to the github repo tho https://github.com/namin/unsound



