
>everyone's pet feature

Isn't that C#? Java is very slow at adding new features; Java has only things that have been proven to work in other languages.



> Java has only things that have been proven to work in other languages.

But they still somehow keep finding ways to make them not work so well when implemented in Java.

C# may move faster, but its design team is also much more methodical about ensuring that new features have good ergonomics. In Java, I tend to feel surrounded by hacks that were hastily slapped on in an effort to keep up with C# and, increasingly, Kotlin.


Idk, have you seen the interfaces with default implementations in the latest C#? Also duck typing? Both are mistakes IMO - the first missteps I feel like I've seen C# make.
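
For anyone who hasn't seen them, a minimal sketch of what a C# 8 default interface implementation looks like (the names here are made up):

    using System;

    interface ILogger
    {
        void Log(string message);

        // Default body: implementers get this member without writing it,
        // but it is only callable through the interface type.
        void LogError(string message) => Log("ERROR: " + message);
    }

    class ConsoleLogger : ILogger
    {
        public void Log(string message) => Console.WriteLine(message);
        // No LogError here; ((ILogger)new ConsoleLogger()).LogError("x") still works.
    }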


If by "duck typing" you mean dynamic, then I don't know what you're complaining about. It has a very niche set of use cases where it is needed. If people are abusing it then it's on them. There is no good or even alluring reason to use dynamic outside of it's intended purpose, so I don't feel like it's one of those "shiny hut dangerous" features you see in some other languages.


Fair point. I switched from C# to Java several years back, so I'm at least somewhat working from nostalgia for a certain point in time.

I look at the feature list in the latest iteration of the language, and my thought is, "Y'know, you really should stop when you're done."


What duck typing? Are you talking about the `dynamic` keyword?


Why don't you like default implementations?


Hmm, I do a lot of C# programming, including very language-y low-level stuff, and I'm not sure I completely agree.

I appreciate that by moving faster they get more stuff into more hands faster, but they definitely have a lot of hackish solutions with poor ergonomics outside of the narrow scope they were originally intended for.

If you will: the language features have a clear purpose but a general implementation; and outside of the narrow purpose the designs usually feel pretty poor.

E.g.:

- LINQ/expression trees don't support most of the C# language, and new language features usually have no expression-tree equivalent. This isn't a full Lisp or F# style quotation facility, but a pretty narrow window that's not easy to use outside of linq-to-sql style usages.

- LINQ trees are also intrinsically inefficient, since the expression trees compile not to a statically shared expression, but to a bunch of constructor calls (i.e. looping over even a medium-sized expression is bound to be slow); and they're not equatable, so it takes a lot of effort for a consumer to detect this case, leading to overly complex (and hard to reproduce correctly) hacks inside stuff like EF (see the sketch after this list).

- LINQ is restricted, but the restrictions are fixed, not customizable. That makes it a poor fit for DSLs, including stuff like Entity Framework, because there are usually lots of expressions your DSL can't support, but there's no way of communicating that to the user. Also, if you use expressions as a DSL, you need to follow C# semantics, which isn't trivial; witness the gotchas in ORMs surrounding dealing with null and equality.

- lambdas are either delegates or expressions, not both, and this isn't resolved via the normal type system but by special compiler rules, making it hard to support both and leading to type inference issues, such as `var f = (int a) => a + 1;` not compiling.

- Roslyn: very poorly documented, and ironically very dynamically typed to the point that many casts or type-switches are necessary but finding out what types there are and what they do is generally a matter of trial and error since the docs aren't great. Ergonomics are poor in other ways too; e.g. dotnet is xplat, but the build-api is not - i.e. it's clearly not dogfooded. Also: totally not integrated with expression trees, which is at least mildly surprising.

- string interpolations are unfortunately quite restrictive (compare with e.g. javascript, where this was implemented much better), and intrinsically and unnecessarily inefficient (at least 2 extra heap allocations, usually lots of boxing, and the parsing the compiler necessarily must do is not exposed in any kind of object tree, but instead reserialized to string.Format-compatible syntax, necessitating re-parsing at run time). Also, like expression trees, this was really hacked into the language, so e.g. you can't participate in other normal C# features like overload resolution the way you might expect, extension methods plain don't work, and culture-sensitivity can be a gotcha: basically this works for immediately evaluated expressions, but is tricky elsewhere.

- Razor (not strictly C#) is hugely complex, and has a very impractical underlying model. Compared with e.g. JSX, which is trivial to (ab)use creatively and which uses mostly language-native constructs for control flow, Razor makes it impossible to use even basic features like methods to extract bits of common code; lots of basic programming features are reimplemented differently. Instead of passing a lambda or whatever, you have to deal with vaguely equivalent yet needlessly different stuff like partials + tag helpers.

- optional parameters are kind of a mess (no way to enforce named args, no way to cleanly wrap optionals, the restriction to compile-time constants, interaction with overloads can be surprising); tuples are too (names are dealt with differently than everything else in the language, no syntax for empty or 1-element tuples, no way to interpret arg lists as tuples or vice versa, no checks on nasty naming errors like swapped order).

- equality is a mess (how many kinds are there again?); lots of APIs are disposable but should not be disposed, while for others disposing is critical, and there's no good way to compose disposables; the huge, ever-expanding API without a practical deprecation path is a pitfall for newbies; no partial type inference for generics; no unification of all the various Func-and-Action variations means billions of pointless overloads (and sometimes per-API ways around it); tuples and anonymous objects are sort of redundant, but not entirely; and there's no good way of implementing equality/hashcode/comparability, nor an easy way to detect misused non-equatable types.
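
As promised above, a minimal sketch of the expression-tree equality point (nothing framework-specific assumed):

    using System;
    using System.Linq.Expressions;

    class ExpressionDemo
    {
        // The compiler turns this lambda into Expression.Parameter /
        // Expression.Add / Expression.Lambda factory calls, so every call
        // to Make() allocates a fresh tree.
        static Expression<Func<int, int>> Make() => x => x + 1;

        static void Main()
        {
            var e1 = Make();
            var e2 = Make();

            Console.WriteLine(e1.Equals(e2));                  // False: reference equality only
            Console.WriteLine(e1.ToString() == e2.ToString()); // True: "x => (x + 1)" - the kind of
                                                               // crude structural check a caching
                                                               // consumer ends up reinventing
        }
    }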

I mean, I respect their choices here, and there's a tradeoff with lots of benefit too: they're really quite fast-moving, and I want those new features ;-). But it's not without costs; they definitely aren't "much more methodical" or anything like that.


Upvoted because you gave lots of specific examples. Too many conversations get vague fast.


Thanks - I hope I don't come across as too bitter - I really do think there's an upside to all those limitations. I'm just past the exuberance of thinking that, because it's so actively developed, all these flaws are eventually fixable. It's a fast lifecycle, and probably at some point it'll be too impractical to continue as is, and then we'll just jump ship to some slimmed-down alternative with a good transition story - and that's fine. So far: so good.


C# has made some serious mistakes: reified generics (which has basically destroyed simple language interop on the CLR and makes it an unattractive target for language implementors), and recently, async/await. Both of these help in some ways, but their costs are higher than their benefits, and much better alternatives exist.

Java is not trying to "keep up." It is intentionally slow-moving and conservative (this design goal was set by James Gosling when Java was first created), and only adds features once they have been proven in other languages for a while.


As a former Java dev who returned to the .NET world, I don't consider it a mistake; the CLR was designed with multiple languages in mind, and there are plenty of options available, even if Scala devs failed at that.

On the other hand, what I consider a major mistake on Java's side was ignoring value types and AOT compilation from its inception.

Had Sun blessed such features from the beginning, many use cases for C and C++ wouldn't have been necessary.


Value types add complexity and they weren't necessary in 1995. They only became necessary due to hardware changes circa 2005. Similarly, AOT compilation has only become attractive for the kinds of applications people use Java for recently, when startup time became important for serverless. Neither lack has caused Java lasting damage; what has is the domination of the browser on the client, but that has affected all languages.

As to baking variance into the runtime, I think this is just a bad idea, which is so far used only in C++ and .NET, two languages/runtimes with notoriously bad interop (it's not just Scala; Python and Clojure have a similarly bad time on the CLR, as would any language not specifically built for .NET's variance model). It is simply impossible to share a lot of library code and data across languages with different variance models once a particular one is baked into the platform. This is too high a price for a minor convenience.
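
To make the "baked into the runtime" point concrete: .NET generic interfaces carry declared variance in their metadata, which every CLR language then has to honor. A minimal C# sketch (nothing beyond the BCL assumed):

    using System.Collections.Generic;

    class VarianceDemo
    {
        static void Main()
        {
            // Allowed only because IEnumerable<out T> is declared covariant in
            // the runtime metadata; the runtime type system itself permits this.
            IEnumerable<object> objects = new List<string> { "a", "b" };

            // List<T> is invariant, so the analogous assignment does not compile:
            // List<object> more = new List<string>();
        }
    }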

Specialization for value types (which are invariant) is another matter, and, indeed, it is planned for Java. Perhaps some opt-in reification for variant types has its place, but not across the board. I am not aware of other platforms that followed in .NET's misplaced footsteps in that regard. Those that are known for good interop -- Java, JS and LLVM -- don't have reified generics.

What's worse is that it's a mistake that cannot be unmade or resolved at the frontend language level. Even Java's big mistakes (like finalizers, how serialization is implemented, nullability and native monitors) are much more easily fixed.


Value types aren't "necessary", but they would have been valuable from day 1. The GC heap is simply inefficient; not necessarily because of the GC (which indeed is harder with massive multicore), but simply because of the per-object memory overhead.

There's a reason Java had built-in value types (the primitives) from day 1, because it made sense even back then.

Frankly, I think both java and C# kind of got this wrong. There was an overreaction against the C/C++ of the day, and whereas the GC turned out brilliant, the idea that it's not even necessary to express the notion of references/pointers/values etc. was too much; and the idea of a single type system root (object) is similarly dubious, and then particularly the idea that that root type isn't the empty type. Object has semantics, and that was a mistake, because it contributes to the bloat. I'm totally happy with ignoring those features 99.9% of the time, but having them completely unavailable makes those 0.1% cases extremely expensive. (I mean, I think those things are slightly changing, but it's slow going).


> because it made sense even back then.

That was necessary for performance back then. User-defined value types weren't, and Java has done well without them.

> Object has semantics, and that was a mistake, because it contributes to the bloat.

I think most of the RAM bloat is due to the GC trading off extra RAM for speed rather than object headers, and I'm not sure trading off complexity for headers was right 25 years ago (JS is doing fine on the client without value types). What changed was the performance characteristics.

As to object semantics, it may be a fixable mistake. The goal is to get value types without today's object semantics while preserving a single class hierarchy at the same time. The Valhalla team thinks that's achievable.


Heap (over)use by the GC is effectively a scaling factor. How large the underlying objects are remains relevant: if your objects are twice the size necessary, the GC will "bloat" that further. And this tradeoff isn't entirely GC-specific; other allocators, such as those used to implement malloc/free, have related tradeoffs to make; free() won't release memory to the OS either (and memory, released or not, may end up evicted from RAM anyhow).


Of course it is relevant. I'm just saying it isn't the decisive factor that makes this an absolute necessity, as evidenced by the fact that much of the backbone of the largest software services in the world is Java. There are lots and lots of tradeoffs in runtime design, and it's important to look at the whole rather than single out one decision in isolation. As a whole, the criticality of value types for Java is relatively recent.


I advise you to read the Mesa/Cedar report on the impact of garbage collection algorithms, available in the Xerox PARC bitsavers archive, or the EthOS or SpinOS experience with Modula-3.

All of them report that having value types alongside a GC had a relevant impact on improving performance.

All systems designed before Java was a thing.

Or since you refer to JS, the paper about SELF's design.

Even Dylan was designed with AOT/JIT and value types support, which is relevant here given that its domain was being a systems language for the Newton. That politics killed it is another matter.


Oh, I don't deny that value types would have helped performance back in '95, just that they were absolutely essential for Java. Smalltalk/Self and Scheme/CL didn't have them, and those were probably Java's greatest influences; I don't think VB had them, either. Also, in its first four years, before HotSpot was ready, Java was interpreted, so it had bigger performance problems.

I don't know why there was no emphasis on AOT back then. I guess they started with interpreter/JIT, and then there just wasn't much demand for AOT until now.


Microsoft Basics have had support for value types since MS-DOS.

QuickBasic supported value types and AOT to native code, and while Visual Basic used P-Code, version 6 introduced a proper AOT native compiler.

Modula-3 was also a big influence, at least according to some papers.

There was surely demand for AOT, given that most commercial JVMs had it in some form or another since 2000.

Even Sun actually supported it in Java Embedded variant for OEMs, probably grudgingly.

Common Lisp certainly has support for value types.


What user-defined value types did CL have in '95? Also, are you sure about VB having had them then?

As for AOT, there may not have been sufficient demand from Sun/Oracle. I only joined relatively recently, but we generally do expensive things only if we believe they have a huge benefit or are in huge demand, and we believe they can be long-lasting. The assumption is that any new feature will require maintenance for 20 years, taking away resources from other things. So if something is expensive, even if it's cool or some people could find it very useful -- we don't do it. The assumption is that the ecosystem is large enough that others can, and will.


Arrays, structs, fixnums, explicit stack allocation.

I can check the respective manuals if you wish.

Yep, I did VB programming for a short while.

And please note that even though my focus is now elsewhere, Java is one of my favourite ecosystems.

As a peasant I just wished that Java 1.0 was more like Go, given the existing alternatives back then.

So it kind of stayed as a pet peeve of mine.

Same applies to .NET, just in a different way.


Well, we can disagree about when AOT and value types became critical for Java (and I would argue that they clearly weren't back then because Java has done spectacularly without them), but Java is getting both soon.


I would say that it had other factors that contributed to its success, so it succeeded in spite of lacking those features.

However due to the hardware architecture changes and new kids on the block, it is starting to be an issue.

I keep wishing to see them arrive, and have watched all the JVM Language Summit, Devoxx and JavaOne talks about them.

Meanwhile I can already enjoy them elsewhere. :(


> Meanwhile I can already enjoy them elsewhere. :(

That's perfectly fine. We think that our priorities are right for the workloads Java is used for (e.g. people care more about a low-latency GC like ZGC, and deep low-overhead in-production profiling, like JFR, than about AOT).


C# got it right from day one, regarding value types.

AOT compilation not so much, given the NGEN constraints.

However, they both got things wrong, considering that CLU, Modula-3, Delphi and Eiffel are considered influential on their designs.


Mutable structs are not exactly value types, but Microsoft has always preferred control over simplicity (after all, they pushed C++ really hard). I won't say whether that philosophy is right or wrong, but it is very different from Java's.


In 1995 I was enjoying Oberon, Component Pascal, Eiffel and Delphi.

Value types were pretty obviously necessary.

More so when one dives deep into how languages like CLU and Mesa/Cedar were designed.

Having AOT support doesn't preclude having a JIT as well, as Common Lisp and Eiffel already showed in 1995.


There's a difference between useful, and even very useful, and absolutely necessary. Clearly value types weren't absolutely necessary, as Java did well without them (and JS still does).

Gosling said that his goal was to have nothing you can somehow live without (I don't know how well early Java lived up to that ideal, but that was the ideal). Hardware changes made user-defined value types absolutely necessary for the workloads Java wants to target.


I was there from the beginning and wrote my first Java game in 1996. Early Java only did well thanks to Sun's marketing weight and it being free beer versus the alternative of having to pay for a compiler like Delphi.


I was around, too, and I don't think that was at all the full story. Marketing has never been solely responsible for the long-term success of any product. There were other languages that were very heavily marketed: VB and C++ (and FoxPro, too, I think) by Microsoft, Delphi, and about a million other RAD tools. Being free was one of the reasons, but so was targeting the web, and Gosling's design of a JVM that gave people what they wanted (dynamic linking, fast compilation, garbage collection) wrapped in a language that felt familiar and non-threatening. I don't remember what Delphi's issues were, but a big project I wasn't involved with at the same organization I was working at circa 2002 (I was all C++ back then) did Java on the server and Delphi on the client. Maybe Delphi didn't have a good server-side story?


It sure did, as long as you were a Windows shop.


No need for Windows. There was an official Linux implementation back in the day, called Kylix.


Kylix was full of issues and was a mismanaged product variant, largely ignored.

If I recall correctly it even depended on WINE.


> and native monitors

Are there any plans to fix those?


Some ideas; nothing concrete. Need to figure out cost/benefit.


async/await is fantastic and pretty much the direct inspiration for the exact same feature in ES6, where it's a godsend.

C# is one of the best dev experiences in any language/IDE.


Async/await is fantastic compared to not having anything at all. It's a big downside compared to other things you can do (cf. Go, Erlang), and hard to get rid of. It's the classic case of getting easy short-term benefits at the expense of long-term costs. Its main benefit from an implementor's perspective is that it's better than nothing and very cheap to implement quickly. Just as .NET has lived to regret reified generics[1], it will live to regret async/await.

[1]: Maybe not C# programmers, but there are easier ways to do a single-language runtime.


The other main alternative to `async/await` and the Promise<T>/Task<T>/future<T> paradigm is Rx's Observable - but just because Observable<T> is capable of handling every situation that Promises can doesn't mean we should use it everywhere. Angular tried that when they changed their HTTP client library to use Observable<T> instead of Promise<T> because they wanted to expose retries and other nifty logic - but in doing so made the learning curve a vertical brick wall for everyone involved (and now we can use Promise<T> with support for retries and better error handling anyway), in addition to adding a very hard dependency on a fast-moving project (e.g. Angular 6 ships a load of RxJS compatibility shims because RxJS radically changed its API design (again)).
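
To put that comparison in C# terms, a rough sketch (System.Reactive assumed; the URL handling and names are made up): a one-shot Task where retry is hand-rolled, versus an Observable pipeline where Retry is a built-in operator but the consumer now has to think in subscriptions.

    using System;
    using System.Net.Http;
    using System.Reactive.Linq;
    using System.Threading.Tasks;

    class RetryDemo
    {
        static readonly HttpClient Client = new HttpClient();

        // Task/await style: one value, retry written by hand.
        static async Task<string> FetchAsync(string url, int attempts)
        {
            for (var i = 0; ; i++)
            {
                try { return await Client.GetStringAsync(url); }
                catch (HttpRequestException) when (i < attempts - 1) { /* retry */ }
            }
        }

        // Rx style: retry is built in, at the cost of the Observable model.
        static IObservable<string> Fetch(string url, int attempts) =>
            Observable.FromAsync(() => Client.GetStringAsync(url)).Retry(attempts);
    }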

Go's goroutines seem okay - but I don't like how much control they take away from the programmer. For example, last year I worked on calling into a black-box C DLL from a Go program, and we learned the C DLL had code that was simply terminating the thread inside of it (by design!) because the author of the C DLL assumed ownership of the thread. That caused a problem for us because Go's goroutines are scheduled by the Go runtime, and it will never let you give up ownership of a Go thread - and I couldn't see how I could use my own thread (e.g. getting a thread from a native OS call to keep it outside of Go's control) with goroutines. The project was almost DOA after we learned this; fortunately we convinced the author to always return instead of killing the thread. I'm not sure if anything's changed in Go since then that would have made things easier for us, but since then we haven't used Go for anything new. The only reason we used Go was because it gave us binaries that "just worked" for Windows, macOS and Linux without having to worry about Java, .NET and other dependencies - but I wasn't happy about the ~20-30MB executable output.


What we're doing in Java is letting you choose, for each sequential computation, whether you want a heavyweight (kernel) thread or a lightweight usermode thread (like a goroutine), and if you choose the latter, you can use your own scheduler (schedulers are written in Java, and aren't a part of the runtime). No promises, no observables, no async/await, and no thread control issues.


C#'s language is much better designed IMO. Can anyone compare LINQ and Java's streams and not pick LINQ? The Java version feels much sloppier, and Java came second.


Yes, I personally prefer streams, LINQ seems to me like mixing SQL in C# and that feels wrong.


That's probably what I like most about it. But that aside, the naming of tasks seems much more consistent in C# than Java. Java already had streams and maps and mangling those names makes searching for documentation a pain.


I do like all of LINQ's extension methods, but not the syntax myself.


> It needs close parenting. Java has been ruined by the push to include everyone's pet feature.

Oracle is moving to a faster cycle of development. There are some of us who strongly feel that some of their decisions are based less on what's best for the language and more on catering to the popular-and-loud crowd. I'll never forgive the addition of `var` to the language.


That this very thread exists suggests a certain “C++ ification” that happens to languages.

I really respect the slowness of the Go maintainers in adding new stuff. I also suggest that we all ponder our tooling some: writing Java with emacs or vi is a materially different experience than using Eclipse or IDEA, and var-style type inference seems almost silly with those tools, which do it for you.


>writing Java with emacs or vi is a materially different experience than using Eclipse or IDEA, and var-style type inference seems almost silly with those tools, which do it for you.

It's not so much the extra typing that's the problem, it's the extra reading. All the stuttering is visual noise.


This. If you want to revolutionize the profession, come up with something that helps with reading as much as modern IDEs help with writing. My answer is that boilerplate should be generated somewhere else and largely ignored.


IMO, boilerplate source code shouldn’t be generated at all — the tool chain should directly emit the required object code. And code generation shouldn’t require a different language — or special comment syntax.


Depends on the toolchain. Everybody knows how to generate ugly source files, but it takes more effort to add AST nodes during compilation (or symbol table entries with types plus object code) and might lead to errors nobody understands how to fix (because you can't read the declaration of the thing you're trying to interact with).


>I'll never forgive the addition of `var` to the language.

I'm inexperienced with Java and didn't know this existed until I saw your post. It seems like a nice shorthand to me. Can you explain why you don't like it?


Misconceptions mostly. Java developers are some of the most conservative developers around.

And there you have the answer to why Java hasn't evolved that much, or when it did, why it needed to care deeply about backwards compatibility at the source level. It's because Java developers want it that way.

The irony is that people are now abusing "aspects" and "dependency injection" via frameworks like Spring that bring everything but the kitchen sink, but then the language becomes effectively dynamic, as via those annotations all static type safety goes out the window.

Therefore I find it interesting when Java developers complain about var, because the ecosystem has, in my opinion, bigger problems. Compared with annotations, var isn't a problem, because var is statically checked; so here we have a clear case of missing the forest for the trees.


> Java developers are some of the most conservative developers around.

You're right, there are loads of conservative Java developers. It's one of the things that makes me love using the language.

> The irony is that people are now abusing "aspects" and "dependency injection" via frameworks like Spring that bring everything but the kitchen sink, but then the language becomes effectively dynamic, as via those annotations all static type safety goes out the window.

> Misconceptions mostly.

But drop the strawman argument and borderline ad hominem. It'll do you better.


Spring is terrible in that sense, and you do find professionals arguing for Spring and a strongly typed language at the same time. That said, it's not an argument I'd ever heard before being part of a Spring-centric shop.


What ad hominem? That the Java ecosystem is wholly dependent on a crazy amount of magic is not a personal attack, really, but a mere sad admission.


I'm of the mind that it is un-Java like. Whether or not there is a "Java" as a philosophy is not the hill I'm trying to die on.

Consider these contrived lines of code:

    String first = someMethodCall();
    var second = someMethodCall();

The first provides more useful information at a glance. I don't see any value in the "nice shorthand." Typing out "SomeStupidClassName" has never once been a material bottleneck in my 15+ years of programming, but now we have this new option that caters to the lazy, and in doing so makes life harder. Now I have to either ban it, embrace it, or come up with some ruleset around when you can and can't use it. Why? Someone can't be bothered to type a few extra characters.

It reminds me of my grandfather, a former professional ball player, but one who played back in the days where there weren't these multi-million dollar celebrity ballplayers pissing and moaning in the press about just how hard their life is. He used to call those types "high-priced cry-babies," and I really feel a tinge of that in dealing with folks who just wholly embrace `var` and give folks like me shit for having criticisms of it. Perhaps that's just my old blue-collar showing, but your convenience in writing a handful of characters simply will never enter into my considerations.

I love using Java, I love the addition of things like streams, the Optional type, etc. My sibling comment is a little right, and very wrong. Lots of Java developers have a certain conservatism about them; I'm most certainly one. But there are large reasons to hate it.


> I don't see any value in the "nice shorthand." Typing out "SomeStupidClassName" has never once been a material bottleneck in my 15+ years of programming

There are a few rather glaring spots that I've noticed.

First, when you're refactoring, you've now got to edit every spot where a variable of that type is created. At the very least, when you're just renaming a class, your IDE can help you, but you still create a lot of diff noise. At worst, when you're splitting up a class or otherwise shifting responsibilities, you may end up with a whole lot of yak to shave. This is not just an annoyance; it's a latent code quality problem, because it creates a disincentive to clean things up.

Second, I've seen it become an impediment to writing clean code in the first place. I have encountered situations where it's clear that the author wrote

   someMethodCall(
    withOutputOfSomeOtherMethod(
      thatTransformsTheOutputOfYetAnotherThing(
        basedOnThisInput)));
because creating intermediate variables would have meant having to type out (and burn precious screen real estate on) some ridiculous set of 60-character generic type names.

I've even seen it result in situations where data gets copied or otherwise processed excessively, because the explicit type annotation resulted in an upcast that shed some useful feature that a subsequent developer shimmed back in because they trusted the explicit type annotation and not a function's actual return type.
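
A sketch of that last failure mode, shown here in C# since var behaves the same way there (LoadNames and the needless copy are hypothetical):

    using System.Collections.Generic;
    using System.Linq;

    class UpcastDemo
    {
        static List<string> LoadNames() => new List<string> { "a", "b", "c" };

        static void Main()
        {
            // Explicit annotation upcasts to the interface; a later developer,
            // trusting the annotation, copies the data back into a list.
            IEnumerable<string> names = LoadNames();
            var asList = names.ToList();     // needless copy just to index into it

            // var keeps the method's actual return type, so no shim is needed.
            var names2 = LoadNames();
            var first = names2[0];
        }
    }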

So yeah, I decry your assertion that this feature is about being lazy. This feature is, at least for me, about code quality.


It would appear that all of your code quality arguments are "people are too lazy to actually write good code." So it's not clear why you would decry that assertion. I don't have a horse in the race one way or another, but you're not refuting mieseratte's objection.


A tool that encourages you to shoot yourself in the foot is a bad tool.

If a language encourages bad patterns by making good patterns overly verbose, the language should address that.


If explicitly naming the type is better, then do that. I find it useful for cases such as:

  OurConfabulatorWithADashOfSpice confab = new OurConfabulatorWithADashOfSpice();
Versus:

  var confab = new OurConfabulatorWithADashOfSpice();


To quote another comment from this thread:

> [D]rop the strawman argument and borderline ad hominem. It'll do you better.


Sure, in a contrived example where the return type is not obvious, you perhaps shouldn't use var. What about real world examples where the type is more often than not obvious?


I mean, Go also has var, for very similar reasons as Java has it.


Generics were introduced in Java in 2004, with J2SE 5.0 [0]. [0]: https://en.wikipedia.org/wiki/Java_version_history#J2SE_5.0


Generics should have been in Java (and Go) from the beginning.

Those surely aren't proof that Java adds "everybody's favorite feature". I think the parent means the newest Oracle projects (Valhalla, modules, value types, streams, and so on).


(Technically, I think Java modules have been floating around in weird, likely broken suggestions since before Oracle bought Sun. As far as I could ever see, the primary design constraint was always, "NOT OSGi!")


The concept of generics/parametric polymorphism had existed for decades prior to that in languages like SML, and had proven to work rather well.


Every time I get to edit pre-Java 5 code, I'm reminded how useful generics actually are.


Generics were added so late because they had to figure out how to do it properly, correctly, the first time.


And they failed. They did the best they could, but the fact that they were added onto the language later (plus the desire for backwards compatibility) means there are lots of gaps and warts in what they wound up implementing.


And the hacks to work around them started rolling in quite quickly. For example:

http://gafter.blogspot.com/2006/12/super-type-tokens.html

At its root, the real problem here isn't "reification good" vs "reification bad", per se. Haskell has an excellent implementation of generics, and erases types far more aggressively than Java does. C# also has a very good implementation of generics, this time based on reification.

The problem is more that Java's particular mix of design decisions resulted in a language that operates at cross purposes with itself. Once upon a time, back in the beginning, Java was a reflective language. Being reflective requires type information to be available at run time, though. When Java decided to use type erasure in its implementation of generics, they created a really bad set of interactions: They kneecapped reflection, so now you can no longer call Java truly reflective; it's only partially reflective. You can no longer effectively and accurately reflect on what have come to be some of the most-used classes in the language. And, at the same time, they forever sealed a rather important corner of the type hierarchy off from generics. They also delayed a bunch of type checking until run time - after types have been erased - so that certain things can just never be made to cleanly type check. Meaning you also can't say Java is any more than partially generic.


IIRC, the choice was between Java 5's version of erasure or drastically modifying the JVM, with the likely result that the new Java would be incompatible with the old Java. (Like C# has done.) This was considered unacceptable at the time. (Unlike C#.)


Erasure is important in allowing interop from other JVM languages. Reified generics would be nice from the perspective of just writing Java, but the interop story on the JVM is one of its best selling points.


Generics in Java are a giant hack from the early 2000s to maintain backwards compatibility with 1990s-vintage JVMs. C#'s generics were done right.


I'm of two minds about that, these days. I came from C#, so of course reified generics seemed clearly better - but these days I would rather have them in Java more and in C# less. I often find myself wanting to write the moral equivalent of `IFoo<?>` in C# and end up having to have two separate interfaces, etc., just to have a way to handle a list of a thing that I end up working on in an abstract manner. (Though I'd caveat that that is more of a gamedev-related concern than in Java/Kotlin, which I write for work.)

I do appreciate, though, that when Microsoft decided to do generics for C#, they did so decisively. These days, when C# gets a new feature, it seems like it's the complete opposite of decisively delivered.


You mean like this, right?

    class A {};
    class A<T> : A {};
I don't mind that. It can even be an aid to organization - all the generic stuff goes in the generic class, all the stuff that doesn't rely on that can go in the base class. But it would be nice to use something like <?>. Too bad generics don't inherit implicit casts, like A<int> to A<object>.


I do mean that, and I do mind it a lot when I'm so used to just being able to erase the generic.

There are performance implications to type erasure, to be sure, but when our computers are mostly all future machines from beyond the moon, I'm more interested in minimizing the impedance between my brain and a solved problem.


This is the only non-bad consequence of type erasure I'm aware of. On the other hand, a lot of code I've written in C# would be impossible or severely hacky without type retention, like "new T()", "T is Thing", finding all classes that are derived from T, etc.
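
For readers who haven't used .NET, a minimal sketch of the kind of code that relies on T surviving to run time (the helper names are made up):

    using System;
    using System.Linq;

    static class Reified
    {
        // 'new T()' only works because the runtime knows the concrete T.
        public static T Create<T>() where T : new() => new T();

        // typeof(T) is a real run-time Type, so you can scan an assembly
        // for everything assignable to T.
        public static Type[] FindImplementations<T>() =>
            typeof(T).Assembly.GetTypes()
                     .Where(t => typeof(T).IsAssignableFrom(t) && !t.IsAbstract)
                     .ToArray();
    }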


TBH, I'm just as happy passing a factory method in for that sort of thing. Because it allows you to do both and pick the one that makes the most sense for you in a given situation.





