Hacker News
Records Come to Java (oracle.com)
143 points by agluszak on Jan 16, 2020 | 159 comments



Java finally gets what's been available in Scala (case classes), C# (structs or maybe properties), Kotlin (data classes) and others for a very long time .. so long we already have tools like Immutables and Lombok to get past this really dumb limitation in Java.
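For reference, a minimal sketch of what the new feature looks like (the names here are illustrative, not from the article):

```java
// A record declares its state once; the compiler generates the canonical
// constructor, accessors, equals(), hashCode(), and toString().
record Point(int x, int y) { }

class RecordDemo {
    public static void main(String[] args) {
        Point p = new Point(3, 4);
        System.out.println(p.x());                      // accessor is x(), not getX()
        System.out.println(p);                          // Point[x=3, y=4]
        System.out.println(p.equals(new Point(3, 4)));  // true
    }
}
```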

I was really surprised to learn that the Java compiler/VM, when it sees the patterns "somevar" and functions "getSomeVar" and "setSomeVar" doing purely getter/setter stuff, removes the actual function calls and silently changes them in the background to field lookups for speed.

If you attach a debugger or memory profiler, it changes them back to functions and slows them down. I'm not sure if this is still true; I saw an example of this at a meetup back in the Java 8 days.


> Java finally gets what's been available in ... and others for a very long time

You seem to think that the goal is to get as many features as possible, as quickly as possible. Maybe some languages have that goal, but Java's design philosophy -- from its inception [1] -- has been to be a conservative language that only adopts features that have become familiar to programmers and stood the test of time in providing a good cost/benefit. Java hasn't always done this, but that's the aspiration. Worked out pretty well, I think.

Given Java's design philosophy, the far more interesting question is which features of those more adventurous languages Java does not intend to adopt.

> Java compiler/VM, when it sees the patterns "somevar" and functions "getSomeVar" and "setSomeVar" doing purely getter/setter stuff, removes the actual function calls and silently changes them in the background to field lookups for speed. If you attach a debugger or memory profiler, it changes them back to functions and slows them down.

It's true, and the optimizing JIT compiler does much more than just that. It performs deep speculative optimizations, like inlining virtual calls and removing untaken branches, reverting them (using a process called deoptimization) either for debugging or when it detects its assumptions are false. What you're referring to is just an instance of inlining.
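A rough illustration of the inlining point (the flags in the comment are HotSpot diagnostic options and may change between JVM versions):

```java
// Run with: java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining HotLoop
// Once the loop gets hot, the inlining log shows getValue() tagged as an
// "accessor" and folded into the caller; attaching a debugger triggers
// deoptimization, reverting the call to a real method invocation.
class HotLoop {
    private int value = 1;

    int getValue() { return value; }  // trivial accessor: prime inlining candidate

    public static void main(String[] args) {
        HotLoop h = new HotLoop();
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += h.getValue();      // JIT-compiled code reads the field directly
        }
        System.out.println(sum);      // 1000000
    }
}
```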

[1]: https://www.win.tue.nl/~evink/education/avp/pdf/feel-of-java...


   Worked out pretty well, I think.
On the contrary, my friend!

ML was better than most of its successors, including Java, which held programming back. Milner's ML avoided some of Java's glaring mistakes (e.g. exception specifications, lack of generics, non-uniform treatment of types, lack of higher-order functions, lack of pattern matching) from the start. Xavier Leroy showed how to compile ML efficiently in the 1990s.

The main thing ML lacked (w.r.t sequential computing), was a good integration with OO and HKTs. That was solved by Scala.


I meant that it's worked out pretty well for Java. ML -- one of my favorite languages, alongside Java and a couple of others -- has also managed to avoid being used by anyone, really. But look at Java now: it has generics and higher-order functions, and it's getting a uniform treatment of types (Project Valhalla), and pattern matching [1]. And so while ML introduced those features in a research context, Java is the language that has made and will make the world actually use them. Being ahead of your time is often just as much of a mistake as being behind (as far as product design goes).

As to the question of "holding programming back," it is yet to be seen how much of an impact these features have in practice. Same goes for what ML "lacked." I personally like some features more and some less, as do you, but I don't think anyone knows which programming language features really push programming forward.

[1]: https://cr.openjdk.java.net/~briangoetz/amber/pattern-match....


"Holding programming back" means that programming would have advanced in Java's absence. What language would have advanced it? Cladote seems to assume it would have been ML, and that ML would actually have been used if Java hadn't taken over the world. That seems unsupported by evidence, and overly optimistic as well.

And if Cladote's answer, ML, would not have taken over anyway, then did Java hold anything back? No. Was it better than, say, C++? For many uses, yes.

"We would have ruled the world if it wasn't for you meddling kids!" is wishful thinking. If ML was so good, it should have won anyway. Instead, "has managed to avoid being used by anyone" is a pretty fair assessment.


>If ML was so good, it should have won anyway.

Experience proves otherwise, to the best of my knowledge. For gaining wide propagation in a niche, whether in a habitat or a market, being the best conceivable design is rarely what maximizes success. There are many other factors, and technical merit carries little weight.

That's true in biology, where there are numerous examples of a function being filled by an organ that evolved contrary to what an engineer's sense of elegance and parsimony would dictate.

That's also something that happens with technical products, where early proliferation in the market seems to confer a far greater advantage than excellence of the product.

That said, I never had the opportunity to touch ML; I only heard of it through personal research into the roots of Coq, which I studied during my CS courses at university. I haven't touched much Java since I left university either. Last year I worked on an SDK provided for several languages, including Java, which gave me the opportunity to see how the language has evolved and to play with its integration of the functional paradigm. It still requires far too much boilerplate for my taste, but it is a bit less frustrating in how you can structure your code.


> Experience proves otherwise, to the best of my knowledge.

It's very hard to say that it does in programming, at least. Even though we could easily come up with value metrics for programming languages, we don't really know what they are for any particular language. However, that in itself tells us something. The software industry is an environment with strong selective pressures. That a big difference in "quality" (i.e. some metric with a bottom-line impact) would go unnoticed is hard to believe. It doesn't make sense from a theoretical perspective -- adaptive traits should be detected in a selective environment, and it also doesn't fit with observed reality. We observe that technologies that truly provide an adaptive benefit are adopted at a pace commensurate with their relative advantage; often practically overnight.

So we have reasons to believe that it is not true that "better" languages lose, at least not ones that have some measure of adoption at all.


> strong selective pressures

The opposite is true. Unlike in electronics, construction, or shipbuilding, you can ship shitty software and patch the hell out of it after release, with few exceptions.

Hence the only thing that matters is how cheap and replaceable the developers are, unlike in pharma or architecture, where errors are always costly.

Java was successful because it was 1) pushed by a big enterprise company, 2) familiar to people with a C/C++/Perl background, and 3) overall easy to write, at the expense of verbosity and bloated codebases (who cares, when you could just hire as many cheap devs as needed to write as many classes as needed).


> The opposite is true. Unlike in electronics, construction, or shipbuilding, you can ship shitty software and patch the hell out of it after release, with few exceptions.

What happens in software is what the market wants. You cannot claim that something is better if it is better at doing something the industry doesn't want (and even that you can't show). It's like Betamax vs. VHS: Betamax was better on some metric that few if any cared about, but VHS was better at what people actually cared about.

> Java was successful because

Java was (and is) successful because it was (and is) a very good product with good backing. At the time, other companies, like Microsoft, Borland, IBM and others that few now remember also heavily pushed other products, and they didn't do as well. I am not claiming that Java is the best possible programming environment, but it is pretty funny to claim that some specific platform/language is better with absolutely nothing to support that claim.


> What happens in software is what the market wants.

Sure thing, I'm just emphasizing that it's more of a social/enterprise-management issue than one of software quality.

> Java was (and is) successful because it was (and is) a very good product with good backing.

Or just because it's Sun/Oracle/baked into android/symbian. Who knows? IMHO it's more a case of "nobody was fired for using Java" than "Java is a good language". It's a language for managers, not for programmers.

> Microsoft, Borland, IBM

MS does pretty well with the CLR, though for another reason. At the time there was no product that could compete with Java in terms of marketing, money, support, etc.

Borland had nothing for serious enterprise, MS had only C++ and VB back then, and regarding IBM I don't even know what you mean. Smalltalk?


> Or just because it's Sun/Oracle/baked into android/symbian.

Fact is, in the 25 years since its introduction virtually no one has built a platform with a similar combination of performance, productivity and observability. Java has very little competition in the "serious software" space, although platforms/languages like Node, Python and Go -- which suffer in some or all of the factors I mentioned -- get used in microservices, which are eating into "serious software".

> Borland had nothing for serious enterprise, MS had only C++ and VB back then, and regarding IBM I don't even know what you mean. Smalltalk?

I meant Borland with Delphi (which was quite heavily used for a while, even after Java came out), MS with Visual C++, VB and Fox Pro, and IBM with Smalltalk.


I agree with what you write.

But I also believe the counterfactual that, had Sun/Oracle based their language on (core) ML rather than Java (with suitable pragmatic additions like code loading, and had they been able to convince programmers to use it), then that language would be even more dominant and performant today. Many of Java's mistakes and detours would have been avoided from Day 1.

Can I prove this? No. But the fact that Java and essentially all other successful languages have gradually been adopting essentially all of ML (sans modules) is evidence. It means that language designers like G Bierman, A Hejlsberg, Odersky et al. have also gone Milner's way.


> then that language would be even more dominant and performant today

I think you should watch the first 20 minutes of this talk: https://youtu.be/Dq2WQuWVrgQ

Java's early designers realised (or believed) that the features with the most bang for the buck weren't linguistic, but were: GC, dynamic linking, and performance with observability. Linguistic features, overall, were a detractor from those, because people were hesitant to use them, and so they would have missed out on the big benefits. Java decided to provide these most important features wrapped in a language that was non-threatening at the time; James Gosling called it "a wolf in sheep's clothing." In other words, a familiar language was a very central part of the strategy. You say you believe it would have worked with a non-familiar one; well, we can never know, but if it hadn't, then none of the other, more important features would have mattered.

> But the fact that Java and essentially all other successful languages have gradually been adopting essentially all of ML (sans modules) is evidence

As I said in another comment, it's not all successful languages that are adopting ML's features, only some, and those do it because they influence one another and knowingly borrow from ML because they like it. In other words, the designers of some influential languages like ML. Is it a coincidence? I don't know, but I'm hesitant to say it's evidence that ML was "right." I, too, personally like ML, but I wouldn't claim its design leads to better outcomes than others'. Some successful languages clearly do not adopt ML's features -- e.g. all the untyped ones, and Go.

BTW, Java's objects are a different take on modules. If you've seen 1ML, that's almost an OO language.


I understand the network effect, and I understand that most decisions on PLs are not based on technical reasons. Most non-experts worry mostly about syntax, as per P Wadler:

   "The simplest thing to do is a significant 
   change to the semantics. People won't argue 
   the way they do about syntax."

ML had GC; it would have been easy to add dynamic linking; and X Leroy made OCaml fast with a tiny fraction of the resources that have been poured into the JVM. Admittedly, this was a major breakthrough and was unexpected. I reckon the main problem was getting OO to work properly with generics: type inference becomes a problem and you have to worry about co/contravariance.

A Hejlsberg certainly wasn't on the ML train in the past. For a start, he didn't think generics were a good idea (he told me personally). TypeScript shows that he changed his mind! Go explicitly rejected generics at the start, but Go 2 is scheduled to get them.

Module systems are not such a big deal in my experience; I don't think ML's was a full success.


If we collapse the two concepts of

- best possible X

- most popular X

are we losing possibly interesting perspectives on X?


I don't think AnimalMuppet is confusing most popular with best. But as you don't have the data required to conclude which language is better by any meaningful metric, your claim is just wishful thinking. If ML had won and Java lost, one could just as easily have said that ML was holding programming back from some imagined benefit Java would have brought. I think it is you who are confusing "an interesting perspective" with a value judgment. Yes, ML had pattern matching before Java, while Java had pervasive dynamic linking and dynamic code loading. Those are simple facts and perhaps "an interesting perspective," but how do you decide which is better?


Nobody has credible data enabling reproducible comparison of programming languages. The big lacuna of PL research ...

If popularity mattered ... PHP, JavaScript etc.

I find it fascinating that all successful programming languages eventually evolve toward an ML-like core.


> Nobody has credible data enabling reproducible comparison of programming languages.

Right, so as "better" and "best" are unknown, we should stop using those terms. I like Java and I like ML, but I don't know that one of them is better than the other, and if so which.

> I find it fascinating that all successful programming languages eventually evolve toward an ML-like core.

Some of the successful typed languages, sure: Java, C++, C#. But the three heavily influence one another, and at least Java's and C#'s designers were aware of ML. So I don't think that, to the extent this is true, it's some "natural" draw, but rather a direct or indirect influence. On the other hand, you have Go, which is also quite successful (admittedly in a lower tier of success), and it doesn't seem to be converging on anything ML-like (although it is influenced by another Milner idea), and neither is C.


Absolutely. Popularity is absolutely fallible.

But I don't take claims that "X language was better and should have won" very seriously, partly because they're made by advocates of so many different languages. Should ML have won, or should Pascal/Modula-2/Oberon? I've seen both claims here, just from different people and for different reasons.

But I also don't take such claims very seriously because, while popularity can be wrong, so can unpopularity. That one guy spouting off about how everyone else is wrong and he's right? Sometimes he is right - but not often. More often, that person who thinks they have the right answer is seeing less than the full picture. I mean, how many people use Java? Several million? Would ML really have been a better answer for the majority of those several million people?

I react the same in government, by the way. When someone says, "I know what you should want, and I'm going to make it happen, even if it isn't what you actually want", well, I trust the people to know what fits them better than someone who isn't in their shoes.


Java's inertia was also a side effect of its market: big corps that were used to slow and bloated. But Oracle seems to have changed to follow the more hectic pace of the web 2.0 era.


> deep speculative optimizations

my new blog title


>Java finally gets what's been available in Scala (case classes), C# (structs or maybe properties), Kotlin (data classes) and others for a very long time .. so long we already have tools like Immutables and Lombok to get past this really dumb limitation in Java.

All the ones you named are fairly new languages, and C# does not have anything even remotely like records (structs are value types, and properties are fields) so what's your point? Java historically has been very conservative about implementing things because they usually try to get it right (generics notwithstanding). For that matter, C# rushed into a few things too and screwed the pooch badly on them.


I'd be curious which features you think C# messed up on. I have been using C# professionally since its first beta in 2000, and I can't think of a newer feature that would meet this criterion.

That said you are correct about records. Although it is getting them also.


The one that annoys me the most in C# on a day-to-day basis is Nullable<T>. Java took its time with Optional<T>, but once it was released it was great: it can be used in lambdas, all the libraries are aware of it, etc. C# Nullable<T> is near useless because it only works on value types. I.e., you know, the types that are least likely to generate NullReferenceException for you.
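For comparison, a small sketch of how Java's Optional<T> composes over reference types (findUser here is a hypothetical method for illustration):

```java
import java.util.Optional;

class OptionalDemo {
    // Returning Optional makes "may be absent" explicit in the signature.
    static Optional<String> findUser(int id) {
        return id == 1 ? Optional.of("alice") : Optional.empty();
    }

    public static void main(String[] args) {
        // Composes with lambdas and method references instead of null checks.
        String name = findUser(2)
                .map(String::toUpperCase)
                .orElse("anonymous");
        System.out.println(name);  // anonymous
    }
}
```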


In both C# and Java, reference types are nullable. Nullable<T> exists in C# to provide for nullable value types. It's not comparable to Optional<T> as that's an entirely different feature.


They aren't an "entirely different feature". Why do you think C# 8 introduced nullable reference types that work exactly like Java's Optional? https://docs.microsoft.com/en-us/dotnet/csharp/nullable-refe...


> C# Nullable<T> is near useless because it only works on value types. I.e., you know, the types that are least likely to generate NullReferenceException for you.

You are completely missing the point of Nullable. It was introduced because databases allowed int, float, etc. (value types) to be null. It was never meant to be an Option type. Reference types are already "nullable", so the database returning null for strings wasn't a problem. But databases returning nulls for int was a problem: it led to performance issues (boxing and/or additional code to marshal data from one source to another). If you find Nullable annoying, then you either don't code in C# much or you don't deal much with data/databases.

For what it was intended for, Nullable did its job. And if Nullable is your only complaint, given the extraordinary changes C# has gone through, I think that speaks well for C#.


> You are completely missing the point of Nullable

Am I? Apparently so is everyone else, because Microsoft has reworked Nullable<T> in C# 8 to be almost exactly like Java's Optional! Silly me, right?


Nullable/non-nullable reference types in C# 8.0 have nothing to do with Nullable<T> and aren't even implemented using it.

C# 8.0 implementation of something like Optional<T> for reference types uses static flow analysis to ensure correct use of potentially null values. That's vastly superior in every way to Optional<T>.


[flagged]


Yes, because it's checked by the compiler using normal constructs.

    someNullableInstance.DoSomething();  // compiler warning: possible null dereference
    if (someNullableInstance != null)
        someNullableInstance.DoSomething();  // no warning: flow analysis sees the check

Option<T> is fine, but it's a lot of syntax to use correctly.


> C# does not have anything even remotely like records (structs are value types, and properties are fields) so what's your point?

How would you compare records to C# language features then? Are records value or reference types? Do they live on the stack or the heap?


Reference; heap;

They are data containers, not control structures.


Can't you use structs in C# as data containers?


Ugh, you can, provided that they are 1) small and 2) short-lived.

Because they live on the stack and follow the rules of value types, you really shouldn't be using them for anything major. E.g., if you want to model a SQL query result for a table with more than 3 columns, a struct is probably NOT the way to go; use a POCO class.


C# structs don't always live on the stack (unless you use a ref struct)[0]

[0] https://kalapos.net/Blog/ShowPost/DotNetConceptOfTheWeek16-R...


Well of course. Just like an int doesn't always live on the stack, e.g. when it's declared as a field. If you are piping data from a SQL DB into a List<YourStruct>, the list is a reference type and will of course be on the heap.

Another issue with a struct being big arises when it's passed around and value copying starts to occur. You want to minimize this if the struct is large. My point is that if in doubt, you probably don't want a struct, and hence it's not a general-purpose data container.


C# does not have records but they have been "proposed" and subsequently delayed for some time now.


> C# (structs or maybe properties)

Those are not directly analogous to records. As a matter of fact, C# is getting records as well: https://blog.cdemi.io/whats-coming-in-c-8-0-records/


C# has had records slated for years without delivery, alas. Thankfully we do finally have a version in testing.

In any case, to answer GP: structs should not be considered a valid substitute for records. They are 'value types', so passing them around means you copy all the data every time (i.e. lots of copying if the record is larger than 1 or 2 fields).

Properties get you 'closer': you can write code that is a good bit less verbose than Java's by default (i.e. without Lombok or similar).

F# does have Records, and technically there's nothing stopping you from declaring your records in F# and importing them into C#... but that may not be ideal. I don't think you get the full benefits (like copying a record with specific properties changed)


This looks like syntactic sugar for concepts that already exist?

I think in Java's case, it's only now getting struct-like semantics.

edit: I may be wrong. It could be a purely syntax-level feature in Java as well.


Java records are syntax sugar.

Value semantics are proposed in JEP 169 [1] and are part of Project Valhalla. [2]

[1] https://openjdk.java.net/jeps/169

[2] https://wiki.openjdk.java.net/display/valhalla/Main
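To make the "syntax sugar" point concrete: a one-liner like `record Point(int x, int y) { }` stands in for roughly the hand-written class below (a sketch; the code the compiler actually generates differs in detail):

```java
import java.util.Objects;

// Approximately what the compiler derives from: record Point(int x, int y) { }
final class Point {
    private final int x;
    private final int y;

    Point(int x, int y) { this.x = x; this.y = y; }

    int x() { return x; }
    int y() { return y; }

    @Override public boolean equals(Object o) {
        return o instanceof Point
                && ((Point) o).x == x
                && ((Point) o).y == y;
    }

    @Override public int hashCode() { return Objects.hash(x, y); }

    @Override public String toString() {
        return "Point[x=" + x + ", y=" + y + "]";
    }
}
```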


I believe it's not explicitly looking for get/set strings in method names, but generally inlining trivial methods. There is (or was, last time I checked) a -XX:MaxTrivialSize option to set the size.

Debugging optimized code is awkward, as compilers will inline and eliminate many functions. I assume the debugger is being nice to you and undoing the optimizations to make things more intuitive.


In Java years, Kotlin hasn't had anything for a "long time," the language itself is still a baby. Most Java shops are still living in the "Java 8 days."


I believe that the JIT does not remove get/setFoo by matching the naming pattern, but simply because these methods are trivial and usually not polymorphic and thus prime candidates for speculative inlining.


C# struct is a value type, the Java Record is not a value type, right?


Right, inline classes are the value types (as part of Valhalla).


Function inlining has been in Hotspot forever. Possibly day 1?

In fact there was a point in time where polymorphic inlining was an active field of research and a number of people used the JVM to prototype their theories. Pretty sure a later version of Hotspot had polymorphic inlining as a feature.


LOL, records were an intentional omission in Java's design, similar to output parameters in function calls. It's not like James didn't know about them or couldn't implement them.

The only truly problematic thing in the original Java design was type erasure, leading to a mess in all JVM-based languages like Scala, Kotlin or Clojure, but it saved some memory, so it was a design compromise. C#/CLR chose the other option, making their VM more flexible.


> leading to a mess in all JVM-based languages like Scala, Kotlin or Clojure

!!!

Type-erasure of generics is what gives the other languages flexibility.

Scala eventually gave up on the CLR precisely because CLR reification proved a huge challenge with Scala semantics.

You may not like JVM type-erasure but Scala would literally not be possible without it.

---

If anything, I would argue that Java should have done full type-erasure, not just type-erasure of generic types. (That does come with downsides though.)


> Scala eventually gave up on the CLR precisely because CLR reification proved a huge challenge with Scala semantics.

In other words, Scala was designed around a hack involving the JVM's type erasure, and then its authors found out the hack couldn't be ported to the CLR. That's another way to view it. Type erasure is a half-baked mess in original Java; it was causing headaches right from the first version of generics...


Type erasure is not a hack. It's the most common technique in typed languages. It causes what is, at worst, a minor inconvenience in the Java language in exchange for making the Java platform a very attractive and very convenient target for languages. Reification -- and I believe .NET is the only platform with variance that does it -- makes a very inconvenient compilation target for any language not designed around it.


That sounds like an ex post facto justification. IIRC the designers hit the issue once they started implementing generics, and the gravity of the situation became apparent only at that time. Then another set of hackers figured out a way to utilize it for language interoperability. I am not saying the CLR's final implementation is great either, but it addresses some points the JVM missed, as they had time to learn from Sun's mistakes, while making their own errors in the process as well. Language/platform design is notoriously difficult and unpredictable.


> That sounds like an ex post facto justification.

The initial reason was indeed backwards compatibility. As it turns out, compatibility with Java 1 unintentionally yielded compatibility with future languages too.

Because of this flexibility (ex post facto notwithstanding), the JVM became the most popular VM target ever.

Note that GraalVM -- which from day 1 intended to be a multilingual VM -- also uses type erasure.

---

Note that there is more than one conceptualization of a type system, and in particular of a parameterized type system: C vs C++ vs Java vs Haskell. The more of this is baked into the runtime, the more reflection can be done at runtime, but the less interop is possible with other type systems.

(There is a reason why C, not C++, is the de facto FFI standard, and it's not just age.)


The original issue was compatibility and interop between two languages with different variance strategies. It's just that those two languages were two versions of the Java language. Calling it a "justification" is also misplaced: baking a variance strategy into the runtime makes the runtime very over-fitted to a particular language; AFAIK this is only done by .NET. So if it's a "justification" it is a very reasonable one. I don't think either of the approaches is a black-and-white mistake, but I personally very much prefer Java's. The loss is a very minor inconvenience (mostly the inability to overload methods with different generic arguments, e.g. foo(List<String> x) and foo(List<Integer> x)), and the gain is that the Java platform is a great compilation target for many different languages. Whether or not those who made this decision had other languages in mind is irrelevant; it's a very good decision, even though some would prefer the opposite one. It's certainly not a mistake and not a hack. It's not a hack because it is a well-known, well-understood, common compilation technique; it's not a mistake because there is no alternative that's clearly better, and many (myself included) think it's significantly better than the alternative chosen by .NET.
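The overloading restriction mentioned here is easy to demonstrate: after erasure both hypothetical signatures in the comment below collapse to foo(List), and the underlying reason is visible at runtime:

```java
import java.util.ArrayList;
import java.util.List;

class ErasureDemo {
    // These two would NOT compile as overloads in one class -- after
    // erasure both have the signature foo(List):
    //   static void foo(List<String> xs)  { ... }
    //   static void foo(List<Integer> xs) { ... }

    public static void main(String[] args) {
        // The reason: the generic parameter does not exist at runtime.
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        System.out.println(strings.getClass() == ints.getClass());  // true
    }
}
```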


Type erasure of generic parameters is what allows the Java language, Kotlin, Clojure, JavaScript and other Java platform languages to interoperate so well and share data types with no runtime conversions despite having different variance strategies. By baking variance into the VM, .NET is much less flexible, as can be seen by how cumbersome it is to share data structures among different languages.


How is that? I don't have much experience with .NET languages, so I'm not sure what difficulties lie in .NET interop, but from my very cursory experience with F# and C#, I didn't find anything particularly difficult about using one's data structures in the other (granted they don't have different variance strategies).

What about Java's type erasure lets languages play better with variance strategies and data structures? It's not apparent to me that e.g. IronPython has a hard time sharing data structures with C# (although I haven't actually used IronPython FWIW). Why doesn't the usual strategy of upcast everything to `Collection<Object>` within a language where collections are covariant and then downcast as needed in a language that doesn't care about variance (e.g. a dynamic language) work?


> granted they don't have different variance strategies

... because it's baked into the platform.

> It's not apparent to me that e.g. IronPython has a hard time sharing data structures with C#

https://ironpython.net/documentation/dotnet/dotnet.html#acce...

> Why doesn't the usual strategy of upcast everything to `Collection<Object>` within a language where collections are covariant and then downcast as needed in a language that doesn't care about variance (e.g. a dynamic language) work?

Because that does not work for mutable collections, or even for immutable collections created by, say, an untyped language. Clojure can create a list of strings and pass it to a Java method expecting a List<String>.


Right, but my impression was that that was basically a performance optimization for IronPython, similar to type hints for Clojure, and that they aren't actually necessary. IronPython was mainly what I was thinking of when it comes to alternative variance strategies. I mean, most typed languages I know of with stricter typing regimes than Java either have the same variance schemes or eschew subtyping altogether, JVM, CLR, or otherwise.

Why doesn't that work for mutable collections (are you referring to the need to be invariant here?) or immutable collections? More specifically, why can't you always have `Collection<A>`, upcast to `Collection<Object>`, downcast to `Collection<B>` as an escape hatch? You break type safety but erased generics already break type safety (this is especially clear with C#'s choice to follow Java's convention on array variance and the pain that this causes when it's inconsistent with everything else in C#).

For passing back to a typed language, you add a downcast of `Collection<Object>` to `Collection<String>` and have no runtime conversion cost. Why doesn't that work?

EDIT: Ah I see, is the basic problem you don't get free (more accurately cheap) casts, especially with generics? I'm not familiar with how casting works in the CLR since the vast majority of my time is on the JVM.


> are you referring to the need to be invariant here

Yes, or contravariant.

> erased generics already break type safety

They don't, they just don't enforce type safety across languages. BTW, most languages erase types (including Haskell).

> Why doesn't that work?

Because platforms with safe casts do a runtime instanceof test, and if the platform reifies generics, then it is not true that the runtime type of Collection<Object> can be cast to Collection<String>. But if the generic parameter is erased, they both have the same runtime type.
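To make that concrete in Java terms, here's a sketch of the erased case (javac emits an unchecked warning, but nothing verifies the type argument at runtime):

```java
import java.util.List;

public class ErasedCast {
    public static void main(String[] args) {
        List<Object> objs = List.of("a", "b");

        // Under erasure, every List has the same runtime class, so this
        // unchecked double-cast succeeds; nothing checks <String>:
        @SuppressWarnings("unchecked")
        List<String> strs = (List<String>) (List<?>) objs;

        System.out.println(strs.get(0)); // "a", no exception thrown
    }
}
```

On a runtime that reifies generics, the analogous cast would fail the runtime type test, since List&lt;Object&gt; and List&lt;String&gt; would be distinct runtime types there.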


Sure, Haskell relies on parametric polymorphism and typeclasses to get away with full type erasure with opt-in reification via Typeable. C++ does something similar with templates and has opt-in reification with RTTI. The JVM is just weird because it has some type reification, but not all the way.

RE type safety, it doesn't really make sense to talk about type safety across languages since type safety is tied to a particular language's semantics. I would argue that every ClassCastException is a violation of type safety in Java the language, even if it's reasonable at the JVM level, but whatever, that's just quibbling over definitions.

I see. Basically what you're saying is that you don't really see examples of reified types with unsafe runtime casts? Presumably because that kind of negates the whole point of reifying the types in the first place?


> The JVM is just weird because it has some type reification, but not all the way.

I don't know if it's "weird" given that there are only two examples, and they're different from one another. It's not like there's a norm here.

> I would argue that every ClassCastException is a violation of type safety in Java the language even if it's reasonable at the JVM level but whatever that's just definitions diving.

But you can't get a CCE in Java without explicit casts (or a compiler bug), just as in C#.

> Basically what you're saying is that you don't really see examples of reified types with unsafe runtime casts?

I'm not sure what you're asking.


You can get CCEs without explicit casts or compiler bugs though. javac will catch the most glaring examples of generic type erasure with a warning.

  List<String> myStuff0 = new LinkedList<>();
  List eraseMyStuff = myStuff0;
  // Warning here
  eraseMyStuff.add(1);
  // CCE here
  String myString = myStuff0.get(0);
However, if you obfuscate it a little javac doesn't know about it anymore.

  List<String> myStrings = new LinkedList<>();
  List<Integer> myIntegers = new LinkedList<>();
  myIntegers.add(1);
  Stream.of(myStrings, myIntegers)
    // Wildcard capture
    .map(x -> x)
    .reduce((x1, x2) -> {
      // Would expect this line to blow up either at compile time or at runtime with reified generics
      x1.addAll(x2);
      return x1;
  });
  // Blows up with a CCE
  String result = myStrings.get(0);
The above is a trick that was featured in some HN post from a while ago (maybe a year or so ago? Need to dig it up...) and is not an original idea of my own. It's essentially combining wildcard capture conversion with type erasure to delay a CCE away from the point of the problem. As I remember, this isn't a compiler bug in the sense that it follows the Java spec.

I thought, however, there was an easier trick to get a CCE using type erasure of generics without warnings and without having to involve capture conversion, but it looks like I may have just been wrong about that as my efforts all ended up getting warnings from javac.

Of course whether any of this actually matters is a separate issue altogether. Java is still a workhorse language that powers a ton of software.

As for weirdness, I can't think of any other language with generics that preserves runtime type information without a way of preserving it for generics as well. Most, as you point out, either do full erasure (most of the ML-inspired family), fully preserve all the information (all the .NET languages), let you opt in to preserving generics (Scala, to some extent, with TypeTags), or just forgo generics (Go).

For example in OCaml if you try to do an unsafe and incorrect cast you segfault. IIRC in GHC you end up with a crazy incorrect value (e.g. unsafeCoerce [1, 2, 3] :: Int results in some messed up integer).

My question was me thinking out loud; I was guessing that CIL bytecode has an instruction analogous to checkcast in JVM bytecode that you can just omit, letting you freely cast at will, potentially wreaking havoc if you cast incorrectly. I don't know if that's true. Basically I was separating the verification of type information from the storage of it at runtime. If the CIL separates them, then the downcast trick works. If it doesn't, the trick doesn't work.

Also in a way this unsafe cast is really just a run around reified types to begin with. I guess the only difference is that you still have storage of the types so you can do runtime reflection. On reflection, I can certainly see how this complicates implementing things like higher-kinded types (because you sort of end up implementing half of type erasure yourself) but I'm not sure it complicates marshalling ordinary data.


> You can get CCEs without explicit casts or compiler bugs though

Using raw types is cheating. It's essentially using a different language and, as you say, the compiler will tell you that you are.

> However, if you obfuscate it a little javac doesn't know about it anymore.

It's a bug (rather, a few bugs): http://wouter.coekaerts.be/2018/java-type-system-broken (I assume that's where you got the example from)

You shouldn't get a CCE without explicit casts, and if you do, that's a (soundness) bug in the Java compiler and/or spec. Those bugs aren't always fixed quickly when they're not very important in practice (if it's determined that it's hard to hit them by mistake, which is why it's taken 15 years to find them).

> Would expect this line to blow up either at compile-time or at runtime with reified generics

Reified generics have no impact at compile time. Everything a language with reified generics knows at compile-time, a language with erased generics also knows. However, a runtime with reified generics and a similar compiler bug would, indeed, throw at this line at runtime.

> As for weirdness, I can't think of any other language with generics that preserves runtime type information without a way of preserving that for generics as well.

There are good reasons for doing it the Java way: so that you don't bake a language's particular variance strategy into the VM. There are also good reasons for doing it the C# way: it's slightly more convenient for that one language. I can't think of any other language with variance and a runtime type system that reifies generics for reference types. When all you have are two instances, it's hard to say which of them is weird.

> If the CIL separates them then the downcast trick works. If it doesn't the trick doesn't work.

I don't know CIL, but looking at how languages behave on .NET to interop, I wouldn't think that can work. It would totally break the safety of the runtime type system if you could cast without a typecheck.

> I guess the only difference is that you still have storage of the types so you can do runtime reflection.

BTW, you can reflect on generic type arguments in Java, as well (https://docs.oracle.com/en/java/javase/13/docs/api/java.base...). When you reflect on a method that returns a List<String>, it will tell you it returns List<String>, but you can't get such a Type from an instance of that list; you can only get a Class.
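A small illustration of that asymmetry (hypothetical class and method names, just for demonstration):

```java
import java.lang.reflect.Method;
import java.util.List;

public class ReflectDemo {
    // The declaration site retains the type argument in the class file...
    static List<String> strings() { return List.of("s"); }

    public static void main(String[] args) throws Exception {
        Method m = ReflectDemo.class.getDeclaredMethod("strings");
        System.out.println(m.getGenericReturnType()); // java.util.List<java.lang.String>

        // ...but an instance only carries its erased Class, with no <String>:
        System.out.println(strings().getClass().getName());
    }
}
```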


That looks like a good addition because anything that makes code more concise and easier to read is a win.

Off topic, but I have been feeling nostalgic for Java, even though I now almost exclusively use Lisp languages and I am happy with that. In the early Java years, Sun's main Java web site had a link to my site and I wrote a number of Java books. I was the number one search result for “Java consultant” for about a decade; Java was good for my career.


Forcing usage of constructors is hardly more readable IMO.

Which bug is easier to spot?

    cat = new Cat(1, 2, 4);
    cat = Cat.builder().eyes(1).legs(2).heads(4).build();
Additionally, from the examples in that post, records aren't valid JavaBeans either, so they can't be used with the myriad of existing tools/libraries that expect beans.

From my first impression, it looks like records don't really address many of the reasons we use Lombok and its ilk, so many of us will just ignore this feature. Or do like I've done in the past couple of years, and just ignore Java altogether.


Java could really use keyword arguments as a replacement for the Builder pattern:

cat = new Cat(eyes=1, heads=2, legs=4);


Too much for small classes/records imho e.g. `new Name("First", "Last")` vs. `new Name(first="First", last="Last")`

Also, IDEs can help with parameter hints when needed.


They can be optional, like in Kotlin -- you can give names to only some (after unnamed ones).


If you're using a decent IDE, that bug is just as easy to spot when using a constructor. My IDE adds the var names in front of the values, so when I create a new Cat(1,2,4) it actually looks like this:

        new Cat(eyes:1, heads:2, legs:4);



If using an IDE is necessary to make the code readable, then Java should ship with an IDE. Or just add it to the language.

I find myself often reading code outside of an IDE, such as in GitHub or other hosted repos. Why should it be less readable in those cases? Because it's marginally easier to write?

I also write a lot of Go code, and appreciate how "go vet" disallows unkeyed struct literals, forcing you to write the field names when creating structs. It is not an IDE feature, and I can freely browse GitHub or grep around in the command line and the code is just as readable as in an IDE. It also means Go works in a lot of editors and IDEs more easily, because those editors/IDEs don't need to implement a bunch of extra features to make the code actually readable.


Verbosity is a huge problem in a language. Verbosity that makes unfamiliar code easier for you to read can actually hurt readability for a more familiar reader trying to figure out where the bug is. I don’t need that thisIsTheBloodyFooArgument: all over the place if I already know it is the bloody foo argument. Eventually, the code becomes so repetitively noisy that I have to rewrite it stripped down into a text file just to get a gist of the real things I need to understand (like when comments get in the way and don’t help).

But really, this is a problem we don’t need to have...tooling could make it so we have the verbose repetitive version that make unfamiliar readers happy and the more concise to the point version that makes familiar readers happy. But for some reason, our profession is infatuated with text files that make this best of both worlds impossible.


> But for some reason, our profession is infatuated with text files that make this best of both worlds impossible.

Because people already have existing tools, lots of them. Text files are an easy way to support all of them. What kind of tooling do you propose that would immediately integrate with IntelliJ, Eclipse, NetBeans, emacs, vim, VSCode, Atom, Sublime, GitHub browsing, grep, etc.?


Yes, but then you have people like the comment I was replying to who say that those tools must be inflexibly fixed when the language ships...and so those languages that expect less from tooling (like Go) are somehow better than those (like Java) that expect more. That the text should speak for itself well enough without tooling.


Assuming you are using an IDE, new Cat(1,2,4) is far more concise, plus the IDE already tells you what the arguments are.


I expect the reason that they didn't want them to be valid beans is because then they'd lose immutability.


I think it's perfect for parameter objects


Which Lisp flavour do you use Mark? I worked in Java for 15 years before moving to Clojure a decade ago.

I find it superbly powerful to be able to leverage the JVM and the Browser with the same language.


I mostly use Common Lisp, Racket, and periodically like to try the latest versions of Gambit, Chez, and Guile.

I used to have two customers who paid me to use Clojure on their projects, I liked Clojure (donated in the early days) but fast natively compiled tools like SBCL, etc. feel better to me. You have found a good home: Clojure and the JVM ecosystem is good for a wide range of uses.


Have you tried Clojure with the GraalVM native-image? I am curious to what extent native-image can delete the overhead of Clojure and make fast natively compiled binaries.


No, I haven’t tried it. I retired last year and the limited time I spend on tech is now pursuing personal research on hybrid symbolic and deep learning systems; I use the Hy language (hylang) for this because it wraps Python (for deep learning) in a reasonable Lisp language with a Clojure-like syntax. I am also writing a book on Hy. Otherwise Common Lisp and Racket (and a little Haskell) suit my hacking needs.


> That looks like a good addition because anything that makes code more concise and easier to read is a win.

Arguably, as Java already had Lombok for years, this does not add anything really new.


+1 for mentioning Lombok. I used to use it a long time ago.


This is akin to Scala's case classes; such a welcome change to the Java world.

Apart from being a functional language, one of the main draws to Scala is the ease with which one can get rid of boilerplate code. So kudos to the Java team!


But case classes are also algebraic data types so Java is not yet as great.


I'm curious if this will make AutoValue obsolete: https://github.com/google/auto/blob/master/value/userguide/i...


It should.


I am continually surprised that Java hasn't adopted C#'s property syntax. Having to write backing fields and your own getters and setters takes me back.


As someone who codes C# every day, I hate properties with a passion. Its my least favourite feature of c#


Why?


Python recently added the same thing in 3.7. They call them data classes.

https://docs.python.org/3/library/dataclasses.html


Python has had namedtuples for a long time, which is a similar idea implemented less cleanly (in fact the implementation is a horrible hack because of some rare corner cases) and without dependence on Py3 annotation syntax


> less cleanly

Python has a number of these warts. Same with classes and type annotations.


I'm pretty excited for this. I've seen Java developers jump through hoops so they wouldn't have to implement a new class when a Python developer would use a NamedTuple and not think twice.

And yes, there are codegen options (Autovalue, Lombok, etc.), but the build process is a little clunkier, and IDE support isn't great.


I guess I was hoping the same end result would have been reached by different means.

I would like to see:

new non-canonical java.lang.BaseObject, which implements nothing.

new interfaces for .toString(), .equals( that ), and .hashCode().

Canonical java.lang.Object would implement these new interfaces.

Any code wanting struct, data, record style objects would subclass BaseObject.

The JVM would auto-generate missing .toString(), .equals( that ), and .hashCode() as needed, like when you add a data class to a HashMap.

New syntax for properties, to eliminate need for setter/getter boilerplate.

New syntactic sugar for the static initializer constructor trick, to eliminate constructor boilerplate. Basically make new MyDataObject() {{ a = 1; b = "abc" }} look prettier.
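For reference, the trick being referred to is presumably double-brace initialization, i.e. an anonymous subclass with an instance initializer; a common (if discouraged) sketch with a map:

```java
import java.util.HashMap;
import java.util.Map;

public class DoubleBrace {
    public static void main(String[] args) {
        // Outer braces: anonymous subclass of HashMap.
        // Inner braces: instance initializer that runs at construction.
        Map<String, Object> data = new HashMap<>() {{
            put("a", 1);
            put("b", "abc");
        }};
        System.out.println(data.get("a")); // 1
    }
}
```

It works today, but each use site generates an extra class and captures the enclosing instance, which is part of why dedicated sugar would be welcome.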

--

Best as I can tell, only the last two would require language changes.


The best thing about this is VM support for records. This means that all the other JVM languages that implement similar features can potentially leverage the underlying support for records to make better implementations themselves.

I, for example, don’t immediately see any reason Kotlin can’t use this to make a more efficient implementation of data classes.


What VM support? Records are a Java compiler feature. It just auto-generates all the boilerplate at compile time, there doesn't seem to be any bytecode or VM support for this feature?

It seems functionally identical at the VM/bytecode level to what Kotlin's data classes already do.


My mistake, I thought this was more akin to value types proposed in this JEP

https://openjdk.java.net/jeps/169

This is disappointing.


Records are meant to play nicely with inline types, i.e. when inline types land, you'll be able to declare inline records.


I feel like Java is making great strides lately, but that we're also getting slightly lesser versions of capabilities available in other languages.

Records, for example, are directly inspired by Kotlin's data classes, and function almost identically... except that Kotlin allows you to have mutable or immutable fields in a data class, and automatically provides a `copy` method (similar to Lombok's `wither`) that allows you to use a builder pattern.

This difference can be very important. For example, I don't see how a Record class could be used with most JPA-style ORMs, which immediately rules out Records for many of the most common use cases.


Records are actually inspired by product types, and will be central to the pattern-matching feature that has started making its way into the language (and, combined with the upcoming sealed types, would form Java's ADTs). Allowing mutation doesn't make much sense, then.


>Allowing mutation doesn't make much sense, then.

Why?


Because that's not what records are about. See https://openjdk.java.net/jeps/359, https://cr.openjdk.java.net/~briangoetz/amber/datum.html and https://cr.openjdk.java.net/~briangoetz/amber/pattern-match....

For one, mutation doesn't play well with pattern matching.


Explain why mutation doesn't play well with pattern matching.


A pattern requires matching against a piece of data possibly composed of several related components that are assumed to be coherent with one another, not some mutable object. How would you match Point(3, y) against a mutable object, if by the time you grab the y, the x component may no longer be 3 (recall that Java is a multithreaded language, and x could be volatile)? By analogy, think of how matching a regex pattern would work against a mutable array of characters as opposed to an immutable string.


Different components being in sync is a property of objects in general. E.g. Java's ConcurrentHashMap is mutable but its fields are in sync.

For pattern matching to see an in-sync view of the object, the object must provide such a view. If the object allows concurrent modification (most don't) then it must provide a custom view. Otherwise a generated default view should be sufficient.

If you look at [1] you can see that Goetz does propose a function that provides such a view for classes and not just records (cf. the extractor).

[1] https://cr.openjdk.java.net/~briangoetz/amber/pattern-semant...


But that's exactly what I meant by mutation does not play well with pattern matching: you need to work, sometimes work hard (in fact you need to somehow hide the mutability) to make it work. I'm not sure what you meant exactly by the fields of ConcurrentHashMap being "in sync" (the contents of the hash map certainly aren't and it does not provide a snapshot view), but their coherence requires hard work. Unless you know how the particular synchronization mechanism, if any, works, you cannot provide a view suitable for pattern matching, so it cannot, in general, be automatically generated for mutable objects, at least not in a way that works as well as you'd want it to. For example, a naive extractor to a Pair class with two volatile fields could return two values that have never been the values of the object's fields at any point in time (i.e. you'd pattern-match `case Pair(x, y)` and get, say, x=2 and y=3 despite the pair never having been (2,3)). That "works" in the sense that you have some result, but it's probably not what you want.


ConcurrentHashMap contains multiple volatile fields. These are kept in sync (i.e. their invariants are preserved) by the ConcurrentHashMap implementation. Concurrent programming being hard is a property of concurrent programming. It being hard does not require pattern matching.

Most mutable objects are easy because they don't support concurrent modifications. Therefore pattern matching would not make them harder to implement. Pattern matching such an object while it is being concurrently modified is simply a programmer error just like any other use of the object.

Not allowing pattern matching of mutable objects would be similar to not allowing equals and hashCode methods on such objects and for similar reasons.


I'm not talking about not allowing. I merely explained, because you asked, why pattern matching does not play well with mutation, and it doesn't. equals and hashCode also don't play well with mutation, and indeed they are not automatically generated for mutable objects, but they are for records. As the author of a class you can provide a reasonable implementation of them, as well as of a deconstructor for pattern-matching, but that requires knowing more about the particulars of the class.

There are other reasons for not allowing mutable records, but most -- including pattern matching -- can be summarized as "that's not what records are about, which is being 'dumb' data aggregates". You can read a more detailed discussion of the subject here: https://cr.openjdk.java.net/~briangoetz/amber/datum.html


>I merely explained, because you asked, why pattern matching does not play well with mutation, and it doesn't.

What you explained so far is that it doesn't play well with concurrent modification. And then I said that the "not playing well" comes from the concurrent part, not from the modification part.

I'm less interested in the philosophy of records and more in the use cases they support.


> What you explained so far is that it doesn't play well with concurrent modification

Fair enough.

> I'm less interested in the philosophy of records and more in the use cases they support.

The use cases and motivation, as well as why mutation is wrong for records, are explained in the links I provided. Here they are again:

* https://openjdk.java.net/jeps/359

* https://cr.openjdk.java.net/~briangoetz/amber/datum.html


Mutation without concurrency is still a problem for pattern matching. To return to your earlier example, suppose you've matched Point(3, y) and you're now in some code that assumes that the x-value of the Point is 3. Now, you call some method that has some reference to the point you're working on, and it mutates the x-value to 4. The assumption in that code after the match is now broken.
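A single-threaded sketch of that hazard (hypothetical MutablePoint, with an ordinary if-check standing in for the match):

```java
public class StaleMatch {
    static class MutablePoint { int x = 3, y = 4; }

    // Some callee that happens to hold the same reference and mutates it:
    static void nudge(MutablePoint p) { p.x = 4; }

    public static void main(String[] args) {
        MutablePoint p = new MutablePoint();
        if (p.x == 3) {              // "matched" Point(3, y)
            nudge(p);                // any intervening call may mutate p
            System.out.println(p.x); // prints 4: the matched value is stale
        }
    }
}
```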


> Records, for example, are directly inspired by Kotlin's data classes, and function almost identically... except that Kotlin allows you to have mutable or immutable fields in a data class, and automatically provides a `copy` method (similar to Lombok's `wither`) that allows you to use a builder pattern.

Sounds a lot like Scala case classes. More accurate to say that both Java records and Kotlin data classes are inspired by Scala case classes -- not that it matters, programming languages authors are constantly stealing ideas (sorry, drawing inspiration) from other languages; thereby moving the state of the art into the mainstream.


Java is maintaining significant backwards compatibility. Kotlin does not have to do this. More constraints leads to a different design.


Makes absolutely no difference here, Kotlin data classes are just regular Java classes like records are - the compiler implements these methods automatically.

Oracle literally did the minimum effort required here, Java Records don't even generate bean-compliant getters FFS.


You're confused because you don't understand what this feature is and compare it to things with different goals.

https://openjdk.java.net/jeps/359:

> Records provide a compact syntax for declaring classes which are transparent holders for shallowly immutable data... While it is superficially tempting to treat records as primarily being about boilerplate reduction, we instead choose a more semantic goal: modeling data as data. (If the semantics are right, the boilerplate will take care of itself.) It should be easy, clear, and concise to declare shallowly-immutable, well-behaved nominal data aggregates... It is not a goal to declare "war on boilerplate"; in particular, it is not a goal to address the problems of mutable classes using the JavaBean naming conventions.
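For concreteness, the shape the JEP describes looks roughly like this (requires a JDK with records; they were a preview feature in Java 14):

```java
// The header is the full state description; the compiler derives the
// constructor, accessors, equals, hashCode, and toString from it.
record Range(int lo, int hi) {
    // A compact constructor can still validate the state:
    Range {
        if (lo > hi) throw new IllegalArgumentException("lo > hi");
    }
}
```

Note the accessors are lo() and hi(), not getLo()/getHi(), and there are no setters, which is exactly the "records are not JavaBeans" point.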


I think the point a lot of posters are getting at is, maybe reducing boilerplate for JavaBeans is a more useful goal, given the huge ecosystem of tools and code that assumes the JavaBean pattern.

That's what Kotlin did and it's been very successful, so there's no fundamental reason why not. It's a philosophical objection of the form "mutability is bad", which feels somehow Haskellish and un-Java-like. What happened to being a blue collar language?


> so there's no fundamental reason why not

That another language has done it does not mean there is no fundamental reason not to do it. A fundamental reason can be that its cost/benefit ratio isn't good enough. Other languages weigh language complexity less harshly than Java.

> That's what Kotlin did and it's been very successful

In what sense has it been "very successful"? That the small minority of people on the Java platform that have chosen to use Kotlin like this feature? Currently, there are still more people using Scala, does that mean that all Scala features have been even more successful?

In any event, that would be a different feature, and it will be judged, and then perhaps adopted, separately.

> It's a philosophical objection of the form "mutability is bad", which feels somehow Haskellish and un-Java-like. What happened to being a blue collar language?

That's not the objection, and Java is still blue-collar. Records don't do that not because of some "anti mutation" sentiment but because records are about something else than reducing JavaBean boilerplate. See the discussion on mutability here, https://cr.openjdk.java.net/~briangoetz/amber/datum.html, which I'll reproduce below for your convenience:

> The stricture against mutability is more complex, because in theory one can imagine examples which do not fall afoul of the goals. However, mutability puts pressure on the alignment between the state and the API. For example, it is generally incorrect to base the semantics of equals() and hashCode() on mutable state; doing so creates risks that such elements could silently disappear from a HashSet or HashMap. So adding mutability to records also likely means we may want a different equality protocol from our state description; we may also want a different construction protocol (many domain objects are created with no-arg constructors and have their state modified with setters, or with a constructor that takes only the "primary key" fields.) Now, we've lost sight of the key distinguishing feature: that we can derive the key API elements from a single state description. (And, once we introduce mutability, we need to think about thread-safety, which is going to be difficult to reconcile with the goals of records.)

> As much as it would be nice to automate away the boilerplate of mutable JavaBeans, one need only look at the many such attempts to do so (Lombok, Immutables, Joda Beans, etc), and look at how many "knobs" they have acquired over the years, to realize that an approach that is focused exclusively on boilerplate reduction for arbitrary code is guaranteed to merely create a new kind of boilerplate. These classes simply have too many degrees of freedom to be captured by a single simple description.

> Nominal tuples with clearly defined semantics is something that can make our programs both more concise and more reliable for a lot of use cases -- but there are still use cases beyond the limits of what they can do for us. (That doesn't mean that there aren't things we can do for these classes too -- it just means that this feature will not be the delivery vehicle for them.) So, to be clear: records are not intended to replace JavaBeans, or other mutable aggregates -- and that's OK.


I guess I mean successful in two senses:

1. Numerical success. Kotlin has got a lot of adoption very quickly, by the standards of JVM languages. It's now the default language for Android development having displaced Java itself as such (of course, Android continues to use the rest of the Java platform's design and continues to support Java the language).

2. The data class feature itself doesn't seem to suffer the problem of many knobs and boilerplate cited in Brian's doc. Nor does it have any major incompatibilities or surprising semantics. Actually there are no options that I can think of. You just mark a class as data and it obtains some new restrictions and features. Those aren't always appropriate but you can't really customise anything beyond overriding/replacing some of the generated methods.

And by "fundamental" I mean in the computer science sense, not the tradeoffs sense. For instance if a new feature would require major backwards incompatible changes to the type system which would make it a different language.

The design doc says that creating lots of new boilerplate is guaranteed if you add mutability, which is a strong claim. It doesn't seem that way when I use Kotlin data classes. It feels like a big win in boilerplate reduction even though the output is basically an enhanced bean. Yes, you have to be aware of things like not mutating objects used as map keys, same as in Java. OK. No big deal. There are fewer reasons to use mutable objects when you have copy constructors anyway so it's rare to want to do that and an IDE static analysis could warn/error you if you use a mutable type as a map key. (albeit I just checked and IntelliJ doesn't).

If you look at the personas defined at the top of the design doc, which seem pretty accurate, it's notable that most of them are people who just want less ceremony (Boilerplate Billy, POJO Patty etc).

I don't think the lack of mutable records will be a big deal, no more than deviating from the getFoo naming pattern will be, but these small things add up.


First, let's separate Kotlin's adoption on Android and on the Java platform. Android has never been Java, it is now very different from Java, and while adoption of Kotlin for Android is high (~60% according to Google), adoption of Kotlin for Java is quite low (less than 3% according to Indeed.com).

Second, records and Kotlin's data classes are not the same feature and they don't have the same goals. Records are primarily about adding named product types, or named tuples, not about reducing boilerplate for mutable objects, while Kotlin's data classes allow you to add properties that are excluded from equals, which is, IMO surprising, and in any event means that data classes do not serve records' goals. That's also why there is no deviation from a naming pattern, because this concept didn't exist in Java before. Records are not beans, and there is no need for them to follow a naming pattern of something they're not; even if there is some benefit in doing so nonetheless, there is also a disadvantage.

It's perfectly OK to disagree with Java's designers and think that Java should have added Kotlin-style data classes instead of records and while neither satisfies the others' goals, you think that the former's goals are more important, but that is just a matter of taste. Like Boilerplate Billy, you can think that reducing boilerplate in itself is a big benefit. But I don't think you can point at any evidence that shows the decision to implement records is wrong. So yes, Java could have added a different feature, but it chose not to.


I'd equate Kotlin's data classes to Lombok's @Data annotation.


Bean compliant getters are horrendous, most newer libraries avoid them.


Agreed, the declaration syntax is awful and the generated methods aren’t idiomatic.

That said, I don’t even understand the point of generating getters in this case. The fields could just be declared public final. Java’s obsession with wrapping fields in methods has never made sense to me.


Scala lets you quietly replace a field with an accessor method, but Java does not. You'd have to change all consumer code to add laziness or test mocking/spying.


In my experience, accessor methods are a particularly nasty kind of YAGNI. I'm not saying you never need to override a field to make it lazy, for example, but the benefit of getters and setters has never, in my practical experience, outweighed the cost.


If they're generated and inlined away, what's the remaining cost?


The cost is in typing effort, fluency, legibility and complexity. There is no good reason to prefix every field access with get and end it with (). It’s five more keys to type and makes the code more verbose.

thing.getId() is longer and less readable than thing.id.

Why does the compiler have to spend time inlining the call? Can you guarantee that it does?

IMO the question shouldn’t be “what is the cost of this complexity?” The question should be, “what is the benefit?”. And in 20+ years of java dev, I just don’t see any benefit in accessor methods at all. We never wire up our components with JavaBeans introspection... so why do we keep this nonsense going?


The article doesn't really touch on how this plays with generics, if at all. Anyone know if we can do:

    public record MktOrder(T currency, int amount)<T> { }
or

    public record MktOrder<T>(T currency, int amount) { }

Would be a bummer if it breaks the idiomatic (and canonical) way of working with generics. In any case, it's a much needed and welcome feature.


Yes, records work with generics pretty much the same way they work with classes. You can do

    public record MktOrder<T>(T currency, int amount) { }
    var m1 = new MktOrder<USD>(usd, amount);
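To make the parent's fragment self-contained (the `USD`/`usd` names above are just placeholders), here is a runnable sketch; note that records require JDK 14+ with preview features, or JDK 16+ where they are final:

```java
// A generic record: the type parameter goes before the component list,
// exactly as with a generic class.
record MktOrder<T>(T currency, int amount) { }

public class Demo {
    public static void main(String[] args) {
        var m1 = new MktOrder<String>("USD", 100);
        // Record accessors use the component name, not getX():
        System.out.println(m1.currency() + " " + m1.amount());
        // The generated toString() lists all components:
        System.out.println(m1); // MktOrder[currency=USD, amount=100]
    }
}
```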


Can anyone explain to me what the benefit of this is over Lombok annotations? I like the Lombok builder syntax which this seems to lack.


Coming from scala, I consider this an essential language feature, and I don't want to have to depend on a third-party library / compiler plugin / annotation processor to get it.

This should be standardized by the language.


Besides just the convenience of it not being a dependency, this will allow other language-level features like pattern matching and object deconstruction in the future. Basically anything that requires the language to have a tuple/named tuple type.


> tuple/named tuple type

As someone not familiar with using those, can you ELI5 what those are and what their benefits are?


A tuple is the generic term for a pair, triple, etc. It's a collection of data of heterogeneous types, unlike an array which is homogeneous. When the members are unnamed, the data is indexed by its position like an array.

E.g. a method could return an unnamed tuple ("foo", 7) which might have type (String, int).

In some languages like ML, having tuples is such a fundamental part of the language that things like functions always act on a single argument which is actually just a tuple. So max(5, 5) is a function that takes a tuple of two ints as its parameter.

When you have tuples as a first-class concept, you can start to do some interesting shortcuts like:

    (b, a) = (a, b); // swap two variables with no intermediate
or

    String (first, last) = name;
This is called deconstruction, where you extract a tuple into variables representing its members. This is common in a ton of languages (Typescript, C#, ML, Scala, Python, etc). It also forms a cornerstone of pattern matching expressions in some languages (Scala, Haskell) where you can create a condition and simultaneously assign the data you care about to variables that can then be used in that branch. Scala pseudocode:

   sum(list: ConsList<int>) {
     return list match {
         Empty(): 0;
         Node(first, rest): first + sum(rest);
     }
   }
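For comparison, roughly the same thing can now be written in actual Java, using records as the tuple-like carriers; the record-pattern deconstruction in the switch requires a much newer JDK (21+) than the one the article covers:

```java
// A cons list modelled as a sealed interface with record cases.
sealed interface ConsList permits Empty, Node { }
record Empty() implements ConsList { }
record Node(int first, ConsList rest) implements ConsList { }

public class Sum {
    static int sum(ConsList list) {
        // Exhaustive switch over the sealed hierarchy; the Node arm
        // deconstructs the record into its components (JDK 21+).
        return switch (list) {
            case Empty e -> 0;
            case Node(int first, ConsList rest) -> first + sum(rest);
        };
    }

    public static void main(String[] args) {
        var list = new Node(1, new Node(2, new Node(3, new Empty())));
        System.out.println(Sum.sum(list)); // 6
    }
}
```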


It's a language feature, so esp. library developers would pick it up over Lombok to reduce their dependency graph.

That said, I also like Lombok's annotation style more, I can pick & filter which members are relevant for `toString()` and `equals()`


Records are immutable. A normal POJO can be too, but you can't tell without reading it.


Interesting... why stop at just the constructor? Couldn't we introduce a fluent/builder pattern as well?


I suppose there's nothing stopping you from writing your own builder to complement the record class, though of course part of the point here is to avoid additional boilerplate.
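As a sketch of what that hand-rolled builder might look like (the `Point` record and nested `Builder` are made-up names, not anything the language generates):

```java
// Records may declare static members, so the builder can live
// inside the record it constructs.
record Point(int x, int y) {
    static class Builder {
        private int x;
        private int y;
        Builder x(int x) { this.x = x; return this; }
        Builder y(int y) { this.y = y; return this; }
        Point build() { return new Point(x, y); }
    }
}

public class BuilderDemo {
    public static void main(String[] args) {
        Point p = new Point.Builder().x(3).y(4).build();
        System.out.println(p); // Point[x=3, y=4]
    }
}
```

Of course, this is exactly the kind of boilerplate records are meant to eliminate, so it mostly pays off for records with many components.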


Yes, it's absolutely not innovation - but this is actually really cool in the context of Java, which was never an innovator.


A missed opportunity?

If the struct graph is all records or primitives, the compiler should include a toJson() and fromJson() in the generated class.

It could be done with classes too, but if it was records/prims only, then it could be done without using reflection.
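Nothing like this is generated today, but a sketch of what reflection-free serialization looks like when written by hand (the `Address`/`Person` records and `toJson()` method are illustrative only, and skip string escaping):

```java
// Each record emits its own components; nested records recurse.
record Address(String city, String zip) {
    String toJson() {
        return "{\"city\":\"" + city + "\",\"zip\":\"" + zip + "\"}";
    }
}

record Person(String name, int age, Address address) {
    String toJson() {
        return "{\"name\":\"" + name + "\",\"age\":" + age
             + ",\"address\":" + address.toJson() + "}";
    }
}

public class JsonDemo {
    public static void main(String[] args) {
        var p = new Person("Ada", 36, new Address("London", "EC1"));
        System.out.println(p.toJson());
        // {"name":"Ada","age":36,"address":{"city":"London","zip":"EC1"}}
    }
}
```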


So basically Lombok's @Data annotation with extra steps.


Why more steps? You even have fewer keystrokes in case of record.

I'm looking forward to ditching lombok finally.


So Beans are back?


Where did they go?


We call them pojos, now.


we are on java 14 already wtf?? I really don't like the new frantic release cycle. Cassandra is still stuck on Java 8...


There's only been one "real" release since Java 8, which was Java 11.

By which I mean those are the only LTS releases since the shift to frequent releases. The rest have been glorified betas, sometimes with major compiler bugs included (such as https://dzone.com/articles/jdk-91011-side-effects-from-on-ja... ). They have very little support and a super tiny maintenance window. I wouldn't expect any large or stable library/project to use anything but the LTS releases as a result. It looks like Java 17 is the next planned LTS release, which is quite a ways off.


Everybody is still stuck on Java 8.


I know of a project with substantial GC problems that's stuck on Java 7.

I just roll my eyes at this point when I hear about it.


I look forward, with great enthusiasm, to the day when we figure out what the optimal release cycle is, where you keep the bulk of your users in space between bleeding edge and falling off the upgrade bandwagon entirely due to being 'so far behind'.

But today is not that day.


It's OK. Java 8 is supported until:

* June 2023 - Redhat [1]

* June 2023 - Corretto [2]

* September 2023 - AdoptOpenJDK [3]

  [1] https://access.redhat.com/articles/1299013
  [2] https://aws.amazon.com/corretto/faqs/
  [3] https://adoptopenjdk.net/support.html


Java will have its Python 3 moment... I have already upgraded to Java 11 without too many issues, however.


From the first example:

    private final double price;
    private final LocalDateTime sentAt;
Can't help but notice that they won't be able to trade at exact decimal amounts whose fractions aren't sums of powers of 1/2.

Also wouldn't Instant be better for a timestamp?
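A quick demonstration of the point, plus what the record might look like with the usual substitutes (the `FxPayment` name is made up for the example):

```java
import java.math.BigDecimal;
import java.time.Instant;

// 0.1 has no exact binary representation, so repeated double arithmetic
// drifts, while BigDecimal built from decimal strings stays exact.
// Instant is the usual choice for an absolute timestamp, since
// LocalDateTime carries no time zone.
record FxPayment(BigDecimal price, Instant sentAt) { }

public class MoneyDemo {
    public static void main(String[] args) {
        double d = 0.1 + 0.1 + 0.1;
        System.out.println(d == 0.3); // false: d is 0.30000000000000004

        BigDecimal b = new BigDecimal("0.1")
                .add(new BigDecimal("0.1"))
                .add(new BigDecimal("0.1"));
        System.out.println(b.compareTo(new BigDecimal("0.3")) == 0); // true

        var p = new FxPayment(new BigDecimal("99.95"), Instant.now());
        System.out.println(p.price()); // 99.95
    }
}
```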


Production code likely should use neither double for currency nor LocalDateTime for a point in time.

But for the purpose of the article, which is about Records being a new type of Java class, this is kind of irrelevant.


I think it's kind of a problem if examples of how to write Java are full of bad practices.


For what it's worth, I work at a place that would write a similar domain object, and we would use both of those and not feel bad about it.

The FUD on money as a double is way overblown in my opinion - it only becomes a bad idea once you have to do precise math with it (which is admittedly a lot of the time when you're dealing with money, but surprisingly rarely in some domains).


Examples are usually bad production code. The sky is blue.


So Java is basically getting structs. Seems awesome.


There's a lot to like about this, but I laughed at trying to cast nominal typing as a design choice in typing per se rather than e.g. a tool to make the VM's loader simpler. It gives away the lie just two paragraphs after explaining it:

> This choice was partially driven by a key design idea in Java’s type system known as nominal typing, which is the idea that... each type has a name that should be meaningful to humans... the compiler still produced two different anonymous classes, $0 and $1...

Names extremely meaningful to humans!



