Java's records, Lombok's data, and Kotlin's data classes (nipafx.dev)
196 points by gher-shyu3i 8 months ago | 288 comments



Kotlin data classes can use Java records as their implementation if running on JVM, so it’s not like you have to choose one or the other.

https://kotlinlang.org/docs/jvm-records.html#declare-records...
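A minimal sketch of the declaration side, assuming a JVM target of 16+ as the linked docs describe:

    @JvmRecord
    data class Point(val x: Int, val y: Int)

The compiler then emits a real java.lang.Record subclass, so Java callers see an ordinary record.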


This is great news!

I was also kind of surprised to see the article talking about algebraic data types... table stakes for ADTs are sum types, and I have missed them considerably in the Java ecosystem.

Kotlin has something like them with sealed classes, although I'm a newcomer to the Kotlin and Spring Boot ecosystem, so I don't see exactly how to make some simple case like the JSON

    {"type": "left", "abc": 123}
    {"type": "right", "def": "456"}
turn into some structure like:

    sealed class EitherTest {
        data class Left(val abc: Long): EitherTest()
        data class Right(val def: String): EitherTest()
    }
rather than the hacky version that you have to do in relational databases and Java which don't support such things,

    enum class EitherType { LEFT, RIGHT }
    data class EitherTest(val type: EitherType, val abc: Int?, val def: String?)
Like I’m not saying that there’s no way, I’m sure there’s a way... just that the ecosystem seems so hesitant to embrace sum types that like the above sealed class is widely viewed as a hack and there is no statement about “here is how you actually use sum types for everything in your Spring application with Kotlin.”

Was gonna give the choose-your-own-adventure example of why sum types are handy and how you have to kind of hack around them with inheritance when you don't have them but it occurs to me that anyone who has stuck with this comment this far probably already has some familiarity with this?


Sealed classes are (most likely) going to be in the next version of Java, Java 17: https://openjdk.java.net/jeps/409



In traditional Java, you're probably using Jackson for your JSON. You can achieve those sorts of results by specifying the classes and type tags in annotations on an abstract Either class to use Jackson's polymorphic serialization feature.
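A sketch of what that can look like, reusing the EitherTest sealed class from upthread with jackson-module-kotlin (the discriminator values here are just illustrative):

    import com.fasterxml.jackson.annotation.JsonSubTypes
    import com.fasterxml.jackson.annotation.JsonTypeInfo
    import com.fasterxml.jackson.module.kotlin.jacksonObjectMapper
    import com.fasterxml.jackson.module.kotlin.readValue

    // "type" is read as the discriminator and mapped to a subclass
    @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "type")
    @JsonSubTypes(
        JsonSubTypes.Type(value = EitherTest.Left::class, name = "left"),
        JsonSubTypes.Type(value = EitherTest.Right::class, name = "right")
    )
    sealed class EitherTest {
        data class Left(val abc: Long) : EitherTest()
        data class Right(val def: String) : EitherTest()
    }

    fun main() {
        val e: EitherTest = jacksonObjectMapper().readValue("""{"type": "left", "abc": 123}""")
        println(e) // Left(abc=123)
    }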


The point of sealed classes is that you have a known number of possible representations. So you can have `when` blocks that exhaustively check all possibilities. Having an open class hierarchy would not work for that purpose.


this technique works with sealed classes, i’ve done it before. it throws a runtime exception if it can’t identify the discriminant.


The easiest way is with GSON:

https://mkyong.com/java/how-do-convert-java-object-to-from-j...

You would just have to have annotations on the enum names to get the lowercase values in your example json.


I wonder why, unlike a Java record, in Kotlin a data class annotated with @JvmRecord cannot be local.

Being able to use local records is a useful feature in particular in unit tests.

  @Test
  void testPoints() {
     record Point(int x, int y) {}
     ...
  }


I don't see the point; you're not testing code that depends on Point, as it's local, so `x to y` (i.e. `Pair(x, y)`) would work just as well.

If there's a particular language feature records enable that Kotlin doesn't with ordinary data classes in this case, I'd love to see it.


That misses the point. The strength of records is that the language can build features that are only possible because of records' restrictions. It's not that the bytecode says "I'm a record" - which is all you get if Kotlin compiles a data class to one. @JvmRecord gives you all the limitations with none of the benefits. The annotation only exists for compatibility reasons.


The problem is all of these "benefits" are imaginary right now[1]. You are talking about future possibilities that Java could get due to the strong guarantees offered by Java records. That's cool, but I couldn't find in your article (or the JEPs) any such benefit that is available _right now_.

The reality is that Kotlin can build the very same benefits you're talking about as well, in future versions. All the compiler has to do is to restrict these benefits to data classes which only have 'val' members, do not have hidden state and do not use inheritance. In fact, Kotlin is ALREADY enforcing these constraints on any data class that you annotate with @JvmRecord.

Now, if we're already talking about imaginary future benefits here, Kotlin is also planning to have (immutable by design) value classes when Project Valhalla is ready (you can already have value classes with a single field in Kotlin 1.5).

[1] To be fully honest, Java records are themselves an imaginary feature for the vast majority of Java users until Java 17 comes out, since very few projects would use a non-LTS version. And even then, you can expect years until libraries can start to use them. Kotlin and Lombok data classes can be used right now.


Yeah, and that is a good example of how semantics might change depending on which platform Kotlin code is targeting.


Scala's case classes are missing in the comparison (only mentioned briefly at the very end). They offer everything that records do and more. Good to see that Java finally catches up a bit.


Scala is usually omitted for one reason or another when Kotlin is compared against Java as a better alternative, which is a shame.


Scala has the solution to every problem except the problem of too many features.


> except the problem of too many features

Scala is a pretty simple, concise and coherent language though. I never understood why people deem it to have too many features; I write it professionally and never felt so.

At least comparing it to C, C++, Python. Java may be simpler, but you have tons of features added with metaprogramming via dozens of annotations generating lots and lots of boilerplate; Lombok and Spring are examples of that.


Exactly this. Java-the-language is so simple that Java-in-practice is disgustingly littered with verbose patterns to accomplish really basic abstraction concepts, annotations that may or may not work at runtime, and runtime reflection with unsafe casting because of type-erased generics.

Java codebases are way scarier than Scala.


> Never understood why people deem it has too many features

I'll start with one (key)word: `implicit`

I agree with your distaste for metaprogramming though.


Implicits are one feature that replace many features in other languages (Kotlin has probably 5 or 6 special features that each do limited subsets of what implicits can do, and still misses important parts of their functionality). There is some legitimate criticism to be made of Scala implicits (though I have yet to see a better alternative), but "too many features" isn't it.


What Kotlin features do implicits serve to replace? I'm new to Scala and am having trouble imagining when I'd want to use an implicit, so I'm genuinely curious.


Off the top of my head: Extension methods. "with" contexts. Scope functions (by letting you write your own). Many of the things done with kapt (e.g. typeclasses, not that there's a good way to do them in Kotlin). Spring-style autowiring (not part of Kotlin proper, but seemingly the idiomatic way to use it). The magic last parameter lambda syntax thing (by letting you implement the magnet pattern).


Scala implicits are used in a few different places, predominantly in library code where they can for example be used to derive instances of type classes. e.g. in circe:

  import io.circe.syntax._
  List(1, 2, 3).asJson
Where `asJson` requires an instance of an `Encoder`[1] and this Encoder can be derived with the help of implicits.

For you as a normal user, the two common places where you might use implicits are (1) for implicit classes providing syntactic sugar:

  // original
  def doSomething(a: A): B = ???
  val a: A = ???
  val b = doSomething(a)

  // with implicits
  implicit class AImplicits(a: A) {
    def doSomething: B = ???
  }
  val a: A = ???
  val b = a.doSomething
and for (2) implicit conversions. These are a foot-gun, so should be used in limited circumstances. At work, we use case classes in data pipelines then convert these to avro classes on save; there's lots of ways to do this, but as an example if you have an `Option[Int]` and your avro constructor requires a nullable Java `Integer` then `JavaConverters` won't save you and you'll need something like:

  implicit def optIntToInteger(optI: Option[Int]): java.lang.Integer = optI.map(Int.box).orNull
[1] https://circe.github.io/circe/api/io/circe/syntax/package$$E...


You can use Scala without introducing implicits, and even if you have to interop with a library that does require them, it's quite easy to learn how to use them. It's also easy to abuse them, but then that's not really Scala's problem.

They are changing implicits in Scala 3 though with the "given" keyword which is more ergonomic.


Scala can't let go of implicits, as they're a fundamental part of the language. I just don't buy this Scala 3 fudge giving the impression that implicits are not alive and well beneath the surface.


Scala 3 takes the most common use cases for implicits and makes them more ergonomic. Implicits are still in the language, but not exposed as a concept.

Odersky called the approach for Scala 3 "intent over mechanism."


You don’t have to use implicits though, but you’d be super happy they’re there should you ever be in a situation where they’re helpful


Doesn’t a monad or for comprehension use implicits to find the right CanBuildFrom?

Are you suggesting to use scala without using for comprehensions? Or do you mean you don’t need to write your own?

It’s been a long time since I wrote scala, so may be getting it wrong.


No it does not. A for-comprehension just requires that map/flatMap methods are available. No implicits or CanBuildFrom needed.


CanBuildFrom is gone in the 2.13 collections rewrite


> I'll start with one (key)word: `implicit`

I don't think implicit counts as "too many features" or as something complex. Basically all it does is find a canonical value in scope for a hole of a certain type.


I only worked in Scala briefly, so maybe it's coming from Java habits, but implicits were a huge pain. When first reading through code you don't really know if that function call you eyed over really has the params you think it does. It makes it WAY harder to track what's derived from what.

Either way, the local Scala guru there said the Scala community was starting to get over implicits.


I think that's where it comes from: people that briefly worked with Scala in the past.

At some point it was true, but nowadays the problem is solved because IDEs will show you where an implicit is used and where it comes from. Without that, it was indeed more difficult - but also not more difficult than Java's reflection or DI (where you had and have even less help).


> When first reading through code you don't really know if that function call you eyed over really has the params you think it does.

Yeah, but don't you feel the same pain when using Spring's autowiring? Or python's (or C++ template) default arguments? Or dynamic method dispatching in any OO language, where you don't know what method you actually call? Or late bindings?

I don't see how implicits' implicitness is significantly worse than any other kind of implicitness common in other languages.


I mean me personally? I'm very boilerplate tolerant. I'm not a fan of Dependency Injection/Auto Wiring for similar reasons and I've never found wiring boilerplate to be too painful. In fact I'd love even more explicitness and manual wiring for almost everything like disk, network, or even clock access aka passing capabilities!

Default params are generally fine since they're hidden on the implementation side but explicit if they vary at the caller site.

And even dynamic dispatch I try to use carefully and sparingly. I just like really explicit programs.


> I'm not a fan of Dependency Injection/Auto Wiring for similar reasons

I see. My point was that such things are pretty common in most contemporary languages and I don't see how Scala is special in this regard.

Yes, unlike Go or Java, implicits are part of the language spec and not some metaprogramming magic on top of it, but I'm not sure it's a bad thing.


Maybe, but Scala's case classes are super easy to grasp and require no magic like annotations.


What annotations are you referring to?


Lombok's. Seriously, Scala case classes are like the very first thing people start using when learning the language. Scala 3 enums are even more straightforward.


The ancestors of your comment mention Kotlin, not Lombok, which is why I asked. Kotlin's data classes are probably the analog to Scala's case classes, and they do not require annotations.


I'm not the person you were asking, but, presumably, Lombok's annotations.


Lombok and Kotlin are both much easier to integrate into existing Java applications and libraries than Scala.


Not so IME. Kotlin's nullable types don't play nice with option-oriented APIs (e.g. Java Streams), and you can't make Java types transparently implement Kotlin interfaces (and there's no equivalent to typeclasses), so using Java types with Kotlin libraries is harder than with Scala libraries.
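For example, Optional-returning Java APIs need manual bridging in both directions; a small sketch, where findUser is a made-up stand-in for some Java API:

    import java.util.Optional

    // stand-in for a Java API that returns Optional
    fun findUser(id: Int): Optional<String> = Optional.empty()

    val name: String? = findUser(42).orElse(null)          // Optional -> nullable
    val opt: Optional<String> = Optional.ofNullable(name)  // nullable -> Optional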


Lombok is the problem not a solution (see https://github.com/projectlombok/lombok/issues/2681 regarding support for JDK 16 to understand why) and a ticking time bomb in every project that wants to move past JDK 15/16.

But why do you think Kotlin is easier to integrate than Scala? Both work on JVM.


It's been resolved though.


I don't see how Kotlin is easier to integrate into an existing Java app than Scala is. They both require adding new dependencies and changing your project build config, and require developers who know the respective new language. That's... about it. They both offer similar levels of interop with Java libraries.

If you want to assert that finding Kotlin developers is easier or that Kotlin is an easier language to learn, sure, that might be the case (I really don't know), but that's not really an integration task.


What about the fact that Kotlin collections are fully interoperable with Java collections? That makes the transition significantly easier.


You can use Java collections in Scala just as you do in Kotlin, and in fact interop is easier in Scala since you can write typeclass instances for the Java types whereas there's no equivalent for that in Kotlin.

The real difference is that more of the Kotlin ecosystem uses Java's fundamentally mutable collections compared to the Scala ecosystem, and using actually immutable collections in Kotlin is extremely difficult to the point that essentially no-one does it. IMO that's a bad tradeoff in the long term, but you can absolutely take the same approach in Scala if you really want to.


I write Kotlin all day and almost exclusively use immutable types. What difficulties with using them are you referring to?


Kotlin doesn't have immutable collection types in the standard library (it lets you use a read-only interface but the collection is really still mutable and will be seen as such by any Java code), so if you want actually immutable collections you have to use non-standard collections, and since Kotlin doesn't have typeclasses or implicit conversions it's difficult to interoperate between any non-standard collection library and other Kotlin code.
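A quick sketch of the distinction, in case it isn't obvious why "read-only" isn't "immutable":

    fun main() {
        val backing = mutableListOf(1, 2, 3)
        val readOnly: List<Int> = backing       // read-only *type*, same mutable object
        backing.add(4)                          // readOnly observes the change
        (readOnly as MutableList<Int>).add(5)   // a cast (or any Java caller) can mutate it directly
        println(readOnly)                       // [1, 2, 3, 4, 5]
    }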


Ah, fair. I've never actually seen that become a problem, but I can see how it would be if I were doing more extensive Java interop.


Yeah, if all your code and libraries are Kotlin then there's less risk of a collection being mutated under you (though even then, Kotlin gives you no way to write a method that only accepts an immutable collection - you can write a method that accepts a view-only interface, but it will always be possible to pass a mutable collection and mutate it in parallel while that method is running). But of course in that scenario there's also zero benefit from having compatibility with the Java collections.


I believe Scala's are as well, this seems to be a pretty complete list:

https://www.scala-lang.org/api/2.13.5/scala/jdk/javaapi/Coll...


Kotlin's STDLIB collection types are essentially aliases for the Java types. So while the Scala adapters are low-cost, in Kotlin everything's zero-cost.

One other benefit of that is you maintain object identity. I don't think that Scala's wrappers do that.


Scala has the full suite of Java collections without any conversion or overhead whatsoever.

But it also has Scala collections. With scala collections you get the full power of the Scala type system, as well as a much richer and full featured collections api. So most scala programmers won't bother with java collections unless they have specific java interop requirements.

The Scala adapters are merely ways of converting java collections to scala collections and vice versa.


I think it's a fair complaint that in Scala it is a pain in the ass to deal with Java collections. You have to litter your whole code with `.asScala` and `.asJava`, the java collections don't work with for comprehensions, etc.


No, it's not hard. In Scala, you are completely free to use those java collections and you can do it exactly how kotlin programmers do it. You want a java list, without any need to convert back and forth between Scala and Java? Import `java.util.List`. All of the same methods and iterators and expressions and constructs are still there. You get all of the lack of capabilities and grace that the java collections provide. Literally no different from using the same collections in Kotlin.

The problem is that those java collections suck in comparison to the Scala collections. So Scala programmers prefer to use Scala collections. Scala programmers would never willfully use java collections if they don't have to, and if they absolutely have to, they have minimal overhead conversions back and forth. The minimal conversion overhead is the price they're willing to pay to use better collections while maintaining java interoperability.


Except Kotlin enjoys the compatibility with Java, plus it outfits those Java collections with extension methods and some compiler tricks to achieve all the same functionality as the Scala collections (in fact, I would say even more self-consistent and usable than in Scala). Just as in Scala, in Kotlin you can use the functional transformations that Scala users are so accustomed to:

Kotlin:

    listOf(1,2,3,4)
        .map{i -> i + 1}
        .filter { it % 2 == 0 }
        .flatMap { listOf(it, it * 2, it * 3) }
Kotlin even has a similar take to Scala's views, which they call sequences:

    listOf(1,2,3,4)
        .asSequence()
        .map{i -> i + 1}
        .filter { it % 2 == 0 }
        .flatMap { listOf(it, it * 2, it * 3) }
        .toList()
Is this really so "lacking in capability and grace" compared to Scala? The only thing missing is persistent immutable collections, which are implemented in kotlinx.


> The only thing missing is persistent immutable collections, which are implemented in kotlinx.

You said:

> You have to litter your whole code with `.asScala` and `.asJava`, the java collections don't work with for comprehensions, etc.

So how does kotlinx achieve zero-cost compatibility with Java collections without having something like `.asKotlin` and `.asJava`? Because otherwise there is no difference between Scala and Kotlin in this regard.


because the persistentList/Collection/etc in kotlinx implement the appropriate java collection interfaces:

    import java.util.List;

    public class Foo {
        public static void blah(List<Integer> list) {
        }
    }

Kotlin:

    import kotlinx.collections.immutable.persistentListOf

    fun main(): Unit {
        Foo.blah(persistentListOf(1,2,3))
    }

This compiles fine


It's only a problem if you can't decide which part of your project to write with which language.

If I have to use Java, I put it into a separate sub-project and make sure that I have a nice API to interface between the subprojects. `.asJava` and `.asScala` then only appear at very specific places where the interop happens.


What you're describing is in and of itself a severe cost and barrier to the very thing we're discussing: interoperability between Java and Scala collections. On the one hand you have Scala, where you either have to constantly `.asScala` and `.asJava`, or go your route of ensuring that any given module deals only with either Scala or Java collections, which may require additional abstractions or classes. On the other hand you have Kotlin, which just directly, frictionlessly deals with Kotlin/Java collections the same way (in fact, they literally are the same). We're comparing "some small-to-medium cost" against "zero cost".

Or an other way of saying it is that your approach of walling off modules as either java-collection or scala-collection is kind of like saying "Java and Scala collections work well together, so long as you don't have to use them together too much. If you minimize how much they need to interoperate, the problem is not so bad."


> Or an other way of saying it is that your approach of walling off modules as either java-collection or scala-collection is kind of like saying "Java and Scala collections work well together, so long as you don't have to use them together too much. If you minimize how much they need to interoperate, the problem is not so bad."

Yes, that's actually a very good way to rephrase it. That being said, I myself wouldn't limit that to just the collections; Java and Scala on the language level have excellent interop. For Scala-the-ecosystem the story is a bit different.

Java is not the primary target for most of the Scala libraries. Not even a secondary one, I dare say. (I'm not even sure how you would model a Java API for a library - say, Cats - that uses implicits for the heavy lifting.)

That of course stems from the fact that Scala makes use of concepts that have no correspondence in either Java or Kotlin, so there will necessarily be something lost in translation.


Exactly. Kotlin was actually explicitly designed to be like this and is directly competing with Java as a drop in replacement for it. Consider e.g. the seamless integration into Spring and Android, which both continue to support Java as well. Writing Android applications in Scala is a bit of an uphill battle (but I've heard of people attempting it). Writing Spring applications in Scala is possible in principle, I guess, but it's just not a thing that is very common or that is supported by Spring. Spring and Android have documentation with code samples in both Java and Kotlin, and extensive Kotlin specific stuff in the form of e.g. extension functions. At this point Kotlin is the preferred language for both even though they both maintain compatibility with Java as well and will for years to come.

Spring is about half the server side JVM ecosystem. Android represents a good chunk of frontend usage. Kotlin is a first class citizen for both; Scala just isn't. It has its own frameworks of course but they are kind of niche in comparison.

I think it's great that Java is slowly evolving to have features that other languages have. Kotlin supports Java's records as of this week's 1.5.0 release via an annotation. Meaning that if you have Java code that needs to interact with Kotlin code, you can write a data class that from the Java side looks like a record if you put the right annotation on it. It's a compatibility feature that's only relevant if you are planning to use or support Java. Another notable feature that landed with this week's release include sealed interfaces (it already had sealed classes). You can use both with data classes of course.


I think Kotlin might be winning over Scala because of better interoperability with Java. https://jelmini.dev/post/java-interoperability-kotlin-vs-sca...


I think that case classes are roughly equivalent to Kotlin data classes for the purposes of this comparison. In particular, one of the extra features they offer is the ability to make fields mutable. Which, in some cases, may be useful, but also means that they aren't, strictly speaking, transparent carriers of immutable data.


Not only that, but they offer some other features that (to my knowledge) both records and Kotlin's data classes don't offer, such as control over accessibility. E.g. when you have a "NegativeNumber" type and you need to make sure the inner value is actually negative, just having a check in the constructor isn't sufficient in the presence of copy methods (as in Kotlin), is it?


A good rule is that constructors MUST not do work. Writing constructors in Kotlin is possible but not that common. Data classes are typically initialized with just value assignments. Having support for default values also means that it largely removes the need for having multiple constructors (which is very common in Java).

If you need to do validation, there are very decent frameworks that you should use for both Java and Kotlin. We are using a thing called Konform currently. Very nice.

For non negative numbers, you can use the new unsigned numeric types they added in Kotlin with the last release: https://kotlinlang.org/docs/basic-types.html#unsigned-intege...


The copy methods that Kotlin generates construct a new instance of the class with the updated fields - so any validation done in the constructor would in fact be done again.
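Easy to check with a sketch:

    data class Negative(val value: Int) {
        init { require(value < 0) { "must be negative" } }
    }

    fun main() {
        Negative(-1).copy(value = 1) // throws IllegalArgumentException: copy() goes through the constructor
    }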


Yes, that's correct. But instead of catching the error at that point, you would probably rather want to disable these methods from the beginning so that someone cannot even make the mistake of using them and has to go through some custom defined method instead.


I don't think they offer more. Deconstructing patterns and "reconstructors" (generalised "withers") are on the way. And Java features are often designed as a complete whole: some of the feature is in the language, some in the core libraries, and some, even, in the VM. In the case of records, they're treated in a special way by the runtime (e.g. in serialisation).


Well I said "they offer more" and not "they will always offer more". So I stand by what I said.

> In the case of records, they're treated in a special way by the runtime (e.g. in serialisation).

Scala will use them as underlying implementation (just like it will do with value-types, functions and so on).


I'm not sure what it's talking about for the "with" feature: https://nipafx.dev/java-record-semantics/#with-blocks

In Kotlin data classes, it's already implemented (just called copy) https://kotlinlang.org/docs/data-classes.html#copying
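For anyone unfamiliar, copy() in a nutshell:

    data class Point(val x: Int, val y: Int)

    val p = Point(1, 2)
    val q = p.copy(y = 3) // new instance Point(x=1, y=3); p is unchanged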


"With" doesn't exist in Java, but it was introduced in C# 9.0 as part of the .NET implementation of records:

https://devblogs.microsoft.com/dotnet/c-9-0-on-the-record/#w...

I think the author is more saying that a "with" operator would fit into Java record semantics but might not be compatible with Lombok @Data classes or Kotlin data classes (though it looks from your link that Kotlin actually does have something very similar in the "copy" method).


Not sure why this is getting downvoted; if you look at the hypothetical “with” syntax in the original blog post, it is clearly borrowed from the C# implementation.


The annoying thing about Java here is that it doesn't have default argument values, and doesn't allow you to name your arguments in function calls.

In Scala (I don't know Kotlin, but I assume it's similar) you could easily implement a copy() method yourself (and many people do if they need a case-class-like thing but can't use a case class) that behaves identically to the copy() method provided by a case class.

But the semantics of that copy() method require default argument values, and the ability to call functions using named arguments. Java doesn't have either of those, so instead of adding those features (I can understand the former being controversial), I guess the plan is to add entirely new syntax just for records, which IMO is a huge shame.

But it appears the "with" syntax is far from finalized, so it's possible they'll do something better.


It was included because changing data using records (copying it except changing the fields you need to change) is a shitty experience, so rather than showing the code you have to write, they showed you code you may or may not in the future be able to write.


Seems like an odd thing to mention. They are arguing that records are better than data classes, but part of their argument is "maybe, in the future, at some point, I don't know, it will be as easy to do this thing with records as it already is with data classes, we'll only need to add new syntax and a new keyword to the language".


Who is they?

You mean the article author? Yes, it was odd that he mentioned that, but besides that not a bad feature to have (but a bit wordy for my taste, I hope they'll change that).


> I'm not sure what it's taking about for the "with" feature: ...

The author is referencing possible future work that can build on the current records implementation, and the work Brian Goetz is doing with pattern matching and record deconstruction. Goetz has put together a draft that shows how these could be combined.

https://github.com/openjdk/amber-docs/blob/master/eg-drafts/...


It's also implemented nicely, today, with Lombok:

    @Value @With
    public class Pair {
        int left;
        int right;
    }

    final Pair rightIs3 = somePair.withRight(3);


A long winded article that really never justifies its headline.

Kotlin will soon be generating records in its back end, thereby gaining all the advantages that it's allegedly not getting today.


That misses the point. The strength of records is that the language can build features that are only possible because of records' restrictions. It's not that the bytecode says "I'm a record" - which is all you get if Kotlin compiles a data class to one. @JvmRecord gives you all the limitations with none of the benefits. The annotation only exists for compatibility reasons.


But what are the benefits that are available right now?

The only benefits I've seen in your article are not available right now, and can be (and sometimes are) matched by Kotlin and even Lombok.

* Destructuring pattern matching syntax (JEP 405)

Proper pattern matching is obviously one of Kotlin's weak spots, but nothing prevents it from developing this in the future. Just like Java has JEP 405, Kotlin has KT-186[1], although Java might (uncharacteristically) beat Kotlin to this one.

* with blocks

This solution is far away and relies on introducing new syntax to the language. Meanwhile both Kotlin and Lombok already have solutions that give you the same benefits without introducing new syntax.

Lombok has both @With and @Builder(toBuilder = true), while Kotlin has the copy() method.

* Serialization

As far as I can see, Kotlin Serialization already supports algebraic data types - including Sum Types, not just Product Types! (see the sketch at the end of this comment)

* Boilerplate reduction

I don't think you were trying to imply otherwise, but just to be clear about it, this is the benefit that both Kotlin and Lombok data classes had from day one.

[1] https://youtrack.jetbrains.com/issue/KT-186
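On the Serialization point, a minimal kotlinx.serialization sketch with a sealed hierarchy; by default the JSON carries a "type" discriminator holding the serial name:

    import kotlinx.serialization.SerialName
    import kotlinx.serialization.Serializable
    import kotlinx.serialization.encodeToString
    import kotlinx.serialization.json.Json

    @Serializable
    sealed class Either {
        @Serializable @SerialName("left")
        data class Left(val abc: Long) : Either()

        @Serializable @SerialName("right")
        data class Right(val def: String) : Either()
    }

    fun main() {
        println(Json.encodeToString<Either>(Either.Left(123)))
        // {"type":"left","abc":123}
    }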


That feature was released this week with Kotlin 1.5.0.


One of the best things about Records is that they guide you to creating immutable data structures, which Lombok does not. This, along with the reduction of boilerplate, greatly reduces the cognitive load required to understand a lot of code.


Lombok has @Value classes, which I use a lot. Is there something about them that does not guide you to creating immutable data structures?


Lombok increases the cognitive load required to understand and debug the code.


The guidance is nice, but, unfortunately, Record components can still be mutable. It would be great if there was a way to have a stronger guarantee of immutability.


I think the article is misleading in the list of advantages over Kotlin's Data Classes.

1. Destructuring - available in Kotlin

2. Copy with change - available in Kotlin

3. Serialization - not sure why Kotlin data class would not be serializable

4. Boilerplate - Kotlin takes care of equals and hashCode

Huge disadvantage that matters to me is that record fields cannot be mutated. It makes the records much less useful.


"Huge disadvantage that matters to me is that record fields cannot be mutated. It makes the records much less useful."

No. It makes them much more useful.


Well it means that the second you need a setter you can't use records so all the boilerplate remains.


You are supposed to create a new copy with some of the fields changed. Just like you do it with the java.time.* classes and others.
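E.g. (Kotlin syntax here, but the java.time API is identical from Java):

    import java.time.LocalDate

    val d = LocalDate.of(2021, 5, 1)
    val later = d.withDayOfMonth(15) // a new LocalDate; d itself is unchanged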


Yet the only mechanism for this is error prone or relying on mountains of boilerplate.


Instead of changing a few bytes, now I have to copy hundreds of bytes around and add more stuff for the GC to collect.

Well, they have to obey Wirth's law, I guess.


I mean, you do realize that you're writing Java code right? If you want ultimate low level optimization and control then you're already about ten miles too far downstream to make that turn. Also, I'm guessing the copy overhead is more than offset by the JVM being able to do better optimizations around these data structures.


Structs with mutable fields is hardly “ultimate low level optimization”, it’s something a lot of programmers still use as a matter of course. (I say this as a fan of immutable data!)

Ultimate low-level optimization in Java would be more like packing your structure into arrays of integers - which is something people actually do in Java. Just because you’re using Java doesn’t mean you don’t want your code to run as fast as possible.


I don't know if the JVM developers have actually implemented this, but since records are immutable, there's no reason why a copy couldn't share memory between the instances.

The major downside is that could ruin cache locality and make passing the instance across a FFI boundary require a copy.

Another optimization would be that the JIT (or even perhaps javac) could notice cases where you make a copy (with one field changed) of a record and then never use the original reference again. If the JIT (or javac) can prove that no other bit of code holds a reference to the original record, it can reuse and mutate the original one instead of making a copy. I don't know if this optimization is or will be implemented, of course.

Either way, I expect the overhead you mention ends up being worth the benefits of immutable data. (That's been my experience using Scala, anyway.)


This has been argued ad nauseam around the time Scala started gaining traction, because Scala's collections are grouped into immutable and mutable. The consensus at the time was that the GC overhead is well worth the ability to parallelize computation.


You have to copy less than you think. Because the data is immutable, you can share everything but the changed fields.


Then you've invented mutability without references/identity. Except those are desired properties for data classes, unlike for value classes which have such semantic difference.

Btw Kotlin allows making immutable Java records too, so clear winner.


Not sure what you're getting at. Immutable means exactly that. If you hand out an object, then do a copy change, that change isn't reflected in the object you gave to another method/thread/fiber/etc. Immutability doesn't mean application state never changes; it means that a single reference will always point to memory that hasn't changed.


Parent means basically the difference between identity and primitive/value classes. In Haskell, you’ve only got the latter (maybe you can manage something with lower-level controls exposed); that is, in a non-Haskell syntax, new Point(3,2) == new Point(3,2), even though in memory the two objects are different.

“Problem” is with records, that they are only shallowly immutable. record Rect(List<Point> corners) will readily hand out a “pointer” to a mutable list. It can be solved of course by storing only immutable objects.
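The shallowness is easy to demonstrate; a Kotlin data class is sketched here, but a record holding a mutable list behaves the same way:

    data class Point(val x: Int, val y: Int)
    data class Rect(val corners: MutableList<Point>)

    fun main() {
        val r = Rect(mutableListOf(Point(0, 0)))
        r.corners.add(Point(1, 1)) // r is "immutable", yet its contents just changed
    }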

What parent may have failed to get from grandparent comment is that the latter likely meant it under the hood, transparently to the user. That is, new Point(3,2) != new Point(3,2) but the JVM can make the object reference the same data, because the field itself is final. Thus a copy can be optimized at the JVM level, while still having identity.


Or you know, the JIT will trivially optimize away the old instance if it is reassigned to the same variable, as you would use it inside a loop. How do you think the litany of FP languages work? Like Haskell, Scala, Clojure?


It can be done in theory, but the JVM does nothing of the sort right now.


False. The JVM already does quite a good job with escape analysis, and record types just add extra semantic information to potentially further improve the situation.


Counterpoint: Java programs with memory consumption graphs that have decided sawtooth patterns, which is extremely common.


As opposed to what exactly? How would a C program’s memory graph look with quick bursts of memory-allocation-requiring functionality, especially if it is very dynamic in nature? Yeah you can overallocate with memory pools and the like, and there are cases of course where escape analysis can’t help — that’s why Valhalla has been in the works for quite some time now.

But GC-wise the JVM is far ahead of the game; whatever you see is likely better than the same functionality would be under JS, Python, C#, Go, etc (though the latter two do have value types/structs already, so they can in some places manually do the “escape analysis”. But not every problem requires/can use value types either)


Value types are an essential element of design to me, but this is not widely recognized yet. To me it’d be like saying primitive types are not essential.


What precisely of what I said is “false”?


the JVM does something of the sort right now.


Every time you rebuild a record you run through its constructor. I doubt this can be optimized the same as with a plain C struct.

Eventually with some more evolution. But not yet.


That's theory that isn't happening in practice. In practice, Java programs are slow and memory-hungry because of those issues, when some people think that it's cheap to create small objects or that escape analysis will solve their issues without verifying that it works for their case.


Most of the world’s server applications would like to disagree with you.


Java isn’t perfect. But you underestimate the amount of software written in it. Or even things like Python and JS which are a lot more basic but have similar elements in regards to their memory model. What do you use?


I've worked as a Java developer for the last 10 years; I perfectly understand the amount of software and other things.

I don't know much about Python and JS, but I do know that they're not using an immutable model; everything is mutable in Python and in JS, so I'm not sure what your point is. The only immutable language I'm aware of is Haskell, which is not widely used. Just because the JVM is faster than Python or V8 does not mean that it's OK to slow it down with immutables.


Using immutables doesn’t mean you go full Haskell. Strings are also immutable you know.


GC is there for a reason, unless you do HFT, use it to your advantage.


You write java and worry about bytes copied?


Yes, I do. Java could be quite fast if you don't slow it down on purpose.


Why would you need a setter for a data class? They add no value there.


1. with possible loss of information

2. as API, but an operator can do more

3. they are surely serializable, but they still need all the ugly reflection hacks that come with that (like bypassing the constructor and allowing the JVM to write to final fields)

4. that part was just presenting a benefit of records (on its own; not over Lombok/Kotlin), but the fact it's under the header "Why Records Are Better*" is confusing


If you're not on Java 14 yet (70% of the industry?[1])... this post shows two other alternatives to Lombok:

https://medium.com/@vgonzalo/dont-use-lombok-672418daa819

AutoValue & Immutables

[1] https://snyk.io/wp-content/uploads/jvm_2020.pdf pg 5


Unfortunately AWS Lambda doesn't support 14 yet :/


Do you care about memory leaks on a Lambda? I don't think they can live long enough to even invoke the GC. I guess if you're going through a lot of data you could blow your memory budget and have it terminated?


I believe the article focuses too much on "perceived/future" benefits of records, while ignoring the actual benefits that Lombok & Kotlin data classes provide today.

For example, the article does not mention Lombok's @Builder and Kotlin's `copy` when talking about boilerplate. Boilerplate is not just about application code, it's also about test code!

We have dozens of entities, and when unit testing them – always having to construct the COMPLETE record with all attributes from scratch is a pain. Nested records make things worse. We now have all data classes as @Value+@Builder, and test factories provide consistent builders which the actual test cases can chain, override and use. This is possible in Lombok & Kotlin, but not with records.
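In Kotlin the same pattern falls out of default arguments; a sketch, with User and the defaults made up for illustration:

    data class User(val name: String, val age: Int, val email: String)

    // test factory: defaults for everything, tests override only what they care about
    fun aUser(
        name: String = "Alice",
        age: Int = 30,
        email: String = "alice@example.com"
    ) = User(name, age, email)

    val minor = aUser(age = 12)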


yes, Java records are part of JDK 14; many places are still stuck with JDK 8 or JDK 11, however everyone can use Lombok (they didn't mention that detail in the article)


They are still classes, still live on the heap and still need to be garbage collected. Compare with value types that live on the stack in other languages such as Swift, Go and Julia.


The heap is an implementation detail.

With escape analysis, the compiler can allocate the data on the heap, stack, or even stick it in registers.

https://www.beyondjava.net/escape-analysis-java

https://shipilev.net/jvm/anatomy-quarks/18-scalar-replacemen...

https://www.javaadvent.com/2020/12/seeing-escape-analysis-wo...
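A sketch of the kind of code this helps; hedged, since whether scalar replacement actually kicks in depends on the JIT and the shape of the code:

    data class Vec(val x: Double, val y: Double)

    fun distanceSquared(ax: Double, ay: Double, bx: Double, by: Double): Double {
        val d = Vec(ax - bx, ay - by) // never escapes this function...
        return d.x * d.x + d.y * d.y  // ...so the JIT may scalar-replace it: no heap allocation
    }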


Java compilers are getting more and more advanced, but I don’t think they will ever become the magical “sufficiently advanced compiler” that produces code that’s as good as humans _could_ (but often won’t, because of time constraints) write.

I don’t think anybody fully disagrees with that. At least, I haven’t heard people claim int can be removed from the language because a good compiler can produce identical code for Integers.

And yes, that can also apply to instances that do escape. A sufficiently advanced compiler could in some/many cases figure out that an array of Integer can be compiled down to an array of int. However, it’s way easier for a compiler to check a programmer’s claim “we won’t use features of Integer on these ints” than to deduce that code won’t, so a little bit of programmer effort allows for a simpler compiler that can produce faster code.

For me, records and (future) value types are examples of such “little bits of programmer effort”


I could be wrong, but I don’t think Dart has ints; I think it only has objects.


https://api.dart.dev/stable/2.6.0/dart-core/int-class.html:

“Classes cannot extend, implement, or mix in int.”

https://api.dart.dev/stable/2.6.0/dart-core/num-class.html:

“It is a compile-time error for any type other than int or double to attempt to extend or implement num.”

⇒ it seems that, technically, you’re right. int is an object in Dart. At the same time, it’s a restricted type of object.

So restricted that I think it is aan object only in name/at compile time.


Can you have an array of 1 million structs, not pointers to structs?


The first question is "through static analysis, can you guarantee that the structs do not leave the scope?"

The second question to look at is "which JVM are you using?"

Different JVMs may implement this differently. This isn't something that one can say about Java. It is something that one might be able to say about HotSpot, Zulu, or GraalVM.


You’re technically correct that this stuff is all possible in principle, but the answer in practice right now is “no”.


From the link about GraalVM:

> Something that Graal can do that C2 cannot, and a key advantage of GraalVM, is partial escape analysis. Instead of determining a binary value of whether an object escapes the compilation unit or not, this can determine on which branches an object escapes it, and move allocation of the object to only those branches where it escapes.

And from https://docs.oracle.com/en/java/javase/11/vm/java-hotspot-vi...

> The Java HotSpot Server Compiler implements the flow-insensitive escape analysis algorithm described in:

> ...

> After escape analysis, the server compiler eliminates the scalar replaceable object allocations and the associated locks from generated code. The server compiler also eliminates locks for objects that do not globally escape. It does not replace a heap allocation with a stack allocation for objects that do not globally escape.

----

So, some JVMs implement, others only do a limited subset of the optimizations available with escape analysis.

I would not say that the answer of "is it used in practice" is "no."


GraalVM is excellent in performing escape analysis on objects on the call stack, but it does not prevent the pointer overhead that a JVM array-of-heap-object-references has vs an array-of-structs that e.g. .NET supports [2].

Theoretically it could do that, but that's just the classic "sufficiently smart compiler" strawman [1]

[1] https://wiki.c2.com/?SufficientlySmartCompiler

[2] https://stackoverflow.com/questions/29665748/memory-allocati...


My point wasn't so much "can GraalVM do {some optimization}" but rather that the Java Language Specification doesn't say anything about it and that different JVMs have a different set of optimizations.

So "does Java allocate a record in an array directly as some structure of values in the array or as a pointer to a record object?" isn't one that can be answered by looking at Java.

It is an interesting question, and I'd be curious to see someone do a deep dive into the internals of GraalVM to show what can be done.

The other part that trickled out in other comments from the person posing the question about the array of records:

> It's a global array of structs, let's say.

and

> No, because my competitors who are attempting to fill the same orders I am attempting to fill are not chasing pointers.

... which, I'd be curious to see how .NET supports an array of structs (that are presumably changing over the lifetime of the array) that is allocated as a global. That sort of use case and the specifics of how it is implemented could make escape analysis give up and you'll see an array on the heap with pointers to records on the heap as they're passed off to different threads (which each have their own stack).


The point is those optimizations are not here now, and haven't been there for the last 25 years. Hand-waving them away as theoretically possible is dishonest. We're 25 years into the most popular programming language's lifetime and the most advanced VM available only recently learned good escape analysis. It isn't easy.

> which, I'd be curious to see how .NET supports an array of structs (that are presumably changing over the lifetime of the array) that is allocated as a global.

Very easy. An array-of-structs (which can still be on the heap mind you) will just be a contiguous block of memory. This is totally independent of any locking and synchronization.

For example, take a class with two 32-bit fields and two instances a and b. An object-ref array will look like [p_a, p_b], each p being a pointer to [a_0, a_1] or [b_0, b_1]. A struct-array will look like [a_0, a_1, b_0, b_1].


Sure, but I think it's still important that it's possible. And if it doesn't get implemented, the reason may be because JVM developers have done the work to figure out that in the real world the optimization doesn't buy all that much.

Regardless, if you care about performance enough (via actual benchmarks) that you know that you really need some data to be guaranteed to be stack-allocated structs, then you probably shouldn't be using Java (or any GC'd language?) in the first place. Records don't change that calculus.


It's a global array of structs, let's say.


If it's a global, it's very likely allocated on the heap.

The question of "what is the representation of the object on the heap?" then open.

However, the "this is global" complicates it.

This isn't a question for Java to answer. You would need to dig into the specifics of the particular VM that you are using and how it allocates such a structure along with what optimizations it has available.


1 million is not a lot. I'd begin by asking myself "can I afford to chase those pointers?", because maybe you can.


That's a good question to ask when faced with a problem that could be solved that way, but a real answer to the question would be useful too.


No, because my competitors who are attempting to fill the same orders I am attempting to fill are not chasing pointers.


You are chasing nanoseconds with a garbage collected language?


Those nanoseconds tend to add up.


are your competitors using java?


Java is relatively common in HFT yes.


Isn’t the point that Record classes will be able to be upgraded to value types easily once Valhalla is done? Or am I missing something


No they won't (or maybe they will be able to be speculatively opt-deopt?) Value types above a relatively small size are less efficient than references.


Yes, just add 'primitive' before record in the declaration.


Value classes, primitive types and specialized generics will be on the stack in the next versions. There is also this related work https://github.com/microsoft/openjdk-proposals/blob/main/sta...


I’ve not explored them much, how well does escape analysis work with them?


Curious question — why do people insist so much on fields being private and there being getters and setters, even when all that getters do is return the field and all that setters do is set it? What kind of problem does this arrangement solve? Why not just use public fields?


I think I can answer that... it's all to do with a grand view that the original Java designers had about what objects should look like, in what became known as Java Beans. There's a spec[1] and everything.

They wanted to be able to write framework code that could introspect Java beans and expose them directly in a user interface, amongst other things, allowing users to modify them, create them, delete them and even compose them to create their own applications... they had even imagined there could be marketplaces where you could _buy_ Java beans to add to your program, or even to modify your other beans to give them extra power... getters and setters were part of that - they needed to know how to obtain and change the state of an Object, but how the Object internally handled such state changes was up to the Object itself (what we now call encapsulation)... this was similar to how Smalltalk worked and that was an inspiration for the Java beans specification... all this never really turned out the way they wanted, of course, but have a read of the Java beans spec to get a better idea of what they had in mind if you don't fully understand it yet.

With time, people forgot completely about Java beans, but for whatever reason, getters and setters stuck around to this day. Many Java developers today think Java beans are just classes with a bunch of getters/setters and have probably never heard of PropertyChangeListener, VetoableChangeListener and the other parts of the Java beans spec (some of it lives on in Swing).

If you write Java today and just want to expose some data, yeah, just go with public fields if you can't use Java 16 records yet... if you ever need to change how you internally store information, just refactor that to a setter if you really must (it will never happen).

[1] https://www.oracle.com/java/technologies/javase/javabeans-sp...


I had the same question -- why accessor methods instead of making the fields public, but final. If anything, it seems like a layer of indirection.

The only answer that makes any sense is this one from Brian Goetz.[1] Namely, that it's a workaround to support mutable fields of otherwise immutable objects.

To be honest, allowing you to override the methods on an immutable, auto-generated class to inject custom accessor logic feels like an immediate retreat from the conceptual goal of providing an immutable record implementation in the first place.

[1] https://stackoverflow.com/questions/66702223/why-do-java-rec...


> why accessor methods instead of making the fields public, but final.

The best explanation I have is that it is:

- a convention that seemed like a good idea to many people at the time; it even has a name: JavaBeans

- it allows for a standard, non-magic way to add logic to be run when reading or updating fields

- in a time when source control tools and Java refactoring tools were not as developed as they are today, it made sense to make getters and setters everywhere, since changing from public fields to accessor methods after they were already in use was probably scary for large teams.


Thanks for your speculation, but I'd rather take Brian Goetz's word for it. He is, after all, one of the architects of the Java language and the records feature in particular.

...Unless you're Brian Goetz's alt account?


> Unless you're Brian Goetz's alt account?

I have a policy of saying whether it is true or false.

That said you should take his word for why it was designed that way and my word as an historical account of why I did that in 2005-2015.


This protects you from changes to the internal representation of state in the future. Direct field access totally blows away encapsulation.


So, uh, do a usage search before changing stuff? You'll have to, anyway.


That works if your class is never used outside your own organization, but the API of published code needs to be changed more gracefully using semver.


Not that I necessarily agree with private fields, but that is not an option for library writers.


Depending on the language, getters and setters may not be source and/or binary compatible with fields from the client side, so if you might ever need to refactor to a nontrivial getter or setter (whether because you are changing representation or for other reasons), starting with trivial getters/setters instead of fields limits future breakage.


Only argument I heard, not that I agree with it, is future-proofing by abstraction. What if a simple getter evolves into something more complex? Then you'd have to refactor all the places that were using the fields.

I personally never had to do this so I'm not sure if this is a real benefit.


Magic beans. They were supposed to automagically populate inside a java UI (applet?).

seriously, I think the reason is so you could override the getter to do something else, like compute a value and return it... but in practice this almost never happens.


getters and setters prevent other objects from modifying internal properties in unexpected ways. Having that encapsulation and abstraction is important for maintainable code.

Also, getters and setters don't always map directly onto individual fields. Imagine a class where we have:

    class Person(private val firstName: String, private val lastName: String) {
        fun returnFullName() = firstName + lastName
    }


> modifying internal properties in unexpected ways

Except I'm talking about Java classes that store data. They don't do any operations on it, they don't have any internal state, they're more like C structs.

> Also, getters and setters don't always modify individual values.

Of course. But one doesn't contradict the other.


In modern Java they can be used to create an accessor function, because Java still does not have syntax for referencing fields in this context.

   items.stream().map(Foo::bar)
vs

   items.stream().map(i -> i.bar)


Freedom to change.


> Actually, records are even better* than tuples. JEP 395 says:

> Records can be thought of as nominal tuples.

They are certainly not better; that's just sad clickbait (the author even admits that).

Sometimes nominal typing is better and sometimes structural typing is better. Forcing people to always use nominal types just ends in a lot of generic or long/meaningless names - one can already see this in Java.


My general rule (in Scala) is that if I need a tuple larger than two or three elements, I'm better off writing a case class. Tuples get unreadable real fast.

I think tuples are an essential language feature (and it's ridiculous that Java doesn't have them yet), but I think they're often way overused.


My heuristic is based much more on how often the same structure is used, and in which places. Sometimes type aliases are a more lightweight compromise, too.


I've been thinking about structural vs nominal. As you say, both have use cases. How would you combine both in one language so it's not confusing?


I'm not sure, but I think I would make a language based on structural typing only, while allowing a structural type to contain names. (Scala's literal types already come close.)

Think about the following type-level (not runtime) representation for a structural type (tuple):

    (String -> Integer)
And the following for a structural type where each "member" has a name:

    (("name" -> String) -> ("age" -> Integer))
And the following for a named structural type where each "member" has a name:

    ("User" -> (("name" -> String) -> ("age" -> Integer)))
The last structural type here would be equivalent to a class/record in Java. I would then add syntax that makes working with these special kinds of structural types easier. So

    record User (String name, Integer age)
would translate to the structure above. But one could also do something like this _if they wanted to_:

    ("User" -> (("field1" -> ("name" -> String)) -> ("field2" -> ("age" -> Integer))))
The closest equivalent would be a Java record where each member is annotated. This could be used to serialize/deserialize it into JSON.

I think that's what I would try :) but I'm not a PL designer.


I think nominal types are a must, otherwise you lose most of the benefits of static typing.

Is an Email type the same as a String, even though they are the same under the hood? Precisely: I should not be able to accept one as the other without an explicit conversion; otherwise I'd have duck typing. But I agree that explicit structural types can be useful in places.
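
In Java terms, a minimal sketch of that idea (the validation is purely illustrative): a nominal wrapper cannot be confused with a plain String without an explicit conversion:

    record Email(String value) {
        Email {  // compact constructor: reject obviously invalid input
            if (!value.contains("@"))
                throw new IllegalArgumentException("not an email: " + value);
        }
    }

    // void send(Email to) { ... }
    // send("x@y.z")             // does not compile
    // send(new Email("x@y.z"))  // explicit conversion required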


I think you misunderstood my idea. For example, check the User I defined: even if you had another type with the same structure, the name would be different, so the two would not be interchangeable without a "cast", even though both are ultimately structural types. It's just that the name is now part of the structural definition.


OCaml does this I believe.


Restrict structural types to tuples only.


That honestly won’t be very useful.


TypeScript!


Nominal ≠ named. You can have named tuple members while maintaining structural typing; the two are not mutually exclusive.


Did I say something that contradicts that?


Kotlin data classes can be used with JPA, unlike Java records. It's good to encourage immutability, but it is often a misfit and isn't the role of a "data class"; they could have added a separate "immutable" class qualifier, which would have been a better separation of concerns. Here it's ad hoc. Ironically, Kotlin enables such immutable support thanks to Java records: https://kotlinlang.org/docs/jvm-records.html#declare-records...

Note however that an upcoming version of Java will get first class ergonomic support for manipulating immutable data: https://github.com/openjdk/amber-docs/blob/master/eg-drafts/...

There is also https://github.com/hrldcpr/pcollections
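
A minimal sketch of what PCollections gives you: persistent collections where a "modified clone" shares structure with the original:

    import org.pcollections.PVector;
    import org.pcollections.TreePVector;

    public class Demo {
        public static void main(String[] args) {
            PVector<String> v1 = TreePVector.<String>empty().plus("a").plus("b");
            PVector<String> v2 = v1.plus("c");  // v1 is untouched; structure is shared
            System.out.println(v1);  // [a, b]
            System.out.println(v2);  // [a, b, c]
        }
    }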


> Kotlin is enabling such immutable support thanks to Java records ironically

That a member of the JVM ecosystem is leveraging new JVM capabilities isn't ironic, it's totally expected.


> Kotlin data classes can be used with the JPA contrary to Java records.

Arguably this is more an issue of JPA than a strict advantage of Kotlin: JPA was designed at a time when the general consensus was to primarily use mutable data, and it was heavily influenced not only by existing Java ORM APIs but by their implementations.

Kotlin itself supports JPA via a compiler plugin, which is a good enough solution but nevertheless not one native to the language. Data classes mostly work by accident, and pretty much any documentation you will find advises you _not_ to use them with JPA.


I'm really not sure that using data classes with JPA is a smart idea, solely because of the generated hashCode/equals and JPA lazy loading. For instance, Ebean recommends against it [0].

https://ebean.io/docs/best-practice/#kotlin-data-class


Who is arguing otherwise? It's assumed that when a language adds a feature that has historically been provided by libraries, it's probably better optimized: CompletableFuture, Streams, Date/Time. For many, these have replaced the libraries that filled the gaps.

But not all of us can use JDK 14, and will continue to use Lombok if we're writing in Java.


I work on a Grails app; it only boots on up to JDK 11 at the moment. But hot reload only works with 8... so while the app runs on 11, most development still happens on 8.

Lombok JustWorks™. You forget it's there until you set up a new dev environment and forget to set up the annotation processor in IntelliJ.


You need JDK 16 to use records without preview flags. And "with" syntax probably won't be available in JDK 17, the next LTS, so for the next three years records will not be very useful for most projects, I guess.


Project Lombok is my bae, but of course a feature built into the language is better than an annotation processor.


I've never understood why code generation for getters and setters is so over-engineered. Heavy technology for light work is almost always more trouble than it's worth.


Getters and setters are pointless anyway. Unless you're creating read-only properties by providing only getters, reflexively adding a getter and a setter for every property is exactly the same as just marking the property public. It's even worse in the case of mutable objects (like lists or maps), because the getter itself returns a reference to the mutable object. The getters-and-setters mentality came from a horrible misunderstanding of object-oriented design principles and has been standardized into common practice.


In languages without property support, reflexively writing getters and setters is the only way to make it possible to go back later and add logic to getting and setting without changing the callsites. Is this a workaround for the combined shackles of mismanaged enterprise environments where changing callsites is impossible for some reason, and legacy language environments where you have to use Java for some reason? Yes, of course. But sometimes you need a workaround.


> go back later and add logic to getting and setting

Which, realistically, you're virtually never going to do; at least not often enough to justify the boilerplate, especially for things like "records", which were probably auto-generated from a schema (with obligatory getters and setters) anyway. If you did, you'd end up confusing all of your callers, who probably wrote client code presuming that what they provided in the setter was exactly what they'd get back from the getter.


Agreed: for a lot of code you can be pretty sure it will not change. And you should be able to refactor your code base; if not, there are larger issues.

In general I subscribe to: at first, make it as simple as possible; once you have two examples of it being too simple, it is time to refactor.

Of course, when creating libraries it is different. In that case you want to protect users at the API surface and try to keep it stable.


This is true, until you're writing a library that's pulled from a repository and used in several projects.

If you can't guarantee that you're not going to break someone else's code by changing foo.x to foo.getX(), then you should stick with accessor methods. Now, if you're staying inside a single package, or a single compilation unit, then what you say is reasonable enough. I'd still prefer to write the getters and setters, though, especially in cases where they're free, like in Kotlin.


It would be better if the default were not to write setters or getters for any properties, but instead to write a logical interface that properly encapsulates them. Unfortunately, that ship sailed a long time ago for Java.


It actually happens a lot for me.


The real question is why the convention is getFoo()/setFoo(foo) and not foo()/foo(foo).


It makes for easier searching in small code bases with stuff like grep instead of needing full language server / ide support.


Not all languages have overloading.


I'm referring to Java.


I think the pattern was adopted from languages before Java which didn't have overloading.


get()/set() is the same as a public property, until you change the implementation, while maintaining interface compatibility, which is the point.

In C# you'd have a point: it has parametric properties and read-only properties, so there you'd favor just declaring public properties.

But this is why context matters. And OO design principles also depend on this context.


> until you change the implementation, while maintaining interface compatibility, which is the point

How many times have you seen that done in the real world?

And how many times have you seen it done without creating a lot of bugs, given that the behavior changed while the interface didn't?

Personally, I've seen the first more than zero times. Not the second. Every single time somebody decided to mess with a setter or a getter, it broke the systems that depended on it, and things would have been much better if they had simply changed the interface, so the problems would surface at compile time.


The behavior doesn’t change. The implementation does. Those are orthogonal.

And yes I see it every day. The collection interfaces in Java have countless swappable implementations. Those are basically getters and setters on a vector.

I also had to change entity storage to columnar for a project. Did that. Never had to change a line of code outside the entities.
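
A sketch of how that can work (all names hypothetical): as long as callers go through accessors, an entity can be backed by row-oriented or columnar storage interchangeably:

    interface Person {
        int id();
        String name();
    }

    // Row-oriented: one object per entity.
    record RowPerson(int id, String name) implements Person {}

    // Column-oriented: accessors index into shared arrays; call sites are unchanged.
    final class ColumnarPerson implements Person {
        private final int row;
        private final int[] ids;
        private final String[] names;

        ColumnarPerson(int row, int[] ids, String[] names) {
            this.row = row;
            this.ids = ids;
            this.names = names;
        }

        public int id() { return ids[row]; }
        public String name() { return names[row]; }
    }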


Great to see Records come to Java, although without the "with" feature they are pretty half-baked.

I give it 5-10 years before we've tricked everyone into writing OCaml / F#!
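
In the meantime, a common workaround is to hand-write "wither" methods on the record (a sketch; the with* naming is just a convention, not a language feature):

    record Point(int x, int y) {
        Point withX(int x) { return new Point(x, y); }
        Point withY(int y) { return new Point(x, y); }
    }

    // var p2 = new Point(1, 2).withX(3);  // Point[x=3, y=2]; the original is unchanged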


So what's the answer to the question? Pattern matching and destructuring? But I don't see any reason Kotlin data classes can't do that. I know Kotlin pattern matching is pretty lackluster, but I never thought this was the reason.


Records are good, but if you want this in Lombok, don't you just use @Getter instead of @Data to generate read-only methods?

Serialization is handled just fine by Java bean getters and setters. I don't really see an advantage.


I've used @Value along with @Builder(toBuilder=true).
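
A minimal sketch of that combination (the Point class is hypothetical):

    import lombok.Builder;
    import lombok.Value;

    @Value
    @Builder(toBuilder = true)
    public class Point {
        int x;  // @Value makes fields private final and generates getters
        int y;
    }

    // An immutable "copy with changes", much like a record wither:
    // Point p2 = p1.toBuilder().x(3).build();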


The main problem I had with Lombok is that it was too flexible, forcing me to read all the annotations on a class. Kotlin data classes are implemented a single way, so I don't need to grok them individually.


That blog is hard to read for some reason.


I love that this website is trying new stuff with design. I also think the TOC is terrible and distracting.


Trying to read in Safari iOS but most of the article text seems to be missing.


You have to scroll the page reeeeeally slow to let the text load where it’s supposed to.


So this custom code font not only looks like Apple II, but it also works like Apple II. Amazing.


I think it does a GC pause on every scroll. If you just scroll and wait a few seconds it renders. That said it got so frustrating I stopped reading, even though I was curious :(


Only half the code is rendering for me as well, Firefox iOS


Try using "Reader" mode. Tap the little icon at the left of the URL bar.


Completely unreadable on mobile for me.


Same.


Using the Dark Reader extension for Chrome, all of the sample code text becomes the same color as the background, so I thought I was scrolling over huge chunks of blank space.


It's not the blog author's responsibility to test the blog against every possible extension that readers might be using.

If a reader uses third-party software that modifies the original blog design, it's their responsibility to disable it if it doesn't work seamlessly.


I agree with this in general, though sometimes incompatibility with popular extensions is an indication that the author isn't following standards.


Asking people to test against all combinations of browsers and extensions? That's asking a lot.


I was surprised to encounter this as well, in Firefox using Dark Mode. I'm still not 100% sure what's going on, because it's rather odd, but if I inspect the page and add

    .language-java { opacity: 0.99999 }

all content instantly renders. Interestingly,

    .language-java { opacity: 1 }

or

    .language-java { opacity: 0.99999999 }  /* probably same as 1 */

blanks out the content again. Seems like rather odd behaviour to me, so I'm definitely interested if anyone figures it out, even if this is kind of derailing the discussion.


Okay, the day got a bit quieter. I vaguely remembered running into this weirdness before: apparently opacity creates a new stacking context. Doing z-index: 1; position: relative; does the same thing. The rule that seems to break everything is the one that forces a default opaque background on everything with dark text. For it to break this, there must be overlapping content, but I haven't found it yet. Still, enough to know what was going on.


    @JvmRecord data class ...

I'm sorry, you said something?


I'm not sure why immutable data structures have surfaced as something important. Typically you never change fields, so it is of only academic value whether a field in a POD, POJO, POCO, or whatever it is called in the specific language may actually change.


> so it is of only academic value whether a field in a POD, POJO, POCO, or whatever it is called in the specific language may actually change.

I'm not sure if you've debugged much, or inherited any large legacy projects, but in that moment, knowing something is immutable versus "typically it isn't mutated" is a pretty big distinction.


Good code doesn't have this issue.


Good code doesn’t have concepts like “typically doesn’t mutate”. It’s either mutable or immutable.


The standard way to do it in Java is, as always, to mutate clones, typically via libraries that provide this. You need really bad code before you have to think about whether objects are mutable.


Apparently you don't know that most immutable structures are optimized for modified cloning in ways mutable ones aren't.


This is Java.


And?


good code shouldn't need to be debugged either ;)


"Typically" this means you can never be sure, which is a problem.


Sure of what? If your code is not garbage, it takes two seconds to see where things change. It should be very few places in the code.


It takes two seconds in a hello-world demo. It probably takes more in a project with millions of lines.


Fields of an object should typically not change. But yes, I had to make a field of a class immutable some time ago. There was a bug, and I could not read the code to figure out where the object was changed. Still, that was caused by bad code: the same object was sent around pretty much everywhere.


So even you, the master of great code, wrote bad code. I guess immutability is worth something then.


I did not write the original solution...


Precisely: you did not write the original solution, which is where immutability shines. It's a code comprehension tool; it gives you guarantees about code you didn't write. That's a huge boon!


There are no guarantees that it is the same object you get, so it is pointless.


No one uses immutables to make sure they “get the same object”. What does that even mean?


So you reckon we only need immutability if two or more people have to work on code. Ok.


You are just too stupid. I hope you are not immutable.


There are many good things enabled by immutability, like safer/easier multithreading and value-like semantics for objects. It is definitely not only of academic value.
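
For example, a minimal sketch (the Config record is hypothetical): an immutable object can be shared across threads without any locking for reads:

    record Config(String host, int port) {}

    public class Share {
        public static void main(String[] args) {
            Config config = new Config("localhost", 8080);
            // Final fields are safely published after construction,
            // so both threads can read config without synchronization.
            Runnable task = () ->
                System.out.println(config.host() + ":" + config.port());
            new Thread(task).start();
            new Thread(task).start();
        }
    }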


Typically you use values and not objects for these kinds of things, so not much is gained.


Values require copying, so depending on the problem it can be worse from a performance point of view.


Keyword being “may”.

Guarantees are nice. Especially if objects are going to be passed around every which way from Sunday. It allows you to better reason about what could happen, and where.

Java has taken a while to get there, but I'm glad it finally has.


Well, if you pass around objects and change their state, records will not help. People who do this are already using immutability libraries in Java to clone an object, change some field, and pass it along.


Sure, things have been bolted on top of Java to allow this to happen. People have made use of them.

Java now has built-in support for these things, so developers will no longer have to rely on third-party solutions.

This is great for the Java world, and by extension for any languages built on top of the JVM.

I would politely disagree with your characterisation of it as just academic; as an engineer, I find it incredibly exciting. Admittedly, my bar for excitement is pretty low these days.


I have done Java for 25 years and never felt any need for records. There are so many issues...

