On their own, the features specific to Java 17 aren't obviously that compelling, but the important thing is that Java 17 is an LTS; the last one was Java 11, three years ago. Since many organizations, including mine, stick to LTS releases, a lot of developers will get a big change in what they can do sometime in the next few months as they upgrade.
Among other things, this means that we can begin to use records, pattern matching for instanceof, the Shenandoah GC, and more.
This is what excites me. Records, switch expressions, and sealed classes are all excellent, and even better together. Along with pattern matching for switch, Java is finally losing a lot of the cruft people complain about.
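A sketch of how they compose (hypothetical Shape hierarchy; note that pattern matching for switch is still a preview feature in 17, so this sticks to instanceof patterns):

    sealed interface Shape permits Circle, Square {}
    record Circle(double radius) implements Shape {}
    record Square(double side) implements Shape {}

    class Areas {
        static double area(Shape s) {
            // instanceof patterns bind the cast in one step
            if (s instanceof Circle c) return Math.PI * c.radius() * c.radius();
            if (s instanceof Square sq) return sq.side() * sq.side();
            throw new IllegalStateException("unreachable: Shape is sealed");
        }
    }

The sealed interface means the compiler knows Circle and Square are the only cases, which is what will make exhaustive switch patterns attractive once they land.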
I'm trying to find a good use case for records, but the best I can come up with is using them for composite hashmap keys. I suppose that combined with sealed classes and pattern matching features at some point they might be more useful, but what is the main use for records right now? Given that they're immutable and have no convenient way to be copied with modifications, I find them quite tedious to use.
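To illustrate the composite-key case, the kind of thing I mean (OrderLineKey is a made-up example), where the generated equals/hashCode does the work:

    import java.math.BigDecimal;
    import java.util.HashMap;
    import java.util.Map;

    public class CompositeKeyDemo {
        // Composite map key: equals/hashCode/toString come for free.
        record OrderLineKey(long orderId, int lineNo) {}

        public static void main(String[] args) {
            Map<OrderLineKey, BigDecimal> prices = new HashMap<>();
            prices.put(new OrderLineKey(42L, 1), new BigDecimal("9.99"));
            // Lookup works by value, not identity.
            System.out.println(prices.get(new OrderLineKey(42L, 1)));
        }
    }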
I work as a systems developer for a bank, and records are an easy replacement for a very popular dependency called Lombok. Now, records don't necessarily do everything that Lombok does, but for us they replace what we actually need. One less dependency and I'm a happy camper.
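As a rough sketch of the swap (Money is a hypothetical class; @Value is Lombok's immutable-class annotation):

    // Before, with Lombok: @Value generates the constructor, getters,
    // equals/hashCode, and toString.
    // @Value
    // public class Money { BigDecimal amount; String currency; }

    // After: the language generates the same boilerplate itself.
    public record Money(java.math.BigDecimal amount, String currency) {}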
Do you not just use Maven? Any of the major IDEs will automatically configure annotation processors for you when they're configured in your pom. Similar for Gradle.
It's been a while since I have had to deal with one, but at the time the IDE didn't handle it well at all. I think it was because I was missing the Lombok plugin.
TBH, it was a lot of trouble just so someone could avoid generating a couple of getters and setters, and could use an annotation to set up the logger. I realize that there are more features available than that, but the uses I've seen were often so mundane and boring that the setup wasn't worth the hassle for me.
But I guess we all get annoyed with minor hassles in different ways. I got annoyed with the hassle of setting up an IDE, they got annoyed with getters and setters more. :)
I no longer think there is a justification for a project like Lombok. FreeBuilder/AutoValue (depending on your needs) will provide the same feature, but with clear visibility at the IDE level.
Honestly, I expect that half the classes I write will be records, once I have access to this.
A lot of the time, the software I'm working on has data objects that are really amalgamations of other fields. Example: "Address" is the street address, the city, the postal code, etc, etc. These sorts of objects should be immutable. Records are perfect for that.
If I need to add additional logic or methods, I can add those to the record, and it continues to enforce that the state is immutable.
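A sketch of the Address example with some derived behavior (field names are illustrative):

    // Immutable components plus extra logic; the record still can't be mutated.
    public record Address(String street, String city, String postalCode) {
        public String mailingLabel() {
            return street + "\n" + city + " " + postalCode;
        }
    }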
* tuples (complex numbers, points, dimensions, colors, IP addresses),
* database query results,
* stateless beans for DI frameworks where it cuts down on the verbosity of using constructor injection (actually a slight misuse)
* composite natural keys in database modeling
The immutability can help to enforce API contracts and proper service layering. A common problem in legacy code is modules communicating with each other by modifying objects that were passed as arguments.
Eventually, Project Valhalla will introduce the possibility of declaring records as primitive, which will also reduce the overhead of wither methods.
Edit: Building web APIs should also benefit from records because the argument and result types are only supposed to be created and read, not modified.
I guess records will make Valhalla (value types) easier, which enables more efficient data structures, and will probably make passing data over FFI easier too.
They reduce a lot of boilerplate, e.g. when passing multiple return values.
I find records super useful! Not only can you express things like a Point(x, y) easily, you can also wrap primitive types like integers to give values semantics, behavior, and type safety. Records are a perfect fit for value types as used, for example, in Domain-Driven Design.
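For instance, a sketch of a primitive wrapper as a DDD-style value type (CustomerId is a made-up example):

    public record CustomerId(long value) {
        public CustomerId { // compact constructor: validate at construction
            if (value <= 0) throw new IllegalArgumentException("id must be positive");
        }
    }
    // A method taking a CustomerId can no longer be handed a bare
    // accountId long by accident.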
We would use them as data transfer objects between different layers of the system - however, they cannot extend another class and thus are not suitable in our specific case.
One thing peculiar about Java is that public records have to be in their own files. I wanted to treat records as less ceremonial than classes for organizing code. I'd have liked a dozen or so records in one file along with some basic operations on them, but it is not possible to have multiple public records without that many files.
I know the answer is always "use the IDE" and all, but it causes more context switches than scrolling a bit to see the types I created.
If you have a bunch of related records you can make them public and nested within a top-level public class. You could also throw some static utility or factory methods in there too.
It's a core restriction of Java that there can be only one public top-level class per file. Nesting records inside a class would solve the issue. It's not that pretty, though, because only lower-casing the enclosing class name would create the illusion of it being a package.
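A sketch of that workaround (Geometry and its members are made up):

    // Several related public records in one file, plus factory helpers.
    public final class Geometry {
        private Geometry() {}

        public record Point(double x, double y) {}
        public record Dimension(double width, double height) {}

        public static Point origin() { return new Point(0, 0); }
    }
    // Callers write Geometry.Point, which reads almost like a package name.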
Interesting. It's OK. It seems like they're adding this syntax just to avoid extra copies, e.g., `p with {x = 3, y = 4}` instead of `p.withX(3).withY(4)`. I really don't mind the latter.
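For reference, what the latter looks like today: hand-written withers that copy the record (hypothetical Point):

    public record Point(int x, int y) {
        public Point withX(int x) { return new Point(x, this.y); }
        public Point withY(int y) { return new Point(this.x, y); }
    }

Each chained call allocates a fresh Point, which is exactly the extra-copies concern the proposed `with` block would address in one step.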
My gut feel is that records break encapsulation and will make refactoring slightly more difficult than the equivalent lombok value class. But if this gets more people making objects immutable, I'm all for it.
The perfect transparency of the underlying data is rather the whole point of records. Tempered by immutability, records should be a useful tool for modeling data where encapsulation is not required. Note that it is still possible to add methods that can perform computation on the underlying data.
TBH, both approaches are just poor design of the object model. The one from my link is a clever but inefficient way to manipulate records (reflection); the one with Lombok creates redundant interfaces without business meaning. A record with a few business methods is lean enough and ensures that only valid transitions can happen.
Every method of your API must serve some business purpose. "With" methods and setters generated or written "just in case" often do not have one or they are being used only in tests, which would be insufficient justification for having them. Does your code really need to change individual components of the color? If it is not a graphic editor, probably not and those methods will be redundant.
I've compiled a Twitter thread with some of the key features new since Java 11 over the last few days: https://twitter.com/gunnarmorling/status/1434443970411704324. Also includes some links to related blog posts. Perhaps interesting for some to get a quick overview what you'll get with 17 when coming from 11.
Java 17 is a release of the reference implementation, but there are a number of distributions from a variety of vendors.
Oracle is going to provide long-term support for its distribution, and it sounds like many will follow its lead. (If you are going to provide LTS, it would be a little perverse to be out of sync with other providers of the same software.)
So, for all practical purposes, it is an LTS. But check with your vendor.
Yes and no. Debian's repos provide binaries for multiple versions, though. Since 11 came out, Debian testing has always had OpenJDK 11, and then of course binaries for newer versions. So they do ship the LTS; you just need to specify the version number, openjdk-11-jdk. They don't tag it as LTS in the package name, but someone who is using Java professionally probably knows openjdk-11 is the LTS implementation.
As far as I know, Debian does not have any strict policy or dates for when they will stop building and publishing OpenJDK 11 binaries out of the jdk11u branch in the OpenJDK project.
The jdk11u branch [1] (the source code) is currently stewarded by Red Hat and SAP employees and other contributors, and both Red Hat and SAP have support roadmaps, which currently go until at least 2023 (SAP) [1] and 2024 (Red Hat) [2].
So, Debian is not providing an LTS binary. Debian is producing binaries out of the source code maintained by others. Once jdk11u stops receiving updates, Debian also stops shipping updates, because Debian has no LTS commitment.
This also brings some big gotchas, such as the closing of encapsulation loopholes. Unsafe code and various hacks calling into the JDK will break a lot of programs; so you might be stuck on 11 if you use HBase, Spark, etc.
You won’t be stuck, you just need to add some command line flags when running your application, as a way of acknowledging these packages are breaking encapsulation.
“Some” as in dozens (for my apps, at least). It’s a solution, but not a great one. The projects I depend on will have to remove such usages eventually to avoid awful UX (and avoid punching holes in encapsulation).
For server applications, this is a one-time administration effort. If you are concerned about `--illegal-access=allow`: yes, you should strive to get rid of it before security auditors blow an artery because of it.
Desktop applications are usually not delivered as bare JARs, but with wrapper binaries or scripts, where such flags are supposed to go.
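Concretely, the flags in question look like this (which modules and packages you need to open depends on the offending dependency):

    java --add-opens java.base/java.lang=ALL-UNNAMED \
         --add-opens java.base/java.nio=ALL-UNNAMED \
         -jar app.jar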
There are so many good features from the last LTS (11) to now. Some of my favorite languages are Swift and TypeScript because of how natural the syntax feels. Java is finally getting close to that level of flexibility with JDK 17.
Thanks for the Dev.java mention. Our team is excited to launch the site. We focused on minimal design (similar to Inside.java) and heavy on content, mostly learning material to start. More to come soon. Feedback here is welcome!
The responsiveness seems broken because of the "try java" editor. Try it on your phone and you'll see you can move the page horizontally and most of it is blank.
I am confused by the license. The way I read it, "bundling" with commercially licensed software is forbidden; that is, you can't ship the JRE along with a product that you are selling. So your users will face extra installation steps.
They are, however, making a commitment to scripting friendly URLs for the JDK downloads, making it feasible to automate the install.
Still, it is pretty lame, since they've been advocating bundling the JDK ever since they deprecated Web Start.
OTOH, since Oracle JDK and OpenJDK are identical save for branding and licensing, not sure why you wouldn't just bundle an OpenJDK build in that case.
This new license seems to be targeted to two main groups: those who depend upon a "system" install of Oracle Java for running third party apps, and those who are for some reason unwilling to use OpenJDK.
The standalone JRE stopped existing after Java 8. The idea now is that you "link" a runtime for your app, which customizes and optimizes it, and then you ship the two together. It's the fully supported way and has no license implications.
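That linking step is done with jlink; a minimal sketch (the module list is app-specific):

    jlink --add-modules java.base,java.sql \
          --strip-debug --no-header-files --no-man-pages \
          --output my-app-runtime

The resulting my-app-runtime directory contains a trimmed java binary that you ship alongside your application.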
Seems like the new Oracle JDK license is a lot more flexible than before. In reality it's kind of cosmetic because lots of people were using the (100% compatible) Amazon or Azul spins, or just regular OpenJDKs.
The comments above seem to indicate that bundling a "JRE" with an app requires a license. Are you saying that the post-Java-8 model, which builds a "custom JRE" and distributes it with your app, does not require a license?
There are plenty of reasons to avoid Oracle, but I’m not sure what about the previous licensing change was a failure. OpenJDK builds have always been free (and provided by Oracle).
The community terribly overreacted and misunderstood the nature of that change, and indeed caused a lot of FUD.
Language Features
394: Pattern Matching for instanceof
395: Records
306: Restore Always-Strict Floating-Point Semantics
409: Sealed Classes
361: Switch Expressions
378: Text Blocks
Language Features in Preview (behind a flag)
406: Pattern Matching for switch
Incubating APIs (also behind a flag)
412: Foreign Function & Memory API
414: Vector API
Tooling
392: Packaging Tool (jpackage)
JVM
386: Alpine Linux Port
391: macOS/AArch64 Port
340: One AArch64 Port, Not Two
388: Windows/AArch64 Port
I'm guessing that Loom (lightweight threads) is not in this yet. It looked like a nicer way to handle "async" code, but I wonder about the implementation difficulty.
Basically, if you look at the JEPs related to networking infrastructure from the last set of releases, they have been rebuilding the whole JVM to be Loom-able.
Even worse: C# came as a response after Microsoft failed to apply their embrace, extend, extinguish tactic to Java like they did with Netscape [1].
In the end the case was settled out of court, and MS agreed to pay more than a billion dollars to Sun. MS also agreed to license a whole slew of patents for use with .NET.
.NET didn't add support for macOS on ARM64 years ago. .NET also didn't implement 2D rendering on macOS via Metal, etc. As a matter of fact, .NET didn't even support any platform other than Windows until recently.
I think I would be generous if I said 10% of .NET applications ran on Mono instead of the MS CLR without source modifications. I even doubt that more than 60% of .NET applications run on .NET Core (needed for platforms other than Windows) without modifications.
Meaning .NET has nowhere near the "write once, run anywhere" support Java has.
> The release on September 30, 2004 was originally numbered 1.5, which is still used as the internal version number. The number was changed to "better reflect the level of maturity, stability, scalability and security of the J2SE".
and
> This version introduced a new versioning system for the Java language, although the old versioning system continued to be used for developer libraries:
>> Both version numbers "1.5.0" and "5.0" are used to identify this release of the Java 2 Platform Standard Edition. Version "5.0" is the product version, while "1.5.0" is the developer version. The number "5.0" is used to better reflect the level of maturity, stability, scalability and security of the J2SE.
No releases for 17 yet, but checking the site, I discovered that AdoptOpenJDK was moved to the Eclipse Foundation and renamed to Adoptium: https://adoptium.net/
AdoptOpenJDK offered two builds for each version, HotSpot and OpenJ9. Adoptium doesn't. Instead, there's only one download, which makes reference to "Temurin", whatever that is. A quick Google search lands me on an Eclipse project page which describes Temurin as follows:
"The Eclipse Temurin™ project provides code and processes that support the building of runtime binaries and associated technologies that are high performance, enterprise-caliber, cross-platform, open-source licensed, and Java SE TCK-tested for general use across the Java ecosystem."
I think it's a great option, mostly because it can run anywhere the JDK runs. I'm building and testing the JavaFX 18 early-access builds for Linux on six Debian architectures (amd64, arm64, armhf, i386, ppc64el, s390x). I should have the JavaFX 17 general-availability release built this week on all six, too. See:
It's worth noting that you may not always want to use the ARM JVM on M1 even if it's available, because then anything linked by JNI needs to be running on ARM also.
I'm on an M1 Mac and have spent more time using the x86 JVM because the ARM JVM is less likely to have applications Just Work.
A ton of huge stuff was in JDK 16, though, and it seems most companies won't upgrade until LTS releases, so folks in a corporate environment should be looking at JDK 17 as JDK 12-17.
Records, multiline strings, pattern matching for instanceof, handling nulls in pattern matching/switch, sealed classes/interfaces, probably other things I'm forgetting.
I had never written Java before recently, and thank god for the JDK 16 features. They were the only thing that made it slightly tolerable.
The teams working on the JDK think about their work in 6-month release cycles, not LTS cycles. What's ready to go in when the train leaves the station, goes in, what's not, waits another 6 months. And this applies across the board for incremental changes coming from Amber, Loom, Valhalla, Panama, etc.
The danger of thinking in LTS cycles is a feeling that a feature needs to be rushed to make it in, which might jeopardize the production-ready quality on Day 1 of GA. It's very important to us that the ecosystem can trust every release, in production, right out of the gate.
Can't remember where I heard it, but the thought was that "big stuff" was skipped in case it created bugs that then had to be maintained for the entire LTS support period.
See my parallel comment on this philosophy. Also, it's likely we'll never see "really big stuff" in a single release as the 6-month cadence has made possible incremental changes to a larger vision. Case in point, see Project Amber [1].
The new FFM API looks really cool. It's still incubating, but I think better native interop could really be a big win. Most developers will never touch that stuff, but having the ability to manage native, off-heap memory and safely(ish) make native calls will be great.
I wrote an app which used JNI to call into C++ code. I ended up refactoring it to spawn subprocesses written in C++, and communicate with them via RPC. Not because JNI was difficult, but because errors in the C++ can take down the whole app otherwise. All the performance and ergonomics improvements in the world wouldn't persuade me to give up that safety barrier.
Unless i rewrite the native parts in Rust, of course :).
Interesting but the "Isn't this just multiple return?" section seems to completely miss the point. I don't see any examples of what multiple-return would look like. And multiple-return is what I want.
At first glance, it looks like this (sure, more powerful) pattern matching system is not going to satisfy my desire for a compact syntax that lets me do the equivalent of this typescript:
No, there is no tuple support yet in Java. That would require something like variadic generics or a special compile time syntax sugar for tuples as the sole use of variadic generics.
With pattern matching, the best you are going to get is nasty boilerplate like
    var returnedTuple = getMeTwoThings();
    if (returnedTuple instanceof MyTupleType(Foo foo, Bar bar)) {
        // Do something with foo and bar
    } else {
        // Not reachable unless refactoring, etc. So panic here.
    }
That else clause is so terrible you'd probably just rather use the getters from the type.
Using an object/class, as you say, is probably "the Java way" to do this, given the historical "everything is an object" stance (which has been eroded a bit, though, with some of the functional support). But it sure is nice to do a multi-return here and there without having to create another class, a class which required you to write boilerplate (getters/setters). Maybe records are some sort of compromise. Probably a non-compromise for people who like multi-returns.
I think the best thing we have is the new `record` feature. You can declare a small public record before the method with the return type, and by using the `var` keyword, the caller doesn't need to repeat the type declaration.
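A sketch of that pattern (Foo and Bar are placeholders):

    class Foo {}
    class Bar {}

    record TwoThings(Foo foo, Bar bar) {}

    class Caller {
        static TwoThings getMeTwoThings() {
            return new TwoThings(new Foo(), new Bar());
        }

        static void demo() {
            var things = getMeTwoThings(); // var avoids repeating the type name
            System.out.println(things.foo() + " / " + things.bar());
        }
    }

Since Java 16, a record can even be declared locally inside a method when the grouping is only needed there.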
I use the lombok plugin with IntelliJ, it eliminates tedious parts of some code.
Sometimes one needs to create a new object to return multiple related items from a function, or use a collection or array. I think that is unnecessary ceremony. Other languages (e.g., Go or Python) have had this for a while now. Taking a wild guess here, Lisp probably had it since the 1970s.
I would like to ask a question, in order to elicit the detailed and considerate comments that HN is known for. It's not intended to be inflammatory in any way, shape, or form!
I am not a Java developer. I am one of those users who has a bad memory of using Java desktop apps in ~2001, when they ate a ton of RAM, seemed horrendously slow, and had example "Hello Worlds" reminiscent of Enterprise FizzBuzz [1]. At some point in the last 20 years, Java has gone the other way: it has a reputation (I think) of being "boring", "performant", and used by businesses for server-side logic. Still, the only way I interact with it is the occasional dependency install.
I've noticed:
-- There are a bunch of different JREs/JDKs, most famously Sun's/Oracle's own Java, OpenJDK, OpenJDK built by Other People™, and Random Other JDKs (Azul; Amazon Corretto).
-- People rail against Sun/Oracle and what happened to Java.
-- For a bloody long time I had to download the JDK separately and/or type "agree" into a very unfriendly-sounding license at the command line.
Can I therefore ask the HN meta-brain to either explain, or point me to a reference that explains:
-- What the ideological and practical differences are between the JDKs
-- Why there are different JDKs -- I get that that it's good to have different language implementations, but there are really quite a lot!
-- What Oracle did
-- And what the beef with the whole open-source license was. Isn't more open better?
For the most part, the various JDKs are not really different. They're builds of the same OpenJDK repo.
They occasionally have a few small changes, maybe with a few extra bugfixes, new or backported.
The one small difference from compiling it yourself is that Oracle controls the TCK, a test suite for the binaries, which Azul and others might use but which is not open to you.
I've got a task at work to install Oracle WebLogic and migrate a legacy app from an older OAS instance. The WebLogic installer didn't even run on OpenJDK; it specifically checked for it and crashed with an "OpenJDK builds are not supported" error. No idea why that is, since I read everywhere that they're essentially the same thing.
I believe that when you license Weblogic from Oracle, the Weblogic license includes a license for the Oracle JVM it runs on. So there is no reason to run Weblogic on another JDK/JVM. I assume that Oracle requires their JDK/JVM for Weblogic to keep support simpler.
> memory of using Java desktop apps in ~2001 where they ate a ton of ram, seemed horrendously slow,
Note that the horrible end-user experience of running Java programs (or Java plug-ins in the browser) back in the day was never indicative of how it is for running on servers and those kinds of workloads. The stuff I've written in Java for backends has been blazingly fast (compared to Python, for instance) and with much better tooling. So devs choose Java/the JVM because it works well, despite some shortcomings in some areas.
Btw, I think you'd find today's Java desktop apps snappier than you'd expect (especially compared to Electron, for instance). Bundling the JVM removes the hassle of installing something separately, along with the version issues. And AOT compilation and the new modularity of the stdlib make the files smaller and quicker to load.
> What the ideological and practical differences are between the JDKs
The builds may ship with different optional features enabled, such as various garbage collectors or I think support for the GraalVM stuff. Support terms differ.
> Why there are different JDKs -- I get that that it's good to have different language implementations, but there are really quite a lot!
Most of what you will get these days are just different builds of the OpenJDK class library + tools + Hotspot (JVM). Well, except for the android runtime, but we don't talk about that one.
Historically there have been more JVMs and class libraries, but that has been consolidated a lot since Oracle open-sourced it. There is still OpenJ9 and some smaller, more specialized ones.
For a while, there were some nitty, edge case, "UFO" types of differences between OpenJDK and the Oracle JDK. But that's mostly been resolved.
Oracle owns "Java", that is the name, the trademark, etc. In that sense, OpenJDK is "not" "Java". But operationally, this is moot. But it has impact in other areas. What was originally "Java Enterprise Edition" was transferred over to the Eclipse Foundation, but they couldn't take the "java" name with them. All of the JEE packages were "javax.enterprise.". Oracle wouldn't give up the "Java" part in order to not dilute its trademark, so now its the "Jakarta Enterprise Edition", and all of the packages are being renamed to "jakarta.enterprise.". I mention this just as an example of the hoops the community going through, even today, over what's happening with java.
So, Oracle and the community have to walk a fine line over what is the language, compilers, tools, and libraries, and what is "Java": what is trademarked, what is owned by Oracle.
Oracle was kind enough to be an invested member of the Java community by not forking, closing it up, going off its own way, and leaving OpenJDK to fend for itself. That is essentially what happened with Solaris and OpenSolaris.
The other JDKs (Azul, AWS, etc.) were certainly a hedge against this. The industry has a lot invested in "Java" and can't really afford to let it get out of hand. Having "official" builds that are supported, patched, and have eyes on them, versus just the latest release from OpenJDK, gives companies a secure feeling. That's why other companies stepped up to support their own JDKs: to help assure clients that the technology is stable and still worth building on.
So, the differences are not so much different languages, or even different implementations (though there is certainly some of that). Rather much of it is simply stability in the community.
Also, the whole licensing issue with Oracle vs. the others was a big deal. We'll see what impact the new free license from Oracle has.
> All of the JEE packages were "javax.enterprise.". Oracle wouldn't give up the "Java" part, in order not to dilute its trademark, so now it's the "Jakarta Enterprise Edition", and all of the packages are being renamed to "jakarta.enterprise.". I mention this just as an example of the hoops the community is going through, even today, over what's happening with Java.
That is quite a change, and probably not very backwards compatible?
No, it's not at all. But since that code needs to run in a formal container (app server) environment, the app server makers do classloader and renaming shenanigans to be able to load code built against the earlier packages.
Java is a set of specs from https://jcp.org, implemented by various parties; IBM J9, Sun (Oracle) HotSpot, and BEA Systems (Oracle) JRockit are the ones I know of.
Sun open-sourced HotSpot as OpenJDK. Oracle then merged in some code from JRockit, and IBM also joined in on OpenJDK after the mess with https://harmony.apache.org.
OpenJDK is GPL. Oracle provides a compiled version, but there are issues with its license and long-term support: you have to pay Oracle for long-term support. Some products are still running on JDK 1.4.
Since OpenJDK is GPL, Red Hat provides a compiled version for Red Hat Linux called IcedTea, the Eclipse Foundation provides Adoptium, and there are commercial support providers like Azul.
Not a Java dev, but I did code in Java here and there. What is super odd to me:
1. Java devs are always begging for some silly feature that almost every other PL has.
2. Why not write your own? There is a culture of, sorry to use your words, "give me". Programming in Java, as observed by me in the wild, is typically assembling huge blocks of frameworks, libraries, and configuration.
I work in finance, hedge fund kind of stuff. My team has its main apps on 11, with a long tail of minor services on 8. One other team in the company has been keeping up with each release (brave souls!). Most others are in a similar place to us.
I've been interviewing lots of candidates recently, and I usually chat a bit about what versions they've used. Only one is using 11; the rest are on 8. One only finished migrating to 8 this year!
I know of a number of RHEL-based deployments we're managing that still use Java 8. In general, if you're developing libraries, you still need to support Java 8, because a significant number of enterprise deployments use it.
I recently finished migrating a bunch (100+) of AWS Lambdas from JDK 8 to JDK 11, and it was pretty painless - unit testing was spotty at best, but there were no issues.
This was hastened by AWS changing the underlying Java 8 runtime to Amazon Linux - it probably would have been fine but I decided that if we were going to test a change, we might as well move to Java 11 and make the full regression test worthwhile.
Anecdote: We _just_ fixed our last JDK11 blocker here at $corp (the groovy 2.x runtime was holding us on JDK8). Our new quality gates are likely going to keep us closer to the latest and greatest, so I don't see a lot of problems with us jumping to JDK17 sooner than later.
Java 5 and Java 8 were monumental changes to the language… this update will probably have the same legacy. Can't wait to see the benchmarks out of this bad boy once they get the compilers tuned.
I understand wanting to remove the overcomplicated Security Manager, but we have one use case which, as far as I can see, has not been mentioned at https://openjdk.java.net/jeps/411: we install a custom security manager to prevent the software, when run on a developer machine, from accidentally connecting to non-localhost network addresses (we actually use a whitelist). I wonder how we'll do that after Java 17.
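A simplified sketch of that manager (the real one reads the allowed hosts from config):

    public class LocalOnlySecurityManager extends SecurityManager {
        @Override
        public void checkConnect(String host, int port) {
            if (!"localhost".equals(host) && !"127.0.0.1".equals(host)) {
                throw new SecurityException("blocked connection to " + host);
            }
        }

        @Override
        public void checkPermission(java.security.Permission perm) {
            // Permit everything else; we only care about sockets here.
        }
    }
    // Installed early in main():
    // System.setSecurityManager(new LocalOnlySecurityManager());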
Is there a way to see the top new Java features since version X? For example, my company is stuck at version 9, I believe, and I'd like to see what I'm missing out on at a glance.
A clarification: Rust only makes this guarantee for certain types[1] which you would expect to be represented by a simple pointer under the hood (e.g., Box<U>, &T, function pointers) and for other types which can never be all 0.
Making this optimization for something like Option<u8> is, naturally, impossible.
(I assume you are aware of this and were implicitly referring to this optimization.)
I think this statement has been true since about 2014.
Structs are nice, as they allow for control over locality, to a degree that is simply not possible in Java currently. But there is a second effect, which is possibly more important: If I can move gigabytes of my data into arrays of structs, then 1) I greatly reduce memory requirements (far fewer pointers), and 2) I greatly reduce the amount of work that GC has to do.
This is such an important thing to add to Java, and it seems to be perpetually off the stove, not even on the back burner.
Alternatively: This is such a (potentially) big change, it's important to get right. I'd be disappointed if an inadequately-baked solution gets rushed in.
Another way of looking at it: This is fundamentally just a performance optimization. Java performance is already exceptional for most of Java's popular use cases (business processing). While valuable, I don't think this one feature is quite as important as you consider it.
Some features take a long time when you have millions of users and billions of loc. Goetz recently did a State of Valhalla interview here: https://www.youtube.com/watch?v=x1_DBqJrykM
It's mostly implemented in a branch. You can download a version of Java that can do this today. There's more work to do before it can be merged, though, and it will probably ship in incremental pieces.
> If I can move gigabytes of my data into arrays of structs, then 1) I greatly reduce memory requirements (far fewer pointers), and 2) I greatly reduce the amount of work that GC has to do.
As an aside, and not to say that Valhalla is not needed (I am looking forward to it very much, along with Loom), I've recently learned that for many use cases (not all), when you want gigabytes of struct arrays, a bunch of equal-sized primitive arrays, one per struct field, might work even better from a memory-requirement and cache-locality perspective.
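A sketch of that struct-of-arrays layout (Points is a made-up example):

    // One primitive array per field instead of an array of Point objects:
    // no per-element headers or pointers, and the GC only sees two arrays.
    final class Points {
        final double[] xs;
        final double[] ys;

        Points(int n) { xs = new double[n]; ys = new double[n]; }

        double distanceFromOrigin(int i) {
            return Math.hypot(xs[i], ys[i]);
        }
    }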
The semantics of primitive types should make it possible for the JVM to apply this optimization automatically. Maybe controlled via an annotation. But no clue whether it's on the roadmap.
Can't tell if this is meant to be a statement of fact, opinion, or sarcasm. Over the course of 20 years I have probably spent days of time tracking down NPEs, both in code under development and after the fact in production releases. The extra code and time spent determining whether a variable is null certainly isn't free.
Optional at least states that the variable may not be referencing anything, and it provides helpful methods to chain mappings and conditionals to deal with null values more easily.
I really like the Kotlin approach, where they just embraced null being a fact of life on the JVM/JS and instead integrated nullability into the type-system [1]. C# and Typescript do something similar.
So then you get the best of both worlds: the safety of Optional without the boxing overhead.
This really sounds like the way to go, although of course it is too late for Java. Optional feels awkward, and there's no more safety than we already have since the JVM null-checks anyway.
The main advantage is that Optional is inconvenient and forces you to consider the null case. It's easy to forget that values can be null and assume they're not, e.g. with auto-unboxing:
    Long getSomeValue();
    int max = Math.max(0, obj.getSomeValue());
The Optional value forces you to type some text basically acknowledging that the value can be null.
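For contrast, a sketch of the same call through Optional (Source is a made-up interface): the empty case has to be written out before the value can be used.

    import java.util.Optional;

    interface Source {
        Optional<Long> getSomeValue(); // absence is visible in the signature
    }

    class OptionalDemo {
        static int maxOrZero(Source obj) {
            // No silent auto-unboxing NPE: the empty case is spelled out.
            return Math.max(0, obj.getSomeValue().map(Long::intValue).orElse(0));
        }
    }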
Luckily, it's not too late. A C#/Kotlin approach could be introduced and would not interfere with the existing Optional.
The type system would need to be extended with a T? (nullable), T! (non-nullable), smart-casting on null checks, and some annotations or a flag to determine whether a plain T should be treated as T? or T! and whether you can call methods on it.
> How is this better?
It's not if you use Optional.get, but if you use .map, .orElse, and the like, you avoid NPEs instead of hitting them at runtime.
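A sketch of that chaining style (User and Address are hypothetical types whose getters may return null):

    String city = Optional.ofNullable(user)
            .map(User::getAddress)
            .map(Address::getCity)
            .orElse("unknown");
    // Each step short-circuits to an empty Optional instead of throwing an NPE.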
I am skeptical about the "strong encapsulation" and, in general, of all cases where people tell me about something: "you don't need it, it's better for you to do it another way".
Usually I know better what I need. There were cases when I needed to access the internals, e.g. fixing a prod issue.
A quote from Thinking Forth comes to mind:
> The newest traditional languages (such as Modula 2) bend over backwards to ensure that modules hide internal routines and data structures from other modules. The goal is to achieve module independence (a minimum coupling). The fear seems to be that modules strive to attack each other like alien antibodies. Or else, that evil bands of marauding modules are out to clobber the precious family data structures.
> This is not what we’re concerned about. The purpose of hiding information, as we mean it, is simply to minimize the effects of a possible design-change by localizing things that might change within each component.
In that case, I think a Java agent is better suited to your needs. We have one we wrote that intercepts class loads and hot-patches the bytecode.
This allows us to fix bugs in third-party software or the JDK itself until fixes can be released upstream, when we don't wish to release an internal version of an artifact. Not something we use routinely, for obvious reasons, but it's a handy tool to have in the toolbox when we're forced up against a wall.
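A sketch of such an agent (the PATCHES map and its contents are hypothetical; a real one would load patched .class bytes from disk). It is wired up with -javaagent:hotpatch.jar plus a Premain-Class manifest entry:

    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;
    import java.util.Map;

    public final class HotPatchAgent {
        // Internal class names (slash-separated) -> replacement bytecode.
        private static final Map<String, byte[]> PATCHES = Map.of();

        public static void premain(String args, Instrumentation inst) {
            inst.addTransformer(new ClassFileTransformer() {
                @Override
                public byte[] transform(ClassLoader loader, String className,
                                        Class<?> beingRedefined, ProtectionDomain pd,
                                        byte[] classfileBuffer) {
                    // Returning null leaves the class untouched.
                    return PATCHES.get(className);
                }
            });
        }
    }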
Fixing a prod issue by bypassing encapsulation and other programming practices sounds very much like accumulating technical debt.
Encapsulation may be a lot more verbose, but it remains the best way to isolate software modules, to write better tests, and improve the overall reliability of software.
Also, justifying it by security is not convincing. Encapsulation/decoupling and security are different things. The internals can be accessed anyway through core dumps, /dev/mem, side channels, etc.
Yes, the strong encapsulation of JDK impl classes can create some inconvenience for an attacker. But I don't feel it's valuable enough to justify the lack of access to the internals.
Links to JEPs for the other releases.
http://openjdk.java.net/projects/jdk/16/
http://openjdk.java.net/projects/jdk/15/
http://openjdk.java.net/projects/jdk/14/
http://openjdk.java.net/projects/jdk/13/
http://openjdk.java.net/projects/jdk/12/