Java 12 (java.net)
324 points by kalimatas 35 days ago | 465 comments



I'm so glad I was taught Java at Macquarie University back in 1998. For the past 20 years I've had a career built on a solid API that doesn't change every 2 years like some flavour-of-the-month Javascript framework.

Even on the client, where Java has lost to Javascript, I'm finding it more enjoyable to add features to my 15-year-old SWT app [0] than dealing with the multiple layers of abstraction that are Javascript+CSS+DOM (+ maybe Electron). Personally I think it's a shame Sun dropped the ball with client Java - if they had chosen SWT over Swing and provided a minimal JVM, then maybe Java Web Start would have beaten Javascript web apps. It's also a shame Sun sold Java to Oracle - Google would have been a better steward, and probably would have been willing to pay more for the Java parts of Sun.

I'm now trying Dart to develop a few Flutter apps. It's no doubt a better language, but not that much better - I think Flutter would have been more successful if it was built on Java.

[0] https://www.solaraccounts.co.uk


The times of ever-changing JavaScript frontend frameworks are long behind us (and, arguably, React has won for MVw-style browser apps). The core node.js web-serving APIs (expressjs and the core http request API, which expressjs middleware forwards and decorates) have been stable since node.js v0.1, or at least since 2015, and are infinitely better than Java's servlet, JSP, and taglib APIs (web.xml/jetty-config.xml anyone?).

The flip side of Java's stability is stagnation. On the server-side, customers use mostly Spring/Spring Boot these days to make Java's overengineered stack usable, using even more over-engineering and metaprogramming. Same with maven/gradle, which turns what should be a developer-focussed build system into an enterprise XML mess.

SWT? I'm glad it works for you. Last I heard, IBM had pulled out of developing Eclipse and SWT/RCP over ten years ago. In the one project where we used it, the app looked like an IDE, and customers hated it, then cancelled it. Designed as a wrapper for OS-native GUI controls, it has even less of a place in a browser than Swing had. Last I looked, of the Java rich-client frameworks, only Swing had been updated with HiDPI support, whereas JavaFX and SWT hadn't. No one is using any of those (with the exception of experimental JFX apps) for new apps in this decade.


You can call Spring (Boot) over-engineered. I call it feature-rich and extensible.

Whereas in the modern JS world you start out with a lean project and then add small libraries from NPM that all work slightly differently for every feature you need, in the Spring world, where the framework has matured for 10+ years, most of the things you will need are in the box or available as external libraries that all work on the same defined interfaces.

Spring Boot with Java 11 or Kotlin is still my preferred backend stack for that reason. It's solid, has aged well and makes me super productive.


Actually they just said that Java's server-side stack is overengineered, and that Spring/Spring Boot hides that complexity. I think that is mostly true.


"Spring/Spring Boot hides that complexity"

Until it doesn't, then everything goes to Hell very quickly.

Spring is close to impossible to debug, because everything is driven by annotations, configuration settings, property files, beans defined in XML, default values, or code that runs because you added a dependency even though you never explicitly call it.

Program logic is defined in anything and everything, except actual Java code.

The whole thing is a nightmare ignoring every principle of simplicity, composability, referential transparency, and every other established principle of good software design.


> Spring is close to impossible to debug, because everything is driven by annotations, configuration settings, property files, beans defined in XML, default values, or code that runs because you added a dependency even though you never explicitly call it.

I see a comment like this every week on HN and I've been meaning to write a blogpost or two to address this.

Yes, as you mentioned there is a myriad of ways to configure a Spring (Boot) application, and I think that is one of the things that make it such a powerful framework or platform to build a service on. To address a few of your points:

* beans defined in XML : we don't do that anymore since annotations got introduced. I haven't written a beans.xml file in years.

* Program logic is exactly where you expect it: in the Java code. All the property files and annotations are used for configuration to allow for different environments. This setup allows the exact same binary jar or docker image built by CI to run in CI, in dev, in staging, and in multiple different production environments.

* configuration settings and property files are the same thing, as the configuration values go into property files. You can have multiple property files to group configuration logically by purpose, and if needed duplicate each property file per environment to group those together. I know my database config for the staging environment is in database-staging.properties. If that is not enough, you can easily override any of the values with environment variables (sketched below).

* Debugging : another point where Java gets huge flak here on HN is huge stack traces. Well, I think those are actually helpful in figuring out exactly what is running and giving you trouble in a specific environment. And because of the separation of configuration from code mentioned above, it is trivial to set up a test case with the exact same configuration that you are having trouble with and use a debugger on that test case to inspect and debug.

* Simplicity and composability : While the framework can look complex if you don't understand the configuration system, it is actually there to facilitate composability. See my point about running the same binary in all environments. The big idea is that you put all code in the same repo and then compose and configure your application at runtime with the configuration system.
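To make the "same binary, different property files" point concrete, here is a minimal sketch (the db.url key and OrderService class are made up for illustration):

    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.stereotype.Service;

    @Service
    public class OrderService {
        private final String dbUrl;

        // Spring resolves ${db.url} from the property file for the active
        // profile (e.g. application-staging.properties when
        // spring.profiles.active=staging); the jar never changes between
        // environments, only the property files do.
        public OrderService(@Value("${db.url}") String dbUrl) {
            this.dbUrl = dbUrl;
        }
    }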


I understand where you're coming from, but for me this is a huge drawback when I have to use Spring. I never know what to expect when working on a new Spring project.

> beans defined in XML : we don't do that anymore

But you still can do that! Also you can still do DI with @Autowired private fields instead of injecting via constructors. You can also create beans based on properties. You can also create beans automatically via an included JAR on your classpath. In Spring Boot you can also have beans created based on some conditional annotation. And now there's yet another new DSL [1] to create your beans programmatically at startup time. Tracking down and debugging what's loaded in Spring is super hard compared to Guice, for example.
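To illustrate the conditional case (a sketch; MyMetrics is a hypothetical class):

    import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    class MyMetrics {}  // hypothetical

    @Configuration
    public class MetricsConfig {
        // Whether this bean exists at runtime depends on a property value,
        // which may itself come from a YAML file, a properties file, an
        // environment variable, or a config server.
        @Bean
        @ConditionalOnProperty(name = "metrics.enabled", havingValue = "true")
        public MyMetrics metrics() {
            return new MyMetrics();
        }
    }

Grepping the code tells you nothing about whether that bean is actually present in a given environment.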

Configuration is another piece where I have a really hard time understanding why they do it that way. You can configure via a YAML file, AND via a properties file, AND override via environment variables, AND also with a configuration server that stores your properties on a separate server, and they are all mixed together into a weird single configuration entry for the specific environment that you have. Probably this made sense some 10 years ago when we loved J2EE, but why support this kind of madness nowadays?

What I feel Spring is missing is to forget about all the past and where the framework came from: start from scratch with simple premises around reactor-netty [2], only supporting the functional DSL, no crazy classpath scanning, no crazy configuration loading, just a simple DI framework with some batteries included.

[1] https://spring.io/blog/2017/08/01/spring-framework-5-kotlin-... [2] https://github.com/reactor/reactor-netty


Parent said that Spring Boot accomplishes that with more over-engineering and metaprogramming. I wanted to point out that what parent calls over-engineering is actually useful abstractions that make it easy to get started out of the box, and have sane standardized interfaces to replace or integrate components where needed.


Haha! Funny man! I only do javascript occasionally. Most of my time I do python and swift.

Even stuff in “core” dependencies like webpack/yarn/npm/npx/nvm/babel changes a few times a year.

Right now I'm doing a react native app and a browser extension. While javascript/typescript is fun to write, the time you have to spend fixing the tools is just horrible. Wiping caches, rebooting, switching package versions... It makes the overall experience cumbersome.

I would not call something stable if it manages to work 2 whole years.


> On the server-side, customers use mostly Spring/Spring Boot these days to make Java's overengineered stack usable, using even more over-engineering and metaprogramming.

There was definitely an era of that, but over the past decade or so it's been acknowledged as a problem, and there's been a lot of effort put into making things simpler and more vanilla. Modern Spring is much much closer to plain old code. (Of course it's still possible to write in the XML/setter injection/afterPropertiesSet style, and always will be - that's backward compatibility for you - but it's very possible to migrate existing applications to a better style in place, which very few platforms manage).

> Same with maven/gradle, which turns what should be a developer-focussed build system into an enterprise XML mess.

Maven is pretty close to the perfect build system, it's just taken a while for knowledge and practices to catch up with it. All the stuff people love and praise Cargo for, Maven has been doing for 15+ years.


The basics of maven might be sound (though it could make it easier to work with local dependencies), and npm et al have basically copied it. Where maven dropped the ball is the myriad of maven plugin magic, maven's implicit lifecycle, pom.xml's Turing tarpits, etc. I've developed Java apps since almost the beginning, and am as much of a markup geek as one could be, but I still hate maven with a passion, and oftentimes do not understand what tf it wants from me. I just don't have the patience for wading through crap that 2003ish Java nerds thought would be a good idea. And increasingly, as Java libs get EOLd, this will be a major problem for monolithic Java apps with tens of submodules. I don't think you can expect younger devs to keep maintaining daddy-o's web framework crap and clean up the mess left by Java heads.

However, I don't want to sound too negative. I can see a perspective for the JVM (Graal/Truffle) as a polyglot runtime where most of the ancient Java stuff is hidden from you, and you're using JavaScript or another non-JVM-only language as a migration strategy to get rid of Java altogether in the distant future.


> Where maven dropped the ball is the myriad of maven plugin magic, maven's implicit lifecycle, pom.xml's Turing tarpits, etc.

Could you be a bit more concrete? I don't recognize any of your descriptions in my experience of maven.

There are probably some bad plugins out there, but as in any system the solution is not to use them.

The complete lifecycle is listed in the documentation, and it's all very common-sense.

pom.xml isn't meant to be Turing-complete, and if you try to use it like a Turing-complete build language you will get yourself in trouble (as many of the early Ant diehards did). You need to treat it as a declarative config file, no different from e.g. Cargo.toml.

> I just don't have the patience for wading through crap that 2003ish Java nerds thought would be a good idea. And increasingly, as Java libs get EOLd, this will be a major problem for monolithic Java apps with tens of submodules. I don't think you can expect younger devs to keep maintaining daddy-o's web framework crap and clean up the mess left by Java heads.

It's just the opposite. I don't expect anyone to want to maintain Ant or Gradle builds in the future, since they can have any kind of crap in them. But Maven builds are forced to be kept sensible; the tool just isn't flexible enough to let you put crap in your build definition, by design. (If you really insist on doing crap you can make a crappy plugin, but your plugin is then forced to be versioned, tagged in VCS etc., so stands a better chance of being picked up by your organisation's normal code quality efforts).


I may be biased by doing freelance work (eg. seeing only crap projects), but almost all customers struggle with fckn maven: migrating freaking jetty-maven-plugin, servlet api versioning conflicts, ad-hoc scripting using ant plugin, shading/fatjar plugins, multiple mvn central listings for the same artifact, fragile local dependency resolution, dozens of pointless submodules. A customer of mine even had a solid 3 week outage when they wanted to setup a product for blue/green deployment based on maven. Makes autotools look like a sane build tool.


> migrating freaking jetty-maven-plugin

There's a jetty-maven-plugin? What for? When I use jetty I treat it as a library, embed it in my application, write my own main() and then my application is just a normal application.

> servlet api versioning conflicts

> multiple mvn central listings for the same artifact

These are genuine problems, but ones that every mature ecosystem struggles with. Maven has better tooling than most IME - you can use the enforcer plugin to require dependency convergence, and when multiple packages contain the same API you can put exclusions in a common parent. No ecosystem has really solved the problem of package renames nicely, IME.

> ad-hoc scripting using ant plugin

Yeah don't do that. If people will insist on shooting themselves in the foot, there's only so much you can do to stop them.

> shading/fatjar plugin

What went wrong there? IME the shade plugin is very good. There are a couple of inherent JVM-level limitations (having to specify transformers for service locator files etc. because of the way the JVM spec is written) but it's hard to imagine any other build tool could do better.

> fragile local dependency resolution

Not my experience at all - what specifically goes wrong?

> dozens of pointless submodules

Submodules are cheap. IME when coming to work on a new project it's much easier to understand the build/dependency structure if there are a lot of submodules rather than a complex build - e.g. rather than trying to have a custom build phase for generating some RPC stubs or whatever, it's fine to just have them in their own submodule, and then it's obvious to the reader what the dependency graph looks like. What's the downside?

> A customer of mine even had a solid 3 week outage when they wanted to setup a product for blue/green deployment based on maven.

What? How?

> Makes autotools look like a sane build tool.

Strongly disagree with that one. Autotools has several layers of template/macro expansion that all seem to do much the same thing (so why not just one?) and encourages builds that use arbitrary unversioned command line tools from the host system, meaning you very rarely get reproducible behaviour on a host with a different setup. Maven gives consistent, declarative builds - what's not to like?


"Modern Spring is much much closer to plain old code."

Modern Spring, as far as I can tell, is still very much driven outside of Java code, it's just migrated from XML to annotations and yaml properties files. Seems almost like the typical Spring application has more lines of annotations and properties files than lines of Java code.

I am fond of Maven, though, especially with Gradle (superior to pom files).


> Modern Spring, as far as I can tell, is still very much driven outside of Java code, it's just migrated from XML to annotations and yaml properties files.

Disagree; there's been a shift to using constructor injection over setter injection, which often obviates the need for Spring-specific lifecycle methods. So it's now much more common to have a service class that, while it may have Spring annotations on its constructor, can also be used in a first-class way in plain old Java, and e.g. doesn't use Spring for unit tests. If you're using Spring at all then part of your application wiring lives outside Java code almost by definition, but it's now more common to express wiring logic as e.g. an annotated Java method rather than encoded into XML. It's not a complete departure, and it may not go as far as you or I would like, but I do think there's been a real cultural shift.
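A sketch of the difference (PriceService and its collaborator are made up):

    import org.springframework.stereotype.Service;

    interface ExchangeRateClient { double rate(String currency); }  // hypothetical

    @Service
    public class PriceService {
        private final ExchangeRateClient rates;

        // Spring calls this constructor when wiring the context, but a
        // plain unit test can call it directly with a stub - no container,
        // no lifecycle methods, no setter injection.
        public PriceService(ExchangeRateClient rates) {
            this.rates = rates;
        }

        public double inEuros(double usd) {
            return usd * rates.rate("EUR");
        }
    }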

> especially with Gradle (superior to pom files).

It really isn't. Arbitrary unmanaged code in the build definition is great for writing but terrible for maintainability.


Bad decisions can be made using any technology. Java has a lot of crufty legacy APIs because, well, it has a legacy. I gave up using JSPs years ago for Freemarker. I now use Vert.x rather than servlet containers, which gives me a node.js-style environment but with access to the Java ecosystem and basic things like mature logging frameworks, as well as more advanced features (oh, and multithreading in a single process).
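For reference, the hello-world is only a few lines (a sketch, assuming Vert.x 3.x):

    import io.vertx.core.Vertx;

    public class Main {
        public static void main(String[] args) {
            // Event-loop based request handling, much like node.js, but
            // with real JVM threads available when you need them.
            Vertx.vertx().createHttpServer()
                .requestHandler(req -> req.response().end("hello"))
                .listen(8080);
        }
    }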

Despite Spring being Enterprise it still scales down to simpler apps pretty well due to its modularity.

These can be built into a single jar app that is easy to build and deploy and, combined with a database migration library, maintains its own database schema. Very, very usable.

Yes, I probably would not use Java on the frontend unless it was for a technical audience.


Spring and other frameworks may be popular, but the base servlet api with db connection pooling is enough. This approach feels much lighter than using frameworks. Not sure where the 'over-engineered' view comes from. I'd call that 'sane'.

People complain about the verbosity of Java, but how much time do you spend learning a new framework and all of the gotchas? At the end of the day it seems easier to just write the boilerplate. All of that verbose code means you got exactly what you asked for.

JVM projects never surprise me. When there is a problem debugging is a dream.


>Spring and other frameworks may be popular, but the base servlet api with db connection pooling is enough. This approach feels much lighter than using frameworks.

That. I've used the Servlets API for years in the early to late 00s.

I'm now (for the past 5 years) using Express with Node.

Static typing and cosmetic differences aside, Servlets and Express (or Node's raw handlers) are pretty much the same API.

Request and Response objects, handlers, middleware (iirc we used to call that last one "filters" in the Servlet era), etc. It's not like it's some alien convoluted design.
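Put side by side, the resemblance is hard to miss (a minimal sketch):

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class HelloServlet extends HttpServlet {
        // Same shape as an Express handler: (req, res) => { ... }
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.getWriter().println("hello " + req.getParameter("name"));
        }
    }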

(Java Server Faces now was another thing)


Any app reaching the complexity of what's being done in Java will look the same, no matter what platform you choose. I haven't yet seen large nodejs deployments with MLOCs and hundreds of developers, but I'm pretty sure it can easily become a much bigger mess - the tools and the libraries are immature, there's a lack of enterprise SDLC patterns, etc., which creates a lot of obstacles. Spring is becoming too enterprisey these days, that's true, but it's still fit for use in modern architectures - only if you understand how to utilize all of its capabilities.


nodejs isn't a replacement for fat backends for sure, but it's working really well as a frontend-facing light backend where frontend developers can add endpoints as the need arises, such as for autocomplete functionality, and where first-party support for asset pipeline tools is desired. The Spring experience isn't bad for newbies creating "REST" microservice spaghetti to have something to show at the end of their agile day, but it gets seriously in your way if you're a seasoned dev and have a solid understanding of what you want to achieve in terms of network-exposed interfaces and integration with other apps.


I'm a seasoned dev with 20 years of Java programming experience; I see no problems using Spring and I don't see why anyone with similar experience would have any. It's sometimes not obvious how some things work, but it's open source and the code is quite good to analyze, extend or replace.


"the code is quite good to analyze, extend or replace"

But how do you even know what code to read when so much of the program's logic is in annotations and other kinds of action at a distance?


Exactly. For example, if you're (still) using Spring MVC, what gets called in your backend for a given action attribute in your JSP or Thymeleaf or whatever template is expressed through your method signature and lots of annotations, where the order of arguments matters, e.g. some magic attributes must come right after others. All you see in your debugger, though, is that your request hangs in some nested and chained handler-and-reflection monstrosity driven by annotations and who knows what. Took multiple developers a day or two to figure out, each on their own.

It's not that Spring is badly coded or something, I just don't need these surprises and the magical thinking surrounding newbie Spring Boot projects.


Newbies can write bad code on any platform, but it's a short period in the life of a software developer - the problem of beginners is solved not by the tools they use, but through a reasonable composition of the team, with a sufficient number of experienced developers who can define the architecture and best practices.


From what I can tell, the "bad" Spring code is the code following the recommended Best Practices, at least according to the tutorials that pop up in Google search results.


There's one "must read" tutorial - the documentation on the official Spring website. There's also a GitHub account with lots of examples. It's better to ignore the trash that comes from search results, because it's either trivial or obsolete.


Another related surprising thing (as I'm currently learning Spring/Hibernate/etc.) is what happens when the Hibernate Validator library is missing at runtime:

Nothing.

All the carefully crafted validation annotations just don't do anything, no validation is performed, and there is no indication whatsoever of this.
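i.e. you can write something like this, and with no validator implementation on the classpath the annotations are just inert metadata (a sketch):

    import javax.validation.constraints.NotNull;
    import javax.validation.constraints.Size;

    public class SignupForm {
        @NotNull
        @Size(min = 3, max = 30)
        private String username;  // without a provider, anything goes through
    }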


It's not a problem with annotations or Spring - it could happen e.g. with SPI and anything that's discovered at runtime. It's about how you build your project and configure dependencies (separation of API and implementation into different JARs is typical for Java and implies a certain way of thinking about build and deployment).


The source code of Spring itself. I know Spring internals - there’s no dark magic there and reference guide describes it well. DI is straightforward, MVC - not perfect, but generally easy, Security - well, it has design problems, but they are not related to annotations.


"but they are not related to annotations"

I had a problem with security that I could only solve by adding an "Order" annotation to my class. Reading the code and stepping through it in the debugger did me absolutely no good.


So, annotation was a solution to a problem (how to define a sequence in which configuration is applied), not a cause, right?


> On the server-side, customers use mostly Spring/Spring Boot these days to make Java's overengineered stack usable, using even more over-engineering and metaprogramming.

Could you elaborate on the over-engineering part? There are many modern frameworks that are much more lightweight than Spring/Spring Boot, and with the introduction of lambdas in Java, things are even more straightforward.

Also, gradle doesn't use XML.


In all these years I have used Spring exactly once, for an architecture prototype during a one-month project.

All our customers doing Java Web projects are either on JEE, or a CMS platform running on top of servlets and JEE-related JSRs, like Liferay and AEM.


I have no doubt it's working for you and your customers. But JEE and JSRs have been a dead end for many years now. When there are only single implementations for JSRs left, the whole exercise becomes pointless and masochistic, doesn't it?

In 2006 I played around with Jackrabbit for an architecture concept (for managing airworthiness of airplanes and kit). That was before the iPhone even existed. I was honestly surprised Adobe is still using it in their AEM product. From what I recall, it manages a document tree, based on a 2004-ish, Drupal-like node graph in typical Java enterprise fashion.

I'm no insider, but my guess is Adobe has been struggling for some time now to integrate AEM, Magento, and other purchases into a lineup for creating shopping experiences. We'll see how this works out.


>When there are only single implementations for JSRs left, the whole exercise becomes pointless and masochistic, doesn't it?

Not any more than PEPs are.


Edit: Liferay is a portlets container, isn't it? My experience is only with Websfear portal (shudder), but I think managing UI state on the server has had no reason to exist since at least 2008, when Chrome/V8 came out, followed by the massive JavaScript improvements of the last 10 years.


Meanwhile JavaScript frameworks are all the hype, rediscovering server-side rendering to improve performance.

I have learned it pays off long-term to watch caravans pass by.

Pure HTML and CSS with minimal JavaScript rules rendering performance.


One of my Java guilty pleasures is pulling in some 14 year old library and it just works. Absolute gold for a company with a lotta old code sitting around.

I also do a good bit of frontend and holy crap it's exhausting. That's the main reason I push hard for Java backend. Backends tend to stick around about a decade longer than you would like, while front end code is due for a rewrite every 3 years


>Google would have been a better steward, and probably would have been willing to pay more for the Java parts of Sun.

They didn't, and Google didn't want to pay a single dollar to Sun for Java, as James Gosling has mentioned multiple times. I don't understand why people are still painting Google as a saint after all these years.


Actually they didn't want to pay a single dollar to Sun because of licensing reasons, not because they didn't want to buy Java.

They recreated the Java APIs by pulling in Apache Harmony, and they thought they were in the right, or at least in a fair-use position (they still think they are). But I agree, I'm not sure the direction of Java would've been better if Google had the stewardship.


They had the opportunity to buy Sun when it went on sale, instead they thought they could get away with it.


just had this conversation with a co-worker today.

java is a stable api but it also doesn't evolve. The tradeoff is you get a program guaranteed to work no matter the upgrade vs being able to build better tooling.

React changes every year or 2. It is exhausting. But you get way better patterns and some things that drastically improve productivity.


Java gives you a flexible object oriented language and an incredible set of libraries. It's got an archiver for artifact and dependency management. It now has lambdas and has FRP-style stream programming. I'd say it's kept up pretty well with modern fashion, while it's been doing other things far longer than newer, more popular languages. Take Swift, now my daily driver (which I love so much compared to ObjC, which I moved to after Java)... an example of something Java had way before Swift is "Protocol Oriented Programming". Also, Swing's GridBagLayout predates iOS' AutoLayout and CSS (edit: don't know where I was going with CSS haha...). Everything old is new again, and I'm sure Java reinvented a few wheels of its own.

Also keep in mind that React is a library, not a language. If you want to compare things, compare Javascript to Java, or the Java stdlibs to React. Maybe that's what you meant when you said Java.

IMO, Java and its libraries are nicer to work with solely due to its static nature and OO design. It is worlds apart from a scripting language when you need to refactor, or even just understand, legacy code. And if you can't even run tests on that legacy code anymore, like a React project over a year or two old?

What are the better patterns and productivity boosts you get from React vs. a Java ecosystem?


In addition to all of these good things, you can also use groovy which has a ton of shorthand syntax and makes writing code super easy. Groovy also makes writing tests short and simple. I love java and all jvm languages honestly.


Kotlin though. Everything you said, but a joy to write instead of a chore.


I am betting Kotlin is the new Groovy.

Let's see where it stands 5 years from now, especially if Fuchsia actually gets released.


Kotlin is different because of the focus on tooling, which is the advantage Java still had over all the dynamically typed JVM languages.

Also, Kotlin/Native is in beta now and could target Fuchsia (compiling AOT to native code using LLVM).


What tooling? Being forced to use IntelliJ, without any proper support on Eclipse and Netbeans?

Still not able to use several of the Android Studio features available to Java, like incremental compilation and slim APKs?

Kotlin advocates seem to forget the JVM will never be rewritten in Kotlin; the language is just yet another guest, with the usual syndrome of wrapping existing libraries, having to take care of FFI for Java access, and not having all features take advantage of the latest bytecodes, e.g. the lambdas implementation.

As for Kotlin/Native, there is nothing to worry about versus what Go, Rust, C++, Dart, D, Nim offer in terms of performance, libraries and in some cases tooling.

Having to buy CLion for a graphical debugger isn't a selling point versus the established alternatives.

Fuchsia is being written in Go, Rust, C++ and Dart, with the team now hiring for node.js support.

https://www.androidpolice.com/2019/03/19/google-working-to-b...


>What tooling? Being forced to use IntelliJ, without any proper support on Eclipse and Netbeans?

Seeing that both Eclipse and Netbeans are now more or less dead (and speaking as a long-time Eclipse user, from the very first version to around 4), yes, first-class vendor-direct IntelliJ support is more than enough. And more than most languages (including Groovy) ever had.

>As for Kotlin/Native, there is nothing to worry about versus what Go, Rust, C++, Dart, D, Nim offer in terms of performance, libraries and in some cases tooling.

IMHO, Rust will always be kind of niche as it's hard to tackle, Dart we'll see, D never went anywhere, and Nim will remain niche; it's a little too idiosyncratic to catch on.

Kotlin is already more popular than all of the above except perhaps Go.

>Fuchsia is being written in Go, Rust, C++ and Dart, with the team now hiring for node.js support.

Fuchsia is still vaporware, or at least irrelevant. It's not even in the market yet. And the fact that it's written in 4 (and looking for a 5th) languages doesn't really bring much confidence.


Groovy had Netbeans and Eclipse support, which was dropped when it started fading away.

We move in different worlds, no IntelliJ installations around here.

I remember when Groovy was popular, with every JUG in Germany having weekly talks and Sun talking about how the next JEE revision would support Groovy for writing beans.

Popularity doesn't write software.


>Groovy had Netbeans and Eclipse support, which was dropped when it started fading away.

I know, I've used it. What I said is that it did not have the first-class IDE attention of a major vendor; those were mostly subpar third-party plugins compared to the Java focus of those IDEs. For Kotlin, however, first-class IDE support was a primary concern from the start.

>We move in different worlds, no IntelliJ installations around here.

Then we indeed move in different worlds.

>Popularity doesn't write software.

No, it's just the only thing that matters when it comes to getting paid for it.


Delivering working software as per the customer's RFP is what matters; everything else is fluff.


I don't understand how Groovy is fading away when it is getting more and more popular. Groovy is also very similar to Java. There is no learning curve, you are immediately creating value.


More popular where?

Had it not been for Google's adoption of Gradle for Android, it would have been left for Grails maintenance projects.

And even then, the pressure for performance has been so great that it is being superseded by a Kotlin-based DSL.

How to improve Gradle has been a common talk at every Android conference since Android Studio has been introduced.


Everywhere?

Google Trends says Groovy is just as popular as it was 10 years ago. The number of downloads has increased significantly, but it is not because of Gradle, as Gradle bundles Groovy. It is also currently placed #16 on the TIOBE index.

And last, Groovy with @CompileStatic is in most cases just as fast and memory-efficient as Java.

Some people always talk down Groovy, yet the numbers speak for themselves. Before Kotlin, Groovy was attacked by Scala developers.

The major thing Groovy has over both Scala and Kotlin is its strong similarity to Java syntax, which makes the learning curve non-existent. And that is a bigger deal than a random cryptic/confusing programming expression. Code readability is very important.


Certainly not in Germany where it vanished from JUGs.

https://trends.google.com/trends/explore?q=%2Fm%2F07sbkfb,%2...

Java worldwide average 86%

Groovy worldwide average 2%

Pretty popular indeed.


> Some people always talk down Groovy, yet the numbers speak for themselves

Many of those numbers are fake. Groovy was #70 on the TIOBE index only 12 months ago, see [1], but is #16 now. How do you explain that? Groovy was last in TIOBE's top 20 about 2 years ago, when the search numbers for "Groovy" from Baidu were hugely incorrect. Groovy later quickly dropped back out of the top 50.

[1] https://www.tiobe.com/tiobe-index/groovy/


"Cédric Champeau: Goodbye, Groovy!"

https://melix.github.io/blog/2019/03/goodbye-groovy.html


Fuchsia is language agnostic. This is a boon. Writing for Fuchsia isn’t like writing for Unix where C is king. Fuchsia is able to natively host many more ecosystems than Linux cares to bother with. It gives me great confidence.


Tons of great stuff for Unix is written in C++ and other languages all the way to Python and Java. You can write Unix apps in whatever language you want. Ditto for Windows and MacOS.

Not sure what the special deal is with Fuchsia here.

Perhaps you think you'll be able to write Fuchsia device drivers in any language you like? That won't be the case at all.

Also, Linux has this going in its favor over Fuchsia: it actually exists.


Windows has been language agnostic for decades.

What's so remarkable about that fact?


"InteliJ support is more than enough"

That won't get Kotlin more traction. No proper/official tools for LSP support means no vscode, vim, emacs, etc. users. As a happy Eclipse user, I wouldn't call it dead either.


Groovy was a random JVM-based dynamic language with no major company support and no special tooling.

Kotlin has IntelliJ (and thus a great IDE) and Google standing behind it, and Steve Yegge's nod of approval.


Not every Java shop is going to drop Eclipse and Netbeans just to make JetBrains happy.

As for Android, lets see what happens with Fuchsia.

The overwhelming majority of Java developers don't even know who Steve Yegge is.


>Not every Java shop is going to drop Eclipse and Netbeans just to make JetBrains happy.

True. They will drop them because Eclipse has been faltering for ages and has been dropped by IBM, and Netbeans has always been a subpar unloved stepchild used by the kind of devs that don't know better and think SlickEdit or Notepad++ are great editors.

>The overwhelming majority of Java developers don't even know who Steve Yegge is.

That's on them.


I guess the Fortune 500 have missed that memo, including IBM's own subsidiaries.

Let's talk again 5 years from now.

I am betting Kotlin will become yet another language that happens to compile to the JVM, after the Android honeymoon goes away.


>I guess the Fortune 500 have missed that memo, including IBM's own subsidiaries.

They probably did. They're late for all memos. The Fortune 500 is not where you'd go to gauge adoption. Heck, half of them still have IE6-only apps.

There are hundreds of thousands of companies, and just 500, well, Fortune 500.

Apparently you use the most established and development-wise boring companies (the Fortune 500) to prove Kotlin is not used widely, but you're OK with vaporware like Fuchsia, which isn't even in beta yet, as an argument for Dart and co.

>I am betting Kotlin will become yet another language that happens to compile to the JVM

And I bet you're wrong. Let's see in 5 years. We were on HN 10 years ago, we'll probably be here then. I'll set a reminder.


Because they prove Google is struggling with internal politics about where to go next.


There are plenty of java projects that change apis. Comparing a framework to a language is stupid.


If you develop, not if you maintain. This is a major reason we stay away from JS frameworks, after burning our hands implementing an AngularJS app that was obsolete a year later.

That and a few page reloads never killed anyone.


Doesn't evolve _quickly_. Which, as you say, is part of the tradeoff of giving a trustable guarantee that the tooling you do use will continue to work for a long time.


I've been working professionally in Java for almost 20 years. I can't remember the last time I saw someone use features introduced after 1.6. The time API in 1.8 was nice. I've never seen anyone use lambdas in production code. I'm sure people do use newer stuff, but I think it's the minority. So in practice it's even more stable than it is in the headlines.


Strange, our production code is full of streams and lambdas. It probably depends on the type of product.


I work for a large company and the code base is full of streams and lambdas. Streams are almost used too much and could often be replaced with a simple for-loop.


Yeah, stream overuse is definitely an issue at our company.


I started using Streams and lambdas immediately, along with functional interfaces. Java is still not nearly as pleasant as Scala or Clojure, however.


Chromium (and so Chrome too) uses lambdas in its Java parts.


Last time I used Java, I used lambdas, collection map and flatMap, Optionals, etc. extensively... But old-school Java folks barely used them, though.


We start using new features pretty much as soon as the new JDK version is released to general use.


I started using them ASAP.


>java is a stable api but it also doesn't evolve

You have probably missed the last 5 years. From streams, to closures, to default methods, to functional interfaces, to local type inference, to new Process APIs, to modules, to unsigned arithmetic, to new date/time APIs, http client, 2 new GCs, shell REPL, etc... with more to come, and new releases every 3 months...
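A few of those in one toy snippet (a sketch; needs JDK 11+):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    public class Demo {
        public static void main(String[] args) throws Exception {
            // local type inference (10) + streams and lambdas (8)
            var evens = List.of(1, 2, 3, 4).stream()
                    .filter(n -> n % 2 == 0)
                    .count();

            // the standard HTTP client (11)
            var client = HttpClient.newHttpClient();
            var req = HttpRequest.newBuilder(URI.create("https://openjdk.java.net/")).build();
            var status = client.send(req, HttpResponse.BodyHandlers.ofString()).statusCode();

            System.out.println(evens + " " + status);
        }
    }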


> unsigned arithmetic

I thought I had missed something there, but it looks like it's just a bunch of utility methods to reinterpret integers and convert them to longs (etc.)?
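For instance (these are real methods on Integer since Java 8, if I'm reading it right):

    int x = 0xFFFFFFFF;                        // -1 as a signed int
    long asLong = Integer.toUnsignedLong(x);   // 4294967295
    String s = Integer.toUnsignedString(x);    // "4294967295"
    int q = Integer.divideUnsigned(x, 10);     // unsigned division: 429496729

Same 32 bits, just reinterpreted at each call site - no unsigned type in sight.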

Kotlin, on the other hand, recently added actual unsigned types[0] – taking up the same number of bits as the corresponding signed type, supporting the normal mathematical operators, and preventing unintended mixups with signed types.

[0] https://kotlinlang.org/docs/reference/basic-types.html#unsig...


>I thought I had missed something there, but it looks like it's just a bunch of utility methods to reinterpret integers and convert them to longs (etc.)?

Yes, but it's also a new feature (along with dozens of others). The complaint I answered was that Java is somehow stagnant, when it has been evolving fast over the last 5 years, and is accelerating its pace (release cadence) even more.

>Kotlin, on the other hand, recently added actual unsigned types[0] – taking up the same number of bits as the corresponding signed type, supporting the normal mathematical operators, and preventing unintended mixups with signed types.

Which, given that the JVM doesn't support them at the bytecode level (iirc), means they also use a similar trick of working with signed numbers under the hood, just hidden better at the syntax level.


I love Dart and Flutter. I actually think it was a good idea not to use Java. Dart is much more modern and nice to work with, in my opinion. There are many reasons and I won't delve into them here. The biggest-ish project I've built with Flutter is an alternative to Nissan's ConnectEV app; it's used with the electric vehicles Nissan Leaf and Nissan E-NV200. You can see statistics and battery status, and control climate control and charging of your vehicle. My alternative is called "My Leaf" on the Play Store and "My Leaf for Nissan EV" on the App Store. It's completely open source: https://gitlab.com/tobiaswkjeldsen/carwingsflutter

It consists of the main Flutter app and a Dart library for communicating with Nissan's API.


I am currently building my first mobile app on Flutter and quite happy I chose that technology! Developing in Android Studio + Flutter does not feel like the chore I remember developing Android apps to be.


Biggest barrier to entry for me is the build/dependency system, Maven. And I never understood Ant. I did like the language when I played around with Java 8. Streams are amazing.


Maven is an incredible system. It can do so much and yet, even though I've set up countless apps and services, I can never use it without looking literally everything up.

Maven and spring are the worst parts, for me, when dealing with Java. I don't mind the language at all.


Yeah, I felt the same. The language is fine. It's got a fantastic set of libraries and a large ecosystem, and one would be hard-pressed not to find a job as a competent Java dev, but maven, ugh.


With maven, less is more. Huge maven files usually means something is wrong.

Follow the conventions and it is less than a screenful for a basic project.

I typically solved maven problems by removing cruft and falling back to the standards.


What was your problem specifically? For me these two worked great together, actually. I really loved Spring. Maybe the problem has some overlooked fix that I can help with?


Nothing in particular, just lots of seemingly random issues trying and failing to get them working correctly, likely due to my lack of knowledge, but every time I try to learn them better my brain just can't handle it.

I don't know what it is. I think part of it is my auto-cringe when I see XML nowadays.


I can understand the feeling. I think in the new Spring framework configurations are written in Java - I believe this is from version 4.0 - so that will solve your XML issues. However, I do make the assumption that you have control over what version of Spring you use. I think there is a problem with the documentation of Spring Boot when it comes to config code, since you find a lot of material using XML but not so much using the modern Java-based style. However, you can learn how to map the XML to Java; you only need a few things to learn that mapping, not full XML configuration knowledge.


The success of javascript is hugely because of the absence of dependency hell (not being able to upgrade some lib to a new version because it would conflict with some other lib which requires it at its current version, and you can't have both of them loaded at the same time).


> It's also a shame Sun sold Java to Oracle

Worse: Oracle bought Sun.


Worse: IBM, Google and everyone else could have done it as well, but decided not to counter-offer.


The most interesting new feature I think is the Shenandoah GC. The summary from [1]:

"Add a new garbage collection (GC) algorithm named Shenandoah which reduces GC pause times by doing evacuation work concurrently with the running Java threads. Pause times with Shenandoah are independent of heap size, meaning you will have the same consistent pause times whether your heap is 200 MB or 200 GB."

The original algorithm was published in 2016 [2]. It consists of 4 phases: initial marking, concurrent marking, final marking, and concurrent compaction.

[1] http://openjdk.java.net/jeps/189

[2] https://dl.acm.org/citation.cfm?id=2972210
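For anyone who wants to try it: Shenandoah ships as an experimental option in JDK 12, so it has to be unlocked explicitly (and note that not every vendor's JDK build includes it):

    java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -jar app.jar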


GC pauses have been one of the major barriers to using garbage collected (read: higher-level) languages for game development. This could open up the JVM for games, which could have some exciting implications.


Their lower end target is 10 ms. In a game, 16 ms is your entire time budget.


ZGC, included in JDK 11, has a 1ms average pause time and 4ms max on some demanding benchmarks, and they're now targeting 1ms max pauses.

https://www.opsian.com/blog/javas-new-zgc-is-very-exciting/


Wow great link. I live pretty deep in Javaland and had no idea it was that good!


If it lives up to its claims, surely it's a paid-for add-on?


Originally it was intended as a commercial feature, but since JDK 11 Oracle open sourced all previously-commercial features in the JDK, and now it's part of OpenJDK. It's already available in JDK 11 and 12.


I never got to the level where I needed anything more than the stock default GC. But this looks compelling. Does Kotlin and other JVM based languages benefit from it, too?


Sure. All languages that run on the Java platform[1] can benefit from it, and from most other Java platform enhancements.

[1]: I prefer saying Java platform rather than JVM, because a program written in Kotlin shares over 90% of its infrastructure code with a program written in Java (not including the OS), and the JVM constitutes only about 20% of that.


It works really great with heaps >100GB and has been included in OpenJDK since 11. Note that only Linux is supported, I think.


As my world is currently only server-side, this isn't a limitation but a feature ;)


In VR your budget is 11 ms and you can't miss it, or the entire frame is thrown away and you introduce a frame of latency to compensate. So for VR these kinds of GC effects are even worse.


Also, while most games are fine with losing a frame or two every few minutes, in VR dropping frames will cause motion sickness really fast. FPS is, from what I've observed, the most important part of a VR game. If you're dropping frames or stuttering, the experience will suffer greatly.

When I was developing a VR game, my team noticed that some ultra-sensitive people started to get sick when we would drop around five frames (the Vive displays 90 fps, so this is only a ~5% drop). There was more to it (dropping a few frames every few minutes was usually fine, but consistently dropping frames every second or two was an issue).


10 ms isn't all that much. When games load in assets they'll do various things like malloc huge chunks and you lose a few frames.

If GC only causes occasional (maybe every 2 minutes) loss of a frame or two it should be no problem. If you watch benchmark FPS traces that count every frame delay you'll see that occasional stutters happen in basically every game.

Loss of a single frame just isn't noticeable


Quake, back in the late 90s, would preload assets for all entities in a level at level load time explicitly to avoid stutter when the entity finally shows up and needs to make a noise or whatever.

And some games much later didn't. I recall needing to fire off a shot on each new gun I picked up in Far Cry to be sure I wouldn't get a stutter later at a more critical moment.


Frame drops that happen more than once every several seconds are definitely noticeable (and, as a sibling comment pointed out, especially so in VR). Plus, throwing away more than half your frame budget just because of GC seems unreasonable. And this isn't even considering monitors with high refresh rates, which are slowly getting more and more popular.


> When games load in assets they'll do various things like malloc huge chunks and you lose a few frames.

Streaming/Loading assets in the main render thread is a very 2005 thing to do.


Yeah, the whole reason you have pools and arena allocators is so that you don't do small/large malloc mid-frame. Perf 101 is don't malloc during regular gameplay.
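A bare-bones pool is only a few lines - e.g. in Java, where the same trick is used to dodge the GC instead of malloc (a sketch; Particle and its reset() are hypothetical):

    import java.util.ArrayDeque;

    final class Particle { void reset() {} }  // hypothetical game object

    final class ParticlePool {
        private final ArrayDeque<Particle> free = new ArrayDeque<>();

        // Allocate everything up front, outside the frame loop.
        ParticlePool(int size) {
            for (int i = 0; i < size; i++) free.push(new Particle());
        }

        // No allocation mid-frame: reuse an instance, or fail fast if the
        // pool was sized too small.
        Particle acquire() { return free.pop(); }

        void release(Particle p) { p.reset(); free.push(p); }
    }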


If you have some control over when a GC triggers you could sneak one in during hitstun.


A few of us built a webgl game backend in Java. It was fun but optimizing to make everything always work within 16ms was hard. In our case it was especially complicated because we shared state over websockets back to the clients.


16ms is only if you're just targeting 60fps. We now have 144hz monitors, leaving less than half of that if you want to take full advantage of the hardware.


Once upon a time the barrier was the existence of VMT tables and method dispatch.

GC use can be optimized in high-level languages with support for value types (which are still missing in Java, though), and it is not like every game needs to be the next Fortnite.


Only tangentially related to your comment, Fortnite is based on UE4 so it does use garbage collection. And it has caused framerate/hitching issues as recently as a year ago according to an Epic developer.

https://www.reddit.com/r/FortNiteBR/comments/7gu8aq/hitching...


My point being that everyone evaluates programming languages for writing games as if they would be doing a AAA hit, when they will actually be doing a minesweeper clone.

Thanks for pointing it out though.


I didn't realize UE4 adds a garbage collector on to C++. Interesting.


Yes. UE4 also extends C++ with its own reflection system as well which is used all over the place, and critical to the garbage collector.

https://wiki.unrealengine.com/Garbage_Collection_Overview


There are a lot of applications where predictable performance is just as important as good performance, if not more so.


Honestly, if you need real performance when writing a game, there's no reason not to use the language that's most popular for your engine of choice. Mostly that's going to be C++ with python scripting.


The GC doesn't just affect high-performance games. The problem is it doesn't just make things broadly slower; it introduces what can easily become a second-long pause a couple times a minute. Even if your game runs buttery smooth the rest of the time, that's pretty much unacceptable. There are measures to minimize it - object pooling, for example - but they remove most of the benefits of having a GC in the first place.


And Lua scripting


LuaJIT, presumably?


This has already happened in a way. Unity uses a ton of C# for scripting. While C# is somewhat nicer to work with, this low pause GC makes java a compelling option for moving much more logic into the VM language


Keep in mind that C# has support for non-primitive value types (structs) which means you can avoid garbage collection entirely if you're careful (not all that difficult in my experience). Additionally, Unity has just introduced a specialized low-latency garbage collector in their beta build. Overall, from where I'm standing, Java doesn't look very appealing for game development.


Lack of value types sucks, so yes, I agree. On the other hand, this new GC might be fast enough that it just doesn't matter if you create garbage, for most purposes.


Yeah, I do wonder how Unity deals with the GC. Maybe there's an algorithm like this one, maybe the engine's core systems (physics and graphics) are written in native code.

C# as a language does tend to be nicer than Java, but its VM doesn't have as rich a family of languages.


My limited understanding is that high end Unity devs code their stuff not to make garbage. Unity has a very nice system for profiling that sort of thing.

Also, CLR has F#, which I like much better than Scala (and, no type erasure in CLR), Clojure which is ... just like Clojure on JVM, and C#, which as a Java dev since 1.1 I have come to prefer as a language even as I remain JVM ecosystem preferring on the server side. JVM has ABCL, a neat Common Lisp, and now Graal/Truffle which is super interesting. If not for Clojure though (and maybe Kotlin) I would probably give up on the JVM. Oracle does not help. Just my opinion.


I did just remember that C# has value-type data structures (which last I checked, Java didn't) which probably helps a lot. Most objects can probably be kept on the stack, and objects on the stack don't need to be garbage collected.


Totally agree. C# and kin are far better designed. It's the lively Java ecosystem that keeps me around as well


Less “lively Java ecosystem” - more “slow-moving Enterprise Java environments”.


Is Clojure on CLR actively maintained?


Yes, it's one of the officially supported platforms (albeit the least used one).


The first thing a game programmer will do in a garbage collected environment is either try to disable it or control it.


Or learn that maybe for the game that they are making it doesn't matter.

And if it does, making themselves acquainted with how value types, slices and GC-free pointers work would probably be a better idea.


>GC pauses have been one of the major barriers to using garbage collected (read: higher-level) languages

There is such a thing as reference counting :)


Trivial reference-counting can lead to memory leaks when you have an object graph that’s disjoint from an RC root object.

Non-trivial reference counting starts to look like mark-and-sweep GC.

And even then, when a ref count drops to zero, the runtime cost of destruction and deallocation of particular objects can be expensive. Stop-the-world GC is bad, but “make a blocking call because you can’t use async APIs in destructors” is worse - especially when the destructor runs in a program thread instead of a GC thread.

I feel that languages which provide support for defined object ownership and lifetime (Rust, any others?) will become the future because in many scenarios object-lifetime can be nailed-down by the compiler and thus eliminate the need for GC or ARC.


From my understanding of Rust, you will still pay the runtime cost of destruction and deallocation of objects with the object lifetime system? It's like reference counting, but precomputed by the compiler. Please correct me if I'm wrong.


The way the "it's like reference counting" mental model goes wrong is that things don't live arbitrarily wrong. Reference counting is used to extend the life of things, Rust will not do that.

But yes, it's malloc and free.


It's like manual allocate/deallocate, only that the compiler will insert free (or drop, in the case of Rust) for your variables when they reach the end of their scope.


Right. It’s just easier to reason about the process with Rust’s ownership system compared to a reference-counting or GC system.


What makes you think reference counting leads to shorter pauses than tracing? While it is absolutely true that sophisticated implementations of garbage collection via ref counting can exhibit similar performance to sophisticated tracing algorithms[1], most implementations of ref counting aren't nearly as sophisticated as modern implementations of tracing, and tend to be worse.

[1]: https://www.cs.virginia.edu/~cs415/reading/bacon-garbage.pdf


Not necessarily shorter, but predictable pauses.


Very predictable indeed.

Go and C# running circles around Swift.

"35C3 - Safe and Secure Drivers in High-Level Languages"

https://www.youtube.com/watch?v=aSuRyLBrXgI

Possible stack overflows and unexpected pauses due to heavily nested data structures.

"CppCon 2016: Herb Sutter “Leak-Freedom in C++... By Default."

https://www.youtube.com/watch?v=JfmTagWcqoE


If your memory usage is regular and predictable, then both ref counting and tracing would behave regularly and predictably, and if it isn't, neither is.


How is memory efficiency? Usually faster and/or more consistent GC performance comes at the cost of extra memory usage.

(This is why I'm excited for ARC / Swift on the server...)


From the tests I've seen on beta versions, no difference.

The performance increase is from a different algorithm + full multithreading, not from collecting garbage less often, which is a common tactic to reduce time spent in GC.


Go and C# running circles around Swift.

"35C3 - Safe and Secure Drivers in High-Level Languages"

https://www.youtube.com/watch?v=aSuRyLBrXgI


Well, not exactly C#. At 39:00 they show significant latency problems with C#’s garbage collector. Go and Rust did well though.

Will be interesting to see if ARC gets optimized for use cases like this (drivers). I wouldn’t say it’s super representative of general server programming though. And unless I missed it I don’t think they compared memory footprint.


Is there a guide to the various GC algorithms? Suppose I'm running an app where pause times don't matter, and I want throughput or reduced memory usage. How much more of this will I get by switching to another GC algorithm? 2%? 20%? 40%?


Mostly profiling.

Each JVM implementation (Azul, IBM, OpenJDK, PTC, Aicas,...) has their own collection of GC algorithms.

And their behaviour depends pretty much on the application as well.


Thanks. I understand that the numbers require profiling to determine, but I wish there was at least a high-level guide like:

Do you want to optimise pause times? Use X. Optimise throughput without regard to pause times? Use Y. Optimise memory use? Use Z.

It may not be accurate in every case (no guideline is), but it would be a good starting point for the vast majority of us who don't know the pros and cons of various collectors.
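
For instance, a sketch of what such a guide might look like with the stock HotSpot flags (flag availability varies by JDK version and build; "MyApp" is just a placeholder):

    java -XX:+UseParallelGC MyApp    # optimise throughput
    java -XX:+UseG1GC MyApp          # balance throughput and pauses (the default since 9)
    java -XX:+UseSerialGC MyApp      # smallest footprint
    java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC MyApp   # optimise pause times
    java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC MyApp           # optimise pause times (Linux-only for now)

Even with a starting point like that, profiling still has the final word.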



This thread reads eerily like threads about Go's low latency GC from 2015 and how 10ms isn't good enough and throughput will be impacted and on and on. Three years later Go treats any 500 microsecond pause as a bug as Go continues to focus on throughput. Shenandoah is being put together by some very very smart people and I'm optimistic that the only thing that stands in the way of Java reaching the "500 microseconds is a bug" level is engineering hours and resources. More kudos for this achievement are in order.


> The most interesting new feature I think is the Shenandoah GC.

Well, one could build and run Shenandoah GC with JDK8 and JDK11. Ref: https://wiki.openjdk.java.net/display/shenandoah/Main#Main-B...

Here's a presentation on Shenandoah by one of the principal developers (focuses on the how): https://youtube.com/watch?v=4d9-FQZZoVA

Slides: https://christineflood.files.wordpress.com/2014/10/shenandoa...

Another presentation (focuses on the how, why, and when) at devoxx: https://www.youtube.com/watch?v=VCeHkcwfF9Q

Slides: https://shipilev.net/talks/devoxx-Nov2017-shenandoah.pdf

---

Tangent: Go GC (non-generational, non-copying, concurrent mark and concurrent sweep) and the improvements it saw from 1.5 -> 1.8 https://blog.golang.org/ismmkeynote

The mutator/pacer scheme, the tri-color mark scheme, and the use of write barriers during the mark phase are interesting to contrast between the two GCs.


Will this help Spark performance at all? I remember having to make a lot of GC tweaks in order to get some of the larger jobs to run.


For most ordinary applications I've seen, around 5-10% of the time is spent on GC. Maybe some badly tuned applications would see benefits.

If you want an order-of-magnitude jump in performance, you need to see if the IBM work on a Scala Native version of Spark manifests.


It should help performance on almost any JVM app, but much more on large heaps.

The GC time is lower and multi-threaded, but most importantly the "pause time" is low and nearly constant (it scales with root set size).

The old GCs all scaled pause time roughly linearly with heap size. Apps that created a lot of garbage would have all their threads yielded for large amounts of time.

Now, essentially, it doesn't matter how much garbage you create. This is awesome, because you used to have to carefully watch how many allocations you made to avoid bad GC times. Now it doesn't matter: make as much trash as you want and the GC will deal with it.


Interesting that they explicitly exclude time-to-safepoint from that improvement. In my experience that is usually the most expensive part of a stop-the-world pause.


> Pause times with Shenandoah are independent of heap size

I expect this needs to be taken with a chunk of salt, since a smaller heap can only require so much GC...


> I expect this needs to be taken with a chunk of salt, since a smaller heap can only require so much GC...

Pause times. Not GC times. Shenandoah pauses to scan the root set only. The size of the root set doesn't grow with the size of the heap.

You can have a large root set and a tiny heap, or a tiny root set and a massive heap. They're entirely independent.


What happens if a huge object becomes unreferenced and a new one of the same size is then allocated before the older one is GC'd (and you don't have enough memory to hold both simultaneously)? Does it just crash or does it pause to collect the older objects first?


It won't crash, but what it does is somewhat complicated. There's a section in the OpenJDK wiki addressing this question:

https://wiki.openjdk.java.net/display/shenandoah/Main#Main-F...


> Full GC. If nothing helped, for example, when Degenerated GC had not freed up enough memory, Full GC cycle would happen, and compact the heap to the max. Certain scenarios, like the unusually fragmented heap coupled with implementation performance bugs and overlooks, would be fixed only by Full GC. This last-ditch GC guarantees that application would not fail with OOM, if there is at least some memory is available.

> Usual latency induced: >100 ms, but can be more, especially on a very occupied heap

Seems like it confirms precisely what I wrote? You can get longer pauses the less memory you have left, i.e. the more of the heap you've used...


I mean that's effectively a deployment error. You're not supposed to run your application without enough head-room. You can degrade almost any GC by restricting the head-room. You're reading from the 'failure modes' section there.


[flagged]


Can you tone down your comments a bit, please? There's no need to be so aggressive. It's not nonsense, I'm not lying about anything, and I'm not running about crying.

> Shenandoah implicitly relies on collecting faster than application allocates

That's what the documentation tells you that you need in order to deploy Shenandoah. If you can't meet that then you don't get the claimed performance characteristics.

If you can meet that requirement then GC pause times are dependent on root set sizes, not heap sizes.


It is nonsense though. I'm frustrated to see the same lie repeated over and over again. When nobody buys the lie a new GC comes out that everyone claims this time is really, truly going to do the impossible. With a giant invisible "implicit" asterisk that basically says "except when it doesn't, then it's your fault". It's been happening for... over a decade now? I don't even remember.

> Shenandoah implicitly relies on collecting faster than application allocates

What precisely does this even mean? How is the programmer or user supposed to provision for this? By blocking on every allocation until enough time has passed since the previous one... which merely shifts the blame for the pause from the GC onto the application? And so if there are A allocations/second and D deallocations/second and we have A < D then suddenly there's a guarantee of constant-time pauses? I don't buy that either. I'm pretty confident I could construct a counterexample if I spend some time to create a nontrivial object graph. Do you believe it?


> which merely shifts the blame for the pause from the GC onto the application?

There is no shift here, this has always been how things work. GCs are not a magic wand that allow you to do anything with reckless disregard towards efficiency with zero drawbacks (neither do malloc/free). Improved GC algorithms strive to cover more and more real-world workloads. Sometimes they fall short. Sometimes the developer did inefficient things that could benefit from optimizations to decrease allocation rates. A classic example is the O(n²) string appending (+=) behavior for which one should use StringBuilders instead. It just puts unnecessary stress on the GC.
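
For example, an illustrative sketch of the difference:

    // O(n²): every += copies the entire string built so far
    String s = "";
    for (String part : parts) {
        s += part;
    }

    // O(n): appends go into one growable buffer
    StringBuilder sb = new StringBuilder();
    for (String part : parts) {
        sb.append(part);
    }
    String joined = sb.toString();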

G1GC had issues with LRU-like in-memory cache workloads. Some of those were improved with ongoing G1 development, but ultimately the new concurrent collectors (ZGC, shenandoah) solve those problems better since their pause times do not explode when many inter-region pointers need to be updated.


Well that's exactly the point too. GC's aren't magic wands, and people just need to be honest about the inherent limitations of whatever they build or use. You can't claim X is independent of Y when it very clearly cannot be, no matter how much you want that to be the case. That's all I'm saying.


The length of individual pauses is independent of your heap size. It still depends on your root set size (which includes thread stacks). Other factors (CPU cycles consumed, throughput) depend on your heap size and allocation rates.

Those performance goals are only valid when given enough spare resources (heap > live set size, cpu cores). If there are insufficient resources then the VM may eventually enter those "failure modes" to keep the application running, albeit at reduced performance, instead of crashing. I think there are flags to make it just exit if that's preferable over latency spikes.

Ultimately you cannot ask it to do the impossible, this much should be obvious.


It's not complicated - the claim is that the pause only needs to be long enough to scan and update the root set twice and perform a little book-keeping, provided that allocation is slower than collection. That's black and white. Either Shenandoah is working correctly or you need to reconfigure.

How do you know if allocation is slower than collection? The logs tell you. This is the same as any performance requirement - you need to test it empirically.

If you think there's an issue with the claims of Shenandoah then you should publish your work rather than just hurling abuse.


I don't know what "work" of mine you're referring to (I never claimed to have a GC that can do the impossible either?), but what I'm taking issue with is the clearly bogus sentence from their summary [1], which was repeated at the top here:

> Pause times with Shenandoah are independent of heap size, meaning you will have the same consistent pause times whether your heap is 200 MB or 200 GB.

You can't just write an unconditional "summary" like that with a straight face. It's practically dishonest to present it as if the giant asterisk isn't there. And this has nothing to do with the quality of the GC... I'm sure they've done great work, and if they're proud of it they probably very well should be, but that doesn't give us (or them) an excuse to advertise it as doing something it admits it doesn't do.

[1] https://openjdk.java.net/jeps/189


If you know the claim is bogus then you must have done the work to prove that. Show that work to them and stop being so unpleasant to me.


I thought we already established this. It was the GC documentation itself [1] that admitted the summary was bogus. Specifically:

> Usual latency induced: >100 ms, but can be more, especially on a very occupied heap

[1] https://wiki.openjdk.java.net/display/shenandoah/Main#Main-F...


In a failure mode! When you have deployed it outside its system requirements!


Switch expressions prepare the way for pattern matching (https://openjdk.java.net/jeps/305), instead of `instanceof` checks and casts.
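
For reference, the type-pattern form sketched in JEP 305 looks roughly like this (preview syntax, so subject to change):

    // Today: test, then cast
    if (obj instanceof String) {
        String s = (String) obj;
        System.out.println(s.length());
    }

    // With JEP 305 type patterns: test and bind in one step
    if (obj instanceof String s) {
        System.out.println(s.length());
    }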

But (in my naive opinion) double dispatch seems a more elegant and Java-y solution, i.e. polymorphism on argument classes, so that different methods are invoked for different runtime classes of the object (instead of using the compile-time type of the variable).

The switching could be optimised, as ordinary polymorphism is in the JVM.

Sure, you'd wreck legacy code if you just introduced it, but there's surely a backcompatible way to do it that isn't too awkward.

BONUS: goodbye visitor pattern!


That kind of multiple dispatch would be far more disruptive at several levels, and still wouldn’t really get you where you want to go without a lot more compiler and JIT magic.

Sure, you could match on types, but you wouldn't be able to extend that to destructuring patterns, or regexps as patterns on strings, or so many other ways patterns may be extended in future releases.


My comment's a little out of place. I'm not against pattern matching, but for double dispatch; we can have both.

Their initial motivating example was bare instanceof, but I now see they extend it.

How would destructuring fit this model?!


Brian Goetz writes quite a bit about ongoing projects. The latest article he has done about pattern matching is, I think, http://cr.openjdk.java.net/~briangoetz/amber/pattern-match.h.... There is a load of stuff in there about destructuring patterns from about halfway down.


Thanks, a methodical article! Not keen on the special support required for destructuring. Nice that they discuss the Visitor pattern (and the switch example for such cases looks nice). They don't discuss double dispatch - but that's not relevant to them.


I'm learning Swift on the side, I was pleasantly surprised by this.


Anybody know how the Graal project ties in with all of this? Is Oracle effectively developing 3 different JVMs (OpenJDK, Oracle JDK, GraalVM)? Or is there some sort of convergence plan?

From what I understand, graal has made a lot of headway with language interop, as well as a handful of specific memory optimizations and native compilation, but overall is lagging in pure throughput/latency performance behind hotspot. It would be really cool if we could get the best of both worlds.


Well the relationship between OpenJDK and Oracle JDK is that OpenJDK is the FOSS base and Oracle JDK is the proprietary structure built on top, kinda like Chromium and Chrome

As for GraalVM, I don't think it's meant to be mixed in with those 2 at all. From their website (https://www.graalvm.org/), it sounds to me as if it's a completely different project that shares none of the goals the other 2 JDKs have:

> GraalVM is a universal virtual machine for running applications written in JavaScript, Python, Ruby, R, JVM-based languages like Java, Scala, Kotlin, Clojure, and LLVM-based languages such as C and C++.

> GraalVM removes the isolation between programming languages and enables interoperability in a shared runtime. It can run either standalone or in the context of OpenJDK, Node.js, Oracle Database, or MySQL.


Notably, Graal has been used extensively by TruffleRuby for a while now. It's not officially ready for prime time but definitely promising.


Graal is a JIT for HotSpot (and other things); you can already use it by turning on experimental features and enabling the JVMCI compiler. GraalVM is that packaged up with the Truffle framework (and languages written for it), and SubstrateVM, which allows programs to be compiled ahead of time and linked as executables with a minimal VM.
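
Concretely, on OpenJDK 10+ that invocation looks something like this (experimental flags, so subject to change; "MyApp" is a placeholder):

    java -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler MyApp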


> overall is lagging in pure throughput/latency performance behind hotspot

Twitter use Graal in production - I think it's about 13% faster for them on their real workloads. If you send a Tweet it's going through Graal.


Oracle JDK is just the Oracle(TM)-supported branding of OpenJDK now. As of Java 11, there are no feature-parity gaps or real technical differences between OpenJDK and OracleJDK, just support/EOL differences from vendors. So, that's pretty simple.

GraalVM is... complicated. There are a few parts to it:

1) An advanced JIT compiler, written in Java, and distributed as ordinary maven/jar packages, etc. You can use it yourself if you want. Interestingly, this compiler package ("graal") can be used by the JVM itself, where it will compile the Java code that the JVM runs. The "JVMCI" feature allows you to plug third-party compilers into the JVM and replace HotSpot's own JIT, and Graal (the library) is such a replacement. You can use this compiler on ordinary OpenJDK 10+ with a special startup invocation.

2) Truffle, which is another library for writing programming language interpreters. It does a lot of magic to make them fast. It uses the graal JIT compiler to do this, so it depends on the previous library.

3) A bunch of programming language implementations under its umbrella: Python, JS, etc. These are implemented using Truffle. By using Truffle, these language implementations magically share an object model (so they can interoperate), and they get a JIT compiler, based on Graal, for "free". This means the languages can interoperate and JIT together cleanly (the compiler can inline JavaScript into Ruby!)

4) If you use Graal as the JVMCI compiler (i.e. for your host Java code), and you use Truffle-based languages -- Graal can extend benefit #3 to the host Java code itself. This effectively means the JIT compiler can inline and optimize code across every language boundary.

5) SubstrateVM, which is a tool to turn Java programs into native exe binaries. The intent is that you use SVM on the programming language interpreters, to produce interpreter binaries that look "normal" to the user. These binaries run on a custom, non-JDK/HotSpot runtime. The Java code is not JITed, but interpreted code -- Ruby, Python, JS, etc -- is. (This means you have benefit #3, but not #4). SubstrateVM uses a custom generational garbage collector (written in Java!)

6) The "GraalVM distribution", from graalvm.org. This is a combination of all the above pieces together as a sanctioned package. This package uses JDK8 as the base virtual machine.

7) The GraalVM distribution comes in both a "community" and "commercial" edition, which do have technical/feature differences.

Here's the thing: you can use everything from 1-4 with an "ordinary" OpenJDK today if you know what you're doing, and you don't need a special JDK build. SubstrateVM might also be workable, but I don't know.

Points 6-7 actually mean that there is generally a difference between "GraalVM the platform" and "Graal the compiler". Notably, while OpenJDK no longer has feature gaps vs the commercial OracleJDK, GraalVM does, which I find a very weird and unfortunate choice on Oracle's part, but maybe it will change one day.

If you want to use one of the new low-latency garbage collectors (ZGC/Shenandoah) with Graal, I don't think that's possible as of right now: garbage collectors need to interface with the JIT compiler, but ZGC does not support the JVMCI interface (it will explode on startup), and Shenandoah doesn't either, I believe (but maybe this changed). This will almost certainly be fixed in the future, of course.


Thanks, this is a really useful overview, and also explains why I always seem to get so many different explanations as to what graal is.


This is excellent thank you


Included is JEP 230, a Microbenchmark Suite. It's based off of JMH. Nice to see this included!

http://openjdk.java.net/jeps/230

http://openjdk.java.net/projects/jdk/12/


Are they ever going to release value types?


Adding value types is a huge task because they are a big change at the VM, language, and library level. You can read about some of the latest plans in draft form at https://mail.openjdk.java.net/pipermail/valhalla-spec-expert....

Rest assured progress is being made.


Sounds like we won't have them in the next LTS, so probably another 5 years at least.


The next LTS is 14, which is due in a year, as I understand the new release schedule.


No, the next scheduled LTS will be 17. The idea is that 1 release every 3 years will be an LTS release.

I’m not sure what would happen if values landed in 18; it might be an important enough change for people to want to break the LTS cadence.


The worst thing Oracle can do is deliver a bad feature. Java generics turn 15 this year. It's better to delay such a radical change than to be stuck with it for the next few decades. Oracle is in no hurry to do that. Maybe they'll get it together in time for the next LTS release (Java 14).


You can play with them today.

https://jdk.java.net/valhalla/

The engineering problem is that they want to keep old jars running in a world of value types.


I get that Java needs it, because some uses need the performance gain in terms of less RAM and less pointer indirection. I'm glad library authors of things like servers or caches will be able to improve performance without weird things like unsafe byte buffers and native memory leaks.

I suspect I won't ever use value types in my code, though, unless it's a drop-in refactor from AnyVal => AnyRef (or whatever the Java equivalent is).

* Does accepting a value type parameter accept it by value or by ref?
* How does this play with GC? Can I get a dangling ref with value types?
* How do value types play with inheritance? E.g. will it be like C# structs, where they are boxed any time they're treated as a ref?
* How safe will it be to convert a class from being value-type based to Object (whatever the AnyRef equivalent is)? Will I need to inspect all methods that deal with it now?

This can probably just be resolved with a style guide, like C++ can be mostly sane if you're starting a new project today with an aggressive style guide. But I just appreciated not thinking about this and counting on escape analysis + GC + HotSpot tricks (like monomorphizing code paths with some counters) to be good enough.


I share your frustration, but this new GC should drastically reduce the issues with creating tons of garbage. There's still the infamous "allocation wall", and value types wouldn't solve that either, but they would make it a lot better.


GC is only part of the issue. Value types allow much finer control over locality, something completely lacking in Java now. And without that, you give up a lot of performance.

For example: https://rcoh.me/posts/cache-oblivious-datastructures/


Yeah, that was supposed to be in 12, wasn't it? Everyone is waiting for that; it will obsolete Lombok.


No feature is meant to be in a release. A feature will be done when it’s done, though features may be split so that some functionality can land earlier.


Data classes will reduce the need for Lombok/immutables. Less so for value types.


Still heavily a work in progress based on the in-flight commits getting sent to valhalla-dev@


Still nothing to handle checked exceptions in streams properly... sigh.


I wonder why they didn't use a generic exception type in their java.util.function classes. That would propagate checked exceptions statically.

Something like

  @FunctionalInterface
  interface Function<T, R, E extends Throwable> {
      R apply(T t) throws E;
  }


In Java, the concept of exceptions is incompatible with the concept of parametrized types.

During compilation, type parameters are erased. "T" becomes "Object"; "T extends Something" becomes "Something"; assignment becomes type conversion. But at the end it works correctly. If you try to make something that wouldn't survive the type parameter erasure -- such as having two methods with the same name, one of which accepts "List<String>" and another accepts "List<Integer>" as their input -- it will be a compile-time error, because you can't have two methods with the same name and the same parameter "List".

But imagine that you have a type "MyException" extending RuntimeException, and your code has "catch (MyException)". After compilation, this would become "catch (RuntimeException)", which is wrong, because it would also catch other RuntimeExceptions, and it is not supposed to. But if you make this construction a compile-time error, then... the whole thing becomes useless, because what's the point of having an exception type you cannot catch.
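
In fact javac rejects generic subclasses of Throwable outright, precisely because erasure would make their catch clauses meaningless:

    // Compile-time error: "a generic class may not extend java.lang.Throwable"
    class MyException<T> extends RuntimeException { }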


The entire concept of checked exceptions is compile-time; there are no checked exceptions in the JVM. I don't really understand your argument. I'm not suggesting catching an exception of a generic type; indeed that won't be possible, but it's not required either. You just need to tell the compiler to pass "throws" via a generic type. You can declare your map function with a generic throws clause and the compiler will respect that.


On further thought I understood that streams are usually lazy, so the real exception would be thrown not at map, but later (probably in collect or a similar method). So it would require propagating that E type parameter to almost every function (and the Stream type signature). Also it won't work well with multiple checked exceptions. So yeah, not that easy.


This doesn't work if your function throws multiple checked exceptions.


Ah, that's because Java generics don't work with exceptions.


What do you mean? They do work with exceptions.


Ah, my mistake. I was thinking of generic type arguments for exception subclasses: https://www.mscharhag.com/java/java-exceptions-and-generic-t...

That also includes generic catch clauses and inner exception types of classes with type parameters.


Fairly straightforward to handle yourself e.g. https://github.com/unruly/control/blob/master/src/test/java/...


If you have to do this in every step of the Stream then it seems like a failed design.


Or in FunctionalInterface, or in anonymous functions.

Makes HashMap computeIfAbsent much less useful than it should be. It's impossible to use a function in there that throws a checked exception (all I want is for the exception to propagate out of computeIfAbsent for the parent method to deal with).

Best option I've found is to wrap it in an unchecked exception, then unwrap it in the parent method, which is just .... yuck.
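
For the record, a minimal sketch of that wrap/unwrap dance, using the JDK's java.io.UncheckedIOException (readFromDisk is a hypothetical helper declared with throws IOException):

    static String load(Map<String, String> cache, String key) throws IOException {
        try {
            return cache.computeIfAbsent(key, k -> {
                try {
                    return readFromDisk(k);             // declares IOException
                } catch (IOException e) {
                    throw new UncheckedIOException(e);  // wrap to cross the lambda boundary
                }
            });
        } catch (UncheckedIOException e) {
            throw e.getCause();                         // unwrap for the caller
        }
    }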


Another option is to fool the compiler. So-called sneaky throws. Something like

    import java.io.IOException;

    class Test {
        public static void main(String[] args) {
            try {
                // Throws the IOException without declaring it.
                sneakyThrow(new IOException("io"));
                // Never reached: this call only convinces the compiler that
                // an IOException can originate in this try block, making the
                // catch clause below legal.
                Test.<IOException>emulateThrows();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        static void sneakyThrow(Throwable e) {
            Test.<RuntimeException>sneakyThrow2(e);
        }

        // Erasure means the cast to E never actually happens at runtime,
        // so the checked exception escapes without being declared.
        private static <E extends Throwable> void sneakyThrow2(Throwable e) throws E {
            @SuppressWarnings("unchecked") E e1 = (E) e;
            throw e1;
        }

        static <E extends Exception> void emulateThrows() throws E {
        }
    }


This only works if I'm throwing the exceptions myself, but not if I'm using a library function (like a rowMapper for example).


You can write a wrapper which takes a function that can throw anything and returns a regular java.util.function.Function. Here's an example: https://github.com/rainerhahnekamp/sneakythrow/blob/master/s...
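
A sketch of what such a wrapper boils down to (the names here are illustrative, not that library's API):

    import java.util.function.Function;

    @FunctionalInterface
    interface ThrowingFunction<T, R> {
        R apply(T t) throws Exception;
    }

    final class Sneaky {
        // Adapts a throwing lambda to a plain Function; any checked
        // exception escapes unchecked via the erasure trick shown above.
        static <T, R> Function<T, R> sneaked(ThrowingFunction<T, R> f) {
            return t -> {
                try {
                    return f.apply(t);
                } catch (Exception e) {
                    throw Sneaky.<RuntimeException>rethrow(e);
                }
            };
        }

        @SuppressWarnings("unchecked")
        private static <E extends Throwable> RuntimeException rethrow(Throwable e) throws E {
            throw (E) e;
        }
    }

Then something like stream.map(Sneaky.sneaked(this::parse)) compiles even though parse declares a checked exception.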


I share the conclusion of the other statically typed JVM languages (Scala, Kotlin): checked exceptions are bad.


If they are officially bad, remove them from the language. It's a backwards-compatible change. Currently we are in a weird situation, where parts of the standard library do not work well with each other. It's like making an iPhone without a USB-C cable and a MacBook without a USB-A port.


> It's like making iPhone without USB-C cable and Macbook without USB-B port.

Was there ever a MacBook with USB-B?


Why not? https://apple-history.com/mb_late_09 for example.


That's USB A / Display Port


That's USB-A.


Yeah, I always confuse USB-A and USB-B, sorry. Edited my post.


> If they are officially bad, remove them from the language

java.util.Date is bad, but it's not getting removed. java.net.URLEncoder is bad, but it's not getting removed.


I find checked exceptions interesting. They happen to be the only example of an effect system in a mainstream language. Anders Hejlsberg formulated the main argument against them here [1]. Basically the problem is that checked exceptions are infectious: intermediary code needs to annotate all the exceptions of the code that it calls, which is a chore. Lucas Rytz argues in his PhD thesis [2] that the problem is not with checked exceptions themselves, but that in the Java implementation 'not mentioning an exception means it won't happen'. He proposes a system where the default is 'any exception', with the ability to turn that off via a compiler flag, and states that the developer experience of that would be much better. This may become possible in Dotty Scala using the effect system based on implicit functions. But that's in 'proposed' status.

[1] https://www.artima.com/intv/handcuffs.html

[2] https://lrytz.github.io/pubsTalks/ (the link to his thesis doesn't work anymore). I cherry-pick some of it in my talk for a Scala meetup in Utrecht: https://www.slideshare.net/yoozd/effect-systems-in-scala-bey...


Yes. One can work around it with the throw-as-unchecked pattern: https://stackoverflow.com/questions/14038649/java-sneakythro...


This isn’t something where I remember the details, but the language designers decided there was not a good solution. There are some links from Brian Goetz in the comments/answers: https://stackoverflow.com/questions/27644361/how-can-i-throw....


Not in base Java, but some libraries essentially fix this. Vavr: https://www.baeldung.com/exceptions-using-vavr

But my favorite is abusing the language using Lombok SneakyThrows. It just feels dirty using it. In a good way.


You should return some ResultOrError<Result, Error> value instead of throwing an Exception.
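
A bare-bones sketch of such a type (hypothetical; nothing like this is in the JDK):

    import java.util.function.Function;

    final class Result<T, E extends Exception> {
        private final T value;   // exactly one of value/error is non-null
        private final E error;

        private Result(T value, E error) { this.value = value; this.error = error; }

        static <T, E extends Exception> Result<T, E> ok(T value)  { return new Result<>(value, null); }
        static <T, E extends Exception> Result<T, E> err(E error) { return new Result<>(null, error); }

        // Callers are forced to handle both outcomes; nothing is thrown.
        <R> R fold(Function<T, R> onOk, Function<E, R> onErr) {
            return error == null ? onOk.apply(value) : onErr.apply(error);
        }
    }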


That's what languages like Rust do, with a Result return type that is part of the core language.

Sometimes I wish Java had one too to complement Optional. I wonder what the argument against that is? Too much confusion with the existing Exception system?


Java managed to break the Optional type badly enough that it can't be used for its intended purpose (it can't contain null, which would be great if the rest of the language couldn't contain null either; but as it stands, it's not practical to port existing Java code to use Optional), so I wouldn't trust them to implement a working result type.

Working with result types is really cumbersome in languages that don't have HKT (Rust hacks around it with a special-purpose language feature), and even more so in languages that also don't have pattern-matching. So I'm not sure how much use it would be in practice. But yeah it's the right way to solve the problem.


The new switch feature looks nice, the way it previously used to work was so unwieldy.


In case someone is curious what it looks like with the new -> form:

    switch (day) {
        case MONDAY, FRIDAY, SUNDAY -> System.out.println(6);
        case TUESDAY                -> System.out.println(7);
        case THURSDAY, SATURDAY     -> System.out.println(8);
        case WEDNESDAY              -> System.out.println(9);
    }


or using the switch as an expression

  System.out.println(switch(day) {
    case MONDAY, FRIDAY, SUNDAY -> 6;
    case TUESDAY                -> 7;
    case THURSDAY, SATURDAY     -> 8;
    case WEDNESDAY              -> 9;
  });


The number of times I've wished this was possible is probably 50% of the times I've used a switch statement.


That's much nicer. Anything else added to Java recently to reduce the boilerplate? I haven't used Java much in 5-6 years. Glad they are iterating faster.


Local type inference[1] means you can write:

  var foo = new HashMap<String, String>();

The static initialisation methods on collections[2] let you write

  var foo = List.of(1, 2, 3);

It's still clunkier than many newer languages, but it is improving.

1. https://developer.oracle.com/java/jdk-10-local-variable-type...

2. https://docs.oracle.com/javase/9/docs/api/java/util/List.htm...


As another poster mentioned, the new type inference is great. So is lambda support. Another is Lombok.... Yeah it's not official but we use it literally everywhere and it helps a ton with data classes especially.

NullAway and a bunch of other linters will also save you from accidentally falling into one of the dusty corners.

RxJava and Java Streams are also very nice.

Anonymous classes are nice too I guess (although they're fairly limited).

Overall the Java experience has improved massively in the last 5 years, but you need to know where to look. Highly recommend the two popular "awesome java" repos on GitHub for an overview.


The .parallel() method in Java Streams is basically one of my favourite magic spells ever.


Ah yes, parallel stream(), for when you need to filter a list of 13000 entries for.... Reasons. Bad reasons.

Use it all the time :)


I tried it once on a simple monte carlo thingy and legitimately said "wow!" out loud when it just worked.

God, all the fussing with threads and fork-joins and mapreduces and god knows what everyone else wanted me to do: gone. Gone!
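
Something in that spirit (an illustrative sketch):

    import java.util.concurrent.ThreadLocalRandom;
    import java.util.stream.IntStream;

    int n = 100_000_000;
    long inside = IntStream.range(0, n)
            .parallel()   // this one call replaces all the thread fussing
            .filter(i -> {
                double x = ThreadLocalRandom.current().nextDouble();
                double y = ThreadLocalRandom.current().nextDouble();
                return x * x + y * y <= 1.0;
            })
            .count();
    double piEstimate = 4.0 * inside / n;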


Yeah, seriously, Java's threading support is good but the boilerplate was horrific.

One thing you should know is that parallel streams use the default (common) thread pool unless you specify otherwise. Had that choke up some REST handler threads a few times. If you think it might be an issue, make sure to specify a different pool.
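
The usual workaround relies on a semi-documented implementation detail: a parallel stream runs in whatever ForkJoinPool its terminal operation is submitted from. A sketch (words is an assumed List<String>):

    import java.util.concurrent.ForkJoinPool;

    ForkJoinPool pool = new ForkJoinPool(4);   // dedicated pool instead of the shared commonPool
    long count = pool.submit(() ->
            words.parallelStream()
                 .filter(w -> w.length() > 3)
                 .count()
    ).get();   // note: get() throws checked exceptions in real code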


I assume the expression becomes null if none of those match?


No my bad, it doesn't compile, you have to add a default.


switch assignment:

  int numLetters = switch (day) {
      case MONDAY, FRIDAY, SUNDAY -> 6;
      case TUESDAY                -> 7;
      case THURSDAY, SATURDAY     -> 8;
      case WEDNESDAY              -> 9;
  };


Reminds me of Kotlin's "when" statement

