* Only 64-bit Linux on AMD64 is supported, so there is no ARM or Android support currently.
* Files must be run on the computers they were compiled on, or on identical hardware.
* A GC is still needed; only G1 and Parallel GC are supported.
* No dynamic bytecode (lambda expressions, dynamic classes, etc.)
* The only supported module is java.base
* No decrease to JVM startup times
* You still need the JVM
* Some decrease to spin-up time, as there will be fewer JIT passes that require stopping the world.
Effectively this just lets you pre-load .class files into the code cache directly, rather than running the ~10k initial interpreted bytecode executions a method needs before it would receive a JIT pass.
Lastly, .so is just used as a container, so I doubt we can expect to dynamically link against AOT-compiled Java from C/C++/Rust land.
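For reference, the workflow described above boils down to something like this, a sketch based on the JEP 295 examples (library and class names are arbitrary placeholders):

```shell
# Compile a class (or the whole java.base module) ahead of time into a
# shared library -- the .so is really just a container for code-cache
# contents, not a normal linkable library.
jaotc --output libHelloWorld.so HelloWorld.class
jaotc --output libjava.base.so --module java.base

# Tell the JVM to load the AOT-compiled code instead of interpreting
# and JIT-compiling those methods from scratch.
java -XX:AOTLibrary=./libHelloWorld.so,./libjava.base.so HelloWorld
```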
That just means it won't be AOT compiled. The JVM can still JIT them.
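A concrete example of what falls into that bucket (a sketch; the class name is made up): the function implementing the method reference below is generated at runtime by LambdaMetafactory via invokedynamic, so there is no class file on disk for the AOT compiler to see. The JIT handles it, while the enclosing method can still be AOT-compiled.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LambdaDemo {
    // String::toUpperCase compiles to an invokedynamic instruction; the
    // Function implementation is spun up at runtime, so it cannot be
    // AOT-compiled -- but this surrounding method can be.
    static List<String> shout(List<String> words) {
        return words.stream()
                    .map(String::toUpperCase)
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(shout(Arrays.asList("a", "b"))); // prints [A, B]
    }
}
```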
> No decrease to JVM start up times
The JVM itself starts quite fast. Most of the time spent in startup comes from the applications themselves: they spend a lot of time in the interpreter running their startup code and burning CPU cycles on the lower compiler tiers.
> The only supported module is java.base
Supported in the sense of Oracle offering support. You can still compile other modules; it's just experimental.
Jar unpacking can contribute quite a bit to startup times. It would be neat if this also produced a pre-linked library as well.
edit: I should add that this really only matters for large numbers and sizes of jars.
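A pre-linked image does exist in Java 9 via the jlink tool from Project Jigsaw; here's a sketch of the workflow (paths and the output name are illustrative):

```shell
# jlink (new in Java 9) assembles a stripped-down, pre-linked runtime
# image containing only the modules the application needs, so startup
# skips scanning a pile of jars.
jlink --module-path $JAVA_HOME/jmods \
      --add-modules java.base \
      --output myruntime

# The image ships its own launcher.
./myruntime/bin/java -version
```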
My question is how much memory can be saved by not having the JIT. Sure, you still have a GC with a heap (VM), but one of the big memory pigs can be the JIT.
I would have hoped to see it sub 5MB.
Whether 24-48 MB is trivial depends completely on what is stored in that space, i.e. is there a more efficient representation for your stated goals? Context is king. Even 5 MB could be non-trivial, depending on context.
There are quite a few manufacturers that have been replacing their firmware with such versions of Java.
$ java -version
openjdk version "1.8.0_91"
OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-0ubuntu4~14.04-b14)
OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
$ java -Xmx999k
Error occurred during initialization of VM
Too small initial heap
If you don't know better, I can gladly provide you with some links to embedded Java vendors.
I welcome concrete counterexamples of why this expectation would be realistic.
JamaicaVM requires 1 MB
MicroEJ can target Cortex-M processors with 128 KB flash and 32 KB RAM.
Among Oracle's offerings, the Java ME Embedded reference implementation requires 128 KB RAM and 1 MB ROM.
Java ME might not seem relevant in 2016, but it is running on the majority of Ricoh copiers.
And on smart devices produced by Gemalto, like the Cinterion EHS6, digital phones, or smart meters.
And on devices from many other embedded manufacturers that are jumping into IoT.
On embedded systems it might not be trivial, but on my laptop with its 4 GB of RAM, and even more so on my desktop with 16 GB, 50 MB is definitely trivial.
Do you know what you are talking about? Android is a totally different thing from the JVM and the Java ecosystem! It is a totally different technology! They use one step to turn your bytecode into their own dex format; after that, it is completely separated from the Java ecosystem (the ecosystem that comes from Oracle).
I am fascinated that on HN a comment as uneducated as this hasn't been downvoted.
They use a kind of hybrid AOT/JIT right now. (Dalvik was interpreter-only until Android 2.2, when a JIT was added; in Android 5 they switched to complete AOT with ART; now they are going with a hybrid approach.)
There is also the risk of "hot pathing" the wrong paths if the server comes up during, say, market-closed hours...
Exactly. There are a lot of high-performance systems written in Java. For long-running systems, the startup cost is irrelevant. And for networked systems (such as fintech) you are going to be IO-bound. The main issue with Java and high performance is GC-related pauses, and that is a concern shared with any GC'd runtime.
I am not aware of any trading company using it. Maybe some... but the high-perf people I know go other ways.
If you have an IF somewhere random in your code, you can end up with JITed code that is optimized for the wrong branch.
If you try to put some kind of network device in place to capture the traffic and respond with fake responses to make the code happy, you are one bit flip away from sending bad stuff to prod.
Including some ASCII art of Trogdor the Burninator!
My question is really "why will it succeed this time?" Did GCJ and similar technologies not take off because Sun didn't support them? Or because it is a really neat technical challenge that solves a problem people don't actually care about?
So, yes, this is cool. But I'm currently giving it a really low likelihood of success, since I have priors that failed. What other evidence are people seeing that makes this something to be excited about, as opposed to just impressed? (Or are we just impressed, and I can go back to my corner?)
Sun was religiously against it.
Regarding GCJ, it failed because it is a hard problem that people wouldn't keep working on for free, and most of them stopped working on it when OpenJDK was released.
While others read newspapers on the train, I read papers. :)
In any case, these provide AOT.
JamaicaVM, PTC Perc (former Atego and Aonix), IBM J9, Oracle Embedded Java, OS/400 Java (uses the same bytecode deployment as the other OS/400 languages), Excelsior JET, and probably a few others in embedded market that I am not aware of.
And of course the Android Java fork with ART.
This means that we might actually get a language that becomes the new C in terms of popularity.
Dynamic languages were attractive as an alternative to being forced to specify types all the time, even when it's "obvious." Nobody would have complained if type errors were pointed out for "free."
But type inference is getting popular (e.g. Scala), which is showing people that you can have your cake and eat most of it too.
There's also a lot of little things like REPLs, runtime metaprogramming, blah blah, that used to be solely the domain of dynamic languages, but popular interpreters/VMs have gotten way better in the last decade or two (thanks, JVM) and shown that you actually _can_ have it all. There's no longer a big strong line between interpreted and compiled.
If you can make a statically typed language expressive and fast-to-iterate enough, which I think you can and we (mostly) have, then that kind of yanks the ground right out from under the feet of the dynamic languages, leaving them with no real reason to exist in the long run.
That said, there's something attractive to the inexperienced about being able to code in the laziest possible way, and I don't think platforms that deliver a tiny bit of value in the short run in exchange for massive payback in the long run will ever lose popularity among new engineers who have not yet learned what it's like to maintain a large program. There will always be PHPs as long as there are students. And that's OK. You learn to do things well by doing things poorly. But hopefully over time such tools will mainly be used for trivial and learning projects, not big mission-critical things.
If you go read the Xerox PARC, DEC and ETHZ papers you will find REPL goodies using static system programming languages with automatic memory management.
The Xerox ones even did correction suggestions when compiler errors happened.
Namely Mesa/Cedar, Modula-2+, Modula-3, Oberon and its descendants.
For example, Oberon System 3 already had something similar in spirit to Swift Playgrounds.
I've been thinking about type inference in this context, but I don't think that's enough to convert people to static languages. I agree that there is a general problem with the 'laziest possible way' to program, as you put it - dynamic languages are way more forgiving than static ones. And while tools may evolve to make some things trivial, I think there will (almost) always be some areas where dynamic languages' forgiveness will be a good enough reason for some people to use them. Today it's types, but in the future there'll be something else to sacrifice.
The theory of statically typed languages is progressing, while "dynamic" languages haven't actually had many real advances since e.g. Scheme first appeared. Sure, there are small things around ergonomics and immutability (Clojure, especially), but really there's been very little true advancement.
Personally, I think either static (dependent) types will win or we'll just end up implementing different custom static type systems ad-hoc. My reason for having more confidence in the former rather than even more formal methods is that there are very few math-like/truly spec-level languages (e.g. TLA+) that can be mechanically translated to a meaningful program. We need something intermediate, but so far (to me!) Idris has seemed like the only remotely credible contender in that you can pretty much choose arbitrarily how much provin' vs. how much assumin' you want to do.
EDIT: The current hype (in frontend especially) seems to be around 'gradually typed' languages like TS and Flow, but AFAICT they are basically just a very poor man's ad-hoc version of dependently typed languages. (Concretely, TS/Flow do have a huge advantage in that they're really easy to introduce, but personally I had no problem transitioning gradually from ES6/traceur to Scala.js/React just by adding strongly typed shims where appropriate. If your application is so tangled that you cannot introduce such shims in various places, it's probably tangled enough that you'll want to do a full rewrite anyway.)
I think Scheme was the first to introduce first-class continuations. That was a pretty major advance in terms of expressing, for example, search algorithms by just rewinding to a previous continuation. Of course, since then "we" (Felleisen, specifically) discovered that delimited continuations are perhaps better -- but that was a 'refinement'.
 Aka: languages with more than one type.
 Of course, some might interpret (pun!) that as a sign that they're "perfect". However, we still get loads of bugs in dynamically typed languages, so surely there must be something to be improved upon, right?
 See e.g. "State Machines All The Way Down" at https://eb.host.cs.st-andrews.ac.uk/drafts/states-all-the-wa... (DRAFT)
 See e.g. "Type systems as macros" at http://www.ccs.neu.edu/home/stchang/pubs/ckg-popl2017.pdf (DRAFT)
Since I'm already throwing around references, I'd also point to the "Elaborator Reflection" paper; PDF at http://davidchristiansen.dk/drafts/elab-reflection-draft.pdf ; video at https://www.youtube.com/watch?v=pqFgYCdiYz4 . It shows just how much provin' you can do with just a little bit of ad-hocness at compile time.
I think that excessive dynamism is getting less attractive when you account for all the costs it brings with it and that the vast majority of the benefits can be achieved with static metaprogramming, in a more readable and safer manner.
Imho, OO is about polymorphism and Rust provides that.
CLOS also allows the developer to implement other inheritance mechanisms, by developing his own way to combine the methods of a generic function.
At least, not according to the widely used definitions I've read of OO.
The Quasar instrumentation is, in my opinion, a pain. Out of the box you either get AOT instrumentation or you use the Java agent command-line option. Both were problematic for me in the context of build processes, IDE support, etc. It's possible to do runtime instrumentation with no special command-line option using the Quasar URL classloader and an Ant task which scans classes to instrument.
No, you'd get a slow illusion of full concurrency. The JDK ditched green threads ages ago because native threads were far more robust.
Meanwhile, nearly every other language offered green threads only as an additional library/option, which hurt adoption. Go, by contrast, just forces you to use green threading.
With an AOT option for Java, this becomes even more of an option.
why can't Java implement the same idea and let all existing libraries benefit from it?
1. Changing which functions do and don't block can cause huge issues with downstream libraries/people who depend on system libraries behaving the same way tomorrow as they did yesterday.
2. Refactoring all code around system calls so they never block.
3. Write a multi-core and multi-socket scheduler.
4. Write a scheduler to balance multi-core and multi-socket thread load.
5. Determine how you will do polling, and write a polling strategy to figure out what green threads are/are not blocked, then figure out how to communicate this information efficiently.
6. Repeat the above 5 steps for EVERY platform Java runs on.
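To make the constraint behind those steps concrete, here is a minimal sketch (class and method names are made up) of multiplexing many small tasks onto a few carrier threads with plain java.util.concurrent. It only resembles green threading because every task finishes quickly; a single blocking syscall in a task would stall a whole carrier thread, which is exactly why steps 1, 2, and 5 are needed for arbitrary code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class GreenishDemo {
    // Run `tasks` tiny units of work on only `carriers` OS threads.
    static int runAll(int tasks, int carriers) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(carriers);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            // Each "green" task yields by completing; nothing here blocks.
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAll(100_000, 4)); // prints 100000
    }
}
```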
The hotspot JVM will accept both -d32 (32-bit data model) and -d64 (64-bit data model) everywhere, but they will only work on certain platforms.
The language specification doesn't state anything about how threads should be scheduled.
There are actually a few embedded JVMs that might still offer it.
Not sure why I'm getting downvotes for just asking a question, I thought this place was about discussion of ideas.
If it's at the JVM level (with a command-line option such as "-useGreenThreads" to change how threading is done) then it's transparent to all existing code. I can just use anything (such as an existing JDBC library), increase the threadpool to levels that would be considered "ridiculous" with the current JVM threading system, and it would all work fine. No code changes necessary for existing libraries.
People do this in Python, but they do it in Python because
1) Python is dumb and has the GIL, much easier to hack in
2) They use reflection to change libraries, but it breaks any time you hit native C code. With Java trying to JIT everything, you would now have to change threading at the assembly level.
I think it would be a ton of work to do at the base JVM level. If you look at the people behind the library I linked, many of them are core Java people. They might have more insight into reasons why it won't work.
There's no way you would get a switch like that, because you interact with them differently. I wish greenthreads would not use "thread" in the name because it causes confusion like in your case.
You take the same code and run it with 3,000,000 green threads... you are still capped at 32 things running at the same time. Your system can kinda sorta pretend to be more concurrent, but the actual parallelism is the same.
(Also, they're still concurrent, just not parallel: https://vimeo.com/49718712)
This isn't "compile your application to an EXE", like Go or Rust. This is "compile some or all of your application to a DLL, and then have the Java runtime use that DLL".
In other words, you still need a JRE available. The use case for this is simply optimizing hotspots in some very specialized situations.
This is significant for those that want to AOT Java without paying for it.
The entry-level edition of Excelsior JET is free (as in beer) since the end of August 2016. Licenses for the more senior editions have been available for non-commercial use at no cost for many years.
AOT compilation is mostly intended to address startup time: Java has a big standard library, and much of it needs to be interpreted and then JIT-compiled each time an application runs.
Between jigsaw modules and this, we should see some nice improvements in startup time.
For me the JVM starts really quickly; the only performance issues I've seen there are related to large bloated frameworks doing too much at load time. I don't recall the std library being a factor at all (and why do you think it is recompiled every invocation?)
The JVM has always (well, not always, but for 15+ years) run Java code by first compiling it to native code. The only difference is that the JVM will now allow you to optionally do the compilation ahead of time, in order to decrease an application's startup overhead. This actually hurts performance of the compiled methods, since runtime profiling is not available to the AOT compiler (but the document also describes a "tiered" mode, where AOT-compiled code can be dynamically replaced with a better JIT-compiled version at runtime).
If you want to link your Java code with native code written in a non-Java language, that's a totally different requirement, and JNI will continue to be the way to accomplish it.
Although if you don't like JNI, you can use JNA (which I believe uses JNI to link to libffi, and libffi for the rest).
Not on Java 10 hopefully.
In some future version, Panama will offer functionality similar to JNR.
You give the JVM a .class file, not a .java file.
Though this is a technicality.
Java 9 will be able to compile Java source code to Java bytecode, and then compile that bytecode to native machine code, correct?
for the initial release, the only supported module is java.base
I.e, we will not be able to compile our own code ahead of time just yet.
That's not really true. From the same doc:
> AOT compilation of any other JDK module, or of user code, is experimental.
So it's "experimental", but possible to do.
That is incorrect: https://developer.android.com/guide/platform/j8-jack.html
Just plopped this into my activity to make sure: someList.stream().map(String::toUpperCase).forEach(System.out::println);
As long as minSdkVersion is 24 or higher you should be good.