But I don't think the constraints are that bad, are they? Can you give examples? Modern compilers like Graal can even scalar-replace objects that will escape later, via partial escape analysis.
The biggest constraint by far is the reliance on inlining. Inter-procedural escape analysis could potentially remove the need for value types in many places, but it doesn't seem like the JVM engineers believe that's feasible.
I've read the Graal papers and part of the source code. So I am familiar with PEA. It's too bad that optimisation didn't make it into C2 yet. I wonder if it ever will.
In any case, whether functional or not, the observation is that many objects have limited scopes that correspond to structural scopes in the code.
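As a rough illustration (the names here are made up for the example), this is the kind of short-lived, structurally scoped allocation that escape analysis can eliminate once the constructor and accessors are inlined:

```java
// A small temporary object whose scope is a single method. After inlining,
// HotSpot's escape analysis can prove 'p' never escapes distanceSquared(),
// so the allocation is scalar-replaced: the fields live in registers and
// no heap object is ever created.
final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

public class EscapeDemo {
    static int distanceSquared(int x1, int y1, int x2, int y2) {
        Point p = new Point(x2 - x1, y2 - y1); // candidate for scalar replacement
        return p.x * p.x + p.y * p.y;
    }

    public static void main(String[] args) {
        System.out.println(distanceSquared(0, 0, 3, 4)); // prints 25
    }
}
```

With plain (non-partial) escape analysis this only works when the object provably never escapes on any path; partial escape analysis, as in Graal, can additionally defer the allocation to the rare branch where it does escape.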
Don't know the root cause, but the worst most sites get on here is annoyingly sluggish. Between the site and Firefox, something's definitely triggering pathological behavior.
Funny statement, at least for Java. I don't know anyone who would say the majority of Java applications aren't huge, slow, memory-hogging beasts compared to equivalent native ones, with codebases just as unoptimised from the perspective of the humans who have to work with them. That's mainly perception from when I briefly worked with Enterprise Java many years ago --- no one wrote "simple" or "elegant" Java applications, and stuffing in as many design patterns and abstractions as you could was considered "best practice".
There are certainly examples of simple and small Java applications (see https://en.wikipedia.org/wiki/Java_4K_Game_Programming_Conte... ), and I've written a few, but the overwhelming culture seems to be that of anti-optimisation and bureaucratic excess. In some sense, it's almost like working against or "challenging" the JVM's optimiser is the norm.
In my experience, a "simple, elegant" design --- which does not necessarily mean "highly abstract" --- tends to be very close to optimal anyway, with the compiler's optimiser doing the remaining work. That makes me wonder whether claiming a language/runtime/compiler has powerful optimisation abilities is actually a reflection of how much the typical source code in the language has to be "cleaned up" by the optimiser in order to be decently efficient.
Also, does anyone else notice the "Machine code" given for the JIT example is completely unrelated to either the Java or bytecode? It's 16-bit realmode --- I recognise the access to the 40h segment, it seems to be timing-related, and a bit of Googling finds that it's actually part of an old TSR clock utility (with unknown source):
I would, and I've been developing in Java for over ten years now, after ten years of C/C++.
I think that your complaints have nothing to do with Java, and much to do with a coding style that became popular in the '90s. You could see the very same in C++ applications back then, but then Java took over. The big enterprise applications of the '90s and '00s reflect the prevailing mindset of the time. Java applications written today don't look like that.
> In my experience, a "simple, elegant" design --- which does not necessarily mean "highly abstract" --- tends to be very close to optimal anyway, with the compiler's optimiser doing the remaining work. That makes me wonder whether claiming a language/runtime/compiler has powerful optimisation abilities is actually a reflection of how much the typical source code in the language has to be "cleaned up" by the optimiser in order to be decently efficient.
In that case I think your experience may be limited, or that you're unaware how big "the remaining work" may actually be, especially on modern hardware. HotSpot has proven extremely adept at running many different code styles very efficiently. HotSpot's next-gen compiler, Graal, is IMO the biggest breakthrough in compilation technology of the past decade. It runs Java faster than C2 (HotSpot's current compiler), Python faster than PyPy, and JS as fast as any other VM out there. It even has a C frontend, which, while not yet matching gcc, performs surprisingly well considering how little effort has been put into it.
This kind of work can be very important, namely, the ability to optimize code across different languages. A recent paper found that rewriting parts of SQLite in Python (!) instead of the original C can result in a 3x (!) performance boost when running in an application, as application and DBMS code can be optimized as one unit.
This is really a matter of perspective. A typical enterprise application is running on a beefy multicore processor and spending most of its threads' cycles waiting for database IO. Also, it's not really a concern if your jar file is 100 MB when your machine has 256 GB of RAM. From this perspective, the fact that an application can be quickly slapped together from the multitude of Java libraries and frameworks by a stackoverflow-savvy junior developer far outweighs the marginal performance benefits that could be gained from using a language that compiles to native code.
The fact is that HotSpot has made the JVM perform "good enough" for this kind of domain. And keep in mind that many enterprise developers (at least in my experience) have never used a language besides Java, so this level of performance is normal to them.
As for applications that are huge, memory-hogging, complex and inelegant, well, congratulations, you just described the majority of enterprise applications.
I see the proliferation of "Java is not slow" articles as a sign that it is --- if it wasn't, why would there be a need for such strong propaganda and carefully constructed microbenchmarks? Sun/Oracle have a deep interest in hiding their "naked emperor". Where are all the "C is not slow" articles, for example? I'm sure anyone who has used a non-trivial Java application, not even an "Enterprise" one, is immediately aware that it seems slower, even those who might not know at all what Java is.
Nobody writes "C is not slow" articles any more, but I remember a time when there were people who believed C++ was inherently slower than C. Maybe some such people still exist! I don't see much talk like that anymore though.
With respect to bloated code, the 1990's and early 2000's also saw things like DCOM and other C++ bloat-zones. It's not really to do with the language. The presence of bloat in enterprise apps is more to do with the lack of competition (bespoke apps are by definition monopolies), the frequent lack of deadline pressure (deadlines are invariably artificial) and the inflexible team sizes (over-engineering is incentivised by the need to retain a nice safe corporate job). The fact that enterprise software and Java go hand in hand speaks more to the lack of competition in the space Java plays in, than anything about the language itself.
I just don't write code in Java anymore --- I still need to use some Java apps, and the difference remains obvious.
> but I remember a time when there were people who believed C++ was inherently slower than C.
The C++ vs C case is slightly different, because C++ is essentially an extension of C. Some features like iostreams vs stdio are objectively slower on the C++ side, and efficient C++ code usually tends to look a lot like C. Those who say C++ is slower are referring to the "overuse" of C++ features that add overhead. As the benchmarks posted below show, even carefully written and optimised Java that looks like C is slower than C.
They write the same program in C++, Java, Scala and Go, then compare performance both before and after optimisation.
The key part that shows Java can be as fast as C++ is V.E "Java tunings":
> Jeremy Manson brought the performance of Java on par with the original C++ version. This version is kept in the java_pro directory. Note that Jeremy deliberately refused to optimize the code further, many of the C++ optimizations would apply to the Java version as well.
The tunings he applied are not exotic, but would not be used in most code unless it was actually performance sensitive. For instance he chose better starting sizes in ArrayLists and replaced boxing collections with more optimal ones (presumably from GNU Trove).
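The paper doesn't show the exact code, but tunings of that kind look roughly like this sketch (illustrative names; the primitive-array version stands in for what a primitive collection library like GNU Trove provides):

```java
import java.util.ArrayList;
import java.util.List;

public class TuningDemo {
    // Untuned: the default capacity forces repeated backing-array copies as
    // the list grows, and Integer boxing allocates an object per element.
    static long sumBoxed(int n) {
        List<Integer> xs = new ArrayList<>();
        for (int i = 0; i < n; i++) xs.add(i);
        long sum = 0;
        for (int x : xs) sum += x;
        return sum;
    }

    // Tuned in the spirit described: size the storage up front and use
    // primitives instead of a boxed collection, avoiding both the resizing
    // copies and the per-element allocations.
    static long sumPrimitive(int n) {
        int[] xs = new int[n]; // exact size known up front, no resizing
        for (int i = 0; i < n; i++) xs[i] = i;
        long sum = 0;
        for (int x : xs) sum += x;
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumBoxed(1000));     // prints 499500
        System.out.println(sumPrimitive(1000)); // prints 499500
    }
}
```

Both versions compute the same result; the difference is purely in allocation and copying behaviour, which is why such changes only show up in performance-sensitive code.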
Of course, that is a comparison against the original C++. Later in the paper it is shown that Google were able to optimize the C++ version heavily and it then beat the Java version again, however, apparently the optimised version relied on data structures so exotic that they're Google proprietary and they chose not to open source them, so I'm not sure that'd be reflective of the experience of the average C++ developer.
Yeah, I remember the days when junior Assembly coders could easily write better code than any C compiler for home computers.
Or the days when the arguments that are now used against Java, Go, Swift and many others were used against Turbo Pascal, C, C++, Modula-2, and Basic compilers.
The more things change, the more they stay the same, I guess.
Consistently twice as slow, with memory usage between 2 and 400(!) times larger. Java is slow and bloated compared to native.
Your next argument will be something about how micro-benchmarks don't reflect reality for larger programs...
What do you mean by "native"? C/C++? Yes, given enough effort you can write C code that outperforms Java code, sometimes handily, especially in single threaded or low-contention cases. Yet we did write applications in C/C++ before we switched to Java, we had excellent reasons to make the switch, and there have been precious few companies making the switch back. That shows you that the microbenchmarks don't tell the whole story.
Right now, the JVM's biggest performance issue is the lack of value types. Once they arrive -- and work is well underway -- beating Java will become harder and harder, especially given new compilers like Graal. But in any event, I just don't see large developers switching back to C++ (or Rust) en masse. Memory usage is hardly an issue, as RAM is very cheap and underutilized in server environments anyway. Spending effort to conserve RAM just doesn't make any sense, unless you're targeting RAM-constrained environments.
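To make the value-types point concrete: today the workaround for hot code is manual flattening, sketched below (illustrative names). Value types would let the JVM lay out an array of small structs contiguously on their own, without per-object headers or pointer chasing:

```java
public class FlattenDemo {
    // Today: an array of references, each Point a separate heap object with
    // its own header; iterating chases a pointer per element.
    static final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    static double sumXBoxed(Point[] ps) {
        double sum = 0;
        for (Point p : ps) sum += p.x;
        return sum;
    }

    // Manual "value type": parallel primitive arrays, laid out contiguously
    // and friendly to the cache and the vectorizer.
    static double sumXFlat(double[] xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum;
    }

    public static void main(String[] args) {
        int n = 4;
        Point[] boxed = new Point[n];
        double[] flatX = new double[n];
        for (int i = 0; i < n; i++) {
            boxed[i] = new Point(i, -i);
            flatX[i] = i;
        }
        System.out.println(sumXBoxed(boxed)); // prints 6.0
        System.out.println(sumXFlat(flatX));  // prints 6.0
    }
}
```

The two loops compute the same thing; the boxed version simply pays an indirection and a header per element, which is the overhead value types are meant to remove.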
2. How are Java's value types limited?
3. I find your calculation extremely pessimistic. Lambda expressions became widespread within a couple of years of their release, and I see no reason why value types will be different. I think that 5 years is a better estimate for wide use of value types.
> Except for pointer equality checks, forbidden operations will throw some sort of exception. How to control pointer equality checks is an open question with several possible answers.
Rules for permanently locked objects:
- restrictions on classes of locked objects
- all non-static fields must be final
- there must be no finalizer method (no override to `Object.finalize`)
- these restrictions apply to any superclasses as well
- an array can be marked locked, but then (of course) its elements cannot be stored to
- if not an array, the object's class must implement the marker type `PermanentlyLockable` (is this a good idea?)
- restricted operations on locked objects (could be enforced, or else documented as producing undefined results)
- do not use any astore or putfield instructions, nor their reflective equivalents, to change any field
- do not lock (you may get a hang or a LockedObjectException)
- do not test for pointer equality; use Object.equals instead (there may be a test for this)
- do not ask for an identity hash code; use Object.hashCode instead (there may be a test for this)
- do not call wait, notify, or notifyAll methods in Object
- at the time it is marked locked, an object's monitor must not be locked (in fact, should never have been?)
- side effects
- elements of locked arrays are stably available to readers just like final object fields (i.e., there is a memory fence)
- a locked object can be locked again, with no additional effect
- any attempt to mutate a permanently locked object raises java.lang.LockedObjectException
- any attempt to synchronize on a permanently locked object raises java.lang.LockedObjectException
- object lifecycle
- all objects are initially created in a normal (unlocked) state
- an object marked locked cannot be "unlocked" (reverted to a normal state)
- an object marked locked must be unreferenced by any other thread (can we enforce this?)
- the reference returned from the (unsafe) marking primitive must be used for all future accesses
- any previous references (including the one passed to the marking primitive) must be unused
- in practice, this means you must mark an object locked immediately after constructing it
- the method `lockPermanently` is used to lock an object permanently
- there is a predicate `isLockedPermanently` which can test whether an object is locked or not
- for initial experiments, these methods are in `sun.misc.Unsafe`; perhaps they belong on `Object` (cf. `clone`)
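The lifecycle rules above can be sketched as follows. This is hypothetical: `lockPermanently` and `isLockedPermanently` are the experimental primitives the notes place in `sun.misc.Unsafe`, so they are stubbed here (with a simple identity map) purely so the intended usage pattern compiles and runs:

```java
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Map;

public class LockDemo {
    // Marker type from the rules above.
    interface PermanentlyLockable {}

    // Stub for the experimental marking primitive. Per the lifecycle rules,
    // the reference it returns must be used for all future accesses.
    static final Map<Object, Boolean> locked =
        Collections.synchronizedMap(new IdentityHashMap<>());

    static <T extends PermanentlyLockable> T lockPermanently(T obj) {
        locked.put(obj, Boolean.TRUE);
        return obj;
    }

    static boolean isLockedPermanently(Object obj) {
        return locked.containsKey(obj);
    }

    // Satisfies the class restrictions: all non-static fields final,
    // no finalizer, implements the marker type.
    static final class Money implements PermanentlyLockable {
        final long cents;
        Money(long cents) { this.cents = cents; }
    }

    public static void main(String[] args) {
        // Lock immediately after construction, before the reference can leak
        // to another thread, and keep only the returned reference.
        Money m = lockPermanently(new Money(500));
        System.out.println(isLockedPermanently(m)); // prints true
    }
}
```

In the real design the mark would presumably live in the object header rather than a side table, which is what makes the "no unlocking, no stale references" rules necessary.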
I'm not seeing the 400x more memory - there is a 40x though. But if you add up the columns for the programs that have both C and Java versions, you get this rough summary...
        secs       KB     gz
Java   76.33  2004048  12775
C gcc  37.32   793652  11651
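Doing the division on that summary (a quick sanity check, nothing more): the time ratio comes out around 2.0x and the memory ratio around 2.5x.

```java
public class RatioDemo {
    public static void main(String[] args) {
        double javaSecs = 76.33, cSecs = 37.32;   // total seconds from the table
        double javaKb = 2004048, cKb = 793652;    // total KB from the table
        System.out.printf("time ratio:   %.2fx%n", javaSecs / cSecs); // ~2.0x
        System.out.printf("memory ratio: %.2fx%n", javaKb / cKb);     // ~2.5x
    }
}
```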
- Cyberduck is written in Java, yet nobody seems to be complaining about how slow and bloated it is.
- There exists a version of Quake 2 written in Java: http://bytonic.de/html/benchmarks.html It seems to be able to push out more frames than native, actually.
Both of those are real software that actually run well on Java, regardless of penalty paid for running in a VM, JIT, and GC.
Also when looking at the amount of memory and time taken when comparing C to Java in the microbenchmarks, always be sure to mentally factor in the base overhead of starting up the JVM both in time and memory. It's pretty easy to see that when a C version takes up something like 100KB of memory and the Java one takes up 30MB, Java consistently takes up at least ~30MB regardless of how much memory is required for that microbenchmark. When the C version takes up 300MB and the Java one takes up 700MB, that's a bit better of a comparison. (though still not perfect, because the Java GC will reserve a lot of memory for itself, even if it isn't using the full 700MB, if it feels it needs that much, etc.)
Lol, I just double-checked that after you said it; it never felt "Javaish", and even the interface is really sane.
Actually, they are using JNA for a lot of stuff, and they've written Foundation bindings... (https://g.iterate.ch/projects/ITERATE/repos/cyberduck/browse...) Cool stuff; I'll keep that for reference. However, he should put the bindings under something like the LGPL, since most of it will fall under fair use anyway (simple class names which you would use anyway, even without looking at the source, when making a JNA binding to Cocoa).
Not much for those programs:
>> It's pretty easy to see … memory
The table in your link shows it being 6% faster in one (literally) corner case, while all the other entries show it being slower by varying amounts. Keep in mind that in this benchmark a lot of the "heavy lifting" is being done by the GPU via OpenGL, so it isn't great for benchmarking languages that run on the CPU. It also doesn't mention what the "Original C Code" was compiled with.
mandelbrot 17% slower
fannkuch-redux 34% slower
fasta 38% slower
fasta-redux 39% slower
pidigits 44% slower
spectral-norm 115% slower
reverse-complement 121% slower
n-body 136% slower
regex-dna 235% slower
k-nucleotide 253% slower
As for being dishonest, well, you're commenting from a newly minted temp account so it would seem you didn't have the courage/backbone to use your real one.
You're looking at:
`public void canNotBeMerged()`