I have the same experience on x86, but because of the abundance of resources it's less noticeable.
I don't insist on everything being open source (completely), but base languages/VMs should be. Unless something about this changes, I'm not using anything owned by Oracle again.
It of course sucks for those who invested their time in hacking the Oracle stack, but I think many people these days outside the Oracle bubble see Oracle as a dead-end.
Most people don't have a war chest full of money to hire the best lawyers like Google to protect them against baseless lawsuits.
Anyway, even from a technical point of view, things like Graal and Truffle are terrible workarounds, and I'd prefer just fixing the things that are broken in the first place. Seeing that this will never happen at Oracle, I'll just work on code where the maintainers/owners are not so openly hostile.
What we can and should say is that Oracle is losing the trust of the Silicon Valley startup crowd, and that this is a very important population segment. It's like a candidate being elected but gradually losing the young vote – it's a bad sign that Oracle would do well to consider.
Saying "I'm not using anything owned by Oracle" is something most serious server-side developers just can't say, because, frankly, there's little choice. If you need high-performance, multi-million-LOC software developed by a large team, you would be quite foolish to choose anything but the JVM. So we should certainly discuss Oracle's behavior, but pretending that we can avoid Oracle software as if it were a specific Linux distribution is a misrepresentation of reality.
Five years ago most new server engine development was done on the JVM. Since then there has been a shift toward new development being done in C++ such that the vast majority of new server engines I know about are being developed in C++ (mostly C++11) across a diverse range of companies. The reasons are practical and reflect the evolution of hardware.
The short version is that on current hardware C++ is much more efficient: it offers better throughput per core and can achieve integer-factor improvements in absolute performance relative to the JVM. Most of the difference boils down to two things. First, server performance tends to be bound by memory performance, and the JVM is quite a bit worse on that front than what is easily achievable in C++. Second, an optimal high-performance server engine design is difficult to express within the JVM, so the basic design of the engine kernel tends to be less efficient as well.
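To make the memory-layout point concrete, here's a minimal Java sketch (my own illustration, not from the comment above): summing a contiguous `long[]` versus an array of boxed `Long` objects. The boxed version dereferences a pointer per element, which is exactly the kind of cache-unfriendly access pattern the JVM's object model tends to produce, and which a C++ array of structs avoids.

```java
// Sketch: the same sum over 1M values, stored two ways.
// long[] is one contiguous block of primitives; Long[] is an array of
// references to heap objects, so summing it chases a pointer per element.
public class LayoutDemo {
    static long sumPrimitive(long[] a) {
        long s = 0;
        for (long v : a) s += v;    // sequential reads over contiguous memory
        return s;
    }

    static long sumBoxed(Long[] a) {
        long s = 0;
        for (Long v : a) s += v;    // pointer dereference + unboxing per element
        return s;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        long[] prim = new long[n];
        Long[] boxed = new Long[n];
        for (int i = 0; i < n; i++) { prim[i] = i; boxed[i] = (long) i; }
        // Both print 499999500000; the boxed version just works much
        // harder per element to get there.
        System.out.println(sumPrimitive(prim));
        System.out.println(sumBoxed(boxed));
    }
}
```

The results are identical; the difference only shows up in cache misses and GC pressure under a profiler, which is why it's easy to miss in small benchmarks.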
In large-scale systems, that starts to add up in terms of power and hardware consumption, and companies are more sensitive to this than they used to be. C++ currently delivers significantly better performance characteristics while using less hardware to do it.
Writing an excellent server kernel requires a high level of technical ability and quite a bit of low-level code, since you have to reimplement most of the operating system resource management services most developers take for granted. Few pieces of open source server software are built on userspace kernels, and the ones that are, like PostgreSQL, are (currently) only partial kernels that still rely on the OS for significant things a full kernel would reimplement. A properly designed database kernel, for example, is at least 100k LoC of low-level C/C++, and that is before you actually write the server application that sits on top of it. I've designed and written kernels for both database engines and network engines; it is not trivial.
So why would you want to go through all this effort instead of writing to the standard POSIX APIs directly? For performance and scalability. For example, a well-designed disk buffer and scheduler for a database kernel can easily triple the I/O throughput possible with a highly tuned POSIX implementation. A lot of locking and blocking, both explicit and implicit in the OS, becomes unnecessary. Certain kinds of distributed system problems become easier to solve because you can adaptively schedule data flows at a fine-grained level. One of the reasons Oracle and DB2 scale so well and get such high throughput is that they are built on highly optimized userspace kernels.
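As a toy illustration of the buffer-cache part of this (a hypothetical sketch, nowhere near a real database kernel): the core idea is to keep hot pages in memory under the application's own replacement policy instead of trusting the OS page cache. Real kernels layer page pinning, dirty-page write scheduling, and I/O batching on top of something like this.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy userspace buffer cache with an LRU replacement policy, built on
// LinkedHashMap's access-order mode. A real database kernel would replace
// readFromDisk() with its own aligned, scheduled I/O path.
public class BufferPool {
    private final int capacity;
    private final Map<Long, byte[]> pages;
    long hits = 0, misses = 0;

    BufferPool(int capacity) {
        this.capacity = capacity;
        this.pages = new LinkedHashMap<Long, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, byte[]> e) {
                return size() > BufferPool.this.capacity; // evict the LRU page
            }
        };
    }

    byte[] getPage(long pageId) {
        byte[] p = pages.get(pageId);       // get() also refreshes LRU order
        if (p != null) { hits++; return p; }
        misses++;
        p = readFromDisk(pageId);
        pages.put(pageId, p);               // may evict the eldest page
        return p;
    }

    byte[] readFromDisk(long pageId) {
        return new byte[4096];              // stand-in for an actual pread()
    }

    public static void main(String[] args) {
        BufferPool pool = new BufferPool(2);
        pool.getPage(1); pool.getPage(2); pool.getPage(1); // last one hits
        pool.getPage(3);                                   // evicts page 2
        pool.getPage(2);                                   // misses again
        System.out.println(pool.hits + " hits, " + pool.misses + " misses");
    }
}
```

The win in a real engine comes from what this sketch omits: the cache knows which pages are dirty, which queries need them, and in what order to flush them, so it can schedule I/O far better than a general-purpose OS cache can.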
Virtualization actually degrades the performance of high-performance server systems in part because the hypervisor acts as a primitive operating system underneath the operating system the server kernel can see. This does not impact non-kernel based servers as much.
Every potential performance gain C++ might have is completely negated by having to touch that complete clusterfuck of a language.
I worked with Java since 0.1 in the enterprise world, for over 10 years, and with Clojure & Scala I would happily continue with it. But I don't trust Oracle to release source for some critical performance/scaling features at the same pace, or at all, and I tend to think that's wrong.
How does OpenJDK handle server-side workloads compared to the Oracle binaries these days?
Edit: and indeed it would be nice to know whether a company like Google uses OpenJDK or the Oracle one.
If you fork, be prepared for a patent-infringement lawsuit as soon as you earn some money with it.
I'm pretty sure most companies have realized that staying the hell away from Oracle isn't a stupid move.
The number of paid "consultants" that Oracle pushes through to make the sale, plus their tactic of charging what you can afford to pay, as well as the "can't get fired for buying IBM" mentality of large IT organizations, means that Oracle has a distinct advantage over any small competitor.
Not that Oracle's tech isn't good, and there are comparable alternatives; but they are only comparable in the technical aspect, not in the (strong-arm) sales aspect.
Do you have technical reasons for saying that, or only political ones?
PS: Why compare with the top 3 American internet giants? What's the point?
BTW, Am I the only one who noticed the Lord of the Rings theme?
The risk of touching anything Oracle-related is not worth the potential benefit.
(Yes, the fact that they screwed minor Solaris customers does not translate to them suing over IP -- but then again, they already did the latter too.)
Oracle is probably a great company to invest in; they make a lot of money -- but that alone doesn't make them a good stakeholder in your core business. I wouldn't want to partner with McDonalds either.
First because it means we can have _fast_ versions of existing languages.
Second because we can interact with the _huge_ amount of JVM libraries (this is a very big deal).
Third because SubstrateVM seems to enable the things I really like about Go: low memory footprint, fast startup time, and easy deployment (give me a binary that does everything I need).
They just need to make sure native interop is easy (both C and C++), and we have a winner!
The "Substrate VM Execution Model" slide talks about Ahead Of Time Compilation, which makes it sound like it might actually be the LLVM-based Substrate VM: http://vmkit.llvm.org. Alternatively, could it be a version of Maxine?
AOT as an option certainly sounds interesting...
Oracle is planning eventually to replace the C++ based JIT with Graal in some version following 8.
http://www.slideshare.net/vinayhulgar/java-8-selected-update... - Just a brief reference on the item list for possible upcoming features
It would be interesting to compare this to Parrot, which had similar goals.
It's full of good ideas, but also full of cruft and years of technical debt. It doesn't have a JIT compiler, has (nearly) no async IO, and its threading support is pretty new, not battle-tested, and not documented very well.
Or rather just "because it's associated with Perl".
So Clojure, yes, but Scala, no.
Note: only two benchmarks are presented in the paper, and as with any benchmarks, take them with a grain of salt.
If it weren't for Truffle on the JVM, Topaz would be the fastest implementation of Ruby, and by a large margin. Assuming Ruby 2.0 is 5x faster than Ruby 1.8, Topaz is 8-10x faster than Ruby 2.0!
And Truffle is about 1.5-2.5x faster than Topaz.
Now I want to know whether there will be a better, faster FFI for SVM.
And how far are we from seeing a release of SVM?
-b + (Math.sqrt(b**2 - 4*a*c)) / 2*a        # wrong: "/ 2*a" parses as "(... / 2) * a", and -b is outside the division
(-b + (Math.sqrt(b**2 - 4*a*c))) / (2*a)    # correct quadratic formula
-b + (Math.sqrt(b**2 - 4*a*c)) / (2*a)      # still wrong: only the sqrt term is divided by 2*a
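The same precedence pitfall exists in Java (a small illustration of my own): `/` and `*` share precedence and associate left, so `/ 2*a` means `(... / 2) * a`, and `-b +` stays outside the division entirely.

```java
public class Quadratic {
    public static void main(String[] args) {
        double a = 1, b = -3, c = 2; // x^2 - 3x + 2, roots x = 1 and x = 2

        // Buggy: computes -b + ((sqrt(b^2 - 4ac) / 2) * a),
        // because / and * bind equally and associate left-to-right.
        double buggy = -b + Math.sqrt(b*b - 4*a*c) / 2*a;

        // Correct: parenthesize both the full numerator and the 2*a denominator.
        double correct = (-b + Math.sqrt(b*b - 4*a*c)) / (2*a);

        System.out.println(buggy);   // prints 3.5 -- not a root at all
        System.out.println(correct); // prints 2.0 -- the expected root
    }
}
```

With a = 1 the buggy and "partially fixed" forms can even agree by accident, which is what makes this class of bug so easy to ship.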
And how does your approach fare compared to state-of-the-art JIT implementations? Can your solution produce a faster Lua than LuaJIT, for example? Why start with Ruby, of which you implement only 40%? And what have you used from JRuby?
The advantage of running on the Substrate VM is that, unlike JRuby, our startup time is about the same as MRI (slide 19).
There are several languages using this system: by us (JS, Ruby), by academic partners (Python, R), and by others (Smalltalk is one I know of). This talk just used Ruby as an example.
We used the parser from JRuby.
Caveat: I've not used HLVM or looked at its implementation, but the scope seems very similar.
It would be interesting to see how it compares to OpenResty (LuaJIT), Node.js (V8 JS engine) & co.
From the second slide: "The following is intended to provide some insight into a line of research in Oracle Labs."
Then I'd actually read this PowerPoint!
But in all seriousness, if this becomes like LLVM and is well developed, it could be extremely helpful.
However, I'm sure Oracle would find an awesome way to make it "Oracle enterprise trademarked" and turn it into some silly product that aggravates even more people than Java, which would be impressive!