Java has really come a long way. I'm not sure why it's not hyped more. I guess it's like C++ (another widely used old man's language that runs the world).
It needs no hype. I think 90% of enterprises use it in a lot of ways. Pre-hype and post-hype, people know Java gets shit done. People who believe in the hyped cool functional language that ultimately runs on the JVM are not going to listen to anyone who claims Java is just fine for a lot of work.
Java isn't perfect, but it's the de facto language for anything that doesn't need to be compiled.
I personally really like Java, and I've used a ton of languages over the years. Not a perfect design, but it's hard to think of any big downsides. The syntax is annoying at times, but once you learn to use the IDE to poop out boilerplate it's not an issue.
I tend to group languages into machine-code, VM, and interpreted. This tends to lead to similar tradeoffs within each group... with the exception of Golang.
in my mind:
interpreted = why would you ever use a language like this?
VM = fast, safe, but going to use a lot of memory
machine code = fastest, low level HW access, usually unsafe
Also Swift and Objective-C. For some reason people think they are fast because they are compiled. Yes, they are fast so long as you only write C in them. Actually using their features is typically way slower than in Java.
> I tend to group languages into machine-code, VM, and interpreted.
This distinction doesn't make much sense in 2017: most languages are blends of all these things, with technologies like VMs, JITs, ahead-of-time compilation, etc...
Java is compiled to native code; this is just delayed until execution. There are drawbacks, like slower start-up due to the required JIT compilation, but there are also advantages, like being able to optimize for the exact machine you are executing on, or even recompiling at run time after profiling the running application and determining useful optimizations based on the actual workload.
A really important aspect of using an intermediate language, as Java and .NET but also LLVM do, is that it reduces the amount of required compiler code. If you have M languages each targeting N different platforms, you need M * N traditional compilers. If you first compile to a common intermediate language and then from there to the target platform, you only need M + N compilers.
> advantages like being able to optimize for the exact machine you are executing on, or even recompiling at run time after profiling the running application and determining useful optimizations based on the actual workload
Do you have examples (incl. measurements) for optimizations actually performed by a Java JIT compiler, which an ahead-of-time compiler can't perform due to a lack of runtime info? It is my understanding that those analyses and transformations which eke out the last few percentage points are so expensive that they're infeasible to do at runtime.
The JVM's profile-guided optimisations are generally a 10-20% win for Java, and more for Scala, I believe. They aren't that expensive to do. When you get to languages like Ruby or Python it just goes off the charts: you get orders of magnitude better performance from profile-guided JIT compilers.
As an example of what they can do, take de-virtualisation. Virtual method calls are expensive, and Java method calls are virtual by default. The JVM profiles method calls and analyses the class hierarchy to discover which ones can be de-virtualised. That's a big win. C# requires programmers to manually specify which methods are virtual because .NET doesn't use profile-guided JIT compilation.

In some cases (admittedly I've only seen artificial examples) this optimisation is so powerful you can write Java programs that completely trash C++ programs, e.g. a program that uses a command-line switch to pick a subclass of a virtual base class and then runs method calls on that object in a tight loop. Java will devirtualise the call based on the observation that only one target is ever used, then inline it, then do loop optimisations on the inlined version.
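To make that concrete, here's a minimal sketch of the kind of artificial example described above (the class and method names are made up for illustration):

    abstract class Shape {
        abstract double area();
    }

    class Circle extends Shape {
        double area() { return Math.PI; }
    }

    class Square extends Shape {
        double area() { return 4.0; }
    }

    public class Devirt {
        public static void main(String[] args) {
            // The command-line switch picks exactly one receiver type.
            Shape s = (args.length > 0 && args[0].equals("square"))
                    ? new Square() : new Circle();
            double sum = 0;
            // Profiling observes a single call target here (a monomorphic
            // call site), so HotSpot can devirtualise, inline area(), and
            // then apply loop optimisations to the inlined body.
            for (int i = 0; i < 100_000_000; i++) {
                sum += s.area();
            }
            System.out.println(sum);
        }
    }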
I'd say C# makes methods not virtual by default because it's a saner default. When writing a class you shouldn't make everything virtual, that should be a conscious decision to actually design a class for extensibility.
There are many. One really useful one is that branches on final variables get folded away along with their dead code. Say I have a library which allows different "sizes" of a list. The fastest way to sort that list depends on the maximum number of entries.
With a JIT you can set the size when you create the object, and if the JVM knows that value can't change, it will remove all the branches for the other sizes and run only the one for the selected size.
There's no way to know which sizes will be selected at compile time if the lists can be created dynamically, so a static compiler can never remove all the checks on the list size.
This is a simple example, but the JIT is very smart and makes a big speed difference in practice. It's the main reason Java is faster than C in some benchmarks.
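A hypothetical sketch of that pattern (the class and the size cut-off are made up): once the JIT has profiled a hot call site where maxSize is effectively a constant, it can, per the claim above, fold the branch and compile only the chosen strategy.

    final class SizedList {
        private final int maxSize; // fixed at construction, never changes

        SizedList(int maxSize) { this.maxSize = maxSize; }

        void sort() {
            // A static compiler must keep both branches; the JIT, seeing
            // only one value of maxSize at a hot call site, can drop the
            // dead one entirely.
            if (maxSize <= 16) {
                insertionSort(); // best for tiny lists
            } else {
                mergeSort();     // better for large ones
            }
        }

        private void insertionSort() { /* ... */ }
        private void mergeSort()     { /* ... */ }
    }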
Inlining of hot, small methods. Call-site specialization (for virtual calls, reflective calls and invokedynamic calls). There are others, but I'm not an expert on HotSpot.
HotSpot does some optimizations based on runtime information [1], but no numbers are given there. For .NET, Microsoft built SPUR [2]; the paper has a performance evaluation section, although I just skimmed it and am not sure it contains the relevant comparisons.
IIRC Java Numerics was a dead end precisely because the required optimizations went against the 'write once, run anywhere' ideal. I have heard of specialized Java runtimes that can do interesting things to approximate BLAS/Atlas, but never seen them first hand.
I wanted to cite "Improving Java Performance Using Dynamic Method Migration", Lattanzi 2004, but the site hosting the paper isn't loading at the moment.
Forcing all objects to be heap allocated, which makes things like Optional introduce even more memory indirection, because thrashing your cache is totally fine.
No primitives in generics, which means no primitives in containers.
Which then leads to the fun of autoboxing, so that your bools can be true, false, or null!
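For example (a contrived sketch), auto-unboxing a missing Boolean blows up at runtime:

    import java.util.HashMap;
    import java.util.Map;

    public class Autobox {
        public static void main(String[] args) {
            Map<String, Boolean> flags = new HashMap<>();
            // get() returns null because generics can't hold a primitive
            // boolean, so the "third value" is always lurking.
            Boolean enabled = flags.get("missing");
            if (enabled) { // auto-unboxing null throws NullPointerException
                System.out.println("on");
            }
        }
    }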
Over the last few years I have developed templates/scaffolding for my Java projects based on my typical Java usage for small projects. It is mainly inspired by Go and, I fear, hopelessly outdated. It does not use Maven/Gradle or any of the latest Java functional patterns, etc. I have an Ant build, bash scripts, and a few Java files.
This whole setup helped me develop a surprisingly large number of useful tools and apps, which proved successful enough that snippets of my code and patterns were copied into proper 'enterprise class' projects at our company.
I feel like it's coming back as the "just use this" choice after the flashes in the pan of the last few years. C# seemed like it had a shot at the throne, but the ecosystem is in deep disarray these days, tooling-wise.
I use C# at work; the syntax is better than Java's, but there's one huge issue: good luck getting it to work without Windows and Visual Studio.
.NET Core is still very immature; we had to back out every time we tried to use it in production. Maybe in a few years it will be good.
Java on the other hand has support for many IDEs, package managers, web servers, JVMs, and OSes. You can swap out pretty much everything; no vendor lock-in issues at all. The open source community is far stronger as well. So many times I wanted to use a cool database and found out there's no official C# client.
I really like it as well. I started to use Kotlin lately, which interops with Java seamlessly and addresses a lot of Java's warts without wanting to be more than it is: Turbo Java.
- Oracle has its hands on it and does questionable things like suing Google.
- Everything Java does, C# does as well or better. In general, for everything in Java you could easily find a better way to do it.
- Java has lots of warts, many kept so the language stays backwards-compatible (switch-case, enums, UTF-16 encoding, generics, null, the difference between objects and primitive types, ...).
- Java is tied to old technologies. There is no standard JSON library, yet even XSLT is included in Java SE.
- You need to use Java with an IDE. Eclipse is horrible to use (even scrolling lags here) and IntelliJ costs money.
Still, Java is a mature and solid language. There is just nothing to hype about it.
You must not have used Eclipse for a while :). I use Visual Studio at work and Eclipse at home, and they're comparable these days. I agree that years ago Eclipse was awful.
- Every language has switch-case and enums? I don't get what you're trying to say there.
- Generics are fine?
- Every language supports null on objects?
- Nearly every language treats nullable and primitive types differently for performance reasons.
- Java has three popular JSON parsers and they're all faster than .NET's built-in serialization. To get similar performance you need to use Newtonsoft, which isn't built into C# either :).
- You don't need an IDE, it's just stupid to develop without one because they're so helpful. Nothing is stopping you from running javac on the command line.
You can't do a LOT of things Java can do in C#, because third-party library support pales in comparison. A lot of databases and open source software don't have C# clients. If you're dealing with big data or ML you'll find almost nothing in C# land.
I feel like you haven't used Java in a long time. Things are much better than in the Java 6 days.
Maybe I should expand a bit on my points, as the problems do not seem so apparent.
- Java has the same horrible switch-case with the error-prone break as C. If you want exhaustive matching on enums, you will end up with a useless default case (see the sketch after this list). If you have ever worked with a programming language with pattern matching, you will feel the pain.
- This might be a bit opinionated but I think enums are not enough. Sum types/tagged unions/variant types/disjoint unions/whatever you like to call them are pretty useful.
- Generics in Java are highly limited. Part of that is because generics were added as an afterthought and are implemented using type erasure. Both functional programming languages like Haskell and imperative programming languages like C++ or Rust offer more powerful generics that can sometimes help abstract things more elegantly.
- The problem is that all objects are nullable by default and you cannot specify that e.g. parameters or results are never null. This leads to boilerplate null checking and missed handling of null cases. Everybody who has touched Java has probably seen quite some amount of NullPointerExceptions. Kotlin, for instance, offers types that cannot be null by default; TypeScript has this if you turn on a compiler option.
- You can offer pretty much everything you offer for objects for primitive types as well. In fact, this is what Project Valhalla is attempting with value types and specialisation.
- My comment on IDEs was more about the need for an IDE being bigger with Java than with, for instance, C, while Eclipse is often annoying. I am currently using Eclipse daily because I work on some Java code for my Master's thesis.
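To illustrate the first point, a small sketch (the enum and method are made up) of the break-and-default dance Java forces on an exhaustive enum switch:

    enum Color { RED, GREEN, BLUE }

    class Switches {
        static String describe(Color c) {
            String label;
            switch (c) {
                case RED:
                    label = "warm";
                    break; // forget this and you silently fall through
                case GREEN:
                case BLUE:
                    label = "cool";
                    break;
                default:
                    // Required for definite assignment even though every
                    // constant is covered, and a Color added later lands
                    // here at runtime instead of failing to compile.
                    throw new IllegalStateException("unhandled: " + c);
            }
            return label;
        }
    }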
The big drawback with C# is being handcuffed to the Microsoft ecosystem, and that will probably still be true for a while. I also wonder how the Java ecosystem of tooling and available libraries compares to C#'s.
After I switched over from Eclipse to IntelliJ, I think the IDE tooling is pretty much equal nowadays. I used ReSharper in VS, so in many ways you are using the same thing.
As others have said the open source community for Java is still leaps ahead of C#. It's not often you find a library which doesn't have a Java connector/implementation. For C# on the other hand you tend to be much more limited in your options.
I guess this used to be largely due to the platform dependence of C# (and probably still is). Hopefully .NET Core can help with that, but it's not ready for prime time yet IMO.
IntelliJ is free for the Community Edition, but that one is limited in its features. If you're doing anything web, you'll want the paid-for edition. However, if you're just working on some Java SE, you can use the free version.
> Everything Java does, C# does as well or better.
It's really true. It's so refreshing switching from Java to C#. I mean, they're both statically-typed-OOP-garbage-collected-bullshit-enterprise languages, but if you're gonna go with one of those, C# is far more pleasant than Java.
It has added some half-assed, OK-ish stuff to a language that is still massively hard to use and super ceremonious, and all that for not a lot of security.
Yes, it is better than a lot of things, but that is also a super low standard. It gets things done if you are ready to deal with all its problems... but it is still the child of its history, and it shows.
Java is easy to use once you understand it. I can code Java just as fast as C# or Python; in fact, C# and Java are close enough to be mutually intelligible. The ceremony is all optional crap that most people don't worry about; it happens to any language used a lot in corporate settings.
The security issues were mostly in Java applets. The language itself is pretty damn secure. I can't remember the last time I saw a web exploitable issue in the JVM.
By security, I was thinking more of dealing with exceptions and errors. I should have said fault tolerance.
And I was not comparing Java to C# or Python here, but that is probably due to my personal background, which is more in OCaml, Erlang, C, Rust, and co.
compute and computeIfPresent are also pretty damn useful (especially in the "atomically modify a value, then delete it if it meets a condition" kind of scenarios)
Wait, it had a concurrenthashmap but no way to "get or add"? Is a concurrent hash map even useful without that? (I suppose you could always write your own with some read/write locking but...wow)
ConcurrentMap has always had putIfAbsent(). That requires the thing that you're putting to have been constructed prior to the call.
computeIfAbsent() allows you to pass in a lambda (a Function, really) which will only be called if necessary, thus avoiding a potentially unnecessary object creation.
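A small sketch of the difference (the map contents are arbitrary):

    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class LazyVsEager {
        public static void main(String[] args) {
            ConcurrentMap<String, List<String>> map = new ConcurrentHashMap<>();

            // putIfAbsent: the list is constructed on every call, even if
            // the key is already present and the fresh list gets discarded.
            map.putIfAbsent("k", new CopyOnWriteArrayList<>());

            // computeIfAbsent: the Function only runs if the key is absent,
            // so the (possibly expensive) construction is skipped otherwise.
            map.computeIfAbsent("k", key -> new CopyOnWriteArrayList<>()).add("v");

            System.out.println(map); // {k=[v]}
        }
    }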
For people interested in functional programming, I think Scala still has the following things going for it:
- Built-in types (immutable persistent collections, Try, etc.). You can get these in Java with http://www.javaslang.io/ but it's a relatively new library.
- A concise syntax for creating value types without using something like Lombok.
- Mature libraries (Scalaz, Monocle, shapeless, etc.) for people interested in more sophisticated functional patterns.
Case classes, default arguments, type inference, Spark, an ecosystem that's already moved off null and off exceptions, a unified type hierarchy with value types, a much nicer async API, higher-kinded types. Just generally replacing all the magic annotations and XML with plain code. I mean, if most of the good stuff in Java 8 is taken from Scala, then why not get a head start on the kind of stuff that will be in Java 9/10/11?
I currently use Java for work, and I cry a little inside every time I see big hashCode() methods, toString() methods, and a chain of getXXX()/setXXX() methods when I know that a simple case class statement would have been all that was needed.
For the sake of comparison, the Scala equivalent of those 8 lines is:
case class Data(id: Int, name: String)
and comes with all the other things merb mentioned. And it's unlikely to be any slower in practice, since the JIT can inline the accessors (not that method calls are ever likely to be a noticeable overhead anyway).
I've found that Project Lombok solves this problem for me. Simply add a @Data or @Value annotation and it will generate getters/setters/hashCode/toString, etc.
https://projectlombok.org/features/Value.html
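For example, a minimal sketch of a Lombok value class (roughly the Java counterpart of the Scala one-liner above):

    import lombok.Value;

    // Lombok generates the constructor, getters, equals(), hashCode()
    // and toString() at compile time from this single annotation.
    @Value
    public class Data {
        int id;
        String name;
    }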
I started with Lombok but found that you're effectively writing a different language from Java anyway: all your tools (IDEs, code coverage, binary compatibility checking, build infrastructure...) need to support it. So it's just as much effort as Scala, and gives you a lot less in return.
Scala still has several differences from Java, including a stronger type system, traits, pattern matching, and implicits (love them or hate them). Java is definitely closing the gap on some of the functional aspects, though.
Edit: Where did the basic docs on type bounds in Scala go?
Java will never encourage or make functional programming comfortable. The current attempts are like putting lipstick on a pig, with an attitude of anti-intellectualism, starting with Optional breaking the functor laws.
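For the curious, here's a small sketch of the functor-law breakage: because Optional.map() treats a null result as empty, mapping two functions separately can differ from mapping their composition.

    import java.util.Optional;
    import java.util.function.Function;

    public class FunctorLaw {
        public static void main(String[] args) {
            Function<Integer, String> f = i -> (i == 0) ? null : "nonzero";
            Function<String, Integer> g = s -> (s == null) ? -1 : s.length();

            // The composition law says these two should always be equal:
            Optional<Integer> stepwise = Optional.of(0).map(f).map(g);     // empty
            Optional<Integer> composed = Optional.of(0).map(f.andThen(g)); // Optional[-1]

            System.out.println(stepwise.equals(composed)); // false
        }
    }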
Scala is one of the very few languages that make functional programming comfortable, and is among the handful able to do higher-kinded polymorphism and to encode type classes. You can probably count on one hand the number of languages in which you can express Haskell's powerful FP abstractions, and Scala is one of them.
You cannot express libraries such as those in Typelevel (Cats, Shapeless, etc) or Scalaz in languages like Java, C# or F# for that matter. The only other comparable languages in expressivity, maturity and potential are OCaml and Haskell, with OCaml being very similar in spirit, but less popular (and F# is no OCaml ;)).
Of course, you will never feel this unless you actually start using the language, getting past the initial hello world and Javaisms. This problem was dubbed the "blub paradox" by Paul Graham: you can only notice languages, abstractions and paradigms inferior to what you currently know, but you can't easily notice superior ones, unless you make an effort to learn more. The great thing about Scala is that it allows a gradual migration, and although this expressivity can be seen as a weakness, it's also why Scala is probably the most popular FP language.
> This problem was dubbed the "blub paradox" by Paul Graham: you can only notice languages, abstractions and paradigms inferior to what you currently know, but you can't easily notice superior ones, unless you make an effort to learn more.
Java will never catch up to Scala; having to maintain backwards compatibility for decades-old code bases alone is anchor enough to keep Java in place relative to Scala.
And Scala itself is evolving: the new compiler, Dotty, brings a host of new features [1], including union types, implicit functions, trait parameters, etc., all while improving compilation speeds and streamlining compiler internals. Add Scala Meta (which overhauls the old macro system) and Scala Native joining Scala.js as alternative, non-JVM Scala targets, and you have Java 28 ;)
That said, Scala will likely remain a niche language; Java is king, and slow and steady wins the race in the enterprise.
> Generics in Java are almost entirely syntactical sugar (due to type erasure).
Well, from that point of view, static typing in general is just syntactic sugar. In fact, type erasure is one of the best things going for Java, because they haven't screwed up the runtime for other languages (e.g. Scala, JRuby, Clojure). Ironically, it is the JVM that turned out to be the multi-language VM, with the CLR having only languages that basically share C#'s type system.
No, the problem with Java generics is that covariance/contravariance is specified at the call site with wildcards, which are awkward and hard to reason about, versus at the declaration site in Scala and C#.
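A quick sketch of what call-site variance looks like in practice (the method is made up):

    import java.util.List;

    class Variance {
        // Use-site (call-site) variance: every method wanting covariance
        // must spell out the wildcard itself...
        static double sum(List<? extends Number> xs) {
            double total = 0;
            for (Number n : xs) {
                total += n.doubleValue();
            }
            return total;
        }
        // ...whereas with declaration-site variance (e.g. Scala's
        // 'Seq[+A]' or C#'s 'IEnumerable<out T>') the type declares its
        // variance once and call sites stay wildcard-free.
    }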
Scala also has higher-kinded types, one of the few languages that actually do. In combination with making it possible to encode type classes, by means of plain traits along with implicits, you can express Haskell's powerful abstractions in Scala. You don't see libraries like Cats or Scalaz in other languages like Java or C#, or F# for that matter, because they can't be expressed in languages without higher-kinded types.
And they're also adding type specialisation and reified generics as part of the value types work, somewhat similar to what's available in C++ (but a bit less crazy).
Declaration-site variance is good, except that for Java I have doubts, because it would be added on top of what the language currently has.
Specialization for value types != reification, and from what I understood of their proposal they are not introducing reification: for one, they still have backwards-compatibility concerns, but they also don't want to screw other languages. I hope those plans haven't changed.
Java still has a ways to go before it catches up with Scala's features. Here are a couple:
I'm sure others have said it, but type aliases can't be done in Java. For instance, this is useful for seeing 'PersonId' instead of 'String': more semantically meaningful names. You can emulate it in Java, but you incur a runtime cost and have to write a wrapper class (see the sketch below).
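For instance, a sketch of the wrapper-class workaround (PersonId is the hypothetical alias):

    // Emulating 'type PersonId = String' in Java: type-safe, but every
    // instance is an extra heap object the real alias wouldn't cost.
    public final class PersonId {
        private final String value;

        public PersonId(String value) { this.value = value; }

        public String value() { return value; }
    }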
Java is still missing a huge amount of what makes Scala great. Most importantly: case classes, pattern matching, more expressive types, and traits (multiple inheritance).
It caught up a lot, but it's still a jump like IE7 was from IE6 compared to the Firefox of Scala. A hell of a lot better, but still lacking.
On the other hand, one could say: since Java is slowly incorporating all of Scala, why not just use Scala now? No waiting for jankier implementations of the same concepts.
I got used to the syntax pretty fast, to the point that I actually kind of like it. My complaint about streams is how slow they can be. If you're doing collection transformations often, the time spent turning your objects into streams can kill your performance. I'm worried it's going to reinforce the old myth that functional programming is inherently slow.
That's one reason (of many) that we use Eclipse Collections, formerly known as GS Collections. :) It also closes gaps in the JCL, like immutability vs. mutability, and makes some nice improvements to the interfaces.
I now even use it in personal projects in place of the JCL wherever I can.
The thread-safe implementation of computeIfAbsent in ConcurrentHashMap is the real improvement. "computeIfAbsent" on a non-concurrent Map is merely syntactic sugar.
FWIW, there has been a thread-safe putIfAbsent(K, V) since ConcurrentMap was introduced in 1.5. computeIfAbsent just adds the additional property of avoiding possibly computing the value more than once if there is a race in:
    if (!map.containsKey(key)) {
        // Race window: another thread can insert between the containsKey
        // check and the putIfAbsent call, in which case compute(key) ran
        // for nothing.
        V value = compute(key);
        V existing = map.putIfAbsent(key, value);
        if (existing != null) {
            // someone else won the race; our freshly computed value is discarded
        }
    }
Neither is 'merely syntactic sugar', take a look at their respective implementations. It's hardly surprising that a method on a class named 'ConcurrentHashMap' in 'java.util.concurrent' provides certain concurrency-related guarantees. It's the point of the whole thing and is written on the tin.
The default Java 8 Map implementation is merely: "get X; if X is absent, compute X and put it." These lines of code have been written over and over by anyone using a Map in Java, no doubt. It is entirely trivial and almost impossible to get wrong writing it out on your own.
Whereas writing a performant concurrent "computeIfAbsent" is extremely non-trivial and if you try to do it yourself, you have a high chance of getting it wrong or slow.
Therefore, CHM "computeIfAbsent" is new and exciting, and Map "computeIfAbsent" is "couldn't care less".
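Roughly, the non-concurrent default amounts to this (a simplified sketch in the spirit of the java.util.Map default method, not the exact source):

    import java.util.Map;
    import java.util.function.Function;

    class Sketch {
        // Trivially easy to hand-roll, and not atomic: two threads can
        // both see null and both compute.
        static <K, V> V computeIfAbsentSketch(
                Map<K, V> map, K key,
                Function<? super K, ? extends V> mappingFunction) {
            V v = map.get(key);
            if (v == null) {
                V newValue = mappingFunction.apply(key);
                if (newValue != null) {
                    map.put(key, newValue);
                    v = newValue;
                }
            }
            return v;
        }
    }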
What is it about Java that gives us so many variants of each method instead of more composable primitives?
I'm thinking of:
void forEach(long parallelismThreshold, BiConsumer<? super K,? super V> action)
<U> void forEach(long parallelismThreshold, BiFunction<? super K,? super V,? extends U> transformer, Consumer<? super U> action)
I get why we have function, bifunction, consumer, biconsumer, and all that. What I don't understand is why we have a special method for applying a function before passing to the consumer. It seems like the minor convenience in terms of syntax is outweighed by the proliferation of method signatures.
Well, what's the alternative? In a more functional language we have two options: a map() function we apply first, calling forEach() on the result, or function composition. The problem with the map() function is that we have to return a ConcurrentHashMap, which means we're effectively making a copy of the data, which isn't space efficient. The function composition option is more tenable (in Java, this is the andThen() or compose() methods, which are duals), but kind of ugly to write out if both the transformer and the action are lambda expressions. Truth be told, I don't see the disadvantage of having extra method signatures: their usage is optional, and I imagine they have default implementations that refer to each other using function composition on the back end.
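For what it's worth, the manual composition the overload saves you is about a one-liner (a sketch; 'fused' is a made-up helper):

    import java.util.function.BiConsumer;
    import java.util.function.BiFunction;
    import java.util.function.Consumer;

    class Compose {
        // Fuse a transformer and an action into a single BiConsumer,
        // i.e. what forEach(threshold, transformer, action) does for you.
        static <K, V, U> BiConsumer<K, V> fused(
                BiFunction<? super K, ? super V, ? extends U> transformer,
                Consumer<? super U> action) {
            return (k, v) -> action.accept(transformer.apply(k, v));
        }
    }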
There's also one annoyance that neither function composition nor extra method signatures solve, which is boxing primitives. The transformer has to return an object, and the action has to accept one, so an example like forEachValue(4, List::size, ...) will generate a bunch of boxed Integers. Hopefully the JIT will notice this and try to elide the allocations, but I don't know how much you can trust the JIT to figure that out. The best thing to do is to manually compose the operations yourself, but that isn't always possible or convenient.
To be fair, there are only 3 signatures for the forEach() method; the reason that section is so noisy is because it's followed by forEachEntry(), forEachKey(), and forEachValue(), each of which has two signatures. You technically don't need those other methods, but they are convenient to have. I find "map.forEachValue(1, Foo::bar)" clearer than "map.forEach(1, (k, v) -> Foo.bar(v))".
Manual composition can get ugly if the argument is used multiple times, especially if the function is non-trivial. Say something like

    map.forEach(1, (k, v) -> Expensive.func(k, v) * Expensive.func(k, v))

which is going to be less efficient, doubly bad if that function has side effects, and noisy on top of it. You could assign the value to a local variable inside the lambda, which gives you
map.forEach(1, (k, v) -> { int x = Expensive.func(k, v); return x * x; })
which fixes the efficiency and correctness problems, but is a bit too verbose for me. There's probably other situations, that was just the first one I thought of.
Honestly, though, it all comes down to taste and the situation. I personally would use manual composition except in circumstances like my example above.
The Java world tends to be pretty conservative and slow-moving compared to newer, trendier languages.
So while it's been out for some time, a lot of enterprise code bases are only just now starting to move to Java 8, and developers are only now really getting their teeth into these sorts of features.
Also, Java 8 was a pretty huge release. Even experienced developers are no doubt discovering little new interesting features and tricks all the time.
Google just finished moving most things to Java 8 a few months ago; I imagine other large companies are similar.
It can take a while to move to new major language versions, especially when they have major new syntax changes; toolchains and IDEs take a long time to mature. Java 4 -> 5 was huge, 5 -> 6 was very minor, 6 -> 7 medium (string switch statements, try-with-resources, and multi-catch), but 7 -> 8 is another huge change. My guess is that Java 9 will be adopted much quicker than 8, since its changes are not going to be nearly as big.
Is it though? It seems to boil down to "like every other collection, ConcurrentHashMap got HOFs/Stream extensions, with the twist of parallelism hints". Respectfully, that's a bit thin to build an article on: these features are 3 years old.
Most of the methods mentioned in the article seem half-assed to me, especially when compared to the "natural" Java 8 functionality that's commonly available on Streams, particularly .map() and .filter().
Maybe there are optimizations, but why would you have a custom signature to add a transformer (really it's just a map()) to forEach(), when you would normally just .map().forEach()?
> why would you have a custom signature to add a transformer (really it's just a map()) to forEach()
Might have to do with the parallelism. With the transformer in the forEach call, the transformer will (presumably) be on the same thread as the consumer.
(That's just a guess--I haven't actually looked at the code.)
As far as I'm concerned, the feature in itself is not new, but I am happy to find out about things in the JVM space that I hadn't known about before. Especially since Java 8 was one of the biggest releases to date in terms of new features and functionality added to the language.