Hacker News

The funny thing is that from a purely technological point of view, Java (even the 5-year-old Java 8, and certainly recent versions) is far ahead of most other stuff hyped on HN (as well as less hyped stuff). Virtually no other platform comes close to that combination of state-of-the-art optimizing compilers, state-of-the-art GCs, and low-overhead in-production profiling/monitoring/management. And much of the cutting-edge development and technological breakthroughs on these matters continues to take place in Java (disclosure: I work on OpenJDK full-time). Just in the past few years we've seen the release of open-source GCs with less than 2ms worst-case pause times on terabytes of heap (ZGC and Shenandoah), a revolution in optimizing compilers (Graal/Truffle), and an upcoming release will add streaming access to low-overhead deep profiling (JFR). So Java is not only the safe choice for serious server-side software; it's also the bleeding edge.





This. Technically there is nothing I don't like about the JVM right now; everything that seemed impossible 15 years ago is now solved. AOT used to be a bag of hurt with GCJ (I know I could use Excelsior, though I'm not sure it was free at the time), but now even that will be a supported option via Graal.

Java the language still isn't pretty, but it has been much improved.

OpenJDK is GPL, and apart from the trademark (?) on Java (I didn't read much into the JakartaEE problem), everything should be fine. I am just a little uneasy with Oracle lurking around; I just don't know what they are going to do next.


> Technically there is nothing I don't like about JVM right now, everything that seems impossible 15 years ago is now solved.

Value types are a major missing piece in the JVM stack right now. They're at least on the roadmap, but they keep getting pushed back and back and back. I'd also argue runtime generics are another, and, perhaps more depressingly, one that is unlikely ever to be fixed.

.NET has both of them and also has the same core strengths JVM does, so given the choice I'd go with .NET over JVM 100% of the time as a result. JVM's GC & JIT seem to be on a never ending improvement cycle, but the actual language & core libraries are incredibly slow to react to anything.


> .NET has both of them

I'll give you value types, but reified generics in .NET were a mistake. It really makes interop and code sharing among languages hard, in exchange for a rather slight added convenience. This means that if you're a language implementor and you're targeting .NET, you'll get much less from the platform than you would if you target Java, which makes .NET not a very appealing common language runtime. And not only is Java a good platform for Kotlin and Clojure, but thanks to Truffle/Graal it's becoming a very attractive, and highly competitive, platform for Ruby, Python and JavaScript. All of that would have been much more difficult with reified generics.

Also, I don't think value types in Java are being "pushed back." The team is investing a lot of work into them as it's a huge project, but AFAIK no timeline has been announced.

> and also has the same core strengths JVM does

I don't think so. Its compilers are not as good, its GCs are not nearly as good, and its production profiling is certainly not as good (have you seen what JFR can do?); and its tooling and ecosystem are not even in the same ballpark.


I see it as a double-edged sword.

On the one hand, reified generics means that it's .NET's object model or the highway.

On the other hand, .NET maintains a much higher standard of inter-language interoperability. When I was on .NET, I didn't have to worry much about the folks working in F# accidentally rendering their module unusable from C#. Now that I'm on the JVM, I've accepted that it's just a given that the dependency arrows between the Java and Scala modules should only ever point in one direction.


It's not .NET vs. Java so much as whether the development of the two languages you mention is coordinated. For Kotlin and Clojure, interop works both ways; Scala doesn't care much about that, so Scala developers write special facades as Java-language APIs. There are lots of different languages on the JVM, and they care about different things. The Java team itself develops only one language, although at one time there was another (JavaFX Script). Some language developers (Kotlin, JRuby) work closely with the Java team, and others (Scala) hardly ever do.

Dumb question: why do reified generics make interop challenging? Or is it reified generics plus value types that don't inherit from System.Object? Couldn't the language implementations basically pass around ICollection<Object> in .NET, somewhat similar to how they do in Java?

> Why do reified generics make interop challenging?

Suppose you have types A and B, such that A <: B. What is, then, the relationship between List<A> and List<B>? This question is called variance, and different languages have very different answers to this, but once generics are reified, the chosen variance strategy is baked into the runtime.
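To make the contrast concrete: with Java's erased generics, the runtime doesn't fix a variance strategy; each use site picks one with wildcards. A minimal sketch (class and method names are mine, for illustration):

```java
import java.util.List;

public class Variance {
    // Use-site covariance: any List<T> with T <: Number is accepted,
    // because the erased runtime representation is just "List".
    static double sum(List<? extends Number> xs) {
        double total = 0;
        for (Number n : xs) total += n.doubleValue();
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(List.of(1, 2, 3)));   // Integers
        System.out.println(sum(List.of(1.5, 2.5)));  // Doubles, same method
    }
}
```

Since nothing about the variance choice is baked into the runtime, a different JVM language is free to make a different choice (e.g. Kotlin's declaration-site `out`) over the very same erased classes.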

> Couldn't the language implementations basically pass around ICollection<Object> in .NET somewhat similar to how they do in Java?

They could, but then this adds significant runtime overhead at the interop layer. For example, a popular standard API may take a parameter of type List<int>. How do you then call it from, say, JavaScript, without an O(n) operation (and without changing JS)?
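A rough Java analogue of that cost: crossing from a primitive representation into a generic collection forces an O(n) boxing copy. (This sketch is mine; the .NET case the parent describes is the mirror image, where a reified List<int> has a specialized layout that other languages' lists don't share.)

```java
import java.util.ArrayList;
import java.util.List;

public class InteropCost {
    // Adapting a primitive int[] to a List<Integer>: O(n) time and
    // n extra heap objects, all paid at the interop boundary.
    static List<Integer> box(int[] raw) {
        List<Integer> out = new ArrayList<>(raw.length);
        for (int v : raw) out.add(v);
        return out;
    }
}
```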


> This question is called variance, and different languages have very different answers to this, but once generics are reified, the chosen variance strategy is baked into the runtime.

Which, realistically, is probably the only principled way to do things if you want to be doing much with variant generics in a cross-language way.

The Java way ("I pick my variance strategy, you pick yours, and we'll both pass everything around as a List<Object> at runtime and just hope that our differing decisions about what their actual contents are allowed to be never cause each other any nasty surprises") is not type-safe, and those surprises do show up at run time. It's easier, sure, but easier is not necessarily better.
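Those surprises are easy to reproduce in Java itself, where erasure plus raw types lets ill-typed contents sneak into a collection, and the failure only surfaces later, at the read site (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class HeapPollution {
    @SuppressWarnings({"unchecked", "rawtypes"})
    static boolean surprise() {
        List<String> strings = new ArrayList<>();
        List raw = strings;  // erasure: same object, no runtime element type to enforce
        raw.add(42);         // compiles (with a warning) and "succeeds"
        try {
            String s = strings.get(0);  // the failure happens here...
            return false;
        } catch (ClassCastException e) {
            return true;                // ...far from the add() that caused it
        }
    }

    public static void main(String[] args) {
        System.out.println("surprise at read site: " + surprise());
    }
}
```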


Except, realistically, of all the polyglot runtimes, the ones that have good interop erase generics, and the one that doesn't reifies them.

The problem is that your GenericClass<T1> and GenericClass<T2> are really more like GenericClass_T1 and GenericClass_T2 with their own distinct type definitions and interfaces. From the perspective of a different runtime/language trying to interop with these types, you have to somehow understand and work with this mapping game. It's much easier from inside the .NET runtime than outside.

The general solution is, like you suggested, to avoid using reified generics in the module interface where the interop happens.


The solution I remember from the last time I dealt with Python on .NET (which was admittedly a long time ago) was the opposite - you did use the reified generics, and there were facilities to create an instance of GenericClass<TWhatever> from within Python. There's a whole dynamic language runtime that is purpose-built for smoothing over a lot of that stuff.

What wouldn't work would be to, e.g., create a Python-native list and try to pass it into a function that expects a .NET IList<T>. Which doesn't feel that odd to me - they may have the same name, but otherwise they're very different types that have very different interfaces.

That said, the Iron languages never took off. My personal story there is that all the new dynamic features that C# got with the release of the DLR pretty much killed my desire to interact with C# from a dynamic language. The release that gave me Python on .NET also turned C# itself into an acceptable enough Python for my needs.


.NET Value types do inherit from System.Object.

See: https://docs.microsoft.com/en-us/dotnet/api/system.valuetype...


Is a value type different from a value class?

https://github.com/google/auto/blob/master/value/userguide/i...


Yes, those value classes are still heap/GC allocated objects.

Value types are a generalisation of `int`, `long`, `float` (etc.), where values are stored inline, not allocated on the heap. For instance, a `Pair<long, double>` that isn't a pointer, but is instead exactly the size of a long plus a double, the same as writing `long first; double second;` inline.
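Until value types land, that flat layout can only be simulated by hand on the JVM, e.g. with parallel primitive arrays instead of an array of boxed pairs. A sketch of the workaround (names mine):

```java
// An "array of Pair<long, double>" without the pointers: each component
// is stored inline in a primitive array, as a flattened value array would be.
public class FlatPairs {
    final long[] firsts;
    final double[] seconds;

    FlatPairs(int n) {
        firsts = new long[n];
        seconds = new double[n];
    }

    void set(int i, long first, double second) {
        firsts[i] = first;
        seconds[i] = second;
    }

    double sumSeconds() {
        double total = 0;
        for (double d : seconds) total += d;  // cache-friendly linear scan
        return total;
    }
}
```

The point of value types is that the natural `Pair`-based code would get this memory layout automatically, without the manual bookkeeping.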


.NET is tied to Microsoft, so I'd avoid it 100% of the time.

Yes, yes. I know that Microsoft theoretically open sourced and ported it. However the way that this always works is that there is a base that can be written in, but anything non-trivial will have pulled in something that, surprise surprise, is Windows only.

Otherwise I agree that it is a better Java.


They didn't "theoretically" open source it - they actually open sourced it.

I get why people used to shit on Microsoft, but Microsoft has demonstrated over a number of years that it's changed under Satya.

> However the way that this always works is that there is a base that can be written in, but anything non-trivial will have pulled in something that, surprise surprise, is Windows only.

Outside of desktop GUIs, this is simply not true. I'm writing complex, cross-platform systems that work just fine on Windows and Linux (and would on MacOS if I chose to target it).

Hell, even a lot of the tooling is now cross-platform: Visual Studio Code, Azure Data Studio, even Visual Studio and Xamarin run on MacOS!


No, Visual Studio does not run on macOS. Visual Studio for macOS is a fork of MonoDevelop.

So because it's not the same code base, but it is produced by the same company with the same purpose, it's not "Visual Studio"? Was Photoshop not Photoshop on Windows when the assembly-language optimizations were different between PPC and x86?

I don’t think this is true anymore.

The .NET 5 announcement was very clear that .NET Core is the future, and it's been a while since you've needed anything Windows-only to build a non-trivial .NET Core application.


Please double check my wording.

Yes, you can write a non-trivial .NET application on Linux. But if you take a non-trivial .NET application that runs on Windows, the odds are low that it can easily be ported to Linux. And there are almost no non-trivial .NET applications that weren't originally written for Windows.

The result is that if you work with .NET, you're going to be pushed towards Windows.


I think the wording of the announcement (taking them in good faith) applies to applications using .NET Framework. .NET Core should be 100% portable to Linux/Mac/wasm.

.NET 5 should supersede both Core and Framework IIRC


I have been hearing announcements about how Microsoft was working to make .NET code portable ever since Mono was first started in 2001.

In the years since, I've encountered many stories that attempted to make use of that portability. All failed.

I've seen the promise of portability with other software stacks, and know how hard it is. I also know that taking software that was not written to be portable, and then making it portable, is massively harder than writing it to be portable from scratch.

So, based on both the history of .NET and general knowledge, I won't believe that .NET will actually wind up being portable until I hear stories in the wild of people porting non-trivial projects to Linux with success. And I will discount all announcements that suggest the contrary.


> I have been hearing announcements about how Microsoft was working to make .NET code portable ever since Mono was first started in 2001.

Microsoft didn’t have anything to do with Mono until 2016.

If you start from scratch with a greenfield .Net Core project there aren’t really any issues getting it to work cross platform.


You are no more “pushed toward Windows” with new code you write with .Net Core than you are if you use any other cross-platform language.

While I’ll agree that anything that uses any of the advanced features of Entity Framework and MVC is not trivially ported.


When you're trying to write new code, you run into a problem and look for a library that solves it. But all of the libraries that you find are Windows First, and it is not until after you're committed to them that you sometimes discover how they are Windows Only.

So yes, even in a new project, there will be a pull back to Windows. Because virtually nothing is truly written from scratch.


It’s no more of a problem with .Net Core than it is with Python modules that have native dependencies, or Node modules with native dependencies.

You’re not going to mistakenly add a non .Net Core Nuget package to a .Net Core project. It won’t even compile.

Of course you can find Windows only nuget packages for Windows only functionality like TopShelf - a package to make creating a Windows Service easy. But even then, I’ve taken the same solution and deployed it to both a Windows server and an AWS Linux based lambda just by configuring the lambda to use a different entry point (lambda handler) than the Windows server.

You can even cross compile a standalone Windows application on Linux and vice versa.

I use a Linux Docker container to build both packages via AWS CodeBuild.

Would you also criticize Python for not being cross platform because there are some Windows only modules?

https://sourceforge.net/projects/pywin32/


Looking at the announcement it seems they're basically folding all of the Windows-specific stuff back into .net core. Isn't that just going back to a compatibility minefield?

> the actual language & core libraries are incredibly slow to react to anything.

async/await being the obvious example. Still no sign of it in the Java language, nor any expectation of it, unless I missed something.
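What Java offers instead today is explicit composition on CompletableFuture: each "await" becomes a callback stage rather than a language keyword. A toy sketch (names and values are placeholders):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncStyle {
    // The async/await-shaped pipeline, spelled as chained stages:
    // each thenApply plays the role of the code after an `await`.
    static CompletableFuture<Integer> doubledLength(String s) {
        return CompletableFuture
                .supplyAsync(() -> s)       // stand-in for an async fetch
                .thenApply(String::length)
                .thenApply(len -> len * 2);
    }

    public static void main(String[] args) {
        System.out.println(doubledLength("hello").join());
    }
}
```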


https://wiki.openjdk.java.net/display/loom/ which is superior to async/await, if I may say so myself (I'm the project lead)

That's very nice, but saying it's strictly superior to async/await is a stretch. Fibers/stackful coroutines are a different approach with its own tradeoffs.

On the plus side, fibers offer almost painless integration of synchronous code, while async/await suffer from the "colored functions" problem[1].

The price you pay for that is the higher overhead of having to allocate stacks. If you don't support dynamic stacks that can be resized, your overhead isn't much better than that of native threads. There are two solutions I'm aware of, both of which Go has used at different times: segmented stacks, and reallocating and copying the stack on resize. Both carry some memory overhead (unused stacks) and computational overhead (stack resizing).

[1] http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...


> Fibers/stackful coroutines are a different approach with its own tradeoffs.

The only tradeoffs involved, as far as I'm aware, are effort of implementation. There are no runtime tradeoffs.

> The price you pay for that, is the higher overhead of having to allocate stacks.

You have to allocate memory to store the state of the continuation either way. Some languages can choose not to call the memory required for stackless continuations "stacks" but it's the same amount of memory.

> Both carry some memory overhead (unused stacks) and computational overhead (stack resizing). Their "advantage" is that, because they're inconvenient, people try to keep those stacks shallow.

Stackless continuations have the same issue. They use what amounts to segmented stacks. "Stackless" means that they're organized as separate frames.


Great article, thanks. Some perhaps-silly questions:

1. Is it possible to inspect the state of a 'parking' operation, the way you can in .Net with Task#Status?

2. So fibers run in 'carrier threads'. Is there a pool of carrier threads, or can any thread act as a carrier? I'm thinking of .Net's model, where this is configurable (ignoring that .Net 'contexts' aren't exactly threads) by means of Task#ContinueWith() and the Scheduler class. I take it from the following snippets that fibers can only run on the thread where they were created:

> starting or continuing a continuation mounts it and its stack on the current thread – conceptually concatenating the continuation's stack to the thread's – while yielding a continuation unmounts or dismounts it.

And also:

> Parking (blocking) a fiber results in yielding its continuation, and unparking it results in the continuation being resubmitted to the scheduler.

On a non-technical note, how do OpenJDK projects feed back into the Java spec and Oracle Java?


1. Yes.

2. Configurable. A fiber is assigned to a scheduler, which is, at least currently, an Executor (so you can implement your own).

> I take it from the following snippets that fibers can only run on the thread where they were created

No. A fiber has no special relationship to the thread (or fiber) that created it, although now there can be a supervision hierarchy thanks to structured concurrency: https://wiki.openjdk.java.net/display/loom/Structured+Concur...

> OpenJDK projects feed back into the Java spec and Oracle Java?

OpenJDK is the name of the Java implementation developed by Oracle (Oracle JDK is a build of OpenJDK under a commercial license). Projects are developed in OpenJDK (for the most part by Oracle employees because Oracle funds ~95% of OpenJDK's development, but there are some non-Oracle-led projects from time to time[1]) and are then approved by the JCP as an umbrella "Platform JSR" for a specific release (e.g. this is the one for the current version: https://openjdk.java.net/projects/jdk/12/spec/)

[1]: E.g. the Shenandoah GC (https://wiki.openjdk.java.net/display/shenandoah) is led by Red Hat, TSAN (https://wiki.openjdk.java.net/display/tsan) is led by Google, and the s390 port (http://openjdk.java.net/projects/s390x-port/) is led by SAP.


Very neat. So it preserves the virtues of .Net's task-based concurrency, but is even less intrusive regarding the necessary code-changes to existing synchronous code.

Does it impact things from the perspective of the JNI programmer?

> OpenJDK is the name of the Java implementation developed by Oracle (Oracle JDK is a build of OpenJDK under a commercial license).

Ah, of course. I'd missed that.

> there are non-Oracle-led projects from time to time[1], and are then approved by the JCP as an umbrella "Platform JSR" for a specific release

How do they handle copyright?


> but is even less intrusive regarding the necessary code-changes to existing synchronous code.

Yes. All existing blocking IO code will become automatically fiber-blocking rather than kernel-thread-blocking, except where there are OS issues (file IO; Go has the same problem). Fibers and threads may end up using the same API, as they're just two implementations of the same abstraction.
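For the record, this is roughly the shape the model took when Loom eventually shipped as virtual threads (the API below is from Java 21; the prototype under discussion used a different Fiber API, so treat this as a sketch, not the API being described):

```java
import java.util.concurrent.atomic.AtomicReference;

public class FiberDemo {
    // Plain blocking-style code running on a lightweight thread; the
    // runtime parks it instead of blocking a kernel thread.
    static String runBlocking() {
        AtomicReference<String> result = new AtomicReference<>();
        Thread t = Thread.ofVirtual().start(() -> {
            try {
                Thread.sleep(10); // fiber-blocking, not kernel-thread-blocking
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            result.set("done");
        });
        try {
            t.join(); // ordinary Thread API: same abstraction, two implementations
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return result.get();
    }

    public static void main(String[] args) {
        System.out.println(runBlocking());
    }
}
```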

> Does it impact things from the perspective of the JNI programmer?

Fibers can freely call native code, either with JNI or with the upcoming Project Panama, which is set to replace it, but a fiber that tries to block inside a native call, i.e., when there is a native frame on the fiber's stack, will be "pinned" and block the underlying kernel thread.

> How do they handle copyright?

Both the contributors and Oracle own the copyright (i.e. both can do whatever they want with the code). This is common in large, company-run open source projects.


> a fiber that tries to block inside a native call, i.e., when there is a native frame on the fiber's stack, will be "pinned" and block the underlying kernel thread.

Doesn't this boil down to the native function blocking the thread?

How about the C API/ABI of JNI? Will there be additions there for better supporting concurrency (i.e. not simply blocking)? Or can that be handled today, with something akin to callbacks?


If the native routine blocks the kernel thread, it blocks, and if not, it doesn't. While something could hypothetically be done about blocking native routines, we don't see it as an important use case. Calling blocking native code from Java is quite uncommon. We've so far identified only one common case, DNS lookup, and will address it specifically.

Nice. So going 100% async is a real possibility?

I'm not sure what that means.

> Fibers are user-mode lightweight threads that allow synchronous (blocking) code to be efficiently scheduled, so that it performs as well as asynchronous code

Sounds a lot like Erlang BEAM processes.


Yep; or Go's goroutines. Except that the fibers are implemented and scheduled in the Java libraries; the VM only provides continuations.

How does it compare to Windows fibers?

I find this really interesting. Care to provide some comparisons?

I think that the approach done in https://wiki.openjdk.java.net/display/loom/Main is better than the async/await infrastructure.

Well, other than the fact that it only supports Linux and MacOS.

Is that true? The build instructions are for a POSIX-like environment, but I haven't actually looked to see if the actual implementation supports Windows yet.

As someone who runs Windows and Linux about equally, in differing proportions over time, I do find it disappointing that some (b)leading edge JVM and Java features don't support Windows yet.


It's a prototype. It will support Windows when it's released, and probably sooner. We're literally changing it every day, and it's hard and not very productive to make these daily changes on multiple platforms, especially as none of the current developers use Windows (this is changing soon, though).

> I do find it disappointing that some (b)leading edge JVM and Java features don't support Windows yet

Seems understandable though. Java is primarily a tool for heavyweight Unix servers, after all. (This is of course an empirical claim, and I have no source, but I'd be surprised if I turn out to be mistaken.)

Makes good sense to go with the strategy of building an industry-strength technology before investing the time to handle Windows.


I like Java. Java 8 streams are particularly interesting. It's fast, too. I took a Hadoop class (which taught Java 8 and, ironically, discouraged Hadoop use except in exceptional cases).

The hardest part that everyone struggled with was getting a Java environment up and running. Gradle, Maven, Ant... you almost need an IDE. It's almost like they don't want people using it. I stopped when I didn't have to.

Plus the acronyms. Ones I didn't know from your post:

AOT, GCJ, Excelsior, Graal, OpenJDK, JakartaEE


It’s funny, I feel the same way about web development right now.

Except web development is almost all bootstrapped from a simple npm library these days. You generally npm install and you've got all your dependencies whether it's Angular, Vue, React or pretty much any modern web frameworks. The time for a new developer to get the tooling out of the way and start looking at code is dramatically shorter for web apps than Java in my experience.

It seems like you just don't know how to properly use Maven. In my experience it is always as simple as mvn {compile, test, package, install, deploy}.

>Plus the acronyms.

GCJ and Excelsior are really niche; even people familiar with the Java ecosystem might not know them, as they were mostly used for AOT (Ahead of Time) compiling Java into a single redistributable binary in the early 2000s. I was writing an RSS web server application then and was looking into how to do client-side desktop apps... UI toolkits were a bag of hurt for Java, and I gather that is still the case today.

I think JakartaEE is really just a rebranded JavaEE.

I know Graal only because I follow TruffleRuby, a new Ruby implementation written in Java. And it has proved that optimising Ruby, once thought impossible due to its dynamic nature, is possible.


How is this any different than python or javascript? NPM, Babel, Webpack, TSC, PIP, VENV, PyPy, CPython, etc. They all have their learning curves and if you weren't in the ecosystem you wouldn't know what they meant.

> I am just a little uneasy with Oracle lurking around, I just don't know what they are going to do next.

What do you mean by "lurking"? Oracle is the company developing OpenJDK, and it will continue to do so. All our projects are done in the open, with lots of communication.


By "lurking" people mean that the executives who care nothing about open source are firmly in control, and some day may try to assert their control in ways that nobody else likes.

You may not remember incidents like why Jenkins was forked from Hudson, but Oracle is run by people driven by what they think they can get away with, and not by what is good for the projects that they have power over.


> the executives who care nothing about open source are firmly in control, and some day may try to assert their control in ways that nobody else likes.

I have no idea what Oracle may do tomorrow, but Oracle has been in control of Java for a decade, and what it has actually done is 1. significantly increase the investment in the platform and 2. open source the entire JDK. So I don't know about the next ten years, but the past ten years have been really good for Java (well, at least on the server).

> Oracle is run by people driven by what they think they can get away with, and not by what is good for the projects that they have power over.

I don't share your romantic views of multinational corporations. Corporations are not our friends, and while they're made of people, they're not themselves people, despite what some courts may have ruled. But like people, different corporations have different styles, and it would be extremely hard to call any of them "good." I have certainly never heard of one that is driven by caring (although what you do when you don't care may differ: some are aggressive with licensing, some are in the business of mass surveillance, some help subvert democracy, others drive entire industries out of business through centralization, and others still drive kids to conspiracy theories). When you look at what Oracle has actually done so far for Java, I think it has been a better, and more trustworthy, steward than Microsoft and Google have been for their own projects (Java's technical leadership at Oracle is made up of the same people who led it at Sun). And people who bet on Java ten years ago are happier now than those who bet on alternatives (well, at least on the server), despite some decisions that made some people unhappy. You can like the good stuff and be disappointed about the bad stuff without some emotional attraction to, or rejection of, these conglomerate monsters.


Your opinion is your opinion.

My opinion of the company and its products is more consistently negative than any other large company. And while you think that the non-Java world is suffering, I think you have some tunnel vision.

Let's just say that I am personally happy with my decision to stay away from Java. And the brief periods where I had to work with Java were misery. Languages have personalities as well as companies, and there is a reason that the startup world stays away from Java in droves.


> and there is a reason that the startup world stays away from Java in droves.

You may have your own case of tunnel vision. I mean, sure, there are "droves" of startups that stay away from Java (many only to turn to it later), but there are also "droves" that adopt it from the get-go.


Why don't we look for some concrete information?

According to the statistics reported by https://www.codingvc.com/which-technologies-do-startups-use-..., Java looks like the fourth most commonly used language in the startup world, and its usage is not particularly well correlated with success.

Both Ruby and Python are more popular than Java, AND are better correlated with how good the company is. Your odds of being in a successful startup are improved if you are in those languages INSTEAD OF Java.

What about from the individual programmer level? Triplebyte did an article about how programming language and hiring statistics correlate. My impression is that their programmers are mostly being hired into relatively good startups, so it is a pretty good view of the startup world. That article is at https://triplebyte.com/blog/technical-interview-performance-....

Long story short, Java was the #2 language that programmers chose, behind Python. Not so bad. But choosing Java REDUCED your odds of actually getting to a job interview by 23%. And for those who got to a job interview, it reduced the odds of actually getting hired by 31%. By contrast, Python IMPROVED those same odds by 11% and 18% respectively.

Apparently the startup world doesn't like Java developers either. You'd be far better off with Python.

Now I'm sure that you can trot out every successful Java startup out there. And there will be quite a few. But based on available data, not opinions, I did NOT express tunnel vision when I said that the startup world stays away from Java in droves.


If you truly believe any of the conclusions you've drawn from the numbers in the links you posted, then your favorite programming language REDUCES statistics skills.

Startups don't use Java because Java is for large-scale stable long- lived enterprises, not for prototyping simple small web apps that might be thrown away in a couple of years.

You often hear this, but what does it actually mean? Why is Java for one but not the other?

Here is my understanding.

Java was designed to limit the damage that any developer could accidentally do, rather than maximize the potential for productivity. Which is an appropriate tradeoff for a large team.

It is hard to get good statistics on this, but the figures that I've seen in books like Software Estimation suggest that the productivity difference is around a factor of two.

This matters because it turns out that teams of more than 8 people have to be carefully structured based on the requirements of communication overhead. (A point usually attributed to The Mythical Man-Month.) This reduces individual productivity. Smaller teams can ignore this problem. The result is that measured productivity for teams of size 5-8 is about the same as a team of 20. But the throughput of large teams grows almost linearly after that. An army does accomplish more, but far less per person.

Limiting damage matters more for large teams. Which are more likely to be affordable for large enterprises. However being in such an environment guarantees bureaucracy, politics, and everything negative that goes with that.

By contrast startups can't afford to have such large teams. Therefore they are better off maximizing individual productivity so that they can get the most out of a small team. And using a scripting language is one way to do that.


Today, I go back and forth between three languages at work: .NET, JavaScript, and Python. For simple prototype web apps, or more realistically REST microservice APIs feeding front-end frameworks, I really don't see any of them being slower to develop in.

For larger applications with multiple developers working in the same code base, the compile-time checking of static languages is a godsend. I would at least move over to TypeScript instead of plain JavaScript.


Oracle has a long track record of Sales & Marketing tactics which we can use as a reliable benchmark to predict outcomes.

Oracle will likely pursue the most aggressive strategy they can get away with for Java.

I don't believe Sun would have sued Google, but Oracle did.

The fact that Google is switching to Kotlin is mostly a means to absolve themselves of the 'Oracle risk' - it's a big change surely, a decision not taken lightly.

The future of Java under Oracle is hard to predict but there's legit concerns Oracle will make things hard.


Kotlin uses the same VM and API, so it makes no difference in this regard. It's not a big change – it's fully interoperable with Java. You can easily take a single class in a Java application and rewrite it in Kotlin, and everything continues working just as before.

Google adopted it because, as they more or less said in the announcement, it was already being adopted by the community and it hugely improved development experience.


but you're splitting your developer base: there will be people better at Kotlin, and people better at Java.

This is a problem when you are looking at hiring new people, etc. This fragmentation is going to cause issues simply because people are hedging against Oracle's future decisions.

In a perfect world Google would have bought Sun, and the current version of Java would look a lot like Kotlin.


Kotlin is a light syntax for a coding style. It's as easy for a Java dev to learn Kotlin as it is to learn Spring or Hibernate or whatever library or framework the team at your new job uses.

It's a whole other domain of things to learn, and now we're stuck with context switching all of the time.

Even if Kotlin were 'better' (and I don't think it is), it'd have to be quite a jump better.

It's a little nicer for getting ideas down quickly, but beyond that to me it's just 'different' and now a whole other bag of things to support.

If I have to choose between Kotlin+Java or just Java, I'll take just Java.

Going back to Java from Kotlin there's really nothing I miss.


I think Sun might have wanted to sue Google?

https://news.ycombinator.com/item?id=10951407


Yes, thanks for that, it stirred my recollection, as I actually bumped into Jonathan Schwartz by accident just in that era.

I don't think it was money so much as the established culture at Sun (i.e. James Gosling: "Sun is not so much a company as a debating society"). A more aggressive CEO/leadership/culture would maybe have raised the money to take on Google, or taken another tack.

So while you are right - and thanks for the reference - the issue here is what Sun was about, vs. what Oracle is about.


Whatever Sun was "about", sadly, it didn't work, and damaged some excellent technologies, like Java and Solaris, that Sun couldn't invest a lot of resources into because it no longer had them. Oracle managed to save one of them and make it thrive. Sun, as a big, impactful company, was a product of the dot-com bubble. It certainly made more lasting contributions than other bubble-era companies, but its strategy couldn't survive the crash. Maybe great ideas can be born in companies like Sun but need companies like Oracle to sustain.

I've been saying for years that Pivotal Labs is a debating club that produces code as a by-product. But now I'm wondering if I read the Gosling quip and then forgot I had.

Google made $billions from Java while Sun went nearly bankrupt, and Google is now one of the ten wealthiest companies in the USA. Oracle trying to get $ from Google on behalf of the former Sun is a different issue from your company's risk.

I do understand Oracle is paying the bill, as well as for the team working on Graal and TruffleRuby, so I am grateful for that. Thank you.

>What do you mean by "lurking"?

Referring to the API copyright push a while ago, and the JakartaEE problem that has blown up on my Twitter feeds. I understand why Oracle is trying to charge money, and I am perfectly fine with that; I just don't like that they are using API copyright as the tool. And whatever the problem is with JakartaEE this time around, I don't have time to follow it.


In a lawsuit, Oracle pushed for API's to be copywritten, not just their implementation. They also have paid lobbyists. They're also greedy assholes. The combo of greedy assholes and the ability to rewrite the law is a dangerous one.

So, I don't use a language unless it's open with patent grants and has a non-malicious owner. At this point, Wirth's stuff is probably legally the safest.


I'm curious about the last part. Are you using Modula 2 or something?

I don't have any public projects to release right now, so I don't have to worry about getting sued. Modula-2 was nice, but you could use any of Wirth's languages with low risk. Although Lisp had lots of companies involved, Scheme is probably safe, with PreScheme aiming for low-level use. A Racket dialect with the C/C++ features that the ZL language had could be extremely powerful and safe.

Rust, with Mozilla backing it, is probably not going to get you sued. Nim has potential, given that their commercial interests so far are paid development and support; the less greedy they are, the better. Languages controlled by community-focused foundations, such as Python, are probably pretty safe. Although it was risky, the D language now has a foundation. And though they have no foundation or legal protections, the Myrddin and Zig languages are being run by folks who look helpful rather than predatory.

So, there are a few examples you might consider if avoiding dependencies on companies likely to pull shit in the future. Venture-backed, publicly-traded, growth-at-all-costs, and/or profit-maximizing-at-all-costs companies are examples of what to avoid if you want to future-proof against evil managers turning a good dependency into a bad one.


> copywritten

copyrighted

"Copywritten" probably means nothing, but if it did, it would have something to with copy writing, the act of writing for publication (usually commercial, usually not long-form).

Added: FYI, "copyrighting" is not a conscious decision or an action you can take. Copyright emerges automatically when you create a work; what they've done is defend their copyright in court, and the courts have had mixed opinions on the matter.


That is a gross mischaracterization of what Oracle did. They didn't just defend a copyright in court. They pushed to extend copyright to a mostly functional element that copyright law has not traditionally been thought to cover. It's a tremendously harmful viewpoint for interoperability.

Not just "not traditionally been thought to cover", but which existing precedent said DID NOT cover.

Does it surprise anyone that this case was decided by the Federal Circuit? The rogue court most consistently overturned by the Supreme Court, which also is responsible for most of the disastrous software patent cases out there.

The only bright light is that the Supreme Court has reopened the question. Given how often they overturn the Federal Circuit, we have real hope that we'll return to the previous precedent. Which is that since matching APIs is a functional part of how code works, and things that are functional are by law not copyrightable, APIs are not copyrightable.


Yes, copywritten isn't a word, but their point was that Oracle pushed for API's to be copywritable, which was not the case before their suit. It's an incredibly bad result with many shitty implications that are currently mostly being ignored but could lead to legal nuclear war at any time.

Not to belabor the point, but... "copyrightable".

> It's an incredibly bad result with many shitty implications that are currently mostly being ignored but could lead to legal nuclear war at any time.

I mean, it has been big news, and it has already been nuclear war, with Oracle putting Google in a position to switch Android from Dalvik (and successors) to OpenJDK. I agree that it could become a pretty horrible precedent (imagine if Microsoft forbade Sun from implementing Excel functions in StarOffice, or for that matter, if MS were prevented from producing Excel in the first place).


> imagine if Microsoft forbade Sun from implementing Excel functions in StarOffice, or for that matter, if MS were prevented from producing Excel in the first place

The things you're talking about are already protected by patents, and the copyrightability of APIs has nothing to do with them. At the very least, for something to be copyrightable it must be some specific fixed expression (a piece of text, image, video or audio). So the O v. G ruling applies only to actual (code) APIs; not to protocols (or REST "APIs") and certainly not to stuff that's already protected by patents (the distinction between the two may not always make sense to programmers, but it is what it is; for example, algorithms are patentable but not copyrightable, while programs are copyrightable but not patentable).


You should do some research on the Oracle vs. Google case.

It's literally no different. Excel has APIs just like Java; it's just that the code goes in cells rather than on lines.

The licensing has gotten super onerous.

The licensing has gotten far better. First, Oracle has just open sourced the entire JDK for the first time ever, and second, instead of offering a mixed free/paid, open/proprietary JDK (with -XX:+UnlockCommercialFeatures flags), it now offers the JDK under either a completely free and open license (under the name OpenJDK) or a commercial license for Oracle support subscription customers (under the name Oracle JDK).

Thanks, that clears things up a bit.

What licensing? OpenJDK is licensed under GPLv2 plus Classpath exception, the same as ever.

Support for native code is very bad. JNI is a pain to use and very slow; IPC is often faster. High-performance numerical code often suffers because of poor vectorization. Not to mention that tuning the JVM is often needed for critical tasks. Modern GC'd languages like Go have much better memory footprints, and the penalty for fast numerical code is much smaller.

Panama (https://openjdk.java.net/projects/panama/) will be replacing JNI very soon, and I don't think you're correct about vectorization. While I think Go has some good features, nothing about it is more "modern" than Java except in the most literal chronological sense; Java is more modern in almost every other meaningful sense. While you may need to tune the VM for critical tasks, in Go you don't need to tune, but rather just run slower.

I thought everyone was using JNA [1] for native access these days. JNA overhead is pretty low, and it’s much simpler to use versus JNI.

[1] https://github.com/java-native-access/jna


Respectfully I'm not sure that's true.

> State-of-the-art optimizing compilers.

LLVM provides coverage for many of the new languages that are hyped these days (with the notable and truly unfortunate exception of Go). LLDB provides an excellent debugger infrastructure.

> State-of-the-art GCs.

True, there's some good work there but I'm excited about languages that don't need GCs at all, so that we can finally stop tricking the memory wizard into complying with our workloads and focus on determinism, power efficiency and memory efficiency.

> Low-overhead in-production profiling/monitoring/management.

Correct me if I'm wrong but a lot of that is available in dtrace is it not?

This is actually why I'm so excited by LLVM. I think it's the biggest advancement in computer science in ages and ages. It allows compiler engineers to focus on bringing what they do better and differently to the stack, and to leverage the years of work and bundles of PhDs' worth of research that went into the rest. All that time Go wasted developing assemblers for various platforms should have been spent focusing on Go and letting LLVM handle it.

Over time the delta between what Java has and what LLVM has will shrink which in turn narrows the gap between Java and every other LLVM based language.

Think about it, macOS is basically just various front-ends for LLVM. Apple's implementations of OpenCL and OpenGL, the Metal Shading Language and Core Image are all wrappers for LLVM. Swift is a wrapper for LLVM. Clang makes C and C++ fancy wrappers for LLVM. Rust is a wrapper for LLVM. Almost everything your mac does is a fancy front-end for LLVM. Even Safari was (until B3 backend) a fancy front end for LLVM.


As I said in another comment, there can be more than one thing that's cutting edge, especially as the JVM and LLVM operate at completely different levels. Never mind GCs -- LLVM isn't even safe, and neither is WASM; and that's good at that low level. In fact, LLVM is employed by Falcon, the Zing JVM's JIT, and OpenJDK's HotSpot is compiled to LLVM on Mac.

> I'm excited about languages that don't need GCs at all, so that we can finally stop tricking the memory wizard into complying with our workloads and focus on determinism, power efficiency and memory efficiency.

That's fine, but it seems like giving up GCs comes with its own non-negligible costs, so much so that the two hardly compete in the same domains.

> Correct me if I'm wrong but a lot of that is available in dtrace is it not?

They have similarities (closer analogs would be https://github.com/btraceio/btrace and http://byteman.jboss.org/ than JFR), yes, but operate at different levels with somewhat different capabilities.

> This is actually why I'm so excited by LLVM. I think it's the biggest advancement in computer science in ages and ages.

LLVM is popular, extremely useful, and quite cutting edge in its domain, but there isn't much of a "computer science" advancement as there is, e.g., in Graal/Truffle.

> It allows them to leverage the years of work and bundles of PHDs worth of research that went into the rest.

It certainly does, but there's a lot that LLVM doesn't give language designers that a JVM does, like GCs and high-level language interop.

> Over time the delta between what Java has and what LLVM has will shrink which in turn narrows the gap between Java and every other LLVM based language.

I don't know if LLVM wants to operate at Java's level or vice versa. While LLVM may offer some basic GC and Java may offer "safe" LLVM with things like Sulong, I believe their main focus will be their own respective levels.


Java itself is fine. It's very popular--and that's the biggest problem. Not the language as such but the developer ecosystem around it. I am dead tired of seeing 80-character camel-case function and variable names, annotations that hide critical functionality of the code, stack traces dozens or hundreds of lines long because of the amount of indirection and poorly conceived configuration "languages" used to support dependency injection so developers can save a few characters of typing or mask dependencies and complexity, etc.

The language may be fine, but it has been appropriated into the worst sort of terrible programming paradigms one might imagine.


The terrible programming paradigms are workarounds for the terrible language.

I read an old blog post about Spring's @Required annotation versus plain constructor injection and typed out a small example to get a feel for it – a simple class with one required and one optional dependency – which resulted in

23 lines of plain Java with constructor injection, or

16 lines of Java with Spring @Autowired magic, or

7 lines of Kotlin – no magic needed.
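For readers curious what the plain-Java variant looks like, here is a minimal sketch of constructor injection with one required and one optional dependency. All class names here are hypothetical stand-ins, not the blog post's actual code:

```java
import java.util.Objects;

// Hypothetical dependency types.
interface Database { String query(String sql); }
interface Logger { void log(String msg); }

class ConsoleLogger implements Logger {
    public void log(String msg) { System.out.println(msg); }
}

class ReportService {
    private final Database database; // required
    private final Logger logger;     // optional

    // Optional dependency omitted: fall back to a default implementation.
    ReportService(Database database) {
        this(database, new ConsoleLogger());
    }

    ReportService(Database database, Logger logger) {
        // The required dependency is enforced at construction time,
        // with no @Required/@Autowired annotation magic.
        this.database = Objects.requireNonNull(database, "database is required");
        this.logger = Objects.requireNonNull(logger, "logger is required");
    }

    String userCount() {
        logger.log("querying user count");
        return database.query("SELECT count(*) FROM users");
    }
}
```

The appeal of this style is that "required" is expressed by the constructor signature itself, so a missing dependency fails immediately and obviously; the cost is the boilerplate the Kotlin line counts above are measuring.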


We aren’t talking about Java the language. We are talking about the JVM.

I used to be a Java hater (I'm much more neutral now -- I even find aspects of it pretty pleasant). To me, it was never about the technology of Java (always been pretty impressive), or even the language (a little verbose -- but so is C++ and C# and I like both of those), it was really just about the ecosystem. For whatever reason 2000s era java had soo many libraries that were just insanely over-engineered and had these absurdly deep object taxonomies and trying to figure out how to do something came down to figuring out how 5 or 6 different classes interrelated. It was such a nightmare, especially since the IDEs back then weren't as smart as they are right now. I think it's become a lot better now though, modern libraries seem to have learned a lot of those lessons, and having proper closures makes a huge difference, and things like IntelliJ are really great about making dealing with the verbosity much easier.

A lot of the improvements in the ecosystem came from improvements in the language. Gang of Four (GoF) patterns are in part shaped by the language.

Great points, but it's actually much simpler than that: if you want static types, you have already eliminated the majority of HN darlings (JS, Python, and Ruby). Where do you go? Most developers turn to C++, Java, or C# (not Haskell or F#, for example), and of these, Java and C# are both great for productivity and not daunting for someone just starting out, unlike C++.

There was a period after 200x where developers managed to throw out a bunch of babies along with the bathwater when they rejected Java, UML, SQL, and everything else that was used in 199x. After all, these are all literally from the last century, so they must be "uncool", amirite? /s

The preference for dynamic or duck typing in the name of productivity has always rubbed me the wrong way. To keep it brief, my defense / elevator pitch for static typing is: "Slow is smooth. Smooth is fast" :-)


> eliminated the majority of HN darlings (JS, Python, and Ruby)

Those were the darlings ten years ago (along with CoffeeScript and Clojure). The pendulum has swung back towards types and now the hip ones are Go, Kotlin, Swift, TypeScript, and (to a lesser extent) Haskell, Hack, Reason, and OCaml.

> Java, UML, SQL, and everything else that was used in 199x.

There was the whole no-SQL fad, but lately, even here, I see a lot of people re-discovering and advocating the relational model. Postgres seems to be hotter than MongoDB today.

Much of what is good about Java lives on in Kotlin and Swift. I agree it is unappreciated with today's eyes. Few remember that it was Java that introduced much of the world to garbage collection, memory safety (!), optimizing JIT compilation, runtime reflection, dynamic loading, high quality static analysis IDEs, etc.

UML is garbage. A visual language designed by non-artists with no aesthetic expertise. It deserves to be forgotten.


> Few remember that it was Java that introduced much of the world to garbage collection, memory safety (!), optimizing JIT compilation, runtime reflection, dynamic loading, high quality static analysis IDEs, etc.

Java is still continuing to lead on technology. In recent years it has introduced the world to low-latency, concurrent copying garbage collectors (C4, ZGC), partial-evaluation Futamura-projection optimizing compilers (Graal/Truffle), and low-overhead continuous deep profiling in production (JFR). I think that lots of people remember that, and care about those things, because Java is not only leading technologically but also in market share.


Asking as someone who cares about aesthetics a lot more than most developers, what does UML have to do with aesthetics? UML is for modeling systems and relationships through agreed upon conventions, and sometimes generating code stubs based on those diagrams. As long as you adhere to the conventions, the aesthetics can be changed as necessary.

It's a visual language. It exists entirely to be consumed by human eyeballs. Aesthetics are the user interface for that process.

> Great points, but it's actually much simpler than that: if you want static types, you have already eliminated the majority of HN darlings (JS, Python, and Ruby).

Well, if you ignore mypy and consider TypeScript distinct enough from JS to not count, sure, though TS is if anything more of an HN darling than bare JS. (Really, none of those languages has been an HN or developer-community darling in years, though Python, thanks to ML and data science, is less uncool than Ruby and JS right now. And, yeah, Ruby is left out of the "but I have static type checking available" list... till Sorbet is available later this year.)

> Most developers turn to C++, Java, or C#

I'd be surprised if more than half (most) developers using static typing use those languages and not others outside that set.


>> Most developers [from context: who want static types] turn to C++, Java, or C#

> I'd be surprised if more than half (most) developers using static typing use those languages and not others outside that set.

Really? I'd be surprised if it was less than 90%. What is the competition? Go, TypeScript, Swift, Scala, Kotlin, Rust (in rough order of my gut sense for how widely used they currently are)? These are all still very small relative to C++, Java, C#.

If you argue that C has static types, then I guess you're probably right. But it isn't really true that C has static types in the contemporary sense (which I think is perhaps better referred to as "strong" type safety).


> If you argue that C has static types, then I guess you're probably right.

C (like C++) is both statically and weakly typed; strong and static typing are orthogonal axes (dynamic but strongly typed languages are common.) If you mean strong and static when you say static, you need to take C++ off your list.


No, it's (mostly) not about that.

It's about C not having any generics or similar things (like C++ templates), so you need to fall back way more often on void*.

So on that axis Go is roughly as statically typed as C, albeit more strongly typed.

On the other hand C++ is more statically typed than C, but is about as strongly typed (afaik).
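A small Java illustration of the distinction being drawn here, using only the standard library: a raw, pre-generics collection behaves much like a void* fallback, deferring the type error to a runtime cast, while a generic collection rejects it at compile time.

```java
import java.util.ArrayList;
import java.util.List;

class TypingDemo {
    // Raw, Object-based container: a wrong element type only blows up
    // at the cast site, at runtime -- analogous to misusing a void* in C.
    @SuppressWarnings({"rawtypes", "unchecked"})
    static boolean rawCastFails() {
        List raw = new ArrayList();
        raw.add(42); // an Integer slips into what we meant to hold strings
        try {
            String s = (String) raw.get(0); // throws ClassCastException
            return false; // never reached
        } catch (ClassCastException e) {
            return true; // the error surfaced only at runtime
        }
    }

    public static void main(String[] args) {
        // Generic container: the compiler enforces the element type.
        List<String> typed = new ArrayList<>();
        typed.add("hello");
        // typed.add(42); // would not compile
        System.out.println(typed.get(0) + " / raw cast fails: " + rawCastFails());
    }
}
```

This is roughly the sense in which Go (pre-generics) was "as statically typed as C but more strongly typed": the equivalent misuse fails with a checked runtime panic rather than undefined behavior.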


Yeah, I kind of wondered about that as I was writing my comment, seems like a fair point.

According to Github: https://github.blog/2018-11-15-state-of-the-octoverse-top-pr...

As it happens, among the statically typed languages, C++, Java, and C# are the top three, with C and TypeScript filling out the top 10 lineup.

That's not saying "more than half", but if you include C on that list, I would definitely bet against your statement.


While I still like Python, Python and Ruby haven't been darlings of anything in probably the last 10 years. Go, Rust, Swift, and Kotlin are newer statically typed languages that lots of devs on here like though.

This timeline is a bit aggressive for Ruby. The core of my career using Ruby was 10 years ago, and I would still call it "darling" at that point, though the rumblings were beginning. Node.js was first released almost exactly 10 years ago, and (from my perspective) over the next few years marked the first real exodus from people who had previously been all in on Ruby. I missed out on that one because I thought node.js seemed like an immature answer to a question I wasn't asking. But over the next few years, I became increasingly disillusioned with writing big software in a language without good static analysis, and was on board with the mindshare (if not actual employment) exodus toward languages like Go and Rust. So I would say it was more like 5-7 years ago that you started seeing more criticism than love for Ruby in places like this.

And yeah, as other commenters have said, Python is back due to data science / machine learning. (Though I'm hoping there will be a wave of adoption of tools like Julia and Swift for this in the near future; good static analysis will be nice for data science for all the same reasons it is nice for other kinds of software.)


Python's relevance got renewed by the ML frameworks (Tensorflow) and notebooks (Jupyter).

This. Jupyter notebooks, Matplotlib, Numpy, Spyder...all that made Python big again as ML and AI became the new hotness recently and the killer app for Python.

Outside AI, Python is a really good scripting language for both Linux and Windows. My entire industry seems to run off of Python for process automation and analysis. It really is a lingua franca in this space.


I agree. During my maths + CS studies I used mainly MATLAB and R; Andrew Ng's famous machine learning intro was also taught in MATLAB/Octave. But using just one "proper" general-purpose language like Python instead makes way more sense, especially when building software from scratch. Need a custom ERP? Build one with Django. A web server? Go with Flask. Integrating some ML pipelines? Easy.

> So Java is not only the safe choice for serious server-side software; it's also the bleeding edge.

Sure, for core Java (OpenJDK), but what about the future of JEE licensing and development? Even non-"enterprise" apps typically make use of some JEE stuff like servlets and JDBC. Is it really the "safe" choice when, apparently, the trademark agreements have just fallen apart?

https://headcrashing.wordpress.com/2019/05/03/negotiations-f...


JDBC is not JEE - it is core Java. Servlets are the original web-container Java spec, independent of JEE. Both are great specs supported by several stable and performant implementations - rock solid tech compared to a lot of the flaky stuff you find advertised today.

How are Java programmers implementing RESTful APIs these days?

Many developers don't want to think too much and generally choose Spring Boot, a stable, highly popular and well-documented framework. There is nothing wrong with this choice, but I find it too bloated for lean container-based microservices.

Dropwizard and Micronaut are pretty good for getting a smaller and saner footprint. If you need more than REST, say you want a lean MVC-based web-framework, then Blade is also a good choice.

You can also choose not to use a framework and, say, use code generation from a Swagger/OpenAPI REST document and implement the remaining bits on your own. If your deployment target is AWS, you can also choose to leverage Netflix's fantastic set of libraries.

Java has tonnes of options, so you usually need to spend some time evaluating what goes into your stack.
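At the zero-framework end of that spectrum, the JDK alone goes surprisingly far. A minimal sketch using only the built-in com.sun.net.httpserver package (the endpoint path and port are made up for illustration):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

class MiniRest {
    static HttpServer createServer(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        // One GET endpoint returning a fixed JSON body.
        server.createContext("/ping", exchange -> {
            byte[] body = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        return server;
    }

    public static void main(String[] args) throws IOException {
        createServer(8080).start(); // port chosen arbitrarily
        System.out.println("Listening on http://localhost:8080/ping");
    }
}
```

Obviously this has no routing, content negotiation, or validation (exactly the gaps the frameworks above fill), but it shows how little the JDK itself requires.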


I'm curious to hear what specifically is too bloated in Spring Boot in your opinion for container based microservices. I've been writing microservices that run in docker in Java for a few years, then moved to Kotlin recently, all using Spring Boot, and I haven't run into anything that made me feel like they were bloated. I've also written Go and Node microservices for contrast.

Spring Boot has a large startup time, a heavy dependency trail, and a lot of dynamic class loading driven by auto-configuration. This makes it unsuitable for some areas: say you want fast-scaling microservices that respond quickly to incoming load, or to transform your microservice into a serverless function. One can use GraalVM to compile a microservice to a single binary with microscopic start-up time; however, native-image tends to fail more often than not with Spring Boot apps. (I haven't tried this recently, though, so my knowledge could be out of date.)

Spring Boot does what it's asked to do, which is to load everything it finds. The dynamic class work has basically nil effect on load time. The core performance constraint is that classes are loaded essentially linearly. If you ship with something enormous like Hibernate, that's thousands of files read individually from a zipfile.

Spring folks are actively involved with Graal. I see a lot of excitement internally at Pivotal. Watch this space.


I'm reminded of the crazy stunts from the 1980s where people would start some big, slow program (Emacs was huge!), dump core, and then "undump" to make a pre-initialized binary that would start as fast as the OS could read it. It actually worked, as long as it wasn't relying on open file descriptors, or signal handlers, or env vars, or …

I’m pretty sure Emacs still does this. There was a patch committed to change it to use a more portable method of dumping state, but I don’t think that change made it into the 26.2 release.

You should check out the new Red Hat framework, [Quarkus](https://quarkus.io/). This is a framework that leverages Graal to create native images. Those images are very small and optimized. For example, one of the Quarkus developers showcases the size of a native image; spoilers - it's [19MB](https://youtu.be/BcPLbhC9KAA?t=103). It takes 0.004s to start.

In [this](https://youtu.be/7G_r1iyrn2c?t=3104) session, a Red Hat developer shows how a Quarkus application is scaled. Compared to Node, it's both faster to respond to the first request and has a smaller memory footprint (half the size of Node).


Looks very interesting although unfortunate they went with Maven over Gradle.


I can't speak for others, but I work in games and we've spun up a bunch of servers and we use Netty/Jersey for REST and sockets. It's been incredibly stable and a joy to program for compared to our previous Node servers. Granted we were using bleeding-edge Node 5 years ago which is not the Node of today.

https://www.dropwizard.io/

Quoting from the site: Dropwizard is a Java framework for developing ops-friendly, high-performance, RESTful web services.

It's a nice microframework which comes with built-in health checks and metrics, and it's also ops-friendly because it deploys as a single JAR file.


Not really related to Dropwizard but... this:

> mvn archetype:generate -DarchetypeGroupId=io.dropwizard.archetypes -DarchetypeArtifactId=java-simple -DarchetypeVersion=[REPLACE WITH A VALID DROPWIZARD VERSION]

Why is it that every time I read something about Java I have a feeling it was not meant for humans? Not as bad as C++ project setup though, I'll give it that.


I don't use maven archetypes too often, the rest of maven seems pretty good to me, but maybe I'm just used to it.

I have done a lot more Java than Javascript, so I feel the same about npm error messages.


Don't get me started on js/npm stuff...

Spring Boot with WebFlux and any Rx-supporting libraries you can get your hands on

Thanks.

I use javalin.io, which was written by one of the maintainers of sparkjava (not to be confused with Apache Spark).

I'm a big fan.


Spring Boot + jOOQ for RDBMS support.

Pardon my ignorance but what does any of that trademark stuff have to do with jdbc?

I know little about EE (and I'm certainly not speaking for anyone but myself), but I believe Java EE has lost dominance not because of any corporate decision, but because it simply started losing ground to unstandardized open source projects [1], as opposed to EE's JCP. So people who liked EE wanted to ditch the slow-moving JCP in favor of a faster process, and one question was whether the new project will be able to change specifications of namespaces that are traditionally reserved to, and associated with, the JCP. In the end it was decided that no, they will not be able to change JCP namespaces (but can choose to maintain them until the end of time in addition to any innovation they do outside of JCP namespaces). The decision obviously disappointed some, but I don't think it's viewed as catastrophic (although some may think so). And I don’t think that the “negotiations have failed” so much as that either side wasn’t able to achieve what they believed was the best outcome for them, but an agreement has been made. You also need to realize that it wasn't a negotiation between Oracle and some grassroot project, but among multi-billion-dollar corporations, some of whom have fought Java standards for over a decade, so the process was both legally and politically complex. But now everyone can move on.

I, for one, am curious to see how Jakarta EE's current approach of what seems to be an internet-based, democratic semi-standard would work out, and if it can be better than both the JCP as well as more common centrally-controlled open-source projects.

Anyway, this is what Eclipse's director said on the matter:

https://twitter.com/mmilinkov/status/1125213654775889921

[1]: That post you linked to reminds me of those who blame Oracle for killing Solaris. Solaris is a terrific operating system, that was sadly killed by Linux long before Oracle acquired Sun. After a few years of trying, I guess Oracle decided they could no longer save it, and there was no point in continuing to throw good money after bad.


Your revisionism in your footnote is astounding, frankly. Oracle closing the source for OpenSolaris was not because of Linux - it was because of a choice made inside Oracle. Remember that development then continued for years in a closed source fashion, and is still ongoing with a skeletal staff.

> You also need to realize that it wasn't a negotiation between Oracle and some grassroot project, but among multi-billion-dollar corporations, some of whom have fought Java standards for over a decade, so the process was both legally and politically complex.

Yes, exactly. I think that is what makes it feel like not the safest choice for many companies.


Java EE had not been the preferred choice for many companies since long before this issue, which, I guess, is at least part of the reason why Oracle gave up those projects. I think Spring is the leader, but I am really not too familiar with that entire domain.

It's true that _pure_ JEE has not been preferred, but part of my original point is that simple things like servlets are actually technically part of JEE - so if you are using Spring for web, cloud, etc. then you are actually using JEE at least a little bit.

I was never super into Java development. I started working in 2014 and was introduced to Weblogic, Jenkins, huge Maven POMs and all the rest, then went into Cloud consulting. When I sit down to do anything I get so wrapped up in all the stuff that comes along with Java and feel like I have to use some huge IDE like Eclipse (I hate) or IntelliJ (I <3 U Jetbrains) to do anything "real".

If I could just write code and have a simple package manager like NPM or even Go packages and not be hindered by trying to get VScode to work I would never look back. I just waste so much time trying to understand the ecosystem. I know that isn't a great excuse (not wanting to take the time to become an expert on tooling), but people who came into the professional space in the same time frame likely also just see it as an obvious path to just avoid all the cruft.

What I really want is to get a "lite" Java project like this up and running with all of the smart people's opinions in place that I can develop entirely in a lite editor (preferably without XML anywhere). Maybe something like Spring Boot would solve that but I have not investigated that yet.


Java-the-language is a fairly crummy thing to work with, it’s Java-the-ecosystem that’s genuinely good — the JVM, the tooling, etc etc. The quality of IDEs available is definitely a very large component of that. Doing Java on a lightweight editor seems like a way to pay the price without reaping the benefits.

I use TextMate (!) to maintain an Android SDK that's a mix of Java, C and C++ avoiding Android Studio like the plague, to maintain an iOS SDK that's mostly Objective-C, similarly avoiding Xcode's editor, and to maintain the backend written in Python that these SDK's talk to. I've got enough going on in my head w/o having to deal with the complexity of two different IDEs.

Anyway, for me Java-the-language is fine. The tooling, at least as it comes with Android (looking at you gradle and all its Android plugins) is what makes me want to pull my hair out.


I've not done any Android development but how is IntelliJ for it? While it can be slow on big codebases and obviously never as responsive as vim in a terminal, I think the tooling is excellent.

But again, you have to spend a month or so learning the shortcuts.


IntelliJ has an excellent vim keybinding plugin which I use (having come from Linux/scripting/Vim background to Java). Intellij also makes setting shortcuts pretty easy so you can customize to your needs. Having used both Eclipse and IntelliJ, I prefer the latter.

I haven’t used IntelliJ, only Android Studio, which I know is based on IntelliJ but don’t know how much they differ. I find it to be a clunky UI, and it easily eats all the CPU on a 2015 3.1 GHz MBP doing things like self-updating, downloading newer SDK versions, re-indexing a small repo, etc. I personally find it pretty unusable. There are also parts of the UI you can’t even access (like the SDK manager and the AVD manager) unless you have a project open. It’s pretty obviously not a Mac native tool. I assume (hope anyway) it’s much better on Linux.

To a couple of those -

Compared to the other options, Gradle (IMO) significantly reduces the package & project management overhead. That said, it's still relatively high insofar as you are still asked to keep track of things like whether a dependency is needed at compile time, run time, or test time.
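For illustration, here is roughly what that tracking looks like in a hypothetical build.gradle fragment (artifact coordinates and versions are just examples):

```groovy
dependencies {
    implementation     'com.google.guava:guava:28.0-jre'   // needed at compile time and run time
    compileOnly        'org.projectlombok:lombok:1.18.8'   // compile time only
    runtimeOnly        'org.postgresql:postgresql:42.2.5'  // run time only (e.g. a JDBC driver)
    testImplementation 'junit:junit:4.12'                  // tests only
}
```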

Java-the-language has a symbiotic relationship with heavyweight IDEs. The language's development is influenced by the popularity of IDEs in its community every bit as much as the popularity of IDEs in its community is motivated by the language's characteristics. If you're looking for a good lightweight editor experience, I'd suggest looking at alternative JVM languages. Any of {Groovy, Clojure, Kotlin, Scala} will give you a better non-IDE experience while still giving you full access to the ecosystem. (That said, everyone still codes those in IntelliJ, too. It's still not gonna feel like working in Go.)

Lastly, it's totally OK to tune out the ecosystem when you don't need to be plugged into it. Yes, there are a bazillion JSON libraries out there. And you can easily spend more time and energy agonizing over their differences than you could possibly save by choosing the right one. Similarly, go ahead and ignore Spring. The whole Spring Experience™ is designed around developing applications a certain way. If you like to develop applications that way, you will know in your soul that Spring is right for you, and be attracted to it like a cat to an open can of tuna, and you would already have been a deeply devoted Java developer for years now.


Maven doesn’t have a lot of overhead, if you don’t overengineer your POM and use a shared parent between multiple projects. It requires some DevOps thinking, but in the end a typical project POM will be just a list of dependencies and basic metadata.
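Something like this hypothetical POM is all many projects need (the coordinates are illustrative):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>demo</artifactId>
  <version>1.0.0</version>
  <dependencies>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>28.0-jre</version>
    </dependency>
  </dependencies>
</project>
```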

> What I really want is to get a "lite" Java project like this up and running with all of the smart people's opinions in place that I can develop entirely in a lite editor (preferably without XML anywhere).

That's pretty much the original value proposition for Spring Boot. "I just want to get to work".

I used Spring Boot before encountering a Spring 3 project. The difference is phenomenal.

Disclosure: I work for Pivotal, which sponsors Spring.


Spring Boot still requires Gradle, which is much more complex than naive use of npm or Python's pip.

In the long run java build tools are better, but due to the learning curve a lot of folks balk (leave) and use a different stack.

Whoever is in charge of OpenJDK should just adopt Kotlin as Java 14, even if it was Not Invented Here.


Spring Boot works with both Gradle and Maven.

https://start.spring.io/


Try a Spring Boot app using Gradle as the build tool.

Weblogic and all those app servers were a product of a different time. They solve problems that are largely solved other ways now (you could argue for some of their features). Spring Boot ships the web server in the application which is more akin to Rails, node etc.

Maven itself is also showing its age (its 1.0 release was in 2004). I won't say that most of the industry has moved to Gradle, because the truth is so many workflows and projects are using Maven that it will be around for a long time. The good thing is that other build tools like Gradle, SBT etc. interop with Maven-the-package-repo just fine.

There is nothing stopping you from developing Java in vim. Syntastic and other plugins will help though.


For your web apps / services / APIs, check out Dropwizard. It is not what I would call "bleeding edge", but it provides a reasonable base and has always allowed me to easily work with the rest of the awesome JVM ecosystem. My Dropwizard projects are usually cruft/magic free, start in less than 5 seconds, consume predictable resources, and are easy to reason about, troubleshoot and improve.

I used to be a big fan of Spring(-boot) with MVC, but since I tried Dropwizard I never looked back. I'd still do Spring but perhaps for non-mvc/web needs.


It’s the first time I’ve seen someone willing to trade Nexus/Maven for NPM. There isn’t even a standard way to build and package a TypeScript library, to start with. For small projects, maybe, it can work, but for enterprise needs JS/TS is not even close.

The answer you are looking for is called "Clojure".

It takes a lot of power to push something that large. A lot of brain power to grok the ecosystem.

Outside of the Java bubble, the view is quite a bit different.

All that sophistication looks like a wasted effort.

Take something as simple as administering the garbage collector. Java has a big selection of GCs, and each has its own bunch of knobs for tuning. And you have to pay attention to that stuff.
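For example, just picking a collector is a flag, and each choice then opens its own tuning surface (a sketch; flag availability depends on the JDK version, and ZGC/Shenandoah were experimental at first, so they may also need -XX:+UnlockExperimentalVMOptions):

```sh
# Each of these selects a different collector with different trade-offs:
java -XX:+UseG1GC         -Xmx8g -jar app.jar   # G1, the default since JDK 9
java -XX:+UseZGC          -Xmx8g -jar app.jar   # ZGC, low-pause (JDK 11+)
java -XX:+UseShenandoahGC -Xmx8g -jar app.jar   # Shenandoah, in builds that include it
```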

After working with Go for several years, at large scale, we never once had to touch any knobs for GC. We could focus on better things. Any of the Java stuff we deploy and deal with we have this extra worry and maintenance issue.

And that's just the GC.


Funnily enough, Go just chose to solve the problem the other way around: while the JVM tackled GC with the code equivalent of lightsaber-equipped drones, Go's GC is almost embarrassingly simple in comparison (although it's pretty decent by now).

The major difference is that the whole Go language and stdlib is simply written around patterns that avoid allocations almost magically. The simplicity of the Reader and Writer concepts is elegant yet powerful, and doesn't allocate anything but a tiny reused buffer on the stack. There are lots and lots of other examples, but if you have to collect ten times less garbage, you'll be better off, even if your GC is 2x slower.


The byte buffers that Go's Reader reads from and that Go's Writer writes into cannot in general be allocated on the stack because they are considered to escape. Because Reader and Writer are interfaces, calls are dispatched virtually, so escape analysis cannot always see through them. This is now fixed for simple cases in Go, but only very recently: https://github.com/golang/go/issues/19361

Ironically, Java HotSpot handles the use case of Reader and Writer better than Go does, since when it's not able to allocate on the stack it has fast allocation due to the use of a generational GC with bump allocation in the nursery. By virtue of the fact that it's a JIT, HotSpot can also do neat things like see that there's only one implementation of an interface and thereby devirtualize calls to that interface (allowing for escape analysis to kick in better), something Go cannot do in general as it's an AOT compiler.


> like see that there's only one implementation of an interface and thereby devirtualize calls to that interface

Oh, the JIT devirtualizes and inlines even if there are many implementations, but only one or two at a particular callsite. This has been generalized in Graal/Truffle so you almost automatically get stuff like this (https://twitter.com/ChrisGSeaton/status/619885182104043520, https://gist.github.com/chrisseaton/4464807d93b972813e49) by doing little more than writing an interpreter.
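A sketch of the kind of monomorphic call site being described (the class names here are made up for illustration):

```java
interface Shape {
    double area();
}

final class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class Devirt {
    // s.area() is a virtual call in the bytecode, but if Circle is the only
    // Shape implementation the JIT has seen at this call site, it can turn
    // it into a direct, inlinable call, guarded by a cheap check that
    // triggers deoptimization if another implementation ever shows up.
    static double total(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area();
        return sum;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1), new Circle(2) };
        System.out.println(total(shapes));
    }
}
```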


I agree it's impressive that Go manages to be not all that much slower than Java while having a much simpler runtime, but much of the simplicity is gained from lacking features that are very important in many cases of "serious software" like deep low-overhead continuous profiling and ubiquitous dynamic linking.

I’ve seen you mention the superior operability of Java and the JVM in high load production environments and I think this is a really important, often overlooked, and commonly misunderstood point.

Would you be up for writing a short post or blog post going into some anecdotal comparisons and sharing some resources?


Exactly. Go is benefiting from years of hard lessons learned in other stacks such as Java. Having super experienced GC builders involved early on resulted in a language and libraries that work much more synergistically with the GC.

It's better to dodge a problem, than to have a baked in problem that requires lots of really smart people to make work arounds.


Java HotSpot's garbage collector is significantly better for most workloads than that of Go, because it takes throughput into account, not just latency. Mike Hearn makes the point at length in this article: https://blog.plan99.net/modern-garbage-collection-911ef4f8bd...

I predict that over time Go's GC will evolve to become very similar to the modern GCs that Java HotSpot has. Not having generational GC was an interesting experiment, but I don't think it's panned out: you end up leaning really heavily on escape analysis and don't have a great story for what happens when you do fail the heuristics and have to allocate.


> I predict that over time Go's GC will evolve to become very similar to the modern GCs that Java HotSpot has.

At which time the talking point will become "look how advanced and sophisticated it is!"


> Java has a big selection of GCs, and each has their bunch of knobs for tuning. And you have to pay attention to that stuff.

You never have to touch them in the Java world either, unless you like making performance worse that is.

I've never ever seen a case where fiddling with garbage collection parameters didn't make things slower.

I worked with a guy who worked on a popular Java compiler, and he says the same thing. He never twiddles GC parameters either.


What sort of scale are you working at? At our scale, it is normal to look into these sorts of things. Reading and understanding and applying articles like the following is absolutely necessary.

http://clojure-goes-fast.com/blog/shenandoah-in-production/


Medium scale. It's not like I need to pimp each of my servers like it's a Honda Civic.

If the load gets too high, I just add another instance.

It's way more economical than wasting developer time fiddling with GC params.


Are you serious? You have to mess with the maximum memory allocated to the GC all the time with Java processes. Also, the JVM's default settings basically assume it is the only process running. It will keep hogging a huge amount of memory even if the memory allocated to the GC is 60% empty, unless you configure the GC properly. Wasting memory and stealing it from other processes, which potentially causes swapping or crashing, is far worse than any increased time spent garbage collecting. Unless you're as stupid as the Minecraft developers, you will rarely suffer from GC pressure with heaps below 10GB.
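For reference, the kind of configuration being described might look like this (a sketch; the exact behavior of these flags varies by collector and JDK version):

```sh
# Cap the heap, and allow the JVM to shrink it and return memory to the OS
# more aggressively when most of the heap is free:
java -Xms512m -Xmx4g \
     -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30 \
     -jar app.jar
```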

Cassandra has benefitted from so many GC tweaks that I find this view hard to believe.

I can only imagine someone having this view if they have never cared about latency. How have you never experienced a multi-second GC pause on default CMS settings?


> Outside of the Java bubble, the view is quite a bit different.

I know that outside the "Java bubble" less advanced platforms are often good enough (BTW, while Java's GC ergonomics are getting better, I agree there may be more of a paradox of choice issue, but while you may not need to touch any knobs, you're also not getting low-overhead, deep production profiling and many of the other amazingly useful things you get with Java), and even inside the Java bubble we don't think Java is always the best choice for everything, but that doesn't mean that Java isn't leading in technological innovation, which was my main point.


I'm not saying I disagree but for a lot of businesses (even one with relatively high traffic) it is not unheard of to deploy with almost no tuning of the GC (aside from setting a heap min / max of 2,4,8GB) and have no issues.

I'm sure this all depends on use-cases, but I'll chime in to agree. I work on web services that do high (not Google high, but you've-heard-of-it high) levels of traffic, we run on the JVM, and GC pauses are not something that cause us to lose any sleep using out-of-the-box settings + explicit heap min/max.

Tuning GC is hard, and the nondeterminism is worrisome, but hitting a pathological case and rewriting a bunch of code hoping to avoid it is even harder.

For such an advanced JVM, having to prewarm it by calling code 5000 times, or the JVM being much more memory hungry than other non-JVM languages, doesn't feel like the cutting edge.

If you're not calling your code 5000x then it's likely not important for performance. I worked in compilers for 5 years and people's intuition for what parts of the code are the bottleneck is generally not very good. This includes me, I've guessed wrong a WHOLE lot.

If you're running microbenchmarks and having issues, then you're likely not using JMH, the Java Microbenchmark Harness which used to be a 3rd party library but is now built in to Java 12.

Is Java memory hungry? Maybe. If you're just writing a small routing server, probably not the right use case for Java. If you're writing a big and complicated game server with hundreds of routes and you need to talk to MySQL, Redis, and Memcache then I'd say the memory overhead of Java is really quite good. Don't forget that you can tune the min/max heap and other things like that, especially with the module system in Java 11. However if very low memory is a requirement, then you're probably running in an embedded environment and you probably shouldn't be using Java.


> If you're just writing a small routing server, probably not the right use case for Java.

General purpose languages have "use-cases"?


Yes, I wouldn't use Haskell for scripting and I wouldn't use javascript for a 3D game engine.

I think in this context Java means "the platform" i.e. the JDK

The usual C2 kick-in threshold is 10000, not 5k. But that statement about warming up is true only for microbenchmarks.

C1, which is a dumb (but not terrible) compiler, tends to kick in at 1k, and if you cannot get your code to 1k invocations, it likely never needs compilation, so there's no point spending time and space on performing the said compilation.

The 'prewarming' does include perf. counters, so it's guided compilation to boot.


What?

I suspect that the parent is talking about the JVM's JIT, where it compiles java bytecode into machine instructions after loading the application. This is why the first few requests on a JVM are usually considerably slower than the rest.

Parent is obviously exaggerating with the 5000x, and he could have made his point in a different way, but there's some truth to it.


5000 refers to the number of times a method needs to be called before the JIT decides the method is "warm" and needs to be compiled by the expensive-to-run C2 compiler, which produces good-quality machine code.

If you're benchmarking java code and the method was not called enough times before you measure, you're measuring code compiled by C1 (or even interpreted).
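A rough way to see this warm-up effect (the timings are machine-dependent, so treat them as illustrative only; a proper measurement would use JMH):

```java
public class Warmup {
    // A small, deterministic workload for the JIT to chew on.
    static long work(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) acc += (long) i * i % 7;
        return acc;
    }

    public static void main(String[] args) {
        // Early rounds run interpreted or C1-compiled; once the invocation
        // counters pass the tiered thresholds, C2 recompiles the method and
        // later rounds typically get faster.
        for (int round = 0; round < 5; round++) {
            long t0 = System.nanoTime();
            for (int i = 0; i < 20_000; i++) work(1_000);
            long t1 = System.nanoTime();
            System.out.printf("round %d: %d ms%n", round, (t1 - t0) / 1_000_000);
        }
    }
}
```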

The performance gap between Java and C is on the order of 2x to 5x.


Why not just pass -Xcomp? From the docs[1]: "You can completely disable interpretation of Java methods before compilation by specifying the -Xcomp option."

[1] https://docs.oracle.com/javase/8/docs/technotes/tools/window...


At the same time, a JIT can outperform native code in some specific circumstances, since there are certain optimizations which can be proven safe at runtime that cannot be guaranteed to be safe at compile time.

This is a thing people say a lot, but it doesn't often seem to result in an actual Java program or benchmark being faster than a program compiled AOT by an equivalently smart compiler (LLVM etc.).

Actually, one of the lesser-known features of the JIT is the ability to deoptimize/recompile based on CHA (class hierarchy analysis) or a pushed compilation request via invokedynamic & friends.

Isn't Minecraft written in Java? Many Fortune XX companies have their entire product lines written in it. Frankly, I don't understand the hate it gets on HN.

Most of AWS, a lot of Google - as far as I know.

yes, and I had to pay for a lot of ram on servers because of it ;(

Minecraft's creator, Notch, chose Java because when he started Minecraft he thought that would let it run in web browsers. He wouldn't choose Java if he could do it over again.


Do you have a link to back up the statement about running in browsers? Notch is super talented. MC was released in 2009. No way he didn't know about browsers and Java.

Paying for RAM on servers is one of the cheapest ways you can pay for performance.

It is also one of the poorest performing games in the history of video games.

> low-overhead in-production profiling/monitoring/management

This is a joke, right? The management overhead for the JVM in production environments is huge. It's really hard to get it right.


Not sure what you are talking about. Can you give me some examples?

Tuning the heap size, GC tuning, etc. are often needed to avoid huge GC spikes out of the box. Not to mention performance is pretty meh compared to something like Go for applications where non-trivial compute is needed.

GC tuning should be trivial for every Java shop. Yeah, I guess you can find use cases where Go is faster than Java, but in that case, if it matters, I would go for Rust because I like it more than Go.

This - Java 8 'feels' legacy, but in so many ways it's miles ahead of upstarts.

Sometimes we forget that the JVM (almost) = Java and that thing is a beast.


>stuff hyped on HN

HN is going to be inclined to take a look at new-ish stuff.


Indeed. For better or worse, that's what the "N" in "HN" stands for - "News".

Yes! Java 8 is good. And Excel is by far the best spreadsheet out there. I don't know enough about Sharepoint but it seems like the normal world has actually chosen very well when it comes to technology.

Despite its title, the article was not about Java 8.

You can try appending "in Go" or "in Tensorflow" to see if the title still makes sense.

I had cause to use a bit of Java recently, so I could wrap an existing library that did exactly the thing I wanted. I haven't used it since university—back in the 1.4 days—and I was actually pleasantly surprised. Performance was great, concurrency was easy, features like type inference and streams made the experience much more pleasant, and obviously the development tools are still first-rate.

The absence of a package manager like Bundler or Cargo was frustrating for someone coming from that environment – as was the effective requirement to use an IDE. But on the whole, the platform feels like something that totally-hip-and-edgy-clique developers like myself are too quick to discount.


>The absence of a package manager like Bundler or Cargo was frustrating

Why did you not use Maven or Gradle?


I think everything you mention here is actually related to the JVM and not Java specifically, and while I agree the work is impressive, it’s also true that languages such as Kotlin and Scala benefit from that work while also offering some very nice modern improvements.

You are describing the JVM there, mostly. The JVM is great, and Java is pretty tired even in the latest incarnations. I would have nothing against working full time with a JVM stack, so long as Java isn’t involved.

C# ?

How does this compare to the .NET ecosystem?

C# is a better language. Java 8 started to catch up with some of the quality-of-life niceties, but Java has a lot of mistakes baked into the language and interfaces that can't be removed without potentially breaking a lot of stuff, which the consortium is not willing to do. C# and DotNet were designed with the wisdom gained from the early days of Java implementation and sidestepped a lot of these messes.

Java has a substantially better ecosystem. The tooling is just miles ahead of what exists in the DotNet world. Until very recently you developed in Visual Studio and you ran on Windows in production and that was that. They're working on Linux support and a broader ecosystem but they are probably a decade behind what is available on Java. Even stuff like package management is crude, NuGet is a joke compared to Gradle.


The JVM has multiple languages such as Kotlin, Scala and Groovy, each of them arguably a better language than Java. That most people still prefer to build projects in Java shows that other things, such as backward compatibility, how many developers know the language, the tool chain and so on, matter more than the isolated technical merits of the language.

Cannot agree more. Syntactic sugar or certain more advanced constructs in the end have very low ROI for experienced developers, who can make good use of the existing language features and extend their expressive power with internal DSLs. The costs associated with the tooling, hiring people with the necessary expertise, etc. at that point are more important.

> C# is a better language.

Yep, as a seasoned Java developer I agree. There are times when I miss a part of the Java syntax but they are few and far between.

> Java has a substantially better ecosystem. The tooling is just miles ahead of what exists in the DotNet world.

Also true, although it is getting closer fast.

Possibly more important: they are both in another league compared to most other languages. (Of the other languages/stacks I have some production experience with, Angular/TypeScript is the only one that I feel has anything close to the tooling support that C# and especially Java has.)


> but Java has a lot of mistakes baked into the language and interfaces that can't be removed without potentially breaking a lot of stuff

Absolutely true, but so does C#. In the end it turned out that .NET's reified generics were a mistake (which makes language interop on .NET painful), and more recently async/await.


> In the end it turned out that .NET's reified generics were a mistake

Do you have any resources that explain this point further?

Reified generics have often been praised as the thing that CLR got right (and JVM got wrong). I never understood fully why that is, especially since other languages with generics (e.g. Haskell and OCaml) don't have anything reified (although in all honesty also don't have RTTI so it's really not necessary).


Erasure makes it easy for Java, Kotlin, and Clojure to share code and data structures without costly runtime conversions. Languages like Scala and F# have had trouble implementing features on the CLR because of reification, and take a look at what Python and Clojure on the CLR have to go through for interop.

I know some people say they like reified generics in C#, but those are mostly people who aren't aware of what cost the entire ecosystem is paying in exchange for what is a minor convenience in C#.

BTW, reification of reference-type generics is not to be confused with specializing collections for value types ("arrays-of-structs"), an extremely important feature that the CLR indeed has, and that Java is now working on getting.
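A minimal illustration of erasure on the JVM (where the CLR would instead give the two lists distinct runtime types):

```java
import java.util.ArrayList;
import java.util.List;

public class Erasure {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        // Under erasure both share the same runtime class; the type
        // arguments exist only at compile time.
        System.out.println(strings.getClass() == ints.getClass()); // prints "true"
    }
}
```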


"Mistake" is a strong word, and I would disagree with it. But when building extensible libraries and systems I do often find myself wanting to pass around a List<Something<?>>, which isn't doable without writing a second generic interface or the like. On the flip side, in Java (or, these days, Kotlin), a type-erased generic can be mediated more easily because passing a Class is easier than adding a second interface.

Part of it is that a lot of that stuff is gamedev-related, for me. It's not the biggest thing in the world, but for a lot of the stuff I find myself writing in C#, I find myself wishing for type erasure to make throwing data around a little easier. On the other hand, though, when writing web stuff on the JVM--I absolutely will not waste my time doing this in .NET, ASP.NET Core is not very good and EF Core is awful--I often wish for type reification, so it's just a horses-for-courses thing.


Generics interop just fine on .NET, and async/await had become a starting point for similar designs in many other languages. I don't think you'll find many people who actually write code in that ecosystem agreeing with either of those claims.

> Generics interop just fine on .NET

They really don't. They have posed severe restrictions on features in languages like Scala and F#, and take a look at Clojure, Python and JS interop on the JDK as opposed to .NET.

> and async/await had become a starting point for similar designs in many other languages.

And it's a mistake in most of them. Java has copied some of C's mistakes, C# has copied some of Java's, and others may copy some of C#'s. Every language/runtime both repeats others' mistakes and adds a good helping of its own.

> I don't think you'll find many people who actually write code in that ecosystem agreeing with either of those claims.

I can only express my own opinions (and I think reified generics, as implemented in the CLR, is a far bigger mistake than async/await, which only affects the C# language), but I am far from being alone in having them. Also, I have no doubt that async/await is an improvement on the previous situation, which is why people like it, but nevertheless I think it's a mistake as there are alternatives that are more convenient, more general, and have less of an adverse impact on the language (e.g. Go's goroutines).


What are the restrictions on F# that were posed by them? Given that Don Syme was the one who originally designed them, specifically with a mind for cross-language use (which is why they got stuff like variance long before C# supported it), this is a surprising claim. In fact, I recall Don saying something along the lines of, if CLR did generics with erasure like Java, F# probably wouldn't be where it is today.

I saw the link to Clojure page you posted in another comment, but I don't see any fundamental problem with generics there. Yes, if you're invoking a statically typed language from a dynamically typed one, you have to occasionally jump through hoops when dealing with things like method overloads that require types to distinguish. The same goes for Python. I have actually used Python embedded in C# more than once, with interop both ways, and in practice it "just works" most of the time.

Conversely, I don't see why statically typed languages should surrender valuable type information (and associated perf and expressivity gains) for the sake of convenience of dynamically typed ones, especially on the platform where static typing is the norm, and libraries are expected to be designed around it.

As far as async/await vs Go's goroutines - since we're talking about language interop, how many languages can Go coroutines interop with? Async/await easily flows across language boundaries, as you can see in WinRT - any language that has the notion of first-class function values can handle that pattern. Goroutines are essentially a proprietary ABI. And the worst part is that once the language has them, all FFI has to pay the tax to bridge to the outside world, regardless of how much you actually use them. It may be an argument for standardizing some form of green threads on platform ABI level, so that all code on that platform is aware of their existence and capable of handling them. Win32 tried with fibers, without much success. Perhaps it was too early and the design was too flawed, but it's not encouraging.


> What are the restrictions on F# that were posed by them?

It does make it harder to add features to the language that do not map to the current reified "generics" spec, for example higher-kinded polymorphic types.

Of course, the JVM has plenty of issues supporting alternative languages too, for example lack of tail-call optimisations, or switch-on-type, for functional languages.


If such features can be implemented via runtime type erasure on JVM, you can still do that on CLR - it's not like it prohibits that technique, it just doesn't use it for generics. You can even have different languages agree on how they would implement it, so that they could fully interop. With modopt/modreq, you can capture it all in metadata, as well.

It wouldn't interop with C# generics (although I don't see why it couldn't interop with C# by other, less convenient means). But if it can't be properly mapped to them when they're reified, why is there an expectation that it should? It seems to me like the gist of the argument here is that we can conflate two features into one, if only we remove all the conflicting bits of one of the features - which also happens to be the one much more broadly used at the moment. It's a strange trade-off.


> Of course, the JVM has plenty of issues supporting alternative languages too, for example lack of tail-call optimisations,

That's true (and will be addressed), but that's a problem that can be fixed by adding a feature, not removing a central one, which is why I said that both Java and .NET have made mistakes, and reified generics was one of .NET's.

> or switch-on-type, for functional languages.

There is no difficulty supporting that. In fact, the Java language is about to get that without JVM changes. Perhaps you mean switching on A&lt;Foo&gt; and A&lt;Bar&gt;, where Foo and Bar are reference types (possibly with a subtype relationship between them); well, even Haskell can't do that, and if a language did think it a good idea, it could support it quite easily.
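A quick sketch of why switch-on-type needs no JVM changes (class names are illustrative): dispatching on runtime type reduces to `instanceof` tests plus casts, which is roughly how a pattern-matching switch can be desugared.

```java
abstract class Shape {}

final class Circle extends Shape {
    final double r;
    Circle(double r) { this.r = r; }
}

final class Square extends Shape {
    final double side;
    Square(double side) { this.side = side; }
}

class Area {
    // "Switch on type" desugared to an instanceof chain: nothing here
    // requires new bytecodes or VM support.
    static double area(Shape s) {
        if (s instanceof Circle) return Math.PI * ((Circle) s).r * ((Circle) s).r;
        if (s instanceof Square) return ((Square) s).side * ((Square) s).side;
        throw new IllegalArgumentException("unknown shape");
    }

    public static void main(String[] args) {
        System.out.println(Area.area(new Square(3))); // prints 9.0
    }
}
```

The language feature alluded to in the comment just gives this pattern nicer surface syntax and exhaustiveness checking.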


> That's true (and will be addressed)

Glad to hear it! It's probably the biggest issue trying to do functional programming on the JVM.

> There is no difficulty supporting that. In fact, the Java language is about to get that without JVM changes.

The technique to implement ADTs in functional languages on the JVM has often been to add an integer tag to every subtype and switch on that, but it's ugly enough for interop that IIRC Scala doesn't do it (and is thus less efficient).
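For readers unfamiliar with the tag trick, here is a minimal sketch (the tag values and class names are illustrative, not any particular compiler's actual encoding): each ADT constructor carries an integer tag, and "pattern matching" becomes a switch on it.

```java
abstract class Opt {
    static final int NONE = 0, SOME = 1;
    final int tag; // the integer tag baked into every subtype
    Opt(int tag) { this.tag = tag; }
}

final class None extends Opt {
    None() { super(Opt.NONE); }
}

final class Some extends Opt {
    final Object value;
    Some(Object value) { super(Opt.SOME); this.value = value; }
}

class Match {
    static String describe(Opt o) {
        // A tableswitch on the tag is fast, but the tag field leaks into
        // every subtype's layout -- the interop ugliness the comment mentions.
        switch (o.tag) {
            case Opt.NONE: return "none";
            case Opt.SOME: return "some(" + ((Some) o).value + ")";
            default: throw new AssertionError("unknown tag " + o.tag);
        }
    }
}
```

Without the tag, the alternative is an `instanceof` chain per case, which is cleaner for interop but historically compiled to slower dispatch.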


Paket is a great alternative to the NuGet client.

Tooling might be slightly behind Java's, but honestly, there's not much in it. Dotnet Core might only have been available on Linux for a few years, but Mono was around for many before that.

The dotnet CLI also recently got the concept of 'global tools', which are akin to npm's globally installed packages.

A lot of work is also underway on better cross-platform performance analysis, crash dump handling, and the like for the runtime in production [0].

Is there anything in particular you miss?

[0] https://devblogs.microsoft.com/dotnet/introducing-diagnostic...


Maybe not much that the OP misses. But then again, there's nothing to miss from Java either, since Java fulfills his requirements (maybe more than .NET can).

If I'm a Java developer who has been using Maven for a long time, I don't see the allure of switching to .NET just because NuGet or Paket exists.

The big data ecosystem is all JVM/Java.

Android started off as Java. Sure, they have Kotlin now, but I don't know whether developers are switching to Kotlin in droves.

Basically, the ecosystem of Java is already _there_ while MSFT is trying to catch up, so experienced devs (a.k.a. people who are already comfortable with the tooling) see no reason to switch.

As for me, I interned at MSFT back in the mid-2000s, drinking the .NET kool-aid. One day I woke up and realized that the part of the hi-tech world that interests me relies a lot on OSS, and a large portion of that OSS was Java. Fast forward to today: the FAANG companies are mostly Java shops. Java seems like safer ground for a longer career.

That's just me and my 2c. I'm too lazy to switch to .NET since there's no added value at the moment, unless I want to do back-office in-house web apps.


According to the roadmap, Mono's JVM interop used for Xamarin Android will be making its way more directly into .NET Core and will light up on almost every platform, which would give .NET greater, direct access to the Java ecosystem as well.

What about Kotlin?

Why downvote ? C#/Visual Studio is one of the best languages out there.

Lightyears behind it. There's no reason Java is still prevalent besides inertia. .NET/.NET Core is going to slowly but surely overtake it, with Java shooting itself in the foot via the new licensing terms, and with .NET/C#'s far better feature set and design (lessons learned from Java/the JVM's mistakes were fixed in C#/.NET).

.NET Core is winning a lot of benchmark comparisons now. And one might consider Rust to be more bleeding edge.

Java and Rust are both bleeding edge in their respective domains, which are quite disjoint. I like them both, and Rust may well dominate its domain one day as much as Java dominates its own (although Rust's domain is even slower-moving than Java's, so that process may well take decades). I don't know of .NET Core winning any quality industry benchmarks that people actually pay attention to, let alone a lot of them.

TechEmpower and the Benchmarks Game.

And SIMD is coming soon in .NET Core 3 which should be a big jump for many workloads.


True, but Java carries a lot of legacy.

"Legacy" includes good things too, not just bad things.

I grew up promising myself that I'd never touch Java, but when I did, it was a revelation. Everything just works. The debugger was just "attach and go" -- no recompilation with different flags, no fishing for the IDE/plugin version in which it wasn't broken, no glaring missing features (conditional breakpoints, stack traces...). The frameworks were refreshingly complete; I always felt that extensibility mechanisms were available, so I didn't have to gamble that my requirements fell into the "just works" subset rather than the "just doesn't" subset. I never had to wonder whether integrating with some bog-standard protocol would mean maintaining my own open source project; Maven always had something workable available.

Java wasn't born like this, but the legacy it accumulated made it formidable.

Early-adopting a hip language nets you a few sexy wins at the cost of completely eliminating a gigantic Brontosaurus-size long tail of important functionality and ecosystem maturity.


You're right. I never said otherwise.

Java, or projects in Java? Java pretty much has to, in order to preserve the most important feature a runtime can have: backwards compatibility.

Projects in Java are cursed from the beginning: the language has great constructs for containing complexity (modifiers, inheritance, interfaces, generics, design patterns enabled by performant virtual calls, etc.). This means that when projects in other languages become unmanageable, Java will keep chugging through business requirements like no other. Accumulating complexity is the fate of every successful system, and Java lets you go a long way.

Now, a newly arrived person on the job market will face systems with a pile of convoluted business logic a few human-centuries deep. The first ticket will be: 0.5 month getting accustomed to the language/framework, 1 month of archeology, producing 50 lines of code and 150 of tests, breaking a few tests, 0.5 month of corrections; all for something that would take one day in a greenfield project. The job might pay well, but it will be deeply ungratifying.

Now the sound enterprise strategy is to keep using Java: the platform allows for deeper and finer integration into the business processes. But the sound individual strategy for newcomers is to stay the hell away from it. Oh, you will only work with Rust/Go/Elixir/etc? Here, have a greenfield project.

Clojure/Scala/Kotlin/Ceylon/Frege/Groovy might be the best of both worlds.


It sounds like you're basically saying it's difficult and slow to work on large projects, and that the solution is to not work on large projects. And that new languages implicitly mean smaller and thus easier projects, because they haven't had time to grow.

The weird thing I always get with the always-greenfield developers is: how do you know you aren't just creating even larger messes tomorrow than the messes you're avoiding today? I guess it isn't your problem, though, if you're always working greenfield; it's someone else's.


Yeah, there is the legacy of the thousands of mature libraries, frameworks and development tools.

Legacy isn't always a bad thing.


Having moved to a stack with a much more competent standard library, it's much nicer to have that great standard library than to have to wade through a rich set of libraries all the time to achieve the same thing. I'm talking about Go, of course. It's just nice getting right to work coding up an http server without having to think about what "web framework" I should need to select, or which logging library to use.

What stack?

I never said it was bad. I just wanted to point out that it's not all newness.

To me Java is like a diesel engine, old but proven technology that can keep on chugging for hundreds of thousands of hours.

(Not to discount the recent improvements in GC, etc which are also amazing...)


Once you write code, it is legacy.

So are the frameworks within Java.

[flagged]


No, you don't need to reverse your opinion at all.

Personally, I voted you down because your opinion was stated in a sort of offhanded manner that does not do the subject justice.


The manner is neutral; it's only a couple of words. Do you want me to write a whole essay on the pros and cons of legacy? In that case I would invite a downvote, because then there would be something to disagree with. In this case it's just a statement that is true.

Legacy is a poorly defined term anyway. Is COBOL legacy? How about jQuery?

It's mostly a derisive term that seems to mean: "Old technology I don't like."

