Hacker News
IT Runs on Java 8 (veekaybee.github.io)
964 points by anielsen on May 10, 2019 | 529 comments



In the majority of companies it's simply not possible to operate on the bleeding edge the way HN articles would have you believe you should. Besides the obvious issues around the value of rewriting stable legacy systems on new platforms, there are also manpower issues. You need tier 1 developers to live on the bleeding edge because any problem that comes up (and they will come up) largely requires you to solve it yourself sans the help of the greater internet. On a legacy system an average dev can generally Google any issues that arise because they are known problems.

When dealing with new tech you're a lot more likely to be the first person to run into some obscure use case no one has ever experienced before. And in that case you need devs who are not only capable but willing to invest in solving the problem. Sometimes that means re-architecting something in the stack, and sometimes it may even require making a PR to the original project. That's a lot of investment that may make your devs happier but probably provides little concrete value to the business.

That said I'm still building on elixir so what do I know.


Really good points. IMO the web-surfing crowd's taste for news (as opposed to their capability when it comes to work) is deceiving. Their excitement for prospective high-leverage information creates a demand that biases community sites like HN and Reddit toward novelty, and the result is this massive FOMO loop: "Read it, remember to try it, forget to try it, aw damn a new thing already came out and it's better because X! Read it..." I love that the author of the article really speaks to this situation.

I have a relative who got really into this mindset. In fact he picked the "HN's Choice" software framework of the day for one of our co-creative projects a couple years ago. Mostly a fun project, but it could have gone somewhere, maybe. He got as far as setting it up so that we had some scaffolding, then basically he flamed out. I don't blame him at all; he'd never even used it before and expected himself to be able to just run with it. And this guy was a _master_ at a certain language starting with the letter P, but he was ashamed to use it. It made me so upset to see him feel all this pressure and then collapse.

Personally I feel that pressure myself sometimes but being aware of it helps a lot. As a hobby side project, I decided to go back and do some 1990s MS-DOS programming and experiencing this FOMO stuff was part of the motivation. I needed to be free to work deep instead of thinking broad, so to speak.


themodelplumber says: "And this guy was a _master_ at a certain language starting with the letter P..."

Prolog?

What a shame he couldn't use his best language! It has such nice web frameworks.

https://en.wikipedia.org/wiki/List_of_programming_languages#...


I imagine you're joking? More likely Perl, it was a big web language back in the day. Smaller chance that it was Pascal.


Actually PHP is more likely than either of those.


Good points here. Which is why HN is oriented toward startups with Tier 1 engineers. PG and his cofounders invented the web app, using a combination of old tech (lisp) and new (the internet).

There are plenty of other forums and sites to read about conventional tech. HN is where I find the bleeding edge. 90% of it is just fascinating. But the other 10% offer tantalizing possibilities for making something novel, which is to say: solving an unsolved problem.

“How to Become a Hacker” puts it plainly: no problem should have to be solved twice. Drudgery is evil. And by those axioms, a ‘Hacker’ is just uninterested in old solutions to old problems. We need to live in the future so that we can build the future. Although HN’s content has suffered over the last 5 years with the influx that accompanied a wider awareness of startups, it still is the best place I know of to dip your toes into the various futures we may one day encounter.

Having said that, lately I’ve found that Google Scholar is often more thought-provoking. If only there were an HN for Google Scholar.


Oh, for christ's sake. I don't want to get all "get off my lawn" but HN is full of early-20-somethings rediscovering things and calling them 'bleeding edge'.

The highest paid people in our industry are working on drudgery full-time for FAANG. And that's fine, people have families, I'm not judging anyone. But let's not fool ourselves.


A good example of this is static typing - shat on for years by HN, and now, all of a sudden it's the greatest thing since sliced bread!


Yeah, but I think the kind of static typing that was 'shat on' for years is not the same as the one being praised today.

The shat on one is the old Java, verbose, obtrusive style. The newly praised one is Haskell-style, type inferred, expressive...

Now you could say that's not new, BUT what is new is marrying an ML-style type system to languages whose other concepts devs are largely familiar with, and packaging it the right way to get into production instead of staying in academia. And that's now the case even for historically impenetrable areas like low-level programming.


Agreed. The article that opened my eyes to this shows (IMO) a serious deficiency in C#'s type system compared to F#'s, which includes the concept of tagged union types. It shows a very simple shopping cart program that can't be modeled cleanly in C# without using the visitor pattern... which is difficult to read IMO.

https://fsharpforfunandprofit.com/csharp/union-types-in-csha...


These are the kind of comments that make HN so special: knowledgeable and insightful, due to deep experience with the past and critical understanding of the present. It isn’t just a rush to what is shiny and new, but is capable of identifying what is novel and valuable, and is articulate enough to explain why.

Normally my comment would be unnecessary back-slapping, but since the conversation veered this way, I think it is appropriate to call out your comment as something that represents the spirit that makes HN unique and wonderful.


I never realised how highly some people praise hackernews comments lol

Hackernews has an _instagram filter_ where everybody brags about how smart they are.

Then there's a bunch of fresh grads who fanboy their favourite tech companies.

Then every so often you get a gem of a comment from a knowledge leader in the industry.

But that's quickly overshadowed by 'my Startup, my Startup, mah Startup' and how everybody wants to be rich one day.

But like every large community on the internet, it eventually devolves into an echo chamber.


Current-day HN strikes me as very similar to early 00s Slashdot, albeit more self-serious and with an ideology more informed by SV capitalism/entrepreneurialism than the convoluted politics of open source software.


Hacker News ranks at #959 in the US. It didn’t get that rank from just being geared toward startups with Tier 1 engineers.

Look on the front page right now and see how many stories are about startups. That didn’t change in the last five years. It’s always been about technology in general.

HN’s front page 10 years ago:

https://news.ycombinator.com/front?day=2009-05-10

HN’s front page 5 years ago

https://news.ycombinator.com/front?day=2014-05-10


We’re both making points based off anecdotal experience, so who knows. I have been a reader for the last 7 or 8 years, but my experience is just one subjective data-point.

Having said that, the most apparent change to me is in the tone and substance of the comments rather than the front page. And it isn’t a huge change. Just noticeable for me.


You don’t have to use anecdotal experience. You can use the same link format for any day that Hacker News existed and see the same thing.


HN archives are stored in BigQuery, so it would be possible to do an analysis that was more rigorous than a human reading the thousands of front pages to see a pattern of difference.


You are assuming "tier 1 engineers" are also the ones using the bleeding edge technologies. This is not true in my experience. The best developers care a lot about using the best tools available for a particular task - which is very different from using the newest tools available. In reality it typically takes a long time for a tool to become mature enough for productive use, at which point it is not cutting edge anymore.


In the case of open source, everyone wants someone to (beta) test their solutions in production. Only after a few years of such testing does a tech product become usable for the rest of the industry, and that's when it becomes profitable.

I'm not surprised the leaders of IT, the people who create these new technologies, push a narrative of using the new shiny things.


> Which is why HN is oriented toward startups with Tier 1 engineers.

Or engineers who've convinced themselves that they're Tier 1 engineers...


Are we reading the same HN? It seems the majority of people advocate fairly sane, proven choices for the majority of projects, e.g. start with a monolith, default to PostgreSQL for the DBMS, etc.


There's a pretty big difference between what the people on HN would have you believe and what the articles on HN would have you believe.


You've never seen disdain for Java on HN?


Somewhat but I think it's more a reflection of sentiment towards anything Oracle related.


> provides little concrete value to the business

The concrete value doing such things adds to our business is happy staff (which you mentioned), which means the best people don't leave, and stay excited and productive for the long term. It also helps when hiring, because it gets the best people to join us in the first place.

It's not just about "bleeding edge", it's about giving dev teams the freedom to do that if they want to. Some do more than others, but the point is, if they get it wrong, things break and they get called at the weekend or whatever and they soon change track.


The costs of doing such things include...

1. Half-done projects started by some happy staff who then went on to start the next project in the next shiny new thing, and either more-or-less completed by someone who was just about capable of cut-n-pasting without understanding, or by someone who decided that some new shiny thing needed to be added to make the rest wonderful.

2. A monstrous stack of projects in 637 different programming languages, frameworks, ideologies, coding styles, and indentation levels, guaranteed to require rewriting for any change. Result: 638 different things.


I always hear people warn about these things, but haven't seen it in practice at all, as long as those who write the systems are also responsible for operating them and not subjected to inappropriate external interference.

Of course, letting someone write whatever and then move on and leave it to someone else is a problem, but that's a problem regardless of whether they used bleeding edge tech or not.

For example, I've personally seen more tech-debt sins in more traditional monolithic Java + RDBMS applications than in any of the Go-based micro-services + NoSQL/etc. systems I've seen over the years. I'm not trying to prove that "bleeding edge" is therefore better, because it's not. I'm saying it's irrelevant.

The problem of leaving behind unmaintainable crap isn't about the tech, it's about the management, the team processes, the prioritisation process etc.


"Move Judiciously And Reuse Things" vs "Move Fast And Break Things"

(There may be some overlap)


Fun story: I had to develop the same system for two different companies, one a startup with 3 employees (the founders), the other a billion-dollar business. Just a backend API & web interface, together with an iOS app displaying the content.

I could use shiny new tech for the startup (which at the time was Python on App Engine and its NoSQL datastore, with the Backbone.js framework), whereas the other one forced me to use Java and deploy the code on the big corp datacenter.

The startup code was built in two months, and maintained by an intern for two years with success. It basically cost nothing to run.

The big corp code was built in a year (note: I did know how to develop in Java), the people in charge couldn't manage to deploy it (I had to do it myself, despite just discovering their infra on the spot), and I had to maintain the code myself, because despite huge code guidelines and restrictions on the technologies allowed, nobody inside was assigned to maintaining the code.

So, yeah, there's definitely a danger in absolutely wanting to follow the HN hype. But don't feel good if your company relies on 8yo tech and process. It could be losing a TON of money by not staying up-to-date with the state of the art.


It is not the technology that is slowing down the enterprises. That is done by processes, humans, and risk.

You cannot stay up to date permanently, because that introduces risks all the time. Once you earn money, there are layers to protect the money stream.

You can lose tons of money by not staying modern, that is true. But you can lose everything when you ignore a risk. The trick is to find a balance.

And all of that is not considering investors and their idea how the money should be used.


You can't avoid upgrading forever though, eventually your legacy technology will stop getting security updates and support. Then you are tasked with a huge leap from old to new.

I suspect corporates don't really want to get stuck with so much legacy, they just don't know how to avoid it.


That is right. That is the balance I mentioned. In our teams we try to update the base framework every year to avoid locking ourselves out of the innovation in the language and library ecosystem. But a UI stack, for example, you do not migrate without explicit funding.


You nailed this and this is very thoughtful.


The start-up code, written in modern Java, would similarly have taken two months to develop and been maintainable by an intern.

The problem in enterprise tech like you found is that one is forced to use certain, non-productive old frameworks filled with legacy, over-engineered bloat.

Modern Java micro-services in contrast are really fast to develop. Greenfield Java projects where one can make personal choices of lean technology and libraries are simply amazing. They also have terrific characteristics under load. The JVM performs well even with bloatware. When you trim things down and make your JARs lean and mean, it's utterly amazing.


> Modern Java micro-services in contrast are really fast to develop.

This attitude is a bit surprising to me. The main point of micro-services is that they address a complexity problem when dealing with large organizations. That is, they allow a large organization to break into small teams that can work (relatively) independently so each team can iterate faster. However, microservices definitely make a host of issues harder:

* cross-service transactions are much harder

* cross-cutting concerns can be more difficult to change.

* can add organizational complexity if a feature you want to add needs corresponding changes in upstream or downstream services.

Even Martin Fowler has this quote:

Don’t even consider microservices unless you have a system that’s too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don’t try to separate it into separate services.


And I agree with everything you say! We have followed Martin Fowler's advice. We had a monolith product developed by several geographically separated teams whose development crawled to a snail's pace and whose full build took an eye-rolling amount of time.

Micro-services allowed us to break this. Micro-services also allowed us a faster turnaround time in feature delivery and reliability. It is also easier to isolate problems.

Generally, stuff in the monolith that is already abstracted by large service facades is a good candidate for a separate service with its own independent data model. Avoid cross-service transactions completely. If you have a cross-cutting concern, it generally means you need a separate service managing that cross-cutting concern.

Organisational complexity is definitely increased. This can be mitigated by tooling. Our build pipeline shows the full graph dependency chain, what is built, what is getting built, what has been deployed, etc.

We have the concept of a "system" that is basically a versioned set of micro-services running off a build trigger. We developed the capability to namespace systems, i.e. each system of services uses separate resources (Kafka/DB/etc.) and separate URLs (via custom domains) when deployed on our cloud platform. You can also "plug in" your micro-service into a targeted system for diagnostics. This way dev, testing, and product demo teams can work independently. The latter is not micro-service best practice, but in a large, slow-moving organisation, we found it valuable.


> We had a monolith product developed by several geographically separated teams whose development crawled to a snail's pace and whose full build took an eye-rolling amount of time. Micro-services allowed us to break this.

We also have a distributed dev team, but decided against using microservices. Instead, we have a monolith with a plugin architecture, so remote teams can just add independent modules to add functionality. Occasionally changes are made to the monolithic application, and these changes are heavily scrutinized. The plugin architecture provides many of the benefits of a microservice, while also allowing for more flexibility on occasion, and eliminates the flaky network calls that are inherent to microservices.
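A minimal sketch of the approach described above, with names and API entirely hypothetical: the core application exposes a small registry, and each team's module registers handlers against it without touching the core.

```python
from typing import Callable, Dict

class PluginRegistry:
    """Core-owned registry; plugin modules add functionality via register()."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str) -> Callable:
        # Decorator used by plugin modules; duplicate names fail fast.
        def wrap(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
            if name in self._handlers:
                raise ValueError(f"plugin {name!r} already registered")
            self._handlers[name] = fn
            return fn
        return wrap

    def dispatch(self, name: str, payload: dict) -> dict:
        return self._handlers[name](payload)

registry = PluginRegistry()

# A remote team's module only touches the registry, never the core.
@registry.register("invoice.total")
def total(payload: dict) -> dict:
    return {"total": sum(payload["amounts"])}

result = registry.dispatch("invoice.total", {"amounts": [3, 4]})
```

The in-process call in `dispatch` is what replaces the network hop a microservice would need, which is where the "eliminates flaky network calls" benefit comes from.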


> microservices definitely make a host of issues harder ...

I agree with all the issues you've stated, but I'd like to add one more. Microservices arguably make it much easier to build systems with circular dependencies, leading to weird race conditions and deadlocks.

Consider two units of code A and B. If implemented as classes, modules, or libraries, it's relatively easy to spot and prevent A calling into B which in turn calls back to A. Sometimes the compiler and tools can automatically catch that.

With microservices, catching dangerous dependencies like this is much more difficult, as each service outwardly seems independent of the others and there are few tools to catch these dependencies.


There are few options, but what do you need beside a graph data structure?

We had a pipeline consisting of microservices and Kafka topics. Simple if/then logic quickly became problematic so I implemented our flow control as a directed acyclic graph, and it helped tremendously.

It's also easy to render your graph out with any number of visualization tools to quickly understand/validate work flows.


> There are few options, but what do you need beside a graph data structure?

I don't see how graph data structure solves this problem. Suppose you've created a photo sharing app. One microservice, A, has the graph database and stores the photos. One service, B, uploads and downloads photos. And a third service, C, applies filters to photos.

It's pretty easy to architect these services such that the download service B uses the filter service C in some situations, and the filter service C uses the download service B in others. This is obviously a bad design, but with microservices it's easier to make these bad design choices because the folks who wrote one service have little information about the other.
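To make that concrete, here's a minimal sketch (service names and dependency lists hypothetical) of the check an IDE gives you for free inside one codebase: if each service declared its downstream calls somewhere, a short depth-first search would flag the B/C cycle.

```python
# Each service's declared downstream dependencies. In a monolith the
# IDE can show this graph; across microservices someone has to build it.
deps = {
    "store":    [],
    "download": ["filter"],    # B sometimes calls C
    "filter":   ["download"],  # C sometimes calls B: the cycle
}

def find_cycle(deps):
    """Return a list of services forming a cycle, or None if the graph is acyclic."""
    visiting, done = set(), set()
    stack = []

    def visit(node):
        visiting.add(node)
        stack.append(node)
        for dep in deps.get(node, []):
            if dep in visiting:  # back edge: we found a cycle
                return stack[stack.index(dep):] + [dep]
            if dep not in done:
                cycle = visit(dep)
                if cycle:
                    return cycle
        visiting.discard(node)
        done.add(node)
        stack.pop()
        return None

    for node in deps:
        if node not in done:
            cycle = visit(node)
            if cycle:
                return cycle
    return None

cycle = find_cycle(deps)  # ['download', 'filter', 'download']
```

The hard part across services isn't the algorithm, it's getting teams to declare (and keep declaring) their dependencies in one place.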


Sure, a graph wouldn't help fix that, but it would at least illustrate the cyclic relationships, and hopefully someone recognizes the inherent drawbacks.

A tool to "fix" your example probably doesn't exist, but a graph is an excellent way to represent dependencies, reason about progress in the flow, and enforce constraints in a generic way.


Unlike the others, I'm going to disagree with what you've said. If you have a mono-repo set up, you can develop like it's a monolith but with the added benefit that you don't have to deploy the whole macro-service at once.

I think people are just spinning up a bunch of unorganized services and calling them micro and then complaining about it.

If you're on a small team that can build a clean monolith, the work to make them microservices is imo pretty trivial.


The main point was that it's a lot easier to make those design mistakes when using microservices. Like you said, there is no tool to easily spot and fix those issues. In the case of a monolith there are such tools, and often it resolves to simply viewing the dependency graph in your IDE.


I have never seen a microservices architecture where transactions spanning services were trivial.


If you need to do that then you've most likely cut along the wrong service boundary. The same thing can happen in a monolith, although you can kludge something out with spaghetti code, I suppose. Doing it the right way in both cases takes about the same amount of work.


How do you cut your service boundaries?


Sure, modern Java micro-services could be fast compared to legacy Java micro-services of years ago.

It's still not comparable with Rails. Let alone some niche bleeding edge technologies which are focused on productivity.


The goal of the enterprise dev process is that, no matter what the outcome is, nobody is to blame.


GAE went through a lot of changes after coming out; I remember the days spent catching up with its changes, so I wouldn't say it cost nothing to run for 2 successive years. Even worse, after around 3 or 4 years they completely cancelled the node I was on, notifying me to deal with my data myself within a limited amount of time. Bleeding edge my ass.


> GAE went through a lot of changes after coming out; I remember the days spent catching up with its changes, ... after around 3 or 4 years they completely cancelled the node I was on, notifying me to deal with my data myself within a limited amount of time.

I hear you brother. Google generally puts out the "smartest" stuff, but they don't really care about the day-to-day needs of most of their users. The idea is generally "we've thought about this a lot, and this is the best way, so change your code to handle it." The problem is that every few months, they seem to come out with a new "best way".


This sounds like Java is not the big corp's problem. This is bound to happen if the people deploying the software don't know their way around their own infrastructure and around the software they are going to deploy. A big corp tends to centralize knowledge into departments. To get stuff done, departments have to communicate and coordinate. If there are shoddy processes for this in place, you get a big, fat, slow organization.


I don’t even like Java but I can tell you it had nothing to do with Java vs. the new and shiny.


It wasn’t Java per se (as I said, they had a set of strict rules as to which frameworks I was allowed to use, aka Spring only), but the general culture around it, of a "safe and proven, enterprise-grade stack". Had to use Ext JS for the web interface, for example.

Yet, even with this stack, 50% of the time spent was due to internal process. As an example: we called them once to ask for an update, and they told us about doing a kickstart meeting. At this point we had already finished developing everything and wanted to talk about deployment (little did we know we would have to redevelop part of the stack because of guidelines we hadn’t been told about before).


I honestly believe given the same set of requirements, constraints, and people skilled with their respective stacks, the language and framework wouldn’t make that much of a difference.

Out of the four modern languages/frameworks that I’m fluent in: C++/TreeFrog (played around with it just because I am a masochist), C#/ASP, Python/Django and JS/Express, I really don’t see any difference in my productivity in any of them.


> there's definitely a danger in absolutely wanting to follow the HN hype. But don't feel good if your company relies on 8yo tech and process. It could be losing a TON of money by not staying up-to-date with the state of the art.

I think the tech itself has little to do with it. Cash-strapped small startups are hungry for developers, and often ask them to do more than they can. Large companies often have many developers idling or working on fake toy projects that never see the light of day. There are massive inefficiencies in large institutions. Using old tech may be one of them, but it's a drop in the bucket compared to inefficiently utilizing the people you have.


> But don't feel good if your company relies on 8yo tech and process. It could be losing a TON of money by not staying up-to-date with the state of the art.

And yet. Despite the danger, that was a billion dollar business. So they were doing something right, weren’t they? ;)


The "problem" is, in a big corp nobody gives a damn how much things cost. But you see, it's only a problem from a certain point of view (you want to get things done quick and cheaply). Otherwise, it's not a problem at all. It looks like you got yourself a nice support contract with that big corp, and the big corp will happily absorb the cost - everyone's happy.


What was the relative scale of the two systems?


Very similar. Very few users; everything could run on one server without a problem. The startup would have needed to scale up if they’d met some heavy success, but App Engine would have handled it just fine.


The HN front page is similar to any community. For example, a car site's homepage will be listing the latest supercars, expensive turbos, rims, whatever... while most readers are driving a $30,000 Civic.

The best & brightest in tech are working with the best tools on the biggest problems, and that's what gets talked about, regardless of what the mass is doing.


> The best & brightest in tech are working with the best tools on the biggest problems

I don't think that's true—there is a subset of the best and brightest who work on greenfield projects very decoupled from existing customer bases, and they get to blog / present / post a lot about what they're doing. There are quite a few "best and brightest" people who are in large companies or slow-moving industries. They're often constrained to existing tooling, because moving to fancy new tooling is a huge risk and time sink for limited reward. They might be using cool personal tools—fancy editors and keyboards and window managers—but the stack they work on is generally "legacy".

And usually the problem of "How do we make this work slightly better for millions of end users" ends up being a bigger problem than "How do we do something really cool as a demo."


The best and brightest are working on the greatest problems and are not focused on using the newest tools; in many cases they are using substandard tools because they are focused on the problem.


If they're really the best, wouldn't they have their pick of workplaces and optimize for personal enjoyment? Nobody who could chew through research level algorithm problems all day would willingly write Java 8 CRUD apps for Windows Server 2000, because those are the people that have a choice.


Personal enjoyment can take many forms and people have more than one priority in life. Big corps can also have their upsides aside from tech and process related questions that might be attractive. While I have my pet peeves that would hinder progress, I personally don't really care about a particular stack enough to get invested. If I'm too concerned with that aspect that would imply that I'm not working on the interesting part of the problem anyway.

You can handle a lot of restrictions if the domain/problem is interesting enough and the constraints put on you don't feel too taxing, e.g. because they aren't enforced for your role or team very much. I feel like company size just isn't a good indicator for personal enjoyment/growth/$whatever, lots of research oriented divisions in larger corporations will let you work on interesting topics and hand off the engineering part to teams with people that enjoy that particular aspect of our world, both working for the same company.


One, the hiring market isn't either liquid enough or high-information enough for this to work.

Two, they have their pick of workplaces and focus on finding the biggest problem or perhaps the biggest paycheck, not the most freedom in tools. I have perfect freedom in tools hacking on OSS by myself; I don't look for that in a job.


> Nobody who could chew through research level algorithm problems all day would willingly write Java 8 CRUD apps for Windows Server 2000

Why do you assume it's impossible to have a fulfilling career writing Java 8 CRUD apps for Windows Server 2000?


People have a strange idea of what "best and brightest" work entails.

I work with Java 8 on a greenfield high-frequency transaction platform for a very large company. It is extremely satisfying to build the "world's largest" of something, and no amount of shiny features in a cute new language would deter me from this work.


Good point about communities, but I don't think it's good to say "the best and brightest in tech" in this context. HN exists to gratify curiosity, and people are more curious about things they haven't seen before, or that are currently captivating their imagination. This emotion is not a reflection of the world around us—it's aspirational. But there's a risk if we start to feel inferior because we don't get to do that at our job or whatever, which is partly what the article is about.

Incidentally, it's the same curiosity that has the current article near the top of HN right now. It's not a point that has been articulated so often, or so recently, or so well, so it's fresh. Even if we already know it, it's a fresh reminder. (And also, of course, there's the catnip of the meta dimension.)


Are we just going to ignore the idea of paying $30k for a Civic? I hope no one is doing that.


If the poster was from Canada, $30K is actually about right for a 2019 Touring trim level (it would be even more after taxes):

https://carcostcanada.com/Canada/Prices/2019-Honda-Civic_Sed...


You could pull it off by buying a Civic Type-R, which I think fits the theme.


The exact price is beside the point, which is that most people will work with Java, PHP, JS, etc. at their workplaces yet indulge in the newest concepts/technologies on HN.


I have like $2k into a 90s Ranger. By the time I'm done doing "enthusiast things" to it I'll probably have well over $10k into it.


Well the sticker price isn't 30k, but that's probably what you end up paying by the end of the 5 year loan.


> The best & brightest in tech are working with the best tools on the biggest problems, and that's what gets talked about, regardless of what the mass is doing.

That doesn't have to mean new and shiny.

By way of analogy: the F-117 stealth fighter--the world's first operational stealth aircraft--was developed by the best and brightest (Lockheed Martin Skunk Works) and solved the biggest problem (visibility to radar). Aside from having a weird polyhedral shape and innovative radar-absorbing material, nothing about the plane was the "best", "latest", or "cutting-edge" at all. Much of it consisted of parts from other aircraft hacked together. If you actually do want to have the best cutting-edge technology all in one plane, you spend 25 years designing the damn thing--that's the F-35.


The initial version of the F-35 was lifted wholesale from the Yakovlev Yak-141. Basically, during the wild late '80s/early '90s in Russia, Lockheed Martin entered into an agreement with the Yakovlev bureau to do who knows what with the Yak-141 (which sounds impossible and wild in itself nowadays). The agreement lasted just long enough for them to rip everything off and was then dissolved.

http://aviationintel.com/yak-141-freestyle-the-f-35b-was-bor...


That's a massive exaggeration at best. The F-35B variant borrows the lift fan design from the Yak-141. That's a single feature of a single variant of an aircraft that has a half-dozen other completely unrelated features. The Yak-141 didn't have helmet-mounted displays, it wasn't stealthy, it didn't have the new combat information management systems, it wasn't built out of modern composite materials, and so forth.


Would be quite a feat to have all those things in the late 80s, don't you think? :-) It's not an "exaggeration". The planes even look similar, and Lockheed got $1.5T in government money for the few million they spent bribing government officials in Moscow.


> Would be quite a feat to have all those things in late 80s, don't you think? :-)

The F-35 has them. The Yak-141 didn't. Therefore, it absolutely is an exaggeration to say the F-35 was "lifted wholesale" from the Yak-141. Two of the three F-35 variants don't even have the lift fan!


>The best & brightest in tech are working with the best tools on the biggest problems, and that's what gets talked about

We should be so lucky. Nobody knows who's the best, what the best tools are, or what the biggest problems are. We only know the ones that get written about.

Writers cover what's flashy, who's the best self-promoter, and what makes the most money.


Exactly right.

Sites like HN and Reddit are aggregators in the literal sense: they take a large volume of data points and reduce them to a select few that get shown to most users. That aggregation function is "most interesting", not "most representative". Those are direct opposites of each other: interesting is almost by definition unusual.

I always find the subreddits that try to explicitly not target polar extremes the most fascinating from a sociology perspective. For example, this photo of a sink faucet is the most mildly interesting submission of the past year:

https://www.reddit.com/r/mildlyinteresting/comments/9ykoe4/t...

Does that mean all other submissions were less interesting, or less mild?


There is most likely some kind of division here, where 5% are working with the best tools on the biggest problems, 60% are working with old and suboptimal tools on CRUD that makes money, 15% are working with the latest and coolest at well-funded startups, and the rest are just following hypes, making peanuts, trying to fill their resumes. Here "best & brightest" depends on your definition; if you are in it for the money and do not live in the Valley, the brightest choice might be Java or C#.


$30,000 for a Civic? You better be getting gold-plated cup holders at that price.


1x $1900 Civic

1x $100 eBay turbocharger

1x $1000 misc expenses

27x $1000 JDM long-block

There are probably less fun ways to spend $30k on a Civic, too.


Like in Singapore:

Example A - Honda Civic 1.6 i-VTEC

    Registration Fees - S$220
    OMV - S$19,370
    Excise Duty (20 percent of OMV) - S$3,874
    COE (as of 23rd March 2018) - S$38,000
    ARF - S$19,370
    VES - $0
Basic Cost of Car - S$82,460 (~ 60k USD)

https://www.sgcarmart.com/news/writeup.php?AID=171


This is implying the best & brightest drivers are driving tricked out super cars.


Not a super-driver, but Enzo Ferrari used to daily drive a Fiat 128 [1]. Nowadays the former chairman of the VW Group and a member of the family that controls Porsche has just paid $18 million for a custom-made Bugatti.

[1] https://youtu.be/lQSac0Jpz0Y


Right. It's not what all the best and brightest are doing. It's what people like to read about, and what we want to be doing, even if most of what we do isn't that.


I think mundane programmers could be some of the brightest, but for whatever reason, they don't go into the limelight to be noticed.


Supercars or soup-of-the-day?


Many of the ahead-of-the-curve, aspirational tech-stack posts featured on HN are the blogging equivalent of Instagram-polished shots. On a personal blog they're equal parts self-expression and self-promotion; on a company blog they're equal parts deep dive and recruitment teaser. It's okay. Some readers will feel intrigue, some confusion, some envy. They'll wonder why they can't use those new tools at their work.

It's nice to recognize that there exist developers who seem to stand apart from the vocal ones who make these posts, but they're not a single class. Some brag just as much, just in different circles. Some don't brag, but do lots of good work. Some put out work that's not so good, and may or may not seek recognition for it regardless. The only factor they clearly share is their underrepresentation and their lack of attained popularity in talking about their work on social media.

As tempting as it may be, it's a big reach to conclude from their underrepresentation that their tech stacks are conservative, or even dated; nor is it a given that their work is solid and that they're hard at work solving business problems under numerous constraints and office politics. Efforts to define this population by their rejection of shiny hype just perpetuate a kind of fictitious class divide between "fancy" developers and "plain" developers, when the real class divide is between management that is desperate for results and IT workers who are desperate for empowered decision-making and resources. Businesses are under tremendous pressure, and silver bullets read about elsewhere carry great appeal. Transplanted organically by enthusiastic developers or by order of management, such solutions rarely work, because changing just a few inputs of a complex process won't deterministically give a better result. And mediocre results of a messy process make for a different genre of writing.

Maybe there's a class divide after all: between developers who are privileged enough to cook up all sorts of clever, custom solutions to challenging problems and can write about it, and those who spend their whole workday surviving the onslaught of nonsensical demands and deliver something mostly working in the end.


I have seen Excel files from end users that grew into whole applications. The users wanted more functionality, couldn't get through all the management levels to reach someone in corporate IT, and just started doing it themselves with the tools they knew, developing from Excel to macros to VBA to VBA with a SQL database (because the company has a standard process to request a new SQL database, and the data was too big to live in the Excel file directly). Then the company has this whole mess of an "application" that is already business critical, because a lot of other users rely on it as the fastest way to get their jobs done, and the person who developed it is the only one anyone knows who can handle and maintain it.

At some point the original person isn't available anymore, and the whole setup runs into trouble because of upgrades to its foundation (e.g. Excel) or plain performance problems. There are even consulting companies that specialize in optimizing such applications or migrating them to new Excel versions.

Corporate IT has to pick these users up early, to help them solve their current problem or to help them be more efficient with IT. If nobody helps, they will develop their own solutions with the knowledge and tools they have.


Agreed, 'shadow IT' is very real in the enterprise. And I don't think it's the users who are to blame - corporate IT departments just make everything so bureaucratic, expensive and difficult. Meeting after meeting, and all the time your department's budget is getting hammered for the hours, long before any development even starts.


I worked for a few years in enterprise app development at a Fortune 500, and our bread and butter (probably 75% of new code written) was making replacements for people's homegrown "apps" built on top of Excel or Access that had grown so large that they needed a real SQL back end.

I say this not to disparage those apps (though they were often horrifying to us as devs) but to point out that huge portions of your average large business probably run on stuff like that.


My wife used to work at a credit card company. Someone in their group had taken the Microsoft Access 'Northwind' sample database, changed the labels, and reworked the processes to more or less fit the SQL/tables that existed. No understanding, beyond behavior, of what was underneath at all. There was much 'you've got to be kidding' when corporate mandated that all database applications be migrated to Oracle.


If it reaches that point it becomes an enterprise project with a budget and resources/time. If you try to kill every excel sheet in the wild you take on those projects without the resources to support them.

An earlier commenter said enterprise values process over outcome. That is true at the macro level, but at the micro level bosses need to make decisions, or your new work will receive no credit from above and will be used against you when it fails.


I think you misunderstood something here, corporate IT's main purpose is to prevent people from working efficiently.

https://dilbert.com/search_results?terms=information+service...


The funny thing is that from a purely technological point of view, Java (even the 5-year-old Java 8, and certainly recent versions) is far ahead of most other stuff hyped on HN (as well as less hyped stuff). Virtually no other platform comes close to that combination of state-of-the-art optimizing compilers, state-of-the-art GCs, and low-overhead in-production profiling/monitoring/management. And much of the cutting-edge development and technological breakthroughs in these areas continues to take place in Java (disclosure: I work on OpenJDK full-time). Just in the past few years we've seen the release of open-source GCs with less than 2ms worst-case pause times on terabytes of heap (ZGC and Shenandoah) and a revolution in optimizing compilers (Graal/Truffle), and an upcoming release adds streaming access to low-overhead deep profiling (JFR). So Java is not only the safe choice for serious server-side software; it's also the bleeding edge.
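
For reference, both of those features are a couple of command-line flags away (a hedged sketch: flag spellings as of JDK 11/12, when ZGC was still experimental, and app.jar is a placeholder):

```shell
# ZGC: low-pause collection on large heaps (experimental in JDK 11)
java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xmx16g -jar app.jar

# JFR: low-overhead in-production profiling, open-sourced in JDK 11
java -XX:StartFlightRecording=duration=60s,filename=rec.jfr -jar app.jar
```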


This. Technically there is nothing I don't like about the JVM right now; everything that seemed impossible 15 years ago is now solved. AOT used to be a bag of hurt with GCJ (I know I could have used Excelsior, though I'm not sure it was free back then), but now even that will be a supported option via Graal.

Java the language still isn't pretty, but it has been much improved.

OpenJDK is GPL, and apart from the trademark (?) issues around Java (I haven't read much into the JakartaEE problem), everything should be fine. I am just a little uneasy with Oracle lurking around; I just don't know what they are going to do next.


> Technically there is nothing I don't like about JVM right now, everything that seems impossible 15 years ago is now solved.

Value types are a major missing piece in the JVM stack right now. It's at least on the roadmap, but it keeps getting pushed back and back and back. I'd also argue runtime generics is another one, and perhaps more depressingly one that is unlikely to ever get fixed.

.NET has both of them and also has the same core strengths the JVM does, so given the choice I'd go with .NET over the JVM 100% of the time as a result. The JVM's GC & JIT seem to be on a never-ending improvement cycle, but the actual language & core libraries are incredibly slow to react to anything.
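
The erasure half of this is easy to see from plain Java; a minimal sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        // With erased generics, List<String> and List<Integer> share a
        // single runtime class; the type arguments are gone at runtime.
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        System.out.println(strings.getClass() == ints.getClass()); // prints "true"

        // And generics over primitives must box: each element is a
        // heap-allocated Integer, not an inline int -- the gap that
        // value types are meant to close.
        ints.add(1_000_000);
        System.out.println(ints.get(0).getClass().getName()); // prints "java.lang.Integer"
    }
}
```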


> .NET has both of them

I'll give you value types, but reified generics in .NET were a mistake. It really makes interop and code sharing among languages hard, in exchange for a rather slight added convenience. This means that if you're a language implementor and you're targeting .NET, you'll get much less from the platform than you would if you target Java, which makes .NET not a very appealing common language runtime. And not only is Java a good platform for Kotlin and Clojure, but thanks to Truffle/Graal it's becoming a very attractive, and highly competitive, platform for Ruby, Python and JavaScript. All of that would have been much more difficult with reified generics.

Also, I don't think value types in Java are being "pushed back." The team is investing a lot of work into them as it's a huge project, but AFAIK no timeline has been announced.

> and also has the same core strengths JVM does

I don't think so. Its compilers are not as good, its GCs are not nearly as good, and its production profiling is certainly not as good (have you seen what JFR can do?); and its tooling and ecosystem are not even in the same ballpark.


I see it as a double-edged sword.

On the one hand, reified generics means that it's .NET's object model or the highway.

On the other hand, .NET maintains a much higher standard of inter-language interoperability. When I was on .NET, I didn't have to worry much about the folks working in F# accidentally rendering their module unusable from C#. Now that I'm on the JVM, I've accepted that it's just a given that the dependency arrows between the Java and Scala modules should only ever point in one direction.


It's not .NET vs Java so much as whether the development of the two languages is coordinated. For Kotlin and Clojure, interop works both ways; Scala doesn't care much about that, so Scala developers write special facades as Java-language APIs. There are lots of different languages on the JVM, and they care about different things. The Java team itself develops only one language, although at one time there was another (JavaFX Script). Some language developers (Kotlin, JRuby) work more closely with the Java team, and others (Scala) hardly ever do.


Dumb question. Why do reified generics make interop challenging? Or is it reified generics plus value types that don't inherit from System.Object? Couldn't the language implementations basically pass around ICollection<Object> in .NET, somewhat similar to how they do in Java?


> Why do reified generics make interop challenging?

Suppose you have types A and B, such that A <: B. What is, then, the relationship between List<A> and List<B>? This question is called variance, and different languages have very different answers to this, but once generics are reified, the chosen variance strategy is baked into the runtime.
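
For illustration, Java's answer is invariance plus use-site wildcards, all enforced at compile time only; nothing about variance is baked into the runtime because of erasure. A minimal sketch (A and B are the hypothetical names from above):

```java
import java.util.ArrayList;
import java.util.List;

public class VarianceDemo {
    static class B {}
    static class A extends B {} // A <: B

    public static void main(String[] args) {
        List<A> as = new ArrayList<>();
        // List<B> bs = as;           // does NOT compile: List is invariant in Java
        List<? extends B> view = as;  // use-site covariance: reading Bs is safe
        // view.add(new B());         // does NOT compile: writing is forbidden here
        System.out.println(view.isEmpty()); // prints "true"
    }
}
```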

> Couldn't the language implementations basically pass around ICollection<Object> in .NET somewhat similar to how they do in Java?

They could, but then this adds significant runtime overhead at the interop layer. For example, a popular standard API may take a parameter of type List<int>. How do you then call it from, say, JavaScript, without an O(n) operation (and without changing JS)?


> This question is called variance, and different languages have very different answers to this, but once generics are reified, the chosen variance strategy is baked into the runtime.

Which, realistically, is probably the only principled way to do things if you want to be doing much with variant generics in a cross-language way.

The Java way, "I pick my variance strategy, you pick yours, and we'll both pass everything around as a List<Object> at runtime and just hope our differing decisions about the allowed contents never bite us," is not type-safe and can lead to nasty surprises at run time. It's easier, sure, but easier is not necessarily better.


Except, realistically, of all the polyglot runtimes, the ones that have good interop erase and the one that doesn't reifies.


The problem is that your GenericClass<T1> and GenericClass<T2> are really more like GenericClass_T1 and GenericClass_T2 with their own distinct type definitions and interfaces. From the perspective of a different runtime/language trying to interop with these types, you have to somehow understand and work with this mapping game. It's much easier from inside the .NET runtime than outside.

The general solution is, like you suggested, to avoid using reified generics in the module interface where the interop happens.


The solution I remember from the last time I dealt with Python on .NET (which was admittedly a long time ago) was the opposite - you did use the reified generics, and there were facilities to create an instance of GenericClass<TWhatever> from within Python. There's a whole dynamic language runtime that is purpose-built for smoothing over a lot of that stuff.

What wouldn't work would be to, e.g., create a Python-native list and try to pass it into a function that expects a .NET IList<T>. Which doesn't feel that odd to me - they may have the same name, but otherwise they're very different types that have very different interfaces.

That said, the Iron languages never took off. My personal story there is that all the new dynamic features that C# got with the release of the DLR pretty much killed my desire to interact with C# from a dynamic language. The release that gave me Python on .NET also turned C# itself into an acceptable enough Python for my needs.


.NET Value types do inherit from System.Object.

See: https://docs.microsoft.com/en-us/dotnet/api/system.valuetype...


Is a value type different from a value class?

https://github.com/google/auto/blob/master/value/userguide/i...


Yes, those value classes are still heap/GC allocated objects.

Value types are a generalisation of `int`, `long`, `float` (etc.), where values are stored inline, not allocated on the heap. For instance, a `Pair<long, double>` that isn't a pointer but is instead exactly the size of a long plus a double, the same as writing `long first; double second;` inline.
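
Today's Java can only approximate that with a heap-allocated generic class; a rough sketch of the difference (Pair here is hypothetical, not a JDK type):

```java
public class PairDemo {
    // Today's Java: a generic pair is an object header plus two
    // references, each pointing at a boxed number on the heap.
    static final class Pair<F, S> {
        final F first;
        final S second;
        Pair(F first, S second) { this.first = first; this.second = second; }
    }

    public static void main(String[] args) {
        Pair<Long, Double> boxed = new Pair<>(1L, 2.0); // several allocations
        System.out.println(boxed.first + " " + boxed.second); // prints "1 2.0"

        // A value-type Pair<long, double> would instead be layout-equivalent
        // to these two locals: stored inline, no heap allocation at all.
        long first = 1L;
        double second = 2.0;
        System.out.println(first + " " + second); // prints "1 2.0"
    }
}
```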


.NET is tied to Microsoft, so I'd avoid it 100% of the time.

Yes, yes. I know that Microsoft theoretically open sourced and ported it. However the way that this always works is that there is a base that can be written in, but anything non-trivial will have pulled in something that, surprise surprise, is Windows only.

Otherwise I agree that it is a better Java.


They didn't "theoretically" open source it - they actually open sourced it.

I get why people used to shit on Microsoft, but Microsoft has demonstrated over a number of years that it's changed under Satya.

> However the way that this always works is that there is a base that can be written in, but anything non-trivial will have pulled in something that, surprise surprise, is Windows only.

Outside of desktop GUIs, this is simply not true. I'm writing complex, cross-platform systems that work just fine on Windows and Linux (and would on MacOS if I chose to target it).

Hell, even a lot of the tooling is now cross-platform: Visual Studio Code, Azure Data Studio, even Visual Studio and Xamarin run on MacOS!


No, Visual Studio does not run on macOS. Visual Studio for macOS is a fork of MonoDevelop.


So because it's not the same code base, even though it's produced by the same company with the same purpose, it's not "Visual Studio"? Was Photoshop not Photoshop on Windows when the assembly-language optimizations were different between PPC and x86?


I don't think this is true anymore.

The .NET 5 announcement was very clear that .NET Core is the future, and it's been a while since you've needed anything that is Windows-only to build a non-trivial .NET Core application.


Please double check my wording.

Yes, you can write a non-trivial .NET application on Linux. But if you take a non-trivial .NET application that runs on Windows, the odds are low that it can easily be ported to Linux. And there are almost no non-trivial .NET applications that weren't originally written for Windows.

The result is that if you work with .NET, you're going to be pushed towards Windows.


I think the wording of the announcement (taking them in good faith) applies to applications using .NET Framework. .NET Core should be 100% portable to Linux/Mac/wasm.

.NET 5 should supersede both Core and Framework IIRC


I have been hearing announcements about how Microsoft was working to make .NET code portable ever since Mono was first started in 2001.

In the years since, I've encountered many stories that attempted to make use of that portability. All failed.

I've seen the promise of portability with other software stacks, and know how hard it is. I also know that taking software that was not written to be portable, and then making it portable, is massively harder than writing it to be portable from scratch.

So, based on both the history of .NET and general knowledge, I won't believe that .NET will actually wind up being portable until I hear stories in the wild of people porting non-trivial projects to Linux with success. And I will discount all announcements that suggest the contrary.


I have been hearing announcements about how Microsoft was working to make .NET code portable ever since Mono was first started in 2001.

Microsoft didn’t have anything to do with Mono until 2016.

If you start from scratch with a greenfield .Net Core project there aren’t really any issues getting it to work cross platform.


You are no more “pushed toward Windows” with new code you do with .Net Core than you are if you use any other cross platform language.

While I’ll agree that anything that uses any of the advanced features of Entity Framework and MVC is not trivially ported.


When you're trying to write new code, you run into a problem and look for a library that solves it. But all of the libraries that you find are Windows First, and it is not until after you're committed to them that you sometimes discover how they are Windows Only.

So yes, even in a new project, there will be a pull back to Windows. Because virtually nothing is truly written from scratch.


It’s no more of a problem with .Net Core than it is with Python modules that have native dependencies, or Node modules with native dependencies.

You're not going to mistakenly add a non-.NET Core NuGet package to a .NET Core project. It won't even compile.

Of course you can find Windows only nuget packages for Windows only functionality like TopShelf - a package to make creating a Windows Service easy. But even then, I’ve taken the same solution and deployed it to both a Windows server and an AWS Linux based lambda just by configuring the lambda to use a different entry point (lambda handler) than the Windows server.

You can even cross compile a standalone Windows application on Linux and vice versa.

I use a Linux Docker container to build both packages via AWS CodeBuild.

Would you also criticize Python for not being cross platform because there are some Windows only modules?

https://sourceforge.net/projects/pywin32/


Looking at the announcement it seems they're basically folding all of the Windows-specific stuff back into .net core. Isn't that just going back to a compatibility minefield?


> the actual language & core libraries are incredibly slow to react to anything.

async/await being the obvious example. Still no sign of it in the Java language, nor any expectation of it, unless I missed something.


https://wiki.openjdk.java.net/display/loom/ which is superior to async/await, if I may say so myself (I'm the project lead)


That's very nice, but saying it's strictly superior to async/await is a stretch. Fibers/stackful coroutines are a different approach with its own tradeoffs.

On the plus side, fibers offer almost painless integration of synchronous code, while async/await suffer from the "colored functions" problem[1].

The price you pay for that is the higher overhead of having to allocate stacks. If you don't support dynamic stacks that can be resized, your overhead isn't much better than native threads'. There are two solutions I'm aware of, both of which have been used by Go at different times: segmented stacks, and re-allocating and copying the stack on resize. Both carry some memory overhead (unused stacks) and computational overhead (stack resizing).

[1] http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...
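
The "color" is visible even in plain Java 8 with CompletableFuture: once a function returns a future, its callers tend to become future-returning too. A small sketch (names are hypothetical):

```java
import java.util.concurrent.CompletableFuture;

public class ColoredFunctions {
    // "Blue" (synchronous): callable from anywhere.
    static int fetchSync() {
        return 42;
    }

    // "Red" (asynchronous): callers must either block on it (defeating
    // the point) or go async themselves -- the color spreads upward.
    static CompletableFuture<Integer> fetchAsync() {
        return CompletableFuture.supplyAsync(ColoredFunctions::fetchSync);
    }

    static CompletableFuture<Integer> doubledAsync() {
        // A caller of a red function turns red too.
        return fetchAsync().thenApply(n -> n * 2);
    }

    public static void main(String[] args) {
        System.out.println(fetchSync());           // prints "42"
        System.out.println(doubledAsync().join()); // prints "84"
        // With fibers, fetchSync() could simply block its fiber cheaply,
        // and no second "color" of function would be needed.
    }
}
```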


> Fibers/stackful coroutines are a different approach with its own tradeoffs.

The only tradeoffs involved, as far as I'm aware, are effort of implementation. There are no runtime tradeoffs.

> The price you pay for that, is the higher overhead of having to allocate stacks.

You have to allocate memory to store the state of the continuation either way. Some languages can choose not to call the memory required for stackless continuations "stacks" but it's the same amount of memory.

> Both carry some memory overhead (unused stacks) and computational overhead (stack resizing). Their "advantage" is that, because they're inconvenient, people try to keep those stacks shallow.

Stackless continuations have the same issue. They use what amounts to segmented stacks. "Stackless" means that they're organized as separate frames.


Great article, thanks. Some perhaps-silly questions:

1. Is it possible to inspect the state of a 'parking' operation, the way you can in .Net with Task#Status?

2. So fibers run in 'carrier threads'. Is there a pool of carrier threads, or can any thread act as a carrier? I'm thinking of .NET, where this is configurable (ignoring that .NET 'contexts' aren't exactly threads) by means of Task#ContinueWith() and the Scheduler class. I take it from the following snippets that fibers can only run on the thread where they were created:

starting or continuing a continuation mounts it and its stack on the current thread – conceptually concatenating the continuation's stack to the thread's – while yielding a continuation unmounts or dismounts it.

And also:

Parking (blocking) a fiber results in yielding its continuation, and unparking it results in the continuation being resubmitted to the scheduler.

On a non-technical note, how do OpenJDK projects feed back into the Java spec and Oracle Java?


1. Yes.

2. Configurable. A fiber is assigned to a scheduler, which is, at least currently, an Executor (so you can implement your own).

> I take it from the following snippets that fibers can only run on the thread where they were created

No. A fiber has no special relationship to the thread (or fiber) that created it, although now there can be a supervision hierarchy thanks to structured concurrency: https://wiki.openjdk.java.net/display/loom/Structured+Concur...

> OpenJDK projects feed back into the Java spec and Oracle Java?

OpenJDK is the name of the Java implementation developed by Oracle (Oracle JDK is a build of OpenJDK under a commercial license). Projects are developed in OpenJDK (for the most part by Oracle employees because Oracle funds ~95% of OpenJDK's development, but there are some non-Oracle-led projects from time to time[1]) and are then approved by the JCP as an umbrella "Platform JSR" for a specific release (e.g. this is the one for the current version: https://openjdk.java.net/projects/jdk/12/spec/)

[1]: E.g. the Shenandoah GC (https://wiki.openjdk.java.net/display/shenandoah) is led by Red Hat, TSAN (https://wiki.openjdk.java.net/display/tsan) is led by Google, and the s390 port (http://openjdk.java.net/projects/s390x-port/) is led by SAP.


Very neat. So it preserves the virtues of .Net's task-based concurrency, but is even less intrusive regarding the necessary code-changes to existing synchronous code.

Does it impact things from the perspective of the JNI programmer?

> OpenJDK is the name of the Java implementation developed by Oracle (Oracle JDK is a build of OpenJDK under a commercial license).

Ah, of course. I'd missed that.

> there are non-Oracle-led projects from time to time[1], and are then approved by the JCP as an umbrella "Platform JSR" for a specific release

How do they handle copyright?


> but is even less intrusive regarding the necessary code-changes to existing synchronous code.

Yes. All existing blocking IO code will become automatically fiber-blocking rather than kernel-thread-blocking, except where there are OS issues (file IO; Go has the same problem). Fibers and threads may end up using the same API, as they're just two implementations of the same abstraction.

> Does it impact things from the perspective of the JNI programmer?

Fibers can freely call native code, either with JNI or with the upcoming Project Panama, which is set to replace it, but a fiber that tries to block inside a native call, i.e., when there is a native frame on the fiber's stack, will be "pinned" and block the underlying kernel thread.

> How do they handle copyright?

Both the contributors and Oracle own the copyright (i.e. both can do whatever they want with the code). This is common in large, company-run open source projects.


> a fiber that tries to block inside a native call, i.e., when there is a native frame on the fiber's stack, will be "pinned" and block the underlying kernel thread.

Doesn't this boil down to the native function blocking the thread?

How about the C API/ABI of JNI? Will there be additions there for better supporting concurrency (i.e. not simply blocking)? Or can that be handled today, with something akin to callbacks?


If the native routine blocks the kernel thread, it blocks, and if not, it doesn't. While something could hypothetically be done about blocking native routines, we don't see it as an important use case. Calling blocking native code from Java is quite uncommon. We've so far identified only one common case, DNS lookup, and will address it specifically.


Nice. So going 100% async is a real possibility?


I'm not sure what that means.


> Fibers are user-mode lightweight threads that allow synchronous (blocking) code to be efficiently scheduled, so that it performs as well as asynchronous code

Sounds a lot like Erlang BEAM processes.


Yep; or Go's goroutines. Except that the fibers are implemented and scheduled in the Java libraries; the VM only provides continuations.


How does it compare to Windows fibers?


I find this really interesting. Care to provide some comparisons?


I think that the approach done in https://wiki.openjdk.java.net/display/loom/Main is better than the async/await infrastructure.


Well, other than the fact that it only supports Linux and MacOS.


Is that true? The build instructions are for a Posix-like evironment, but I haven't actually looked to see if the actual implementation supports Windows yet.

As someone who runs Windows and Linux about equally, in differing proportions over time, I do find it disappointing that some (b)leading edge JVM and Java features don't support Windows yet.


It's a prototype. It will support Windows when it's released, and probably sooner. We're literally changing it every day, and it's hard and not very productive to make these daily changes on multiple platforms, especially as none of the current developers use Windows (this is changing soon, though).


> I do find it disappointing that some (b)leading edge JVM and Java features don't support Windows yet

Seems understandable though. Java is primarily a tool for heavyweight Unix servers, after all. (This is of course an empirical claim, and I have no source, but I'd be surprised if I turn out to be mistaken.)

Makes good sense to go with the strategy of building an industrial-strength technology before investing the time to handle Windows.


I like Java. Java 8 streams are particularly interesting. It's fast, too. I took a Hadoop class (which taught Java 8 and ironically discouraged Hadoop use except in exceptional cases...).

The hardest part that everyone struggled with was getting a Java environment up and running. Gradle, Maven, Ant... You almost need an IDE. It's almost like they don't want people using it. I stopped when I didn't have to.

Plus the acronyms. Ones I didn't know from your post:

AOT, GCJ, Excelsior, Graal, OpenJDK, JakartaEE


It’s funny, I feel the same way about web development right now.


Except web development is almost all bootstrapped from a simple npm library these days. You generally npm install and you've got all your dependencies whether it's Angular, Vue, React or pretty much any modern web frameworks. The time for a new developer to get the tooling out of the way and start looking at code is dramatically shorter for web apps than Java in my experience.


It seems like you just don't know how to properly use Maven. In my experience it is always as simple as mvn {compile, test, package, install, deploy}.


>Plus the acronyms.

GCJ and Excelsior are really niche; even people familiar with the Java ecosystem might not know them, as they were mostly used for AOT (Ahead of Time) compiling Java into a single redistributable binary in the early 2000s. I was writing an RSS web server application back then and was looking into how to do client-side desktop apps... UI toolkits were a bag of hurt for Java, and I gather that is still the case today.

I think JakartaEE is really just a rebranded JavaEE.

I know Graal only because I follow TruffleRuby, a new Ruby implementation built on Graal (a JIT compiler written in Java). And it has proved that optimising Ruby, once thought impossible due to its dynamic nature, is in fact possible.


How is this any different from Python or JavaScript? NPM, Babel, Webpack, TSC, PIP, VENV, PyPy, CPython, etc. They all have their learning curves, and if you weren't in the ecosystem you wouldn't know what they meant.


> I am just a little uneasy with Oracle lurking around, I just don't know what they are going to do next.

What do you mean by "lurking"? Oracle is the company developing OpenJDK, and it will continue to do so. All our projects are done in the open, with lots of communication.


By "lurking" people mean that the executives who care nothing about open source are firmly in control, and some day may try to assert their control in ways that nobody else likes.

You may not remember incidents like why Jenkins was forked from Hudson, but Oracle is run by people driven by what they think they can get away with, and not by what is good for the projects that they have power over.


> the executives who care nothing about open source are firmly in control, and some day may try to assert their control in ways that nobody else likes.

I have no idea what Oracle may do tomorrow, but Oracle has been in control of Java for a decade, and what it has actually done is 1. significantly increase the investment in the platform and 2. open source the entire JDK. So I don't know about the next ten years, but the past ten years have been really good for Java (well, at least on the server).

> Oracle is run by people driven by what they think they can get away with, and not by what is good for the projects that they have power over.

I don't share your romantic views of multinational corporations. Corporations are not our friends, and while they're made of people, they're not themselves people, despite what some courts may have ruled. But like people, different corporations have different styles, and it would be extremely hard to call any of them "good." I have certainly never heard of one that is driven by caring (although what you do when you don't care may differ: some may be aggressive with licensing, some are in the business of mass surveillance, some help subvert democracy, others drive entire industries out of business through centralization, and others still drive kids to conspiracy theories).

When you look at what Oracle has actually done so far for Java, I think it has been a better and more trustworthy steward than Microsoft and Google have been for their own projects (Java's technical leadership at Oracle is made up of the same people who led it at Sun). And people who bet on Java ten years ago are happier now than those who bet on alternatives (well, at least on the server), despite some decisions that made some people unhappy. You can like the good stuff and be disappointed by the bad stuff without some emotional attraction to, or rejection of, these conglomerate monsters.


Your opinion is your opinion.

My opinion of the company and its products is more consistently negative than any other large company. And while you think that the non-Java world is suffering, I think you have some tunnel vision.

Let's just say that I am personally happy with my decision to stay away from Java. And the brief periods where I had to work with Java were misery. Languages have personalities as well as companies, and there is a reason that the startup world stays away from Java in droves.


> and there is a reason that the startup world stays away from Java in droves.

You may have your own case of tunnel vision. I mean, sure, there are "droves" of startups that stay away from Java (many only to turn to it later), but there are also "droves" that adopt it from the get-go.


Why don't we look for some concrete information?

According to the statistics reported by https://www.codingvc.com/which-technologies-do-startups-use-..., Java looks like the fourth most commonly used language in the startup world, and its usage is not particularly well correlated with success.

Both Ruby and Python are more popular than Java, AND are better correlated with how good the company is. Your odds of being in a successful startup are improved if you are in those languages INSTEAD OF Java.

What about from the individual programmer level? Triplebyte did an article about how programming language and hiring statistics correlate. My impression is that their programmers are mostly being hired into relatively good startups, so it is a pretty good view of the startup world. That article is at https://triplebyte.com/blog/technical-interview-performance-....

Long story short, Java was the #2 language that programmers chose, behind Python. Not so bad. But choosing Java REDUCED your odds of actually getting to a job interview by 23%. And for those who got to a job interview, it reduced the odds of actually getting hired by 31%. By contrast, Python IMPROVED those same odds by 11% and 18% respectively.

Apparently the startup world doesn't like Java developers either. You'd be far better off with Python.

Now I'm sure that you can trot out every successful Java startup out there. And there will be quite a few. But based on available data, not opinions, I did NOT express tunnel vision when I said that the startup world stays away from Java in droves.


If you truly believe any of the conclusions you've drawn from the numbers in the links you posted, then your favorite programming language REDUCES statistics skills.


Startups don't use Java because Java is for large-scale, stable, long-lived enterprises, not for prototyping simple small web apps that might be thrown away in a couple of years.


You often hear this, but what does it actually mean? Why is Java for one but not the other?

Here is my understanding.

Java was designed to limit the damage that any developer could accidentally do, rather than maximize the potential for productivity. Which is an appropriate tradeoff for a large team.

It is hard to get good statistics on this, but the figures that I've seen in books like Software Estimation suggest that the productivity difference is around a factor of two.

This matters because it turns out that teams of more than 8 people have to be carefully structured based on the requirements of communication overhead. (A point usually attributed to The Mythical Man-Month.) This reduces individual productivity. Smaller teams can ignore this problem. The result is that measured productivity for teams of size 5-8 is about the same as a team of 20. But the throughput of large teams grows almost linearly after that. An army does accomplish more, but far less per person.
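The communication-overhead point is just pairwise arithmetic: a team of n people has n(n-1)/2 potential communication channels (the usual Mythical Man-Month figure). A quick sketch of how fast that grows:

```java
public class CommChannels {
    // Pairwise communication channels in a team of n people: n * (n - 1) / 2
    static int channels(int n) {
        return n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        for (int n : new int[] {5, 8, 20}) {
            System.out.println(n + " people -> " + channels(n) + " channels");
        }
        // prints:
        // 5 people -> 10 channels
        // 8 people -> 28 channels
        // 20 people -> 190 channels
    }
}
```

Going from 8 to 20 people multiplies the channels by nearly seven, which is why larger teams need explicit structure to cut most of those channels off.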

Limiting damage matters more for large teams. Which are more likely to be affordable for large enterprises. However being in such an environment guarantees bureaucracy, politics, and everything negative that goes with that.

By contrast startups can't afford to have such large teams. Therefore they are better off maximizing individual productivity so that they can get the most out of a small team. And using a scripting language is one way to do that.


Today, I go back and forth between three languages at work: .NET, JavaScript, and Python. For simple prototype web apps, or more realistically REST microservice APIs to feed front-end frameworks, I really don't see any of them being slower to develop.

For larger applications with multiple developers working in the same code base, the compile-time checking of static languages is a godsend. I would at least move over to TypeScript instead of plain JavaScript.


Oracle has a long track record of Sales & Marketing tactics which we can use as a reliable benchmark to predict outcomes.

Oracle will likely pursue the most aggressive strategy they can get away with for Java.

I don't believe Sun was suing Google, but Oracle did.

The fact that Google is switching to Kotlin is mostly a means to absolve themselves of the 'Oracle risk' - it's a big change surely, a decision not taken lightly.

The future of Java under Oracle is hard to predict, but there are legit concerns that Oracle will make things hard.


Kotlin uses the same VM and API, so it makes no difference in this regard. It's not a big change – it's fully interoperable with Java. You can easily take a single class in a Java application and rewrite it in Kotlin, and everything continues working just as before.

Google adopted it because, as they more or less said in the announcement, it was already being adopted by the community and it hugely improved development experience.


But you're splitting your developer base:

* There will be people better at Kotlin

* There will be people better at Java

This is a problem when you are looking at hiring new people, etc. This fragmentation is going to cause issues just because people are hedging against Oracle's future decisions.

In a perfect world Google would have bought Sun, and the current version of Java would look a lot like Kotlin.


Kotlin is a light syntax for a coding style. It's as easy for a Java dev to learn Kotlin as it is to learn Spring or Hibernate or whatever library or framework the team at your new job uses.


It's a whole other domain of things to learn, and now we're stuck with context switching all of the time.

Even if Kotlin were 'better' (and I don't think it is), it'd have to be quite a jump better.

It's a little nicer for getting ideas down quickly, but beyond that to me it's just 'different' and now a whole other bag of things to support.

If I have to choose between Kotlin+Java or just Java, I'll take just Java.

Going back to Java from Kotlin there's really nothing I miss.


I think Sun might have wanted to sue Google?

https://news.ycombinator.com/item?id=10951407


Yes, thanks for that; it stirred my recollection, as I actually bumped into Jonathan Schwartz by accident just in that era.

I don't think it was money, so much as the established culture at Sun (i.e. James Gosling: "Sun is not so much a company as a debating society"). A more aggressive CEO/leadership/culture would maybe have raised the money to take on Google, or taken another tack.

So while you are right - and thanks for the reference - the issue here is what Sun was about, vs. what Oracle is about.


Whatever Sun was "about", sadly, it didn't work, and damaged some excellent technologies, like Java and Solaris, that Sun could no longer invest much in because it no longer had the resources. Oracle managed to save one of them and make it thrive. Sun, as a big, impactful company, was a product of the dot-com bubble. It certainly made more lasting contributions than other bubble-era companies, but its strategy couldn't survive the crash. Maybe great ideas can be born in companies like Sun but need companies like Oracle to sustain them.


I've been saying for years that Pivotal Labs is a debating club that produces code as a by-product. But now I'm wondering if I read the Gosling quip and then forgot I had.


Google made billions from Java while Sun went nearly bankrupt, and Google is now among the top-10 wealthiest companies in the USA. Oracle, in partnership with the former Sun, trying to get money from Google is a different issue from your company's risk.


I do understand Oracle is paying the bill, as well as funding the team working on Graal and TruffleRuby, so I am grateful for that. Thank you.

>What do you mean by "lurking"?

Referring to the copyrighting of APIs a while ago, and the JakartaEE problem which has blown up on my Twitter feeds. I understand why Oracle is trying to charge money, and I am perfectly fine with that; I just don't like that they are using API copyright as the tool. And whatever the problem is with JakartaEE this time around, I don't have time to follow it.


In a lawsuit, Oracle pushed for API's to be copywritten, not just their implementation. They also have paid lobbyists. They're also greedy assholes. The combo of greedy assholes and the ability to rewrite the law is a dangerous one.

So, I don't use a language unless it's open with patent grants and has a non-malicious owner. At this point, Wirth's stuff is probably legally the safest.


I'm curious about the last part. Are you using Modula 2 or something?


I don't have any public projects to release right now, so I don't have to worry about getting sued. Modula-2 was nice, but you could use any of Wirth's languages with low risk. Although Lisps had lots of companies involved, Scheme is probably safe, with PreScheme aiming for low-level use. A Racket dialect with C/C++ features, like the ZL language had, could be extremely powerful and safe.

Rust, with Mozilla backing it, is probably not going to get you sued. Nim has potential given that their commercial interests are paid development and support so far. As in, the less greedy they are, the better. Languages controlled by community-focused foundations, such as Python, are probably pretty safe. Although it was risky, the D language now has a foundation. Although they have no foundation or legal protections, the Myrddin and Zig languages are being run by folks who look helpful rather than predatory.

So, there are a few examples you might consider if you're avoiding dependencies on companies likely to pull shit in the future. Venture-backed, publicly-traded, growth-at-all-costs, and/or profit-maximizing-at-all-costs are examples of what to avoid if you want to future-proof against evil managers turning a good dependency into a bad one.


> copywritten

copyrighted

"Copywritten" probably means nothing, but if it did, it would have something to with copy writing, the act of writing for publication (usually commercial, usually not long-form).

Added: FYI "copyrighting" is not a conscious decision, or an action you can take. Copyright emerges automatically when you create a work, what they've done is defend their copyright in court, and the courts have mixed opinions on the matter.


That is a gross mischaracterization of what Oracle did. They didn't just defend a copyright in court. They pushed to extend copyright to a mostly functional element that copyright law has not traditionally been thought to cover. It's a tremendously harmful viewpoint for interoperability.


Not just "not traditionally been thought to cover", but which existing precedent said DID NOT cover.

Does it surprise anyone that this case was decided by the Federal Circuit? The rogue court most consistently overturned by the Supreme Court, which also is responsible for most of the disastrous software patent cases out there.

The only bright light is that the Supreme Court has reopened the question. Given how often they overturn the Federal Circuit, we have real hope that we'll return to the previous precedent. Which is that since matching APIs is a functional part of how code works, and things that are functional are by law not copyrightable, APIs are not copyrightable.


Yes, copywritten isn't a word, but their point was that Oracle pushed for API's to be copywritable, which was not the case before their suit. It's an incredibly bad result with many shitty implications that are currently mostly being ignored but could lead to legal nuclear war at any time.


Not to belabor the point, but... "copyrightable".


> It's an incredibly bad result with many shitty implications that are currently mostly being ignored but could lead to legal nuclear war at any time.

I mean, it has been big news, and it has already been nuclear war, with Oracle putting Google in a position to switch Android from Dalvik (and successors) to OpenJDK. I agree that it could become a pretty horrible precedent (imagine if Microsoft forbade Sun from implementing Excel functions in StarOffice, or for that matter, if MS were prevented from producing Excel in the first place).


> imagine if Microsoft forbade Sun from implementing Excel functions in StarOffice, or for that matter, if MS were prevented from producing Excel in the first place

The things you're talking about are already protected by patents, and the copyrightability of APIs has nothing to do with them. At the very least, for something to be copyrightable it must be some specific fixed expression (a piece of text, image, video or audio). So the O v. G ruling applies only to actual (code) APIs; not to protocols (or REST "APIs") and certainly not to stuff that's already protected by patents (the distinction between the two may not always make sense to programmers, but it is what it is; for example, algorithms are patentable but not copyrightable, while programs are copyrightable but not patentable).


You should do some research on the Oracle v. Google case.


It's literally no different. Excel has APIs just like Java; it's just that the code goes in cells rather than on lines.


The licensing has gotten super onerous.


The licensing has gotten far better. First, Oracle has just open sourced the entire JDK for the first time ever, and second, instead of offering a mixed free/paid, open/proprietary JDK (with -XX:+UnlockCommercialFeatures flags), it now offers the JDK under either a completely free and open license (under the name OpenJDK) or a commercial license for Oracle support subscription customers (under the name Oracle JDK).


Thanks, that clears things up a bit.


What licensing? OpenJDK is licensed under GPLv2 plus Classpath exception, the same as ever.


Support for native code is very bad. JNI is a pain to use and very slow; IPC is often faster. High-performance numerical code often suffers because of poor vectorization. Not to mention that tuning the JVM is often needed for critical tasks. Modern GC'd languages like Go have much better memory footprints, and the penalty for calling fast numerical code is much smaller.


Panama (https://openjdk.java.net/projects/panama/) will be replacing JNI very soon, and I don't think you're correct about vectorization. While I think Go has some good features, nothing about it is more "modern" than Java except in the most literal chronological sense; Java is more modern in almost every other meaningful sense. While you may need to tune the VM for critical tasks, in Go you don't need to tune, but rather just run slower.


I thought everyone was using JNA [1] for native access these days. JNA overhead is pretty low, and it’s much simpler to use versus JNI.

[1] https://github.com/java-native-access/jna


Respectfully I'm not sure that's true.

> State-of-the-art optimizing compilers.

LLVM provides coverage for many of the new languages that are hyped these days (with the notable and truly unfortunate exception of Go). LLDB provides an excellent debugger infrastructure.

> State-of-the-art GCs.

True, there's some good work there but I'm excited about languages that don't need GCs at all, so that we can finally stop tricking the memory wizard into complying with our workloads and focus on determinism, power efficiency and memory efficiency.

> Low-overhead in-production profiling/monitoring/management.

Correct me if I'm wrong, but a lot of that is available in DTrace, is it not?

This is actually why I'm so excited by LLVM. I think it's the biggest advancement in computer science in ages and ages. It allows compiler engineers to focus on bringing what they do better and differently to the stack. It allows them to leverage the years of work and bundles of PhDs' worth of research that went into the rest. All that time Go wasted developing assemblers for platforms should have been spent focusing on Go and letting LLVM handle it.

Over time the delta between what Java has and what LLVM has will shrink which in turn narrows the gap between Java and every other LLVM based language.

Think about it: macOS is basically just various front-ends for LLVM. Apple's implementations of OpenCL and OpenGL, the Metal Shading Language and Core Image are all wrappers for LLVM. Swift is a wrapper for LLVM. Clang makes C and C++ fancy wrappers for LLVM. Rust is a wrapper for LLVM. Almost everything your Mac does is a fancy front-end for LLVM. Even Safari was (until the B3 backend) a fancy front-end for LLVM.


As I said in another comment, there can be more than one thing that's cutting edge, especially as the JVM and LLVM operate at completely different levels. Never mind GCs -- LLVM isn't even safe, and neither is WASM; and that's good at that low level. In fact, LLVM is employed by Falcon, the Zing JVM's JIT, and OpenJDK's HotSpot is compiled to LLVM on Mac.

> I'm excited about languages that don't need GCs at all, so that we can finally stop tricking the memory wizard into complying with our workloads and focus on determinism, power efficiency and memory efficiency.

That's fine, but it seems like giving up GCs comes with its own non-negligible costs, so much so that the two hardly compete in the same domains.

> Correct me if I'm wrong but a lot of that is available in dtrace is it not?

They have similarities (closer analogs would be https://github.com/btraceio/btrace and http://byteman.jboss.org/ than JFR), yes, but operate at different levels with somewhat different capabilities.

> This is actually why I'm so excited by LLVM. I think it's the biggest advancement in computer science in ages and ages.

LLVM is popular, extremely useful, and quite cutting edge in its domain, but there isn't much of a "computer science" advancement as there is, e.g., in Graal/Truffle.

> It allows them to leverage the years of work and bundles of PHDs worth of research that went into the rest.

It certainly does, but there's a lot that LLVM doesn't give language designers that a JVM does, like GCs and high-level language interop.

> Over time the delta between what Java has and what LLVM has will shrink which in turn narrows the gap between Java and every other LLVM based language.

I don't know if LLVM wants to operate at Java's level or vice versa. While LLVM may offer some basic GC and Java may offer "safe" LLVM with things like Sulong, I believe their main focus will be their own respective levels.


Java itself is fine. It's very popular--and that's the biggest problem. Not the language as such but the developer ecosystem around it. I am dead tired of seeing 80-character camel-case function and variable names, annotations that hide critical functionality of the code, stack traces dozens or hundreds of lines long because of the amount of indirection and poorly conceived configuration "languages" used to support dependency injection so developers can save a few characters of typing or mask dependencies and complexity, etc.

The language may be fine, but it has been appropriated into the worst sort of terrible programming paradigms one might imagine.


The terrible programming paradigms are workarounds for the terrible language.

I read an old blog post about Spring's @Required annotation versus plain constructor injection and typed out a small example to get a feel for it – a simple class with one required and one optional dependency – which resulted in

23 lines of plain Java with constructor injection, or

16 lines of Java with Spring @Autowired magic, or

7 lines of Kotlin – no magic needed.
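For reference, the plain-Java constructor-injection shape being compared looks roughly like this (Database, AuditLog, and ReportService are made-up names for illustration; the point is that the wiring is explicit and framework-free):

```java
// A class with one required and one optional dependency,
// wired via plain constructor injection -- no annotations, no container.
interface Database { String fetch(String key); }
interface AuditLog { void record(String msg); }

class ReportService {
    private final Database db;    // required
    private final AuditLog audit; // optional

    ReportService(Database db) {
        this(db, msg -> {}); // default to a no-op audit log
    }

    ReportService(Database db, AuditLog audit) {
        if (db == null) throw new IllegalArgumentException("db is required");
        this.db = db;
        this.audit = audit;
    }

    String report(String key) {
        audit.record("report for " + key);
        return "Report: " + db.fetch(key);
    }
}

public class Demo {
    public static void main(String[] args) {
        ReportService svc = new ReportService(key -> "data-" + key);
        System.out.println(svc.report("q1")); // prints Report: data-q1
    }
}
```

The verbosity is real (two constructors just to make one dependency optional), but the trade-off is that the dependency graph is visible in plain code rather than assembled by reflection at runtime.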


We aren’t talking about Java the language. We are talking about the JVM.


I used to be a Java hater (I'm much more neutral now -- I even find aspects of it pretty pleasant). To me, it was never about the technology of Java (always been pretty impressive), or even the language (a little verbose -- but so is C++ and C# and I like both of those), it was really just about the ecosystem. For whatever reason 2000s-era Java had so many libraries that were just insanely over-engineered and had these absurdly deep object taxonomies, and trying to figure out how to do something came down to figuring out how 5 or 6 different classes interrelated. It was such a nightmare, especially since the IDEs back then weren't as smart as they are now. I think it's become a lot better now though: modern libraries seem to have learned a lot of those lessons, having proper closures makes a huge difference, and things like IntelliJ are really great at making the verbosity much easier to deal with.


A lot of the improvements in the ecosystem came from improvements in the language. GoF (Gang of Four) patterns are in part shaped by language.


Great points, but it's actually much simpler than that: if you want static types, you have already eliminated the majority of HN darlings (JS, Python, and Ruby). Where do you go? Most developers turn to C++, Java, or C# (not Haskell or F#, for example), and of these, Java and C# are both great for productivity and not daunting for someone just starting out, unlike C++.

There was a period after 200x where developers managed to throw out a bunch of babies along with the bathwater when they rejected Java, UML, SQL, and everything else that was used in 199x. After all, these are all literally from the last century, so they must be "uncool", amirite? /s

The preference for dynamic or duck typing in the name of productivity has always rubbed me the wrong way. To keep it brief, my defense / elevator pitch for static typing is: "Slow is smooth. Smooth is fast" :-)


> eliminated the majority of HN darlings (JS, Python, and Ruby)

Those were the darlings ten years ago (along with CoffeeScript and Clojure). The pendulum has swung back towards types and now the hip ones are Go, Kotlin, Swift, TypeScript, and (to a lesser extent) Haskell, Hack, Reason, and OCaml.

> Java, UML, SQL, and everything else that was used in 199x.

There was the whole no-SQL fad, but lately, even here, I see a lot of people re-discovering and advocating the relational model. Postgres seems to be hotter than MongoDB today.

Much of what is good about Java lives on in Kotlin and Swift. I agree it is unappreciated with today's eyes. Few remember that it was Java that introduced much of the world to garbage collection, memory safety (!), optimizing JIT compilation, runtime reflection, dynamic loading, high quality static analysis IDEs, etc.

UML is garbage. A visual language designed by non-artists with no aesthetic expertise. It deserves to be forgotten.


> Few remember that it was Java that introduced much of the world to garbage collection, memory safety (!), optimizing JIT compilation, runtime reflection, dynamic loading, high quality static analysis IDEs, etc.

Java is still continuing to lead on technology. In recent years it has introduced the world to low-latency, concurrent copying garbage collectors (C4, ZGC), partial-evaluation Futamura-projection optimizing compilers (Graal/Truffle), and low-overhead continuous deep profiling in production (JFR). I think that lots of people remember that, and care about those things, because Java is not only leading technologically but also in market share.


Asking as someone who cares about aesthetics a lot more than most developers, what does UML have to do with aesthetics? UML is for modeling systems and relationships through agreed upon conventions, and sometimes generating code stubs based on those diagrams. As long as you adhere to the conventions, the aesthetics can be changed as necessary.


It's a visual language. It exists entirely to be consumed by human eyeballs. Aesthetics are the user interface for that process.


But you can customize its aesthetics.


> Great points, but it's actually much simpler than that: if you want static types, you have already eliminated the majority of HN darlings (JS, Python, and Ruby).

Well, if you ignore mypy and consider TypeScript distinct enough from JS not to count, sure, though TS is if anything more of an HN darling than bare JS. (Really, none of those languages has been an HN or developer-community darling in years, though Python, due to ML and data science, is less uncool than Ruby and JS right now. And, yeah, Ruby is left out of the "but I have static type checking available" list... till Sorbet is available later this year.)

> Most developers turn to C++, Java, or C#

I'd be surprised if more than half (most) developers using static typing use those languages and not others outside that set.


>> Most developers [from context: who want static types] turn to C++, Java, or C#

> I'd be surprised if more than half (most) developers using static typing use those languages and not others outside that set.

Really? I'd be surprised if it was less than 90%. What is the competition? Go, TypeScript, Swift, Scala, Kotlin, Rust (in rough order of my gut sense for how widely used they currently are)? These are all still very small relative to C++, Java, C#.

If you argue that C has static types, then I guess you're probably right. But it isn't really true that C has static types in the contemporary sense (which I think is perhaps better referred to as "strong" type safety).


> If you argue that C has static types, then I guess you're probably right.

C (like C++) is both statically and weakly typed; strong and static typing are orthogonal axes (dynamic but strongly typed languages are common). If you mean strong and static when you say static, you need to take C++ off your list.


No, it's (mostly) not about that.

It's about C not having any generics or similar things (like C++ templates), so you need to fall back way more often on void*.

So on that axis Go is roughly as statically typed as C, albeit more strongly typed.

On the other hand C++ is more statically typed than C, but is about as strongly typed (afaik).


Yeah, I kind of wondered about that as I was writing my comment, seems like a fair point.


According to Github: https://github.blog/2018-11-15-state-of-the-octoverse-top-pr...

As it happens, among the statically typed languages, C++, Java, and C# are the top three, with C and TypeScript filling out the top 10 lineup.

That's not saying "more than half", but if you include C onto that list, I would definitely bet against your statement.


While I still like Python, Python and Ruby haven't been darlings of anything in probably the last 10 years. Go, Rust, Swift, and Kotlin are newer statically typed languages that lots of devs on here like though.


This timeline is a bit aggressive for Ruby. The core of my career using Ruby was 10 years ago, and I would still call it "darling" at that point, though the rumblings were beginning. Node.js was first released almost exactly 10 years ago, and (from my perspective) over the next few years marked the first real exodus from people who had previously been all in on Ruby. I missed out on that one because I thought node.js seemed like an immature answer to a question I wasn't asking. But over the next few years, I became increasingly disillusioned with writing big software in a language without good static analysis, and was on board with the mindshare (if not actual employment) exodus toward languages like Go and Rust. So I would say it was more like 5-7 years ago that you started seeing more criticism than love for Ruby in places like this.

And yeah, as other commenters have said, Python is back due to data science / machine learning. (Though I'm hoping there will be a wave of adoption of tools like Julia and Swift for this in the near future; good static analysis will be nice for data science for all the same reasons it is nice for other kinds of software.)


Python's relevance got renewed by the ML frameworks (Tensorflow) and notebooks (Jupyter).


This. Jupyter notebooks, Matplotlib, NumPy, Spyder... all that made Python big again as ML and AI became the new hotness recently and the killer app for Python.

Outside AI, Python is a really good scripting language for both Linux and Windows. My entire industry seems to run off of Python for process automation and analysis. It really is a lingua franca in this space.


I agree. During my Maths + CS studies I've been using mainly MatLab and R, and Andrew Ng's famous Machine Learning intro was also taught in MatLab/Octave. But using just one "proper" general-purpose language like Python instead makes way more sense, especially when building software from scratch. Need a custom ERP? Build one with django. A web server? Go with flask. Integrating some ML pipelines? Easy.


> So Java is not only the safe choice for serious server-side software; it's also the bleeding edge.

Sure for core Java (OpenJDK), but what about the future of JEE licensing and development? Even non-"enterprise" apps typically make use of some JEE stuff like servlets and JDBC. Is it really the "safe" choice when, apparently, the trademark agreements have just fallen apart?

https://headcrashing.wordpress.com/2019/05/03/negotiations-f...


JDBC is not JEE - it is core Java. Servlets are the original web-container Java spec, independent of JEE. Both are great specs supported by several stable and performant implementations - rock solid tech compared to a lot of the flaky stuff you find advertised today.


How are Java programmers implementing RESTful APIs these days?


Many developers don't want to think too much and generally choose Spring Boot, a stable, highly popular and well-documented framework. There is nothing wrong with this choice, but I find it too bloated for lean container-based micro-services.

Dropwizard and Micronaut are pretty good for getting a smaller and saner footprint. If you need more than REST, say you want a lean MVC-based web-framework, then Blade is also a good choice.

You can also choose not to use a framework and instead use code generation from a Swagger/OpenAPI REST document and implement the remaining bits on your own. If your deployment target is AWS, you can also choose to leverage Netflix's fantastic set of libraries.

Java has tonnes of options, so you usually need to spend some time evaluating what goes into your stack.
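For the no-framework route mentioned above, the JDK even ships a tiny HTTP server. Here's a minimal sketch; the endpoint path and JSON payload are invented for illustration, and the built-in com.sun.net.httpserver is fine for demos, though most shops would reach for Jetty or Netty in production:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class NoFrameworkRest {
    public static void main(String[] args) throws Exception {
        // Port 0 = let the OS pick a free port.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/api/health", NoFrameworkRest::handleHealth);
        server.start();
        int port = server.getAddress().getPort();

        // Self-request so the sketch is runnable end to end, then shut down.
        try (InputStream in = new URL("http://localhost:" + port + "/api/health").openStream()) {
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
            // prints {"status":"ok"}
        } finally {
            server.stop(0);
        }
    }

    static void handleHealth(HttpExchange exchange) throws IOException {
        byte[] body = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
        exchange.getResponseHeaders().set("Content-Type", "application/json");
        exchange.sendResponseHeaders(200, body.length);
        try (var out = exchange.getResponseBody()) {
            out.write(body);
        }
    }
}
```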


I'm curious to hear what specifically is too bloated in Spring Boot in your opinion for container based microservices. I've been writing microservices that run in docker in Java for a few years, then moved to Kotlin recently, all using Spring Boot, and I haven't run into anything that made me feel like they were bloated. I've also written Go and Node microservices for contrast.


Spring Boot has a large startup time, a heavy dependency trail, and a lot of dynamic class-loading driven by auto-configuration. This makes it unsuitable for some areas: say you desire fast-scaling micro-services that respond quickly to incoming load, or you want to transform your micro-service into a serverless function. One can do that with GraalVM by compiling the micro-service to a single binary with microscopic start-up time. However, native-image tends to fail more often than not with Spring Boot apps. (I haven't tried this recently though, so my knowledge could be out of date.)


Spring Boot does what it's asked to do, which is to load everything it finds. The dynamic class work has basically nil effect on load time. The core performance constraint is that classes are loaded essentially linearly. If you ship with something enormous like Hibernate, that's thousands of files read individually from a zipfile.

Spring folks are actively involved with Graal. I see a lot of excitement internally at Pivotal. Watch this space.


I'm reminded of the crazy stunts from the 1980s where people would start some big, slow program (Emacs was huge!), dump core, and then "undump" to make a pre-initialized binary that would start as fast as the OS could read it. It actually worked, as long as it wasn't relying on open file descriptors, or signal handlers, or env vars, or …


I’m pretty sure Emacs still does this. There was a patch committed to change it to use a more portable method of dumping state, but I don’t think that change made it into the 26.2 release.


You should check out RedHat's new framework, [Quarkus](https://quarkus.io/). It leverages Graal to create native images that are very small and optimized. For example, one of the Quarkus developers showcases the size of a native image, spoilers - it's [19MB](https://youtu.be/BcPLbhC9KAA?t=103). It takes 0.004s to start.

In [this](https://youtu.be/7G_r1iyrn2c?t=3104) session, a RedHat developer shows how a Quarkus application is scaled. Compared to Node, it's both faster to respond to the first request and has a smaller memory footprint (half the size of Node).


Looks very interesting, although it's unfortunate they went with Maven over Gradle.



I can't speak for others, but I work in games and we've spun up a bunch of servers and we use Netty/Jersey for REST and sockets. It's been incredibly stable and a joy to program for compared to our previous Node servers. Granted we were using bleeding-edge Node 5 years ago which is not the Node of today.


https://www.dropwizard.io/

Quoting from the site: Dropwizard is a Java framework for developing ops-friendly, high-performance, RESTful web services.

It's a nice microframework which comes with built in health-checks, and metrics and it's also ops friendly because it deploys as a single JAR file.


Not really related to Dropwizard but... this:

> mvn archetype:generate -DarchetypeGroupId=io.dropwizard.archetypes -DarchetypeArtifactId=java-simple -DarchetypeVersion=[REPLACE WITH A VALID DROPWIZARD VERSION]

Why is it that every time I read something about Java I get the feeling it was not meant for humans? Not as bad as C++ project setup though, I'll give it that.


I don't use maven archetypes too often, the rest of maven seems pretty good to me, but maybe I'm just used to it.

I have done a lot more Java than Javascript, so I feel the same about npm error messages.


Don't get me started on js/npm stuff...


Spring Boot with Webflux and any rx supporting libraries you can get your hands on


Thanks.


I use javalin.io, which was written by one of the maintainers of sparkjava (not to be confused with Apache Spark).

I'm a big fan.


Spring Boot + jOOQ for RDBMS support.


Pardon my ignorance but what does any of that trademark stuff have to do with jdbc?


I know little about EE (and I'm certainly not speaking for anyone but myself), but I believe Java EE has lost dominance not because of any corporate decision, but because it simply started losing ground to unstandardized open source projects [1], as opposed to EE's JCP. So people who liked EE wanted to ditch the slow-moving JCP in favor of a faster process, and one question was whether the new project would be able to change specifications of namespaces that are traditionally reserved to, and associated with, the JCP. In the end it was decided that no, they would not be able to change JCP namespaces (but can choose to maintain them until the end of time, in addition to any innovation they do outside of JCP namespaces). The decision obviously disappointed some, but I don't think it's viewed as catastrophic (although some may think so). And I don't think that the "negotiations have failed" so much as that neither side was able to achieve what it believed was the best outcome for itself, but an agreement has been made. You also need to realize that it wasn't a negotiation between Oracle and some grassroots project, but among multi-billion-dollar corporations, some of whom have fought Java standards for over a decade, so the process was both legally and politically complex. But now everyone can move on.

I, for one, am curious to see how Jakarta EE's current approach of what seems to be an internet-based, democratic semi-standard would work out, and if it can be better than both the JCP as well as more common centrally-controlled open-source projects.

Anyway, this is what Eclipse's director said on the matter:

https://twitter.com/mmilinkov/status/1125213654775889921

[1]: That post you linked to reminds me of those who blame Oracle for killing Solaris. Solaris is a terrific operating system, that was sadly killed by Linux long before Oracle acquired Sun. After a few years of trying, I guess Oracle decided they could no longer save it, and there was no point in continuing to throw good money after bad.


Your revisionism in your footnote is astounding, frankly. Oracle closing the source for OpenSolaris was not because of Linux - it was because of a choice made inside Oracle. Remember that development then continued for years in a closed source fashion, and is still ongoing with a skeletal staff.


> You also need to realize that it wasn't a negotiation between Oracle and some grassroot project, but among multi-billion-dollar corporations, some of whom have fought Java standards for over a decade, so the process was both legally and politically complex.

Yes, exactly. I think that is what makes it feel like not the safest choice for many companies.


Java EE had not been the preferred choice for many companies for a long time before this issue, which, I guess, is at least part of the reason why Oracle gave up those projects. I think Spring is the leader, but I am really not too familiar with that entire domain.


That's right that _pure_ JEE has not been preferred, but part of my original point is that simple things like servlets are actually technically part of JEE - so if you are using Spring for web, cloud, etc then you are actually using JEE at least a little bit.


I was never super into Java development. I started working in 2014 and was introduced to Weblogic, Jenkins, huge Maven POMs and all the rest, then went into Cloud consulting. When I sit down to do anything I get so wrapped up in all the stuff that comes along with Java and feel like I have to use some huge IDE like Eclipse (I hate) or IntelliJ (I <3 U Jetbrains) to do anything "real".

If I could just write code and have a simple package manager like NPM or even Go packages, and not be hindered by trying to get VS Code to work, I would never look back. I just waste so much time trying to understand the ecosystem. I know that isn't a great excuse (not wanting to take the time to become an expert on tooling), but people who came into the professional space in the same time frame likely also see it as an obvious path to just avoid all the cruft.

What I really want is to get a "lite" Java project like this up and running with all of the smart people's opinions in place that I can develop entirely in a lite editor (preferably without XML anywhere). Maybe something like Spring Boot would solve that but I have not investigated that yet.


Java-the-language is a fairly crummy thing to work with, it’s Java-the-ecosystem that’s genuinely good — the JVM, the tooling, etc etc. The quality of IDEs available is definitely a very large component of that. Doing Java on a lightweight editor seems like a way to pay the price without reaping the benefits.


I use TextMate (!) to maintain an Android SDK that's a mix of Java, C and C++, avoiding Android Studio like the plague; an iOS SDK that's mostly Objective-C, similarly avoiding Xcode's editor; and the backend written in Python that these SDKs talk to. I've got enough going on in my head w/o having to deal with the complexity of two different IDEs.

Anyway, for me Java-the-language is fine. The tooling, at least as it comes with Android (looking at you gradle and all its Android plugins) is what makes me want to pull my hair out.


I've not done any Android development but how is IntelliJ for it? While it can be slow on big codebases and obviously never as responsive as vim in a terminal, I think the tooling is excellent.

But again, you have to spend a month or so learning the shortcuts.


IntelliJ has an excellent vim keybinding plugin which I use (having come from Linux/scripting/Vim background to Java). Intellij also makes setting shortcuts pretty easy so you can customize to your needs. Having used both Eclipse and IntelliJ, I prefer the latter.


I haven't used IntelliJ, only Android Studio, which I know is based on IntelliJ but don't know how much they differ. I find it to be a clunky UI, and it easily eats all the CPU on a 2015 3.1 GHz MBP doing things like self-updating, downloading newer SDK versions, re-indexing a small repo, etc. I personally find it pretty unusable. There are also parts of the UI you can't even access (like the SDK manager and the AVD manager) unless you have a project open. It's pretty obviously not a Mac-native tool. I assume (hope, anyway) it's much better on Linux.


To a couple of those -

Compared to the other options, Gradle (IMO) significantly reduces the package & project management overhead. That said, it's still relatively high insofar as you are still asked to keep track of things like whether a dependency is needed at compile time, run time, or test time.

Java-the-language has a symbiotic relationship with heavyweight IDEs. The language's development is as influenced by the popularity of IDEs in its community every bit as much as the popularity of IDEs in its community is motivated by the language's characteristics. If you're looking for a good lightweight editor experience, I'd suggest looking at alternative JVM languages. Any of {Groovy, Clojure, Kotlin, Scala} will give you a better non-IDE experience while still giving you full access to the ecosystem. (That said, everyone still codes those in IntelliJ, too. It's still not gonna feel like working in Go.)

Lastly, it's totally OK to tune out the ecosystem when you don't need to be plugged into it. Yes, there are a bazillion JSON libraries out there. And you can easily spend more time and energy agonizing over their differences than you could possibly save by choosing the right one. Similarly, go ahead and ignore Spring. The whole Spring Experience™ is designed around developing applications a certain way. If you like to develop applications that way, you will know in your soul that Spring is right for you, and be attracted to it like a cat to an open can of tuna, and you would already have been a deeply devoted Java developer for years now.


Maven doesn’t have a lot of overhead, if you don’t overengineer your POM and use shared parent between multiple projects. It requires some DevOps thinking, but in the end typical project POM will be just a list of dependencies and basic metadata.


> What I really want is to get a "lite" Java project like this up and running with all of the smart people's opinions in place that I can develop entirely in a lite editor (preferably without XML anywhere).

That's pretty much the original value proposition for Spring Boot. "I just want to get to work".

I used Spring Boot before encountering a Spring 3 project. The difference is phenomenal.

Disclosure: I work for Pivotal, which sponsors Spring.


Spring boot still requires gradle, which is much more complex than naive use of npm or python pip.

In the long run java build tools are better, but due to the learning curve a lot of folks balk (leave) and use a different stack.

Whoever is in charge of openJDK should just adopt kotlin as java 14 even if it was Not Invented Here.


Spring Boot works with both Gradle and Maven.

https://start.spring.io/


Try a Spring Boot app using Gradle as the build tool.

Weblogic and all those app servers were a product of a different time. They solve problems that are largely solved other ways now (you could argue for some of their features). Spring Boot ships the web server in the application which is more akin to Rails, node etc.

Maven itself is also showing its age (its 1.0 release was 2004). I won't say that most of the industry has moved to Gradle, because the truth is so many workflows and projects are using Maven that it will be around for a long time. The good thing is that other build tools like Gradle, SBT etc. interop with maven the package repo just fine.

There is nothing stopping you from developing Java in vim. Syntastic and other plugins will help though.


For your web apps / services / APIs, check out Dropwizard. It is not what I would call "bleeding edge", but it provides a reasonable base and has always allowed me to easily work with the rest of the awesome JVM ecosystem. My Dropwizard projects are usually cruft/magic free, start in less than 5 seconds, consume predictable resources, and are easy to reason about, troubleshoot and improve.

I used to be a big fan of Spring(-boot) with MVC, but since I tried Dropwizard I never looked back. I'd still do Spring but perhaps for non-mvc/web needs.


It's the first time I've seen someone willing to trade Nexus/Maven for NPM. There isn't even a standard way to build and package a TypeScript library, to start with. For small projects, maybe, it can work, but for enterprise needs JS/TS is not even close.


The answer you are looking for is called "Clojure".


It takes a lot of power to push something that large. A lot of brain power to grok the ecosystem.

Outside of the Java bubble, the view is quite a bit different.

All that sophistication looks like a wasted effort.

Take something as simple as administering the garbage collector. Java has a big selection of GCs, and each has its own bunch of knobs for tuning. And you have to pay attention to that stuff.

After working with Go for several years, at large scale, we never once had to touch any knobs for GC. We could focus on better things. Any of the Java stuff we deploy and deal with we have this extra worry and maintenance issue.

And that's just the GC.


Funnily enough, Go just chose to solve the problem the other way around: while the JVM tackled GC with the code equivalent of lightsaber-equipped drones, Go's GC is almost embarrassingly simple in comparison (although it's pretty decent by now).

The major difference is that the whole Go language and stdlib are simply written around patterns that avoid allocations almost magically. The simplicity of the Reader and Writer concepts is so elegant, yet powerful, and doesn't allocate anything but a tiny reused buffer on the stack. There are lots and lots of other examples, but if you have to collect ten times less garbage, you'll be better off, even if your GC is 2x slower.


The byte buffers that Go's Reader reads from and that Go's Writer writes into cannot in general be allocated on the stack because they are considered to escape. Because Reader and Writer are interfaces, calls are dispatched virtually, so escape analysis cannot always see through them. This is now fixed for simple cases in Go, but only very recently: https://github.com/golang/go/issues/19361

Ironically, Java HotSpot handles the use case of Reader and Writer better than Go does, since when it's not able to allocate on the stack it has fast allocation due to the use of a generational GC with bump allocation in the nursery. By virtue of the fact that it's a JIT, HotSpot can also do neat things like see that there's only one implementation of an interface and thereby devirtualize calls to that interface (allowing for escape analysis to kick in better), something Go cannot do in general as it's an AOT compiler.


> like see that there's only one implementation of an interface and thereby devirtualize calls to that interface

Oh, the JIT devirtualizes and inlines even if there are many implementations, but only one or two at a particular callsite. This has been generalized in Graal/Truffle so you almost automatically get stuff like this (https://twitter.com/ChrisGSeaton/status/619885182104043520, https://gist.github.com/chrisseaton/4464807d93b972813e49) by doing little more than writing an interpreter.


I agree it's impressive that Go manages to be not all that much slower than Java while having a much simpler runtime, but much of the simplicity is gained from lacking features that are very important in many cases of "serious software" like deep low-overhead continuous profiling and ubiquitous dynamic linking.


I’ve seen you mention the superior operability of Java and the JVM in high load production environments and I think this is a really important, often overlooked, and commonly misunderstood point.

Would you be up for writing a short post or blog post going into some anecdotal comparisons and sharing some resources?


Exactly. Go is benefiting from years of hard lessons learned in other stacks such as Java. Having super experienced GC builders involved early on resulted in a language and libraries that work much more synergistically with the GC.

It's better to dodge a problem, than to have a baked in problem that requires lots of really smart people to make work arounds.


Java HotSpot's garbage collector is significantly better for most workloads than that of Go, because it takes throughput into account, not just latency. Mike Hearn makes the point at length in this article: https://blog.plan99.net/modern-garbage-collection-911ef4f8bd...

I predict that over time Go's GC will evolve to become very similar to the modern GCs that Java HotSpot has. Not having generational GC was an interesting experiment, but I don't think it's panned out: you end up leaning really heavily on escape analysis and don't have a great story for what happens when you do fail the heuristics and have to allocate.


> I predict that over time Go's GC will evolve to become very similar to the modern GCs that Java HotSpot has.

At which time the talking point will become "look how advanced and sophisticated it is!"


> Java has a big selection of GCs, and each has their bunch of knobs for tuning. And you have to pay attention to that stuff.

You never have to touch them in the Java world either, unless you like making performance worse that is.

I've never ever seen a case where fiddling with garbage collection parameters didn't make things slower.

I worked with a guy who worked on a popular Java compiler, and he says the same thing: he never twiddles GC parameters either.


What sort of scale are you working at? At our scale, it is normal to look into these sorts of things. Reading and understanding and applying articles like the following is absolutely necessary.

http://clojure-goes-fast.com/blog/shenandoah-in-production/


Medium scale. It's not like I need to pimp each of my servers like it's a Honda Civic.

If the load gets too high, I just add another instance.

It's way more economical than wasting developer time fiddling with GC params.


Are you serious? You have to mess with the maximum memory allocated to the GC all the time with Java processes. Also, the JVM's default settings basically assume it is the only process running. It will keep hogging a huge amount of memory, even if the heap is 60% empty, unless you configure the GC properly. Wasting memory and stealing it from other processes, which potentially causes swapping or crashing, is far worse than any increased time spent garbage collecting. Unless you're as stupid as the Minecraft developers, you will rarely suffer from GC pressure with heaps below 10GB.


Cassandra has benefitted from so many GC tweaks that I find this view hard to believe.

I can only imagine someone having this view if they have never cared about latency. How have you never experienced a multi-second GC pause on default CMS settings?


> Outside of the Java bubble, the view is quite a bit different.

I know that outside the "Java bubble" less advanced platforms are often good enough (BTW, while Java's GC ergonomics are getting better, I agree there may be more of a paradox of choice issue, but while you may not need to touch any knobs, you're also not getting low-overhead, deep production profiling and many of the other amazingly useful things you get with Java), and even inside the Java bubble we don't think Java is always the best choice for everything, but that doesn't mean that Java isn't leading in technological innovation, which was my main point.


I'm not saying I disagree but for a lot of businesses (even one with relatively high traffic) it is not unheard of to deploy with almost no tuning of the GC (aside from setting a heap min / max of 2,4,8GB) and have no issues.
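For reference, a tiny pure-JDK sketch that prints the heap limits the JVM actually picked. Run it plain to see the defaults, or with flags like -Xms2g -Xmx2g (the min/max mentioned above) to pin them; the exact numbers will vary by machine and JVM:

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mib = 1024 * 1024;
        // maxMemory() reflects -Xmx (or the JVM's default, often a
        // fraction of physical RAM); totalMemory() is what is committed now.
        System.out.println("max heap (-Xmx):   " + rt.maxMemory() / mib + " MiB");
        System.out.println("committed now:     " + rt.totalMemory() / mib + " MiB");
        System.out.println("free of committed: " + rt.freeMemory() / mib + " MiB");
    }
}
```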


I'm sure this all depends on use-cases, but I'll chime in to agree. I work on web services that do high (not Google high, but you've-heard-of-it high) levels of traffic, we run on the JVM, and GC pauses are not something that cause us to lose any sleep using out-of-the-box settings + explicit heap min/max.


Tuning GC is hard, and the nondeterminism is worrisome, but hitting a pathological case and rewriting a bunch of code hoping to avoid it is even harder.


For such an advanced JVM, having to prewarm it by calling code 5000 times, or the JVM being much more memory hungry than other non-JVM languages, doesn't feel like the cutting edge.


If you're not calling your code 5000x then it's likely not important for performance. I worked in compilers for 5 years and people's intuition for what parts of the code are the bottleneck is generally not very good. This includes me, I've guessed wrong a WHOLE lot.

If you're running microbenchmarks and having issues, then you're likely not using JMH, the Java Microbenchmark Harness which used to be a 3rd party library but is now built in to Java 12.

Is Java memory hungry? Maybe. If you're just writing a small routing server, probably not the right use case for Java. If you're writing a big and complicated game server with hundreds of routes and you need to talk to MySQL, Redis, and Memcache then I'd say the memory overhead of Java is really quite good. Don't forget that you can tune the min/max heap and other things like that, especially with the module system in Java 11. However if very low memory is a requirement, then you're probably running in an embedded environment and you probably shouldn't be using Java.


> If you're just writing a small routing server, probably not the right use case for Java.

General purpose languages have "use-cases"?


Yes, I wouldn't use Haskell for scripting and I wouldn't use javascript for a 3D game engine.


I think in this context Java means "the platform" i.e. the JDK


The usual C2 kick-in threshold is 10,000, not 5k. But that statement about warming up is true only for microbenchmarks.

C1, which is a dumb (but not terrible) compiler, tends to kick in at 1k, and if your code doesn't reach 1k invocations, it likely never needs compilation, so there's no point spending time and space on performing that compilation.

The 'prewarming' does include perf counters, so it's profile-guided compilation to boot.


What?


I suspect that the parent is talking about the JVM's JIT, where it compiles java bytecode into machine instructions after loading the application. This is why the first few requests on a JVM are usually considerably slower than the rest.

Parent is obviously exaggerating with the 5000x, and he could have made his point in a different way, but there's some truth to it.


5000 refers to the number of times a method needs to be called before the JIT decides the method is "warm" and should be compiled by the expensive-to-run C2 compiler, which produces good-quality machine code.

If you're benchmarking Java code and the method was not called enough times before you measure, you're measuring code compiled by C1 (or even interpreted code).

The performance gap between Java and C is on the order of 2x to 5x.
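A toy illustration of that warm-up pattern (the method and iteration counts here are invented; thresholds vary by JVM and flags such as -XX:CompileThreshold, and for real numbers you should use JMH rather than hand-rolled timing):

```java
public class WarmupSketch {
    // A deterministic bit of work for the JIT to chew on.
    static long work(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) acc += i * 31L;
        return acc;
    }

    public static void main(String[] args) {
        // Warm-up phase: call the method well past the commonly cited
        // default thresholds (~1k for C1, ~10k for C2) so the timed call
        // below measures compiled code, not the interpreter.
        for (int i = 0; i < 20_000; i++) work(1_000);

        long t0 = System.nanoTime();
        long result = work(1_000_000);
        long t1 = System.nanoTime();
        System.out.println("result=" + result);
        System.out.println("warm call took " + (t1 - t0) + " ns");
    }
}
```

Measuring the very first call instead of the last would mostly benchmark the interpreter, which is the classic microbenchmarking mistake JMH exists to prevent.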


Why not just pass -Xcomp? From the docs[1]: "You can completely disable interpretation of Java methods before compilation by specifying the -Xcomp option."

[1] https://docs.oracle.com/javase/8/docs/technotes/tools/window...


At the same time, JIT can outperform native code in some specific circumstances, since there are certain optimizations which can be proven safe at runtime which can not be guaranteed to be safe at compile time


This is a thing people say a lot, but it does not often seem to result in an actual Java program or benchmark being faster than a program compiled AOT by an equivalently smart compiler (LLVM etc.).


Actually, one of the lesser-known features of the JIT is the ability to deoptimize/recompile based on CHA (class hierarchy analysis) or pushed compilation requests via invokedynamic & friends.


Isn't Minecraft written in Java? Many Fortune XX companies have their entire product lines written in it. Frankly, I don't understand the hate it gets on HN.


Most of AWS, a lot of Google - as far as I know.


Yes, and I had to pay for a lot of RAM on servers because of it ;(

Minecraft's creator, Notch, chose Java because when he started Minecraft he thought that would let it run in web browsers. He wouldn't choose Java if he could do it over again.


Do you have a link to back up the statement about running in browsers? Notch is super talented. MC was released in 2009. No way he didn't know about browsers and Java.


Paying for RAM on servers is one of the cheapest ways you can pay for performance.


It is also one of the poorest performing games in the history of video games.


> low-overhead in-production profiling/monitoring/management

This is a joke, right? The management overhead for the JVM in production environments is huge. It's really hard to get it right.


Not sure what you are talking about. Can you give me some examples?


Tuning the heap size, GC tuning, etc. are often needed to avoid huge GC spikes out of the box. Not to mention performance is pretty meh compared to something like Go for applications where non-trivial compute is needed.


GC tuning should be trivial for every Java shop. Yeah, I guess you can find use cases where Go is faster than Java, but in that case, if it matters, I would go for Rust because I like it more than Go.


This - Java 8 'feels' legacy, but in so many ways it's miles ahead of upstarts.

Sometimes we forget that the JVM (almost) = Java and that thing is a beast.


>stuff hyped on HN

HN is going to be inclined to take a look at new-ish stuff.


Indeed. For better or worse, that's what the "N" in "HN" stands for - "News".


Yes! Java 8 is good. And Excel is by far the best spreadsheet out there. I don't know enough about Sharepoint but it seems like the normal world has actually chosen very well when it comes to technology.


Despite its title, the article was not about Java 8.


You can try appending "in Go" or "in Tensorflow" to see if the title still makes sense.


I had cause to use a bit of Java recently, so I could wrap an existing library that did exactly the thing I wanted. I haven't used it since university—back in the 1.4 days—and I was actually pleasantly surprised. Performance was great, concurrency was easy, features like type inference and streams made the experience much more pleasant, and obviously the development tools are still first-rate.

The absence of a package manager like Bundler or Cargo was frustrating for someone coming from that environment – as was the effective requirement to use an IDE. But on the whole, the platform feels like something that totally-hip-and-edgy-clique developers like myself are too quick to discount.
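As a small, hypothetical illustration of the features mentioned above (local-variable type inference via var, Java 10+, and streams, Java 8+), neither of which existed back in the 1.4 days:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ModernJavaSketch {
    public static void main(String[] args) {
        // var infers List<String>; no more spelling out generic types twice.
        var names = List.of("ada", "grace", "alan", "edsger");

        // Streams replace the old explicit-loop-and-accumulator style.
        var shortCaps = names.stream()
                .filter(n -> n.length() <= 4)
                .map(String::toUpperCase)
                .sorted()
                .collect(Collectors.toList());

        System.out.println(shortCaps); // prints [ADA, ALAN]
    }
}
```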


>The absence of a package manager like Bundler or Cargo was frustrating

Why didn't you use Maven or Gradle?


I think everything you mention here is actually related to the JVM and not Java specifically, and while I agree the work is impressive, it’s also true that languages such as Kotlin and Scala benefit from that work while also offering some very nice modern improvements.


You are describing the Jvm there, mostly. The jvm is great, and java is pretty tired even in the latest incarnations. I would have nothing against working full time with a jvm stack. So long as java isn’t involved.


C# ?


How does this compare to the .NET ecosystem?


C# is a better language. Java 8 started to catch up with some of the quality-of-life niceties, but Java has a lot of mistakes baked into the language and interfaces that can't be removed without potentially breaking a lot of stuff, which the consortium is not willing to do. C# and DotNet were designed with the wisdom gained from the early days of Java implementation and sidestepped a lot of these messes.

Java has a substantially better ecosystem. The tooling is just miles ahead of what exists in the DotNet world. Until very recently you developed in Visual Studio and you ran on Windows in production and that was that. They're working on Linux support and a broader ecosystem but they are probably a decade behind what is available on Java. Even stuff like package management is crude, NuGet is a joke compared to Gradle.


The JVM has multiple languages such as Kotlin, Scala and Groovy, each arguably a better language than Java. That most people still prefer to build projects in Java shows that other things, such as backward compatibility, how many developers know the language, the tool chain and so on, matter more than the isolated technical merits of the language.


Cannot agree more. Syntactic sugar or certain more advanced constructs in the end have very low ROI for experienced developers, who can make good use of the existing language features and extend their expressive power with internal DSLs. The costs associated with the tooling, hiring people with the necessary expertise, etc. are at that point more important.


> C# is a better language.

Yep, as a seasoned Java developer I agree. There are times when I miss a part of the Java syntax but they are few and far between.

> Java has a substantially better ecosystem. The tooling is just miles ahead of what exists in the DotNet world.

Also true, although it is getting closer fast.

Possibly more important: they are both in another league compared to most other languages. (Of the other languages/stacks I have some production experience with, Angular/TypeScript is the only one that I feel has anything close to the tooling support that C# and especially Java has.)


> but Java has a lot of mistakes baked into the language and interfaces that can't be removed without potentially breaking a lot of stuff

Absolutely true, but so does C#. In the end it turned out that .NET's reified generics were a mistake (which makes language interop on .NET painful), and more recently async/await.


> In the end it turned out that .NET's reified generics were a mistake

Do you have any resources that explain this point further?

Reified generics have often been praised as the thing that CLR got right (and JVM got wrong). I never understood fully why that is, especially since other languages with generics (e.g. Haskell and OCaml) don't have anything reified (although in all honesty also don't have RTTI so it's really not necessary).


Erasure makes it easy for Java, Kotlin, and Clojure to share code and data structures without costly runtime conversions. Languages like Scala and F# have had trouble implementing features on the CLR because of reification, and take a look at what Python and Clojure on the CLR have to go through for interop.

I know some people say they like reified generics in C#, but those are mostly people who aren't aware of what cost the entire ecosystem is paying in exchange for what is a minor convenience in C#.

BTW, reification of reference-type generics is not to be confused with specializing collections for value types ("arrays-of-structs"), an extremely important feature that the CLR indeed has, and that Java is now working on getting.
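A minimal illustration of what erasure means in practice on the JVM (just the Java side; no claims about CLR internals here):

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        // After erasure, both are plain ArrayList at runtime: the type
        // parameters exist only at compile time.
        System.out.println(strings.getClass() == ints.getClass()); // true
        // This is why a dynamically typed JVM language can hand any List
        // to Java code without a runtime conversion step.
    }
}
```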


"Mistake" is a strong word, and I would disagree with it. But when building extensible libraries and systems I do often find myself wanting to pass around a List<Something<?>>, which isn't doable without writing a second generic interface or the like. On the flip side, in Java (or, these days, Kotlin), a type-erased generic can be mediated more easily because passing a Class is easier than adding a second interface.

Part of it is that a lot of that stuff is gamedev-related, for me. It's not the biggest thing in the world, but for a lot of the stuff I find myself writing in C#, I find myself wishing for type erasure to make throwing data around a little easier. On the other hand, though, when writing web stuff on the JVM--I absolutely will not waste my time doing this in .NET, ASP.NET Core is not very good and EF Core is awful--I often wish for type reification, so it's just a horses-for-courses thing.
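The "passing a Class" pattern mentioned above can be sketched roughly like this (the registry class is hypothetical, just to show token-based mediation of erased generics):

```java
import java.util.HashMap;
import java.util.Map;

// A type-safe heterogeneous container: the Class object serves as a
// runtime type token that erasure would otherwise remove.
public class TokenRegistry {
    private final Map<Class<?>, Object> services = new HashMap<>();

    public <T> void put(Class<T> type, T instance) {
        services.put(type, instance);
    }

    public <T> T get(Class<T> type) {
        // Safe cast: put() enforced that key and value types match.
        return type.cast(services.get(type));
    }

    public static void main(String[] args) {
        TokenRegistry r = new TokenRegistry();
        r.put(String.class, "hello");
        r.put(Integer.class, 42);
        System.out.println(r.get(String.class)); // hello
    }
}
```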


Generics interop just fine on .NET, and async/await had become a starting point for similar designs in many other languages. I don't think you'll find many people who actually write code in that ecosystem agreeing with either of those claims.


> Generics interop just fine on .NET

They really don't. They have posed severe restrictions on features in languages like Scala and F#, and take a look at Clojure, Python and JS interop on the JDK as opposed to .NET.

> and async/await had become a starting point for similar designs in many other languages.

And it's a mistake in most of them. Java has copied some of C's mistakes, C# has copied some of Java's, and others may copy some of C#'s. Every language/runtime both repeats others' mistakes and adds a good helping of its own.

> I don't think you'll find many people who actually write code in that ecosystem agreeing with either of those claims.

I can only express my own opinions (and I think reified generics, as implemented in the CLR, is a far bigger mistake than async/await, which only affects the C# language), but I am far from being alone in having them. Also, I have no doubt that async/await is an improvement on the previous situation, which is why people like it, but nevertheless I think it's a mistake as there are alternatives that are more convenient, more general, and have less of an adverse impact on the language (e.g. Go's goroutines).
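For context on the Java side of this comparison: Java has no async/await keywords; the closest stock tool is explicit future composition with `CompletableFuture`, roughly:

```java
import java.util.concurrent.CompletableFuture;

public class FutureDemo {
    public static void main(String[] args) {
        // The pattern async/await sugars over, expressed as explicit
        // future composition rather than language keywords.
        CompletableFuture<Integer> f = CompletableFuture
                .supplyAsync(() -> 21)      // runs on a worker thread
                .thenApply(n -> n * 2);     // continuation
        System.out.println(f.join()); // 42
    }
}
```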


What are the restrictions on F# that were posed by them? Given that Don Syme was the one who originally designed them, specifically with a mind for cross-language use (which is why they got stuff like variance long before C# supported it), this is a surprising claim. In fact, I recall Don saying something along the lines of, if CLR did generics with erasure like Java, F# probably wouldn't be where it is today.

I saw the link to Clojure page you posted in another comment, but I don't see any fundamental problem with generics there. Yes, if you're invoking a statically typed language from a dynamically typed one, you have to occasionally jump through hoops when dealing with things like method overloads that require types to distinguish. The same goes for Python. I have actually used Python embedded in C# more than once, with interop both ways, and in practice it "just works" most of the time.

Conversely, I don't see why statically typed languages should surrender valuable type information (and associated perf and expressivity gains) for the sake of convenience of dynamically typed ones, especially on the platform where static typing is the norm, and libraries are expected to be designed around it.

As far as async/await vs Go's goroutines - since we're talking about language interop, how many languages can Go coroutines interop with? Async/await easily flows across language boundaries, as you can see in WinRT - any language that has the notion of first-class function values can handle that pattern. Goroutines are essentially a proprietary ABI. And the worst part is that once the language has them, all FFI has to pay the tax to bridge to the outside world, regardless of how much you actually use them. It may be an argument for standardizing some form of green threads on platform ABI level, so that all code on that platform is aware of their existence and capable of handling them. Win32 tried with fibers, without much success. Perhaps it was too early and the design was too flawed, but it's not encouraging.


> What are the restrictions on F# that were posed by them?

It does make it harder to add features to the language that do not map to the current reified "generics" spec, for example higher-kinded polymorphic types.

Of course, the JVM has plenty of issues supporting alternative languages too, for example lack of tail-call optimisations, or switch-on-type, for functional languages.


If such features can be implemented via runtime type erasure on JVM, you can still do that on CLR - it's not like it prohibits that technique, it just doesn't use it for generics. You can even have different languages agree on how they would implement it, so that they could fully interop. With modopt/modreq, you can capture it all in metadata, as well.

It wouldn't interop with C# generics (although I don't see why it couldn't interop with C# by other, less convenient means). But if it can't be properly mapped to them when they're reified, why is there an expectation that it should? It seems to me like the gist of the argument here is that we can conflate two features into one, if only we remove all the conflicting bits of one of the features - which also happens to be the one much more broadly used at the moment. It's a strange trade-off.


> Of course, the JVM has plenty of issues supporting alternative languages too, for example lack of tail-call optimisations,

That's true (and will be addressed), but that's a problem that can be fixed by adding a feature, not removing a central one, which is why I said that both Java and .NET have made mistakes, and reified generics was one of .NET's.

> or switch-on-type, for functional languages.

There is no difficulty supporting that. In fact, the Java language is about to get that without JVM changes. Perhaps you mean switching on A<Foo> and A<Bar>, where Foo and Bar are reference types (and possible with a subtype relationship between them), well even Haskell can't do that, and if there was a language that thought this is a good idea, it would be able to do it quite easily.


> That's true (and will be addressed)

Glad to hear it! It's probably the biggest issue trying to do functional programming on the JVM.

> There is no difficulty supporting that. In fact, the Java language is about to get that without JVM changes.

The technique to implement ADTs in functional languages on the JVM has often been to add an integer tag to every subtype and switch on that, but it's ugly enough for interop that IIRC Scala doesn't do it (and is thus less efficient).


Paket is a great alternative to the NuGet client.

Tooling might be slightly behind Java's, but honestly, there's not much in it. Dotnet Core might only have been available on Linux for a few years, but Mono was around for many before that.

The dotnet CLI also recently got the concept of 'global tools', akin to npm's global installs.

A lot of work is also underway on better cross-platform performance analysis, crash dump handling and the like for runtime in production[0]

Is there anything in particular you miss?

[0] https://devblogs.microsoft.com/dotnet/introducing-diagnostic...


Maybe not much that the OP misses. But then again, there's nothing to miss from Java either, since Java fulfills his requirements (maybe more than .NET can).

If I'm a Java developer who has been using Maven for a long time, I don't see the allure of switching to .NET just because NuGet or Paket exists.

The big data ecosystem is all JVM/Java.

Android started off as Java. Sure, they have Kotlin now, but I don't know whether developers are switching to Kotlin in droves.

Basically, the Java ecosystem is already _there_ while MSFT is trying to catch up, so experienced devs (a.k.a. people who are already comfortable with the tooling) don't see a reason to switch.

As for me, I interned at MSFT back in the mid-2000s drinking the .NET kool-aid. One day I woke up and realized that the part of the hi-tech world that interests me relies a lot on OSS, and a large portion of OSS was Java. Fast forward to today: FAANG companies are mostly Java shops. Java seems like safer ground for me to have a longer career.

That's just me and my 2c. I'm too lazy to switch to .NET since there's no added value at the moment, unless I want to do back-office in-house web apps.


According to the roadmap, Mono's JVM interop used for Xamarin Android will be making its way more directly into .NET Core and will light up on almost every platform, which would give .NET greater, direct access to the Java ecosystem as well.


What about Kotlin?


Why downvote ? C#/Visual Studio is one of the best languages out there.


Light-years behind it. There's no reason Java is still prevalent besides inertia. .NET/.NET Core is going to slowly but surely overtake it, with Java shooting itself in the foot with the new licensing terms and C#/.NET's far better feature set and design (lessons learned from Java/JVM's mistakes could be fixed in C#/.NET).


.NET Core is winning a lot of benchmark comparisons now. And one might consider Rust to be more bleeding edge.


Java and Rust are both bleeding edge in their respective domains, which are quite disjoint. I like them both, and Rust may well dominate its domain one day as much as Java dominates its own (although Rust's domain is even slower-moving than Java's, so that process may well take decades). I don't know of .NET Core winning any quality industry benchmarks that people actually pay attention to, let alone a lot of them.


TechEmpower, and the Benchmarks Game.

And SIMD is coming soon in .NET Core 3 which should be a big jump for many workloads.


True, but java carries a lot of legacy.


"Legacy" includes good things too, not just bad things.

I grew up promising myself that I'd never touch Java, but when I did, it was a revelation. Everything just works. The debugger was just "attach and go" -- no recompilation with different flags, no fishing for the IDE / plugin version in which it wasn't broken, no glaring missing features (conditional breakpoints, stack traces...) The frameworks were refreshingly complete; I always felt that extensibility mechanisms were available so that I didn't have to gamble that my requirements fell into the "just works" subset rather than the "just doesn't" subset. I never had to wonder if integrating with some bog-standard protocol would mean maintaining my own open source project; Maven always had something workable available.

Java wasn't born like this, but the legacy it accumulated made it formidable.

Early-adopting a hip language nets you a few sexy wins at the cost of completely eliminating a gigantic Brontosaurus-size long tail of important functionality and ecosystem maturity.


You're right. I never said otherwise.


Java, or projects in Java? Java pretty much has to, in order to preserve the most important feature a runtime can have: backwards compatibility.

Projects in Java are cursed from the beginning: the language has great constructs for boxing complexity (modifiers, inheritance, interfaces, generics, design patterns allowed by performant virtual calls, etc.). This means that when projects in other languages become unmanageable, Java will keep chugging business requirements like no other. Accumulating complexity is the fate of every successful system; and Java makes you go a long way.

Now, a newly arrived person on the job market will face systems with a pile of convoluted business logic a few person-centuries high. The first ticket will be: half a month getting accustomed to the language/framework, a month of archeology, producing 50 lines of code and 150 of tests, breaking a few tests, half a month of corrections; for something that would take one day as a greenfield project. The job might pay well, but will be very ungratifying.

Now the sound enterprise strategy is to keep using Java: the platform allows for deeper and finer integration into the business processes. But the sound individual strategy for newcomers is to stay the hell away from it. Oh, you will only work with Rust/Go/Elixir/etc? Here, have a greenfield project.

Clojure/Scala/Kotlin/Ceylon/Frege/Groovy might be the best of both worlds.


It sounds like you're basically saying it's difficult and slow to work on large projects, and the solution is to not work on large projects. And that new languages implicitly mean smaller and thus easier projects, because they haven't had time to grow.

The weird thing I always notice with always-greenfield developers is: how do you know you aren't just creating even larger messes tomorrow than the messes you're avoiding today? I guess it isn't your problem if you're always working greenfield: it's someone else's.


Yeah, there is the legacy of the thousands of mature libraries, frameworks and development tools.

Legacy isn't always a bad thing.


Having moved to a stack with a much more competent standard library, it's much nicer to have that great standard library than to have to wade through a rich set of libraries all the time to achieve the same thing. I'm talking about Go, of course. It's just nice getting right to work coding up an http server without having to think about what "web framework" I should need to select, or which logging library to use.
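To be fair to the JDK, it also ships a small built-in HTTP server (the `com.sun.net.httpserver` package), though it's far less used than Go's `net/http`; a rough sketch:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class TinyServer {
    public static void main(String[] args) throws Exception {
        // The JDK's built-in (if less famous) HTTP server: no framework
        // selection required, just the standard library.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            byte[] body = "hello".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```

In practice most Java shops reach for a framework anyway, which is arguably the Go commenter's point.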


What stack?


never said it was bad. But just wanted to state that it's not all newness.


To me Java is like a diesel engine, old but proven technology that can keep on chugging for hundreds of thousands of hours.

(Not to discount the recent improvements in GC, etc which are also amazing...)


Once you write code, it is legacy.


So do the frameworks within Java.


[flagged]


No, you don't need to reverse your opinion at all.

Personally, I voted you down because your opinion was stated in a sort of offhanded manner that does not do the subject justice.


The manner is neutral. It's only a couple of words. You want me to write a whole essay on the pros and cons of legacy? In that case a downvote would make sense, because then there would be something to disagree with. As it is, it's just a statement that is true.


Legacy is a poorly defined term anyway. Is COBOL legacy, how about jQuery? ...

It's mostly a derisive term that seems to mean: "Old technology I don't like."


Poor start-up time, terrible application bloat, uninspiring language with poor concurrency support, massive RAM requirements, everything XML, complex tuning required, what a nightmare.

No wonder the world is running towards Python and serverless as fast as they can...


Then they discover that even though Python is quicker to develop with, it actually runs much slower and has poor threading support. Since async only gets you so far, you try multiprocessing. It turns out running all those Python worker processes actually takes up more memory and CPU than the equivalent Java app...

Also, most of the things you complain about are not Java: they are the fault of JEE (aka J2EE) and "app servers."


CPython is 100 times slower and has no multi threading support (CPython is single threaded). JVM code can outperform native C code. The JVM supports many advanced languages (Scala, Clojure, Kotlin) and it can even run Python code (Jython) faster than CPython. For concurrency it has this ultra powerful library https://akka.io (Up to 50 million msg/sec on a single machine. Small memory footprint; ~2.5 million actors per GB of heap.)


Jython seems to be much, much slower than CPython, actually: https://pybenchmarks.org/u64q/jython.php

The only benchmark where it's faster is the one involving threads, which makes sense. CPython definitely has multi-threading support, but it also has the Global Interpreter Lock preventing threads from actually executing bytecode at the same time. Jython doesn't. People have written patches that successfully remove that GIL, but those patches make single-threaded performance worse and thus haven't been accepted.


I consider that basically no multithreading support; CPython multithreading is useless in comparison. Those benchmarks indicate that the Jython compiler is far from optimal. The equivalent code rewritten in Java would be an order of magnitude faster. https://julialang.org/benchmarks/ (note the single Python outlier there is the result of calling a C library for matrix calculation). Anyway, the JVM and CPython are fundamentally not comparable: one is an interpreter, the other a JIT compiler. There are Python JIT compilers that would make a better comparison. However, as a language, Python is not optimal for large codebases. Comparing the languages only (not their implementations), Python is good for small scripting tasks; Java is better for very large scale projects.


All that might very well be true, but I was responding to your assertion that "The JVM ... can even run Python code (Jython) faster than CPython".


Somewhere I heard that was the case, but I didn't verify it. These benchmarks show Jython performing hundreds of times worse. spectral-norm: 0.12 secs for CPython and 4.96 secs for Jython. How can the JVM be 40 times slower than an interpreter? Maybe they didn't warm up the hotspot compiler (amateur mistake), or the Jython compiler is suboptimal. However I compared the code: This https://pybenchmarks.org/u64q/program.php?test=spectralnorm&... versus this https://pybenchmarks.org/u64q/program.php?test=spectralnorm&...

The py3 version is calling numpy which calls a highly optimized C library. So this benchmark suite is worthless. They failed to warm up the JVM hotspot compiler, and they are comparing interpreted Jython with a highly optimized C library called from Python3

The lack of a warmup phase is confirmed by the raw command line and output logs on various benchmarks. Amateurs. This is a highly misleading set of results on multiple levels.

https://wiki.python.org/jython/JythonFaq/GeneralInfo#How_fas...
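For the record, a naive sketch of what a warmup phase looks like in a hand-rolled JVM micro-benchmark (serious benchmarks should use a harness like JMH; the workload and numbers here are illustrative only):

```java
public class WarmupSketch {
    // A small numeric workload: the 100,000th harmonic number.
    static double work() {
        double s = 0;
        for (int i = 1; i <= 100_000; i++) s += 1.0 / i;
        return s;
    }

    public static void main(String[] args) {
        // Warmup: run the workload enough times for the JIT to compile
        // the hot path, and discard these timings.
        for (int i = 0; i < 1_000; i++) work();

        // Only now measure.
        long t0 = System.nanoTime();
        double r = work();
        long t1 = System.nanoTime();
        System.out.printf("result=%.4f time=%dus%n", r, (t1 - t0) / 1_000);
    }
}
```

Skipping the warmup means you're largely timing the interpreter and the JIT compiler itself, which is the mistake being alleged above.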


The funny thing is that from a purely technological point of view, Java (even the 5-year-old Java 8 and certainly recent versions) is far ahead of most other stuff hyped on HN (as well as less hyped stuff).

The technology is certainly impressive, but it isn't equipped to deal with the modern world. It is trivial to run 10 Node.js applications in the memory of one JVM application. This never used to be a problem, but now that the cloud forces apps to pay for the resources they use, it's a kiss of death.

There is also the move to serverless and the slow start of the JVM which makes it literally the worst choice out of all the alternatives.

Oh, and let's not even start on the train wreck of Oracle's stewardship -- Java 11 is competing with Java 8.

Java will survive but I cannot see it having any place in a modern software stack.


I haven't tried serverless, but shouldn't that be Java's cup of tea?

The JVM can load code dynamically and adapt the runtime to the workload. Maybe there is a framework missing, so those that provide serverless start a new JVM per function call, but they shouldn't really have to do that.


What's wrong with Oracle's stewardship? I am not a fan of Oracle, far from it, but Oracle's stewardship of Java has been exemplary.


Currently:

- The licence changes, which are causing a premature move to Java 11 or OpenJDK out of fear of an audit. Today a colleague was investigating a segmentation fault in the JVM because we moved to OpenJDK. This is not productive work.

- The future of j2ee is uncertain: https://headcrashing.wordpress.com/2019/05/03/negotiations-f...

How did you get to "exemplary"? Have you got any examples?

