IntelliJ IDEA 2021.1 (jetbrains.com)
128 points by programeris 4 days ago | 74 comments

Started off very slow on a not-terribly-large project > tried a cache cleanup > now it doesn't even start without deleting the .idea folder > still slooow.

Command + space takes forever (=seconds) to show the context menu.

IntelliJ feels like a proper IDE now :).

It works great if you give it enough memory. It's a memory hog, but it's the best IDE I've ever used.

> It is a memory hog

This is different from my experience, when working on Java projects with a few thousand source files and about 1 million SLoC.

  Eclipse - had some problems with the autocomplete being slow to open, IDE was overall unresponsive and used around 2 GB of RAM
  NetBeans - the cache folder filled up weirdly quickly (though that was NetBeans 8.2 not the new Apache NetBeans), but the IDE worked fine with around 1.5 GB of RAM usage
  JetBrains - project was slow to open (+while indexing), but the IDE performs okay with 0.5 - 1 GB of RAM usage (though more needed when running/debugging)
Currently I have a computer with 24 GB of RAM and an 8-core CPU; both seem sufficient, but in my experience the IDE isn't the main thing consuming memory. It's actually all the services you might want to launch through it (if you need breakpoints across multiple ones).

Of course, this would become more of an issue if you have something like 4 or 8 GB of RAM available.

Disclaimer: this is anecdotal data and I may be wrong about the latest versions of these IDEs, since I only use JetBrains products nowadays.

> JetBrains - project was slow to open (+while indexing), but the IDE performs okay with 0.5 - 1 GB of RAM usage (though more needed when running/debugging)

Just to be specific: is this your experience running the IDE at default settings, or did you increase the JVM heap size and see no effect? If you run it at default settings, the memory footprint will stay under 1 GB, because that's all it's allowed to use.

My experience is that it's a great idea to bump the heap size to 4 GB. I haven't benchmarked it, but it feels much faster in regular use.

I bumped the limit up to 2 GB, which is where I've left it for IntelliJ IDEA.

When the IDE is mostly idle (with 10-20 opened source files), it generally uses around 0.5 GB of RAM.

When I'm writing code, using autocomplete, reading documentation, running Maven actions, using the suggestions functionality (basically IntelliSense plus some plugins), using refactoring functionality etc., then memory usage can go up to 1 GB.

When the code is running, memory usage can get closer to the 2 GB limit, especially when using the debugger, recompiling classes on the fly and so on.

Personally, I haven't raised the limit any higher, because I still need the memory for the actual Java processes of the services I launch: under normal circumstances I need at least 16 GB of RAM just for all of the services to run properly. (That may well be a project-related issue, since scheduled processes and a number of other optional things run locally, and not all of them are configured with Xmx and other parameters.) Either way, that's less relevant to actual IDE performance, since I have swap turned off and it doesn't have an impact.

It's also terrible to overdo it. A coworker had issues with 4 GB on a project and bumped it to 16 GB (on a 32 GB MBP), which resulted in epically long GC pauses.

If you know anything about Java, you'd know that the max heap size is a command-line option; pretty much all Java command-line options, GC ergonomics included, are available. I have been using Eclipse with an increased (and over-tuned) heap since 2006: just modify eclipse.ini.
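For anyone unfamiliar with the file: eclipse.ini passes everything after the -vmargs marker straight to the JVM, so raising the heap looks roughly like this (values purely illustrative):

```
-vmargs
-Xms1g
-Xmx4g
```

Any standard HotSpot flag (GC choice, ergonomics tuning, etc.) can go in the same place.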

This would matter if one of them had a limit much smaller than the others, which would lead to aggressive GC, but that wasn't the case.

When working on the same project of a known size and using similar IDE functionality, different IDEs still had noticeably different performance characteristics (I encourage you to try the same, especially with the same Xms and Xmx values set).

That simply leads me to believe that each IDE implements things differently, for example:

  - JetBrains have separate IDEs for separate languages (IntelliJ IDEA for Java, Rider for .NET, PhpStorm for PHP etc.)
  - NetBeans allows lazily loading support for different languages/frameworks/tools
  - Eclipse is generally viewed as a bunch of interconnected libraries/frameworks (JDT for Java, PDT for PHP etc.)
That's just one example of the architectural differences (which may affect which functionality is loaded when, and how efficiently memory is used); it stands to reason that they behave differently from one another, even in controlled circumstances like the above.

They really need to update the default memory limit. The 750 MB default isn't enough for most projects these days, and they don't exactly make it obvious how to raise it to 2-4 GB.
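For reference, the limit lives in a .vmoptions file that the IDE reads at startup (recent versions have a Help > Edit Custom VM Options action that creates a per-user override; the exact path varies by OS and version). A sketch with illustrative values; the flags themselves are standard HotSpot options:

```
-Xms1g
-Xmx4g
-XX:ReservedCodeCacheSize=512m
```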

No need; you'll get a low-memory warning if you're on a large project, and the IDE will advise you to increase the heap size (just click the provided option). You can manually choose how much memory to give it.

So it’s fine that default is relatively low.

In my experience, IDEA becomes GC-bound quite readily during indexing, which, combined with indexing blocking most operations, stalls the UI.

It's not that the IDE OOMs; it's that it burns CPU cleaning up objects in stop-the-world collections. These stop-the-world events can be mitigated with additional memory.
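The GC time being described here can be observed from inside any JVM process via the standard GarbageCollectorMXBean API; a minimal sketch (class name and allocation counts are my own, illustrative choices):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    // Total time (ms) the JVM has spent in GC so far, summed over all collectors.
    static long totalGcMillis() {
        long ms = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if the collector doesn't report it
            if (t > 0) ms += t;
        }
        return ms;
    }

    public static void main(String[] args) {
        // Churn through short-lived garbage to force some collections.
        for (int i = 0; i < 500_000; i++) {
            byte[] junk = new byte[2048];
            if (junk.length == 0) System.out.println(); // never taken; keeps the allocation around
        }
        System.out.println("GC time so far: " + totalGcMillis() + " ms");
    }
}
```

Watching that number climb during indexing is a quick way to confirm an undersized heap before reaching for a full profiler.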

You can use the action "change memory settings".

Does that have a UI now, or does it still just open the launch flags in a text buffer?

Yes, it has a UI now.

Help/Change memory settings

I have the memory monitor on, and IntelliJ barely uses any of the memory I'm giving it; I recently upped the memory further and it didn't help at all.

It has always been fast for me, and IDEA has been the best Java IDE around. What specs do you run, and how large is your project?

As someone who switches between doom emacs and intellij:

If you have to ask questions like, "What specs do you run and how large is your project?" it is slow.

Does Doom Emacs provide the same functionality as IntelliJ IDEA?

No. Which is probably a big part of it.

The thing is, as far as I can tell, there's a very high overlap between the list of features I don't use, and the list of features that contribute to the sluggishness.

Some additional anecdata: on my maxed out 16” (admittedly Intel) Mac, IntelliJ/PyCharm/Goland/Webstorm are all quicker to start than Spacemacs. I haven’t tried Doom Emacs but don’t get the impression it would be much quicker.

I'll have started Doom, opened a project, found my file, edited, saved and closed, while IntelliJ's indexing bar is still at 10%.

What does this indexing buy you time wise later?

And what about starting it with an already indexed project?

> What does this indexing buy you time wise later?

For simple projects in Python or JS, nothing. For complex projects in most languages, quite a bit. For anything running in the JVM, a whole lot. That's why I pay for my IntelliJ license every year.

The question here is if it's quicker to open a project and be immediately productive in Doom Emacs, and it is, very much so. This is in contrast with Spacemacs, which I used for a couple of years but had to run Emacs as a daemon to make the startup fast.

If I want to go even faster I'll use Vim, but that's mostly out of habit and proficiency.

> And what about starting it with an already indexed project?

Maybe I'm missing something, but it re-starts indexing every time I open the IDE or the project or when switching branches. Is there any other way?

One of the main selling points of doom emacs is that it starts up more quickly than spacemacs.

It starts quickly because everything is lazily loaded; if you open a project it will start the language server, begin indexing, etc. It's definitely not much faster than IntelliJ, and the functionality is still behind.

It does depend a bit on the language server. There are a few language servers that are nicer than the equivalent IntelliJ plugins, though that is generally not even remotely the case for JVM languages that aren't Clojure. AFAICT, the state of the union for Java is that it's not habitable without an IDE, and the only habitable IDE is IntelliJ.

Looks somewhat promising as a TextMate replacement for out-of-project editing - I’ll give it a shot.

On the other hand, I've found Emacs to be slower than Idea, with frequent "synchronous waiting on ui thread" style slowdowns for many different actions...

It has always been super slow for me, but they're making it even slower over time.

Wait for it to finish indexing, try to remove unnecessary folders from the indexes.

I use PyCharm but needed PhpStorm for a project, so I wrote to them asking whether I should target IntelliJ or just go with the two separate IDEs.

There isn't a clear order for feature introduction after something lands in IntelliJ.

However, they mentioned a few times that people who need multiple tools often pick individual apps simply because they are much lighter weight than IntelliJ.

Nothing lightweight with any solution involving the IntelliJ platform, lol.

Still, you're better off with the specialized IDEs. Some of their functionality isn't available as plugins for IntelliJ IDEA, even though it would probably be feasible.

You have a good point. I have found myself using PyCharm a lot in the last year, and switching to an M1 MacBook really helped speed up PyCharm.

The problem is that IntelliJ is written in Java, running on the HotSpot JVM, which is JIT-compiled.

But IDEs, especially fully featured IDEs, are a terrible type of workload for JIT compilation. They're full of branches, which can easily cause recompilations, and the breadth of features means there is never really a spot hot enough to stay compiled, outside of the editor itself.

I really wish AOT compilation were taken more seriously in the JVM world. Yes, I know about Graal native-image and the various embedded commercial JVMs, but those are niche. It would be great if I could just precompile a whole app with O3-level optimization on a standard JVM, and not have to worry about the weird and hard-to-debug performance fluctuations that come with JITs.

Hotspot can cache generated binary code across runs since Java 11, but people keep using Java 8....

IntelliJ comes (optionally, but by default) with its own packaged JRE/JDK, IIRC, so that wouldn't be an issue...

Apparently they couldn't be bothered to provide an AppCDS archive on a modern JVM to ship alongside IntelliJ.
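For context, producing such an archive is a couple of commands on a modern JDK (13+ supports dynamic archiving); the jar and class names below are placeholders, not IntelliJ's actual entry point:

```sh
# First run: record the classes the app loads and dump them into a shared archive at exit.
java -XX:ArchiveClassesAtExit=app.jsa -cp app.jar com.example.Main

# Subsequent runs: map the pre-parsed class metadata instead of re-loading it.
java -XX:SharedArchiveFile=app.jsa -cp app.jar com.example.Main
```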

I also have the feeling that it's only pjmlp and maybe 100 enterprises around the world that use features like these!

You always seem to mention them (class sharing, Java AOT compilers, etc), but nobody (for values of somebody < enough) seems to be actually using them :-)

When one does enterprise computing, those features are available to anyone who cares.

It is not my fault many developers don't care about their tooling, and settle for worse is better. :)

So, that's my question then: are those features really mature and available for production use?

Or is it more like half-done features with several caveats, or some companies offering this or that compiler extension (like Java AOT) that might or might not work well, without good documentation or any kind of support, which only some cavalier daredevils opt to use?

I mean, I'm not 100% convinced whether the reason for that lack of adoption is devs not caring, or the tools being niche and "use at your own peril".

GraalVM, for example, probably still has some rough edges, and that's an official Oracle tool.

In your experience, which are the most mature alternative Java deployment tools/compilers/options?

GraalVM is mature enough to power Twitter.

AOT compilers for Java have existed in production since around 2000; basically all commercial JVMs supported them, specifically those targeted at embedded deployment.

AppCDS in HotSpot is nothing new; like Flight Recorder, it's a proven component from BEA's JRockit JVM (acquired by Oracle), which they eventually decided to make available on HotSpot.

Likewise, OpenJ9's AOT and JIT caches have years of production testing, from IBM J9 deployments in the WebSphere Real Time JVM (now discontinued), on IBM mainframes, AIX and Linux.

PTC and Sonic have been used for years in military deployments, and factory automation scenarios.

Aicas focuses mainly on embedded scenarios.

Excelsior JET was quite good, but for whatever reason they went bankrupt.

Android Runtime, although not Java, has been AOT-compiling since Android 5. Starting with Android 7 it evolved into a mix of interpretation/JIT/AOT with PGO fed by the JIT, and in Android 10 it gained the ability to upload PGO profiles to the store so that devices can AOT right away.

AOT can't take runtime behavior into consideration, which might result in less efficient code than JIT compilation.

So it's apples vs oranges: one is faster here, the other there.

GC might be an issue, but for me IDEA feels fast and the added benefit of code navigation vs e.g. vim is worth it.

But JIT might also result in less efficient code, because the runtime cost of some kinds of optimization (especially the extremely powerful global optimizations) isn't really feasible to pay while the app is running.

AOT + PGO really is the best of both worlds, but from a devops perspective it gets a bit tricky. It would be a no-brainer for something like IntelliJ, though.

>AOT can't take into consideration runtime, which might result in less efficient code compilation vs JIT.

That has for decades been a "in the future we'll have flying-cars" style promise for 99% of workloads...

Sure, a JIT can theoretically optimize based on runtime hints. But most of the time, for any practical use, it's slower than AOT.

JITs have been extremely successful for dynamically typed languages, but that has always been a low hanging fruit for optimization.

Once you have a static type system, the JIT doesn't bring you much over ordinary AOT compilation, and no benefit over PGO, while losing out on global optimizations that aren't feasible in a JIT.

If that was "the" problem, why would past versions of IDEA be fast?

Java and the JVM are very fast. I know, because I used to do cutting-edge algorithmic trading in C under the premise that Java is slow, and then I was asked to rewrite it in Java; the result was only about 10-30% slower. We are talking about receiving messages from the network, performing complex processing and sending responses (on another connection) within 5-10 microseconds.

And that was a decade ago; my understanding is that Java has only gotten better in the meantime, but I now work on more mundane backend/reactive systems and don't require really low latency.

I never once said Java was slow. I was saying that JIT compiling is a bad fit for an IDE. The workloads present in an IDE cause lots of problems for JIT compilers. And those problems actually can be exacerbated by an evolving code base with lots of new features.

> The workloads present in an IDE cause lots of problems for JIT compilers

This would suggest the JIT is the problem. But that is not true; the "workload problem" really means "bloat".

Any software is written for the machine it runs on, and Java programs are written for the JVM. No machine is perfect, and writing performant software requires understanding the peculiarities of the architecture you are working on. If you ignore that, it is not a problem with the platform; the problem is you.

Now, the trouble with Java software is what I call "OOP bloat", which is basically overloading the runtime with the overhead of abstractions.

What level of overhead is acceptable depends on how much you value your time versus the performance of the application, so you can't categorically declare it bad. That assumes the overhead accomplishes something else (like making the code simpler or easier to develop/maintain).

Yes, the JIT is the problem. And no, this has absolutely nothing to do with OOP or bloated abstractions. Java is a great language, and the JetBrains IDEs are written extremely well, with very high coding standards. It is all about the JIT. JITs do really well on code with lots of hot loops and few branches, because inlining and loop optimizations are the classic case for needing execution profiles. But IDEs are the opposite of that: branches everywhere and very few hot loops. A JIT is the worst possible compilation model for this type of code, because it is constantly optimizing, deoptimizing, and reoptimizing. The JIT is just constant overhead for something that should be profiled and compiled once.
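To make that concrete, here is a small sketch (class name, method, and iteration counts are my own invention) of a branchy, loop-light workload of the kind described above, plus the standard CompilationMXBean API for observing how much time the JIT has spent compiling:

```java
import java.lang.management.CompilationMXBean;
import java.lang.management.ManagementFactory;

public class JitTime {
    // A branchy dispatcher: many paths stay warm-ish, none gets truly hot,
    // which is the unfavorable profile for a JIT that the comment describes.
    static int dispatch(int op, int x) {
        switch (op % 8) {
            case 0:  return x + 1;
            case 1:  return x - 1;
            case 2:  return x * 3;
            case 3:  return x / 2 + 1;
            case 4:  return x ^ 0x5f;
            case 5:  return Integer.rotateLeft(x, 3);
            case 6:  return Math.max(x, 7);
            default: return x;
        }
    }

    public static void main(String[] args) {
        int acc = 0;
        for (int i = 0; i < 5_000_000; i++) {
            acc = dispatch(i, acc);
        }
        // Cumulative wall time the JIT threads have spent compiling so far.
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        if (jit != null && jit.isCompilationTimeMonitoringSupported()) {
            System.out.println("JIT compile time: " + jit.getTotalCompilationTime() + " ms");
        }
        System.out.println("acc=" + acc);
    }
}
```

Running with -XX:+PrintCompilation additionally logs each compile and deoptimization event, which is the directly visible form of the churn being argued about.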

Looks like a nice list of updates. Any Kotlin performance improvements would be very welcome and it seems that they've done some work there.

We tried Code With Me a couple of times for pair programming. It's great when it works, but we still had some issues, e.g. autocomplete breaking or people having trouble joining with the pre-release versions we tried. A great tool, though, if they manage to fix this.

I find this link better (more visual): https://www.jetbrains.com/idea/whatsnew/

Hopefully they will support Wayland in next version.

Wayland support is pretty terrible, but usable. I use it for my daily dev work.

- Some graphically intensive parts (such as the Markdown preview renderer) cause crashes. Turning the preview off remedies this.

- The HiDPI situation is garbage as usual (but that's a Linux thing in general). IntelliJ renders at half resolution, so it's not as crisp as it could be, but it's perfectly usable.

- The 'everything is a window' design choice, combined with 'you cannot click on any IntelliJ instance if _one_ of them has a window open', is worse on Wayland. This may be due to my WM of choice (Sway), but prompts are often rendered behind popup windows.

- Context menus have issues with immediately closing due to some mouse event. I now use keyboard shortcuts instead of mouse clicks as a workaround. As a bonus, I am discovering all sorts of shortcuts that boost my productivity.

- Overall rendering performance (like input lag or scrolling FPS) seems worse on Wayland than on X11, but I have not benchmarked this. It may just be me expecting a bit too much from the Wayland performance promise.

> you cannot click on any IntelliJ instance if _one_ of them has a window open

Does anyone know why they do that? It’s pretty annoying even without wayland as I regularly have DataGrip, 2-3 Rider instances, and WebStorm open.

You are running Xwayland, not Wayland.

I am running native Wayland, not xwayland

How? Are you using a custom JDK with Wayland support?

Not an expert on the subject, but AFAIK JetBrains ships its own Java VM by default.

To run IntelliJ on Wayland you do need _JAVA_AWT_WM_NONREPARENTING=1 in your env. It works, but has some quirks, as I wrote above.

Sorry, that's not true. This just makes it "work", but it's still being rendered in Xwayland.

Here's how you can test:

Run intellij; `pkill Xwayland`

or, try running intellij on a machine which does not have xwayland installed at all.

The thing is: Wayland does _not_ have scaling issues; those are inherently X11 problems.

The issue which is tracking this: https://youtrack.jetbrains.com/issue/JBR-3206

I also just ran this and watched for Xwayland to spawn; when I killed Xwayland, IntelliJ died.

Looks like you are right, I didn't realize that Xwayland was still used in the background.

Isn't that a JDK issue?

Correct, but I saw that they are working on it in their JDK fork.

See also the Kotlin improvements, such as 25% faster highlighting and 50% faster completion: https://blog.jetbrains.com/kotlin/2021/04/kotlin-plugin-2021...

The feature I'm talking about, the "Related Problems view", is from last year, but I think it's a strong innovation, as it dramatically reduces the feedback loop of refactoring: https://blog.jetbrains.com/idea/2020/06/intellij-idea-2020-2...

The remote editor stuff looks very interesting!

While you don't normally do complicated editing on remote machines, the ability to run the backend on WSL2 or Docker lets me have different environments for different projects.

This was added to Visual studio code last year and worked very well.

When a job change forced me into a Windows environment, having that remote option in VS Code was probably the biggest factor that let me keep that job. This was back on WSL1, which was horrible, so I ended up running a Linux VM to get full Docker and Linux support. I tried PyCharm through VcXsrv, but it was just a bit off. With the SSH plugin for VS Code, though, it was incredibly slick.

WSL2 would have the same advantages: a Linux VM running under Hyper-V. I see PyCharm 2021.1 has the same WSL connectivity as IntelliJ, so I will have to try this out. I'd love it if they added an SSH option.

For anyone else who finds the idea of having someone poke around in your IDE creepy, or a potential security issue: it's possible to disable the Code With Me plugin, which is enabled by default.

Why? It's meant for pair programming and has already been done as "Live Share" in Visual Studio / Visual Studio Code. It's a way to work together in only what you give the guest access to.

Thanks, I understand what it's for, and I personally didn't want a potential security hole added to my work environment. It's just an FYI for others who are concerned about security. Here are the security details: https://www.jetbrains.com/help/idea/faq-about-code-with-me-s...

The plugin is installed by default, but Code With Me access is disabled by default. You need to explicitly enable access and accept a EULA before it works.

Actually, the plugin is enabled and running by default, but it can be disabled. Enabling access is a different thing.

What I mean is that the plugin doesn't do anything until you connect it to a server.

I have used IntelliJ as my code editor in the past, but it was too slow even on a decent PC. I switched to Visual Studio Code and never looked back. I hope they add more optimization fixes in their next update.
