Command + space takes forever (several seconds) to show the context menu.
IntelliJ feels like a proper IDE now :).
This is different from my experience working on Java projects with a few thousand source files and about 1 million SLoC.
Eclipse - autocomplete was slow to open, the IDE was overall unresponsive, and it used around 2 GB of RAM
NetBeans - the cache folder filled up oddly quickly (though that was NetBeans 8.2, not the newer Apache NetBeans), but the IDE worked fine with around 1.5 GB of RAM usage
JetBrains - the project was slow to open (and while indexing), but the IDE performs okay with 0.5 - 1 GB of RAM usage (though more is needed when running/debugging)
Of course, this becomes more of an issue if you only have something like 4 or 8 GB of RAM available.
Disclaimer: this is anecdotal data and I may be wrong regarding the latest versions of these IDEs, since I only use JetBrains products nowadays.
Just to be specific: is this your experience running the IDE at default settings, or did you increase the JVM heap size and see no effect? If you run it at default settings, the memory footprint will stay under 1 GB, because that's all it's allowed to use.
My experience is that it's a great idea to bump the heap size to 4 GB. I have not done any benchmarks on it, but it feels much faster in regular use.
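For anyone who wants to try the same: in recent IntelliJ-based IDEs the heap limit lives in a per-user `.vmoptions` file, reachable via Help → Edit Custom VM Options. A minimal sketch, with illustrative values rather than recommendations (each line is passed to the JVM as one argument, so comments go on their own lines):

```
# idea64.vmoptions -- options for the JVM that runs the IDE itself
# initial heap size, to avoid early resizing
-Xms1g
# maximum heap size
-Xmx4g
```

You can watch the effect by enabling the memory indicator in the IDE's status bar.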
When the IDE is mostly idle (with 10-20 opened source files), it generally uses around 0.5 GB of RAM.
When I'm writing code, using autocomplete, reading documentation, running Maven actions, using the suggestions functionality (basically IntelliSense + some plugins), using refactoring functionality etc., the memory usage can climb to around 1 GB.
When the code is running, memory usage can get closer to the 2 GB limit: using the debugger, recompiling classes on the fly and so on.
Personally, I haven't bumped the limit higher, because I still need the memory for the actual Java processes of the services that I launch: under normal circumstances I need at least 16 GB of RAM just for all of the services to run properly. (That may well be a project-related issue, since scheduled processes and a number of other optional things run locally, and not all of them are configured with Xmx and other parameters.) That's less relevant to actual IDE performance, though, since I have swap turned off and it has no impact.
When working on the same project of a known size and using similar IDE functionality, different IDEs still showed noticeably different performance characteristics (I encourage you to try the same, especially with the same Xms and Xmx values set).
That simply leads me to believe that each IDE implements things differently, for example:
- JetBrains have separate IDEs for separate languages (IntelliJ IDEA for Java, Rider for .NET, PhpStorm for PHP etc.)
- NetBeans allows lazily loading support for different languages/frameworks/tools
- Eclipse is generally viewed as a bunch of interconnected libraries/frameworks (JDT for Java, PDT for PHP etc.)
So it's fine that the default is relatively low.
It's not that the IDE OOMs; it's that it burns CPU cleaning up objects in stop-the-world collections. These stop-the-world events can be mitigated with additional memory.
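Those stop-the-world pauses are easy to make visible. A minimal sketch, assuming a JDK 9+ runtime (which uses the unified `-Xlog` logging syntax): add a line like this to the IDE's `.vmoptions` file and compare the logged pause times at different `-Xmx` values:

```
# write every GC event, with wall-clock and uptime stamps, to gc.log
-Xlog:gc*:file=gc.log:time,uptime
```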
If you have to ask questions like, "What specs do you run and how large is your project?" it is slow.
The thing is, as far as I can tell, there's a very high overlap between the list of features I don't use, and the list of features that contribute to the sluggishness.
And what about starting it with an already indexed project?
For simple projects in Python or JS, nothing. For complex projects in most languages, quite a bit. For anything running in the JVM, a whole lot. That's why I pay for my IntelliJ license every year.
The question here is whether it's quicker to open a project and be immediately productive in Doom Emacs, and it is, very much so. This is in contrast to Spacemacs, which I used for a couple of years but where I had to run Emacs as a daemon to make the startup fast.
If I want to go even faster I'll use Vim, but that's mostly out of habit and proficiency.
> And what about starting it with an already indexed project?
Maybe I'm missing something, but it restarts indexing every time I open the IDE or the project, or when switching branches. Is there any other way?
There isn't a clear order in which features get introduced after something lands in IntelliJ.
However, they mentioned a few times that people who need multiple tools often pick individual apps simply because they are much lighter weight than IntelliJ.
Still, you're better off with the specialized IDEs. Some of their functionality is not available as plugins for IntelliJ IDEA, even though it would probably be feasible.
But IDEs, especially fully featured IDEs, are a terrible type of workload for JIT compilation. They're full of branches, which can easily cause recompilations, and the breadth of features means that there is never really a spot hot enough to stay compiled, outside of the editor itself.
I really wish AOT compilation were taken more seriously in the JVM world. Yes, I know about Graal native-image and the various embedded commercial JVMs, but those are niches. It would be great if I could just precompile a whole app at O3-level optimization using a standard JVM, and not have to worry about the weird and hard-to-debug performance fluctuations that come with JITs.
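For context, the native-image workflow mentioned above is roughly this (a sketch assuming a GraalVM distribution with the `native-image` tool installed; `Hello` is a placeholder class name):

```shell
javac Hello.java
# AOT-compile the class and everything reachable from it
# into a standalone executable (no JVM needed at run time)
native-image Hello
./hello
```

The trade-off is the closed-world assumption: reflection and dynamic class loading need extra configuration, which is part of why it remains a niche.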
You always seem to mention them (class sharing, Java AOT compilers, etc.), but nobody (for values of somebody < enough) seems to be actually using them :-)
It is not my fault many developers don't care about their tooling and settle for "worse is better". :)
Or are these half-done features with several caveats, and/or companies with this or that compiler extension (like Java AOT), that might or might not work well, lack good documentation or any kind of support, and that only some cavalier daredevils opt to use?
I mean, I'm not 100% convinced whether it's devs not caring, or the tools being niche and "use at your own peril", that explains the lack of adoption.
GraalVM, for example, probably still has some rough edges, and that's an official Oracle tool.
In your experience, which are the most mature alternative Java deployment tools/compilers/options?
AOT compilers for Java have existed in production since around 2000; basically all commercial JVMs supported it, specifically those targeted at embedded deployment.
AppCDS in HotSpot is nothing new. Like Flight Recorder, it is a proven component from the BEA JRockit JVM acquired by Oracle, which they eventually decided to make available in HotSpot.
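For reference, with dynamic AppCDS (JDK 13+) the workflow is roughly this sketch, where `app.jar` is a hypothetical application:

```shell
# trial run: on exit, dump the classes that were actually loaded
# into a shared archive
java -XX:ArchiveClassesAtExit=app.jsa -jar app.jar
# later runs: memory-map the archive, skipping much of the class
# loading and verification work
java -XX:SharedArchiveFile=app.jsa -jar app.jar
```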
Likewise, OpenJ9's AOT and JIT caches have years of production testing, from IBM J9 deployments in the WebSphere Real Time JVM (now discontinued), IBM mainframes, AIX and Linux.
PTC and Sonic have been used for years in military deployments and factory automation scenarios.
Aicas focuses mainly on embedded scenarios.
Excelsior JET was quite good, but for whatever reason they went bankrupt.
The Android Runtime, although not Java proper, has been AOT-compiling since Android 5. Starting with Android 7 it evolved into a mix of interpretation, JIT and AOT with PGO fed by the JIT, and in Android 10 it gained the ability to upload PGO profiles to the store so that devices can AOT right away.
So it is apples vs oranges.
One is faster here, the other there.
GC might be an issue, but for me IDEA feels fast and the added benefit of code navigation vs e.g. vim is worth it.
AOT + PGO really is the best of both worlds, but from a devops perspective it gets a bit tricky. But it would be a no-brainer for something like Intellij.
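For the curious, AOT + PGO with GraalVM's native-image looks roughly like this (a sketch; the `--pgo-instrument`/`--pgo` options have historically required the paid GraalVM edition, and `MyApp` is a placeholder class):

```shell
# 1. build an instrumented binary and exercise it on a
#    representative workload; it records a profile on exit
native-image --pgo-instrument MyApp
./myapp
# 2. rebuild, optimizing for the recorded profile
native-image --pgo=default.iprof MyApp
```

The devops wrinkle mentioned above is step 1: you need a representative workload run as part of the build pipeline.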
That has for decades been an "in the future we'll have flying cars" style promise for 99% of workloads...
Sure, a JIT can theoretically optimize based on runtime hints. But most of the time, for any practical use, it's slower than AOT.
Once you have a static type system, the JIT doesn't bring you much over ordinary AOT compilation, and no benefit over PGO, while losing out on global optimizations that aren't feasible in a JIT.
Java and the JVM are very fast. I know, because I used to do cutting-edge algorithmic trading in C under the premise that Java is slow; then I was asked to rewrite it in Java, and the result was only about 10-30% slower. In this case we are talking about receiving messages from the network, performing complex processing, and sending responses (on another connection) within 5-10 microseconds.
And that was a decade ago; my understanding is that Java has only gotten better in the meantime, but I now work on more mundane backend/reactive systems and don't require really low latency.
This would suggest the JIT is the problem. But that is not true; the "workload problem" really means "bloat".
Any software is written for the machine it runs on, and Java programs are written for the JVM. No machine is perfect, and writing performant software requires that you understand the peculiarities of the architecture you are working on. If you ignore them, the problem is not the platform; the problem is you.
Now, the trouble with Java software is what I call "OOP bloat", which is basically overloading the runtime with overheads of abstractions.
What level of overhead is acceptable will depend on how much you value your time versus the performance of the application, so one cannot categorically declare it bad. That assumes the overhead accomplishes something else (like making the code simpler, or easier to develop/maintain).
We tried Code With Me a couple of times for pair programming. It's great when it works, but we still had issues, e.g. autocomplete breaking, or people having trouble joining with the pre-release versions we tried. It will be a great tool if they manage to fix this.
- Some graphically intensive parts (such as the markdown preview renderer) cause crashes. Turning the preview off remedies this.
- The HiDPI situation is garbage as usual (but that's a Linux thing in general). IntelliJ renders at half resolution, so it's not as crisp as it could be, but it's perfectly usable.
- The 'everything is a window' design choice, combined with 'you cannot click on any IntelliJ instance if _one_ of them has a window open', is worse on Wayland. This may be due to my WM of choice (Sway), but prompts are often rendered behind popup windows.
- Context menus have issues with immediately closing due to some mouse event. I now use keyboard shortcuts instead of mouse clicks as a workaround; as a bonus, I am discovering all sorts of shortcuts that boost my productivity.
- Overall rendering performance (input lag, scrolling FPS) seems worse on Wayland compared to X11, but I have not benchmarked this. It may just be me expecting too much from the Wayland performance promise.
Does anyone know why they do that? It's pretty annoying even without Wayland, as I regularly have DataGrip, 2-3 Rider instances, and WebStorm open.
To run IntelliJ on Wayland you do need `_JAVA_AWT_WM_NONREPARENTING=1` in your environment. It works, but has some quirks, as I wrote above.
Here's how you can test:
Run IntelliJ, then `pkill Xwayland`;
or try running IntelliJ on a machine that doesn't have XWayland installed at all.
The thing is: Wayland does _not_ have scaling issues; those are inherently X11 problems.
The issue which is tracking this: https://youtrack.jetbrains.com/issue/JBR-3206
I also just ran this and watched for Xwayland to spawn; when I killed Xwayland, IntelliJ died with it.
While you don't normally do any complicated editing on remote machines, the ability to run the backend on WSL2 or in Docker lets me have different environments for different projects.
This was added to Visual Studio Code last year and worked very well.
WSL2 would have the same advantages - a Linux VM running under Hyper-V. I see PyCharm 2021.1 has the same WSL connectivity as IntelliJ, so I will have to try this out. I would love it if they added an SSH option.