My gut reaction is that this[1] looks like a NUMA issue. The flip-flop performance profile immediately reminds me of a very gnarly prod issue I led the investigation of. I'd want to rule out NUMA first.
You can use “numactl --cpunodebind=0 --membind=0 <your original java benchmark command>” to pin the JVM to a particular NUMA node and immediately discount my notion as bunkum.
[1] all the reasons mentioned that make me think numa:
1. The machine affected has multiple NUMA domains
2. The M1 is a single NUMA domain chip and does not exhibit the behaviour
3. The real thing making me think this: the consistent performance flip-flop is close to the ~40% overhead of accessing remote memory vs local that I've seen previously. You're seeing higher overheads, but your code is a much tighter loop of memory accesses than my previous experience, so that could explain the difference, I think.
The thing that blew my mind when I ran into this was that the JVM had actually been NUMA-aware for many years at that point, but it was all turned off by default. This was JDK 8 days, so things are probably better now.
After I figured out the root cause, I spent about a day logging results in Excel as I tested various GC options; the only saving grace was that, as I recall, the Oracle documentation for GC options and their NUMA effects was pretty good. That optimisation time was really well spent though: I got meaningful improvements in both service latency and throughput.
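For the curious, the main switch is `-XX:+UseNUMA`, which back then was only honoured by the parallel collector, so something like `java -XX:+UseNUMA -XX:+UseParallelGC ...` was the starting point; if I remember right, G1 only gained NUMA-aware allocation much later (around JDK 14).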
I had this issue using JDK 17 and 18 as well, so at least by default things are not better. =)
I wonder if there is a way that does not involve using `numactl`. It requires the container to run in privileged mode, and that is not how the containers are run in the drag-racing benchmark...
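One possibility (untested in that harness): Docker itself exposes `--cpuset-cpus` and `--cpuset-mems` on `docker run`, which pin the container to specific cores and NUMA memory nodes via cgroups and should not need a privileged container.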
Have you tried setting up a JMH benchmark? This should allow you to see if the JIT is the cause of your slowdowns.
Also, running it under a profiler (I recommend async-profiler[1]) should give you a good idea of where the slowdown occurs which might help you pin it down further.
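A rough sketch of what such a JMH harness could look like (the sieve body and the `limit` parameter here are just stand-ins for whatever the repo actually runs):

```java
import java.util.BitSet;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

// Rough JMH sketch: JMH handles forking, warmup and measurement, so the
// "how much warmup is enough?" guesswork goes away.
@State(Scope.Benchmark)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
@Warmup(iterations = 5, time = 1)
@Measurement(iterations = 5, time = 1)
@Fork(3)
public class SieveBench {

    @Param({"1000000"})
    int limit;

    @Benchmark
    public BitSet sieve() {
        // Straightforward BitSet sieve, standing in for the one under discussion.
        BitSet composite = new BitSet(limit + 1);
        for (int p = 2; (long) p * p <= limit; p++) {
            if (!composite.get(p)) {
                for (int i = p * p; i <= limit; i += p) {
                    composite.set(i);
                }
            }
        }
        return composite;
    }
}
```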
This is probably the first thing the author should try. From a quick look at their own harness, it's sort of trying to guess what 'warmup' is required. The JIT kicks in at invocation counts (among other things) rather than elapsed time, so the warmup on differently performing hardware or JVM implementations might be doing different things. JMH's purpose is to take the guesswork out of exactly that kind of microbenchmark.
Would love to see the output of this if someone does it.
Sometimes things like this are caused by your code being "un-jitted" when some criterion is met, like an exception being thrown (although BitSet does not look to do that internally).
Thanks! I'll check JMH out now. A week or so ago, while I thought the problem had to do with something in Clojure - https://twitter.com/pappapez/status/1486339518902984707?s=20... - I ran async-profiler via clj-async-profiler. It turned out to not be granular enough. OTOH, the problem had different clues then, and I should try it again now with the latest Java-only repro.
I've seen similar performance deltas from run to run for both JMH and non-JMH benchmarks, though I don't remember any of them being a 100% delta. Very annoying, because it means you need to wrap the benchmark in a bash script and average over a dozen-plus invocations of java to get semi-meaningful results.
I'm stumbling onto a strange Java performance irregularity, especially easy to hit with Docker on Linux. A super straightforward implementation of the sieve of Eratosthenes using a Java BitSet for storage sometimes drops in performance by 50%. Even more with JDK 8. Any Java/JDK/JVM experts here who can shed some light on the mystery for me? I've been at it a while, but it is quite a bit outside my field of knowledge, and I am out of ideas for how to proceed. The blog article linked plus the Git repository is an attempt to summarize/minimize things.
Do you have performance traces of good and bad runs to compare? Instead of trying to come up with more theories you should start by looking at where the time is being spent.
I'm not experienced with doing micro-optimisations in Java, but I assume you can profile it to find out which individual operations are taking up the time just via a sampling trace.
Given it's common on a particular OS setup, I wouldn't be surprised if it's in the System.currentTimeMillis() call that you're doing on every loop iteration.
You could try running it with the Linux `perf` command.
`perf stat -d java ...`
If your CPU supports performance counters, it should give you L1 cache misses, branch misses, etc. That might give you some insight into what is different between the runs. Someone else mentioned it could be memory alignment: I think Java might allocate with 8-byte alignment, and maybe something funny goes on with the L1 caching if the BitSet allocation is not 16- or 32-byte aligned.
If you are running it within Docker you might need to add: `--security-opt seccomp:unconfined --cap-add SYS_PTRACE --cap-add CAP_SYS_ADMIN`
This would be my first go-to as well. See if you are retiring fewer instructions, and if yes, see if the counters tell you what the CPU is stalling on compared with the faster runs.
Are you testing on an x64 laptop? It could possibly be CPU throttling. Maybe try running the benchmark written in a non-JVM language to see if you get the same behaviour?
It's a mini-PC. Could be throttling. There are benchmarks written in tons of languages here: https://github.com/PlummersSoftwareLLC/Primes I've so far only seen this behaviour with the JVM involved.
It would also be interesting to see whether this is a Java issue or a JVM implementation issue: i.e., does an OpenJ9-based JVM (available under the weird IBM Semeru name here [1]) have similar behaviour?
Yes, 1X and 0.5X are a bit arbitrary. I mean 1X as the speed I know it can run at. So it could well be that with JDK 7 the optimizations could not happen, and the question then is why the optimizations do not always happen.
Making each run longer, 1 minute instead of 5 seconds, displays the same performance toggling:
Not certain. But there are a lot of implementations very similar to mine in tons of other languages and VMs. None display this, AFAIK. Also, I get different behavior with JVM/JDK 7.
First make sure there are no user or kernel tasks that may be hogging resources sometimes. Maybe even try disabling some peripherals and similar, at the hardware or kernel level.
Lots of stuff you could try then:
* disabling ASLR
* disabling Turbo Boost and similar CPU settings (a command sketch for these two follows the list)
* changing the CPU performance scaling governor (from "powersave" to "performance"): printf performance | tee /sys/devices/system/cpu/cpufreq/policy*/scaling_governor
* run the code under a real-time scheduling policy (like SCHED_FIFO) with the highest priority. If you try this, you also need to allow unthrottled real-time scheduling by writing "-1" to /proc/sys/kernel/sched_rt_runtime_us.
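For the first two items, a rough sketch of how I'd do it on Linux: ASLR can be turned off system-wide with `echo 0 | sudo tee /proc/sys/kernel/randomize_va_space` (or per-run with `setarch "$(uname -m)" -R <command>`); Turbo Boost with `echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo` on Intel, or `echo 0 | sudo tee /sys/devices/system/cpu/cpufreq/boost` on AMD if that file exists; and the SCHED_FIFO run can be launched with `chrt -f 99 <command>`.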
But modern CPUs are not predictable in their performance; that's why microcontrollers are usually used for hard real-time requirements. So I doubt you'll ever be able to get absolutely consistent performance across all benchmark runs.
I have played similar benchmarking games myself before, and it turns out that, although I did most of the stuff described above and my code was C++ (no JIT), big slowdowns do happen with some unavoidable but fairly predictable frequency.
On a related note, also try disabling the CPU vulnerability mitigations (e.g. booting with mitigations=off)... if that's actually a relevant thing to do (I'm somewhat out of the loop - maybe they're off by default or maybe the overhead is minimal now, not sure).
Disabling mitigations could improve performance, but I'm not sure whether it would change how the performance distribution looks.
So, in my experience the distribution is a finite set of pulses; equivalently, the performance largely falls into one of a few predefined levels, where the fastest level is usually the most likely, and the levels roughly get less likely the slower they are.
So what I'm saying is that disabling the hardware-vulnerability mitigations may make all performance levels faster, but I doubt it would eliminate many of them. I'll for sure play around with that some day, though.
Aligned branch and jump targets are just for the sake of maximising instruction-cache hit rate. They will always remain a micro-optimisation and will not make a difference in this or most other cases.
My first thought was that it's a bug in the deoptimizer, i.e. the JIT compiler dynamically switches back to a deoptimized form to throw away invalid optimizations (or so it thinks) and apply new optimizations which are more relevant to the current load/usage patterns. [0]
I think I've seen a bug report once about this deoptimization/optimization process happening in an infinite loop, but why would it happen only under Docker on Linux and not on the Mac? Perhaps the deoptimizer relies on heuristics that depend on the current environment.
I think this might be it. When I run with the Epsilon GC and use taskset to force just one CPU core, the performance is consistently bad. When I instruct taskset to use two logical cores, the performance is good. This is a single-threaded test, so why would it need two logical CPU cores? I'm testing with a Ryzen 7 1700.
Edit: Upon further testing, changing the taskset affinity after the process launches has no effect. The JVM seems to look at the number of CPU cores available when it starts and then uses that for making compilation decisions.
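(If that is what is going on, two flags may help confirm it without taskset: `-XX:ActiveProcessorCount=1` overrides the CPU count the JVM detects at startup, and `-XX:+PrintCompilation` logs what C1/C2 compile so the good and bad runs can be compared.)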
So far my attempts to reproduce the alleged performance degradation have not been fruitful. I've written up a fairly detailed gist [1] on how to get CPU performance metrics; the appendix also has a dump of C1- and C2-compiled methods (useful for comparison). I also ran on a 2-node NUMA machine; binding CPU and memory to different nodes didn't yield a repro either.
Seeing that the benchmarks are running inside Docker, could it be Docker related? Does Docker throttle CPU on different machines differently?
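(As far as I know, Docker only throttles CPU if limits such as `--cpus` or `--cpu-quota` are set; `docker inspect` on the container should show whether any are.)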
Check the temperature of the CPU. Modern CPUs will slow down when they run too hot. Also, does an anti-virus scan run from time to time? Does an expensive backup or telemetry job run during the benchmarks?
Reading the blog again and seeing that the results drop stepwise to the exact same level, it really looks like the CPU is being throttled or some quota limit has been triggered.
This issue isn't directly related to BitSet. I have observed the same thing in my programming-language interpreter that runs on the JVM (well, it's written in Kotlin Multiplatform, so it runs on JS and natively as well).
I start the interpreter and measure the time it takes to add all the numbers below 1000000000.
The first time I run it after starting the interpreter, it always takes 1.4 seconds (within 0.1 second precision). The second time I run it, it takes 1.7 seconds, and every invocation after that takes 2 seconds.
If I stop the interpreter and try again, I get exactly the same result.
I have not been able to explain this behaviour. This is on OpenJDK 11 by the way.
To run the benchmark, type the following command in the UI:
time:measureTime { +/⍳1000000000 }
My current best guess is that the optimiser decides to recompile the bytecode to improve performance, but the recompiled version actually ends up being slower.
>the optimiser decides to recompile the bytecode to improve performance, but the recompiled version actually ends up being slower.
I had that happen while developing a fast exp(double) implementation: for some reason, casting a double to an int can be very slow (at least on some architectures) if the double is not close to int range, so I took care to put each of the two (mutually exclusive) occurrences of the statement "int i = (int)foo;" after the special cases dealing with huge values. But the JIT reworked the code by hoisting them above the special cases, making the code much slower for huge values. My first workaround was to replace one of the "int i = (int)foo;" statements with "int i = -(int)-foo;", which behaved the same in practice but was not hoisted by the JIT.
Thanks. I'd guess something similar happens here. However, I doubt it's the same case as yours, since there are no floating-point operations involved here.
How were you able to figure it out? Where should I start looking?
A lot of your time is spent on branching and on accessing memory. So at the ASM level, memory access patterns, caching, and branch prediction will affect your performance.
My bet is on the branch predictor, since IIRC AMD has a novel branch predictor that's pretty different from Intel's (not sure about the M1). In C-land you would try loop unrolling (in fact, a decent compiler will do that for you). If the JVM has a control for that, force it; otherwise do it manually and pray the JIT does the right thing.
My first intuition was the BitSet's cache and memory behavior, which might also be the case for some ranges of `p`: internally the BitSet is probably something like a chunk of memory with bitwise indexing. So to change a bit, the machine has to load a bunch of bytes into a register, set the bit, and write that register back to memory. This is bad(*) if you want to set e.g. bits 0 to 31 in your BitSet, because you now get 64 memory accesses instead of two. This might affect you for small p, but with p >= 64 the access stride will be larger than one 64-bit word. Thinking about it, in fact, this could be something that might throw off a naive JIT optimizer.
I would have to think a bit about how to improve this were your code written in C; with the JVM I have no idea how you could optimize here. Maybe do two loops, one for p <= 64 and the other for p > 64.
Regarding cache misses, hm, 1M bits is roughly 125 kByte (or about half that if only odd numbers are stored). On any modern machine that should fit into the cache; I would still monitor for cache misses, though.
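To make the read-modify-write point above concrete, here's a minimal sketch of roughly what `BitSet.set(i)` boils down to (simplified from the OpenJDK source) next to a plain `boolean[]` store:

```java
// Simplified comparison: a BitSet-style bit set is a load/OR/store on a 64-bit
// word, while boolean[] is a single byte store (at the cost of ~8x the memory).
public class BitSetVsBooleanArray {
    static final int BITS = 1 << 20;

    static long[] words = new long[BITS >> 6];   // BitSet-style backing storage
    static boolean[] flags = new boolean[BITS];  // boolean[] alternative

    static void setBitSetStyle(int i) {
        words[i >> 6] |= 1L << i;   // load the word, OR in the bit, store it back
    }

    static void setBooleanStyle(int i) {
        flags[i] = true;            // plain byte store, no read needed
    }

    public static void main(String[] args) {
        setBitSetStyle(123_456);
        setBooleanStyle(123_456);
        System.out.println(((words[123_456 >> 6] >> 123_456) & 1L) + " " + flags[123_456]);
    }
}
```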
(*) Short story: I have a light case of PTSD since I once had to reimplement the storage backend for a Java SQL engine university project because a friend was trying to be too clever and did exactly that.
Anyway, interesting question and I wish you best of luck with figuring out what's going on there :)
//edited a few times; sorry, I'm bad at writing a comment in one go...
Thanks. I'll work a bit on unpacking all this. =)
Meanwhile, I just published an experiment where I have switched out the BitSet for an array of booleans, just to demonstrate my observation that the performance is then perfectly stable. https://github.com/PEZ/ghost-chase-condition/tree/master/tes... Does that affect your analysis in any way?
I don't know the JIT internals, but just ignoring them I'd point the finger at setBit(x) being "load a word from memory into a register, set a bit in the register, write the register back to memory". Now it's just a "write a byte to memory", which is much faster at the expense of using more storage (a factor of 8, assuming the JVM packs booleans one per byte). Though I am surprised caching doesn't mitigate this.
Haha, yes, you are reading correctly. To clarify: my employer bought the machine for me to have for this purpose. They know how grumpy I get with unsolved mysteries.
I ran into a similar problem with Docker on the same sieve in Zig; Podman and bare metal had no performance regression, but Docker did. I kind of gave the fuck up on the Plummer's sieve shootout at that point because, between this and other performance regressions I was finding (like CPU throttling), I felt like I was fighting the stupid rules of the contest more than I was discovering things about performance.
Anyway, for the author: try running it in Podman and see if it eliminates the perf regression.
What happens if you invert the test condition? That is, run it 10k times and see how long that took, rather than running for X time and seeing how many iterations you could do?
You're using System.currentTimeMillis(), which should be fast. My first thought was whether you were using Instant, where sometimes there's one call to VM.getNanoTimeAdjustment and sometimes two.
Every other theory listed here is far more likely, but I would try changing your loop from using System.currentTimeMillis() to System.nanoTime(). It's a higher-resolution time source that has no relation to wall-clock time but is more appropriate for measuring durations. Classes like Stopwatch from Google's Guava use it.
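A rough sketch of what the inverted, nanoTime-based measurement suggested above could look like (the sieve body here is just a stand-in for the one in the repo; the pass count is arbitrary):

```java
import java.util.BitSet;

// Rough sketch: run a fixed number of sieve passes and time them with
// System.nanoTime(), instead of counting passes against System.currentTimeMillis().
public class FixedIterationTiming {
    static final int LIMIT = 1_000_000;
    static final int PASSES = 10_000;

    static int runSieve(int limit) {
        BitSet composite = new BitSet(limit + 1);
        int count = 0;
        for (int p = 2; p <= limit; p++) {
            if (!composite.get(p)) {
                count++;                                        // p is prime
                for (long i = (long) p * p; i <= limit; i += p) {
                    composite.set((int) i);                     // mark multiples as composite
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        int primes = 0;
        for (int pass = 0; pass < PASSES; pass++) {
            primes = runSieve(LIMIT);
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(primes + " primes per pass, " + PASSES + " passes in " + elapsedMs + " ms");
    }
}
```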
From what I can see so far, it doesn't seem related to Java itself at all. What other "these sorts of things" do you have in mind? Java has the best observability tools, so if anything, you will have a harder time debugging performance regressions with Go.