Thinking about this - this may be a pattern that's designed to match something that expands from a string instruction.
While the loop he's testing is a useless bit of code that does nothing, the optimisation he's discovered may help speed up things like scasb/stosb, allowing portions of two unrolled copies to be processed per clock.
I believe I first saw this on IACA; uops.info has the measurements for zero-latency inc, add, etc. on Alder Lake: https://uops.info/html-instr/INC_R64.html . These adds by immediate are nicely closed under composition, so I've been assuming renamed values are uniformly represented in Golden Cove as register+increment.
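If that representation is right, a run of immediate adds can be folded away at rename time because the offsets simply accumulate. A rough sketch of the idea (the {physN, +offset} notation is mine, not anything Intel documents):

    ; architectural code        ; plausible renamed representation
    add rax, 8                  ; rax -> {phys5, +8}
    add rax, 8                  ; rax -> {phys5, +16}   (offsets compose, no ALU op issued)
    add rax, 16                 ; rax -> {phys5, +32}
    cmp rax, rdx                ; the compare reads phys5 and folds in the +32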
> Since the only Alder Lake machine I had access to was a remote Windows machine that didn’t belong to me, I more-or-less had to choose option 3, which meant subjecting myself to The Ultimate Sadness
Well, you can pick up Sapphire Rapids instances from your preferred cloud provider and avoid the sadness.
Note that not only are multiple consecutive increments reduced to zero latency, but this happens even when they're interleaved with movsxd, as in the second experiment at https://uops.info/html-lat/ADL-P/INC_R64-Measurements.html. It'd be interesting to see what other instructions it can "fuse" with (if that is what is happening).
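For reference, uops.info's latency experiments chain the instruction under test through a second instruction, so the sequence being measured is roughly the following (my reconstruction of their harness, not their exact code):

    ; one dependency chain through rax: each movsxd reads the result of
    ; the previous inc, and each inc reads the result of the movsxd
    inc    rax
    movsxd rax, eax
    inc    rax
    movsxd rax, eax
    ; ... repeated many times; the measured latency per inc still comes out near zero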
I don't see a reason why this should be the case, since the high bits of the result would simply be cleared, and it's a common size optimization to use 32-bit operations.
Interesting. I wonder how interleaved 'inc r64' + 'mov r32,r32' would look - that's two separate zero-latency ops, equivalent to 'inc r32'. I wouldn't be too surprised if an eliminated op can only be zero-extending or incrementing, but not both.
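If someone with an Alder Lake machine wants to try it, the test described above would look something like this (hypothetical - I haven't run it):

    ; alternate an increment and a zero-extension on the same register;
    ; compare the measured chain latency against a plain 'inc r32' chain
    inc rax
    mov eax, eax        ; writing eax zero-extends into rax
    inc rax
    mov eax, eax
    ; ... repeat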
Normally it would be either the programmer's or the compiler's job to unroll a loop and then reduce dependency chain lengths.
But it's nice if the renamer can do that as well.
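For comparison, the manual version of that rewrite looks roughly like this - the unrolled body addresses elements with fixed displacements from one base, so only a single add remains on the loop-carried critical path (a sketch, not code from the article):

    ; naive: one increment per element, a serial chain through rax
    next:
        mov  rbx, [rsi + rax*8]
        inc  rax
        cmp  rax, rdx
        jne  next

    ; unrolled by 4: the loads use constant displacements, and only one
    ; 'add rax, 4' per four elements sits on the dependency chain
    next4:
        mov  rbx, [rsi + rax*8]
        mov  rcx, [rsi + rax*8 + 8]
        mov  r8,  [rsi + rax*8 + 16]
        mov  r9,  [rsi + rax*8 + 24]
        add  rax, 4
        cmp  rax, rdx
        jne  next4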
Presumably Intel has real-world data suggesting that significant real workloads can profit from this.
I wonder whether that points to specific software issues, like hypothetically "oh yeah, openjdk8 hotspot was a little too timid at loop unrolling. It won't get that JIT improvement backported, but our customers will use java8 forever. Better fix that in silicon".
Because "increase" and "excrete" have completely different roots that only coincidentally coincide when the verbal nouns corresponding to those words are formed.
You mean, the difference between "going forward" and "coming together"? It's in the prefix, "pro-" (for, forward) versus "con-" (with, together), which gives you different shades of meaning. Can't really say what the verb of movement was, though.
I think he meant it as an absurdist joke, but this is a great response!
I looked it up, "gress" comes from "gradi" in Latin which directly translates to "walk". More specifically: con(pro) + gradi -> congredi (verb) -> congressus (noun)
Edit: Knowing this, "gradient" has an interesting flavour :)
Edit: It looks like the path is more indirect for "gradient"
"gradi" (walk) -> "gradus" (step) -> "grade" (french influence) + "salient" -> "gradient". I like that in Latin "walk" is "to step", or perhaps "step" is "the unit of walking"? "A walking"? Etymology is fun!
You have to use an instruction like cpuid with rdtsc so that the TSC is not read before the loop terminates. There have been changes to the Intel docs and there are more options now:
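The classic pattern from Intel's benchmarking whitepaper brackets the measured region with cpuid and uses rdtscp for the closing read - roughly like this (a sketch; the current docs also describe lfence-based variants):

    cpuid                   ; serializing: earlier instructions complete before rdtsc
    rdtsc                   ; start timestamp in EDX:EAX
    mov   r8d, eax          ; save the low 32 bits (keep EDX too for long runs)
    ; ... loop under test ...
    rdtscp                  ; stop timestamp; waits for prior instructions to execute
    mov   r9d, eax
    cpuid                   ; stops later instructions from starting before the read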
Just when you get used to features like x86 CPUs combining two instructions into one micro-op (macro-fusion), you get something like this.
I guess addition of an immediate is a good choice to execute at the rename/allocation stage, as it's common, relatively simple, and can't generate exceptions.
This isn't really combining, as the result of the first increment is needed by the intermediate compare; rather, it's a rewriting that removes a dependency (or moves it further back in the stream).
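Concretely, in a counted loop the compare architecturally still sees each incremented value, but once the renamer tracks the counter as base + constant, the compare of iteration n no longer has to wait for the increment of iteration n-1 to execute. My reading of the effect, as a sketch:

    ; architecturally a serial chain through rax:
    top:
        inc  rax            ; iteration n produces  base + n
        cmp  rax, rdx       ; ... which this compare reads
        jne  top
    ; after renaming, each iteration's value can be expressed as the original
    ; physical register plus a small constant, so the compares become independent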
Well, except for the fact that you need to read from a register before adding the immediate displacement to it.
You'd have to know the physical register and do the read very early (before renaming), or predict the value!
uops.info's measurements show 'inc r64', interleaved with 'movsxd' instructions, still having zero latency [0], so it can't just be merging the immediates of successive increments (or there's additional fusion happening). A plain unrolled chain of 'inc r64' shows an average latency of 0.2 cycles, i.e. 5 dependent ops per cycle, and 0.2 used ports per instruction [1].
Similarly, 'lea r64, [r64+8]' (imm8) and 'lea r64, [r64+128]' (imm32) and 'add r64, 2' (imm8); but not 'add r64, 0x1000000' (imm32).
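Summarising those measurements as a cheat sheet (going by the numbers cited above, not my own runs):

    inc  r64                  ; eliminated (zero latency)
    add  r64, 2               ; eliminated (imm8)
    lea  r64, [r64 + 8]       ; eliminated (disp8)
    lea  r64, [r64 + 128]     ; eliminated (disp32)
    add  r64, 0x1000000       ; NOT eliminated (imm32)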