Anyone who is using SSE / AVX / AVX512 intrinsics probably should know about Intel's excellent Intrinsics Guide. The Intel guide is a reference; this PDF is a tutorial. Both resources will be helpful to anyone seriously doing SIMD on the CPU.
It’s not limited to C++; it’s equally good for C.
Over time, support is slowly arriving in other languages too, like C#: https://docs.microsoft.com/en-us/dotnet/api/system.runtime.i... https://docs.microsoft.com/en-us/dotnet/api/system.runtime.i...
Only GNAT Ada uses GCC/LLVM; the other surviving Ada vendors have their own compilers, and I have no idea how much they expose the underlying SIMD intrinsics.
Rust has some initial support. I don't follow it closely; last time I checked, it was only available on nightly, with a very basic subset on stable.
Zig, no idea; I don't follow it that much, since being yet another C just with bounds checking isn't something I care about.
This seems really close to the metal, close enough to either carry a non-negligible maintenance cost or fail to fully exploit the hardware at hand.
My experience with both was that the further I moved away from the super classic SIMD cases, the more I ran into crazy compiler cliffs where tiny tweaks would blow up the codegen. In each case I gave up, reimplemented what I wanted directly in C++ (the second time using Agner Fog's wonderful vector class library), and easily got the results I wanted without a ton of finagling with the compiler and libraries.
> Plus, when you get the hang of it, writing your own SIMD library is fairly simple
Hm... it's indeed easy to start, but maintaining https://github.com/google/highway (supports clang/gcc/MSVC, x86/ARM/RISC-V) is quite time-consuming, especially working around compiler bugs.
For many practical problems, ISPC’s abstraction is not a good fit. It’s good for linear algebra with long vectors and large matrices, but SIMD is useful for many other things besides that. A toy problem: count the spaces in a 4 GB buffer in memory. I’m pretty sure manually written SSE2 or AVX2 code (inner loop doing _mm_cmpeq_epi8 and _mm_sub_epi8, outer one doing _mm_sad_epu8 and _mm_add_epi64) is going to be faster than the ISPC-made version.
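For reference, here’s a minimal sketch of that nested-loop structure (the function name and the multiple-of-16 length restriction are my own; remainder handling is omitted). The inner loop counts matches per byte lane, and must flush at most every 255 iterations so the 8-bit counters can’t overflow:

```cpp
#include <emmintrin.h>  // SSE2
#include <cstdint>
#include <cstddef>

// Count ' ' bytes in a buffer; len is assumed a multiple of 16 for brevity.
uint64_t countSpaces(const uint8_t* p, size_t len) {
    const __m128i space = _mm_set1_epi8(' ');
    __m128i total = _mm_setzero_si128();       // two 64-bit counters
    size_t i = 0;
    while (i < len) {
        __m128i acc = _mm_setzero_si128();     // 16 per-lane byte counters
        // Inner loop: at most 255 iterations so byte lanes can't overflow.
        size_t end = i + 255 * 16;
        if (end > len) end = len;
        for (; i < end; i += 16) {
            __m128i v = _mm_loadu_si128((const __m128i*)(p + i));
            __m128i eq = _mm_cmpeq_epi8(v, space);  // 0xFF where byte == ' '
            acc = _mm_sub_epi8(acc, eq);            // subtracting -1 adds 1
        }
        // Outer loop: horizontally sum 16 byte counters into two uint64 lanes.
        total = _mm_add_epi64(total, _mm_sad_epu8(acc, _mm_setzero_si128()));
    }
    return (uint64_t)_mm_cvtsi128_si64(total)
         + (uint64_t)_mm_cvtsi128_si64(_mm_unpackhi_epi64(total, total));
}
```

An AVX2 version is the same shape with `__m256i` and the `_mm256_` equivalents, processing 32 bytes per iteration.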
The main goal in ispc's design was to support SPMD (single program multiple data) programming, which is more general than pure SIMD. Handling the relatively easy cases of (dense) linear algebra that are easily expressed in SIMD wasn't a focus as it's pretty easy to do in other ways.
Rather, ispc is focused on making it easy to write code with divergent control flow over the vector lanes. That is painful to do in intrinsics, especially in the presence of nested divergent control flow. If you don't have that, you might as well use explicit SIMD, though perhaps via something like Eigen in order to avoid all of the ugliness of manual use of intrinsics.
> I’m pretty sure manually written SSE2 or AVX2 code (inner loop doing _mm_cmpeq_epi8 and _mm_sub_epi8, outer one doing _mm_sad_epu8 and _mm_add_epi64)
ispc is focused on 32-bit datatypes, so I'm sure that is true. I suspect it would be a more pleasant experience than intrinsics for a reduction operation of that sort over 32-bit datatypes, however.
Depends on use case, but yes, can be complicated due to lack of support in hardware. I’ve heard AVX512 fixed that to an extent, but I don’t have experience with that tech.
> perhaps via something like Eigen
I do, but sometimes I can outperform it substantially. It’s optimized for large vectors. In some cases, intrinsics can be faster, and in my line of work I encounter a lot of these cases. Very small matrices like 3x3 and 4x4 fit completely in registers. Larger square matrices of size like 8 or 24, and tall matrices with small fixed count of columns, don’t fit there but a complete row does, saving a lot of RAM latency when dealing with them.
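As an illustration of the "whole matrix in registers" point, here’s the classic SSE pattern for a 4x4 matrix-times-vector product with the matrix held as four column registers (the function name is mine, not from any particular library):

```cpp
#include <xmmintrin.h>  // SSE

// 4x4 matrix (stored as 4 column vectors, all living in registers)
// times a vector: broadcast each component of v and accumulate.
static inline __m128 mat4MulVec(__m128 c0, __m128 c1, __m128 c2, __m128 c3,
                                __m128 v) {
    __m128 x = _mm_shuffle_ps(v, v, _MM_SHUFFLE(0, 0, 0, 0));  // broadcast v.x
    __m128 y = _mm_shuffle_ps(v, v, _MM_SHUFFLE(1, 1, 1, 1));
    __m128 z = _mm_shuffle_ps(v, v, _MM_SHUFFLE(2, 2, 2, 2));
    __m128 w = _mm_shuffle_ps(v, v, _MM_SHUFFLE(3, 3, 3, 3));
    __m128 r = _mm_mul_ps(c0, x);
    r = _mm_add_ps(r, _mm_mul_ps(c1, y));
    r = _mm_add_ps(r, _mm_mul_ps(c2, z));
    r = _mm_add_ps(r, _mm_mul_ps(c3, w));
    return r;
}
```

No loads or stores touch memory for the matrix itself once the columns are in registers, which is exactly what makes the small fixed-size cases fast.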
> to avoid all of the ugliness of manual use of intrinsics
I don’t believe they are ugly; I think they just have a steep learning curve.
> I suspect it would be a more pleasant experience than intrinsics for a reduction operation of that sort over 32-bit datatypes
Here’s an example of how to compute an FP32 dot product with intrinsics: https://stackoverflow.com/a/59495197/126995 I doubt ISPC’s reduction is going to result in similar code. Even clang’s automatic vectorizer (which I have a high opinion of) doesn’t do that kind of thing with multiple independent accumulators.
ISPC lets you request that the gang size be larger than the vector size to get 2 accumulators out of the box. If having more accumulators is crucial, you can have them at the cost of not writing idiomatic ispc, but I'd argue the resulting code is still more readable.
I'm no expert, so there might be flaws I don't see, but the generated code looks good to me. The main difference I see is that ISPC does more unrolling (which may be better?).
Here is the reference implementation: https://godbolt.org/z/MxT1Kedf1
Here is the ISPC implementation: https://godbolt.org/z/qcez47GT5
Line 36 computes ymm6 = (ymm6 * mem) + ymm4; the next instruction on line 37 computes ymm6 = (ymm8 * mem) + ymm6.
These two instructions form a dependency chain: the CPU can’t start the instruction on line 37 before the one on line 36 has produced a result. That’s going to take 5-6 CPU cycles depending on the CPU model. The same happens for the ymm5 vector between the instructions on lines 38 and 41, and in a few other places.
In the reference code, all 4 FMA instructions in the body of the loop are independent from each other, so a CPU will run all 4 of them in parallel. The data dependencies are only across loop iterations, so the complete loop is limited to 4-5 cycles/iteration. That’s OK because the throughput limit (probably not the FMA throughput though; I think load port throughput is saturated before FMA, especially for unaligned inputs) is smaller than that.
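The multiple-accumulator technique looks like this in source form. A minimal SSE sketch, assuming a length that's a multiple of 16 floats (I'm using SSE mul+add here instead of the AVX FMA discussed above so it runs on baseline x86-64; the function name is mine):

```cpp
#include <xmmintrin.h>  // SSE
#include <cstddef>

// FP32 dot product with 4 independent accumulators, so consecutive
// multiply-adds in one iteration don't form a single dependency chain.
// len is assumed to be a multiple of 16 floats for brevity.
float dotProduct(const float* a, const float* b, size_t len) {
    __m128 acc0 = _mm_setzero_ps(), acc1 = _mm_setzero_ps();
    __m128 acc2 = _mm_setzero_ps(), acc3 = _mm_setzero_ps();
    for (size_t i = 0; i < len; i += 16) {
        // These 4 updates are independent; the CPU can run them in parallel.
        acc0 = _mm_add_ps(acc0, _mm_mul_ps(_mm_loadu_ps(a + i),
                                           _mm_loadu_ps(b + i)));
        acc1 = _mm_add_ps(acc1, _mm_mul_ps(_mm_loadu_ps(a + i + 4),
                                           _mm_loadu_ps(b + i + 4)));
        acc2 = _mm_add_ps(acc2, _mm_mul_ps(_mm_loadu_ps(a + i + 8),
                                           _mm_loadu_ps(b + i + 8)));
        acc3 = _mm_add_ps(acc3, _mm_mul_ps(_mm_loadu_ps(a + i + 12),
                                           _mm_loadu_ps(b + i + 12)));
    }
    // Reduce the 4 vectors into one, then the 4 lanes into a scalar.
    __m128 acc = _mm_add_ps(_mm_add_ps(acc0, acc1), _mm_add_ps(acc2, acc3));
    acc = _mm_add_ps(acc, _mm_movehl_ps(acc, acc));      // lanes 2,3 -> 0,1
    acc = _mm_add_ss(acc, _mm_shuffle_ps(acc, acc, 1));  // lane 1 -> 0
    return _mm_cvtss_f32(acc);
}
```

The per-iteration dependency chain is then just one vector add per accumulator, which is what keeps the loop at a few cycles per iteration.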
I think it does? I see Clang unroll reductions into multiple accumulators quite often.
SIMD in Java: https://news.ycombinator.com/item?id=14636802
(archived version https://archive.is/C5iZA)
SIMD in Rust: https://news.ycombinator.com/item?id=10111729
SIMD in Python: https://news.ycombinator.com/item?id=10470428
Using SIMD to aggregate billions of values per second: https://news.ycombinator.com/item?id=22803504
Towards fearless SIMD: https://news.ycombinator.com/item?id=18293209
First Impressions of ARM SIMD Programming: https://news.ycombinator.com/item?id=19490542
Here’s one for frustum culling: https://github.com/microsoft/DirectXMath/blob/jan2021/Inc/Di... It’s relatively inefficient when you have many boxes to test against the same frustum, but (a) the compiler may inline and optimize it; (b) failing that, it’s easy to copy-paste and optimize manually: compute these 6 planes and call the BoundingBox::ContainedBy method yourself.
As for frustum culling, that code seems to do one bounding box at a time? Or am I misunderstanding? I was planning to try to do 4 (or however many) checks at a time. I’m ok with checking against bounding spheres too if that makes it easier to vectorize.
Yep, most parts of that library were designed for doing one thing at a time.
Generally speaking, an HPC-style SoA approach can be faster, especially if you have AVX. But there’s a price for that, most importantly code complexity, but also some performance-related things: the RAM access pattern, and uploading to VRAM for rendering.
> I was planning to try to do 4 (or however many) checks at a time
I would start with whatever code is in that library, and only optimize if profiler says so.
They have a sphere-versus-frustum test too, a similar one, i.e. it also tests against these 6 planes; it might be slightly more efficient than the box version.
It's cool to learn these things. And it's downright important to learn these things once you're experienced enough, because you have to use them at some point if you're in the game of optimization. But I would also feel pretty bad if some kid out there wasted a week on a project at work (and got reprimanded for it) that could have been accomplished with a couple of compiler flags, you know?
Here’s an example of the auto-vectorizer in clang 12, which I believe represents the state of the art at the moment: https://godbolt.org/z/6Pe33187W It automatically vectorized the loop and even unrolled it; however, I think the code bottlenecks on shuffles, not on memory loads. There are just too many instructions in the loop, and that vpmovzxbq instruction can only run on port 5 on Skylake.
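I can't reproduce the linked snippet here, but as a stand-in, a loop of roughly this shape, bytes accumulated into a 64-bit sum (the function name is mine), is the kind of code where clang's vectorizer has to emit byte-to-qword widening like vpmovzxbq:

```cpp
#include <cstdint>
#include <cstddef>

// A widening reduction: the narrow input type (bytes) and the wide
// accumulator (uint64_t) force the auto-vectorizer to insert zero-extend
// shuffles, which is where the port-5 pressure comes from on Skylake.
uint64_t sumBytes(const uint8_t* p, size_t len) {
    uint64_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += p[i];
    return sum;
}
```

A manually vectorized version would instead use _mm_sad_epu8-style horizontal sums to do the widening almost for free.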
Compare the assembly with manually vectorized version from an answer on stackoverflow: https://godbolt.org/z/do5e3-
If instead you wrote in an infinite-length vector language and the compiler scalarized it for you, I think that could work better.
Personally, I have often been disappointed. Not much progress in 2 years: http://www.0x80.pl/notesen/2021-01-18-autovectorization-gcc-...
Readers might also find this short intro helpful, including tips on porting. (Disclosure: author)
> many available instructions are missing from the wrappers
Highway can interop with platform-specific intrinsics (on x86/ARM, hwy_vec.raw is the native intrinsic type).
> vectorized integer math often treats vectors as having different lanes count on every line of code
Fair point, that's a cost of type safety. We usually write `auto` to avoid spelling it out.
It seems like there's one intrinsic to do the AND, but it doesn't set ZF.
And there's another intrinsic that will set ZF but doesn't actually store the result of the AND operation:
 vpand ymm, ymm, ymm
 vtestpd ymm, ymm
I'm guessing that either a) I'm missing an instruction, or b) having to modify EFLAGS from AVX instructions incurs a large penalty, and so it's not advisable?
Bitwise instructions are very cheap, 1 cycle of latency. Skylake can run 3 of them every clock, Zen 2 can run 4 of them per clock. I wouldn’t worry about that extra vpand instruction too much.
About vptest: the latency is not great, 6-7 cycles. If you're going to branch on the outcome, and the branch is not predictable (your code takes a random branch every time the instruction at a specific address runs), you're going to waste time. Sometimes that's unavoidable, like when your goal is something similar to the memchr() function (however, I'd recommend _mm256_movemask_epi8 instead for that). But other times it's possible to rework it into something better: mask lanes with _mm256_blendv_[something], zero out stuff with bitwise AND, that kind of thing.
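For the memchr-style case, the movemask approach looks like this. A minimal SSE2 sketch (the 128-bit analogue of the _mm256_movemask_epi8 idea; the function name and multiple-of-16 length assumption are mine, and __builtin_ctz is gcc/clang-specific):

```cpp
#include <emmintrin.h>  // SSE2
#include <cstddef>

// memchr-style search: compare 16 bytes at once, compress the per-byte
// comparison results into a 16-bit scalar mask, and branch on that scalar
// instead of using vptest. Returns len if the byte isn't found.
// len is assumed to be a multiple of 16 for brevity.
size_t findByte(const unsigned char* p, size_t len, unsigned char needle) {
    const __m128i n = _mm_set1_epi8((char)needle);
    for (size_t i = 0; i < len; i += 16) {
        __m128i v = _mm_loadu_si128((const __m128i*)(p + i));
        int mask = _mm_movemask_epi8(_mm_cmpeq_epi8(v, n));
        if (mask != 0)                       // cheap scalar test + branch
            return i + __builtin_ctz(mask);  // index of lowest matching byte
    }
    return len;
}
```

The nice property of movemask over vptest here is that the resulting scalar mask also tells you *which* lane matched, via the count of trailing zeros.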