
Yes, that's the obvious (and boring!) answer, which I mention in the introduction and which is, in a way, the implicit conclusion. But then it doesn't teach us any SIMD :)





Your article isn't really about how to speed up a debug build, though, and I therefore think you're likely not going to find the right audience. To be honest, I gave up on your article: while I found the premise of speeding up a debug build really interesting, I (currently) have no interest in hand-optimizing SIMD... but at another time, or if I were someone else, I might find that really interesting, and then I would not have thought to look at this article. "Hand-optimizing SHA-1 using SIMD intrinsics and assembly" is just a very different mental space than "making my debug build run 100x faster", even if they are two ways of describing the same activity. "Using SIMD and assembly to avoid relying on compiler optimizations for performance" also feels better? I would at least get it if your title were a pun or a joke or was in some way fun--at which point I would blame Hacker News for pulling articles out of their context and not having a good policy around public-facing titles and subtitles--but it feels like, in this case, the title is merely a poor way to describe the content.

Could be that he did it for fun and not to reach a target audience?

If so, that's perfectly fine, but I still agree with saurik--the title is rather misleading. The article is mainly about how to speed up SHA-1 (without using compiler optimizations).

You would appreciate https://aras-p.info/blog/2024/09/14/Vector-math-library-code...

TLDR: SIMD intrinsics are great. But, their performance in debug builds is surprisingly bad.
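
For anyone who hasn't run into this, here's a minimal sketch of the kind of code in question (plain SSE2; the function and names are mine for illustration, not from the article):

    #include <emmintrin.h>  // SSE/SSE2 intrinsics
    #include <cstddef>      // std::size_t

    // Sums n floats, four lanes at a time. Built with optimizations
    // on, acc lives in an XMM register for the whole loop. Built with
    // optimizations off, compilers typically spill each intrinsic's
    // inputs and result to the stack, so this "fast" version can lose
    // to a plain scalar loop.
    float sum4(const float* p, std::size_t n) {
        __m128 acc = _mm_setzero_ps();
        for (std::size_t i = 0; i + 4 <= n; i += 4)
            acc = _mm_add_ps(acc, _mm_loadu_ps(p + i));
        // Horizontal add of the four lanes. (Tail elements beyond a
        // multiple of 4 are ignored to keep the sketch short.)
        acc = _mm_add_ps(acc, _mm_movehl_ps(acc, acc));
        acc = _mm_add_ss(acc, _mm_shuffle_ps(acc, acc, 1));
        return _mm_cvtss_f32(acc);
    }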


I am not sure I understand this, and unless I am missing something obvious, I believe it's wrong and misleading. Why would hand-written SIMD perform worse than scalar, non-autovectorized code in debug builds?

I’m not sure why compilers generate slow code for SIMD intrinsics when optimizations are disabled. But, it is observable that they do.

Aras and pretty much all gamedevs are concerned about this because they use SIMD in critical loops, and debug build performance matters a lot in gamedev.

Debug build performance has been an issue forever, for a variety of reasons. The most common solution is to keep certain critical systems optimized even in debug builds, and to disable optimizations for an individual subsystem only when it is the specific target of debugging. It's inconvenient for game engine devs. But, that's a small subset of the engineering team.
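
To make that concrete, the usual escape hatches look something like this (a sketch; whether these actually raise the optimization level above the project's debug flags varies by compiler and version, and many teams just compile the hot translation units with /O2 or -O2 in the build system instead -- SimdHotLoop here is a hypothetical stand-in):

    #include <cstddef>

    // Keep a hot subsystem optimized even in a debug build, and flip
    // it off only when this code is the thing being debugged.
    #if defined(_MSC_VER)
    #pragma optimize("gt", on)        // MSVC: functions from here on
    #elif defined(__GNUC__) && !defined(__clang__)
    #pragma GCC optimize("O2")        // GCC: rest of this file
    #endif

    void SimdHotLoop(float* dst, const float* src, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)  // intrinsics-heavy loop,
            dst[i] = src[i] * 2.0f;          // elided in this sketch
    }

    #if defined(_MSC_VER)
    #pragma optimize("", on)          // back to the build's /O settings
    #endif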


Why would you slap an LLM response to my question?

Dude... I'm a greybeard game engine developer. I write game engine critical loops using SIMD intrinsics. https://old.reddit.com/r/gamedev/comments/xddlp/describe_wha...

These AI slop accusations are getting ridiculous. Is the problem that I was too thorough in my response? :P It comes from decades of explaining bespoke tech to artists.

And, the writer of the "wrong and misleading" article I linked was the Lead Graphics Programmer of Unity3D for 15 years! XD


Well, the response really sounded unnatural and LLM-y. If it's not, then please accept my apology.

I write SIMD kernels, and the conclusion drawn in the article makes no sense regardless of who wrote it. I don't doubt the observations made in the experiments, but I do doubt the hypothesis that the SIMD itself is what's slowing down the code.

The actual answer is in the disassembly, but unfortunately it wasn't shown.
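
For what it's worth, that part is easy to check at home (GCC/Clang flags shown; MSVC's cl /FA produces a similar .asm listing, and the file name is a placeholder):

    g++ -O0 -S -masm=intel sha1_kernel.cpp -o kernel_O0.s
    g++ -O2 -S -masm=intel sha1_kernel.cpp -o kernel_O2.s

In the -O0 listing you'd typically see a stack store and reload around nearly every intrinsic; at -O2 the vector values stay in XMM registers across the loop.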



