
I am not sure I understand this, and I believe it's wrong and misleading unless I am missing something obvious. Why would hand-written SIMD perform worse than scalar, non-autovectorized code in debug builds?





I’m not sure why compilers generate slow code for SIMD intrinsics when optimizations are disabled. But, it is observable that they do.
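For illustration, here's a minimal sketch (my own toy example, not from the article; the dot4 name and the assumption that n is a multiple of 4 are mine) of the kind of loop where it shows up:

    // With optimizations on, the __m128 values stay in XMM registers.
    // In a typical -O0 / /Od build, each intermediate is usually spilled to
    // the stack and reloaded between intrinsics, so this often ends up
    // slower than plain scalar code.
    #include <immintrin.h>

    float dot4(const float* a, const float* b, int n) {   // n assumed divisible by 4
        __m128 acc = _mm_setzero_ps();
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);               // load 4 floats (unaligned)
            __m128 vb = _mm_loadu_ps(b + i);
            acc = _mm_add_ps(acc, _mm_mul_ps(va, vb));     // multiply-accumulate
        }
        float out[4];
        _mm_storeu_ps(out, acc);
        return out[0] + out[1] + out[2] + out[3];          // horizontal sum
    }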

Aras and pretty much all gamedevs care about this because they use SIMD in critical loops, and debug build performance is a major concern in gamedev.

Debug build performance has been an issue forever, for a variety of reasons. The most common solution is to keep certain critical systems optimized even in debug builds, and only disable optimizations for those individual subsystems when they are the specific target of debugging. It's inconvenient for game engine devs, but that's a small subset of the engineering team.
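A rough sketch of that escape hatch (MSVC's #pragma optimize shown; GCC and Clang have per-function equivalents such as __attribute__((optimize("O0"))) and __attribute__((optnone)))): the hot SIMD subsystem stays compiled with optimizations, and you turn them off locally only while that code is the thing being debugged.

    #ifdef _MSC_VER
    #pragma optimize("", off)   // functions below this point: no optimization
    #endif

    void simd_heavy_update_being_debugged(/* ... */) {
        // step through this with full debug fidelity
    }

    #ifdef _MSC_VER
    #pragma optimize("", on)    // restore the command-line /O settings
    #endif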


Why would you slap an LLM response on my question?

Dude... I'm a greybeard game engine developer. I write game engine critical loops using SIMD intrinsics. https://old.reddit.com/r/gamedev/comments/xddlp/describe_wha...

These AI slop accusations are getting redic. Is the problem that I was too thorough in my response? :P Comes from decades of explaining bespoke tech to artists.

And, the writer of the "wrong and misleading" article I linked was the Lead Graphics Programmer of Unity3D for 15 years! XD


Well, the response really sounded unnatural and LLM-ey. If it's not, then please accept my apology.

I write SIMD kernels, and the conclusion drawn in the article makes no sense regardless of who wrote it. I don't doubt the observations made in the experiments, but I do doubt the hypothesis that the SIMD itself is what slows the code down.

The actual answer is in the disassembly, but unfortunately it wasn't shown.





