The vast majority of developers never profile their code, and I think that's a much smaller problem than HN would rank it. Only when the platform itself provides traces do they take profiling into consideration. And even then, I think most perf optimization falls into the category of not doing the obviously slow thing, or the accidentally n^2 thing.
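To make the "accidentally n^2 thing" concrete, here's a toy Python sketch (the dedupe functions are hypothetical, purely for illustration): a membership test against a list scans the whole list, so the innocent-looking version is quadratic.

    def dedupe_slow(items):
        seen = []
        out = []
        for item in items:        # n iterations...
            if item not in seen:  # ...each an O(n) list scan -> O(n^2) total
                seen.append(item)
                out.append(item)
        return out

    def dedupe_fast(items):
        seen = set()
        out = []
        for item in items:
            if item not in seen:  # O(1) average-case set lookup -> O(n) total
                seen.add(item)
                out.append(item)
        return out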
I partially agree with you, though: as Arm penetrates deeper into the programmer ecosystem, any mental roadblocks about deploying to Arm will disappear. It is a mindset issue, not a technical one.
In the 80s and 90s there were lots of alternative architectures and it wasn't a big deal, though granted the software stacks were much, much smaller and closer to the metal. Now they are huge, but more abstract and farther removed from machine-level concerns.
"The vast majority of developers never profile their code."
Protip: New on the job and want to establish a reputation quickly? Find the most common path and fire a profiler at it as early as you can. The odds that there's some trivial win that will accelerate the code by a huge amount are fairly decent.
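A minimal sketch of what that looks like in Python, assuming a hypothetical handle_request() as the common path (cProfile and pstats are in the standard library):

    import cProfile
    import pstats

    def handle_request():
        # stand-in for the service's most common code path
        return sum(i * i for i in range(100_000))

    profiler = cProfile.Profile()
    profiler.enable()
    for _ in range(100):
        handle_request()
    profiler.disable()

    # Show the ten functions with the most cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)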
Another bit of evidence that developers rarely profile their code: my mental model of how expensive some server process will be to run tends to differ from most other developers' models by at least an order of magnitude. I've had multiple conversations about the services I provide where people ask what my hardware is, expecting it to run on monster boxes or something, and I tell them it's really just two t3.mediums, which mostly do nothing; I only have two for redundancy. And it's not like I go profile-crazy... I really just do some spot checks on hot-path code. By no means am I doing anything amazing. It's just that as you write more code, the odds that you accidentally write something that performs stupidly badly go up steadily, even if you're trying not to.
> Find the most common path and fire a profiler at it as early as you can. The odds that there's some trivial win that will accelerate the code by a huge amount are fairly decent.
I've found that a profiler isn't even needed to find significant wins in most codebases. Simple inspection of the code and removal of obviously slow or inefficient code paths can often lead to huge performance gains.
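For illustration, here's a hypothetical example of the kind of win plain code reading turns up: a loop-invariant sort that's redone on every iteration even though nothing it depends on changes.

    def top_scores_slow(scores, queries):
        out = []
        for q in queries:
            ranked = sorted(scores, reverse=True)  # re-sorts on every query
            out.append(ranked[:q])
        return out

    def top_scores_fast(scores, queries):
        ranked = sorted(scores, reverse=True)      # hoisted: sort once
        return [ranked[:q] for q in queries]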
Yes, and just as Intel and AMD spent a lot of effort and funding building performance libraries and compilers, we should expect Amazon and Apple to invest in similar efforts.
Apple will definitely ship all the necessary tools as part of Xcode for iOS/macOS software optimisation.
AWS is going to be more interesting – this is a great opportunity for them to provide distributed profiling/tracing tools (as a hosted service, obviously) for Linux that run across a fleet of Graviton instances and help you do fleet-wide profile-guided optimizations.
We should also see a lot of the private companies building high-performance services on AWS contribute to highly optimized open-source libraries being ported to Graviton.
Given a well-designed chip that achieves competitive performance across most benchmarks, most code will run sufficiently well for most use cases regardless of the nuances of specific cache designs and sizes.
There is certainly an exception to this for chips with radically different designs and layouts, as well as for folks writing very low-level, performance-sensitive code that can benefit from platform-specific optimization (graphics comes to mind).
However, even in the latter case, I'd imagine the platform-specific code and the fallback platform-agnostic code will be within 10-50% of each other's performance. That means a particularly well-designed chip could make the platform-agnostic code cheaper on either a raw-performance or a cost/performance basis.
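As a rough sketch of that platform-specific-plus-fallback shape in Python (the checksum names are hypothetical; on Arm the fast path would typically call into something like a NEON-backed native extension):

    import platform

    def _checksum_portable(data: bytes) -> int:
        # Architecture-agnostic fallback: correct everywhere, tuned nowhere.
        return sum(data) & 0xFFFFFFFF

    def _checksum_arm(data: bytes) -> int:
        # Stand-in for a NEON-optimized implementation: same result,
        # hypothetically faster on Arm hardware.
        return sum(data) & 0xFFFFFFFF

    # Pick an implementation once, at import time.
    checksum = (_checksum_arm
                if platform.machine() in ("arm64", "aarch64")
                else _checksum_portable)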
Obviously this is fairly niche, but the friction of making something fast is much lower locally.