As I was saying in [0] and [1], there is no doubt that properly tuning the compiler for performance can make a real difference, instead of spending more money (and risking ever-rising costs) on more servers to solve the problem. (It won't.)
Also, if you needed to re-architect the entire codebase to solve a performance issue, then either you chose a deeply inefficient technology, the code itself was badly architected in the first place, or both.
This should be the industry standard for high-quality, efficient software.
We should learn from excellent SWE teams such as DeepSeek, which frankly embarrassed the entire AI industry with their performance optimizations and savings on inference costs.
It is unfortunate that many software engineers continue to dismiss this as "premature optimization".
But as soon as I see resource or server costs rising every month (even at idle), running into the tens of thousands, which is common as a system scales, it becomes unacceptable to ignore.
When you achieve expertise, you know when to break the rules. Until then, it is wise to avoid premature optimization. In many cases understandable code is far more important.
I was working with a peer on a click handler for a web button. The code ran in 5-10ms. You have a budget of nearly 200ms before a user notices sluggishness. My peer "optimized" the 10ms click handler to the point of absolute illegibility, and it was doubtful the new implementation was even faster.
It depends on your infrastructure spend and the business's revenue: if the problem is not forcing the business to increase infrastructure spending each month, and there is little to no rise in user complaints about slowdowns, then the "optimization" isn't worth it and is indeed premature.
Most commonly, if costs increase as users increase, it becomes an efficiency problem: the scaling is neither good nor sustainable, and that can easily destroy a startup.
In this case, the Linux kernel is directly critical for applications in AI, real-time systems, networking, databases, etc., and performance optimization there makes a massive difference.
This article is a great example of properly using compiler optimizations to significantly improve the performance of a service. [0]
> When I grew up in computer science in the 90s, everybody was concerned about efficiency... Somehow in the last 20 years, this has gotten lost. Everybody’s completely gung-ho about performance without any regard for efficiency or resource consumption. I think it’s time to pivot, where we can find ways to make things a little more efficient.
This is the sort of performance efficiency I want to keep seeing on this site, from distinguished experts who have contributed to critical systems such as the Linux kernel.
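For anyone wondering what "tuning the compiler" means in practice, here is a minimal sketch, assuming a gcc/clang-style toolchain. This is a hypothetical micro-benchmark, not the code from the article; the point is only that build flags are nearly free compared to re-architecting or adding servers.

    /* sum.c -- hypothetical micro-benchmark, not the article's code.
     * Build it two ways and compare runtimes:
     *   cc -O0 sum.c -o sum_debug              (no optimization)
     *   cc -O2 -march=native sum.c -o sum_fast (typical release flags)
     */
    #include <stdio.h>
    #include <stddef.h>

    /* Hot loop: at -O2 the compiler can unroll and vectorize this,
     * while at -O0 it emits naive one-element-at-a-time code. */
    static double sum(const double *xs, size_t n) {
        double total = 0.0;
        for (size_t i = 0; i < n; i++)
            total += xs[i];
        return total;
    }

    int main(void) {
        enum { N = 1 << 22 };
        static double xs[N];
        for (size_t i = 0; i < N; i++)
            xs[i] = (double)i;
        printf("sum = %f\n", sum(xs, N));
        return 0;
    }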
Unfortunately, in the last 10-15 years we have seen the worst technologies paraded around due to cargo-cultish behaviour: asking candidates to implement the most efficient solution to a problem in interviews, then choosing extremely inefficient technologies to solve the actual problems, because so-called software shops are racing for that VC money. Money that goes to hundreds of k8s instances on many over-provisioned servers instead of a few.
Performance efficiency critically matters, and it is the difference between having enough runway for a sustainable business vs having none at all.
And no, AI agents / vibe coders could not have come up with the correct solution presented in the article.
Asking a candidate to solve proofs in a typical SWE interview in 2025 tells you that the interviewer doesn't know how to hire and likely googled the answers before the interview themselves.
Unless the role is research scientist at an AI research company or a top hedge fund, the interviewer had better be prepared to explain why someone really needs to know these proofs for the actual job.
Before approaching Windsurf, OpenAI wanted to buy Cursor first (which is what I predicted too [0]), but the talks failed twice! [1]
The fact that they approached Cursor more than once tells you they REALLY wanted to buy out Cursor. But Cursor wanted more and was raising over $10B.
Instead, OpenAI went to Windsurf. The team at Windsurf should think carefully and sell, given the extreme competition, the overvaluation, and the current AI hype cycle.
Both Windsurf's and Cursor's revenue can evaporate very quickly. Don't get greedy like Cursor.
> Sure they can copy paste the error into the LLM and hope for the best, but what happens when that doesn’t fix it?
Neither side cares unfortunately.
When users try to prompt away their problems without understanding the error and it doesn't get fixed, that is still good news for Cursor and Anthropic: more money for them.
The influencers encouraging "vibe coding" don't care either; they need their Twitter payouts or YouTube ad revenue.
[0] https://news.ycombinator.com/item?id=43753443
[1] https://news.ycombinator.com/item?id=43753725