Surely both of these characterizations depend on the person? I can believe that integral_t^oo is idiomatic if that's what you're used to, and maybe it's easier to pick up from scratch, but, for someone long used to TeX, it just makes me wonder what other unpleasant surprises someone else will have decided are actually pleasant.
The issue is that a lack of credentials means you are putting in a lot of effort to teach yourself something hard, with little evidence to show you understand the material. If universities simply offered a for-credit exam where learners had to self-teach, I reckon we would see a lot more success stories.
What do you disagree with Casey Muratori on specifically? I have seen some of his content and he seems pretty humble but opinionated on things he knows about. I also think he did a great job with handmade hero.
Personally I find that Casey presents simple hard facts that rile people up, but at that point they're doing... what? Arguing with facts?
He had a particularly pointed rant about how some ancient developer tools on a 40 MHz computer could keep up with debug single-stepping (repeatedly pressing F10 or whatever), but Visual Studio 2019 on a multi-GHz, multi-core monster of a machine can't. It lags behind and is unable to update the screen at a mere 60 Hz! [1]
I have had similar experiences, such as the "new Notepad" taking nearly a minute to open a file that's just 200 MB in size. That's not "a small cost that's worth paying for some huge gain in productivity". No, that's absurdly bad. Hilariously, stupidly, clownshoes bad. But that's the best that Microsoft could do with a year or more of development effort using the "abstractions" of their current-gen GUI toolkits such as WinUI 3.
If I only spoke Arabic or Russian or Chinese, could I write words in my language on those ancient developer tools? Or would I be limited to ASCII characters?
If I were blind, could the computer read the interface out to me as I navigated around? If I had motor issues, could I use an assistive device to move my cursor?
I'm not saying this excuses everything, but it's easy to point at complexity and say "look how much better things used to be". But a lot of today's complexity is for things that are crucial to some subset of users.
> If I only spoke Arabic or Russian or Chinese, could I write words in my language on those ancient developer tools?
For relevant cases, YES!
NT4 was fully Unicode and supported multiple languages simultaneously, including in the dev tools. Windows and all associated Microsoft software (including VS) have had surprisingly good support for this since the late 1990s, possibly earlier. I remember seeing in 2001 an Active Directory domain with mixed English, German, and Japanese identifiers for security groups, OU names, user names, file shares, etc... Similarly, back in 2002 I saw a Windows codebase written by Europeans that used at least two non-English languages for comments.
Note that programming languages in general were ASCII only for "reasons", but the GUI designers had good i18n support. Even the command-line error messages were translated!
Linux, on the other hand, was far behind on this front until very recently, again for "reasons". You may be thinking back to your experience of Linux limitations, but other operating systems and platforms of the era supported Unicode, including MacOS and Solaris.
None of this matters. UTF-8 is not the reason the GUI is slow. Even the UTF-16 encoding used by Windows is just 2x as slow, and only for the text, not any other aspect such as filling pixels, manipulating some GUI object model, or responding synchronously vs asynchronously.
Look at it this way: for a screen full of text, ASCII vs Unicode is a difference of 10 KB vs 20 KB in the volume of data stored. The fonts are bigger, sure, but each character takes the same amount of data to render irrespective of "where" in a larger font the glyph comes from!
> If I were blind, could the computer read the interface out to me as I navigated around?
Text-to-speech is an unrelated issue that has no bearing on debugger single-stepping speed. Windows has had accessibility APIs, including voice, practically forever; it was just bad for reasons to do with hardware limitations, not a lack of abstractions.
> If I had motor issues, could I use an assistive device to move my cursor?
That's hardware.
> But a lot of today's complexity is for things that are crucial to some subset of users.
And almost all of it was there, you just may not have been aware of it because you were not disabled and did things in English.
Don't confuse a "lack of hardware capacity" or a "lack of developer budget" with the impact that overusing abstractions has caused.
These things were not a question of insufficient abstractions, but insufficient RAM capacity. A modern Unicode library is very similar to a 20-year-old one, except the lookup tables are bigger. The fonts alone are tens of megabytes, far more than the disk capacity of my first four computers... combined.
Today I have a 4.6 GHz 8-core laptop with 64 GB of memory and I have to wait for every app to open. All of them. For minutes sometimes. MINUTES!
None of this has anything to do with accessibility or multi-lingual text rendering.
Your comment indicates a pretty poor understanding of why multilingual text rendering is a lot harder than it was in the early 90s. Back then, to display some text on the screen, the steps were: for each character (=byte), use the index to select a small bitmap from a dense array, copy that bitmap to the appropriate position on the screen, advance.
But modern font rendering is: for a span of bytes, first look up which font is going to provide those characters (since fonts aren't expected to contain all of them!). Then run a small program to convert those bytes into a glyph index. The glyph index points to a vector image that needs to be rendered. Next, run a few more small programs to adjust the rendered glyph for its surrounding text (which influences things like kerning and spacing) or the characteristics of its display. And then move on to the next character, where you get to repeat the same process again. And if you've got rich text, all of this stuff can get screwy in the middle of the process: https://faultlore.com/blah/text-hates-you/
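To make the contrast concrete, here's a minimal sketch of the old-style path described above, assuming an 8x16 monospaced bitmap font baked into a flat array and a byte-per-pixel framebuffer (all names are made up for illustration):

    #include <stdint.h>
    #include <string.h>

    #define GLYPH_W 8
    #define GLYPH_H 16

    /* 256 glyphs, one per byte value, pre-baked at build time
       (left zeroed here as a placeholder). */
    static const uint8_t font[256][GLYPH_H][GLYPH_W];

    /* The entire "text renderer": index, blit, advance. */
    static void draw_text(uint8_t *fb, int fb_width, int x, int y, const char *s)
    {
        for (; *s; s++, x += GLYPH_W) {
            const uint8_t (*glyph)[GLYPH_W] = font[(uint8_t)*s];
            for (int row = 0; row < GLYPH_H; row++)
                memcpy(&fb[(y + row) * fb_width + x], glyph[row], GLYPH_W);
        }
    }

Every step in the modern pipeline (font fallback, shaping, hinting, rasterization) replaces one of those trivial operations with something far more expensive.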
Indic text rendering, for example, never worked until the mid-2000s, and there are even more complex text rendering scenarios that still aren't fully supported (hello, Egyptian hieroglyphs!).
Having personally implemented a Unicode text renderer in both DirectX and OpenGL back in 2002 for the Japanese game market, I dare say that chances are that I know a heck of a lot more than you about high performance multilingual text rendering. Don't presume.
You just made the exact same argument that Casey Muratori absolutely demolished very publicly.
The Windows Console dev team swore up and down that it was somehow "impossible" to render a fixed-width text console at more than 2 fps on a modern GPU when there are "many colors" used! This is so absurd on its face that it's difficult to even argue against, because you have to do the equivalent of establishing a common scientific language with someone that thinks the Earth is flat.
Back in the 1980s, fully four decades ago, I saw (and implemented!) colorful ANSI text art animations with a higher framerate on a 33 MHz 486 PC! That is such a minuscule amount of computing power that my iPhone charger cable has more than that built into the plug to help negotiate charging wattage. It's an order of magnitude less than the computing power a single CPU core can gain (or lose) simply from a 1 °C change in the temperature of my room!
You can't believe how absurdly out-of-whack it is to state that a modern high-end PC would somehow "struggle" with text rendering of any sort!
Here are the developers at Microsoft stating that 2 fps is normal, and it's "too hard" to fix it. "I believe what you’re doing is describing something that might be considered an entire doctoral research project in performant terminal emulation as “extremely simple” somewhat combatively" from: https://github.com/microsoft/terminal/issues/10362#issuecomm...
Here is Casey banging out, in two weekends, a more Unicode-compliant terminal that looks more correct, sinks text at over 1 GB/s, and renders at 9,000 fps: https://www.youtube.com/watch?v=99dKzubvpKE
PS: This has come up here on HN several times before, and there is always an endless parade of flabbergasted developers -- who have likely never in their lives seriously thought about the performance of their code -- confidently stating that Casey is some kind of game developer wizard who applied some "black art of low-level optimisation". He used a cache. That's it. A glyph cache. Just... don't render the vector art more than you have to. That's the dark art he applied. Now you know the secret too. Use it wisely.
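For the curious, here is a minimal sketch of that glyph cache, assuming a hypothetical rasterize_glyph() standing in for the expensive shaping/vector rasterization step (in a real renderer that would be DirectWrite, FreeType, stb_truetype, or similar):

    #include <stdint.h>

    #define GLYPH_W     16
    #define GLYPH_H     32
    #define CACHE_SLOTS 4096          /* direct-mapped, keyed by codepoint */

    typedef struct {
        uint32_t codepoint;           /* 0 = empty slot */
        uint8_t  pixels[GLYPH_W * GLYPH_H];
    } CachedGlyph;

    static CachedGlyph cache[CACHE_SLOTS];

    /* Stand-in for the expensive part: shaping + rasterizing the outline. */
    static void rasterize_glyph(uint32_t cp, uint8_t *out)
    {
        for (int i = 0; i < GLYPH_W * GLYPH_H; i++)
            out[i] = (uint8_t)(cp + i);   /* dummy pattern */
    }

    static const uint8_t *get_glyph(uint32_t cp)
    {
        CachedGlyph *slot = &cache[cp % CACHE_SLOTS];
        if (slot->codepoint != cp) {      /* miss: rasterize exactly once */
            rasterize_glyph(cp, slot->pixels);
            slot->codepoint = cp;
        }
        return slot->pixels;              /* hit: just blit from the cache */
    }

Once the screen's working set of glyphs is warm, every subsequent frame is just memory copies (or textured quads on a GPU), which is exactly why the old bitmap-font speeds are still reachable.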
> You can't believe how absurdly out-of-whack it is to state that a modern high-end PC would somehow "struggle" with text rendering of any sort!
I never said that.
What I said is that modern text rendering--rendering vector fonts instead of bitmap fonts, having to deal with modern Unicode features and other newer font features, etc.--is a more difficult task than in the days when text rendering meant bitmap fonts and largely ASCII or precomposed characters (even East Asian multibyte fonts, which still largely omit the fun of ligatures).
My intention was to criticize this part of your comment:
> None of this matters. UTF-8 is not the reason the GUI is slow. Even the UTF-16 encoding used by Windows is just 2x as slow, and only for the text, not any other aspect such as filling pixels, manipulating some GUI object model, or responding synchronously vs asynchronously.
> Look at it this way: for a screen full of text, ASCII vs Unicode is a difference of 10 KB vs 20 KB in the volume of data stored. The fonts are bigger, sure, but each character takes the same amount of data to render irrespective of "where" in a larger font the glyph comes from!
This implies to me that you believed that the primary reason Unicode is slower than ASCII is that it takes twice as much space, which I hope you agree is an absurdly out-of-whack statement, no?
> This implies to me that you believed that the primary reason Unicode is slower than ASCII is that it takes twice as much space, which I hope you agree is an absurdly out-of-whack statement, no?
Actually, this precise argument comes up a lot in the debate of UTF-16 vs UTF-8 encodings, and is genuinely a valid point of discussion. When you're transferring gigabytes per second out of a web server farm, through a console pipeline, or into a kernel API call, a factor of two is... a factor of two. Conversely, for some East Asian locales, UTF-16 is more efficient, and for them using UTF-8 is measurably worse.
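You can see the size trade-off at compile time with nothing more than C11 string literals (the sample strings are arbitrary, and this assumes the compiler treats the source as UTF-8, which GCC, Clang, and MSVC with /utf-8 do):

    #include <stdio.h>
    #include <uchar.h>

    int main(void)
    {
        /* payload bytes, excluding the terminator */
        printf("ASCII   : UTF-8 %zu bytes, UTF-16 %zu bytes\n",
               sizeof(u8"hello world") - 1, sizeof(u"hello world") - 2);
        printf("Japanese: UTF-8 %zu bytes, UTF-16 %zu bytes\n",
               sizeof(u8"こんにちは世界") - 1, sizeof(u"こんにちは世界") - 2);
        return 0;
    }

For the ASCII line UTF-8 wins at 11 vs 22 bytes; for the Japanese line UTF-16 wins at 14 vs 21.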
The point is that any decent text renderer will cache glyphs into buffers / textures, so once a character has appeared on a screen, additional frames with the same outline present will render almost exactly as fast as in the good old days with bitmap fonts.
Not to mention that Microsoft introduced TrueType in 1992 and OpenType in 1997! These are Windows 3.1 era capabilities, not some magic new thing that has only existed for the last few years.
None of this matters in the grand scheme of things. A factor of two here or there is actually not that big a deal these days. Similarly, rendering a 10x20 pixel representation of maybe a few dozen vector lines is also a largely solved problem and can be done at ludicrous resolutions and framerates by even low-end GPUs.
The fact of the matter is that Casey got about three orders of magnitude more performance than the Windows Console team in two orders of magnitude less time.
Arguing about font capabilities or Unicode or whatever is beside the point, because his implementation was more Unicode- and i18n-compliant than the slow implementation!
This always happens with anything to do with performance. Excuses, excuses, and more excuses to try and explain away why a 1,000x faster piece of hardware runs at 1/10th of the effective speed for software that isn't actually that much more capable. Sometimes less.
PS: Back around 2010, my standard test of remote virtual desktop technology (e.g.: Citrix or Microsoft RDP) was to ctrl-scroll a Wikipedia page to zoom text smoothly through various sizes. This forces a glyph cache invalidation, and Wikipedia especially has superscripts and mixed colours, making this even harder. I would typically get about 5-10 fps over a WAN link that at the time was typically 2 Mbps. This was rendered on the CPU only(!) on a server, compressed, sent down TCP, decompressed, and then rendered by a fanless thin terminal that cost $200. The Microsoft devs are arguing that it's impossible to do 1/5th of this performance on a gaming GPU more than a decade later. Wat?
Single-stepping is moderately complicated. I'm even less impressed with modern editors that can't keep up with typing. My first computer could do that. It had a 2 MHz 6502.
I've had the same thing happen with a drawing app.. I've an older android tablet that has a sketch app. It works great..
I got a new high powered tablet, because the battery on the old one wouldn't keep a charge.. New one has a sketch app that lags.. draw line... wait. draw line... wait. It's unusable..
It has a newer processor, and more memory, but I can't draw.
I personally do not like how his solution boils down to "just learn more" which may be true at an individual level, but not as the general solution to awful software.
You will never be able to force developers worldwide to start writing everything in C/Assembly, or even to care beyond "it performs fine on my machine". Individuals can have fun micro-optimizing their application, but overall, we have the app we have because of compromises we find somewhat acceptable.
More likely the solution will be technical, making great/simple/efficient code the path of least resistance.
Watch some of the intros to his performance aware programming videos. He doesn’t want everyone to use C or Assembly. He also doesn’t want everyone micro-optimizing things.
>compromises we find somewhat acceptable
His entire point is that most developers aren’t actually aware of the compromises they are making. Which is why he calls it “performance aware programming” and not performance first programming.
I have actually watched many of his videos; as an individual I very much like his advice. What I am saying, however, is that this has nothing whatsoever to do with improving software at scale.
But my point still stands: Casey focuses solely on the cultural aspect but completely ignores the technical one. He says that developers became lazy/uninformed, but why did that happen? Why would anything he currently says solve it?
I don’t think you can blame him for not having a large scale systemic solution to the problem.
Imagine if there was a chef on YouTube who was telling everyone how bad it was that we are eating over processed food, and he makes videos showing how easy it is to make your own food at home.
Would it be reasonable to comment to people who share the chef’s message that you don’t like how his cooking videos don’t solve the root problem that processed food is too cheap and tasty?
And here’s the thing, he doesn’t have to solve the whole problem himself. Many developers have no idea that there’s even a problem. If he spreads the message about what’s possible and more developers become “performance aware”, maybe it causes more people to expect performance from their libraries, frameworks, and languages.
Maybe some of these newly performance aware programmers are the next generation of language and library developers and it inspires one of them to create the technological solution you’re hypothesizing.
But he says that he has a large-scale systemic solution, it is literally the reason for his videos.
He isn't doing it so I can get my CPU trivia, but because he believes it will result in better software. He's been at it for years, and so far I don't think it's going any better.
My opinion is that better software will not come from CPU-aware people, but from people having had enough of all the BS and switching to simpler solutions. You speak about languages/libraries; what makes you think it would come from those?
I do believe he is smart; it's just unfortunate he puts his talent on the wrong path. It's not like he does not understand the impact of simplicity/working independently, as that is how he made his "fast terminal" showcase: by trying to bypass system libraries as much as possible, transforming it into a simple (or at least less theoretical) problem.
This is such an absolutely wild take to me. Screw those Mythbusters guys trying to explain the scientific method to people. That's not a systemic solution to fixing scientific illiteracy in the US. Sesame Street teaching kids to count? Those guys are way off the mark because it's not going to measurably improve economic performance. You'll never actually change anything just by teaching people stuff.
I'm 100% certain that Casey Muratori doesn't think that his paywalled course is going to convince literally every programmer to care about performance, or to stop Facebook from building slow apps.
He's trying to convince some small percentage of people that performance is something they should be thinking about when they program, and that they should understand the tradeoffs they are making when they choose to use some slow method, algorithm, library, language, or system because it provides some other benefit.
When someone writes a book like "math for dummies" that says on the back cover "I think basic math is a useful skill for everyone to have, and if we all learned basic math the world would be a better place!"
That's not the author stating their true belief that their book is going to literally teach the majority of the world basic math. That's an aspirational goal or in the worst case it's sales copy.
Publicly calling that person out for wasting their time is really something else.
Creating straw men to attack like "You will never be able to force developers worldwide to start writing everything in C/Assembly" when he explicitly states that this isn't his goal makes me think you're just looking for some reason to shit on the guy.
>You speak about language/libraries, what make you think it would come from those?
I have no idea when, if, or how a technical solution will be found.
"Math for dummies" book writers don't argue that mathematicians are stupid/incompetent because they do not spend enough time on the basics.
It also has nothing to do with teaching people to count, read, or use the scientific method. Every. Single. Piece of software I use is bloated; I am unaware of any software consuming only what it should, nothing is close to the theoretical limit, and this is not a knowledge issue. If you have such software in mind, feel free to name it.
Casey continuously says that the problem with software is that developers do not understand performance, and so that the problem will be solved once they do. I am not overly familiar with his paid course, but he did stuff before that. He has essentially the same opinion as Jonathan Blow.
The course being paywalled isn't really an argument for either of us; on his side it is most likely a compromise. Just because this isn't the grand plan to convince millions of developers doesn't mean he doesn't intend for it to help his cause.
> that this isn't his goal makes me think you're just looking for some reason to shit on the guy.
You can consider it hyperbole if you want, but essentially what I am saying is that he wants to put the burden on the developers, without really changing the underlying structure.
I am not shitting on him for the sake of it. I really do like his (and Jonathan's) observations, and I will continue to watch them. I just find it unfortunate that both waste their time on solutions that will not affect anything at scale (one with a custom language, the other with a specialized course).
>"Math for dummies" book writers don't argue that mathematicians are stupid/incompetent because they do not spend enough time on the basics.
His argument isn't that they don't spend enough time on the basics. His argument is that they don't know the basics even exist.
This is true in my experience. I've interviewed scores of candidates over the years who just have absolutely no mental model of what is going on under whatever abstraction layer they generally work in.
Making trade offs to sacrifice performance for things like developer speed is fine. But if you're not aware you are making those trade offs, I might not call you stupid, but I will call you ignorant.
>Every. Single. Piece of software I use is bloated; I am unaware of any software consuming only what it should, nothing is close to the theoretical limit, and this is not a knowledge issue.
It's not binary. There's a continuum. No software is ever going to use exactly the minimum theoretically possible number of instructions and memory over the entire program.
But some software is worse and some is better.
It is possible to believe that the problem can be solved by changing the culture, while simultaneously believing that you aren't going to be able to completely change the culture. While also believing that you can shift the culture ever so slightly such that you make it just a little bit better.
You can also hold all of the above beliefs while believing that there is a systemic underlying issue that you have no idea how to solve.
From my observations (having watched a good bit of early handmade hero and a good bit of his paid course) the above is pretty close to his beliefs.
It's also fairly close to my own thoughts on the subject. The underlying issue is that software is an industry that primarily competes on features, not quality. I think the reasons for that are numerous--everything from software being an immature industry, to the distorting effects of cheap money flowing into the industry from the previous few decades.
I believe this will work itself out to some degree over time as the industry becomes more mature and we start to see more diminishing returns for just adding new features. Developers will start to look towards other things to differentiate themselves and quality/performance could be one of them.
I don't think there's anything I or Casey can do to change the market dynamics, and I can't think of a technical fix to this issue. I suspect Casey can't either.
I also think that there are niches today where performance and quality can outcompete more features.
Given all the above, I think pushing to convince a small percentage of people that spending a bit more time thinking about performance is a rational goal. Maybe it's the best you can do. But maybe you end up with just a few less wasted cycles in the world. Maybe you end up with a few more pieces of software that feel snappy and pleasant to use than you otherwise would have.
I completely agree that some (if not the majority of) developers aren't aware of what's happening under abstraction layers, but then I have to ask: is it the developer's fault, or the abstraction's?
If you were to ask a new programmer to make a very simple calculator which would then be distributed to various people using various devices, what would they use? How long would it take them? How much would it consume? Does this cost have anything to do with the programmer being unaware that CPUs have multiple cores or that memory access is slow? Theoretically, I would struggle to find a way to make this calculator take up more than 10 MB of memory (which is already more than Mario 64), both as a CLI and as a GUI. You literally have 4-byte instructions for add/sub/mul/div and a framebuffer; it is not like I am talking about micro-optimization, this should be the default and simplest path.
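For what it's worth, here is roughly what the CLI version could look like; it compiles to a few kilobytes and its memory use is dominated by the C runtime, not the arithmetic (a sketch, not a product):

    #include <stdio.h>

    int main(void)
    {
        double a, b;
        char op;
        /* read "<number> <op> <number>" until EOF */
        while (scanf("%lf %c %lf", &a, &op, &b) == 3) {
            switch (op) {
            case '+': printf("%g\n", a + b); break;
            case '-': printf("%g\n", a - b); break;
            case '*': printf("%g\n", a * b); break;
            case '/': if (b != 0) printf("%g\n", a / b);
                      else puts("division by zero");
                      break;
            default:  puts("unknown operator");
            }
        }
        return 0;
    }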
Discord takes around 400 MB on my machine and will happily take a whole 3+ GHz core from me if I start scrolling. If I were to give a new programmer a method to query messages, and another to render to a framebuffer through software rendering, would they even manage to match Discord/Chromium's bloat? It seems to me it would require some genuine effort.
You could tell me that this bloat comes from fancy/convenient features, but it does not change the fact that programmers are always exposed to complex problems, even though these problems are theoretically simple, easily composable, and therefore friendlier to change.
If you were to ask me for less theory and a more practical example, I would say that each program should be written/compiled to a potentially different format, with the common requirement of being easily interpretable (stack machine, Turing machine, lambda calculus, cellular automata).
Each platform (OSes, hardware) should come with a program that exposes every single potential IO action, and to run an actual program requires finding an interpreter for the specific format (either made yourself in an hour at most, or downloaded) and mapping your exposed IO calls through the platform app, in the same way you would configure keybinds in a video game.
- Developers are always exposed to their lowest layer
- Programs are always cross-platform, even to future ones
- Programs are stable, increasing the chance of being optimized over time
In this example, none of it depends on understanding how CPUs work. It also does not require a change in market dynamics; individuals can start, and make it slowly gain relevance, as anything written this way cannot (easily) break and become abandonware.
Ultimately, even unaware programmers are able to explain their webapp in a few words; the problem is that they cannot easily map those words to code and have to work around frameworks which they cannot really give up on. Android/iOS hello-world templates illustrate it nicely.
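A rough sketch of the kind of setup I mean, with a made-up one-byte stack-machine format and the platform's IO exposed as a table the program calls into (every name here is invented for illustration):

    #include <stdio.h>
    #include <stdint.h>

    enum { OP_HALT, OP_PUSH, OP_ADD, OP_IO };   /* the whole "format" */

    /* The platform side: every IO action it offers, in one table.
       Porting a program to a new platform means supplying this table. */
    typedef void (*io_fn)(int64_t arg);
    static void io_print(int64_t arg) { printf("%lld\n", (long long)arg); }
    static io_fn io_table[] = { io_print };

    static void run(const uint8_t *code)
    {
        int64_t stack[256];
        int sp = 0;
        for (size_t pc = 0;;) {
            switch (code[pc++]) {
            case OP_HALT: return;
            case OP_PUSH: stack[sp++] = code[pc++]; break;
            case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
            case OP_IO:   io_table[code[pc++]](stack[--sp]); break;
            }
        }
    }

    int main(void)
    {
        /* push 2, push 3, add, send result to IO slot 0 (print) */
        const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_IO, 0, OP_HALT };
        run(prog);
        return 0;
    }

An interpreter like this fits in an afternoon, and the program bytes never change when the platform underneath does.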
I 100% agree that you don't necessarily need to know how a computer works to make a less bloated discord app.
But I do think that knowing the limit of what is possible with respect to performance makes you appreciate just how bloated and slow something like Discord is. If you've only ever done web development or Electron apps, and you've only ever used bloated apps, you have no idea that you can do better.
You don't have to teach people about the limits; you could build a Discord client on top of "a method to query messages, and another to render to a framebuffer through software rendering" and show them how much faster it could be and how much less memory it could use.
But the thing is Casey likes teaching at a lower level than that. He likes teaching about the domain he finds interesting and that he's an expert in. He thinks he can make the world more like the world he wants to live in, and he thinks he can do it by doing something he likes doing and is capable of doing.
That doesn't mean there aren't other faster or more impactful solutions to less bloated software, but he's not necessarily the guy to work on those solutions and there's nothing wrong with that. Judging by the number of people who watch his videos and the frequency with which the topics he promotes are discussed, he's making an impact--maybe not the optimal impact he could be making, but that's hardly a fair thing to judge someone on.
> Each platform (OSes, hardware) should come with a program that exposes every single potential IO action, and to run an actual program requires finding an interpreter for the specific format (either made yourself in an hour at most, or downloaded) and mapping your exposed IO calls through the platform app, in the same way you would configure keybinds in a video game.
Have you looked at the Roc language? It has a lot of similarities to what you're describing.
> You don't have to teach people about the limits; you could build a Discord client on top of "a method to query messages, and another to render to a framebuffer through software rendering" and show them how much faster it could be and how much less memory it could use.
The problem is exactly that this is not that simple. I believe that there is a huge gap between the theory of a chat application and the way they are implemented now. I do not believe it is about better abstraction layers; personally, I have always found explanations using raw sockets and bits the simplest.
If you were to ask a programmer to write a Discord client, you would very likely end up with a giant mess that somehow breaks within a few months at most (if ever). Even though they would be able to flawlessly go through the program flow step by step if you asked.
Discussions about efficient code kind of avoid the topic; I don't believe we are in a situation where multithreading is relevant. The bloat is on another level. The bloat is generally not related to whatever language you use or the O(1) vs O(n) algorithm you choose, but more likely to the overall control flow of the program, which you already understand in your head but cannot figure out how to act on, and to the inability to make simple problems simple to solve (like the calculator, the text editor, or even games).
Now you are probably right about Casey ultimately doing what he likes, and even if it's suboptimal, it makes other people aware. Although I believe that benefit comes somewhat in spite of himself.
> Have you looked at the Roc language? It has a lot of similarities to what you're describing.
I gave it a look and, unfortunately, I don't think I see the similarities? It has environment/IO access, calls host-language functions, and doesn't really strive to be reimplementable by independent people.
I'm not an expert. I haven't actually used it, but I have been following it a bit. Roc splits things into apps and platforms. When you implement a platform you're responsible for implementing all (or the subset you want to provide) of the I/O primitives and memory allocators.
>Discussions about efficient code kind of avoid the topic; I don't believe we are in a situation where multithreading is relevant. The bloat is on another level
Casey does talk about other kinds of bloat. Things like structuring code so blocking operations can be batched, etc...
The real problem is that we have a broken, anti-performance culture. We have allowed "premature optimization is the root of all evil" to morph into "optimization is the root of all evil". That single quote has done untold damage to the software industry. We don't need to pass laws to force all programmers worldwide to do anything. Fixing our culture will be enough. In my view, that's what Casey is trying to do.
I don't believe this is the root cause; computers got faster, and software only gets optimized to the point where it "runs good enough" on them. I'm calling Wirth's law on it.
"Clean code" is indeed often a bad idea, but you are overestimating the impact. Even software written by people caring very much about performance consume way more than it theoretically should.
Plus, if it were that simple, people would have already rewritten all the bad software.
Your message is exactly the reason why I do not like Casey, he is brainwashing everyone into thinking this is a culture problem. Meanwhile nobody tries to solve it technically.
The free market is preventing technical solutions. People generally buy based on features first and everything else second. This allows for a precarious situation in the software market: the company producing the most bloat the fastest wins the biggest market share and sees no need to invest in proper fixes. Everyone that cares too much about software quality gets outcompeted almost immediately.
And since software can be rebuilt and replicated at virtually zero cost, there is no intrinsic pressure to keep unit costs down, as there is in manufacturing, where it tends to keep physical products simple.
It doesn't have to come from the free market; FOSS is hardly free of awfully slow/unstable software. Nobody has figured out yet how to make writing good software the default/path-of-least-resistance.
Wirth's law doesn't bolster your point. He observed software is getting slower at a more rapid rate than computers are getting faster. Which is the whole point. We write increasingly slow, increasingly shitty code each year. I read and hear this attitude all the time that is basically "if you optimize something, you're bad at your job, only juniors try to do that". That's a culture problem.
It's frankly insulting you think Casey brainwashed me into this stance, when it's been obvious to me since long before I'd ever heard of him. IDGAF if code is clean or not. I care that Jira can display a new ticket in less than 15 seconds. I care that vscode actually keeps up with the characters I type. None of this software is remotely close to "runs good enough".
What I am saying is that this is a natural phenomenon assuming no technical solution. People will tend to optimize their software based on the performance of their hardware.
I completely agree that many apps are horrendously slow, but given that alternatives are hard-pressed to arrive, I can only conclude they are considered "good enough" for our current tech level.
The difficulty involved in rewriting modern apps is one of the reasons I would give for slow software. Can't really complain about the number of independent web browsers when you look at the spec. Ensuring the software we use is easily reimplementable by one or a few developers in a few days would go a long way toward improving performance.
Another reason would be the constant need to rewrite working code, to work on new platforms, to support some new trendy framework, etc. etc. You cannot properly optimize without some sort of stability.
Jira runs good enough to get idiot managers to pay for it, which is what it's designed for. And, yeah, microoptimising (or micropessimising, who knows since the changes are usually just made on vibes anyway) random bits of code that probably aren't even on any kind of hot path, while compromising maintainability, is something only juniors and people who are bad at their job do. It's easy to forget how common security flaws and outright crashes were in the "good old days" - frankly even today the industry is right to not prioritise performance given how much we struggle with correctness.
A lot of code is slow and could be faster, often much faster. This is more often because of people who thought they should bypass the abstractions and clean code and do something clever and low level than the opposite. Cases where you actually gain performance on a realistic-sized codebase by doing that are essentially nonexistent. The problem isn't too many abstractions, it's using the wrong algorithm or the wrong datastructure (which, sure, sometimes happens in a library or OS layer, but the answer to that isn't to bypass the abstraction, it's to fix it), and that's easier to spot and fix when the code is clean.
It’s not about writing for loops, the practice is to get good at problem solving and algorithmic thinking. For loops are a means to an end, like writing letters on a page.
I meant that order is good, not really that ethics are conditional. There are exceptions where I'd say go ahead and foment rebellion, be obstinate, etc.