I know - old man yells at cloud and stuff - but some 8-bit home computers from the 80s completed their entire boot sequence in about half a second. What does a 'UI rendering engine' need to do that takes half a second on a device that's tens of thousands of times faster? Everything on modern computers should be 'instant' (some of that time may include internet latency of course, but I assume that the Shopify devs don't live on the moon).
Not sure why people keep bringing up the old "my machine x years ago was faster" line. Machines nowadays do way more than machines from the 80s. Whether the tasks they do are useful or not is a separate discussion.
Casey Muratori has a clip [0] discussing the performance difference between Visual Studio in 2004 and today.
Anecdotally, I’ve been playing AoE2: DE a lot recently, and have noticed it briefly stuttering / freezing during battles. My PC isn’t state of the art by any means (Ryzen 7 3700X, 32GB PC4-24000, RX580 8GB), but this is an isometric RTS we’re talking about. In 2004, I was playing AoE2 (the original) on an AMD XP2000+ with maybe 1GB of RAM at most. I do not ever remember it stuttering, freezing, or in any way struggling. Prior to that, I was playing it on a Pentium III 550 MHz, and a Celeron 333 MHz. Same thing.
A great counterexample to this pattern is Factorio. It’s also a top-down game with RTS elements, but the devs are serious about performance. It’s tracking god knows how many tens or hundreds of thousands of objects (they’re simulating fluid flow in pipes FFS), with a goal of 60 FPS/UPS.
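For anyone unfamiliar with what a fixed UPS target means structurally, here’s a minimal fixed-timestep loop in C. This is the generic textbook pattern, not Factorio’s actual code, and the helper functions are placeholder stubs:

    #include <stdio.h>
    #include <time.h>

    /* Minimal fixed-timestep loop sketch. update_world() and
       render_frame() are hypothetical stand-ins for a real engine. */
    static double now_seconds(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }
    static long ticks = 0;
    static void update_world(double dt) { (void)dt; ticks++; }
    static void render_frame(void) { /* draw; may be slower or faster than UPS */ }

    int main(void) {
        const double dt = 1.0 / 60.0;      /* fixed 60 updates per second */
        double accumulator = 0.0, prev = now_seconds(), report = prev;
        for (;;) {
            double now = now_seconds();
            accumulator += now - prev;
            prev = now;
            while (accumulator >= dt) {    /* catch up if rendering lagged */
                update_world(dt);
                accumulator -= dt;
            }
            render_frame();
            if (now - report >= 1.0) {     /* prints ~60 once per second */
                printf("UPS: %ld\n", ticks);
                ticks = 0;
                report = now;
            }
        }
    }

The accumulator is the trick: if a frame renders slowly, the simulation runs extra ticks to catch up instead of slowing down, so UPS stays pinned while FPS floats.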
Yes, computers today are doing more than computers from the 80s or 90s, but the hardware is so many orders of magnitude faster that it shouldn’t matter. Software is by and large slower, and that’s a deliberate choice: it doesn’t have to be that way.
If you buy poor software instead of good software (yes, branding, IP, and whatever, but that's just even more reason for companies not to make it good), complaining doesn't help, does it? Commercial software is made to be sold, and if it sells enough, that's all company executives care about. As long as enough people buy it, it will continue to be made.
Company devs trying to get more time/resources to improve performance will be told no unless they can make a realistic business case explaining how the expense of an increased focus on performance will be financially worthwhile in terms of revenue. If enough people buy poor software, improving it is not smart business. Companies exist to make money, not necessarily to make good products or provide good service.
I understand your point, but business execs don't care about that unless it significantly impacts revenue or costs in the present or very near future.
Nah, it’s not just that. IME, most devs are completely unaware of how this stuff works. They don’t need to be, because there are so many abstractions, and because industry expectations have shifted such that it isn’t a requirement. I’ve also met some who are aware but don’t care at all, because no one above them cares.
Tech interviews are wildly stupid: they’ll hammer you on optimally coding some algorithm under pressure with a time limit, but there’s zero mention of physical attributes like cache-line access, let alone a realistic problem involving data structures. Just once, I’d love to see “code a simple B+tree, then discuss how its use in an RDBMS impacts query times depending on the selected key.”
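To make that concrete, here’s a rough back-of-the-envelope in C for the B+tree part; the 4 KB page size and the key widths are assumptions, not any particular database’s numbers:

    #include <math.h>
    #include <stdio.h>

    /* Estimate B+tree height (= page reads per point lookup) from node
       fanout. All sizes here are illustrative assumptions. */
    #define PAGE_SIZE 4096.0   /* bytes per node/page */
    #define PTR_SIZE     8.0   /* bytes per child pointer */

    static double height(double rows, double key_size) {
        double fanout = PAGE_SIZE / (key_size + PTR_SIZE);
        return ceil(log(rows) / log(fanout));
    }

    int main(void) {
        double rows = 1e8;   /* a 100M-row table */
        printf("8-byte int key:   %.0f page reads/lookup\n", height(rows, 8));
        printf("64-byte text key: %.0f page reads/lookup\n", height(rows, 64));
        return 0;   /* compile with -lm */
    }

A fatter key shrinks the fanout, the smaller fanout adds a level to the tree, and every extra level is another page read (or cache miss) on every point lookup. That’s the discussion I’d want to have.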
Sure, and the screen in text mode was 80 x 25 chars = 2000 bytes of memory. A new phone has perhaps three million pixels, each taking 4 bytes. There's a significant difference.
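Worked out in C, using the figures above (the 3M-pixel, 4-bytes-per-pixel numbers are ballpark assumptions):

    #include <stdio.h>

    int main(void) {
        long text_fb  = 80L * 25;        /* 2,000 bytes: one byte per character cell */
        long pixel_fb = 3000000L * 4;    /* ~12 MB: 3M pixels at 4 bytes (RGBA) each */
        printf("text mode: %ld bytes\n", text_fb);
        printf("phone framebuffer: %ld bytes (%ldx more)\n",
               pixel_fb, pixel_fb / text_fb);
        return 0;
    }

Roughly 6,000x more data per frame.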
And yet the GPU in your phone can run a small program for each pixel taking hundreds or even thousands of clock cycles to complete and still hit a 60Hz frame rate or more. It's not the hardware that's the problem, but the modern software Jenga tower that drives it.
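A quick sanity check on that budget (the lane count and clock speed are made-up but plausible mid-range mobile GPU figures):

    #include <stdio.h>

    int main(void) {
        double pixels = 3e6, fps = 60.0;          /* figures from the thread above */
        double lanes = 512.0, clock_hz = 700e6;   /* hypothetical mobile GPU */
        double budget = (lanes * clock_hz) / (pixels * fps);
        printf("~%.0f GPU cycles available per pixel per frame\n", budget);
        return 0;
    }

That works out to roughly 2,000 cycles per pixel, every frame. The silicon is not the bottleneck.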