Software Disenchantment (2018) (tonsky.me)
934 points by ibdknox on Jan 1, 2020 | 488 comments



> Would you buy a car if it eats 100 liters per 100 kilometers? How about 1000 liters?

I think the analogy here is backwards. The better question is "how much would you prioritize a car that used only 0.05 liters per 100km over one that used 0.5? What about one that used only 0.005L?". I'd say that at that point, other factors like comfort, performance, base price, etc. become (relatively) much more important.

If basic computer operations like loading a webpage took minutes rather than seconds, I think there would be more general interest in improving performance. For now though, most users are happy-enough with the performance of most software, and other factors like aesthetics, ease-of-use, etc. are the main differentiators (admittedly feature bloat, ads, tracking, etc. are also a problem, but I think they're mostly orthogonal to under-the-hood performance).

These days, I think most users will lose more time and be more frustrated by poor UI design, accidental inputs, etc. than by any performance characteristics of the software they use. Hence the complexity/performance overhead of using technologies that allow software to be easily iterated and expanded is justified, to my mind (though we should be mindful of technology that claims to improve our agility but really only adds complexity).


> "how much would you prioritize a car that used only 0.05 liters per 100km over one that used 0.5? What about one that used only 0.005L?". I'd say that at that point, other factors like comfort, performance, base price, etc. become (relatively) much more important.

I'll prioritize the 0.005L per 100km car for sure. That means the car can be driven for all its expected lifetime (500k km) in a single tank of gas, filled up at the time of purchase! That means there is a huge opportunity to further optimize for many things in the system:

- The car no longer needs a filler opening on the side. A lot of piping can be removed, and the gas tank can be moved to a safer location, closer to where the fuel is used.

- The dashboard doesn't need a dedicated slot for the fuel gauge, so more wiring and mechanical parts can be removed.

- No need for huge exhaust and cooling systems, since the wasted energy is significantly reduced. No fuel pump, less vehicle weight...

Of course, that 0.005L car won't arrive before a good electric car. But if it existed, I'd absolutely prioritize it over the other things you listed. I think people tend to underestimate how small efficiency improvements add up and unlock outsized value for the system as a whole.


This is definitely an interesting take on the car analogy so thanks for posting it! I don't know that I agree 100% (I think I could 'settle' for a car that needed to be fueled once or twice a year if it came with some other noticeable benefits), but it is definitely worth remembering that sometimes an apparently small nudge in performance can enable big improvements. Miniaturization of electronics (including batteries and storage media) and continuing improvements to wireless broadband come to mind as the most obvious of these in the past decades.

I'm struggling to think of recent (or not-so-recent) software improvements that have had a similar impact though. It seems like many of the "big" algorithms and optimization techniques that underpin modern applications have been around for a long time, and there aren't a lot of solutions that are "just about" ready to make the jump from supercomputers to servers, servers to desktops, or desktops to mobile. I guess machine learning is a probable contender in this space, but I imagine that's still an active area of optimization and probably not what the author of the article had in mind. I'd love it if someone could provide an example of recent consumer software that is only possible due to careful software optimization.


V8 would be one example. Some time ago, JavaScript crossed a performance threshold, which enabled people to start reimplementing a lot of desktop software as web applications. In the following years, algorithms for collaborative work were developed[0], which shifted the way we work with some of those applications, now always on-line.

Those would be the most meaningful software improvements I can think of. Curiously, the key enabler here seems to be performance - we had the capability to write web apps for a while, but JS was too slow to be useful.

--

[0] - They may or may not have been developed earlier, but I haven't seen them used in practice before the modern web.


> sometimes an apparently small nudge in performance can enable big improvements

In this thought experiment we are talking about a two-orders-of-magnitude improvement - hardly a small nudge!


> I'll prioritize the 0.005L per 100km car for sure. That means the car can be driven for all its expected lifetime (500k km) in a single tank of gas, filled up at the time of purchase!

It's a nice idea but it wouldn't work. The gasoline would go bad before you could use it all.

Plug-in hybrids already have this problem. Their fuel management systems try to keep the average age of the fuel in the tank under 1 year. The Chevy Volt has a fuel maintenance mode that runs every 6 weeks:

https://www.nytimes.com/2014/05/11/automobiles/owners-who-ar...

https://www.autoblog.com/2011/03/18/chevy-volts-sealed-gas-t...

Instead of having a "lifetime tank", a car that uses 0.005L per 100km would be better off with a tiny tank. And then instead of buying fuel at a fuel station you'd buy it in a bottle at the supermarket along with your orange juice.


There is alkylate petrol ([1] https://duckduckgo.com/?q=alkylate+petrol), which is said to last anywhere from two to ten years depending on the mixture, while burning rather cleanly.


You are thinking too small: with a car engine generating power that cheaply, you could use it to power a turbine and provide cheap electricity to the entire world. It would fix our energy needs for a very long time and usher in a new age!


Or the car could just be very efficient. Gasoline has a lot of energy. Transporting a person 100km on 34 MJ/l * 0.05 l = 1.7 MJ doesn't sound as impossible as you make it seem.


Trains transport at 0.41 MJ/t·km. If the person weighs 0.1t, it would take a train packed full of people 41MJ per person to transport them 100km, or a bit more than one litre of gasoline. I don't think it is possible to go significantly below that without transporting them on maglev rails or in vacuum tubes.

Secondly, we were talking about 0.005l cars, not 0.05l, so it would be a few hundred times more efficient than train transportation.


Bicycles are probably a bit more efficient than trains.


It's about the same: you burn several thousand kilocalories, or a few tens of megajoules, biking 100 km.


My Strava says 100km of cycling is ~3370 kcal.
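For reference, here is that figure converted into the units used upthread (a quick sketch in TypeScript; the 34 MJ/l gasoline figure is the one quoted above, and food calories aren't directly comparable to fuel burned in an engine, so treat it as an order-of-magnitude check only):

    // ~3370 kcal for 100 km of cycling, expressed in MJ and in litres of
    // gasoline-equivalent using the 34 MJ/l figure quoted upthread.
    const kcalPer100km = 3370;
    const megajoules = (kcalPer100km * 4184) / 1e6;   // 1 kcal = 4184 J, so ~14.1 MJ
    const litresEquivalent = megajoules / 34;         // ~0.41 l of gasoline-equivalent
    console.log(megajoules.toFixed(1), litresEquivalent.toFixed(2));

So roughly 14 MJ, or about 0.4 l of gasoline-equivalent, per 100 km.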


The big problem is this: if we relate this back to software, it would mean the software being delivered in 10-15 years rather than in 6 months. Kind of a big downside...


Not necessarily. For one, relating this doesn't remove the ability to develop incrementally. Also, there's very little actual innovation in software being done. Almost anything we use existed in some version in the past two or three decades, and it was much faster, even if rougher around the edges. Just think: how many of the startups and SaaS projects we see featured on HN week after week are just reimplementing a feature or a small piece of workflow from Excel or Photoshop as a standalone web app?


That's the old Ruby on Rails argument. In that specific case it only made sense when there were no similar frameworks for faster languages, but that's hardly the case today.


Ironically though, I'd be willing to bet that end-user performance on most traditional server-side-rendered apps using the "heavyweight" RoR framework is far better than the latest and greatest SPA approach.


It really depends.

I've worked on a Preact SPA where the time to initial render was faster than that of the HAML templates it replaced.

But then again, that was an outlier. If your target is speed, traditional SSR or static pages are the best bet anyway.


In a previous life I did back office development for ecommerce. We had two applications: one RoR monolith and a "modern" JavaScript Meteor SPA. The SPA was actually developed to replace the equivalent functionality in the RoR application, but we ended up killing it and sticking with what we had. Depending on what you're trying to accomplish, server-side rendering is just as good as, if not better than, the latest and greatest in client-side rendering.


Nitpicking: gas goes bad eventually and needs to be burned before that; the usual timeframe given is ~6 months.


That's the ethanol component of the gas (i.e. the E part of E5, E10) degrading.

If you had pure gasoline, you could store it for years (and in the past, countries and armies did exactly that for their reserves).


"Most users," yeah, perhaps.

A UI where each interaction takes several seconds is poor UI design. I do lose most of my time and patience to poor UI design, including needless "improvements" every few iterations that break my workflow and have me relearn the UI.

I find the general state of interaction with the software I use on a daily basis to be piss poor, and over the last 20 or so years I have at best seen zero improvement on average; if I were less charitable I'd say it has only gone downhill. Applications around the turn of the century were generally responsive, as far as I can remember.


> These days, I think most users will lose more time and be more frustrated by poor UI design, accidental inputs, etc. than any performance characteristics of the software they use.

I’m willing to bet that a significant percentage of my accidental inputs are due to UI latency.


Virtually all of my accidental inputs are caused by application slowness or repaints that occur several hundred milliseconds after they should have.

I want all interactions with all of my computing devices to occur in as close to 0ms as possible. 0ms is great; 20ms is good; 200ms is bad; 500ms is absolutely inexcusable unless you're doing significant computation. I find it astonishing how many things will run in the 200-500ms range for utterly trivial operations such as just navigating between UI elements. And no, animation is not an acceptable illusion to hide slowness.

I am with the OP. "Good enough" is a bane of our discipline.


How about the I-am-about-to-press-this-button-but-wait-we-need-to-rerender-the-whole-page problem? At which point you misclick, or your click doesn't register at all. Some recent shops and ad-heavy pages especially use this great functionality ;)


Twitter on mobile.....


The rule for games is that you have 16ms (for a 60Hz monitor) to process all input and draw the next frame. That's a decent rule for everything related to user input. And since there are high-refresh-rate monitors, and it's a web app rather than a game using 100% of the CPU & GPU, just assume 4-5ms for a nicer number. If you take longer than that to respond to user input on your lowest-capability supported configuration, you've got a bug.

0ms is great, 4ms is very good, 16ms is minimally acceptable, 20ms needs improvement (you're skipping frames), 200ms is bad (it's visible!), 500ms is ridiculous and should have been showing a progress bar or something.

Responding to input doesn't necessarily mean being done with processing, it just means showing a response.
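A minimal sketch of what checking that budget can look like in a web app (TypeScript, browser APIs only; in modern browsers event.timeStamp shares the performance.now() clock, and the 16 ms threshold is just the figure above):

    // Warn whenever the frame following a click lands outside the 16 ms
    // budget (60 Hz). requestAnimationFrame runs just before the next paint,
    // so this approximates input-to-frame latency.
    const FRAME_BUDGET_MS = 16;

    document.addEventListener("click", (event) => {
      const inputTime = event.timeStamp; // same clock as performance.now() in modern browsers
      requestAnimationFrame(() => {
        const latency = performance.now() - inputTime;
        if (latency > FRAME_BUDGET_MS) {
          console.warn(`Input-to-frame latency ${latency.toFixed(1)} ms exceeds the ${FRAME_BUDGET_MS} ms budget`);
        }
      });
    });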


This happens to me all the time when starting pipelines in GitLab, which typically results in unwanted merges to master that then need to be reverted.


Don’t get me started on all the impressive rotating and zooming in Google Maps every time you accidentally brush the screen.

The usage story requires you to switch to turn-by-turn, and there’s no way to have a bird's-eye map follow your location along the route (unless you just choose some zoom level and manually recenter every so often).

It’s awful, distracting and frankly a waste of time... just to show a bit of animation every time I accidentally fail to register a drag...

Damn UI.


Well, Google Maps is its own story - it's like the app is being actively designed to be as useless as possible as a map - a means to navigate. The only supported workflow is search + turn-by-turn navigation, and everything else seems to be disincentivized on purpose.


I respectfully disagree -- something that is 10 times more efficient uses a tenth of the energy (theoretically). When the end user suffers a server outage due to load, or when they run out of battery ten times quicker, all of these things matter. When you have to pay for ten servers to run your product instead of one, this cost gets passed on to the end user.

I was forced to use a monitor at 30 fps for a few days due to a bad display setup. It made me realize how important 60 fps is. Even worse, try using an OS running in a VM for an extended period of time...

There are plenty of things that are 'good enough', but once users get used to something better they will never go back (if they have the choice, at least).


Another problem is that the inefficiency of multiple products tends to compound.

- Opening multiple tabs in a browser will kill your battery, and it's not the fault of a single page, but of all of them. Developers tend to blame the end user for opening too many tabs.

- Running a single Electron app is fast enough on a newer machine, but if you need multiple instances or multiple apps you're fucked.

- Some of my teammates can't use their laptops without the charger because they have to run 20+ docker containers just to have our main website load. The machines are also noisy because the fan is always on.

- Having complex build pipelines that take minutes or hours to run slows down developers, who are expensive. It's not the fault of any single piece of software (except maybe the chosen programming language), but of multiple inefficient libraries and packages.


> "Even worse, try using an OS running in a VM for an extended period of time..."

I actually do this for development and it works really well.

Ubuntu Linux VM in VMware Fusion on a Macbook Pro with MacOS.

Power consumption was found to be better than running Linux natively. (I'm guessing something about switching between the two GPUs, but who knows.)

GPU acceleration works fine; the Linux desktop animations, window fading and movement animations etc are just as I'd expect.

Performance seems to be fine generally, and I do care about performance.

(But I don't measure graphics performance, perhaps that's not as good as native. And when doing I/O intensive work, that's on servers.)

Being able to do a four-finger swipe on the trackpad to switch between MacOS desktops and Linux desktops (full screen) is really nice. It feels as if the two OSes are running side by side, rather than one inside another.

I've been doing Linux-in-a-VM for about 6 years, and wouldn't switch back to native on my laptop if I had a choice. The side-by-side illusion is too good.

Before that I ran various Linux desktops (or Linux consoles :-) for about 20 years natively on all my development machines and all my personal laptops, so it's not like I don't know what that's like. In general, I notice more graphics driver bugs in the native version...

(The one thing that stands out is that VMware's host-to-guest file sharing is extremely buggy, to the point of corrupting files and even crashing Git. MacOS's own SMB client is also atrocious in numerous ways, to the point of even deleting random files, but it does so less often, so you don't notice until later what's gone. I've had to work hard to find good workarounds to have reliable files! I mention this as a warning to anyone thinking of trying the same setup.)


What year MBP is this? I tried running Ubuntu in VirtualBox on my mid-2014 MBP with 16GB RAM, but that was anything but smooth. I ended up dual booting my T460s instead.

But perhaps the answer is VMware Fusion instead then.


It's a late 2013 MBP, 16GB RAM.

I've only given Linux 6GB RAM at the moment, and it's working out fine. Currently running Ubuntu 19.10.

I picked VMware Fusion originally because it was reported to have good-ish support for GPU emulation that was compatible with Linux desktops at the time. Without it, graphics can be a bit clunky. With it, it feels smooth enough for me, as a desktop.

My browser is Firefox on the Mac side, but dev web servers all on the Linux side.

The VM networking is fine, but I use a separate "private" network (for dev networking) from the "NAT" network (outgoing connections from Linux to internet), so Wifi IP address changes in the latter don't disrupt active connections of the former.

My editor is Emacs GUI on the Mac side (so it integrates with the native Mac GUI - Cmd-CV cut and paste etc, better scrolling), although I can call up Emacs sessions from Linux easily, and for TypeScript, dev language servers etc., Emacs is able to run them remotely as appropriate.

Smoothness over SSH from iTerm is a different thing from graphical desktop smoothness.

When doing graphics work (e.g. Inkscape/GIMP/ImageMagick), or remote access to Windows servers using Remmina for VNC/RDP, I use the Linux desktop.

But mostly I do dev work in Linux over SSH from iTerm. I don't think I've ever noticed any smoothness issues with that, except when VMware networking crashes due to SMB/NFS loops that I shouldn't let happen :-)


Thanks a lot for the long, thorough reply. It sounds like I might want to give VMware Fusion a go if I want to play around with Linux on my MBP again.


The answer is I/O latency.

Having your VM stored inside a file on a slow filesystem is bad. Having a separate LVM volume (on Linux), zvol (with ZFS), partition, or disk is much more performant.


I store my Linux VM disk inside a file on a Mac filesystem (HFS+, the old one), and I haven't noticed any significant human-noticeable I/O latency issues when using it. The Linux VM disk is formatted as ext4.

That's about human-scale experience, rather than measured latency. It won't be as fast as native, but it seems adequate for my use, even when grepping thousands of files, unpacking archives, etc, and I haven't noticed any significant stalling or pauses. It's encrypted too (by MacOS).

(That's in contrast to host-guest file access over the virtual network, which definitely has performance issues. But ext4 on the VM disk seems to work well.)

The VM is my main daily work "machine", and I'm a heavy user, so I'd notice if I/O latency was affecting use.

I'm sure it helps that the Mac has a fast SSD though.

(In contrast, on servers I use LVM a lot, in conjunction with MD-RAID and LUKS encryption.)


Yes, but it's not just relative quantities that matter; absolute values matter too, just as the post you replied to was saying.

Optimizing for microseconds when bad UI steals seconds is being penny-wise and pound foolish. Business might not understand tech but they do generally understand how it ends up on the balance sheet.


But the balance sheets encompass more than delivering value to end users; businesses can and do trade off that value for some money elsewhere (see e.g. pretty much everything that has anything to do with ads).

Note also the potential deadlock here. Optimizing core calculations at μs level is bad because UI is slow, but optimizing UI to have μs responsiveness is bad, because core calculations are slow. Or the database is slow. This way, every part of the program can use every other part of the program as a justification to not do the necessary work. Reverse tragedy of the commons perhaps?


> Even worse, try using an OS running in a VM for an extended period of time...

I do that for most of my hobbyist Linux dev work. It's fine. It can do 4k and everything. It's surely not optimal but it's better than managing dual boot.


Any hints? How are you getting any kind of graphics acceleration? What's your host/guest/hypervisor setup?


Host is Windows, guest is Ubuntu. Hypervisor is VMWare Workstation 12 Player. There is a very straightforward process to get graphics acceleration in the VM. The shell has a "mount install CD" option that causes a CD containing drivers to be loaded in the guest (Player > Manage > Reinstall VMWare Tools). You install those, and also enable acceleration in the VMWare settings (https://imgur.com/a/PUaE38u). Again, it's not perfect, but I can e.g. play fullscreen 1080p YouTube videos. Not sure how it would like playing 4k videos, but my desktop doesn't like that so much even in the host OS.


I do this the other way around: Ubuntu host and a KVM virtual machine controlled by virt-manager, with PCIe passthrough for its own GPU and NVMe boot drive. I enjoy Linux too much for daily use (and rely on it for bulk storage, with internal drives fused together with mergerfs and backed up with snapraid), but I do a lot of photography and media work so I also rely on Windows. This way, I can use a KVM frame relay like Looking Glass to get a latency-free, almost-native-performance Windows VM inside an Ubuntu host, without the need to dual boot (and since the NVMe drive is just Windows, I can always boot into Windows directly if I please).


I have to be careful about what I describe, but I don't think people care about speed or performance at all when it comes to tech, and it makes me sad. In fact, there are so many occasions where the optimisation is so good that the end user doesn't believe that anything happened. So you have to deliberately introduce delay because a computer has to feel like it thinks the same way you do.

At my current place of employment we have plenty of average requests hitting 5-10 seconds and longer; you've got N+1 queries against the network, rather than the DB. As long as it's within 15 or 30 seconds nobody cares; they probably blame their 4G signal for it (especially in the UK, where our mobile infrastructure is notoriously spotty, and entirely absent even in the middle of London). But since I work on those systems I'm upset and disappointed that I'm working on APIs that can take tens of seconds to respond.

The analogy is also not great because MPG is an established metric for fuel efficiency in cars. The higher the MPG the better.


> In fact, there are so many occasions where the optimisation is so good that the end user doesn't believe that anything happened. So you have to deliberately introduce delay because a computer has to feel like it thinks the same way you do.

I never liked this view. I can't think of a single legitimate use case that couldn't be solved better some other way than by hiding your true capabilities and thus wasting people's time.

> they probably blame their 4G signal for it

Sad thing is, enough companies thinking like this and the incentive to improve on 4G itself evaporates, because "almost nothing can work fast enough to make use of these optimizations anyway".


"I can't think of a single legitimate use case that couldn't be solved better than by hiding your true capabilities, and thus wasting people's time."

Consider a loading spinner with a line of copy that explains what's happening. Say it's for an action that can take anywhere from 20 milliseconds to several seconds, based on a combination of factors that are hard to predict beforehand. At the low end, showing the spinner will result in it flashing on the screen jarringly for just a frame. To the user it will appear as some kind of visual glitch since they won't have time to even make out what it is, much less read the copy.

In situations like this, it's often a good idea to introduce an artificial delay up to a floor that gives the user time to register what's happening and read the copy.
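A minimal sketch of that floor, combined with the delayed-appearance idea from the reply below (TypeScript; showSpinner/hideSpinner are assumed placeholders for whatever the UI actually does):

    // Two knobs: don't show the spinner at all for fast operations, and once
    // shown, keep it visible for a minimum time so it never flashes for a
    // single frame.
    declare function showSpinner(): void;
    declare function hideSpinner(): void;

    const SHOW_DELAY_MS = 150;   // operations faster than this show nothing
    const MIN_VISIBLE_MS = 400;  // once visible, stay up at least this long

    async function withSpinner<T>(task: Promise<T>): Promise<T> {
      let shownAt = 0; // 0 means the spinner was never shown
      const timer = setTimeout(() => {
        shownAt = performance.now();
        showSpinner();
      }, SHOW_DELAY_MS);
      try {
        return await task;
      } finally {
        clearTimeout(timer);
        if (shownAt > 0) {
          const remaining = Math.max(0, MIN_VISIBLE_MS - (performance.now() - shownAt));
          setTimeout(hideSpinner, remaining);
        }
      }
    }

With something like withSpinner(saveDraft()), fast operations show nothing at all, and slow ones show a spinner that stays up long enough to be read.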


Wouldn't it be better to delay the appearance of the spinner, so it doesn't show at all for those fast operations?


This doesn't work well in apps, but games do incredible things to hide that state, partially as a consequence of avoiding a patent on minigames inside loading screens.

e.g. back in the 90s with Resi 1, the loading screen was hidden by a slow and tense animation of a door opening. It totally fit the atmosphere.

Plenty of games add an elevator or a scripted vehicle ride, or some ridiculous door locking mechanism that serves the same purpose without breaking immersion, especially as those faux-loading screens can be dynamic.

It's pretty much the exact same technique used in cinema when a director wants to stitch multiple takes into a single shot (e.g. that episode in True Detective; that other one in Mr Robot; all of Birdman).


You can still end up with the jarring flash. Say you delay 100ms--if the action takes 120ms, you have the same problem.


Flash is good. If the state transition is "no indicator -> spinner -> checkmark", then if the user notices the spinner flashing for one frame, that only assures them the task was actually performed.

It's a real case, actually. I don't remember the name, but I've encountered this situation in the past, and that brief flash of an "in progress" marker was what I used to determine whether clicking a "retry" button actually did something, or whether the input was just ignored. It's one of those unexpected benefits of predictability in UI coding: the fewer special cases there are, the better.


> In fact, there are so many occasions where the optimisation is so good that the end user doesn't believe that anything happened. So you have to deliberately introduce delay because a computer has to feel like it thinks the same way you do.

I see this argument coming up a lot, but this can be solved by better UX. Making things slow on purpose is just designers/developers being lazy.

Btw users feeling uneasy when something is "too fast" is an indictment of everything else being too damn slow. :D


Regions that use the metric system use liters per 100 kilometers. "The less fuel needed for the same distance, the better."


I’m sure some sort of instantaneous indicator (e.g. a checkmark icon appearing) could be used instead of inserting artificial delays.


> In fact, there are so many occasions where the optimisation is so good that the end user doesn't believe that anything happened

IMO it can be attributed more to bad UI than optimizations.


Everywhere but the US uses l/100km (which is a much better metric than MPG).


It's still used in the UK too, in our hybrid metric/imperial setup.


I wonder how this trend will be affected by the slowing of Moore’s law. There will always be demand for more compute, and until now that’s largely been met with improvements in hardware. When that becomes less true, software optimization may become more valuable.


‘poor UI design, accidental inputs’

I use webpages for most of the social networking platforms such as Facebook. I am left-handed and scroll with my left thumb (the left half of the screen). I have accidentally ‘liked’ people’s posts and sent accidental friend requests for this reason alone.

I'm guessing that, along with language selection, it might be helpful to have a hand-preference setting for mobile browsing.


> admittedly feature bloat, ads, tracking, etc. are also a problem, but I think they're mostly orthogonal to under-the-hood performance

I think for webpages it is the opposite: non-orthogonal in most cases.

If you disable your JS/ad/...-blocker and go to pages like Reddit, it is definitely slower and the CPU spikes. Even with a blocker, the page still does a thousand things in the first-party scripts (like tracking mouse movements and such) that slow everything down a lot.
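To make that concrete: mousemove can fire far more often than the page repaints, so an unthrottled first-party tracker competes directly with rendering. A minimal sketch of doing the same tracking more cheaply (TypeScript; recordMouseSample is a hypothetical analytics hook, not any particular vendor's API):

    // Sample the mouse position at most a few times per second instead of on
    // every mousemove event, and mark the listener passive so the browser
    // never has to wait on it before continuing with default handling.
    declare function recordMouseSample(x: number, y: number): void;

    const SAMPLE_INTERVAL_MS = 250;
    let lastSample = 0;

    document.addEventListener(
      "mousemove",
      (event) => {
        const now = performance.now();
        if (now - lastSample < SAMPLE_INTERVAL_MS) return;
        lastSample = now;
        recordMouseSample(event.clientX, event.clientY);
      },
      { passive: true }
    );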


I don't know, that just feels wrong. If anything, the rise of mobile means there should be more emphasis on speed. All the bloat is because of misguided aesthetics (which all look the same, as if designers move between companies every year, which they do) and fanciness. Can you point to a newish app that is clearly better than its predecessor?


> All the bloat is because of misguided aesthetics (which all look the same, as if designers move between companies every year, which they do) and fanciness

That's not really true. Slack could be just as pretty and a fraction of the weight, if they hadn't used Electron.


I think there are two factors preventing mobile from being a force to drive performance optimizations.

One, phone OSes are being designed for single-tasked use. Outside of alarms and notifications in the background (which tend to be routed through a common service), the user can see just one app at a time, and mobile OSes actively restrict background activity of other apps. So every application can get away with the assumption that it's the sole owner of the phone's resources.

Two, given the above, the most noticeable problem is now power usage. As Moore's law has all but evaporated for single-threaded performance, hardware is now being upgraded for multicore and (important here) power efficiency. So apps can get away with poor engineering, because every new generation of smartphones has a more power-efficient CPU, and the lifetime on a single charge doesn't degrade.


I think objections like this may be put in terms of measurable cost-benefits but they often come down to the feeling of wasted time and effort involved in writing, reading and understanding garbage software.

Moreover, the same cost equation that produces software that is much less efficient than it could be also produces software that might be usable for its purpose (barely) but is much more ugly, confusing, and buggy than it needs to be.

That equation is: add the needed features, sell the software first, get lock-in, milk it till it dies, and move on. That equation is locally cost-efficient. Locally, that wins, and that produces the world we see every day.

Maybe the lack of craftsmanship, the lack of doing one's activity well, is simply inevitable. Or maybe the race to the bottom is going to kill us - see the Boeing 737 Max as food for thought (not that software as such was to blame there, but the quality issue was there).


> If basic computer operations like loading a webpage took minutes rather than seconds...

Wait, are you implying they don't? What world do you live in, and how do I join?


This takes about 100 liters per 100 kilometers: https://en.wikipedia.org/wiki/M3_half-track

It does fill some other requirements that a regular car doesn't.


The analogy is wrong as well because a car engine is used for a single purpose: moving the car itself. Imagine if you had an engine that powered a hundred cars instead, but a lot of those cars were unoptimized, so you could only run two cars at a time instead of the theoretical 100.

or... something.

The car analogy does remind me of one I read a while ago, comparing cars and their cost and performance with CPUs.


RTFA:

>And build times? Nobody thinks compiler that works minutes or even hours is a problem. What happened to “programmer’s time is more important”? Almost all compilers, pre- and post-processors add significant, sometimes disastrous time tax to your build without providing proportionally substantial benefits.


FWIW, I did RTFA (top to bottom) before commenting. I chose to reply to some parts of the article and not others, especially the parts I felt were particularly hyperbolic.

Anecdotally, in my career I've never had to compile something myself that took longer than a few minutes (but maybe if you work on the Linux kernel or some other big project, you have; or maybe I've just been lucky to mainly use toolchains that avoid the pitfalls here). I would definitely consider it a problem if my compiler runs regularly took O(10mins), and would probably consider looking for optimizations or alternatives at that point. I've also benefited immensely from a lot of the analysis tools that are built into the toolchains that I use, and I have no doubt that most or all of them have saved me more pain than they've caused me.


Then you're being disingenuous in picking a quarter of the quote.

>You’ve probably heard this mantra: “Programmer time is more expensive than computer time.” What it means basically is that we’re wasting computers at an unprecedented scale. Would you buy a car if it eats 100 liters per 100 kilometers? How about 1000 liters? With computers, we do that all the time.

The point is that we are wasting all the resources at every scale. We are supposedly burning computer cycles because developer time is more important. Yet we are also burning developer time with compiling, or testing for interpreted languages, at a rate that is starting to approach the batch processing days.


Complaints about slow compilers or praise for toolchains being faster than others are very common, so I don't see how "nobody thinks" that.


I agree it's all slower and sucks. But I don't think it's solely a technical problem.

1/ What didn't seem to get mentioned was speed to market. It's far worse to build the right thing that no one wants than to build the crappy thing that some people want a lot. As a result, it makes sense for people to leverage Electron--but it has consequences for users down the line.

2/ Because we deal with orders of magnitude with software, it's not actually a good ROI to deal with things that are under 1x improvement on a human scale. So what made sense to optimize when computers were 300MHz doesn't make sense at all when computers are 1GHz, given a limited time and budget.

3/ Anecdotally (and others can nix or verify), what I hear from ex-Googlers is that no one gets credit for maintaining the existing software or trying to make it faster. The only way you get promoted is if you created a new project. So that's what people end up doing, and you get 4 or 5 versions of the same project that do the same thing, all not very well.

I agree that the suckage is a problem. But I think it's the structure of incentives in the environment in which software is written that also needs to be addressed, not just the technical deficiencies of how we practice writing software, like how to maintain state.

It's interesting Chris Granger submitted this. I can see that the gears have been turning for him on this topic again.


I might strengthen your argument even more and say it's largely a non-technical problem. We have had the tools necessary to build good software for a long time. As others have pointed out, I think a lot of this comes down to incentives and the fact that no one has demonstrated the tradeoff in a compelling way so far.

I find it really interesting that no one in the future of programming/coding community has been able to really articulate or demonstrate what an "ideal" version of software engineering would be like. What would the perfect project look like both socially and technically? What would I gain and what would I give up to have that? Can you demonstrate it beyond the handpicked examples you'll start with? We definitely didn't get there.

It's much harder to create a clear narrative around the social aspects of engineering, but it's not impossible - we weren't talking about agile 20 years ago. The question is can we come up with a complete system that resonates enough with people to actually push behavior change through? Solving that is very different than building the next great language or framework. It requires starting a movement and capturing a belief that the community has in some actionable form.

I've been thinking a lot about all of this since we closed down Eve. I've also been working on a few things. :)


I'll take this opportunity to appreciate C# in VS as a counterexample to the article. Fast as hell (sub-second compile times for a moderately large project on my 2011 vintage 2500k), extremely stable, productive, and aesthetically pleasing. So, thanks.


It's very hard for me to get away from C# because it's just so crazy productive. The tooling is fantastic and the runtime performance is more than good enough.

One thing I found was that, surprisingly, the C# code I write outperforms the C++ code I used to write for equal development time.

I was good at C++, but the language has so many footguns and in general is so slow to develop in that I would stick to "simple" and straightforward solutions. I avoided multi-threading like the plague because it was just so hard to get right.

Meanwhile in C# it's just so easy to sprinkle a little bit of multithreading into almost any application (even command-line tools) that I do it "just because". Even if the single-threaded performance is not-so-great, the end result is often much better.

Similarly, it's easy to apply complex algorithms or switch between a few variants until something works well. In C++ or even Rust, the strict ownership semantics makes some algorithm changes require wholesale changes to the rest of the program, making this kind of experimentation a no-go.

The thing that blows my mind is the "modern" approach to programming that seems to be mostly young people pretending that Java or C# just don't exist.

Have you seen what JavaScript and Python people call "easy?" I saw a page describing a REST API based on JSON where they basically had thousands of functions with no documentation, no schema, and no typed return values. It was all "Just look at what the website JS does and reverse engineer it! It's so easy!"

I was flabbergasted. In Visual Studio I can literally just paste a WSDL URL into a form and it'll auto-generate a 100K-line client with async methods and strongly-typed parameters and return values in like... a second. Ditto for Linq-2-SQL or similar frameworks.


I've also been lurking on the FoC community, and I hadn't seen much of an articulation of the social and incentive structures that produce software. Do you think they'd be receptive to it?

And by "social and inventive structures", I'm assuming you're talking about change on the order of how open source software or agile development changed how we develop software?

While agile did address how to do software in an environment of changing requirements and limited time, we don't currently have anything that addresses attention to the speed of software, building solid foundations, and incentives to maintain software.

What would a complete system encompass that's currently missing in your mind?


This is very much a social and political problem. Will be interesting to see if us technical folks can solve it.


I think you would see great change if you looked at the personalities gathered around a given opportunity.

Because it's never really about problems; it's just perceived that way.

A certain challenge needs a specific set of personalities to solve it. That's the real puzzle.

Great engineers will never be able to solve things properly unless given the chance by those who control the surroundings.

We debate how we should develop and what method should be used: is it agile, or is it lean? But maybe the problem starts earlier, and by focusing on exactly which methods and tools to use, we miss the simplest solution, one even beginners can see.

For example, I am an architect; I tend not to touch the economics of a project. That's better suited to other people.

While I haven't read much about team-based development, I would like to be pointed to well-regarded literature about it. Maybe it's better called social programming, just another label for what we really do.

The person I miss the most at work is my wife. She is clearly my perfect complement and makes me perform 1000x better. I find that very funny since she does not care about IT at all.


Maybe we should enforce some guidelines, and sponsor some programs to address these issues.

There are ways to develop working software, but not if it's all locked behind closed OSes and other bullshit.


I don't think the stuff I write is that bloated, but like most things these days it pulls in a bunch of dependencies, which in turn pull in their own dependencies. The result: pretty bloated software.

Writing performant, clean, pure software is super appealing as a developer, so why don't I do something about the bloated software I write? I think a big part of it is that it's hard to see the direct benefit from the very large amount of effort I'd have to put in.

Sure, I could write the one thing I use from that one library myself instead of pulling in the whole library. It might be faster, it might end up as a smaller binary, and it might be more deterministic because I know exactly what it's doing. But it'll take a long time, it might have a lot of bugs, and forget about maintaining it. At the end of the day, do the people that use my software care that I put in the effort to do this? They probably won't even notice.


I think part of it is knowing how to use libraries. It's actually a good thing to make use of well-tested implementations a lot of the time rather than reinventing the wheel: for instance, it would be crazy to implement your own cryptography functions or, in most cases, your own networking stack. Libraries are good when they encapsulate a well-defined set of functionality behind a well-defined interface. Even better if that interface is arrived at through a standards process.

To me, where libraries get a bit more questionable is when they exist in the realm of pure abstraction, or when they try to own the flow of control or provide the structure around which your program should hang. For instance, with something like Ruby on Rails, it sometimes feels like you are trying to undo what the framework has assumed you need so that you can get the functionality you want. A good library should be something you build on top of, not something you carve your implementation out of.


A good compromise would be to replace bloated modules with alternatives that are leaner and have fewer nested dependencies.


Most developers I have known want to work on the new great new thing. They don't want to spend a great deal of time on the project either. Forget about them wanting to dedicate time to software maintenance. Not sexy enough.


OK, but why? And what can we do to improve things? Promote maintenance, I suppose, but I think one of the issues is that you can show off something new, whereas it's much more difficult to show that something could have gone wrong (a failure, difficulty growing) but didn't.


To the extent it's in your power as a developer and a team member, don't tolerate low-performance code from yourself or your co-workers.

In my experience, a lot of performance problems boil down to really stupid problems, like simple code using the wrong data structure out of convenience (e.g. linked lists instead of arrays for lots of randomly accessed data), or data structured in a bad way (e.g. allocating a lot of small pieces of memory all the time). Often there are cheap performance wins to be had if you occasionally run the product through a profiler and spend a couple of hours fixing the most pressing issue that shows up. A couple of hours isn't much; there's enough slack in the development process to find those hours every month or two, without slowing down your regular work.
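As a concrete example of how cheap such a check can be, here is a minimal sketch timing random access over a flat array versus a hand-rolled linked list of the same data (TypeScript; the absolute numbers depend on the runtime, the point is that the measurement costs minutes to write):

    // Random access: flat array vs. a hand-rolled singly linked list.
    interface ListNode { value: number; next: ListNode | null; }

    const N = 100_000;
    const LOOKUPS = 1_000;

    const array = Array.from({ length: N }, (_, i) => i);

    let head: ListNode | null = null;
    for (let i = N - 1; i >= 0; i--) head = { value: i, next: head };

    function nthFromList(n: number): number {
      let node = head as ListNode;
      for (let i = 0; i < n; i++) node = node.next as ListNode;
      return node.value;
    }

    const indices = Array.from({ length: LOOKUPS }, () => Math.floor(Math.random() * N));

    console.time("array random access");
    let arraySum = 0;
    for (const i of indices) arraySum += array[i];
    console.timeEnd("array random access");

    console.time("linked list random access");
    let listSum = 0;
    for (const i of indices) listSum += nthFromList(i);
    console.timeEnd("linked list random access");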


I agree with your point of developers being responsible for the performance.

But I have a different experience (probably because we work in different areas):

Most of the performance problems in the products I've worked on were purely systemic.

They boiled down to technologies and architectures having been chosen for "organizational" rather than technological reasons.

And "organizational" is in quotes because sometimes it was just blackmail: I worked with two developers who quit in protest after the prototype they wrote in Scala was deemed not good enough and dropped for... being too slow, ironically.


This has been a major frustration for me as a UI developer on the current application I work on. The UI is often hamstrung by how the backend API was implemented. There are frequently cases where we stitch together pre-existing API functionality to make something work in a far-from-ideal manner just because it would take longer to do it right and no one is interested.


I've seen a similar thing happen. It all started with good intentions, like only having simple endpoints that do "one and only one thing".

In the end the backend was pure and beautiful, but the frontend devs had to perform joins in the client and make 21 API calls for a 20-item list, and then everything goes to hell.
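A minimal sketch of the difference, with hypothetical endpoints (this is not the actual API from that project):

    interface Item { id: string; name: string; }

    // N+1 version: one call for the list of ids, then one call per item.
    // For a 20-item list that's 21 round trips, each paying full network latency.
    async function loadItemsNPlusOne(): Promise<Item[]> {
      const ids: string[] = await (await fetch("/api/items")).json();
      return Promise.all(
        ids.map(async (id) => (await fetch(`/api/items/${id}`)).json())
      );
    }

    // Batched version: one endpoint that performs the join next to the data,
    // so the client makes a single round trip.
    async function loadItemsBatched(): Promise<Item[]> {
      return (await fetch("/api/items?include=details")).json();
    }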


Ah, I can tell you such stories of a stack that evolved solely out of incompetence...


It's not a technical problem at all. It's an economy problem.


From a Reddit comment:

> While I do share the general sentiment, I do feel the need to point out that this exact page, a blog entry consisting mostly of just text, is also half the size of Windows 95 on my computer and includes 6MB of javascript, which is more code than there was in Linux 1.0. Linux at that point already contained drivers for various network interface controllers, hard drives, tape drives, disk drives, audio devices, user input devices and serial devices, 5 or 6 different filesystems, implementations of TCP, UDP, ICMP, IP, ARP, Ethernet and Unix Domain Sockets, a full software implementation of IEEE754 a MIDI sequencer/synthesizer and lots of other things.

>If you want to call people out, start with yourself. The web does not have to be like this, and in fact it is possible in 2018 to even have a website that does not include Google Analytics.

https://www.reddit.com/r/programming/comments/9go8ul/comment...


The sad thing is, to quote The Website Obesity Crisis[1],

> Today’s egregiously bloated site becomes tomorrow’s typical page, and next year’s elegantly slim design.

[1] https://idlewords.com/talks/website_obesity.htm


Since this Reddit comment was made, the Twitter iframe responsible for the megabytes of JavaScript has been replaced by a <video> tag. The only JavaScript left on the page is Google Analytics, which is way less than 6MB.


I feel bad now that my comment received so much attention. I didn’t realize that the Reddit comment was made a year ago, and I should have tested the webpage size myself. The author’s argument is still important, after all.


And this really wasn't the author's fault—it's completely logical that if your story contains a tweet, you should attempt to embed it in the way Twitter recommends.

This is Twitter, not some random framework!


It fits in just under an MB instead of 6MB


Long ago I watched a documentary about the early Apple days, when management was encouraging their developers to reduce boot times by 10 seconds. The argument was that 10 seconds multiplied by the number of boot sequences would result in saving many human lives worth of time.

Edit: found a link with the same story: https://www.folklore.org/StoryView.py?story=Saving_Lives.txt

The software world needs more of this kind of thinking. Not more arguments like "programmer's time is worth less than CPU time", which often fail to account for all externalities.


I like the "human lifetimes wasted"-metric. It's interesting to think that a badly optimized piece of code used by a few million people basically kills a few each day. If every manager, client and programmer thought for a second if the 30min they save is worth the human lifespans wasted I think we'd have better software.


I wish more companies thought like this in general. I often think about the nature of the work I'm doing as a developer and wonder if it's making society better off as a whole. The answer is usually a resounding no.


Same here, but why exactly?

In my country, software engineering is one of the best careers in terms of income, and I bet it is similar in most other countries. Why do we deserve that much buzz/fame/respect/income if the work we are doing is NOT making society better?

These thoughts just haunt me from time to time.


> Why do we deserve that much buzz/fame/respect/income if the work we are doing is NOT making the society better?

I understand that you're asking a theoretical question, not a practical one, but in practical terms the answer is fairly simple. Our economy is not built to (indeed, is built not to) reward individuals in line with what they contribute to society. An entirely different set of incentives are what structure our economy, and therefore the jobs and lives of most people.

In some sense, David Graeber's Bullshit Jobs is all about the widespread awareness (and denial) of this phenomenon, and what caused it. I wouldn't say it's a perfect book but it's the best one I've read on the subject.


That's obvious. It's work that by definition reaches many others automatically and acts faster than humans, with less human intervention, so it saves work. Anything that saves time/money and has this multiplication effect will generate tons of cash. No wonder we capture a part of it.

Edit: in other, simpler words, it's useful and it scales well.


They could think like this if it became part of their cost structure. There's no reason for them to think like this other than in terms of profit & loss.


I think my work makes society some infinitesimal amount better.


That's an important comment and made me think that nobody here has mentioned climate change (where human lives are/will be affected, literally). There is an emerging movement toward low-carbon, low-tech, sustainable web design, but it's still very much fringe. To make it mainstream, we all need to work on coming up with better economic incentives.


If the cost of that boot time was somehow materialized upstream - e.g. if companies that produced OSes had to pay for the compute resources they used, rather than the consumer paying for the compute - then economics would solve the problem.

As it is, software can largely free ride on consumer resources.


This implies that time not spent using their software is time wasted doing nothing. Not that reducing boot times would be a bad thing, but that sounds more like a marketing gimmick. As kids we would wait forever for our Commodore 64 games to load; knowing this, we planned accordingly.


"...would result in saving many human lives worth of time."

Meh, this is manager-speak for "saving human lives", which they definitely were not. They weren't saving anybody. I mean, there's an argument that, in the modern day of 2020, time away from the computer is better spent than time on it; so a faster boot time is actually worse than a slower boot time. Faster boot time is less time with the family.

Good managers, like Steve Jobs was, are really good at motivating people using false narratives.


Performance is one thing, but I'm really just struck by how often I run into things that are completely broken or barely working for extended periods of time.

As I write this, I've been trying to get my Amazon seller account reactivated for more than a year, because their reactivation process is just... broken. Clicking any of the buttons, including the ones to contact customer support, just takes you back to the same page. Attempts to even try to tell someone usually put you in touch with a customer service agent halfway across the world who has no clue what you're talking about and doesn't care; even if they did care, they'd have no way to actually forward your message along to the team that might be able to spend the 20 minutes it might take to fix the issue.

The "barely working" thing is even more common. I feel like we've gotten used to everything just being so barely functional that it isn't even a disadvantage for companies anymore. We usually don't have much of an alternative place to take our business.


Khan Academy has some lessons aimed at fairly young kids—counting, spotting gaps in counting, things that simple. I tried to sit with my son on the Khan Academy iPad app a few weeks ago to do some with him, thinking it'd be great. Unfortunately it is (or seemed to be, to such a degree that I'm about 99% sure it is) janky webtech, so glitches and weirdness made it too hard for my son to progress without my constantly stepping in to fix the interface. Things like no feedback that a button's been pressed? Guess what a kid (or hell, adult) is gonna do? Hammer the button! Which... then keeps it greyed out once it does register the press, but doesn't ever progress, so you're stuck on the screen and have to go back and start the lesson over. Missed presses galore, leading to confusion and frustration that nothing was working the way he thought it was (and, in fact, was supposed) to work.

I don't mean to shit on Khan Academy exactly because it's not like I'm paying for it, but those lessons may as well not exist for a 4 year old with an interface that poor. It was bad enough that more than half my time intervening wasn't to help him with the content, nor to teach him how to use the interface, but to save him from the interface.

This is utterly typical, too. We just get so used to working around bullshit like this, and we're so good at it and usually intuit why it's happening, that we don't notice that it's constant, especially on the web.


Send bug reports in to Khan Academy if you get a chance.


I'd love to see a software-industry-wide quality manifesto. The tenets could include things like:

* Measure whether the service you provide is actually working the way your customers expect.

(Not just "did my server send back an http 200 response", not just "did my load balancer send back an http 200", not just "did my UI record that it handled some data", but actually measure: did this thing do what users expect? How many times, when someone tried to get something done with your product, did it work and they got it done?)

* Sanity-check your metrics.

(At a regular cadence, go listen for user feedback, watch them use your product, listen to them, and see whether you are actually measuring the things that are obviously causing pain for your users.)

* Start measuring whether the thing works before you launch the product.

(The first time you say "OK, this is silently failing for some people, and it's going to take me a week to bolt on instrumentation to figure out how bad it is", should be the last time.)

* Keep a ranked list of the things that are working the least well for customers the most often.

(Doesn't have to be perfect, but just the process of having product & business & engineering people looking at the same ranked list of quality problems, and helping them reason about how bad each one is for customers, goes a long way.)
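As a minimal sketch of the first tenet above (TypeScript; track is an assumed analytics sink, and the only point is that success is recorded where the user-visible outcome happens, not where the HTTP 200 comes back):

    // Record attempts and outcomes of a user-level task, not HTTP statuses.
    declare function track(event: string, props: Record<string, unknown>): void;

    async function instrumentTask<T>(name: string, task: () => Promise<T>): Promise<T> {
      const started = performance.now();
      track("task_attempted", { name });
      try {
        const result = await task();
        // Only call this once the UI actually reflects the completed task.
        track("task_succeeded", { name, ms: performance.now() - started });
        return result;
      } catch (error) {
        track("task_failed", { name, ms: performance.now() - started, error: String(error) });
        throw error;
      }
    }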


You might be interested in Software Craftsmanship [0] manifesto. There are many communities and initiatives around the world gathering folks with the interest in producing high-quality software. From the few of the folks I have been working with that are involved in SC, I can definitely recommend the movement and so I'm also exploring options in joining some local meet-ups and/or events.

[0] http://manifesto.softwarecraftsmanship.org/


This is also one of my pet peeves. It's easier than ever to collect this data and analyse it. Unfortunately, most of our clients are doing neither, or they are collecting the logs but carefully ignoring them.

I've lost count of the number of monitoring systems I've opened up just to see a wall of red tapering off to orange after scrolling a couple of screens further down.

At times like this I like to point out that "Red is the bad colour". I generally get a wide-eyed uncomprehending look followed by any one of a litany of excuses:

- I thought it was the other team's responsibility

- It's not in my job description

- I just look after the infrastructure

- I just look after the software

- I'm just a manager, I'm not technical

- I'm just a tech, it's management's responsibility

Unfortunately, as a consultant I can't force anyone to do anything, and I'm fairly certain that the reports I write that are peppered with fun phrases such as "catastrophic risk of data corruption", "criminally negligent", etc... are printed out only so that they can be used as a convenient place to scribble some notes before being thrown in the paper recycling bin.

Remember the "HealthCare.gov" fiasco in 2013? [1] Something like 1% of the interested users managed to get through to the site, which cost $200M to develop. I remember the Obama got a bunch of top guys from various large IT firms to come help out, and the guy from Google had an amazing talk a couple of months later about what he found.

The takeaway message for me was that the Google guy's opinion was that the root cause of the failure was simply that: "Nobody was responsible for the overall outcome". That is, the work was siloed, and every group, contractor, or vendor was responsible only for their own individual "stove-pipe". Individually each component was all "green lights", but in aggregate it was terrible.

I see this a lot with over-engineered "n-tier" applications. A hundred brand new servers that are slow as molasses with just ten UAT users, let alone production load. The excuses are unbelievable, and nobody pays attention to the simple unalterable fact that this is TEN SERVERS PER USER and it's STILL SLOW!

People ignore the latency costs of firewalls, as one example. Nobody knows about VMware's "latency sensitivity tuning" option, which is a turbo button for load balancers and service bus VMs. I've seen many environments where ACPI deep-sleep states are left on, and hence 80% of the CPU cores are off and the other 20% are running at 1 GHz! Then they buy more servers, reducing the average load further and simply end up with even more CPU cores powered off permanently.

It would be hilarious if it weren't your money they were wasting...

[1] https://en.wikipedia.org/wiki/HealthCare.gov#Issues_during_l...


Jonathan Blow gave a really interesting talk about this topic:

https://www.youtube.com/watch?v=pW-SOdj4Kkk

His point is basically that there have been times in history when the people who were the creative force behind a technology died off without transferring that knowledge to someone else, leaving everyone running on inertia for a while before things really started to regress, and that there are signs we may be going through that kind of moment right now.

I can't verify these claims, but it's an interesting thing to think about.


This is an interesting talk, thank you. What frightens me, is that the same process could be happening in other fields, for example, medicine. I really hope we won't forget how to create antibiotics one day.


I have a feeling, however, that this is in fact not broken but working exactly as intended: a corporate dark pattern to gently "discourage" problem customers from contacting them.


I feel like the entire implementation of AWS is designed to sell premium support. There is so much missing documentation, and so many arbitrary details you have to know to make it work in general that you almost have to have a way to ask for help in order to make it work.


This usually happens with ad blockers. They somehow mess up a page, and then you get angry customers saying the page doesn't work for them.

We need a solution to this mess. So far I've seen popups (of all things) letting users know they should disable the ad blocking, but that's not a solution. Ideally websites should not break when ad blockers are enabled, but I've seen sites where the core product depends on ad blocking being disabled. Strange/chaotic times we live in.


"...how often I run into things that are completely broken..."

That's because the shotgun approach (sic 40 developers on a single problem; I don't care how they dole out the workload) works well for most low-stakes, non-safety-critical software.

So a reactivation portal for your Amazon seller account is very low stakes. But Boeing treating the 737 MAX the same way would be (and was) a very bad idea.

Because that low-stakes approach is extremely bug prone.


I think it's also a problem with the culture of a lot of software practices. There's a tendency to navel-gaze around topics like TDD and code review to make sure you're doing Software Development(tm) effectively, without much attention to the actual product or user experience. In other words, code quality over product quality.


He has a nice follow-up which gets into the reasons why:

https://tonsky.me/blog/good-times-weak-men/

Another take: rewrites and rehashes tend to be bad because they are not exciting for programmers. Everything you're about to write is predictable, nothing looks clearly better, and it just feels forced. First versions of anything are exciting, the possibilities are endless, and even if the choices along the path are suboptimal, people are willing to make them work.


He hints at Electron in the end, but I think the real blame lies on React which has become standard in the past five years.

Nobody has any fucking idea what’s going on in their react projects. I work with incredibly bright people and not a single one can explain accurately what happens when you press a button. On the way to solving UI consistency it actually made it impossible for anyone to reason about what’s happening on the screen, and bugs like the ones shown simply pop up in random places, due to the complete lack of visibility into the system. No, the debug tooling is not enough. I’m really looking forward to whatever next thing becomes popular and replaces this shit show.


>I’m really looking forward to whatever next thing becomes popular and replaces this shit show.

I'm with you, but motivation to really learn a system tanks when there's something else on the horizon. And what happens when new-thing appears really great for the first 1-2 years, but goes downhill and we're back to asking for its replacement only 5 years after its release? That tells me we're still chasing 'new', but instead of a positive 'new', it's a negative one.

This was also reinforced constantly by people claiming you'll be unemployable if you aren't riding the 'new' wave or doing X amount of things in your spare time.

It's a natural consequence of an industry that moves quickly. If we want a more stable bedrock, we MUST slow down.


I completely agree, here. React has replaced the DOM, and it's pretty fast, pretty efficient when you understand its limitations... but when you start rendering to the canvas or creating SVG animation from within react code, everything is utterly destroyed. Performance is 1/1000 of what the platform provides. I have completely stopped using frameworks in my day-to-day, and moved my company to a simple pattern for updatable, optionally stateful DOM elements. Definitely some headaches, some verbosity, and so forth. But zero tool chain and much better performance, and the performance will improve, month-by-month, forever.


It seems to me that using your react components to render SVG animations, or to canvas, is just inviting disaster.


Well yeah. But I've seen it done; the attitude being "this is fine, React is fast, it works on my Mac..."


Can you expand? You actually convinced other folks to stop using React and go back to writing DOM-manipulating VanillaJS?


No one's going back to the messy spaghetti-generating "just-jQuery" pattern of yore.

I devised a way of using plain closures to create DOM nodes and express how the nodes and their descendants should change when the model changes. So a component is a function that creates a node and returns a function not unlike a ReactComponent#render() method:

props => newNodeState

When called, that function returns an object describing the new state of the node. Roughly:

{ node, className: 'foo', childNodes: [ childComponent(props) ] }

So, it's organized exactly like a react app. A simple reconciliation function (~100 lines) applies a state description to the topmost node and then iterates recursively through childNodes. A DOM node is touched if and only if its previous state object differs from its current state object - no fancy tree diffing. And we don't have to fake events or any of that; everything is just the native platform.
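
A rough sketch of the shape of it, purely illustrative (the names here are made up and the real code handles more cases):

    // Illustrative sketch only.
    // A "component" is a closure that creates its node once and returns an
    // update function of the form props => newNodeState.
    function counterComponent() {
      const node = document.createElement('div');
      return props => ({
        node,
        className: props.count > 10 ? 'counter hot' : 'counter',
        textContent: `Count: ${props.count}`,
        childNodes: [],
      });
    }

    // A tiny reconcile step: apply only the properties the state object names,
    // then recurse into childNodes. No exhaustive tree diff.
    function reconcile(state) {
      const { node, childNodes = [], ...props } = state;
      for (const [key, value] of Object.entries(props)) {
        if (node[key] !== value) node[key] = value; // touch the DOM only on change
      }
      childNodes.forEach(childState => {
        reconcile(childState);
        if (childState.node.parentNode !== node) node.appendChild(childState.node);
      });
      return node;
    }

    // Usage: create once, then call update + reconcile whenever the model changes.
    const update = counterComponent();
    document.body.appendChild(reconcile(update({ count: 3 })));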

I implemented an in-browser code editor this way. Syntax highlighting, bracket matching, soft wrap, sophisticated selection, copy/cut/paste, basic linting, code hints, all of it... It edits huge files with no hint of delay as you type, select, &c. It was a thorough test of the approach.

Also, when we animate something, we can hook right in to the way reconciliation works and pass control of the update to the component itself, to execute its own transition states and then pass control back to the reconcile function... This has made some really beautiful things possible. Fine grained control over everything - timing, order, &c. - but only when you want it.

Sorry for the wall of text.


I am sorry, but I don't fully understand. To me it sounds like you are describing exactly tree diffing when you say that the next node is only touched if its state object changed.

I have been through this struggle too. Of wanting to get rid of bloated tools and tools I don't understand, and the best I've found for this is Hyperapp. I've read the source code a few times (was thinking about patching it to work well with web components), so I feel it falls into a category of tools I can use. But I'm genuinely interested in understanding what you've done if it offers an alternative (even if more clunky).


>>> it sounds like you are describing exactly tree diffing

The object returned by the function expresses a tiny subset of the properties of a DOM node. Often just {className, childNodes: [...]}. Only those explicit attributes are checked for update or otherwise dealt with by my code. My code has no idea that a DOM node has a thousand properties. By contrast, a ReactComponent is more complex from a JS POV than a native DOM node.

In other words, if my code returns: {className: 'foo'} at time t0, and then returns {} at time t1, the className in the DOM will be 'foo' at t0 and at t1. That is not at all how exhaustive tree diffs work, and not at all how react works.

With 5,000 nodes, you might have 8K-15K property comparisons. Per-render CPU load thus grows linearly and slowly with each new node. I can re-render a thousand nodes in 2-5 milliseconds with no framework churn or build steps or any of that. But more importantly, we have the ability to step into "straight-to-canvas" mode (or whatever else) without rupturing any abstractions and without awkward hacks.

This is unidirectional data flow plus component-based design/organization while letting the DOM handle the DOM: no fake elements, no fake events -- nothing but utterly fast strict primitive value comparisons on shallow object properties.

EDIT: Earlier I said that a node changes if and only if its state description changed; that is not strictly true. "if and only if" should just be "only if".


This makes a lot of sense. It's essentially giving up some "niceness" that React gives to make it faster and closer to the metal. That sounds like a critique, but that's what this whole thread is about, and one way to approach something I've also given a lot of thought.

To do this, I imagine you will have to do certain things manually. I guess you can't just have functions that return a vdom, because, as you say, the absence of a property doesn't mean the library will delete it for you. So do you keep the previous vdom? Patch it manually and then send it off to update the elements? ... I guess it's a minor detail. Doesn't matter.

Interesting approach, thanks for sharing! I will definitely spend some time looking into it. Encouraging that it seems to be working out for you :)


To answer your technical question: You can approach it one of two ways (I've done both). The first you hinted at. You can keep the last state object handy for the next incoming update and compare (e.g.) stateA.className against stateB.className, which is extremely fast. But you have an extra object in memory for every single node, which is a consideration. You can also just use the node itself and compare state.className to node.className. Turns out this is ~90-100% as fast ~95% of the time, and sips memory.

If you're thinking, "wait, compare it to the DOM node? That will be slow!" - notice that we're not querying the DOM. We're checking a property on an object to which we have a reference in hand. I can run comparisons against node.className (and other properties) millions of times per second. Recall that my component update functions return an object of roughly the form:

{ node, className: 'foo', childNodes: [...], ... }

That first property is the DOM node reference, so there's no difficulty in running the updates this way. Things are slower when dealing with props that need to be handled via getAttribute() and setAttribute(), but those cases are <10%, and can be optimized away by caching the comparison value to avoid getAttribute(). There are complications with numerical values which get rounded by the DOM and fool your code into doing DOM updates that don't need to happen, but it's all handle-able.

Here's a quick gist: https://gist.github.com/jeffmcmahan/8d10c579df82d32b13e2f449...


Maybe React has the advantage as the project grows? From what I understand it batches updates from all the different components on the page, avoiding unnecessary reflows that might easily creep in when you do things the old-fashioned way.


It makes it even slower, and you have to manually optimize component renders. Sure, you get fewer reflows due to DOM diffing, but much higher CPU time.


I think my favourite fact(oid) to point out here would be that the React model is essentially the same thing as the good ol' Windows GUI model. The good ol' 1980s Windows, though perhaps slightly more convenient for developers. See [0].

I think it's good to keep that in mind as a reference point.

--

[0] https://www.bitquabit.com/post/the-more-things-change/


If webdev is going to go through all the iterations of GUI development... oh boy, there are decades of frameworks ahead.


It's just the underlying model that is similar, but React is pretty good at abstracting all that (unlike Win32).

When it comes to developer experience, I'd say that React and company are ahead of most desktop UI technologies, and have inspired others (Flutter, SwiftUI).


So where's the RAD React tooling? Is that a thing yet?


Apparently there's React Studio, BuilderX, and tools like Sketch2React and Figma to React. Ionic Studio will probably support React in the near future (maybe it already does).


What is better? jQuery? It comes with its own can of worms, and React's designers had solid reasoning to migrate away from immediate DOM modification. In general, UI is hard. Nice features like compositing, variable-width fonts, reflow, etc. come with underlying mechanisms that are pretty complicated, and once something behaves differently from expectations it might be hard to understand why.


jQuery: 88KB, standard everywhere, one entity responsible for all of it, people know what it is and what it does, if it breaks you know what went wrong and who to blame.

Literally anything built with NPM: megabytes? tens of megabytes? in size, totally inscrutable, code being pulled in from hundreds of megabytes of code in tens of thousands of packages from hundreds or thousands of people of unknown (and unknowable) competence and trustworthiness; if it breaks, not only do you not know who to blame, you probably have literally no idea what went wrong.

Yeah, jQuery was probably better.


It Depends, as always. The problem React was originally solving was that DOM updates cause re-rendering, which can be slow; jQuery (usually) works directly on the DOM, so applications heavy in updates don't perform well.
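
A simplified illustration of the kind of layout thrashing that direct DOM manipulation invites (the selectors and numbers are made up):

    // Simplified illustration of why interleaved DOM reads/writes hurt:
    // each write invalidates layout, and the next read forces a synchronous reflow.
    const items = document.querySelectorAll('.item');

    // Slow: read-write-read-write in a loop -> one forced layout per iteration.
    items.forEach(el => {
      const h = el.offsetHeight;            // read (forces layout if dirty)
      el.style.height = h + 10 + 'px';      // write (dirties layout again)
    });

    // Faster: batch all reads, then all writes -> layout is recalculated once.
    const heights = Array.from(items, el => el.offsetHeight);  // reads
    items.forEach((el, i) => {
      el.style.height = heights[i] + 10 + 'px';                // writes
    });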

So initially an equivalent React and jQuery app would have React look a lot faster, due to smart / batched DOM updates. However, because React is so fast it made people create apps differently.

As always in software development, an application will grow to fill up available performance / memory. If people were to develop on intentionally constricted computers they would do things differently.

(IIRC, at Facebook they'll throttle the internet on some days to 3g speeds to force this exact thing. Tangentially related, at Netflix (iirc) they have Chaos Monkey which randomly shuts down servers and causes problems, so errors are a day to day thing instead of an exception they've not foreseen).


That's a problem with the npm ecosystem.

React is just so, so much nicer to work with. It's easy to be dismissive if you've never had to develop UIs with jQuery and never experienced for yourself the transition to React, which is a million times better in terms of developer experience.

I feel like people that don't build UIs themselves think of them too much in a completely functional way as in "it's just buttons and form inputs that do X", and forget about the massive complexity, edge cases, aesthetic requirements, accessibility, rendering on different viewports, huge statefulness, and so on.


Old is better is just not true here. React is a dream. Synthetic eventing, batched updates, and DOM node re-use are so good. I rolled my own DOM renderer recently and remembered a lot of problems from the past that I would not like to re-visit.


Yes, you're absolutely right: React itself is great. But React is part of the NPM ecosystem; try using one without the other.

And then if you're still feeling cocky try finding someone else who uses one without the other.


Write your own framework-like code with just jQuery and watch it turn into a pile of mush. React is many things, but it is absolutely better than jQuery or Backbone. People always mis-use new technology; that isn't React's fault.


My whole argument is that _it is_. I don’t know why we are comparing to jQuery though, they are not replacements for each other.


To an extent, UI was solved in 1991 by Visual Basic. Yes, complex state management is not the best in a purely event-based programming model. Yes, you didn’t get a powerful document layout engine seamlessly integrated to the UI. Yes, theming your components was more difficult. And so on. But… if the alternative is what we have now? I’m not sure.


One big disadvantage with Visual Basic (and similar visual form designers) is that you can't put the result in version control and diff or merge it in any meaningful way.


UI is hard because you're using a hypertext language with fewer features than were standard in the 60s, then styling on top of that, then a scripting language on top of that.

I've been reading Computer Lib/Dream Machines over the holidays, and I wonder where it all went so wrong.


Free markets hate good software. "Good" meaning secure, stable, and boring.

On both ends.

Software developers hate boring software for pragmatic HR-driven career reasons and because devs are apes and apes are faddish and like the shiny new thing.

And commercial hegemony tends to go to the companies that slap something together with duct tape and bubble gum and rush it out the door.

So you get clusterfucks like Unix winning out against elegantly designed Lisp systems, and clusterfucks like Linux winning out against elegantly designed Unix systems, and clusterfucks like Docker and microservices and whatever other "innovations" "winning out" over elegantly designed Linux package management and normal webservers and whatnot.

At some point someone important will figure out that no software should ever need to be updated for any reason ever, and a software update should carry the same stigma as...I don't know...adultery once carried. Or an oil spill. Or cooking the books. Whatever.

But then also it's important to be realistic. If anyone ever goes back and fixes any of this, well, a whole lot of very smart people are going to go unemployed.

Speaking of which...

https://www-users.cs.york.ac.uk/susan/joke/cpp.htm


Free markets hate unchanging software. Software churn generates activity and revenue, and the basic goal of the game is to be the one controlling the change. Change is good when you have your hands on the knobs and levers, bad when someone else does. Organizations try to steer their users away from having dependencies on changes that they don't control. "You're still using some of XYZ Corp's tools along with ABC's suite? In the upcoming release, ABC will help you drop that XYZ stuff ..."


That brings to mind one common computer-scientist fallacy: that elegance is an end in itself. It may share some properties with what makes software practical, but unfortunately it is not practical by itself.

Recursive solutions are more elegant, but you still use an explicit stack and a while loop to avoid smashing the call stack.
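
For example, a quick JavaScript sketch of the same traversal both ways (purely illustrative):

    // The elegant version: recursive sum over a tree; deep trees can blow the call stack.
    function sumTreeRecursive(node) {
      return node.value + node.children.reduce((acc, c) => acc + sumTreeRecursive(c), 0);
    }

    // The pragmatic version: same traversal with an explicit stack and a while loop.
    function sumTreeIterative(root) {
      let total = 0;
      const stack = [root];
      while (stack.length > 0) {
        const node = stack.pop();
        total += node.value;
        for (const child of node.children) stack.push(child);
      }
      return total;
    }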


Scheme is properly tail-recursive and has been around since 1975. Most (all?) Common Lisp implementations have proper tail recursion. Clojure has tail call optimization for simple cases and only if you explicitly ask for it, but it gets you most of the way there most of the time.

So there are reasons to prefer more imperative languages and their systems, but stack-smashing isn't one of them.


This, a thousand times. It's amazing how each new layer of abstraction becomes the smallest unit of understanding you can work with. Browser APIs were the foundation for a while, then DOM-manipulation libs like jQuery, and now full-blown view libraries and frameworks like React and Angular.

I wrote a little bit more about my thoughts on the problem here: https://blog.usejournal.com/you-probably-shouldt-be-using-re...


If someone's starting a new website project (that has potential to become quite complex), what would you recommend is the best frontend technology to adapt then?


Flutter is a very good bet IMO. It uses Dart, which was designed from the ground up to be a solid front-end language instead of building on top of JS. The underlying architecture of Flutter is clearly articulated and error messages are informative. It still seems a bit slow and bloated in some aspects, but it is getting better every day and I think their top-down control of the stack is going to let them trim it all the way down.


React is super simple - I could implement the same API from memory, so I don't think it's the root of the problem.

> Nobody

Speak for yourself


I take it you’re thinking of virtual DOM only, which is not the problem, or the component class which hides all of the details.

React is huge, it’s unlikely you’ll implement the synthetic events, lifecycle hooks, bottom up updates, context, hooks with their magical stacks, rendering “optimizations” and all of react-specific warts.

There are simple reimplementations like hyperapp and preact and I completely recommend using those instead. I really meant React the library and ecosystem are at fault, not the general model.


I've never used React, but my guess is that it's pretty simple to use while most people using it don't know what happens behind the scenes

(which is not specific to React but more like an issue for any framework that tries to do everything)


This doesn't seem unique to React projects. Can anyone explain what is happening under the hood in their Angular projects? How about Vue? It seems to be a failing of all major UI frameworks, lots of complexity is abstracted away.


Yes, both appear to be a disaster. Vue.js is a bit better IMO, but I'm generally holding out for the next thing.


... which is https://svelte.dev


It might be the next big thing, but Svelte doesn't solve the problem outlined in the root of this subthread: nobody has any idea what the fuck is going on.

I like Svelte, the simplicity of programming in it is great, and it has several advantages compared to React. But I have no idea how it works, past a point of complexity. Like, yes: I can run the compiler and check out the JS it generates, same as I can do in React. For simple components, sometimes the compiled code even makes sense. But when I introduce repeated state mutations or components that reference each other, I no longer know what's going on at all, and I don't think I'm alone in this.

Svelte might be an improvement in ergonomics (and that's a good and much needed thing!) but it does nothing to answer the obscurity/too-far-up-the-abstraction-stack-itis that GP mentioned. The whole point of that is frameworks/abstraction layers that tell you "you don't need to understand what's going on below here" are . . . maybe not lying, exactly, but also not telling the whole truth about the costs of operating at that level of both tooling abstraction and developer comprehension.


More likely to be web components. Then you can use your web components in Svelte, React, Angular, Vue, etc projects.


Time is money and engineers aren't given time to properly finish developing software before releases.

Add to this the modern way of being able to hotfix or update features and you will set an even lower bar for working software.

The reason an iPod didn't release with a broken music player is that back then forcing users to just update their app/OS was too big an ask. You shipped complete products.

Now a company like Apple even prides itself by releasing phone hardware with missing software features: Deep Fusion released months after the newest iPhone was released.

Software delivery became faster and it is being abused. It is not only being used to ship fixes and complete new features, but it is being used to ship incomplete software that will be fixed later.

As a final side note while I'm whining about Apple: as a consultant in the devops field with an emphasis on CI/CD, the relative difficulty of using macOS in a CI/CD pipeline makes me believe that Apple has a terrible time testing its software. This is pure speculation based on my experience. A pure Apple shop has probably solved many of the problems and hiccups we might run into, but that's why I said "relative difficulty".


Yet somehow, it seems to me that most software - including all the "innovative" hot companies - are mostly rewriting what came before, just in a different tech stack. So how come nobody wants to rewrite the prior art to be faster than it was before?


Rewrites can be really amazing if you incentivize them that way. It's really important to have a solid reason for doing a rewrite, though. But if there are good reasons, the problem of zero (or < x) downtime migrations is an opportunity to do some solid engineering work.

Anecdotally, a lot of rewrites happen for the wrong reasons, usually NIH or churn. The key to a good rewrite is understanding the current system really well; without that, it's very hard to work with, let alone replace.


He seems to make a contradictory point... he complains:

> iOS 11 dropped support for 32-bit apps. That means if the developer isn’t around at the time of the iOS 11 release or isn’t willing to go back and update a once-perfectly-fine app, chances are you won’t be seeing their app ever again.

but then he also says:

> To have a healthy ecosystem you need to go back and revisit. You need to occasionally throw stuff away and replace it with better stuff.

So which is it? If you want to replace stuff with something better, that means the old stuff won't work anymore... or, it will work by placing a translation/emulation layer around it, which he describes as:

> We put virtual machines inside Linux, and then we put Docker inside virtual machines, simply because nobody was able to clean up the mess that most programs, languages and their environment produce. We cover shit with blankets just not to deal with it.

Seems like he wants it both ways.


And yet at the time of its release, iOS 11 was the most buggy version in recent memory. (This record has since been beaten by iOS 13.)

I don't quite know what's going on inside Apple, but it doesn't feel like they're choosing which features to remove in a particularly thoughtful way.

---

Twenty years ago, Apple's flagship platform was called Mac OS (Mac OS ≠ macOS), and it sucked beyond repair. So Apple shifted to a completely different platform, which they dubbed Mac OS X. A slow and clunky virtualization layer was added for running "classic" Mac OS software, but it was built to be temporary, not a normal means of operation.

For anyone invested in the Mac OS platform at the time, this must have really sucked. But what's important is that Apple made the transition once! They realized that a clean break was essential, and they did it, and we've been on OS X ever since. There's a 16-year-old OS X app called Audio Slicer which I still use regularly in High Sierra. It would break if I updated to Catalina, but, therein lies my problem with today's Apple.

If you really need to make a clean break, fine, go ahead! It will be painful, but we'd best get it over with.

But that shouldn't happen more than once every couple decades, and even less as we get collectively more experienced at writing software.


I think that's not quite the point of the article. The idea, in my reading, is that we've built lazily on castles of sand for so long that sometimes we think it makes sense to throw away things we shouldn't, and other times we obsessively wrap/rewrap/paper over things we should throw away. What falls into each category is obviously debatable, but the author seems to be critiquing the methodology we use to make those decisions: debatable or not, people aren't debating them so much as taking the shortest and often laziest path without prioritizing the right things (efficiency, consistency).

Even with our priorities in order, there will still be contentious, hard choices (to deprecate so-and-so or not; to sacrifice a capability for consistency of interface or not), but the author's point is that our priorities are not in order in the first place, so the decisions we make end up being arbitrary at best, and harmful/driven by bad motivations at worst.


It's possible to both improve efficiency and maintain backwards compatibility.


Combining these two is only a non-issue with unlimited resources.

Otherwise it's a tradeoff if you add constraints like cost, effort, time to market, and so on...


Windows does it. And despite that, versions like Windows 7 were pretty fast.


I'd argue that of any software project on the planet, Windows is the closest to having unlimited resources; especially when you consider the number of Windows customers for whom backwards compatibility is the #1 feature on the box.

And speed isn't the only metric that matters; having both the 32-bit and 64-bit versions of DLLs uses a non-trivial (to some people) amount of disk space, bandwidth, complexity, etc.


Surely, Apple and Google have just about as many resources as Microsoft does.

If Android, Mac OS, etc were super slimmed down systems in comparison to Windows, I would understand the argument much better. Instead, it feels like we're in the worst of both worlds.


>Windows does it.

Yeah, didn't say it's impossible. I said it's a tradeoff.

Windows does it and pays for it with slower releases, more engineers, bugs, strange interactions between old and new, several layers of UI and API code for devs to decode and for users to be confused by, less ability to move to new paradigms (why would devs bother if the old ones work), two versions of libs loaded (32/64-bit), and several other ways besides...

E.g. I've stopped using Windows for a decade or so, but I read of the 3 (4?) settings panels it has, the modern, the Vista style, the XP style, and so on, with some options in one, the others in the other (if you click some "advanced" menu, etc).


I have heard a lot of complaints about the costs to the Windows ecosystem caused by having to always maintain backwards compatibility.


Fuck no. A bit faster does not mean fast. It's slow as fuck basically across the board.


The goal is that you throw out things that aren't useful (cost > benefit, or better replacement available and easily usable), not that you have a periodic "throw out everything written before X".


See also: in "Good Times Create Weak Men" [0], the author explains his interpretation of why. I can't summarize it well. It's centered around a Jonathan Blow talk [1], "Preventing the Collapse of Civilization".

[0] https://tonsky.me/blog/good-times-weak-men/

[1] https://www.youtube.com/watch?v=pW-SOdj4Kkk


I watched that talk a while ago. It is great, and it did change my opinion on a few things. Whether you agree with the premise or not, you can still learn something. For me, the takeaway was the importance of sharing knowledge within a team to prevent "knowledge rot". "Generations" in a team turn over much faster than in the general population/civilisation, so that effect is magnified, IMO.


This article really resonates with me. But my biggest complaint is that everything is _so_ buggy! I won't name any names, but I find many major pieces of software from large, well-known companies are just riddled with bugs. I also feel like you almost need to be a programmer to think of workarounds: "hmm, OK, so clearly it's in a bad state. If I had coded this, what boundaries would likely cause a complete refresh?" My wife is often amazed I can find workarounds to bugs that completely stop her in her tracks.

Before we fix performance, bloat, etc, we really need to make software reliable.


I'll gladly name names.

Apple have totally forgotten how to test and assure software against what appear to be even stupid bugs. macOS Catalina has been fraught with issues ranging from the minor to the ridiculous. Clearly nobody even bothered to test whether the Touch Bar "Spaces" mode on the MacBook Pro 16" actually works properly before shipping the thing. Software updates sometimes just stop downloading midway through, the Mail.app just appears over the top of whatever I'm doing seemingly at random and Music.app frequently likes to forget that I'm associated with an iTunes account.

Microsoft are really no better - Windows 10 continues to be slow on modest hardware and obvious and ridiculous bugs continue to persist through feature releases, e.g. the search bar often can't find things that are in the Start menu!

My question is who is testing this stuff?


> My question is who is testing this stuff?

Telemetry.

Companies seem to be increasingly preferring to use invasive telemetry and automated crash reports in lieu of actual in-house testing, and they use that same telemetry to also prioritize work. I have a strong suspicion that this is a significant contributing factor to the absurdities and general user-hostility of modern products.


I'm in complete agreement. Thanks to automated crash report uploading, the software I use is more stable than ever — it's a genuine surprise to me when an application crashes, and I can't remember the last time I had to reboot because my OS froze.

But this means that anything that's not represented in telemetry gets completely ignored. The numbers won't show you how many of your users are pissed off. They won't alert you to the majority of bugs. They won't tell you if you have a bloated web application that's stuffed full of advertising. They won't tell you if your UI is incoherent.

I really do think that large companies are looking at the numbers instead of actually using their software, and the numbers say that everything's fine.


Indeed. The way I've been summing it up recently: A/B testing is how Satan interacts with this world.

It's understandable people want to base their decisions off empirical evidence. But it's important to remember that what you measure (vs. what you don't measure) and how you measure will determine the path you're going as much as the results of these measurements.


Apple is one company I've been willing to name as it's extremely frustrating to have such a sub par experience with such expensive products. I used to be a huge Apple fan, and now I no longer use any Apple products. I've never used Catalina, but iOS is unbelievably buggy now so I'm not surprised.


I think both are true depending on the software. But yes it's unfortunately too easy to run into bugs daily even if one does not use new apps every day.

The reason for unreliability is probably the same reason why things are slow: developers and project managers who don't care about the users and/or who are not incentivized to improve performance and reliability.

If you think that "not caring about the users" is too harsh, consider that users do suffer from e.g. unoptimized web pages or apps that use mobile data in obscene quantities. This has a direct consequence on people's wallets or loss of connectivity which is a huge pain.

As developers we can all try to instill "caring about the users" into our team's priorities.


A related article in a similar spirit from 4 years ago: https://news.ycombinator.com/item?id=8679471

I can comfortably play games, watch 4K videos, but not scroll web pages?

I think this is one of the more important points that the article tries to get across, although it's implicit: while the peak of what's possible with computing has improved, the average hasn't --- and may have gotten worse. This is the point that everyone pointing at language benchmarks, compiler/optimisation, and hardware improvements fail to see. All the "Java/.NET is not slow/bloated" articles exemplify this. They think that, just because it's possible for X to be faster, it always will be, when the reality couldn't be further from that.

Speaking of bloat, it's funny to see the author using Google's apps and Android as an example, when Google has recently outdone itself with a 400MB(!) web page that purports to show off its "best designs of 2019": https://news.ycombinator.com/item?id=21916740


I agree that the peak is pulling away from the average, and most of us want the average performance of applications to lift. We have to throw aside facile "Good Enoughism" and genuinely respect the time of our users.

Where I differ a bit from your take: Languages and platforms that target high performance are providing application developers an elevated performance ceiling that allows them the luxury to use CPU capacity as they see fit. Application developers using high-performance platforms may then elect to make their application high-performance as well, yielding a truly high-performance final product, or they may elect to be spendthrifts with CPU time, yielding something middling on performance. And yes, a truly wasteful developer can indeed make even a high-performance platform yield something low-performance.

What benchmarks and the resulting friendly competitiveness help us avoid is a different and worse scenario. When we select a language or platform with a very low performance ceiling, application developers continuously struggle for performance wins. The high water mark for performance starts out low, as illustrated by how much time is spent in order to accomplish trivial tasks (e.g., displaying "hello world"). Then further CPU capacity is lost as we add functionality, as more cycles are wasted with each additional call to the framework's or platform's libraries. When we select a low-performance platform, we have eliminated even the possibility of yielding a high-performance final product. And that, in my opinion, illustrates the underlying problem: not considering performance at key junctures in your product's definition, such as when selecting platform and framework, has an unshakeable performance impact on your application, thereby pulling the average downward, keeping those peaks as exceptions rather than the rule.


> How is that ok?

Probably because a browser like FF has the goal to load and display arbitrary dynamic content in realtime like a reddit infinite scroll with various 4k videos and ad bullshit, whereas the game has the goal to render a known, tested number of pre-downloaded assets in realtime.

Also on shitty pages the goal is different-- load a bunch of arbitrary adware programs and content that the user doesn't want, and only after that display the thing they want to read.

Also, you can click a link somewhere in your scrolling that opens a new, shitty page where you repeat the same crazy number of net connections, parsing ad bullshit, and incidentally rendering the text that the user wants to read.

If you want to compare fairly, imagine a game character entering a cave and immediately switching to a different character like Spiderman, inheriting all the physics and stuff from that newly loaded game. At that point the bulk of your gameplay is going to be loading new assets, and you're back to the same responsiveness problems as the shitty web.

Edit: clarification


I'm both a web developer and a game developer, and this comparison doesn't ring true at all. Games usually have tons of arbitrary dynamic content to display in realtime. Minecraft will load about 9 million blocks around your character plus handle mobs, pathfinding, lighting, etc. Reddit infinite scroll loads a sequence of text, images, and videos. Multiplayer games have such tight latency and bandwidth targets that game developers routinely do optimizations web developers wouldn't even consider.

As a web developer, sending an 8 KB JSON response is no problem. That's nice and light. In a networked action game, that's absurd. First, (hypothetical network programmer talking here) we're going to use UDP and write our own network layer on top of it to provide reliability and ordering for packets when we need it. We're going to define a compact binary format. Your character's position takes 96 bits in memory (float x, y, z); we'll start by reducing that to 18 bits per component, and we'll drop the z if you haven't jumped. Then we'll delta compress them vs the previous frame. Etc.
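
A rough sketch of that kind of quantization and per-frame delta (the ranges, bit counts, and the bit-writer API are all made up for illustration, not from any particular engine):

    // Map a float in [-4096, 4096) metres onto 18 bits: 2^18 steps, roughly 3 cm resolution.
    const RANGE = 8192;          // world extent in metres (illustrative)
    const STEPS = 1 << 18;       // 18 bits per component

    function quantize(x) {
      const normalized = (x + RANGE / 2) / RANGE;  // 0..1
      return Math.min(STEPS - 1, Math.max(0, Math.round(normalized * (STEPS - 1))));
    }

    function dequantize(q) {
      return (q / (STEPS - 1)) * RANGE - RANGE / 2;
    }

    // Delta-compress vs the previous frame: if the quantized value didn't change,
    // send a single "unchanged" bit instead of 18 bits.
    function encodeComponent(prevQ, currentQ, writer) {
      if (currentQ === prevQ) {
        writer.writeBits(0, 1);          // unchanged flag
      } else {
        writer.writeBits(1, 1);          // changed flag
        writer.writeBits(currentQ, 18);  // new quantized value
      }
    }
    // `writer.writeBits` stands in for whatever bit-packing layer the engine provides.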

Really, what's happening is things are getting optimized as much as they need to be. If your game is running at 10 fps, it's going to get optimized. When it's hitting 60+ fps on all target platforms, developers stop optimizing, even if it could potentially be faster. Same for Reddit; it's fast enough for most users.


> As a web developer, sending an 8 KB JSON response is no problem. That's nice and light. In a networked action game, that's absurd.

It depends on what that 8 KB is doing. If that 8 KB is a chat message, that's way too big. On the other hand, I've never seen an 8 KB game patch.


This doesn't really relate to my point. The blog post asked why is it that we can handle games (fancy 3D simulations, sometimes with hundreds of players and millions of objects) at a smooth 60 fps but not scrolling a web page. The parent comment suggested that it's easier to render games smoothly because you know the content in advance. I'm suggesting that software gets optimized (by necessity) until it works well enough. If some website had to display a million elements, the devs would either optimize it until it could do so, or the project would get scrapped.

When I talk about sending 8 KB in a "networked action game", I'm referring to the update packets sent to and from clients in something like Fortnite or Counter-Strike, not a game patch. I'm not trying to make a competition for who uses the least bandwidth (which a 60 GB game would lose just on the initial download). I'm trying to illustrate that games don't run faster than some website because it's inherently easier to make games run fast, but rather that developers, by necessity, optimize games until they run fast (or in this example, until they reduce network lag enough).

I'm not sure why a chat app would tack on something like 7.5 KB of overhead on a chat message, but I wouldn't be surprised if there's a chat app out there that does so. Users won't notice the extra couple milliseconds (especially so because they don't know exactly when the other person hit send). A 3 character Discord message is close to 1 KB including the headers. The same message over UDP in a game might be under 20 bytes, including the UDP header (games could also use TCP for chat - text chat isn't going to strain anything). So I'd say the overhead of a Discord message is still an order of magnitude or two bigger than it could be. Which is perfectly fine; we can afford 1 KB of overhead on a modern connection. It's optimized as much as it needs to be.


Browsers are fine. It's the websites that are slow.

It's not the fault of Firefox that Reddit's new UI is pathetically slow. It's Reddit's implementation of the UI itself that is total garbage.

And given that people do write fast, complex, real-time games in JavaScript for the browser, gamedev absolutely becomes a valid reference point for the possible performance of any individual page.


Hmm, that leads me to an interesting counter-idea.

Why should Firefox, or any other dynamic software, have the ability to be slow for what it achieves? If compilers should be fast, web engines should be equally fast. The Web should have never been designed such that a slow website (relative to the task) could be achieved. In the same way that you can only express memory-safe code in Rust and type-safe code in Haskell, why not make it only possible to express "fast for what is interactive"?


> The Web should have never been designed such that a slow website (relative to the task) could be achieved

That's already the case, your orders of magnitude are just off. Long-running AJAX or page loads are timed out at a pretty consistent point across browsers. Half-open/closed TCP connections are timed out at a pretty consistent point across operating systems. Busy-looping JS gets you a "page is not responding" block in a similar amount of time; same for nonresponsive native applications on many operating systems.

Their definition of "slow" or "stuck" just tends to be "tens of seconds or minutes", not the threshold of perceived responsiveness you want in a website.

Also, your parenthetical is a pretty tall-née-impossible order:

> a slow website (relative to the task)

How could the "task" be classified? Do you mean "task" as in "clicking a button and having a DOM update"? Or as in "this is a TODO application so it should have responsiveness threshold X"?


>whereas the game has the goal to render a known, tested number of pre-downloaded assets in realtime.

Say hello to shaders.


One thing nobody seems to mention is the environmental cost of inefficient software. All those wasted CPU cycles consume electricity. A single laptop or phone on its own is insignificant, but there are billions of them. Combine that with the energy wasted shovelling unnecessary crap around the internet, and it adds up to a big CO2 problem that nobody talks about.


I hear that argument very frequently and I don’t buy it.

Think about all the gas that is saved because people don’t have to drive to the library, all the plane trips saved by video conferencing, all the photo film, all the sheets of paper in file cabinets, all the letters being sent as emails, all the mail order catalogues, ... you get the idea.

Does anybody know of a comprehensive study on this?


> Think about all the gas that is saved because people don’t have to drive to the library....

You're right that computers have saved huge amounts of energy compared with the things you mention. My point here was that even more could be saved with a bit of thought about efficiency in programming.


It's not either-or. I don't buy the argument that if we didn't shovel the garbage we call "software" today, we wouldn't have equivalent but better software at all. It's a multi-agent problem, and a lot of it is driven by business dysfunction, not even actual complexity or programmer laziness.

In my - perhaps limited - work experience, there's enough slack in the process of software development that I don't buy the "time to market" argument all that much.


If websites and business software were as lean as they could be, most computers could have amazingly weak, low-powered processors.

I'm quite disenchanted with software myself. It takes way too long to open any program, for this JIRA ticket to properly display.

One thing that has improved is boot times; I seem to remember that Windows 7 was quite a bit faster than XP. Maybe someone in upper management wanted it to be as fast as macOS? So speed IS possible, if it is prioritized.


> One thing that has improved was boot times, I seem to remember that Windows 7 was quite a bit faster than XP. Maybe someone in upper management wanted it to be as fast as MacOS? So speed IS possible, if it is prioritized.

I seem to remember boot times being a frequent topic of discussion in the early 2000s, because people turned off their computers.

In a way, this is a great little microcosm of the problem. Just fix habits instead of fixing the software.


It's much worse than that: what is the environmental cost of buying a new phone because Slack runs too slowly on your old one?

The things I'm doing on my phone today are not fundamentally different than what I was doing ten years ago. And yet, I had to buy a new phone.


That's why a trickle-down carbon tax is the right answer.


I've heard this argument often, but I don't buy it.

1. The environmental cost of inefficient software is negligible when compared to Bitcoin mining or other forms of hardware planned obsolescence.

2. By using more efficient software, surely, you can save a lot of CPU cycles, and it can improve the energy efficiency of some specific workloads under some particular scenarios. However, on a general-purpose PC, the desire for performance is unlimited, the CPU cycles saved in one way will only be consumed in other ways, and in the end, the total CPU cycles used remain a constant.

Running programs on a PC is like buying things: when you have a fixed budget but everything is cheaper, often people will just buy more. For example, I only start closing webpages when my browser becomes unacceptably slow, but if you make every webpage use 50% less system resources, I'll simply open 2x more webpages simultaneously. LED lighting is another example: while I think the overall effect is a reduction in energy use, in some cases it actually makes people install more lighting, such as those outdoor billboards.

This is called the Jevons paradox [0].

For PCs, certainly, as I previously stated, in specific workloads under some particular scenarios, I totally agree that there are cases that energy use can be reduced (e.g. faster system update), but I don't think it helps much in the grand scheme of things.

[0] https://en.wikipedia.org/wiki/Jevons_paradox


> it adds up to a big CO2 problem that nobody talks about

If you haven't seen it already, you'd probably be interested in the talk below by Chuck Moore, inventor of Forth.

https://www.infoq.com/presentations/power-144-chip/


> https://www.infoq.com/presentations/power-144-chip/

Fascinating talk. Thanks for the link.


I agree, but apparently the number is not very big - computing is perhaps at most 8% of electricity usage. But it still feels so wasteful, and also wasteful of people's time.


I think it's actually the time that matters the most. The tens of thousands of human lifetimes burned each day on slow software we're forced to use really add up; how exactly, I don't know. It's hard to imagine what humanity would do with the millions of work hours saved.


You are going to have a heart attack if you check the energy consumption of bitcoin.


Then again, you could embed those as space heaters, or cooking machines, since they don't need portability.


But there's no point, because all those heaters aren't going to beat ASIC farms near cheap electricity sources anyway.

(Do individuals really bother mining bitcoin these days anyway?)


I'm not sure in the accounting of the environmental costs of modern life that inefficient software counts for much. Doing totally pointless crap with highly efficient software might be worse.


Sure: If you can optimize the software that runs on many millions of computers then you can have a huge impact. If you run those computers yourself you can even save money.

But the vast majority of software is one-off stuff. It makes no sense to optimize it for performance instead of features, development time, correctness, ease of use, etc.


> But the vast majority of software is one-off stuff.

Is it now? I can't think of any example, except a few tools we use internally in a project. Everything else I use, or I see anyone else using, has a userbase of many thousands to millions, and a lot of that is used in the context of work - which means a good chunk of the userbase is sitting in front of that software day in, day out.


It'd be interesting to know, say, what percentage of software developers work on programs that have less than a million users.

I wouldn't be at all surprised if it's the majority. For every big-ticket software offering there's going to at least be things the user never interacts with, like a build system and a test suite and a bug tracking system and whatever else. And there's just so much software everywhere, most of which we never see. Every small business has its little web site. Who knows how much software there is powering this or that device or process at random factories, laboratories, and offices.

The Debian popularity contest looks like it has a big long tail of relatively unpopular packages [1]. It looks like the App Store has 2 million apps, of which only 2857 (0.14%) have more than a million dollars of annual revenue [2]. These are of course incomplete and flawed and do not directly address the question. I don't really know how to research this in a thorough way.

[1] https://popcon.debian.org/by_inst

[2] https://expandedramblings.com/index.php/itunes-app-store-sta...


Excerpt:

"An Android system with no apps takes up almost 6 GB. Just think for a second about how obscenely HUGE that number is. What’s in there, HD movies? I guess it’s basically code: kernel, drivers. Some string and resources too, sure, but those can’t be big. So, how many drivers do you need for a phone?

Windows 95 was 30MB. Today we have web pages heavier than that!

Windows 10 is 4GB, which is 133 times as big. But is it 133 times as superior? I mean, functionally they are basically the same. Yes, we have Cortana, but I doubt it takes 3970 MB. But whatever Windows 10 is, is Android really 150% of that?"

My favorite line: "Windows 95 was 30MB. Today we have web pages heavier than that!"

If there's a new saying for 2020, it shouldn't be that "hindsight is 2020"... <g>

Also... each web page should come with a non-closable pop-up box that says "Would you like to download a free entire OS with your web page?", and offers the following "choices":

"[Yes] [Yes] [Cancel (Yes, Do It Anyway!)]". <g>


My favorite was "Google’s keyboard app routinely eats 150 MB. Is an app that draws 30 keys on a screen really five times more complex than the whole Windows 95? "


I mean... considering the fact that it contains the probabilities of typing every single word in the English language versus every other, at every stage of typing, including potentially the probabilities of all the ways you might mistype each word while swiping without precise accuracy...

...maybe?

I don't know if that's 150 MB worth of data... but it's certainly a lot.
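
As a toy illustration of the kind of data involved (every number and word here is invented):

    // Toy next-word predictor: the kind of data a keyboard has to ship.
    // All numbers are rough illustrations, not Gboard's actual model.
    const bigrams = {
      'how': { 'are': 0.21, 'do': 0.14, 'is': 0.09 /* ... hundreds more */ },
      'are': { 'you': 0.35, 'we': 0.08, 'they': 0.07 /* ... */ },
      // ... an entry like this for tens of thousands of words
    };

    function suggestNext(previousWord) {
      const candidates = bigrams[previousWord.toLowerCase()] || {};
      return Object.entries(candidates)
        .sort((a, b) => b[1] - a[1])
        .slice(0, 3)
        .map(([word]) => word);
    }

    // Back-of-envelope: 50,000 words x ~200 likely successors x a few bytes each
    // is already tens of megabytes, before swipe paths, per-language models, etc.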


So why is it persistently, perniciously and stubbornly insistent in refusing to spell 'naughty' words like 'duck'?


Probably because it contains extra data about which words are "naughty".


What's the problem again? I just typed "duck" by swiping. If you mean the "i" variant, be informed that swear words are blocked unless a preference is set.


They mean the "f" variant.


I'm sure that's intentional... I think you can disable it?


This in particular is a terrible example. Gboard is full of good features that Windows 95 did not have. Features that require a way deeper understanding of natural language than existed on computers in 1995. Autocorrect, voice typing, swipe typing, handwriting input, translation. It's so far from "30 keys on a screen" that it's not even funny.


I agree in spirit, but really most of this is going to be related to unpacking images of more than 2 colors on a high-density display.


Windows 10 is way richer than Win95 - there were countless big features added since 1995 (all the enterprise and security features, backups, full-disk encryption, ...), plus existing features got way more complex: multi-monitor support, a complex USB stack, GPU support, etc. IIRC, out of the box Win95 didn't even have DirectX! Still, of course, the total sum of changes does not justify the 133x size increase.


Damn, Windows 95 was indeed 13 floppy disks. Amazing to think about.

So what is in these huge downloads? Layers upon layers of virtual machines?


In a way each layer of abstraction could be seen as a virtual machine - a new set of “instructions” implemented via previous layer of instructions.

The analogy holds as long as we don’t cross layers, which is quite often true.

Therefore counting total number of layers is performance-wise quite similar to running this many layers of virtual machines. Of course, you need code for all of these translation layers.


macOS Catalina ships with 2.1 GB of desktop pictures: https://mobile.twitter.com/dmitriid/status/11981966747301109...


My favorite: "What's in there, HD movies?"


When I was in school and first leaning about programming I assumed that code written in C or Java would eventually be ported to hand tuned assembler once enough people were using it. Then I got in to the industry and realized that we just keep adding layer after layer until we end up at the point this article talks about.

I remember once reading that IBM was going to implement an XML parser in assembler and people were like "Why? If speed is needed then you shouldn't use XML anyway." I thought that concern was invalid because these days XML ( or JSON ) is really non-negotiable in many scenarios.

One idea that I've been thinking about lately is some kind of neural-network-enabled compiler and/or optimizer. I have heard that in the JavaScript world they have something called the "tree shaking" algorithm, where they run the test suite, remove dependencies that don't seem to be necessary, and repeat until they are getting test failures. I'm thinking why not train a LSTM to take in http requests and generate the http response? Of course sometimes the request would lead to some SQL, which you could then execute and feed the results back into the LSTM until it output an HTTP response. Then try using a smaller network until something like your registration flow, or a simple content management system was just a bunch of floating point numbers in some matrices saved off to disk.


> I'm thinking why not train a LSTM to take in http requests and generate the http response?

Why? With responses generated according to what? Are you really just suggesting using neural networks in the compiler's optimiser?

> Then try using a smaller network until something like your registration flow, or a simple content management system was just a bunch of floating point numbers in some matrices saved off to disk.

Why? What's the advantage over just building software?


I'm suggesting that you take an existing system and build up a corpus of request/response pairs. Then you use the LSTM to build a prediction model so that given a request it will tell you that the current production system will produce the following SQL statement and this HTTP response. Once the LSTM's output is indistinguishable from your current production system, for all use cases, you replace the production system with the LSTM and a thin layer that can listen on the port, encode/decode the data, and issue SQL queries.
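
A minimal sketch of what I have in mind, assuming PyTorch (the request/response pairs, model, and helper names below are all my own illustration; a real system would obviously need a far larger corpus and model):

    import torch
    import torch.nn as nn

    # Hypothetical corpus captured from the existing production system.
    pairs = [
        ("GET /payment?loan=100000&rate=0.05&years=30", "200 OK payment=536.82"),
        ("GET /payment?loan=200000&rate=0.04&years=15", "200 OK payment=1479.38"),
    ]

    SEP, EOS = "\t", "\n"
    text = "".join(req + SEP + resp + EOS for req, resp in pairs)
    vocab = sorted(set(text))
    stoi = {ch: i for i, ch in enumerate(vocab)}

    def encode(s):
        return torch.tensor([stoi[c] for c in s], dtype=torch.long)

    class CharLSTM(nn.Module):
        def __init__(self, vocab_size, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, 32)
            self.lstm = nn.LSTM(32, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab_size)

        def forward(self, x, state=None):
            h, state = self.lstm(self.embed(x), state)
            return self.head(h), state

    model = CharLSTM(len(vocab))
    opt = torch.optim.Adam(model.parameters(), lr=3e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Train as a next-character predictor over "request<TAB>response<NL>".
    data = encode(text).unsqueeze(0)
    for step in range(500):
        logits, _ = model(data[:, :-1])
        loss = loss_fn(logits.reshape(-1, len(vocab)), data[:, 1:].reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()

    # "Serve" a request: feed it as a prompt, then read characters off the model.
    def respond(request, max_len=60):
        logits, state = model(encode(request + SEP).unsqueeze(0))
        nxt = logits[0, -1].argmax().item()
        out = []
        while len(out) < max_len and vocab[nxt] != EOS:
            out.append(vocab[nxt])
            logits, state = model(torch.tensor([[nxt]]), state)
            nxt = logits[0, -1].argmax().item()
        return "".join(out)

    print(respond(pairs[0][0]))  # with luck, reproduces the memorized response

The thin layer around it would just be the port listening, the encode/decode step, and passing any generated SQL on to the database.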

Why would I want to do this? I'm not 100% sure ... I think it would be super fast once you got it working. I think it would avoid many security bugs. You wouldn't have to read that "oh, Drupal 3.x has 20 new security bugs, better go patch our code". I think when I had this idea I was thinking about it in terms of a parallel system that could catch hacking by noting when actual HTTP responses diverged too much from the predicted response. The main idea being that for a given input the output really is 100% predictable, assuming your app doesn't use random numbers like in a game or something.

To link this idea to the article, I think things like XML parsers could be written this way .... I can't prove it but I suspect that they would be very fast and not come with all the baggage that the article complains about.

I started thinking along these lines after reading stuff like this https://medium.com/@karpathy/software-2-0-a64152b37c35


Since you seem sincere, I'd like to mention that neural networks (as of now) are a complete clusterfuck for a problem with as much structure as you're describing. There are no known ways to impose the relevant constraints/invariants on neural network behavior - and stop them from producing junk - let alone doing something useful. That Karpathy article is pure hype with very little substance (like most commentary on neural networks). I like the vision, but it might take a minimum of twenty to fifty years to get there.

If you consider yourself a world-leading expert on neural networks and have some secret sauce in mind, by all means, good luck... otherwise it sounds like a fool's errand.


Thanks for the feedback. I don't consider myself a world-leading expert on neural networks by any means.

I do want to point out that I'm thinking of doing this on a very limited website, not a general purpose thing that replicates any possible website. When I imagine the complexity of a modest CMS or an online mortgage calculator I think that it is much less complex than translating human languages. The fact that web code has to be so much more precise than human language actually makes the task easier. But to be fair, I'm all talk at this point with no code to show for it. So I will keep these comments in mind, this thread has been helpful for helping me think through some of this stuff.


What if your app has literally any mutable state? Registering accounts, posting comments, etc.

Also I'll bet you that your neural net is > 100x slower than straight line code.


Mutable state in the sense of database writes would be part of the network's output and just passed on to a regular db. Mutable state in the sense of variables that the application code uses while processing a request? Well LSTM networks can track state like that.

For session-based variables? Not sure; either it all becomes stateless and the code has to read everything from storage for each request... or maybe the LSTM is able to model something like an entire user session and remember the stuff that the original app would have put in the session.

That Andrej Karpathy article that I linked to two comments above ... he pointed out, in a different blog post, that regular neural networks can approximate any pure function. Recurrent neural networks like the LSTM can approximate any computer program. It is because they can propagate state from step to step that allows them to do this.

As far as it being 100X slower, well at a certain point I will be willing to take your money :)


The main idea being that for a given input the output really is 100% predictable [..] I think it would be super fast once you got it working.

I imagine it would be fast, then you realise you've made a static content caching layer out of a neural network and replace it with Varnish cache and it would be hyper fast.


I don't think a caching layer would work. One example would be an online mortgage estimator. You input the loan amount, interest rate, length of loan, etc., all as HTTP input parameters. I'm suggesting that the LSTM can eventually figure out that those variables are being used by the application code to go into a formula. That application code and its formula would all be replaced by the LSTM.
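
For concreteness, this is the kind of formula hiding behind those HTTP parameters - the standard amortized-loan payment (the function and parameter names here are just my own illustration):

    def monthly_payment(principal, annual_rate, years):
        # standard fixed-rate amortization formula
        r = annual_rate / 12          # monthly interest rate
        n = years * 12                # number of monthly payments
        return principal * r / (1 - (1 + r) ** -n)

    print(round(monthly_payment(100_000, 0.05, 30), 2))  # 536.82

The LSTM would have to internalize something equivalent to that relationship, not just memoize individual inputs.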

I just don't know how you can achieve that with static cache ... only if somebody else requested that exact mortgage calculation before and it is still in the cache.

Also, my idea of the "given input" from the earlier comment would have to include results of sql queries that would form the entire input to the LSTM.

But honestly I think over trained auto encoders can be used as hash maps. That would be an application more in line with what I think you are saying.


Seems that either the NN memoizes all the inputs and outputs until the function is totally mapped - then functions as a memoized lookup table, or the NN has discerned what the mortgage calculation is, and is doing exactly the calculation your {Python} backend does, but migrated into an NN middleware layer instead, which sounds like it would be slower.

And then you're hoping that the NN would act like a JIT compiler/optimiser and run the same code faster. But if it was possible to process (compile? transpile? JIT compile?) the Python code to run faster, then writing a tool to do that sounds easier than writing an AI which contains such a tool within it.

So there's a handwave step where the AI develops its own innate Python-subset optimiser, without anyone having to know how to write such a thing, which would be awesome indeed .. is that possible?


If I ever get any actual code running I'll try and post it here.


> The 2.0 stack can fail in unintuitive and embarrassing ways, or worse, they can “silently fail”...

I'm not sure I'd like a http webserver to silently fail, or be undebuggable when it comes to security vulnerabilities when given strange inputs.


XML actually is parsed with assembly code, now, using vector instructions that split up bytes, bits 0 of 128 or more bytes in XMM0, bits 1 of the same bytes in XMM1, et al., and doing bitwise operations on the registers to recognize features.

Imagine how bad it would be if not!


That's interesting: would you happen to have any reference implementing/describing such a parser, by any chance?

In the crypto world, this is called "bitslicing":

https://www.bearssl.org/constanttime.html#bitslicing
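
For the curious, here is a toy illustration of the bitslicing idea in plain Python (no real SIMD, just the bit-plane trick): spread bit k of every input byte into plane k, then a handful of bitwise operations finds every '<' in the buffer at once, with no per-byte branching.

    data = b'<a><b foo="1"></b></a>'

    # plane[k] has bit i set iff bit k of data[i] is set
    planes = [0] * 8
    for i, byte in enumerate(data):
        for k in range(8):
            if byte >> k & 1:
                planes[k] |= 1 << i

    def match(value):
        """Bitmask with bit i set iff data[i] == value."""
        all_ones = (1 << len(data)) - 1
        mask = all_ones
        for k in range(8):
            plane = planes[k] if value >> k & 1 else ~planes[k] & all_ones
            mask &= plane
        return mask

    open_angles = match(ord('<'))
    print([i for i in range(len(data)) if open_angles >> i & 1])  # -> [0, 3, 14, 18]

In real parsers the bit planes live in SIMD registers and the Python loops disappear, but the logic is the same.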


It's not NN-based, but you might be interested in https://en.wikipedia.org/wiki/Superoptimization.


An unpopular but effective short-term solution: Developers ought to use five year old hardware for development and testing tasks. FWIW, my work laptop and mobile phone are both 2015 models and I feel they're completely adequate for running all the software I've written.


I remember a story about an electronic musician who preferred to mix his songs with earbuds/headphones instead of with a high-end megabucks studio sound system. His reasoning was basically, “that’s how my fans will listen to my music, so I need to make sure it sounds good to them.”

I can’t remember who it was, but the idea always stuck with me.

Anyway, I agree that we should test our applications with the same hardware (and internet speed) of our average user. Very few people use a computer as good as a software engineer’s. :)


That’s funny; I’m always debating my brother, talking about how earbuds sound much better than even high-end monitors, largely because of the surrounding environment (or lack thereof, when earbuds are in).

When I make music I have to do the opposite; take my earbuds out and listen to the music over my car stereo or via my MacBook Pro speakers, to make sure it doesn’t sound good only through earbuds.


Hah, I have nice studio monitors since forever, but I also make music with Earbuds.

I like them, but I've found that they make me a bit too conscious about the sound, so I have to be extra careful. The isolation emphasizes noise and frequencies that the monitors don't, and that will be masked in the full mix. It used to lead me down a rabbit hole of noise-gating and EQs.


Interesting; I could totally see myself going down the same rabbit hole, or the opposite, thinking that a certain faint sound will be great, but when I play it in my car later it's completely unnoticeable.


> thinking that a certain faint sound will be great, but when I play it in my car later it's completely unnoticeable.

That's true! This happens all the time with super expensive recordings too, and I love it. IMO discovering new sounds in the music you're used to only adds to the experience.


Unfortunately this isn't a perfect solution, as there is no universal device or average user. Especially with music, mixing songs for earbuds/headphones will produce a song that sounds worse on another pair. In the same way, we've seen software written by developers primarily using x86 work less well when compiled for ARM. Plenty of open source projects are developed by people with less than adequate hardware, yet they're still bloated and slow.

I think it's very sensible (if a little unrealistic) to test applications on as many pieces of intended hardware as possible, and this somewhat reminds me of this story about snow clearance[0], where the solution was not only the introduction of a new perspective but also empirical measurements.

[0]: http://info.gritit.com/blog/what-is-gender-balanced-snow-cle...


No one said you have to test on everything. Just adjust your target and testing hardware to something midrange, maybe even budget optioned. As above, test on some mid-range or even budget-level 2015 hardware, laptop or phone, and if it's smooth on that, it'll be smooth for people running 2020 hardware. You can never be sure about the reverse.

That being said, there's a lot of crap software out there that's slow even on 2020 hardware.


I think testing with multiple hardware is a good idea. For audio, I'd say that editing audio on low grade audio equipment might backfire. I've always heard that professionals listen through multiple sources to make sure it sounds good on everything but idk how true that is these days.

I've listened to quite a few people on YouTube and some musicians who clearly did their editing with a certain pair of headphones/earbuds. It comes through immediately on my speakers because high pitched whining noises made it through the edit (making it nearly unbearable to listen to) and/or they boosted the bass to infinity. (Usually indicates they never listened to their project on speakers because the bass heavy music between cuts becomes disgustingly loud whereas on headphones it's not that bad)

It makes listening to their content impossible on decent speakers but fine on okay headphones.


I was so proud in 2014 that our development team managed to pull off 60 FPS on the iPhone 4. Built our own UI framework (before Unity rebuilt theirs), pulled images from the network, assembled and updated sprite atlases at runtime and even created our own text rendering - all of it, in order to get 60 frames per second on a 4-year-old phone. And most likely, it still does, in 2020.

Since then, I've never worked in a team that was so dedicated to performance. Maybe because that was a team working on a product with over 100 million installs that had all major features already developed - everybody else is too busy trying to figure out market fit.


So the official toolkit wasn't capable of 60fps?


Official toolkit at the time was only immediate mode, calling all gui logic every frame, so, no. But even with the unity UI, building atlases on the fly isn't something that you can do out of the box.


I have an underpowered chromebook for testing performance. Some sites actually crash on it. Much of the web is frustrating on it.


Good point, and I do that periodically.

But I believe that's not the main problem here - or at least not as much as the business being very impatient and never respecting the programmers' objections that software quality suffers under far too tight deadlines.


Also limit your network bandwidth to see how your app behaves when the user is on crappy 3g or what not.


Try developing a react application built for IE11. That thing will fly if you ever run it in Chrome or FF (or even Edge)


Ha! I use a 2011 MacBook Air with 2GB of RAM. It was the cheapest model when I bought it and the battery inflated twice, already!

I still manage to crank some terrible solutions, though. :P


I’m tinkering with the Atomic PI SBC — Quad Core 1.5ish GHz Intel Atom with 2 Gigabytes of RAM. You can load up an OS and launch Firefox with maybe 3 tabs until you run out of memory and have to page out (which will basically hang the device due to eMMC/flash I/O speeds).

These kind of specs are incredible for a high-end desktop machine from 15 years ago that would be good for almost anything — gaming, browsing, etc... What happened to software (Linux & Friends, Firefox, etc) that renders hardware obsolete so quickly? Is it just purposeful optimization that uses more RAM to benefit performance elsewhere, or is it truly this disenchantment?


Programming is now a bureaucracy.

[1] "In any bureaucracy, the people devoted to the benefit of the bureaucracy itself always get in control and those dedicated to the goals the bureaucracy is supposed to accomplish have less and less influence, and sometimes are eliminated entirely"

[1] https://en.wikipedia.org/wiki/Jerry_Pournelle#Pournelle%27s_...


That's a good observation on many levels.

For instance, I started noticing that a lot of the code I've written or worked with across many projects has a particular flavor to it. Pieces that take some data, repackage it, and pass it on to different code that does essentially the same thing - all arranged in a structure that's supposed to reflect some shared, abstract understanding of the problem. I've started to call this type of code "bureaucracy", and I see it as something to be kept in check.


I was in Rome recently, and Google Maps was basically unusable on EDGE (despite pre-downloading the area before the trip). We'd wait a minute (or more) for the timetable of a bus stop and the route of a bus to be shown on the map.

Try planning a route in an unfamiliar area with this slow a UI when you are standing outside, there's no place to sit and rest, and you need to click around on a bunch of stops just to see which buses go through a stop and where they are heading.

So yes, optimization is still important.

We replaced the glorious and easily iterated and expanded google maps app with a photograph of a public transport map, and we could get an answer of how to get from any A to any B within seconds of looking at the map without typing or searching or waiting for anything.

Which also shows that, sometimes, slow software is less than useless.


I was traveling in Europe for a few months and I didn't care to change my phone plan so I was stuck with 2G speeds the entire time. It gave me a profound appreciation for web applications that load the minimal amount of code/resources/whatnot to display. HN really shines here. :)


Is making something fast enough to be usable even considered "optimization" at that point?


I've found OSMAnd~ on F-Droid to be a good offline-only alternative, but it certainly has much worse efficiency problems than GMaps.


Do you mean computationally? Because I don't think that's a fair comparison.

If you mean UX, then fair enough.


I mean UX. Also though I don't think there's a good excuse for OSMAnd~ to be so computationally intensive.


I agree with your general point, but in this particular case bandwidth is muddying the waters. 2g tech has less bandwidth dedicated to it now than it did years ago, so your kbps is lower now than it would have been when using 2g in the past.


Google Maps is just horrible on mobile when you aren't on Wifi. I'd like to wireshark it some day to figure out just how many different web requests it is making.


You don't need wireshark - open dev console and observe that any map operation (drag/zoom/draw and anything else) requires an api request to gmaps.

You can contrast that with mapbox which seems to have been designed upfront with offline mode in mind.


Google probably wants the usage data, so they accept the additional load of handling more requests.


Oops! Then that's a terrible design.


Why were you using EDGE if there's dense 4G coverage in Rome?


Replace Rome with the German countryside and you'll still just have EDGE.

Sites like Hacker News or https://i.reddit.com are still perfectly usable. In contrast to the 'modern' Reddit UI that takes a couple of seconds to load even on my home WiFi.


It would be nice to have a maps/routing service that could just produce a basic HTML page with text directions from a given point A to another given point B.

Bonus points if it also lists the before and after streets at direction changes so people know if they missed a turn.

This could be implemented as a proxy to either of the maps apps (Google, Apple, OpenStreetMap)


> In contrast to the 'modern' Reddit UI that takes a couple of seconds to load even on my home WiFi.

On my phone it takes about 4-5 seconds to load even on Gbit WiFi. It's so bad I have a conspiracy theory that the pulsing Reddit orb is just on a timer and it's not actually loading anything. More likely it's just horrifically unoptimized I suppose.


old.reddit.com is what you need to be using on your laptop if you're using i.reddit.com on your phone.


(not gp) I have T-Mobile service in the USA, and my plan includes free international phone and data, but only at EDGE speeds. I could get a local SIM, but it's rarely worth it if I'm traveling to multiple countries.


https://fi.google.com is great if you travel often.


Should the answer to that question even matter? Even a 56k modem should be more than sufficient to fetch a few measly bus schedules.


Didn't enable it, because 3G is usually fine. But there was a lack of 3G in many places.


I haven't been to Rome in a few years, but aren't the schedules and routes posted on signage so people can still do it the old-fashioned way?


Even quite underpowered phones can boot in ~1-2 seconds if optimized for that. Not everything in the phone will startup in that amount of time (modem, wifi), but it's possible to start to Linux userspace and display fully interactive UI in that amount of time.

Even my e-book reader Linux port boots to UI in ~2s.

It really is just bloat and lack of care.


To be honest, I have never seen any phone or computer that could boot in about 2 seconds. 6-7 seconds is the absolute minimum I've ever seen, regardless of hardware or OS.


You know? Once upon a time I did some freelance sysadmin work for small businesses. At one of them I installed a new file server and a networking gateway with least-cost router functionality for the phones. When I was done I asked what that big tower standing aside, unplugged, did. 'Nothing, that is just trash we haven't discarded yet.' Since the case had a nice design it seemed wasteful to me, and I took it back home. At home I plugged it into some spare screen, and since it had a networking card, into that also. When ready I rocked its big red switch with a satisfying clack into the upwards position and wanted to do something else at my other systems while it booted up (I thought). So I rolled, sitting on my chair, maybe 3 feet, and then heard a loud 'Ta-Daa!'. I looked sideways and barely caught Word 6 and Excel 5 popping up out of the corner of my eye. I was dumbstruck!

I shut it down and repeated that, looking at my wristwatch.

FOUR SECONDS!

I couldn't believe that and tested it maybe five more times. It never took more than 4 seconds from rocking the switch to the WfW 3.11 desktop with Word 6 and Excel 5 in autostart, ready to use! It even got a DHCP address while it was at it.

The thing is, at that time I had some Sun SparcStations, some HP PA-RISC workstations, and assorted x86 PCs, one of them an AMD Athlon XP PR1800 overclocked to PR2100 with 1.5 GB of virtual channel memory SD-RAM, running NetBSD (because it just worked, don't ask). I felt very ahead of 'the curve'.

And then this trash came along and burst my bubble...

I sat there and only thought: 'what for?!'

Now, some old Windows running atop DOS isn't something to really envy, but in this combination of components, BIOS, and drivers it got the job done without any hassle, FAST!

That was eye opening for me.


Edit: WRT the BIOS: it had an MR BIOS from Microid Research which allowed you to use it over the onboard serial ports; you could choose which one, how, and so on. Another bit of functionality which disappeared, and came back later only as an expensive add-on.


Ever tried installing freeDOS on your computer?


These days I spend most of those seconds waiting for the BIOS rather than the kernel and what's on top.


The Commodore 64 boots up in ~2 seconds.


It's not just the problem of large and slow today. There are also dark patterns everywhere, especially in Android. Even local things do not work if there is a network connection but connections to e.g. Google are not allowed. For example, the standard Android "Photos" app does not always show you the newest pictures if the network blocks Google. Also, you can't share something to another app if you have a network connection but Google is blocked. If you switch off the network completely on the phone, everything works again.


This. I know that today we get so much software for free, but I see so many good opportunities spoiled by someone from marketing telling the developer to hamper interoperability or usability to push some other product or feature - when you find software that doesn't do this, it's so refreshing. Remembering my browser of choice. Being able to save a place in Google Maps without letting it have all my location data, always. Local-only functions. "Accept: always / not now". "Click here for extra features"... We could save so many bytes, CPU cycles, and UI missteps...


I'm so weary of this moral panic we've been having. There are so many other factors to be weighed against efficiency when it comes to making software. There are completely legitimate tradeoffs to be made that sacrifice performance. There are also programmers who write bad code on all dimensions - performance included - out of sheer laziness. But those aren't the primary cause of this hardware "waste". Demanding efficiency for efficiency's sake, ignoring all other constraints, is shortsighted and narrow-minded.


Yeah, I really dislike the black and white thinking. The python script example the author gives is a perfect example of what doesn’t need to be made any faster. If you are interested only in execution time, you might as well never write anything in native python.

But on the other hand, a lot of web content does need to be faster. Gmail has somehow gotten so much slower to load over time. And every time I visit a newspaper/magazine website I am aghast at how bloated they are. Does that mean nodejs is inherently bad, no, but it does mean people should try to optimize noticeably terrible performance that actually degrades UX.


Sure. But even that probably has more to do with businesses prioritizing features over quality, not programmers lacking character.

Much of it on the web also has to do with how much browsers can do. The number of CSS properties that can be applied, the ways different elements' sizes can automatically influence the layout of other elements, etc. These traits are what make the web such a powerful and attractive platform for user interfaces, but the complexity of the platform is definitely becoming a real issue that deserves attention.

A couple of points:

- NodeJS is server-only and usually has nothing to do with perceived performance of web apps

- The biggest offender of web performance is ads. They dump piles and piles of crappy JavaScript from dozens of different sources that all include their own copies of common libraries and have no incentive not to slow down the page.

- Beyond ads, the bottleneck is usually not even JavaScript, but layout (as in the paragraph above). Web layout is incredibly flexible and incredibly complex. Computing and rendering it all is slow, but it does serve a purpose. Not that it couldn't be improved.

Bad ads are a tragedy of the commons and I don't know what can be done about them unless Google or Facebook decided to throw their weight around to force them to be better.

I do wonder if a new web standard could be developed for using some constrained subset of the layout vocabulary, that would be cheaper and more straightforward to compute. The current version has to remain for backwards compatibility reasons, but it's trying to serve a bunch of different types of cases at once, and therefore doesn't do a great job at any one of them.


The thing is, people are not looking at the less visible stuff hidden out of scope. A mere fraction of a second that could be shaved off the wait after clicking something? Barely noticeable. Perfectly usable. Hardly worth the time investment to improve... after all, that dev is expensive - and then that button or whatever gets used by countless people. That fraction of a second ends up actually being numerous lifetimes.
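
Rough numbers, purely made up for scale:

    saved_per_click_s = 0.3           # the "barely noticeable" fraction of a second
    clicks_per_user_per_day = 50
    users = 1_000_000
    wasted = saved_per_click_s * clicks_per_user_per_day * users * 365
    print(wasted / (3600 * 24 * 365))  # ~174 person-years of waiting, every year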

And as it's surrounded by numerous such "slightly inefficient but efficient from some other perspective" interactions, overall efficiency dies a death of a thousand cuts.

When you actively start paying attention to it and comparing it to little examples of what could be, you start noticing how utterly garbage everything is. Even at the base level, in things that have a massive userbase and that are used not just once in a while but constantly, it's disgusting trash. I look around at the company I work at. They're all using it and probably not noticing at all, but the fucking file explorer in Windows is slow as fuck, as are countless other elements and interactions of it. The company's website is so simple in its content and functionality, and was made by a webdev agency, but it's a bloated mess that takes a while to get to... a logo that shows whilst it continues to load. The software my coworker wrote is small scale, and he said I was wasting a lot of my time making some small action faster, not recognising that it's been used many thousands of times a day, every day, for more than 10 years now.

There's way too little moral panic


I think people who recognise this have generally been developing since the days when computing resources were scarce (or on platforms now where they need to be efficient). It was a necessity to implement efficient techniques instead of a nice-to-have. Nowadays those restrictions have been lifted for the most part.

In this day of "Agile" development, as long as something's working during UAT, that's all that's needed for sales and consumers.

Webdev, IME, is an example where the ecosystem has facilitated bloated websites. I've worked with developers who throw in any library they can just for basic things, because they don't feel a need to optimise. The meme of using jQuery for everything when it came out has just been replaced by other frameworks. I find it often depends on developers who really want to work on something and take pride in it vs those who just need something on their CV or got hired by following a few tutorials on the web without understanding what they wrote (which, to me, signifies a hiring problem in the company). During code reviews, I encourage leads to keep calling out hacky code to the point where the developer will just start writing it properly the first time round. As developers, I feel we should be careful not to create selfish software which hogs memory from other software or requires huge data downloads for mobile users (whenever doable). Possibly a naive ideal, but if it's a byproduct of developing fast software for my end users, I think that's a win-win.


Time is as scarce as it ever was.


Indeed, and pragmatism should be applied, but I mean in the context of not being rushed. I don't mind my team watching YouTube/browsing the web during work if things are going well but I wouldn't accept it if it's done after submission of suboptimal code. If there's time to watch YT, there's time to improve your code (unless it's clearly too much of a refactor).


> Jonathan Blow has a language he alone develops for his game that can compile 500k lines per second on his laptop. That’s cold compile, no intermediate caching, no incremental builds. You don’t have to be a genius to write fast programs.

The guy is most definitely at least a genius.


Depends on your definition of genius, but I definitely agree that these folks don't quite hold up the sentiment that "anyone can do it." I would put Martin Thompson, Raph Levien, and Jonathan Blow at least in the top 0.1% of programmers.

They are great examples for his overall point though. It probably would've been better just to leave out the genius bit and talk about them as folks proving it can be done.


You should add the late Terry Davis to your list, or if that area interests you, read up on his work: https://en.wikipedia.org/wiki/TempleOS


Yeah, there are many others I'd add to a more general list - Carmack, Bellard, Wirth, the folks from Our Machinery, etc. I was just referencing the people specifically mentioned in the original post.


It really all depends on the language grammar. TurboPascal had comparable speeds on i386 machines back in the early 90s.


The linked Medium article (when talking about npm) was what I expected, but gives wonderful examples of just how bad it's gotten.

For those who don't want to dig, one example was Ember.js, which has a dependency called "glimmer" which makes up ~95% of the code size. The author looked into glimmer and found that it had the entirety of the Encyclopaedia Britannica's "G" section to include a definition of "glimmer" in its help menu.

And that wasn't even the most ridiculous example.

It's shameful that it's gotten this bad; but when you look at what people are expected of in the current climate it makes sense that this would happen.

   * Horrendously short deadlines for enterprise CRUD (and the "frameworks" that support it)
   * REUSE REUSE REUSE THIS REFUSE (few seem to know how to read source code before installing the dependency)
   * "Not paying me enough for that shit"
   * "We can't rewrite, we put 20 years into this codebase"
   * Even our languages are shit; JS (despite its usefulness) has undefined behavior as a feature.
   * [among many others I'm sure you could think of]
It's toxic: corps incentivize lazy, quick work that won't hold up in the long run, but they are too stupid to realize that. Though I blame even more the sycophant who just silently nods and does the work without a sliver of conscience telling them that "this is wrong".

Civility has a lesser place in efficiency than what we have now; you can't make a decent product without bashing a few skulls (figuratively ofc).

Lastly, don't be afraid to reinvent the wheel if your wheel is better than mine.


Unless I have misunderstood, I think the ember.js/Brittanica story is an amusing but untrue anecdote from a satire piece ( https://medium.com/s/silicon-satire/i-peeked-into-my-node-mo... ).


Not to burst your bubble but the linked Medium article you’re referring to is satire. Glimmer doesn’t actually pull in Britannica - it never has.


The sad state of affairs is that satire is dead and this guy fell for the bait. It doesn't even sound that ridiculous anymore.


As a dinosaur who has been programming for >30 years, it shocks me how bloated many modern programs are. The installers for my own products are around 20MB and most of that is Qt libraries. But 100s of MB seems to be standard now. The Airbnb app on my iPhone is 210 MB. I understand that if you are shipping a 3d game with maps, textures, sound etc, but not for a mobile phone app.


I'm a bit less of a dinosaur, but the setup package for my product is still under 3MB. The package carries both 32- and 64-bit versions and the installer itself is 250KB, no dependencies outside of stock Windows supply. This is for an enterprise-grade file backup program with a reasonably long list of features.

That is, you can still write small and light software. It takes the same amount of time as it always did, not much changed.

The troubling part is that the proliferation of bloated software is steadily establishing a new status quo - the software is now _expected_ to be big and heavy in order to be proper. Doubly so for the enterprise-y software. Bloat is becoming a sign of maturity and robustness.


The sound driver on my ZBook at work needs almost 500 MB of RAM. No idea what is in there... maybe these days HP includes not just keyloggers in its drivers, but screen grabbers too.


Probably 1 MB for the actual driver and 499 MB for the Electron GUI to interface with it.


>And then there’s bloat. Web apps could open up to 10 times faster if you just simply blocked all ads. Google begs everyone to stop shooting themselves in the foot with the AMP initiative—a technology solution to a problem that doesn’t need any technology, just a little bit of common sense. If you remove bloat, the web becomes crazy fast. How smart do you have to be to understand that?

If you "simply blocked all ads" the people making the pages wouldn't have the income which they maintain the pages with.

How smart do you have to be to understand that?

>We haven’t seen new OS kernels in what, 25 years? It’s just too complex to simply rewrite by now. Browsers are so full of edge cases and historical precedents by now that nobody dares to write layout engine from scratch.

Well, there's Fuchsia, speaking of new kernels. And Mozilla is doing exactly that, having written a new layout engine from scratch (plus a language to write it in).

(I agree with the general sentiment of the post, but the examples are often shoddy)


> If you "simply blocked all ads" the people making the pages wouldn't have the income which they maintain the pages with.

Once upon a time, adverts were just cross-linked GIF images. No iframes, no Flash, no JavaScript, no cookies, just images. Easy for rendering engines to show, no need to call script interpreters or to add boatloads of rubbish into the DOM. No real performance hit beyond downloading the image initially.

I would quite happily return to that world.


What if the site has multiple revenue avenues, of which ads are just a part? What if the site's actually a part of the OS that I've already paid for, but which inexplicably has to be connected to the Internet and serving me ads for Candy Crush?


>What if the site has multiple revenue avenues, of which ads are just a part?

Well, in most cases this is not the case. Subscriptions don't work except for outlets with passionate niche audiences, or for "high class" outlets targeting upper incomes - which leaves most mainstream media outlets out. Besides, those outlets where ads are peripheral to the main income streams (niche, fan-supported, paywalled) usually have fewer ads to begin with.

>What if the site's actually a part of the OS that I've already paid for, but which inexplicably has to be connected to the Internet and serving me ads for Candy Crush?

Well, then it doesn't really need ads.


Ads don't have to be so heavy. They load entire frameworks in ads these days.


> If you "simply blocked all ads" the people making the pages wouldn't have the income which they maintain the pages with.

That's how we get articles split over multiple page loads, image slideshows, and deliberately slow pages (because time spent on your site is time not spent on competing websites).

On the other hand we have pages like HN, paid for(?) using other means and built to be perfectly usable and fast. Or some CSEs where revenue comes from affiliates and CPC fees instead of ads, so they try to keep things fast too. Then we have news sites with paywalls and hopefully some day better engineered UI to read those news.

People should just vote with their pockets for a better user experience, IMO.


People should just vote with their pockets for a better user experience, IMO.

How? How can I vote with my pocket for old.reddit.com instead of new reddit UI? How can I vote for another site which is different, when the main draw of Reddit is the number of people on there?

How can I vote for an instant messenger which works the way I want, when the people I talk to aren't using it because the people they talk to aren't using it?

How can I pay for a faster Windows 10 where the start menu works every time I press the start button and updates install in the time it takes to copy the data to SSD and no longer? That option isn't on the market.


I'll vote for the old version - fix the bugs and DON'T add new features. One app, one task.


> If you "simply blocked all ads" the people making the pages wouldn't have the income which they maintain the pages with.

I agree with regards to ads specifically; that's why I don't use an ad blocker.

But the problem is so much greater than ads. Ads aren't what's slowing down the new gmail, or Slack.


Yes, it is true, many programmers don't do such a good job. I do try to make better software, but will not always succeed. Also, computer hardware is becoming too complicated, actually, I think. I generally don't add so many dependencies to a program, though. Some people (including myself) do still write DOS programs sometimes. The web browser is too complicated. I use IRC and NNTP; I think they are much better anyway than Slack and so on (and even this HN, too, I think). I program in C (I use other programming languages as well, but mainly C). Many programs, I think, have too many animations. And a lot of programs, they just write it stupid!! TeX is good and it still works more than thirty years later, and is fast, too. But anyways I don't like WYSIWYG, so that is why I don't use LibreOffice and Microsoft Word and so on.


Energy (or more accurately power) is a scarce resource. If it's not being spent to keep the organization or individual going, then it could be argued that the energy is wasted. As it requires functioning organizations to acquire more energy and so on.

All systems - biological, physical, metaphysical - are built on layers that, once deep enough, are pretty well cemented in. Hindsight is 20/20, and though we know things would be better if the foundations were different, they won't actually change until the energy gain exceeds the energy cost of uprooting everything to make the change.

Just saying this problem isn't exclusive to software, but also laryngeal nerves in giraffes, x86, and Esperanto.


On the other hand, life is extremely frugal with energy. The foundations of biological systems are ridiculously efficient, and all the complex life forms are pretty good too. Despite being designed by a biased random process, they have at least this going on for them: the fitness function favors energy-efficient systems. Unlike our markets.


Plants convert 2% of incident light into usable chemical energy. Muscles turn 30% of consumed chemical energy into motion.

But they don't waste much material.


In addition to the costs of developing good foundational code, I think revenue is becoming a driver for slower-than-necessary software.

An increasing number of providers (for example databases) charge per server, per core, or another hardware-usage metric. The more hardware, the more revenue for them as they make about 90% margin for every new machine. There is a high incentive to get users to need more machines on their "managed cloud".

Vendors could try to improve their software so it requires half as much CPU. But why bother, since this would halve their revenue? It makes more sense to focus on horizontal scalability than on per-core efficiency, so users keep adding machines to their cluster over time.

If software is running at 1% of the maximum possible performance, as the article suggests, then improving that by a single percentage point - to 2% - would cut the hardware bill in half. But I think none of the existing vendors will ever make that move, as it conflicts with their own interests.


> Modern buildings use just enough material to fulfill their function and stay safe under the given conditions. ... Only in software, it’s fine if a program runs at 1% or even 0.01% of the possible performance.

> I can comfortably play games, watch 4K videos, but not scroll web pages? How is that ok?

IMO the comparison should be buildings <-> fine-tuned libraries (i.e. video decoding algorithms), modern applications <-> cities.

Go to any city center in Europe. Urban planning a century ago was much more elegant and elaborate taking into consideration the city as a whole. Nowadays developers and investors often ignore important aspects, such as surrounding buildings, infrastructure, making cities inefficient for the people who actually live there.

Any system which has plenty of resources has to become inefficient. It's just that Moore's law allows for some pretty damn big inefficiency.


If we're talking cities, then I don't see how "Any system which has plenty of resources has to become inefficient" follows.

The problem of modern urban design is that of unrestricted freedom for developers. They build mostly whatever they like, however they like, the city be damned, as long as they make a profit off their construction. What's lacking here is care - personal care about the city, and centralized care in the form of a city government that can tell them they can either submit to the constraints of more holistic planning, or take their business elsewhere.


>Go to any city center in Europe. Urban planning a century ago was much more elegant and elaborate taking into consideration the city as a whole.

Eh, no. Just a castle/wall, shops along the main streets, a market in the middle, and everything expanding chaotically from that.


That's more than a century ago. I was thinking about stuff built like in 1850-1950.


Users don't know about this performance gap, but there is a gap, and that gap is an economic opportunity.

Since current practices are so inefficient, users end up paying for this out of pocket as hardware expenses. Buy a $1000 smartphone to get the same experience as last year's.

A different stack (no need to reinvent the universe) could be branded as brutally efficient, using slim hardware and strict engineering practices to provide a much better experience at a fraction of the price (1/5 seems possible).

I believe nobody is doing that yet since there are two main barriers:

1- Risk of no-market (I think this might be proven unfounded, given the current trend in price hikes)

2- Capital investment necessary to get started (But this also can be solved given the obvious appetite for the next-big-thing money being poured left and right, without anything catching on yet)


>You’ve probably heard this mantra: “Programmer time is more expensive than computer time.” What it means basically is that we’re wasting computers at an unprecedented scale. Would you buy a car if it eats 100 liters per 100 kilometers? How about 1000 liters? With computers, we do that all the time.

The argument is incomplete.

The correct question (to maintain the analogy) is:

"Would you buy a car if it eats 1000 liters per 100 kilometers, but that doesn't affect you at all (you still get to where you want to be fast enough), and the time to manufacture and cost to buy it is much lower than would be possible with more efficient car that used 10 liters per 100 km?"

The answer to which would be yes. Software that takes 1 second instead of 0.2 seconds for a task we do once a day or so doesn't cost us money (and even the environmental impact is small).


It's all fine until - as it often is - we want to do that task more often. Any task worth doing once is likely worth doing more than once; any task worth doing across many people is likely to be worth outsourcing to a business focused on it.

Using a real example: in a certain company that my wife worked in, there was a task that - every couple of days - would be repeatedly done by multiple office workers for several hours. I was visiting that place once, and since I had to wait for my wife, I was asked to give them a hand with that task. Growing frustrated, I very quickly located a hidden option for batch processing they didn't know anything about. It solved the entire task in a couple of minutes. By finding that option, I freed several man-days a month for that company. Time that can be used for other tasks (or even goofing off).

This example sticks with me because these days, a lot of work is done in front of computers, and every inefficiency there removes productivity from the economy. The way I see it, if you're developing software, and there is a possibility that that software will end up being used by someone in their job, you owe it to them to make it efficient; by ignoring efficiency, you'll be robbing such people of their life and mental health, and their employers of potential profit.


A single dropped can on the grass is a small impact. Everyone dropping their litter everywhere, because "a piece of litter doesn't matter" is a large impact.

I can deal with a 1 second delay to run a script. I can deal with a 20 second delay to launch a large program. I resent dealing with hundreds of 1 second lags every day for tasks which didn't used to have any lags ten years ago.


That is a really good analogy.

In the same way that recycling was meant to fix the environment (and garbage now sits in warehouses), millions of objects get mass produced and chucked onto the memory heap, in the hope that garbage collection will fix it.


But in your example, you're only describing one specific task (n=1). The issue is that time accumulates nonlinearly as the number of tasks increases, i.e. the petrol stations must be able to supply enough petrol to all the customers who drive the most inefficient cars.

The culture of "developer time is most important" makes overall system performance someone else's problem, because "my program is fast enough when I measure it off the wall clock". But who's responsible for fixing overall system performance, and how can they fix it? I think a lot of people would just upgrade their RAM, CPU or IO to solve the issue (build more petrol stations), rather than asking vendors to change the programming language, or to be more conservative with RAM.

And because there's costs to switching language stacks, people will stick to writing in the language they are comfortable in, so critical business systems get written in slow languages.


>But in your example, your only describing one specific task (n=1).

That's to maintain parity with the example TFA gives. I'm not saying it's never worth it... Of course you'll optimize often repeated tasks, loops, etc.

Here's what TFA quotes as bad reasoning, because it makes fun of optimization effort in some cases:

"@tveastman: I have a Python program I run every day, it takes 1.5 seconds. I spent six hours re-writing it in rust, now it takes 0.06 seconds. That efficiency improvement means I’ll make my time back in 41 years, 24 days :-)"

Unlike TFA, I agree with the tweeting guy above that the efficiency improvement here wasn't worth it.
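
The arithmetic behind that tweet, for anyone who wants to check it:

    rewrite_cost_s = 6 * 3600           # six hours spent on the Rust rewrite
    daily_saving_s = 1.5 - 0.06         # seconds saved per daily run
    days_to_break_even = rewrite_cost_s / daily_saving_s
    print(days_to_break_even / 365.25)  # ~41.07 years, matching the tweet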


Yeah, agreed. I mean, in that example, if tveastman had written it in a fast language the first time, the rewrite probably wouldn't have been required. (I think that tweet was a bit of a laugh anyway, because he can now write new code in Rust.)

But as many statically typed languages support type inference, the folklore that dynamic languages are "more expressive" is not really accurate.

For example, in Rust:

    let x = "string literal"; // inferred as &str
    let y = 42;               // inferred as i32 by default
    let z: i32 = 100;         // type declared explicitly as a 32-bit int


I work in a support center. We have a web-based ticketing system. Every ~5 minutes, the ticketing system goes down for ~2.5 minutes. Nobody seems to care. It makes me want to chuck my mouse every time.


I agree with the author on every point except:

>That is not engineering. That’s just lazy programming

I don't believe that developers/programmers are all lazy. There are a lot who want to do a good job optimizing their code and making sure it performs well and is as future-proof as possible. I believe that budget limits and pressure from deadlines set by non-technical people force even the good programmers to cut corners in order to deliver.


I believe fully in cutting corners when you know they are being cut. It’s when you are unaware you are cutting corners that you end up with a seriously unrecoverable system.


> I want to take pride in my work. I want to deliver working, stable things. To do that, we need to understand what we are building, in and out, and that’s impossible to do in bloated, over-engineered systems.

Somebody needs to write this software. Efficient, maintainable, and debugged are not the low-energy state.

That means it either needs to come from business, or from open-source hobbyists. Business doesn't see a competitive advantage in it -- even Apple, which has traditionally cared more about UX than anybody, just has to be better than their #2 competitor (and price of entry to "desktop OS" or "smartphone OS" is high so that list is short, and not changing on any relevant timescale). And the open-source world has never delivered well on the end-user experience side of things.

The sad truth is that users would rather have new software for $0 (paid by ads, or media subscriptions, or whatever) than pay what it truly costs to develop software.

My main hopes today are that the end of Moore's Law will force companies' hands, or that government will step in to regulate minimal quality, or that workers will organize so they can stand for quality behind a CBA. These all seem rather unlikely at this juncture. The number of programmers in it for the paycheck far outweighs the number of people who care about simplicity.

Software is going to get much worse before it gets better.


Heh, software in general, absolutely! But I think the "Web 2.0" is the worst offender: https://nixpulvis.com/ramblings/2018-08-11-web-shit-point-oh


The irony is, Web 2.0 was enabled by a performance improvement! It's only once JavaScript engines became fast enough to be useful that all these troubles on the Web started.



I like the fact that people are starting to get fed up with slow software. Maybe this will increase demand enough that we start seeing new, performant software pop up.


The article quotes this in regards to npm: https://medium.com/s/silicon-satire/i-peeked-into-my-node-mo...?

It seems he does not realise that this is a satire piece, and he seems to completely buy into its view of the world instead of seeing things in a more nuanced way.


This made my day! It just gets better and better!


A single car has about 30,000 parts, counting every part down to the smallest screws (according to Toyota). A lot of people don't understand the difference in complexity between a car and software. The analogy between the two is quite unfortunate.

Software, in a nutshell, is programmable transistors. Each CPU instruction is in effect just a convenient way to design a specific piece of electronic circuitry. Even the trivial act of printing Hello World involves an astonishing amount of complexity when you take into account all the protocols, APIs, driver code, kernel code, fonts, rendering and graphics that get executed in between. If you showed a computer printing Hello World on screen to someone from the 1920s who knew how to build electronic circuits and a primitive "display", they could estimate the amount of work that would be required to do that. Nothing has changed from the 1920s to the 2020s in terms of the complexity needed to enable a simple Hello World. A relatively simple program will easily exceed 30,000 low-level components working together to achieve a goal in an intricate dance. Now think of large code bases with millions of lines of code... This is why software is hard, software is complex, software is messy and software is magic.


Nothing feels more broken to me than the React / Javascript web ecosystem. Writing rich, stateful UI's in HTML isn't trivial, but React plus webpack hell turns this into a mess that is so difficult to develop, maintain, and reason about, that people just quit trying and accept that their error tracker will be full of random bugs that nobody can explain or reproduce. It makes me sad.


I totally agree with him that it's awful, but I think the problem is that making things efficient is expensive in terms of time and having to hire expertise. Evidently efficiency just doesn't have enough return on investment for companies to care. The thing about looking back at the Windows 95 era and comparing it to now is that Windows 95 needed to be efficient to be usable; Windows 10 doesn't.

The exception would be games and embedded software, but even there, there are certain degrees of laziness. For instance, games are very CPU/memory/GPU efficient, but they're almost always ridiculous when it comes to disk space usage. There's no reason that your average AAA game needs to take up 50GB other than that things which could address that aren't worth the fuss. (I'm thinking of common demo-scene tricks like procedural textures/data/everything, aggressive compression schemes, reusing assets, etc.)


But what real enhancements does Windows 10 provide, which couldn't have simply been done as patches to Windows 7, keeping the same main structure and layout and overall design, but just continually improving it, the way Henry Ford did with the Model T for 20+ years?

For that matter, one could for example take the Windows NT 4.0 source code, add in drivers for the necessary hardware, fix boot code, linking, etc to be compatible with late model computers, spruce up the UI with better font rendering, antialiasing, 24-bit color wallpaper, OpenGL rendering even--and in the end, you'd have something just as functional as Win 7/10 but at 1/4 of the bloat.

This sort of thing would be technically very easy to do. It's much easier than the status quo of continually reinventing the wheel. So why, oh why, is there this overpowering desire to continually throw out good code and replace it with heavier, more bloated junk, which doesn't really offer any real increase in functionality?


> For that matter, one could for example take the Windows NT 4.0 source code, add in drivers for the necessary hardware, fix boot code, linking, etc to be compatible with late model computers, spruce up the UI with better font rendering, antialiasing, 24-bit color wallpaper, OpenGL rendering even--and in the end, you'd have something just as functional as Win 7/10 but at 1/4 of the bloat.

You've almost described ReactOS!


My point isn’t that they couldn’t or shouldn’t be more efficient, it’s that there’s no market pressure. No incentives means no change in behaviour. The only way to fix it is to change the incentives, otherwise we’re just in “old man yells at clouds” territory.

Besides, you could essentially do what you’re describing by piecing together a Linux desktop from a lean distro. It’s not hard to find efficient software, it’s just not really mainstream


The overpowering desire is to make more money for your shareholders. That sometimes isn’t going to align well with the interests of end users.


Except that it also happens with internal software, where there's no separation between developers and end-users. It's second-system effect writ large. "We'll rewrite this old, outdated software and add plenty of bells and whistles! This time will be different, we'll finally be doing it right."


See, to me, internal software may be the one place where "developer time" really is more valuable (depending on the size of the organization), because you're developing for far fewer users.

One second times five million users is almost two months. One second times five thousand users is less than 90 minutes.
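(To make the arithmetic explicit, here's a quick back-of-the-envelope sketch; the figures are only illustrative and assume the wasted second happens once per user.)

    # Rough arithmetic behind the numbers above: one wasted second per
    # user, summed across the whole user base.
    def wasted_seconds(users: int, seconds_each: float = 1.0) -> float:
        return users * seconds_each

    print(wasted_seconds(5_000_000) / 86_400)  # ~57.9 days -- almost two months
    print(wasted_seconds(5_000) / 60)          # ~83.3 minutes -- under 90 minutes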


All these critiques really resonate with me. I've asked myself that same "What could it possibly be DOING!?" question every time I update windows. Recently I started learning 6502 assembly so I can write NES games. It's really pleasant to abandon the layers and layers of dysfunctional, messy framework goo that is now the norm in my daily work life in web based programming.

But something crucial is missing from this manifesto. As the author says, the problems don't exist because we can't solve them, but because no one ever takes an interest in solving them. We're all engineers, so why don't we just do some engineering and fix this bullshit? Well, because it wouldn't make any money.

Perplexingly and contradictorily, the profit motive drives both innovation and stasis, both growth and sprawl, and both efficiency and inefficiency. The problem is both technical and social, and the solution probably will be too.


My concern about this state of affairs in software engineering is that it may be producing generations of engineers who couldn't actually build or conceive things better (e.g. building a rendering engine or a mobile OS) if they wanted to, concentrating that power in a few corporations. That's a bit terrifying.


Second the OS > VM > Docker > Kubernetes gripe. Soon no one will know how to admin a system. Companies like Hetzner (http://www.hetzner.com) are offering an 8-core VPS with 32GB RAM for a mere $40 per month, but all we hear about is AWS. It's insane. Facebook conquered the world on a fraction of that hardware back in 2004. Docker and Kubernetes were originally designed to solve problems managing massive fleets of servers, but now devs at every two-bit startup with a single server are expected to be hiding their efforts behind these two extra layers. "Over-engineering" doesn't even come close.


My problem with that 8C/32GB machine as opposed to AWS is that that’s literally everything I want my VPC and direct connect.


Sorry, I don't understand the last part of the sentence.


I missed a period. I meant to say that Hetzner gives me great machines, but nothing else (actually they changed that a little bit on their VPS offering, which now has an internal network).

This works great for my personal machines, but not for my company, which wants to do fancy networking things without going over the open internet.

I do fully agree that it’s an order of magnitude cheaper if you know what you are doing though. I’m sure our $3000/month AWS stack would run a lot more efficiently on my $20/month Robot server.


Can someone from the quant/HFT world comment on whether these kinds of problems exist in that sector as well? Somehow I imagine that the fierce competitiveness of trading would force people to write better software, but that’s just a theory.


It depends what you mean by better. Obviously, real-time trading systems are going to be optimized to reduce latency, since it directly correlates with profits. One way to do that is to completely bypass certain abstraction layers, use polling instead of events, do more stuff in userland, tune kernels, and so on. This works, but you end up with a really bespoke system that is not particularly elegant.
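To make the "polling instead of events" point concrete, here is a minimal, purely illustrative sketch in Python (real trading systems do this in C/C++ with kernel-bypass NICs, so treat this as a shape, not an implementation): a busy-poll receive loop burns an entire core but never pays the wake-up latency of an event loop.

    import socket

    # Illustrative only: spin on a non-blocking socket instead of sleeping
    # in select/epoll. This pins a whole CPU core, but avoids the scheduler
    # wake-up latency that event-driven waiting adds to every message.
    def busy_poll(sock: socket.socket, handle) -> None:
        sock.setblocking(False)
        while True:
            try:
                data = sock.recv(4096)   # returns immediately or raises
            except BlockingIOError:
                continue                 # nothing yet; keep spinning
            if not data:
                break                    # peer closed the connection
            handle(data)

The event-driven equivalent (a selectors.DefaultSelector plus a blocking wait) is far friendlier to the rest of the machine, which is exactly the bespoke-versus-elegant trade-off described above.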

But you don't have to go to those extremes to have good software performance.


A coworker of mine explained his belief that professional software development is an inherently economic activity. He clarified this by saying that the number of imperfections, performance problems, and bugs in a piece of software is not a reflection on the software or its writers, but of what most end users ultimately care about.

Whenever I read posts like Software Disenchantment, I find myself agreeing with that philosophy. In other words, it’s probably by design. Of course this doesn’t account for the enormous waste of time and money that occurs during software development, but that doesn’t really affect my feelings on the matter.


That assumes end users can make informed decisions about the software they use, and have free choice to choose a less-buggy option if they just preferred that. Neither of which is true in many cases.


...And someone just made their first million with some buggy PHP app. Perfectionism is the enemy of progress. In the end, software is categorized into two types: software that people bitch about and software no one uses.


In the past, there were elixirs that claimed to cure baldness, impotence, cancer, etc. Marketing based on unproven claims was legal, and everyone was doing it.

Software today is in the snake oil era. "Secure", "privacy-friendly", "robust"...

We need the government to create the FDA of software, to protect the consumers against potentially harmful software, marketed using false advertisement.

We also need job applicants to be protected against predatory companies that hire people who want to do right by the customer but are forced to produce rushed, poor-quality stuff.


>> Build systems are inherently unreliable and periodically require full clean

What frustrates me above all else is the trend of compile-to-javascript languages. IMO interpreted languages are great, of course some performance was sacrificed to get there - I think that was a fair trade-off because saving the developer from having to build the project is a HUGE advantage when developing (at least for my particular development flow)... So when I see people throwing away that massive advantage by adding an unnecessary compile step in order to get slightly better syntax or static typing (e.g. CoffeeScript or TypeScript), I find it deeply disturbing. Static typing can be a useful feature, but is it worth adding a compile step? Not by a long shot.

And the idea of transpiling a language into an interpreted language is just ridiculous in principle. We had an army of very smart people who invested a huge amount of time and effort into making efficient interpreters for certain languages but all that work is thrown away as soon as you add a build step.

And the stunning thing is that it's actually possible (easy even) to create excellent software with clean code without a build step (I've done it many times) but these simple, clean approaches are never popular. People want to use the complex approach that introduces a ton of new problems, delays and compatibility issues.


While I agree with the author's observations on efficiency and simplicity in modern software, I think that the article's tone is needlessly antagonistic. It reads like a diatribe about how all of us terrible programmers are screwing up software for him. Perhaps I am an exception, but so far the majority of programmers I've worked with have been very conscious of their software's performance and worked hard on making it as fast as they could.

The problem is that when your software is built on top of a framework and/or uses X different web APIs etc then you often run into issues where a part of the system that you don't have control over causes performance issues and you don't have the expertise/time to profile it in order to fix it. So I think what's causing problems is that software has become a lot more about putting together frameworks, libraries and reusable components and when faced with such a complex system a programmer will often give up and say "there! it's as fast as I can make it without rewriting everything from scratch".

Therefore, the issue seems to be that programmers are building on top of other systems that they don't know enough about to use efficiently. The author does mention this issue in his article as well, but in a slightly derogatory fashion blaming programmers for bringing in dependencies they don't need.

I think if everybody had the time and ability to write everything from scratch like Jonathan Blow is doing with Jai then yes, things would be more efficient. It is far easier to profile and debug code you've written yourself. However, seeing how this isn't feasible for most projects, I think more focus should be put on better documentation of frameworks and libraries.


I agree with this.

I think we need a guild. We need licensed software engineers.

Not every programmer needs to be one, just like not every engineer needs to be licensed, but there needs to be a licensed engineer on every team. And, of course, sometimes there doesn't need to be. But I sure wish there was the option.

And hell, bring back apprenticeships and mentoring with the guild. There is so much we could learn from the physical science engineering disciplines


It's an interesting idea, but I feel like a lot of the cause of the problem is unrealistic deadlines and a desire to build software insanely fast.


While the apprenticeship model is a good one and can work in software, I would be wary of such an effort considering how rapidly software evolves. Guilds work well for technologies that are fairly stable and require years of practice and study to get right (metal working, carpentry, plumbing, surgery etc.), while with software you could specialize in languages and frameworks that become obsolete in the order of decades.

However, the idea of working closely with Senior Engineers and learning from them is certainly something that I vehemently agree with. I've been fortunate to have had that opportunity.


While software engineering is growing and evolving, there's also a lot of cyclical fads that we should look past. Sure, there is the framework du jour, but the fundamentals of computer science and software engineering grow far slower.

Mastery of frameworks is NOT mastery of our craft. They are useful tools that can come and go. But the underlying principles are what should concern us.

To that point, none of your examples of stable disciplines are static. New surgical tools, techniques, and technology are constantly produced, and surgeons must learn to lay down their old ones and adapt. Metalworking, carpentry, and plumbing all need to keep up with new materials and building-code changes.

All of those things are like frameworks.


It's true. We do. People don't want CS degrees to mean a "programming" degree, and this is one way to do it. Create guilds, apprenticeships, and mentoring so that when companies push for faster code, more programmers, etc., we have a solution.


This is the Jevons paradox of software development.

As self-driving cars enable us to fit far more cars on the road before causing the same level of congestion, people will start taking increasingly longer car rides for decreasingly valuable reasons, up until the roads are just as intolerable as they were before the innovation occurred.

Replace "self driving cars" with "faster hardware / more memory".


A good way to remove bloat is to find and remove needless abstractions. Even a C compiler will produce huge executables and it takes a lot of work to reduce a program to the smallest amount of code:

http://www.muppetlabs.com/~breadbox/software/tiny/teensy.htm...


Sounds like he wants software to contain AGI, such that it can work out which version of a document you want to keep and fix its own errors.

Honestly things were way more sucky years ago. Your Windows computer crashing was just normal in the 90's. Rebooting and reinstalling par for the course. Getting Linux to install with drivers working seemed impossible unless you carefully chose hardware.


FTA: > As a general trend, we’re not getting faster software with more features. We’re getting faster hardware that runs slower software with the same features.

Or fewer features. Talk to an actual professional who uses spreadsheets all day long about switching from Excel to Google Sheets. The infantilization of UIs and the "oh, they'll never miss it" attitude are infuriating.


we all want simple, fast, small programs with clear objectives.

but:

- economic incentives do not align with common sense

- landscape is fragmented, continually evolving and unstable

- we are all posers. we all have opinions and morals, look at others and criticise, but when it comes to our own work we are just like anyone else. we need money to live, so we just go with the flow


From the JavaScript ecosystem, I'm frustrated by Babel's popularity. Whenever I discuss it with developers, everyone agrees that it's not worth it, and yet people still use it! It's as if we have no choice in the matter!

The whole point of Babel was to allow us to use the latest JavaScript syntax so that we wouldn't have to update our own source code when the new syntax finally became broadly supported by browsers and other engines.

IMO, the Babel project is a failure because:

- Babel itself is always being upgraded to support newer ECMAScript syntax, so people still need to upgrade their own code whether they use Babel or not. The only benefit is that using Babel allows you to use these features before other people.

- Instead of just worrying about how JavaScript syntax changes affect your code, with Babel, you also need to worry about how Babel upgrades will affect your code. The babel plugin dependencies often change and break over time (even if you don't change your own code) and you always have to support both ecosystems.

So when you consider the big picture, Babel doesn't save you from having to upgrade your own code (as per its original promise). When you evaluate the pros and cons, the cons greatly outnumber the pros:

Pros:

- You get to use the newest language features before other people.

Cons:

- You need to maintain your code for compatibility with two ecosystems instead of just one. Keeping up with both ECMAScript + Babel is a lot of work. I would even argue that staying up to date with Babel is more work because dependencies keep changing underneath your project.

- It forces you to use a build step so you lose the benefits/iteration speed that an interpreted language brings to your development flow.

- Adds a lot of bloat and unnecessary, hard-to-describe dependencies to your project which can open up security vulnerabilities and make your code more opaque and brittle.


This is extremely well worded and gets right to the point. This is so refreshing in contrast to the constant attempts to explain why software is so slow, bloated and especially unreliable.

An explanation of "why" does not explain "why this is acceptable".


> “Programmer time is more expensive than computer time.”

I support the idealism of the article, but this quote is very accurate. Nobody wants to pay for quality software. Not your users, not your stakeholders, not even you!

And that's because the price of quality isn't just a little more expensive. It's not $1 a year. It's exponentially more expensive. And some of those paid efforts will still end up slow and crappy in outcome, because of something else the article doesn't acknowledge:

Writing fast, efficient, simple, correct, full featured software is really really hard.

So not only is it expensive, it's also just plain difficult. Meaning it's not just about money and resources, but also time, time to experiment and fail.


>> That is not engineering. That’s just lazy programming. Engineering is understanding performance, structure, limits of what you build, deeply.

If I think back to my engineering school days, the definition of "engineering" for my classmates in civil and electrical engineering was to look up well defined procedures and calculations from a book and apply them. No deep understanding required to design a bridge that didn't fall down or a circuit that didn't overheat.

What's the equivalent for software? Design patterns were a bust. SICP is for cultists. It's a huge void. There is hardly any such discipline as "software engineering" yet.


>Modern text editors have higher latency than 42-year-old Emacs. Text editors! What can be simpler? On each keystroke, all you have to do is update a tiny rectangular region and modern text editors can’t do that in 16ms. It’s a lot of time. A LOT. A 3D game can fill the whole screen with hundreds of thousands (!!!) of polygons in the same 16ms and also process input, recalculate the world and dynamically load/unload resources. How come?

Well, where's your word processor buddy? Try to write one to achieve those goals -- and offer what people want today, including syntax highlighting, linting, auto-completions, etc, and come back to us...


>Well, where's your word processor buddy? Try to write one to achieve those goals -- and offer what people want today, including syntax highlighting, linting, auto-completions, etc, and come back to us...

Emacs has been able to do that for 25 years, if not more.


Emacs never could, and still can't, draw while processing input, and it even needs an occasional manual refresh. Badly written code (a plugin, etc.) or a busy loop can freeze Emacs. So there's that...

And, Lisps aside, the kind of linting, auto-completion, understanding of syntax/AST, etc emacs could do 25 years ago is much easier compared to what devs expect today.


Gvim and Sublime do this quite well. I haven't looked at the Xi editor mentioned in the article but I would assume it does a decent job or it wouldn't have been mentioned. VScode actually isn't even that bad considering it's Electron based but doesn't quite meet the mark.

Also, you're conflating syntax highlighting/auto-completion with the time to render an update in response to input. You don't need the popout letting you know that, 17 source files and 50k lines away, it found that "TextComplete()" is a valid auto-completion and should be turned blue in order to draw an "x" to the screen in response to a keystroke.
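A toy sketch of that separation (hypothetical names, not how any particular editor is written): echo the keystroke on the fast path and let completions arrive whenever they're ready.

    import asyncio

    # Toy model: render() runs on every keystroke; complete() stands in for
    # a slow language server and never blocks the render path.
    class Editor:
        def __init__(self) -> None:
            self.buffer = ""

        def render(self) -> None:
            print(f"draw: {self.buffer!r}")          # fast path, every keystroke

        async def complete(self, text: str) -> None:
            await asyncio.sleep(0.2)                 # pretend analysis is slow
            print(f"completions for {text!r} ready") # arrives late, applied later

        async def keystroke(self, ch: str) -> None:
            self.buffer += ch
            self.render()                            # drawn immediately
            asyncio.create_task(self.complete(self.buffer))  # fire and forget

    async def main() -> None:
        ed = Editor()
        for ch in "Text":
            await ed.keystroke(ch)
            await asyncio.sleep(0.05)
        await asyncio.sleep(0.5)                     # let pending tasks finish

    asyncio.run(main())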


Each and every point the author makes shows a lack of understanding of some basic things. Comparing CPU resources with fuel consumption? Really? Or Windows Update taking 30 minutes... Remember: premature optimization is the root of all evil.


> Linux kills random processes by design. And yet it’s the most popular server-side OS

There's no reference for this in the article, and it caught my attention - anyone have any idea what the author is talking about here? Never heard of this before


It could be referring to the out of memory (OOM) killer https://www.kernel.org/doc/gorman/html/understand/understand...
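For the curious, the kernel's notion of a "bad" process is visible and tunable per process. A minimal sketch (Linux only; lowering a process's score usually needs CAP_SYS_RESOURCE or root, so this one raises it instead):

    from pathlib import Path

    # Inspect and adjust how attractive the current process looks to the
    # OOM killer. Higher oom_score means "kill me first".
    def oom_score(pid: str = "self") -> int:
        return int(Path(f"/proc/{pid}/oom_score").read_text())

    def set_oom_score_adj(adj: int, pid: str = "self") -> None:
        # Valid range is -1000 (never kill) to +1000 (kill first).
        Path(f"/proc/{pid}/oom_score_adj").write_text(str(adj))

    print("badness before:", oom_score())
    set_oom_score_adj(500)    # volunteer this process as an early victim
    print("badness after:", oom_score())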


There has been some work on this in nohang[0].

> Customizing victim selection: impact on the badness of processes via matching their names, cgroups, exe realpathes, environs, cmdlines and euids with specified regular expressions;

[0]: https://github.com/hakavlad/nohang


I can imagine how many software products get to this state. At some point someone makes a poor design decision, and that sets a precedent. Or programmers can't be bothered to clean up and refactor their code, and it adds up over time.

But the best explanation for why this problem persists even in teams of proficient engineers that I have seen comes from the 2005 GDC Keynote from John Carmack [1].

> I can remember clearly in previous projects thinking [...] I've done a good job on all of this but wouldn't it be nice if I could sit back and really clean up the code, perfect the interfaces, and you know, just do a wonderful nice craftsman job [...] interestingly this project I've had the time to basically do that and I've come to the conclusion that it sort of sucks [...] there's a level of craftsman satisfaction that you get from trying to do what you do at an extreme level of quality and one thing that I found is that's not really my primary motivation [...] that's not what's really providing the value to your end-users [...] you can sit back and do a really nice polishing job, it's kind of nice, but it's not the point of maximum leverage, I found.

So, as others have mentioned here, there is a threshold where extra performance provides less value to the project than, say, an extra feature.

It seems that in some cases we have crossed that threshold and some software has become comically bloated. I attribute the fact that these problems go unsolved to the same reasoning. Refactoring an existing project to reduce the bloat would take too much time and effort from a single developer, time that could be used somewhere else, even though in the end everyone would benefit from it. So you are better off adding stuff to the dumpster fire and moving on.

It's sort of a tragedy of the commons [2] of software performance.

[1] https://youtu.be/N0auhzHZe5k?t=1015

[2] https://en.wikipedia.org/wiki/Tragedy_of_the_commons


> Modern text editors have higher latency than 42-year-old Emacs. Text editors! What can be simpler? On each keystroke, all you have to do is update a tiny rectangular region and modern text editors can’t do that in 16ms.

Modern text editors have to deal with proportional fonts, right-to-left writing, anti-aliasing, and a big bag of Unicode-related issues (Arabic is going to break all the assumptions you have about language and text editing).
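A tiny illustration of why "just update a rectangle" undersells it; even deciding what one character is takes work:

    import unicodedata

    # The same visible text as two different code point sequences.
    composed = "\u00e9"          # 'é' as a single code point
    decomposed = "e\u0301"       # 'e' + combining acute accent

    print(len(composed), len(decomposed))                        # 1 2
    print(composed == decomposed)                                # False
    print(unicodedata.normalize("NFC", decomposed) == composed)  # True

    # And that's before bidi: a run of Hebrew or Arabic is stored in logical
    # order but must be drawn right-to-left, so caret movement, selection and
    # that "tiny rectangle" all stop being trivial.
    print("abc " + "\u05e9\u05dc\u05d5\u05dd")  # Hebrew 'shalom'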

Fantastic read on this subject from a few months ago:

https://news.ycombinator.com/item?id=21384158


I remember 5-6 years ago I tried to create an app in Pascal to remove cache/temporary files, fix registry errors and apply a few optimizations, like the popular "optimization/speedup" apps. I wanted to show progress on a progress bar, but guess what: the process took just 0.3-0.5 seconds and the progress bar wasn't needed anymore. I wanted to show it anyway, so I put random amounts of wait time (sleep) between steps. I never figured out why the popular apps need 3-5 minutes for a task that my simple app finishes in 0.5-1 second.


>I’ve been programming for 15 years now. Recently, our industry’s lack of care for efficiency, simplicity, and excellence started really getting to me, to the point of me getting depressed by my own career and IT in general.

You and me both buddy! 11 years here, you are not alone in this!

What I found helped me take my mind off this nagging feeling is to bring in a tool that is like a razor-sharp, brand-new scalpel. That was Nim. It's readable, small, fast, and spits out tiny statically compiled binaries.

Find your scalpel and slice and dice. Back to basics. Purity.


25 years since I started learning C++ as a kid, 14 years in industry and feeling like a disappointed old man despite still having the youthful looks to pass for a student!


I love that he linked this page in the article:

https://docs.gitlab.com/ee/administration/operations/unicorn...

because I've had to deal with that a bit. N.B. the last sentence here:

One other thing that stands out in the log snippet above, taken from GitLab.com, is that ‘worker 4’ was serving requests for only 23 seconds. This is a normal value for our current GitLab.com setup and traffic.


It appears to me that much of your complaint is based on the flawed assumption that the software "industry" is filled with engineers. ("Industry" because in software, what appears to be production of something new is really just a re-invention or re-creation of what's been done before - and unfortunately rarely an improvement.)

For example, how many authors on medium proclaim themselves "Senior Software Engineers" but when you dig find they've got maybe 5 years experience doing web development, with no CS or engineering education. Maybe stuff like a 36hr "Web Development Bootcamp". Do people really not understand the definition of an engineer anymore?

From there they progress into the deeper parts of software. And create the atrocities to be found in the npm registry, which become dependencies of dependencies of dependencies that result in nightmares every time one needs to navigate to a website.

If it were possible to see the background and education of the numerous critics here, what might we find? If I (the developer as described by the OP) am surrounded by people like me, and the world is filled with people who think like me and create things at a similar cognitive level as my peers, would I not misjudge the collective level of quality that I perceive to be acceptable? Smells like confirmation bias to me? Maybe a few others. Dunning-Kruger, anyone?

In their defense, the marketing campaigns created by large corporations to turn the supply-demand (cost of employment) of employees in their favor have, I think, been a big part of the problem. First the programmers, then the "Data Scientists", etc. - the amount of disappointment and student debt being created as these people eventually realise they've been sold something that they're not suited for!

If we cannot critically look at our industry and admit our flaws, we cannot move forward as a collective.


> A 16GB Android phone was perfectly fine 3 years ago. Today, with Android 8.1, it’s barely usable because each app has become at least twice as big for no apparent reason. There are no additional features. They are not faster or more optimized. They don’t look different. They just…grow?

I jumped straight from Android 2 to Android 8 development and was surprised myself, so did some investigation and have half an answer. The author is actually wrong here, on both looking different and additional functionality. However, the bloat is still far larger than it needs to be.

All the bloat comes from the AppCompat modules, which all the docs recommend to the point of it apparently being required if you don't know better.

AppCompat is for both supporting differing APIs and creating a consistent look and feel across different Android versions. Each Android version has its own visual design, which Google decided was a bad thing, opting to use AppCompat so the most recent designs (and in some regards design functionality like coloring the selection handles) were used in older versions of Android.

To do this however, it includes a crap-ton of images. The build scripts are supposed to remove the unused ones, but even with maximum trimming enabled it can only remove somewhere around 10%. There are hardcoded inclusion rules for some of the AppCompat java code, that no one's found a way to override, which in turn reference the images - so they get kept as well, even if your app never uses them.

As for differing APIs, notifications have changed massively over the years. The interface is so different you do actually need the AppCompat subset for notifications to target different Android versions (and that can be used separately from the rest of AppCompat), but there also have been a huge number of new features added to notifications - such as delay settings, shortcuts, icons, even full-on fancy designs, that didn't exist early on.

I'm calling this only half an answer because there's no apparent reason for some of the notification API changes, and the build scripts/AppCompat can certainly be significantly improved to remove more cruft. I have a sneaking suspicion that it's not done because this is low-hanging fruit for handing over signing keys to Google for their "optimized" builds...


Lots of moving parts.

I am a dependency skeptic. I think that you need them to do big stuff, but should probably avoid them for small stuff.

High-quality dependencies can have a drastic impact on the quality of your software, but so can low-quality dependencies.

I think we are at the tail-end of a "wild west" of dependencies.

When the dust settles, there will be a few really good, usable and stable dependencies, and a charnel pit, filled with the corpses of all the crap dependencies, and, unfortunately, the software that depended on them.


Great article. The author makes some really good points. Surely the big tech firms have skunkworks projects going on to rebuild the problem areas? Microsoft. Google. Facebook. Amazon. Netflix. They all must have decent-sized R&D departments? Perhaps an independent, very public, curated list that points out areas that desperately need work would be helpful. Naming and shaming, so to speak, would prod them to move in the right direction on the areas that need work the most.


Couldn't agree more. I am going to throw marketing under the bus for web application bloat. They make us install script after script to track users. They are the worst.


Most of the issues outlined are problems of the web. The HTML+CSS+JS combo is just painfully slow and wasteful by design - it's just way too many levels of flexible abstractions, which is suboptimal for app development. Moreover, it's expensive to maintain two apps - web and native - which have the exact same UI/UX. Hence the rise of Electron, React Native, ...

The only way out of this is to rethink the web. Which is a hard one to tackle.


I think it's a desktop problem, too. TBH I don't know how things are over in the Mac/Win world, but over here in Linux land it's... not so good.

Boot used to be a small number of seconds. Now (on the rare occasions I'm forced to actually boot/reboot) I start the machine and bugger off to make coffee while it does its thing. I don't know what takes so fucking long, but it's in the range of 'several minutes'.

Starting apps, likewise. I just started up an infrequently used picture-editing app a little while ago... it took up towards a full minute of 'loading this crap', 'loading this other crap', etc.

And let's not mention Atom (an Electron app unless I've misunderstood something) -- so laggy for some things that should be near-instant that I'm developing an active hate.

Alright: get the hell off my lawn now!


...but it works everywhere (Win/Linux/Mac/Phones/Tablets/Consoles/Cars), no need to fight with incompatible shared dynamic libraries or ask users to install 20 native library dependencies. And the webapp is developed waaaay faster with waaay less work and money.

I also see the problem. I hate bloated websites. I hate all those little unnecessary fancy CSS animations, I use uMatrix and regularly have to figure out which of those blahblahcdn.com domains (subdomains are whitelisted) needs to be enabled to make it work.

But even those webapps can be made to be "fast enough". Developers just have to be very careful about the design from the beginning and try to use as few dependencies as possible. Prefer simplicity and speed over fancy effects and unnecessary features no one asked for.


IMO the article paints a really negative picture of the state of technology. Any profession with a large enough number of practitioners will create tons of crappy stuff; with most professions that crap is just not discoverable. E.g. if you're a crappy carpenter, only your city or neighborhood can see your shitty work. But if you make a crappy website or app, anyone in the world can see and use it.


Sturgeon's Law, "Ninety percent of science fiction is crud, but then ninety percent of everything is crud".


He talks about major companies though


> Google Play Services, which I do not use (I don’t buy books, music or videos there)—300 MB that just sit there and which I’m unable to delete.

That's... not what Google Play Services is, like, at all. https://developers.google.com/android/guides/overview


> Modern text editors have higher latency than 42-year-old Emacs. Text editors! What can be simpler? On each keystroke, all you have to do is update a tiny rectangular region and modern text editors can’t do that in 16ms. It’s a lot of time.

because we are going to make editors using JAVASCRIPT, which was never meant to be used this way


The law of software bloat has been known since the 1980's. It's called Wirth's Law, or "What Intel giveth, Microsoft taketh away". See https://en.wikipedia.org/wiki/Wirth%27s_law.


I'm in a bad mood today, and in my bad-mood opinion, much of this is a symptom of modern economics and business management.

Also - resources that are essentially free to use (customers'/users' CPU cycles, storage and bandwidth) will be consumed.

We as consumers are paying the bills for it all in different ways (electricity, new gadgets, cloud costs, etc.).


How does this apply to programming languages? I've always heard that developers should just stick to the language in which they're most productive, but am I part of the problem if I pick Python over a more performant language like Rust or Go for all of my work (web apps, command-line tools, etc.)?


Nowadays, the shittiness of software doesn't come from lack of performance, but from ads and tracking.


Large socio economic factors are at play.

It depresses me to no end too, but I am not surprised in the least.

There were tiny groups trying to go frugal and solid. Remember suckless? I forget the other names; there's also Alan Kay's VPRI project with OMeta.

Maybe we should make a frugalconf. Everything at 25 fps on a Raspberry Pi Zero.


In my opinion, the reason nobody cares about something taking 15 seconds is that the process would have taken them 4 hours of manual work.

Of course they’re going to accept that delay, even if adding a few million numbers together should really take less than a millisecond.
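That last claim is easy to sanity-check (a rough sketch; exact numbers depend on hardware, and pure Python is the slow way to do it):

    import time

    nums = list(range(5_000_000))

    start = time.perf_counter()
    total = sum(nums)
    elapsed_ms = (time.perf_counter() - start) * 1000

    # Even interpreted Python sums five million integers in tens of
    # milliseconds; compiled or vectorized code is faster still. Either way,
    # it's orders of magnitude below the multi-second waits users shrug off.
    print(f"sum={total}, took {elapsed_ms:.1f} ms")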


Yet another Electron hit-piece... curious, how do other cross-platform frameworks compare in both build time and live-reload time? For comparison, VS Code clean builds in about a minute, and a change-build-reload dev cycle is about a second or two.


Everyone is so angry. Isn't this the world of ubiquitous code and unlimited resources we wanted?

Not everything needs to be super efficient. Most things are tuned for production cost and time. Efficient code isn't going anywhere. Relax guys.


I'll relax when I can get through the day without fighting some idiotic organization's excuse for a web form, or attention-hogging pop-ups, or, god help me, what else?!

We shouldn't have to put up with all this shit. That's the point.


Excellent. What, then, are you doing to make sure this shit doesn't happen?

Are you willing to pay $20 for a phone app, instead of fishing for the free version? Are you willing to pay for websites to get rid of ads? Are you willing to pay $300 for a new OS?

Alternatively, are you willing to ditch capitalism for a system that prioritizes the commons?

It's us. We are creating the incentives for a world that produces this. Unless we change that world, this is what we'll get.


I'd like it if my phone let me install my own software. That would be a pretty big step for me personally. But I'm still stuck on iOS for now.


iOS doesn't have 3rd party app markets?


Yeah, but it would be kind of nice not to have to reboot my TV every couple of weeks because of a memory leak.


> Isn't this the world of ubiquitous code and unlimited resources we wanted?

I don't really know what this is supposed to mean, it sounds rather poetic and abstract, but I'm quite certain few if any people ever asked for a world where it takes 10+ seconds to open a simple email.


The AI/ML hype drives talented students away from engineering and systems research as well. Because of the hype, engineering is somehow less prestigious than "data science". I wonder how long until this trend blows up.


I agree with the overall sentiment, yet the examples could be better.

> Modern text editors have higher latency than 42-year-old Emacs. Text editors! What can be simpler? On each keystroke, all you have to do is update a tiny rectangular region and modern text editors can’t do that in 16ms. It’s a lot of time. A LOT. A 3D game can fill the whole screen with hundreds of thousands (!!!) of polygons in the same 16ms and also process input, recalculate the world and dynamically load/unload resources. How come?

Text is very complicated. Does your 42-year-old Emacs support Unicode? And not just accents, but whole different scripts?

See https://news.ycombinator.com/item?id=21105625 for some discussion and a good link about the complexities of rendering text.


I'm not sure if the author meant current versions of emacs (with their 42-year lineage) or not. But if that's the case, then I think emacs was one of the very first to support Unicode and whole different scripts. If my memory does not fail me, emacs had extensive wide-character support long before Unicode, with an extraordinarily broad treasure trove of different input methods on top.

emacs has also always been (in)famous for its text-update rendering algorithms, which probably still work well over the old slow terminal lines it was originally used on.


>Text is very complicated. Does your 42-year-old Emacs support Unicode? And not just accents, but whole different scripts?

Emacs had MULE in the late '90s. It was THE best Unicode editor because it supported a lot of encodings. You could switch from ISO 8859 to UTF-8 on the fly.


This article ignores the primary reason for the basic “problem”: changeability. How easy is it for me to hire a cheap programmer to iterate my profitable saas biz to keep it relevant in the rapidly changing market?


Whenever I read articles like this I often wonder how much overhead is due to security concerns, and how much that overhead and additional complexity contributes to the problem of slower software.


The rant is entertaining. The problem is pricing. Developers largely don't pay the costs for what they deploy; there's no profit in efficiency. Thus things aren't efficient.


So we really need to blame the developers of all the (free) libraries we use for not optimising them to the nth degree so our code doesn't become bloated...

No one's forcing you to use a library. But if you do they come with tradeoffs.

OK, things are slow and buggy. But we've got lots more things thanks to all the productivity we've gained from using libraries, etc. That means we collectively solve more problems for more people.

Purism is a nice idea, but ultimately probably not worth the effort until things become so bad that it is, and at that point it becomes a differentiator. I mean, I don't care if a web page is twice the size of Windows 95 because my computer is way faster than a 486.


Choose your scapegoat:

a) Lack of fundamental understanding of how computers work

b) Abstraction away from the bits, flops, shifts and pops

c) Quick sort

d) Electron.js

e) Ruby hipsters

f) Magical cloud computing

g) Software companies that know the cost of everything and the value of nothing

h) All of the above


Had to remove background to read the article :D https://imgur.com/gFla2WG


npm would be a lot better if it flattened out its node_modules. I've seen plenty of structures with multiple identical installs of the same library.

All module folders should be at root level.

Need version 15 and version 16? Yep different folders under the root. Modulename_version, eg somelib_0.1.15 alongside somelib_0.1.16

It would allow clear identification of old versions and much less duplication. Less bloat. No way to have duplicate copies of the same library.
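As a rough way to see how much duplication the current layout allows, here's a hypothetical little audit script (it assumes a conventional node_modules tree full of package.json files; nothing npm-official about it):

    import json
    from collections import Counter
    from pathlib import Path

    # Count how many copies of each name@version are installed under
    # node_modules. Any count above 1 is pure duplication on disk.
    def count_installs(root: str = "node_modules") -> Counter:
        installs: Counter = Counter()
        for manifest in Path(root).rglob("package.json"):
            try:
                pkg = json.loads(manifest.read_text(encoding="utf-8"))
            except (json.JSONDecodeError, OSError):
                continue  # skip fixtures and unreadable files
            if "name" in pkg and "version" in pkg:
                installs[f'{pkg["name"]}@{pkg["version"]}'] += 1
        return installs

    for spec, copies in count_installs().most_common(20):
        if copies > 1:
            print(f"{copies}x {spec}")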


> Web pages ask you to refresh if anything goes wrong. Who has time to figure out what happened?

I feel like this is a feature, not a bug of software dev. The fact that you can push updates out so instantaneously allows you to work incredibly incrementally. You can make barely functional, inefficient things (shortcut hacks) just to make sure that your product is something that people actually use before focusing on optimization. If users are willing to put in the extra work to "refresh" the page, then certainly there's some real problem you are solving.


This whole article is regurgitation, aside from that, I was just wondering, what complex, highly efficient software has the author written?


The analogy to cars misses where they are in their life cycle. When cars were evolving at a faster pace, and gas was relatively cheaper, they did get something like 5-10% efficiency, particularly when you correct for the level of emissions produced.

Gas car technology matured, gas got more expensive, and cars got more efficient. Moore's Law isn't going to go on forever. We are hitting limits in battery life. Software will tend to get more efficient over time.


Safari on iOS would not render Nikita’s article until I killed and restarted it, nicely illustrating one of his key points.


iOS has evolved into such a buggy mess I switched to Android. I used to be a huge Apple fan, and now I don't use any Apple products. I feel they are one of the worst offenders in degrading software quality, unfortunately.


I understand what the author is saying but it sounds like wanting tech for tech's sake. If the user doesn't care, then does it matter that sites are slower? And if it matters, why hasn't someone come along and done something better like the author suggests? Surely this trend in tech performance has been going on for so long that someone could be doing something about it?


Obligatory mention of Parkinson's law (https://www.wikiwand.com/en/Parkinson%27s_law#/Generalizatio...), which states

> The demand upon a resource tends to expand to match the supply of the resource (If the price is zero).

Have cheap hardware, software will expand to use more of it.


Everyone forgets the spiritual cost that it takes on the meat to make silicon think. Does it suck? Certainly not. Could it be better? Yes. That's why we're here. Right now, though, humans can barely perceive anything shorter than 1/60th of a second. Speed/performance isn't everything.


Ha! And people call me crazy for using emacs and eww for everything.


Since about 2004, when computers started having lots of excess ram and CPU, all that excess went towards surveillance. Your software is slow because watching you is hard work!


Peter Principle

Conway's Law


Avast! the river was angry at us arr ...


Something no one has said which probably should be mentioned is that programmers today are just not as good as programmers of yesteryear.


Perhaps it's more that most (not all) of the many programmers today are not as good as the (relatively fortunate) few who had access to computers in the past.

In fact, I'd go so far as to suggest there are more good programmers about today than there were in the past. Though having said that, percentage-wise I'd say the profession is definitely being deskilled (a good thing if you're a manager, a bad thing if you're a good programmer).


> few who had interest in computers rather than viewing programming as a path to easy money.

FTFY


This. I feel this to the bone.


From the OP:

> I hope I’m not alone at this. I hope there are people out there who want to do the same. I’d appreciate if we at least start talking about how absurdly bad our current situation in the software industry is. And then we maybe figure out how to get out.

Okay, I'll respond, especially on the last part "how to get out".

For the problems and struggles in the OP, I've seen not all of them but too many and sympathize. Mostly though, I don't have those problems, and the main reason is in the simple, old advice in the KISS Principle where KISS abbreviates Keep it Simple Silly although the last S does not always abbreviate silly.

In particular my startup is a Web site and seems to avoid all of the problems in the OP. Some details:

(1) Bloated Unreliable Infrastructure?

My Web site is based on Microsoft's .NET with ASP.NET for Web pages and ADO.NET for access to SQL Server (relational database). The version of .NET I'm using is some 4.x. So far I've seen essentially no significant revisions or bugs.

For the software for my Web pages I just used one of the languages that comes with .NET. I wanted to select between two, the .NET version of C# and the .NET version of Visual Basic. As far as I can tell, both languages are plenty good ways to get to the .NET classes and make use of Microsoft's managed code, e.g., garbage collection, and their CLR (common language run time) code. And IIRC there is a source code translator that will convert either language to the other one, a point which suggests that really the two languages are deeply equivalent.

I've written some C code off and on for 20+ years; mostly I remember the remark in the Kernighan and Ritchie book on C that the language has an "idiosyncratic syntax" or some such -- I agree. I never could understand the full generality of a declaration of a function -- IIRC there is some code in the book that helps parsing such a statement. I do remember that

i = ++j+++++k++

is legal; I don't want any such code in my startup; also my old tests showed that two compilers gave different results.

I find Visual Basic to have a less "idiosyncratic syntax" and a more traditional syntax closer to the original Basic and then Fortran, Algol, PL/I, Pascal, etc. So, my 100,000 lines of typing are in the .NET version of Visual Basic (VB).

For my Web site, part of the code is for a server process for some of the applied math computing. The file of the VB source code is 478,396 bytes long (the source code is awash in comments), and the EXE version is 94,720 bytes long. As far as I can tell, the code loads and runs right away. Looks nicely small and fast and not bloated or slow to me.

(2) A bloated IDE (integrated development environment).

I have no problems at all with IDEs. The reason is simple: I don't use one.

Instead of an IDE, I typed all my code, all 100,000 lines, into my favorite text editor, KEdit. It has a macro language, KEXX, a version of REXX, and in that language I've typed about 200 little macros. Some of those macros let KEdit be plenty good enough for typing in software.

E.g., I have about 4000 Web pages of .NET documentation from Microsoft's MSDN site. Many of the comments in my VB source code refer to one of those pages by giving the file-tree path of the HTML file on my computer; then a simple command displays the Web page. When reading code and checking the relevant documentation, that little tool works fine.

After all, VB source code and HTML code are just simple text; so are my Web site log files, the KEXX code, Rexx language scripting code, all the documentation I write either in the code or in external files (just simple text or TeX language input), etc. So, a good general purpose text editor can do well. And, "Look, Ma: I get to use the same spell checker for all such text!" The spell checker? ASPELL with the TeX distribution I use. It's terrific, really smart, blindingly fast, runs in just a console window.

For KEdit, it seems to load and run right away. I just looked and saw that what appears to be the main EXE file KEDITW32.exe is 1,074,456 bytes long -- not so bloated.

(3) Windows 10 Home Edition Reliability.

For a move, I got an HP laptop; it came with Windows 10 Home Edition. I leave it running 24 x 7. It hasn't quit in months. It appears that now the Microsoft updates get applied without stopping the programs I usually have running, e.g., KEdit, the video player VLC, Firefox, etc.

Using carefully selected options for ROBOCOPY, I do full and incremental backups of my files. I keep the ROBOCOPY log output; that output shows the data rate of the backup, and I have not seen that to grow slower over time. The disk in that laptop is rotating, and I've never done a de-fragmentation. So, I can't complain about performance growing slower from bloat, disk fragmentation, etc.

(4) Windows 7 64 bit Professional Server.

For a first Web server, I plugged together a mid-tower case with an AMD FX-8350 processor, 64 bit addressing, 8 cores, 4.0 GHz standard clock speed, and installed Windows 7 64 bit Professional SP1 from a legal CD with an authentication code. As I left that pair running 24 x 7, occasionally it would stop with a memory error of some kind. I installed an update and never again saw any reliability problem in months of 24 x 7 operation.

Since then I looked into Windows 7 64 bit updates and concluded that (i) there was a big roll-up of about 2016 or some such; (ii) since then there have been updates and fixes monthly and cumulative since the big roll-up, and (iii) the updates for Windows 7 64 bits and Windows Server 2008 are the same.

I can believe that Windows Server 2008 long ran and still runs some of the most important computing in the world. So if my Windows 7 64 bit Professional has the same updates as Windows Server 2008, maybe for use as a server my Windows 7 installation will be about the most reliable major operating system in computing so far. Fine with me.

So, I am not screaming bloody murder about operating system or software reliability.

(5) Smart Phone Bloat and Reliability.

I have no problems with smartphones if only because I have no smartphone and don't want one: When smartphones first came out, I saw the display as way too small and the keyboard as just absurd. Heck, in my desktop computing I have a quite good keyboard but would like a better one and, of course, would like a larger screen -- no way do I want to retreat to an absurd keyboard and a tiny screen.

Next I guessed that there would be problems in security, bloat, reliability, system management, documentation, and cost. Maybe history has confirmed some of these guesses!

For a recent move, I got a $15 cell phone and did use it a few times. Then I junked it.

My phone is just what I want -- a land line touch tone desk set with a phone message function from my provider. Works fine. So, I have a phone with lower cost and no problems with keyboard, screen, security, bloat, or documentation. Ah, old Bell Tel built some solid hardware!

(6) Web Site Speed, Reliability, and Bloat.

My Web site apparently has few or no problems with speed, ....

Why? The site is simple, just some very standard HTML code sent from now old and apparently rock solid ASP.NET.

Fast? The largest Web page sends for just 400,000 bits.

The HTML used is so old that it should look fine on any device anywhere in the world with a Web browser up to date as of, say, 10 years ago.

The key to all of this? The KISS Principle.

YMMV!


> Modern text editors have higher latency than 42-year-old Emacs.

Hmmm. Let's think about this.


At least the "small" Windows 10 still has a text editor. The larger Android doesn't even have a text editor.


Nor should it. Shipping an app 99% of users won't use as a system app is the antithesis of what this post was saying.


This is me.


RIP Flash


What are you up to these days Chris? And the obligatory question, what is your take on the article/why post?

I piled standout quotes below.

I think a big takeaway from the intersection of Bret Victor, Alan Kay, Jim Hollan and the ink&switch folks and your work is that the right dynamic interface can be the "place we live in" on the computer.

Victor shows a history of interactive direct manipulation interfaces, live environments where explorations of models or the creation of art go hand in hand with everything else related to that task: data input, explicit (programmatic) requirements and the visual output.

Hollan and ink&switch show the environment (ZUIs, canvas) can contain everything for doing work, the code alongside any manipulation of the viewport that can be conceived. Tools infinitely more advanced than Microsoft OneNote and designed 40 years ago.

From what I know about your work, I see another take on the environment I want to live in on the computer. I don't understand why I would want to lose power by stepping away from my language/interpreter/compiler/REPL into a GUI or some portal when I can bring whatever is nice about GUIs or portals into my dynamic computing environment. I very much want a personal DSL or set of DSLs for what I do on the computer, and I want to be able to hook into anything a la the middle mouse button in Plan 9.

The superior alternative to walled gardens and this absurd world of bloat and 'feature loss' (for lack of a better term for software engineering's enthusiastic rejection of history) seems to be known, and facets of it advocated by you and these others. It seems clear that "using the computer" needs to return to "programming the computer" and that to achieve that we need to fundamentally change "programming the computer" to be a more communicative activity, to foster a better relationship between the computer and the user.

Where is this work being done now? VPRI shut down 2 years ago, Dynamicland seems to be on hiatus? I am inspired most these days by indie developers who write their own tools and build wild looking knowledge engines or what they sometimes call "trackers."[1] And of course the histories and papers put forward by the above and their predecessors. And I play with my own, building an environment where I can write, draw, code, execute and interact with it all. I see no existing product which approaches what I want.

> Everyone is busy building stuff for right now, today, rarely for tomorrow.

> Even when efficient solutions have been known for ages, we still struggle with the same problems: package management, build systems, compilers, language design, IDEs.

> You need to occasionally throw stuff away and replace it with better stuff.

> Business won't care. Neither will users. They are only learned to expect what we can provide.

> There’s no competition either. Everybody is building the same slow, bloated, unreliable products.

> The only thing required is not building on top of a huge pile of crap that modern toolchain is.

> I want something to believe in, a worthy end goal, a future better than what we have today, and I want a community of engineers who share that vision.

[1]: https://webring.xxiivv.com


I'm working on tools/interfaces at Relational AI, which is doing really cool work in the declarative languages space. It was started by several of the folks whose papers were foundational to Eve. :)

I agree with the post, though as others have pointed out, it doesn't really dive into the fact that this problem is systemic and would require a shift in incentive structure.

I think the last quote you have is one of the most important missing pieces for making a meaningful change in this space. A lot of people want something better, but right now, as a community, I don't think we really know what that is. What is the complete story for an ideal version of software development? And by that I don't mean idealized examples, I mean the ideal version of the real process we have to go through. What does perfect look like in the world of changing requirements, shifting teams, legacy systems, crappy APIs, and insufficient budgets? If we could show that - not the simple examples we had for Eve, but something that addresses the raw reality of engineering - I think it would just be a matter of beating the drum.


Some valid and useful points wrapped up in a pile of failure to do even trivial research (e.g. Google Play Services isn't what the author thinks, the iOS 'nothing changed' hand-wave) and the sensibility of someone walking around an art show saying "I could do that, better." The author could bear an introduction to Chesterton's Fence, if nothing else, and a review of their apparent GitHub profile points to, perhaps, needing some time spent in the land of embedded systems to understand why a phone doesn't just boot in 1s.


I caught the Play Services thing as well, but rather than try to take down the article for not being perfect, I've been trying to think about what it's talking about.

I don't think there is anything mentioned in the article that is not realistic, including the 1-second boot time on a phone that you bring up. The only reason we can't get a phone with a static hardware configuration to boot in a second is that that's not what we've been optimizing phones for. LinuxBoot is a great example of how the time it takes to boot is purely about how much time we assign people to optimizing it, not some law of physics that requires servers to take 7 minutes to boot.


Your reply indicates a similar software-centric focus, which I've come to realize is one of the flaws of the article. LinuxBoot is nifty in that it replaces all those UEFI chunks that no one bothered to make fast, but guess what? I'm looking at an embedded system I have in hand, and the Linux part, with a minimalist u-boot and fixed hardware, still takes over 3 seconds to transfer control to init.

Assuming you've laid out your memory layout to fixed storage and your bootloader just tosses it in RAM as fast as possible, what else needs to happen? The hardware needs to be ready to initialize and then get initialized. This isn't trivial, fast, or resolvable solely by getting better about software. Waiting for inrush currents to stabilize takes time. Waiting for a component to signal its readiness for firmware and initialization takes time. Firmware downloads frequently take on the order of hundreds of milliseconds, if not seconds. One can, in theory (and sometimes in practice) deal with this by being clever about timeslicing and never spinning, but even then. Many systems have some peripherals queried over I2C; even at 400 kHz, a common maximum I2C speed, you aren't going to be able to shove mass quantities of data through, let alone if you need to act on any results from a read.
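The I2C point is easy to put numbers on. A back-of-the-envelope sketch (it ignores start/stop conditions and addressing overhead, so real transfers are slower, and the firmware sizes are just illustrative):

    # Each byte on the I2C bus costs 8 data bits plus an ACK bit.
    BUS_HZ = 400_000              # Fast-mode clock
    BITS_PER_BYTE = 9

    bytes_per_second = BUS_HZ / BITS_PER_BYTE        # ~44 KB/s ceiling

    for kib in (16, 64, 256):                        # illustrative firmware blob sizes
        seconds = (kib * 1024) / bytes_per_second
        print(f"{kib:>4} KiB over 400 kHz I2C: ~{seconds:.2f} s")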

Humorously, I just realized I neglected the time to initialize said RAM with the appropriate settings. Glancing over at a help forum for the Snapdragon 820, it looks like the initial RAM setup takes 18ms and it's waiting for the DDR for what looks like the better part of a second, which makes it awfully hard to continue doing other things.

None of that can just be changed by the folks doing software. Certainly Microsoft tried -- many of their boot improvements have been making boot more like resuming from hibernation, and it still takes time to boot.

Yes, we've built our towers of abstraction a bit high, and yes, folks tend to throw threads at things and hope that fixes our throughput issues -- but there is also a fundamental failure to understand that the abstractions are abstracting away real-world constraints.


I've seen several similar posts like this, but this is the best one yet. Kudos!


Don't feed the Russian troll. These trolls take 1% truth and mix it with 99% lies.


To put a finer point on it, software today is absolute garbage. I've been screaming about this for decades.

All of this bloated 'shitware' today is the result of having been written by people who a) have no deeper understanding of what the computer is actually doing -- also known as typical Python/Java/etc/etc/etc programmers -- and/or b) simply don't give a damn about conservation of resources, as further evidenced by all of the other extremely wasteful and destructive habits they hold in their personal lives, and in their societies in general.

After all, this is the same civilization that's burning through increasingly vast quantities of oil at an astounding rate, despite the fact that previously existing abundant and cheap oil is nearly depleted, with no possibility of replenishment or replacement. So is it any surprise that foolish developers also burn through CPU and memory with reckless abandon?

Really, the problems we face aren't just in software; they're more about the foundations of our entire Western 'civilization.' Such problems generally tend to be rather intractable, in the historical view.

I'm working to construct, in my own computing life, something of a 'personal oasis', which is increasingly removed and estranged from all of the horrible things I see Other People out there having to suffer in their personal computing lives, thanks to talentless 'developers' who Just Don't Fucking Care. Some of these pricks actually have the audacity to call themselves 'engineers', even.


> To put a finer point on it, software today is absolute garbage. I've been screaming about this for decades.

That statement contradicts itself. Is this satire?

> All of this bloated 'shitware' today is the result of having been written by people who a) have no deeper understanding of what the computer is actually doing -- also known as typical Python/Java/etc/etc/etc programmers -- and/or b) simply don't give a damn about conservation of resources, as further evidenced by all of the other extremely wasteful and destructive habits they hold in their personal lives, and in their societies in general.

Everything has a cost/benefit associated with it. Pretending otherwise shows how little you know about engineering.

> I'm working to construct, in my own computing life, something of a 'personal oasis', which is increasingly removed and estranged from all of the horrible things I see Other People out there having to suffer in their personal computing lives, thanks to talentless 'developers' who Just Don't Fucking Care. Some of these pricks actually have the audacity to call themselves 'engineers', even.

I dunno how far your head has to be up your own behind to actually believe this. Most developers these days work on large software projects that are poorly costed, estimated, and planned, usually with stifling restrictions because certain "enterprise" technology is mandated by some architect who hasn't written a line of code in a couple of decades. Most people like a consistent, regular income which allows them to support themselves and their family. I suspect these concerns are more important than the incoherent ramblings of some guy on HN.


If you want things to be cheaper and want more choices, this is what happens.

Not just software, see houses, furniture, consumer goods, etc.

I see no point in getting angry; you are in control of what you use.


> this bloated 'shitware' today is the result of having been written by people who a) have no deeper understanding of what the computer is actually doing -- also known as typical Python/Java/etc/etc/etc programmers -- and/or b) simply don't give a damn about conservation of resources

Much of the blame lies with bloated frameworks, such as Electron, rather than with the developer who uses the framework. You've covered this under Option B.

This topic cropped up recently on a Show HN. Someone built a Slack client for Windows 3.1. [0] It uses a tiny fraction of the memory used by the official Electron-based client, of course.

[0] https://news.ycombinator.com/item?id=21832815


I like calling regulations "protections" in some circumstances. And here we are, awaiting some new protections.

My "oasis" is just text files in git, or photos in directories for the most part. The challenge is integrating with others and my god damn iPhone.


Did you notice the subtle contradiction in your post?

> they're more about the foundations of our entire Western 'civilization.'

> talentless 'developers'

It's not talentless developers, it's the foundations of civilization: capitalism, chasing the $, not the writing of beautiful software. If an Electron app makes me more money (e.g. by getting to market quicker) than a well-crafted native app, then the Electron app will be built. See also building construction, and why we end up with 60m^2 poorly constructed but funky-looking apartments - who is to blame? The concrete pourers? The architect? I don't think it is that simple.


They seem to believe civilization is always and necessarily a good thing, and never a bad thing. No! Like many things, civilization is both good and bad.


Well, complaining about it does not change anything. Support the projects that you believe will help with these problems with your $, and things might actually change.


> Well, complaining about it does not change anything

The twelve-step program may be controversial, but I think its first step, 'admitting you have a problem', is also the first step of any productive approach to problem solving.


Whenever I see a rant that is a blanket "all is bullshit" list of complaints, I check whether there are any actionable proposed solutions. The author gave none, so I dismiss all this with a flick of my hand.


I get really tired of hearing people complain incessantly about how inefficient software is. I want efficient software too, but it isn't going to just happen.

Efficiency is a selling point that most users don't care much about in most markets. There are efficient browsers out there, but everyone uses Chrome because those browsers are inferior to Chrome in many other ways -- ways that are more important to the average browser user.

If there's a market for a more efficient software solution, go make it and get rich. Otherwise, I'm getting sick of the complaining.


Here's why I'm slightly annoyed by articles like this one. Oftentimes the "software is slow" mantra rings true, but here's the thing: everyone repeating it claims it's the shit further down the stack that's the cause of the slowness, and this is often untrue. V8 is fast; it's your shit JS code that is slow. PostgreSQL is fast; your shit queries are slow. We live in the age of the Stack Overflow programmer. Think about it for just a second: what requires the most competency? Writing V8 or PostgreSQL, or churning out some JS for a web app or Electron? It's not the programmers working on the former who are unconcerned about performance; they spend considerable effort on it.

The least competent programmers are the ones writing slow code. The least competent programmers are the ones working at the top of the stack.
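To illustrate the kind of top-of-the-stack slowness being described, here's a generic sketch in Python (the numbers are made up for illustration; the same accidentally-quadratic pattern shows up in JS hot loops and in N+1 database queries, and no amount of V8 or PostgreSQL engineering can fix it for you):

    import random
    import time

    ids = [random.randrange(10_000_000) for _ in range(20_000)]
    wanted = ids[::2]

    # Accidentally quadratic: every `in` on a list is a linear scan.
    t = time.perf_counter()
    hits = [x for x in wanted if x in ids]
    slow = time.perf_counter() - t

    # Linear: build a set once, then do O(1) lookups.
    t = time.perf_counter()
    id_set = set(ids)
    hits = [x for x in wanted if x in id_set]
    fast = time.perf_counter() - t

    print(f"list scan: {slow:.2f}s  set lookup: {fast:.4f}s")

On a typical laptop the first version takes seconds and the second takes milliseconds, and the interpreter is the same in both cases.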


Yeah, I don't like articles like this for the same reason. Nobody has the brain power (yet) to rewrite the entire stack in a GPU shader and add in complex logic and a dynamic interface. It's an optimization vs. delivery tradeoff. If someone is getting paid to build a product, they must deliver it with the tools at hand; you can invest time to optimize it, but it will come at the cost of fewer features. This also depends on the industry: if you are building lower-stack drivers that others will depend on, you care about performance more. Higher-level applications are more user-oriented and care more about UI and features. Organically, people tend to spend their time and brain cells in the most valuable way they can, and in a lot of products, especially in the web world, features hold more value than performance. Once we get the AGI thing going, we can just task it with stripping and redesigning the entire OS down to the kernel, tailored for every user -- everything trimmed down to only the buttons that user actually clicks. Some granny only knows how to open up Google and read her newspaper website; all that other code can be removed, lol.



