Software Disenchantment (2018) (tonsky.me)
934 points by ibdknox 21 days ago | 488 comments



> Would you buy a car if it eats 100 liters per 100 kilometers? How about 1000 liters?

I think the analogy here is backwards. The better question is "how much would you prioritize a car that used only 0.05 liters per 100km over one that used 0.5? What about one that used only 0.005L?". I'd say that at that point, other factors like comfort, performance, base price, etc. become (relatively) much more important.

If basic computer operations like loading a webpage took minutes rather than seconds, I think there would be more general interest in improving performance. For now though, most users are happy enough with the performance of most software, and other factors like aesthetics, ease-of-use, etc. are the main differentiators (admittedly feature bloat, ads, tracking, etc. are also a problem, but I think they're mostly orthogonal to under-the-hood performance).

These days, I think most users will lose more time and be more frustrated by poor UI design, accidental inputs, etc. than any performance characteristics of the software they use. Hence the complexity/performance overhead of using technologies that allow software to be easily iterated and expanded is justified, to my mind (though we should be mindful of technology that claims to improve our agility but really only adds complexity).


> "how much would you prioritize a car that used only 0.05 liters per 100km over one that used 0.5? What about one that used only 0.005L?". I'd say that at that point, other factors like comfort, performance, base price, etc. become (relatively) much more important.

I'll prioritize the 0.005L per 100km car for sure. That means the car can be driven for all its expected lifetime (500k km) in a single tank of gas, filled up at the time of purchase! That means there is a huge opportunity to further optimize for many things in the system:

- The car no longer needs to have a hole on the side for filling up. A lot of pipes can be removed. The gas tank can be moved to a safer location, closer to where it is used.

- The dashboard doesn't need a dedicated slot for showing the fuel gauge; more wiring and mechanical parts removed.

- No need for huge exhaust and cooling systems, since the wasted energy is significantly reduced. No more pump, less vehicle weight...

Of course, that 0.005L car won't arrive any earlier than a good electric car. However, if it existed, I'd totally prioritize it over the other things you listed. I think people tend to underestimate how small efficiency improvements add up and enable compounding value across the system as a whole.


This is definitely an interesting take on the car analogy so thanks for posting it! I don't know that I agree 100% (I think I could 'settle' for a car that needed to be fueled once or twice a year if it came with some other noticeable benefits), but it is definitely worth remembering that sometimes an apparently small nudge in performance can enable big improvements. Miniaturization of electronics (including batteries and storage media) and continuing improvements to wireless broadband come to mind as the most obvious of these in the past decades.

I'm struggling to think of recent (or not-so-recent) software improvements that have had a similar impact though. It seems like many of the "big" algorithms and optimization techniques that underpin modern applications have been around for a long time, and there aren't a lot of solutions that are "just about" ready to make the jump from supercomputers to servers, servers to desktops, or desktops to mobile. I guess machine learning is probably a contender in this space, but I imagine that's still an active area of optimization and probably not what the author of the article had in mind. I'd love it if someone could provide an example of recent consumer software that is only possible due to careful software optimization.


V8 would be one example. Some time ago, JavaScript crossed a performance threshold, which enabled people to start reimplementing a lot of desktop software as web applications. In the following years, algorithms for collaborative work were developed[0], which shifted the way we work with some of those applications, now always on-line.

That would be the meaningful software improvements I can think of. Curiously, the key enabler here seems to be performance - we had the capability to write web apps for a while, but JS was too slow to be useful.

--

[0] - They may or may not have been developed earlier, but I haven't seen them used in practice before the modern web.


> sometimes an apparently small nudge in performance can enable big improvements

In this thought experiment we are talking about a two-orders-of-magnitude improvement - hardly a small nudge!


> I'll prioritize the 0.005L per 100km car for sure. That means the car can be driven for all its expected lifetime (500k km) in a single tank of gas, filled up at the time of purchase!

It's a nice idea but it wouldn't work. The gasoline would go bad before you could use it all.

Plug-in hybrids already have this problem. Their fuel management systems try to keep the average age of the fuel in the tank under 1 year. The Chevy Volt has a fuel maintenance mode that runs every 6 weeks:

https://www.nytimes.com/2014/05/11/automobiles/owners-who-ar...

https://www.autoblog.com/2011/03/18/chevy-volts-sealed-gas-t...

Instead of having a "lifetime tank", a car that uses 0.005L per 100km would be better off with a tiny tank. And then instead of buying fuel at a fuel station you'd buy it in a bottle at the supermarket along with your orange juice.


There is [1] https://duckduckgo.com/?q=alkylate+petrol which is said to last anywhere between two and ten years, depending on the mixture, while burning rather cleanly.


You are thinking too small; with a car generating power that cheaply you could use it to power a turbine and provide cheap electricity to the entire world. It would fix our energy needs for a very long time and it would usher in a new age!


Or the car could just be very efficient. Gasoline has a lot of energy. Transporting a person 100 km on 34 MJ/l * 0.05 l = 1.7 MJ doesn't sound as impossible as you make it seem.


Trains transport at 0.41 MJ/t·km. If the person weighs 0.1t it would take a train packed full of people 41MJ per person to transport them 100km, or a bit more than one litre of gasoline. I don't think it is possible to go significantly below that without transporting them on maglev rails or vacuum pipes.

Secondly, we talked about 0.005l cars, not 0.05l, so it would be a few hundred times more efficient than train transportation.


Bicycles are probably a bit more efficient than trains.


It's about the same: you burn several thousand kilocalories, or a few tens of megajoules, biking 100 km.


My Strava says 100 km of cycling is ~3370 kcal.

The big problem is this: if we relate this back to software, it would mean the software being delivered in 10-15 years rather than in 6 months. Kind of a big downside...


Not necessarily. For one, relating this back doesn't remove the ability to develop incrementally. For another, there's very little actual innovation in software being done. Almost anything we use existed in some version in the past two or three decades, and it was much faster, even if rougher around the edges. Just think: how many of the startups and SaaS projects we see featured on HN week after week are just reimplementing a feature or a small piece of workflow from Excel or Photoshop as a standalone web app?


That's the old Ruby on Rails argument. In that specific case it only made sense when there were no similar frameworks for faster languages, but that's hardly the case today.


Ironically though, I'd be willing to bet that end-user performance on most traditional server-side-rendered apps using the "heavyweight" RoR framework is far better than the latest and greatest SPA approach.


It really depends.

I've worked with a Preact SPA where the time to initial render was faster than the HAML templates it replaced.

But then, again, that was an outlier. If your target is speed, traditional SSR or static pages are the best bet, anyway.


In a previous life I did back office development for ecommerce. We had two applications, one RoR monolith and a "modern" JavaScript Meteor SPA. The SPA was actually developed to replace the equivalent functionality in the RoR application, but we ended up killing it and sticking with what we had. Depending on what you're trying to accomplish, server-side rendering is just as good as, if not better than, the latest and greatest in client-side rendering.


Nitpicking: gas goes bad eventually and needs to be burned before that; the usual timeframe given is ~6 months.


That's the ethanol component of the gas (i.e. the E part of E5, E10), as it degrades.

If you had pure gasoline, you could store it for years (and in the past, countries and armies did exactly that for their reserves).


"Most users," yeah, perhaps.

A UI where each interaction takes several seconds is poor UI design. I do lose most of my time and patience to poor UI design, including needless "improvements" every few iterations that break my workflow and have me relearn the UI.

I find the general state of interaction with the software I use on a daily basis to be piss poor, and over the last 20 or so years I have at best seen zero improvement on average, though if I was less charitable I'd say it has only gone downhill. Applications around the turn of the century were generally responsive, as far as I can remember.


> These days, I think most users will lose more time and be more frustrated by poor UI design, accidental inputs, etc. than any performance characteristics of the software they use.

I’m willing to bet that a significant percentage of my accidental inputs are due to UI latency.


Virtually all of my accidental inputs are caused by application slowness or repaints that occur several hundred milliseconds after they should have.

I want all interactions with all of my computing devices to occur in as close to 0ms as possible. 0ms is great; 20ms is good; 200ms is bad; 500ms is absolutely inexcusable unless you're doing significant computation. I find it astonishing how many things will run in the 200-500ms range for utterly trivial operations such as just navigating between UI elements. And no, animation is not an acceptable illusion to hide slowness.

I am with the OP. "Good enough" is a bane on our discipline.


How about the I-am-about-to-press-this-button-but-wait-we-need-to-rerender-the-whole-page pattern? At which point you misclick, or can't click at all. Especially some recent shops and ad-heavy pages use this great functionality ;)


Twitter on mobile.....


The rule for games is that you have 16ms (for a 60Hz monitor) to process all input and draw the next frame. That's a decent rule for everything related to user input. And since there are high refresh-rate monitors, and it's a web app and not a game using 100% CPU & GPU, just assume 4-5ms for a nicer number. If you take longer than that to respond to user input on your lowest-capability supported configuration, you've got a bug.

0ms is great, 4ms is very good, 16ms is minimally acceptable, 20ms needs improvement (you're skipping frames), 200ms is bad (it's visible!), 500ms is ridiculous and should have been showing a progress bar or something.

Responding to input doesn't necessarily mean being done with processing, it just means showing a response.
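As a rough illustration of measuring against that budget, here's a minimal browser-side sketch (TypeScript, assuming a DOM environment). The 16 ms threshold and the use of requestAnimationFrame as a "time until the next paint" proxy are my assumptions, not an exact methodology:

    // Rough input-to-next-frame latency check against a ~16 ms budget (assumed 60 Hz).
    const FRAME_BUDGET_MS = 16; // tighten to 4-5 ms for high-refresh displays

    document.addEventListener("click", () => {
      const start = performance.now();
      // requestAnimationFrame fires just before the next paint, so the elapsed
      // time approximates how long the user waited for a visible response.
      requestAnimationFrame(() => {
        const elapsed = performance.now() - start;
        if (elapsed > FRAME_BUDGET_MS) {
          console.warn(`Slow response: ${elapsed.toFixed(1)} ms (budget ${FRAME_BUDGET_MS} ms)`);
        }
      });
    });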


This happens to me all the time starting pipelines in GitLab, which typically results in unwanted merges to master that then need to be reverted.


Don’t get me started on all the impressive rotating and zooming in Google Maps every time you accidentally brush the screen.

The usage story requires you to switch to turn-by-turn, and there’s no way to have a bird's-eye map following your location along the route (unless you just choose some zoom level and manually recenter every so often.)

It’s awful, distracting and frankly a waste of time... just to show a bit of animation every time I accidentally fail to register a drag...

Damn UI.


Well, Google Maps is its own story - it's like the app is being actively designed to be as useless as possible as a map - a means to navigate. The only supported workflow is search + turn-by-turn navigation, and everything else seems to be disincentivized on purpose.


I respectfully disagree -- something that is 10 times more efficient uses 10 times less energy (theoretically). When the end user suffers a server outage due to load, when they run out of battery ten times quicker, all of these things matter. When you have to pay for ten servers to run your product instead of one, this cost gets passed on to the end user.

I was forced to use a monitor at 30 fps for a few days due to a bad display setup. It made me realize how important 60 fps is. Even worse, try using an OS running in a VM for an extended period of time...

There are plenty of things that are 'good enough', but once users get used to something better they will never go back (if they have the choice, at least).


Another problem is that the inefficiency of multiple products tends to compound.

- Opening multiple tabs in a browser will kill your battery, and it's not the fault of a single page, but of all of them. Developers tend to blame the end user for opening too many tabs.

- Running a single Electron app is fast enough on a newer machine, but if you need multiple instances or multiple apps you're fucked.

- Some of my teammates can't use their laptops without the charger because they have to run 20+ docker containers just to have our main website load. The machines are also noisy because the fan is always on.

- Having complex build pipelines that take minutes or hours to run slows down developers, who are expensive. It's not the fault of a single piece of software (except maybe the chosen programming language), but of multiple inefficient libraries and packages.


> "Even worse, try using an OS running in a VM for an extended period of time..."

I actually do this for development and it works really well.

Ubuntu Linux VM in VMware Fusion on a Macbook Pro with MacOS.

Power consumption was found to be better than running Linux natively. (I'm guessing something about switching between the two GPUs, but who knows.)

GPU acceleration works fine; the Linux desktop animations, window fading and movement animations etc are just as I'd expect.

Performance seems to be fine generally, and I do care about performance.

(But I don't measure graphics performance, perhaps that's not as good as native. And when doing I/O intensive work, that's on servers.)

Being able to do a four-finger swipe on the trackpad to switch between MacOS desktops and Linux desktops (full screen) is really nice. It feels as if the two OSes are running side by side, rather than one inside another.

I've been doing Linux-in-a-VM for about 6 years, and wouldn't switch back to native on my laptop if I had a choice. The side-by-side illusion is too good.

Before that I ran various Linux desktops (or Linux consoles :-) for about 20 years natively on all my development machines and all my personal laptops, so it's not like I don't know what that's like. In general, I noticed more graphics driver bugs in the native version...

(The one thing that stands out is that VMware's host-to-guest file sharing is extremely buggy, to the point of corrupting files, even crashing Git. MacOS's own SMB client is also atrocious in numerous ways, to the point of even deleting random files, but it does so less often, so you don't notice until later what's gone. I've had to work hard to find good workarounds to have reliable files! I mention this as a warning to anyone thinking of trying the same setup.)


What year MBP is this? I tried running Ubuntu in VirtualBox on my mid-2014 MBP with 16GB RAM, but that was anything but smooth. I ended up dual booting my T460s instead.

But perhaps the answer is VMware Fusion instead then.


It's a late 2013 MBP, 16GB RAM.

I've only given Linux 6GB RAM at the moment, and it's working out fine. Currently running Ubuntu 19.10.

I picked VMware Fusion originally because it was reported to have good-ish support for GPU emulation that was compatible with Linux desktops at the time. Without it, graphics can be a bit clunky. With it, it feels smooth enough for me, as a desktop.

My browser is Firefox on the Mac side, but dev web servers all on the Linux side.

The VM networking is fine, but I use a separate "private" network (for dev networking) from the "NAT" network (outgoing connections from Linux to internet), so Wifi IP address changes in the latter don't disrupt active connections of the former.

My editor is Emacs GUI on the Mac side (so it integrates with the native Mac GUI - Cmd-CV cut and paste etc, better scrolling), although I can call up Emacs sessions from Linux easily, and for TypeScript, dev language servers etc., Emacs is able to run them remotely as appropriate.

Smoothness over SSH from iTerm is a different thing from graphical desktop smoothness.

When doing graphics work (e.g. Inkscape/GIMP/ImageMagick), or remote access to Windows servers using Remmina for VNC/RDP, I use the Linux desktop.

But mostly I do dev work in Linux over SSH from iTerm. I don't think I've ever noticed any smoothness issues with that, except when VMware networking crashes due to SMB/NFS loops that I shouldn't let happen :-)


Thanks a lot for the long, thorough reply. It sounds like I might want to give VMware Fusion a go if I want to play around with Linux on my MBP again.


The answer is I/O latency.

Having your VM stored inside a file on a slow filesystem is bad. Having a separate LVM volume (on Linux), zvol (with ZFS), partition, or disk is much more performant.


I store my Linux VM disk inside a file on a Mac filesystem (HFS+, the old one), and I haven't noticed any significant human-noticeable I/O latency issues when using it. The Linux VM disk is formatted as ext4.

That's about human-scale experience, rather than measured latency. It won't be as fast as native, but it seems adequate for my use, even when grepping thousands of files, unpacking archives, etc, and I haven't noticed any significant stalling or pauses. It's encrypted too (by MacOS).

(That's in contrast to host-guest file access over the virtual network, which definitely has performance issues. But ext4 on the VM disk seems to work well.)

The VM is my main daily work "machine", and I'm a heavy user, so I'd notice if I/O latency was affecting use.

I'm sure it helps that the Mac has a fast SSD though.

(In contrast, on servers I use LVM a lot, in conjunction with MD-RAID and LUKS encryption.)


Yes, but it's not just relative quantities that matter, absolute values matter too, just as the post you replied to was saying.

Optimizing for microseconds when bad UI steals seconds is being penny-wise and pound foolish. Business might not understand tech but they do generally understand how it ends up on the balance sheet.


But the balance sheets encompass more than delivering value to end-users; business can and do trade off that value for some money elsewhere (see e.g. pretty much everything that has anything to do with ads).

Note also the potential deadlock here. Optimizing core calculations at μs level is bad because UI is slow, but optimizing UI to have μs responsiveness is bad, because core calculations are slow. Or the database is slow. This way, every part of the program can use every other part of the program as a justification to not do the necessary work. Reverse tragedy of the commons perhaps?


> Even worse, try using an OS running in a VM for an extended period of time...

I do that for most of my hobbyist Linux dev work. It's fine. It can do 4k and everything. It's surely not optimal but it's better than managing dual boot.


Any hints? How are you getting any kind of graphics acceleration? What's your host/guest/hypervisor setup?


Host is Windows, guest is Ubuntu. Hypervisor is VMWare Workstation 12 Player. There is a very straightforward process to get graphics acceleration in the VM. The shell has a "mount install CD" option that causes a CD containing drivers to be loaded in the guest (Player > Manage > Reinstall VMWare Tools). You install those, and also enable acceleration in the VMWare settings (https://imgur.com/a/PUaE38u). Again, it's not perfect, but I can e.g. play fullscreen 1080p YouTube videos. Not sure how it would like playing 4k videos, but my desktop doesn't like that so much even in the host OS.


I do this the other way around: Ubuntu host and a KVM virtual machine controlled by virt-manager, with PCIe passthrough for its own GPU and NVMe boot drive. I enjoy Linux too much for daily use (and rely on it for bulk storage, with internal drives fused together with mergerfs and backed up with snapraid), but I do a lot of photography and media work so I also rely on Windows. This way, I can use a KVM frame relay like Looking Glass to get a latency-free, almost-native-performance Windows VM inside an Ubuntu host, without the need to dual boot (but since the NVMe drive is just Windows, I can always boot into Windows if I please).


I have to be careful about what I describe, but I don't think people care about speed or performance at all when it comes to tech, and it makes me sad. In fact, there are so many occasions where the optimisation is so good that the end user doesn't believe that anything happened. So you have to deliberately introduce delay because a computer has to feel like it thinks the same way you do.

At my current place of employment we have plenty of average requests hitting 5-10 seconds and longer; you've got N+1 queries against the network rather than the DB (sketched below). As long as it's within 15 or 30 seconds nobody cares; they probably blame their 4G signal for it (especially in the UK, where our mobile infrastructure is notoriously spotty, and entirely absent even in the middle of London). But since I work on those systems I'm upset and disappointed that I'm working on APIs that can take tens of seconds to respond.

The analogy is also not great because MPG is an established metric for fuel efficiency in cars. The higher the MPG the better.
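For what it's worth, here is a minimal sketch (TypeScript) of the "N+1 queries against the network" pattern mentioned above versus a batched call. The endpoint paths are made up for illustration, and the batched version assumes the backend actually exposes such an endpoint:

    // One request per item: N+1 round trips, each paying full network latency.
    async function loadOrdersOnePerCustomer(customerIds: string[]) {
      const orders = [];
      for (const id of customerIds) {
        const res = await fetch(`/api/customers/${id}/orders`); // hypothetical endpoint
        orders.push(await res.json());
      }
      return orders;
    }

    // One round trip, assuming a hypothetical batch endpoint exists.
    async function loadOrdersBatched(customerIds: string[]) {
      const res = await fetch(`/api/orders?customerIds=${customerIds.join(",")}`);
      return res.json();
    }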


> In fact, there are so many occasions where the optimisation is so good that the end user doesn't believe that anything happened. So you have to deliberately introduce delay because a computer has to feel like it thinks the same way you do.

I never liked this view. I can't think of a single legitimate use case that couldn't be solved better than by hiding your true capabilities, and thus wasting people's time.

> they probably blame their 4G signal for it

Sad thing is, enough companies thinking like this and the incentive to improve on 4G itself evaporates, because "almost nothing can work fast enough to make use of these optimizations anyway".


"I can't think of a single legitimate use case that couldn't be solved better than by hiding your true capabilities, and thus wasting people's time."

Consider a loading spinner with a line of copy that explains what's happening. Say it's for an action that can take anywhere from 20 milliseconds to several seconds, based on a combination of factors that are hard to predict beforehand. At the low end, showing the spinner will result in it flashing on the screen jarringly for just a frame. To the user it will appear as some kind of visual glitch since they won't have time to even make out what it is, much less read the copy.

In situations like this, it's often a good idea to introduce an artificial delay up to a floor that gives the user time to register what's happening and read the copy.


Wouldn't it be better to delay the appearance of the spinner, so it doesn't show at all for those fast operations?
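Something like that is a common pattern. Here's a minimal sketch (TypeScript) combining both ideas: don't show the spinner at all for fast operations, and once it is shown, keep it visible for a minimum time so it never flashes for a single frame. The function name and thresholds are made up:

    async function withSpinner<T>(
      work: Promise<T>,
      showSpinner: () => void,
      hideSpinner: () => void,
      showAfterMs = 150,   // don't show at all for fast operations
      minVisibleMs = 400,  // once shown, don't flash it away instantly
    ): Promise<T> {
      let shownAt: number | null = null;
      const timer = setTimeout(() => {
        shownAt = Date.now();
        showSpinner();
      }, showAfterMs);
      try {
        return await work;
      } finally {
        clearTimeout(timer);
        if (shownAt !== null) {
          const remaining = minVisibleMs - (Date.now() - shownAt);
          setTimeout(hideSpinner, Math.max(0, remaining));
        }
      }
    }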


This doesn't work well in apps, but games do incredible things to hide that state; it's partially a consequence of avoiding a patent on minigames inside loading screens.

e.g. back in the 90s with Resi 1, the loading screen was hidden by a slow and tense animation of a door opening. It totally fit the atmosphere.

Plenty of games add an elevator or a scripted vehicle ride, or some ridiculous door locking mechanism that serves the same purpose without breaking immersion, especially as those faux-loading screens can be dynamic.

It's pretty much the exact same technique used in cinema when a director wants to stitch multiple takes into a single shot (e.g. that episode in True Detective; that other one in Mr Robot; all of Birdman).


You can still end up with the jarring flash. Say you delay 100ms--if the action takes 120ms, you have the same problem.


Flash is good. If the state transition is "no indicator -> spinner -> checkmark", then if the user notices the spinner flashing for one frame, that only assures them the task was actually performed.

It's a real case, actually. I don't remember the name, but I've encountered this situation in the past, and that brief flash of an "in progress" marker was what I used to determine whether my clicking a "retry" button actually did something, or whether the input was just ignored. It's one of those unexpected benefits of predictability in UI coding; the fewer special cases there are, the better.


> In fact, there are so many occasions where the optimisation is so good that the end user doesn't believe that anything happened. So you have to deliberately introduce delay because a computer has to feel like it thinks the same way you do.

I see this argument coming up a lot, but this can be solved by better UX. Making things slow on purpose is just designers/developers being lazy.

Btw users feeling uneasy when something is "too fast" is an indictment of everything else being too damn slow. :D


Regions that use the metric system use liters per 100 kilometers. "The less fuel needed for the same distance, the better."


I’m sure some sort of instantaneous indicator (e.g. a checkmark icon appearing) could be used instead of inserting artificial delays.


> In fact, there are so many occasions where the optimisation is so good that the end user doesn't believe that anything happened

IMO it can be attributed more to bad UI than optimizations.


Everywhere but the US uses l/100km (which is a much better metric than MPG).


It's still used in the UK too, in our hybrid metric/imperial setup.


‘poor UI design, accidental inputs’

I use webpages for most of the social networking platforms such as Facebook. I am left-handed and scroll with my left thumb (left half of the screen). I have accidentally ‘liked’ people’s posts and sent accidental friend requests for this reason alone.

I'm guessing that, along with language selection, it might be helpful to have a hand-preference setting for mobile browsing.


I wonder how this trend will be affected by the slowing of Moore’s law. There will always be demand for more compute, and until now that’s largely been met with improvements in hardware. When that becomes less true, software optimization may become more valuable.


I don't know, that just feels wrong. If anything, the rise of mobile means there should be more emphasis on speed. All the bloat is because of misguided aesthetics (which all look the same, as if designers move between companies every year, which they do) and fanciness. Can you point to a newish app that is clearly better than its predecessor?


> All the bloat is because of misguided aesthetics (which all look the same, as if designers move between companies every year, which they do) and fanciness

That's not really true. Slack could be just as pretty and a fraction of the weight, if they hadn't used Electron.


I think there are two factors preventing mobile from being a force to drive performance optimizations.

One, phone OSes are being designed for single-tasked use. Outside of alarms and notifications in the background (which tend to be routed through a common service), the user can see just one app at a time, and mobile OSes actively restrict background activity of other apps. So every application can get away with the assumption that it's the sole owner of the phone's resources.

Two, given the above, the most noticeable problem is now power usage. As Moore's law has all but evaporated for single-threaded performance, hardware is now being upgraded for multicore and (important here) power performance. So apps can get away with poor engineering, because every new generation of smartphones has a more power-efficient CPU, so the lifetime on a single charge doesn't degrade.


> admittedly feature bloat, ads, tracking, etc. are also a problem, but I think they're mostly orthogonal to under-the-hood performance

I think for webpages it is the opposite: non-orthogonal in most cases.

If you disable your JS/Ad/...-blocker, and go to pages like Reddit, it is definitely slower and the CPU spikes. Even with a blocker, the page still does a thousand things in the first-party scripts (like tracking mouse movements and such) that slow everything down a lot.


I think objections like this may be put in terms of measurable cost-benefits but they often come down to the feeling of wasted time and effort involved in writing, reading and understanding garbage software.

Moreover, the same cost-equation that produces software that is much less efficient than it could be produces software that might be usable for its purpose (barely) but is much more ugly, confusing, and buggy than it needs to be.

That equation is: add the needed features, sell the software first, get lock-in, milk it 'till it dies, and move on. That equation is locally cost-efficient. Locally, that wins, and that produces the world we see every day.

Maybe, the lack of craftsmanship, the lack of doing one's activity well, is simply inevitable. Or maybe the race to the bottom is going to kill us - see the Boeing 737 Max as perhaps food for thought (not that software as such was to blame there but the quality issue was there).


This takes about 100 liters per 100 kilometers: https://en.wikipedia.org/wiki/M3_half-track

It does fill some other requirements that a regular car doesn't.


> If basic computer operations like loading a webpage took minutes rather than seconds...

Wait, are you implying they don't? What world do you live in, and how do I join?


The analogy is wrong as well because a car engine is used for a single purpose, moving the car itself. Imagine if you had an engine that powered a hundred cars instead, but a lot of those cars were unoptimized so you can only run two cars at a time instead of the theoretical 100.

or... something.

The car analogy does remind me of one I read a while ago, comparing cars and their cost and performance with CPUs.


RTFA:

>And build times? Nobody thinks compiler that works minutes or even hours is a problem. What happened to “programmer’s time is more important”? Almost all compilers, pre- and post-processors add significant, sometimes disastrous time tax to your build without providing proportionally substantial benefits.


FWIW, I did RTFA (top to bottom) before commenting. I chose to reply to some parts of the article and not others, especially the parts I felt were particularly hyperbolic.

Anecdotally, in my career I've never had to compile something myself that took longer than a few minutes (but maybe if you work on the Linux kernel or some other big project, you have; or maybe I've just been lucky to mainly use toolchains that avoid the pitfalls here). I would definitely consider it a problem if my compiler runs regularly took O(10mins), and would probably consider looking for optimizations or alternatives at that point. I've also benefited immensely from a lot of the analysis tools that are built into the toolchains that I use, and I have no doubt that most or all of them have saved me more pain than they've caused me.


Then you're being disingenuous in picking a quarter of the quote.

>You’ve probably heard this mantra: “Programmer time is more expensive than computer time.” What it means basically is that we’re wasting computers at an unprecedented scale. Would you buy a car if it eats 100 liters per 100 kilometers? How about 1000 liters? With computers, we do that all the time.

The point is that we are wasting all the resources at every scale. We are supposedly burning computer cycles because developer time is more important. Yet we are also burning developer time with compiling, or testing for interpreted languages, at a rate that is starting to approach the batch processing days.


Complaints about slow compilers or praise for toolchains being faster than others are very common, so I don't see how "nobody thinks" that.


I agree it's all slower and sucks. But I don't think it's solely a technical problem.

1/ What didn't seem to get mentioned was speed to market. It's far worse to build the "right" thing no one wants than to build the crappy thing that some people want a lot. As a result, it makes sense for people to leverage Electron--but it has consequences for users down the line.

2/ Because we deal with orders of magnitude with software, it's not actually a good ROI to deal with things that are under 1x improvement on a human scale. So what made sense to optimize when computers were 300MHz doesn't make sense at all when computers are 1GHz, given a limited time and budget.

3/ Anecdotally (and others can nix or verify), what I hear from ex-Googlers is that no one gets credit for maintaining the existing software or trying to make it faster. The only way you get promoted is if you created a new project. So that's what people end up doing, and you get 4 or 5 versions of the same project that do the same thing, all not very well.

I agree that the suckage is a problem. But I think it's the structure of incentives in the environment in which software is written that also needs to be addressed, not just the technical deficiencies of how we practice writing software, like how to maintain state.

It's interesting Chris Granger submitted this. I can see that the gears have been turning for him on this topic again.


I might strengthen your argument even more and say it's largely a non-technical problem. We have had the tools necessary to build good software for a long time. As others have pointed out, I think a lot of this comes down to incentives and the fact that no one has demonstrated the tradeoff in a compelling way so far.

I find it really interesting that no one in the future of programming/coding community has been able to really articulate or demonstrate what an "ideal" version of software engineering would be like. What would the perfect project look like both socially and technically? What would I gain and what would I give up to have that? Can you demonstrate it beyond the handpicked examples you'll start with? We definitely didn't get there.

It's much harder to create a clear narrative around the social aspects of engineering, but it's not impossible - we weren't talking about agile 20 years ago. The question is can we come up with a complete system that resonates enough with people to actually push behavior change through? Solving that is very different than building the next great language or framework. It requires starting a movement and capturing a belief that the community has in some actionable form.

I've been thinking a lot about all of this since we closed down Eve. I've also been working on a few things. :)


I'll take this opportunity to appreciate C# in VS as a counterexample to the article. Fast as hell (sub-second compile times for a moderately large project on my 2011 vintage 2500k), extremely stable, productive, and aesthetically pleasing. So, thanks.


It's very hard for me to get away from C# because it's just so crazy productive. The tooling is fantastic and the runtime performance is more than good enough.

One thing I found was that surprisingly the C# code I write outperforms the C++ code I used to write at equal development times.

I was good at C++, but the language has so many footguns and in general is so slow to develop in that I would stick to "simple" and straightforward solutions. I avoided multi-threading like the plague because it was just so hard to get right.

Meanwhile in C# it's just so easy to sprinkle a little bit of multithreading into almost any application (even command-line tools) that I do it "just because". Even if the single-threaded performance is not-so-great, the end result is often much better.

Similarly, it's easy to apply complex algorithms or switch between a few variants until something works well. In C++ or even Rust, the strict ownership semantics makes some algorithm changes require wholesale changes to the rest of the program, making this kind of experimentation a no-go.

The thing that blows my mind is the "modern" approach to programming that seems to be mostly young people pretending that Java or C# just don't exist.

Have you seen what JavaScript and Python people call "easy?" I saw a page describing a REST API based on JSON where they basically had thousands of functions with no documentation, no schema, and no typed return values. It was all "Just look at what the website JS does and reverse engineer it! It's so easy!"

I was flabbergasted. In Visual Studio I can literally just paste a WSDL URL into a form and it'll auto-generate a 100K-line client with async methods and strongly-typed parameters and return values in like... a second. Ditto for Linq-2-SQL or similar frameworks.


I've also been lurking in the FoC community, and haven't seen much articulation of the social and incentive structures that produce software. Do you think they'd be receptive to it?

And by "social and inventive structures", I'm assuming you're talking about change on the order of how open source software or agile development changed how we develop software?

While agile did address how to do software in an environment for changing requirements and limited time, we don't currently have anything that addresses an attention to speed of software, building solid foundations, and incentives to maintain software.

What would a complete system encompass that's currently missing in your mind?


This is very much a social and political problem. Will be interesting to see if us technical folks can solve it.


I think you would see great change if you were to look at the personalities around an opportunity.

Because it's never really about the problems; it's just perceived that way.

A certain challenge needs a specific set of personalities to solve it. That's the real puzzle.

Great engineers will never be able to solve things properly unless given the chance by those who control the surroundings.

We debate how we should develop, what method should be used, whether it's agile or lean. But maybe the problem starts earlier, and by focusing on exactly what methods and tools to use we miss out on the simplest solution even beginners can see.

For example, I am an architect; I tend not to touch the economics in a project. That's better suited to other people.

While I haven't read much about team-based development, I do want to be pointed to well-regarded literature about it. Maybe it's better called social programming, just another label for what we really do.

The one I miss the most at work is my wife. She clearly is the best reverse of me and makes me perform 1000x better. I find that very funny since she does not care about IT at all.


Maybe we should enforce some guidelines, and sponsor some programs to address these issues.

There are ways to develop working software, but not if it's all locked behind closed OSes and other bullshit.


The stuff I write I don't think is that bloated, but like most things these days, the stuff I write pulls in a bunch of dependencies which in turn pull in their own dependencies. The result: pretty bloated software.

Writing performant, clean, pure software is super appealing as a developer, so why don't I do something about the bloated software I write? I think a big part of it is it's hard to see the direct benefit from the very large amount of effort I'll have to put in.

Sure, I can write that one thing from that one library that I use myself instead of pulling in the whole library. It might be faster, I might end up with a smaller binary, and it might be more deterministic because I know exactly what it's doing. But it'll take a long time, it might have a lot of bugs, and forget about maintaining it. At the end of the day, do the people that use my software care that I put in the effort to do this? They probably won't even notice.


I think part of it is knowing how to use libraries. It's actually a good thing to make use of well-tested implementations a lot of the time rather than re-inventing the wheel: for instance, it would be crazy to implement your own cryptography functions, or your own networking stack in most cases. Libraries are good when they can encapsulate a very well-defined set of functionality behind a well-defined interface. Even better if that interface is arrived at through a standards process.

To me, where libraries get a bit more questionable is when they exist in the realm of pure abstraction, or when they try to own the flow of control or provide the structure around which your program should hang. For instance, with something like Ruby on Rails, it sometimes feels like you are trying to undo what the framework has assumed you need so that you can get the functionality you want. A good library should be something you build on top of, not something you carve your implementation out of.


A good compromise would be to replace bloated modules with alternatives that are leaner and have fewer nested dependencies.


Most developers I have known want to work on the great new thing. They don't want to spend a great deal of time on the project either. Forget about them wanting to dedicate time to software maintenance. Not sexy enough.


OK, but why? And what can we do to improve things? Promote maintenance, sure, but I think one of the issues is that you can show off something new; it's much more difficult to show that something could have gone wrong (failure, difficulty growing), but didn't.


To the extent it's in your power as a developer and a team member, don't tolerate low-performance code from yourself or your co-workers.

In my experience, a lot of performance problems boil down to really stupid things, like simple code using the wrong data structure out of convenience (e.g. linked lists instead of arrays for lots of randomly-accessed data), or code structured in a bad way (e.g. allocating a lot of small pieces of memory all the time). Oftentimes there are cheap performance wins to be had if you occasionally run the product through a profiler and spend a couple of hours fixing the most pressing issue that shows up. A couple of hours isn't much; there's enough slack in the development process to find those hours every month or two, without slowing down your regular work.
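To make the data-structure point concrete, here's a minimal sketch (TypeScript, not a rigorous benchmark) of random access into an array versus walking a hand-rolled linked list; the sizes and access pattern are arbitrary:

    type ListNode = { value: number; next: ListNode | null };

    const N = 100_000;
    const arr = Array.from({ length: N }, (_, i) => i);

    // Build an equivalent singly linked list.
    let head: ListNode | null = null;
    for (let i = N - 1; i >= 0; i--) head = { value: i, next: head };

    function nthFromList(n: number): number {
      let node = head!;
      for (let i = 0; i < n; i++) node = node.next!; // O(n) walk per access
      return node.value;
    }

    let t = performance.now();
    let sum = 0;
    for (let i = 0; i < 10_000; i++) sum += arr[(i * 7919) % N]; // O(1) per access
    console.log("array:", (performance.now() - t).toFixed(1), "ms");

    t = performance.now();
    sum = 0;
    for (let i = 0; i < 10_000; i++) sum += nthFromList((i * 7919) % N);
    console.log("list:", (performance.now() - t).toFixed(1), "ms"); // typically far slower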


I agree with your point of developers being responsible for the performance.

But I have a different experience (probably because we work in different areas):

Most of the performance problems of the products I ever worked on were purely systemic.

They boiled down to technologies and architectures having been chosen for "organizational" rather than technological reasons.

And "organizational" is in quotes because sometimes it was just blackmail: I worked with two developers who quit in protest after the prototype they wrote in Scala was deemed not good enough and dropped for... being too slow, ironically.


This has been a major frustration for me as a UI developer on the current application I work on. The UI is often hamstrung by how the backend API was implemented. There are frequently cases where we stitch together pre-existing API functionality to make something work in a far-from-ideal manner just because it would take longer to do it right and no one is interested.


I've seen a similar thing happen. It all started with good intentions, like only having simple endpoints that do "one and only one thing".

In the end the backend was pure and beautiful, but the frontend devs had to perform joins in the client and make 21 API calls for a 20-item list, and then everything goes to hell.


Ah, I can tell you such stories of a stack that evolved solely out of incompetence...


It's not a technical problem at all. It's an economics problem.


From a Reddit comment:

> While I do share the general sentiment, I do feel the need to point out that this exact page, a blog entry consisting mostly of just text, is also half the size of Windows 95 on my computer and includes 6MB of javascript, which is more code than there was in Linux 1.0. Linux at that point already contained drivers for various network interface controllers, hard drives, tape drives, disk drives, audio devices, user input devices and serial devices, 5 or 6 different filesystems, implementations of TCP, UDP, ICMP, IP, ARP, Ethernet and Unix Domain Sockets, a full software implementation of IEEE754 a MIDI sequencer/synthesizer and lots of other things.

>If you want to call people out, start with yourself. The web does not have to be like this, and in fact it is possible in 2018 to even have a website that does not include Google Analytics.

https://www.reddit.com/r/programming/comments/9go8ul/comment...


The sad thing is, to quote The Website Obesity Crisis[1],

> Today’s egregiously bloated site becomes tomorrow’s typical page, and next year’s elegantly slim design.

[1] https://idlewords.com/talks/website_obesity.htm


Since this Reddit comment was made, the Twitter iframe responsible for the megabytes of JavaScript has been replaced by a <video> tag. The only JavaScript left on the page is Google Analytics, which is way less than 6MB.


I feel bad now that my comment received so much attention. I didn’t realize that the Reddit comment was made a year ago, and I should have tested the webpage size myself. The author’s argument is still important, after all.


And this really wasn't the author's fault—it's completely logical that if your story contains a tweet, you should attempt to embed it in the way Twitter recommends.

This is Twitter, not some random framework!


It fits in just under an MB instead of 6MB


Long ago I watched a documentary about the early Apple days, when management was encouraging their developers to reduce boot times by 10 seconds. The argument was that 10 seconds multiplied by the number of boot sequences would result in saving many human lives worth of time.

Edit: found a link with the same story: https://www.folklore.org/StoryView.py?story=Saving_Lives.txt

The software world needs more of this kind of thinking. Not more arguments like "programmer's time is worth less than CPU time", which often fail to account for all externalities.
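The arithmetic behind that story is worth spelling out. A tiny sketch with assumed numbers (the real figures in the folklore.org account differ):

    // Assumed numbers, purely to illustrate the "lifetimes saved" arithmetic.
    const users = 5_000_000;          // assumed install base
    const secondsSavedPerBoot = 10;   // the boot-time reduction
    const bootsPerUserPerDay = 1;     // assumed one boot per user per day
    const savedPerYear = users * secondsSavedPerBoot * bootsPerUserPerDay * 365;
    const lifetimeSeconds = 70 * 365 * 24 * 3600; // ~70-year lifetime
    console.log(savedPerYear / lifetimeSeconds);  // ≈ 8 lifetimes' worth of time per year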


I like the "human lifetimes wasted"-metric. It's interesting to think that a badly optimized piece of code used by a few million people basically kills a few each day. If every manager, client and programmer thought for a second if the 30min they save is worth the human lifespans wasted I think we'd have better software.


I wish more companies thought like this in general. I often think about the nature of the work I'm doing as a developer and wonder if it's making society better off as a whole. The answer is usually a resounding no.


Same here, but why exactly?

In my country, software engineering is one of the best careers in terms of income, and I bet it is similar in most other countries. Why do we deserve that much buzz/fame/respect/income if the work we are doing is NOT making the society better?

These thoughts just haunt me from time to time.


> Why do we deserve that much buzz/fame/respect/income if the work we are doing is NOT making the society better?

I understand that you're asking a theoretical question, not a practical one, but in practical terms the answer is fairly simple. Our economy is not built to (indeed, is built not to) reward individuals in line with what they contribute to society. An entirely different set of incentives are what structure our economy, and therefore the jobs and lives of most people.

In some sense, David Graeber's Bullshit Jobs is all about the widespread awareness (and denial) of this phenomenon, and what caused it. I wouldn't say it's a perfect book but it's the best one I've read on the subject.


That's obvious. It's work that by definition reaches many others automatically and acts faster than humans, with less human intervention, so it saves work. Anything that saves time/money and has this multiplication effect will generate tons of cash. No wonder we catch a part of it.

Edit: in other simpler words, it's useful and scales fine.


They could think like this if it became part of their cost structure. There's no reason for them to think like this other than in terms of profit & loss.


I think my work makes society some infinitesimal amount better.


That's an important comment and made me think that nobody here has mentioned climate change (where human lives are/will be affected, literally). There is an emerging movement toward low-carbon, low-tech, sustainable web design, but it's still very much fringe. To make it mainstream, we all need to work on coming up with better economic incentives.


This implies that time not spent using their software is time wasted doing nothing. Not that reducing boot times would be a bad thing, but that sounds more like a marketing gimmick. As kids we would wait forever for our Commodore 64 games to load - knowing this, we planned accordingly.


If the cost of that boot time was somehow materialized upstream - e.g. if companies that produced OSes had to pay for the compute resources they used, rather than the consumer paying for the compute - then economics would solve the problem.

As it is, software can largely free ride on consumer resources.


"...would result in saving many human lives worth of time."

Meh, this is manager-speak for "saving human lives", which they definitely were not. They weren't saving anybody. I mean, there's an argument that, in the modern day, 2020, time away from the computer is better spent than time on a computer; so a faster boot time is actually worse than a slower boot time. Faster boot time is less time with the family.

Good managers, like Steve Jobs was, are really good at motivating people using false narratives.


Performance is one thing, but I'm really just struck by how often I run into things that are completely broken or barely working for extended periods of time.

As I write this, I've been trying to get my Amazon seller account reactivated for more than a year, because their reactivation process is just... broken. Clicking any of the buttons, including the ones to contact customer support, just takes you back to the same page. Attempts to even try to tell someone usually put you in touch with a customer service agent halfway across the world who has no clue what you're talking about and doesn't care; even if they did care, they'd have no way to actually forward your message along to the team that might be able to spend the 20 minutes it might take to fix the issue.

The "barely working" thing is even more common. I feel like we've gotten used to everything just being so barely functional that it isn't even a disadvantage for companies anymore. We usually don't have much of an alternative place to take our business.


Khan Academy has some lessons aimed at fairly young kids—counting, spotting gaps in counting, things that simple. I tried to sit with my son on the Khan Academy iPad app a few weeks ago to do some with him, thinking it'd be great. Unfortunately it is (or seemed to be to such a degree that I'm about 99% sure it is) janky webtech, so glitches and weirdness made it too hard for my son to progress without my constantly stepping in to fix the interface. Things like: no feedback that a button's been pressed? Guess what a kid (or hell, an adult) is gonna do? Hammer the button! Which... then keeps it greyed out once it does register the press, but doesn't ever progress, so you're stuck on the screen and have to go back and start the lesson over. Missed presses galore, leading to confusion and frustration that nothing was working the way he thought it was (and was, in fact, supposed) to work.

I don't mean to shit on Khan Academy exactly because it's not like I'm paying for it, but those lessons may as well not exist for a 4 year old with an interface that poor. It was bad enough that more than half my time intervening wasn't to help him with the content, nor to teach him how to use the interface, but to save him from the interface.

This is utterly typical, too. We just get so used to working around bullshit like this, and we're so good at it and usually intuit why it's happening, that we don't notice that it's constant, especially on the web.
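For what it's worth, the missing behavior being described isn't hard to get right. A minimal sketch (TypeScript, DOM assumed) of immediate press feedback plus a double-press guard that never leaves the button stranded:

    function wireButton(button: HTMLButtonElement, action: () => Promise<void>) {
      button.addEventListener("click", async () => {
        if (button.disabled) return;     // ignore the hammering
        button.disabled = true;          // visible feedback right away
        button.classList.add("pressed");
        try {
          await action();
        } finally {
          // Never strand the user on a greyed-out, dead screen.
          button.disabled = false;
          button.classList.remove("pressed");
        }
      });
    }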


Send bug reports in to Khan Academy if you get a chance.


I'd love to see a software-industry-wide quality manifesto. The tenets could include things like:

* Measure whether the service you provide is actually working the way your customers expect.

(Not just "did my server send back an http 200 response", not just "did my load balancer send back an http 200", not just "did my UI record that it handled some data", but actually measure: did this thing do what users expect? How many times, when someone tried to get something done with your product, did it work and they got it done?)

* Sanity-check your metrics.

(At a regular cadence, go listen for user feedback, watch them use your product, listen to them, and see whether you are actually measuring the things that are obviously causing pain for your users.)

* Start measuring whether the thing works before you launch the product.

(The first time you say "OK, this is silently failing for some people, and it's going to take me a week to bolt on instrumentation to figure out how bad it is", should be the last time.)

* Keep a ranked list of the things that are working the least well for customers the most often.

(Doesn't have to be perfect, but just the process of having product & business & engineering people looking at the same ranked list of quality problems, and helping them reason about how bad each one is for customers, goes a long way.)
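As a minimal sketch of the first tenet (TypeScript; the metrics endpoint and event shape are invented for illustration): record whether the user-visible task completed, not just that an HTTP call returned 200.

    type TaskOutcome = { task: string; ok: boolean; reason?: string };

    function reportOutcome(event: TaskOutcome): void {
      // Fire-and-forget beacon to a hypothetical metrics endpoint.
      navigator.sendBeacon("/metrics/task-outcome", JSON.stringify(event));
    }

    async function trackTask<T>(task: string, work: () => Promise<T>): Promise<T> {
      try {
        const result = await work();
        reportOutcome({ task, ok: true });
        return result;
      } catch (err) {
        reportOutcome({ task, ok: false, reason: String(err) });
        throw err;
      }
    }

    // Usage: wrap the thing the user is actually trying to get done, e.g.
    // trackTask("checkout", () => submitOrderAndWaitForConfirmation(cart));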


You might be interested in the Software Craftsmanship manifesto [0]. There are many communities and initiatives around the world gathering folks interested in producing high-quality software. Based on the few folks I have been working with who are involved in SC, I can definitely recommend the movement, and I'm exploring options for joining some local meet-ups and/or events.

[0] http://manifesto.softwarecraftsmanship.org/


This is also one of my pet peeves. It's easier than ever to collect this data and analyse it. Unfortunately, most of our clients are doing neither, or they are collecting the logs but carefully ignoring them.

I've lost count of the number of monitoring systems I've opened up just to see a wall of red tapering off to orange after scrolling a couple of screens further down.

At times like this I like to point out that "Red is the bad colour". I generally get a wide-eyed uncomprehending look followed by any one of a litany of excuses:

- I thought it was the other team's responsibility

- It's not in my job description

- I just look after the infrastructure

- I just look after the software

- I'm just a manager, I'm not technical

- I'm just a tech, it's management's responsibility

Unfortunately, as a consultant I can't force anyone to do anything, and I'm fairly certain that the reports I write that are peppered with fun phrases such as "catastrophic risk of data corruption", "criminally negligent", etc... are printed out only so that they can be used as a convenient place to scribble some notes before being thrown in the paper recycling bin.

Remember the "HealthCare.gov" fiasco in 2013? [1] Something like 1% of the interested users managed to get through to the site, which cost $200M to develop. I remember the Obama got a bunch of top guys from various large IT firms to come help out, and the guy from Google had an amazing talk a couple of months later about what he found.

The takeaway message for me was that the Google guy's opinion was that the root cause of the failure was simply that: "Nobody was responsible for the overall outcome". That is, the work was siloed, and every group, contractor, or vendor was responsible only for their own individual "stove-pipe". Individually each component was all "green lights", but in aggregate it was terrible.

I see this a lot with over-engineered "n-tier" applications. A hundred brand new servers that are slow as molasses with just ten UAT users, let alone production load. The excuses are unbelievable, and nobody pays attention to the simple unalterable fact that this is TEN SERVERS PER USER and it's STILL SLOW!

People ignore the latency costs of firewalls, as one example. Nobody knows about VMware's "latency sensitivity tuning" option, which is a turbo button for load balancers and service bus VMs. I've seen many environments where ACPI deep-sleep states are left on, and hence 80% of the CPU cores are off and the other 20% are running at 1 GHz! Then they buy more servers, reducing the average load further and simply end up with even more CPU cores powered off permanently.

It would be hilarious if it weren't your money they were wasting...

[1] https://en.wikipedia.org/wiki/HealthCare.gov#Issues_during_l...


Jonathan Blow did a really interesting talk about this topic:

https://www.youtube.com/watch?v=pW-SOdj4Kkk

His point is basically that there have been times in history when the people who were the creative force behind a technology died off without transferring that knowledge to anyone else; we're left running on inertia for a while before things really start to regress, and there are signs that we may be going through that kind of moment right now.

I can't verify these claims, but it's an interesting thing to think about.


This is an interesting talk, thank you. What frightens me, is that the same process could be happening in other fields, for example, medicine. I really hope we won't forget how to create antibiotics one day.


I have a feeling, however, that this is in fact not broken but working exactly as intended: a corporate dark pattern to gently "discourage" problem customers from contacting them.


I feel like the entire implementation of AWS is designed to sell premium support. There is so much missing documentation, and so many arbitrary details you have to know, that you almost need a way to ask for help just to get anything working.


This usually happens with ad blockers. They somehow mess up a page, and then you get angry customers saying the page doesn't work for them.

We need a solution to this mess. So far I've seen popups (of all things) letting users know they should disable the ad blocker, but that's not a solution. Ideally websites shouldn't break when ad blockers are enabled, yet I've seen sites whose core product depends on ad blocking being disabled. Strange/chaotic times we live in.


"...how often I run into things that are completely broken..."

That's because the shotgun approach (sic 40 developers on a single problem, I don't care how they dole out the workload) works well for most low-stakes, non-safety-critical software.

So a reactivation portal for your Amazon seller account is very low stakes. But Boeing treating the 737 MAX the same way would be (and was) a very bad idea.

Because that low-stakes approach is extremely bug prone.


I think it's also a problem with the culture of a lot of software practices. There's a tendency to navel-gaze around topics like TDD and code review to make sure you're doing Software Development(tm) effectively, without a lot of attention to the actual product or user experience. In other words, code quality over product quality.


He has a nice follow-up which gets into the reasons why:

https://tonsky.me/blog/good-times-weak-men/

Another take: rewrites and rehashes tend to be bad because they are not exciting for programmers. Everything you're about to write is predictable, nothing looks clearly better, and it all feels forced. First versions of anything are exciting: the possibilities are endless, and even if the choices along the way are suboptimal, people are willing to make it work right.


He hints at Electron in the end, but I think the real blame lies with React, which has become the standard over the past five years.

Nobody has any fucking idea what’s going on in their react projects. I work with incredibly bright people and not a single one can explain accurately what happens when you press a button. On the way to solving UI consistency it actually made it impossible for anyone to reason about what’s happening on the screen, and bugs like the ones shown simply pop up in random places, due to the complete lack of visibility into the system. No, the debug tooling is not enough. I’m really looking forward to whatever next thing becomes popular and replaces this shit show.


I completely agree, here. React has replaced the DOM, and it's pretty fast, pretty efficient when you understand its limitations... but when you start rendering to the canvas or creating SVG animation from within react code, everything is utterly destroyed. Performance is 1/1000 of what the platform provides. I have completely stopped using frameworks in my day-to-day, and moved my company to a simple pattern for updatable, optionally stateful DOM elements. Definitely some headaches, some verbosity, and so forth. But zero tool chain and much better performance, and the performance will improve, month-by-month, forever.


It seems to me that using your react components to render SVG animations, or to canvas, is just inviting disaster.


Well yeah. But I've seen it done; the attitude being "this is fine, React is fast, it works on my Mac..."

Can you expand? You actually convinced other folks to stop using React and go back to writing DOM-manipulating VanillaJS?


No one's going back to the messy spaghetti-generating "just-jQuery" pattern of yore.

I devised a way of using plain closures to create DOM nodes and express how the nodes and their descendants should change when the model changes. So a component is a function that creates a node and returns a function not unlike a ReactComponent#render() method:

props => newNodeState

When called, that function returns an object describing the new state of the node. Roughly:

{ node, className: 'foo', childNodes: [ childComponent(props) ] }

So, it's organized exactly like a react app. A simple reconciliation function (~100 lines) applies a state description to the topmost node and then iterates recursively through childNodes. A DOM node is touched if and only if its previous state object differs from its current state object - no fancy tree diffing. And we don't have to fake events or any of that; everything is just the native platform.
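
Very roughly, the shape of it is something like this (a heavily simplified, illustrative sketch written from memory, not the real ~100-line reconciler; the names are made up):

    // A "component" creates its DOM node once and returns an update function.
    function Label() {
      const node = document.createElement('span');
      return props => ({ node, className: props.tone, textContent: props.text });
    }

    // Minimal reconcile: apply only the properties the state object names,
    // then recurse into childNodes. The real thing also handles attributes,
    // event listeners, node insertion/removal, etc.
    function reconcile(state, prevState = {}) {
      const { node, childNodes = [], ...props } = state;
      for (const [key, value] of Object.entries(props)) {
        if (prevState[key] !== value) node[key] = value; // touch the DOM only on change
      }
      childNodes.forEach((child, i) => {
        reconcile(child, (prevState.childNodes || [])[i] || {});
        if (child.node.parentNode !== node) node.appendChild(child.node);
      });
      return state; // keep around to compare against on the next update
    }

    // Usage: create once, then re-run the update function whenever the model changes.
    const label = Label();
    let prev = reconcile(label({ tone: 'ok', text: 'Saved' }));
    document.body.appendChild(prev.node); // attach the (single) real node once
    prev = reconcile(label({ tone: 'warn', text: 'Unsaved changes' }), prev);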

I implemented an in-browser code editor this way. Syntax highlighting, bracket matching, soft wrap, sophisticated selection, copy/cut/paste, basic linting, code hints, all of it... It edits huge files with no hint of delay as you type, select, &c. It was a thorough test of the approach.

Also, when we animate something, we can hook right in to the way reconciliation works and pass control of the update to the component itself, to execute its own transition states and then pass control back to the reconcile function... This has made some really beautiful things possible. Fine grained control over everything - timing, order, &c. - but only when you want it.

Sorry for the wall of text.


I am sorry, but I don't fully understand. To me it sounds like you are describing exactly tree diffing when you say that the next node is only touched if its state object changed.

I have been through this struggle too. Of wanting to get rid of bloated tools and tools I don't understand, and the best I've found for this is Hyperapp. I've read the source code a few times (was thinking about patching it to work well with web components), so I feel it falls into a category of tools I can use. But I'm genuinely interested in understanding what you've done if it offers an alternative (even if more clunky).


>>> it sounds like you are describing exactly tree diffing

The object returned by the function expresses a tiny subset of the properties of a DOM node. Often just {className, childNodes: [...]}. Only those explicit attributes are checked for update or otherwise dealt with by my code. My code has no idea that a DOM node has a thousand properties. By contrast, a ReactComponent is more complex from a JS POV than a native DOM node.

In other words, if my code returns: {className: 'foo'} at time t0, and then returns {} at time t1, the className in the DOM will be 'foo' at t0 and at t1. That is not at all how exhaustive tree diffs work, and not at all how react works.

With 5,000 nodes, you might have 8K-15K property comparisons. Per-render CPU load thus grows linearly and slowly with each new node. I can re-render a thousand nodes in 2-5 milliseconds with no framework churn or build steps or any of that. But more importantly, we have the ability to step into "straight-to-canvas" mode (or whatever else) without rupturing any abstractions and without awkward hacks.

This is unidirectional data flow plus component-based design/organization while letting the DOM handle the DOM: no fake elements, no fake events -- nothing but utterly fast strict primitive value comparisons on shallow object properties.

EDIT: Earlier I said that a node changes if and only if its state description changed; that is not strictly true. "if and only if" should just be "only if".


This makes a lot of sense. It's essentially giving up some "niceness" that React gives to make it faster and closer to the metal. That sounds like a critique, but that's what this whole thread is about, and one way to approach something I've also given a lot of thought.

To do this, I imagine you will have to do certain things manually. I guess you can't just have functions that return a vdom, because, as you say, the absence of a property doesn't mean the library will delete it for you. So do you keep the previous vdom? Patch it manually and then send it off to update the elements? ... I guess it's a minor detail. Doesn't matter.

Interesting approach, thanks for sharing! I will definitely spend some time looking into it. Encouraging that it seems to be working out for you :)


To answer your technical question: You can approach it one of two ways (I've done both). The first you hinted at. You can keep the last state object handy for the next incoming update and compare (e.g.) stateA.className against stateB.className, which is extremely fast. But you have an extra object in memory for every single node, which is a consideration. You can also just use the node itself and compare state.className to node.className. Turns out this is ~90-100% as fast ~95% of the time, and sips memory.

If you're thinking, "wait, compare it to the DOM node? That will be slow!" - notice that we're not querying the DOM. We're checking a property on an object to which we have a reference in hand. I can run comparisons against node.className (and other properties) millions of times per second. Recall that my component update functions return an object of roughly the form:

{ node, className: 'foo', childNodes: [...], ... }

That first property is the DOM node reference, so there's no difficulty in running the updates this way. Things are slower when dealing with props that need to be handled via getAttribute() and setAttribute(), but those cases are <10%, and can be optimized away by caching the comparison value to avoid getAttribute(). There are complications with numerical values which get rounded by the DOM and fool your code into doing DOM updates that don't need to happen, but it's all handle-able.
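
In sketch form, that second variant (comparing straight against the live node, so nothing extra is retained per node) looks roughly like this -- illustrative only, not the code in the gist below:

    // Compare the new state directly against the node we already hold a reference to.
    function applyState(state) {
      const { node, childNodes = [], ...props } = state;
      for (const [key, value] of Object.entries(props)) {
        if (node[key] !== value) node[key] = value; // a property read on a held reference, not a DOM query
      }
      childNodes.forEach(applyState);
    }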

Here's a quick gist: https://gist.github.com/jeffmcmahan/8d10c579df82d32b13e2f449...


Maybe React has the advantage as the project grows? From what I understand it batches updates from all the different components on the page, avoiding unnecessary reflows that might easily creep in when you do things the old-fashioned way.


It makes it even slower, and you have to manually optimize component renders. Sure, you get fewer reflows due to DOM diffing, but far higher CPU time.


>I’m really looking forward to whatever next thing becomes popular and replaces this shit show.

I'm with you, but motivation to really learn a system tanks when there's something else on the horizon. And what happens when new-thing appears really great for the first 1-2 years, but goes downhill and we're back to asking for its replacement only 5 years after its release? That tells me we're still chasing 'new', but instead of a positive 'new', it's a negative one.

This was also reinforced constantly by people claiming you'll be unemployable if you aren't riding the 'new' wave or doing X amount of things in your spare time.

It's a natural consequence of an industry that moves quickly. If we want a more stable bedrock, we MUST slow down.


What is better? jQuery? It comes with its own can of worms, and React's designers had solid reasons to migrate away from immediate DOM modification. In general, UI is hard. Nice features like compositing, variable-width fonts, reflow, etc. come with underlying mechanisms that are pretty complicated, and once something behaves differently from expectations it can be hard to understand why.


UI is hard because you're using a hypertext language with fewer features than were standard in the 60s, then styling on top of that, then a scripting language on top of that.

I've been reading Computer Lib/Dream Machines over the holidays, and I wonder where it all went so wrong.


Free markets hate good software. "Good" meaning secure, stable, and boring.

On both ends.

Software developers hate boring software for pragmatic HR-driven career reasons and because devs are apes and apes are faddish and like the shiny new thing.

And commercial hegemony tends to go to the companies that slap something together with duct tape and bubble gum and rush it out the door.

So you get clusterfucks like Unix winning out against elegantly designed Lisp systems, and clusterfucks like Linux winning out against elegantly designed Unix systems, and clusterfucks like Docker and microservices and whatever other "innovations" "winning out" over elegantly designed Linux package management and normal webservers and whatnot.

At some point someone important will figure out that no software should ever need to be updated for any reason ever, and a software update should carry the same stigma as...I don't know...adultery once carried. Or an oil spill. Or cooking the books. Whatever.

But then also it's important to be realistic. If anyone ever goes back and fixes any of this, well, a whole lot of very smart people are going to go unemployed.

Speaking of which...

https://www-users.cs.york.ac.uk/susan/joke/cpp.htm


Free markets hate unchanging software. Software churn generates activity and revenue, and the basic goal of the game is to be the one controlling the change. Change is good when you have your hands on the knobs and levers, bad when someone else does. Organizations try to steer their users away from having dependencies on changes they don't control. "You're still using some of XYZ Corp's tools along with ABC's suite? In the upcoming release, ABC will help you drop that XYZ stuff ..."


That brings to mind a common computer-scientist fallacy: that elegance is an end in itself. Elegance may share some properties with what makes a solution practical, but unfortunately it is not practical in itself.

Recursive solutions are more elegant, but you still use an explicit stack and a while loop so you don't smash the call stack.


Scheme is properly tail-recursive and has been around since 1975. Most (all?) Common Lisp implementations have proper tail recursion. Clojure has tail call optimization for simple cases and only if you explicitly ask for it, but it gets you most of the way there most of the time.

So there are reasons to prefer more imperative languages and their systems, but stack-smashing isn't one of them.


jQuery: 88KB, standard everywhere, one entity responsible for all of it, people know what it is and what it does, if it breaks you know what went wrong and who to blame.

Literally anything built with NPM: megabytes? tens of megabytes? in size, totally inscrutable, code being pulled in from hundreds of megabytes of code in tens of thousands of packages from hundreds or thousands of people of unknown (and unknowable) competence and trustworthiness; if it breaks, not only do you not know who to blame, but you probably have literally no idea what went wrong.

Yeah, jQuery was probably better.


It depends, as always. The problem React was originally solving was that DOM updates cause re-rendering, which can be slow; jQuery (usually) works directly on the DOM, so applications heavy in updates don't perform well.

So initially an equivalent React and jQuery app would have React look a lot faster, due to smart / batched DOM updates. However, because React is so fast it made people create apps differently.

As always in software development, an application will grow to fill up available performance / memory. If people were to develop on intentionally constricted computers they would do things differently.

(IIRC, at Facebook they'll throttle the internet on some days to 3G speeds to force this exact thing. Tangentially related, at Netflix (IIRC) they have Chaos Monkey, which randomly shuts down servers and causes problems, so errors are a day-to-day thing instead of an exception they haven't foreseen.)


That's a problem with the npm ecosystem.

React is just so, so much nicer to work with. It's easy to be dismissive if you've never had to develop UIs with jQuery and didn't experience the transition to React yourself, which is a million times better in terms of developer experience.

I feel like people that don't build UIs themselves think of them too much in a completely functional way as in "it's just buttons and form inputs that do X", and forget about the massive complexity, edge cases, aesthetic requirements, accessibility, rendering on different viewports, huge statefulness, and so on.


Old is better is just not true here. React is a dream. Synthetic eventing, batched updates, and DOM node re-use are so good. I rolled my own DOM renderer recently and remembered a lot of problems from the past that I would not like to re-visit.


Yes, you're absolutely right: React itself is great. But React is part of the NPM ecosystem; try using one without the other.

And then if you're still feeling cocky try finding someone else who uses one without the other.


Write your own framework-like code with just jQuery and watch it turn into a pile of mush. React is many things, but it is absolutely better than jQuery or Backbone. People always mis-use new technology; that isn't React's fault.


My whole argument is that _it is_. I don’t know why we are comparing to jQuery though, they are not replacements for each other.


To an extent, UI was solved in 1991 by Visual Basic. Yes, complex state management is not the best in a purely event-based programming model. Yes, you didn’t get a powerful document layout engine seamlessly integrated to the UI. Yes, theming your components was more difficult. And so on. But… if the alternative is what we have now? I’m not sure.


One big disadvantage with Visual Basic (and similar visual form designers) is that you can't put the result in version control and diff or merge it in any meaningful way.


I think my favourite fact(oid) to point out here would be that the React model is essentially the same thing as the good ol' Windows GUI model. The good ol' 1980s Windows, though perhaps slightly more convenient for developers. See [0].

I think it's good to keep that in mind as a reference point.

--

[0] https://www.bitquabit.com/post/the-more-things-change/


If webdev is going to go through all the iterations of GUI development... oh boy, there are decades of frameworks ahead.


It's just the underlying model that is similar, but React is pretty good at abstracting all that (unlike Win32).

When it comes to developer experience I'd say that React and company are ahead of most desktop UI technologies, and have inspired others (Flutter, SwiftUI).


So where's the RAD React tooling? Is that a thing yet?


Apparently there's React Studio, BuilderX, and tools like Sketch2React and Figma to React. Ionic Studio will probably support React in the near future (maybe it already does).


This a thousand times. It's amazing how each new layer of abstraction becomes the smallest unit of understanding you can work with. Browser APIs were the foundation for a while, then DOM manipulation libs like jquery, and now full blown view libraries and frameworks like react and angular.

I wrote a little bit more about my thoughts on the problem here: https://blog.usejournal.com/you-probably-shouldt-be-using-re...


Flutter is a very good bet IMO. It uses Dart, which was designed from the ground up to be a solid front-end language instead of building on top of JS. The underlying architecture of Flutter is clearly articulated and the error messages are informative. It still seems a bit slow and bloated in some respects, but it is getting better every day, and I think their top-down control of the stack is going to let them trim it all the way down.


React is super simple - I could implement the same API from memory, so I don't think it's the root of the problem.

> Nobody

Speak for yourself


I take it you’re thinking of virtual DOM only, which is not the problem, or the component class which hides all of the details.

React is huge; it's unlikely you'll implement the synthetic events, lifecycle hooks, bottom-up updates, context, hooks with their magical stacks, rendering "optimizations", and all of React's specific warts.

There are simple reimplementations like hyperapp and preact and I completely recommend using those instead. I really meant React the library and ecosystem are at fault, not the general model.


I've never used React, but my guess is that it's pretty simple to use, while most people using it don't know what happens behind the scenes

(which is not specific to React but more like an issue for any framework that tries to do everything)


This doesn't seem unique to React projects. Can anyone explain what is happening under the hood in their Angular projects? How about Vue? It seems to be a failing of all major UI frameworks, lots of complexity is abstracted away.


If someone's starting a new website project (that has potential to become quite complex), what would you recommend is the best frontend technology to adapt then?


Yes, both appear to be a disaster. Vue.js is a bit better IMO, but I'm generally holding out for the next thing.


... which is https://svelte.dev


It might be the next big thing, but Svelte doesn't solve the problem outlined in the root of this subthread: nobody has any idea what the fuck is going on.

I like Svelte, the simplicity of programming in it is great, and it has several advantages compared to React. But I have no idea how it works, past a point of complexity. Like, yes: I can run the compiler and check out the JS it generates, same as I can do in React. For simple components, sometimes the compiled code even makes sense. But when I introduce repeated state mutations or components that reference each other, I no longer know what's going on at all, and I don't think I'm alone in this.

Svelte might be an improvement in ergonomics (and that's a good and much needed thing!) but it does nothing to answer the obscurity/too-far-up-the-abstraction-stack-itis that GP mentioned. The whole point of that is frameworks/abstraction layers that tell you "you don't need to understand what's going on below here" are . . . maybe not lying, exactly, but also not telling the whole truth about the costs of operating at that level of both tooling abstraction and developer comprehension.


More likely to be web components. Then you can use your web components in Svelte, React, Angular, Vue, etc projects.


Time is money and engineers aren't given time to properly finish developing software before releases.

Add to this the modern way of being able to hotfix or update features and you will set an even lower bar for working software.

The reason an iPod didn't release with a broken music player is that back then forcing users to just update their app/OS was too big an ask. You shipped complete products.

Now a company like Apple even prides itself by releasing phone hardware with missing software features: Deep Fusion released months after the newest iPhone was released.

Software delivery became faster and it is being abused. It is not only being used to ship fixes and complete new features, but it is being used to ship incomplete software that will be fixed later.

As a final sidenote while I'm whining about Apple: as a consultant in the devops field with an emphasis on CI/CD, the relative difficulty of using macOS in a CI/CD pipeline makes me believe that Apple has a terrible time testing its software. This is pure speculation based on my own experience. A pure Apple shop has probably solved many of the problems and hiccups we might run into, which is why I said "relative difficulty".


Yet somehow, it seems to me that most software - including all the "innovative" hot companies - are mostly rewriting what came before, just in a different tech stack. So how come nobody wants to rewrite the prior art to be faster than it was before?


Rewrites can be really amazing if you incentivize them that way. It's really important to have a solid reason for doing a rewrite, though. But if there are good reasons, the problem of zero (or < x) downtime migrations is an opportunity to do some solid engineering work.

Anecdotally, a lot of rewrites happen for the wrong reasons, usually NIH or churn. The key to a good rewrite is understanding the current system really well; without that it's very hard to work with it, let alone replace it.


He seems to make a contradictory point... he complains:

> iOS 11 dropped support for 32-bit apps. That means if the developer isn’t around at the time of the iOS 11 release or isn’t willing to go back and update a once-perfectly-fine app, chances are you won’t be seeing their app ever again.

but then he also says:

> To have a healthy ecosystem you need to go back and revisit. You need to occasionally throw stuff away and replace it with better stuff.

So which is it? If you want to replace stuff with something better, that means the old stuff won't work anymore... or, it will work by placing a translation/emulation layer around it, which he describes as:

> We put virtual machines inside Linux, and then we put Docker inside virtual machines, simply because nobody was able to clean up the mess that most programs, languages and their environment produce. We cover shit with blankets just not to deal with it.

Seems like he wants it both ways.


And yet at the time of its release, iOS 11 was the most buggy version in recent memory. (This record has since been beaten by iOS 13.)

I don't quite know what's going on inside Apple, but it doesn't feel like they're choosing which features to remove in a particularly thoughtful way.

---

Twenty years ago, Apple's flagship platform was called Mac OS (Mac OS ≠ macOS), and it sucked beyond repair. So Apple shifted to a completely different platform, which they dubbed Mac OS X. A slow and clunky virtualization layer was added for running "classic" Mac OS software, but it was built to be temporary, not a normal means of operation.

For anyone invested in the Mac OS platform at the time, this must have really sucked. But what's important is that Apple made the transition once! They realized that a clean break was essential, and they did it, and we've been on OS X ever since. There's a 16-year-old OS X app called Audio Slicer which I still use regularly in High Sierra. It would break if I updated to Catalina, but, therein lies my problem with today's Apple.

If you really need to make a clean break, fine, go ahead! It will be painful, but we'd best get it over with.

But that shouldn't happen more than once every couple decades, and even less as we get collectively more experienced at writing software.


I think that's not quite the point in the article. The idea is, in my reading, that we've built lazily on castles of sand for so long that sometimes we think it makes sense to throw away things we shouldn't, and other times we obsessively wrap/rewrap/paper over things we should throw away. What falls into each category is obviously debatable, but the author seems to be critiquing the methodology we use to make those decision--debatable or not, people aren't debating it so much as they're taking the shortest and often laziest path without prioritizing the right things (efficiency, consistency).

Even with our priorities in order, there will still be contentious, hard choices (to deprecate so-and-so or not; to sacrifice a capability for consistency of interface or not), but the author's point is that our priorities are not in order in the first place, so the decisions we make end up being arbitrary at best, and harmful/driven by bad motivations at worst.


It's possible to both improve efficiency and maintain backwards compatibility.


Combining these two is only a non-issue with unlimited resources.

Otherwise it's a tradeoff if you add constraints like cost, effort, time to market, and so on...


Windows does it. And despite that, versions like win 7 were pretty fast


I'd argue that of any software project on the planet, Windows is the closest to having unlimited resources; especially when you consider the number of Windows customers for whom backwards compatibility is the #1 feature on the box.

And speed isn't the only metric that matters; having both the 32-bit and 64-bit versions of DLLs uses a non-trivial (to some people) amount of disk space, bandwidth, complexity, etc.


Surely, Apple and Google have just about as many resources as Microsoft does.

If Android, Mac OS, etc were super slimmed down systems in comparison to Windows, I would understand the argument much better. Instead, it feels like we're in the worst of both worlds.


>Windows does it.

Yeah, didn't say it's impossible. I said it's a tradeoff.

Windows does it and pays for it with slower releases, more engineers, bugs, strange interactions between old and new, several layers of UI and API code for devs to decode and for users to be confused by, less ability to move to new paradigms (why would devs bother if the old ones work), two versions of libs loaded (32/64-bit), and several other ways besides...

E.g. I've stopped using Windows for a decade or so, but I read of the 3 (4?) settings panels it has, the modern, the Vista style, the XP style, and so on, with some options in one, the others in the other (if you click some "advanced" menu, etc).


I have heard a lot of complaints about the costs to the Windows ecosystem caused by having to always maintain backwards compatibility.


Fuck no. A bit faster does not mean fast. It's slow as fuck basically across the board.


The goal is that you throw out things that aren't useful (cost > benefit, or better replacement available and easily usable), not that you have a periodic "throw out everything written before X".


See also: in "Good times create weak men" [0], the author explains his interpretation as to why. I can't summarize it well. It's centered around a Jonathan Blow talk [1], "Preventing the Collapse of Civilization".

[0] https://tonsky.me/blog/good-times-weak-men/

[1] https://www.youtube.com/watch?v=pW-SOdj4Kkk


I watched that talk a while ago. It is great, and it did change my opinion on a few things. Whether you agree with the premise or not, you can still learn something; for me, it was the importance of sharing knowledge within a team to prevent "knowledge rot". "Generations" in a team turn over much faster than in the general population/civilisation, so that effect is magnified IMO.


This article really resonates with me. But my biggest complaint is that everything is _so_ buggy! I won't name any names, but I find many major pieces of software from large, well-known companies are just riddled with bugs. I also feel like you almost need to be a programmer to think of workarounds: "hmm, ok, so clearly it's in a bad state. If I had coded this, what boundaries would likely cause a complete refresh?" My wife is often amazed I can find workarounds to bugs that completely stop her in her tracks.

Before we fix performance, bloat, etc, we really need to make software reliable.


I'll gladly name names.

Apple has totally forgotten how to test software and guard against what appear to be even stupid bugs. macOS Catalina has been fraught with issues ranging from the minor to the ridiculous. Clearly nobody even bothered to test whether the Touch Bar "Spaces" mode on the MacBook Pro 16" actually works properly before shipping the thing. Software updates sometimes just stop downloading midway through, Mail.app just appears over the top of whatever I'm doing seemingly at random, and Music.app frequently likes to forget that I'm associated with an iTunes account.

Microsoft are really no better - Windows 10 continues to be slow on modest hardware and obvious and ridiculous bugs continue to persist through feature releases, e.g. the search bar often can't find things that are in the Start menu!

My question is who is testing this stuff?


> My question is who is testing this stuff?

Telemetry.

Companies seem to be increasingly preferring to use invasive telemetry and automated crash reports in lieu of actual in-house testing, and they use that same telemetry to also prioritize work. I have a strong suspicion that this is a significant contributing factor to the absurdities and general user-hostility of modern products.


I'm in complete agreement. Thanks to automated crash report uploading, the software I use is more stable than ever — it's a genuine surprise to me when an application crashes, and I can't remember the last time I had to reboot because my OS froze.

But this means that anything that's not represented in telemetry gets completely ignored. The numbers won't show you how many of your users are pissed off. They won't alert you to the majority of bugs. They won't tell you if you have a bloated web application that's stuffed full of advertising. They won't tell you if your UI is incoherent.

I really do think that large companies are looking at the numbers instead of actually using their software, and the numbers say that everything's fine.


Indeed. The way I've been summing it up recently: A/B testing is how Satan interacts with this world.

It's understandable people want to base their decisions off empirical evidence. But it's important to remember that what you measure (vs. what you don't measure) and how you measure will determine the path you're going as much as the results of these measurements.


Apple is one company I've been willing to name as it's extremely frustrating to have such a sub par experience with such expensive products. I used to be a huge Apple fan, and now I no longer use any Apple products. I've never used Catalina, but iOS is unbelievably buggy now so I'm not surprised.


I think both are true depending on the software. But yes it's unfortunately too easy to run into bugs daily even if one does not use new apps every day.

The reason for unreliability is probably the same reason why things are slow: developers and project managers who don't care about the users and/or who are not incentivized to improve performance and reliability.

If you think that "not caring about the users" is too harsh, consider that users do suffer from e.g. unoptimized web pages or apps that use mobile data in obscene quantities. This has a direct consequence on people's wallets or loss of connectivity which is a huge pain.

As developers we can all try to instill "caring about the users" into our team's priorities.


> How is that ok?

Probably because a browser like FF has the goal to load and display arbitrary dynamic content in realtime like a reddit infinite scroll with various 4k videos and ad bullshit, whereas the game has the goal to render a known, tested number of pre-downloaded assets in realtime.

Also on shitty pages the goal is different-- load a bunch of arbitrary adware programs and content that the user doesn't want, and only after that display the thing they want to read.

Also, you can click a link somewhere in your scrolling that opens a new, shitty page where you repeat the same crazy number of net connections, parsing ad bullshit, and incidentally rendering the text that the user wants to read.

If you want to compare fairly, imagine a game character entering a cave and immediately switching to a different character like Spiderman, inheriting all the physics and stuff from that newly loaded game. At that point the bulk of your gameplay is going to be loading new assets, and you're back to the same responsiveness problems of the shitty web.

Edit: clarification


I'm both a web developer and a game developer, and this comparison doesn't ring true at all. Games usually have tons of arbitrary dynamic content to display in realtime. Minecraft will load about 9 million blocks around your character plus handle mobs, pathfinding, lighting, etc. Reddit infinite scroll loads a sequence of text, images, and videos. Multiplayer games have such tight latency and bandwidth targets that game developers routinely do optimizations web developers wouldn't even consider.

As a web developer, sending an 8 KB JSON response is no problem. That's nice and light. In a networked action game, that's absurd. First, (hypothetical network programmer talking here) we're going to use UDP and write our own network layer on top of it to provide reliability and ordering for packets when we need it. We're going to define a compact binary format. Your character's position takes 96 bits in memory (float x, y, z); we'll start by reducing that to 18 bits per component, and we'll drop the z if you haven't jumped. Then we'll delta compress them vs the previous frame. Etc.
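
(For a feel of what the quantization step looks like, here's a rough sketch; the world bounds and bit count are made-up numbers for illustration, not from any particular engine.)

    // Pack a position component into 18 bits instead of a 32-bit float.
    const WORLD_MIN = -4096;         // assumed world bounds, in metres
    const WORLD_MAX = 4096;
    const BITS = 18;
    const STEPS = (1 << BITS) - 1;   // 262143 discrete values per axis

    function quantize(value) {
      const t = (value - WORLD_MIN) / (WORLD_MAX - WORLD_MIN);  // normalize to 0..1
      return Math.round(Math.min(Math.max(t, 0), 1) * STEPS);   // 18-bit integer
    }

    function dequantize(q) {
      return WORLD_MIN + (q / STEPS) * (WORLD_MAX - WORLD_MIN);
    }

    // 8192 m of range / 2^18 steps is roughly 3 cm of precision per axis -- plenty
    // for rendering a remote player, at a fraction of the bandwidth of raw floats.
    const q = quantize(1234.567);
    console.log(q, dequantize(q));   // recovers the position to within ~3 cm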

Really, what's happening is things are getting optimized as much as they need to be. If your game is running at 10 fps, it's going to get optimized. When it's hitting 60+ fps on all target platforms, developers stop optimizing, even if it could potentially be faster. Same for Reddit; it's fast enough for most users.


> As a web developer, sending an 8 KB JSON response is no problem. That's nice and light. In a networked action game, that's absurd.

It depends on what that 8 KB is doing. If that 8 KB is a chat message, that's way too big. On the other hand, I've never seen an 8 KB game patch.


This doesn't really relate to my point. The blog post asked why is it that we can handle games (fancy 3D simulations, sometimes with hundreds of players and millions of objects) at a smooth 60 fps but not scrolling a web page. The parent comment suggested that it's easier to render games smoothly because you know the content in advance. I'm suggesting that software gets optimized (by necessity) until it works well enough. If some website had to display a million elements, the devs would either optimize it until it could do so, or the project would get scrapped.

When I talk about sending 8 KB in a "networked action game", I'm referring to the update packets sent to and from clients in something like Fortnite or Counter-Strike, not a game patch. I'm not trying to make a competition for who uses the least bandwidth (which a 60 GB game would lose just on the initial download). I'm trying to illustrate that games don't run faster than some website because it's inherently easier to make games run fast, but rather that developers, by necessity, optimize games until they run fast (or in this example, until they reduce network lag enough).

I'm not sure why a chat app would tack on something like 7.5 KB of overhead on a chat message, but I wouldn't be surprised if there's a chat app out there that does so. Users won't notice the extra couple milliseconds (especially so because they don't know exactly when the other person hit send). A 3 character Discord message is close to 1 KB including the headers. The same message over UDP in a game might be under 20 bytes, including the UDP header (games could also use TCP for chat - text chat isn't going to strain anything). So I'd say the overhead of a Discord message is still an order of magnitude or two bigger than it could be. Which is perfectly fine; we can afford 1 KB of overhead on a modern connection. It's optimized as much as it needs to be.
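
(For scale, here's roughly what a hand-rolled binary chat frame looks like; the field layout is invented for illustration, not taken from any real game's protocol.)

    const msg = 'hey';
    const payload = new TextEncoder().encode(msg);
    const buf = new ArrayBuffer(4 + payload.length);   // 2B sender id + 1B type + 1B length + text
    const view = new DataView(buf);
    view.setUint16(0, 42);                             // sender id
    view.setUint8(2, 7);                               // message type: chat
    view.setUint8(3, payload.length);
    new Uint8Array(buf, 4).set(payload);
    console.log(buf.byteLength + ' bytes');            // 7 bytes, plus UDP/IP headers on the wire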


Browsers are fine. It's the websites that are slow.

It's not the fault of Firefox that Reddit's new UI is pathetically slow. It's the Reddit's implementation of their UI itself which is total garbage.

And given that people do write fast, complex, real-time games in JavaScript for the browser, gamedev absolutely becomes a valid reference point for the possible performance of any individual page.


Hmm, that leads me to an interesting counter-idea.

Why should Firefox or any other dynamic software have the ability to be slow relative to what it achieves? If compilers should be fast, web engines should be equally fast. The Web should have never been designed such that a slow website (relative to the task) could be achieved. In the same way that you can only express memory-safe code in Rust and type-safe code in Haskell, why not make it possible to express only things that are "fast for what is interactive"?


> The Web should have never been designed such that a slow website (relative to the task) could be achieved

That's already the case, your orders of magnitude are just off. Long-running AJAX or page loads are timed out at a pretty consistent point across browsers. Half-open/closed TCP connections are timed out at a pretty consistent point across operating systems. Busy-looping JS gets you a "page is not responding" block in a similar amount of time; same for nonresponsive native applications on many operating systems.

Their definition of "slow" or "stuck" just tends to be "tens of seconds or minutes", not the threshold of perceived responsiveness you want in a website.

Also, your parenthetical is a pretty tall-née-impossible order:

> a slow website (relative to the task)

How could the "task" be classified? Do you mean "task" as in "clicking a button and having a DOM update"? Or as in "this is a TODO application so it should have responsiveness threshold X"?


>whereas the game has the goal to render a known, tested number of pre-downloaded assets in realtime.

Say hello to shaders.


A related article in a similar spirit from 4 years ago: https://news.ycombinator.com/item?id=8679471

I can comfortably play games, watch 4K videos, but not scroll web pages?

I think this is one of the more important points that the article tries to get across, although it's implicit: while the peak of what's possible with computing has improved, the average hasn't --- and may have gotten worse. This is the point that everyone pointing at language benchmarks, compiler/optimisation, and hardware improvements fail to see. All the "Java/.NET is not slow/bloated" articles exemplify this. They think that, just because it's possible for X to be faster, it always will be, when the reality couldn't be further from that.

Speaking of bloat, it's funny to see the author using Google's apps and Android as an example, when Google has recently outdone itself with a 400MB(!) web page that purports to show off its "best designs of 2019": https://news.ycombinator.com/item?id=21916740


I agree that the peak is pulling away from the average, and most of us want the average performance of applications to lift. We have to throw aside facile "Good Enoughism" and genuinely respect the time of our users.

Where I differ a bit from your take: Languages and platforms that target high performance are providing application developers an elevated performance ceiling that allows them the luxury to use CPU capacity as they see fit. Application developers using high-performance platforms may then elect to make their application high-performance as well, yielding a truly high-performance final product, or they may elect to be spendthrifts with CPU time, yielding something middling on performance. And yes, a truly wasteful developer can indeed make even a high-performance platform yield something low-performance.

What benchmarks and the resulting friendly competitiveness help us avoid is a different and worse scenario. When we select a language or platform with a very low performance ceiling, application developers continuously struggle for performance wins. The high water mark for performance starts out low, as illustrated by how much time is spent in order to accomplish trivial tasks (e.g., displaying "hello world"). Then further CPU capacity is lost as we add functionality, as more cycles are wasted with each additional call to the framework's or platform's libraries. When we select a low-performance platform, we have eliminated even the possibility of yielding a high-performance final product. And that, in my opinion, illustrates the underlying problem: not considering performance at key junctures in your product's definition, such as when selecting platform and framework, has an unshakeable performance impact on your application, thereby pulling the average downward, keeping those peaks as exceptions rather than the rule.


One thing nobody seems to mention is the environmental cost of inefficient software. All those wasted CPU cycles consume electricity. A single laptop or phone on its own is insignificant, but there are billions of them. Combine that with the energy wasted shovelling unnecessary crap around the internet, and it adds up to a big CO2 problem that nobody talks about.


I hear that argument very frequently and I don’t buy it.

Think about all the gas that is saved because people don’t have to drive to the library, all the plane trips saved by video conferencing, all the photo film, all the sheets of paper in file cabinets, all the letters being sent as emails, all the mail order catalogues, ... you get the idea.

Does anybody know of a comprehensive study on this?


> Think about all the gas that is saved because people don’t have to drive to the library....

You're right that computers have saved huge amounts of energy compared with the things you mention. My point here was that even more could be saved with a bit of thought about efficiency in programming.


If websites and business software were as lean as they could be, most computers could have amazingly weak, low-powered processors.

I'm quite disenchanted with software myself. It takes way too long to open any program, or for this JIRA ticket to display properly.

One thing that has improved is boot times; I seem to remember Windows 7 booting quite a bit faster than XP. Maybe someone in upper management wanted it to be as fast as macOS? So speed IS possible, if it is prioritized.


> One thing that has improved is boot times; I seem to remember Windows 7 booting quite a bit faster than XP. Maybe someone in upper management wanted it to be as fast as macOS? So speed IS possible, if it is prioritized.

I seem to remember boot times being a frequent topic of discussion in the early 2000s, because people turned off their computers.

In a way, this is a great little microcosm of the problem. Just fix habits instead of fixing the software.


It's not either-or. I don't buy the argument that if we didn't shovel garbage that we call "software" today, we wouldn't have equivalent but better software at all. It's a multi-agent problem, and a lot of it is driven by business dysfunction, not even actual complexity or programmer laziness.

In my - perhaps limited - work experience, there's enough slack in the process of software development that I don't buy the "time to market" argument all that much.


It's much worse than that: what is the environmental cost of buying a new phone because Slack runs too slowly on your old one?

The things I'm doing on my phone today are not fundamentally different than what I was doing ten years ago. And yet, I had to buy a new phone.


That's why a trickle-down carbon tax is the right answer.


I've heard this argument often, but I don't buy it.

1. The environmental cost of inefficient software is negligible when compared to Bitcoin mining or other forms of planned hardware obsolescence.

2. By using more efficient software, surely, you can save a lot of CPU cycles, and it can improve the energy efficiency of some specific workloads under some particular scenarios. However, on a general-purpose PC, the desire for performance is unlimited, the CPU cycles saved in one way will only be consumed in other ways, and in the end, the total CPU cycles used remain a constant.

Running programs on a PC is like buying things: when you have a fixed budget but everything is cheaper, people will often just buy more. For example, I only start closing webpages when my browser becomes unacceptably slow, but if you make every webpage use 50% less system resources, I'll simply open 2x more webpages simultaneously. LED lighting is another example: while I think the overall effect is a reduction in energy use, in some cases it actually makes people install more lighting, such as those outdoor billboards.

This is called the Jevons paradox [0].

For PCs, certainly, as I previously stated, in specific workloads under some particular scenarios, I totally agree that there are cases that energy use can be reduced (e.g. faster system update), but I don't think it helps much in the grand scheme of things.

[0] https://en.wikipedia.org/wiki/Jevons_paradox


> it adds up to a big CO2 problem that nobody talks about

If you haven't seen it already, you'd probably be interested in the talk below by Chuck Moore, inventor of Forth.

https://www.infoq.com/presentations/power-144-chip/


> https://www.infoq.com/presentations/power-144-chip/

Fascinating talk. Thanks for the link.


I agree, but apparently the number is not very big - computing accounts for perhaps 8% of electricity usage at most. But it still feels so wasteful, and also wasteful of people's time.


I think it's actually the time that matters the most. The tens of thousands of human lifetimes burned each day on the slow software we're forced to use really add up - how exactly, I don't know; it's hard to imagine what humanity would do with the millions of work hours saved.


You are going to have a heart attack if you check the energy consumption of bitcoin.


Then again, you could embed those as space heaters or cooking machines, since they don't need portability.


But there's no point, because all those heaters aren't going to beat ASIC farms near cheap electricity sources anyway.

(Do individuals really bother mining bitcoin these days anyway?)


I'm not sure in the accounting of the environmental costs of modern life that inefficient software counts for much. Doing totally pointless crap with highly efficient software might be worse.


Sure: If you can optimize the software that runs on many millions of computers then you can have a huge impact. If you run those computers yourself you can even save money.

But the vast majority of software is one-off stuff. It makes no sense to optimize it for performance instead of features, development time, correctness, ease of use, etc.


> But the vast majority of software is one-off stuff.

Is it now? I can't think of any examples, except a few tools we use internally in a project. Everything else I use, or I see anyone else using, has a userbase of many thousands to millions, and a lot of that is used in the context of work - which means a good chunk of the userbase is sitting in front of that software day in, day out.


It'd be interesting to know, say, what percentage of software developers work on programs that have less than a million users.

I wouldn't be at all surprised if it's the majority. For every big-ticket software offering there's going to at least be things the user never interacts with, like a build system and a test suite and a bug tracking system and whatever else. And there's just so much software everywhere, most of which we never see. Every small business has its little web site. Who knows how much software there is powering this or that device or process at random factories, laboratories, and offices.

The Debian popularity contest looks like it has a big long tail of relatively unpopular packages [1]. It looks like the App Store has 2 million apps, of which only 2,857 (0.14%) have more than a million dollars of annual revenue [2]. These figures are of course incomplete and flawed and do not directly address the question. I don't really know how to research this in a thorough way.

[1] https://popcon.debian.org/by_inst

[2] https://expandedramblings.com/index.php/itunes-app-store-sta...


Excerpt:

"An Android system with no apps takes up almost 6 GB. Just think for a second about how obscenely HUGE that number is. What’s in there, HD movies? I guess it’s basically code: kernel, drivers. Some string and resources too, sure, but those can’t be big. So, how many drivers do you need for a phone?

Windows 95 was 30MB. Today we have web pages heavier than that!

Windows 10 is 4GB, which is 133 times as big. But is it 133 times as superior? I mean, functionally they are basically the same. Yes, we have Cortana, but I doubt it takes 3970 MB. But whatever Windows 10 is, is Android really 150% of that?"

My favorite line: "Windows 95 was 30MB. Today we have web pages heavier than that!"

If there's a new saying for 2020, it shouldn't be that "hindsight is 2020"... <g>

Also... each web page should come with a non-closable pop-up box that says "Would you like to download a free entire OS with your web page?", and offers the following "choices":

"[Yes] [Yes] [Cancel (Yes, Do It Anyway!)]". <g>


My favorite was "Google’s keyboard app routinely eats 150 MB. Is an app that draws 30 keys on a screen really five times more complex than the whole Windows 95? "


I mean... considering the fact that it contains the probabilities of typing every single word in the English language versus every other, at every stage of typing, including potentially the probabilities of all the ways you might mistype each word while swiping without precise accuracy...

...maybe?

I don't know if that's 150 MB worth of data... but it's certainly a lot.


So why is it persistently, perniciously and stubbornly insistent in refusing to spell 'naughty' words like 'duck'?


Probably because it contains extra data about which words are "naughty".


What's the problem again? I just typed "duck" by swiping. If you mean the "i" variant, be informed that swear words are blocked unless a preference is set.


They mean the "f" variant.


I'm sure that's intentional... I think you can disable it?


This in particular is a terrible example. Gboard is full of good features that Windows 95 did not have. Features that require a way deeper understanding of natural language than existed on computers in 1995. Autocorrect, voice typing, swipe typing, handwriting input, translation. It's so far from "30 keys on a screen" that it's not even funny.


I agree in spirit, but really most of this is going to be related to unpacking images of more than 2 colors on a high-density display.


Windows 10 is way richer than Win95 - there were countless big features added since 1995 (all enterprise and security features, backups, full disk encryption, ...), plus existing features got way more complex - multimonitor support, support of complex USB stack, GPU support etc. IIRC, out of the box Win95 didn't even have DirectX! Still, of course the total sum of changes does not justify the 133x size increase.


Damn. Windows 95 was indeed 13 floppy disks. Amazing to think about.

So what is in these huge downloads? Layers upon layers of virtual machines?


In a way each layer of abstraction could be seen as a virtual machine - a new set of “instructions” implemented via previous layer of instructions.

The analogy holds as long as we don’t cross layers, which is quite often true.

Therefore counting total number of layers is performance-wise quite similar to running this many layers of virtual machines. Of course, you need code for all of these translation layers.


Mac OS Catalina ships with 2.1 GBs of desktop pictures https://mobile.twitter.com/dmitriid/status/11981966747301109...


My favorite: "What's in there, HD movies?"


When I was in school and first learning about programming, I assumed that code written in C or Java would eventually be ported to hand-tuned assembler once enough people were using it. Then I got into the industry and realized that we just keep adding layer after layer until we end up at the point this article talks about.

I remember once reading that IBM was going to implement an XML parser in assembler, and people were like "Why? If speed is needed then you shouldn't use XML anyway." I thought that concern was invalid because these days XML (or JSON) is really non-negotiable in many scenarios.

One idea that I've been thinking about lately is some kind of neural-network-enabled compiler and/or optimizer. I have heard that in the JavaScript world they have something called the "tree shaking" algorithm, where they run the test suite, remove dependencies that don't seem to be necessary, and repeat until they start getting test failures. I'm thinking: why not train an LSTM to take in http requests and generate the http response? Of course, sometimes the request would lead to some sql, which you could then execute and feed the results back into the LSTM until it outputs an http response. Then try using a smaller network until something like your registration flow, or a simple content management system, is just a bunch of floating-point numbers in some matrices saved off to disk.
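
For what it's worth, here's roughly what I picture, as a minimal untrained sketch (assuming PyTorch; the class name, layer sizes, and example request are all made up, and the actual training on request/response pairs is elided): treat "request followed by response" as one byte stream and have the LSTM predict the next byte, then sample the response at serving time.

    import torch
    import torch.nn as nn

    class RequestResponseLM(nn.Module):
        def __init__(self, vocab=256, dim=128, hidden=512):
            super().__init__()
            self.embed = nn.Embedding(vocab, dim)
            self.lstm = nn.LSTM(dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab)

        def forward(self, byte_ids, state=None):
            x = self.embed(byte_ids)             # (batch, seq, dim)
            out, state = self.lstm(x, state)     # state carries over step to step
            return self.head(out), state         # logits over the next byte

    model = RequestResponseLM()
    request = torch.tensor([list(b"GET /calc?amount=300000&rate=0.04&years=30 ")])
    logits, state = model(request)               # garbage until trained on real pairs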


> I'm thinking: why not train an LSTM to take in http requests and generate the http response?

Why? With responses generated according to what? Are you really just suggesting using neural networks in the compiler's optimiser?

> Then try using a smaller network until something like your registration flow, or a simple content management system, is just a bunch of floating-point numbers in some matrices saved off to disk.

Why? What's the advantage over just building software?


I'm suggesting that you take an existing system and build up a corpus of request/response pairs. Then you use the LSTM to build a prediction model so that, given a request, it will tell you that the current production system will produce the following sql statement and this http response. Once the LSTM's output is indistinguishable from your current production system, for all use cases, you replace the production system with the LSTM and a thin layer that can listen on the port, encode/decode the data, and issue sql queries.

Why would I want to do this? I'm not 100% sure... I think it would be super fast once you got it working. I think it would avoid many security bugs. You wouldn't have to read "oh, Drupal 3.x has 20 new security bugs" and go patch your code. I think when I had this idea I was thinking about it in terms of a parallel system that could catch hacking by noting when actual http responses diverged too much from the predicted response. The main idea being that for a given input the output really is 100% predictable, assuming your app doesn't use random numbers like in a game or something.

To link this idea to the article, I think things like XML parsers could be written this way... I can't prove it, but I suspect that they would be very fast and not come with all the baggage that the article complains about.

I started thinking along these lines after reading stuff like this https://medium.com/@karpathy/software-2-0-a64152b37c35


What if your app has literally any mutable state? Registering accounts, posting comments, etc.

Also I'll bet you that your neural net is > 100x slower than straight line code.


Mutable state in the sense of database writes would be part of the network's output and just passed on to a regular db. Mutable state in the sense of variables that the application code uses while processing a request? Well LSTM networks can track state like that.

For session-based variables? Not sure; either it all becomes stateless and the code has to read everything from storage for each request... or maybe the LSTM is able to model something like an entire user session and remember the stuff that the original app would have put in the session.

That Andrej Karpathy article that I linked to two comments above... he pointed out, in a different blog post, that regular neural networks can approximate any pure function. Recurrent neural networks like the LSTM can approximate any computer program. It's their ability to propagate state from step to step that allows them to do this.

As far as it being 100x slower goes, well, at a certain point I will be willing to take your money :)


Since you seem sincere, I'd like to mention that neural networks (as of now) are a complete clusterfuck for a problem with as much structure as you're describing. There are no known ways to impose the relevant constraints/invariants on neural network behavior and stop them from producing junk, let alone get them to do something useful. That Karpathy article is pure hype with very little substance (like most commentary on neural networks). I like the vision, but it might take a minimum of twenty to fifty years to get there.

If you consider yourself a world-leading expert on neural networks and have some secret sauce in mind, by all means, good luck... otherwise it sounds like a fool's errand.


Thanks for the feedback. I don't consider myself a world-leading expert on neural networks by any means.

I do want to point out that I'm thinking of doing this on a very limited website, not a general-purpose thing that replicates any possible website. When I imagine the complexity of a modest CMS or an online mortgage calculator, I think it is much less complex than translating human languages. The fact that web code has to be so much more precise than human language actually makes the task easier. But to be fair, I'm all talk at this point with no code to show for it. I will keep these comments in mind; this thread has been helpful for thinking through some of this stuff.


> The main idea being that for a given input the output really is 100% predictable [..] I think it would be super fast once you got it working.

I imagine it would be fast; then you'd realise you've made a static-content caching layer out of a neural network, replace it with a Varnish cache, and it would be hyper fast.


I don't think a caching layer would work. One example would be an online mortgage estimator. You input the loan amount, interest rate, length of the loan, etc., all as http input parameters. I'm suggesting that the LSTM can eventually figure out that those variables are being used by the application code to go into a formula. That application code and its formula would all be replaced by the LSTM.
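
For concreteness, the pure function the network would have to rediscover here is just the standard fixed-rate amortization formula, something like this (the example numbers are only for illustration):

    def monthly_payment(principal, annual_rate, years):
        # standard fixed-rate amortization formula
        r = annual_rate / 12                     # monthly interest rate
        n = years * 12                           # number of payments
        if r == 0:
            return principal / n
        return principal * r / (1 - (1 + r) ** -n)

    print(monthly_payment(300_000, 0.04, 30))    # roughly 1432.25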

I just don't know how you can achieve that with a static cache... only if somebody else had requested that exact mortgage calculation before and it was still in the cache.

Also, my idea of the "given input" from the earlier comment would have to include the results of sql queries, which together would form the entire input to the LSTM.

But honestly I think overtrained autoencoders can be used as hash maps. That would be an application more in line with what I think you are saying.


Seems that either the NN memoizes all the inputs and outputs until the function is totally mapped, and then functions as a memoized lookup table; or the NN has discerned what the mortgage calculation is and is doing exactly the calculation your {Python} backend does, just migrated into an NN middleware layer instead, which sounds like it would be slower.

And then you're hoping that the NN would act like a JIT compiler/optimiser and run the same code faster. But if it was possible to process (compile? transpile? JIT compile?) the Python code to run faster, then writing a tool to do that sounds easier than writing an AI which contains such a tool within it.

So there's a handwave step where the AI develops its own innate Python-subset optimiser, without anyone having to know how to write such a thing, which would be awesome indeed... is that possible?


If I ever get any actual code running I'll try and post it here.


> The 2.0 stack can fail in unintuitive and embarrassing ways, or worse, they can “silently fail”...

I'm not sure I'd like an http webserver to silently fail, or be undebuggable when it comes to security vulnerabilities when given strange inputs.


XML actually is parsed with assembly code now, using vector instructions that split up bytes (bit 0 of 128 or more bytes in XMM0, bit 1 of the same bytes in XMM1, and so on) and doing bitwise operations on the registers to recognize features.

Imagine how bad it would be if not!
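
A rough numpy analogy of the trick (the real parsers do this with SIMD registers in hand-written assembly; this only shows the shape of it, classifying every byte of a buffer at once instead of branching per character):

    import numpy as np

    buf = np.frombuffer(b'<a href="x">hi &amp; bye</a>', dtype=np.uint8)

    lt    = buf == ord('<')          # one compare covers the whole buffer
    gt    = buf == ord('>')
    quote = buf == ord('"')
    amp   = buf == ord('&')

    structural = lt | gt | quote | amp        # bitwise OR of the masks
    print(np.flatnonzero(structural))         # positions handed to the real parsing stage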


That's interesting: would you happen to have any reference implementing/describing such a parser, by any chance?

In the crypto world, this is called "bitslicing":

https://www.bearssl.org/constanttime.html#bitslicing


It's not NN-based, but you might be interested in https://en.wikipedia.org/wiki/Superoptimization.

