Software disenchantment (tonsky.me)
317 points by hyperpallium on Sept 18, 2018 | 94 comments



People won't pay for efficiency. People buy solutions to their problems (features), not efficiency.

If it needs to be efficient, it will be (the games example).

The death of Moore's Law may rejuvenate efficient software.

BTW, if you use old unix tools on a phone, it's super fast (in a terminal emulator like http://termux.com).

Sadly, the idea of abstractions enabling you to think better (like Alan Kay's "point of view is worth 80 IQ points" and maybe Jobs' "bicycle for the mind") doesn't seem to work, at least for software abstractions. It's far more work than can be justified - more appropriate for pure research (which is instead preoccupied with things like esoteric type systems with nice mathematical properties).


> If you use old unix tools on a phone, it's super fast

Of course it is fast: software designed for a PDP-11, running on a pocket Cray.


Just a few months ago, a very senior engineer on my team made this same observation about Moore's Law after I lamented that bit twiddling tricks and knowledge of obscure algorithmic speedups don't often end up being useful in practice.


The only sane reason that AirBnb or Uber are worth as much as they are is that they promise to be more efficient than traditional companies. So the 'market' does pay for efficiency, and a lot of the digital revolution has been about making existing processes slightly more efficient (Amazon's online shopping experience vs. physically going to different shops, etc.). So the market definitely values efficiency where the cost of developing that efficiency does not outweigh the benefits. Typing speed and cursor refresh are not limiting factors for programmers, so there are diminishing returns in making text editors refresh at >120 Hz. Simple business logic: the OP can lament as much as he wants, but we have limited resources, mostly time, and that time is usually better spent writing business logic than optimising a text editor for the 100th time.


>If it needs to be efficient, it will be (the games example).

Yeeah, no. Definitely a no.

>People won't pay for efficiency. People buy solutions to their problems (features), not efficiency.

Not wrong. Businesses pay for efficiency.


One day through the primeval wood

A calf walked home as good calves should...

https://m.poets.org/poetsorg/poem/calf-path



Fantastic


I agree that something is out of whack. In-house plain-jane CRUD development used to be pretty simple and quick in the 90's. One could focus on the domain analysis side instead of micromanaging tech. The IDE products had glitches, but got better every release, including deployment. The Web bleeped it all to heck and back, and nobody seems interested in promoting the standards to fix it. We toss in JavaScript gizmos to attempt to bring the UI up to desktop standards, but these gizmos are clunky and browser-version-sensitive. Maybe the desktop-era tools were artistically ugly, but they were much easier to develop and maintain. PHBs seem more swayed by UI eye-candy than practicality. It's great job security, but hugely wasteful. Our industry needs some deep soul-searching.


I think this is what pisses me off the most. The frontend is a complete clusterfuck, but the serverside is almost as bad. Everything is just half-assed and you end up spending most of your time fighting the tech. The worst thing is people who have drunk the Kool-Aid and just stare at you, uncomprehending, when you complain that lunatics are running the asylum and the only way to win is to not play. Or at best they agree but have ended up silently accepting the situation as a coping mechanism of some sort, like people in abusive relationships. "It's not that bad, there are good moments too, and besides, I'd have nowhere to go anyway."

I think I'm done with the tech industry. 98% of the sort of programming people actually pay you for is about as pleasant as a root canal. I'm currently on a sick leave and seriously entertaining the idea of changing careers to something entirely different. Like gardening.


> I think I'm done with the tech industry. 98% of the sort of programming people actually pay you for is about as pleasant as a root canal. I'm currently on a sick leave and seriously entertaining the idea of changing careers to something entirely different. Like gardening.

More than a decade of professional software development later, I'm having very similar thoughts.

I now spend 1/4 of my work week fighting dependency hell (after just about each addition of a new package by any other developer on the project), another 1/4 figuring out how the "latest and greatest" tool of the week is best used to do something that would normally take me 5 minutes to do custom (god forbid, not the C-word!), and the remaining half is spent maybe doing actual work. So incredibly frustrating that I've just about had it.

I've grown to loathe and hate that which I used to adore.


> We toss in JavaScript gizmos to attempt to improve the UI to desktop standards, but these gizmos are clunky and browser-version-sensitive.

Agreed! Doesn't anyone know how to write vanilla JavaScript anymore?

relevant: http://vanilla-js.com/


Pretty hard to learn vanilla JavaScript when you're stuck in the mud 8 abstractions above it, trying to keep up.


I put some of the blame on an unhealthy insistence on code reusability and sticking to paradigms. The result is so much code that is just abstractions piled onto abstractions piled onto more abstractions until you get the famous FizzBuzz Enterprise Edition [0] that I'm sure we've all seen by now.

Nobody wants to just write some god damn code anymore. They include entire frameworks instead of just writing a couple helper functions.

[0] https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...


> include entire frameworks instead of just writing a couple helper functions

> unhealthy insistence on code reusability

The problem you're alluding to is that until recently, there was very poor support on the web for the equivalent of Unix strip[1] or ProGuard[2] (often used by Android devs). We need reliable and accurate dependency analysis to be able to properly eliminate unused code.

Today, with ES6 modules and a recent version of webpack, this is partly possible. Unused modules are not included in the final output file. I noticed a drastic drop in the final minified JS file size, after moving to a configuration that enabled this sort of (module) stripping.
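
To make the mechanism concrete, here is a rough sketch (the file names and the unused function are made up for illustration): with ES6 module syntax, the bundler can statically see which exports are referenced and drop the rest.

    // math.js - a hypothetical module with one used and one unused export
    export function clamp(x, lo, hi) {
      return Math.min(Math.max(x, lo), hi)
    }
    export function hugeUnusedTable() {
      return [/* imagine kilobytes of lookup data here */]
    }

    // app.js - only the named import gets pulled into the bundle
    import { clamp } from './math.js'
    console.log(clamp(42, 0, 10))

    // webpack.config.js - production mode enables this sort of
    // dead-export elimination ("tree shaking") in recent webpack versions
    module.exports = { mode: 'production', entry: './app.js' }

With a setup along these lines, hugeUnusedTable never reaches the minified output, which matches the size drop I saw.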

[1] https://en.wikipedia.org/wiki/Strip_(Unix)

[2] https://stuff.mit.edu/afs/sipb/project/android/sdk/android-s...


Those shouldn't even be needed.

To give an example, let's say you need to calculate SHA-256 hashes in your program. People would rather include the OpenSSL library than just add a SHA-256 function to their code. Instead of a single function (that maybe has a couple helper functions), they add a massive library full of features they don't need, taking up memory, bloating the executable size, and increasing load time.

If you're using less than 10% of a module, you should be questioning if you even need that module at all.
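
As an aside, in a browser (or recent Node) the SHA-256 example is even starker: you need no library at all, because a digest is built into the standard Web Crypto API. A minimal sketch:

    // SHA-256 hex digest with zero dependencies, via the standard
    // Web Crypto API (browsers; recent Node exposes the same global).
    async function sha256Hex(text) {
      const bytes = new TextEncoder().encode(text)
      const digest = await crypto.subtle.digest('SHA-256', bytes)
      return [...new Uint8Array(digest)]
        .map(b => b.toString(16).padStart(2, '0'))
        .join('')
    }

    sha256Hex('hello').then(console.log)
    // 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824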


If I understand correctly, I think you're suggesting they copy the functions they need from the library into their own source tree.

I think that makes sense for something simple like left-pad, but I don't think it's advisable in most cases. For instance, the SHA-256 function is non-trivial[1]. Even though it's just around 100 lines, it's a fairly intricate function, and I would want to use a library for it. Preferably a heavily-used library like OpenSSL.

Using a library reduces the size of my own code base, and places the responsibility for fixing bugs and optimizing the SHA-256 implementation with the open source community / the maintainers of the library. SHA-256 is particularly interesting because modern Intel and AMD CPUs have dedicated instructions[2] for performing it. If I copied the source code for SHA-256, and later the library was updated to use the x86 instructions when available, then I would lose out on this performance gain.

Ultimately, the solution should be in constructing reliable and correct dependency management systems, and using the operating system's dynamic/shared library infrastructure to not bloat code size.

Modern operating systems are incredibly efficient when it comes to DLLs/shared libraries. They load the DLL only once, and every subsequent program that needs it reads off the same shared memory pages. Whenever a process writes to some data in the shared library, only that memory page is forked (copied) for the process. It's incredibly efficient, and except for the first time the library is loaded, it is fast to load.

I think due to a fear of the so-called "DLL Hell", at some point people decided to statically link everything and create giant executables. The smarter solution would have been to actually solve the dependency hell problem (where a good start would be simply versioning .so/.dll files).

The limitation of this approach, of course, is that (.so/.dll) shared libraries are only possible with native programs. Although I think with web pages/apps, if you link to the same resource (script/CSS/etc.) in multiple pages, the browser should only download/load it once, with a request sent to the server to see if the resource has been modified (getting back a "304 Not Modified" response most of the time).

[1] https://www.movable-type.co.uk/scripts/sha256.html

[2] https://en.wikipedia.org/wiki/Intel_SHA_extensions
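
For what it's worth, here is a toy sketch of that conditional-request handshake (the version tag is made up; real servers usually derive the ETag from a content hash):

    // Minimal Node server illustrating the "304 Not Modified" flow.
    const http = require('http')

    const BODY = 'console.log("shared script")'
    const ETAG = '"v1"' // hypothetical stand-in for a content hash

    http.createServer((req, res) => {
      if (req.headers['if-none-match'] === ETAG) {
        res.writeHead(304) // client cache is still fresh: send no body
        res.end()
      } else {
        res.writeHead(200, { 'ETag': ETAG, 'Content-Type': 'application/javascript' })
        res.end(BODY) // first visit: send the full resource
      }
    }).listen(8080)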


My colleagues (and probably my twitter followers) are always sick of me ranting about stuff like this. A few years ago I decided to compile the screenshots I'd posted on twitter in a single month:

https://blog.dantup.com/2016/04/have-software-developers-giv...

That it happens is bad enough, but the fact that so few care about it is really depressing :(


It's an awesome collection you have there, thank you. (And yeah, also a bit depressing...)


I agree with the author. Often I find it refreshing to step away from the absurdity of software development and go work on my car. Working on your car exposes you to the fruits of a bona fide engineering process: objects that are carefully designed, work reliably, and are made with serviceability in mind. They aren't just slapped together and thrown out the door like (100-epsilon)% of software out there.


I would prefer not to touch the turbines attached to a downsized 1L 120HP engine; software updates happen only at authorized shops and cost north of 150 EUR; the entertainment system had access to YouTube initially but was cut off after 12 months; the LED headlight system was likely someone's PhD dissertation. The car industry is the equivalent of web front-end in the software world, except one is unable to just shut down the browser.


I agree with your sentiment, the car industry in its current state is getting pretty ugly.

I choose to avoid these trends entirely, and I'm happy to keep driving my 2.4L naturally aspirated, 5 speed manual, touchscreen-less shitbox until it dies. Parts are still cheap and it's easy to work on.


I see you have an oldish car ...


But it doesn't break down and require constant maintenance as the parts it uses become obsolete and need constant updating.

I'm just trying to get from A to B, not fly to Mars.


A wise and cynical manager pointed out to me that software purchases are driven by features, not quality. You don't get to assess the quality of software until after the purchase.

All other (feature) things being equal, the first product to market gets the majority of the purchases. So dev teams are in a race to the bottom regarding quality. As long as software is purchased, as long as rival companies compete to provide software, this is an iron law.

To improve software quality in a race to the bottom, we must improve the quality of (young, inexperienced) software developers. But companies don't want to pay for high quality developers, they are content with barely-qualified devs earning the lowest possible wage. They hire young. They hire offshore. They hire minimally skilled talent.

There are places where quality still lives: operating systems, compilers, databases. Places where the software cannot perform its mission if it fails frequently. You have to find one of these jobs, or suppress your gag reflex.


> Recently our industry’s lack of care for efficiency, simplicity, and excellence started really getting to me, to the point of me getting depressed by my own career and the IT in general.

Loading this website resulted in 5.3MB being downloaded over 42 requests.


I don't think he authored that though.

He didn't say he doesn't use software; he just wishes it were lighter.


I see only 2.8MB on my computer with an adblocker (the website has Google Analytics and a Twitter widget), mostly images. Fonts/text/CSS: 120KB. And the website works perfectly without JavaScript.


I block everything but HTML by default. One request, 10.49 KB transferred.


With images, it's 14 requests, 24 KB, a little over a second. (I block JS.)


+1. Much of the weight/bloat of web pages comes from JavaScript code, massive amounts of media, ADVERTISEMENTS!!!!, etc.

The base HTML is fairly lightweight. ;)


If anything, the rants about this are getting better. That's probably a good sign. This rant is at least 10x better than mine from last year :) https://www.jasonhanley.com/blog/2017/08/when-did-software-g...

To the author: I'm with you. Recently rebuilt my website and blog as an experiment in efficiency.


Love this article. I gave an internal talk recently that was about the sad state of programming right now.

How we used to set a variable in assembly language:

  mov [foo], ax
How we do it in React / Redux:

  // constants.js
  export const SET_FOO = 'SET_FOO'


  // actions.js
  import { SET_FOO } from './constants'

  export const setFoo = foo => ({ type: SET_FOO, payload: foo })


  // reducer.js
  import { SET_FOO } from './constants'

  const initialState = { foo: 0 }

  const reducer = (state = initialState, action) => {
    const { type, payload } = action
    switch (type) {
      case SET_FOO:
        return { ...state, foo: payload }

      default:
        return state
    }
  }

  export default reducer


  // Component.js
  import { connect } from 'react-redux'
  import { setFoo } from './actions'

  class Component extends React.Component {
    handleSetFoo = foo => {
      this.props.dispatch(setFoo(foo))
    }
  }

  export default connect()(Component)

I don't understand how people don't see something is horribly wrong.


This is extremely hyperbolic, and I'm sure you know this. Here's how you set a variable in React:

const foo = 1;

What you're doing in your complicated sample is not _just_ setting a variable, you're also exposing it to a KVO subscription system that you could never represent succinctly in ASM.

What's sad is how programmers communicate programming concepts right now, with quick digs and hot takes and zero actual critical thought to what is being compared. I'm disappointed that you're spreading FUD in your internal talks.


Yes, but it's still a way to set a value. And don't be so sure you can't make an entry in a KVO subscription system in asm - of course, at bottom, that's exactly what's happening.

I'm disappointed at the zero-attempt-to-understand-the-point digs made on Hacker News (like the one I'm responding to). The point is: how is multi-layered abstraction an unalloyed good? It's heavy, slow, complicated to author and explain, and not doing all that much of value.


Yes, it's an apples-to-oranges comparison, but the point is more: why are we using an orange when an apple will suffice?

The real question is, is all that really needed?

We keep building abstraction layers on top of abstraction layers when oftentimes there is already a tried and proven solution that works and is much simpler.

I've been doing some experimenting creating a single page application (SPA) without a web framework, and it turns out you can get 90% of what React offers with a tiny amount of code.

https://github.com/brennancheung/volgenic/blob/master/13-IDE...

My current iteration is just to do:

    State.update(() => { state.foo = 123; state.other = 456;  })
I just use plain javascript objects to store data and wrap it in a function that will trigger re-rendering the VDOM / repaint.

And even if we do need that additional layer of abstraction, we can always make it appear simpler and provide a cleaner interface to the programmer. A Proxy object setter could be used to eliminate the boilerplate of State.update, as sketched below.
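
Something like this, roughly (a sketch of the idea, not my repo's actual code; observable and render are made-up names):

    // Wrap a plain state object in a Proxy so ordinary assignments
    // trigger the re-render, eliminating the State.update() wrapper.
    function render() { console.log('repaint') } // stand-in for the real repaint

    function observable(target, onChange) {
      return new Proxy(target, {
        set(obj, key, value) {
          obj[key] = value // perform the actual write
          onChange()       // schedule the VDOM re-render / repaint
          return true      // report success to the runtime
        }
      })
    }

    const state = observable({ foo: 0, other: 0 }, render)

    // Now plain assignments are enough:
    state.foo = 123
    state.other = 456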

Good abstractions simplify the problem by creating a mental paradigm that is closer to the problem domain. Poor abstractions do the opposite, they take away what is needed to solve the problem and create additional steps that are not really germane to the problem.

Many abstractions are well intentioned, but after we toss layer on top of layer, we often get so far removed from the problem domain that we have to ask ourselves if it is not simpler to just build up another set of abstractions that gets us closer to the problem domain (i.e., "First Principles").


Don't forget Enterprise Java:

    // VariableSetter.java
    public class VariableSetter {
        private String variableName;
        private String variableValue;

        private VariableSetter() {
        }

        public static VariableSetter variableSetterInstance() {
            return new VariableSetter();
        }

        public void setVariableName(String variableName) {
            this.variableName = variableName;
        }

        public void setVariableValue(String variableValue) {
            this.variableValue = variableValue;
        }

        public String getVariableName() {
            return this.variableName;
        }

        public String getVariableValue() {
            return this.variableValue;
        }
    }

    // VariableSetterFactory.java
    public class VariableSetterFactory {
        private VariableSetterFactory() {
        }

        public static VariableSetterFactory getVariableSetterFactory() {
            return new VariableSetterFactory();
        }

        public VariableSetter getVariableSetter() {
            return VariableSetter.variableSetterInstance();
        }
    }


I feel the same way as the author.

I have a theory that we're not caring as much about performance and good software because we don't have to; Moore's Law has led to machines that are fast enough to run our crappy and hastily written code with acceptable performance, so the majority of users do not really care about it running more efficiently.

In the past, when computing resources were constrained, we had to care about performance just so we could release software that worked well enough that people actually wanted to use it.

How do we get people to care about performance and efficiency in software? I don't know.


> I have a theory that we're not caring as much about performance and good software because we don't have to; Moore's Law has led to machines that are fast enough to run our crappy and hastily written code with acceptable performance, so the majority of users do not really care about it running more efficiently.

Depends on what you define as "acceptable" and as "users caring". The bloat does have actual consequences, like slow websites and greatly inflated bandwidth costs (which have direct consequences for users on metered connections).

I've heard complaints from completely non-technical friends about how frustrating it is to work with slow, unstable software. But of course they can't identify the detailed causes, because they don't know the technical details (nor are they interested in them).

Likewise, they can't stop using the software because they need it to do their job. So I think there is not much else they could do to indicate they "really cared".


I spend a significant portion of my work-week defending my employment - inputting time cards, updating JIRA tickets with "time spent", attending status meetings... all in the name of "efficiency". And on some level, I understand - nobody wants a Dilbert-esque Wally type wasting space. Yet the corporate attempts at maximum efficiency fail more horrifically than you would be able to imagine anything failing. Not only does the time spent proving that you weren't wasting time waste time itself, it's also so trivially gameable that the people who thrive are the ones who are MOST effective at being silently inefficient.


When the cost of increasing performance enough to satisfactorily run spaghetti is greater than the cost of writing efficient code.


> I have a theory that we're not caring as much about performance and good software because we don't have to; Moore's Law has led to machines that are fast enough to run our crappy and hastily written code with acceptable performance, so the majority of users do not really care about it running more efficiently.

I've always said that the lack of performance concerns is not a technology problem, this is a business and a societal problem. Hell, this might just be the first massively-distributed "tragedy of the commons" problem.

tl;dr- companies don't have to directly pay for client-side computing resources, so those companies which offload costs onto the client most efficiently have a competitive advantage. (Also, I kinda veered into talking about mobile computing later on here, and diverged into UX a little bit as well, but IMO performance is absolutely a part of the user experience so it's worth considering the two together.)

If I'm developing a piece of software that's not going to run on hardware I'm personally paying for, the only degree to which I have to optimize for performance is the point at which performance problems completely outweigh the usefulness of the app's features. You can get away with this for a long time with few repercussions - even longer if you can lock users into the software via "dark features".

The problem is, this leaves little space for companies who want to make highly-performant software, because it's still much easier to sell users on what a product does rather than how it does it... so you often can't afford to optimize for performance, because your competitor is going to gobble up user resources and wow them with shiny new features while you're still "boring" them with the "same-old, same-old". (This isn't how I personally feel, but you see my point.)

In a strange way, it's similar to the problem of designing great UX- the more your user has to use your software, the less you have to worry about making it convenient to use your software. You have successfully externalized the cost of dealing with it to your users.

Web applications themselves are a good example of this- companies with any sort of load absolutely do have to pay a non-trivial cost for each byte coming out of a datacenter or stored in an edge cache. The client's local cache is really the only "free" part of the whole process, so if you can get the client to store your entire app in cache, you can save a ton of actual real-world dollars in CPU cycles (not generating so much HTML/CSS/JS) and networking costs (not transferring so much HTML/CSS/JS) and caching costs (not storing so much HTML/CSS/JS in-house). If you're a startup trying to capture a chunk of a market, and you're only concerned about expanding your customer base, you're absolutely going to do that, whether it's ultimately good for customers or not.

I do think however, there are a couple factors which are changing for the better. First, Moore's Law is slowing down, and a lot of hardware optimizations these days seem to be revolving around getting the same performance using less power, rather than more performance for the same amount of power. Same with storage- application software can't continue bloating forever, we're starting to hit physical limitations.

Secondly, consumers are becoming more technology-literate, and with that literacy, are starting to understand the relationship between performance and features.

Thirdly, OS and device manufacturers are starting to experiment with tightly integrating common services into the system. For instance, NFC payments on mobile, or "iMessage-like" services. This is a bit of a double-edged sword, because OS and device manufacturers are also trying to lock you into their services, but they also have to be more serious about performance and UX because users are interacting with the system much more often than any given app. One way this could dramatically improve is by getting system manufacturers to agree on standards like RCS. When it doesn't make business sense to compete in a space because you literally can't do better than the well-understood solutions, there will be less bloat.

So, to sum it up, I don't think you improve things by getting users specifically to care about performance and efficiency- you do it by attempting to change the calculus around why people and businesses create inefficient software.


Software is hard. I blame it on Moore's law, in the sense that computers have gotten more capable but we programmers have not. It's like the brain can only hold so much before it gets overrun, and we reached that point several years ago. Working with large programs is very difficult, even for the best-architected ones. And of course we can't expect all software to be the best software.

The result of these big programs is people blindly using modules created by others, built on top of ever more modules. In theory this is good, though. It is specialization. But we haven't done a good job of it. I think if we want to make software easier to write (and as a result better), the place to start is the process of programming with more specialized pieces, be it a better module system or domain-specific features in the programming language. I think it comes down to managing the complexity so a person can better view the big picture.


One of my friends claims that we - software engineers - should work like mechanical engineers. They have a documented, standardized, and limited set of screws, nuts, ... and they build anything with it. This allows developing CAD tools to assemble, verify, simulate, ... On our side, we have thousands of implementations of basic SW blocks (fifo, mutex, ...), or even "bigger" concepts (server, reader/writer, state-machine ...). What do you think?


When we try to standardize software development, we usually end up with people mindlessly chaining together GoF patterns into AbstractSingletonProxyFactoryBeans.

I don't know why mechanical engineering is different in this regard from software "engineering". Maybe it's because the domain is more restricted (software concerns basically everything, so it's hard to come up with standardized patterns that are still widely applicable). Maybe it's because mechanical engineering is constrained by the laws of physics, whereas software engineering is mostly constrained by the developer's imagination (and resource constraints, but Moore's Law took care of that one for a long time).


The shit quality of software is easily explained as follows.

Because unlike in many other industries there are no strict regulations in software, there is no floor for the quality of the products.

So economic forces push quality to the lowest they can get away with.

It's much easier and cheaper to apologise to a customer for a bug and give them a refund than to ensure ahead of time that the bug doesn't happen in the first place.

The worst case scenario is not all that bad so generally businesses only bother to keep the quality high enough to compete but not "unnecessarily high".

If it costs $10k to make an app, you could always spend $100k to make it more performant and robust. You could also go the other way and only spend $1k and have a slower, buggier version. Where do you draw the line?

We end up reaching an equilibrium state.

Over the long term things get better incrementally. Individual vendors focus on their core objective and rely on those 'free' improvements.

For example a game maker whose game is a bit slow may make a 1 month effort to make their game faster and do what they can. But it's not practical for them to spend 1 year to achieve some sort of perfection state.


This isn't an economic issue, it's an efficiency issue.

There are 3 competing ideas: fast, cheap, quick. You get to pick 2. We're picking cheap and quick because "fast" comes for free on a longer-term timeline. You can argue whether or not this is the right choice, but it is the choice society is making, and IMO it works better than taking significantly more time to build something that is only 10% faster. It's more efficient to let hardware manufacturers solve the fast problem when the performance differences are on the order of 5-10% a year but the time-to-release differences are on the order of 50-100% of development time.


I think this is a symptom that, as engineers, we are now relegated to a second plane; our focus is just to accomplish the next task set by people who don't care about or understand a bit of software engineering.

You can read it here multiple times: "we are paid to make features". The not-so-passive sentiment that we should be relegated to code monkeys who accomplish business objectives is just so sad to me.


1) Some of these complaints are invalid: yes, Linux kills a "random" process when there is no more memory available, but except in a 'static' configuration where everything has a known size, what's the alternative?

You can have a misbehaving application use up all the memory while it's a normal application whose allocation fails - what do you do?

2) This bloat isn't new: I remember being amazed by BeOS responsiveness compared to Windows or Linux, but now I spend 80% of my time in a web browser, and I doubt WebKit is going to be faster on Haiku than it is on other OSs.


1) Don't promise memory you don't have. On Windows, VirtualAlloc will fail if there isn't enough page file space to satisfy the request. This allows the application to do something smarter than it can with the false promise it gets from mmap.

2) Really depends. Applications spend a lot of time in syscalls.


Yup, pretty sure that "overcommit" should never have been the default option. It is quite easy to disable (via the vm.overcommit_memory sysctl), although comparatively few people know that overcommit even happens.


1) It doesn't really solve the issue unless you can allocate everything you need at startup: if you need more memory but someone else had a bug which used up all the memory, you still have a problem.


You may have caches or other optional buffers you can free. Or you may choose to save user data or take some other protective action. It doesn't solve the fact that you don't have memory, but it does let you at least do something about it.


It's an interesting game we are playing.

Take websites as an example. We have to build websites using CSS and HTML and JS because that is what works in browsers. But browsers have to build DOMs, layout engines and JS engines this way because that's the way we write code. It's nobody's fault. It's evolution with a bad fitness function.

We need a new start. A new paradigm.


No, there were some deliberate design decisions. E.g., there was an attempt at a new start: XHTML 2. This got boycotted by browser vendors out of a mix of politics and business priorities.

Instead they went with HTML 5, which generally sees scripting as the core to make non-trivial web applications and also explicitly encourages JS libraries and frameworks.

So I don't think it's surprising that today, pages are piles of scripts on top of scripts.


In a sense AMP is the answer: produce less by having explicit limits while not throwing all tooling out. While AMP is somewhat more technically complex, it's much simpler when you consider what it disallows.

However, technology is only a tool; the real problem is in the domain of users - how to make useful systems, how to make users care. The technology is irrelevant for most of us.


So in a sense, web browsers are a bit too promiscuous. Website builders would have more incentive to properly engineer their websites if browsers were pickier. One issue with that is that it is the website's visitors who are most likely to see the errors, and not the designer of the site.


We should split the layers and rush everyone out of the room where the low-level things happen. Then, at least, the low-level things will not break.

In an ideal world, we would have building bricks from which even end-users could build an app. Instead, we have frameworks, which are only for programmers, and which provide only some automatable patterns (e.g. a web framework finds which code should be run based on the URL) or wrappers (e.g. ORM).

Frameworks and libraries _literally_ hide stuff from programmers. They don't even know how computers work.

I've written some 256-byte intros and showed them to my colleagues. They were amazed:

- Wow, which language do you use, Java? Oh no, you probably can't even use C++, only C. (Answer: you can't use any language, only assembly.)

- Which framework do you use, Unity? Or pure OpenGL? (Answer: you can't use any; you have to place every pixel yourself.)

They don't know what fits in 256 bytes. They don't care what they're producing. And it's not their fault.


I think I first read this rant in a 1997 article by Dr Peter Cochrane, former Head of Research and Chief Technologist of British Telecom [1]:

"In the past few years I have watched generations of word processors, spreadsheets, and graphics packs transcend the useful and user friendly, to become fiendishly complex - from auto-spelling checks that irritatingly pop on to the screen as you type, to the graphics-by-questionnaire that realise the wrong format in five easy stages.

Application changes of this type beggar belief. Not only do they consume vast amounts of storage, they reconfigure commands, change names and locations, present a vast range of never to be used (or discovered) options that just confuse users. [..]

It can only be that commercial considerations prevent us having a cut down and basic set of applications that are backward, forward and sideways compatible. Writing a letter, book, business case or report does not demand the capabilities of an entire publishing industry." - "How to upgrade your stress" - http://archive.cochrane.org.uk/opinion/archive/telegraph/199...

That was written on a 36MHz laptop with 20MB RAM and 0.5GB disk space.

Within weeks, he wrote "My recent computer hardware upgrade, and simultaneous software downgrade, has resulted in huge performance improvements. More or less all delays between hand, screen, and eye have been removed. Applications open instantaneously once booted up and files save in a second." - in "Beyond Biology" - http://archive.cochrane.org.uk/opinion/archive/telegraph/199...

And then he goes on to tear apart tonsky's argument about aircraft and cars: "If we could manipulate the space in materials we could perhaps reduce the weight of an aircraft by 90%, suggests Peter Cochrane [..]" - "There's nothing to the Universe" - http://archive.cochrane.org.uk/opinion/archive/telegraph/199...

[1] http://www.cochrane.org.uk/bio/


Absolutely agree with the author. I care about my programs' efficiency because someday somebody will compare my software with competitors' and write an article saying that mine is faster. That has a direct influence on my business. Furthermore, I like writing efficient programs. It's my engineering nature. It feels right to me.


> Windows 10 takes 30 minutes to update. What could it possibly be doing for that long?

Backing up the old Windows folder, decompressing (I guess downloadable updates use something very efficient in terms of compression, like LZMA), and probably some sleep() - I guess a slower update makes users feel like it's doing something substantial.

> Android system with no apps takes almost 6 Gb

That "System" figure is the size of all partitions that aren't data. Quite a lot of that is literally wasted space.


Well, if you take the analogy of the car, which today works quite efficiently and hassle-free, that is, I would argue, in large part due to regulation and bureaucracy - something that the Internet and computing do not have at the moment.

That word has some negative connotations, but I do not think that is fair, considering the history of cars, or of airplanes. We assume that the car we purchase has seat belts and catalytic converters, which prevent lots of injuries and effects on the climate and living things. These are mandated by law in many places, and that is a good thing, because we can worry about the entertainment system, service deals, colour and style and still get a car that meets certain "basic" requirements. I say "basic" as they can be quite advanced; we would hardly buy a car without ABS or a crash rating today.

Computer programs and websites do not have to meet any regulations, and they therefore easily succumb to bloat, because no one gets blamed for them being unusable. "Unusable" is broad, but a website over, say, 1MB is quite large, and plenty are wasting CPU cycles to render and compute - which might not be as polluting as exhaust from a car, but it is polluting, and it is stealing time.

Early cars had to have their oil changed or refilled very often, because it leaked into the combustion chamber or gasses leaked out. This caused emissions to be quite hostile, which is bad because it affects the environment and living beings' respiratory systems, among other things. We then got laws regulating this, and manufacturers were keen to improve the performance of their engines. Today there are 1.6 liter, four cylinder engines that work well pulling loads and still manage good fuel economy when driven under average conditions.

Another thing is that customers got fed up with steel wheels and rubber tires that blew out all the time, and the wheels on our transports improved as well - but that took some years to happen. The Internet and personal computers are still quite young and immature; there is still time for it all to mature quite a bit, and this article is a good step on the way towards that, in my opinion.

UX is maybe not as simple to regulate as cars and their emissions and safety standards, but I would say that we should not be all too opposed to some more requirements, or maybe even laws, regarding how computers and programs operate in some respects, and let the market - users, that is - dictate what is desired in other areas. We might like to think that the internet is the wild west, so to say, but this has led to some mighty bloat and headaches for all involved, as this article shines a light on.


And much like software, there are many of us who get nostalgic about the days when automobiles just worked, and were knowable, not inscrutable piles of nonsense.


Yeah, about that - it's a bit like when people complain that phones used to "just work".

Cars did not use to have AC or be all that stylish and comfortable; they lacked finesse.

Cars today are way more complex with a lot of features and things, which means that they are harder to build, use and maintain, same thing with software and phones.

A website today does more than just show text and images in a certain layout.

It's a trade off between easy to set up, maintain, use and having a lot of nice features.


I can think of many reasons to deliberately create software that is inefficient. Number one on the list is to make it more reliable. The number one reason software is rarely reliable is that code is more often than not complex to the point where the developer is unsure whether it will work or not. In turn, this occurs most often when trying to maintain state via flags. If, instead of trying to maintain state, the software is designed so that the current state of objects is always a function of the original state at the time the app started, you have an app that is potentially slow but likely to be extremely reliable and extremely easy to read.

I always program this way these days and my apps are always reliable. For example ToteRaider (see Toteraider.com), a complex gambling app released 2.5 years ago, is still on version 1. Zero defects in production and also zero defects in testing.
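
If it helps, the approach resembles what some people call event sourcing. A minimal sketch (made-up names, not ToteRaider's actual code):

    // Recompute the current state from the initial state plus the full
    // history of actions, instead of mutating flags in place - slower,
    // but reproducible and easy to reason about.
    const initialState = { balance: 0 }
    const events = [] // every user action is appended here

    function currentState() {
      return events.reduce(
        (state, ev) => ({ ...state, balance: state.balance + ev.amount }),
        initialState
      )
    }

    events.push({ amount: 50 })
    events.push({ amount: -20 })
    console.log(currentState().balance) // 30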


Great article! I've been thinking about this problem since 2006, and there are probably a hundred reasons why things are as bad as they are. Many commenters here have hit upon some: misuse of abstractions (they should be used to save time, NOT save knowledge), insistence on code reuse, etc. To add a few more: treating programming like a form of manufacturing instead of an art, believing that the latest language/tool/process is a silver bullet (and even believing that it's new!), not realizing/accepting that constant refactoring is vital, worshiping teams at the cost of individuals, not letting teams form themselves, and a huge one is this: trying to eliminate problems and/or complexity instead of mitigating them with tools and skill-building.

If you try to eliminate a problem or complexity, you will succeed, but at the cost of creating at least as many different problems as you had before, and you'll just end up moving the complexity to another place. Later, when you realize you still have problems, you'll try to eliminate them too, and do the same thing all over again. Building layer after layer, always increasing your problems and complexity while attempting to reduce them. (This is exactly the same as government controls breeding more controls, and it happens for the same reasons.)

When I first started out, if you were programming computers at all (microcomputers, anyway) you were someone who was naturally curious and probably had a high IQ. If you continued programming, it's because you really enjoyed the process and had a knack for it. You understood how the computer really worked, and that didn't scare you, it thrilled you. We didn't have "testers" and "UX Designers" and "QA people" back then, so if you produced a good program at all, it's because you had a high level of conscientiousness. Today, that description fits only a tiny minority of the programmers that I know, just like it fits only a tiny minority of the general population. Those programmers all have the same complaints that you do, and they all have side projects that they work on because the work in their day jobs is unfulfilling.

As someone here pointed out, this is a relatively new industry, and I think we're just seeing the same thing that plays out in any industry once "everybody" starts doing it. Just like vacuum cleaners, mattresses, doctors, or oil paintings, there are a handful of great ones, a whole bunch of mediocre ones, and some really awful ones. The bell curve has arrived and is here to stay!

My belief is that if you want to have great software, then make it yourself. Don't wait for the majority of others to produce it, because they never will.


My buddy Tom says "Lots of people are writing code now. Most of them shouldn't" He's old-school write-it-from-the-bottom-up. His solutions are elegant, bulletproof marvels that fit in tiny devices and run like lightning.


Just give up on modern software and join the demoscene.


Does it pay the bills?


Nice try. It's everyone's wish that somebody makes efficient software for them. But when it comes to themselves, they just lump some random components together that seem to do the job and call it a day.


Yeah, welcome to the club. I've been making software for 30 years, and... IMHO, although the last 10 years might have been the worst in terms of bloat, inefficiency and don't-care-ness, it's not just that. That's a result. It's more that the IT industry as such is trying to grow for the sake of growing. At any price. Much like a lot of other industries and/or institutions trying to exist for the sake of existing. And they will, as long as there are users. "Each train has its passengers," as the saying goes.


> Much like a lot of other industries and/or institutions, trying to exist for the sake of existing.

You've just described the purpose of literally every life form.


To capitalize on current programmers' know-how while meeting those expectations of performance would require a radical shift in hardware. It would demand hardware that is optimally designed for multi-paradigm high-level languages.

Ironically, that was what the Burroughs 5000 was: https://www.smecc.org/The%20Architecture%20%20of%20the%20Bur...

"The Burroughs B5000 was designed from the start to support high level languages (HLLs), specifically ALGOL and COBOL. The initial features supporting this were the hardware-managed stack, the descriptor, and segmentation. A redesign of this system, the B6000, extended the concept by moving the tag bits to their own field, external to the 48-bit data word. The instruction set was also revised and extended, and has remained essentially unchanged ever since. HLLs are supported by hardware that allows virtual memory, multiprocessing, fast context switching and procedure invocation, memory protection, and code sharing."

Emulator project: http://retro-b5500.blogspot.com


People are focusing too much on the "performance" aspect of the article.

The "Programming is the same mess" section is what really resonates with me, specifically this:

> And dependencies? People easily add overengineered “full package solutions” to solve the simplest problems without considering their costs. And those dependencies bring other dependencies. You end up with a tree that is something in between of horror story (OMG so big and full of conflicts) and comedy (there’s no reason we include these, yet here they are):


One reason for software bloat is the DRY principle. From their very first steps, every programmer is told: DRY! As a result, modern software usually has many abstraction layers (because with many layers you can reuse more code) and hundreds of dependencies (instead of writing/copying a few lines of code, we just add one more dependency, which has its own dependencies, and so on).
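
The left-pad incident is the canonical illustration of the trade-off: the whole dependency amounts to a few lines you could own yourself (a rough equivalent from memory, not the actual npm package source):

    // A local helper roughly equivalent to the left-pad package:
    // a few copied lines instead of a dependency (and its dependencies).
    function leftPad(str, len, ch = ' ') {
      str = String(str)
      return ch.repeat(Math.max(0, len - str.length)) + str
    }

    console.log(leftPad(7, 3, '0')) // "007"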


I couldn't agree more. I was a maintenance programmer/developer for 30 years and saw too much of this. It disgusted me to the point where I just rewrote entire systems during slow periods (unauthorized and on the sly). The Amiga OS had full multi-tasking with robust inter-process messaging and a full-featured windowed GUI, and it fit onto one 720K floppy, so there is no valid reason that we need multiple gigs for the current version of Windows. I'm posting a link to your excellent article on Daniweb to encourage discussion.

Reverend Jim (on Daniweb)


Software is typically written to serve some business need, and when crappy software serves that need it's incredibly difficult to convince the organization to invest more in quality. However, being asked to crank out half-baked features, poorly tested code, and poorly designed systems takes a toll on programmers who want to be proud of their work.

So what is a programmer to do? I feel like buyers and sellers of software are each cool with the status quo, so who has incentive to improve?


an IRCer reminded me of http://harmful.cat-v.org/software/ ;)


Excellent. You could have mentioned the tremendous amount of energy wasted by servers and data centers to drive the mess. Is there such a thing as Code Ecology?


It really looks like what you are looking for is Software Craftsmanship. Several books have been written tackling this topic. You can watch Robert C Martin (aka Uncle Bob) and others (Sandro Mancuso) talking about this problem and how to address it.


I think sometimes we, as programmers, are more worried about our own coding experience than about the final product and its quality. And some of that might have been caused by terrible deadlines imposed by PjMs or "Architects".


I agree with the author's statement of the factual elements: size of binaries, memory utilization, load speeds, resource-intensive simple-task applications.

But I disagree with the author’s reasoning and line of thinking regarding the how and why.

The functionality and time period being referenced, when things were simple and fast, was a period when your interface was low resolution or entirely text based. The work of updating information on the screen and on the back end was minimal. Contrast that with today, where you are dealing with a large display with high resolution imaging. You are now talking about a full-blown HD GUI where your screen updates include plenty of images and videos, and on the back end you're handling edits that may not even be local resources, subject to network latencies.

Also, modern computers are basically all multi-user systems under the hood, configured by design to be able to recognize, load drivers for, and initialize any and all devices you might deem fit to stick into the computer. Your text editing is running at the same time the system is performing back end scans, probably running multiple web pages, mini services, etc.

Not saying there isn't inefficiency or that there isn't really bad design. But rather than look at it as meaningless, understand the underlying reasons why.

Why has no major project taken on the task of a new lean kernel? You can take existing kernels and slim them down. Open source means being able to take the kernel source, reconfigure it to remove the bulk of the cruft you don't need, and shrink your footprint.

It also means your OS doesn't need to be the size of one or more DVDs; you can slim it down.

Same thing for compilation toolchains. You can slim it down. You don’t need to include every library you’ve ever loaded onto your compile environment.

There ARE lightweight text editors out there. The heavyweight ones exist because people wanted support for all kinds of fonts, formatting, input/output file formats, multimedia, etc. If you want to pare that down, you can.

Sure, there are lazy coders out there. But what one person calls extra cruft, another person calls tech debt waiting to happen.

So you write a super lightweight piece of code that doesn't need any special libraries because you wrote it all using standard library calls. Yay. Who is going to maintain that when you leave?

You write an application component and forego all of the build toolchains and frameworks that make up so much bulk now, so you hardcode everything. That's a maintenance nightmare when it gets deployed and other people need to integrate it into a production stack, because no one ever bothered to ready it for CI/CD.

Yes, there is a lot that sucks about software design at large. But the author is painting with a broad brush, focusing on one set of pet peeves as observed, and ignoring the underlying reasons why those design decisions might be in place.

For build environments (npm, python, ruby, golang, etc.): you can configure the build environment for your piece of code to require a specific version of npm, dependencies, etc. This freezes the runtime environment so that as the language at large advances, you aren't forced into a broken state. This applies to OS package management as well.

The example of DOS being able to run on modern hardware... that is only seeing one facet of the situation. Yes, DOS itself, which is x86 code, can run on modern hardware unchanged... but that isn't because of excellent coding. That is because of the massive effort put in by the chip makers to maintain backwards compatibility. In that example, the bulk and legacy stuff just got offloaded to hardware. :/ It works not because it's great, but because the hardware was designed to allow it to keep working.

Much in the same way, some of the more visible software bloat exists for compatibility: backwards compatibility, cross compatibility, wide-array hardware compatibility, file and data format compatibility, language compatibility, etc.

The author makes some good points, but I think that the author also has some tunnel vision going on.

Can UI be improved and be less bulky? Hell yes.

Can applications return to their simple roots and be highly performant and low latency? Hell yes (as long as you are willing to pare down on optional functionality).

- Can we run fast OS environments on "older" hardware? Yes! But with the caveat that the price you pay is using older libraries, older methods, or having to back-port code yourself.


> We haven’t seen new OS kernels in what, 25 years?

There's actually one in active development. www.redox-os.org


Can't agree more. In the early days IT was a science. Now it's a playground.


Finally, someone I agree with on this topic


I share the sentiment, but I also appreciate the truth.

> Text editors! What can be simpler? On each keystroke, all you have to do is update tiny rectangular region and modern text editors can’t do that in 16ms.

I use text editors with syntax highlighting. Also IDEs with completion, error highlighting, etc. Assuming we are talking about those, updating a rectangular region is not "all you have to do". (Though it is the only thing that has to be done synchronously.)
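
To illustrate the parenthetical, a hedged sketch (the editor object and its methods are hypothetical): the keystroke's rectangle update must land within the frame, while the expensive passes can be deferred.

    // Apply the keystroke synchronously; push expensive work to idle time.
    const editor = {
      text: '',
      insert(ch) { this.text += ch },  // cheap: update the dirty rectangle
      rehighlight() { /* expensive tokenize + colorize pass */ },
      runDiagnostics() { /* lint, squiggles, completion indexes */ }
    }

    function onKeystroke(ch) {
      editor.insert(ch) // synchronous: must fit in the 16ms frame budget
      requestIdleCallback(() => {
        editor.rehighlight()
        editor.runDiagnostics()
      })
    }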

> Windows 10 takes 30 minutes to update. What could it possibly be doing for that long? That much time is enough to fully format my SSD drive, download a fresh build and install it like 5 times in a row.

You can format an SSD, download a multi-gig image and install it in under 6 minutes?

> The whole webpage/SQL database architecture is built on a premise (hope, even) that nobody will touch your data while you look at the rendered webpage.

What?

> Ever seen this dialogue “which version to keep?” I mean, bar today is so low that your users would be happy to at least have a window like that.

I'm failing to see the problem with that dialog. It's hardly new. Vim:

    WARNING: The file has been changed since reading!!!
    Do you really want to write to it (y/n)?
> Nobody thinks compiler that works minutes or even hours is a problem.

Everyone thinks that's a problem. It's the reason the memory-bloating scripting languages the author is shitting on have become so popular.

> We put virtual machines inside Linux, and then we put Docker inside virtual machines, simply because nobody was able to clean up the mess that most programs, languages and their environment produce.

Huh? When would you ever put virtual machines inside Linux and then Docker inside those virtual machines? (Maybe a Docker dev testing against different kernel versions?)

> Linux kills random processes by design.

It can kill processes (configurable). But AFAIK it doesn't do it randomly.

> Programs can’t work for years without reboots anymore.

Anymore? Really? I reboot my Windows computer one-tenth as often as twenty years ago.

---

Well, there might be problems, but at least the author is not overgeneralizing, right?

> Nobody understands anything at this point. Neither they want to.

> What’s worse, nobody has time to stop and figure out what happened.

> Nothing stops us from making build process reliable, predictable and 100% reproducible. Just nobody thinks its important.

> We’re stuck in local optima and nobody wants to move out.


> You can format an SSD, download a multi-gig image and install it in under 6 minutes?

5 minutes to download 4 GB at 100 Mbit/s, 0.5 minutes to write 4 GB at 1 Gbit/s

> I'm failing to see the problem with that dialog. It's hardly new.

It's always been a problem. The trouble is, you want the work from both versions combined, not to select one and destroy the other.

> Everyone thinks that a problem.

Yet new languages (e.g. Rust) are still being developed with terribly slow compilers.

> It can kill processes (configurable). But AFAIK it doesn't do it randomly.

Well, I meant randomly in the sense that you can hardly predict or control what will happen. Of course it's a deterministic algorithm, but engineers treat it as a black box.

> Anymore? Really? I reboot my Windows computer one-tenth as often as twenty years ago.

Why do you need to reboot at all?


> 0,5 min to write 4 Gb at 1Gbps

By "install it", I thought you meant install Windows, which takes for longer than 30 seconds. It's not a straight byte copy. For one, part of the install process compiles code for your variation of CPU architecture.

> It’s always been a problem.

In fact, it's less of a problem now than it was 15 years ago. Google Docs, Office 365.

> Yet new languages (e.g. Rust) are still developed with terribly slow compilers

Slow compilers (Rust, TypeScript, Scala, Haskell) almost universally have complex type inference.

Languages with limited type inference (Go, Java) have fast compilers.

It's really a matter of choice of language design, rather than compiler implementation.

> Well, I meant randomly in a sense that you can hardly predict or control what will happen.

This is like complaining that search engine results are random because the ranking is complex. Does it really matter?

> Why do you need to reboot at all?

Usually my battery dies.


> By "install it", I thought you meant install Windows, which takes for longer than 30 seconds. It's not a straight byte copy. For one, part of the install process compiles code for your variation of CPU architecture.

That was my point. It does something very slow but unnecessary. If you could install it by straight copy, it would be way faster. And there's no reason why it can't be a straight copy (or a really fast unzip & compile).

> Google Docs, Office 365

Google Docs disables offline mode by default. It only works as long as everyone is almost on the same page. It's a step forward, yes. I wish using those algorithms were as easy as writing LAMP apps.

> Usually my battery dies.

That doesn't mean it has to reboot. It's just an implementation decision of current OSes. See Phantom OS.


  I'm failing to see the problem with that dialog
To me, the problem is that you are forced to choose: one set of changes must be lost. A better design would be (1) save, (2) revert to the original, (3) accept the other version instead, or (4) put me into a diff/merge process so I can control which changes (from both sources) are kept.


Article tl;dr: "I believe that apples are more efficient than oranges after all."

I just want to point out that the idea that "modern buildings use just enough material to fulfill their function and stay safe under the given conditions" is fundamentally at odds with the author's subsequent thesis.

Modern buildings don't use "just enough material", because "just enough material" would be _just_ concrete everywhere; it would have been "just wood" 200 years ago, but that's not good enough now. This is exactly the problem: it's not that software is unnecessarily bloated, it's that software has evolved to solve higher-order problems, ones that are not simply based in how fast a computer can count to 10. Similarly, the definition of "fulfill their function" in the context of buildings has changed too. That definition changes all the time, even in building codes.

In the modern-building hierarchy of needs, we are way past the "stay safe" level. We still optimize for safety, sure, but that hardly accounts for your spray-foam insulation, HVAC, built-in wireless units, complex built-in cabinetry, complex appliances, and more. Simple things like "electricity" are now part of the definition of fulfilling building function. Go find a 200-year-old building and you will find a building that simply does not fulfill today's functions. Even safety standards have changed and continue to change all the time. You can look at historical building codes and see evolving fire safety (asbestos? NO ASBESTOS!), seismic safety, and more.

This is the point. To say that Windows 95 was 30MB discounts the years of improved process space isolation, memory protection, and Spectre mitigation that, if missing, would cause enormous public backlash about Microsoft not caring about security. Windows 10 is 100x larger because WE asked for it to be. WE wanted WiFi, VPN, IPv6 switchover and tunneling all added to our network stack. WE wanted GPU-enhanced UI threads. WE want haptic feedback and touchscreens and predictive text and predictive multi-touch pixel accuracy for our touchscreen laptops. This extra complexity exists because our standards changed. A text editor isn't just something that renders ASCII anymore - heck, it's not even just for rendering characters. My "text editor" is a full web browser because I _need_ that for development these days.

Extra complexity is a feature, not a bug. We built computers specifically to do this stuff, not in spite of it. The abstractions and complexities aren't getting in our way; they are literally the things we are building. Does performance suffer? Maybe, but that's because we are explicitly paying for functionality. If you want a fast text editor, obviously a black-and-white screen that only renders 256 characters will be faster than VS Code; go ahead and use that software, but you're not getting the other things you probably want. Your very next complaint will undoubtedly be "how do I easily diff my Git branches?" - and this is how software becomes more complex.

Welcome to progress.


You don't need a web browser to edit text. As evidenced by dozens of excellent text editors that are not built on web browsers.

It's just easier to take a web browser that already has something approximating a text editor, and pile things on top of that.

And why is that? Well, because you want to support all the different platforms, and we as an industry have absolutely screwed up the portability story, and so we build it as hacks instead. There's absolutely no reason why developing a GUI app for macOS should be radically different from developing a GUI app for Windows or Linux - they all ultimately do the same thing. There's no sensible reason for them to be different. But they are. And so now the easiest way to solve that problem is to pile the browser on top, and forget that the differences exist. Of course, it doesn't actually solve the problem in general, because the differences are still there, and the industry as a whole still pays the overhead tax, in both man-hours of work someone has to spend maintaining that flimsy stack of abstractions, and in runtime performance tax those abstractions impose.

But the only way to fix it is to burn the whole thing to the ground. And that's not happening, because the short-term cost is too large to even contemplate any long-term gains.


> My "text editor" is a full web browser because I _need_ that for development these days.

I mean, you definitely don't need a full web browser for software development. That's maybe the least worst option you have at the moment (which is sad) but it could be hella smaller.


functional programming saves the world. learn that.



