
When did software go off the rails? - jasonhanley
http://blog.jasonhanley.com/2017/08/when-did-software-go-off-rails.html
======
perlgeek
> Hardware capability increases exponentially, but software somehow just
> bloats up, using the power and space, without providing much more
> functionality or value.

I disagree. Our baseline for software has increased dramatically. If you don't
care much about the added functionality or value of the new software, use
Mosaic or Netscape 4.0 to browse the web.

There are obvious improvements in browsers which you are so used to that you
forgot them: tabs, ability to zoom, process-level isolation for websites,
support for newer protocols, CSS, Unicode support, font rendering, things I'm
probably not aware of, a built-in debugger and inspector in the browser, and
so on.

Again, if you think that software hasn't advanced much, simply use the
software from 10 or 20 years ago, and see if you can stand the experience.

~~~
daemin
I think for the starkest contrast, play games from 10-20 years ago. If the
difference in graphics, animation, physics and world size doesn't amaze you,
then the changes in user interface should. It is so much easier to play modern
games, especially for the disabled.

~~~
sametmax
Yeah but the level design sucks, the writing is terrible, open worlds are
bloated with useless details and not much gaming, gameplay is repetitive and
lacks innovation, and atmosphere and personal touch have been traded for a
myriad of pixels. Those games are big, fat, short interactive movies without
souls.

The best games I played recently are all indie stuff that I could have played
on a much older machine.

~~~
jerf
"The best games I played recently are all indie stuff that I could have played
on a much older machine."

If you try it, you may discover that's not the case. A lot of indie stuff is
taking advantage of faster computers to use things with higher levels of
abstraction, and indie games often have really quite terrible levels of
performance relative to the complexity of the game as compared to a AAA title.
They run at 60fps because they're net simpler, and a lot of times you may find
they're _barely_ running at 60fps on a medium-class machine.

I'm not complaining, because the alternative is often that the game simply
wouldn't exist if the only choice the developers had was to drop straight into
C++ and start bashing away with OpenGL directly.

~~~
Bartweiss
I think this is a really good point. It's true that indie games are often
simple, and don't use complex graphical features. But it's also true that a
lot of them are startlingly memory-heavy and poorly implemented - they're
comparable to AAA performance, but for a vastly simpler task. Every so often
something like Dungeons of Dredmor will bog down on me despite being a very
basic task on a high-end machine.

I don't object to that, either. People are knocking out games in high-level
languages or using extremely rich frameworks. You can put out an Android game
and barely touch a single Android feature because your environment is so
extensive. We do pay a price in speed and memory usage, but the upside is that
people get to release all kinds of wild projects without learning the arcana
of their environment.

It's fantastic that a team of <5 people with limited software expertise can
put out a genuinely brilliant game. I'm less happy with it outside of gaming,
where people often have to standardize on a single product. But video games
are moving steadily closer to boardgames, novels, and other art forms where
the tools of the trade aren't a limiting factor.

------
seanwilson
I think a lot of it is that you tend to only optimise as much as you need to. When
the average user only has 4MB of memory, you're going to spend a lot of time
optimising image size for example. When you can assume 4GB of memory, you're
going to put that time into other places as the effort would be very difficult
to justify.

Wouldn't your users appreciate more features than optimisations most of them
aren't going to notice? For the same block of time today compared to decades
ago, you're going to be creating an app that uses more memory and CPU but has
a lot more features. Developer time isn't free and needs to be justified.

I used to write DOS apps for 66MHz PCs and you'd spend an enormous amount of
time optimising memory and CPU usage. This was similar for the initial batch
of Android phones as well as you didn't get a lot of resources (e.g. loading a
large image into memory would crash the app). Now I can create more features
in way less time and rarely have to think about optimisations unless dealing
with a lot of data.

I think expecting every software developer to release software that uses the
absolute minimum amount of CPU and memory possible is completely unrealistic.
The people commenting that developers don't know how to write optimised code
anymore have different priorities to a typical business. I know how to make
low level optimisations and have even resorted to assembly in the past but I'm
only going to these lengths when it's absolutely required.

For commercial projects, it doesn't make any business sense to optimise more
than is needed even if that makes software developers squirm.

~~~
fauigerzigerk
You are right, but the result of all apps (plus adverts) taken together is a
very sluggish system.

I think what operating systems should do is allow users to set per-app and
per-website quotas, and use sensible defaults.

Developers should get the message that no, we can't use all the resources
available on a given system just for our particular app.
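
Some of the plumbing for this exists already. A rough sketch (not how any OS
exposes this to users today, just an illustration using the POSIX setrlimit
call, with an arbitrary 512MB cap) of a launcher that constrains an app before
starting it:

    // Hypothetical launcher sketch: cap the app's address space, then exec it.
    // The 512 MB figure is an arbitrary example, not a suggested default.
    #include <sys/resource.h>
    #include <unistd.h>

    int main(int argc, char** argv) {
        rlimit cap;
        cap.rlim_cur = 512UL * 1024 * 1024;   // soft limit
        cap.rlim_max = 512UL * 1024 * 1024;   // hard limit
        setrlimit(RLIMIT_AS, &cap);           // cap total virtual memory

        // Replace this process with the app to be constrained; allocations
        // beyond the cap now fail instead of dragging the whole system down.
        if (argc > 1) execvp(argv[1], argv + 1);
        return 1;
    }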

~~~
chriswarbo
Whilst this would be nice for power users (I've certainly hit out-of-memory on
Linux and had random processes killed), it wouldn't have much effect in the
grand scheme of things.

Commercial developers will use as much memory as they can get away with; OS
vendors would disable the quota system (or set the quota to 100% of whatever
RAM the device has) since they don't want apps to "break" on their
laptop/tablet/phone and work on their competitors'.

~~~
jackmott
There are some very basic education and industry choices we could make to get
a ~4x or so efficiency gain without any substantive increase in development
time/effort/maintenance cost, etc.

If people graduated from college with just a little bit of understanding of
how slow RAM is, how CPU caches work, and how expensive allocating memory on
the heap is (either up front with something like malloc, or amortized with
garbage collection), and if we stopped using languages that haven't at least
built a good JIT yet and used AOT native compilation more often, we would all
be in a much happier place. Systems would be much snappier, or we could add
more features, users could run more things at once before it all goes to hell,
and batteries would last longer.

None of this requires even changing what language you use, or massive changes
in coding style. Just knowing some basic facts about how the computer works
can let you choose the more efficient option among equivalently complex
options: "don't need a linked list here, let's use an array-backed list
instead"; "let's not use LINQ here because that allocates and this is a tight
loop that runs on every request"; "let's not use Electron, let's build
something with a similar external API that is more efficient".
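
To make the first of those choices concrete, a hedged micro-example in C++
(illustrative only, numbers not benchmarked): both versions are O(n) and
equally simple to write, but the contiguous one walks sequential cache lines
while the list chases a pointer per node.

    #include <list>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<int> contiguous(1'000'000, 1);  // one block of memory
        std::list<int>   scattered(1'000'000, 1);   // one heap allocation per node

        // Identical algorithm and asymptotic cost; the vector version is
        // typically several times faster purely from cache locality and the
        // absence of per-node allocations.
        long a = std::accumulate(contiguous.begin(), contiguous.end(), 0L);
        long b = std::accumulate(scattered.begin(), scattered.end(), 0L);
        return a == b ? 0 : 1;
    }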

~~~
seanwilson
Why do you just assume apps using more resources now is because of coders with
bad optimisation knowledge?

> lets not use electron, let's build something with a similar external API
> that is more efficient

People also choose Electron because it's efficient in terms of development
time for releasing apps on multiple platforms, not because they don't know how
to optimise.

~~~
dkersten
Tools like Qt+QML are, in my experience, just as efficient in terms of
development time, but much more computationally and memory efficient than a
typical Electron application. I'm not saying that you can't make Electron run
efficiently, just that it takes more work; since most teams optimise for
development time rather than computation, the end result is that Electron is
(again, in my personal experience) typically less computationally/memory
efficient. It also helps that in QML, JavaScript is typically used only as
non-performance-critical glue code, while the rest (and the framework itself)
is in C++ and the GUI rendering is done in OpenGL.

------
wahB4vai
Organizations get what they reward.

Tech Debt is rewarded.

Doing something for the first time, almost by definition, means one does not
really know what one is doing and is going to do it somewhat wrong.

Hiring less skilled labor (cheap coder camps, for example) to implement
buzzword bingo solutions gets you into a place where all the software contains
large chunks of its substance coming from people doing it for the first
time... and not 100% right.

As we never go back to fix the tech debt, we end up building stilts under the
borked and the less-than-great. When that structure topples over, we start
over with a new bingo sheet listing the hot new technologies that will fix our
problems this time round for sure.

I'd think that a good fraction of the current language expansion is that the
older languages are too complex and filled with broken. Green fields allow one
to reimplement printf, feel great about it, and get rewarded as a luminary in
the smaller pond.

.....

oh... and well, the cynic in me would argue planned obsolescence at the gross
level. No one buys the new stuff unless there's new stuff.

~~~
krylon
> Doing something for the first time, almost by definition, means one does not
> really know what one is doing and is going to do it somewhat wrong.

The Mythical Man-Month has a chapter on this titled "Prepare to throw one
away". Brooks argues that the program/OS/whatever you build the first time
around should be considered a prototype. Then you reflect on what problems you
encountered, what went well, and so on, and use those insights to guide you
when you start over.

It seems like such an obvious idea, but Brooks wrote that over 40 years ago,
and it seems like only very few people listened. Primarily, I guess, because
much software is already written under highly optimistic schedules - telling
management and/or customers that you want to throw the thing out and start
over is not going to make you popular.

~~~
coldcode
That do-it-twice idea originally came from Royce's famous article in 1970,
which led people to claim he invented Waterfall (due to the diagram he started
with). His reflection on practices from leading teams in the '60s was: do the
project once to learn everything, then do it again and ship the second one.
Worth reading.

------
GuB-42
This is not a single thing.

I think the biggest culprits are abstraction layers. You use a framework that
uses JS that uses a VM in a browser that runs on an OS. The time when showing
text was done by the application writing a few bytes to VRAM is long gone.
Each layer has its internal buffers and plenty of features that you don't use
but are still there because they are part of the API.

Another one is laziness: why bother fitting into 1MB of RAM when we have 1GB
available? Multiplied by the number of layers, this becomes significant.

Related to laziness is the lack of compromise. A good example is the idTech 5
game engine (Rage, Doom 2016). Each texture is unique, there is no tiling,
even for large patches of dirt. As expected, it leads to huge sizes just to
lift a few artistic constraints. But we can do it, so why not?

Another one is the reliance on static or packaged libraries. As I said,
software now uses many layers, and effort was made so that common parts are
not duplicated, for example by using system-wide dynamic libraries. Now these
libraries are often packaged with the app, which alleviates compatibility
issues but increases memory and storage usage.

There are other factors such as an increase in screen density (larger images),
64-bit architectures that make pointers twice as big as their 32-bit
counterparts, etc.
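
The pointer-size point is easy to check for yourself; a trivial C++ snippet
(results obviously depend on the target you compile for):

    #include <cstdio>

    int main() {
        // A typical 64-bit build prints 8; the same code compiled for a 32-bit
        // target prints 4, so pointer-heavy data structures roughly double in
        // size just by switching architectures.
        std::printf("sizeof(void*) = %zu bytes\n", sizeof(void*));
        return 0;
    }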

~~~
amagumori
just to set the record straight, idtech5 uses "megatextures" / sparse virtual
texturing, which is actually a very clever performance enhancement - a low-
resolution render of the scene is made to determine the needed mip levels for
the visible textures, which are streamed from disk into an atlas texture. then
there's a redirection texture that maps from the textures needed by models to
the UVs of the correctly mip'd texture in the atlas. it's a great solution to
disk and API latency in games. to call it bad because it's a big texture
instead of 50 textures individually streamed from disk...it's not a lack of
compromise. it's a great engineering solution, you dingus!
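
A heavily simplified sketch of the indirection step described above, with
made-up names and sizes (the real thing lives in shaders and also handles mip
levels, page borders, and streaming):

    // Hypothetical page-table lookup for sparse virtual texturing: map a UV in
    // the huge virtual texture to a UV in the physical atlas that holds the
    // pages actually streamed in.
    #include <algorithm>
    #include <utility>
    #include <vector>

    struct PageEntry { float atlasU, atlasV, scale; };  // where this page lives in the atlas

    struct VirtualTexture {
        int pagesPerSide;                  // virtual texture is pagesPerSide^2 pages
        std::vector<PageEntry> pageTable;  // filled by the streaming system

        std::pair<float, float> toAtlasUV(float u, float v) const {
            int px = std::min(int(u * pagesPerSide), pagesPerSide - 1);
            int py = std::min(int(v * pagesPerSide), pagesPerSide - 1);
            const PageEntry& e = pageTable[py * pagesPerSide + px];
            float fu = u * pagesPerSide - px;  // offset within the page, 0..1
            float fv = v * pagesPerSide - py;
            return { e.atlasU + fu * e.scale, e.atlasV + fv * e.scale };
        }
    };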

~~~
GuB-42
I don't think that megatextures are bad; in fact, I am kind of a fan of John
Carmack and id Software.

The rationale behind megatexture is that storage capacity increases
exponentially but our perception doesn't. There is a limit to what our eyes
can see. In fact, for his future engines, John Carmack wanted to go even
further and specify entire volumes in the same way (sparse voxel octrees).

And sure, the way megatexture is implemented is really clever, and yes it is
for a good reason, but it doesn't change the fact that it makes for some of
the biggest games on the market (Doom is 78GB).

When I said no compromise, it is no compromise for the artists. The whole
point of megatexture is to free them from having to deal with some engine
limitation. They don't have to be clever and find ways to hide the fact that
everything is made of tiles, they just draw. And yes, this is a good thing,
but a good thing that costs many gigabytes.

------
dvfjsdhgfv
To understand this, you need to read Andy Grove, especially "Only the Paranoid
Survive". It's fascinating: basically everything I, as a user, see as a boon,
he perceives as a threat. From his point of view, everything that allows
people to buy cheap machines, run fast software etc. is negative and needs to
be dealt with. Intel basically didn't change over the years, with the recent
x86/ARM fuss showing just that. On the other end of the spectrum are companies
exploiting the existing possibilities - for a long time it was Microsoft,
making each version of their flagship products more and more resource hungry,
so the users were forced to upgrade ("What Andy giveth, Bill taketh away").
What is happening now is the extension of the same story - "Memory is cheap?
Let's use all of it!".

As a developer, you rarely care about memory usage; as a web developer, you
have limited influence on CPU usage. And since most managers care only about
getting the project done on time and within the budget, this is what most
developers concentrate on.

~~~
noir_lord
> And since most managers care only about getting the project done on time and
> within the budget, this is what most developers concentrate on.

I think that is the crux of the issue succinctly put.

~~~
amiga-workbench
Commercial software is very rarely of a sufficient quality.

Software that comes out of nonprofits or the free software movement is
arguably better built and treats the user better.

~~~
sacado2
[citation needed]

Lots of people stopped using firefox in favor of chrome precisely because
firefox was incredibly greedy memory-wise.

~~~
noir_lord
I stopped using Chrome on Linux (except for development and testing) because
it absolutely batters the CPU, enough that it powers on the fans on my laptops
where FF rarely does.

I've never really understood why either, and it's been on two totally
different laptops.

------
jeffdavis
Software is causing science to move backward.

Software makes the world more complex faster than we can understand it, so
even though we have more knowledge we understand less about the world.

We used to know how cars work. We used to know how phones work. Now we don't,
and never will again.

The implications are unsettling.

~~~
titzer
+1000 to this.

Imagine a world populated entirely by IoT devices. Imagine, for a moment,
starting with a blank slate and trying to make sense of these devices using
the methods of science. They are so complex and their behavior is governed by
so much software that it'd be impossible to make a local model of how the
device actually worked. You simply would not be able to predict when the damn
thing would even blink its lights. When the world gets to this point... one
would have to understand how software works, in many different programming
languages; kernels, scripts, databases, IO, compilers, instruction sets,
microarchitecture, circuits, transistors, then firmware, storage... it'd be
impossible to reverse engineer.

~~~
adrianN
It would still be simpler than figuring out how biological machines work, but
biologists are trying, with some success.

~~~
titzer
My point is that the stack of knowledge necessary to understand how an IoT
device really, fundamentally works is completely stupid.

A toaster is not a complex thing. It has a few springs, a tray, a body, some
heating elements. Some wires. There is absolutely no need to put the internet
in there.

/rant

~~~
alexlarsson
You think a toaster is simple? Try building one from scratch! This guy did:
[http://www.thetoasterproject.org](http://www.thetoasterproject.org) and it
was HARD!

~~~
titzer
Well, smelting your own iron and making plastic are the hard part. There is
nothing particularly challenging if you have a few pieces of metal lying
around.

~~~
alexlarsson
Sure, it all depends on how you define a "blank slate". The whole world of
engineering is a huge stack, and near the bottom are things like smelting iron
and making plastic, up a few layers you have things like standardized screws,
near the top you have things like kernels, databases, etc.

If all these things can be taken as a given, why would you not want to use
them? I mean, yes, you can avoid some complexity by making a simple toaster,
but the second the consumer wants things like "never burn my toast" or
"personalized toast levels" you need to go up the stack.

That said, some IOT things are clearly lame ideas that should never have been
made in the first place, but that doesn't mean you should avoid using existing
technology.

~~~
titzer
> If all these things can be taken as a given, why would you not want to use
> them?

When the device breaks, what do we do with it? If it is mostly software, it is
not user serviceable, whereas something with a spring and clips and wires is
something that a curious person armed with a screwdriver could disassemble and
fix.

I fear that software is ushering in an age where users are absolutely helpless
when something breaks. Then we get big stupid bricks with chips in them.

~~~
maxerickson
The complexity can be worth it. Take electronic fuel injection. Uses less
fuel, has a better power band, has a wider range of self correction.

~~~
alexlarsson
Cars are actually a great example of where things have become highly complex,
which means that they are now essentially impossible to fix yourself. On the
other hand, for regular day-to-day use they are a lot better.

------
RcouF1uZ4gsC
I think it started when we went away from natively compiled languages. Visual
Studio 6 was in my opinion the best version in terms of
responsiveness/functionality. After that, with .NET, Visual Studio started
including more .NET code and became slower and slower over time. People slowly
got used to their applications being slower and slower. Then the web came and
people started writing apps in JavaScript. The JavaScript apps are not too bad
compared to the .NET apps, so people did not notice. However, if you were
comparing them to Pascal/C/C++ apps, you would have noticed a big difference.

~~~
badsectoracula
I don't think that interpreted languages are the problem, since people were
using Visual Basic since the Windows 3.0 days. There are some collections on
archive.org for 16-bit Windows shareware programs and games with hundreds of
entries, and like 2/3 of them are made in some version of VB.

Now, .NET and JVM _might_ be a problem since those VMs tend to be resource
hogs (after all both use GC methods that allocates tons of memory whereas
something like Python or even classic VB use reference counting - even then
there are languages that aren't using reference counting but some other method
of GC and still are fast). But i don't think you should put all interpreted
languages in the same box.
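
As a small illustration of the distinction (a C++ sketch using shared_ptr,
which is reference counted; i'm not claiming any of the languages above work
exactly like this): reference counting reclaims an object the instant the last
reference disappears, instead of letting garbage pile up until a tracing
collector runs.

    #include <cstdio>
    #include <memory>

    struct Blob {
        ~Blob() { std::puts("freed the moment the last reference goes away"); }
    };

    int main() {
        auto a = std::make_shared<Blob>();  // refcount = 1
        {
            auto b = a;                     // refcount = 2
        }                                   // b gone, refcount back to 1
        a.reset();                          // refcount = 0 -> destructor runs right here
        // A tracing collector would leave the dead object on the heap until a
        // collection pass, which is one reason those runtimes tend to hold on
        // to much more memory at any given moment.
        std::puts("no idle garbage waiting for a collector");
        return 0;
    }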

~~~
pjmlp
Reference counting is GC.

Also you should not put all GC languages in the same box, as many do allow for
AOT compilation to native code and do support value types and GC-free memory
allocation as well.

~~~
badsectoracula
Heh, it finally happened on Hacker News too, people misunderstood what i wrote
and downvoted me for it instead of trying to understand what i am talking
about (yes i am annoyed with that, it is one thing to be misunderstood and
another to be penalised for being misunderstood - especially on HN where
messages fade out when downvoted).

So, first of all:

> Reference counting is GC.

How did you think that i said otherwise, when i clearly wrote "aren't using
reference counting but _some other method of GC_" ("other method" here
implying that reference counting is also GC)?

Moving on...

> Also you should not put all GC languages in the same box

I did not, as should have been obvious from the "after all both use GC methods
that allocates tons of memory whereas something like Python or even classic VB
use reference counting" where i compare two different methods of GC, one that
uses a lot of memory and another that doesn't.

Now i get that making the previous misunderstanding would make this bit sound
as if i was making a comparison between "GC" (Java, C#) and "non-GC" (Python,
classic VB) - and please note that the quotes here are to show what one could
think while having that misunderstanding, not what i really think, after all i
already made it clear with the previous quote that i think that reference
counting is a method for GC - however i do not think that it is my fault here,
i gave examples and tried to make myself clear about what i mean. After some
point i believe it is up to the reader to actually try and understand what i
am talking about.

I think the rest of your message (the "as many do allow for AOT compilation to
native code and do support value types and GC-free memory allocation as
well.") is relying on the above misunderstandings, especially considering i
didn't do what you describe, so i am ignoring it.

Now don't get me wrong, i am not attacking you or anything, nor do i believe
you are wrong about the factual parts of your message ("reference counting is
GC", "not all GC languages are the same"); it is just that the message doesn't
have much to do with what i wrote.

------
godelski
I still find it amazing that DOOM was 2.5MB. A Google search page is ~20MB
(16MB in DDG). And a Wikipedia page is ~19MB. (FF 55). This is crazy to me.
That even simple things take so much space now. I know space is cheap, but
this does feel bloated. And while these sizes might not be noticeable on a
computer, it definitely is on a mobile connection. I had figured the advent of
mobile would make optimization more appealing, but it seems to go in the other
direction.

~~~
wingerlang
There is no way a Google search is 20MB and a wiki page is 19MB. My tests
show a Google page is around 1MB, and wiki pages obviously depend on whether
or not the page has many and large images. But the average page definitely
isn't near 19MB, that's for sure.

~~~
Houshalter
Maybe he means how much the browser uses to display the page, which is much
much larger than the size over the wire.

~~~
godelski
I just gave what about:memory was giving me. So yes, what the browser uses to
display the page.

------
d--b
I think the explanation is that bloated software is cheaper to make.

It is cheaper to develop a .NET app than a C app. Cheaper in development and
maintenance.

It is cheaper to not care about efficient data management, or indexed data
structures.

What we're losing in efficiency, we gain in code readability, maintainability,
safety, time to market, etc.

~~~
jasonhanley
As a developer and development manager, I haven't personally noticed major
improvements in any of those metrics over the past 20 years.

But I'd definitely be interested in any studies that have tried to measure
these over long time periods.

~~~
hyperpallium
It's a gas law, as software expands to fill the available hardware. If it can
reach a minimal standard with less work, a smaller budget is allocated.

------
throwaway9980
Software complexity increases exponentially with linear growth in features and
polish. Occasionally someone takes the time to step back and rethink things,
but generally you’re just adding another layer on top of the ancient ruins.
Code archaeologist should be a job title in many organizations.

~~~
majewsky
It's less like archaeology (where artifacts are embedded in soil) and more
like geology (because everything is code, i.e. code is the soil). But yeah,
I've had the same feeling when refactoring a decade-old application. You could
really recognize the styles of different developers and eras in the same way
that geologists recognize eras by looking at the type of stone deposited
during that era.

------
TheEnder8
This keeps coming up every couple of years, but is just wrong.

In the last 5-10 years, there has been almost no increase in requirements.
People can use low-power devices like Chromebooks because hardware has gotten
better/cheaper but software requirements haven't kept up. My system from 10
years ago has 4GB of RAM - that's still acceptable in a laptop, to say nothing
of a phone.

If you're going to expand the time horizon beyond that, other things need to
be considered. There's some "bloat" when people decide they want pretty
interfaces and high-res graphics, but that's not a fair comparison. It's a
price you pay for a huge 4K monitor or a retina phone. Asset sizes are
different from software.

I won't dispute that the trend is upward with the hardware that software
needs, but this only makes sense. Developer time is expensive, and
optimization is hard. I just think that hardware has far outpaced the needs of
software.

~~~
xg15
> _Developer time is expensive, and optimization is hard._

In the case of front-end development also "Developer time is paid by the
company while hardware is paid by the users."

This is basically a nicer way to put the "lazy developers" point from the
article, but I think that's actually important.

The problem is that this seems to create all sorts of anti-patterns where
things are optimized for developer laziness at the expense of efficiency.
E.g., adding a framework with layers of JavaScript abstraction to a page that
shows some text - after all, the resources are there and it's not like they
could be used by something else, right?

~~~
fahadkhan
There is a cost to the company for non-performant front-end code though. If
the front end performs poorly, users are less likely to use it.

~~~
xg15
If that were the case, I think there wouldn't be that much discussion about
the "website obesity crisis". E.g., see this post from another thread:
[https://news.ycombinator.com/item?id=15028741](https://news.ycombinator.com/item?id=15028741)

~~~
lmm
Users who aren't using an up-to-date phone are probably not an audience
websites are likely to make money from. If a website's performance isn't "good
enough" on a modern phone, that will hurt the site.

Dark thought: maybe sites actually profit from a certain level of "bloat", if
it drives away less lucrative visitors while not affecting the demographics
that are most valuable to advertisers.

------
AnimalMuppet
It happened one step at a time.

We wanted multi-tasking OSes, so that we could start one program without
having to exit the previous one first. That made the OS a lot bigger.

Eventually, we got web browsers. Then Netscape added caching, and browsers got
faster and less frustrating, but also bigger. And then they added multiple
tabs, and that was more convenient, but it took more memory.

And they kept adding media types... and unicode support... and...

We used to write code in text editors. Well, a good IDE takes a lot more
space, but it's easier to use.

In short: We kept finding things that the computer could do for us, so that we
wouldn't have to do them ourselves, or do it more conveniently, or do
something _at all_ that we couldn't do before. The price of that was that
we're dragging around the code to do all those things. By now, it's a rather
large amount to drag around...

~~~
jasonhanley
This is all very true, but I feel like we (as users) haven't really gained
proportionally compared to the increase in computing power and storage.

For example IDEs: Visual Studio in 2017 is certainly better than Visual Studio
in 1997, but do those advancements really justify the exponential growth in
hardware requirements?

How'd we get so little usable functionality increase for such a massive
increase in size/complexity?

~~~
flukus
> For example IDEs: Visual Studio in 2017 is certainly better than Visual
> Studio in 1997

Is it? 97 might be a bit extreme, but the other day I opened an old project
which was still on VS2010 and I was struck by how much faster 2010 was while
still having nearly every VS feature that I wanted. They're slowly porting VS
to .NET and paying a huge performance penalty for that.

~~~
jasonhanley
That's the type of example I've come across all too frequently. Software
that's 5-10 years old, has all the same functionality, uses a fraction of the
resources, and is often "better" in several ways.

Older versions of Android Facebook seem massively faster and use a fraction of
the RAM while providing (nearly?) the same functions and features.

~~~
dave7
Excel 2000 is lightning quick compared to the modern versions (well, 2013 is
as modern as I got)

------
bsaul
Wonder how much of it is only due to assets. I keep pointing out to my
colleagues that a single background image displayed on a high-res iPhone
occupies more RAM than the total data of their app, by a factor of 100 at the
minimum. The same goes for app size on disk: it's most often mainly assets.

So just loading from disk, decompressing, putting it in RAM, then moving it
around and applying visual effects is probably half of the reason everything
is slow. Those "bloat" graphs should also mention screen resolution and color
depth.

~~~
wodenokoto
My first thought too.

I might have done the same on my 2000 era machine as I do today (browse the
web, listen to music, program, maybe edit a picture or two) but I'll be damned
if I had to do all this in Windows 98 with a resolution of 800x640 again!

~~~
chx
640x480 or 800x600 and I think you mean the latter. 640x480 was more a Windows
95 resolution.

We could waste some words here about how display resolution didn't keep up due
to Windows being crap and people being bamboozled into 1366x768 being "HD" or
"HD Ready". 800x600 vs 1366x768 is only about double the pixels, and barely
more vertical resolution.

~~~
digi_owl
Win95 would default to 640x480 at some low color depth if there were nothing
but vesa drivers to work with.

Made for a fun first hour after a fresh install.

Back then i had a habit of building up a cache of drivers and common software
on drive, later dumped to CDs at irregular intervals, just to lessen the
rebuild time.

Funny thing is that i kinda miss those days.

Back then i could pull the drive from a computer, cram it into another, and
more often than not 9x would boot. It may give me that vesa only resolution
etc, but it would boot.

These days it feels like it would just as well throw a hissy fit ascii screen
about missing drivers, or something about the license not being verifiable.

I thought maybe Linux would rekindle this, but sadly it seems the DE devs are
driving even it head first into the pavement. This so that they can use the
GPU to draw all their bling bling widgets and paper over the kernel output
with a HD logo...

------
AndyMcConachie
TFA doesn't even load without javascript. I use noscript and this HTML file
won't display anything unless I allow JS to load from a different domain.
Perhaps the author should practice what he preaches. Then again, I don't
really know what he preaches because I can't be bothered to allow the JS to
load.

~~~
jasonhanley
Didn't really expect to hit HN front page with my random rant :)

In any case, glad I was hosted on Google infrastructure but embarrassed by the
bloated, default Blogger template.

Interested in suggestions for simple, lightweight alternative.

Medium, Square, WordPress, etc. all seem to suffer from similar insane bloat.

~~~
CaptSpify
Then build it yourself. It's not that hard to create your own simple, static
webpage.

------
PaulHoule
Back in the late 1960s, see

[https://en.wikipedia.org/wiki/The_Mythical_Man-
Month](https://en.wikipedia.org/wiki/The_Mythical_Man-Month)

Note NASA controlled the moon missions with an IBM 360/95 that had something
like 5 MB of RAM, 1GB of disk, and about 6 million instructions per second.

Today an internet-controlled light switch runs Linux and has vastly larger
specifications. Connecting to WiFi is more complex than sending astronauts to
the moon!

~~~
vacri
> _Note NASA controlled the moon missions with an IBM 360 /95 that had
> something like 5 MB of RAM, 1GB of disk, and about 6 million instructions
> per second._

And an army of technicians available around the clock to keep it working.
Whereas your IoT light 'just works' and isn't expected to require any support
at all.

~~~
PaulHoule
Actually, the 360 was a big leap forward in reliability as it was second-
generation transistorized, made with automated manufacturing, etc.

As for the high complexity of IoT things, I don't think the extra complexity
helps reliability, security, etc.

------
TekMol
This is what I get after starting Chromium, before opening any website. Does
this mean it is using 178064 + 175208 + 92132 + 90492 + 87596 = 623492 KB =
623 MB right off the bat without having loaded _any_ HTML?

    
    
        NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND        
    
         0  979612 178064 108084 S   0,0  1,1   0:01.38 chromium-browse
         0 3612044 175208 128552 S   0,0  1,1   0:01.83 chromium-browse
         0 1372444  92132  67604 S   0,0  0,6   0:00.27 chromium-browse
         0 1380328  90492  58860 S   0,0  0,6   0:00.62 chromium-browse
         0  457928  87596  75252 S   0,0  0,5   0:00.67 chromium-browse

~~~
majewsky
For Firefox, I have 378.3 MiB. Better, but much more than I expected:

    
    
      USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
      user      9309 59.0  4.6 2020900 282568 pts/2  Sl   09:46   0:01 /usr/lib/firefox/firefox
      user      9369  8.0  1.7 1897024 104800 pts/2  Sl   09:46   0:00 /usr/lib/firefox/firefox -contentproc -...
    

However, when I look at the "available" column in free(1), it looks much
better. Only 170 MiB increase when Firefox is started:

    
    
      $ free
                    total        used        free      shared  buff/cache   available
      Mem:        6115260      598900     4324616       41748     1191744     5205960
      Swap:             0           0           0
      $ free
                    total        used        free      shared  buff/cache   available
      Mem:        6115260      781304     4116456       68908     1217500     5032360
      Swap:             0           0           0

~~~
jeffhuys
Safari (Technology Preview) uses ~50MB with only the "favourites" tab open
(default):

    
    
        PID    COMMAND      %CPU TIME     #TH   #WQ  #PORT MEM    PURG
        44688  Safari Techn 0.0  00:03.41 10    3    311   51M    2648K
    

Safari (original) uses ~300-400MB with 20-100 tabs open. I think they use a
different way of storing tabs which are not on your screen at that moment.

(I didn't check Safari (original) for nothing-open-and-idle RAM usage, simply
because I didn't want to have to reload all my tabs, so I booted Safari
(Technology Preview) to quickly test it.)

------
blt
Software went off the rails when Java decided to store all nested nonprimitive
objects on the heap, and Sun thought this was an acceptable choice in a
language marketed at general purpose app development.
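
To illustrate what that choice costs, a hedged C++ sketch (C++ lets nested
non-primitive members live inline; the "boxed" variant only approximates what
Java's object model forces):

    #include <vector>

    struct Point { double x, y; };

    struct ParticleInline { Point position; Point velocity; };    // members embedded in place
    struct ParticleBoxed  { Point* position; Point* velocity; };  // roughly Java's layout: references to heap objects

    int main() {
        std::vector<ParticleInline> particles(1'000'000);
        // Iterating this touches one contiguous allocation. The boxed version
        // would chase two extra pointers per element, scattering reads across
        // the heap and defeating the CPU cache.
        double sum = 0;
        for (const auto& p : particles) sum += p.position.x;
        return sum == 0 ? 0 : 1;
    }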

------
jeremyjh
Typically as developers we let the software bloat until it hurts us directly.
Since we are always using more and more powerful machines we are always going
to let the bloat continue. If we get called on it we've got the slam-dunk in
our back pocket: "Premature optimization is the root of all evil."

------
DannyB2
I'll make three points.

The article is focused on machine efficiency. Human efficiency also matters.
If you want to save cycles and bytes then use assembly language. Geez, we had
the assembler vs fortran argument back in the 70's. Probably even in the 60's
but I'm not that old. Guess what? High level languages won.

Next.

Hey, Notepad is much smaller, leaner, and faster than Word! Okay. But Word has
a lot of capabilities that Notepad does not. So is it "bloat" or is it
"features" ? Guess what, Windows is a lot bigger and slower than DOS.

Imagine this argument from a business person:

If I can get to market six months sooner than my competitor for only three
times the amount of RAM and two times the amount of CPU horsepower -- IT IS
WORTH IT! These items cost very little. You can't buy back the market
advantage later. So of course I'll use exotic high-level languages with GC.
I'm not trying to optimize machine cycles and bytes, I'm trying to optimize
dollars.

~~~
mikebenfield
> If you want to save cycles and bytes then use assembly language ... High
> level languages won.

This is a ridiculous straw man. It is completely possible to write efficient
software in high level languages, and no one is suggesting people extensively
use assembly language. Actually, in many cases it is very difficult to write
assembly that beats what's generated by an optimizing compiler anyway.

~~~
_Tev
Considering how many people are preaching "do not use Electron, write true
native apps", the argument is the same (use lower-level tools) and the reply
to it also stays the same:

Dev time is the most expensive resource.

------
cakedoggie
Ok, fair enough, but have a look at yourself. Your blog is static content
that isn't served statically and requires JavaScript.

~~~
tradersam
On that point:

> The Blogger tab I have open to write this post is currently using over 500MB
> of RAM in Chrome.

If that is so, why post it and have it use a similar amount of RAM on other
people's machines? If they know _sooo much_ about software, why even use
Blogger, a site whose heyday was 15 years ago?

------
dzink
Look at any analog - real estate property, an army of people, a kitchen.
Anything left un-groomed during high use becomes unusable. Grooming software
needs to be either someone's full-time job (real-life custodians), or a big
constantly-enforced chunk of everyone's job (military grooming standards).

At a big company, one could argue that requirements writers need to be
technical, but once you've done a startup you'd know that you're in an army
and not in a fixed building that only needs to be architected once. The
customer and your enemies are always on the move, and you have to keep moving
on new terrain as the money and your business keep moving there. Build with a
code of conduct for your developers, and allow the code base to evolve with
modules, or some other approach.

------
jankotek
I love bloated software, I wish opening facebook would require 128GB RAM.

It makes hardware cheaper for everyone, even nerds who use terminal and
lightweight WMs ;-)

~~~
franciscop
Exactly the same for Snapchat and Mobile Internet speeds, I love those people
taking selfies constantly.

~~~
csydas
To take a serious look at this scenario: being abroad at the moment without a
foreign SIM makes me really frustrated at how greedily and wastefully apps on
mobile will use mobile data. Google is pretty bad about this with their array
of apps, and I must admit I'm surprised that there isn't a function built into
Android and iOS that triggers when the phone reports that it's on roaming
data - the OS seems to be able to tell at a much deeper level than the apps
run at, so to me it seems like an easy path to have a true low-bandwidth mode
kick in, or at least offer to, when the phone reports a roaming condition.

I understand this is edge-case material though, so such ideas need not apply,
but it seems like a fairly easy-to-implement idea that is one of those "oh
that's really nice" features customers stumble across.

------
andrewflnr
Software was never on the rails. This whole industry has, AFAICT, been flying
by the seat of its pants from Day 1 with whatever hacks get today's job done.
For a while, that meant hacks to save space and time. Now... it doesn't.

------
Silhouette
I suspect this is the inevitable price for the expanding scale of the software
industry. Perhaps the problem isn't about depth, as in what a given piece of
software can do, as much as breadth, as in how much software there is now and
how many different things it can do collectively.

One cost of the increasing breadth in the industry is that if we want to have
a lot more software then obviously it can't all be written by the same
relatively small pool of expert programmers who wrote software in the early
days. With more software being written by people with less skill and
experience, many wasteful practices can creep in. That can include technical
flaws, like using grossly inefficient data structures and algorithms. It can
also include poor work practices, such as not habitually profiling before and
after (possibly) optimising or lacking awareness of the cost of taking on
technical debt.

Another cost of the increasing diversity in the software world is that to keep
all that different software working together, we see ever more generalisations
and layers of abstractions. We build whole networking stacks and hardware
abstraction libraries, which in turn work with standardised protocols and
architectures, when in days gone by we might have just invented a one-off
communications protocol or coded directly to some specific hardware device's
registers or memory-mapped data or ioctl interfaces.

There is surely an element of deliberately trading off run-time efficiency
against ease of development, because we can afford to given more powerful
hardware and because the consequences of easier development can be useful
things like software being available earlier. However, just going by my own
experience, I suspect this might be less of a contributory factor than the
ones above, where the trade-off is more of an accident than a conscious
decision.

------
jbergens
Besides all the other things people have mentioned, there are also new
requirements for UIs. People complain if their drag movement with a finger
doesn't align perfectly with a nice animation on the screen on a cheap phone.
Those things did not exist 10 or 20 years ago. Interfaces were simpler and
easier to build, and users were happy that it worked.

But I do agree that software could be a bit faster nowadays.

------
jbg_
A: Around the time that a simple blog post required JS from four separate
domains in order to display anything other than a blank page.

------
sien
It's kind of because we can.

I'm currently on a Mac and a PC. The Mac's CPU is at maybe 5% when I'm doing
most things.

The PC is at 1%.

I'm using half the PC's memory and 3/4 of the Mac's.

These are not up-to-date, high-memory or high-performance machines.

Have a look at your own machine. Surely for most of us it's the same.

And that memory usage is mostly in one application - Chrome. The only bloat
that hurts a bit now is web page bloat. And on a good connection this isn't an
issue either.

It's also different on phones where application size and page size seems to
matter more.

~~~
tradersam
Chrome regularly uses about 10% of my RAM, but honestly Atom was the biggest
offender, which is why I switched to Visual Studio. Also, games can be huge
CPU and RAM eaters, but there it is almost always completely necessary.

------
Boothroid
I do often wonder wth is going on with bloat. I can understand a videogame
which has huge amounts of media might be large. But business apps?! It doesn't
make sense.

~~~
noir_lord
I think it makes absolute sense (in overall terms): hardware capabilities at a
fixed price scaled exponentially while the costs of producing software went up
mostly linearly. Or to put it another way:

I have a 5 minute mp3 that takes more space than my first _hard drive had_ and
some icons on my desktop that take more space than my first computer had
_RAM_.

Whether that will continue to hold I don't know, mobile has certainly pushed
certain parts back towards caring about efficiency (though more because it
impacts battery life).

If you remove a constraint people stop caring about that constraint.

The old school geek in me laments it sometimes but spending twice as long on
developing something to save half the memory when the development time costs
thousands of dollars and twice as much RAM costs a couple of hundred
seems..unwise.

~~~
flukus
> Whether that will continue to hold I don't know, mobile has certainly pushed
> certain parts back towards caring about efficiency (though more because it
> impacts battery life).

Has it? Try using a low end phone with something like 8GB of internal storage,
mobile apps are ridiculously slow and bloated. It's to the point where I
haven't looked in the play store for years because I simply don't have enough
room on my phone. That means the dev community has screwed itself over with
wastefulness.

~~~
Tomis02
> mobile apps are ridiculously slow and bloated

When you look at the Android SDK you have to wonder if it's even possible to
have a different outcome.

------
srssays
Software is written better.

In the past, computational complexity was lowered by arbitrary size limits,
e.g. if you had an O(n^2) algorithm you might cap n at 10 and now you have an
O(1) algorithm. Job done.

Now, computational complexity is lowered by aggressive use of indexing, so you
might lower your O(n^2) algorithm by putting a hash table in somewhere, and
now you have an O(n) algorithm. Job also done.

The practice of putting arbitrary size limits on everything has almost died
out as a result.
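
A minimal sketch of that shift in C++ (illustrative only): checking a list for
duplicates.

    #include <cstddef>
    #include <string>
    #include <unordered_set>
    #include <vector>

    // Old style: nested loops, O(n^2), usually tamed by an arbitrary size cap.
    bool hasDuplicateQuadratic(const std::vector<std::string>& items) {
        for (std::size_t i = 0; i < items.size(); ++i)
            for (std::size_t j = i + 1; j < items.size(); ++j)
                if (items[i] == items[j]) return true;
        return false;
    }

    // Indexed style: build a hash set as you go, O(n) on average, at the cost
    // of the extra memory the index occupies.
    bool hasDuplicateIndexed(const std::vector<std::string>& items) {
        std::unordered_set<std::string> seen;
        for (const auto& item : items)
            if (!seen.insert(item).second) return true;  // insert failed -> already seen
        return false;
    }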

------
divanvisagie
This article is a waste of bandwidth and server storage space; the entire
content is just an elaboration of the title, with not even a single
consideration of what the cause is.

There are also a few graphs to make the author feel like he is a scientific
researcher writing a paper instead of what he is actually doing, which is
posting a question that, quite frankly, could with little extra thought fit in
a tweet.

~~~
davemp
At least it's promoting some decent discussion in this thread?

~~~
jasonhanley
Thanks :) That's what I was hoping for.

It's a serious problem with no clear solution.

------
infiniteparamtr
I've been teaching myself C, with the intention of learning how to write code
for embedded applications where efficiency is key.

It seems to me that new languages prioritize quick iteration over effective
machine operation. The easier a language is to write and interpret, the faster
an outfit can churn out an application. The exponential growth in computing
power has been sufficient to absorb these collective "shortcuts". Thus, it is
not being taken advantage of properly.

The CSCI/engineering fields have become more of a gold rush than thoughtful
trades. Boot camp management seeks profit, and trainees seek to quickly fill
high-paying jobs. It all culminates in this situation where code doesn't need
to be clever and thought out - just created A.S.A.P. to handle whatever
trending niche market or "low-hanging fruit" there is. The work of these
products gets handled server-side, where electrical costs for cooling are a
fundamental expense.

~~~
shmerl
Rust is new. It's made with performance in mind.

~~~
shpongled
I want to second Rust. I've been writing C for 5+ years and finally decided to
give Rust a serious try... and I love it. I never ever thought I would say
this, but I don't see myself going back to C (with minor exceptions)

~~~
flavio81
I recall being excited the first time i learnt C. But after i learnt other
programming languages, the defects of C started to appear clearly before my
eyes. C++ is even worse (in terms of things that look hastily designed).

Rust is comparably more coherent and elegantly designed.

------
kensoh
Thanks Jason, enjoyed your thought-provoking post. I'm reminded of Parkinson's
law that "work expands so as to fill the time available for its completion".
It's as if the software bloats up to fill up the available hardware capacity.

From a fundamental level though, my hunch would be how modern development
takes modularization / abstraction to a type of extreme. Imagine a popular
Node.js module and how many dependencies it has and how many dependencies its
dependencies have.

It's not hard to imagine a lot more computing power is required to handle
this. But that's ok to decision makers, computing power is cheap. Saving
developers time by using modularized developments brings more cost/profit
benefits, like what Dan said.

PS: the link on Visual Studio. Oh wow, what fond nostalgic memories it brings
me :)

------
toyonut
I would guess at least some of the issue is that most users don't show a
preference for faster, smaller software, especially because it doesn't
directly benefit them if a product uses less RAM and CPU. Displaying a UI is
better than a text-mode interface. Icons that don't look jaggy on the screen
are nicer than ones that do. Anti-aliasing, gradients and drop shadows make
things look nice. Drag and drop that has a pretty animation is nicer than drag
and drop without one. It is the same reason people choose a BMW when a Corolla
does pretty much the same job. People pay the cost they want to live with. In
the trade-off between functional-and-thrifty and pretty-and-feature-packed,
the latter nearly always wins.

~~~
peterburkimsher
Looking pretty is good for marketing.

Users are showing "a preference for faster smaller software" - the author of
that blog is one of them, and I'm another. But even the best software has to
be passed to marketers before it can reach your hands. There are some small,
efficient programs out there, but they're overlooked because they don't pay.

------
bsder
"The Blogger tab I have open to write this post is currently using over 500MB
of RAM in Chrome."

So, why are you using Blogger instead of emacs/vi/notepad to write a static
HTML page?

Apparently the author seems to think that all that bloat _DOES_ give him
something, no?

------
kozak
Software does so much more today as a baseline requirement. Think about all
that internationalization, high DPI graphics, security, nice APIs and
modularity: these aspects of software have never been at such a high level
before.

------
fpgaminer
I look at articles like this and the comment responses to it and I can't help
but think everyone is like the old man grumbling how "things used to be built
to last!"

Have people really forgotten their computing history so soon?

Let's roll back the clock. Windows 95 ran for a total of 10 hours before blue
screening. Windows ME ran for -2 minutes before blue screening and deleting
your dog.

Roll back further. IBM was writing software not for you. Not for your
neighbor. They were writing software for wealthy businesses. Bespoke software.
Software and hardware that cost more than you make in a lifetime.

Software, today, represents responses to those two historical artifacts.

1) At some point software became complex enough that we discovered something
we didn't know before ... programmers are really bad at memory management.
Concurrently, we also realized that memory management is really important.
Without it, applications and operating systems crash.

And yes, this point was hit roughly around Windows 95. You really couldn't use
Windows 95 for more than a day without something crashing.

So the programming ecosystem responded. Slowly and surely we invented
solutions. Garbage collected languages and languages without manual memory
management. Java, .NET, Python, etc. Frameworks, layers of abstractions, etc.

Now fast forward to today. I'm absolutely shocked when an app crashes these
days. Even games have become more stable. I see on average maybe 1 or 2
crashes in any particular game, through my _entire_ playthroughs. And usually,
the crashes are hardware bugs. I haven't seen a Firefox crash in ... months.

This is leaps and bounds better. Our solutions worked.

The caveat, of course, is that these new tools use more memory and more CPU.
They have to. But they solved the problem they were built to solve.

2) In the "good old days" software was bespoke. It was sold strictly B2B. For
a good long while after that it remained a niche profession. Does no one
remember just how expensive software and hardware used to be? And people scoff
at $600 phones...

But software exploded. Now everyone has a computer and software is as
ubiquitous as water.

With that explosion came two things. Software got cheaper. A _lot_ cheaper.
And software filled every niche imaginable.

When software was bespoke, you could get the best of the best to work on it.
Picassos and Platos. But those days are long gone. Picasso isn't going to
make Snapchat clones.

We needed a way to allow mere mortals to develop software. So we created
solutions: Java, JavaScript, Python, .NET, Ruby, etc. They all sought to make
programming easier and broaden the pool of people capable of writing software.

And just like before, these tools worked. Software is cheap and plentiful.

We can bemoan the fact that Slack isn't a work of Picasso. But who wants to
pay $1 million per seat for Slack? Instead, Slack is free in exchange for the
sacrifice of 4GB of RAM.

The lesson here is twofold. Software today is better than it ever was, and it
will continue to get better. We've learned a lot and we've solved a lot of
problems. Battery constraints are forcing the next evolution. I would never
have dreamed of a world where my laptop lives for 10 hours off battery, but
here we are. I can't wait to see what the next decade holds!

~~~
dep_b
I don't remember NT4 crashing around that time. And my Win95 set-up would
crash sometimes when booting an audio app or game with a faulty driver. But
again I would shut down the computer every day. It must have been around the
same time Linux geeks started posting their uptimes.

~~~
krylon
I remember installing NT4 on my desktop PC in ... 1997 or '98. Within 24
hours, it wouldn't boot any more. I formatted the hard drive and installed it
again, and within 24 more hours, it once more refused to boot.

Of course, I did not have much of a clue what I was doing back then.

------
SomewhatUseful
Is this a projection of "the next 90%" issue?

"The first 90% took 2 weeks to finish. The second 90% also took 2 weeks to
finish (and now your 99% done). The next 90% also takes two weeks to finish
(99.9% done)..." Reapplied to another resource..."The first 90% consumes 1GB
of RAM. To solve the next 90% of the problem, takes 1GB of additional RAM...

If you continue this trend, the problems solved in the incremental steps may
be used fractionally less often, but are probably also more complex and
required greater resource investments. Our software does a lot more, but the
later-developed parts are usually used less often and are more complex.
Talking to the _one and only_ ship headed to the moon, when you don't
particularly care who hears you, is less difficult than securely purchasing
things online over a WiFi connection. At the user-experience level it's just
"thing A talks to thing B", but the latter case has also had to solve n-th 90%
issues of congestion and security and handshakes and...

That being said, we rarely go back and see which parts of the earlier
iterations are now based on false assumptions. So there probably is a fair
amount of accumulated cruft, with no clear detector for what is cruft and what
is essential.

------
VarFarYonder
> There have been a few attempts at a Software Minimalism movement, but they
> seemingly haven't gained much traction.

For those interested in exploring a minimalist approach, it's worth checking
out [http://suckless.org/philosophy](http://suckless.org/philosophy) and
[https://handmade.network/manifesto](https://handmade.network/manifesto)

------
BatFastard
Excuses, excuses; most modern software developers have no clue as to how to
write tight code.

------
jsight
This line caught my attention:

"And somehow our software is still clunky and slow."

It is? I haven't really noticed. It seems to me that my 9 year old desktop
still runs most modern software reasonably well. My new laptop runs it much
more quickly than machines from 15-20 years ago.

Granted, in an abstract sense, CPU usage and memory consumption have grown a
bit, but the actual user experience is better.

------
y04nn
It's all about cost and what users can accept. If a feature costs 10 times
less, takes half the time to implement and is easier to maintain but produces
the same result for the end user, why would software companies bother to
optimise (if the end user does not care)?

Also, it seems to me that the optimisation on the web is done on speed at the
detriment of memory usage.

------
chadcmulligan
Older software was written in languages where you had to manage the memory
yourself - C, C++, Pascal, etc. - which requires more skilled developers (I'm
told). Simpler languages like JavaScript require less knowledge to write
applications, but the cost is higher resource usage.

------
fouc
I think a potential solution would be to encapsulate every program in their
own VMs, with memory & cpu limits set.

Put some control back in the user's hands and prevent runaway bad behaviour
from the apps.
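
One way to approximate this per process today, without full VMs, is an OS
resource limit. A minimal sketch (my illustration, not something from the
comment; the 512MB cap and the "some_app" command are placeholders, and the
resource module is Unix-only):

    import resource
    import subprocess

    MEM_LIMIT = 512 * 1024 * 1024  # arbitrary 512MB cap for the child

    def cap_memory():
        # Runs in the child just before exec: allocations beyond the cap
        # fail instead of eating the whole machine.
        resource.setrlimit(resource.RLIMIT_AS, (MEM_LIMIT, MEM_LIMIT))

    subprocess.run(["some_app"], preexec_fn=cap_memory)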

------
Sir_Substance
> The Blogger tab I have open to write this post is currently using over 500MB
> of RAM in Chrome. How is that even possible?

Mate, you picked /blogger/ as your preferred blogging platform. Blogger can't
even deliver a page title without JavaScript.

There's a whole world of much higher quality software out there; it may be
that you've chosen not to use it.

This dude should try switching his blog to Pelican[1]; it might be something
of a revelation.

[1] [https://blog.getpelican.com/](https://blog.getpelican.com/)
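
For a sense of how little machinery a blog post like that needs, here is a
rough sketch of what any static site generator boils down to (this is not
Pelican's actual API, just an illustration, and it assumes the third-party
"markdown" package is installed): render each post to HTML once at build
time, then serve plain files.

    from pathlib import Path
    import markdown  # third-party: pip install markdown

    TEMPLATE = "<html><body>{body}</body></html>"  # stand-in for a real theme

    # Convert every Markdown post under content/ into a plain HTML file under
    # output/ - after this, no JavaScript is needed to display the page.
    Path("output").mkdir(exist_ok=True)
    for post in Path("content").glob("*.md"):
        html = markdown.markdown(post.read_text())
        (Path("output") / (post.stem + ".html")).write_text(
            TEMPLATE.format(body=html))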

------
scarface74
While application software has become more bloated, Microsoft has done a
fairly good job of keeping Windows' necessary footprint down.

After Apple stopped supporting 32 bit x86 Macs years ago, I decided to put
Windows 7 on my old 2006-era Core Duo 1.66GHz Mac Mini with 1.25GB of RAM. My
parents still use it occasionally. It can still run an updated version of
Chrome and Office - not at the same time of course - and it isn't painful.

My Plex Server is a 2008-era Core 2 Duo 2.66GHz Dell business laptop with 4GB
of RAM.

~~~
a_imho
Just a couple of days ago I was unable to put a Win10 ISO on a 4GB USB drive;
it was something like 4.5-5GB.

~~~
scarface74
I don't know how much drive space Windows 10 needs. But the Core 2 Duo with
4GB of RAM I referenced is running Windows 10 and can transcode at least 2
streams at once. It's running the Plex Server and Plex Connect - a Python (?)
web server that intercepts requests from the 3rd gen AppleTV to render a Plex
client.

------
xycodex
I would posit that the decrease in price of computational resources drives a
couple of things: better quality (e.g. more resolution, more colors) and the
ease/cost of development.

You might see individual applications do the same things, and consume more
resources due to the layers and layers of abstraction, BUT be cheaper to
build. As a consequence, many, many, many more applications are being built,
for cheaper, reaching more people, "eating the world".

------
13of40
At the risk of being downflamed for not knowing something obvious, I see it as
the gap between what standard runtime libraries provide (e.g. nice generic
list, stack, and queue classes) and the algorithms devs implement on top of
them. A part of me wants to see all of the grotesque implementations of "visit
every relationship in this tree" that get invoked when I click the "add
comment" button on this.

------
man2525
Not sure about when, but I think that restrictive software licensing combined
with Moore's law guaranteed wasted computing power. Companies that sold both
hardware and software had an incentive to soak up that power to encourage
perpetual upgrading. It grew wasteful "software ecosystems". Either that, or
testers should fail more software that doesn't run quickly on low-end
hardware.

------
reacweb
IMHO, the starting point was the requirement of 3D-accelerated video cards.
The last 3D games I played (Alone in the Dark 1, Doom) did not require any 3D
card. But now, you can't display a line of text without a huge pile of crappy
layers on top of an insanely powerful GPU. 25 years ago, direct video memory
mapping and bitblt operations were sufficient.

------
stewbrew
I think the author demonstrates the problem well by using a Blogger theme
that requires JavaScript to be turned on, instead of statically rendering the
blog post - which is just a few paragraphs long - as HTML and serving that
from a simple server.

He shouldn't ask "when" though, but who made it go off rails (sic!) and why.

------
alkonaut
This article is based on the misunderstanding that software complexity has NOT
grown exponentially. It has.

Even though features might be added in a linear fashion (and I think that's
not true either - the teams that build large applications have grown too),
the complexity of the whole system might scale as the square of the number of
features, or exponentially. That is: if Word 2017 has 10 times the number of
features of Word 6.0, we should not be surprised to see CPU and RAM
requirements be 100 or 1000 times higher.
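
A back-of-the-envelope version of that scaling argument (my own illustration,
not numbers from the article): if any feature can potentially interact with
any other feature, the number of pairwise interactions grows roughly as the
square of the feature count.

    # Pairwise interactions among n features: n * (n - 1) / 2
    def interactions(n: int) -> int:
        return n * (n - 1) // 2

    print(interactions(10))   # 45   - an app with 10 features
    print(interactions(100))  # 4950 - 10x the features, ~110x the interactions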

Finally, just like a memory manager in an OS makes sure all memory is
actually _used_, software should be using the computer's resources. If an
average computer now is 4x3GHz, then a foreground application such as a photo
editor should have features that at least sometimes put that hardware to good
use. Otherwise there was no point in having that hardware to begin with. As
software developers we should aim to scale our software to use the available
hardware. We should not just let twice-as-fast hardware run our software
twice as fast.

~~~
nkkollaw
I completely disagree with you.

I don't know whether the "square of the number of features" claim is true,
but we have certainly made our systems huge for no reason—besides perhaps
making them more convenient for programmers.

I go to [http://artelabonline.com/home/](http://artelabonline.com/home/) every
once in a while—a site I built in 2009, which will crash if many people click
on this because it only has 128MB of RAM. With the worst PHP one can think
of, no CDN, etc., it's lightning fast. Websites nowadays are over 10MB, and
most of that crap is JavaScript libraries and frameworks. Most apps I use
daily are Electron apps, which contain a whole copy of a browser even though
I already have one installed, and routinely take up 700MB of RAM to show me
an app that is actually a web page.

I believe that the problem is that programmers are doing things for
themselves, and not for users—who are ultimately the ones using the product.
Electron is a great example. That, mixed with the idea that a little control
panel for a client who has to check 10-20 orders for his store should be
built with the same framework used by the most visited website in the world.

~~~
alkonaut
> completely disagree with you.

Well, I was perhaps being a bit deliberately controversial.

> for no reason—besides perhaps making it more convenient for programmers

That's a _massive_ and _excellent_ reason to make a system consume more
resources. In fact, I think it's probably the main reason programs do! If new
feature X can be done in 1 man-week and consume Y resources, it's entirely
possible that it can be done in such a way that it consumes just 1/4 of those
resources. That might take 10 man-weeks (and/or much better devs). So you
don't, because users generally aren't willing to _pay_ for that. Basically
only a few very niche products do this (game engines, embedded, ...). The
economics of adding feature X were such that if it couldn't use a huge amount
of resources, the buyer couldn't afford it. So it uses a lot of resources,
because the buyer wanted the feature.

> I believe that the problem is that programmers are doing things for
> themselves, and not users—which is eventually who will use the product.
> Electron is a great example.

This is partly true. Electron (and similar) is an excellent example of the
economics above. I also can't believe how someone can write a _chat client_
that uses 1GB of RAM in any universe. But the economics were such that JS
developers, unfortunately, were easy to come by, and a browser engine with
DOM (of all things) was the best way to get a cross-platform UI running with
those developers. So the arrival of Slack was really just like any other
feature. Someone wanted a cross-platform, shiny group chat application, they
wanted it now and not in 10 years, and they wanted it to cost reasonably
little. The answer to that, unfortunately, was "ok, but it'll cost you two
CPU cores and a gig of RAM".

Was it just for the developers? Well, partly. But indirectly it's for the
users, who weren't going to PAY for C++ devs to write slick and lean native
versions of this software.

Bottom line: every user has a limited amount of money and a limited amount of
computer resources. When given a choice, my experience is that users are much
more willing to pay with more resources and less money than vice versa. The
important thing to remember is that the two are connected - a program that
takes fewer resources is more expensive.

~~~
jaclaz
Don't forget what I call "technological supremacy" bias:

[https://news.ycombinator.com/item?id=14902333](https://news.ycombinator.com/item?id=14902333)

~~~
alkonaut
Of course, and having system requirements is obviously a good thing.

While it's obviously a fact that devs have more powerful machines than their
users, I'd argue that the tolerance for poor performance is MUCH lower among
devs. When my IDE does a GC pause for 5 seconds, I go insane.

Yet I have users that use the software I write, and who somehow insist on
using enormous datasets in it (much larger than we would have imagined),
meaning completely ridiculous delays (minutes if not more). When I ask if it
doesn't drive them crazy, they say "no, because in the past this job took a
week with pen and paper" or "the old program didn't have this feature at all,
so of course a minute's wait is OK in the new program!".

A developer would never say that about a shiny new feature in their IDE or
similar: "yeah, I like the new go-to-symbol even though it takes 15
seconds..." - even though it might still SAVE us time compared to doing it
the old way. We'd complain like crazy.

So our technological supremacy bites us sometimes too.

------
stephengillie
Software went off the rails approximately when JavaScript became mandatory to
display text on a webpage.

~~~
unabridged
Exactly. It's ironic that his post doesn't even show up until I allow JS
loaded from multiple domains, just to show 4 pictures and a couple of
paragraphs of text. Sites like his are a huge part of the problem.

~~~
jasonhanley
The irony is not lost on me.

The unfortunate thing is that Blogger started out being fairly light and clean
-- at least it was back when I migrated to it.

At least static site generators are coming back into fashion. I've always
considered them a better solution for media/news/blog type sites.

~~~
microcolonel
FYI: I think you can still select blogger themes which don't require JS to
render. Some people use them, including (perhaps just as ironically) the
Google Developers Blog[0].

[0]: [https://developers.googleblog.com/](https://developers.googleblog.com/)

------
Waterluvian
Software itself is the sacrifice of system optimization for human
optimization. Give me unlimited human power and I'll give you purpose-designed
chips for each individual use case with the program encoded in the wiring of
the logic gates.

------
jlebrech
Software went off the rails following the downfall of the real IDE, and the
advent of the mouse and the web browser.

The browser is the big one, you're running an operating system inside of
another.

The IDE not being able to target a true cross platform binary has hampered us.

The mouse has made developers lazy and let them put interfaces in clunky
places.

(Contradicting myself:) we're not using the mouse for coding anymore (see the
downfall of the IDE).

And OS and browser vendors should have allowed binaries to run at a higher
ring level, i.e. run something similar to DOS inside of a browser. I would
much rather cross-compile to the major architectures than code in
HTML/CSS/JS.

------
rdiddly
The Jevons Paradox applies here: As technology improves, cost goes down, which
increases demand, which causes more of a given resource to be used, not less.

------
kisstheblade
Everybody here says that "software bloats", but nobody says how it bloats.
Why does Chrome take 500MB to render a page? Where does that memory go?

------
pmurT
Although it's great for campfire stories, I don't really see a problem.
Resources are there to be used - the lesson from StarCraft is that you don't
hoard resources to victory. I think most people believe this, as how many of
us are writing unikernel OSs purpose-built for our hand-tuned assembly
backends?

~~~
khedoros1
Sticking with the StarCraft analogy: we're spending the resources constantly,
and even increasing the size of our army to combat the constant onslaught of
enemies. If we could decrease the price of our units, wouldn't that be a
benefit? You spend a boatload of time and resources on research, and gain an
ongoing benefit from it.

Take this laptop: spinning platter drive and 4GB of RAM. I'd be happy if it
could do more with the same hardware, and I think that's the core of what
we're talking about.

~~~
valesco
But time took care of the cost of a given hardware unit.

------
z3t4
Just like roads and budgets, utilization will always grow until it overflows.

------
Izmaki
Why did a bunch of open, unintelligent questions even get to the front page
of HN? If this is the way one should think (ask questions that we all, on a
low, unintelligent level, agree are interesting) in order to become a CTO, I
will never get there...

//The Engineer

------
ninjabeans
Ironic that your blog needs JavaScript to display a page.

------
mandie
Written on a blog that does not work with JavaScript off.

~~~
neogodless
Yes - I often think about posting this sort of comment, but won't if it's not
really relevant. But not only do you need to enable JavaScript for the domain
itself, you then need to enable it for blogblog.com and load 7 resources from
that site. The page loads up images for about a dozen unrelated articles in
the background behind the modal.

It would be great to walk the walk when you post about this kind of topic, by
using a simple HTML web page with just the images you need to aid in
presenting your argument.

------
c517402
IMHO, there is no Software Engineering. There is only Computer Science. There
is none of the discipline that comes from engineering, and only the messiness
that comes from science. The heroes in software are Computer Scientists and
everyone wants to be a Computer Scientist. No one wants to be a Software
Engineer. It isn't even clear to me that Software Engineering exists in a
clearly defined way. Maybe a manager or director imposes their will on a group
somewhere and engineering is done.

~~~
rdiddly
An engineer is just an _ingéneur_ - someone who applies ingenuity.

~~~
tome
* ingénieur

~~~
rdiddly
Oops-a-daisy.

