
Software bloat makes me sad - vanschelven
http://www.remarkablyrestrained.com/articles/-software-bloat-makes-me-sad/
======
Kaali
There is another side of extra resource use that I don't really see addressed
except in the mobile space: ecology.

Even though my computer can run all applications without a hitch, it is still
very wasteful to constantly use CPU power because of technology choices or
plain laziness. As an example, Spotify and Slack are two applications that
seem to use the most of my CPU after Chrome. Spotify and Slack combined seem to
hover around 5-15% of total CPU (on a two-year-old i7). When there is a lot of
traffic in Slack I have seen it using 15-20% by itself, with multiple
processes running and memory use going above 200 megs.

Both applications work smoothly, but should they really use that many
resources? A chat application? A music player? With modern CPUs I would
expect them to be at the bottom of the process list when sorted by CPU usage.
I used IRC on my 75 MHz Pentium and it ran fine. When simple applications are
made so poorly that they use that many resources, what is the worldwide impact
of that power use? And what about the users that don't have powerful and
expensive CPUs?

~~~
tormeh
It's not that they have poor algorithms or whatever; it's that they're part
native, to handle the desktop interaction, and the rest is a bunch of
HTML/CSS/JavaScript running in an embedded browser. At least that's how
Spotify works.

The problem is that our tools for making multiplatform native GUIs suck so
badly we'd rather just embed an entire web browser into everything.

~~~
Kaali
There are some okayish cross-platform frameworks, such as Qt and even JavaFX.
One of the main complaints about cross-platform GUIs has been that they don't
work like native applications. But for some reason nobody cares when the app
works like a single-page web app, which in many cases is a lot worse than even
a plain old Swing app, which at least supports right-click properly.

I think the main reason that node-webkit and whatnot are popular is that web
developers are moving into native app development. It's really easy to get
started that way, and you can even share code with your web app, whereas
something like Qt has a really huge learning curve for programmers
transitioning from JavaScript.

About poor algorithms: I actually worked on optimizing a well-known web
browser for a couple of years, and most of the stuff we did was to cope with
really bad JavaScript code. Even though it seems gluttonous to embed a web
browser in applications, and even insecure, it doesn't have to be as bad as it
is, especially with a simple application like Spotify. This is going off on a
bit of a tangent, but every frontend programmer should at least learn how the
browser actually works; a nice site for that is
[http://jankfree.org/](http://jankfree.org/)

~~~
mwcampbell
> There are some okayish cross-platform frameworks, such as Qt and even
> JavaFX. One of the main complaints about cross-platform GUIs has been that
> they don't work like native applications. But for some reason nobody cares
> when the app works like a single-page web app, which in many cases is a lot
> worse than even a plain old Swing app, which at least supports right-click
> properly.

There's more to non-nativeness than just look and feel, which the more mature
cross-platform GUI toolkits can ape fairly well. Another concern is
accessibility for people with disabilities, e.g. blind people using screen
readers. Qt, for example, is kind of accessible on Windows, Linux, and Mac,
but not at all on mobile platforms. Not sure about the status of JavaFX. At
least a single-page rich web app can be made accessible using the ARIA
extensions to HTML, and if you use one of the big four web rendering engines,
you can be sure they've done the hard work of implementing the underlying OS
accessibility APIs well. Of course, many (most?) web developers don't
implement ARIA for their custom UIs.

~~~
Kaali
Accessibility is a point I didn't think about at all. Thanks for reminding me,
it's really something that is all too often forgotten. JavaFX supports ARIA
and all standard controls have accessibility built-in. But I have no expertise
to actually comment on the quality of accessibility features in JavaFX.

------
mwcampbell
I've previously decried bloat too, but I find that my rants against software
bloat are based on emotion, not reason.

So, looking at this rationally, consider your favorite lightweight window
manager or desktop environment for Linux. Is it fully internationalized,
including support for CJK input methods? Is it fully accessible to users with
disabilities, e.g. blind people and people with mobility impairments? Does it
auto-mount USB thumb drives? Does it do all of the other things I'm not even
aware of that are required for a fully usable desktop environment covering a
wide variety of machines, users, and use cases? AFAIK, the only Linux desktop
environments that come close are GNOME and Unity.

Basically, the world is complicated, so real-world software has to be
complicated too.

It sounds like Atom could still use some optimization though.

~~~
scrollaway
> AFAIK, the only Linux desktop environments that come close are GNOME and
> Unity.

I'm the project lead of LXQt. All those things you listed can be supported
without introducing much bloat at all. In DEs, most bloat comes from pure
technical debt. Unnecessary libraries, duplicated code/libs, etc.

Feature bloat is another matter and is more subjective. A lot of it is down to
UX.

Fun fact: KDE 4.x depends on Ruby in most distros. Do you know why? Dolphin
ships with a Ruby script to update some folders (I don't remember what it
does). Nobody runs it. That's it.

~~~
Twirrim
> Fun fact: KDE 4.x depends on Ruby in most distros. Do you know why? Dolphin
> ships with a Ruby script to update some folders (I don't remember what it
> does). Nobody runs it. That's it.

Why did they let an additional dependency on Ruby in? It seems like Perl or
Python would be the more logical choice, given they'd already be on the system
(heck... maybe bash?)

~~~
exprL
Probably because a Ruby activist wrote a cool script and no-one cared about
the implications of a new dependency at all.

~~~
digi_owl
I fear many projects seem to take the infrastructure and manpower of large
distros for granted these days.

------
prof_hobart
> The usual counter to this is that the actual (as opposed to imagined)
> bottlenecks will only become apparent after intense usage.

The usual counter is that "optimising" takes time. And all of the time that
you're spending trying to optimise before release is time that no one is able
to use your software at all.

And the reality is that a lot of the bottlenecks _can't_ be predicted in
advance - who knows how many files the average user might want to load into
your system, or whether your website is going to attract 100 or a million
users a month? And many expected bottlenecks become largely irrelevant as
technology moves on.

Take his Ubuntu example - for what percentage of users does the fact that it
doesn't fit on an actual CD really matter any more? Is it worth spending weeks
or months fine-tuning the distro to a point that it can be fitted onto a
medium that most people probably aren't even using anymore?

~~~
bloateddevtards
I could edit a doc with a word processor that could fit onto a floppy, and it
worked well. Now, why does the same task require a multi-GB POS bloatfest to
do the same job? And oftentimes, more slowly! Imagine for a second the power
of today's hardware running optimised lean code. It could be so good, but we
accept bloated, crappy, bug-ridden shitfests of OS and application software.
We seem to care more about glossy and social than performance. It is a real
problem.

~~~
goldfire
If I want to start Microsoft Word right now, here's what I have to do: I press
the Windows key, type "word", press enter, and then wait for it to start. That
entire process takes three seconds; I just timed it. Now that Word's been run
once already, I can start it again in less than one second. That compares
pretty favorably with the process of finding a floppy disk, inserting it, and
then finally running a command on it (it's even worse if I don't remember the
cryptic command name, because then I have to wait for the floppy's directory
to be read). To say nothing of how much easier to deal with this "bloatfest"
will be once I get it loaded, compared to your leaner alternative.

~~~
TeMPOraL
That's probably because you have an SSD, but anyway; I could do the exact same
process 10 years ago (with "winword" instead of "word"), get similar response
times, and yet the software was an order of magnitude smaller _and_ more
responsive. And it's not like Word gained many actually useful features during
that time.

~~~
nlawalker
>> And it's not like Word gained many actually useful features during that
time.

Then use the 10 year old version! Unless, of course, one of those few
"actually useful features" is something you can't live without.

A _lot_ of people use Word for a _lot_ of use cases. The value of an added
feature that someone needs always trumps the performance cost of adding it
until the performance becomes so bad that it becomes the reason that other
people _stop_ using it.

~~~
XorNot
People who complain about bloat are usually complaining about features they
don't use. Of course, watch what happens when people start doing metrics and
optimizing their UI for "common uses" - turns out you're optimizing for
non-existent users (see: the Ribbon).

~~~
JoeAltmaier
The ribbon gets knocked a lot; but the median word processor user has zero
experience. The ribbon gives them a fighting chance to find what they need. At
the expense of 'expert users' with their idiomatic expectations.

~~~
TeMPOraL
Which is a problem because tools should be just that - tools. You don't make
welders or soldering irons easy for people with zero experience. You make them
effective and efficient tools, and then _train_ people to use them. Heck, show
me a single musical instrument giving zero-experienced users "a fighting
chance to find what they need".

It's weird how new trends in UX design try to make a first-time user become a
genius immediately after double-clicking on the program icon. The only way you
can do that is by dumbing down the software to the point it can actually be
comprehended this way - which makes it much less usable and effective as a
tool.

~~~
JoeAltmaier
That's the expert talking. The tool is made for the most common case - a new
hire meeting it for the first time. They're not gonna do a good job; but
anything that can improve their performance, pays. Follow the money.

------
matthewmacleod
I'm not really convinced. Leaving aside the argument about whether or not
software is 'bloated' at all, or what 'bloat' actually means, this in
particular stands out:

 _“wouldn’t it have been easier not to create the problem in the first place?”
It would seem obvious that the answer to this is indeed “yes”._

It doesn't seem obvious at all, especially considering the implicit
assumptions about whether or not bloat is actually a problem. There are three
classic examples highlighted in the article of writing about this tradeoff:
"Make it work, make it right, make it fast", "Premature optimization is the
root of all evil", and "Bloatware and the 80/20 Myth" – but the argument about
why these don't apply is a bit weak: "bloated software simply makes me sad".

To be honest, I don't think we've got a problem, generally. My experience with
my computer is that I can do many more things much faster than I could in the
past. And I don't think that counts as bloat.

~~~
TeMPOraL
I've come to believe there are two types of programmers - those who get
emotionally upset about inefficiencies, and those who couldn't care less. I'm
in the first camp, so I understand the feelings of the author perfectly.

You say your computer can do "more things much faster" than before - just
imagine how many more things, and how much faster, you could do them if the
software didn't bloat itself up at almost the same pace as the hardware gets
faster.

And you see, I can understand abstraction layers. I can excuse coding for
function first, speed second. But then there's the kind of bloat that can only
be explained by programmers having no clue about what their software is doing.
There's literally no reason why an article should lag in your browser, or
Spotify should take 10+% of CPU on standby. There seems to be a lot of
pointless code being executed, and nobody cares, because developers often have
powerful machines. Not so the general population, who then have to endure such
creations.

~~~
gldalmaso
As demand for software grows, a lot more code made by immature (as in
beginner, junior, green) developers finds its way to production.

Part of it is never code-reviewed by senior developers. Another part is
reviewed, but too late in the cycle, and goes in anyway because features.

The bloat is technical debt accrued in favor of shipping.

It probably happens to both types you described, but shipping is a business
decision.

~~~
mondoshawan
It's not always a business decision. GNOME, KDE, X, systemd, dbus... The list
goes on and on. None of this software is business derived.

Honestly, an engineer who cares about the performance of their code has
several options available:

- design the project first, then code later.
- design the code for the architecture (arrays of objects vs. objects of
  arrays - see the sketch just below).
- use tools that can optimize out inefficiencies.
- hand optimize after writing a functional slice.
- document the APIs and interfaces to explain the performance expected as
  part of their contract.
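
To make the second point concrete, here's a rough sketch of arrays of objects
vs. objects of arrays in C (names made up); sweeping a single field touches
far less memory in the second layout:

    #include <stddef.h>

    /* "Array of objects": each particle's fields are interleaved, so
       summing just the x coordinates drags y, z and mass through the
       cache as well. */
    struct particle { float x, y, z, mass; };

    float sum_x_aos(const struct particle *p, size_t n) {
        float sum = 0.0f;
        for (size_t i = 0; i < n; i++)
            sum += p[i].x;      /* 4 useful bytes per 16 fetched */
        return sum;
    }

    /* "Object of arrays": each field is contiguous, so the same sweep
       reads only the bytes it actually needs. */
    struct particles { float *x, *y, *z, *mass; size_t n; };

    float sum_x_soa(const struct particles *p) {
        float sum = 0.0f;
        for (size_t i = 0; i < p->n; i++)
            sum += p->x[i];     /* every fetched byte is useful */
        return sum;
    }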

Unfortunately, it seems as though we've gotten into a bit of a rut when it
comes to this -- we rely on the awful tooling we have to optimize the code we
write (yes, LLVM and friends are light years better at this than GCC et al.,
but it's still not enough), we never document the performance characteristics
of the libraries and APIs we create, and we (at least in the OSS world) almost
never design our projects to the level of detail necessary to fully understand
them.

Honestly, I'm not sure how we can solve these problems if the engineers lack
discipline. It's a hobby, after all, right? We do this in our spare time, and
nobody wants to write designs or documentation because that's the least
interesting part of coding. See also GTK's autodocs, which for /years/ lay
around as nothing more than a simple enumeration of which functions were in
the libraries. Even in self-documenting languages we fail at this (I'm looking
at you, Common Lisp).

Optimizing afterward is a simple, if lazy, way out, but post-facto
optimization can only take you so far if the design is simply broken.

Since we never document or design, the problem ends up falling to our tooling.
Things like oprofile and so on work, but are usually so coarse grained that
digging through the code to instrument it effectively becomes a grind nobody
wants to do. Maybe we need companies to develop free and open tooling for
understanding the performance characteristics of the code we write. After all,
the "modern" tooling we use for languages like C and C++ hasn't changed since
the 80s, and it's often actually hostile to the end user, which just makes the
job harder. Even our linkers often fail to do their jobs of optimizing out
useless dependencies or symbols unless we explicitly tell them to do so.
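
A footnote on that last point: the GNU toolchain can strip unused symbols,
but only if you explicitly ask for it. A sketch (file names made up):

    /* lib.c - two functions, only one of which main() ever uses. */
    int used(int x)   { return x + 1; }
    int unused(int x) { return x * 2; }

    /* main.c */
    int used(int x);
    int main(void) { return used(41); }

    /* Put each function in its own section, then let the linker
       garbage-collect the unreferenced ones:

           cc -ffunction-sections -fdata-sections -c lib.c main.c
           cc -Wl,--gc-sections lib.o main.o -o demo

       Without --gc-sections, unused() is carried along in the binary
       even though nothing ever calls it. */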

Ultimately, the blame lies with us, the engineers who work on this code. New
engineers joining our projects need to understand the code itself through and
through. Our projects have become so large that without documentation no-one
can make a clear, clean decision on what should or shouldn't be written
anymore.

What I want to see is tooling that can semantically understand the code I
write, and give me characterizations of the function calls I make as I write
them. When I need to examine source to better understand it, I want tooling
that tells me how often the functions are called, where they are called, and
how. Raw text is great for writing prose, but we're not writing prose -- we're
writing code. Our tools should reflect this, and give us as much help as
possible.

~~~
keithpeter
Civilian comment: the Gnome/dbus/logind/systemd situation seems to have
arrived as a result of a number of _independent_ projects, each on their own
random walk but responding to the earlier states of the other projects that
each depends on. So it may be overall integration issues (Linux-based systems
have always seemed to me, as a user, to be like bags of Lego) rather than
efficiency of any particular aspect of one project.

~~~
mondoshawan
While I agree they're all attacking the same problems, ultimately they all
have exactly the same problem when it comes to code: no designs, no docs, and
no tooling.

The point you make about Linux systems being bags of Lego is interesting.
That's exactly how it is supposed to be. Unfortunately, the projects we're
talking about don't compose well -- they're more like Duplo than Lego, because
oftentimes they subsume whole feature sets into one huge monolithic block.
Huge lumps of code exist in GNOME and KDE and systemd and dbus that just
simply have no business being in there.

Dbus and systemd especially fail at the whole design angle.

~~~
keithpeter
Not sure why you are being down-voted for expressing an _opinion_ about
program design. Would downvoters care to explain?

------
fullwedgewhale
One thing that strikes me is the explosion in dependencies in most software.
I'm guilty of this, too. I've seen plenty of examples where an entire library
or framework is added to a project just for a couple of features. Add a few
libraries like that and suddenly you have a few megabytes of additional
libraries, where maybe 90% or 95% of the features will never be used. A good
article a while back looked at common Unix utilities, comparing the size of
commands like cp from the 1980s to the present. Most of the bloat had to do
with features that almost nobody ever uses. It wouldn't be so bad if everyone
used the same set of libraries. For example, almost all applications have a
dependency on certain core libraries like libc.

But we often use different libraries that do essentially the same thing or
different versions of the same library, so instead of 1 copy of libfoo.jar, I
have 2 copies of libbar.jar and 4 copies of libfoo.jar that may all do
essentially the same thing. Then I have essentially the same functionality in
C++ (some libraries that wrap collections) and in Python (where maybe one of
the Python versions wraps one of the C++ libraries, but a different version).
And of course I have a version installed in each Ruby environment. Add to that
their dependencies, and the dependency's dependencies, and you have a perfect
storm of craptastic. So libfoo.jar version 1.2.3 depends on libbaz.jar 2.3.4
which depends on libqux 1.5.7. Let's say each one is 250k, and all I ever used
was some list sorting utility in libfoo.

But I don't know what we could really do about it. You can't force everyone to
program in C++ or limit them to a set of blessed libraries. I think maybe
developers could be more judicious about when they could add a few lines of
code and when they actually need to bring in a hard dependency on an external
library. And it happens with commercial software as well. Maybe this is just
the way the world will be.

~~~
Eridrus
Dart makes a decent argument for a smarter compiler. They've implemented "Tree
Shaking" in their compiler: essentially cross-library dead code elimination.

This would probably be quite tricky in Java land where reflection does add new
entry points, but it could be used to solve the problem of "I only need this
one function from this library, don't compile in anything else".

I was personally quite surprised when I ported some code from Node to
Java/Groovy and the resulting shaded JAR was > 70MB; I think at some point it
peaked above 110MB. I don't know what I changed, but it's down to 35MB now.
The code that we've written in-house on that codebase boils down to 1MB. But
besides figuring out that I don't want to make local builds I scp to staging
(because scp is terrible), these numbers are all completely irrelevant for
writing server-side software that runs on dedicated machines.

We could certainly make it more efficient, but there's exactly zero business
case for it.

~~~
rurban
You can only tree-shake a whole program compilation, but then you cannot use
compilation units, modules and modularity efficiently. You have to choose one
or the other.

Every normal compiler implements simple (i.e. module level) dead-code
elimination already.

EDIT: Of course you could use static libs, which pull in only used symbols,
but then you cannot share them across apps and update them independently.

I implemented a tree shaker for my lisp and was very happy with it, esp. for
delivery. Like Go does it nowadays.

~~~
Eridrus
Right, I guess I wasn't clear, this was a shaded/fat jar, so it had all its
dependencies included statically.

I feel like our computing infrastructure has gotten to the point that
dynamically linked libraries are no longer a good choice. I think dynamic
linking has only caused us problems at work (devs install Node deps on the
staging server, forget to tell ops, service crashes when deployed in prod),
and the memory/disk/transfer overhead are practically irrelevant at this
point. The only remaining reason to have dynamic libs is the idea that they
can be updated without help from upstream, but that really only works if the
software is compatible with the latest libraries, which isn't always true.

Supposedly ProGuard has some cross-module dead code elimination for JARs, but
I haven't tried it:
[http://proguard.sourceforge.net/](http://proguard.sourceforge.net/)

------
placebo
As someone who is also saddened by software bloat, I also wondered a bit about
where it stems from. There are of course the usual suspects (some of them
mentioned in the article), but I think it boils down to another symptom of the
modern rat-race culture, and it is not limited to software. When the goal of
creating beauty and enjoying (as well as taking pride in) the creation process
is replaced by the god of money and getting it out the door as fast as
possible, the quality of the thing you're getting out the door has to
deteriorate. I'd argue that a side effect of this is that the quality of life
of both the creator and the receiver of the product decreases, but that is not
immediately apparent when you're too busy trying to win a rat race...

However, I'm optimistic. For one, I think that the tools we have today are
better in absolute terms (convenience, speed, ease of use, etc.) than they
ever were. Of course they are not exponentially better (like the hardware they
run on), but they _are_ better. Also, occasionally I see some gems here and
there. There are still many developers who take pride in what they do and I
enjoy finding that diamond once in a while when searching for an alternative
technology to use in a new project.

------
deckiedan
I agree in general, but the claim that 'vim in combination with a few small
plugins may lead to your whole computer locking up' is extremely misleading.

It's the _editor_ which has frozen, not the whole computer. And the plugin in
question is a fairly complex autocompletion / analysis plugin for python.
There's a bug listed in that thread, to do with it scanning all the files
millions of times, but the big problem actually isn't bloat, in this instance,
it's the non-async nature of vim. neovim should, I hope, solve the 'totally
unresponsive' problem, and hopefully someone fixes the rope/vim interaction
re-scanning all the files bug.

But it's not bloat, per se.

If your whole computer is locking up due to this, then that's a scheduling
issue, and you could try switching scheduler, or lowering the 'nice' priority
of that program.
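
Lowering the priority needs nothing exotic, by the way - renice from a shell,
or from inside the program something like this sketch:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        /* Raise our own nice value to 10 - i.e. lower our scheduling
           priority - so heavy background scanning can't starve
           interactive processes. */
        if (setpriority(PRIO_PROCESS, 0, 10) == -1)
            perror("setpriority");
        /* ... do the heavy work here ... */
        return 0;
    }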

Not that we should have to care about such things - that's pretty terrible.

------
zimpenfish
"Convenient though it would be if it were true, Mozilla is not big because
it's full of useless crap. Mozilla is big because your needs are big. Your
needs are big because the Internet is big. There are lots of small, lean web
browsers out there that, incidentally, do almost nothing useful. If that's
what you need, you've got options..."

[http://www.jwz.org/doc/easter-eggs.html](http://www.jwz.org/doc/easter-eggs.html)

~~~
vezzy-fnord
That doesn't excuse resource inefficiency, which is really the main point. The
issue of size can be further addressed by separation of mechanism and policy,
and building for extensibility.

~~~
XorNot
Are you sure there's inefficiency? Do you have specific points in the code or
behavior which are obviously inefficient, or does it just feel "too big"?

~~~
mondoshawan
_ahem_ MorkDB.

There's tons of code in Mozilla that isn't necessary to do the one job the
browser was built for.

I don't need code that manages bookmarks. At all. I don't need code that
stores history for years. Honestly, I don't /want/ cookies persisted /at all/.
I don't need themes to browse the web, and a reorganizable UI is /never/ what
I'm looking for in a browser.

I don't need WebGL. I don't need Canvas. Hell, I definitely don't need syncing
of all this extra bloated data around to services somewhere in the 'net,
either. Frames? What about _floating frames_?

Half this stuff is better left to tools that are built for it, the other half
is junk I don't need, or is junk that actually creates more problems than it
solves.

~~~
ZenoArrow
Then use something else. It's not like you don't have a choice.

[https://en.wikipedia.org/wiki/List_of_web_browsers](https://en.wikipedia.org/wiki/List_of_web_browsers)

Based on your stated requirements, perhaps you'll like Dillo.

[http://www.dillo.org](http://www.dillo.org)

~~~
mondoshawan
I think you misunderstand. The parent was asking for examples of bloated code.
I was providing examples.

I actually /do/ use other browsers like Midori, w3m, and surf. The problem
doesn't disappear, though, because it's up to all of us writing code to be
better at it.

~~~
ZenoArrow
Not all code bases need to be minimal to be valid. Some people like having the
features you class as superfluous. So long as you have the option to choose
software that suits your taste there's not really much of a problem. I, for
one, am glad that the web continues to evolve beyond its original design;
clearly that's not something you're as interested in. Both viewpoints are
valid.

~~~
mondoshawan
So my point here isn't that I feel those features are superfluous, but that
including them wholesale in the web browsing application directly is bloat --
it would be better to produce separate applications specialized for those
features.

Bookmarks, for instance, can be handled in a much cleaner and portable way
than isolating them to the specific browser implementation. The same can be
said for history, authentication, and so on.

Also, to be fair, I wasn't arguing for minimal codebases -- that was your
point. I was providing examples for the GP.

~~~
XorNot
And this is where I'm bored with bloat rants. "Software X is bloated! <lists 5
common features everybody uses daily>"

------
kkapelon
"There are very little tools available to to help the user select unbloated
software"

Actually there is a whole community around this
[http://suckless.org/](http://suckless.org/)

~~~
Freaky
It's a funny idea of sucking less where any user configuration involves
editing config.h and recompiling.

~~~
falcolas
Which is easier? Re-running make, or having to write and include another
library to read configuration files?
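
For anyone who hasn't seen the suckless pattern, it's roughly this (a made-up
example, not any project's actual config.h):

    /* config.h - all "configuration" is compile-time constants. */
    static const char *font      = "monospace:size=10";
    static const unsigned border = 2;   /* window border in pixels */
    static const int show_bar    = 1;   /* 0 disables the status bar */

    /* main.c */
    #include <stdio.h>
    #include "config.h"

    int main(void) {
        /* Settings are ordinary variables; changing one means editing
           config.h and re-running make. */
        printf("font=%s border=%u bar=%d\n", font, border, show_bar);
        return 0;
    }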

Configuration files introduce a new point of failure (file not present, file
corrupted, file not readable), a point of slowdown (how long does it take to
read the file from disk, how long does it take to parse and load the file),
and a point of complexity (parsing even ini files is pretty complicated, if
you want to support the myriad of ways we write them).

All of this for values which change once every... how long? Once a year? Once
an install? Or for most configuration values, never?

I still use configuration files, because I'm OK with including third party
libraries... but I can certainly understand why some people may not be.

~~~
Freaky
> Which is easier? Re-running make, or having to write and induce another
> library to read configuration files?

It's certainly easier for the programmer to just leave some of the work to the
compiler, but it's at the cost of being a complete pain in the ass to
packagers and users.

Do you even have to do much work yourself? .Xresources is already parsed and
loaded for you by xrdb, if you can't afford the cost of an extra 100
microseconds doing it yourself. Is the API to interact with that mindblowingly
horrid or something?
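
For reference, the xrdb route looks roughly like this in Xlib (a sketch; the
resource name is made up):

    /* build: cc getres.c -lX11 */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/Xresource.h>

    int main(void) {
        XrmInitialize();
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;

        /* xrdb has already loaded ~/.Xresources into the server. */
        char *rms = XResourceManagerString(dpy);
        XrmDatabase db = rms ? XrmGetStringDatabase(rms) : NULL;

        char *type = NULL;
        XrmValue value;
        /* Looks up e.g. "myapp.font: monospace" from the user's resources. */
        if (db && XrmGetResource(db, "myapp.font", "MyApp.Font", &type, &value))
            printf("font: %s\n", value.addr);

        XCloseDisplay(dpy);
        return 0;
    }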

> All of this for values which change once every... how long?

When I'm configuring software to taste, several times a minute. And
considering this is likely to be my first exposure to the software, it better
not suck completely.

------
kazinator
Software bloat as such doesn't make me sad.

Free open source software bloat makes me sad.

However, that bloat is winning on its own merits. For instance, GNU/Linux
distributions which are minimal are not popular. The popular ones tend to be
the bloated ones.

Vim has gotten a lot bigger in 20 years, yet I'm not going to jump ship to a
smaller vi implementation.

FOSS makes us confront the fact that we actually seem to like bloat. We
inflict bloat upon ourselves --- there is no monopolistic software vendor
there to blame.

~~~
drabiega
Because bloat is like government. Multiple people might agree that there's too
much of it, but they don't necessarily agree on which parts should be gotten
rid of.

~~~
omegaham
This. As always, there's a relevant xkcd[1]. Features that I might decry as
bloat are probably someone else's "necessary, to be preserved at all costs"
features. If you want to cover a large number of use cases, you have to
include all of them. And with that comes bloat. There's a reason why even a
minimalist CLI-only Linux installation requires more than 24MB of RAM when
Windows 3.1 runs on 3MB - the Linux kernel is capable of a hell of a lot more.

[1][https://xkcd.com/1172/](https://xkcd.com/1172/)

~~~
kazinator
A minimalist CLI-only Linux installation was still possible in around 2 megs
of RAM in 1994.

A minimalist X Window installation was possible with a 40 megabyte hard drive
and 4 megs of RAM on a 386 CPU. (That's similar specs to what 68K-based Sun
workstations had perhaps a decade before that.)

------
maramono
Related: great article on code inflation
[https://www.computer.org/cms/Computer.org/ComputingNow/issue...](https://www.computer.org/cms/Computer.org/ComputingNow/issues/2015/04/mso2015020010.pdf)

------
a15971
The bloat is simply a side effect of evolution. Software is not so much
designed as evolved.

You make a name for yourself in SW development by adding features that others
deem useful. You get money (in a commercial setting) or prestige (in freely
distributed software) if you seem to be producing something useful. It's
trivially easy to recognize a contribution that adds code, but it's hard to
recognize a contribution that means the absence of something (e.g. absence of
bloat, absence of memory hogging, ...). The more you make yourself recognized,
the more easily you get included in newer, larger or more important projects.

So any useful software gains more contributors that add things than those who
remove things. Commercial code can and does gain developers if it earns money,
open source software gains developers if it is deemed useful by programmers.

This is a force that shapes both the group of involved developers and the
resulting software in a process akin to evolution. In both cases the selection
is biased towards adders of code. It's also always possible to add
improvements that help some use cases and audiences. On the other hand,
arguing for removing or limiting something might make you unpopular (you are
seen as an obstacle to everlasting progress) and get you removed from the
group of developers. Few people have enough recognition and clout to prevent
inclusion of something (Linus Torvalds is one of them - he can get away with
rejecting patches to the Linux kernel and can play the role of the kernel
guardian).

On the whole, software expands until it fills the resources available.

------
tribaal
That's pretty much the only reason I cringe every time somebody writes a piece
of client code in golang.

While I really like the language itself, statically linking everything is
creating exactly what this article describes.

Burn karma, burn. I don't care.

~~~
vezzy-fnord
Not gonna burn your karma, however:

a) You are failing to account for the bloat caused by the presence of the
dynamic linker, the dynamic loader, all the code that must operate under the
assumptions of a dynamically linked environment, and the various auxiliaries
used to treat shared library hell issues like WinSxS and libtool.

b) Most modern static linking implementations do things like sharing of text
segments across processes and other dedup techniques, so they're not actually
that bloated at all.

~~~
tribaal
So maybe my comment stems from ignorance.

Are you saying that two Golang projects sharing the same (large, for the sake
of the argument) dependency would not take (size of the dependency * 2) on,
e.g., the Ubuntu live CD (as was the example in the fine article)?

~~~
TheLoneWolfling
Correct.

I would not be surprised if those two Golang projects take _less_ than the
size of the dependency. 2x is a degenerate worst-case.

With static linking, you only need to pull in things you actually need (and
you can optimize from there!), whereas with dynamic linking you need to pull
in the entire dependency regardless. (You only need to pull in one copy
regardless of how many things are using the library, true. But you need to
pull in one full copy always.)

There are _very_ few cases where you actually use the entire library in a
project - and if you are indeed using an entire library in a project your
project is necessarily large enough compared to the library that it's not much
overhead percentage-wise.
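
To illustrate with the classic C toolchain (Go's linker differs in detail,
but the principle is the same): a static archive is just a bag of object
files, and the linker pulls out only the members that resolve a needed
symbol. Hypothetical names:

    /* foo_sort.c - the one thing we actually use */
    void foo_sort(int *a, int n) { (void)a; (void)n; /* ... */ }

    /* foo_net.c - a big chunk of the library we never touch */
    void foo_net_connect(const char *host) { (void)host; /* ... */ }

    /* main.c */
    void foo_sort(int *a, int n);
    int main(void) { int a[3] = {3, 1, 2}; foo_sort(a, 3); return 0; }

    /*
       cc -c foo_sort.c foo_net.c main.c
       ar rcs libfoo.a foo_sort.o foo_net.o
       cc main.o -L. -lfoo -o demo

       Only foo_sort.o ends up in the binary; foo_net.o is never pulled
       in, because no symbol in it was needed. A shared libfoo.so, by
       contrast, ships whole. */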

~~~
tribaal
That makes a lot of sense.

Thanks for the insight, I now understand why my comment was misguided.

------
aikah
Point 2, about Atom:

Well, writing a text editor is far more complicated than it seems at first
glance. In order to support big files (50MB logs, for instance), you need to
make all your code rely on complex memory management, streaming and caching.
Maybe JS is not the right tool for the job when it comes to implementing these
features. I always thought that Atom should be built on Qt with an API that
JS can interact with, a bit like Adobe products, for instance. That way you
still allow JS plugins, just like Photoshop, while getting maximum performance
thanks to C++.
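
The heart of the big-file problem, in sketch form: an editor that slurps the
whole file at once has nothing to do but block, while one that works in
bounded chunks keeps memory flat and can yield to the UI between chunks (a
toy example, not Atom's actual code):

    #include <stdio.h>
    #include <stdlib.h>

    /* Process a big file in fixed-size chunks instead of loading all of
       it at once; memory stays bounded at CHUNK bytes. */
    static int process_in_chunks(const char *path) {
        enum { CHUNK = 64 * 1024 };
        FILE *f = fopen(path, "rb");
        char *buf = malloc(CHUNK);
        if (!f || !buf) { if (f) fclose(f); free(buf); return -1; }

        size_t n;
        while ((n = fread(buf, 1, CHUNK, f)) > 0) {
            /* index/highlight/count this chunk, then let the UI run */
        }
        free(buf);
        fclose(f);
        return 0;
    }

    int main(int argc, char **argv) {
        return argc > 1 ? process_in_chunks(argv[1]) : 0;
    }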

~~~
runj__
Atom is a super weird example; wouldn't it bloat the software to add support
for files larger than 2MB? Wouldn't that be feature bloat? I mean, a code
editor shouldn't have to deal with gigantic files, since code shouldn't be
written that way <sup>[citation needed]</sup>.

~~~
creichert
What if the code isn't written but generated? How would I inspect it with my
editor?

------
keedot
Only experience with the same problem space can make you accurately predict
where a bottleneck is going to be. Premature optimization will have you
focusing on areas where optimization isn't required. YAGNI applies to all
levels of development.

I would rather see bloat and new ideas than only experts working in a problem
domain. My best ideas come from a marriage of understanding a single problem
space and wondering what I can do in another. Bring on the bloat.

------
alkonaut
Feature bloat is inevitable if you write an application in which thousands of
users want different features. Most users want just a subset of the features,
and think of the rest as "bloat" even though it might be a core feature for
another user. The alternative is multiple lighter applications which would
just reduce bloat at the cost of integration, a trade off almost no one seems
to want.

~~~
TheLoneWolfling
Or, you write the application as a plugin-centric architecture with very
minimal core features, and go from there.

Not perfect (especially as writing things as plugins tends to introduce a
certain amount of bloat of its own), but it can be better regardless.

------
rumcajz
I guess devs don't necessarily believe that bloated software is good.
However, they believe that the project should be developed forever, which
yields the same result. I've written about it here:
[http://250bpm.com/blog:50](http://250bpm.com/blog:50)

------
redcalx
The notion of 'premature optimisation is evil' is partly to blame. It's not a
terrible rule, but it's much better with conditions applied, i.e. apply some
intelligence/judgement to where you use optimisation rather than just lazily
following some rule.

~~~
kazinator
Firstly, lack of optimization for software size or execution time _is_ a form
of optimization: it's optimization of someone's time.

Not writing a compiler for your scripting language saves you years.

Not choosing carefully what features to include in a program, or what packages
in a system image: design time saved again.

It also saves considerable time if a user wants the program to do something,
and by golly, "look, they thought of that already: there is a feature for it!"
Seconds later, the user is just doing whatever they need instead of surfing
the web for workarounds or calling support.

Having all the features also increases the flexibility of customization: you
have more choices about what you can remove to create smaller images which
have specific feature sets. You can hardly remove anything from a minimal
image to make a specialized custom image.

I.e. by not making a program as minimal as possible, compact as possible or
fast as possible, you _are_ making some kind of trade off. Though the program
is worse in some regard, there exist parameters (of something, not necessarily
the program) which have gotten better: if you think about it a little, you can
identify what those parameters are. The worsening in some of those parameters
in a "bad trade" is what makes premature optimization "evil".

~~~
redcalx
I agree with the general point; however, I believe it's commonplace for
software to be poor on those other metrics you mention too, i.e. the one
metric that /has/ been optimised is development cost. All of the optimal
outcomes define a Pareto front, i.e. spend time optimising feature A versus
speed, cost, etc. But as soon as you bring in low(er)-quality engineers you're
moving away from the Pareto front, probably quite significantly so.

~~~
kazinator
I.e. not only will there be bloat, but users won't find the features they
need, it will be delivered late, and with bugs, etc. :)

------
vlad
I don't care if an app is 20% bigger in file size as long as the user
interface makes sense and the product actually works, especially if the bloat
is due to a programming language or architecture that makes it easier to add
new features or fix bugs.

~~~
fullwedgewhale
But what if it's 200% bigger? I remember reading an article that looked at the
time it takes to load MS office under Windows. Even though computers over the
last 15-20 years or so are much, much faster, the wall-clock time to get Word
or Excel up and running has pretty much stayed constant. Granted, disks
haven't kept pace with CPUs, but when your computer is 100 times faster and it
still takes just as long... Is that a good trade off?

------
redcalx
I've idly thought of writing a 'manifesto for quality software', and/or trying
to start some kind of political-esque movement to that effect.

I get annoyed by having to reboot my TV or radio when they freeze up, or by
having delays to my channel changes, or having to navigate complex menus to do
things that would have just been a physical button click before.

(this is all part of the same problem as bloat IMO).

This is where the likes of Apple or Dyson do well (or OK) I think. It's not
necessarily that they're brilliant at what they do, it's just that the
competition is pretty awful.

------
Djonckheere
this discussion reminds me of...
[http://s1.postimg.org/6tm9yeue7/invisible_software.jpg](http://s1.postimg.org/6tm9yeue7/invisible_software.jpg)

------
bryanlarsen
Lotus 1-2-3 is the textbook example of the benefits and costs of the speed /
feature trade-off curve.

A major reason for 1-2-3 becoming dominant was its speed and memory
efficiency; it was written in assembler. If you had a 640K machine, 1-2-3 let
you write bigger spreadsheets than anybody else could.

But 1-2-3 was killed by Excel because it lost the feature race. Its origins
in assembly language gave it a significant disadvantage in this race.

~~~
Shorel
> But 1-2-3 was killed by Excel because it lost the feature race. Its origins
> in assembly language gave it a significant disadvantage in this race.

Yet the winner was written in C++ for many years, while the world was
distracted with the less efficient VB.

Now some parts are .NET, but the crucial parts are still C++.

------
yiyus
You are not alone. See, for example, suckless.org.

------
emsy
Atom and Vim (and IMO even Ubuntu) aren't good examples of software bloat.
While they might be slow and use a lot of memory, they're usually shipped with
basic functionality, and more functions can be added and removed at any time.
Good examples are iTunes, Office, iOS and Android (the latter two are usually
shipped with non-removable bloatware).

~~~
falcolas
In my (controversial) opinion, Atom is a great example. It includes/requires
Node.js and an embedded web browser, and runs JavaScript, to provide a rather
basic text editor.

And in many cases, it's slow, CPU hungry, and memory intensive. Those three
characteristics are the very definition of bloat to me.

~~~
emsy
I don't think Atom wants to be a basic text editor in the first place. It's
shipped as a basic editor but the killer feature is that you can easily hack
together extensions to suit your needs. It's a tradeoff: you pay for the
html+js engine, but get a lot of potential developers. That being said, having
worked with Atom on a larger project I can't confirm it's slow.

------
grandalf
Atom is designed to harness community participation from a community that
currently embraces some technology that can easily lead to bloat.

So it's a tradeoff. On the margin, many more people are likely to write a
halfway decent Atom extension than learn elisp and write one for emacs.

------
istvan__
There is a nice study on the subject:

[https://twitter.com/lix/status/589171043010412544/photo/1](https://twitter.com/lix/status/589171043010412544/photo/1)

------
legulere
I think lots of the bloat is actually cruft that piles up over the years. Why
can't we remove it? Because somebody will use the feature and complain about
it getting removed.

------
al2o3cr
"In fact, in my experience the process of finding such bottlenecks on running
systems is itself quite time-consuming - time which cannot be spent actually
reducing bloat."

Wow. Talk about a self-refuting argument. If it's hard to figure out where to
optimize a RUNNING system, how is it supposed to be _easier_ to do so when the
system is being created?

Arguing "bloat" is "bad for performance"? No problem - graphs or GTFO.
Otherwise all you're left with is the emotional "OMG NUMBERS ARE HIGHER NAOW
THAN BEFORE" drivel.

------
chrismcb
A blog that opens with a full-screen image that has no text and adds no value
to the story asks why there is software bloat?

------
dpweb
Over time, all software tends to get bigger and not necessarily better.

Windows went from 7MB required (95) to 6GB required (Vista), nearly 1000x
bigger, in just 10 years. It's true of file formats as well, and verbosity is
one of the reasons XML was/is so hated.

There is an irresistible urge to make things bigger, bigger, bigger, and huge
CPU, memory, and disk resources are essentially free.

------
outworlder
We can't all agree on the definition of 'software bloat'.

Is using more memory resources for caching "bloat"? It irks me when someone
pulls a task manager screenshot, orders by memory usage, and picks one
application out of it. "See, program X is using XXXMBs of memory! Bloated!"
Well, perhaps it is, perhaps it isn't. You are still using it, so it is
probably doing its job well. Perhaps that "wasted" memory is being used for
caching.

This is especially true when people compare OS memory consumption. If an OS is
not using all your memory, you are wasting it. It should be using it for
_something_ , be it caching, eager loading, whatever. As long as you can
quickly reclaim it when needed.

Same goes for CPU usage. "It's using 100% of my CPU!". Well, did you tell it
to do anything? If so, isn't that what you want? You want the task to complete
quickly so it can go back to idle. Now, if it is supposed to be idling, and it
is using a lot of CPU, that's a problem (Slack, I'm looking at you).

From the article:

> The same CD that hasn’t been able to fit Ubuntu since 2011 still fits
> approximately 150,000 pages of unformatted English text without any
> compression.

Well, yeah. But one-dimensional metrics are useless. What about the number of
packages? Did it increase? Is Ubuntu now packing high-resolution artwork, to
be used with our 5k displays? What else has changed? I'm certain that the CD
is not all source code, so the English text comparison is meaningless.

> The burden of selecting software that is not bloated is entirely on the
> user. The default is bloated, if you want the unbloated version, you’ll have
> to work (search) for it yourself. And in many cases (e.g. anything that
> needs a web browser) such a search may not even be fruitful.

Sometimes, the opposite is true. Take Vim and Atom, which are mentioned in the
article. You can _add_ packages to them, so increasing the perceived "bloat"
is entirely up to the user. Unless the user takes the easy way and installs a
"vim-full" package. Which, if we have memory and cpu to spare, isn't usually a
problem. We are talking about VIM in the age of laptops with 16GB of RAM.

> There are very few tools available to help the user select unbloated
> software. Very few packages make any claims about their storage and runtime
> characteristics at all

Now, here I agree. It is something difficult to measure.

> Over time, the battle against bloat is always lost. Even Ubuntu, which has
> traditionally presented itself (besides other things) as a method to extract
> a few extra life-years out of old hardware, is mentioned in the list above.
> In other words: it’s only less bloated than the alternatives.

Yes. And yet it is inching closer and closer to being "ready for the desktop".
It has to cater to a lot of people, which means bundling lots of features.
Still less "bloated" than other commercial operating systems. Considering the
number of available packages, Ubuntu offers a very good deal.

> or even simply the introduction of splash screens.

Oh, now we are in agreement. See, if you are waiting for a splash screen, you
can't do your job. The software is not doing its job. Therefore, the amount of
time being spent at the splash screen should be reduced, so that we can
eliminate the need for one. In the bargain, eliminating the code and artwork
for the splash screen, thus reducing bloat.

Still, in some cases, this is unavoidable. Take games. They will often present
splash screens when loading a level (if they have such a concept). Some of
them will even go so far as to present animated 3D geometry, using the GPU
(and CPU), which are mostly idle anyway, waiting for I/O. Sometimes one can
devise a better way of packing the files to reduce loading times, even more so
when the actual serialization mechanism is inefficient (hi, Kerbal Space
Program). But what they are usually trading off is lag-free gameplay in
exchange for a loading screen (think of it as warming a huge cache). Is that
bloat? The assets are huge because we want them to be.

Are you developing for an embedded platform? No, mobile doesn't count; those
are effectively shrunk-down PCs from a few years ago (with non-mechanical
storage, even). If so, then worry about every CPU cycle you are using, as well
as storage. Build your own Linux distribution if you have to; compile with the
exact flags for your platform so you can generate optimal code.

Running on a battery-powered device (no matter the size)? Then try not to use
too much CPU, and certainly not constantly. Batch stuff, run in bursts, get it
over with quickly. Disable any non-essential tasks.

Is it a background task? Try not to disturb the rest of the system, please.

A foreground, interactive desktop application? You are likely the focus of the
user's attention; do whatever it takes to minimize the latency! The only
reason not to gobble all RAM is that the user may be running other stuff too.
And the reason for minimizing CPU usage is heat and power. Other than that, I
bet a user would rather have an application that makes use of all the
resources available, if it means getting work done quickly. And feeling...
"snappy"!

If the user is not being impacted, then I don't see the problem. My issue with
Slack, for instance, is that it uses a lot of system resources _and_ doesn't
feel fast in return.

