
I hate almost all software (2011) - panic
http://tinyclouds.org/rant.html
======
monodeldiablo
The older I get, the more I agree with this rant. I used to be that guy
putting cute unicode checkboxes in his test runner, telling myself I was
making the world a better place because my software was nifty and clever. Now,
I realize that, while that's nice to see, my time was better spent just
solving the problem and then moving on to the next problem. Nobody was dying
for animated tick marks and I probably ruined someone's experience because
their terminal emulator or multiplexer had shitty unicode and escape code
support.

It's a delicate balancing act, of course. The tinkerers are often the ones
pushing tech forward with their incessant fiddling. But it comes at such an
incredibly high cost. I mean, how many _human lifetimes_ have been consumed by
the shitty intricacies of dynamic linking alone? And for what? To solve a
class of problems we haven't really had since 1992?

What a waste.

There's ample space in the software world for all kinds -- and arguably the
industry is fueled by the blood of well-intentioned innocents -- but my
increasingly jaded self just wants the shit to work so that I can close my
laptop and get outside with my kids. The guy who spent 15 hours a day
obsessing over The Right Way and code purity and ecosystems has aged into a
man who firmly believes that less is much more, cuteness is a code smell, and
solutions should be as self-contained as humanly possible.

~~~
bluetomcat
The trouble is that the tinkerers learn to cope with the idiosyncrasies of the
legacy systems and build their shiny new stuff on top of them. In the end, we
get software that solves a relatively simple problem but depends on layers
upon layers of other systems and abstractions.

IMO, we need to fundamentally rethink the design of our computing
infrastructure so that it suits our current way of using computers. This would
start from the CPU microarchitecture and instruction set, through the
programming interface of the operating system, up to the middleware used by
most programs.

~~~
monodeldiablo
I'm sympathetic to your point of view, but Big Design Up Front never works, in
my experience. It's impossible to anticipate future uses for tech that we
currently consider vestigial, unnecessary, or inefficient.

Mainframes and terminals were the future until the PC displaced them, until
the web came along, until mobile and cloud computing smashed it all and
brought us full circle (sorta). Designing to optimize the full stack for one
of these iterations would have made the transition to the next phase much
harder. Back in the early 2000s, when Moore's Law was still gospel, fewer
people than you'd think foresaw the IoT, FPGA, Arduino,
processors-in-everything revolution currently underway.

In the end, I take a sort of Buddhist view toward the fuckedupedness of
software: It sucks, but it's all we've got and must be accepted on its own
terms. Find a strategy that works for you and that minimizes global pain. Do
no harm. Attempting to force your way into Nirvana and out of Samsara only
wedges you deeper inside it.

~~~
collyw
"Big Design Up Front never works"

Seems to work for NASA and others doing way more complex projects than I am.

~~~
pjc50
And yet people praise SpaceX for doing much better and cheaper with their
iterative refinement.

BDUF can work if you have a single organisation with a single stable set of
requirements that's reasonably compact. The Kennedy "man on the moon" speech
was such an example.

Where it falls down is trying to meet the needs of the 7 billion distinct
human individuals, which are inevitably vague and shifting and change in
response to the publication of software.

~~~
braveo
SpaceX also doesn't have the track record NASA does, and cannot be compared.

In 50 years when they've been working alongside each other, maybe, but not
right now.

~~~
greglindahl
NASA has also used iterative refinement in the past.

------
nextos
I share this sentiment. As a user, I have found some relief by switching to a
very simple setup with just a WM (XMonad), a browser (Firefox+vimperator),
Emacs and a terminal (urxvt, which I only use occasionally).

Previously, even though I used Emacs as an editor, the idea of moving
functions that correspond to applications (like mail reading) into an editor
looked clunky and even dirty to me.

But I grew disillusioned with the Unix ethos. While it works well for composing
very simple programs, it fails to deliver on most other things. For example,
Mutt is a great mail user agent. But it is very difficult to customize beyond
its author's original intentions using its own little configuration DSL. And
scripting it from your shell does not lead anywhere. This happens with many
other nice Unix utilities.

In contrast, Emacs, understood as a text-mode Lisp machine, is at odds with
this approach. The code you write is as first class as the code you run from
whatever module you import. Everything is alive, integrated, and can be
changed.

As a programmer, I am moving towards more and more abstract application
domains for the same reason. Mathematica, PyMC or Scheme allow me to ignore
all the horrid details the OP describes and work at a higher level.

~~~
shakna
I love the ethos of Scheme.

The language has ways that the program is expected to do things, such as named
let and syntax-rules, but is flexible enough to get out of your way if you
have a different idea, such as defmacro.

I wish that the rest of my userland would be as flexible.

~~~
nextos
You can always use Emacs and treat all low level things as plumbing.
Eventually Emacs will hopefully switch to Guile Scheme. In any case Emacs Lisp
is a pretty good language and would not feel clumsy to a Scheme hacker, I
think.

You can have a pretty modern userland with Org, Gnus or mu4e, pdf-tools, etc.

I have recently switched to GuixSD after 2 years toying with NixOS. All dirty
Systemd plumbing fades away and gets replaced with a minimal service manager
written in Scheme, GNU Shepherd. Plus all packages are defined in Scheme. It's
very refreshing.

I hope one day we get to see a cleaner kernel, ABIs and compilers. Things have
gotten a bit out of hand.

~~~
shakna
I'd rather not have Emacs as my environment. I really wish that something like
Gnome were as customisable and extensible as Emacs, and in fact that
everything in /bin would share a cohesive, modular and programmable interface,
preferably Scheme, but I'll settle for something sensible.

Emacs, and to a lesser extent, scsh, are a nice first step to making a nice
environment. But they still feel ten years old, or more. I use TUIs, and
appreciate them. But VLC and my window manager should be beholden to the same
level of control and interactivity.

And yes, I am never satisfied tweaking my system to my preferences, because I
always hit roadblocks before I get there.

(I am totally fine with the kernel, and compilers and the like using "dirty"
code -> that's one of the easiest ways to reach a damn fast system.)

------
amasad
I don't think this is a software problem. It's a general legacy systems
problem. Think of self-driving cars. It seems plausible that one day all cars
on the street will be self-driving, at which point the existing system --
which was designed for humans -- is a legacy system that is not efficient for
the current users. If you were to design an autonomous transportation system
from the ground-up I doubt, for example, you'll put in stop signs that cars
have to read via machine vision. You'd probably have that information
somewhere on the internet (or better yet, design something from the ground up
that doesn't require stopping).

You can see how this translates to computing. The reason why I still have to
deal with a `document` object while building apps in JavaScript is because we
built an application distribution platform on top of a document viewer. There
is no centralized planning here -- we're just riding a wave of distributed
innovation and making the best out of it. (Nor should you expect centralized
planning in technology to emerge unless a superintelligent AI system takes
over).

~~~
pjc50
Javascript is the app distribution platform that _won_ , because the others
were worse.

ActiveX? Too much lockin, no sandbox. Java applets? Too heavyweight. Flash?
Nice tooling for the producer, _terrible_ security record.
ClickOnce/Silverlight? Never really got out of the gate and would have been
MS/Win only.

The next generation of app distribution is "app stores" with arbitrary refusal
policies which take a 30% cut of sales. A success for Apple.

Javascript _mostly_ delivers "write once, run anywhere", albeit through a vast
shifting layer of shims. Its sandbox is pretty reliable. It's not owned by
anyone and they can't stop you shipping JS apps. Thus it survives.

~~~
amasad
I agree with most of what you said but you're conflating languages and
platforms. JavaScript as a language says and does nothing about sandboxing.
It's the browser that does that. It doesn't even specify anything about
concurrency (up until promises maybe) or the event loop. The event loop is a
feature of the host environment. When Netscape put JS on the server they used
it in CGI mode.

So I wouldn't completely remove the browser from the picture. Had Lua or some
other simple general-purpose programming language been the one that ran in the
browser, I'd say it would have become the JavaScript of today.

(I think one other component of this is "worse is better". JavaScript was easy
to copy and implement, although not elegant or the best designed language)

------
Animats
Much of what he's complaining about is a failure to obey a rule that appeared
in the original Macintosh User Interface Guidelines.

 _"You should never have to tell the computer something it already knows."_

For example, users should never have to type in a pathname. The user should,
at worst, be offered a list of workable alternatives from which they can
select. Expecting the user to manually type in a value for
"$LD_LIBRARY_PATH" fails this test.

The original Macintosh software had applications which were one file, with a
resource fork containing various resources. This kept all the parts together.
UNIX/Linux and Windows never got this organized. Hence "installers", and all
the headaches and security problems associated with them. The Windows platform
now has "UWP apps" with a standard structure, but they're mostly tied to
Microsoft's "app store" to give Microsoft control over them.

~~~
rev_null
Doesn't packaging each piece of software as a single file lead to a security
disaster? When openssl has a vulnerability, suddenly the end user needs to
upgrade multiple applications, and that is assuming that all of the developers
have actually built the upgraded software.

~~~
majewsky
That's also the one reason why I'm very reluctant to throw shared libraries
out the window.

~~~
qznc
Ok, security-critical stuff as shared libraries (libjpeg, libz, libssl, etc).

Why do we package the other stuff as shared libraries too? GMP, GtK, Qt,
OpenCV, SDL, BLAS, ...

~~~
hollerith
The original motive for adding shared libraries to Unix was so that X11 would
fit in memory on the machines in use at the time (according to a comment I
read many years ago).

If the use of shared libraries saves on memory, it probably also saves on L3
and L2 cache, so on the aggressively cached CPU architectures of today,
replacing a shared library with a statically-linked version might slow things
down by decreasing cache hit rates.

In particular, if every KDE application is statically linked to Qt, then when
KDE application A's time slice ends, whatever parts of A's copy of Qt are in
cache will be invalidated with the result that if B wants one of those parts
it will have to fetch it from its own copy of Qt in memory, whereas if A and B
shared a copy of Qt, the fetch from memory could be avoided.

~~~
qznc
Has anybody actually measured it in recent decades?

~~~
hollerith
I don't know.

But we know that Intel and AMD design their CPUs to go as fast as possible on
the operating systems people actually use (Windows, macOS, Linux), all of which
use dynamic linking. Plan 9 is the only OS I know of that does not support
dynamic linking (and Plan 9 simply does not have large libraries -- they have
what they call services instead, which are similar to souped-up Unix daemons).

Linux and Windows in turn are designed to run as fast as possible on Intel and
AMD hardware.

After a few iterations of this sort of mutual evolution, it starts to become
very unlikely that a change as big as switching a bunch of big libraries from
being dynamically linked to being statically linked would actually improve
performance because lots of optimizations have been made to squeeze a few
percentage points of performance out of the existing system (which includes
the practice of shipping most large libraries as shared libraries), and
typically those optimizations stop working if there are large changes in the
system.

------
dexwiz
Sounds like someone had a bad day.

Software developers are in a strange position in that they create the world
they live in, so any warts seem magnified and self-inflicted. But any
significantly advanced discipline is going to be complex. Physics, chemistry,
and biology do not have the same luxury of being "rewritable." They operate
in the natural world, so you curse God and not Man.

Rewriting a software system from the ground up is nearly impossible. Also any
sort of system will need to maintain some level of backwards compatibility,
which induces complexity. Thinking about an operating system as something that
was built according to a grand design is wrong. They evolve over time, written
by different people with different styles solving different problems.

A counterpoint to this argument, "It’s harder to read code than to write it."

[1] https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/

~~~
0xCMP
He is the creator of Node.js.

~~~
ronilan
True. But maybe he did have a bad day? Like a promise that never resolved or
something...

------
fb03
In my workspace sometimes people think I'm out to kill their buzz whenever
they bring up the topic of introducing another library into our stack because
it solves a specific section of our problem-space.

The real value and fun, imho, is solving complexity _in your head_ , and then
coding down a solution that is simple and uses the fewest moving parts
possible.

And _yes_ , we programmers are also humans and we sometimes like to cherish
our creations in non-productive ways, so if you code something simple and well
thought out, it will almost always come out _elegant_ and _concise_ , and
you'll be able to pat yourself on the back at the end of the day for building
something solid.

I don't believe solving something is _all about the user experience_ because
there are humans in the back pulling levers and making that solution happen,
so there's that.

~~~
digi_owl
Wonder how much of that is because the MBA beancounters use LOC as their
productivity measure, as if programming were the same as a widget factory.

------
averagewall
I live in a VS .Net bubble and avoid this Linux, C, native etc. stuff like the
plague. No environment variables, no scripts, no dependencies except things
that have a real and obvious purpose, all native interop isolated to its own
small ugly files that I don't touch.

Sometimes I want to use an open source library or something and it's honestly
days of fighting with compiler flags and environment variables and versions of
compilers. There's always undefined symbols or other mystery failures. Last
time I did this, the makefile failed because I was getting a source file from
a badly configured web server that caused the downloaded file to get corrupted
with some downloaders but not others. So there's learning about idiosyncrasies
of the HTTP protocol too. It's a complete clownpants nightmare of wasted time.

~~~
234dd57d2c8dba
Funny, I feel the exact same way about the windows world. The few times I've
ever had to touch a windows machine, someone tells me: "Oh just reboot it to
fix that microsoft problem." "Oh the system update must have been corrupted,
reinstall the OS." "Oh just go download this rando .exe using your browser
that isn't signed or open source, that tool will be able to fix your problem."

I just wonder how that could ever possibly be acceptable.

You don't even have to reboot a Linux machine to upgrade the kernel nowadays
for crying out loud.

~~~
CJefferson
But then, I hear Linux people talk about systemd making their system unusable,
and having to recompile the kernel to make network cards work, and laptops
never waking up from sleep, and reading the arch wiki to find which bizarre
kernel options to pass to make CPU low power modes work, and (yesterday)
having to update the kernel to make chrome work.

I'm not saying these are all real problems, just that if you only hear the
bad, it sounds terrible!

~~~
digi_owl
Outside of systemd, that is hardware related. And hardware is a pain because
most of it is made with barely any testing beyond Windows. And Microsoft and
standards are on a "best effort" basis.

Thus you get odd behavior in hardware trying to match "extended" behavior in
Windows, that is then papered over by the binary only driver shipped with the
product initially. None of that being available to the Linux devs trying to
get things to behave sanely.

The systemd issue is quite different, as for many it seems to be making Linux
more Windows like. And they specifically turned to Linux in the first place
because they could no longer stand the cryptic errors and "heisen-bugs" that
were daily life while using Windows.

Linux before systemd was to them a land of predictable computing. If something
broke the error would point you in the right direction for a fix. And once it
was fixed it stayed fixed.

------
nice_byte
I both agree and disagree with this. We're not paid to create new things,
we're paid for our suffering - to drag ourselves through the nails and glass
shards of previously existing code to accomplish a thing. In that respect, I
don't see learning the details of a programming language or a tool as a waste
of time - it's an investment to make sure I get less glass stuck in my body
next time.

~~~
falcolas
> We're not paid to create new things, we're paid for our suffering

Even I wouldn't phrase it that negatively - instead, we are paid for getting
features created, no matter what it takes. Sometimes it is bliss, sometimes
it's the seventh circle of hell. But in either case, it's our job to just get
it done.

------
nzjrs
It's crazy to think from this came NPM and from that came left-pad. I can't
reason about this, I can't understand the evolution in any other way than
Chaos.

Is this the best we can offer as engineers? We launch our beautifully crafted
boat into the ocean only to watch with despair as it is taken and swamped by
the waves and the enormity of the sea?

~~~
TeMPOraL
> _I can't reason about this, I can't understand the evolution in any other
> way than Chaos._

I think the most fitting description of the process in our industry would be
"throwing shit against the wall and seeing what sticks".

------
madhadron
> In the past year I think I have finally come to understand the ideals of
> Unix: file descriptors and processes orchestrated with C. It's a beautiful
> idea. This is not however what we interact with. The complexity was not
> contained. Instead I deal with DBus and /usr/lib and Boost and ioctls and
> SMF and signals and volatile variables and prototypal inheritance and
> _C99_FEATURES_ and dpkg and autoconf.

The next step is to understand that file descriptors and processes
orchestrated with C are a substrate that would make everyone reimplement most
of what they want from scratch each time. Part of the list describes system
services that we have come to expect from an OS: DBus = interprocess
communication; signals = process control; volatile variables and ioctl =
hardware control; /usr/lib and dpkg = software packaging and distribution.
More could be added: at+cron = scheduling; syslog/journald/systemd =
monitoring; wall = alerting; cgroup = process and process group control; bash
= orchestration from a repl; systemd = orchestration not from a repl; lots of
stuff = security; every developer's own file format = configuration; every
developer's own plain text protocol = data encoding for IPC. The list goes on
from here. Even in the early days of Unix, mature systems like VMS had these
capabilities.

The problem is not that these exist. It's that they were all built piecemeal
instead of being standard system services accessible from a programming
language like processes and file descriptors.

Except for bash. That's the result of an old mistake: the job control language
on OS/360. That's when we got the idea of a command language distinct from a
programming language, and we've been building it back towards a programming
language ever since.

Autoconf, though, is purely penance for the sins of the past.

------
chrisallick
"The only thing that matters in software is the experience of the user." is
going on the wall in the office. And I'm mailing one to the Android
development team.

~~~
mozey
That statement seems simple and correct at first, but the term "user" is not
well defined. Even if there was only one user in the world, me, it still isn't
well defined. The reason being that I can't tell you right now at this moment
about all the possible use-cases I might have for some software in the future.

~~~
_ao789
I think what this statement really means is that we all create software for
the end user, but then get lost in all the theory and principles of software
engineering that often drag out software projects and make them overly
complex. The point at the end of the day is that "it's all about delivery" and
we need to remember that without the customer/end user, we're just wasting
everyone's time moving bits and bytes around because we can.

------
nurettin
So we are against perceived complexity and learning new things now?

To elaborate a bit: At work I use OLAP functions to simplify the business
layer and focus on the data, moving much of the complexity to sql statements.
Others get the data in its list form and complicate the data layer or the
business layer by transforming the inconveniently-structured data there. Yet I
am the one who is told to tone it down and think simple.

Much like the author of this article, no matter how senior they are, they seem
to divine a vague notion of "simple" when they face perceived complexity. And
they always seem to associate this complexity with a real life example of
something which could have been simplified (like how angry Linus Torvalds got
when he tried to set up a printer on Linux), which is unfortunately a red
herring.

So is NaN equivalent to null? Does that question even make sense? Perhaps the
real question is: how do I represent an optional entity without incurring
additional runtime costs? C++, one of the most lamented languages for its
terrible template complexity, has no null problems. Most values are copyable,
movable or re-constructible, and most code throws when faced with fatal
situations where e.g. the OS runs out of allocatable memory, to the point
where returning null simply vanishes and you won't see it in production code
anymore. Did that solve your complex null problem? Absolutely yes. Did that
come with a bunch of baggage you have to learn and implement? Yes, it kinda
did.

So the complexity is there for a reason. Go is shameless when it comes to nil.
Yet Go is mostly called out for its lack of generics. Ah, how ironic.

Anyway, if it is there in programming, it is probably for a reason and
alternatives introduce their own baggage. Usage of POKE or sprite DATA
segments did not suddenly dawn on me when I first started to learn me some
programming. I did not say it was unnecessarily complex, or question why it
didn't work like legos. I just opened the book and reduced the complexity by
_learning_.

~~~
TeMPOraL
> _So the complexity is there for a reason._

Reminds me of the occasional questions about equality, like "why Java doesn't
let you override ==?", or "why Lisp has eq, eql, equal, equalp, and you still
end up using your own equality predicates anyway?". The root of the problem
being, the concept of equality itself is complex at the philosophical level.
It's easy to lament something is complex, it's harder to ask yourself why it
is so, and accept that there is complexity in reality itself.

------
Ericson2314
I was with it until the last paragraph. One reason for the mess is nobody
thinks of downstream developers as users.

------
bpyne
Twenty-six years of developing software for different organizations, yet I
still struggle with the desire for simplicity in the face of conflicting
needs.

The tendency is to think of a "user" as a lone person in front of a screen
using a UI of some kind. This idea of user is incomplete. In addition to
people using a UI, for say a university financial aid system, a management
chain exists with the need for applications to be modified as quickly as
possible with new requirements and with no interruption of service. They need
applications to communicate with one another. To meet those needs, developers
utilize libraries, frameworks, etc. for modularity and reusability. To ensure
that nothing gets broken while modifications are made, regression testing must
occur. Adding test suites increases the likelihood of uninterrupted service.
To help with module interdependencies, developers use package managers. And so
the stack of software grows.

I'd call it accidental complexity, but it's not an accident. We are dealing
with complex software stacks, often with interdependent modules, because we
are trying to meet many different needs simultaneously. I don't see an easy
way out of the situation.

Over the years, I've become better at determining when to abstract problem
solutions in order to provide for future needs and when to do the minimum
possible to get the problem solved. I don't think there is a rule or formula
for making that determination. You just have to make mistakes and learn from
them.

~~~
digi_owl
There is also all too often a tendency to treat all "users" as drooling
idiots.

------
snarfy
You know what software I like? Video games. It's a model we should all be
striving for. There isn't another category where the experience of the user
matters more than anything else.

------
z3t4
I think dev-ops setups are often overly complex ... Although most people want
overly simple programs/computers, if they would just invest some time in them,
they would become more productive.

------
BJanecke
This is a bit of a rant so either ignore it or consume the frustration and
empathise.

Good enough is a turd in disguise; your desire to deliver just enough working
software to justify your existence and salary is egotistic and myopic.

Build _sustainable_ software that will work for you, your predecessors and
successors.

The constraints of the domain are fine and dandy; that's what matters, that's
the problem. Deadlines, delivery and all of that are superficial. Keeping the
business afloat for now is a band-aid on cancer if you can't write software in
a way that you in 10 months, 1 year, 3 years will be able to maintain.

Here's the reality: you screwed up, you buckled and allowed the pressures of
institutionalised shittyness to constrain your capacity to deliver real value.

Please, everyone, let's stop acting like software is some ephemeral joke that
we can half-ass, and start being serious about the implications of our actions.

------
josteink
> The only thing that matters in software is the experience of the user.

While ultimately true, it's a bit too simplistic, because it presents user-
experience as a constant. Done deal. Delivered or not. In reality the user
experience is something which will change over time, as the software is
updated and evolves to accommodate the additional needs of the users.

If you can't keep your code clean over time, your code will start to suck,
needless complexity might sneak in, and the user experience will start to
suffer as well.

A good user-experience, just like the code, needs to be maintained. And thus
comes the need to organize the code. And for a developer to bother maintaining
the code, and for him to be efficient at it, he should work with tools and
within an environment which fits _his_ needs (and accommodates his
requirements for a good user-experience). And thus comes fiddling with editor-
settings. Etc etc.

Basically this rant highlights a real problem: Software complexity is out of
control. But the conclusion does not support the arguments given. Rather, his
conclusion drives all the things he rants about being wrong in the first
place.

I think, in the end, to get clean code, to get rid of needless complexity,
there's one simple thing which trumps everything else: You need someone
empowered to make the required changes. And you need that person to _care_.

Without that, it's all just us waiting it out, until it starts heading into
the abyss. We can already see that with lots of the bigger open-source
projects which once fuelled the (small) Linux desktop revolution a decade ago.

So how do we cope?

> There will come a point where the accumulated complexity of our existing
> systems is greater than the complexity of creating a new one. When that
> happens all of this shit will be trashed. We can flush boost and glib and
> autoconf down the toilet and never think of them again.

Maybe we will. But I don't believe for one second that we're going to break
the cycle this time around, because finding and keeping someone caring enough
for decades to come is a pretty tall order.

~~~
digi_owl
And this is perhaps why the unix philosophy has such pull, as it suggests an
environment where one can make changes and adaptations piecemeal and at one's
own pace.

But that suggests abandoning the notion of the DE, and going back to WMs and
agreeing on, never mind adhering to, the ways software exchanges data.

Something that is the polar opposite of the direction that "desktop Linux" is
moving these days.

------
musha68k
Best blog article I've read on HN in a very long time.

------
digi_owl
That dbus was turned from the Desktop BUS into the de-facto system bus is to
me an eternal puzzle.

------
jasonkostempski
Developers are users too.

------
lolive
In my opinion, he hates the fact that documentation of stacking things up is
very poor. Which is true.

------
whyileft
Most technology is useless. It's not a product of necessity, but a product of
our chosen economic system.

------
rodrigosetti
Yes, you hate technology because it is hard to build.

------
LeicaLatte
The author has not articulated anything beyond "you don't understand how
fucked the whole thing is." Terrible writing.

~~~
elzi
That's the nature of a rant, no? It's not a persuasive argument, or an essay.
It's the frustration felt in a moment lived.

~~~
LeicaLatte
Yes, the essay is a terrible format for capturing life moments. Poems are a
forgotten tool from our past that I think work way better. The sentence-
paragraph structure is a slave to the period. It leaves no space for
attention. Very information heavy. Poems, on the other hand, are like long
tracking shots. They are naturally infinite. You can create a moment and stay
with it. Note taking, list making, messaging are effective for similar
reasons.

~~~
AdieuToLogic

      Poems are a forgotten tool from our past that
      I think work way better.
    

Good point. So here's a haiku which might be applicable to the post in
question.

    
    
      The Master laments,
      When those who produce oysters,
      Declare "find the pearls."

------
theamk
I see the reason for the complaints, but I do not agree with the basic
premise. It should not be "I hate almost all software", it should be "I hate
my employer and the job I am stuck in".

I mean, this person is writing for Solaris/Illumos (judging by SMF references
and autoconf complaints). Yeah, running modern stuff on Solaris sux, unless you
have a sysadminny mindset. This is clearly not for him. Fine -- different
people like to optimize different things: user experience, internal code
beauty, maintainability, code robustness, number of platforms supported.

No one should be forced to work on stuff they don't like. There are lots of
jobs, and most of them do not require SMF, DBus, /usr organization, or
volatile variables.

~~~
cocktailpeanuts
> This is clearly not for him.

Haha I had a good laugh. He created node.js.

~~~
234dd57d2c8dba
So what? It probably started off as a fun hobby project like "I bet I can
write a fast server in javascript." That was probably all he was interested in
at first. Now he deals with problems that are not fun and not anywhere near
that original hobby. These problems are clearly not what he considers fun at
all, therefore I don't see any reason why you had to be snarky with the OP.

~~~
cocktailpeanuts
Node was written in C++.

Not even sure if you read the original article, because that's far from what I
took away from it. I guess we all choose to see what we want to see.

There's a difference between someone complaining when they are bad at
something, and someone complaining when they're good at something.

------
ythn
Maybe you should stop thinking of code as solving problems and more like
painting a picture. It doesn't matter if there are mistakes or if you've done
everything perfectly, as long as you've created something cool.

It's like telling a room full of artists that you hate almost all art because
the tools available etc etc overcomplicate painting a simple picture whose
purpose is to fulfill a client contract, etc.

~~~
AdieuToLogic
Here are a few software development maxims which I believe you, or others
reading this comment, may benefit from:

    
    
      Software is the manifestation of a solution
      to a problem.  Define the problem which needs
      to be solved.
    
      Developers can test their solution or Customers
      will.  Either way, the system will be tested.
    
      Only progress a system from a known working
      state to the next known working state. If the
      current state of the system is unreliable
      (IOW: "buggy"), peel back the existing logic
      until it is in a provably working state.
      Only then reintroduce removed logic (in
      adherence with the above) to reach a working
      state where additional logic can be introduced.
    

And, yes, it _does_ matter "if there are mistakes" for the vast majority of
software systems. Perhaps not those to which you've been exposed, but for many
this is the case.

HTH

~~~
dredmorbius
Known working state ... That's a key argument against systemd. It's not
inherently deterministic.

