
Software Runs the World: How Scared Should We Be That So Much of It Is So Bad? - cwan
http://www.theatlantic.com/business/archive/2012/08/software-runs-the-world-how-scared-should-we-be-that-so-much-is-bad/260846/#
======
mgkimsal
How many of the business processes that software manages were _better_ before
the software took over? Most companies I've seen the inside of were a hodge-
podge of bad, missing, or sloppy paperwork and poorly documented (or even
poorly understood) guidelines and policies. Software amplifies everything
about a business, both the good and the bad. The downside is that, because so
much is networked today, something "going wrong" can have much larger and
faster consequences than a wrong decision carried out on paper 30 years ago.

~~~
SoftwareMaven
While this is definitely true, the amount of damage those paper-based systems
could do was limited. As the Knight Capital case showed, it is possible to
lose millions of dollars a minute with poor software. That would not have
been possible with human trades.

~~~
patio11
_That would not have been possible with human trades_

Tell this to Mizuho Securities, where a human sold 620,000 shares at 1 yen
instead of 1 share at 620,000 yen. ~$300 million loss in a single botched
trade. Humans at both Mizuho and the Tokyo Stock Exchange saw the trade and
_all declined to intervene because they assumed they had insufficient
authority to countermand the trader_.

------
kitsune_
Holy crap, The Atlantic has gone downhill in the past few years. It feels
like they are trying to become the next Huffington Post. A couple of years
ago it felt like a serious competitor to magazines like The New Yorker or The
Economist. Then they rebranded themselves from The Atlantic Monthly to The
Atlantic and started their "social media revolution".

This is an incredibly shoddy article; it even closes with an absolutely
brain-dead "what if"... the way average high school students close their
essays.

~~~
tjic
You've hit the nail right on the head.

I'm a huge believer in the free market, but the economics of journalism really
do seem to be pushing a lot of serious periodicals into a crappy direction.
Salon was never excellent, but it was good at times. Now it's utter crap.
Slate was pretty good, now it's sliding fast. The Atlantic? Yep, just as you
say.

Most of these places are ditching the "serious researched journalism" and even
the "serious political opinion pieces" and going with fluffy bloggy chit-
chatty crap.

"Round tables" are getting more an more popular, especially round tables that
are just emotional reactions to lifestyle news. Four women discussing birth
control. Five guys talking about Breaking Bad (Slate: I'm looking at you for
BOTH of these).

And now the Atlantic is headed down the huffpo route.

All markets "clear" at some price and some volume. It may just be an
unfortunate fact that in the Internet age the market for real journalism
clears at $0 for 0 volume.

Again, my philosophical / political inclination is to deny this, to say that
there's never a market failure, that everyone can always buy the magical pony
they want at a price that sounds fair...but journalism seems to be giving me
really good reasons to not say that.

~~~
pavel_lishin
> I'm a huge believer in the free market, but the economics of journalism
> really do seem to be pushing a lot of serious periodicals into a crappy
> direction.

To throw out a glib generalization, it seems that the free market is best at
producing things people want, not necessarily things people need.

------
tomp
> And perhaps the most mission-critical of all mission-critical applications
> are the ones that underpin the securities markets where a large share of the
> world's wealth is locked up.

Really? I don't even consider stock market software mission critical. I mean,
Jane Street's software is written in _OCaml_...

Mission critical is software that controls lives: airplanes, spaceships,
pacemakers, ...

~~~
ucee054
You have your definitions confused.

_Critical_ software leads to people's deaths when it goes wrong.

_Mission-critical_ software leads to the _mission_'s death when it goes
wrong.

Where the mission is usually "make money".

~~~
69_years_and
Unless of course one's mission is to fly an aircraft, control a power
station, connect a phone call, keep someone's heart beating... Just words...
The commenter above has a point. Mission critical is just as much about the
effective control of a physical process as it is about the smooth flow of
money (which is just another process).

In some ways the article title is misleading, as it touches on only a
fraction of the software that actually runs the world - there is a shitload
of software actually running the world (incl. making physical stuff go), and
from what I can see most of it does a decent job, in general (of course there
is always room for improvement). Disclosure: I'm a process control
programmer, so I am biased in my view as to what actually 'runs the world'
:). Taking nothing away from the money-processing side of the business.

~~~
ucee054
Intersect("Critical","Mission Critical") != "Mission Critical"

Intersect("Critical","Mission Critical") == "Critical"

------
stcredzero
So, here's the unpleasant, unspoken, often subliminal truth about lots of
enterprise software. It's supposed to be about automation to increase
efficiency. In reality, it's often largely about _control_. Software enforces
a certain workflow and can be used to allow or disallow certain actions. It's
a way to enforce the procedures in the 3 ring binder.

One recurring pattern I've seen in enterprise software projects is separation
between users and the developers. Often, there is this game of "telephone"
where the user's managers talk to a manager above them, then there might be
another layer above before information can start heading back down to the
devs. Often, one is strictly _forbidden_ to talk to users, except in
exceptional situations. This is because the project is largely about
_control_, so restrictions on communication with the users are necessary:
many of the unstated goals have to do with frustrating the users' wishes.

What's even worse is when this power of control is used in political
infighting.

~~~
flogic
The trick is to find a way to knife through a couple of those layers of
management to the managers who actually know what should be done. You really
can't ask most users or managers, because they're just not helpful.

~~~
stcredzero
I found that's not so straightforward for consultants like me, because the
manager who brings you in is embedded in the "layers," directly in the
information path. People skills were not my forte.

~~~
flogic
We got lucky. I got hired into a project that started out in research, which
had (and has) very loose management. It's the only part of the org structure
where shit flows uphill. That gave us quite a bit of leeway in terms of
establishing a cross-cutting team. We've since been transferred, but so far
the culture seems to be holding.

------
da02
People run the World: How scared should we be that so many of them [insert pet
peeve].

Mine is: ...will vote for war and more spending.

------
lazyjones
The article tries to make a point and then at the end evades the conclusion:

> The only real solution is to acknowledge that computer programs are going to
> fail and try to minimize the damage they can cause in advance. [the best way
> to do this is to not use the software in question at all]

No, the only real solution is to provide a meaningful warranty with software
(rather than the typical "no warranty" clauses). Regulation can enforce this,
and the price will have to be paid by the customer. We have put up with these
crappy licenses for decades, and the result is the buggy mess we have now,
with no way for customers to demand a working product. Mistakes will still be
made, but they shouldn't harm the customer more than the vendor.

[http://mil-embedded.com/articles/software-warranties-new-era...](http://mil-embedded.com/articles/software-warranties-new-era/)

~~~
aneth4
How often have you really encountered buggy software that caused you
significant harm as a consumer? I've had annoyances and had to get refunds,
though rarely, and it has hardly ever caused any significant loss.

No way for customers to demand a working product? Which products don't work?
How do consumer choice and refunds for completely broken software not take
care of this?

The software referenced in this article was mostly built in house, perhaps
some with enterprise licensing which you can be sure included negotiations
covering reliability.

Regulation is not the answer. Software is just fine.

~~~
lazyjones
Vulnerabilities in widely used software (e.g. Windows, MSIE) were often fixed
much too late, so the customer was at risk or had to buy third-party software
at additional cost just to compensate for them. Botnets that exist only
because of these vulnerabilities put every business on the web at risk.

Software is just as "fine" as any other product where no accountability
exists.

~~~
learc83
How do you provide a warranty against someone actively trying to destroy
something?

Doors don't have guarantees against someone kicking them in. Cars aren't
covered against someone putting sugar in the gas tank.

What counts as a vulnerability?

How can a regulatory body possibly decide what was a "vulnerability" they
should have known about and what was something unforeseeable? I guarantee you
there isn't always a clear line between the two.

What about when the vulnerability is exploited by a government who has greater
resources than the software vendor?

Does Microsoft have to pay out on a claim when damage is caused because a
certificate is forged by the US government?

~~~
lazyjones
> How do you provide a warranty against someone actively trying to destroy
> something?

You provide a warranty that states exactly what is covered and what is not
(including suitability for a particular purpose, which software licenses
often deny completely).

> Doors don't have guarantees against someone kicking them in

They do. At least here in the EU you get differently rated safety doors at
different prices. You know exactly what kind of minimum resistance they
offer.

> How can a regulatory body possibly decide what was a "vulnerability" they
> should have known about and what was something unforeseeable? I guarantee
> you there isn't always clear line between the two.

Stupid Word macro and buffer-overflow exploits are very obviously grave
mistakes made by the manufacturer, who should be held accountable.

> What about when the vulnerability is exploited by a government who has
> greater resources than the software vendor?

Why should that make a difference? If you cannot handle the responsibility you
have as a vendor, then don't be one.

> Does Microsoft have to pay out on a claim when damage is caused because a
> certificate is forged by the US government?

No, because Microsoft's stuff works as intended in that case; the CA should
be held liable.

~~~
learc83
>You provide a warranty that states exactly what is covered and what not (and
suitability for a particular purpose, which software licenses often deny
completely).

So software vendors state in their warranties that running the software on
an internet-connected device voids the warranty, and they stop offering
patches and support. All support moves to unofficial third parties.

>They do, at least here in the EU you get differently rated safety doors at
different prices. You know exactly what kind of minimum resistance they offer.

And that's an objective, quantifiable measurement. Levels of software
vulnerability are not.

>Why should that make a difference? If you cannot handle the responsibility
you have as a vendor, then don't be one.

That is stupid; no software vendor can harden consumer software meant to run
on an internet-connected desktop computer to the point that a large
government agency can't find a vulnerability.

~~~
lazyjones
> That is stupid, no software vendor can harden consumer software meant to run
> on a desktop internet connected computer to the point that a large
> government agency can't find a vulnerability.

I don't buy this. If the vulnerability is in the OS or some other program,
those parts are to blame. But it's perfectly possible to avoid the mistakes
that are typically exploited for such intrusions (usually they are in the OS
anyway - you know, the OS with no warranty, no accountability, but a
monopolist's price). When they aren't, it was usually some mediocre C
programmer's sloppy coding (there are safer languages for mediocre
programmers). The whole point of such regulation would be to put pressure on
developers so they must choose proper tools, languages, and methodologies to
keep customers from harm, whereas now they simply do what they like with no
consequences.

------
Quequau
Recently I have been using SPARK, and it's a pretty different development
mindset from the embedded C I was used to. It makes me wonder if other
development areas could benefit from adopting some of this mindset and
discipline.

For example, take Java or Python: what would it look like to create a strict
subset of language features that allowed for formally provable code, along
with a static analysis tool that could verify it? I realize that this sort of
thing isn't for many developers or many sorts of projects. However, I would
like to see more folks taking the best parts of high-integrity software
development and bringing them out to the wider world.
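
To make this concrete, here's a rough sketch of what such contracts might
look like (Python with runtime assertions standing in for what SPARK proves
statically; `saturating_add` and its contracts are invented for
illustration):

```python
# Hypothetical sketch: SPARK-style pre/postcondition contracts written in
# Python. SPARK discharges these proofs statically at analysis time; here
# they are merely checked at runtime by assertions.

def saturating_add(x: int, y: int, limit: int) -> int:
    """Add x and y, clamping the result to the range [0, limit]."""
    # Precondition (SPARK: Pre => x >= 0 and y >= 0 and limit >= 0)
    assert x >= 0 and y >= 0 and limit >= 0, "precondition violated"
    result = min(x + y, limit)
    # Postcondition (SPARK: Post => result in 0 .. limit)
    assert 0 <= result <= limit, "postcondition violated"
    return result
```

A static analyzer for such a subset would reject any call site where it
cannot prove the precondition, instead of waiting for the assertion to fire
in production.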

~~~
rwmj
Link to SPARK programming language:

[https://en.wikipedia.org/wiki/SPARK_%28programming_language%...](https://en.wikipedia.org/wiki/SPARK_%28programming_language%29)

------
nopassrecover
Steve Yegge wrote about this recently:
<http://news.ycombinator.com/item?id=4365255>

------
wizard_2
Humans are the resilience in any system. No system is ever perfect, and no
system will ever be. I think we'll be fine.

~~~
quanticle
Not necessarily. Part of Kwak's problem with software is that it is taking
humans _out_ of the system. For example, the stock trading software that runs
a modern exchange replaces the human traders who used to exchange physical
slips of paper. A human trader can look at a particular trade, say, "Hmmm,
that doesn't look right," and ask for confirmation. A computer can't (in the
general case). So, if humans are the resilience in any system, what happens
when you start taking humans out?

~~~
yummyfajitas
_A computer can't (in the general case)._

They do this all the time. There are quite a few pre- and post-trade checks
enforced by the exchanges. The net result is that if things go wildly wrong,
trading will most likely stop.

This is part of the reason why Knight Capital's implosion had such a minimal
effect on anything besides Knight's shareholders.
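
To illustrate the flavor of such a check, here's a minimal sketch (my own
invention, not any exchange's actual rule) of a pre-trade "price collar"
that rejects orders too far from the last traded price:

```python
# Hypothetical pre-trade "fat finger" check: reject any order priced more
# than 10% away from the last traded price. Real exchange checks are far
# more elaborate; the threshold here is invented.

def within_price_collar(order_price: float, last_price: float,
                        max_deviation: float = 0.10) -> bool:
    """Return True if the order price is within the allowed collar."""
    return abs(order_price - last_price) <= max_deviation * last_price

# A Mizuho-style order (1 yen against a ~620,000 yen stock) is refused
# automatically -- no human has to feel authorized to intervene.
assert not within_price_collar(order_price=1.0, last_price=620_000.0)
assert within_price_collar(order_price=615_000.0, last_price=620_000.0)
```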

~~~
quanticle
That's true, and the point that I was trying to make is that we need _more_ of
these built-in checks and assertions, even if they mean that the system isn't
running at its maximum potential efficiency. I think that software development
practices (especially in the financial industry) have gravitated too far
towards speed (in computing and executing actions) and not enough towards
safety (e.g. doing some meta-analysis and determining if the computed actions
make sense given higher level trends).

------
michaelochurch
It's "bad" because of three factors: (1) complexity, (2) inadequate
motivation, (3) lack of understanding.

First, complexity. Bridges aren't doing several million operations per
second. They either bear the weight or they don't. They handle a known
temperature range: if a bridge can function at -50 F and at 120 F, then it
can handle the typical 65 F day. With a bridge, people put a lot of work into
solving a simple problem extremely well and reliably. Software can be
developed this way (the Unix philosophy), but most software isn't. Small
programs are written to solve problems well, but often invisibly;
large-program methodologies exist to get promotions for higher-ups.

There are few things in the world like software, which is pure logic. We
don't have the tools to understand the complexity, and what tools we do have
are often used to write more complex software rather than to understand the
software we have (see: IDEs and their "four-wheel-drive problem" of getting
people stuck in more inaccessible places).

Second, a lot of "software engineers" aren't very good and don't have the
incentives or leeway to get better. That requires lifelong learning and
professionalism in the true sense of the word. We're not a profession. Many
things define a profession, but some salient traits are: (1) an ethical
ruleset that supersedes managerial authority -- not only are you allowed to
refuse your manager on ethical grounds, you have to do so; "I was just
following orders" is not an excuse; (2) an expectation and allowance that the
professional will dedicate half of her working time (~20 hours per week) to
continuing education, networking, and other varieties of "off-meter" work
that will never occur in a typical manager/subordinate context, because they
benefit the professional's long-term development rather than the manager's
parochial goals; (3) a very high degree of autonomy regarding social-bullshit
protocols, but with very strong restrictions on the things that matter (i.e.
no one sets working hours or vacation limits for a research professor, but if
he steals another's work or publishes results he knows to be false, it's
career-ending).

None of these apply to software engineering. (1) If you disagree with your
boss on ethical grounds related to software quality or intellectual honesty,
you don't have an appeal process; you just get fired. (2) Most software
engineers, if they want to keep learning, have to do it on their own time;
hence the stagnation that sets in for all but the most energetic and
ambitious. (3) Nope. Here, the software industry looks more like
middle-middle-class white-collar culture (show up at certain times, take
orders) than upper-middle-class professionalism. The salaries are often
professional-level, but the work conditions are merely white-collar.

In a truly professional environment, manager-as-SPOF is avoided like the
plague, and people have genuine incentives to do good work, not just enough
work to appease the manager. The manager still has some power and influence,
but the role is more like that of a graduate advisor than an overseer. Also,
professionals are encouraged to build lifelong reputations and to seek
visibility. (In a typical software company, trying to do this gets you
smeared as a careerist and a "socialite".) Professional environments motivate
people to do their best work. Merely white-collar environments don't.

The third issue is that we simply don't have a good understanding of software.
This may be derived from the two above: complexity and lack of
professionalism. Or it may result from something else entirely.

We know that, theoretically speaking, "it's impossible to reason about code"
(cf. the Halting Problem). Well, my response invariably is that that's
correct: it's impossible to reason about _arbitrary_ code. But we shouldn't
be writing arbitrary code. Ever. We should write simple code that doesn't
require extremely high-level mathematical or conceptual insight to understand
(insight we model as non-computational "genius", though I have no interest in
getting sidetracked into the debate over whether we are or are not
computers).
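
As a toy illustration of the difference (my own, with invented examples):
termination is guaranteed by construction for code restricted to bounded
iteration, while a single unrestricted `while` loop is enough to encode an
open problem:

```python
# Restricted subset: only bounded `for` loops. Termination is decidable
# here -- trivially, by inspection of the loop bounds.
def bounded_sum(n: int) -> int:
    total = 0
    for i in range(n):  # runs at most n times, guaranteed
        total += i
    return total

# Arbitrary code: whether this halts for every n > 0 is the Collatz
# conjecture. No tool (or human, so far) can prove it.
def collatz_steps(n: int) -> int:
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```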

I think individual people are good at writing software. People don't usually
set out to write "arbitrary code". There are many individual software
engineers who (1) use only as much complexity as they can handle, and (2) act
professionally in spite of the lack of incentives or requirements ensuring
it. Which means that _small_ software can be good. What I like about the Unix
philosophy and the small-program philosophy is that they enable islands of
quality to exist, and eventually bridges between those islands, which can
lead to generally decent systems even though it's GRAI (generally recognized
as impossible) to have high quality across an entire codebase. Systems design
is all about accepting the possibility of failure amid complexity. It's about
using small, interchangeable components that each do one thing really well,
not single-program mudballs. Engineers get this. Managers, who conflate
bigness (cf. interview questions like "what's the largest team you've ever
managed?" and metrics like kLoC) with success, generally don't.

The problem is that, once a program has had a certain number of hands pass
over it, it turns to shit. Even if the original code was good, entropy sets
in at some point. One bad apple spoils the barrel, and a "bad apple" doesn't
have to be a bad programmer. It could be a skilled programmer who hasn't had
a promotion in four years and no longer gives a shit. The reason big-program
methodologies generally produce legacy mudballs is that it's just impossible
to prevent the "one bad apple" problem from corrupting the whole program.

