
Automated to Death (2009) - mcspecter
http://spectrum.ieee.org/computing/software/automated-to-death
======
munificent
There's a weird vibe in this article I don't like. It (correctly) notes that
as the number of anomalies that the automation passes on to the human
operators goes down, the rate that humans successfully handle them also goes
down.

But it doesn't seem to do a good job of clarifying that _the total number of
incorrectly handled anomalies is still decreasing._ Let's say your automation
goes from handling 90% to 99% of the anomalies and that when it does handle
one, it does so correctly. We'll say that the increased rarity of human
interaction, and the inattention and weakened training it causes, makes the
human pilots go from being able to handle 90% of them correctly to only a
terrifying 40%.

Let's run the simulation. With the old automation:

        1000 anomalies occur
         100 (10%) make it past the automation
          10 (10%) make it past the human operators

So 10 catastrophes. Now with the new moderately better automation and much
worse human performance:

        1000 anomalies occur
          10 (1%) make it past the automation
           6 (60%) make it past the human operators

6 catastrophes. Even though the human performance was _much_ worse, because
the humans are the last stage in the pipeline and see far fewer anomalies, it
has a smaller effect on the total.

Now, I just pulled these numbers out of my ass, but I think it's important to
focus on the total number of automation+human failures and not single out one
stage or the other. From the passenger's perspective, they don't care who
saved their ass, just that it got saved. If we can make one stage more failure
proof at the expense of the other, it can still be a net win.
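Those back-of-the-envelope numbers are easy to check with a few lines of
Python (the rates are the made-up ones from above, not real data):

```python
def catastrophes(anomalies, automation_miss_rate, human_miss_rate):
    """Two-stage pipeline: an anomaly becomes a catastrophe only if it
    slips past BOTH the automation and the human operators."""
    past_automation = anomalies * automation_miss_rate
    return past_automation * human_miss_rate

# Old system: automation misses 10%, humans miss 10% of what reaches them.
print(catastrophes(1000, 0.10, 0.10))  # 10 catastrophes

# New system: automation misses only 1%, humans miss a terrifying 60%.
print(catastrophes(1000, 0.01, 0.60))  # 6 catastrophes
```

The miss rates multiply, which is why shrinking the first stage's miss rate by
10x more than compensates for the humans getting 6x worse.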

~~~
galdosdi
Also, there are things organizations can do to combat the problem of human
operators getting rusty. Practice. A lot of organizations just don't do it
though because it's too tempting to view the automation's cost savings as
"free" and just take them for granted, but it can help a lot.

~~~
thaumasiotes
People get rusty for a reason. Your solution suffers from a couple of
problems:

\- Practice isn't the same thing as actual events. Being good at practice is
more likely to diverge from being good at crisis response as actual crises
become rare, because the criterion of matching what would happen in a crisis
gets much less important. Thus, peacetime militaries often need radical
overhauling before they can really get much accomplished when war breaks out.
(Also consider - we have a lot of people who are really interested in how
medieval combat (whether in a battle line or a duel) worked, what effective
use of weapons looked like, and so on. But for all the discussion, we don't
know, and we can't know unless we actually stage regular battles-to-the-death
with period tooling. One form of combat practice, however, has been preserved
as European fencing. How closely does it correspond? Again, we don't know, but
consensus is "not well".)

\- The automation's cost savings _are_ free -- in fact, in this example, they
have a large negative cost, cutting catastrophes by 40%. Keeping everyone in
shape to handle crises they're likely to never actually see is, arguably, an
enormous waste of money. (In addition to actually being impossible much of the
time, as in my first bullet point.)

~~~
galdosdi
\- Yeah, practice is definitely not the same, but depending on the domain,
it's usually at least a lot better than nothing, even if the real thing is a
lot better than practice. (To speak to your example: when war does break out,
a peacetime military doesn't have to reteach its soldiers EVERYTHING. Maybe
they've never been in real combat, but at least they can consistently shoot at
a target. That's better than not being able to do that either.)

\- I mean, it depends on the specific case, right? Obviously that's true in
this example. You can also come up with opposing examples. I also misspoke a
bit -- even if the automation cost savings are free and you're strictly better
off with it than without it, adding some practice for human operators may get
you even more savings, and it's often overlooked.

My point is just that practice/drilling can be a useful tool in the toolbox.
It depends on the situation, but it shouldn't be ignored.

~~~
afarrell
First responders practiced for handling scenarios like the Boston bombings for
years before 2013. It paid off.

~~~
thaumasiotes
Initial responses to ebola in the US tended to be pretty badly bungled because
essentially nobody was trained for handling very dangerous, highly contagious
disease. Should they have been? Going back how long? Should they be now?

------
Jtsummers

      However, when the second accelerometer failed, a latent software
      anomaly allowed inputs from the first faulty accelerometer to be
      used, resulting in the erroneous feed of acceleration information
      into the flight control systems. The anomaly, which lay hidden for a
      decade, wasn’t found in testing because the ADIRU’s designers had
      never considered that such an event might occur.
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    

An important lesson. Assumptions can kill.

~~~
xanderstrike
Here's my theory. The engineers assumed it would never occur because the
accelerometers rarely fail, and if one did fail you'd replace the unit.

Then it got into the hands of the airlines and they said, "You mean it'll run
with one failed accelerometer? Then we don't need to replace it when one
fails."

~~~
Jtsummers
If they were even aware that it'd run with one failed accelerometer.

Aircraft maintenance outside the US military (the Air Force in particular)
terrifies me. While it's not perfect, the USAF essentially rebuilds most of
its fleet every X years. There's a reason they're still successfully flying
planes from 60 years ago.

The airline industry does not do this sort of depot-level work. Instead, they
discover a crack, put on a piece of sheet aluminum as a "patch", and wash,
rinse, repeat. Ten years later the aircraft is still flying, but at greatly
reduced fuel efficiency because, like an overweight, middle-aged man, it's
been carrying a bit of extra weight around all the damn time.

This is what they do for _structural_ maintenance. I don't even want to
imagine what happens with electronics and other subsystems. They're literally
willing to cost themselves millions of dollars a year in extra fuel
consumption (across their fleet), rather than spend the money to do real
maintenance on it.

~~~
mahyarm
So what does the USAF do when they find a crack? If they do rebuilds it sounds
like it takes a lot more time to do the same fix.

~~~
Jtsummers
They monitor and patch it but eventually the plane gets stripped down to
basically the frame and rebuilt.

Their patches are treated as temporary patches. The airline industry has a bad
habit of making them effectively permanent.

------
pipio21
I don't really understand. If one accelerometer fails and nothing is reported
for years, that is a big failure by the design team, probably one with
criminal responsibility.

Depending on GPS for main navigation is also very bad idea.

In the near future, for today's cost of one fiber-optic gyro and
accelerometer, you will be able to buy ten. Software and redundancy will keep
improving, as they have tremendously in the past, making airplanes the safest
of transports precisely because they do not depend so much on fallible humans:
humans get tired and need to rest, pee, and meet other biological necessities;
have egos that blind their judgment; fall in love with the air hostess; get
bored (some flying could bore you to tears); develop vision or hearing
problems with age; get distracted (and lose situational awareness); or get ill
or intoxicated by food.

It is easy to forget that death was what we had when humans were in charge:
flying was thousands of times more dangerous than it is today. So the title is
sensationalistic yellow-press garbage.

The only reasons humans have not been completely replaced are that people
naturally trust other people more than machines, that landing automatically in
crosswinds requires engineers to take responsibility for it (and nobody has
done so, so far), and that someone needs to be in charge on the plane at all
times (for example, to decide what to do if a person has a stroke).

------
scotty79
If you need the operator to be ready to take over, then let him play a game
with the vehicle he drives. Give him points for how close his attempts at
controlling the vehicle come to what the software actually controlling it
does. That way, whenever an emergency arrives that the automation can handle,
he can train on it without risk, and when you really need to hand control over
to him, he'll be ready, aware, and doing his best.
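A toy sketch of that scoring idea (the function name, the signals, and the
scoring rule are all made up for illustration):

```python
def shadow_score(operator_inputs, autopilot_outputs, tolerance=0.1):
    """Award points for how closely the operator's 'shadow' control
    inputs track what the automation actually commanded."""
    points = 0.0
    for human, machine in zip(operator_inputs, autopilot_outputs):
        error = abs(human - machine)
        if error <= tolerance:
            points += 1.0                    # close enough: full point
        else:
            points += max(0.0, 1.0 - error)  # partial credit, floored at 0
    return points

# Operator shadows three of the autopilot's (normalized) pitch commands:
print(shadow_score([0.52, 0.48, 0.90], [0.50, 0.50, 0.60]))  # ~2.7 of 3
```

The nice property is that the operator is graded against the automation in
real time without ever touching the actual controls.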

------
userbinator
It is notable that this article was published only months after AF447[1],
which also crashed due to the pilots' lack of experience flying without
automation.

[1]
[https://en.wikipedia.org/wiki/Air_France_Flight_447](https://en.wikipedia.org/wiki/Air_France_Flight_447)

------
ChoHag
> “People, after all, are the backup systems, and they aren’t being
> exercised.”

If it's not tested, you don't have a backup.

------
exar0815
Every automated system needs a very well-trained and calm operator for when it
all goes south. That's the difference between, e.g., Chernobyl and
Fukushima/Three Mile Island. While the latter two were very bad accidents, the
last fail-safe, the humans, didn't fuck it up completely and spectacularly in
those cases.

~~~
throwanem
Chernobyl isn't a very good example of automation failure; responsibility for
that disaster lies entirely with human beings from start to finish.
Wikipedia's summary is solid, and rather than excerpt it here I'll just point
you at
[https://en.wikipedia.org/wiki/Chernobyl_disaster#Accident](https://en.wikipedia.org/wiki/Chernobyl_disaster#Accident)
.

~~~
exar0815
Yeah, my comment doesn't make that much sense, reading it again. It was just
an example of how badly trained personnel can make any accident worse, which
justifies having a very well-trained operator for crucial automated systems.

------
I_HALF_CATS
Also check out the podcast by "99% Invisible"

[http://99percentinvisible.org/episode/children-of-the-
magent...](http://99percentinvisible.org/episode/children-of-the-magenta-
automation-paradox-pt-1/)

------
outworlder
> As the plane passed 39 000 feet, the stall and overspeed warning indicators
> came on simultaneously—something that’s supposed to be impossible, and a
> situation the crew is not trained to handle.

But it is not impossible at all! That's called the "coffin corner". All flight
crews are aware of it.

------
mschuster91
For this reason, U-Bahn (subway) drivers in Munich have to randomly drive
under signalling (i.e., in total manual control), while in "normal" operation
the computer handles everything from acceleration through cruising to stopping
at the station.

The S-Bahn (in Munich) is fully manual, too, but augmented.

~~~
scotty79
> drive under signalling

How would that help if signalling failed?

------
edem
This is almost exactly the same as the Law of Leaky Abstractions, don't you
think?

------
lunchTime42
Could the decay of abilities be avoided if the supervisors were kept in
constant uncertainty about whether the system is working?

~~~
Thriptic
Possibly, but it would probably lead to people disregarding the data they are
being provided with, effectively removing a lot of the benefits of the
automation.

