

AF447: how a series of small errors turned an Airbus cockpit into a death trap - damian2000
http://www.vanityfair.com/business/2014/10/air-france-flight-447-crash

======
Animats
William Langewiesche on aviation is always worth reading. For more background
on the technology, read his "Fly by Wire", which explains why the "miracle on
the Hudson" water landing was mostly the work of the computers.

This article discusses the other side of cockpit automation, in the context of
the Air France 447 crash. The key point here is, as Boeing's Delmar Fadden
explained: "We say, 'Well, I'm going to cover the 98 percent of situations I
can predict, and the pilots will have to cover the 2 percent I can't predict.'
This poses a significant problem. I'm going to have them do something only 2
percent of the time. Look at the burden that places on them. First they have
to recognize that it's time to intervene, when 98 percent of the time they're
not intervening. Then they're expected to handle the 2 percent we couldn't
predict."

This is a key problem in computing. Things are "user friendly", and don't
require much understanding, until something goes wrong. Then they require
vast amounts of understanding. That's OK (although tacky) for banal
applications, but not OK for ones that can kill.

Automotive engineers sort of get this. Anti-skid braking systems, traction
control, roll control, and air bags are all intended to work properly in
emergencies with no user attention. During normal driving, they do nothing.
That's as it should be.

"Internet of things" people do not get this at all. They're happy to hook a
phone app to some actuator (probably via some "cloud") and let the end user
worry about problems. This will be dangerous for "things" that have any real
power.

