Or, "That's why I'm skeptical of people who look at some catastrophic failure of a complex system and say, "Wow, the odds of this happening are astronomical. Five different safety systems had to fail simultaneously!" What they don't realize is that one or two of those systems are failing all the time, and it's up to the other three systems to prevent the failure from turning into a disaster." 
1. the airframe held together despite an explosion at the back. The rudder and horizontal stabilizer stayed on.
2. the aircrew figured out how to control the airplane with no hydraulics, i.e. there was still some redundancy in the system.
3. the landing gear was designed so it could be extended and locked with no hydraulic power, and that worked
4. if the airplane or aircrew was any less, nobody would have survived
5. electric power stayed on
And, airframe companies learn from these disasters, which is why airplane travel is incredibly safe. Boeing airliners, for example, do not locate critical components in line with the turbines. Hydraulic lines do not extend past the inboard engine. There are a number of other improvements as well.
Having worked on 757 flight controls for three years, I can assure you that none of the engineers want any part of a defective design. None want to make any decisions that lead to a smoking hole in the ground. An awful lot of effort is spent poring over the designs again and again looking for mistakes.
As with these matters there is no one true correct answer, but rather a very complicated set of tradeoffs and probability estimates. In hindsight it is easy to see designs as defective, but they could all be done in good faith.
Yeah, if they were any less. But they should have been more. More is always better than less.
- what are the total number of failures that can happen?
- what is the probability of those failures occurring
- how many sets of those can combine into a catastrophic failure?
And then from those numbers you can derive the probability of a catastrophic failure occurring.
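Those three quantities can be turned into a number by brute force: enumerate every combination of failures, weight each by its probability, and add up the ones that are catastrophic. A deliberately simplified sketch (independent failures, hypothetical probabilities):

```python
from itertools import product

def p_catastrophe(failure_probs, catastrophic_sets):
    """Probability that some catastrophic combination of failures occurs.

    failure_probs: independent failure probability of each component.
    catastrophic_sets: sets of component indices; if every component in
    any one set has failed, the overall failure is catastrophic.
    """
    total = 0.0
    n = len(failure_probs)
    for outcome in product([False, True], repeat=n):
        # Probability of this exact pattern of failed/working components
        p = 1.0
        for prob, failed in zip(failure_probs, outcome):
            p *= prob if failed else (1 - prob)
        failed_set = {i for i, f in enumerate(outcome) if f}
        if any(s <= failed_set for s in catastrophic_sets):
            total += p
    return total

probs = [0.01] * 5  # five safety layers, each failing 1% of the time

# Catastrophe only if all five fail at once: "astronomical" odds
print(p_catastrophe(probs, [set(range(5))]))      # ~1e-10

# But if two different three-layer combinations are each enough
# to cause disaster, the odds are far worse:
print(p_catastrophe(probs, [{0, 1, 2}, {2, 3, 4}]))  # ~2e-06
```

The second call illustrates the third bullet: what matters is not just how many things can fail, but how many *sets* of failures add up to a disaster.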
i.e. it's possible that some of those failures have already occurred, and you just haven't noticed because the redundant systems are being redundant and preventing the overall system from failing catastrophically. Or you have noticed, but think that the redundant systems are sufficient, not realising how much closer they bring you to a single point of failure. So the probability of your whole system failing is higher than you'd expect, because you already have failures which you think have p < 1 (possibly p << 1) but which actually have p = 1.
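Numerically, a latent failure you still book at p < 1 distorts the estimate by orders of magnitude. A tiny sketch with hypothetical numbers (five redundant layers, catastrophe only if all five fail):

```python
# Five layers, each believed to fail independently with probability 0.01.
believed = 0.01 ** 5          # 1e-10: the "astronomical" estimate

# In reality, two layers have already failed unnoticed, so their
# probability of being failed is 1, not 0.01:
actual = 1.0 * 1.0 * 0.01 ** 3   # 1e-06

print(actual / believed)      # ~10,000x likelier than believed
```

Each unnoticed failure multiplies the true risk by 1/p of that layer, which is exactly the "closer to a single point of failure" effect.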
In the case of the Gimli glider, they had two independent FQIS systems and a floatstick in case of a single failure, and a rule that the airplane was non-serviceable in case of both failing.
On the flight in question, one FQIS was non-serviceable. The second was serviceable but had been switched off, and due to a miscommunication it was thought that the no-fly rule had been overridden and the plane was OK with a floatstick measurement. Further, if the fuel calculation from the floatstick measurement had been correct, they would have refueled the plane and no one would ever have heard about Air Canada Flight 143, because everything would have been fine.
The problem was that they were knowingly operating in a failure mode, without either FQIS and disregarding the no-fly rule, and thinking that the floatstick measurement was sufficient. Therefore it only needed one further failure - miscalculating the amount of fuel from the floatstick - to bring about disaster.
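That one further failure was an arithmetic one. Reconstructing it with the figures usually cited for Flight 143 (treat the exact numbers as approximate): the crew multiplied litres by 1.77, which is the weight of a litre of jet fuel in *pounds*, while treating the result as kilograms.

```python
# Figures as commonly cited for Air Canada Flight 143; approximate.
litres_on_board = 7682     # floatstick measurement, litres
required_kg     = 22300    # fuel needed for the trip, kilograms

correct_factor = 0.803     # kg per litre of jet fuel
wrong_factor   = 1.77      # POUNDS per litre, mistakenly used as kg/L

# What the crew computed:
assumed_kg_on_board = litres_on_board * wrong_factor          # ~13,597 "kg"
litres_added = (required_kg - assumed_kg_on_board) / wrong_factor  # ~4,917 L

# What was actually in the tanks after fueling:
actual_kg = (litres_on_board + litres_added) * correct_factor
print(round(actual_kg))    # ~10,100 kg: less than half of the 22,300 needed
```

The wrong factor cancels partially (it was used both to weigh the fuel on board and to convert the uplift), which is why the shortfall came out near a factor of 1.77/0.803, i.e. roughly half the required load.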
I sort of take the opposite approach with comments that the odds of such a failure are astronomical. Given how safe airliners are these days, anything that causes a bad emergency with one must be an extremely unlikely event. If it weren't, more airliners would crash than actually do. I've seen this going around with MH370, for example. People will dismiss an idea for what caused the disappearance with a comment that such an event is extremely unlikely. Well sure, pretty much by definition, whatever caused it has to have odds of something like a billion to one.
I'd also like to bring up a minor quibble, in that Air Canada 143 did not end in disaster. It certainly came close, and should be regarded as a serious incident with lessons to be learned, but ultimately everybody survived and the airplane was returned to service, precisely because there weren't quite enough failures to cause a disaster. Various things went wrong, but the pilots managed to stop the chain of events by successfully responding to the in-flight emergency. The ability of the airplane to continue flying and somewhat functioning after fuel exhaustion is a type of redundancy, and it ultimately saved it.
If it had had a few hundred pounds less of fuel, it would have ditched in the middle of the ocean and likely many of the passengers would have died.
I hope that's something that IS now covered. Why would you never train for that scenario?
(but yeah you're right)
The driving portion is not much more difficult.
While it has been some time since my driver's test, I remember somewhat complicated questions about right-of-way, dealing with vehicle problems, handling adverse driving conditions, and responding to potential accident situations. Of course, part of the problem I have with our driver education system is how much it varies among the states. Most of what is shared deals with the operations of highway driving, while I think there needs to be more requirements for overall safe driving.
Also, I was a little incredulous at first that 80 mph is the answer for 'poor' conditions. But, I suppose I have driven on 80 mph interstates here under extremely heavy rain while everyone maintained the speed limit. I suppose it's just one of those things where we have a different sense of scale.
I think it was phrased a bit differently, but that was the essential question. The multiple choice answers were also extremely leading with one being even more obviously correct than you would expect.
In California you're also able to take the test three times in a row; if you fail all three, you can take it another three times immediately if you pay $20. Since there aren't many questions, it's basically impossible to fail.
I wish we were required to have many, many more training hours before being licensed, especially in the colder climates with icy conditions on the regular.
Was cruising along the freeway, when the bonnet (hood) flipped up. Thankfully, I could see through the gap, and managed to pull the car over safely.
Not my brightest moment.
But I can think of a dozen other emergency situations they didn't cover.
Although I agree, driver training is abysmal, from what I see of people on the roads I swear they teach new drivers just enough to pass the test.
The general spirit is, of course, still completely true.
[The safety board] noted that Air Canada "neglected to assign clearly and specifically the responsibility for calculating the fuel load in an abnormal situation," finding that the airline had failed to reallocate the task of checking fuel load that had been the responsibility of the flight engineer on older (three-crew) aircraft. The safety board also said that Air Canada needed to keep more spare parts, including replacements for the defective fuel quantity indicator, in its maintenance inventory, as well as provide better, adequate training on the metric system to its pilots and
In fact, they did that twice, before initial takeoff and after their first landing in Ottawa.
They did a great job of handling the subsequent emergency, but I see no way around blaming them for it happening in the first place.
Instead look at what happened and say "if things had been different this couldn't have happened" and then make those things a reality. Maybe the answer is putting a sticker with metric/imperial conversion on the tank or something, so pilots aren't confused when checking with dipstick.
That's why it's a good idea in complex systems to pre-assign to each requirement a responsible party. Otherwise, you could just use the "systemic effect" argument to either blame every party or no party in the system, neither of which is very useful.
If you are running an organization that deals with a lot of complexity (airlines, web systems etc.), it's generally not a good idea to blame anyone but the system. If you look at everything systemically, then the organization continually learns. You have to trust the people in the system to be coming to work in good faith, if you can't do that you have other issues.
If you think they were truly acting in bad faith, you have other issues.
There's a wide range of reasonable "it's the pilot's fault" judgments which don't involve firing them or just shrugging your shoulders and saying "do better".
What really matters is what the fault implies. Finding the pilot to be at fault does not imply that people just give up on figuring out a fix.
Terrifying to say the least.
It's convenient to use feet because planes are stacked in 500' increments and not 152.4m increments. VFR (visual flight rules) flights are usually on the 500's (eg. 3500', 4500', etc. depending on heading) whereas IFR (instrument flight rules) flights are on the 000's (4000', 5000', etc.).
Knots are convenient because 1 knot is equal to 1 minute of 1 degree of arc on a great circle. If you're flying anywhere far away this ends up being important as a great circle is the shortest route between any two places.
Oddly enough, the metric system is useful with temperature because the standard lapse rate is 2 degrees per 1000' of altitude. So if you had to climb from 6000' to 8000' on an IFR flight plan, you would usually drop 4 degrees centigrade, which might be significant if it was raining out and it dropped below freezing. Having water on your wings and climbing up to an altitude where it's freezing is going to make you have a really bad day.
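The lapse-rate arithmetic described above is simple enough to sketch (the 2 °C per 1000' figure is the rule-of-thumb standard lapse rate; function name is my own):

```python
def temp_after_climb(oat_c, from_ft, to_ft, lapse_per_1000ft=2.0):
    """Estimate outside air temperature (deg C) after an altitude change,
    using the rule-of-thumb standard lapse rate of ~2 C per 1000 ft."""
    return oat_c - (to_ft - from_ft) / 1000 * lapse_per_1000ft

# Climbing from 6000 ft to 8000 ft with 3 C at the lower altitude:
print(temp_after_climb(3, 6000, 8000))  # -1.0, now below freezing
```

That sign flip across 0 °C is the whole point: a two-thousand-foot climb in rain can put you into icing conditions.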
Wouldn't it be faster to adjust the heading to point directly to the destination and then fly in a straight line?
"In this case, the weight of a litre (known as "specific gravity") was 0.803 kg. (...) Between the ground crew and pilots, they arrived at an incorrect conversion factor of 1.77, the weight of a litre of fuel in pounds."
See, here's where metric system sanity checks become relevant, checks that are simply not possible with imperial units because nothing is compatible with anything else.
In metric, I just know, without any conversion factors to memorize, that 1 kg of water equals 1 L at room temperature. Everyone with a high school education should also know that petroleum products are lighter than water. Now I can do a sanity check: since the weight of one litre of fuel should be less than 1 kg, there's no way 1.77 is right. If two well-educated people do this math, at least one of them should catch the mistake.
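That sanity check is one comparison: any density in kg/L claimed for a fuel must be below water's 1 kg/L. A minimal sketch (function name is my own):

```python
def plausible_fuel_density(kg_per_litre):
    """Fuel floats on water, so its density must be below ~1 kg/L."""
    return 0.0 < kg_per_litre < 1.0

print(plausible_fuel_density(0.803))  # True:  the correct factor
print(plausible_fuel_density(1.77))   # False: the factor actually used,
                                      # which is pounds per litre, not kg
```

The imperial equivalent has no such free check: there is no memorable relationship between a gallon of water and a pound that flags 1.77 as obviously wrong.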
Otherwise, it's an interesting story and it makes a good lesson, but I've been hearing it, now, for almost twenty years...