In my research into the topic the saddest bit of information I've seen is the image of the black box data for the flight (the first crash): https://i.imgur.com/WJuhjlO.png
You can see from the graph that in the final minutes and seconds, the pilot put insane amounts of force on the control column (aka the yoke) to try to pull the plane out of the dive - to save the 189 people on board. But MCAS overpowered him, and there was no documentation that would have let the pilot try anything else.
Also interesting to see is the number of times the pilots brought the nose up, only for MCAS to kick in and force the nose back down. 26 times.
All data from this Seattle Times article, which was written before the second crash occurred:  https://www.seattletimes.com/business/boeing-aerospace/black...
I am curious what scenarios drove the designers of the plane to assume that the human in the seat really has no idea what they are doing. Was ignoring the wishes of the pilot an attempt to prevent a crazed, irresponsible, unlicensed idiot from doing something? These are trained humans; why does the computer totally ignore their efforts?
Pulling on the yoke 26 times, as hard as you possibly can, is a bit different.
I think the critical issue in both cases is that the wrong system was in control against the pilots' wishes.
So, the pilots can pick their poison based on their training and best judgement (and maybe not put their children in control?).
One plane actually crashed because the prevention system disabled itself and the pilots believed it was still there to protect them from bad actions on their part:
> caused the autopilot to disconnect, after which the crew reacted incorrectly and ultimately caused the aircraft to enter an aerodynamic stall, from which it did not recover
A pretty extreme UX issue.
AF447 seems like a pretty important case study for figuring out where to put the balance of trust. Along with the very final stage of Aeroflot 593 (after the child had left the seat), also mentioned.
There are a few more: Colgan Air 3407, AirAsia 8501, Korean 501, etc, where flight crew ignored or overrode systems. Which is not to deny that in the majority of cases the flight crew are the best judge of what is needed, just that one surely has to look critically and honestly at counterexamples.
There is of course also Germanwings 9525, though as the GP implies, systems protecting against malicious piloting will probably be counter-productive.
If you upload or want to share what is on the platform, they would prefer that you link to it that way, as is their right.
It is a business after all. And if people want to hotlink directly to an image, they should pay up and host it themselves.
I just find it amusing that imgur was born out of being a better image host than the competition, as linking to images on reddit at the time sucked, and now (IMO) it has turned to “the dark side” and is doing the things that made the competition so crappy to start with.
I’m not saying they shouldn’t show ads, but I’ve seen and reported countless bad ads on the site (forced redirects away from the site, unannounced “would you like to open the App Store” dialogs, APKs just auto-downloading). It seemed at the time that they or their ad network would accept any old crap of an advert (this was quite a while ago; hopefully they are more choosy about the ads they run now).
Also their UI is a pain in the arse (IMO): on mobile Safari it’s a pain in the arse to pinch and zoom (not that it will do any good, as the image in the mobile UI has been resized), and getting to the full-sized image is another pain in the arse. For images like the one above it makes them impossible to read. They said look at the force applied, but the wording on the scale is just a blur, making the graph hard to understand.
It’s their site and they can do what they like with it. I just find it amusing that (IMO) they have become what they hated. At least they managed to include reddit style communities/comments before reddit started hosting images themselves.
I think that's the fate of all image hosts: current offerings have a lot of (crappy) ads to pay for hosting => a competitor arrives, starts with fewer ads to incite onboarding => the competitor gains market share => the competitor now has to turn a profit => it adds more ads (or asks for money for hosting)
Back in the day you used to be able to pay to upgrade your imgur account to allow larger file uploads (which was great for gifs), disable the ads you saw site-wide, and disable the ads people saw coming to your submissions. I guess the revenue they were generating off ads back in 2015 was outstripping the revenue generated from Pro accounts (especially with the ability to prevent ads on your submissions: if you were an imgur Pro user you were either trying to support the platform or were a power user getting a fair few visits to your submissions), because that’s when they dropped them, and they were reporting as profitable back then. Which is odd (to me) because it’s been since then that the service (IMO) has gotten worse. To me they had a winning formula and the user base to make it work but fucked it up (or not; they are still around today, so maybe they did the right thing. I’m just an old grumpy user who doesn’t like change. Now get off my lawn, I have clouds to shout at...)
So why did it crash?
Especially if, as the article says, a failure of the AOA sensor on the system would be Hazardous (it looks like it was Catastrophic when paired with MCAS, in retrospect), that would have made the functional Design Assurance Level for this system DAL B, which adds enough rigour not only to the software development process but to so much before you even get there, in terms of Safety Assessments and ESPECIALLY change impact analyses when the function changes.
For sure there may have been pressure from management to keep MCAS out of the manual, but it’s not really up to the regulatory agency to be experts on the aircraft design; if things are being hidden by the company then I’d consider this bordering on professional misconduct on the part of the engineers overseeing this work.
I say this as a Professional Engineer working as an aerospace systems engineer.
As the article says, the function of MCAS changed and its operational envelope was greatly expanded. What if, internally at least, the new system was referred to as MCAS2?
This is somewhere that things can get political, as was exactly the case with the Max, where they did not want anything to be considered a change to the aircraft type, let alone acknowledge that the MCAS system existed in two majorly differing versions.
I believe changing the aircraft type would trigger regulatory events carrying rather gargantuan costs.
Avoiding those seems to be the entire reason for the existence of the 737 MAX.
Failing to clearly communicate precisely how big this change was, and not making it extremely clear to all stakeholders what was happening and doing analyses of how these failure modes have changed, is really awful.
It should have resulted in that team actively seeking buy-in, and clear communication that all subsystems comprehended precisely what was changing.
It likely should have triggered some kind of additional scrutiny from the safety organization.
That it didn’t is heartbreaking. It seems like either some common practices were not followed or were rushed.
I can’t see Boeing keeping their CMMI certification level after this news breaks. Certainly some major steps were skipped in the Systems Engineering process.
From my understanding, this was an intentional decision, as the only way they could certify the airframe without simulator training being required was to feed the MCAS with only 1 sensor.
The only way they could do that is keeping the system overall rated as hazardous, as the Catastrophic rating would require multiple redundancy plus the training.
This can be corroborated from the Australian 60 minutes expose.
>For sure there may have been pressure from management to keep MCAS out of the manual, but it’s not really up to the regulatory agency to be experts on the aircraft design; if things are being hidden by the company then I’d consider this bordering on professional misconduct on the part of the engineers overseeing this work.
That is my conclusion as well.
Here's the thing: nobody wants to pay engineers on par with managers. Maybe one cannot manage something one cannot understand.
That's always the moment when I'm happy to have documented the decisions.
I used to think like that but I learned in time that the core issue is that engineers rarely know how to present their job in terms of monetary benefit during salary negotiations.
E.g., as an automation engineer I led an initiative that saved a previous employer more than 60k€ yearly using automation and optimizing other validation workflows. As principal engineer at my current job I saved them more than 30k€ yearly by replacing a licensed component with an open source one, filling the feature gaps myself outside of work hours.
These things get noticed, not just the work done but the ability to think in terms of money, and they unlock your full engineering potential in salary negotiations.
The point is, they shouldn't have to. The managers should be handling that fairly. Otherwise, it's an adversarial game where only one team knows how to play.
Engineers should be experts in engineering. Holding them at a disadvantage because they are not good at the other feels wrong.
someone in between has to connect the dots, but there is no such figure and the incentives for doing that communication work are all on the engineers' side
addendum: I do however agree it sucks to have to compete with other fellows
Your achievement is to save money by doing all the work for free in your spare time?
While I agree that engineers should negotiate better and keep track of their achievements, that is not one at all.
I'm significantly ahead of all my peers in job position and salary so I must have been doing something right.
I've a family now, and such investments made prior are helping immensely.
All my peers know I'm dependable, and all my previous managers know I get things done.
The large reference network is both a source of work opportunities and has been a safety net during times of need.
Why would people choose the misery of a strictly contractual relationship when you can build partnerships and friendships wherever you go with just a little effort?
To be clear, I wasn't referring to the board having a choice in the matter.
You may claim it’s unfair that productive members are exploited to pay for nonproductive ones, but that’s the neo-conservatives’ argument, and it has been rejected by most of Silicon Valley. I honestly think that the current situation is already the middle ground between “pay proportional to your added value” and “diversity, equality and social responsibility require adjusting the salaries of the highest-paid engineers.”
It makes me wonder if there are other issues with the Max that the public doesn't know about yet.
I hope a thorough review of Boeing's internal communications is already underway. If there is proof that these decisions were made for financial gain, they should face criminal charges.
IMO, whether it was greed or just general incompetence, Boeing has demonstrated that they are not responsible enough to self-certify their aircraft.
We have no idea what other potentially lethal corners have been cut. What if they go back into service, after several months of retrofitting all of them at Boeing maintenance hangars, and then the following year there are two more deadly crashes from some other overlooked hack?
Really, these planes need to be scrapped. The engines, equipment, seats, etc can all be stripped and used in other planes, but the air frames will need to be recycled and this line of planes should end here.
Even if it doesn't (it probably won't), I highly doubt we'll see another generation of 737s. They did survive the rudder problems way back in the... 80s? or 90s?... so their reputation might recover, but they still can't make the types of planes airlines want and keep that name/certification.
> Perhaps the single most complex, insidious, and long-lasting mechanical problem in the history of commercial aviation was the mysterious rudder issue that plagued the Boeing 737 throughout the 1990s. Although it had long been rumoured to exist, the defect was suddenly thrust into the spotlight when United Airlines flight 585 crashed on approach to Colorado Springs on the third of March, 1991, killing all 25 people on board. The crash resulted in the longest investigation in NTSB history, years of arduous litigation, and a battle with Boeing over the safety of its most popular plane.
A world in which the faulty software was required to fix faulty hardware.
that’s a hardware problem.
MCAS only exists because of that hardware problem.
The fact that Boeing also did not train or tell pilots about MCAS, in order to make the airplane more financially appealing by retaining the 737 type rating, is a separate (also bad) issue.
Aircraft design is a giant bag of compromises between desirable characteristics, most which are in conflict with each other.
This is a gross oversimplification of the problem. In most of the flight envelope the aircraft is stable.
At high alpha the aircraft has pitch problems.
There are myriad ways to address this, and MCAS was one (bad) choice of many available to Boeing.
That would put you at odds with Boeing's test pilots.
The issue is not that the airplane shows negative or neutral aerodynamic pitch stability, it's that it does not exhibit an increasing stick force gradient, as required by certification rules.
>> At high alpha the aircraft has pitch problems.
> I believe the airplane is aerodynamically stable even at high alpha. [...] This is not a flying wing or intentionally unstable airplane (as the F-16).
In some ways that's likely worse news than an airplane that's inherently unstable in general, no? A corner case, and one that evidently isn't actually all that uncommon. If you're building something like an F-16 you know that you absolutely have to make the fly-by-wire correct and robust, and the ground crews similarly know that if anything affects its performance the plane isn't fit to fly.
FAR 25 applies to all transport category aircraft; the section on stability is §§ 25.171 - 25.181. In exactly what manner is the airplane not stable, with or without MCAS?
> But a few weeks later, Mr. Wilson and his co-pilot began noticing that something was off, according to a person with direct knowledge of the flights. The Max wasn’t handling well when nearing stalls at low speeds.
i still maintain my macro point, either way.
making the airframe on a pax airliner aerodynamically stable during normal takeoff and landing operations seems like basic “good engineering” to me.
Modern fighter planes are inherently unstable because this is required for better maneuverability. Passenger planes can certainly benefit from that too.
Consider gusty crosswinds during a landing. With an inherently unstable aircraft, there is greater capability to compensate. You can have a computer stabilize the plane, preventing the tail or wing tips from striking the ground. When wake turbulence threatens to flip the plane or when a microburst threatens to pound the aircraft into the ground, a fast response is possible. Stability would deaden the performance of the needed response.
The extreme example is probably wings that are low-mounted anhedral and forward-swept, with the bending controlled by rapidly actuated aerodynamic surfaces near the tips.
In exactly what manner is it not obvious this is not an acceptable design outcome?
The problem results from an edge case or it would be happening a lot more often. That it's an edge case doesn't mean it isn't serious or shouldn't be fixed or that it's not a design flaw. But it is not a stability problem, it's the wrong word to use.
It misdirects the conversation from where it should be. The airplane aerodynamics are the distraction. The central problem is that, when perturbed, this feature becomes a saboteur: 2.5 degrees of deflection in 10 seconds is asinine at Vmo. A human pilot acting on all the same information the flight computer has available would be considered a maniac to correct for a clearly bogus angle of attack value with 40 degrees of nose down. It's that insane. And Boeing knew about the possibility, classified it as hazardous, and yet somehow there was no further exploration of what would happen upon arrival at such a hazardous event (MCAS upset) by any team at Boeing, 3rd parties, or the FAA. It's mindboggling.
Meanwhile some people prefer distractions from those issues by using the wrong terms: it was designed badly, and the whole plane should be scrapped. With the above systemic problems at Boeing and FAA, who knows what kind of airplane they'd design to replace it and what sorts of problems it would or could have.
The whole impetus of the 737 MAX was a race against time to compete. If they had faced a much longer time frame for a whole new model, the pressure to cut corners is even higher. The opportunities to make mistakes are even higher.
You are right on target, but I do wish to point out the aerodynamics are still a problem, and a problem that has caused a great deal of grief in aviation history.
Take a trip down memory lane, and give the D.P. Davies Interview from the Royal Aeronautics Society a listen. Specifically, the one revolving around the 727 certification.
There seem to be two schools of thought on aircraft design. One is the test pilot's wet dream: simple layout, well behaved, neutral stability, or minimal bad behavior up to the corners of the flight envelope, then easily discernible and recoverable stall behavior.
The other school is the realm of the Engineer. The Tricky Sick school, if you will. Apply enough computer and piloting aid to the properly shaped brick, and it can be flown like a 737! Or Airbus's version of "let the plane fly itself, just tell it where to go."
Even as far back as the certification of the 727, test pilots saw the shift away from the neutrally stable machine that "just flew" to an ever increasing mishmash of complex systems working in the background to make unstable airframes fly like naturally (neutrally) stable ones. Which is all fine and good until something goes wrong and those systems fail, leaving a pilot in uncharted waters.
The control stick force stuff is not a distraction, just another link in the chain of normalization of deviance that resulted in a departure from "building an airworthy frame" to figuring out how to mask the "unairworthiness" of a frame sufficiently to get it by the regulators.
That's not to say it can't be done, but one approach is definitely inherently riskier than the other, and requires increased levels of communication among everybody involved.
Point being: this has been building up since as far back as the 60's. See the 727 certification in the above-mentioned interview, the many difficulties that the MD-11 ran into with its LSAS, and note the similarly less-than-stellar results that emerged from trying to optimize for fuel efficiency at the cost of having to implement increasingly complex control system hacks to maintain parity with regulations/previous airframes.
In the US maybe. In Europe and Asia, I don't think so.
The planes have been grounded pending investigation. Given the speed of investigations and the political ramifications of all of this, I bet they won't be ungrounded anytime soon.
It's not something an airline can do hastily, as different aircraft (well, at least those of differing type rating) require different crews to operate. So the airline will have to grab a totally different crew to match their replacement aircraft.
Before the 737 Max got grounded (after the 2nd crash), I remember reports that some airlines allowed passengers to rebook their flights for free (to end up on a different plane). I doubt they will continue doing it once the 737 Max is allowed to fly again. Eg, Southwest is too dependent on it.
This reasoning reminds me of the tendency to argue "this code needs to be thrown out and rewritten from scratch" among software engineers. It's easy to see the flaws, but not so easy to see all the things that have been fixed. See https://www.joelonsoftware.com/2000/04/06/things-you-should-...
This is like taking your MySQL app and running it against MongoDB with a bunch of hacks to translate the SQL into mongo calls.
And then claiming you did all this so you could "avoid a rewrite".
Some architectural decisions are far reaching and leaky. This is not the ideal but it often is the ideal trade off. The alternative is decoupling to the nth degree resulting in a hunk of junk impossible to change that won't get off the ground.
Changing the position of the engines means it's no longer the same 'tried and tested airframe'.
However, the analogy to "rewrite all this" politics is good, I think. Often you could walk into a building and say "all this needs to go", but truly the costs of doing so and starting from scratch are hidden in code and engineering design that has survived a trial period.
No matter how nice and neat everything might look, that's only because it doesn't yet account for 10,000 other edge cases you're not able to simultaneously consider...a system as complex as a commercial plane is still going to care about these.
They don't seem to have learned from this, which means it's likely to happen again.
Don’t forget, the system they used to paper over the cracks had a single point of failure.
OTOH, seeking a non-aerodynamic solution to a significantly stability-degrading airframe modification was IMHO a bridge too far. If the pitch stability (not the pitch feel) problems couldn't be dealt with aerodynamically without busting type certification, then perhaps the whole concept was just too much of a stretch for the venerable old 737.
It surprises me, though, that this couldn't have been engineered out with enhancements to the horizontal stabilizer, such as tip fences or a span increase to offset the lift from the engine nacelles.
If it could have been, but software was cheaper, then that's an even darker indictment of Boeing's engineering incompetence.
It's not the case that they won't have to try, they already did it years ago.
This in turn is because getting a new airplane type to market would cost some (I assume, facts are welcome!) unholy amount of money and time to get approved.
Whether you consider that decision "greed" or "a rational response to perverse regulatory incentives" is, I suppose, a personality test as good as any :)
It's one thing to scoff and believe "Oh, it's nothing! You're just holding us all back!"...
...Right up until a plane load of freight or people plunges out of the sky.
Free market economics optimizes for one thing, and one thing only, as a first-order optimization. That's why we regulate: to ensure that all those nuisance secondary facets are accounted for by everyone equally, so that market forces' natural race to the bottom doesn't compromise the central tenet of air safety; that everyone and everything that goes up comes back down, safely, controlled, and alive.
The business part, if you think about it, is secondary to the capability to make and safely deploy a new plane. A nice bonus.
Sacrificing the quality of the final product for the sake of looking better on the balance sheets is a cardinal sin. Plain and simple. Based on testimony from inside, that sin seems to have been SOP at Boeing for the better part of the last decade.
But I hope that CEO Dennis Muilenburg deep down understands he seriously fuxxed up real bad, and every now and then has a hard time falling asleep in his $10M mansion knowing that he is ultimately responsible for hundreds of people's unnecessary deaths due to his failed values as a leader.
which is extremely sad because it was really hard won.
classic story of gutting a gov organization, and regulatory capture.
i know some really good people at faa and the situ makes their blood boil. mine too.
Even with the two MAX planes that went down, fatal crashes are enormously rare (for Airbus/Boeing/CRJs/ERJs). This one is only getting so much attention because it was that extraordinarily rare "design defect" rather than a maintenance issue or pilot issue.
More specifically, the complaints reportedly referenced issues pertaining to a takeoff “autopilot system” and situations where the plane is “nose-down” while trying to gain altitude. One pilot reportedly wrote that it was “unconscionable” that Boeing and federal authorities allowed pilots to fly the planes without fully describing how the 737 Max 8 was different than other planes. “The fact that this airplane requires such jury-rigging to fly is a red flag,” the same pilot wrote.
That is almost certainly not what happened. It is more likely a system of procedures and policies which failed. The company should take the hit, but unless an investigation reveals otherwise, I see no reason a single individual should take the blame for all of this.
Aren't the insane compensations of executives justified by their "great responsibility"? So I think it makes them eventually responsible for what their companies do.
In that sense, it absolutely is the fault of Boeing management for not acknowledging reality.
After the Lion Air crash, it was very apparent to Boeing that MCAS was not safe. This whole article focuses on how MCAS slipped through development+certification - but really, even after Boeing knew the dangers of MCAS, the MAX was still allowed to fly.
It was hidden and dangerous. Then it was open and dangerous but was still defended by Boeing. Damning.
I think it's a huge issue, but perhaps not criminal. The hiding/lying/etc is a criminal issue in my view.
Why exactly did the engineers/test pilots feel the need to "enhance" the original MCAS with the new, more powerful version that worked at lower speeds? What did they know? I doubt they did it for the hell of it. And therefore, what has changed such that that enhanced functionality is now no longer necessary, and it's fine that MCAS is being returned to its original, more subtle implementation?
These things just don't add up for me and Boeing's constant pronouncements that they did nothing wrong, everything was fine, and now they're fixing it so everything will be even more fine ring very hollow indeed. I would almost like to see everyone involved in this subpoenaed so the public can learn the truth of what, exactly, took place.
Until we have some answers, especially to my main one - what was so bad about the airframe's handling that it was necessary to massively increase the power of the MCAS system, but is now apparently not necessary anymore and it's fine for them to nerf it - I don't think I'll be flying on a MAX.
I mean, if the one-sensor-based MCAS failed twice so early in the life span of the plane model, what is the probability that a two-sensor model will fail pretty soon as well? The math should be simple; we have all the data needed: combined hours flown by all planes of the type and the number of failures (at least two known), which can help us estimate an MTBF for the sensor.
If the sensor had just stopped responding, there wouldn’t have been any problem. The planes would keep flying, the sensors would get replaced, and everyone would be fine.
What happened was that the sensor gave erroneous readings. The MCAS system reacted to those erroneous readings and crashed the plane.
With two sensors, you can detect failure. It’s very unlikely that both would fail simultaneously. If they did, it’s very unlikely that both would provide the same erroneous readings.
Birgenair 301 crashed into the Atlantic because mud dauber wasps built nests in both pitot tubes while the plane was on the ground. It happens.
Think about it like the Antilock brakes on your car. Suppose the wheel position sensor fails. It's fine if the car puts up a warning light and says that you don't have antilock brakes anymore. You can drive fine without them until you can get them fixed with a minor safety impact. It's not fine if the wheel position sensor fails and this causes the car to slam on the brakes going 65mph down the highway.
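The fail-silent pattern the ABS analogy describes can be sketched in a few lines; all names and values here are hypothetical, purely to illustrate the design principle, not any real automotive or avionics code:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    value: float   # hypothetical wheel speed, mph
    valid: bool    # False when the sensor self-reports a fault

def abs_control(wheel_speed: SensorReading, braking: bool) -> str:
    """Fail silent: on sensor failure, disable the feature and warn,
    rather than acting on data that may be garbage."""
    if not wheel_speed.valid:
        return "warn: ABS unavailable"   # degrade gracefully, keep plain brakes
    if braking and wheel_speed.value == 0.0:
        return "modulate brakes"         # wheel locked: pulse the brakes
    return "normal braking"
```

A fail-active design would skip the validity check and keep acting on the bad reading at full authority, which is essentially the failure mode being described for MCAS.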
You wouldn't hear about it unless the activation was triggered by egregious pilot error and you're scouring aviation news sites.
It's not fine if the wheel position sensor fails and this causes the car to slam on the brakes going 65mph down the highway.
Been there, done that. It's an unpleasant failure mode, but it is survivable.
And, anecdotally, I've had ABS kick in in some occurrences for which I was very thankful.
Suppose the crash probability on a normal flight is 1/1E7, but without MCAS it's 100x more dangerous, or 1/1E5.
Suppose the MCAS failure probability is 1/1E6; then the probability of an additional crash due to the failure of MCAS is 1/1E11, which is acceptable.
The problem is that in practice, the crash probability if MCAS fails is empirically 2/3 instead of 1/1E5, because MCAS actually causes the crash rather than merely failing to prevent a crash.
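The arithmetic above can be checked directly; this sketch just restates the comment's illustrative figures (none of these are real certified failure rates):

```python
# All figures are the hypothetical ones from the comment above.
p_crash_without_mcas = 1e-5  # assumed 100x the nominal 1e-7 risk
p_mcas_failure = 1e-6        # assumed probability that MCAS fails

# If a failed MCAS merely stopped protecting, the added risk would be
# the product of two independent small probabilities:
p_fail_passive = p_mcas_failure * p_crash_without_mcas   # about 1e-11, acceptable

# Empirically, though, an MCAS upset itself led to a crash in roughly
# 2 of 3 known activations on a bad sensor:
p_fail_active = p_mcas_failure * (2 / 3)                 # about 6.7e-7

print(p_fail_passive < p_fail_active)  # prints True: fail-active risk dominates
```

The point of the comparison is that a fail-active system contributes risk proportional to its own failure rate, not to the product of two rare events.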
"ABS has close to a zero net effect on fatal crash involvements."
Further down, there is a chart which seems to show that ABS is associated with a huge increase in fatal accidents in inclement weather, and a large increase in side impact accidents, both fatal and non-fatal.
Maybe there's something obvious I'm missing?
This is something all 737 pilots are trained for, so they have to balance that pitch up from the engines throttling up by pitching the plane down more than they would otherwise need to. The precise relationship between throttling up and pitching up was changed when the 737 was modified to create the 737 MAX. Namely, on the 737 MAX the pitch-up is more extreme. It is not so extreme that it makes the airplane a bad design, but it is extreme enough that pilots need to not expect the legacy 737 behavior.
Had pilots been trained for the performance characteristics of the 737 MAX specifically, it would have been perfectly fine. But they weren't, and instead MCAS was meant to paper over the difference so pilots could be kept in the dark (which is cheaper but borderline homicidal...)
To put this another way, in some hypothetical universe where the 737 MAX was the first 737 ever, it would have been introduced without MCAS and pilots would have said it handles like a dream. Then, when the 737 NG was introduced after the 737 MAX, there might have been a reverse-MCAS system implemented to make the NG handle like a MAX. That reverse-MCAS system may have then failed catastrophically.
The problem with the 737 Max is that the engine nacelles are further forward and larger than previously, so the lift that the nacelles make at high angle of attack is greater. This results in less stick force being necessary to maintain a high angle of attack than at lower angles of attack. This is uncertifiable behavior. Hence, MCAS, to augment the manuvering characteristics of the 737, to increase stick forces at high angles of attack.
In a hypothetical universe where the Max was the first 737, it'd be fly by wire and all the stick forces would be synthetically generated anyway.
“The investigation concluded that one of the three pitot tubes, used to measure airspeed, was blocked.”
They don't have to fail simultaneously in a flight. And they don't have to fail by internal sensor problems.
There are many cases in which they can simultaneously fail and give the same readings; the article even mentioned such types of events:
>> That probability may have underestimated the risk of so-called external events that have damaged sensors in the past, such as collisions with birds, bumps from ramp stairs or mechanics’ stepping on them.
And AF447 gives an example of what such erroneous readings combined with pilot errors may lead to.
AF447 is an example of a fly by wire system that has to keep working no matter what happens, thus a bunch of redundant systems and a series of alternate modes the system can fall back on to operate in a degraded state.
MCAS, in contrast, is not a critical system. It could shut down with no problems at all. These crashes happened only because it didn’t shut down when faced with a failed sensor, because it couldn’t detect the failure.
AoA sensors by design don't work reliably at low speed, and they don't work at all on the ground.
> AF447 is an example of a fly by wire system that has to keep working no matter what happens, thus a bunch of redundant systems and a series of alternate modes the system can fall back on to operate in a degraded state.
AF447 is a good example that two AoA sensors can simultaneously have the same erroneous readings. It's not that unlikely.
Airspeed is a critical system while AoA is not. (Unless AoA is tied to MCAS, like in the MAX jets.)
You’re just badly arguing details while ignoring the actual point here.
What was the point? Two sensors (in this case alpha vanes) can fail at the same time and in the same way on an Airbus:
> First: I said it’s very unlikely, not impossible.
> You’re just badly arguing details while ignoring the actual point here.
My point here was that with two AoA sensors you can't reliably detect failure. They can both fail simultaneously and provide the same erroneous readings. And because that's not so unlikely, we have to be sure that pilots can handle MCAS problems when two AoA sensors fail.
> Second: the failure of AF447’s pitot tubes was detected immediately
Because the A330 has three pitot tubes, right? Not two out of two sensors?
I'm not arguing about the AF447 case; I gave it as an example that two sensors _can_ have the same erroneous readings.
Airbus engineers were not sure that could happen in real life, so the stall warning issue was a real surprise to them.
Triple redundancy is the norm for the specific reason that it's highly likely for symmetrically placed sensors to be prone to failing in the same way not long after each other, but having a third differently placed can keep you flying.
Although there's at least one instance where an Airbus plane had two AoA sensors malfunction at the same time and outvote the last remaining sensor.
This is why critical systems are built with higher degrees of redundancy and graceful degradation of operational envelope in mind.
Training on how to deal with unaided flight is also absolutely essential. There have been many Airbus accidents where pilots were caught off guard when the automation that kept them from breaking out of the operating envelope failed.
Long story short: Boeing has put itself in the unenviable position of having delivered a product in ways that are not only illegal but deadly. Short of pilots accepting a significant burden (being as good as or better than the MCAS system), a lot of man-hours and capital have been expended to end up in a situation where every MAX is at no inconsiderable risk of being scrapped.
The problem is that in order to save a tiny amount of money, Boeing made the plane rely on unreliable sensors.
You're assuming that faulty sensors will tend to have random output. But since we're talking about a real-life mechanism, it seems likely it has some erroneous states that are more likely to occur than others. For instance, if the mechanism often fails up against one of its mechanical limits, the sensor might erroneously read out the limit position every time.
You can't actually say anything about the distribution of failure states for a sensor without evaluating that particular sensor.
Ice, insects, birds, and volcanic ash are all things that tend to cause the pitot and the static tubes to become blocked. When you encounter ice, insects, birds, and volcanic ash, it is often the case that you get multiple simultaneous blockages. Blockages of the various tubes are not statistically independent events in practice.
Would it trouble everyone to get a basic grasp of what we’re talking about before replying? I’m getting a little tired of constantly correcting basic stuff in the replies.
You get a reading of 20 on one sensor and a reading of 34 on the second; which one is correct? To achieve reliability, a minimum of five sensors would need to be used: four primary and one backup. If three primary sensors agree, the system is normal. If two primary sensors disagree, switch to the backup.
There’s a big difference between a system which must work and a system which must not go wrong. For example, the fly by wire system in an Airbus must work. A failed sensor must not disable the system. Thus, you need at least triple redundancy to keep functioning in the event of a failure.
Boeing’s MCAS system, on the other hand, doesn’t need to work. The plane flies just fine without it. It merely needs to not go crazy. Two sensors is sufficient.
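The distinction can be sketched in a few lines of Python (the tolerance and function names are made up for illustration, not Boeing's or Airbus's actual logic):

```python
def mcas_ok_to_run(aoa_left, aoa_right, tolerance=5.0):
    """A system that merely must not go wrong can fail safe:
    on any disagreement it simply shuts itself off. Two sensors
    are enough to detect the disagreement."""
    return abs(aoa_left - aoa_right) <= tolerance

def fly_by_wire_aoa(a, b, c):
    """A system that must keep working needs to fail operational:
    with three sensors, take the median so one wild reading
    can't drag the output far."""
    return sorted([a, b, c])[1]

assert mcas_ok_to_run(5.0, 5.2)                # healthy pair: run
assert not mcas_ok_to_run(5.0, 22.0)           # mismatch: disable, keep flying
assert fly_by_wire_aoa(5.0, 5.2, 74.5) == 5.2  # outlier outvoted
```

The asymmetry is the whole point: merely detecting a fault (and shutting down) is far cheaper than surviving one and continuing to operate.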
Per the article MCAS was originally intended to handle uncommon edge cases but was extended to cover additional (low speed) deficiencies. This expanded scope is what made MCAS as problematic as it is because it did away with the second input (accelerometer) and expanded the authority dramatically (from something like 0.6 degrees to 2.4 degrees of stabilizer movement).
The pilots knew something was going wrong. That wasn't the issue. The issue was that the bloody thing could mistrim the plane to the point of nigh irrecoverability, and no one knew enough about it until two planes full of people plunged out of the sky.
The plane may be able to fly just fine; but the way this thing was developed and brought into mainstream use had critical problems in terms of essential information being communicated.
All the decisions and motivations behind this lack of communication have at some point been traced back to trying to circumvent regulations in order to prop up the share price by scoring sales of a new airframe of comparable efficiency to the A320neo.
Fly-by-wire Boeings still only have two alpha vanes. Go ahead, take a look at the next 777 or 787 you come across.
When you do that you now have an aircraft the pilots aren't certified to fly.
It would increase risk. But for that increased risk to materialize into harm, the plane would also need to experience an unlikely, near-edge-of-flight-envelope situation that the working MCAS was intended to handle.
This would be comparable to a plane with any other mechanical defect that is discovered in-flight. If the above situation is expected to be too-risky to continue the flight and repair on the ground, then it would give cause for an emergency landing.
Failure of the AoA sensor and edge-of-the-flight-envelope events can't be assumed to be uncorrelated.
It's actually higher than the probability that a one-sensor version will fail. With two sensors, you have an effective failure if either sensor fails, and the probability of that happening is roughly twice the probability that a single sensor will fail (assuming failures are independent, which is not necessarily a valid assumption).
However, with two sensors you can tell when one has failed (even though you may not know which one it was) and so the consequences of the failure might be less severe.
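The arithmetic behind "roughly twice" is worth spelling out (Python, with a hypothetical per-flight failure probability and the independence assumption the comment flags):

```python
p = 1e-4  # hypothetical probability that one AoA vane fails on a flight

p_single = p                 # a one-sensor system silently fails
p_either = 1 - (1 - p) ** 2  # a two-sensor system flags a mismatch
p_both   = p * p             # both fail; undetectable if they agree

# 1 - (1-p)^2 = 2p - p^2, i.e. "roughly twice" a single sensor...
assert abs(p_either - 2 * p) < 1e-6
# ...but a *detected* failure is benign if the consumer just shuts off,
# and the dangerous both-fail case is far rarer (when independent).
assert p_both < p_single
```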
The problem is: now pilots need to be prepared to fly the plane with a failed sensor, which is to say, without MCAS. To do that, they will need additional training. Avoiding that was the whole point of MCAS in the first place. That's the reason it's taking so long to sort this out. Technically, it's an easy problem to solve. It's the economics that are daunting.
Note that Airbus uses three of these sensors on their planes, so that when one fails you know which one it is, and can still rely on the signals from the two remaining good ones. Then you replace the failed sensor before the next flight.
>The aircraft's computers received conflicting information from the three angle of attack sensors. The aircraft computer system’s programming logic had been designed to reject one sensor value if it deviated significantly from the other two sensor values. In this specific case, this programming logic led to the rejection of the correct value from the one operative angle of attack sensor, and to the acceptance of the two consistent, but wrong, values from the two inoperative angle of attack sensors. This resulted in the system's stall protection functions responding incorrectly to the stall, making the situation worse, instead of better.
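The rejection logic described in that quote can be sketched to show both the normal case and the failure mode (Python; the tolerance and the keep-if-supported rule are illustrative, not the actual Airbus algorithm):

```python
def voted_aoa(readings, tolerance=5.0):
    # Keep each reading that agrees with at least one other sensor
    # (the count includes the reading itself, so >= 2 means "has a
    # supporter"); average the survivors. A lone dissenter is rejected.
    kept = [r for r in readings
            if sum(abs(r - other) <= tolerance for other in readings) >= 2]
    return sum(kept) / len(kept)

# Normal case: one wild sensor is correctly outvoted.
assert voted_aoa([5.0, 5.5, 60.0]) == 5.25
# The incident above: two vanes frozen at the same wrong angle
# outvote the one sensor that is still correct.
assert voted_aoa([-20.0, -20.0, 5.0]) == -20.0
```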
Considering the number of flights, that does sound reliable to me.
I still think two failures in the same sensor, in the same airplane, under the same condition, in less than one year did not happen by chance.
Also, your GPS measurements give you position, direction, and speed, but they don't give you orientation. You would have to have another instrument to feed that into the system (such systems exist).
But yes, it would be a sanity check.
Unfortunately, no. Stalling is a function of the wing's _angle_ relative to the flow of air, not of speed. If the angle is too sharp the air can't follow the curve of the wing. The critical angle is (pretty much) independent of speed. For example: if you stick your hand out of the window of a car traveling at 60 MPH, and hold it almost flat to the wind (say 80 deg.), then the air can't follow down the back of your hand. All of the "push" is backwards, and there's no push up. If you hold it at 30 deg. then the air flows around your hand, which deflects the air down and your hand up, very strongly.
Even if you're only traveling at 5 MPH, if you hold your hand at 30 deg. the air will flow around your hand and deflect it upward; it will just be a very weak effect.
The angle between the wing and the air flow is what is called the "angle of attack", and what the AoA sensors measure. The only other instrument that comes close is the Attitude gauge (the globe thing). However, it measures the plane's angle relative to the horizon, and air moving relative to the plane usually isn't parallel to the ground in conditions where the AoA matters.
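To make the difference concrete: in a simplified 2-D picture (ignoring wind and sideslip), angle of attack is pitch attitude minus the flight-path angle, which is why the attitude gauge alone can't substitute for the vane:

```python
import math

KT_TO_FPM = 101.269  # knots to feet per minute (1 nmi = 6076.1 ft)

def angle_of_attack(pitch_deg, vertical_speed_fpm, true_airspeed_kt):
    """Simplified 2-D estimate: AoA = pitch - flight-path angle.
    Ignores wind and sideslip, so it's a sanity check, not a sensor."""
    gamma = math.degrees(
        math.asin(vertical_speed_fpm / (true_airspeed_kt * KT_TO_FPM)))
    return pitch_deg - gamma

# Level flight: pitch and AoA coincide.
assert angle_of_attack(5.0, 0.0, 250.0) == 5.0
# Climbing at the same pitch: the attitude gauge reads the same,
# but the actual angle of attack is lower.
assert angle_of_attack(5.0, 2000.0, 250.0) < 5.0
```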
Wikipedia article, with much detail, pictures, etc.: https://en.m.wikipedia.org/wiki/Angle_of_attack
Normally, speed is from a tube aimed into the air. Normally, direction is from a little fin that can spin.
There are lots of alternatives:
Direction can be via multiple tubes aimed into the air, each with slightly different direction.
Speed can be from a hot wire. Weather stations sometimes use this.
You can get both via lidar. You just need to make it sensitive enough to pick up a response from minute particles of dust or ice.
I think I just invented a new way: do a short-duration high-power pulse of an electron source or an EUV laser, causing the air to fluoresce at enough distance from the aircraft to be clear of the boundary layer. Track the motion of the fluorescing air with multiple cameras.
* IAS is the raw airspeed reading from the pitot tube.
* CAS is IAS corrected for instrument errors, e.g. if the plane is at an angle that disrupts air flow around the pitot tube.
* TAS is basically CAS adjusted for altitude and air pressure. It’s the aircraft’s speed relative to the air around it.
* Ground speed (or speed over the ground) is TAS adjusted for the wind. This is the number that GPS is going to give you.
IAS and CAS are particularly important for describing performance characteristics - if an aircraft stalls at 100 knots CAS, then it always stalls at that CAS. If you try to describe the stall speed in terms of TAS you go from a single data point to a graph of speed and altitude.
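The last point is easy to see with the rough rule of thumb that TAS grows about 2% per 1,000 ft over CAS (an approximation pilots use, not the exact compressible-flow conversion):

```python
def tas_estimate(cas_kt, altitude_ft):
    # ~2% increase per 1,000 ft; a rule-of-thumb approximation only
    return cas_kt * (1 + 0.02 * altitude_ft / 1000)

# A wing that stalls at 100 kt CAS stalls at 100 kt CAS everywhere,
# but the corresponding TAS varies hugely with altitude:
assert tas_estimate(100, 0) == 100                 # sea level: ~100 kt TAS
assert abs(tas_estimate(100, 35000) - 170) < 1e-9  # FL350: ~170 kt TAS
```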
If these accidents prove anything, it's that we need a computer that takes many different inputs (GPS from the tail and the nose, pitot, barometer, AoA indicator, input from the pilot, engine RPM, etc) and put them into a mathematical model of the airplane before overriding the pilot.
The incuriosity of all parties to an event categorized as hazardous is astonishing. Boeing says it's a system that's completely transparent to the pilot, and therefore there is no need to describe a failure that they say would be hazardous. What part of that passes a reasonable smell test? It's safe unless it fails, which would be rare, but if it fails people could die? But meh, it's rare so let's not even find out what would happen if it happened?
Boeing must be compelled to show their work for this probability computation, because it is clearly wrong. And both Boeing and the FAA have to answer why there's no mandatory testing of hazardous events. At least what does a simulator think will happen in various states of perturbed sensor data, and how does a pilot react when not expecting such an event?
Oh, and the part about depending on a single sensor is not, per Boeing, a single point of failure because human pilots are part of the system? That's a gem. The pilots are the backup? This poisonous form of logic is perverse.
Too often systems are designed with procedural mitigation as the primary way of controlling a hazard, without realizing all the human factors that come into play. Maybe the pilot is distracted because she just had a fight with her spouse. Maybe her co-pilot had a bad night's sleep. Or maybe he isn't physically capable of generating the force necessary to move the trim wheel.
I think too often designs can over rely on administrative mitigation because the engineering controls seem too costly or difficult to implement. In some cases, this rationalization that a person "just" has to do XYZ activities to control the outcome falls short because we don't acknowledge all the factors that person is dealing with in the moment.
In this case, to someone like me without intimate knowledge of the Boeing process, it looks like they failed at their hazard analysis. They did not design the hazard out of the system (airframe design), the engineering controls were inadequate (MCAS), and the administrative controls were poorly managed (pilots did not understand the procedures for disabling MCAS or the procedures were not capable of being executed effectively). In other words, they did not apply appropriate hazard analysis and mitigation. Hindsight is easy, I know, but when schedule pressure hits a lot of these processes are rushed.
I'm sure there are many people who will do the same. In fact, every flight I do go on now, I check to make sure it is not a MAX.
I doubt there will be enough people who think this way that it would cause a problem economically for any airlines that carry this line, and I'm sure with time, people will forget, but I sure as hell will do my best not to.
Be very wary if pilot training is not part of the "fix" to getting the Max back up in the air. If MCAS is being "rolled back" then certain situations such as "The Max wasn’t handling well when nearing stalls at low speeds." come back.
This probably won't happen of course, all they seem to want to do is fix as little as possible as quickly as possible while denying they ever knew anything.
If I were someone powerful like a pilot union leader I would start throwing conniption fits in public and refuse to let my people fly on Max's at all.
Can you cite the basis for this often-expressed sentiment? There's absolutely no reason why a properly-designed and -vetted MCAS system wouldn't have been a perfectly acceptable solution to any handling irregularities caused by the engine configuration.
The idea was fine. The fault was 100% in the implementation.
And no, downvotes are not a valid citation.
But you have to recognize the whole engine hack is just a convoluted workaround to avoid as much pilot training as possible. The entire goal of the project seems to be to avoid ever training pilots for as long as possible. It's a brand new plane, the newest plane on the market, and the first thing you need to do to take off is turn off the cabin air conditioning. Why? Because that's what we had to do 50 years ago in the first 737.
God forbid this plane startup any way besides turning off the cabin air conditioning. If we changed that, we'd have to... gasp retrain pilots!
I don't know what the correct answer to the problem is, but clearly good safety regulations are trapping some carriers and Boeing. Sooner or later Boeing will have to build a true successor to the 737 (and I guess they now wish they had done so sooner).
Citation needed. I've never heard this before, except for some other person on a message board, and I've been involved with aviation and known pilots with multiple type ratings.
Surely that was exactly the MCAS that was installed in both of the planes that crashed.
But the fact that it failed so badly suggests it might in fact be a rather difficult system to get right.
Boeing's implementation will be a mainstay in engineering ethics classes for the next 100 years, right next to the Therac-25 and the Kansas City Hyatt.
That one popped out to me. Man. Lots to learn.
> Boeing continued to defend MCAS and its reliance on a single sensor after the first crash, involving Indonesia’s Lion Air.
Also...how? So many non safety critical services use a load balancer and at least a couple of servers because who can trust just one thing working perfectly all the time?
Don't restrict your definition of "actionable" to fixes that don't come with a monetary cost.
Don't get rid of your Quality people. Definitely don't get rid of them for raising too many defects.
Don't stop focusing on "the box" (I.e. the plane) because customers already assume it will be "high quality", and reengineer a physical engineering/design firm into some ungodly act of "financial innovation".
Don't treat regulations as something to be worked around.
Don't skimp on Acceptance Testing of outsourced software deliverables.
Make sure your CEO and Sales staff understand there are things you can not (and should not) sell.
Listen to your Unions. Don't try to work around them.
These aren't hard. They are also all things that, by not doing them, Boeing set the stage for this cascading failure of epic proportions.
Pity that American manufacturing and engineering firms never (in my experience) took W. Edwards Deming seriously. His 14 points are a hell of a good start.
Maybe not in the near future, but as technology progresses and every manufacturer strives to optimize their designs with the latest features, it will become an insurmountable task to oversee every aspect of it (efficiently). I'm not talking about actively designing, but rather about warning/flagging for potential error. In very complex enterprises like global transport or building skyscrapers there is a lot to learn from experience and little human time, but it might be very cost-effective to train an all-observing, self-learning AI to look over everyone's shoulder and warn you about using the right type of bolts, or how the coming heavy rains in Guatemala might affect your supply chain.
It's not that far-fetched when you realize it doesn't need to really understand anything, just be very good at playing word association and micromanaging.
Why not have 20 airspeed sensors of 5 different types? It's an obvious failure mode that your one sensor will fail and then the pilots and the computer will be left in a state of dangerous uncertainty about the situation.
Good safe airplane design is about a neutral flying design without the need for complex systems.
This plane is fundamentally flawed because the engines are in the wrong position, because the landing gear is too short to fit them in the correct position.
The test pilot was clear about very poor flying characteristics at slow flying speeds, requiring MCAS to be more aggressive.
This plane should not be flying with this engine configuration, as it fails the most fundamental principle of good aeroplane design: neutral handling.
> In a meeting at Boeing Field in Seattle, Mr. Wilson told engineers that the issue would need to be fixed. He and his co-pilot proposed MCAS, the person said.
It is not clear this translates into a fundamentally flawed design. It's a serious assertion, even though at the same time it's vague. Why did it need to be fixed? To avoid pilot training? Or to pass a FAR 25 airworthiness certification requirement? We can't tell from this reporting. Months after these accidents, people are still asking this question. The difference matters.
I'm very skeptical that software can legally be used to paper over aerodynamic flaws, as I read FAR 25. In fact, neutral design is not adequate, it must exhibit positive static and dynamic stability in all three axes. Fly by wire software doesn't make a plane with negative stability behave as if it has positive stability, the software provides various safeguards in a layered manner.
I'm certain correctly designed software can safely control critical functions, otherwise failure in a large category of aircraft systems would result in many more MCAS unrelated accidents.
This particular MCAS control philosophy seems to be a flawed control system. With reference to the graph (link provided by obituary_latte):
With only one sensor being "looked at" at any time, and with the system not having the sense to know to stop commanding pitch down after 26 times with attempted pilot overrides, it would seem almost beyond belief that any competent team of on-the-ground engineers (as per Boeing) would not see that the system is flawed.
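One hypothetical sanity check (the names and threshold here are invented for illustration, not anything in the actual MCAS) would be to latch the system out after the crew has repeatedly countermanded it:

```python
MAX_PILOT_OVERRIDES = 3  # hypothetical threshold

class TrimAuthority:
    """Sketch of a guard MCAS lacked: stop re-engaging automatic
    nose-down trim once the crew has opposed it several times."""
    def __init__(self):
        self.overridden = 0

    def may_command_nose_down(self):
        return self.overridden < MAX_PILOT_OVERRIDES

    def pilot_countermanded(self):
        # crew trimmed nose-up immediately after an automatic nose-down
        self.overridden += 1

auth = TrimAuthority()
for _ in range(26):  # the 26 cycles visible in the flight-data graph
    if auth.may_command_nose_down():
        auth.pilot_countermanded()
assert auth.overridden == 3  # latched out after three opposed commands
```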
Would be interesting to see if this was the case, and how the likely good engineering decision was overridden by the commercial aspect.
With increased tech, comes increased scope for this kind of cost optimisation, and we must be careful in many more industries. Eg Automotive self driving cars.
They’ve found the MCAS issues, but with a procedure this lax I’d expect several other issues to have gotten through.
For reference, the Therac 25 was a computer-controlled radiation therapy machine involved in several over-exposures due to replacement of physical controls with computer based ones without complete understanding of the interactions of the controls.
The MAX feels very much like that. No one can really keep a whole aircraft in their head, much less a whole aircraft development project. We use computers for that, as well as mental heuristics. But if those computers and brains are not fed all the proper data and connections, they will not find all the problems.
Additionally, there seems to be a lot of the tail wagging the dog. If this system is expected to perform according to X specifications, then by golly it will, and we will show that it does.
Edit: Please don't take the above as absolution of Boeing. Someone (a lot of someones) really should have known better.
As to bad faith, yes, I'm sure there was some of that, but generally decisions like these don't look like bad faith to the people making them. It's easy to get swamped by technical details.
> As to bad faith, yes, I'm sure there was some of that
You're downplaying this. This is not downplayable, and this is not similar to Therac. "A consequence of incremental changes" is a rather gross way to paint this, as if it's hard for a single person to see how, e.g., non-redundant sensors are an extremely bad idea on their own, let alone everything else. There have been multiple huge missteps, not made by accident, each of which is individually a huge red flag obvious to people in different areas. That no single mistake was enough to bring the plane down doesn't mean the mistakes must have been small or somehow downplayable. And no, this isn't some kind of gray area with people getting genuinely swamped by technical details. It's abundantly clear there's been a ton of bad faith here, that there is still ongoing bad faith even after the fact, and that they're still unwilling to address the problem properly.
What I was actually thinking is that this kind of thing is likely to crop up in complex systems, and if we work on complex systems we should be wary for it.