
This is a very good analysis, but fatally incomplete.

One really essential reason those planes crashed was that each time MCAS triggered, it acted as if it were the first time. If it added 1 degree of trim last time, it adds a second degree this time, a third the next, up to the five degrees that run the trim all the way to the stops.

A second reason is that, under the design still on file at the FAA, it could only add a maximum of 0.8 degrees (each time). This was raised to 2.4 degrees after testing, so only two hits could, in principle, put you almost to the stops.

A third was that the only way to override the MCAS was to turn off power to the motor that worked the trim. But above 400 knots, the strength needed to dial back the trim with the hand crank was more than actual live pilots have, especially if it is taking all their strength to pull back on the yoke.

A fourth was that, with two flight control computers, the pilot could (partly) turn off a misbehaving one, but there is no way to turn on the other one. You have to land first, to switch over, even though the other is doing all the work to be ready to fly the plane.

A fifth was that it ignored that pilots were desperately pulling back on the yoke, which could have been a clue that it was doing the wrong thing.

A sixth was that, besides comparing redundant sensors, it could have compared its commands with what the other flight computer thought it should be doing.




This analysis is completely right, but in my opinion, focuses too much on the technical aspects.

Is MCAS a hack? Yes. Is it fixable? Yes. Will the 737 MAX continue to fly for two to three decades after all the items above have been addressed? Yes.

But from an engineering perspective, adding a system to "fix" another system always feels a bit off. Sometimes it's unavoidable (e.g. cooling), but when it is avoidable, something is at least a bit wrong. A few hacks like that are manageable, but with too many you dramatically increase the chances of one of them misbehaving.

And if an organization is pushing hard for this kind of hack, as Boeing did, the issue is not even technical.

The story of MCAS reminds me a little of the MD-11. The DC-10, as a tri-jet, could not really compete with the new twin-engine airplanes of the late '80s/early '90s in terms of fuel consumption, but McDonnell Douglas tried anyway. They optimized the wings, changed the engines, added winglets and, more significantly, reduced the size of the horizontal stabilizer. This made the MD-11 quite hard to land, as it needed to come in at a very high speed for a wide-body jet. It was a contributing factor in several accidents (FedEx 14, FedEx 80, Lufthansa Cargo 8460), and pilot training/technical fixes never fully compensated for the design flaw. And in the end, the aircraft kind of failed to reach its fuel consumption target. However, it's still flying today, and it's still a workhorse for cargo companies.


Ultimately it was not a technical failure: the failure was in allowing the plane to be sold with such a thoroughly bad design. Thus, a management and regulatory failure. Regulatory, because the FAA signed off on it, obviously without applying any of the processes that would have prevented it. Management, because the cost of this debacle will be many, many times what they saved by trying to skate by with a faulty design.


I think the blame should fall entirely on Boeing's management. The engineering details mainly read as an indictment of management.


You watched the Vox video and are reposting the summary here?

They are going to fix the software with no changes to the design.

They don't need to change the design; that would be expensive overkill. Software updates are as close as we can get to free. No company or human has infinite money to spend on anything; we can't always have the fantasy.


The MAX can be fixed by addressing the flaws in MCAS, and it will be in the end. Production will resume, the MAX will be slightly less of a commercial success than it should have been, Boeing will be fined a few billion for its failings, the FAA will have to take a hard look at itself, and everything will be OK.

I just hope the correct lessons will be learned from these crashes:

* have a truly independent certification process

* don't fix your physical design issues with software

* don't write and maintain software running on an airplane like software from the start-up world: significant design flaws are not tolerable in software running on an airplane, even if they can be fixed easily.

* don't hide new systems; if there is a new system, pilots should be trained on it

By the way, it's not the first time a new system has caught pilots by surprise because they didn't know about it. SAS Flight 751 is another example: ice ingestion damaged the engines, the pilots reduced thrust to reduce the stress, but an automatic system (ATR, Automatic Thrust Restoration), unknown to the pilots, pushed the thrust back to full power, destroying the engines. The plane fortunately crash-landed without fatalities, thanks to the pilots' skill and a bit of luck.

* Boeing should really think about replacing the 737; the old design imposes too many constraints, preventing a clean design.


It's not very easy to redesign the 737. Bigger engines need more ground clearance. More clearance requires longer landing gear. Longer landing gear requires a wider fuselage to fold up into. A wider plane is less fuel efficient and needs bigger engines, and suddenly you are a 757, 767, 787, or 777.


> It's not very easy to redesign the 737.

Too bad. I know I am now permanently reluctant to fly 737s


You really shouldn't be. There is a lot of misconception going around that the 737 is inherently unstable. This is not true. The 737-600/700/800/900 was one of the safest designs ever built (http://www.airsafe.com/events/models/rate_mod.htm). Yet of the few crashes it had, a surprising number were from pilot disorientation causing them to accidentally stall the plane.

MCAS is not a feature to correct an unstable aircraft; it's there to correct a confused pilot. MCAS is triggered when the following circumstances line up: 1) the airspeed is near stall speed (takeoffs and landings), and 2) the AoA (as reported by both sensors, now) is higher than the aircraft can sustain.

The problem was that 1) the AoA data was faulty, 2) it did not check for a disagreement with the backup sensor, 3) the system failed to reset after each corrective measure, so each correction compounded, and 4) the corrective action was 3 times stronger than approved.

All of these things have been fixed, but it also shows how many things have to go wrong for something to be catastrophic. The MAX has been built on a solid foundation, and that's much better than having to start from scratch.


The airlines knew what they were doing when they didn’t pay for the “upgrade” and they doubly knew when they didn’t pay for it after the first crash. Airline management is just as culpable in this debacle.


This is bonkers. Nobody anywhere in the world should expect that one buys an airplane which is unsafe by design. Also, the idea that there are two versions of a plane, one that kills people and one that doesn't, and if you buy the wrong one that's completely your fault is just insane.


Frankly, you have no idea what you're talking about. This is how the airline industry works. Airlines WANT cheap planes and they ARE WILLING to and DO skimp on some safety features to save money. Airlines WANT safety features, at least some of them, to be optional so that they aren't forced to spend money on them if they don't want them.

For example, backup fire extinguishers in the cockpit? Optional. Extra oxygen masks? Optional. Advanced radar? Optional. There are hundreds of items like this on every model, whether it's Boeing or Airbus or someone else, because their customers WANT them to be optional.

And by the way, not all airlines do this. American paid for the upgrades. Southwest paid for the upgrades.

So maybe you should ask yourself why it is that Lion Air and Ethiopian Air were willing to spend huge amounts of money on a new jet, then spend some $1-$2 million on optional upgrades, and yet not include the MCAS upgrade among those options (and they were made well aware of it, as you can see from other airlines' purchases of these upgrades). Or for that matter, maybe you should ask why Lion Air knew that the plane was having trouble with its AoA sensors for several flights prior to the crash, yet did not perform the required maintenance on them (a serious violation) and did not pass the information along to its next crews (also a serious violation), who were caught completely unaware.

Airlines run at 2-3% margins, and they do so by trimming costs in every possible area. Sometimes those areas are safety related. Airlines are not doe-eyed naifs who just trust whatever Big Boeing/Airbus tells them. They know how the planes work, they have their own pilots, they have their own engineering teams, they have their own specialists and experts, and they make decisions to sacrifice safety for savings, and they do it ALL THE TIME.


There's plenty I could say in answer to this but most of it is unnecessary.

There is no excuse for a company such as Boeing, working under the regulation of the FAA, to produce any version of any model of plane which is fundamentally unsafe to fly. Period. None.

We're done here.


Life must be great in black and white.


The $80K upgrade was simply an AoA disagree warning light and has zero effect on MCAS behavior. There is no evidence that airlines were told about MCAS until the Lion Air accident.


There is more than enough blame to go around. E.g., Congress, for the last N decades, failing to fund the FAA at the levels clearly needed.


Yeah, they willingly bought these planes. Airlines should be held equally responsible.


You’re saying that Boeing intentionally sold unsafe airplanes? And the airlines, knowing this, still bought them?


I'm giving money to the airlines. Not Boeing.

>You’re saying that Boeing intentionally sold unsafe airplanes?

Yes.

>And the airlines, knowing this

You know airlines have their own engineering teams, right? And the planes were bought explicitly to save money.


> Will the 737 MAX continue to fly for two to three decades after all the items above have been addressed? yes.

There is no doubt the MCAS can be fixed.

But I would say that, with all the bad press the 737 MAX has received, there must be some doubt as to whether it will fly for decades to come.

I would say the flying public will need some convincing before they consider the plane safe.

> However, it's still flying today, and it's still a workhorse for cargo companies.

One of the reasons the DC10 is used for cargo is that very early on the DC10 faced its own 'bad design' issues, which resulted in several fatal crashes.

A fault in the DC10 cargo door meant it sometimes did not close properly, which could then result in an explosive decompression.

That fault and those early crashes greatly helped the 747 win the race to be the dominant wide body passenger jet of that time.


The flying public will buy the cheapest ticket, same as always.

This isn't even the first time a design flaw leading to loss of control has crashed multiple 737s.

https://en.wikipedia.org/wiki/Boeing_737_rudder_issues


I'm confused with your response. So nothing to see here? Nothing to learn and change? You seem to say we should accept design flaws (like we did with the MD11?)... correct? Or?


Basically yes. The airlines learnt to mitigate the risks associated with the design, and there were a few small modifications (a light indicator to detect bounced landings, for example). There is a risk, but it's at an acceptable level.

The fact is, an airplane is a huge investment, made to last 20, 30 and in some cases even 40 years (not necessarily with the same owner). You cannot exactly throw it away and buy a new one even if it has defects. At most, it ends up lasting a bit less (e.g. 15 years instead of 20) or is relegated to a specific usage where the consequences of failures are less dramatic (e.g. freight).

There are still 20/25-year-old airplanes flying with passengers, which are inherently less safe than (properly designed) new airplanes because of their age and their avionics. Yet they are still flying.


Will it fly? I think we live in a different world now. Via social media, the 737 MAX has acquired the perception of being a "Boeing death plane".

Public opinion could ground it.


From the people I've spoken to about this, I'd say less than half even know that the 737Max is a thing, never mind that it's crashed twice.


That is very interesting as it's been front-page news regularly in pretty much every newspaper (ok, I only read the NYT and WSJ, but it's quite the topic of discussion on those two papers).

Every time there is news my entire office lights up with discussion about it. Then again, I work for a company called "Pilot" so perhaps people are more interested than average. It's not an aviation company though ;)


The DC-10 did fine after its rough start. It's still flying today.


That's also the interesting part.

The DC-10 ended up being a reliable airplane, but its early crashes damaged its reputation heavily, and MD's handling of those issues was poor.

Commercially, MD never completely recovered, and was absorbed by Boeing in the '90s. (To be honest, that's not the only factor; there was also competition from the L-1011, and the fact that a trijet was somewhat of an evolutionary dead end.)

Boeing is much larger and much stronger; it can probably cope, but it will be a hit to their best-selling aircraft. It's basically the 737 that finances the new 777 or 787, costly programs not certain to recoup their design costs (same with Airbus: the A320 is basically financing the A380 failure). At the same time, part of the 737 market is somewhat captive, with the biggest low-cost carriers that use it (Southwest, Ryanair) unlikely to switch.


There are thousands of "hacks" like this in every modern airliner. That's how complicated problems are solved: you come up with a basic idea, and you iterate on it thousands of times until you squash all the edge cases.


You don't cover edge cases mainly by writing specific code for everything and adding onto the existing load. You do it mainly by creating a single elegant solution that covers all cases. Every line of code and every technical layer you add makes you susceptible to even more bugs and edge cases.

And just because the airline industry does software backwards, doesn't mean that you should do so.


Or, you know, you don’t put engines way too big for your frame.


There really aren't. Please don't state opinion as fact. There aren't "thousands" of "hacks" to make an airframe fly reliably. That is preposterous.


OK, I am going to comment, as a certified and practising functional safety engineer with a TUV number and experience designing and building industrial systems to IEC61508 (functional safety parent standard) and IEC61511 (process industries).

Functional safety is the engineering discipline concerned with designing instrumented safeguarding systems for machinery, to protect humans from harm, to a deterministic safety performance level.

I.e., just the right amount of safety, so that all the safety money is spent in the right places and in the right amounts to reduce risk across the board to the required level, not over-investing in one area while neglecting another. (Or so the dream goes; in practice it is a moving target based on a lot of guesses, and you hope the swings and roundabouts more or less balance out.)

Aeronautics design is one of the closely aligned fields within this overall discipline. The closest thing I have worked on in terms of risk/consequences is mine winders: a safety failure can kill 10-100 people in one go, and they ride multiple times every day.

Right now, this very minute, before needing a break at 1am to browse Hacker News, I was trying to wade through a mess of a fault tree analysis that my current project owner's "specialised" consultant has produced for the systems I currently need to instrument for safety.

Most people in general, but especially Americans who live primarily with prescriptive standards, struggle to come to grips with the nature of performance based safety standards. There is no "do it like this and you have met code and have no problems" - you have to analyse and build everything up from scratch.

It is all about layers: layers of risk reduction that eventually (whether by perception or reality) get the risk down to an acceptable level. So there are kludgey little things that get stuck on as hacks to address this issue or that, not uncommonly pet issues of someone on the review panel. Repeat this several hundred or thousand times and any hope of some kind of uniformly elegant and simplified solution is pretty slim.

The general reliance is on redundancy and independence, eg layers of protection, "defense in depth", or as more commonly known "the swiss cheese model" - you get a bunch of slices of swiss cheese and when the holes line up to allow a path through, that is when an accident can occur. More layers, less chances (also smaller holes, but that is another story again).

And, as almost always, the machines are actually the easy part most of the time. It is the humans that design, build, test, maintain and certify the machines that are the weak point, over and over again. Plus the creative ways humans can get around the systems in place to protect them: getting the job done when the system is telling them to stop, or doing a maintenance task a new "better way" despite a manual that might have cost over $100k of engineering time to write and approve telling them to do it a specific way, etc etc etc.

90% of the time, overly conservative thinking occurs during risk analysis ("we might get hit by a meteorite; happened to my cousin once"), which can layer complexity, and the associated uncertainty and poor availability, onto a solution.

10% of the time, there is the wishful thinking of "it will never happen because I have never seen it or heard of it" that allows the unexpected and unusual (black swans, often, if you will) to sneak through, at least the first time. Endless discussions occur about "credible scenarios"; sometimes the "discussion" is won by the dominant personality in the room, who might also be doing some of the pay reviews next month.

It is incredibly difficult to be the person who has to herd the flock of cats that represent all the stakeholders in a hazard review and risk assessment. These workshops sometimes run for months, in extreme cases for years on and off, considering every system, subsystem, part, action, event, procedure etc, and all the possibilities, how they can go wrong, and what might mitigate failures and events, on and on.

I could write about this all night, but I guarantee you that any magical opinion or assumption you might have about graceful and elegant solutions to difficult and dangerous problems being the norm is unrealistic. There is always a consensus or committee to satisfy, often top-heavy with people who might have to own or operate the machine in question but never designed anything in their lives. You fight for the things you know matter and concede some of the crap, hoping subsequent reviews will see it as pointless or not credible.

All of this is the reason that grandfathering is so attractive. To apply the current internationally recognized performance based safety standards from scratch to design something as complex as a plane that can kill hundreds of people in one go is an incredibly difficult task. And from a business perspective fraught with immense dangers of totally unpredictable outcomes impacting budget and schedule and even viability.

This is a highly specialised field with what are often counter-intuitive outcomes (otherwise you would just let John out the back room design the whole plane from scratch, because "he knows what he is doing").

While I am aghast at some of the information about design decisions that is emerging, none of it surprises me in the least. I can see directly how a number of them may have effectively resulted from the path of least resistance when a product had to be produced.

I like flying older planes in general, as long as the airline does reasonable maintenance. The unexpected has often been detected and corrected, and the chances of latent faults turning up decrease with hours in service. Plus I always remind my fearful daughter that the taxi ride to the airport is more dangerous, by the numbers.


Please don't state opinion as fact.


I don't know, I think the 3 points in the article make it glaringly obvious that the root cause is NOT engineering.

The decisions made clearly ignored engineering and historical precedent at every turn.

It's sad because Boeing has had some wonderful engineers, and Boeing aircraft have traditionally allowed the pilots to have the final say.


> So Boeing produced a dynamically unstable airframe, the 737 Max. That is big strike No. 1. Boeing then tried to mask the 737’s dynamic instability with a software system. Big strike No. 2. Finally, the software relied on systems known for their propensity to fail .... Big strike No. 3.

The article definitely does partially blame engineering.


We don't know that the MAX is inherently unstable. That's still conjecture at this point. MCAS may have altered flight characteristics to match older 737s and avoid additional pilot training. If the MAX is inherently unstable, then this scandal is much bigger and more far-reaching.


I heard that MCAS was required to avoid a situation where the MAX, after reaching a certain pitch, would continue pitching up into a stall even if both pilots released the yoke. I don't know if that's considered "inherently unstable," but from what I heard, this is against FAA regulations for any commercial airplane. That's probably a big reason why Boeing made it so difficult to completely disable the MCAS system.

My understanding is that MCAS altered flight characteristics not only to match older 737s and avoid additional pilot training. MCAS altered flight characteristics so the FAA would approve the MAX as a commercial aircraft, period. The fact that they could match older 737s' flight characteristics for a speedier FAA approval was just gravy.


The article is making the point that the MAX is inherently unstable because of the larger engines causing the "pitch up" problem with an increasing angle-of-attack:

> Pitch changes with increasing angle of attack, however, are quite another thing. An airplane approaching an aerodynamic stall cannot, under any circumstances, have a tendency to go further into the stall. This is called “dynamic instability,” and the only airplanes that exhibit that characteristic—fighter jets—are also fitted with ejection seats.

So arguably the existence of MCAS in the first place indicates that the aircraft design is dynamically unstable (otherwise MCAS wouldn't have been necessary).


There are several respected industry professionals that believe the MAX is inherently unstable (and maybe they have insider information...) but this is not considered "fact" right now. (AFAIK)

MCAS was needed to maintain the original 737 type rating, which allows 737 pilots to fly any 737: significant operational flexibility and cost savings for airlines.

Commentary from blancolirio who is a current 777 pilot. https://www.youtube.com/watch?v=zGM0V7zEKEQ&t=0s


I'm not in any way an authority when it comes to aeroplanes.

However, my understanding is that the reason why MCAS was needed to maintain the original 737 specification is because of the "pitch up" behaviour on increased AOA (which is what is being described as "dynamic instability" in TFA).

The video you linked doesn't disagree with this -- though it's phrased as being primarily there to "replicate the same feel as earlier versions of the 737, by giving a little bit of nose-down trim". The article claims that being dynamically unstable means that at high-AOA you get nose-up lift (I'm not a pilot or aeronautics expert, so this might be an incorrect definition -- but I've not seen anyone disputing that definition nor disputing it's against FAA guidelines).

If you need an additional system to "replicate the feel" of not having nose-up lift at high-AOA that tells me that your plane design must therefore have nose-up lift at high-AOA. The guy in the video then goes on to say that it's an inherently stable design, but he doesn't really qualify it (other than saying that all other 737s are stable designs) and goes on to say that "the nose goes a little bit light".

Obviously we should hold back judgement until we know all the facts, but "the 737 MAX is an inherently stable design" is not someone holding back judgement.


But wasn't this explained in the article already? A dynamically stable plane doesn't aggressively rotate about any of its axes when, for example, you increase or decrease throttle. The location of the MAX's engines generates additional pitch-up force when engine power is increased, and even more so when the plane is already pitched high. Earlier 737s could do without MCAS because they were designed with smaller engines in lower locations, in order to be dynamically stable.


You are referring to thrust asymmetry, not dynamic stability. Thrust asymmetry is actually not that different between MAX 8 and NG. MAX 8 engines have more thrust, but they are also mounted higher (closer to the centerline), reducing torque.

Dynamic stability is a tendency of the plane that flies straight and level to maintain this straight and level flight. MAX 8 still has this property, MCAS or not.


The engineering blame is that they couldn't engineer themselves out of a political problem. I wouldn't call that "engineering blame".

Deep ethical failure, maybe, but this here place usually has a hard-on for epically failing the most basic ethical non-challenges.


See my other comment in this thread, engineering is done by humans, and they are almost always the weakest point in all phases of engineering.


Of course, that much engineering failure is not possible without even more management and regulatory failure. But documenting the engineering failures is necessary to quantify the management failures, at least until discovery in the wrongful-death lawsuits begins.


I find that engineering vs management is not a very useful discussion point. Management is composed of engineers too and engineers are responsible for the output of their work. Production of something that doesn't work is an engineering failure.

It's depressing that everyone went along with this. Clearly there's a lack of impartial checks and balances in the process.


I believe Boeing's shift from an "Engineering Culture" to a "Corporate Political Culture" is the root of the issue. https://www.seattletimes.com/business/boeing-aerospace/book-... https://www.forbes.com/sites/stevedenning/2013/01/21/what-we...

Boeing executives shifted focus from engineering to cost-cutting, outsourcing, and moving production & HQ to gain political influence. Moving Corporate HQ from Seattle was to insulate executives from engineering and increase political leverage in DC.


If you're an engineering-led company, putting MBAs in charge of it will kill it - and probably a lot of other people too.


Regarding #6...Boeing charged $80k to unlock the software feature to gather data from more than 1 angle of attack sensor.


> Regarding #6...Boeing charged $80k to unlock the software feature to gather data from more than 1 angle of attack sensor.

80k for a _light_ in the panel, not to have MCAS take the two sensors into account - which it couldn't do anyway. With two sensors, how do you know which one is right and which one is faulty?


Well you can’t get quorum but you can certainly shut off when you detect a conflict.


You can't shut MCAS off AND also NOT train the pilots on the system and the MAX's flight characteristics. Conflicting system goals.


To me this is the crux of the issue. And it's not uncommon in software development:

The developer fails to check for an error condition or raise an exception because doing so would add too much complexity to the system. So instead it is assumed (or hoped) that it simply can't or won't happen. Problem solved...

(Edit: Or that the user will just have to reboot if it happens.)


Yes. Not checking for errors is the feature of the MCAS "solution" that hides the system from pilot training AND from involvement with other 737 system/team engineers. A bureaucratic solution to a marketing problem. I wonder how many programmers they had to go through to find it.


That's 100% false. There are people below you correcting this and you still haven't changed it. The paid option to display AoA information was supposed to be used by military pilots, who are familiar with how to interpret information from AoA sensors.


The amount of misinformation on this point is astounding. That's not true in any way.


In court that will cost them dearly: it demonstrates they were aware using two sensors would be safer.

Probably AA or SWA demanded it, and Boeing compromised by charging for it.

But... #6 was about paying attention to the other computer, not the other sensor. That would involve some big and expensive changes to the flight computer software, which they probably should have done long before the MAX project started.

More management failure.


It's nonsense. There was no paid option that affected the use of the second sensor.


tl;dr: the first thing you learn doing control systems is to NEVER use "+= 1" when it relates to an electro-mechanical device.

I learned this a long time ago by putting a robot through a door-frame at max speed.

Edit: To add to this in a way that might actually make it useful to someone, motor outputs should almost always be a continuous function of something rather than their own internal state. MCAS should have been coded as something like (vastly oversimplified)

    if (AoA >= MAX_AOA)
        trimPosition = g(f(AoA, IAS), trimPosition, dt); // AoA/IAS-dependent ramp function
    else
        trimPosition = g(trimInput, trimPosition, dt);   // ramp function

not

    if (AoA >= MAX_AOA)
        trimPosition += 1;

Because the latter is very likely to result in a runaway even if you have bounds checking somewhere else. Worst case, you don't have bounds checking, the position/register value wraps around, and the tail just starts spazzing out, flapping up and down as fast as the motor will drive it.


PID controls do exactly what you are objecting to.


It's more nuanced than that. My ramp functions technically have a += 1 somewhere in them. You just don't want to += a motor value directly or otherwise add to state out of a feedback loop. You can verify the PID function in simulation/unit test. It's much harder to unit test the motor/driver/controller on a stand.


It seems like what you're really getting at is that when you have a control like this where there's a non-linearity in response for a linear change in input quantity, what you really want to do is instead have a non-linear change in input quantity so that there is a linear response. In that case, you would characterize the effect of the control on the response variable and arrive at a table of acceptable values. Then your +=1 becomes an increment of an index which returns the next-highest acceptable value and you no longer produce a non-linear change in the response.


No, you’re missing the point a bit. It’s not about the magnitude of the response, it’s about the propensity for a feedback loop.

"+= 1" is shorthand for "ignore everything going on in the world and increase your value". This is almost never what anyone actually wants, so they end up spending a bunch of time guarding against calling it when the value is already at its maximum.

Instead, the safe thing to do is only assign to it from a function with a ceiling.

val = min(CEIL, val+1)

It’s way too easy to get runaways with += 1, even in serious systems like this one. Every time I see it in code I review, where the value is some long-lived thing, I just confirm with the author that they don’t care if it overflows, because it’s probably going to happen.
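A two-function demo of why the unguarded increment is scary on fixed-width hardware registers (the 8-bit width and the ceiling are arbitrary, chosen just for the demo):

```c
#include <stdint.h>

/* Unguarded: on a fixed-width register, 255 + 1 silently wraps to 0. */
uint8_t unguarded(uint8_t val) { return (uint8_t)(val + 1); }

#define CEIL 200  /* arbitrary ceiling for the demo */

/* Guarded: only ever assign through a function with a ceiling. */
uint8_t guarded(uint8_t val) {
    return val >= CEIL ? CEIL : (uint8_t)(val + 1);
}
```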


All practical PID controls have anti-windup features.
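Right, and the trick is worth spelling out. A minimal sketch of a PID step with integrator clamping; the gains and limits are placeholders, not tuned for anything:

```c
typedef struct {
    double kp, ki, kd;      /* gains */
    double integ, prev_err; /* controller state */
    double integ_max;       /* anti-windup limit on the integrator */
} pid_ctrl;

double pid_step(pid_ctrl *c, double err, double dt) {
    c->integ += err * dt;
    /* Anti-windup: clamp the integrator so it can't accumulate without
     * bound while the actuator is saturated. */
    if (c->integ >  c->integ_max) c->integ =  c->integ_max;
    if (c->integ < -c->integ_max) c->integ = -c->integ_max;
    double deriv = (err - c->prev_err) / dt;
    c->prev_err = err;
    return c->kp * err + c->ki * c->integ + c->kd * deriv;
}
```

The += on the integrator is exactly the pattern under discussion: it lives inside a feedback loop and is immediately clamped, so a persistent error can't wind the output up indefinitely.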


Stab trim cutout is not the only way to disable MCAS. Extending the flaps any amount also disables MCAS.

Turning on autopilot also disables MCAS, though this isn’t entirely effective since spurious AoA readings may quickly disable the autopilot again.


Boeing put out an emergency airworthiness directive after Lion Air. It doesn't tell pilots to lower flaps. AoA sensor failure causes IAS Unreliable warnings, and the checklist for that item demands that flaps be left alone -- if you don't know how fast you're going, lowering flaps could cause a wing stall.

It's not reasonable to expect pilots to disobey checklists. We would all be less safe if they did. If pilots are following Boeing's instructions and planes are crashing, that's on Boeing.


One question I've had is why Boeing changed the design of the stab cutout switches from the 737NG's arrangement -- a right-side switch that disabled autopilot control of trim and a left-side switch that disabled the trim motors entirely [1] -- to a similar-looking pair in which each switch instead controls the primary and backup control motors of the elevator trim [2], rather than selecting the source of the input. (If you look closely at the yoke trim switches, there are actually two independent switches grouped together by a frame so that they move simultaneously; one drives the primary motor, and the other the secondary.)

Assuming they'd kept MCAS on the switch associated with the automatics, the procedure would have been to throw the AUTO PILOT switch, disabling MCAS, but keeping the MAIN ELEC switch on, allowing them to trim back to neutral for that speed.

[1] https://www.airliners.net/photo/Australia-Air-Force/Boeing-7...

[2] http://www.b737.org.uk/mcas.htm#stcs


EDIT: Oops, there's only one trim motor on the NG; both of the paired yoke trim switches must be actuated, though. But the input cutoff switches still work the way I originally stated.

Not sure about the MAX's number of trim motors, but the MAX definitely doesn't have input-source cutoff switches.


> A sixth was that, besides comparing redundant sensors, it could have compared what the other flight computer thought it should be doing.

If you mean the AoA sensors: AFAIK there was no redundancy at all in the way MCAS was designed -- exactly one sensor was ever used by MCAS. And the last time I read about Boeing's proposed software changes, they wanted to keep it that way, only adding a notification to the pilot when the sensors disagree.


Comparing redundant sensors and comparing the judgment of the other computer were both things they could have done, and in both cases they utterly failed to do so.
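And the cross-check itself is cheap. A sketch of what such a sanity gate could look like -- the names are mine, and while a 5.5-degree disagree threshold was reported for the eventual software fix, treat the number here as illustrative:

```c
#include <stdbool.h>
#include <math.h>

#define MAX_AOA_DISAGREE 5.5  /* illustrative disagree threshold, degrees */

/* Only allow automatic trim when the redundant AoA sensors agree and both
 * flight control computers independently reach the same decision. */
bool mcas_may_activate(double aoa_left, double aoa_right,
                       bool fcc_a_commands, bool fcc_b_commands) {
    if (fabs(aoa_left - aoa_right) > MAX_AOA_DISAGREE) return false;
    return fcc_a_commands && fcc_b_commands;
}
```

With a gate like this, the Lion Air failure mode (one AoA vane reading ~20 degrees high) would have inhibited activation instead of commanding trim.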


Because their main premise was that they were selling the "same old" plane, and any change to any workflow would make it obvious that it was not.


Architecting solutions is hard. In this case, you need knowledge of motors, flight controls, sensor fusion, etc.

It’s easy to find edge cases once they present themselves (tragically, in this case). But most electrical-mechanical-software assemblies have similar issues.


This case is unusual because, rather than a whole series of things that all had to go wrong before the plane would crash, this system has numerous failure modes that individually were almost enough by themselves to cause a crash. It is only astonishing that it took so long for it to happen.

In the Lion Air case, painfully inadequate maintenance contributed.


Thus the standards and procedures that, if followed, would have prevented the problem.

There is a standard that says automated controls are not allowed "authority" such that the pilot cannot counteract it.

There is a standard that says failure of a single sensor must not cause a critical failure.

Either violation alone makes the design not airworthy, and not certifiable for civil aviation, preventing both crashes.


> In this case, you need knowledge of motors, flight controls, sensor fusion, etc.

Expertise which should be readily available at one of the world's foremost designers of high performance commercial aircraft.



