When you start criminally charging people for mistakes in critical systems, what you're going to get is coverups.
With a no-fault system, mistakes are not covered up, but fixed.
With a fault system:
1. if a mistake is discovered, it will not be fixed because that would be an admission of criminal liability.
2. Quality will not be improved, because that is (again) an implicit admission that the previous design was faulty.
3. You won't get new airplane designs, because (again) any new design is an inherent risk.
I understand the desire for revenge, but there are heavy negative consequences for a revenge/punishment/fear based system.
P.S. I worked at Boeing on the 757 stabilizer trim system. At one point, I knew everything there was to know about it. It was a safety critical system. I did a lot of the double checking of other people's work on it.
I did not work in an atmosphere of fear.
The 757 in service proved an extremely reliable airplane. I've flown commercial on many 757s, and often would chat a bit with the flight crew. They were unanimous in their admiration for the airplane. Made me feel pretty good.
You're not the first person I've heard this idea from; it's from the NTSB charter:
The NTSB does not assign fault or blame for an accident or incident; rather, as specified by NTSB regulation, “accident/incident investigations are fact-finding proceedings with no formal issues and no adverse parties … and are not conducted for the purpose of determining the rights or liabilities of any person”
But there was plenty of blaming / faulting going around outside of the NTSB. Hence the fraud we're currently talking about where Boeing tried to blame the pilots who rather conveniently for Boeing died and are no longer around to defend themselves.
I'm glad you feel personally assured through your enthusiastic personal connections, but that does not work for me. That it works for you actually makes me feel less safe flying.
> That it works for you actually makes me feel less safe flying.
Feel free to ask the flight crew next time you're on a 757. The service record of the 757 speaks for itself. I fly Iceland Air because their fleet consists of 757s. I literally bet my life on my work.
> Boeing tried to blame the pilots
The pilots were partially responsible.
1. First incident - the LA crew followed emergency procedures and continued the flight safely.
2. Second incident - the LA crew restored normal trim 25 times, but never turned off the stab trim system, which is supposed to be a memory item.
3. Third incident - Boeing sent an Emergency Airworthiness Directive to all MAX pilots. The EA crew did not follow the simple 2 step procedure. A contributing factor is the crew was flying at full throttle, and ignored the overspeed warning horn you could hear on the cockpit voice recorder.
> who rather conveniently for Boeing died and are no longer around to defend themselves.
The flight voice and data recorders spoke for them. We know what they did.
If you want to know the truth, read the official government reports on the crashes. Ignore what the mass media says.
Weird to see such a strongly held belief of not blaming people abandoned so quickly.
It's pretty standard to blame the pilots, but the Swiss cheese model of risk analysis applies here: from my vantage point, opening such large holes in the MCAS layer with the expectation that events are caught at the pilot layer is inadequate. That not all events are caught at the pilot layer is to be expected.
Edit: I should add that the pilots already got a death sentence for their involvement.
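The Swiss cheese intuition can be made concrete with a toy probability sketch. All the numbers and layer names below are invented for illustration; they are not Boeing's actual failure rates:

```python
# Toy Swiss cheese model: an accident occurs only when every defensive
# layer fails at once. All probabilities are hypothetical.

def accident_probability(layer_failure_probs):
    """Probability that all independent layers fail simultaneously."""
    p = 1.0
    for layer_p in layer_failure_probs:
        p *= layer_p
    return p

# Hypothetical layers: sensor, MCAS design, pilot response
baseline = accident_probability([0.001, 0.01, 0.1])   # ~1e-6

# "Opening a large hole" in the MCAS layer (failure prob 0.01 -> 0.5)
# leans heavily on the pilot layer catching the event:
degraded = accident_probability([0.001, 0.5, 0.1])    # ~5e-5

print(degraded / baseline)  # overall risk up roughly 50x
```

The point of the sketch: widening one layer's hole multiplies overall risk even when the remaining layers are unchanged, which is why "the pilots should have caught it" is not an adequate defense of the design.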
Finding the source of the error is not the same thing as meting out punishment. I am not aware of any pilot being criminally charged for making a terrible mistake.
Though their careers are usually ended by it. That seems fair to me. An engineer at Boeing who could not be relied on will also find his career shunted to working on non-critical stuff. That's fair as well.
> That not all events are caught at the pilot layer is to be expected.
Of course.
But consider that piloting can all be automated today. Why isn't it done? Because pilots are the backup system for unexpected failures. A great deal (most?) of pilot training is training on what to do in an emergency. One aspect of that training is turning off the stab trim system if the system runs away. It's a switch on the console in easy reach, and it's there for a reason.
Remember that Boeing also sent out an Emergency Airworthiness Directive to all MAX pilots after the LA crash. The EA pilots did not follow it.
Do you want to fly with pilots who don't read/understand/remember emergency procedures? I don't. I wouldn't put them in prison, though, I'd just revoke their license to fly. Pilots undergo regular checks for their competency. No pass => no fly.
Dying in plane crashes and having their license revoked are pretty severe punishments. Perhaps if management could similarly have their license to practice management revoked, we could consider that somehow equivalent. I'm sure many people would prefer prison time to dying in a plane crash or having their career destroyed.
I don't trust Boeing and haven't for several decades, so while I agree that in theory an automated system could fly aircraft with a great deal of reliability, I do not trust Boeing to deliver that reliability. I like the idea that someone educated in the safety of the aircraft is up in the airplane with me, sharing in the risk of flying, even if all they are doing is babysitting a computerized system.
Most of my knowledge of Boeing is from engineers that used to work there and complained bitterly about the degradation of engineering safety culture. And sure, perhaps flying is safer than it has ever been, I think that's more down to improvements in technology than the culture of safety. Unfortunately while technology generally improves it's hard to say the same thing about culture. In general I'm upset that flying is less safe than it could have been.
> Dying in plane crashes and having their license revoked are pretty severe punishments
Neither are punishments. If you stab yourself that's your own doing, not punishment. Removing someone from a position where they cannot be trusted is not a punishment.
Airbus has had problems with their automation systems, too, that the pilots were able to save. And crashes where the pilots forgot how to recover from a stall.
Nobody is saying safety cannot still be improved. Every accident gets thoroughly investigated, and all contributing causes to the accident get dealt with.
Not sure what you mean, but pilots are trained to deal with runaway stabilizer trim. The EA pilots were also sent an Emergency Airworthiness Directive reiterating instructions on dealing with runaway stabilizer trim.
They're also trained to reduce thrust when they hear the overspeed horn, rather than continue at full throttle. Overspeeding the aircraft is extremely dangerous, and also makes it almost impossible to manually turn the trim wheel.
As an example, you said they “ignored” the over-speed warning. I would bet it’s much more likely that there was too much going on in the cockpit, likely with confusing or conflicting information, that prevented them from making the correct assessment in a time critical environment. Expecting the humans to act perfectly when the system is working against them is as bad of a design assumption as assuming your airspeed sensor will always act perfectly.
In the LA crash, the pilots restored normal trim something like 25 times over 11 minutes (or maybe it was 11 times in 25 minutes, I forgot). That's plenty of time to realize that the stab trim system should be turned off. They never turned it off.
In the EA crash, they restored normal trim at least once. They had the overspeed warning going off. I don't recall the exact sequence of events, but they turned off the stab trim with the airplane sharply nose down, and tried to restore normal trim by turning the manual wheel. At high speed, they should know that wouldn't work. They'd need to unload the stabilizer first by reducing the speed.
The overspeed warning should never be ignored, as it means parts of the airplane can be torn off. Especially in a dive.
Even so, if they had followed the directions in the Emergency Airworthiness Directive to use the electric trim thumb switches (which override MCAS) they could have restored normal trim.
It's not hard:
1. restore normal trim with the thumb switches
2. turn off the stab trim
That's it.
> Expecting the humans to act perfectly
Reading the EAD and doing steps 1 and 2 is not some super complicated thing. Besides, pilots who get flustered in emergency situations should be washed out of flight school.
My dad was a military pilot for 23 years. He had many in flight emergencies, but kept a cool head and properly resolved each of them. In one, the engine on his F-86 conked out. The tower told him to bail out. But he knew exactly what the rate of descent of the F-86 was, his altitude, his speed, how far he was from the landing strip, the effect of the wind, etc., and calculated he could make the field. Which he did, saving the very expensive airplane. (However, he was reprimanded for not bailing out, as the pilots were more valuable than the jet.) But he was unrepentant, confident in his calculations.
BTW, I've talked to two different 737 pilots on different occasions. Their opinion is the pilots should have been able to recover.
>In the LA crash, the pilots restored normal trim something like 25 times over 11 minutes (or maybe it was 11 times in 25 minutes, I forgot). That's plenty of time to realize that the stab trim system should be turned off. They never turned it off.
Yes, and Boeing also changed the functionality of the stab trim cutout switches such that the flight computer running MCAS could never be isolated from the electric trim switches on the yoke. And using those trim switches reset the undocumented MCAS system to activate again 5 seconds after release. MCAS also ramped itself up far beyond the documented limits sent to the regulators, up to a max of 2.5 degrees per activation, triggered by a fubar'd, non-redundant, misclassified, intentionally non-cross-checked, ultimately safety-critical sensor!
>In the EA crash, they restored normal trim at least once. They had the overspeed warning going off.
Airspeed unreliable checklist on takeoff/climbout: increase throttle. Maybe they went through a different CL?
>I don't recall the exact sequence of events, but they turned off the stab trim with the airplane sharply nose down, and tried to restore normal trim by turning the manual wheel.
Because of aforementioned screwing with the cutout switches which was not clearly communicated to pilots and only came out in retrospect.
>At high speed, they should know that wouldn't work. They'd need to unload the stabilizer first by reducing the speed.
...using a maneuver removed from the documentation for several versions of the 737, one that only military-trained pilots clued into cold in a simulator, and that a civil aviation captain failed to arrive at, again due to its undocumented nature since about the dino-737.
>Reading the EAD and do steps 1 and 2 is not some super complicated thing. Besides, pilots who get flustered in emergency situations should be washed out of flight school.
Aerospace engineers who design airplanes and don't take into account human info processing limitations, human factors research, basic ethics, and an appreciation for the fact that "management can wish in one hand, shit in the other, and see which one fills up first, because I. Am. Not. Going. To. Kill. People." should likewise wash out. But lo and behold, we live in an imperfect world with fallible people. Which has generally meant we should be on our toes to be up front about what we're building, instead of hiding implementation details from regulators for fear of all the money we might lose if we actually did our jobs to the professional standard we should.
>My dad was a military pilot for 23 years. He had many in flight emergencies ...
>BTW, I've talked to two different 737 pilots on different occasions. Their opinion is the pilots should have been able to recover.
Walter, if it had been built the way it should have, documented the way it should have been, and had the simulator training it damn well should have required, it never would have happened.
Stop trying to justify it. This wasn't a bunch of "get 'er done" skunkworks engineers, working on mindbending, cutting edge, ill-understood designs.
This was a civil transport project, co-opted by a bunch of fiscal min-maxer's who pressured everyone, and continue to pressure everyone to cut every corner imaginable.
I genuinely feel bad for you. It'd rip at my soul to see something I worked so hard for to fall so damn hard. I'm going through a crisis of faith on that front at the moment. It ain't fun at all.
We cannot afford to be kind to these institutions once we've left them, though. People like you were, and I certainly am, counterweights to people who think that all those corners we fastidiously upkeep and check are just so much waste, when they damn well aren't.
You can also read the Boeing documents. The hazard analysis listed MCAS as hazardous. By Boeing process controls, that should have mandated redundant sensors. Why were they optional? Because that could increase profits. (Seattle Times has done good reporting on this).
Boeing indeed has a failure here, and is responsible for it. But as the first LA incident confirms, the pilots, by following their training, were able to deal with it. It was the subsequent flight on the same airplane that crashed.
This remains omitted by the mass media, which undercuts calling any of its reporting on this "good".
There is also responsibility in the production of a faulty AOA sensor, and the failure of LA to fix the known faulty sensor, or inform the crew, before authorizing the next, fatal, flight.
I disagree. The media has covered the training (or lack of it) extensively. It seems like you want to point to a singular cause. As with most mishaps with complex systems, it’s the result of a cascade of failures. Training is only one part of the problem. Fixing that is still only an administrative mitigation which is the least desirable form of mitigation when designing a safety system. Even if the pilots had additional training, it wouldn’t fix the root cause of the issue and is just asking for trouble because administrative fixes “only” require humans to act perfectly in accordance with their training, all the time.
> The media has covered the training (or lack of it) extensively.
I've never seen it in the media. Not knowing what the stab trim cutoff switch does is a massive failure either on the part of the pilot or the training. Not reading the Emergency Airworthiness Directive is a failure of the pilot.
> It seems like you want to point to a singular cause
Au contraire. It seems that everyone but me wants to blame a singular cause - Boeing. There was a cascade of failure here:
1. A defective AOA sensor was manufactured and installed
2. After the first LA incident landed safely, LA failed to correct the problem before sending the airplane aloft again
3. Single path MCAS inputs could not detect failure
4. Pilots failed to apply known emergency procedures
> require humans to act perfectly in accordance with their training, all the time.
Of course we try to design airplanes so they do not cause emergencies. But we still need pilots to train to react to emergencies. You don't put a person in the pilot seat who is not well-trained in emergency procedures.
Making airplane travel safe means identifying and correcting ALL causes in the cascade of failures. Most accidents are a combination of airplane failure and pilot failure.
Sometimes pilots make deadly mistakes. That's why we have copilots, they check each other. But in the MAX crashes, both pilot and copilot failed to follow known emergency procedures. Why they didn't, I have no idea.
"Boeing" is not a cause. Things like their safety culture can be a cause, or lack of good process control can be a cause. The reason why Boeing is catching a lot of flack is because they deviated substantially from best practices. This is related to your other comment so I'll just respond here.
There is a general hierarchy when it comes to controlling hazards (remember, Boeing already identified MCAS as hazardous): remove the hazard, engineering controls, administrative controls, and PPE. We can apply a little thought experiment to see the gaps in what you're advocating when it comes to the MCAS issue. (Admittedly, this is contrived to just illustrate the point within the confines of a forum reply).
1) Remove the hazard: They could have redesigned the airframe to adjust the center-of-gravity to remove the stall issue that MCAS was developed to address in the first place. Why didn't Boeing do this? Because of cost and schedule pressure to have a new plane ready, after threats that American Airlines would take their business elsewhere.
2) Engineering controls: MCAS was an engineering control, but an incomplete one. Because MCAS was listed as "hazardous" in the hazard analysis it required redundant sensors. Why didn't they put on redundant sensors as the default? I can only speculate, but considering they were sold as optional, profit motive seems likely. (This also ignores the fact that MCAS should have been categorized as 'catastrophic' meaning they didn't fully understand the impacts of their system)
3) Administrative controls: this was the training piece that you're hanging your hat on. This has multiple problems. For one, even though MCAS changed the handling dynamics of the airframe, Boeing pushed hard to reuse the same certification to avoid additional pilot training. Again, this was a business decision to make the airframe more competitive. Secondly, administrative controls are inherently weak because of human factors. There's a lot that can go wrong if your plan is to have humans follow an administrative procedure. You claim "it's not hard". Sorry, this is just a bad mindset. Usually when I hear people say things like "it's not hard" or "all you've gotta do" when they're talking about complex systems, it indicates they have an overly simplified mental model. In this case, you may be ignoring the chaos in the cockpit, the fact that the plane has handling characteristics different from what the pilots were trained on, human factors related to stab trim force at speed, conflicting or confusing information (like why MCAS comes on at changing timeframes), time criticality, etc. Having administrative controls as your main mitigation is bad practice, and sets the system up for failure.
4) PPE: this isn't particularly relevant to this case, but a silly example of PPE control is giving everyone parachutes and helmets in case things went south.
You can obviously see PPE is an absurd control. But your main point is that the main control should be administrative, which is the next worst option. Boeing ignored good safety practice to pursue profit. So they probably deserve some of the heat they are getting.
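The redundant-sensor point above can be illustrated with a minimal cross-check sketch. The thresholds and values here are invented for illustration, not Boeing's actual logic: with a single AoA input there is nothing to compare against, so a failed vane is indistinguishable from a real high angle of attack; with two inputs, a disagreement can at least inhibit the automation.

```python
# Minimal sketch of why single-path sensing can't detect its own failure.
# Thresholds and trigger values are hypothetical.

DISAGREE_LIMIT_DEG = 5.5   # hypothetical AoA disagree threshold
TRIGGER_AOA_DEG = 15.0     # hypothetical AoA above which trim activates

def trim_single_path(aoa):
    """One sensor: a stuck-high vane looks identical to a real stall."""
    return aoa > TRIGGER_AOA_DEG  # commands nose-down trim

def trim_cross_checked(aoa_left, aoa_right):
    """Two sensors: inhibit the automation if they disagree."""
    if abs(aoa_left - aoa_right) > DISAGREE_LIMIT_DEG:
        return False  # fault detected -> do not trim, annunciate instead
    return (aoa_left + aoa_right) / 2 > TRIGGER_AOA_DEG

# Left vane fails stuck at 74 degrees while the true AoA is 5:
print(trim_single_path(74.0))          # True  -> erroneous nose-down trim
print(trim_cross_checked(74.0, 5.0))   # False -> fault caught, trim inhibited
```

This is the sense in which a single-path design "could not detect failure": detection requires something independent to disagree with.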
"redesigned the airframe" means designing an Entirely New Airplane. Boeing didn't do this because designing an entirely new airplane would have been:
1. an absolutely enormous cost, like a couple orders of magnitude more
2. several years of delay
3. pilots would have to be completely retrained
4. the airlines liked the 737 very much
5. mechanics would have to be retrained
6. the inherent risk of a new, untried airframe
There was nothing at all inherently wrong with the concept of the MCAS system, despite what ignorant journalists wrote. What was wrong was that it was not dual path, had too much authority, and would not back off when the pilot countermanded it. These problems have been corrected.
Pilot training - there have been many, many airplane crashes because pilots trained to fly X and Y did the Y thing when flying X. The aviation industry is very well aware of this. Boeing has been working at least since 1980 (and likely much earlier) to make diverse airplane types fly the same way from the pilot's point of view. This is because it is safer. And yes, it does make pilot training cheaper. Win-win.
> You claim "it's not hard". Sorry, this is just a bad mindset.
Yet it's true. Read about the first MCAS incident, the one that didn't crash. There were 3 pilots in the cockpit, one of which was deadheading. The airplane did some porpoising while the pilot/copilot would bring the trim back to normal, then the MCAS would turn on, putting it into a dive. Then, the deadheading pilot simply reached forward and turned off the stab trim.
And the day was saved.
I've seen your points many times. They are all congruent with what journalists write about the MAX. Next time you read one of those articles, I recommend you google the author to see what their background is. I have done so, and each time the journalist has no engineering degree, no aeronautical training, no pilot training, no experience with airline operations, and no business experience.
You can also google my experience. I have a degree in mechanical engineering with a minor in aeronautics from Caltech. I worked for Boeing for 3 years on the 757 stabilizer trim design. (It is not identical to the 737 stab trim design, but is close enough for me to understand it.) At one point I knew everything there was to know about the 757 trim system. A couple dozen feet away was the cockpit design group, and we had many very interesting discussions about cockpit user interface design.
I'm not a pilot myself, but other engineers at Boeing were, and they'd take me up flying for fun. Naturally, airplanes were all we talked about. My brother and cousin are pilots, my cousin was a lifer engineer at Boeing, my dad flew for the AF for decades in all kinds of aircraft, with an engineering degree from MIT. I inherited his aviation library of about a thousand books :-/ Naturally, airplanes were a constant topic in our family.
I've talked with two working 737 pilots about the MAX crashes.
I've read the official government documents on the MAX crashes, and the Emergency Airworthiness Directive.
That's what I "hang my hat" on. So go ahead, tell me I don't know what I'm talking about. But before you do, please look up the credentials of the journalists you're getting your information from.
P.S. "Aviation Week" magazine has done some decent reporting.
P.P.S. Amazingly, the "Aviation Disasters" TV documentary is fairly decent in its analysis of various accidents, lacking the histrionics of other treatments. But it's rather shallow, considering it's a 40 minute show.
> designing an entirely new airplane would have been: 1. an absolutely enormous cost, like a couple orders of magnitude more; 2. several years of delay; 3. pilots would have to be completely retrained
Yes, this is the same case I was making. They took a higher (or unknown) risk for business (profit) motives. That's why they deserve a large chunk (but certainly not all) of the blame.
> There was nothing at all inherently wrong with the concept of the MCAS system
My claim isn't that the concept was inherently wrong; it's that the execution was wrong. Their own process documents say so, and they also betray the fact that Boeing didn't understand its own airframe. The damning part of it is that they were likely wrong for the reason of increasing profit. (Still, even if MCAS isn't inherently bad, we still have to acknowledge it's not the best solution...see the discussion above about hierarchies of controls.)
>You claim "it's not hard"...Yet it's true.
This is exactly the wrong way to think about this. Just because a mitigation works some of the time doesn't mean it's the best mitigation. Can I still design a car with a coal-fired steam engine and cat-gut brake lines and drive it safely? Sure. But by modern standards, it's still a sub-par design and the likelihood of safe operation is lower because of it. That likelihood is the entire reason there is a hierarchy of controls. You are advocating against well-established best practices in safety and reliability.
>You can also google my experience. I have a degree in mechanical engineering with a minor in aeronautics from Caltech. I worked for Boeing for 3 years
Please don't do this next time and argue the points rather than appealing to (relatively weak) authority. I'm familiar with your points and can usually set a clock by the time it takes you to either bring up your experience at Boeing or some story about your daddy in these discussions. But you aren't the only one with aerospace experience. I've been an airframe mechanic. I also have an ME and additional engineering degrees to include a PhD and published aerospace-related research. I've worked in NASA for many more years than you worked for Boeing. I filled roles in aerospace, quality/safety, reliability, and software engineering related to both software and hardware design. I've worked alongside Boeing on crew-rated space systems. I've also piloted aircraft (although my ratings are no longer current). I've had dinners and discussed similar issues with pilots and astronauts with thousands and thousands of hours of flight time. But parading out your credentials doesn't make your points any stronger and tends to be the bastion of those without much else to rely upon. This isn't a pissing contest, so please make an argument based on its own merits rather than relying on credentials.
> The damning part of it is that they were likely wrong for the reason of increasing profit.
Are you still advocating designing a whole new airframe instead?
> we still have to acknowledge it's not the best solution
We don't have to agree on that at all. It's an inherently simple solution, although Boeing made mistakes in its implementation.
> Just because a mitigation works some of the time
It's turning off a switch. The purpose of that switch is supposed to be a "memory item", meaning the crew should not need to consult a checklist. The switch is for dealing with runaway stab trim. Reading the step-by-step of the crisis, it is impossible for me to believe that the pilots did not know they had a runaway trim problem. There are two wheels on the side of the console, painted black and white, that spin when the trim runs, making a loud clack-clack sound. They are physically connected to the stab trim jackscrew with a cable. If the trim motor fails, the crew can manually turn the jackscrew via those wheels.
> You are advocating against well-established best practices in safety and reliability
Turning off a system that is adversely working is well-established in aviation. It's quite effective.
> and argue the points rather than appealing to (relatively weak) authority
Appeal to authority is not a fallacy when one has some authority :-) And so it is fair to list what makes one an authority.
> This isn't a pissing contest
You might want to review the condescending and quite rude post you wrote that I replied to. Your reply here is also rather rude. I don't think I've been rude to you.
>Are you still advocating designing a whole new airframe instead?
That would be the ideal solution for that hazard. But I can’t say if it’s the best risk profile overall. I would settle for a properly implemented engineered mitigation of MCAS.
> It’s an inherently simple solution
The fact that the engineers who built the thing mischaracterized it would seemingly be evidence to the contrary. I have experienced this flawed thinking often, where software is treated as a quick simple solution without considering the effects on the overall system.
>Appeal to authority is not a fallacy when one has some authority
It actually is. As Carl Sagan said, “mistrust arguments from authority.” But regardless, we seem to have different ideas on what makes someone an “authority”. You may be an aerospace authority when you’re in a room of CRUD software developers, but this forum has a much wider net than that. “Technical authority” is an actual assigned title at NASA, and you probably wouldn’t get it with 3 years of experience from decades prior.
“Turning off a switch” is the easy solution when you’re dealing with the benefit of hindsight. The pilots were operating in a completely different environment. That’s why administrative mitigations are not a favored approach. Boeing simulator results demonstrate that it was a confusing scenario in which to identify the correct root cause in a time-critical situation.
As to the tone of the post, I’ve witnessed you in many aero discussions state your credentials in an “I know what I’m talking about, so that’s the end of it” puffery tone. It in itself comes across as condescending and, worse, adds little to the conversation. I generally try to be respectful on these forums until someone shows they aren’t reciprocating. Normally I just roll my eyes and move on. I’ll give you the benefit of the doubt and assume you don’t even realize the condescending tone some of your posts take. I debated even responding but thought it might make you realize how off-putting your style can be. In your case, it comes across as very arrogant rather than curious, which is contrary to the HN guidelines. If it offended you, I apologize.
I admit to being arrogant, perhaps that's an inevitable side effect of being confident. When proven wrong, however, which has happened on HN (!) I try to admit it. I don't like making mistakes.
I'm not offended, as that's to be expected when I write things that are unpopular.
But I still accept your apology, and no worries. Perhaps we can engage again in the future!
P.S. I know about the simulator issues, but the information came filtered through a journalist and I am skeptical. What I cannot reconcile is the first incident where the deadhead pilot simply turned off the switch, compared with the simulator pilot. It didn't matter whether the runaway trim was the root cause. It did matter that runaway trim will kill you and it must be stopped. All three crews knew that, which is why they fought it.
In my work on the stab trim system, it was accepted that the first resort for trim failure was turning the thing off. Overhead in the 757 is a matrix of circuit breakers, the purpose of each is to turn off a malfunctioning system. The stab trim one, being critical, isn't located overhead but right there on the console.
I work with machinery all the time. When it fails, my first reflex is to always turn it off. For example, one day I smelled smoke. Looking around, smoke was coming out of my computer box. I could see fire through the grille. The very first thing I did was yank the plug out of the wall, the second was to take the box outside. The flames went out when the current was removed.
I simply do not understand failing to turn off a runaway trim system. Especially when it kept coming back on after normal trim was restored.
For another example, an engine fire. I don't know what the fire checklist says, but I bet right at the top it says to activate the fire extinguishers and cut off the fuel and electric power to the engine. I've done the same for an engine fire in my car :-)
>3. Third incident - Boeing sent an Emergency Airworthiness Directive to all MAX pilots. The EA crew did not follow the simple 2 step procedure. A contributing factor is the crew was flying at full throttle, and ignored the overspeed warning horn you could hear on the cockpit voice recorder.
Walter, ffs, they were taking off hot and high from Addis Ababa with airspeed unreliable because of the screwed AoA sensor; a sensor unjustifiably classified too low in terms of consequence of failure, deliberately wired non-redundantly, and kept from being cross-checked in most configurations in order to avoid training burden, to avoid cert work, to compete with the NEO. Without that AoA sensor, the overspeed warning had no basis; garbage in, garbage out.
Let it go. Boeing f*cked up. Period. The pilots' instruments and alarms were completely untrustworthy. You (Boeing) don't get to blame pilots for not doing memory items when your (again, Boeing's) technical design is screwed from the first. Your 757 work was fine. The MAX as it was was not. Nothing will ever make it so.