That profits and executive bonuses should take precedence over the safety of human lives is horrible. I'm curious to hear more about why the unions formed, since it sounds like there was a very contentious relationship there.
Investigations after the Challenger disaster revealed that engineers at NASA estimated the space shuttle's failure rate at roughly 1 in 100 per mission, while managers believed it was closer to 1 in 100,000.
By making unsafe aircraft, Boeing avoided losing customers and in fact made many more sales, because airlines' pilots didn't need to be retrained and their equipment didn't need to be changed. Combined, that made the MAX a more cost-effective alternative to the Airbus.
That’s your shareholder value, as long as your unsafe aircraft don’t end up crashing.
Also it’s mostly nonsense: executive compensation is often tied up in revenue and market performance, and they’ve mostly cashed out on that before the financial damage caused by their “work”.
To be certified as a training device, it has to be accredited to behave similarly to the real thing. That accreditation costs money.
That's not to say it couldn't be done cheaper, but until someone does $15 million is what the market will bear. The other option is gassing up a real jet for training.
Also, who accredits the simulator? The company that built it?
But $15 million is still expensive. A major selling point (possibly the biggest) of the MAX jets was that Boeing convinced the FAA that they were similar enough to old 737s that they did not need a new type certificate. That meant the many pilots already certified to fly the 737 would not need any flight training to transition: all of them could fly the new jets just by reading a manual on an iPad for an hour. That is a huge savings in training cost, even compared to powering up a simulator. The simulators still exist because new pilots getting certified still need to log flight time; the iPad app is not good enough for them.
Regarding simulator accreditation, I believe Boeing does that themselves. They hand the FAA a stack of documents claiming that they tested the simulator with the same flight profile as a real jet and it handled the same as the real thing. The FAA basically says, "as long as you're not lying about running the test then the simulator is acceptable for training." This is how the FAA handles most certification- they do not do their own testing, they just accept manufacturer test results and documentation. As long as the documentation claims to meet standards then it's all gravy. In general this system works, because if someone crashes your plane and a whistleblower says that you did not actually perform the correct test procedures then you are in hot water.
Anyway I have yet to see a software related “certificate” that isn’t rote-learnable, comically high level, or both.
You also have to ask, what are you certifying?
All of these are fairly trivial to avoid in small programs:
* Use after free
* time of check/time of use
* out of bounds
* numeric overflow
Especially in any kind of test environment where you are being extra careful.
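Each of the bug classes listed above can be guarded against mechanically. A minimal C sketch of bounds-checked and overflow-checked helpers (the function names are mine, for illustration; `__builtin_add_overflow` is a GCC/Clang builtin):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Bounds-checked read: returns false instead of indexing past the array. */
bool read_elem(const int32_t *buf, size_t len, size_t idx, int32_t *out)
{
    if (buf == NULL || idx >= len)
        return false;           /* out of bounds: reject, don't read */
    *out = buf[idx];
    return true;
}

/* Overflow-checked 32-bit addition: returns false instead of wrapping. */
bool checked_add(int32_t a, int32_t b, int32_t *out)
{
    return !__builtin_add_overflow(a, b, out);
}
```

The point being: in a small, carefully reviewed codebase, discipline like this is routine, which is why these bug classes make for weak certification exam material.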
Then there’s the language problem: many engineers have to use multiple languages, some only have to use “safe” languages. Should you require a different cert for each?
You’re also saying “not everyone gets to write software anymore” because the certification won’t be free.
How does open source then work? Clearly people working on the Linux kernel should be certified, so now you’re saying Linux should only accept patches from people who live in countries that can provide the required certs.
I have two comments. One: replace "hardware engineers" with "management".
The second: when I read people talk about validating the AoA readings, it makes me twitch a bit. Partly because my day job involves firmware that manages a self-organizing sensor network. Validation of sensor data sounds easy until you force yourself to conceptualize what the system can know based on the actual data it sees, not your perceptions.
Moreover, there is a strong tendency to over-focus on the ordinary case and not on all the edge cases. Very often, dealing with edge cases is the fundamental problem. Consider designing the front end of a car: the primary design goal is actually "passengers don't die when you drive it into a tree."
The problem with the MCAS system is that it needs to work under all the edge cases, not just when the plane is flying in smooth air while the pilot is pulling the nose up. Like during a hard turn into wind shear.
I mean, there's also the space shuttle approach, where you have N redundant systems controlling N separate motors (or whatever) and assume you'll never have N/2 or more producing incorrect output. That's a "no validation" approach that works by virtue of the correct instruments literally overpowering the incorrect ones.
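The software analogue of "correct channels overpower the faulty one" is mid-value select: take the median of the redundant channel outputs, which is correct as long as a strict majority of channels is healthy. A simplified sketch, assuming a small, odd number of channels (real flight-control voters are considerably more involved):

```c
#include <stddef.h>

/* Mid-value select over n redundant channel outputs (n small and odd).
 * No channel is individually "validated"; a faulty channel is simply
 * outvoted as long as a strict majority of channels is correct.
 * Note: sorts the input array in place (insertion sort, n is tiny). */
double vote_median(double *samples, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        double key = samples[i];
        size_t j = i;
        while (j > 0 && samples[j - 1] > key) {
            samples[j] = samples[j - 1];
            j--;
        }
        samples[j] = key;
    }
    return samples[n / 2];
}
```

With three channels, one can fail arbitrarily badly (stuck high, stuck low, oscillating) and the voted output still tracks the two good ones.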
I'm walking away with the following explanation: Boeing made a breaking change to the aircraft and did such a good job hiding it from themselves, the FAA, and pilots that they made it impossible for experienced pilots to handle things when it failed.
oof, what makes AoA sensors so terrible? Also, it seems like if you have something that isn't particularly robust (pitot tubes apparently also being egregiously terrible in that regard), surely having a less accurate but more robust reference tool would be a good "oh bollocks" backup, e.g. additional redundancy based on a different technology.
It’s a difficult problem to solve! These sensors are already probably much closer to what you would expect a low-fidelity reliable backup to be than you realize.
A similar thing happened with Birgenair Flight 301. A wasp built a nest inside one of the pitot tubes, which happened to be the one the pilot and autopilot were using, and the one being used to generate warnings.
I think validating sensor readings is a hard problem. The validation itself becomes another point of failure and confusion.
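That difficulty is concrete on the MAX: with only two AoA vanes you cannot out-vote a bad one, so the best available move is to detect disagreement and degrade gracefully rather than pick a winner. A sketch of such a cross-check (the struct, names, and the threshold value are hypothetical, chosen for illustration):

```c
#include <math.h>
#include <stdbool.h>

typedef struct {
    double value;   /* agreed AoA, in degrees, when valid */
    bool   valid;   /* false => channels disagree: annunciate, degrade */
} aoa_reading;

/* With two channels you can't vote; you can only flag disagreement.
 * On disagreement we refuse to pick a winner and mark the data invalid,
 * which is itself a new failure mode the crew must be trained for. */
aoa_reading cross_check(double left_deg, double right_deg, double limit_deg)
{
    aoa_reading r;
    if (fabs(left_deg - right_deg) > limit_deg) {
        r.value = 0.0;
        r.valid = false;
    } else {
        r.value = (left_deg + right_deg) / 2.0;
        r.valid = true;
    }
    return r;
}
```

Note how even this tiny validator illustrates the comment's point: the `valid == false` path is a brand-new system state that pilots, warnings, and downstream logic all have to handle correctly.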
The pass-the-buck circle jerk is how this design flaw came to exist. Everyone in the engineering organization needs to have the balls to point out systems design errors. Management needs to listen to them and not issue "make it work" marching orders. Regulators need to not delegate their responsibility to either of the previous.
More than one person could have put their foot down and demanded triple redundancy. That this didn't happen suggests even more safety concerns lurk in all of Boeing's products.
Currently, the avionics software is certified, not the software engineer. The FAA-delegate safety reviewers get special training, but otherwise a bachelor's degree in a related discipline is the standard for an individual contributor's formal education.
There is arduous process in place to help ensure that commercial avionics software is produced to an acceptable level of quality. Problems can still get through, but the process helps weed out a lot of issues that you'd likely see in non-safety-critical software.
Certify the software, not the person.
I don't think you're considering how everything works together - in mechanical engineering a certified engineer designs her machinery around other tools that themselves were designed by certified engineers, all using manufacturing processes designed by process and industrial design engineers.
What you're claiming is equivalent to a technical engineer designing something based on equipment designed by - for all intents and purposes - a random person on the internet, and you're now responsible for determining all the mechanical qualities of everything yourself.
I think you're dismissing what it means to say "I am a professional engineer and I approve of the use of linux" in such a context. The only rational approach is to only take OSs (or any other part of your software stack) that has itself been written entirely by certified engineers.
In effect you've just shifted the blame. Developers working at the lower levels could've pushed back on this harder if they were legally required to. My point is if mechanical and electronic engineers are liable then so should software guys - they need more power to say no.
> You also have to ask, what are you certifying?
An argument could be made that formal verification & ethics would be useful in this context.
> You’re also saying “not everyone gets to write software anymore” because the certification won’t be free.
Degrees aren't free either. Most developers aren't working in aerospace and won't need the rigour.
> How does open source then work?
I'm not talking about OSS. I'm talking about people who work with software that can kill people. If the Linux kernel is used as a technology in these machines then the software 'engineer' who made that decision is legally liable. The blame stops with them.
No. If the bug were in the software (say, a numeric underflow leading to a crash), it would be software. In this case the software engineers were told "here is your current AoA" and adjusted the plane correctly in response. The hardware engineers/designers then provided them with unvalidated data, and, I assume, no details on the error rate (presumably because that would get the whole system flagged by the FAA as nonsense).
> Degrees aren't free either. Most developers aren't working in aerospace and won't need the rigour.
"most" != all, literally my point. Also at what level does it kick in: OS developers? If they're using a licensed OS like QNX should all the QNX engineers need to be certified for avionics? How about linux?
> I'm not talking about OSS
So you're saying OSS shouldn't be used in commercial industry?
If you work on linux: that's used in medical hardware, so it seems like all contributors should have your new Certificate in Not Killing People.
But also, at what distance from killing people does this license cease being relevant? You worked on (say) a firewall product on some device, it fails to prevent some attack and the medical device kills someone.
Or the radio stack?
A perfect example of why the title of engineer needs to be earned. This is a baseless assumption given that literally anything could go wrong: sensors could become damaged, circuits broken, etc. It is our job to plan for edge cases.
> But also, at what distance from killing people does this license cease being relevant?
The last link in the chain: The engineers who put their stamp of approval on the system being shipped to consumers (aka Boeing employees). If you're willing to risk human life on the fact the Linux kernel is acceptable for this task, then you should damn well be able to risk your job title.
If Linux isn't up to the task then why is it being used?
Not those edge cases. They have nothing to do with the core competencies of a software engineer and should be offloaded to someone who is competent. Do architects plan for edge cases where the steel beams were actually made of wood?
What, exactly, do you think a PE cert for a software engineer would have done here? Do you think the software people should have refused what the aero people certified as safe?
It gives them legal teeth to say, "No, this has not yet been proven to be safe; I cannot sign off on that." At the same time, though, a union or guild is required so that management doesn't penalize a moral engineer in favor of a rubber-stamp engineer.
How about people doing software for medical systems? Would they have to go to med school, do a residency, and pass medical boards before coding? How would this work?
Because refusing to accept specifications from domain experts and substituting your own is a great way to attach personal liability to yourself for something in which you are not trained even as a reasonably knowledgeable layperson, much less an expert. I doubt any software engineer could obtain professional liability insurance if that were the practice.
That's the TYPE of thing I expect to happen in this context.
The job of the software engineer is to correctly implement the given spec. As far as anyone knows, that was done.
There is no one, in any industry, that wants their software engineers to say "I'm not moving forward until I've seen the validated medical testing and lab results that this design is based on. I will also need you to run a several year safety trial, provide multiple attestations that the design is correct by end users, regulators, and independent auditors, before proceeding."
What you are suggesting is ridiculously impractical. The specialties rely on one another, and if the controls and human factors people have signed off on the design spec, that's what the software engineers should faithfully implement. If, during implementation, it becomes apparent that there are states the system can get into that are not called out in the spec, that obviously requires re-engagement. But that's not what you are suggesting, as far as I can tell.
That title should be reserved for those who have the same credentials as an ME, EE, etc. Someone with a CS degree or a self-taught dev has in no way the same training as someone with a CE degree.
Engineers are able to take their PE exam in either CE or SE. https://ncees.org/engineering/pe/
Provided you have a PE credentialed coworker who can vouch for you. That is a chicken and egg problem for most people in an organization with no PEs.
Probably the better model would be the apprentice/journeyman/master progression from the medieval trade guilds.
And it isn’t even clear to me that most consumers would prioritize security / stability over feature-sets when choosing software.
If you are in a context where your software has significant implications on the state of a physical system, you must be willing to work with the other engineers to make sure you've accommodated all the eventualities you can.
Part of being an Engineer is knowing what you don't know, yet following through and making sure you connect with the people who do in order to ensure all relevant questions are asked and answered.
IDEs, compilers, and other tools of the software engineering profession should absolutely not be treated as the tools of a privileged class.
However, certain contexts in which software can be applied should be subject to an expectation of higher scrutiny, and entail compulsory cross-disciplinary knowledge acquisition and application of expertise.
You want to hack kernel code? Knock yourself out!
You want to make that code responsible for operating an airplane? Bust out The Complete Engineer's Guide to Jargon and don thy pocket protector, because it's gonna be a bumpy ride to build the consensus that that piece of software you wrote is actually the right tool for the job.
You're not getting it.
It's not about having/being a PE.
It's about knowing when the stakes are high enough that you need to talk with them, and making sure that you are making full use of their expertise in their subject area, and that they have full use of your understanding of writing software, so that all your bases, behaviors, inputs/outputs, and edge cases are covered by tests, implementation, and appropriate requirements.
That means, for example, looking at the MCAS requirements, scratching your chin, picking up the phone, and calling the Systems Engineer the requirements percolated down from to figure out what happens on the path from sensor to entry point, and what other pieces of data might be appropriate to include.
If you have no tolerance for asking meddlesome questions in the process of making a system which as written, has the capacity to pitch a plane into the ground, you are probably not ready to be put in charge of that implementation.
Write your Linux kernel in your own time however you want. But there is a time and a place for everything, and implementing hacks in a flight control system (which you as an individual should know is covered by regulation) is not the time to be a "yes" man. If you spend 90% of your time talking to people until the design is solidified enough that everyone will have the schematic pop up in their dreams until the end of their days, then you're doing it right.
And at the end of the day, if you throw up your hands and hit the "Fine! I won't question this!" button regardless of that situation, and code that piece of software that enables an unsafe design to kill 300+ people... Then congratulations, you just learned this life lesson the hard way. By being a contributory factor to the deaths of 300+ people.
Engineering done wrong kills people. Fact of the territory. Please don't skimp when it truly matters. Even if no one may find out in reality, go into every project assuming that once you die, you'll have to answer for every decision you made in life, and ask: if I were taken to task for this, would I truly be comfortable that I've asked all the right questions?
If you can sleep at night without doing that on a flight control system... please don't seek employment writing software in the aviation field anywhere more critical than the infotainment system.
I'm not trying to be elitist. It just is that complex, and the consequences of a shoddy job are that catastrophic.
You can't fool physics. She is the coldest, most evil bitch imaginable.
I hope this fully answers your question.
The problem is you don't know it's unsafe. It sometimes takes a disaster to shed light on a problem. Engineering and design are hard.
Yes, there were outsourcing companies that abused the system (harming legitimate companies), but if you think outsourcing companies are going to magically start recruiting top-of-the-barrel engineers, you are sorely mistaken.
Have a safety concern? The product owner doesn't think it's a priority.
>This story is going to be an 8 point story? But Jim said he could get it done in 2
If you treat individual ticket SP values as measures of time, you've already completely lost the point of everything.
if not, is this relevant to the story?
Have I seen agile/scrum used to cut corners on and dismiss engineering concerns? Absolutely.