Different teams working on modules is a very different animal from one engineer saying "this item is going to blow up" for years while everyone obviously ignores him... I find it particularly shocking how your answer suggests a strong "well, shit happens" attitude when clearly the potential AND a strong reason to make things better were right there.

What would really be interesting is why he "failed to make his case" according to executives.




The engineering team had approximately three hours' notice to prepare a presentation for a teleconference. The telecon occurred the evening before launch, with a midnight deadline for the go/no-go decision.

Due to the short time available to build their presentation, they re-used material from existing presentations. Unfortunately, the same material had previously been used to demonstrate why the O-rings were safe. The NASA people basically said, 'Last time you showed me this graph it meant things were safe; this time it means things are dangerous. What gives?'

The decision makers were looking for excuses to move forward, and that gave them an excuse to ignore the warnings.

-----


What you're not seeing is that pretty much everything you work on has some risk of failure, and much of it could be catastrophic. Sorting through all of that is not easy. Yes, in this case, there were systemic and human failures.

It just goes to show that even when everyone is paying attention, things still go wrong. Some people heard him and made the call that it was still safe. They were wrong. That, unfortunately, is the state of the art today (or it was back in the '80s). The alternative is to stay on the ground.

But yes, shit does happen, and nobody climbing aboard the orbiter is under any illusion that it is a safe thing to do.

-----


If you've read Feynman's appendix to the report on the accident and investigation (http://www.ralentz.com/old/space/feynman-report.html), I don't see how you can possibly believe that the decisions made around the Challenger launch were made with the right process.

-----


I have read it, and I do think there are defects in the processes. But what do you do about it? I'm sure there are many more unknown vulnerabilities in the orbiter that were never found, but you keep trying and fixing.

-----


There is an obvious difference between "unknown vulnerabilities" and "known vulnerabilities".

-----


Your willfully ignorant (and I don't mean that in a crude, insulting way) responses here lead me to think there remain some very serious cultural issues within NASA.

-----


I think you've read something I didn't write. What you call willful ignorance I call a realistic assessment and acceptance of the risks of pioneering space flight.

Nobody is forced into an orbiter - people BEG for the opportunity. We gave them that opportunity, working in good faith to the best of our ability. Sometimes it doesn't work out. Sometimes things break. Sometimes people screw up. We all know the risks.

You can sit on the porch with a near 100% safety record or you can give it a try. Your choice.

-----


Actually I have to say I am with "mistermann" on this... you make it sound like there is no other way. I can totally accept and understand that it cannot be all that safe to sit you on tons of rocket fuel, fire you into the oxygen-less and freezing depths of space, and then hope you somehow make it onto another planet AND then do the same stunt from there back to Earth. I get it; I can also understand the trade-off between "making it 100% safe" and "otherwise we'd never get lift-off".

What I cannot understand is this: an unknown, unforeseen contingency is a completely different thing from an engineer pointing out "this WILL fail, it will blow up and I have proof", and there really should not be any excuse for ignoring a warning like that... yes, you cannot make it 100% safe, but you should at least aim to make it as safe as humanly possible given your current level of technology and knowledge... so, in my book, overriding an engineer saying "this WILL fail and it'll blow up" is actually negligent manslaughter. When I get into my car in the morning and don't care that the brakes aren't working even though my mechanic told me my brake lines were cut, what would you call that?

-----


While this sentiment is understandable, it's not justified unless you know how many times people said "this will fail" and it didn't. We only have definite data on this one statement. You cannot conclude from that data alone (I'm not saying there isn't other data) that this was negligent. If every engineer who disagreed with something said "this thing is going to blow up", eventually one would be right. But you cannot then infer that that individual was any different from the others and that people should have known this. It's the "monkeys on typewriters" fallacy.
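
To make that concrete, here's a minimal back-of-the-envelope sketch (all numbers are made up purely for illustration, nothing here comes from NASA data): if warnings are issued independently of the real risk, the chance that someone "called" the flight that eventually fails approaches certainty as the number of warners grows.

    # Hypothetical illustration of the selection effect described above.
    # Assumption: K engineers each issue an uninformative "this will blow up"
    # warning before any given flight with probability p. The chance that at
    # least one of them happens to "predict" the flight that fails is
    #   1 - (1 - p)**K
    # which climbs toward 1 as K grows, even though no warning carried
    # any information at all.

    def p_some_warning(k_engineers: int, p_warn: float) -> float:
        """Probability that at least one uninformative warning precedes a given flight."""
        return 1 - (1 - p_warn) ** k_engineers

    for k in (5, 20, 50, 200):
        print(f"{k:>3} engineers, p=0.05 each: "
              f"{p_some_warning(k, 0.05):.2f} chance someone 'predicted' the failure")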

-----


This is science and engineering, not statistics. It is not a numbers game, or "monkeys on typewriters", or a question of how many bug reports we can file on the same issue to get said issue fixed!

At the end of the day, if even ONE person demonstrates scientific or engineering knowledge that shows a serious safety concern, then why would you actively choose to ignore it? Period.

NASA management - whether by organisational process and/or personally identifiable decision making - failed in their responsibilities in spectacular fashion!

-----


While I agree (especially with the last sentence), I would point out that the engineering behind these problems is rarely black and white, and hindsight tends to make it look more so than it is.

I do not believe that if someone had known with 100% certainty that Challenger would blow up, it would ever have launched. The trouble came in the judgment of that risk. In this case, from what I've read, they got it wrong - very wrong[1].

You can argue about how certain they have to be, or how negligent people were to ignore estimated failure probabilities of whatever magnitude. But it's not like someone says, "this will blow up 85% of the time, period. Make a call." It's more subtle, complex, and less concrete than that.

1. Note that this is not equivalent to "if it blew up, they got it wrong." Sometimes the small, properly calculated risk blows up on you just because you're unlucky - which is different from a miscalculated risk blowing up on you.
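
To put rough numbers on why the estimate itself is the crux, here's a quick sketch using the ballpark per-flight loss probabilities Feynman quotes in the appendix linked upthread (management around 1 in 100,000, working engineers closer to 1 in 100 - I'm citing those from memory, so treat them as approximate):

    # Compounding a per-flight loss probability p over a program of N flights:
    #   P(at least one loss) = 1 - (1 - p)**N
    # The two estimates below are the rough figures attributed to management
    # and to working engineers in Feynman's appendix; both are approximate.
    p_management = 1 / 100_000
    p_engineers = 1 / 100

    for n_flights in (25, 100):
        for label, p in (("management", p_management), ("engineers", p_engineers)):
            risk = 1 - (1 - p) ** n_flights
            print(f"{label:>10}, {n_flights:>3} flights: P(>=1 loss) = {risk:.1%}")

Under the engineers' estimate, a hundred flights means better-than-even odds of losing an orbiter; under management's, it's about 0.1%. That gap, not a single crisp "85% it blows up" number, is what the decision makers had to weigh.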

-----


No hindsight was required to observe the following:

O-rings are supposed to seal on compression, not expansion.

As it is now, the O-rings are getting blown out of their tracks but still managing to seal the whole assembly quickly enough.

The above unplanned behavior, which is the only thing preventing a hull loss (and a crew loss, since there's no provision for escape), is sufficiently iffy that sooner or later we're likely to run out of luck.

(I'd also add, regarding the Columbia loss, that NASA had a "can't do" attitude towards the foam strike they observed on the wing. Hardly a "crew first" attitude.)

-----


That would be the "people screw up" part. Do you have a cure for that?

-----


You are leaving out data. The same engineers also, during that time, agreed to decisions declaring that the problem had been fixed. Apparently, there was an established mechanism for any engineer working on the shuttle to file an official "bug report" that would then have required a thorough investigation. None of the engineers did; all concerns were voiced through informal channels.

-----


Part of the conclusions of the Challenger post-accident report was that engineers were discouraged from filing such "official bug reports." Informal reports made in a briefing did not require investigation, so they were not discouraged in the same way.

When I visit a NASA center, there are posters up all over saying "If it's not safe, say so." Part of the reason for the 2-year grounding of the Shuttle fleet, post-Challenger, was to put in place a stronger culture of safety at NASA.

A commenter mentioned the false dichotomy of "engineers vs. managers". It's a hard call, as an engineer, to disappoint a manager (or a whole line of managers, all the way up) with a call to solve a possible problem. Civil engineers may be more used to this sort of accountability.

-----



