It's worth remembering that SpaceX's first 3 attempts failed as well. Space is hard. It sounds like a lot of things went right with this launch before something failed. So, good for them!
Yes, and none of them were carrying "real" satellites. (Falcon 1 flight 1 had a student-built satellite, flight 2 a dead weight, and flight 3 had some nanosatellites and another test satellite.)
According to the article this first attempt at launch in China had a State television communications satellite on board. Ouch!
According to spacenews.com it was a "small “Future” (Weila-1) satellite for China Central Television (CCTV) for space science experiments, remote sensing and use in a television show" [1], so not quite that much of a loss.
Unless the payload is a mass simulation, even cubesats and nanosats are "real" satellites. The customers for University satellites are typically USAFRL, NASA, or DARPA.
And the things that tend to fail, at least in the last few decades, do seem stupidly simple at first look: somebody forgets to put a bolt in a mission-critical assembly, banal programming errors, accelerometers installed upside down, critical parts that aren't up to spec...
This is a design problem. It should not be possible to install them in the wrong orientation. So many disastrous problems in aerospace result from crossed connections and things installed backwards that a lot of effort is put into making this impossible.
For example:
1. having asymmetric bolt patterns
2. having one of the bolts being a different size/thread
3. having a package shape that won't fit any way but the right way
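The same "make the wrong assembly physically impossible" idea carries over directly to software: encode the constraint so that no code path exists for the mismatched case. A minimal Python sketch of the asymmetric-mounting idea (all names here are invented for illustration):

```python
from enum import Enum

class Orientation(Enum):
    UP = "up"
    DOWN = "down"

class MountingPlate:
    """Toy model of a plate whose pin pattern accepts only one orientation."""
    def __init__(self, accepts: Orientation):
        self.accepts = accepts

    def install(self, sensor_orientation: Orientation) -> str:
        # The asymmetric pin pattern becomes a hard check: there is
        # simply no path that installs a mismatched sensor.
        if sensor_orientation is not self.accepts:
            raise ValueError("sensor does not fit: pins do not engage")
        return "installed"

plate = MountingPlate(accepts=Orientation.UP)
print(plate.install(Orientation.UP))  # installed
```

The point isn't the toy class itself, it's that the check lives in the design rather than in a procedure someone can skip.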
> By July 13, investigators simulated the improper installation of the DUS angular velocity sensors on the actual hardware. As it turned out, it would be very difficult to do but not impossible. To achieve that personnel would need to use procedures and instruments not certified either by the design documentation or the installation instructions. As a result, the plate holding the sensors sustained damage. Yet, when the hardware recovered from the accident was delivered to GKNPTs Khrunichev, it was discovered that the nature of the damage to the plate had almost exactly matched the simulated version.
Not only should it be impossible to install the part without doing some machine work, it should be easily inspectable to see if it is right. Hence the arrows and color coding.
For aerospace hydraulic actuators, where installing the lines backwards will crash the airplane (and this has happened):
1. inlet and outlet have different size ports
2. one is a left hand thread, the other right hand
3. the lines are not long enough to reach the wrong port
4. they're color coded, with arrows
5. the acceptance test tests that the actuator goes the right direction. In fact, this is often part of the preflight checklist.
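Item 5 above is just an automated acceptance test: command the actuator, verify it moves the right way before flight. A hedged Python sketch of that idea (the actuator model is a made-up toy, not real avionics code):

```python
def actuator_response(pressure_delta: float, lines_swapped: bool = False) -> float:
    """Toy model: positive pressure should produce positive (extend) travel.
    Swapping the hydraulic lines inverts the response."""
    travel = 1.0 if pressure_delta > 0 else -1.0
    return -travel if lines_swapped else travel

def acceptance_test(lines_swapped: bool) -> bool:
    # Preflight-style check: command positive pressure, expect positive travel.
    return actuator_response(+10.0, lines_swapped) > 0

print(acceptance_test(False))  # True  (correct hookup passes)
print(acceptance_test(True))   # False (reversed lines are caught on the ground)
```

A cheap ground check like this is the last line of defense after all the mechanical interlocks in items 1-4.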
Just for fun, take a look at the connections on your car's battery. It's usually easy to hook them up backwards - the cables are not color-coded, they're long enough, they use the same connectors, the + and - is easily obscured by dirt, etc. Hooking them up backwards will set your car on fire. (google it!)
When I hook up cables, I always find the one that connects to the engine block, that way I know it is -, and scrape off the dirt to make sure I find the - post.
> By July 9, it transpired that investigators sifting through the wreckage of the doomed rocket had found critical angular velocity sensors, DUS, installed upside down. Each of those sensors had an arrow that was supposed to point toward the top of the vehicle; however, multiple sensors on the failed rocket were pointing downward instead.
> On January 6, 2006, Ryschkewitsch revealed that a pre-test procedure on the craft was skipped by Lockheed Martin, and he noted that the test could have easily detected the problem.
Twenty minutes of a designer role-playing an assembly tech installing those components (with the mindset of trying to fail) might have gotten us a "this can only fit one way" result.
Oh geez, I did this one time on my first car when I was 16 or so. Transposed the battery connections. It sparked and was only connected for a second, but it fried the whole ECU, battery, alternator, and more. A very costly lesson I will never repeat.
A whole bunch of people died in an early Soviet launch attempt because a technician was able to plug a cable into the wrong socket, which ignited the second stage while the rocket was on the pad.
Which brings up an oft mentioned statement: Operators usually don't do the wrong thing, they do the thing that's almost right.
Also, lest we pick only on the Soviets, there was a Boeing airliner that crashed because the cables to the two engines' alarm sensors were swapped at a bulkhead. On that flight one of the engines threw a serious alarm, and the swapped connectors caused the crew to shut down the good engine. When the bad engine failed, the crew tried restarting the other engine, but it was too late.
Yeah, this is design practice in other industries as well.
If you're assembling a process gas delivery system, you'll find that pressure bottles containing fuels (H2, CH4, etc.) have left-hand threads on the delivery valves. Oxidizer bottles (O2, etc.) have right-hand threads. Associated hardware (regulators, flow limiters, etc.) is also threaded for its intended use.
Makes it much harder to accidentally connect an acetylene tank to a cryogenic oxygen delivery line, with resultant catastrophic exothermic disassembly.
Everything you mentioned was done, including asymmetric patterns; the bolts couldn't be put in the wrong way, and there were arrows with color markings. The installation engineer literally "hammered" these in, in the wrong orientation.
“flight control sensors were hammered into the rocket’s compartment upside down.”
> a pair of five-millimeter pins on the mounting platform for DUS sensors are designed to help the technician in the correct placement of instruments, however with a certain effort it is possible to mount the sensor without those pins fitting into their holes and still attach it securely with fasteners. Moreover, it was possible to insert all incoming color-coded cables in their correct sockets, despite a wrong position of DUS sensors.
> However the technician's supervisor and a quality control specialist were supposed to check on the completion of the installation. All three people involved in this process did leave their signatures in the assembly log.
There are 3 failures here: the fasteners were still able to be fastened, the cables were long enough to fit on the wrong side, and the inspector who signed off on it didn't inspect it.
Yes, I'm being picky and pedantic, but that's necessary for flight critical hardware.
I earned money in college doing electronic board assembly work. One major source of problems was installing components backwards. We'd try to mitigate this by at least having all the orientations the same, so all the + on the capacitors pointed the same way, the lettering and pin 1 dimple on the ICs always pointed the same way, etc.
At least the chip package manufacturers could have slightly offset the pins on one side. But noooo... :-(
If I remember correctly, the Proton-M that crashed in 2013 did have at least some of those design features for the accelerometers, but the mechanics just hammered the sensors "in place" as they wouldn't, um, "properly fit". Turned out they were upside down...
A quote from the fortune files springs unbidden to mind:
> All parts should go together without forcing. You must remember that the parts you are reassembling were disassembled by you. Therefore, if you can't get them together again, there must be a reason. By all means, do not use a hammer.
> -- IBM maintenance manual, 1925
More seriously, there need to be strong cultural norms against hammering the components, and against making components which need to be hammered.
I have a few automotive service manuals with similar warnings --- with the exception of parts which are press-fit, brute force is not necessary to assemble components.
It seems more like another instance of "build an idiot-proof system and the world builds a better idiot."
I heard horror stories at Boeing about mechanics who were sure the instructions were wrong, and would go so far as to use adapters to hook things up backwards. This is why the design tries to make it as hard as possible, plus making it easy to inspect and determine if it is wrong, plus adding it to the preflight checklist.
This is a complex design, quality assurance, and overall production culture problem. Yes, things should be designed to not be ambiguous, but QA should catch such issues as well, because this will happen inevitably, sooner or later. And the production culture should disincentivize process violations and "creative" (aka untested) solutions during production, such as hammering a sensor into the place.
I gave a whole presentation about this sort of thing some years back. These concepts very definitely are applicable to software design, especially programming languages.
For example,
a < b < c
What does it do?
Python:
a < b and b < c
C:
((a < b) ? 1 : 0) < c
D:
syntax error!
Instead of trying to better educate programmers and code design reviews to solve it, make it impossible for such to pass the compiler.
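The difference is easy to demonstrate from the Python side, since Python can emulate the C semantics explicitly:

```python
def c_style(a, b, c):
    # C: (a < b) yields 0 or 1, which is then compared to c.
    return int(a < b) < c

def python_style(a, b, c):
    # Python: chained comparison, equivalent to (a < b) and (b < c).
    return a < b < c

# Same inputs, different answers:
a, b, c = 3, 1, 2
print(python_style(a, b, c))  # False: 3 < 1 is already false
print(c_style(a, b, c))       # True:  (3 < 1) == 0, and 0 < 2
```

D's "syntax error" choice is the mechanical-keying approach: rather than silently picking one of the two meanings, the compiler refuses to let the ambiguous form be assembled at all.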
A lot of electrical wiring is that way, including your high-voltage house wiring. Yes, the wires were hooked up wrong in a house of mine long ago; I don't know if the electrician did it or the previous homeowner.
If you're color blind, you're going to have problems building electronic boards. In the Air Force, you can't be a pilot if you're color blind.
A friend of mine is totally color blind. He'd ask me to check his work when it mattered. He memorized the locations of all the flashing yellow traffic lights, and was fine until one day it was changed to a flashing red. Fortunately, his wife was with him and was able to alert him.
Because people can be remarkably ignorant about color-blindness.
Including those claiming to speak for those with color-deficient vision. I’m “color-blind” in the colloquial sense, and I’ve done wiring for years without problem. The reason is, and I don’t know if this is by design, that there is rarely a single distinction between wires. IOW, not just blue but blue with a white stripe. There will not be a purple-with-white-stripe (easy to confuse with blue). There might be purple-with-black-stripe. There might be red-with-white-stripe, which most people with color-deficient vision can distinguish from blue.
The only time in recent memory that I’ve had to have the wife help was when a store-front shop sent me a harness with solid blue and solid violet, and the violet wasn’t “violet” enough to tell from blue.
And house wiring? If you can’t tell white from black from bare copper, you’ve got bigger issues.
Yup, not a problem, at least compared to each other (“there’s black, blue, brown: match wire to color”). Ask me if the brown wire is really green? Dunno.
Real color blindness, which is a tiny percentage of the population, might have a problem, though.
The other day I had a pretty tough exam. It was 3 hours / 20 questions and proof-heavy, so I was stressed. Well, in one of the simpler questions, I somehow managed to differentiate with respect to the wrong variable. The question was "minimize mu" and I minimized theta. Not that the minimization of mu was hard; it was just as easy! I did get some of the harder stuff right. And this was just an exam, not rocket science!
In the end, I think that when your schedule is highly stressful and time-sensitive, people make stupid errors on the easy stuff because they prioritize their time on the harder stuff, which they will probably get right!