It's also explained further down in the article why the software was programmed the way it was:
>One major contributing factor to this design issue was a decision to modify the landing site after critical design review completed in February 2021. This modification influenced the verification and validation plan despite numerous landing simulations carried out before the landing. ispace as the mission operator maintained overall program management responsibility and took into account the modifications in its overall analysis related to completing a successful mission. It was determined that prior simulations of the landing sequence did not adequately incorporate the lunar environment on the navigation route resulting in the software misjudging the lander’s altitude on final approach.
TL;DR: Plans were modified after the software was programmed, software was not sufficiently reprogrammed due to overreliance on old, pre-modification simulation data.
As human errors go this looks egregious. One hopes their subsequent missions don't run afoul of the same screw ups.
> overreliance on old, pre-modification simulation data
It somewhat reminds me of the Genesis sample-return mission’s landing failure.
There the parachutes failed to open. The parachutes failed to open because the accelerometer intended to trigger them did not fire. And it did not fire because it was installed according to the plans, but the plans had it upside down. And they didn’t catch the issue because the submodule in question had already flown, and so was deemed not to need a detailed review. But one of the things that changed from the previously successfully flown configuration was that they turned the submodule around. So they introduced a change without realising that this change invalidated some of their previous tests/analysis.
Obviously the technical root cause is very different here (software vs hardware; landing site change vs submodule orientation change), but the organisational root cause is similar. A change invalidates assumptions in previously performed tests/analysis/review, and nobody spots this thus the test/analysis/review is not performed again and a problem sneaks in.
This is something I’ve long struggled with: how do you capture the assumptions tied to decisions so you can revisit those decisions when the facts on the ground change?
(Architectural) Decision Records should explicate assumptions behind the decisions. There will be implicit assumptions too, but at least you can later go back to the records and analyse what was implicit in them.
Yup, we have a strong culture around this at work. Problem statement, decision drivers, decision criteria, alternate options to seriously evaluate. It should be clear why one is recommended over the other, and under what circumstances other options might be preferable.
Requirements traceability: a dependency tracking/tracing graph, pointing upwards ("because of") and downwards ("therefore").
It is a very tedious process, and very few companies do it.
But once the graph is built, you can pose questions to it, like: what would/might be affected if node X changes? And any assumption counts as a requirement of sorts.
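A minimal sketch of what that query could look like, assuming a toy graph with hypothetical node names (the "landing site" assumption here just echoes the ispace example; real traceability tools are far more elaborate):

```python
# Toy requirements-traceability graph. Edges point in the
# "therefore" direction: parent requirement -> derived artifact.
from collections import defaultdict

edges = defaultdict(list)

def derive(parent, child):
    edges[parent].append(child)

# Assumptions are nodes just like requirements (all names hypothetical).
derive("ASSUME: landing site A", "REQ: terrain model covers approach to site A")
derive("REQ: terrain model covers approach to site A", "TEST: landing simulation campaign")
derive("REQ: altitude estimation vs terrain model", "TEST: landing simulation campaign")

def affected_by(node):
    """Everything downstream that may be invalidated if `node` changes."""
    seen, stack = set(), [node]
    while stack:
        for child in edges[stack.pop()]:
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

# Changing the landing-site assumption flags the simulation campaign
# (among other things) as needing to be redone.
print(affected_by("ASSUME: landing site A"))
```

The point is only that once assumptions are first-class nodes, "we moved the landing site" becomes a graph query instead of something a reviewer has to remember.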