

BP oil spill clearly foreseeable on monitoring software, official - DMPenfold2008
http://www.computerworlduk.com/news/it-business/3256288/bp-oil-spill-clearly-foreseeable-on-monitoring-software-official/

======
sophacles
When designing control systems for these kinds of processes, there is a real
tension between "too much alarming" and "not enough alarming". Both have the
same consequence -- a lack of trust in the control software. The too-much case
is obvious, but the too-little case also erodes trust, like this:

1. software doesn't alarm when the operator thinks it should

2. bad consequence

3. thoughts of "the software doesn't work"

4. valid alarms are no longer trusted, in the same way as when there are too many false positives

This sequence of events is not necessarily rational, but it is human
nonetheless.

There is a lot of really cool work going on in this space, and a lot more to
be done. Current best practice involves lots of training for operators, who do
an awful lot of "flying the factory" by feel and hunch rather than through
good, informed interaction with the software.
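One common way practitioners attack the nuisance-alarm side of this trade-off is with a deadband plus an on-delay, so that brief noise spikes don't trip an alarm but sustained excursions do. Here is a minimal sketch of that idea; the class, thresholds, and readings are all hypothetical illustrations, not anything from the article.

```python
# Minimal sketch of two common nuisance-alarm mitigations from alarm-
# management practice: a deadband (hysteresis) plus an on-delay.
# All names, thresholds, and readings below are hypothetical.

class DeadbandAlarm:
    """Raise an alarm only after the signal exceeds the high limit for
    `delay` consecutive samples; clear it only once the signal drops
    below the limit minus the deadband."""

    def __init__(self, high_limit, deadband, delay):
        self.high_limit = high_limit
        self.deadband = deadband
        self.delay = delay
        self._over_count = 0
        self.active = False

    def update(self, value):
        if not self.active:
            if value > self.high_limit:
                # Count consecutive samples over the limit (on-delay).
                self._over_count += 1
                if self._over_count >= self.delay:
                    self.active = True
            else:
                self._over_count = 0
        elif value < self.high_limit - self.deadband:
            # Hysteresis: clear only well below the trip point, so the
            # alarm doesn't chatter around the limit.
            self.active = False
            self._over_count = 0
        return self.active


# A brief spike (the lone 101) does not trip the alarm; the sustained
# excursion (102, 103, 104) does, and it clears only below 95.
alarm = DeadbandAlarm(high_limit=100.0, deadband=5.0, delay=3)
readings = [98, 101, 99, 102, 103, 104, 97, 94]
states = [alarm.update(r) for r in readings]
```

The point of the sketch is the trade-off the comment describes: the delay and deadband suppress false positives, but they also make the alarm slower to fire, which is exactly the tension between too much and too little alarming.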

------
ninjayenn
Here is an interesting related article on what any industry can learn from
this incident with regard to risk management and regulatory oversight.

Two pointers:

1) Ensure layers of controls are implemented and tested for known risks.

2) Do not underestimate or ignore low-risk vulnerabilities. It does not take
many of them, in aggregate, to result in catastrophe.

[http://www.redspin.com/blog/2011/01/07/lessons-learned-
from-...](http://www.redspin.com/blog/2011/01/07/lessons-learned-from-the-bp-
well-blowout-for-your-industry/)
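The aggregation point in 2) can be made concrete with a little probability arithmetic. Assuming n independent weaknesses that each fail with probability p, the chance that at least one fails is 1 - (1 - p)^n; the numbers below are purely illustrative, not from the incident.

```python
# Illustrative arithmetic for point 2: many individually "low" risks
# aggregate quickly. Assumes independent failures; numbers hypothetical.

def prob_at_least_one_failure(p, n):
    """Probability that at least one of n independent weaknesses,
    each failing with probability p, actually fails."""
    return 1 - (1 - p) ** n

# Eight independent 1% weaknesses already give roughly a 7.7% chance
# that at least one of them bites.
risk = prob_at_least_one_failure(0.01, 8)
```

Independence is of course an optimistic assumption here; correlated failure modes (a common cause defeating several layers of controls at once) make the aggregate risk even worse, which is why point 1) stresses testing the layers.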

