
It's not a matter of being tired and having to catch up. The operators are explicitly instructed to treat the AI results as orders, without questioning them.

In other words, operators are threatened with punishment if they take the time to inspect the results more closely before following orders. It's not even an option!

> In order to speed up target elimination, *soldiers have been ordered to treat Lavender-generated targets as an order, rather than something to be independently checked*, the investigation found. Soldiers in charge of manually checking targets spent only seconds on each person, sources said, merely to make sure the target was a man. Children are also considered legitimate targets by Lavender.




Right! The extreme point of laziness/stress is just pressing approve, at which point the machine is making the de facto decision.

In either case, the role of the recommendation engine is immensely impactful, as we already know from consumer products. But here the software engineers are directly involved in life-and-death decisions at scale. I really hope they know that.


They know that, because the system was designed to increase civilian casualties, with unprecedented civilian-to-"combatant" ratios and confidence levels. These were design requirements, not faults.

The notable element of this news is not that the decision-making was automated. It's that they set the civilian collateral death ratio to ~100 (along with other details, such as treating children as valid combatant targets), regardless of whether the process that arrived at those decisions was automated or not.



