In utilitarian ethics it depends on how lives are valued. If every life counts equally, it doesn't matter: you are trading one life for another. If we allow the system to be higher order and allow the symmetry between "self" and "other" to be broken, then self-preservation becomes the moral thing to do.
In consequentialist ethics the previous details can be formalized differently, but it helpfully points out that if you are being coerced, then someone is coercing you, and hence your moral agency is greatly diminished. Still, if the consequence of your actions is that someone dies who would not have died without them, then sure, it's morally wrong. But that's hard to know at the moment you have to make such a decision.
In virtue ethics you can simplify this and just say that protecting the lives of others is a virtue and shooting people in the head is a vice, so let's do the virtuous thing! (Of course, protecting your own life is a virtue too!)
(And AI Alignment folks spend pages upon pages discussing these problems of how to even begin thinking about a formal ethics that is in some sense computable. Should the agent predict the consequences of its possible actions, compute the utilitarian value of each, and choose the action with the most utility? Should the agent factor in the possible actions of the coercer? Should the agent factor in time? If the agent is terminated, it cannot maximize future utility. If the agent can live forever by default, then its future utility can be much, much greater than the utility of "one life". Should one human life then be valued at "infinity + 1"? Are we on a tangent to transfinite ordinals? Yes, yes we are! https://www.youtube.com/playlist?list=PL3A50BB9C34AB36B3 )
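To make the "predict consequences, compute utility, pick the best action" recipe concrete, here is a minimal sketch in Python. Every action name, outcome probability, and utility number below is invented purely for illustration; this is a toy expected-utility maximizer, not anyone's actual alignment proposal.

```python
# Toy expected-utility maximization over a coercion scenario.
# All actions, probabilities and utility numbers are made up for illustration.

# Each action maps to a list of (probability, utility) outcome pairs.
actions = {
    "comply_and_shoot": [(1.0, -100.0)],              # a certain death
    "refuse":           [(0.5, -100.0),               # coercer kills the victim anyway
                         (0.5, -120.0)],              # coercer kills you instead (self weighted higher here)
    "stall_for_time":   [(0.3,    0.0),               # help arrives, nobody dies
                         (0.7, -100.0)],              # it doesn't, someone dies
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities for one action."""
    return sum(p * u for p, u in outcomes)

# Choose the action with the highest expected utility.
best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action, expected_utility(actions[best_action]))
```

Notice that all the hard questions in the parenthetical above live outside this little loop: where the probabilities come from, how the coercer reacts to your policy, and what happens to the sum when the time horizon (and hence the utility of the agent's own survival) is unbounded.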