I’d strongly recommend Bostrom’s book ‘Superintelligence’ for a good overview of the problem space. My favourite high-level approach to the virtuous-AI problem is Yudkowsky’s ‘Friendly AI’ [1]. The best part of that strategy is ‘Coherent Extrapolated Volition’ [2], which would have the AI want what we would want it to want. By doing this, we offload the problem of working out how to locate and implement our values to the AI itself.
[1] https://en.wikipedia.org/wiki/Friendly_artificial_intelligen...
[2] https://intelligence.org/files/CEV.pdf