> Thank you! I was going to write something similar. I think a truly 'superior' AI must be able to follow all the various philosophical ideas we have had and 'understand' them at a deeper level than we do: things such as nihilism ('there is no purpose'), extreme critical thinking about itself, etc. If it doesn't, if it can't, then it can't be superior to us by definition.
Understanding something is not the same as adopting it as your utility function. Morality is specific to humans. A different kind of being would have different goals and a different morality (if any), and it's very unlikely those would be compatible with ours.