I recently submitted a talk by Thomas Metzinger that didn't get much traction but is very relevant to this specific topic: "Three Types of Arguments for a Global Moratorium on Synthetic Phenomenology".
I would guess that we will have created, tortured, and deleted millions of conscious AIs before we come anywhere close to recognizing their rights, or even the fact of their consciousness.
Doesn't that raise some serious ethical concerns if at some point what you might end up having built is human-like consciousness?
Staying with Westworld, it also explores the possible reactions of those consciousnesses when that ethical issue is ignored.