AI 'ethics' is how you get China dominating AI research eventually.
All these OpenAI spooks are doing it either intentionally or accidentally, and with stakes like this, what does it matter? The end result is a convincing loss either way.
Ethics oversight won't stop there, though. The managerial class will seek to keep expanding it more and more, with increasingly arcane and insulting restrictions.
And is the research level the right place to stop that? I don't think so. That's a policy question.
New technology is like putting more weight on the gas pedal. Until we learn to handle the current problems, accelerating isn't a guaranteed solution. We may be able to swerve around the oncoming problem, but now we're going even faster.
We now have technology that can end human civilization (aka "existential risk"). What is the harm in going slower? Especially compared to the harm of going faster.
It's a coordination problem. Sure, you could slow down, and then the Chinese will just outcompete you or spawn a paperclip maximiser; or, more generally, a less scrupulous competitor will.
You'd have to get all sides to agree on an actually effective treaty.