I am not the person you are replying to, but since I would make similar points to their original comment:
A. I see little to no evidence that LLMs are where the singularity happens
B. I see little to no evidence that (given an AGI) reinforcement learning is likely to end up with a sufficiently aligned agent.
C. In any event, the OpenAI alignment specifically restricts the AI from (among other things) being "impolite", in contradiction to what mort96 says.
Alignment is a good thing to work on. I'm glad OpenAI is doing so. But attacking people for making uncensored LLMs hurts the pro-alignment case more than it helps.
What specific features would an AI need to have for you to consider it on a "slippery slope" towards superhuman capability?
For me personally, GPT-3 hit that point already, and I now think we're past the required tech level for superhuman cognition: I believe it's just a matter of architecture and optimization.