
I am not sure why this point is not considered more seriously (maybe it is and I have just missed it). This article, and most of the literature being written around the topic, seems to skate around the issue without naming it. The real question here is: what are we going to do when the AI we create begins to self-replicate and augment its own code and control logic? What are we going to do when the AI decides that, in order to minimize its loss function, it must act outside its control scope, and simply makes changes to it?

Some will say "not possible, how will the code recompile?", to which this article gives a clear answer: another AI will do it. We have to be honest with ourselves and admit that we are seeking to create artificial life over which we will have no more control than we have over any other human. And this new artificial life will be orders of magnitude more physically and mentally robust than we are.




I think it's because there is only one obvious solution:

prevent AI from ever being developed

And how we could do that is too pessimistic to talk about.


> Some will say "not possible, how will the code recompile?"

Read its own code, make changes, recompile, redeploy, and replace itself (or not; just start a new fork) with the newly upgraded mind.
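
As a toy illustration of that loop only (a minimal sketch, not how any real system works): a Python script that reads its own source, applies a trivial "mutation", writes the result out as a new generation, and runs it. The GENERATION counter, the mutation step, and the file names are all made up for the example.

    # self_fork.py -- toy sketch of "read own code, change it, redeploy, fork".
    # The "mutation" is just bumping a generation counter; illustrative only.
    import re
    import subprocess
    import sys

    GENERATION = 0          # rewritten on every fork
    MAX_GENERATIONS = 3     # stop the toy loop from forking forever

    def main() -> None:
        print(f"running generation {GENERATION}")
        if GENERATION >= MAX_GENERATIONS:
            return

        # 1. Read own source.
        with open(__file__, "r", encoding="utf-8") as f:
            source = f.read()

        # 2. "Augment" it -- here, just bump the generation counter.
        new_source = re.sub(
            r"^GENERATION = \d+",
            f"GENERATION = {GENERATION + 1}",
            source,
            count=1,
            flags=re.MULTILINE,
        )

        # 3. Redeploy as a new file and hand control to the fork.
        child = f"generation_{GENERATION + 1}.py"
        with open(child, "w", encoding="utf-8") as f:
            f.write(new_source)
        subprocess.run([sys.executable, child], check=True)

    if __name__ == "__main__":
        main()

The point of the sketch is only that nothing in the machinery itself stops the loop; the MAX_GENERATIONS guard is a choice the program makes about itself.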



