I am not sure why this point is not considered more seriously (maybe it is and I have just missed it). I feel that this article, and most of the literature written around this topic, is skating around the issue without naming it. The real question is: what are we going to do when the AI we create begins to self-replicate and augment its own code and control logic? What are we going to do when the AI decides that, in order to reduce its loss function, it must act outside of its control scope, and simply makes changes to that scope?
Some will say "not possible, how will the code recompile?", to which this article clearly answers: another AI will do it. We have to be honest with humanity here and say that we are seeking to create artificial life over which we will have no more control than we do over any other human. Oh, and this new artificial life will be orders of magnitude more physically and mentally robust than us.