>Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable
This statement by itself sounds reasonable. But for me an interesting thought experiment is to take this letter and imagine the equivalent for some other technology, for example semiconductors in the 1960s, the world wide web in the 1990s, or social media in the late 2000s-early 2010s. It is always true that new technologies have the potential to radically transform society in ways that we can't predict. One could reasonably have said "[semiconductors/the world wide web/social media] should be developed only once we are confident that their effects will be positive and their risks will be manageable". Does that mean that a self-imposed ban on research and development with the threat of a government moratorium would have been justified?
At this point the best case scenario is that society learns to adapt and keep up with technological developments. Every new technology increases our ability to both improve people's lives and harm them in various ways. It's not a good long-term solution to intervene and stop progress every time we gain new capabilities.
It just seems to me that most of the people signing feel they don't get to be part of this revolution, and that if AGI develops they will be left with pretty much nothing. It's the equivalent of saying, "don't leave us out."
At the end of the day it's an empty platitude and a vain hope that work will pause or be considered carefully. Certainly public entities can be made to pause, but nation states won't. If there's an advantage to be had, the work will continue in secret. Vernor Vinge's "Bookworm, Run!" had a take on this situation.
They're talking about pausing research and talking together about the path forward, not stopping research and letting <whatever country you're paranoid about> build the Terminator.
We have to take unintended consequences into account. It's unlikely that we will be able to get all corporations and governments to agree to a pause and be able to enforce it. The question then is what are the consequences of some people pausing and not others? Does this decrease risk or increase it?