Absolutely. These are the types of pragmatic, real problems we should be focusing on instead of the "risk of extinction from AI".
(The statement at hand reads "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.")
Einstein's letter to Roosevelt was written before the atomic bomb existed.
There's a point where people see a path, and their confidence in that intuition grows from the fact that other members of their field see it too.
Einstein's letter said 'almost certain' and 'in the immediate future', but it makes sense to sound the alarm about AI earlier, both given what we know about the rate of progress of general-purpose technologies and given that the AI risk, if real, is greater than the risk Einstein envisioned (total extermination, as opposed to military defeat by a mass murderer).
> Einstein's letter to Roosevelt was written before the atomic bomb existed.
Einstein's letter [1] predicts the development of a very specific device and mechanism. AI risks are presented without reference to a specific device or system type.
Einstein's letter predicts the development of this device in the "immediate future". AI risk predictions are rarely presented alongside a timeframe, much less one in the "immediate future".
Einstein's letter explains specifically how the device might be used to cause destruction. AI risk predictions describe how an AI device or system might be used to cause destruction only in the vaguest of terms. (And, not to be flippant, but when specific scenarios that overlap with areas I've worked in are described to me, they sound more like someone recounting their latest acid trip or the plot of a particularly cringe-worthy sci-fi flick than a serious scientific or policy analysis.)
Einstein's letter urges the development of a nuclear weapon, not a moratorium, and makes reasonable recommendations about how such an undertaking might be achieved. AI risk recommendations almost never correspond to how one might reasonably approach the kind of safety engineering or arms control typically applied to armaments capable of causing extinction or mass destruction.