No, lots of important AI researchers are missing, and many of the signatories have no relevant AI research experience. As for being the cat's whiskers in developing neural architectures or whatever, so what? It gives them no particular insight into AI risk. Their papers are mostly public, remember.
> Has it occurred to you what happens if you are wrong?
Has it occurred to you what happens if YOU are wrong? AI risk is theoretical and vague, and most arguments for it are weak. The risk of bad lawmaking is very real: it has crushed whole societies before and could easily cripple technological progress for decades or even centuries.
IOW the risk posed by AI risk advocates is far higher than the risk posed by AI.
First of all, you agree that the probability is non-zero, right?
I am not a world-renowned expert on x-risk, so I can't estimate this. Neither are you. We have all these people claiming the probability is high enough. What else is there to be said? HNers shouldn't need to be reminded to "trust the science".
Has it occurred to you what happens if you are wrong, even with just a 10% chance? Well, it's written in the declaration.