Academic research involves a large component of marketing. That's why academics grumble so much about the time required for grant applications and other fund-seeking efforts. It's why they so frequently write books, appear in newspaper articles, and go on TV. It's why universities have press relations teams.
No, lots of important AI researchers are missing, and many of the signatories have no relevant AI research experience. As for being the cat's whiskers at developing neural architectures or whatever, so what? It gives them no particular insight into AI risk. Their papers are mostly public, remember.
> Has it occurred to you what happens if you are wrong?
Has it occurred to you what happens if YOU are wrong? AI risk is theoretical and vague, and most arguments for it are weak. The risk of bad lawmaking is very real, has crushed whole societies before, and could easily cripple technological progress for decades or even centuries.
IOW the risk posed by AI risk advocates is far higher than the risk posed by AI.
First of all, you agree that the probability is non-zero, right?
I am not a world-renowned expert on x-risk, so I cannot estimate this. Neither are you. We have all these people claiming the probability is high enough. What else is there to be said? HNers shouldn't need to be reminded to "trust the science".
Academia and scientific research have changed considerably from the 20th-century myths. They have been claimed by capitalism and are very much run using classic corporate-style techniques, such as KPIs. The personality types this new academic system attracts, and who can thrive in it, are also very different from those of the 20th century.
So there is no way you will accept anything from scientific research anymore, even when it is signed by a myriad of important figures in the field? Shaman time? Or will you accept only the scientific research that you think is correct and that suits you?