
Again, since these are practically the cream of the crop of AI researchers, there's a global conspiracy to scare the public, right?

Has it occurred to you what happens if you are wrong, say with a 10% chance? Well, it's written in the declaration.




No, lots of important AI researchers are missing, and many of the signatories have no relevant AI research experience. As for being the cat's whiskers in developing neural architectures or whatever, so what? It gives them no particular insight into AI risk. Their papers are mostly public, remember.

> Has it occurred to you what happens if you are wrong?

Has it occurred to you what happens if YOU are wrong? AI risk is theoretical and vague, and most arguments for it are weak. The risk of bad lawmaking is very real, has crushed whole societies before, and could easily cripple technological progress for decades or even centuries.

IOW the risk posed by AI risk advocates is far higher than the risk posed by AI.


To make your argument clear to people reading:

If you are wrong, there are no humans left.

If I am wrong, inequality grows and societies suffer at the hands of the strong, as they always have.


Wouldn't you need to illustrate how likely each outcome is? After all, there are lots of possible ways humans could be eradicated.


First of all, you agree that the probability is non-zero, right?

I am not a world-renowned expert on x-risk, so I can't estimate this. Neither are you. We have all these people claiming the probability is high enough. What else is there to say? HNers shouldn't need to be reminded to "trust the science."



