
Even for your last example, two hypotheses need to be true: (1) such information exists, and (2) the AI has access to such information/can generate it. EDIT: actually at least three: (3) the human and/or the AI can apply that information.

It is also unclear to what extent thinking alone can solve a lot of problems. Similarly, it is unclear that humans could not contain superhuman intelligence: pretty unintelligent humans can contain very smart humans. Is there an upper limit on the intelligence differential at which containment still works?




> Even for your last example, two hypotheses need to be true: (1) such information exists, and (2) the AI has access to such information/can generate it. EDIT: actually at least three: (3) the human and/or the AI can apply that information.

Those trade off against each other, and not all of them have to hold in their easiest form. Information sufficiently dangerous to destroy the world certainly exists; the question is how close AI gets to the boundary of "possible to summarize from existing literature and/or generate" and "possible for a human to apply", given in particular that the AI can model and evaluate "possible for a human to apply".

> Similarly, it is unclear that humans could not contain superhuman intelligence.

If you agree that it's not clearly and obviously possible, then we're already most of the way to "what is the risk that it isn't possible to contain, what is the amount of danger posed if it isn't possible, what amount of that risk is acceptable, and should we perhaps have any way at all to limit that risk if we decide the answer isn't 'all of it as fast as we possibly can'".

The difference between "90% likely" and "20% likely" and "1% likely" and "0.01% likely" is really not relevant at all when the other factor being multiplied in is "existential risk to humanity". That number needs a lot more zeroes.
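
To make the multiplication explicit, here is a minimal sketch in Python; the loss figure is an arbitrary placeholder for an effectively unbounded downside, not an estimate of anything:

    # Expected loss = probability * magnitude. With an effectively unbounded
    # downside, even "small" probabilities dominate ordinary cost-benefit math.
    EXISTENTIAL_LOSS = 10**15  # hypothetical placeholder, not a real figure

    for p in (0.9, 0.2, 0.01, 0.0001):
        print(f"p = {p:<7} expected loss = {p * EXISTENTIAL_LOSS:.2e}")

On that framing, shrinking p from 90% to 0.01% barely changes the conclusion; the acceptable probability needs several more zeroes after the decimal point.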

It's perfectly reasonable for people to disagree about whether the number is 90% or 1%; if you think the people calling it extremely likely are wrong, fine. What's ridiculous is when people claim (without evidence) that it's zero or effectively zero, or when they claim it's 1% but act as if that were somehow an acceptable risk, or act as if anyone should be able to take that risk on behalf of all of humanity.


We do pretty much nothing to mitigate other, actual extinction-level risks; why should AI be special, given that its risk has an unknown probability and could even be zero?



