
The problem with this analysis is that we have complete uncertainty about the probability that AGI/ASI will be developed in any particular time frame. Anyone who says otherwise is lying or deluded. So the risk equation for AGI/ASI wiping out humanity is impossible to calculate. And since you appear to be treating the damage as essentially infinite, i.e. an existential risk, you're effectively arguing that any probability greater than zero of someone using that capability makes the risk infinite. Which is not useful for deciding a course of action.
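To make that concrete, here's a minimal sketch (illustrative only, not anyone's actual model) of why an "infinite damage" term breaks the calculation: once the damage is treated as infinite, every nonzero probability produces the same infinite expected risk, so the probability estimate stops mattering at all.

    import math

    def expected_risk(p, damage):
        # naive expected-loss style risk: probability times damage
        return p * damage

    print(expected_risk(1e-12, math.inf))  # inf
    print(expected_risk(0.5, math.inf))    # inf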



No, there's no need to calculate infinities. Instead, let's imagine something with an IQ an order of magnitude higher than ours. This is exceptionally difficult to even imagine: below an IQ of 70 we consider a person disabled in some form or another, and once we get into the range of 170 or so the scale itself starts to break down. When we get to multimodal self-learning systems, there may be vast amounts of knowledge encoded in their networks that we humans simply don't know the right questions to ask about. Then try to imagine what other AIs at or near that level could ask each other.

Our comparisons between different systems of intelligence keep falling back on ideas of human intelligence and its limitations. Things like: we don't multitask well, and our best attention goes to one thing at a time. Our highest-bandwidth senses are our eyes and ears, and even those are considerably constrained. Human abilities cannot be horizontally scaled in any easily measurable way, and adding more people massively increases the networking costs of accomplishing a goal.

If for some reason the smartest AI can only reach the level of the smartest human beings (which, at least to me, makes no sense), it is still massively powerful, since at that point pesky human constraints like 8 hours of sleep no longer apply. And if human-level AI can be shrunk down to near cellphone size and power consumption, which again doesn't seem outside the realm of physics, that lays the groundwork for an intelligence explosion limited only by the rate at which we can print and assemble chips.


But the problem still stands that we can't estimate how likely any of this is to occur. The overall risk equation contains a term that is essentially completely uncertain, and varying that term can yield any arbitrary risk level you want.
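As a sketch of that point (the damage figure below is a made-up placeholder, not an estimate): hold the damage term fixed and vary the unknown probability, and the "risk" spans whatever range you like.

    damage = 8e9  # placeholder: one unit of loss per person alive today
    for p in (1e-9, 1e-6, 1e-3, 0.5):
        # same formula, wildly different conclusions depending on the guessed p
        print(f"p={p}: risk={p * damage:.3g}")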



