
Improvements in AI aren't linear, though. The jump from AGI to Artificial Superintelligence might happen in the span of minutes or days. I imagine the idea here is to guide progress so that on the day AGI becomes possible, we've already thoroughly considered what happens after that point.

Secondly, it's not necessarily about 'evil' AI. It's about AI indifferent to human life. Have a look at this article; it gives a better intuition for how slippery AI could be: https://medium.com/@LyleCantor/russell-bostrom-and-the-risk-...




> Improvements in AI aren't linear, though

This is a point everyone makes, but it hasn't been proven anywhere. Progress in AI as a field has always been a cycle of hype and cool-down.

Edit (reply to below): talk of self-bootstrapping AIs, etc. is just speculation.


Sure, though you can't extrapolate future technological improvements from past performance (that's what makes investing in tech difficult).

Just as one discovery enables many, human-level AI that can do its own AI research could superlinearly bootstrap its intelligence. AI safety addresses the risk of bootstrapped superintelligence indifferent to humans.


>Just as one discovery enables many, human-level AI that can do its own AI research could superlinearly bootstrap its intelligence.

Of course, that assumes the return-on-investment curve for "bootstrapping its own intelligence" is linear or superlinear. If it's logarithmic, or if something other than "intelligence" (a word loaded with magical thinking if there ever was one!) turns out to be the limiting factor on reasoning, it's a no-go.
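
Purely as a toy illustration (not something from the thread): a few lines of Python contrasting how a self-improvement loop behaves if returns on "research" compound versus diminish. The growth functions and constants are made up for the sake of the example.

    import math

    def run(gain, steps=20, capability=1.0):
        # capability grows each step by whatever the assumed return curve says
        history = [capability]
        for _ in range(steps):
            capability += gain(capability)
            history.append(capability)
        return history

    # hypothetical curves: compounding returns vs. diminishing (logarithmic) returns
    compounding = run(lambda c: 0.1 * c ** 1.5)
    diminishing = run(lambda c: 0.1 * math.log(1 + c))

    print(compounding[-1])  # grows explosively over 20 steps
    print(diminishing[-1])  # crawls along; no runaway growth

The whole argument over "foom" scenarios is, in effect, a disagreement about which of those two curves (if either) describes reality.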


I don't see why a program needs to be a self-improving Charles Stross monster to have an impact on the world, for good or ill.


Hype then cool-down is a sine wave... See, not linear at all!


Or maybe the first few AGIs will want to spend their days watching YouTube videos rather than diving into AI research. The only intelligences we know of that are capable of working on AGI are humans. We're assuming that not only will we be able to replicate human-like intelligence (seems likely, but might be much further away than many think), but that we'll be able to isolate the "industrious" side of human intelligence (not sure if we'll even be able to agree on what this is), enhance it in some way (how?), and that this enhancement will be productive.

But even if we can do all that any time soon (which is a pretty huge if), we don't even know what the effect will be. It's possible that if we remove all of the "I don't want to study math, I want to play games" or "I'm feeling depressed now because I think Tim's mad at me" parts of human intelligence, we'll end up removing the human ingenuity important to AGI research. It might be that the resulting AGI is much worse at researching AI than a random person you pull off the street.



