
It seems like most people expect the emergence of strong AI to be a sudden event, something similar to the creation of a nuclear bomb. However, it's far more likely that AI will undergo gradual development, becoming more and more capable, until it is similar in its cognitive and problem-solving abilities to a human. It's likely that we won't even notice the exact point at which that happens.

I'm not even sure governments are interested in developing AGI. They probably want good expert systems as advisers, and effective weapons for the military. Neither of those requires true human-level intelligence. Human rulers will want to stay in control, and building something that can take that control from them is not in their interests. There is likely to be an arms race between world superpowers, but it will probably be limited to multiple narrow AI projects.

Of course, improving narrow AI can lead to AGI, but this won't be the goal, IMO. And it's not a certainty. You could probably build a computer that analyses current events and predicts future ones really well, so the President could use its help to make decisions. That does not mean this computer will become AGI, or that it will become "self-aware". It does not need a personality to perform its intended function, so why would it develop one?

Finally, most people think that AGI, when it appears, will quickly become smarter than humans. This is not at all obvious, or even likely. We humans possess general intelligence, and we don't know how to make ourselves smarter. Even if we could change our brains instantaneously, we wouldn't know what to change! Such knowledge requires a lot of experiments, and those take time. So, sure, self-improvement is possible, but it won't be quick.

There's good reason to suspect that when AGI appears, it will quickly be developed to clearly super-human capabilities, simply given the differences in capability between species we are very closely related to.

Bostrom and others make the argument that the difference in intelligence between a person with an extremely low IQ and one with an extremely high IQ could be very small relative to the possible differences in intelligence/capability among various (hypothetical or actual) sentient entities.

There's also the possibility of easy expansion in hardware or knowledge/learning resources once a software-based intelligent entity exists. E.g., if we're thinking purely of a speed difference in thinking, a speedup by a significant factor could come purely from software optimization, and more still if specialized computing hardware is developed for the time-critical parts of the AI's processes. Ten PhDs working on a problem is clearly more formidable than one PhD working on a problem, even if they are all of equal intelligence.

How do you measure intelligence? If we take an IQ score as the measure, we will see that many individuals with high recorded IQs are not that remarkable when it comes to their activities. Usually they don't make huge advances in any field, or become ultra rich.

We don't know if humans are 1000 times smarter than rats. Maybe we are 10 times smarter, or 1,000,000 times. We don't know how much smarter Perelman or Obama is than a Joe Sixpack. We don't even know what "smarter" means. So talking about some hypothetical "sentient entities", and how much "smarter" they could be compared to anything, is a bit premature, IMO.
