
This example of a human error has nothing to do with AI.



It's a small demonstration of the dangers of putting computer programs that do exactly as they are programmed in charge of making important decisions.

Let's say we program an AI and get one little detail wrong and things go to hell as a result. We can call that "human error" or "AI error" but either way it's a reason for caution.

I am actually somewhat concerned by this OpenAI project, and here's why. Suppose there's some kind of "first mover advantage," where the first nation to build a sufficiently smart AI can neuter the attempts of other nations. If there's a first mover advantage, we don't want a close arms race, because then each team will be incentivized to cut corners in order to be the first mover. Suppose international tensions happen to be high around this time, and nations race to put anything into production in order to be first.

The issue with something like OpenAI is that increasing the common stock of public AI knowledge leaves arms race participants closer to one another, which means a more heated race.

And if there's no first mover advantage, that's basically the scenario where AI was never going to be an issue to begin with. So it makes sense to focus on preparing for the more dangerous possibility that there is a first mover advantage.


It seems like most people expect the emergence of strong AI to be a sudden event, something similar to the creation of the nuclear bomb. However, it's far more likely that AI will undergo gradual development, becoming more and more capable, until it is similar in its cognitive and problem-solving abilities to a human. It's likely that we won't even notice the exact point at which that happens.

I'm not even sure governments are interested in developing AGI. They probably want good expert systems as advisers, and effective weapons for the military. Neither of those requires true human-level intelligence. Human rulers will want to stay in control, and building something that could take that control from them is not in their interests. There will likely be an arms race between world superpowers, but it will probably be limited to multiple narrow AI projects.

Of course, improving narrow AI can lead to AGI, but this won't be the goal, IMO. And it's not a certainty. You can probably build a computer that analyses current events and predicts future ones really well, so the President can use its help to make decisions. That does not mean this computer will become AGI, or that it will become "self-aware." It does not need a personality to perform its intended function, so why would it develop one?

Finally, most people think that AGI, when it appears, will quickly become smarter than humans. This is not at all obvious, or even likely. We, humans, possess AGI, and we don't know how to make ourselves smarter. Even if we could change our brains instantaneously, we wouldn't know what to change! Such knowledge requires a lot of experiments, and those take time. So, sure, self-improvement is possible, but it won't be quick.


There's good reason to suspect that when AGI appears, it will quickly be developed to clearly super-human capabilities, judging from the large differences in capability between us and species we are very closely related to.

Bostrom and others make the argument that the difference in intelligence between a person with extremely low IQ and one with extremely high IQ could be very small relative to the possible differences in intelligence/capability among various (hypothetical or actual) sentient entities.

There's also the ease of expanding hardware or knowledge/learning resources once a software-based intelligent entity exists. E.g., thinking purely of speed differences: a significant speedup could come from software optimization alone, and more still if specialized computing hardware is developed for the time-critical parts of the AI's processes. Ten PhDs working on a problem are clearly more formidable than one PhD working on it, even if they are all of equal intelligence.


How do you measure intelligence? If we take an IQ score as the measure, we will see that many individuals with high recorded IQ are not that remarkable when it comes to their actual activities. Usually they don't make huge advances in any field, or become ultra rich.

We don't know if humans are 1,000 times smarter than rats. Maybe we are 10 times smarter, or 1,000,000 times. We don't know how much smarter Perelman or Obama is than a Joe Sixpack. We don't even know what "smarter" means. So talking about hypothetical "sentient entities," and how much "smarter" they could be than anything, is a bit premature, IMO.



