
Actually, there's a growing body of evidence that a single, general-purpose algorithm in the human brain gives rise to intelligence. For one, there's the fact that every part of the cortex looks and behaves much the same. There's also the fact that the brain is very plastic in what it learns: the auditory cortex can learn to "see" if we reroute the signals from the eyes away from the visual cortex and into the auditory cortex. It's very unlikely that our brain is hard-wired to recognize faces, for instance; rather, it learns to do so using this generic learning algorithm.

I urge you to watch Andrew Ng's talk that I linked to in the post, and read On Intelligence (http://www.amazon.com/On-Intelligence-Jeff-Hawkins/dp/080507...) by Jeff Hawkins, a book that totally changed the way I look at intelligent behavior.




Yep, I've seen his talk. It's quite fascinating. However, what you're talking about is a learning algorithm, which does not necessarily equate to intelligence. OpenCyc is probably the best example illustrating my point.

Edit: on second thought, you probably meant that given such a general-purpose learning algorithm and a suitable environment, the algorithm would in time learn enough to produce intelligence of some kind (of what kind, I'm not sure) that's capable of thinking. In that case, I agree with you, and I'll have to revise my opinion, but I'm still not sure it qualifies as an emergent phenomenon arising from simple rules. An analogy would be Google's search algorithm running on huge amounts of data. Would you call the search results an emergent phenomenon arising from simple rules?


The most concrete version of my point is that I don't think the most powerful AI we create will have, for instance, a human-coded algorithm for detecting faces. Instead, it'll have the ability to read electrical signals from a camera and understand the changing patterns in them, including the presence of faces. That ability to understand changing patterns would come from rules "simpler" than any rules designed specifically to recognize faces, roughly in the spirit of the toy sketch below.
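To make that concrete, here's a toy sketch (mine, not anything from Ng's talk or Hawkins's book; the synthetic 8x8 "images" and the make_image helper are made up purely for illustration). A single generic learning rule, plain logistic regression trained by gradient descent on raw pixel vectors, ends up detecting a crude "face" pattern without any face-specific code:

  # One generic update rule applied to raw pixels; no face logic anywhere.
  import numpy as np

  rng = np.random.default_rng(0)

  def make_image(face):
      """Flattened 8x8 image: noise, plus a crude "face" if asked."""
      img = rng.normal(0.0, 0.3, (8, 8))
      if face:
          img[2, 2] += 1.0    # left "eye"
          img[2, 5] += 1.0    # right "eye"
          img[5, 2:6] += 1.0  # "mouth"
      return img.ravel()

  # 400 labelled examples, alternating face / non-face.
  X = np.array([make_image(i % 2 == 0) for i in range(400)])
  y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(400)])

  # The whole "algorithm": one weight per pixel, nudged by gradient descent.
  w, b = np.zeros(X.shape[1]), 0.0
  for _ in range(500):
      p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of "face"
      w -= 0.1 * X.T @ (p - y) / len(y)
      b -= 0.1 * np.mean(p - y)

  test = np.array([make_image(True), make_image(False)])
  print(1.0 / (1.0 + np.exp(-(test @ w + b))))  # roughly [~1.0, ~0.0]

Nothing in that loop knows what a face is; the face-shaped weights fall out of the same generic update rule that would just as happily learn any other pattern in the pixels.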

So yes, a general-purpose learning algorithm, given the right paradigm, would learn to think as powerfully as we do. And it would do so in ways its programmers could never predict.

In the same vein, I would say that Google's search results are an emergent phenomenon, albeit not quite as interesting as general-purpose intelligence, because it's intractable to predict what Google will return for a given query even if we know all of its rules. Keep in mind that there are degrees of emergence; it's not black and white. (On the other hand, I don't think Google's algorithm is as "simple" as it originally was, but that's for another discussion.)



