
At the time I did my AI post-grad, there were broadly speaking three schools of thought on how general AI would be achieved: via (i) symbolic AI (or "classic AI"), (ii) connectionist AI (i.e. neural networks, now "deep learning"), and (iii) what they called "robotic functionalism". It sounds like this article is referring to the last group, i.e. the view that embodiment in, and interaction with, the physical world are necessary for general intelligence. I can't find any references to it by this term, but as others have noted, this is not a new idea. Personally, I've never been convinced that you have to have a physical presence, and I've sometimes suspected the theory existed to allow robotics to fall within the AI camp. That said, I do think hybrid solutions (i.e. combinations of more than one "narrow" approach) are among the most promising areas right now.

Human intelligence is a tool evolved to interact with our environment. Not having an environment to interact with is, imho, a serious problem when trying to define/identify intelligence.

On the other hand, I'm not sure the environment necessarily needs to be physical. Ages ago, I worked on reinforcement learning in a simulated environment, which can provide lots of advantages.
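To make that concrete, here's a minimal sketch of the kind of simulated-environment setup I mean: tabular Q-learning on a toy one-dimensional gridworld. The environment, reward scheme, and hyperparameters are all invented for illustration; real projects would use a far richer simulator, but the appeal is the same: episodes are cheap, fast, and risk-free.

```python
import random

# Tabular Q-learning on a tiny simulated gridworld.
# States 0..4 laid out in a line; reaching state 4 ends the episode.
N = 5
ACTIONS = (-1, +1)  # step left or step right

def step(state, action):
    """Simulated environment: move, clamp to the grid, reward the goal."""
    nxt = max(0, min(N - 1, state + action))
    reward = 1.0 if nxt == N - 1 else 0.0
    done = nxt == N - 1
    return nxt, reward, done

q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for _ in range(500):  # 500 episodes cost nothing: no robot, no wear, no danger
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2, r, done = step(s, a)
        # Standard Q-learning update.
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# After training, the greedy policy should head right from every state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N - 1)}
print(policy)
```

The same loop against a physical robot would take hours per run and risk hardware; in simulation it finishes in milliseconds, which is exactly the trade-off being discussed.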

And that's the heart of AI's problem: you need an oversimplified world for your research to produce the short-term results that sustain your project's existence. But trimming away nature's complex signals and noise also limits your solution/model so much that the system becomes too simplistic and fragile (i.e. brittle) to thrive in the much-more-complex real world.

After 50+ years of AI research that hasn't scaled or meaningfully progressed on the fundamental capabilities needed by a synthetic mind, you'd think we'd agree more that simplifying reality into something easier to model is the wrong basis for creating AI that's more than a toy.

Training algorithms in a virtual rather than a physical environment seems a lot more cost-effective.

On the other hand, as with self-driving cars, for some purposes it makes sense to provide physical, real-life situations and objects, with all their chaos and unexpected, unpredictable events.

For "true" intelligence matching human expectations, I imagine an understanding of the physical environment and its complexity is key. Otherwise, it could only deal with abstract concepts, like pure mathematics, while missing the experience of concrete reality it would need in order to relate to us.

It runs the other way too.

Developmental psychology demonstrates that you get very serious functional deficits if you deprive a young developing organism of its normal environment.

It's a mathematical-philosophical question, I suppose.

Can one use computers to simulate an environment with such fidelity that another computer doesn't notice the simulation or optimize around its quantum quirks?

Nvidia seems to think so. They claimed (a couple GTCs ago) to use virtual driving simulators to train their autonomous vehicle systems.

I think a more nuanced way of defining (iii) is that the intelligence an agent is capable of is limited by the extent to which it can perceive and interact with an environment.

At one level this has to be true: if it weren't, I could plop a black box on the table and say I've invented AGI (it just can't interact with anyone), and you would have no recourse but to accept my statement. We must necessarily define intelligence in terms of the interactions an agent can perform with some environment; otherwise we'd have no way to know of its intelligence.

Reminds me of the joke about sci-fi intelligent plants: they would have to be a product of intelligent design, because the massive energy intake needed to maintain intelligence would be useless to something sessile.

The "some sort of environment" part is an important distinction. Even if the system were smart enough to derive linguistic translation on its own through "first contact", it could suggest we generate free energy through floating-point error exploits and destroy the excess heat the same way, because that worked perfectly well in its environment and it had no indication it wouldn't be possible in the real world.

If I recall rightly from the documentary "Fast, Cheap, and Out of Control," a representative of the last school is Rodney Brooks, who among other things cofounded iRobot, the Roomba maker.

We'll see who gets there first, but I have a lot of sympathy for this approach. It's the one way we know intelligence got going in the first place. And given that too many degrees of freedom make coherent creativity difficult, it imposes some useful constraints.

Anyhow, I think those interested in this debate would enjoy that movie. It's 20+ years old, but the director, Errol Morris, is a stellar documentarian. And it's available to rent on the major platforms for a few bucks.

I did research on model-based reasoning in the early '90s and actually thought (though I never mentioned this to my supervisor) that Brooks had a lot of good arguments summed up by his pithy phrase "the world is its own best model".

The reason I personally believe you have to have a physical presence is that changes to the physical body which do not touch the brain can and do have profound impacts on consciousness. If no body is necessary, then why can't consciousness sustain itself for prolonged periods in situations of total sensory deprivation?

For (iii) you're probably thinking of Rodney Brooks (aka the co-founder of iRobot).

Since the 80s, he's generally been a proponent of the idea that you can't have human-like intelligence without placing that nascent intelligence in a human-like world, with human-like sensory perception.

Which more or less bears out our experience with deep learning. If you place intelligent algorithms in a world where their sole sensory inputs are matrices then what you get out doesn't look anything like human intelligence.
