On the other hand, I'm not sure the environment necessarily needs to be physical. Ages ago, I worked on reinforcement learning in a simulated environment, which can provide lots of advantages.
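To make the simulated-environment idea concrete, here's a minimal sketch of the usual setup: a toy environment object with reset/step semantics and a tabular Q-learning loop against it. The environment (a one-dimensional corridor) and all the names here are hypothetical illustrations, not anything from a real system:

```python
import random

class CorridorEnv:
    """Toy simulated environment: agent starts at position 0,
    goal is the last cell. Actions: 0 = left, 1 = right.
    Reward 1.0 on reaching the goal, 0 otherwise."""
    def __init__(self, length=5):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        delta = 1 if action == 1 else -1
        self.pos = max(0, min(self.length - 1, self.pos + delta))
        done = self.pos == self.length - 1
        return self.pos, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    random.seed(seed)
    env = CorridorEnv()
    q = [[0.0, 0.0] for _ in range(env.length)]
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Explore with probability eps, else act greedily.
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2, r, done = env.step(a)
            # Standard Q-learning update.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
```

The advantage being alluded to: the simulator can be reset, seeded, and run millions of times faster than any physical trial.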
After 50+ years of AI research that hasn't scaled or meaningfully progressed on the fundamental capabilities needed by a synthetic mind, you'd think we'd agree more that simplifying reality into something easier to model is the wrong basis for creating AI that's more than a toy.
On the other hand, as with self-driving cars, for some purposes it makes sense to provide physical, real-life situations and objects, with all their chaos and unexpected, unpredictable events.
For "true" intelligence matching human expectations, I imagine an understanding of the physical environment and its complexity is key. Otherwise it could only deal in abstract concepts, like pure mathematics, while missing the experience of concrete reality that would let it relate to us.
Developmental psychology demonstrates that you get very serious functional deficits if you deprive a young developing organism of its normal environment.
Can one use computers to simulate an environment with such fidelity that another computer doesn't notice the simulation and optimize around its quantum quirks?
Nvidia seems to think so. They claimed (a couple of GTCs ago) to use virtual driving simulators to train their autonomous vehicle systems.
At one level this has to be true; if it weren't, I could plop a black box on the table and say I've invented AGI, it just can't interact with anyone, and you would have no recourse but to accept my claim. We must necessarily define intelligence in terms of the interactions an agent can perform with some environment; otherwise we'd have no way to know of its intelligence.
The "some sort of environment" part is an important distinction: even an agent smart enough to derive linguistic translation on its own through "first contact" might suggest we generate free energy through floating-point error exploits and destroy excess heat the same way, because that worked perfectly well in its environment and it had no indication that wouldn't be possible in the real world.
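As a minimal illustration of the kind of simulator quirk meant here: IEEE-754 floating-point addition is not associative, so a quantity that "should" be conserved can drift depending on the order of operations. An optimizer living inside such a simulation could latch onto artifacts like this that have no physical analogue:

```python
# Floating-point addition is not associative in IEEE-754 doubles.
# A simulated physics engine summing energies in different orders
# can "create" or "destroy" tiny amounts of a conserved quantity.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one summation order
right = a + (b + c)  # another summation order

print(left == right)   # False: the two sums differ in the last bits
print(0.1 + 0.2 == 0.3)  # False: the classic rounding example
```

An agent rewarded on a float-accumulated energy budget is, in effect, being invited to discover these last-bit discrepancies.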
We'll see who gets there first, but I have a lot of sympathy for this approach. It's the one way we know intelligence got going in the first place. And given that too many degrees of freedom make coherent creativity difficult, it imposes some useful constraints.
Anyhow, I think those interested in this debate would enjoy that movie. It's 20+ years old, but the director, Errol Morris, is a stellar documentarian. And it's available to rent on the major platforms for a few bucks.
Since the 80s, he's generally been a proponent of the idea that you can't have human-like intelligence without placing that nascent intelligence in a human-like world, with human-like sensory perception.
Which more or less bears out our experience with deep learning. If you place intelligent algorithms in a world where their sole sensory inputs are matrices then what you get out doesn't look anything like human intelligence.