In some sense you can think of interfacing with the online world and trying to win attention as the general game that is being played.
This area is under-studied. The logicians spent decades on the high level planner part. The machine learning people are mostly at the lower and middle vision level - object recognition, not "what will happen next". There's a big hole in the middle. It's embarrassing how bad robot manipulation is. Manipulation in unstructured situations barely works better than it did 50 years ago. Nobody even seems to be talking about "common sense" any more.
"Common sense" can be though of as the ability to predict the consequences of your actions. AI is not very good at this yet, which makes it dangerous.
Back when Rod Brooks did his artificial insects, he was talking about jumping to human level AI, with something called "Cog". I asked him "You built a good artificial insect. Why not go for a next step, a good artificial mouse?" He said "Because I don't want to go down in history as the man who created the world's best artificial mouse".
Cog was a flop, and Brooks goes down in history as the inventor of the mass market robot vacuum cleaner. Oh well.
In a sense, back then the journey was the reward rather than the very unlikely short-term outcome.
The human brain can model the physics of a ball in flight, accurately and quickly. As the ball touches the fingertips it makes the smallest adjustments, again in tiny fractions of a second.
What makes me think of it like that is hearing about how the brain is actually really bad at predicting the path of things that don't behave that way. This was in the context of aiming unguided rocket launchers (I end up reading a lot of odd things). It seems the brain is really bad at predicting how a continuously accelerating projectile will travel, and you have to train yourself to ignore your intuitions and use the sighting system, which compensates for how the rocket actually travels, in order to hit a target with the thing.
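To see why intuition fails here, compare where a coasting projectile ends up with one whose motor is still burning. The numbers below are invented purely for illustration, not real ballistics:

```python
# Toy comparison: ballistic intuition vs. a continuously accelerating rocket.

def ballistic_x(v0, t):
    """Position if the projectile just coasts at muzzle velocity."""
    return v0 * t

def boosted_x(v0, a, t):
    """Position with a motor still burning: x = v0*t + a*t^2/2."""
    return v0 * t + 0.5 * a * t ** 2

v0, a = 30.0, 60.0          # m/s muzzle velocity, m/s^2 motor acceleration
for t in (0.5, 1.0, 1.5):
    coast, boost = ballistic_x(v0, t), boosted_x(v0, a, t)
    print(f"t={t:.1f}s  intuition: {coast:6.1f} m  actual: {boost:6.1f} m")
```

The gap grows quadratically with time, which is why you have to trust the sights over your gut.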
Compare what happens during a practice game of catch between six-year-old, first-time Little Leaguers and MLB starters.
If I decide to kick it, he reads my body language scarily well to figure out what direction it will probably go, and will adjust his position way ahead of time. If I throw it at a wall, he will run to where the angle will put the ball after it bounces. If I throw it high in the air, he knows where to run almost immediately (again using my body language to know where I might be trying to throw it). He's very hard to fool, too, and will quickly learn not to commit to a particular direction too early if it looks like I'm faking a throw.
I always feel like he’d make a great soccer goalie if he had a human body.
There are counterexamples, such as AlphaGo, which is all about planning and deep thinking. It combines learned policy and value networks with Monte Carlo tree search, trained largely through self-play.
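A caricature of that recipe (not AlphaGo's actual code; the simple Nim game and the stand-in evaluator are invented here): search a few plies, then trust a learned evaluator at the leaves instead of reading the game out to the end.

```python
# Caricature of the AlphaGo idea: shallow search + learned leaf evaluator.
# The game (take-1-to-3 Nim) and value function are toys, not AlphaGo itself.

def moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def value_estimate(stones):
    """Stand-in for a learned value net: + means good for the player to move.
    In this Nim variant, multiples of 4 are losing for the side to move."""
    return -1.0 if stones % 4 == 0 else 1.0

def search(stones, depth):
    """Negamax: search a few plies, then trust the evaluator at the leaves."""
    if stones == 0:
        return -1.0              # no stones left: the player to move has lost
    if depth == 0:
        return value_estimate(stones)
    return max(-search(stones - m, depth - 1) for m in moves(stones))

def best_move(stones, depth=3):
    return max(moves(stones), key=lambda m: -search(stones - m, depth - 1))

print(best_move(10))  # -> 2, leaving 8 (a multiple of 4) for the opponent
```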
We don't need to think 10 "turns" ahead when trying to walk through a door; we just try to push or pull on it. And if the door is locked, or if there's another person coming from the opposite side, we'll handle that situation when we come across it.
Doors are basically planning triggers, more than most things are.
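The door example as a sketch: cheap cached behaviors run first, and deliberate planning is only triggered when they fail. The helpers here are hypothetical stand-ins:

```python
# Sketch of the door strategy: react first, plan only on failure.
# try_push / try_pull / plan_around are hypothetical stand-ins.

def try_push(door):
    return door.get("opens") == "push"

def try_pull(door):
    return door.get("opens") == "pull"

def plan_around(door):
    """Expensive deliberation, invoked only when reflexes fail."""
    return "find key" if door.get("locked") else "wait for other person"

def get_through(door):
    for reflex in (try_push, try_pull):     # cheap cached behaviors first
        if reflex(door):
            return "walked through"
    return plan_around(door)                # the door just became a planning trigger

print(get_through({"opens": "push"}))       # -> walked through
print(get_through({"locked": True}))        # -> find key
```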
Can you expand on this statement? While I have no way to “debug” a horse’s brain in real-time, my experiences suggest they absolutely conduct complex decision-making while engaging in activities.
Two examples which immediately come to mind where I believe I see evidence of “if this, then that” planning behavior:
1. Equestrian jumping events; horses often balk before a hurdle
2. Herds of wild horses reacting to perceived threats and then using topographic and geographic features to escape the situation.
> intelligence is mostly about getting through the next 10-30 seconds of life without screwing up
In this context horses don't plan or have much capacity for shared learning, at least not as far as I know.
Quote: “This study indicates that horses do not learn from seeing another horse performing a particular spatial task, which is in line with most other findings from social learning experiments,”
This is probably a variant of Andrew Ng's observation that ML can solve anything a human could solve in one second, given enough training data.
But intelligence actually has a different role. It's not for those repeating situations that we could solve by mere reflex; it's for the rare situations where we have no cached response and need to think logically. Reflex is model-free reinforcement learning, and thinking is model-based RL. Both are necessary tools for making decisions, but they are optimised for different situations.
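The contrast in code, as a minimal sketch (the toy environment and learning parameters are invented, not from the comment above): the model-free agent caches values by sheer repetition, while the model-based one simulates ahead with a model before acting.

```python
import random

# Minimal contrast between the two tools, on a toy 1-D walk to a goal at 5.

GOAL, ACTIONS = 5, (-1, +1)

def step(state, action):                      # true dynamics, reward at goal
    nxt = max(0, min(GOAL, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0)

# --- Model-free "reflex": tabular Q-learning, values cached by repetition ---
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
for _ in range(2000):
    s = random.randint(0, GOAL - 1)
    a = random.choice(ACTIONS)
    nxt, r = step(s, a)
    target = r + 0.9 * max(Q[(nxt, b)] for b in ACTIONS)
    Q[(s, a)] += 0.1 * (target - Q[(s, a)])

def reflex_policy(s):                         # instant lookup, no simulation
    return max(ACTIONS, key=lambda a: Q[(s, a)])

# --- Model-based "thinking": simulate with a model before acting ---
def lookahead_policy(s, depth=3):
    def v(s, d):
        if s == GOAL or d == 0:
            return 1.0 if s == GOAL else 0.0
        return max(0.9 * v(step(s, a)[0], d - 1) for a in ACTIONS)
    return max(ACTIONS, key=lambda a: 0.9 * v(step(s, a)[0], depth - 1))

print(reflex_policy(3), lookahead_policy(3))  # both head toward the goal
```

Same decision, two very different costs: the reflex is a table lookup, the lookahead burns compute on simulation, which is exactly why you want the reflex for the situations that repeat.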
They will also open a gate to let another horse out of its stall, which I would count as some form of planning.
Beyond that I can't think of anything in all the years around them. They can manage to be surprised by the same things every single day.
Sounds like most human beings when given an unpleasant stimulus, for example a spider.
I am pleasantly surprised by how quickly they have been tackling big new decision spaces.
OpenAI has already done some experiments here. All the way down at the bottom, under the "surprising behaviors" heading, 3 of the 4 examples involve the AIs finding bugs in the simulation and using them to their advantage. The 4th isn't a bug exactly, but an edge case in their behavior that wasn't initially anticipated.