I'm curious if some future, hypothetical AGI agent, which had been trained to have these kinds of abilities, would be akin to how most humans see a ball in flight and just instinctively know (within reason) where the ball is going to go? We don't understand, consciously and in the moment, how our brain is doing these calculations (although obviously we can get to similar results with other methods), but we still trust the outputs.
Would some hypothetical future AI just "know" that tomorrow it's going to be 79 with 7 mph winds, without understanding exactly how that knowledge was arrived at?
I remember learning that humans trying to catch a ball don't actually predict where the ball will land, but rather move in a way that keeps the angle to the ball constant.
As a result, a human running to catch the ball over some distance (e.g. during a baseball game) runs along a curved path, not in a straight line to the point where the ball will drop (which would be evidence of having an intuition of the ball's destination).
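A rough toy sketch of the geometry behind that family of heuristics, with made-up numbers: for a drag-free fly ball, an observer standing exactly at the landing spot sees the tangent of the ball's elevation angle rise at a constant rate, while standing short or deep makes it accelerate or decelerate. So "move until the optical acceleration is zero" gets you to the catch point without ever predicting it:

    # Toy illustration, all numbers invented: compare the "optical
    # acceleration" (second derivative of tan(elevation angle)) seen by
    # observers standing short of, at, and beyond the landing spot.
    import numpy as np

    g = 9.8
    vx, vy = 25.0, 20.0              # launch velocity components, m/s
    T = 2 * vy / g                   # flight time of the drag-free ball
    x_land = vx * T                  # where it comes down

    t = np.linspace(0.05, 3.0, 400)  # watch the first ~3 s of flight
    x, y = vx * t, vy * t - 0.5 * g * t**2

    for label, obs in [("15 m short of landing spot", x_land - 15),
                       ("exactly at landing spot   ", x_land),
                       ("15 m beyond landing spot  ", x_land + 15)]:
        tan_theta = y / (obs - x)    # tangent of the elevation angle
        accel = np.gradient(np.gradient(tan_theta, t), t)
        print(f"{label}: mean optical acceleration = {accel[20:-20].mean():+.4f}")

The middle case comes out as (numerically) zero, which is exactly the cue that tells the fielder they're standing in the right place.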
This hypothesis could be tested, now that major league baseball tracks the positions of players in games. In the MLB app they show animations of good outfield plays with "catch difficulty" scores assigned, based (in part) on the straight-line distance from the fielder's initial position to the position of the catch. The "routes" on the best catches are always nearly-straight lines, which suggests that high-level players have developed exactly this intuitive sense.
Certainly what I was coached to do, what outfielders say they do, and what I see watching the game, is to "read" the ball, run towards where you think the ball is going, and then track the ball on the way. I was and am a shitty outfielder, in part because I never developed a fast-enough intuitive sense of where the ball is going (and because, well, I'm damn slow), but watch the most famous Catch[1] caught on film, and it sure looks like Mays knew right away that ball was hit over his head.
There are a few theories of that kind. You are probably referring to the Optical Acceleration Cancellation theory[1]. There are some similar, later, so-called "direct perception" theories too.
The problem with these is that they don't really work, often even in theory. People do seem to predict at least some aspects of the trajectory, although not necessarily the whole trajectory [2].
Agreed. Reminds me of juggling: while learning, I noticed that as long as I could see each ball for at least a split second on its upward trajectory I could "tell" if it would be a good throw or not. To keep both hands/paths in my view I would stare basically straight ahead, not at the top of the arc, and could do it at any height. Now I can do it much more by feel and the motion is muscle memory, but the visual cues were my main teacher.
It makes sense that there are several heuristics. After all, "Thinking, Fast and Slow" already makes the point that human brains have several layers of processing with different advantages and drawbacks depending on the situation.
> would be akin to how most humans see a ball in flight and just instinctively know (within reason) where the ball is going to go?
Up to a point ... and that point is more or less the same as the point where humans can no longer catch a spinning tennis racquet.
We understand the gravitational rainbow arc of the centre of mass, but we fail at predicting the low-order chaotic spin of tennis racquet mass distributions.
Other butterflies are more unpredictable, and the ones that land on a camel's back, breaking a dam of precariously balanced rocks, are a particular problem.
Yes, humans are obviously limited in the things we can instinctively, intuitively predict. That's not really the point. The point is whether something that has been trained to do more complicated predictions will have a similar feeling when doing those predictions (of being intuitive and natural), or if it will feel more explicit, like when a human is doing the calculus necessary to predict where the same ball is going to go.
My phrase "future, hypothetical" was trying very specifically to avoid any discussions about whether current AI have qualia or internal experiences. I was trying to think about whether something which did have some kind of coherent internal experience (assumed for the sake of the idle thought), but which had far greater predictive abilities than humans, would have the same intuitive feeling when making those much more complicated predictions.
It was an idle thought that was only barely tangentially related to the article in question, and was not meant to be a comment at all on the model in the article or on any current (or even very near future) AI model.
I don't expect anyone would have an answer, given the extreme degree of hypothetical-ness.
When you say "The point is whether a LLM has any feelings..." are you thinking specifically about an LLM, or AI in general?
I've seen nothing to indicate that Project Aardvark is using an LLM for weather prediction.
I think “chain-of-thought” LLMs, with access to other tools like Python, already demonstrate two types of “thinking”.
We can ask an LLM a simple question like “how many ‘r’s are in the word strawberry”, and an LLM with no access to tools will quite confidently, and likely incorrectly, give you an answer. There’s no actual counting happening, nor any kind of understanding of the problem; the LLM will just guess an answer based on its training data. But that answer tends to be wrong, because those types of queries don’t make up a large portion of its training set, and if they do, there’s a large body of similar queries with vastly different answers, which ultimately results in confidently incorrect outputs.
But provide an LLM with tools like Python, and a “chain-of-thought” prompt that allows it to recursively re-prompt itself while executing external code and viewing the outputs, and it can easily get the correct answer to the query “how many ‘r’s are in the word strawberry”, by simply writing and executing some Python to compute the answer.
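For concreteness, the code the model ends up executing can be as trivial as this (a hypothetical example, assuming a Python execution tool is wired up):

    # The kind of trivial snippet a tool-using LLM might emit and run,
    # instead of guessing from its training data:
    word = "strawberry"
    print(word.count("r"))   # -> 3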
Those two approaches to problem solving are strikingly similar to intuitive vs analytical thinking in humans. One approach is driven entirely by pattern matching, and breaks down when dealing with problems that require close attention to specific details, the other is much more accurate, but also slower because directed computation is required.
As for your hypothetical “weather AI”, I think it’s pretty easy to imagine an AGI capable of confidently predicting the weather tomorrow while not being capable of understanding how it computed the prediction, beyond a high-level hand-wavy explanation. Again, that’s basically what LLMs do today: confidently make predictions of the future with zero understanding of how or why they made those predictions. But you can happily ask an LLM how and why it made a prediction, and it’ll give you a very convincing answer that will also be a complete and total deception.
> would be akin to how most humans see a ball in flight and just instinctively know (within reason) where the ball is going to go?
Generally no. If I show you a puddle of water, can you tell me what shape was the ice sculpture that it was melted from?
One is Newtonian motion and the other is a complex chaotic system with sparse measurements of ground truth. You can minimize error propagation, but it’s a diminishing-returns problem (except in rare cases like natural disasters, where a 6h warning can make a difference).
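To put a number on that sensitivity, here's a toy sketch (using the textbook Lorenz '63 system as a stand-in for the atmosphere; the setup is purely illustrative):

    # Two copies of the same toy "atmosphere", differing by one part in a
    # billion in the initial state, integrated with plain forward Euler.
    def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 1.0)           # "true" state
    b = (1.0 + 1e-9, 1.0, 1.0)    # same state with a tiny measurement error

    for step in range(1, 6001):
        a = lorenz_step(*a)
        b = lorenz_step(*b)
        if step % 1000 == 0:
            gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
            print(f"t = {step * 0.005:4.1f}   separation = {gap:.1e}")

By the end of the run the two trajectories are about as far apart as the attractor allows, so beyond some horizon extra precision in the initial measurement buys you almost nothing. That's the diminishing-returns wall.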
> "Sma," the ship said finally, with a hint of what might have been frustration in its voice, "I'm the smartest thing for a hundred light years radius, and by a factor of about a million ... but even I can't predict where a snooker ball's going to end up after more than six collisions."
[GCU Arbitrary in "The State of the Art"]
6? That can't be right. I don't know how big a GCU is, so the scale could be up to 1 OOM off, but a full redirection of all simulation capacity should let it integrate out further than that.
For ball-to-ball collisions, 6 is already a highly conservative estimate -- this is basically a chaotic system (outcome after a few iterations, while deterministic, is extremely sensitive to exact starting conditions).
The error scales up exponentially with the number of (ball-to-ball) collisions.
So if the initial ball position is off by "half a pixel" (=> always non-zero) this gets amplified extremely quickly.
Your intuition about the problem is probably distorted by considering/having experienced (less sensitive) ball/wall collisions.
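A back-of-the-envelope sketch of that blow-up, with invented numbers (snooker-ish ball radius, an assumed ~1 m of travel between collisions, and an absurdly precise initial aim): the angular error gets multiplied by very roughly (travel distance / ball radius) at each ball-to-ball collision.

    # Crude estimate: each ball-to-ball collision magnifies the angular
    # error by about (distance travelled / ball radius). All numbers here
    # are assumptions for illustration only.
    import math

    ball_radius = 0.026                     # ~26 mm snooker ball radius
    spacing = 1.0                           # assumed metres between collisions
    amplification = spacing / ball_radius   # rough per-collision multiplier

    error = math.radians(1e-6)              # start a millionth of a degree off
    for collision in range(1, 10):
        error *= amplification
        print(f"after {collision} collisions: ~{math.degrees(error):.3g} degrees off")
        if error > math.pi:
            print("-> outgoing direction is effectively unpredictable")
            break

Even with that absurd initial precision, the outgoing direction is anyone's guess after about six ball-to-ball collisions, which is roughly where the GCU throws in the towel.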
> Would some hypothetical future AI just "know" that tomorrow it's going to be 79 with 7 mph winds, without understanding exactly how that knowledge was arrived at?
I think a consciousness with access to a stream of information tends to filter out the noise to see the signal, so in those terms, being able to "experience" real-time climate data and "instinctively know" which variable is headed in which direction would come naturally.
So, personally, I think the answer is yes. :)
To elaborate a little more - when you think of a typical LLM, the answer is definitely no. But if an AGI ends up being comprised of something akin to "many component LLMs", then one part might very well have no idea how the information it is receiving was actually determined.
Our brains have MANY substructures in between neuron -> "I", and I think we're going to start seeing/studying a lot of similarities with how our brains are structured at a higher level and where we get real value out of multiple LLM systems working in concert.