The real question: How much of human behavior is not sentient?
What this demonstrates is that much of human intelligence is not as special as we thought. Aristotle claimed that humans were intelligent because they could do arithmetic. Now we know how few gates it takes to do arithmetic. Then checkers. Then chess. Then go. Then poker. Now chatting.
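To make the gate count concrete, here's a minimal sketch (my own illustration, not from the thread): a 4-bit ripple-carry adder built from nothing but AND, OR, and XOR gates. Each full adder uses 5 gates, so n-bit addition needs only about 5n gates.

```python
def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """One-bit full adder: 2 XOR, 2 AND, 1 OR -- 5 gates total."""
    s = a ^ b ^ carry_in                         # sum bit
    carry_out = (a & b) | ((a ^ b) & carry_in)   # carry bit
    return s, carry_out

def add4(x: int, y: int) -> int:
    """Add two 4-bit numbers by chaining four full adders (~20 gates)."""
    carry, result = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

assert add4(5, 6) == 11
```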
As I point out occasionally, AI still sucks at animal-level common sense, defined as getting through the next 30 seconds without a major screwup. And at manipulation in unstructured environments. Both of which a squirrel, with a brain the size of a peanut, can do.
We still don't know how to build a good cerebellum. That's the big missing piece in AI. I went down some dead ends in that area in the 1990s. The big problem in that area today is that it's not of benefit to the advertising industry, so it's not being heavily funded.
I think there is a non-zero chance that AI as it stands today doesn't actually suck at animal-level common sense; it's just that we haven't built a realistic enough simulation environment for it to train in.
While at first glance it may appear that humans learn much better than AI in unstructured environments, I would argue that from the very moment a human is born, they are in a highly structured environment provided by their parents and modern society.
If you made some baby robots and gave them human parents, would they be much better suited for driving a car after 16 years?
Of course I am hand waving away a lot of technical complexity here, but perhaps the road to artificial general intelligence... is an artificial general simulation?
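To gesture at what an "artificial general simulation" might mean in practice, here is a hedged sketch of a domain-randomized training loop. Every name here (World, Agent) is a hypothetical stand-in, not a real library; the point is the shape of the loop, with the hard parts deliberately elided.

```python
import random

class World:
    """Stand-in for one randomized simulated environment."""
    def __init__(self, seed: int):
        self.rng = random.Random(seed)
        self.state = self.rng.random()

    def step(self, action: float) -> tuple[float, float]:
        reward = -abs(self.state - action)   # toy objective: track the state
        self.state = self.rng.random()       # toy dynamics: state jumps around
        return self.state, reward

class Agent:
    """Stand-in for whatever learns to get through 'the next 30 seconds'."""
    def act(self, observation: float) -> float:
        return observation                   # trivial policy: copy what it sees

    def learn(self, reward: float) -> None:
        pass                                 # learning rule deliberately elided

agent = Agent()
for episode in range(1000):                  # a fresh randomized world each episode
    world = World(seed=episode)
    obs = world.state
    for _ in range(30):                      # "the next 30 seconds"
        obs, reward = world.step(agent.act(obs))
        agent.learn(reward)
```

The structured part isn't the learning rule; it's that the curriculum of worlds plays the role human parents and society play for a child.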
The real question is, what even is sentience? The only real certainty is that one is sentient oneself - and that's because "I'm sentient" is taken as an axiom! From there, you basically have two approaches: either try to somehow define the specifics of sentience, or give up and say that anything that behaves like a sentient entity is one. The latter is the premise of the Turing test. Attempts at the former all decompose one's own sentience into pieces that might be recognized in other entities - but doing so "from within" produces criteria that are tautological or circular.
I strongly suspect that notions such as sentience and self-awareness don't have a meaningful physical definition, but are necessary for us to function, because they're the mental glue that makes thinking about "I" possible at all - which is broadly advantageous for survival. But why should a different mind necessarily need the same kind of glue to function as a whole? And how would we even know whether its glue is the same as ours, without being able to observe it from the perspective of that other mind (since we only observe our own sentience from the inside, that perspective would be necessary for an accurate comparison)?
I furthermore suspect that we refuse to embrace these notions because our sense of identity - of self - is, by necessity, so strong that any premise that undermines it is strongly disadvantaged from the get-go. Basically, we do not like to even contemplate the notion that "I am not one indivisible whole", even if it might be closer to how our mind actually operates on the physical level.
I find the opposite question more interesting: How much of human behaviour is sentient? Probably very little to none, right? Most of what we do consists of routine habits we’ve trained over a long time. I spend most of my day solving abstract puzzles at work. Or typing these words into this website.
If our emotions were trained into us over many generations as a way to cooperate better and improve survivability, and an AI chatbot's fake emotions were trained over many generations in which only the best neural nets survive… maybe Aristotle was right: the bar is actually very low, and these programs are just as sentient as we are?
A related thought experiment concerns someday using cyborg technology to stave off dementia and Alzheimer's. If the brain's long-term memory store could be replaced with a computer database without any loss of sentience, people could avoid the tragedy of losing their memories. This would require that the long-term store is not itself a sentient part of the brain, and that the sentient experience of reliving a memory happens elsewhere (perhaps in the perception centers). That conjecture feels plausible to me: reliving a memory feels like perceiving a VR recording of it, just much less vivid, which suggests the perception centers are involved. Furthermore, memories that are not currently being accessed are not "felt" at all, further suggesting that the long-term store could be a non-sentient part of our brains.
I'm not sure if this is in the Love, Death & Robots version of Zima Blue, but it was part of the written story. The clips from people reviewing the LD&R version always concentrate on Zima and its pursuit of the color. There's an entire second story about the reporter that is intertwined with Zima's.
> ‘But there isn’t anything natural about being alive a thousand years after I was born,’ I said. ‘My organic memory reached saturation point about seven hundred years ago. My head’s like a house with too much furniture. Move something in, you have to move something out.’
> ‘Let’s go back to the wine for a moment,’ Zima said. ‘Normally, you’d have relied on the advice of the AM, wouldn’t you?’
> I shrugged. ‘Yes.’
> ‘Would the AM always suggest one of the two possibilities? Always red wine, or always white wine, for instance?’
> ‘It’s not that simplistic,’ I said. ‘If I had a strong preference for one over the other, then yes, the AM would always recommend one wine over the other. But I don’t. I like red wine sometimes and white wine other times. Sometimes I don’t want any kind of wine.’ I hoped my frustration wasn’t obvious. But after the elaborate charade with the blue card, the robot and the conveyor, the last thing I wanted to be discussing with Zima was my own imperfect recall.
There's another page of the story that starts out with "As for me . . ." and takes place a couple of decades later, as the reporter reflects.
> The big problem in that area today is that it's not of benefit to the advertising industry, so it's not being heavily funded.
I feel like a good cerebellum would be worth untold wealth for ad targeting and even ad copy generation. The current models seem to me to be no better than the crudest keyword matches.
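For concreteness, "the crudest keyword matches" amounts to something like the toy below (my own illustration, not any real ad platform's code): intersect the page's surface tokens with each ad's keyword set, with no model of intent or context.

```python
# Hypothetical keyword inventory; nothing here reflects a real ad system.
ADS = {
    "running shoes": {"run", "marathon", "shoes"},
    "standing desk": {"desk", "office", "back"},
}

def match_ads(page_text: str) -> list[str]:
    """Crude targeting: match on raw word overlap only."""
    words = set(page_text.lower().split())
    return [ad for ad, keywords in ADS.items() if words & keywords]

# Surface tokens only -- a complaint about work triggers both ads:
print(match_ads("pulled a marathon session at my desk and now my back hurts"))
# -> ['running shoes', 'standing desk']
```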
> …it's not of benefit to the advertising industry…
If the industry could be sold on the idea of a persuadabot, a bot that learns how to persuade you to buy the product, this might change. Surely Google and Facebook have enough data to train such bots for many of their users.
Interesting point re the cerebellum. It may be that a better measure of AI's general capabilities, at least from where we are today, is the ability to make a good cup of tea: an example of an embodied problem. [1]
The lack of what you call animal-level common sense in these algorithms is tied to a lack of awareness. (Incidentally, the cerebellum handles attention in part too!) This quote from later in the linked thread is apt:
> I fully expect that the next Douglas Hofstadter is a 19-year-old currently playing with actual robots and BERT models, not clever thought experiments. They’ll write this generation’s Gödel, Escher, Bach bottom up starting with tea-making-butler blooper reels.