The author says that the English speaker who simply manipulates symbols and follows rules will never get a joke written in Chinese, even if the people outside the room understand it and believe it was produced by an intelligence that understands the joke.
But that contains the assumption that the human is the consciousness in that arrangement, when in fact the human is just the energy source that drives the hardware. One might as well say that a computer can never create a 3D drawing because its power supply doesn't understand arithmetic.
Also, people taking Searle's position rarely reckon with just how big that rulebook would be.
On the other hand, if you could construct an algorithm which can "understand" Chinese using only polynomial space and runtime, then it seems a lot more intuitively clear that there is something genuinely intelligent happening.
It's still unclear to me what the point of this type of philosophy is, though. Perhaps its practitioners find it entertaining? I haven't the faintest idea why we'd fund this more than, say, people who want to make giant stone heads and leave them in the city's parks, or people who make funny videos about their cats. It just goes around in circles, finding new ways to make the same (dualist) argument that I considered and rejected as an infant. I don't think we should ban it (or the giant stone head people), but it seems like something that could be done independently, on their own time, without troubling society for funding.
Harnad, at least, had been doing something else interesting and important unrelated to his philosophical musings: rebelling against the publisher monopoly on research output. Though he taught classes on Cognitive Science, he was partly at the university because of his radical thinking about how scholarly communication should work, in the form of the journal BBS (Behavioral and Brain Sciences). This was of course way ahead of its time; the things BBS struggled to do for a small number of scholarly articles in the 1970s can now be easily replicated online for any topic you can think of.
But ... maybe that's why the human brain works so well. The tens of billions of neurons in your brain are designed to adapt to patterns and adjust to external stimuli from sensory input.
Just as compression algorithms can only compress something so far before losing data, maybe the limited amount of processing we can throw at this problem limits the usefulness of our solutions.
I mean how many resources can you even throw at "302 neurons and 95 muscle cells"?
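To make the compression analogy above concrete, here's a quick sketch using Python's built-in zlib (strictly, a lossless compressor just stops shrinking once the redundancy is used up rather than losing data, but the limit is the same idea; the sample data is made up):

```python
import os
import zlib

# Quick illustration (made-up data) of the "can only compress so far" limit.
structured = b"the cat sat on the mat. " * 1000   # highly repetitive text
random_ish = os.urandom(len(structured))          # essentially incompressible

for name, data in [("structured", structured), ("random", random_ish)]:
    compressed = zlib.compress(data, level=9)
    print(f"{name}: {len(data)} -> {len(compressed)} bytes "
          f"({len(compressed) / len(data):.1%} of original)")
```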
edit: down the wormhole I go
just look at the screenshots of the "brain": https://github.com/openworm/c302
and the sim: https://github.com/openworm/OpenWorm/blob/master/README.md#q...
Reading further, more physics:
> Some simulators enable ion channel dynamics to be included and enable neurons to be described in detail in space (multi-compartmental models), while others ignore ion channels and treat neurons as points connected directly to other neurons. In OpenWorm, we focus on multi-compartmental neuron models with ion channels.
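To make the point-neuron vs. multi-compartment distinction concrete, here's a minimal sketch (not OpenWorm code; purely passive membranes with no ion channels, and all parameter values are illustrative, not biologically tuned):

```python
# Minimal sketch (not OpenWorm code): a passive point neuron vs. a passive
# two-compartment neuron coupled by an axial conductance. No ion channels.

dt = 0.01          # time step (ms)
C_m = 1.0          # membrane capacitance (illustrative units)
g_leak = 0.3       # leak conductance
E_leak = -65.0     # leak reversal potential (mV)
g_axial = 0.5      # coupling conductance between the two compartments

def step_point(v, i_inj):
    """Point neuron: a single voltage driven by leak and injected current."""
    dv = (-g_leak * (v - E_leak) + i_inj) / C_m
    return v + dt * dv

def step_two_compartment(v_soma, v_dend, i_inj):
    """Two compartments; current is injected into the dendrite and spreads to the soma."""
    i_axial = g_axial * (v_dend - v_soma)  # axial current, dendrite -> soma
    dv_soma = (-g_leak * (v_soma - E_leak) + i_axial) / C_m
    dv_dend = (-g_leak * (v_dend - E_leak) - i_axial + i_inj) / C_m
    return v_soma + dt * dv_soma, v_dend + dt * dv_dend

v = vs = vd = E_leak
for _ in range(2000):                       # simulate 20 ms
    v = step_point(v, i_inj=2.0)
    vs, vd = step_two_compartment(vs, vd, i_inj=2.0)

print(f"point neuron:          {v:.2f} mV")
print(f"two-compartment model: soma {vs:.2f} mV, dendrite {vd:.2f} mV")
```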
- Failure to distinguish between narrow and human-level AI
- Zero mention of attention/transformer models
- Zero mention of BERT, GPT, let alone GPT-3
Note-to-self: ignore and file away in the gary-marcus box.
1) any computational AI can be represented as a finite state machine, or FSM (the author calls it a finite state automaton; same thing)
2) when said computational AI performs an "act of cognition", it (as an FSM) will iterate through a defined series of states, based on a defined series of inputs
3) It is possible to build a simpler FSM composed of a counter and a lookup table that would take the same series of inputs + the counter as an input, and produce the same state/output as the original computational AI
4) since the response to stimulus is identical, the 2 finite state machines are equivalent.
5) if the state machines are equivalent, they must be equivalently conscious
6) the counter + lookup table is obviously not conscious ("reductio ad absurdum")
7) from (6) and (5) no computational AI can be conscious
To me, this argument fails in the following way: the only way to actually construct the "simpler" finite state machine in step (3) above is to let the computational AI react with the world first and record its combinations of input and state. There is no way to predict what series of states an arbitrary FSM will go through in response to a particular series of inputs without actually running it. That would be equivalent to solving the halting problem: any program (on a machine with finite memory) can be encoded as an FSM, and if you could predict the state sequence of such an FSM without running it, you could tell whether it would ever enter the 'halt' state.
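To illustrate why step (3) presupposes running the original machine, here's a toy sketch (hypothetical names, not from the article): the counter-plus-lookup-table FSM can only be built from a trace recorded by actually executing the original FSM on the given inputs:

```python
# Toy sketch (not from the article): the "counter + lookup table" machine in
# step (3) can only be built by running the original FSM first and recording
# the trace of states it passes through.

def run_original(transition, start_state, inputs):
    """Run the original FSM on the inputs and record the state after each step."""
    trace, state = [], start_state
    for symbol in inputs:
        state = transition[(state, symbol)]
        trace.append(state)
    return trace

class ReplayFSM:
    """The 'simpler' machine: a counter plus a lookup table of recorded states."""
    def __init__(self, trace):
        self.table = trace   # lookup table built from the recorded run
        self.counter = 0

    def step(self, _symbol):
        state = self.table[self.counter]   # ignores the input; only the counter matters
        self.counter += 1
        return state

# Example original FSM: toggles between states A and B on input '1', stays put on '0'.
transition = {("A", "0"): "A", ("A", "1"): "B",
              ("B", "0"): "B", ("B", "1"): "A"}
inputs = list("110101")

trace = run_original(transition, "A", inputs)      # must execute the original first
replay = ReplayFSM(trace)
assert [replay.step(s) for s in inputs] == trace   # identical behaviour, by construction
```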
IMO this is analogous to arguing that:
1) the animatronic band at Chuck E. Cheese could be programmed to play identical music to that which has been (previously) performed by a human band (and recorded in perfect detail).
2) because they produce identical outputs, the 2 bands are equivalent
3) if they are equivalent, they must equally be said to create original music
4) the animatronic band obviously doesn't create original music
5) from (3) and (4) no band can create original music
He also elides any discussion of whether or not actual human intelligence manages to avoid the failure modes he uses to conclude that neural networks are not intelligent - e.g. he mentions adversarial examples fooling visual classifier networks without mentioning that "optical illusions" exist and people will reliably misperceive certain images in certain ways too.
I actually agree that neural nets as they currently exist are aggressively stupid, but the author concludes way too much.
TL;DR, author starts from a premise that there is something uniquely special about human consciousness that machines can't duplicate, and reaches the conclusion that there is something uniquely special about human consciousness that machines can't duplicate.
Best not to ignore.
We're seeing AIs that can somewhat hide their lack of understanding behind more and more sophisticated behavior. But unless understanding = "not understanding, but in a really sophisticated way", that road isn't going to get us to AGI, ever.
Personally, my bet is that understanding != "sophisticated non-understanding". I can't prove it, of course, because I don't know what understanding is either...
[Edit: I suppose the other alternative is that understanding is real, but AGI doesn't actually require understanding. That seems improbable to me, but it is at least theoretically possible.]
The missing piece for real understanding is embodiment: we can act on our environment, while GPT-3-like models can only see a fixed training set. This means an embodied AI could construct its own hypotheses and test them, but a disembodied one can't. Understanding comes from playing 'the same game' with us: sharing the same environment and having aligned rewards.
Imagine if a scientist were kept locked up all her life and only had access to a video feed showing the world. Would she gain anything by coming out of the cave and actually seeing and interacting with the world?
It was only believed that Go was impervious to the brute-force search strategies employed by the most successful chess programs. The alternative was never "true understanding"; it was instead believed to be the application of knowledge engineering, expert systems, etc., none of which could be equated with "true understanding".
Regarding poetry, the counterargument is that a modern neural network can't generate poetry any more than a Xerox machine can generate poetry. IOW, it can only replicate styles, not invent new ones. Though that's often good enough as far as the vast majority of readers are concerned.
You assume, but are you sure...?
Did somebody create a program that can write poetry without first consuming basically everything humans have ever written? Or even after consuming roughly what a well-read human might have read?
I think it's reasonable to allow for the fact that humans must have a certain amount of information encoded by evolution, but I can't believe it's either qualitatively or quantitatively comparable to a database of all the text on the internet.
By the way, causal reasoning is not the product of just one human; it is based on experiments, observations, and careful model-building by the whole of human society over long spans of time.
We didn't understand even basic things such as infections and the role of hygiene until recently. What does that say about our causal reasoning powers? That we were stupid?
We know how COVID spreads, and yet many people still expose themselves without care, sometimes causing their own demise or somebody else's. Why isn't causal reasoning working for us all the time?
I think humans can only do causal reasoning when they have a very good model of the thing they are trying to understand. Causal intelligence is not in our brains naturally; it depends on having access to specific models.
So now we have 'consciousness of the gaps'. It's an ever-retreating concept: as AI advances, what we call consciousness recedes into these gaps. Now we're discussing how some humans are not really 'conscious'; what next? Maybe in the end the only remaining 'conscious' people will be philosophers who don't believe in AI.
Ugh, here we go. I swear this was all gone over in a very similar post just a week ago, where it was pointed out that if an author says physical things can't 'understand' or whatever else, they are implying some non-physical soul-spark in humans.
Machine learning as currently done clearly has limits. The big problem is the lack of an underlying model of the real world. Systems which do have real-world knowledge tend to store it in something like predicate form; Cyc is the classic example of that.
A useful question: what should knowledge about the real world look like? More specifically, is there some low-level form in which info about the real world could be represented that makes it useful as training data for machine learning? Skin contact and muscle tension, perhaps?
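For a feel of what "something like predicate form" means, here's a purely illustrative sketch (not Cyc's actual representation or API) of real-world facts stored as subject-predicate-object triples, with a trivial pattern query:

```python
# Purely illustrative: real-world knowledge stored in "predicate form" as
# (subject, predicate, object) triples. Not Cyc's actual representation or API.

facts = {
    ("water", "is_a", "liquid"),
    ("liquid", "can", "spill"),
    ("cup", "can_contain", "liquid"),
    ("spilled_liquid", "causes", "wet_surface"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in facts
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

print(query(subject="water"))        # [('water', 'is_a', 'liquid')]
print(query(predicate="causes"))     # [('spilled_liquid', 'causes', 'wet_surface')]
```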
So his conclusion is based on Siri, an AI assistant I would agree is 'stupid', but not representative of SOTA. It's unfair to judge AI by Siri: Siri is a mass-produced system with scaling costs, and Apple can't host GPT-3 for everyone yet. Not even Google can use the latest and greatest neural nets in mass-produced AI systems, because they don't have the hardware and it would not make economic sense.
So 'reading the room'. In social settings you can't follow logic and rationale blindly because there are these things on two legs full of meat and organs that don't like it.
As a secondary point, I suspect YouTube's classifier is a bag of heuristics they are constantly fiddling with. Its failures are not evidence that developing AGI is futile.
2. But I do indeed have one: giving algorithms so much power without an appropriate checking and appeal process is clearly wrong.
3. This doesn't imply we shouldn't do science or develop AI systems.