Sure, but I don't think that our modelling and reasoning abilities are superior to other primates' primarily due to superior pattern recognition. Yes, we couldn't function without it, but no, I don't think it's the secret sauce we have as humans. It's what we do after we have resolved the patterns that makes the difference IMHO.
Why do I think this? Pattern matching will only tell you that this thing or situation is like some other thing or situation you have recognised before. That only allows you to implement a simple strategy re-use or random variation algorithm. To go beyond that you need to reason about the ways this situation is different from previous patterns, analyse it, figure out how previous successful strategies might work or might need adjusting, or conceive a new strategy. Pattern matching is just the first phase. All animals have this to some extent. But the rest is stupendously more advanced in humans than in any other animal, if other animals have those abilities at all.
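To make the distinction concrete, here's a minimal sketch (my own illustration; the situation names and strategy table are invented for the example) of the "strategy re-use or random variation" loop that pure pattern matching supports. Note there's no reasoning step anywhere: a match replays a stored strategy, and a non-match just mutates at random.

```python
import random

# Toy memory of previously recognised situations and the strategies
# that worked for them (hypothetical entries for illustration).
known_strategies = {
    "predator_nearby": "flee",
    "food_spotted": "approach",
}

def act(situation):
    """Pattern-match the situation; re-use a stored strategy or vary at random."""
    if situation in known_strategies:
        # Pattern matched: simple strategy re-use, no analysis of
        # how this instance differs from the remembered one.
        return known_strategies[situation]
    # No match: random variation over existing strategies, rather
    # than reasoning out a new one.
    return random.choice(list(known_strategies.values()))

print(act("food_spotted"))  # → approach
```

Everything the comment thread attributes to humans — analysing how the current situation differs, adjusting a strategy, or conceiving a new one — would have to happen after the `if`, and this loop has no machinery for any of it.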
Not the secret sauce, as all organic material has it to some extent. The point is that emerging complexity plus pattern recognition creates more and more consciousness.
Don't you believe the many neuroscientists who hold that the brain requires a certain level of complexity to become conscious?
Don't you believe the many scientists who hold that human life started in simple forms and evolved into more and more complex life forms, ending with us and our brains?
Unless you side with Searle and his magical thinking in the Chinese Room argument, I don't think you'll find many who disagree with this.
No, as I have explained, I don't think complexity by itself inevitably leads to consciousness. That's what I meant by cart before the horse: you need both a cart and a horse, but the dependency relationship matters. What I was actually referring to, though, was the proposition that all intelligent behaviour is composed of pattern matching.
Yes, of course I believe our brains evolved that way. I don't believe that arbitrarily complex systems in arbitrary environments are bound to evolve in the same way. The environmental conditions, the fitness criteria, the selective pressures, the inheritance mechanism, or even having an inheritance mechanism at all — these are all crucial. For a designed system, the architecture matters.
Searle constructs convoluted arguments using intellectual sleight of hand to support a really dumb conclusion.
My point is that complexity is a necessary building block. It's not cart-before-the-horse, precisely because I am talking about emerging complexity, i.e. increasing complexity in a very specific context, namely human brains and computer AI.
No one is claiming that arbitrarily complex systems in arbitrary environments are bound to anything. I am talking about something very specific, namely how our brain came to be and how computers seem to be getting closer and closer to our brains. Just as evolution itself is not mere arbitrary randomness.
Unless you are claiming that extremely simple systems can create the kind of consciousness we are talking about here, I have a hard time understanding what you are disagreeing with in what I am saying.