So on one level, I'm following the melody/harmony. On another level, I'm inventing (or remembering?) substitutions that can work over that melody and harmony. On another level, I'm thinking about the overall flow of the improvisation, and how to achieve my artistic goals, express the feelings and ideas I have at the moment. On yet another level, I'm being purely technical - like making my hand slightly roll to play a downstroke on the E string followed by an upstroke on the B string without accidentally hitting the E string twice. On another level, I'm listening to the other musicians, with whom I'm generating a shared tempo, and who are also improvising, generating new ideas I can respond to while playing.
That's expertise. Being able to work on all these patterns simultaneously, some being totally orthogonal to others (like "roll my hand" vs "play a sixth instead of a fifth to imply the relative minor").
So the complexity of certain parts of the music is tightly constrained, which frees the performers to tackle other forms of complexity (like improvisation) much more aggressively.
I think people struggle with that because almost all cognition is at some level pattern recognition. Identifying your mom's face as a baby is pattern recognition. A dog learning to sit is pattern recognition. A mathematician finding the integral of some really difficult function is pattern recognition. A chess grandmaster playing is pattern recognition.
So I think this statement is correct, but it is pretty close to a tautology and hence not that helpful.
My solution has just been to fake it: if I can convince myself, e.g., that my neighbors' names and faces are interesting/important, then it becomes almost impossible not to remember them.
+ pattern matching - matching a series of events or a system state against an existing inventory of models or patterns to find a good fit and take action based on it
+ pattern recognition - decoding a pattern at work in a series of events or a system that is novel to you (or may be a hybrid of familiar and novel patterns)
As an example of the latter: the fire captain in Klein's "Sources of Power" who finds himself fighting what appears to be a small fire that is very hot and cannot be extinguished even after they douse it with water several times.
He becomes alarmed and orders his crew to pull back just as the floor collapses due to a fire in the basement below. It was not a pattern he recognized (one that matched prior experience) but it was anomalous in a way that he intuited was dangerous.
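The matching-vs-recognition distinction can be sketched in code. This is a purely illustrative toy (the event names and the inventory are invented for the example, not taken from Klein):

```python
# A small inventory of known patterns, mapping observed event tuples
# to a diagnosis. "Pattern matching" is a lookup against this inventory;
# "pattern recognition" is the harder case where nothing fits and the
# anomaly itself is the signal.
inventory = {
    ("smoke", "heat", "crackle"): "room fire",
    ("smoke", "no heat"): "smoldering wiring",
}

def pattern_match(events):
    # Matching: find a stored pattern that fits the observed events.
    return inventory.get(tuple(events))

# A familiar situation matches a stored pattern:
print(pattern_match(["smoke", "heat", "crackle"]))  # room fire

# The captain's situation matches nothing in the inventory; the lookup
# failing is itself the cue that something novel is going on:
print(pattern_match(["small hot fire", "water has no effect"]))  # None
```

The interesting part is the second call: the value of an experienced matcher isn't only the hits, it's noticing when every known pattern misses.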
I'm not sure when you last read Klein's work, but he certainly does not distinguish between 'pattern recognition' and 'pattern matching' — it's all the same implicit memory operation in his model.
You see a player round a corner and know that they are likely to end up at point A, B, or C within times X, Y, or Z.
You hear a specific gun being used to your left, it is likely being fired from areas D, E, or F because those have good sightlines to where you are.
For both circumstances you know that of the Q weapons you have, R will be the best option for the range at which you will likely encounter the enemy, and grenade S will also help. Weapons T and U will leave you undermatched. You switch to R and reload it in an area that you know is relatively safe. As you approach the target you know that enabling powerup V will help you get the upper hand.
To use your FPS example, knowing all of those things still doesn't matter if you miss every shot.
Another example -- 99% of the best Olympic athletes at the 2012 Olympics were less than 40 years old. I don't think that people over 40 are worse at pattern matching, I think being an expert at something requires more than just pattern matching. Depending on the activity, physical strength, eyesight, reaction time, and so many other things matter too.
If expertise is just pattern matching, and AI is almost entirely optimized for/shooting towards pattern matching, are we close to AI making experts obsolete?
Most of our software is still struggling at stage 1, and our software is so poor at pattern matching it can't detect obvious stage 1 failures at stage 2 or stage 3. We can make a program to recognize species of birds from photos, but it will also tell us that a bunch of static is (with 99.5% certainty!) a robin.
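The "99.5% robin" failure mode is structural, not a bug in any one model. A toy sketch makes it visible (the "classifier" here is just a random linear layer and softmax, a hypothetical stand-in, not any real bird model):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Standard numerically-stable softmax.
    e = np.exp(z - z.max())
    return e / e.sum()

# A hypothetical stand-in for a bird classifier: one random linear
# layer over a 64-dim "feature vector", softmax over 3 species.
W = rng.normal(size=(3, 64))

static = rng.normal(size=64) * 100  # "a bunch of static", scaled up
probs = softmax(W @ static)

# Softmax must distribute 100% of probability over the known classes,
# so even pure noise usually gets a very confident top prediction.
print(f"top-class confidence: {probs.max():.3f}")
```

The model has no way to say "this is not a bird at all": every input, including static, is forced into one of the known classes, and large out-of-distribution inputs tend to produce extreme, confident-looking logits.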
I don't think any of the experts I know are at any risk of being replaced by AI in their lifetimes.
Personally it feels like the current limit on the way to an AI that can take unknown information and turn it into a pattern-matching system is how much data it can fit in its matrix at once.
If I remember correctly (and my info isn't out of date, it's hard to google) entity matching in photos is usually done internally with very low resolution files like 320x320 and lower. It's no wonder that it could easily mistake static for a robin. Take the right person's glasses off and you'll be lucky if they can tell you it's a bird.
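The intuition that heavy downscaling throws away the detail a classifier would need can be shown with a minimal sketch (naive block averaging, not any particular pipeline's actual preprocessing):

```python
import numpy as np

def downsample(img, factor):
    # Naive block-average downsampling: each factor x factor block
    # of pixels is collapsed to its mean.
    h, w = img.shape
    img = img[:h - h % factor, :w - w % factor]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# A 1-pixel checkerboard: maximal high-frequency detail.
fine = (np.indices((640, 640)).sum(axis=0) % 2).astype(float)

small = downsample(fine, 2)  # 640x640 -> 320x320

# Every 2x2 block of a checkerboard averages to exactly 0.5, so the
# downsampled image is uniform gray: the texture is simply gone.
print(small.min(), small.max())  # 0.5 0.5
```

Any structure finer than the target resolution averages away like this, which is the glasses-off analogy in a nutshell: the information never reaches the model at all.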
It's also worth pointing out that it doesn't matter what red is. All that matters is that the entity you're talking to recognizes it in the situations you'd expect them to.
In other words, red isn't some absolute concept, for all we know every human may subjectively perceive colors differently but most of us can still recognize the color red when someone asks us to.
By now it is all pattern matching for me. I get 1h with a real decision maker within the company and my way forward is clear. I wrote about it here https://t.co/833yGyRDXM
I sometimes get imposter syndrome, but then I recognize it as a pattern and get over it.
Probably many people here have seen the argument from solid state physicist Philip Anderson that #1 does not necessarily imply #2.
"The main fallacy in this kind of thinking is that the reductionist hypothesis does not by any means imply a 'constructionist' one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe", Anderson, link below.
It articulates well the fundamental problem that I struggle to explain when I talk about why I don't think strong AI will be a thing for a very long time, if ever.
"The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity."
Being an "expert" is great as a goal, but it's not the be-all and end-all. I'm reminded of a plaque from the Schoellkopf Power Station that said:
to know what to do... wisdom
to know how to do it... skill
to do the thing as it should be done... service
But now, some years later, I've become that person, at least in my current company. I can debug issues pretty quickly, and my brain has a store of literally thousands of bugs and snags that have come up over the years, to the point where if someone says, "This feature isn't returning the right variables", I can usually point them to the fix without even looking up from my laptop. It sounds like magic, but really it's just localized expertise.
That's the heartening part--that all it takes is years to do it. The expertise is available to everyone, perhaps.
The disheartening part is the other side of the coin--what if you've started late? Does this mean that software engineers who start late, in their 40's or 50's, for example, might never reach the heights of software engineers who started early? That isn't true--there are examples to the contrary, I hope--but maybe they have a much steeper hill to climb. Or they have to figure out a different approach to getting up the hill, maybe combining years of experience from a different field.
I love essays about these topics: expertise, pattern-matching, your brain being shaped by what you do, etc. If anyone has any others, I'd love to see them.
EDIT: Some grammar mistakes.
* other people may be faster learners and may forget less
* other people's interest in the subject matter may decrease at a slower pace than mine does
* other people are simply younger and their body can be "fully awake" for more time, have more energy available to learn (instead of dealing with family issues), their brain is healthier/younger
There's also an unintuitive negative effect of having expertise: you easily miss the simple explanations/solutions. This makes total sense: the more you know about a subject, the more your brain will attempt to explain behavior through those means. As you become an expert you deal with more specialized/detailed aspects, so your brain reaches for those to explain situations or find solutions, when a "newcomer" will very easily see that a simple fix does the job.
So don't feel disheartened, there are pros and cons to being at every level of expertise, it's a tradeoff.
The negative effect is interesting. I've heard it said before that non-experts often come up with novel solutions in some fields. That might be because of what you pointed out: the experts are too focused on a certain range of solutions, so they might be missing out on a range that a "layman" can easily identify.
It made me realize how much of software development is just natural to me because there's years and years of practice under my belt.
Practice makes perfect.
The amount of creativity required for expertise in various domains varies from "some" to "really quite a lot."
If a creative were asked what their process is, wouldn't they have to honestly answer, "I don't know yet, but I'll know it when I see it"?
Pattern matching is just plain fun/entertaining, depending on the domain.