You can only have so much combinatorial logic before you need to break the problem up into multiple stages. The subitizing thing seems like solving the problem in one step with combinatorial logic, whereas counting is like breaking the problem into multiple stages.
It makes sense that the brain can only do so much combinatorial computation before the problem needs to be broken up into smaller pieces, incorporating memory.
Can you count without using a language?
Try this: Clap your hands or tap on something an arbitrary number of times. Can you tell how many times you did it without "saying" one, two, three in your head?
Even if you pay attention to it, it seems impossible to count without language.
At least not a stream of sensory inputs; small clusters of around 10 or fewer things seem easily countable from just looking at a "snapshot."
Geometry is the best example: anything beyond the basics has no good common names, and yet you can still do it in your head.
If you slice through a pair of carrots, how many pieces of carrot are there afterwards? (Do this simple thing in your head and then read on.)
I slice them in my imagination and then start counting what I'm left with. The part where I go from 2 carrots to 4 carrot parts happens without words. Is anyone doing it differently? You of course have to be self-observing enough to say more than "I just know" :).
There is no conscious calculation performed. The result might be fetched from memory, but there might also be subconscious processes involved. And that seems to be what the argument is about: the grandparent claims we cannot count without language, while the article claims we have built-in estimation hardware (that's count-ish, imo). Those aren't really conflicting, since GP implies a specific counting process that's usually done consciously. Do the clapping, ask someone unsuspecting how often you clapped, and they'll probably be able to tell the number without consciously counting. Even for a stream of sensory inputs.
But yeah, consciously thinking about something usually involves words for many of us, as that's the most common/effective way for our thoughts to interact with the world.
But yeah, it gets more difficult with 73. But even there I might determine "visually" that cutting across a row of carrots halves each of them. So the 2 in 73*2 is still determined without words.
For instance you can use tokens to do counting. If you want to count the number of sheep in a herd you could put a small rock in a bag for every sheep. This bag of rocks will then "represent" the herd. And you can use it to reason about properties of the herd.
If, let's say, you want to check that all sheep are present, you can move the rocks from one bag to another, one per sheep. If you want to split the herd between two people, you can make two piles of roughly the same size and take as many sheep as there are rocks in each pile.
You could have another type of object representing months in a year. Then you could assign the rocks to months to ration food.
When you start grouping your tokens in well defined shapes, or introduce tokens of different denominations you are on the slippery slope towards language.
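The token scheme above is just one-to-one correspondence: you never need a number word, only the ability to pair rocks with sheep. A minimal sketch (all names here are illustrative, not from any real system):

```python
# Counting by one-to-one correspondence with tokens, as in the
# rock-per-sheep example: one rock goes in the bag for every sheep.

def tokens_for(herd):
    """Put one token ("rock") in the bag for every sheep."""
    return ["rock" for _ in herd]

def all_present(herd, bag):
    """Pair each sheep with a rock; a mismatch in either direction
    means sheep are missing or extra."""
    return len(bag) == len(herd)

def split(bag):
    """Split the token bag into two roughly equal piles."""
    half = len(bag) // 2
    return bag[:half], bag[half:]

herd = ["sheep"] * 7
bag = tokens_for(herd)
pile_a, pile_b = split(bag)
print(len(bag), len(pile_a), len(pile_b))  # 7 3 4
```

Note that `len()` here stands in for the pairing process itself; the point is that the reasoning works on the bag of tokens, not on a number in anyone's head.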
Of course if you count symbols ticking by visually as "language" then yes, at the root the brain is going to use some sort of symbol to represent increasing quantity.
I haven't fully explored this, but made a simple web app to implement part of the training system in the book. It shows you (or your child) a number of dots, and says the number out loud.
(Maybe they can learn to recognize a particular grouping of dots in a book, but that's a different question.)
However, like you, I wonder:
- are these reports true?
- even if they are true, has the child just memorised a small section of each particular grouping?
Of course, this trick wouldn't work if a parent is using my online tool, as the patterns of dots are pseudo-random, so the same number is unlikely to have the same representation on subsequent views.
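For anyone curious how such a trainer avoids fixed patterns, here is a hypothetical sketch (the function and parameters are mine, not the actual tool's): each display of a count gets a fresh pseudo-random, non-overlapping arrangement of dots.

```python
import random

def dot_layout(n, width=300, height=300, radius=10, seed=None):
    """Place n non-overlapping dots at pseudo-random positions,
    so the same count looks different on every viewing."""
    rng = random.Random(seed)
    dots = []
    while len(dots) < n:
        x = rng.uniform(radius, width - radius)
        y = rng.uniform(radius, height - radius)
        # keep dots from overlapping (centres at least one diameter apart)
        if all((x - a) ** 2 + (y - b) ** 2 >= (2 * radius) ** 2
               for a, b in dots):
            dots.append((x, y))
    return dots

print(len(dot_layout(6)))  # 6
```

Because the arrangement is re-randomized each time, memorizing one picture of "six" doesn't help on the next view.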
A crazier example, to me, is the Moken children in Thailand who can see clearly underwater (http://www.bbc.com/future/story/20160229-the-sea-nomad-child...).
I think part of this is a lack of practice. I've only counted how many macaroni noodles were in my bowl once.
Totally recommended read.
I would just avoid factoids from psychology altogether for the next few years while the researchers in that field sort everything out. https://www.theatlantic.com/science/archive/2018/11/psycholo...
Worse, some of the famous studies aren't merely underpowered or based on nonrepresentative samples (like college kids) or sloppily executed — some famous studies are outright fraudulent. https://www.vox.com/2018/6/13/17449118/stanford-prison-exper...
If you're short on time, watch Kahneman's talk at Google, which basically summarizes the first chapter or so. https://www.youtube.com/watch?v=CjVQJdIrDJ0
Subitizing is pretty cool and it's the name for how you can look at items and instantly know how many there are if it's less than some small number (approximately 5 iirc).
Here's a wiki article on it for anyone interested: https://en.wikipedia.org/wiki/Subitizing
I suspect this is why tally marks are so effective and therefore common.
A friend was telling me that when she thinks of a number, a particular visualization of it automatically comes into her mind. From how she described it, I recall thinking it sounded like an accidental approximation of something like a logarithmic scale, but less direct and less precise than it had to be. I knew her pretty well, and, other than this visual, she didn't appear atypical (was social, smart and educated, but no savant superpowers, nor any unusually low limits).
I was wondering whether she had a normal human conception of numbers, and somehow just had a little extra introspection on that. Or maybe this visual was an independent mechanism (perhaps learned in childhood). Or maybe she conceived of numbers differently than most people.
Sounds like spatial-sequence synesthesia. When I think of numbers I see them at specific points on a logarithmic(-ish) scale as well. I don't think it gives me any advantage in mental math / counting / estimating, but I cannot disable it and experience numbers any other way. I only learned it's not common in my early adulthood; until then I had just assumed that's the way everyone's mind works.
I felt like this is an important mechanism at play when I read the classroom example. If I saw a picture of a classroom for 2 seconds and am then asked to estimate the chair count, in my head it goes something like: mean #chairs in classrooms in my experience * fullness of the classroom in picture (e.g. .5-2 range) = count estimate.
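That heuristic is easy to make explicit: a remembered average scaled by a glanced-at fullness factor. A quick sketch, with purely illustrative numbers (nothing here comes from the article):

```python
# Estimate = (mean chair count from remembered classrooms)
#            * (how full this room looks relative to a typical one)

def estimate_chairs(mean_chairs=30, fullness=1.0):
    """fullness: ~0.5 for a sparse room up to ~2.0 for a packed one."""
    return round(mean_chairs * fullness)

print(estimate_chairs(30, 0.5))  # 15
print(estimate_chairs(30, 1.5))  # 45
```

The interesting part is that neither input requires counting: the mean is fetched from memory and the fullness factor is a gestalt impression.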
This is describing the introduction of the paper, I'd guess.
In cogsci the Gestalt effect/theory is widely accepted. What I believe the paper did (but I can't read it) is 'reverse engineer' the effect in some part of the brain, coming up with an estimate of the number that the process will 'estimate' based on your visual focal point within the collection of items.
...but who knows.
Here's an abstract for you. This link was provided in the article. Reading articles on scientific papers can be helpful if you like...read...and check the links/references. You know, the same way you might want to check the annotations in a scientific paper.
We also "approximate" for example a spatial distribution - think about how when running through rough terrain, you instantly know which path to choose to encounter fewer rocks. You are certainly not focusing and counting each individual rock.
You then just check what colour has the strongest (most) signals.
Or do you mean something else? Like how to do that consciously or something and not like, how it's possible?