The reasoning ability and executive function of birds, crows in particular, are astonishing. Intelligence (certain kinds of it, at least) may depend as much on brain:body mass ratio as on overall brain size and complexity.
That's unbelievably mysterious, since one would assume there to be some minimum amount of absolute processing power needed for this stuff.
It's like saying CPU to overall device size determines processing power, so a 12-core Xeon in a giant box is less powerful than an ARM32 chip in a phone.
My guess would be that this ratio is a proxy for how important brains are to that animal in its evolutionary niche. A small brain vs. body size indicates that brains must not matter as much as other things, so there's no need to waste resources on them. But a large brain vs. body size indicates that brains are important. This might in turn correlate with greater evolutionary pressure to optimize brain function and efficiency, leading to some super-interesting adaptations and a very minimal, parsimonious design.
... which supports the idea that studying these small and efficient brains might be deeply revealing. They're so small and powerful that everything in there must be really important.
A computer's components don't need constant internal monitoring. They're monitored by humans - if the CD-ROM drive breaks down, a human will fix it.
If an animal gets hurt, its nervous system will detect the injury and do the management necessary to direct resources to the site needing repair. Each part of the body requires some sort of monitoring system, and the bigger the body, the more computational power must be devoted to body maintenance & monitoring.
A small brain vs. body size means there's little "computational surplus"; most of it is used up making sure the body's processes are running, making repairs as required, moving hormones around, etc. A large brain vs. body size means spare computational capacity is available for observable intelligent behaviour.
Very true. We're talking about something the size of a small nut that appears to have the same cognitive power as the much larger brain of a much larger mammal.
It would be interesting - not very ethical, and possibly deeply creepy, but interesting - to select birds and other animals for intelligence, and see just how far you could take a selective breeding program.
This suggests -- and this is supported by brain mass studies -- that if the brain is not subjected to strong selection for power/efficiency/ability then you get a kind of "fatty brain" that takes up space but doesn't work that well.
You see analogs in e.g. "sclerotic" corporations with lots of employees doing nothing. My bio professor said "life doesn't work perfectly... it just works." Evolution isn't "survival of the fittest," but selection for a "surviving subset of the sufficiently fit." Economics is the same, thus the sclerotic corporation/government analogy.
Of course if evolution actually was "survival of the fittest" it would reduce to a greedy hill climbing algorithm and would converge on the first local maximum it encountered and stay stuck there forever. Tolerance for variation is a (probably provable) prerequisite for anything particularly interesting, and diversity implies inefficiency among other things.
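To make that concrete, here's a toy sketch (my own illustration, with a made-up two-peak fitness landscape): a greedy climber that only ever accepts strict improvements, started near the lower peak, converges on it and can never cross the fitness valley to reach the higher peak.

```python
import math
import random

def hill_climb(f, x, step=0.1, iters=2000):
    """Greedy hill climbing: only ever accept strict improvements."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

def fitness(x):
    # Local peak of height ~1 at x=0; global peak of height ~2 at x=4.
    return math.exp(-x ** 2) + 2 * math.exp(-(x - 4) ** 2)

# Started near the lower peak, the climber settles on it: the fitness
# valley around x=2 is deeper than any single step can bridge, and a
# strict "survival of the fittest" rule never accepts a downhill move.
best = hill_climb(fitness, x=0.5)
```

Any mechanism that tolerates temporarily-worse solutions (mutation drift, relaxed selection, simulated-annealing-style acceptance) is exactly the "tolerance for variation" being argued for.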
When I was playing with alife and genetic algorithms a lot, I found that relaxing selective pressure often improved performance in terms of overall best solution generated. There's a number of papers out there that draw similar conclusions in a variety of systems. Sometimes you can get GA/GP systems that escape local maxima and find much more clever and interesting solutions by setting up a selection function that adjusts its strictness curve based on measured diversity (we jokingly called this affirmative action), or by creating an environmental topology that encourages diversity (many compartments / demes, etc.).
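A rough sketch of that diversity-adjusted selection idea (my own toy version, under assumed parameters, not a reconstruction of any particular system from those papers): measure population diversity each generation, and when it drops, relax selection by keeping a larger fraction of the population as parents, so weaker-but-different genomes survive.

```python
import random
import statistics

def mutate(genome, rate=0.3, scale=0.5):
    """Perturb each gene with some probability."""
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(fitness, pop_size=60, gens=200, genome_len=8):
    """Toy GA whose selection strictness relaxes as diversity falls.

    'Diversity' here is simply the spread of fitness values in the
    population; when it collapses, we widen the parent pool instead
    of hammering on the current elite.
    """
    pop = [[random.uniform(-5, 5) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        diversity = statistics.pstdev(fitness(g) for g in pop)
        # Relaxed selection when diversity is low, strict when high.
        keep_frac = 0.5 if diversity < 1.0 else 0.2
        parents = scored[:max(2, int(pop_size * keep_frac))]
        pop = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return max(pop, key=fitness)

# Toy problem: maximize -sum(x^2), i.e. drive every gene toward 0.
best = evolve(lambda g: -sum(x * x for x in g))
```

On a single-peak toy problem like this the relaxation doesn't matter much; the interesting effects show up on deceptive, multi-peaked landscapes, where the widened parent pool keeps stepping stones alive.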
The kind of intelligence working dogs were bred for is a very specific kind of intelligence.
I'd like to see a species selected for social intelligence—the kind of tribal-affiliation status-dynamic thing that is theorized to have brought about language.
Indeed. Intelligence in the animal kingdom may have increased toward local optima several times, independently, and differently. And the final product in the simpler invertebrates might be a lot easier to study and gain insights from.
Bigger body mass means a bigger digestive system, and more ability to produce power for the brain. It's likely exactly the same as the conceptual shift from measuring raw MHz to MHz per watt.
I'd be tempted to look at it more as brain:sensor (including skin surface, muscle feedback etc.) ratio.
A small brain for a large amount of input means the brain's representation is much more aliased than the raw perception, and the internal algorithms work despite very fuzzy input. The cause of that ratio is probably that the brain acts as a high-pass filter and only cares about large input spikes (i.e. a very dull sense of touch, but sensing a sting still matters, so a high skin sensor density is still needed).
A large brain for a relatively smaller sensory input, on the other hand, means the internal representation is more detailed, or the brain is even creating synthetic detail from internal models of perception while it processes inputs, which would be where problem-solving and awareness capacities kick in.
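A crude sketch of that high-pass strategy (my own illustration, with made-up sensor counts and thresholds): a dense sensor array produces a flood of low-level readings, and almost all of it can be discarded while the rare large spike - the sting - still gets through.

```python
import random

def spike_filter(readings, threshold=0.8):
    """Keep only the sensor readings above a spike threshold.

    A hypothetical model of the 'high-pass' sensing described above:
    dense input, but only large excursions reach further processing.
    """
    return [(i, v) for i, v in enumerate(readings) if v > threshold]

# 10,000 skin sensors of low-level background noise, plus one
# sting-level spike at sensor 4242.
readings = [random.uniform(0.0, 0.5) for _ in range(10_000)]
readings[4242] = 1.0

events = spike_filter(readings)
# Nearly the entire input stream is discarded, yet the sting survives,
# so a tiny brain can serve a huge sensor surface.
```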
Sure, but there's more than just single-task unsupervised feedback learning going on here. In the first video I linked, as soon as the bird fails with the shorter stick, its subsequent behavior - executed in ONE successful trial - evidences some complete and integrated pattern of thought, at minimum entailing:
1) This tool is no good, I need one with greater length.
2) I identify one over yonder.
3) It is obstructed.
4) I will need weight to remove that obstruction.
5) I have identified suitable weights.
6) I will need a tool to get those weights.
7) My current tool is sufficient for that.
8) I can chain these observations to obtain my goal.
That kind of executive functioning and judgment - composed from simple tasks learned independently, achieved on the first attempt, and without supervision - seems to me well beyond the norm for what we observe in perceptron models of learning and feedback.
https://www.youtube.com/watch?v=AVaITA7eBZE
https://www.youtube.com/watch?v=ZerUbHmuY04