I'm wondering about the study ... I'm a complete novice, so I might get things wrong, but the participant description doesn't contain much information (e.g. mean and standard deviation of age, gender distribution, etc.). N is also only given in the abstract.
The study also does not describe the experimental setup:
"Subjects were administered a comprehensive psychometric battery of fluid and crystallized intelligence tasks,"
There can be substantial differences depending on how the battery was administered (e.g. potential ordering effects between the tests) and on when the scans were done.
Does anybody know if that's a standard procedure to determine "general intelligence"? Sounds vague to me.
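For what it's worth, the standard psychometric move is to extract a general factor (g) as the first principal component of the standardized test scores, on the grounds that scores on diverse cognitive tests tend to correlate positively (the "positive manifold"). A minimal numpy sketch with made-up scores, not the paper's actual pipeline:

```python
import numpy as np

# Hypothetical data: rows = subjects, columns = tests in the battery
# (e.g. figure series, logical reasoning, vocabulary, decision-making).
rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 4))      # placeholder for real test scores

# Standardize each test to mean 0, SD 1 so no test dominates by scale.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)

# g is conventionally the first principal component: the single axis
# that captures the most shared variance across all the tests.
_, s, vt = np.linalg.svd(z, full_matrices=False)
g = z @ vt[0]                            # one g score per subject
explained = s[0]**2 / (s**2).sum()       # variance g accounts for

print(f"g explains {explained:.0%} of the variance in the battery")
```

Whether that one number deserves the name "general intelligence" is a separate question, but the extraction step itself is standard.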
Reading science papers (my field is biochem, YMMV for other disciplines) is a lot of work if you're not already an expert in its very specific field and topic.
Paper authors try to be as succinct as possible, and in my experience editors generally encourage this as well. There are incentives to do so.
In order to understand what that part of the paper means, you would need to actually read every reference, and maybe even their references, and maybe even go pretty far up the reference tree. You might try to take a shortcut by reading recent or well-cited review papers on that specific subject. To take a shortcut from that, you might consult a handful of textbooks.
If you were an expert in the field, you probably wouldn't need to refer to the textbooks, and you might even know the rough ideas behind enough of the papers and might check only a few of the references to be sure.
Then beyond this, any variation from the afore-cited methods will usually be detailed within the paper, possibly in further sections or in figures or tables (and from what I can see, that seems to be generally the case in this paper).
Being succinct is also a neat excuse for leaving out details that are actually necessary for another scientist to replicate your work. Oh, and it allows certain interdisciplinary works to incorrectly abstract away technical details from the secondary domain.
You're not wrong, and in an ideal world it is a sensible way for experts to communicate amongst themselves. But at the same time this sort of deep reading is a fool's errand at least half the time IME, because the authors of the paper themselves did not carefully read every reference. And even when they did there are often subtle unpublished differences between a cited protocol and the protocol they actually used.
To be fair, it's possible this is a domain specific phenomenon, and I'm not especially familiar with human intelligence research. But it's a real problem in the areas of neurobiology and computational neuroscience I've studied.
I think it's in large part a natural consequence of trying to scale science in the wrong way, but there are also bad cultural factors and a few legitimately bad actors that push the issue even further.
As ever in these studies, there's the claim that they've developed a reliable quantitative program for measuring human intelligence. This is tough - I mean, how do you put error bars on these measurements in a meaningful way?
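One partial answer is resampling: bootstrap over subjects (or split-half over test items) and report the spread of whatever statistic you care about, whether that's a mean score or a model's prediction accuracy. A toy sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical composite intelligence scores, one per subject.
scores = rng.normal(100, 15, size=80)

# Bootstrap: resample subjects with replacement, recompute the statistic
# many times, and read the error bar off the resulting distribution.
boot = np.array([
    rng.choice(scores, size=scores.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean = {scores.mean():.1f}, 95% CI = [{lo:.1f}, {hi:.1f}]")
```

Of course, that only quantifies sampling noise; it says nothing about whether the underlying construct is stable or meaningful, which is the harder problem.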
> "Subjects were administered a comprehensive psychometric battery of fluid and crystallized intelligence tasks, including a figure series completion test of fluid intelligence (Kyllonen et al., 2019), the Law School Admissions Test Logical Reasoning Battery (Mackey et al., 2015), Shipley-2 Vocabulary (Winfield, 1953), and the Adult Decision-Making Competence battery (Bruine de Bruin et al., 2007, 2020), collectively composing up to 3 hours of psychometric measurement administered over two days. Previous work demonstrates a high degree of overlap and redundancy between the Adult Decision-Making Competence battery and other canonical measures of intelligence (Blacksmith et al., 2019; Bruine de Bruin et al., 2020; Missier et al., 2010; Román et al., 2019), providing further evidence that the battery provides a valid and reliable measure of general intelligence."
Well, maybe... I wonder if this will ever become a standard part of the hiring interview? It might be appropriate if everyone in the company had to take the test and publicly post the results, at least for internal viewing. "Line up for your fMRI imaging, please, before taking your battery of intelligence tests, everyone... this will be your primary determinant of promotion and compensation, by the way."
For background on the fMRI vs CPM debate, this seems to be what it's all about:
> "Recently, we have developed connectome-based predictive modeling (CPM) with built-in cross validation, a method for extracting and summarizing the most relevant features from brain connectivity data in order to construct predictive models. Using both resting-state functional magnetic resonance imaging (fMRI) and task-based fMRI, we have shown that cognitive traits, such as fluid intelligence and sustained attention, can be successfully predicted in novel subjects using this method."
There's no good definition of intelligence, and certainly not one that can be operationalized in one or two numbers. So instead, this study takes the outcomes of quite a few tests that are commonly assumed to correlate with intelligence and correlates them with an (also not well-understood) measurement of the brain. "Whole-brain connectivity" is best at predicting the outcome of the tests, but I couldn't find how they define it.
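For what it's worth, "functional connectivity" in this literature conventionally means the matrix of pairwise Pearson correlations between the BOLD time series of parcellated brain regions, and "whole-brain" usually just means all region pairs are kept rather than one subnetwork. I can't confirm that's exactly this paper's definition, but the conventional computation looks like this (made-up data, and the 268-region parcellation is just an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parcellated fMRI: one BOLD time series per brain region.
n_regions, n_timepoints = 268, 600
bold = rng.normal(size=(n_regions, n_timepoints))

# Functional connectivity: correlate every region with every other one.
fc = np.corrcoef(bold)                   # (268, 268) symmetric matrix

# Models typically use only the upper triangle, one value per edge.
iu = np.triu_indices(n_regions, k=1)
edges = fc[iu]
print(edges.shape)                       # (35778,) edge values per subject
```

That per-subject edge vector is exactly the kind of input a predictive model like CPM consumes.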
Could be. I glanced at the neuroscientific theories of intelligence the paper references, and they seem mostly concerned with dividing general intelligence into abstract sections, the purpose of which seems to be to characterize differences between humans within those sections. Below is an image I found depicting the core of Process Overlap Theory [1].
They're barely even related. It's like the difference between, on one hand:
a) studying the structure and functions within fruits, and the similarities in structure and function between different types of fruit, and, on the other hand,
b) ranking fruits by how tasty they are when made into pies, and closely inspecting them in order to find their intrinsic pie potential.
This comment confused me so I looked up the definition of homophone[0].
Turns out it can mean two words sounding the same (definition 2) or merely sounding alike (definition 1). Personally I always thought homophones had to sound identical. But you are right, they are homophones in the broad sense.
Any difference quickly disappears once you take into account various accents and dialects. It's a lot like pin and pen. They're technically not homophones in the strictest sense of the word, because there should be a slight difference in how you pronounce them, but in practice there is no difference.
It is truly remarkable that the distinguishing characteristic of human intelligence vs other animals, conceptual reasoning, is not mentioned. "concept" appears 0 times in this paper.
> general intelligence may depend on system-wide network mechanisms, suggesting that local representations are necessary but not sufficient to account for the neural architecture of human intelligence.
> Our findings provide evidence for the importance of both localized lateral PFC connectivity and global functional brain network connectivity
This reminds me of the thousand brains theory by Jeff Hawkins. (https://www.numenta.com/blog/2019/01/16/the-thousand-brains-...)
So this paper is maybe one piece of evidence for that theory?
I'm actually more interested in concrete theories or models of neuronal intelligence. This paper doesn't really propose any new theory or model; it just observes, as most such papers do. That is also important, of course, but I find it less exciting. Of course, a theory or model in itself isn't interesting either; it becomes interesting when (not necessarily both):
- You can experimentally show that it is powerful (ignoring whether it is really biologically accurate), e.g. all the current deep learning models, GPT and co.
- You have evidence that it is biologically accurate.
This paper mentions Process Overlap Theory (https://www.tandfonline.com/doi/abs/10.1080/1047840X.2016.11...) and Network Neuroscience Theory (https://www.sciencedirect.com/science/article/pii/S136466131...). I don't really know these, but I guess they would make for an interesting read.