So I know a lot of the people mentioned in this New Yorker piece. Not well, but I consider them colleagues. I have a lot to say about it, on several levels.
On the one hand, I've had to deal firsthand with the phenomenon Paige Harden is talking about, in hiring discussions and faculty meetings. It's infuriating and bizarre to see the level of denial that arises among people who are ostensibly extremely intelligent and well-educated. It reaches a point where my only conclusion has been that these individuals are completely out of touch in an ivory-tower sense: they have no idea what happens in clinical settings, or about the problems that arise with phenomena that are obviously congenital disorders or near-disorders, even if they don't have a label per se.
On the other hand, I think the behavior genetics community is in a bit of denial itself about the strengths of its models and the reliability of its conclusions. The field has found itself in a no-man's-land the last decade or so, in that the revolution promised by the genome era instead stripped the field of results: the findings of the twin era, when faced with molecular genetics, largely evaporated. This is alluded to in the article with polygenic risk scores, but I suspect those will face a similar crisis of replicability; they do predict outcomes, but the prediction is relatively weak for behavioral traits, and much weaker still upon replication.
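To put a number on what "predicts, but weakly" means here, consider a toy simulation (every quantity below is invented for illustration, not drawn from any real GWAS): a polygenic score is just a weighted sum of SNP dosages, and once realistic noise is layered on, a score that is genuinely predictive still explains only a sliver of phenotypic variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 1000, 500

# Genotype dosages: 0, 1, or 2 copies of the effect allele at each SNP.
dosages = rng.binomial(2, 0.3, size=(n_people, n_snps))

# Hypothetical per-SNP effect sizes; in practice these come from a
# separate discovery sample's GWAS.
betas = rng.normal(0, 0.05, size=n_snps)

# A polygenic score is simply the weighted sum of dosages.
pgs = dosages @ betas

# Simulated phenotype: the genetic signal plus a lot of non-genetic noise,
# mimicking the modest signal-to-noise typical of behavioral outcomes.
phenotype = pgs + rng.normal(0, 3 * pgs.std(), size=n_people)

r = np.corrcoef(pgs, phenotype)[0, 1]
print(f"r = {r:.2f}, variance explained = {r**2:.1%}")
```

With the noise level chosen here the score explains on the order of 10% of the variance, which is roughly the ballpark reported for behavioral polygenic scores; a correlation that survives significance testing can still be a weak practical predictor.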
This all leads to a general sense, within the BG community and without, that the models and findings of behavior genetics rest on very big assumptions, assumptions that seem to fall apart when looked at through a microscope. They might have some validity in a broad-brush sense, in that yes, there are some sets of genes that collectively lead one to have higher-functioning cognitive ability, and some that lead to impairments, but the tricky details most people wrestle with are mismodeled. Comparing one end of the bell curve to the other is doable; comparing adjoining areas of the curve is much less so. There's also the strange complexity of life, which is largely absent from these models. A common pattern is for some BG conclusion to be drawn in the literature, and then everyone encounters prima facie evidence, on the news, in everyday experience, and so forth, that clearly contradicts it. The BG community wrestles with it, comes up with some contorted explanation for why there's no contradiction, and then silence.
The reason why I'm posting in response to your comment is that underlying all of this I think is a fundamental, often implied concern about BG research: that it is not oriented toward intervention, change, or improvement. It concerns itself with characterizing people through abstract descriptions, such as "additive genetic variance" or "polygenic risk", as if they were static objects. It's not oriented toward identifying something specific and manipulable, like a neurophysiological pathway that can be changed, or the products of specific genes. Even when research is oriented toward potential change, as with environmental interventions, genes are still treated as something fixed and unchanging, background to be covaried out because it's not something we should do anything about.
So, even though I think the left-leaning segments of academia definitely go overboard in how they approach behavior and genetics, there are sympathetic groups who largely reject the field for other reasons. They see it as fuzzy and unreplicable in a levels-of-analysis sense, if not a sample-to-sample sense, and are skeptical of the overall worldview implicitly being promoted.
I need to reread the Chomsky piece; it's dense and full of excellent points. But part of what it points to is the fundamental problem with seeing individuals in society in terms of reward or punishment for what they are, rather than in terms of the obligation of society at large toward improving other individuals.
The real crisis looming on the horizon isn't the failure of the left to appreciate behavior genetics. It's the failure of everyone to appreciate the implications of a civilization where we will soon be able to alter our genes at any stage of life, as well as the neurobehavioral structures that are downstream from them. Where does the moral obligation lie then? With the individual or society? What value is there then in abstract quantifications of variance due to the individual at birth, and variance that occurs later?
>This is alluded to in the article with polygenic risk scores, but I suspect those will face a similar crisis of replicability; they do predict outcomes, but the prediction is relatively weak for behavioral traits, and much weaker still upon replication.
Why would you say that? I think you implied you work in this field, so maybe you do know better than I do. But from my perspective, you are incorrect. Not only are a lot of SNPs replicating, but we also have a definitive method to test causality: applying PGS within families.
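For anyone unfamiliar with the within-family design, here is a toy sibling-difference simulation (all effect sizes and variances invented). Differencing two siblings cancels out everything they share, including family environment, so a slope that survives the differencing is evidence the score tracks a genuinely genetic signal rather than, say, family SES.

```python
import numpy as np

rng = np.random.default_rng(1)
n_families = 2000

# Each family contributes two siblings; each sibling's polygenic score is
# a shared family component plus an individual genetic deviation.
family_mean = rng.normal(0, 1, n_families)
sib1_pgs = family_mean + rng.normal(0, 0.7, n_families)
sib2_pgs = family_mean + rng.normal(0, 0.7, n_families)

# Phenotype = causal genetic effect (true slope 0.3 here) + shared family
# environment + individual noise.
family_env = rng.normal(0, 1, n_families)
sib1_y = 0.3 * sib1_pgs + family_env + rng.normal(0, 1, n_families)
sib2_y = 0.3 * sib2_pgs + family_env + rng.normal(0, 1, n_families)

# Within-family test: regress the sibling difference in outcome on the
# sibling difference in PGS. The shared family_env term cancels exactly,
# so the regression recovers the causal slope.
d_pgs = sib1_pgs - sib2_pgs
d_y = sib1_y - sib2_y
slope = np.polyfit(d_pgs, d_y, 1)[0]
print(f"within-family slope ≈ {slope:.2f}")  # close to the true 0.3
```

The same differencing logic is why within-family PGS estimates are treated as a stronger causal test than population-level associations, which can be inflated by family-level confounding.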
> The reason why I'm posting in response to your comment is that underlying all of this I think is a fundamental, often implied concern about BG research: that it is not oriented toward intervention, change, or improvement.
Ok, now I am definitely sure you don't work in the field. Wtf? Of course it is immensely valuable to know that cluster X of genes influences condition Y. Right now, drug discovery is a pretty messy process without much direction beyond trial and error. If we know in advance which genes are implicated in a condition, we can reason about what they are probably doing (using prior knowledge of them) and from there work toward a possible solution.
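As a toy sketch of that direction-setting (the gene names, association strengths, and pathway annotations below are all made up for illustration), one simple move is to cross-reference the strongest GWAS hits against prior knowledge of which genes sit in druggable pathways, and rank follow-up targets accordingly:

```python
# Hypothetical GWAS hits for condition Y: gene -> -log10(p-value).
gwas_hits = {"GENE_A": 9.2, "GENE_B": 7.5, "GENE_C": 6.1, "GENE_D": 8.8}

# Hypothetical prior knowledge: genes known to sit in druggable pathways.
druggable = {"GENE_B", "GENE_D"}

# Prioritize genes that are both strongly associated and druggable,
# ranked by association strength.
candidates = sorted(
    (g for g in gwas_hits if g in druggable),
    key=lambda g: gwas_hits[g],
    reverse=True,
)
print(candidates)  # ['GENE_D', 'GENE_B']
```

Real target prioritization is far more involved, but the point stands: an association result narrows the search space before any wet-lab work begins.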