Hacker News

Generic arguments about bad methodology don't invalidate specific arguments about IQ. If he used bad statistical arguments himself in this case, we are at an impasse.

Of course Taleb understands statistics when he wants to, but he is just pushing his rants and loses his bearings on Twitter. Even his old friends are lamenting his online personality and bitterness.




I really, really wish people took bad-methodology criticisms more seriously. I have a degree in the social sciences, and if there is one thing that completely defines the field right now, it is that you can't rely on its findings for anything.

A good portion of studies don't replicate, including fundamental ones (see in particular the failed replication [1] of a study by Rand et al. on the effects of priming). Priming is a huge topic in social psychology.

He understands statistics. I'm not sure his opponents properly understand that when you complain about methodology, the implication is not just "add a section to the paper acknowledging the potential for error"; it's "your whole study might be fatally flawed in a way that invalidates all your conclusions".

[1]: https://authors.library.caltech.edu/91063/2/41562_2018_399_M...


The point of bringing up priming is this.

There is a big body of literature on priming. Each study is generally done to get a p-value < 0.05. In a sense, there are a bunch of replications of the effect itself. That points to priming as an effect large enough to matter.

There is another viewpoint, on which priming is not an effect large enough to matter. (This is the viewpoint I hold.) The arguments for this viewpoint are, first, that the original study does not replicate: the 2018 replication attempt I linked used a sample roughly three times as large (1014 vs. 343), yet got a p-value of 0.366 and an effect size 80% smaller than the original. A second argument is that priming is not used in industry, even though the effect would be valuable in fields like advertising or military psyops. A third is the widespread suspicion within the field that psychology researchers are p-hacking their way to spurious results.
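To see why a p-value of 0.366 from a ~1000-person replication is informative rather than just "a different draw", here is a rough simulation. All parameters are hypothetical stand-ins, not the actual study design: a two-sample comparison, a modest assumed true effect of d = 0.3, and roughly the replication's per-group sample size. If the effect were real at that size, a study this large would detect it almost every time:

```python
# Sketch: estimated power of a large two-sample study under an assumed
# true standardized effect. d and n are illustrative, not from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d = 0.3       # assumed true standardized effect (hypothetical)
n = 500       # per-group sample size, roughly the replication's scale
trials = 2000

hits = 0
for _ in range(trials):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(d, 1.0, n)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        hits += 1

power = hits / trials
print(f"Estimated power at n={n}/group, d={d}: {power:.3f}")
```

Under these assumptions the simulated power comes out near 1, so a flat p = 0.366 at that sample size is hard to square with a real effect of any practically interesting size.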

A whole subfield exists around an effect that, on a direct replication with a sample three times as large, showed an 80% reduction in effect size and a p-value roughly 40 times higher. And my focus on this one study ignores the fact that the replication project turned up 9 failures in 21 replications, drawn exclusively from studies published in Nature and Science.
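One mechanism behind replications shrinking effect sizes can be sketched with a quick simulation (all numbers hypothetical): if the literature mostly publishes p < 0.05 results from small studies, the published effect sizes systematically overstate the true effect, so an honest large replication will look like an "80% reduction":

```python
# Sketch of the winner's curse: small studies filtered on significance
# exaggerate the effect. true_d and n are arbitrary illustrative values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d = 0.1   # small true effect (hypothetical)
n = 30         # per-group size of a typical small study
published = []

for _ in range(5000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_d, 1.0, n)
    t, p = stats.ttest_ind(b, a)
    if p < 0.05 and t > 0:
        # observed standardized effect of a "publishable" study
        pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        published.append((b.mean() - a.mean()) / pooled)

print(f"true d = {true_d}, mean published d = {np.mean(published):.2f}")
```

In this toy setup the average published effect comes out several times larger than the true one, purely from the significance filter, with no fraud required.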

If psychology can botch the literature on priming this badly, what else have they botched?


Two experiments getting very different p-values for the same hypothesis says that (at least) one of them is either done wrong or is an extreme statistical outlier in the space of potential tests of the hypothesis, but it doesn't tell you which. And since the effect of sample size is already reflected in the p-value, it doesn't tell you which of two studies with apparently inconsistent p-values is more likely to be the error or the extreme outlier. To decide that, you need either specific evidence of error or more studies, which provide at least probabilistic evidence of which study is the outlier.


How does that justify making factually wrong claims like "There is no correlation IQ/Income above 45K"?

Is that claim true somehow? It certainly looks wrong, looking at the scatter plot (even setting aside whether a linear regression is appropriate). Are scatter plots 'bad methodology' somehow?
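One wrinkle in eyeballing a scatter plot above a cutoff: restricting the range mechanically shrinks the measured correlation, though it does not drive a real correlation to zero. A sketch on synthetic bivariate-normal data (the 0.4 correlation and the cutoff are arbitrary illustrations, not estimates of IQ/income):

```python
# Sketch of range restriction: correlation measured only above a cutoff
# is attenuated but remains clearly nonzero when the true correlation is real.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
r_true = 0.4   # assumed population correlation (hypothetical)

x = rng.normal(size=n)
y = r_true * x + np.sqrt(1 - r_true**2) * rng.normal(size=n)

above = x > 0.0                      # stand-in for an income/IQ cutoff
r_full = np.corrcoef(x, y)[0, 1]
r_restricted = np.corrcoef(x[above], y[above])[0, 1]
print(f"full r = {r_full:.2f}, above-cutoff r = {r_restricted:.2f}")
```

So a weaker-looking cloud above a threshold is expected even when the underlying correlation is genuine; "no correlation at all" above the cutoff is a much stronger claim than attenuation.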

(BTW the comparison of IQ to priming is ridiculous. IQ is the most replicated and reliable measure in all of psychometrics.)


> IQ is the most replicated and reliable measure in all of psychometrics

This is not a high bar to clear. Psychometrics is a field notorious for employing biased and non-existent constructs.

Just because something is well studied doesn't make it real. N-rays were at one point among the most studied rays in physics; that didn't make measurements using N-rays any more reliable. If everyone is repeating the same mistake, repetition doesn't erase the mistake.

You know the history of IQ and how countless past studies were biased in horrendous ways. It may be well replicated and "reliable"; it is still wrong.


If it's not such a high barrier to pass then maybe you can make your arguments without appealing to the most easily knocked down nonsense (social priming).

If you want to argue that IQ is "wrong" you need to explain https://www.ncbi.nlm.nih.gov/pubmed/8162884 and every other result that has been published relating to it, not just make vague insinuations that IQ is just like social priming (it's not).


Again (this has been stated several times in this thread): intelligence tests are useful to detect and diagnose mental disabilities. What we (and Taleb) are questioning is the usefulness of above-average IQ scores.

I don’t know why you are suddenly talking about social priming (I hadn’t seen it come up before in this thread), especially since priming is a concept from physiological and computational psychology (apparently borrowed by social psychologists), not psychometrics. I’m not even sure what you mean by social priming (a quick browse on Wikipedia turned up nothing), so you will have to inform me.

If you are saying that my claim—that psychometrics is a field filled with pseudoscience—is unsubstantiated, you are right. I did (implicitly) claim that, and I didn’t provide any substantial evidence for my claim. I probably should have, but that is out of the scope of this thread, so I’ll just leave it unsubstantiated. Call me lazy, and you would be correct.

---

Edit: To clarify. Priming did come up in a grandparent’s comment. Priming (as far as my layman's understanding goes) is believed to be a neurological effect that makes the search response for similar stimuli more efficient when they are presented at short intervals. That is, finding a particular pattern gets easier with subsequent trials. Priming effects have been demonstrated in numerous studies over the past two decades. However (as is often the case in scientific fields), a hype has arisen around the concept, and many scientists claim that priming can explain several unrelated psychological constructs. Many of these studies have poor methodology and have never been replicated. Perhaps my parent comment was referring to one of these studies when they mentioned “social priming”.


Low level chronic lead exposure doesn't cause a "mental disability". What it does is cause permanent brain damage which subtracts a few IQ points, harming individuals with above average IQ scores just as it harms those with average and below average IQs.

Yes, social priming studies are what cljs-js-eval was referring to originally when they mentioned "priming". Priming itself is generally solid science (eg. the Stroop effect).


You could simply conclude that all papers could be true or not. Yet we wrote one, which is also true or not.


Another observation: as with any slushy area of science, people's willingness to apply skepticism, and what they are willing to question, tends to be determined by their politics and other biases.

Liberals are happy to question IQ and especially race/IQ work while conservatives are happy to question stuff like those gender blinded recruitment studies or those academic trolling studies that try to link conservative opinions with mental illness.

The reality is that all of it is very questionable, because the entire field is riddled with shaky methodology and downright bad science. From what I've seen of the replication issues, the whole field is worse than nutritional science, and that's bad.

The degree to which a scientific field is politically weaponized is usually inversely proportional to its "hardness." You don't see the same thing in math or physics. Liberals and conservatives oddly never disagree on the value of Pi or the formula for the Carnot efficiency of a heat engine. The closest things to hard science that you find massive political disagreements on are climate change and evolution, and I've noticed that more serious conservative thinkers are coming around on those topics because the evidence is overwhelming.


> Liberals and conservatives oddly never disagree on the value of Pi

https://en.wikipedia.org/wiki/Indiana_Pi_Bill

> The Indiana Pi Bill is the popular name for bill #246 of the 1897 sitting of the Indiana General Assembly, one of the most notorious attempts to establish mathematical truth by legislative fiat. Despite its name, the main result claimed by the bill is a method to square the circle, rather than to establish a certain value for the mathematical constant π, the ratio of the circumference of a circle to its diameter. The bill, written by the crank Edward J. Goodwin, does imply various incorrect values of π, such as 3.2.[1] The bill never became law [...]


What does his personality have to do with what he is arguing? Is he using “bad statistical” arguments or not?


> Of course Taleb understands statistics when he wants to, but he is just pushing his rants and loses his bearings on Twitter. Even his old friends are lamenting his online personality and bitterness.

Generic arguments about personality and character don't invalidate specific arguments about IQ.

If I indulge your "argument": what I see on Twitter is threads full of very openly nordic-/white-supremacist keyboard warriors and zerohedge trolls. Of all the people to call out in these threads, Taleb seems the most level-headed.


But all his specific arguments are completely false; see the linked blog post!



