Hacker News
Non-Standard Errors [pdf] (columbia.edu)
67 points by luu on Sept 11, 2023 | 8 comments



Their "team quality" variable caused me to tune out quickly. Sure, they found it didn't explain much, but it was enough to change the way I approached the rest of the paper. Reinforcing one academic problem while attacking another seems like two steps forward and one step back.

This phenomenon — variability in analytic results on the same sample — has been demonstrated before.


They describe how they constructed their measure of “team quality.” They took the first principal component of five measures: publications in top journals, self-assessed expertise in the literature, experience with big data, academic seniority, and team size.
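The first-principal-component construction described above can be sketched roughly as follows. This is a hypothetical illustration, not the authors' code: the data here is random stand-in data, and the measure names are only those listed in the comment.

```python
import numpy as np

# Hypothetical stand-in data: rows are research teams, columns are the
# five quality measures (top-journal publications, self-assessed
# expertise, big-data experience, seniority, team size).
rng = np.random.default_rng(0)
X = rng.normal(size=(164, 5))  # 164 teams, as in the study

# Standardize each measure, then take the first principal component
# via SVD of the centered and scaled matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
quality = Z @ Vt[0]  # first-PC score per team

print(quality.shape)  # (164,)
```

The first PC is simply the weighted combination of the five standardized measures that captures the most shared variance, so it acts as a single "quality" index.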

These sound fairly reasonable to me. Publishing in top journals is easier with the right connections, which makes the first measure questionable. Tenure is also harder for women to achieve, which makes the fourth measure questionable. But even if these measures are partly driven by connections and gender norms, any association they show would still be interesting to study further.


> a bazillion authors

was this necessary?


Yes. It's an analysis of the differences in results generated by different teams of academics researching the same data. That required lots of researchers, who all deserve credit for their contributions.


What should they have done with that paper? Maybe used the front and back, right?


People complain about the weirdest things... the first five pages of the paper contain nothing but the list of authors and their affiliations?!


From the abstract:

> We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample.


So those are the subjects, not co-authors.



