Observing many researchers using the same data (2022) (pnas.org)
16 points by throwaway13337 8 months ago | 2 comments



Many of the results had error bars overlapping zero or coming fairly close (see figure 1). For results like these, it's not too surprising that other researchers would detect a faint positive or a faint negative instead. That doesn't really undermine the usefulness of the research, though. After all, it's not uncommon for studies to end up with small effects. When reading a paper with a result of that magnitude, it's good to remember that different researcher decisions could nudge that range into statistical insignificance.
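
To see why small effects flip sign so easily, here's a minimal simulation (hypothetical numbers in plain numpy, not the paper's data or methods): many "teams" each analyze a different reasonable subset of the same data, and because the true effect is small relative to its standard error, a fair share of the estimates come out negative and most confidence intervals straddle zero.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: one small true effect, many "teams" each
    # making a different reasonable sample-selection choice.
    true_effect = 0.05
    n = 2000
    x = rng.normal(size=n)
    y = true_effect * x + rng.normal(size=n)

    def ols_slope_se(x, y):
        """Slope and its standard error from simple OLS."""
        xc = x - x.mean()
        b = (xc * (y - y.mean())).sum() / (xc ** 2).sum()
        resid = y - y.mean() - b * xc
        se = np.sqrt((resid ** 2).sum() / (len(x) - 2) / (xc ** 2).sum())
        return b, se

    results = []
    for _ in range(100):
        keep = rng.random(n) < 0.5  # each team keeps a different half
        b, se = ols_slope_se(x[keep], y[keep])
        results.append((b, b - 1.96 * se < 0 < b + 1.96 * se))

    neg = sum(b < 0 for b, _ in results)
    straddle = sum(ci for _, ci in results)
    print(f"{neg}/100 estimates negative, {straddle}/100 CIs overlap zero")

Nothing here is p-hacking; every subset is a defensible choice, yet the sign of the point estimate isn't stable.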

I also noticed most teams produced multiple models, so it seems like part of the variation could be down to that. For example, most teams produced at least one model per survey question. It could be that basically all models based on question one showed a negative AME and all models based on question six produced a positive AME, resulting in the models disagreeing while the teams basically agree. Presumably their analysis identifying particular decisions that explain the variance between models would have picked that out, if it were down to something that simple.
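
A quick sketch of the kind of check I mean (made-up AME numbers and a standard one-way variance decomposition, not anything from the actual paper): if the survey question drives the sign and teams only add noise around it, nearly all of the model-to-model variance shows up between questions rather than between teams.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical AMEs: 10 teams x 6 survey questions, one model each.
    # The question determines the sign; teams add small noise around it.
    question_effect = np.array([-0.04, -0.02, 0.00, 0.01, 0.02, 0.05])
    ames = question_effect + rng.normal(scale=0.005, size=(10, 6))

    # One-way decomposition: spread between question means vs. spread
    # of teams within each question.
    grand = ames.mean()
    between_q = ames.shape[0] * ((ames.mean(axis=0) - grand) ** 2).sum()
    within_q = ((ames - ames.mean(axis=0)) ** 2).sum()
    print(f"share of variance between questions: "
          f"{between_q / (between_q + within_q):.0%}")

In that toy setup the models "disagree" wildly on sign while the teams agree almost perfectly, which is exactly the pattern that a decision-level analysis should flag.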


Thanks to all the authors who spent time on this experiment. It takes courage to disclose something like this.



