
This is very interesting; however, I think the paper must be viewed with some skepticism:

- their Scopus queries had tons of problems, some of which they acknowledge. Their model of positive and negative results also seems inadequate: a paper can report multiple results, each with its own p-value, so how would such a paper show up in their queries? And how accurate were the queries in the first place, i.e., did they quantify the error rate against a hand-checked sample? A sketch of the extraction problem follows.
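
To make the concern concrete, here is a minimal Python sketch (the regex and the abstract are illustrative inventions, not anything from the paper) of why per-paper counting is hard: one abstract can yield several p-values, and a plain string match can't tell them apart.

    import re

    # Crude, illustrative pattern for p-values as they commonly appear in
    # abstracts. A keyword-style Scopus query is a string match and cannot
    # distinguish one reported result from several in the same paper.
    P_VALUE = re.compile(r"[pP]\s*[=<>]\s*(0?\.\d+)")

    def extract_p_values(abstract):
        """Return every p-value mentioned in an abstract, not just the first."""
        return [float(m) for m in P_VALUE.findall(abstract)]

    abstract = ("Treatment improved recall (p = 0.032) but not recognition "
                "(p = 0.054); the interaction was significant (p = 0.048).")
    print(extract_p_values(abstract))  # three results from a single paper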

- the results they got from Scopus depended heavily on what they queried for (as mentioned in a previous comment, and acknowledged in the paper)

- what about all the other p-values? They only looked at 0.04–0.049 and 0.051–0.06. What about exactly 0.05? What about < 0.04? What about > 0.06? I can't understand why they don't report results for these other ranges, especially when the analysis was already automated. This makes me extremely suspicious. A sketch of binning the full range follows.
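
Once extraction is automated, the marginal cost of covering the full range is tiny. A minimal sketch (the values below are hypothetical; a real run would use every p-value mined from the corpus):

    import numpy as np

    # Hypothetical values standing in for p-values mined from abstracts.
    p_values = np.array([0.032, 0.048, 0.049, 0.051, 0.06, 0.21, 0.47])

    # Bin across the entire [0, 1) range instead of only the slivers
    # around 0.05 that the paper reports.
    counts, edges = np.histogram(p_values, bins=np.arange(0.0, 1.01, 0.01))
    for lo, n in zip(edges[:-1], counts):
        if n:
            print("[%.2f, %.2f): %d" % (lo, lo + 0.01, n))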

- results before 1996 are suspect because the Scopus data is incomplete; this is assumed not to matter because "no discontinuity appears in Figs. 3 & 5." In other words, the authors have no idea what the query results would look like across the full data set. A slightly stronger check is sketched below.
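
"No visible discontinuity" is an eyeball test. A minimal sketch of a slightly stronger check (the yearly counts here are hypothetical; real ones would come from rerunning the query per year on Scopus):

    import numpy as np

    # Hypothetical yearly hit counts; real numbers would come from Scopus.
    years = np.arange(1990, 2001)
    hits = np.array([110, 118, 131, 140, 152, 160, 171, 185, 196, 210, 224])

    # Fit a trend to the post-1996 data, then ask how far the pre-1996
    # points fall from its prediction. Large residuals before 1996 would
    # flag the coverage change that Figs. 3 & 5 are claimed not to show.
    post = years >= 1996
    slope, intercept = np.polyfit(years[post], hits[post], 1)
    residuals = hits - (slope * years + intercept)
    for y, r in zip(years, residuals):
        print(y, round(float(r), 1))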



