I think this is part of being a data scientist.
> Given the problems in my data sets, these folks are proactively investigating data that they received from Jonathan ...
It's great that she is sticking solely to the facts and being very scientific. But I'd love to read a journalist's in-depth investigation into Jonathan's motivations. Everything points to this being a deliberate, dishonest fabrication of data.
And now she has a tenure track position at a UC.
While the scientific record might be corrected, the historical impact on a cohort of people who got less because of this remains unacknowledged and uncorrected.
Walk it all back.
(I hope that in reality there's a lot more to the author's research than the retracted papers, but of course in such a competitive job market, every bit helps.)
Look at it another way: the author sure was lucky they found out about the problem after they were securely in their tenure-track position, and not just before.
Look at it another way: The author was sure unlucky to have based their research on shoddy data from a trusted colleague. And it took guts and integrity to react in the way they did.
Not because I don’t care about spiders, but because the first is entirely within their control, while a hypothesis finding the data to support it comes down to luck far too often to use a single case as a meaningful measure of an individual.
Her technique was presumably (not my field) otherwise quite good, and she didn't know at the time that the source data were bad. Beyond her willingness to follow up on a query about an old paper, her approach to the follow-up was excellent. And she seems to have learnt from the experience.
All in all, this sounds like what you want from a good scientist. After all, once you have tenure, you can just ignore all that "old stuff" if you are so inclined.
As far as not getting the position: there are more people than tenure track positions these days so "luck of the draw" is also pretty significant.
Looks like he hasn't retracted these papers from his Pubs list:
But still, there is no fucking way that you get three papers into your research and only then figure this out. Having done an MS, PhD, and postdoc in evolutionary biology, including sociality in insects, I can attest that one scrutinizes every (insignificant) data point.
And fuck off with "did I read the article."
If you see fraud and don't say fraud, you are a fraud.
~Taleb (I think)
Also curious: how did no one question the data earlier, when some guy, albeit a respected one, sends you a data file and you write several papers on it? No one knew what sheet #2 was, and we're writing scientific papers based on this Excel file? I think we need to revisit proper data hygiene and reasonable suspicion.
At least according to my experience as a researcher in geosciences, I'd be ready to claim that borderline fraud (i.e. being too sloppy about the data, even if not with explicit bad intentions) is waaay more common than generally thought or admitted. She at least had the guts to do something about it.
This is also a great story about how important it is to fund research that tries to verify previous research. That is virtually non-existent in the current academic world.
On whom? It's a paper about the sociology of spiders, it's not a Reinhart-Rogoff scale disaster.
What's the name of this logical fallacy? Ad idiotim?