I wish we had some form of stasis we could put ourselves in to observe what happens.
She got a PhD the same year as me, 2014, so that's pretty early career in academic terms. She's published a series of pretty standard-sounding academic papers in a range of standard-sounding journals, which is a positive credibility signal for sure. She does a bunch of public outreach work for the environmental sciences, which is great. I liked her effort to explain in plain language what scientific evidence is:
So in general, as an academic in social science, I tend to find her post credible here on general this-looks-like-a-fellow-academic grounds.
This being said, I hadn't heard of the institution where she got her PhD before (Charles Sturt University), and I find it a little bit interesting that on Google, pretty much the first hit says "#CSU works closely with industry, ensuring our courses keep pace with change."
Like it or not … when an institution positions itself like that, it makes me wonder a little bit about whether it really can support unbiased environmental science, since industry has huge financial stakes in the outcomes of environmental research.
Yeah, it is my instinct to use credentials, but then I pull myself up and think "is this a highbrow ad hominem thing?" Still, it probably makes sense to be curious/skeptical at first.
Or, at least, that's how I see it. Anyway, I think this is fundamentally different from logical arguments. The burden of proof very much rests on the paper or study or critique. Shaky credibility raises the bar of proof. Remember, in science, new papers are not de facto gospel until proven wrong; they must be repeatedly supported by successive studies and challenges before they are accepted as truth.
Consider, for example, datasets. There is a certain amount of implicit trust in the dataset provided in some new paper. The dataset can be fudged in ways that are undetectable and that no logical argument can refute; data is not argument. Therefore, until you are prepared to replicate the dataset, you are trusting the author.
Yet we read that the findings in most published research are either never reproduced, or not reproducible.
I completely agree there's a lot of danger of ad hominem analysis in these kinds of cases. From my perspective, I probably would not post about this sort of thing at all, if it were a more mundane issue. But there seems to be a lot at stake in the climate crisis issues, so I'm trying harder than usual to figure out how to get my bearings.
I think your comment is a good example of a well-formed, coherent, yet fallacious argument.
With that being said, who happens to be saying something changes how one approaches a discussion. If the author of this paper happened to be a random crank, it would not mean that she was wrong, but being unable to directly judge her work for myself (not being an entomologist) I would not be able to take her word for it and would want to see how the entomological community responded to her work. If instead she was a widely respected entomologist I would be inclined to take her word for it until such time that this work was discredited by the entomological community. If I were an expert in her field, I would be able to judge her myself, but not being an expert, I must instead rely on the judgment of experts.
In cases like you describe, where someone can't speak to the validity of the methods of the studies, the prudent thing would be to refrain completely from weighing in as all it would do is add some kind of irrelevant or confusing information. I'm unclear what value such information adds when trying to evaluate the merits of the work itself.
Anyone who uses http://www.sourcewatch.org/ or prefers Breitbart over MSN or vice versa is probably going to disagree. They evidently consider the source of information a useful indicator of its quality.
Speaking for myself, I aggressively filter the Internet fire hose using all sorts of techniques that don't involve considering the substance of the story at all. I'll look at stories with good HN scores, and I'll pause to listen to what Leonard Susskind has to say about something he admits he does not know a lot about long before I'll listen to an anti-vaxxer's argument about something they claim to have some expertise in. Past performance matters, as do the opinions of others who think like me.
And I've also done exactly what this person did, not here but on theconversation. It was on an article about the benefits of coal, which, like all articles on theconversation, comes with a sidebar listing affiliations and interests the author has that might influence their opinion. (Apparently theconversation also thinks things outside of what is said in the article can be used to judge its validity.) In that case the sidebar said the author had no notable affiliations. I pointed out he was occupying a chair paid for by the Koch brothers.
It's a cost-effective heuristic, popular and necessary simply because of how much information you have to filter out daily just to stay sane. However, to effectively take into account arguments about people, you need to be honest about what you're doing: using weak evidence as a coarse filter. You're bound to have false positives. For things that matter (or that you want to have an opinion on), it's hopefully not the only piece of evidence you're using.
 - https://xkcd.com/552/
If OP said "Look, this person got their degree from a low quality place! Let's dismiss the argument because nobody from such a place could produce good research", that would be an ad hom.
But this goes beyond calling out informal fallacies on the internet, it goes to our whole epistemological framework. Why do we trust some claims but not others? Might there be a reason to be skeptical of or completely distrust one claim until it is corroborated by another source? I would argue that there is, in many cases, depending on both the nature of the claim and the history of the person making it. To that end, I'm sure you agree such a thing is necessary in the legal system, for instance, wouldn't you?
As in, if a postulated hypothesis does not contradict any observable facts, it stays a hypothesis, a possible foundation of a theory. If it does contradict the same, it becomes a false hypothesis and is discarded.
This is science, not law or anything else. It does not and can not rely on trust or perceptions of authority.
I'm well aware of how things work (or at least ought to work) in science, but that's totally irrelevant to the point I was making about skepticism of the sources that publish science and who they are funded by. It is especially irrelevant if we are talking about lay people judging research for their own purposes.
So anyone who didn't reproduce the study should just leave off commenting?
It basically says there's a whole lot we don't know, because the scientific data that's available isn't particularly good. I don't feel that's a very controversial thing to say.
(And given no better data is available and the existing data indicates the problem is massive - I'll take the best science we have and say we should do something about it.)
> How can we fix this?
> More conservation actions. We already know what disrupts the balance of ‘good’ vs. ‘bad’ insects (in terms of human impacts): pesticides, habitat loss, pollution, land degradation, manicured lawns, too much waste, crop monocultures, invasive species etc. We can take action to minimise these effects now.
> More research. We can’t identify what insects we’re saving if we don’t get to know them first.
> More funding. Researchers can’t do research, people can’t act without funds and support. We need widespread public and political support for unbiased funding to fill knowledge gaps and make change to stop insect populations declining.
Even the author, after all that, wants "*unbiased* funding to fill knowledge gaps and make change to stop insect populations declining", emphasis mine. To me all of this just says the author wants a solid, watertight case and well-educated steps forward. And there are plenty of steps we already can and must take, no further research needed; she also stresses that.
But I have to admit, it took me a bit to realize this after my initial negative reaction. I didn't even like the title, insectageddon is a terrible word and the decline of insects isn't a great story, at all... but I guess the author is simply too deep into these issues to make it more instantly palatable for people who aren't.
Not sure why that's any more alarming than universities that work closely with government (almost all of them) or other non-industry foundations.
Way to go. Way to go.
Do I have to explain this in even simpler words?
The key is, it is an argumentative strategy to avoid a genuine discussion. This is not a discussion. The author of the article is not here to join in an argumentative discussion. Being passive-aggressive won't change that.
As mentioned elsewhere this:
> So… My first instinct when I read debunking efforts like this is to try to check out who is doing the debunking.
is the definition of "ad hominem". Attacking not the position a person holds, but the person.
Discussion is what we have here. The comment in question, a part of the discussion, implied that the point made by the author is questionable because of who the author is.
This IS NOT the case. This is akin to simple research. Imagine you want to build a computer, and in your research you find The Verge's guide. You look up The Verge and find that it is simply not a good source for such information.
There is no one pitted against another here. There is one person with an argument, and one neutral party consuming the content, with no position whatsoever.
> Like it or not … when an institution positions themselves like that, it makes me wonder a little bit about whether they really can support unbiased environmental science
Maybe the university can't, maybe the author pulls it off regardless. Has anyone ever seen someone do good work at a shitty company? It wouldn't quite work to ask "can anyone tell me if someone who studies or teaches at a university with a questionable line of PR on its website can possibly be unbiased about environmental science?" The answer is something like "of course that's possible, to the extent that any human can be free of bias; to give a useful answer, maybe tell me what person, and more importantly what research, you are thinking of?", which would bring us to a point at which the comment never arrived.
But if someone just "wonders", then it's supposedly not okay to ask them to make up their mind before they post, or at least to say what is keeping them from doing so. It's like "I'm not entirely sure I'm convinced most people would agree that" and other such phrases: it does imply something, however faintly, but with plausible deniability. Generally speaking, I find that worse than even a direct false claim, which I can at least correct. In these instances it's like saying the emperor is naked, and people go "there is no emperor, how rude".
Still, I don't think calling the post a debunking effort is the core of the comment, but the CV analysis. Especially the part where the commenter considers the post credible:
"[...] I tend to find her post credible here on general this-looks-like-a-fellow-academic grounds."
The commenter then turns their research to the institution and makes a claim about a possible bias. Possible is the key word, which makes that whole passage kind of useless.
Yes, we can't correct it since it is not false (nor true), but we also don't have to correct it, as it holds no weight.
In the end, I'd sum the comment up as "I consider the post credible as an academic, but won't be surprised if proven wrong," and I would definitely not call this ad hominem, especially given the commenter has not positioned themselves for or against the content, but as a passive reader.
Of course the study in question is inconclusive because it was done very sloppily, but there is no indication from the critic as to whether there is reason to believe the data is available to reach a more verifiable conclusion.
> Might this be a case of the Texas sharpshooter fallacy? I notice the lead author on the paper relates a windshield anecdote in The Guardian article. Is it possible that his observation of fewer insects on the windshield led to the original biased search terms?
> There’s at least one paper describing localized declines along busier roadways. It makes sense that day after day, year after year, road traffic will cause a lot of mortality.
> Martin, Amanda E., et al. “Flying insect abundance declines with increasing road traffic.” Insect Conservation and Diversity 11.6 (2018): 608-613. https://onlinelibrary.wiley.com/doi/abs/10.1111/icad.12300
pretty interesting article altogether (sadly so).
If that's actually true, then obviously the results would show a decline. It doesn't even make any sense. Why would they do that if they wanted a real study?
This is simultaneously incredible and unsurprising. There is still so much to discover about the world!