Those papers, which tend to be long and full of great stuff, are cited a lot, and have hundreds of authors.
I wonder how many of these are papers where the first author cites other papers on which they are also the first author (or really, at least among the first few authors). For the data shown, it seems a citation counts as a self-citation if anyone in the author list appears anywhere in the cited paper's author list?
Also for some research niches, you may be one of the few people writing papers on a subject. There's no one else to cite.
I do think there are some very valid points about bringing the reader up to speed on the previous research that led to the current paper. But I don't think those citations should really count, in terms of metrics, toward how successful a scientist is.
To be honest, I find all the metric gaming about number of papers and citations to be ridiculous. I don't hear many people saying they want to write the best paper in their field, or something new. It all seems to be a numbers game these days. Academic career growth hacking, if you will.
So, for example, in biomedicine you often have lots of people on a paper who might only read a draft, make some trivial suggestions, and then be added as an author.
As a result, there's this pressure for large groups to form, where everyone is added and everyone can cite each other.
This doesn't mean the projects are bad, but it does lead to individuals with large citation counts primarily because they find ways to add themselves to everything, regardless of their level of effort. People should get credit where it's due, and large projects involve lots of people. But what defines a "project" has become very vague.
I've become extraordinarily disillusioned with academics. Science gets done but the rewards seem to filter preferentially to those who are able to game the system, and the system exists out of a need to make one's self look as productive as possible, in areas where contributions are generally necessarily tiny or nonexistent, even among very competent people, because the problems are hard and because so many people see the same things at the same time.
The software world solved this issue with version control systems like git. And if scientists write papers in LaTeX or other text-based formats, it's trivial to use a version control system for those too.
Then when you cite a fragment, you run `git blame` on it and see who created and edited that fragment, so you can credit only the relevant people instead of the authors of the whole book.
This would make it much harder to abuse citation rankings.
Additional benefit: when a paper is found to contain manipulated data or other errors, it's trivial to check who introduced them, so only that person's career is on the line.
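A minimal sketch of that workflow, assuming git is installed and the paper lives in a text format like LaTeX. The repo, file, and author names ("Alice", "Bob", paper.tex) are all invented for illustration:

```python
# Attribute each line of a LaTeX source to its author via `git blame`.
import os
import subprocess
import tempfile

def git(repo, *args, author=None):
    """Run a git command in `repo`, optionally forcing the commit author."""
    ident = []
    if author:
        ident = ["-c", f"user.name={author}",
                 "-c", f"user.email={author.lower()}@example.org"]
    result = subprocess.run(["git", "-C", repo, *ident, *args],
                            check=True, capture_output=True, text=True)
    return result.stdout

repo = tempfile.mkdtemp()
git(repo, "init")
path = os.path.join(repo, "paper.tex")

# Alice writes the first draft.
with open(path, "w") as f:
    f.write("\\section{Intro}\nFirst result.\n")
git(repo, "add", "paper.tex")
git(repo, "commit", "-m", "first draft", author="Alice")

# Bob revises the second line only.
with open(path, "w") as f:
    f.write("\\section{Intro}\nFirst result, improved.\n")
git(repo, "commit", "-am", "revise result", author="Bob")

# `git blame --line-porcelain` emits an "author <name>" record per line,
# so each line of the paper is attributed to whoever last touched it.
blame = git(repo, "blame", "--line-porcelain", "paper.tex")
authors = [line[len("author "):]
           for line in blame.splitlines() if line.startswith("author ")]
print(authors)
```

Per-line attribution like this is exactly what would let a citer credit the person who wrote the fragment rather than the whole author list.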
Counterexample: my advisor doesn't like git. While writing papers, my collaborators and I use git but send an email copy to my advisor. Clearly he's going to be on the paper because he's my advisor, but you'll see zero commits from him.
I think it's just too easy to think that technology solves this in a trivial way. It's complicated. You have people from different eras working on things. And this is in a CS program, mind you. In different fields it gets much worse very quick.
Side note: go look at papers from top-tier universities. You'll notice that they frequently cite colleagues at their own university. Is this because they are gaming the system? Is it because they are doing the most closely related research (it is VERY common for a single university to work closely on one area)? Or is it a combination? In all likelihood it's a combination, because citations matter. The h-index is used in your performance review because it's meant to measure how impactful your papers are, but the system can definitely be manipulated (and not necessarily for malicious or unethical reasons).
For people building the telescope (think hardware, software, logistics, everything before the science can be done), many of whom are not academics, and don't typically get authorship or write papers, it's great to get credit for working on the project in a formal, public way. You don't even have to edit or provide some kind of task directly related to the paper either, which I agree can get somewhat clique-ish.
Answering questions about your past papers; looking over someone else's proposed methodology; or cleaning up an internal tool into one you can share are all great tasks for advancing the field, but none of them bolster a CV, earn grants, or help you get tenure. If you want credit for them, you usually have to commit lots more time to the task, like running a formal discussion, becoming an author, or polishing the tool into an OSS contribution. All too often, the result is siloed projects and work abandoned as soon as it's published. (How many papers offering some novel twist on priming or ego depletion could have been turned into replication-and-extension if past authors had been involved?)
Especially in astronomy, with large projects and lots of non-PhD team members, this makes so much sense. (I believe something similar may happen at LIGO - if not formally then at least in practice?) If work is going to be judged by authorship, it's only fair to recognize that at a certain point the groundwork and floating aid people give is comparably valuable to the act of writing up some chunk of the text.
This is largely due to the current model of science funding, and not just in the US. Here in Germany, many people in public science (i.e., at a university, not in-house R&D at a company) only get chains of 1-year contracts with no real security and low pay, since many grants and funds are also only available on the same time frame. That means a constant hustle for funding, especially third-party funding.
IMO there is only one way to solve this problem: politicians have to irrevocably allocate fixed chunks of money for public scientific investment over long terms (think 10 or 20 years) to give scientists and universities actual security in their planning and staffing. This would also solve the problem that e.g. NASA has with each President reversing course. No wonder the last Moon visit was decades ago when the priorities get completely turned over every 4-8 years.
There are other factors at play too, that are harder for me to pin down. Funding models are a big problem, but there's something related to attention-seeking or metrification at play too. Some of this has probably always been around, but in talking to older colleagues I get the sense that things are much more splashy and fad-driven than they used to be, with much greater pressure to produce in volume. A colleague explained that when it's that much easier to write and publish a paper, there's more of an expectation that you do more of them, even though the idea development time isn't any shorter.
This is also one of the arguments against promoters of so-called "term limits" for congressmen and others; imagine this kind of churn in priorities occurring with major and minor public works projects!
We don't have to wonder too much - we can already see it at the state level with governorships changing; one recent large example is California's high-speed rail system. For all of its "boondoggle-ry" and problems, I don't think the way it's been "axed" lately will help complete it. In fact, it might just be a self-fulfilling prophecy for its opponents.
That's only one example; I'm sure others in other states could be easily found as well if one were to look. Ultimately, that kind of thing would only get worse with term limits on representatives to Congress, because federal funding for such large scale projects is needed - and that would end up likely in flux, and ultimately scuttle projects that depend on steady funding to be completed.
One could argue that individual state projects should only be funded by the state itself, but that notion of state self-sufficiency went out with the end of the Civil War. I also tend to wonder if - under a term-limited system - such a thing as the interstate highway system could have ever been built. It doesn't seem likely.
Also, it's worth mentioning that countries which support a PhD by publication essentially require you to conduct self-citing research. This is to show a common thread between your papers, so that the PhD can be defended as a body of work on a single subject.
There seem to be some authors and author groups who rely almost entirely on self-citation for impact factor, allowing them to get by with irrelevant or unchecked work. It might be possible to detect that with a metric like self-citation or high author-placement self-citation as a fraction of overall citations.
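A minimal sketch of such a detection metric, assuming a corpus where each paper's authors and reference list are known (all paper IDs and author names below are invented):

```python
# Toy metric: the fraction of citations within a corpus where the citing
# paper shares at least one author with the cited paper.
def self_citation_rate(papers):
    """papers: dict paper_id -> (set_of_authors, list_of_cited_paper_ids)."""
    total = self_cites = 0
    for pid, (authors, refs) in papers.items():
        for ref in refs:
            if ref not in papers:          # skip citations outside the corpus
                continue
            total += 1
            if authors & papers[ref][0]:   # any shared author counts
                self_cites += 1
    return self_cites / total if total else 0.0

corpus = {
    "p1": ({"Rossi"}, []),
    "p2": ({"Rossi", "Chen"}, ["p1"]),     # self-citation (shared author)
    "p3": ({"Okafor"}, ["p1", "p2"]),      # two independent citations
}
print(self_citation_rate(corpus))  # 1 self-citation out of 3 -> ~0.33
```

A stricter variant for "high author-placement" self-citation would intersect only the first few authors of each paper instead of the full author sets.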
But overall, it seems like this metric should be limited to exploratory use. There are wholly legitimate cases of frequent self-citation, like mathematicians pioneering a new technique, or astronomy research groups which credit a large support team and produce many sequential findings. Discerning an apparent citation mill like Vel Tech R&D from a legitimate research group like the LSST requires thought, not just statistics.
Meanwhile, the most egregious self-citers are usually doing something else wrong too. Robert Sternberg wasn't just self-citing, he was reusing large amounts of text without acknowledgement, and abusing his journal editorship to publish his own works without peer review. The Vel Tech author in the article seems to be citing his own past works which are irrelevant beyond vaguely falling in the same field, and the enormous range in his work (from food chain models to neurobiology to machine learning to fusion reactors) makes me suspect it's either inaccurate or insignificant.
Ioannidis is damn good at what he does, and was far too sensible to broadly condemn high self-citation researchers. But it would be a real shame to see self-citation rate blindly added to university standards the way citations and impact factor were. The lesson here is that reducing academic impact to statistical measures of papers doesn't work, not that we need some more statistical measures.
That's the main issue, isn't it? Citations are a bit like tokens that can be exchanged for funding, so they become a commodity that people are incentivised to hoard and trade. That is just the worst kind of environment for promoting good-quality research. The only thing it can promote is... lots of citations.
Publish? What and where exactly?
Think that’s producing the best scientific results?
That's a strawman. The answer you were looking for is, "I don't know". I don't know either.
This is the table of thresholds for associate professorship ("II Fascia") and full professorship ("I Fascia"):
"Numero articoli" is "number of papers", "Numero citazioni" is "number of citations", and "Indice H" is "h-index". The thresholds are different for each research area (e.g., "INFORMATICA" is "computer science").
The topic is actually a little bit more complex, since looking at the metrics is only one step of the process.
PLOS gives reasonable citation guidelines, and in this context their Rule 5 is particularly relevant: https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...
> Understanding the value attributed to X, Y, and Z in that particular text requires assessment of the rhetorical strategies of the author(s).
They could've just said, if you want to know why the author thinks XYZ are important, you need to look at what they are saying about it.
I'm a hardcore postmodern leftist, but I don't see how writing in such a contorted way helps practicing scientists. In fact I would argue that this kind of listing obscures a politics of its own; it is so busy prescribing citation practices that it won't examine its own politics.
That said, it's the first time I've seen this guide so maybe I need to read up on the issues; a list of do's / don'ts isn't the best way to introduce and help people understand the issues.
The main problem is that there is an objective need (or desire?) by various stakeholders to have some kind of metric they can use to roughly evaluate the quality or quantity of a scientist's work, with the caveat that people outside your field need to be able to use it. I.e., let's assume we have a university or government official who, for some valid reason (there are many of them), needs to compare two mathematicians without spending excessive time on it. Let's assume that the official is honest, competent, and in fact a scientist him/herself, and so can do the evaluation "in the way that scientists want" - but that official happens to be, say, a biologist or a linguist. What process should be used? How should that person distinguish insightful, groundbreaking, novel, and important research from pseudoscience or a salami-sliced paper that brings nothing new to the field? I can evaluate papers and people in my research subfield, but not far outside of it. Peer review for papers exists because we consider people outside the field unqualified to directly tell whether a paper is good or bad.
The other problem, of course, is how do you compare between fields - what data allows you to see that (for example) your history department is doing top-notch research but your economics department is not respected in their field?
I'm not sure that a good measurement can exist, and despite all their deep flaws it seems that we actually can't do much better than the currently used bibliographic metrics and judgement by proxy of journal ratings.
Saying "metric X is bad" doesn't mean "metric X shouldn't get used" unless a better solution is available.
Also, I really question your notion that people outside a field should be able to evaluate the quality of someone's work, especially in academia, where the whole point is to be well ahead of what most people can understand. That theory seems like part of managerialism, which I'll grant is the dominant paradigm in the Western corporate world.
I understand why a managerialist class would like to set themselves up as the well-paid judges of everybody else. But I'm not seeing why anybody would willingly submit themselves to that. It's a commonplace here on HN that we avoid letting managers make technical decisions, however fancy their MBA, because they're fundamentally not competent to do it. That seems much more important for people doing cutting-edge research.
That’s not the case at all. Being at the leading edge of research should mean that you are creating new knowledge. That doesn’t imply that people cannot understand it. This expectation that laypeople cannot possibly understand science is one of the reasons so many papers are written so densely and obtusely. “They” can’t understand it anyway, right?
Feynman said if he couldn’t explain it to freshmen he didn’t understand it himself.
I do agree that researchers should be able to give decent "here's what I do" explanations to the general public. But that's very different than a member of the general public understanding the context well enough that they can judge the value of the work to the field.
It's about the question of resource allocation. Pretty much every subfield of academia is a net consumer of resources, i.e. someone outside of that subfield is funneling resources to it. That someone - no matter if it's a university, or some foundation, or a gov't agency, or a philanthropist - needs to make a decision on how to allocate resources. And, in general, they honestly want to make a good, informed decision on which projects and researchers to support; but nonetheless they have to make a decision according to some criteria. So there's no choice of "no metric"; there will always be a metric, and we can only argue that it should be better. And the answer to "why anybody would willingly submit themselves to that" is that duh, you don't get a choice - you can suggest a better method to fulfil their goal of allocating resources in a way that is (also in their opinion) fair and objective, but you can't get around the fact that scientists are generally funded by nonscientists. And they need (or want) to make decisions.
They could delegate that, but that doesn't solve the question about the criteria - if they delegate that to universities, they still have to decide on how to allocate between departments; if they delegate that to scientist councils uniting all the departments in the country working on some subfield, they have to decide on how to allocate between the different organizations. So no matter what, you have to compare not only quality of similar scientists, but also of dissimilar scientists working in different (sub)fields. And delegation doesn't absolve you from responsibility, so if the money is (or looks!) wasted, then that's a failure - so when you delegate, you want to require them to use objective criteria. Which is hard - I could tell you which researchers in my subfield are doing excellent work and which are useless; but if I had to justify these decisions, to demonstrate why they're not just my bias because of politics/liking certain methods/gender/ethnicity/etc then it would actually be tricky; and I think that I'd actually reach out for these metrics. And I'm quite certain that the metrics (for the people that I have in mind) would agree with my subjective opinion; on average, the great research gets cited much more and is in higher-ranking venues; while the lousy stuff gets no citations apart from the author's only grad student.
Also, there's a lack of trust (IMHO not totally unwarranted). Even if you could get a bunch of experts who are qualified to evaluate who gets what amount of resources, you can't rely on them actually doing so fairly - if we take spiders as a totally random example, in general you're qualified to distinguish which spider research is good and which is useless only if you actually work on spider research, most likely in one of those teams - and the expected result is nepotism, allocating resources for purely (intra-field) political reasons. And who'd decide on how to split resources between spider research and bird research? Do you expect the spider guys and bird guys to reach a consensus? Or would it go to whatever field the dean is in? This is a big problem even currently, and a big part of why the metrics are being gamed - but at least metrics require effort to game and can't be gamed totally; if we did away with them, we'd be left with absolutely arbitrary political allocation, which would be even worse.
So at the end of the day "they" need some way to transform the only reasonable source of truth - actual peer review - into something that "non-peers" can use to judge what the aggregate of that peer review says. That need is IMHO not negotiable; I really believe they do actually need it - they don't want to do resource allocation totally arbitrarily, they want to do it well, they need (because of external pressures) objectivity and accountability, and currently this (journal rankings, bibliometrics, etc.) is the best we have for summarizing the results of that peer review.
If I had to write a law draft for a better process of allocating resources, what should be written in it?
Again, since this is a tech community, let me use that for an analogy. It's a classic problem for non-technical founders to evaluate their technical hires. They aren't qualified.
The right solution is not to find some gameable metric of tech-ness, like LoC/day or Github stars. Instead one uses either direct experience-based trust or some sort of indirect trust, like where you have a technical expert you trust and have that person interview your first tech hires.
Yes, having expert humans make the decisions is imperfect. But it's not like a managerialist approach is either. And the advantage of using expert humans, rather than a gameable metric and managerial control, is that we have centuries of experience in how people go wrong and many good approaches for countering it.
We fund academic work because we see value in it. But there are many kinds of value, and many different sorts of value. So I think it's appropriate that we have many different universities which have many different departments. Many different funding agencies and many different foundations. Each group has their own heuristics for picking the seed experts.
There are still systemic biases, of course, but that's true of any approach. And distributed power is much more robust to that then centralized power or a single homogeneous system.
Existing community/expertise based moderation and reputation systems might not be directly transferable or adequate. But it shows there are new ways to think about more decentralized measures of reputation that are new to this century and haven't been tried. New ways that may be preferable to a small group of kingmakers.
I think the biggest problem is leadership and cooperation of community to try something different. It's not just that there is no person who can mandate these things, it's that multiple constituencies have widely diverging interests, i.e. authors, universities, corporations, journals.
I also don't think it's a problem that different groups have different interests, etc. As I say elsewhere, I think that diversity is the solution.
>I also don't think it's a problem that different groups have different interests, etc.
I don't see how you can deny that getting the community to cooperate on trying something different is a major hurdle.
How many years has it been since important issues in the academic process were widely known? How much success in adoption has there been to date, regarding any fundamental changes?
It seems on its face to be crucial.
It would both help solve the replication crisis and resolve this problem.
Of course then you might have 10 000 studies replicating the same easy to do study... which is why the "score" should be reduced based on how many other times that study has been replicated.
An insightful study that's replicable (but has not yet been) is valuable. A lousy study that's been replicated five times (not because it's interesting, but because it was easy to do, and the replicators knew that they'd be rewarded for replicating anything) is not valuable.
A metric that says "number of studies" is IMHO even more arbitrary, more gameable, and more detached from actual value than citation count - which at least carries some notion that your study actually matters to other people; that it was worth writing that paper because someone read it.
Or, I dunno, paleontology or sociology or other stuff.
Citations can be a useful metric here, particularly if you can identify citations of people actually using the method (as opposed to people just mentioning it in passing, or other methodological researchers comparing their own methods to it).
The number of people who did so and gave their approval would be a good indicator that I can trust the paper.
What does a citation do that's better than this?
For experiments, or non math papers, you might need something more robust. I think mostly because reviewing the paper isn't really reviewing the full study, but only what the researcher put in the paper. So it is very hard to review methodology and details to be sure they followed proper protocols, etc. You'd need someone to have been reviewing the study as it is happening, and not just the output paper from it.
Consider Researcher A, who has one paper with a hundred citations, and Researcher B, who has ten papers with two citations each. Probably Researcher A has made a larger contribution.
Whether you're in math or any other field, the fact that a paper is correct or reasonable enough to make it past peer review doesn't mean anybody gives a shit about it.
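The hypothetical Researcher A vs. Researcher B comparison above is a good stress test for a real metric. Here is the standard h-index on those two records, as a minimal sketch (the citation counts are the invented ones from the example):

```python
# The h-index: the largest h such that the researcher has h papers
# with at least h citations each.
def h_index(citations):
    """citations: list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:   # the i-th best paper still has >= i citations
            h = i
    return h

researcher_a = [100]       # one paper, a hundred citations
researcher_b = [2] * 10    # ten papers, two citations each
print(h_index(researcher_a))  # 1
print(h_index(researcher_b))  # 2
```

Notably, the h-index ranks Researcher B above Researcher A here, even though A's single paper drew fifty times the attention, which illustrates how much a metric's design choices shape the verdict.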
You trust in people, in consistency of physical laws, in coherency of your own mind, in constancy of temporal flow, in so many things because - as the Pyrrhonist refrains - nothing is certain, not even this.
I disagree with the assertion that bad metrics should be used if there are no alternatives. Bad metrics give wrong answers, and only the illusion of meaningful information. The most common use of bad metrics is to lie to people, and it isn't the scientists using the metrics but the organizations that employ them.
Why should an easily measurable metric which has meaningful value exist? It doesn't seem obvious to me that it should at all. Determining the capability of a researcher is inherently a very complex intellectual task. The desire is to reduce that task to something which removes the need for the person doing the evaluation to read and understand the produced research, or to even understand the field of study in many cases. Perhaps, instead, those who are put in charge of things like awarding grant funding, granting tenure at universities, and deciding who to hire to teach ought to be expected and required to evaluate the research on its merits. This would greatly increase the intellectual sophistication and capability needed for people in those positions, but the alternative will always be fairly easily exploitable because it is easier to goose a metric than to do solid research.
We see the shortcomings of trying to reduce complex intellectual challenges to checklists or metrics all the time. And we simply ignore the alternative of relying upon intellectually capable people meeting the challenge. Personally, I don't understand why.
The bureaucrat in this case would be another university professor working in the same field.
And for what it's worth, I've almost never heard impact factors discussed at NIH study sections, where investigator quality is explicitly on the agenda. Reviewers talk about relevant prior publications in the field, esp in marquee journals. [this latter feature is the reason we don't just put everything on biorxiv or equivalent and move on.]
We do need a metric imo, but I agree we don't have a perfect one yet.
Even better split it into individual contributors to give a count of researchers who have cited the paper?
In a very loose sense, PR is the same algorithm universities use, evaluate quality of some content based on the number of references to that content.
It is definitely gamed in similar ways. I'm surprised we haven't seen professors hire SEO firms to help increase citation counts of their research.
In fact PageRank was inspired by academic rankings in that aspect:
"PageRank was influenced by citation analysis, early developed by Eugene Garfield in the 1950s at the University of Pennsylvania, and by Hyper Search, developed by Massimo Marchiori at the University of Padua. In the same year PageRank was introduced (1998), Jon Kleinberg published his work on HITS. Google's founders cite Garfield, Marchiori, and Kleinberg in their original papers."
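The core of PageRank can be sketched in a few lines of power iteration over a toy citation graph (the paper IDs are invented; real implementations add sparsity and convergence checks):

```python
# Minimal PageRank: repeatedly redistribute rank along citation edges,
# with a damping factor for the "random surfer" jump.
def pagerank(links, damping=0.85, iters=50):
    """links: dict node -> list of nodes it cites. Returns node -> score."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for v, outs in links.items():
            if not outs:                       # dangling node: spread evenly
                for u in nodes:
                    new[u] += damping * rank[v] / n
            else:
                for u in outs:
                    new[u] += damping * rank[v] / len(outs)
        rank = new
    return rank

graph = {"a": ["c"], "b": ["c"], "c": []}      # two papers cite "c"
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))               # "c" ranks highest
```

The recursive twist over plain citation counting is that a citation from a highly cited paper is worth more than one from an obscure paper, which is exactly the wrinkle the citation-analysis lineage above led to.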
Carl Bergstrom is a smart guy so I suppose the practical implementation of the above must have some wrinkles, but with enough brute force it seems tractable. What I despise more than anything is the gaming that takes place for “impact factor”.
I do OK by standard metrics but would very much like to know where I stand by less easily gamed metrics of influence.
And suddenly Google is the authoritative source on literally everything in the world. I hope you like their political views, because they would become "the one".
But my work is incremental, and I obviously don't want to repeat what I said in a different paper, so I cite earlier work in later work. TBH, I don't think it's possible to avoid self-citation unless:
1. Your research is so popular that by the time you need to cite it, it's been surveyed, or improved upon, or otherwise adapted.
2. You switch research subjects relatively often.
3. You publish "blocks" of work, each based on fundamentals in your field established by others - and they're not incremental.
The whole reason the internet and wikis took off is we were very liberal in how we linked. If we disallowed inbound citations, wouldn't it be a lot harder to backtrack and grasp contextual underpinnings?
Anecdote: in the field of adult attachment theory and love, there are a few prominent scholars who cite each other: Shaver, Hazan, Mikulincer. They write papers citing their own work and each other's. There's also a book by Mikulincer that highlights Shaver's upbringing with his parents, his past as a hippy, etc. They're delivering very nice content, and they do cite others outside their ("circle"?).
Are there potentially scholars in the field with valuable contributions that go unnoticed? Possibly. It doesn't make self-citations in their papers any less helpful. Also I worry that regulating citations through some system may affect the quality of content and fix something that's not broken.
Which brings me to another issue, aren't we supposed to be helping each other?
 Example: http://adultattachmentlab.human.cornell.edu/HazanShaver1990....
That said, if you publish paper A, and then cite it in paper B which builds on that work, then in paper C you really only need to cite paper B if you're building on that work, not both B and A. It might make for an interesting data set to plot out those sorts of relationships.
You could just cite the last paper here, which is the only one used directly, and which presumably itself cites the earlier papers. But it's more useful to me if you include the version of the sentence that cites all three and briefly explains their relationship.
Often half or (much) more of the value of a paper is in the references, and that's not a bad thing. Sometimes it is the first thing I read.
There's no ink shortage, no link limit on the Internet, and every paper has an abstract for quick filtering. As a curious person I want everything that serves to establish the argument cited so I can be guided to papers of interest and get a better idea of where an idea fits in the broader field.
Logically I agree with you, but a lot of academics seem to believe differently when it comes to citing other people's work, and if we are to go by that logic (which a lot of people are inevitably forced to do), I don't see why one should treat their own work any differently.
Filtering for self-citations is useful to identify the bubbles. But it is not sufficient to determine if those bubbles only contain hot air or if these scientists are actually working on something with substance in a narrow field where few others publish.
The problem really is the abuse of citation metrics and journal brand names (and especially journal-based metrics) as a means of evaluating researchers. What we really need is a different method of evaluating researchers that does not rely on where they publish or what they cite.
(But I would say that, given that I work on one such a system.)
> The rate of duplication in the rest of the biomedical literature has been estimated to be between 10% to 20% (Jefferson, 1998), though one review of the literature suggests the more conservative figure of approximately 10% (Steneck, 2000). https://ori.hhs.gov/plagiarism-13
If work by another author was enough to inspire you and add a reference, then your own previous work should certainly qualify, if it added inspiration to the current paper. Self-citing provides a "paper trail" for the reader when they want to investigate a claim or proof further.
(Like PageRank, it is very possible to discount internal PR/links under external links, and when you also take into account the authority of the referencer, you avoid scientists accumulating references from non-peer reviewed Arxiv publications).
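A sketch of that discounting idea, assuming shared authorship marks a citation as "internal" (all weights, paper IDs, and names below are invented for illustration):

```python
# Weighted citation counting: self-citation edges count less than
# external ones before any ranking is computed.
def weighted_counts(papers, self_weight=0.25):
    """papers: dict id -> (set_of_authors, list_of_cited_ids).
    Returns id -> citation score with self-citations discounted."""
    score = {pid: 0.0 for pid in papers}
    for pid, (authors, refs) in papers.items():
        for ref in refs:
            if ref not in papers:
                continue
            shared = authors & papers[ref][0]
            score[ref] += self_weight if shared else 1.0
    return score

corpus = {
    "p1": ({"Rossi"}, []),
    "p2": ({"Rossi"}, ["p1"]),    # self-citation: counts 0.25
    "p3": ({"Okafor"}, ["p1"]),   # external citation: counts 1.0
}
print(weighted_counts(corpus))    # p1 scores 1.25 instead of 2
```

The same per-edge weights could feed straight into a PageRank-style computation, so self-citation still leaves a paper trail for readers without inflating the ranking.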
It’s the same with the startup world. If you’re the only one doing a thing, are you brilliant or foolish?
Today, most scientists go for the popular topics and whatever is on the government research plan to get funding.* Whenever the wind changes direction they change their topics because they need that funding.
*: This might seem to contradict the statement that most scientists work on niche topics. But only on the surface. In order to get published you have to do something novel. So you choose a popular topic and then research a rather unpopular side aspect of it, like how a specific chemical behaves when applied to the popular topic. If you're successful you publish and continue. Citations come later or they don't, but the next round of funding comes with publishing. After a few years without many citations you move on to the next thing.
With government research plans, you often just need to publish and then it's done. The citations only matter long-term, if at all. Most scientists don't achieve anything of greater value. They are happy if they can publish at all. If the institute has a few scientists with a high citation count, that carries all the rest of them.
There are countless scientists who made great strides working on topics others deemed ridiculous. Heck, many of the Nobel prize winners were ridiculed by their colleagues as borderline wack-jobs at the time they were doing their research. Even after winning the prize, some still were for their later work (Crick's search for consciousness comes to mind, and why it would be so worthless a search does not).
If anything, the hubris of the scientific community would be as deafening as the pseudo-science BS, and would hold back progress just as much if not more, were it not for one key thing: the scientific method.
Luckily, we have a process by which crackpots get differentiated from geniuses. So let's not leave $20 on the ground assuming others would have picked it up, especially when that $20 represents collective progress for the entire species.
Do you have any good examples of scientists being considered wack-jobs before winning?
> Dr. Ignaz Semmelweis discovered in 1847 that hand-washing with a solution of chlorinated lime reduced the incidence of fatal childbed fever tenfold in maternity institutions. However, the reaction of his contemporaries was not positive; his subsequent mental disintegration led to him being confined to an insane asylum, where he died in 1865.
> if Andromeda were not part of the Milky Way, then its distance must have been on the order of 10^8 light years—a span most contemporary astronomers would not accept.
The size and distance of these objects (galaxies) seemed far too absurdly large to one side of the debate to be accurate; it would mean the size of the universe would be absolutely enormous. Of course,
> it is now known that the Milky Way is only one of as many as an estimated 200 billion (2×10^11) to 2 trillion (2×10^12) or more galaxies, proving Curtis the more accurate party in the debate.
https://en.wikipedia.org/wiki/Great_Debate_(astronomy) — I think it's an interesting read, and a good example of how sometimes the right answer can seem absolutely wrong.
>...Stanley B. Prusiner, a maverick American scientist who endured derision from his peers for two decades as he tried to prove that bizarre infectious proteins could cause brain diseases like “mad cow disease” in people and animals, has been awarded the ultimate in scientific vindication: the Nobel Prize in medicine or physiology.
>...Prusiner said the only time he was hurt by the decades of skepticism “was when it became personal.” After publication of an especially ridiculing article in Discover magazine 10 years ago, for example - which Prusiner Monday called the “crown jewel” of all the derogatory articles ever written about him - he stopped talking to the press. The self-imposed media exile became increasingly frustrating to science journalists over the past decade as his theories gained scientific credibility.
>....The recipient of the 2011 Nobel Prize in Chemistry, Daniel Shechtman, experienced a situation even more vexing. When in 1982, thirty years ago, he made his discovery of quasicrystals, the research institution that hosted him fired him because he "threw discredit on the University with his false science".
>...He was the subject of fierce resistance from one of the greatest scientists of the 20th century, Linus Pauling, Nobel Laureate in Chemistry and Nobel Peace Laureate. In 1985, he wrote: "Daniel Shechtman is talking nonsense. There are no quasicrystals, there are only quasi-scientists!"
An example that is pretty well known is Barry Marshall:
>...In 1984, 33-year-old Barry Marshall, frustrated by responses to his work, ingested Helicobacter pylori, and soon developed stomach pain, nausea, and vomiting -- all signs of the gastritis he had intended to induce.
>...Marshall wrote in his Nobel Prize autobiography, "I was met with constant criticism that my conclusions were premature and not well supported. When the work was presented, my results were disputed and disbelieved, not on the basis of science but because they simply could not be true."
It was Max Planck who said, "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." So this isn't a new issue, and things are probably better now than they were in the past.
This is nothing like tech startups, where you have tons of people sharing a relatively small problem space (creating tech tools companies want).
Consider for a moment: there are well over 350,000 different species of beetles. There's just too much to study and too few people doing the work to expect there will always be a plethora of external research to draw upon.
Acquiring knowledge (I should say 'beliefs about valid knowledge') and brainstorming (and certainly collaboration and getting an advisor to adopt you) appear to be social activities, as much as purely logical and analytic activities.
Social activities like this, for social or herd creatures, are subject to flock or swarming patterns.
Maybe all the brilliant people are swarming around a locus of interest? It's certainly a good way to have the population explore the ins and outs of a, well, locus of interest. It's also a good way to have a loner get shunned by wandering off and poking at an uninteresting pile of dung.
I guess my point is: why not both? (Mathematically, statistically, egotistically, I know the idea that I am the foolish one is almost certainly the more likely case.)
It seems like PhD candidates work on peripheral elements of their sponsor/tutor/professor's work, so if that professor at some point makes a significant step, then one of those PhDs will be along for the ride; not necessarily the genius one.
At the time, chytrids were about as obscure as a topic in science can be. Though fungi compose an entire organismal kingdom, on a level with plants or animals, mycology was and largely still is an esoteric field. Plant biologists are practically primetime television stars compared to mycologists. Only a handful of people had even heard of chytrids, and fewer still studied them. There was no inkling back then of the great significance they would later hold.
Longcore happened to know about chytrids because her mentor at the University of Michigan, the great mycologist Fred Sparrow, had studied them. Much yet remained to be learned—just in the course of her doctoral studies, Longcore identified three new species and a new genus—and to someone with a voracious interest in nature, chytrids were appealing. Their evolutionary origins date back 600 million years; though predominantly aquatic, they can be found in just about every moisture-rich environment; their spores propel themselves through water with flagella closely resembling the tails of sperm. Never mind that studying chytrids was, to use Joyce’s own word, “useless,” at least by the usual standards of utility. Chytrids were interesting.
The university gave Joyce an office and a microscope. She went to work: collecting chytrids from ponds and bogs and soils, teaching herself to grow them in cultures, describing them in painstaking detail, mapping their evolutionary trees. She published regularly in mycological journals, adding crumbs to the vast storehouse of human knowledge.
And so it might have continued but for a strange happening at the National Zoo in Washington, D.C., where blue poison dart frogs started dying for no evident reason. The zoo's pathologists, Don Nichols and Allan Pessier, were baffled. They also happened to notice something odd growing on the dead frogs. A fungus, they suspected, probably aquatic in origin, though not one they recognized. An internet search turned up Longcore as someone who might have some ideas. They sent her a sample, which she promptly cultured and characterized as a new genus and species of chytrid: Batrachochytrium dendrobatidis, she named it, or Bd for short.
That particular chytrid would prove to cause a disease more devastating than, as best as scientists can tell, any other in the story of life on Earth. After Longcore's initial characterization, she and Nichols and Pessier proceeded to show that frogs exposed to Bd died. Other scientists soon linked Bd and its disease, dubbed chytridiomycosis, to massive, inexplicable die-offs of amphibians in Costa Rica, Australia, and the western United States. No disease had ever been known to cause a species to go extinct; as of this writing, chytridiomycosis has driven dozens to extinction, threatens hundreds more, and has been found in more than 500 species.
Almost overnight Longcore went from obscurity to the scientific center of an amphibian apocalypse. “Had I not been studying the ‘useless’ chytrids,” she says, “we wouldn’t have known how to deal with them.” Her research has been crucial—not only the initial characterization, but also her understanding of the systematics and classification of chytrids, which helped provide a conceptual scaffold for questions about Bd: Where did it come from? What made it so strange and so terrible? Why does it affect some species differently than others?
Imagine how many inventions we would have missed if all inventors had shared your mindset.
I know some scientists who define their research direction by asking these questions first before pursuing an idea. Many great inventions, like optogenetics or expansion microscopy, came from this investigative strategy. It can help keep your resources and energy in check.
Some topics need a large investment to show anything at all.
Some topics show immediate results (good or bad).
Should we, for example, stop all activities in fusion research because so far nobody has shown that it will work and we already invested billions?
It’s just an important question to ask yourself.
If you live in your own bubble the needle doesn't move forward.
If you are the only one doing the work, why is it worth doing?
I feel like if this problem were very concerning we'd see the distribution concentrated at certain institutions, but I'm not sure any institution on the list has more than 10 such researchers. We hear a lot about questionable Chinese journals, but the highest-ranked institution in this list is the Chinese Academy of Sciences, with 3 individuals.
I think the more likely case is there are a few bad apples, some bad practices we can't ever fully get rid of, and that some research lends itself more to self-citation.
Yes, in theory, the scientific method/process is a wonderful standard. Unfortunately, once it's exposed to egos and profits, it becomes something else far less worthy of praise and honor.
I'm not doing a take down of science, science has already done that to itself. The sooner the rest of us come to terms with that, the better.
Due to our socioeconomic dogmas, vaguely based on a completely misunderstood, caricatured Darwinian theory, we 'marketed' science as if there could be no alternative. We turned science into a quantitative metrics game, and, by golly, now we act all surprised that scientists do game the system?
How useful do you think Google Search would be if they had just stopped after PageRank v0.1 and called it a day, then let all the websites 'vote with their links'?
Which is to say, these may be niche sciences, but it seems like they have significant resources behind them, somehow.
Basically, my professor was one of the first female physicists trying to make it in a clearly male-dominated field 60+ years ago.
She was not offered jobs, or was offered them at reduced salary versus her male colleagues, being told her research was not as productive.
She literally invented the concept of considering number of citations as a proof of impact, relevancy, and productivity, and showed that her work was better than most of her male colleagues.
Only after she jumped through this hoop herself was she able to get a faculty position. Damn impressive of her.
Neither does a low number of citations. For example, my field is small and
kind of esoteric, so we don't get lots of citations either from the outside or
the inside (one of the most influential papers in the field has... 286
citations on Semantic Scholar; since 1995).
With a field as small as a couple hundred researchers it's also very easy to
give the appearance of a citation mill. Given that papers will focus on a very
specific subject in the purview of the field, it is inevitable that each
researcher who studies that specific subject will cite the same handful of
researchers' papers over and over again, and be herself cited by them, since
she's now publishing on the subject that interests them.
As to self-citations, like Ioannidis himself says there are legitimate
reasons, for instance, a PhD student publishing with her thesis advisor as a
co-author. The student will most probably be working on subjects that the
advisor has already published on and in fact will most likely be extending the
advisor's prior work. So the advisor's prior work will be cited in the student's papers.
So I'm really not sure what we're learning in the general case by counting
citations, other than that a certain paper has a certain number of citations.
Most researchers continue to do new research on the same concept after a publication, and they will of course cite their earlier work when continuing. Additionally, post-graduate researchers often have their names placed on the research of grad students they supervise, even though they often have minimal involvement in the research or the conclusions drawn.
You might be able to tell something from the ratio of self-citations to citations from other authors, but only if you could eliminate self-citations that were either inclusion by proxy or cases where the author is merely continuing research on the same topic with new methodologies.
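The raw version of that ratio is mechanical to compute once you have each reference's author list; the filtering is the hard part. A minimal sketch, where the function name and the author-list format are illustrative assumptions rather than any bibliometric database's API:

```python
def self_citation_ratio(paper_authors, cited_author_lists):
    """Fraction of a paper's references sharing at least one author
    with the citing paper (a crude proxy for self-citation)."""
    authors = set(paper_authors)
    if not cited_author_lists:
        return 0.0
    # Count references with any author overlap; this deliberately does NOT
    # distinguish legitimate continuations from gaming, which is the point
    # made above: the raw number alone cannot tell them apart.
    shared = sum(1 for ref in cited_author_lists if authors & set(ref))
    return shared / len(cited_author_lists)
```

For example, a paper by authors A and B citing three works authored by (A, C), (D,), and (B,) would score 2/3, even if both overlapping citations were entirely legitimate.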
There are already methods for identifying bad research, none of which can be achieved through the use of non-human-assisted data analysis of the authors list of research. The only way to be sure is critical review and 3rd party verification of results with repeated experiments.
So, high citation count isn't always a direct indicator of quality.
Two weeks of effort go into doing something, and then you spend two months writing a paper about the slightest of results. Many of these papers introduce an infinitesimal increment to knowledge at best, and you can tell how long it would have taken to get it working. And this is numerical mathematics. There are clans of mathematicians who just go around citing each other. A quality paper comes around every five years or so.
I'm currently reading "The Systems Model of Creativity: The Collected Works of Mihaly Csikszentmihalyi" and was really surprised at the rate of self-citation in the included papers. Now it does seem to me that the cited studies are relevant. And I can't judge whether there would have been papers from other authors even better suited for citing.
And why am I even reading that book? Well because of the persistence with which Csikszentmihalyi gets cited in other writings I read. How do I know these writers weren't shills for Csikszentmihalyi? I don't care all that much when the material is good.
So in the end should I care about backwater publications that cite themselves excessively? Because I don't have to read them. As a consumer I don't seem to get hurt by the practice.
If a scientist is trying to game the system, they are doing it to secure future funding. Either they are gaming the system to make the funder look good, or to appeal to a future funder.