
One Scientist's Crusade to Rewrite Reputation Rules - robg
http://www.wired.com/culture/geekipedia/magazine/17-06/mf_impactfactor?currentPage=all
======
lutorm
Still has all the problems that counting citations does: how do you count
multiple-author papers (particle physics has given up ranking authors and just
puts the hundreds of authors in alphabetical order, IIRC)? And it encourages
people to write piecemeal papers, because why say something in one paper if
you can say it in two -- and get twice the citations?

Also, my favorite: What if you cite someone to say that they were completely
wrong when they approached the same problem? Should that be counted as a
positive for them?

~~~
natrius
Presumably, no one is going to try to prove something wrong that has already
been proven wrong, so unless there are many obscure flaws in one's work that
take multiple papers to uncover, it shouldn't affect the citation count much.

~~~
lutorm
In introductions, papers often give a short overview of what's been done
before, so early work in a field often gets cited even if it turns out to have
been flawed. (It can't be completely ridiculous, of course; it must have
sounded plausible at the time, but even shoddy research often does.)

------
sachmanb
This is almost pagerank for academic publications, and it would probably be
improved by being more like pagerank -- i.e., by also taking into account how
often the citing papers are themselves cited.

~~~
timr
There are a lot of people trying to use pagerank for academic journals, but so
far it hasn't worked well for various reasons.

Part of the problem is that the metaphor breaks down: a paper is like an
individual webpage, but a journal is like a company -- it has a much longer
time-line, and its impact varies over time. Also, unlike web links, citations
don't go away; they just accumulate over time. Since the point of these
citation metrics is to rate the journals (and maybe the scientists), pagerank
has some difficulties in the domain. It works better for ranking individual
papers than for ranking scientists or their journals.

This shouldn't be too surprising: TechCrunch (for example) probably has a good
rank on many pages, but pagerank doesn't tell us anything about Michael
Arrington's reputation.

~~~
chancho
But we're not talking about ranking journals. We're talking about ranking
authors. JIF is a reasonable metric for journals; the problem is that it's
used to rate authors: what's the JIF of the journals you publish in?

The metric presented here is much better for rating authors because it gives
more of an author's peers an opportunity to vouch for him by citing his work,
as opposed to only a small editorial board and review committee who decide if
he gets into TopJournalX.

Adding a pagerank-style coefficient (increasing the weight of citations that
come from well-cited papers) would make this metric even better, for precisely
the reason you state: papers exist in perpetuity. If I write a paper now that
is ignored for 50 years, and someone then builds upon it to break ground in an
entirely new field, I deserve some indirect credit for that. The journal I
published in does not.
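
A rough one-level sketch of that coefficient, with made-up paper names and a
hypothetical cited_by map; a real metric would iterate the way pagerank does
rather than stop after one hop:

    # Weight each citation an author receives by how well-cited the
    # citing paper itself is. All data below is invented for illustration.
    def weighted_citation_score(author_papers, cited_by):
        score = 0.0
        for paper in author_papers:
            for citer in cited_by.get(paper, []):
                # a citation from a well-cited paper counts for more
                score += 1.0 + len(cited_by.get(citer, []))
        return score

    # cited_by maps each paper to the papers that cite it
    cited_by = {
        "ignored_1959_paper": ["breakthrough_2009"],
        "breakthrough_2009": ["followup_1", "followup_2", "followup_3"],
    }
    print(weighted_citation_score(["ignored_1959_paper"], cited_by))  # 4.0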

~~~
timr
Empirically, pagerank hasn't been very successful at ranking authors for the
reasons I mentioned, along with other complications (e.g. papers have multiple
authors).

But more importantly, you're confusing impact factor with peer review. Peer
review decisions are double-blind, and impact factor doesn't play a role
(shouldn't, anyway). Papers don't get published in Science and Nature based
upon the authors' impact factors.

------
thras
Unfortunately, both systems are too easily gamed by those who happen to be
more unprofessional than average.

In the end, these sorts of systems are foisted on us by the paid bureaucrat-
class that pays itself quite well for doing all that really hard work of
managing academics. Figuring out whether someone is a hotshot scientist would
mean reading his papers, and that's way too much work.

~~~
knowtheory
1) Whether someone is a hotshot scientist is a subjective matter of opinion.

2) The question they are asking is not "How important are your contributions
to science?", but "How important do your peers _think_ your contributions to
science are?"

I'd probably go as far as to claim that it's nonsensical to search for an
objective value metric for contributions to science. Scientific contributions
are extremely heterogeneous, and value judgements are equally varied.

~~~
thras
Yes. There is no pure objective measurement, but... you still need to hire
some people and not hire others. Value judgments are not randomly distributed.
Science is built upon forging objective consensus out of individual opinion.
It even works pretty well.

~~~
knowtheory
I'm not against ranking people, although I do think we need to keep in mind
what the rankings really mean.

And I'd say that the rankings work well enough, but they're certainly not
perfect, as pointed out by the article.

