
Quality, not quantity: ending science's "academic prostitution" - jseliger
http://arstechnica.com/science/news/2010/06/quantity-over-quality-doesnt-cut-it-for-scientists.ars
======
avar
Aren't scientific journals more of a hindrance to scientific progress these
days anyway? They're not essential to the peer review process, and the
services that they do provide could be decoupled from publication.

It would be very nice to see a distributed publication system taken up by the
scientific community, based on circles of trust where anyone could partake in
the peer review process.

~~~
tensor
In my experience, journals are still essential to the peer review process and
are the primary way in which work is disseminated into the larger community.

~~~
_delirium
In computer science, I'd say the first part is true but not really the second,
at least in the areas I work in. People definitely use "did it get into a
journal?" (actually in CS, more often a good conference, since top conferences
have more prestige than journals) as a quality filter, but they don't really
_get_ their information by browsing the latest issue of journals. For that,
it's much more common to just get PDFs off the internet--- from authors'
homepages, Google Scholar, CiteSeer, citeulike, mailing lists, personal
suggestions, etc., and then figure out where those PDFs came from later.

In some cases, if it's an author I already know of, I don't even bother
looking to see where it was published until I actually need to write out a
bibtex entry to cite it. Doing things that way, I've cited papers from
publication venues I've never even _heard_ of, so they aren't really doing any
sort of information-filtering work for me. In those cases, the article may as
well have been a tech report for all it mattered to me, since the publication
venue was really serving as nothing (from my perspective) except a fancy place
to deposit PDFs. Admittedly, that works mainly when I already know something
about the author (or someone I know can tell me something about the author).
If they're a total unknown, I'd probably be more wary of citing stuff that
didn't already get into a venue I respect.

~~~
billswift
Actually, the only real reason I maintain an ACM membership is that I think
their online library is a great resource for browsing technical computer
science literature.

------
rubidium
_science output is not directly proportional to effort. Instead of ranking
scientists purely on their publication records, credit should be given for all
sorts of activities that don’t necessarily come across in traditional
metrics._

While I agree there is more to science output than publications, candidates
for post-doc/faculty positions shouldn't be judged on "effort", but on making
real contributions that bring progress to their field of research. Judging
that on anything other than quality publications is difficult, but I look
forward to seeing what people may do about it.

So far I've heard mostly talk and little action.

~~~
_delirium
When you quantify _output_ , though, it tends to reward conservatism: work on
smallish, high-probability incremental improvements, not big things, because
big things have too high variance. It also rewards being good at overselling
work, although that's a bigger problem in some areas than others.

I think people actually are already pretty good at judging research quality on
something like "effort", or more to the point: how good a scientist so-and-so
is. But it requires familiarity with the person over some time, and tends not
to scale well. If you ask a scientist about the quality of research of one of
their colleagues they've been in the same department with for 10 years,
they'll rarely refer to publications, and they often won't even refer to
discoveries. Instead, it's inputs rather than outputs to the research process
that they evaluate: how well is the person managing his/her lab, how
insightful is he/she in picking problems and approaches, how good at mentoring
students, how well does he/she really understand where this field is and where
it's going / should be going, etc.

There is correlation, of course, in that the good scientists most often also
produce good results. But at least among the researchers I know, if you have
to pick one as the "real" one, inputs rather than outputs are it. If you
observe someone for a while, you can tell whether they're a good scientist or
not, and you can also tell the places where it doesn't correlate with
outputs--- someone's been doing everything exactly right on an important
problem but just hasn't gotten a break (or maybe _did_ get it, but it came a
month after they were scooped by another group), or someone else is kind of
mediocre but got lucky with a big result. Neither's the common case, but it's
not that rare either; it's not like research is just a crank you turn and
results come out. =]

I guess the main issue is how to scale it. I think the above is pretty much
how people already _do_ make decisions when they're familiar with someone's
work. If someone is up for tenure, my impression is that profs in that
person's area will rarely even bother looking at publication lists when
deciding whether to vote yes/no: after working with the person for 6 years
they already have a good idea of whether the person is a good researcher they
want to keep. It's mainly the people in other areas (in a large department),
or in administrative roles, who seem to have no choice but to use the CV as
the main basis of their tenure decision.

------
thunk
It seems to me that making research more open and searchable would mitigate
some of these problems. The system-gaming and badly calibrated incentives
smell like the kind of distortions closed social systems tend to introduce.

