
Objective CS Rankings Through Papers Published at Top Conferences - chillee
http://csrankings.org/
======
Upvoter33
These types of rankings are only as good as the data they are fed. Here, the data
is publication count. That is clearly useful: for example, if you want to go
somewhere to study operating systems, but the school has not published a
paper in that area in the past decade, well, you might want to rethink your
decision.

That said, we need better data. Imagine top programs each doing a detailed
survey of exiting graduate students to learn things like (a) how easy was it
to find an advisor? (b) once you had an advisor, how much did you feel they
helped your career development? and so on. If there were a serious effort to
evaluate the outcome of graduate school -- the graduate students themselves,
and how much they improved while in school -- we'd have a new data source and
a deeper way to evaluate which school one might wish to join.

------
nopinsight
It appears that the rankings are not weighted by faculty size. Given
similar faculty quality, larger departments will often rank higher in this
scheme. It is true that larger departments have certain advantages, but one
would think those level off once there is a critical mass in a given area. A
better way to rank would be interesting. (A simple division by the number of
faculty is obviously wrong as well, since counting interdisciplinary
researchers would lower the average.)

From my observations, citations are a better metric of quality than publication
venues in the long run. Their rationale for not using citations is somewhat
valid, but a measure like the h-index with a sufficiently high cutoff should
mitigate the problem somewhat. A more sophisticated method would be to weight
citations by the number of references in the citing paper, so that the
citations made by each paper sum to 1.0, and to detect and discount
'citation rings'.
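
To illustrate that fractional-citation idea, here is a minimal sketch; the paper
names and reference lists are made up, and this is not how CSRankings (or any real
bibliometric service) computes anything:

    # Fractional citation weighting: each citing paper distributes a total
    # weight of 1.0 evenly across its reference list, so papers with long
    # bibliographies do not inflate the scores of the works they cite.
    from collections import defaultdict

    # citing paper -> papers it cites (hypothetical toy data)
    references = {
        "A": ["X", "Y"],            # A cites 2 papers; each receives 0.5
        "B": ["X", "Y", "Z", "W"],  # B cites 4 papers; each receives 0.25
        "C": ["X"],                 # C cites 1 paper; it receives 1.0
    }

    weighted = defaultdict(float)
    for citing, cited in references.items():
        share = 1.0 / len(cited)
        for target in cited:
            weighted[target] += share

    print(dict(weighted))  # {'X': 1.75, 'Y': 0.75, 'Z': 0.25, 'W': 0.25}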

That said, it is a nice effort to bring objectivity to academic rankings, and
the site's information architecture and data visualization are well designed. I
appreciate it.

~~~
eelkefolmer
Well, they do compensate for the number of authors on a paper. IMHO large
departments often have large clusters where everyone is a co-author, so I do
think it evens out.
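
If that compensation works by splitting each paper's credit evenly among its
authors (my assumption, not a description of the site's actual code), it looks
roughly like this:

    # Author-adjusted paper counts: each paper contributes 1/(number of
    # authors) to every co-author's total. Toy data, purely illustrative.
    papers = [
        ["alice", "bob"],           # each author receives 0.5
        ["alice", "bob", "carol"],  # each author receives 1/3
        ["dave"],                   # dave receives 1.0
    ]

    adjusted = {}
    for authors in papers:
        share = 1.0 / len(authors)
        for author in authors:
            adjusted[author] = adjusted.get(author, 0.0) + share

    print(adjusted)  # alice and bob end up with ~0.83, carol ~0.33, dave 1.0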

~~~
nopinsight
If we look at Visualization, for example, U of Utah's average count is 14.6,
which is the sum of the adjusted counts of 10 faculty members, while UC Davis's
10.8 is the sum from only 3 faculty members. So when a department has more
people actively working in an area, its average count will often come out
higher. That doesn't mean the quality or the level of activity of each
researcher is higher.
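
Worked out per faculty member, the same figures point the other way; a trivial
check using only the numbers quoted above:

    # Per-faculty adjusted counts in Visualization, from the figures above.
    utah_total, utah_faculty = 14.6, 10
    davis_total, davis_faculty = 10.8, 3

    print(utah_total / utah_faculty)    # ~1.46 per active faculty member
    print(davis_total / davis_faculty)  # ~3.6 per active faculty member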

------
dsg42
While I admire the idea of getting subjectivity and bias out of rankings, I
think you're missing an inherent issue of quality here that is being brushed
aside.

For example, I decided to check out my own alma mater, Yale, to see how the
rankings were calculated. One professor stood out to me: Dan Spielman. It
happens that he won the Nevanlinna Prize in 2010, during the period that your
rankings cover. Yet, his average comes out to a mere 4.5, which would mean he
would actually bring down the average score at any of the top ten schools.

The issue here is that, until computers can reliably rank the quality and
importance of papers in real time, these types of rankings mean little. It's
the quality, not the quantity, of papers published that matters.

------
no_protocol
This comment is probably on the rude side, but I've wanted to get this off my
chest for years.

I had some exposure to several of the top listed faculty at one of the higher
ranked schools. It seemed like at least a couple of them genuinely cared more
about how many papers their name was on in DBLP than what the papers were
about.

I mainly have a hard time believing that someone who is a co-author on over 40
submissions per year has time to fully understand them all. I was also
unconvinced that many of the papers were even moderately significant. It
seemed like they knew exactly what to do to get a paper accepted and
optimized for that.

The end result was I decided to steer away from the academic track because it
did not seem like an environment I would be comfortable in.

~~~
WkndTriathlete
Based on the number of insignificant papers I have had to crawl through in
various ACM publications, I am inclined to agree. Having said that, I don't
think the rankings are that out of line.

I do wish that CS (and science in general) would stop requiring publication
for tenure (and for Ph.D. candidates). Wading through 500 abstracts to find the
one genuinely useful innovation gets tiresome. At least the ACM has recognized
it has a problem and is starting to provide curated lists in its monthly
"Communications".

------
hiddencost
"objective" if your goal is to rank people by publication in those
conferences.

(1) The difference between the best and worst papers in a venue is wild.
(2) http://blog.mrtz.org/2014/12/15/the-nips-experiment.html
(3) Different fields vary dramatically in how many papers they publish.

Plus I love how brain-dead the word "objective" is. Everything is objective
with respect to some objective function. What objective function did you
choose, and does it optimize something that matters? This is a great example
of a bad objective applied poorly to something that doesn't matter.

(Don't get me wrong: publication in these conferences is impressive, and
depending on career stage I do use a heuristic about how many good papers
someone has published (and how good they are) when evaluating resumes. But
this is just silly.)

------
eelkefolmer
I am faculty in a department that ranks 80th on CSRankings but >200 in the US
News ranking. Also striking is the poor correlation between the US News
rankings and CSRankings. I emailed the owner of CSRankings, who told me:

"For the top 50 US News schools, the correlation with CSRankings (Spearman's
rho) is 0.77 (p-value = 3e-11). The correlation drops the further one gets
from the top-ranked schools. For the top 25-50 (same exclusion criteria), the
correlation drops to 0.44 (p-value = 0.025). For ranks 40-50, there is
effectively no correlation: 0.12 (p-value = 0.75)."

The US News ranking of CS departments is based entirely on reputation, which
really puts certain CS departments that do good research at a disadvantage in
student recruitment.
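
For anyone curious, Spearman's rho is just a correlation computed on ranks; a
minimal sketch with scipy (the rank vectors below are placeholders, not the
actual US News or CSRankings data):

    # Spearman rank correlation between two rankings of the same schools.
    # The rank vectors here are hypothetical placeholders.
    from scipy.stats import spearmanr

    us_news_rank    = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    csrankings_rank = [2, 1, 3, 5, 4, 7, 6, 10, 8, 9]

    rho, p_value = spearmanr(us_news_rank, csrankings_rank)
    print(f"Spearman's rho = {rho:.2f}, p-value = {p_value:.3g}")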

------
randcraw
This is a great idea and a big improvement over the nonsense used by many
current rankers. But I'd love to see the counts weighted by the impact of the
journals or by the number of references to each pub. A raw count clearly
overvalues department size.

~~~
emeryberger
Journals are not the primary or most important venue for publication for most
areas of computer science. If anything, the current trend is to have journals
include proceedings from the top conferences.

The FAQ discusses the various pitfalls and challenges of counting citations.

------
kevan
It's only objective if the selection process for top conferences is objective.
But I do appreciate a metrics-driven system vs the surveys that US News uses.

~~~
michaelmior
True. I assume the meaning is that the site itself has no subjective
component.

~~~
chias
Besides where to draw the cutoff line between "top" and "non-top" conferences,
of course.

~~~
argonaut
It's unfortunate that this might seem esoteric / inside baseball to
non-academics, but researchers in CS know _exactly_ which are the top
conferences. There is actually very little ambiguity.

Looking at the list, the entries for Robotics, NLP, Computer Vision, and ML
are accurate, to my knowledge (I would also include UAI/AISTATS, but they were
probably omitted because they're closer to mathematics/statistics).

~~~
chias
Being a CS academic myself I fully agree with what you just said.

That said, the decision to use the top _three_ conferences (as opposed to the
top 2, top 5, etc.) in each area is a subjective one, and it has the potential
to significantly change the results.

~~~
argonaut
I don't think they're just listing the top 3 conferences. It seems they're
listing the conferences that are considered "top tier" - i.e. venues that
would "count" toward graduation as a PhD student or consideration as faculty
at a top-10 CS uni.

~~~
chias
According to their FAQ:

    The conferences listed (at most three per area; see below) were
    developed in consultation with faculty across a range of
    institutions. These are the most impactful and selective
    conferences for each area.

