One of the exclusions that I find interesting is any faculty member who holds a joint position at a company. A lot of great professors get zero credit because of this.
Another one is that you are penalized for having student authors on your papers.
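For reference, the "adjusted counts" described in the FAQ split each paper's credit evenly across all co-authors, which is where the penalty comes from. A minimal sketch of the effect (the function and the numbers are mine for illustration, not csrankings' actual code):

    # Each paper contributes 1/(number of authors) to each author's score,
    # so adding student co-authors dilutes the professor's per-paper share.
    def adjusted_credit(author_counts):
        """author_counts: number of authors on each of a professor's papers."""
        return sum(1.0 / n for n in author_counts)

    print(adjusted_credit([1, 1]))  # two solo papers -> 2.0
    print(adjusted_credit([4, 4]))  # two papers, each with 3 students -> 0.5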
According to csrankings.org's FAQ, they only include professors who can advise a CS-only student. The conflict is that the website includes areas (like "Embedded & real-time systems", "Robotics", and "Computer Architecture") that are traditionally done in EE and ECE departments. One significant example is UT Austin's prolific ECE department.
The fundamental conceit of csrankings.org is right in the name. US News (and anyone else) does just as good a job. And always will. You can matter and have your measurement gamed. Or you can measure in obscurity. Or you can matter and not measure. Pick one.
For undergraduate, just go wherever is cheapest with a reasonable curriculum and non-joke professors (decently difficult to hack: a couple hundred citations plus real industry experience). Emphasize places that will also teach you non-CS skills (a second major, a great network that also implicitly teaches you the right kind of communication skills, etc.).
For a master's, just don't.
For a PhD, go with the best advisor you can find and finish fast.
Ignore rankings. They exist to be hacked. And academics are great hackers.
I'm sure Emery, who runs csrankings.org, has thought of this, and that the criterion "professors who can advise a CS-only student" is well thought out. Anyone who comes up with rankings/datasets has to pick a set of inclusion/exclusion criteria, which will be unsatisfying to some people. CS research can certainly be done in many different departments, including iSchools, math, media studies, and operations/management science, but this is a ranking of CS institutions rather than of CS research, even if we think the latter is more useful.
Your reference to "traditionally done in EE and ECE departments" is to say that your (and perhaps others') view of what computer science traditionally is should take precedence over the universities' self-definitions. The former (which depends on different people's perspectives of what counts as CS) is harder to delineate in a universally agreeable way than the latter (which has essentially no subjectivity). So I'd imagine a website like this, which is used as a wide reference, will want to remove as much of the author's own subjectivity as possible in order to be acceptable to a wide range of people.
The explanation he's given is that the primary purpose of the website is to help prospective graduate students evaluate strength of CS PhD programs when deciding where to apply. For that purpose, it only makes sense to include people who can advise CS PhD students. This excludes not only research done in EE/ECE departments, but also research done in companies, or in departments without a PhD program (there are some strong ones at undergraduate-only colleges), since that's outside the scope of finding PhD programs.
That's fine as far as it goes, but the website has gotten popular as a general research ranking for a few of these subfields, beyond the original purpose of helping CS grad students choose programs to apply to. For some of those other purposes the original decisions may or may not make sense. The name of the website probably doesn't help either.
Shouldn't the purpose be to help prospective graduate students evaluate the strength of PhD programs in the fields they are interested in?
I think that excluding research done in companies or at universities without PhD programs is very different from excluding EE/ECE professors who happen to be at universities where the bureaucracy makes it difficult for them to have some sort of "official" appointment in the CS department. It is also much rarer for those entities to publish in the top-tier conferences that csrankings counts.
Note that it is not just professors who can advise CS PhD students; it is professors who can be the SOLE advisor of a student earning a CS degree. It would help even if EE professors who have co-advised CS PhDs at the same university were allowed to be included.
Given that the ranking is already based on top-tier conference publications in each field, isn't the author introducing their own subjectivity by excluding professors based on their current university affiliations? It seems quite easy to adjust for EE departments that are obviously prolific in CS fields (as defined by the conferences counted in those fields); a rough sketch of what I mean follows below.
I think the problem is kind of the opposite of what you are saying: why should the history or bureaucracy of each university take precedence over the publication metrics, when everyone seems to agree that fields like embedded systems and computer architecture are part of "computer science"?
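Something like this, assuming you have records of (university, department, venue); the venue lists here are my own shorthand, not csrankings' actual per-area lists:

    from collections import defaultdict

    # Hypothetical per-area venue lists for illustration.
    AREA_VENUES = {
        "Computer Architecture": {"ISCA", "MICRO", "ASPLOS"},
        "Embedded & real-time systems": {"RTSS", "EMSOFT"},
    }

    def area_ranking(pubs, area):
        """pubs: iterable of (university, department, venue) records."""
        venues = AREA_VENUES[area]
        counts = defaultdict(int)
        for university, _department, venue in pubs:
            if venue in venues:  # credit the school, ignoring CS vs. ECE
                counts[university] += 1
        return sorted(counts.items(), key=lambda kv: -kv[1])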
Huh, not sure why Stephen Boyd is not included in the Stanford list, but John Duchi is (if you're reading this John, I'm only a little sorry! ;). He's in the top 10 by citation count among all Stanford professors, behind essentially only Hastie and Tibshirani (both of Elements of Statistical Learning fame), both of whom I'm also surprised are not included, even though they can all advise CS-only students.
This is a bit silly of course, because Stanford has a full cross-department policy, where the student's department and their PhD advisor's department can be completely unrelated. (I'm not exaggerating at all when I say this, and the policy also covers the Business and Medicine schools. There are EE professors who have business school students even without a courtesy appointment in the school of business. Similar things happen in the Medical school, etc.)
I wonder how that (for Stanford and other schools) would change the rankings.