"There is a chat group of a few dozen authors who in subsets work on common topics and carefully ensure not to co-author any papers with each other so as to keep out of each other’s conflict lists (to the extent that even if there is collaboration they voluntarily give up authorship on one paper to prevent conflicts on many future papers). They exchange papers before submissions and then either bid or get assigned to review each other’s papers by virtue of having expertise on the topic of the papers. They give high scores to the papers. If a review raises any technical issues in PC discussions, the usual response is something to the effect that despite the issues they still remain positive; or if a review questions the novelty claims, they point to some minor detail deep in the paper and say that they find that detail to be novel even though the paper itself does not elevate that detail to claim novelty. Our process is not set up to combat such collusion."
Also see the link in a sibling comment for what seems to have kicked this all off: https://news.ycombinator.com/item?id=23460336
this is usually something like: anyone whose email address has the same domain as yours, plus anyone you have coauthored any paper with, both of these going back a few years. If you are ethical you will report any additional conflicts that this doesn't catch.
so if a cabal wants to make sure they can positively review each other's papers, they have to make sure they don't trip those automated filters. this means they must have been at separate institutions for a while, and must avoid publishing any papers together.
then, during the review process, they can "bid on" (put themselves forward as a reviewer for) each other's papers and ensure they give only positive reviews.
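For what it's worth, the filter described above is simple enough to sketch in a few lines. This is a hypothetical toy version (real systems such as HotCRP use more signals, and the three-year window here is an assumption), but it shows why a cabal that changed institutions long ago and never co-authors slips straight through:

```python
from datetime import date

COI_WINDOW_YEARS = 3  # assumed lookback window

def has_conflict(reviewer, author, coauthorships, today=None):
    """reviewer/author: dicts with 'name' and 'email'.
    coauthorships: maps a frozenset of two names to the year
    those two people last co-authored a paper."""
    today = today or date.today()
    # Rule 1: same institutional email domain.
    if reviewer["email"].split("@")[-1] == author["email"].split("@")[-1]:
        return True
    # Rule 2: co-authored a paper within the lookback window.
    year = coauthorships.get(frozenset([reviewer["name"], author["name"]]))
    return year is not None and today.year - year <= COI_WINDOW_YEARS

# A colluding pair that moved institutions years ago and never
# co-authors passes both checks: the filter sees no conflict.
r = {"name": "R", "email": "r@uni-a.edu"}
a = {"name": "A", "email": "a@uni-b.edu"}
print(has_conflict(r, a, {}))  # False
```

The point is not the code itself but that both rules are mechanical, so anyone who knows them can arrange their publication record around them.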
Most of the time when an author later says (formally through the review system, or informally in casual conversation) that "I know it's this reviewer who asked me to cite their own paper" they are wrong. Their main evidence is that the reviewer asked them to cite two papers that include the same author.
But of course I can't tell them otherwise because of confidentiality, so they keep on believing that, perpetuating the myth. Perhaps someone can aggregate some statistics on this, but I genuinely believe that reviewers suggest their own papers only about 20% of the time, and when they do, it's because their paper was highly relevant.
The other aspect is that there are some black sheep who will ask you to cite a whole bunch of barely relevant papers (often >5), all from the same group. With those you can see very clearly that they are abusing the system. I suspect these instances cloud people's judgement on the first aspect, so they interpret every request to cite as fraudulent.
As general advice: if you suspect reviewers are trying to push their citation counts up, a note to the editor (or the PC, if it's a conference) will typically help, because most don't look favourably on this.
E.g.: ask the authors to cite two papers by another researcher (mentioned elsewhere in the thread as making it easy to guess who the reviewer is), ask them to compare more thoroughly to that researcher's results, use a writing style typical of that researcher's country of origin, etc.
It's especially easy if you are familiar with works from the other researchers for instance because you also reviewed their work.
Of course, this is hard to do with cell cultures and such, and in the case of large databases or compute-intensive tasks it's not quite feasible. But a surprising amount of what goes on in CS and neighbouring disciplines can work that way once you work past the reticence of individual established authors and set it as a goal for your (sub-)field.
Really? When I was doing my PhD from 2008-2012, double-blind conferences on computer systems and networking usually revealed the authors.
Although double blindness is not perfect:
Double-Blind Reviewing — More Placebo Than Miracle:
You don't know the reviewers, but the reviewers often know who the authors are.
> In robotics they tend to be single blind.
I just wanted to point out that this does vary a little bit. RSS was double blind this year but ICRA was single blind last I checked.
> I just wanted to point out that this does vary a little bit. RSS was double blind this year but ICRA was single blind last I checked.
It has been this way for a few years now. RSS & CoRL are double blind, ICRA and IROS are single-blind. I can't think of the last time I submitted to an ML, AI, NLP, or vision conference that wasn't double blind. It's sort of a strange robotics thing :)
Reviewers are encouraged not to read anything that might be such a paper, though, for whatever that's worth.
tbh, it's really not, this is extremely rare (although increasing).
Academia in these labs is probably the closest you can get to war zones without actual felonies.
This doesn't just happen in CS, it happens everywhere. Academia doesn't progress because it's filled to the brim with vile creatures who are not in it for science or discovery but personal myopic pride.
Posting here instead of a new submission
This is one of the fields that you can easily leave academia for industry or even switch to another academic institution with ease if things start to go bad. Nobody but your own advisor will realistically care about a retracted paper (especially if you do it before publication). And if your advisor does hold it against you, that’s not an advisor you want anyway.
Please, please, please, if you ever find yourself in this situation, just walk away. You are being paid less than a Starbucks Barista to stuff a tenure-track professor’s portfolio with rushed research nobody will realistically care about in 3 years (if it was even relevant to begin with).
A PhD will teach you how to research. 99% of the research that comes out of that process will be useless, incremental crap. Issue retractions, miss deadlines, whatever. The stakes in the CS academic game are so small a complete come-apart as a PhD student can easily be turned into an extended masters degree on a resume.
I think that quote comes partially from the perspective of someone who doesn't recognize the nature of the stakes because they're different, not because they're smaller - if it was really intended as much more than a snarky critique of academia.
It's not as if everything outside of academia somehow involves much higher stakes. You could basically dismiss the whole of Google or Facebook as being small stakes, following similar logic.
In industry, a breakthrough correctly capitalized on can mint you millions of dollars. If you start a company, it could be tens to hundreds of millions. You then are not beholden to an institution or grants and can do what you want.
Just because you have a reputation and livelihood doing something does not mean the stakes are big in the general sense. Otherwise every job would be high stakes for everyone and the term would be meaningless.
The quote came from a professor at Columbia University.
2) Trusted sources with both clout and insider knowledge will generally advise you to get out with your hide intact, and let sleeping dogs lie. That is the strategy I'd also advise for anyone who is not independently wealthy with a powerful network of allies.
3) Even an indication that you /might/ blow the whistle is enough to trigger blow-back. Be discreet. In particular, don't use university IT equipment for either personal use or anything related to the situation. If things blow up, the university controls your data. That can give it an incredible information asymmetry, as well as leverage.
... and be aware that things will get better. At the time, it seems like the end of the world. A few years later, it's old history.
These things won't change without systemic change. Unless universities adopt transparency measures, accountability measures, compensation limits, etc., we'll continue to see corrupt crooks in the admin. And universities shouldn't be able to use NDAs, non-disparage, etc. agreements to cover this stuff up. Simply no.
It's like standing up to a police officer. If you do it, you might get your head blown off. If we change the system, you might not need to worry about getting your head blown off.
I do understand the pressure this person was feeling; unfortunately, they didn't see a way out.
It's been my experience that tenure provides something similar in academia. As a lowly grad student, even in the most outrageous of cases, you aren't likely to succeed in bringing these issues to light. It would take tremendous determination and there's a very good chance you will derail your own academic career.
In most cases you have two options when you run into something like this as a Ph.D. student:
A) Stick it out, get the degree.
Unfortunately I don't see folks within the system acknowledging (at least not openly) that the system is broken without extremely tragic events such as this one taking place.
As long as the department is sympathetic to the student's complaint, it will accommodate the switch if it can.
Furthermore, there is increasing attention to providing "Faculty Mentors" -- people of faculty rank who are emphatically not the student's advisor, tasked in that capacity with advocating for the student's needs alone. Look within the University for other outlets for your concern.
You're going to be working very closely with your advisor and relying on them for guidance and networking after graduation. If the relationship is toxic and the only options are A or B, think real hard about the upside of B before selecting A.
> (May 22) Withdrawing the paper will cause a big impact to my mentor (Tao Li)
> He asked me that I have to finish it
> (May 27, friend) How is the talking with your mentor?
> he refused to withdraw my paper resolutely
> (May 28) He push me to fake, if I can't publish the paper before deadline
> (June 6) I'm done communicating with Tao Li
> The result is he will never withdraw the paper
> I had a fight with him. And the police almost came.
> He refused to withdraw the paper resolutely.
> My mentor's words are: if I destroy his reputation, he will kill me.
> He said, this is his bottom line.
In academia, you're absolutely right. But outside, sorry, no.
I can perfectly understand how a PhD student under an abusive advisor would feel trapped. And I'm gonna sound racist, but I've seen the kind of toxicity there is among Chinese researchers in Western universities, and that's probably making this kind of situation even worse.
But I've worked in the industry, and then in academia (as technical staff, never researcher), and now I'm back in the industry, in a team where more than half the people have a PhD (or better).
Outside academia, people either don't give a damn, or know very well how fucked up it can be. Unless you're extremely unlucky and the hiring manager is somehow in the close network of your advisor, nobody will bat an eye if you, as a job candidate, say "academia was not for me, I dropped out of my PhD."
So please, if you read this and feel like you have no options, don't believe that.
It takes a long time for some students to understand and trust that I desperately want them to call me out when I'm wrong. Research is a partnership -- even if you don't know much, it is really essential that you ask probing questions and make sure that everyone in the group is standing on a solid foundation. If we are, we'll be able to answer your question immediately and you'll learn something. If we're not, then all of us are about to learn something important.
To second hocuspocus -- yes, there are always options outside of academia. As a side benefit, they'll probably pay better and will let you interact with more professionals more quickly than inside academia.
Now consider that there is nothing outside.
Because that's the mindset of PhD students, and more often than not it is also true.
> Only one e-mail response is received. That PC member said the first author of a paper has no right to withdraw an article due to methodology concerns and that if an advisor says publish, they should just publish without complaint.
Are you kidding me? I have a publication in MICRO with a very famous institution in the US (top 20).
You know how my professor published his paper in MICRO? Relationship! Relationship! Relationship!
As a graduate student coming from a third-world country, I totally lost my hope in the system when I saw that a venue as famous as MICRO is only based on relationships.
Don't judge me; who can I report it to? I am a graduate student, which means losing my supervisor means losing my visa. I am even posting this via an anonymous account (that's how scared I am).
With that said, my PIs didn't assist me at all in preparing, submitting, or defending my research, except to inform me of what to prepare for.
I was really pleased with that whole experience because it felt like there were very distinctly not gatekeepers in that field. It could very easily be that there are not really any professional stakes in the field I studied in, so maybe that makes a difference.
My team was all researchers, not tenured faculty, and we funded ourselves with government grants.
With all that said, it's really helpful to get a different point of view - thanks for sharing this, and sorry for the trouble you experienced, nobody should be put in such an awkward position.
And the only reason I left my home country was to pursue science. (Science is what makes my love beautiful.)
But I am afraid my next supervisor may be the same (not to mention that if I leave midway, my current supervisor will not give me a letter of recommendation).
You can imagine how much of a shock it is to my belief system to come from literally the other side of the world in hope of doing genuine research, and find out it is based on lies.
Choose your advisor, not your area of focus.
Your advisor single-handedly decides how smoothly your PhD will go, and once you have your PhD, you can decide where your career goes.
Obviously, area of focus does determine a bit of your future, but within that area, optimize for a helpful advisor.
Also, I am sure your university has resources, and ways you could report this behavior, but it's often very difficult to unseat/challenge university faculty, so I totally understand an unwillingness to act.
I agree with the other commenter: finish what you started, get out, and focus on doing good work.
Worked on my master's dissertation with my initial supervisor. He went off to try and find funding for a year or two to get me to come back for PhD.
Turned up, he passed me an ACM magazine article and asked me to have a look and see if anything stuck. Got hooked. He knows I like weird and trippy stuff ;)
I'm in a bit of a rubbish situation as he left his position here for greener pastures. A totally fair decision on his part, from what I've gathered of the back story.
Now supervised by someone else and it's just not the same. Have to go over old ground quite often.
So yeah, choose your supervisor wisely. If they "get you", then they'll get what you're interested in. Then they'll be able to work with you to find an interesting area to study.
I acknowledge that there is a problem: there is a percentage of professors who are abusive and dishonest. The problem is, as in many other areas, those guys (and it is mostly men) often rise to the top. The other issue is that the way academia is set up (and there are some good reasons for it, e.g. academic freedom), it's very difficult to detect. You almost always only hear rumors, and it's very difficult to act on the rumors, especially if nobody wants to come forward (which is a problem) and the rumors are from different institutions.
Another aspect to this is that there will often be some conflict/disagreement between a PhD student and supervisor. This can be over when to finish, or when, where, and what to publish. That does not always mean there is abuse of power; mostly it's professional disagreement, similar to disagreements between an engineer and their manager.
What's truly sad is that people will often push to be taken on by the big-fish professors, when it's actually often the worst choice they have.
So don't get off track - changing schools/advisers/topics can be very disruptive and you can wind up becoming a tenured ("ten-yeared") graduate student. I was forced to change advisor due to my first one leaving and it took me from 'on track to get out quickly' to 'slowest guy in my year to finish'.
I am a Full Professor. Before we submit a paper, I do my best to make sure our code does what we say it does. We make sure we understand the results as a second check on our implementation. Then we hope we caught all the important bugs that might invalidate our results. Now to be clear, we are not building industrial grade systems and so new benchmarks might expose bugs we don't know about. I am pretty sure that we are not the only group that does this in the field of Programming Languages & Compilers.
This indeed is under your control. The system, faulty as it may be, is beyond your power, at least at present.
Changing supervisors is possible, but probably not a good idea.
I recommend getting out as soon as reasonable, then get a real job, do science in your free time or as part of a real job. Better work/life balance, higher self-esteem, putting up with and fixing academia corruption is not your problem anymore. Academia does not have exclusive rights on doing science.
Dealing with these problems is hard, switching advisors is difficult, but, I feel sticking with a bad advisor is much more difficult long term. Hopefully you can figure something out. I do not envy the position you are in.
It's kind of sad, but I think even good science needs help to be seen.
It was a totally wishful story glued together with beautiful sentences, published via a serious relationship at MICRO.
There are sentences in that paper which neither I nor my professor understand. It was written that way on purpose!
1) Write in a way that nobody understands,
2) use big words, (put them in beautiful sentences, so nobody would have any doubts, particularly in introduction and beginning of the paper and in conclusion).
3) a relationship inside MICRO.
Wow you have published a paper in MICRO now.
Today, it seems that using unnecessarily complex language is required to make work seem more impressive than it really is. Perhaps because much of today's published work is not that significant. I've been asked to rephrase perfectly clear and simple sentences solely to use fancier language and currently in vogue vernacular. None of that aids clarity or understanding.
Ultimately, peer review is only as good as your peers. If they won't call out bullshit, then bullshit gets published.
As a PhD student, this disgusts me. I've witnessed my share of awful, exploitative lab situations, but this takes the cake. It seems that most of the time professors only receive soft punishments (e.g. people in the department warning prospective students to think twice about joining a lab) unless the incident draws enough bad press.
This is a really shocking threat: he's saying that his school (the University of Florida) will take legal action against people making good-faith allegations of academic misconduct against him, because they have unspecified "ulterior motives."
How is this attempt to silence people an appropriate response to the death of one of his own postdocs? At the very least, the University needs to say whether it finds these threats acceptable and whether it is actually considering legal action.
Most people back down very fast.
Perhaps that’s no longer sustainable, but what is the solution? Many of the problems with modern-day academic research can be traced back to increased scale and competition, neither of which is going away.
Open Science: There is a growing movement by ECRs to publish in journals that are open access or at least offer the option to do so at slightly extra cost. This helps to dismantle the centralized consortia of publishing groups that have become too powerful.
Open review: There are journals that allow the reviewers' and authors' comments to be published alongside the paper. This ensures that reviewers are reasonable in their conduct. See eLife for research in the natural sciences: once your paper is under review, it is published no matter what, and the reviewers can state that their issues were not addressed if they feel so, which gets published alongside the paper's final version. Similarly, in the computer science community there are conferences that follow open review policies as well.
Competition can be replaced with collaborations if the Group Leaders are open minded and are "raised" in such an environment. The next generation of researchers are today's graduate students.
Publish or Perish: This archaic practice can be suppressed if universities hire by looking at CVs where they can only see the applicants' papers, not the journals where they were published. The h-index and traditional metrics can be replaced by altmetrics. Hiring committees need to do the extra work to go through their applicants' research and not be biased by the journals in which they publish.
These are only a few of the things being done. See the Open Science movement and its variants to explore more in this direction.
The root cause of the situation is that there is not enough money in the system, and, compared to before, more of the money is distributed through competitive grant processes. This results in a couple of things.
1. The cutoff (reviewer mark) where grants get funded or not is completely arbitrary, because it is in the flat part of an arctan- or logit-type curve, so small differences or luck can determine whether you are able to do your research.
2. Competition therefore is fierce.
3. To balance the odds you have to write a lot of applications (academics now spend most of their time either writing grants or administering their grants, not doing research.)
4. It encourages incremental research, because non-incremental research implies high risk, but no one can afford the risk of failing, because a significant reduction in publications reduces your chances of getting grants in the future (see 1). Once you get a gap in your funding, it's almost impossible to get back.
5. This disadvantages women who want to have children.
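Point 1 can be illustrated with a toy simulation (all numbers are invented): score each proposal as true merit plus reviewer noise, fund the top 20%, then simply re-run the reviewing and see how much the funded set changes.

```python
import random

# Toy model of the grant cutoff: 200 proposals, top 20% funded.
# All parameters here are made up for illustration.
random.seed(0)
N, FUNDED = 200, 40

quality = [random.gauss(0, 1) for _ in range(N)]   # "true" merit

def review(q):
    # One reviewing round: true merit plus reviewer noise.
    return q + random.gauss(0, 0.7)

# Rank the same proposals in two independent noisy reviewing rounds.
round1 = sorted(range(N), key=lambda i: review(quality[i]), reverse=True)
round2 = sorted(range(N), key=lambda i: review(quality[i]), reverse=True)

funded1, funded2 = set(round1[:FUNDED]), set(round2[:FUNDED])
overlap = len(funded1 & funded2) / FUNDED
print(f"overlap between two noisy reviewing rounds: {overlap:.0%}")
```

With noise of this rough magnitude, a sizeable fraction of the funded proposals would not survive a re-review: near the cutoff the marks are so close that the outcome is largely luck, which is exactly the arbitrariness described above.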
Established elder scientists with little bias and little incentive to corrupt the country's education/science system should be in charge, not administrative pencil pushers. Maybe hire some outsiders and set up some kind of independent commission of renowned scientists and budget planning/audit people to maintain oversight of the supported science groups.
But lack of money isn't really part of the problem, there is lot of money there and it often goes to waste. Putting in more isn't the solution - it would just amplify the current situation.
The fact there is competition for money isn't a bad thing. In theory it keeps people engaged and working efficiently. The problem with the current competition is that people are judged by their willingness to play ball, by flawed citation metrics, publicity, imaginary department points and relationships. Not by their originality and weight of their scientific contribution.
2) Ban universities which take federal grants, federal student aid, 501(c)3 status, etc. from using NDA/non-disparage agreements.
3) Put in FOIA/public records-style transparency measures.
4) Open science / open data. Publications should be open too, as should source code. Anyone should be able to replicate scientific research unless it requires $$$ equipment or e.g. builds on personally-identifiable information.
5) Democratic governance processes within universities and salary caps (it doesn't get at this stuff, but it does at a lot of embezzlement).
6) Limit number of grad students per advisor.
Basically, let enough sunshine in that the bad stuff can't stick around.
I think the problem goes a bit beyond that, in that the notion of what constitutes acting honorably has changed.
Science did fine without it until ~40 years ago, and it can do so again. I hope.
Required reproduction of the result could be a step forward, but only for cheap research.
As I understand it, modern peer review came about as a "crowd sourcing" solution to the hugely expanding number of articles to publish, in ever more specialized fields.
The journals couldn't keep enough relevant expertise around, so letting the scientists review each other was the simple fix.
Before scientists started reviewing each other, who at the journal was judging the work of scientists to be published? Scientists?
The point is that they did not have a conflict of interest.
In 2020 peer review, people are often asked to review the work of their rivals. So they're incentivized to (1) say that it's bad and demand many changes, and/or (2) steal their ideas.
Peer review in my field is a random process. It's not corrupt; it's just that reviewers give papers a 30 second skim. I've had one paper really reviewed in my whole career. What gets in and out is just random. NIPS did a nice study on this many years back.
On the flip side, academic discourse gets delayed. People aim for Science/Nature, and then down the line. Journals / conferences aim for the prestige of high reject rates. Research goes out sometimes years too late.
Peer review gives the impression of credibility, without the reality. It should be gone. Tiers of journals should be gone too.
Accounts of how groups of inter-connected high-profile researchers game the review system for their own advantage are deeply concerning.
I was working on a paper with a professor. We sent the paper for review, and one piece of anonymous feedback from one of the reviewers was something like "these (a couple of papers related to our subject) would also make good annotations". My professor said, "this person is probably the author of those papers and is asking for annotations in exchange for a good review". I was quite shocked at how casual he was when talking about this.
Precisely how is this ratio determined?
Given the unreproducible results and plagiarism that we've learned about in recent times, I believe there is a lot more fraud than can be accounted for by "the dishonest 0.01%".
* Grad students who will never make tenure.
* Lifelong adjuncts / post-docs who are staying there because they won't play the game.
* Older faculty from a different era.
It's enough that at this point, I find all research from my alma mater suspect unless I've personally reviewed it or know the PI personally. But there's a pretty large honest portion too.
Then you pick numbers that justify your position.
Then you get your friends to vouch for your numbers that they helped you come to even though they can't admit that because then they can't independently vouch for them.
ISCA'20 should really have been cancelled until this was resolved.
See a timeline of our attempts to get anyone at ACM/IEEE to care: http://pbzcnepu.net/isca/timeline.html
They can be easily found in grant webpages, eg https://www.nsf.gov/awardsearch/showAward?AWD_ID=1900713
As someone who received both their B.S. and M.S. at UIUC, I'm honestly disgusted at how various professors and administrators have handled certain incidents that my friends have been part of.
It is easier to fight things from the outside, as the internal higher-ups have less power to mess up your life.
It's interesting you mention UIUC, as traditionally that was the source of a lot of suspect papers. It was a running joke about the "coincidence" of how many people with UIUC connections managed to get papers into ISCA each year.
I know there are a lot of solid professors at UIUC, like Josep Torrellas, Sarita Adve (C++ consistency models), and Vikram Adve (the adviser of Chris Lattner, creator of LLVM). This is really bothersome and I hope it's not one of those professors.
Both use modified "cycle-accurate" simulators for the results. For now let's ignore all the issues with the accuracy of these simulators.
Let's validate how "solid" the work is. Drop an e-mail to Torrellas and ask for the code they used so you can see if you can reproduce their work. Hopefully things have changed and they'll send you the code but in my experience they'll just say no.
So they got two papers in that are unverifiable, and none of the reviewers ever saw the code involved. This bothers some people, but it's not unusual at ISCA.
2. i dont understand how you can pick on "cycle accurate" simulation when every simulator in our community has problems. at least in gem5 (one of torellas isca2020 paper), we as reviewers can look at the code. how about the famous "in house x86 pin based simulator"? most pin based simulators performance numbers should rightfully be joke, but we use them anyway, because we aren't going to rewrite them.
3. at the end of the day, most of our work is unverifiable, because we make so many approximations anyway. one young faculty told me "we just need to see the idea and determine if it makes sense". i just do not know if this is the right thing to do, or if its a lie we tell ourselves.
There are a lot of papers in ISCA that are based on cycle-accurate simulators. It has been like that since forever. How else would you evaluate new non-existent architectures that are bleeding-edge? FPGA? Most can't even afford that. Not to mention it's just not possible in a lot of cases. I agree with you that some work should be verifiable but your accusation is weak for this part. Maybe the community can push for verifiable work before getting published in ISCA since it is such a prestigious conference.
If you don't want papers based on cycle-accurate simulators, you'd have to eliminate a large body of work only possible with this approach. A lot of techniques in modern processors have seen their start in processor/system simulators, and most of them are probably not cycle-accurate.
yes. Most "cycle-accurate" results are garbage. Do you show error bars on your results? Can you? Did you run on a variety of independently implemented simulators and show the results on all of them? Did you run the full reference inputs to SPEC all the way through? Why not?
The answer seems to be that it would be hard. But guess what, science is hard. Try complaining to a biologist sometime about your architecture paper that took so long to write, where you gathered all the results 2 weeks before the paper deadline.
It's fine if you come up with a new idea and run some simple proof-of-concept runs to show it might have merit, but don't pretend the results from an academic simulator hacked together by a sleep-deprived grad student have any real world merit.
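The error bars asked for above are cheap to produce. A minimal sketch (the IPC numbers are invented; any per-run metric works the same way): report a mean and a normal-approximation 95% confidence interval over independent runs, e.g. different seeds or inputs.

```python
import statistics, math

def mean_ci95(samples):
    """Mean and a normal-approximation 95% confidence interval."""
    m = statistics.mean(samples)
    s = statistics.stdev(samples)           # sample standard deviation
    half = 1.96 * s / math.sqrt(len(samples))
    return m, (m - half, m + half)

# Hypothetical IPC measurements from five independent simulator runs.
ipc_runs = [1.31, 1.27, 1.35, 1.29, 1.33]
m, (lo, hi) = mean_ci95(ipc_runs)
print(f"IPC = {m:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

If the claimed improvement over the baseline falls inside the interval's width, the paper is reporting noise; that check takes a handful of extra runs, not a biology-scale budget.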
of course, nothing surprises me.
The reason people have latched on to the current set of allegations is there seems to be actual concrete proof of misconduct that can be acted on.
What is this insinuation based on? Do you have data on how many papers by "people with the right connections" are rejected, for example?
another anecdote: when first trying to raise awareness of the issue and mentioning it to members of the community, I never got the response "wow, how could this happen in computer architecture?" rather, the response tended to be "wow, I can't believe it finally got bad enough that a student died"
i'm not sure this data is useful, because we'd disagree on "people with the right connections".
> There is a chat group of a few dozen authors who in subsets work on common topics and carefully ensure not to co-author any papers with each other so as to keep out of each other’s conflict lists (to the extent that even if there is collaboration they voluntarily give up authorship on one paper to prevent conflicts on many future papers).
For example, people almost never relinquish authorship credit.
If no one is going to publish names, nothing is going to happen.
I've seen fraud at IEEE ICDCS '98, but that conference was a mess, and smaller IEEE regional conferences are sewers; beyond that, nothing especially improper.
I hope I'm missing something.
Quote: "HotCRP does not show you your own submission even if you are a PC member, so how did the de-anonymized reviews and PC comments for Huixiang’s paper end up on his laptop? Insider help seems likely."
There is a comment from Eddie Kohler on Twitter which implies this was not a leaky endpoint problem. You can see this thread for more details: https://twitter.com/xexd/status/1222659612857401344?s=20
In CS these academic conferences are held in many different locations, and many (most?) top researchers are not on Twitter.
We also, in machine learning, never truly examine results for basic statistical significance. It’s often rather “I got this number higher”.
I struggle to have even basic trust of academic CS results, unless it’s backed by a major private company research (Google research) or academics I know and trust well.
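On the "I got this number higher" point: even a crude permutation test over per-seed results would catch many spurious claims. A toy sketch with hypothetical accuracy numbers (not from any real paper):

```python
import random

def permutation_test(a, b, trials=10000, seed=0):
    """Two-sided permutation test on the difference of means.

    Returns an approximate p-value for the null hypothesis that
    the two samples come from the same distribution.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / trials

# Hypothetical accuracies over 5 random seeds for two models.
baseline = [0.712, 0.708, 0.715, 0.709, 0.711]
proposed = [0.714, 0.716, 0.710, 0.713, 0.715]
p = permutation_test(baseline, proposed)
print(f"p ~= {p:.3f}")  # a higher mean alone says little without this
```

A few seeds and a dozen lines of stdlib code; there is no tooling excuse for skipping it.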
Science operates as a priesthood more than anything. Universities originally emerged out of schools for religious instruction, to train Catholic clergy, and I'm not sure how far we have come from there. There is the same two-faced rapprochement between the highest ideals and the lowest behaviours. The high priests just wear jeans instead.
You can see the efficacy of modern science by the real and impactful discoveries in machine learning and related fields made over the last 10-15 years.
You'd expect the good ones to be pro-reform, but there apparently isn't enough pro-reform in academia to have reform.
With that, less and less good will happen, and more and more harm.
- grad student stipends are negligently low, making it difficult for students to complete their PhDs.
- many journals in CS are not open access despite the authors and reviewers not being compensated for their time. I think pre-print servers and the ability to publish work on your own website has really improved the state of CS research over the past 5 years. In the past, the official policies of many top conferences stated that a paper would not appear in the proceedings if it had been published to your website or a pre-print server. This policy has now been dropped (or is not being enforced) by almost all top CS conferences.
- it can be difficult to resolve a conflict with your advisor, especially if they have tenure. Tenured professors enjoy significant faculty protection (though maybe not to the same degree as police). Often the most productive avenue for a student in this case is to quietly switch advisors. This can lead to entrenchment of toxic faculty. At my alma mater I can think of a professor who was notorious for having none of their students graduate - they all either left academia or switched advisors.
It’s certainly an imperfect system and I’m sure folks can point out other flaws in the comments. There is a lot of room for reform in these and other areas. For example the lack of transparency in this case makes the entire academic community look bad for no good reason.
However I do think that folks on HN (on average) have an overly negative view of academia, some of which may not be based on an accurate understanding of its good parts. For example peer review, despite its many flaws, turns out to be very valuable. During the COVID-19 pandemic, a lot of spurious science has been published in pre-print form and on Medium, only to then be rebutted and rejected by scientific venues with a robust peer-review process.
My cynicism doesn't come from a lack of understanding of the academy or the good parts of the academy. To the contrary; it's based on:
1) Seeing the upper levels of the academy (which are often incredibly corrupt, although this is well-hidden from students).
2) Seeing enough cases handled to know that cover-ups are much more common than resolutions.
3) Seeing many students get abused, and seeing the impact fake research has on the real world.
The lack of transparency makes academia look bad for perfectly good reason: there are huge amounts of money flowing into pockets corruptly at elite schools, entire academic careers built on academic fraud (which is /not/ the same as legal fraud), and there's enough rot in the system that it's hard to reform. The higher you go, the more rot there is.
Appearances to the contrary, students are university customers, and faculty are employees. The power legally rests with the administration and the board. The rot persists since the people benefiting set the rules. If I'm the president of a school, and a reform would put me in jail, or even make me lose my million dollar salary, why would I listen to student protests? Faculty protests carry a little more weight, but not enough, and enough faculty are corrupt themselves that you can't build a united front for reform.
Regarding peer review, the system is entirely bankrupt. There are no incentives to do a good job. Anecdotally, most reviewers skim papers; few read them closely, and certainly no one checks the math or replicates the results. Less anecdotally, when people have looked at data, the outcome is little better than a die roll (NIPS ran a proper experiment evaluating peer-review consistency).
Regarding grad student stipends, the basic deal is good on paper: freedom to pursue intellectual interests, quality mentorship, and a quality education with basic living expenses covered (on the taxpayer dollar). This seems like exactly the right thing. It breaks when universities break their side of the bargain, with taxpayer dollars and charitable donations going to faculty clubs, yachts, and vacation homes, internal power dynamics forcing graduate students to be treated as cheap labor rather than as students, and competition driving widespread academic fraud.
^ As you know, many areas of science have a reproducibility crisis, including psychology and medicine. I hope attention to it will go a long way towards addressing some weaknesses in peer review in CS. For myself, I try to be thorough when refereeing papers. Anecdotally, I know some busier academics farm out their reviews to their grad students.
I had a largely positive experience in grad school also thanks to my excellent advisor. Although I can say that my ~$25k/yr stipend made it difficult to live on without the additional scholarship money that I earned, and even then...
I agree with you that there absolutely needs to be more transparency in many aspects including complaints about advisors, and investigations into dishonest publishing behaviour like in this case.
And for transparency, I'm not going to trust any research out of MIT until MIT openly gets rid of NDAs and non-disparagement agreements. Just no. Yuck. And that includes the chains of corporate walls MIT has built up to hide stuff.
I won't trust MIT research until I can file a public records request, and get records. I should be able to discover conflicts-of-interest, financial arrangements, and all the other messes MIT gets itself into.
I won't trust MIT research until faculty meetings are subject to open meeting laws, and the public can sit in on them and understand how decisions get made.
I won't trust MIT research until data, source code, and papers are out in the sun for public review by anyone.
I won't trust MIT research until I see bad actors getting disciplined and these things publicly discussed.
Until then, unless I know something at MIT isn't corrupt, I'll just assume it might be. I won't trust Stanford either. Too much stuff is simply fabricated to tell a good story. Heck, I won't trust a lot of the elite academy, since I assume it's just as bad. Things get a little bit better one tier down, but the rot is starting to seep in.
MIT CSAIL just shut down its main mailing list -- thousands of people -- after people started discussing institutional corruption.
Footnote: I had no problem living on my similar stipend. I paid for one bedroom in a four-bedroom apartment, ate cheap food, and didn't spend much otherwise. My university
I'm very curious - why do you stress so much about publishing
in those Elseviers, ACMs and so on?
First: there are so-called "home pages" - instant publication! I believe every university has such infrastructure for its staff, possibly students too...
Not every paper is a groundbreaking discovery, and not every paper should be published in an international, GLOBAL "journal".
But if you really insist, then: say the USA has 50 states, so at least 50 universities... Then, if you want to publish: collect the papers in your department (e.g. in TeX format), say every half year, compile them into a YourUniversityDepartmentYearNumber journal, print 50 copies, glue on stamps, and post them to 50 libraries. Done.
You could also do zines. With their own domain name. Easy.
Good content will bring readers.
Do not forget the ToC, with URLs.
But it's not a recipe for all the diseases you have there.
You need to step back and evaluate.
Also, don't allow yourselves to be morphed into assholes, and in two generations you will be the deans and professors.
And some environments are just viper pits - move out of there!!!
1. I don't want to say anything that gets me in trouble, but HotCRP was set up wrong by the chair for ISCA that year. I know PC members were able to download the whole paper submission list. I personally saw _all_ papers and _all_ reviewer comments on everything I did not submit. It is likely that everyone had this access, because I am not a "big player" in this field. I really doubt I had some special access, since I have no relationship to that year's chair.
I still have all the papers from ISCA '19. I should not have them, but I got them from a normal download (no enumeration or URL guessing).
2. Point 1 does not excuse the leaking; it just made it easier. Instead of calling all your PC friends and asking them to call their PC friends, you just need to call one PC friend not conflicted with your submission. He could see it.
3. Good luck getting Tao Li. Hint: did he fund anyone in the past who was investigating him? Did he fund anyone who reviews his papers?
4. The Tao Li story goes deeper.
a) There's a race issue, because it's easier to manipulate people if your business is being a China -> USA faculty pipeline. And once people go through your pipeline, they are even more likely to keep it going!
b) If you think the paper-pushing is bad, wait until the details of how he actually treated his students come to light. Hint: some things advisors do to students in China are immoral but tolerated. Those things are illegal in the USA.
5. This is bigger than Tao Li. You will never get rid of the cliques, but it's at the point that I don't even want to read the papers anymore. Why sell my integrity for grants, when I could take industry money where at least the product shows the work is not crap?
6. If you want to know some of the worst players, I heard they are upset they are not on the MICRO 2020 reviewer list :p. Of course it's not a completely clean reviewer list - our community is too small!
7. I hope one day we can honestly talk this out. I would like to tell some stories about the pressure :)
> It is easier to fight things from the outside, as the internal higher-ups have less power to me
Why don't you start it here, since you are already anonymous anyway?
It's inevitable that when you do research on a very niche topic, you'll probably know a lot of people who are qualified to review your paper.
How do we prevent this kind of corruption when the communities are so small?
EDIT: Thanks, I skimmed too much.
Verbatim from the article.
In CS conferences there is generally a small pool of reviewers for a paper in a given sub-area (e.g. at a systems conference some people might focus on file systems). These reviewers tend to review each other's papers, since to be a reviewer you also must be an active contributor.
I don’t see how any algorithm might detect this pattern of fraud without taking into account the merit of the papers.
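That said, chairs can at least surface suspicious structure in the bidding graph, even if merit still needs human judgment. A toy sketch, assuming bidding logs are available; all names and paper IDs here are hypothetical:

```python
from collections import defaultdict

# Toy data: reviewer -> set of paper ids they bid on,
# and paper id -> set of authors. All names are made up.
bids = {
    "alice": {"p1", "p3"},
    "bob": {"p2"},
    "carol": {"p2", "p3"},
}
authors = {
    "p1": {"bob"},
    "p2": {"alice"},
    "p3": {"dave"},
}

def reciprocal_bids(bids, authors):
    """Pairs (x, y) where x bid on a paper authored by y and vice versa."""
    bid_on_author = defaultdict(set)  # reviewer -> authors whose papers they bid on
    for reviewer, papers in bids.items():
        for p in papers:
            bid_on_author[reviewer] |= authors.get(p, set())
    pairs = set()
    for x, targets in bid_on_author.items():
        for y in targets:
            if x in bid_on_author.get(y, set()):
                pairs.add(tuple(sorted((x, y))))
    return pairs

print(reciprocal_bids(bids, authors))  # {('alice', 'bob')}
```

In a small sub-area many reciprocal pairs are innocent, so this only ranks pairs for human scrutiny; it cannot judge whether the resulting reviews were honest.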
FWIW, anecdotally, in my interactions with academia the involved parties have mostly been surprisingly respectable - which is more than I can say for industry.
Well, if you think that all Chinese companies are departments of the CCP and, by extension, all Chinese nationals working for Chinese companies (so the vast majority) are working for the CCP, and a Chinese company sponsoring OpenSSL wants to keep crypto weak... how is that anything but Sinophobia?
Maybe you think it's justified Sinophobia because of how evil the CCP is... but most evil things the CCP did can be summarized as "they oppress their people" and they oppress their people because they don't have majority support. That's especially true in internationally operating tech companies, where the employees are the most likely to need access to banned foreign websites just to do their job. Most Chinese people don't work for the CCP, they work around them.
I do believe there is a HN guideline that discourages that sort of behavior.