Hacker News
Potential organized fraud in ACM/IEEE computer architecture conferences (medium.com/tnvijayk)
492 points by bmc7505 on June 8, 2020 | hide | past | favorite | 193 comments

They seem to summarize it in paragraph four, though I am not clear on what this really means. I think it is that certain authors are able to be "top ranked" and get cursory review from colluding peers who may effectively be co-authors on the paper, but removed themselves from the author list to squeeze through review faster and avoid future "conflict of interest" flags.

"There is a chat group of a few dozen authors who in subsets work on common topics and carefully ensure not to co-author any papers with each other so as to keep out of each other’s conflict lists (to the extent that even if there is collaboration they voluntarily give up authorship on one paper to prevent conflicts on many future papers). They exchange papers before submissions and then either bid or get assigned to review each other’s papers by virtue of having expertise on the topic of the papers. They give high scores to the papers. If a review raises any technical issues in PC discussions, the usual response is something to the effect that despite the issues they still remain positive; or if a review questions the novelty claims, they point to some minor detail deep in the paper and say that they find that detail to be novel even though the paper itself does not elevate that detail to claim novelty. Our process is not set up to combat such collusion."

Also see the link in a sibling comment for what seems to have kicked this all off: https://news.ycombinator.com/item?id=23460336

Peer review at CS conferences is normally anonymous and double-blind: reviewers and authors are hidden from each other. However, the conference organizers themselves know who the reviewers and authors are, and the conference management websites are set up to avoid assigning reviewers to papers where they have a conflict of interest with an author.

The conflict check is usually something like: anyone whose email address has the same domain as yours, plus anyone you have co-authored any paper with, both going back a few years. If you are ethical, you will report any additional conflicts that this doesn't catch.
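The filter described above amounts to two simple checks. A rough sketch (the data model and all names here are hypothetical; real conference systems such as HotCRP track richer data, but the two checks are of this kind):

```python
# Hypothetical sketch of the automated conflict-of-interest filter
# described above: same email domain, or a recent co-authorship.

def email_domain(email: str) -> str:
    """Return the lowercased domain part of an email address."""
    return email.rsplit("@", 1)[-1].lower()

def has_conflict(reviewer: dict, author: dict, coauthorships,
                 window_years: int = 3, current_year: int = 2020) -> bool:
    """True if the reviewer should not be assigned this author's paper.

    reviewer/author: dicts with "name" and "email" keys.
    coauthorships: iterable of (name_a, name_b, year) tuples.
    """
    # Check 1: same institution, approximated by the email domain.
    if email_domain(reviewer["email"]) == email_domain(author["email"]):
        return True
    # Check 2: any co-authored paper within the lookback window.
    pair = {reviewer["name"], author["name"]}
    return any({a, b} == pair and current_year - year <= window_years
               for a, b, year in coauthorships)
```

Note that a cabal of the sort alleged here evades both checks by construction: its members sit at different institutions and deliberately never co-author, so a filter like this returns False for every pair of them.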

So if a cabal wants to make sure they can positively review each other's papers, they have to make sure they don't trip those automated filters. This means they must have been at separate institutions for a while, and must avoid publishing any papers together.

Then, during the review process, they can "bid on" (put themselves forward as a reviewer for) each other's papers and ensure they give only positive reviews.

In small fields, it's sometimes really not hard to guess who the authors are, just given the topic and which papers they cite. Even with double blind.

You can often guess who the reviewers are as well. The most obvious tell is "also cite paper X", which is their own paper 99% of the time.

I don't think this is true from my experience. I've been on several program committees, and have managed a couple as well. Usually if you're above the first-level reviewers (e.g. senior PC, track chair, PC chair), you can see the actual author names and reviewers.

Most of the time when an author later says (formally through the review system, or informally in casual conversation) that "I know it's this reviewer who asked me to cite their own paper" they are wrong. Their main evidence is that the reviewer asked them to cite two papers that include the same author.

But of course I can't tell them otherwise because of confidentiality, so they keep on believing it, perpetuating the myth. Perhaps someone can aggregate some statistics on this, but I genuinely believe that reviewers suggest their own papers only about 20% of the time, and when they do, it's because their paper was highly relevant.

I agree; in my experience the request to add citations is usually because fairly obvious context is missing, and the reviewer wants to make sure the connection/comparison is made. Sometimes it's the reviewer's own papers, but that's selection bias at work - if they didn't work in the area, they probably wouldn't be reviewing it.

I think there are two aspects to this. The first, as you mention, is that reviewers often just ask you to cite relevant papers (and it's not unusual that they come from the same author, because the fields are small).

The other aspect is that there are some black sheep who will ask you to cite a whole bunch of barely relevant papers (often >5), all from the same group. With those you can see very clearly that they are abusing the system. I suspect that these instances cloud people's judgement of the first aspect, so they always interpret a request to cite as fraudulent.

As general advice, if you suspect reviewers are trying to push their citation counts up, a note to the editor (or PC, if a conference) will typically help, because most don't look very favourably on this.

Makes sense. Also, at times the automated anti-plagiarism check systems may list some references based on topical similarity or a certain degree of textual match, not necessarily suggesting that anything was copied. That helps the reviewer suggest such references to the relevant extent.

"Please update the references to include recent literature" sometimes makes it clear, too.

It's even easier to trick the authors into believing you (the reviewer) are someone else.

E.g.: ask the authors to cite two papers by another researcher (mentioned elsewhere in the thread as making it easy to guess who the reviewer is), ask them to compare against that researcher's results more thoroughly, use some writing style typical of that researcher's country of origin, etc.

It's especially easy if you are familiar with the other researcher's work, for instance because you have also reviewed it.

Is a reviewer asking an author to cite their own paper not a massive conflict of interest in itself?

Only if it's irrelevant. If it's relevant, it's a necessary correction.

Who decides whether it’s relevant?

Not even in "small" fields. A lot of the time any new work is just a further development of previous work by the same author. So if you're well read in your field, that "double blind" peer review is not blind at all.

The cure to that (reviewers who are not well read in their field) seems far worse than the disease.

I'm not saying we need a cure. I'm just saying we can all stop pretending that those reviews are "double blind" to the extent currently assumed by many.

The cure to that is a field where independent people can also contribute to the further development of existing work - by making sure that the resources and code used are published along with the paper.

Of course, this is hard to do with cell cultures and such, and in the case of large databases or compute-intensive tasks it's not quite feasible. But a surprising amount of what goes on in CS and neighbouring disciplines can work that way once you work past the reticence of individual established authors and set it as a goal for your (sub-)field.

> peer review at CS conferences is normally anonymous and double-blind to reviewers and authors

Really? When I was doing my PhD from 2008-2012, nominally double-blind conferences on computer systems and networking usually revealed the authors anyway.

Although double blindness is not perfect: https://dl.acm.org/doi/pdf/10.1145/1517480.1517492 (Double-Blind Reviewing — More Placebo Than Miracle Cure?)

The concept of double blind is gone, because papers are out on arXiv the second work on them is done. Within a small research domain, everyone knows what the other person is working on.

You don't know the reviewers, but the reviewers often know who the authors are.

This isn't true at all. In AI, ML, NLP, and computer vision, for example, conferences tend to be double blind. In robotics they tend to be single blind. Most people are not putting their work up on arXiv before publication, so you can't find that out either.

Often those double-blind conferences contain explicit exceptions that let you post on arXiv. Check the Call for Papers from this year's NeurIPS, for example. Obviously you aren't required to put anything on arXiv, but lots of people want to stake their territory and will post either a preprint or an incomplete version of the paper (e.g. with an example missing).

> In robotics they tend to be single blind.

I just wanted to point out that this does vary a little bit. RSS was double blind this year but ICRA was single blind last I checked.

The NeurIPS exemption isn't specific to arXiv: "The existence of non-anonymous preprints (on arXiv, social media, websites, etc.) will not result in rejection" People can post anywhere. Most people don't. I don't check carefully, and as a reviewer I explicitly don't look, but when I do see a preprint it's rarely any different from the submitted version. People do stake out territory, but it's more like they do that with shitty papers that no one wants to accept but they want to "own" some phrase that might be popular in the future. The NLP community is very unhappy about this in particular.

> I just wanted to point out that this does vary a little bit. RSS was double blind this year but ICRA was single blind last I checked.

It has been this way for a few years now. RSS & CoRL are double blind, ICRA and IROS are single-blind. I can't think of the last time I submitted to an ML, AI, NLP, or vision conference that wasn't double blind. It's sort of a strange robotics thing :)

Depends on the author. Some people don't post to arXiv until after they've submitted, or after they've gotten initial reviews back. Others like to post to arXiv proactively to get feedback prior to submitting. Just depends on the person.

Reviewers are encouraged not to read anything that might be such a paper, though, for whatever that's worth.

of course it can’t be avoided completely, but you’re supposed to try not to know who the authors are.

> peer review at CS conferences is normally anonymous and double-blind to reviewers and authors.

tbh, it's really not, this is extremely rare (although increasing).

Extremely sad that this student decided to take their life over this. I have been in a somewhat comparable, though less extreme, situation. I was barely able to stand up and hold my ground, and partly because of that I'm also out of research now. It's not a fun place at all; you spend years on your projects, and professors like these don't take kindly to any objection to what they think is the correct way. This means you truly are jeopardizing your whole future after you have already sunk so many productive years of your life into it.

Academia in these labs is probably the closest you can get to war zones without actual felonies. This doesn't just happen in CS, it happens everywhere. Academia doesn't progress because it's filled to the brim with vile creatures who are not in it for science or discovery but personal myopic pride.

Larger conferences, SIGGRAPH for example, are intensely competitive, and conferences are the primary publishing outlets for CS.

Politics and power plays are everywhere :-(

Looks like the (sad) background story is here : https://medium.com/@huixiangvoice/the-hidden-story-behind-th...

Posting here instead of a new submission

That suicide is so unfortunate. As someone who was a PhD student, I understand the perceived gravity of the situation for him. It’s unfortunate he had nobody to put it in perspective.

This is one of the fields that you can easily leave academia for industry or even switch to another academic institution with ease if things start to go bad. Nobody but your own advisor will realistically care about a retracted paper (especially if you do it before publication). And if your advisor does hold it against you, that’s not an advisor you want anyway.

Please, please, please, if you ever find yourself in this situation, just walk away. You are being paid less than a Starbucks Barista to stuff a tenure-track professor’s portfolio with rushed research nobody will realistically care about in 3 years (if it was even relevant to begin with).

A PhD will teach you how to research. 99% of the research that comes out of that process will be useless, incremental crap. Issue retractions, miss deadlines, whatever. The stakes in the CS academic game are so small a complete come-apart as a PhD student can easily be turned into an extended masters degree on a resume.

"academic politics are so vicious because the stakes are so small"

Except the impact on people's reputation, livelihood, and lives are not really small stakes.

I think that quote comes partially from the perspective of someone who doesn't recognize the nature of the stakes because they're different, not because they're smaller - if it was really intended as much more than a snarky critique of academia.

It's not as if everything outside of academia somehow involves much higher stakes. You could basically dismiss the whole of Google or Facebook as being small stakes, following similar logic.

The stakes just aren’t big in the grand scheme of things for academics, even at the top of the field. Beating someone to a major publication means some temporary fame in your circles and a small hop closer to tenure (if you don’t have it). Grant writing will be a little easier and maybe your students will learn about your methods (unlikely if they are undergrads).

In industry, a breakthrough correctly capitalized on can mint you millions of dollars. If you start a company, it could be tens to hundreds of millions. You then are not beholden to an institution or grants and can do what you want.

Just because you have a reputation and livelihood doing something does not mean the stakes are big in the general sense. Otherwise every job would be high stakes for everyone and the term would be meaningless.

The quote came from a professor at Columbia University.

Mmmm. Is that a reference to Billions or did they lift it from somewhere else?

Google "Sayre's Law"

Holy crap! Those allegations are bad, but the student's suicide makes it much worse.

It's all tragic. If anyone ever finds themselves in a situation like Huixiang Chen, you have integrity that the world needs more of; don't despair. I suggest that your first order of business might be to find trustworthy people who can give good advice, in confidence. Ideally, you'll eventually find someone with clout who can take over fixing the problems, but maybe the first step is simply to get good advice from trustworthy people. In parallel, go talk with a mental health counselor about how you're feeling, since feeling depressed and initially not being able to imagine any good outcome is normal.

1) Crime networks at universities are powerful. If you're at an elite school, whistle-blowing will almost certainly cost you your career, if not more.

2) Trusted sources with both clout and insider knowledge will generally advise you to get out with your hide intact, and let sleeping dogs lie. That is also the strategy I'd advise to anyone who is not independently wealthy and backed by a powerful network of allies.

3) Even an indication that you /might/ blow the whistle is enough to trigger blow-back. Be discreet. In particular, don't use university IT equipment for personal use or for anything related to the situation. If things blow up, the university controls your data. That can give it an incredible information asymmetry, as well as leverage.

... and be aware things will get better. At the time, it seems like the end of the world. A few years later, it's old history.

These things won't change without systemic change. Unless universities adopt transparency measures, accountability measures, compensation limits, etc., we'll continue to see corrupt crooks in the admin. And universities shouldn't be able to use NDAs, non-disparagement agreements, etc. to cover this stuff up. Simply no.

It's like standing up to a police officer. If you do it, you might get your head blown off. If we change the system, you might not need to worry about getting your head blown off.

Wow sounds like that was the tip of the iceberg. Super tragic.

If someone behaves like this mentoring professor did, they aren't a mentor, they are a burden dragging you down. Burn their reputation; your life is worth more than their "research".

I do understand the pressure this person was feeling, unfortunately they didn't see a way out.

You know how people are protesting the structural policing systems in the US in part because the systems seem like they're built to protect police instead of holding them accountable?

It's been my experience that tenure provides something similar in academia. As a lowly grad student, even in the most outrageous of cases, you aren't likely to succeed in bringing these issues to light. It would take tremendous determination and there's a very good chance you will derail your own academic career.

In most cases you have two options when you run into something like this as a Ph.D. student: A) Stick it out, get the degree. B) Bail.

Unfortunately I don't see folks within the system acknowledging (at least not openly) that the system is broken without extremely tragic events such as this one taking place.

There is an option C): Switch advisors.

As long as the department is sympathetic to the student's complaint, it will accommodate the switch if it can.

Furthermore, there is increasing attention to providing "Faculty Mentors" -- people of faculty rank who are emphatically not the student's advisor, tasked in that capacity with advocating for the student's needs alone. Look within the University for other outlets for your concern.

You're going to be working very closely with your advisor and relying on them for guidance and networking after graduation. If the relationship is toxic and the only options are A or B, think real hard about the upside of B before selecting A.

Or D), blow the whistle and get out of academia.

As an international student, there are far more limitations, paperwork, and deadlines involved than just switching advisors. There is also the problem of where this is heading: anger one professor, and their clique will be angry at you too.

What dept are you in where faculty wouldn't support each other? Surely the powerful faculty are usually the ones with the grants coming in, sometimes funding the other faculty (or at least the dept/university).

This is incredibly sad. I'm not familiar with academic research - was the option to withdraw the paper before publication not available?

His mentor (one of those involved in this fraud) refused to withdraw the paper, according to the translated messages with friends from the same link above.

It isn't up to his mentor to withdraw a paper on which he isn't the first author. It wouldn't even require a retraction, as the paper hadn't been published yet, although withdrawing is still frowned upon.

It seems like Huixiang did not feel comfortable overriding his mentor's preference. All quoted remarks below are Huixiang's, except the one explicitly marked "friend."

> (May 22) Withdrawing the paper will cause a big impact to my mentor (Tao Li)

> He asked me that I have to finish it


> (May 27, friend) How is the talking with your mentor?

> he refused to withdraw my paper resolutely

> (May 28) He push me to fake, if I can't publish the paper before deadline

> (June 6) I'm done communicating with Tao Li

> The result is he will never withdraw the paper


> I had a fight with him. And the police almost came.

> He refused to withdraw the paper resolutely.


> My mentor's words are: if I destroy his reputation, he will kill me.

> He said, this is his bottom line.

You have to remember how academia works, both in terms of jobs and funding. His funding depended on his advisor. His future job prospects in academia, and to a large extent outside of academia, depended on his advisor. Because of this extreme multi-year power imbalance - near indentured servitude at the hands of advisors, some of whom are borderline abusive to students - senior authors control the publication process. It's true that in theory any author could have stopped publication, but in practice that doesn't mean anything.

> His future job prospects in academia and to a large extent outside of academia depended on his advisor.

In academia, you're absolutely right. But outside, sorry, no.

I can perfectly understand how a PhD student under an abusive advisor would feel trapped. And I'm going to sound racist, but I've seen the kind of toxicity there is among Chinese researchers in Western universities, and that's probably making this kind of situation even worse.

But I've worked in the industry, and then in academia (as technical staff, never researcher), and now I'm back in the industry, in a team where more than half the people have a PhD (or better).

Outside academia, people either don't give a damn, or know very well how fucked up it can be. Unless you're extremely unlucky and the hiring manager is somehow in the close network of your advisor, nobody will bat an eye if you, as a job candidate, say "academia was not for me, I dropped out of my PhD."

So please, if you read this and feel like you have no options, don't believe that.

I see in the student's words the same deference to authority that I initially see from some students, often Chinese, with whom I have worked.

It takes a long time for some students to understand and trust that I desperately want them to call me out when I'm wrong. Research is a partnership -- even if you don't know much, it is really essential that you ask probing questions and make sure that everyone in the group is standing on a solid foundation. If we are, we'll be able to answer your question immediately and you'll learn something. If we're not, then all of us are about to learn something important.

To second hocuspocus -- yes, there are always options outside of academia. As a side benefit, they'll probably pay better and will let you interact with more professionals more quickly than inside academia.

It's not so simple. Even when I got industry offers, labs asked my advisor about what he thought and now when my students get industry offers, I'm asked what I think. Sometimes this is formal, sometimes it's informal over lunch asking if a former student is suitable for a job. It depends where you are going and who is in charge, but your advisor and their opinion can matter a lot.

If you spend some time talking to past supervisors/advisors, you find that sometimes negative comments are a good sign.

> In academia, you're absolutely right. But outside, sorry, no.

Now consider that there is nothing outside.

Because that's the mindset of PhD students, and more often than not it is also true.

From the link that someone else posted in this thread http://pbzcnepu.net/isca/timeline.html

> Only one e-mail response is received. That PC member said the first author of a paper has no right to withdraw an article due to methodology concerns and that if an advisor says publish, they should just publish without complaint.

Well thanks for that, I'm confused about the relevance of an 'advisor' to a paper from the perspective of the PC or publisher, or how they would even determine who an advisor is. Do IEEE/ACM have rules for this or is this part of the alleged fraud?

While true, you absolutely need your mentor's support. Without it you won't graduate, and you likely won't be able to switch mentors.

It is of course good to have the support of your supervisor, but if the relationship breaks down then you shouldn't feel that your graduation is at risk because of this. I think most universities will have a policy on changing supervisors that favours the student? Whether or not this affects your future career I don't know.

Is there a follow-up to this story? Did the professor get investigated?

I did some digging. It seems the prof is still under investigation but is currently active at his institution.


Are you kidding me? I have a publication in MICRO with a very famous institution in the US (top 20).

You know how my professor published his paper in MICRO? Relationship! Relationship! Relationship!

As a graduate student coming from a third world country, I totally lost my hope in the system when I saw that a venue as famous as MICRO is only based on relationships.

Don’t judge me - who can I report it to? I am a graduate student, which means losing my supervisor means losing my visa. I am even posting this via an anonymous account (that's how scared I am).

That's a huge bummer. When I was doing research in physics, I found it easy to get published, even without having a PhD - though it didn't hurt, I think, to be working at a university with a group that had already published in that venue before.

With that said, my PIs didn't assist me at all in preparing, submitting, or defending my research, except to inform me of what to prepare for.

I was really pleased with that whole experience because it felt like there were distinctly no gatekeepers in that field. It could very easily be that there are not really any professional stakes in the field I studied, so maybe that makes a difference.

My team was all researchers, not tenured faculty, and we funded ourselves with government grants.

With all that said, it's really helpful to get a different point of view - thanks for sharing this, and sorry for the trouble you experienced, nobody should be put in such an awkward position.

I am thinking about applying again from the beginning and starting a Ph.D. all over, since I am in love with science.

And the only reason I left my home country was that I wanted to pursue science. (Science is what makes my love beautiful)

But I am afraid maybe my next supervisor will be the same (let alone that if I leave midway, my current supervisor will not give me a letter of recommendation).

You are right to be afraid. Get your PhD first, then do what you want, and try to do honest research if you still feel like it. It's like that in every field, apparently. I've been doing medical research for 10 years and I have yet to meet an honest professor. But with time I got more independent, and while I still have to put up with professorial bullshit, I can now also conduct honest and independent projects from time to time.

Thank you for those kind words. I really needed it. An honest person to talk to.

You can imagine how much of a shock it is to my belief system to come from literally the other side of the world in the hope of doing genuine research, and it turns out it is based on lies.

I ended up dropping out of graduate school, but the advice I was given before entering was this:

Choose your advisor, not your area of focus.

Your advisor single-handedly decides how smoothly your PhD will go, and once you have your PhD, you can decide where your career goes.

Obviously, area of focus does determine a bit of your future, but within that area, optimize for a helpful advisor.

Also, I am sure your university has resources, and ways you could report this behavior, but it's often very difficult to unseat/challenge university faculty, so I totally understand an unwillingness to act. I agree with the other commenter: finish what you started, get out, and focus on doing good work.

> Choose your advisor, not your area of focus.

Absolutely this.

Worked on my master's dissertation with my initial supervisor. He went off to try and find funding for a year or two to get me to come back for PhD.

Turned up, he passed me an ACM magazine article and asked me to have a look and see if anything stuck. Got hooked. He knows I like weird and trippy stuff ;)

I'm in a bit of a rubbish situation as he left his position here for greener pastures. A totally fair decision on his part, from what I've gathered of the back story.

Now I'm supervised by someone else and it's just not the same. I have to go over old ground quite often.

So yeah, choose your supervisor wisely. If they "get you", then they'll get what you're interested in, and they'll be able to work with you to find an interesting area to study.

I have to comment on this. I strongly disagree with the assertion that all professors are dishonest. In my experience the vast majority of academics are honest to a fault (much more so than in many other industries).

I acknowledge that there is a problem: there is a percentage of professors who are abusive, dishonest... The problem is, as in many other areas, those guys (and it is mostly men) often rise to the top. The other issue is that, the way academia is set up (and there are some good reasons for it, e.g. academic freedom), this is very difficult to detect. You almost always only hear rumors, and it's very difficult to act on rumors, especially if nobody wants to come forward (which is a problem) and the rumors come from different institutions.

Another aspect to this is, that there will often be some conflict/disagreement between PhD student and supervisor. This can be for when to finish, when, where and what to publish... That does not always mean there is abuse of power, it's mostly professional disagreements similar to disagreements between an engineer and their manager.

We must navigate very different fields then, because what I'm seeing cannot be "professional disagreement" mistaken for something else. But even so, this is a sterile debate. What is important is that our academic system selects for bad actors in all fields through the "publish or perish" motto. In my field at least, dedication alone will rarely propel you to a full professorship, because you are competing with those who are dedicated to both research and academic misconduct. No one becomes important by being nice. A second and important downside specific to medicine is that most academics also do clinical work. Now what do you think happens when the people running the clinics are hiding in their offices most of the time to write papers? Lack of supervision of juniors, in addition to the fact that hiding in your office rarely makes you the best clinician.

What's truly sad is that people will often push to be cared for by the big-fish professors when it's actually often the worst choice they have.

Don't do this! Try to do the best research you can. When you make your own way as a researcher, you can learn from this bad experience and conduct yourself with more integrity. Once you have a post-PhD job, what you did during your thesis won't matter, to be honest - 10 years on, people will judge you by your more recent publications.

So don't get off track - changing schools/advisers/topics can be very disruptive and you can wind up becoming a tenured ("ten-yeared") graduate student. I was forced to change advisor due to my first one leaving and it took me from 'on track to get out quickly' to 'slowest guy in my year to finish'.

Find a new advisor. Having a body of dishonest work is problematic.

I am a Full Professor. Before we submit a paper, I do my best to make sure our code does what we say it does. We make sure we understand the results as a second check on our implementation. Then we hope we caught all the important bugs that might invalidate our results. Now to be clear, we are not building industrial grade systems and so new benchmarks might expose bugs we don't know about. I am pretty sure that we are not the only group that does this in the field of Programming Languages & Compilers.

This part I don't understand. You did get it published, thanks to connections and whatever 'broken' reasons. I'd guess you get to research the domain that you want. By all means it helps your progress. So much more reason to complete your research and get it done so you feel no guilt about the quality of the result.

This indeed is under your control. The faulty system, if it is one, is beyond your power, at least where you currently stand.

You need your advisor to graduate and get your first academic or research job. After that, it's all up to you. Get through it and you have many more options.

Changing supervisors is possible, but probably not a good idea.

I wanted to do theoretical science. After a few years at the university, seeing how corrupt the system and the people are, and tired of living in such an environment, I wrapped up after a master's degree. Best decision of my life.

I recommend getting out as soon as is reasonable, then getting a real job and doing science in your free time or as part of that job. Better work/life balance, higher self-esteem, and putting up with and fixing academia's corruption is not your problem anymore. Academia does not have exclusive rights to doing science.

Unfortunately the data science craze has resulted in a lot of these corrupt academic politician types coming into industry, so you definitely don't want to go into "machine learning" if you want to see decency.

You should report it to the SIGARCH executive committee. More information is in this blog post.


You really shouldn't; those people are likely corrupt as well and may get you deported.

Maybe a stupid question, but do you know whether it's really true that it only got published due to his relationships? I can imagine some professors also have an interest in convincing students that they are 100% in their power (or at least that the student needs them). It might well be that you got published on quality and he wasn't really needed, but he lies/makes a bit of a show so that you will add him as co-author or similar.

I saw this at my university too. It's common in many labs (not all) for all the researchers, even master's students who might've just proofread a thesis, to put their names on there.

If you have concrete issues to report, I think it would be best to talk to your department head and probably look for a new advisor. If that's not working, the college.

Dealing with these problems is hard and switching advisors is difficult, but I feel sticking with a bad advisor is much more difficult long term. Hopefully you can figure something out. I do not envy the position you are in.

I feel sorry for you, and I understand it is a very difficult situation. If you speak up and report it with evidence (yes, you should have evidence), it will be really good for the community. And I believe there are good professors who will support you and will want to advise you, because you're as honest as they are.

Are you still a grad student? I already graduated and have the same sentiments as you.

In your opinion was your paper good enough to be in MICRO?

It's kind of sad, but I think even good science needs help to be seen.

Not at all. The idea maybe was good (in the long term, with solid work). But none of our code was working, nor does it work right now for that matter. Nothing works. And I am not saying it doesn't work in the way research code tends to be ugly. I am saying that in the literal sense of the word.

It was a totally wishful story glued together with beautiful sentences, published via a serious relationship in MICRO.

There are sentences in that paper which neither I nor my professor understand. It was written that way!

1) Write in a way that nobody understands,

2) use big words (put them in beautiful sentences, so nobody will have any doubts, particularly in the introduction, the beginning of the paper, and the conclusion),

3) have a relationship inside MICRO.

Wow, you have now published a paper in MICRO.

It does seem the prevailing writing style of academia is a big part of the problem. Papers a lot of the time leave the reader feeling stupid. “Surely there’s wisdom here!?” the reader hopes. Maybe if papers taught the underlying concepts in a clearer, more transparent way, we’d get better, more reproducible science.

This is also a recent phenomenon. If you go back and read papers from 50 to 80 years ago, they are shorter, more concisely written, mostly devoid of confusing language, and a pleasure to read rather than a chore. Some of it also comes from the practice of writing solely in the third person; that wasn't previously common, and authors could freely express their own opinions and thoughts directly without hiding them behind flowery language. The extreme formalism of today's papers has made them sterile and devoid of any human connection.

Today, it seems that using unnecessarily complex language is required to make work seem more impressive than it really is. Perhaps because much of today's published work is not that significant. I've been asked to rephrase perfectly clear and simple sentences solely to use fancier language and currently in vogue vernacular. None of that aids clarity or understanding.

That's really up to the reviewers. I've definitely been told that I need to clarify different parts of my paper in my reviews.

Ultimately, peer review is only as good as your peers. If they won't call out bullshit, then bullshit gets published.

I wonder, if the code was not working, how did you plot the results?

When I published my research it definitely helped that my prof was a co-author. He did help a lot though, but I know many who would just attach their names to things.

I want to point out that, as far as I can tell, the advisor is still active at the university [1] and even presented the questionable paper at the conference [2]. This is pretty disturbing and I haven't found any more info on what the school has done outside some vague internal investigation [3]. Does anybody know if the university is actually conducting a thorough investigation and whether or not it's ongoing?

As a PhD student, this disgusts me. I've witnessed my share of awful, exploitative lab situations but this takes the cake. It seems to me that most of the time, professors only receive soft punishments (e.g. people in the department warning prospective students to think twice about joining a lab) unless the incident draws enough bad press.

[1] http://www.taoli.ece.ufl.edu/

[2] https://twitter.com/VHuixiang/status/1146524212279955458

[3] https://bit.ly/2UsrMsa

There's also hard evidence of misconduct in the conference review software, which directly contradicts the "investigation" from the conference chairs: https://twitter.com/xexd/status/1222671193527861249

i'm a phd student at uf (in cs not ece) and from what i can tell nothing has been done to discipline tao. there were mutterings when his student committed suicide but nothing since then.

Read the alleged response by Tao Li to the original accusations against him: https://medium.com/@huixiangvoice/help-dr-tao-li-is-threaten...

This is a really shocking threat: he's saying that his school (the University of Florida) will take legal action against people making good-faith allegations of academic misconduct against him, because they have unspecified "ulterior motives."

How is this attempt to silence people an appropriate response to the death of one of his own postdocs? At the very least, the University needs to say whether it finds these threats acceptable and whether it is actually considering legal action.

That is completely believable, actually. Legal threats are very effective at silencing whistleblower reports. The right response in these cases is for the faculty (the head of the research group and the Dean) to suffer reputational damage equivalent to the misconduct penalty of the offender: that is, exclusion from research (including supervision and presenting at conferences), retraction of their publications, demotion, or termination. Without this, the old boys' network will have its way.

I can say, from first-hand knowledge, that this is not uncommon. MIT will do this and more against whistleblowers.

Most people back down very fast.

Academia has become tremendously cutthroat but it still operates under the assumption that everyone acts honorably.

Perhaps that’s no longer sustainable, but what is the solution? Many of the problems with modern-day academic research can be traced back to increased scale and competition, neither of which is going away.

There are a lot of possible solutions that the scientific community (especially Early Career Researchers, or ECRs) is working on.

Open Science: There is a growing movement by ECRs to publish in journals that are open access, or that at least offer the option to do so at slightly extra cost. This helps dismantle the centralized consortia of publishing groups that have become too powerful.

Open review: There are journals that allow the reviewers' and authors' comments to be published alongside the paper. This ensures that reviewers are reasonable in their conduct. See eLife for research in the natural sciences [0], where once your paper is under review, it is published no matter what. The reviewers can state that their issues were not addressed if they feel so, and that gets published alongside the paper's final version. Similarly, in the computer science community there are conferences that follow open review policies as well.

Competition can be replaced with collaboration if the Group Leaders are open-minded and are "raised" in such an environment. The next generation of researchers are today's graduate students.

Publish or Perish: This archaic practice can be suppressed if universities hire by looking at applicants' CVs where they can see only the papers, not the journals in which they were published. The h-index and traditional metrics can be replaced by Altmetrics [1]. Hiring committees need to do the extra work of going through the research of their applicants and not be biased by the journals in which they publish.

These are only a few of the things being done. See the Open Science movement's variants [3] to explore more in this direction.

[0] https://reviewer.elifesciences.org/author-guide/journal-poli...

[1] https://www.altmetric.com/about-altmetrics/what-are-altmetri...

[3] https://oa2020.org/

Current U.S./European academia seems unable to change from within. It lives on government money which it itself controls and distributes to those who "play ball"; the others are pushed away. The only way a change can happen here is if academia-independent people control and distribute money to individuals who have shown talent/potential/results in the past. It will take massive outside interference to fix or supplant the corrupt system.

That's not quite how it works, and in my opinion it's not the main problem. Take for example DARPA grants: they are controlled by outside people and I don't think it works. You essentially have a "case worker", often with little understanding of the field, pushing in some direction. Moreover, the amount of admin around these grants is staggering.

The root cause of the situation is that there is not enough money in the system, and compared to the past, more of the money is distributed through competitive grant processes. This results in a couple of things.

1. The cutoff (reviewer mark) at which grants get funded or not is completely arbitrary, because it sits in the flat part of an arctan- or logit-type curve, so small differences or luck can determine whether you are able to do your research.

2. Competition therefore is fierce.

3. To balance the odds you have to write a lot of applications (academics now spend most of their time either writing grants or administering their grants, not doing research).

4. It encourages incremental research, because non-incremental research implies high risk, but no one can afford the risk of failure, because a significant reduction in publications reduces your chances of getting grants in the future (see 1). Once you get a gap in your funding it's almost impossible to get back.

5. This disadvantages women who want to have children.

Yeah more university/grant administration of the current sort is not the right way to go. They have too much power already.

Established elder scientists with little bias and little incentive to corrupt the country's education/science system should be in charge, not administrative pencil pushers. Maybe hire some outsiders and set up some kind of independent commission of renowned scientists and budget-planning/audit people to maintain oversight of the supported science groups.

But lack of money isn't really part of the problem; there is a lot of money there and it often goes to waste. Putting in more isn't the solution; it would just amplify the current situation.

The fact that there is competition for money isn't a bad thing. In theory it keeps people engaged and working efficiently. The problem with the current competition is that people are judged by their willingness to play ball, by flawed citation metrics, publicity, imaginary department points, and relationships, not by their originality and the weight of their scientific contribution.

It does seem like there's sometimes too much inside baseball in grant review/awards. I wonder if there needs to be a notion of a "stakeholder" in research who can measure/advocate for the general market for the research outcomes.

Yes, the public should be the main stakeholder if the public is funding the project. The problem is, it takes an expert to judge the work of another expert. Wise people outside the local academia should represent the public in such things. Maybe one way is to hire and pay well some foreign professors to do adversarial oversight/evaluation of the local science workforce. For example, a renowned German scientist hired and paid by the French government for 5 years to criticize and oversee funding of French scientists close to his area of expertise. The system has to be more adversarial to weed out/defund the bad activities.

1) Reduce competition. Make jobs less desirable, to the point where people might be able to win without cheating. More people at lower salaries would be good here. More research/teaching gets done, and you eliminate subclasses like adjuncts and lifelong postdocs.

2) Ban universities that take federal grants, federal student aid, 501(c)(3) status, etc. from using NDA/non-disparagement agreements.

3) Put in FOIA/public records-style transparency measures.

4) Open science / open data. Publications should be open too, as should source code. Anyone should be able to replicate scientific research unless it requires $$$ equipment or e.g. builds on personally-identifiable information.

5) Democratic governance processes within universities and salary caps (it doesn't get at this stuff, but it does at a lot of embezzlement).

6) Limit number of grad students per advisor.

Basically, let enough sunshine in that the bad stuff can't stick around.

> Academia has become tremendously cutthroat but it still operates under the assumption that everyone acts honorably.

I think the problem goes a bit beyond that, in that the notion of what constitutes acting honorably has changed.

Seems like Peer Review is hopelessly broken. Conflicts of interest galore.

Science did fine without it until ~40 years ago, and it can do so again. I hope.

The solution is still within peer review, I feel. What ethical alternative is there?

Required reproduction of the result could be a step forward, but only for cheap research.

I don't think science was unethical before Peer Review.

As I understand it, modern peer review came about as a "crowd-sourcing" solution to the hugely expanding number of articles to publish, in ever more specialized fields.

The journals couldn't keep enough relevant expertise around, so letting the scientists review each other was the simple fix.

This does not make sense.

Before scientists started reviewing each other, who at the journal was judging the work of scientists to be published? Scientists?

Well, science literate people working at the journal, I assume.

The point is that they did not have a conflict of interest.

In 2020 peer review, people are often asked to review the work of their rivals. So they're incentivized to (1) say that it's bad and demand many changes, and/or (2) steal their ideas.

arxiv and letting good stuff bubble up after-the-fact.

Peer review in my field is a random process. It's not corrupt; it's just that reviewers give papers a 30 second skim. I've had one paper really reviewed in my whole career. What gets in and out is just random. NIPS did a nice study on this many years back.

On the flip side, academic discourse gets delayed. People aim for Science/Nature, and then down the line. Journals / conferences aim for the prestige of high reject rates. Research goes out sometimes years too late.

Peer review gives the impression of credibility, without the reality. It should be gone. Tiers of journals should be gone too.

Peer review is much older than that. It dates back to the 18th century.

Sure, but it didn't become mandatory and institutionalized until fairly recently.

In a time when the main purposes of a conference were academic exchange and moving a scientific field forward, there was little incentive to engage in fraud. When, as is standard practice today, your scientific career hinges on how many papers you have published at top venues, the story changes drastically.

Accounts of how groups of inter-connected high-profile researchers game the review system for their own advantage are deeply concerning.

Looking at the screenshots ... "video recognition in future 5G technology". Is that just straight up word salad? It sounds like the industry horseshit I hear in snippets ... "the convergence of machine learning and IoT". Does the phrase "video recognition in future 5G technology" have _any_ meaning whatsoever?

You are missing "cryptomining lidar focus" .. might add more legitimacy.

I'd imagine it was about object recognition in videos transmitted over a 5G network.

For some reason, Chinese industry has a lot of enthusiasm for computer vision and IoT, but to me those things are worthless junk.

I imagine it means "live" over 5G

This was one of the reasons for me to drop out of graduate school and quit my research assistant job.

I was working on a paper along with a professor. We sent the paper for review, and one piece of anonymous feedback from one of the reviewers was something like "these (a couple of papers related to our subject) would also make good annotations". My professor said, "this person is probably the writer of those papers and asks for annotations in exchange for a good review". I was quite shocked at how casual he was when talking about this.

"Conclusion: 99.99% of us are honest but the dishonest 0.01% can cause serious, repeated damage."

Precisely how is this ratio determined?

Given the unreproducible results and plagiarism that we've learned about in recent times, I believe there is a lot more fraud than can be accounted for by "the dishonest 0.01%."

Maybe the absolute worst are the 0.01% (or, more likely, 1%). But the problem is broader. There are different kinds of corruption: making up word salads and fake results on one hand, engaging in low-effort repetitions with little to no questioning of the prevalent methodologies/theories du jour, or massaging language in grant proposals/reports to get more money. When all these add up, I think 0.01% is more likely the portion of completely honest, hard-working people in academia.

It's more than 0.01% honest/hard-working people. You have:

* Grad students who will never make tenure.

* Lifelong adjuncts / post-docs who are staying there because they won't play the game.

* Older faculty from a different era.

* Etc.

It's enough that at this point, I find all research from my alma mater suspect unless I've personally reviewed it or know the PI personally. But there's a pretty large honest portion too.

I think the same. I mean, it's widely known that academic incentives are completely broken all across the board. And yet somehow those broken incentives are not producing dishonest research at scale? Doesn't add up, really.

You start by thinking: "I feel as if I'm being honest and my intentions are pure, so for the sake of being able to continue doing this, I need to believe that the vast majority are also honest with pure intentions, but I do recognize just how messed up it is and how very easy it would be to abuse."

Then you pick numbers that justify your position.

Then you get your friends to vouch for your numbers that they helped you come to even though they can't admit that because then they can't independently vouch for them.

Some of us made a big effort to get anyone at the IEEE or ACM to care about this ISCA '19 incident. The initial investigation by Torrellas was run mostly by ISCA insiders with conflicts of interest, and mostly seemed to be a stall tactic. They did eventually open a new investigation.

ISCA'20 should really have been cancelled until this was resolved.

See a timeline of our attempts to get anyone at ACM/IEEE to care: http://pbzcnepu.net/isca/timeline.html

I am wondering if the NSF program officers responsible for overseeing the grants should be contacted, as they would be in a position to hold the university and the PIs accountable and/or start their own investigations.

They can easily be found on grant webpages, e.g. https://www.nsf.gov/awardsearch/showAward?AWD_ID=1900713

The program officers were contacted, leading to the OIG being notified on 6 December 2019; supposedly an investigation was opened, but we have not heard anything back since then.

I just want to thank you for your actions and ask if you have any advice on career paths for doing something about this type of academic bullshit. My GF is currently pursuing a PhD in biochem but is very interested in pursuing science policy so that she can do something about all the injustice one witnesses as a graduate student.

As someone who received both their B.S. and M.S. at UIUC, I'm honestly disgusted at how various professors and administrators have handled certain incidents that my friends have been part of.

There's really not much one can do. In comp arch, most of the honest people left the field years ago because they got sick of all this nonsense. Most were happy to let those left behind play their silly peer-review games, but when the student suicide happened it was like a call to action.

It is easier to fight things from the outside, as the internal higher-ups have less power to mess up your life.

It's interesting you mention UIUC, as traditionally that was the source of a lot of suspect papers. It was a running joke about the "coincidence" of how many people with UIUC connections managed to get papers into ISCA each year.

Is this true? Can you expound on this? Which suspect papers are there?

I know there are a lot of solid professors at UIUC, like Josep Torrellas, Sarita Adve (C++ consistency models), and Vikram Adve (the adviser of Chris Lattner, creator of LLVM). This is really bothersome and I hope it's not one of those professors.

well let's take Torrellas. Check out the two papers at ISCA'20 with him as an author.

Both use modified "cycle-accurate" simulators for the results. For now let's ignore all the issues with the accuracy of these simulators.

Let's validate how "solid" the work is. Drop an e-mail to Torrellas and ask for the code they used so you can see if you can reproduce their work. Hopefully things have changed and they'll send you the code but in my experience they'll just say no.

So they got two papers in which are unverifiable, and none of the reviewers ever saw the code involved. This bothers some people, but it's not unusual at ISCA.

1. torrellas' student (now at mit) released code for a spectre defense. yeah, maybe it was to help her job hunt, but at least it proves (to me) that torrellas wasn't just pushing crap in the past. i understand you have issues with torrellas' handling of this incident (as do i), but taking issue with his isca papers seems like an overreach.

2. i don't understand how you can pick on "cycle accurate" simulation when every simulator in our community has problems. at least with gem5 (used in one of torrellas' isca 2020 papers), we as reviewers can look at the code. how about the famous "in-house x86 pin-based simulator"? most pin-based simulators' performance numbers should rightfully be a joke, but we use them anyway, because we aren't going to rewrite them.

3. at the end of the day, most of our work is unverifiable, because we make so many approximations anyway. one young faculty member told me "we just need to see the idea and determine if it makes sense". i just do not know if this is the right thing to do, or if it's a lie we tell ourselves.

> none of the reviewers ever saw the code involved

There are a lot of papers in ISCA that are based on cycle-accurate simulators. It has been like that since forever. How else would you evaluate new non-existent architectures that are bleeding-edge? FPGA? Most can't even afford that. Not to mention it's just not possible in a lot of cases. I agree with you that some work should be verifiable but your accusation is weak for this part. Maybe the community can push for verifiable work before getting published in ISCA since it is such a prestigious conference.

If you don't want papers based on cycle-accurate simulators, you'd have to eliminate a large body of work that is only possible with this approach. A lot of techniques in modern processors got their start in processor/system simulators, and most of those are probably not cycle-accurate.

> If you don't want papers based on cycle-accurate simulators, you'd have to eliminate a large body of work that is only possible with this approach.

yes. Most "cycle-accurate" results are garbage. Do you show error bars on your results? Can you? Did you run on a variety of independently implemented simulators and show the results on all of them? Did you run the full reference inputs to SPEC all the way through? Why not?

The answer seems to be that it would be hard. But guess what, science is hard. Try complaining to a biologist sometime about your architecture paper that took so long to write, where you gathered all the results 2 weeks before the paper deadline.

It's fine if you come up with a new idea and run some simple proof-of-concept runs to show it might have merit, but don't pretend the results from an academic simulator hacked together by a sleep-deprived grad student have any real world merit.
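The comment above asks for error bars and results reproduced across several independently implemented simulators. As a minimal sketch of what that reporting could look like (the simulator names and all numbers here are made up for illustration, not from any real paper):

```python
import statistics

# Hypothetical speedup measurements for one benchmark, taken from three
# independently implemented simulators; inner lists are repeated runs
# (e.g. different random seeds). All values are invented.
speedups = {
    "sim_a": [1.18, 1.22, 1.19],
    "sim_b": [1.05, 1.09, 1.07],
    "sim_c": [1.31, 1.28, 1.30],
}

# Per-simulator mean and sample standard deviation: the "error bars".
for name, runs in speedups.items():
    mean = statistics.mean(runs)
    sd = statistics.stdev(runs)
    print(f"{name}: {mean:.3f} +/- {sd:.3f}")

# Cross-simulator spread: if the range across simulators is large
# relative to the claimed improvement, the result may be an artifact
# of one simulator rather than a property of the technique.
means = [statistics.mean(runs) for runs in speedups.values()]
print(f"across simulators: {statistics.mean(means):.3f} "
      f"(range {min(means):.3f} to {max(means):.3f})")
```

The point of the cross-simulator line is exactly the commenter's argument: a speedup that only shows up on one hacked-together academic simulator tells you more about that simulator than about the proposed architecture.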

omg running ref spec on gem5 fs mode...

can you expand? i know the clique, but at least for the specific work i follow, the work isn't bad recently.

of course, nothing surprises me.

It isn't always that the work is bad; it's just that the review process for getting in has always seemed to be easier for people with the right connections. Not necessarily in an overt conspiratorial way like the recent allegations, either.

The reason people have latched on to the current set of allegations is there seems to be actual concrete proof of misconduct that can be acted on.

> it's just the review process for getting in has always seemed to be easier for people with the right connections

What is this insinuation based on? Do you have data on how many papers by "people with the right connections" are rejected, for example?

it's anecdotal.

another anecdote, when first trying to raise awareness of the issue, when mentioning it to members of the community I never got the response "wow, how could this happen in computer architecture?" rather the response tended to be "wow, I can't believe it finally got bad enough that a student died"

Because these anecdotes are unsubstantiated, they can come across as sour grapes, which I think damages your good cause of exposing the serious misconduct for which there's actual evidence.

i think this is just anecdotal; this person is clearly in the community.

i'm not sure this data is useful, because we'd disagree on "people with the right connections".

This behavior should be particularly unacceptable in computer science journals. One of the main things computer science gives to society is secure systems for protecting and exchanging information, and the fact that the double-blind peer review system for a conference that basically deals with the construction of safe, reliable systems was broken shouldn't be taken lightly.

Come see what's happening in medicine and be really appalled...

I do wonder if medicine can compete with the various conferences in support of social service programs?

    There is a chat group of a few dozen authors who in subsets work
    on common topics and carefully ensure not to co-author any papers
    with each other so as to keep out of each other’s conflict lists
    (to the extent that even if there is collaboration they voluntarily
    give up authorship on one paper to prevent conflicts on many future
    papers)

Is this normal?

Well, no, that's why this publication is a strong assertion of unacceptable fraud.

I have reviewed papers for these types of (but not this) conferences. It would be staggeringly abnormal.

For example, people almost never relinquish authorship credit.

It is worse in the humanities. Many university presses won't dare to publish books on themes that are antithetical to the dominant theories and/or paradigms. Being part of the dominant clique is necessary to publish books there.

"Here is what I heard from an award-winning professor about what happened. The professor has first-hand knowledge of the investigations and has communicated directly with the investigators, but wishes to remain anonymous:"

If no one is going to publish names, nothing is going to happen.

I've seen fraud at IEEE ICDCS '98, but that conference was a mess, and smaller IEEE regional conferences are sewers, but nothing especially improper.

The paper file listing [1] from the PC in question shows a predictable [conference][year]-paper[incrementing-number] file-name pattern. Rather than "insider help" [2], could this be explained by a leaky endpoint/API that got scraped?

I hope I'm missing something.

[1] https://miro.medium.com/max/1400/1*sNC6SuC1v8peYmw6KEcz3g.pn...

[2] Quote: "HotCRP does not show you your own submission even if you are a PC member, so how did the de-anonymized reviews and PC comments for Huixiang’s paper end up in his laptop? An insider help seems likely."

It's possible, but there are only a few reviewing systems in common use, and their permission systems are robust as far as I know. I think a simpler explanation is that someone in the fraud group has a higher-permission role than a first-line reviewer, and so has access to these. This could be a general chair, program committee chair, track chair, or area chair if the conference is big enough, and often even a senior PC member (a metareviewer). All they have to do is perform an export and put it on Dropbox or Google Drive.

It has been confirmed that the reviewing system in question is HotCRP, which is a very battle-tested system that is unlikely to have these issues.

There is a comment from Eddie Kohler on Twitter which implies this was not a leaky endpoint problem. You can see this thread for more details: https://twitter.com/xexd/status/1222659612857401344?s=20

This is really sad for a great community like SIGARCH. They need to clean up; instead, they are sweeping things under the rug. Let's hope other communities can learn from their mistakes.

This is the tip of the iceberg for SIGARCH.

Not directly related, but the "sign in to medium.com with Google" dialog on the top right corner with my account name on it is really freaking me out.

It's an iframe hosted on Google's servers, so Medium does not directly have access to the iframe's contents. But it is still irritating.

It's the other way around that I'm concerned with; Google tracking which Medium posts I read.

Does anyone know which other SIG is implicated? Or where I might be able to find out?

Honestly, that's how I figured it worked anyway. A popularity contest. Even the less significant conferences seem to just be platforms where "influencers" (Twitter-verified personalities in leading roles at popular tech companies with barely any code in their repos) scratch each other's backs and swap followers. It was always my understanding that confs were a place for networking with other people for just that purpose. Maybe it's because I'm on the East Coast and all the serious confs are on the West Coast.

I believe the author of this comment is referring to industry conferences. Please note these are very different from peer-reviewed academic conferences.

In CS these academic conferences are held in many different locations, and many (most?) top researchers are not on Twitter.

Why don’t we require CS research to be reproducible? It’s a fricken computer program. (or should be!)

We also, in machine learning, never truly examine results for basic statistical significance. It’s often rather “I got this number higher”.

I struggle to have even basic trust of academic CS results, unless it’s backed by a major private company research (Google research) or academics I know and trust well.
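For what it's worth, even a minimal check beats eyeballing "this number is higher." Below is a hypothetical sketch with entirely made-up accuracy numbers: a percentile-bootstrap confidence interval on the per-seed score differences between two models. If the interval straddles zero, the claimed improvement is not distinguishable from run-to-run noise.

```python
# Illustrative sketch with made-up accuracies: bootstrap a 95% confidence
# interval for the mean per-seed difference between two models. If the
# interval straddles zero, "I got this number higher" is not distinguishable
# from run-to-run noise.
import random

random.seed(0)
model_a = [0.81, 0.79, 0.83, 0.80, 0.82]  # accuracy over 5 seeds (fake)
model_b = [0.80, 0.78, 0.82, 0.81, 0.79]
diffs = [a - b for a, b in zip(model_a, model_b)]

def bootstrap_ci(xs, reps=10_000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the mean of xs."""
    means = sorted(sum(random.choices(xs, k=len(xs))) / len(xs)
                   for _ in range(reps))
    return means[int(reps * alpha / 2)], means[int(reps * (1 - alpha / 2))]

lo, hi = bootstrap_ci(diffs)
print(f"95% CI for mean difference: [{lo:.3f}, {hi:.3f}]")
```

This is a few lines of stdlib Python, which is sort of the point: the barrier to doing it is incentive, not effort.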

Did anyone think this wasn't happening?

Science operates as a priesthood more than anything. Universities originally emerged out of schools for religious instruction, to train Catholic clergy, and I'm not sure how far we have come from there. There is the same two-faced rapprochement between the highest ideals and the lowest behaviours. The high priests just wear jeans instead.

This comment is not very helpful. Many scientists are doing their best to improve the world by publishing their research to the public. They are doing so, at least in CS, at a substantial pay cut compared to industry.

You can see the efficacy of modern science by the real and impactful discoveries in machine learning and related fields made over the last 10-15 years.

You misunderstand. I am definitely not anti-science. The point is we would probably be much further ahead and have many more impactful discoveries if this sort of nonsense wasn't happening, and good people who want to improve the world were given the maximum opportunity to do this. Instead these people are constantly exploited by sociopathic careerists who manage to rise to the top in many places.

It's like police. Good ones do good, and bad ones do harm.

You'd expect the good ones to be pro-reform, but there apparently isn't enough pro-reform in academia to have reform.

With that, less and less good will happen, and more and more harm.

There’s a lot of room for improvement:

- grad student stipends are negligently low, making it difficult for students to complete their PhDs.

- many journals in CS are not open access despite the authors and reviewers not being compensated for their time. I think pre-print servers and the ability to publish work on your own website has really improved the state of CS research over the past 5 years. In the past, the official policies of many top conferences stated that a paper would not appear in the proceedings if it had been published to your website or a pre-print server. This policy has now been dropped (or is not being enforced) by almost all top CS conferences.

- it can be difficult to resolve a conflict with your advisor, especially if they have tenure. Tenured professors enjoy significant protections (though maybe not to the same degree as police). Often the most productive avenue for a student in this case is to quietly switch advisors, which can lead to entrenchment of toxic faculty. At my alma mater I can think of a professor who was notorious for having none of their students graduate: they all either left academia or switched advisors.

It’s certainly an imperfect system and I’m sure folks can point out other flaws in the comments. There is a lot of room for reform in these and other areas. For example the lack of transparency in this case makes the entire academic community look bad for no good reason.

However I do think that folks on HN (on average) have an overly negative view of academia, some of which may not be based on an accurate understanding of its good parts. For example peer review, despite its many flaws, turns out to be very valuable. During the COVID-19 pandemic, a lot of spurious science has been published in pre-print form and on Medium, only to then be rebutted and rejected by scientific venues with a robust peer-review process.

Well, I had a great time in my Ph.D program. My advisor was the very model of caring about students and integrity. I did good research which I was proud of (but which had very little impact since at the time I didn't know how to sell / position it well). I see the upsides of a functioning academia.

My cynicism doesn't come from a lack of understanding of the academy or the good parts of the academy. To the contrary; it's based on:

1) Seeing the upper levels of the academy (which are often incredibly corrupt, although this is well-hidden from students).

2) Seeing enough cases handled to know that cover-ups are much more common than resolutions.

3) Seeing many students get abused, and seeing the impact fake research has on the real world.

The lack of transparency makes academia look bad for perfectly good reason: there are huge amounts of money flowing into pockets corruptly at elite schools, entire academic careers build on academic fraud (which is /not/ the same as legal fraud), and there's enough rot in the system that it's hard to reform. The higher you go, the more rot there is.

Appearances to the contrary, students are university customers, and faculty are employees. The power legally rests with the administration and the board. The rot persists since the people benefiting set the rules. If I'm the president of a school, and a reform would put me in jail, or even make me lose my million dollar salary, why would I listen to student protests? Faculty protests carry a little more weight, but not enough, and enough faculty are corrupt themselves that you can't build a united front for reform.

Regarding peer review, the system is entirely bankrupt. There are no incentives to do a good job. Anecdotally, most reviewers skim papers; no one reads them, and certainly no one checks the math or replicates the results. Less anecdotally, when people have looked at data, it's no different from a die roll (NIPS did a proper evaluation of peer review).

Regarding grad student stipends, the basic deal is good on paper: freedom to pursue intellectual interests, quality mentorship, and a quality education with basic living expenses covered (on the taxpayer dollar). This seems like exactly the right thing. It breaks when universities break their side of the bargain: taxpayer dollars and charitable donations going to faculty clubs, yachts, and vacation homes; internal power dynamics forcing graduate students to be treated as cheap labor rather than as students; and competition forcing widespread academic fraud.

One of my friends leads artifact evaluation for USENIX Sec. It's an emerging practice, but I think a valuable addition to the peer-review process in CS. There's been a growing movement, which I very much like, for improved reproducibility. Here's an example from NeurIPS: https://towardsdatascience.com/reproducible-machine-learning...

^ As you know, many areas of science have a reproducibility crisis including psychology and medicine. I hope this will go a long way towards addressing some weaknesses in peer review in CS. For myself, I try to be thorough when refereeing papers. Anecdotally I know some more busy academics farm out their reviews to their grad students.

I had a largely positive experience in grad school also thanks to my excellent advisor. Although I can say that my ~$25k/yr stipend was difficult to live on without the additional scholarship money that I earned, and even then...

I agree with you that there absolutely needs to be more transparency in many aspects including complaints about advisors, and investigations into dishonest publishing behaviour like in this case.

I think we're long past that point. We need to realign incentive structures and see wholesale change. You'll never see quality peer reviews on an honor system, with no incentive to do quality work. The basic concept is broken. It's much better to let people publish as they do research, and have post-hoc reviews, revisions, and discussions.

And for transparency, I'm not going to trust any research out of MIT research until MIT openly gets rid of NDAs and non-disparage agreements. Just no. Yuck. And that includes the chains of corporate walls MIT has built up to hide stuff.

I won't trust MIT research until I can file a public records request, and get records. I should be able to discover conflicts-of-interest, financial arrangements, and all the other messes MIT gets itself into.

I won't trust MIT research until faculty meetings are subject to open meeting laws, and the public can sit in on them and understand how decisions get made.

I won't trust MIT research until data, source code, and papers are out in the sun for public review by anyone.

I won't trust MIT research until I see bad actors getting disciplined and these things publicly discussed.

Until then, unless I know something at MIT isn't corrupt, I'll just assume it might be. I won't trust Stanford either. Too much stuff is simply fabricated to tell a good story. Heck, I won't trust a lot of the elite academy, since I assume it's just as bad. Things get a little bit better one tier down, but the rot is starting to seep in.

MIT CSAIL just shut down its main mailing list -- thousands of people -- since people started discussing institutional corruption.

Footnote: I had no problem living on my similar stipend. I paid for one bedroom in a four-bedroom apartment, ate cheap food, and didn't spend much otherwise.

Certain news and social media sites have expertise in detecting voting rings. Maybe they could help...

Dear Scientists,

I'm very curious: why do you stress so much about publishing with those Elseviers, ACMs and so on?

It's unhealthy.

First: there are so-called "home pages" - instant publication! I believe every university has such infrastructure for its staff, possibly students too...

Not every paper is a groundbreaking discovery, and not every paper should be published in an international and GLOBAL "journal".

But if you really insist, then: say the USA has 50 states, so at least 50 universities... Then, if you want to publish, collect papers (e.g. in TeX format) in your department every half year, compile them into a YourUniversityDepartmentYearNumber journal, print 50 copies, glue on stamps and post them to 50 libraries. Done.

Include a CD-ROM.

You could also do zines. With their own domain name. Easy.

Good content will bring readers.

Do not forget a ToC, with URLs.

But it's not a cure for all the diseases you have there.

You need to step back and evaluate.

Also, don't allow yourselves to be morphed into assholes, and in two generations you will be the deans and professors.

And some environments are just vipers' nests - move out of there!!!

i would like to add some discussion. when i say "i", you can either read "i" as i am a pc or external reviewer, or maybe im a grad student.

1. i dont want to say anything that gets me in trouble, but hotcrp was set up wrong by the chair of ISCA that year. i know pc members were able to download the whole paper submission list. i personally saw _all_ papers and _all_ reviewer comments on anything i did not submit. it is likely that everyone had access to this, because i am not a "big player" in this field. i really doubt i had some special access, because i have no relationship to the chair that year.

i still have all the papers from isca19. i should not have these, but i got them from a normal download (no enumeration or guessing urls).

2. point 1 does not excuse leaking, it just made it easier. instead of calling all your pc friends and asking them to call their pc friends, you just needed to call one pc friend not on your submission. he could see it.

3. good luck getting tao li. hint: did he fund anyone in the past who was investigating him? did he fund anyone who reviews his papers?

4. the tao li story is deeper. a) there's a race issue, because it's easier to manipulate people if your business is being a china -> usa faculty pipeline. and once people go through your pipeline, they are even more likely to keep it going! b) if you think the paper push is bad, wait until the details of how he actually treated his students come to light. hint: some things advisors do to students in china are immoral, but tolerated. those things are illegal in the usa.

5. this is bigger than tao li. you will never get rid of the cliques, but it's at the point that i don't even want to read the papers anymore. why sell my integrity for grants, when i could get industry money where at least the product shows the work is not crap?

6. if you want to know some of the worst players, i heard they are upset they are not on the micro2020 reviewer list :p. of course it's not a completely good reviewer list, our community is too small!

7. i hope one day we can honestly talk this out. i would like to tell some stories about the pressure :)

> ld years ago because they got sick of all this nonsense. Most were happy to let those left behind play their silly peer-review games, but when the student suicide happened it was like a call to action.

> It is easier to fight things from the outside, as the internal higher-ups have less power to me

Why don't you start it here? Since you are already anonymous anyways?

seems you wanted to reply to another person

Please tell us more, so people know what actions should be taken right now

The bigger question is, what do we do about it?

It's inevitable that when you do research on a very niche topic, you'll probably know a lot of people who are qualified to review your paper.

How do we prevent this kind of corruption when the communities are so small?

And this is exactly why I turned down a Ph.D. and always thought the community of academia (CS academia in my case) is one big pile of circle-jerk and shitty politics..

Is there a summary of what the fraud was about? Was it about publishing certain researchers' work?

EDIT: Thanks, I skimmed too much.

> There is a chat group of a few dozen authors who in subsets work on common topics and carefully ensure not to co-author any papers with each other so as to keep out of each other’s conflict lists (to the extent that even if there is collaboration they voluntarily give up authorship on one paper to prevent conflicts on many future papers). They exchange papers before submissions and then either bid or get assigned to review each other’s papers by virtue of having expertise on the topic of the papers. They give high scores to the papers. If a review raises any technical issues in PC discussions, the usual response is something to the effect that despite the issues they still remain positive; or if a review questions the novelty claims, they point to some minor detail deep in the paper and say that they find that detail to be novel even though the paper itself does not elevate that detail to claim novelty. Our process is not set up to combat such collusion.

Verbatim from the article.

Collusion and fraud can be easily detected using graph algorithms. I’m surprised they don’t use that to determine fraud for things like this at the ACM/IEEE.

Can you please explain?

In CS conferences there is generally a small pool of reviewers for a paper in a given sub-area (e.g., at a systems conference some people might focus on file systems). These reviewers tend to review each others’ papers, since to be a reviewer you must also be an active contributor.

I don’t see how any algorithm might detect this pattern of fraud without taking into account the merit of the papers.
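That said, even a very simple graph pass could surface candidates for manual audit rather than a verdict. A hypothetical sketch with made-up names and assignment data: treat "reviewer R reviewed a paper by author A" as a directed pair and flag reciprocal pairs, one crude signature of a quid-pro-quo ring.

```python
# Hypothetical sketch with made-up names: model "reviewer R reviewed a paper
# by author A" as a directed pair (R, A), then flag reciprocal pairs -- one
# crude signature of a quid-pro-quo reviewing ring. Real detection would need
# the actual assignment data, and a flag would only justify a manual audit,
# not a conclusion of fraud.
reviews = {
    ("alice", "bob"), ("bob", "alice"),   # review each other's papers
    ("alice", "carol"), ("carol", "bob"),
    ("dave", "erin"),
}

# reciprocal edges: R reviewed A's paper and A reviewed R's paper
mutual = {frozenset(pair) for pair in reviews if pair[::-1] in reviews}
print(sorted(sorted(p) for p in mutual))  # [['alice', 'bob']]
```

In a legitimately small sub-area, mutual review is expected, which is exactly the parent's point: such a pass narrows the search, but someone still has to look at the papers and scores.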

A few bad apples spoil the barrel.

I have always imagined higher level academia would be rife with inner circle politics (corruption).

"higher level X would be rife with inner circle politics" - sadly seems to be in line with human nature.

FWIW, anecdotally, in my interactions with academia the involved parties have mostly been surprisingly respectable - which is more than I can say for industry.


> A quick look at the historical DNS records for IEEE.org shows there have been strange things going on for some time

Such as?

Please keep the sinophobia out of HN. Thanks.

How is questioning the motives of departments of the CCP like Huawei and any other Chinese company Sinophobic? Do you realise we have relatively strong crypto today due to lobbying and pressure on the US government in the 90s, which wanted to keep crypto weak? Why on earth should we not question when China adopts the same tactics via direct or indirect methods?

> How is questioning the motives of departments of the CCP like Huawei and any other Chinese company Sinophobic?

Well, if you think that all Chinese companies are departments of the CCP and, by extension, all Chinese nationals working for Chinese companies (so the vast majority) are working for the CCP, and a Chinese company sponsoring OpenSSL wants to keep crypto weak... how is that anything but Sinophobia?

Maybe you think it's justified Sinophobia because of how evil the CCP is... but most evil things the CCP did can be summarized as "they oppress their people" and they oppress their people because they don't have majority support. That's especially true in internationally operating tech companies, where the employees are the most likely to need access to banned foreign websites just to do their job. Most Chinese people don't work for the CCP, they work around them.

It is well known that large companies in China get assigned a CCP delegate. They are then made extensions of the CCP.

The OP in question is engaging in hateful rhetoric that is unrelated to the thread, which is ironically about US academic funding realities & the corrupt practices it encourages--a failure of the ruthlessly capitalist system here.

I do believe there is a HN guideline that discourages that sort of behavior.

Nothing wrong with distrust of the CCP and their companies.

For research in academia, where does the funding come from? It's impossible for the government to fund every project in the country.
