
Potential organized fraud in ACM/IEEE computer architecture conferences - bmc7505
https://medium.com/@tnvijayk/potential-organized-fraud-in-acm-ieee-computer-architecture-conferences-ccd61169370d
======
codezero
They seem to summarize it in paragraph four, though I am not clear on what
this really means. I think it is that certain authors are able to be "top
ranked" and get lenient review from colluding peers who may effectively be
co-authors on the paper, but removed themselves to squeeze through review
faster and avoid future flags for "conflict of interest".

 _" There is a chat group of a few dozen authors who in subsets work on common
topics and carefully ensure not to co-author any papers with each other so as
to keep out of each other’s conflict lists (to the extent that even if there
is collaboration they voluntarily give up authorship on one paper to prevent
conflicts on many future papers). They exchange papers before submissions and
then either bid or get assigned to review each other’s papers by virtue of
having expertise on the topic of the papers. They give high scores to the
papers. If a review raises any technical issues in PC discussions, the usual
response is something to the effect that despite the issues they still remain
positive; or if a review questions the novelty claims, they point to some
minor detail deep in the paper and say that they find that detail to be novel
even though the paper itself does not elevate that detail to claim novelty.
Our process is not set up to combat such collusion."_

Also see the link in a sibling comment for what seems to have kicked this all
off:
[https://news.ycombinator.com/item?id=23460336](https://news.ycombinator.com/item?id=23460336)

~~~
currymj
peer review at CS conferences is normally anonymous and double-blind to
reviewers and authors. however, the conference organizers themselves know who
the reviewers and authors are, and the conference management websites are set
up to avoid assigning reviewers to papers where they have a conflict of
interest with the author.

this is usually something like: anyone whose email address has the same domain
as yours, plus anyone you have coauthored any paper with, both of these going
back a few years. If you are ethical you will report any additional conflicts
that this doesn't catch.
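
As a rough sketch (the lookback window and data shapes here are my own
assumptions, not any specific conference system's), the automated filter
amounts to something like:

```python
# Hypothetical sketch of the automated conflict-of-interest check
# described above: flag a reviewer for a paper if they share an email
# domain with an author, or co-authored with one within a lookback window.
COAUTHOR_WINDOW_YEARS = 5  # assumed lookback; real systems vary

def has_conflict(reviewer_email, author_email, coauthorships, current_year):
    """coauthorships maps frozenset({email_a, email_b}) -> year of last joint paper."""
    if reviewer_email.split("@")[1] == author_email.split("@")[1]:
        return True  # same institution, inferred from email domain
    last = coauthorships.get(frozenset({reviewer_email, author_email}))
    return last is not None and current_year - last <= COAUTHOR_WINDOW_YEARS

# A colluding pair at different institutions who never co-author
# leaves no record for the filter to catch:
print(has_conflict("a@uni1.edu", "b@uni2.edu", {}, 2020))  # False
```

Which is exactly why a cabal that avoids shared affiliations and joint papers
sails straight through.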

so if a cabal wants to make sure they can positively review each other's
papers, they have to make sure they don't trip those automated filters. this
means they must have been at separate institutions for a while, and must avoid
publishing any papers together.

then, during the review process, they can "bid on" (put themselves forward as
a reviewer for) each other's papers and ensure they give only positive
reviews.

~~~
the_svd_doctor
In small fields, it's sometimes really not very hard to guess who the authors
are, just given the topic and what papers they cite. Even with double blind.

~~~
qznc
You can often guess who the reviewers are as well. The most obvious tell is
"also cite paper X" which is their own 99% of the time.

~~~
lazyjeff
I don't think this is true from my experience. I've been on several program
committees, and have managed a couple as well. Usually if you're above the
first-level reviewers (e.g. senior PC, track chair, PC chair), you can see the
actual author names and reviewers.

Most of the time when an author later says (formally through the review
system, or informally in casual conversation) that "I know it's this reviewer
who asked me to cite their own paper" they are wrong. Their main evidence is
that the reviewer asked them to cite two papers that include the same author.

But of course I can't tell them otherwise because of the confidentiality, so
they keep on believing that, perpetuating the myth. Perhaps someone could
aggregate some statistics on this, but I genuinely believe that reviewers
suggest their own papers only about 20% of the time, and when they do it's
because their paper was highly relevant.

~~~
ska
I agree; in my experience the request to add citations is usually because
there is fairly obvious context missing, and the reviewer wants to make sure
the connection/comparison is made. Sometimes it's the author's own papers, but
that's selection bias at work - if they didn't work in the area, they probably
wouldn't be reviewing it.

------
tomato2juice
Looks like the (sad) background story is here:
[https://medium.com/@huixiangvoice/the-hidden-story-behind-the-suicide-phd-candidate-huixiang-chen-236cd39f79d3](https://medium.com/@huixiangvoice/the-hidden-story-behind-the-suicide-phd-candidate-huixiang-chen-236cd39f79d3)

Posting here instead of a new submission

~~~
kortilla
That suicide is so unfortunate. As someone who was a PhD student, I understand
the perceived gravity of the situation for him. It’s unfortunate he had nobody
to put it in perspective.

This is one of the fields where you can easily leave academia for industry, or
even switch to another academic institution, if things start to go bad. Nobody
but your own advisor will realistically care about a retracted paper
(especially if you do it before publication). And if your advisor does hold it
against you, that’s not an advisor you want anyway.

Please, please, please, if you ever find yourself in this situation, just walk
away. You are being paid less than a Starbucks barista to stuff a tenure-track
professor’s portfolio with rushed research nobody will realistically care
about in 3 years (if it was even relevant to begin with).

A PhD will teach you how to research. 99% of the research that comes out of
that process will be useless, incremental crap. Issue retractions, miss
deadlines, whatever. The stakes in the CS academic game are so small that a
complete come-apart as a PhD student can easily be turned into an extended
master's degree on a resume.

~~~
dlkf
"academic politics are so vicious because the stakes are so small"

~~~
kortilla
Mmmm. Is that a reference to Billions or did they lift it from somewhere else?

~~~
antonvs
Google "Sayre's Law"

------
jjjjjj__
Fraud?

Are you kidding me? I have a publication in MICRO with a very famous
institution in the US (top 20).

You know how my professor published his paper in MICRO? Relationship!
Relationship! Relationship!

As a graduate student coming from a third-world country, I totally lost my
hope in the system when I saw that a venue as famous as MICRO is only based on
relationships.

Don’t judge me - who can I report it to? I am a graduate student, which means
losing my supervisor means losing my visa. I am even posting this via an
anonymous account (that's how scared I am).

~~~
codezero
That's a huge bummer. When I was doing research in physics, I found it easy to
get published, even without having a PhD - though it didn't hurt, I think, to
be working at a university with a group that had already published in that
publication before.

With that said, my PIs didn't assist me at all in preparing, submitting, or
defending my research, except to inform me of what to prepare for.

I was really pleased with that whole experience because it felt like there
were very distinctly _not_ gatekeepers in that field. It could very easily be
that there are not really any professional stakes in the field I studied in,
so maybe that makes a difference.

My team was all researchers, not tenured faculty, and we funded ourselves with
government grants.

With all that said, it's really helpful to get a different point of view -
thanks for sharing this, and sorry for the trouble you experienced, nobody
should be put in such an awkward position.

~~~
jjjjjj__
I am thinking about applying again from the beginning and starting a Ph.D. all
over, since I am in love with science.

And the only reason I left my home country was because I _wanted_ to pursue
science. (Science is what makes my love beautiful)

But I am afraid maybe my next supervisor will be the same (let alone that if I
leave midway, my current supervisor will not give me a letter of
recommendation).

~~~
rscho
You are right to be afraid. Get your PhD first, then do what you want and try
to do honest research if you still feel like it. It's like that in every
field, apparently. I've been doing medical research for 10 years and I have
yet to meet an honest professor. But with time I got more independent, and
while I still have to put up with professorial bullshit I can now also conduct
honest and independent projects from time to time.

~~~
jjjjjj__
Thank you for those kind words. I really needed it. An honest person to talk
to.

You can imagine how much of a shock it is to my belief system to come from
literally the other side of the world in hope of doing genuine research, and
find out that it is based on lies.

~~~
codezero
I ended up dropping out of graduate school, but the advice I was given before
entering was this:

Choose your advisor, not your area of focus.

Your advisor single-handedly decides how smoothly your PhD will go, and once
you have your PhD, you can decide where your career goes.

Obviously, area of focus does determine a bit of your future, but within that
area, optimize for a helpful advisor.

Also, I am sure your university has resources, and ways you could report this
behavior, but it's often very difficult to unseat/challenge university
faculty, so I totally understand an unwillingness to act. I agree with the
other commenter: finish what you started, get out, and focus on doing good
work.

~~~
dijksterhuis
> Choose your advisor, not your area of focus.

Absolutely this.

Worked on my master's dissertation with my initial supervisor. He went off to
try and find funding for a year or two to get me to come back for PhD.

Turned up, he passed me an ACM magazine article and asked me to have a look
and see if anything stuck. Got hooked. He knows I like weird and trippy stuff
;)

I'm in a bit of a rubbish situation as he left his position here for greener
pastures. Totally fair decision on his part, from what I've gathered of the
back story.

Now supervised by someone else and it's just not the same. Have to go over old
ground quite often.

So yeah, choose your supervisor wisely. If they "get you" then they'll get
what you're interested in. Then they'll be able to _work with you_ to find an
interesting area to study.

------
tnecniv
I want to point out that, as far as I can tell, the advisor is still active at
the university [1] and even presented the questionable paper at the conference
[2]. This is pretty disturbing and I haven't found any more info on what the
school has done outside some vague internal investigation [3]. Does anybody
know if the university is actually conducting a thorough investigation and
whether or not it's ongoing?

As a PhD student, this disgusts me. I've witnessed my share of awful,
exploitative lab situations, but this takes the cake. It seems to me that most
of the time, professors only receive soft punishments (e.g. people in the
department warning prospective students to think twice about joining a lab)
unless the incident draws enough bad press.

[1] [http://www.taoli.ece.ufl.edu/](http://www.taoli.ece.ufl.edu/)

[2]
[https://twitter.com/VHuixiang/status/1146524212279955458](https://twitter.com/VHuixiang/status/1146524212279955458)

[3] [https://bit.ly/2UsrMsa](https://bit.ly/2UsrMsa)

~~~
choppaface
There's also hard evidence of misconduct in the conference review software,
which directly contradicts the "investigation" from the conference chairs:
[https://twitter.com/xexd/status/1222671193527861249](https://twitter.com/xexd/status/1222671193527861249)

------
jasonhansel
Read the alleged response by Tao Li to the original accusations against him:
[https://medium.com/@huixiangvoice/help-dr-tao-li-is-threatening-insiders-on-behalf-of-university-of-florida-3fa1cb16de68](https://medium.com/@huixiangvoice/help-dr-tao-li-is-threatening-insiders-on-behalf-of-university-of-florida-3fa1cb16de68)

This is a really shocking threat: he's saying that his school (the University
of Florida) will take legal action against people making good-faith
allegations of academic misconduct against him, because they have unspecified
"ulterior motives."

How is this attempt to silence people an appropriate response to the death of
one of his own postdocs? At the very least, the University needs to say
whether it finds these threats acceptable and whether it is actually
considering legal action.

~~~
angry_octet
That is completely believable, actually. Legal threats are very effective at
silencing whistleblower reports. The right response in these cases is for the
faculty (the head of the research group and the dean) to suffer reputational
damage equivalent to the misconduct penalty of the offender - that is,
exclusion from research (including supervision and presenting at conferences),
retraction of their publications, demotion, or termination. Without this, the
old boys' network will have its way.

------
yutopia
Academia has become tremendously cutthroat but it still operates under the
assumption that everyone acts honorably.

Perhaps that’s no longer sustainable, but what is the solution? Many of the
problems with modern-day academic research can be traced back to increased
scale and competition, neither of which is going away.

~~~
effie
Current U.S./European academia seems unable to change from within. It lives on
government money which it itself controls and distributes to those who "play
ball" - the others are pushed away. The only way a change can happen is if
academia-independent people control and distribute money to individuals who
have shown talent/potential/results in the past. It will take massive outside
interference to fix or supplant the corrupt system.

~~~
cycomanic
That's not quite how it works and, in my opinion, not the main problem. Take
for example DARPA grants: they are controlled by outside people and I don't
think it works. You essentially have a "case worker", often with little
understanding of the field, pushing in some direction. Moreover, the amount of
admin around these grants is staggering.

The root cause of the situation is that there is not enough money in the
system, and that, compared to the past, more money is distributed through
competitive grant processes. This results in a couple of things.

1. The cutoff (reviewer mark) at which grants get funded or not is completely
arbitrary, because it sits in the flat part of an arctan- or logit-type curve,
so small differences or luck can determine whether you are able to do your
research.

2. Competition therefore is fierce.

3. To balance the odds you have to write a lot of applications (academics now
spend most of their time either writing grants or administering their grants,
not doing research).

4. It encourages incremental research, because non-incremental research
implies high risk, but no one can afford the risk of failure: a significant
reduction in publications reduces your chances of getting grants in the future
(see 1.). Once you get a gap in your funding it's almost impossible to get
back.

5. This disadvantages women who want to have children.
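
Point 1 can be illustrated with a toy simulation (all numbers here are
invented for illustration): when many proposals cluster tightly around the
cutoff, small reviewer noise rather than quality decides who gets funded.

```python
# Toy illustration of point 1: proposal scores that cluster near the
# funding cutoff mean that small reviewer noise, not quality, decides
# which proposals clear the bar.
import random

def funded(scores, cutoff):
    """Return the indices of proposals scoring at or above the cutoff."""
    return {i for i, s in enumerate(scores) if s >= cutoff}

random.seed(0)
quality = [0.75 + 0.01 * i for i in range(10)]   # nearly identical proposals
cutoff = 0.80

without_noise = funded(quality, cutoff)
with_noise = funded([q + random.uniform(-0.03, 0.03) for q in quality], cutoff)
print(sorted(without_noise))
print(sorted(with_noise))  # noise reshuffles which proposals clear the bar
```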

~~~
effie
Yeah more university/grant administration of the current sort is not the right
way to go. They have too much power already.

Established elder scientists with little bias and little incentive to corrupt
the country education/science system should be in charge, not administrative
pencil pushers. Maybe hire some outsiders and set up some kind of independent
commission of renowned scientists and budget planning/audit people to maintain
oversight of the supported science groups.

But lack of money isn't really part of the problem; there is a lot of money
there and it often goes to waste. Putting in more isn't the solution - it
would just amplify the current situation.

The fact that there is competition for money isn't a bad thing; in theory it
keeps people engaged and working efficiently. The problem with the current
competition is that people are judged by their willingness to play ball, by
flawed citation metrics, publicity, imaginary department points and
relationships - not by their originality and the weight of their scientific
contribution.

------
kleiba
In a time when the main purpose of a conference was academic exchange and
moving a scientific field forward, there was little incentive to engage in
fraud. When, as is standard practice today, your scientific career hinges on
how many papers you have published at top venues, the story changes
drastically.

Accounts of how groups of inter-connected high-profile researchers game the
review system for their own advantage are deeply concerning.

------
jeffbee
Looking at the screenshots ... "video recognition in future 5G technology". Is
that just straight up word salad? It sounds like the industry horseshit I hear
in snippets ... "the convergence of machine learning and IoT". Does the phrase
"video recognition in future 5G technology" have _any_ meaning whatsoever?

~~~
mindfulplay
You are missing "cryptomining lidar focus" .. might add more legitimacy.

------
shultays
This was one of the reasons for me to drop out of graduate school and quit my
research assistant job.

I was working on a paper along with a professor. We sent the paper for review,
and the anonymous feedback from one of the reviewers was something like "these
(a couple of papers related to our subject) would also make good citations".
My professor said "this person is probably the writer of those papers and asks
for citations in exchange for a good review". I was quite shocked by how
casual he was when talking about this.

------
pbzcnepu
some of us made a big effort to get anyone at the IEEE or ACM to care about
this ISCA'19 incident. The initial investigation by Torrellas was run mostly
by ISCA insiders with conflicts of interest, and mostly seemed to be a stall
tactic. They did eventually open a new investigation.

ISCA'20 should really have been cancelled until this was resolved.

See a timeline of our attempts to get anyone at ACM/IEEE to care:
[http://pbzcnepu.net/isca/timeline.html](http://pbzcnepu.net/isca/timeline.html)

~~~
throwawayiionqz
I am wondering if the NSF program officers responsible for overseeing the
grants should be contacted, as they would have the power to hold the
University and the PIs accountable and/or start their own investigations.

They can be easily found in grant webpages, eg
[https://www.nsf.gov/awardsearch/showAward?AWD_ID=1900713](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1900713)

~~~
pbzcnepu
The program officers were contacted, leading to the OIG being notified on 6
December 2019; supposedly an investigation was opened, but we have not heard
anything back since then.

~~~
obmelvin
I just want to thank you for your actions, and ask if you have any advice on
career paths for doing something about this type of academic bullshit. My GF
is currently pursuing a PhD in biochem but is very interested in pursuing
science policy so that she can do something about all the injustice one
witnesses as a graduate student.

As someone who received both their B.S. and M.S. at UIUC, I'm honestly
disgusted at how various professors and administrators have handled certain
incidents that my friends have been part of.

~~~
pbzcnepu
there's really not much one can do. In Comp Arch, most of the honest people
left the field years ago because they got sick of all this nonsense. Most were
happy to let those left behind play their silly peer-review games, but when
the student suicide happened it was like a call to action.

It is easier to fight things from the outside, as the internal higher-ups have
less power to mess up your life.

It's interesting you mention UIUC, as traditionally that was the source of a
lot of suspect papers. There was a running joke about the "coincidence" of how
many people with UIUC connections managed to get papers into ISCA each year.

~~~
dbancajas
is this true? Can you expound on this? which suspect papers are there?

I know there are a lot of solid professors at UIUC, like Josep Torrellas,
Sarita Adve (C++ consistency models), and Vikram Adve (the adviser of Chris
Lattner, creator of LLVM). This is really bothersome and I hope it's not one
of those professors.

~~~
pbzcnepu
well let's take Torrellas. Check out the two papers at ISCA'20 with him as an
author.

Both use modified "cycle-accurate" simulators for the results. For now let's
ignore all the issues with the accuracy of these simulators.

Let's validate how "solid" the work is. Drop an e-mail to Torrellas and ask
for the code they used so you can see if you can reproduce their work.
Hopefully things have changed and they'll send you the code but in my
experience they'll just say no.

So they got in two papers which are unverifiable, and none of the reviewers
ever saw the code involved. This bothers some people, but it's not unusual at
ISCA.

~~~
dbancajas
> none of the reviewers ever saw the code involved

There are a lot of papers in ISCA that are based on cycle-accurate simulators.
It has been like that forever. How else would you evaluate new, not-yet-
existing, bleeding-edge architectures? FPGAs? Most can't even afford that, and
it's just not possible in a lot of cases. I agree with you that some work
should be verifiable, but your accusation is weak on this point. Maybe the
community can push for verifiable work before publication in ISCA, since it is
such a prestigious conference.

If you don't want papers based on cycle-accurate simulators, you'd have to
eliminate a large body of work only possible with this approach. A lot of
techniques in modern processors got their start in processor/system
simulators, and most of them are probably not cycle-accurate.

~~~
pbzcnepu
> If you don't want papers based on cycle-accurate simulators, you'd have to
> eliminate a large body of work only possible with this approach.

yes. Most "cycle-accurate" results are garbage. Do you show error bars on your
results? Can you? Did you run on a variety of independently implemented
simulators and show the results on all of them? Did you run the full reference
inputs to SPEC all the way through? Why not?

The answer seems to be that it would be hard. But guess what, science is hard.
Try complaining to a biologist sometime about your architecture paper that
took so long to write, where you gathered all the results 2 weeks before the
paper deadline.

It's fine if you come up with a new idea and run some simple proof-of-concept
runs to show it might have merit, but don't pretend the results from an
academic simulator hacked together by a sleep-deprived grad student have any
real world merit.

~~~
idoeestuff
omg running ref spec on gem5 fs mode...

------
pvarangot
This behavior should be particularly unacceptable in computer science
journals. One of the main results computer science gives to society is secure
systems for protecting and exchanging information, and the fact that the
double-blind peer review of a conference that basically deals with the
construction of safe, reliable systems was broken shouldn't be taken lightly.

~~~
rscho
Come see what's happening in medicine and be really appalled...

~~~
protomyth
I do wonder if medicine can compete with the various conferences in support of
social service programs.

------
topspin
"Conclusion: 99.99% of us are honest but the dishonest 0.01% can cause
serious, repeated damage."

Precisely how is this ratio determined?

Given the unreproducible results and plagiarism that we've learned about in
recent times, I believe there is a lot more fraud than can be accounted for by
"the dishonest 0.01%."

~~~
effie
Maybe the absolute worst are the 0.01% (or, more likely, 1%). But the problem
is broader. There are different forms of corruption: making up word salads and
fake results on one hand, engaging in low-effort repetitions with little to no
questioning of the prevalent methodologies/theories du jour, or fixing
language in grant proposals/reports to get more money. When all these add up,
I think 0.01% is more likely the portion of the completely honest,
hard-working people in academia.

~~~
wegs
It's more than 0.01% honest/hard-working people. You have:

* Grad students who will never make tenure.

* Lifelong adjuncts / post-docs who are staying there because they won't play the game.

* Older faculty from a different era.

* Etc.

It's enough that at this point, I find all research from my alma mater suspect
unless I've personally reviewed it or know the PI personally. But there's a
pretty large honest portion too.

------
znpy

        There is a chat group of a few dozen authors who in subsets work 
        on common topics and carefully ensure not to co-author any papers
        with each other so as to keep out of each other’s conflict lists 
        (to the extent that even if there is collaboration they voluntarily
        give up authorship on one paper to prevent conflicts on many future
        papers)
    

is this normal?

~~~
PeterisP
Well, no, that's why this publication is a strong assertion of unacceptable
fraud.

------
raincom
It is worse in humanities. Many university presses won't dare to publish books
on themes that are antithetical to the dominant theories and/or paradigms.
Being part of the dominant clique is necessary to publish books there.

------
mcguire
" _Here is what I heard from an award-winning professor about what happened.
The professor has first-hand knowledge of the investigations and has
communicated directly with the investigators, but wishes to remain anonymous:_
"

If no one is going to publish names, nothing is going to happen.

I've seen fraud at IEEE ICDCS '98, but that conference was a mess. Smaller
IEEE regional conferences are sewers, but I've seen nothing especially
improper there.

------
withinrafael
The paper file listing [1] from the PC in question shows a predictable
[conference][year]-paper[incrementing-number] file-name pattern. Rather than
"insider help" [2], could this be explained by a "leaky endpoint/API" that
got scraped?

I hope I'm missing something.

[1]
[https://miro.medium.com/max/1400/1*sNC6SuC1v8peYmw6KEcz3g.pn...](https://miro.medium.com/max/1400/1*sNC6SuC1v8peYmw6KEcz3g.png)

[2] Quote: "HotCRP does not show you your own submission even if you are a PC
member, so how did the de-anonymized reviews and PC comments for Huixiang’s
paper end up in his laptop? An insider help seems likely."
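
If the naming scheme really is [conference][year]-paper[N], a scraper wouldn't
need insider access so much as an endpoint that skips authorization checks; a
hypothetical sketch (the name pattern and values below are invented for
illustration):

```python
# Hypothetical illustration of the "leaky endpoint" theory: with a
# predictable file-name scheme, every candidate name can be generated
# and probed; only an authorization check on the server stops this.
def candidate_names(conference, year, max_id):
    return [f"{conference}{year}-paper{n}.pdf" for n in range(1, max_id + 1)]

print(candidate_names("conf", 2019, 3))
# ['conf2019-paper1.pdf', 'conf2019-paper2.pdf', 'conf2019-paper3.pdf']
```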

~~~
lazyjeff
It's possible, but there are only a few reviewing systems that are commonly
used and their permission systems are robust as far as I know. I think a
simpler explanation is that someone in the fraud group has a "higher
permission role" than a first-line reviewer, so has access to these. This
could be someone who is a general chair, program committee chair, track chair
or area chair if the conference is big enough, and often even senior PC
members (the metareviewers). All they have to do is perform an export and put
it on Dropbox or Google Drive.

~~~
SoylentOrange
It has been confirmed that the reviewing system in question is HotCRP, which
is a very battle-tested system that is unlikely to have these issues.

There is a comment from Eddie Kohler on Twitter which implies this was not a
leaky endpoint problem. You can see this thread for more details:
[https://twitter.com/xexd/status/1222659612857401344?s=20](https://twitter.com/xexd/status/1222659612857401344?s=20)

------
Upvoter33
This is really sad for a great community like SIGARCH. They need to clean up;
instead, they are sweeping things under the rug. Let's hope other communities
can learn from their mistakes.

~~~
wegs
This is the tip of the iceberg for SIGARCH.

------
soraminazuki
Not directly related, but the "sign in to medium.com with Google" dialog on
the top right corner with my account name on it is really freaking me out.

~~~
hereisdx
It's an iframe hosted on Google's servers, so Medium does not directly have
access to the iframe contents. But it is still irritating.

~~~
soraminazuki
It's the other way around that I'm concerned with; Google tracking which
Medium posts I read.

------
p0llard
Does anyone know which other SIG is implicated? Or where I might be able to
find out?

------
zelon88
Honestly, that's how I figured it worked anyway: popularity contest. Even the
less significant conferences seem to just be platforms where "influencers"
(Twitter-verified personalities in leading roles at popular tech companies
with barely any code in their repos) scratch each other's backs and swap
followers. It was always my understanding that confs were a place for
networking with other people just for that purpose. Maybe it's because I'm on
the East Coast and all the serious confs are on the West Coast.

~~~
SoylentOrange
I believe the author of this comment is referring to industry conferences.
Please note these are very different from peer-reviewed academic conferences.

In CS these academic conferences are held in many different locations, and
many (most?) top researchers are not on Twitter.

------
softwaredoug
Why don’t we require CS research to be reproducible? It’s a fricken computer
program. (or should be!)

We also, in machine learning, never truly examine results for basic
statistical significance. It’s often rather “I got this number higher”.

I struggle to have even basic trust in academic CS results unless they're
backed by a major private company's research arm (e.g. Google Research) or by
academics I know and trust well.

------
Gatsky
Did anyone think this wasn't happening?

Science operates as a priesthood more than anything. Universities originally
emerged out of schools for religious instruction, to train Catholic clergy,
and I'm not sure how far we have come from there. There is the same two-faced
rapprochement between the highest ideals and the lowest behaviours. The high
priests just wear jeans instead.

~~~
SoylentOrange
This comment is not very helpful. Many scientists are doing their best to
improve the world by publishing their research to the public. They are doing
so, at least in CS, at a substantial pay cut compared to industry.

You can see the efficacy of modern science by the real and impactful
discoveries in machine learning and related fields made over the last 10-15
years.

~~~
wegs
It's like police. Good ones do good, and bad ones do harm.

You'd expect the good ones to be pro-reform, but there apparently isn't enough
pro-reform in academia to have reform.

With that, less and less good will happen, and more and more harm.

~~~
SoylentOrange
There’s a lot of room for improvement:

- grad student stipends are negligently low, making it difficult for students
to complete their PhDs.

- many journals in CS are not open access, despite the authors and reviewers
not being compensated for their time. I think pre-print servers and the
ability to publish work on your own website have really improved the state of
CS research over the past 5 years. In the past, the official policies of many
top conferences stated that a paper would not appear in the proceedings if it
had been published on your website or a pre-print server. This policy has now
been dropped (or is not being enforced) by almost all top CS conferences.

- it can be difficult to resolve a conflict with your advisor, especially if
they have tenure. Tenured professors enjoy significant protection (though
maybe not to the same degree as police). Often the most productive avenue for
a student in this case is to quietly switch advisors. This can lead to
entrenchment of toxic faculty: at my alma mater I can think of a professor who
was notorious for having none of their students graduate - they all either
left academia or switched advisors.

It’s certainly an imperfect system and I’m sure folks can point out other
flaws in the comments. There is a lot of room for reform in these and other
areas. For example the lack of transparency in this case makes the entire
academic community look bad for no good reason.

However I do think that folks on HN (on average) have an overly negative view
of academia, some of which may not be based on an accurate understanding of
its good parts. For example peer review, despite its many flaws, turns out to
be very valuable. During the COVID-19 pandemic, a lot of spurious science has
been published in pre-print form and on Medium, only to then be rebutted and
rejected by scientific venues with a robust peer-review process.

~~~
wegs
Well, I had a great time in my Ph.D. program. My advisor was the very model
of caring about students and integrity. I did good research which I was proud
of (but which had very little impact, since at the time I didn't know how to
sell / position it well). I see the upsides of a functioning academia.

My cynicism doesn't come from a lack of understanding of the academy or its
good parts. On the contrary, it's based on:

1) Seeing the upper levels of the academy (which are often incredibly corrupt,
although this is well-hidden from students).

2) Seeing enough cases handled to know that cover-ups are much more common
than resolutions.

3) Seeing many students get abused, and seeing the impact fake research has on
the real world.

The lack of transparency makes academia look bad for perfectly good reason:
there are huge amounts of money flowing corruptly into pockets at elite
schools, entire academic careers built on academic fraud (which is /not/ the
same as legal fraud), and there's enough rot in the system that it's hard to
reform. The higher you go, the more rot there is.

Appearances to the contrary, students are university customers, and faculty
are employees. The power legally rests with the administration and the board.
The rot persists since the people benefiting set the rules. If I'm the
president of a school, and a reform would put me in jail, or even make me lose
my million dollar salary, why would I listen to student protests? Faculty
protests carry a little more weight, but not enough, and enough faculty are
corrupt themselves that you can't build a united front for reform.

Regarding peer review, the system is entirely bankrupt. There are no
incentives to do a good job. Anecdotally, most reviewers skim papers; no one
reads them closely, and certainly no one checks the math or replicates the
results. Less anecdotally, when people have looked at the data, outcomes are
no different from a die roll (NIPS ran a proper evaluation of peer review).

Regarding grad student stipends, the basic deal is good on paper: freedom to
pursue intellectual interests, quality mentorship, and a quality education
with basic living expenses covered (on the taxpayer dollar). This seems like
exactly the right thing. It breaks when universities break their side of the
bargain: taxpayer dollars and charitable donations go to faculty clubs,
yachts, and vacation homes; internal power dynamics force graduate students
to be treated as cheap labor rather than as students; and competition forces
widespread academic fraud.

~~~
SoylentOrange
One of my friends leads artifact evaluation for USENIX Security. It's an
emerging but, I think, valuable addition to the peer-review process in CS.
There's been a
growing movement, which I very much like, for improved reproducibility. Here's
an example from NeurIPS: [https://towardsdatascience.com/reproducible-machine-
learning...](https://towardsdatascience.com/reproducible-machine-learning-
cf1841606805)

^ As you know, many areas of science have a reproducibility crisis including
psychology and medicine. I hope this will go a long way towards addressing
some weaknesses in peer review in CS. For myself, I try to be thorough when
refereeing papers. Anecdotally I know some more busy academics farm out their
reviews to their grad students.

I also had a largely positive experience in grad school, thanks to my
excellent advisor. Though I will say my ~$25k/yr stipend was difficult to
live on without the additional scholarship money I earned, and even then...

I agree with you that there absolutely needs to be more transparency in many
aspects including complaints about advisors, and investigations into dishonest
publishing behaviour like in this case.

~~~
wegs
I think we're long past that point. We need to realign incentive structures
and see wholesale change. You'll never see quality peer reviews on an honor
system with no incentive to do quality work. The basic concept is broken.
It's much better to let people publish as they do research, and have post-hoc
reviews, revisions, and discussions.

And as for transparency, I'm not going to trust any research out of MIT until
MIT openly gets rid of NDAs and non-disparagement agreements. Just no. Yuck.
And that includes the chains of corporate walls MIT has built up to hide
stuff.

I won't trust MIT research until I can file a public records request, and get
records. I should be able to discover conflicts-of-interest, financial
arrangements, and all the other messes MIT gets itself into.

I won't trust MIT research until faculty meetings are subject to open meeting
laws, and the public can sit in on them and understand how decisions get made.

I won't trust MIT research until data, source code, and papers are out in the
sun for public review by anyone.

I won't trust MIT research until I see bad actors getting disciplined and
these things publicly discussed.

Until then, unless I know something at MIT isn't corrupt, I'll just assume it
might be. I won't trust Stanford either. Too much stuff is simply fabricated
to tell a good story. Heck, I won't trust a lot of the elite academy, since I
assume it's just as bad. Things get a little bit better one tier down, but the
rot is starting to seep in.

MIT CSAIL just shut down its main mailing list -- thousands of people -- since
people started discussing institutional corruption.

Footnote: I had no problem living on my similar stipend. I paid for one
bedroom in a four-bedroom apartment, ate cheap food, and didn't spend much
otherwise. My university

------
Woodi
Dear Scientists,

I'm very curious - why do you stress so much about publishing in those
Elseviers, ACMs, and so on?

It's unhealthy.

First: there are so-called "home pages" \- instant publication! I believe
every university has such infrastructure for its staff, and possibly for
students too...

Not every paper is a groundbreaking discovery, and not every paper should be
published in an international, GLOBAL "journal".

But if you really insist, then: say the USA has 50 states, so at least 50
universities... Then, if you want to publish: collect papers (e.g. in TeX
format) in your department, say every half year, compile them into a
YourUniversityDepartmentYearNumber journal, print 50 copies, glue on stamps,
and post them to 50 libraries. Done.

Include a CD-ROM.

You could also do zines, with their own domain name. Easy.

Good content will bring readers.

Don't forget a ToC, with URLs.

But that's not a cure for all the diseases you have there.

You need to step back and evaluate.

Also, don't allow yourselves to be morphed into assholes, and in two
generations you will be the deans and professors.

And some environments are just vipers' nests - move out of there!!!

------
DenisM
Certain news and social media sites have expertise in detecting voting rings.
Maybe they could help...

------
idoeestuff
i would like to add some discussion. when i say "i", you can read "i" as
either a pc or external reviewer, or maybe a grad student.

1\. i don't want to say anything that gets me in trouble, but hotcrp was set
up wrong by the chair for ISCA that year. i know pc members were able to
download the whole paper submission list. i personally saw _all_ papers and
_all_ reviewer comments on anything i did not submit. it is likely that
everyone had this access, because i am not one of the "big players" in this
field. i really doubt i had special access, since i have no relationship to
the chair from that year.

i still have all the papers from isca19. i should not have these, but i got
them from a normal download (no enumeration or guessing of urls).

2\. point 1 does not excuse the leaking, it just made it easier. instead of
calling all your pc friends and asking them to call their pc friends, you
just needed to call one pc friend who was not on your submission. he could
see it.

3\. good luck getting tao li. hint: did he fund anyone in the past who was
investigating him? did he fund anyone who reviews his papers?

4\. the tao li story goes deeper. a) there's a race issue, because it's
easier to manipulate people if your business is being a china -> usa faculty
pipeline. and once people go through your pipeline, they are even more likely
to keep it going! b) if you think the paper-pushing is bad, wait until the
details of how he actually treated his students come to light. hint: some
things advisors do to students in china are immoral but tolerated. those
things are illegal in the usa.

5\. this is bigger than tao li. you will never get rid of the cliques, but
it's gotten to the point that i don't even want to read the papers anymore.
why sell my integrity for grants, when i could get industry money, where at
least the product shows the work is not crap?

6\. if you want to know some of the worst players, i heard they are upset
they are not on the micro2020 reviewer list :p. of course it's not a
completely good reviewer list either - our community is too small!

7\. i hope one day we can honestly talk this out. i would like to tell some
stories about the pressure :)

~~~
dbancajas
> ld years ago because they got sick of all this nonsense. Most were happy to
> let those left behind play their silly peer-review games, but when the
> student suicide happened it was like a call to action.

> It is easier to fight things from the outside, as the internal higher-ups
> have less power to me

Why don't you start it here? Since you are already anonymous anyways?

~~~
idoeestuff
seems you wanted reply to another person

------
jedberg
The bigger question is, what do we do about it?

It's inevitable that when you do research on a very niche topic, you'll
probably know a lot of people who are qualified to review your paper.

How do we prevent this kind of corruption when the communities are so small?

------
sk0v
And this is exactly why I turned down a Ph.D. and always thought the
community of academia (CS academia in my case) is one big circle-jerk full of
shitty politics.

------
cryptonector
Is there a summary of what the fraud was about? Was it about publishing
certain researchers' work?

EDIT: Thanks, I skimmed too much.

~~~
YetAnotherMatt
> There is a chat group of a few dozen authors who in subsets work on common
> topics and carefully ensure not to co-author any papers with each other so
> as to keep out of each other’s conflict lists (to the extent that even if
> there is collaboration they voluntarily give up authorship on one paper to
> prevent conflicts on many future papers). They exchange papers before
> submissions and then either bid or get assigned to review each other’s
> papers by virtue of having expertise on the topic of the papers. They give
> high scores to the papers. If a review raises any technical issues in PC
> discussions, the usual response is something to the effect that despite the
> issues they still remain positive; or if a review questions the novelty
> claims, they point to some minor detail deep in the paper and say that they
> find that detail to be novel even though the paper itself does not elevate
> that detail to claim novelty. Our process is not set up to combat such
> collusion.

Verbatim from the article.

------
remote_phone
Collusion and fraud can easily be detected using graph algorithms. I'm
surprised the ACM/IEEE don't use them to catch things like this.

~~~
SoylentOrange
Can you please explain?

In CS conferences there is generally a small pool of reviewers for a paper in
a given sub-area (e.g., at a systems conference some people might focus on
file systems). These reviewers tend to review each other's papers, since to
be a reviewer you must also be an active contributor.

I don't see how any algorithm could detect this pattern of fraud without
taking into account the merit of the papers.
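That skepticism is fair: in a small field, reciprocity alone is weak
evidence. Still, to make the graph-algorithm idea upthread concrete, here is
a toy sketch (all names, scores, and the threshold are hypothetical, and this
is not any conference's actual process) of surfacing reciprocal high-scoring
pairs from review-assignment data as candidates for human investigation:

```python
# Toy sketch: flag pairs who gave each other high review scores.
# Output is a list of leads for merit-aware human review, not proof
# of collusion. Names, scores, and the threshold are illustrative.
from collections import defaultdict
from itertools import combinations

def find_reciprocal_pairs(reviews, high_score=4):
    """reviews: iterable of (reviewer, author, score) tuples."""
    high = defaultdict(set)  # reviewer -> authors they scored highly
    for reviewer, author, score in reviews:
        if score >= high_score:
            high[reviewer].add(author)
    # A pair is suspicious if each rated the other highly.
    return [(a, b) for a, b in combinations(high, 2)
            if b in high[a] and a in high[b]]

reviews = [
    ("alice", "bob", 5), ("bob", "alice", 5),    # reciprocal high scores
    ("carol", "alice", 2), ("alice", "carol", 5),  # one-sided, not flagged
]
print(find_reciprocal_pairs(reviews))  # [('alice', 'bob')]
```

A real system would have to weight such pairs against base rates (how often
any two people in a small sub-area legitimately review each other) and, as
noted above, against the merit of the papers, so flagged pairs are leads,
not verdicts.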

------
teddyh
A few bad apples spoil the barrel.

------
obvthrowaway2
I have always imagined higher level academia would be rife with inner circle
politics (corruption).

~~~
winstonschmidt
"higher level X would be rife with inner circle politics" \- sadly seems to be
in line with human nature.

FWIW, anecdotally, in my interactions with academia involved parties have
mostly been surprisingly respectable - which is more that I can say for
industry.

------
momofarm
For those doing research in academia, where does the funding come from? It's
impossible for the government to fund every project in the States.

