I'm long past my academia phase, but recently led the PC for an industry conference (accept rate: ~15%).
1. Curation is important for physical limits (venues only fit so many people), attention limits (attendees will usually retain only a handful of "nuggets" no matter how packed the agenda is), and interaction limits (you can't meet everyone at a large conference).
2. If the goal of a conference is not just to "stamp" research as somehow "approved", but to encourage discovery and knowledge exchange that deepens a specific area, it's important to apply that curation filter with an eye toward best advancing the goals of the conference. That means not just going for things that are okay, but those that best resonate with other presentations / attendees / research topics.
3. While the size of any one conference has to be fixed, tech has made it infinitely easier to create new conferences and journals with other focus areas. They may not start with the prestige of a larger journal, but if the papers published start to have an impact, it can catalyze an entire subfield of work.
Some conferences tie themselves mostly to "novelty" (ACM academic conferences), others to "incremental advancements" (the bigger industry conferences in security, like USENIX Security), and some to "best explaining ideas" (Enigma, for example).
There are new ways to find an audience for your work and create impact - that's part of the job now.
I'm well-known in a research community. I'm positioned such that I don't need more academic points. I've mostly stopped publishing in branded prestige academic venues, in part due to rejection rates.
My goal in doing work and writing papers is to see them disseminated. The acceptance/rejection process is asinine -- studies show it's basically random. I've had one paper in my whole career where the reviewers did a proper review (e.g. worked through the math). The rest were quick skims. Comments often show the reviewers never read the paper. The stuff that makes it through this process is often nonsense, while very high-quality work is often cut.
The very best paper I wrote in my career has never seen the light of day. It was shortened to a 4-page work-in-progress because a reviewer didn't read it (the feedback was literally nonsense: that the sample size was small enough to be anecdotal; I had the largest sample size in the history of the research field).
The only impact of this egotistical search for prestige-by-low-accept-rates is that people who have better things to do with their time leave, and that research dissemination is slowed.
Those excuses make little sense in the real world:
1) If your conference has a 10% accept rate, it's easy enough to book a bigger venue next year. I've been to conferences with dozens of people, and ones with tens of thousands. It all works well. Bigger ones work better, if anything.
2) PCs aren't thoughtful enough to do that well, and even if they were, the goal of a conference shouldn't be to select things that resonate with the entrenched PC. That's why many ideas have to wait for a generation of old, conservative professors to die before they can make it out there.
3) The whole obsession with prestige is stupid and misguided.
Journals and conferences ought to have quality bars. Are there typos and grammar errors? Were there clear IRB ethics violations? Did you use error bars on your plots? Was data fabricated? Is the research methodologically sound? Is it coherent and readable? And so on. If it passes those bars, it should be published. If no one reads it / attends a talk, that's okay too -- importance can and should be determined after the fact.
I may have been radicalized during my short time in the academic world, but IMO, conferences are a really bad setting for disseminating new ideas. They just don't favor it. In practice, you have people preaching their ideas, a lot of people not listening, and a few misunderstanding. Not much else.
Spreading ideas is better done on paper, with guided discussion, and without time limits. Or, in other words, on something like paper-split hierarchical internet forums.
Conferences can be useful for discussing and working over known ideas. For that, they should feature papers that are already published and have had some community attention. Debuting new ideas in front of unprepared people is antagonistic to that goal.
> Comments often show the reviewers never read the paper.
This. I was not in computer science, but in a different technical field, and this is sadly common. We would often have to appeal to the editor with "The topic the reviewer said we didn't address? It's in Section X. Get us another reviewer."
The biggest mystery in the whole thing is why someone who is volunteering to review papers anonymously would bother to do it badly when they could simply not do it at all.
* You do get academic points for chairing a conference, and as a chair, you do need to find reviewers.
* A colleague is running a conference, and asks you to do a favor. You want to help your colleague. Reviewing papers wins you points with them, and declining to review burns bridges. When you're running a conference, you'd like them to reciprocate. Plus, they might be on a grant / hiring / etc. board / committee / etc. later on. Burning bridges in academia is very bad.
On the other hand, there is no incentive to invest more than 30-600 seconds per review. Neither you nor your friend really have any reason to care about the quality of the conference.
As this process repeats, people put in less and less time each time around, since it doesn't matter. The process converges to random noise.
Why would they care? They get their name as the chair of the conference on their CV. No one remembers who chaired a specific year or how a given year went. If they did, there's enough noise in the process it wouldn't be attributed to the review process in particular. Some years, weaker papers come in, and others, stronger ones do.
Just accept people who have given a lot of conference talks before, and it will be fine. That is the fastest and easiest way to review, so unless there is pressure to do things differently, that is how most will do it.
If there is still space left at the end, you can look at the rest and take the first papers that look fine until there are no spots left.
There is (for good reason) more focus these days on diversity, broadly defined (e.g., new speakers), at non-academic conferences. However, there were quite a few conferences in the tech sector historically that tended to have a core of "the usual suspects," with others grabbing a smaller number of leftover slots. TBH, I probably benefited from this over the years. (Conferences run by companies follow somewhat different rules but still usually have a stable of Top Rated Speakers who tend to get slots.)
I think this happens in all fields. It’s probably a professor on a PC dumping the review on an unsuspecting and overworked PhD student or MS student who really doesn’t care and just wants to get some sleep.
And yes - say what one may - PhD students are overworked and underpaid at least in most of the US.
The "basically" is important though, because there are some nuances to it.
However, the point I've actually come here to make is that since publications are a strong factor for your career progress in academia, a corollary of the above is that making it in academia is basically random, too. Which is also true for other reasons, though: for every open professor position in a certain field, there are usually a number of candidates that are all equally highly qualified. But only one of them can get the gig. If the selection is not random, then it's typically based on other factors, such as, how well you are connected, your gender, whether some other professor at the faculty fears competition from you, etc. -- which may not be random, but is equally out of your control in all but a few cases.
My experience is that for elite schools -- Stanford and MIT -- the remaining factor is how much one is willing to cheat. There is a random component and a merit-based component, but most (and I have a large n here) successful affiliated faculty candidates succeeded by cheating in some way.
That can be data baking, credit theft, or a whole slew of other techniques, but at least in my department, most new faculty at least at these two schools are in some way crooked.
"99.99% of us are honest but the dishonest 0.01% can cause serious, repeated damage."
My experience is that this is much more like a 50/50 split at elite schools, at least when you look at people who succeed at making it to faculty positions. The BMJ estimates that 20% of publications are based on fabricated data.
That sounds about right for what I've seen at MIT. Note that 20% of publications being based on fabricated data is in line with my 50/50 split figure. Researchers who cheat only do it part of the time, and often in ways which don't involve direct data fabrication. Critically, the numbers go up significantly for high-impact publications -- the types that make the news and make scientific careers. By the time MIT's PR machine picks up a publication, and the press picks it up from there, the odds of it being fraudulent are well above 50/50.
> Comments often show the reviewers never read the paper.
And when they do, it's not clear they understood it, or even made the slightest effort toward understanding. I had a paper rejected where one of the comments was that the header of a table featuring 4 columns named N, V, ADJ, ADV was "hard to understand". The table sat between two paragraphs that each mentioned nouns, verbs, adjectives, and adverbs, in a paper mostly about dictionaries...
> My goal in doing work and writing papers is to see them disseminated. The acceptance/rejection process is asinine -- studies show it's basically random. I've had one paper in my whole career where the reviewers did a proper review (e.g. worked through the math). The rest were quick skims. Comments often show the reviewers never read the paper. The stuff that makes it through this process is often nonsense, while very high-quality work is often cut.
Because the leaders are the people who made their careers in the current system, and they wouldn't benefit from making things more meritocratic. These are the people who argue endlessly that meritocracy is bad for reason X or reason Y; they just want to keep their current privileges.
> 3. While the size of any one conference has to be fixed, tech has made it infinitely easier to create new conferences and journals with other focus areas. They may not start with the prestige of a larger journal, but if the papers published start to have an impact, it can catalyze an entire subfield of work.
Does it though? The largest conferences I go to as a CS academic have hundreds of people. There are academic areas where 10x as many people participate. The size limitation is a self-imposed excuse to keep acceptance low. I have been PC chair of two conferences, and my attempts to expand the conference numbers were shot down by the steering committee precisely for this reason, not because we couldn't find a larger room.
This is a reason I gave up doing systems research. It’s full of cliques and negativity. It became a process whereby we were even trained to accept this rejection frequency as normal and to expect retrying for years. You keep submitting a paper and revising it for months even while working on the next idea (can’t just sit around fixing a paper, PhD work must always continue).
Rejections also directly impact a student’s graduation. No accepted papers at decent conferences? Delay graduation. It’s grating and unnecessarily stressful.
After graduation I was unmotivated to continue to bother publishing because I don’t care for this culture. I’m saddened by it.
There are positions in industry, like at Sandia National Labs, which require regular publication to remain employed. Again, even when good work is done, rejections affect your livelihood. Too much stress.
I lost my “spark” thinking the field of CS systems research was one of exploration and community. It exists, but not as a whole.
I left my PhD in computational geometry for similar reasons. I was told flat out by my advisor that the chances of me getting a paper accepted in even a middling journal were low. Not normally low (as in a tough review), but low because the entire field is dominated by a handful of very high-profile researchers and mathematicians who essentially gatekeep knowledge in it.
It was also necessary to publish some papers to even get my PhD.
Academia is dead. My biggest, most important realization about learning in the last decade is that my concept of the academic was backwards. They aren't free-thinking at all. It's drama all the way down.
I haven't touched anything in CS research since, because this experience completely killed my love for the field, and for advanced learning in general.
I agree with this sentiment. Although I didn't even make it to applying to grad programs, because the professors at my uni were very much "what are you going to do for me," in a Thomas Edison invention-factory way, even when I was just inquiring about undergrad research.
> I haven't touched anything in CS research since, because this experience completely killed my love for the field, and for advanced learning in general.
I wager you'll come around: out of genuine interest, when you want to work on interesting things, you'll run into novel problems, a lot. But this is sad to me because every few years I'll come up with something novel, and discover someone already did a PhD thesis on it. I feel like I'd be much further along if I had access to people who really cared about teaching and pushing the boundaries.
One slight saving grace (after having graduated) is to feel fine writing documents like papers and just putting them up on https://arxiv.org/ under my own terms.
One additional technique I learned from others is to use submissions to conferences as a way to get feedback on your work without any expectation that it gets accepted. You do that early knowing there are flaws, but do it to get early warnings from reviewers about rejection points.
Now you could do the same being outside academia, as a way to get feedback on something and then just call it “done” on some paper on your terms.
>> One additional technique I learned from others is to use submissions to conferences as a way to get feedback on your work without any expectation that it gets accepted. You do that early knowing there are flaws, but do it to get early warnings from reviewers about rejection points.
This is, of course, one of the problems with current research, as it wastes everyone's time (yours, the reviewers', the ACs', etc.). In particular, because you're not guaranteed practicable or actionable feedback, and because you're increasing the reviewing load (i.e., more papers to review as submissions grow and quality drops).
This topic is close to my heart. I do research on the fundamentals of 2D graphics, some of it cutting edge (especially GPU techniques), some of it just selecting the best known techniques (for example, right now I'm doing a bit of a deep dive into robust cubic and quartic polynomial root finders: not really academically publishable, but potentially hugely useful for others working on similar problems).
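To give a flavor of what "robust" means here, below is a minimal sketch of such a cubic solver (Python, purely illustrative; not the code I actually use): depress the cubic, take the trigonometric branch when all three roots are real and a cancellation-avoiding Cardano branch otherwise, then polish with a Newton step.

```python
import math

def cubic_roots(a, b, c, d, eps=1e-12):
    """Real roots of a*x**3 + b*x**2 + c*x + d = 0 (a sketch, not battle-tested)."""
    if abs(a) < eps:                          # degenerate: actually a quadratic
        if abs(b) < eps:
            return [] if abs(c) < eps else [-d / c]
        disc = c * c - 4.0 * b * d
        if disc < 0.0:
            return []
        s = math.sqrt(disc)
        return [(-c + s) / (2.0 * b), (-c - s) / (2.0 * b)]

    b, c, d = b / a, c / a, d / a             # normalize to monic
    p = c - b * b / 3.0                       # depress: x = t - b/3
    q = 2.0 * b ** 3 / 27.0 - b * c / 3.0 + d
    shift = -b / 3.0
    disc = (q / 2.0) ** 2 + (p / 3.0) ** 3

    if disc > 0.0:                            # one real root: Cardano
        s = math.sqrt(disc)
        w = -q / 2.0 - math.copysign(s, q)    # pick the larger-magnitude term
        u = math.copysign(abs(w) ** (1.0 / 3.0), w)
        v = -p / (3.0 * u) if u != 0.0 else 0.0  # u*v = -p/3 avoids cancellation
        roots = [u + v + shift]
    elif abs(disc) <= eps and abs(p) <= eps:  # triple root at t = 0
        roots = [shift]
    else:                                     # three real roots: trigonometric form
        r = math.sqrt(-p / 3.0)
        arg = 3.0 * q / (2.0 * p * r)
        phi = math.acos(max(-1.0, min(1.0, arg)))  # clamp against rounding
        roots = [2.0 * r * math.cos((phi - 2.0 * math.pi * k) / 3.0) + shift
                 for k in range(3)]

    # one Newton step per root to polish away accumulated rounding error
    poly = lambda x: ((x + b) * x + c) * x + d
    dpoly = lambda x: (3.0 * x + 2.0 * b) * x + c
    return [x - poly(x) / dpoly(x) if abs(dpoly(x)) > eps else x for x in roots]
```

A production solver would add input scaling and handle the near-double-root cases more carefully, which is exactly the kind of detail the literature rarely writes up.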
I made a serious attempt to submit some of my GPU monoid work to academic conferences earlier this year, got rejected. As someone who doesn't have the "publish or perish" incentive of actually being in academia, it's really just not worth it.
I'm also doing inquiry into UI, for example architectural patterns in reactive systems. The current state of academic literature on this topic seems to be terrible (though please point out counterexamples!). Most of what's written is marketing material for UI frameworks - there's an explosion of those, especially in the JavaScript world, but very little synthesis of the core concepts. I wouldn't even attempt to try to submit an academic paper on this topic, though I think it would likely be useful to the world.
It feels like there should be a space to publish work that is not novel in an academic sense, but useful. Wikipedia is not it (their articles on cubic and quartic equations are garbage when it comes to numerical concerns), Stack Overflow is not it, academic conferences and journals are not it. I'm using my blog, generally successfully, but it feels there should be a more systematic approach.
I found myself in a somewhat similar situation. I learned to think long-term:
(1) Publish initial ideas not in top conferences, but in second-rate venues. Most publications in top conferences are, clearly, refinements of ideas that were first presented in inchoate form in less prestigious workshops. Paradoxically, prestigious conferences don't like brand-new ideas, but they like polished papers. And they like concrete open problems being solved. If a paper starts "We are solving the longstanding open problem from [17]", that dramatically increases acceptance probability: most submissions to prestigious conferences are not rejected because they are wrong, but because it's unclear why they are more significant than all the other submissions. Solving a concrete open problem gives you social proof that the problem is significant (others worked on it) and hard (people failed). How do you create concrete open problems? By leaving them as concrete open problems in a previous paper.
(2) Create your own conferences. All now famous conferences started as small, low prestige workshops 10-20 years ago, e.g. the "Oakland" security workshop (now: IEEE Symposium on Security and Privacy).
I learned this from my post-doc mentor, who is now a famous scientist with a whole research tradition to his name, that he created, bootstrapping from a first workshop paper.
I am a graphics researcher who has been following your blog posts on your GPU vector graphics engine. I've benefited from your posts, and I think some of the vector graphics renderer designs you discussed in your blog are genuinely novel and worth publishing as academic papers if written and evaluated properly. Please consider submitting them to a graphics venue like SIGGRAPH, EGSR, High Performance Graphics, or the Journal of Computer Graphics Techniques if you find free time to do it. It will greatly benefit the community. ; )
I wonder if publishing papers onto something like github could become a tradition. Heck, you could even include an implementation. People could make pull requests for new ideas, or even fork your paper...
Academic journals grew out of the processes that academics used to communicate with each other and filter ideas through their community in the pre-internet days. It is weird that they've become the stamp of approval for all research.
The Programming Journal (https://programming-journal.org/) and its associated conference seem pretty good to me. At least I find papers there that are interesting to me and are not unnecessarily obtuse in notation etc.
I wonder if this culture arises out of necessity. Many conferences claim to not have a set quota of papers they accept, but in practice there is a limitation to how many presentations can be accommodated given the physical and temporal limitations of the venue. The quota may grow slightly over time, either by shrinking the time allotted to every presentation (ICFP is really squeezing it these years, for example) or adding parallel tracks, but there are ultimately fairly hard physical restrictions. This inevitably limits how many submissions can be accepted, which also creates an otherwise unnecessary air of competition, even between otherwise unrelated papers.
Journals don't have this restriction; you can always put out more volumes. I suppose I enjoy conferences as much as any CS practitioner, but it does not strike me as a sustainable or scalable publication method.
But journals do have an economic restriction. More pages cost more money and the subscription fees only go so far. I suppose page charges might balance some of the editorial costs, but those are also limiting. They can't be too high or they'll chase away submissions.
> More pages cost more money and the subscription fees only go so far
arxiv.org's budget is instructive: around $2M for 181K new submissions and a digital library of around 2M articles. Works out to about $12 per new submission.
For a reviewed journal or conference, presumably all published papers would have to be reviewed, much as they are currently by volunteer reviewers who review all papers before publication or rejection. If Prof. Lee is right then reviewing effort could go down overall due to fewer resubmissions.
12 USD is the current cost per submission. But I do not think they would need to 2x their budget to handle 2x the number of articles; marginal costs per article will be lower.
But your point stands, this stuff costs real money. So I just donated 100 USD to Arxiv, as they host a few of my papers. And thousands of other articles that I have read, for free and super accessibly.
I am an academic mathematician -- who has had job applications rejected, papers rejected, grant proposals rejected. Not always, but it's not exactly a rare occurrence. I've also been on the other side, and it also sucks to reject people.
It's an unfortunate reality of academia that there are fewer resources (jobs, grant funding, etc.) available than there are researchers who are prepared to put them to good use.
Further, those who are making the decisions have limited time. If you're serving on a hiring committee and get hundreds of job applicants, you can't hope to read all the papers of all the applicants. To deeply read any one of them would take a fair bit of time.
We therefore need a signaling mechanism to distinguish the outstanding from the merely very good.
It's of course possible to argue about the details of how papers are rejected, as the authors indeed do. But unfortunately the core problem -- an aspiring academic will get rejected often, and it can be extremely demoralizing -- is one we probably can't solve.
I don't buy the attention filter argument. No one -- and I really do mean no one -- is going to read the entire contents of the proceedings of even just one of these conferences. NeurIPS -- a single CS conference -- is more than twice the size of the Joint Math Meetings. ICRA and ICML are just as large or larger, and AAAI isn't far behind. That's just one sub-field of CS. There are so many papers coming out every year that I simply cannot keep up with two of my own niches. Adding more papers to that firehose wouldn't materially change the situation.
I've reviewed for some (high quality) Mathematics journals. Papers tend to be more complete, for sure, but the reviewing is much less rejectionist. I'm not aware of any Mathematics journal with a 10% acceptance rate, and even 20% is probably on the low end.
> It's an unfortunate reality of academia that there are fewer resources (jobs, grant funding, etc.) available than there are researchers who are prepared to put them to good use.
I don't think this is true in CS. Universities outside of an elite set really struggle to hire and retain high quality faculty. It's at a crisis level outside of R1. Teaching-oriented institutions have mostly stopped trying to hire traditional academics; a master's degree with some teaching experience is sufficient.
Some of this is due to industry -- high-quality faculty candidates tend to also have 3x-5x offers in industry, and it's hard to turn down a guaranteed early retirement for the grind and uncertainty of the tenure track. But I think some of it is also that students who would make good teachers and mentors lose confidence due to a series of unnecessary paper rejections and decide to nope out of academia.
Again, I spend a lot of time around academic mathematics. The rejectionist culture in CS is real. And not just conferences, btw. An NSF program manager started my last review panel by telling us that scores are consistently way lower in CS than in any other field and to please chill out.
> I don't think this is true in CS. Universities outside of an elite set really struggle to hire and retain high quality faculty. [...] Some of this is due to industry -- high-quality faculty candidates tend to also have 3x-5x offers in industry, and it's hard to turn down a guaranteed early retirement for the grind and uncertainty of the tenure track.
That seems to just confirm that resources are lacking. There's no shortage of people, but there's a shortage of money to pay them.
Universities that offer doctoral degrees and have "Very High Research Activity" according to the Carnegie Classification of Institutions of Higher Education.
All well and good. Then again, I've found the Computer Science literature to be abysmal - for pretty much forever. IMHO, there isn't a single journal that comes close to the quality of papers published in "Science" or the "J. of the AMA." I turned off my ACM & IEEE journal subscriptions years ago.
The good stuff used to be in things like MIT AI Lab Memos, BBN Technical Reports, RFCs, theses, and such. The good presentations are at trade shows - and more likely to be in exhibitor booths, and part of show floor demonstrations (the old INTEROP shownets, the LVC demonstrations at I/ITSEC conferences).
Personally, I'm a lot more concerned with the review processes for grant & contract proposals - where there's real money on the table, and achieving solid results can be mission- and sometimes life-critical.
The academic system is completely broken for Computer Science, and I don't really see a way to fix it. The economic realities of the field just make it too risky to allow an exceptionally gifted individual to remain out in the open publishing research that could potentially destroy your business model.
Justifying these low acceptance rates as somehow prestigious is really just creating even more perverse incentives that open the academia side of Computer Science to further defunding and brain drain. If you're smart enough to rise, you'll get an offer from the private sector you simply cannot refuse. It doesn't matter if your passion is Academia, they can and will buy you out and own whatever you're working on.
This has all led to Computer Science's academia side being something one escapes rather than something one contributes to. The "cream" rising to the top is often less genius and more politically savvy with the right connections on the PC. I'm not necessarily against a selection bias towards "people skills," but to do so and continue to pretend PCs are pure meritocracy is nauseating.
It just comes back to the fact that the majority of Comp Sci PhDs have the same story: Halfway into their doctorate program they became severely disillusioned and started jockeying to just graduate and land a private sector job that essentially was just bribe money to keep them from working for the competition.
> The "cream" rising to the top is often less genius and more politically savvy with the right connections on the PC.
I'm generally nauseated when I interact with American CS academics. Every time I attend a conference, PC, or NSF panel, I am so glad I chose industry. It's like IRL twitter.
(Europe seems to be better for some reason.)
> If you're smart enough to rise, you'll get an offer from the private sector you simply cannot refuse. It doesn't matter if your passion is Academia, they can and will buy you out and own whatever you're working on.
IME it's less about "offer you can't refuse" on the industry side and more about "offer you can't take" on the academic side.
After 6 years of deferred income I simply could not take a job that paid $80K-$100K in an HCoL area or $65K-$80K in an LCoL area. I had loans to pay back, no 401K, and not enough savings for a down payment.
If you want good people to stay in CS academia, I think a few things need to change:
1. First, and most importantly, the faculty culture. I don't really know how to describe the problem, but "the old folks are checked out and the young folks are Twitter personalities" is probably close. What's the point of being in academia if you have to be surrounded by the intellectual equivalent of used car salesmen, especially when you can go to industry and do interesting work without the BS?
2. Double the income of PhD students so that they aren't financially ruined by choosing the academic path. This isn't a super unreasonable request -- they'd still be paid less than their peers in industry while doing what's effectively a full time job.
3. Pay faculty more. Not a lot more... just, like, "at least what my undergrad students make at their first job after graduating".
I think if you solve items 2 and 3, then item 1 will take care of itself.
IDK, I think tenure contributes a lot to 1. I understand and agree with a lot of the rationale (academic freedom, etc.) but when you select for people that prioritize, “if I work really hard for 6 years and get lucky, I can never be fired,” you get a lot of dysfunctional individuals and encourage some of their worst impulses.
Should faculty be paid more? Absolutely. Should Ph.D. students be paid more? Absolutely!! But the blanket statement you make in (1) is wrong and strikes me as awfully close to the extreme left-wing and right-wing mindsets of "the system is fucked up beyond repair, all that remains to be done is to tear it down". The reality is more nuanced than this, and the picture you paint of industry is hardly that rosy, even at silver-spoon companies that invest heavily in R&D.
I've spent a lot of time working for, or closely interfacing with, a half dozen academic institutions. I left academia by choice -- with multiple TT offers in hand -- so this isn't sour grapes.
I am highly confident in my assessment that the personalities found on the typical R1 tenure track are exactly the sort of personalities I avoid hiring or working with at all costs. There are exceptions, but they prove the rule (and I can often poach them anyways).
I don't think I said anything about industry other than that it pays 3x-5x better than the TT, and I'm pretty darn confident that's true. I am clear-eyed about the issues in industry, but the personalities are much better.
I really do believe that the massive pay disparity between CS industry and CS academia is, in part, a "toxic personality that can't play well with others" tax. And I really do believe that you'd get more mentally/emotionally healthy people on the TT if it paid better.
Anyways, we can agree to disagree, because we agree on the solution in any case.
> I am highly confident in my assessment that the personalities found on the typical R1 tenure track are exactly the sort of personalities I avoid hiring or working with at all costs.
My experience working with a former academic who was awful to work with: self-absorbed, self-promoting, accomplished next to nothing but talked a big game, and shit on everything everyone else did, even though their code ran the business.
The pay is not the biggest problem though. Obviously it is a big one, but there's a huge issue with the work culture.
I agree it's a rosy picture of industry, but IME most of the supposed "intellectual freedom" of academia is just a marketing pitch these days. You don't get it until you somehow make tenure, and even then, if you're in a high-cost field, you need to be very high-profile if you don't want to be forced to focus on the topics that attract grant money. You're interested in narcolepsy? Too bad.
So I consider it a red flag when a PI immediately jumps to say that "yes salaries should be higher but" and then goes on to defend everything else about their current situation.
Like, it is ridiculous the amount of self-promotion one feels pressured to do on Twitter. Do you not see the problem with authors pushing their work on social media during a supposedly double-blind review period?
I don't disagree that there is often a lot of bitching without actionable suggestions. But I don't think the characterization in (1) was especially extreme and I don't see the suggestion to burn the whole system to the ground. Personally I think we need more diversity in how academic institutions operate, that doesn't mean that old institutions will disappear.
I'm curious where all the money goes, since student loans are incredibly high but teacher pay is so low. I'm guessing the answer is 'random nonsense that shouldn't matter'.
In my university teaching experience, I found that everyone up the administrative chain to the top gets a cut, with the teaching faculty themselves receiving 1-2% of the annual tuition...
Many similar problems exist in most fields of academia. Biology has an even worse academic environment IMO, but unless you are highly computational it's not as easy to sell out to industry. Yeah there's pharma, but the straight out of PhD salaries aren't that exciting and the work environment is generally not as good as tech.
Not disagreeing with you at all on the CS front though obviously. I am interested to see how things go in the next few decades in the respective fields, as the ease of exit does affect who stays as you mention. But it also affects who joins in the first place, the pressure felt to get results, and hopefully down the road systemic incentive to fix the problems.
CS also has the perk of being easier to rejoin - you don't need to make a huge initial investment in most researchers to give them a chance. So I'm optimistic for reform in CS academia down the road. But it's a long road, and if academic politics continue to prevail then I'm deeply concerned about the state of all of our research institutions.
Seriously, I was recently at reunions for a "top" university where many people go into graduate programs, and it became a running joke trying to find an alum from any PhD program that wasn't jaded as fuck. Even some of the people that I was most confident would be killing it weren't (or at least felt they weren't). The majority were actively exploring industry opportunities and considered themselves unlikely to do an academic postdoc.
> If you're smart enough to rise, you'll get an offer from the private sector you simply cannot refuse
The average graduation time for my comp sci undergrad in the mid-2000s in Slovenia was 7.5 years. Because most people got jobs and forgot to graduate.
Personally I dropped out when schoolwork started getting in the way of freelancing for US companies. I remember a moment when my professor said “You know if you don’t get these grades up, you’ll have a hard time finding a job” and I thought “But I already have a job … sitting here talking to you is costing me billable hours”
Don’t get me wrong, I loved studying comp sci and learned a lot. Even use that knowledge regularly. Just didn’t get the paper.
> The economic realities of the field just make it too risky to allow an exceptionally gifted individual to remain out in the open publishing research that could potentially destroy your business model.
This seems a bit of a stretch, doesn’t it? It seems that you don’t have to suppress CS research to prevent it from having an impact, you can usually just politely ignore it after it has been published.
I probably phrased this really badly and applied it too broadly. Take the early days of machine learning. Everyone was working to crack it open and make money off of it. They still are, but now we're almost to the maturity portion of its lifecycle.
But in the early days, if you were smart enough and showed promise, you could get hired before you even finished your doctorate. Everyone trying to crack open the ML egg was doing this, so not doing it gave execs FOMO about letting someone else employ a very smart ML scientist that developed something that made a lot of money.
So it’s less that it’s risky and more that, for very specific and developing areas of CS, it’s considered a worthwhile expense to roll the dice and own the patents on the next big breakthrough outright, for the cost of a single salary.
Is 18% or so acceptance rate really low, though? Almost 2 in 10 submissions are accepted, and I thought "the top" meant something like 2% or less.
BTW, are there any resources that catalog which ideas in papers may work well in industry? As someone outside of academia, I find there are simply too many papers, even from top conferences, for me to consume. It's hard for me to know which papers' ideas can help me or not, and this 18% acceptance rate is not a good enough filter any more.
There's also self-selection: you're only going to submit to a top conference if you think there is at least a slight chance of your work being accepted. Thus, one could argue that papers submitted to top conferences are already better than average, which means the effective acceptance rate (measured against all work, not just submissions) is far lower.
I have never pursued this type of publication, but why on earth does an "acceptance rate" even exist for these (journal or conference) publications?
Why not publish them all? Endorse those that are selected, add commentary to those with which there is disagreement, but is a batch inclusion of them all so technically difficult?
The purpose of a journal is to be read, the purpose of a conference is to be heard, and they should be optimized for the desires of the audience, not the desires of the authors.
A journal or conference that publishes everything is useless to its audience, which expects significant curation: filtering the incoming firehose down to papers that (a) fit a specific topic, (b) pass a quality bar, and (c) are the most interesting of the far-too-many decent papers that satisfy the first two criteria.
There's no problem with publishing them all somewhere; that's already being done on e.g. arXiv, where pretty much every recent useful CS paper appears, often before being published in a "proper venue". But a "publish everything" service is not enough: people do want to read more selective sources instead of trying to skim everything that's posted on arXiv.
If the organization is receiving public funds of any kind, then they should be required to (electronically) publish all submissions unless the authors withdraw.
There has been quite enough censorship and paywalling of research that my taxes fund.
The point of a journal is not for people to express their feelings, but to provide researchers with a collection of papers that are, at least, mostly, not garbage. Journals publish the work of others and farm out the review process to others, their only real functionality is that of a gatekeeper.
Since a good chunk of researchers are funded in part by the government, I guess most journals would end up having to follow this "publish everything" requirement (money is fungible, and some of every grant goes to administrative overhead, so you could argue anything most universities touch is funded in part by the government).
Most publishers publish multiple journals, though, so I guess they could follow your rule as long as they were allowed to open up the, for example, "IEEE Journal of Perpetual Motion Machines And Straightforward Proofs That P=NP."
The bullshit will wash over everything, though - there are lots of people out there that are not acting in good faith (i.e., Russians) and they will use your suggestion to claim status for all sorts of wickedness.
I think it's not surprising that conferences have to perform some selection. You need the right amount of participants and talks. If a high rejection rate achieves the right amount, it is hard to argue against it.
But why are publications tied to conference attendance anyway? Sure, there are also journals, but submitting to a journal tends to be an especially slow process. If you are in a fast-paced field, submitting to a journal is a dangerous game.
Why can we not just upload our papers to something like arxiv and then give people the option to vote on papers analogous to reddit submissions, so that promising stuff organically rises to the top. That way it would at least be based on the opinions of a sizable number of judges, not just three preselected peers.
Oh no, but what about peer review? What about it? Is it difficult to get past peer review at a top conference? Yes. Is it difficult to get past peer review in general? No. You can already publish anything you want; you just have to jump through senseless hoops to do it. Why not skip the hoops and just upload it somewhere? We can still have journals and conferences that select high-quality material from the uploaded papers, and it will be an honour to be featured in one of them. You can still use being featured at conferences and in journals as a bad metric to judge the quality of researchers, but the actual publishing will be decoupled from these institutions.
ArXiv is fine for what it does, but it does not provide any sort of dissemination support.
IMO the best thing about CS conferences is the poster track where you can walk by hundreds of posters, and the information is (when done well) much more easily digestible than papers, and you get to ask questions, and these are nowhere near their limits.
Nothing has made me happier than to simply leave the academic/publishing side of CS and focus entirely on things that I find interesting and can do on a single desktop computer. If I share, I share and give people my code and ideas freely, but don't publish. Why would I want to take all my hard work and beg somebody with power to put it on a piece of paper?
If I have something useful to say I'll put it in arxiv. Peer reviewers just wasted my time.
One side effect I didn't see mentioned in the article:
One professor of mine spoke of the LPU, the least publishable unit. So, if you're lucky enough to have some novel ideas, and build something nice out of it, don't put it all into one coherent and easily digested journal paper! The number of publications counts.
Instead, chop it into little pieces that are just "novel" or noteworthy enough (LPUs), and publish them separately. Publication list inflation accomplished; and scientific progress/intelligibility/successful communication be damned.
Honestly, I think this is often said cynically but is a good practice overall. Would you rather have to read and understand one giant commit reflecting 2 years of work, or 10 well-documented and logically complete individual commits?
Formal logic cannot be reduced in character count too far, at least without making it unintelligible (fear the day Alex the intern wheels out Unicode emoji variables).
Spreading the commits out has a purpose.
Academic papers could be vastly compressed to reduce cognitive load.
Brevity and intelligibility, however, are not KPIs. Self-aggrandisement is.
The issue is that novel paper ideas will be split across multiple years (and even multiple conferences), making it much harder to actually see the whole picture for a reader. Each little piece of the paper will often also be bloated with unnecessary extra detail in order to reach the threshold for "minimum publishable paper".
Splitting up a groundbreaking idea into so many papers that the idea is lost is 1) going beyond a “minimal publishable unit” and 2) not in the authors’ interest, since getting credit for a groundbreaking idea in a correspondingly prestigious outlet is much better than getting credit for 2 or 3 bad ideas. I’m sure there’s a level of novelty where 2 irrelevant papers is better for the author than 1 single paper, but I don’t think we should design academic publishing around slightly-better-than-mediocre contributions.
I think we are not far off from virtual conferences where most sessions are presented by DALL-E-generated presenters, speaking with WaveNet-generated audio, presenting GPT-generated papers.
I think the performative, results-based culture of academia in general (particularly in Asia, and not just China) has seriously succumbed to Goodhart's law: https://en.wikipedia.org/wiki/Goodhart%27s_law
Because you are judged on citations, publication count in prestigious journals, and research dollars spent... you have people citing themselves and their friends in citation rings, gaming the referee process to pick reviewers, and doing unethical things for money.
My first experience of this was when I was asked (as a postdoc) to give a keynote for a no-name conference in Vancouver. They were paying, so why not? I later realized that my pedigree was being used to lend credence to a conference that was totally cargo cult academia. There were conference sessions, and people presenting, and bunches of people in several rooms, but nothing was actually being said and nothing was being done. They then offered me $3,000 cash in an envelope, most of which I declined for ethical reasons (I only wanted to cover my expenses). A very unique, maybe even out-of-body experience, to say the least.
If you want to see a crazy instantiation of cargo cult academia, google "extreme learning machines", which is basically a whole field built off of 2-layer feedforward neural networks with 1 randomly initialized layer. The other keynote at this conference was the guy who "started" this field.
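For the curious: the entire technique fits in a few lines of numpy. Here's a sketch (parameter names and constants are mine): freeze a random hidden layer and fit only the linear readout by regularized least squares.

```python
# Sketch of an "extreme learning machine" (numpy; names/constants invented).
# The hidden layer is random and never trained; only the linear output layer
# is fit, by ridge-regularized least squares. That's the whole "field".
import numpy as np

def elm_fit(X, y, hidden=100, reg=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))  # random input weights, frozen
    H = np.tanh(X @ W)                         # random nonlinear features
    beta = np.linalg.solve(H.T @ H + reg * np.eye(hidden), H.T @ y)
    return W, beta

def elm_predict(X, W, beta):
    return np.tanh(X @ W) @ beta
```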
I think they're just joking that some of the material presented at Chinese academic conferences is close to nonsense; it might as well be produced by DALL-E.
Can't speak to the state of Chinese academia, but I've had a few similar experiences at American conferences.
When careers are on the line and one is expected to mischaracterize one's work or its impact, people will tend to do so.
If this problem exists independently of cultural or institutional factors, maybe there are potential solutions with an equally broad range of application.
'(I'm ethnic Chinese and this is not meant to be a racially charged comment)'
That's not carte blanche to stereotype a whole nation, and it makes as much sense as a mainland Chinese person badmouthing Singapore, Taiwan, or the ABCs and using that excuse.
It doesn't matter what color you are. It doesn't matter where you're from. If you create something like what Dijkstra created back in '56 (undeniable innovation), then no door will remain shut. That's the beauty of Math, CompSci, or STEM as a whole.
Not to be too blunt, but I'd rather we keep this soft, "everyone should be accepted" nonsense out of CompSci. The cream will rise to the top. Anyone complaining just needs to improve, or innovate. The author even says the reason for rejection is often "lack of novelty".
Me personally, I'm not talented enough to even be in the same conversation as some of the people who are in that "universe", but that's okay - It just wasn't meant to be for me. If I "stay in my own lane", that creates more room for the gifted, and the elite.
> If you create something like what Dijkstra created back in '56 (undeniable innovation), then no door will remain shut. That's the beauty of Math, CompSci, or STEM as a whole.
Don’t get me wrong, this seems like a nice idea, I’d love to live in this world, but hasn’t it been disproven time and again throughout history? This board itself is flooded with engineers with great ideas being dismissed because of “realities of the business” or “changing priorities”, much less “political realities”, etc.
In fact, I’d say the reality is probably the opposite. Obviously, as a matter of principle, great ideas do eventually make it, but truly great, innovative ideas are ignored; it is only the incremental ideas that don’t make anyone uncomfortable that are accepted.
> This board itself is flooded with engineers with great ideas being dismissed because of “realities of the business” or “changing priorities”, much less “political realities”, etc.
Your argument would be rock solid if those great ideas were only applicable/useful in huge bureaucratic organizations (which is the type of environment where bullshit ideas are given more attention than truly innovative and impactful ideas). We've all probably been there before, and I get your point 100%.
However, the next step would be to go where your ideas can be heard. And if the idea is good enough, people will listen. Shouting into an org chart 10 miles deep to narcissistic C level executives has always seemed like a waste of time. So I don't disagree with you, but if you aren't being heard, speak to someone who will listen.
> However, the next step would be to go where your ideas can be heard. And if the idea is good enough, people will listen
I don't see how your point is in conflict with my point. You originally said that "no door will remain shut". My point, which your response seems to echo, is that this is not in fact true: ideas have to be marketed to the people who need them, or to "the right doors". It is not the objectively best ideas that float to the top, but the ideas that are most relevant, politically applicable, or subjectively best to the audience.
The cream isn't rising to the top. I doubt many people on HN - never mind in the STEM community as a whole - can name anyone born after 1960 doing research at the level of Dijkstra, Knuth, Hoare, and maybe Wirth.
This either means there's no one that smart or original in the last few generations - possible, but something of a stretch - or the system isn't successfully selecting, motivating, and rewarding people with that level of talent.
My suspicion is that the top people are working in the private sector, and many of them are doing very highly paid but questionably useful work as quants and system engineers.
Building things is fine, but of course it's not academic research - which is defined by the creation of game-changing concepts and philosophical structures, some of which happen to be mathematical.
Your comment may have been narrowly focused on academia, but I see the same perspectives being shared about Diversity and Inclusion in our whole industry. But it's a problem.
Literally the exact same phrasing has been used to resist removing Jim Crow laws and to resist civil rights legislation. There was always someone who could point to a black man in America who was doing just fine without any new laws.
Let's start with where you're right: You're absolutely right that if you're in the top 0.1%, hell even the top 1% of any field, you will do fine. No door will remain shut. Your comparison point is one of the most esteemed humans in the history of Computing Science. A person who was awarded one of the first 6 Turing Awards (The equivalent of the Nobel Prize in our field).
At that level of intellect, you could make the claim there is no discrimination. And you'd be wrong (Let's just talk about what happened to Alan Turing HIMSELF, and the discrimination he faced, in spite of the ideas that were so revolutionary they named the award after him). But let's pretend that you're right.
What about the rest of the field? What about the p90 of the industry - folks who want to succeed, who want to thrive, but encounter hidden and non-hidden biases. Spend 10 minutes with any woman in STEM and you'll be filled with stories of subtle and non-subtle discrimination they encounter. Is it necessary? I think not.
But at what rate? And how long will they stay in the game without recognition before giving up? And do those numbers differ based on your color or where you're from?
> But at what rate? And how long will they stay in the game without recognition before giving up? And do those numbers differ based on your color or where you're from?
If you provide that data, we can continue the conversation. However, the truths in my original post stand firm. STEM is an objective field. Sure, there may be issues with equality of opportunity, but that's outside the scope of my argument.
Equality of outcome will always be attainable through personal effort, intelligence, and innovation - But as I said in my original post, you'll need to be really good.
Not everyone will be like Hal Abelson, Margaret Hamilton, Dennis Ritchie, or Richard Stallman. Those people are built different. Those are the people I'm referring to. It is literally impossible for them to have ever not been noticed due to their next level talent and relentlessness.
I think this is naive and a classic example of a poor argument, as your statements are unfalsifiable. They’re expressions of faith in the system and in individuals outside of time and circumstance.
From my perspective, we have seen a lot of evidence that STEM is far from an objective field, and has culturally excluded a lot of talent. I see little evidence that this sort of talent magically, consistently, will fight through the cultural barriers.
I'm a black man who grew up in the projects in a single-parent household. Through willpower and dedication, I made a way. I made a way because skin color doesn't matter when you have your CCNA at 14, or when you maintain a Gentoo install at 16 and have an OpenBSD router at home built from old computer parts found in thrift stores. Mind you, these aren't even impressive achievements, but they were above average for my age group at the time.
> STEM is far from an objective field
I had every excuse to believe that, but I didn't. I'm definitely biased because I trusted the system and it worked. Most industries can be exclusive, but STEM is not one of them. Raw talent seems to be rare, so any talent is accepted.
> and has culturally excluded a lot of talent
You use 'culture' as a point of argument, which is extremely vague, yet you accuse me of presenting a poor argument due to unfalsifiable statements? I respect your point of view, but you have to be fair.
Claiming that your statements are an unfalsifiable declaration of faith in the system is not the same as taking a position that everything is bad and wrong, and doesn't have to be defended. If the reason you're speaking is because you want to convince people that the system is fine, this is one of the vague objections that your listeners (the people you're trying to convince) have.
It's your job to explain how there are no cultural effects that you can find that affect your thesis, and that you've examined the ones that people have previously mentioned. Not the listener's job to be very specific about their suspicions that cultural factors could have an effect on your thesis.
Black man, south side of Chicago, single mother, learned C at 12 (or 13) on my own. Doesn't qualify me to make sweeping cultural statements and not be questioned about them.
By culture I meant largely what was discussed in the OP, but also the gender gap and the resulting slant towards males in the CS field. Though I suppose the racial gap would also apply.
I did not mean to get into an argument on particulars of whether it’s possible for talent to break through regardless of obstacles (a high rejection rate as discussed in OP, for example), as I figured you had personal reasons for your stance about the potential for talent to break through. I didn’t feel it was right or justified to say you were wrong. You may very well be right, and a part of me hopes you are. I just felt your argument as posted wasn’t strong.
Ultimately what I’m saying is that we don’t have a reliable model (as used in social science) to determine what makes high achievements possible.
I tend not to be fatalist about social systems always determining the outcomes of individuals, I think individuals can transcend them and break through the average result, and we have examples of this such as yourself.
I am a white cis male but the son of a single mother / school teacher and first generation immigrant, and through a mix of luck, privilege and skill had success beyond what was expected of me.
I think breaking through class barriers is doable but I’m not sure if there’s a way to model how to make it repeatable. Even “talent” is a nebulous term, something that arguably is rare but is also not measurable. Maybe it’s not rare?
Mainly I wonder if some folks with great talent get stopped along the way, and what can be done about it. I liken it to random extrinsic events - just as a car accident cuts a life short, a person with the wrong PhD advisor or boss can be disillusioned or outcast, or worse. Just as car accident rates can be mitigated, what can we do to help nurture people of all walks of life in STEM?
While I do think that some cream makes it to the top, this is the kind of selection bias that's common in academia. The professors with tenure love to think that the system is fair. But even if the filters ensure that the tenured are good, it doesn't mean that the filtering process isn't excluding a number of other people who are just as good (or even better).
There aren't many slots at each level of the pyramid. Many people are excluded at each stage of the culling.
Edward Lee (the author of this piece) is the cream that has risen to the top. He’s a top level respected scientist. This piece is clearly not coming from a place of jealousy or softness.
> The emphasis on novelty has deep roots in academic publishing. It used to be that publishing was expensive, and any repetition came at the expense of other things that could have been published. Today, however, publishing is essentially free.
Requiring novelty shows respect for the readers' time. Paper and ink costs were never the primary limiting factors (even if the publishers claimed otherwise to save face).
"repetition came at the expense of other things" -- no, repetition comes at the expense equal to Number of Readers * Time Wasted on Each non-novel paper.
If the author had just stuck to "novelty is hard to really know", it would be a much stronger argument.
But without reproducibility the paper is basically worthless. There's already a crisis of unreproducible work, and the strong novelty bias is largely why.
I don't think requiring novelty is respect for a reader's time. I think requiring high quality is respect, even if the work is adding to existing bodies of evidence. Novel drivel is just drivel regardless of its novelty; it takes high-quality analysis and research for anyone to get any benefit from that newness.
Add upvotes/downvotes to preprint servers. Weight voting power by the user’s subfield, dynamically determined by who’s upvoting them, PageRank style (so a scientist who’s mostly upvoted by others working in numerical linear algebra would have a lot of voting power on a preprint describing a novel matrix factorization scheme, and very little voting power on a preprint describing a novel thread scheduling algorithm).
Take the top N upvoted preprints in a given field and showcase them at a conference for that field.
And to avoid hype cycles or gaming the system, make voting cost karma—if you choose to vote, you yourself lose some voting weight in the process, so votes would only be given out sparingly to papers that actually deserve them.
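To make that concrete, here's a rough sketch of the mechanism in Python. Everything here is illustrative: the class and function names, the damping factor, the vote cost, and the exact reputation-flow rule are all assumptions of mine, not a worked-out design.

    from collections import defaultdict

    DAMPING = 0.85     # PageRank-style damping factor (assumed value)
    VOTE_COST = 0.05   # fraction of voting power spent per vote (assumed)

    class VotingPool:
        def __init__(self, users, subfields):
            self.users = list(users)
            self.subfields = list(subfields)
            # Reputation per (user, subfield): starts uniform, refined below.
            self.rep = {u: {s: 1.0 / len(self.subfields)
                            for s in self.subfields} for u in self.users}
            # Karma spent: each vote shrinks the voter's future weight.
            self.discount = {u: {s: 1.0 for s in self.subfields}
                             for u in self.users}
            self.upvotes = defaultdict(list)  # voter -> [(author, subfield)]

        def weight(self, voter, subfield):
            # Effective voting power: earned reputation times karma remaining.
            return self.rep[voter][subfield] * self.discount[voter][subfield]

        def record_upvote(self, voter, author, subfield):
            self.upvotes[voter].append((author, subfield))
            self.discount[voter][subfield] *= (1.0 - VOTE_COST)

        def recompute_reputation(self, iterations=20):
            # Power iteration: reputation flows from voters to the authors
            # they upvote, per subfield, PageRank style.
            for _ in range(iterations):
                nxt = {u: {s: (1.0 - DAMPING) / len(self.users)
                           for s in self.subfields} for u in self.users}
                for voter, votes in self.upvotes.items():
                    share = 1.0 / len(votes)
                    for author, subfield in votes:
                        nxt[author][subfield] += (
                            DAMPING * self.weight(voter, subfield) * share)
                self.rep = nxt

        def paper_score(self, upvoters, subfield):
            # A preprint's score: summed subfield voting power of its upvoters.
            return sum(self.weight(v, subfield) for v in upvoters)

So a numerical-linear-algebra person's vote counts for a lot on a matrix-factorization preprint and next to nothing on a thread-scheduling one, and every vote cast burns a bit of their weight.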
Haha, yeah, no way anyone's going to game that system! No way some clique recruits all of the voting power (collaborators down to undergrads) to push their work through; all it takes is LeCun simply reading all the papers, and his vote will balance everything out!
Easy fix to your solution: get more people to vote. How about all 10000 undergrads in your institution?
In your model, a modern-day Einstein would never have had a chance; industrialized research, with publication mills and systematic co-authorship permutation, is exactly what would flourish.
10k undergrads would have few to no highly upvoted publications, and thus little to no voting power, even in aggregate. An even smaller fraction of them would have highly upvoted papers in the specific subfield of the paper they’re voting on, making their aggregated voting power even more worthless.
The only way to game the system would be to somehow convince a ton of highly upvoted people in a given subfield to upvote a paper in that particular subfield (and give up some of their voting power accordingly, since voting would cost you karma). Heck, make it so that you can only vote on a certain number of papers each month (5-10?), to make votes even more valuable.
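The monthly cap is trivial to bolt onto the earlier sketch; again purely illustrative, with the cap value made up from the "5-10?" range above:

    from datetime import date

    MONTHLY_VOTE_CAP = 5  # "5-10?" -- made-up value in that range

    def can_vote(past_vote_dates, today=None):
        # Reject a vote once this month's allowance is used up.
        today = today or date.today()
        used = sum(1 for d in past_vote_dates
                   if (d.year, d.month) == (today.year, today.month))
        return used < MONTHLY_VOTE_CAP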
And how do you decide who votes and how many votes each person gets? A basic democracy favors the young, foolish masses. The goal of the journal system is to put the smart, seasoned academics in charge of choosing the best material.
True, but I feel like the strength of the moderation at HN is that it is mostly formal and less editorial. People are moderated mostly for how they say things, not what they're saying, especially because the site explicitly chooses "what hackers find interesting" as its primary criterion.
Not that you couldn't have a narrower focus, but it should be an explicit focus, where the question of whether something merits moderation is a yes-or-no question that could be answered by anyone, rather than a "know it when I see it" stance that just reflects the current personal standards of an individual or clique.
Does this belong on Hacker News? Was it upvoted, and not flagged too often, by people with sufficient karma on Hacker News? Then dang doesn't have to make a decision. This is not a perfect characterization, because moderation here does, on rare occasions, drop into attempts to exclude types of discussions that the patrons find annoying (which is rationalized because the subjects are controversial and draw in people who want to fight, and repetitive because these people fight all the time and so have habits and standard arguments).
I'm really just expanding on the word "empowered" here, because most mods have absolute power unless they have to answer to other mods. The difference here is not power, it's philosophy.
Yeah, why not. Publishing online is already a like/comment/subscribe model. This would add a dislike button, and you could make it cost karma, etc. Would be preferable imo.
Anything like this is worth trying. Enhancing the reliability of peer-to-peer peer review on preprint servers so they become the de facto publishing base would be ideal. Think how many younger students could start publishing at undergrad and earlier.
Correct me if I’m wrong, but I think you’re trying to say that the phrase “toxic culture” signals something exasperating or frustrating to you about the person using it.
Is it because you don’t think that a culture can be toxic? Or perhaps that the phrase is overused or misapplied?
It's a good article, but it's not making the (obvious?) jump: the conference model in computer science is broken. No other discipline does it this way, and it creates for us this terrible program committee problem. The journal model has plenty of its own flaws, but it at least allows for iterative work. (That is, instead of rejecting a work outright, a journal can work with an author on the flaws, even if that requires doing substantial new work.) This is not to lionize the journal model -- indeed, I personally think computer science should use its current laggard status as an opportunity to find a wholly new model, preferably one that is much more amenable to practitioner participation.[0]
> The double-blind review process has eliminated natural biases that may have influenced our reviews in the past, including tendencies to reject based on gender or to accept from the “better” institutions.
I don’t know how much double-blind really helps. Especially in a specific field, you can usually tell what person/lab/company wrote the paper without seeing the authors’ names.
I once reviewed a math paper for a double-blind journal. Most math journals aren't double-blind (as you say, I can often tell who the authors are) -- but this one is focused on expository articles that undergrads might enjoy.
Anyway, I read it, didn't find it all that interesting, and recommended that the paper be rejected.
Afterwards, I googled the paper's title and found a signed copy: the author was one of the most respected scientists in our field, who had been a mentor to me and done a huge amount for my career.
I was immediately embarrassed: I rejected his work? And right then I had my moment of zen.
As someone who has been on the conference committee for (non-academic) conferences, I'm not a particular fan of blind selections. Yes, I get the desire to not just give slots to "the usual suspects" which at least historically has often happened at a lot of conferences I attended. But the reality is that there are people who you know based on overwhelming experience will give great talks that attendees will appreciate and learn from. Even if they submitted an abstract that didn't immediately catch your eye among the pile you're going to accept 25% of, do you really want to reject them? Assuming the committee has some commitment to new/less known speakers there are IMO better ways to spread the love than blinding.
> there are people who you know based on overwhelming experience will give great talks that attendees will appreciate and learn from ... do you really want to reject them?
If I'm reading correctly, you're saying that sometimes a work deserves to get in based (at least partly) on the author's name recognition (for historically having given good talks) and not solely on the merits of the work itself. I see the point you're trying to make, but something about this argument makes me uncomfortable.
An abstract isn't a talk. It's a limited description of a topic (which is often written pretty quickly by experienced presenters). So, yes, someone who consistently delivers is probably going to do a pretty good job unless the abstract just seems uninteresting; i.e., a good presenter can pick a topic I just don't think attendees would be interested in or, more commonly, one that just isn't a good fit for the program.
More broadly, yes, you need to balance having people you know will do a solid job whatever their abstract against welcoming new speakers.
Imagine a resume is just so-so. But you've worked with a person and know they're great. Do you judge them based on their resume?
Honestly this only touches a small fraction of how absurd the whole publishing system with conferences is.
I mean, just think about some obvious issues: large parts of computer science limit how much science they can publish (and thus effectively share with others) by the number of conferences people want to organize. There's also a very obvious discrimination issue, as most "high tier" conferences are either in the US or (to a lesser extent) in the EU. And it's pretty crazy that people take transatlantic flights to a conference in order to publish a paper, even if they don't really want to attend the conference.
In general I agree that a low acceptance rate, with negative selection based on arguments like "it's not novel" or "it's obvious", is toxic.
I get that conference capacity might be limited, so gatekeeping might be a necessity, but personally I like to learn useful knowledge, so something novel but theoretical might be more boring to me than something well known but improved in some interesting ways.
Also, composing solutions is interesting; you should not have to reinvent the wheel just to get approved if you want to focus on a broader perspective. The whole is greater than the sum of its parts.
But at the core of his argument is the idea that rejection is mean. He's complaining about the "it's not novel" argument now, but if everyone takes this to heart, they'll just have to come up with some other reason to reject. There are only so many resources, and academia tends to massively overproduce talent.
I completely agree with this. I have seen papers produced by a bunch of newly minted PhDs, and the first thought that came to mind was "WTF is this? There's nothing in this that's new or makes sense".
A lot of publishing from outside the top research universities seems to be aimed mainly at showing a higher publication count to get more funding and impress tenure committees.
Some of the professors who do this make you wonder how they got their PhDs in the first place. Back in my master's, I was giving a presentation on a paper from FB, where they went over the scheduling policies they had been using in their Hadoop clusters. The reviewing professor asked me: "You're talking about scheduling tasks in distributed environments, you're talking about Hadoop; Hadoop is for files, what scheduling does it do?" Some of us had a big "WTF" plastered on our faces. We couldn't even argue, because this prof thought he was a hotshot and would make life hell for students who did. In his own words to a student: "I can write a compiler, I can write an operating system, but I can't make head or tail of what you're doing."
>Serving on a PC is a yeoman’s service, and the community owes them a debt of gratitude. However, I believe that a toxic culture has emerged. This blog is a call for PCs to change their priorities.
Does that acronym not stand for what I think it does?
This is one of my pet peeves. You and dozens of other people probably spent, collectively, hundreds to thousands of minutes and brain cells wondering what PC stood for, while the people who originally typed "PC" instead of "program committee" saved maybe a fifth of a second.
It's especially annoying because they should have known that PC was already taken as an acronym by something else far more prominent within the same domain.
A lot of this is caused by the bad faith of some state actors.
Many computer science conferences are under systematic assault from these places, which swamp the PCs with submissions.
What needs to happen is:
- Regionalisation: make conferences regional only, so that submissions can only come from that area or a small group of nations. This will reduce travel demands and increase plurality.
- Sharp constraints on personal submissions: one and only one paper as an author by anyone.
- Block-outs: you get in one year, you skip a year.
I don't see a "culture of rejection". There's rejection whenever there's competition and a selection. There are tons of computer science conferences and not all of them are elitist. Even mediocre papers get published. Should we have all the submissions in the world accepted at the top conferences so that no researcher feels left behind?
> The emphasis on novelty has deep roots in academic publishing. It used to be that publishing was expensive, and any repetition came at the expense of other things that could have been published. Today, however, publishing is essentially free.
Yes, publishing is cheap or free, but attention is still scarce.
In hot fields, you have arXiv papers 2-3 generations ahead of peer review. Some have more citations than typical accepted papers. Peer review does not limit people’s attention.
The lack-of-novelty point feels off the mark, at least in some fields. In ML it's extremely common for a paper to just report a new benchmark number on some well-known dataset with virtually no real new contribution other than more time spent on hyperparameter tuning.
Academia as a whole is such an interesting place. I value it highly because I value learning. I love school, I loved the conversations and deep dives into interesting waters that nourished me in both college and graduate school, and I loved the opportunity to be fully immersed, without needing to get a job, in an atmosphere of learning.
But the rest of it? The petty hierarchies, the papers born of countless hours of hard work by extremely talented people that few people will ever read, the wage labor of adjuncts, the elitism, the comfortable cowardice of tenure - that all should be burned in fire.
I once read a book - I wish I could remember its name - that noted that even though we live in a democratic republic fueled by at least somewhat meritocratic capitalism, institutions from the feudal era still exist, the most notable being the church and the university. That was eye-opening when I read it, and to this day it has jaded my view of academia and the people who tether themselves to it. All of those noble, high-minded academics, fighting for their place in a feudal structure that they don't dare challenge. I know I sound a bit like an asshole when I write this, but: I can't truly look up to anyone who would resign themselves to a structure like that.
If the last paragraph is your perspective on being a pawn in an academic structure, I would love to hear what you think of being a pawn in a corporation.
I suppose a big enough corporation can feel the same, at least in terms of internal politics, but the difference for me is the tenure (well-paid, almost impossible to fire) and adjunct (close to minimum wage, no health insurance, completely disposable) dynamics. Indefensible. And I think the people with tenure don't object to this model simply because they enjoy their privilege.
Not OP, but at least most corporate pawns are cynically aware of how artificial the system is and that you need to play games to get ahead (or just coast along if you don't care). Academia has a luster of meritocracy when in reality you need to game things just as hard to become successful.
A solution is a site for publishing papers, with community discussions. People can rate/upvote papers, and these ratings can be used to build several metrics for papers, authors and reviewers.
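The metrics don't have to be fancy for this to work. Here's a minimal sketch in Python, where the prior, the weights, and the function names are all my own assumptions rather than a proposal for specific values:

    from statistics import mean

    PRIOR, PRIOR_WEIGHT = 3.0, 10.0  # shrink toward a neutral 3-out-of-5

    def paper_metric(ratings):
        # Bayesian-average style score, so two 5-star votes don't
        # outrank two hundred 4.5-star votes.
        return ((sum(ratings) + PRIOR * PRIOR_WEIGHT)
                / (len(ratings) + PRIOR_WEIGHT))

    def author_metric(papers_ratings):
        # One crude author metric: the mean of their papers' shrunk scores.
        return mean(paper_metric(r) for r in papers_ratings)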