Hacker News

I feel you. Here are some thoughts from the other side of the fence:

Social media bans aim to preserve anonymity while the reviews are blind. It is hard to convincingly maintain anonymity across so many submissions, but the effort is still worthwhile: it typically gives the less privileged a fair shot at a decent review and avoids turning reviewing into a social media popularity contest.

The policies for LLM usage differ between conferences. The only possibly valid concern with the use of AI is disclosing non-public information to an outside LLM company that might publish, or retrain on, that data before the paper becomes public (however unlikely this is in practice); for example, someone could withdraw their submission so that it never sees the light of day on the OpenReview website. (I personally disagree with this concern.) As far as I know, there is no real limitation on using self-hosted AI as long as the reviewer takes full credit for the final product, nor on using non-public AI to improve a review's clarity without pasting in the full paper text. A fraction of authors would appreciate better referee reports, so at a minimum the use of AI can bridge the language gap. I wouldn't mind conferences instituting automatic AI processing to help reviewers reduce ambiguity and avoid trivialities.

The high school track has been ridiculed, as expected. I think it is a great idea and doesn't only apply to rich kids. There are excellent specialized schools in NYC and elsewhere in the US that might find ways to get resources to underprivileged, ambitious high schoolers. A variant of such a track might one day incentivize industry to donate compute to high school programs, seeding early and powerful local communities. I learned a lot at what would be middle-school age in the US by interacting with self-motivated kids at an ad hoc computer club, and kept up the same kind of osmotic learning in the computer lab at college. The current state of AI is not super deep in terms of background knowledge, mostly super broad; some specialized high schools already cover calculus and linear algebra, and many high schools nowadays provide sufficient background in programming and elementary data analysis.

My personal reward hacking is that the conferences provide a decent way to focus my reading on the top hundred or couple hundred plausible abstracts, and even when the eventual choice is wrong I get a much better reward-to-noise ratio than from social media or from trawling arXiv directly (although LLMs help there as well). I always find it refreshing to see novel ideas in raw form, before they have been polished and before everyone can easily judge their worth. Too many of them draw unnecessary negative reviews, which is why the system includes multiple reviewers and area chairs who can make corrective decisions. It is important to keep the noise down even at the risk of missing a couple of great papers, and yet it always hurts when people dismiss greatness because of misunderstandings or poor chair choices. No system is perfect, but scaling these conferences from a couple hundred attendees a year (up to about a dozen years ago) to approaching a hundred thousand a year has worked reasonably well.




> Social media banning aims to preserve anonymity when the reviews are blind.

Then ban preprints. That's the only reasonable way to solve the stated problem. But I think we recognize that in doing so, we'd be taking steps backward that aren't worth it.

> avoiding the social media popularity contest.

The unfortunate truth is that this has always been the case. It's just gotten worse because __we__ the researchers fall for this trap more than the public does. Specifically, we discourage dissenting opinions, and we still rely heavily on authority (though we call it prestige).

> The policies for LLM usage differ between conferences.

This is known, and my comment was a direct reference to CVPR's policy being laughable.

The point I was making is not as literal as your interpretation; it is one step abstracted: the official policies are being made carelessly, in ways that are laughable and demonstrate that barely an iota of reasoning went into them. This implies the goal is to signal rather than to address the issues at hand. Because let's be real: resolving these issues is no easy task. So instead of acknowledging and tackling that difficulty, we sweep it under the rug and signal that we're doing something. But that's no different from throwing up our hands and giving up.

> The high school track ... doesn’t only apply to rich kids.

You're right in theory, but if you think this will hold in practice, I encourage you to reason a bit more deeply and talk to your peers who come from middle- and lower-class families. Ones whose parents were not in academia. Ones where they may be the only STEM person in their family, the only person pursuing graduate education, maybe even the only one with an undergraduate degree (or where that is uncommon). Ask them if they had a robotics club. A chess club. IB classes? AP classes? Hell, I'll even tell you that my undergraduate school didn't have research opportunities, and that's essentially a requirement now for grad school. Be wary of the bubbles you live in; if you do not have these people around you, consider the bias that led to that situation.

And I'll ask you an important question: do you really think the difference between any two random STEM undergraduates is large? Sure, there's decent variance, but do you truthfully think that if you picked a random STEM student from a school ranked 100 and placed them in a top-10 school (assuming money and family issues are not in the way), they would not have a similar success rate? There's plenty of data on this (there's a reason I mentioned those specific caveats, but let's recognize they aren't about the person's capabilities, which is what my question is after).

If you are on my side, then I think you'd recognize that the way we are doing things forfeits a lot of potential talent, and if you want to accelerate the path to AGI, I'd argue this is far more influential than any r̶i̶c̶h̶ ̶c̶h̶i̶l̶d̶,̶ ̶c̶h̶i̶l̶d̶ ̶o̶f̶ ̶p̶r̶o̶f̶e̶s̶s̶o̶r̶ High School track. But we both know that's not going to happen, because we care more about e̶l̶i̶t̶i̶s̶m̶ "prestige" than efficiency. (And think about the consequences of that when we teach a machine to mimic humans.)

Edit: I want to make sure I ask a different question. You seem to recognize that there is a problem; I take it you think it's small. Then why defend it? Why not try to solve it? If you think there is no problem, why? And why do you think so when so many disagree? (There seems to be a pattern in where these attitudes come from. To be clear, I truly believe everyone is working hard; I don't think anyone is trying to undermine hard work. I don't care if you're at a rank-1 or rank-100 school: if you're doing a PhD, you're doing hard work.)



