Hacker News

Maybe what they should do in the future is automatically provide AI reviews for all papers and state that the reviewers' job is to correct any problems and fill in details the AI missed. That would encourage manual review of the AI's output, and it would also let authors predict, in a structured way, what kind of feedback they'll get. (E.g., if the standard prompt were made public, authors could optimize their submissions for the initial automatic review, forcing the human reviewers to fill in the gaps.)

Of course, the human reviewers could still use AI here, but then so could the authors, ad infinitum...
