Hacker News | tsurg_dot_com's comments

This recent paper from Fudan University is a highly relevant read given the current industry focus on RL for LLMs (like GRPO). The authors investigate a very practical question: do the improvements brought by reinforcement fine-tuning (RFT) actually generalize beyond their training distribution when applied to multi-turn agents?
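For context on the GRPO mention: the core idea of GRPO (Group Relative Policy Optimization) is to drop the learned value baseline and instead sample a group of completions per prompt, then normalize each completion's reward within its group to get the advantage. A minimal sketch of that group-relative advantage (function name and example rewards are illustrative, not from the paper):

```python
# Sketch of GRPO's group-relative advantage: rewards for the
# completions sampled from ONE prompt are normalized to zero mean
# and unit std within that group, replacing a value-function baseline.

def grpo_advantages(rewards, eps=1e-6):
    """Normalize a group of scalar rewards to zero mean / unit std."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    # eps guards against a zero-variance group (all rewards equal)
    return [(r - mean) / (std + eps) for r in rewards]

# e.g. four sampled completions for one prompt, scored by a verifier
advs = grpo_advantages([1.0, 0.0, 0.0, 1.0])
```

Completions scoring above the group mean get positive advantage and are reinforced; the rest are pushed down, which is exactly the kind of narrow reward signal whose out-of-distribution generalization the paper is probing.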
