
LlamaIndex has so much potential. Any benchmarks on performance compared to fine-tuning?


You probably don't need fine-tuning, at least if it's just new content (and no new instructions). It may even be detrimental, since LLMs are also good at forgetting: https://twitter.com/abacaj/status/1739015011748499772
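The point above is that for plain knowledge injection you can retrieve the relevant text at query time instead of training it into the weights. A toy sketch of that retrieval-augmented pattern in plain Python (the word-overlap scoring and prompt template are illustrative stand-ins, not LlamaIndex's actual API, which uses embeddings):

```python
import re

# Toy sketch of retrieval-augmented generation (RAG): rather than
# fine-tuning new content into the model, retrieve the most relevant
# chunks at query time and place them in the prompt.

def words(s: str) -> set[str]:
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def top_k_chunks(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the query (stand-in for embedding similarity)."""
    q = words(query)
    return sorted(chunks, key=lambda c: len(q & words(c)), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    context = "\n".join(top_k_chunks(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The Q3 report shows revenue grew 12%.",
    "Our office cafeteria serves lunch at noon.",
    "Revenue growth was driven by the new enterprise tier.",
]
print(build_prompt("What drove revenue growth?", docs))
```

The model's weights never change, so there is nothing to forget; updating knowledge means updating `docs`.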



