
I use Vicuna[0]. It's much better than GPT4All.

Vicuna is based on the LLaMA 13B model (not 7B), and its training data includes real human conversations with GPT-4, whereas GPT4All's dataset is purely synthetic, generated by GPT-3.5.

[0] https://github.com/lm-sys/FastChat
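For anyone wanting to try it, a minimal sketch of running Vicuna through FastChat's command-line chat interface, assuming you have the merged Vicuna weights locally (the exact model path is an example; at the time of this thread, Vicuna was distributed as delta weights to be applied to the original LLaMA weights, so check the FastChat README for the current instructions):

```shell
# Install FastChat from PyPI (package name is fschat)
pip3 install fschat

# Start an interactive chat session with a local Vicuna-13B checkpoint.
# --model-path can be a local directory containing the merged weights;
# add --device cpu to run without a GPU (slow for a 13B model).
python3 -m fastchat.serve.cli --model-path /path/to/vicuna-13b --device cpu
```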



Thank you!

Could Vicuna be used to further fine-tune GPT4All to make it better?


I think GPT4All's inferior-quality dataset would make the combined model worse than Vicuna alone. Vicuna-30B will likely exceed GPT-3.5 level and approach GPT-4 level when it finishes training, but it will run slowly on CPU.





