Vicuna is based on LLaMA-13B (not 7B), and its training data consists of real human conversations with ChatGPT shared via ShareGPT, versus GPT4All's purely synthetic dataset generated by GPT-3.5.
[0] https://github.com/lm-sys/FastChat
Could Vicuna be used to further fine-tune GPT4All to make it better?
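In principle, yes: you could distill Vicuna's outputs into the prompt/response format GPT4All trains on. The first step is just reshaping multi-turn chat logs into instruction pairs. Here is a minimal sketch; the field names (`"from"`, `"value"`, `"human"`, `"gpt"`) follow the common ShareGPT export layout and are an assumption, not a documented GPT4All schema:

```python
import json

def conversations_to_pairs(conversations):
    """Flatten ShareGPT-style chat logs into prompt/response pairs
    suitable for instruction fine-tuning. Field names are assumed
    from the common ShareGPT export format."""
    pairs = []
    for conv in conversations:
        turns = conv.get("conversations", [])
        # Pair each human turn with the assistant turn that follows it.
        for prev, cur in zip(turns, turns[1:]):
            if prev.get("from") == "human" and cur.get("from") == "gpt":
                pairs.append({"prompt": prev["value"],
                              "response": cur["value"]})
    return pairs

sample = [{"conversations": [
    {"from": "human", "value": "What is Vicuna?"},
    {"from": "gpt", "value": "A chat model fine-tuned from LLaMA-13B."},
]}]
print(json.dumps(conversations_to_pairs(sample), indent=2))
```

The actual fine-tuning step on top of such pairs would then use whatever training stack GPT4All ships with; the reshaping above is the only part that is framework-independent.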