LaMini-Flan-T5-783M outperforming Alpaca-7B in human evaluation is impressive, and I wish Flan-T5 got some more love from the community. There's too much buzz around Llama-based models; that energy could be invested into improving a more open model (Flan-T5 is licensed under Apache 2.0, versus the restrictive license of Llama).
Now I wish someone fine-tuned Flan-T5 on the dataset from Open Assistant, that could be a truly open Alpaca competitor.