Maybe we don't need to worry: OpenLLaMA is being trained right now, and it is intended as a commercially usable version of LLaMA.
> Update 05/22/2023
> We are happy to release our 700B token checkpoint for the OpenLLaMA 7B model and 600B token checkpoint for the 3B model. We’ve also updated the evaluation results. We expect the full 1T token training run to finish at the end of this week.
https://github.com/openlm-research/open_llama
So we could build on LLaMA for now and switch to OpenLLaMA later.