This is kinda creepy. But at the same time, how do they do that? I thought training for these models stopped at the September 2021/2022 cutoff. So how do they do this incremental training?
But doesn’t finetuning result in forgetting previous knowledge? It seems that finetuning is most useful for training “structure,” not new knowledge. Am I missing something?