I guess so; at least that's what people with a lot of experience with language models, like janus, are reporting (see link in sibling).
Though I should mention that mode collapse doesn't just come from supervised instruction tuning (which lets the model reply to requests instead of treating them as completion prompts), but also from things like RLHF, which biases the model toward giving certain replies rather than others.
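To make the RLHF point concrete, here's a toy sketch of my own (not from the linked discussion): if you think of the base model as a distribution over candidate replies and RLHF as tilting it toward high-reward ones (roughly p(x) ∝ p0(x) · exp(r(x)/β), as in a KL-regularized objective), the entropy of the output distribution drops, i.e. the model concentrates on a few "safe" modes. The specific probabilities and rewards below are made up for illustration:

```python
import math

def entropy(p):
    """Shannon entropy in bits."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def rlhf_tilt(p0, rewards, beta):
    """Reweight base probabilities toward high-reward replies.

    Smaller beta = stronger tilt (weaker KL penalty back to the base model).
    """
    w = [q * math.exp(r / beta) for q, r in zip(p0, rewards)]
    z = sum(w)
    return [x / z for x in w]

# Hypothetical base model: fairly even over five candidate replies.
p0 = [0.3, 0.25, 0.2, 0.15, 0.1]
# Hypothetical reward model strongly prefers reply 0 (say, polite boilerplate).
rewards = [2.0, 0.5, 0.3, 0.2, 0.1]

tuned = rlhf_tilt(p0, rewards, beta=0.5)
print(f"base entropy:  {entropy(p0):.2f} bits")
print(f"tuned entropy: {entropy(tuned):.2f} bits")  # lower: mass piles onto one mode
```

The tuned entropy comes out much lower than the base entropy, which is basically mode collapse in miniature: most of the probability mass lands on the single highest-reward reply.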
so wait, is this why all these chatgpt answers in HN comments sound so similar and are thus easy to detect?