Yes. It’s very important. Think of it as quantifying whether a model understood the objective. This is, as you’d expect, especially important for auto-regressive models.
EDIT: btw, I don't have anything against ESG or DEI. I'm not a culture warrior complaining about 'woke AI'. I'm also not a lesswrong guy. I'm answering the OP's question "Is machine learning model "alignment" a serious academic concept? I've only seen this mentioned in pseudo-philosophical Twitter posts before." by explaining what people mean now by 'AI alignment'.
I don't think anyone uses alignment to refer specifically to DEI bullshit.
Some people use the term broadly to refer to any kind of finetuning of an LLM, so they would consider what OpenAssistant did to be alignment, even though there was no attempt to stop it from killing humanity or to make it politically correct.