No, what actually happened is that the people you are calling the cultists coined the term alignment, which then got appropriated by the AI labs.
But the genesis of the term "alignment" (as applied to AI) is a side issue. What is important is that reinforcement learning from human feedback and the other techniques used on the current crop of AIs to make it less likely that the AI will say things that embarrass the owner of the AI are fundamentally different from making sure that an AI that turns out to be more capable than us will not kill us all or do something else awful.
That's simply factually untrue, and even some of the people who have become apocalypse cultists used "alignment" in the original sense before coming to advocate apocalypse as the only issue of concern.