When an improvement comes from a new mental model of the data/problem, it's at least as much work to fix X as to write Y from scratch — and generally much more, since you must also figure out how (or whether) to interface X's old features to the new model.
Similarly, in the case of changing mental models, most users and even developers of X won't be ready for Y when you write it. Trying to force the change onto those people will harm X and handicap acceptance of Y by confusing the Case For Y with the Case Against Killing X.
So it's hardly surprising that such improvements arrive as new projects rather than as fixes to the old one.