Because the problems now cut across the responsibilities of components that were built on assumptions made for the sake of removing the spaghetti. Because now you need to account for a "global" parameter (the protocol version) in lots of disparate "local" scopes: individual functions that are quickly losing their encapsulation.
We're discussing a hypothetical code-base, so focusing on the details probably isn't going to be very helpful. In any case, I don't see how the argument "flows from the outcome" of the refactor you're focusing on.
Specifically, he’s using the failed outcome of this hypothetical refactoring to argue that new_sequence_number should never have been broken out into a separate function in the first place because it separated the foosoft==3 case from the conditional, which made the bad refactor more likely to happen.
Certainly, in this hypothetical 1500-line function, the two cases might have appeared close to each other, or they might not have. In this particular case, it's entirely possible that the refactor was made easier to execute successfully because the programmer could leverage the compiler to find all the locations that needed to be updated.
In the end, he’s demonstrated a flawed hypothetical procedure and asserted that it’s worse than an imagined alternative.
As usual, the argument being unsound doesn’t mean the conclusion is incorrect, only that it hasn’t been demonstrated here.
Personally (anecdotally), I don't believe that this statement is true. In my experience, an appropriate refactor will always be more readable than any spaghetti code. I may be wrong and your statement may be true, but without good examples of such refactors, who can say?
The above commenter is just pointing out that the example given in the article is absurd. Which it is. And with that, the argument is lost.