>The decimal representation of a number is really a string representation
It is incorrect to speak of "the" decimal representation of a number, as many numbers have non-unique decimal representations, the most famous example being 1.000...=0.999...
The definition that makes it unique is the shortest representation, where trailing zeros are not included. In your example that would be 1. Note that this definition comes up in binary floating point: there are infinitely many decimal representations that will round back to a given binary64 float, but the decimal representation chosen is the shortest (and, in the case of ties for length, the closest to the binary64 float).
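A quick sketch of this in Python, which has used the shortest round-trip representation for float `repr` since 3.1:

```python
from decimal import Decimal

x = 0.1

# repr() picks the shortest decimal string that rounds back to
# exactly the same binary64 float.
print(repr(x))  # '0.1'

# The exact value stored in the binary64 float has many more digits:
exact = Decimal(x)
print(exact)  # 0.1000000000000000055511151231257827021181583404541015625

# Both the short string and the exact expansion round to the same float...
assert float("0.1") == float(str(exact))

# ...and the shortest representation round-trips exactly.
assert float(repr(x)) == x
```

So "the" decimal representation shown to you is not the value stored; it is the shortest string that survives the round trip.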
> It is incorrect to speak of "the" decimal representation of a number, as many numbers have non-unique decimal representations, the most famous example being 1.000...=0.999...
That's true and I'll concede that point, but it's not really relevant to what I said. That just means some numbers have different string representations that represent the same object. That doesn't really contradict anything in my post except the use of the definite article.