There is a video by Vi Hart titled “Why Every Proof that .999… = 1 is Wrong”.

It is true that some real numbers (a set with the same cardinality as the rationals) do not have a unique decimal representation; for example, 0.493499999… = 0.4935000…. So, the point made in this video is that you can’t prove two different representations of such a number to be equal.
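The equality behind this example can be checked with exact rational arithmetic: the tail of 9s is a geometric series whose sum is exactly one unit in the last terminating digit. A minimal sketch (the variable names are my own):

```python
from fractions import Fraction

# 0.4934999... = 0.4934 + sum over k >= 5 of 9/10^k.
# The tail is a geometric series: (9/10^5) / (1 - 1/10) = 1/10^4.
head = Fraction(4934, 10**4)
tail = Fraction(9, 10**5) / (1 - Fraction(1, 10))
assert head + tail == Fraction(4935, 10**4)  # exactly 0.4935

# Partial sums approach 0.4935 from below; the gap shrinks toward 0:
partial = head
for k in range(5, 20):
    partial += Fraction(9, 10**k)
print(Fraction(4935, 10**4) - partial)
```

The point is that the infinite string of 9s denotes the *limit* of the partial sums, and that limit is exactly 0.4935, not something slightly smaller.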

But why is this so? As pointed out in my earlier post, it has to do with the way we construct the real numbers. The ambiguity in representation arises irrespective of the base (binary, decimal, or any other).

In the decimal representation of real numbers we subdivide intervals into ten equal subintervals. Thus, given $x \in [0,1]$, if we subdivide $[0,1]$ into ten equal subintervals, then $x$ belongs to a subinterval $\left[\frac{b_1}{10}, \frac{b_1+1}{10}\right]$ for some integer $b_1$ in $\{0,1,\ldots,9\}$. Repeating this subdivision at every stage, we obtain a sequence of integers $(b_n)$ with $0 \le b_n \le 9$ for all $n$ such that $x$ satisfies

$$\frac{b_1}{10} + \frac{b_2}{10^2} + \cdots + \frac{b_n}{10^n} \;\le\; x \;\le\; \frac{b_1}{10} + \frac{b_2}{10^2} + \cdots + \frac{b_n}{10^n} + \frac{1}{10^n}.$$
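The subdivision process above can be sketched directly: at each stage, scale the remaining part of $x$ by 10 and take the integer part as the next digit. A minimal sketch using exact rational arithmetic (the function name is my own choice):

```python
from fractions import Fraction

def decimal_digits(x, n):
    """Return the first n digits b_1, ..., b_n of x in [0, 1)
    from the repeated-subdivision process."""
    digits = []
    for _ in range(n):
        x *= 10
        b = int(x)       # which of the ten subintervals x falls in
        digits.append(b)
        x -= b           # keep only the part inside that subinterval
    return digits

print(decimal_digits(Fraction(1, 3), 6))  # [3, 3, 3, 3, 3, 3]
print(decimal_digits(Fraction(1, 2), 6))  # [5, 0, 0, 0, 0, 0]
```

Note that when $x$ lands exactly on a subdivision point, `int(x)` silently picks the right subinterval, producing the representation that ends in 0s; the left choice is discussed below.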

In this case we say that $x$ has a decimal representation given by

$$x = 0.b_1 b_2 \cdots b_n \cdots$$

The decimal representation of $x \in [0,1]$ is unique except when $x$ is a subdivision point at some stage, which happens when $x = \frac{m}{10^n}$ for some $m, n \in \mathbb{N}$ with $1 \le m \le 10^n$. We may also assume that $m$ is not divisible by 10.
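This criterion is easy to test: $x = m/10^n$ exactly when the denominator of $x$ in lowest terms has no prime factors other than 2 and 5. A minimal sketch (the function name is my own):

```python
from fractions import Fraction

def has_two_representations(x: Fraction) -> bool:
    """True iff x in (0, 1) is a subdivision point, i.e. x = m / 10^n,
    and hence has two decimal representations."""
    d = x.denominator  # Fraction is kept in lowest terms automatically
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1 and 0 < x < 1

print(has_two_representations(Fraction(1, 2)))         # True:  0.5 = 0.4999...
print(has_two_representations(Fraction(1, 3)))         # False: 0.333... only
print(has_two_representations(Fraction(4935, 10**4)))  # True:  the example above
```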

When $x$ is a subdivision point at the $n$-th stage, one choice for $b_n$ corresponds to selecting the left subinterval, which causes all subsequent digits to be 9, and the other choice corresponds to selecting the right subinterval, which causes all subsequent digits to be 0.
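The two choices can be simulated by the same digit-extraction idea: at an exact subdivision point, either take the right subinterval (all later digits 0) or the left one (all later digits 9). A sketch, with names of my own choosing:

```python
from fractions import Fraction

def digits(x, n, prefer_left=False):
    """First n decimal digits of x in [0, 1). At an exact subdivision
    point, prefer_left selects the left subinterval (trailing 9s)
    instead of the right one (trailing 0s)."""
    out = []
    for _ in range(n):
        x *= 10
        b = int(x)
        if prefer_left and x == b and b > 0:
            b -= 1   # step into the left subinterval instead
        out.append(b)
        x -= b       # x sits at the right endpoint after a left choice
    return out

half = Fraction(1, 2)
print(digits(half, 6))                    # [5, 0, 0, 0, 0, 0]
print(digits(half, 6, prefer_left=True))  # [4, 9, 9, 9, 9, 9]
```

After the left choice, $x$ stays pinned at the right endpoint of every subsequent subinterval, which is exactly why the 9s repeat forever.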

For example, $\frac{1}{2} = 0.4999\ldots = 0.5000\ldots$, unlike $\frac{1}{3} = 0.3333\ldots$, which has only one decimal representation.

A nice exposition is available on Wikipedia: https://en.wikipedia.org/wiki/0.999…