Decimal numbers *can* be represented exactly, if you have enough space - just not by floating *binary* point numbers. If you use a floating *decimal* point type (e.g. `System.Decimal` in .NET) then plenty of values which can't be represented exactly in binary floating point can be exactly represented.
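You can see the difference directly in Java, where `double` is binary floating point and `java.math.BigDecimal` is a floating decimal point type (a minimal sketch; the class name `DecimalVsBinary` is just for illustration):

```java
import java.math.BigDecimal;

public class DecimalVsBinary {
    public static void main(String[] args) {
        // Binary floating point: 0.1 and 0.2 are stored as the nearest
        // representable doubles, so the sum is slightly off
        System.out.println(0.1 + 0.2);   // 0.30000000000000004

        // Floating decimal point: 0.1 and 0.2 are stored exactly
        BigDecimal a = new BigDecimal("0.1");
        BigDecimal b = new BigDecimal("0.2");
        System.out.println(a.add(b));    // 0.3
    }
}
```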

Let's look at it another way - in base 10, which you're likely to be comfortable with, you can't express 1/3 exactly. It's 0.3333333... (recurring). You can't represent 0.1 as a binary floating point number for exactly the same reason. You can represent 3, and 9, and 27 exactly - but not 1/3, 1/9 or 1/27.

The problem is that 3 is a prime number which isn't a factor of 10. That's not an issue when you want to *multiply* a number by 3: you can always multiply by an integer without running into problems. But when you *divide* by a number which is prime and isn't a factor of your base, you can run into trouble (and *will* do so if you try to divide 1 by that number).
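`BigDecimal` makes this multiply/divide asymmetry concrete: multiplying by 3 always succeeds, but dividing 1 by 3 has no finite base-10 expansion, so `divide` throws unless you specify a scale and rounding mode (a sketch; the class name `PrimeFactors` is just for illustration):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class PrimeFactors {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal(3);

        // Multiplying by 3 is always exact
        System.out.println(one.multiply(three));   // 3

        // Dividing 1 by 3 is non-terminating in base 10, so BigDecimal
        // refuses to guess and throws ArithmeticException
        try {
            one.divide(three);
        } catch (ArithmeticException e) {
            System.out.println("1/3 has no exact decimal representation");
        }

        // It works once you ask for a rounded result at a given scale
        System.out.println(one.divide(three, 10, RoundingMode.HALF_UP)); // 0.3333333333
    }
}
```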

Although 0.1 is usually used as the simplest example of an exact decimal number which can't be represented exactly in binary floating point, arguably 0.2 is a simpler example as it's 1/5 - and 5 is the prime that causes problems between decimal and binary.
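You can even see what a `double` *actually* stores: the `BigDecimal(double)` constructor converts the double's exact binary value to decimal, digit for digit (a sketch; `ExactValue` is just an illustrative class name):

```java
import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        // The nearest double to 0.2 is not 0.2 - this prints its exact value:
        // 0.200000000000000011102230246251565404236316680908203125
        System.out.println(new BigDecimal(0.2));
    }
}
```

This is also why the `BigDecimal(String)` constructor is usually preferable to `BigDecimal(double)`: `new BigDecimal("0.2")` means exactly 0.2, while `new BigDecimal(0.2)` means the nearest double to 0.2.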

### Side note on the problem of finite representations

Some floating decimal point types have a fixed size, like `System.Decimal`; others, like `java.math.BigDecimal`, are "arbitrarily large" - but they'll hit a limit at some point, whether it's system memory or the theoretical maximum size of an array. This is an entirely separate point to the main one of this answer, however. Even if you had a genuinely arbitrarily large number of bits to play with, you still couldn't represent decimal 0.1 exactly in a floating binary point representation. Compare that with the other way round: given an arbitrary number of decimal digits, you *can* exactly represent any number which is exactly representable as a floating binary point number.
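That asymmetry follows from the factors again: 2 divides 10, so 1/2ⁿ = 5ⁿ/10ⁿ always terminates in decimal. A quick sketch of the binary-to-decimal direction (`BinaryToDecimal` is just an illustrative class name):

```java
import java.math.BigDecimal;

public class BinaryToDecimal {
    public static void main(String[] args) {
        // 1/1024 = 2^-10 is exact as a double, and because 2 divides 10
        // its decimal expansion is finite too - BigDecimal recovers it exactly
        double d = 1.0 / 1024;
        System.out.println(new BigDecimal(d));   // 0.0009765625
    }
}
```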