# Why Python's Floating Point Math Isn't Always Exact

If you've ever tried doing math with decimals in Python, you might have encountered some unexpected results:

```
>>> 0.1 + 0.2
0.30000000000000004
>>> 1.2 - 1.0
0.19999999999999996
```

You'd expect `0.1 + 0.2` to equal `0.3` exactly, and `1.2 - 1.0` to be `0.2`. So what's going on here?

The reason is that most decimal fractions cannot be represented exactly as binary floating-point numbers. Computers store floats in binary (base 2), while we're used to decimal (base 10) notation. Just as the fraction 1/3 can't be written exactly in base 10 (it becomes the repeating 0.33333...), many common decimal fractions like 0.1 and 0.2 can't be written exactly in base 2. What Python prints is the shortest decimal string that round-trips back to the binary value actually stored, which is why the result looks slightly off.
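You can inspect the stored approximation directly. As a quick sketch using the standard library, passing a float to `Decimal` reveals the exact binary value, and a wide format spec shows the same thing with fewer digits:

```python
from decimal import Decimal

# Constructing a Decimal from a float exposes the exact value stored in binary
print(Decimal(0.1))
# → 0.1000000000000000055511151231257827021181583404541015625

# Formatting with extra digits shows the same approximation
print(f"{0.1:.20f}")
# → 0.10000000000000000555
```

Note that `Decimal(0.1)` (a float argument) captures the binary approximation, while `Decimal('0.1')` (a string argument) captures the exact decimal value — that distinction matters in the examples below.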

This issue isn't specific to Python - it's a consequence of how computers handle floating-point arithmetic based on the IEEE 754 standard. For most applications, these tiny precision errors don't matter. But if you need exact decimal arithmetic, like for financial calculations, you can use Python's built-in `decimal` module:

```
>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.2')
Decimal('0.3')
```

The `decimal` module lets you do math with as much precision as you need. You can also avoid surprises by comparing floating point numbers within a certain tolerance, rather than expecting them to be exactly equal:

```
>>> abs((0.1 + 0.2) - 0.3) < 0.0001
True
```
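For a hand-rolled tolerance check like the one above, the standard library also offers `math.isclose` (Python 3.5+), which uses a relative tolerance by default:

```python
import math

# Uses a relative tolerance of 1e-09 by default; pass abs_tol for
# comparisons against values near zero
print(math.isclose(0.1 + 0.2, 0.3))
# → True
```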

So while floating point math can sometimes seem unintuitive, with a bit of understanding you can handle it like a pro. Just remember - when in doubt, reach for `decimal`!