A rounding error is a type of quantization error that occurs when a number requiring high precision is approximated by a number with lower precision. Because computers store numbers in a finite number of bits, they cannot represent most real numbers (for example, irrational numbers like π, or results of division such as 1/3) with infinite accuracy. At some point the number must be cut short, or "rounded," to fit the available storage. The small discrepancy between the true, ideal value and the stored, rounded value is the rounding error, and it arises constantly in floating-point arithmetic.
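A minimal Python sketch, using only the standard library, illustrates this: 1/3 cannot be stored exactly in a 64-bit float, so the nearest representable value is stored instead, and the gap between the two is the rounding error.

```python
from decimal import Decimal
from fractions import Fraction

# 1/3 has no exact binary floating-point representation; the division
# produces the nearest representable 64-bit float instead.
x = 1 / 3

# Show the exact value that was actually stored (it is not 1/3).
print(Decimal(x))

# The rounding error: exact stored value minus the true value,
# computed as an exact fraction (a tiny but nonzero quantity).
print(abs(Fraction(x) - Fraction(1, 3)))
```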
For example, the decimal value 0.1 cannot be represented exactly in binary floating-point; its binary expansion is an infinitely repeating sequence, so the stored value is rounded to the nearest representable number. When many calculations are performed, these tiny rounding errors can accumulate and sometimes become large enough to affect the accuracy of the final result. Although the terms are often used interchangeably, a rounding error is specifically the error introduced by the act of rounding itself, whereas precision loss is the more dramatic loss of significant digits that can occur during operations such as subtractive cancellation. In high-precision measurement applications, it is important to remember that every floating-point calculation can introduce a rounding error.
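The following sketch (standard-library Python, with illustrative numbers chosen here rather than taken from the text) shows the rounded representation of 0.1, how repeated additions let the individual errors accumulate, and how subtractive cancellation differs by wiping out significant digits in a single operation.

```python
from decimal import Decimal

# 0.1 is stored as the nearest binary fraction, not exactly 0.1.
print(Decimal(0.1))  # prints a value slightly different from 0.1

# Accumulation: summing 0.1 ten thousand times does not give exactly
# 1000.0, because each addition can introduce its own rounding error.
total = 0.0
for _ in range(10_000):
    total += 0.1
print(total)                # close to, but not exactly, 1000.0
print(abs(total - 1000.0))  # the accumulated error

# Subtractive cancellation (precision loss): subtracting two nearly
# equal numbers leaves only a few meaningful significant digits.
a = 1.000000001
b = 1.0
print(a - b)  # agrees with 1e-09 to only about 8 significant digits
```

Printing the sum shows a value near 1000 but not equal to it, which is the accumulation effect described above; the final subtraction shows why cancellation is treated as a separate, more severe phenomenon than a single rounding step.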