$Id: why_reals_arent.txt,v 1.2 2011/01/22 01:28:38 db Exp db $

Invariably, some new C programmer comes asking why a float or even a
double calculation does not come up with an answer they expect.  That's
because floats and doubles are _not_ real.  That is, they are only
clever integer approximations to the real number line.  Here's an
illustration of what can happen:

    #include <stdio.h>

    int main(void)
    {
        float a;

        a = (0.1 / 3.0);
        printf("a= %f\n", a);
        a = a * 3.0;
        printf("a= %f\n", a);
        if (a == 0.1)
            printf("a is equal to 0.1\n");
        else
            printf("a is not equal to 0.1\n");
        return 0;
    }

    cc -o f f.c -lm
    ./f
    a= 0.033333
    a= 0.100000
    a is not equal to 0.1

The number 0.1 / 3.0 should work out to exactly 1/30.  In C, the best
we can do is a binary approximation.  Since we can only use binary and
a fixed number of bits to represent the part after the '.' (the
fraction bits of the significand, or mantissa), the stored value is not
an exact equivalent of the real number 1/30.  On top of that, the test
`a == 0.1' compares a float against the double constant 0.1; the two
types round 0.1 differently, so even though both are close to 0.1,
they do not compare equal.