I asked about the apparently weird behaviour of a C program when printing
floating-point values, which was discussed here a long time ago, but I
missed the final resolution of it. Thanks to the following for their
responses:
pas_at_unh.edu
dsr_at_lns598.lns.cornell.edu
szgyula_at_skysrv.Pha.Jhu.EDU
BISSON_at_BATES.MIT.EDU
The problem wasn't so much the C compiler as the example program itself:
#include <stdio.h>

int main(void)
{
    float fff = 4.333;
    float ggg = 1/65535;          /* integer division: ggg is 0.0 */

    printf("ggg = %f\n", ggg);
    printf("1/65535 = %f\n", 1/65535);   /* undefined: int passed for %f */
    printf("fff = %f\n", fff);
    printf("1/65535 = %f\n", 1/65535);   /* undefined: int passed for %f */
    printf("ggg = %f\n", ggg);

    return 0;
}
The two lines of the form
printf("1/65535 = %f\n", 1/65535);
pass an int to printf, which, given the %f specifier, expects a
double-precision floating-point value. According to ANSI C this is
undefined behaviour, so the output can be anything. It was only by
chance that all the other systems I tested produced the results we
"thought we expected" from just reading the code.
Again, the expression 1/65535 is well-defined when assigned to a variable
(integer division yields 0, which becomes 0.0 if the variable has a
floating type), but passing it as a variadic argument to a routine such
as printf is undefined, because no implicit conversion to the expected
type takes place there.
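To make the distinction concrete (a sketch, again my own illustration):
in a variadic call the default argument promotions convert a float to
double, which is why fff and ggg print correctly, but they never convert
an int to double.

#include <stdio.h>

int main(void)
{
    float ggg = 1/65535;        /* well-defined: integer division gives 0,
                                   which converts to 0.0f on assignment */

    printf("ggg = %f\n", ggg);  /* OK: ggg is promoted float -> double */
    /* printf("%f\n", 1/65535);    undefined: %f reads a double, but an
                                   unconverted int was passed */
    return 0;
}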
eyc
Received on Wed Jul 10 1996 - 21:05:12 NZST