Hi.
I have the following C code.
#include <stdio.h>
#include <math.h>

int main(int argc, char *argv[])
{
    // float yInit, yInc, yNext, y;
    double yInit, yInc, yNext, y;
    double yMin, yMax, yMinimum = 0.1, yMaximum = 10.9;

    yMin = log10(yMinimum);
    yMax = log10(yMaximum);
    printf("Min: %g Max: %g log10(Min): %g log10(Max): %g\n",
           yMinimum, yMaximum, yMin, yMax);

    yInit = pow(10, floor(yMin));   // decade boundary at or below yMinimum
    yInc = yInit;
    yNext = yInit * 10;
    printf("Init: %g Inc: %g Next: %g\n", yInit, yInc, yNext);

    for (y = yInit; y <= yMaximum; y += yInc) {
        if (y < yMinimum)
            continue;
        if (y >= yNext) {           // crossed a decade: step up the increment
            yInc *= 10;
            yNext *= 10;
        }
        printf("%g\n", y);
    }
    return 0;
}
When I define y and yNext as floats, the code works. When I define y and
yNext as doubles, it doesn't. Why?
I have tried this with two different compilers (Watcom C under QNX 4 and
Visual C++ under Windows 95), with the same result.
I would expect the comparison (y >= yNext) to be true when y reaches 1.0 and
yNext is 1.0. That is not what happens.
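To isolate it, I wrote the smaller test below. My guess is that the repeated
y += yInc is accumulating rounding error, so y never exactly equals 1.0; this
assumes IEEE-754 style binary floating point, which I believe both of my
compilers use:

#include <stdio.h>

int main(void)
{
    double d = 0.0;
    float  f = 0.0f;
    int i;

    /* add 0.1 ten times; with exact arithmetic both sums would be 1.0 */
    for (i = 0; i < 10; i++) {
        d += 0.1;
        f += 0.1f;
    }

    /* on my machines the double sum seems to land just below 1.0,
       while the float sum lands at or just above it */
    printf("double: %.17g  (d >= 1.0 is %d)\n", d, d >= 1.0);
    printf("float:  %.9g  (f >= 1.0f is %d)\n", (double)f, f >= 1.0f);
    return 0;
}

If that is what's going on, it would explain why the double version never
takes the (y >= yNext) branch at 1.0 while the float version does.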
Any ideas?
Thanks a lot.
Augie