Hi. I have run into yet another annoyance in my porting effort. This
one deals with string-to-double conversions.
We have developed a fairly full-featured system for loading and storing
run-time configuration information. This consists of a compiled-in file
containing basic information, and a file read at run-time that can
override any defaults.
A metadata file is read at compile time (using various and sundry
preprocessor tricks; this stage is enough to make a C++ purist faint).
The data from this file is compiled into the resulting executable. This
file contains among other things the data definitions and the names of
all allowable options, as well as limits (minimum value, maximum value,
default value, etc.). After the preprocessor finishes, these values show
up as literals within the intermediate file.
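To give a rough idea of the compile-time side (heavily simplified, with
invented names; the real macros are much messier), what the compiler
ultimately sees is a table of literals along these lines:

/* Hypothetical sketch of what the preprocessor stage boils down to.
 * The important point is that the limits end up as double literals,
 * so the compiler itself does the string-to-double conversion. */
typedef struct {
    const char *name;   /* option name                     */
    double      min;    /* minimum allowable value         */
    double      max;    /* maximum allowable value         */
    double      def;    /* default if no override is given */
} option_def_t;

static const option_def_t option_defs[] = {
    { "some_option", 0.0, 1500.0, 100.0 },
};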
At run-time, a file is read in that contains any overrides for the
default data. If data is present in the file, it overwrites anything
already in existence. String representations are changed to numbers
using atof(), atoi(), etc.
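The run-time side boils down to something like this (again simplified,
names invented):

#include <stdlib.h>

/* Hypothetical sketch of the override path: the value arrives as text
 * from the configuration file and is converted by the library, not by
 * the compiler. */
static double value_from_override(const char *text)
{
    return atof(text);
}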
The problem we’re seeing is that 1500 does not equal 1500. That is, the
double representation of “1500” read in at compile time (and converted
to a double by the compiler) does not match the double representation of
“1500” read in at run time (and converted to a double by the library
function atof()). When the raw data is printed out in hex, the two
representations differ by one bit. This is presumably caused by the two
components (compiler vs. library) using different algorithms.
So which is correct? It seems wrong that the same string can be
converted to two different doubles. The difference is in a single
low-order bit.
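In case anyone wants to check this on their own setup, a minimal test
along these lines is enough to dump the raw bits of both conversions
(this is a simplified sketch, not our actual code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print the raw bytes of a double so the two conversions can be
 * compared bit for bit. */
static void print_bits(const char *label, double d)
{
    unsigned char bytes[sizeof(double)];
    size_t i;

    memcpy(bytes, &d, sizeof(double));
    printf("%-10s", label);
    for (i = 0; i < sizeof(double); i++)
        printf(" %02x", (unsigned)bytes[i]);
    printf("\n");
}

int main(void)
{
    double compiled = 1500.0;        /* converted by the compiler */
    double parsed   = atof("1500");  /* converted by the library  */

    print_bits("compiler:", compiled);
    print_bits("atof():", parsed);
    printf("equal: %s\n", compiled == parsed ? "yes" : "no");
    return 0;
}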
Yes, I do realize that making a direct comparison of floating point
values is asking for trouble. However, I am not really comparing two
numbers; I am comparing (I believe) the results of two different
algorithms used to convert the same string to a floating point value.
Under QNX 4 (using both the Watcom compiler and the Watcom library), the
two conversions were identical (presumably because they shared the same
underlying algorithm).
In our example case, the maximum allowable value for a piece of data is
set at 1500 in the metadata file. At run time, the configuration file
specifies a value of 1500. These two are tested using:

if (input > max) {
    // Error condition.
}

The result is that the input value is seen as exceeding the maximum.
This causes an error and system stoppage. In this case, there is no
fractional part, so the input conversion seems to be the culprit.
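Spelled out with both conversion paths visible (again, not our actual
code), the failing check amounts to this:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double max   = 1500.0;        /* literal produced at compile time */
    double input = atof("1500");  /* same text converted at run time  */

    if (input > max) {
        /* On our system this branch is taken: atof()'s result comes
         * out one low-order bit larger than the compiled literal. */
        printf("input exceeds max\n");
    }
    return 0;
}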
I know this isn’t really a problem that can be solved, but I wanted to
bring it to everyone’s attention in case it trips up anyone else. Does
anyone have any ideas for getting around this besides introducing clunky
“allowable error” logic?
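For reference, by “allowable error” logic I mean something along these
lines, with an arbitrary tolerance:

#define ALLOWABLE_ERROR 1e-9   /* arbitrary tolerance */

if (input > max + ALLOWABLE_ERROR) {
    // Error condition.
}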
On a related note, are there any rumors about getting a different
compiler for Neutrino? While GCC is great for the price, there are a
lot of places in which it seems to be lacking. There seems to be no
shortage of people on this newsgroup who are also struggling with the
compiler and tools.
Josh Hamacher
FAAC Incorporated