Hi,
I'd like to change something every n us. My first application just sleeps for this n us and measures how long it actually slept. I started with 1000 us = 1 ms, because that is the default clock period for processors faster than 40 MHz. The application basically looks like this:
#include <time.h>
#include <sys/neutrino.h>

struct _clockperiod clockperiod;
struct timespec before, after, interval;
int result;

/* set the tick size to 1 ms */
clockperiod.nsec = 1000000;
clockperiod.fract = 0;
result = ClockPeriod(CLOCK_REALTIME, &clockperiod, NULL, 0);
ASSERT(result == 0);
/* timestamp before the sleep */
result = clock_gettime(CLOCK_REALTIME, &before);
ASSERT(result == 0);
/* sleep for 1 ms */
interval.tv_sec = 0;
interval.tv_nsec = 1000000;
result = clock_nanosleep(CLOCK_REALTIME, 0, &interval, NULL);
ASSERT(result == 0);
/* timestamp after the sleep */
result = clock_gettime(CLOCK_REALTIME, &after);
ASSERT(result == 0);
I calculate the difference between ‘after’ and ‘before’ and print everything on the console. In this example with 1000 us everything works fine. When I use the ‘time’ builtin it outputs:
1.05s real 0.01s user 0.00s system
Again, everything as expected.
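For reference, the difference I print is computed roughly like this (just a sketch; timespec_diff_ns is only a helper name I use here, not something from the QNX headers):

#include <stdio.h>
#include <time.h>

/* end - start, expressed in nanoseconds */
static long long timespec_diff_ns(const struct timespec *start,
                                  const struct timespec *end)
{
    return (long long)(end->tv_sec - start->tv_sec) * 1000000000LL
         + (long long)(end->tv_nsec - start->tv_nsec);
}

/* after the sleep: */
printf("slept %lld ns\n", timespec_diff_ns(&before, &after));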
When I now change the period to
clockperiod.nsec = 100000
which means a 0.1 ms tick, the program still runs fine, but according to my wristwatch it takes about 10 times longer. When I measure it with the ‘time’ builtin it still reports only about 1 s. When I change it to
clockperiod.nsec = 10000
the program needs about 100 s to finish. The strange thing is that I printf something before the sleep and after the sleep, and from that output I can clearly see that the whole 100 s are spent in the sleep, yet the clock_gettime calls still return values that differ by only about 1 s. I see this behaviour both when I run QNX 6.4.1 in VMware and when I run it directly on an OMAP-L137 (ARM) board.
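One check I still want to add is reading the period back after setting it, to confirm the new tick size is really active. This is only a sketch, but as far as I understand ClockPeriod, passing NULL for the new period and a pointer as the third argument just queries the currently active period:

#include <stdio.h>
#include <time.h>
#include <sys/neutrino.h>

struct _clockperiod current;

/* query only: leave the period unchanged, read the active value back */
if (ClockPeriod(CLOCK_REALTIME, NULL, &current, 0) == 0) {
    printf("active tick size: %ld ns\n", (long)current.nsec);
}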
Any idea what I’m doing wrong here?
Thanks for any help