ClockPeriod Problems

Hi,

I'd like to change something every n us. My first application just sleeps for n us and measures how long it actually slept. I started with 1000 us = 1 ms, because this is the default (clock > 40 MHz). The application looks basically like this:

clockperiod.nsec = 1000000;
clockperiod.fract = 0;
result = ClockPeriod(CLOCK_REALTIME, &clockperiod, NULL, 0);
ASSERT(result == 0);

result = clock_gettime(CLOCK_REALTIME, &before);
ASSERT(result == 0);

interval.tv_sec = 0;
interval.tv_nsec = 1000000;
result = clock_nanosleep(CLOCK_REALTIME, 0, &interval, NULL);
ASSERT(result == 0);

result = clock_gettime(CLOCK_REALTIME, &after);
ASSERT(result == 0);

I calculate the difference between ‘after’ and ‘before’ and output everything on the console. In this example with 1000 us everything went fine. When I use the ‘time’ builtin it outputs:

1.05s real 0.01s user 0.00s system

Again, everything as expected.

When I now change

clockperiod.nsec = 100000

which means 0.1 ms, the program still runs fine, but takes about 10 times longer according to my wristwatch. Measured with the ‘time’ builtin, it still appears to take only 1 s. When I change

clockperiod.nsec = 10000

the program needs about 100 s to finish. The odd thing is that I printf something before and after the sleep; from that output I can clearly see that the whole 100 s are spent in the sleep, yet the clock_gettime calls still return values that differ by only 1 s. I see this behaviour both when I run QNX (6.4.1) in VMware and when I run it directly on an OMAP-L137 (ARM processor) board.

Any idea what I’m doing wrong here?

Thanks for any help

You are not describing (at least to me) what you are doing. I see one setting of the clock period and one nanosleep for 1 millisecond. How that takes 1 second or 100 seconds, I can’t fathom.

Are you looping 1000 times?

Why not post some code?

Currently I’m only measuring how QNX performs. To be more precise, I’d like to measure the latency when I do nothing but sleep.

[code]#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sched.h>

#include <sys/neutrino.h>

typedef int bool;

#define ASSERT(expression) assert(expression, __FILE__, __LINE__)
void assert(const bool expression, const char* file, const int line) {
	if(!expression) {
		printf("Assert %s:%i\n", file, line);
		exit(-1);
	}
}

#define NSEC_PER_SEC 1000000000

void timespecnorm(struct timespec* ts) {
	while(ts->tv_nsec >= NSEC_PER_SEC) {
		ts->tv_nsec -= NSEC_PER_SEC;
		ts->tv_sec++;
	}
}

/* Note: 32-bit arithmetic; overflows for differences beyond ~2.1 s. */
int timespecdiffns(const struct timespec* after, const struct timespec* before) {
	int diffns;
	diffns = NSEC_PER_SEC * ((int) after->tv_sec - (int) before->tv_sec);
	diffns += (int) after->tv_nsec - (int) before->tv_nsec;
	return diffns;
}

int main(int argc, char* argv[]) {
	int result;
	struct timespec ts;
	struct timespec interval;
	struct timespec before;
	struct timespec after;
	struct sched_param sp;
	struct _clockperiod clockperiod;

	printf("Build Time = %s\n", __TIME__);

	if(argc != 3) {
		printf("Usage: testsleep clockus intervalus\n");
		exit(-1);
	}

	int clockus = atoi(argv[1]);
	int intervalus = atoi(argv[2]);

	interval.tv_sec = 0;
	interval.tv_nsec = 1000 * intervalus;
	timespecnorm(&interval);

	printf("Interval = %i us\n", intervalus);

	printf("Set Scheduler Policy\n");
	sp.sched_priority = 80;
	result = sched_setscheduler(0, SCHED_FIFO, &sp);
	ASSERT(result != -1); /* returns the previous policy on success, not 0 */

	printf("Set Neutrino Clock Time = %i us\n", clockus);
	clockperiod.nsec = 1000 * clockus;
	clockperiod.fract = 0;
	result = ClockPeriod(CLOCK_REALTIME, &clockperiod, NULL, 0);
	ASSERT(result == 0);

	printf("Get Clock Resolution Result\n");
	result = clock_getres(CLOCK_REALTIME, &ts);
	ASSERT(result == 0);
	printf(" = %i s %i ns\n", (int) ts.tv_sec, (int) ts.tv_nsec);

	result = clock_gettime(CLOCK_REALTIME, &before);
	ASSERT(result == 0);
	printf("Time Before = %i %li\n", (int) before.tv_sec, before.tv_nsec);

	printf("Before Sleep\n");
	result = clock_nanosleep(CLOCK_REALTIME, 0, &interval, NULL);
	ASSERT(result == 0);
	printf("After Sleep\n");

	result = clock_gettime(CLOCK_REALTIME, &after);
	ASSERT(result == 0);
	printf("Time After = %i %li\n", (int) after.tv_sec, after.tv_nsec);

	before.tv_nsec += 1000 * intervalus;
	timespecnorm(&before);

	int diffns = timespecdiffns(&after, &before);
	int diffus = diffns / 1000;

	printf("Diff = %i us\n", diffus);

	return 0;
}[/code]
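One latent bug worth flagging in the listing above: timespecdiffns does its arithmetic in a 32-bit int, so NSEC_PER_SEC * seconds overflows once ‘after’ and ‘before’ are more than ~2.1 s apart — exactly the regime you end up in when the sleep stretches to 10 s or 100 s. A 64-bit variant (plain portable C, sketched here, not part of the original post) avoids that:

```c
#include <stdint.h>
#include <time.h>

/* 64-bit difference in nanoseconds; safe for gaps far beyond the
   ~2.1 s limit of the 32-bit version in the listing above. */
static int64_t timespecdiffns64(const struct timespec* after,
                                const struct timespec* before) {
    return (int64_t)(after->tv_sec - before->tv_sec) * 1000000000LL
         + (int64_t)(after->tv_nsec - before->tv_nsec);
}
```

In your test runs the reported clock_gettime values only differ by ~1 s, so the printed Diff happens to be sane, but any run where the gap really exceeds ~2.1 s would print garbage with the 32-bit version.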

Test it first with “testsleep 1000 1000000” and then with “testsleep 100 1000000” (overflow of input values is not handled). Although the application should sleep ~1 s in both cases (at least that’s what I expect), it sleeps for 10 s in the second case (measurable only with a wristwatch, not with ‘time’).

These are the results running on a 1.6 GHz P4.

They suggest that the problem is either processor-specific or processor-speed related. If you ratchet the clock resolution up high enough on any processor, the CPU will become overwhelmed servicing timer interrupts. Maybe that is what is happening. Maybe try some periods between 1000 us and 100 us to see if there is a critical point.
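The back-of-the-envelope behind “overwhelmed servicing timer interrupts” can be sketched like this; the 5 us per-tick cost used below is an invented illustrative figure, not a measured QNX number:

```c
/* Rough model of timer-tick overhead: the fraction of CPU time
   consumed servicing clock interrupts at a given tick period.
   tick_cost_us (ISR entry + kernel tick processing) is an assumed,
   made-up figure for illustration only. */
static double tick_overhead_percent(double tick_cost_us, double period_us) {
    return 100.0 * tick_cost_us / period_us;
}
```

With an assumed 5 us per tick, that gives ~0.5% overhead at a 1000 us period, ~5% at 100 us, and ~50% at 10 us — so on a slow enough board there is some period below which the system spends most of its time in the tick handler.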

I hope that helps.

time ./timeit 1000 1000000

Build Time = 11:26:49
Interval = 1000000 us
Set Scheduler Policy
Set Neutrino Clock Time = 1000 us
Get Clock Resolution Result
= 0 s 999847 ns
Time Before = 1270380428 618195110
Before Sleep
After Sleep
Time After = 1270380429 620041804
Diff = 1846 us
1.01s real 0.00s user 0.00s system

time ./timeit 100 1000000

Build Time = 11:26:49
Interval = 1000000 us
Set Scheduler Policy
Set Neutrino Clock Time = 100 us
Get Clock Resolution Result
= 0 s 99733 ns
Time Before = 1270380438 260719578
Before Sleep
After Sleep
Time After = 1270380439 260941835
Diff = 222 us
1.01s real 0.00s user 0.00s system

Thanks for your response. I get more or less the same results, but what I can’t see from the output is whether your program took 1 s of wall-clock time. Based on your answer I guess it was always 1 s. I have already tried other values, like 500 us for ClockPeriod (which only means twice as many interrupts). The output from “top” still suggests that the system is ~99% idle, and I can move windows around as fast as before.

It looks like the internal time measurement is switched to the new ClockPeriod, but the timer interrupt frequency isn’t actually changed. In that case, if the ClockPeriod is halved, the program would run twice as long, and so on.
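A minimal model of that hypothesis (my own sketch, not QNX internals): if the kernel converts the sleep interval into ticks using the new period, but the hardware timer keeps firing at the old rate, then wall time stretches by old_period/new_period while the software clock — advanced by new_period per tick — still only counts the requested interval:

```c
#include <stdint.h>

/* Hypothetical failure mode: the sleep interval is converted into
   ticks using the NEW period, but the hardware timer still fires
   at the OLD rate, so real elapsed time is ticks * old_period. */
static int64_t simulated_wall_ns(int64_t interval_ns,
                                 int64_t new_period_ns,
                                 int64_t old_period_ns) {
    int64_t ticks = interval_ns / new_period_ns;
    return ticks * old_period_ns;
}
```

With interval = 1 s, old period = 1000 us and new period = 100 us, this yields 10 s of wall time while clock_gettime would still report only ~1 s elapsed — which would match both the wristwatch and the clock_gettime numbers reported above.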

Yes, it took 1 s both times. I assume you mean that the system reads 99% idle when clockres = 100. That suggests it isn’t the problem I described. I still think you should look for the threshold, as that might give you a clue. It could be that when clockres pushes the clock divider beyond a certain point, something goes wrong. This might fit into the “specific processor” category, or even a specific board.

Hmm, thanks for your answer. I’ll try on another computer. But for the OMAP-L137 eval board I guess I’ll have to fix it myself.

Edit: I’ve tested it on a physical computer and it seems to work. I then installed VirtualBox, and the ClockPeriod sample works as expected there too; therefore VMware apparently has at least one problem here.