Hi guys,
I know this is a very old subject that has been discussed intensively
already, but I recently ran into some weird behavior with it.
In my application I have 5 independent processes: 1 of them goes to
sleep after some initial setup, 1 of them is event-driven, and the
other 3 are driven by software timers, waking up every
0.001 sec, 0.01 sec, and 0.2 sec respectively.
Among them, the one with the 1 millisecond sampling time is the most
important one.
In order to do this, I set up a timer like:
…
pid_main_timer = qnx_proxy_attach(0, NULL, 0, -1); // proxy the timer will trigger
event.sigev_signo = -pid_main_timer; // negative signo = deliver the proxy
tid = timer_create(CLOCK_REALTIME, &event);
timer.it_value.tv_sec = 1L; // wait for 1 sec at the beginning
timer.it_value.tv_nsec = 0L;
timer.it_interval.tv_sec = 0L;
timer.it_interval.tv_nsec = 1000000L; // sampling time = 1E+6 nano sec = 1 ms
timer_settime(tid, 0, &timer, NULL);
…
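One check I'm planning to add is to ask the kernel what resolution the
clock actually has before and after running ticksize. This is just a
diagnostic sketch on my side, assuming clock_getres() is available
here and reports the current tick:

#include <stdio.h>
#include <time.h>

/* Sketch: print the realtime clock's resolution, which I assume is
   what the ticksize utility changes. An it_interval shorter than
   this should presumably get rounded to a multiple of it. */
void show_clock_res(void)
{
    struct timespec res;

    if (clock_getres(CLOCK_REALTIME, &res) == -1)
        perror("clock_getres");
    else
        printf("clock resolution: %ld ns\n", res.tv_nsec);
}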
and wait for a message from the proxy:
…
while (1) // main loop
{
    /* waiting for timer wakeup from the proxy */
    sender = Receive(0, NULL, 0);
    if (sender != pid_main_timer)
    {
        printf("\nMessage from unknown process");
        break;
    }
…
After I ran the code, I found the timing was not quite reliable,
and I realized I hadn't changed the ticksize of the system clock.
So I issued
ticksize 0.5
on the command line and tried again. This time the sampling looked
much too fast, something like 20 times faster than 1 millisecond
(i.e. a period closer to 50 microseconds than to 1 ms).
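To put a real number on "too fast", I figure I can timestamp each
wakeup in the main loop and print the delta. A rough sketch (the
nsec_between() helper is my own, and I'm assuming clock_gettime()
behaves as documented here):

#include <stdio.h>
#include <time.h>

/* My own helper: nanoseconds from a to b; a long is enough for
   deltas up to about 2 seconds on this 32-bit machine. */
static long nsec_between(const struct timespec *a,
                         const struct timespec *b)
{
    return (b->tv_sec - a->tv_sec) * 1000000000L
         + (b->tv_nsec - a->tv_nsec);
}

…
struct timespec prev, now;

clock_gettime(CLOCK_REALTIME, &prev);
while (1) // main loop
{
    sender = Receive(0, NULL, 0);
    clock_gettime(CLOCK_REALTIME, &now);
    printf("\nperiod = %ld ns", nsec_between(&prev, &now));
    prev = now;
    …
}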
Can changing the ticksize affect the timer's behavior like this?
Isn't the value of timer.it_interval.tv_nsec an absolute value in
nanoseconds, independent of the tick?
I'm running this code on a PIII 850 MHz PC/104 machine.
Thanks,
Hoam.