Hi,
I’m doing research on RTOSes. I’ve learned that an important measure of the
quality of an RTOS is its interrupt latency and the jitter (the standard
deviation of that latency) associated with it.
I can’t find anything about jitter. Could anyone tell me something about
this jitter, in microseconds, as far as QNX is concerned?
I know the value depends on the machine used, but that doesn’t matter;
I’m just looking for a typical figure.
Thanks in advance,
Paolo.
Hi Paolo…
You could always write a program that, on each interrupt, toggles a digital
output line, then measure that signal on an oscilloscope and post the
results for everyone’s benefit. Note that a software timer interrupt
will give different results than a hardware-generated interrupt.
Regards…
Miguel.
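
For anyone who wants to try this, here is a minimal sketch of the kind of
test Miguel describes. It assumes a QNX Neutrino target; IRQ 7 and the
parallel-port data register at 0x378 are only placeholders for the interrupt
source and the digital output the scope watches, so adjust for your hardware.

/* Sketch only: QNX Neutrino assumed.  IRQ 7 and port 0x378 are
 * placeholders -- substitute whatever interrupt line and digital
 * output your board actually has. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/neutrino.h>
#include <hw/inout.h>

#define IRQ_LINE  7        /* hypothetical interrupt to watch            */
#define LPT_DATA  0x378    /* hypothetical output port seen by the scope */

int main(void)
{
    struct sigevent event;
    uintptr_t       port;
    int             id;

    /* I/O privileges are needed for out8() and InterruptAttachEvent(). */
    if (ThreadCtl(_NTO_TCTL_IO, 0) == -1) {
        perror("ThreadCtl");
        return EXIT_FAILURE;
    }

    port = mmap_device_io(1, LPT_DATA);

    /* Have the kernel deliver an event to this thread on each interrupt. */
    event.sigev_notify = SIGEV_INTR;
    id = InterruptAttachEvent(IRQ_LINE, &event, _NTO_INTR_FLAGS_TRK_MSK);
    if (id == -1) {
        perror("InterruptAttachEvent");
        return EXIT_FAILURE;
    }

    for (;;) {
        InterruptWait(0, NULL);        /* block until the interrupt fires  */
        out8(port, 0xFF);              /* raise the output lines...        */
        out8(port, 0x00);              /* ...and drop them: a pulse on scope */
        InterruptUnmask(IRQ_LINE, id); /* re-enable the kernel-masked IRQ  */
    }
}

The scope then shows the delay, and its spread, between the hardware edge
that raised the interrupt and the pulse on the output port, which is exactly
the latency and jitter Paolo is asking about.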
Paolo wrote:
> Could anyone tell me something about this jitter, in microseconds, as far
> as QNX is concerned?
I did a little watching of IRQ#8 from the RTC alarm generating an 8192 Hz
interrupt on my 500 MHz K6 and 1 GHz K7 a couple of years back, snapping
the CPU’s free-running counter on each interrupt. The jitter was only a
few microseconds, with occasional spikes to about ten usec on the K7; the
K6 was roughly double those numbers.
I also watched IRQ#0 from the system timer, which the kernel assigns the
lowest priority. That produced more variation in the jitter. I’m vague on
the exact values, but I do remember the width of the jitter expanding and
shrinking as I did things on the display, copied files, and whatnot.
The change wasn’t huge, say an additional ±5 usec.
Those are peak values, by the way; peak-to-peak values are double, of course.
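
For what it’s worth, that measurement boils down to a short program. This is
only a sketch, assuming the RTC alarm has already been programmed to fire
IRQ 8 at 8192 Hz (that setup is not shown); it timestamps consecutive
interrupts with ClockCycles() and prints the period statistics in usec.

/* Sketch only: QNX Neutrino assumed, RTC alarm already firing IRQ 8 at
 * 8192 Hz.  Reports min/max/mean period and the standard deviation
 * ("jitter") over a run of consecutive interrupts. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>
#include <sys/neutrino.h>
#include <sys/syspage.h>

#define IRQ_RTC  8
#define SAMPLES  8192

int main(void)
{
    struct sigevent event;
    double   cps = (double)SYSPAGE_ENTRY(qtime)->cycles_per_sec;
    double   sum = 0.0, sumsq = 0.0, lo = 1e12, hi = 0.0;
    uint64_t prev, now;
    int      id, i;

    ThreadCtl(_NTO_TCTL_IO, 0);               /* privilege for the attach */

    event.sigev_notify = SIGEV_INTR;
    id = InterruptAttachEvent(IRQ_RTC, &event, _NTO_INTR_FLAGS_TRK_MSK);

    InterruptWait(0, NULL);                   /* first tick = reference   */
    prev = ClockCycles();
    InterruptUnmask(IRQ_RTC, id);

    for (i = 0; i < SAMPLES; i++) {
        InterruptWait(0, NULL);               /* next RTC tick            */
        now = ClockCycles();                  /* free-running CPU counter */
        InterruptUnmask(IRQ_RTC, id);

        double d = (double)(now - prev) / cps * 1e6;  /* period in usec   */
        prev = now;

        sum   += d;
        sumsq += d * d;
        if (d < lo) lo = d;
        if (d > hi) hi = d;
    }

    double mean = sum / SAMPLES;
    double sdev = sqrt(sumsq / SAMPLES - mean * mean);

    printf("period: mean %.2f us  min %.2f  max %.2f  jitter (sdev) %.3f us\n",
           mean, lo, hi, sdev);
    return 0;
}

At 8192 Hz the nominal period is about 122 usec; how far the measured
periods wander around that, and the standard deviation over the run, is the
jitter figure, and it will grow with whatever higher-priority activity
happens to delay the handler.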