Baris Dundar <email@example.com> wrote:
I faced the following problem while I was playing with QRTP: when I
call the nanosleep (or select) function with an argument of n ms, I
found that it actually takes (n+2) msec…
If I tell a process to sleep for 1ms and it sleeps for 3ms, I would NOT
call this behaviour realtime… Can somebody tell me where this extra 2ms
is coming from?
I wrote the following up for someone else several months ago to explain
the same thing. Enjoy…
(BTW, the default tick rate for QNX/Neutrino is 1ms if the CPU clock
rate is 40MHz or better, 10ms otherwise).
The following piece of code:

    for ( i = 0; i < 1000; i++ )
        delay( 1 );
With the timer resolution set to 0.5ms, this loop lasts 2 seconds (???)
With the timer resolution set to 1ms, this loop lasts 3 seconds (???)
You’re seeing timer quantization error. Let’s consider the
1ms tick rate. First off, POSIX says that it’s OK to delay too much,
but it’s not OK to delay too little. Since the calling of delay() is
asynchronous with the running of the clock interrupt, that means that
we have to add one clock tick to a relative delay to ensure the correct
amount of time (consider what would happen if we didn’t and a one tick
delay was requested just before the clock interrupt went off). That normally
adds half a tick extra delay on average. This code isn’t average. Since
the thread gets woken up by the clock interrupt, it’s now synchronized
with it and almost immediately delays again. That means that you see
the worst case of getting almost a full extra tick of delay each time.
OK, that should make the loop last 2 seconds, where’s the extra second
coming from? The problem is that when you request a 1ms tick rate, we
may not be able to actually give it to you because of the frequency
of the input clock to the timer hardware. In those cases we choose the
closest number that’s faster than what you requested. In terms of IBM
PC hardware, requesting a 1ms tick rate actually gets you 999,847 nanoseconds
between each tick. With the requested delay, that gives us the following:
1,000,000ns + 999,847ns = 1,999,847ns of actual delay.
1,999,847ns / 999,847ns = 2.000153 ticks before the timer expires
Since we only expire timers at a clock interrupt, ceil(2.000153) = 3
ticks, so each delay(1) call actually waits:
999,847ns * 3 = 2,999,541ns
Multiply that by 1000 for the loop count and you get a total loop time
of 2.999541 seconds.
Things are similar for the 0.5ms clock rate. The actual period is
499,504ns per tick. Performing the same calculations (1,000,000ns +
499,504ns = 1,499,504ns, which is 3.002 ticks, rounded up to 4, giving
1,998,016ns per call), you’ll get a total delay of about 2 seconds.
If you bump the call to delay(10) and reduce the loop count to 100
to minimize the error, you’ll get a total loop time much closer to the
1 second you expect.
Brian Stecher (firstname.lastname@example.org) QNX Software Systems, Ltd.
phone: +1 (613) 591-0931 (voice) 175 Terence Matthews Cr.
+1 (613) 591-3579 (fax) Kanata, Ontario, Canada K2M 1W8