What is the minimum recommended sleep period

I was wondering whether anybody has any experience in suspending a thread
for a very short period, i.e. < 1 millisecond. Is it possible to define the
minimum possible suspend period before the OS becomes blocked?

david chivers <david@emulatorinternational.com> wrote:

I was wondering whether anybody has any experience in suspending a thread
for a very short period, i.e. < 1 millisecond. Is it possible to define the
minimum possible suspend period before the OS becomes blocked?

Using sleep()/nanosleep(), the minimum delay you can get is based
on the ticksize. The default ticksize (on most systems) is 1ms,
and due to a combination of POSIX rules and clock granularity
implementation issues, the smallest sleep you could get would
last between 1 and 2 ms (average about 1.5ms).

Configuring a smaller ticksize (it is configurable) would allow
for finer granularity – at the cost of more overhead.
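
If you want to see this on your own box, a rough (untested) sketch along
these lines will print what a 1 ms nanosleep() actually delivers, using
ClockCycles() and the syspage cycles_per_sec value as the measuring stick:

/* sleeptest.c -- rough sketch: measure what a 1 ms nanosleep() really
 * delivers under the current tick size.  Build: qcc -o sleeptest sleeptest.c */
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <sys/neutrino.h>
#include <sys/syspage.h>

int main(void)
{
    const struct timespec req = { 0, 1000000 };   /* ask for 1 ms */
    uint64_t cps = SYSPAGE_ENTRY(qtime)->cycles_per_sec;
    int i;

    for (i = 0; i < 10; i++) {
        uint64_t start = ClockCycles();           /* free-running counter */
        nanosleep(&req, NULL);
        printf("asked for 1.000 ms, got %.3f ms\n",
               (ClockCycles() - start) * 1000.0 / cps);
    }
    return 0;
}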

Also, I’m not sure what you mean by “before the OS becomes
blocked?”. Exactly what is “the OS” is a little bit blurry
under QNX, due to it being a microkernel – but the kernel
itself does not, and can not, become blocked. Only threads
in processes (including threads in the process manager, also
part of procnto) can get blocked, but this is a usual state
for them, and they aren’t part of the timer work.

How small a suspend period do you want, and for what
purpose?

-David

Please follow-up to newsgroup, rather than personal email.
David Gibbs
QNX Training Services
dagibbs@qnx.com

Hi David,
Thanks for the quick response; sorry for the confusion. My question
related to the minimum clock tick definable by ClockPeriod(). I understand that
this is defined as 10 microsecs, but is this short time interval practical
and usable for an application containing I/O and network handling? Is a
more practical minimum clock tick definable, e.g. 100 microsecs?


The minimum is very dependent on CPU and target requirements; that’s why
it’s left so open ended. The quick answer is: Try it! And measure
its impact on your benchmarks.

If you really want to get good CPU throughput then setting ClockPeriod()
to 10 msec would be an improvement on the default. And then have your
own IRQ for your high speed stuff.

Anyone tried 100 msec? :>

david chivers <david@emulatorsinternational.com> wrote:

Hi David,
Thanks for the quick response; sorry for the confusion. My question
related to the minimum clock tick definable by ClockPeriod(). I understand that
this is defined as 10 microsecs, but is this short time interval practical
and usable for an application containing I/O and network handling? Is a
more practical minimum clock tick definable, e.g. 100 microsecs?

As the other person noted, this is absolutely and completely dependent
on the hardware, and on what you need to do.

Essentially, by setting the ClockPeriod(), you are setting the frequency
of the timer interrupt (e.g. interrupt 0 on x86).

This gives a direct trade-off: more interrupts give more precise system
time, at the cost of more system overhead.

Ticksize 1ms == 1000 interrupts/sec.
Ticksize 10 us == 100,000 interrupts/sec.
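
For reference, the call itself is only a few lines. A sketch (untested;
changing the period needs root, querying it does not) that drops the tick
to 100 us and puts it back afterwards:

/* Sketch: query the current tick, drop it to 100 us, restore it later. */
#include <stdio.h>
#include <time.h>
#include <sys/neutrino.h>

int main(void)
{
    struct _clockperiod cur, fast = { 100000, 0 };   /* 100 us = 100000 ns */

    ClockPeriod(CLOCK_REALTIME, NULL, &cur, 0);      /* query only */
    printf("current tick: %lu ns\n", (unsigned long)cur.nsec);

    if (ClockPeriod(CLOCK_REALTIME, &fast, NULL, 0) == -1) {
        perror("ClockPeriod");                       /* probably not root */
        return 1;
    }
    /* ... 10,000 timer interrupts/sec from here on ... */

    ClockPeriod(CLOCK_REALTIME, &cur, NULL, 0);      /* put the old tick back */
    return 0;
}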

The cost of an interrupt comes in two ways: the cost of doing what
needs to be done directly due to the interrupt – in this case,
evaluating the timer chain, checking for any round-robin rescheduling,
and handling any user attached handlers or events for interrupt 0; and
the cost of interrupting (saving state) and resuming/restarting (restoring
state) whatever operation was in progress when the interrupt fired.

QNX may allow the ClockPeriod() to be taken down to levels where, on some
hardware, the OS & hardware can’t keep up – and system progress comes
to a halt (or a crawl so slow it is indistinguishable from a halt).

Where this point is, depends on the hardware, and what you’re trying to
do on that hardware.

-David

Please follow-up to newsgroup, rather than personal email.
David Gibbs
QNX Training Services
dagibbs@qnx.com

Evan Hillas wrote:

Anyone tried 100 msec? :>

That’s milliseconds btw.

Evan Hillas wrote:

And then have your own IRQ for your high speed stuff.

And by “high speed” I meant real-time code.

Sorry for this final question, just to check that I have understood
everything. I understand that a time interval for clock_nanosleep()
can be specified in nanosecs or microsecs. For this to work correctly,
must I change the clock period?



david chivers <david@emulatorsinternational.com> wrote:

Sorry for this final question, just to check that I have understood
everything. I understand that a time interval for clock_nanosleep()
can be specified in nanosecs or microsecs. For this to work correctly,
must I change the clock period?

Yes, while the input parameters give the appearance of nanosecond
resolution, the actual resolution is determined by the clock
period.
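
As a sketch of what that means in practice (hypothetical helpers,
untested): shrink the tick first, otherwise a 100 us clock_nanosleep()
still rounds up to the running 1 ms tick:

/* Sketch: with the default 1 ms tick a 100 us clock_nanosleep() still
 * lasts 1-2 ms; shrink the tick (once, at startup, needs root) first. */
#include <time.h>
#include <sys/neutrino.h>

void set_tick_100us(void)                 /* call once at startup */
{
    struct _clockperiod p = { 100000, 0 };
    ClockPeriod(CLOCK_REALTIME, &p, NULL, 0);
}

void sleep_100us(void)
{
    struct timespec req = { 0, 100000 };  /* 100 us */
    clock_nanosleep(CLOCK_MONOTONIC, 0, &req, NULL);
}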

-David


Please follow-up to newsgroup, rather than personal email.
David Gibbs
QNX Training Services
dagibbs@qnx.com

David Gibbs wrote:

Yes, while the input parameters give the appearance of nanosecond
resolution, the actual resolution is determined by the clock
period.

Also, if you want your code to always get a regular trigger, rather than
POSIX’s time-of-day intervals, then make sure you specify an exact
multiple of IRQ#0’s interval value.

This somewhat arbitrarily assigned value will be, I presume, constructed
by starting with the estimated clock rate of the i8254 timer chip
embedded in every southbridge, which is somewhere in the region of
1.19318 MHz, and then dividing it by the 1000 Hz tick rate:
1193180 Hz / 1000 Hz ≈ 1193 clocks per interrupt.

This results in some rounding error that is not ignored: the timer
system accumulates the discrepancy between what programs ask for and what
it believes it is delivering, and when that amount extends across the next
tick it’ll skip that tick.

So, when specifying the number of nanoseconds in clock_nanosleep() you
should ask for an exact multiple of the internally stored constant,
which will be something like 999849 nsecs. If you miss the value by
even one you’ll still get misfiring timers. This assumes the
default value on an average PC; also, programs can indirectly change it
via ClockPeriod().

A quick check with clock_getres() reveals to me a value of 999847 ns, but
according to the docs on the qtime struct the value for PC hardware is
stored in femtoseconds, so there may be some rounding error even when
using a multiple of clock_getres().
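
A sketch of that idea (hypothetical helper, untested) that rounds a
requested delay up to an exact multiple of whatever clock_getres()
reports before handing it to clock_nanosleep():

/* Sketch: round the requested delay up to an exact multiple of the
 * reported clock resolution, then sleep for that. */
#include <stdint.h>
#include <time.h>

void sleep_tick_aligned(uint64_t want_ns)
{
    struct timespec res, req;
    clock_getres(CLOCK_REALTIME, &res);                   /* e.g. 999847 ns */

    uint64_t tick = (uint64_t)res.tv_sec * 1000000000ULL + res.tv_nsec;
    uint64_t ns   = ((want_ns + tick - 1) / tick) * tick; /* round up to a multiple */

    req.tv_sec  = ns / 1000000000ULL;
    req.tv_nsec = ns % 1000000000ULL;
    clock_nanosleep(CLOCK_MONOTONIC, 0, &req, NULL);
}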

In all honesty, I think POSIX got this one wrong, if the program needs
time-of-day type information then it should be using time-of-day type
functions instead of timer functions. A compromise might be to have
error accumulation on timers and delays that increment in seconds but
higher resolution timers/delays simply firing on rounded multiples of
the IRQ with no error accumulation.

I don’t think QNX/Neutrino has any such timers without requesting
interrupt events. Maybe there is an in-between method that doesn’t
require root privileges?


I imagine this has been said before,
Evan

Speaking of IRQs, there is one that is typically unused: the RTC alarm.

This thing sits on IRQ#8 and is normally not enabled nor is IRQ#8 used
by anything else. I’ve tested it on a few PCs under QNX with no
hassles. The only concern is that your program may not be the only one
trying to use it, although it’d be a safe bet to say nothing in the
default Momentics install will use it.

Clearly the RTC can be set up to produce regular interval interrupts,
otherwise I wouldn’t have made this fuss. The good part is it can exceed
1 kHz; the bad part is it can’t do it in 1 kHz increments. Probably the
most interesting setting to you will be its fastest rate of 8192 Hz, or
about 122 usecs per interrupt.

You have the usual choice of InterruptAttach() vs
InterruptAttachEvent(). In this case I think IAE() is the better option
because it inherently provides the necessary wakeup event for when a
timer expires.
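
A bare-bones sketch of that approach (untested, x86 only; cmos_read()/
cmos_write() are hypothetical helpers, it assumes nothing else is poking
the CMOS index/data ports, and the register values follow the usual
MC146818 RTC layout):

/* Sketch: RTC periodic interrupt (IRQ 8) at 8192 Hz (~122 us) delivered
 * via InterruptAttachEvent().  Needs root for the I/O and IRQ access. */
#include <stddef.h>
#include <stdint.h>
#include <sys/neutrino.h>
#include <sys/siginfo.h>
#include <hw/inout.h>

#define RTC_IRQ  8
#define RTC_IDX  0x70                    /* CMOS index port */
#define RTC_DAT  0x71                    /* CMOS data port  */

static uint8_t cmos_read(uint8_t reg)  { out8(RTC_IDX, reg); return in8(RTC_DAT); }
static void cmos_write(uint8_t reg, uint8_t v) { out8(RTC_IDX, reg); out8(RTC_DAT, v); }

int main(void)
{
    struct sigevent ev;
    int id;

    ThreadCtl(_NTO_TCTL_IO, 0);                          /* I/O + IRQ privilege */

    cmos_write(0x8A, (cmos_read(0x8A) & 0xF0) | 0x03);   /* rate 3 -> 8192 Hz  */
    cmos_write(0x8B, cmos_read(0x8B) | 0x40);            /* enable periodic int */

    SIGEV_INTR_INIT(&ev);
    id = InterruptAttachEvent(RTC_IRQ, &ev, _NTO_INTR_FLAGS_TRK_MSK);

    for (;;) {
        InterruptWait(0, NULL);          /* wakes up every ~122 us */
        cmos_read(0x0C);                 /* read register C to re-arm the RTC */
        InterruptUnmask(RTC_IRQ, id);    /* IAE masks the IRQ on each delivery */
        /* ... high-rate work goes here ... */
    }
}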

Ideally, this would be a resource manager, so it can be loaded as a root
privileged server for multiple user programs and also manage the
necessary hardware adjustments.

The same idea applies to my rant too, as this method ensures no missed
intervals, i.e. combining the above on IRQ#0 with the ClockPeriod()
adjustment to give you your preferred tick rate.