process manager (procnto) preemption

We are running QNX 6.1 on a MIPS target. Part of our hardware generates an
interrupt to our processor every 1 ms. The interrupt needs to be serviced
within a couple hundred microseconds. This is a real-time application, so it
is critical that this interrupt be serviced at every 1 ms interval.
Currently, we service the interrupt by using InterruptAttach() to get/set
data from the device, and use InterruptWait() in a high-priority thread
(priority ~50) to do the work needed on the data received.

Here's the problem: whenever a new process is launched, even one as small
as "hello world", we have consistently observed that our high-priority
thread (at ~50) is preempted by as much as 20 ms (!). We still see our ISR
executing at 1 ms intervals, so only our service thread is being preempted
(that is, nobody is disabling interrupts). Since procnto has one
high-priority thread that sits at priority ~63, it must be responsible for
this preemption.

We need to be able to launch applications at runtime, but absolutely
cannot tolerate the 20 ms (!) preemption caused by procnto. We also cannot
run our service code entirely in the ISR, since ISR stack space is limited
to about 200 bytes (far too small). And we can't seem to launch our thread
at a higher priority than procnto. Is there a way that procnto can be run
or configured so that it does not starve our real-time service threads?


thanks in advance!
-george

How did you detect that your thread is preempted while the ISR is still
running? What other programs are running at the same time? Are all your
threads running at priority 50?

Our application runs on a 600 us interval (the hardware doesn't allow it to
go faster) at priority 22 (to overpower the network manager, which runs at
21).



Can you determine if your application gets preempted when a new process is
created?

To distill things down even further, we did the following:

** Every process on the box except our time-critical process runs below
priority 30. This includes procnto - we lowered its priority to 30 using
renice.
** We raised the interrupt-handling thread of our time-critical process to
priority 63 - the highest priority available in QNX.
** There are no other interrupts being processed on our box except the one
attached to our high-priority thread.

Under these conditions, whenever any new process is created - even one as
simple as "pidin" or "ls" - our (priority 63) interrupt-handling thread is
preempted for up to 20 milliseconds. We monitor the interrupt line to the
CPU with a logic analyzer. We also use the logic analyzer to monitor a
debug port that we write to in the interrupt-handling thread. The logic
analyzer output shows that the interrupt line is asserted (and the
associated ISR executed), but that the high-priority thread (attached to
that ISR) is preempted for a long time. What can possibly be preempting a
priority 63 thread for 20 milliseconds?

thanks!
-george





Be aware that the threads in procnto run in response to client requests and,
as such, will float to the client priority when that request comes into the
system. So you can’t really renice procnto.


Are you running pidin from the disk? If so, try copying the binary
into /dev/shmem and running from that location. Just trying to narrow down
variables.

chris


Chris McKillop <cdm@qnx.com> “The faster I go, the behinder I get.”
Software Engineer, QSSL – Lewis Carroll –
http://qnx.wox.org/

I wonder if procnto raises its own priority to carry out process-creation
tasks.

I ran pidin from memory (/dev/shmem) - the results are the same (nice
suggestion though, thanks). The preemption does not seem to be a strong
function of the "size" of the application launched: whether I run "pidin" or
launch "slinger", the same preemption occurs. Also, if I do an execl() on an
existing process, the preemption occurs (when execl is executed) just as if
a new process had been spawned.

I’m stumped.


-george



SW Engineer <georgeb@berkeleyprocess.com> wrote:

I wonder if procnto raises its own priority to carry out process-creation
tasks.

Don’t think that is the case, but I have never looked either.

What version of QNX are you using? (uname -a output).

chris



Hi George,

It is quite difficult to diagnose a problem just by talking about it. I
don't remember any well-known issue which might cause this. It would be more
effective if you could post an example that clearly shows the behaviour you
described, or at least the output of pidin while the program is running, as
a starting point.

First, take a look at the example below; I hope it resembles your task. I
chose the RTC as a source of interrupts. You can play with the priority and
rate. It works fine at any rate on a P200 at priority 22 - nothing preempts
the IRQ thread.

I don't think the OS version is an issue here. I ran this test on 6.2, but I
don't remember any problems running a similar task on 6.1 a while ago.

I am also pretty sure the procnto priority has nothing to do with this
problem. As Chris mentioned, procnto is client-priority driven: a
high-priority procnto thread is only an indication that some other process
running at that priority called a procnto service. Most of the time all of
procnto's threads are RECEIVE-blocked.

Sincerely,
Serge

#include <unistd.h>
#include <stdlib.h>
#include <stdint.h>
#include <pthread.h>
#include <sys/slog.h>
#include <sys/slogcodes.h>
#include <sys/syspage.h>
#include <sys/siginfo.h>
#include <sys/neutrino.h>
#include <x86/inout.h>

#define RTC_CMD_ADDR 0x70   /* RTC internal register offset goes here */
#define RTC_DAT_ADDR 0x71   /* RTC internal register R/W access here */

#define RTC_REG_A 0x0A      /* RTC register offset for accessing Reg. A */
#define RTC_REG_B 0x0B      /* RTC register offset for accessing Reg. B */
#define RTC_REG_C 0x0C      /* RTC register offset for accessing Reg. C */
#define RTC_REG_D 0x0D      /* RTC register offset for accessing Reg. D */

#define RTC_IRQ 8
#define SLOG_CODE (_SLOGC_NEXT_QNX+1)

static uint64_t ih, it;
static int rate = 6; /* 0.000977 s */

const struct sigevent *interrupt_handler( void *data, int id )
{
    struct sigevent *event = (struct sigevent *) data;

    /*
     * Mask the interrupt until the interrupt thread
     * processes this event.
     */
    InterruptMask( RTC_IRQ, id );
    ih++;

    /*
     * Arm RTC
     */
    out8( RTC_CMD_ADDR, RTC_REG_C );
    in8( RTC_DAT_ADDR );

    return event;
}

void * interrupt_thread( void *data )
{
    uint64_t c1, c2, cps;
    int id;

    ThreadCtl( _NTO_TCTL_IO, 0 );

    /*
     * Enable RTC periodic IRQ
     */
    out8( RTC_CMD_ADDR, RTC_REG_C );
    in8( RTC_DAT_ADDR );

    InterruptDisable();
    out8( RTC_CMD_ADDR, RTC_REG_B );   /* select RTC register B */
    out8( RTC_DAT_ADDR, 0x42 );        /* set Periodic Interrupt Enable bit */

    out8( RTC_CMD_ADDR, RTC_REG_A );   /* select RTC register A */
    out8( RTC_DAT_ADDR, 0x20 | rate ); /* set rate, oscillator enabled */
    InterruptEnable();

    id = InterruptAttach( RTC_IRQ,
                          interrupt_handler,
                          data,
                          sizeof( struct sigevent ),
                          _NTO_INTR_FLAGS_TRK_MSK );

    cps = SYSPAGE_ENTRY( qtime )->cycles_per_sec;
    c1 = ClockCycles();

    for( ;; ) {
        InterruptWait( 0, NULL );
        c2 = ClockCycles();
        it++;
        slogf( _SLOG_SETCODE( SLOG_CODE, 0 ), _SLOG_DEBUG1,
               "IRQ: %f %lld %lld", ( c2 - c1 ) / (double) cps, ih, it );
        c1 = c2;
        InterruptUnmask( RTC_IRQ, id );
    }
}

int main( int argc, char **argv )
{
    pthread_t tid;
    pthread_attr_t pattr;
    struct sched_param parm;
    struct sigevent event;
    int c, prio = 10;

    while( -1 != ( c = getopt( argc, argv, "p:r:" ))) {
        switch( c ) {
        case 'p':
            prio = atoi( optarg );
            break;

        case 'r':
            c = atoi( optarg );
            if( c > 2 && c < 16 )
                rate = c;
            break;
        }
    }

    SIGEV_INTR_INIT( &event );
    pthread_attr_init( &pattr );
    pthread_attr_setschedpolicy( &pattr, SCHED_RR );
    parm.sched_priority = prio;
    pthread_attr_setschedparam( &pattr, &parm );
    pthread_attr_setinheritsched( &pattr, PTHREAD_EXPLICIT_SCHED );

    pthread_create( &tid, &pattr, interrupt_thread, &event );
    pthread_join( tid, NULL );
    return 0;
}





Hmmm, is the interrupt thread communicating with procnto? This would explain
the 63 procnto thread, since it would rise to meet the priority of the
client.

WRT procnto versions, is it possible for you to try 6.2?



cburgess@qnx.com

Running uname -a on my target gives:

QNX localhost 6.1.0 2001/06/25-15:59:35edt BPC-MERCEDmipsle

(we had QNX do the BSP for us - we are running on a MIPS platform with
special hardware).



Thanks Serge,

I took a look at your code. It is very similar to what we're running. The
only difference is that we do not mask our interrupt source (and wouldn't
have to if it weren't for this problem). Otherwise, the use of
InterruptAttach() and InterruptWait() is the same. An incomplete snippet is
shown below (we have custom hardware, so I removed most of the associated
code). A logic analyzer is used to observe the LED debug port.

still stumped.
-george


#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <string.h>
#include <errno.h>
#include <sched.h>
#include <sys/neutrino.h>

#define FE_PRIORITY 61

static struct sigevent fe_event;
extern struct out_packet_str *out_packet, *out_packet1;

void *fe_main(void *arg);

int main()
{
    ThreadCtl(_NTO_TCTL_IO, 0);
    setprio(0, FE_PRIORITY);

    // fe_main never returns
    fe_main (NULL);

    return (-1); // just to silence the compiler
}

const struct sigevent *fe_handler(void *area, int id)
{
    // write to debug port
    MASK_LED(0x02)

    /* get data from our custom hardware */
    // memcpy (out_packet1, out_packet2, DATA_PACKET_BYTES);
    // out_packet = out_packet1;
    //
    // /* Write the new pointer to the front end mailbox. */
    // pci_9080_start ((unsigned char *)front_end_physical1);
    // etc. etc.
    return(&fe_event);
}

void *fe_main(void *arg)
{
    int id, intr_ret;

    fe_event.sigev_notify = SIGEV_INTR;
    // attach handler to isr vector
    id = InterruptAttach(FE_ISR_VECTOR_NUM, &fe_handler, NULL, 0, 0);
    if(id == -1)
    {
        fprintf(stderr, "Error from fe_main: InterruptAttach Error: %s\n",
                strerror(errno));
        exit (-1);
    }

    // Setup and turn on the front end state machine
    while (true)
    {
        intr_ret = -1;
        while (intr_ret == -1)
        {
            // ignore any error from InterruptWait.
            // The error may be caused by
            // a signal or kernel timeout.
            intr_ret = InterruptWait(0, NULL);
        }

        // write to debug port
        UMASK_LED(0x02)
    }
}





I've stripped the interrupt thread pretty clean for observation. All that
remains now is the InterruptWait() call and the call that writes to the
memory-mapped device port (to which the logic analyzer is connected). I find
it extremely odd that the preemption time (~20 milliseconds) is always
nearly the same, as if somewhere in the kernel some sort of timeout is being
reached. For comparison, equal-priority threads in round-robin mode on our
platform each get a timeslice of around 4 milliseconds.

I don't know what's involved in trying 6.2. We had to work with QNX to
develop a BSP for our MIPS platform on custom hardware for 6.1. Could I use
a 6.2 procnto on a 6.1 BSP? (I doubt it.)

thanks,
-george




Assuming you have the source to the IPL and startup, you should be able
to rebuild them against the 6.2.0 headers and libs for the BSP. Sometimes
things change, but it will be pretty obvious pretty fast as to what it
is. ;-)

As for specific MIPS changes between 6.1 and 6.2, I will let someone who
works on the MIPS stuff comment.

chris



SW Engineer wrote:

Thanks Serge,

I took a look at your code below. It is very similar to what we’re running.
The only difference is that we do not mask our interrupt source (and
wouldn’t have to if it weren’t for this problem).

The RTC interrupt is already masked while the interrupt handler is active
... so an InterruptMask() there is simply useless.

Armin

Colin Burgess wrote:



Hmmm, is the interrupt thread communicating with procnto? This would explain
the 63 procnto thread, since it would rise to meet the priority of the
client.

If yes ... try using FIFO scheduling for your interrupt thread. With FIFO
scheduling the interrupt thread will get the CPU back after being suspended
by a call into procnto, and no other procnto thread could be activated.

WRT procnto versions, is it possible for you to try 6.2?

Are all procnto threads using floating priorities? If so ... why is an
initial priority of 63 used?


Armin





In article <b2ut3q$rft$1@inn.qnx.com>, georgeb@berkeleyprocess.com says...

// write to debug port
MASK_LED(0x02)

What are MASK_LED() and UMASK_LED()? Actually, you can't do much in an
ISR... What does ClockPeriod() return for your system?

Eduard.

I guess you have some sort of terminal access to your board to run pidin and
other utilities - hello-world, for example. Potentially that terminal device
is the root cause of your trouble. If you have a UART, your UART interrupt
handler is invoked every time anything is transmitted or received. So
procnto schedules the UART interrupt handler and delays your thread whenever
you are typing in or printf'ing out.

-Dmitri


If a serial IRQ-handler takes 20 ms, the serial driver is buggy!! Perhaps
a script printing to /dev/null can test if the serial driver has a
problem, e.g.

while true; do
sleep 2
pidin
done;

-Michael

On Wed, 19 Feb 2003 22:39:51 -0500, Dmitri Poustovalov <pdmitri@bigfoot.com> wrote:

I guess you have some sort of terminal access to your board to run pidin
and other utilities, hello-world for example. Potentially that terminal
device is the root cause of your trouble. If you have a UART, your UART
interrupt handler is invoked every time anything is being transmitted or
received. So procnto schedules the UART interrupt handler and delays your
thread when you are typing in or printf’ing out.



-Dmitri

“SW Engineer” <georgeb@berkeleyprocess.com> wrote in message news:b2s8v1$ph1$1@inn.qnx.com…

I wonder if procnto raises its own priority to carry out process creation
tasks.

Using M2, Opera’s revolutionary e-mail client: http://www.opera.com/m2/

Hi Serge,

I duplicated your results using the RTC as a source of interrupts. On an
x86 QNX 6.1 PC, there is no preemption of the IRQ thread. However, the
same test run on our MIPS target (using the RTC) shows that preemption
still occurs. That would mean it is specific to the MIPS version of QNX,
our BSP, our hardware, or some combination of the three. I’ve taken it up
with QSSL support. Thanks again for your help.

-george


Serge Yuschenko wrote:

Hi George,

It is quite complicated to find a problem just by talking about it. I don’t remember any well-known issue which might cause this. I think it would be more effective if you could post some example clearly showing the behaviour you described, or, at least, the output of pidin while the program is running, as a starter.

At first, take a look at the example below. I hope it resembles the description of your task. I chose the RTC as a source of interrupts. You can play with the priority and rate. It works fine at any rate on a P200 with priority 22. Nothing preempts the IRQ thread.

I don’t think the OS version is an issue here. I run this test on 6.2 now, but I don’t remember any problems running a similar task on 6.1 a while ago.

I am also pretty sure the procnto priority has nothing to do with this problem. As Chris mentioned before, procnto is client-priority driven. A high thread priority can only be an indication that some other process running at that high priority called a procnto service. Most of the time, all of procnto’s threads are RECEIVE-blocked.

Sincerely,
Serge

#include <unistd.h>
#include <stdlib.h>
#include <stdint.h>
#include <pthread.h>
#include <sys/slog.h>
#include <sys/slogcodes.h>
#include <sys/syspage.h>
#include <sys/siginfo.h>
#include <sys/neutrino.h>
#include <x86/inout.h>

#define RTC_CMD_ADDR 0x70 /* RTC internal register offset goes here */
#define RTC_DAT_ADDR 0x71 /* RTC internal register R/W access here */

#define RTC_REG_A 0x0A /* RTC register offset for accessing Reg. A */
#define RTC_REG_B 0x0B /* RTC register offset for accessing Reg. B */
#define RTC_REG_C 0x0C /* RTC register offset for accessing Reg. C */
#define RTC_REG_D 0x0D /* RTC register offset for accessing Reg. D */

#define RTC_IRQ 8
#define SLOG_CODE (_SLOGC_NEXT_QNX+1)

static uint64_t ih, it;
static int rate = 6; /* 0.000977 s */

const struct sigevent *interrupt_handler( void *data, int id )
{
    struct sigevent *event = (struct sigevent *) data;

    /*
     * Mask interrupt until interrupt thread
     * processes this event.
     */
    InterruptMask( RTC_IRQ, id );
    ih++;

    /*
     * Arm RTC
     */
    out8( RTC_CMD_ADDR, RTC_REG_C );
    in8( RTC_DAT_ADDR );

    return event;
}

void * interrupt_thread( void *data )
{
    uint64_t c1, c2, cps;
    int id;

    ThreadCtl( _NTO_TCTL_IO, 0 );

    /*
     * Enable RTC periodic IRQ
     */
    out8( RTC_CMD_ADDR, RTC_REG_C );
    in8( RTC_DAT_ADDR );

    InterruptDisable();
    out8( RTC_CMD_ADDR, RTC_REG_B );   /* select RTC register B */
    out8( RTC_DAT_ADDR, 0x42 );        /* set Periodic Interrupt Enable bit */

    out8( RTC_CMD_ADDR, RTC_REG_A );   /* select RTC register A */
    out8( RTC_DAT_ADDR, 0x20 | rate ); /* set rate, oscillator enabled */
    InterruptEnable();

    id = InterruptAttach( RTC_IRQ,
                          interrupt_handler,
                          data,
                          sizeof( struct sigevent ),
                          _NTO_INTR_FLAGS_TRK_MSK );

    cps = SYSPAGE_ENTRY( qtime )->cycles_per_sec;
    c1 = ClockCycles();

    for( ;; ) {
        InterruptWait( 0, NULL );
        c2 = ClockCycles();
        it++;
        slogf( _SLOG_SETCODE( SLOG_CODE, 0 ), _SLOG_DEBUG1,
               "IRQ: %f %lld %lld", ( c2 - c1 ) / (double) cps, ih, it );
        c1 = c2;
        InterruptUnmask( RTC_IRQ, id );
    }
}

int main( int argc, char **argv )
{
    pthread_t tid;
    pthread_attr_t pattr;
    struct sched_param parm;
    struct sigevent event;
    int c, prio = 10;

    while( -1 != ( c = getopt( argc, argv, "p:r:" ))) {
        switch( c ) {
            case 'p':
                prio = atoi( optarg );
                break;

            case 'r':
                c = atoi( optarg );
                if( c > 2 && c < 16 )
                    rate = c;
                break;
        }
    }

    SIGEV_INTR_INIT( &event );
    pthread_attr_init( &pattr );
    pthread_attr_setschedpolicy( &pattr, SCHED_RR );
    parm.sched_priority = prio;
    pthread_attr_setschedparam( &pattr, &parm );
    pthread_attr_setinheritsched( &pattr, PTHREAD_EXPLICIT_SCHED );

    pthread_create( &tid, &pattr, interrupt_thread, &event );
    pthread_join( tid, NULL );
    return 0;
}


I am not suggesting that UART interrupt processing takes 20 ms. QNX
drivers are not so buggy ;-)

The QNX UART driver sends a single byte per interrupt (please correct me
if I’m wrong), even though the driver already has the whole buffer
someone sent. If the baud rate is 115200, then the UART will generate the
next “Tx empty” interrupt in 1/115200*11 bits = 95 us, and the UART
interrupt handler will be scheduled again. It is hard to say what’s wrong
w/o more info about that MIPS platform, but scheduling and processing of
the UART interrupt handler every 100-150 us does not come for free ;-)

“Michael Tasche” <michael.tasche@esd-electronics.com> wrote in message news:oprkvxhuylx7hvef@news…


On Thu, 20 Feb 2003 20:28:56 -0500, Dmitri Poustovalov <pdmitri@bigfoot.com> wrote:


I agree, but the short IRQ processing every 100 us cannot be the cause of
the problem. There should be enough CPU time between two UART IRQs to run
that simple thread at the highest priority.

-Michael