Scheduling overhead in RTP 6.1

I have a question regarding the scheduler. I have a process with about 350
threads. Of these, only about 15 are actively communicating: reading data
from a server via IPC and relaying each message to a client over a TCP
socket. The rest are waiting for messages from another process via IPC, to
be sent to the same client over TCP. Memory utilization is near 75% as
reported by spin, and procnto (the kernel?) appears to be using 45% of the
CPU time.

Can anyone tell me whether the scheduling overhead is proportional to the
number of threads? Is there anything that can be done to reduce the impact
of having this many threads in the system?

Thanks in advance.

Tim

“Tim Bochenek” <tim.bochenek@bepco.com> wrote in message
news:a6o5jf$lbs$1@inn.qnx.com...

> I have a question regarding the scheduler. I have a process with about
> 350 threads. Of these, only about 15 are actively communicating: reading
> data from a server via IPC and relaying each message to a client over a
> TCP socket. The rest are waiting for messages from another process via
> IPC, to be sent to the same client over TCP. Memory utilization is near
> 75% as reported by spin, and procnto (the kernel?) appears to be using
> 45% of the CPU time.
>
> Can anyone tell me whether the scheduling overhead is proportional to
> the number of threads? Is there anything that can be done to reduce the
> impact of having this many threads in the system?

Scheduling dispatch latency is constant time, in contrast to, say, Linux,
where it's linear. You could be seeing a number of other things; for
example, if the threads are blocking a lot, you may be getting a large
number of context switches, growing proportionately with the number of
threads.

What would the impact be of having a server IPC channel blocked in each
thread, along with a client IPC connection waiting to do work when the
server is unblocked? I suspect there is a relationship between the number
of channels I have and the processing of incoming messages.


“Steve Furr” <furr@qnx.com> wrote in message
news:a7beb9$cc$1@nntp.qnx.com...

> “Tim Bochenek” <tim.bochenek@bepco.com> wrote in message
> news:a6o5jf$lbs$1@inn.qnx.com...
>
> [original question snipped]
>
> Scheduling dispatch latency is constant time, in contrast to, say,
> Linux, where it's linear. You could be seeing a number of other things;
> for example, if the threads are blocking a lot, you may be getting a
> large number of context switches, growing proportionately with the
> number of threads.