Igor Kovalenko <kovalenko@attbi.com> wrote:
“Brian Stecher” <bstecher@qnx.com> wrote in message
news:b1e2f7$s9t$1@nntp.qnx.com...
Nope. Event delivery & scheduling decisions caused by the event will
happen before code from any user level thread is executed - non-SMP.
In the SMP case things are a little more complicated, but the event
delivery and rescheduling will certainly take place before the next
clock tick.

What if an RR thread has its timeslice about to expire? To hold the above
statement true (and not postpone the scheduling) you have to be able to
complete delivery of ALL queued pulses with priority >= that thread within a
timeframe of less than 1 TICK. I am very curious how that is done. From a
purely logical perspective, how do you maintain a queue of asynchronous
events without running the producer and consumer asynchronously (that is, at
a potentially different pace)?

The queue CAN be indefinitely long, right? In fact it could be full of
pulses with even higher priority already before the pulse in question is
sent. And if you were delivering ALL eligible pulses before the next TICK,
that COULD potentially take an indefinitely long time? Then there has to be a
bound on how many events can be delivered ‘per pass’. If there is, then
there has to be a potential latency, at least in the worst case. For some
people I know, the word ‘queue’ is just another word for ‘latency’. I don’t
like poking in the dark like this, but the docs are rather scarce on the
subject.
The docs are scarce because this is an implementation detail that can
change from release to release. Ya gotta leave a little mystery in
life or where’s the fun?
Anyway, right now, if there is an interrupt event that the kernel needs to
deliver and it recognizes that the kernel data structures may not be in a
consistent state, it places the event on an “interrupt pending queue” (if the
kernel state is known to be consistent the event is delivered right
away). At various known “good” points during kernel execution the state
of that queue is checked (e.g. just before we transfer control back
to a user thread) and, if need be, the queue is drained and the
events delivered. That’s what I meant in my original message. In
the case of a pulse, if there is a thread receive-blocked on the
channel, the pulse will immediately be delivered and the thread
readied (and made the active thread if the priority is high enough).
If there is no thread ready to receive the pulse, it’s placed on
the channel’s send queue as per normal procedure - possibly boosting
the priority of threads in the process that created the channel.
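To make the shape of that mechanism concrete, here's a toy sketch in portable C (all names here are made up for illustration; this is not the actual Neutrino source): an interrupt-side enqueue that only touches a preallocated free list, and a drain step run at a "known good" point that delivers the highest-priority pending event first.

```c
#include <stddef.h>

/* Hypothetical pending-event entry; the real kernel's layout differs. */
struct pending_event {
    int priority;                 /* delivery priority of the pulse/event */
    int payload;                  /* stand-in for the sigevent contents   */
    struct pending_event *next;
};

#define POOL_SIZE 8               /* tiny stand-in for the real 200-entry pool */
static struct pending_event pool[POOL_SIZE];
static struct pending_event *free_list;
static struct pending_event *pending_head;   /* the "interrupt pending queue" */
static int dropped;               /* counts "Out of interrupt events" cases   */

static void pool_init(void) {
    free_list = NULL;
    for (int i = 0; i < POOL_SIZE; i++) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
    pending_head = NULL;
    dropped = 0;
}

/* Interrupt context: no heap, so take an entry from the preallocated
 * free pool; if none is left, the event is dropped. */
static int queue_event(int priority, int payload) {
    if (!free_list) { dropped++; return -1; }
    struct pending_event *e = free_list;
    free_list = e->next;
    e->priority = priority;
    e->payload  = payload;
    e->next = pending_head;
    pending_head = e;
    return 0;
}

/* Called at a "known good" point (e.g. just before returning control to
 * a user thread): deliver the highest-priority pending event and return
 * its priority, or -1 if the queue is empty. */
static int drain_one(int *payload_out) {
    if (!pending_head) return -1;
    struct pending_event **best = &pending_head;
    for (struct pending_event **p = &pending_head; *p; p = &(*p)->next)
        if ((*p)->priority > (*best)->priority)
            best = p;
    struct pending_event *e = *best;
    *best = e->next;               /* unlink the winner */
    *payload_out = e->payload;
    e->next = free_list;           /* return the entry to the free pool */
    free_list = e;
    return e->priority;
}
```

The property worth noticing is that the interrupt side does no allocation and no scheduling work at all; everything interesting happens when the queue is drained at a consistent point.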
The entries in the interrupt pending queue come from a preallocated
free pool (we can’t allocate them on demand since the kernel data
structures, including the heap, aren’t available to us when we
need them). The initial size of the free pool is 200 entries, but
that is grown if the system notices heavy use of the queue. If
you run out of entries, the infamous “Out of interrupt events”
message comes out and the event is dropped.
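Here's a sketch of how such a free pool might behave, again with made-up names rather than the real kernel's: entries are preallocated at process time (where the heap is usable), interrupt time can only take an entry or fail, and a process-time housekeeping pass grows the pool when a high-water mark shows heavy use.

```c
#include <stdlib.h>
#include <stddef.h>

/* Hypothetical free-pool entry for interrupt events. Entries must be
 * preallocated because interrupt context cannot touch the heap. */
struct ievent { struct ievent *next; };

static struct ievent *free_list;
static int free_count, total_count, in_use_hiwater;

/* Process-time only: grow the pool (the heap is usable here). */
static void pool_grow(int n) {
    for (int i = 0; i < n; i++) {
        struct ievent *e = malloc(sizeof *e);
        if (!e) return;
        e->next = free_list;
        free_list = e;
        free_count++;
        total_count++;
    }
}

/* Interrupt-time: grab an entry, or report exhaustion (event dropped). */
static struct ievent *ievent_get(void) {
    if (!free_list) return NULL;        /* "Out of interrupt events" */
    struct ievent *e = free_list;
    free_list = e->next;
    free_count--;
    int in_use = total_count - free_count;
    if (in_use > in_use_hiwater) in_use_hiwater = in_use;
    return e;
}

static void ievent_put(struct ievent *e) {
    e->next = free_list;
    free_list = e;
    free_count++;
}

/* Process-time housekeeping: if usage ever got close to the pool size,
 * grow the pool before we actually run out. (The threshold and growth
 * factor here are arbitrary choices for the sketch.) */
static void pool_maintain(void) {
    if (in_use_hiwater > total_count / 2)
        pool_grow(total_count);         /* double the pool */
}
```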
Normally, when the interrupt comes in, it’s either delivered right away,
or put on the pending queue and then the kernel call is preempted
(assuming a high enough priority on the pending item), which will
quickly get to the point where the queue is drained. If we can’t
preempt the kernel call right away, we’ll shortly get to a point
where we can and then the queue will be drained. Only a high, continuous
load of interrupts (e.g. a buggy interrupt handler not clearing
the hardware condition) will cause us to be unable to drain the
queue and eventually run out of free entries for it.
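That failure mode is easy to model: if events arrive faster than drain points can retire them, the queue depth grows by the difference each tick until the pool is exhausted. A toy back-of-the-envelope calculation (the rates are assumed, not measured QNX behavior):

```c
/* Toy simulation: events arrive at `arrival_rate` per tick and are
 * drained at `drain_rate` per tick from a pool of `pool_size` entries.
 * Returns the tick at which the pool is exhausted, or -1 if the queue
 * stays bounded for `max_ticks`. */
static int ticks_until_exhausted(int arrival_rate, int drain_rate,
                                 int pool_size, int max_ticks) {
    int queued = 0;
    for (int t = 1; t <= max_ticks; t++) {
        queued += arrival_rate;
        if (queued > pool_size) return t;   /* "Out of interrupt events" */
        queued -= drain_rate;
        if (queued < 0) queued = 0;
    }
    return -1;
}
```

With draining faster than arrival, the queue never builds up; with a runaway interrupt source (say, a handler that never clears the hardware condition) the free entries are gone in a bounded, predictable number of ticks.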
--
Brian Stecher (bstecher@qnx.com)      QNX Software Systems, Ltd.
phone: +1 (613) 591-0931 (voice)      175 Terence Matthews Cr.
       +1 (613) 591-3579 (fax)        Kanata, Ontario, Canada K2M 1W8