mutex release

It is a widely accepted concept that when a thread releases an object
(e.g. a mutex) that another thread is blocked waiting for, the waiting
thread gets unblocked AND takes ownership (the lock) of that object.

Consider a situation where there are three threads H, M, and L with
priorities High, Medium, and Low respectively. H has a mutex locked and
the other two are blocked on the same mutex. H releases the mutex and
goes to sleep. M starts to execute and goes through the critical
section, releasing the mutex at the end. L gets ownership of the mutex,
although it is not running, since M has the higher priority.
If M tries to lock the mutex again, it will get blocked waiting for L to
finish the critical section.

This behavior seems questionable to me for an RTOS. If you compute the
response time for a thread, you really need to add the longest time (T)
it takes to go through a critical section (executed at the priority of
your thread) anywhere in your whole system, multiplied by the number of
mutex locks in the thread's path.
If the release operation only unblocks the highest-priority thread
waiting for that object, and the unblocked thread then has to reacquire
the object on its own (in the above scenario M does not get blocked and
runs straight through the critical section), the calculation is
different: 1xT + (n-1)xC, where C is the longest time to run through a
critical section for threads with a higher priority than the thread you
are computing the response time for.

Is there something I’m missing, or are there other things I didn’t
factor in?

Matthew Safar

Matthew Safar <msafar@glenvan.glenayre.com> wrote:

Is there something I’m missing, or are there other things I didn’t
factor in?

This is a side effect of priority inversion.

Think about this: when L holds the mutex and M is blocked, suppose
another thread X (L < X < M) starts running. X has nothing to do with
the mutex; it is simply running at a higher priority than L. Then L
won’t get a chance to run, and M is in fact blocked by X (a case of a
lower-priority thread blocking a higher-priority thread).

The way QNX solves this is: when L holds the mutex and M tries to lock
it, L’s priority is boosted up to M’s. Thus L runs at M’s priority
inside the critical section, which prevents X (L < X < M) from running,
and L is sure to return the mutex as soon as possible.
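
On QNX you get this boosting automatically. On other POSIX systems you
typically have to ask for it through the mutex protocol attribute. Here
is a minimal sketch using the standard pthread calls (an illustration
only, not code taken from QNX itself):

#include <pthread.h>

pthread_mutex_t pi_lock;

/* Create a mutex that uses priority inheritance: a thread holding
 * pi_lock is boosted to the priority of the highest-priority thread
 * blocked on it. */
int init_pi_mutex(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    return pthread_mutex_init(&pi_lock, &attr);
}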

-xtang

Xiaodan Tang wrote:

This is a side effect of priority inversion.

I think the problem is something else (it is not a priority inversion
problem):

  • when a mutex is unlocked, the highest-priority task trying to lock it
    is made ready and is handed ownership of that mutex!

What I think should be done is:

  • when the mutex is unlocked, the highest-priority task trying to lock it
    is made ready and the mutex is left unlocked.
  • the task (that had been waiting) should then try to lock the mutex again
    on its own (see the sketch after this list)!

If it is done that way:

  • the task M will not block again on a subsequent lock of the same
    mutex (L has not had a chance to run yet, so it is not holding the
    mutex)
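
A minimal sketch of these semantics, built on a plain mutex and
condition variable purely for illustration (hypothetical code, not how
QNX or any other RTOS actually implements its mutex):

#include <pthread.h>
#include <stdbool.h>

/* "Wake the waiter but do not hand over ownership": unlock just clears
 * the flag and makes a waiter ready; the woken thread competes for the
 * lock again at its own priority when it actually runs. */
typedef struct {
    pthread_mutex_t guard;   /* protects 'held' */
    pthread_cond_t  cond;    /* waiters block here */
    bool            held;
} relock_mutex_t;

void relock_mutex_lock(relock_mutex_t *m)
{
    pthread_mutex_lock(&m->guard);
    while (m->held)                        /* re-check after every wakeup */
        pthread_cond_wait(&m->cond, &m->guard);
    m->held = true;                        /* acquire on our own behalf */
    pthread_mutex_unlock(&m->guard);
}

void relock_mutex_unlock(relock_mutex_t *m)
{
    pthread_mutex_lock(&m->guard);
    m->held = false;                       /* leave it unlocked... */
    pthread_cond_signal(&m->cond);         /* ...and just wake one waiter */
    pthread_mutex_unlock(&m->guard);
}

In the H/M/L scenario above, when M unlocks, L is merely made ready; if
M immediately calls relock_mutex_lock() again it finds held == false and
takes the lock without blocking.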

Matthew

Matthew Safar <msafar@glenvan.glenayre.com> wrote:

I think the problem is something else (it is not a priority inversion
problem).

You are right, I didn’t really read your post carefully. (Uh, Friday
is not a good day to answer posts. :-)

And yes, QNX is doing what you described. Here is a little program
to prove it. You will see that thread 3 keeps on getting the mutex, and
thread 2 never really gets a chance.

-xtang

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <sched.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *thread_func(void *arg)
{
    struct sched_param sp;
    pthread_t tid = pthread_self();

    /* Run this thread at the FIFO priority passed in as the argument. */
    sp.sched_priority = (int)arg;
    pthread_setschedparam(tid, SCHED_FIFO, &sp);
    printf("Thread %d, at priority %d\n", tid, (int)arg);

    /* Repeatedly lock and unlock the shared mutex. */
    for (;;) {
        pthread_mutex_lock(&lock);
        printf("Thread %d in mutex\n", tid);
        pthread_mutex_unlock(&lock);
    }

    return 0;
}

int main()
{
    /* QNX allows NULL for the thread ID argument of pthread_create(). */
    pthread_create(NULL, NULL, thread_func, (void *)10);
    pthread_create(NULL, NULL, thread_func, (void *)12);
    sleep(10);
    return 0;
}

Previously, Matthew Safar wrote in comp.os.qnx:

Consider a situation where there are three threads H, M, and L with
priorities High, Medium, and Low respectively. H has a mutex locked and
the other two are blocked on the same mutex. H releases the mutex and
goes to sleep. M starts to execute and goes through the critical
section, releasing the mutex at the end. L gets ownership of the mutex,
although it is not running, since M has the higher priority.

This last statement is not correct as written. In a single
processor system, unless M becomes blocked, L should not get
ownership of the mutex because it will not run. In a multi-processor
system, L could get the mutex, but then it would be running.

Assume that M gets blocked briefly and then gets unblocked, then…

If M tries to lock the mutex again, it will get blocked waiting for L to
finish the critical section.

Yes, this is a classic priority inversion situation. L is running and
M is waiting for L. As Xiaodan mentioned, QNX will boost L’s priority
to M’s so that M cannot get locked out by a lower priority process.

This behavior seems questionable to me for an RTOS. If you compute the
response time for a thread, you really need to add the longest time (T)
it takes to go through a critical section (executed at the priority of
your thread) anywhere in your whole system, multiplied by the number of
mutex locks in the thread's path.
If the release operation only unblocks the highest-priority thread
waiting for that object, and the unblocked thread then has to reacquire
the object on its own (in the above scenario M does not get blocked and
runs straight through the critical section), the calculation is
different: 1xT + (n-1)xC, where C is the longest time to run through a
critical section for threads with a higher priority than the thread you
are computing the response time for.

Is there something I’m missing, or are there other things I didn’t
factor in?

Well, I may be missing something, but I think you are wrong, or at
least you should be here: a process waiting for a mutex should not be
on a FIFO queue, as you seem to imply.


Mitchell Schoenbrun --------- maschoen@pobox.com

I did all kinds of interesting experimentation with this.

Are you saying (I hope) that this is a BUG in the OS that will be fixed?


Bill Caroselli – 1(530) 510-7292
Q-TPS Consulting
QTPS@EarthLink.net


“Xiaodan Tang” <xtang@qnx.com> wrote in message
news:9urrp0$bma$1@nntp.qnx.com

And yes, QNX is doing what you described. Here is a little program
to prove it. You will see that thread 3 keeps on getting the mutex, and
thread 2 never really gets a chance.

-xtang

Bill Caroselli wrote:

I did all kinds of interesting experimentation with this.

Are you saying (I hope) that this is a BUG in the OS that will be fixed?

I don’t know what you mean. QNX’s behavior is correct for an RTOS on a
uniprocessor (thread 2 never gets the mutex, since thread 3 always
either (a) owns the mutex or (b) is ready at a higher priority).

Perhaps you are thinking of the QNX 2 days, when (rightly so, IMO)
lower numbers were higher priorities?