semaphores

I’m having a problem with semaphores. I’m not so much looking for a
solution as an explanation of why, if possible.

In the kernel, Proc32 -e 300 (300 semaphores)

In the application code, loops create 243 semaphores at startup, in
shared_memory.
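
In rough outline, the startup does something like this (a simplified
sketch; "/shared_memory" and NSEMS are stand-ins for the real names and
per-customer sizes, and error handling is omitted):

    /* Create and size the shared memory object, map it, and
       sem_init() the semaphores inside it.  Error checks omitted. */
    #include <fcntl.h>
    #include <semaphore.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define NSEMS 243            /* stand-in for the per-customer count */

    sem_t *create_sems(void)
    {
        sem_t *sems;
        int fd, i;

        fd = shm_open("/shared_memory", O_RDWR | O_CREAT, 0666);
        /* QNX4 sizes a shm object with ltrunc(); ftruncate() elsewhere */
        ltrunc(fd, NSEMS * sizeof(sem_t), SEEK_SET);
        sems = (sem_t *)mmap(0, NSEMS * sizeof(sem_t),
                             PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        for (i = 0; i < NSEMS; i++)
            sem_init(&sems[i], 1, 0);   /* pshared = 1: cross-process */
        return sems;
    }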

There are multiple versions of the application code (for different
customers), and each uses a different size of shared_memory.

There is no graceful shutdown and no calls to sem_destroy. Application
shutdown is done by a script which just slays all the processes.

Now the problem:

Start and stop the application code for one customer, then start the
application for another customer, and I get an error creating the
semaphores. With the old 243 not destroyed, and another 243 to create,
I guess I run over my limit of 300.

But successive start/stop/starts of the same application code don’t
present the problem.

I have to reboot to change between applications for two different
customers, or even for the same customer’s code in a test version where
the shared_memory size has changed.

Why do successive restarts have no problems? You’d expect, if I didn’t
destroy the semaphores on shutdown, that I’d have problems creating more
(having crossed my 300 limit on the second startup) no matter which
customer’s application was starting.

I guess that as long as the semaphores are in the same place in memory,
it knows they are the same ones when you create them the second time, but
I don’t know.

Scott

Hard to guess without more detail on how you’re creating the semaphores.

I assume they are in shared memory. Semaphores in a shared memory
object are destroyed automatically when the shared memory object
is, which isn’t until it’s unlinked and all existing mappings of
it are unmapped, either explicitly or at process exit.

My guess is the problem has to do with the linking and unlinking
of the object, and perhaps your thinking that the object is gone
because you can’t see it in /dev/shmem. The object can still exist
even if it isn’t in /dev/shmem (it works like a file): just because
its name has been unlinked doesn’t mean it’s gone until all references
to it have been unmapped.

Also, there was a bug in earlier Proc32s that didn’t successfully
destroy semaphores in mapped memory when the sem wasn’t on a
page boundary; it’s fixed in more recent Procs.

So, the key is probably in the detailed order of the shm_open()s,
mmap()s, shm_unlink()s, and sem_init()s.
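
To make the lifetime concrete, here’s a sketch using the (made-up)
names from the first post; this assumes the code looks roughly like
that:

    /* The object, and the semaphores in it, are destroyed only when the
       name has been unlinked AND the last mapping is gone. */
    #include <semaphore.h>
    #include <stddef.h>
    #include <sys/mman.h>

    void teardown(sem_t *sems, size_t size)
    {
        shm_unlink("/shared_memory");  /* name vanishes from /dev/shmem,
                                          but the object and its sems live
                                          on while any mapping remains */
        munmap((void *)sems, size);    /* last unmap (or process exit):
                                          only now is everything freed */
    }

If the shutdown script just slays the processes, the exits take care of
the unmapping, but nothing ever unlinks the name, so the object and its
semaphores survive.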

Sam

Sam Roberts (sam@cogent.ca), Cogent Real-Time Systems (www.cogent.ca)

In qdn.public.qnx4 J. Scott Franko <jsfranko@switch.com> wrote:

> I have to reboot to change between applications for two different
> customers, or even for the same customer’s code in a test version where
> the shared_memory size has changed.

Try removing the shared memory area. (ls /dev/shmem, rm /dev/shmem/my_mem)
I think this will probably allow you to continue without the reboot.
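
(From code, shm_unlink() does the same job as the rm; "/my_mem" here is
just a stand-in for your real object name:)

    /* Equivalent of "rm /dev/shmem/my_mem". */
    #include <sys/mman.h>

    int main(void)
    {
        shm_unlink("/my_mem");  /* removes the name; the object lingers
                                   until its last mapping is gone */
        return 0;
    }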

> Why do successive restarts have no problems? You’d expect, if I didn’t
> destroy the semaphores on shutdown, that I’d have problems creating more
> (having crossed my 300 limit on the second startup) no matter which
> customer’s application was starting.
>
> I guess that as long as the semaphores are in the same place in memory,
> it knows they are the same ones when you create them the second time, but
> I don’t know.

The Neutrino docs note that a semaphore is valid until it is explicitly
destroyed, or until the memory where it resides is freed. I think the
same is true for QNX4. And, of course, when you do the restarts, you
will be using the same shared memory area, so you will just be re-using
the semaphores; therefore you won’t hit the limit.

So, in conclusion: to re-test, make sure you remove the shared memory
area from the first program before starting the new one.
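
If you can run any cleanup at all before the slay, a sketch like this
(made-up names again) releases everything explicitly:

    /* Give the semaphores back to Proc32, then drop the mapping
       and the name so the next start begins clean. */
    #include <semaphore.h>
    #include <stddef.h>
    #include <sys/mman.h>

    void cleanup(sem_t *sems, int nsems, size_t size)
    {
        int i;
        for (i = 0; i < nsems; i++)
            sem_destroy(&sems[i]);     /* frees each kernel semaphore */
        munmap((void *)sems, size);
        shm_unlink("/shared_memory");
    }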

-David

In comp.os.qnx J. Scott Franko <jsfranko@switch.com> wrote:

> I’m having a problem with semaphores. I’m not so much looking for a
> solution as an explanation of why, if possible.

I think I answered this in a previous thread… just in case you
missed it…

– semaphores persist until the memory they are created in is
freed or they are explicitly destroyed.

– remove the shared memory area that is left over from your
applications before restarting: “rm /dev/shmem/my_memory”

-David

