Can I use shared memory and atomic operation together?

The QNX documentation states that using shared memory and a semaphore (or
mutex) is faster than message passing. However, in order to synchronize,
can I use shared memory and atomic operations together?

asmart

“Joseph” <tsunghsunwu@fpc.com.tw> wrote in message
news:e6ti6v$cvm$2@inn.qnx.com

The QNX documentation states that using shared memory and a semaphore (or
mutex) is faster than message passing. However, in order to synchronize,
can I use shared memory and atomic operations together?

If you know what you are doing, yes. On non-x86 processors, some extra
operations (like flushing the cache) could be required.

asmart

Joseph <tsunghsunwu@fpc.com.tw> wrote:

The QNX documentation states that using shared memory and a semaphore (or
mutex) is faster than message passing. However, in order to synchronize,
can I use shared memory and atomic operations together?

Yes you can – but then you have the issue of what to do when the other
owner has the memory “locked”. Do you loop until available (burning CPU,
so the other owner can’t “unlock” the memory), or do you sleep and try
again (introducing latency, and possible multiple retries).

Something like semaphores, or a mutex & condvar combination, cover off
these issues for you. You may find it easier to get proper behaviour
using these.

Also, while it may be faster, it may not. It depends on a variety
of things. And, the gain in speed may not be worth the pain in
coding and debugging if you don’t get your synchronisation quite right.

-David

David Gibbs
QNX Training Services
dagibbs@qnx.com

Burning CPU can be faster than invoking kernel calls. I think the only case
where shared memory will be faster than message passing is when you burn
CPU. If the docs suggest that it will be faster when using semaphores, that
is not true.

– igor

“David Gibbs” <dagibbs@qnx.com> wrote in message
news:e8jplp$esn$3@inn.qnx.com

Joseph <tsunghsunwu@fpc.com.tw> wrote:
The QNX documentation states that using shared memory and a semaphore (or
mutex) is faster than message passing. However, in order to synchronize,
can I use shared memory and atomic operations together?

Yes you can – but then you have the issue of what to do when the other
owner has the memory “locked”. Do you loop until available (burning CPU,
so the other owner can’t “unlock” the memory), or do you sleep and try
again (introducing latency, and possible multiple retries).

Something like semaphores, or a mutex & condvar combination, cover off
these issues for you. You may find it easier to get proper behaviour
using these.

Also, while it may be faster, it may not. It depends on a variety
of things. And, the gain in speed may not be worth the pain in
coding and debugging if you don’t get your synchronisation quite right.

-David

David Gibbs
QNX Training Services
dagibbs@qnx.com

I think the documentation is suggestive only. If you are moving small
amounts of data, the kernel call overhead dominates, and so the answer
is no. If you are moving large amounts of data, then probably shared
memory will be faster. If the amount of data is large, but the
amount that you will process/view is small, then shared memory should
be quite superior. It all just depends.