QNX messages and POSIX message queues

Hi,

I’m working on a project, and I’m not sure whether to use messages or message queues. One thread reads the value of the CI0-DIO port periodically; when the value changes, it sends the value to another thread. The receiving thread doesn’t need to reply to the sending thread.

Thanks,

Belinda

Belinda <yye@is2.dal.ca> wrote:

[…]

How much data is involved? If the amount of data you want to send fits in 32 bits, you might want to look at sending a pulse with MsgSendPulse().

Our implementation of message queues is fairly resource-heavy – a separate process maintains the queues, and a message queue transaction in fact involves two MsgSend/Receive/Reply transactions rather than just one.

So, I generally recommend a direct Send/Receive/Reply transaction for moving larger amounts of data, and a pulse for notification or for up to 32 bits of data. In this case, the value of a port is likely to be 32 bits or fewer, and this also falls nicely into the “notification of change” usage for pulses, making a pulse seem natural.

If it were a 1K structure, I might still send a pulse for notification. Then, if the threads were in the same process, I’d use an in-memory buffer for the data; if the threads were in different processes, I’d look at shared memory, but probably have the thread that received the notification-of-change pulse do a MsgSend() to the data-collector, and have the data-collector MsgReply() with the “new” 1K structure.
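As a sketch of the pattern David describes – a pulse for notification, with the 32-bit payload riding in the pulse itself – the following hypothetical example creates a channel, attaches a connection to it, and sends one pulse that a receiver thread picks up in MsgReceive(). It compiles only under QNX Neutrino (<sys/neutrino.h>); the channel setup, payload value, and thread structure are illustrative, not taken from the thread.

```c
/* Hypothetical sketch (QNX Neutrino only): pulse-based notification.
 * The receiver blocks in MsgReceive(); a pulse shows up with rcvid == 0
 * and carries its 32-bit payload in pulse.value.sival_int. */
#include <sys/neutrino.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int chid;                    /* channel the receiver listens on */

static void *receiver(void *arg)
{
    struct _pulse pulse;
    for (;;) {
        int rcvid = MsgReceive(chid, &pulse, sizeof(pulse), NULL);
        if (rcvid == 0)             /* 0 means a pulse, not a message */
            printf("port changed, new value %d\n", pulse.value.sival_int);
    }
    return NULL;
}

int main(void)
{
    chid = ChannelCreate(0);
    int coid = ConnectAttach(0, getpid(), chid, _NTO_SIDE_CHANNEL, 0);

    pthread_t t;
    pthread_create(&t, NULL, receiver, NULL);

    /* priority -1: deliver the pulse at the sender's own priority */
    MsgSendPulse(coid, -1, _PULSE_CODE_MINAVAIL, 0x1234);
    sleep(1);                       /* let the receiver run, then exit */
    return 0;
}
```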

-David

QNX Training Services
http://www.qnx.com/support/training/
Please followup in this newsgroup if you have further questions.

Yes, I’d stay away from using mqueue. For communication between threads the best way would be to set up a buffer (in shared memory if the threads are in different processes) with flow control by a pair of semaphores. It is the classical producer/consumer solution and works better than any kind of IPC.

  • igor

“David Gibbs” <dagibbs@qnx.com> wrote in message
news:ae7lfa$631$5@nntp.qnx.com

[…]

“Igor Kovalenko” <Igor.Kovalenko@motorola.com> wrote in message
news:ae8s4f$52a$1@inn.qnx.com

[…]

Out of interest Igor, would you justify/quantify “better”?

Jim

[…]

“Jim Douglas” <jim@dramatec.co.uk> wrote in message
news:ae9mh0$nek$1@inn.qnx.com

[…]

I would say it is much more efficient for the OS, i.e. less overhead, less context switching, fewer machine instructions in the execution path.

Shall we define “much” now?

“Bill Caroselli (Q-TPS)” <QTPS@EarthLink.net> wrote in message
news:aeaqgg$kke$1@inn.qnx.com

[…]

I have to correct myself, since what I proposed is a form of IPC itself :wink: However, I’d say using either form of message passing within a single process is just silly from an efficiency perspective.

There are reasons to use other mechanisms sometimes. Mostly it happens when you need to be blocked on multiple events (wait for various types of events in a single entry point). In such cases it is usually much more convenient to use MsgReceive() in conjunction with ionotify(), which would be the QNX6 way of doing things, or use sigwaitinfo() in conjunction with aio_xxx(), which would be the POSIX way of doing things (too bad there’s no aio_xxx on QNX6 yet).

Note, the recommendation to use pulses for a 32-bit or shorter message is dubious. Signals are better for that from an efficiency perspective (they tax the kernel less than pulses) and provide the same functionality (in their synchronous form).

– igor

Igor Kovalenko <Igor.Kovalenko@motorola.com> wrote:

Note, the recommendation to use pulses for a 32-bit or shorter message is dubious. Signals are better for that from an efficiency perspective (they tax the kernel less than pulses) and provide the same functionality (in their synchronous form).

If you are only waiting for this one type of notification, then yes
signals are more efficient. Of course, to get this efficiency you’d
better be using sigwaitinfo() as your blocking mechanism, and making
sure the signal is masked at all times when you’re not blocked waiting
for it.

But, if you might be getting messages as well, then the flexibility of
using a pulse, and the common blocking point of MsgReceive() is probably
worth it.

-David

QNX Training Services
http://www.qnx.com/support/training/
Please followup in this newsgroup if you have further questions.

Yes indeed, but my point was that ‘waiting for messages’ is a QNX-ism. Some people like using raw messages, but I am not one of them. I think they are better left for implementing libraries and resource managers (and even there you mostly deal with cover functions, not with raw messages). Applications are better off using POSIX interfaces for both I/O and event delivery.

And you don’t have to mask signals at all times you’re not blocked waiting for them - that would make the mechanism useless. Synchronous signals will be queued while you’re handling them. And yes, using sigwaitinfo() is exactly what I meant. By the way, this issue of using signals properly is rather hard to understand. It would be nice if the QNX docs were more educational on the subject.

– igor

“David Gibbs” <dagibbs@qnx.com> wrote in message
news:aeb91v$2gd$2@nntp.qnx.com

[…]

Igor Kovalenko <Igor.Kovalenko@motorola.com> wrote:
: And you don’t have to mask signals at all times you’re not blocked waiting
: for them - that would make the mechanism useless. Synchronous signals will
: be queued while you’re handling them. And yes, using sigwaitinfo() is
: exactly what I meant. By the way, this issue of using signals properly is
: rather hard to understand. It would be nice if the QNX docs were more
: educational on the subject.

I’ll add this to our ever-growing list of things to do. Thanks for the
suggestion.


Steve Reid stever@qnx.com
TechPubs (Technical Publications)
QNX Software Systems

Igor Kovalenko <Igor.Kovalenko@motorola.com> wrote:

Yes indeed, but my point was that ‘waiting for messages’ is a QNX-ism. Some people like using raw messages, but I am not one of them. I think they are better left for implementing libraries and resource managers (and even there you mostly deal with cover functions, not with raw messages). Applications are better off using POSIX interfaces for both I/O and event delivery.

Hm… waiting for “events”, though, is a common idea. It happens that we just think that MsgReceive() (or a cover function) is one of the best ways to wait for events.

Of course, another, more Unixy, way of waiting for events is select(). (In our implementation, that happens to be sigwaitinfo().) Or, of course, my_gui_mainloop(), however that happens to be implemented.

And you don’t have to mask signals at all times you’re not blocked waiting
for them - that would make the mechanism useless. Synchronous signals will
be queued while you’re handling them.

If you use a signal handler, signals are automatically masked while you
are handling them. But, once you use a signal handler, the cost of the
switch to the signal handler context and return has now pushed the cost
of using signals above the cost of using pulses.

If you are using sigwaitinfo() as your blocking point, the signal will
be handled inline with the sigwaitinfo(), rather than being in a signal
handler context, and you won’t get the automatic masking of the signal.

And yes, using sigwaitinfo() is exactly what I meant. By the way, this issue of using signals properly is rather hard to understand. It would be nice if the QNX docs were more educational on the subject.

QNX docs don’t tend to cover, in detail, things that are “normal Unix”, but tend to focus more on the specifically “QNXy” things. My understanding is that our signal implementation is, mostly, pretty close to standard. (Well, the main “oddity” is that a server can hold off the receipt of a signal by a reply-blocked client. This is necessary to give the equivalent effect of doing the server work in the kernel, as in other Unix-like OSes where the kernel can also hold off the unblock and allow the kernel driver to complete/clean up the operation; it is also needed for making “atomic” (sw) I/O operations atomic.)

If I were structuring a program (thread) to use synchronous signals, it would look something like:

    mask all signals that I want to not kill me (pthread_sigmask())
    set all signals I’m interested in to be queued signals (sigaction())
    init bitfield of signals I’m interested in
    loop
        wait for signals to come in (sigwaitinfo())
        perform appropriate behaviour for signal
    endloop

-David

QNX Training Services
http://www.qnx.com/support/training/
Please followup in this newsgroup if you have further questions.

“David Gibbs” <dagibbs@qnx.com> wrote in message
news:afa0qh$bio$1@nntp.qnx.com

Igor Kovalenko <Igor.Kovalenko@motorola.com> wrote:

[…]

Hm… waiting for “events” though is a common idea. It happens that
we just think that MsgReceive() (or a cover function) is one of the
best ways to wait for events.

‘Best’ for what? I don’t think it is best for applications, especially if you care about portability of your code, or if you have a large pool of developers who are more familiar with Unix programming than with QNX-isms.

It is probably best for service providers (resmgrs, etc.), so an organisation can have a few people who know QNX and implement the service providers, plus a larger number of ‘generic’ programmers, who are easier to find, doing application-layer stuff.

It is also more logical - QNX messages are low-level OS-specific stuff, which should be hidden in the OS-specific layer of servers/drivers. What you suggest is using a low-level OS-specific API in the upper layer. You don’t use raw IP sockets to implement FTP, do you?

Of course, another more Unixy way of waiting for events is select().
(In our implementation, that happens to be sigwaitinfo().) Or, of
course, my_gui_mainloop(), and however that happens to be implemented.

The Unixy way is actually poll(), which QNX (rather unfortunately) does not have. select() is BSD legacy and has less functionality.

And you don’t have to mask signals at all times you’re not blocked waiting for them - that would make the mechanism useless. Synchronous signals will be queued while you’re handling them.

If you use a signal handler, signals are automatically masked while you are handling them. But, once you use a signal handler, the cost of the switch to the signal handler context and return has now pushed the cost of using signals above the cost of using pulses.

I did not suggest using asynchronous signals for general event delivery. That’s a PITA.

If you are using sigwaitinfo() as your blocking point, the signal will
be handled inline with the sigwaitinfo(), rather than being in a signal
handler context, and you won’t get the automatic masking of the signal.

You don’t need masking, since incoming signals will be queued.

And yes, using sigwaitinfo() is exactly what I meant. By the way, this issue of using signals properly is rather hard to understand. It would be nice if the QNX docs were more educational on the subject.

QNX docs don’t tend to cover, in detail, things that are “normal Unix”, but tend to focus more on the specifically “QNXy” things. My

Synchronous signals aren’t really ‘normal Unix’. They are ‘normal POSIX’, and are relatively little known and little understood by the majority of Unix programmers. Since QNX makes a point of being a POSIX OS, it would be nice if the docs taught people a bit about POSIX programming.

understanding is that our signal implementation is, mostly, pretty close to standard. (Well, the main “oddity” is that a server can hold off the receipt of a signal by a reply-blocked client. This is necessary to give the equivalent effect of doing the server work in the kernel, as in other Unix-like OSes where the kernel can also hold off the unblock and allow the kernel driver to complete/clean up the operation; it is also needed for making “atomic” (sw) I/O operations atomic.)

Too bad this also applies to KILL. People normally expect non-maskable signals to work no matter what. You should not care about cleanup if the user tells you ‘just die, damn you’.

If I were structuring a program (thread) to use synchronous signals, it would look something like:

    mask all signals that I want to not kill me (pthread_sigmask())
    set all signals I’m interested in to be queued signals (sigaction())
    init bitfield of signals I’m interested in
    loop
        wait for signals to come in (sigwaitinfo())
        perform appropriate behaviour for signal
    endloop

That’s basically what I was suggesting – glad we agree on something :wink:

To be more realistic, however, you usually need a bunch of ‘worker’ threads blocked on something like a condvar in addition to the sigwaitinfo() thread. When you dequeue a signal, just tell one of the ‘worker’ threads what to do and where the context is, and immediately go back to sigwaitinfo(). If you take too long, the signal queue might get overloaded (and it is a rather short queue on some systems - 32 signals on Solaris, for example, and not adjustable).

– igor