"David Gibbs" <dagibbs@qnx.com> wrote in message
news:afa0qh$bio$1@nntp.qnx.com...
> Igor Kovalenko <Igor.Kovalenko@motorola.com> wrote:
> > Yes indeed, but my point was that 'waiting for messages' is a
> > QNX-ism. Some people like using raw messages, but I am not one of
> > them. I think they are better left to implementing libraries and
> > resource managers (and even there you mostly deal with cover
> > functions, not with raw messages). Applications are better off
> > using POSIX interfaces for both I/O and event delivery.
> Hm... waiting for "events" though is a common idea. It happens that
> we just think that MsgReceive() (or a cover function) is one of the
> best ways to wait for events.
'Best' for what? I don't think it is best for applications, especially if
you care about portability of your code, or if you have a large pool of
developers who are more familiar with Unix programming than with QNX-isms.
It is probably best for service providers (resmgrs, etc), so an
organisation can have a few people who know QNX and implement the service
providers, plus a larger number of 'generic' programmers, who are easier
to find, doing the application-layer stuff.

It is also more logical - QNX messages are low-level OS-specific stuff,
which should be hidden in the OS-specific layer of servers/drivers. What
you suggest is using a low-level OS-specific API in the upper layer. You
don't use raw IP sockets to implement FTP, do you?
> Of course, another more Unixy way of waiting for events is select().
> (In our implementation, that happens to be sigwaitinfo().) Or, of
> course, my_gui_mainloop(), and however that happens to be implemented.
The Unixy way is actually poll(), which QNX (rather unfortunately) does
not have. select() is a BSD legacy and has less functionality.
> > And you don't have to mask signals at all times you're not blocked
> > waiting for them - that would make the mechanism useless.
> > Synchronous signals will be queued while you're handling them.
> If you use a signal handler, signals are automatically masked while you
> are handling them. But, once you use a signal handler, the cost of the
> switch to the signal handler context and return has now pushed the cost
> of using signals above the cost of using pulses.
I did not suggest using asynchronous signals for general event delivery.
That's a PITA.
> If you are using sigwaitinfo() as your blocking point, the signal will
> be handled inline with the sigwaitinfo(), rather than being in a signal
> handler context, and you won't get the automatic masking of the signal.
You don't need the masking, since incoming signals will be queued. And
yes, using sigwaitinfo() is exactly what I meant. By the way, this issue
of using signals properly is rather hard to understand. It would be nice
if the QNX docs were more educational on the subject.
> QNX docs don't tend to cover, in detail, things that are "normal Unix",
> but tend to focus, more, on the specifically "QNXy" things. My
Synchronous signals aren't really 'normal Unix'. They are 'normal POSIX',
and are relatively little known and little understood by the majority of
Unix programmers. Since QNX makes a point of being a POSIX OS, it would
be nice if the docs taught people a bit about POSIX programming.
> understanding is that our signal implementation is, mostly, pretty
> close to standard. (Well, the main "oddity" is that a server can
> hold off the receipt of a signal by a reply-blocked client. This is
> necessary to give the equivalent effect of doing the server work in
> the kernel in other Unix-like OSes, where the kernel can also hold
> off the unblock and allow the kernel driver to complete/clean up the
> operation, and it is also needed for making "atomic" (sw) i/o
> operations atomic.)
Too bad this also applies to KILL. People normally expect non-maskable
signals to work no matter what. You should not care about clean-up if the
user tells you 'just die, damn you'.
> If I were structuring a program (thread) to use synchronous signals,
> it would look something like:
>
>    mask all signals that I want to not kill me (pthread_sigmask())
>    set all signals I'm interested in to be queued signals (sigaction())
>    init bitfield of signals I'm interested in
>    loop
>        wait for signals to come in (sigwaitinfo())
>        perform appropriate behaviour for signal
>    endloop
That's what I was suggesting, basically - glad we agree on something.
To be more realistic, however, you usually need a bunch of 'worker'
threads blocked on something like a condvar, in addition to the
sigwaitinfo() thread. When you dequeue a signal, just tell one of the
'worker' threads what to do and where the context is, and immediately go
back to sigwaitinfo(). If you dawdle for too long, the signal queue might
get overloaded (and it is a rather short queue on some systems - 32
signals on Solaris, for example, and not adjustable).
-- igor