Understanding OS calls

In trying to understand the new NTO message passing I’ve
searched down to where I understand that on the server side
I call resmgr_attach() to attach to a piece of name space
and create a channel to it. On the client side I call
open() to find the channel and make a connection to it.

I can’t find anything lower level. I presume that resmgr_attach()
calls ChannelCreate() and then some routine to bind this channel
to the name space, and that open(), knowing the path requested,
retrieves the channel from the kernel and then calls ConnectAttach()
to make a connection.

Can anyone point me at the missing routines? I’m not
against using the higher level interface. I’d just like to
know what is going on below.

Thanks,


Mitchell Schoenbrun --------- maschoen@pobox.com

This question has been out here for a few days without
reply. It can’t be all that difficult unless for some reason
this interface is not public. Is that the case?

Mitchell Schoenbrun --------- maschoen@pobox.com

Previously, Mitchell Schoenbrun wrote in qdn.public.qnxrtp.os:

This question has been out here for a few days without
reply. It can’t be all that difficult unless for some reason
this interface is not public. Is that the case?

Previously, Mitchell Schoenbrun wrote in qdn.public.qnxrtp.os:
In trying to understand the new NTO message passing I’ve
searched down to where I understand that on the server side
I call resmgr_attach() to attach to a piece of name space
and create a channel to it. On the client side I call
open() to find the channel and make a connection to it.

I can’t find anything lower level. I presume that resmgr_attach()
calls ChannelCreate() and then some routine to bind this channel
to the name space, and that open(), knowing the path requested,
retrieves the channel from the kernel and then calls ConnectAttach()
to make a connection.

Can anyone point me at the missing routines? I’m not
against using the higher level interface. I’d just like to
know what is going on below.

Well, I’ll take a shot at it, Mitchell:

Resmgr_attach() is not necessary unless you are making a resource
manager. At the lowest level, a connection from client to server is
made by the client to a nd/pid/chid tuple on the server, where:
nd = node descriptor. This is not a node number, but a
locally unique number on the client’s machine. No other
machine has this number as a descriptor for the server’s
machine. In addition, this nd could change whenever the
server’s machine drops off the network or is rebooted.
pid = the pid of the server, on the server’s node
chid = a channel ID on the server. The server had to create
this.

The client calls
coid = ConnectAttach (nd, pid, chid, _NTO_SIDE_CHANNEL, 0);
to produce a connection ID, which is essentially the file descriptor
for this client/server connection. This connection ID is only
meaningful to the client. The server doesn’t even know that it was
created.

To send a message from client to server, the client calls:
MsgSend (coid, sendbuffer, sendbytes, recbuffer, recbytes);
which is almost exactly the same as the QNX4 Send() call, except that
the parameters are in a different order.

On the server side, the server calls:
chid = ChannelCreate (0);
to simply create the next available channel. This is the chid that
the client needs to use when calling ConnectAttach().

Now, the server waits for messages:
rcvid = MsgReceive (chid, rcvmsg, maxlen, NULL);

The rcvid is a magic cookie that is used in MsgDeliverEvent and
MsgReply to refer to the client. There is no useful information
embedded in it. The lifetime of a rcvid is “long”, where “long” is
not well documented, but it is definitely longer-lived than the
particular send/receive/reply transaction that created it. You can,
for example, MsgReply to a rcvid, and then use the rcvid later in a
MsgDeliverEvent, and that rcvid will still be valid.

The server does its processing, then calls:
MsgReply (rcvid, status, replymsg, length);
which concludes the S/R/R transaction.
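The server side of the S/R/R transaction can be sketched the same way. Again this is QNX-only illustration code, with an arbitrary buffer size and no real work inside the loop:

```c
#include <sys/neutrino.h>
#include <errno.h>    /* EOK */
#include <stdio.h>
#include <unistd.h>   /* getpid() */

int main(void)
{
    char msg[256];

    int chid = ChannelCreate(0);
    if (chid == -1) {
        perror("ChannelCreate");
        return 1;
    }
    /* The client needs this pid/chid pair; handing it over is the
       hard part the rest of the thread is about. */
    printf("pid=%d chid=%d\n", getpid(), chid);

    for (;;) {
        int rcvid = MsgReceive(chid, msg, sizeof(msg), NULL);
        if (rcvid == -1) {
            perror("MsgReceive");
            continue;
        }
        /* ... examine msg, do the work ... */
        MsgReply(rcvid, EOK, "ok", 3);  /* concludes the S/R/R transaction */
    }
}
```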

Now, the 64-dollar question(s):
The server created a chid, which the client needs, yet there is no
apparent way for the client to get this chid. How do I get the chid
to the client? For that matter, how do I get the pid to the client,
and what the heck is nd?

The answer is, “Deal with it.” You can write the chid and pid to a
file, or pass it to the client in its argument list, or pass it
through a pipe, or whatever. The original assumption in QNX6 was that
all servers must be resource managers. With a resource manager you
use resmgr_attach() to create an entry in the filesystem name space,
and an open() on that name generates a coid for the client directly,
bypassing the need for nd/pid/chid. Later, the POSIX purists buckled
under public pressure, and the functions name_attach and name_open
were added as analogs to the QNX4 qnx_name_attach and qnx_name_locate
calls. The problem is that they do not currently work over the
network, so you have to live with local node communication.
Name_attach creates a filesystem name, just like resmgr_attach, and
name_open generates a coid, just like open.
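Assuming the name_attach()/name_open() pair behaves as documented, the two sides reduce to roughly the following; the service name "myservice" is invented for illustration, and this only builds on QNX Neutrino:

```c
#include <sys/dispatch.h>   /* name_attach(), name_open() */
#include <stdio.h>

/* Server: publish a name; clients never see nd/pid/chid at all. */
static int serve(void)
{
    name_attach_t *att = name_attach(NULL, "myservice", 0);
    if (att == NULL) {
        perror("name_attach");
        return -1;
    }
    return att->chid;   /* MsgReceive() on this channel as usual */
}

/* Client: resolve the name directly to a connection id. */
static int connect_to_service(void)
{
    int coid = name_open("myservice", 0);
    if (coid == -1)
        perror("name_open");
    return coid;        /* MsgSend() on this coid as usual */
}
```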

As for the nd, this is another magic cookie, only valid on the node on
which it was created. That is, if you have machines A, B, C, then A
could have nd_of_B = 43, nd_of_C = 22. B could have nd_of_A = 22,
nd_of_C = 99. You cannot share nd’s among tasks on different
machines, though they are sharable among tasks on the same machine.
The nd of your own node is always 0.

You create an nd by doing a name lookup on the node name, which you got
because you just knew it. The node name is typically the hostname
of the machine, unless you specifically stated otherwise in the
parameter list to npm-qnet.so. The call to create an nd is:

nd = netmgr_strtond (nodename, NULL);

The node name is referred to as the
Fully Qualified Node Name, or FQNN. So, in addition to passing the
chid and pid of the server to the client, you must also pass the
variable-length FQNN string to the client. If you are relying on your
TCP setup to generate the hostname for the FQNN, you have to be
careful. In QNET, the FQNN must be unique (one-to-one mapping with a
node). In TCP, no such restriction exists. Consequently, a
well-formed TCP network naming strategy can be a malformed QNET
network naming strategy.

So, the API to perform low-level messaging is simple, and very similar
to QNX4, but the specific information required by that API is subject
to a catch-22. In order for the client to connect to the server, the
server must first send a message to the client with the information
the client needs. But of course the server cannot know how to send
the message to the client. So you need a mailbox, effectively, where
the server places this information. With a resource manager, this is
formalized through the file system name space. The server’s message
to the client is the registration of a name. If you just want to whip
up two processes that send messages like you did in QNX4, then you
need another way.

Incidentally, this problem always existed in QNX4 as well. You needed
to know something about the server - its nid/pid. The QNX4 nameloc
program and the name registry in Proc solved this mapping for you.

The complexity in QNX6 stems from just one thing - nobody has written
a global naming service yet. The resmgr_attach() and name_attach()
functions, since they operate on the file name space of the local
machine, don’t really handle one class of problems: “I know that
service XYZ exists, but I don’t know (or care) which node it is on.
I just want to use it.” The solutions to this question are generally
either an exhaustive search of all available nodes, or storage of
global information at one agreed-upon central location. Both are poor
solutions.

Hope this helps to shed some light,
Andrew

Previously, Andrew Thomas wrote in qdn.public.qnxrtp.os:

The answer is, “Deal with it.” You can write the chid and pid to a
file, or pass it to the client in its argument list, or pass it
through a pipe, or whatever.

Andrew,

Ok, so thank you for your very expansive description. I’m
sure this will help someone out. I was very specific about
my question because I’m aware of all this. It seems that
the answer you are giving me is either “write the chid and
pid to a file”, “pass it to the client in its argument
list”, “or pass it through a pipe, or whatever”, or
“Deal with it”.

None of these are acceptable. I can use resmgr_attach(),
however it is perfectly clear that this routine is somewhat
higher than I really need. Below resmgr_attach() is a
routine that must assign the chid and pid to the attached
name space. Likewise, on the client side some routine
buried in open() must first get the chid and pid using the
name space as a key before sending its first message to the
server.

If QSSL is not documenting this interface, well so be it,
I’ll “Deal with it”. It just seems a little arbitrary and
to tell you the truth, goofy. QNX 4 did have this interface
available, although maybe not as well documented as it could
be.

Mitchell Schoenbrun --------- maschoen@pobox.com

Mitchell Schoenbrun wrote:

Previously, Andrew Thomas wrote in qdn.public.qnxrtp.os:

The answer is, “Deal with it.” You can write the chid and pid to a
file, or pass it to the client in its argument list, or pass it
through a pipe, or whatever.


I can’t resist … the ‘design’ of the interprocess communication
seems to me like ‘designed chaos’, using at least 4 different
‘cookies’ (nd, chid, coid, rcvid …). The semantics and scope of these
‘cookies’ are really weakly documented … if at all.

The IPC of QNX4 was based just on the PID (or virtual PID) … that
was a clean design!

I don’t believe that a central name service could (or should) be a
solution for QNX6. The approach of HARNESS (the follow-up to PVM) seems
better. Why not introduce a management server on each node
and distribute all configuration info to a configuration database
… maintained on each node? It would avoid the ‘single points
of failure’ … introduced by a global name server.

Just my 2 cents …

Armin




Who just said the road to hell is not paved by good intentions or at all?
Ask Andrew now :wink:


“Mitchell Schoenbrun” <maschoen@pobox.com> wrote in message
news:Voyager.010320221012.13175A@schoenbrun.com


I have to agree with Mitchell that the resmgr_* set of functions sounds
pretty heavy for connecting with “home built” rather than “store bought”
managers. Is prefix aliasing not available for this purpose? Will it be?
I guess I’m asking whether the networked spanning prefix space will be as
solid and versatile as in QNX4 and, if so, when? Also, in your description,
there is no mention of scatter/gather messaging; is this not present or just
not mentioned?

These are important issues to me as we are facing decisions on when/how to
port several dozen megabytes of code that relies heavily on qnet, prefix
namespace, transparent location of servers across nodes, and non-routine IP
communications (multi-casting)…


I have to agree with Mitchell that the resmgr_* set of functions sounds
pretty heavy for connecting with “home built” rather than “store bought”
managers.

I have to disagree with you and Mitchell on this. The resmgr library
does all of the “heavy” work for you, and it is a shared library so the
space penalty on even the smallest embedded system is negligible (since
there are going to be at least a couple of QSSL managers making use of
it - the more managers make use of it, the more space efficient it is).
The big benefit is that your “home grown” software has all the
infrastructure there to “grow-up” into a “store-bought” package if
necessary, with little or no downside.

Is prefix aliasing not available for this purpose?

What do you mean by prefix aliasing ?

Will it be? I guess I’m asking whether the networked spanning prefix
space will be as solid and versatile as in QNX4 and, if so, when?

I can’t answer obviously, but this is a good question for QSSL

Also, in your description, there is no mention of scatter/gather
messaging; is this not present or just not mentioned?

It’s right there in the QNX docs under readv/writev.

Thanks for all the moral support guys. The theoretical
just got a little tragic last night. Here is what I
ran into.


I want to do something that I can write in my sleep in QNX4.
I have a manager that receives user messages but needs to
wake up occasionally on its own. Under QNX4 I create a
proxy and attach it to a timer. The timer fires, triggers
the proxy and my Receive() wakes up.

Now under Neutrino the fun starts. I’ll start with the
timer. I setup a timer to send a pulse periodically. One of
the first things I have to do is give the event structure a
connection id (coid). Ok, that’s fair, the timer needs to
know what connection to send the pulse over.
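Mechanically, that event-structure setup can be sketched as follows: connect back to your own channel (nd 0, pid 0 refers to the calling process) and hand that coid to the timer via the documented SIGEV_PULSE_INIT() macro. A QNX-only sketch, error checking omitted:

```c
#include <sys/neutrino.h>
#include <time.h>

static int setup_periodic_pulse(void)
{
    int chid = ChannelCreate(0);
    /* nd 0, pid 0 = connect back to this very process */
    int coid = ConnectAttach(0, 0, chid, _NTO_SIDE_CHANNEL, 0);

    struct sigevent ev;
    SIGEV_PULSE_INIT(&ev, coid, SIGEV_PULSE_PRIO_INHERIT,
                     _PULSE_CODE_MINAVAIL, 0);

    timer_t tid;
    struct itimerspec its = {
        .it_value    = { 1, 0 },   /* first fire after 1 s */
        .it_interval = { 1, 0 },   /* then every 1 s */
    };
    timer_create(CLOCK_REALTIME, &ev, &tid);
    timer_settime(tid, 0, &its, NULL);

    return chid;   /* MsgReceive() here sees both pulses and messages */
}
```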

But now I have a catch-22. I could use ChannelCreate() to
create a channel, and then ConnectAttach() to create the coid.
But now I have a coid which cannot be found out by my clients.

Ok, back to the drawing board, I do a resmgr_attach(), but
where’s the channel id? It’s buried inside resmgr_attach().
Is it in the dispatch_t structure, probably, but I can’t find
this defined anywhere in /usr/include/*.

If resmgr_block() could wait on two coid’s then things would
be ok. Well it could do this by using select() I think, but
it doesn’t. I could write this code myself, but again I don’t
have the channel id to wait on for the client messages.

Maybe what I need to do is create a thread, have the thread
open the name space, and then have the timer send the pulse
to it that way. It seems awfully kludgy to need an extra thread
just to have a pulse wake up a manager.

I hope I’ve missed something simple here. Can someone help
me out?

Mitchell Schoenbrun --------- maschoen@pobox.com

Mitchell Schoenbrun wrote:

Thanks for all the moral support guys. The theoretical
just got a little tragic last night. Here is what I
ran into.

I want to do something that I can write in my sleep in QNX4.
I have a manager that receives user messages but needs to
wake up occasionally on its own. Under QNX4 I create a
proxy and attach it to a timer. The timer fires, triggers
the proxy and my Receive() wakes up.

You don’t really need pulse. Signals in Neutrino can carry data just
like pulses (they are implemented by same kernel object). They are also
more lightweight (tax your system less) and generally should be
preferred to pulses unless you have a good reason to do otherwise.

Now under Neutrino the fun starts. I’ll start with the
timer. I setup a timer to send a pulse periodically. One of
the first things I have to do is give the event structure a
connection id (coid). Ok, that’s fair, the timer needs to
know what connection to send the pulse over.

But now I have a catch-22. I could use ChannelCreate() to
create a channel, and then ConnectAttach() to create the coid.
But now I have a coid which cannot be found out by my clients.

Ok, back to the drawing board, I do a resmgr_attach(), but
where’s the channel id? It’s buried inside resmgr_attach().
Is it in the dispatch_t structure, probably, but I can’t find
this defined anywhere in /usr/include/*.

Hmm. I’d try to use name_attach() and name_locate() myself to get coid.
Perhaps name_locate() can locate a name attached by resmgr_attach() too,
who knows…
Might work, although I did not try it. Indeed, you can’t use open() for
that without help of another thread.

If resmgr_block() could wait on two coid’s then things would
be ok. Well it could do this by using select() I think, but
it doesn’t. I could write this code myself, but again I don’t
have the channel id to wait on for the client messages.

I did that using ionotify(), with similar scenario. Had a program
waiting on both timer and incoming data on a socket in a single event
loop. The timer handler then sent data to that socket (it was sort of
benchmark).

Maybe what I need to do is create a thread, have the thread
open the name space, and then have the timer send the pulse
to it that way. It seems awfully kludgy to need an extra thread
just to have a pulse wake up a manager.

I hope I’ve missed something simple here. Can someone help
me out.

Yes, it is kludgy as a way to obtain a coid, but then using a pulse to get
timer events is unnecessary overhead in the first place, if your timer
handler is very simple. Otherwise you might as well conclude
that having a separate thread is not such a bad idea. You can’t
handle incoming messages while you’re handling the timer and vice versa. If
your program gets to run on SMP then a thread would be good. In any case
you might get better determinism under heavy load with a separate thread.

  • Igor

Rennie Allen wrote:

I have to agree with Mitchell that the resmgr_* set of functions sounds
pretty heavy for connecting with “home built” rather than “store bought”
managers.

I have to disagree with you and Mitchell on this. The resmgr library
does all of the “heavy” work for you, and it is a shared library so the
space penalty on even the smallest embedded system is negligible (since
there are going to be at least a couple of QSSL managers making use of
it - the more managers make use of it, the more space efficient it is).
The big benefit is that your “home grown” software has all the
infrastructure there to “grow-up” into a “store-bought” package if
necessary, with little or no downside.

Could be, I have zero experience yet with RTP, have been meaning to get
acquainted but just haven’t found the time yet. So, I’m soliciting
impressions from experienced users.

Is prefix aliasing not available for this purpose?

What do you mean by prefix aliasing ?

In QNX4:

prefix -A/dev/foo=//3/dev/foo

where foo on node three might be attached by my manager.


Will it be? I guess I’m asking whether the networked spanning prefix
space will be as solid and versatile as in QNX4 and, if so, when?

I can’t answer obviously, but this is a good question for QSSL

Also, in your description, there is no mention of scatter/gather
messaging; is this not present or just not mentioned?

It’s right there in the QNX docs under readv/writev.

Cool, as I said, I haven’t started RTP yet.

Thanks for the info Rennie

In QNX4:

prefix -A/dev/foo=//3/dev/foo

where foo on node three might be attached by my manager.

Ok, sure this can be done. It’s called a symbolic link in QNX6 (IMO
much more elegant than having a special namespace, and a special utility
for creating “symbolic links” in that namespace).

e.g.

ln -sP /net/othernode/dev/foo /dev/foo

cat /dev/foo    (goes across the net to othernode/dev/foo)

Better yet (assuming /dev/foo is a mission critical device):

ln -sP /net/othernode~redundant/dev/foo /dev/foo-mission-critical

cat /dev/foo-mission-critical

(you can cut one of the network cables while the “cat” is happening,
with no noticeable effect)

If you want QNX4 style fault tolerance then:

ln -sP /net/othernode~loadbalance/dev/foo /dev/foo-qnx4style-fault-tolerance

(now access to /dev/foo-qnx4style-fault-tolerance will use the combined
bandwidth of both links)

Note that the above scheme means that the type of fault tolerance is
selectable on a service-by-service basis (i.e. “cat /dev/foo” will go
across 1 network link, “cat /dev/foo-mission-critical” will send
duplicate packets across all network links, and “cat
/dev/foo-qnx4style-fault-tolerance” will split up the data across all
network links).

…and even better yet; since QNX6 will stack the namespace (unioning),
you can have automatic redirection to backup services.

You can complain about the current state of the implementation of Qnet
(early beta) but you cannot complain about the design.

It’s right there in the QNX docs under readv/writev.

For the resmgr side, check out resmgr_msgreadv/writev. You really should
check out the on-line docs on RtP. IMO the on-line docs are one of the
most impressive features of RtP (they are orders of magnitude better
than QNX4). Kudos to the docs people.

Cool, as I said, I haven’t started RTP yet.

You’re gonna like it. Frankly, I can’t figure out what all the griping about
IPC (from QNX4ers) is all about; QNX6 is soooo much better (it’s just
not as stable yet, which is to be expected at this point).

Previously, Igor Kovalenko wrote in qdn.public.qnxrtp.os:

You don’t really need pulse. Signals in Neutrino can carry data just
like pulses (they are implemented by same kernel object). They are also
more lightweight (tax your system less) and generally should be
preferred to pulses unless you have a good reason to do otherwise.

A possibility. Signals are not synchronized the way a pulse
would be, however. I could make this work.

Hmm. I’d try to use name_attach() and name_locate() myself to get coid.
Perhaps name_locate() can locate a name attached by resmgr_attach() too,
who knows…
Might work, although I did not try it. Indeed, you can’t use open() for
that without help of another thread.

I’ve been thinking about this. I thought that this was QSSL’s
kludge add-on. Needs another admin running too. I’m not saying
what I want can’t be done. It just seems like the obvious
leanest route is blocked, and for no other reason than
politics.

I did that using ionotify(), with similar scenario. Had a program
waiting on both timer and incoming data on a socket in a single event
loop. The timer handler then sent data to that socket (it was sort of
benchmark).

Right, but you didn’t do this with dispatch_block(). If I do
a resmgr_attach() I don’t know the channel to receive on.


Yes, it is kludgy as a way to obtain a coid, but then using a pulse to get
timer events is unnecessary overhead in the first place, if your timer
handler is very simple. Otherwise you might as well conclude
that having a separate thread is not such a bad idea. You can’t
handle incoming messages while you’re handling the timer and vice versa. If
your program gets to run on SMP then a thread would be good. In any case
you might get better determinism under heavy load with a separate thread.

Hmmm, well I may end up going in this direction then.


Mitchell Schoenbrun --------- maschoen@pobox.com

Previously, Rennie Allen wrote in qdn.public.qnxrtp.os:

Is prefix aliasing not available for this purpose?

What do you mean by prefix aliasing?

I mean a kernel call that will attach my nd/pid/chid to part of
the name space. A call other than resmgr_attach(). Clearly
resmgr_attach() does this and much more.


Mitchell Schoenbrun --------- maschoen@pobox.com

Rennie Allen <RAllen@csical.com> wrote:

For the resmgr side, check out resmgr_msgreadv()/resmgr_msgwritev(). You really should
check out the on-line docs on RtP. IMO the on-line docs are one of the
most impressive features of RtP (they are orders of magnitude better
than QNX4’s). Kudos to the docs people.

Thanks! This is music to our ears. We keep reading the newsgroups
for information to use to enhance the docs. The
“Writing a Resource Manager” chapter in the Programmer’s Guide
is currently under revision. We’ve got short and long range plans
for the chapter – you should see the start of the rewrite shortly.
Just need to fill in some blanks! ;-)

-donna

Mitchell Schoenbrun <maschoen@pobox.com> wrote:

Previously, Rennie Allen wrote in qdn.public.qnxrtp.os:

Is prefix aliasing not available for this purpose?

What do you mean by prefix aliasing ?

I mean a kernel call that will attach my nd/pid/chid to part of
the name space. A call other than resmgr_attach(). Clearly
resmgr_attach() does this and much more.

There’s been a fair bit of stuff here, but I don’t think I’ve seen
everything covered off in this thread.

There are currently three main ways for one process to find another.
(Under QNX4 you still had this problem – what was the PID of the
process providing a service? You used qnx_name_attach() & locate()
to find this.)

  1. Use a resource manager (the QNX4 IO manager equivalent). resmgr_attach()
    registers a name in the name space. Block with dispatch_block(), handle
    pulses with pulse_attach(), and get a connection to yourself so a timer
    or ISR can send you pulses with message_connect(). Clients find you with
    open(), and the fd returned can be passed to MsgSend() as well: all fds
    are coids.

  2. Use name_attach() and name_open() (the QNX4 qnx_name_attach() & locate()
    replacement). name_attach() registers a name and returns a structure which
    has a chid entry in it. You wait for messages with MsgReceive(), and you can
    connect to your own chid with ConnectAttach() for timers/ISRs. This is
    not (currently) implemented for global (cross-network) names, but the
    API hooks are there for this in the future.

  3. Grow your own method – argv, file, common starter program, or
    whatever.
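A minimal server-side sketch of method 2, assuming QNX Neutrino headers (<sys/dispatch.h>); the service name is made up and error handling is trimmed:

```c
#include <stdio.h>
#include <sys/dispatch.h>
#include <sys/neutrino.h>

int main(void)
{
    /* Register a name in the name space; the returned structure
     * exposes the chid of the channel name_attach() created. */
    name_attach_t *att = name_attach(NULL, "my_service", 0);
    if (att == NULL) {
        perror("name_attach");
        return 1;
    }

    /* Knowing our own chid, we can attach a connection to ourselves,
     * e.g. so a timer or ISR armed with SIGEV_PULSE can reach us. */
    int self_coid = ConnectAttach(0, 0, att->chid, _NTO_SIDE_CHANNEL, 0);
    MsgSendPulse(self_coid, -1, _PULSE_CODE_MINAVAIL, 0);  /* demo pulse */

    for (;;) {
        struct _pulse msg;
        int rcvid = MsgReceive(att->chid, &msg, sizeof msg, NULL);
        if (rcvid == 0) {
            printf("pulse code %d\n", msg.code);   /* pulse path */
        } else {
            MsgReply(rcvid, 0, NULL, 0);           /* message path */
        }
    }
}
```

A client would then obtain its coid with name_open("my_service", 0) and pass it straight to MsgSend().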

Hope this helps,

-David

QNX Training Services
dagibbs@qnx.com

“Mitchell Schoenbrun” <maschoen@pobox.com> wrote in message
news:Voyager.010320221012.13175A@schoenbrun.com

If QSSL is not documenting this interface, well, so be it,
I’ll “deal with it”. It just seems a little arbitrary and,
to tell you the truth, goofy. QNX 4 did have this interface
available, although maybe not as well documented as it could
have been.

That is the interface. The description I gave you is as low
as it goes. The resmgr library is just doing the same thing,
but using the file name space as the means for making that
initial contact between processes.

The real stumper that everybody runs into is that it is hard
to get the nd/pid/chid if you aren’t a resource manager. That’s
because there’s no high level API for an alternate method.

Cheers,
Andrew

“Mitchell Schoenbrun” <maschoen@pobox.com> wrote in message
news:Voyager.010321114612.212A@schoenbrun.com

Thanks for all the moral support guys. The theoretical
just got a little tragic last night. Here is what I
ran into.
[snip]

We are going to be releasing an API that abstracts the message passing
a little bit and gives you a unified method for dealing with pulses,
sockets (file descriptors), synchronous messages, asynchronous
messages, and signals. The API comes with a queue manager and a name
server that solves the problem of discovering the nd/pid/chid
of a server task. This API will also allow you to write code that is
source-compatible with QNX4 and Linux if you use the API layer
exclusively. The name server also generates “task started” and “task
died” messages to all processes with names currently attached, which
solves the _PPF_INFORMED problem.

We plan to release this free for non-commercial use. If you think this
might help, we would be happy to have people help us test it.

Cheers,
Andrew

Previously, David Gibbs wrote in qdn.public.qnxrtp.os:

  1. Use a resource manager (the QNX4 IO manager equivalent). resmgr_attach()
    registers a name in the name space. Block with dispatch_block(), handle
    pulses with pulse_attach(), and get a connection to yourself so a timer
    or ISR can send you pulses with message_connect().

This is precisely what I’m hoping to do. Now, after doing the
resmgr_attach(), how do I “get a connection to yourself”? I don’t
have the channel id that would come from ChannelCreate();
resmgr_attach() does not return this id to me. Without it,
my only hope is to open() the name space, but this clearly would
need to be done by another thread. Otherwise I’d be doing a MsgSend()
to myself.
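If I’ve read the dispatch docs right, message_connect() is meant to close exactly this gap: given the dispatch handle, it returns a coid connected to your own channel, with no second thread and no open(). A QNX-only sketch (error handling trimmed; the wrapper function is my own):

```c
#include <sys/dispatch.h>
#include <sys/neutrino.h>
#include <time.h>

/* Call after dispatch_create()/resmgr_attach(): arms a 1-second
 * repeating timer whose pulses arrive on the resmgr's own channel. */
void arm_timer_pulse(dispatch_t *dpp)
{
    /* A connection to ourselves, without ever seeing the chid. */
    int coid = message_connect(dpp, MSG_FLAG_SIDE_CHANNEL);

    struct sigevent ev;
    SIGEV_PULSE_INIT(&ev, coid, SIGEV_PULSE_PRIO_INHERIT,
                     _PULSE_CODE_MINAVAIL, 0);

    timer_t tid;
    struct itimerspec ts = {
        .it_value    = { 1, 0 },   /* first expiry: 1 s */
        .it_interval = { 1, 0 },   /* then every 1 s */
    };
    timer_create(CLOCK_REALTIME, &ev, &tid);
    timer_settime(tid, 0, &ts, NULL);
    /* The pulses are then handled by whatever was registered with
     * pulse_attach() and picked up by the dispatch_block() loop. */
}
```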



Mitchell Schoenbrun --------- maschoen@pobox.com

Previously, Andrew Thomas wrote in qdn.public.qnxrtp.os:

That is the interface. The description I gave you is as low
as it goes. The resmgr library is just doing the same thing,
but using the file name space as the means for making that
initial contact between processes.

Right, how does resmgr_attach() “use the file name space”? There
must be a call that causes the kernel or proc to know what
nd/pid/chid is associated with the name space. If resmgr_attach()
is the only API to do this, well then how do I find out what the
channel id is that resmgr_attach() creates?

I’m getting a little weary here. This will be at least the
third time I’ve mentioned this catch-22. I’d be grateful
to find out that I’m missing something, or misunderstanding.

It doesn’t seem rational that you can set up a timer to send
a pulse to your channel via a connection, all with one
thread, but only if you are fortunate enough not to need
to attach to the name space for client visibility.

The real stumper that everybody runs into is that it is hard
to get the nd/pid/chid if you aren’t a resource manager. That’s
because there’s no high level API for an alternate method.

Well, the problem would be solved if resmgr_attach() would give me
the channel id so I could use ConnectAttach() to make a connection
to myself. David Gibbs alludes to this possibility in his response,
but doesn’t tell me how it can be done.

Thanks for your patience.


Mitchell Schoenbrun --------- maschoen@pobox.com