message passing, multiple channels and async I/O

Let’s start directly with an example. I have a single-threaded server
that must handle messages from two clients on different channels:

client1 -----> server <----- client2

If I design a server like this, I jump straight into a dark hole: the
server will remain blocked within a MsgReceive call on one channel,
completely ignoring the second client.

In this case it would be useful to receive some kind of signal when
I actually need to receive a message from a client. A possible
solution I’ve thought of is to create 2 threads, each one blocked
within a MsgReceive call, and then deliver a signal to the main idle
thread. The problem here is that the server should be single-threaded.
Is there any way to perform asynchronous message passing on the server
side (MsgReceive)?

Another question:

In the example above the server must be able to do something like
this:

receive a message from client1, process this message, then deliver a
message to the second client. The trick here is that the server is
probably blocked in a MsgReceive from client2, and as I read in the
documentation I can’t reverse the message direction (it has to be
client2->server). Client2 should instead ask the server, and the server
should reply with the data that must be passed. MsgDeliverEvent would
be useful in this case, but is it safe to call it while a thread is
already waiting in a MsgReceive call?

scheme, from the client1 → server ← client2 perspective:

  • server is blocked within 2 MsgReceive calls
  • client1 sends a message to the server (chn 1) to be delivered to
    client2
  • thread1 (chn 1) in the server unblocks, performs some data analysis,
    and sends a MsgDeliverEvent to client2 (chn 2)
  • client2 (which may or may not be blocked) receives the signal and
    sends the request message to server thread 2
  • thread 2 receives the message, sends the data and returns to the
    blocked state.

Is this design possible?
The important thing here is to have the 2 clients completely free
(they’re not blocked), since they must be ready to handle user
interaction with some form of UI in a resource-limited system.

Why is it essential that this be a single-threaded server? This is exactly
the scenario that multi-threading was created for.

That said, you can do a conditional receive inside of the work loop while
processing a request from the other client. But I would design a good
multi-threaded application any day for this situation.
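
A conditional (non-blocking) receive can be approximated in Neutrino by arming a
very short kernel timeout just before MsgReceive(), so the call comes back with
ETIMEDOUT almost immediately when no client is waiting. A rough sketch only;
try_receive is just an illustrative name:

#include <sys/neutrino.h>
#include <stddef.h>
#include <stdint.h>
#include <time.h>

/* Poll a channel without staying blocked: with a zero timeout armed, the
   receive gives up within (at most) one clock tick and returns -1 with
   errno set to ETIMEDOUT when no sender is waiting. */
int try_receive(int chid, void *msg, size_t size)
{
    uint64_t timeout = 0;

    TimerTimeout(CLOCK_REALTIME, _NTO_TIMEOUT_RECEIVE, NULL, &timeout, NULL);
    return MsgReceive(chid, msg, size, NULL);
}

Inside the work loop the server can call try_receive() on the other channel
between requests and simply carry on when it comes back empty.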

One last option that I used very frequently in QNX 4. A server process
loads and forks, creating two separate processes as opposed to a
multi-threaded process. The child process becomes the workhorse process and
the parent process does the messaging. When a client process sends a request to
the parent process, the parent process immediately replies, thus preventing
the client process from blocking for a substantial period of time. The
reply does NOT indicate completion, success, or failure of the request. It
merely indicates that the server received the request. The parent process
then puts the request into a queue and signals the child process that there
is a new request. The parent process is now blocked waiting to receive a
new request. The child process processes the request. When it is done or
needs to indicate a failure, it can signal the client that there is a status
change. The client then sends a status request to the parent process, and the
parent process replies with the new status: either "the request you gave me
is done and here are the results" or "your request failed for this reason".
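
In Neutrino terms, the parent's half of that early-reply protocol might be
sketched like this (REQ_NEW/REQ_STATUS and the two helpers are made-up names,
and the fork/queue mechanics are left out):

#include <sys/neutrino.h>

#define REQ_NEW    1               /* hypothetical message types */
#define REQ_STATUS 2

struct req    { int type; int id; /* ... payload ... */ };
struct status { int done; int error; };

void enqueue_work(const struct req *r);    /* hypothetical: queue it, wake the worker */
struct status lookup_status(int id);       /* hypothetical: worker's recorded result  */

void parent_message_loop(int chid)
{
    struct req msg;
    int ack = 0;

    for (;;) {
        int rcvid = MsgReceive(chid, &msg, sizeof msg, NULL);
        if (rcvid == -1)
            continue;

        if (msg.type == REQ_NEW) {
            /* reply right away: this only acknowledges receipt,
               it says nothing about completion */
            MsgReply(rcvid, 0, &ack, sizeof ack);
            enqueue_work(&msg);
        } else if (msg.type == REQ_STATUS) {
            struct status st = lookup_status(msg.id);
            MsgReply(rcvid, 0, &st, sizeof st);
        }
    }
}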

But go with the multi-threading approach.

Bill Caroselli


Maybe I don’t fully understand your question, but is the server like:

chid1 = ChannelCreate(0);
chid2 = ChannelCreate(0);

/* wait for client 2 to check in, so that we can MsgDeliverEvent() to
   him later */
rcvid2 = MsgReceive(chid2, ...);
/* save away rcvid2 & the event carried in the check-in message, and
   reply right away so client 2 isn't left reply-blocked */
MsgReply(rcvid2, ...);

for (;;) {
    rcvid1 = MsgReceive(chid1, ...);
    /* process data from client1 */
    MsgReply(rcvid1, ...);

    MsgDeliverEvent(rcvid2, &event);
    rcvid2 = MsgReceive(chid2, ...);   /* client2's request, prompted by the event */
    MsgReply(rcvid2, ...);             /* the data */
}

Client 1 only ConnectAttach(chid1), and Client2 only ConnectAttach(chid2).
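
On the client2 side, the check-in might look something like this (a sketch
only; server_pid, server_chid2 and MY_PULSE_CODE are placeholders for however
the two processes find each other):

#include <sys/types.h>
#include <sys/neutrino.h>
#include <sys/siginfo.h>

#define MY_PULSE_CODE (_PULSE_CODE_MINAVAIL + 1)   /* hypothetical pulse code */

/* client2: register a notification event with the server, then go back
   to the UI loop without blocking on the server. */
void client2_check_in(pid_t server_pid, int server_chid2)
{
    int self_chid   = ChannelCreate(0);   /* channel the pulse will arrive on */
    int self_coid   = ConnectAttach(0, 0, self_chid, _NTO_SIDE_CHANNEL, 0);
    int server_coid = ConnectAttach(0, server_pid, server_chid2,
                                    _NTO_SIDE_CHANNEL, 0);
    struct sigevent event;

    SIGEV_PULSE_INIT(&event, self_coid, SIGEV_PULSE_PRIO_INHERIT,
                     MY_PULSE_CODE, 0);

    /* check in: hand the event to the server, which saves rcvid & event
       and replies immediately */
    MsgSend(server_coid, &event, sizeof event, NULL, 0);

    /* ... back in the UI loop: when the pulse shows up on self_chid,
       MsgSend the real request to server_coid and pick up the data in
       the reply ... */
}

Client1's side stays a plain MsgSend() on its connection to chid1.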

-xtang


Previously, wavexx@apexmail.com wrote in qdn.public.qnxrtp.os:

If I design a server like this, I jump straight into a dark hole: the
server will remain blocked within a MsgReceive call, completely
ignoring the second client.

If you only want a single thread active in the process, you could
use two threads, but take a semaphore after the MsgReceives so
that only one thread ever does anything at a time. There should be
no real difference between this and only using one thread, and
it solves your problem.
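
Something like this, assuming the two channels already exist (handle_message
stands in for the real processing and MsgReply):

#include <pthread.h>
#include <semaphore.h>
#include <sys/neutrino.h>

static sem_t work_sem;                 /* initialized to 1 in main() */

struct loop_arg { int chid; };

void handle_message(int rcvid, void *msg);   /* hypothetical: process & MsgReply */

/* One of these runs per channel; both block in MsgReceive, but the
   semaphore guarantees only one of them is ever doing work at a time. */
void *receive_loop(void *p)
{
    int chid = ((struct loop_arg *)p)->chid;
    char msg[256];

    for (;;) {
        int rcvid = MsgReceive(chid, msg, sizeof msg, NULL);
        if (rcvid == -1)
            continue;

        sem_wait(&work_sem);           /* serialize with the other thread */
        handle_message(rcvid, msg);
        sem_post(&work_sem);
    }
    return NULL;
}

/* in main():
     sem_init(&work_sem, 0, 1);
     pthread_create(&t1, NULL, receive_loop, &arg1);   // arg1.chid = chid1
     pthread_create(&t2, NULL, receive_loop, &arg2);   // arg2.chid = chid2  */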

In the example above the server must be able to do something like
this:

receive a message from client1, process this message, then deliver a
message to the second client. The trick here is that the server is
probably blocked in a MsgReceive from client2, and as I read in the
documentation I can’t reverse the message direction (client2->server).

Here you could just create a thread whose sole purpose is to wait
for that message from client2 and deliver the data. It could
then either exit or wait for another assignment.

Mitchell Schoenbrun --------- maschoen@pobox.com

wavexx@apexmail.com wrote:

Let’s start directly with an example. I have a single-threaded server
that must handle messages from two clients on different channels:

client1 -----> server <----- client2

First question… why must the server have different channels?
Why can’t both clients attach to the same channel?

The two natural architecture solutions for your problem are either
2 threads, or just one channel.
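
With a single channel, both clients simply ConnectAttach() to the same chid
and the server dispatches on a type field in the message; a rough sketch with
made-up message types:

#include <errno.h>
#include <sys/neutrino.h>

#define MSG_FROM_CLIENT1 1     /* hypothetical message types */
#define MSG_FROM_CLIENT2 2

struct msg { int type; /* ... payload ... */ };

void serve(int chid)           /* both clients ConnectAttach() to this chid */
{
    struct msg m;

    for (;;) {
        int rcvid = MsgReceive(chid, &m, sizeof m, NULL);
        if (rcvid == -1)
            continue;

        switch (m.type) {
        case MSG_FROM_CLIENT1:
            /* process client1's data, then unblock it */
            MsgReply(rcvid, 0, NULL, 0);
            break;
        case MSG_FROM_CLIENT2:
            /* reply with whatever data is pending for client2 */
            MsgReply(rcvid, 0, NULL, 0);
            break;
        default:
            MsgError(rcvid, ENOSYS);
            break;
        }
    }
}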

-David


Is this design possible?
The important thing here is to have the 2 clients completely free
(they’re not blocked), since they must be ready to handle user
interaction with some form of UI in a resource-limited system.

If they are clients, they must be send-blocked or reply-blocked
at some point.

Hm… another possible architecture… invert the client/server…

Have the “server” Send to the “client”, rather than vice-versa.

Then, if a “client” needs to send to the “server”, it instead sends a
pulse. When the “server” receives the pulse, it MsgSend()s to the
appropriate “client”, basically saying “you called”; the “client” can
then MsgReply() with the request. The “server” works on the request,
then MsgSend()s the results to the “client”, the “client” does a
thank-you reply, and the “server” returns to waiting for pulses.

Then, the “client” is never blocked on the “server”.
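
A sketch of the “server” side of that inversion (the message layouts are made
up, and it assumes the “server” already holds client_coid, a connection to
the “client”'s channel):

#include <sys/neutrino.h>

struct query  { int type; };                 /* "you called?" */
struct req    { int code; /* ... */ };       /* the client's actual request */
struct result { int status; /* ... */ };

void inverted_server(int chid, int client_coid)
{
    struct _pulse pulse;
    struct query  q = { 0 };
    struct req    r;
    struct result res;
    int ack;

    for (;;) {
        /* wait for the "client" to ask for attention */
        MsgReceivePulse(chid, &pulse, sizeof pulse, NULL);

        /* ask what it wants; the reply to this send carries the request */
        MsgSend(client_coid, &q, sizeof q, &r, sizeof r);

        /* ... work on the request, fill in res ... */

        /* hand back the results; the client's reply is just a thank-you */
        MsgSend(client_coid, &res, sizeof res, &ack, sizeof ack);
    }
}

The “client” asks for attention with MsgSendPulse() on its connection to chid,
and otherwise just MsgReceives and MsgReplies on its own channel, so it is
never blocked on the “server”.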

-David

QNX Training Services
dagibbs@qnx.com

wavexx@apexmail.com wrote:

The important thing here is to have the 2 clients completely free
(they’re not blocked), since they must be ready to handle user
interaction with some form of UI in a resource-limited system.

I think a previous post may also shed some light for you.
The article is #1162, in qdn.public.qnxrtp.os, entitled:
“channels must be created for each of threads ???”

-Adam
amallory@qnx.com

David Gibbs <dagibbs@qnx.com> writes:

wavexx@apexmail.com wrote:
Let’s start directly with an example. I have a single-threaded server
that must handle messages from two clients on different channels:

client1 -----> server <----- client2

First question… why must the server have different channels?
Why can’t both clients attach to the same channel?

The two natural architecture solutions for your problem are either
2 threads, or just one channel.

There are lots of ways to do this, depending on the information
flowing between the clients and the server.

  1. If the client does not need an answer from the server, send the
    message via a message queue. Problem solved.

  2. If the client is polling the server for changing information, then
    have the server send the information unsolicited to the client
    through a message queue. The client never sends to the server.

  3. If the client needs a synchronous response from the server,
    construct the server so that it never does a synchronous send to
    any client. The maximum blocking time of any client is the amount
    of time required to service all of the clients ahead of it waiting
    for service from the server.

  4. If the client needs a response, but it does not have to be
    synchronous, make all communication between client and server
    asynchronous through a queue (see the sketch after this list).
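
For options 1 and 4, a minimal POSIX message-queue sketch (the queue name and
sizes are arbitrary, and on QNX the mqueue manager must be running):

#include <fcntl.h>
#include <mqueue.h>
#include <string.h>

/* client side: fire-and-forget request, returns immediately */
void client_send(const char *text)
{
    struct mq_attr attr = { .mq_maxmsg = 16, .mq_msgsize = 256 };
    mqd_t q = mq_open("/demo_requests", O_WRONLY | O_CREAT, 0660, &attr);

    mq_send(q, text, strlen(text) + 1, 0);
    mq_close(q);
}

/* server side: drain whatever is queued without ever blocking on a client */
void server_poll(void)
{
    char buf[256];
    struct mq_attr attr = { .mq_maxmsg = 16, .mq_msgsize = 256 };
    mqd_t q = mq_open("/demo_requests", O_RDONLY | O_CREAT | O_NONBLOCK,
                      0660, &attr);

    while (mq_receive(q, buf, sizeof buf, NULL) != -1) {
        /* ... process one queued request ... */
    }
    mq_close(q);    /* mq_receive failing with EAGAIN just means it's empty */
}

The reverse direction (server back to client) is simply a second queue opened
the other way around.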

Cheers,
Andrew


Andrew Thomas, President, Cogent Real-Time Systems Inc.
2430 Meadowpine Boulevard, Suite 105, Mississauga, Ontario, Canada L5N 6S2
Email: andrew@cogent.ca WWW: http://www.cogent.ca