Communications server architecture

I’m implementing a server that acts as a communications gateway to another
protocol. When the server receives a packet from below, it needs to send
that packet up to the client waiting for it. My original structure was:
  1. server receive()
  2. client send()
  3. (server waits for packet from below)
  4. server reply() with packet
  5. (client works on packet)

The biggest problem with this is that once the server reply()s, the server
has nowhere to send any more packets until the client gets around to
send()ing the next request for a packet. I was losing packets between the
server’s reply() and the client’s next send(). I also want to avoid
buffering packets on the server side.
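In QNX terms, the original loop looks roughly like the sketch below. Note the window after MsgReply(): until the client’s next MsgSend() arrives, a packet from below has no reply-blocked client to go to. The packet type and the wait_for_packet_from_below() helper are hypothetical names for illustration.

```c
#include <errno.h>
#include <sys/neutrino.h>

typedef struct { char data[256]; } packet_t;   /* hypothetical packet type */

extern void wait_for_packet_from_below( packet_t *p );  /* hypothetical */

void server_loop( int chid )
{
    for( ;; ) {
        packet_t packet;

        /* Block until a client send()s its request for a packet. */
        int rcvid = MsgReceive( chid, NULL, 0, NULL );

        wait_for_packet_from_below( &packet );

        /* Reply delivers the packet, but between this reply and the
           client's next MsgSend() there is no blocked client, so a
           packet arriving from below in that window is dropped. */
        MsgReply( rcvid, EOK, &packet, sizeof( packet ) );
    }
}
```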

In this case the server is “well known”, but the client isn’t. However, the
majority of the traffic is flowing from the server to the client (sort of
backwards). I think the send/receive sense has to be swapped for two
reasons:
  1. fix the buffering problem above by changing the client to a
     multithreaded receive()
  2. allow the client to make use of MsgRead() to shuffle around the
     headers of the received packet
New architecture:
  1. client receive() (probably in a multithreaded fashion)
  2. (server waits for packet from below)
  3. server send() packet
  4. (client works on packet)
  5. client reply()

In this case the server needs to ConnectAttach() to the client, but it is
the server’s channel that is well known (not the client’s). Is it
reasonable for the client to send a message with its node/pid/channel to
the server, so that the server can turn around and ConnectAttach() back to
the client?

Thanks,
Shaun

I wanted to report success on this new architecture. It works flawlessly
with no dropped packets. Here’s a handy little snippet of code for the
server:
struct _pulse pulse;
int coid;

int nRcvId = MsgReceive_r( nChId, &pulse, sizeof( pulse ), NULL );
if( nRcvId == 0 && pulse.code == CLIENT_ADVERTISE ) {
    // The client is advertising: look up its node and pid from the
    // pulse's server connection id, then attach back to the channel
    // id the client passed in the pulse's value field.
    struct _client_info info;
    ConnectClientInfo_r( pulse.scoid, &info, 0 );
    coid = ConnectAttach_r( info.nd, info.pid, pulse.value.sival_int,
                            _NTO_SIDE_CHANNEL, 0 );
}
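For completeness, the client side of this advertise handshake might look something like the sketch below. CLIENT_ADVERTISE, the packet type, and the nServerPid/nServerChId parameters are assumptions chosen to match the server snippet above; the real values would come from however the well-known server is located.

```c
#include <errno.h>
#include <sys/neutrino.h>
#include <sys/netmgr.h>

#define CLIENT_ADVERTISE  (_PULSE_CODE_MINAVAIL + 1)  /* must match server */

typedef struct { char data[256]; } packet_t;   /* hypothetical packet type */

void client_loop( pid_t nServerPid, int nServerChId )
{
    /* Create the channel the server will send packets to. */
    int chid = ChannelCreate( 0 );

    /* Attach to the well-known server, then advertise our channel id
       in the pulse value, which is what the server snippet reads. */
    int server_coid = ConnectAttach( ND_LOCAL_NODE, nServerPid,
                                     nServerChId, _NTO_SIDE_CHANNEL, 0 );
    MsgSendPulse( server_coid, -1, CLIENT_ADVERTISE, chid );

    /* Receive packets from the server; running this loop in several
       threads keeps a receiver available for every server send(). */
    for( ;; ) {
        packet_t packet;
        int rcvid = MsgReceive( chid, &packet, sizeof( packet ), NULL );
        if( rcvid > 0 ) {
            /* (work on the packet) */
            MsgReply( rcvid, EOK, NULL, 0 );
        }
    }
}
```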

Cheers,
Shaun

Shaun,

Why don’t you want the server to buffer packets?

Kevin

“Shaun Jackman” <sjackman@nospam.vortek.com> wrote in message
news:afi4lo$bh8$1@inn.qnx.com


I should have added that one way to maybe eliminate the packet loss w/o
making the server send to a client would be to make multiple threaded
clients that are send blocked on the server, instead of just one. Just a
thought.
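That idea, keeping several client threads send-blocked on the server at once so the server always has a reply target available, might be sketched as follows. The pool size, message layout, and parameter names are assumptions for illustration.

```c
#include <pthread.h>
#include <sys/neutrino.h>
#include <sys/netmgr.h>

#define NUM_WORKERS 4                          /* assumption: pool size */

typedef struct { char data[256]; } packet_t;   /* hypothetical packet type */

static int g_coid;                             /* connection to the server */

/* Each worker sends an empty "give me a packet" request and blocks
   until the server replies with one. With several workers blocked at
   once, the server can reply to a spare worker as soon as the next
   packet arrives from below, instead of waiting for a new send(). */
static void *worker( void *arg )
{
    for( ;; ) {
        packet_t packet;
        if( MsgSend( g_coid, NULL, 0, &packet, sizeof( packet ) ) != -1 ) {
            /* (work on the packet) */
        }
    }
    return NULL;
}

void start_clients( pid_t nServerPid, int nServerChId )
{
    pthread_t tid;

    g_coid = ConnectAttach( ND_LOCAL_NODE, nServerPid,
                            nServerChId, _NTO_SIDE_CHANNEL, 0 );
    for( int i = 0; i < NUM_WORKERS; i++ )
        pthread_create( &tid, NULL, worker, NULL );
}
```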

Kevin

“Kevin Stallard” <kevin@ffflyingrobots.com> wrote in message
news:afpls3$njq$1@inn.qnx.com


I want to avoid queuing on the server mostly because of the complexity it
adds. The biggest reason, though, is that the servicing of these packets
has a real-time deadline, and the client should be capable of servicing
each packet before the next arrives. If a packet sits and ages in a queue
for some unknown time, it becomes much harder to keep a handle on latency.

Cheers,
Shaun
