Using MsgReceive() across network

Hi,

I am trying to use MsgSend(), MsgReceive() across a network.
My test program spawns two processes: one local and one remote.
Channels are created and connections are made to enable bi-directional
data transfers between the two spawned processes.

I am finding that if I send small messages (<16 kB), everything works fine.
If I send larger messages, I start detecting data errors on the receive side
of the remote process. This can be eliminated if I use MsgRead(), but this
negatively affects the data transfer performance.

This problem does not exist if I spawn both processes on the same node.

Here is the pseudocode for the two processes:

LocalProcess()
{
    char *buff;

    buff = malloc()

    for( num_messages )
    {
        // SEND DATA
        Set buff[] data to value
        MsgSend( buff )

        // RECEIVE DATA (and verify)
        MsgReceive( buff )
        MsgRead()               // why is this needed?
        MsgReply()
        Verify buff[] data
    }
}

RemoteProcess()
{
    char *buff;

    buff = malloc()

    for( num_messages )
    {
        // RECEIVE DATA (and verify)
        MsgReceive( buff )
        MsgRead()               // why is this needed?
        MsgReply()
        Verify buff[] data

        // SEND DATA
        Set buff[] data to value
        MsgSend( buff )
    }
}
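
For reference, one way the send side of such a setup can look in QNX
Neutrino is sketched below. This is illustrative only, not the actual
test code: the helper name, the node name, and the assumption that the
remote process's pid and chid are already known are all placeholders.

    #include <sys/neutrino.h>
    #include <sys/netmgr.h>
    #include <sys/types.h>
    #include <stddef.h>

    /* Illustrative sketch: send one large buffer to a process on
     * another node.  How the remote pid/chid get exchanged is not
     * shown in this post, so they are simply parameters here. */
    int send_large_buffer( const char *node, pid_t pid, int chid,
                           const char *data, size_t len )
    {
        int nd, coid, rc;

        /* Resolve the remote node name to a node descriptor. */
        nd = netmgr_strtond( node, NULL );
        if( nd == -1 )
            return -1;

        /* Attach a connection to the remote process's channel. */
        coid = ConnectAttach( nd, pid, chid, _NTO_SIDE_CHANNEL, 0 );
        if( coid == -1 )
            return -1;

        /* One send of the whole buffer; no reply payload in this
         * sketch.  Across qnet, only the first part of a large message
         * is copied at send time -- the rest stays on the sending node
         * until the receiver pulls it over with MsgRead(). */
        rc = MsgSend( coid, data, len, NULL, 0 );

        ConnectDetach( coid );
        return rc;
    }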

Regards,
Tom Labno

“Tom Labno” <tlabno@birinc.com> wrote in message
news:cfb0ue$80$1@inn.qnx.com

I am finding that if I send small messages (<16 kB), everything works
fine. If I send larger messages, I start detecting data errors on the
receive side of the remote process.

This can be eliminated if I use MsgRead(), but this
negatively affects the data transfer performance.

That's the proper and documented way to do this. In fact I thought the
threshold was 8K ;-)


Mario Charest <nowheretobefound@8thdimension.com> wrote:

"Tom Labno" <tlabno@birinc.com> wrote in message news:cfb0ue$80$1@inn.qnx.com...

I am finding that if I send small messages (<16 kB), everything works
fine. If I send larger messages, I start detecting data errors on the
receive side of the remote process.

This can be eliminated if I use MsgRead(), but this
negatively affects the data transfer performance.

That's the proper and documented way to do this. In fact I thought the
threshold was 8K ;-)

Exactly the answer I was going to give.

The reason for this is a trade-off between performance and memory
usage. With potentially many processes, from many nodes, sending
potentially large amounts of data across the network to a node, if
qnet (inside io-net) were to buffer all of that data at transmit time,
it could consume prodigious amounts of memory. So, for a cross-network
transfer of more than a small amount of data, the data is not
transferred until the receiving application supplies a buffer in which
to place it, which it does by calling MsgRead(). 8K and 16K tend to be
the places on the speed/memory curve where the gain in speed for extra
memory tails off, a "knee" in the curve. (They are also convenient
power-of-2 values that make machines and programmers happy.)
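
To make that concrete, a minimal receive-side sketch of the pattern
might look like the following. It assumes, as the pseudocode above
suggests, a fixed message size known to both sides; the function name
is illustrative, not from the original program.

    #include <sys/neutrino.h>
    #include <errno.h>
    #include <stddef.h>

    /* Illustrative sketch: receive one fixed-size message on channel
     * 'chid' into 'buf'.  'expected' is the size both sides agreed on. */
    int receive_one_message( int chid, char *buf, size_t expected )
    {
        struct _msg_info info;
        int rcvid;
        size_t have;

        /* Block until a message arrives.  When the sender is on
         * another node, only the first chunk (around the 8K/16K
         * threshold) of a large message is copied in here. */
        rcvid = MsgReceive( chid, buf, expected, &info );
        if( rcvid == -1 )
            return -1;

        /* info.msglen reports how many bytes actually landed in buf. */
        have = info.msglen;

        /* Pull the remainder of the sender's message across the
         * network.  This is the MsgRead() step; without it the tail of
         * buf is never filled in. */
        while( have < expected ) {
            int n = MsgRead( rcvid, buf + have, expected - have, have );
            if( n <= 0 )
                break;              /* error handling omitted */
            have += (size_t)n;
        }

        /* Unblock the sender; no reply payload in this sketch. */
        MsgReply( rcvid, EOK, NULL, 0 );

        return ( have == expected ) ? 0 : -1;
    }

The corruption seen without MsgRead() is exactly that unfilled tail:
MsgReceive() returns successfully, but only info.msglen bytes of the
buffer are valid.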

In fact, the performance savings from not transferring data can be
important because, in some cases, the data may never be transferred at
all. A trivial example would be something like:

fd = open( "/net/node/dev/null", O_WRONLY );
write( fd, buf, 64*1024 );

where copying the least amount of data is best.
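
For instance, the write handler of a null-style resource manager can
simply claim success without ever fetching the data; for a remote
client, the bulk of the message then never crosses the network. A rough
sketch (the surrounding resource-manager setup is omitted):

    #include <sys/iofunc.h>
    #include <sys/resmgr.h>
    #include <errno.h>

    /* Sketch of an io_write handler that discards data, /dev/null
     * style.  It never calls resmgr_msgread(), so any data beyond the
     * first chunk delivered with the message is never transferred. */
    int io_write( resmgr_context_t *ctp, io_write_t *msg,
                  iofunc_ocb_t *ocb )
    {
        int status;

        if( ( status = iofunc_write_verify( ctp, msg, ocb, NULL ) ) != EOK )
            return status;

        /* Tell the client everything was "written". */
        _IO_SET_WRITE_NBYTES( ctp, msg->i.nbytes );

        return _RESMGR_NPARTS( 0 );
    }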

-David


Please follow-up to newsgroup, rather than personal email.
David Gibbs
QNX Training Services
dagibbs@qnx.com