Network performance in QNX6

Does anyone have any experience with network performance under QNX 6?

I did a rough measurement, and I am not sure if the numbers are reasonable.

Two machines, each with the same 10/100 Ethernet card, connected by a cross-over cable:
(1) QNX 6 on a PIII 1 GHz, sending data to (2) using the socket API (TCP transfer)
(2) Red Hat Linux 8.0 on a PIII 500 MHz, receiving data from (1)

On QNX 6 I sent 32 MB of data in 7.0 sec (CPU load was over 90%);
on Linux the CPU load never went over 15% while receiving the same amount of data.

Are these numbers reasonable? Any ideas on how to improve it?
Thanks

Did you use a lot of small messages to pass those 32MB?


In article <b799q2$v5$1@inn.qnx.com>, xwindow@yahoo.com says…

On qnx6 i sent 32M of data within 7.0 sec (CPU load was over 90% )

Did you read those 32M of data from hard disk?

Eduard.


No, I used a 1 KB buffer with a pattern in it, and a loop repeating 32K times.
Not sure if the buffer size matters.
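
In outline, the sender loop being described looks something like this (a sketch with a placeholder address and port, not the exact code; a real sender would also check for short writes from send()):

/* Minimal TCP sender: pushes 32 MB as 32768 writes of a 1 KB pattern buffer. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    char buf[1024];
    struct sockaddr_in peer;
    int sock, i;

    memset(buf, 0xA5, sizeof(buf));                 /* fill the buffer with a pattern */

    sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock == -1) { perror("socket"); return EXIT_FAILURE; }

    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5000);                    /* placeholder port */
    peer.sin_addr.s_addr = inet_addr("10.0.0.2");   /* placeholder receiver address */

    if (connect(sock, (struct sockaddr *)&peer, sizeof(peer)) == -1) {
        perror("connect");
        return EXIT_FAILURE;
    }

    for (i = 0; i < 32 * 1024; i++) {               /* 32768 x 1 KB = 32 MB */
        if (send(sock, buf, sizeof(buf), 0) == -1) {
            perror("send");
            break;
        }
    }

    close(sock);
    return EXIT_SUCCESS;
}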


Size does matter :wink:
Use a 32K buffer and you will see.
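
(For what it’s worth, changing the sketch above to 32 KB writes, and optionally enlarging the socket’s send buffer with the standard SO_SNDBUF option, is a small edit; whether either actually helps on QNX 6 is exactly the question being discussed:)

/* Drop-in variant of the inner loop from the sketch above: 32 KB writes
 * instead of 1 KB. Assumes 'sock' is the connected socket from that sketch. */
char big[32 * 1024];
int sndbuf = 64 * 1024, i;

memset(big, 0xA5, sizeof(big));
setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));   /* optional */

for (i = 0; i < 1024; i++)          /* 1024 x 32 KB = 32 MB */
    send(sock, big, sizeof(big), 0);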


Since you were using TCP, a larger buffer size probably wouldn’t affect the
result. We’ve seen well over twice the speed you’re seeing, but with machines
that were twice as fast. I suppose your real question is why this transfer
consumed 90% of the QNX CPU and only 15% of the Linux machine - good question!
But we’ve seen similar results; the QNX TCP/IP stack does seem to be much slower
than would be expected. Actually, since “the stack” is the NetBSD stack, I
guess it’s the stack/io-net glue that’s slow, but whatever the exact reason, QNX
does seem to have a network performance problem. But just think about all the
other great features of QNX and throw some more horsepower at the problem.

Murf


I just moved 32M (1K buffer 32K times) from a 500 MHz QNX machine to a 2.4 GHz QNX
machine in 3.85 seconds; CPU load on the sending machine was close to 100%.
When I posted a previous comment, I was (incorrectly) remembering that we only
got that kind of performance with faster machines.

Just for fun, I also tried moving 32M by sending a 64K buffer 512 times: that
took 3.76 seconds - well within the normal variability of network operations.
As long as you’re using TCP and moving a good sized chunk of data, the buffer
size, once it’s over some minimum, doesn’t much matter.

Murf


That figure means you’re getting some 8+ MB/sec, or 64 Mbit/sec, which is a
bit more than half of your bandwidth, assuming 100/HD. That might seem like
a decent number, but there was a time when QNX claimed to have ‘near wire’
speed (e.g., 1.06 MB/sec on a 10 Mbit network, using TCP - if I remember the
numbers correctly).
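
(For concreteness, the arithmetic behind that: 32 MB / 3.85 s ≈ 8.3 MB/s, and 8.3 MB/s × 8 ≈ 66 Mbit/s against a nominal 100 Mbit/s line rate.)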


As I said before, QNX definitely seems to have a network performance problem.
But Jerry was seeing much lower numbers than these, and on a faster machine -
that’s the point of interest. Might be interesting to know what kind of
card/driver he was using. (I was using something with a National Semiconductor
DP83815 at the sending end, and something with an Intel gigabit chip at the
receiving end.)

Murf


What card are you using? Can you post your code?

-seanb


Igor Kovalenko <kovalenko@attbi.com> wrote:

That figure means you’re getting some 8+ MB/sec or 64Mbit/sec. Which is a
bit more than half of your bandwidth, assuming 100/HD. That might seem like
a decent number, but there was a time when QNX claimed to have ‘near wire’
speed (e.g., 1.06MB/sec on 10Mbit network, using TCP - if I remember the
numbers correctly).

That was QNX4 native networking, not TCP.

-David

QNX Training Services
http://www.qnx.com/support/training/
Please followup in this newsgroup if you have further questions.

Igor Kovalenko <kovalenko@attbi.com> wrote:

That figure means you’re getting some 8+ MB/sec or 64Mbit/sec. Which is a
bit more than half of your bandwidth, assuming 100/HD. That might seem like
a decent number, but there was a time when QNX claimed to have ‘near wire’
speed (e.g., 1.06MB/sec on 10Mbit network, using TCP - if I remember the
numbers correctly).

“Near wire speed”. Yup. QNX4 with native networking, not TCP.
And not under QNX6 at all.

QNX6 has trouble moving data. Try a simple test:

int x = 10000, y = 10000;          // x * y = 100,000,000
char buf1[10000], buf2[10000];
while (x > 0) {
    memmove(buf1, buf2, y);
    x--;
}

Under both QNX4 & QNX6. QNX4 is much faster.
*NOTE: In all fairness, I haven’t tried this since 6.1.
Maybe (hopefully) this has been fixed.


Try using memcpy() instead of memmove(). :slight_smile: I have fixed the memmove()
to be smarter (ie: use memcpy() in this sort of case) but it has not shown
up in a release (yet).
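
A quick way to compare the two on any POSIX box is to time both calls directly; here is a rough, illustrative harness (compile without aggressive optimization so the copies aren’t elided):

/* Rough timing harness: memmove() vs memcpy() on non-overlapping buffers. */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define BUF_SIZE 10000
#define ITERS    10000

static char src[BUF_SIZE], dst[BUF_SIZE];

static void do_memmove(void) { memmove(dst, src, BUF_SIZE); }
static void do_memcpy(void)  { memcpy(dst, src, BUF_SIZE); }

static double elapsed(void (*copy)(void))
{
    struct timespec t0, t1;
    int i;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < ITERS; i++)
        copy();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    printf("memmove: %.3f s\n", elapsed(do_memmove));
    printf("memcpy : %.3f s\n", elapsed(do_memcpy));
    return 0;
}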

chris


Chris McKillop <cdm@qnx.com>
Software Engineer, QSSL
http://qnx.wox.org/
“The faster I go, the behinder I get.” – Lewis Carroll

“David Gibbs” <dagibbs@qnx.com> wrote in message
news:b7euu5$9pq$1@nntp.qnx.com


That was QNX4 native networking, not TCP.

My memory is good on this sort of thing. AFAIR, native networking claimed
1.12 MB/sec, TCP/IP 1.06… I might even still have some old issues of QNX
News (which were printed and sent to customers back in those days).

We could of course try and measure QNET performance under QNX6 and see how
it stacks up against either of those numbers…
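
Something as simple as timing a large sequential read through the Qnet path space would give a first-order number. A sketch, assuming Qnet is running and the remote node shows up under /net (the node name and file path are placeholders, and remote disk or cache speed is part of what gets measured):

/* Rough Qnet read-throughput check: time a sequential read of a remote file. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

int main(void)
{
    static char buf[64 * 1024];
    struct timespec t0, t1;
    long long total = 0;
    ssize_t n;
    double secs;
    int fd;

    fd = open("/net/othernode/tmp/big.dat", O_RDONLY);   /* placeholder path */
    if (fd == -1) { perror("open"); return EXIT_FAILURE; }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        total += n;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%lld bytes in %.2f s = %.2f MB/s\n",
           total, secs, total / secs / (1024.0 * 1024.0));
    return EXIT_SUCCESS;
}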

– igor

It seems to be working better now: 32 MB takes 4 or 5 sec.
I just noticed that the Linux box has two CPUs.

Thanks.

PS: Intel 10/100 ethernet card.



“Sean Boudreau” <seanb@node25.ott.qnx.com> wrote in message
news:b7epkm$6d9$1@nntp.qnx.com

What card are you using? Can you post your code?

-seanb


That Intel card could be a major part of the problem. Intel apparently did some
recruiting between the time they did their 10/100 cards and the time they did
their gigabit products: the 10/100 stuff was a joke, but the gigabit stuff is
pretty decent.

Murf


I’ve heard a different opinion: that the Intel 10/100 card (82559) is the fastest
100 Mbit Ethernet chip, although quite quirky.
