TCP/IP 5, Multicast limit?

I’m seeing some sort of upper limit on the number of packets per second that
multicast can receive.

On two identical hardware platforms, one running QNX 4 and the other running
Red Hat 7.3, both with 3Com 905 cards and both running the same source code,
the Linux box is receiving >8000 mcast msgs per second, while the QNX box
(actually several different ones) is receiving only 25%-35% or so of that rate.

There’s >50% idle on the QNX node. When I ran netsniff, it did seem to show
the correct number of raw packets received (80K packets per 10-second
interval).

The testing process sits in a recvfrom() loop.
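
For reference, the receive side is basically the following (the group address,
port, and buffer size here are placeholders, not our actual values, and older
headers may want plain int where I've used socklen_t):

/* minimal sketch of the test's multicast receive loop (placeholder group/port) */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define MCAST_GROUP "239.1.1.1"   /* placeholder group address */
#define MCAST_PORT  5000          /* placeholder port          */

int main(void)
{
    int s;
    long count = 0;
    char buf[1500];
    struct sockaddr_in local;
    struct ip_mreq mreq;

    s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(MCAST_PORT);
    if (bind(s, (struct sockaddr *)&local, sizeof(local)) < 0) {
        perror("bind"); return 1;
    }

    /* join the multicast group on the default interface */
    mreq.imr_multiaddr.s_addr = inet_addr(MCAST_GROUP);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                   (char *)&mreq, sizeof(mreq)) < 0) {
        perror("IP_ADD_MEMBERSHIP"); return 1;
    }

    /* sit in recvfrom() and count what arrives */
    for (;;) {
        struct sockaddr_in from;
        socklen_t fromlen = sizeof(from);   /* plain int on older stacks */
        int n = recvfrom(s, buf, sizeof(buf), 0,
                         (struct sockaddr *)&from, &fromlen);
        if (n < 0) { perror("recvfrom"); break; }
        if (++count % 10000 == 0)
            printf("%ld packets received\n", count);
    }
    close(s);
    return 0;
}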

Anyone have any clues or hints as to where to look?


bob.

$ procmon idle (measures idle process)
12:22:52.56 69% [===================================| ]
12:22:53.56 72% [====================================| ]
12:22:54.56 70% [====================================| ]
12:22:55.56 67% [================================== | ]
12:22:56.56 76% [======================================| ]
12:22:57.56 74% [======================================| ]
12:22:58.56 70% [==================================== | ]
12:22:59.56 77% [======================================| ]
12:23:00.56 75% [======================================| ]
12:23:01.56 76% [======================================| ]
12:23:02.56 75% [======================================| ]
12:23:03.56 75% [======================================| ]
12:23:04.56 77% [======================================| ]

$ procmon ltest2 (the test process)
12:22:52.56 0% [= | ]
12:22:53.56 1% [= | ]
12:22:54.56 5% [=== | ]
12:22:55.56 1% [= | ]
12:22:56.56 4% [=== | ]
12:22:57.56 0% [= | ]
12:22:58.56 1% [= | ]
12:22:59.56 3% [== | ]
12:23:00.56 3% [== | ]
12:23:01.56 1% [= | ]
12:23:02.56 4% [=== | ]
12:23:03.56 2% [== | ]
12:23:04.56 1% [= | ]


$ ps xfa
PID PGRP SID PRI STATE BLK SIZE COMMAND
1 1 0 30f READY 262070K Proc32 -l 3
2 2 0 10r RECV 0 108K Slib32
4 4 0 10r RECV 0 217764K Fsys
5 5 0 22r RECV 0 109132K Fsys.eide
8 8 0 0r READY 40K (idle)
16 7 0 24f RECV 0 424K Dev
19 7 0 20r RECV 0 464K Dev.ansi -n 9 -k30,250
21 7 0 20r RECV 0 168K Dev.ser
22 7 0 9o RECV 0 140K Dev.par
23 16 0 20r RECV 0 300K Dev.pty -n12
24 4 0 10o RECV 0 54448K Fsys.floppy
27 7 0 10r RECV 0 32K Pipe
32 7 0 23r RECV 0 184K Net -n32 -Tr 12
34 7 0 20r RECV 0 164K Net.ether905 -M -l1 -s100
41 41 0 10r READY 256K /usr/ucb/Tcpip redinews03
48 48 0 10o RECV 53 24K /usr/ucb/inetd
50 50 0 10o RECV 52 28K /usr/ucb/routed
54 7 0 10o WAIT -1 28K tinit -T /dev/con1 /dev/con2 (…)
78 78 1 10o REPLY 16 20K /bin/login -p
13514 13514 0 10o RECV 18636 48K in.telnetd
18637 18637 3 10o WAIT -1 184K -bash
21216 21216 0 10o REPLY 41 48K in.telnetd
21219 21219 2 10o WAIT -1 184K -bash
14776 14776 2 10o REPLY 1 24K ps xfa
16353 16353 3 10o WAIT -1 192K -bash
13810 16353 3 10o REPLY 41 36K /redinews/test/ltest2 -m

$ cat /etc/version/*
QNX Software Systems Ltd. QNX 4.25, release date 13-Nov-98
QNX TCP/IP Runtime version 4.25, release date 28-Oct-98
QNX TCP/IP Runtime 4.24 Japanese Docs, release date 09-Oct-97
QNX TCP/IP Toolkit version 4.25, release date 28-Oct-98
QNX TCP/IP Toolkit version 4.25A, release date 29-Jan-99
QNX TCP/IP Runtime version 4.25A, release date 29-Jan-99
QNX TCP/IP Runtime version 4.25B, release date 24-Feb-99
QNX TCP/IP Runtime version 5.0, release date February 2001
QNX TCP/IP Runtime version 5.0, Patch A (Beta), January 3rd 2003
QNX TCP/IP Runtime version 5.0, Patch A (Beta), April 17th 2003
QNX TCP/IP Toolkit version 5.0, release date February 2001
QNX TCP/IP Toolkit version 5.0, Patch A (Beta), January 3rd 2003
QNX TCP/IP Toolkit version 5.0, Patch A (Beta), April 17th 2003
VEDIT QNX Ver. 5.05 06/05/98 Copyright (C) Greenview Data, Inc.

Bob <nntp@redinews.remove.com> wrote in
news:Xns94E67E4D77319nntpredinewsremoveco@209.226.137.7:

I’m seeing some sort of upper limit on the number of packets per second that
multicast can receive.

I added a little code to bundle packets into larger transmissions (10:1), and
the problem is no longer observable.
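
For what it's worth, the bundling is along these lines (the header layout and
names below are just illustrative, not our actual wire format):

/* Illustrative 10:1 bundling: pack up to BUNDLE_N small messages into one
 * datagram on the send side, walk them back out on the receive side.      */
#include <string.h>

#define BUNDLE_N 10      /* messages per datagram (the 10:1 ratio)   */
#define MSG_MAX  128     /* illustrative per-message size limit      */

struct bundle {
    unsigned short count;            /* messages in this datagram     */
    unsigned short len[BUNDLE_N];    /* length of each message        */
    char data[BUNDLE_N * MSG_MAX];   /* messages packed back to back  */
};

/* sender: add one message; returns 1 when the bundle is full and ready to send */
int bundle_add(struct bundle *b, size_t *off, const char *msg, size_t len)
{
    memcpy(b->data + *off, msg, len);
    b->len[b->count++] = (unsigned short)len;
    *off += len;
    return b->count == BUNDLE_N;
}

/* receiver: hand each message in a received datagram to a callback */
void bundle_walk(const struct bundle *b, void (*deliver)(const char *, size_t))
{
    size_t off = 0;
    int i;
    for (i = 0; i < b->count; i++) {
        deliver(b->data + off, b->len[i]);
        off += b->len[i];
    }
}

One recvfrom() now delivers ten messages, so the per-packet rate on the wire
drops well below wherever the ceiling is.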

I should add that this used to be a broadcast system, and packet reception
rates were FAR above the apparent ceiling of 3-4K per second: under broadcast
we were able to see packet rates in the 11-13K per second range.

I don’t know how to categorize this.

Since the app worked under broadcast mode, it doesn’t seem to be the NIC
that’s limiting throughput.

Since the node has plenty of idle, it doesn’t seem to be the CPU.

???
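
The only receive-side knob I can think of offhand is the socket buffer, so the
next thing I'll probably try is bumping SO_RCVBUF before joining the group.
Something like this (the 256K figure is a guess, and I don't know what the
stack's default or maximum actually is):

/* Guesswork: ask for a bigger receive buffer; the stack may clamp it. */
#include <stdio.h>
#include <sys/socket.h>

int grow_rcvbuf(int s)
{
    int want = 256 * 1024;              /* arbitrary figure            */
    int got = 0;
    socklen_t len = sizeof(got);        /* plain int on older headers  */

    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                   (char *)&want, sizeof(want)) < 0) {
        perror("SO_RCVBUF");
        return -1;
    }
    if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, (char *)&got, &len) == 0)
        printf("SO_RCVBUF is now %d bytes\n", got);
    return 0;
}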

bob.