Here’s one more thought.
Is one of your hosts acting as a server, as opposed to a more peer-to-peer
topology? If so, you can gain a lot of effective throughput by using two
networks this way.
Route all of your traffic TO the server through one network interface and
route all of your traffic FROM the server on the other network interface.
The big problem with Ethernet is collisions. When you have 10 hosts all
trying to send frequently (i.e. many times a second), your effective
throughput on the Ethernet can be cut to about 40% of the theoretical
maximum. With two interfaces you greatly reduce the competition for the
wire to the server and virtually eliminate it on the traffic from the
server.
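For example (the interface names and addresses here are made up -- adjust
for your own setup), the idea is just two subnets, one per direction:

  # server: two cards, one subnet each
  ifconfig en0 10.0.1.1 netmask 255.255.255.0   # clients send TO this address
  ifconfig en1 10.0.2.1 netmask 255.255.255.0   # server sends FROM this card

  # each client: also two cards, one on each subnet
  ifconfig en0 10.0.1.<n> netmask 255.255.255.0
  ifconfig en1 10.0.2.<n> netmask 255.255.255.0

Clients always address the server as 10.0.1.1 and the server application
always sends to a client's 10.0.2.<n> address, so each direction gets its
own wire. (A connection a client opens to 10.0.1.1 keeps both directions on
the first net, so this works best when the server pushes its data explicitly
to the clients' second-net addresses.)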
This strategy served us very well when I was at Orban. We had an
application that played many channels of digital audio through the network
from a file server. Collisions on the outbound audio packets from the
server were virtually nonexistent, and traffic to the server was never all
that great in the first place.
Hope this helps.
Bill Caroselli
“George Broz” <gbroz@earthlink.net> wrote in message
news:3B7098E6.A6641E4@earthlink.net…
Bill, Mario,
Thanks for the input. My colleagues and I also thought about setting the
priorities of processes such as telnetd and came to the same conclusions
arrived at below.
Our key process requires a guaranteed deterministic response from the
network - use of telnet, ftp, and nfs is necessary, but their transfer
rates can be hobbled as low as needed to keep them from interfering with
the key process.
Also, in our scenario the maximum number of boxes that can be networked is
relatively small (less than ten). So it would be acceptable to hobble each
box so that telnet, ftp, and nfs would consume no more than 2%, for
example, of the available bandwidth (if this were the only issue).
Adding another network interface (to implement a separate network) is our
last resort, but it looks like we're running out of options.
Thanks,
-George
Mario Charest wrote:
“Bill Caroselli (Q-TPS)” <qtps@earthlink.net> wrote in message
news:9kmm2i$lu0$1@inn.qnx.com...
Hello Mario, et al
The idea of this scares me.
It would scare me too if I had to do it.
First, if you try to use priority to limit bandwidth,
It wouldn't be so much to limit bandwidth as an attempt to provide
bandwidth to the program that critically needs it. Of course this is very
crude!
and there is no other process needing to run, then there is virtually no
limit on how much of the wire a rogue process can consume.
If you try to gauge outbound packets using an IP filter program, which I'm
sure is quite possible, there are two problems. First, it would slow down
EVERY outbound packet a certain amount, not to mention every other process,
while it scrutinizes the data in the packet to determine IF it should be
limited. Secondly, if it does decide to buffer/delay the outbound packet, I
think that there is too much of a chance that another packet that should
get priority will get stuck behind the packet that is being delayed. OR,
you have a process that could easily become a memory sinkhole trying to
buffer an endless stream of packets.
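For what it's worth, such a filter would essentially boil down to a token
bucket. A rough sketch in C (purely illustrative -- this is not the io-net
filter API, and the rate/burst fields and names are made up):

  #include <stddef.h>
  #include <time.h>

  typedef struct {
      double rate;            /* allowed bytes per second       */
      double burst;           /* maximum credit, in bytes       */
      double tokens;          /* credit currently available     */
      struct timespec last;   /* last refill time               */
  } shaper_t;

  /* Returns 1 if a packet of 'len' bytes may go out now, 0 if the
   * caller has to queue it (the memory problem) or drop it. */
  int shaper_allow(shaper_t *s, size_t len)
  {
      struct timespec now;
      clock_gettime(CLOCK_MONOTONIC, &now);
      double dt = (now.tv_sec - s->last.tv_sec)
                + (now.tv_nsec - s->last.tv_nsec) / 1e9;
      s->last = now;
      s->tokens += dt * s->rate;       /* refill credit for elapsed time */
      if (s->tokens > s->burst)
          s->tokens = s->burst;
      if (s->tokens >= (double)len) {  /* enough credit: send it now */
          s->tokens -= (double)len;
          return 1;
      }
      return 0;                        /* over the limit: queue or drop */
  }

Every outbound packet pays the cost of that check, and everything that
returns 0 has to sit in a queue somewhere, which is exactly the delay and
memory problem described above.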
This wouldn't guarantee bandwidth either. If 5 machines are each
transmitting at 10 percent of the bandwidth, there will only be 50% left
;-(
To get around this problem each machine would have to be set in promiscuous
mode to monitor all traffic. This can become rather messy quickly.
The original goal was simply to have one application that is guaranteed
deterministic response time. A separate network that can only be accessed
by that application will come close to providing that. And there's no code
to write.
No argument from me there. But given the restriction, setting priority is
the only suggestion I could come up with. It's my impression that average
data transfer rate is what the original poster was looking for rather than
deterministic response time???
“Mario Charest” <mcharest@zinformatic.com> wrote in message
news:9km8rs$eha$1@inn.qnx.com...
“George Broz” <gbroz@earthlink.net> wrote in message
news:3B64B416.AFA1E235@earthlink.net...
Is there any way to put a limit on the amount of network bandwidth that
can be consumed by a particular port or protocol in QNX 6.1?
I have an application running 6.0/6.1 that uses UDP on several ports. These
several ports need the lion's share of the total available network
bandwidth (let's say 80%) at all times. However, as this application runs,
services such as telnet, ftp, and nfs will be used intermittently, but
these services must never consume more than the allotted (in this example)
20% or they will affect the operation of the application adversely.
Does anyone know of a utility or possibly a configuration option or
“add-on” to io-net that realizes this sort of functionality?
I think the best way to achieve this is by controlling the priority of
each process. For example, run telnetd below the priority of your main
process.
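Something along these lines, for instance (the priority numbers and paths
are just placeholders, and the exact on/nice syntax is from memory -- check
the utility docs):

  # run the critical app above everything else
  on -p 50 /path/to/critical_app &
  # run inetd (which spawns telnetd, ftpd, etc.) well below it
  on -p 9 /usr/sbin/inetd

This only shapes how fast the competing processes can generate traffic; it
doesn't prioritize packets on the wire itself.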
The idea of controlling each “protocol” bandwidth usage is very
interesting. An io-net filter might be able to do that, but I get the
feeling this may open up a can of worms.
Hey Thomas, that would be a nice case for a high-priority but time-limited
process, wouldn't it?
Thanks in advance,
George