How to configure multiple network cards with one IP address

Hi all,

How can I configure multiple network cards with one IP address? :oops:


Uh, why do you want to do this? Are these cards in the same computer? On the same local network? If I send a packet to the IP address, which card would you expect it to come in on? Random?

Are you trying to increase the bandwidth between two computers?

Yes, I want to increase the bandwidth, and these PCI network cards are in the same computer.

maschoen, please help.

There are routers that will let clients all talk to one IP and will distribute the load among the 4 cards, each with a different IP. I don’t think there is a way to configure things to do this with QNX.

You ignored my Gedanken experiment that should help you understand why this is. If I’m a client and I send a packet to the one IP address, which card would you expect it to come in on? A random one? The least busy one? OK, now how would your packet know which card to head for?

I don’t think that’s possible. You might be able to write a network filter that would allow you to do that, but each end would need to know the “protocol”.

I’ll add a little to what Mario had to say, because I think he is dead right. In a local network, nodes do not communicate by sending packets back and forth to IP addresses. Instead they send them to MAC addresses. The exception is when they broadcast ARP messages to find out what the MAC address is of the node with some IP address. So one end would have to know that it can send to 4 different MAC addresses. But how would it even find out what they are? The ARP request will only return one MAC address.

Are both computers QNX? If so, then Qnet does link aggregation automatically.

With Qnet there is no need for an IP stack, so you don’t have to be concerned with setting a single IP address on multiple interfaces.

OK, assuming that io-net is not running on either computer to begin with, and that each computer has 2 or more Intel 82544 GigE NICs, here are the steps to get link aggregation between two QNX computers:

  1. On computer A: “io-net -di82544 -pqnet”
  2. On computer B: “io-net -di82544 -pqnet”
  3. Wait a moment while Qnet’s autodiscovery protocol builds its internal tables (so that it realizes there are 2 paths between A & B).

That’s it! Qnet will now balance the traffic across the multiple NICs that connect A & B (giving a theoretical 2 Gbps with 2 NICs). It really is that easy…

The hardest thing about link aggregation in QNX is convincing people that it’s this simple!


That didn’t work for me; there was no increase in transfer rate. Note that the 4 ports were connected to the same switch. Could that have something to do with it?


Were there multiple messages going simultaneously, or just one? I don't think load balancing will transport a single message across more than one link at a time.


I was doing cp -V /dev/hd0 /net/…/dev/null

How about trying it twice simultaneously?

Each packet of a message will be sent out of whatever link is not “full” at the time. A single large message can indeed be spread across multiple links; but, obviously if you have blocking communication (Send → Receive), you will probably never use more than one link at a time (even with really big message size).

If you want to measure the aggregation capacity you need to write a multi-threaded, reply driven test program (since reply is non-blocking). Basically, in a reply driven design the “sender” is doing replies, and the “receiver” is doing sends. Dan Hildebrand (I still miss him terribly) wrote a few things on reply driven design many years ago…

This is deja vu, because I had this exact discussion with Bill Caroselli about 6 years ago; and I wrote a test program that showed nearly 200Mbit/sec across 2 100Mbit ethernet cards and sent it to him.

Of course, you don’t need to code everything reply-driven to get the benefit of aggregation, only to measure it (and prove it to yourself). You’ll get the benefit of aggregation whenever you have multiple applications communicating over Qnet.
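To make the “reply-driven” idea concrete, here is a toy sketch in plain Python threads, not QNX code. It models the role inversion described above: the data producer acts as the server, so its “sends” are really non-blocking replies, while the consumers do the blocking sends. The names (CHUNKS, NWORKERS) and the queue-based channel are my own illustration, not anything from Qnet.

```python
import queue
import threading

CHUNKS = 8        # hypothetical amount of data to move
NWORKERS = 4      # hypothetical number of consumer threads

requests = queue.Queue()       # models the producer's receive channel
results = []
results_lock = threading.Lock()

def producer():
    # Receive a request, then reply with the next chunk. The reply
    # never blocks, so the producer is always free to serve whichever
    # consumer asks next -- it is never stuck waiting on one of them.
    for chunk in range(CHUNKS):
        reply_box = requests.get()   # models MsgReceive()
        reply_box.put(chunk)         # models MsgReply() -- non-blocking
    for _ in range(NWORKERS):        # tell every consumer to stop
        requests.get().put(None)

def consumer():
    # Each consumer does the blocking send: "more data, please".
    box = queue.Queue(maxsize=1)
    while True:
        requests.put(box)            # models MsgSend()
        chunk = box.get()            # blocked until the reply arrives
        if chunk is None:
            return
        with results_lock:
            results.append(chunk)

threads = [threading.Thread(target=consumer) for _ in range(NWORKERS)]
for t in threads:
    t.start()
producer()
for t in threads:
    t.join()
print(sorted(results))   # every chunk arrived exactly once
```

Because several consumers always have a send outstanding, the producer can keep pushing chunks back-to-back, which is what lets the transport keep more than one packet (and more than one link) busy at a time.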


That won’t do anything. The cp command uses 16K buffers and is a synchronous message send. The network card will use a total of one DMA descriptor (even if cp had a 4 MB blocksize!) and then it has to wait for the reply to come back from the remote filesystem before it can send the next. Qnet doesn’t get a chance to balance anything, since the driver was never “full” (you’d need to exhaust the descriptors for that to happen), so you will get zero speedup from this…

If you write a special throughput utility that is reply-driven, with small packets, then you can exhaust the NIC’s descriptors, and Qnet will see a “full” pipe and send packets out the other one.

You need a non-blocking “send” (i.e. a reply) so that the producer can do small chunks, yet not have to wait for a reply before moving on (otherwise there’ll almost never be more than one descriptor used for each message send, and a typical NIC has 256 descriptors!).
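The arithmetic behind this can be shown with a back-of-the-envelope model (plain Python, made-up tick/link numbers, nothing QNX-specific): each tick, every link can carry one packet, but no more packets may be outstanding than the sender is willing to keep in flight.

```python
def throughput(n_links, max_in_flight, ticks=1000):
    """Toy model: packets delivered per tick, given n_links parallel
    links (one packet per link per tick) and a cap on how many packets
    the sender keeps unacknowledged at once."""
    delivered = 0
    in_flight = 0
    for _ in range(ticks):
        delivered += in_flight            # last tick's packets are acked now
        # refill the links, limited by the sender's in-flight cap
        in_flight = min(n_links, max_in_flight)
    return delivered / ticks

# One packet in flight (a synchronous cp-style send): one link's worth.
print(round(throughput(2, 1), 2))    # ~1.0 packet/tick
# Many in flight (a reply-driven utility): both links stay busy.
print(round(throughput(2, 8), 2))    # ~2.0 packets/tick
```

With the in-flight cap at 1, the second link sits idle no matter how many links you add, which is exactly why the synchronous cp showed no speedup.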

I see, thanks for the explanation. That could be a “problem” for us: because of some other problem with the 1 Gig network, we were asked to set the transmit and receive descriptors to 2048, so it’s very unlikely the descriptors would get exhausted ;-)

I’m more interested in reducing latency than in increasing bandwidth.

Again, you will get the balancing if you have more than one set of threads communicating across Qnet. The reply-driven utility is simply something that is needed to ensure that the aggregation can be measured (because you don’t want to have to write a huge system just to prove to yourself that you are getting aggregate throughput).

Sorry to jump in, but that is probably not very good. Try 2 switches (so the 2 networks are physically separated).