Flow control in network driver

I wrote a custom network driver (UP producer) that interfaces with another
driver outside io-net, talking to it via devctl. One of the issues we are
facing is that the link we transmit the packets over can be really slow,
~4 kbit/s. I would like to establish some sort of flow control between the
two drivers. Ideally, I would like the following:

When there are packets to transmit …

1. io-net calls the UP producer's transmit()
2. transmit() calls the devctl() of the second driver to transmit the packet
3. The second driver queues this packet and returns from devctl() synchronously
4. The UP producer's transmit() returns control back to io-net (at this point
   the packet has still not been sent over the wire)
   (after some time…)
5. The second driver then completes the packet transmit and notifies the UP
   producer asynchronously (pulse/message) that the packet has been sent
6. The UP producer notifies io-net that the previous packet was sent and that
   it can pass down more packets (a rough sketch of this flow follows)
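
A minimal sketch of that flow, for illustration only: my_transmit() stands in
for the UP producer's transmit() entry point, and second_driver_fd and
DCMD_TX_QUEUE are hypothetical names, not real io-net or driver APIs (how the
packet data is marshalled through devctl() is elided). ion and reg_hdl are
the io_net_self_t pointer and registrant handle the UP producer gets when it
registers with io-net.

#include <errno.h>
#include <devctl.h>
#include <sys/io-net.h>

#define DCMD_TX_QUEUE  __DIOT(_DCMD_MISC, 0x01, npkt_t)  /* hypothetical command */

extern io_net_self_t *ion;               /* from io-net registration */
extern void          *reg_hdl;
extern int            second_driver_fd;  /* fd to the driver outside io-net */

/* Steps 1-4: io-net calls transmit(); we queue via devctl() and return. */
int my_transmit(npkt_t *npkt, void *func_hdl)
{
    /* Steps 2-3: the second driver queues the packet and devctl()
     * returns synchronously (data marshalling elided). */
    if (devctl(second_driver_fd, DCMD_TX_QUEUE, npkt, sizeof(*npkt), NULL) != EOK) {
        npkt->flags |= _NPKT_NOT_TXED;
        ion->tx_done(reg_hdl, npkt);
        return -1;
    }
    return 0;   /* step 4: back to io-net; packet not yet on the wire */
}

/* Steps 5-6: the second driver pulses us when the packet hits the wire. */
void tx_complete_pulse_handler(npkt_t *npkt)
{
    ion->tx_done(reg_hdl, npkt);    /* io-net may now pass down more */
}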


Can the above be accomplished? I tried a simple test where I don't call the
TxDone() routine after transmitting the packet. I was under the assumption
that if TxDone() is not called then io-net will stop sending more packets
down to my UP producer's transmit(). However, I continuously see transmit()
being called to send more packets. I see return codes such as TX_DOWN_AGAIN,
and flags in the npkt that would tell io-net that the packet has not been
transmitted. Could these be used for flow control?

Thanks

- Murtaza

There is no way to force a particular protocol to
flow control itself. The usual algorithm is
to queue up to X packets in the driver and once the
limit is reached, return ENOBUFS to the protocol.
Normal protocol timeouts are then applied. Something
like:

if (num_queued > max_queued) {
    npkt->flags |= _NPKT_NOT_TXED;
    ion->tx_done(reg_hdl, npkt);
    errno = ENOBUFS;
    return -1;
}

You can keep track of your own queue length or you can have
io-net enforce it with the DCMD_IO_NET_MAX_QUEUE devctl. Be
aware that if you have io-net enforce it and it does apply
flow control on your behalf, you won't see any packets to
tx until you release some of those you have outstanding, i.e.
if io-net flow controls you, you won't see another packet to
tx until you ion->tx_done() one or more packets.
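
Extending the snippet above into a self-enforced sketch (num_queued and
max_queued follow the snippet; atomic_add()/atomic_sub() from <atomic.h>
are one way to keep the counter consistent between the transmit thread and
the thread handling the completion pulse; the hand-off to the second driver
is elided, and my_transmit()/on_tx_complete() are illustrative names as in
the earlier sketch):

#include <atomic.h>
#include <errno.h>

static volatile unsigned num_queued;        /* handed down, not yet tx_done()'d */
static unsigned          max_queued = 22;   /* roughly a TCP window's worth */

int my_transmit(npkt_t *npkt, void *func_hdl)
{
    if (num_queued >= max_queued) {
        /* Over the limit: bounce the packet back so the protocol
         * sees ENOBUFS and applies its normal timeouts. */
        npkt->flags |= _NPKT_NOT_TXED;
        ion->tx_done(reg_hdl, npkt);
        errno = ENOBUFS;
        return -1;
    }
    atomic_add(&num_queued, 1);
    /* ... hand the packet to the second driver via devctl() ... */
    return 0;
}

/* On the completion pulse from the second driver: */
void on_tx_complete(npkt_t *npkt)
{
    atomic_sub(&num_queued, 1);
    ion->tx_done(reg_hdl, npkt);    /* frees a slot; io-net can send more */
}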

-seanb

Thanks for the info Sean.

I made some changes to the driver to rely on the io-net queue. I set the
io-net queue via DCMD_IO_NET_MAX_QUEUE to 1 packet. This allows my transmit
function to receive only one packet at a time until (like you said) TxDone()
is called on that packet. Unfortunately, by doing this, the throughput I get
is only a fraction of what I used to get when I had the io-net queue set to
100 packets and was calling TxDone() within the same transmit() function,
back when the communication between the UP producer and the second driver
was synchronous. With the changes, when transmit() receives a packet it
sends it to the second driver, waits for the ack from the driver via a
pulse, and then issues a TxDone() on that packet (roughly as sketched
below).
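
For reference, that pattern is roughly the following (channel and pulse
setup are elided; chid is the channel the second driver's pulse arrives on;
because MsgReceivePulse() blocks the transmit thread until the completion
pulse arrives, at most one packet is ever in flight):

#include <sys/neutrino.h>

extern int chid;    /* channel on which the second driver's pulse arrives */

int my_transmit(npkt_t *npkt, void *func_hdl)
{
    struct _pulse pulse;

    /* ... hand the packet to the second driver via devctl() ... */

    /* Block until the second driver signals the packet hit the wire. */
    if (MsgReceivePulse(chid, &pulse, sizeof(pulse), NULL) == -1) {
        npkt->flags |= _NPKT_NOT_TXED;
        ion->tx_done(reg_hdl, npkt);
        return -1;
    }
    ion->tx_done(reg_hdl, npkt);    /* only now will io-net send the next packet */
    return 0;
}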

To figure out the throughput problem, I added timestamps in the code to log
the time difference between when transmit() is called to pass down the
packet and when TxDone() is issued on that packet. I also added another set
of timestamps between when the last TxDone() was called and when transmit()
is called again to send down the next packet. The difference between the
transmit being issued and its ack (when TxDone() is called) is roughly 1 ms,
the same as it was in the old design, when the communication between the UP
producer and the second driver was synchronous.

The difference between when TxDone() is called and when the next packet is
sent down is fairly large and ranges anywhere from 60 ms to 900 ms. In the
previous case, this difference was often 0 ms and in rare cases between
20-60 ms. The traffic I ran my test with was bursty, i.e. I did a file
transfer over FTP. These times were captured around the period when the
actual file transfer was taking place. I see no packet drops on the link
and I don't see any reason why there would be any TCP retransmits. In the
old driver, I get around 400 kbit/s burst data rate with FTP. On the same
link, with the new driver, I get only 22 kbit/s.

Any ideas why there is such a big delay between successive packet transmits
when the io-net queue is set to 1 packet?

- Murtaza


Like I said, when a protocol gets ENOBUFS, timeouts are
applied (it's like a dropped packet, which hurts). 'netstat -ptcp'
probably shows timeouts. If the packet gets out eventually,
TCP can calculate round-trip times and be smart about
window sizes / timeout values. Common window sizes for
TCP are around 16 to 32 KB, so you should probably allow
around that much to queue in your driver.
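
As a rough worked example of that sizing (the 1500-byte MTU is an
assumption; scale it to the link's actual frame size):

/* A ~32 KB TCP window divided by the MTU gives the queue depth: */
#define TCP_WINDOW_BYTES   (32 * 1024)
#define LINK_MTU           1500
#define MAX_QUEUED         ((TCP_WINDOW_BYTES + LINK_MTU - 1) / LINK_MTU)   /* = 22 */

i.e. allow on the order of 22 full-size packets to queue rather than 1.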

-seanb
