Socket buffer and performance

I have a socket application that typically sends a fair amount of data
in real time (~100 KB/sec). When the data throughput suddenly jumps
from “low” to “high”, the send() function returns error #11 for many of
the messages. But within a short time the large amount of messages are
handled with no problem. It is as if the initial send buffer size (8K)
was dynamically increased to handle the additional load.

Also, when the throughput drops significantly for a few seconds it seems
that some of the buffer memory is deallocated, because when the
throughput jumps up again the problem recurs.

Questions:

  • Does anyone know of any dynamic memory allocation in the socket
    buffers?
  • The default buffer size (8K) should handle my instantaneous buffering
    needs. Why should I have to increase the buffer size substantially (to
    61K, as sketched after this list) to make the problem go away? Is the
    buffer divided into maximum-network-length increments (e.g. 8K divided
    by 1500 bytes = only 5 messages buffered)?
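
For reference, this is what I mean by increasing the buffer size, sketched
with the standard sockets calls (the 61K figure is just the value from my
tests, not a recommendation):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Sketch: enlarge the per-socket send buffer before heavy use. */
    int set_send_buffer(int sock, int bytes)
    {
        if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF,
                       &bytes, sizeof(bytes)) == -1) {
            perror("setsockopt(SO_SNDBUF)");
            return -1;
        }
        return 0;
    }

    /* e.g. set_send_buffer(sock, 61 * 1024); */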

Any help would be appreciated in this area.

Thanks

  • Matt

Assuming this is a TCP socket…

You can only send as fast as the other end receives. The other
end advertises its window, we send up to that window and the
local send buffer opens up as the other end acks the data. You’ll
get EWOULDBLOCK (EAGAIN) if you set the socket to nonblocking and
the local send buffer fills up.
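
In case it helps, handling that on the application side looks roughly
like this (a sketch, not QNX-specific code; the wait-and-retry policy is
up to you):

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Sketch: treat EWOULDBLOCK/EAGAIN (errno 11 on many systems) as
     * "send buffer full", not as a fatal error. */
    ssize_t send_nonblocking(int sock, const void *buf, size_t len)
    {
        ssize_t n = send(sock, buf, len, 0);
        if (n == -1 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
            /* The peer's advertised window is exhausted and the local
             * send buffer is full; wait (e.g. in select()) and retry. */
            return 0;
        }
        return n;  /* bytes queued, or -1 on a real error */
    }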

-seanb


Matthew Schrier <mschrier@wgate.com> wrote:
: I have a socket application that typically sends a fair amount of data
: in real time (~100 KB/sec). When the data throughput suddenly jumps
: from “low” to “high”, the send() function returns error #11 for many of
: the messages. But within a short time the large amount of messages are
: handled with no problem. It is as if the initial send buffer size (8K)
: was dynamically increased to handle the additional load.

: Also, when the throughput drops significantly for a few seconds it seems
: that some of the buffer memory is deallocated, because when the
: throughput jumps up again the problem recurs.

: Questions:

: - Does anyone know of any dynamic memory allocation in the socket
: buffers?
: - The default buffer size (8K) should handle my instantaneous buffering
: needs. Why should I have to increase the buffer size substantially (to
: 61K) to make the problem go away? Is the buffer divided into
: maximum-network-length increments (e.g. 8K divided by 1500 bytes = only
: 5 messages buffered)?

: Any help would be appreciated in this area.

: Thanks
: - Matt

This is all true, but the problem seems to point to the sender. We cranked up
the receive buffer size on the receiver (to 32K) and the problem still
occurred.

We believe that, even with the default buffer sizes of 8K on a closed network
containing only two or three machines, we should not have come close
to hitting any buffer limits on either the sender or the receiver.

We also had two completely different receiving applications exhibit the same
problem, another indication that the sender was the culprit. And it seems to
occur only on the TRANSITION from one data rate to a higher data rate. For
instance from 10 KB/sec to 35 KB/sec. But even at 120 KB/sec steady state the
system works great.

Admittedly I need to do more investigation, but I was interested in whether
anyone had experience with how QNX manages its buffers internally. We are
concerned that if memory is being allocated and deallocated without the
application’s knowledge or involvement, QNX is not really behaving like a
good real-time OS.

Thanks

  • Matt

Sean Boudreau wrote:

Assuming this is a TCP socket…

You can only send as fast as the other end receives. The other
end advertises its window, we send up to that window and the
local send buffer opens up as the other end acks the data. You’ll
get EWOULDBLOCK (EAGAIN) if you set the socket to nonblocking and
the local send buffer fills up.

-seanb

Matthew Schrier <mschrier@wgate.com> wrote:
: I have a socket application that typically sends a fair amount of data
: in real time (~100 KB/sec). When the data throughput suddenly jumps
: from “low” to “high”, the send() function returns error #11 for many of
: the messages. But within a short time the large amount of messages are
: handled with no problem. It is as if the initial send buffer size (8K)
: was dynamically increased to handle the additional load.

: Also, when the throughput drops significantly for a few seconds it seems
: that some of the buffer memory is deallocated, because when the
: throughput jumps up again the problem recurs.

: Questions:

: - Does anyone know of any dynamic memory allocation in the socket
: buffers?
: - The default buffer size (8K) should handle my instantaneous buffering
: needs. Why should I have to increase the buffer size substantially (to
: 61K) to make the problem go away? Is the buffer divided into
: maximum-network-length increments (e.g. 8K divided by 1500 bytes = only
: 5 messages buffered)?

: Any help would be appreciated in this area.

: Thanks
: - Matt

There are some inconsistencies here. You say the default of 8K, which
implies the tiny stack, but you mention setting the send buffer size, which
it doesn’t support.

The tiny stack sets up an 8K ring buffer when the socket connects, and
packets always point into it.

Which stack are you in fact using? What is the size of your writes?
Try setting TCP_NODELAY.
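i.e. something along these lines (a sketch using the standard option
names; I haven’t verified it against your particular stack):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Sketch: disable Nagle's algorithm so small writes are sent
     * immediately instead of being coalesced. */
    int set_nodelay(int sock)
    {
        int on = 1;
        return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                          &on, sizeof(on));
    }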

-seanb

Matthew Schrier <mschrier@wgate.com> wrote:
: This is all true, but the problem seems to point to the sender. We cranked up
: the receive buffer size on the receiver (to 32K) and the problem still
: occurred.

: We believe that, even with the default buffer sizes of 8K on a closed network
: containing only two or three machines, we should not have come close
: to hitting any buffer limits on either the sender or the receiver.

: We also had two completely different receiving applications exhibit the same
: problem, another indication that the sender was the culprit. And it seems to
: occur only on the TRANSITION from one data rate to a higher data rate. For
: instance from 10 KB/sec to 35 KB/sec. But even at 120 KB/sec steady state the
: system works great.

: Admittedly I need to do more investigation, but I was interested in whether
: anyone had experience with how QNX manages its buffers internally. We are
: concerned that if memory is being allocated and deallocated without the
: application’s knowledge or involvement, QNX is not really behaving like a
: good real-time OS.

: Thanks
: - Matt

: Sean Boudreau wrote:

:> Assuming this is a TCP socket…
:>
:> You can only send as fast as the other end receives. The other
:> end advertises its window, we send up to that window and the
:> local send buffer opens up as the other end acks the data. You’ll
:> get EWOULDBLOCK (EAGAIN) if you set the socket to nonblocking and
:> the local send buffer fills up.
:>
:> -seanb
:>
:> Matthew Schrier <mschrier@wgate.com> wrote:
:> : I have a socket application that typically sends a fair amount of data
:> : in real time (~100 KB/sec). When the data throughput suddenly jumps
:> : from “low” to “high”, the send() function returns error #11 for many of
:> : the messages. But within a short time the large amount of messages are
:> : handled with no problem. It is as if the initial send buffer size (8K)
:> : was dynamically increased to handle the additional load.
:>
:> : Also, when the throughput drops significantly for a few seconds it seems
:> : that some of the buffer memory is deallocated, because when the
:> : throughput jumps up again the problem recurs.
:>
:> : Questions:
:>
:> : - Does anyone know of any dynamic memory allocation in the socket
:> : buffers?
:> : - The default buffer size (8K) should handle my instantaneous buffering
:> : needs. Why should I have to increase the buffer size substantially (to
:> : 61K) to make the problem go away? Is the buffer divided into
:> : maximum-network-length increments (e.g. 8K divided by 1500 bytes = only
:> : 5 messages buffered)?
:>
:> : Any help would be appreciated in this area.
:>
:> : Thanks
:> : - Matt

To clear things up I am using the TCP/IP protocol.

What I am seeing is probably not dynamic memory allocation/deallocation. I have
been running some tests and it seems that after I stop traffic for a few seconds,
there is always a 100 ms to 250 ms delay from the first message of the “new” traffic
to when the messages are actually transmitted. It is like the driver goes to sleep
when it doesn’t see continuous data for a while. During that delay the driver’s
buffer fills up with N messages (very repeatable and consistent with the configured
buffer size), and then the driver won’t accept any more messages until the buffer
starts to empty.

So I am now trying to figure out why there is this delay after a short period of
silence.
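
To pin the delay down I am timing individual send() calls after a forced
quiet period, roughly like this (a test sketch; MSG_SIZE and QUIET_SECS
are arbitrary test parameters, not values from our application):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    #define MSG_SIZE   512  /* hypothetical message size */
    #define QUIET_SECS 5    /* idle long enough to trigger the stall */

    /* Sketch: timestamp a burst of sends after silence to measure the
     * 100-250 ms stall and count how many messages the buffer absorbs. */
    void probe_stall(int sock)
    {
        char msg[MSG_SIZE];
        struct timeval t0, t1;
        int i;

        memset(msg, 'x', sizeof(msg));
        sleep(QUIET_SECS);  /* simulate the period of silence */

        for (i = 0; i < 100; i++) {
            gettimeofday(&t0, NULL);
            ssize_t n = send(sock, msg, sizeof(msg), 0);
            gettimeofday(&t1, NULL);
            long us = (t1.tv_sec - t0.tv_sec) * 1000000L
                    + (t1.tv_usec - t0.tv_usec);
            printf("msg %3d: ret=%ld errno=%d %ld us\n",
                   i, (long)n, n == -1 ? errno : 0, us);
        }
    }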

I tried setting TCP_NODELAY to TRUE but that had no effect (thanks anyway).

  • Matt

Sean Boudreau wrote:

There are some inconsistencies here. You say the default of 8K, which
implies the tiny stack, but you mention setting the send buffer size, which
it doesn’t support.

The tiny stack sets up an 8K ring buffer when the socket connects, and
packets always point into it.

Which stack are you in fact using? What is the size of your writes?
Try setting TCP_NODELAY.

-seanb

Matthew Schrier <mschrier@wgate.com> wrote:
: This is all true, but the problem seems to point to the sender. We cranked up
: the receive buffer size on the receiver (to 32K) and the problem still
: occurred.

: We believe that, even with the default buffer sizes of 8K on a closed network
: containing only two or three machines, we should not have come close
: to hitting any buffer limits on either the sender or the receiver.

: We also had two completely different receiving applications exhibit the same
: problem, another indication that the sender was the culprit. And it seems to
: occur only on the TRANSITION from one data rate to a higher data rate. For
: instance from 10 KB/sec to 35 KB/sec. But even at 120 KB/sec steady state the
: system works great.

: Admittedly I need to do more investigation, but I was interested in whether
: anyone had experience with how QNX manages its buffers internally. We are
: concerned that if memory is being allocated and deallocated without the
: application’s knowledge or involvement, QNX is not really behaving like a
: good real-time OS.

: Thanks
: - Matt

: Sean Boudreau wrote:

:> Assuming this is a TCP socket…
:>
:> You can only send as fast as the other end receives. The other
:> end advertises its window, we send up to that window and the
:> local send buffer opens up as the other end acks the data. You’ll
:> get EWOULDBLOCK (EAGAIN) if you set the socket to nonblocking and
:> the local send buffer fills up.
:>
:> -seanb
:>
:> Matthew Schrier <mschrier@wgate.com> wrote:
:> : I have a socket application that typically sends a fair amount of data
:> : in real time (~100 KB/sec). When the data throughput suddenly jumps
:> : from “low” to “high”, the send() function returns error #11 for many of
:> : the messages. But within a short time the large amount of messages are
:> : handled with no problem. It is as if the initial send buffer size (8K)
:> : was dynamically increased to handle the additional load.
:>
:> : Also, when the throughput drops significantly for a few seconds it seems
:> : that some of the buffer memory is deallocated, because when the
:> : throughput jumps up again the problem recurs.
:>
:> : Questions:
:>
:> : - Does anyone know of any dynamic memory allocation in the socket
:> : buffers?
:> : - The default buffer size (8K) should handle my instantaneous buffering
:> : needs. Why should I have to increase the buffer size substantially (to
:> : 61K) to make the problem go away? Is the buffer divided into
:> : maximum-network-length increments (e.g. 8K divided by 1500 bytes = only
:> : 5 messages buffered)?
:>
:> : Any help would be appreciated in this area.
:>
:> : Thanks
:> : - Matt