We observed that if an application does not pick up packets from the IP stack "in time", packets are dropped.
We assume the reason is that the receive queues fill up (packets are not discarded on a timeout, are they?).
Now we would like to know how to find out the queue (buffer) sizes, and whether they are configurable per socket, per interface, and/or per IP stack.
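For the per-socket part, a minimal sketch (assuming a Linux/POSIX system and Python's standard socket module) that queries the receive buffer size and asks for a larger one:

```python
import socket

# Per-socket: query and (attempt to) change the receive buffer size.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# The default comes from the system-wide setting
# (net.core.rmem_default on Linux).
default_rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("default SO_RCVBUF:", default_rcvbuf)

# Request a larger buffer. The kernel may clamp the request to a
# system-wide maximum (net.core.rmem_max on Linux), so read the
# value back to see what was actually granted. Note that Linux
# doubles the requested value to account for bookkeeping overhead.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("granted SO_RCVBUF:", granted)

s.close()
```

On Linux the stack-wide defaults and ceilings are the `net.core.rmem_default` / `net.core.rmem_max` sysctls (plus `net.ipv4.udp_mem` / `net.ipv4.tcp_rmem` for the protocol-specific limits), and the per-interface queue is the driver's RX ring, inspectable with `ethtool -g <iface>`; whether those exact knobs exist on your system depends on the OS.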