UDP packet loss

Hi.

We have some data that we want to broadcast over UDP. These buffers are
between 16 and 64K bytes. If a buffer is greater than 1k, then we break
it up into blocks of less than 1k.

With each buffer, we send one header block (6 bytes) and one or more data
blocks, depending on the data size.

So each time we broadcast, we must make more than one sendto() call.
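
To illustrate the scheme, here is a minimal sketch of the sending side
(the 6-byte header layout, the field names, and send_buffer() itself are
assumptions for the example, not our actual code):

#include <stddef.h>
#include <stdint.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define BLOCK_SIZE 1024                  /* assumed data-block size */

/* Hypothetical 6-byte header sent ahead of the data blocks. */
struct hdr {
    uint16_t buf_id;                     /* identifies this buffer  */
    uint16_t nblocks;                    /* data blocks that follow */
    uint16_t block_size;                 /* size of each full block */
};

/* Broadcast one buffer as a header datagram plus N data datagrams. */
int send_buffer(int sock, const struct sockaddr_in *dst,
                uint16_t buf_id, const char *buf, size_t len)
{
    struct hdr h;
    size_t off, chunk;

    h.buf_id     = htons(buf_id);
    h.nblocks    = htons((uint16_t)((len + BLOCK_SIZE - 1) / BLOCK_SIZE));
    h.block_size = htons(BLOCK_SIZE);
    if (sendto(sock, &h, sizeof h, 0,
               (const struct sockaddr *)dst, sizeof *dst) < 0)
        return -1;

    for (off = 0; off < len; off += BLOCK_SIZE) {
        chunk = (len - off < BLOCK_SIZE) ? len - off : BLOCK_SIZE;
        if (sendto(sock, buf + off, chunk, 0,
                   (const struct sockaddr *)dst, sizeof *dst) < 0)
            return -1;
        /* an inter-block delay (the question below) would go here */
    }
    return 0;
}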

It seems that the more data we send (the less time between sendto() calls),
the higher the data loss on the receiving end. If we put a delay between
calls to sendto(), then we can minimize the data loss.

How do you figure out how much time to wait before the next sendto() call?

What’s going on here? How does UDP work?

I would like to minimize data packet loss. How?

TIA

Augie

UDP (some call it Unreliable Datagram Protocol) isn’t a guaranteed delivery
mechanism, unlike TCP, where packets are generally guaranteed to be
delivered. When sending UDP packets, it is likely you will lose some. And
since you are broadcasting them, you need to put delays in between;
otherwise you are flooding your network with broadcast packets.

I’m not sure what else to say, as I’m not an expert at networking or at
coding network-related applications. But from some experience, I know that
broadcasting lots of data can be bad and some of it will be lost.


“Augie Henriques” <augiehenriques@hotmail.com> wrote in message
news:9f8sa8$l33$1@inn.qnx.com

[...]

First off, I would recommend you get a really good reference book,
such as

TCP/IP Illustrated Volume 1: The Protocols
W. Richard Stevens
ISBN 0-201-63346-9

Actually he has a set of 3; all are great, with good, easy to understand
explanations without having to read through his life’s story. But
I am sure O’Reilly probably has a good one as well.

Anyway, there are TCP (Transmission Control Protocol) and UDP (User
Datagram Protocol) as the two main transports. TCP is a dedicated
stream of data from one host to another host. It has guaranteed
reception of the data; if it can’t deliver, then the transmission
will fail. It will also handle buffers greater than the MTU (Maximum
Transmission Unit), which in the case of Ethernet is 1500 bytes (give or
take, depending on headers :-)). The TCP packet header is larger, and
you have an ACK, or Acknowledgment, for each packet.

Whereas UDP has a lot less overhead, but it ain’t guaranteed, as
you have found out. There are good things about UDP though; there are
actually 3 different addressing schemes. Unicast: from the
source machine to a single machine. Multicast: from a single host
to a select set of hosts; Internet radio is a good example of where
this could be used. And finally, broadcast: where the packet is
targeted at all computers, though generally speaking these are
limited to a subnet, for obvious reasons.
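
To make the three concrete, a quick sketch of the sender side (the
addresses and port are made-up examples):

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* The same SOCK_DGRAM socket serves all three; only the destination
 * address (and one socket option) differs. */
void make_dest(struct sockaddr_in *dst, int sock, int mode)
{
    memset(dst, 0, sizeof *dst);
    dst->sin_family = AF_INET;
    dst->sin_port   = htons(5000);                         /* example port */

    if (mode == 0) {                                       /* unicast */
        dst->sin_addr.s_addr = inet_addr("192.168.1.20");  /* one host */
    } else if (mode == 1) {                                /* broadcast */
        int on = 1;                      /* must be enabled explicitly */
        setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &on, sizeof on);
        dst->sin_addr.s_addr = inet_addr("192.168.1.255"); /* subnet */
    } else {                                               /* multicast */
        dst->sin_addr.s_addr = inet_addr("239.0.0.1");     /* group */
        /* receivers must join the group with setsockopt(sock,
         * IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq) */
    }
}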

Now why would anyone use UDP? Simple: applications such as internet
radio can handle some packet loss, and frankly the internet is
generally pretty reliable (and I should say, internet radio
generally does not need that much bandwidth, unlike your application).
So the lack of ACKs and the flexibility of multicast, with its reduced
load on the network, may more than make up for the occasional packet
loss. And of course there are applications where you will not have a full
TCP/IP stack, such as DHCP/bootp/PXE, where the bootp/DHCP and TFTP
packets are all UDP, thus greatly reducing the amount of code in
the boot ROM.

So there is a simple explanation of what is going on; the book does
a better job than I do.

Now, you said you are broadcasting packets. Are you trying to say
you are using broadcast UDP packets, or just emitting packets from
a machine? If the former, I would suggest taking a look at RFC 2090;
it talks of using TFTP (Trivial File Transfer Protocol) and multicast
UDP packets to transfer the same file to many clients at the same
time, allowing each one to ensure it receives the entire file.


Tom

Augie Henriques <augiehenriques@hotmail.com> wrote:

[...]


Thomas Emberson <Thomas@QNX.com>

<thomas@qnx.com> wrote in message news:9fa1lh$kn$1@nntp.qnx.com

[...]

Thanks for all the input. I’m well aware of the different protocols and I
have read the different books on this subject.

We are trying to send data (lots of it) from a data acquisition system to
several clients (as fast as possible). I know TCP/IP is reliable, but I
don’t want to have to send all this data separately to several different
clients.

Our system will use an isolated (local) ethernet peer-to-peer network. We
don’t have to worry about other systems on the network. The network lives
for the data broadcast. Normally we would not be connected to the internet
or any other kind of network.

I have given multicast a try, and found out that multicast is not really
all that different from plain UDP. For one thing, multicast is not
reliable, just like UDP. I thought I read some time ago, when multicast
was in the works, that it was supposed to be a reliable protocol.

What I would like is some ideas on how I can minimize data loss.

Have other people done this? What delays work better between block sends?
I don’t want long delays to slow down my data transmission.

Or, if anyone has some ideas for a better solution to the design, I’d like
to hear them.

TIA

Augie


Have other people done this? What delays work better between block sends?
I don’t want long delays to slow down my data transmission.

This reminds me of a theological question. Is God
omnipotent enough to let me cut off my head and still be
alive?

The probable reason that delays decrease packet loss is that
the probability of a collision decreases. So isn’t what you
are asking for, to be able to decrease delays and packet loss
at the same time, similar to my theological question?


Mitchell Schoenbrun --------- maschoen@pobox.com

The probable reason that delays decrease packet loss is that
the probability of a collision decreases. So isn’t what you
are asking for, to be able to decrease delays and packet loss
at the same time, similar to my theological question?

Not wanting to start the deterministic network thread again, but the
answer to the question is to use a deterministic network medium. With a
deterministic network, packet loss will not be inversely related to
packet interval.

Rennie Allen <RAllen@csical.com> wrote:

[...]

I’ve been doing a bit of playing between two computers, and here
is what I found.

Environment:

Small LAN, with another person generating some internet traffic, but it
is behind a firewall. 2 machines, each running RTP patch B, Photon 2.
10Mbit network.

Machine 1: 350 PII, ne2000 clone sitting on the PcCard bus
Machine 2: 233 P, Speedo/82557 sitting on the PCI bus

When I dump 1000 1500-byte packets from

Machine 1 to 2, I get virtually zero packet loss
Machine 2 to 1, I get up to 30-50% packet loss.

Simple: an ne2000 on a PcCard is probably going to move less data than
an 82557 on a PCI bus.

I figure you are saturating the net/machines during your bursts.

I would suggest that you make your own protocol based on UDP packets.
I think you will have a hard time trying to figure out a “timing”
that will work for all occasions and machine loads. Because of
course, the real-time processes will have a higher priority, right?

Something along the lines of 5 packets out, then wait for an ACK;
so you get the burst of the 5 packets, then you will be slowed up
by the ACK. Of course TCP does that for you: it will burst, using
a sliding-window approach where it will let itself get a few packets
ahead of the ACKs, IIRC.
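
A bare-bones sketch of that burst-then-wait idea (the 4-byte ACK and
the window of 5 are assumptions, not a worked-out protocol):

#include <stdint.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define WINDOW 5

/* Send npkts datagrams in bursts of WINDOW, blocking for a 4-byte ACK
 * after each burst.  Stop-and-go only; a lost ACK stalls the loop, so
 * a real version would need a timeout and retransmit. */
int send_bursts(int sock, const struct sockaddr_in *dst,
                char pkts[][1500], int npkts, int pktlen)
{
    int i, j, n;
    uint32_t ack;

    for (i = 0; i < npkts; i += WINDOW) {
        n = (npkts - i < WINDOW) ? npkts - i : WINDOW;
        for (j = 0; j < n; j++)
            if (sendto(sock, pkts[i + j], pktlen, 0,
                       (const struct sockaddr *)dst, sizeof *dst) < 0)
                return -1;
        if (recv(sock, &ack, sizeof ack, 0) < 0)  /* wait out the burst */
            return -1;
    }
    return 0;
}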

Anyway, if you pay attention to the return code from the sendto(),
the sending machine should not drop any packets itself, unless the
PHY is going bad.

It’s not pretty, but here is what I have for a sending loop:

size = 1500;
for (ctr = 0; ctr < 1000; ++ctr) {
    *(unsigned long *)data = ctr;    /* stamp a sequence number first */
    do {
        if ((out = sendto(sock, data, size, 0,
                          (struct sockaddr *)&name, sizeof(name))) < 0) {
            if (errno == ENOSPC)     /* no buffer space: just retry */
                continue;
            perror("sending datagram message");
            break;
        }
    } while (out < size);
}
close(sock);

YMMV
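
For what it’s worth, the receiving end of a test like that can measure
the loss by watching the counter stamped into the start of each packet.
A sketch to go with the loop above (assumes a socket already bound to
the right port):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Count gaps in the sequence numbers written by the sending loop.
 * Blocks forever if the tail of the run is lost; a real test would
 * add an SO_RCVTIMEO timeout. */
void count_loss(int sock, unsigned long npkts)
{
    char buf[1500];
    unsigned long seq, expect = 0, lost = 0;

    while (expect < npkts) {
        if (recv(sock, buf, sizeof buf, 0) < 0)
            break;
        memcpy(&seq, buf, sizeof seq);       /* counter is first field */
        if (seq > expect)
            lost += seq - expect;            /* gap => dropped packets */
        expect = seq + 1;
    }
    printf("lost %lu of %lu packets\n", lost, npkts);
}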

Anyway, if the DATA must get there, then you are going to have to
use something that gives you reliability, whether it is a roll-your-own
that you can tune, or standard TCP. Unless the bandwidth is
low enough that you are not going to get any overruns anywhere, or
you can stand losing packets, you are not going to accomplish what
you need.

Tom

– Thomas Emberson <Thomas@QNX.com>

<thomas@qnx.com> wrote in message news:9fltiu$hl6$1@nntp.qnx.com

[...]

Thanks.

If I use TCP, I can reliably send the data from one server to a client. If
I have more than one client, then I will start to run into problems. The
amount of data that I can send will be cut in half for two clients (and so
on). Not only do I have a problem with the amount of data that I can
transmit, but I also have a problem with the amount of time it takes to
transmit this data: I now have to transmit the data several times, once
for each client.

So, my thinking is that (in my case) UDP wins even with the data loss. I
can send more data and I don’t have to spend so much time sending it.

There is no way to have one transmission reach multiple clients without
data loss, correct?

TIA

Augie



Augie Henriques <augiehenriques@hotmail.com> wrote:

[...]

There is no way to have one transmission reach multiple clients without
data loss, correct?

The short answer is “NO”. “Reliable broadcast” and “reliable multicast”
are 2 research areas, and as far as I know, there is no “good” solution
now.

UDP is not reliable; only your protocol (on top of UDP) could make it
reliable (by detecting packet loss and resending). That’s basically how
NFS works.

You can use broadcast/multicast, and have the other end ACK back to
detect packet loss. Then choose an already established TCP link to
resend that packet (to the specific node).
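
Sketched out, that resend path might look like this on the server side
(the NACK format, the sent[] retransmit buffer, and lookup_client() are
all assumptions for illustration):

#include <stdint.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

struct nack { uint32_t seq; };                      /* assumed format */

int lookup_client(const struct sockaddr_in *from);  /* hypothetical */

/* Data goes out over UDP broadcast/multicast; when a client reports a
 * missing sequence number, resend just that packet over the TCP
 * connection already open to that client. */
void handle_nack(int udp_sock, int tcp_fd[], char *sent[], int pktlen)
{
    struct nack n;
    struct sockaddr_in from;
    socklen_t flen = sizeof from;

    if (recvfrom(udp_sock, &n, sizeof n, 0,
                 (struct sockaddr *)&from, &flen) == sizeof n) {
        int client = lookup_client(&from);
        if (client >= 0)
            send(tcp_fd[client], sent[ntohl(n.seq)], pktlen, 0);
    }
}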

-xtang

Augie Henriques <augiehenriques@hotmail.com> wrote:


[...]

There is no way to have one transmission reach multiple clients without
data loss, correct?

I kinda thought you were going there.

At this point I think you are definitely at a roll-your-own point.

Take a look at the standards for TFTP and the TFTP multicast option.

I would probably not do a straight TFTP multicast option; I would
probably be more inclined to have multiple clients listen for
multicast/broadcast packets and then reply with individual
acknowledgements. Whereas TFTP multicast is based around a finite
amount of data, IOW a single file, you have a continuous stream of
data.

So set up your own transport on top of UDP. The header would include:

  • sequence number
  • total packets
  • forced ACK control
  • checksum of current packet (optional)
  • out of band data/control information (optional)

(Other information, such as size, is available from the UDP
header/information from recv().)
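
As a C struct, that header might come out something like this (field
widths and the flag bit are guesses, not a spec):

#include <stdint.h>

/* Possible wire header for the roll-your-own transport; all
 * multi-byte fields sent in network byte order. */
struct xport_hdr {
    uint32_t seq;        /* sequence number                       */
    uint32_t total;      /* total packets in this data set        */
    uint8_t  flags;      /* bit 0: forced ACK control             */
    uint8_t  oob;        /* out-of-band data/control (optional)   */
    uint16_t cksum;      /* checksum of current packet (optional) */
    /* payload follows; its length comes from recvfrom()'s return */
};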

So when a client receives a packet, it can then compare it to the
last packet it received. If it missed one, then it can
store this buffer away for future use and send an Anti-ACK containing
the sequence number of the missing packet to the server. When the
server receives the Anti-ACK, it would of course resend the packet.
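
A rough sketch of that client-side logic (deliver(), stash(), and
send_nack() are hypothetical helpers):

#include <stdint.h>
#include <arpa/inet.h>

struct xport_hdr { uint32_t seq; uint32_t total; uint8_t flags;
                   uint8_t oob; uint16_t cksum; };  /* as above */

void deliver(const char *payload, int len);         /* hypothetical */
void stash(uint32_t seq, const char *p, int len);   /* hypothetical */
void send_nack(uint32_t seq);                       /* hypothetical */

/* Deliver the packet if it is the next expected sequence number;
 * otherwise stash it and Anti-ACK every number in the gap so the
 * server resends the missing packets. */
void on_packet(const struct xport_hdr *h, const char *payload, int len)
{
    static uint32_t next_seq = 0;
    uint32_t seq = ntohl(h->seq), s;

    if (seq == next_seq) {
        deliver(payload, len);
        next_seq++;          /* (then drain any stashed packets) */
    } else if (seq > next_seq) {
        stash(seq, payload, len);
        for (s = next_seq; s < seq; s++)
            send_nack(s);    /* one Anti-ACK per missing packet */
    }                        /* seq < next_seq: duplicate, drop  */
}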

As well, I would look at grouped positive ACKs: for example, every
5 (or so) packets a client has received, it will send a packet with
the sequence numbers successfully received. This will allow the
server to throw out packets it no longer needs. You will also
notice that I included a ‘forced ACK control’; when that is set,
all clients will be forced to ACK all packets that have not already
been acknowledged. This could be used at the end of a data set, for
example. Then have an adjustable timeout, so if you don’t receive
an ACK from a machine after, let’s say, 10ms have elapsed from the
receipt of the last ACK, then you can resend the last packet.

By keeping the number of outstanding packets small, e.g. a 5-packet
window before an ACK, the clients will effectively throttle the
server’s output enough to allow each client to keep up with the data
flow, but not suffer the extra bandwidth required by individual TCP
streams. You don’t have to get down to the lock-step level of TFTP,
but it would be nice to get a bit of the sliding-window approach
from TCP.

And hey, if it is good, it works, and your company lets you, write
an RFC. My RFC was not accepted as experimental until after I started
working at QNX, but shortly afterwards it became a requirement for
things such as PXE by MS; you can guess the abuse I took there :-)

Anyway, got to leave some work for you :-)

And of course, YMMV.

Tom


Thomas Emberson <Thomas@QNX.com>