I’ve been having a strange problem with packets that io-net is passing up to my
ncm. My ncm has a down-type of ‘en’ and an up-type of ‘vmac’ (our own type).
After a while (measured in number of packets received) I will get a packet that
appears to be offset by two bytes. Thereafter, it happens more and more
frequently. The problem resets itself when I restart io-net. For example, if I
show you just the 802.3 header of an expected packet and the actual packet
received from io-net:
- expected:
  00 40 05 7c 8e fb 00 90 c2 c1 87 86 00 04 …
- actual:
  00 40 00 40 05 7c 8e fb 00 90 c2 c1 87 86 00 04 …
Naturally the length field is also wrong: the two-byte shift leaves the last
two bytes of the source MAC (0x8786) sitting where the 802.3 length field
belongs, instead of whatever length the real packet has.
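To make the failure concrete, here is a minimal, standalone sketch (not my module code; the byte array is just the shifted dump from above) showing that reading the length/type field of the shifted frame yields exactly that bogus 0x8786:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* The actual (shifted) bytes as handed up by io-net. In an 802.3
 * header, offsets 12-13 hold the big-endian length/type field. */
static const uint8_t shifted[14] = {
    0x00, 0x40, 0x00, 0x40, 0x05, 0x7c, 0x8e, 0xfb,
    0x00, 0x90, 0xc2, 0xc1, 0x87, 0x86
};

/* Read the 802.3 length/type field and convert to host byte order. */
static uint16_t read_len(const uint8_t *hdr)
{
    uint16_t len;
    memcpy(&len, hdr + 12, 2);
    return ntohs(len);
}
```

With the two-byte skew, read_len(shifted) returns 0x8786 — the tail of the source MAC misread as a length.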
Ethereal sniffing on a different computer shows that the expected packets are
what is really out on the wire, so the problem lies between the wire and my
ncm.
I’ve been able to kludge around it with the following code, and this works
reliably. Of course that doesn’t change the fact that it’s a kludge…
unsigned char *p = ni->iov_base;

if (p[0] == 0x00 && p[1] == 0x40 && p[2] == 0x00 && p[3] == 0x40) {
    _802_3_hdr_t *hdr;

    /* Skip the two spurious bytes and recompute the iov length. */
    ni->iov_base = (unsigned char *)ni->iov_base + 2;
    hdr = (_802_3_hdr_t *)ni->iov_base;
    ni->iov_len = sizeof(_802_3_hdr_t) + ntohs(hdr->len);
    log_warning("vmac_en::rx_up: skewampus packet detected, correcting");
}
I recognize that hard-coding the MAC address does nothing to make this any
less kludgey. It's just a stop-gap measure.
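If I end up keeping the workaround, a slightly less fragile detection would compare against the interface's actual MAC rather than hard-coded bytes. A sketch only — `frame_is_skewed` is a hypothetical helper of mine, and obtaining `our_mac` from the driver is an assumption, not a real io-net API:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Hypothetical helper: returns nonzero if the frame looks shifted by
 * two bytes, i.e. our interface's MAC appears at offset 2 instead of
 * at offset 0. `our_mac` would come from the driver at init time
 * rather than being hard-coded. */
static int frame_is_skewed(const uint8_t *frame, size_t len,
                           const uint8_t our_mac[6])
{
    if (len < 8)
        return 0;
    return memcmp(frame, our_mac, 6) != 0 &&
           memcmp(frame + 2, our_mac, 6) == 0;
}
```

This still misfires on any multicast/broadcast frame or a genuine two-byte coincidence, so it is a diagnostic aid at best, not a fix for the underlying offset bug.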
So, is this a bug in io-net, the devn-rtl.so driver, or my code? I can't see
anything in my code that would cause it, but it's possible that I'm
mismanaging memory in a way that confuses io-net or the en driver. Has anyone
seen anything like this before?