I am hesitant to document it here in the newsgroup since I am not
really sure if it will continue to be used in the future. Right now
only the pcnet driver is taking advantage of it. So you can certainly
look in that location for an example.
chris
Dave Edwards <Dave.edwards@abicom-international.com> wrote:
Ok,
I’ve just checked the support.h file.
There’s not a lot in it.
Can Sean or Chris please explain the functions of:
nic_allocator_create
nic_allocator_alloc
nic_allocator_free
nic_allocator_destroy
My guess is that this would create a pool of npkts for use by a driver; if
that’s so, what are the args to be passed to nic_allocator_create?
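
If my guess is right, I’d expect it to boil down to something like the sketch
below. To be clear, the names and signatures here are invented for
illustration; they are not the real declarations from support.h, which is
exactly what I’m asking about.

/* Purely hypothetical sketch of a preallocated packet pool; these names
 * and signatures are made up for illustration and are NOT the real
 * nic_allocator interface from support.h. */
#include <stdlib.h>

typedef struct pkt_pool {
    size_t  pkt_size;    /* size of each preallocated buffer          */
    int     count;       /* capacity of the pool                      */
    int     top;         /* number of buffers currently on free stack */
    void  **free_list;   /* LIFO stack of unused buffers              */
} pkt_pool_t;

/* Preallocate 'count' buffers of 'pkt_size' bytes up front. */
static pkt_pool_t *pkt_pool_create(int count, size_t pkt_size)
{
    pkt_pool_t *p = calloc(1, sizeof(*p));
    if (p == NULL)
        return NULL;
    p->free_list = calloc(count, sizeof(void *));
    if (p->free_list == NULL) {
        free(p);
        return NULL;
    }
    p->pkt_size = pkt_size;
    p->count    = count;
    for (p->top = 0; p->top < count; p->top++) {
        p->free_list[p->top] = malloc(pkt_size);
        if (p->free_list[p->top] == NULL)
            break;                    /* pool simply ends up smaller */
    }
    return p;
}

/* Hand out a buffer from the pool instead of allocating per frame. */
static void *pkt_pool_alloc(pkt_pool_t *p)
{
    return (p->top > 0) ? p->free_list[--p->top] : NULL;
}

/* Return a buffer so the next receive can recycle it. */
static void pkt_pool_free(pkt_pool_t *p, void *pkt)
{
    if (p->top < p->count)
        p->free_list[p->top++] = pkt;
}

/* Release whatever is currently sitting in the pool. */
static void pkt_pool_destroy(pkt_pool_t *p)
{
    while (p->top > 0)
        free(p->free_list[--p->top]);
    free(p->free_list);
    free(p);
}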
Cheers
Dave
John A. Murphy wrote:
Assuming you have the Network DDK, take a look in usr/include/drvr/support.h for the
nic_allocator stuff; that’s the only documentation I’m aware of short of contacting
your sales rep.
On the packet sizing, note that it was the chargen daemon I changed (replaced,
actually), not the client. The standard QNX version, according to Sean, does 74-byte
writes; mine does 7030-byte writes.
Murf
Dave Edwards wrote:
Gents,
I must have missed out on the NIC allocator routines. Is there any
documentation? Are they mentioned in the 6.2PE docs?
With respect to the Realtek chip, I’m currently stuck with this as it’s
fitted to the COTS board that we are using. I would rather use an AMD-based
device (for the DMA reasons specified) but I can’t.
As to packet sizing, I’ll rework my chargen test to see if that changes
things.
Dave
John A. Murphy wrote:
As far as the 8139 goes, Lew summed it up very nicely: the 8139 follows in the grand
tradition of the NE2000, which served its purpose for many years - but they’re
both primitive antiques by today’s standards, especially if you have any interest
in performance.
As for reuse of buffers, I agree that it would be much more efficient to reuse
the buffers; the nic_allocator functions provide a handy (although undocumented)
way to do this. Before we discovered nic_allocator we implemented the same thing
on our own and witnessed a substantial performance improvement.
As for Sean’s comments about the use of small buffers, I threw together a chargen
that does big sends, and saw the 8Mbps output of a 200MHz machine jump to over
75Mbps. Another example of one of the basic truths of computers: if you move
data around in small chunks you’ll pay a high price in performance.
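
For the curious, the replacement chargen was nothing fancy; a minimal sketch
of the idea is below (this is not the exact code I ran, and the port number is
made up for the example). The whole point is that each write() hands the stack
thousands of bytes instead of 74.

/* Sketch of a "big send" chargen-style server: fill a 7030-byte buffer
 * with a printable pattern and blast it per write(). Error checks are
 * omitted for brevity. */
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define CHUNK 7030

int main(void)
{
    char buf[CHUNK];
    int i, s, c;
    struct sockaddr_in addr;

    for (i = 0; i < CHUNK; i++)          /* printable test pattern */
        buf[i] = ' ' + (i % 95);

    s = socket(AF_INET, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(1900);  /* made-up port number */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(s, (struct sockaddr *)&addr, sizeof(addr));
    listen(s, 1);

    for (;;) {
        c = accept(s, NULL, NULL);
        /* keep writing big chunks until the peer closes the connection */
        while (write(c, buf, CHUNK) > 0)
            ;
        close(c);
    }
    return 0;
}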
Murf
Dave Edwards wrote:
Murf,
Your results are closer to what I expected. I’ve never been able to
achieve these rates on the slower machines.
Have you also noticed that the IP stack can sometimes fragment
unnecessarily?
I’m confident that my hardware is OK, as I’ve written a bridge module
that sits underneath the IP stack and forwards the network traffic to
another NIC. In this mode I can talk to one of the faster machines, via
the slow NEC processor, and still achieve good throughput.
[PC]--->[NEC, EN0]--->[bridge]--->[NEC, EN1]--->[AMD 1500]
        ^------------300MHz NEC------------^
In this configuration I can saturate the link between EN1 and the AMD
1500 and get nearly full bandwidth (it’s not an Ethernet port but has a
capacity of around 18Mbit/sec).
If I do the test to the 300MHz NEC machine, running the same firmware, I
get a data rate of ~3Mbit/sec.
One other thing to mention is that this appears to be independent of the
link setting on the NEC machine (10/100Mbit, Full/Half Duplex), and it’s
independent of my code!
One other question,
Has anyone done any optimisation on the Realtek 8139 code? I have a copy
of the source code from the UK office and notice that it uses a pool
of RX npkts for receive; however, the version I received uses the pool in a
strange way. Basically it deletes any receive_complete npkts and then
does an ion-alloc to add a packet to the RX pool.
This happens for every frame.
Q1, Isn’t this inefficient?
Q2, Why does the driver not recycle the npkts once they have been released?
Dave
John A. Murphy wrote:
Just for comparison, I’ve been running some very similar tests, and I seem
to get better results than yours from slow machines, and not quite as good
results from fast machines:
Processor speed   OS          Data rate from chargen
200 MHz           QNX 6.2.0   ~8 Mbps
500 MHz           QNX 6.2.0   ~20 Mbps
1.8 GHz           QNX 6.2.0   ~50 Mbps
2.4 GHz           QNX 6.2.1   ~65 Mbps
I’m just making a TCP connection to the chargen port and hanging in a loop
doing recv() into a large buffer until I’ve seen a total of 10MB.
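
In code form, the test is essentially the sketch below (the 64KB buffer size
and the hard-coded target address are assumptions for the example; chargen
itself is TCP port 19).

/* Sketch of the measurement loop: connect to chargen (TCP port 19),
 * recv() into a large buffer until 10MB have arrived, and report the
 * elapsed time. Error checks are omitted for brevity. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    char buf[65536];                           /* "large buffer"        */
    long long total = 0;
    const long long goal = 10LL * 1024 * 1024; /* stop after 10MB       */
    struct sockaddr_in addr;
    int s, n;
    time_t start, stop;

    s = socket(AF_INET, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(19);               /* chargen           */
    addr.sin_addr.s_addr = inet_addr("10.0.0.1");   /* target under test */
    connect(s, (struct sockaddr *)&addr, sizeof(addr));

    start = time(NULL);
    while (total < goal && (n = recv(s, buf, sizeof(buf), 0)) > 0)
        total += n;
    stop = time(NULL);
    close(s);

    if (stop > start)
        printf("%lld bytes in %ld s = %.1f Mbit/s\n",
               total, (long)(stop - start),
               (total * 8.0) / (stop - start) / 1e6);
    return 0;
}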
Murf
Dave Edwards wrote:
Folks!
Can anyone shed some light on QNX’s TCP/IP performance? I have an
ongoing issue (since 6.0) where the performance of the ipstack is
not that good on low-power processor platforms.
I’ve noticed this on various processor architectures and with different
versions of the ipstack.
An example would be the chargen service on the stack. I have a test
application that requests data from this port and then calculates the
throughput.
The results are as follows (all from the same requesting machine; all targets have
the same network card and identical drivers when using QNX):
Processor/Speed   OS          Max Data Rate
AMD 1500MHz       W2000       > 70Mbit/sec
AMD 1500MHz       QNX 6.1     > 70Mbit/sec
AMD 1500MHz       QNX 6.2.1   > 70Mbit/sec
AMD 1500MHz       Linux       > 80Mbit/sec
NEC 300MHz        QNX 6.1     ~ 3Mbit/sec
NEC 300MHz        QNX 6.2     ~ 3Mbit/sec
NEC 300MHz        Linux       > 50Mbit/sec
PPC 750 200MHz    QNX 6.1     ~ 3Mbit/sec
PPC 750 200MHz    QNX 6.2     ~ 3Mbit/sec
I can appreciate that I’m using much slower processors for the embedded
systems, and as such the final throughput should be lower; however, when
running the tests the results don’t appear to scale as I would expect.