advice please!

Rennie Allen wrote:

USB is in no way deterministic.

USB has deterministic modes of operation (Isochronous modes).

I’m not convinced …

AFAIK … the time of the reconfiguration can be in the range of
seconds. And it is done by the ARCnet chip and can’t be controlled
by software!

No, not seconds. On 2.5 Mbit ARCnet it was in the range of tens of
milliseconds. This was improved for 20 Mbit ARCnet and, I believe, it is
now in the single digits.

Interesting …

A reconfiguration always happens if the token gets lost … because
of switching a station on or off, or because of cabling problems
(sometimes really a nightmare…).

Determinism is independent of fault tolerance. How the system reacts
during a cable fault is outside the scope of a discussion on
deterministic networks.

Hm, think twice … in the case of QNX4 you will get ARCnet,
but you will lose a lot of other nice things (thread support,
DLL support, and so on).

What is most important, reliable distributed real-time behavior, or the
latest, current fashion, me-too features ?

IMHO … thread support isn’t a me-too feature, it is essential.

However … a good solution would also be to use reflective memory
interfaces in a star configuration, but a PROFIBUS-based solution is
more cost-effective. With PROFIBUS you can switch workstations on
and off with no negative effects on the communication.

Profibus is token passing just like Arcnet,

That’s correct for a multi-master configuration, because token
passing is only used at the master level … the communication with the
assigned slaves takes place while the master holds the token (the time
slot), and this is of course done without token passing (the
PROFIBUS protocol is a so-called hybrid token passing protocol).
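The determinism claim here is easy to put in numbers: on a token bus, every station is guaranteed the token within one worst-case rotation, which is just a sum of bounded per-station times. A minimal sketch with made-up timings (illustrative numbers only, not real PROFIBUS or ARCnet figures):

```python
# Worst-case token rotation time on a token bus: every station holds the
# token for at most `hold_us` and passing it on costs `pass_us`. The bound
# is a simple sum -- this is what makes token passing deterministic.
def token_rotation_bound_us(n_stations, hold_us, pass_us):
    return n_stations * (hold_us + pass_us)

# Example: 20 stations, 40 us max token hold, 10 us to pass the token.
bound = token_rotation_bound_us(20, 40, 10)
print(bound)  # 1000 us: every station is guaranteed the token within 1 ms
```

No station can be starved; contrast this with CSMA/CD, where no such closed-form bound exists.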

and suffers bus
reconfigurations in exactly the same way (although the time may be
smaller, do you know what it is ?).

In a single-master configuration the master doesn’t share the token
… that means no reconfiguration is possible and no
suffering will happen :slight_smile:

To make that technology clear:
we support a PCI board which allows direct access to the memory
that is read and written directly by the PROFIBUS ASIC
(ASPC2).

The ASPC2 polls each slave in a few microseconds … that means it
reads the data from an output memory area of the host and transfers
it to the output memory of the slave. At the same time it reads the
contents of the input memory of the slave and transfers it to the
input memory area of the host.

All of these slave-specific memory areas are located on the host
side in a 2 MB shared memory on the controller board. The host
application has direct access to that memory … no DMA is
necessary.

This IO processing is very similar to the reflective memory
technique.

If two master boards are used for two physically separated buses,
a time resolution of 100-150 microseconds is possible using only
memory-to-memory processing.
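The process-image idea Armin describes can be sketched as plain data copying: the host application reads and writes per-slave areas in shared memory while the bus controller cyclically shuttles those areas to and from the slaves. A toy simulation of one such cycle (all names and sizes invented for illustration; the real ASPC2 does this in silicon):

```python
# Toy model of a cyclic process image: the "controller" copies each
# slave's output area from host memory down to the slave, and the slave's
# inputs back up into host memory. The host just reads/writes the image.
SLAVES = 4
AREA = 8  # bytes of IO data per slave (illustrative size)

host_out = [bytearray(AREA) for _ in range(SLAVES)]   # written by host app
host_in  = [bytearray(AREA) for _ in range(SLAVES)]   # read by host app
slave_out = [bytearray(AREA) for _ in range(SLAVES)]  # each slave's outputs
slave_in  = [bytearray(AREA) for _ in range(SLAVES)]  # each slave's inputs

def bus_cycle():
    """One polling cycle: outputs down to every slave, inputs back up."""
    for i in range(SLAVES):
        slave_in[i][:] = host_out[i]   # host -> slave
        host_in[i][:] = slave_out[i]   # slave -> host

# Host writes a command for slave 2; slave 1 has fresh sensor data.
host_out[2][:] = b"\x01" * AREA
slave_out[1][:] = b"\x7f" * AREA
bus_cycle()
print(host_in[1] == bytearray(b"\x7f" * AREA))  # True: data arrived
```

The host never blocks on the bus; it only touches the shared image, which is what makes the scheme resemble reflective memory.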

Armin

Thomas Emberson wrote:

Rennie Allen <RAllen@csical.com> wrote:
You would need to wire the system like this
[ clip …]
How about something more like

[ASCII diagram: nodes 1 through 20 wired in a closed ring, 1-2-3-…-20-1]

So instead of the network Rennie suggests, why not create
your own ring? It might actually be cheaper than a 20-port switch as
well, that is, if you get 40 NE2000 clones. In effect you will get
20 × 10 Mb/s worth of bandwidth, and you will also get some determinism,
since each ethernet cable will only have two computers talking on
it.

Why build a physical ring??? Building a logical ring using
a token passing protocol (token bus) makes more sense; it’s much
cheaper and uses just a standard bus architecture!!

Ever heard of MAP??

One problem that may be created with this is latency, if you need
to get the data to each node in ~1 ms, or every 1 ms. The former may be
difficult, but the latter would be no problem.

Spend 100 µs to transfer a packet from one node to the next … it
will take 20 times 100 µs = 2 ms to distribute the information, and you
have 20 times a point of failure =:-/

Seems not to be a viable approach …
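The arithmetic behind this objection, as a quick sketch (100 µs per hop is the figure from the post):

```python
# Hop-by-hop relay around a 20-node ring: each store-and-forward hop
# costs ~100 us, so distributing data around the whole ring takes
# n_hops * hop_us before everyone has seen it.
HOP_US = 100
NODES = 20

worst_case_us = NODES * HOP_US  # data must traverse the entire ring
print(worst_case_us)            # 2000 us: double the ~1 ms budget
print(worst_case_us > 1000)     # True: the deadline is blown
```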

OR
Again, depending upon the uniqueness of the data, broadcast packets …
[ clip …]

Comment:
use the MAP protocol before you fiddle around with UDP packets.

That ‘wheel’ is already invented :slight_smile:

Armin

In article <9ejqt1$4a7$1@inn.qnx.com>, kovalenko@home.com says…

“Mario Charest” <mcharest@antispam_zinformatic.com> wrote in message
news:9ejhoc$s2p$1@inn.qnx.com…

“Bill Caroselli” <Bill@Sattel.com> wrote in message
news:9ejat8$nn8$1@inn.qnx.com…
“Mario Charest” <mcharest@antispam_zinformatic.com> wrote in message
news:9eiu8c$flf$1@inn.qnx.com…
As for sending 20 bytes every 1 ms on a network, that should not be a
problem; that equals 20K/sec per node, times 20 nodes. If they all send
at the same time to each other, that’s 400K/sec. On 10 Mbit it could be
tight, but 100 Mbit shouldn’t have any problem.
However, beware: a network with 20 PCs is not very deterministic; it may
be very difficult, if not impossible, to guarantee 100% success. That
doesn’t have anything to do with QNX but rather it’s the nature of Ethernet.


With 20 transmitters trying to put out a packet every 1 ms, collisions
are going to kill you. It doesn’t matter how little data there is in each
packet. Also, don’t forget to add in the size of the headers for each packet.
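Bill's point about per-packet overhead can be quantified with standard Ethernet framing constants (a back-of-envelope sketch; the traffic pattern is the one discussed in this thread):

```python
# Wire cost of one "20 byte" message: the payload is padded up to the
# 64-byte Ethernet minimum frame, and preamble (8 bytes) plus the
# inter-frame gap (12 bytes) ride along on the wire as well.
WIRE_BYTES = 64 + 8 + 12              # 84 bytes of wire time per frame
frame_us_10m = WIRE_BYTES * 8 / 10    # ~67 us per frame at 10 Mbit/s
frame_us_100m = WIRE_BYTES * 8 / 100  # ~6.7 us per frame at 100 Mbit/s

# Full mesh, unicast: 20 nodes each send to 19 others every 1 ms cycle.
unicast_frames = 20 * 19
print(unicast_frames * frame_us_10m)   # ~25500 us of wire time per 1 ms!
print(unicast_frames * frame_us_100m)  # ~2550 us: still over budget
# Broadcast instead: one frame per node per cycle.
print(20 * frame_us_10m)               # ~1344 us: too slow even at 10 Mbit
print(20 * frame_us_100m)              # ~134 us: fits easily at 100 Mbit
```

The payload is almost irrelevant; framing overhead dominates, which is exactly why "only 20 bytes" is misleading.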


As Mario said, it isn’t QNX that’s the problem, it’s Ethernet. In this
one situation I can say that Ethernet IS deterministic: I can determine,
before you even try it, that it won’t work on 10 Mbps Ethernet. You can
probably get away with it on 100 Mbps Ethernet. But with so many
transmitters transmitting so often and so regularly, I would recommend
trying 16 Mbps token ring instead.

I thought token ring wasn’t available on QNX6? If it were, I guess
it would be more expensive (from the original post, cost seems to
be a concern).

One option (if possible) would be to have one node poll all the others
to prevent collisions.

One other possibility would be to use broadcast; that way each node
wouldn’t have to send the data 20 times to every other node. Also, with
broadcast you may be able to reduce collisions by cascading the broadcast:
node 1 sends the data; when node 2 receives it, it sends it; when node 3
receives it, it sends it; etc…


How about using a 20-port switch?
I thought switched full-duplex ethernet does not have collisions, and
10 Mb/s would be enough…

That is true. However, the switch then has to do some buffering.
Take the case of all 20 nodes sending a broadcast packet simultaneously:
the switch has to send 19 packets out of all 20 ports to the nodes.
20 packets in, 380 packets out!
It should be able to handle it, if it has enough internal buffers and
not too much other traffic (especially other broadcasts).
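What that 20-in/380-out burst actually costs the switch can be estimated with minimum-size frames (an illustrative sketch, assuming store-and-forward at 100 Mbit/s):

```python
# Each of the 20 ports must emit 19 buffered broadcast frames back-to-back.
WIRE_BYTES = 84                       # 64-byte min frame + preamble + gap
frames_per_port = 19
buffer_bytes = frames_per_port * 64   # what the switch must hold per port
drain_us = frames_per_port * WIRE_BYTES * 8 / 100  # drain time at 100 Mbit/s

print(buffer_bytes)  # 1216 bytes per port: trivial for any switch
print(drain_us)      # ~128 us to drain, comfortably inside a 1 ms cycle
```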

  • Igor


Stephen Munnings
Software Developer
Corman Technologies Inc.

“Bill Caroselli” <Bill@Sattel.com> wrote in message
news:9ekid7$hlk$1@inn.qnx.com

“Igor Kovalenko” <kovalenko@home.com> wrote in message
news:9ekam6$dae$1@inn.qnx.com…
If there aren’t, look at this. Motorola makes a cPCI chassis with a fully
meshed 100 Mb/s ethernet backplane; all you need to do is insert SBCs
with built-in ethernet (I think that chassis takes up to 16 SBCs) and
they can talk to each other all at the same time. It is called PXP1000
and also has two gigabit ethernet ports.

Cool.

I believe 100VG-AnyLAN is deterministic. And I think Corman sells
hardware with QNX drivers.
Purists will of course tell you that it is not ethernet.

I believe this has long since been discontinued. YES! It was a nice
product.

Unfortunately Bill is quite right. We had to discontinue our 100VG-AnyLAN
product line some years ago when AT&T stopped making the chipset. Very
unfortunate, because 100VG was superior to 100TX in almost every way:
deterministic network access time, two-level priority scheme, cat 3 cabling
or up to 200 metre cat 5 cable runs, better noise immunity, up to 9 levels
of hubs in a network, and support for Ethernet or Token Ring style packets.
But 100VG was “different”, so it lost out. Today, the ready availability of
10/100 Mbps Ethernet switches makes a lot of 100VG’s advantages rather moot.
However, here is one application where 100VG might have been the perfect
solution. Anybody want to pay us to make our own 100VG ASIC…? :wink:

Bert “pining for the good old days” Menkveld
Engineer
Corman Technologies Inc.

In article <9ekam6$dae$1@inn.qnx.com>, kovalenko@home.com says…

I believe 100VG-AnyLAN is deterministic. And I think Corman sells
hardware with QNX drivers.
Purists will of course tell you that it is not ethernet.

It was indeed! Note the was.
Corman does not make or sell 100VG-AnyLAN any more, since the mfgr of the
chipsets no longer makes them. In effect, “100VG-AnyLAN is dead”.
I believe it is another case of “VHS” (good enough for most users)
pushing “Beta” (the better standard for discerning users) off the market!

I would think that switches are either store-and-forward, in which case
they compensate for insufficient switching fabric with memory, or fully
meshed. Does anyone know better?

In practice, I believe that you are correct. However, to prove,
mathematically, that this is sufficient may not be easy!

  • Igor

Stephen Munnings
Software Developer
Corman Technologies Inc.

Thomas Emberson wrote:

Previously, Armin Steinhoff wrote in qdn.public.qnxrtp.advocacy:


Why build a physical ring??? Building a logical ring using
a token passing protocol (token bus) makes more sense; it’s much
cheaper and uses just a standard bus architecture!!

Define ‘standard bus architecture’,

Should be clear what a ‘standard bus architecture’ is …

if you are talking about 20
ethernet cards into a 20-port high-bandwidth switch solution, that
may be more expensive. Of course, we have not asked how much the
programmer’s time is worth compared to the purchase of an expensive
switch.

One problem that may be created with this is latency, if you need
to get the data to each node in ~1 ms, or every 1 ms. The former may be
difficult, but the latter would be no problem.

Spend 100 µs to transfer a packet from one node to the next … it
will take 20 times 100 µs = 2 ms to distribute the information, and you
have 20 times a point of failure =:-/

Again, it depends on how quickly you need the information, as my
last point stated.

The initial poster requested distribution of data in the timeframe
of ONE millisecond … that’s the point!

Read closely, as I already mentioned that caveat.

OR
Again, depending upon the uniqueness of the data, broadcast packets …
[ clip …]

Comment:
use the MAP protocol before you fiddle around with UDP packets.

That ‘wheel’ is already invented > :slight_smile:

So you are suggesting adding even more traffic to a network that
is already congested.

It’s hard to ‘congest’ a fast 100 Mb/s Ethernet switch with 20
workstations which are using small (20-byte) packets.

I thought the whole point to the general
discussion was to make it possible to get all data to all the nodes
while spending as little money as possible.

Not correct … the point is to make the communication as fast as
possible … time frame is ONE millisecond.

[ clip …]

What’s wrong with UDP packets anyway,

Nothing is wrong with UDP … the problem of RT communication over
Ethernet is as old as Ethernet, and there are solutions in
industrial communication which are nearly as old as Ethernet (MAP).
MAP is the result of several years of research effort … so I don’t
believe that you will get similar results when you ‘fiddle around’
with UDP packets (the point is ‘fiddle around’).

right tool for the right job.

Yes … but you have to know the ‘right tools’.
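For reference, here is roughly what the "fiddle around with UDP" baseline looks like: a 20-byte payload is only a few lines of socket code (a sketch over loopback; a real rig would target a broadcast or multicast address, and none of this addresses the determinism MAP was designed for):

```python
import socket

# Minimal UDP exchange of a 20-byte datagram over loopback.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))        # let the OS pick a free port
rx.settimeout(2.0)
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"\x00" * 20, ("127.0.0.1", port))  # the 20-byte control payload

data, addr = rx.recvfrom(1500)
print(len(data))  # 20
tx.close()
rx.close()
```

Getting the bytes across is the easy part; bounding when they arrive under load is the part the ‘fiddling’ never solves.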

Armin

“Things should be made as simple as possible, but not too simple”

  • Albert Einstein

On Thu, 24 May 2001 11:39:58 +0800, “ycao” <ycao@mail.ipp.ac.cn>
wrote:

hi, all

We are going to build a control system, and the demands on real-time
behavior and stability are very high. The system consists of 20 industrial
PCs, which will be connected by network adapters. (That may be
unreasonable, but we don't have enough money and want to make a try.) The
control cycle time is about 1.3 milliseconds; that is, we have only 1.3
milliseconds to transfer data from one node to the others, including the
data processing on every node. It must be mentioned that the data
transferred from one node to the others is about 20 bytes each time.
Within the control cycle the transfer from one node to another may happen
20 times, and some of the transfers can be done in parallel.


We chose QNX as the OS for the control system above. But as a greenhand I
can't conclude that QNX will be a fit for the system, so I hope you can
give me a hint and help me out. Any advice would be good.

Thank you beforehand!


How about a FireWire (IEEE 1394) solution using the serial bus
protocol? I have literally just cracked open the book on 1394 (Don
Anderson’s “FireWire System Architecture”), but the speed and
flexibility of network configuration look attractive.
There are a lot of unspecified requirements for the 20 PC system,
among them are separation between boxes and actual complexity of
message passing.
Also there may be additional constraints due to location and funding.

Bob Bottemiller
Stein.DSI/Redmond, WA

Perhaps I’m being naive, but might it not be possible to do it with fewer
PCs? I’m assuming these 20-byte packets are coming from some sort of
sensing apparatus; perhaps each machine could have 4 or 5 sensors and
then conglomerate the data.
I mean, putting 80 or 100 bytes in a packet doesn’t make much more network
traffic, but if you’ve only got 4 or 5 machines sending them…

cheers,

Kris

ycao <ycao@mail.ipp.ac.cn> wrote:

[ clip …]

Kris Warkentin
kewarken@qnx.com
(613)591-0836 x9368
“You’re bound to be unhappy if you optimize everything” - Donald Knuth

“Stephen Munnings” <steve@cormantech.com> wrote in message
news:MPG.157824bb2bbf0699989688@inn.qnx.com

In article <9ekam6$dae$1@inn.qnx.com>, kovalenko@home.com says…
I believe 100VG-AnyLAN is deterministic. And I think Corman sells
hardware with QNX drivers.
Purists will of course tell you that it is not ethernet.

It was indeed! Note the was.

Amen.

Corman does not make or sell 100VG-AnyLAN any more, since the mfgr of the
chipsets no longer makes them. In effect, “100VG-AnyLAN is dead”.

I believe it is another case of “VHS” (good enough for most users)
pushing “Beta” (the better standard for discerning users) off the market!

And Amen. I still have my BetaMax. How about you?


Bill Caroselli - Sattel Global Networks
1-818-709-6201 ext 122

One can also have 19 machines transmit their packets to one ‘central’
machine, which would then combine them into one big packet and broadcast
it back to all machines. That means only 20 transmissions every 1 ms, but
the result will be essentially the same: everyone gets every packet from
everyone else. That is still 500K/sec though, so it won’t fly on a 10 Mbit
network.

Now if that ‘central’ machine could have 4-5 NIC cards (should be easy
enough), and the other nodes were split into groups of 4-5, each on a
separate segment, then we only have 4-5 transmissions every 1 ms on each
segment, which should be easy even for a 10 Mb network (now only about
120K/sec, which is below the magical 30% utilization level). Of course,
the central node would need to do multicasts back…

One can get away with 25 cheap NE2000s on well-forgotten thin coax cable,
so even cheap hubs aren’t required at all. Total cost of the network
infrastructure would be about $600. It would still be non-deterministic
but can work for practical purposes. Anyway, this is as far in terms of
determinism as one can get without a heavy price tag (a 100 Mbit switch
for 20 ports would be a 4-zeroes tag).

  • Igor

“Kris Eric Warkentin” <kewarken@qnx.com> wrote in message
news:9em4qh$7rq$1@nntp.qnx.com

[ clip …]

Hmm, are we wasting our breath here? Has the original author even
posted back once?

“Igor Kovalenko” <kovalenko@home.com> wrote in message
news:9ep6i8$gsl$1@inn.qnx.com

[ clip …]

In article <9em82c$l9p$1@inn.qnx.com>, Bill@Sattel.com says…

Corman does not make or sell 100VG-AnyLAN any more, since the mfgr of the
chipsets no longer makes them. In effect, “100VG-AnyLAN is dead”.

I believe it is another case of “VHS” (good enough for most users)
pushing “Beta” (the better standard for discerning users) off the market!

And Amen. I still have my BetaMax. How about you?

I never did get any VCR equipment until after this matter was settled;
I had no choice!


Stephen Munnings
Software Developer
Corman Technologies Inc.

In a single-master configuration the master doesn’t share the token
… that means no reconfiguration is possible and no
suffering will happen :slight_smile:

Yes, but any serious real-world application will use multi-masters, so
this is only a theoretical advantage.

Rennie

Spend 100 µs to transfer a packet from one node to the next … it
will take 20 times 100 µs = 2 ms to distribute the information, and you
have 20 times a point of failure =:-/

Seems not to be a viable approach …

This “non-viable approach” is exactly the approach used by ABB (formerly
Bailey) in their DCS. This is one of the few DCS designs deemed
reliable enough for boiler control on large (multi-hundred to
multi-thousand megawatt) thermal generating plants. I used a Bailey DCS
for 4 years, and we only lost the network once in that time (and that
was due to human error - both rings were disconnected simultaneously).

Physical rings are more difficult to wire, and require redundancy for
fault-tolerance, but they have a lot of desirable attributes when it
comes to determinism.

Rennie

Rennie Allen wrote:

In a single-master configuration the master doesn’t share the token
… that means no reconfiguration is possible and no
suffering will happen :slight_smile:

Yes, but any serious real-world application will use multi-masters, so
this is only a theoretical advantage.

That’s simply wrong. The typical PROFIBUS DP installation is a SINGLE
master installation … also, the proposed configuration is a SINGLE
master one.

Armin

Rennie Allen wrote:

In a single-master configuration the master doesn’t share the token
… that means no reconfiguration is possible and no
suffering will happen :slight_smile:

Yes, but any serious real-world application will use multi-masters, so
this is only a theoretical advantage.

That’s simply wrong. The typical PROFIBUS DP installation is a SINGLE
master installation … also, the proposed configuration is a SINGLE
master one.

Then the typical Profibus installation is not a serious control
application. Almost every single system we ship has redundant masters.
Control systems that don’t have redundant masters are not fault
tolerant, and hence are not “typical” real-world control systems. As
far as the proposed configuration, it seems quite clear to me that a
multi-master (or peer-to-peer) system is being proposed.

Rennie

You mean that switch can’t pass traffic from every port to every other
port at the same time? Aren’t there fully-meshed switches, or are they
called differently?

I am not sure what you mean by “fully-meshed”. Most high-speed switches
use a switch fabric that allows any port to be connected directly to any
other port. If there is no contention (i.e. no two ports are
txing/rxing to the same port) then traffic is being passed
simultaneously, but what happens when two ports try to talk to the same
port at the same time (something that seems likely to happen in the
system being described by the original poster) ? Even if (as you claim)
a “fully-meshed” device can somehow simultaneously pass data through
independent channels within the switch, what would happen when that data
arrived on the common wires of the 100baseT cable ? When a 100BaseT
ethernet card detects that it has been plugged into a switch it
negotiates full duplex, and is completely incapable of sharing the
media. A switch capable of passing traffic from any port to any other
port simultaneously, is therefore impossible (with 100BaseT at any
rate).

If there aren’t, look at this. Motorola makes a cPCI chassis with a fully
meshed 100 Mb/s ethernet backplane; all you need to do is insert SBCs
with built-in ethernet (I think that chassis takes up to 16 SBCs) and
they can talk to each other all at the same time. It is called PXP1000
and also has two gigabit ethernet ports.

Sounds cool, but I still don’t believe that two ethernet ports can
transmit to the same port at the same time without contention.

Of course there’s contention if 19 nodes all send packets to 1, but
with short packets (we’re talking 20 bytes here) they would all fit into
the NIC’s receive buffer, and as long as they can all be sucked up in one
cycle there should not be a problem.

Either the packets are being transmitted at the same time or they
aren’t. If all 19 nodes really are transmitting to one node at the same
time then the actual signals on the wire are corrupted, how are buffers
going to help (other than to store corrupt data) ?

I believe 100VG-Anylan is deterministic.
^^^

was

You’re right, and a nice technology it was (rest in peace).

I would think that switches are either store-and-forward, in which case
they compensate for insufficient switching fabric with memory, or fully
meshed. Does anyone know better?

As I have stated above, if you have a complete switch fabric, that does
not eliminate contention.

Rennie

Oh, I really thank the many zealous advisers.
You bring me so many possibilities and impossibilities, although some of
them may be contradictory.

I’ll give a more detailed description of my system. But my English is so
shabby. :slight_smile:

1. The 20 IPCs can be gathered in one room.

2. We have only 1.3 milliseconds to transfer data from one node to the
others, including the data processing on every node.

Q: I want to know the general kernel operation timings, for example:
sem_post, fork, spawn, etc.; benchmark results on a CPU faster than a P500
would be better. I think the data processing may not be negligible, though
it is not so important.

3. The data transferred from one node to another is about 20 bytes.
Q: I want to know what size the header may take in one packet in common!

4. Within the control cycle the transfer from one node to another may
happen 20 times; some can be done in parallel. Some are not so demanding.
And some nodes are listening nodes, not needing a reply. Shall I use a
‘virtual proxy’? Does it save half of the time?

Q: We have a simple test: on a net connected by 100 Mbit 3c905 Ethernet
network cards, each node running QNX 4.24, the message transfer from one
node to the other may take 250 µs per time. You see, we have no precise
on-chip timer, and the QNX tick size resolution may range from 0.5 ms to
50 ms, so we use a statistical method: we send the message 2000 times and
compute an average time; maybe that’s not the very fact. But message
lengths ranging from 10 bytes to 1000 bytes don’t show obvious
differences. However, as <> reported in Appendix B, that is not
the truth.

<> Appendix B:
QNX 4.24 System Performance Numbers

Hardware Environment
Processor: Intel 133 MHz Pentium (Triton chipset)
RAM: 16 Megabytes
Disk Drive: 4 Gbyte Barracuda Wide SCSI
Disk Controller: Adaptec 2940 Wide SCSI (PCI-bus)
Network Cards: 10 Mbit ISA-bus NE2000 Ethernet; 100 Mbit PCI-bus Digital
21040 Ethernet

Network Throughput
10 Mbit Ethernet: 1.1 Mbytes/second
100 Mbit Ethernet: 7.5 Mbytes/second

Message-Passing Throughput
100-byte message: 1.0 Mbytes/second
1000-byte message: 6.0 Mbytes/second
4000-byte message: 8.5 Mbytes/second
Note: As the size of the message exceeds that of the processor’s cache,
throughput will drop off because of cache-miss overhead.

As it also says:
Since primitives copy data directly from process to process without
queuing, message delivery performance approaches the memory bandwidth of the
underlying hardware.
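Taking the measured ~250 µs per message at face value, the cycle budget can be checked directly (a sketch; it shows why some of the 20 transfers must run in parallel, exactly as the original post suspected):

```python
import math

# 20 transfers of ~250 us each must fit into a 1.3 ms control cycle.
MSG_US = 250
TRANSFERS = 20
CYCLE_US = 1300

sequential_us = TRANSFERS * MSG_US      # cost if done one after another
parallel_streams = math.ceil(sequential_us / CYCLE_US)
print(sequential_us)      # 5000 us: roughly 4x over budget
print(parallel_streams)   # at least 4 transfers must overlap in time
```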


5. Avoiding collisions, we may use a switch.

Q: I want to know the delay time a switch may bring! (We want to use a
3c16456 switch.)

Q: I want to know the delay time a switch may bring! (We want to use a
3c16456 switch.)

This has been answered. The answer is that you will have to ask the
manufacturer what arbitration mechanism they use internal to the switch.
There is a very good chance that it will meet your requirements, but
if this is a control system for a graphite core nuclear reactor, I would
check with the mfg first :slight_smile:

Rennie

“Rennie Allen” <RAllen@csical.com> wrote in message
news:D4907B331846D31198090050046F80C904B1DD@exchangecal.hq.csical.com

As I have stated above, if you have a complete switch fabric, that does
not eliminate contention.

I would think that switches do not directly connect port to port, or they
would be pretty much a bunch of wires in a box. Maybe I’m wrong here, but
I would expect that switches actually allow all ports to transmit at the
same time, by streaming all ports to internal buffers first. Whether those
transmissions are passed to their destinations immediately or not would
depend on what the destinations are. If 10 ports transmit to 10 other
ports, yes. If 19 ports transmit to 1, then the switch would have to store
(and possibly bundle) incoming packets and then spill them out as the
receiver is ready to handle the next portion. You still have contention,
of course, but in this case you have a bounded worst-case latency, because
the arbitration mechanism (queuing) is deterministic.

  • Igor
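Igor's bounded-worst-case claim can be sanity-checked with the same minimum-frame constants used earlier in the thread (an illustrative sketch, not a statement about any particular switch):

```python
# Worst case for store-and-forward queuing: 19 minimum-size frames arrive
# for one output port at once and drain in FIFO order, so the last frame
# waits 19 frame-times. Bounded and predictable, unlike CSMA/CD collisions.
WIRE_BYTES = 84                   # 64-byte min frame + preamble + gap
frame_us = WIRE_BYTES * 8 / 100   # ~6.7 us per frame at 100 Mbit/s

worst_case_us = 19 * frame_us
print(worst_case_us < 200)  # True: bounded well under the 1 ms cycle
```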