cPCI and hot-swap

Hi…

I wonder if anyone has explored or implemented hot swap of VME or cPCI
(or any other form factor) boards in a chassis running QNX?

What are the experiences with doing this?

thanks…

regards…

Miguel

“Miguel Simon” <simon@ou.edu> wrote in message
news:3E6D3C63.5040005@ou.edu

We’ve looked into cPCI hot swap. There are several issues that you’ll have
to resolve.

  1. There is no unified standard for host CPU board hot-swap. Each chassis
    vendor pushes its own solution, and they all differ in details, in
    particular how chassis management is implemented. The most notable
    implementations, I think, are Motorola’s and Intel’s (technology Intel
    bought from Ziatech and apparently has now sold to Continuous Computing
    Inc.).

  2. Assuming you have decided on a hardware implementation, you’ll need a
    driver for the ‘management board’, whatever that is. It would be
    responsible for delivering hot-swap events to applications and providing
    capabilities like power-up/down of individual slots, environmental
    control, etc. If you want to support host board hot-swap (active/standby),
    then you also need some kind of high-level application framework that
    provides high-availability features like redundant routing, checkpoints,
    domain failover, etc.

  3. There are two major types of cPCI chassis. One type has an actual cPCI
    backplane; the other (becoming very popular now) uses only the cPCI
    mechanical form factor, and the only way for the host board to talk to
    devices in other slots is through a backplane Ethernet mesh (aka PICMG
    2.16). The latter type usually has ‘switch slots’ for Ethernet switches.
    Both types may or may not have the H.110 telephony bus (you can view it
    as a local T1/E1 interconnect bus).

  4. If you’re looking into the first type (i.e., the host board needs to see
    peripheral boards on the PCI bus), then you have trouble with the QNX PCI
    implementation as well. There’s no direct support for initialization of
    PCI bridge chips, and there’s no official/documented framework to add such
    support to the QNX pci server. That means cPCI boards that have a local
    PCI bus behind a bridge (most of them do) must be initialized by the BIOS
    (or its equivalent on non-x86 platforms). If you hot-plug such a board, it
    won’t appear on the PCI bus even if you call pci_rescan_bus().
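To make point 2 a bit more concrete, here is a toy sketch of a management-board driver fanning hot-swap events out to registered applications. Python is used purely for illustration and every name in it is hypothetical; a real QNX driver would be a C resource manager delivering pulses to its clients, and the events would come from ENUM# or the chassis-management hardware, not from direct calls.

```python
# Hypothetical sketch of the 'management board' driver's role: watch slot
# status and deliver hot-swap events to interested applications. A plain
# callback registry stands in for the pulse-delivery machinery a real
# QNX resource manager would use.

INSERTED, EXTRACTED = "inserted", "extracted"

class HotSwapManager:
    def __init__(self, num_slots):
        self.powered = [False] * num_slots  # per-slot power state
        self.handlers = []                  # registered applications

    def register(self, handler):
        """handler(slot, event) is called for every hot-swap event."""
        self.handlers.append(handler)

    def slot_event(self, slot, event):
        # In a real driver this would be triggered by ENUM# or the
        # chassis-management hardware, not called directly.
        if event == INSERTED:
            self.powered[slot] = True   # power up the slot
        elif event == EXTRACTED:
            self.powered[slot] = False  # power down the slot
        for h in self.handlers:
            h(slot, event)              # notify every registered client

# Usage: an application logs events; a board is inserted into slot 3.
mgr = HotSwapManager(num_slots=8)
log = []
mgr.register(lambda slot, ev: log.append((slot, ev)))
mgr.slot_event(3, INSERTED)
```

The point of the shape, however it is implemented, is that slot power control and event fan-out live in one place, so the high-availability framework from point 2 can sit on top as just another registered client.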

If you have deep enough pockets, you might be able to persuade QNX to do
something about the last issue; otherwise you will have to write a better
pci server to replace theirs. AFAIK, they have some undocumented hooks that
allow loading DLLs into the pci server. There was also a plan to port one
popular high-availability framework; I don’t know the current state of
affairs.
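For what it’s worth, the bridge support missing from the pci server is essentially the bus-numbering pass a BIOS performs at boot: a depth-first walk that gives every PCI-to-PCI bridge its primary, secondary, and subordinate bus numbers. A toy model of that pass follows; Python is used for illustration only, the data layout is invented, and a real implementation would program these values into each bridge’s config-space registers.

```python
# Simplified, hypothetical model of BIOS-style PCI bridge enumeration:
# depth-first assignment of primary/secondary/subordinate bus numbers.
# This is exactly the work that (per the discussion above) is not redone
# for a board hot-plugged behind a bridge.

def enumerate_bridges(bridges, parent=None, bus=0, next_bus=1):
    """bridges: list of dicts with a 'parent' key (index of the parent
    bridge, or None if attached to the root bus). Fills in 'primary',
    'secondary', and 'subordinate'; returns the next free bus number."""
    for i, br in enumerate(bridges):
        if br["parent"] == parent:
            br["primary"] = bus            # upstream side sits on this bus
            br["secondary"] = next_bus     # bus immediately behind the bridge
            next_bus += 1
            # Recurse to number everything nested behind this bridge.
            next_bus = enumerate_bridges(bridges, i, br["secondary"], next_bus)
            br["subordinate"] = next_bus - 1  # highest bus behind the bridge
    return next_bus

# Example: bridge 0 on the root bus, bridge 1 nested behind bridge 0.
topology = [{"parent": None}, {"parent": 0}]
enumerate_bridges(topology)
```

After the walk, bridge 0 claims buses 1 through 2 (its secondary and subordinate numbers), so a hot-plugged board behind it can only appear if someone redoes this numbering at run time — which is the hole in the pci server that a rescan alone cannot fill.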

Short of that, you might be better off using one of the ‘out of the box’
solutions instead. Motorola has an HA-Linux distribution (Red Hat-based)
that allegedly provides full support for high-availability features on
their chassis. CCInc has a number of HA packages for Solaris and Linux
supporting their hardware, including transparent filesystem replication.

Good luck,
– igor

“Miguel Simon” <simon@ou.edu> wrote in message
news:3E6D96F5.3060306@ou.edu

Hi Igor…

Thanks for your help here. We do have a cPCI chassis with backplane PCI,
and given your comments below, it seems that I am in trouble from the
get-go. (Our cPCI chassis is from APW; any comments on this particular
type of cPCI implementation?)

We looked at APW. Their chassis does not support redundant host slots in the
PCI-backplane version, unfortunately. I did not find anyone except Sun,
Motorola and Intel who support that anyway. It is tricky in the PCI-backplane
flavor because it requires the host board to be able to take over another
PCI domain should that domain’s host board fail. That means there has to be
a way to bridge domains, and that is not governed by any standard. It also
requires a custom BIOS/ROM, since there must be a way to set up initial
ownership of the domains. So only players big enough to play their own tune
can be in this band.

Sun has some nice solutions for this, but they work only with Solaris/SPARC.
Motorola’s and Intel’s designs have different trade-offs between capacity
and size/cost. Motorola’s chassis has 2 PCI domains spanned by a single
H.110 bus, whereas Intel’s has 4 PCI domains (which can be bridged into 2
pairs) spanned by 2 separate H.110 buses. Basically, Motorola’s design is
geared toward an active/standby pair of host boards working with a single
set of communication boards (connected by a common H.110 bus). Intel’s is
geared toward active/standby plus active/active configurations at the same
time, which is why it has 4 PCI domains and 2 H.110 buses. This is roughly
equivalent to putting 2 Motorola boxes together, only with half the capacity
in each. You save on iron, power, and space, but capacity is limited.

The Intel/CCInc chassis supports only x86 boards; Motorola also supports PPC.

I am studying your comments, and I will have questions later. I’ll be
working on this on the back burner for the foreseeable future. Thanks again.

You’re welcome.

– igor
