PCI Address of Memory?

Hi,

We are in the process of writing a driver for a CompactPCI card that we have
developed. We are currently working on getting the DMA engine available on
that card functional. It looks like we are able to execute a PCI-to-localbus
transfer on it: the data at the target address has changed after the
transfer. But we’re not sure where the data is coming from; it is certainly
not transferring the data that we want it to.

We’ve searched the documentation for some hints on what functions we need to
call and tried some variations without success. The programmer’s-guide style
of documentation is a little weak in this area of the OS.

So, I have a few questions:

What is the proper way to get the PCI address of a block of memory? We know
that we can allocate physical memory with mmap_device_memory(), but how do
we get the PCI address of it?

We’ve looked at the rsrcdbmgr_* functions and have determined that we can
allocate some “RSRCDBMGR_PCI_MEMORY”, but we don’t know exactly what we’ve
allocated when we’re done. Does this allocate a block of memory accessible
from the PCI bus, as the name seems to indicate? If so, then we assume that
it is returning to us the PCI address range that was allocated. How do we
then get a CPU address for that memory?

Is there anything special that we have to do to make it work on an x86
system? On a PowerPC system?

Any tips or pointers would be appreciated.

Thanks,

Wayne

Previously, Wayne Fisher wrote in qdn.public.qnxrtp.os:

Hi,
[snip]
Any tips or pointers would be appreciated.

If you have requested a block of memory using mmap_device_memory(), then you
can use the mem_offset() function to obtain the physical address of this
block of memory. If you are programming this address into the PCI device,
then you will have to use the CpuMemTranslation field in the structure
returned by the pci_attach_device() function call to translate this address
to a PCI address (see pci.h for manifests).


Thanks, Hugh.

But it is actually the CpuBmstrTranslation field that we had to apply, not
the CpuMemTranslation. Just in case someone else references this thread in
the future.

Wayne

“Hugh Brown” <hsbrown@qnx.com> wrote in message
news:Voyager.010928085649.28005A@node90.ott.qnx.com

Previously, Wayne Fisher wrote in qdn.public.qnxrtp.os:
Hi,
[snip]
Any tips or pointers would be appreciated.

If you have requested a block of memory using mmap_device_memory(), then
you
can use the mem_offset() function to obtain the physical address of this
block of memory. If you are programming this address into the PCI device,
then you will have to use the CpuMemTranslation field in the returned
structure from the pci_attach_device() function call to translate this
address to a PCI address. (see pci.h for manifests).

Thanks,

Wayne

I am getting curious. Having ported a couple of Linux drivers to QNX, I did
not notice any use of that translation. The physical address obtained from
mem_offset() is simply split into low and high parts and fed into the DMA
engine, and that seems to work. Is it architecture-dependent? Am I missing
something?

  • igor

Wayne Fisher wrote:

But it is actually the CpuBmstrTranslation that we had to apply, not the
CpuMemTranslation.
[snip]

Previously, Igor Kovalenko wrote in qdn.public.qnxrtp.os:

[snip]

On some non-x86 systems, the PCI bridge chips translate addresses from
CPU addresses to PCI addresses, so that is why we have the translation
fields in the pci_attach structure.


“Hugh Brown” <hsbrown@qnx.com> wrote in message
news:Voyager.011001080912.27495A@node90.ott.qnx.com

Previously, Igor Kovalenko wrote in qdn.public.qnxrtp.os:
[snip]


On some non-x86 systems, the PCI bridge chips translate addresses from
CPU addresses to PCI addresses, so that is why we have the translation
fields in the pci_attach structure.

I know that. PCI is little-endian, so on big-endian systems like the PPC,
PCI chipsets such as the Raven or Hawk do the translation. What I don’t
understand is why we have to do the translation in software as well.

  • igor

Previously, Igor Kovalenko wrote in qdn.public.qnxrtp.os:

[snip]

I know that. PCI is little-endian, so on big-endian systems like the PPC,
PCI chipsets such as the Raven or Hawk do the translation. What I don’t
understand is why we have to do the translation in software as well.

The translation has nothing to do with big- or little-endian. It is for
translating CPU to PCI addresses and vice versa.

  • igor

“Hugh Brown” <hsbrown@qnx.com> wrote in message
news:Voyager.011001080912.27495A@node90.ott.qnx.com

Previously, Igor Kovalenko wrote in qdn.public.qnxrtp.os:
[snip]

On some non-x86 systems, the PCI bridge chips translate addresses from
CPU addresses to PCI addresses, so that is why we have the translation
fields in the pci_attach structure.

On a related, architecture-dependent note, how do the PCI servers differ
between an x86 host and, say, pci-raven with regard to assigning
interrupts?

We have now managed to get data to move via DMA between the host and the
card and vice versa. So now we’re looking at hooking up to the DMA
transfer-complete interrupt and are having some problems. Everything seems
fine when we plug the board into an x86 PC. The card is assigned an
interrupt (int 5, in this case). We see that interrupt number in the BIOS
startup, we see it in “pci -v”, and we see it in the pci_dev_info
structure. We attach to it and we receive the interrupt.

Now we move on to the PowerPC on a Motorola MCP750 single-board computer.
Using the same card, both “pci -v” and the pci_dev_info structure report
that it’s using int 0. Interrupt “0” immediately sets off alarm bells, but
we attach to it anyway. Unfortunately, we don’t receive the interrupt in
our software using the same C source code.

We’ve looked through the online docs and have found functions like
pci_map_irq() and pci_irq_routing_options(), but we don’t know if we should
be using them. We tried a few instances of pci_map_irq() without success.

Does any documentation exist on how all this PCI stuff is supposed to work?
We couldn’t seem to find anything in the docs.

How do we get a valid interrupt assigned to this card on the PowerPC SBC?

Thanks,

Wayne

I presume that you are running pci-raven? How many PCI slots do you have
on the SBC? Does interrupt 0 get assigned in all the slots? The output
from ‘pci -v’ would also be helpful.

Previously, Wayne Fisher wrote in qdn.public.qnxrtp.os:

[snip]

We are running pci-raven with no command-line options. We have a six-slot
backplane: five cards plus the SBC.

We tested every slot, and it now seems that sometimes interrupt 0 is
assigned and sometimes we get “no connection”. The same slot may give
interrupt 0 after booting one time and “no connection” after the next boot.

The output from “pci -v” is attached at the end of this posting. The last
device listed is our board.

Thanks,

Wayne

“Hugh Brown” <hsbrown@qnx.com> wrote in message
news:Voyager.011003081027.8845C@node90.ott.qnx.com

[snip]

PCI version = 2.10

Class = Bridge (Host/PCI)
Vendor ID = 1057h, Motorola
Device ID = 4801h, Raven PowerPC Chipset
PCI index = 0h
Class Codes = 060000h
Revision ID = 5h
Bus number = 0
Device number = 0
Function num = 0
Status Reg = 2280h
Command Reg = 6h
Header type = 0h Single-function
BIST = 0h Build-in-self-test not supported
Latency Timer = 0h
Cache Line Size= 0h
CPU Bus Master Translation = 80000000h
Max Lat = 0ns
Min Gnt = 0ns
PCI Int Pin = NC
Interrupt line = 0

Class = Bridge (PCI/ISA)
Vendor ID = 1106h, VIA Technologies Inc
Device ID = 586h, VT82C586VP PCI-to-ISA Bridge
PCI index = 0h
Class Codes = 060100h
Revision ID = 41h
Bus number = 0
Device number = 1
Function num = 0
Status Reg = 200h
Command Reg = 87h
Header type = 0h Multi-function
BIST = 0h Build-in-self-test not supported
Latency Timer = 0h
Cache Line Size= 0h
CPU Bus Master Translation = 80000000h
Max Lat = 0ns
Min Gnt = 0ns
PCI Int Pin = NC
Interrupt line = 0

Class = Mass Storage (IDE)
Vendor ID = 1106h, VIA Technologies Inc
Device ID = 571h, VT82C586/686 PCI IDE Controller
PCI index = 0h
Class Codes = 01018fh
Revision ID = 6h
Bus number = 0
Device number = 1
Function num = 1
Status Reg = 280h
Command Reg = 85h
Header type = 0h Single-function
BIST = 0h Build-in-self-test not supported
Latency Timer = 20h
Cache Line Size= 0h
CPU Bus Master Translation = 80000000h
PCI IO Address = fff0h length 8 enabled
CPU IO Address = 8000fff0h
PCI IO Address = ffech length 4 enabled
CPU IO Address = 8000ffech
PCI IO Address = ffe0h length 8 enabled
CPU IO Address = 8000ffe0h
PCI IO Address = ffdch length 4 enabled
CPU IO Address = 8000ffdch
PCI IO Address = ffc0h length 16 enabled
CPU IO Address = 8000ffc0h
Max Lat = 0ns
Min Gnt = 0ns
PCI Int Pin = INT A
Interrupt line = 14

Class = Serial Bus (Universal Serial Bus)
Vendor ID = 1106h, VIA Technologies Inc
Device ID = 3038h, VT83C572 PCI USB Controller
PCI index = 0h
Class Codes = 0c0300h
Revision ID = 2h
Bus number = 0
Device number = 1
Function num = 2
Status Reg = 200h
Command Reg = 5h
Header type = 0h Single-function
BIST = 0h Build-in-self-test not supported
Latency Timer = 16h
Cache Line Size= 8h un-cacheable
CPU Bus Master Translation = 80000000h
PCI IO Address = ffa0h length 32 enabled
CPU IO Address = 8000ffa0h
Subsystem Vendor ID = 925h
Subsystem ID = 1234h
Max Lat = 0ns
Min Gnt = 0ns
PCI Int Pin = INT D
Interrupt line = 11

Class = Pre-2.0 (Non-VGA)
Vendor ID = 1106h, VIA Technologies Inc
Device ID = 3040h, VT83C572 Power Management Controller
PCI index = 0h
Class Codes = 000000h
Revision ID = 10h
Bus number = 0
Device number = 1
Function num = 3
Status Reg = 280h
Command Reg = 0h
Header type = 0h Single-function
BIST = 0h Build-in-self-test not supported
Latency Timer = 0h
Cache Line Size= 0h
CPU Bus Master Translation = 80000000h
Max Lat = 0ns
Min Gnt = 0ns
PCI Int Pin = NC
Interrupt line = 0

Class = Network (Ethernet)
Vendor ID = 1011h, Digital Equipment Corporation
Device ID = 9h, DC21140 Fast Ethernet Ctrlr
PCI index = 0h
Class Codes = 020000h
Revision ID = 22h
Bus number = 0
Device number = 4
Function num = 0
Status Reg = 280h
Command Reg = 7h
Header type = 0h Single-function
BIST = 0h Build-in-self-test not supported
Latency Timer = 20h
Cache Line Size= 8h un-cacheable
CPU Bus Master Translation = 80000000h
PCI IO Address = fffff00h length 128 enabled
CPU IO Address = 8fffff00h
PCI Mem Address = 3bffff00h 32bit length 128 enabled
CPU Mem Address = fbffff00h
PCI Expansion ROM = 3ffc0000h length 262144 disabled
CPU Expansion ROM = fffc0000h
Max Lat = 40ns
Min Gnt = 20ns
PCI Int Pin = INT A
Interrupt line = 2

Class = Bridge (PCI/PCI)
Vendor ID = 1011h, Digital Equipment Corporation
Device ID = 26h, 21154 PCI-PCI Bridge
PCI index = 0h
Class Codes = 060400h
Revision ID = 2h
Bus number = 0
Device number = 10
Function num = 0
Status Reg = 290h
Command Reg = 7h
Header type = 1h Single-function
BIST = 0h Build-in-self-test not supported
Latency Timer = 80h
Cache Line Size= 8h un-cacheable
CPU Bus Master Translation = 80000000h
Primary Bus Number = 0h
Secondary Bus Number = 1h
Subordinate Bus Number = 1h
Secondary Latency Timer = 80h
I/O Base = e1h
I/O Limit = e1h
Secondary Status = 2280h
Memory Base = 3000h
Memory Limit = 3be0h
Prefetchable Memory Base = fff1h
Prefetchable Memory Limit= 1h
Prefetchable Base Upper 32 Bits = ffffffffh
Prefetchable Limit Upper 32 Bits = 0h
I/O Base Upper 16 Bits = ffffh
I/O Limit Upper 16 Bits = ffffh
Bridge Control = 0ns
PCI Int Pin = NC
Interrupt line = 0

Class = Data Acquisition (Unknown)
Vendor ID = 10b5h, PLX Technology
Device ID = 9656h, PCI 9656 64-bit 66 MHz PCI Master I/O Accelerator
PCI index = 0h
Class Codes = 118000h
Revision ID = abh
Bus number = 1
Device number = 10
Function num = 0
Status Reg = 2b0h
Command Reg = 7h
Header type = 0h Single-function
BIST = 0h Build-in-self-test not supported
Latency Timer = 20h
Cache Line Size= 8h un-cacheable
CPU Bus Master Translation = 80000000h
PCI Mem Address = 3beffe00h 32bit length 512 enabled
CPU Mem Address = fbeffe00h
PCI IO Address = ef00h length 256 enabled
CPU IO Address = 8000ef00h
PCI Mem Address = 3beff000h 32bit length 2048 enabled
CPU Mem Address = fbeff000h
PCI Mem Address = 30000000h 32bit length 134217728 enabled
CPU Mem Address = f0000000h
Subsystem Vendor ID = 10b5h
Subsystem ID = 9656h
Max Lat = 0ns
Min Gnt = 0ns
PCI Int Pin = INT A
Interrupt line = no connection

The pci-raven that we supply is for the 603 board and not for the 750 board
that you are running. The pci-raven server has hard-coded values for the
603 board, which are obviously not the same for your board. Have you
modified the pci-raven driver for your board?

Previously, Wayne Fisher wrote in qdn.public.qnxrtp.os:

[snip]

“Hugh Brown” <hsbrown@qnx.com> wrote in message
news:Voyager.011003142742.29555B@node90.ott.qnx.com

[snip]

Wow, this is the first time I have ever heard about this. The 750 is listed
as supported, the BSP comes with boot templates that use the standard
pci-raven, and there is not a word anywhere that you’re even supposed to
modify it. Nor is there source for it, as far as I remember.

What else is supposed to be modified? How about pci-hawk and the 765?
Anyway, having hardcoded values is a little bit silly, I think.

  • igor

Previously, Wayne Fisher wrote in qdn.public.qnxrtp.os:
[snip]

“Igor Kovalenko” <kovalenko@home.com> wrote in message
news:9pfnor$krf$1@inn.qnx.com

“Hugh Brown” <hsbrown@qnx.com> wrote in message
news:Voyager.011003142742.29555B@node90.ott.qnx.com
The pci-raven that we supply is for the 603 board and not for the 750 board
that you are running.
[snip]

We have to modify pci-raven? I just checked the docs, and for the pci-raven
utility it says “PCI support for the MTX60x (ATX) and MCP750 (compact PCI)
Motorola boards”. The board we are using is an MCP750, so we expected that
it was directly supported by QNX6 out of the box. I haven’t found anything
in the docs that says we have to modify it. Where might we find source to
modify?

What else is supposed to be modified? How about pci-hawk and the 765?
Anyway, having hardcoded values is a little bit silly, I think.

Agreed, hardcoded values are bad.


Wayne

PS: If anyone from the doc team at QSSL is reading this, I think that the
docs related to this could be expanded. Same for the PCI support.

I’m looking into this and will get back to you.

Previously, Wayne Fisher wrote in qdn.public.qnxrtp.os:

[snip]

Yes, pci-raven is the correct server to be running on this board.
However, the problem that you are having is backplane-dependent,
as the routing of the interrupts is different on each manufacturer’s
backplane. You will have to assign interrupt lines in your
application according to the routing of the interrupts on the
backplane that you have.

Previously, Hugh Brown wrote in qdn.public.qnxrtp.os:

[snip]

“Hugh Brown” <hsbrown@qnx.com> wrote in message
news:Voyager.011004112731.29499A@node90.ott.qnx.com

Yes, pci-raven is the correct server to be running on this board.

Ok.

However, the problem that you are having is back plane dependent,
as the routing of the interrupts is different on each manufacturer’s
back plane.

Sounds fair.

You will have to assign interrupt lines in your
application according to the routing of the interrupts on the back
plane that you have.

Umm, how? How do we assign the interrupt lines? We tried doing an
InterruptAttach() on various interrupts with no luck. We tried using
pci_map_irq() and some different interrupts, again with no luck.

Is pci_map_irq() the right function to use?

I hate having to learn how this works through you, Hugh. Is there any
documentation on the PCI functions that you can point me to, or give me?
Some sample code? Anything more than the library reference?

Thanks,

Wayne

Previously, Hugh Brown wrote in qdn.public.qnxrtp.os:
[snip]

Previously, Wayne Fisher wrote in qdn.public.qnxrtp.os:
[snip]

Is pci_map_irq() the right function to use?
It seems as though you have to go into the ROM monitor on this board
and set up the external interrupt routing. We have tested the MCP750 on
two Motorola chassis, and other adapters work OK. Once you have done this
setup, it should just work with the interrupt that has been assigned
to the specific slot.
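[Editor's note: one way to sanity-check the routing after the ROM monitor change is to ask the PCI server which IRQ it reports for the card's slot. A minimal sketch, assuming placeholder vendor/device IDs and the QNX Neutrino PCI headers:]

```c
#include <stdio.h>
#include <string.h>
#include <hw/pci.h>  /* pci_attach(), pci_attach_device(), pci_dev_info */

#define MY_VENDOR_ID 0x1234  /* placeholder */
#define MY_DEVICE_ID 0x5678  /* placeholder */

int main(void)
{
    struct pci_dev_info inf;

    pci_attach(0);
    memset(&inf, 0, sizeof inf);
    inf.VendorId = MY_VENDOR_ID;
    inf.DeviceId = MY_DEVICE_ID;

    /* PCI_SEARCH_VENDEV: look the card up by vendor/device ID. */
    if (pci_attach_device(NULL, PCI_SEARCH_VENDEV, 0, &inf) == NULL) {
        perror("pci_attach_device");
        return 1;
    }
    /* If the backplane routing was set up correctly in the ROM monitor,
       this should print the IRQ for the slot the card is in. */
    printf("bus %d devfunc %d -> IRQ %d\n",
           inf.BusNumber, inf.DevFunc, inf.Irq);
    return 0;
}
```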


[snip]

Thanks for all of your help, Hugh. We have succeeded in getting everything
to work.

Thanks,

Wayne

“Hugh Brown” <hsbrown@qnx.com> wrote in message
news:Voyager.011005103155.5976B@node90.ott.qnx.com
[snip]

It seems as though you have to go into the ROM monitor on this board
and set up the external interrupt routing. We have tested the MCP750 on
two Motorola chassis, and other adapters work OK. Once you have done this
setup, it should just work with the interrupt that has been assigned
to the specific slot.
[snip]