qnx_segment_overlay_flags ENOMEM error

We have been using the qnx_segment_overlay_flags function call for a while
in our drivers, but on a recent project I am seeing it fail with an errno of
ENOMEM.

I have verified that dev_drvr_register is called after this, and that the code is compiled with the -T 1 option.

I have also verified with show_pci that the address I am trying to use is the physical memory address the product is actually using, and that the length of the memory region is correct.

The strange thing is that, as a test, we hard coded the same code to go after a memory aperture of another of our products in the machine at fea00000h, and it succeeds, while the correct value of fedff800h fails.

We also tried this fedff800h address in another driver of ours that has already been released, and it gave the same ENOMEM failure.

We also tested and found that it works if the address fedf0000h is used with an offset of f800 when making the pointer, but that approach has us claiming some memory that is not really ours.
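
To make that concrete, the test amounts to something like this (a rough sketch, not our actual driver code; map_base stands for whatever pointer the mapping call gives back for the fedf0000h region):

/* Sketch of the work-around described above: map the lower, aligned
 * address fedf0000h and add the difference back to reach the real
 * registers at fedff800h.  map_base is a stand-in for whatever pointer
 * the mapping/overlay call returned for fedf0000h.
 */
static volatile unsigned char *reg_pointer(void *map_base)
{
    unsigned long want_paddr = 0xFEDFF800UL;            /* address reported by show_pci  */
    unsigned long base_paddr = 0xFEDF0000UL;            /* fudged-down base that works   */
    unsigned long offset     = want_paddr - base_paddr; /* = 0xF800                      */

    return (volatile unsigned char *)map_base + offset;
}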

The qnx_segment_overlay call has exactly the same issue.

The address we are trying to use was returned by the CA_PCI* functions and agrees with show_pci.

Does anyone have any idea what could be going wrong here?

Thanks

Allan

fedff800h does not work

I have just noticed in the docs that the memory address has to be on a 4k boundary.

I tried porting my code to use shm_open and mmap, but I hit the same problem with addresses that are not on 4k boundaries.

This poses a pretty big issue for us, as the hardware we are talking to has a fixed PCI bridge chip on it that requests as little as 1k of memory space and can be aligned on 1k boundaries.

Fudging the memory address a bit low and adding an offset to it could cause issues with other devices that are mapped in below our region, and with the memory range not being sharable.

Any thoughts?

Allan


Allan Smith <aes@connecttech.com> wrote:

The address we are trying to use was returned using CA_PCI* functions and
agrees with show_pci.

Does anyone have any idea what could be going wrong here?

I don’t know what might be going wrong – but are you coding a
16-bit or 32-bit application? If 32-bit, have you tried using
shm_open("/dev/shmem/physical",…) and mmap() to get access to
the memory, instead of the qnx_segment_* functions?
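
Untested, but roughly what I have in mind is below. The object name is the one I quoted above; if it doesn't work on your system, check the shm_open() docs for the exact name of the physical-memory object. The length, mode and PROT_NOCACHE are assumptions, so check them against the library reference.

/* Untested sketch for a 32-bit process: open the physical-memory
 * object and mmap() the device aperture.  0xFEA00000 is the aperture
 * you said works; the 0x1000 length, the 0777 mode and PROT_NOCACHE
 * are assumptions.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    unsigned long paddr = 0xFEA00000UL;   /* 4k-aligned physical address */
    size_t len = 0x1000;                  /* assumed size of the window  */
    volatile unsigned char *regs;
    int fd;

    fd = shm_open("/dev/shmem/physical", O_RDWR, 0777);
    if (fd == -1) {
        perror("shm_open");
        return 1;
    }

    regs = mmap(0, len, PROT_READ | PROT_WRITE | PROT_NOCACHE,
                MAP_SHARED, fd, paddr);
    if (regs == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* ... access the hardware through regs[] ... */

    munmap((void *)regs, len);
    close(fd);
    return 0;
}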

-David

QNX Training Services
http://www.qnx.com/support/training/
Please followup in this newsgroup if you have further questions.

Allan Smith <aes@connecttech.com> wrote:

I have just noticed in the docs that the memory address has to be on a 4k boundary.

Yes, it does. QNX works with 4k memory/address “chunks” at the OS level; it can’t deal with anything smaller.

I tried porting my code to use shm_open and mmap, but I hit the same problem with addresses that are not on 4k boundaries.

Yup, same thing.

This poses a pretty big issue for us, as the hardware we are talking to has a fixed PCI bridge chip on it that requests as little as 1k of memory space and can be aligned on 1k boundaries.

Fudging the memory address a bit low and adding an offset to it could cause issues with other devices that are mapped in below our region, and with the memory range not being sharable.

You probably have to fudge low, and add the offset.

Just because one process has mapped in a physical range doesn’t mean another process can’t also map in the same range. The main risk, though, is that if you get the offset wrong, you’ll hit the wrong memory.
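
Something like this, roughly (untested; same caveats as my earlier post about the object name, the mode and PROT_NOCACHE):

/* Untested sketch of the fudge-low approach: round the physical
 * address down to a 4k page, map that whole page, then add the
 * in-page offset back to point at the real registers at 0xFEDFF800.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

#define PAGE_MASK 0xFFFUL                      /* 4k pages */

int main(void)
{
    unsigned long paddr = 0xFEDFF800UL;        /* real register address         */
    unsigned long base  = paddr & ~PAGE_MASK;  /* 0xFEDFF000, 4k aligned        */
    unsigned long off   = paddr & PAGE_MASK;   /* 0x800 into that page          */
    size_t len = 0x1000;                       /* one page covers the 1k window */
    volatile unsigned char *page, *regs;
    int fd;

    fd = shm_open("/dev/shmem/physical", O_RDWR, 0777);
    if (fd == -1) {
        perror("shm_open");
        return 1;
    }

    /* MAP_SHARED: another process can map the same page as well. */
    page = mmap(0, len, PROT_READ | PROT_WRITE | PROT_NOCACHE,
                MAP_SHARED, fd, base);
    if (page == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    regs = page + off;   /* pointer to the device's registers */

    /* ... access the hardware through regs[] ... */

    munmap((void *)page, len);
    close(fd);
    return 0;
}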

-David

QNX Training Services
http://www.qnx.com/support/training/
Please followup in this newsgroup if you have further questions.