How to write io_mmap for a PCI resource manager?

I’m writing a Neutrino resource manager for a PCI card my company
builds. I have a large part of the resource manager up and running
correctly. Now, I need to map part of the card into a client
application’s address space.

The card is a multimedia card with a number of onboard frame buffers.
I’d like to be able to memory map those buffers directly into the client
application so the app can render directly into the frame buffer – the
same mechanism our apps use on other operating systems.

For example, our linux driver implements the mmap driver vector. The
driver performs the appropriate linux VM magic to map the pages for the
hardware into the client’s address space.

Is implementing io_mmap the right way to solve this problem for our
Neutrino driver? If so, where can I find information on how to implement
the io_mmap vector properly? All the docs I’ve found suggest delegating
completion to iofunc_mmap_default, but I don’t see how to specify
that the pages so created should be backed by my hardware.

Any pointers or suggestions accepted.

Thanks in advance,
Eric

Hi Eric

Why don’t you have the resource manager map the buffers into
shared memory? You could then have the clients do a devctl to
get the info they need from the resmgr to use the shared
memory.

Chris Foran
QNX Technical Support
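
[A minimal sketch of the resmgr side of what Chris describes, assuming
the driver already knows the frame buffer’s physical address and size
from the card’s PCI configuration. FB_SHM_NAME, fb_paddr and fb_size
are made-up values, and shm_ctl() with SHMCTL_PHYS is offered here as
one likely way on Neutrino to back a named shared-memory object with
the card’s memory rather than ordinary RAM:]

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Hypothetical values -- in a real resmgr these come from the card's
     * PCI configuration (BAR address and size). */
    #define FB_SHM_NAME "/mycard-fb0"
    static uint64_t fb_paddr = 0xE0000000ULL;  /* physical address of a frame buffer */
    static uint64_t fb_size  = 0x00400000ULL;  /* 4 MB */

    /* Publish the card's frame buffer as a named shared-memory object.
     * Non-root clients can then shm_open()/mmap() it by name. */
    static int publish_frame_buffer(void)
    {
        int fd = shm_open(FB_SHM_NAME, O_RDWR | O_CREAT, 0666);
        if (fd == -1) {
            perror("shm_open");
            return -1;
        }

        /* Back the object with the card's physical memory instead of
         * ordinary anonymous RAM. */
        if (shm_ctl(fd, SHMCTL_PHYS, fb_paddr, fb_size) == -1) {
            perror("shm_ctl");
            close(fd);
            return -1;
        }

        close(fd);  /* the object itself persists until shm_unlink() */
        return 0;
    }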



Have you checked out mmap_device_memory ?
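
[For reference, a minimal sketch of mmap_device_memory() in the driver
itself, assuming fb_paddr and fb_size have already been read from the
card’s PCI BAR; the values below are made up:]

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* Made-up values -- normally read from the card's PCI configuration. */
    static uint64_t fb_paddr = 0xE0000000ULL;
    static size_t   fb_size  = 0x00400000;

    static volatile uint8_t *map_card_into_driver(void)
    {
        /* Map the card's frame buffer into this (privileged) process.
         * PROT_NOCACHE keeps the CPU cache away from device memory. */
        volatile uint8_t *fb = mmap_device_memory(NULL, fb_size,
                                   PROT_READ | PROT_WRITE | PROT_NOCACHE,
                                   0, fb_paddr);
        if (fb == MAP_FAILED) {
            perror("mmap_device_memory");
            return NULL;
        }
        return fb;
    }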



In article <3A1EF9F5.B802CE48@opal-rt.com>, Francois Desruisseaux
<Francois.Desruisseaux@opal-rt.com> wrote:

Have you checked out mmap_device_memory ?

Yes, but that doesn’t solve my problem. mmap_device_memory is useful for
mapping the hardware into the driver’s address space (or any other app
with root privileges).

Having done that, I need to find a mechanism to map some of that card’s
memory (e.g. a frame buffer) into a client (non-root) application. So
again, is implementing io_mmap the right way to solve this problem for
our Neutrino driver? If so, where can I find information on how to
implement the io_mmap vector properly? All the docs I’ve found suggest
delegating completion to iofunc_mmap_default, but I don’t see how
to specify that the pages so created should be backed by my hardware.

Thanks in advance,
Eric

In article <Voyager.001123152744.438294B@cforan>, cforan@qnx.com wrote:

Why don’t you have the resource manager map the buffers into
shared memory? You could then have the clients do a devctl to
get the info they need from the resmgr to use the shared
memory.

That sounds like exactly what I need. How do I, as the driver, ask the
resource manager to map the buffers into shared memory? I’ve tried a
number of different tricks, none of which work (empirical data).

Thanks in advance,
Eric
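
[And for completeness, a sketch of the client half of Chris’s
suggestion, to go with the resmgr-side sketch above.
DCMD_MYCARD_GET_FBINFO, struct fb_info and the device path are
hypothetical names for illustration; the real command, payload and
shared-memory name are whatever your resmgr defines:]

    #include <devctl.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Hypothetical devctl command and payload -- define these to match
     * whatever the resource manager actually implements. */
    struct fb_info {
        char     shm_name[64];  /* name the resmgr passed to shm_open() */
        uint64_t size;          /* size of the frame buffer */
    };
    #define DCMD_MYCARD_GET_FBINFO __DIOF(_DCMD_MISC, 1, struct fb_info)

    void *map_frame_buffer(const char *dev_path, uint64_t *size_out)
    {
        struct fb_info info;
        int dev, shm;
        void *fb;

        /* Ask the resource manager where the frame buffer lives. */
        dev = open(dev_path, O_RDWR);
        if (dev == -1)
            return NULL;
        if (devctl(dev, DCMD_MYCARD_GET_FBINFO, &info, sizeof info, NULL) != EOK) {
            close(dev);
            return NULL;
        }
        close(dev);

        /* Map the shared-memory object the resmgr created over the card's
         * memory; no root privileges are needed for this step. */
        shm = shm_open(info.shm_name, O_RDWR, 0);
        if (shm == -1)
            return NULL;
        fb = mmap(NULL, info.size, PROT_READ | PROT_WRITE, MAP_SHARED, shm, 0);
        close(shm);
        if (fb == MAP_FAILED)
            return NULL;

        *size_out = info.size;
        return fb;  /* the client can now render directly into the frame buffer */
    }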