Tom Labno <tlabno@birinc.com> wrote:
Thanks for the information.
This has helped me a lot.
I have other issues, though…
- If I use mmap() to allocate contiguous memory, how do I free it?
I tried munmap() but it doesn’t appear to work. This is not critical since
I was able to re-create the logical-to-physical mappings for buffers
allocated using malloc(), memalign() or mmap(), and I can use a
chain DMA.
munmap() should free the memory allocated by mmap().
(If you set up a named shared-memory area, the memory is managed
by reference count: each fd and each mapping is a reference, as is
the name itself. Only when all mappings are unmapped, all fds are
closed, and the name is unlinked is the memory actually freed.)
If it doesn’t, that is a bug – what makes you think it wasn’t
freed?
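For reference, the lifecycle with a named object looks roughly like
this – a rough, untested sketch; the name and size are made up for
illustration:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define BUF_NAME "/mybuf"        /* illustrative name */
#define BUF_SIZE (64 * 1024)

int example(void)
{
    /* Creating/opening the name is one reference... */
    int fd = shm_open(BUF_NAME, O_RDWR | O_CREAT, 0666);
    if (fd == -1) return -1;

    if (ftruncate(fd, BUF_SIZE) == -1) return -1;

    /* ...and the mapping is another. */
    void *p = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) return -1;

    /* ... use the memory ... */

    munmap(p, BUF_SIZE);      /* drop the mapping reference */
    close(fd);                /* drop the fd reference      */
    shm_unlink(BUF_NAME);     /* drop the name reference    */
    return 0;
}

The order of the last three calls doesn’t matter; the memory only
goes away when the last reference does.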
Note that if you malloc() your memory, the memory returned, while
virtually contiguous, need not be physically contiguous. You don’t
know where the physical boundaries fall, and the behaviour of
mem_offset() is not defined for malloc()ed data. (The mem_offset()
docs say the fd must be the one that was passed to mmap() for that
data – you have no way of knowing that any particular fd you pass is
the one malloc() might have used to get the memory.)
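For what it’s worth, the usual idiom for getting a physically
contiguous, DMA-safe buffer and its physical offset looks roughly
like the untested sketch below – check MAP_PHYS/MAP_ANON, NOFD and
PROT_NOCACHE against the mmap() and mem_offset() docs for your
release:

#include <stddef.h>
#include <sys/types.h>
#include <sys/mman.h>

/* Sketch: allocate a physically contiguous buffer and look up its
 * physical address.  Whether you want PROT_NOCACHE depends on your
 * cache-coherency situation. */
void *alloc_dma_buf(size_t size, off_t *paddr)
{
    size_t contig;
    void *vaddr = mmap(NULL, size,
                       PROT_READ | PROT_WRITE | PROT_NOCACHE,
                       MAP_PHYS | MAP_ANON,  /* anonymous, physically contiguous */
                       NOFD, 0);
    if (vaddr == MAP_FAILED) return NULL;

    /* The memory was mapped with NOFD, so pass NOFD here too. */
    if (mem_offset(vaddr, NOFD, size, paddr, &contig) != 0) {
        munmap(vaddr, size);
        return NULL;
    }
    return vaddr;   /* munmap(vaddr, size) when done with it */
}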
- The purpose of all of this was to prevent a data copy when sending
data between processes. I have the following three options (I think):
How much data are you trying to move around? Sometimes there is as
much work in not copying the data as there is in copying it.
a) Use Neutrino MsgSend()/MsgReceive().
This will be blocking (there are ways around this) and involves a data copy.
Blocking can be useful – it is built-in synchronisation.
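If you go this way, the skeleton is small – roughly the following
untested sketch; the channel/connection setup (ChannelCreate()/
ConnectAttach(), or name_attach()/name_open()) is left out:

#include <stddef.h>
#include <sys/neutrino.h>

/* Receiver: MsgReceive() blocks until someone sends; the kernel does
 * the data copy as part of the send.  MsgReply() unblocks the sender. */
void serve(int chid)
{
    char buf[4096];
    for (;;) {
        int rcvid = MsgReceive(chid, buf, sizeof(buf), NULL);
        if (rcvid == -1) continue;
        /* ... process buf ... */
        MsgReply(rcvid, 0, NULL, 0);    /* sender unblocks here */
    }
}

/* Sender: MsgSend() blocks until the receiver replies, so the call
 * itself is the synchronisation point. */
int send_chunk(int coid, const void *data, size_t len)
{
    return MsgSend(coid, data, len, NULL, 0);
}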
b) Use conventional shared memory.
This may work fine, but either I will have to create a separate shared
memory object for each buffer sent, or I will need to create a single
chunk of shared memory and write my own memory manager.
For a simple, fixed-size allocator, this is pretty easy. Hopefully
you don’t need a general-purpose allocator.
Note that you can combine this scheme with message-passing – send the
offset (and size, if relevant) of the chunk to be processed on to
the next process in the chain.
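Concretely, the message only needs to carry a small descriptor – a
rough, untested sketch with a made-up message layout, assuming both
processes have already mapped the same shared-memory region:

#include <stdint.h>
#include <sys/neutrino.h>

/* Hypothetical descriptor: identifies a chunk by its offset within a
 * shared-memory region that both sides have mapped (e.g. via
 * shm_open()/mmap()), instead of carrying the data itself. */
typedef struct {
    uint16_t type;      /* made-up message type code   */
    uint32_t offset;    /* byte offset into the region */
    uint32_t length;    /* number of valid bytes       */
} chunk_msg_t;

/* Producer: hand a filled chunk to the next process.  Only this small
 * struct is copied by the kernel; the data itself stays put. */
int hand_off(int coid, uint32_t offset, uint32_t length)
{
    chunk_msg_t msg = { 1, offset, length };
    /* The sender stays reply-blocked until the consumer replies,
     * which doubles as the "you may reuse this chunk" signal. */
    return MsgSend(coid, &msg, sizeof msg, NULL, 0);
}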
c) Use my method for determining the list of physical addresses for
the buffer, send a message (using message queues) to the next process
(including this list), and mark the buffer as read-only using mprotect().
After the next process has finished processing the data, it sends a
return message informing the source process that it may remove the
write protection and re-use the buffer.
This is a bit ugly. Again, how much data are you handling? Message
queues involve a double send/receive/reply (S/R/R) combination – extra
overhead in context switching and kernel calls that may eat up your
savings in data-copy time.
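For what it’s worth, the mprotect() part itself is simple enough – a
rough sketch, assuming the buffer came from mmap() and is page-aligned:

#include <stddef.h>
#include <sys/mman.h>

/* Make the buffer read-only while the consumer works on it... */
int protect_for_reader(void *buf, size_t len)
{
    return mprotect(buf, len, PROT_READ);
}

/* ...and writable again once the "done" message comes back. */
int release_for_writer(void *buf, size_t len)
{
    return mprotect(buf, len, PROT_READ | PROT_WRITE);
}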
My problem with (c) is: how do I handle the case where the sending
process dies before the receiver processes the data? Is there a way
to lock the memory region during this transaction so that it cannot
be re-allocated?
About the only way to do this is to associate the memory with a name
in the pathname space (e.g. a named shared-memory object) – the name
keeps a reference on the memory even if the process that created it
dies.
Is there a better way of doing this?
Possibly, but it may require stepping back another step or two,
examining the data flow and your assumptions about that data flow,
and then designing from there.
Who is the ultimate “owner” of the data? Can somebody upstream
allocate the memory, then send a request down to the driver doing
the DMA saying, in essence, “fill this physical area with the data
and reply when you’re done”? Or, “start filling this physical area
with data, and notify me every so many chunks (maybe with a pulse)”?
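The pulse part of that would look something like this rough, untested
sketch – the pulse code is made up, and the owner needs a connection
(coid) back to its own channel for the driver to fire the pulse at:

#include <sys/neutrino.h>
#include <sys/siginfo.h>

#define PULSE_CODE_CHUNKS_DONE  (_PULSE_CODE_MINAVAIL + 0)  /* made-up code */

/* Owner side: build a pulse event saying "notify me", to be carried
 * down to the driver inside the fill request (request not shown). */
void build_notify_event(struct sigevent *ev, int coid, int chunk_count)
{
    SIGEV_PULSE_INIT(ev, coid, SIGEV_PULSE_PRIO_INHERIT,
                     PULSE_CODE_CHUNKS_DONE, chunk_count);
}

/* Driver side: every so many chunks of DMA, fire the pulse back at
 * the owner.  rcvid came from the MsgReceive() of the fill request. */
void notify_owner(int rcvid, const struct sigevent *ev)
{
    MsgDeliverEvent(rcvid, ev);
}

The owner then sees the pulse come back through MsgReceive() on its
own channel (rcvid == 0) and can go process those chunks.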
-David
David Gibbs
QNX Training Services
dagibbs@qnx.com