A resource manager to map over '/'

Hi,

Disclaimer: I’ve never done a resource manager before, so maybe I’ve got
the wrong idea about how resource managers work.

I’m trying to get an event on file write (or ideally file close) to
anywhere on ‘/’ (including mounted DOS volumes in /fs if possible), so
I’ve compiled and run the basic example on qnx.com, changing
‘/dev/sample’ to ‘/’ and putting in a few printfs to see what is going on.

What I get is that reads and writes to the disk work fine, although I
have not implemented my own io_read or io_writes. However, directory
reads like ‘ls’ do not work. So while my resmgr is running, I can do:

echo “QNX!” > /tmp/qnx
and
more /tmp/qnx

and they work fine, but stuff like

cd /tmp
ls

do not work.

I find it a bit strange that my resmgr maps right over the top of ‘/’
and some stuff works and some stuff does not. The basic behaviour I
would like is for everything to work like my resmgr was not there, but
to be able to call my own functions on fopen() or fclose() and then
continue like nothing happened. Is this possible without re-doing lots
of functionality which is already present?

I hope I have explained myself correctly…

Cheers

Garry

Garry <asdfasdfadsf@asdfasdfsd.com> wrote:

Hi,

Disclaimer: I’ve never done a resource manager before, so maybe I’ve got
the wrong idea about how resource managers work.

I’m trying to get an event on file write (or ideally file close) to
anywhere on ‘/’ (including mounted DOS volumes in /fs if possible), so
I’ve compiled and run the basic example on qnx.com, changing
‘/dev/sample’ to ‘/’ and putting in a few printfs to see what is going on.

You can only get control on an open. Once the process manager has assigned
a connection between a client and a resource manager, no other resource manager
will be allowed to “interfere” (sample, sniff, look at, modify) with that
connection. The connection persists until closed.

What I get is that reads and writes to the disk work fine, although I
have not implemented my own io_read or io_writes. However, directory

Yes, the open has gone by you, been assigned to the appropriate resource
manager, and it’s the one that now handles them; so that’s why they appear
to work even though you haven’t written any code – you’re just not in the
loop.

reads like ‘ls’ do not work. So while my resmgr is running, I can do:

ls is trickier; I believe (but someone with current knowledge will have to
comment) that ls goes around and asks each resource manager for an opendir/readdir/close
cycle.

echo “QNX!” > /tmp/qnx
and
more /tmp/qnx

and they work fine, but stuff like

cd /tmp
ls

do not work.

I find it a bit strange that my resmgr maps right over the top of ‘/’
and some stuff works and some stuff does not. The basic behaviour I
would like is for everything to work like my resmgr was not there, but
to be able to call my own functions on fopen() or fclose() and then
continue like nothing happened. Is this possible without re-doing lots
of functionality which is already present?

The only way I can think of doing what you want is to take over everything
and have your resource manager call read() when its io_read() is called,
call write() when its io_write() is called etc. Effectively, you need to
intercept EACH AND EVERY function call because otherwise you have been bypassed
and you will not get the call that you are really interested in (the write()
or the close()).

Also, you’ll want to get in early and use the _RESMGR_FLAG_BEFORE flag to ensure
that you are first in line for requests…
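For reference, the attach in the sample would change along these lines. This is an untested sketch using the QNX Neutrino resmgr API; taking over ‘/’ this way still has all the caveats discussed in this thread:

```c
#include <sys/dispatch.h>
#include <sys/iofunc.h>
#include <sys/stat.h>
#include <string.h>
#include <stdlib.h>

static resmgr_connect_funcs_t connect_funcs;
static resmgr_io_funcs_t io_funcs;
static iofunc_attr_t attr;

int main(void)
{
    dispatch_t *dpp = dispatch_create();
    resmgr_attr_t rattr;
    memset(&rattr, 0, sizeof rattr);

    /* start from the default handlers, then hook the ones you care about */
    iofunc_func_init(_RESMGR_CONNECT_NFUNCS, &connect_funcs,
                     _RESMGR_IO_NFUNCS, &io_funcs);
    iofunc_attr_init(&attr, S_IFDIR | 0555, NULL, NULL);

    /* _RESMGR_FLAG_DIR:    we manage a directory subtree, not one name;
     * _RESMGR_FLAG_BEFORE: resolve our attachment before the existing
     *                      filesystem at "/", so we see the open()s first. */
    if (resmgr_attach(dpp, &rattr, "/", _FTYPE_ANY,
                      _RESMGR_FLAG_DIR | _RESMGR_FLAG_BEFORE,
                      &connect_funcs, &io_funcs, &attr) == -1) {
        exit(EXIT_FAILURE);
    }

    dispatch_context_t *ctp = dispatch_context_alloc(dpp);
    for (;;) {
        if ((ctp = dispatch_block(ctp)) != NULL)
            dispatch_handler(ctp);
    }
}
```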

I hope I have explained myself correctly…

Me too! :-)

Hope this helps,
-RK


[If replying via email, you’ll need to click on the URL that’s emailed to you
afterwards to forward the email to me – spam filters and all that]
Robert Krten, PDP minicomputer collector http://www.parse.com/~museum/

You can only get control on an open. Once the process manager has assigned
a connection between a client and a resource manager, no other resource manager
will be allowed to “interfere” (sample, sniff, look at, modify) with that
connection. The connection persists until closed.


What I get is that reads and writes to the disk work fine, although I
have not implemented my own io_read or io_writes. However, directory


Yes, the open has gone by you, been assigned to the appropriate resource
manager, and it’s the one that now handles them; so that’s why they appear
to work even though you haven’t written any code – you’re just not in the
loop.

Right, so if I implemented my own io_open, then reads and writes would
not work, unless I made them work myself?

reads like ‘ls’ do not work. So while my resmgr is running, I can do:


ls is trickier; I believe (but someone with current knowledge will have to
comment) that ls goes around and asks each resource manager for an opendir/readdir/close
cycle.

I don’t need to intercept opendirs or anything, is there a way to
explicitly exclude my resmgr from doing this?

The only way I can think of doing what you want is to take over everything
and have your resource manager call read() when its io_read() is called,
call write() when its io_write() is called etc. Effectively, you need to
intercept EACH AND EVERY function call because otherwise you have been bypassed
and you will not get the call that you are really interested in (the write()
or the close()).

Also, you’ll want to get in early and use the _RESMGR_FLAG_BEFORE flag to ensure
that you are first in line for requests…


I hope I have explained myself correctly…

I think I see now. Intercepting each call might not be too bad, but can
I call the ‘original’ function after mine, or do I actually have to do
the work of the original function myself? (By the ‘original’ function I
mean the one that gets called if my resmgr is not running.)

Thanks a lot for the help.

Garry

Garry <asdfasdfasdfasd@adsfasdfasdf.com> wrote:

You can only get control on an open. Once the process manager has assigned
a connection between a client and a resource manager, no other resource manager
will be allowed to “interfere” (sample, sniff, look at, modify) with that
connection. The connection persists until closed.


What I get is that reads and writes to the disk work fine, although I
have not implemented my own io_read or io_writes. However, directory


Yes, the open has gone by you, been assigned to the appropriate resource
manager, and it’s the one that now handles them; so that’s why they appear
to work even though you haven’t written any code – you’re just not in the
loop.

Right, so if I implemented my own io_open, then reads and writes would
not work, unless I made them work myself?

Correct. Once an open() has been assigned to you, you are responsible
for all further communications with the client until the client does the
final close() [“final” because there might be intervening dup()s and close()s].

reads like ‘ls’ do not work. So while my resmgr is running, I can do:

ls is trickier; I believe (but someone with current knowledge will have to
comment) that ls goes around and asks each resource manager for an opendir/readdir/close
cycle.

I don’t need to intercept opendirs or anything, is there a way to
explicitly exclude my resmgr from doing this?

Not sure if you can tell why you’re being open()d. If you can’t tell why,
then an open() looks just like an opendir().

The only way I can think of doing what you want is to take over everything
and have your resource manager call read() when its io_read() is called,
call write() when its io_write() is called etc. Effectively, you need to
intercept EACH AND EVERY function call because otherwise you have been bypassed
and you will not get the call that you are really interested in (the write()
or the close()).

Also, you’ll want to get in early and use the _RESMGR_FLAG_BEFORE flag to ensure
that you are first in line for requests…


I hope I have explained myself correctly…

I think I see now. Intercepting each call might not be too bad, but can
I call the ‘original’ function after mine, or do I actually have to do
the work of the original function myself? (By the ‘original’ function I
mean the one that gets called if my resmgr is not running.)

You are in a protected address space, meaning that you do not have access to
the “original” function – you must go through the API (e.g., open(), read(),
write(), lseek(), fpathconf(), chown(), chmod(), …) just like anybody else.
And, you also have to prevent yourself from open()ing yourself.

Consider if you took over /, and someone did an open of /home/garry/spud.txt.
You get the open(), and you say “I will handle this”. Not because you want to,
but because you want to get informed of changes. Since you’re handling it, you
now need to do an open() of /home/garry/spud.txt so that you can get the file
contents. But you don’t want to open() yourself [in fact, without setting a
special flag, you can’t]. So you need to reject the open() of yourself and
pass it on to the next guy. You could probably do this by looking at the
process ID of the person trying to open() you, see that it was yourself, and
deny the request.
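Sketched in code, that self-rejection might look like the following. This is a hypothetical, untested handler; it assumes (as suggested above) that declining the open makes the client’s library pass the request on to the next resource manager:

```c
#include <sys/iofunc.h>
#include <sys/dispatch.h>
#include <errno.h>
#include <unistd.h>

/* Hypothetical io_open handler for a resmgr sitting over "/".
 * If the open() comes from our own process, it is our pass-through
 * open() looping back at us, so decline it and let the request fall
 * through to the real filesystem underneath. */
int my_io_open(resmgr_context_t *ctp, io_open_t *msg,
               RESMGR_HANDLE_T *handle, void *extra)
{
    if (ctp->info.pid == getpid()) {
        /* our own open() coming back at us -- decline, so the
         * request can be retried against the next resmgr in line */
        return ENOSYS;
    }

    /* ...record the access for our own purposes, open() the real
     * file, stash the fd in our OCB, then accept the client... */
    return iofunc_open_default(ctp, msg, handle, extra);
}
```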

Also, when you say “intercepting each call might not be too bad”, you do
realize that you must COPY the data for each and every data transaction,
right? So if a process write()s to a file that you’ve intercepted, you must
read the data from the client and write() it to the target yourself,
doubling the amount of data transferred.

Thanks a lot for the help.

Garry

No probs. Enjoy!

Cheers,
-RK



Thanks for the info Robert, I think I’m getting close to having a clue
what I am doing… ;-)

I understand now that read/write only come to me if I accept the open
request, but I’m not really sure why this does not affect the opendir
stuff; I will need to investigate further…

I see what you mean about doubling data transfers. I had thought of it
as redirecting data rather than copying it, but I see now why that’s
not really possible. I find disk I/O to be not terribly speedy on my
QNX machine anyway, though I’m not sure whether the bottleneck is
getting data to the disk or the disk itself; duplicating the data might
halve performance, or, if the bottleneck is elsewhere, performance
might remain roughly the same.

I’ll keep on looking at this. I wonder, do any of your books cover this
sort of thing?

thegman <gtaylor@lowebroadway-dot-com.no-spam.invalid> wrote:

Thanks for the info Robert, I think I’m getting close to having a clue
what I am doing… ;-)

Excellent (he said, tenting his fingers)…

I understand now that read/write only come to me if I accept the open
request, but I’m not really sure why this does not affect the opendir
stuff; I will need to investigate further…

I see what you mean about doubling data transfers. I had thought of it
as redirecting data rather than copying it, but I see now why that’s
not really possible. I find disk I/O to be not terribly speedy on my
QNX machine anyway, though I’m not sure whether the bottleneck is
getting data to the disk or the disk itself; duplicating the data might
halve performance, or, if the bottleneck is elsewhere, performance
might remain roughly the same.

On my “BFE” system which allows you to have > 2GB files all I did was
implement that on top of the “normal” filesystem. Benchmarks are on my
website (I think the URL is http://www.parse.com/samples/manpages/bfe.html)

I’ll keep on looking at this. I wonder, do any of your books cover this
sort of thing?

All of them cover it in one way or another; the QNX Cookbook is your best
“how to” guide (it features a RAMdisk filesystem and a .tar filesystem)
and the “Getting Started” book is the reference part for that…

Cheers,
-RK



I’m not sure if this will be useful for you, but the ‘inflator’ utility that
comes with QNX does something like what you are asking: it implements runtime
compression of the filesystem. If run with verbosity enabled (inflator -vvv) it
will report on disk accesses (though I’m not sure about each write).

Rob.

Garry wrote:

Hi,

Disclaimer: I’ve never done a resource manager before, so maybe I’ve
got the wrong idea about how resource managers work.

I’m trying to get an event on file write (or ideally file close) to
anywhere on ‘/’ (including mounted DOS volumes in /fs if possible), so
I’ve compiled and run the basic example on qnx.com, changing
‘/dev/sample’ to ‘/’ and putting in a few printfs to see what is
going on.
What I get is that reads and writes to the disk work fine, although I
have not implemented my own io_read or io_writes. However, directory
reads like ‘ls’ do not work. So while my resmgr is running, I can do:

echo “QNX!” > /tmp/qnx
and
more /tmp/qnx

and they work fine, but stuff like

cd /tmp
ls

do not work.

I find it a bit strange that my resmgr maps right over the top of ‘/’
and some stuff works and some stuff does not. The basic behaviour I
would like is for everything to work like my resmgr was not there, but
to be able to call my own functions on fopen() or fclose() and then
continue like nothing happened. Is this possible without re-doing lots
of functionality which is already present?

I hope I have explained myself correctly…

Cheers

Garry

Robert Muil wrote:

I’m not sure if this will be useful for you, but the ‘inflator’ utility that
comes with QNX does something like what you are asking: it implements runtime
compression of the filesystem. If run with verbosity enabled (inflator -vvv) it
will report on disk accesses (though I’m not sure about each write).

Rob.

If I could get at the source, that would be great, but I’m not
interested in compressed filesystems, just in getting an event on file
writes to any filesystem, but mainly to ‘/’. It’s a shame QNX has not
implemented this themselves; it could be very useful for a lot of people.

Cheers

Garry