Chang Im <chim@cisco.com> wrote:
"David Gibbs" <dagibbs@qnx.com> wrote in message
news:a8sgq3$ki9$1@nntp.qnx.com...
…
Next question, why do you need to know the difference between a
voluntary close() call, and an involuntary close() due to the
client exiting? What are you trying to do differently?
Let's say a resource manager makes a copy of a file when the file is closed.
In order to avoid copying a potentially corrupted file, the resource manager
tries to detect whether the file was cleanly closed. If the file was not
closed by the client, then it may be in an inconsistent state.
The close message generated by an explicit close() call, by the implied
close() when exit() is called explicitly, and by the implied cleanup when
a process exits abnormally is identical in all three cases. (And this is
intentional.)
If you need this sort of handling, I would suggest doing something like
extending the OCB to include a "dirty" flag, setting this flag on anything
that would change the device (writes), and implementing a devctl() that
would, say, clear the dirty flag (i.e., declare the data state stable).
Then, in your iofunc_close_ocb() handler, you could check the state of the
dirty flag in the OCB to know which branch of handling to take.
This would allow the additional flexibility of a process stating that
"it's sane" without having to do any closes – it could then do a later
update, mark as "sane", update, mark as sane, etc., until it is finally
done and does a close().
But, of course, there are still the issues of multiple processes opening
the same device – are you allowing this? Have you thought through the
implications of this sort of state-saving? Maybe you need this as a
per-device flag (in the iofunc_attr_t structure, or an extension of it),
rather than a per-OCB flag. (Filenames are basically mapped to devices,
i.e. iofunc_attr_t structures; unique opens are mapped to OCBs, with
dup()ed fds [including dups due to process creation] being mapped
to the same OCB as the previous open().)
-David
QNX Training Services
http://www.qnx.com/support/training/
Please followup in this newsgroup if you have further questions.