File System Discrepancy

I’ll keep this simple to start with…

We have a 4 Gig hard drive mounted as /u on one of our (very busy)
production systems, running QNX 4.25. It’s the primary “data drive”. If I
run a “du -k /u”, it reports ~1.7 Gig of used file space. If I run “df -h”,
it reports 3.5 Gig used or 84% of the total space available on /u. That’s
twice what “du” reports. How can this be? (And no, I’m not mistaking 512-byte
blocks for kilobytes.)

Also, I can’t easily verify this (because it’s a 24x7 production system),
but it seems to me that the last time I checked, when all production
processing was shut down, the “du” and “df” numbers pretty much matched (at
around ~43% used).

At this point I’m just looking for some clues… of what to look at/for?

TIA

-Rob

“Rob” <rob@spamyourself.com> wrote in message
news:9drpmj$rui$1@inn.qnx.com

I’ll keep this simple to start with…

We have a 4 Gig hard drive mounted as /u on one of our (very busy)
production systems, running QNX 4.25. It’s the primary “data drive”. If I
run a “du -k /u”, it reports ~1.7 Gig of used file space. If I run “df -h”,
it reports 3.5 Gig used or 84% of the total space available on /u. That’s
twice what “du” reports. How can this be? (And no, I’m not mistaking 512-byte
blocks for kilobytes.)

du reports file sizes, while df reports usage as recorded by the inode and
bitmap files.
First I would run chkfsys on the HD to make sure the filesystem is intact.

On files that grow a lot, the filesystem preallocates space to reduce
fragmentation.
If there are lots of small files, it’s very possible this preallocated space
is eating up a fair amount of disk space. The preallocation can be turned off.
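
As a rough illustration of the difference, the small C sketch below adds up
st_size for the files named on its command line, which is the logical size
that du is summing. df, by contrast, counts blocks marked used in the bitmap,
which includes anything Fsys has preallocated beyond EOF, so its number can be
larger. (A sketch only: it assumes a POSIX stat() and keeps error handling
minimal.)

/* Sum the logical sizes (st_size) of the files named on the command
 * line -- the quantity "du" adds up.  "df" instead counts blocks marked
 * used in the filesystem bitmap, which includes space preallocated
 * beyond EOF, so the two totals can legitimately differ.
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(int argc, char *argv[])
{
    struct stat sb;
    long total_kb = 0;
    int i, n = 0;

    for (i = 1; i < argc; i++) {
        if (stat(argv[i], &sb) == -1) {
            perror(argv[i]);
            continue;
        }
        total_kb += (long)(sb.st_size / 1024);
        n++;
    }
    printf("logical size of %d file(s): %ld K\n", n, total_kb);
    return 0;
}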


Also, I can’t easily verify this (because it’s a 24x7 production system),
but it seems to me that the last time I checked, when all production
processing was shut down, the “du” and “df” numbers pretty much matched (at
around ~43% used).

At this point I’m just looking for some clues… of what to look at/for?

TIA

-Rob

Thanks Mario…

But I knew most of that already :wink: I did say I was keeping it simple to
start with.

First off… Sorry, it’s not really feasible to be doing a chkfsys on our
production system (unless I want to come in at 3:00am and shut everything
down). BUT, I’m fairly certain the file system isn’t corrupt… we haven’t
had any bad data files for a very long time. I personally check on that
every day.

One thing I don’t know is how to turn off Fsys preallocation. Is it an
Fsys option? Ideally, I’d like to be able to tune it… or even better have
access to some sort of ioctl mechanism, so it could be handled on a
file-by-file basis.

One other thing I don’t know for sure… does the preallocation work on just
open files, or is it part of Fsys’ caching mechanism, i.e. does it
extend/persist beyond a close of the file? The reason I ask is, we are a
transaction acquirer. We have a lot of (not so small) files. The basic method
used for transaction logging is to open in append mode, write a transaction
and close the file. At any given moment there really aren’t a lot of files
open on the system, but over an extended period there are hundreds of files
being appended to regularly.
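
In code, the logging cycle described above amounts to something like the
sketch below (the path and record layout are invented for illustration; the
point is just that each transaction re-opens the file in append mode and
closes it again):

/* Sketch of the open/append/close logging cycle described above.
 * The file name and record format are hypothetical.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

int log_transaction(const char *path, const void *rec, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);

    if (fd == -1)
        return -1;
    if (write(fd, rec, len) != (ssize_t)len) {
        close(fd);
        return -1;
    }
    return close(fd);   /* closed again until the next transaction */
}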

Also, do you or anyone else out there know of any utilities I could use to
diagnose this problem? The only thing that I’ve run across is the -x option
to ls. And you know what… it’s got the ‘G’ flag set for all our active
transaction files (meaning an extent is reserved beyond the EOF)… Hmmm?
I’d really like to know how big that preallocated extent is on a file-by-file
basis.

OK… now I’ll really throw a wrench in all this…
We run redundant processing centers. Any transaction that hits one
processing center gets replicated to the other processing center. The
counterpart to the node in question is pretty much a mirror, file system
wise. Running a ‘df’ on that node reports 45% used for /u. Which is what I
would expect… it matches the ‘du -k’… ‘ls -x’ shows a ‘G’ for all
transaction files there also??

The only other variable I can think of is there are more database processes
running on the node I’m having problems with (its counterpart just keeps
copies of the database files, but not the active processes). There aren’t
very many files associated with this, but they are always open. When they
grow, they do so in rather large chunks (like 100-300k at a time). I wonder
what the algorithm is for preallocation? Is it some multiple of the
requested extent/last write past EOF?

Argh, I guess I need to know how to tune all this. Help!

TIA (Again)

-Rob
P.S.
Warren is alive and well… but he’s got some new toys to play with. And
then of course there’s sniper practice for Quake… Expect him when you
see/hear from him :wink:



First off… Sorry, it’s not really feasible to be doing a chkfsys on our
production system (unless I want to come in at 3:00am and shut everything
down). BUT, I’m fairly certain the file system isn’t corrupt… we haven’t
had any bad data files for a very long time. I personally check on that
every day.

Corruption can take many forms. If, for example, you have ever turned the
power off while the system had files open, it is possible that you’ve
created orphan sectors. These are sectors marked as used, but not
part of any file or directory. CHKFSYS is very good at recovering these.
You can intentionally create orphans with the ZAP command. If you have
a leak somehow, via ZAP maybe, eventually you will lose your entire hard
disk and then your system will stop in the middle of the day.

One thing I don’t know is how to turn off Fsys preallocation. Is it an
Fsys option? Ideally, I’d like to be able to tune it… or even better have
access to some sort of ioctl mechanism, so it could be handled on a
file-by-file basis.

There does not seem to be an Fsys option to control preallocation. It seems
to work quite well, and I don’t know any reason why you would want to turn it
off. The alternative would be to speed up fragmentation on your disk. You
can, however, make an end run around it if you want. To do this, never just
open a file and write to it. Instead, preallocate your files yourself. Open
them for write, create as large a file as you will need, and then close the
file. Then re-open the file in update mode. You will have to keep track of
the end of file yourself. This is probably not what you are looking for.
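
A minimal sketch of that “preallocate it yourself” idea might look like the
following (the file name, size, and 512-byte write unit are all invented for
illustration; it writes real zero-filled blocks so the space is genuinely
allocated before the file is closed and re-opened for update):

/* Create the file at its full size, close it, then re-open it for
 * update; the application tracks its own logical end of file.
 * This is a sketch of the workaround described above, not a drop-in
 * solution.
 */
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>

int create_preallocated(const char *path, long nbytes)
{
    char block[512];
    long left;
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd == -1)
        return -1;
    memset(block, 0, sizeof block);
    for (left = nbytes; left > 0; left -= (long)sizeof block) {
        size_t n = (left < (long)sizeof block) ? (size_t)left : sizeof block;

        if (write(fd, block, n) != (ssize_t)n) {
            close(fd);
            return -1;
        }
    }
    if (close(fd) == -1)
        return -1;
    return open(path, O_RDWR);   /* "update mode"; caller remembers its EOF */
}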

One other thing I don’t know for sure… does the preallocation work on just
open files or is it part of Fsys’ caching mechanism e.g. extends/exists
beyond a close of a file.

When you close a file properly, there should be no extra allocated sectors.
There may be some unallocated bytes in the last sector.

The reason I ask is, we are a transaction
acquirer. We have a lot of (not so small) files. The basic method used for
transaction logging is to open in append mode, write a transaction and close
the file. At any given moment there really aren’t a lot of files open on
the system, but over an extended period there are hundreds of files being
appended to regularly.

This is likely to cause a lot of fragmentation on your disk.
I wonder if this is causing the system to create so many
inodes that they are leaching sectors from your system. Try
“ls -x” on one of your files. If the 2nd number is very
large then this may be your problem.

Also, do you or anyone else out there know of any utilities I could use to
diagnose this problem? The only thing that I’ve run across is the -x option
to ls. And you know what… it’s got the ‘G’ flag set for all our active
transaction files (meaning an extent is reserved beyond the EOF)… Hmmm?
I’d really like to know how big that preallocated extent is on a file-by-file
basis.

I recall Bill Flowers explaining that it is situation dependent. For example,
let’s say you open a file and write 10 sequential sectors. The system might
preallocate 10 more. If you keep on writing sequential sectors, the system
preallocates larger and larger blocks, up to some limit. I don’t think the
limit itself is all that large compared to the size of a disk.

OK… now I’ll really throw a wrench in all this…
We run redundant processing centers. Any transaction that hits one
processing center gets replicated to the other processing center. The
counterpart to the node in question is pretty much a mirror, file system
wise. Running a ‘df’ on that node reports 45% used for /u. Which is what I
would expect… it matches the ‘du -k’… ‘ls -x’ shows a ‘G’ for all
transaction files there also??

I think the G is probably just telling you that the file is open and being
written to. How is the 2nd machine mirrored? Do the transactions come in
one by one, or are they updated in blocks?

The only other variable I can think of is there are more database processes
running on the node I’m having problems with (its counterpart just keeps
copies of the database files, but not the active processes). There aren’t
very many files associated with this, but they are always open. When they
grow, they do so in rather large chunks (like 100-300k at a time). I wonder
what the algorithm is for preallocation? Is it some multiple of the
requested extent/last write past EOF?

I think it is something like this.


Mitchell Schoenbrun --------- maschoen@pobox.com

Rob <rob@spamyourself.com> wrote:

Thanks Mario…

But I knew most of that already :wink: I did say I was keeping it simple to
start with.

First off… Sorry, it’s not really feasible to be doing a chkfsys on our
production system (unless I want to come in at 3:00am and shut everything
down). BUT, I’m fairly certain the file system isn’t corrupt… we haven’t
had any bad data files for a very long time. I personally check on that
every day.

One thing I don’t know is how to turn off Fsys preallocation. Is it an
Fsys option? Ideally, I’d like to be able to tune it… or even better have
access to some sort of ioctl mechanism, so it could be handled on a
file-by-file basis.

From the description of what you’re doing – lots of appends to lots of
files – this is the behaviour that will cause there to be bunches of
“pre-grown” files around. Usually you want pre-grown files – it makes
later accesses/updates to that file more efficient by preventing the
fragmentation of the drive.

I don’t think there is an ioctl for turning this off, but I seem to remember
that there was a discussion about this in the old quics – the archives
are supposed to all be searchable using qdn (the web site) now, so you might
try searching the archives for pregrown or pre-grown and see what comes up.

I vaguely recall that there was a sequence of behaviours that you could do
that would act to “tell” Fsys that it could release the pre-grown blocks.
I THINK it might have been to explicitly truncate [ltrunc()] the file to
its current size.
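
If that recollection is right, the call would be something along these lines
(a sketch only: it assumes the QNX 4 ltrunc(fd, offset, whence) interface,
and whether Fsys actually releases the pre-grown extent at that point is
exactly what would need to be confirmed):

/* Seek to the current end of file and truncate right there, which is
 * the "ltrunc() to its current size" idea suggested above.
 */
#include <unistd.h>       /* lseek(); on QNX 4 this also declares ltrunc() */
#include <sys/types.h>

int release_pregrowth(int fd)
{
    off_t end = lseek(fd, 0, SEEK_END);    /* logical EOF */

    if (end == (off_t)-1)
        return -1;
    return (ltrunc(fd, end, SEEK_SET) == (off_t)-1) ? -1 : 0;
}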

One other thing I don’t know for sure… does the preallocation work on just
open files or is it part of Fsys’ caching mechanism e.g. extends/exists
beyond a close of a file.

It keeps it beyond the close of a file.

OK… now I’ll really throw a wrench in all this…
We run redundant processing centers. Any transaction that hits one
processing center gets replicated to the other processing center. The
counterpart to the node in question is pretty much a mirror, file system
wise. Running a ‘df’ on that node reports 45% used for /u. Which is what I
would expect… it matches the ‘du -k’… ‘ls -x’ shows a ‘G’ for all
transaction files there also??

The only other variable I can think of is there are more database processes
running on the node I’m having problems with (its counterpart just keeps
copies of the database files, but not the active processes). There aren’t
very many files associated with this, but they are always open. When they
grow, they do so in rather large chunks (like 100-300k at a time). I wonder
what the algorithm is for preallocation? Is it some multiple of the
requested extent/last write past EOF?

Gotten beyond my knowledge/recall here. If John notices this thread, he
might have more info… or try that search I suggested.

-David

QNX Training Services
dagibbs@qnx.com

On Tue, 15 May 2001 14:51:47 -0500, “Rob” <rob@spamyourself.com>
wrote:


The only other variable I can think of is there are more database processes
running on the node I’m having problems with (its counterpart just keeps
copies of the database files, but not the active processes). There aren’t
very many files associated with this, but they are always open. When they
grow, they do so in rather large chunks (like 100-300k at a time). I wonder
what the algorithm is for preallocation? Is it some multiple of the
requested extent/last write past EOF?

Does the term ‘database’ mean that a kind of DBMS is used?
If so, a database file definition may contain an extension size/pre-grow
parameter.

ako


“Rob” <rob@spamyourself.com> wrote in message
news:9ds151$3bs$1@inn.qnx.com

Thanks Mario…

But I knew most of that already :wink: I did say I was keeping it simple to
start with.

First off… Sorry, it’s not really feasible to be doing a chkfsys on our
production system (unless I want to come in at 3:00am and shut everything
down). BUT, I’m fairly certain the file system isn’t corrupt… we haven’t
had any bad data files for a very long time. I personally check on that
every day.

One thing I don’t know is how to turn off Fsys preallocation. Is it an
Fsys option? Ideally, I’d like to be able to tune it… or even better have
access to some sort of ioctl mechanism, so it could be handled on a
file-by-file basis.

I just checked, and the option is not in Fsys but rather in the mount command;
the option is -g.


P.S.
Warren is alive and well… but, he’s got some new toys to play with. And
then of course there’s snipy practice for Quake… Expect him when you
see/hear from him > :wink:

Thanks for the info.


Mitchell,

When you say "Try “ls -x” on one of your files. If the 2nd number is
very
large then this may be your problem. ", what consitutes a large number?
I’ve
noticed these on some of our data files where this number is between 1 and
20.
TIA

Ivan Bannon
RJG Inc.


See below…

“Mitchell Schoenbrun” <maschoen@pobox.com> wrote in message
news:Voyager.010515135300.211B@schoenbrun.com

First off… Sorry, it’s not really feasible to be doing a chkfsys on our
production system (unless I want to come in at 3:00am and shut everything
down). BUT, I’m fairly certain the file system isn’t corrupt… we haven’t
had any bad data files for a very long time. I personally check on that
every day.

Corruption can take many forms. If for example you have ever turn ed the
power off while the system had files open it is possble that you’ve
created orphan sectors. These are sectors marked as used, but not
part of any file or directory. CHKFSYS is very good at recovering these.
You can intentionally create orphan’s with the ZAP command. If you have
a leak somehow, via ZAP maybe, eventually you will lose your entire hard
disk and then your system will stop in the middle of the day.

Thanks for your concern… I assure you, I’m a very prudent person. If
there was any practical way I could run chkfsys, I would. But, I’m 99% sure
this isn’t a file system corruption problem. I have run chkfsys on this
node previously, after I started noticing the disk usage discrepancies.
There was no corruption.

One thing I don’t know is how to turn off Fsys preallocation. Is it an
Fsys option? Ideally, I’d like to be able to tune it… or even better have
access to some sort of ioctl mechanism, so it could be handled on a
file-by-file basis.

There does not seem to be an Fsys option to control preallocation. It seems
to work quite well, and I don’t know any reason why you would want to turn it
off. The alternative would be to speed up fragmentation on your disk. You
can, however, make an end run around it if you want. To do this, never just
open a file and write to it. Instead, preallocate your files yourself. Open
them for write, create as large a file as you will need, and then close the
file. Then re-open the file in update mode. You will have to keep track of
the end of file yourself. This is probably not what you are looking for.

Thanks for confirming that… I’m not blind! Yes, I agree, it does work
quite well. That’s why I’d prefer to tune it or deal with it on an
individual file basis.

One other thing I don’t know for sure… does the preallocation work on just
open files, or is it part of Fsys’ caching mechanism, i.e. does it
extend/persist beyond a close of the file?

When you close a file properly, there should be no extra allocated sectors.
There may be some unallocated bytes in the last sector.

David’s response seems to contradict that… I’ll be investigating further.


The reason I ask is, we are a transaction acquirer. We have a lot of (not so
small) files. The basic method used for transaction logging is to open in
append mode, write a transaction and close the file. At any given moment
there really aren’t a lot of files open on the system, but over an extended
period there are hundreds of files being appended to regularly.

This is likely to cause a lot of fragmentation on your disk.
I wonder if this is causing the system to create so many
inodes that they are leaching sectors from your system. Try
“ls -x” on one of your files. If the 2nd number is very
large then this may be your problem.

I totally understand and agree. For transaction files, no, it’s not a big
number. I’m beginning to conclude that “normal” append mode writes aren’t
the problem here.

Also, do you or anyone else out there know of any utilities I could use to
diagnose this problem? The only thing that I’ve run across is the -x option
to ls. And you know what… it’s got the ‘G’ flag set for all our active
transaction files (meaning an extent is reserved beyond the EOF)… Hmmm?
I’d really like to know how big that preallocated extent is on a file-by-file
basis.

I recall Bill Flowers explaining that it is situation dependent. For example,
let’s say you open a file and write 10 sequential sectors. The system might
preallocate 10 more. If you keep on writing sequential sectors, the system
preallocates larger and larger blocks, up to some limit. I don’t think the
limit itself is all that large compared to the size of a disk.

OK… now I’ll really throw a wrench in all this…
We run redundant processing centers. Any transaction that hits one
processing center gets replicated to the other processing center. The
counterpart to the node in question is pretty much a mirror, file system
wise. Running a ‘df’ on that node reports 45% used for /u. Which is what I
would expect… it matches the ‘du -k’… ‘ls -x’ shows a ‘G’ for all
transaction files there also??

I think the G is probably just telling you that the file is open and being
written to. How is the 2nd machine mirror’d. Do the transactions come in
one by one, or are they updated in blocks?

A single transaction at a time. Transaction processing is identical on both
systems. Again, I don’t think append mode writes are the problem here.

See my database comments/suspicions in another reply.

The only other variable I can think of is there are more database processes
running on the node I’m having problems with (its counterpart just keeps
copies of the database files, but not the active processes). There aren’t
very many files associated with this, but they are always open. When they
grow, they do so in rather large chunks (like 100-300k at a time). I wonder
what the algorithm is for preallocation? Is it some multiple of the
requested extent/last write past EOF?

I think it is something like this.

Thanks Mitchell

Mitchell Schoenbrun --------- > maschoen@pobox.com

See below…

“David Gibbs” <dagibbs@qnx.com> wrote in message
news:9ds6kj$48d$1@nntp.qnx.com

Rob <rob@spamyourself.com> wrote:

Thanks Mario…

But I knew most of that already :wink: I did say I was keeping it simple to
start with.

First off… Sorry, it’s not really feasible to be doing a chkfsys on our
production system (unless I want to come in at 3:00am and shut everything
down). BUT, I’m fairly certain the file system isn’t corrupt… we haven’t
had any bad data files for a very long time. I personally check on that
every day.

One thing I don’t know is how to turn off Fsys preallocation. Is it an
Fsys option? Ideally, I’d like to be able to tune it… or even better have
access to some sort of ioctl mechanism, so it could be handled on a
file-by-file basis.

From the description of what you’re doing – lots of appends to lots of
files – this is the behaviour that will cause there to be bunches of
“pre-grown” files around. Usually you want pre-grown files – it makes
later accesses/updates to that file more efficient by preventing the
fragmentation of the drive.

Yes, for transaction, log, etc. files definitely.

I don’t think there is an ioctl for turning this off, but I seem to remember
that there was a discussion about this in the old quics – the archives
are supposed to all be searchable using qdn (the web site) now, so you might
try searching the archives for pregrown or pre-grown and see what comes up.

I vaguely recall that there was a sequence of behaviours that you could do
that would act to “tell” Fsys that it could release the pre-grown blocks.
I THINK it might have been to explicitly truncate [ltrunc()] the file to
its current size.

I will investigate this… thanks.

One other thing I don’t know for sure… does the preallocation work on just
open files, or is it part of Fsys’ caching mechanism, i.e. does it
extend/persist beyond a close of the file?

Its keeps it beyond the close of a file.

A HA! That’s why restarting a database process didn’t change anything.

OK… now I’ll really throw a wrench in all this…
We run redundant processing centers. Any transaction that hits one
processing center gets replicated to the other processing center. The
counterpart to the node in question is pretty much a mirror, file system
wise. Running a ‘df’ on that node reports 45% used for /u. Which is what I
would expect… it matches the ‘du -k’… ‘ls -x’ shows a ‘G’ for all
transaction files there also??

The only other variable I can think of is there are more database processes
running on the node I’m having problems with (its counterpart just keeps
copies of the database files, but not the active processes). There aren’t
very many files associated with this, but they are always open. When they
grow, they do so in rather large chunks (like 100-300k at a time). I wonder
what the algorithm is for preallocation? Is it some multiple of the
requested extent/last write past EOF?

Gotten beyond my knowledge/recall here. If John notices this thread, he
might have more info… or try that search I suggested.

Ok… Thanks a lot. Check out my other replies… I may be getting a handle
on this :slight_smile:

-David

QNX Training Services
dagibbs@qnx.com

Not really a DBMS in the traditional sense, but in theory, yes. The engine
for the “database processing” was written from scratch in-house by me. It’s
based on lean, mean, built-for-speed, single-key index files. It does in fact
have a parameter for extension size/pre-growth (that’s why it grows in
100-300k chunks).

There are two primary reasons why I’m suspicious of the “database
processing”…

  1. The big difference between the redundant/mirrored file systems is active
    database processing. The one with more database processing has a greater
    file system discrepancy.
  2. In the last 6 months or so we’ve added several client processing suites
    (Hosts for short) to our system that require relatively large index files in
    the database that supports each host. The “card index” in particular for
    each of these starts out rather large (100 Meg) and grows steadily… like
    maybe 3-5 Meg a month on average. The huge discrepancies between ‘du’ &
    ‘df’ disk usage started showing up after these Hosts were added.

I think the problem here may be that I need to outsmart Fsys. I’m going
to take David Gibbs’ suggestion and search the QUICS archives for that
possible “ltrunc” solution. I’ll let you all know how it turns out.

Thanks

Rob



“Andrzej Kocon” <ako@box43.gnet.pl> wrote in message
news:3b02423c.12178051@inn.qnx.com

On Tue, 15 May 2001 14:51:47 -0500, “Rob” <rob@spamyourself.com>
wrote:

[SNIP]



The only other variable I can think of is there are more database processes
running on the node I’m having problems with (its counterpart just keeps
copies of the database files, but not the active processes). There aren’t
very many files associated with this, but they are always open. When they
grow, they do so in rather large chunks (like 100-300k at a time). I wonder
what the algorithm is for preallocation? Is it some multiple of the
requested extent/last write past EOF?

Does the term ‘database’ mean that a kind of DBMS is used?
If so, a database file definition may contain an extension size/pre-grow
parameter.

ako

Y I B… there it is! One would think that if mount can do this, there must
be some sort of private message that is sent to Fsys… wouldn’t one? I’ll
see if I can get some of the other possible solutions to work, but this is
something good to know. Thanks.

Rob



One possibility is to create a separate partition to house this stuff on and
mount it with that option.

  • Richard

“Rob” <rob@spamyourself.com> wrote in message
news:9dudi1$ivl$1@inn.qnx.com

Y I B… there it is! One would think that if mount can do this, there must
be some sort of private message that is sent to Fsys… wouldn’t one? I’ll
see if I can get some of the other possible solutions to work, but this is
something good to know. Thanks.

Rob


“Mario Charest” <mcharest@antispam_zinformatic.com> wrote in message
news:9du2tl$cho$> 1@inn.qnx.com> …

“Rob” <> rob@spamyourself.com> > wrote in message
news:9ds151$3bs$> 1@inn.qnx.com> …
Thanks Mario…

But I knew most of that already > :wink: > I did say I was keeping it simple
to
start with.

First off… Sorry, it’s not really feasible to be doing a chkfsys on
our
production system (unless I want to come in at 3:00am and shut
everything
down). BUT, I’m fairly certain the file system isn’t corrupt… we
haven’t
had any bad data files for a very long time. I personally check on
that
every day.

One thing I don’t know is how to turn off Fsys preallocation. Is it
an
Fsys option? Ideally, I’d like to be able to tune it… or even
better
have
access to some sort of ioctl mechanism, so it could be handles on a
file
by
file basis.

I just check and the option is not in Fsys but rather in the mount
command,
the option is -g.


P.S.
Warren is alive and well… but, he’s got some new toys to play with.
And
then of course there’s snipy practice for Quake… Expect him when you
see/hear from him > :wink:


Thanks for the info.


“Mario Charest” <mcharest@antispam_zinformatic.com> wrote in message
news:9drru3$7r$> 1@inn.qnx.com> …

“Rob” <> rob@spamyourself.com> > wrote in message
news:9drpmj$rui$> 1@inn.qnx.com> …
I’ll keep this simple to start with…

We have a 4 Gig hard drive mounted as /u on one of our (very busy)
production systems, running QNX 4.25. It’s the primary “data
drive”.
If
I
run a “du -k /u”, it reports ~1.7 Gig of used file space. If I
run
“df -h”,
it reports 3.5 Gig used or 84% of the total space available on /u.
That’s
twice what “du” reports. How can this be? (and, no I’m not
mistaking
512
byte blocks for kilobytes)

du reports file size, while df reports info as specify by the inode
and
bitmap file.
First |I would run chkfsys on the HD to make sure the filesystem is
intact.

On file that grow a lot, the filesystem preallocate space to reduce
fragmentation.
If there a lots of small file it’s very possible this preallocated
space
is
eating
up a fair amount of disk space. The preallocation can be turned
off.



Also, I can’t easily verify this (because it’s a 24x7 production
system),
but it seems to me, that the last time I checked when all
production
processing was shutdown, the “du” and “df” numbers pretty much
matched
(at
around ~43% used).

At this point I’m just looking for some clues… of what to look
at/for?

TIA

-Rob






\

“Rob” <rob@spamyourself.com> wrote in message
news:9dudi1$ivl$1@inn.qnx.com

Y I B… there it is! One would think that if mount can do this, there must
be some sort of private message that is sent to Fsys… wouldn’t one?

Fsys probably doesn’t support changing this on the fly; hence it
can only be done at mount time.


Yes, that’s a possibility… another is to get a bigger hard drive :wink: I’m
working on that… but what I really need is an interim software solution
until I’m ready to upgrade the system.

-Rob

“Brown, Richard” <brownr@aecl.ca> wrote in message
news:9dug70$kn6$1@inn.qnx.com

One possibility is to create a separate partition to house this stuff on and
mount it with that option.

  • Richard


Previously, Ivan Bannon wrote in qdn.public.qnx4:

Mitchell,

When you say "Try “ls -x” on one of your files. If the 2nd number is
very
large then this may be your problem. ", what consitutes a large number?
I’ve
noticed these on some of our data files where this number is between 1 and
20.

20 is not a very large number unless the file is fairly small, say less than
100K. This number is the number of extents. An extent is a single contiguous
block of sectors. If extents get very small, your system will be fragmented.
It would take some very bad fragmentation to get to where the i-nodes were
sucking up a lot of sectors. If the number of extents were close to the
number of sectors in a file, well, that’s pretty bad.

Mitchell Schoenbrun --------- maschoen@pobox.com

Previously, Rob wrote in qdn.public.qnx4:

David’s response seems to contradict that… I’ll be investigating further.

He seems to be correct. I was not aware of this until now. QNX2 did not
do this, and Bill never mentioned this behavior, but it actually does make
good sense. Imagine you had a program that alternately opened one of two
files A and B, and added a sector each time. The disk would end up looking
like this:

UUUUUUUUUUUUUUUUUUUUUUUUUUUABABABABABABFFFFFFFFFFFFFFFFFFFFF

U - Used
F - Free

Short of corruption this is the worst possible situation for
a hard disk to be in. With pre-allocation you would get

UUUUUUUUUUUUUUAAAAAAAAAAABBBBBBBBBBBAAAAAAA…BBBBBB…FFFFFFFFF

. = pre-allocated

A single transaction at a time. Transaction processing is identical on both
systems. Again, I don’t think append mode writes are the problem here.

Ok, but if you turn off pre-allocation and you really close the file each
time you will probably get a bigger problem. Just my 2 cents here.



Mitchell Schoenbrun --------- maschoen@pobox.com

Mario Charest <mcharest@antispam_zinformatic.com> wrote:

“Rob” <rob@spamyourself.com> wrote in message
news:9dudi1$ivl$1@inn.qnx.com…
Y I B… there it is! One would think that if mount can do this, there must
be some sort of private message that is sent to Fsys… wouldn’t one?

Fsys probably doesn’t support changing this on the fly; hence it
can only be done at mount time.

Actually “at mount time” is MORE on the fly than a command-line parameter
to Fsys (which is what was originally being looked for). It does give
you the flexibility to have some files keep their pre-grown blocks after
close and others not to do so.

(My understanding is that the -g option won’t turn off file pre-growth,
but it will make the pre-grown blocks be “released” when the file is
closed.)

This may be the solution for you.

-David

QNX Training Services
dagibbs@qnx.com

More extremely useful information (will it never end… let’s hope not, eh
:wink: )

I think my particular scenario is the opposite case of where the -g option
would be beneficial. The files that get opened and closed a lot are
transaction and log files, which I DO want to be pre-grown. The files that
I suspect are the problem (active database files) are open all the time.

By the way, I had a beast of a time trying to find something in the QDN
Knowledge Base… but I finally did find a relevant article by searching
for “ltrunc” (and it confirms everything you originally “recalled”). I
didn’t find anything that specifically mentioned searching the QUICS
archives, though??

I’m working on an update to my database engine library that will hopefully
alleviate the current problems. I’m trying to use ltrunc to redefine the
file size each time a database file needs to be expanded. Expansion is in
an explicit “block size” custom defined for each file. A “block” is to hold
a fixed number of records… This can be anywhere from a few hundred to
thousands, depending on factors I’d just as soon not get into right now.
What I’d like to be able to do is ensure that an expansion is done in a
single (contiguous if possible) extent. However, there are a few things I’m
not quite sure of…

An expansion block is not going to be initially filled with active records.
Either there is only one record to be added (in the case of an append to the
end of the index) or it will be filled with about half a block’s worth of
records (in the case of needing to insert a record into a full block
somewhere in the middle of the file). My question is: Will it work to just
do an ltrunc at the end-of-block file offset (beyond the actual EOF,
hopefully telling Fsys an explicit extent/pre-growth size) or must I first
do an lseek to the end-of-block offset - 1, write a byte and then do an
ltrunc (explicitly setting the EOF and telling Fsys to forget about doing
any pre-growth) ? Or other possible options? … Anyone?
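
For concreteness, the two variants being asked about look roughly like the
sketch below. This is only an illustration of the question, not an answer:
whether Fsys treats the first form as an explicit pre-extension is exactly
what is unknown here, and the ltrunc(fd, offset, whence) interface is assumed
from the QNX 4 library.

/* Option 1: ltrunc() out to the end-of-block offset, past the current
 * EOF, hoping Fsys takes that as an explicit grow of the file.
 * Option 2: lseek() to (end-of-block - 1), write one byte so the space
 * is really allocated, then ltrunc() there so the EOF is set explicitly
 * and no pre-growth is left pending.
 * Both are sketches of the two approaches described above.
 */
#include <unistd.h>
#include <sys/types.h>

int grow_option1(int fd, off_t block_end)
{
    return (ltrunc(fd, block_end, SEEK_SET) == (off_t)-1) ? -1 : 0;
}

int grow_option2(int fd, off_t block_end)
{
    char last = 0;

    if (lseek(fd, block_end - 1, SEEK_SET) == (off_t)-1)
        return -1;
    if (write(fd, &last, 1) != 1)
        return -1;
    return (ltrunc(fd, block_end, SEEK_SET) == (off_t)-1) ? -1 : 0;
}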

TIA
Rob “I wish I had Bill Flowers’ home phone number… and a case of
interesting home brew” :wink:

P.S.
Sorry David, I hit the wrong button the first time.

“Rob” <rob@spamyourself.com> wrote in message
news:9e0mch$4cl$1@inn.qnx.com


An expansion block is not going to be initially filled with active records.
Either there is only one record to be added (in the case of an append to the
end of the index) or it will be filled with about half a block’s worth of
records (in the case of needing to insert a record into a full block
somewhere in the middle of the file). My question is: Will it work to just
do an ltrunc at the end-of-block file offset (beyond the actual EOF,
hopefully telling Fsys an explicit extent/pre-growth size) or must I first
do an lseek to the end-of-block offset - 1, write a byte and then do an
ltrunc (explicitly setting the EOF and telling Fsys to forget about doing
any pre-growth)? Or other possible options? … Anyone?

TIA
Rob “I wish I had Bill Flowers’ home phone number… and a case of
interesting home brew” :wink:

I’m not sure about this, but I seem to recall Bill talking about a
flag passed in the open() that would prevent pre-growing; that’s
a vague memory…
