Fsys -H

A couple of years ago there was a thread in QUICS concerning very large hard
drives and (as I recall) Fsys running out of heap space. Someone at QSSL
provided an elaborate calculation for arriving at a numeric value for the -H
option to Fsys. It was based on sectors, heads, etc. At that time we were
trying to get a 70G RAID system working (and did so successfully). The
number that was arrived at was 215040.

I thought I had kept some notes on this, but I can’t seem to find them now
and there doesn’t seem to be anything in the QDN knowledge base or even a -H
option for Fsys documented anywhere.

We’re now trying to get an 18G RAID system working… I suspect the 215040
is just a bit of overkill. We’re using Fsys version 4.24B on our production
systems.

My questions are…
Is there a newer version of Fsys that doesn’t require the -H option for
“large” drives?
If not, can someone please dig up the documentation on how to calculate
“the number” required for -H?
OR is 18G safely under the limit for needing the -H option? FYI… there is
actually ~26G of disk on the new system… an 8G boot drive (with 2
partitions, t77 & t78) and an 18G “user” drive (with 1 t77 partition).

TIA

-Rob

Hi

I remember being a victim of this. We had about 12 SCSI disks with 2
partitions each. We had to supply the -H parameter.

But I don’t think it referred to pure size. I think it referred to number
of block special files (partitions that were mounted).

But that was long ago. Hope this helps some, but I can’t swear that my memory
is 100% on this one.


Found it in the QUICS archives… Oct 1999…

It was John Garvey (thanks again, John :wink:). Jay Hogg had started the thread.
Jay (if you’re listening), did you ever document this thing? I still don’t
quite follow the calculations…

=== From QUICS ===
Jay Hogg wrote:

The -H being added to the “known requirements” was what I was
missing. I couldn’t figure out how 63+20 = 300+K.

Is there a baseline rule (I’ve gotta doc this for those that
are technically challenged) for calculations like:

Each 10 gig of disk, or portion thereof, needs 20k
Each 1000 files needs 5k
Add 20k for other stuff (partitions etc.)

This would give me:
hd0 10gig (2 partitions) +20k
hd0t77 3000 files +15k
hd0t78 1000 files +5k
hd1 18gig (1 partition) +40k
hd1t80 4000 files +20k

For a total of 100k. In the environment I’m in, I would much
rather be safe than sorry.

If I added another 18gig:
hd2 +40k
hd2t80 4000 files +20k
= 160k
Safe guess?
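The rule of thumb above can be sketched as a quick calculator. To be clear, this is unofficial, pieced together from this thread, not QSSL's documented formula; the round-up-to-the-next-10-GB/1000-files granularity and the KB-to-bytes conversion for -H are my assumptions from the numbers quoted here (210k = 215040 elsewhere in the thread suggests -H takes KB × 1024 bytes):

```python
import math

# Unofficial rule of thumb from this thread (NOT QSSL documentation):
#   20 KB per 10 GB of disk (or portion thereof)
#   5 KB per 1000 files (or portion thereof)
def fsys_heap_kb(drive_sizes_gb, partition_file_counts):
    disk_kb = sum(math.ceil(gb / 10) * 20 for gb in drive_sizes_gb)
    file_kb = sum(math.ceil(n / 1000) * 5 for n in partition_file_counts)
    return disk_kb + file_kb

# Jay's example: hd0 10G (3000 + 1000 files), hd1 18G (4000 files)
kb = fsys_heap_kb([10, 18], [3000, 1000, 4000])
print(kb)         # 100, matching the 100k total above
print(kb * 1024)  # candidate value for Fsys -H, if -H is indeed in bytes
```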

Has this since been fixed, or is there actually a formula to use? Any help
would be appreciated.

How does one guesstimate the number of files to add in? I just did a ‘find
/u | wc’ on one of our production nodes and came up with over 125,000
files. OK, so maybe 10-20% of those are directories, but that’s still a lot
more than 4000 - 8000 files. Is this in addition to what Fsys will handle
by default?
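A note on that count: a bare `find /u | wc` also counts every directory (and `wc` without `-l` reports words and characters as well as lines), so it overstates the file count. A sketch that counts regular files only, where `/u` from the post is just an example path:

```python
import os

def count_regular_files(root):
    # Walk the tree and count regular files, skipping the directory
    # entries that `find /u | wc` would also have counted.
    total = 0
    for _dirpath, _dirnames, filenames in os.walk(root):
        total += len(filenames)
    return total
```

Whether directories also need to be budgeted into -H is a question this thread never settles.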

Just taking a stab at this…
Old System…
hd0 4G +10k
hd1 70G +140k
Misc. +60k … Good for 12,000 files? @ 5k per 1000 files

210k = 215040

New System…
hd0 8G +20k
hd1 18G +40k
Misc. +60k … Good for 12,000 files @ 5k per 1000 files

120k = 122880 … Does this look reasonable?
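Both conversions above do check out if -H is read as bytes (an inference from 210k = 215040, not documented behavior):

```python
# Old system: 10k + 140k + 60k = 210 KB; new system: 20k + 40k + 60k = 120 KB
old_h = (10 + 140 + 60) * 1024
new_h = (20 + 40 + 60) * 1024
print(old_h, new_h)  # 215040 122880
```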

-Rob


If you have access to the old archives, look at the fsys thread
starting around 6337 - it actually started before that.

Here is one of John’s posts:

From quics!jgarvey Fri Oct 8 18:11:55 1999
Xref: quics quics.experts.fsys:6346
Newsgroups: quics.experts.fsys
Path: quics!jgarvey
From: jgarvey@qnx.com (John Garvey)
Subject: Re: EMFILE on a lightly loaded system
Organization: QNX Software Systems
Message-ID: <FJ75Bz.Bu4@qnx.com>
References: <FJ42II.72z@qnx.com> <FJ5D8p.35J@qnx.com>
<7tfji5$f2l$1@gateway.qnx.com> <FJ6pun.6tK@qnx.com>
Distribution: quics
X-Newsreader: TIN [version 1.2 PL2]
Date: Wed, 6 Oct 1999 19:40:47 GMT

Jay Hogg (jshogg@qnx.com) wrote:

Did anything interesting appear in traceinfo?
If you look towards the end of ~/snapshot.emfile, there are a number
of traceinfo entries that didn’t get translated (even using traceinfo.net),
so I would have to say ‘yes’, but I don’t know what they are.

Oct 04 20:22:38 2 00003024 0000F7B1 0004E781 656C6966

Oops, these are “internal heap exhaustion (object=file)”. So you
have enhausted the data heap of Fsys (since it shares its DS with the
drivers it pregrows this at startup to a guesstimate value). What
was fooling me was that this situation will return ENFILE from Fsys,
not the EMFILE you were seeing?!
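John's (object=file) reading can be verified from the raw entry itself: the last field, 656C6966, spells out the object name when decoded as a little-endian 32-bit word. This packing is my inference from the hex, not a documented traceinfo layout:

```python
word = 0x656C6966  # last field of the "Oct 04 20:22:38" entry above
name = word.to_bytes(4, "little").decode("ascii")
print(name)  # file
```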

One of my guys reported Fsys -f 1200 (apparently) fixed the problem
but …

Yes, increasing that parameter (or -i or -C too) bumps the heap guess
up.

I’d say you’ve recently added an extra or a bigger drive to the system,
and the heap guess is now too small. You can increase it with the
“Fsys -H” parameter. The traceinfo message shows the size of
the heap that was guessed (drive-specific, because no drives have yet
been attached) plus what was created taking into account known
parameters
(-f, -i, -C, etc.). You should add some onto the first number and use
this as the argument to -H. I’ve put a format line below that you can add to your
“/etc/config/traceinfo” file (in the Fsys section) to format this…

36 internal heap exhaustion (nbytes=%ld/%ld) (object=%s)


