If you have access to the old archives, look at the fsys thread
starting around 6337 - it actually started before that.
Here is one of John’s posts:
From quics!jgarvey Fri Oct 8 18:11:55 1999
Xref: quics quics.experts.fsys:6346
Newsgroups: quics.experts.fsys
Path: quics!jgarvey
From: jgarvey@qnx.com (John Garvey)
Subject: Re: EMFILE on a lightly loaded system
Organization: QNX Software Systems
Message-ID: <FJ75Bz.Bu4@qnx.com>
References: <FJ42II.72z@qnx.com> <FJ5D8p.35J@qnx.com>
<7tfji5$f2l$1@gateway.qnx.com> <FJ6pun.6tK@qnx.com>
Distribution: quics
X-Newsreader: TIN [version 1.2 PL2]
Date: Wed, 6 Oct 1999 19:40:47 GMT
Jay Hogg (jshogg@qnx.com) wrote:
>> Did anything interesting appear in traceinfo?
> If you look towards the end of ~/snapshot.emfile there are a number
> of traceinfo entries that didn’t get translated (even using traceinfo.net)
> so I would have to say ‘yes’ but I don’t know what they are.
> Oct 04 20:22:38 2 00003024 0000F7B1 0004E781 656C6966
Oops, these are “internal heap exhaustion (object=file)”. So you
have exhausted the data heap of Fsys (since it shares its DS with the
drivers, it pregrows this at startup to a guesstimate value). What
was fooling me was that this situation will return ENFILE from Fsys,
not the EMFILE you were seeing?!
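The object tag at the end of that trace entry is just the name packed as ASCII: on a little-endian x86 dump, 0x656C6966 reads back as “file”. A quick sketch of the decoding (my interpretation of the dump above, not an official traceinfo recipe):

```python
import struct

# Last word of the traceinfo entry above: "656C6966"
word = 0x656C6966

# On a little-endian machine the four bytes come out low byte first:
# 0x66 'f', 0x69 'i', 0x6C 'l', 0x65 'e'.
name = struct.pack("<I", word).decode("ascii")
print(name)  # → file
```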
> One of my guys reported Fsys -f 1200 (apparently) fixed the problem
> but …
Yes, increasing that parameter (or -i or -C too) bumps the heap guess
up.
I’d say you’ve recently added an extra or a bigger drive to the system,
and the heap guess is now too small. You can increase it with the
“Fsys -H” parameter. The traceinfo message shows the size of the heap
that was guessed (drive-specific, because no drives have yet been
attached) plus what was created taking into account known parameters
(-f, -i, -C, etc). You should add some onto the first number and use
that as the -H value. I’ve put a format line below that you can add to
your “/etc/config/traceinfo” file (in the Fsys section) to format this …

36 internal heap exhaustion (nbytes=%ld/%ld) (object=%s)
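For reference, -H (going by John’s note) takes the heap size in bytes and belongs on the Fsys command line at boot. A hypothetical sysinit fragment, using the 215040 figure quoted later in this thread purely as an example; your number will differ:

```shell
# Hypothetical QNX 4 sysinit fragment -- values are examples only.
# -H pregrows the Fsys data heap; -f raises the open-files limit,
# which (per John's note) also bumps the heap guess.
Fsys -H 215040 -f 1200
```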
“Bill Caroselli @ Q-TPS” wrote:
Hi
I remember being a victim of this. We had about 12 SCSI disks with 2
partitions each. We had to supply the -H parameter.
But I don’t think it referred to pure size. I think it referred to number
of block special files (partitions that were mounted).
But that was long ago. Hope this helps some, but I can’t swear that my
memory is 100% on this one.
“Rob” <rob@spamyourself.com> wrote in message
news:9c1t06$por$1@inn.qnx.com …
A couple of years ago there was a thread in quics concerning very large
hard drives and (as I recall) Fsys running out of heap space. Someone at
QSSL provided an elaborate calculation for providing a numeric value for
the -H option to Fsys. It was based on sectors, heads, etc. At that time
we were trying to get a 70G RAID system working (and did so successfully).
The number that was arrived at was 215040.
I thought I had kept some notes on this, but I can’t seem to find them
now, and there doesn’t seem to be anything in the QDN knowledge base, or
even a -H option for Fsys documented anywhere.

We’re now trying to get an 18G RAID system working… I suspect the 215040
is just a bit of overkill. We’re using Fsys version 4.24B on our
production systems.
My questions are…

Is there a newer version of Fsys that doesn’t require the -H option for
“large” drives?

If not, can someone please dig up the documentation on how to calculate
“the number” required for -H?

OR is 18G safely under the limit for needing the -H option? FYI… there
is actually ~26G of disk on the new system… an 8G boot drive (with 2
partitions (t77 & t78)) and an 18G “user” drive (with 1 t77 partition).

TIA
-Rob