Heap errors simulation

Hi,
I want to get the following errors in traceinfo. Somehow I haven't
been able to simulate them.

I want one of the below lines to be logged in traceinfo.

00003024 internal heap exhaustion (nbytes=d/t) (object=s)

or
Jun 28 19:51:34 2 00003024 00013F1B 00052EEB 646F6E69 65
Jun 28 19:51:34 2 00003024 00013F1B 00052EEB 646F6E69 65
Jun 28 19:51:34 2 00003024 00013F1B 00052EEB 656D616E
Jun 28 19:51:34 2 00003024 00013F1B 00052EEB 656D616E
Jun 28 19:51:34 2 00003024 00013F1B 00052EEB 656D616E
Jun 28 19:51:34 2 00003024 00013F1B 00052EEB 656D616E
Jun 28 19:51:34 2 00003024 00013F1B 00052EEB 646F6E69 65
Jun 28 19:51:34 2 00003024 00013F1B 00052EEB 656D616E
Jun 28 19:51:39 2 00003024 00013F1B 00052EEB 646F6E69 65
Jun 28 19:51:39 2 00003024 00013F1B 00052EEB 656D616E

I know the fix for this, but I want to simulate the error and verify
that the fix works.

I have written some code (below), but none of it produces
3024 in traceinfo.

Code 1:



Regards
Navin

bash-2.00$ ./a.out
Opening file:0
Opening file:1
Opening file:2
Opening file:3
Opening file:4
Opening file:5
Opening file:6
Opening file:7
Opening file:8
Opening file:9
Opening file:10
Opening file:11
Opening file:12
Opening file:13
Opening file:14
Opening file:15
Opening file:16
Opening file:17
Opening file:18
Opening file:19

//1/home/spothuri/a.out terminated (SIGSEGV) at
10A3:000011BE.
Segmentation fault
bash-2.00$ cat f_ld1.c
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <sys/types.h>

#define MAX 100

FILE *fp[MAX];
int arr[MAX];
char str[20] = "/tmp/large/", stra[6],
     buf[20] = "large disk try";

int
main ()
{
    int i, j;

    for (i = 0; i < MAX; i++)
    {
        printf ("Opening file:%d\n", i);
        sprintf (stra, "%d", i);
        strcat (str, stra);
        arr[i] = open (str, O_CREAT | O_WRONLY | O_EXCL, 0644);
        write (arr[i], buf, sizeof (buf));
        /* With and without below 2 lines i get the same error
        for (j = 0; j < 10; j++)
            arr[MAX-i-1] = dup (arr[i]);

        close (arr[i]); */
    }
    return 0;
}
bash-2.00$



bash-2.00$ cat fork.c

#include <unistd.h>

int main()
{
    while (1)
    {
        fork();
    }
    return 0;
}

bash-2.00$ traceinfo | tail
Jul 10 08:42:53 2 00001014 No free pid
Jul 10 08:42:53 2 00001014 No free pid
Jul 10 08:42:53 2 00001014 No free pid
Jul 10 08:42:53 2 00001014 No free pid
Jul 10 08:42:53 2 00001014 No free pid
Jul 10 08:42:53 2 00001014 No free pid
Jul 10 08:42:53 2 00001014 No free pid
Jul 10 08:42:53 2 00001014 No free pid
Jul 10 08:42:53 2 00001014 No free pid
Warning! 49935 overruns have occurred. Some trace events lost.
bash-2.00$

I think you need to learn to program first. The first program has a
few obvious bugs. First, you reuse the variable str, so your file
names get ever longer; eventually you overwrite protected or
non-existent memory. You should check the return code from open(),
because eventually you will get a "too many files" error, and your
current code will then abort when it tries the write. The loop with
dup() in it seems to be nonsense, so I would remove it. You also need
to close your files at the end of the loop, unless you want to get
the "too many files" error. In any event, there is no reason to think
that this will cause a heap overrun…

The second program shows a good way to eat up all the pid resources in
your system. Once it does, the many forked processes will also soak up
all the CPU time. Why do you think this should cause a heap overrun?

Hi,
I have seen the bugs. I would have commented out the dup part; it was
just for creating too many file descriptors.

Can you tell me what would actually simulate a heap overrun for the
program Fsys? That is what causes the "internal heap exhaustion" and
other messages.

This sounds like an error that is idiosyncratic to Fsys. That is,
Fsys is probably creating the trace line itself. You could simulate
this by just writing an identical-looking record with the trace
facility, but I doubt that is what you want to do. I think you
would like to know how to cause the error? If you are getting the
error, then you probably know more than anyone, including QSSL, about
what will cause it. You probably need to contact QSSL tech support
about this directly. One obvious direction to go in would be to
configure a system that you know will fail, so that the dump is saved
across the network to another system.

Yeah, I know how to resolve this: by adding Fsys -H { large size }. I
would actually like to know how to simulate the error rather than
changing the format in /etc/config/traceinfo.
I know the fix but don't know the exact cause :-).

thanks

For the heap to run out you need a very large hard disk (80G; well, not
that large by today's standards) and lots of open files, which I think
you also need to read from (open alone is not enough).

But how about trying to specify a very small heap value with -H?

“navinp” <navinp@gmail-dot-com.no-spam.invalid> wrote in message
news:e8vso9$n00$1@inn.qnx.com

> Yeah, I know how to resolve this: by adding Fsys -H { large size }. I
> would actually like to know how to simulate the error rather than
> changing the format in /etc/config/traceinfo.
> I know the fix but don't know the exact cause :-).
>
> thanks

navinp <navinp@gmail-dot-com.no-spam.invalid> wrote:

> Hi,
> I have seen the bugs. I would have commented out the dup part; it was
> just for creating too many file descriptors.
>
> Can you tell me what would actually simulate a heap overrun for the
> program Fsys? That is what causes the "internal heap exhaustion" and
> other messages.

Multiple large disks hooked up to the machine.

-David

David Gibbs
QNX Training Services
dagibbs@qnx.com