QNX is not meant for large systems?

Nowadays a PC server can be installed with a large amount of physical memory,
but QNX is limited to 4 GB of VIRTUAL MEMORY, which is a big problem if we
have processes sharing the same huge data space (shmem). For example, if we
have 1 GB of physical memory, our process code size is 2 MB, and the
processes share the same data space with a total size of 200 MB, then we can
run AT MOST 19 processes of the same program. At first we thought we could
run over 100 processes, because the shmem would occupy 200 MB of the physical
memory and the 100 processes would occupy 2 MB of the physical memory, since
they use the same code space. That is true in terms of physical memory, but
not in VIRTUAL MEMORY. Each process uses the same physical memory area for
the 200 MB of shared memory, so adding a new process does not increase the
physical memory usage at all. But each process occupies 200 MB of virtual
memory (at a different virtual address), and the virtual address space runs
out when we start our 20th process.

Does anyone know how to solve this constraint?

Regards,
Johannes

In article <9oqcsu$1d3$2@inn.qnx.com>, jsukamtoh@yahoo.com says…

[original post quoted above; snipped]

Well, I am not QSSL, and they may disagree (please pipe up if you do,
QSSL), but I don’t think that is how virtual memory works.
AFAIK, virtual memory is the memory given to each process (kept separate
through the use of the GDT and LDT tables). Virtual memory is about mapping
physical memory to virtual addresses for each process; the GDT and LDT
define the available (virtual and physical) memory addresses for EACH
process. The 1 GB of physical memory will be your limiting factor here!
Even if the shared (physical) memory of the code space were mapped to
different addresses in each process, it would not use up any more than
2 MB of the virtual address space of any given process. The 2 MB of each
of the other processes is NOT mapped in at all those possible addresses;
other physical memory would be mapped in there, or nothing at all. Each
process would see only its single 2 MB code space, and there should be
nothing stopping them all from being at the same virtual address. It is
not important (in the context of this discussion) whether they are at the
same virtual address or not, though I suspect that they will in fact be at
the same virtual address in all of the different address spaces (remember,
each process gets its own Address Space!). Unless the code is PIC
(Position Independent Code), the linker would set it up so that the code
MUST be at the same virtual address in every process’ Address Space.
If the data memory is also truly shared, it too would not take up any
more than 200 MB in any given process’ Address Space.
Now, with many processes each mapping large amounts of memory, the GDT and
the LDTs for these processes might take a lot of space (in physical
memory). You may need to have the OS allocate more memory for these tables
(which I believe can be done by giving command-line parameters to the
programs when you build your boot image).


Stephen Munnings
Software Developer
Corman Technologies Inc.

“Johannes” <jsukamtoh@yahoo.com> wrote in message
news:9oqcsu$1d3$2@inn.qnx.com

[original post snipped]

Does anyone know how to solve this constraint?

Assuming you are using QNX6, you could turn these processes into threads.
I believe threads share the same virtual memory info; I’m guessing here.


The problem is page table space. You need page-table entries covering 200MB
for each process. That adds up to over 1 MB of page table space, which is no
doubt much more than the default allocated by the OS. I could not find
a QNX4 parameter to Proc that would modify this, but there may be one.
I would post in qdn.public.qnx4 to get a more qualified answer.

Mitchell Schoenbrun --------- maschoen@pobox.com

I agree with you, because that is how I thought it should work until I hit
this problem. I did not believe it at first either, so I wrote a small
program (attached) to test it. The program takes the name of the shared
memory as an argument and allocates 5 shared-memory segments of 10 MB each. I
use 10 MB here because my test machine has only 96 MB; you can increase it if
you have more memory, so that you hit the problem faster. After allocating
the 50 MB of shared memory, the tester sleeps for a long period so that it
does not release the shared memory. Run the program in the background
(a.out X &) again and again. While it is running, watch the Memory and
Virtual figures in “sin in”. The Memory figure, which is physical memory,
does not change much, which is correct. But the Virtual figure in the lower
left corner (I had never noticed it before) increases tremendously, and
eventually you will encounter “mmap fails: Not enough memory”. On my PC,
sin in shows Virtual Memory as 59M/4219M and Memory as 34M/97M.

I also wish to hear from QSSL, please!
#include <errno.h>
#include <malloc.h>
#include <signal.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#include <sys/kernel.h>
#include <sys/seginfo.h>
#include <sys/mman.h>
#include <sys/types.h>

void XLInitShm (char **dest, char *shm_name, int shm_size)
{
    int fd;
    char temp[80];

    if ((fd = shm_open (shm_name, O_RDWR, 0777)) == -1)
    {
        if (errno != ENOENT)
        {
            sprintf (temp, "\n%d %d shm_open: %s", getpid(), errno, shm_name);
            perror (temp);
            exit (-1);
        }

        /* The object does not exist yet: create and size it. */
        if ((fd = shm_open (shm_name, O_RDWR | O_CREAT, 0777)) == -1)
        {
            sprintf (temp, "\n%d %d shm_open fails: %s", getpid(), errno, shm_name);
            perror (temp);
            exit (-1);
        }

        if (ltrunc (fd, shm_size, SEEK_SET) == -1)
        {
            sprintf (temp, "\n%d %d ltrunc fails: %s %d", getpid(), errno, shm_name,
                shm_size);
            perror (temp);
            exit (-1);
        }
    }

    if (((*dest = mmap (0, shm_size,
            PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0)) == NULL)
        || (*dest == (char *) -1))
    {
        sprintf (temp, "\n%d %d mmap fails: %s", getpid(), errno, shm_name);
        perror (temp);
        sleep (3);
    }

    /* 010501 Do not unlink or close fd here */

} /* End of XLInitShm */




int main (int argc, char **argv)
{
    char *pPtr;
    char sName[100];
    int n;

    if (argc != 2)
    {
        printf ("usage: shm_cr <name>\n");
        exit (0);
    }

    /*
     * Allocate shared memory of total size 50 MB (5 segments of 10 MB).
     */
    for (n = 0; n < 5; ++n)
    {
        sprintf (sName, "%s%d", argv[1], n);
        XLInitShm (&pPtr, sName, 10000000);
        printf ("%d Map addr is %p\n", n, (void *) pPtr);
        sleep (1);
    }

    /*
     * Sleep here so that the process remains in memory, and so does the
     * shared memory.
     */
    sleep (1000);

    return 0;
}




“Stephen Munnings” <steve@cormantech.com> wrote in message
news:MPG.161a9c2185ec44449896cc@inn.qnx.com

[Stephen’s message quoted above; snipped]

Unfortunately I am using 4.25E, and I am not sure it would work there
either. Try my tester program to find out.

“Mario Charest” <mcharest@clipzinformatic.com> wrote in message
news:9oqjot$58c$1@inn.qnx.com

[Mario’s message quoted above; snipped]

Thanks. I hope to hear the answer from QSSL. IF it is TRUE that QNX works
that way (I may be wrong, and that’s why I need the answer from QSSL), I
suggest QSSL add it to their list of limitations, so that QNX developers
are aware of it during the design phase.

“Mitchell Schoenbrun” <maschoen@pobox.com> wrote in message
news:Voyager.010925151658.11880C@schoenbrun.com

[Mitchell’s message quoted above; snipped]

Johannes <jsukamtoh@yahoo.com> wrote:

[quoted text snipped]

To clarify - are you using QNX4 or QNX6?

chris



cdm@qnx.com                   "The faster I go, the behinder I get."
Chris McKillop                         -- Lewis Carroll --
Software Engineer, QSSL

I am using QNX 4.25E

“Chris McKillop” <cdm@qnx.com> wrote in message
news:9orp32$2pf$1@nntp.qnx.com

[Chris’s message quoted above; snipped]

“Johannes” <jsukamtoh@yahoo.com> wrote in message
news:9orjfp$m45$2@inn.qnx.com

Unfortunately I am using 4.25E. And I am not sure it works either. Try my
tester program to find out.

No, it wouldn’t work; QNX 4.25 threads are just like processes.

[earlier messages snipped]