QNX is not meant for large systems?

Nowadays a PC server can be fitted with a large amount of physical memory,
but QNX is limited to 4 GB of VIRTUAL MEMORY, which is a big problem if we
have many processes sharing the same huge data space (shmem). For example,
if we have 1 GB of physical memory and our program has a code size of 2 MB
while sharing a data space of 200 MB in total, then we can run AT MOST 19
processes of that program. At first we thought we could run over 100
processes, because the shmem would occupy 200 MB of physical memory and the
100 processes would occupy only 2 MB of physical memory, since they all use
the same code space. That is true in terms of physical memory, but not in
terms of VIRTUAL MEMORY. Each process uses the same physical memory area
for the 200 MB of shared memory, so adding a new process does not increase
physical memory usage at all. But each process also occupies 200 MB of
virtual memory (at a different virtual address), and that address space
runs out when we start our 20th process, since 20 x 200 MB is already the
full 4 GB.
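
To make this concrete, here is a minimal sketch of what each of our
processes does, written against the POSIX shm_open()/mmap() interface; the
object name "/bigdata" and the sizes are only illustrative, not our real
code:

/* Minimal sketch: every instance maps the same shared-memory object. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/bigdata"             /* illustrative object name */
#define SHM_SIZE (200L * 1024 * 1024)   /* 200 MB shared data space  */

int main(void)
{
    int fd = shm_open(SHM_NAME, O_RDWR | O_CREAT, 0666);
    if (fd == -1) {
        perror("shm_open");
        return EXIT_FAILURE;
    }

    /* The first creator sets the size; later openers inherit it. */
    if (ftruncate(fd, SHM_SIZE) == -1) {
        perror("ftruncate");
        return EXIT_FAILURE;
    }

    /* This call hands each process its own 200 MB window of VIRTUAL
       address space, even though the underlying physical pages are
       shared by every process that maps the object. */
    void *p = mmap(0, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    printf("pid %d mapped %ld bytes at %p\n", getpid(), (long)SHM_SIZE, p);

    /* ... work with the shared data ... */

    munmap(p, SHM_SIZE);
    close(fd);
    return EXIT_SUCCESS;
}

The physical pages behind the object exist only once, but every running
instance spends another 200 MB of virtual address space on its mapping,
which is why the 20th instance fails.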

Does anyone know how to solve this constraint?

Regards,
Johannes

Johannes <jsukamtoh@yahoo.com> wrote:

Nowadays a PC server can be fitted with a large amount of physical memory,
but QNX is limited to 4 GB of VIRTUAL MEMORY, which is a big problem if we
have many processes sharing the same huge data space (shmem).

Does anyone know how to solve this constraint?

There isn't a solution. As noted in your second thread, QNX 4 has,
and has always had, scalability limitations at the high end. It was
never really designed to scale much above a large desktop system.
Scalability at the high end has improved over the lifespan of QNX 4,
but the limits have never gone away, and they won't. (At one point,
the limit was 256 MB of virtual address space in use.)

When QNX 6 was designed, high-end scalability was one of the design
considerations, and it doesn't have the same sort of constraints as
QNX 4; it should scale appropriately right up to the mega-system range.

-David

QNX Training Services
dagibbs@qnx.com