Upper file size limit in QNX4?

Hello QNX users

I’m adding further information on the query I sent to this list
yesterday (16 Aug).

I am attempting to make a database file slightly larger. The C routine
I am using for this (adding 1 byte at a time to the file) reaches a
maximum size and then fails with errno=24 (file too large).
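
In case it helps, the routine boils down to something like this
(simplified; the real code and the dbase.dat name here are placeholders):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int  fd  = open("dbase.dat", O_WRONLY | O_APPEND);
    char pad = 0;

    if (fd == -1) {
        perror("open");
        return 1;
    }
    while (write(fd, &pad, 1) == 1)
        ;                               /* grow the file one byte at a time */
    printf("stopped at errno=%d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}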

The file size is 2147483647 bytes, or 4290772993 blocks
(512 bytes/block).
The maximum value of an unsigned long is 4294967295, which is much
bigger, and I am only adding a byte at a time.

In Solaris I think there are the csh limit and Bourne shell ulimit
commands, which can be used to set an upper limit on file size for
any user. Is that what I have run into in QNX? And is this limit
fixed or configurable?

Regards
Geoff Pincham
geoffrey.r.pincham@bhpbilliton.com

Hi Geoff

While the math indicates that you might squeeze out a few more bytes, you
are basically up against the limit of the size of a file in QNX4.

Here is an option if you can use it. Since your file is so large I’m
guessing that you don’t exactly need all the benefits of a complete file
system for that particular file. If not, then try this:

  1. allocate a partition that is as large as needed
  2. don’t bother to initialize it
  3. write your own server process that is a database file system. Support
    whatever kind of I/O you need, but limit your access to the file to open(),
    blockread(), blockwrite(), and close(). Blockread() and blockwrite() do
    their I/O by block number (relative sector number) offsets instead of byte
    offsets, i.e. you can’t use read(), write(), lseek(), or tell(). You will
    be able to impose unsigned long offsets into the file yourself.
  4. write your own read() and write() cover functions for access to this file
    (sketched below), so that you will have to change a minimum of your
    application code.
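
To make item 4 concrete, here is a rough sketch of a read() cover
function. The block_read()/block_write() prototypes are only my guess at
what a block-number interface looks like; substitute the real QNX 4
blockread()/blockwrite() declarations from your headers:

#include <string.h>

#define BLK_SIZE 512UL

/* Assumed block-level interface: transfer 'nblks' 512-byte blocks starting
 * at block number 'blkno' on the raw partition open on 'fd'.  Returns the
 * number of blocks transferred, or -1 on error.  Replace with the real
 * QNX 4 declarations. */
extern int block_read(int fd, unsigned long blkno, int nblks, void *buf);
extern int block_write(int fd, unsigned long blkno, int nblks, void *buf);

/* Read 'len' bytes starting at an unsigned long byte offset 'off'.
 * Reads that don't start or end on a block boundary are staged through a
 * one-block bounce buffer. */
int db_read(int fd, unsigned long off, void *dst, unsigned len)
{
    char blk[BLK_SIZE];
    char *out = dst;

    while (len > 0) {
        unsigned long blkno = off / BLK_SIZE;             /* which block   */
        unsigned      skip  = (unsigned)(off % BLK_SIZE); /* offset in it  */
        unsigned      chunk = (unsigned)(BLK_SIZE - skip);

        if (chunk > len)
            chunk = len;

        if (block_read(fd, blkno, 1, blk) != 1)
            return -1;

        memcpy(out, blk + skip, chunk);
        out += chunk;
        off += chunk;
        len -= chunk;
    }
    return 0;
}

/* db_write() is the mirror image: read-modify-write the partial first and
 * last blocks, block_write() whole blocks in between. */

With block numbers held in unsigned longs the partition itself can be much
bigger than 4 GB; the 4 GB ceiling in this sketch comes only from keeping
the byte offsets in an unsigned long.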

I have done this (long ago). It’s not too hard.

Bill Caroselli


Geoff Pincham <pincham.geoffrey.gr@bhp.com.au> wrote:

In Solaris I think there are the csh limit and Bourne shell ulimit
commands, which can be used to set an upper limit on file size for
any user. Is that what I have run into in QNX? And is this limit
fixed or configurable?

This limit is fixed in the filesystem: it comes from lseek(), which
must be able to seek to any position in the file and which takes a
signed 32-bit value for the file position (which it has to, as you can
do a relative seek backwards in the file, so negative offsets must be
representable).
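
A quick way to see where the number comes from: the size Geoff is stuck
at, 2147483647 bytes, is exactly 2^31 - 1, the largest value a signed
32-bit position can hold. A tiny test program (assuming a 32-bit
compiler, as on QNX 4) shows it:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* On a 32-bit compiler a long is 32 bits, so: */
    printf("largest file position = %ld bytes\n", (long)LONG_MAX);          /* 2147483647 */
    printf("that is %ld full 512-byte blocks\n", (long)(LONG_MAX / 512));   /* 4194303    */
    return 0;
}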

The only thing I can suggest as an alternative is to use a non-filesystem
partition for your database, and use the block_read() and block_write()
routines to read/write 512-byte blocks (or sets of 512-byte blocks) to/from
the database. This will require some planning in advance, but if you
know which database files will become large, you can do it.
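
For example, if the database is laid out in fixed-size records, most of
the advance planning is the arithmetic that maps a record number to a
block number. The names and the 128-byte record size below are purely
illustrative:

#define BLK_SIZE 512UL
#define REC_SIZE 128UL                  /* illustrative: 4 records per block */

/* Map a record number to the block that holds it and the record's byte
 * offset inside that block.  Because block numbers are unsigned longs,
 * the addressable range is 2^32 blocks of 512 bytes, far beyond 2 GB. */
void rec_to_block(unsigned long recno,
                  unsigned long *blkno, unsigned *off_in_blk)
{
    unsigned long recs_per_blk = BLK_SIZE / REC_SIZE;

    *blkno      = recno / recs_per_blk;
    *off_in_blk = (unsigned)((recno % recs_per_blk) * REC_SIZE);
}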

-David

QNX Training Services
dagibbs@qnx.com

Previously, David Gibbs wrote in comp.os.qnx:

The only thing I can suggest as an alternative is to use a non-filesystem
partition for your database, and use the block_read() and block_write()
routines to read/write 512-byte blocks to/from the database.

Yet another possibility would be to create an I/O manager that makes
multiple very large files look like one. You could do this either
using just the block I/O routines, or you could have an IOCTL call
that switches from one area of the meta file to another.
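
As a sketch of the offset arithmetic such an I/O manager would perform
(the 1 GB slice size and the names are illustrative only):

#define SLICE_BYTES 0x40000000UL        /* 1 GB per underlying file */

/* The client sees one big logical file; the manager stores it as several
 * underlying files, each kept safely below the 2 GB per-file limit.
 * Split a logical offset into (which underlying file, offset within it).
 * Logical offsets wider than an unsigned long (i.e. past 4 GB) would need
 * a block-number scheme or the IOCTL-style slice switch mentioned above. */
void split_offset(unsigned long logical,
                  unsigned *file_index, unsigned long *file_offset)
{
    *file_index  = (unsigned)(logical / SLICE_BYTES);
    *file_offset = logical % SLICE_BYTES;
}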


Mitchell Schoenbrun --------- maschoen@pobox.com