Setting cache=0 on 1 of 2 partitions

G’day folks

I am making a custom 6.2 image with two partitions. Partition 1 (t79) will
hold the OS, packages, etc.
Partition 2 (t78) will be just for data.

I already have the image set up using devb-aha8. There is only one physical
SCSI drive (hd0).

My problem is that I want to set cache=0 for the data partition while still
leaving caching enabled as standard on the boot partition.

I tried starting two instances of devb-aha8, but it seems that you can only
run two instances if you have two controllers.

Is there any other way to solve my problem?

Thanks in advance

Matt McHugh

matt mchugh <mattgreeneggswithspam@ruzz.com> wrote:

I am making a custom 6.2 image with two partitions. Partition 1 (t79) will
hold the OS, packages, etc.
Partition 2 (t78) will be just for data.
I already have the image set up using devb-aha8. There is only one physical
SCSI drive (hd0).
My problem is that I want to set cache=0 for the data partition while still
leaving caching enabled as standard on the boot partition.

What part of caching are you trying to disable? There always has to be
some cache involved in reading, as data goes from the disk to the user
via the buffer cache. For writing, the cache is also involved, but you
can force no write-behind delay by doing a synchronous mount. So, if
your desire not to cache t78 is just so your data is always flushed
immediately to disk, then you can say something like:
“mount -tqnx4 -ocommit=high /dev/hd0t78 /data”

Note that some options like “cache=” will apply to the whole disk
subsystem, whereas other options can apply on a mount-by-mount basis.
A “use io-blk.so” differentiates the two classes of option …
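For illustration, the split might look something like this on a driver
command line (option names per “use io-blk.so”; treat the values and the
automount syntax as placeholders rather than a tested invocation):

    # blk options apply globally to the whole disk subsystem
    devb-aha8 blk cache=2m,automount=hd0t79:/ &

    # per-mount options apply to a single partition
    mount -tqnx4 -ocommit=high /dev/hd0t78 /data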

Is there any other way to solve my problem?

If the above doesn’t help please give more detail as to what you are
trying to achieve by disabling the cache for a single partition …

What we are trying to achieve is to maximise write performance when
streaming large files to disk, files which will not be accessed in the
immediate future.

In general (and on other operating systems), the recommendation in these
circumstances is to turn off the cache, since it provides no benefit and only
adds overhead (of course, it is still beneficial to gather writes together).

John, what is the recommendation for the best configuration for this under
QNX 6? As previously mentioned, we are using SCSI drives (aha8).

Also, is qnx4 actually the best-performing fs for this application? We don’t
really care about POSIX compliance, and we will have a very “flat” directory
structure on the target disk (just a root and one or two levels of
sub-directories). Under these circumstances, do we gain anything by using
FAT32 or ext2 for the “data” partition?

Rob Rutherford


Robert Rutherford <ruzz@nospamplease.ruzz.com> wrote:

What we are trying to achieve is to maximise write performance when
streaming large files to disk, files which will not be accessed in the
immediate future. In general (and on other operating systems), the
recommendation in these circumstances is to turn off the cache, since it
provides no benefit and only adds overhead (of course, it is still beneficial
to gather writes together).

Hmm, whilst there are internal mechanisms that do this, there is no
external interface to provide such a hint. Using some cache does allow
writes to be chunked together, as you point out, but then the cache is
maintained as a simple LRU. You might try the “blk wipe=” option to limit
the amount of cache occupied by a single file (but this will also affect
reads). You could use direct-IO to bypass the cache (although this is
disabled in the current release).
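For example, something like this (the wipe= value format is documented
under “use io-blk.so”; the number below is only a guess at the syntax):

    # limit the amount of cache any one file may occupy
    devb-aha8 blk cache=2m,wipe=10 &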

Also, is qnx4 actually the best-performing fs for this application? We
don’t really care about POSIX compliance, and we will have a very “flat”
directory structure on the target disk (just a root and one or two levels
of sub-directories). Under these circumstances, do we gain anything by
using FAT32 or ext2 for the “data” partition?

fs-qnx4 and fs-dos have similar performance. Both of these have more
“implementation tricks” and are faster than fs-ext2, especially in more
recent (unreleased) versions. The qnx4 “overalloc” option may help by
attempting to allocate larger contiguous areas when growing a file.
Try to write in 8-32k chunks (to reduce context-switch/message-passing
overhead). Specifying “commit=none” on the data partition may also improve
performance by allowing all bitmap/inode/FAT updates to be non-synchronous.
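As a rough sketch of the write-in-large-chunks advice (the file name, sizes
and fill data below are arbitrary, and error handling is minimal):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define CHUNK (32 * 1024)   /* 8-32k per write cuts msg-pass overhead */

    int main(void)
    {
        char *buf = malloc(CHUNK);
        int fd = open("/data/stream.dat", O_WRONLY | O_CREAT | O_TRUNC, 0666);
        int i;

        if (buf == NULL || fd == -1) {
            perror("setup");
            return EXIT_FAILURE;
        }
        memset(buf, 0, CHUNK);           /* stand-in for real data */
        for (i = 0; i < 1024; i++) {     /* 32MB in 32k writes */
            if (write(fd, buf, CHUNK) != CHUNK) {
                perror("write");
                break;
            }
        }
        close(fd);
        free(buf);
        return EXIT_SUCCESS;
    }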

You could use direct-IO to bypass the cache (although this is
disabled in the current release).

Could you explain what you mean by direct-IO?
Is this readblock()/writeblock() or are you talking about something else
altogether?
In the current version, is there significant performance gain to be had by
using readblock/writeblock over regular read/write calls?


Thanks for the hints.

Robert

Robert Rutherford <ruzz@nospamplease.ruzz.com> wrote:

You could use direct-IO to bypass the cache (although this is
disabled in the current release).
Could you explain what you mean by direct-IO?

In the traditional sense, i.e. DMA direct from disk to user space. Support
has been in for a while but is disabled in the current release. A rewrite
of the read-ahead code also means that it actually gives you no gain for
sequential reads (for EIDE/UDMA4, both seem to peak at ~38MB/s), but in
conjunction with the non-0-fill file extension it should help for writes.
Of course, sadly, this information is of no use to you yet :-/

Is this readblock()/writeblock() or are you talking about something else
altogether? In the current version, is there significant performance gain
to be had by using readblock/writeblock over regular read/write calls?

No, these offer absolutely no gain except for random access, where their
advantage is the combination of the IO_SEEK and IO_READ/WRITE messages,
and the corresponding reduction in message passing. For sequential
access they may be very, very slightly slower (unnecessary seek processing :-))
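To make that concrete, here is a sketch of the two random-access styles
(readblock() is the QNX-specific call declared in <unistd.h>; the 512-byte
block size is just an assumption for the example):

    #include <unistd.h>

    #define BLKSIZE 512

    /* two messages to the filesystem: IO_SEEK, then IO_READ */
    ssize_t read_at_offset(int fd, off_t off, void *buf, size_t n)
    {
        if (lseek(fd, off, SEEK_SET) == (off_t)-1)
            return -1;
        return read(fd, buf, n);
    }

    /* one combined message: the seek is folded into the read */
    int read_at_block(int fd, unsigned blkno, void *buf, int nblks)
    {
        return readblock(fd, BLKSIZE, blkno, nblks, buf);
    }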

“John Garvey” <jgarvey@qnx.com> wrote in message
news:ahi7li$520$1@nntp.qnx.com

In the traditional sense, ie DMA direct from disk to user space. Support
has been in for a while but is disabled in the current release.
Of course, sadly, this information is of no use to you yet :-/

Sorry to be dense, but when this does become of use to us (;-)), how does it
actually work from the user/application point of view? Do we just make
regular read/write calls and the libs take care of the rest?

Robert