Ramdisk (6.2)

When running devb-ram, it seems to consume
more memory than requested.
e.g.
#sin in
localhost 130M 1 119 Pentium II stepping 2
#devb-ram
[1] 868384

Path =0 etc etc

#sin in
localhost 117M 1 119 Pentium II stepping 2

Seems there is a constant 10M extra allocated/used by
devb-ram, irrespective of the capacity specified.
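(A more direct way to see where that memory sits than diffing “sin in”
output: pidin can break memory down per process. Assuming the 6.2 pidin
takes the same arguments as later versions:

#pidin -P devb-ram mem

That should list devb-ram’s heap and shared-object mappings, so you can
tell how much of the 10M is cache.)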

Alex Cellarius <acellarius@yahoo.com> wrote:

> When running devb-ram, it seems to consume
> more memory than requested.
> […]
> Seems there is a constant 10M extra allocated/used by
> devb-ram, irrespective of the capacity specified.

devb-ram uses the “standard” disk shared objects, including
cam-disk.so and io-blk.so. Of particular interest is
io-blk.so, which allocates RAM for the disk cache. In the case
of a RAM disk this isn’t particularly needed – but I would
bet it still happens. Try passing “blk cache=20k” or something
similar to devb-ram.
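For example (option names as I remember them from the io-blk.so
docs – check “use devb-ram” on your system before trusting the
syntax):

#devb-ram ram capacity=8192 blk cache=100k &
#sin in

capacity= should be in 512-byte blocks, so that makes a 4M disk,
with the cache clamped to 100k instead of the default few megabytes.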

-David

QNX Training Services
http://www.qnx.com/support/training/
Please followup in this newsgroup if you have further questions.

“David Gibbs” <dagibbs@qnx.com> wrote in message
news:ah3voq$ee4$1@nntp.qnx.com

> Alex Cellarius <acellarius@yahoo.com> wrote:
> > Seems there is a constant 10M extra allocated/used by
> > devb-ram, irrespective of the capacity specified.
>
> devb-ram uses the “standard” disk shared objects, including
> cam-disk.so and io-blk.so. Of particular interest is
> io-blk.so, which allocates RAM for the disk cache. […] Try
> passing “blk cache=20k” or something similar to devb-ram.

Alex, note that a RAM disk isn’t that fast compared to devb-eide.
It doesn’t use DMA, hence requires more CPU power. In some cases
I have seen devb-ram being slower than devb-eide.

You might want to look at devf-ram. It doesn’t require all
the shared objects devb-ram requires.


“Mario Charest” postmaster@127.0.0.1 wrote in message
news:ah47bv$k40$1@inn.qnx.com

> Alex, note that a RAM disk isn’t that fast compared to devb-eide.
> It doesn’t use DMA, hence requires more CPU power. In some cases
> I have seen devb-ram being slower than devb-eide.

You made my day with this joke Mario :wink:
What DMA? It is memory to begin with. devb-ram is slow as a pig
not because of CPU usage. It is just something silly in the
framework that imposes short (like 512-byte) message sizes
somewhere along the path the data takes. And that (raw messaging
with small buffers) is REALLY slow, at least in QNX 6.
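If anyone wants to put a number on that, something like the following
quick hack would do it – an untested sketch, error handling omitted,
that just times raw send/receive/reply round-trips for a given buffer
size:

/* msgbw.c - rough bandwidth test for native message passing.
 * Sketch only: compile with qcc, run as "./msgbw 512", "./msgbw 2048"... */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <pthread.h>
#include <sys/neutrino.h>

#define LOOPS 10000

static int chid;

static void *echo_server(void *arg)
{
    static char buf[65536];
    struct _msg_info info;
    int rcvid;

    for (;;) {
        rcvid = MsgReceive(chid, buf, sizeof(buf), &info);
        if (rcvid == -1)
            break;
        MsgReply(rcvid, 0, buf, info.msglen);   /* echo the data back */
    }
    return NULL;
}

int main(int argc, char *argv[])
{
    int size = (argc > 1) ? atoi(argv[1]) : 512;
    char *buf = malloc(size);
    pthread_t tid;
    uint64_t t0, t1;
    double sec;
    int coid, i;

    memset(buf, 0, size);
    chid = ChannelCreate(0);
    pthread_create(&tid, NULL, echo_server, NULL);
    coid = ConnectAttach(0, 0, chid, _NTO_SIDE_CHANNEL, 0);

    ClockTime(CLOCK_REALTIME, NULL, &t0);          /* time in nanoseconds */
    for (i = 0; i < LOOPS; i++)
        MsgSend(coid, buf, size, buf, size);       /* send + reply copy */
    ClockTime(CLOCK_REALTIME, NULL, &t1);

    sec = (t1 - t0) / 1e9;
    printf("%d-byte messages: %.2f us/round-trip, %.1f Mbytes/s\n",
           size, sec * 1e6 / LOOPS, 2.0 * size * LOOPS / sec / 1e6);
    return 0;
}

Run it with 512, 2048 and 65536 and compare: if the us/round-trip figure
barely moves while the buffer grows, per-message overhead is the ceiling,
which is exactly the devb-ram situation I am describing.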

– igor

“Igor Kovalenko” <kovalenko@attbi.com> wrote in message
news:ah4c4j$nof$1@inn.qnx.com

> > Alex, note that a RAM disk isn’t that fast compared to devb-eide.
> > It doesn’t use DMA, hence requires more CPU power. In some cases
> > I have seen devb-ram being slower than devb-eide.


> You made my day with this joke Mario :wink:

Joke?

> What DMA? It is memory to begin with. devb-ram is slow as a pig
> not because of CPU usage. It is just something silly in the
> framework that imposes short (like 512-byte) message sizes
> somewhere along the path the data takes. And that (raw messaging
> with small buffers) is REALLY slow, at least in QNX 6.

That devb-ram is slow because of the framework and short messages,
I agree. But my point was that it’s slower than the HD because
it’s not using DMA.

I assumed readers of my post would have also read David’s message
explaining that devb-ram uses the same framework as devb-eide.


“Mario Charest” postmaster@127.0.0.1 wrote in message
news:ah4lp4$11c$1@inn.qnx.com

> That devb-ram is slow because of the framework and short messages,
> I agree. But my point was that it’s slower than the HD because
> it’s not using DMA.
>
> I assumed readers of my post would have also read David’s message
> explaining that devb-ram uses the same framework as devb-eide.

DMA stands for Direct Memory Access, which is a way to allow a device to
access host memory directly, without help from the CPU. With devb-ram your
data is already in host memory, so any memory access is ‘direct memory
access’. You still have to move data between the buffer cache and
application buffers, but that happens in any case, since DMA can’t send
data directly into user buffers (they are not DMA-safe, plus data must go
through the buffer cache anyway).

Now keep in mind that RAM is about 3 orders of magnitude faster than any
HD, so there is no way for disk DMA to be faster than accessing RAM
directly. Logically, then, devb-ram must be faster than devb-eide no
matter what. If devb-ram is slower, the only possible reason is that the
I/O framework of QNX 6 caps I/O bandwidth at levels far below maximum
memory bandwidth. Experiments indicate the bandwidth of devb-ram is
roughly equivalent to raw message passing with 512-byte buffers, adjusted
for overhead (and it could be about 10 times better if 2k buffers were
used).
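To put a simple model on it: if every message costs a fixed overhead t
plus a copy at rate R, throughput for message size s is s / (t + s/R).
With made-up but plausible numbers – t = 30 us, R = 100 Mbytes/s –
512-byte messages give 512 / 35 us, about 15 Mbytes/s, while 64K messages
would give 64K / 685 us, about 96 Mbytes/s. As long as t dominates,
throughput grows almost linearly with the message size, which is why the
512-byte limit hurts so much.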

– igor

On Wed, 17 Jul 2002 12:56:17 -0400, “Mario Charest” postmaster@127.0.0.1 wrote:

> You might want to look at devf-ram. It doesn’t require all
> the shared objects devb-ram requires.

Thanks David & Mario

Igor Kovalenko <Igor.Kovalenko@motorola.com> wrote in article <ah4r3k$503$1@inn.qnx.com>…

> DMA stands for Direct Memory Access, which is a way to allow a device to
> access host memory directly, without help from the CPU.

I agree.

> With devb-ram your data is already in host memory, so any memory access
> is ‘direct memory access’.

It’s very inaccurate… I don’t even understand what you were trying to
say… The PC/AT built-in DMA controller is able to transfer one chunk of
host memory to another chunk. See the DMA control register (0x08), bit 0 –
enable memory-to-memory DMA (ch0<->ch1). This trick was used for memory
refresh in old PCs. An external DMA controller on the PCI bus can
definitely do the same work quite well… I agree it’s absolutely useless,
but it would still be direct memory access… so your statement above is not
quite true… I see the quote marks… but I still disagree, I’m sorry Igor.

> You still have to move data between the buffer cache and application
> buffers, but that happens in any case, since DMA can’t send data
> directly into user buffers (they are not DMA-safe, plus data must go
> through the buffer cache anyway).
>
> Now keep in mind that RAM is about 3 orders of magnitude faster than any
> HD, so there is no way for disk DMA to be faster than accessing RAM
> directly. Logically, then, devb-ram must be faster than devb-eide no
> matter what. If devb-ram is slower, the only possible reason is that the
> I/O framework of QNX 6 caps I/O bandwidth at levels far below maximum
> memory bandwidth. Experiments indicate the bandwidth of devb-ram is
> roughly equivalent to raw message passing with 512-byte buffers.

When a real HD works, it just gets the data to read/write for a sector
(512 bytes); how to actually write/read the sector is the HD hardware’s
problem. Keep in mind, the HD effectively uses its own cache memory
buffer. When devb-ram is working, I believe it uses 512-byte message
passing to simulate the read/write-sector operation.

I believe that if devb-ram were as smart as a modern HD, it would work
much more effectively than a real HD. And, of course, since devb-ram is a
software simulation of a hardware thing, it will eat a lot of CPU time;
and, yes, the DMA issue is not the point here, I’m sorry Mario.

Cheers.

Eduard.
ed1k at ukr dot net



“ed1k” <ed1k@spamerstrap.com> wrote in message
news:01c22e31$6bbcb680$106fa8c0@ED1K…

> > With devb-ram your data is already in host memory, so any memory
> > access is ‘direct memory access’.
>
> It’s very inaccurate… The PC/AT built-in DMA controller is able to
> transfer one chunk of host memory to another chunk. […] I see the quote
> marks… but I still disagree, I’m sorry Igor.

I guess I was a bit unclear. What I meant is that for a RAM driver the
situation looks like “the DMA is already done”, since the data is already
in host RAM. Yes, I know the AT DMA controller could theoretically be
used to transfer chunks of host memory, but that is irrelevant, because
the EIDE driver does not do that either. It would not be very useful
anyway, because the transfer speed would be limited by ISA bandwidth, I
believe. I don’t even understand why you wanted to bring up ISA DMA here;
it is pretty useless for almost anything except ancient ISA audio &
network cards and floppies. Sorry ed1k :wink:

> When a real HD works, it just gets the data to read/write for a sector
> (512 bytes); how to actually write/read the sector is the HD hardware’s
> problem. Keep in mind, the HD effectively uses its own cache memory
> buffer. When devb-ram is working, I believe it uses 512-byte message
> passing to simulate the read/write-sector operation.

There is no reason to limit yourself to a 512-byte sector size. As you
mentioned yourself, even hard drives try to use larger chunks for
transfers. I would not be surprised if there were drives with a larger
physical sector size either.

> I believe that if devb-ram were as smart as a modern HD, it would work
> much more effectively than a real HD. And, of course, since devb-ram is
> a software simulation of a hardware thing, it will eat a lot of CPU time

I don’t believe a RAM driver has to simulate HD hardware. A RAM driver
appears to me like “EIDE minus access to media”. Then of course, it could
be written in such a way too (if the point was to test/benchmark the CAM
layer for hard disks rather than to be efficient). But in that case QNX
should now be busy figuring out why 512-byte messaging is so slow, and
fixing it.

> and, yes, the DMA issue is not the point here, I’m sorry Mario.

Glad we agree on something.

– igor

“Igor Kovalenko” <Igor.Kovalenko@motorola.com> wrote in message
news:ah4r3k$503$1@inn.qnx.com

> DMA stands for Direct Memory Access, which is a way to allow a device to
> access host memory directly, without help from the CPU. With devb-ram
> your data is already in host memory, so any memory access is ‘direct
> memory access’.

I’m not so sure about that. Yes, the data is in RAM, but it’s my
impression it needs to be copied into some internal buffer, just like it
gets moved from the HD.

> You still have to move data between the buffer cache and application
> buffers, but that happens in any case, since DMA can’t send data
> directly into user buffers (they are not DMA-safe, plus data must go
> through the buffer cache anyway).
>
> Now keep in mind that RAM is about 3 orders of magnitude faster than any
> HD, so there is no way for disk DMA to be faster than accessing RAM
> directly. Logically, then, devb-ram must be faster than devb-eide no
> matter what. If devb-ram is slower, the only possible reason is that the
> I/O framework of QNX 6 caps I/O bandwidth at levels far below maximum
> memory bandwidth. Experiments indicate the bandwidth of devb-ram is
> roughly equivalent to raw message passing with 512-byte buffers,
> adjusted for overhead (and it could be about 10 times better if 2k
> buffers were used).

Maybe I should redo my test; my memory may be playing tricks on me ;-) I
may be thinking of the filesystem cache versus the RAM disk.


Igor Kovalenko <kovalenko@attbi.com> wrote in article <ah6vpl$p52$1@inn.qnx.com>…

> I guess I was a bit unclear. What I meant is that for a RAM driver the
> situation looks like “the DMA is already done”, since the data is
> already in host RAM.

Hurrah! You said it much better this time. And all I meant was to remove
that bit of unclarity.

> Yes, I know the AT DMA controller could theoretically be used to
> transfer chunks of host memory, but that is irrelevant, because the EIDE
> driver does not do that either. It would not be very useful anyway,
> because the transfer speed would be limited by ISA bandwidth, I believe.
> I don’t even understand why you wanted to bring up ISA DMA here; it is
> pretty useless for almost anything except ancient ISA audio & network
> cards and floppies. Sorry ed1k :wink:

:slight_smile: I was bringing attention to the fact that DMA transfers
are DMA transfers, regardless of whether it’s ISA DMA, EIDE DMA or any
other DMA.

A couple of

LACC src
SACL dest

or just a single

BLDD src, dest

(sorry for the TMS mnemonics rather than a QNX example, but I think you
understand them) IS NOT DMA under any circumstances. (BTW, it should be
faster than any DMA; there is no DMA handshake delay.)

Exactly this was unclear from your previous post. What the EIDE driver
does is use the DMA operations supported by the EIDE controller. The
mechanism is very similar to ISA DMA or any other DMA: first the driver
programs the controller to operate with DMA, and then it issues a READ
DMA / WRITE DMA command or the like. Electrically, there is the same
DMARQ from the HD and the same DMACK- response from the host (EIDE
controller). Programmatically, the data arrives in the memory buffer
without CPU intervention, and there is no need to poll and read data from
the data register.

But DMA can be used for memory-to-memory transfers, and in that case
memory access is direct memory access. As I also mentioned, it is useless
and ridiculous unless it’s used for memory refresh :slight_smile:.

And, yes, DMA is useless for a RAM driver, because the data is already in
host memory. And definitely a RAM driver should be faster than a real HD,
because it doesn’t need any media-to-memory transfers. If it’s slower,
then there is some issue in the implementation.

> > When devb-ram is working, I believe it uses 512-byte message passing
> > to simulate the read/write-sector operation.
>
> There is no reason to limit yourself to a 512-byte sector size. As you
> mentioned yourself, even hard drives try to use larger chunks for
> transfers.

I don’t remember saying that :wink: I mentioned that the drive uses its
own cache: when you read only one sector, it is able to read a few
sectors and put them in its own cache. The drive will return exactly what
you asked for – the 512 bytes of data – but highly probably the next
command will be “read next sector”, and then you get the data much
faster, from the cache.

> I would not be surprised if there were drives with a larger physical
> sector size either.

I’m not sure there are drives with a larger physical sector size. A drive
may use some extra space per sector for data integrity etc., but to the
end user it presents a pretty standard size. The filesystem layer works
with clusters, I believe, and a cluster can be anywhere from 1 sector to
256 sectors (the maximum quantity of sectors that can be read/written by
one ATAPI command).
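(For scale: 256 sectors at 512 bytes each is 128K moved by a single
command – 256 times the 512-byte message size being blamed for devb-ram’s
throughput above.)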

> > I believe that if devb-ram were as smart as a modern HD, it would
> > work much more effectively than a real HD. And, of course, since
> > devb-ram is a software simulation of a hardware thing, it will eat a
> > lot of CPU time
>
> I don’t believe a RAM driver has to simulate HD hardware. A RAM driver
> appears to me like “EIDE minus access to media”.

As far as I can guess from all the info in this thread, devb-ram is “EIDE
minus access to media plus access to memory”. Since it uses io-blk, I
believe it does simulate the hardware in some way.

> Then of course, it could be written in such a way too (if the point was
> to test/benchmark the CAM layer for hard disks rather than to be
> efficient). But in that case QNX should now be busy figuring out why
> 512-byte messaging is so slow, and fixing it.

I thought you mentioned some experimental data about the bandwidth of
devb-ram, and this was my guess at an explanation. And of course, I hope
someone from QSS reads this newsgroup and will take a look at this
:slight_smile:

> > and, yes, the DMA issue is not the point here, I’m sorry Mario.


> Glad we agree on something.

:wink: Igor, you were right, but not very clear (I can excuse myself on
grounds of the language issue, but you are usually very clear
:slight_smile:)
Cheers,

Eduard.
ed1k at ukr dot net

> > and, yes, the DMA issue is not the point here, I’m sorry Mario.

Sorry? What for? As far as I know there is nothing wrong with being wrong
:wink:

I know more than I did yesterday, so all is fine and dandy :wink:

> Glad we agree on something.
>
> – igor