Igor Kovalenko <email@example.com> wrote in article <firstname.lastname@example.org>…
“ed1k” <email@example.com> wrote in message
With devb-ram your data is already in the host memory, so any memory access is “direct memory access”.
I guess I was a bit unclear. What I meant is that for a RAM driver the situation looks like “DMA is already done”, since the data is already in host RAM.
Hurrah! You said it much better this time. And all I meant was to remove that bit of unclarity.
know that the AT DMA controller could theoretically be used to transfer chunks of host memory, but that is irrelevant because the EIDE driver does not do that either. It would not be very useful anyway, because the transfer speed would be limited by ISA bandwidth, I believe. I don’t even understand why you wanted to bring up the issue of ISA DMA here; it is pretty useless for almost anything except ancient ISA audio & network cards and floppies. Sorry ed1k
I was drawing attention to the fact that DMA transfers are DMA transfers, regardless of whether it is ISA DMA, EIDE DMA, or any other DMA.
or just a single

BLDD src, dest

(sorry for a TMS example rather than a QNX one, but I think you understand those mnemonics) IS NOT DMA under any circumstances. (BTW, it should be faster than any DMA, there is no DMA
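To make the point concrete, here is a tiny C sketch of what such an instruction does: the CPU itself performs every load and store, which is exactly why it is not DMA. The function name is mine, just for illustration.

```c
#include <string.h>
#include <stddef.h>

/* A plain CPU-driven memory-to-memory copy: the processor itself
 * executes every load and store, so no DMA controller is involved.
 * This is roughly the C analogue of a block-move instruction like
 * the TMS BLDD mentioned above. */
static void cpu_block_copy(void *dest, const void *src, size_t nbytes)
{
    memcpy(dest, src, nbytes);   /* the CPU does the moving */
}
```

No controller setup, no DMARQ/DMACK handshake: the copy is done entirely by the CPU.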
Exactly this was unclear from your previous post. What the EIDE driver does is use the DMA operations supported by the EIDE controller. The mechanism is very similar to ISA DMA or any other DMA: first the driver programs the controller to operate with DMA, and then it issues the READ DMA / WRITE DMA commands or something similar. Electrically, there is the same DMARQ from the HD and the same DMACK- response from the host (EIDE controller). Programmatically, the data arrives in the memory buffer without CPU intervention, and there is no need for polling and reading data from the data register.
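The two-step sequence described above (program the controller, then command the drive) can be sketched schematically. This is not real driver code: the port I/O is stubbed so the sequence is visible, the register offsets follow my reading of the common bus-master IDE (SFF-8038i style) layout, and a real driver must of course use the controller’s documented interface.

```c
#include <stdint.h>

/* Hypothetical sketch of a bus-master EIDE DMA read, for illustration
 * only. Offsets/opcodes are my assumptions about the usual layout. */
#define BM_PRD_ADDR  0x04    /* bus-master: physical region descriptor table */
#define BM_COMMAND   0x00    /* bus-master: command register */
#define ATA_SECCNT   0x1F2   /* drive: sector count register */
#define ATA_COMMAND  0x1F7   /* drive: command register */
#define CMD_READ_DMA 0xC8    /* ATA READ DMA opcode */

/* Stubbed port I/O: record each write so the sequence can be inspected. */
static struct { uint16_t port; uint32_t val; } trace[16];
static int ntrace;

static void outport(uint16_t port, uint32_t val)
{
    trace[ntrace].port = port;
    trace[ntrace].val  = val;
    ntrace++;
}

static void start_dma_read(uint32_t prd_phys, uint8_t sectors)
{
    outport(BM_PRD_ADDR, prd_phys);      /* 1. tell controller where the buffers are */
    outport(BM_COMMAND, 1u << 3);        /* 2. set transfer direction (to memory) */
    outport(ATA_SECCNT, sectors);        /* 3. program the drive's sector count */
    outport(ATA_COMMAND, CMD_READ_DMA);  /* 4. issue READ DMA to the drive */
    outport(BM_COMMAND, (1u << 3) | 1);  /* 5. start: DMARQ/DMACK handshake runs,
                                          *    data flows without the CPU polling */
}
```

After step 5 the CPU is free until the completion interrupt; that is the whole point of the DMA mode.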
But DMA can be used for memory-to-memory transfers; in that case the memory access is direct memory access. I also mentioned that it is useless and ridiculous unless it is used for memory refresh. And, yes, DMA is useless for a RAM driver because the data is already in host memory. And definitely a RAM driver should be faster than a real HD because it doesn’t need any media-to-memory transfers; if it’s slower, then there is some issue in the implementation.
When a real HD works, it just gets the data for reading/writing a sector (512 bytes); it is the HD hardware’s problem how to write/read the sector. Keep in mind, the HD effectively uses its own cache memory buffer. When devb-ram is working, I believe it uses 512-byte message passing to simulate the read/write sector operation.
There is no reason to limit yourself to a 512-byte sector size. As you mentioned yourself, even hard drives try to use larger chunks for transfers.
I don’t remember saying that. I mentioned that the drive uses its own cache: when you read only one sector, it is able to read a few sectors and put them in its own cache. The drive will return exactly what you asked for, the 512 bytes of data, but quite probably the next command will be “read next sector”, and you get that data much faster from the cache.
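A toy model of that drive-side read-ahead might look like this. Everything here is illustrative (names, sizes, the 8-sector read-ahead window), not a model of any real drive; the caller is assumed to stay at least a read-ahead window away from the end of the media.

```c
#include <string.h>

/* Toy read-ahead model: a request for one sector pulls several
 * following sectors into the drive's cache, so the likely
 * "read next sector" command is served from cache, not media. */
#define SECTOR    512
#define READAHEAD 8                         /* sectors fetched per media access */

static unsigned char media[64 * SECTOR];    /* the "platters" */
static unsigned char cache[READAHEAD * SECTOR];
static long cache_first = -1;               /* first cached LBA, -1 = empty */
static int  media_accesses;                 /* count of slow media reads */

static void read_sector(long lba, unsigned char *buf)
{
    if (cache_first < 0 || lba < cache_first || lba >= cache_first + READAHEAD) {
        /* cache miss: one slow media access fills the whole read-ahead window */
        memcpy(cache, &media[lba * SECTOR], READAHEAD * SECTOR);
        cache_first = lba;
        media_accesses++;
    }
    /* the host still gets exactly the 512 bytes it asked for */
    memcpy(buf, &cache[(lba - cache_first) * SECTOR], SECTOR);
}
```

Eight sequential single-sector reads then cost only one media access; that is why sequential reads feel so much faster than the raw media would suggest.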
I would not be surprised if there were drives with a larger physical sector size.
I’m not sure there were drives with a larger physical sector size. At least, a drive can use some extra size for data integrity etc., but for the end user it should be a pretty standard size. The filesystem layer works with clusters, I believe, and the cluster size can be from 1 sector to 256 sectors (the maximum number of sectors that can be read/written by one ATA command).
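If I recall the ATA convention correctly, that 256-sector ceiling follows from the 8-bit sector-count register: a count of 0 is defined to mean 256, so one command can move at most 256 × 512 bytes = 128 KiB. A one-liner makes the arithmetic explicit (illustration only, not driver code):

```c
/* ATA 28-bit commands: the 8-bit sector-count register encodes 256 as 0,
 * so the per-command maximum is 256 sectors = 256 * 512 = 128 KiB. */
static unsigned sectors_from_count_reg(unsigned char reg)
{
    return reg == 0 ? 256u : reg;   /* 0 means 256 by convention */
}
```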
I believe that if devb-ram were as smart as a modern HD, it would work much more effectively than a real HD. And, of course, since devb-ram is a software simulator of a hardware thing, it will eat a lot of CPU time.
I don’t believe a RAM driver has to simulate HD hardware. A RAM driver appears to me like “EIDE minus access to media”.
As far as I can guess from all the info in this thread, devb-ram is “EIDE minus access to media plus access to memory”. Since it uses io-blk, I believe it somehow simulates hardware.
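A rough sketch of that “EIDE minus access to media plus access to memory” idea: for a RAM disk the whole device lives in host memory, so a block-layer read of n sectors reduces to a single memcpy. Names here are illustrative; the real devb-ram goes through io-blk/CAM and is certainly not this simple.

```c
#include <string.h>

#define SECTOR 512u

/* Hypothetical RAM-disk device: the "media" is just host memory. */
struct ramdisk {
    unsigned char *base;      /* backing store in host memory */
    unsigned long  nsectors;  /* device size in sectors */
};

static int ram_read(const struct ramdisk *rd, unsigned long lba,
                    unsigned long count, unsigned char *buf)
{
    if (lba + count > rd->nsectors)
        return -1;                               /* out of range */
    memcpy(buf, rd->base + lba * SECTOR, count * SECTOR);
    return 0;                                    /* no media access, no DMA */
}
```

If the driver instead pushes one 512-byte message per sector through the block layer, the per-message overhead dominates, which would explain the disappointing bandwidth.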
Then of course, it could be written in such a way too (if the point was to test/benchmark the CAM layer for hard disks rather than to be efficient). But in that case QNX should now be busy figuring out why 512-byte messaging is so slow and fixing it.
I thought you mentioned some experimental data about the bandwidth of devb-ram, and it was my guess in order to explain that. And of course, I hope someone from QSS reads this NG and will take a look at it. And, yes, the DMA issue is not the point here; I’m sorry, Mario.
Glad we agree on something.
Igor, you were right, but not very clear (I can excuse myself with the language issue, but you are often very clear :))
ed1k at ukr dot net