# Speed variation in logical partitions

Hi all,

I'm here again with another observation from my disk read/write application.
There is a difference in read/write speeds between testing a single partition
and testing two logical partitions on the same drive simultaneously.
When two logical partitions of the same drive are tested simultaneously, the read/write speed
drops to almost half of the speed obtained while testing a single partition.
I have implemented this simultaneous testing using two parallel threads.
I get similar behaviour with both HDD and SSD. Could you please share your thoughts on the reason for this
observation?

Thanks,
Lullaby

So you run the test on a single partition (with one thread) and get X MB/s, then you run the test again on two partitions via two threads that run at the same time. When that happens, each thread reports a speed of X/4 (for a total of X/2)?

Could you try running the same test with the two threads on the same partition?

On an HDD the reason is simple physics.

Data on a hard drive is stored on rotating platters, in concentric circles called tracks. If there are multiple platters, the set of tracks at the same radius on different platters is called a cylinder. To read data, the heads are first moved to the correct track. This takes a variable amount of time: the head could already be over the correct track, or it might need to travel the entire width of the platter. This delay is called the seek time. Track-to-track seeks are very short, but crossing the platter involves not just the travel time but also the settling time. Once at the track, there can be a further delay while the correct data rotates under the head. This is also variable, from zero up to the time it takes the disk to spin once; on average it is half that. Once reading starts, the entire track can be read in one rotation. The start positions on adjoining tracks can be staggered so that the track-to-track seek and the rotational delay overlap.

So if you are doing one read, you get one large seek and one significant rotational delay, followed by almost continuous reading of data. If you are doing reads on two different partitions, you instead get the head flying back and forth across the disk, slowing things down. If the file system code were really dumb, you could end up reading a sector on one partition, then the next sector on the second, back and forth; this would slow things down by more than an order of magnitude. Fortunately, once the QNX file system detects that you are reading sequentially, it reads a large number of sectors ahead into a cache, improving performance. But bouncing back and forth between partitions still exacts a large penalty.

I can’t explain why this happens on an SSD. It is quite surprising. My guess is that internally the SSD is broken up into separate units, and that changing from unit to unit is like doing a seek in terms of delay.

Actually wouldn’t you expect this exact behavior on an SSD?

SSDs have firmware in them to write/retrieve data and to do things like wear leveling. This firmware undoubtedly runs without an OS (like on a PIC micro) and is therefore single threaded. So two reads/writes issued to the SSD at the same time will have to be serviced sequentially.

Tim

I think there may be some confusion from the original post.

First, an obvious observation: reading and writing to a drive is almost always single threaded. I say almost because, at least for SCSI, there is a feature that allows sending multiple requests which the drive is free to optimize. If the drive had some fancy features, e.g. multiple arms or a hybrid solid-state + disk arrangement, then it could do some optimization beyond track sorting. A minor note: QNX does use this feature when flushing the cache.

So the question is whether the poster was saying, hey, I tried two reads at the same time and they took as long as two reads, duh!!! (I assumed not), or did they meaningfully observe that when trying two simultaneous reads, the throughput was cut in half? The latter is what one might expect when seeks back and forth across partitions were frequently required. If that is the case, then what is the explanation for the same reduced throughput on an SSD?

Of course I could be wrong about what the poster meant, but let's hope not.

Hi all,

Sorry for replying late to clear up the confusion about the original post.
What I meant is something like the code snippet below:

void *test_fn(void *arg)
{
    const char *partition = arg;   /* logical partition to test */

    while (my_var)
    {
        /* ... */
        disk_read();   /* or disk_write(); can be either read() or write() */
        /* ... */
    }
    return NULL;
}

if (two partitions are to be tested)
{
    /* create a second thread running test_fn with the other partition name */
}

Hope that makes the snippet clear. If the user needs to test two partitions at the same time, I create two threads that execute the same function, test_fn; the only difference is the logical partition name in the thread arguments. Thus two threads run simultaneously, testing two partitions of the same volume. If the user needs to test only one partition, only one thread is created to perform the test.

The following is my observation:-
As Mario said, when I run the single-partition test (that is, only a single thread is present), I get, say, X MB/s.
When I run the two-partition test (that is, two threads whose arguments differ only in the partition name), I get almost X/2 MB/s for each partition. This reduction in speed lasts for about 7 to 8 seconds. After that, the speed alternates like X MB/s, X/2 MB/s, X/2 MB/s, X MB/s, X MB/s, X/2 MB/s, … This behaviour is similar for both HDD and SSD.

I also thought the speed reduction might be because of the physics, as Maschoen described. But this behaviour is reproduced when testing on an SSD as well.

Anyway, if you have any more points relevant to this observation, could you please share them?

Thanks,
Lullaby

This is an older article on Tom's Hardware (where they regularly do these kinds of tests on SSDs), but it shows the issue you are having.

Without knowing which SSD you have (or how full of data it is), it's impossible to say whether you have one that's really good at repeated reads/writes or not (some models are much, much better than others, depending on their firmware).

tomshardware.com/reviews/ssd … ,2279.html

This older Intel model is really bad at repeated reads/writes, getting worse performance than what you got:

tomshardware.com/reviews/ssd … 279-4.html

The Samsung model they tested was better, but it still suffered performance drops:

tomshardware.com/reviews/ssd … 279-7.html

In other words, to get repeatable performance you'll have to hunt around on Tom's Hardware or other sites that have already done the tests for you, to find out which SSD you need to buy.

Tim

QNX doesn’t support TRIM (sigh) yet, so it’s up to the SSD’s garbage collector to keep things snappy. This will differ greatly from brand to brand, model to model, and even firmware to firmware.

That being said, I’m not sure this is the cause of what you are seeing. My guess is that you are seeing side effects of the filesystem caching. A cache is most often tuned for “generic” usage, and for something as specific as your test case, the caching algorithm QNX uses is probably not the best.

Don’t expect “performance” from the QNX filesystem/subsystem, and it most likely will not get any better with time.