ETFS performance

I have been struggling with NAND performance on QNX with the ETFS filesystem. On the same board, UBIFS on Linux gives much better performance.

One experimental observation is below:

cp /tmp/4M.cp /test_dir/4M.cp
Linux: 81.63 MB/sec
Qnx: 5.41 MB/sec

cp /test_dir/4M.cp /tmp/4M.cp
Linux: 105.26 MB/sec
Qnx: 9.09 MB/sec

[Speeds computed as size / time consumed; here size = 4 MB, and the time is the real time in seconds reported by the `time` command.]
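For reference, that arithmetic as a one-liner (awk is used here only for the floating-point division; 4 MB and 0.74 s correspond to the QNX write case above):

```shell
# Throughput = size / elapsed real time, e.g. the QNX write case:
# 4 MB in ~0.74 s as reported by `time`.
awk -v mb=4 -v sec=0.74 'BEGIN { printf "%.2f MB/sec\n", mb / sec }'
# prints "5.41 MB/sec"
```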

I am wondering whether ETFS can match UBIFS performance. If yes, then how?

Is there any benchmark for the ETFS filesystem? I couldn't find any reference.

Or is there a better way to compare the performance between the two?


I assume you're aware that the ETFS filesystem is transaction-based for high reliability. That means it's not going to be fast.

Linux's UBIFS, on the other hand, is designed for speed and has write caching and other features that ETFS can't have because ETFS is designed for reliability (i.e. it has to wait for each block to confirm it's written). Note that UBIFS has options for a reliability mode; did you turn those on to make an apples-to-apples comparison?


Hi Tim,

Thanks for your response.

I agree with your theory. But ETFS does have a cache, which defaults to 64 clusters (= 64 * 2 KB = 128 KB). Even if you try to reduce it (with -c 0) when launching the driver, it keeps the cache at 32 clusters (64 KB) internally. An even more interesting observation: when we tried increasing the cache in the hope of better performance, it actually degraded write performance.

Regarding the UBIFS reliability mode, I believe you are referring to the '-o sync' option when mounting UBIFS; that is the closest thing I found. However, I couldn't find exactly where the mount is being triggered in Linux (it is not part of fstab).

I am hoping for some realistic numbers to prove the theory.


This link (which I am sure you've read) talks about mounting a RAM-based version in the examples at the bottom: … s-ram.html

One thing you might do to test whether it's ETFS is to run the RAM-based version and create a RAM drive (but leave all the data error-correction machinery running). Then copy your file to that drive and see how long it takes. Then repeat the experiment with a plain RAM drive (devb-ram) and see how long that takes. The difference in time is the difference between ETFS and the regular QNX filesystem.
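A minimal, portable sketch of that differencing methodology (POSIX shell; `DIR_A` and `DIR_B` are placeholders I've invented — on the target they would be the fs-etfs-ram mount point and the devb-ram mount point; assumes GNU `date` with nanosecond support):

```shell
#!/bin/sh
# Copy the same file to two destinations and report the time delta;
# the delta approximates the filesystem overhead alone.
DIR_A=${DIR_A:-/tmp/dest_a}   # placeholder: ETFS-in-RAM mount on the target
DIR_B=${DIR_B:-/tmp/dest_b}   # placeholder: plain devb-ram mount on the target
mkdir -p "$DIR_A" "$DIR_B"
dd if=/dev/zero of=/tmp/src.bin bs=1024 count=4096 2>/dev/null   # 4 MB test file

elapsed_copy() {   # elapsed_copy <dest-dir>  ->  seconds on stdout
    start=$(date +%s%N)
    cp /tmp/src.bin "$1/src.bin"
    end=$(date +%s%N)
    awk -v s="$start" -v e="$end" 'BEGIN { printf "%.4f", (e - s) / 1e9 }'
}

TA=$(elapsed_copy "$DIR_A")
TB=$(elapsed_copy "$DIR_B")
awk -v a="$TA" -v b="$TB" \
    'BEGIN { printf "A: %ss  B: %ss  delta: %.4fs\n", a, b, a - b }'
```

On QNX itself you would simply wrap the two `cp` invocations in `time`, as in the original experiments; the script just automates the subtraction.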

If they take essentially the same time, then perhaps it's the QNX hardware driver for the NAND that's the problem.


Hi Tim,

The experiment you suggested seems only useful for calculating the NAND driver overhead on top of the ETFS filesystem. I don't see how it helps make sense of the ETFS numbers in comparison with the UBIFS numbers. But yes, if we could also run UBIFS in a RAM mode (similar to etfs_ram), we could draw some conclusions.

Nevertheless, I ran the experiment as you suggested; the observations are as follows:

size(MB)  read_time(sec)  read_speed(MB/sec)  write_time(sec)  write_speed(MB/sec)  case description
0.002     0.00002         100.00              0.0002           10.00                spec max speed (20-60 us read / 200-2000 us write)
0.002     0.00006         33.33               0.002            1.00                 spec min speed (20-60 us read / 200-2000 us write)
4         0.44            9.09                0.74             5.41                 normal copy (cp to/from /tmp)
4         0.1             40.00               0.1              40.00                normal copy, ram_etfs (cp to/from /tmp)
64        1.35            47.41               1.29             49.61                normal copy, ram_etfs (cp to/from /tmp)
4         0.09            44.44               0.09             44.44                normal copy within RAM (cp /tmp to /tmp)
64        1.13            56.64               1.13             56.64                normal copy within RAM (cp /tmp to /tmp)
4         0.08            50.00               0.08             50.00                normal copy within etfs (cp /etfs to /etfs)

You're right, it doesn't help compare ETFS to UBIFS.

But it seems to me there are three possible reasons for ETFS being much slower than UBIFS:

  1. ETFS is slow due to its transactional nature.
  2. The QNX NAND driver is slower than the Linux equivalent.
  3. It's a combination of ETFS and the NAND driver.

So I wanted to see some numbers to try and pinpoint where the bottleneck is.

Looking at your table and removing the 64 MB and the 0.002 MB rows, we are left with:

size(MB)  read_time(sec)  read_speed(MB/sec)  write_time(sec)  write_speed(MB/sec)  case description
4         0.44            9.09                0.74             5.41                 normal copy (cp to/from /tmp)
4         0.1             40.00               0.1              40.00                normal copy, ram_etfs (cp to/from /tmp)
4         0.09            44.44               0.09             44.44                normal copy within RAM (cp /tmp to /tmp)
4         0.08            50.00               0.08             50.00                normal copy within etfs (cp /etfs to /etfs)
  1. This is your original post's data, writing to/from the NAND.
     But I don't quite understand what the other 3 entries are. Where are you copying from and to?
  2. Is this copying NAND -> RAM?
  3. Is this copying RAM -> RAM?
  4. Is this copying RAM -> RAM?

Regardless of what 2-4 really represent, they are all essentially the same speed, which makes me think that ETFS and its transactional nature isn't the problem; the problem is the NAND driver. In that case I'd start looking at the options to the NAND driver (which driver is that?).


Clarifying the other 3 entries:

  2. This is using the RAM-based version of ETFS (ram_etfs, as you suggested in your previous reply), performing the same operation as in 1.
  3. This is a plain RAM-to-RAM copy, to establish the raw RAM transfer speed.
  4. This is the etfs_ram-to-etfs_ram transfer speed, to compare with 3 and understand the ETFS filesystem overhead.

The entries with 0.002 MB (one 2 KB page) are important as well, as those represent the NAND controller's raw speed range per the user manual.

So if we compare the raw controller speed (no software overhead involved) with Linux (~81 MB/s write and ~105 MB/s read), Linux appears to be faster than even the controller itself allows, which is unreasonable.
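Concretely, the controller's theoretical ceiling follows from the page geometry (2 KB page; best-case timings from the spec rows above; awk only for the arithmetic):

```shell
# Controller ceiling = one 2 KB page / best-case access time:
awk 'BEGIN { printf "read max:  %.0f MB/sec\n", 0.002 / 0.00002 }'  # 2 KB / 20 us
awk 'BEGIN { printf "write max: %.0f MB/sec\n", 0.002 / 0.0002  }'  # 2 KB / 200 us
# prints "read max:  100 MB/sec" and "write max: 10 MB/sec";
# the Linux figures (~105 read, ~81 write) exceed both ceilings.
```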

Regarding the driver: we are developing for the FSMC controller with a Micron 8-bit, 8 Gbit NAND chip.


Something seems a bit wrong with the numbers:

The Linux numbers you're posting (~81 MB/s write and 105 MB/s read) are faster than RAM copies (44 write, 44 read) under QNX. It seems impossible that copying a file from the RAM disk to the RAM disk can be significantly slower (less than half the speed) than writing to NAND. Something's wrong somewhere.

Back to the real problem you're trying to solve, which is throughput to NAND under QNX. Are you saying YOU'RE writing the NAND controller driver? Or are you using a driver supplied with the BSP/QNX? If you're writing the driver, then it's your code and you'll have to figure out what you're doing wrong. If you're using a driver that came with the BSP/QNX, then you should check the driver options, looking for something like a DMA mode option.


P.S. I just did a quick test on my QNX machine using my RAM drive. I copied a 13 MB file to the RAM drive, then did 'time cp /fs/ram/testfile /fs/ram/testfile1'. This reported 0.67 s real time, which equates to about 19 MB/s (13/0.67). But that used the cp binary from my disk, which isn't very fast (no DMA mode on this disk). So I placed the cp command itself into the RAM drive and did 'time /fs/ram/cp /fs/ram/testfile /fs/ram/testfile1'. This reported 0.05 s real time, which equates to 260 MB/s (13/0.05) and seems much more in line with what a RAM drive should deliver. So maybe you should put your cp command in the RAM drive and then re-run your tests on the NAND AND the RAM drive, so that the slow NAND read just to load 'cp' isn't skewing your numbers.
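The two rates quoted in the P.S. follow directly from the size/time formula (13 MB file, real times from `time`):

```shell
# cp binary loaded from the slow disk vs. already resident in the RAM drive:
awk 'BEGIN { printf "cp from disk: %.0f MB/sec\n", 13 / 0.67 }'
awk 'BEGIN { printf "cp from RAM:  %.0f MB/sec\n", 13 / 0.05 }'
# prints "cp from disk: 19 MB/sec" and "cp from RAM:  260 MB/sec"
```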