which way to go? Linux or QNX?

You can’t “DEVELOP” in 16 MB.

“Chris McKillop” <cdm@qnx.com> wrote in message
news:amc0vj$4gv$1@nntp.qnx.com

All that aside, new development on QNX4 has ended. There won’t be any
new device drivers written unless someone pays for it. This alone can be
a show stopper. But if you can develop an application with generic
off-the-shelf components then QNX4 is my choice. I can develop a
sophisticated application with graphics and networking using QNX4 in
16 MB. I can’t (hardly) log in in text mode in QNX6 in 16 MB.


And I can’t find a COTS PC today with less than 128M; most come with 256M.
And the implication you are making is that Neutrino is unable to run in a
system with networking, graphics, and applications in less than 16M. This
is untrue. Check http://qnxzone.com/eQip/ - a full PDA environment with
kernel, flash filesystem, serial drivers, audio system, networking,
system services (hotswap and the like), Photon, handwriting recognition,
PDA front-end and a web browser running in about 12M. This is stock
Neutrino and Photon 2 with no effort to reduce memory usage.

camz@passageway.com wrote:

Armin <a-steinhoff@web.de> wrote:

Any comments about the file system ??


pfm takes hours to copy the contents of a directory from one system to
another. It takes a few seconds with an equivalent tar file :slight_smile:


Armin, I’m not sure what your issue is, but you need to learn how to
interpret the results you get.

Well … you have to learn to interpret (or to understand) my statements
in a logical way.

If pfm takes a long time and tar does not, well…

You have to differentiate between the tar command and a tar file.
I thought it was evident that a tar file usually means a *.tar file.

they BOTH use the filesystem, so that means that it is NOT
the problem.

Again … the problem is that a COPY/PASTE action on a directory
takes hours between two QNET nodes … the copy of a previously built tar
archive of the same directory takes seconds.

Hope this is understandable ??

Oh … I just forgot to mention that this recursive copy between two
nodes is also incomplete. Lots of subdirectories have not been copied.

Selecting a bunch of files for deletion doesn’t work … only one file
will be deleted if the right mouse button has been used.

pfm doesn’t show all directories if a file filter is set …

Is it possible to port the pfm from Photon 1.14 ??
This beast works …

Armin


PS: this is written with the Xlib Mozilla 1.1 + XPhoton.
Works great!!

“Bill Caroselli (Q-TPS)” <QTPS@earthlink.net> wrote:

You can’t “DEVELOP” in 16 MB.

You could - but why bother doing the work to make it possible when a COTS
PC today comes with 10-20x that much memory? And because of the nature of
gcc and related tools, they will be slow with so little memory. This is
one of the only things I miss from QNX4 - the speed and efficiency of the
compiler. Although, Watcom wasn’t too fast when you fed it C++ with a
lot of templates.

chris


Chris McKillop <cdm@qnx.com> “The faster I go, the behinder I get.”
Software Engineer, QSSL – Lewis Carroll –
http://qnx.wox.org/

Indeed. And compiler speed is never the rate-limiting factor in
development. GCC or not, it can still compile your code way faster than
you can write it :wink:

Efficiency of code is another matter. That, however, is often overvalued,
in the sense that other factors limit efficiency more than the compiler
does. The GNU toolchain and ELF format do not help efficiency or
compactness of code. They do, however, contribute to the portability of
Unix/Linux code to QNX. And they do it for free.

Now, is the speed/size difference so great that it can justify a couple
thousand dollars on the compiler? For some people, maybe. I bet that for
the majority it is not. The reality is that a lot of people use open
source code in their projects and that trend will probably grow. Then how
about the time needed to work around compatibility issues when working
with non-GNU toolchains? It all comes down to the question of which time
is more important - human time or computer time. I don’t think computer
time is worth $30/hour… Computer time also tends to become cheaper, while
human time tends to become more expensive.

My opinion is that it is not worth the effort right now for QNX to get
saddled with switching compilers. There are much more interesting things
to be done. Many of them would not even be possible in QNX4.

– igor


Colin Burgess <cburgess@qnx.com> wrote:

tar -cf - srcdir | (cd destdir; on -f10 tar -xvf -)

was much faster than cp -cRvp. Perhaps pfm should use a similar mechanism?

I’ve NEVER trusted the recursive mode of the cp command; it never seems to
behave the same way on two different OSes. So I ALWAYS use that tar trick.

Cheers,
Camz.
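For anyone who wants to try the trick, here is a local sketch (the directory names are invented for illustration; the QNX-specific `on -f10`, which runs the extracting tar on the remote node, is dropped so the pipeline runs on any Unix):

```shell
# Sketch of the tar-over-pipe copy trick quoted above.
# One tar streams the whole tree to stdout; the other unpacks it from
# stdin. A single sequential stream avoids a per-file round trip, which
# is why it beats cp -R over a slow transport such as QNET.
mkdir -p srcdir/sub destdir
echo "hello" > srcdir/a.txt
echo "world" > srcdir/sub/b.txt

tar -cf - srcdir | (cd destdir && tar -xf -)

cat destdir/srcdir/a.txt        # → hello
cat destdir/srcdir/sub/b.txt    # → world
```

On QNX the subshell would become `on -f10 tar -xvf -` to place the extraction on node 10, as in Colin’s one-liner.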

camz@passageway.com wrote:

Colin Burgess <cburgess@qnx.com> wrote:

tar -cf - srcdir | (cd destdir; on -f10 tar -xvf -)


was much faster than cp -cRvp . Perhaps pfm should use a similar mechanism?


I’ve NEVER trusted the recursive mode of the cp command,

So you can’t trust any copy and paste actions of pfm ??

it never seems to
behave the same way on two different OSs. So I ALWAYS use that tar trick.

Until now … I haven’t seen similar problems between Linux and M$Windows.

Armin

Cheers,
Camz.

Now, is the speed/size difference so great that it can justify a couple
thousand dollars on the compiler? For some people, maybe. I bet that for
the majority it is not. The reality is that a lot of people use open
source code in their projects and that trend will probably grow. Then how
about the time needed to work around compatibility issues when working
with non-GNU toolchains? It all comes down to the question of which time
is more important - human time or computer time. I don’t think computer
time is worth $30/hour… Computer time also tends to become cheaper, while
human time tends to become more expensive.

I remember about 10 years ago, my boss got a new computer, a 386. I don’t
remember the speed, but it was high-tech at the time. Most of the
software people had 286s, and compiling a C program took very long. The
first thing I tried was compiling the project on the 386: it compiled
almost 3 times faster. We all went to the boss and begged for new 386
machines, thinking how much more efficient we could be. He told me that
if I could prove that the time saved by the speed of the 386 justified
the cost, he would buy them. So I wrote a script that logged the start
and stop time of each compilation; at the end of the week we knew how
much time the machine spent compiling the project. I don’t recall the
exact numbers, but I believe it would have taken something like 10 years
to get a return on the investment. Hence we all got to keep our 286s,
that is, until the boss got himself a 486 and a lucky programmer got the
386 :wink:
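The payback arithmetic in the story can be sketched with invented figures (the machine price, hourly rate, and weekly compile time below are all hypothetical; only the roughly 3x speedup and the ~10-year result come from the anecdote):

```shell
# All dollar figures and the weekly compile time are invented.
awk 'BEGIN {
  cost    = 3000    # hypothetical price of a 386 machine
  rate    = 30      # hypothetical value of one saved hour
  weekly  = 0.3     # hypothetical hours/week the 286 spends compiling
  speedup = 3       # the 386 compiled ~3x faster (from the story)

  saved_per_week = weekly * (1 - 1/speedup)  # hours no longer spent waiting
  saved_per_year = saved_per_week * 50       # ~50 working weeks
  printf "payback: %.0f years\n", cost / (rate * saved_per_year)
}'
# → payback: 10 years
```

With numbers like these, even a 3x compiler speedup takes a decade to pay for the machine, which is the boss’s point.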

My opinion is that it is not worth the effort right now for QNX to get
saddled with switching compiler. There are much more interesting things to
be done. Many of them would not be even possible in QNX4.

Quite true. I’d love to get the speed of Watcom, but I bought a faster
machine to run QNX6 and got over it.

– igor


Armin <a-steinhoff@web.de> wrote:

camz@passageway.com wrote:
I’ve NEVER trusted the recursive mode of the cp command,

So you can’t trust any copy and paste actions of pfm ??

Armin, what are you smoking, we aren’t even talking about pfm any more!
We are comparing the cp -R option with a clever trick using tar and pipes.


it never seems to
behave the same way on two different OSs. So I ALWAYS use that tar trick.

Until now … I haven’t seen similar problems between Linux and M$Windows.

Stop smoking that stuff! Windows doesn’t even have a cp OR tar command,
so of course you wouldn’t see this problem.

Cheers,
Camz.

Hi…

Mario Charest wrote:

Quite true. I’d love to get the speed of Watcom, but I bought a faster
machine to run QNX6 and got over it.

That is a great line. :slight_smile:

Trade-offs. You’ll hate them, but you will also learn to live with
them. I think that everyone would agree that QNX6 will only improve.
For example, I wonder when the next kernel release will take place?
Incidentally, how can I tell what the current kernel version is?

Keep hanging around, Mario, do not go away just yet. Many have learned
quite a bit over the years by reading your answers to questions from
everyone, including myself. You and many others have a lot to offer,
and everyone appreciates that. The same goes for everyone in your
list, and even those not in your list.

Regards…

Miguel.

camz@passageway.com wrote:

Armin <a-steinhoff@web.de> wrote:

camz@passageway.com wrote:

I’ve NEVER trusted the recursive mode of the cp command,


So you can’t trust any copy and paste actions of pfm ??

because it seems to use a recursive cp command …

Armin, what are you smoking, we aren’t even talking about pfm any more!

No … only YOU are talking about the cp and tar commands.
Colin’s statement was also related to pfm and its underlying copy mechanism!

We are comparing the cp -R option with a clever trick using tar and pipes.

YOU are comparing it … not “we” (whatever that means). I’m talking
about the behavior of copy/paste actions between QNET nodes.

it never seems to
behave the same way on two different OSs. So I ALWAYS use that tar trick.


Until now … I haven’t seen similar problems between Linux and M$Windows.


Stop smoking that stuff! Windows doesn’t even have a cp OR tar command,

But it supports copy/paste as QNX and Linux do. And BTW … Windows
has a copy command.


Armin

“Armin” <a-steinhoff@web.de> wrote in message
news:3D8B9C9F.8070205@web.de


What a lovely discussion, I like the technical depth and width of
arguments…

Windows’ and QNX’s ‘copy command’ is nothing but a dumb interface to the
underlying I/O subsystem. The cp utility is nothing but a dumb interface
to the same I/O subsystem. The tar utility is yet another dumb interface
to the same I/O subsystem. They all end up doing open/read/write. By
‘dumb’ I mean that they don’t try to be clever and just do that.

Copy/paste of files between QNET nodes sucks because I/O over QNET sucks,
not because filesystem performance is mysteriously slower using pfm than
using cp or tar. I have a case where QNET is 5x slower at copying a file
than FTP. Granted, filesystem performance sucks too, but it is equally
bad in all cases, so it is irrelevant. True, tar makes things faster
because it reduces the number of remote read/write requests (at the
expense of more local read/write requests). So what? It is off-topic.

Whatever people are smoking, both cp and tar are applications; it is
silly to point out that Windows ‘does not have them’. Applications are
not the property of an OS (although M$ might think otherwise). They can
be written and they have been written. I have tar on Windows, and it
surely does come with copy and xcopy.

Come on guys. A long pause in flamewars should not be replaced by
discussions as shallow and boring as this. We do have higher goals, don’t
we? Like proving whose ego is bigger, maybe? :stuck_out_tongue:

Cheers,
– igor

Trade-offs. You’ll hate them, but you will also learn to live with
them. I think that everyone would agree that QNX6 will only improve.
For example, I wonder when the next kernel release will take place?
Incidentally, how can I tell what the current kernel version is?

uname -a

For example, on my machine it is…

QNX bigbox 6.2.1 2002/09/17-16:53:13EDT x86pc x86

…on most machines for people outside of QSS it will be 6.2.0.

chris


Chris McKillop <cdm@qnx.com> “The faster I go, the behinder I get.”
Software Engineer, QSSL – Lewis Carroll –
http://qnx.wox.org/

Igor Kovalenko wrote:


Come on guys. A long pause in flamewars should not be replaced by
discussions as shallow and boring as this. We do have higher goals, don’t we?

Yes … e.g. to have a convenient Eclipse. It takes more than 1 MINUTE
to load Eclipse on a 200 MB, 700 MHz system.

It makes no sense to blame Eclipse (Java) for this as long as the file
system is amazingly slow.

I pointed out in my initial statement that there are similar problems
with the loading of Python-based applications from a LOCAL file
system. (I was able to compare the loading time directly with the time
needed under Linux and Windows.)

I also didn’t link the copy/paste problem with the file system
performance, because it is mixed up with the QNET protocol.

Like proving who’s ego is bigger maybe? :stuck_out_tongue:

I don’t have such problems … but I dislike this flashy waffling about
‘smoking’.

Cheers

Armin

Cheers,
– igor

Armin wrote:

Come on guys. A long pause in flamewars should not be replaced by
discussions as shallow and boring as this. We do have higher goals, don’t we?

Yes … e.g. to have a convenient Eclipse. It takes more than 1 MINUTE
to load Eclipse on a 200 MB, 700 MHz system.

It makes no sense to blame Eclipse (Java) for this as long as the file
system is amazingly slow.

I pointed out in my initial statement that there are similar problems
with the loading of Python-based applications from a LOCAL file
system. (I was able to compare the loading time directly with the time
needed under Linux and Windows.)

I also didn’t link the copy/paste problem with the file system
performance, because it is mixed up with the QNET protocol.

If you’re using a modern EIDE or SCSI drive, you should get I/O
bandwidth of about 30-40 MB/sec. That is about the same as you get under
Linux/Windows, because at this point the drives limit bandwidth sooner
than the OS does. Accessing a lot of small files is a different story;
that’s where most of the impact from the filesystem comes, I think. Not
sure it alone can explain the slowness of Eclipse, though. Some important
libc functions like malloc() and memmove() have issues in QNX and that
may contribute quite a bit. Photon bindings for SWT may not be perfect…

Bottom line, it is a complicated issue. Good news is, we have Chris
looking into it :slight_smile: And there is a rumor that 6.2.1 will have
considerably improved I/O performance.

Like proving who’s ego is bigger maybe? :stuck_out_tongue:


I don’t have such problems … but I dislike this flashy waffling about
‘smoking’.

Expressions like ‘What are you smoking’ and ‘Stop smoking that stuff’
are just North American jokes. They do not imply anything real and are
often used in friendly conversations, especially by younger folks. Think
you could forgive camz for being so informal? :wink:

Cheers,
– igor

Miguel Simon wrote:

Hi…

Chris McKillop wrote:

For example, I wonder when the next kernel release will take place?
Incidentally, how can I tell what the current kernel version is?



uname -a

For example, on my machine it is…

QNX bigbox 6.2.1 2002/09/17-16:53:13EDT x86pc x86



Yes, I know this, but what about the kernel version? The above is the
version of the RTP, which is 6.2.1 in your case, but it is my
understanding that the actual Nto kernel version is something like
2.1.1. Would this be correct? Then the question is: how could I tell
the actual version of the Nto kernel (or does this question make no sense)?

I think they changed all this, so now it all simply is Momentics… err…
RTP… err… never mind, it all is …, and it is 620, ’cause the includes
say so :wink:

sys/neutrino.h:61

#define _NTO_VERSION 620 /* version number * 100 */


/Johan
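Given the define above (“version number * 100”), one plausible way to decode it into a dotted version is the following sketch. The hard-coded 620 is taken from the quoted header; on a QNX box you would pull it out of /usr/include/sys/neutrino.h instead, and reading the last digit as the patch level is an assumption:

```shell
# _NTO_VERSION is documented as "version number * 100", so 6.2.0 -> 620.
# On a real QNX system you could extract it with something like:
#   grep _NTO_VERSION /usr/include/sys/neutrino.h
ver=620                         # hard-coded from the header quoted above
major=$((ver / 100))            # 6
minor=$(((ver % 100) / 10))     # 2
patch=$((ver % 10))             # 0 (assumed: last digit tracks the patch)
echo "Neutrino $major.$minor.$patch"   # → Neutrino 6.2.0
```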


One thing I haven’t tried, but that I know some people have had very good
results with, is IDE RAID. Lots of current motherboards support it and
are quite inexpensive. Then you get a couple of cheap drives and stripe
them (RAID 0) and you get some pretty good throughput. As far as I know
it’s all set up in the BIOS, so the controller just presents an IDE drive
to the software (by which I mean you might have a good chance Neutrino
would run it).

cheers,

Kris

Igor Kovalenko wrote:


If you’re using a modern EIDE or SCSI drive, you should get I/O
bandwidth of about 30-40 MB/sec. That is about the same as you get under
Linux/Windows, because at this point the drives limit bandwidth sooner
than the OS does. Accessing a lot of small files is a different story;
that’s where most of the impact from the filesystem comes, I think. Not
sure it alone can explain the slowness of Eclipse, though. Some important
libc functions like malloc() and memmove() have issues in QNX and that
may contribute quite a bit. Photon bindings for SWT may not be perfect…

You can make a huge tradeoff in startup speed (at the expense of later
performance) by turning off the JIT. It now starts up on my machine in
less than 10 seconds.

Rick…

Rick Duff                              Internet: rick@astranetwork.com
Astra Network                          QUICS: rgduff
QNX Consulting and Custom Programming  URL: http://www.astranetwork.com
+1 (204) 987-7475                      Fax: +1 (204) 987-7479

“Kris Warkentin” <kewarken@qnx.com> wrote in message
news:amn3cj$nbj$1@nntp.qnx.com

One thing I haven’t tried, but that I know some people have had very good
results with, is IDE RAID. Lots of current motherboards support it and
are quite inexpensive. Then you get a couple of cheap drives and stripe
them (RAID 0) and you get some pretty good throughput. As far as I know
it’s all set up in the BIOS, so the controller just presents an IDE drive
to the software (by which I mean you might have a good chance Neutrino
would run it).

No, it would not, since Neutrino doesn’t go through the BIOS. The
hardware interface is very different from normal IDE.

The performance increase isn’t that great :wink: The raw transfer rate
improves quite a lot, but real-life activity doesn’t get sped up that
much. Furthermore, you can’t move a RAID setup to a different motherboard
unless it uses the same chip. So if the motherboard dies you need to get
a replacement with the exact same RAID chip.

cheers,

Kris

Hi…

Chris McKillop wrote:

For example, I wonder when the next kernel release will take place?
Incidentally, how can I tell what the current kernel version is?



uname -a

For example, on my machine it is…

QNX bigbox 6.2.1 2002/09/17-16:53:13EDT x86pc x86

Yes, I know this, but what about the kernel version? The above is the
version of the RTP, which is 6.2.1 in your case, but it is my
understanding that the actual Nto kernel version is something like
2.1.1. Would this be correct? Then the question is: how could I tell
the actual version of the Nto kernel (or does this question make no sense)?

regards…

Miguel.


…on most machines for people outside of QSS it will be 6.2.0.

chris

“Mario Charest” postmaster@127.0.0.1 wrote in message
news:amn759$4nd$1@inn.qnx.com


No, it would not, since Neutrino doesn’t go through the BIOS. The
hardware interface is very different from normal IDE.

It depends. If the RAID is bridgeless (i.e., it exposes the IDE chips to
the host) then it might work (I understand some older DPT SCSI RAID cards
worked that way with NTO).

Most SCSI RAID cards these days have a non-transparent bridge. The host
will only see the processor (usually an i960 or SA1100, or XScale lately)
and the SCSI chips will be behind the bridge. I don’t know about IDE RAIDs.

The performance increase isn’t that great :wink: The raw transfer rate
improves quite a lot, but real-life activity doesn’t get sped up that much.

Yes, bandwidth and TPS are different benchmarks. SCSI RAID systems
usually try to improve TPS by using a large cache (64 MB-128 MB). IDE
versions are mostly designed for bandwidth-greedy applications like video
editing.

Furthermore, you can’t move a RAID setup to a different motherboard
unless it uses the same chip. So if the motherboard dies you need to get
a replacement with the exact same RAID chip.

Not always true. Some vendors support migration of RAID setups to newer
versions of their product.

– igor