Understanding RAID support by OS

I got myself an Adaptec AAA-131U2 RAID controller and am curious what is involved
in OS support for RAID in principle. The controller has an AIC7890 chip, which
is listed as supported. It also has an AIC7815 chip (which is a hardware
XOR engine, afaik), apparently connected through an Intel 21152 PCI-PCI bridge.

I guess that means there’s a chance it will be recognized by devb-aha8. But
will it function as RAID, or just as plain SCSI? What extra does a driver need
to do for the controller to function as RAID? I understand all RAID
configuration is done in the SCSI BIOS, so what else? Getting status from the
controller to provide alarms, etc.? That would not be too hard, so what’s the
reason there is still no official support for RAID, then?

Any insight is appreciated.

  • igor

I know enough to be dangerous here. For a typical external RAID, to
a SCSI driver/Controller card combination, the RAID looks like a normal
SCSI hard drive, or drives. There’s not much to fuss about. Interaction
with the external RAID controller is done either via a serial link or
as is becoming more common, via an Ethernet link. From this interface
you can define RAID sets, and do repair work, such as replacing a
dead drive and having the controller repair it. An example: if you are
using RAID 1 (mirroring), the current good copy would need to be copied
in its entirety to the new drive.
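
In other words, the rebuild is conceptually just a block-for-block copy from
the surviving mirror to the replacement. A minimal sketch, assuming a
hypothetical pair of raw block devices (the device paths and chunk size are
made up; a real controller does this in firmware, usually in the background):

    /* Hypothetical RAID 1 rebuild: copy the surviving mirror to the
     * replacement drive, chunk by chunk.  Device paths and chunk size
     * are illustrative only. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define CHUNK (64 * 1024)

    int main(void)
    {
        int src = open("/dev/hd1", O_RDONLY);   /* surviving mirror (hypothetical path) */
        int dst = open("/dev/hd2", O_WRONLY);   /* replacement drive (hypothetical path) */
        static char buf[CHUNK];
        ssize_t n;

        if (src < 0 || dst < 0) {
            perror("open");
            return EXIT_FAILURE;
        }

        /* copy the good drive's contents in their entirety */
        while ((n = read(src, buf, sizeof buf)) > 0) {
            if (write(dst, buf, n) != n) {
                perror("write");
                return EXIT_FAILURE;
            }
        }

        close(src);
        close(dst);
        return n < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
    }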

Now, when you go to an internal RAID controller, a driver will again
see the RAID set as just a SCSI drive. The controller should transparently
take care of any issues like redundant I/O. However, the extra functions
like defining the RAID sets will have to go somewhere.

For one product I worked with, the Vortex controllers, this functionality
was provided through an ESCAPE during boot into their custom BIOS. The
board itself stored information on RAID set definition in NVRAM.
One downside to this arrangement is that there was no concurrent RAID
repair. If a RAID set was damaged and the system was running in a
degraded mode, you would hear a warning beep, but you would have to
shut down and repair from the BIOS before starting up again. No
hot repairing.

An alternative would be to provide an interface so that a utility could
dynamically do the repair. I don’t know whether any of the existing
board-level RAIDs do this, but there’s a big downside for QNX.
Most manufacturers won’t support QNX directly. This adds a big
extra chore for a driver writer. It’s also possible that some
boards will not store the RAID set information on the board, thus
it would need to be on the disk as part of the file system, or
possibly hidden in a partition.
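
For that last case, what lands on the disk is typically a small descriptor
written to a reserved sector (or hidden partition) of each member drive. A
hypothetical layout, purely for illustration (every field name and the magic
value here are invented, not any vendor’s actual format):

    /* Hypothetical on-disk RAID set descriptor, written to a reserved
     * sector (or hidden partition) of each member drive.  All field
     * names and the magic value are invented for illustration. */
    #include <stdint.h>

    #define RAID_META_MAGIC 0x52414944u   /* "RAID" */

    struct raid_meta {
        uint32_t magic;          /* marks the sector as RAID metadata */
        uint32_t raid_level;     /* 0, 1, 3, 5, ... */
        uint32_t num_members;    /* drives in this set */
        uint32_t member_index;   /* which member this drive is */
        uint32_t stripe_size;    /* in sectors */
        uint32_t set_id;         /* ties the members of one set together */
        uint32_t generation;     /* bumped on every configuration change */
        uint32_t state;          /* clean, degraded, rebuilding, ... */
    };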

Thanks, Mitchell.
It sounds like, as long as I am okay with shutting down for repair, it
should work with QNX if the SCSI chip is supported.

Now, since you have experience, can you tell if one really needs
64-bit/66MHz boards? There is a bunch of cheaper U160 33MHz/32-bit ones, but
I am curious because the PCI bus will limit them to 132MB/s and nothing will
be left for anything else. Basically, how likely am I to hit the PCI
limitation? It seems like all drives won’t be transferring at the same
time anyway, so how many drives would warrant a wider/faster PCI bus?

  • igor

Previously, Igor Kovalenko wrote in qdn.public.qnxrtp.os:

Now, since you have experience, can you tell if one really needs
64-bit/66MHz boards? There is a bunch of cheaper U160 33MHz/32-bit ones, but
I am curious because the PCI bus will limit them to 132MB/s and nothing will
be left for anything else. Basically, how likely am I to hit the PCI
limitation? It seems like all drives won’t be transferring at the same
time anyway, so how many drives would warrant a wider/faster PCI bus?

A good place to start would be: what type of RAID are you interested
in? Here is a breakdown.

RAID 1 - mirroring

In this case your read speed is limited by the speed of
two hard drives. I don’t know if a typical RAID would even
try to speed up an I/O by splitting the read between drives.
Continuous write speed is limited by the transfer rate of a
single drive, although some RAIDs will have considerable
amounts of write cache that can be filled up at the maximum
transfer rate of the SCSI bus, which could in turn be limited
by the 33MHz PCI bus.

This brings up a side discussion. QNX 4’s Fsys would
only have one active read operation per drive. I suspect
the same is true for QNX 6. This is not a theoretical
limitation. SCSI targets (drives) have the capability to
accept multiple requests. Whether this is supported, and
how the multiple requests are handled is another complicated
issue. For RAID, it is quite possible that the requests
could be on the same logical drive, but on separate physical
drives, in which case a smart RAID could get you better
performance. Since QNX 6 probably doesn’t take advantage of
this, it won’t matter. However, QNX 4 (and I suspect QNX 6)
would issue multiple reads to different hard drives. If you
will have multiple logical drives on your RAID, and your mix
of I/Os is distributed across them, then you might
find the 33MHz bus a limiting factor.


RAID 0 and 3

Both of these methods are used to improve throughput by
distributing data across drives. RAID 0, sometimes known as
striping, is not really redundant at all. The distribution
of data creates the possibility of throughput not limited by
the transfer rate of a single drive. A typical RAID 3
setup using banks of 4 or 5 drives would easily swamp the
33MHz bus. This is equally true during read and write
operations.

RAID 5

RAID 5 is optimized more for multiple I/Os than for
maximum transfer rates, so a RAID 5 data set is much less
likely than RAID 3 to find the bus rate a bottleneck.

So to summarize, whether a 33MHz PCI card will limit potential
throughput depends on a number of factors; a back-of-envelope
example follows the list.

  1. The RAID level used
  2. The number of RAID sets being accessed and what the I/O mix to them is
  3. The throughput of a single drive
  4. In the case of RAID 0, 3, and 5, the number of drives that the data
    is distributed over
  5. Whether or not QNX 6 will issue multiple read/write requests to a
    single (logical) drive, probably not
  6. RAID controller issues, such as its own internal bus speed and amount of cache
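
As a rough illustration of points 3 and 4, the arithmetic below assumes a
sustained per-drive rate of about 30MB/s, which is an assumed figure, not a
measurement:

    /* Rough estimate of how many striped drives it takes to saturate
     * the PCI bus.  The per-drive rate is an assumption; real drives
     * and real buses never reach their theoretical peaks. */
    #include <stdio.h>

    int main(void)
    {
        double pci_32_33 = 33.0 * 4;   /* 32-bit @ 33MHz -> ~132MB/s peak */
        double pci_64_66 = 66.0 * 8;   /* 64-bit @ 66MHz -> ~528MB/s peak */
        double per_drive = 30.0;       /* assumed sustained MB/s per drive */

        printf("drives to fill a 33MHz/32-bit slot: %.1f\n", pci_32_33 / per_drive);
        printf("drives to fill a 66MHz/64-bit slot: %.1f\n", pci_64_66 / per_drive);
        return 0;
    }

By that estimate, four or five striped drives are enough to fill the
33MHz/32-bit slot, while a 64-bit/66MHz slot leaves plenty of headroom for
everything else.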


    Mitchell Schoenbrun --------- maschoen@pobox.com

I’m totally talking out my arse here, but I seem to recall from previous
discussions in another lifetime that some of these PCI RAID controllers appear
as two or more individual SCSI controllers and it’s totally up to the
O.S./driver to handle the redundant I/O, drive rebuilding, and so on. It may
vary by manufacturer exactly how much intelligence they have on each board.
That XOR processor is used to figure the parity data for things like RAID 5 so
you don’t beat the heck out of the CPU. But unless there’s a processor on that
card along with a bunch of RAM and it presents itself as a single SCSI
controller device to the O.S., my guess (and it is truly just a guess) is that
you’re going to have to write it ALL.
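
For what it’s worth, the parity that XOR engine computes is nothing more
exotic than XOR-ing the data blocks of a stripe together, and a lost block is
rebuilt the same way. A toy sketch with made-up block size and contents:

    /* Toy illustration of RAID 5 parity: the parity block is the XOR
     * of the data blocks in a stripe, and any single lost block can be
     * rebuilt by XOR-ing the survivors with the parity.  Block size
     * and contents are made up. */
    #include <stdio.h>
    #include <string.h>

    #define BLOCK 8
    #define NDATA 3

    int main(void)
    {
        unsigned char data[NDATA][BLOCK] = { "blk-one", "blk-two", "blk-tri" };
        unsigned char parity[BLOCK] = { 0 };
        unsigned char rebuilt[BLOCK];
        int d, i;

        /* compute parity across the stripe (what the XOR engine does) */
        for (d = 0; d < NDATA; d++)
            for (i = 0; i < BLOCK; i++)
                parity[i] ^= data[d][i];

        /* pretend drive 1 died: rebuild its block from the others + parity */
        memcpy(rebuilt, parity, BLOCK);
        for (d = 0; d < NDATA; d++)
            if (d != 1)
                for (i = 0; i < BLOCK; i++)
                    rebuilt[i] ^= data[d][i];

        printf("rebuilt block: %.8s\n", (const char *)rebuilt);  /* prints "blk-two" */
        return 0;
    }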

-Warren “nothing ventured, nothing to make fun of” Peece



That AAA-131U2 appears to be a cheap-ass board (and I got it for cheap, indeed).
It has a quite standard AIC-7890AB with a SCSI BIOS which looks identical to
the AHA-2940’s. You can’t set up RAID without using software for a supported
OS (Windows/NetWare/UnixWare). It does have cache, however (up to 64MB of
ECC RAM).

I know that other RAID boards, such as Intel, Mylex, AMI and LSI, usually
have an i960 or some equivalent CPU on board. Most of them use LSI Logic
SCSI chips, such as the 53C1010, and have much wider OS support than Adaptec,
probably because they don’t rely on the host driver to perform RAID
functions. Adaptec apparently does; even though it has a hardware XOR
engine, there is no general-purpose CPU on board to drive it
autonomously.

The lesson is, I’ll stay away from Adaptec RAID in the future :wink:
BTW, the QNX enumerator hung on that AIC7890AB (at the ‘looking for aha7…’
stage) anyway. I’ll just use it in Windows then.

Speaking of throughput, a major advantage of SCSI is that you can
disconnect from a drive and connect to another one as soon as you’ve
queued an I/O request, whereas with IDE you’d have to wait till it
completes. So a SCSI RAID would be able to use several drives at the same
time. The PCI bus limitation won’t really apply to single-channel Ultra2
RAID, since the channel itself is limited to 80MB/s. That’s still usable
with 3-4 drive arrays; especially with a large cache it should fly on most
I/O operations.
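
Rough numbers back that up, assuming around 25MB/s sustained per drive (an
assumed figure): the single Ultra2 channel saturates before the PCI slot does.

    /* Back-of-envelope: a single Ultra2 channel tops out around 80MB/s,
     * below the ~132MB/s of a 32-bit/33MHz PCI slot, so the SCSI bus
     * saturates first.  The per-drive rate is an assumption. */
    #include <stdio.h>

    int main(void)
    {
        double ultra2_bus = 80.0;    /* MB/s, single Ultra2 channel */
        double pci_bus    = 132.0;   /* MB/s, 32-bit/33MHz PCI slot */
        double per_drive  = 25.0;    /* assumed sustained MB/s per drive */
        int drives;

        for (drives = 1; drives <= 5; drives++) {
            double demand = drives * per_drive;
            printf("%d drives want %3.0f MB/s -> bottleneck: %s\n",
                   drives, demand,
                   demand <= ultra2_bus ? "the drives themselves"
                                        : "the Ultra2 channel");
        }
        printf("channel ceiling %.0f MB/s < PCI ceiling %.0f MB/s,"
               " so the slot is never the limit here\n", ultra2_bus, pci_bus);
        return 0;
    }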

  • igor


Previously, Warren Peece wrote in qdn.public.qnxrtp.os:

I’m totally talking out my arse here, but I seem to recall from previous
discussions in another lifetime that some of these PCI RAID controllers appear
as two or more individual SCSI controllers and it’s totally up to the
O.S./driver to handle the redundant I/O, drive rebuilding, and so on.

Well, that is certainly possible, although why have 2
controllers when one is adequate to call it RAID? Having 2
controllers gives you two pathways. For a redundant pathway
to mirrored (RAID 1) drives this would be adequate. But for
RAID 3 or 5, or 0 for that matter, losing a channel would
lose the whole game. Cough cough, that is, unless you used
the double channels to give you redundant access to each drive.
The trouble with this route is that while it does protect
you from losing a controller (assuming that losing the
controller doesn’t bring your PCI bus down), it does nothing
if you lose access because, for example, an active
terminator fails.

But getting back to your point, it would be pretty sneaky to
just present a SCSI controller card, dual or not, as RAID,
even if you supply the software. NT, for example, provides
this software at the OS level. Pointedly, it would be
worthless for QNX unless it came with a QNX driver.

It may
vary by manufacturer exactly how much intelligence they have on each board.
That XOR processor is used to figure the parity data for things like RAID 5 so
you don’t beat the heck out of the CPU. But unless there’s a processor on that
card along with a bunch of RAM and it presents itself as a single SCSI
controller device to the O.S., my guess (and it is truly just a guess) is that
you’re going to have to write it ALL.

Right. I have not actually seen such a beast on the market myself.
It seems unlikely as I’ve seen mirroring EIDE controller cards with
dual channels and large chunks of cache for something in the $100
range. The three functions they supported were RAID 0, RAID 1 and
Virtual disk. The latter simply makes one through four disks look
like one disk.

Mitchell Schoenbrun --------- maschoen@pobox.com

“Mitchell Schoenbrun” <maschoen@pobox.com> wrote in message
news:Voyager.011214153417.8901A@schoenbrun.com

Right. I have not actually seen such a beast on the market myself.
It seems unlikely as I’ve seen mirroring EIDE controller cards with
dual channels and large chunks of cache for something in the $100
range. The three functions they supported were RAID 0, RAID 1 and
Virtual disk. The latter simply makes one through four disks look
like one disk.

IDE RAID sucks. Try fitting 4 drives into a case with those IDE cables. And
if you can, your drives won’t work in parallel. And they won’t be
hot-swappable either. You probably will be limited to RAID 0, 1, or 0/1. So I
certainly prefer Ultra2 SCSI, which you can buy below $200 these days.
Ultra160 will cost you $300+, but there are ‘beasts’ out there, I’ve seen
them. You can get ones with a 100MHz i960 CPU for below $400 (Intel), or if you
have a deeper budget, get a Mylex with a 210MHz StrongARM on board; that will be
over a thousand, though.

  • igor

FYI we have just been testing an internal IDE RAID solution. It has a single
IDE connector on the rear but provides two hot-swap IDE drives, including
automatic rebuild. It has an LCD status display, two buttons, and a switch
for configuration. It works fine with both QNX4 and QNX6 and costs about
US$200. It takes up two standard drive bays.

I can’t comment on speed/performance as we are only looking at this for
fault tolerance. Looks good so far.

Rob Rutherford

IDE RAID sucks.

I object, they don’t suck. They are not as good as SCSI, but
they have a purpose.

Try fitting 4 drives into a case with those IDE cables.

Get round cables.

And if you can, your drives won’t work in parallel.

Yes they will. Many motherboards now come with an
IDE RAID controller. However, you are limited to 2 HDs
if you want parallelism. If you want to do soft RAID with
NT or XP you can have 4 HDs and keep parallelism.

And they won’t be hot-swappable either.

That’s true.

You probably will be limited to RAID 0, 1, or 0/1.

Software RAID with NT can do RAID 5, I believe.

So I certainly prefer Ultra2 SCSI which you can buy below $200 these days.

I agree. But SCSI HDs are 3 to 4 times more expensive than IDE (in Canada),
so I can get a 4-HD IDE RAID setup for the price of one SCSI HD.

Ultra160 will cost you $300+, but there are ‘beasts’ out there, I’ve seen
them. You can get ones with 100Mhz i960 CPU below $400 (Intel) or if you
have deeper budget get Mylex with 210 Mhz StrongARM on board, that will be
over a thousand though.

For a development machine where hot swap is not a real requirement,
IDE RAID is great (at least it’s been for me).

  • mario

Mario Charest <mcharest@clipzinformatic.com> wrote in article <9vkrd5$re2$1@inn.qnx.com>…
<…>

Yes they will. Many motherboards now come with an
IDE RAID controller. However, you are limited to 2 HDs
if you want parallelism. If you want to do soft RAID with
NT or XP you can have 4 HDs and keep parallelism.

IMO soft RAID with NT keeps parallelism only with two HDs that are connected to different EIDE
channels (if you have a good motherboard). I tried RAID “split” and “mirror” (0/1) on an Iwill m/b,
and had no success (no parallelism) on a NONAME m/b.
Eduard.

“ed1k” <ed1k@yahoo.com> wrote in message
news:01c18718$32692140$106fa8c0@ED1K…

Mario Charest <mcharest@clipzinformatic.com> wrote in article
<9vkrd5$re2$1@inn.qnx.com>…

Yes they will. Many motherboards now come with an
IDE RAID controller. However, you are limited to 2 HDs
if you want parallelism. If you want to do soft RAID with
NT or XP you can have 4 HDs and keep parallelism.

IMO soft RAID with NT keeps parallelism only with two HDs that are connected
to different EIDE channels (if you have a good motherboard).

Many motherboards now have 4 IDE controllers.

I tried it on Iwill m/b RAID “split” and “mirror” (0/1),
and had no success (had no parallelism) on NONAME m/b.
Eduard.

Either way, I have yet to see IDE RAID 5. Even with 4 IDE controllers all
you can get is RAID 0 (anti-reliability) or RAID 10 (a waste of storage). To
do RAID 5 they’d need to add stuff which would make it cost nearly as much
as SCSI solutions, if done in a useful way.

  • igor


Mario Charest wrote:

You probably will be limited to RAID 0, 1, or 0/1.

Software RAID with NT can do RAID 5, I believe.

No, it can’t. And if it could, it would suck; your CPU would be totally
swamped. RAID 5 only has acceptable performance when there’s a dedicated
hardware XOR engine.

So I certainly prefer Ultra2 SCSI which you can buy below $200 these days.

I agree. But SCSI HDs are 3 to 4 times more expensive than IDE (in Canada),
so I can get a 4-HD IDE RAID setup for the price of one SCSI HD.

Ultra160 will cost you $300+, but there are ‘beasts’ out there, I’ve seen
them. You can get ones with 100Mhz i960 CPU below $400 (Intel) or if you
have deeper budget get Mylex with 210 Mhz StrongARM on board, that will be
over a thousand though.


For a development machine where hot swap is not a real requirement,
IDE RAID is great (at least it’s been for me).

It is great until one of the drives fails and then you’ve lost all your
development files. The only thing to do about it is use RAID 10
(stripe+mirror), but then you’d need at least 4 drives and you’d get
the capacity of 2, so your price calculations should be adjusted. Then,
if you really make them work in parallel, you’d get your PCI bus swamped by
4 drives, 2 of which are redundant. So you’d need a 64-bit PCI 2.2
mobo for your quad IDE RAID controller, which won’t be that cheap.
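
The capacity arithmetic behind that, using an assumed drive size purely for
illustration:

    /* Usable capacity for a few layouts, assuming four identical drives
     * of an example size.  Shows why RAID 10 "wastes" half the spindles
     * while RAID 5 gives up only one drive's worth. */
    #include <stdio.h>

    int main(void)
    {
        int drives  = 4;    /* example array size */
        int size_gb = 40;   /* assumed size of each drive, in GB */

        printf("RAID 0  : %d GB usable (no redundancy at all)\n", drives * size_gb);
        printf("RAID 10 : %d GB usable (half the spindles are mirrors)\n",
               drives / 2 * size_gb);
        printf("RAID 5  : %d GB usable (one drive's worth of parity)\n",
               (drives - 1) * size_gb);
        return 0;
    }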

So yes, IDE RAID apparently has some use, since it is being
manufactured. But I think it is mostly useful for temporary
high-bandwidth storage, like swap or work areas for audio/video
encoding, etc. Then you can forget about mirroring and just go with
striping, but in that case you could just as well use soft RAID :wink:

  • igor

Igor Kovalenko <Igor.Kovalenko@motorola.com> wrote in article <3C1EBA42.1647B730@motorola.com>…
<…> But I think it is mostly useful for temporary
high-bandwidth storage, like swap or work areas for audio/video
encoding, etc. Then you can forget about mirroring and just go with
striping, but in that case you could just as well use soft RAID :wink:

Yes, indeed, I was talking about the high-bandwidth storage case. BTW, the NT swap file cannot be
placed on a striped volume :wink:
Eduard.

It is great until one of the drives fails and then you’ve lost all your
development files. The only thing to do about it is use RAID 10
(stripe+mirror), but then you’d need at least 4 drives and you’d get
the capacity of 2, so your price calculations should be adjusted. Then,
if you really make them work in parallel, you’d get your PCI bus swamped by
4 drives, 2 of which are redundant. So you’d need a 64-bit PCI 2.2
mobo for your quad IDE RAID controller, which won’t be that cheap.

So yes, IDE RAID apparently has some use, since it is being
manufactured. But I think it is mostly useful for temporary
high-bandwidth storage, like swap or work areas for audio/video
encoding, etc. Then you can forget about mirroring and just go with
striping, but in that case you could just as well use soft RAID :wink:

I use IDE RAID 0 on my development machine (Windows) to
get more speed (at the price of reliability). Since the RAID
is done by hardware it doesn’t cost any CPU nor extra
PCI bandwidth. Reliability is not really an issue for me
since I use tape backup for that. So for the price
of one IDE HD (god knows how cheap they are), plus
~$30 more for a MB with RAID support, I
get between a 30% and 50% increase in performance. I like that!!!

Granted, it is far less flexible and scales pretty badly compared
to SCSI, but I don’t think it sucks :wink: