PCIe SATA RAID controllers

Calab

Can anyone suggest some inexpensive PCIe SATA RAID controllers? I have eight
drives that I need to build into one four terabyte RAID5 array. Any 8 port
card I find is expensive. I know that there are cards that you can install
two of and they can cooperate to create a single array, but this is a
detail that isn't given very often.

Any suggestions on what I can do?
 
Paul

Calab said:
Can anyone suggest some inexpensive PCIe SATA RAID controllers? I have eight
drives that I need to build into one four terabyte RAID5 array. Any 8 port
card I find is expensive. I know that there are cards that you can install
two of and they can cooperate to create a single array, but this is a
detail that isn't given very often.

Any suggestions on what I can do?

One issue I see is a maximum array size issue. A 32-bit OS tends to use
a 32-bit sector number to address the array. That gives a 2.2TB limit on
array size. If you want to build a single 4TB array, there would have to be
a "trick" to it. Investigate the size limit carefully, before proceeding
to build a single 4TB array. It could be this is fixed, with the right
OS choice.
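The 2.2TB figure follows directly from the arithmetic: a 32-bit sector number multiplied by the sector size. A quick sketch (decimal terabytes):

```python
# Maximum addressable capacity with a 32-bit sector (LBA) number.
SECTORS = 2 ** 32            # number of distinct sector addresses

limit_512 = SECTORS * 512    # ordinary 512-byte sectors
limit_4k = SECTORS * 4096    # larger "effective" sectors, the trick some
                             # RAID firmware uses to go past 2.2TB

print(limit_512 / 1e12)      # ~2.2 TB
print(limit_4k / 1e12)       # ~17.6 TB, in the ballpark of the Areca's 16TB
```

That second line is why changing the effective sector size lets a 32-bit sector address reach well past 2.2TB.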

One user of an Areca card posted a problem about his usage of disks. He
said his array was "working", and he copied over some big files, just
past the 2.2TB point, and the file system was corrupted. On downloading
the Areca manual, it turns out the Areca can support up to 16TB or so.
But a special setting must be enabled in the Areca, which he failed to do,
in order to address 16TB with a 32 bit sector address. That trick is,
to change the effective size of a sector. If this was my array, I would
copy fake test data to it, until I got past 2.2TB, then reboot and see
if the file system is totally intact.

So your first planning activity, would be to investigate whether a 4TB
array can be supported without tricks in your environment.

Tomshardware had an article years ago, about using softraid built into
Windows, to control an array of disks. The idea would be, to use non-RAID
cards, connect your eight disks, and then use RAID5 built into Windows.
The disk controllers in that case, could be more ordinary ones (say,
a couple of SIL3114 based cards). The problem I see there, is what
software interface is available for repairing the array. The Tomshardware
article didn't explain that part, and merely got the array working with
new disks.

Other than that, you could use SIL3114 cards, and build two RAID5 arrays.
And those cards are pretty cheap. It really all depends on how dependable
your implementation has to be.

This is an example of a 4 port SIL3114 card from Rosewill. There are a total
of six physical ports, of which four can work. A jumper block steers two ports,
to either the two front connectors, or to two internal connectors. In
normal usage, the ESATA connectors on the faceplate would be disabled,
and you'd be using the four internal connectors for disks.

http://c1.neweggimages.com/NeweggImage/productimage/16-132-013-11.jpg

This is a comment from a reviewer.

"Pros: Easy set up, SATA and E-SATA ports.

Cons: Parity RAID (RAID 5) will not build correctly, tech support has
not responded to any of the support requests sent.

Other Thoughts: It's unfortunate that parity RAID constantly fails
to build with known good harddisks. Card works well
without RAID, E-SATA is handy."

So that is part of the fun of a cheap product. The fun begins, after
the array fails, and then you discover just how good the firmware and
software for the thing really are. I've also heard of cases, where
someone owns a cheap RAID5, one disk fails, and rather than being in
"degraded" state, the array fails to work at all. Which means for
whatever reason, the redundancy feature is not working. RAID5 is
supposed to survive a single drive failure.
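The redundancy that should make that survival possible is just XOR parity: the parity block is the XOR of the data blocks in a stripe, so any single missing block can be rebuilt by XOR-ing the survivors. A toy illustration:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# One RAID5 stripe: three "data disk" blocks plus one parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing disk 1: rebuild its block from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True: the lost block is recovered
```

When a cheap controller's array dies outright after one drive failure, it means this rebuild step, which is simple in principle, was never correctly implemented in the firmware.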

Another way to build an array is to use a card with a SIL3132, which
supports port multipliers, and whose flash chip can be
flashed to the RAID BIOS. Then connect two five-port port
multiplier boxes to it. The SIL3132 can control ten disks in a
RAID array. You want a card with the right kind of flash chip on
it, so it can be flashed to use the RAID BIOS (that is, if the
card ships with the BASE BIOS and needs to be changed to RAID).

http://www.newegg.com/Product/Product.aspx?Item=N82E16815124027

The tools for modding the card are here.

http://www.siliconimage.com/support/supportsearchresults.aspx?pid=32&cid=15&ctid=2&osid=0&

The port multiplier boxes are the tricky part. Shop carefully
for a supplier of these. These items take one SATA port and make
five ports from it, but they only work when plugged into certain
controllers, such as the SIL3132. Your total project cost becomes
$20 for the SIL3132, then $100 + $100 to get two port multiplier
boxes, for a total of $220 or so plus cables. Check resellerratings.com
for the reputation of some of the small companies selling port
multipliers.

You can see some products here, made by Addonics.

http://www.addonics.com/products/pm/

http://www.addonics.com/products/host_controller/ad5sapm-e.asp $85

http://www.addonics.com/products/host_controller/extpm.asp $95

http://www.resellerratings.com/store/Addonics (reviews look typical)
(probably not a ripoff)

Naturally, using the SIL3132 approach depends on it having drivers
for whatever OS allows >2.2TB arrays. Being a "softraid" without
an XOR engine or DRAM cache means performance will be "average"
and not outstanding.

The reason good cards are expensive is that they've been designed
by serious people, for use in servers. And the company may have more
than a casual interest in getting it right. Companies using the SIL3132
chip, just copy the hardware design, and rely on Silicon Image to provide
good, working, firmware and software. The companies don't add any value
to the product, such as by rewriting the software and adding features.

The port multiplier boxes are "dumb", so there is no firmware/software
issue with those. But they do have to be plugged to devices that understand
how to control them (which is why the SIL3132 RAID Management software
for Windows is an important component of the package).

HTH,
Paul
 
Calab

One issue I see is a maximum array size issue. A 32-bit OS tends to use
a 32-bit sector number to address the array. That gives a 2.2TB limit on
array size. If you want to build a single 4TB array, there would have to be
a "trick" to it. Investigate the size limit carefully, before proceeding
to build a single 4TB array. It could be this is fixed, with the right
OS choice.

The OS choice is Windows Server 2003 Standard Edition. This SHOULD allow for up to 16TB
arrays. Of course, the RAID card would also have to support it.

Currently we have an Adaptec 21610SA PCI-X RAID controller. It has 16 SATA
ports on it, but Adaptec says that the card can't build an array larger than
2TB, regardless of OS.

Tomshardware had an article years ago, about using softraid built into
Windows, to control an array of disks. The idea would be, to use non-RAID
cards, connect your eight disks, and then use RAID5 built into Windows.
The disk controllers in that case, could be more ordinary ones (say,
a couple of SIL3114 based cards). The problem I see there, is what
software interface is available for repairing the array. The Tomshardware
article didn't explain that part, and merely got the array working with
new disks.

I believe that I read the same article. It said that arrays could be moved
to new Windows systems for rebuilding, if necessary.

I was actually running a four-drive RAID5 array with the Windows software
RAID. Then I found out that you can't add a drive and grow the software
array.

Other than that, you could use SIL3114 cards, and build two RAID5 arrays.
And those cards are pretty cheap. It really all depends on how dependable
your implementation has to be.

I could do this. This is actually what it looks like we need to do with the
Adaptec card. It can support multiple arrays, as long as they aren't larger
than 2TB.

Another way to build an array is to use a card with a SIL3132, which
supports port multipliers, and whose flash chip can be
flashed to the RAID BIOS. Then connect two five-port port
multiplier boxes to it. The SIL3132 can control ten disks in a
RAID array. You want a card with the right kind of flash chip on
it, so it can be flashed to use the RAID BIOS (that is, if the
card ships with the BASE BIOS and needs to be changed to RAID).

http://www.newegg.com/Product/Product.aspx?Item=N82E16815124027

The tools for modding the card are here.

http://www.siliconimage.com/support/supportsearchresults.aspx?pid=32&cid=15&ctid=2&osid=0&

The port multiplier boxes are the tricky part. Shop carefully
for a supplier of these. These items take one SATA port and make
five ports from it, but they only work when plugged into certain
controllers, such as the SIL3132. Your total project cost becomes
$20 for the SIL3132, then $100 + $100 to get two port multiplier
boxes, for a total of $220 or so plus cables. Check resellerratings.com
for the reputation of some of the small companies selling port
multipliers.

Very cool. I did not know about the multipliers. This will be something to
take into consideration.

Naturally, using the SIL3132 approach, depends on it having drivers
for whatever OS allows >2.2TB arrays. Being a "softraid" without
an XOR engine or DRAM cache, means performance will be "average"
and not outstanding.

Performance is not critical. Most important are the RAID5 abilities, ability
to grow the arrays, largest array sizes... in that order.

I appreciate the in depth reply. Thank you VERY much!
 
