Newbie Question re hardware vs software RAID

Gilgamesh

I'm looking at installing a SATA RAID 5 system in my home PC to provide
a means of data recovery in case of hard disk failure (I'm thinking
this will be cheaper in the long run than installing a tape drive for a
terabyte of data). When the 400GB drives are released I'll be looking
at getting four for the array.

Is there any way of determining from the manufacturer's specs whether a
card does its parity calculations in hardware or software? A review on
Tom's Hardware said the RocketRAID 1820 was software based. Another
article elsewhere indicated that if the card has an XOR processor then
the parity calculations are hardware based, and there is a RocketRAID
1820A with an XOR processor. I don't know how valid that indication
about the XOR processor was.

I'm looking at hardware based RAID because I use Norton Ghost to back
up my system partition (which I plan to have on the RAID 5 volume), and
Ghost runs under MS-DOS, which I don't think a software RAID
implementation would support.

Thanks
 
Curious George

Gilgamesh said:
> I'm looking at installing a SATA RAID 5 system in my home PC to
> provide a means of data recovery in case of hard disk failure (I'm
> thinking this will be cheaper in the long run than installing a tape
> drive for a terabyte of data). When the 400GB drives are released
> I'll be looking at getting four for the array.

RAID protection is very different from backup protection. RAID
increases availability/uptime. You need regular offline/offsite
backups regardless. If you have that much important data, you're not
going to be able to avoid spending a few thousand USD to back it up
properly, whatever the method.
> Is there any way of determining from the manufacturer's specs whether
> a card does its parity calculations in hardware or software? A review
> on Tom's Hardware said the RocketRAID 1820 was software based.
> Another article elsewhere indicated that if the card has an XOR
> processor then the parity calculations are hardware based, and there
> is a RocketRAID 1820A with an XOR processor. I don't know how valid
> that indication about the XOR processor was.

It doesn't really matter. On the low end you are going to encounter
unnecessary headaches regardless of the exact role of the driver.
Proper implementation of RAID 5 is something of an engineering
nightmare, and I would stay away from all parity RAID levels at the
"personal storage" level anyway.
> I'm looking at hardware based RAID because I use Norton Ghost to back
> up my system partition (which I plan to have on the RAID 5 volume),
> and Ghost runs under MS-DOS, which I don't think a software RAID
> implementation would support.

All you should need is the correct MS-DOS driver to get it to work.
Depending on the card, you may need one even with a full-firmware
enterprise controller. Native MS-DOS support is spotty across all RAID
cards, and fully firmware RAID does not necessarily protect you from
issues with DOS disk utility support.
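
Coming back to the XOR question: the parity math itself is trivial;
the engineering headache is doing it consistently through drive
failures and power loss. A minimal Python sketch of the RAID 5 idea
(purely illustrative, not any controller's actual firmware):

    # RAID 5 parity is a bytewise XOR across the data blocks of a stripe.
    def parity(blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"  # toy data blocks
    p = parity([d0, d1, d2])                # parity block for the stripe

    # Lose any one block; XOR of the survivors reconstructs it:
    assert parity([d1, d2, p]) == d0

An "XOR processor" on a card just offloads that loop from the host
CPU; a host-based card leaves it to the driver, which is what reviews
mean by "software based".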
 
ECM

Gilgamesh said:
> I'm looking at installing a SATA RAID 5 system in my home PC to
> provide a means of data recovery in case of hard disk failure (I'm
> thinking this will be cheaper in the long run than installing a tape
> drive for a terabyte of data). When the 400GB drives are released
> I'll be looking at getting four for the array.
>
> Is there any way of determining from the manufacturer's specs whether
> a card does its parity calculations in hardware or software? A review
> on Tom's Hardware said the RocketRAID 1820 was software based.
> Another article elsewhere indicated that if the card has an XOR
> processor then the parity calculations are hardware based, and there
> is a RocketRAID 1820A with an XOR processor. I don't know how valid
> that indication about the XOR processor was.
>
> I'm looking at hardware based RAID because I use Norton Ghost to back
> up my system partition (which I plan to have on the RAID 5 volume),
> and Ghost runs under MS-DOS, which I don't think a software RAID
> implementation would support.
>
> Thanks

RAID 5 is a high end solution - to get a good RAID 5 hardware card,
you'll spend more than another drive will cost anyway. Better to look
at RAID 1 or 0+1; it's faster because very little calculation is
required, and a good card can be had for as little as $15. Rebuilding
the array after a drive fails will be much faster, too.
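
Rough arithmetic for the OP's case, assuming four 400GB drives: RAID 5
yields (4 - 1) x 400GB = 1.2TB usable with single-drive fault
tolerance, while RAID 1+0 or 0+1 yields (4 / 2) x 400GB = 800GB. The
parity level buys back capacity at the price of controller complexity.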

I think another poster already mentioned it, though: RAID is not a
backup. In your shoes, I'd buy a couple of external HDD cases and put
your new 400GB drives into them, then Ghost or back up your data onto
them and keep them in a safe place off site. Where would your terabyte
of data be if, God forbid, you had a house fire? Or if some trojan on
your computer was serving kiddie porn and the FBI confiscated your
computer? Or if some kid managed to download a virus and answered
"yes" to "format all"? Or, for that matter, if Windows does something
unexpected (THAT never happens, right?) and you can't get to your
data? You'd be HOSED.

Good Luck!
ECM
 
Curious George

ECM said:
> RAID 5 is a high end solution - to get a good RAID 5 hardware card,
> you'll spend more than another drive will cost anyway. Better to look
> at RAID 1 or 0+1

1+0 is generally better than 0+1 from a fault recoverability
standpoint, and it is just as fast, so 0+1 is advisable in fewer
circumstances.

> it's faster because very little calculation is required

No parity calculation is done, actually.

> a good card can be had for as little as $15. Rebuilding the array
> after a drive fails will be much faster, too.

Like what? I'd be interested to know. Considering the state of
enterprise controllers, I'm quite fearful of the $15 category.
 
ECM

Curious George said:
> 1+0 is generally better than 0+1 from a fault recoverability
> standpoint, and it is just as fast, so 0+1 is advisable in fewer
> circumstances.
>
> No parity calculation is done, actually.
>
> Like what? I'd be interested to know. Considering the state of
> enterprise controllers, I'm quite fearful of the $15 category.

I wasn't really speaking of enterprise level equipment - the OP was
asking about home use, I believe.

And yes, you're right - 1+0 is faster to recover, especially in large
arrays. I'm not sure whether the lower end cards support 1+0, however
- most advertise 0+1. The GigaRAID controller built into Gigabyte's
motherboards, for instance, doesn't give you the option of 1+0 - it
sets up the RAID 0 arrays and then mirrors one with the other; there
are no other options.

Either way, however, would be preferable to a RAID 5 solution
(especially a cheap one). And I believe both of us mentioned that RAID
in general is not a substitute for backups. I always remind myself
that there are at least three parts of any RAID array that can fail:
two drives and a controller.

ECM
 
Curious George

ECM said:
> I wasn't really speaking of enterprise level equipment - the OP was
> asking about home use, I believe.

I know. What I was saying was that my experience with enterprise
controllers causes me to question the viability of extremely cheap
consumer level cards. I'm not trying to shoot you down; I would still
like to know if you would recommend one such card.
> And yes, you're right - 1+0 is faster to recover, especially in large
> arrays.

It's not that it restores faster, it's that it can sustain a greater
number of failures. When you lose a drive in a 0+1, the array becomes
essentially a RAID 0 striped set. With RAID 1+0 you can lose a maximum
of half your drives and still operate (provided you are lucky and the
'right' ones fail). Because they both use striped and mirrored data
they _should_ restore at the same or a similar rate. But it doesn't
matter much, as running in degraded/recovery mode does not affect
performance to the extent it can with the parity levels.
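
A quick brute-force count makes the difference concrete. Here is a toy
Python sketch of my own (four drives; assuming 0+1 is a mirror of two
stripe sets and 1+0 is a stripe across two mirrored pairs) that counts
which two-drive failures each layout survives:

    from itertools import combinations

    def survives_1_0(failed):
        # RAID 1+0: stripe across mirror pairs {0,1} and {2,3}.
        # Alive while every mirror pair keeps at least one member.
        return all(not pair <= failed for pair in ({0, 1}, {2, 3}))

    def survives_0_1(failed):
        # RAID 0+1: mirror of stripe sets {0,1} and {2,3}.
        # Alive while at least one stripe set is fully intact.
        return any(not (s & failed) for s in ({0, 1}, {2, 3}))

    failures = [set(c) for c in combinations(range(4), 2)]
    print(sum(survives_1_0(f) for f in failures), "of", len(failures))  # 4 of 6
    print(sum(survives_0_1(f) for f in failures), "of", len(failures))  # 2 of 6

So 1+0 survives 4 of the 6 possible two-drive failures while a naive
0+1 survives only 2. (A controller that keeps using the surviving
drives of a broken stripe set could do better than this model; that is
exactly the kind of implementation detail that matters.)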
> I'm not sure whether the lower end cards support 1+0, however - most
> advertise 0+1.

Because the low end is marketed to the enthusiast segment, who are
more concerned with performance (even mythical performance) than
reliability, and who use small, non-mission-critical arrays.
> The GigaRAID controller built into Gigabyte's motherboards, for
> instance, doesn't give you the option of 1+0 - it sets up the RAID 0
> arrays and then mirrors one with the other; there are no other
> options.

On-board RAID is not something to be taken seriously, as it lacks so
many important features. From what I see the same is true of low-end
cards. Lacking such features can invalidate the theoretical
reliability gains from RAID. The devil is in the details.

Even if a particular controller does not support a nested RAID level,
you can still do it by using a combination of firmware and software
RAID (i.e. an OS stripe of two mirrored logical drives, an OS mirror
of two striped sets, an OS stripe of two RAID 5 logical drives, etc.),
but you won't be able to boot off it.
> Either way, however, would be preferable to a RAID 5 solution
> (especially a cheap one). And I believe both of us mentioned that
> RAID in general is not a substitute for backups. I always remind
> myself that there are at least three parts of any RAID array that can
> fail: two drives and a controller.
>
> ECM

Good points.
 
DevilsPGD

Curious George said:
> It's not that it restores faster, it's that it can sustain a greater
> number of failures. When you lose a drive in a 0+1, the array becomes
> essentially a RAID 0 striped set. With RAID 1+0 you can lose a
> maximum of half your drives and still operate (provided you are lucky
> and the 'right' ones fail). Because they both use striped and
> mirrored data they _should_ restore at the same or a similar rate.
> But it doesn't matter much, as running in degraded/recovery mode does
> not affect performance to the extent it can with the parity levels.

You should take this into account when you're buying drives, too.

Buy half your drives from one batch and half from another batch (if
you have that control), staying with the same model if you can, of
course.

If there is a physical defect which affects an entire batch, it won't
take down your entire array; it will probably take down the affected
batch only, and you can replace those drives.
 
Curious George

> It's not that it restores faster, it's that it can sustain a greater
> number of failures. When you lose a drive in a 0+1, the array becomes
> essentially a RAID 0 striped set. With RAID 1+0 you can lose a
> maximum of half your drives and still operate (provided you are lucky
> and the 'right' ones fail).

Sorry, I realize that is a bit of an overstatement. It can be true,
but it is actually more complicated than that. The truth is that there
is a marked increase in the likelihood that a second disk failure, or
multiple failures, will bring down a 0+1 array. Both 0+1 and 1+0 are
supposed to be able to sustain more than one failure in certain
circumstances.

Also, when a failed disk is replaced, 1+0 only has to re-mirror one
drive, but 0+1 has to re-mirror the entire failed set. RAID 1+0 is
supposed to recover much faster, but in reality restore speed is also
determined by the priority that process is assigned, as well as other
peculiarities of the controller, array, etc. I'm not sure this is much
of a reason to choose one level over another.
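
Back-of-envelope with the OP's 400GB drives in a four-drive array:
replacing one failed drive under 1+0 means re-mirroring 400GB from its
surviving partner, while a typical 0+1 implementation re-syncs the
entire failed stripe set, 2 x 400GB = 800GB, i.e. roughly twice the
rebuild I/O before you even count controller overhead.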
 
DevilsPGD

Curious George said:
> Sorry, I realize that is a bit of an overstatement. It can be true,
> but it is actually more complicated than that. The truth is that
> there is a marked increase in the likelihood that a second disk
> failure, or multiple failures, will bring down a 0+1 array. Both 0+1
> and 1+0 are supposed to be able to sustain more than one failure in
> certain circumstances.

If the right drives fail, either type can handle up to 50% of the
drives failing simultaneously. The difference is that when one drive
has failed and you're considering which remaining drives can fail
without taking down the entire array, a 1+0 array has more choices of
drives which can die while it continues to run.
 
Curious George

DevilsPGD said:
> You should take this into account when you're buying drives, too.
>
> Buy half your drives from one batch and half from another batch (if
> you have that control), staying with the same model if you can, of
> course.

This may be viable. One potential issue: when you buy in a single
batch you are more likely to get the same model and firmware revision,
while some firmware revisions have RAID issues or RAID-related
conflicts with differing revisions. With ATA you generally can't
upgrade the firmware.
> If there is a physical defect which affects an entire batch, it won't
> take down your entire array; it will probably take down the affected
> batch only, and you can replace those drives.

Interesting. In many cases, though, you can run into problems with
multiple failures that a 50/50 split can't save you from.

Another viable option is thorough testing before array creation and
having spares on-site. You can use the spare(s) to create a disk
rotation schedule to combat the problem of many drives dying around
the same time down the line because they are of identical age with
identical usage. You also don't have to lose any sleep while you wait
for the warranty replacements, and you can survive additional
failure(s).
 
DevilsPGD

Curious George said:
> This may be viable. One potential issue: when you buy in a single
> batch you are more likely to get the same model and firmware
> revision, while some firmware revisions have RAID issues or
> RAID-related conflicts with differing revisions. With ATA you
> generally can't upgrade the firmware.

Yeah, it's a risk.
> Interesting. In many cases, though, you can run into problems with
> multiple failures that a 50/50 split can't save you from.

I disagree... As long as you plan your array properly, either 0+1 or
1+0 can handle a 50% failure, as long as it's the right drives that
fail.

By planning the placement of the drives appropriately you can ensure
that the most likely mass-failure scenario will include the right set
of drives.
> Another viable option is thorough testing before array creation and
> having spares on-site. You can use the spare(s) to create a disk
> rotation schedule to combat the problem of many drives dying around
> the same time down the line because they are of identical age with
> identical usage. You also don't have to lose any sleep while you wait
> for the warranty replacements, and you can survive additional
> failure(s).

Yeah, I generally try to keep a spare from each batch (and run the
batches across several arrays) just in case.

I hadn't considered rotating them on a schedule. While it sounds like
it would probably help negate the issue, I'd always be afraid of
another failure while I've got part of the array down.
 
Gilgamesh

Curious George said:
> RAID protection is very different from backup protection. RAID
> increases availability/uptime. You need regular offline/offsite
> backups regardless. If you have that much important data, you're not
> going to be able to avoid spending a few thousand USD to back it up
> properly, whatever the method.

I've become pretty jaded with backups at work due to the rate of
failed data recovery we see, with data being irrevocably lost as a
result. Even so, I've been thinking about your point and I now realise
that I will need to do this as well. The information will be pretty
static, so I will back it up to a couple of spindles of DVDs.

Even so, I still want to reduce my reliance on backups by installing
RAID 5. As others have pointed out, I can still lose multiple
components at the same time and lose the data, but I would also lose
it if it were on a single disk that failed.

I realise I will never have a 100% foolproof system, but I want to
increase its reliability a bit. I'm still looking for some guidance on
telling hardware from software RAID cards; can you help with that?
 
The professor

I use software RAID 5 (on my Linux box): no dedicated hardware needed.
I use the two IDE channels on the motherboard plus two extra channels
on a PCI controller card, with 4 x 100GB hard drives. All the work is
done by the OS (in software).
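
For anyone curious, this kind of setup is only a few commands on a
reasonably current Linux box. A sketch using the stock md tools (the
partition names are assumptions matching the channel layout above, and
older distributions shipped raidtools instead of mdadm):

    # Build a 4-disk RAID 5 array from one partition per drive,
    # then put a filesystem on the resulting md device.
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/hda1 /dev/hdc1 /dev/hde1 /dev/hdg1
    mkfs.ext3 /dev/md0
    mdadm --detail /dev/md0   # check progress of the initial build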

Anyway, you need a real backup solution in addition to RAID. I
personally use an internet online backup service because I've heard
bad things about CD-R/DVD-R reliability over time. It's more expensive
($15 per month for 150GB) than burning DVDs and refreshing them every
year, but it's easier for me.
 
Gilgamesh

The professor said:
> I use software RAID 5 (on my Linux box): no dedicated hardware
> needed. I use the two IDE channels on the motherboard plus two extra
> channels on a PCI controller card, with 4 x 100GB hard drives. All
> the work is done by the OS (in software).
>
> Anyway, you need a real backup solution in addition to RAID. I
> personally use an internet online backup service because I've heard
> bad things about CD-R/DVD-R reliability over time. It's more
> expensive ($15 per month for 150GB) than burning DVDs and refreshing
> them every year, but it's easier for me.

Unfortunately I'm stuck on a 31K dialup connection and can't get
broadband where I live, so this would not be a viable option for me.
 
Curious George

Gilgamesh said:
> I've become pretty jaded with backups at work due to the rate of
> failed data recovery we see, with data being irrevocably lost as a
> result. Even so, I've been thinking about your point and I now
> realise that I will need to do this as well. The information will be
> pretty static, so I will back it up to a couple of spindles of DVDs.

A very real concern. The RAID will reduce the need to look to backups
for hardware issues, but it will not help if someone deletes a file or
erroneously 'updates' one, or if a disk utility or piece of malware
takes a crap on the file system. You still need backups for that (and
for some regulatory compliance issues), as well as for when the
building burns down, the computers are stolen, etc.

Another poster mentioned DVD reliability. This is another real issue,
and managing a rotation/migration schedule for media that small can be
a serious problem. Things like removable hard drives, SDLT, LTO, and
VXA-2 are generally worth the additional expense, but careful planning
is more important than any particular technology.
> Even so, I still want to reduce my reliance on backups by installing
> RAID 5. As others have pointed out, I can still lose multiple
> components at the same time and lose the data, but I would also lose
> it if it were on a single disk that failed.

True, and offsite data and additional resiliency/business continuity
planning are also important.
> I realise I will never have a 100% foolproof system, but I want to
> increase its reliability a bit. I'm still looking for some guidance
> on telling hardware from software RAID cards; can you help with that?

If reliability is a primary concern, I recommend the additional
expense of SCSI RAID. Seagate Cheetahs with FDB motors on a
Mylex/LSI/Adaptec/IBM RAID card in RAID 1, 1E, or 1+0 are a safe bet
for many scenarios. Most PATA drives, as well as most SATA drives, are
"personal storage" caliber devices. If you want higher reliability you
are forced to pay the premium these items command.
 
Curious George

These are both very hands-on, labor-intensive solutions which add
complexity to the implementation. I believe mine deals more with
extending array life (mature failures) while yours deals more with
premature failures. The more I think about it, your solution has a bit
more elegance and perhaps more practical gains (premature failures are
the real concern, especially if you plan to retire the unit when the
drives reach their service life), but it is sometimes not possible if
you order the RAID from an OEM or integrator. I also worry about
administrators trying this without full knowledge of the specifics of
a RAID implementation, and the idea backfiring. I dismissed an idea
similar to this a while back, but I think your solution has a lot of
potential.

Your concern about unnecessary rebuilds backfiring is valid. Careful
planning is necessary for either solution, and it is not appropriate
for many arrays. It's not something I would recommend be done
frequently, or on more than one drive at a time. The upside is that
even in the worst case, if the planning fails, the maintenance is done
at a time of your choosing.
 
DevilsPGD

The professor said:
> Anyway, you need a real backup solution in addition to RAID. I
> personally use an internet online backup service because I've heard
> bad things about CD-R/DVD-R reliability over time. It's more
> expensive ($15 per month for 150GB) than burning DVDs and refreshing
> them every year, but it's easier for me.

Do you happen to know any reliable, reasonably priced companies
offering this service?
 
kony

Curious George said:
> If reliability is a primary concern, I recommend the additional
> expense of SCSI RAID. Seagate Cheetahs with FDB motors on a
> Mylex/LSI/Adaptec/IBM RAID card in RAID 1, 1E, or 1+0 are a safe bet
> for many scenarios. Most PATA drives, as well as most SATA drives,
> are "personal storage" caliber devices. If you want higher
> reliability you are forced to pay the premium these items command.


Do tell: why do you expect the SCSI drives to be more reliable? Can
you point to a more reliable FDB bearing, instead of the exact same
bearing used on PATA?

Can you point to specific chips more prone to fail on PATA? How about
ANYTHING?

For the most part, that is complete nonsense. All you get with SCSI
beyond the superior bus is higher cost, not higher reliability...
unless you tell us different, specific component failure points that
would apply only to specific drives, not just "SCSI vs PATA".
 
Curious George

kony said:
> Do tell: why do you expect the SCSI drives to be more reliable? Can
> you point to a more reliable FDB bearing, instead of the exact same
> bearing used on PATA?

Drive reliability is a lot more complicated than the bearings. FDB
drives tend to be more forgiving of bumps and vibration, and that can
affect longevity. Also, the 15k FDB Cheetahs have a great and proven
track record.
> Can you point to specific chips more prone to fail on PATA? How about
> ANYTHING?

Why? Why is engineering/reverse-engineering knowledge so superior to
experiential knowledge?

> For the most part, that is complete nonsense.

For the most part? What about the other part?
> All you get with SCSI beyond the superior bus is higher cost, not
> higher reliability... unless you tell us different, specific
> component failure points that would apply only to specific drives,
> not just "SCSI vs PATA".

Then what DO you want? You ask me to cite specific failure points, but
then say that doing so would not clarify a SCSI vs ATA bias?

I can regurgitate a lot of crap from manufacturers to explain the
alleged enterprise vs personal storage reliability difference, but we
both know this type of information tends to be marketing hype and
should be taken with a grain of salt. But if SCSI only offers a higher
price, why do people buy the drives? Why are they the standard device
interface for serious, mission-critical use? In fact, the FDB 15k
Cheetahs have a great reliability reputation. They also place at the
top of storagereview.com's reliability database (if you place any
stock in that). True, many ATA drives place higher than many SCSI ones
in that same survey, but I did not recommend _ANY_ SCSI drive over
_ANY_ ATA one.

I can really only share my anecdotal experience, which is much higher
satisfaction with SCSI disk subsystems. I've had a lot of ATA drives
that just sometimes do weird things, or tend to suffer more hard
errors and corruption as they reach the end of a much shorter
'realistic' service life. I've also seen a lot of SCSI drives function
perfectly for longer periods during heavier use, and seen them deal
with problems better (without data loss or corruption). Of course, not
EVERY SCSI model is going to be GREAT, but neither is everything of
anything else. Sure, this is anecdotal, but it is the way it's
_supposed_ to be, as there is _supposed_ to be a difference in
reliability between enterprise storage and personal storage. Is it a
HUGE difference in ALL cases? Well, no, but with a SCSI RAID solution
you are more likely to get a full-fledged, cohesive product that is
geared to serious use. That's not ATA vs SCSI trolling; it's just how
the product lines run: better management software, better drivers,
better features, better error handling, better handling of
configuration and recoverability, better support, better
compatibility, better warranty, more scalability, availability for
faster busses, etc.

I fully acknowledge I'm just offering an opinion. I thought my
perspective deserved some clarification for the OP and the group, but
I don't really care what you use or think is better. If you think PATA
is better, please use it. I'm simply not interested in getting further
involved in a potential "SCSI vs xATA" flame/troll. (This smells like
a Folkert.)
 
DevilsPGD

kony said:
> For the most part, that is complete nonsense. All you get with SCSI
> beyond the superior bus is higher cost, not higher reliability...
> unless you tell us different, specific component failure points that
> would apply only to specific drives, not just "SCSI vs PATA".

As a general rule, if you buy the cheapest SCSI drives on the market,
you'll get exactly the same hardware, just a different controller with
a SCSI interface rather than an IDE interface.

However, if you're comparing a higher end SCSI drive to a low end IDE
drive, you'll definitely see better quality parts on the higher end
drive.
 
