raid10.. how many drives can fail and still have the array intact? (4 drives/8 drives)

markm75

With a 4 drive raid10.. I'm a little unclear on how many can fail here
and still work..

Same with 8 drive?

Any thoughts?

Thanks
 
Arno Wagner

Previously markm75 said:
With a 4 drive raid10.. i'm a little unclear on how many can fail here
and still work..

A RAID 10 uses RAID1 components as the basis to build a RAID 0 on
top. If two drives fail in any of the RAID1 subcomponents, the
whole array fails.

For 4 drives that would be two RAID1 pairs. If 1 drive fails,
the array works. If 2 drives fail, it may or may not work.
3 drives kill it reliably.

Same with 8 drive?

That would be 4 RAID1 pairs.

1 drive failure will not kill it. 2-4 drive failures may or may
not kill it, depending on which drives fail. 5 drives reliably
kill the array.
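Those counts can be checked by brute force. A minimal sketch (it assumes drives are numbered so that drives 2i and 2i+1 form mirror pair i; that is just a labeling convention for the enumeration, not necessarily the controller's physical layout):

```python
from itertools import combinations

def survives(failed_drives):
    # A RAID10 of mirror pairs survives as long as no pair has
    # lost both of its members. With drives numbered so that
    # 2i and 2i+1 form pair i, a pair is dead exactly when the
    # same pair index (drive // 2) appears twice among failures.
    pair_ids = [d // 2 for d in failed_drives]
    return len(set(pair_ids)) == len(pair_ids)

def tally(n_drives, n_failed):
    # Count surviving failure sets out of all possible ones.
    combos = list(combinations(range(n_drives), n_failed))
    ok = sum(1 for c in combos if survives(c))
    return ok, len(combos)

for n in (4, 8):
    for k in range(1, n // 2 + 2):
        ok, total = tally(n, k)
        print(f"{n} drives, {k} failed: {ok}/{total} survive")
```

For 4 drives this prints 4/4, 4/6, 0/4: one failure is always survivable, two may or may not be, three always kill the array. For 8 drives, 5 simultaneous failures (0/56) are the first guaranteed loss, matching the counts above.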

Arno
 
markm75

A RAID 10 uses RAID1 components as the basis to build a RAID 0 on
top. If two drives fail in any of the RAID1 subcomponents, the
whole array fails.

For 4 drives that would be two RAID1 pairs. If 1 drive fails,
the array works. If 2 drives fail, it may or may not work.
3 drives kill it reliably.


That would be 4 RAID1 pairs.

1 drive failure will not kill it. 2-4 drive failures may or may
not kill it, depending on which drives fail. 5 drives reliably
kill the array.

Arno

So basically with 4 drives.. there are two on each side.. if 1 drive on
one side dies.. it's ok.. but if 1 drive on each side dies then it's a
goner.. if 2 drives fail on one side.. I'd think it would be ok, just
not mirrored..

I'm trying to decide for my beefy virtual hosting server and file
server what to do.. I have 8 500GB drives..

I originally was going to do 4 drive raid 5 for the main filesharing/
shares area.. then raid10 4 drive, for the virtual servers being
hosted on this box (8 of them, only 3 remotely beefy I guess).. I'd
prefer an all in one solution, but that would mean either going 8
drive raid5 (which would be horribly slow on rebuilds) or 8 drive
raid10, which sounds a little risky but fast on writes.
 
Arno Wagner

So basically with 4 drive.. there are two on each side.. if 1 drive on
one side dies.. its ok.. but if 1 drive on each side dies then its a
goner.. if 2 drives fail on one side.. i'd think it would be ok, just
not mirrored..
Exactly.

I'm trying to decide for my beefy virtual hosting server and file
server what to do.. i have 8, 500gb drives..
i originally was going to do 4 drive raid 5 for the main filesharing/
shares area.. then raid10 4 drive, for the virtual servers being
hosted on this box (8 of them, only 3 remotely beefy i guess).. i'd
prefer an all in one solution, but that would mean either going 8
drive raid5 (which would be horribly slow on rebuilds) or 8 drive
raid10, which sounds a little risky but fast on writes.

You should determine what your bottlenecks are first. You
may even have time for RAID6 without knowing it.

Arno
 
markm75

You should determine what your bottlenecks are first. You
may even have time for RAID6 without knowing it.

Arno

Hi there..

What did you mean by that (bottlenecks)?

RAID6.. how many drives can fail here.. is it the same as raid5.. I
have forgotten.. I think there was extra parity?

So raid6, 8 drives of 500gb.. does this still equate to 3.5TB?

I didn't think the writes were any better with raid6 than raid5.. I've
always been a fan of the writes of raid10.
 
markm75

You should determine what your bottlenecks are first. You
may even have time for RAID6 without knowing it.

Arno

btw.. it took 10hr 37 min for my 4 drive (500gb each) raid5 set to
build on this card.. and it took 1hr 41 min for the raid10 4 drive set
to build.
 
markm75

You should determine what your bottlenecks are first. You
may even have time for RAID6 without knowing it.

Arno

Btw.. forgot.. that 10hr build time was a background build, not
foreground and the raid10 time was foreground only.
 
Arno Wagner

Hi there..
What did you mean by that (bottlenecks)?

Slowest components that matter.
RAID6.. how many drives can fail here.. is it the same as raid5.. i
have forgotten.. i think there was extra parity?

RAID6 can survive the loss of any two disks. Since simple parity is
not enough to reconstruct two missing blocks, it will be slow with
two failed drives.
So raid6, 8 drives of 500gb.. does this still equate to 3.5TB?
3TB.
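The arithmetic behind that answer: RAID6 spends two disks' worth of space on parity, so usable capacity is (n − 2) × disk size. A quick sketch, using the drive makers' decimal terabytes:

```python
def usable_tb(n_drives, drive_gb, level):
    # Data-carrying disks per level: RAID6 gives up two disks'
    # worth to parity, RAID5 one, RAID10 half to mirror copies.
    data_disks = {"raid5": n_drives - 1,
                  "raid6": n_drives - 2,
                  "raid10": n_drives // 2}[level]
    return data_disks * drive_gb / 1000

print(usable_tb(8, 500, "raid6"))   # 3.0 TB
print(usable_tb(8, 500, "raid10"))  # 2.0 TB
print(usable_tb(8, 500, "raid5"))   # 3.5 TB -- the figure asked about,
                                    # which only RAID5 would give
```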

I didnt think the writes were any better with raid6 than raid5.. i've
always been a fan of the writes of raid10.

RAID6 is about good redundancy.

Arno
 
Arno Wagner

btw.. it took 10hr 37 min for my 4 drive (500gb each) raid5 set to
build on this card.. and it took 1hr 41 min for the raid10 4 drive set
to build.

The RAID10 time is standard. The RAID5 time is extremely poor.
I have an 8-disk software RAID6 of 500GB drives on older hardware
that builds in about 4 hours on Linux.

Arno
 
markm75

What is a background build?

Arno


The build process is slower in a background build.. it gives the
option for initialization.. background or foreground.. but raid10
doesn't have this option.. you must stay in the BIOS (it's not
immediately available).

I decided to do the 8 drive system for raid10.. I built it.. took the
same amount of time as 4 drives.. I then pulled 2 drives.. the drive
letter stayed intact.. I pulled a 3rd drive.. it was then no longer
intact..

So with 8 drives, it appears you can lose 2 and still be ok.. which is
the same as raid6.. so I think I'll stick with raid10, since the 2TB
space is way more than we ever expected to have anyway (had 600GB
before :)
 
Arno Wagner

Previously markm75 said:
The build process is slower in a background build.. it gives the
option for initialization.. background or foreground.. but raid10
doesnt have this option.. you must stay in the bios (its not
immediately available).

I see.
I decided to do the 8 drive system for raid10.. I built it.. took the
same amount of time as 4 drives.. I then pulled 2 drives.. the drive
letter stayed intact.. I pulled a 3rd drive.. it was then no longer
intact..

Depends on which drives you pulled.
So with 8 drives, it appears you can lose 2 and still be ok.. which is
the same as raid6.. so i think ill stick with raid10, since the 2tb
space is way more than we ever expected to have anyway (had 600gb
before :)

No, you cannot. Lose a RAID1 pair and the array goes down.
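How likely is that? After the first failure, the array only dies if the next drive to fail happens to be the first one's mirror partner. A sketch of that closed form (it assumes every drive is equally likely to fail, i.e. independent failures):

```python
from fractions import Fraction

def p_second_failure_fatal(n_drives):
    # Whichever drive fails first, exactly one of the remaining
    # n - 1 drives is its mirror partner; only that one takes
    # the whole RAID10 down.
    return Fraction(1, n_drives - 1)

print(p_second_failure_fatal(8))  # 1/7 -- about a 14% chance
print(p_second_failure_fatal(4))  # 1/3
```

So a second random failure in the 8-drive set is survivable about 6 times out of 7, which is why the pull tests sometimes leave the array up and sometimes kill it.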

Arno
 
markm75

I see.


Depends on which drives you pulled.


No, you cannot. Lose a RAID1 pair and the array goes down.

Arno

Ah crap.. I see that now.. did another random 2 drive test.. and
boom..

Well.. I think I may opt to not use raid10.. it's quite sad, as the
performance I was getting using a performance tester was through the
roof.. I mean, we do use DPM 2007 to back stuff up, but this "chance"
of one going on each side is too hard to ignore..

I'm initializing an 8 drive raid6 array now, I'll conduct the same
speed tests on it...
 
Arno Wagner

Ah crap.. i see that now.. did another random 2 drive test.. and
boom..
well.. i think i may opt to Not use raid10.. its quite sad, as the
performance i was getting using a performance tester was through the
roof.. i mean, we do use dpm 2007 to back stuff up, but this "chance"
of one going on each side is too hard to ignore..
Indeed.

I'm initializing an 8 drive raid6 array now, ill conduct the same
speed tests on it...

Please post results here. Could be interesting.

Arno
 
markm75

Ah crap.. i see that now.. did another random 2 drive test.. and
boom..

well.. i think i may opt to Not use raid10.. its quite sad, as the
performance i was getting using a performance tester was through the
roof.. i mean, we do use dpm 2007 to back stuff up, but this "chance"
of one going on each side is too hard to ignore..

I'm initializing an 8 drive raid6 array now, ill conduct the same
speed tests on it...

The initialization of an 8 drive raid6 only took 1hr 42 minutes
(foreground, not background).. same speed as the raid10 basically...

So here were my results.. these are from performance tester v6.1:

D: drive raid10, 8 drives,... (500gb, Seagate, 32mb cache, sataII, TOTAL
2.0TB space)
Burst speed of about 1050.4 MB/s
Avg read 351.2 MB/s
Seq writes of 235 MB/s (0.5x that of raid6?!)
Random seek+RW: 52.6 MB/sec (2.2x that of raid6)

D: drive raid6, 8 drive... (500gb, Seagate, 32mb cache, sataII, TOTAL
3.0TB space)
Burst speed of about 1049 MB/sec
Avg read 633 MB/sec !! (twice that of raid10)
D drive seq writes of 458.2 MB/sec (showing twice that of raid10??)
Random seek+RW: 18.0 MB/sec (random seek+RW, appears to be
slightly under half as fast as RAID10)

So it appears on random seek+RW, raid6 is the loser, but on sequential
writes.. it's actually faster than RAID10, which seems odd to me..

Based on just these numbers (performance).. which would be the winner?

I always thought raid10 should win...

But raid6 isn't horribly slower on a few specs, so I'm leaning towards
just sticking with RAID6?

All of these are with the ARC-1231ML PCIe x8 card, which in my tests
shows faster stats than the comparable 3ware models.
 
Arno Wagner

The initialization of an 8 drive raid6, only took 1hr 42 minutes
(foreground, not background).. same speed as the raid10 basically...
So here were my results.. these are from performance tester v6.1:
D: drive raid10, 8 drives,... (500gb, Seagate, 32mb cache, sataII, TOTAL
2.0TB space)
Burst speed of about 1050.4 MB/s
Avg read 351.2 MB/s
Seq writes of 235 MB/s (0.5x that of raid6?!)
Random seek+RW: 52.6 MB/sec (2.2x that of raid6)
D: drive raid6, 8 drive... (500gb, Seagate, 32mb cache, sataII, TOTAL
3.0TB space)
Burst speed of about 1049 MB/sec
Avg read 633 MB/sec !! (twice that of raid10)
D drive seq writes of 458.2 MB/sec (showing twice that of raid10??)
Random seek+RW: 18.0 MB/sec (random seek+RW, appears to be
slightly under half as fast as RAID10)

Yes, these numbers are possible with RAID6, if you have a really
fast controller.
So it appears on random seek+RW, raid6 is the loser, but on sequential
writes.. its actually faster than RAID10, which seems odd to me..

No, that is entirely correct. It is, after all, kind of a 6-way RAID0.
As to the random RW, that is likely due to the writes being small,
but larger than the stripe size.
Based on just these numbers (performance).. which would be the winner?
I always thought raid10 should win...

It used to be like that, because of limited computing power and bus
transfer speeds. As to raw disk access speeds, RAID5/6 with
3/4 disks is faster than RAID0.
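A first-order model of why the sequential numbers come out that way: for large full-stripe writes every disk streams in parallel, but only the data disks carry unique bytes. An 8-drive RAID6 stripes over 6 data disks, while 8-drive RAID10 carries unique data on only 4 (the other 4 hold mirror copies), so RAID6 should stream roughly 6/4 = 1.5x faster before controller effects. A sketch (the 80 MB/s per-disk figure is hypothetical, purely for illustration):

```python
def data_disks(n_drives, level):
    # Disks carrying unique data during a full-stripe write:
    # RAID10 duplicates everything onto mirrors, RAID6 reserves
    # two disks' worth of each stripe for P and Q parity.
    return {"raid10": n_drives // 2, "raid6": n_drives - 2}[level]

PER_DISK_MB_S = 80  # hypothetical per-drive streaming rate

for level in ("raid10", "raid6"):
    d = data_disks(8, level)
    print(f"{level}: ~{d * PER_DISK_MB_S} MB/s sequential write ceiling")
```

This ignores parity computation and caching, which is why the measured 2x gap can exceed the naive 1.5x ratio on a fast controller.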
But raid6 isnt horribly slower on a few specs, so i'm leaning towards
just sticking with RAID6?
All of these are with the ARC-1231ML Pci-e x8 card, which in my tests
shows faster stats than the 3ware comparative models.

I think you should use RAID6.

Arno
 
David Lesher

RAID6.. how many drives can fail here.. is it the same as raid5.. i
have forgotten.. i think there was extra parity?

RAID6 is RAID5 that survives two drive failures, not one.
 
