Vista does not support RAID or Volume Striping

Leythos

RAID arrays are only better at keeping data if the mean time to repair is
kept short.
A RAID 5 that takes a week to get around to fixing isn't much more reliable
than just having the disks.
This gets worse the larger the number of drives in the array, which is why
you have hot standbys to minimize the repair time.
Unless you do the maths you cannot state that RAID is better or worse in any
particular circumstance.

BS. A RAID-5 that has one bad member is STILL WORKING, which means
the computer is still providing data.

Why do you contradict all that has been proven before you started this
crap?

With few exceptions, RAID-0 is worthless, even more so for home users
because of the double failure rate of their system, and the few seconds
of difference are not worth the downtime.

--

Leythos
- Igitur qui desiderat pacem, praeparet bellum.
- Calling an illegal alien an "undocumented worker" is like calling a
drug dealer an "unlicensed pharmacist"
(e-mail address removed) (remove 999 for proper email address)
 
Kerry Brown

NoStop said:
And you failed to add that most of these cheap "raid" controllers are fake
raid, i.e. software based.


All RAID is software based. The better RAID cards have a fast CPU, optimized
DMA, bus mastering, etc., but in the end it's all controlled by
software. I think you may be referring to the fact that some onboard
controllers do not have a dedicated CPU and use the system CPU.
 
DevilsPGD

dennis@home said:
RAID arrays are only better at keeping data if the mean time to repair is
kept short.
A RAID 5 that takes a week to get around to fixing isn't much more reliable
than just having the disks.

Sure it is -- One drive fails, is replaced and rebuilt without the users
even noticing a thing is far better than an outage while files are
restored from backups and an office full of people redo whatever work
they managed to accomplish since IT did the last backup.

I'm also not sure where you get the 1-week time to fix; I'm looking at
about 26 hours to rebuild from a single drive failure on just over 1TB
of data, with users still accessing the data in the meantime.

I'm sure you could cut a fair amount of time off that if there were no
users.
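
For scale, those figures imply a rebuild throughput of roughly 10-11 MB/s.
A minimal back-of-the-envelope sketch, taking the "just over 1TB" and
26-hour numbers above at face value:

data_tb = 1.0                # array size in terabytes (approximate)
rebuild_hours = 26           # rebuild time reported above, with users active
megabytes = data_tb * 1_000_000             # TB -> MB, decimal units
rate = megabytes / (rebuild_hours * 3600)   # MB per second
print(f"effective rebuild rate: {rate:.1f} MB/s")   # ~10.7 MB/s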
 
dennis@home

Leythos said:
BS. A RAID-5 that has one bad member is STILL WORKING, which means
the computer is still providing data.

It's not BS.
Why do you contradict all that has been proven before you started this
crap?

I didn't contradict anything, and it's only crap because you don't understand.

I said you need to do the maths to decide if RAID-5 is suitable for what you
want.
Suitability depends on the number of disks in the array, the number of
hot standbys, the MTTR *and* the application.
You can claim RAID-5 is always better if you want; I will work it out and
then decide on a case-by-case basis.
Suppose you are running a data warehouse operation and have 15 disks in a
RAID-5: what is the failure rate of one disk, two disks?
How long does it take you to get there and replace a disk?
What are the chances of the second fault occurring before you get there to
fix it?
What happens if you extend the array by a disk or two?
Once you know these answers you can argue with me if you want.
With few exceptions, RAID-0 is worthless, even more so for home users
because of the double failure rate of their system, and the few seconds
of difference are not worth the downtime.

That depends on the application.
I can see uses for RAID-0 and they are perfectly good uses.

About the only thing that is true is that with RAID-1
 
dennis@home

DevilsPGD said:


Sure it is -- One drive fails, is replaced and rebuilt without the users
even noticing a thing is far better than an outage while files are
restored from backups and an office full of people redo whatever work
they managed to accomplish since IT did the last backup.

I didn't say it would fail if a disk failed; I said if it takes a long time
to fix it may be no more reliable.
That's the problem with RAID-5: more hardware, more software, more chances
that something will go wrong.

In a typical home PC it will be the PSU, and that will take the whole thing
down, RAID and all.
Also RAID does nothing (except maybe make it worse) to stop software errors
damaging data, unlike transaction logs and continuous backups (like Memeo).

In short, you can't just state that RAID-5 is better without knowing why,
and that depends on circumstance.
I'm also not sure where you get the 1-week time to fix; I'm looking at
about 26 hours to rebuild from a single drive failure on just over 1TB
of data, with users still accessing the data in the meantime.

26 hours is quite quick.
But it doesn't start until after you have fitted a new drive (unless you
have hot standbys).
So that adds another 24 hours to it and doubles the chances of the array
failing due to a second fault.
 
Leythos

I didn't contradict anything, and it's only crap because you don't understand.

I said you need to do the maths to decide if RAID-5 is suitable for what you
want.
Suitability depends on the number of disks in the array, the number of
hot standbys, the MTTR *and* the application.
You can claim RAID-5 is always better if you want; I will work it out and
then decide on a case-by-case basis.
Suppose you are running a data warehouse operation and have 15 disks in a
RAID-5: what is the failure rate of one disk, two disks?
How long does it take you to get there and replace a disk?
What are the chances of the second fault occurring before you get there to
fix it?
What happens if you extend the array by a disk or two?
Once you know these answers you can argue with me if you want.

We're not talking a Data Center, we're not talking a 15 disk array,
we're talking Vista and home users or SOHO users.

Face it, you're wrong.

In a typical home PC you have the ability to have 4 disks; some
controllers on motherboards allow for 6. None of the motherboard
controllers allow for hot spares in the HOME market, so we're not going
to be going there.

Since we're clearly in the HOME user market we're talking about what the
typical motherboard for a home user can handle - that's RAID-1 and RAID-0,
and some can handle RAID-5, none with hot spares.

In the typical HOME environment the RAID-0 array is going to cause a
total data loss BEFORE the RAID-1 array does. It's that simple.

In the typical home user setup a RAID-0 doesn't gain any significant
performance advantage when stacked against the INCREASED LOSS of
computer usage.

 
Leythos

So that adds another 24 hours to it and doubles the chances of the array
failing due to a second fault.

No, it doesn't DOUBLE anything. The probability of a catastrophic
failure doesn't change because you've added the replacement drive back
in.

The probability of a 3-drive RAID-5 with a single drive failed having a
catastrophic failure is the same as a 2-drive RAID-0 at any time. The
difference is that when a RAID-0 fails you have total loss; when a
RAID-1 or RAID-5 fails you have no data loss.
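
To make that comparison concrete: a degraded 3-drive RAID-5 and a 2-drive
RAID-0 both have exactly two drives whose failure means data loss, so their
short-term loss probabilities match. A minimal sketch, assuming independent
drives; the 2% rate and 2-day window are illustrative assumptions, not
figures from the thread:

afr = 0.02            # assumed annual failure rate per drive (illustrative)
window_days = 2       # assumed exposure window before the rebuild completes
p_drive = afr * window_days / 365     # chance one drive fails in the window
drives_at_risk = 2    # degraded 3-drive RAID-5 and 2-drive RAID-0 alike
p_loss = 1 - (1 - p_drive) ** drives_at_risk
print(f"P(data loss within {window_days} days) ~= {p_loss:.2e}")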

 
dennis@home

Leythos said:
We're not talking a Data Center, we're not talking a 15 disk array,
we're talking Vista and home users or SOHO users.

Face it, you're wrong.

Face it, you misunderstood what I said.
In a typical home PC you have the ability to have 4 disks; some
controllers on motherboards allow for 6. None of the motherboard
controllers allow for hot spares in the HOME market, so we're not going
to be going there.

In a typical home PC the PSU will fail and take your array down more
frequently than the disks.
Since we're clearly in the HOME user market we're talking about what the
typical motherboard for a home user can handle - that's RAID-1 and RAID-0,
and some can handle RAID-5, none with hot spares.

Where did it say home PC?
In the typical HOME environment the RAID-0 array is going to cause a
total data loss BEFORE the RAID-1 array does. It's that simple.

Did I say it wouldn't?
No I did not.
In the typical home user setup a RAID-0 doesn't gain any significant
performance advantage when stacked against the INCREASED LOSS of
computer usage.

That depends on the application.
You cannot just state that one is better than the other without knowing what
it's being used for.
 
forty-nine

Leythos said:
We're not talking a Data Center, we're not talking a 15 disk array,
we're talking Vista and home users or SOHO users.

Face it, you're wrong.

In a typical home PC you have the ability to have 4 disks; some
controllers on motherboards allow for 6. None of the motherboard
controllers allow for hot spares in the HOME market, so we're not going
to be going there.

Since we're clearly in the HOME user market we're talking about what the
typical motherboard for a home user can handle - that's RAID-1 and RAID-0,
and some can handle RAID-5, none with hot spares.

In the typical HOME environment the RAID-0 array is going to cause a
total data loss BEFORE the RAID-1 array does. It's that simple.

In the typical home user setup a RAID-0 doesn't gain any significant
performance advantage when stacked against the INCREASED LOSS of
computer usage.



Going from RAID 0 to single drives took my disk performance from a 5.9 to
5.3 (just to throw out some numbers) : )
 
Adam Albright

Face it you misunderstood what I said.

Too funny for words, two pompous fanboy windbags arguing with each
other trying to prove who's the least wrong. I'll get the popcorn, sit
back and just watch.

ROTFLMAO!
 
dennis@home

Leythos said:
No, it doesn't DOUBLE anything. The probability of a catastrophic
failure doesn't change because you've added the replacement drive back
in.

It had better change or there wouldn't be any point in replacing the drive.
Have you told all the users of RAID-5 that they don't need to replace the
drive?
The probability of a 3-drive RAID-5 with a single drive failed having a
catastrophic failure is the same as a 2-drive RAID-0 at any time. The
difference is that when a RAID-0 fails you have total loss; when a
RAID-1 or RAID-5 fails you have no data loss.

You don't appear to understand failure modes.
Of course it has twice the chance of a second drive faulting if the replace
and rebuild time doubles.


If you want to try the maths then here is a starter...

The chance of a drive faulting is x.
So the chance of one drive faulting in a two-drive stripe is 2x.
The chance of a drive faulting in a three-drive RAID-5 is 3x (i.e. the
RAID-5 has 1.5 times the fault rate of the striped array).

Now the chances of losing data are a bit more complicated, and you will
need to put some real fault rates in to get a proper answer.

For the striped array to fail you need one fault: 2x the fault rate of the
disks.

Now for a RAID-5 you have to get two faults to fail.
The first fault will occur at 3x the fault rate of the disks (i.e. a fault
will occur in 2/3 the time for the striped array).
Then a second fault must occur before you have replaced the disk and rebuilt
the array (mean time to repair, MTTR).
If I recall my maths, that chance is (2 x failure rate of a disk) x MTTR,
i.e. if it takes twice as long to repair there is twice the chance the array
will fail rather than just fault. This is why *repair times* are critical to
system uptime on fault-tolerant machines.

Now if you follow the maths you will notice that for a two-disk mirror there
is half the chance of the second drive failing during the repair period
compared with the RAID-5 (one disk at risk rather than two).

HTH.
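
A minimal sketch of that model in Python, assuming independent drives with
a constant fault rate; the 2%/year rate and 50-hour repair window are
illustrative assumptions, not figures from the thread:

lam = 0.02                       # assumed per-drive fault rate, faults/year
mttr = 50 / (24 * 365)           # assumed repair window (50 hours) in years

# RAID-0, 2 drives: any single fault loses the array.
loss_raid0 = 2 * lam

# RAID-5, 3 drives: first fault at 3*lam; data loss only if one of the
# 2 surviving drives faults inside the repair window.
loss_raid5 = 3 * lam * (2 * lam * mttr)

# RAID-1, 2 drives: first fault at 2*lam; data loss only if the single
# surviving drive faults inside the repair window.
loss_raid1 = 2 * lam * (1 * lam * mttr)

for name, rate in [("RAID-0", loss_raid0),
                   ("RAID-5", loss_raid5),
                   ("RAID-1", loss_raid1)]:
    print(f"{name}: ~{rate:.2e} data-loss events/year")

# Doubling mttr doubles the RAID-5 and RAID-1 loss rates -- the repair-time
# point above -- while RAID-0 is unchanged, having no redundancy to rebuild.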
 
dennis@home

Adam Albright said:
Too funny for words, two pompous fanboy windbags arguing with each
other trying to prove who's the least wrong. I'll get the popcorn, sit
back and just watch.

Hi crazy,
I wondered how long it would be before your foolish face showed up.
Do you have anything useful to contribute or are you just going to admit it's
all over your head again?
 
DevilsPGD

Leythos said:
With few exceptions, RAID-0 is worthless, even more so for home users
because of the double failure rate of their system, and the few seconds
of difference are not worth the downtime.

A few seconds of difference every time a beneficial operation occurs, vs.
doubling an almost non-existent failure rate?

Perhaps I've been lucky, but I can count the number of
failed-within-warranty drives I've had in my life without using any
fingers at all (not counting DOAs)

I've currently only got 15 drives spinning within arm's reach though. At
$DAYJOB we've got ~40 humans on staff, well over 400 drives, and on average
two drives die within warranty annually.

I don't know about you, but I have zero five+ year old drives spinning,
so I can't speak to out of warranty failures.

So, a less than 1% failure rate seems to be reasonable, and bumping up to a
1% per annum failure rate also doesn't sound that unreasonable.

(Note, this is for drives treated reasonably well, mounted to a properly
grounded case, otherwise not moved for most of their life, not dropped,
run within temperature and humidity ranges, etc, in servers spinning
24/7, on desktops spinning down after 15 minutes of inactivity)

You still need to backup, but you already needed to backup, so that's
nothing new, all we're talking about is time saved vs lost to failure.
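
As a quick sanity check on that figure (a back-of-the-envelope sketch,
taking the ~400 drives and two in-warranty failures per year above at face
value):

drives = 400                 # approximate fleet size stated above
failures_per_year = 2        # average in-warranty failures per year
afr = failures_per_year / drives
print(f"implied annual failure rate: {afr:.2%}")   # 0.50%, i.e. under 1%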
 
Leythos

Face it, you misunderstood what I said.

Not a chance, you don't understand failure statistics, you don't
understand RAID.
In a typical home PC the PSU will fail and take your array down more
frequently than the disks.

No, the array will go down with the entire PC and 99% of the time
nothing will be wrong when you boot back up - not to mention that the
array will rebuild itself if there is an issue.
Where did it say home PC?

We're in a Windows VISTA newsgroup; we've been talking about RAID as it
relates to users; we've not been talking about systems with multiple
RAID controllers or high-end controllers; we're talking about what
people who visit this group might use/find.
Did I say it wouldn't?
No I did not.

Yea, go read your posts again.
That depends on the application.
You cannot just state that one is better than the other without knowing what
it's being used for.

Yes, I can, without knowing the application, because it holds completely
true for the scenario I stated above.

 
Leythos

Going from RAID 0 to single drives took my disk performance from a 5.9 to
5.3 (just to throw out some numbers) : )

So, 11%. Now set up a RAID-1 array and try it again. Reads will be
faster; writes will take the same time as a single-drive write.

 
Leythos

DevilsPGD said:

A few seconds of difference every time a beneficial operation occurs, vs.
doubling an almost non-existent failure rate?

And at the end of each day you won't have been any more productive when
using MS Office and other Office-type apps. If you were using something
that really benefited from RAID-0, then yes, but you would not be using
RAID-0 for your OS drive; your data would be on the RAID-0, and it would
be backed up as needed.
Perhaps I've been lucky, but I can count the number of
failed-within-warranty drives I've had in my life without using any
fingers at all (not counting DOAs)

Almost 3000 systems currently, 1 to 28 drives in each system (servers
have at least 8 drives each, up to 28 drives in the SQL servers)...

We average 1 drive failure per year over the last 5 years.
I've currently only got 15 drives spinning within arm's reach though. At
$DAYJOB we've got ~40 humans on staff, well over 400 drives, and on average
two drives die within warranty annually.

I don't know about you, but I have zero five+ year old drives spinning,
so I can't speak to out of warranty failures.

I have 3 drives that came with the IBM PC-AT that still work when I turn
them on. I have 10+ year old drives under 2GB sitting on a shelf waiting
to be destroyed, but they still work.
So, a less than 1% failure rate seems to be reasonable, and bumping up to a
1% per annum failure rate also doesn't sound that unreasonable.

(Note, this is for drives treated reasonably well, mounted to a properly
grounded case, otherwise not moved for most of their life, not dropped,
run within temperature and humidity ranges, etc, in servers spinning
24/7, on desktops spinning down after 15 minutes of inactivity)

You still need to backup, but you already needed to backup, so that's
nothing new, all we're talking about is time saved vs lost to failure.

We don't use RAID-0 on any of the systems; RAID 1+0 on many, but not
RAID-0. It's not worth the maintenance cost.

 
Synapse Syndrome

Leythos said:
So, 11%. Now set up a RAID-1 array and try it again. Reads will be
faster; writes will take the same time as a single-drive write.


Not necessarily only an 11% benchmark improvement, as 5.9 is the highest
current possible score. I get 5.9 with twin Raptors in RAID-0, and you get
the same score, at the moment, with a 5-disk Ultra-320 SCSI array, for
example.

ss.
 
Adam Albright

Not a chance, you don't understand failure statistics, you don't
understand RAID.

What I understand is both you and Dennis are two of the biggest
crybabies and pompous jerks I've seen post in any newsgroup in years.
What's so damn funny to me is NEITHER of you buffoons knows a damn
thing, but you both keep trying so hard to pretend.

You two have been more fun to watch hitting each other over the head
than watching two monkeys fighting over the same banana.

Give us a break and take it to email.
 
Synapse Syndrome

Leythos said:
If you were using something
that really benefited from RAID-0, then yes, but you would not be using
RAID-0 for your OS drive; your data would be on the RAID-0, and it would
be backed up as needed.

What the hell is the point of having your data on RAID-0 and NOT the OS?
Are you really thick or what?

ss.
 
Leythos

Not necessarily only an 11% benchmark improvement, as 5.9 is the highest
current possible score. I get 5.9 with twin Raptors in RAID-0, and you get
the same score, at the moment, with a 5-disk Ultra-320 SCSI array, for
example.

And on 4 different machines I get 5.9 on the drive performance, with low-end
motherboard RAID-1 controllers using dual SATA-I drives; same with
SATA-II drives.

Now, if you were talking about some application that really shines with
RAID-0, the proper configuration would be: OS on RAID-1, application data
on RAID-0, and the RAID-0 backed up to some storage area as needed.


 
