Optimal RAID 0 stripe size for installing the OS?

achen

I've done research on this for two days, and all of my questions have
been answered except this last one. My post isn't a question that
requires someone to explain everything from the beginning. :)

A couple of simple conclusions:
1. Small stripe size for random access (OS, applications); large
stripe size for data (large files, storage).
2. You get optimal performance when stripe size = cluster size, or
(number of drives * stripe size) = cluster size (see the sketch below).
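
To make conclusion 2 concrete, here is a minimal Python sketch (my own
illustration, not from the thread; the 2-drive array and the sizes
shown are assumptions) of how consecutive 4k clusters land on the
drives for two different stripe sizes:

NUM_DRIVES = 2  # assumed 2-drive RAID 0 array

def drive_for_offset(offset_bytes, stripe_size_bytes):
    # Stripes are dealt out round-robin across the drives.
    stripe_index = offset_bytes // stripe_size_bytes
    return stripe_index % NUM_DRIVES

# A 16k file read as four consecutive 4k clusters:
for stripe in (4 * 1024, 64 * 1024):
    drives = [drive_for_offset(i * 4096, stripe) for i in range(4)]
    print(f"{stripe // 1024}k stripe -> clusters on drives {drives}")

# With a 4k stripe the four clusters alternate [0, 1, 0, 1], engaging
# both spindles; with a 64k stripe they all land on drive 0.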


My question is, the default cluster size for Windows OS is 4k, and I
believe it has stayed that way from Windows 2000 until now for a
reason: the MS engineers must have determined this is the best choice.
OK, so let me assume I don't change it, although a lot of people
suggest changing it to 16k.


Why have I never seen anyone suggest 4k as the stripe size? From all
I've read, it's either 16k or 32k. If the concept of "stripe size =
cluster size" is correct, wouldn't it be optimal to set a 4k stripe on
a disk formatted by the Windows installation CD, whose cluster size is
4k?


If a 4k option is available on the RAID controller (ICH9R), is there
any reason not to use it? Is there any downside to 4k? Is it
considered TOO SMALL, so that it would increase the burden on the RAID
controller and reduce overall performance?
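
A rough sketch of that concern (my own back-of-envelope, not from the
thread): the smaller the stripe, the more per-drive chunks the
controller has to issue to satisfy one large sequential read.

def chunks_for_read(read_bytes, stripe_bytes):
    # One chunk per stripe the read touches (assumes a stripe-aligned read).
    return -(-read_bytes // stripe_bytes)  # ceiling division

for stripe_k in (4, 16, 32, 64, 128):
    n = chunks_for_read(1024 * 1024, stripe_k * 1024)
    print(f"{stripe_k:>3}k stripe: 1 MB sequential read -> {n} chunks")

# A 4k stripe turns a single 1 MB read into 256 chunks; a 128k stripe, into 8.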
 
Ken Blake, MVP

achen said:
My question is, the default cluster size for Windows OS is 4k


No, that's not the "default cluster size for Windows OS," it's the
default cluster size for NTFS.

I don't have any suggestions regarding cluster size for striping, but
I'd like to recommend *against* striping. Although RAID 0 sounds
like it gives a substantial speed improvement, in practice the actual
improvement is usually almost unnoticeable. And it has a severe
downside: if either drive fails, you lose everything on both drives.
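
A back-of-envelope way to see that risk (the failure rate here is an
assumption, not a measured figure): if each drive independently fails
with probability p over some period, an n-drive stripe loses
everything with probability 1 - (1 - p)**n.

p = 0.03  # assumed 3% chance of one drive failing in a given year
for n in (1, 2, 4):
    print(f"{n} drive(s) striped: {1 - (1 - p)**n:.4f} chance of losing everything")

# With two striped drives the risk is 0.0591, nearly double a single drive's.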

I used to run RAID 0 on this machine, and stopped using it several
months ago. I decided that the increased risk wasn't worth the very
small speed improvement. My experience since then has been what I
expected: I can't discern any difference in speed with or without
RAID 0.
 
achen

Sorry for the incorrect description; you know I meant "the default
cluster size used by the format tool on the Windows OS install CD",
right? :)
 
Robert Moir

achen said:
Sorry for the incorrect description; you know I meant "the default
cluster size used by the format tool on the Windows OS install CD",
right? :)

While I can't speak for Ken, I don't believe it changes the answer.

The sort of "Baby's first RAID" support you get with the average
desktop hard drive in RAID 0 shows a remarkably small return on
performance for a very high risk of losing the system in the event of
a problem.
 
Ken Blake, MVP

Robert Moir said:
While I can't speak for Ken, I don't believe it changes the answer.

The sort of "Baby's first RAID" support you get with the average
desktop hard drive in RAID 0 shows a remarkably small return on
performance for a very high risk of losing the system in the event of
a problem.



I must have missed his earlier message (I've been away for the
weekend), but yes, I completely agree.
 
achen

I totally understand the risk of putting the OS on a RAID 0 array,
but with all due respect I disagree with the conclusion of a
"remarkably small return on performance". To me and other folks who
have used RAID 0, the increase in performance is significant if you
configure it correctly. Timing just the boot, it was reduced from 65
seconds to 45 seconds.

Taking the risk of losing the drives to disk failure out of the
picture, I think I've found the answer. Over the long weekend I
created SIX RAID 0 arrays with different stripe sizes (4k, 8k, 16k,
32k, 64k, 128k), restored my OS drive from an image each time, and
ran some tests (a sketch of that kind of test follows below). The
best setting "for the OS" falls at 16k or 32k.
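
For reference, here is a minimal sketch of the kind of test described
above (the file path, block size, and read count are assumptions; no
tool was named in the thread). It times random 4k reads against a
large file on the array; note the OS file cache will inflate the
numbers unless the file is much larger than RAM or caching is
bypassed.

import os, random, time

PATH = r"C:\bench\testfile.bin"  # hypothetical large file on the RAID 0 volume
BLOCK = 4096                     # matches the NTFS default cluster size
READS = 2000

size = os.path.getsize(PATH)
with open(PATH, "rb") as f:
    start = time.perf_counter()
    for _ in range(READS):
        f.seek(random.randrange(0, size - BLOCK))
        f.read(BLOCK)
    elapsed = time.perf_counter() - start
print(f"{READS} random {BLOCK}-byte reads took {elapsed:.2f}s")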

It is definitely worth it if you keep your data on a different drive,
to the point that if the RAID 0 fails you won't shed a tear for it.
Disk performance is the whole purpose of RAID 0, isn't it?
 
Ken Blake, MVP

achen said:
I totally understand the risk of putting the OS on a RAID 0 array,
but with all due respect I disagree with the conclusion of a
"remarkably small return on performance". To me and other folks who
have used RAID 0, the increase in performance is significant if you
configure it correctly.


My experience and that of many others I know is exactly the opposite
of yours.

Timing just the boot, it was reduced from 65 seconds to 45 seconds.

Taking the risk of losing the drives to disk failure out of the
picture, I think I've found the answer. Over the long weekend I
created SIX RAID 0 arrays with different stripe sizes (4k, 8k, 16k,
32k, 64k, 128k), restored my OS drive from an image each time, and
ran some tests. The best setting "for the OS" falls at 16k or 32k.

It is definitely worth it if you keep your data on a different drive,
to the point that if the RAID 0 fails you won't shed a tear for it.
Disk performance is the whole purpose of RAID 0, isn't it?


In theory, yes. In practice, no.
 
