Are there any RAID experts out there?

LittleRed

OK, I've had it. I cannot get a satisfactory answer from anyone
regarding RAID performance problems, so I am seeking help from the
masses.

Here is the problem:
I have two systems that I have noticed have low disk performance, both
IBM servers.

System 1 - IBM 7100 server, which is quite a nice machine. I have two
separate RAID 5 arrays on two separate controllers (one a ServeRAID 4M,
the other a 4L). One is a four-disk array of 36GB drives, the other a
three-disk array of 72GB drives, all 10K Ultra160s. When I copy a 70GB
file from one array to the other, it does so at an average of 5MB/s
(yes, five). This figure is derived from two sources - the disk
performance counter in Perfmon and actually timing the copy using a
nice little utility called timethis.
I even attached a third array on another controller in a three-disk
RAID 0 configuration and got the same result.
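
In case anyone wants to reproduce the measurement, this is roughly what
the timed copy boils down to, sketched here in Python (the paths are just
placeholders for the two arrays; timethis itself simply runs the command
and reports the elapsed time):

```python
# Time a large file copy and report the average throughput in MB/s.
import os
import shutil
import time

SRC = r"E:\bigfile.dat"   # placeholder: file on the first array
DST = r"F:\bigfile.dat"   # placeholder: destination on the second array

start = time.time()
shutil.copyfile(SRC, DST)
elapsed = time.time() - start

size_mb = os.path.getsize(SRC) / (1024 * 1024)
print(f"{size_mb:.0f} MB in {elapsed:.0f} s = {size_mb / elapsed:.1f} MB/s")
```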

System 2 - A brand new IBM x345 with a ServeRAID 5i controller and three
144GB 15K Ultra320 disks in a RAID 5 configuration (8K). I have tested
this array with ATTO, Bench32, Nbench and a timed file copy to and from
memory (using a 1GB ramdisk). The figures I get are 30MB/s read and
26MB/s write. Given that these disks are specified as having a sustained
read rate of over 80MB/s, you would expect at least that in any RAID
configuration.
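
As a back-of-the-envelope sanity check (idealised ceilings only, ignoring
controller and PCI overhead, and assuming large sequential transfers
stripe cleanly across the spindles):

```python
# Rough ceilings for a 3-disk RAID 5 built from ~80 MB/s drives.
# Real controllers land well below this, but nowhere near 30 MB/s.
disk_rate = 80    # MB/s sustained, per the drive spec sheet
n_disks = 3

# One stripe unit per stripe is parity, so a conservative ceiling for
# large sequential transfers is (n_disks - 1) drives' worth of bandwidth.
read_ceiling = (n_disks - 1) * disk_rate    # ~160 MB/s
write_ceiling = (n_disks - 1) * disk_rate   # ~160 MB/s for full-stripe writes

print(f"sequential read ceiling  ~{read_ceiling} MB/s (measured: 30 MB/s)")
print(f"sequential write ceiling ~{write_ceiling} MB/s (measured: 26 MB/s)")
```

Even a single drive's quoted 80MB/s sustained rate is well above the
30MB/s the whole array is delivering.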

I have spent too much time on the phone to IBM support and nobody seems
to know exactly what figure I should get. It seems that IBM have never
actually benchmarked their RAID systems. All they can tell me is that a
RAID system should deliver 'superior' performance. Compared to what? A
floppy disk? One engineer even told me that I should expect lower
performance from a RAID array than from a single disk (err, that's not
what your brochure says).

Now I know there is a whole science to determining which RAID
configuration best suits different requirements - database, file
server and so on - but if you can't even get reasonable performance
from a simple file copy, what sort of performance are you going to get
on a busy database?

All I am trying to find out is what rate of throughput I should expect
from these systems, because in my opinion, what I am getting is not what
I paid for. It is also costing me a lot of time because I have to wait
over five hours to copy a file that should be done in about half an
hour.
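
The time difference is straightforward arithmetic (the 40MB/s figure is
just an arbitrary 'modest' rate that would give the half-hour copy):

```python
# Copy time for a 70GB file at the rate I'm getting vs. a modest target rate.
size_mb = 70 * 1024
for rate in (5, 40):    # MB/s
    print(f"{rate} MB/s -> {size_mb / rate / 3600:.1f} hours")
# 5 MB/s  -> ~4 hours of raw transfer time alone (longer still in practice)
# 40 MB/s -> ~0.5 hours
```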

Does anybody out there know why these figures are so low, or where I can
go to find the answers I am looking for? Perhaps some comparative
figures, or maybe a standard test that I can perform.
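
Failing an official benchmark, the simplest repeatable test I can think
of is a raw sequential write and read of a file much larger than
installed RAM, something along these lines (path and sizes are
placeholders, and the read figure only means anything if the file can't
be cached):

```python
# Crude sequential throughput test: write then read a large file in 1MB blocks.
import os
import time

PATH = r"E:\seqtest.bin"   # placeholder: somewhere on the array under test
BLOCK = 1024 * 1024        # 1 MB per I/O
COUNT = 4096               # 4 GB total - keep this well above installed RAM

buf = os.urandom(BLOCK)

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(COUNT):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
print(f"write ~{COUNT / (time.time() - start):.0f} MB/s")

start = time.time()
with open(PATH, "rb") as f:
    while f.read(BLOCK):
        pass
print(f"read  ~{COUNT / (time.time() - start):.0f} MB/s")

os.remove(PATH)
```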

Please, any help would be appreciated.
 
Will Dormann

LittleRed said:
OK, I've had it. I cannot get a satisfactory answer from anyone
regarding RAID performance problems, so I am seeking help from the
masses.

Here is the problem:
I have two systems that I have noticed have low disk performance, both
IBM servers.

System 1 - IBM 7100 server, which is quite a nice machine. I have two
separate RAID 5 arrays on two separate controllers (one a ServeRAID 4M,
the other a 4L). One is a four-disk array of 36GB drives, the other a
three-disk array of 72GB drives, all 10K Ultra160s. When I copy a 70GB
file from one array to the other, it does so at an average of 5MB/s
(yes, five). This figure is derived from two sources - the disk
performance counter in Perfmon and actually timing the copy using a
nice little utility called timethis.
I even attached a third array on another controller in a three-disk
RAID 0 configuration and got the same result.


Is one of the arrays in a degraded state? (failed disk)


-WD
 
idunno

The people at IBM can't give you an X MB/sec figure because it all
depends on too many variables, and it doesn't really translate well to
the real world. It depends on the RAID level, the model of disks, the
number of disks, the stripe size, the file system, the file system's
cluster size, the size of the files you are moving... They also don't
benchmark that way because I/Os are more important in the context of
most server applications.

Writes in RAID 5 can be atrocious unless you have a truly fantastic
controller. The ServeRAID 4L is an old, long-retired controller with a
16MB write cache and an i960 processor. You need at least 64MB for
halfway decent RAID 5 performance. The ServeRAID 4L will handle RAID 1E
and RAID 10 quite nicely. The 4M should do RAID 5 decently but not
spectacularly. It uses the same circuit board as the 4L but has a
second channel and 64MB of cache. Since you are copying between the
arrays, its performance may be limited by the 4L. Change the 4L array to
RAID 1E, set the stripe size on both arrays to the maximum, enable
spindle sync everywhere, and see what happens.
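
To put rough numbers on why RAID 5 writes fall apart when the cache
can't absorb them (the per-drive IOPS figure is just a ballpark
assumption for a 10K drive):

```python
# Classic RAID 5 small-write penalty: each host write turns into
# read old data + read old parity + write new data + write new parity = 4 I/Os.
disk_iops = 100      # ballpark for a 10K RPM drive (assumption)
n_disks = 4

raw_iops = n_disks * disk_iops
raid5_writes = raw_iops / 4    # read-modify-write penalty
raid10_writes = raw_iops / 2   # each write goes to two mirrored drives

print(f"random write IOPS: raw {raw_iops}, RAID 5 ~{raid5_writes:.0f}, "
      f"RAID 10/1E ~{raid10_writes:.0f}")
```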

The 5i should do much better than those older controllers - and oh
look, it does. But RAID 5 isn't really a super-duper, absolute-highest-
bandwidth RAID level, so it's no surprise it doesn't really
blow your pants off. If you want to see the maximum MB/sec the array
can do, or get optimal single-user performance, you should try RAID 0,
3, 10, 0+1 or 1E (IBM's variant of RAID 10). When you do this you should
use the largest stripe size and enable spindle sync. The file system
you use should also have a large cluster size. Once you get that
impressive MB/sec number that you can brag about, you'll probably end
up configuring the arrays totally differently for production. ;)
 
Chevalier des Bois

Three disks is very few for RAID 5; four is the bare minimum.
Better performance = more disks. The sweet spot is usually said to be
around 7-8 disks, which is why people concerned with performance usually
use 18 or 36GB disks.
But do not dream too much if you are in a Windows environment.
 
_

Three disks is very few for RAID 5; four is the bare minimum.
Better performance = more disks. The sweet spot is usually said to be
around 7-8 disks, which is why people concerned with performance usually
use 18 or 36GB disks.
But do not dream too much if you are in a Windows environment.

What's a good PCI card for something like RAID 5 in a Windows
environment?
 
Eric Gisin

If you mean desktop Windows, RAID 5 isn't terribly useful. RAID 0, 1, or
1+0 are better.
 
Chevalier des Bois

First of all, the main bottleneck is the PCI bus.
Be sure your machine has 64-bit-wide PCI (and that the card supports it, too).
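
For reference, the theoretical peaks of the common bus flavours, which
are shared by everything sitting on that bus, so real throughput is
lower:

```python
# Theoretical PCI bus ceilings: bus width (bytes) x clock (MHz) = MB/s.
buses = {
    "32-bit / 33 MHz PCI":    4 * 33.33,   # ~133 MB/s
    "64-bit / 33 MHz PCI":    8 * 33.33,   # ~266 MB/s
    "64-bit / 66 MHz PCI":    8 * 66.66,   # ~533 MB/s
    "64-bit / 100 MHz PCI-X": 8 * 100.0,   # ~800 MB/s
}
for name, peak in buses.items():
    print(f"{name}: ~{peak:.0f} MB/s peak")
```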
 
