What's reasonable RAID 5 performance?


kenw

I have a nice new server with 15,000RPM SCSI drives in a hardware RAID 5
configuration. I may be wrong, but I don't think it's performing properly
at all. The question is, what's reasonable?

When using large (>500MB) files to swamp out cache effects, I'm getting
roughly 12MB/sec write performance (it varies quite a bit) and maybe
200MB/sec read, as measured with IOzone.

Measuring with batch 'copy' commands, I'm getting 40MB/sec read-only (copy
to NUL:) and about 14MB/sec copying back to the same array. One of the
challenges has been getting consistent results; not sure why.
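
For a more repeatable read number than batch copy, one option is to time the
sequential read directly. A minimal sketch, assuming Python is installed on
the server and that D:\bigfile is a hypothetical >500MB test file already
sitting on the array:

import os, time

PATH = r"D:\bigfile"     # hypothetical >500MB test file on the array
CHUNK = 1024 * 1024      # read in 1MB chunks

size = os.path.getsize(PATH)
start = time.time()
with open(PATH, "rb", buffering=0) as f:   # unbuffered, so Python adds no buffering of its own
    while f.read(CHUNK):                   # read and discard, like copy to NUL:
        pass
elapsed = time.time() - start
print("read %d MB in %.1f s = %.1f MB/s" % (size // 2**20, elapsed, size / 2**20 / elapsed))

Running it a few times back to back should show whether the variation is in
the array or in the copy tools.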

These numbers strike me as being OTL (out to lunch) for such
high-performance drives and array controller.

The array consists of five Seagate Cheetah ST336753LC 15,000RPM drives
connected via a 320MB/sec SCSI interface. The controller is an Intel
SRCU42X as provided by Intel, i.e., the standard 128MB cache, and no
on-board battery (the server has a UPS and redundant power supplies
connected to separate power sources) which means the controller will do
write-through, but not write-back, caching. No override is available.

Intel claims that this is the optimum configuration for the controller, and
that more RAM or the battery pack will not help performance significantly.
The motherboard, BTW, is an Intel SE7501HG2 with dual 2.8GHz Xeons and 2GB
of RAM.

There must be thousands of RAID 5 arrays out there very similar to this
one. _Somebody_ must know. Are these performance figures reasonable, or
not?
Ken Wallewein
K&M Systems Integration
Phone (403)274-7848
Fax (403)275-4535
(e-mail address removed)
www.kmsi.net
 

Eric Gisin

I have a nice new server with 15,000RPM SCSI drives in a hardware RAID 5
configuration. I may be wrong, but I don't think it's performing properly
at all. The question is, what's reasonable?

When using large (>500MB) files to swamp out cache effects, I'm getting
roughly 12MB/sec write performance (it varies quite a bit) and maybe
200MB/sec read, as measured with IOzone.

Measuring with batch 'copy' commands, I'm getting 40MB/sec read-only (copy
to NUL:) and about 14MB/sec copying back to the same array. One of the
challenges has been getting consistent results; not sure why.

The destination file has to be contiguous to get proper results. That is
not likely with cmd's copy or with Sandra. Xcopy can do it if it can
preallocate contiguous free space, but it will still be seek-bound if you
have only a single array.

Create a 10MB temp file (so it stays in cache), and do "copy/b big+big+(18
more) bigger". Or xcopy from a server if you have GB ethernet.
 

Arno Wagner

kenw said:
I have a nice new server with 15,000RPM SCSI drives in a hardware RAID 5
configuration. I may be wrong, but I don't think it's performing properly
at all. The question is, what's reasonable?
When using large (>500MB) files to swamp out cache effects, I'm getting
roughly 12MB/sec write performance (it varies quite a bit) and maybe
200MB/sec read, as measured with IOzone.

One data point: on a software RAID 5 under Linux 2.6.7 with 5 Maxtor 200GB
DiamondMax Plus 9 drives, I get 22MB/s large-file write performance
(measured with a 1GB data file) and 65MB/s read performance. That is with
the ext3 journalling file system, which also journals data, not only
metadata.

Measuring with batch 'copy' commands, I'm getting 40MB/sec read-only (copy
to NUL:) and about 14MB/sec copying back to the same array. One of the
challenges has been getting consistent results; not sure why.

These numbers strike me as being OTL (out to lunch) for such
high-performance drives and array controller.

I would say the performance is rather embarrassing when a software
RAID on half-as-fast disks performs massively better. However, I
recently made the mistake of buying an Adaptec SATA RAID controller;
it is also slower than software RAID. I also recently talked to some
guy running huge Usenet servers: they have also noted that hardware
RAID is now slower than software RAID. As soon as Linux supports
ATA/SATA hotplugging, the last advantage of hardware RAID will
be gone.

Arno
 

Ron Reaugh

Eric Gisin said:
Create a 10MB temp file (so it stays in cache), and do "copy/b big+big+(18
more) bigger". Or xcopy from a server if you have GB ethernet.

Gigabit isn't fast enough here.
 

Ron Reaugh

Arno Wagner said:
it is also slower than software RAID. I also recently talked to some
guy running huge Usenet servers: they have also noted that hardware
RAID is now slower than software RAID.

Some HW RAID may be slower, but not the right stuff configured properly. SW
RAID is moving in on most of the territory, though.
 

Rob Turk

[SNIP]
The controller is an Intel
SRCU42X as provided by Intel, i.e., the standard 128MB cache, and no
on-board battery (the server has a UPS and redundant power supplies
connected to separate power sources) which means the controller will do
write-through, but not write-back, caching. No override is available.

If your 12MB/s is really what you get, then flushing your 128MB cache takes
10 seconds. If some idiot decides to push the power button on the server, it
will switch off before your cache is flushed. Maybe not such a bad idea to
get the battery option anyway??
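
For the record, that flush time is just the cache size over the measured
write rate:

$$ \frac{128\ \text{MB}}{12\ \text{MB/s}} \approx 10.7\ \text{s} $$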

Rob
 

kenw

Ron Reaugh said:
Gigabit isn't fast enough here.

Sure it is. It's far faster than the throughput I'm getting from the array
right now, and faster than a 32-bit PCI bus (the RAID server's is 64-bit). As
it happens, the only system I currently have to trade files with has a
32-bit PCI bus, and when I watch network utilization, the bottleneck is
obvious.

A gigabit network should be able to approach 100MB/sec -- say, at least 80.
If I was getting that from my RAID array, I'd be happy.

BTW, the copy-append suggestion in Eric's message is a great idea. It
effectively lets me do write-only performance testing, almost the
reverse of my copy-to-NUL read test. Cool!

Unfortunately, none of this either confirms or denies whether my current
RAID 5 performance is reasonable.

/kenw
Ken Wallewein
K&M Systems Integration
Phone (403)274-7848
Fax (403)275-4535
(e-mail address removed)
www.kmsi.net
 

Ron Reaugh

kenw said:
Sure it is. It's far faster than the throughput I'm getting from the array
right now,

You just contradicted yourself overall. Your stated goal is "should be
getting". For that, gigabit is NOT fast enough. What about the 200?

and faster than a 32-bit PCI bus (the RAID server's is 64-bit).

No, gigabit is about the same speed as the peak of 32-bit 33MHz PCI.

As it happens, the only system I currently have to trade files with has a
32-bit PCI bus, and when I watch network utilization, the bottleneck is
obvious.

Rethink what you are watching.

A gigabit network should be able to approach 100MB/sec -- say, at least
80.

That's what I've said, and 32-bit 33.3MHz PCI does 133.3 MB/sec.

If I was getting that from my RAID array, I'd be happy.

What about the 200?
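
For reference, the peak numbers behind that comparison work out as:

$$ 4\ \text{bytes} \times 33.3\ \text{MHz} \approx 133\ \text{MB/s (32-bit PCI)}, \qquad
\frac{1\ \text{Gbit/s}}{8\ \text{bits/byte}} = 125\ \text{MB/s (gigabit Ethernet, raw)} $$

so a gigabit link running at the 80-100MB/sec kenw quotes and plain
32-bit/33MHz PCI top out in the same ballpark.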
 

Folkert Rienstra

Rob Turk said:
If your 12MB/s is really what you get, then flushing your 128MB cache
takes 10 seconds.

Oh?
What about the "the controller will do write-through, but not write-back, caching"?
 

Ron Reaugh

Folkert Rienstra said:
[SNIP]
The controller is an Intel
SRCU42X as provided by Intel, i.e., the standard 128MB cache, and no
on-board battery (the server has a UPS and redundant power supplies
connected to separate power sources) which means the controller will do
write-through, but not write-back, caching. No override is available.

If your 12MB/s is really what you get, then flushing your 128MB cache
takes 10 seconds.

Oh?
What about the "the controller will do write-through, but not write-back,
caching"?

Read the thread before blathering. The controller WILL do write-back when
it has a battery.
 
