How I built a 2.8TB RAID storage array

John-Paul Stewart

Steve said:
The numbers that you posted from Bonnie++, if I followed them correctly,
showed max throughputs in the 20 MB/second range. That seems awfully slow
for this sort of setup.

I noticed that, too, but then noticed that the OP seemed to be running
three copies of Bonnie++ in parallel. His command line was:

'bonnie++ -s 4G -m 3ware-swraid5-type -p 3 ; \
bonnie++ -s 4G -m 3ware-swraid5-type-c1 -y & \
bonnie++ -s 4G -m 3ware-swraid5-type-c2 -y & \
bonnie++ -s 4G -m 3ware-swraid5-type-c3 -y &'

I'm no expert, but if he's running three in parallel on the same
software RAID, I'd suspect that the total performance should be taken as
the *sum* of those three---or over 60 MB/sec.
As a comparison, I have two machines with software RAID 5 arrays: one a
dual-866MHz P3 system with 5x120-gig drives, the other an A64 system with
8x300-gig drives. Both of them can read and write to/from their RAID 5
arrays at 45+ MB/s, even with the controller cards plugged into a single
32-bit/33MHz PCI bus.

As another point of comparison: 5x73GB SCSI drives, software RAID-5,
one U160 SCSI channel, 32-bit/33-MHz bus, dual 1GHz P-III: writes at 36
MB/sec and reads at 74 MB/sec.
 
Peter

(Actually, the 7506 cards are 66MHz PCI-X, so they don't take full
advantage of the theoretical bandwidth available on the slots,
anyway.)

There is no 66MHz PCI-X.
3Ware 7506 cards are PCI 2.2-compliant 64-bit/66MHz bus masters.
 
Folkert Rienstra

Peter said:
There is no 66MHz PCI-X.

The PCI-SIG seems to think differently. Perhaps you know better than
they do? And contrary to what you say elsewhere, they say there is no
100MHz spec; that was added by the industry.
 
Yeechang Lee

I wrote earlier:
This does concern me. How the heck do I tell them apart, even now?
How do I figure out which drive is sda, which is sdb, which is sdc,
etc., etc.?

As it turns out, it proved straightforward to use either 'smartctl -a
--device=3ware,[0-3] /dev/twe[0-1]' or 3Ware's 3dm2 and tw_cli tools
(available on their Web site) to read the serial numbers of the drives.
So mystery solved.
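
For anyone who wants to script the mapping, here is a rough sketch in
Python (untested; it assumes smartmontools is installed and that the
controllers appear as /dev/twe0 and /dev/twe1 with drives on ports 0-3,
as in the command above):

    import re
    import subprocess

    # Walk both 3Ware controllers and print the serial number each
    # drive reports, one per port (ports 0-3 on each card).
    for ctrl in ("/dev/twe0", "/dev/twe1"):
        for port in range(4):
            out = subprocess.run(
                ["smartctl", "-i", f"--device=3ware,{port}", ctrl],
                capture_output=True, text=True).stdout
            m = re.search(r"Serial Number:\s*(\S+)", out)
            print(ctrl, "port", port, "->",
                  m.group(1) if m else "no serial found")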
 
Yeechang Lee

Peter said:
There is no 66MHz PCI-X.
3Ware 7506 cards are PCI 2.2 compliant 64-bit/66MHz bus master.

What's the difference? I thought 64-bit/66MHz PCI *was* PCI-X.
 
dg

Rod Speed said:
That's measuring the power INTO the power supply, not what it's supplying,
so it isn't very useful for checking how close you are getting to the PSU rating.

It's just a matter of time before all power supplies have some sort of
load-monitoring method, just like almost all motherboards now have software
for monitoring fan speeds, temperatures, and voltages from the PSU. Has
anybody seen a smart power supply that can indicate load?

--Dan
 
Steve Wolfe

John-Paul Stewart said:
I noticed that, too, but then noticed that the OP seemed to be running
three copies of Bonnie++ in parallel. His command line was:

'bonnie++ -s 4G -m 3ware-swraid5-type -p 3 ; \
bonnie++ -s 4G -m 3ware-swraid5-type-c1 -y & \
bonnie++ -s 4G -m 3ware-swraid5-type-c2 -y & \
bonnie++ -s 4G -m 3ware-swraid5-type-c3 -y &'

I'm no expert, but if he's running three in parallel on the same
software RAID, I'd suspect that the total performance should be taken as
the *sum* of those three---or over 60 MB/sec.

Good point - I missed that!

steve
 
Eric Gisin

Yeechang Lee said:
What's the difference? I thought 64-bit/66MHz PCI *was* PCI-X.

Both standards have that combo, but PCI-X is 10-30% faster. PCI-X 1.0 is
66/100/133MHz, 32/64 bits.
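
(As a back-of-envelope reference, peak bus bandwidth is just width times
clock. A quick Python sketch; these are theoretical peaks, and real
throughput is a good deal lower:)

    # Peak bus bandwidth = (width in bytes) * (clock in MHz), in MB/s.
    for name, bits, mhz in [("PCI 32-bit/33MHz", 32, 33),
                            ("PCI 64-bit/66MHz", 64, 66),
                            ("PCI-X 64-bit/133MHz", 64, 133)]:
        print(f"{name}: {bits // 8 * mhz} MB/s peak")
    # -> 132 MB/s, 528 MB/s, and 1064 MB/s respectively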
 
Anton Ertl

Rod Speed said (quoted by dg):
That's measuring the power INTO the power supply, not what it's supplying,
so it isn't very useful for checking how close you are getting to the PSU rating.

Sure, but it certainly gives an upper limit for the output of the PSU.
So since my PSU never draws more than 180W on my Athlon 64 box, I know
that my 365W power supply is overdimensioned.

Of course one also has to take the load for the different voltages
into consideration, not just the overall rating, and the input wattage
does not help that much there.

PSU efficiency for typical loads seems to be around 70%-75% (give or
take a few percent depending on the quality of the PSU).
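
(Putting those two figures together, a back-of-envelope sketch; the 72%
efficiency used here is an assumption within that range, not a measurement:)

    # What a 180 W draw at the wall implies about the actual DC load.
    input_watts = 180
    efficiency = 0.72  # assumed, within the 70%-75% range above
    print(f"~{input_watts * efficiency:.0f} W delivered vs. a 365 W rating")
    # -> ~130 W delivered vs. a 365 W rating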

dg said:
It's just a matter of time before all power supplies have some sort of
load-monitoring method, just like almost all motherboards now have software
for monitoring fan speeds, temperatures, and voltages from the PSU. Has
anybody seen a smart power supply that can indicate load?

That would be a bad move on the part of the PSU manufacturers: It
would cost them money to include this feature, and it would convince
their customers to get smaller (cheaper) PSUs next time.

Followups set to colh, because I read that.

- anton
 
Folkert Rienstra

Anton Ertl said:
Sure, but it certainly gives an upper limit for the output of the PSU.
So since my PSU never draws more than 180W on my Athlon 64 box, I know
that my 365W power supply is overdimensioned.

Of course one also has to take the load for the different voltages
into consideration, not just the overall rating, and the input wattage
does not help that much there.

PSU efficiency for typical loads seems to be around 70%-75% (give or
take a few percent depending on the quality of the PSU).

That would be a bad move on the part of the PSU manufacturers: It
would cost them money to include this feature, and it would convince
their customers to get smaller (cheaper) PSUs next time.
Followups set to colh, because I read that.

Right, and to hell with everyone else, who doesn't. Stupid troll.
 
Rod Speed

Anton Ertl said:
Sure, but it certainly gives an upper limit for the output of the PSU.
So since my PSU never draws more than 180W on my Athlon 64
box, I know that my 365W power supply is overdimensioned.

In that particular situation you know that anyway from a calculation.

Of course one also has to take the load for the different
voltages into consideration, not just the overall rating,
and the input wattage does not help that much there.

Which is what I originally said.

PSU efficiency for typical loads seems to be around 70%-75%
(give or take a few percent depending on the quality of the PSU).

Utterly mangled all over again.

That would be a bad move on the part of the PSU manufacturers:

Wrong. Some would buy a supply like that.

It would cost them money to include this feature, and it would
convince their customers to get smaller (cheaper) PSUs next time.

You don't know that either.

Followups set to colh, because I read that.

**** that. You have always been, and always
will be, completely and utterly irrelevant.
 
Jon Forrest

Yeechang said:
No, the consensus is that Linux software RAID 5 has the edge on even
3Ware (the consensus hardware RAID leader). See, among others,
<URL:http://www.chemistry.wustl.edu/~gelb/castle_raid.html> (which
does note that software striping two 3Ware hardware RAID 5 solutions
"might be competitive" with software) and
<URL:http://staff.chess.cornell.edu/~schuller/raid.html> (which states
that no, all-software still has the edge in such a scenario).

I'm going to try to read these articles closely because
this conclusion, that software RAID performs better than
hardware RAID, is counterintuitive, especially when doing
lots of writes. Of course, if you're not CPU bound and
have CPU cycles to spare, then you might not see any real
slowdown. But, offloading the RAID computations from the main
CPU to an I/O controller should reduce the load on the CPU.
(This also presumes that the interface between the controller
and the CPU is fast enough so that I/O setup and teardown isn't
significant.)
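
(For concreteness, the RAID-5 computation being offloaded is essentially
XOR parity over each stripe. A toy Python sketch of the idea, not how any
real controller or md implements it:)

    # The parity chunk is the byte-wise XOR of the data chunks in a
    # stripe; any single lost chunk can be rebuilt by XOR-ing the
    # surviving chunks with the parity.
    def raid5_parity(chunks):
        parity = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, b in enumerate(chunk):
                parity[i] ^= b
        return bytes(parity)

    stripe = [bytes([d]) * 8 for d in (1, 2, 3)]  # three toy 8-byte chunks
    p = raid5_parity(stripe)
    # "Lose" chunk 1 and rebuild it from the survivors plus parity:
    assert raid5_parity([stripe[0], stripe[2], p]) == stripe[1]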

Jon Forrest
 
Yeechang Lee

Jon said:
I'm going to try to read these articles closely because
this conclusion, that software RAID performs better than
hardware RAID, is anti-intuitive, especially when doing
lots of writes. Of course, if you're not CPU bound and
have CPU cycles to spare, then you might not see any real
slowdown.

The argument, as I understand it, is that $480 (the amount I paid for
the two 3Ware cards) buys a lot more CPU power from Intel or AMD than
from 3Ware, Adaptec, Promise, or Broadcom.

On the server under discussion, the md0_raid5 process takes 5-8% of the
CPU time of one of the dual Xeons. I've never seen it go much higher,
even during a rebuild.
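
(To reproduce the measurement, a quick spot-check from Python; this
assumes a procps-style ps and that the kernel thread is named md0_raid5,
as above:)

    import subprocess

    # Print the PID, CPU share, and name of the md RAID-5 kernel thread.
    print(subprocess.run(["ps", "-C", "md0_raid5", "-o", "pid,%cpu,comm"],
                         capture_output=True, text=True).stdout)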
 
John-Paul Stewart

Yeechang said:
The argument, as I understand it, is that $480 (the amount I paid for
the two 3Ware cards) buys a lot more CPU power from Intel or AMD than
from 3Ware, Adaptec, Promise, or Broadcom.

Yes. If you look at the CPUs on RAID cards, they're a lot less
powerful than the host CPU (even on the most expensive $1000+ cards).
However, that assumes that there are CPU cycles available on the host
(i.e., it is *not* CPU bound, as the previous poster mentioned).

I haven't seen any benchmarks comparing software RAID to hardware RAID
where the host CPU was heavily used. They always seem to be done on
otherwise unloaded systems. But then, everything I've read agrees with
the previous poster's assessment that hardware RAID will win when the
host CPU is otherwise occupied.
 
Jon Forrest

Yeechang said:
The argument, as I understand it, is that $480 (the amount I paid for
the two 3Ware cards) buys a lot more CPU power from Intel or AMD than
from 3Ware, Adaptec, Promise, or Broadcom.

OK, so this is a cost/performance question, not a pure
performance question. I can see how this is true, because general-purpose
hardware (e.g., Intel/AMD CPUs) is generally less expensive
than special-purpose hardware (e.g., 3Ware I/O cards).

On the server under discussion, the md0_raid5 process takes 5-8% of the
CPU time of one of the dual Xeons. I've never seen it go much higher,
even during a rebuild.

This means that something else is limiting its performance. It's
probably something physical in the disks themselves since you're
not hitting the limit imposed by the ATA interface (I think; I
no longer have your original posting, which was very interesting).

Jon
 
Jon Forrest

John-Paul Stewart said:
Yes. If you look at the CPUs on RAID cards, they're a lot less
powerful than the host CPU (even on the most expensive $1000+ cards).

That's because, other than performing the XOR operations
for writes, they don't have to do very much.

I haven't seen any benchmarks comparing software RAID to hardware RAID
where the host CPU was heavily used. They always seem to be done on
otherwise unloaded systems. But then, everything I've read agrees with
the previous poster's assessment that hardware RAID will win when the
host CPU is otherwise occupied.

Right. Even when a server is busy, satisfying read requests and non-RAID-5
requests shouldn't add much to the load. Most of the work is done
by the intelligence built into the ATA or SCSI electronics on the disk.
The latency imposed by the movement of the arms and platters dominates
the latency caused by a busy CPU.
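
(Rough numbers make the point. Both figures in this sketch are
assumptions in the right ballpark, not measurements:)

    # Mechanical latency dwarfs the CPU's share of the parity work.
    seek_ms = 8.0                    # assumed avg seek + rotational latency
    xor_ms = 64 * 1024 / 1e9 * 1e3   # XOR a 64 KiB chunk at ~1 GB/s, in ms
    print(f"seek ~{seek_ms} ms vs. parity ~{xor_ms:.3f} ms per chunk")
    # -> seek ~8.0 ms vs. parity ~0.066 ms: roughly two orders of magnitude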

For a while I was a big fan of those cheap IDE pseudo-RAID 0 and 1
controllers, but I now realize that they really don't provide much benefit
compared to just adding more IDE channels, since those controllers
do so little. That's one reason why you can convert one of those
Promise IDE boards into a RAID controller by simply adding a resistor.

Jon
 
Thor Lancelot Simon

Jon Forrest said:
That's because, other than performing the XOR operations
for writes, they don't have to do very much.

Because most RAID card firmware is amazingly stupid, they actually also
have to have a great deal of memory bandwidth: they tend to copy a
lot of data around when they don't really have to. Memory bandwidth is
one thing embedded CPUs generally don't have much of, and this is one
reason why RAID card performance is sometimes surprisingly bad.

My comments much earlier in this thread on hardware versus software
RAID performance for _real_ workloads (not synthetic benchmarks
like those used to get the perennial "L1NU>< S0F+\/\AR3 RA1D 15 31337!!!!1"
numbers) still stand, however: well-implemented hardware RAID often
performs much, much better, among other reasons because it can gather
operations in nonvolatile memory.
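
(A toy sketch of what that gathering buys: small adjacent writes parked in
battery-backed cache can be merged into fewer, larger disk writes. This is
illustrative only, not any controller's actual algorithm:)

    # Merge overlapping or adjacent writes, the way a write-back cache
    # can before flushing. Each write is an (offset, length) in blocks.
    def gather(writes):
        merged = []
        for off, length in sorted(writes):
            if merged and off <= merged[-1][0] + merged[-1][1]:
                prev_off, prev_len = merged[-1]
                merged[-1] = (prev_off,
                              max(prev_off + prev_len, off + length) - prev_off)
            else:
                merged.append((off, length))
        return merged

    print(gather([(0, 8), (8, 8), (32, 8), (16, 8)]))
    # -> [(0, 24), (32, 8)]: four small writes become two larger ones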
 
Malcolm Weir

Jon Forrest said:
That's because, other than performing the XOR operations
for writes, they don't have to do very much.

And in many implementations, they don't do that, either.

They simply manage the data flow, setting up transfers and commands
and interpreting results.

The operations that actually touch the data (the XOR operations) are
done by dedicated hardware (e.g. part of an ASIC or FPGA).

[ Snip ]

Malc.
 
Maxim S. Shatskih

Jon Forrest said:
For a while I was a big fan of those cheap IDE pseudo-RAID 0 and 1
controllers, but I now realize that they really don't provide much benefit
compared to just adding more IDE channels, since those controllers
do so little. That's one reason why you can convert one of those
Promise IDE boards into a RAID controller by simply adding a resistor.

I would also rather trust Veritas (Windows Dynamic Disk is licensed from
Veritas and is a simplified version of VxVM) than, say, HighPoint.
 
