How I built a 2.8TB RAID storage array

Yeechang Lee

My 2.8TB RAID 5 array is finally up and running. Here I'll discuss my
initial intended specifications, what I actually ended up with, and
associated commentary. Please see
<URL:http://groups.google.ca/[email protected]>
and
<URL:http://groups.google.ca/[email protected]>
for background material.

STORAGE MEDIUM
Initial: Eight 250GB SATA drives.
Actual: Nine 400GB PATA drives; eight for use, one as a cold spare.
Why: Found a stupendous sale at CompUSA Christmas week;
just-released-in-November Seagate Barracuda 7200.8 400GB PATA drives
at $230 each, with no quantity limitation. I'd have loved to have
gone with the SATA model, but given that Froogle lists the lowest
price for one at $350 (the PATA model retails at $250-350), it was an
easy choice.


CASE
Initial: Antec tower case.
Actual: Antec 4U rackmount case.
Why: I'd always thought of rackmounts as unsuitable for anyone without
an actual rack sitting in a data center, but after realizing that a
rackmount case is simply a tower case sitting on its side, it was an
easy decision given the space advantages. The Antec case here comes
with Antec's True Power 550W EPS12V power supply, and both have great
reputations. In practice, I found the Antec case remarkably easy to
open up (one thumbscrew), easy to work with (all drive cages are
removable), and roomy.


MOTHERBOARD
Initial: Unspecified, but probably something Athlon-based and cheap.
Actual: Supermicro X5DAL-G Intel server motherboard.
Why: I became convinced that the sheer volume of the PCI traffic
generated by my proposed array under software RAID would overwhelm any
non-server motherboard, resulting in errors. In addition, I wanted
PCI-X slots for optimal performance. Even though I think AMD in
general offers much better bang for the buck, since I didn't want to
spend the $$$ for Opteron, a Xeon motherboard with an Intel server
chipset was the best compromise.


CONTROLLER CARDS
Initial: Two Highpoint RocketRAID 454 cards.
Actual: Two 3Ware 7506-4LP cards.
Why: I needed PATA cards to go with my PATA drives, and also wanted to
put the two PCI-X slots on my motherboard to use. I found exactly two
PATA PCI-X controller cards: The 3Ware, and the Acard AEC-6897. Given
that the Acard's Linux driver compatibility looked really, really
iffy, I went with the 3Ware. I briefly considered the 7506-8 model,
which would've saved me about $120, but figured I'd be better off
distributing the bandwidth over two PCI-X slots rather than one.


SOFTWARE
Initial: Linux software RAID 5 and XFS or JFS.
Actual: Linux software RAID 5 and JFS.
Why: Initially I planned on software RAID knowing that the Highpoint
(and the equivalent Promise and Adaptec cards) didn't do true hardware
RAID. Even after switching over to 3Ware (which *does* do true
hardware RAID), everything I saw and read convinced me that software
RAID was still the way to go for performance, long-term compatibility,
and even 400GB extra space (given I'd be building one large RAID 5
array instead of two smaller ones).
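(The arithmetic, in case it isn't obvious: eight 400GB drives in one
RAID 5 array yield 7 x 400GB = 2800GB of usable space, while two
four-drive arrays would yield only 2 x 3 x 400GB = 2400GB, since each
array gives up one drive's worth of capacity to parity.)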

I saw *lots* of conflicting benchmarks on whether XFS or JFS was the
way to go. Ultimately
<URL:http://pcbunn.cacr.caltech.edu/gae/3ware_raid_tests.htm> pushed
me toward JFS, but I suspect I could have gone XFS with no difficulty
whatsoever.


COST
As implied above, I paid $2070 plus sales tax for the drives. I lucked
out and found a terrific eBay deal for a prebuilt system containing
the above-mentioned case and motherboard, two Xeon 2.8GHz CPUs, a DVD
drive, and 2GB memory for $1260 including shipping. Labor aside, I'd
have paid *much* more to build an equivalent system myself. The 3Ware
cards were $240 each, no shipping or tax, from Monarch Computer. With
miscellaneous costs (such as a Cooler Master 4-in-3 drive cage and an
80GB boot drive from Best Buy for $40 after rebates), I paid under
$4100, tax and shipping included, for everything. At $1.46/GB *plus* a
powerful dual-CPU system, boatloads of memory, and a spare drive, I am
quite satisfied with the overall bang for the buck.
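(That $1.46/GB figure is simply the roughly $4,100 total divided by the
2,800GB the array presents; measured against the formatted 2.6T that df
reports it comes out slightly higher, but it's still a bargain.)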


ASSEMBLY: HARDWARE
I spent most of the assembly time on the physical assembly part; it's
astonishing just how long the simple tasks of opening up each
retail-boxed drive, screwing the drive into the drive cage, putting
the cage into the case, removing the cage and the drive when you
realize you've put the drive in with the wrong mounting holes,
reinstalling the drive and cage, etc., etc. take! My studio apartment
still looks like a computer store exploded inside it.

3Ware wisely provides PATA master-only cables with its cards, which
saved some room, but my formerly-roomy case nonetheless looks like the
rat's nest to end all rat's nests inside.


ASSEMBLY: SOFTWARE
I'd gone ahead and installed Fedora Core 3 with the boot drive only
before the controller cards arrived. The 3Ware cards present each
PATA drive as a SCSI device (/dev/sd[a-h]). Once booted, I used mdadm
to create the RAID array (no partitions; just whole drives). While the
array chugged along building the parity information (about four
hours), I created one large LVM2 volume group and logical volume on
top of it, then one large JFS file system.
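
For the record, the command sequence was roughly the following. I'm
reconstructing it from memory rather than from my shell history, and
the lvcreate extent count is a placeholder, so treat this as a sketch:

# Create the eight-drive RAID 5 with 512K chunks, whole drives, no partitions
mdadm --create /dev/md0 --level=5 --raid-devices=8 --chunk=512 /dev/sd[a-h]

# Layer LVM2 on top (usable even while the initial parity build runs)
pvcreate /dev/md0
vgcreate VolGroup01 /dev/md0
# -l takes the free-extent count reported by 'vgdisplay VolGroup01'
lvcreate -n LogVol00 -l <free extents> VolGroup01

# One big JFS file system, then mount it
mkfs.jfs /dev/VolGroup01/LogVol00
mkdir -p /mnt/newspace
mount /dev/VolGroup01/LogVol00 /mnt/newspace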

By the way, I found a RAID-related bug in Fedora Core's boot scripts;
see <URL:https://bugzilla.redhat.com/beta/show_bug.cgi?id=129633>.


RESULTS
'df -h':
/dev/mapper/VolGroup01-LogVol00
2.6T 221G 2.4T 9% /mnt/newspace


'mdadm --detail /dev/md0':
Version : 00.90.01
Creation Time : Wed Feb 16 01:53:33 2005
Raid Level : raid5
Array Size : 2734979072 (2608.28 GiB 2800.62 GB)
Device Size : 390711296 (372.61 GiB 400.09 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Sat Feb 19 16:26:34 2005
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
2 8 32 2 active sync /dev/sdc
3 8 48 3 active sync /dev/sdd
4 8 64 4 active sync /dev/sde
5 8 80 5 active sync /dev/sdf
6 8 96 6 active sync /dev/sdg
7 8 112 7 active sync /dev/sdh
Events : 0.319006


'bonnie++ -s 4G -m 3ware-swraid5-type -p 3 ; \
bonnie++ -s 4G -m 3ware-swraid5-type-c1 -y & \
bonnie++ -s 4G -m 3ware-swraid5-type-c2 -y & \
bonnie++ -s 4G -m 3ware-swraid5-type-c3 -y &'
(To be honest these results are just a bunch of numbers to me, so any
interpretations of them are welcome. I should mention that these were
done with three distributed computing [BOINC, mprime, and
Folding@Home] projects running in the background. Although 'nice -n
19' each, they surely impacted CPU and perhaps disk performance
somewhat.)

Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
3ware-swraid5-ty 4G 15749 50 15897 8 7791 6 10431 49 20245 11 138.1 2
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 381 6 +++++ +++ 208 3 165 7 +++++ +++ 192 4
3ware-swraid5-type-c1,4G,15749,50,15897,8,7791,6,10431,49,20245,11,138.1,2,16,381,6,+++++,+++,208,3,165,7,+++++,+++,192,4
done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
3ware-swraid5-ty 4G 13739 46 17265 9 7930 6 10569 50 20196 11 146.7 2
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 383 7 +++++ +++ 207 3 162 7 +++++ +++ 191 4
3ware-swraid5-type-c2,4G,13739,46,17265,9,7930,6,10569,50,20196,11,146.7,2,16,383,7,+++++,+++,207,3,162,7,+++++,+++,191,4
done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
3ware-swraid5-ty 4G 13288 43 16143 8 7863 6 10695 50 20231 12 149.6 2
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 537 9 +++++ +++ 207 3 161 7 +++++ +++ 188 4
3ware-swraid5-type-c3,4G,13288,43,16143,8,7863,6,10695,50,20231,12,149.6,2,16,537,9,+++++,+++,207,3,161,7,+++++,+++,188,4


FINAL NOTES, THOUGHTS, AND QUESTIONS
I've noticed that over sync NFS, initiating a file copy from my older
Athlon 1.4GHz system to the RAID array system is *much, much, much*
slower (many minutes as opposed to seconds) than initiating the same
copy, in the same direction, from the array system. Why is this?

I almost went with the SATA (8506) version of the 3Ware cards and a
bunch of PATA-SATA adapters in order to maintain compatibility with
future drives, likely to be SATA only. However, a colleague pointed
out the foolishness of paying $200 extra ($120 for eight adapters plus
$80 for the extra cost of the SATA cards) in order to (possibly)
futureproof a $480 investment.

I was concerned that the drives (and the PATA cables) would cause
horrible heat and noise issues. These, surprisingly, didn't occur;
according to 'sensors', internal temperatures only rose by a few
degrees, and the server is just as (very) noisy now as it was before
the RAID drives went in. I think I'll be able to get away with stuffing the array inside
my hall closet after all.

The server, before I put the cards and RAID drives into the system but
with the distributed-computing projects putting the CPU at 100%
utilization, took the load reading on my Best Fortress 750VA/450W UPS
from about 55% to about 76%. With the RAID up and running and again
with 100% CPU utilization, the load reads 87-101%, with the median at
perhaps 93%. I realize I really ought to invest in another UPS, but
with these figures I'm tempted to get by on what I currently have.
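
(If those percentages are read against the 450W rating, the ~93% median
works out to somewhere around 420W of draw, right up against the UPS's
ceiling; hence the nagging feeling that I really ought to add another
UPS.)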

Yes, I could've saved a considerable amount of money had I gone with,
say, a used dual PIII server system with regular PCI slots (and, thus,
$80 Highpoint RAID cards, again for the four PATA channels and not for
their RAID functionality per se) and 512MB. And I suspect that for a
home user like me performance wouldn't have been too much less. But I
like to buy and build systems I can use for years and years without
having to bother with upgrading, and figure I've made a long-term (at
least 4-5 years, which is long term in the computer world) investment
that provides me with much more than just storage functionality. And
again, $1.46/GB is hard to beat.
 
dg

What kind of cables did 3ware provide, regular flat ribbon or round cables?
If round cables, can you tell if they are just ribbons rolled up?

I had a bunch of questions but I read your post again and pretty much
everything was answered. Maybe even the cable question but I didn't see it.

While everything is still fresh in your mind, make sure you label the drives
so you are absolutely sure which drive is which. When I had a drive failure
with my measly 500GB raid 5 array, it was a big concern of mine when I
pulled a drive and replaced it. Not knowing EXACTLY what would happen
should I pull the wrong drive and replace it. I can only imagine my
sweating on which of the 8 drives to replace! Like they say, measure twice,
cut once!

For me, choosing between 2 hardware arrays or 1 software array would have
been a big decision, the decision of all decisions. When did you finally
make the decision? Was the machine already assembled before you really knew
which way you would go?

Isn't current tech/$ great? A guy can do some really, really cool stuff
with a reasonable budget. I mean $4100 is a lot of money, but what you have
is amazing.

Great project by the way.

--Dan


 
Yeechang Lee

dg said:
What kind of cables did 3ware provide, regular flat ribbon or round
cables?

Flat. The only thing special about them was that they lacked slave
connectors.

I'm glad they're flat; despite the (lack of) air flow, at some point I
intend to try the fabled PATA cable origami methods I've heard about.

dg said:
While everything is still fresh in your mind, make sure you label
the drives so you are absolutely sure which drive is which.

This does concern me. How the heck do I tell them apart, even now? How
do I figure out which drive is sda, which is sdb, which is sdc, etc.,
etc.? Advice is appreciated.

dg said:
For me, choosing between 2 hardware arrays or 1 software array would
have been a big decision, the decision of all decisions.

Not me; all my research told me that software was the way to go for
both performance and downward-compatibility reasons.

dg said:
Great project by the way.

Thank you. It still amazes me to see that little '2.6T' label appear
in the 'df -h' output.
 
Sayso Takewashi

Wow, congrats on your successful build!
I am on the way to building a storage array myself, thinking of a
1U server with three 250GB disks in software RAID 5, and Fedora too.
Although that might be enough for now, I'd have the chance to expand it
in the future and still save some money today.
 
Anton Ertl

Yeechang Lee said:
This does concern me. How the heck do I tell them apart, even now? How
do I figure out which drive is sda, which is sdb, which is sdc, etc.,
etc.? Advice is appreciated.

One way is to disconnect them one by one, and see which drive is
missing in the list (unless you want to test the md driver's
reconstruction abilities, you should be doing this with a kernel that
does not have an md driver, probably booting from CD). You can also
use that method when a drive fails (but then it's even more important
that the kernel does not have an md driver).

Another way is to just look at which ports on the cards connect to
which drives. They are typically marked on the card and/or in the
manual with IDE0, IDE1, etc. You also have to find out which card is
which. There may be a method to do this through the PCI IDs, but I
would go for the disconnection method for that.
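
A gentler variant of the same idea, assuming nothing fancier than the
stock kernel tools, is to read the kernel's own mapping instead of
pulling cables:

# Each 3ware port shows up as its own SCSI host/ID pair; the kernel hands
# out sda, sdb, ... in the order these entries were found at boot.
cat /proc/scsi/scsi
# Cross-check against the boot messages, then trace each card's labeled
# port (IDE0, IDE1, ...) to its cable and put a sticker on that drive.
dmesg | grep -i 'scsi disk'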

Followups set to comp.os.linux.hardware (because I read that, csiphs
would probably be more appropriate).

- anton
 
Dorothy Bradbury

Sayso Takewashi said:
I am on the way to building a storage array myself, thinking of a
1U server with three 250GB disks in software RAID 5, and Fedora too.
Although that might be enough for now, I'd have the chance to expand it
in the future and still save some money today.

Watch cooling:
o Try to go for a case with 40x20mm fans over 40x10mm fans
---- ideal would be 40x28mm, but they tend to be noisy - 40-46dB(A)
o Ideally consider 2U if not space (price) constrained re Coloco
---- easier to cool - 80mm fans over 40mm

Watch PSU:
o To the original poster & any multi-GB system, PSU matters
---- not just re s/w failure, but h/w failure
---- very rare, but this IS an area where over-capacity is a good idea
o If going for 1U, consider 350-460W over 300W
---- yes, a good 300W will be fine
---- however the higher rated ones have better cooling (twin fans)

The ideal 1U PSU is one with 2x 40mm exhaust fans at one end,
with the IEC connector between them. Quite rare. At the minimum
get one with inlet & exhaust 40mm fan - good redundancy :)

For multi-GB, Linux with a Journalling Filesystem is important.
Still not figured out how long a fsck on 2.8TB would take :)
 
Yeechang Lee

Dorothy said:
Watch PSU:
o To the original poster & any multi-GB system, PSU matters
---- not just re s/w failure, but h/w failure
---- very rare, but this IS an area where over-capacity is an idea

PSU concerns are why I went with an Antec 550W supply as opposed to
some 300-400W noname brand. Since my rackmount case does not have room
for a redundant supply, I suspect this is the best I can do. As you
say, PSU problems are relatively rare.

That said, anyone know how I can dynamically measure the actual
wattage used by my system, beyond just adding up each individual
component's wattage?
 
Al Dykes

Yeechang Lee said:
PSU concerns are why I went with an Antec 550W supply as opposed to
some 300-400W noname brand. Since my rackmount case does not have room
for a redundant supply, I suspect this is the best I can do. As you
say, PSU problems are relatively rare.

That said, anyone know how I can dynamically measure the actual
wattage used by my system, beyond just adding up each individual
component's wattage?


http://www.ahernstore.com/p4400.html (a Kill A Watt meter), about $30.
I've got one.
 

Sayso Takewashi

Dorothy Bradbury wrote:

-Fans and Noise from them

I could live with it. I will place the server somewhere where the noise
doesn't matter, and the output will be redirected with a VNC server to
my workstations.

-Power Supply within 1U Servers

If I choose 8 disks, I surely will get a 550W power supply. But with
3-4 disks, I could live with the stock PSU. After a year I will upgrade
it, because by then it could be failing (I saw some very nice offers
for used 1U servers).
 
dg

I need to stay away from this thread for a while; I am starting to feel
some inspiration. It has been some time since I have run Linux, and, to
be honest, I have always had an urge to build a functional Linux box
for myself. And RAID fascinates me, so, well, I need to stop reading
this stuff. I can't afford a new toy now.

--Dan
 
Eric Gisin

Yeechang Lee said:
CONTROLLER CARDS
Initial: Two Highpoint RocketRAID 454 cards.
Actual: Two 3Ware 7506-4LP cards.
Why: I needed PATA cards to go with my PATA drives, and also wanted to
put the two PCI-X slots on my motherboard to use. I found exactly two
PATA PCI-X controller cards: The 3Ware, and the Acard AEC-6897. Given
that the Acard's Linux driver compatibility looked really, really
iffy, I went with the 3Ware. I briefly considered the 7506-8 model,
which would've saved me about $120, but figured I'd be better off
distributing the bandwidth over two PCI-X slots rather than one.

No, one PCI-X card would be just as good.

You don't mention the ethernet card, which could also be PCI-X.
SOFTWARE
Initial: Linux software RAID 5 and XFS or JFS.
Actual: Linux software RAID 5 and JFS.
Why: Initially I planned on software RAID knowing that the Highpoint
(and the equivalent Promise and Adaptec cards) didn't do true hardware
RAID. Even after switching over to 3Ware (which *does* do true
hardware RAID), everything I saw and read convinced me that software
RAID was still the way to go for performance, long-term compatibility,
and even 400GB extra space (given I'd be building one large RAID 5
array instead of two smaller ones).

Is there a comparison of Linux RAID 5 to top-end RAID cards? I suspect 3Ware is
better.
 
John-Paul Stewart

Eric said:
No, one PCI-X card would be just as good.

Not necessarily. PCI (and PCI-X) bandwidth is per bus, not per slot.
So if those two cards are in two slots on one PCI-X bus, that's not
distributing the bandwidth at all. The motherboard may offer multiple
PCI-X busses, in which case the OP may want to ensure the cards are in
slots that correspond to different busses. The built-in NIC on most
motherboards (along with most other built-in devices) is also on one
(or more) of the PCI busses, so consider bandwidth used by those as well
when distributing the load.
 
Folkert Rienstra

Eric Gisin said:
No, one PCI-X card would be just as good.

Probably, yes.
Depends on the PCI-X version and clock, and on whether the slots are
separate PCI buses or not.

If they are separate buses, the highest clock is attainable and each
card gets the full PCI-X bandwidth, say 1GB/s (133MHz) or 533MB/s
(66MHz). If they share a bus, the clock is lower to start with and the
cards have to split that bus's PCI-X bandwidth: a still-plenty 400MB/s
each (100MHz), but it may become iffy with a 66MHz clock (266MB/s each)
or even 50MHz.
Eric Gisin said:
You don't mention the ethernet card, which could also be PCI-X.

What if?
 
Yeechang Lee

John-Paul Stewart said:
Not necessarily. PCI (and PCI-X) bandwidth is per bus, not per slot.

The Supermicro X5DAL-G motherboard does indeed offer a dedicated bus
to each PCI-X slot, thus my desire to spread out the load with two
cards. Otherwise I'd have gone with the 7506-8 eight-channel card
instead and saved about $120.

The built-in Gigabit Ethernet jack does indeed share one of the PCI-X
slots' buses, but I only have a 100Mbit router right now. I wonder
whether I should expect it to significantly contribute to overall
bandwidth usage on that bus, either now or if/when I upgrade to
Gigabit?
 
dg

Yeechang Lee said:
The built-in Gigabit Ethernet jack does indeed share one of the PCI-X
slots' buses, but I only have a 100Mbit router right now. I wonder
whether I should expect it to significantly contribute to overall
bandwidth usage on that bus, either now or if/when I upgrade to
Gigabit?

When you DO go gigabit, be sure to at least do some basic throughput
benchmarks (even if it's just with a stopwatch, but I suspect you will come
up with a good method) and then compare afterwards. That is really good
data to get firsthand from somebody with such an extreme array and well
documented hardware and software setup. Really good stuff! I wonder what
kind of data rates that array is capable of within the machine too.
Somewhere there is a guy claiming to get 90+MB per second over gigabit
ethernet using raid arrays on both ends.
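
Even something as crude as timing a big write with dd and dividing
gives a number you can compare later; the path and sizes below are just
for illustration:

# Write ~2GB to the NFS-mounted array and note the elapsed time;
# 2048MB divided by the seconds reported is your MB/s figure.
time dd if=/dev/zero of=/mnt/newspace/ddtest bs=1M count=2048
# Read it back for the other direction (client-side caching can flatter
# this number, so a fresh mount or a bigger-than-RAM file is more honest).
time dd if=/mnt/newspace/ddtest of=/dev/null bs=1M
rm /mnt/newspace/ddtest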

Gigabit switches are getting so cheap its incredible.

--Dan
 
Yeechang Lee

Eric said:
Is there a comparison of Linux RAID 5 to top-end RAID cards? I
suspect 3Ware is better.

No, the consensus is that Linux software RAID 5 has the edge on even
3Ware (the consensus hardware RAID leader). See, among others,
<URL:http://www.chemistry.wustl.edu/~gelb/castle_raid.html> (which
does note that software striping two 3Ware hardware RAID 5 solutions
"might be competitive" with software) and
<URL:http://staff.chess.cornell.edu/~schuller/raid.html> (which states
that no, all-software still has the edge in such a scenario).
 
Thor Lancelot Simon

Yeechang Lee said:
No, the consensus is that Linux software RAID 5 has the edge on even
3Ware (the consensus hardware RAID leader). See, among others,

If all you care about is "rod length check" long-sequential-read or
long-sequential-write performance, that's probably true. If, of
course, you restrict yourself to a single stream...

...of course, in the real world, people actually do short writes and
multi-stream large access every once in a while. Software RAID is
particularly bad at the former because it can't safely gather writes
without NVRAM. Of course, both software implementations *and* typical
cheap PCI RAID card (e.g. 3ware 7/8xxx) implementations are pretty
awful at the latter, too, and for no good reason that I could ever see.
 
Steve Wolfe

Eric Gisin said:
No, one PCI-X card would be just as good.

Yeechang Lee said:
The Supermicro X5DAL-G motherboard does indeed offer a dedicated bus
to each PCI-X slot, thus my desire to spread out the load with two
cards. Otherwise I'd have gone with the 7506-8 eight-channel card
instead and saved about $120.

The built-in Gigabit Ethernet jack does indeed share one of the PCI-X
slots' buses, but I only have a 100Mbit router right now. I wonder
whether I should expect it to significantly contribute to overall
bandwidth usage on that bus, either now or if/when I upgrade to
Gigabit?

The numbers that you posted from Bonnie++, if I followed them correctly,
showed max throughputs in the 20 MB/second range. That seems awfully slow
for this sort of setup.

As a comparison, I have two machines with software RAID 5 arrays, one a
2x866 P3 system with 5x120-gig drives, the other an A64 system with 8x300
gig drives, and both of them can read and write to/from their RAID 5 array
at 45+ MB/s, even with the controller cards plugged into a single 32/33 PCI
bus.

To answer your question, GigE at full speed is a bit more than 100
MB/sec. The PCI-X busses on that motherboard are both capable of at least
100 MHz operation, which at 64 bits would give you a max *realistic*
throughput of about 500 MB/second, so any performance detriment from using
the gigE would likely be completely insignificant.
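
(For the arithmetic: 64 bits x 100 MHz is 800 MB/s of peak bus
bandwidth, of which perhaps 500 MB/s is achievable in practice, while
GigE is 1 Gbit/s, i.e. about 125 MB/s peak, so even a flat-out NIC
claims well under a fifth of the bus.)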

I've got another machine with a 3Ware 7000-series card with a bunch of
120-gig drives on it (I haven't looked at the machine in quite a while), and
I was pretty disappointed with the performance from that controller. It
works for the intended usage (point-in-time snapshots), but responsiveness
of the machine under disk I/O is pathetic - even with dual Xeons.

steve
 
Yeechang Lee

Steve said:
The numbers that you posted from Bonnie++ , if I followed them correctly,
showed max throughputs in the 20 MB/second range. That seems
awfully slow for this sort of setup.

Agreed. However, those benchmarks were done with no tuning whatsoever
(and, as noted, the three distributed computing projects going full
blast); since then I've done some minor tweaking, notably the noatime
mount option, which has helped. I'd post newer benchmarks but the
array's right now rebuilding itself due to a kernel panic I caused by
trying to use smartctl to talk to the bare drives without invoking the
special 3ware switch.
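
For anyone curious, the tweaks amount to a noatime option on the mount
(the fstab line below matches my volume names) and, for SMART queries,
going through the controller rather than the bare exported drives;
check the smartmontools documentation for the exact device name your
3ware driver expects:

# /etc/fstab entry for the array
/dev/VolGroup01/LogVol00  /mnt/newspace  jfs  defaults,noatime  0 0

# SMART query via the 3ware card; N is the port number on the card
smartctl -a -d 3ware,N /dev/twe0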
Steve said:
To answer your question, GigE at full speed is a bit more than
100 MB/sec. The PCI-X busses on that motherboard are both capable
of at least 100 MHz operation, which at 64 bits would give you a max
*realistic* throughput of about 500 MB/second, so any performance
detriment from using the gigE would likely be completely
insignificant.

That was my sense as well; I suspect network saturation-by-disk will
only cease to be an issue when we all hit the 10GigE world.

(Actually, the 7506 cards are 66MHz PCI-X, so they don't take full
advantage of the theoretical bandwidth available on the slots,
anyway.)
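
(Even so, 64 bits x 66MHz is roughly 528 MB/s of bus bandwidth per
card, which should be comfortably more than four PATA drives can
stream, if my math is right.)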
Steve said:
I've got another machine with a 3Ware 7000-series card with a bunch of
120-gig drives on it (I haven't looked at the machine in quite a
while), and I was pretty disappointed with the performance from that
controller.

Appreciate the report. Fortunately, as a home user, performance isn't
my prime consideration (nor, since I'm only recording TV episodes, is
data integrity, really; hence no backup plans for the array, even if
backing up 2.8TB were remotely practical budget-wise). Were I after
that, I'd probably have gone with the 9000-series controllers and SATA
drives, but my wallet's busted enough with what I already have!
 
