Optimising speed of RAID-0 (mixed SATA/PATA)

A. J. Moss

I have an Asus A8N-SLI Deluxe motherboard, and I want to create
3TB of, let's call it temporary space, on a single NTFS partition.
(By temporary, I mean the contents are replaceable from the source
data in the event of a disk crash, and the source data is safely
backed up; so RAID-0 will suffice.)

I already have two PATA and four SATA 500GB hard disks, and the
NVRAID controller built into the motherboard allows an array to
span both PATA and SATA disks. I'd rather not spend hundreds of
pounds on four new 750GB hard disks, when what I already have
will do.

I know there's a noticeable speed advantage in putting the PATA
hard drives on separate IDE channels. Also, the NVRAID controller
does support SATA-2, despite the manual's claim to the contrary.

Would it make any difference if I connect the PATA drives as
third and sixth in the array, rather than fifth and sixth?
I'm wondering if it would help to spread out the accesses to
the relatively slow UDMA-100 controllers, rather than going
to the second one immediately after the first one.

The computer also has a DVD writer that I hardly ever use.
Would there be any harm in leaving it attached as a slave
drive, on one of the PATA channels to be used by the RAID?
 
Arno Wagner

In comp.sys.ibm.pc.hardware.storage A. J. Moss said:
> I have an Asus A8N-SLI Deluxe motherboard, and I want to create
> 3TB of, let's call it temporary space, on a single NTFS partition.
> (By temporary, I mean the contents are replaceable from the source
> data in the event of a disk crash, and the source data is safely
> backed up; so RAID-0 will suffice.)
>
> I already have two PATA and four SATA 500GB hard disks, and the
> NVRAID controller built into the motherboard allows an array to
> span both PATA and SATA disks. I'd rather not spend hundreds of
> pounds on four new 750GB hard disks, when what I already have
> will do.
>
> I know there's a noticeable speed advantage in putting the PATA
> hard drives on separate IDE channels. Also, the NVRAID controller
> does support SATA-2, despite the manual's claim to the contrary.
>
> Would it make any difference if I connect the PATA drives as
> third and sixth in the array, rather than fifth and sixth?

The difference should not be too large, since you use
SPAN/JBOD/APPEND mode. It switches between disks only seldom.

> I'm wondering if it would help to spread out the accesses to
> the relatively slow UDMA-100 controllers, rather than going
> to the second one immediately after the first one.

Since when is UDMA-100 slow?

> The computer also has a DVD writer that I hardly ever use.
> Would there be any harm in leaving it attached as a slave
> drive, on one of the PATA channels to be used by the RAID?

If it is inactive, it should not matter.

Arno
 
Jaimie Vandenbergh

A. J. Moss said:
> I have an Asus A8N-SLI Deluxe motherboard, and I want to create
> 3TB of, let's call it temporary space, on a single NTFS partition.
> (By temporary, I mean the contents are replaceable from the source
> data in the event of a disk crash, and the source data is safely
> backed up; so RAID-0 will suffice.)
>
> I already have two PATA and four SATA 500GB hard disks, and the
> NVRAID controller built into the motherboard allows an array to
> span both PATA and SATA disks. I'd rather not spend hundreds of
> pounds on four new 750GB hard disks, when what I already have
> will do.
>
> I know there's a noticeable speed advantage in putting the PATA
> hard drives on separate IDE channels.

All good.

> Also, the NVRAID controller
> does support SATA-2, despite the manual's claim to the contrary.

Not that that matters much.

> Would it make any difference if I connect the PATA drives as
> third and sixth in the array, rather than fifth and sixth?
> I'm wondering if it would help to spread out the accesses to
> the relatively slow UDMA-100 controllers, rather than going
> to the second one immediately after the first one.

I'd be surprised if it did make any difference. To a first-order
approximation, UDMA-100 supports up to 100meg/second bandwidth; each
hard drive will actually support maybe 40-60meg/second. With only a
single hard drive on each IDE ribbon, you're well within bandwidth.
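The back-of-envelope arithmetic above can be sketched in a few lines of Python. The per-drive and per-channel figures are the assumed round numbers from the discussion, not measurements:

```python
# Rough bandwidth check for the proposed six-disk RAID-0.
# Assumed figures: ~60 MB/s per drive, ~100 MB/s nominal per
# UDMA-100 channel, one drive per IDE ribbon.
DRIVE_MBS = 60           # assumed per-drive sequential rate (MB/s)
UDMA100_MBS = 100        # nominal UDMA-100 channel bandwidth (MB/s)
N_DRIVES = 6

per_channel = DRIVE_MBS               # one drive per IDE ribbon
headroom = UDMA100_MBS - per_channel  # spare channel bandwidth
aggregate = N_DRIVES * DRIVE_MBS      # what the motherboard bus must carry

print(per_channel, headroom, aggregate)   # 60 40 360
```

So each PATA channel has plenty of headroom; it's the ~360 MB/s aggregate that has to find a fast enough path through the chipset.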

The limiting factor with a six-disk RAID0 system is more likely to be
your motherboard bus. The manual doesn't include a block diagram of
how the components are connected, so it's possible that the RAID chip
is on the end of a lovely slow PCI bus...

On the other hand, it's the work of half an hour to try it each way
and benchmark - just don't try and format a whole 3TB partition each
time! A 10gig or so partition will suffice for this particular test.

> The computer also has a DVD writer that I hardly ever use.
> Would there be any harm in leaving it attached as a slave
> drive, on one of the PATA channels to be used by the RAID?

No, that's fine - UDMA devices don't slow the bus when they're not in
use.

Cheers - Jaimie
 
John Jordan

Jaimie said:
> I'd be surprised if it did make any difference. To a first-order
> approximation, UDMA-100 supports up to 100meg/second bandwidth; each
> hard drive will actually support maybe 40-60meg/second. With only a
> single hard drive on each IDE ribbon, you're well within bandwidth.

IIRC, UDMA maxes out at about 75% of theoretical transfer rate, so for a
modern drive with up to 90MB/s sequential transfer, it's marginal.
That's not going to matter much here, because even if a single drive is
limited to 75MB/s, that's going to be 450MB/s for the whole array.

Regardless, that's not actually the issue here. The question is whether
the two UDMA controllers share a limited bus, separately from the SATA
controllers. I can't decide whether this is likely or not.

> The limiting factor with a six-disk RAID0 system is more likely to be
> your motherboard bus. The manual doesn't include a block diagram of
> how the components are connected, so it's possible that the RAID chip
> is on the end of a lovely slow PCI bus...

It's a southbridge-integrated controller, so it's not going to be PCI.
Still, it's quite possible that there's a bus somewhere along the line
that can't handle 500MB/sec.
 
Folkert Rienstra

Arno Wagner said:
> The difference should not be too large, since you use
> SPAN/JBOD/APPEND mode. It switches between disks only seldom.

Babblehead, what exactly did you not understand in "Optimising speed of RAID-0 (mixed SATA/PATA)"?
 
Folkert Rienstra

Jaimie Vandenbergh said:
> All good.

> Not that that matters much.

They're going out in parallel anyway if on separate channels.

> I'd be surprised if it did make any difference. To a first-order
> approximation, UDMA-100 supports up to 100meg/second bandwidth; each
> hard drive will actually support maybe 40-60meg/second. With only a
> single hard drive on each IDE ribbon, you're well within bandwidth.

He probably meant 'access time' or 'latency', not transfer speed.

> The limiting factor with a six-disk RAID0 system is more likely to be
> your motherboard bus. The manual doesn't include a block diagram of
> how the components are connected, so it's possible that the RAID chip
> is on the end of a lovely slow PCI bus...
>
> On the other hand, it's the work of half an hour to try it each way
> and benchmark - just don't try and format a whole 3TB partition each
> time! A 10gig or so partition will suffice for this particular test.

A format speed test, what else is new.

> No, that's fine - UDMA devices don't slow the bus when they're not in use.

Right, so make sure it is in UDMA mode or replace with an UDMA model. Idjut.
 
Folkert Rienstra

John Jordan said:
> IIRC, UDMA maxes out at about 75% of theoretical transfer rate,

There is no such 'theoretical' transfer rate.
There is about 10% command overhead in IDE when using the DMA bus protocol.

> so for a modern drive with up to 90MB/s sequential transfer,

90 eh? No kidding.

> it's marginal.

Actually, that should suffice.

> That's not going to matter much here, because even if a single drive is
> limited to 75MB/s, that's going to be 450MB/s for the whole array.
>
> Regardless, that's not actually the issue here. The question is whether
> the two UDMA controllers

You mean the IDE controllers.

> share a limited bus,

Of course they do.

> separately from the SATA controllers.
> I can't decide whether this is likely or not.
> It's a southbridge-integrated controller,
> so it's not going to be PCI.

Uhuh, and what exactly does this have to do with "being on the southbridge"?
And *what* southbridge? It's a single chip.

> Still, it's quite possible that there's a bus somewhere along the line
> that can't handle 500MB/sec.

It has HyperTransport, 800MB/s.
 
Jaimie Vandenbergh

On Mon, 30 Jul 2007 18:05:51 +0200, "Folkert Rienstra" wrote:

> [twaddle]

Oh, I do so love crossposts.

Cheers - Jaimie
 
John Jordan

Folkert said:
> There is no such 'theoretical' transfer rate.
> There is about 10% command overhead in IDE when using the DMA bus protocol.

"Theoretical" was a poor choice of words. However, I just tested burst
rates with HD Tune on a few different drives and a board that can limit
UDMA rates in the BIOS, and I don't get anything like a flat 10%:

UDMA-6 (133): 95MB/s
UDMA-5 (100): 76-78MB/s
UDMA-4 (66): 55-56MB/s
UDMA-3 (44): 35-38MB/s
UDMA-2 (33): 30MB/s

The UDMA-6 result may be limited by drive electronics, but the others
suggest that the overhead increases with transfer rate.
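That claim can be checked directly from the figures quoted above, taking the midpoint of each quoted range against the nominal mode rate:

```python
# Efficiency implied by the measured burst rates above (midpoints of
# the quoted ranges). A flat 10% overhead would put every ratio near
# 0.90; instead the ratio drops as the mode gets faster.
measured = {133: 95, 100: 77, 66: 55.5, 44: 36.5, 33: 30}

for nominal in sorted(measured, reverse=True):
    eff = measured[nominal] / nominal
    print(f"UDMA nominal {nominal} MB/s: {eff:.0%} of nominal")
```

Which works out to roughly 71%, 77%, 84%, 83% and 91% respectively - the overhead really does grow as the mode gets faster.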
> 90 eh? No kidding.

I don't know if I'm reading your sarcasm correctly, but many recent SATA
drives do touch 90MB/s. The IDE drives may be slower, but it's hard to
tell because no-one reviews them.

> It has HyperTransport, 800MB/s.

HyperTransport is 1.6GB/s or more likely 2GB/s each way on that board. I
was referring more to internal buses in the chipset, although it's
possible that the HT or memory bus performs poorly on DMA transfers.
 
Folkert Rienstra

John Jordan said:
> "Theoretical" was a poor choice of words. However, I just tested burst
> rates with HD Tune on a few different drives and a board that can limit
> UDMA rates in the BIOS, and I don't get anything like a flat 10%:
>
> UDMA-6 (133): 95MB/s
> UDMA-5 (100): 76-78MB/s
> UDMA-4 (66): 55-56MB/s
> UDMA-3 (44): 35-38MB/s
> UDMA-2 (33): 30MB/s

Burst rates are unreliable; they depend on drive caching behaviour.
Use STR. What does HD Tach say?

> The UDMA-6 result may be limited by drive electronics, but the others
> suggest that the overhead increases with transfer rate.

> I don't know if I'm reading your sarcasm correctly, but many recent SATA
> drives do touch 90MB/s.

Many even.
The Hitachi 7K1000 comes close though, and so does the new WD Raptor,
but that's a 10k drive. Seagate Barracudas are apparently relatively
slow at 75 (78 for the ES), and not the norm. WD's new Caviar beats
everything hands down though, at a staggering 97MB/s.

So you are correct, drive speeds have finally stepped up very recently.
Storage Review are lagging quite a bit behind with their benchmark
database, depending on which one you pick. That's my fault.

> The IDE drives may be slower, but it's hard to tell because no-one
> reviews them.
> Hypertransport is 1.6GB/s or more likely 2GB/s each way on that board.

800MB/s is what nVidia specifies for that chip (nForce4)
in Features & Benefits:
http://www.nvidia.com/page/pg_20041015208345.html

8GB/s though in Tech Specs:
http://www.nvidia.com/page/pg_20041015990644

> I was referring more to internal buses in the chipset,

Which is HyperTransport, presumably. Couldn't find a functional diagram.
But what's the point of HyperTransport if nothing else uses it, no?
 
John Jordan

Folkert said:
> Burst rates are unreliable; they depend on drive caching behaviour.
> Use STR. What does HD Tach say?

Oh, I thought HD Tach was fully commercial for some reason. Turns out
that it gives much higher burst rates than HD Tune (actually somewhat
less than 10% overhead).

Not too sure which set of values is correct, as the STR appears to be
limited by the HD Tune burst rates. However, HD Tune appears to give
somewhat slower STRs as well.

Would need another hour or two of testing to make sure. For now I'll say
that you're probably right and HD Tune is just slow.

> 800MB/s is what nVidia specifies for that chip (nForce4) in Features
> & Benefits http://www.nvidia.com/page/pg_20041015208345.html.
>
> 8GB/s though in Tech Specs:
> http://www.nvidia.com/page/pg_20041015990644

8GB/s, or 4GB/s each way, is correct. I forgot the DDR - it's a 1GHz
clock * 2 (DDR) * 2 bytes (16-bit width).

No idea where they get 800MB/s from, but I'm guessing that it's an
error. Even the crippled HT in the early nF3 chipsets managed 1.6GB/s
each way.
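Spelled out as arithmetic (using the 16-bit, 1GHz, DDR figures from the correction above):

```python
# nForce4 HyperTransport link: 16 bits wide, 1 GHz clock, double data
# rate (two transfers per clock cycle).
clock_hz = 1_000_000_000
width_bytes = 16 // 8    # 16-bit link = 2 bytes per transfer
ddr = 2                  # two transfers per clock (DDR)

per_direction = clock_hz * ddr * width_bytes   # bytes/s each way
total = 2 * per_direction                      # both directions

print(per_direction / 1e9, total / 1e9)   # 4.0 8.0 (GB/s)
```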
> Which is HyperTransport, presumably. Couldn't find a functional
> diagram. But what's the point of HyperTransport if nothing else uses
> it, no?

HyperTransport is point-to-point, like PCI-E, so there's no reason for
them to use it inside the chip. Most likely they'd use something much
simpler, as the transistor count for HT is pretty high.
 
Stretch

Jaimie Vandenbergh said:
> [twaddle]
>
> Oh, I do so love crossposts.
>
> Cheers - Jaimie

Yeah, one day you're still the one-eyed King in the Land of the Blind, the
next your throne is unceremoniously kicked from under your clueless butt
and you're exposed for what you are. It's ridiculous, it should be forbidden.
 
A. J. Moss

> I have an Asus A8N-SLI Deluxe motherboard, and I want to create
> 3TB of, let's call it temporary space, on a single NTFS partition.

Thanks for the advice, everyone.

I've since found out by experiment that both the SIL3114 (4 x SATA)
and the NVRAID (4 x SATA + 2 x PATA) controllers limit arrays to no
larger than 2TiB (2^41 bytes, or a 32-bit number of sectors). If you
try to go over this, the number of GiB reported available is reduced
modulo 2048.

This applies as much to four 750GB hard disks configured as RAID 0+1
as it does to four disks configured as RAID 0.

It's enough to make me wish Seagate hadn't gone the extra mile to
squeeze 187.5GB per platter onto their first 750GB hard disks. Three
750GB hard disks arranged as RAID-0 weigh in at 2095GiB (2048+47),
so the RAID controllers report a piddly 47GiB available under this
configuration. Three 700GB hard disks, or three 720GB ones, would
squeeze in below this 2TiB limit, while offering fractionally more
capacity than four 500GB ones.
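The modulo-2048 behaviour described above can be sketched in a few lines; a minimal illustration assuming the controller keeps only the low 32 bits of the 512-byte-sector count:

```python
# Sketch of the observed 2 TiB limit: with a 32-bit LBA sector count,
# the usable size of an array is its raw size modulo 2 TiB.
# Drive sizes are in decimal GB (10^9 bytes), as marketed.
SECTOR = 512
LIMIT_SECTORS = 2**32           # 32-bit sector count -> 2 TiB

def reported_gib(drive_gb, n_drives):
    total_bytes = n_drives * drive_gb * 10**9
    sectors = total_bytes // SECTOR
    usable = (sectors % LIMIT_SECTORS) * SECTOR   # wrapped modulo 2 TiB
    return usable // 2**30                        # whole GiB

print(reported_gib(750, 3))   # -> 47   (the "piddly 47GiB")
print(reported_gib(500, 4))   # -> 1862 (4 x 500GB fits under the limit)
print(reported_gib(733, 3))   # -> 2047 (just squeezes under 2 TiB)
```

The 733GB case is worth noting: three drives at that size land just a few thousand sectors under the 32-bit limit.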
 
Eric Gisin

There is no need for a soft-RAID card in this case.
Windows RAID 0 has no 2TB limit and is just as fast.
 
Folkert Rienstra

A. J. Moss said:
> Thanks for the advice, everyone.
>
> I've since found out by experiment that both the SIL3114 (4 x SATA)
> and the NVRAID (4 x SATA + 2 x PATA) controllers limit arrays to no
> larger than 2TiB (2^41 bytes, or a 32-bit number of sectors). If you
> try to go over this, the number of GiB reported available is reduced
> modulo 2048.
>
> This applies as much to four 750GB hard disks configured as RAID 0+1
> as it does to four disks configured as RAID 0.
>
> It's enough to make me wish Seagate hadn't gone the extra mile to
> squeeze 187.5GB per platter onto their first 750GB hard disks. Three
> 750GB hard disks arranged as RAID-0 weigh in at 2095GiB (2048+47),
> so the RAID controllers report a piddly 47GiB available under this
> configuration. Three 700GB hard disks, or three 720GB ones, would
> squeeze in below this 2TiB limit, while offering fractionally more
> capacity than four 500GB ones.

Then the solution is obviously simple: short-stroke each drive to 733GB.
 
