RAID vs network performance

3

3dO

Ok, this is a long one, but I really am clueless on this one.

Two weeks ago I set up a "fileserver". As I'm a visual nut and not a
server nut, I went for a simple XP Pro machine with a RAID controller:
an Asus P5K-VM with an Intel C2D E4300, 1GB RAM, a Promise FastTrak TX4310
PCI RAID controller, 4 Samsung SpinPoint T166 500GB drives (RAID 5) and
1 Samsung SpinPoint T166 160GB as the system disk.

Now, when I boot up the system the disks make a loud click right after
spin-up. Not having any experience with Samsungs, I don't know if this is
normal behaviour for these kinds of disks; it sounds scary when you've
always worked with Seagates. (They're very silent after this.)

The RAID 5 setup on the Promise card - which is PCI, with a default and
unchangeable stripe size of 16KB - is the real problem.

When I work with a 4.01GB video DVD image I get the following transfer rates:

localWRITE: copy from systemdisk to raid5 array : 28.9MB/s sustained
localREAD : copy from raid5 array to systemdisk : 49.4MB/s sustained
netWRITE: copy from network to raid5 array : 14.7MB/s
netREAD : copy from raid5 array to network : 24.5MB/s

Local speeds are somewhat acceptable, but I mainly use it over the network,
over gigabit, and those speeds are just not up to my expectations. They are
not sustained but more like /\_/\_/\_/\ (graph) in bursts, with bursts
up to 53MB/s; it looks like it's receiving network traffic and then stopping
to write it to the array...

Now, I already switched my RAID controller to another PCI slot because I
noticed that it shared an IRQ with the onboard gigabit network
controller I use to connect it to my network, but that didn't help.
I tried all sorts of NTFS cluster settings on the array partition, but to
no avail.
I tried RAID 10... no difference (write was slightly better, but not
great).

So I roamed the net to find... nothing.
I really don't know what to do. I have the feeling that there is some
form of issue with sharing PCI resources between the PCI RAID controller
and the onboard network card, so I went back to the shop, and they were
willing to take the Promise RAID controller back, because I was thinking
about an Areca 1210 PCI Express RAID controller and not very pleased
with the Promise card. But... I'm not completely sure that it's not
another problem that's causing these horrible transfer rates.

Has anybody any idea? It would really be appreciated!

thx
3dO
 
C

CJT


RAID 5 is not for speed so much as for reliability and compactness.

There can be big differences between gigabit cards.

The PCI bus comes in many flavors.

Overlay on those the fact that no chain is stronger than its weakest link.

Plus -

1 GB of RAM is puny by today's standards.

Microsoft makes server versions -- XP Pro isn't one of them
-- you'd probably do better with Samba running on Linux or Solaris.
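
To put a rough number on the "RAID 5 is not for speed" point above: with the Promise card's fixed 16KB stripe unit, any write that doesn't cover a whole stripe turns into a read-modify-write (read old data and old parity, write new data and new parity), and with soft-RAID all of that traffic also crosses the PCI bus. A minimal sketch of the I/O amplification, using the 4-drive layout from the original post; the figures are illustrative only:

```python
# Illustrative RAID 5 write-cost arithmetic (assumed layout, not a benchmark).

disks = 4                                             # 4-drive RAID 5, as in the post
stripe_unit_kib = 16                                  # fixed stripe size on the TX4310
data_per_stripe_kib = stripe_unit_kib * (disks - 1)   # 48 KiB of data per stripe

# A write covering a full stripe just writes the new data plus fresh parity:
full_stripe_ios = disks                               # 3 data writes + 1 parity write

# A write smaller than a stripe unit needs read-modify-write:
# read old data chunk, read old parity, write new data, write new parity.
sub_stripe_ios = 4

print(f"data per full stripe: {data_per_stripe_kib} KiB")
print(f"disk I/Os per full-stripe write: {full_stripe_ios}")
print(f"disk I/Os per sub-stripe write:  {sub_stripe_ios} (4x amplification)")
```

If the driver handles large sequential writes as full stripes, parity math alone shouldn't explain a 28.9MB/s write figure, which is part of why the card and the bus get the blame in the replies below.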
 
E

Eric Gisin

I doubt Promise makes a hardware RAID card. PCI is a mistake too.

You will get better results from Intel's soft RAID on the ICH8/9R,
which has a GB/s path to system RAM, compared to PCI's 120MB/s.
 
A

Arno Wagner

In comp.sys.ibm.pc.hardware.storage 3dO said:
Ok, this is a long one, but I really am clueless on this one.
Two weeks ago I set up a "fileserver". As I'm a visual nut and not a
server nut, I went for a simple XP Pro machine with a RAID controller:
an Asus P5K-VM with an Intel C2D E4300, 1GB RAM, a Promise FastTrak TX4310
PCI RAID controller, 4 Samsung SpinPoint T166 500GB drives (RAID 5) and
1 Samsung SpinPoint T166 160GB as the system disk.
Now, when I boot up the system the disks make a loud click right after
spin-up. Not having any experience with Samsungs, I don't know if this is
normal behaviour for these kinds of disks; it sounds scary when you've
always worked with Seagates. (They're very silent after this.)

Check the SMART status. If there is nothing in there, then
the disks are fine. If it is just the one click, I would guess
this is the head unloading and nothing to worry about.
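
If you want to script that check, smartmontools' smartctl (it has Windows builds too) makes it easy; a minimal sketch, with the device path as a placeholder. Note that drives sitting behind the Promise card may not be directly visible, so testing them one at a time on a motherboard port, as suggested later in the thread, is the simpler route:

```python
# Quick SMART health/attribute dump via smartmontools' smartctl.
# Assumes smartctl is installed and on the PATH; DEVICE is an example and
# will differ per system (e.g. /dev/sda, or a physical-drive name on Windows).
import subprocess

DEVICE = "/dev/sda"   # placeholder device, not from the thread

for flags in (["-H"], ["-A"]):   # -H: overall health verdict, -A: attribute table
    result = subprocess.run(["smartctl", *flags, DEVICE],
                            capture_output=True, text=True)
    print(result.stdout)

# Attributes worth a second look if the clicking is more than head unloading:
# Reallocated_Sector_Ct, Current_Pending_Sector, UDMA_CRC_Error_Count --
# nonzero raw values there are not "nothing in there".
```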

The RAID 5 setup on the Promise card - which is PCI, with a default and
unchangeable stripe size of 16KB - is the real problem.
When I work with a 4.01GB video DVD image I get the following transfer rates:
localWRITE: copy from systemdisk to raid5 array : 28.9MB/s sustained
localREAD : copy from raid5 array to systemdisk : 49.4MB/s sustained
netWRITE: copy from network to raid5 array : 14.7MB/s
netREAD : copy from raid5 array to network : 24.5MB/s

This controller is likely fakeRAID, i.e. not a hardware controller.
For that, the numbers are about right, assuming a PCI bus.
Local speeds are somewhat acceptable, but I mainly use it over the network,
over gigabit, and those speeds are just not up to my expectations. They are
not sustained but more like /\_/\_/\_/\ (graph) in bursts, with bursts
up to 53MB/s; it looks like it's receiving network traffic and then stopping
to write it to the array...
Now, I already switched my RAID controller to another PCI slot because I
noticed that it shared an IRQ with the onboard gigabit network
controller I use to connect it to my network, but that didn't help.
I tried all sorts of NTFS cluster settings on the array partition, but to
no avail.
I tried RAID 10... no difference (write was slightly better, but not great).
So I roamed the net to find... nothing.
I really don't know what to do. I have the feeling that there is some
form of issue with sharing PCI resources between the PCI RAID controller
and the onboard network card, so I went back to the shop, and they were
willing to take the Promise RAID controller back, because I was thinking
about an Areca 1210 PCI Express RAID controller and not very pleased
with the Promise card. But... I'm not completely sure that it's not
another problem that's causing these horrible transfer rates.
Has anybody any idea? It would really be appreciated!

We have some Arecas in use with 8 disks as RAID 6 on
them. Read speed exceeds 200MB/s with a 64-bit/133MHz
PCI-X bus. I have not benchmarked the write speed, but
it is likely in the same class. Note that PCI cannot
give you these speeds; it levels out at about 100MB/s
combined speed. Your localREAD speeds are very likely the
maximum the PCI bus of your machine can support.
Software RAID and fakeRAID can saturate the bus of the
machine, but they can need more bandwidth compared to
a hardware RAID controller, which is, of course, much
more expensive.

3ware also makes good RAID controllers, with better Linux
support. Under Windows, that does not matter of course.

Arno
 
P

Paul


http://www.newegg.com/Product/ProductReview.aspx?Item=N82E16822152054

"Cons: After about 2 and a half weeks the drive started making a loud
clicking noise and promptly failed."

Maybe the click on your drives is significant.

Test the drives individually, on a Southbridge port, and use whatever
diagnostics Samsung offers.

TX4310 - SoftRAID, no cache
Areca 1210 - Intel IOP processor for hardware RAID, onboard cache RAM
- Download driver from Areca site, not the one on the CD.
(As posted in the Newegg reviews for the card.)
WinXP Pro - limited number of connections to "shares" (10 users?). Not a server OS.

It may be RAID5, but it still needs backups!

Keep a matched spare disk on hand, for maintenance.

Using the matched spare disk, do a "fire drill" on the new array. Unplug
a drive, do a rebuild with the spare. Note any response from the Areca software,
like whether you get notification that the drive went missing. Practice while
nobody is using it, so you're prepared for the day when something happens
to the array. Know how many hours it takes to do the rebuild, and what
performance penalty exists while rebuilding. (Try some benchmarks on a client,
while the rebuild is underway.)

Paul
 
K

kony

Ok, this is a long one, but I really am clueless on this one.

Two weeks ago I set up a "fileserver". As I'm a visual nut and not a
server nut, I went for a simple XP Pro machine with a RAID controller:
an Asus P5K-VM with an Intel C2D E4300, 1GB RAM, a Promise FastTrak TX4310
PCI RAID controller, 4 Samsung SpinPoint T166 500GB drives (RAID 5) and
1 Samsung SpinPoint T166 160GB as the system disk.

Now, when I boot up the system the disks make a loud click right after
spin-up. Not having any experience with Samsungs, I don't know if this is
normal behaviour for these kinds of disks; it sounds scary when you've
always worked with Seagates. (They're very silent after this.)

The RAID 5 setup on the Promise card - which is PCI, with a default and
unchangeable stripe size of 16KB - is the real problem.

So even in the RAID bios, not the software management app in
windows, you can only use 16K?

Is the PCI bus 66 or 33MHz? I suppose this is a desktop
board so it's 33MHz. Yes that would be a bottleneck but not
yet, the PCI bus (assuming you have no other high bandwidth
devices on it) can easily hit 100MB/s and a little more.

When I work with a 4.01GB video DVD image I get the following transfer rates:

localWRITE: copy from systemdisk to raid5 array : 28.9MB/s sustained
localREAD : copy from raid5 array to systemdisk : 49.4MB/s sustained

The write to the RAID array is too slow; I think the problem
is your software RAID card putting too much overhead on
the CPU. However, you don't tell us what you're doing when
you are "working with a ... DVD image" - whether there is other
processing going on with that image, or if it is incidental
that it's a DVD image and you just mean you copied a large
file from one place to another.

Since a Core2Duo isn't exactly a slow processor, you might
try changing the driver for the RAID card, or using RAID 1
instead of 5, or getting a hardware RAID card.

netWRITE: copy from network to raid5 array : 14.7MB/s
netREAD : copy from raid5 array to network : 24.5MB/s

These are also dreadful, considering it must be gigabit
Ethernet. The RAID card is probably part of a bottleneck
per the factors already mentioned, but either the system on
the other end is also a bottleneck or you have a lot of
network processing overhead. Are you using jumbo frames (do
so if not)? What does the CPU utilization look like during
these copy periods?
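
One way to answer the CPU question without watching Task Manager is to sample per-core load on the server while the copy runs; a small sketch, assuming the third-party psutil package is available:

```python
# Sample per-core CPU load during a copy, to see whether the soft-RAID driver
# or the network stack is saturating one core. Assumes psutil is installed.
import time
import psutil

DURATION_S = 30        # sample for 30 seconds while the copy is in progress
INTERVAL_S = 1.0

samples = []
end = time.time() + DURATION_S
while time.time() < end:
    # percpu=True returns one percentage per core; the call blocks for INTERVAL_S.
    samples.append(psutil.cpu_percent(interval=INTERVAL_S, percpu=True))

for core, history in enumerate(zip(*samples)):
    print(f"core {core}: avg {sum(history) / len(history):.1f}%  "
          f"peak {max(history):.1f}%")
```

A core pinned near 100% would point at driver overhead; low figures push suspicion back onto the card and the bus.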


Local speeds are somewhat acceptable, but I mainly use it over the network,
over gigabit, and those speeds are just not up to my expectations. They are
not sustained but more like /\_/\_/\_/\ (graph) in bursts, with bursts
up to 53MB/s; it looks like it's receiving network traffic and then stopping
to write it to the array...

Now, I already switched my RAID controller to another PCI slot because I
noticed that it shared an IRQ with the onboard gigabit network
controller I use to connect it to my network, but that didn't help.
I tried all sorts of NTFS cluster settings on the array partition, but to
no avail.
I tried RAID 10... no difference (write was slightly better, but not
great).

If the network chip is sitting on the PCI bus that will
limit bandwidth some, but it would not account for such a
low bottleneck of 24MB/s.


So I roamed the net to find... nothing.
I really don't know what to do. I have the feeling that there is some
form of issue with sharing PCI resources between the PCI RAID controller
and the onboard network card.

Is the onboard network capable of gigabit? If so, try using
it instead. If not, disable it in the bios. Once disabled
there should be no conflicts if there were any.


So I went back to the shop, and they were
willing to take the Promise RAID controller back, because I was thinking
about an Areca 1210 PCI Express RAID controller and not very pleased
with the Promise card. But... I'm not completely sure that it's not
another problem that's causing these horrible transfer rates.

First I would break up the array (losing the data on it) and
benchmark with only one drive on the card as a single-drive
span, and/or a single-drive stripe if it allows that. Also
try the same drive on the motherboard's integral controller,
with and without the RAID card in the system. Doing these
things will give you a better idea of what the peak performance
of the drive is in the best possible scenario, then how the
RAID card limits that, then how that compares to the worst-case
results you mentioned above.
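
For those per-scenario comparisons, a crude timed sequential write and read gives more repeatable numbers than watching the copy dialog. A rough sketch; the path and size are placeholders, and the read figure only means much if the file is bigger than RAM:

```python
# Crude sequential write/read timing for one test scenario (single drive on the
# Promise card, then the same drive on the motherboard controller, and so on).
import os
import time

TEST_FILE = r"E:\bench.tmp"   # placeholder path on the volume under test
SIZE_MB = 2048                # larger than RAM so the read isn't served from cache
CHUNK = 1024 * 1024

buf = os.urandom(CHUNK)

start = time.time()
with open(TEST_FILE, "wb", buffering=0) as f:
    for _ in range(SIZE_MB):
        f.write(buf)
    os.fsync(f.fileno())      # make sure the data actually reached the disk
write_s = time.time() - start

start = time.time()
with open(TEST_FILE, "rb", buffering=0) as f:
    while f.read(CHUNK):
        pass
read_s = time.time() - start

os.remove(TEST_FILE)
print(f"write: {SIZE_MB / write_s:.1f} MB/s   read: {SIZE_MB / read_s:.1f} MB/s")
```

Run it once per scenario and compare; the spread between the single-drive and RAID 5 numbers is what tells you how much the card itself is costing.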

Yes, that replacement card has the parameters that should
make it a reasonable replacement. It removes the CPU
processing for the RAID array, and removes the as-yet-unseen
PCI bottleneck you would hit if it were PCI instead of PCI
Express. You might still use jumbo frames for your network
if you aren't yet.

Contrary to another poster's comment, 1GB of memory is not a
factor in this at all. Not even slightly: I've an old video
capture system with 256MB of system memory in it, formerly
128MB, that achieves over 30MB/s with a cheap gigabit card.

If you had a lot of continual access to the same files, or
multiple concurrent accesses, then the file caching from having a
lot of memory would help, but so far you have not
described a use where that will be needed.

Also, if your gigabit Ethernet card is really poor, that can
account for this. I've found the cheap Realtek-based cards
do reasonably for the price, but the VIA-chipped cards do
dreadfully in some systems. Intel or 3Com-chipped cards
tend to be slightly faster with lower CPU utilization, but it
shouldn't matter so much with this fairly modern system; still,
a different NIC is a small cost increase relative to
what you're already spending for a better RAID card.
 
A

Arno Wagner

So even in the RAID bios, not the software management app in
windows, you can only use 16K?
Is the PCI bus 66 or 33MHz? I suppose this is a desktop
board so it's 33MHz. Yes that would be a bottleneck but not
yet, the PCI bus (assuming you have no other high bandwidth
devices on it) can easily hit 100MB/s and a little more.

You forget that for copying data or moving it to a network
card, the data has to go over the bus twice....

Arno
 
K

kony

You forget that for copying data or moving it to a network
card, the data has to go over the bus twice....

Arno

True, the data goes over it twice but that does not account
for the figures the OP lists. The 100MB/s figure was meant
to contrast with the figure given for the local copy
operation, not the network copy operation... which was still
substantially below the PCI bus' potential bottleneck. The
same is true for the network copy operation, the PCI bus is
not the bottleneck "yet", though it may easily be once OP
has the better RAID card that was linked.

As a frame of reference from a far slower (overall) system
than the OP has, I've an old Celeron 500 fileserver with PCI
software raid card and PCI gigabit NIC that achieves about
35MB/s... and this on a mere 66MHz system memory bus with
integrated video usurping part of that memory bandwidth
(Intel i810 chipset without the discrete video memory chips
onboard) and the poor old CPU at 50% utilization from the
network packets and soft-raid driver. The drives aren't
anything special either, probably no faster than what the OP
is using though accessing the oldest drives in it do reduce
the throughput to closer to 20-25MB/s, IIRC.
 
A

Arno Wagner

In comp.sys.ibm.pc.hardware.storage kony said:
On 14 Dec 2007 11:46:24 GMT, Arno Wagner <[email protected]>
wrote:
True, the data goes over it twice but that does not account
for the figures the OP lists.

Not entirely, no. The network numbers are pretty bad.
The local copy ones may be due to bus limitations.
The 100MB/s figure was meant
to contrast with the figure given for the local copy
operation, not the network copy operation... which was still
substantially below the PCI bus' potential bottleneck. The
same is true for the network copy operation, the PCI bus is
not the bottleneck "yet", though it may easily be once OP
has the better RAID card that was linked.

It may just be a slow GbE card. The fastest GbE card
for the PCI bus I have ever used leveled out at 60MB/s.
Slower ones (RTL), make maybe 25MB/s. If you need
the full Gigabit, my experience is that you need either
PCI-X or PCI-E (or equivalent on-board cards).
As a frame of reference from a far slower (overall) system
than the OP has, I've an old Celeron 500 fileserver with PCI
software raid card and PCI gigabit NIC that achieves about
35MB/s... and this on a mere 66MHz system memory bus with

Yes, but that typically is a 64 bit bus, i.e. has
4 (!) times the PCI bandwidth.
integrated video usurping part of that memory bandwidth
(Intel i810 chipset without the discrete video memory chips
onboard) and the poor old CPU at 50% utilization from the
network packets and soft-raid driver. The drives aren't
anything special either, probably no faster than what the OP
is using though accessing the oldest drives in it do reduce
the throughput to closer to 20-25MB/s, IIRC.

As I said, it may just be a slow NIC. With a fast NIC,
the OP should get almost the local copy speeds over
the network.

Arno
 
K

kony

Not entirely, no. The network numbers are pretty bad.
The local copy ones may be due to bus limitations.

This is not likely; these read and write tests were to/from
the PCI RAID card, but the other drive was on a southbridge
integral controller that is not logically on the PCI bus.
In other words, as I'd written already, the PCI bus will not
limit it until around 100MB/s or higher, unless there was
other concurrent PCI bus activity, but that seems pretty
unlikely as this was a focused test.


It may just be a slow GbE card. The fastest GbE card
for the PCI bus I have ever used leveled out at 60MB/s.
Slower ones (RTL), make maybe 25MB/s.

RT8169 can fairly easily do 40MB/s.

If you need
the full Gigabit, my experience is that you need either
PCI-X or PCI-E (or equivalent on-board cards).

Sure, but it would be a bit of an extreme expense, when
talking about a RAID5 of consumer grade 7K2 RPM SATA drives.


Yes, but that typically is a 64 bit bus, i.e. has
4 (!) times the PCI bandwidth.

In practice, benchmarks of that era showed these platforms
achieving about 200MB/s (actually a little less), minus the
bandwidth used by the integrated video and other system
operations.

As I said, it may just be a slow NIC. With a fast NIC,
the OP should get almost the local copy speeds over
the network.

Arno

A different NIC may have some benefit but that much is not
a certainty. At some point the PCI bus will be a bottleneck
as these drives may be able to sustain over 50MB/s in a
local copy.
http://www.anandtech.com/printarticle.aspx?i=3031
 
3

3dO

Ok, thanks a lot for all the info, all of you.
To comment on your questions, Kony:

So even in the RAID bios, not the software management app in
windows, you can only use 16K?

Yes, the 16K is a "limit" of the card; you cannot change it. The manual and
the Promise website confirm this.

The write to the RAID array is too slow; I think the problem
is your software RAID card putting too much overhead on
the CPU. However, you don't tell us what you're doing when
you are "working with a ... DVD image" - whether there is other
processing going on with that image, or if it is incidental
that it's a DVD image and you just mean you copied a large
file from one place to another.

It is actually just copying an already rendered DVD image, so just one
file from A to B over the network - no background processes or anything.

These are also dreadful, considering it must be gigabit
Ethernet. The RAID card is probably part of a bottleneck
per the factors already mentioned, but either the system on
the other end is also a bottleneck or you have a lot of
network processing overhead. Are you using jumbo frames (do
so if not)? What does the CPU utilization look like during
these copy periods?

The system I did my tests from is an Intel C2D E6600, a 320GB Seagate SATA
disk, 2 gigs of RAM, an 8800GTX and Windows Vista Premium (a work horse
which doubles as a game animal), so I doubt that this would be an issue.
CPU load on the RAID 5 machine during a copy (local or network) never
exceeds 12% on either core. (Note: that's with AVG antivirus running;
turning it off does not have any impact on performance.)

Is the onboard network capable of gigabit? If so, try using
it instead. If not, disable it in the bios. Once disabled
there should be no conflicts if there were any.

Maybe an unclarity on my part: I am using the onboard gigabit network.


So now I'm really thinking about trading in the - indeed - softraid card
for an Areca, which would mean spending some 150 euros extra, while
looking at prices for an Intel Gbit network card over PCI Express - these
big boys are going over the counter here where I live for some 90 euros,
which compared to the 150 of the Areca is quite hefty...

Maybe another idea from you guys? But I'm really thinking, after
reading all the posts here, that the Promise RAID card is just a piece of
BEEP causing the problems.
 
 
K

kony

The system I did my tests from is an Intel C2D E6600, a 320GB Seagate SATA
disk, 2 gigs of RAM, an 8800GTX and Windows Vista Premium (a work horse
which doubles as a game animal), so I doubt that this would be an issue.
CPU load on the RAID 5 machine during a copy (local or network) never
exceeds 12% on either core. (Note: that's with AVG antivirus running;
turning it off does not have any impact on performance.)

You are listing things that have nothing to do with
performance in this test. 2GB of RAM, video - they don't matter.
Vista makes it slower too, but it shouldn't make it this
slow. Even so, I would retest under WinXP, because Vista
is known to have some severe hesitations on some
systems - but that is generally when dealing with a lot of
files, not just copying one large file.

You did not mention if you are using jumbo frames. I
suppose it should also be said that, of course, with jumbo
frames your switch (if there is one) has to support them
too, and further there are some TCP/IP tweaks (Google will
find them) that benefit LAN performance over gigabit
Ethernet.
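
For what it's worth, the raw framing overhead of standard versus jumbo MTU is easy to put a number on; the bigger win from jumbo frames is usually the much lower packet (and interrupt) rate rather than the few percent of wire efficiency. Illustrative arithmetic with standard header sizes only:

```python
# Wire efficiency and packet rate at standard vs jumbo MTU (illustrative).
ETH_OVERHEAD = 14 + 4 + 8 + 12   # Ethernet header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20         # IPv4 + TCP, no options

for mtu in (1500, 9000):
    payload = mtu - IP_TCP_HEADERS
    on_wire = mtu + ETH_OVERHEAD
    efficiency = payload / on_wire
    pps = (1_000_000_000 / 8) / payload   # packets/s to carry 1 Gbit/s of payload
    print(f"MTU {mtu}: {efficiency:.1%} payload efficiency, "
          f"~{pps:,.0f} packets/s at full gigabit")
```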

Maybe an unclarity on my part: I am using the onboard gigabit network.

Can you determine if it is logically sitting on the PCI bus?
This might be useful to know, as it affects available PCI bus
bandwidth during network copying.


So now I'm really thinking about trading in the - indeed - softraid card
for an Areca, which would mean spending some 150 euros extra, while
looking at prices for an Intel Gbit network card over PCI Express - these
big boys are going over the counter here where I live for some 90 euros,
which compared to the 150 of the Areca is quite hefty...

Maybe another idea from you guys? But I'm really thinking, after
reading all the posts here, that the Promise RAID card is just a piece of
BEEP causing the problems.


As already suggested, you should benchmark in other
scenarios. Connect a single-drive span to the Promise card
and test local and network copies. Test a drive connected
to the motherboard controller doing a network copy. Compare the
results to see how much better each scenario is.

Yes, ultimately the fastest-performing solution will be to
replace the PCI software RAID card with a hardware PCI
Express version, and the same for the gigabit Ethernet card.
The real question is how much benefit at what cost, and
what your uses are, since some don't need utmost performance.
I mention this because many systems can use cheaper hardware
and still achieve 25-30MB/s regularly, though RAID 5 on a
software card is probably not a good idea.
 
A

Arno Wagner

In comp.sys.ibm.pc.hardware.storage kony said:
On 15 Dec 2007 00:08:07 GMT, Arno Wagner <[email protected]>
wrote:
This is not likely; these read and write tests were to/from
the PCI RAID card, but the other drive was on a southbridge
integral controller that is not logically on the PCI bus.

If you are lucky.
In other words, as I'd written already, the PCI bus will not
limit it until around 100MB/s or higher, unless there was
other concurrent PCI bus activity, but that seems pretty
unlikely as this was a focused test.

For a good chipset, yes. For slower ones, you may get
lower limits.

RT8169 can fairly easily do 40MB/s.

Depends very much on the board and the other components
in use. I have seen speeds as low as 240Mbit/s and that
was with UDP streaming, not TCP.
Sure, but it would be a bit of an extreme expense, when
talking about a RAID5 of consumer grade 7K2 RPM SATA drives.
In practice, benchmarks of that era showed these platforms
achieving about 200MB/s (actually a little less), minus the
bandwidth used by the integrated video and other system
operations.

Yes, and PCI buses of that era can often not reach more
than 70-80MB/s.
A different NIC may have some benefit but that much is not
a certainty.

Not if the NIC is not the problem. But I evaluated
7 or 8 GbE NICs last year in PCI slots (replacement
for failing Netgear crapware in a computer cluster)
and have two more at home in use and, believe me, the NIC
can make the difference between 250Mbit/s and 800Mbit/s.
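
To separate NIC and driver throughput from everything disk-related, a memory-to-memory transfer between the two machines is the cleanest check; iperf-style tools do exactly this, but a bare-bones sketch is short. Host and port are placeholders, and it assumes Python 3.8+ on both ends:

```python
# Bare-bones TCP throughput test: memory to memory, no disks involved.
# Run "python nettest.py recv" on one machine and
# "python nettest.py send <host>" on the other.
import socket
import sys
import time

PORT = 50007                      # arbitrary placeholder port
CHUNK = b"\0" * (1024 * 1024)     # 1 MiB per send
TOTAL_MB = 1024                   # send 1 GiB in total

def recv():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(1 << 20):   # drain until the sender closes
                pass

def send(host):
    with socket.create_connection((host, PORT)) as s:
        start = time.time()
        for _ in range(TOTAL_MB):
            s.sendall(CHUNK)
    elapsed = time.time() - start
    print(f"{TOTAL_MB / elapsed:.1f} MB/s "
          f"({TOTAL_MB * 8 / elapsed:.0f} Mbit/s)")

if __name__ == "__main__":
    send(sys.argv[2]) if sys.argv[1] == "send" else recv()
```

If this maxes out well below wire speed, the NIC, driver or bus is the limit; if it runs near gigabit but the file copies don't, look back at the RAID card.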
At some point the PCI bus will be a bottleneck
as these drives may be able to sustain over 50MB/s in a
local copy.

Sorry, that makes no sense. What do you mean?

Arno
 
K

kony

If you are lucky.

Not really, it's been years since integral drive
controllers were on the PCI bus. Certainly no boards with
integral SATA via the southbridge.

For a good chipset, yes. For slower ones, you may get
lower limits.

Not this low; even the most noteworthily terrible VIA
chipsets would reach over 60MB/s, and even more if there weren't
something especially notable hogging the PCI bus.

Depends very much on the board and the other components
in use. I have seen speeds as low as 240Mbit/s and that
was with UDP streaming, not TCP.


.... Then it's not RT8169 that is the bottleneck and the
comment about 25MB/s is just invalid.

I have, maybe half a dozen systems using these RT8169
cards, they are not at all limited to the low throughput the
OP is seeing, BUT, on any I bothered to benchmark, I did
ensure better jumbo frame TCP/IP settings.

Also, at some point realtek limited jumbo frame support to
around 7K instead of 9K, so that limit is worth
consideration but in real world use, 7K is not going to
bottleneck anywhere near this value reported... maybe if it
were a 80486 system or Pentium 1.

Yes, and PCI buses of that era can often not reach more
than 70-80MB/s.


Not true, the vast majority did reach peak throughput, with
the exception of some VIA and early socket 370 SiS chipsets. The
rest did quite well at such a meager task. I have built
quite a few fileservers from old legacy systems; I would even
consider them ideal for this use if someone didn't need over
5MB/s, because they need less cooling and have lower power
consumption, not to mention they're practically free, being
useful for nothing else.


Not if the NIC is not the problem. But I evaluated
7 or 8 GbE NICs last year in PCI slots (replacement
for failing Netgear crapware in a computer cluster)
and have two more at home in use and, believe me, the NIC
can make the difference between 250Mbit/s and 800Mbit/s.


I cannot comment on any random manufacturer, since the real
issue is what driver they supplied... what remains is which
chipset and other system PCI bus factors.

I can say that any and all systems I have used Realtek 8169
based PCI cards on did not have any substantial bottleneck
from that card. What I really mean is: this is a cheap
card, used when other things are budget constrained too,
and it can easily, always reach over 30Mb/s when there is
no other substantial problem. Since the OP doesn't see
30MB/s, either the NIC chipset is even worse than that, or
there is an unrelated substantial problem.

Sorry, that makes no sense. What do you mean?

It makes sense if you consider drive throughput vs PCI
bottlenecks.

IOW, if the drive can do in excess of 50MB/s, which
benchmarks show it can, then it's 50MB/s over the PCI bus to
memory, then back again to the PCI NIC. While the PCI bus
can achieve 100Mb/s, it can't do much more without at least
minor bottlenecks (even moreso the more one goes above
100MB/s, up to about 120MB/s if ATA133 HDD is used).
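
Spelling out the double-crossing arithmetic from the paragraph above, with the usual rule-of-thumb figures for 32-bit/33MHz PCI (theoretical peak 133MB/s, roughly 100MB/s in practice):

```python
# What a shared 32-bit/33MHz PCI bus leaves for a network copy when both the
# RAID card and the NIC sit on it, so every byte crosses the bus twice
# (disk -> RAM, then RAM -> NIC).
PCI_THEORETICAL_MB_S = 133.0   # 32 bits * 33 MHz
PCI_PRACTICAL_MB_S = 100.0     # typical real-world ceiling mentioned above

for bus in (PCI_THEORETICAL_MB_S, PCI_PRACTICAL_MB_S):
    # Each MB delivered to the network consumes roughly 2 MB of bus bandwidth.
    print(f"bus {bus:.0f} MB/s -> at most ~{bus / 2:.0f} MB/s end to end "
          f"when both controllers share it")
```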
 
K

kony

Not true, the vast majority did reach peak throughput, with
the exception of some VIA and early socket 370 SiS chipsets. The
rest did quite well at such a meager task. I have built
quite a few fileservers from old legacy systems; I would even
consider them ideal for this use if someone didn't need over
5MB/s,

I meant "didn't need over 35MB/s".

The one noteworthy thing about the now-aged and seemingly
slow PCI bus was that, if the chipset or board designers didn't
make a terrible mistake, it really was capable of 100MB/s if
not more... even on an otherwise sluggish system.

Once upon a time, even with a meager PII/233 with an ATA33-interfaced
HDD, I was seeing a little over 25MB/s using an
RT8169 gigabit Ethernet card. While that is not impressive,
it's still higher than the 24MB/s that was reported.
 
A

Arno Wagner

In comp.sys.ibm.pc.hardware.storage kony said:
On 16 Dec 2007 04:47:38 GMT, Arno Wagner <[email protected]>
wrote:
Not really, it's been years since integral drive
controllers were on the PCI bus. Certainly no boards with
integral SATA via the southbridge.
Not this low; even the most noteworthily terrible VIA
chipsets would reach over 60MB/s, and even more if there weren't
something especially notable hogging the PCI bus.


... Then it's not RT8169 that is the bottleneck and the
comment about 25MB/s is just invalid.


240Mbit/s = 25 MB/s
I have, maybe half a dozen systems using these RT8169
cards, they are not at all limited to the low throughput the
OP is seeing, BUT, on any I bothered to benchmark, I did
ensure better jumbo frame TCP/IP settings.

There are different chip generations and the combination
of mainboard and chip also matters.
Also, at some point realtek limited jumbo frame support to
around 7K instead of 9K, so that limit is worth
consideration but in real world use, 7K is not going to
bottleneck anywhere near this value reported... maybe if it
were a 80486 system or Pentium 1.

If you can use Jumbo-frames. Depends.

Not true, the vast majority did reach peak throughput, with
the exception of some VIA and early socket 370 SiS chipsets.

So? Then I must have consistently bought the wrong hardware....
The
rest did quite well at such a meager task. I have built
quite a few fileservers from old legacy systems; I would even
consider them ideal for this use if someone didn't need over
5MB/s, because they need less cooling and have lower power
consumption, not to mention they're practically free, being
useful for nothing else.



I cannot comment on any random manufacturer, since the real
issue is what driver they supplied... what remains is which
chipset and other system PCI bus factors.
I can say that any and all systems I have used Realtek 8169
based PCI cards on did not have any substantial bottleneck
from that card. What I really mean is: this is a cheap
card, used when other things are budget constrained too,
and it can easily, always reach over 30Mb/s when there is

I assume you mean 30MB/s?
no other substantial problem. Since the OP doesn't see
30MB/s, either the NIC chipset is even worse than that, or
there is an unrelated substantial problem.
It makes sense if you consider drive throughput vs PCI
bottlenecks.
IOW, if the drive can do in excess of 50MB/s, which
benchmarks show it can, then it's 50MB/s over the PCI bus to
memory, then back again to the PCI NIC. While the PCI bus
can achieve 100Mb/s, it can't do much more without at least

Again, would that be 100MB/s?
minor bottlenecks (even moreso the more one goes above
100MB/s, up to about 120MB/s if ATA133 HDD is used).

And the bus handover also causes some loss. But, yes.
I agree.

Arno
 
K

kony

240Mbit/s = 25 MB/s

And? It doesn't change that I have systems with RT8169
which achieve more than 25MB/s.


There are different chip generations and the combination
of mainboard and chip also matters.

Some, certainly with a few VIA chipsets which had poor PCI
bandwidth that was choking data flow to the NIC. As for
pure chipset-NIC interactions, that is very rare, and
more often attributable to an OS problem than to a
mainboard problem.


If you can use Jumbo-frames. Depends.

Most can at least use 4K frames if not 7-9K. The majority
of networking equipment will pass more than 1.5K.

So? Then I must have consistently bought the wrong hardware....

Many of us did buy a chipset or two that had this problem,
but by midway into the Pentium 3 era it was mostly limited
to VIA chipsets, as SiS had by then overcome it. Intel had
great PCI throughput much further back; I can't put my
finger on exactly when it was more than 80MB/s, but certainly
by the 440LX/P2 era, and probably before that.


I assume you mean 30MB/s?

Yes, I mistyped 30MB/s.

Again, would that be 100MB/s?

Yep, but wouldn't that have been obvious by my prior
statements?
 
