RAID0 or RAID5 for network-to-disk backup (Gigabit)?

markm75

Anyone have any thoughts on which is going to give me better write
speeds? I know RAID0 should be much faster, and if I combine it with
RAID1 it's redundant as well.

But I'm assuming that when I back up my servers to this backup server
across the gigabit network, my write speeds would max out at, say,
60 MB/s, wouldn't they?

Right now on RAID5 (SATA II) I'm getting a write speed of 57 MB/s
(bypassing the Windows cache, using SiSandra to benchmark). If you
don't bypass the Windows cache this drops to more like 38 MB/s.

Any thoughts?
 
Arno Wagner

Previously markm75 said:
> Anyone have any thoughts on which is going to give me better write
> speeds? I know RAID0 should be much faster, and if I combine it with
> RAID1 it's redundant as well.
>
> But I'm assuming that when I back up my servers to this backup server
> across the gigabit network, my write speeds would max out at, say,
> 60 MB/s, wouldn't they?

Yes. You may even hit a limit before that unless you have PCI-X or
PCI-E network cards, since a plain 32-bit/33 MHz PCI bus tops out at
about 133 MB/s in theory, shared with everything else on it, and
considerably less in practice.
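As a rough sanity check, here is a small Python sketch (my own
numbers; the framing overheads are approximations for a standard
1500-byte MTU) of the best a single gigabit link can deliver as TCP
payload:

    # Rough estimate of the practical payload ceiling of Gigabit Ethernet.
    # Overhead figures are approximate and assume a standard 1500-byte MTU.

    LINE_RATE_BITS = 1_000_000_000       # 1 Gbit/s raw line rate

    MTU = 1500                           # IP packet size (bytes)
    ETH_OVERHEAD = 14 + 4 + 8 + 12       # header + FCS + preamble + inter-frame gap
    IP_TCP_HEADERS = 20 + 20             # IPv4 + TCP headers, no options

    payload_per_frame = MTU - IP_TCP_HEADERS      # 1460 bytes
    wire_bytes_per_frame = MTU + ETH_OVERHEAD     # 1538 bytes

    efficiency = payload_per_frame / wire_bytes_per_frame
    ceiling_mb_s = LINE_RATE_BITS / 8 * efficiency / 1e6
    print(f"TCP payload ceiling: ~{ceiling_mb_s:.0f} MB/s")   # ~119 MB/s

In theory the wire allows just under 120 MB/s of payload, so a
practical ceiling around 60 MB/s usually means the NIC, the bus it
sits on, or the protocol stack is the limit, not the Ethernet itself.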
> Right now on RAID5 (SATA II) I'm getting a write speed of 57 MB/s
> (bypassing the Windows cache, using SiSandra to benchmark). If you
> don't bypass the Windows cache this drops to more like 38 MB/s.
>
> Any thoughts?

Keep it as it is. If that is not enough, use two different servers
to back up to with separate network connections.

Arno
 
willbill

markm75 said:
> Anyone have any thoughts on which is going to give me better write
> speeds? I know RAID0 should be much faster, and if I combine it with
> RAID1 it's redundant as well.
>
> But I'm assuming that when I back up my servers to this backup server
> across the gigabit network, my write speeds would max out at, say,
> 60 MB/s, wouldn't they?
>
> Right now on RAID5 (SATA II) I'm getting a write speed of 57 MB/s
> (bypassing the Windows cache, using SiSandra to benchmark). If you
> don't bypass the Windows cache this drops to more like 38 MB/s.
>
> Any thoughts?


Two-disk RAID0 with the write cache turned on: better read/write
performance all around, and twice the disk space. Run the backup
machine on a UPS.

So what if it (the RAID0) fails every few years? It's a minor backup
machine, and odds are that it won't be the end of the world.

If the backup is not "minor" (meaning it is totally critical),
then go with the slower RAID5 (or something similar).

bill
 
Maurice Volaski

Aren't these numbers reversed? Anyway, good drives should be > 75
MB/second when the cache is bypassed. Not bypassing it should give
significantly greater performance.
 
Arno Wagner

> Aren't these numbers reversed? Anyway, good drives should be > 75
> MB/second when the cache is bypassed. Not bypassing it should give
> significantly greater performance.

With a reasonable buffer (not cache) implementation, yes. Something
seems wrong, or MS screwed up rather badly in implementing this.

But the >75 MB/s figure only applies to the start of the disk. At the
end it is typically somewhere in the 35-50 MB/s range, since the
cylinders there contain fewer sectors.
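A small illustrative sketch of that falloff (the zone figures below
are invented, typical-looking numbers, not measurements of any
particular drive):

    # Zoned bit recording: outer tracks hold more sectors per revolution,
    # so at constant spindle speed the sequential rate scales with the
    # sectors-per-track count.  Zone values here are illustrative only.

    RPM = 7200
    revs_per_second = RPM / 60.0
    SECTOR_BYTES = 512

    zones = [("outer", 1200), ("middle", 900), ("inner", 600)]  # sectors/track

    for name, sectors_per_track in zones:
        mb_per_second = sectors_per_track * SECTOR_BYTES * revs_per_second / 1e6
        print(f"{name:>6} zone: ~{mb_per_second:.1f} MB/s")

    # outer ~73.7 MB/s, inner ~36.9 MB/s: roughly the >75 MB/s down to
    # 35-50 MB/s falloff described above.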

Arno
 
markm75

> With a reasonable buffer (not cache) implementation, yes. Something
> seems wrong, or MS screwed up rather badly in implementing this.
>
> But the >75 MB/s figure only applies to the start of the disk. At the
> end it is typically somewhere in the 35-50 MB/s range, since the
> cylinders there contain fewer sectors.
>
> Arno

Apologies, yeah: when bypassing the cache I got an index of 57 MB/s...
 
Folkert Rienstra

markm75 said:
> Apologies, yeah: when bypassing the cache I got an index of 57 MB/s...

No, really?
Who would have thought that from your first post. Thanks for clearing that up.
It all becomes much clearer now.
 
markm75

> Apologies, yeah: when bypassing the cache I got an index of 57 MB/s...

If I check the "Bypass Windows cache" option I do in fact get HIGHER
values than when not bypassing the cache.

I know this sounds reversed, but it is what happens.
 
Arno Wagner

> If I check the "Bypass Windows cache" option I do in fact get HIGHER
> values than when not bypassing the cache.
>
> I know this sounds reversed, but it is what happens.

It is possible. It does, however, point to some serious problem
in the write-buffer design.

Arno
 
Maxim S. Shatskih

> If I check the "Bypass Windows cache" option I do in fact get HIGHER
> values than when not bypassing the cache.
>
> I know this sounds reversed, but it is what happens.

It depends on the data access pattern; for some patterns it really is
profitable. For instance, databases like MS SQL Server also use cache
bypass.
 
markm75

As a side note: much to my disappointment, I got bad results last
night in my over-the-network Acronis backup of one of my servers to my
current backup server (the test server I was doing the HDD-to-HDD
testing on, where I was getting a 300 GB partition done in 4 hours
with Acronis on the same machine).

My results going across gigabit Ethernet using Acronis, set to normal
compression (not high or max; I'm wondering if increasing compression
would speed things along):

Size of partition: 336 GB or 334064 MB (RAID5, SATA 150)
Time to complete: 9 hrs 1 min (541 mins or 32460 seconds)
Compressed size at normal: 247 GB
Destination: single volume, SATA II (sequential writes 55 MB/s
bypassing the Windows cache, 38 MB/s not bypassing it). Yes, I don't
know why the numbers are LOWER when testing with SiSandra and not
bypassing the cache, but they are.


Actual rate: 10.29 MB/s.

Using qcheck from the source to the destination I get 450 Mbps to
666 Mbps (using 450 as the average = 56.25 MB/s).

So the maximum rate I could possibly expect would be 56 MB/s, if the
writes on the destination occurred at that rate.
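To make the gap explicit, here is the arithmetic behind those two
figures as a quick Python check (using only the numbers above):

    # Effective backup rate vs. what the network benchmark says is available.

    partition_mb = 334064               # source partition size (MB)
    elapsed_s = 9 * 3600 + 1 * 60       # 9 h 1 min = 32460 s

    backup_rate = partition_mb / elapsed_s
    print(f"effective backup rate:    {backup_rate:.2f} MB/s")      # ~10.29

    qcheck_mbps = 450                   # low end of the qcheck result (Mbit/s)
    network_ceiling = qcheck_mbps / 8
    print(f"network ceiling (qcheck): {network_ceiling:.2f} MB/s")  # 56.25

    print(f"backup uses ~{backup_rate / network_ceiling:.0%} of the link")  # ~18%

So the backup is using less than a fifth of what the link itself can
sustain.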


Any thoughts on how to get this network backup rate up?

Any thoughts on running simultaneous jobs across the network if I
enable both gigabit ports on the destination server? How would I do
this, i.e. do I have to do trunking, or just set another IP on the
other port and direct the backup to that ip\e$ ? If I end up using
Acronis, there is no way to define a job that will back up each server
sequentially; I'd have to know when one job stops in order to start up
the next server on each week's full backup. So the only way I figured
around this was to do two servers at once over dual gigabit?

I have Intel Pro cards in a lot of the servers, but I don't see any
way to set jumbo frames either.

My switch is a D-Link DGS-1248T (gigabit, managed).

The controller card on the source server in this case is a 3ware
Escalade 8506-4LP (PCI-X, SATA I), while the one on the destination is
an ARC-1120 (PCI-X, 8-port SATA II). I'm assuming these are both very
good cards; I don't know how they compare to the Raidcore though.

I'm still a little confused about the SATA vs SCSI argument too. The
basic rule seems to be that SCSI is better if a lot of simultaneous
hits are going on, but why? I'm still unsure whether each drive on a
SCSI chain gets a divided share of, say, 320 MB/s, and likewise for
SATA: does each cable get a share of the 3 Gbps rate, or does each get
its own 3 Gb/s? If both give dedicated bandwidth to any given drive,
then what makes SCSI superior to SATA?
 
Arno Wagner

In comp.sys.ibm.pc.hardware.storage markm75 said:
> My results going across gigabit Ethernet using Acronis, set to normal
> compression (not high or max; I'm wondering if increasing compression
> would speed things along):
>
> Size of partition: 336 GB or 334064 MB (RAID5, SATA 150)
> Time to complete: 9 hrs 1 min (541 mins or 32460 seconds)
> [rest of post snipped]


Are you sure your bottleneck is not the compression? Retry
this without compression for a reference value.

Arno
 
markm75

> Are you sure your bottleneck is not the compression? Retry
> this without compression for a reference value.
>
> Arno

Well, I started this one around 4:30 pm and it's 10:30 now; it's been
6 hours and it says 3 to go, so that would still be 9 hours or so. I
turned compression off, so we shall see; it's not looking good though.
Still nowhere near the bandwidth it should be using (I did a SiSandra
test to compare; SiSandra was also coming in around 61 MB/s).
 
Arno Wagner

> Well, I started this one around 4:30 pm and it's 10:30 now; it's been
> 6 hours and it says 3 to go, so that would still be 9 hours or so. I
> turned compression off, so we shall see; it's not looking good though.
> Still nowhere near the bandwidth it should be using (I did a SiSandra
> test to compare; SiSandra was also coming in around 61 MB/s).

Hmm. Did you do the Sandra test over the network? If not, maybe you
have a 100 Mb/s link somewhere in there? Your observed 10.29 MB/s
would fit such a link speed perfectly, as it is very close to 100 Mb/s
raw speed (8 * 10.29 = 82.3 Mb/s; add a bit for Ethernet overhead...).
It might be a bad cable, router port or network card. I have had this
type of problem several times with Gigabit Ethernet.
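Checking that hypothesis with rough framing overheads (approximate
figures, standard 1500-byte MTU assumed):

    # Does 10.29 MB/s of payload look like a saturated 100 Mbit/s hop?
    # Framing overheads are approximate (1500-byte MTU, no jumbo frames).

    payload_rate_mbit = 10.29 * 8           # 82.3 Mbit/s of TCP payload
    framing_efficiency = 1460 / 1538        # payload bytes / bytes on the wire

    wire_rate = payload_rate_mbit / framing_efficiency
    print(f"~{wire_rate:.0f} Mbit/s on the wire")   # ~87 Mbit/s, close to a
                                                    # saturated 100 Mbit/s link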

Arno
 
markm75

> Hmm. Did you do the Sandra test over the network? If not, maybe you
> have a 100 Mb/s link somewhere in there? Your observed 10.29 MB/s
> would fit such a link speed perfectly, as it is very close to 100 Mb/s
> raw speed (8 * 10.29 = 82.3 Mb/s; add a bit for Ethernet overhead...).
> It might be a bad cable, router port or network card. I have had this
> type of problem several times with Gigabit Ethernet.
>
> Arno

Yep, it was done over the network and yielded 60 MB/s.

By the way, that uncompressed backup took about 11 hours to complete
(again, the same size on the same drive locally was about 4 hours);
with normal compression it was about 9 hours over the network. I'm
trying max compression now...
 
markm75

> Yep, it was done over the network and yielded 60 MB/s.
>
> By the way, that uncompressed backup took about 11 hours to complete
> (again, the same size on the same drive locally was about 4 hours);
> with normal compression it was about 9 hours over the network. I'm
> trying max compression now...


Very, very bad results with max compression: it took something like 12
or 13 hours...

This baffles me, as I know I can do the backup on the same drive
locally in 4 hours on that machine, and I know I can do the same type
of backup on the remote (destination) machine as well. So going from D
on ServerA to E on ServerB should just be a limitation of the network,
which benches at 60 MB/s in all of my tests.

Again, I don't think jumbo frames would help, but I can't even turn
them on, as the NICs on each end don't have this setting. I'm not sure
what else to test here or fix.
 
Arno Wagner

In comp.sys.ibm.pc.hardware.storage markm75 said:
> Very, very bad results with max compression: it took something like 12
> or 13 hours...
>
> This baffles me, as I know I can do the backup on the same drive
> locally in 4 hours on that machine,

With maximum compression? Ok, then it is not a CPU issue.
> and I know I can do the same type of backup on the remote
> (destination) machine as well. So going from D on ServerA to E on
> ServerB should just be a limitation of the network, which benches at
> 60 MB/s in all of my tests.

There seems to be some problem with the network. One test you can try
is pushing, say, 10 GB or so of data through the network to the target
drive and seeing how long that takes. If that works at the expected
speed, then there is some issue with the type of traffic your software
generates. That is difficult to debug without sniffing the traffic.
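A minimal timed-write sketch along those lines (Python; the UNC path
is only a placeholder for the real backup share):

    import os
    import time

    # Time a large sequential write to the backup share to see what the
    # network plus destination disk can sustain for plain file I/O.
    # The UNC path is a placeholder: substitute the real share.
    TARGET = r"\\backupserver\e$\speedtest.bin"
    TOTAL_BYTES = 10 * 1024**3          # ~10 GB
    CHUNK = 4 * 1024**2                 # 4 MB per write
    buf = os.urandom(CHUNK)             # incompressible data

    start = time.time()
    with open(TARGET, "wb") as f:
        written = 0
        while written < TOTAL_BYTES:
            f.write(buf)
            written += CHUNK
        f.flush()
        os.fsync(f.fileno())            # make sure it really reached the disk
    elapsed = time.time() - start

    print(f"wrote {written / 1e6:.0f} MB in {elapsed:.0f} s "
          f"= {written / 1e6 / elapsed:.1f} MB/s")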

One other thing you should do is to create a test setup that allows
you to measure the speed in 5-10 minutes; otherwise this will
literally take forever to figure out.
> Again, I don't think jumbo frames would help, but I can't even turn
> them on, as the NICs on each end don't have this setting. I'm not sure
> what else to test here or fix.

I agree that jumbo-frames are not the issue. They can increase
throughput by 10% or so, but your problem is an order of magnitude
bigger.

Here is an additional test: connect the two computers directly with
a short CAT5e cable and see whether things get faster then.

Arno
 
markm75

> There seems to be some problem with the network. One test you can try
> is pushing, say, 10 GB or so of data through the network to the target
> drive and seeing how long that takes.
> [rest snipped]

I just tried a 10 GB file across the Ethernet: 4 min 15 s for 9.31 GB.
This seems normal to me.
 
Bill Todd

markm75 wrote:

> [...]
> This baffles me, as I know I can do the backup on the same drive
> locally in 4 hours on that machine, and I know I can do the same type
> of backup on the remote (destination) machine as well. So going from D
> on ServerA to E on ServerB should just be a limitation of the network,
> which benches at 60 MB/s in all of my tests.

While I have no specific solution to suggest, it is possible that the
problem is not network bandwidth but network latency, which after the
entire stack is taken into account can add up to hundreds of
microseconds per transaction.

If the storage interactions performed by the backup software (in
contrast to simple streaming file copies) are both small (say, a few KB
apiece) and 'chatty' (such that such a transaction occurs for every
modest-size storage transfer) this could significantly compromise
network throughput (since the per-transaction overhead could increase by
close to a couple of orders of magnitude compared to microsecond-level
local ones).

Another remote possibility is that for some reason transferring across
the network when using the backup software is suppressing write-back
caching at the destination, causing a missed disk revolution on up to
every access (though the worst case would limit throughput to less than
8 MB/sec if Windows is destaging data in its characteristic 64 KB
increments, and you are apparently doing somewhat better than that).
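That worst-case bound checks out arithmetically; a quick sketch,
assuming a typical 7,200 rpm drive (the spindle speed is my
assumption, not stated in the thread):

    # Worst case: every 64 KB destage waits out a full missed revolution.
    # The 7200 rpm spindle speed is an assumption; 64 KB is the Windows
    # destage increment mentioned above.

    RPM = 7200
    revolution_s = 60.0 / RPM            # ~8.33 ms per revolution
    DESTAGE_BYTES = 64 * 1024

    worst_case_mb_s = DESTAGE_BYTES / revolution_s / 1e6
    print(f"worst-case throughput: ~{worst_case_mb_s:.1f} MB/s")   # ~7.9 MB/s

That is indeed just under the 8 MB/s figure, and below the roughly
10 MB/s actually observed.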

- bill
 
Arno Wagner

> I just tried a 10 GB file across the Ethernet: 4 min 15 s for 9.31 GB.
> This seems normal to me.

That is about 36 MB/s, far lower than the stated 60 MB/s benchmark. If
the slowdown on a linear, streamed write is that big, maybe the slow
backup you experience is just due to the write strategy of the backup
software. It seems to me the file server OS may not be too well suited
to its task...

Arno
 
