Disk to Disk Backup recommendations requested


Michelle

Greetings,

I currently have a Win2K file server which contains:

(2) 80GB 7200/EIDE RAID 1 Mirror (Total working data 80GB)

I'm thinking of building a new box for the file server which contains:

(1) GB Ethernet
(6) 74GB 10,000/SATA RAID 1 Mirror (Total working data 222GB)

I'm also building a target backup server for this new file server
which will contain 1 tape drive and the following to perform disk to
disk backup:

(1) GB Ethernet
(4) 250GB 7200/SATA RAID 1 Mirror (Total Backup Capacity 500GB)

I'm trying to build a file server/backup system that can handle multiple
streams of data to allow for the fastest backup. There are many small files
to be backed up. My personal preference is RAID 1. Since none of my file
server partitions are larger than 74GB, I'm not worried that a backup will
be larger than any of my backup server partitions.

My question is, how do I build these systems to get the fastest backup
possible? Should I install multiple gigabit NICs? Is there an array
controller that is specifically designed to handle multiple streams of
data? I'm aware that disk I/O is usually the cause of the bottleneck
during backup. Will moving both the server and the backup system to RAID 5
significantly increase backup speeds? Can someone make some
recommendations in terms of hardware/software?
I'm aware that the OS can also add file system overhead, so I'm electing
to try out Veritas Backup Exec (although quite frankly I've always been a
fan of Xcopy).

My current tests show that I copy approximately 1.09GB of data in
approximately 4:18 seconds with Backup Exec (verify on) over a Gigabit
connection to a 7200 RPM IDE drive. Backup Exec says that is a transfer
rate of 435MB/second. Is there a way to increase the speed of this to,
say, 1000MB/sec?
Thanks in advance for sharing any advice or experience you may have!

Michelle
 

Odie Ferrous

I won't comment on the above - it's a minefield and everyone will have
their own solution for you.
Michelle said:
My current tests show that I copy approximately 1.09GB of data in
approximately 4:18 seconds with Backup Exec (verify on) over a Gigabit
connection to a 7200 RPM IDE drive. Backup Exec says that is a transfer
rate of 435MB/second. Is there a way to increase the speed of this to,
say, 1000MB/sec?
Thanks in advance for sharing any advice or experience you may have!


Gigabit technology is something I have been looking at a lot recently,
as I need to install a powerful data stream capability across my
recovery systems.

Firstly, Gigabit is still pretty much a theoretical speed, the bandwidth
of which is curtailed mainly by cable design.

The following speeds are an indication of the maximum I was able to
achieve using short lengths of cable running between very powerful
systems. They are real-life results - not mere hypothetical maxima
taken from some optimistic marketing gunk.

Cat 5e is said to be fine for Gigabit, but you will typically get only
250 Mb per second transfer (Mb = megabit; MB = megabyte; check your post -
there are differences between GB and Gb).

With Cat 6 cable you are looking at around 400Mb per second.

Even Cat 7 cable, which promises more than 600 Mb per second, is difficult
to find.

However, I have been in touch with a company that recently re-wired a
Rolls Royce factory in the UK with Gigabit and they have apparently got
very close to 1Gb per second throughput.

They are http://www.krone.co.uk/ and are also talking about 10Gb
networks using copper cable...

They are preparing a quotation for me, but I suspect the price is going
to be horrific.



Odie
 

J. Clarke

Michelle said:
Greetings,

I currently have a Win2K file server which contains:

(2) 80GB 7200/EIDE RAID 1 Mirror (Total working data 80GB)

I'm thinking of building a new box for the file server which contains:

(1) GB Ethernet
(6) 74GB 10,000/SATA RAID 1 Mirror (Total working data 222GB)

I'm also building a target backup server for this new file server
which will contain 1 tape drive and the following to perform disk to
disk backup:

(1) GB Ethernet
(4) 250GB 7200/SATA RAID 1 Mirror (Total Backup Capacity 500GB)

I'm trying to build a file server/backup system that can handle multiple
streams of data to allow for the fastest backup. There are many small files
to be backed up. My personal preference is RAID 1. Since none of my file
server partitions are larger than 74GB, I'm not worried that a backup will
be larger than any of my backup server partitions.

My question is, how do I build these systems to get the fastest backup
possible? Should I install multiple gigabit NICs? Is there an array
controller that is specifically designed to handle multiple streams of
data? I'm aware that disk I/O is usually the cause of the bottleneck
during backup. Will moving both the server and the backup system to RAID 5
significantly increase backup speeds? Can someone make some
recommendations in terms of hardware/software?
I'm aware that the OS can also add file system overhead, so I'm electing
to try out Veritas Backup Exec (although quite frankly I've always been a
fan of Xcopy).

My current tests show that I copy approximately 1.09GB of data in
approximately 4:18 seconds with Backup Exec (verify on) over a Gigabit
connection to a 7200 RPM IDE drive. Backup Exec says that is a transfer
rate of 435MB/second. Is there a way to increase the speed of this to,
say, 1000MB/sec?
Thanks in advance for sharing any advice or experience you may have!

Something's suspicious here. 1 GB (B=byte, b=bit) of data is 8 Gb plus
overhead, which takes at least 8 seconds to transfer over 1 Gb/sec
Ethernet. If you're doing it in 4 then you're already getting 2 Gb/sec
over a 1 Gb/sec channel.

Backup Exec can do some compression--2:1 is reasonable, that might explain
how you managed to do the transfer at the rate you report.
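As a back-of-envelope check, here is that arithmetic in a few lines of
Python (a rough sketch using only the numbers quoted above; protocol
overhead is ignored):

# Time to move a payload over a gigabit link, with optional compression.
# Assumed: 1 GB = 8 Gb, wire speed of 1 Gb/s, no protocol overhead.
def transfer_seconds(payload_gb, link_gbps=1.0, compression_ratio=1.0):
    bits_sent = payload_gb * 8.0 / compression_ratio  # gigabits on the wire
    return bits_sent / link_gbps

print(transfer_seconds(1.09))                       # ~8.7 s uncompressed
print(transfer_seconds(1.09, compression_ratio=2))  # ~4.4 s with 2:1 compression

Roughly consistent with the 2:1 compression explanation.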

Multiple gigabit NICs aren't going to help you. The maximum throughput of
the PCI bus is a little over a billion bits per second. When used to
transfer data from the disk via network you're typically going to be
bottlenecked at about 400 Mb/sec by the PCI bus. To get more than that you
need to go to 64 bit 66 MHz PCI, which you will find only on server boards,
or PCI-X or PCI Express, which you will find on some workstation boards as
well as some server boards. Note that both the disk subsystem and the
network interface need to be attached via the fast bus for this to confer
benefit.
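For reference, the theoretical bus ceilings being discussed work out as
follows (a quick sketch; these are peak numbers shared by every device on
the bus, and sustained real-world throughput is typically much lower):

# Peak PCI bus bandwidth = bus width (bits/cycle) x clock (million cycles/s).
def pci_peak_mbps(width_bits, clock_mhz):
    return width_bits * clock_mhz  # megabits per second

for name, width, clock in [
    ("PCI 32-bit/33 MHz",    32,  33),
    ("PCI 64-bit/33 MHz",    64,  33),
    ("PCI 64-bit/66 MHz",    64,  66),
    ("PCI-X 64-bit/133 MHz", 64, 133),
]:
    mbps = pci_peak_mbps(width, clock)
    print(f"{name}: ~{mbps} Mb/s peak (~{mbps // 8} MB/s)")

A plain 32-bit/33 MHz slot tops out at roughly a gigabit per second in
theory, and disk traffic plus network traffic both cross that same bus,
which helps explain the ~400 Mb/s practical ceiling mentioned above.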

Once you've got a fast bus then you need to put together a disk subsystem
that can fill that pipe--that's difficult and expensive and what's going to
work is going to depend on the particular data to be transferred. Note
that RAID5 does well in reads but there's a performance hit on writes.
Your RAID1 idea has merit _if_ you have a controller smart enough to
schedule the reads over multiple drives to reduce seek time, but even so
writing to another RAID 1 you're going to be limited by the seek time on
writes.
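To put rough numbers on that trade-off, here is a very simplified
small-write model (an assumption-laden sketch: it ignores controller
write-back cache, full-stripe writes, and sequential workloads, all of
which help RAID 5 considerably):

# Simplified small-random-write cost model for mirrored pairs vs RAID 5.
# Not a benchmark -- just the classic I/O-counting argument.
def small_write_iops(per_disk_iops, n_disks, level):
    if level == "raid1":
        # each logical write hits both members of its mirror pair:
        # 2 disk I/Os per write (assumes writes spread across the pairs)
        return per_disk_iops * n_disks / 2
    if level == "raid5":
        # read old data + read old parity + write data + write parity
        return per_disk_iops * n_disks / 4
    raise ValueError(level)

print(small_write_iops(100, 6, "raid1"))  # ~300 small writes/s
print(small_write_iops(100, 6, "raid5"))  # ~150 small writes/s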

If you're using a second machine as a backup device and need fast transfers,
you might want to consider going to a clustering technology.
 

J. Clarke

Odie said:
I won't comment on the above - it's a minefield and everyone will have
their own solution for you.



Gigabit technology is something I have been looking at a lot recently,
as I need to install a powerful data stream capability across my
recovery systems.

Firstly, Gigabit is still pretty much a theoretical speed, the bandwidth
of which is curtailed mainly by cable design.

I'd like to see your source for that. The Ethernet experts don't seem to
think that the cable is an issue. However the PCI bus most assuredly is,
as is the disk subsystem. To fill a gigabit pipe you need a 64-bit 66 MHz
or better PCI bus and if you're going to sustain transfers you also need a
very heavy duty RAID system with a large number of fast drives.

Gigabit was designed to run on CAT5. CAT5E just nails down some of the
numbers that nearly all existing properly installed CAT5 already meets but
for which it was never tested.
Odie said:
The following speeds are an indication of the maximum I was able to
achieve using short lengths of cable running between very powerful
systems. They are real-life results - not mere hypothetical maxima
taken from some optimistic marketing gunk.

Cat 5e is said to be fine for Gigabit, but you will typically get only
250Mb ( Mb = megabit; MB = megabyte; check your post - there are
differences between GB and Gb ) per second transfer.

With Cat 6 cable you are looking at around 400Mb per second.

Even Cat 7 cable that promises more than 600Mb per second is difficult
to find.

Would you care to describe your test configuration and the nature of the
tests you performed? Did you first confirm that your cable did in fact
meet CAT5E channel standards? I suspect that the limitations you're seeing
are not due to the cable. Incidentally, there is no such thing as "CAT 7
cable". That's cable manufacturers' hype.
Odie said:
However, I have been in touch with a company that recently re-wired a
Rolls Royce factory in the UK with Gigabit and they have apparently got
very close to 1Gb per second throughput.

They are http://www.krone.co.uk/ and are also talking about 10Gb
networks using copper cable...

Actually, Krone has nothing much to do with that technology--all they do is
make cable and connectors and 10 gig is probably going to run on CAT5E.
 

Al Dykes

Odie Ferrous said:
I won't comment on the above - it's a minefield and everyone will have
their own solution for you.



Gigabit technology is something I have been looking at a lot recently,
as I need to install a powerful data stream capability across my
recovery systems.

Firstly, Gigabit is still pretty much a theoretical speed, the bandwidth
of which is curtailed mainly by cable design.

The following speeds are an indication of the maximum I was able to
achieve using short lengths of cable running between very powerful
systems. They are real-life results - not mere hypothetical maxima
taken from some optimistic marketing gunk.

Cat 5e is said to be fine for Gigabit, but you will typically get only
250Mb ( Mb = megabit; MB = megabyte; check your post - there are
differences between GB and Gb ) per second transfer.

With Cat 6 cable you are looking at around 400Mb per second.

Even Cat 7 cable that promises more than 600Mb per second is difficult
to find.

However, I have been in touch with a company that recently re-wired a
Rolls Royce factory in the UK with Gigabit and they have apparently got
very close to 1Gb per second throughput.

They are http://www.krone.co.uk/ and are also talking about 10Gb
networks using copper cable...

They are preparing a quotation for me, but I suspect the price is going
to be horrific.



Well, if you go faster than 6.63Gb/sec you've broken the speed record.
Actually, this is a long-distance benchmark, which has many issues that a
computer-room network doesn't have, but the article does give you an
idea of what kind of equipment is required at each end to fill the
pipe. I'm sure if you made some inquiries you could get a technical
description of the equipment involved. Big Bucks.

http://www.internetnews.com/infra/article.php/3403161
 

Odie Ferrous

J Clarke,


Oh, dear. You and reality are an oxymoron, aren't you?


J. Clarke said:
I'd like to see your source for that. The Ethernet experts don't seem to
think that the cable is an issue. However the PCI bus most assuredly is,
as is the disk subsystem. To fill a gigabit pipe you need a 64-bit 66 MHz
or better PCI bus and if you're going to sustain transfers you also need a
very heavy duty RAID system with a large number of fast drives.

Gigabit was designed to run on CAT5. CAT5E just nails down some of the
numbers that nearly all existing properly installed CAT5 already meets but
for which it was never tested.

Would you care to describe your test configuration and the nature of the
tests you performed? Did you first confirm that your cable did in fact
meet CAT5E channel standards? I suspect that the limitations you're seeing
are not due to the cable. Incidentally, there is no such thing as "CAT 7
cable". That's cable manufacturers' hype.

I have better things to do than to reply to your every point - you have
displayed a shocking lack of even basic knowledge of the standards.
And, hey, hey - your statement about cat7 cable not existing?

Have a look here

http://www.ieee802.org/3/10GBT/public/may03/sallaway_1_0503.pdf

Next time, try to work things out a little before jumping down someone's
throat.

You might want to try www.google.com and type in the box at the top
something like, "ieee cat6" and press return. You'd be amazed at what
comes up!

If you'd like more assistance, please don't hesitate to ask - you know
me, always willing to help!!



Odie
 

Peter

Can you please verify your statement:
"Backup Exec says that is a transfer rate of 435MB/second"
That does not seem right.
Maybe it should be 435MBytes/minute?
 

Eric Gisin

Odie Ferrous said:
I have better things to do than to reply to your every point - you have
displayed a shocking lack of even basic knowledge of the standards.
And, hey, hey - your statement about cat7 cable not existing?

Another idiotic Odie troll. There is no CAT7 standard, so cables do not
exist. What is the point of CAT7 if it is not backward compatible and more
expensive than fiber?

Odie Ferrous said:
Have a look here

http://www.ieee802.org/3/10GBT/public/may03/sallaway_1_0503.pdf

Next time, try to work things out a little before jumping down someone's
throat.

You might want to try www.google.com and type in the box at the top
something like, "ieee cat6" and press return. You'd be amazed at what
comes up!

Nobody mentioned CAT6, did they?
 

Folkert Rienstra

J. Clarke said:
I'd like to see your source for that. The Ethernet experts don't seem to
think that the cable is an issue. However the PCI bus most assuredly is,
as is the disk subsystem.
To fill a gigabit pipe you need a 64-bit 66 MHz or better PCI bus

Huh? 64-bit or 66-MHz should be fine.
For benchmarking, or when one of the two (network, disk subsystem) is
not on the PCI bus, even a standard 32-bit/33MHz bus should suffice.
and if you're going to sustain transfers you also need a very
heavy duty RAID system with a large number of fast drives.

Gigabit was designed to run on CAT5. CAT5E just nails down some of the
numbers that nearly all existing properly installed CAT5 already meets
but for which it was never tested.


Would you care to describe your test configuration and the nature of the
tests you performed? Did you first confirm that your cable did in fact
meet CAT5E channel standards? I suspect that the limitations you're seeing
are not due to the cable. Incidentally, there is no such thing as "CAT 7
cable". That's cable manufacturers' hype.

Which of course isn't even possible.
 

J. Clarke

Odie said:
J Clarke,


Oh, dear. You and reality are an oxymoron, aren't you?

I'm sorry, I fail to see how the letter "J." preceding the noun "Clarke"
constitutes a contradiction in terms. Or perhaps you are laboring under
the misconception that an "oxymoron" is a creature of some sort.
I have better things to do than to reply to your every point

You are the one who made assertions about the performance of various cables
and claimed to have test results to back up those assertions. But when
pressed to present enough details for someone else to be able to decide
whether your test methodology was valid instead of doing so you start
hurling insults. That would lead the unbiased observer to believe that you
did not in fact have any such test results.

- you have
displayed a shocking lack of even basic knowledge of the standards.

In what manner?
And, hey, hey - your statement about cat7 cable not existing?

Have a look here

http://www.ieee802.org/3/10GBT/public/may03/sallaway_1_0503.pdf

A year-old working group paper discussing potential solutions. The cable
standards are set by EIA/TIA, not IEEE, and EIA/TIA has not released a
standard for category 7 cable. That paper is not a standard of any kind,
it's a discussion of what _might_ go into a standard.

Yes, cable manufacturers sell cable that they call "Category 7". They sold
a lot of "Category 6" before a standard was released and they ended up
eating a lot of it when the standard came out and the cable wasn't
compliant.
Next time, try to work things out a little before jumping down someone's
throat.

Go over to comp.dcom.lans.ethernet and make the same claims you have
made here and see what happens.
You might want to try www.google.com and type in the box at the top
something like, "ieee cat6" and press return. You'd be amazed at what
comes up!

Nothing that the cable manufacturers say will amaze me. There is no such
thing as "IEEE CAT6"--the standard is set by EIA/TIA and is properly
"EIA/TIA Category 6". The Ethernet spec is a free download from the IEEE
site. If you download it and search it for "Cat 6" or any variation
thereof you will find that such cabling is not mentioned in the standard.
If you go over to the Ethernet newsgroup and ask the people who wrote the
spec what cable gigabit was designed to run on they'll tell you more or
less what I told you. The Ethernet spec defines certain electrical
properties that the cable must possess, it does not define category
numbers.
If you'd like more assistance, please don't hesitate to ask - you know
me, always willing to help!!

Then post your test configuration. Assuming of course that you actually did
conduct the tests that you claim to have conducted.
 

Michelle

J. Clarke said:
Something's suspicious here. 1 GB (B=byte, b=bit) of data is 8 Gb plus
overhead, which takes at least 8 seconds to transfer over 1 Gb/sec
Ethernet. If you're doing it in 4 then you're already getting 2 Gb/sec
over a 1 Gb/sec channel.

Backup Exec can do some compression--2:1 is reasonable, that might explain
how you managed to do the transfer at the rate you report.

Multiple gigabit NICs aren't going to help you. The maximum throughput of
the PCI bus is a little over a billion bits per second. When used to
transfer data from the disk via network you're typically going to be
bottlenecked at about 400 Mb/sec by the PCI bus. To get more than that you
need to go to 64 bit 66 MHz PCI, which you will find only on server boards,
or PCI-X or PCI Express, which you will find on some workstation boards as
well as some server boards. Note that both the disk subsystem and the
network interface need to be attached via the fast bus for this to confer
benefit.

Once you've got a fast bus then you need to put together a disk subsystem
that can fill that pipe--that's difficult and expensive and what's going to
work is going to depend on the particular data to be transferred. Note
that RAID5 does well in reads but there's a performance hit on writes.
Your RAID1 idea has merit _if_ you have a controller smart enough to
schedule the reads over multiple drives to reduce seek time, but even so
writing to another RAID 1 you're going to be limited by the seek time on
writes.

If you're using a second machine as a backup device and need fast transfers,
you might want to consider going to a clustering technology.


My apologies, I did in fact mean 435 megaBYTEs per minute and not seconds.

-Michelle
 

J. Clarke

Michelle said:
My apologies, I did in fact mean 435 megaBYTEs per minute and not seconds.

That's only 58 Mb/sec. A single gigabit connection is good for 17 times
that. Gigabit is clearly not your bottleneck. The PCI bus is good (in
practical terms) for at least 4 times that. I'd look to the storage system
for the bottleneck. Hitachi 7K250s on the fastest tracks can sustain about
500 Mb/sec, but that's assuming an optimal read pattern. Put in random
seeks and file system overhead and the like and in the real world you get a
lot less than that. RAID 0 gives you a little bit of a boost but not all
that much.

First thing to do is see how long it takes you to transfer a file from one
disk or array to another in the same machine.
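A minimal sketch of that local test (SRC and DST are placeholder paths --
point them at files on the two different arrays), which also reports the
rate in both of the units being thrown around in this thread:

# Time a local disk-to-disk copy and report MB/s and Mb/s.
# SRC and DST are hypothetical paths; substitute your own.
import os, shutil, time

SRC = r"D:\testdata\bigfile.bin"
DST = r"E:\backup\bigfile.bin"

start = time.time()
shutil.copyfile(SRC, DST)
elapsed = time.time() - start

mb = os.path.getsize(SRC) / (1024 * 1024)
print(f"{mb:.0f} MB in {elapsed:.1f} s = {mb / elapsed:.1f} MB/s "
      f"({mb * 8 / elapsed:.0f} Mb/s)")

For comparison, the 435 MB/minute figure above is about 7.25 MB/s, i.e.
roughly the 58 Mb/s mentioned.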
 

Toshi1873

Odie Ferrous said:
I won't comment on the above - it's a minefield and everyone will have
their own solution for you.



Gigabit technology is something I have been looking at a lot recently,
as I need to install a powerful data stream capability across my
recovery systems.

Gigabit is pretty much a no-brainer decision. Interface cards are cheap
and most motherboards now include a gigabit port. Existing CAT5 cabling
seems to do just fine, but new cabling should probably be 5E (I don't
think you can even buy regular CAT5 anymore).

24-port switches (you do *not* want a hub) are well
under $2000 now. 8-port workgroup switches are under
$150. If you have multiple 24-port hubs (like we did),
changing over to a star-topology with a new 24-port
switch in the center is a good start. All servers hook
directly to the switch and the workstations attach to
the outer hubs.

Even on my cheap $150 3Com switch and cheap 3Com NICs, I'm able to shove
20-30 MB/s across the switch from box A to box B. What I have found is
that even though I can do SATA to SATA inside the box at 30-50 MB/s,
going from one SATA drive to a SATA drive on another workstation will
only do 16-24 MB/s. (Network latency is the killer, combined with not
using jumbo frames.)
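One way to confirm that it's the network path (latency, frame size) rather
than the disks is to push bytes memory-to-memory between the two boxes.
A rough sketch along these lines (port and chunk sizes are arbitrary
placeholders):

# Memory-to-memory TCP throughput test, to take the disks out of the picture.
# Start "python nettest.py server" on one box, then
# "python nettest.py client <server-ip>" on the other.
import socket, sys, time

PORT = 5001                # arbitrary test port
CHUNK = 64 * 1024          # 64 KB per send
TOTAL = 512 * 1024 * 1024  # push 512 MB in total

def server():
    with socket.socket() as s:
        s.bind(("", PORT))
        s.listen(1)
        conn, _ = s.accept()
        with conn:
            received = 0
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
    print(f"received {received / 2**20:.0f} MB")

def client(host):
    payload = b"\x00" * CHUNK
    sent, start = 0, time.time()
    with socket.create_connection((host, PORT)) as s:
        while sent < TOTAL:
            s.sendall(payload)
            sent += CHUNK
    secs = time.time() - start
    print(f"{sent / 2**20:.0f} MB in {secs:.1f} s = "
          f"{sent / secs / 2**20:.1f} MB/s ({sent * 8 / secs / 1e6:.0f} Mb/s)")

if __name__ == "__main__":
    if len(sys.argv) > 2 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        server()

If this raw test runs much faster than your file copies do, the wire isn't
the bottleneck.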
 
