Analyze Backup Server

Discussion in 'Computer Hardware' started by daveg.01@gmail.com, Jan 10, 2007.

  1. Guest

    I am trying to analyze/identify the bottleneck on our new backup
    server. I was wondering if someone could recommend some performance
    counters to watch.

    Right now I am watching the "% Disk Write Time" counter, and I am averaging
    around 500 while copying a 50GB database file. It is taking almost twice as
    long to copy the file to a different server as it does to copy it to a
    different direct-attached array, yet the disk write counter is similar on
    both servers. Am I wrong to assume that the physical disks are the
    bottleneck (given the high counter readings)?
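
    One minimal way to capture the relevant counters during a copy, sketched
    here with Windows' built-in typeperf tool driven from Python; the counter
    paths, sample interval, and output file name are just examples, not a
    prescription:

    # Rough sketch: log disk and network counters to a CSV while the copy runs.
    # Assumes Windows' built-in typeperf.exe is on PATH; counter paths are examples.
    import subprocess

    counters = [
        r"\PhysicalDisk(_Total)\% Disk Write Time",
        r"\PhysicalDisk(_Total)\Disk Write Bytes/sec",
        r"\PhysicalDisk(_Total)\Avg. Disk Write Queue Length",
        r"\Network Interface(*)\Bytes Total/sec",
    ]

    # 5-second samples, 720 samples (one hour), written to backup_copy.csv.
    subprocess.run(
        ["typeperf"] + counters + ["-si", "5", "-sc", "720", "-o", "backup_copy.csv", "-y"],
        check=True,
    )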


    Does anyone know any general guidelines on real-world throughput for the
    following? What kinds of things should I be looking at and/or asking our IT
    department about?

    Gigabit dedicated network
    Perc4 (I read on Dell's site that it is 320MB/s, which translates to about
    1.1TB/hour)
    RAID 5 using six 300GB disks (10,000 RPM)
    Other bottleneck candidates?


    Server Configuration
    Windows Server 2003
    Dell PowerEdge 2850
    Perc4 - RAID 5
    PowerVault 200s
    Gigabit network
     
    daveg.01@gmail.com, Jan 10, 2007
    #1

  2. Paul Guest

    With RAID 5, five of the six disks' worth of bandwidth carries data, so at
    60MB/sec per disk that is about 300MB/sec sustained at the disk level. You
    might verify that the disks are actually negotiating full Ultra320; a
    cabling or termination problem can reduce the transfer rate.

    The PERC4 listed here has a 64-bit, 66MHz bus interface, and as long as it
    is fully utilizing the bus, the bus should not be a limit. A bus segment
    can sometimes be slowed by the presence of a slower card, so you may want
    to check the card configuration and bus structure of your server.

    http://www.jjwei.com/shop/item.asp?itemid=78

    But your Gigabit Ethernet only does 125MB/sec theoretical in one
    direction, so a single transaction with your server will be limited
    by the network. A local transfer on the server itself could go faster.
    Plenty of little things to check.
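
    As a rough back-of-the-envelope check of where the ceiling sits, something
    like the following comparison can be worked out; the per-disk 60MB/sec
    figure is the assumption from above, and all values are theoretical maxima,
    not measured numbers:

    # Theoretical ceilings along the copy path; the per-disk rate is an assumption.
    DATA_DISKS = 5                      # RAID 5 over 6 disks: one disk's worth is parity
    PER_DISK_MB_S = 60                  # assumed sustained rate per 10k RPM U320 disk

    array_mb_s = DATA_DISKS * PER_DISK_MB_S   # ~300 MB/s at the disk level
    scsi_bus_mb_s = 320                       # Ultra320 channel limit
    pci_bus_mb_s = 64 / 8 * 66                # 64-bit, 66MHz slot: ~528 MB/s
    gige_mb_s = 1000 / 8                      # Gigabit Ethernet: 125 MB/s one way

    bottleneck = min(array_mb_s, scsi_bus_mb_s, pci_bus_mb_s, gige_mb_s)
    print(f"network copy capped at ~{bottleneck:.0f} MB/s")   # -> 125 MB/s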

    Paul
     
    Paul, Jan 10, 2007
    #2

  3. Dave Guest

    Assuming everything is configured properly, how long should I expect it to
    take to transfer a 50GB file to the backup server?

    If the perfmon counter "% Disk Write Time" goes above 100, should I assume
    that the physical disks are the bottleneck?


     
    Dave, Jan 11, 2007
    #3
  4. Paul Guest

    If the transfer is over the network, the limit is 125MB/sec, so
    transferring a 50GB file takes at least 400 seconds.
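
    Worked out, with a lower rate included for comparison (the 90MB/s figure is
    only an assumption about typical real-world Gigabit file-copy throughput):

    # Transfer-time estimate for the 50GB file at wire speed and at an assumed
    # real-world rate.
    FILE_MB = 50 * 1000                 # 50 GB expressed in MB

    for rate_mb_s in (125, 90):         # 125 = theoretical GigE; 90 = assumed realistic
        seconds = FILE_MB / rate_mb_s
        print(f"{rate_mb_s} MB/s -> {seconds:.0f} s (~{seconds / 60:.1f} min)")
    # 125 MB/s -> 400 s (~6.7 min); 90 MB/s -> 556 s (~9.3 min)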

    Paul

     
    Paul, Jan 14, 2007
    #4
  5. kony Guest

    I'd be surprised if a real transfer averages over 90MB/s, especially with
    TCP/IP overhead.
     
    kony, Jan 14, 2007
    #5
