max transfer update

Geoff

Hello

I have just installed 2 TP-Link 1000Mbps TG-3269 NICs - replacing the 2
ADDON NIC1000Rv2 NICs.

From the Windows 7 PC to the XP Pro PC I now get 20MB/sec, and
from the XP Pro PC to the Windows 7 PC (both using Windows Explorer on
the Windows 7 PC) I get 36.9 MB/sec.

Both figures are much better than before (using auto-negotiate) but not
earth-shattering!

Cheers

Geoff
 
Paul

Geoff said:
Hello

I have just installed 2 TP-Link 1000Mbps TG-3269 NICs - replacing the 2
ADDON NIC1000Rv2 NICs.

From the Windows 7 PC to the XP Pro PC I now get 20MB/sec, and
from the XP Pro PC to the Windows 7 PC (both using Windows Explorer on
the Windows 7 PC) I get 36.9 MB/sec.

Both figures are much better than before (using auto-negotiate) but not
earth-shattering!

Cheers

Geoff

That's more like it.

Now, you need to run some benchmarks, to check for PCI bus issues.
The machine where I only get 70MB/sec best case uses a VIA chipset.
The other machines, with the better numbers, have less dodgy
combinations of hardware. And the VIA chipset machine is using
the same TG-3269 you're using. (They're the cheapest
cards I could buy here.) If you have bad PCI bus performance,
you might see a number like the 70MB/sec I saw.

Looking at the RCP protocol and doing some simple-minded hand
calculations, I figure you could get 119MB/sec out of the link's
theoretical max of 125MB/sec using RCP. I managed to get 117MB/sec
with the hardware that works the best, so I'm reasonably happy
with that result. But when using other protocols, the rate drops.
I was surprised when my FTP tests didn't give me good results;
my past experience was that I could transfer faster with FTP
than with Windows file sharing. (Using Windows XP Pro, I can
install the IIS web server, and there is also an FTP server
hiding in the installation options. That's how I can set up
an FTP test case. I don't leave IIS running longer than
necessary, and it's been removed again.)
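For what it's worth, a rough version of that hand calculation, assuming
standard 1500-byte frames and no TCP options:

   Wire rate:       1Gbit/s = 125MB/sec
   Bytes per frame: 1538 on the wire (1500 payload + 18 Ethernet
                    header/FCS + 8 preamble + 12 interframe gap)
   TCP payload:     1500 - 40 bytes of IP/TCP headers = 1460
   Best case:       125 x 1460 / 1538 = ~118.7MB/sec

TCP options such as timestamps shave a little more off, which may be
why a measured 117MB/sec is about as good as it gets in practice.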

The rate can drop a lot if there is "packet fragmentation", where
the network path is required to figure out what size of packet
will fit. We had some problems with "work at home" from stuff
like that, where the link was encrypted and the MTU of
the encrypted path was smaller than normal. File sharing
performance was "close to zero", but the files were very secure :-(
If you can't download the files, I guess that makes them
secure.
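One quick way to check for an MTU problem on a path is a
don't-fragment ping (standard Windows ping switches; the address here
is just an example):

   ping -f -l 1472 192.168.1.20

The -f switch sets the don't-fragment bit and -l sets the payload
size; 1472 bytes of payload plus 28 bytes of IP/ICMP header exactly
fills a 1500-byte MTU. If you get "Packet needs to be fragmented but
DF set", step the size down until it goes through, and you've found
the path MTU.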

Paul
 
Geoff

Paul said:
That's more like it.

Now, you need to run some benchmarks, to check for PCI bus issues.
The machine where I only get 70MB/sec best case uses a VIA chipset.
The other machines, with the better numbers, have less dodgy
combinations of hardware. And the VIA chipset machine is using
the same TG-3269 you're using. (They're the cheapest
cards I could buy here.) If you have bad PCI bus performance,
you might see a number like the 70MB/sec I saw.

Paul,

I have installed FileZilla server on the XP Pro PC and the client on
the Windows 7 PC.

From Windows 7 to XP Pro I get 31MB/sec;
from XP Pro to Windows 7 I get 45MB/sec.

Using iperf I get similar results.
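A typical iperf run for this kind of test looks something like this
(iperf 2 syntax; the address is just an example):

   iperf -s                          (on the receiving PC)
   iperf -c 192.168.1.20 -t 30 -f M  (on the sending PC)

-t 30 transfers for 30 seconds and -f M reports the result in MB/sec;
swap the two roles to test the other direction.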

Any idea why it's quicker from XP Pro to Windows 7 than the reverse?

I have selected 1Gbps for each NIC at the moment.

My figures are a long way from even 400MB/sec !?

Cheers

Geoff
 
Geoff

Geoff said:
[snip]
My figures are a long way from even 400MB/sec !?

Oops! I should have written 400Mbps - and 45MB/sec is 360Mbps, which is
getting close - so perhaps I ought to be happy?!

Geoff
 
Paul

Geoff said:
Paul,

I have installed FileZilla server on the XP Pro PC and the client on
the Windows 7 PC.

From Windows 7 to XP Pro I get 31MB/sec;
from XP Pro to Windows 7 I get 45MB/sec.

Using iperf I get similar results.

Any idea why quicker from XP Pro to Windows 7 than the reverse?

I have selected 1Gbps for each NIC at the moment.

My figures are a long way from even 400MB/sec !?

Cheers

Geoff

I've seen the same kind of thing here - namely, a difference in the
transfer rate in one direction versus the other. The nice thing
about these test cases is that no two of them give the same results.

*******

About all I can suggest at this point is examining the Device Manager
options for the NIC entry:

IPV4 checksum offload (I presume that's done in hardware)

Large Send Offload IPV4
Large Send Offload V2 (IPV4)
Large Send Offload V2 (IPV6)

You might try disabling the last three. Apparently, the features
are a function of the NDIS revision, so Microsoft plays a part
in defining those things. One web page I could find claimed that
enabling those could result in "chunking" of data transfers,
and perhaps more ACKs and smaller transmission windows as a
result.

It probably isn't your PCI bus. Even my crappy VIA situation managed
70MB/sec. There is one ancient AMD chipset where the 32-bit PCI bus
was crippled at 25MB/sec instead of the more normal 110-120MB/sec,
but I doubt you're using that :)

You can slow down a PCI bus by changing the burst size. It
was termed the "latency timer", but the setting has been
removed from modern BIOSes. At one time, the default might have been a
setting of 32. People wishing to inflate a benchmark test would
crank it to 64 or larger, the idea being that higher values promote
PCI unfairness. A larger value allows a longer burst, and gets you
closer to 120MB/sec or so.
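With the setting gone from the BIOS, a Linux LiveCD can at least show
you the value - a sketch, assuming the pciutils tools are on the CD
(the device address is just an example, and setpci takes the value
in hex, so 40 here means 64):

   lspci -v | grep -i latency          (shows each device's latency timer)
   setpci -s 02:00.0 latency_timer=40  (writes the value to one device)

Check man setpci before trying the write; reading with lspci is the
safe part.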

I had one motherboard years ago, where you had to tune that one
*very* carefully, to get good system operation. I spent hours
playing with that one. If you set the setting too low, the PC just
*crawled*. That wasn't exactly a pleasant motherboard to play with,
because it barely worked. It probably had a Pentium 3 processor or
the like. I think there was only one latency setting, that made
the sound work properly, and I could still use the disk. (Back
then, everything ran off PCI.)

Back when I was testing Win2K, it was the Win2K protocol stack that
limited performance to around 40MB/sec. Both of your OSes should be
able to do better than that.

So either it's a PCI bus issue, or it's one of those Device
Manager NIC options. Apparently, the offload settings can cause
really low transfer rates, and your transfer rates aren't that
bad.

Paul
 
Paul

Geoff said:
Oops! I should have written 400Mbps - and 45MB/sec is 360Mbps, which is
getting close - so perhaps I ought to be happy?!

Geoff

You should be able to do better than the 70MB/sec I got on the
VIA chipset motherboard.

If the OSes you were testing were both Win2K, I'd tell you to stop. But
there is still hope...

Paul
 
John McGaw

Paul said:
I've seen the same kind of thing here - namely, a difference in the
transfer rate in one direction versus the other.
[snip]

I've noticed just recently that throughput on file copying is dependent on
more than the network. I have gigabit NICs on all of my machines and
noticed last week that I can get bursts of copy speed (standard Windows
file sharing) pushing toward the theoretical limit, but only when I'm
copying to two different machines.

Example: I had just finished editing a video sized about 900MB and, as
usual, offloaded it from the SSD on my work machine to my HTPC, where I
could view it on the big flat screen, and onto the server in the basement
for backup purposes. I started the copy to one machine and noticed that
the speed was jumping around 20-40MB/s, and then without thinking I
started the second copy before the first had completed. At that point I
saw that the downstream speed was spiking up around 90MB/s.

Neither destination machine would accept data as quickly as my i7+SSD
machine could spit it out, presumably because they have relatively slower
processors and 2TB 'green' spinning drives for storage, but together they
managed to bring the output of the i7 machine up to levels I've never seen
before. That means that, at least for dumping data down the pipe, this
machine is certainly up to the task, and to me that looks as if the
restriction is on the receiving/storing side.
 
Geoff

Paul said:
About all I can suggest at this point is examining the Device Manager
options for the NIC entry.

Paul,

I have been playing around with these settings but no speed
improvement so far ...

Cheers

Geoff
 
Geoff

John said:
I've noticed just recently that throughput on file copying is dependent on
more than the network. I have gigabit NICs on all of my machines and
noticed last week that I can get bursts of copy speed (standard Windows
file sharing) pushing toward the theoretical limit, but only when I'm
copying to two different machines. Started the copy to one machine and
noticed that the speed was jumping around 20-40MB/s and then without
thinking I started the second copy before the first had completed.
[snip]

John,

I have seen the speed start at 40MB/sec and then quickly fall to
20MB/sec.

I only have 2 PCs, so cannot try the above!

Cheers

Geoff
 
Paul

John said:
I've noticed just recently that throughput on file copying is dependent
on more than the network.
[snip]
That means that, at least for dumping data down the pipe, this machine
is certainly up to the task, and to me that looks as if the restriction
is on the receiving/storing side.

If you've got enough RAM on the machine, you can test with a RAMDisk
as a storage target. I've been using 1GB RAMDisks for my testing,
because my machines have 2GB, 3GB, and 4GB installed RAM (I tested
with four different computers, and two of them only have 2GB).

http://memory.dataram.com/products-and-services/software/ramdisk

When I was doing testing with Linux as the OS, I used LiveCDs,
and they happen to mount /tmp on RAM, which effectively results
in the same thing (a 1GB sized RAMDisk).
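If you want a repeatable test file to push through such a RAMDisk,
rather than hunting for a big enough video, Windows can manufacture
one (standard fsutil syntax; the drive letter and size are just
examples, and it needs an administrator prompt):

   fsutil file createnew R:\testfile.bin 734003200

That makes a 700MB file almost instantly, which you can then share
and drag between machines like any other file.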

I was expecting my test results to be all over the place, and
so far, I haven't been disappointed.

Paul
 
Geoff

Paul

I received the following suggestions from a Microsoft Partners Forum.

Method 1 - when I ran the fix, I was told it was not applicable to my
computer.

Method 2 - no change in speeds.

Method 3 - perhaps a very small increase ...

Any comments on the suggestions?

The alternative advice is to pay MS some $99 for email support or $259
for talking to someone ...

Cheers

Geoff

Here, I would like to provide some general methods to optimize the
TCP/IP transfer speeds.

Method 1, you can install the following hotfix on the Win 7 machine.

The TCP receive window autotuning feature does not work correctly in
Windows Server 2008 R2 or in Windows 7

http://support.microsoft.com/kb/983528/en-us


Method 2, run the following commands to optimize the TCP/IP
connections.

1. netsh int tcp set global chimney=disabled
2. netsh int tcp set global rss=disabled
3. netsh int tcp set global netdma=disabled
4. netsh int tcp set global congestionprovider=none
5. netsh int tcp set global autotuninglevel=disabled
6. netsh int ip set global taskoffload=disabled



Method 3, run the commands below to disable SMB2 on the Win 7 machine,
to force Win 7 to use SMB1 to communicate with the XP machine.

1. sc config lanmanworkstation depend= bowser/mrxsmb10/nsi
2. sc config mrxsmb20 start= disabled
 
Paul

Geoff said:
I received the following suggestions from a Microsoft Partners Forum.

Method 1 - when I ran the fix, I was told it was not applicable to my
computer.

Method 2 - no change in speeds.

Method 3 - perhaps a very small increase ...

Any comments on the suggestions?
[snip]

They're good suggestions, but it's possible the effect would
be more evident if the delay*bandwidth product were larger.
In a LAN environment, and through your switch, the delay is
close to zero. If you were sending at GbE speeds half way
across the countryside, some of the tuning options would make
more of a difference (things involving receive windows and
the like).
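The arithmetic is simple enough (the round-trip times here are just
ballpark figures):

   LAN through a switch: 125MB/sec x 0.0002 sec RTT = ~25KB in flight
   Long-haul link:       125MB/sec x 0.05 sec RTT   = ~6MB in flight

Even a modest receive window covers the LAN case, which is why the
window-related tuning matters far more over distance.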

The "taskoffload=disabled" might be similar in some way, to changing
the settings in the NIC properties in Device Manager. I'm not sure
though, that there is a one to one mapping. Apparently, using
commands like that, you can also list the properties that
are available to adjust.
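For example, the current global TCP settings can be listed with:

   netsh int tcp show global

and I believe "netsh int ip show offload" reports the offload state,
though I haven't checked that on both OSes.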

What are the motherboard(s) involved? Can you list the make
and model number of the motherboard in each computer?

The only solid lead I have here so far, is a difference in PCI
bus behavior as a root cause.

As I've had success with RCP (rsh-client, rsh-server, edit
.rhosts to allow another computer to connect), I'd give that
a try and see what kind of transfer rate you can manage. At
least, what I was seeing in Wireshark seemed to make sense.
I used Linux LiveCDs for that testing, as Windows only
has one half of an RCP solution.
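The setup is minimal once the rsh packages are installed - a sketch,
with the host names and user invented for the example:

   # on the receiving end, permit the sending host and user
   echo "sender-pc geoff" >> ~/.rhosts

   # on the sending end, push the test file across
   rcp /tmp/testfile.bin receiver-pc:/tmp/

The rsh server usually has to be enabled through inetd or xinetd on
the receiving side, and rsh has no security to speak of, so it's
strictly a LAN-testing tool.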

Why do I want to do max rate testing? To prove the hardware
is capable. If you can never, under any circumstances, get more
than 40MB/sec, then *maybe* it's hardware. Now that I've seen
117MB/sec here, at least three of my four capable NICs tested
have managed full rate, and without jumbo frames.
The odd result is the TG-3269 on the PCI bus.

P4C800-E - GbE on CSA 266MB/sec bus
Laptop - GbE on PCI Express x1 (250MB/sec + bidirectional bus)
Good Desktop - GbE (Marvell chip) on PCI Express x1 (ditto)

Bad Desktop - GbE TG-3269 on 32-bit PCI bus, top speed 70MB/sec
out of 110-120MB/sec potential PCI bus performance.

Paul
 
Geoff

Paul,

You asked:
"What are the motherboard(s) involved? Can you list the make
and model number of the motherboard in each computer?"

1. Windows 7 motherboard is a

Gigabyte GA-8IPE1000-G v4 (says Everest Ultimate)

2. The XP Pro has

Intel D845GLAD AAA86713-206 (says Belarc)

Could the fact that the hard drive on the Windows 7 PC is a nearly new
SATA Seagate Barracuda 1TB 3Gbps, whereas the hard drive on the XP Pro
is an IDE MDT 250GB MD02500-BJBW-RO (which I guess has a lower write
capability, though I have not found any figure for it), explain why the
transfer to the Windows 7 PC is 2x faster than the reverse?

Cheers

Geoff
 
Paul

Geoff said:
Paul,


1. Windows 7 motherboard is a

Gigabyte GA-8IPE1000-G v4 (says Everest Ultimate)

http://ee.gigabyte.com/products/page/mb/ga-8ipe1000-g_4x

ICH5, 266MB/sec hub bus, SATA 150MB/sec bridged to hub.
PCI bus also bridged to hub. 110MB/sec flowing upwards
from the PCI card, followed by 110MB/sec downward to
hard drive is possible. As far as I know, the 266MB/sec
bus is half duplex (and works best with burst transfers).

Using a RAMDisk for the storage device, is an alternative,
and would be practical if the machine has 2GB of memory.
You could give that a try, or switch to iperf/ntttcp testing.

2. The XP Pro has

Intel D845GLAD AAA86713-206 (says Belarc)

Could the fact that the hard drive on the Windows 7 PC is a nearly new
SATA Seagate Barracuda 1TB 3Gbps, whereas the hard drive on the XP Pro
is an IDE MDT 250GB MD02500-BJBW-RO (which I guess has a lower write
capability, though I have not found any figure for it), explain why the
transfer to the Windows 7 PC is 2x faster than the reverse?

Cheers

Geoff

MDT re-certifies drives. It doesn't say who makes the drives.

http://www.mdtglobal.com/Welcome/

Running the benchmark in HDTune should give you a transfer curve.
The free version does a read-only benchmark. If a drive has
bad sectors, that can cause the transfer rate to sink
even further.

http://www.hdtune.com/files/hdtune_255.exe

Intel chipsets support UDMA100 on a Southbridge IDE cable.
Due to how the write strobe is generated, write rates are
inferior to read: write could be 88MB/sec, rather than 100MB/sec.
If the head-to-platter rate is less than that (as could be
the case on older drives), then that takes precedence. I
have plenty of drives here that only do 65MB/sec best case,
so the stinky Intel cable performance doesn't matter - it's
the disk that is responsible.

The RAMDisk on the other hand, is blazing fast :)

This is captured on the computer I'm typing on. 2GB of
RAMDisk on a 4GB machine, used for this test.

http://img196.imageshack.us/img196/8694/hdtunedataram2gbabove.gif

Your Gigabyte motherboard can probably take 4GB of RAM (4x1GB).

D845GLAD - two DIMM slots, four PCI slots. According to Crucial, it can
take 2x1GB PC2700. Socket S478, FSB400, perhaps a 2.6GHz or
2.8GHz P4 max, as that's as fast as FSB400 processors go.

http://www.xbitlabs.com/images/mainboards/i845g-i845gl/i845gl.jpg

845GL datasheet. Hub interface on 845GL is 266MB/sec, and PCI
will be bridged off that. ICH4 Southbridge should be as good
as ICH5, except it is missing SATA ports. Same 100MB/sec read,
88.9 MB/sec write on the IDE cable max.

http://download.intel.com/design/chipsets/datashts/29074602.pdf

ICH4. 88.9MB/sec IDE write limit is on page 170.

http://developer.intel.com/Assets/PDF/datasheet/290744.pdf

Bench your MDT hard drive. I think the answer is there...
It might give you an excuse for the 40MB/sec you got.
If your 845 has max RAM, you can split the RAM and use
half for a RAMDisk, and retest.

http://memory.dataram.com/products-and-services/software/ramdisk

Paul
 
Geoff

Paul said:
Running the benchmark in HDTune should give you a transfer curve.
The free version does a read-only benchmark.
[snip]
Bench your MDT hard drive. I think the answer is there...

I used hdtunepro_461_tria.exe and for the

MDT I get

min transfer speed 31MB/sec
max 58MB/sec

With the Seagate Barracuda I get

min 107MB/sec
max 125MB/sec

These are the transfer rates - are these what you need?

Does the write option overwrite current data on the HDD?!

Geoff
 
Paul

Geoff said:
I used hdtunepro_461_tria.exe and for the

MDT I get

min transfer speed 31MB/sec
max 58MB/sec

With the Seagate Barracuda I get

min 107MB/sec
max 125MB/sec

These are the transfer rates - are these what you need?

Does the write option overwrite current data on the HDD?!

Geoff

You don't want or need the write option. That's why the
free version of HDTune is good enough - you just want
a rough idea of what "class" of drive you own. (As you
speculate, if a benchmark has a write option, you'd want
to know whether it is destructive or not.)

Well, the reason you did this benchmark, is to see
if the disk drive is fast enough to give good network
copy speed.

And the MDT is not.

The benchmark shape is a curve, and your results range
from 58MB/sec to 31MB/sec over the surface of the platter.
You said you got a transfer rate over the network of
40MB/sec best case. If the place the file is to be read
or written is in the middle of the disk, I can see the
disk being the limiting factor.

That's why you first want to set up a network test
condition, that relies less on a disk. For convenience,
I used RAMDisks for storage, so I could have the freedom
to test regular protocols. And a 1GB sized RAMDisk is
all my computers could have in common, so that's the
size I used. (One computer could have had a bigger
RAM disk, but then that wouldn't be a good candidate
for the smaller computer. I do RAMDisk to RAMDisk transfers
over the network, to eliminate storage speed as an issue.)

Certainly, doing real-world test cases, like hard drive
to hard drive transfers over the network, will tell you
the final result. But it doesn't tell you which part of
the setup is the limiting factor.

You can benchmark the drives. The Seagate end looks good,
and you'll likely hit 100MB/sec on that end.

Using network tests like iperf or ntttcp is a way to test
the "part in the middle": your network connection.
But any benchmark utility like that, which requires user
tuning, may make it harder to understand the results
and what is really happening.
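Since your rates differ by direction, it's worth running the
network-only test both ways in one go. iperf can do that (version 2
syntax; the address is just an example):

   iperf -s                        (on one machine)
   iperf -c 192.168.1.20 -r -f M   (on the other)

The -r option runs the transfer in each direction in turn, so any
asymmetry shows up without either disk being involved.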

*******

To "fix" the 845GL, the easiest solution is

1) Buy another Seagate, like the one you just benched.

2) Your 845 motherboard only has IDE. You purchase an
IDE to SATA adapter.

This plugs into the back of a SATA drive. This is the one
I use for experiments. The output is a 40 pin IDE connector,
suitable for plugging to a ribbon cable. Using this,
converts your Seagate, into an IDE drive.

http://ca.startech.com/product/IDE2SAT-25in-and-35in-40-Pin-Male-IDE-to-SATA-Adapter-Converter

Using (1) and (2), you'll get to the 88.9MB/sec write level
as per the Intel datasheet. That's the best I can do for you.

The reason I can't do better, is the PCI bus. If you buy a
SATA add-in card for your 845, it sits on the PCI bus. The
NIC card sits on the PCI bus. The traffic from the two
adds together. The aggregate traffic on the PCI might
be 120MB/sec best case. Splitting that in two, giving
60MB/sec to the NIC and 60MB/sec to the hard drive during
a network transfer, is worse than using the SATA adapter
solution above.

If you use a SATA adapter on the IDE cable, and a PCI NIC,
the 266MB/sec hub supports both of them. You can draw
88.9MB/sec from the IDE interface, 88.9MB/sec on the NIC
over the PCI bus, and the total of ~180MB/sec or so,
fits within the 266MB/sec hub limit. By "spreading the load"
around, you end up with the potential for 88.9MB/sec.

Obviously, this only works if your network-specific testing
identifies that the network (running some protocol) can
hit above 88.9MB/sec. If the NIC is actually limited in some
way to 40MB/sec, or you run out of CPU cycles, then installing
the fast disk purely for this purpose would be a waste of time.
So before buying a new disk, you *still* need to test just the
network portion. What I recommend, is a protocol that
allows you to check with Wireshark, that the protocol is
working as expected. (A good test case, will have lots of
1K+ sized packets going in only one direction, without a lot
of other traffic mixed in. Using Wireshark, you can see
how "pure" the test case is.)

Even in my RCP test case, the packet pattern didn't look perfect.
By using Wireshark, I could see why selecting Jumbo Frames
was failing to go faster. For some reason, the packets being
sent were all different sizes, in a repeating pattern, with
too high a ratio of ACK packets. Using non-jumbo packets
led to a better-looking trace in Wireshark.

You can't run a whole bulk transfer through Wireshark,
because it will exhaust all the memory on the computer;
you don't do Wireshark protocol analysis while transferring
a 1GB test file. I did my testing with a 7MB
file, so Wireshark would gather a ~7MB sized trace, and
by doing that, only a little memory got used. Then I could
scroll through the trace, to better understand the
transmission pattern. I could probably afford to go back
and repeat my FTP testing, and use Wireshark to figure out
why it was slower than expected.
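One way to keep such a capture small is to restrict it to the two
test machines, using the command-line capture tool that ships with
Wireshark (the interface number and address are just examples):

   tshark -i 1 -f "host 192.168.1.20" -w transfer.pcap

-i picks the capture interface, -f applies a capture filter so only
traffic to or from the test machine is kept, and -w saves the raw
packets to a file you can open in the Wireshark GUI afterwards.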

Paul
 
Geoff

Paul said:
That's why you first want to set up a network test
condition, that relies less on a disk. For convenience,
I used RAMDisks for storage, so I could have the freedom
to test regular protocols.
[snip]

Many thanks for all the above Paul - I'm starting to get a clearer
picture of what is happening.

You may remember that I did have a quick try with the RAM disk idea
but my Windows 7 PC slowed down badly.

I have 2GB RAM on each machine - what would you suggest I should
allocate for the RAMDisk?

Cheers

Geoff
 
Paul

Geoff said:
Many thanks for all the above Paul - I'm starting to get a clearer
picture of what is happening.

You may remember that I did have a quick try with the RAM disk idea
but my Windows 7 PC slowed down badly.

I have 2GB RAM on each machine - what would you suggest I should
allocate for the RAMDisk?

Cheers

Geoff

I selected 1GB for my RAM Disk. The Windows 7 minimum requirement is
1GB, so that would be cutting it pretty close (2GB installed minus
1GB for RAM Disk, leaves only 1GB for the OS). On the source machine,
I copy a test file into the RAM Disk, and "share" it with the
other computers. On the destination machine, I drag and drop the
remote shared file, onto my empty RAM Disk, and then the transfer
begins. The size of the RAM Disk is set for the lowest common
denominator. The 2GB machine limits me to a 1GB RAM Disk, and that
is the size I use on the other machines as well.

Paul
 
Geoff

Paul

I have just created a 500MB RAMDisk on each machine and I get the same
transfer speeds as before...

I created an unformatted 500MB disk on each and then formatted them
using NTFS under Windows/Manage etc.

Windows 7 to XP Pro 22MB/sec

XP Pro to Windows 7 34MB/sec

Does this cast any further light?!

Cheers

Geoff
 
Paul

Geoff said:
Paul

I have just created a 500MB RAMDisk on each machine and I get the same
transfer speeds as before...

I created an unformatted 500MB disk on each and then formatted them
using NTFS under Windows/Manage etc.

Windows 7 to XP Pro 22MB/sec

XP Pro to Windows 7 34MB/sec

Does this cast any further light?!

Cheers

Geoff

What is your percentage of CPU utilization?

Is the Windows 7 machine maxed out?

I've had one test case here, where one of my machines
was maxed out on CPU, and couldn't go any faster
as a result. Many of my other test cases, seemed to
make good usage of DMA for moving data around, and
in those cases, I was seeing 10% to 30% CPU usage on
a Core2 processor. For example, on one of my 117MB/sec
runs, I've penciled next to it that CPU was 15%.
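If Task Manager is too coarse to catch it, both OSes can log the
counter from the command line (standard typeperf syntax; the output
file name is just an example):

   typeperf "\Processor(_Total)\% Processor Time" -si 1 -o cpu.csv

That samples once a second into a CSV you can look over after the
transfer finishes.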

On one of my test days, I had a similar symptom to what
you're seeing. A "fast start", followed by slowing down
after a few seconds. The next day, the identical test
was smooth from beginning to end, with no quirky
behavior. I don't know what changed.

My Windows 7 laptop only has a single-core processor,
so it isn't exactly a powerhouse. If there is any
other computing activity on there, that could easily
distort the results. If the thing had a quad core,
it might be a bit more resilient. The processor is
a 2.2GHz AMD, which, translated to a Pentium 4
equivalent, would be about the same as a 3GHz Pentium 4.

Paul
 
