One HDTach problem solved, one to go...

Ohaya

Hi,

As some of you may have noted, I've been struggling to determine why
I'm getting high CPU Utilization and relatively low burst read speed
with HDTach on one Win2K installation (my original one) vs. a clean
Win2K installation.

Well, tonight, I stumbled across the solution to at least half of the
puzzle, the high CPU Utilization.

This is going to be somewhat complicated ...

Ok, with large drives, I'm in the habit of partitioning the drive.
Typically, I've been partitioning my drives as follows:

C: - Windows (FAT32)
D: - Swap (FAT16)
E: - Data

I've been following this scheme for a while, since at least Win98, because
having a small FAT16 partition for the swap/pagefile was recommended, and
when I moved to Win2K, I kept doing the same thing.

If you recall, the problem that I was encountering was that when I got a
larger drive, I copied my original Win2K installation over to the new drive,
and I was finding that I was getting high (~45%+) CPU Utilization, and
relatively low (see below) burst read speed (~70MB/s).

To try to track these problems down, I created another partition on the
drive so that I could do a clean Win2K installation, and swap between my
original Win2K installation and the clean Win2K installation. With the clean
Win2K installation, I was finding relatively low (~ 5 - 15%) CPU
Utilization, and relatively high burst speeds (~80MB/s).
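
For anyone who wants to sanity-check numbers like these outside of HDTach
itself, here is a minimal sketch of the same kind of measurement (assuming
Python with the third-party psutil package; "test.bin" is a stand-in for
any large existing file). It times a plain sequential read and reports
rough CPU use over the same interval:

    import time
    import psutil

    def sequential_read(path, block=1024 * 1024):
        psutil.cpu_percent(None)             # prime the interval counter
        total = 0
        start = time.perf_counter()
        with open(path, "rb", buffering=0) as f:
            while True:
                chunk = f.read(block)
                if not chunk:
                    break
                total += len(chunk)
        elapsed = time.perf_counter() - start
        return total / elapsed / 1e6, psutil.cpu_percent(None)

    mb_per_s, cpu = sequential_read("test.bin")
    print("%.1f MB/s at roughly %.0f%% CPU" % (mb_per_s, cpu))

Buffered reads through the OS cache won't match HDTach's raw-device
numbers exactly, but a large enough file keeps the comparison honest.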

As I was testing tonight, I was testing with the clean Win2K installation,
and I just happened to notice that if I put the swap/pagefile on the D:
partition, I'd get high (~45%) CPU Utilization whereas if I put my
swap/pagefile on my C: partition, I'd get low CPU Utilization. This was with
the clean Win2K installation!!

Aha!

So, I deleted the Swap partition, and re-created it as a FAT32 partition.
Once I did that, I tested some more, first with the clean Win2K
installation.

What I found was that with the Swap partition being formatted as FAT32, I
could put my swap/pagefile on either the C: or D: partition, and I'd still
get the lower (~5 - 15%) CPU Utilization!!!

I then booted into my original Win2K installation, and did the same test,
and again, I found that I was now getting the lower CPU Utilization, even
with my original Win2K installation!!

So, the bottom line is that it appears that the formatting of the drive
and/or partitions is affecting the CPU Utilization results that HDTach
produces.

I don't quite know why, but it definitely appears this way from my testing.

Ok, so it appears (to me at least) that I've resolved the CPU Utilization
issue with HDTach.

I still am puzzled about the burst read speed difference between my original
Win2K installation and the clean Win2K installation, but I kind of think
that there's something (either a hidden service or driver or something) that
is causing the burst read speed on my original Win2K installation to be
lower.

But that's a battle for another day ...

Anyway, I hope that all of this helps others who may be trying to figure out
what's going on with their HDTach CPU Utilization results!!

Jim
 
Al Dykes

Hi,

Ohaya said:
As some of you may have noted, I've been struggling to determine why
I'm getting high CPU Utilization and relatively low burst read speed
with HDTach on one Win2K installation (my original one) vs. a clean
Win2K installation.

Well, tonight, I stumbled across the solution to at least half of the
puzzle, the high CPU Utilization.

This is going to be somewhat complicated ...

Ok, with large drives, I'm in the habit of partitioning the drive.
Typically, I've been partitioning my drives as follows:

C: - Windows (FAT32)
D: - Swap (FAT16)
E: - Data

I've been following this scheme for a while, since at least Win98, because
having a small FAT16 partition for the swap/pagefile was recommended, and
when I moved to Win2K, I kept doing the same thing.

I gotta ask. How much CPU and memory do you have on your system?
Just curious.

A separate swap partition is a Win98 thing. For an NT system with a
single disk it has never been a recommendation, and I imagine it's a
small performance loss since the heads have to seek farther, on
average, to swap. Making the OS manage buffers for multiple
partitions and file systems has a cost in memory and cycles too, as
you have discovered. In addition, you have absolutely no idea how much
swap space you need until you run your apps, and repartitioning is a
PITA. Use NTFS for everything unless there is a clear reason not to.

Let the swap file live in the C partition and let the OS decide how to
optimize buffers and I/O. At least since NT, the operating system is
much smarter than most humans in knowing how to optimize itself, and
it can change from moment to moment. Bill Knows Best. ;-). In this
case, really.

(For me, the old folklore is to set the swap file to a fixed size in
Control Panel and then do a standalone defrag to get it into one
fragment. You have to do that just once. People have told me that for
XP I should let the OS manage the size and fragments of the swap
file. I learned this when I found that our developers' machines had
_hundreds_ of swap file fragments after heavy use. Things
change. Comments, anyone?)

If you want to understand what your system is doing, learn to use
Performance Monitor (perfmon.exe). It will graph everything your
system is doing, in great detail if you ask it. If you don't use
perfmon you are really shooting in the dark as far as understanding
what the bottleneck is in your system.

What you are looking for in PERFMON is concurrency, and to identify the
bottleneck that prevents it. You want to keep the CPU and all your
disks busy at the same time, WHEN THE SYSTEM IS RUNNING YOUR
APPLICATION. Benchmarks are nice but they aren't why most of us bought
a PC. When my system is pumping, perfmon shows that I can be reading
a big graphic (sometimes 60MB, each, for me) from C, writing data and
swapping to Z simultaneously, and leave CPU cycles for the GUI
interaction, and playing internet radio without it breaking up.
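
For readers who want something quicker than building up a perfmon counter
set, a rough stand-in can be sketched in a few lines (assuming Python with
the third-party psutil package): sample overall CPU and per-disk byte
counters once a second and watch whether the CPU and the disks are busy
at the same time.

    import time
    import psutil

    # Sample CPU and per-disk I/O once a second, perfmon-style.
    prev = psutil.disk_io_counters(perdisk=True)
    while True:
        cpu = psutil.cpu_percent(interval=1.0)  # blocks for the 1 s sample
        now = psutil.disk_io_counters(perdisk=True)
        stats = " | ".join(
            "%s r:%.1fMB w:%.1fMB" % (
                name,
                (now[name].read_bytes - prev[name].read_bytes) / 1e6,
                (now[name].write_bytes - prev[name].write_bytes) / 1e6,
            )
            for name in now
        )
        print("CPU %5.1f%% | %s" % (cpu, stats))
        prev = now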
 
Ohaya

Al,

Thanks for the comments. Mine are interspersed below.

Jim


...snip..
I gotta ask. How much CPU and memory do you have on your system?
Just curious.

CPU is a Duron 1.2 in this case, with 256MB memory. OS is Win2K Pro SP4
right now.

A separate swap partition is a Win98 thing. For an NT system with a
single disk it has never been a recommendation, and I imagine it's a
small performance loss since the heads have to seek farther, on
average, to swap. Making the OS manage buffers for multiple
partitions and file systems has a cost in memory and cycles too, as
you have discovered. In addition, you have absolutely no idea how much
swap space you need until you run your apps, and repartitioning is a
PITA.

I'll think about that, but you're right in the sense that based on my tests
last night, with the swap partition as FAT32, it didn't seem to make a
difference whether the swap/pagefile was on the C: or D: partition.

I thought that one of the reasons for having the swap/pagefile on a
separate partition is that, since its size isn't static, it can grow on the
separate (clean) partition. Also, when you do a defrag, the swap file shows
up as a 'system' area, which is not moveable by defrag, so if I leave it on
my C: partition and then defrag the C: partition, doesn't defrag have to
"work around" this unmoveable area?

Use NTFS for everything unless there is a clear reason not to.

I know about NTFS, but when the system gets into trouble, and I've had to
save my data files off of an NTFS partition, it's difficult. That's the
main reason why I keep all my partitions as FAT32 on my machines.

Not looking for a debate here :). Believe me, I've had several occasions
where a system crashed, and I simply had to be able to save my data from the
crashed system. I had used NTFSDOS, but then lost all of the non-short
(long) file names, which created another major problem in trying to figure
out which file was which.

With a FAT32 partition, at worst, I move the drive over to another machine,
and I can use Windows to copy stuff off there, while retaining long file
names.

Jim
 
Al Dykes

Ohaya said:
Al,

Thanks for the comments. Mine are interspersed below.

Jim


..snip..


CPU is a Duron 1.2 in this case, with 256MB memory. OS is Win2K Pro SP4
right now.


That's enough memory that you are not paying the CPU and I/O cost of
swapping.

I'll think about that, but you're right in the sense that based on my tests
last night, with the swap partition as FAT32, it didn't seem to make a
difference whether the swap/pagefile was on the C: or D: partition.

I thought that one of the reasons for having the separate swap/pagefile on a
separate partition is that, since its size isn't static, it can grow on the
separate (clean) partition. Also, when you do defrag, the swap file shows
up a 'system' area, which is not moveable by defrag, so if I leave it on my
C: partition and then defrag the C: partition, doesn't defrag have to "work
around" this unmoveable area?

1. You can "fix" the size and fragmentation of the SWAP file
on C. You need to pay for a defrag tool. I like PerfectDisk
(www.raxco.com).

2. Even if you don't buy a defrag tool, I still maintain that keeping
the swap file on C is the right thing to do. Defrag as soon as you can
during setup and then expand the swap file to what you need and be
happy.

I know about NTFS, but when the system gets into trouble, and I've had to
save my data files off of an NTFS partition, it's difficult. That's the
main reason why I keep all my partitions as FAT32 on my machines.

What's so hard about NTFS recovery? You can do the same thing with
NTFS. Or boot a Knoppix/Linux CD and read the NTFS data that way. I
have done both many, many times. I've been working with NTFS on PCs
and servers since about 1992 and I have never been unable to recover
NTFS data unless the disk hardware has failed, in which case the type
of file system doesn't matter.

If you're losing file systems that often you have to look at what you
are doing, or the hardware you're using. Something's wrong.

If you want/need to run FAT32, then make everything FAT32. Keep swap
in your system partition on a single disk system. A second data
partition is fine, but don't put swap there.

The probability of spontaneous "file system corruption" is right down
there with the risk of getting hit by lightning unless you are playing
with low-level disk tools. Data loss happens, and the only defense is
BACKUP BACKUP BACKUP, which you should be doing anyway.

Not looking for a debate here :). Believe me, I've had several occasions
where a system crashed, and I simply had to be able to save my data from the
crashed system. I had used NTFSDOS, but then lost all of the non-short
(long) file names, which created another major problem in trying to figure
out which file was which.

I never needed to play with NTFSDOS, and these days Knoppix fills that
function, anyway.

There was another thread like this on Usenet recently.
 
Odie

All this palaver for what?

There are *very* few programs these days that force you to run a swap
file. Older versions of Photoshop, Photoshop Elements, etc.

Do yourself a favour and have at least 512MB of RAM installed and run
***WITHOUT*** a swapfile. Much quicker.

And, if you have 768MB or more, get yourself a virtual RAM DISK program
and run the swapfile in the RAM disk. Programs like Photoshop
absolutely fly along.

Odie
 
Rod Speed

Ohaya said:
As some of you may have noted, I've been struggling to determine
why I'm getting high CPU Utilization and
relatively low burst read speed with HDTach on one Win2K
installation (my original one) vs. a clean Win2K installation.
Well, tonight, I stumbled across the solution to at
least half of the puzzle, the high CPU Utilization.
This is going to be somewhat complicated ...
Ok, with large drives, I'm in the habit of partitioning the drive.
Typically, I've been partitioning my drives as follows:
C: - Windows (FAT32)
D: - Swap (FAT16)
E: - Data

That's not a good idea because it produces more head movement
if the swap file is used much other than at boot time, and if you have
enough physical RAM so the swap file isn't used much except at boot
time, there isn't any point in having a separate partition for it.

It can produce a measurable improvement in performance
if the swap file is on a separate physical drive on a separate
controller if it's used much other than at boot time, but it
makes a lot more sense today to just have more physical
RAM so it doesn't get used much except at boot time.

I've been following this scheme for a while, since at least Win98, because
having a small FAT16 partition for the swap/pagefile was recommended,

Nope, not when it's on the same physical drive as where
most of the head activity is outside of boot time.

and when I moved to Win2K, I kept doing the same thing.
If you recall, the problem that I was encountering was that when I got
a larger drive, I copied my original Win2K installation over to the new
drive, and I was finding that I was getting high (~45%+) CPU Utilization,
and relatively low (see below) burst read speed (~70MB/s).
To try to track these problems down, I created another partition on the
drive so that I could do a clean Win2K installation, and swap between
my original Win2K installation and the clean Win2K installation. With the
clean Win2K installation, I was finding relatively low (~ 5 - 15%) CPU
Utilization, and relatively high burst speeds (~80MB/s).
As I was testing tonight, I was testing with the clean Win2K
installation, and I just happened to notice that if I put the
swap/pagefile on the D: partition, I'd get high (~45%) CPU
Utilization whereas if I put my swap/pagefile on my C: partition,
I'd get low CPU Utilization. This was with the clean Win2K installation!!

Likely some quirk of HDTach. It'd be worth trying some other benchmarks.

So, I deleted the Swap partition, and re-created
it as a FAT32 partition. Once I did that, I tested
some more, first with the clean Win2K installation.
What I found was that with the Swap partition being formatted
as FAT32, I could put my swap/pagefile on either the C: or D:
partition, and I'd still get the lower (~5 - 15%) CPU Utilization!!!
I then booted into my original Win2K installation, and did the
same test, and again, I found that I was now getting the lower
CPU Utilization, even with my original Win2K installation!!
So, the bottom line is that it appears that the
formatting of the drive and/or partitions is affecting
the CPU Utilization results that HDTach produces.
I don't quite know why, but it definitely appears this way from my testing.

Yeah, certainly looks that way.

Ok, so it appears (to me at least) that I've
resolved the CPU Utilization issue with HDTach.
I still am puzzled about the burst read speed difference between
my original Win2K installation and the clean Win2K installation,
but I kind of think that there's something (either a hidden service
or driver or something) that is causing the burst read speed on
my original Win2K installation to be lower.

Likely.

But that's a battle for another day ...

And the difference in the burst rate isn't that great, and the burst
rate isn't something a modern OS should be affected by anyway,
since it's essentially just the movement of data between the tiny
on-drive cache and the system memory. That is likely to be
completely dominated by the hard drive physics almost always
with a decent modern OS, which won't be mindlessly thrashing
stuff between the tiny on-drive cache and system memory much.
Anyway, I hope that all of this helps others who may be trying to
figure out what's going on with their HDTach CPU Utilization results!!

Thanks for posting the washup.
 
Eric Gisin

Read the post, oh clueless one. He is running Win 2K, and cannot disable
paging.

In any case, Win 2K and 256MB RAM does little paging unless you load up the
bloatware. I get by with a 64MB pagefile.
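
Before settling on a pagefile size, it's worth checking how much of it
your workload actually touches. Perfmon's paging counters are the
period-correct way; on a modern system, a quick check might look like
this (a sketch assuming Python with the third-party psutil package):

    import psutil

    # How much of the pagefile/swap is actually in use right now?
    vm = psutil.virtual_memory()
    sw = psutil.swap_memory()
    print("RAM:  %d MB total, %d MB available"
          % (vm.total / 2**20, vm.available / 2**20))
    print("Swap: %d MB total, %d MB used (%.1f%%)"
          % (sw.total / 2**20, sw.used / 2**20, sw.percent))

If the swap "used" figure stays near zero under your real workload, a
small fixed pagefile is plenty.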
 
Al Dykes

Odie said:
All this palaver for what?

There are *very* few programs these days that force you to run a swap
file. Older versions of Photoshop, Photoshop Elements, etc.

Do yourself a favour and have at least 512MB of RAM installed and run
***WITHOUT*** a swapfile. Much quicker.

And, if you have 768MB or more, get yourself a virtual RAM DISK program
and run the swapfile in the RAM disk. Programs like Photoshop
absolutely fly along.

Oh, so '80s. Early '80s. I'm sure your Win 3.0 system screams.

Swapping on a RAM disk. What's wrong with this picture?

I'm assuming Odie forgot a smiley so I'll put one here:

;-)
 
Carl Farrington

Odie said:
All this palaver for what?

There are *very* few programs these days that force you to run a swap
file. Older versions of Photoshop, Photoshop Elements, etc.

Do yourself a favour and have at least 512MB of RAM installed and run
***WITHOUT*** a swapfile. Much quicker.

And, if you have 768MB or more, get yourself a virtual RAM DISK
program and run the swapfile in the RAM disk. Programs like Photoshop
absolutely fly along.

Odie

As Al says, I've never heard anything as backward as setting aside RAM
for use as swap space.
 
Ohaya

Al Dykes said:
2. Even if you don't buy a defrag tool, I still maintain that keeping
the swap file on C is the right thing to do. Defrag as soon as you can
during setup and then expand the swap file to what you need and be
happy.

Al,

I may have missed the point in your last sentence above, i.e., expanding the
swap file after setup and then leaving it. That may be something I'd do,
since, assuming I made it large enough, it'd never have to "grow".

I'll do some testing (ungh, something else to test :)!).

Thanks,
Jim
 
Ohaya

Rod Speed said:
It can produce a measurable improvement in performance
if the swap file is on a separate physical drive on a separate
controller if it's used much other than at boot time, but it
makes a lot more sense today to just have more physical
RAM so it doesn't get used much except at boot time.

Well I have the old 30GB drive, but there's no way I'm going to put it back
into this machine because of the whining :)!

Rod Speed said:
Likely some quirk of HDTach. It'd be worth trying some other benchmarks.

I think that you may be right above (quirk of HDTach), as earlier, while I
was having the high CPU Utilization in HDTach, I ran WinBench, and was
getting 1.something% CPU utilization. Confused the $*%)_(# out of me :).
 
Eric Gisin

There is some evidence that IDE hard drives have cache algorithms that are
FAT-aware, and give preference to caching areas that Windows is deficient
in. This would explain what you are seeing.

The burst rate is simply HD Tach repeatedly reading a small area, which
should reside in the cache.
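
To see what that kind of burst measurement looks like, here is a minimal
sketch (Python; "test.bin" is a hypothetical file name) that re-reads the
same small region over and over and reports MB/s. Without bypassing the
OS cache (FILE_FLAG_NO_BUFFERING on Windows, O_DIRECT on Linux), the
repeats are served from system memory, so this shows a best case rather
than the drive interface's true burst rate:

    import os
    import time

    def burst_read(path, block_kb=64, repeats=2000):
        # Re-read the same small region; after the first pass it should
        # be served from cache, which is what a burst-rate test measures.
        block = block_kb * 1024
        fd = os.open(path, os.O_RDONLY | getattr(os, "O_BINARY", 0))
        try:
            start = time.perf_counter()
            for _ in range(repeats):
                os.lseek(fd, 0, os.SEEK_SET)
                os.read(fd, block)
            elapsed = time.perf_counter() - start
        finally:
            os.close(fd)
        return block * repeats / elapsed / 1e6  # MB/s

    print("%.1f MB/s" % burst_read("test.bin"))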

Personally I would move the pagefile to C: and put %TEMP% on D:. Creating
files is slightly more efficient on FAT16, and you can quick-format it on
startup.

Ohaya said:
Hi,

As some of you may have noted, I've been struggling to determine why
I'm getting high CPU Utilization and relatively low burst read speed
with HDTach on one Win2K installation (my original one) vs. a clean
Win2K installation.

Well, tonight, I stumbled across the solution to at least half of the
puzzle, the high CPU Utilization.

This is going to be somewhat complicated ...

Ok, with large drives, I'm in the habit of partitioning the drive.
Typically, I've been partitioning my drives as follows:

C: - Windows (FAT32)
D: - Swap (FAT16)
E: - Data

I've been following this scheme for a while, since at least Win98, because
having a small FAT16 partition for the swap/pagefile was recommended, and
when I moved to Win2K, I kept doing the same thing.

If you recall, the problem that I was encountering was that when I got a
larger drive, I copied my original Win2K installation over to the new drive,
and I was finding that I was getting high (~45%+) CPU Utilization, and
relatively low (see below) burst read speed (~70MB/s).
[snippy snip]
 
Al Dykes

Ohaya said:
Al,

I may have missed the point in your last sentence above, i.e., expanding the
swap file after setup and then leaving it. That may be something I'd do,
since, assuming I made it large enough, it'd never have to "grow".

I'll do some testing (ungh, something else to test :)!).


My idea is that a system installation copies and deletes so many files
that it leaves the disk fragmented to some extent, with one swap
segment, for the sake of discussion. Doing a defrag makes big chunks
of free space.

Then I make what I believe is the right size for the intended use of
the system, and set it as both the starting and MAX size in Control
Panel. When the system reboots it writes a pagefile.sys that has a
minimum # of fragments and will never grow.
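
On the NT family, that Control Panel setting lands in the registry, so it
can also be inspected programmatically. A small read-only sketch (using
Python's standard winreg module; the key path is the standard NT/2000/XP
location, but treat the details as illustrative):

    import winreg

    # The pagefile configuration lives under this key on NT/2000/XP.
    KEY = (r"SYSTEM\CurrentControlSet\Control"
           r"\Session Manager\Memory Management")

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        paging, _type = winreg.QueryValueEx(key, "PagingFiles")

    # Each REG_MULTI_SZ entry looks like "C:\pagefile.sys 512 512":
    # path, initial size (MB), maximum size (MB). Equal numbers give
    # the fixed-size, never-growing pagefile described above.
    for entry in paging:
        print(entry)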

The only way to find out what a good number for the swap size is, is to
use perfmon while running your applications. You're making much too
much of this. Setting up a single-disk PC is a piece of cake. Make it
NTFS, and don't go nuts over the swap file until you've become an
expert with perfmon. Through all of this you haven't said what
applications you use and what your goal is, other than file system
reliability.

IMHO fragmentation is much less of an issue when the disk is less
than, say, 60% full. That's easy with today's disks. There's always an
empty block nearby.

Some of us have set up PCs that someone uses head-down, all day, to
make money (for somebody). The software is huge, compared to the
amount of disk, memory, and CPU we can buy at the time. We had to
take care of details to get acceptable performance from the PC and
reliability for the application. Today I'm amazed that a cheap PC
being OK for running Photoshop for amateur use is a no-brainer. The
things I know of today that need a maxed-out PC are video rendering,
games, and high-productivity Photoshop production.
 
Al Dykes

Eric Gisin said:
There is some evidence that IDE hard drives have cache algorithms that are
FAT-aware, and give preference to caching areas that Windows is deficient
in. This would explain what you are seeing.

If you said FAT32 it might be possible. FAT16 hasn't been the file system
of choice for anything but floppy disks for a long time.

The burst rate is simply HD Tach repeatedly reading a small area, which
should reside in the cache.

Personally I would move the pagefile to C: and put %TEMP% on D:. Creating
files is slightly more efficient on FAT16, and you can quick-format it on
startup.

Ohaya said:
Hi,

As some of you may have noted, I've been struggling to determine why
I'm getting high CPU Utilization and relatively low burst read speed
with HDTach on one Win2K installation (my original one) vs. a clean
Win2K installation.

Well, tonight, I stumbled across the solution to at least half of the
puzzle, the high CPU Utilization.

This is going to be somewhat complicated ...

Ok, with large drives, I'm in the habit of partitioning the drive.
Typically, I've been partitioning my drives as follows:

C: - Windows (FAT32)
D: - Swap (FAT16)
E: - Data

I've been following this scheme for a while, since at least Win98, because
having a small FAT16 partition for the swap/pagefile was recommended, and
when I moved to Win2K, I kept doing the same thing.

If you recall, the problem that I was encountering was that when I got a
larger drive, I copied my original Win2K installation over to the new drive,
and I was finding that I was getting high (~45%+) CPU Utilization, and
relatively low (see below) burst read speed (~70MB/s).
[snippy snip]
 
Eric Gisin

Al Dykes said:
If you said FAT32 it might be possible. FAT16 hasn't been the file system
of choice for anything but floppy disks for a long time.

But they started doing this in the FAT16 days, and I doubt they removed that
code from the firmware. Since they don't test their firmware with all possible
partition combos, it is likely Ohaya simply has a combo that exploits an
obscure bug in the firmware.
 
Odie

I think all of you are merely highlighting your incompetence in this
area.

Perhaps you should try what I am suggesting before trying to ridicule
it.

Does any one of you have the guts to try the RAM disk? It may well
surprise you.


Odie
 
S.Heenan

Ohaya said:
Well I have the old 30GB drive, but there's no way I'm going to put
it back into this machine because of the whining :)!



I think that you may be right above (quirk of HDTach), as earlier,
while I was having the high CPU Utilization in HDTach, I ran
WinBench, and was getting 1.something% CPU utilization. Confused the
$*%)_(# out of me :).


I installed Windows 2000 SP4 over the weekend. Using older Nforce2 IDE
drivers, CPU utilization under HD Tach 2.70 was ~75%. Changed the drivers to
new Forceware 3.13 and utilization went down to ~25%. Exactly the same
results under Windows XP Pro. Results were identical before and after
defragging the drive. All partitions formatted with NTFS. The Task Manager
shows 100% CPU usage for 8-9 seconds during each run of HD Tach.
 
Eric Gisin

The only case where a RAM disk makes sense is if you have way too much RAM
that's never used.

You bought 1GB when you never used more than 512MB, right?
 
Odie

Eric said:
The only case where a RAM disk makes sense is if you have way too much RAM
that's never used.

You bought 1GB when you never used more than 512MB, right?

On the contrary, Windows XP seems a little happier with 1024MB. As part
of an experiment (dual-channel memory speed) I *did* install 1GB - made
up of 2 x 512MB PC3200 modules. And I will readily admit that there is
probably no significant difference with day-to-day usage. However, when
I was editing a 160MB WMF file recently, the 1GB made a very noticeable
difference.

I still stand by the argument of having a swapfile in RAM. Try it -
then come back and comment.

Odie
 
Folkert Rienstra

Carl Farrington said:
as Al says, I've never heard anything as backward as setting aside RAM, for
use as swap space.

There are applications around that will complain when there isn't a
swapfile. If the workaround is to use one on a RAM disk to satisfy those,
so be it. On the other hand, the swapfile may not actually be used, as the
app wants to *see* lots of memory but not actually *use* it.
 
