Defrag won't defragment the page file partition

B

bg

As the title says. I can't remember what fixed this last time. I did the usual chkdsk
and cleanup of files. The partition is dedicated to the pagefile, temp files
and other junk.

Before booting into command-line mode, is there a way to deal with this from
within Windows? After defragging, it still says the partition needs defragmenting.
 
L

Leonard Grey

Windows' built-in defragger doesn't defragment the page file. If
defragmenting the page file is important to you, install a third-party
defragger.
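For what it's worth, you can at least see how badly the pagefile itself is fragmented without moving anything. A rough sketch (assuming Python is installed and XP's defrag.exe is run from an administrator prompt; "-a -v" asks for a verbose analysis only, and on XP that report normally includes a "Pagefile fragmentation" section):

import subprocess

# Analysis only: "-a" analyzes the volume, "-v" asks for the verbose report.
report = subprocess.run(
    ["defrag", "C:", "-a", "-v"],
    capture_output=True, text=True
).stdout

# Print just the lines mentioning the pagefile or fragment counts.
for line in report.splitlines():
    if "pagefile" in line.lower() or "fragment" in line.lower():
        print(line.strip())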
 
G

Gerry

Leonard

Defragmenting the pagefile is a waste of time.


--



Gerry
~~~~
FCA
Stourport, England
Enquire, plan and execute
~~~~~~~~~~~~~~~~~~~
 
J

John John (MVP)

That is not true. If you have a dynamic pagefile it may be difficult to
keep it in a single fragment, but a static pagefile kept in a single fragment
will give faster virtual memory access.

How to configure paging files for optimization and recovery in Windows XP
http://support.microsoft.com/kb/314482
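The article covers the settings themselves; to see what is currently configured on a machine, the same information lives in the PagingFiles registry value. A minimal, read-only sketch (Python; the "path initial maximum" layout in MB is the usual format of that value, though system-managed entries may omit the sizes):

import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")  # REG_MULTI_SZ list

for entry in paging_files:
    parts = entry.split()
    if len(parts) >= 3:
        path, initial, maximum = parts[0], int(parts[1]), int(parts[2])
        kind = "static" if initial == maximum else "dynamic"
        print(f"{path}: {initial}-{maximum} MB ({kind})")
    else:
        # An entry without sizes generally means a system-managed pagefile.
        print(f"{entry}: system managed")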

John
 
L

Leonard Grey

Oops, sorry, I hit the wrong key. There's nothing more I have to
contribute to this thread.
 
D

db.·.. >

you might try installing
a boot-time utility
called PageDefrag
from microsoft.com

--

db·´¯`·...¸><)))º>
DatabaseBen, Retired Professional
- Systems Analyst
- Database Developer
- Accountancy
- Veteran of the Armed Forces
 
G

Gerry

John

My feeling is that the contents of a contiguous pagefile can be
fragmented. Am I mistaken? I have a single contiguous pagefile, which I
have never defragmented.

--
Regards.

Gerry
~~~~
FCA
Stourport, England
Enquire, plan and execute
~~~~~~~~~~~~~~~~~~~
 
J

John John (MVP)

I don't know about that, but my understanding of pagefile fragmentation
is that the file is in several segments on the volume. If you have a
contiguous pagefile then it isn't fragmented.

John
 
G

Gerry

John

I appreciate that you can interpret the KB article that way, but it makes
no sense given my experience. My pagefile is on my C: partition, set with
a minimum and maximum size of 2,000 MB.

--
Regards.

Gerry
~~~~
FCA
Stourport, England
Enquire, plan and execute
~~~~~~~~~~~~~~~~~~~
 
J

John John (MVP)

What makes no sense? I don't understand what you are trying to say.
You have a static pagefile set at 2 GB; it is in one contiguous block on
the disk; it isn't fragmented; you don't need to defragment it. When a
pagefile is created, or when its size is increased, it can become
fragmented, and pagefile fragmentation can negatively affect virtual
memory performance. I don't know what else you are interpreting from
the article.

John
 
G

Gerry

John

We have reached the point we reached the last time we discussed a
similar point. It's probably pointless to pursue the subject further.

I will merely say that the pagefile holds many pieces of data, some
connected and some unrelated. Data will be added to or removed from the
pagefile as required. Any connected data within the pagefile may be
spread through the pagefile; where it ends up depends on where there is
free space at the time it is written. If there is free contiguous space
within the pagefile, the data written will be contiguous.

--



Gerry
~~~~
FCA
Stourport, England
Enquire, plan and execute
~~~~~~~~~~~~~~~~~~~
 
J

John John (MVP)

Gerry said:
John

We have reached the point we reached the last time we discussed a
similar point. It's probably pointless to pursue the subject further.

I will merely say that the pagefile holds many pieces of data, some
connected and some unrelated. Data will be added to or removed from the
pagefile as required. Any connected data within the pagefile may be
spread through the pagefile; where it ends up depends on where there is
free space at the time it is written. If there is free contiguous space
within the pagefile, the data written will be contiguous.

It certainly won't be contiguous if the data is in two different
segments at two completely different locations on the disk; that is what
is meant by pagefile fragmentation! Where the data is placed within a
contiguous pagefile is not an issue and is not what is meant by pagefile
fragmentation, nor is it what defragmenting a pagefile does. The memory
manager has the map of the pagefile and knows where to get or store
data within the file; the performance issue arises from disk seeks and
writes in different areas of the disk.

When the pagefile is allowed to dynamically grow some of this
fragmentation is normal and temporary (until you reboot), if you
occasionally have 2 or 3 segments due to dynamic expansion you
shouldn't worry too much about it. But if you often have a heavily
fragmented pagefile then you should take steps to prevent fragmentation.
If you have a static pagefile you should have it in one segment, as
you have it on your system.

John
 
G

Gerry

John

We seem to be debating two points, namely what fragmentation is and what
a boot-time defragmenter does. I agree that a non-contiguous file is
fragmented. It is problematic because, being non-contiguous, it causes
free disk space to be fragmented into smaller pockets across the volume.
This means that new files being written are more likely to fragment
immediately than they would if free space were contiguous. I sense you
grudgingly accept that the contents of a contiguous pagefile can still be
fragmented, but maintain that this is not what is generally meant when
users discuss what a Disk Defragmenter does. Of course, if you have a
pagefile that is not fixed in size, i.e. Windows managed, you will not
have free space within the pagefile to fragment. Fragmentation within the
pagefile is probably only of academic interest, as I suspect there is no
utility available which is capable of defragmenting the data. In any
event the data held within the pagefile is constantly changing.

It is a commonly held view that defragmenting the pagefile is pointless
because of the constantly changing nature of the data held in the
pagefile. You make the point: "When the pagefile is allowed to
dynamically grow some of this fragmentation is normal and temporary
(until you reboot), if you occasionally have 2 or 3 segments due to
dynamic expansion you shouldn't worry too much about it." Several
points here. The pagefile can also contract. I am unclear what you mean
by linking "temporary" with rebooting: are you saying that the data in
the pagefile is abandoned on shutdown and new data is loaded the next
time the computer is booted, or something else? You also choose the word
"normal", which is questionable given that the user is given choices on
how to manage the pagefile.

I have reservations regarding your comment about not worrying about two
or three segments. If they were at a fixed location, then fine. However,
you also use the expression "dynamic expansion". Where the free disk
space is less than 60%, this will eventually lead to accelerated
fragmentation as the free disk space fills over time. My preference
is to set a pagefile with the minimum and maximum the same. Whilst there
is 60% free space you can create a contiguous pagefile; once the free
space falls below 60% it becomes increasingly difficult to create a
single contiguous pagefile. If the amount of free disk space is marginal
you can temporarily increase it (turning off System Restore is one way)
to enable a contiguous pagefile to be created, and then turn System
Restore back on.
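As a rough illustration of that 60% rule of thumb (a minimal sketch, assuming Python is available and that the pagefile lives on C:):

import shutil

# Check how much of the volume holding the pagefile is free.
usage = shutil.disk_usage("C:\\")
free_pct = usage.free / usage.total * 100
print(f"Free space on C: {free_pct:.1f}%")

if free_pct < 60:
    print("Below 60% free: a single contiguous pagefile may be hard to "
          "create; consider freeing space first (e.g. temporarily "
          "disabling System Restore).")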

I appreciate that my views on managing the pagefile do not coincide with
the more commonly held preference to allow Windows to manage it. My
approach does away with the need for a third-party defragmenter.

Regards.

Gerry
~~~~
FCA
Stourport, England
Enquire, plan and execute
~~~~~~~~~~~~~~~~~~~
 
B

bg

Hey, that was excellent. Thanks a lot! The recommendation must be triggered
once there are 2 fragments, keeping the volume flagged (I only have 2 fragments
in the pagefile; everything else on the same partition is at zero).
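If you want to double-check that fragment count outside the defrag report, here is a rough sketch assuming the Sysinternals contig.exe is on the PATH; "contig -a" only analyses the named file and reports its fragment count, it does not try to move the pagefile:

import subprocess

# Analysis only: reports the number of fragments in the named file.
result = subprocess.run(
    ["contig.exe", "-a", r"C:\pagefile.sys"],
    capture_output=True, text=True
)
print(result.stdout)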
 
J

John John (MVP)

Gerry said:
John

We seem to be debating two points, namely what fragmentation is and what
a boot-time defragmenter does. I agree that a non-contiguous file is
fragmented. It is problematic because, being non-contiguous, it causes
free disk space to be fragmented into smaller pockets across the volume.
This means that new files being written are more likely to fragment
immediately than they would if free space were contiguous. I sense you
grudgingly accept that the contents of a contiguous pagefile can still be
fragmented, but maintain that this is not what is generally meant when
users discuss what a Disk Defragmenter does.

Are you running a VAX/VMS machine? There is little of interest in
internal pagefile fragmentation on Windows operating systems; the
pagefile is not a sequentially accessed file, so any effects caused by
internal file fragmentation will be minimal.

Of course, if you have a pagefile that is not fixed in size, i.e. Windows
managed, you will not have free space within the pagefile to fragment.
Fragmentation within the pagefile is probably only of academic interest,
as I suspect there is no utility available which is capable of
defragmenting the data. In any event the data held within the pagefile is
constantly changing.

It is a commonly held view that defragmenting the pagefile is pointless
because of the constantly changing nature of the data held in the
pagefile.

That is *your* view, not one held by Microsoft. When dealing with
Windows operating systems no one, or hardly anyone, refers to pagefile
fragmentation as internal file fragmentation; it always means that
the file is not in one contiguous segment on the disk, which can have a
negative impact on virtual memory performance. Your view that
defragmenting the pagefile is a waste of time is based on internal file
fragmentation alone; it is not based on Microsoft's definition of
pagefile fragmentation, and it is an erroneous statement when speaking
of pagefiles that are scattered about in multiple segments on the disk.

You make the point: "When the pagefile is allowed to
dynamically grow some of this fragmentation is normal and temporary
(until you reboot), if you occasionally have 2 or 3 segments due to
dynamic expansion you shouldn't worry too much about it." Several
points here. The pagefile can also contract. I am unclear what you mean
by linking "temporary" with rebooting: are you saying that the data in
the pagefile is abandoned on shutdown and new data is loaded the next
time the computer is booted, or something else?

The location of committed or reserved memory addresses in the pagefile
is identified by the page-table entry. When the computer is rebooted the
page-table entries are cleared and the pagefile contents are no longer
valid; for all intents and purposes, to the Virtual Memory Manager the
pagefile is as good as empty when Windows is rebooted. New table entries
will be given to any new memory addresses that are backed by the
pagefile, and the pagefile will simply be overwritten with new frames.


You also choose the word "normal", which is questionable given that the
user is given choices on how to manage the pagefile.

Why is it questionable? A system managed pagefile can grow or shrink as
needed; when the file grows, if there is no adjacent free space for the
growth, the file will become fragmented on the disk. That is perfectly
normal. Whether or not performance is affected by this fragmentation
depends entirely on whether or not the whole file is used. The Virtual
Memory Manager will only use what pagefile space it needs; the rest is
just unused disk space. Just because a system managed file at one time
grew to an unusually large size doesn't mean that it will always need
that much space. If you constantly have a fragmented pagefile then you
should take appropriate steps to prevent the fragmentation; if it is an
occasional occurrence with a system managed pagefile it may not be that
big a deal, and the next time you reboot Windows it won't matter at all,
since the minimum system managed pagefile size may be all that the VMM
needs. Even if the file were in three segments it doesn't necessarily
mean that all three segments will be used; there may only be need for
the first segment, which is the same as having one large static pagefile.
You could make the pagefile 4 GB if you wanted to and it wouldn't affect
performance at all; the Virtual Memory Manager would only use what it
needs out of the 4 GB file, and the rest would simply be unused disk
space, unavailable to all but the VMM.

I have reservations regarding your comment about not worrying about two
or three segments. If they were at a fixed location, then fine. However,
you also use the expression "dynamic expansion". Where the free disk
space is less than 60%, this will eventually lead to accelerated
fragmentation as the free disk space fills over time. My preference
is to set a pagefile with the minimum and maximum the same. Whilst there
is 60% free space you can create a contiguous pagefile; once the free
space falls below 60% it becomes increasingly difficult to create a
single contiguous pagefile. If the amount of free disk space is marginal
you can temporarily increase it (turning off System Restore is one way)
to enable a contiguous pagefile to be created, and then turn System
Restore back on.

So, if you are concerned about these two or three segments, why are you
saying that defragmenting the pagefile is a waste of time? Personally I
always prefer and set a static pagefile on my machines, and I always
insist on having it in one segment on the disk; I make sure that it
remains in a contiguous location. But I also know that many people have
a system managed pagefile that is sometimes in two or three segments on
the disk, and that they seldom use all of the pagefile, so for them it
isn't much of a performance problem. Some of these people are not
technically inclined and don't care to know the technical details of how
the pagefile works or what pagefile fragmentation is; they just want to
use their computers and not be bothered with these things. For them a
System Managed pagefile is often the best option, and the odd time that
the file might need additional room may be infrequent and not a
significant performance hit. I am just being pragmatic here; for some
folks this is not that big a deal.


I appreciate that my views on managing the pagefile do not coincide with
the more commonly held preference to allow Windows to manage it.

Yes, and I hold the same view; nowhere in my previous posts did I
contradict that or indicate otherwise. I don't disagree with your
approach at all; I agree with you, and I think that having a properly
sized pagefile is preferable. But we do see posts here from people who
get Virtual Memory warning messages and who are completely baffled by
them; for them a System Managed pagefile is preferable to a static
pagefile that is too small, since, as you well know, running out of
pagefile resources will cause the operating system to crash. While I
agree with you on the use of a static pagefile, I disagree with your
statement that defragmenting the pagefile is a waste of time: it isn't,
if the pagefile is fragmented and you use the file to a significant
extent.

My approach does away with the need for a third-party defragmenter.

That doesn't mean that others don't have a use for it. Sometimes people
change their pagefile from System Managed to static, or they increase
the size of the pagefile, and they end up with the file in several
segments. For them, using a utility like PageDefrag might be easier than
deleting and recreating the pagefile, plus PageDefrag can do it in one
reboot instead of two.
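For completeness, the usual route for making the pagefile static is System Properties > Advanced > Performance > Virtual memory, but the same setting is held in the PagingFiles registry value. A minimal sketch (Python run as Administrator; the 2048 MB figures are purely illustrative, and a reboot is needed before the change takes effect):

import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
# Illustrative only: one static 2048 MB pagefile on C: (initial == maximum).
entries = [r"C:\pagefile.sys 2048 2048"]

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "PagingFiles", 0, winreg.REG_MULTI_SZ, entries)

print("PagingFiles updated; reboot for the new size to apply.")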

Regards,

John
 
D

db.·.. >

yw.

however, if you
keep the pagefile
all by itself in the
partition, it won't
become fragmented.

--

db·´¯`·...¸><)))º>
DatabaseBen, Retired Professional
- Systems Analyst
- Database Developer
- Accountancy
- Veteran of the Armed Forces
 
