Installing XP

Sam Hobbs

Ken Blake said:
By the way, what you have *open* is largely irrelevant. It's what you
are actively using that counts. An application that's open and not
being used quickly gets paged out, and takes up virtual memory, but
little or no real memory.

Excellent point and that is what I have tried to say.

The performance penalty comes about when you are constantly shuttling things in and out of the page file, because they are all in active use, and there isn't enough real memory to contain it all at once.

I think the established term is thrashing.
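
Thrashing can be illustrated with a small page-replacement simulation. This is a toy model: Windows' memory manager uses working-set trimming rather than the plain LRU policy used here, which is chosen only because it is simple to show.

```python
from collections import OrderedDict

def count_page_faults(references, num_frames):
    """Simulate LRU page replacement and count hard faults."""
    frames = OrderedDict()  # pages resident in RAM, ordered by recency
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)  # hit: mark as most recently used
        else:
            faults += 1               # hard fault: page must be read from disk
            if len(frames) >= num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = None
    return faults

# An active working set of 4 pages, touched over and over:
workload = [0, 1, 2, 3] * 100

count_page_faults(workload, num_frames=4)  # -> 4: everything fits; only cold faults
count_page_faults(workload, num_frames=3)  # -> 400: one frame short; every access faults
```

Being just one frame short of the active working set turns every access into a hard fault; that cliff is what "thrashing" names.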
 
Sam Hobbs

Ken Blake said:
2. How long it takes XP to start is inconsequential for most people.

In this context, I consider startup time to be an indication of XP's performance using the configuration it is in. Yes, it does depend on whatever else is starting, so bad performance at startup might be misleading; but if that is acknowledged as an influence, then poor performance by the startup is a possible indication of overall poor performance.

Definitely, if the poor performance only occurs during startup, then most people could live with that.
 
Bill Sharpe

Unknown said:
You are only being argumentative. Face up to facts and admit 512 is far superior to 256 megs, and in ALMOST every case a speed increase is sensed.

That's pretty much what I said originally <g>. Going from 256 MB to 512 MB does result in faster performance. However, my laptop runs fine for what I want to do with just 256 MB of RAM. And Daave seems to agree with me.

My desktop PC has 1 GB of memory and is three years newer than the laptop. Of course it's faster, and I wouldn't want to go back to minimum memory on that machine.

I haven't seen any new PCs advertised lately with less than 512 MB. And I'd want more than that if I were buying a new machine today.

Bill
 
Bill Sharpe

Unknown said:
That of course is YOU personally. I power up my computer AFTER coffee, and a long bootup time is annoying.

Geez, have a second cup of coffee and relax...

Bill
 
Ken Blake, MVP

Excellent point and that is what I have tried to say.


Good, glad we agree.

I think the established term is thrashing.



Yes, it is. I usually try to avoid using technical terms in the
newsgroups, because not everyone understands what they mean.
 
Ken Blake, MVP

In this context, I consider startup time to be an indication of XP's
performance using the configuration it is in. Yes, it does depend on
whatever else is starting so bad performance at startup might be misleading


Although I don't disagree, in my view that "might" should be changed
to "is usually," or at least "is often."

but if that is acknowledged as an influence then poor performance by the
startup is a possible indication of overall poor performance.


Possible, yes. But often not. If overall performance *is* poor, then
it should be addressed, and what you have starting automatically
should certainly be looked at as a possible cause of poor performance.
But it's wrong to assume that slow startup automatically means poor
general performance.

I think many people mistakenly assume that a slow startup slows down
everything else, but that's not always (maybe not even usually) the
case.

That's why my standard newsgroup post on slow startup includes the
sentence "Assuming that the computer's speed is otherwise
satisfactory, it may not be worth worrying about."

My own system is a very good case in point. It starts extremely
slowly, because I automatically start a number of applications (not
just background ones) that I use and always keep open all day. These
include Outlook, Forte Agent, Excel, IE7 with Maxthon running on top
of it, and Quicken. If I didn't start them automatically, I would
start them manually just after startup, and it's easier to do it
automatically.

Starting all those programs automatically makes a very slow startup,
but after startup, my performance is just fine.

Definitely, if the poor performance only occurs during startup, then most people could live with that.


Yes, that is exactly the point I was making.
 
Sam Hobbs

Ken Blake said:
I think many people mistakenly assume that a slow startup slows down
everything else

I don't. If anything I said indicates otherwise then I was not clear.
My own system is a very good case in point. It starts extremely
slowly, because I automatically start a number of applications (not
just background ones) that I use and always keep open all day. These
include Outlook, Forte Agent, Excel, IE7 with Maxthon running on top
of it, and Quicken. If I didn't start them automatically, I would
start them manually just after startup, and it's easier to do it
automatically.

That's great if the system can do all that without the multiple processes causing paging that would not occur when there is less for the system to do. If Windows, in trying to do so much, is forced into a lot of physical paging (hard faults) that it would not need with a lighter workload, then it would be reasonable to reduce the workload during startup.
 
Unknown

Caffeine does not alleviate slow startups. Slow startups are annoying no matter how much caffeine is ingested.
 
Daave

Sam said:
I don't understand how Commit Charge figures are relevant. See the following article; its description of the Commit Charge figures does not indicate any relevance.

The Memory Shell Game
http://blogs.msdn.com/ntdebugging/archive/2007/10/10/the-memory-shell-game.aspx

This is my understanding:

If the peak level is less than the amount of RAM, the PC doesn't need to use the pagefile (therefore, performance is at its optimal level). I believe that article was addressing the fact that whenever the commit charge is *higher* than the amount of RAM, the amount of thrashing may not be as high as one might expect, depending on whether I/O is kept to a minimum.
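
That rule of thumb can be sketched as a tiny check. The function name and the MB figures are mine, purely illustrative; and as discussed elsewhere in the thread, a commit charge above RAM does not by itself measure how much thrashing actually occurs, so this is only a coarse first look.

```python
def commit_vs_ram(peak_commit_mb, ram_mb):
    """Coarse first check: compare peak commit charge against installed RAM."""
    if peak_commit_mb <= ram_mb:
        return "fits in RAM"  # the pagefile is not under pressure
    excess = peak_commit_mb - ram_mb
    return f"overcommitted by {excess} MB; watch the hard-fault rate"

commit_vs_ram(700, 1024)   # -> 'fits in RAM'
commit_vs_ram(1536, 1024)  # -> 'overcommitted by 512 MB; watch the hard-fault rate'
```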
 
Ken Blake, MVP

I don't. If anything I said indicates otherwise then I was not clear.


Sorry--no, I didn't think you did. I was pointing out that others do, and that's why I so often stress that a slow startup, in and of itself, is not necessarily a problem, and for many people it isn't.

That's great if the system can do all that without the multiple processes causing paging that would not occur when there is less for the system to do. If Windows, in trying to do so much, is forced into a lot of physical paging (hard faults) that it would not need with a lighter workload, then it would be reasonable to reduce the workload during startup.



Again, the thrashing would occur if several of the programs are in
simultaneous use. Since I keep all these loaded because I use one or
the other frequently throughout the day, but hardly ever use multiples
at the same time, they don't compete with each other for RAM, and
thrashing is not an issue. Performance is fine.
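
The point here is that pressure on RAM comes from the working sets in *simultaneous* use, not from everything that is merely loaded. A sketch of that distinction (the program sizes are made-up numbers, not measurements of the real applications):

```python
def ram_pressure(active_working_sets_mb, ram_mb):
    """True if the programs actively used at the same time exceed RAM."""
    return sum(active_working_sets_mb) > ram_mb

# Hypothetical working-set sizes for the always-open programs:
loaded = {"Outlook": 80, "Agent": 40, "Excel": 120, "IE7": 150, "Quicken": 60}

# All five stay loaded, but only one is actively used at a time:
ram_pressure([loaded["Excel"]], ram_mb=512)      # -> False: no competition for RAM
ram_pressure(list(loaded.values()), ram_mb=256)  # -> True: now they compete
```

The idle programs' pages get paged out harmlessly; only concurrent active use forces the shuttling that causes thrashing.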
 
Sam Hobbs

Daave said:
http://blogs.msdn.com/ntdebugging/archive/2007/10/10/the-memory-shell-game.aspx

This is my understanding:

If the peak level is less than the amount of RAM, the PC doesn't need to use the pagefile (therefore, performance is at its optimal level). I believe that article was addressing the fact that whenever the commit charge is *higher* than the amount of RAM, the amount of thrashing may not be as high as one might expect, depending on whether I/O is kept to a minimum.


Probably you are correct that if physical memory is adequate to provide all the memory for all the executing applications, then the pagefile is not needed. The assumption is that physical memory is often not adequate; however, processes usually have some memory that is used so rarely that it is efficient to page it out. If there is enough physical memory, though, it is still not necessary to page anything out. If that is the case, then there is no need to discuss anything here; anyone with sufficient main memory can ignore this. The problem arises when main memory is in such short supply that performance suffers because of it.

Yes, it is true that "whenever the commit charge is *higher* than the amount of RAM, the amount of thrashing may not be as high as one might expect," but it is also true that the amount of thrashing might be higher than one might expect; both situations are possible, and the commit charge does not indicate which is the case.
 
Sam Hobbs

Ken Blake said:
Again, the thrashing would occur if several of the programs are in
simultaneous use. Since I keep all these loaded because I use one or
the other frequently throughout the day, but hardly ever use multiples
at the same time, they don't compete with each other for RAM, and
thrashing is not an issue. Performance is fine.

I was referring to loading startup with many things to do. During startup, the programs will likely execute simultaneously with other programs. This automatic execution of multiple processes could choke main memory more than at any other time, and therefore a limited amount of main memory could force the use of virtual storage. If multiple processes compete for main memory in the unusual environment that exists during startup, then they might require more time than if they executed at a less resource-intensive time.

I am speaking theoretically; I assume it does not apply to your system.
 
Daave

Sam said:
Probably you are correct that if physical memory is adequate enough to provide all the memory for all the applications executing then the pagefile is not needed. The assumption is that physical memory is often not adequate, however processes usually have some memory that is used so rarely that it is efficient to page out the memory that is rarely used. If however there is enough physical memory then it still is not necessary to page anything out. If that is the case then there is no need to discuss anything here; anyone with sufficient main memory can ignore this.

That is the case. :)
 
Ken Blake, MVP

Probably you are correct that if physical memory is adequate enough to
provide all the memory for all the applications executing then the pagefile
is not needed.


Nope, that isn't true. Applications often pre-allocate memory for
*potential* use, and in many cases, that potential use never becomes
actual use. Those allocations normally get paged out very quickly
because they are not in use, but if there is no page file they have to
stay in real memory. The result is that without a page file, you
effectively get locked out of some of your real memory, and you can't
actually use all of it.
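
The accounting behind this can be sketched as a toy model (function names and the MB figures are mine, and real Windows reserves some commit capacity for the kernel): every committed byte must be backed by RAM or pagefile space even if it is never touched, so with no pagefile the commit limit shrinks to roughly the size of RAM.

```python
def commit_limit_mb(ram_mb, pagefile_mb):
    """Every committed byte needs backing store: RAM or pagefile."""
    return ram_mb + pagefile_mb

def can_allocate(request_mb, committed_mb, ram_mb, pagefile_mb):
    """An allocation succeeds only if total commit stays within the limit."""
    return committed_mb + request_mb <= commit_limit_mb(ram_mb, pagefile_mb)

# A 300 MB pre-allocation that will mostly never be touched:
can_allocate(300, committed_mb=900, ram_mb=1024, pagefile_mb=0)     # -> False
can_allocate(300, committed_mb=900, ram_mb=1024, pagefile_mb=1024)  # -> True
```

With a pagefile, the never-touched commit can be assigned to disk and the RAM stays usable; without one, the same allocation fails even though plenty of RAM is idle.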

Add to that the fact that there is *no* advantage to not having a page file (except for saving a tiny amount of disk space), and it's clear that it should never be turned off.

For more information, read "Virtual Memory in Windows XP" by the late
MVP Alex Nichol at http://aumha.org/win5/a/xpvm.htm
 
Ken Blake, MVP

I was referring to loading startup with many things to do. During startup, the programs will likely execute simultaneously with other programs. This automatic execution of multiple processes could choke main memory more than at any other time, and therefore a limited amount of main memory could force the use of virtual storage. If multiple processes compete for main memory in the unusual environment that exists during startup, then they might require more time than if they executed at a less resource-intensive time.

I am speaking theoretically; I assume it does not apply to your system.


Well, I'm not absolutely sure I completely understand you, but if you
are saying that competition for real memory, and consequent thrashing,
occurs while the programs are being loaded at startup, yes that's
true, and I agree. I'm sure it does apply to my system, and that's
part of the reason that my startup is slow. But as I said, I start up
very seldom, and I don't mind its being slow, since performance is
otherwise fine.
 
Sam Hobbs

Ken Blake said:
Nope, that isn't true. Applications often pre-allocate memory for
*potential* use, and in many cases, that potential use never becomes
actual use. Those allocations normally get paged out very quickly
because they are not in use, but if there is no page file they have to
stay in real memory. The result is that without a page file, you
effectively get locked out of some of your real memory, and you can't
actually use all of it.

Add to that the fact that there is *no* advantage to not having a page file (except for saving a tiny amount of disk space), and it's clear that it should never be turned off.

For more information, read "Virtual Memory in Windows XP" by the late
MVP Alex Nichol at http://aumha.org/win5/a/xpvm.htm


Where in particular do you think that article says that the "result is that without a page file, you effectively get locked out of some of your real memory, and you can't actually use all of it"?

As for "they have to stay in real memory", staying in real memory is ideal.
It is foolish to force Windows to swap anything to the pagefile if there is
enough real memory to satisfy all memory requirements for all processes
executing in a system.

The article says "the optimisation implied by this has been found not to
justify the overhead, and normally there is only a single page file in the
first instance". The author does not understand the value of using multiple
physical drives for pagefiles.
 
Ken Blake, MVP

Where in particular do you think that article says that the "result is that without a page file, you effectively get locked out of some of your real memory, and you can't actually use all of it"?


"Doing this would waste a lot of the RAM. The reason is that when
programs ask for an allocation of Virtual memory space, they may ask
for a great deal more than they ever actually bring into use — the
total may easily run to hundreds of megabytes. These addresses have to
be assigned to somewhere by the system. If there is a page file
available, the system can assign them to it"


As for "they have to stay in real memory", staying in real memory is ideal.
It is foolish to force Windows to swap anything to the pagefile if there is
enough real memory to satisfy all memory requirements for all processes
executing in a system.


No. We are talking here about allocations that will probably never be
used. What you say is true only for what is used.
 
Sam Hobbs

Ken Blake said:
"Doing this would waste a lot of the RAM. The reason is that when
programs ask for an allocation of Virtual memory space, they may ask
for a great deal more than they ever actually bring into use - the
total may easily run to hundreds of megabytes. These addresses have to
be assigned to somewhere by the system. If there is a page file
available, the system can assign them to it"

That is too vague for me to be sure, and I don't trust its accuracy. Let's leave this to be judged by each person. You can help everyone by finding something more reliable; maybe I will.
 
Rick Merrill

Sam said:
That is too vague for me to be sure, and I don't trust its accuracy. Let's leave this to be judged by each person. You can help everyone by finding something more reliable; maybe I will.


Accidentally launch an extra copy of a drawing program and POOF! you are out of memory.

There is NEVER "enough" RAM to run one of each of all your programs.
 
