Windows ME or 2000 instead of XP


Ralph Wade Phillips

Howdy!

James said:
Set up my old computer for a friend (Intel PII-450, 128MB RAM) and installed
Win XP, but am not happy with it as it seems too slow (I have used a friend's
Celeron/128MB which is faster!)

Err - you might want to consider upping that RAM a bit. 128MB isn't
really enough even for the OS with XP ...
I therefore think that running win ME or 2000 is the best option but am not
sure which one to go for. Can anyone help?

More RAM, since even ME likes > 128M, and 2K works better with more, too.

RwP
 

Tilley Meatkutter

In my last job (I'm a s/w Engineer) I used a PII-450, 128Mbyte, W2K for
really heavyweight stuff - debug versions of a 3D CAD/CAM system and
Visual C++ simultaneously - and it was perfectly useable. I did get it
u/g to 256Mbyte which showed a marked performance improvement, but for
the intended use of the OP's machine 128Mbyte should be fine.


Just found out the recommended specs for each OS, which are as follows:

Windows 98   486 DX66                       - 16MB RAM
Windows ME   Pentium 150                    - 32MB RAM
Windows 2K   Pentium 133                    - 64MB RAM
Windows XP   300MHz (Pentium/K6/Duron etc.) - 128MB RAM

Interestingly, ME requires less RAM than 2K, but the opposite is true for
the processors.

With a PII-450 and 128MB you may be better off with ME, as both processor
and RAM are well inside spec. Having said that, most people here seem to
recommend 2K as the better choice, and you still have twice the
recommended RAM for the OS.
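A quick way to sanity-check a machine against those quoted figures is a small lookup table (a sketch in Python; the numbers are the ones quoted in this thread, not verified Microsoft figures):

```python
# Minimum specs as quoted in the thread: (CPU MHz, RAM MB).
# These are the thread's figures, not official Microsoft numbers.
MIN_SPECS = {
    "Windows 98": (66, 16),
    "Windows ME": (150, 32),
    "Windows 2K": (133, 64),
    "Windows XP": (300, 128),
}

def headroom(os_name, cpu_mhz, ram_mb):
    """Return (cpu multiple, ram multiple) over the quoted minimums."""
    min_mhz, min_ram = MIN_SPECS[os_name]
    return cpu_mhz / min_mhz, ram_mb / min_ram

# The OP's PII-450 with 128MB:
print(headroom("Windows ME", 450, 128))  # (3.0, 4.0) - well inside spec
print(headroom("Windows XP", 450, 128))  # (1.5, 1.0) - RAM right at the floor
```

Which matches the thread's conclusion: the machine has plenty of headroom for ME or 2K, but sits exactly at XP's RAM floor.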
 

Tony Houghton

In <[email protected]>,
Patrick said:
I run WinME on a machine, serving out dozens of GNU/Linux ISOs on
LimeWire at 384kbps from two SCSI Cheetah 10,000 RPM drives.

Why don't you run Linux on it? There is a Limewire client and other
gnutella clients.
 

Tim Auton

Tony Houghton said:
In <[email protected]>,


Why don't you run Linux on it? There is a Limewire client and other
gnutella clients.

Those drives seem like overkill for a 384kbps uplink too. I suppose
they're what you had lying around, and SCSI drives are generally
better suited for continuous operation, but I doubt the speed is
required :)


Tim
 

Roy Coorne

Tilley Meatkutter wrote:
....
Just found out the recommended specs for each OS, which are as follows:

Windows 98   486 DX66                       - 16MB RAM
Windows ME   Pentium 150                    - 32MB RAM
Windows 2K   Pentium 133                    - 64MB RAM
Windows XP   300MHz (Pentium/K6/Duron etc.) - 128MB RAM
....

Recommended? By whom? Ridiculous... Those values are, perhaps, minimal
ones - like 300MHz/128MB for WinXP.

Follow the other postings in the thread...

Roy
 

Larc

| > Just found out the recommended specs for each os which is as follows.
| >
| > Windows XP 300MHz (Pentium/K6/Duron etc.) - 128MB RAM
| ...
|
| Recommended? By whom? Ridiculous... Those values are, perhaps, minimal
| ones - like 300MHz/128MB for WinXP.

XP practical minimum RAM = 256MB (less slows XP down too much)
XP "sweet spot" = 384MB (best performance/value combo)
XP really loves = 512MB (lets XP go all out)

More RAM than 512MB isn't a good value return for XP itself — although
it will use more on occasion — but RAM-intensive programs such as
Photoshop will deeply appreciate it.

Larc



§§§ - Change planet to earth to reply by email - §§§
 

Creeping Stone

=|[ Larc's ]|= said:
XP practical minimum RAM = 256MB (less slows XP down too much)
XP "sweet spot" = 384MB (best performance/value combo)
XP really loves = 512MB (lets XP go all out)
Above 512MB I'd recommend turning off or virtualising the swapfile on XP,
or else it'll be shipping a significant amount of memory to and from the
hard drive, slowing other hard drive accesses and stalling between windows.
 

Tony Houghton

In <[email protected]>,
Creeping Stone said:
=|[ Larc's ]|= said:
XP practical minimum RAM = 256MB (less slows XP down too much)
XP "sweet spot" = 384MB (best performance/value combo)
XP really loves = 512MB (lets XP go all out)
Above 512MB I'd recommend turning off or virtualising the swapfile on XP,
or else it'll be shipping a significant amount of memory to and from the
hard drive, slowing other hard drive accesses and stalling between windows.

Is it actually true that Windows (I could believe it of 9x, but not of
NT etc) keeps shifting stuff in and out of swap unnecessarily just
because it's there, to the detriment of performance? Or are there some
closet RISC OS zealots about?
 

Creeping Stone

=|[ Tony Houghton's ]|= said:
In <[email protected]>,
Creeping Stone said:
=|[ Larc's ]|= said:
XP practical minimum RAM = 256MB (less slows XP down too much)
XP "sweet spot" = 384MB (best performance/value combo)
XP really loves = 512MB (lets XP go all out)
Above 512MB I'd recommend turning off or virtualising the swapfile on XP,
or else it'll be shipping a significant amount of memory to and from the
hard drive, slowing other hard drive accesses and stalling between windows.

Is it actually true that Windows (I could believe it of 9x, but not of
NT etc) keeps shifting stuff in and out of swap unnecessarily just
because it's there, to the detriment of performance? Or are there some
closet RISC OS zealots about?

I find on NT and 2000 it does, but I could be cranky about the idea ;)
If you set a drive's power management to go to sleep quickly after use,
you would notice big system stalls waiting for it to wake up at context
changes, even with plenty of memory free.
The OS doesn't know when it's going to need more space, so it starts
mirroring pages into the pagefile right from the start; on Win2K, the more
memory you have, the more of this background activity occurs.
IMO the 32-bit kernel handles paging very efficiently, except for the
relative age it takes to get data back off the hard drive.
If you work out how efficient paging is, it would probably be an impressive
figure, ~99%, but the 1% creates annoying pauses if you're used to juggling
applications and tabs a lot.

Only XP and 9x let you turn the pagefile off completely, but data is
quickly compressed into the pagefile as it's written, so it's useful in a
way. I have my swapfile on a virtual ramdisk using memory taken away from
Windows with the 'maxmem=' switch in boot.ini.
If you have a gig of memory and use half of that for the pagefile, then
because of the compression, half a gig of pagefile gives about 1GB of
virtual memory (+ the system memory left to Windows)
- it works a dream for me and a few people I've heard gave it a go.

There are some strict theorists about who say it's impossible to gain from
taking memory away from Windows and looping it back like this, so...

YMMV ;)
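For reference, the 'maxmem=' arrangement described above would look roughly like this in boot.ini (a sketch: /maxmem caps the RAM Windows uses, in MB; the ARC paths and the 512 figure are example values, and the ramdisk driver that claims the freed memory is third-party software not shown here):

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINNT

[operating systems]
; Cap Windows at 512MB; memory above the cap is left free
; for a ramdisk driver to claim and host the pagefile.
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows 2000 (512MB cap)" /maxmem=512
```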
 

Doug Ramage

Creeping Stone said:
=|[ Tony Houghton's ]|= said:
In <[email protected]>,
Creeping Stone said:
=|[ Larc's ]|= wrote:

XP practical minimum RAM = 256MB (less slows XP down too much)
XP "sweet spot" = 384MB (best performance/value combo)
XP really loves = 512MB (lets XP go all out)

Above 512MB I'd recommend turning off or virtualising the swapfile on XP,
or else it'll be shipping a significant amount of memory to and from the
hard drive, slowing other hard drive accesses and stalling between
windows.

Is it actually true that Windows (I could believe it of 9x, but not of
NT etc) keeps shifting stuff in and out of swap unnecessarily just
because it's there, to the detriment of performance? Or are there some
closet RISC OS zealots about?

I find on NT and 2000 it does, but I could be cranky about the idea ;)
If you set a drive's power management to go to sleep quickly after use,
you would notice big system stalls waiting for it to wake up at context
changes, even with plenty of memory free.
The OS doesn't know when it's going to need more space, so it starts
mirroring pages into the pagefile right from the start; on Win2K, the more
memory you have, the more of this background activity occurs.
IMO the 32-bit kernel handles paging very efficiently, except for the
relative age it takes to get data back off the hard drive.
If you work out how efficient paging is, it would probably be an impressive
figure, ~99%, but the 1% creates annoying pauses if you're used to juggling
applications and tabs a lot.

Only XP and 9x let you turn the pagefile off completely, but data is
quickly compressed into the pagefile as it's written, so it's useful in a
way. I have my swapfile on a virtual ramdisk using memory taken away from
Windows with the 'maxmem=' switch in boot.ini.
If you have a gig of memory and use half of that for the pagefile, then
because of the compression, half a gig of pagefile gives about 1GB of
virtual memory (+ the system memory left to Windows)
- it works a dream for me and a few people I've heard gave it a go.

There are some strict theorists about who say it's impossible to gain from
taking memory away from Windows and looping it back like this, so...

YMMV ;)
--

Some thoughts on pagefile/swapfile settings for XP:

http://www.aumha.org/win5/a/xpvm.htm
 

Tony Houghton

In <[email protected]>,
Creeping Stone said:
=|[ Tony Houghton's ]|= said:
Is it actually true that Windows (I could believe it of 9x, but not of
NT etc) keeps shifting stuff in and out of swap unnecessarily just
because it's there, to the detriment of performance? Or are there some
closet RISC OS zealots about?

I find on NT and 2000 it does but I could be cranky about the idea ;)
If you set a drives power management to go to sleep quickly after use, you
would notice big system stalls waiting for it to wake up at context changes
even with plenty of memory free.

So don't set the discs to spin down quickly! ;-)
The OS doesn't know when it's going to need more space, so it starts
mirroring pages into the pagefile right from the start; on Win2K, the more
memory you have, the more of this background activity occurs.

I think what you're actually seeing are the effects of FS caching. I
don't think it's at all likely that it would save pages to disc before
it wants the RAM for something more critical. And you're effectively
saying that Windows gets slower the more RAM it has.
 

Creeping Stone

=|[ Larc's ]|= said:
On Fri, 28 May 2004 09:09:56 +0100, "Doug Ramage"

| Some thoughts on pagefile/swapfile settings for XP:
| http://www.aumha.org/win5/a/xpvm.htm
It's a fine link, but its scope is not for advanced users or system
administrators. (more so the horse talk ;)

It doesn't recognise that in this era of plentiful RAM *and* lagging hard
drive performance, the old hard drive paging operations are no longer
necessary, and that with larger common memory usage, much more memory
needs to be passed through I/O than was the case when machines commonly
ran with less than 48 megs of RAM.

A number of authors writing about pagefiles state somewhat arrogantly that
no benefit could possibly be gained from limiting the size of the pagefile,
or virtualising it altogether.

For the advanced user or system builder, the great benefit available is to
free I/O from paging bandwidth (particularly desirable on laptops), and
free the OS from the associated lags.

The purpose of setting maximum values on computer resources is to limit
the ability of undesired runaway circumstances to make a mess of
everything, flagging the critical situation before they do. With enough
global resource to accommodate the greediest valid usage possible, it is
detrimental to set maximum usage beyond that calculated level (especially
the pagefile, because a larger pagefile = more I/O work).

On Win2K, if you set the pagefile minimum to 1.5 x RAM, it will begin to
increase when about 90% of virtual memory is used; setting it below that
ratio causes Windows to complain about its VM usage long before it's
reaching the estimated ceiling.

If you start with 256 megs of system RAM and allocate 512 megs (min+max)
of hard drive space for the pagefile, that gives the system about 1250
megs of virtual memory (routinely compressed) - which for most users is
much more than enough for any practical combination of open applications,
including modest CAD and graphics apps, which often make more efficient
use of scratch disks than Windows VM.

If you have 768 megs of system RAM, Windows will complain ("...is
increasing virtual memory...") if you set the pagefile to less than
[768 x 1.5] = 1.1 gigs, so you'll have well over 2 gigs of virtual memory!
- for most users that is just a silly allocation of resources, and the
relationship between memory and pagefile means you'll be mirroring huge
swathes of virtual memory to your huge pagefile - on the cherished hard
drive / through limited I/O bandwidth...
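The arithmetic in the two examples above can be sketched like this (a toy calculation; the 1.5x ratio and the roughly 2:1 compression factor are this poster's claims, not documented constants):

```python
# Sketch of the pagefile sizing rule discussed above.
# Assumptions (from the thread, not official figures):
#  - Win2K complains if the pagefile is below 1.5 x RAM
#  - pagefile contents compress at roughly 2:1

def min_pagefile_mb(ram_mb: int, ratio: float = 1.5) -> int:
    """Smallest pagefile (MB) before Windows starts complaining."""
    return int(ram_mb * ratio)

def approx_virtual_memory_mb(ram_mb: int, pagefile_mb: int,
                             compression: float = 2.0) -> int:
    """Rough total VM: RAM plus compressed pagefile capacity."""
    return int(ram_mb + pagefile_mb * compression)

print(min_pagefile_mb(768))                # 1152 MB, i.e. ~1.1 gigs
print(approx_virtual_memory_mb(256, 512))  # 1280 MB, near the '1250 megs'
```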

The load on the hard drive, and the folly of the allocation, increase the
more lovely onboard memory you give Windows to deal with.

768 megs of system memory is a decent sweet spot: take 512 megs off
Windows and leave it 256 megs to work in, and put the pagefile on a
capable ramdisk (or buy a PCI card for $1000s).
Then you get much more than enough VM for all but the heaviest workstation
loads, have a perfectly smooth machine and completely free hard drive I/O,
and the associated power saving on laptops.

Programs DO load faster, and lags ARE eliminated, because decent ramdrives
do I/O transactions in ~no time at all.

The type of summarisations that technical authors often make about this
system are oversimplified to the point of being quite irrelevant to the
behaviour of the actual system (also see politicians, economists... ;)
- so beware of those who mislead in that way; it's more useful to
understand the details by actually observing them.

Regards,
 

Creeping Stone

=|[ Tony Houghton's ]|= said:
If you set a drives power management to go to sleep quickly after use, you
would notice big system stalls waiting for it to wake up at context changes
even with plenty of memory free.

So don't set the discs to spin down quickly! ;-)
The test demonstrates the obtrusiveness of Windows' antiquated paging
system.
I think what you're actually seeing are the effects of FS caching.
Perhaps that's mixing in :/
I don't think it's at all likely that it would save pages to disc before
it wants the RAM for something more critical.
I'm quite sure that's how it works - it must anticipate as part of its
design.
And you're effectively saying that Windows gets slower the more RAM it has.

Beyond what RAM it actually requires, the extra RAM needs to be managed,
and leads Windows to expect that truly huge demands will be made.
If you have much more memory than you need, why involve the hard disk I/O
in memory management at all??
 

Tony Houghton

In <[email protected]>,
Creeping Stone said:
=|[ Tony Houghton's ]|= said:
I don't think it's at all likely that it would save pages to disc before
it wants the RAM for something more critical.
I'm quite sure that's how it works - it must anticipate as part of its
design.

Why? I can understand it insisting on keeping a few pages available in
case there's a sudden increase in demand, but otherwise I can't see any
reason why it would waste time mirroring pages to disc just on the off
chance it might want to reallocate them later at some point when it's
not in the mood for writing them to disc.
 

Creeping Stone

=|[ Tony Houghton's ]|= said:
Creeping Stone said:
=|[ Tony Houghton's ]|= wrote:
I'm quite sure that's how it works - it must anticipate as part of its
design.

Why? I can understand it insisting on keeping a few pages available in
case there's a sudden increase in demand, but otherwise I can't see any
reason why it would waste time mirroring pages to disc just on the off
chance it might want to reallocate them later at some point when it's
not in the mood for writing them to disc.

I read some problems in my big post, so take it lightly.
I'll try and get closer to the bones.

My understanding of Windows VM, IIRC, is this:

The address space of main memory is split into pages, IIRC 4096 bytes
long. Each page has a low-level record of its usage and its mapping into
virtual memory space, including the 'TLB' table which the kernel/CPU uses
to quickly look up the real address of virtual memory locations.

The paging system mirrors lesser-used pages to the hard disk pagefile as a
background process - not all pages, but a significant amount. This allows
those pages which are mirrored to the pagefile to be overwritten if/when
memory demand unexpectedly requires it. If such mirrored pages aren't
overwritten, there is no reason to retrieve them, but they must be updated
or invalidated in the pagefile if their contents are changed.

At start-up the OS begins this mirroring work, observing the activity of
pages and transferring the least active ones. After using the machine for
a while, there is a selection of pages mirrored in the pagefile: some have
remained valid for a while (these are good pages to have mirrored), and
some didn't last long before being invalidated. There's a soup of mirrored
pages in the pagefile, all recorded and tracked. Some pages are active
enough to have avoided being mirrored altogether; some are marked
non-pageable, resulting in the same.
If real memory is running out, the change in circumstance is that the OS
is having to retrieve overwritten pages back off the pagefile, but the
process of updating the pageable, less active pages is ongoing and not
dependent on whether or not the system is running low on real memory.

All that copying is work for the hard drive - one of the slowest bits of
the computer, which has its own work to do as well.

The OS shouldn't need to retrieve data from the pagefile unless it really
has run out of real memory space. But with a very large pagefile, and some
duplication and memory requirements to index it, it may run out sooner
than it would if the kernel were just compressing areas of memory
occasionally within memory, and thus need the pagefile's storage facility;
or, since the whole thing is not perfect and no preferred axioms can be
counted on being supported by technical implementations :} smaller records
might find themselves exclusively isolated in the pagefile even before
they need be.

In short, the pagefile's needs are much better met by actual memory than
by creaky old hard drives, which have plenty of their own work to do.
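The mirroring behaviour described above can be sketched as a toy simulation (an illustration of the idea only, not Windows' actual page-replacement code; the class and method names are invented):

```python
# Toy sketch of background page mirroring as described above.
# Idea: least-active pages get copied ("mirrored") to the pagefile
# so their RAM can be reclaimed instantly if demand spikes; writing
# to a page invalidates its mirror.

class Pager:
    def __init__(self, pages):
        self.access_count = {p: 0 for p in pages}  # activity per page
        self.mirrored = set()                      # pages copied to pagefile

    def touch(self, page):
        """Access a page; a write makes any existing mirror stale."""
        self.access_count[page] += 1
        self.mirrored.discard(page)

    def background_mirror(self, n):
        """Mirror the n least-active pages that aren't mirrored yet."""
        candidates = sorted(
            (p for p in self.access_count if p not in self.mirrored),
            key=self.access_count.get)
        self.mirrored.update(candidates[:n])

pager = Pager(["a", "b", "c", "d"])
for _ in range(5):
    pager.touch("a")           # "a" is hot, the rest are cold
pager.background_mirror(2)     # mirrors the two coldest pages
print(sorted(pager.mirrored))  # ['b', 'c']
pager.touch("b")               # the write invalidates b's mirror
print(sorted(pager.mirrored))  # ['c']
```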

Newer, Quicker, bigger, smoother :]
 

GSV Three Minds in a Can

from the wonderful person Tony Houghton said:
In <[email protected]>,
Creeping Stone said:
=|[ Tony Houghton's ]|= said:
I don't think it's at all likely that it would save pages to disc before
it wants the RAM for something more critical.
Im quite sure thats how it works - it must anticipate as part of its
design.

Why? I can understand it insisting on keeping a few pages available in
case there's a sudden increase in demand, but otherwise I can't see any
reason why it would waste time mirroring pages to disc just on the off
chance it might want to reallocate them later at some point when it's
not in the mood for writing them to disc.

As far as I can tell it doesn't. Most of 'real ram' is actually occupied
with file cache, which can be dumped at the drop of a hat (since it is
already on disk). At the point where all real RAM is occupied by
code/data, and needs writing out to swap space, you are in trouble. You
have to work REALLY HARD to get WinXP to use more than ~350MB of space
for code/data.

FWIW you can =not= believe the Win2K/XP 'page file usage' numbers .. get
the utility from Doug Knox's page if you want to know what is really in
use. WinXP counts page file as 'in use' when it has just been allocated,
but never written to (which is what XP often does instead of allocating
real RAM, which is why having no page file at all is pretty dumb, since
then 'allocated but unused' space stays in real RAM).
 

Creeping Stone

=|[ GSV Three Minds in a Can's ]|= said:
Tony Houghton <[email protected]> said
Why? I can understand it insisting on keeping a few pages available in
case there's a sudden increase in demand, but otherwise I can't see any
reason why it would waste time mirroring pages to disc just on the off
chance it might want to reallocate them later at some point when it's
not in the mood for writing them to disc.

As far as I can tell it doesn't. Most of 'real ram' is actually occupied
with file cache, which can be dumped at the drop of a hat (since it is
already on disk). At the point where all real RAM is occupied by
code/data, and needs writing out to swap space, you are in trouble. You
have to work REALLY HARD to get WinXP to use more than ~350MB of space
for code/data.

fwiw you can =not= believe the Win2k/Xp 'page file usage' numbers .. get
the utility from Doug Knox's page if you want to know what is really in
use.<...>

I had a look with that and with Performance Monitor,
with my current setup, which is:

Total Physical Megs: 320
Pagefile Minimum: 370

Current state
=============
Actual Pagefile use: 39 Megs
System Cache: 173 Megs
Available Free: 159 Megs

At this point the system is freshly booted and rather lightly loaded. I
often have browser windows open amounting to over 100 megs, graphic
displays of large directories up to 200 megs, and plenty of other apps;
then pagefile effects become much more prominent. Anyway...

I suppose I've been describing an exaggerated case, but I think you guys
are over-dismissive of the pagefile system's problems with meeting
unpredictable memory demands, and of how the resulting load is not ideal
for hard drive I/O. Even on machines with large memory, that increased
memory capacity means delay-causing amounts of data are expected to pass
transparently through hard drive I/O.
I notice this particularly if running background tasks which continuously
use the hard drive while I have the pagefile on it.

Once a really bad tweakaholic, I don't do it so much these days, but it's
practical heavy-workstation experience that leads me to much prefer the
operation of my machine with a virtualised pagefile, though 512 megs is
just a little low to do this with.

It's disappointing that no one else has experienced this, or even
acknowledges that it could be the case - I think the config is
under-researched because of some of the overstretched summarisations
around.

It's all about how cheap and easy it is to have lots of onboard memory
these days. Of course memory still benefits from being managed, but
significant benefit - at least for perfectionists - can be gained from
taking hard drives out of the loop.

cheers,
 

GSV Three Minds in a Can

Bitstring <[email protected]>, from the
wonderful person Creeping Stone said:
It's all about how cheap and easy it is to have lots of onboard memory
these days. Of course memory still benefits from being managed, but
significant benefit - at least for perfectionists - can be gained from
taking hard drives out of the loop.

Nobody would disagree with that. Fit 2GB of real RAM, assign 256MB of
page file for those 'allocated but never actually used' pages, and for
the 50MB that XP needs for a dump file, and for the ~40MB that XP swaps
out as soon as it loads (and never, afaict, swaps back in again), and
your system will fly.

Until you try to do something with a 4GB video/photo file, then it'll
slow down again. Disc IO is well known to be evil .. that's why (as far
as I can see) XP doesn't do any unless it absolutely has to.
 

Creeping Stone

=|[ GSV Three Minds in a Can's ]|= said:
Bitstring <[email protected]>, from the
wonderful person Creeping Stone said:
It's all about how cheap and easy it is to have lots of onboard memory
these days. Of course memory still benefits from being managed, but
significant benefit - at least for perfectionists - can be gained from
taking hard drives out of the loop.

Nobody would disagree with that. Fit 2Gb of real RAM, assign 256MB of
page file for those 'allocated but never actually used' pages, and for
the 50MB that XP needs for a dump file, and for the ~40MB that XP swaps
out as soon as it loads (and never, afaict, swaps back in again), and
your system will fly.
I find with NT and 2K that if I set the pagefile to less than 1.5 times
the size of real memory, I soon get alerts that the pagefile needs to be
increased - well before resources are getting scarce.
I started messing with pagefile settings years ago on a Socket 7
motherboard whose chipset wouldn't cache board memory beyond 64 megs, so
I got good results from using >64 megs for a ramdisk.
I just fixed up an old 48-meg 586 9x machine by adding a similarly ancient
hard drive and relocating its pagefile to a compressed partition on it
- ah the fun %} it's almost useable now!
Until you try to do something with a 4GB video/photo file, then it'll
slow down again. Disc IO is well known to be evil .. that's why (as far
as I can see) XP doesn't do any unless it absolutely has to.

Maybe XP is more refined; someday I'll check :)

best regards,
 
