Will this computer run Vista adequately?

  • Thread starter: Bob Newman
Not necessarily true!

Mine has "Intel® Graphics Media Accelerator 950, Up to 224MB Shared Video
Memory, PCI-Express® (PCI-E x16) slot available for upgrade" and only 512 MB
RAM and it lists 502, 504 or 506, depending on where I look.

KB

"Jay Somerset" wrote in message

Try running an application that actually makes use of the video card ;)

--
Stephan
2003 Yamaha R6

The reason there is never a day I find myself remembering you
is that there has never been a moment I forgot you
 
Such as? I'm not a big gamer. :)

My daily needs are very similar to the OP, and all I see are a bunch of
replies suggesting unnecessary hardware and software upgrades that just add
more dollar$ to the co$t.

Instead of debating what all of you want, why not stick to what the OP
actually needs?

KB

"Stephan Rose" wrote in message
 
Such as? I'm not a big gamer. :)

I know, I just couldn't resist the comment. :)
My daily needs are very similar to the OP, and all I see are a bunch of
replies suggesting unnecessary hardware and software upgrades that just add
more dollar$ to the co$t.

Instead of debating what all of you want, why not stick to what the OP
actually needs?

Well if I really wanted to address the OP's needs, I'd scratch Vista off
that list too. Then the machine would be beyond adequate with 512 megs of
ram for years to come.

--
Stephan
2003 Yamaha R6

The reason there is never a day I find myself remembering you
is that there has never been a moment I forgot you
 
Although dated, eMachines sell them with Vista Home Basic pre-installed.
http://www.emachines.com/products/products.html?prod=T5088

See also http://www.pcmag.com/article2/0,1895,2143047,00.asp where PC Mag
say it will run Home Basic.

It needs more RAM, even to run Home Basic satisfactorily. 512MB is
inadequate these days.

For HIS needs??? The only thing 512MB is inadequate for is Vista.

For *his* needs 512 MB is beyond sufficient.

There are far better OS choices that would meet his needs without
ridiculous hardware requirements for years to come.

--
Stephan
2003 Yamaha R6

The reason there is never a day I find myself remembering you
is that there has never been a moment I forgot you
 
On Sun, 24 Jun 2007 16:31:08 -0400, "Bob Newman"
I'm looking for a little guidance if I could please. My XP computer died
and I must replace it although I have very limited resources at this time.
I realize the computer I am describing is very basic but I am trying to get
some feedback as to whether it is a waste of money. My computer usage is 90% for
email (Outlook 2003), web access, and light Word and Excel usage (both
2003). The computer I am contemplating is an eMachines T5088 specs are:
Pentium 4 Processor 641, 3.20 GHz
512 MB RAM
160 GB HD
DVD RW

You also mentioned Intel graphics of the 965 chipset generation.

If you use the system with Aero disabled, it will be OK.

If you can do one thing to boost the spec for the long term, go for a
larger HD, with the option to add RAM (not too much) later.

If you have one chance to boost spec, add more RAM.

Don't bother with adding a graphics card unless you want to play
games; even though it will free up some system RAM, this will be less
(and cost more) than adding RAM. Vista is DirectX 10.

Intel's graphics, as part of the 965 chipset, do support Aero, but
your RAM crunch may become painful. The older 915 chipset's onboard
graphics do NOT support Aero, so make sure it's the 965 chipset!


------------ ----- ---- --- -- - - - -
Our senses are our UI to reality
 
For HIS needs??? The only thing 512MB is inadequate for is Vista.
For *his* needs 512 MB is beyond sufficient.

There are far better OS choices that would meet his needs without
ridiculous hardware requirements for years to come.

Acknowledged, but I'm merely answering Bob's question without getting
hysterical about OS choices.

To run Vista adequately he will need more than 512MB.
 
So, to get constructive, what would you suggest for the OP?

Well his needs per his words:

Primarily e-mail with web browsing and some minor word processing and
spreadsheets sprinkled on top.

He says he has only limited resources to work with so I am going to assume
this to mean a small budget.

Option 1: Vista

Cons:
- Won't run well with his specs; his system is going to be inadequate for
quite a few features.
- Costs money
- Learning curve. Things are in different places and others done
differently.
- All sorts of potential issues ranging from software to hardware
support. May get lucky and have no problems. It may be a nightmare. It may
fall anywhere in between the two extremes.
- Default installation does not meet all his needs. Needs to install
additional software.

Pros:
- ????

Option 2: XP

Cons:
- Costs money if he has no existing license to transfer
- Is eventually no longer going to be supported.
- Default installation does not meet all his needs. Needs to install
additional software.

Pros:
- Everything is the way he is used to. No learning curve.
- Will definitely run with his system specs.
- Hardware and software support won't be an issue.

Option 3: Ubuntu

Cons:
- Learning curve, different OS. Things are done differently.
- Can't use the apps he is used to.

Pros:
- Doesn't cost a dime. Helps if his budget is low.
- Is always going to be supported.
- Hardware support, particularly on a system such as his, is definitely
not going to be an issue.
- Chosen hardware is more than adequate for everything.
- The default installation out of the box meets all his needs. Evolution
for e-mail, OpenOffice for word processing / spreadsheet. Firefox for
web browsing. All are similar in look and feel to the MS equivalents.

Personally, I'd recommend Option 3. Not because I favor Ubuntu over
Windows, but because with his current hardware configuration Option 1 will
likely not be all that great and Option 2 is eventually a dead-end when
support drops.

--
Stephan
2003 Yamaha R6

The reason there is never a day I find myself remembering you
is that there has never been a moment I forgot you
 
Acknowledged, but I'm merely answering Bob's question without getting
hysterical about OS choices.

To run Vista adequately he will need more than 512MB.

That's why I haven't mentioned OS names so far until you asked me directly
in the other post. =)

--
Stephan
2003 Yamaha R6

The reason there is never a day I find myself remembering you
is that there has never been a moment I forgot you
 
I just loaded Office and a few other apps and so far it actually seems to be
doing fine.

Bob
 
Hey guys. So far so good. I've got my MS Office installed & running, been
web surfing, and of course newsgroup using. I've also installed several
other more minor programs with no issues. So far I am amazed, me and my 512
MB are doing fine and probably going faster than my old XP machine!

Bob
 
Hey guys. So far so good. I've got my MS Office installed & running, been
web surfing, and of course newsgroup using. I've also installed several
other more minor programs with no issues. So far I am amazed, me and my 512
MB are doing fine and probably going faster than my old XP machine!

Keyword "Old XP Machine". Comparing a brand new Windows Install, even if
it is Vista, with an old XP Install is pretty meaningless in evaluating
performance.

If your XP install was one year old, report back when the Vista install is
one year old. Only then will you be comparing apples to apples as Windows
installations tend to decline in performance over time. Commonly referred
to as MS Bitrot =)

Now what I'd love to see is how Ubuntu would perform in comparison to
Vista.

--
Stephan
2003 Yamaha R6

The reason there is never a day I find myself remembering you
is that there has never been a moment I forgot you
 
If your XP install was one year old, report back when the Vista install is
one year old. Only then will you be comparing apples to apples as Windows
installations tend to decline in performance over time. Commonly referred
to as MS Bitrot =)

That performance degradation isn't unavoidable.

I avoid it mainly by using partitioning to concentrate most traffic
within a small head reach, irrespective of how full the data storage
may be (as that is off C:) or how fragmented C: may be (a 32G C: on a
320G HD can never be worse than 10% head travel distance).
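
To put a rough number on that, here's a quick back-of-envelope sketch in
Python (my own illustration, assuming LBA maps roughly linearly to head
position and ignoring zone bit recording):

def worst_case_head_travel(partition_gb, disk_gb):
    # Worst-case head travel across one partition, as a fraction of a
    # full stroke, assuming a roughly linear LBA-to-position mapping.
    return partition_gb / disk_gb

print(worst_case_head_travel(32, 320))   # 0.1 -> about 10% of full stroke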

There's a bit more to it; relocating data stores, disabling SR on
volumes where it is irrelevant, etc. The latter's easy in Vista, as by
default it doesn't enable SR on volumes other than C:

I've thought of the "rot" part of "bitrot" as referring to stability
degradation due to installed junkware and file system damage that is
papered over by AutoChk.

This should be less in a PC that doesn't crash or suffer bad exits; on
my own systems since Win95 original, I've not had a Windows
installation that I've had to rebuild. They've lasted the full life
of the hardware they were installed on.


As to the original poster; I thought he'd be OK as long as he avoided
Aero and didn't let the onboard graphics hog too much of his 512M.

Having built a few 512M and 1G RAM Vista boxen using the same G965
chipset, I've found the 1G systems faster - more so than, say,
comparing a 3GHz Celeron with a 3.2GHz Pentium 4 - but the 512M PCs
weren't slash-your-wrists slow or unusable.


BTW: Another confounding issue where evaluating Vista speed is
concerned, is Vista's indexing. This typically doesn't become active
for the first few days, so initial impressions before delivery may not
be matched by first-month user impressions after delivery.


-------------------- ----- ---- --- -- - - - -
Tip Of The Day:
To disable the 'Tip of the Day' feature...
 
That performance degradation isn't unavoidable.

Agreed, non-Windows operating systems don't suffer from it. =)
As to the original poster; I thought he'd be OK as long as he avoided
Aero and didn't let the onboard graphics hog too much of his 512M.

Having built a few 512M and 1G RAM Vista boxen using the same G965
chipset, I've found the 1G systems faster - more so than, say,
comparing a 3GHz Celeron with a 3.2GHz Pentium 4 - but the 512M PCs
weren't slash-your-wrists slow or unusable.

That depends on the user. Personally I'd consider most everything below my
core 2 duo slash-my-wrist slow and unusable. =) Well ok, my 3.2 GHz P4 is
also doing much better now that it is no longer being dragged down to
sloth-like speeds by windows. Primarily use Ubuntu on it now which helped
big time.

--
Stephan
2003 Yamaha R6

The reason there is never a day I find myself remembering you
is that there has never been a moment I forgot you
 
Agreed, non-Windows operating systems don't suffer from it. =)

LOL ;-)

Seriously, though; head travel is head travel, irrespective of the
nature of whatever it has to step over. Does *NIX (say) have
mechanisms of zoning content according to a logic that would mimic
manually-imposed and optimized partitioning?

I ask, because that's the only way I can see this generic issue being
effectively managed within a file system - and I can also see how such
logic could generate "category churning" as use patterns and size
cause material to flip from one category to another (and thus be moved
from one part of the head space to another, to optimize it).
That depends on the user.

And usage too - even beyond what apps are involved. For example, a
family with 7 user profiles that are constantly logged in and
fast-switched, will prolly need more RAM than a single user account.
Personally I'd consider most everything below my
core 2 duo slash-my-wrist slow and unusable. =) Well ok, my 3.2 GHz P4 is
also doing much better now that it is no longer being dragged down to
sloth-like speeds by windows. Primarily use Ubuntu on it now which helped
big time.

It's only now that I'll be building with Conroe-generation processors.
In the Vista age, I've built Conroe-capable systems, but using first
pre-Conroe Celeron and then (as the price crunch pushed them down) the
Pentium 4 and Pentium D stuff.

What's changed, are new Conroe-generation processors that penetrate,
rather than crush, the old/new price barrier. All but the slowest of
"old" Celerons will be undercut by the new Conroe-based Celeron-L,
which is a single core with 512k L2 at 800MHz base, 2GHz core.

I'm interested to know how this will compare with the two new
"Pentium"-branded Conroe processors, which are similar to Celeron-L
except they are dual-core, and have slower core speeds of 1.6GHz and
1.8GHz. Will the second core be used enough to balance the slow GHz?

BTW: Even more than usual, watch your back with Intel's names for
chips. The "easy" names often have NOTHING to do with the underlying
technology, and everything to do with how marketers wish you to
perceive them... if that tricks you into spending extra $$$ on
something that costs the same to make, and offers only trivial actual
added benefit, well, they couldn't be happier.

Right now, "Celeron" and "Pentium" could mean pre- or post-Conroe
cores, which is as crazy as calling a 386 a 286 and vice versa.


------------ ----- ---- --- -- - - - -
The most accurate diagnostic instrument
in medicine is the Retrospectoscope
 
LOL ;-)

Seriously, though; head travel is head travel, irrespective of the
nature of whatever it has to step over. Does *NIX (say) have
mechanisms of zoning content according to a logic that would mimic
manually-imposed and optimized partitioning?

Actually yes it does. I can tell you that my hard drives are significantly
quieter under Linux than they are under Windows. Matter of fact, if I hear
any significant head movement, that is the exception, not the norm.

I have copied over 30 gigs' worth of stored DVD images and not even
heard the head move once.

The reason for this is how the file system is organized. NTFS has all
its master tables, etc. at the beginning of the drive. Also, the NTFS master
file table, volume bitmap, etc. are files within its own file system. Yes,
you read that right. So accessing any file-system structures actually
incurs file system overhead and fragmentation like any other file does. So
the MFT is spread out, non-linearly and randomly across the disk, wherever it
managed to scrounge up a few blocks. So in theory you could have the MFT
entry of file A at the beginning of the disk, the MFT entry of file B at the
*end* of the disk, and the MFT entry of file C again at the beginning. And this
doesn't even take the location of the file data and directory entry in
relation to the MFT data into account. Bottom line, NTFS is a recipe for
excessive head movement, and it does that very well.
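
If you want to see how your own NTFS volume has laid out its MFT, one way
on Windows is "fsutil fsinfo ntfsinfo" from an elevated prompt. A minimal
Python wrapper might look like the sketch below; the exact labels in the
output vary between Windows versions, so treat the string match as an
assumption rather than a parser.

import subprocess

def ntfs_mft_info(drive="C:"):
    # Dump NTFS volume details via fsutil and keep the MFT-related lines
    # (size, zone, start LCN). Needs Windows and an administrator prompt.
    out = subprocess.run(["fsutil", "fsinfo", "ntfsinfo", drive],
                         capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        if "mft" in line.lower():
            print(line)

ntfs_mft_info("C:")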

Ext2 and Ext3, on the other hand, don't do this. They use a static
structure spread out linearly over the entire disk. This gives them one
significant advantage. The master file table is written raw to the disk in
predefined locations, with no file system overhead in accessing the data. Also,
the disk is split up into large linear chunks where each chunk gets its own
master file table and assorted structures that continue where the
previous one left off. I don't know off the top of my head how large these
chunks are. I suspect volume size may also be a factor in determining that.
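
For what it's worth, those numbers can be read straight off the superblock
with dumpe2fs (part of e2fsprogs). A small sketch, assuming an ext2/3 volume
on /dev/sda1 and root privileges:

import subprocess

def ext_group_info(device="/dev/sda1"):
    # Print block size and blocks/inodes per group from the superblock,
    # using "dumpe2fs -h" (header only). Needs root.
    out = subprocess.run(["dumpe2fs", "-h", device],
                         capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        if line.startswith(("Block size:", "Blocks per group:",
                            "Inodes per group:")):
            print(line)

ext_group_info()

With the common defaults of a 4KB block size and 32768 blocks per group,
each group covers 128MB of the disk.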

Now when you create a directory and put files in it, they are all grouped
near the physically closest master file table that is available. So this
significantly minimizes head movement.

It does not matter if the file is created near the beginning or end of the
disk, all the necessary file system structures will always be near it with
very little head movement.

It only stops doing this if the file system becomes near full and it
simply isn't possible anymore. Then it is forced to fragment like NTFS
does.

Also, when Ext3 creates a file, if possible, it purposely leaves physical
gaps between the files it creates. So if a file then needs to grow, it can
grow without fragmenting as long as there is space in the gap.

The file system handler also appears to be smart enough, when it copies
a file and therefore knows its size ahead of time, to preallocate a
linear block of space for the file (if available), so it can copy the
file without fragmentation.
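
An application can ask for the same favour explicitly when it knows the size
up front. On Linux, Python exposes posix_fallocate(), which reserves the
blocks in as linear a run as the file system can manage; a minimal sketch
(the file names are just placeholders):

import os
import shutil

def copy_with_preallocation(src, dst):
    # Reserve the full size first so the file system can hand out one
    # contiguous run of blocks where possible, then copy the data in.
    size = os.path.getsize(src)
    with open(src, "rb") as fsrc, open(dst, "wb") as fdst:
        if size and hasattr(os, "posix_fallocate"):   # POSIX/Linux only
            os.posix_fallocate(fdst.fileno(), 0, size)
        shutil.copyfileobj(fsrc, fdst)

# copy_with_preallocation("dvd.iso", "/mnt/data/dvd.iso")
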
I ask, because that's the only way I can see this generic issue being
effectively managed within a file system - and I can also see how such
logic could generate "category churning" as use patterns and size
cause material to flip from one category to another (and thus be moved
from one part of the head space to another, to optimize it).


And usage too - even beyond what apps are involved. For example, a
family with 7 user profiles that are constantly logged in and
fast-switched, will prolly need more RAM than a single user account.

Especially when using an operating system not really intended for more
than one user. =)
It's only now that I'll be building with Conroe-generation processors.
In the Vista age, I've built Conroe-capable systems, but using first
pre-Conroe Celeron and then (as the price crunch pushed them down) the
Pentium 4 and Pentium D stuff.

What's changed, are new Conroe-generation processors that penetrate,
rather than crush, the old/new price barrier. All but the slowest of
"old" Celerons will be undercut by the new Conroe-based Celeron-L,
which is a single core with 512k L2 at 800MHz base, 2GHz core.

I'm interested to know how this will compare with the two new
"Pentium"-branded Conroe processors, which are similar to Celeron-L
except they are dual-core, and have slower core speeds of 1.6Ghz and
1.8GHz. Will the second core be used enough to balance the slow GHz?

Well I can tell you that my 2.4GHz E6600 I have sitting at home blows my
3.2GHz P4 to tiny little pieces, and that's only with one core. It
annihilates my P4 when it actually gets to use an app that can use both
its cores.

Now don't ask me what weird-name generation the cores are. I don't really
keep track of that much. =) I just try to do my research when I build a
new system into what's the most cost-effective to buy, and go buy that.
Seemed to work out really well on my dual core system! I am very happy
with it.

--
Stephan
2003 Yamaha R6

The reason there is never a day I find myself remembering you
is that there has never been a moment I forgot you
 
Actually yes it does. I can tell you that my hard drives are significantly
quieter under linux than they are under windows. Matter of fact, if I hear
any significant head movement then that is the exception, not the norm.

OK, not sure how to interpret that, tho.
I have copied over 30 gigs of data worth of stored DVD Images and not even
heard the head move once.

Is this SCSI, BTW?
The reason for this is how the file system is organized. NTFS has all
its master tables, etc. at the beginning of the drive. Also, the NTFS master
file table, volume bitmap, etc. are files within its own file system.

"In the beginning, was The File..." yup, PICK was pretty much like
that, too; like an "embedded metalanguage".

AFAIK, NTFS may put some frequently-accessed stuff in the middle of
the volume, on the assumption that the volume will fill up and this
middle zone will therefore be average- rather than worst-case head
travel, most of the time.

But with a nearly-empty HD, that just forces average-case mediocrity
in a situation that should be near best-case speed.
So accessing any file-system structures actually incurs file system
overhead and fragmentation like any other file does.

There are various aspects to that, both for speed (how many different
locations have to be pecked at during an atomic file op?) and
survivability (are key structures duplicated or otherwise derivable
from redundant info?). Then there's the impact of extra underfootware
code, such as Shadow Copy, etc.

AFAIK, a non-zero file on NTFS will have:
- an MFT entry
- a dir entry, with metadata
- chaining data, prolly included in or bulging from above
- content data, first part included in metadata
- space-used bits within the free space bitmap

Finding the file to open it is more efficient if there are many
entries in the directory, compared to FATxx, as the directory is now a
B-tree rather than "flat". Once found, AFAIK all further activity
works through the MFT and file metadata, unless the dir entry has to be
refreshed (e.g. when closing the file).

Another scalability improvement over FATxx is the way chaining is
tracked. Instead of a duplicated look-up table that includes all
cluster addresses, the functions of tracking chaining and
space-occupied are split.

Space-occupied status is now tracked in a bitmap file, which is 1 bit
per cluster, as opposed to 32 bits per cluster address. So it's
1/32nd as large as a single FAT needs to be (and I really hope it's
duplicated for survivability).

Chaining status is done differently, too; instead of looking up each
"next cluster", it's assumed that clusters will be contiguous for the
length of a "run". There will be a starting cluster address, together
with a run length; after that many contiguous clusters, the next run's
start/length is looked up, etc.

This will grow, and possibly bulge, if the file is very fragmented.
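
A tiny illustration of the difference (my own sketch, not NTFS's actual
on-disk encoding): instead of one "next cluster" pointer per cluster, as in
a FAT, a run list collapses each contiguous stretch into a (start, length)
pair.

def clusters_to_runs(clusters):
    # Collapse an ordered list of cluster numbers into (start, length)
    # runs, the way an extent/run-list scheme describes an allocation.
    runs = []
    for c in clusters:
        if runs and c == runs[-1][0] + runs[-1][1]:
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)   # extend current run
        else:
            runs.append((c, 1))                         # start a new run
    return runs

print(clusters_to_runs([100, 101, 102, 103, 104, 105, 106, 107]))
# an unfragmented 8-cluster file: [(100, 8)]
print(clusters_to_runs([100, 101, 102, 500, 501, 900]))
# a fragmented one, a pair per fragment: [(100, 3), (500, 2), (900, 1)]

(Side arithmetic on the bitmap point: a 160G volume at 4K per cluster is
about 40 million clusters, so a 1-bit-per-cluster bitmap is roughly 5MB,
versus roughly 160MB for a 32-bit-per-cluster FAT over the same space.)
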
The MFT is spread out, non-linearly and randomly across the disk, wherever it
managed to scrounge up a few blocks. So in theory you could have the MFT
entry of file A at the beginning of the disk, the MFT entry of file B at the
*end* of the disk, and the MFT entry of file C again at the beginning. And this
doesn't even take the location of the file data and directory entry in
relation to the MFT data into account. Bottom line, NTFS is a recipe for
excessive head movement, and it does that very well.

As PICK does for every file, so NTFS pre-allocates space for MFT,
based on how large it is expected to be. So straight off, you're
saddled with a worst-case file structure size, requiring head travel
to traverse at least this, much as is the case with FATxx's FATs.

If there are fewer files (but they are larger, filling the space) then
some of that MFT space may be surrendered for use.

If there are more files than expected (i.e. lots of tiny ones), then
MFT will have to grow in conditions where it will likely fragment -
and if it is "always in use", it will never get defragged.

I'm not sure if it's compacted, either. For example:
- I have a 160G volume that's 90% full
- I copy off 90% of the file mass to another volume
- I then shrink this volume to (say) 50G
- am I stuck with a large, sparse MFT forever?
Ext2 and Ext3, on the other hand, don't do this. They use a static
structure spread out linearly over the entire disk. This gives them one
significant advantage. The master file table is written raw to the disk in
predefined locations, with no file system overhead in accessing the data. Also,
the disk is split up into large linear chunks where each chunk gets its own
master file table and assorted structures that continue where the
previous one left off. I don't know off the top of my head how large these
chunks are. I suspect volume size may also be a factor in determining that.

OK, so it sort of "stripes" its file structure metadata into the real
data as it goes along, which is rather like an auto-partitioning idea.

Prolly means it doesn't have to pre-create the structures, either, so
the benefit is the file system structure is never larger than it needs
to be (especially if now-unused zones have their structures
re-absorbed). I can see how I'd do this, i.e. as a linked (or
double-linked) list with missing zones skipped over and the last zone
terminated with a null. Effectively "formats" itself as it goes.

I'd want to know how linkages could be re-built from derived data, to
know that the structure is survivable/recoverable.
Now when you create a directory and put files in it, they are all grouped
near the physically closest master file table that is available. So this
significantly minimizes head movement.

OK. The testing conditions are small, frequently-purged files (like a
web cache) that create holes in the data mass, and slowly-growing
files, especially those that are always "in use". The process of
defragging would go about correcting for these, as well as garbage
collection of "deleted but retained for indexing purposes" stuff.

That's nice as it goes - in fact, it's *real* nice - but it still
doesn't help with the generic issue of travel between first-installed
code files as these have to be paged into RAM, and last-created new
files at the edge of free space.

To address that, you'd need an awareness of what files are to be
"frequently used" and which ones are not, as well as which files are
destined to be large and those that aren't.

PICK's pre-allocation of space - and NTFS's pre-allocation of MFT
space - are ways to hedge the fragging of slowly-growing files. They
also facilitate PICK's hashing access to file contents (given that a
"file" in PICK is actually a database full of items, thus more like a
directory in DOS/Windows terms).
It does not matter if the file is created near the beginning or end of the
disk, all the necessary file system structures will always be near it with
very little head movement.

I like the association of file content with its structural metadata,
which cuts down on head-skip for a given file (especially for a file
that was created in a single contiguous process, as those large video
files may have been). This scales up better than the FAT-era
assumption that the structural metadata would be held in RAM.
It only stops doing this if the file system becomes near full and it
simply isn't possible anymore. Then it is forced to fragment like NTFS
does.
OK.

Also, when Ext3 creates a file, if possible, it purposely leaves physical
gaps between the files it creates. So if a file then needs to grow, it can
grow without fragmenting as long as there is space in the gap.

OK... does sound like "mediocrity now", compared to one's usual drive
to minimise head stretch by defragging out free space to the end.
The file system handler also appears to be smart enough, when it copies
a file and therefore knows its size ahead of time, to preallocate a
linear block of space for the file (if available), so it can copy the
file without fragmentation.

That's OK when copying files in a single process, but AFAIK Windows'
"File open for writes" and "File open for reads/writes" don't take a
parameter for expected size. Often, many contexts do not "know" how
big the file will eventually be; it may not even be predictable by
type (e.g. ".AVI may be 500M+, .HTM likely < 100k").
Especially when using an operating system not really intended for more
than one user. =)

Heh... PICK was the opposite; multi-user from the get-go (the expected
usage model was "many dumb terminals") but each user was expected to
run one task at a time. So; multi-user, single-tasking, with machine
tasks (such as the Print Spooler) operating as extra "user accounts".
Well I can tell you that my 2.4GHz E6600 I have sitting at home blows my
3.2GHz P4 to tiny little pieces, and that's only with one core. It
annihilates my P4 when it actually gets to use an app that can use both
its cores.

You mean, neither the OS nor the app uses extra cores unless
specifically coded to do so? Nice for testing, but eww to live with
in an age where multi-core will be the norm (these Celeron-L are
expected to be the last single-core processors to be made).
Now don't ask me what weird-name generation the cores are.

Well, the E6600 sounds Conroe, i.e. "Core 2 Duo" (is a single-core
variant "Core Duo" or "Core 2 Mono"?) whereas the P4 would be Prescott
or later (so 800MHz FSB and either 1M or 2M L2 cache).

The Prescott (or rather, all of the "P4") generation were designed to
leverage clock speeds for performance, a.k.a. the NetBurst
architecture. As fab size shrunk, it was expected that this would
allow lower voltage and higher clock speeds for the same heat
production, but this failed to scale as expected; also, it was a very
un-Green way to do things in terms of global warming - both from
direct heat from the chips and from the extra heat from producing the
energy these chips consumed.

So when Prescott became a near-melt-down embarrassment, Intel looked to
the mobile processor team for the next generation. These folks went
for processing efficiency per clock cycle, even if that meant chips
that couldn't clock as high, so as to extend battery life and not
thermally overwhelm the compact laptop form factor.

I dunno how they did it, but the result is what one review describes
as "at the same clock speed, the new core is expected to be 80-100%
faster than the old core". By that logic, one core of your 2.4GHz
"new generation" chip should run like a P4 at 4GHz or better.

We already have 3GHz-clocked Conroe (dual cores and all), whereas
after X years and multiple fab shrinks, the old P4 designs still don't
reach the 4GHz mark. So the new design has "great legs" :-)
Seemed to work out really well on my dual core system! I am very happy
with it.

I've just brought home a new Intel G33 chipset mobo (which I will use
to replace Roger City G965 in my builds) and 2GHz Celeron-L, and can't
wait to build 'em, but the case place was closed for stock-take. Bah!


------------ ----- ---- --- -- - - - -
The most accurate diagnostic instrument
in medicine is the Retrospectoscope
 
OK, not sure how to interpret that, tho.

Yea, it is not a very scientific observation. Just a casual one. =)
Is this SCSI, BTW?

Nope, SATA II. I am playing with the thought of having a 6-disk SCSI striped
RAID with parity setup on my next system.
"In the beginning, was The File..." yup, PICK was pretty much like
that, too; like an "embedded metalanguage".

AFAIK, NTFS may put some frequently-accessed stuff in the middle of
the volume, on the assumption that the volume will fill up and this
middle zone will therefore be average- rather than worst-case head
travel, most of the time.

Now that you mention it, yea it does put some stuff towards the middle.
Though really, what is the "Middle"? There isn't just *one* middle with
today's drives having multiple platters. So really, NTFS putting things in
the middle of the linear address space of a drive is not really the middle
at all. It could be putting things on the very outer edge of a platter.
But with a nearly-empty HD, that just forces average-case mediocrity
in a situation that should be near best-case speed.


There are various aspects to that, both for speed (how many different
locations have to be pecked at during an atomic file op?) and
survivability (are key structures duplicated or otherwise derivable
from redundent info?). Then there's the impact of extra underfootware
code, such as Shadow Copy, etc.

I know Ext2/3 has duplicates of the superblock which defines the file
system itself.
As PICK does for every file, so NTFS pre-allocates space for MFT,
based on how large it is expected to be. So straight off, you're
saddled with a worst-case file structure size, requiring head travel
to traverse at least this, much as is the case with FATxx's FATs.

Yea but it doesn't seem to pre-allocate all that much. Diskeeper does a
pretty good job at showing MFT fragmentation. It's also capable of
defragmenting it at boot-time. And usually, I see the MFT all over the
disk in random places. =)
If there are fewer files (but they are larger, filling the space) then
some of that MFT space may be surrendered for use.

If there are more files than expected (i.e. lots of tiny ones), then
MFT will have to grow in conditions where it will likely fragment -
and if it is "always in use", it will never get defragged.

I'm not sure if it's compacted, either. For example:
- I have a 160G volume that's 90% full
- I copy off 90% of the file mass to another volume
- I then shrink this volume to (say) 50G
- am I stuck with a large, sparse MFT forever?

As I've said above, it can be defragmented with the right software. But
otherwise, you probably would be stuck. =)
OK, so it sort of "stripes" its file structure metadata into the real
data as it goes long, which is rather like an auto-partitioning idea.
Basically.


Prolly means it doesn't have to pre-create the structures, either, so
the benefit is the file system structure is never larger than it needs
to be (especially if now-unused zones have their structures
re-absorbed). I can see how I'd do this, i.e. as a linked (or
double-linked) list with missing zones skipped over and the last zone
terminated with a null. Effectively "formats" itself as it goes.

Actually, it does pre-create all the primary structures when you first
format the disk. This does mean that Ext2/3 has a fixed limit on how many
files you can theoretically create in the file system. The limit depends
on the size of the disk and can go up to 1.3 x 10^20 files. Seeing how
across 1 terabyte of storage I've just barely managed to scratch 300k
files, I am not worried about the limit. =)
I'd want to know how linkages could be re-built from derived data, to
know that the structure is survivable/recoverable.

That I don't know.
OK. The testing conditions are small, frequently-purged files (like a
web cache) that create holes in the data mass, and slowly-growing
files, especially those that are always "in use". The process of
defragging would go about correcting for these, as well as garbage
collection of "deleted but retained for indexing purposes" stuff.

That's nice as it goes - in fact, it's *real* nice - but it still
doesn't help with the generic issue of travel between first-installed
code files as these have to be paged into RAM, and last-created new
files at the edge of free space.

To address that, you'd need an awareness of what files are to be
"frequently used" and which ones are not, as well as which files are
destined to be large and those that aren't.

The idea of moving around files that are frequently used is not a bad one.
Though I do see some problems with it:

- Need enough contiguous space to group the files together.
- Need time to actually do so. It doesn't help any if it takes more time
to move the files around than it would take to simply read them from
opposite ends of the disk.
- There is no single "start" and "end" of the disk. Multiple platters and
2 sides per platter mean multiple starts and ends.
- How frequently does a file need to be accessed before it is considered
frequent? What if the usage pattern changes a lot?
OK... does sound like "mediocrity now", compared to one's usual drive
to mimimise head stretch by defragging out free space to the end.

Actually most defraggers these days don't really bother doing that
much anymore. Diskeeper actually has a section in its FAQ
answering why it doesn't. =)
That's OK when copying files in a single process, but AFAIK Windows'
"File open for writes" and "File open for reads/writes" don't take a
parameter for expected size. Often, many contexts do not "know" how
big the file will be eventually be; it may not even be predictable by
type (e.g. ".AVI may be 500M+, .HTM likely < 100k")

True, but this is partially mitigated by the fact that file data is kept
in memory as long as possible and committed as late as possible. That
way, even if the context isn't aware of the file size, the file system
handler often is, because it keeps the content in RAM when it can.
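
A quick way to see that late-commit behaviour from user land (a sketch; the
file name is made up): the data sits in the process's buffer and then in the
OS cache until something forces it out.

import os

with open("scratch.dat", "wb") as f:
    f.write(b"x" * 1_000_000)  # lands in the userspace buffer / OS cache
    f.flush()                  # push it down to the OS...
    os.fsync(f.fileno())       # ...and force the OS to commit it to disk

# Without the flush()+fsync(), the OS is free to gather and reorder the
# writes however it likes before anything actually hits the platters.
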
You mean, neither the OS nor the app uses extra cores unless
specifically coded to do so? Nice for testing, but eww to live with
in an age where multi-core will be the norm (these Celeron-L are
expected to be the last single-core processors to be made)

Well, both Windows and Linux use all available cores for task scheduling.
However, as far as the application is concerned, if it is a single-threaded
app it will not use more than 1 core. It doesn't matter if you had a
motherboard with a million cores on it; you can't spread a thread across
multiple cores.
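
To make that concrete, here's a minimal sketch of the kind of change an app
needs (a generic Python example, not any particular program): the work has
to be split up explicitly and handed to a pool before a second core gets
involved. I'm using processes rather than threads here so the CPU-bound
work can really land on separate cores.

from concurrent.futures import ProcessPoolExecutor

def crunch(chunk):
    # stand-in for CPU-bound work on one slice of the data
    return sum(x * x for x in chunk)

def crunch_all(data, workers=2):
    # Split the data and farm the pieces out to worker processes,
    # each of which can be scheduled on its own core.
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(crunch, chunks))

if __name__ == "__main__":          # guard required for multiprocessing
    print(crunch_all(range(1_000_000)))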

An application, in any operating system, does have to be specifically
written to take advantage of multiple cores or processors in a system.
This also presents challenges from a programming point of view.

Not all problems can easily be split into multiple threads and most apps
don't always process data that is very suitable for parallel processing. A
word processor for example will probably never benefit much from more than
1 core.

Multithreading is also not easy to deal with from a programming
point of view. There are a LOT of things that can go wrong, so a lot of
programmers usually avoid it. There also was very little, if any, benefit for it
in the desktop market before multi-core CPUs. Multithreading an app to run
on a Pentium 4 has very little, if any, benefit. On the contrary, it
could actually slow things down, as switching threads takes time. Generally
I've only used multiple threads where not doing so would freeze the UI
while it waits on the task to be completed (triggered by a button press
for example).

Multithreading can also sometimes introduce more overhead than it actually
gains in performance, though this is less of a problem with multi-core
CPUs than it is with a hyperthreaded P4. One of the easiest ways to multithread an
application, if applicable, is to create "tasks" and have a master thread
manage these tasks and distribute them to worker threads to perform. This
application-level task management does eat up time, however, and is not
suitable for all types of apps. Games sometimes like to use it, though.
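
A bare-bones version of that master/worker shape, purely for illustration
(not anyone's actual engine): a master thread feeds a queue of task objects
and a handful of worker threads pull from it.

import queue
import threading

tasks = queue.Queue()

def worker():
    while True:
        task = tasks.get()
        if task is None:      # sentinel: the master says we're done
            break
        task()                # perform one unit of work

# The master thread starts the workers, hands out tasks, then shuts down.
workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()
for n in range(10):
    tasks.put(lambda n=n: print("finished task", n))
for _ in workers:
    tasks.put(None)
for w in workers:
    w.join()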

I myself am in that boat right now. The application I am developing could in
theory benefit from multithreading but is currently single-threaded and
will only use one core. The reason I don't go multithreaded is that
doing so would greatly increase the complexity of my code, and at this
point in time I simply don't have the need performance-wise.

Even on my P4 system I can throw 300k triangles, which is a fairly complex
dataset, at the processor to be rendered and still get more than 60 frames
per second. And that's on a system that wouldn't even really benefit from
multithreading the rendering pipeline. My Core 2 Duo just yawns at me and
begs me to give it more data, with the other core sound asleep. =)

On top of that, my single threaded engine isn't even as fast as it could
be. I currently still store vertex data in system memory. If I actually
stored it in video memory the speeds would be even more insane than they
are.

So even for me, multithreading is somewhere at the extreme bottom of
performance enhancements that I might think about.

--
Stephan
2003 Yamaha R6

The reason there is never a day I find myself remembering you
is that there has never been a moment I forgot you
 
Yea, it is not a very scientific observation. Just a casual one. =)

If those were created in a "linear" way, then the process should be
fairly "smooth" from one HD to another, tho back-and-forth thrashing
would be inevitable if the source and destination were different
volumes on the same physical HD.
Nope, SATAII. I am playing with the thought of having a 6 disk SCSI striped
raid with parity setup on my next system.

I asked, because I was wondering about possible interface intelligence
that SCSI and perhaps S-ATA/NCQ are reputed to have. That may clump
any OS burps that may pop up during the copy ;-)

Is this a system with enough RAM to prevent paging?
Now that you mention it, yea it does put some stuff towards the middle.
Though really, what is the "Middle"? There isn't just *one* middle with
today's drives having multiple platters. So really, NTFS putting things in
the middle of the linear address space of a drive is not really the middle
at all. It could be putting things on the very outer edge of a platter.

I'm pretty sure linear addressing (LBA) will fill cylinders before
clicking heads forward. A "cylinder" being the logical construct of
all tracks on all platter surfaces that can be accessed without moving
the heads - that could span physical HDs in RAID 0, for example.

Exceptions might arise when HD firmware (or OS "fixing") relocates
failing sectors (or clusters) to OK spares. Shouldn't really be an
active "feature" in a healthy setup, tho.
I know Ext2/3 has duplicates of the superblock which defines the file
system itself.

NTFS duplicates the first few records of the MFT, and that's it. Not
sure if that includes the run start/length entries for all files,
which would be NTFS's equivalent to FAT1+FAT2.

It sounds as if Ext2/3 does something similar, depending on just how
far "defines the file system itself" takes you.
Yea but it doesn't seem to pre-allocate all that much.

Depends on how the NTFS was created, perhaps - i.e. formatted as NTFS,
or converted from FATxx, etc. I know there's an OEM setup parameter
to address this, which sets aside space for an eventual NTFS MFT after
the initial FATxx file system is converted later. I think this is to
cater for a "FAT32, install/image OS, convert NTFS" build method that
old build tools may require some OEMs to use.

Also, not everything marked "can't move" is MFT; some may be
pagefile, etc. Not sure how Diskeeper shows this info...
As I've said above, it can be defragmented with the right software. But
otherwise, you probably would be stuck. =)

Yup. Also, does defragging purge old entries?
Basically.

I kinda like the old FATxx design, for one particular reason; it's
easy to preserve and re-assert the file system structure in data
recovery scenarios, if you know exactly where it will be.
Actually, it does pre-create all the primary structures when you first
format the disk. This does mean that Ext2/3 has a fixed limit on how many
files you can theoretically create in the file system.

Hmm... that sounds like a missed "scalability" opportunity to me.

(keeping structure metadata and file content together...)
The idea of moving around files that are frequently used is not a bad one.

Since Win98, this has changed the objectives of "defrag" from
"defragging files" to "optimizing access to frequently used code", so
apps and the OS start up faster. This can have the effect of fragmenting
large files that have only some parts "frequently used", which would
be sacrilege to an "old logic" defragger.
Though I do see some problems with it:

- Need enough contiguous space to group the files together.

That's what the slow "first 10%" in a Win98 defrag is all about :-)
- Need time to actually do so. It doesn't help any if it takes more time
to move the files around than it would take to simply read them from
opposite ends of the disk.

Defrag = "duty now for the future", where the intention is to waste
time defragging at times the user isn't trying to work. Underfootware
defragging is a Vista idea that I have yet to warm to.

What's more of a problem is the thrashing that can happen when the
status of files is perceived to change. An overly-sensitive
"frequently-used" logic could cause that, so you need a bit of
hysteresis to keep that from bouncing around.
- There is no single "start" and "end" of the disk. Multiple platters and
2 sides per platter mean multiple starts and ends.

Logically, all surfaces, heads and tracks of "the disk" are considered
as one, and (hopefully) addressed in a sequence that minimizes head
travel. This logic has been present since DOS first dealt with
double-sided diskettes, where they are filled in sector, head, track
order (rather than the sector, track, head order you intuit).
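
For the curious, the classic mapping looks like this (a sketch with made-up
geometry figures; real drives have long since hidden their true geometry
behind LBA):

def lba_to_chs(lba, heads=16, sectors_per_track=63):
    # Classic LBA -> (cylinder, head, sector) mapping: sectors fill first,
    # then heads (the other surfaces of the same cylinder), and only then
    # does the cylinder step forward - so the heads move as rarely as possible.
    cylinder, rest = divmod(lba, heads * sectors_per_track)
    head, sector = divmod(rest, sectors_per_track)
    return cylinder, head, sector + 1       # sectors are numbered from 1

for lba in (0, 62, 63, 1007, 1008):
    print(lba, "->", lba_to_chs(lba))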

My first disk system didn't have that logic; it treated each side of a
diskette as a separate disk drive! That was a home-cooked add-on for
the ZX Spectrum, which I worked with quite a bit.
- How frequently does a file need to be accessed before it is considered
frequent? What if the usage pattern changes a lot?

Yup. Hello, context thrashing.
Actually most defraggers these days don't really bother doing that
much anymore. Diskeeper actually has a section in its FAQ
answering why it doesn't. =)

Which means if you want to concentrate travel within a narrow band,
then leaving it to defrag isn't the hot setup - better to size
partitions and volumes and apply your own control to what goes where.

Else, as far as I can see, you will always have a schlep from the
old/fast code at the "front" to the new stuff at the "end". The fix
for that is to suck out dead stuff from the "middle" and stash it on
another volume, then kick the OS's ass until it stops fiddling with
this inactive stuff via underfootware processes.

That includes the dead weight of installation archives, backed-up old
code replaced by patches, original pre-install wads of these patches,
etc. which MS OSs are too dumb to do. A "mature" XP installation can
have 3 dead files for every 1 live one... nowhere else would that sort
of inefficiency be tolerated.

Fair enough - that's an easy case to solve, and worth solving. I
think MS's logic is usually to create new files in the unbroken mass
of free space at the "end" of the file system, only filling up the
"lacunae" between files when the volume's wave of files hits the end
of the volume. For this reason, a volume that has run out of space is
worth defragging once you free up enough space, for the same reason
that that defrag will take a lot longer than usual :-)
True, but this is partially mitigated by the fact that file data is kept
in memory as long as possible and committed as late as possible. That
way, even if the context isn't aware of the file size, the file system
handler often is, because it keeps the content in RAM when it can.

Hmm... yes, I can see that can help. Means the start of the file is
only committed when the file is flushed for the first time.

Another strategy is to work on temp files, then rebuild the "real"
file only when the file is formally saved. This is what Word does,
and why there are always those ~ ghosts lying around if Word gets
bad-exited (those may be the "real" file, due to replace the one you
see only when saved).
Well, both Windows and Linux use all available cores for task scheduling.
However, as far as the application is concerned, if it is a single-threaded
app it will not use more than 1 core.

Sure, fair enough. By now, one might expect apps doing "heavy things"
to spawn multiple threads, and Vista's limits on what apps can do
encourage this, i.e. splitting underfootware into parts that run as a
service and as a "normal app" respectively.

These days, you need a spare core just to run all the malware ;-)
An application, in any operating system, does have to be specifically
written to take advantage of multiple cores or processors in a system.
This also presents challenges from a programming point of view.

Yup, no lie there. I think the compiler will take care of some
details, as long as you separate threads in the first place.

AFAIK NT's been multi-core-aware for a while, at the level of separate
CPUs at least. Hence the alternate 1CPU / multi-CPU HALs, etc.

Not sure how multiple cores within the same CPU are used, though - it
may be that the CPU's internal logic can load-balance across them, and
that this can evolve as the CPUs do. I do know that multi-core CPUs
are expected to present themselves as a single CPU to processes that
count CPUs for software licensing purposes.

It's been said that Vista makes better use of multiple cores than XP,
but often in posts that compare single-core "P4" processors with
dual-core "Core 2 Duo" processors at similar GHz, without factoring in
the latter's increased efficiency per clock.

So they may in fact only be telling us what we already know, i.e. that
the new cores deliver more computational power per GHz.
A word processor for example will probably never benefit much from
more than 1 core.

No, I can see that being an example where multiple cores would help;
background pagination, spell-checking, and tracking line and page
breaks, for example - the stuff that makes typing feel "sticky". Not
to mention the creation of output to be sent to the printer driver,
background saves, etc. Non-destructive in-place editing can be a
challenge, and solutions often involve a lot of background logic.
Multithreading is also not easy to deal with from a programming
point of view. There are a LOT of things that can go wrong, so a lot of
programmers usually avoid it. There also was very little, if any, benefit for it
in the desktop market before multi-core CPUs.

Some of that challenge applies to any event-driven UI, as opposed to
procedural or "wizard" UIs. IOW, solutions to that (which unlink each
action from the base UI module, and sanity-check states between the
various methods spawned) may also pre-solve for multi-core.
I've only used multiple threads where not doing so would freeze the UI
while it waits on the task to be completed (triggered by a button press
for example).

Sure. I used to "hide" some overhead by doing some processing
straight after displaying a UI, on the basis that the user will look
at it for a while before demanding attention (this is back in the PICK
days), but folks familiar with their apps will type ahead and catch you
out. There are all sorts of ways to go wrong here, e.g...
- spawn a dialog with a single button on it called OK
- when the user presses OK, start a process
- replace that button with one to Cancel
- when the process is done, report results
- replace that button with one for OK to close the dialog (a rough sketch of this pattern follows below)
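
For what it's worth, here is the shape of that pattern as a minimal
Python/Tkinter sketch (my own toy example, with the Cancel step left out for
brevity): the button kicks off a worker thread, is disabled while the work
runs, and the UI thread polls for completion instead of blocking.

import threading
import time
import tkinter as tk

def long_task(done):
    time.sleep(3)             # stand-in for the slow work
    done.set()

def on_click():
    done = threading.Event()
    button.config(text="Working...", state="disabled")
    threading.Thread(target=long_task, args=(done,), daemon=True).start()
    poll(done)

def poll(done):
    if done.is_set():
        button.config(text="OK", state="normal")   # report results here
    else:
        root.after(100, poll, done)   # check again in 100 ms; UI stays live

root = tk.Tk()
button = tk.Button(root, text="OK", command=on_click)
button.pack(padx=40, pady=40)
root.mainloop()
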
Multithreading can also sometimes introduce more overhead than it actually
gains in performance, though this is less of a problem with multi-core
CPUs than it is with a hyperthreaded P4.

I think true multicores may be more "transparent" than HyperThreading,
in that HT prolly can't do everything a "real" core can. So with HT,
only certain things can be shunted off to HT efficiently, whereas a
true multicore processor will have, er, multiple interchangeable cores.

There's no doubt that "computing about computing" can pay off, even in
this age of already-complex CPUs. The Core 2 Duo's boost over
NetBurst at lower clock speeds is living proof of that, and frankly, I
was quite surprised by this. I thought that gum had had all its
flavor chewed out of it; witness the long-previous "slower but faster"
processors from Cyrix, and AMD's ghastly K5.
One of the easiest ways to multithread an application, if applicable, is to
create "tasks" and have a master thread manage these tasks
and distribute them to worker threads to perform.

Isn't this pretty much what Windows' internal messaging stuff does? I
know this applies to inter-process comms, I just don't know whether
the various bits of an app are different processes at this level.
Perhaps some of this logic can be built into the standard code and UI
libraries that the app's code is built with?
I myself am in that boat right now. The application I am developing could in
theory benefit from multithreading but is currently single-threaded and
will only use one core. The reason I don't go multithreaded is that
doing so would greatly increase the complexity of my code, and at this
point in time I simply don't have the need performance-wise.

The problem is race conditions between the two processes. If you did
spawn one process as a "loose torpedo", the OS could pre-emptively
time-slice between them, and it could come to pass that the two halves
wind up on different cores.
Even on my P4 system I can throw 300k triangles, which is a fairly complex
dataset, at the processor to be rendered and still get more than 60 frames
per second. And that's on a system that wouldn't even really benefit from
multithreading the rendering pipeline. My Core 2 Duo just yawns at me and
begs me to give it more data, with the other core sound asleep. =)

Heh heh ;-)

I think modern games are often built on engines; specifically, an
engine to do graphics and another for sound. The AI behind the
character's activities can be split off as an asynchronous thread,
whereas sound and graphics have to catch every beat.
On top of that, my single threaded engine isn't even as fast as it could
be. I currently still store vertex data in system memory. If I actually
stored it in video memory the speeds would be even more insane than they
are.

So even for me, multithreading is somewhere at the extreme bottom of
performance enhancements that I might think about.

It may help at the OS level; e.g. when copying files, the AV scanner
could be running in the "spare" core. I think this was a large part
of the thinking behind HT... also, there will be network packets to be
batted off by the firewall, etc., and that's a coreful of stuff, too.
Well, perhaps not a complete coreful, but YKWIM.

Real-time stuff like MP3 playback (something else that may be built
into games, with the compression reducing head travel) can also
benefit from having a spare core to process them.

And then there's all the DRM trash to compute, too...


---------- ----- ---- --- -- - - - -
Proverbs Unscrolled #37
"Build it and they will come and break it"
 
