PC 4GB RAM limit

  • Thread starter: Tim Anderson
Conor said:
Except that it does.

No, it doesn't. Encode a video on an old one and a new one, then tell me
that 'bloat' has 'consumed' all the power of the new processor. Or try
running one of the latest game releases.
Compare the average spec of a box from 1996-1998
with what we have today, yet they feel no quicker.

If you have been following the thread then you know my point all along has
been the "appropriately inappropriate measure" and claiming one can
determine the processor has been 'consumed by bloat' from simply a 'feel'
of the GUI is about as inappropriate as one can get.

For one, there is little reason to speed up the GUI because you can blink
your eye only so fast. So go ahead and tell me you could 'feel' the
difference between a window minimizing in 100 µs vs. 10 ms.

Typing is a minor part of what goes on.

Of course it is, but 'text' has been Mxsmanic's 'frame of comparison' in
other messages.

I actually sell old hardware. I make a decent amount selling refurb P3
systems. I stick Win98 on them and they're great for web/office.

I agree that an older system can be quite usable for 'web/office', and have
them myself, but Mxsmanic isn't saying you don't need a 'fast' computer to
do those tasks. He's saying a modern computer is 'no faster'... period.
 
Mxsmanic said:
David Maynard writes:




No. Such systems would simply not be necessary at all, except for a
handful of applications.

What 'such systems'?

Your current methodology of snipping out all relevant context isn't helping
any.

Old hardware and software are no longer available.

I got plenty. I'll sell you one.
 
Mxsmanic said:
David Maynard writes:




There are not enough applications to run on them for the mass market,
and they generally are not single-user, GUI-oriented machines.

Like I said, they are not "PCs," aren't intended to be "PCs," and aren't
designed to be "PCs."

And if you can't 'get rich' building them then saying they have a "95%
margin" and are a "cash cow" is grossly misleading.
UNIX is close to a mainframe system, though, and a lot of people run
some version of that. They do so because UNIX is cheap.

"UNIX" is not hardware.
 
David said:
CBFalconer wrote:
.... snip ...


Talk about 'bloat'. Why would I want the system to keep multiple
copies just so it can perpetually rediscover I've got the same
processor and then waste lord knows how many resources hopping
around the system informing and reconfiguring everything?

We've been doing the same sort of thing for many years without
bloat. Take my ddtz for CP/M for example, which has been around
for about 25 years. It occupies a grand total of about 7k bytes
code space, and adapts itself to the cpu it runs on. It does cpu
detection on initialization, sets a flag, and several operations
check that flag. The bloat is trivial, maybe 100 bytes. You can
see just how it was done by examining the source on my site (see
organization header), if you wish.

There are short routines available for the whole family of x86
processors to detect the cpu. All it takes is a modicum of
forethought.
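
In C, that detect-once, check-a-flag pattern looks roughly like the sketch
below. The names and the detection stub are illustrative only (ddtz uses its
own probe for the CP/M-era CPUs it supports); the point is that each
operation pays only a single flag test.

#include <stdio.h>

/* Minimal sketch of the detect-once, check-a-flag pattern described
 * above.  The names and the detection test are placeholders; a real
 * program would probe for a CPU-specific instruction or feature here. */

static int cpu_has_fast_path = 0;   /* set once at startup */

static int detect_cpu(void)
{
    /* Stand-in for a real probe (e.g. executing a CPU-specific
     * instruction and checking whether it behaves as expected). */
    return 1;
}

static void init(void)
{
    cpu_has_fast_path = detect_cpu();   /* done exactly once */
}

static void do_operation(void)
{
    if (cpu_has_fast_path)
        puts("using the CPU-specific fast routine");
    else
        puts("using the generic fallback routine");
}

int main(void)
{
    init();
    do_operation();   /* every later call just tests the flag:
                         a few bytes of code, not 'bloat' */
    return 0;
}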

--
Some informative links:
http://www.geocities.com/nnqweb/

http://www.caliburn.nl/topposting.html
http://www.netmeister.org/news/learn2quote.html
 
Conor said:
Mxsmanic says...

**** me, don't want to use Linux then. Out of the box
installation of MDK10/Suse9.3 has over 30 processes, including
servers, running after setup.

There is a great difference between processes running and processes
active. Most of those are idle until something happens. Others
can run at very low priority and simply get out of the way of
anything else.
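
On a Linux box the distinction is easy to see from /proc: most of those
thirty-odd processes sit in state 'S' (sleeping) and use no CPU at all until
something wakes them. A rough C sketch, assuming Linux's /proc/<pid>/stat
layout (the third whitespace-separated field is the one-letter state),
counts them:

#include <ctype.h>
#include <dirent.h>
#include <stdio.h>

/* Count processes by scheduling state via /proc/<pid>/stat on Linux.
 * R = running/runnable, S = sleeping.  Rough sketch only: minimal
 * error handling, and it assumes the process name contains no spaces. */
int main(void)
{
    DIR *proc = opendir("/proc");
    struct dirent *de;
    int running = 0, sleeping = 0, other = 0;

    if (!proc)
        return 1;

    while ((de = readdir(proc)) != NULL) {
        char path[64], comm[256], state;
        int pid;
        FILE *f;

        if (!isdigit((unsigned char)de->d_name[0]))
            continue;                    /* skip non-PID entries */

        snprintf(path, sizeof path, "/proc/%s/stat", de->d_name);
        f = fopen(path, "r");
        if (!f)
            continue;                    /* process may have exited */

        if (fscanf(f, "%d %255s %c", &pid, comm, &state) == 3) {
            if (state == 'R')      running++;
            else if (state == 'S') sleeping++;
            else                   other++;
        }
        fclose(f);
    }
    closedir(proc);

    printf("running: %d  sleeping: %d  other: %d\n",
           running, sleeping, other);
    return 0;
}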

--
Some informative links:
http://www.geocities.com/nnqweb/

http://www.caliburn.nl/topposting.html
http://www.netmeister.org/news/learn2quote.html
 
The major offender in computer performance today is slow disk drives.
They've improved only moderately over disk drives from half a century
ago.

Define computer performance.

Certainly not a very thoughtful statement.

Fifty years ago, 1956:

The first hard drive was shipped in 1956 by IBM. The first RAMAC had a
capacity of 5,000,000 characters on 50 platters 24 inches in diameter
rotating at 1200 rpm. A single R/W head had to be positioned to the correct
platter surface as well as the correct track. Data transfer rate was less
than 9,000 seven-bit characters per second, recording density was 2,000
characters per sq. in., and access time was 0.6 seconds. Lease cost was
several thousand US dollars per month.

One year ago, 2004

Hitachi (successor to the IBM hard drive unit) shipped a Deskstar hard drive
with a capacity of 250 Gigabytes on 3 platters 3.5 inches in diameter
spinning at 7200 rpm, an average sustained data rate of 60 Megabytes per
second, and an average seek time of 8.5 milliseconds. Purchase cost about
$200 US.

Performance difference: 1956, the ONLY hard drive, vs. 2004, a commodity
hard drive:

Capacity: 4.4 Megabytes vs. 200,000 Megabytes | factor of improvement: 45,000 X
Data transfer rate: 8,800 cps vs. 61 Megabytes per second | factor of improvement: 7,000 X
Average access time: 600 milliseconds vs. 13 milliseconds | factor of improvement: 46 X
Cost per Megabyte (in 2004 US dollars, assuming a useful lifetime of 3 years,
not including a maintenance contract):
    ~ $450,000 US / 4.4 Megabytes vs. $200 US / 200,000 Megabytes =
    ~ $100,000 US per Megabyte vs. $0.001 US per Megabyte | factor of improvement: 100,000,000
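
Those factors are easy to sanity-check; a quick back-of-the-envelope C
calculation, with the values simply copied from the figures quoted above,
reproduces roughly the same numbers:

#include <stdio.h>

/* Back-of-the-envelope check of the improvement factors quoted above,
 * using the 1956 RAMAC and 2004 Deskstar figures as given. */
int main(void)
{
    double cap_1956_mb   = 4.4;        /* ~5,000,000 7-bit characters   */
    double cap_2004_mb   = 200000.0;   /* 250 GB drive, figure as quoted */
    double rate_1956_bps = 8800.0;     /* <9,000 characters per second  */
    double rate_2004_bps = 61e6;       /* ~61 MB/s sustained            */
    double seek_1956_ms  = 600.0;
    double seek_2004_ms  = 13.0;
    double cost_1956_mb  = 450000.0 / 4.4;     /* USD per megabyte */
    double cost_2004_mb  = 200.0 / 200000.0;

    printf("capacity:      %10.0f X\n", cap_2004_mb / cap_1956_mb);
    printf("transfer rate: %10.0f X\n", rate_2004_bps / rate_1956_bps);
    printf("access time:   %10.0f X\n", seek_1956_ms / seek_2004_ms);
    printf("cost per MB:   %10.0f X\n", cost_1956_mb / cost_2004_mb);
    return 0;
}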

Phil Weldon
 
Clearly you have a VERY broad definition of 'frill'. Why don't you give
your definition so we can be part of the same conversation?

Phil Weldon
 
The point is it isn't.



It is precisely the same thing: keeping backward compatibility for an
infinite period of time when technology has gone through over 10 years of
progression.

We seem to have drifted off of the prior off-topic topic,
but in my mind the more significant factor is x86 rather
than Pentium-era departures. Even so, there is no real
gain in keeping ~386 compatibility when further progress has
been made toward more modern mobile processors. Perhaps
the process size matters in outer space, as one poster
mentioned, but otherwise there are more suitable modern
alternatives.
 
Then use old hardware since nothing is faster anyway, from your "user
standpoint."

C'mon now, even with your argument you can see the flaw in
that.

It would seem several of us aren't going to agree, and since
these are not software forums, this discussion is not only
cluttering up the groups but could be much more insightful
if restarted in a more appropriate group where there are
participants more seasoned in such things.
 
But they've stuck at that point for the last few years.

They have gotten a little faster still.
About 4-5 years ago a typical PC drive was around 5,400,
maybe 7,200 RPM. STR was roughly 40 MB/s. Today it's over
60 MB/s. I'd be satisfied with a 50% performance increase
every 4-5 years, at least until we finally get to the point
where there are consumer/PC solid-state drives.

I don't find drive performance very problematic though,
except for long load times on games. Otherwise I'd just as
soon go back to 5,400 RPM 5.25" drives if it meant higher
capacity. IMO, one of the larger problems is the
price disparity between 512 MB and 1 GB memory modules, and
between sub-1 GB and larger modules.
 
David said:
No, it doesn't. Encode a video on an old one and a new one, then tell me
that 'bloat' has 'consumed' all the power of the new processor. Or try
running one of the latest game releases.

These applications are not representative in that they are
compute-bound. Very few people are encoding video, and game-playing is
a very specialized market.
If you have been following the thread then you know my point all along has
been the "appropriately inappropriate measure" and claiming one can
determine the processor has been 'consumed by bloat' from simply a 'feel'
of the GUI is about as inappropriate as one can get.

It's more than a feel; I've run experiments.
For one, there is little reason to speed up the GUI because you can blink
your eye only so fast. So go ahead and tell me you could 'feel' the
difference between a window minimizing in 100 µs vs. 10 ms.

The problem is not minimizing a window (although it takes a lot longer
than 10 ms, since I can watch it happen--it's closer to 100 ms on my
machine). The problem is the cumulative horsepower consumed by all
these bells and whistles. And software bloat also _dramatically_
increases disk I/O, and disk I/O is extremely slow. Most of the delays
on a PC that are not network-related today are due to the slow speed of
disk drives.
I agree that an older system can be quite usable for 'web/office', and have
them myself, but Mxsmanic isn't saying you don't need a 'fast' computer to
do those tasks. He's saying a modern computer is 'no faster'... period.

No, I'm saying that virtually all the additional horsepower that modern
computers have added over the computers of the olden days has been
wasted by software bloat.

On a 286, it used to take several seconds to save a document. Today, it
still takes several seconds to do that, even though my computer today is
_supposedly_ a thousand times faster.
 
Al said:
You forgot the fact that a disk, in 1970, was being used by the several
or many jobs or user tasks running concurrently on a mainframe. That
could be in the tens to hundreds.

You forget that all of those jobs together were starting fewer I/Os per
second than an average desktop today.
Transfer rate counts too. A Photoshop image can be 10s of MB and I'd
hate to have to wait each time that got read or written to an old
disk.

Very few disk I/Os involve a lot of data transfer. Disk I/Os are very
numerous but often very small, and completely random.
 
Phil said:
Certainly not a very thoughtful statement.

Fifty years ago, 1956:

The first hard drive was shipped in 1956 by IBM. The first RAMAC had a
capacity of 5,000,000 characters on 50 platters 24 inches in diameter
rotating at 1200 rpm. A single R/W head had to be positioned to the correct
platter surface as well as the correct track. Data transfer rate was less
than 9,000 seven-bit characters per second, recording density was 2,000
characters per sq. in., and access time was 0.6 seconds. Lease cost was
several thousand US dollars per month.

So access times have improved by 1000:1. Processor speeds over the same
period have improved by roughly a 1000000:1. Which means that disk
drives today are 1000 times _slower_ than disk drives fifty years ago,
in comparison to processors. This means that most activities on a
typical desktop today are seriously disk-bound.
 
Phil said:
Clearly you have a VERY broad definition of 'frill'. Why don't you give
your definition so we can be part of the same conversation?

Card test, defragmentation, and file recovery programs.
 
So access times have improved by 1000:1. Processor speeds over the same
period have improved by roughly a 1000000:1. Which means that disk
drives today are 1000 times _slower_ than disk drives fifty years ago,
in comparison to processors. This means that most activities on a
typical desktop today are seriously disk-bound.


It's really rather pointless to try to compare any '56
technology to today. While I agree that disk performance
does quite often dictate app load times, there wouldn't even
be a correlation if the ratios you selected were accurate;
the only relevant factor is the *PC* as it stands today and
what its present bottlenecks are. In that regard I do
still agree that it is disk-bound, EXCEPT that this is only
true to a limited extent; beyond that, memory caching comes
into play. If you find disk performance that much of a
problem, I suggest you buy more memory and not reboot the
system very often, so all your core apps stay cached as much
as possible.


Try using a ramdrive too if it's really that much of a
bother. Doing so can alleviate some rather wasteful
tendencies of WinXP to use the pagefile even when it isn't
out of physical memory yet. The ideas of yesteryear, that
there is no benefit to putting a pagefile on a ramdrive, are
quite wrong. So long as the system has _AMPLE_ memory it is
a clearly superior alternative.
 