PC 4GB RAM limit

  • Thread starter: Tim Anderson
On Sat, 21 May 2005 04:15:39 GMT, "Phil Weldon" wrote:

How about audio and video over Wi-Fi? Should I worry that
HTML and PDF manuals contain color pictures and screen shots, rather than
lean and mean text only?

How about applications over wifi? They'll run like crap
thanks to the bloat. The world IS going mobile and will be
seeking FULL FEATURED apps that aren't so bloated. Even
good ole Microsoft liked the idea of web services for
thinner clients... only made possible with _less_bloat_.

The idea that someone is going to have to do without most
features and a PDF manual is just wrong. A PDF manual and a
couple of pics are (being generous) a half-dozen MB on
average, which accounts not at all for the remaining bloat in many
mainstream apps, let alone Windows. You're welcome to
disagree but in very few years you'll be proven wrong...
PDAs and notebooks will merge into new devices more mobile,
yet expected to be able to do common desktop tasks... it's
inevitable, so to a certain extent the discontent towards
bloat will soon enough be lessened. We're just not there
"yet".
 
kony said:
It's really rather pointless to try to compare any '56
technology to today. While I agree that disk performance
does quite often dictate app load times, there isn't even a
correlation even if the ratios you had selected were accurate;
the only relevant factor is the *PC* as it stands today and
what its present bottlenecks are. In that regard I do
still agree that it is disk-bound, EXCEPT that this is only
true to a limited extent, then memory caching comes into
play.

Memory caching is useful only if the disk is small and/or disk access
exhibits strong locality. On large databases with completely random
access, caching is practically useless--the required block is never in
the cache.
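
As a quick illustration of that point, here is a minimal Python sketch with made-up sizes (one million database pages, a cache holding 1% of them): under uniform random access, an LRU cache's steady-state hit rate comes out to roughly cache size divided by database size.

    from collections import OrderedDict
    import random

    DB_PAGES = 1_000_000   # hypothetical database of one million pages
    CACHE_PAGES = 10_000   # cache holds 1% of the database

    cache = OrderedDict()  # simple LRU: most recently used at the end
    hits, N = 0, 200_000
    for _ in range(N):
        page = random.randrange(DB_PAGES)  # completely random access
        if page in cache:
            hits += 1
            cache.move_to_end(page)
        else:
            cache[page] = True
            if len(cache) > CACHE_PAGES:
                cache.popitem(last=False)  # evict the least recently used page
    print(f"hit rate: {hits / N:.2%}")     # lands near 1% = CACHE_PAGES / DB_PAGES
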
If you find disk performance that much of a problem I
suggest you buy more memory and not reboot the system very
often so all your core apps are cached as much as possible.

That won't help. Not only is cache useful only when the same disk
blocks are referenced over and over (for directories, for example), but
many applications insist on _writing_ to disk thousands of times, and
that cannot be safely cached.
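
For illustration, a minimal Python sketch of the kind of write that caching cannot absorb (the filename is hypothetical): a synchronous write forces the data to non-volatile media before returning, so the OS write cache cannot simply soak it up.

    import os

    fd = os.open("journal.log", os.O_WRONLY | os.O_CREAT, 0o644)
    os.write(fd, b"one small record\n")
    os.fsync(fd)   # block until the drive reports the data durable
    os.close(fd)

Repeat that thousands of times and each write pays the full disk latency.
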
The ideas of yesteryear, that there is no benefit to putting a pagefile
on a ramdrive, are
quite wrong. So long as the system has _AMPLE_ memory it is
a clearly superior alternative.

Just eliminate the file if you have that much memory.
 
kony said:
The world IS going mobile ...

Not any time soon. Silicon Valley might be mobile, but a lot of people
in the world are still running Windows 3.x.

Mobile is a lot of time and trouble for nothing unless you really,
really need to have wireless, mobile access to a network. I certainly
don't, and yet I'm a power user of the Net and computers generally by
just about any standard of measure.
... and will be
seeking FULL FEATURED apps that aren't so bloated. Even
good ole Microsoft liked the idea of web services for
thinner clients... only made possible with _less_bloat_.

The real problem is that computers can't get faster forever, and
eventually the software will have to deal with hardware that cannot
infinitely expand in capacity to cover any amount of carelessness in
design or coding.
PDAs and notebooks will merge into new devices more mobile,
yet expected to be able to do common desktop tasks ...

They may be able to do it, but that doesn't mean that everyone will want
it. I like real keyboards and mice myself, and when I'm away from my
desk, I have a life outside of cyberspace.
 
Memory caching is useful only if the disk is small and/or disk access
exhibits strong locality. On large databases with completely random
access, caching is practically useless--the required block is never in
the cache.

It's all relative. If you have need to access such a large
database, have it on a server with ample memory or add that
to the desktop system. A PC is not typically used for large
databases though, or rather, that is a small minority of
its uses and isn't an argument that extends to the
majority of PC uses.

That won't help. Not only is cache useful only when the same disk
blocks are referenced over and over (for directories, for example), but
many applications insist on _writing_ to disk thousands of times, and
that cannot be safely cached.

Contrary to your claims, it DOES help.
If you find your app writing thousands of times, then either
A) you should've considered a ramdrive, or
B) they are larger writes, where the increased transfer rate
of a modern drive does in fact help, more so than latency

The fact is, drives ARE benchmarked to replicate real use (both
synthetically and in real-world request types of scenarios)
and the newer drives do in fact perform quite well.

If your only argument is that they didn't improve as much as
(some other particular part) then it's your own fault for
not finding alternative ways to improve file performance
rather than just thinking there's no hope.

Just eliminate the file if you have that much memory.

Not a universal solution. You, being one who dislikes
bloat, should then like Win2k more than XP. 2K doesn't
allow disabling the swap file, at least not with *normal*
methods in the user interface.
 
Not any time soon. Silicon Valley might be mobile, but a lot of people
in the world are still running Windows 3.x.

I see you just like to argue. The "world" applies to
technologically modern societies. Even in those, there will
be some who reject technology, but it doesn't change the
fact that those who want it, and can afford it, will be able
to have it.

Mobile is a lot of time and trouble for nothing unless you really,
really need to have wireless, mobile access to a network.

Nonsense. Mobile is for anyone who has a need to have
access to data for (any reason). It is certainly NOT
limited to networkability, though of course many will want it.

I certainly
don't, and yet I'm a power user of the Net and computers generally by
just about any standard of measure.

Absolutely not.
Having access to your data away from your PC is without
question _one_ standard of measure. If you don't "need"
that, fine, but holding yourself up as some ultimate power
user AND concluding that nobody else will want, need, or use this
functionality, is truly ridiculous. You do not speak for
anyone else's needs, only your own. There are already
plenty of people who lug around heavy notebooks, even more
would be expected to have this ability when the devices are
more portable and less expensive.

The real problem is that computers can't get faster forever, and
eventually the software will have to deal with hardware that cannot
infinitely expand in capacity to cover any amount of carelessness in
design or coding.


They may be able to do it, but that doesn't mean that everyone will want
it. I like real keyboards and mice myself, and when I'm away from my
desk, I have a life outside of cyberspace.

I suspect you haven't yet envisioned all the possibilities
for mobile data access. A computer does not have to be
your concept of "cyberspace", it's also all the things it
always was, including a paperless office, an information
source, a communication medium through traditional text,
banking, etc. If you wish to avoid information that's
another matter.
 
kony said:
It's all relative. If you have need to access such a large
database, have it on a server with ample memory or add that
to the desktop system.

You can't configure enough memory for very large, randomly-accessed
databases.
Contrary to your claims, it DOES help.

Cache requires that a page requested be in the cache. If the ratio of
pages on the database to pages in memory is high, and access is random,
chances are that pages will almost never be in cache when requested,
which effectively makes cache useless. This is actually what happens
for very large, randomly-accessed databases.
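
To put rough numbers on that (the sizes here are assumed, purely for illustration): the chance that a uniformly random request finds its page already cached is about cache size over database size.

    # Assumed sizes, purely for illustration.
    cache_gb, db_gb = 2, 500
    print(f"expected hit rate: {cache_gb / db_gb:.2%}")  # 0.40% -- nearly every request misses
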
If you find your app writing thousands of times, then either
A) you should've considered a ramdrive, or
B) they are larger writes, where the increased transfer rate
of a modern drive does in fact help, more so than latency

Ramdrive won't help if the writes actually have to be written to
non-volatile disk files. Transfer rate is unimportant for small writes,
and many writes are small (very few files are written in tens of
megabytes at a time).
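
Some illustrative arithmetic, using round numbers from this thread (about 8 ms access, 60 MB/s transfer), shows how access time swamps transfer time for a small write:

    ACCESS_MS = 8.0        # average seek plus rotational latency (assumed)
    TRANSFER_MB_S = 60.0   # sequential transfer rate (assumed)
    WRITE_KB = 4           # one small 4 KB write

    transfer_ms = WRITE_KB / 1024 / TRANSFER_MB_S * 1000
    print(f"transfer: {transfer_ms:.3f} ms of {ACCESS_MS + transfer_ms:.3f} ms total")
    # transfer is ~0.065 ms against 8 ms of access time; doubling the
    # transfer rate would shave well under 1% off each small write.
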
If your only argument is that they didn't improve as much as
(some other particular part) then it's your own fault for
not finding alternative ways to improve file performance
rather than just thinking there's no hope.

In some cases, there is truly no hope.
Not a universal solution.

Either you have pages found in cache, or you don't. If everything's in
cache, you don't need the file. If nothing is in cache, you don't need
the cache.
 
kony said:
I see you just like to argue.

No, I like to remain objective, and keep things in perspective.
The "world" applies to technologically modern societies.

Why doesn't the rest of the planet count?
Nonsense. Mobile is for anyone who has a need to have
access to data for (any reason). It is certainly NOT
limited to networkability, though of course many will want it.

Most people don't require mobile access to data, as they are occupied by
other things when moving about.
Having access to your data away from your PC is without
question _one_ standard of measure. If you don't "need"
that, fine, but holding yourself up as some ultimate power
user AND concluding that nobody else will want, need, or use this
functionality, is truly ridiculous.

Most power users I know are in the same category.
I suspect you haven't yet envisioned all the possibilities
for mobile data access.

Nobody has, but I've seen most of the implementations, and it's hard to
see their utility outside certain specific situations, except for geeks.
A computer does not have to be
your concept of "cyberspace", it's also all the things it
always was, including a paperless office, an information
source, a communication medium through traditional text,
banking, etc.

A lot of people use computers only when they have to.
 
You can't configure enough memory for very large, randomly-accessed
databases.

That has nothing to do with software bloat, and does not
apply in general to your argument. You had to extend to
one very specific scenario to make it even apply.

Cache requires that a page requested be in the cache. If the ratio of
pages on the database to pages in memory is high, and access is random,
chances are that pages will almost never be in cache when requested,
which effectively makes cache useless. This is actually what happens
for very large, randomly-accessed databases.

Again, trying to argue one specific use that is most
certainly NOT a typical use of a system, is no evidence of
the problem. Such giant databases needing "many" requests
from a single system would be even less common. Further,
you're being vague about database size; it IS quite likely a
large percentage of requests could be in memory unless this
database is atypically large and not set up properly.

Ramdrive won't help if the writes actually have to be written to
non-volatile disk files. Transfer rate is unimportant for small writes,
and many writes are small (very few files are written in tens of
megabytes at a time).

Very few apps make these "thousands" of disk writes either,
at least not in succession. Regardless of this "problem" you want to
claim, the world continues to function. No matter what the
bottleneck is to a particular use of a system, someone could
come along and claim "but if ONLY that were a lot faster,
it'd help". Well OF COURSE that would help, the concept of
a bottleneck is not a new one. Even so it's a pointless
thing to mention.

In some cases, there is truly no hope.

I'm sorry your computing experiences are so bleak.
The rest of us find computers able to do more than ever
before, faster. Bloat may counteract that to a degree, but
in the end it's all about your insistence that a particular
subsystem isn't evolving fast enough to suit you.

Monitors aren't evolving fast enough to suit me, and they
too impact productivity, but you don't see me demonizing
them.
Either you have pages found in cache, or you don't. If everything's in
cache, you don't need the file. If nothing is in cache, you don't need
the cache.

That's a pretty big assumption you're making, that any reads
from the disk wouldn't be repeated with subsequent access,
nor that caching from the drive can be based on a prediction
algorithm, such that even if the requests were truly
random, it would still be possible (perhaps even likely)
that the STR increase of more modern drives resulted in the
drive itself having cached something useful. Not every
time, but then this very narrow scenario you suggest is
certainly not what's being asked of modern drives most of
the time either.
 
No, I like to remain objective, and keep things in perspective.


Most certainly not. You're already being subjective
claiming that your needs would determine what others do with
regards to mobile computing.

Why doesn't the rest of the planet count?

Because you insist on making everything so narrow that it's
easy for you to comprehend.


Most people don't require mobile access to data, as they are occupied by
other things when moving about.

Most people didn't require computers at all in the beginning
of the PC revolution, but here they are!


Most power users I know are in the same category.

This is surprising to you, given that these smaller devices
have yet to hit the market? You weren't paying attention
when you read what I wrote.

Nobody has, but I've seen most of the implementations, and it's hard to
see their utility outside certain specific situations, except for geeks.

You mean, "hard for you".
You can't see the obvious even though there are already tons
of mobile phones, laptops, PDAs, etc. The adoption of these
could as well have been discounted before their arrival with
a similar "I wouldn't, therefore most wouldn't" attitude.
You really don't have any idea of who will or won't adapt to
newer technology if you simply ignore that it has value.

Don't be mobile then, nobody asked you, not that I recall.
I doubt anyone else will wait for you to lead them if you're
not interested.

A lot of people use computers only when they have to.

... and a lot of people don't.
I'm sure somebody told Henry Ford that the average Joe
wouldn't want a car over a horse but guess what....
 
Mxsmanic said:
Unfortunately, the transfer rate is not the problem. The access time
is the problem. Today it is around 6-8 milliseconds; forty years ago
it was around 40 milliseconds. That's not much of an improvement,
and the improvement that has occurred is mostly just a happy side
effect of greater data densities.

Sixty megabytes per second doesn't help much if you have to transfer
100 small blocks from different places on the disk and it takes 10
milliseconds to access each one.
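
The same point as rough arithmetic (figures taken from the paragraph above): at 10 ms per access, random small-block reads deliver a tiny fraction of the sequential rate.

    BLOCK_KB = 4       # small block, as in the example above
    ACCESS_MS = 10.0   # per-access time from the example

    random_mb_s = (BLOCK_KB / 1024) / (ACCESS_MS / 1000)
    print(f"random 4 KB reads: {random_mb_s:.2f} MB/s vs 60 MB/s sequential")
    # about 0.39 MB/s: the 100 blocks cost a full second of access time
    # to move less than half a megabyte.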


The first disk drives date from the mid-1950s.

Can you give us a link to a website about these disks? Maybe a museum or a
historical page. I'm interested in finding out more about them.

Thanks.
 
If you had ANY experience outside home use, you would not post as you do.
Repeating nonsense makes it no less nonsense. Even worse, when quoting you
seem to elide what doesn't fit your narrow perspective, as in the factor
improvements from 1956 to 2004, which are
Storage capacity: 45,000 X
Data transfer rate: 7000 X
Average access time: 46 X
Cost per Megabyte: 100,000,000 X

Most importantly, this is a comparison of the ONLY hard drive available in
1956 with a COMMODITY drive in 2004.

What exactly is it that makes you think 'most activities on a typical
desktop today are seriously disk-bound'? Now I'm beginning to wonder what
exactly YOU mean by 'software bloat'. And what YOU mean by 'disk bound'?

Certainly the disk stored data sets used by a 'typical desktop' are either
accessed entirely in a sequential manner (MP3 or WMA) or, if at random, MUCH
MUCH smaller than main memory. Even programs are loaded in a sequential
manner. Windows 2000 and XP ensure that programs are loaded sequentially,
and that many programs can be used even before all the necessary modules are
loaded.

Name even ONE 'typical desktop' application that is hard drive data transfer
speed limited. Can it be you are only exercised about program load times?
Is that all it is?

It seems either you don't think before posting, or you don't know
much about the subjects to think about.
I could mention overall system balance.
I could mention hard drive performance increase that can be realized by
using non-commodity drives, arrays, wider and faster buses. But I don't
think you would read that either.

I, and others, have done you the courtesy of treating your post seriously.
Should we continue?

Phil Weldon
 
I'm not sure what you mean by 'applications over wifi.' Do you mean using
an application server over wifi? I could see that use as requiring changes
in core application sizes, but with, for example, flash memory rapidly
increasing in capacity and speed, the application server model doesn't seem
right. If some sort of control to provide incremental cash flow is deemed
necessary, the data transfer between application server and user could be quite
small, no matter how large the application.

My questions were just to indicate unconcern for 'software bloat'. I do not
perceive it as a serious problem. Software is going to be more expensive,
have less performance, and be harder to use WITHOUT what you seem to
consider 'software bloat.'

I'm not wedded to this view, but it seems more rational than all the
hand-wringing over 'software bloat.' Now code that does not do proper error
checking is another, more serious matter.

Phil Weldon
 
Ok, YOU try to manage 512 Mbytes of image data, consisting of up to 300
images WITHOUT card test, defragmentation, and file recovery programs.

You are just disagreeing to be disagreeable, without any thought of why such
applications ARE necessary.

Phil Weldon
 
http://www-03.ibm.com/ibm/history/
http://www.cedmagic.com/history/ibm-305-ramac.html
http://www.asme.org/history/roster/H090.html

There are lots more websites that cover the development of
mechanical/electronic computing machines. There is even a newsgroup,

alt.folklore.computers

with posts like the following:

****

"A brief mention of the Control Data 1700, described in a manual on Al
Kossow's web site, has been added to the 16-bit architectures I describe
on my page at

http://www.quadibloc.com/comp/co0304.htm

I can't blame Al for the misprints and errors that appeared in Control
Data's manual for that computer - the illustrations for the shift and
conditional skip instructions got skipped, and the diagram shows the
same opcode for the shift instructions as the register to register
instructions (but the text gives them distinct opcodes) - but I think I
managed to sort it out.

It seems to me that this one was designed by Seymour Cray, and, if so,
he did stray from 24, 36, 48, 30, and 60 bit words at least once in his
career prior to the Cray 1. Even if this unit did use standard 18-bit
memory modules (one bit for program protection, one bit for parity).

John Savard
http://www.quadibloc.com/index.html"

****

It's a fascinating subject, computer history. Enjoy.

Phil Weldon
 
Mxsmanic said:
Phil Weldon writes:




So access times have improved by 1000:1.

I think the point was that most people would not consider 1000:1 (not to
mention the other measures) 'only moderate'.
Processor speeds over the same
period have improved by roughly 1000000:1.

Even more impressive.
Which means that disk
drives today are 1000 times _slower_ than disk drives fifty years ago,
in comparison to processors.

No, what it means is you've made an artificial and inappropriate construct
to suggest an invalid impression; that disk drives are 'slower' (yes, I
know, "in comparison...") when, in fact, nothing is "slower" and certainly
not "1000 times _slower_."

It is well known that mechanical devices do not benefit from silicon
manufacturing processes improvements so the 'comparison' simply has no
meaning other than restating the obvious, that mechanical devices aren't
integrated circuits.
This means that most activities on a
typical desktop today are seriously disk-bound.

Setting aside what an 'average desktop' is and that it has no meaning to
computers of 25 years ago, much less 50, that conclusion would only have a
chance if one presumes there is some fixed relationship between
'activities' (whatever that means, today OR 50 years ago) and 'disk I/O'
that has remained constant over the last 50 years.
 
Mxsmanic said:
David Maynard writes:




Yes. So?

The context you snipped dealt with hardware, in particular mainframe cost
of manufacture and, in particular, the cost of dedicated I/O processors and
other hardware architectural features (in the context of PCs using memory
mapped I/O being "really stupid" because "mainframes have done it [I/O
processors] for decades) and, in that context, you said "UNIX is close to a
mainframe system, though."

UNIX is not hardware.
 
Mxsmanic said:
David Maynard writes:




These applications are not representative in that they are
compute-bound.

A "compute-bound" application is perfectly representative when one is
discussing whether all the processor power has been 'consumed' by whatever,
and it obviously hasn't been.
Very few people are encoding video,

Perhaps you should just say you aren't because I'm not as convinced that
the thousands/millions who do will suddenly stop for the convenience of the
argument.
and game-playing is
a very specialized market.

Setting aside that the "very specialized market" is so huge one might
wonder if PCs are used for much else these days, it still shows that the GUI
has not 'consumed all the processing power'.
It's more than a feel; I've run experiments.

The only thing you've presented in here is 'feel', plus dismissing tangible
proof to the contrary (see above).
The problem is not minimizing a window (although it takes a lot longer
than 10 ms, since I can watch it happen--it's closer to 100 ms on my
machine).

I'm impressed by the ms resolution of your eyeballs.
The problem is the cumulative horsepower consumed by all
these bells and whistles.

What you call a 'problem' others find quite pleasing. If you don't then
turn them off.
And software bloat also _dramatically_
increases disk I/O,

It's generally true that complex programs take longer to load than simple
brain dead programs because features/capabilities require code.

What isn't true would be the presumption that your opinion of useful
features/capabilities is universally shared.
and disk I/O is extremely slow. Most of the delays
on a PC that are not network-related today are due to the slow speed of
disk drives.

Previously you claimed the _processor_ was all 'consumed' and now you're
blaming everything on constipated network and disk speed.

Well, I suppose, since the processor is going to just sit there for network
and disk I/O we might as well ring a few bells and blow some whistles in
the GUI while we're waiting.
No, I'm saying that virtually all the additional horsepower that modern
computers have added over the computers of the olden days has been
wasted by software bloat.

Simply rewording the 'no faster' claim doesn't change it.
On a 286, it used to take several seconds to save a document. Today, it
still takes several seconds to do that,

Even if that were true it is an inappropriate measure for the claim. I can
make similar ones about my car: "Cars are no faster today than 50 years ago
because it still takes just as long for me to close the door when I get out."

The 'measure' may be 100% 'true', but it's inappropriate and the claim
doesn't follow from it.

Yours is even worse because, while one could make a half-hearted, albeit
fallacious, argument that a car 50 years ago could handle the same
functions they do today, it would be, and is, absurd to even suggest an old
286 comes even close to what a modern computer can do, and not because of
any lack of logic ability in the instruction set but because, by
comparison, they're incredibly SLOW, regardless of how 'efficient' the code is.
even though my computer today is
_supposedly_ a thousand times faster.


And who told you it's "a thousand times faster?" And for what purpose is it
"_supposedly_ a thousand times faster?"

I'd half expect a silly claim like that from the computer illiterate but
not from someone who should know better.
 
kony said:
We seem to have drifted off of the prior off-topic topic,
but in my mind the more significant factor is x86 rather
than Pentium era departures, but even so, there is no real
gain in having ~ 386 compatibility when further progress has
been made towards more modern mobile processors. Perhaps
the process size matters in outer space as one poster
mentioned but otherwise, there are more suitable modern
alternatives.

I agree.
 
kony said:
C'mon now, even with your argument you can see the flaw in
that.

My point all along has been that the argument is flawed and that was an
attempt at proof by contradiction.
 
