PC 4GB RAM limit

  • Thread starter: Tim Anderson
kony said:
On Sat, 21 May 2005 04:15:39 GMT, "Phil Weldon"

How about applications over wifi? They'll run like crap
thanks to the bloat. The world IS going mobile and will be
seeking FULL FEATURED apps that aren't so bloated. Even
good ole Microsoft liked the idea of web services for
thinner clients... only made possible with _less_bloat_.

The idea that someone is going to have to do without most
features and a PDF manual is just wrong. A PDF manual and a
couple of pics are (being generous) a half-dozen MB on
average, which accounts for none of the remaining bloat in
many mainstream apps, let alone Windows. You're welcome to
disagree, but in a very few years you'll be proven wrong...
PDAs and notebooks will merge into new devices that are more
mobile, yet expected to handle common desktop tasks... it's
inevitable,

And as processor speed and memory capacity in 'PDAs' increase so will app
size, just like with every other form factor.
so to a certain extent the discontent towards
bloat will soon enough be lessened. We're just not there
"yet".

Surely you're kidding. 'Bloat' isn't driven by 'bad code'. 'Bloat' is
driven by 'features'. Like picking which of 35 'tunes' your wireless phone
plays when it 'rings'. Or, better yet, load in a custom one. Tell me *that*
isn't 'bloat'.

Sometimes bloat is incredibly useful, and one wonders how mankind survived
before cameras were put into phones.
 
kony said:
You mean, "hard for you".
You can't see the obvious even though there are already tons
of mobile phones, laptops, PDAs, etc. The adoption of these
could as well have been discounted before their arrival with
a similar "I wouldn't, therefore most wouldn't" attitude.
You really don't have any idea of who will or won't adapt to
newer technology if you simply ignore that it has value.

"There is no reason for any individual to have a computer in his home."
Ken Olsen, the founder & CEO of Digital Equipment Corporation
 
In message <[email protected]>, "Phil Weldon" said:
Ok, YOU try to manage 512 Mbytes of image data, consisting of up to 300
images WITHOUT card test, defragmentation, and file recovery programs.

There is nothing special about removable media, you can use your
favourite management tools.

Moreover, defragmentation is not only useless, it's harmful. Useless,
because there is no seek time so there is no performance hit when files
are fragmented. Harmful because you only get a fixed number of writes,
and defragmenting rewrites most of the data on the card.
 
CBFalconer said:
... snip ...

We've been doing the same sort of thing for many years without
bloat.

It doesn't really matter how small the bloat is; it's still bloat when
there's little purpose to it, and I mean in the context of what we were
talking about: an entire system installed on a PC. Installed processors
just don't spontaneously change very often, it's even rarer that they
change processor class, and rarer still that they change to an earlier
processor class (e.g. an installed 386 kernel would be happy enough on a
686 but not vice versa).

Take my ddtz for CP/M for example, which has been around
for about 25 years. It occupies a grand total of about 7k bytes
code space, and adapts itself to the cpu it runs on. It does cpu
detection on initialization, sets a flag, and several operations
check that flag. The bloat is trivial, maybe 100 bytes. You can
see just how it was done by examining the source on my site (see
organization header), if you wish.

There are short routines available for the whole family of x86
processors to detect the cpu. All it takes is a modicum of
forethought.
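
For illustration only (this is not CBFalconer's ddtz source, just the
same detect-once-and-check-a-flag pattern sketched in C, assuming a
GCC-style compiler targeting x86):

    /* Minimal sketch: detect the CPU once at startup, cache the result
     * in a flag, and branch on the flag later.  Uses GCC's <cpuid.h>
     * helper; the "trivial bloat" is the one detect_cpu() call plus
     * the flag tests. */
    #include <cpuid.h>
    #include <stdio.h>

    static int have_sse2 = 0;          /* set once at initialization */

    static void detect_cpu(void)
    {
        unsigned int eax, ebx, ecx, edx;
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            have_sse2 = (edx & bit_SSE2) != 0;
    }

    int main(void)
    {
        detect_cpu();
        puts(have_sse2 ? "using the SSE2 code path"
                       : "using the generic code path");
        return 0;
    }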

We weren't talking about an isolated program. We were talking about the
entire system essentially recompiling itself (and I say that since the
Linux kernels generally are separate compilations), or picking which
version on every boot, when in the vast majority of cases there's no need.
 
David said:
A "compute-bound" application is perfectly representative when one is
discussing whether all the processor power has been 'consumed' by whatever,
and it obviously hasn't been.

Mainly because disk and network delays prevent it.
Perhaps you should just say that YOU aren't, because I'm not as convinced
that the thousands/millions who do will suddenly stop for the convenience
of the argument.

The vast majority of computer users are not encoding video.
Setting aside that the "very specialized market" is so huge that one might
wonder if PCs are used for much else these days, it still shows that the
GUI has not 'consumed all the processing power'.

The market is limited largely to adolescent boys; most other computer
users (including the vast majority of women) don't play video games with
any significant frequency.

The fact that things like DirectX are needed to even get games to run
demonstrates how completely they exhaust available horsepower, no matter
how high that horsepower is. However, machines that are woefully
inadequate for gaming are often generously dimensioned for just about
any other type of application short of weather prediction or nuclear
simulations.
I'm impressed by the ms resolution of your eyeballs.

Most people can resolve considerably less than 100 ms.
What you call a 'problem' others find quite pleasing.

Others generally don't care.
It's generally true that complex programs take longer to load than simple
brain dead programs because features/capabilities require code.

These programs are not just doing I/O to load; they are doing I/O
constantly.
What isn't true would be the presumption that your opinion of useful
features/capabilities is universally shared.

It's not shared by geeks, but it is shared by end users, and they are a
much larger part of the user community.
Previously you claimed the _processor_ was all 'consumed' and now you're
blaming everything on constipated network and disk speed.

The processor is usually the strongest link in the chain, even though it
spends most of its time waiting for memory modules to respond.
Well, I suppose, since the processor is going to just sit there for network
and disk I/O we might as well ring a few bells and blow some whistles in
the GUI while we're waiting.

But this becomes a problem if we don't have to wait for network and disk
I/O. Then we end up waiting for the bells and whistles.
Even if that were true it is an inappropriate measure for the claim. I can
make similar ones about my car: "Cars are no faster today than 50 years ago
because it still takes just as long for me to close the door when I get out."

That's not an appropriate analogy, but you can accurately say that cars
are effectively no faster than 50 years ago because it still takes the
same amount of time to get to work in them.
Yours is even worse because, while one could make a half-hearted, albeit
fallacious, argument that a car 50 years ago could handle the same
functions cars do today, it would be, and is, absurd to even suggest an
old 286 comes even close to what a modern computer can do, and not because
of any lack of logic ability in the instruction set but because, by
comparison, they're incredibly SLOW, regardless of how 'efficient' the
code is.

Much of what modern, well-written applications require is within the
capabilities of a 286. The 286 is indeed slower but an application
written for maximum efficiency on a 286 might well match a bloated
application on a modern system in terms of response time.
And who told you it's "a thousand times faster?"

From five MHz to 3200 MHz, plus optimizations that increase the number
of instructions that can be executed on average per clock cycle.
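(Rough arithmetic, mine rather than the poster's: 3200 MHz / 5 MHz is a
factor of 640 in clock rate alone, and since early x86 chips needed
several clock cycles per typical instruction while modern processors
average one or more instructions per cycle, the combined factor
comfortably exceeds 1000.)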
And for what purpose is it "_supposedly_ a thousand times faster?"

Anything that requires computing power.
I'd half expect a silly claim like that from the computer illiterate but
not from someone who should know better.

I'd expect argument without personal attacks from the computer literate.
But some of them are angry young males.
 
~misfit~ said:
Can you give us a link to a website about these disks? Maybe a museum or a
historical page. I'm interested in finding out more about them.

There are dozens of such sites. Google on "hard disk history." Many of
them are quite detailed.
 
Phil said:
If you had ANY experience outside home use, you would not post as you do.
Repeating nonsense makes it no less nonsense. Even worse, when quoting,
you seem to elide what doesn't fit your narrow perspective, as in the
factor improvements from 1956 to 2004, which are:
Storage capacity: 45,000 X
Data transfer rate: 7000 X
Average access time: 46 X
Cost per Megabyte: 100,000,000 X

So: Which is the weakest link here? Answer: Disk access times, all
else being equal. Worse yet, modern applications do a lot more disk I/O
than old applications did.
What exactly is it that makes you think 'most activities on a typical
desktop today are seriously disk-bound'?

The fact that the network card and the processor are idle most of the
time even while the user is waiting for an application to respond. The
application is usually waiting on disk I/O if there is a delay
(exceptions include things like browsers, which are usually waiting on
the network).

There are some examples of compute-bound processes, though, including
browsers that are rendering, many applications when they are
initializing, etc. In these cases you'll see a difference with
processor power increases.
Now I'm beginning to wonder what exactly YOU mean by 'software bloat'.

Containing far more code than is strictly necessary to fulfill its
function.
And what do YOU mean by 'disk bound'?

A process is disk-bound when it spends most of its time waiting for disk
I/O to complete (excluding time spent waiting for user input).
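
A crude way to see the distinction in practice (my sketch, not anything
from the thread): compare the CPU time a piece of work consumes against
the wall-clock time it takes. If the CPU time is a small fraction of the
elapsed time, the process spent most of its time waiting on something,
typically disk or the network.

    /* Rough, illustrative test: CPU time vs. wall-clock time.
     * clock() is processor time per the C standard (implementations
     * vary in how closely they approximate it).  The stand-in
     * workload below is hypothetical; replace it with whatever you
     * actually want to classify. */
    #include <stdio.h>
    #include <time.h>

    static void do_some_work(void)     /* stand-in workload */
    {
        volatile double x = 0;
        for (long i = 0; i < 100000000L; i++)
            x += i * 0.5;
    }

    int main(void)
    {
        clock_t c0 = clock();          /* CPU time used so far */
        time_t  w0 = time(NULL);       /* wall-clock time */

        do_some_work();

        double cpu  = (double)(clock() - c0) / CLOCKS_PER_SEC;
        double wall = difftime(time(NULL), w0);

        printf("%.2fs of CPU in %.0fs elapsed\n", cpu, wall);
        if (wall > 0 && cpu / wall < 0.2)
            puts("mostly waiting (I/O-bound, by this rough test)");
        else
            puts("mostly computing (CPU-bound, by this rough test)");
        return 0;
    }
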
Certainly the disk-stored data sets used by a 'typical desktop' are either
accessed entirely sequentially (MP3 or WMA) or, if at random, are MUCH
MUCH smaller than main memory.

Cache can help there, but it doesn't do much when the disk must be
_written_.
Name even ONE 'typical desktop' application that is hard drive data transfer
speed limited. Can it be you are only exercised about program load times?
Is that all it is?

No, it's everything. Clicking on a link in MSIE just now, for example,
required more than 100 disk reads and just under 60 disk writes. The
rest of the delay was rendering time, which is mainly compute-bound.
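
Those counts presumably came from a monitoring tool; for what it's worth,
here is a minimal sketch of how a Windows program can read its own I/O
counters via the documented GetProcessIoCounters call (my illustration,
not how the poster measured it; note the counters cover all I/O, not just
disk):

    /* Print cumulative I/O counters for the current process.
     * Requires Windows 2000 or later; link against kernel32. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        IO_COUNTERS io;
        if (GetProcessIoCounters(GetCurrentProcess(), &io)) {
            printf("reads:  %llu ops, %llu bytes\n",
                   (unsigned long long)io.ReadOperationCount,
                   (unsigned long long)io.ReadTransferCount);
            printf("writes: %llu ops, %llu bytes\n",
                   (unsigned long long)io.WriteOperationCount,
                   (unsigned long long)io.WriteTransferCount);
        }
        return 0;
    }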
 
David said:
I think the point was that most people would not consider 1000:1 (not to
mention the other measures) 'only moderate'

They are trivial compared to 1000000:1.
No, what it means is you've made an artificial and inappropriate construct
to suggest an invalid impression; that disk drives are 'slower' (yes, I
know, "in comparison...") when, in fact, nothing is "slower" and certainly
not "1000 times _slower_."

If other requirements expand faster than disk drives improve, then disk
drives become a bottleneck.
It is well known that mechanical devices do not benefit from silicon
manufacturing process improvements, so the 'comparison' simply has no
meaning other than restating the obvious: that mechanical devices aren't
integrated circuits.

Yes, but what most people overlook is that many systems are heavily
dependent on those mechanical devices, and so they are often
bottlenecks.
 
David said:
The context you snipped dealt with hardware, in particular mainframe cost
of manufacture and, in particular, the cost of dedicated I/O processors and
other hardware architectural features (in the context of PCs using memory
mapped I/O being "really stupid" because "mainframes have done it [I/O
processors] for decades") and, in that context, you said "UNIX is close to
a mainframe system, though."

UNIX is not hardware.

UNIX has traditionally run on minicomputers, so UNIX systems are not
mainframes (and at least in the past they were not PCs, either). Saying
UNIX in a historical hardware context thus implies minicomputer hardware
like the PDPs.
 
kony said:
Most people didn't require computers at all in the beginning
of the PC revolution, but here they are!

Most people still don't require them.
This is surprising to you, given that these smaller devices
are yet to hit the market? You weren't paying attention
when you read what I wrote.

If they needed mobile access that badly, they'd all have laptops and
WiFi by now. But they don't.
You mean, "hard for you".

Hard for a lot of people. Ask the average non-geek on the street.
You can't see the obvious even though there are already tons
of mobile phones, laptops, PDAs, etc.

Tons, but their penetration is still very light, except for mobile
phones, and mobile phones are not being used for computing by average
users.
The adoption of these
could as well have been discounted before their arrival with
a similar "I wouldn't, therefore most wouldn't" attitude.
You really don't have any idea of who will or won't adapt to
newer technology if you simply ignore that it has value.

I have a lot of experience with real-world users, as opposed to geeks,
and they are worlds apart. Geeks see the world through very distorting
glasses, and what they consider "essential" and "normal" is often
completely unknown to everyone else.
... and a lot of people don't.

Very few people don't. The only people who use computers when they
don't have to are geeks. Almost no one is interested in computers for
their own sake.
 
David said:
"There is no reason for any individual to have a computer in his home."
Ken Olsen, the founder & CEO of Digital Equipment Corporation

That is still largely true today. An important limiting factor on PC
sales today is the fact that many people just don't want a PC.
 
Phil said:
Ok, YOU try to manage 512 Mbytes of image data, consisting of up to 300
images WITHOUT card test, defragmentation, and file recovery programs.

I manage many gigabytes of image data without these tools.
 
Mxsmanic said:
I want it new.

Why? Like your wallet getting raped on a regular basis?
And I need the software to go with it.
Again, E-Bay. On one network I installed 5 years ago, I saved over £500
by buying NT4 Server with 5 CALs on E-Bay instead of from an e-retailer.
 
There is a great difference between processes running and processes
active. Most of those are idle until something happens. Others
can run at very low priority, and just get out of the way for
anything.
I was thinking of things like CUPS server which is on the go all the
time.
 
Mxsmanic said:
So: Which is the weakest link here? Answer: Disk access times, all
else being equal. Worse yet, modern applications do a lot more disk I/O
than old applications did.
Actually the bandwidth of the LAN is usually the limiting factor.
The fact that the network card and the processor are idle most of the
time even while the user is waiting for an application to respond. The
application is usually waiting on disk I/O if there is a delay
(exceptions include things like browsers, which are usually waiting on
the network).
Oh dear.
 
Mxsmanic said:
The problem is not minimizing a window (although it takes a lot longer
than 10 ms, since I can watch it happen--it's closer to 100 ms on my
machine).

You need a faster graphics card if you can watch it happen.
The problem is the cumulative horsepower consumed by all
these bells and whistles. And software bloat also _dramatically_
increases disk I/O, and disk I/O is extremely slow.

Only on initial loading. I run WindowBlinds. Once it's up and in RAM, it
doesn't affect the speed at all.
 