David said:
A "compute-bound" application is perfectly representative when one is
discussing whether all the processor power has been 'consumed' by whatever,
and it obviously hasn't been.
Mainly because disk and network delays prevent it.
Perhaps you should just say that you aren't, because I'm not convinced that
the thousands or millions who do will suddenly stop for the convenience of
your argument.
The vast majority of computer users are not encoding video.
Setting aside that the "very specialized market" is so huge that one might
wonder whether PCs are used for much else these days, it still shows that the
GUI has not 'consumed all the processing power'.
The market is limited largely to adolescent boys; most other computer
users (including the vast majority of women) don't play video games with
any significant frequency.
The fact that things like DirectX are needed to even get games to run
demonstrates how completely they exhaust available horsepower, no matter
how high that horsepower is. However, machines that are woefully
inadequate for gaming are often generously dimensioned for just about
any other type of application short of weather prediction or nuclear
simulations.
I'm impressed by the ms resolution of your eyeballs.
Most people can resolve delays considerably shorter than 100 ms.
What you call a 'problem' others find quite pleasing.
Others generally don't care.
It's generally true that complex programs take longer to load than simple,
brain-dead programs, because features/capabilities require code.
These programs are not just doing I/O to load; they are doing I/O
constantly.
What isn't true is the presumption that your opinion of useful
features/capabilities is universally shared.
It's not shared by geeks, but it is shared by end users, and they are a
much larger part of the user community.
Previously you claimed the _processor_ was all 'consumed', and now you're
blaming everything on constipated network and disk speeds.
The processor is usually the strongest link in the chain, even though it
spends most of its time waiting for memory modules to respond.
Well, I suppose, since the processor is going to just sit there for network
and disk I/O we might as well ring a few bells and blow some whistles in
the GUI while we're waiting.
But this becomes a problem if we don't have to wait for network and disk
I/O. Then we end up waiting for the bells and whistles.
Even if that were true, it is an inappropriate measure for the claim. I can
make similar claims about my car: "Cars are no faster today than 50 years ago
because it still takes just as long for me to close the door when I get out."
That's not an appropriate analogy, but you can accurately say that cars
are effectively no faster than 50 years ago because it still takes the
same amount of time to get to work in them.
Yours is even worse because, while one could make a half-hearted, albeit
fallacious, argument that a car from 50 years ago could handle the same
functions cars do today, it would be, and is, absurd to suggest that an old
286 comes even close to what a modern computer can do, and not because of
any lack of logical capability in the instruction set but because, by
comparison, it's incredibly SLOW, regardless of how 'efficient' the code is.
Much of what modern, well-written applications require is within the
capabilities of a 286. The 286 is indeed slower, but an application
written for maximum efficiency on a 286 might well match a bloated
application on a modern system in terms of response time.
And who told you it's "a thousand times faster?"
From 5 MHz to 3200 MHz, plus optimizations that increase the average number
of instructions executed per clock cycle.
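As a rough back-of-envelope check using the clock speeds cited above (the
2x factor for instructions per clock below is an illustrative assumption,
not a measured figure):

```python
# Back-of-envelope: how "a thousand times faster" can be reached from
# the clock speeds cited above plus an assumed gain in instructions
# per clock (the 2x IPC figure is illustrative, not a measurement).
old_clock_mhz = 5        # early-PC-era clock
new_clock_mhz = 3200     # 3.2 GHz modern clock
ipc_gain = 2             # assumed average instructions-per-clock improvement

clock_ratio = new_clock_mhz / old_clock_mhz   # 640x from clock speed alone
overall = clock_ratio * ipc_gain              # ~1280x, on the order of a thousand
print(clock_ratio, overall)
```

Clock speed alone gives a factor of 640; any average per-clock improvement
beyond roughly 1.6x pushes the total past 1000.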
And for what purpose is it "_supposedly_ a thousand times faster?"
Anything that requires computing power.
I'd half-expect a silly claim like that from the computer-illiterate, but
not from someone who should know better.
I'd expect argument without personal attacks from the computer literate.
But some of them are angry young males.