Mxsmanic said:Bob writes:If I thought I could make XP look and feel like 2K, I might
consider using it.
Those are simple desktop options. My XP system looks like NT 4.0 for
the most part.
David said:That it's almost universally popular is de facto proof it's not just "a
really stupid way to do things."
Maybe if you put more effort into understanding why it's done that way, it
wouldn't be such a mystery.
Ed said:No, it just proves that someone a long time ago thought it was a good idea and
no one has thought otherwise. BTW, there are other ways to do it that don't
require using memory addresses; it's just more transparent to the current
processor architecture.
Mainframes have been doing it in other, better ways for nearly half a
century.
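(An aside, and only a hedged guess at what "without memory addresses" means
here: x86 itself has a separate port-addressed I/O space, reached by the
in/out instructions rather than by memory loads and stores. A minimal sketch
on Linux/x86 with glibc; it needs root for ioperm, and ports 0x70/0x71 are
the standard CMOS/RTC index and data ports:

#include <stdio.h>
#include <sys/io.h>

int main(void) {
    /* Ask the kernel for access to ports 0x70 and 0x71. */
    if (ioperm(0x70, 2, 1)) { perror("ioperm"); return 1; }
    outb(0x00, 0x70);               /* select CMOS register 0: seconds */
    unsigned char sec = inb(0x71);  /* read it back via the data port  */
    printf("CMOS seconds register: 0x%02x\n", sec);
    return 0;
}

No memory address is involved in the device access itself; the port number
lives in its own 16-bit space.)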
Al said:Other ways? Mainframes invented VM in the '60s, went from 24-bit to
31-bit addressing in the '70s, and had multi-gigabyte memory
configurations in the '80s.
Phil said:Compare the cost of one mainframe I/O controller with the cost of 10 desktop
computers.
Ed said:No, it just proves that someone a long time ago thought it was a good
idea and no one has thought otherwise. BTW, there are other ways to do
it that don't require using memory addresses; it's just more
transparent to the current processor architecture.
Bob said:Sure it can - it's called virtual memory.
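(A minimal sketch of the point, assuming a POSIX system: with virtual memory
a process can reserve far more address space than it has physical RAM, and
the kernel only backs a page with real memory when it is first touched.

#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 1UL << 30;  /* reserve 1 GiB of address space */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    p[0] = 1;        /* first touch faults in one physical page */
    p[len - 1] = 1;  /* ... and one more at the far end */
    printf("reserved %zu bytes, committed only two pages\n", len);
    munmap(p, len);
    return 0;
}

Between the mmap and the first touch, the gigabyte exists only as
bookkeeping in the kernel's VM structures.)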
Mxsmanic said:David Maynard writes:
Not all, but certainly those related to higher maximum speeds.
Nobody sells 386 machines any more, and no current software runs on
them.
I prefer a GUI for desktops, anyway. The GUI absorbs a huge
amount of machine capacity, though.
Mxsmanic said:David Maynard writes:
Popularity is not necessarily evidence of technical superiority. The
entire x86 architecture is a case in point.
No need. It wastes memory.
This is one reason why no amount of address space will ever be enough.
You can accommodate real-world needs with a certain number of bits, but
you cannot compensate for stupidity with any number of bits.
Mxsmanic said:Phil Weldon writes:
The mainframe I/O controller costs less to build, but margins in
mainframe hardware land can be as high as 95% or more.
Mxsmanic said:Al Dykes writes:
Mainframes have handled I/O with fully independent I/O controllers for
decades.
No dedicated main memory required, and highly efficient I/O.
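(The shape of the idea, as a hedged sketch: with IBM-style channel I/O the
CPU builds a chain of command words, hands the whole chain to an independent
controller, and goes back to computing; the controller interrupts only when
the chain finishes. The C below merely simulates that flow - the struct and
names are illustrative, not a real API:

#include <stdio.h>

enum { OP_READ, OP_WRITE };

/* Stand-in for a channel command word: one I/O order in the chain. */
typedef struct {
    int    op;     /* what to do                       */
    char  *buf;    /* where the controller moves data  */
    size_t count;  /* how many bytes                   */
    int    chain;  /* nonzero: another command follows */
} ccw_t;

/* Stand-in for the controller; in real hardware this runs
   concurrently with the CPU and raises an interrupt when done. */
static void channel_run(const ccw_t *prog) {
    for (;; prog++) {
        printf("channel: op=%d, %zu bytes\n", prog->op, prog->count);
        if (!prog->chain) break;
    }
}

int main(void) {
    char sector[512];
    ccw_t prog[] = {
        { OP_READ,  sector, sizeof sector, 1 },
        { OP_WRITE, sector, sizeof sector, 0 },
    };
    channel_run(prog);  /* a real CPU would keep working here */
    return 0;
}
)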
kony said:On Thu, 19 May 2005 06:29:54 -0500, David Maynard wrote:
I don't feel it would cost more nor have fewer features.
Cost is somewhat fixed (what the market will bear); someone
buys the application(s) without foreknowledge of the bloat.
As for features, yes I'd be willing to do without the
features that seem to take up hundreds of MB of space, since
an entire office suite can take up under 50MB.
Sure, but suppose an app has 10% additional features added
over 2 versions but grows by 50%.
I consider the bloat to be the unnecessary parts by
definition, not merely that it's larger than a former
version was... so it seems our concept of bloat varies.
Code generally comes from somewhere. It's acquired/made and
put into the application.
Could be laziness, incompetence, lack of sleep, deadlines,
or general apathy, among other reasons I can't foresee.
Comfortable?
Naw, I feel like a sardine in anything modern; even when the
car is big, the dashes these days wrap around, plus the
center divider... I feel as cramped in an SUV as I felt once
in a long-ago friend's ~'80 Ford Escort. And no, it's not
me that's now bloated. ;-)
Sure, they are better but if you recall my plans for
doughnuts in your back yard, well the front-wheel drive
kinda kills that idea.
You're pretty daring bringing politics into a discussion.
What will the trolls think?
Not necessarily true, I actively seek smaller apps that will
fit my needs... and still use Office 97 more than the newer
versions even though I've a license for O2K/XP. Seems that
along with the bloat, Excel leaves crap behind in
spreadsheets that can only be removed with the '97 version or
by manually editing them, which I do hate to do. Probably a
patch somewhere for that; don't care enough to look since
'97 does the job.
You might be making a leap there about state-of-the-art
coding. Might it be just the opposite, that they're not at
all using state of the art coding and this is why we have
massive bloat?
Consider how many 1MB-15MB apps are out
there, then what more some of the massive Adobe, Macromedia,
and Microsoft apps do. Even when you choose minimal
installs they insist on dozens of MB. I suppose it's a
matter of choice, I choose to avoid them even with ample
memory and HDD space... but then that may be part of why I
always have plenty of both without having to go to extra
measures to get there. I'm a big fan of only upgrading for
a need, not just to have the latest apps. Could partially be
because I don't have to fool with warez I suppose; over the
years I've accumulated plenty of stuff.
CBFalconer said:I, for one, usually prefer simpler programs which are properly
controllable. The general Unix philosophy of connecting simple
things with scripts and pipes is far more flexible, understandable,
and controllable. Not to mention more accurate.
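(The mechanism behind that philosophy is small enough to sketch: a shell
connecting two simple programs with a pipe does roughly the following, here
hard-coded for `ls | wc -l` on a POSIX system:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    if (pipe(fd) != 0) { perror("pipe"); return 1; }
    if (fork() == 0) {                /* child 1: the producer */
        dup2(fd[1], 1);               /* stdout -> pipe write end */
        close(fd[0]); close(fd[1]);
        execlp("ls", "ls", (char *)0);
        _exit(127);
    }
    if (fork() == 0) {                /* child 2: the consumer */
        dup2(fd[0], 0);               /* stdin <- pipe read end */
        close(fd[0]); close(fd[1]);
        execlp("wc", "wc", "-l", (char *)0);
        _exit(127);
    }
    close(fd[0]); close(fd[1]);
    while (wait(NULL) > 0) ;          /* reap both children */
    return 0;
}

Each program stays small and single-purpose; the flexibility lives in how
they are wired together.)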
Phil said:What you feel may not be true (assuming you are thinking of large programs
and operating systems). There is a very good economic reason programs and
operating systems are getting larger. In 1966, computer time (for a
mid-to-top-range computer) cost $200 US per hour. In 1966, programmer time
cost $4 US per hour. Programs were very small, and a lot of people's time
was spent specifically to make those programs small. Speed was sacrificed
for small size. The size and shape (features) of software were constrained
by programming cost vs. computer facility time, memory storage size, mass
storage size, processing speed, and mass storage speed. Every single one of
these factors has changed dramatically.
Completely new capabilities have arisen. Almost all processing used to be
in 'batch mode'; real-time interaction wasn't necessary. Many systems did
not even have interrupts. Displays were rows of lights or, at most, a 30
cps teletype. Magnetic tape storage was very low in density: 800, 1600, or
(gasp) 3200 bits per inch, 8 or 9 tracks; 1 INCH long data blocks, 1/2 INCH
interblock gap. Not a whole lot of code is necessary for such low densities
and I/O speeds.
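(To put rough numbers on those densities: at 800 bpi a 1-inch block holds
about 800 bytes, and each block plus its 1/2-inch gap costs 1.5 inches of
tape. A 2400-foot reel is 28,800 inches, so roughly 19,200 blocks - about
15 MB per reel, with a third of the tape spent on interblock gaps. The
2400-foot reel length is an assumption, though it was a common size.)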
If you REALLY want smaller code, then what do you want to give up?
If you REALLY want smaller code, then why not have applications that only
have the capabilities YOU use?
If you REALLY want smaller code, then why not write your own applications,
or hire system analysts and programmers (and testing and quality control
personnel)?
Is it better to have capabilities you MIGHT need, or to save 1 Gbyte hard
drive storage (at a cost of $1 US)? Capabilities you don't need at the
present are probably in use by others, and might be needed by you in the
future.
Try making a list of the capabilities you are willing to forego, and then
compare against similar lists by other users.
Examples
1. I'd be quite willing to forego grammar checking in 'Word'.
2. I'd be quite willing to forego working on spreadsheets within
'Word'.
3. I'd really, really like to lose many capabilities in Adobe Reader.
4. I am NOT willing to forego viewing html in email and websites.
But
1. Some users may actually think 'Word' grammar checking is useful.
2. Some users may feel that manipulating spreadsheets within 'Word'
boosts productivity.
3. Well, Adobe Reader is free, so ...
4. Some users seem quite happy with text only.
The two sample lists above bring up still another important point. Once
there were thousands of computer users and thousands of very specific, well
defined uses. Now, the majority of the population, middle school or above,
in each industrial country is a user, each with a general list of flexible
tasks.
Phil said:Don't forget the other extreme: the head-per-track magnetic drum, or
the multi-disk, single-head RAMAC from IBM, circa 1956.
That doesn't answer the question; it just begs it. Not to mention there are
plenty of 386 machines available, and good old, non-bloated software to
run on them.