In home CNC (e.g., setting up a lathe to make widgets automatically),
DOS turns out to be superior to Windows because there are fewer layers
between the software instructions and the hardware actuators. So I
wonder if the fat bump of software will really do the job with future
technologies in which IT seeks to interact with the physical world
on more intimate terms.
When it comes to efficiency vs. "big design", I'd bet on the latter,
and there are three aspects to this.
1) Abstraction has earned its keep
Nowhere is this clearer than in PC gaming. Initially, DOS was better
than Windows as a gaming platform for the reasons you state; you had
direct access to the hardware, and that gave you the efficiency to
wring real performance out of the hardware of the time.
As long as a sound card successfully pretended to be a Sound Blaster,
and a graphics card successfully pretended to be an IBM VGA or a VESA
SVGA, you were set.
WinG didn't do much to change our minds on this, and then DirectX
started getting traction. Even so, it was initially a matter of "does
the PC have the power to pull the load through the DirectX middleman?"
in the pre-3D-accelerator era of Quake 2, Hexen 2, etc.
Once factor (2) below had put more intelligence into the hardware, the
abstraction of the hardware let hardware developers off the leash. No
longer did a sound card have to pretend to be a Sound Blaster to work
(and thus be limited to whatever a Sound Blaster could do) - vendors
were free to develop hardware in any direction they liked, as long as
they provided DirectX drivers to link it through the standard API.
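To make that concrete, here's a minimal sketch in plain C of the kind
of indirection involved - emphatically not DirectX's real API, and the
driver names and messages are invented for illustration. The game only
ever calls through a vendor-neutral table of function pointers, and
each vendor fills that table in however its hardware actually works.

    #include <stddef.h>
    #include <stdio.h>

    /* A generic driver-interface sketch (not a real DirectX API):
       the application calls only through this table of function
       pointers; each vendor supplies its own implementation. */
    struct sound_driver {
        const char *name;
        int (*play)(const void *pcm, size_t bytes);
    };

    /* Hypothetical vendor A: a card that still acts like a Sound
       Blaster under the hood. */
    static int sb_play(const void *pcm, size_t bytes)
    {
        (void)pcm;
        printf("SB-compatible card: DMA %zu bytes to the DAC\n", bytes);
        return 0;
    }

    /* Hypothetical vendor B: a card with its own DSP and hardware
       mixing - something a Sound Blaster could never do. */
    static int dsp_play(const void *pcm, size_t bytes)
    {
        (void)pcm;
        printf("DSP card: queue %zu bytes on a hardware voice\n", bytes);
        return 0;
    }

    int main(void)
    {
        const struct sound_driver drivers[] = {
            { "sb_compat", sb_play },
            { "fancy_dsp", dsp_play },
        };
        char clip[256] = { 0 };

        /* The game never cares which card is installed. */
        for (size_t i = 0; i < 2; i++)
            drivers[i].play(clip, sizeof clip);
        return 0;
    }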
We saw the same thing earlier, when IDE meant hard drives no longer
had to use a fixed data encoding or a set number of sectors per track.
Once again, developers were free to design storage hardware any way
they could, as long as it could be presented as CHS or as a linear
series of sectors (LBA) - and once again, capacities and speeds took
off like a rocket.
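For what it's worth, the classic CHS-to-LBA translation is simple
enough to show in a few lines of C. The geometry below (16 heads, 63
sectors per track) is purely illustrative - the whole point of the
abstraction is that the drive can report whatever translated geometry
it likes and do something entirely different internally.

    #include <stdint.h>
    #include <stdio.h>

    /* Classic CHS -> LBA translation: the host sees only a linear
       run of sectors, however the drive really stores them. */
    static uint64_t chs_to_lba(uint64_t cyl, uint32_t head,
                               uint32_t sector,
                               uint32_t heads_per_cyl,
                               uint32_t sectors_per_track)
    {
        /* Sectors are 1-based in CHS addressing, hence the "- 1". */
        return (cyl * heads_per_cyl + head) * sectors_per_track
               + (sector - 1);
    }

    int main(void)
    {
        /* Illustrative geometry only: 16 heads, 63 sectors/track. */
        printf("LBA = %llu\n",
               (unsigned long long)chs_to_lba(100, 5, 32, 16, 63));
        return 0;
    }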
So no; I don't see much future for direct-hardware-access in general
use, though I do see firmware applications in dedicated hardware and
perhaps some real-time OSs for general systems dedicated to running
single apps that require real-time rigor.
As to DOS itself - much as I like it, it just doesn't scale up to
modern hardware that well (USB, GHz CPUs, large RAM, > 137G drives,
etc.). I suspect a lean Linux, even one as lean as what launches
MemTest86 from a boot CD-R or diskette, is more likely the way to go.
2) Efficiency is a temporary advantage
Any business plan that relies on big system performance or spectacular
machine efficiency has to pay for itself in a fairly short time
period. In contrast, a year's head start in building a new app that
barely runs on current hardware can set you up to dominate that field
for "life" (think Photoshop, Cubase, Premiere, etc.).
For example, let's say you want to be able to do CG video that's
smoother and more realistic than anyone can do at home. So you buy a
huge PC or other hardware system that's far more powerful than any
off-the-peg or self-built PC, and you start cranking out jobs.
You have a market life of maybe 2-3 years, tops, before you will have
to either renew your crushingly expensive hardware investment, or find
some other way to compete with all the PC small-fry who have caught up
with you simply by buying their PCs a couple of years later.
For another example, let's say you write a competing application that
gets its performance edge by working intimately with the hardware,
being written in C or assembly, etc., with fewer abstraction layers.
Once again, you'll be best-of-breed for a year or two, but this time,
initially at least, your edge stays in place as faster hardware lets
your more efficient software still outperform other apps.
But then, as the hardware changes and invalidates your original
assumptions, you have to re-develop much of the code, and that's a far
slower and riskier business than (say) getting library updates and
making a few top-level changes to something written in a higher-level
(read: one that protects you from shooting your own feet off) language.
Compare the word processors XyWrite, WordPerfect and Word. XyWrite was
hard to use, but beloved by news reporters because it was so much
faster for quickly banging out raw text. It had an edge, but an edge
that simply ceased to matter even before Windows took off.
WordPerfect did things "large" and made "difficult to use" a lock-in
advantage by training users in its arcane ways. Unlike XyWrite, it
relied on efficiency to support the richest available feature set
rather than simply being the fastest tricycle on the block. So as
folks expected more from word processors, they still found it in
WordPerfect... and then Windows came along, and WordPerfect made the
mistake of trying to ignore the OS and still do everything "its own
way", in a quest for efficiency and richness. Oops.
3) Possible computing power is a narrow but shifting window
By that I mean that the gap between the smallest and largest systems
it is cost-effective to produce is far smaller than the gap between
the best 3-year-old systems and the worst systems 3 years in the
future.
This is, in effect, a re-statement of (2). Let's say user A bought a
cheap 200MHz Pentium with 8M RAM and a 512M HD, and user B spent a
bomb on Intel's cutting-edge 333MHz PII with 32M RAM and a 2G HD.
They're both boat anchors by now, and not only that: it's no longer
cost-effective to deliberately build a PC as lame as either in 2007.
Even when you go off general PC hardware (and immediately hit far
larger hardware development costs), it's still not cheaper to build
things smaller than a certain level, unless you are after a
particularly minute form factor (and pay accordingly).
You can think of possible single-unit computing power as a range as
narrow as visible light within the full EM spectrum. If you want
substantially more power right now, you generally attain it by
ganging systems together, e.g. SETI@home or botnets.
You can also think of computing power over years as that narrow
visible-light band being moved from radio waves through today's
"light" into UV, gamma and cosmic ray territory, much like the pointer
of a radio moves as you turn the tuner knob. All sorts of security
technologies that are bound to power/time limits suffer short
usability lives due to this effect.
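A back-of-the-envelope sketch of that erosion, in C and with purely
assumed figures (a comfortable 20-year brute-force margin today, and
attacking power that doubles every 18 months): within a product's
lifetime the margin shrinks to weeks.

    #include <math.h>
    #include <stdio.h>

    /* If an attack takes years_today on current hardware, and usable
       compute doubles every doubling_years (an assumed Moore's-law
       style rate), how long does it take t years from now? */
    static double attack_years_after(double years_today,
                                     double doubling_years, double t)
    {
        return years_today / pow(2.0, t / doubling_years);
    }

    int main(void)
    {
        const double years_today = 20.0;  /* assumed margin now */
        const double doubling    = 1.5;   /* assumed doubling period */

        for (double t = 0.0; t <= 12.0; t += 3.0)
            printf("after %4.1f years: attack takes %6.2f years\n",
                   t, attack_years_after(years_today, doubling, t));
        return 0;
    }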
Things change qualitatively as well as quantitatively, and this will
have a greater impact if you've become too intimate with the hardware
in a quest for efficiency. Use the API, Luke...
The main thing is that it is harder to conceptualize something that
has never been possible to do before, than to make a known process
more efficient or re-create it on different hardware.
--------------- ---- --- -- - - - -
"We have captured lightning and used
it to teach sand how to think."