Hmmm. Perhaps it has, but there are a few exceptions here and there.
I'm not sure if your example is such an exception...
For example, I use Starband satellite Internet because I live in a remote
location where there is no cable or wireline phones. They passed out
these sketchy routers that involved a complicated driver on the host PC.
People had no end of trouble getting the driver to work, and it was for
Windows only. Nowadays, Starband has replaced those routers with new
ones that require no driver whatsoever. Any system -- DOS, Windows,
Mac, Linux -- with a properly configured network card can connect.
What has happened here is that all the logic that used to bulge into
the "host PC" has disappeared into the router itself, which now has
the computing power to do the job in-house. And the chances are high
that inside that router, you may find Linux.
The upshot is that the router is now a self-contained system,
abstracted away from the PC. The PC no longer has to care what it is;
that's why any OS can use it, Macs included.
Your analysis is well thought out, and I accept your conclusions as to
what is commercially viable. But, as the window shifts, the question
becomes: What will be done with all that hardware power?
Some of it will be spent on "computing about computing": security
"should-we-do-this" stuff, DRM, and so on. For example, the current
crude user-based permissions model may be pushed down to the point
where every program is treated as a permitted (or unpermitted) agent
in its own right.
That's going to involve massive overhead and complexity, and the
complexity will be unmanageable unless things are designed very
formally (which usually means, quite inefficiently).
For example: I'm Fred (logged in as Fred) on PC1, running a game that
is allowed to access DirectX, but is not allowed to access my data
(even though I, as Fred, may do so), nor is it allowed onto the
Internet. To fuss about what this particular program can be allowed
to do means an awareness of all the APIs etc. this game might call,
and what they call in turn; that's a lot of context tracking.
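To make that concrete, here's a toy Python sketch of what a
per-program (rather than per-user) permission check might look like;
the policy format, program name and capability labels are my own
invention for illustration, not any real OS security API:

  # Toy per-program permission model; policy, program names and capability
  # labels are invented for illustration, not a real OS security API.

  POLICY = {
      "game.exe": {"directx": True, "user_data": False, "internet": False},
  }

  def allowed(program, capability):
      # Default-deny: an unlisted program, or an ungranted capability,
      # is refused no matter which user the program is running as.
      return POLICY.get(program, {}).get(capability, False)

  print(allowed("game.exe", "directx"))    # True  - may draw to the screen
  print(allowed("game.exe", "user_data"))  # False - even though Fred may
  print(allowed("game.exe", "internet"))   # False

The lookup itself is trivial; the unmanageable part is deciding which
of the thousands of calls the game (and everything it calls in turn)
makes should map onto which capability.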
Some will be spent on a more natural user input experience, as I
described in an earlier post. For example, screening in only your
voice in a crowded bus, when under voice control.
And some power may be used to do things we haven't yet thought of,
even in the sense of rejecting such things as impossible due to
current hardware constraints.
OTOH, the PC might just eat itself, much as your router ate its PC
footprint. If RAM is large enough to hold everything, then we may say
goodbye to the last mechanical item, the hard drive. With that goes a
whole layer of caching and virtualization, which should result in a
far cleaner OS design - though you may see the old "page to disk, load
from disk" baggage being carried forward for a while, much as DOS
still differentiated between "conventional" memory and XMS well into
the post-megabyte era.
> I can't imagine that software will continue to bloat out indefinitely;
Oh, I can... I can see processors becoming too large to be bug-free,
so that today's microcode injection becomes microcode that runs as
software; that sort of thing. As it is, much computing involves
trading off speed vs. size, i.e. "should we calculate this, or just
look it up in a pre-calculated table?" and I expect to see size
continuing to grow while speed starts to cap out.
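That trade-off is easy to show in a few lines; this toy Python sketch
(sine values, an arbitrary table size) spends RAM once so that each
later query is a cheap index instead of a fresh calculation:

  # Toy "calculate it, or look it up?" trade-off: spend memory on a
  # precomputed table so each query becomes a cheap array index.
  import math

  TABLE_SIZE = 4096                                   # arbitrary choice
  SIN_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE)
               for i in range(TABLE_SIZE)]            # paid for in RAM, once

  def fast_sin(x):
      # One multiply, one modulo, one index - approximate but quick.
      return SIN_TABLE[int(x / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE]

  def slow_sin(x):
      # Recomputed on every call - exact, but costs CPU time instead.
      return math.sin(x)

  print(fast_sin(1.0), slow_sin(1.0))   # close enough for many uses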
Let's say I gave you the gift of a fabrication plant that shrank
circuitry to a tenth of today's size, running on a fiftieth of the
power. What's easier: reproduce an existing RAM lattice a few more
times to boost capacity, or bloat out the processor's logic in an
attempt to translate those extra circuit elements into processing
power? Unless
you'd built some dumb-ass scalability limit into your RAM addressing
design (such as "thank heavens we boosted max HD size from 8G all the
way to 32G, that should be enough forever!"), the first option is free.
> I mean, what will the desktop OS of 2020 look like?
I don't think one can guess... some obvious past guesses (e.g. video
phones, robots) took a lot longer than expected, whereas others (spam,
botnets) may not have been foreseen at all.
You may find that what we currently think of as "thinking" starts to
appear within system code, as the second human conceit to fall.
The first conceit was that craftsmanship would always outdo automated
manufacturing. That's truly dead; not that craftsmanship is
necessarily dead, but that manufacturing has gone beyond anything that
is practical to hand-craft. I don't see anyone carving out 512
million memory cells by hand, do you?
The second conceit is AI. Every time someone codes up a solution to
something previously waved as the "threshold of thought", the
definition of AI is rolled forward again. Computers may not "think"
in the same way as humans, just as processor lithography does not
carve materials as a sculptor does, yet the job gets done. As it is,
the Turing Test is passed every time someone "opens" an attachment
because it was "from someone they know".
The third conceit is human pre-eminence in economic life. I can see a
day when bots will start up companies, employ humans, pay them, pay
taxes, etc. and we won't particularly care. Already, most Internet
traffic is machine traffic; spam, malware, updates... that can get a
lot worse. Humans may be reduced to rats running around in someone
else's gutters, when it comes to getting a click in edgeways ;-)
Maybe that's more 2050 than "2020 vision", heh... but maybe not...
OK, 2050 may see flexible hardware, i.e. where specialized hardware is
grown under program control, then resorbed or re-purposed when the
need has passed so that something else can be grown and used.
At some point, it will be cheaper to build via nanobots rather than by
creating the factory jigs by hand, just as it is already cheaper to
build factory jigs rather than craft products by hand.
By that point, we may already consider metal and insulators too crude
to operate at the sizes and speeds we need. We may prefer the organics
that are the stock in trade of existing nanobots, which we can
monkey-see, monkey-do: re-purposing them first, then creating our own
from scratch.
> But that seems absurd. It seems more likely that the focus will be on
> drivers for complex hardware... the 3d printer, a holographic projector,
> etc. How Linux will fare vs. Windows under that scenario is something
> you can assess better than I can...
You may see Linux within more peripherals, such as printers, scanners,
cameras, sound recorders, 3D monitors etc. with standard interfaces
doing away with the need for "drivers" altogether.
A modem is too simple to be worth building logic into it, so we have
controllerless modems and drivers. A router's complex enough to build
logic into it, so we have self-contained routers and no special logic
as "drivers". Right now, printers seem to be in between; there's
logic in the printer, but much of the work is still done in drivers
and other hosted software, though that could change.
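Network printers already show what the "router" end of that spectrum
looks like: many accept jobs on a raw TCP port, so from the host's
side "printing" can be nothing more than opening a socket. Port-9100
raw printing is a real convention; the address and the assumption that
this particular printer accepts plain text are mine:

  # Sketch: driving a self-contained network printer over plain TCP, no driver.
  # Port 9100 "raw" printing is a common convention; the address and the
  # assumption that the printer accepts plain text are illustrative.
  import socket

  PRINTER = ("192.168.1.50", 9100)   # hypothetical printer on the LAN

  with socket.create_connection(PRINTER, timeout=5) as conn:
      conn.sendall(b"Hello from a driverless host.\r\n\f")  # \f ejects the page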
A lot of this comes down to architecture, i.e. how one generalizes
and/or abstracts things, and how one scopes between them. It makes
sense that Bill Gates' last role at MSFT was software architecture.
------------ ----- ---- --- -- - - - -
Our senses are our UI to reality