Well, that, and nobody should be expected to support something forever when
there's a viable replacement.
I used to wonder at the intensity of consumer protection in the US;
litigation at the drop of a coffee-cup, as it were. Then again, you
can count on ad-driven (and thus business-driven) media to ridicule
such judgements as vendors stifle nervous laughter.
For example, there was a case where someone found a power drill on a
rubbish tip, took it home, plugged it in, and got an electric shock...
and then sued the company that made the drill.
Crazy, I thought - and then I thought a bit more. Suppose the case
discovered a latent design defect that would put all users of the
product at potential risk? If so, the judgement would be sound.
And that's the key to the question "how much longer must we support
old products?" - whether the fault is built-in to the way a products
is designed, or one that arises through "wear" in use. The latter's
limited to a warranty period, and with no stated warranty period,
defaults to whatever common-law rights you may have.
But if a design defect puts users at risk, it can (and perhaps should)
be argued that the responsibility exists as long as there are users
still using (and thus at risk from) the product. It would also be
seen as unacceptable to leverage these newly-discovered but
pre-existing defects to compel users to buy a replacement.
So; if your Win95 CD is scratched and won't read anymore, you wouldn't
expect MS to send you a new one for free, many years later.
But when the fundamental design is found to be dangerous, it's
different... or would have been, if standard "real manufacturing"
norms had been applied when all of this started.
Vendors would say "if you held us accountable to that degree, we'd
never be able to offer you the software you enjoy today", and that
would be true. We'd prolly have more modest functionality, but we
might have more safety. Chances are we'd have had a 2-3 year hiatus
in progress while everything was re-engineered to attain that.
The other perspective is that exploits are in fact "wear and tear", so
that a newly-discovered defect that was always there (but not known or
exploited) is not an original design or manufacturing defect, but is
normal attrition. As more development effort is required
post-release, so "rental slavery" looms as an inevitable response.
It's also been said that bugs are the inevitable result of non-trivial
code, which means if you want bug-free code, keep it trivial. As
Linux, Windows and MacOS are all equally complex and made out of the
same stuff (especially "who needs to know how long the string is?" C),
we'd expect the same bug rate - and that's pretty much what we find.
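To make that C remark concrete, here's a minimal sketch (purely
hypothetical function and buffer names, not taken from any real
product) of the kind of latent defect that comes free with
NUL-terminated strings: the callee has no way of knowing how long the
incoming string is, so the flaw sits there quietly until someone feeds
it an input longer than the buffer.

  #include <stdio.h>
  #include <string.h>

  /* The buffer size is a silent assumption; a C string carries no
     length of its own, so nothing here can check it. */
  static void greet(const char *name)
  {
      char buf[16];
      strcpy(buf, name);          /* latent defect: no bounds check */
      printf("Hello, %s\n", buf);
  }

  /* A safer variant makes the length explicit at the copy. */
  static void greet_safely(const char *name)
  {
      char buf[16];
      snprintf(buf, sizeof buf, "%s", name);  /* truncates, never overruns */
      printf("Hello, %s\n", buf);
  }

  int main(void)
  {
      greet("Bob");               /* works fine for years... */
      greet_safely("an input rather longer than the buffer expects");
      return 0;
  }

The defect in greet() was "always there", but it only bites (or gets
exploited) once someone discovers the right input - which is exactly
the wear-vs-design question above.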
The more widely-used the software, the faster it will "wear". If you
buy a new car, keep it in a garage, and drive it to town once a week,
it will last longer than if you used it to do non-stop long-distance
travel. By the same token, if you fragment your platform into
multiple dialects and the whole platform has a fraction of the
exposure that Windows has, you'd attract fewer exploits.
A common mistake is to forget this, because this is usually the *real*
reason why certain types of exploit do not happen ITW. For example,
networking security aimed at keeping out intruders may appear to work
fine in the age of Ethernet, but fall apart when the natural hard
scope of cabling is shattered by WiFi. For another example, the
current Babel of different BIOS code may keep out BIOS-level
malware, but that could change in an EFI future.
Debian only supports its stable version for 24 months after release.
Of course, the release cycle with Debian is a bit snappier than
Windows, averaging about once a year (give or take).
Yup. There's a downside to constant updates, because each update is
effectively a new trust equation that invalidates any experience-based
assumptions on trustworthiness you may have had.
That doesn't apply to most of us because we lack the resources to
really study every byte of the code (as is possible in open source) or
watch every behavior (as one has to do with closed source, and should
do with open source as well).
Updates also tend to stomp all over settings and re-assert
functionalities you may have deliberately ripped out. So it's often a
trade-off: sacrifice the safety of non-default settings and deliberate
breakage of unwanted functionality, or accept exposure to new exploits.
Machines that can't go to a newer version of Windows due to
system requirements can be easily converted into a modern
Linux desktop in about an hour.
AFAIK not all Linuxes will run on the proverbial smell of an oily rag,
and you may have driver issues etc. I don't know how far back legacy
hardware support extends on a per-distro basis.
Yep.
Currently, yes. This might change when Vista is EOL'd and people have to
choose something other than Windows for their next OS.
It may happen in 2 years' time if MS sunsets support for XP SP2.
It happens as regularly in the Linux world, tho folks care less about
that for two reasons: it's free, and most Linux users aren't fazed by
the geekery involved in "just" upgrading the OS.
------------ ----- --- -- - - - -
Drugs are usually safe. Inject? (Y/n)