They may well do, if the OS is designed badly enough to be waving
dangerous exploitable surfaces at the world.
For example, as a stand-alone user who merely "consumes" the Internet,
I have absolutely no desire to let any other system remotely call
procedures on my PC. Yet the OS waves the Remote Procedure Call
service at the Internet, and this cannot be turned off because the OS
is designed in such a way that it "needs" RPC to find its own ass.
So even though I don't need or want the RPC service, I have to patch
defects in it, else I'm at risk. And again, for autorunning macros in
Office "data files". And for hidden admin shares that expose the
startup axis of my PC, so dropped code can run on next boot. Etc.
Also, remember: raw code defects are insane - what happens as a result
of these defects bears no relation to what the code is supposed to do.
Think about that. Needs physical access? That's fine, I'd be comfy
with that. Needs "admin rights"? That's not fine at all, as there
are plenty of ways to gain those. What did you "secure your system"
with; more soggy code that's likely to be as defect-ridden as this
particular hole in the colander you are currently evaluating?
Think about the logic:
- patch every month, because your code base has hidden defects
- trust patches to be defect-free, even if written by the same folks
It would be interesting to compare the cumulative volume of patches
pushed into your PC, with the total volume of code that is being
patched - especially when you apply the "10% of the code runs 90% of
the time" weighting factor. Then, look at how many successive
revisions are applied to the same problematic code items.
So already, you can predict that bad patches are almost inevitable.
After all, the code they are fixing was thought to be fit for use, so
the assertion that the patches have been tested etc. is meaningless.
Now consider how original code is developed and deployed, vs. patches.
The original code base was developed in controlled circumstances, with
plenty of time for in-house testing, private beta testing, and then
broad public beta testing. It's deployed in controlled circumstances
too; as a fresh installation, onto which other code is added. If
something is added that conflicts with the OS, this generally comes to
light, and you'd uninstall the new addition.
The patches are developed in response to a newly-discovered potential
crisis; sometimes after there is exploit code in the wild. How much
time is there for testing, before having to rush that code out the
door? Next, consider the deployment; this new code is retrofitted
into disparate "live" installations that have diverged from the
initial predictable fresh install state. Can one be sure that the
code will work properly across all possible permutations?
Malware outbreaks and bad patches have the same red-letter impact on
IT, especially in consumerland - they can trigger an urgent need for
assistance across a large chunk of your client base at the same time,
making it impossible to resource these support needs. The more
efficient the patching process, the more clients are hit within a
shorter time-span, with less likelihood of anyone having a clue as to
what's wrong. This is the hidden bulk of the "just patch" iceberg.
Well, that's the lesson; if all code can contain defects, any code
represents a potential attack surface. MS has yet to rigorously apply
that lesson, i.e. reduce the surfaces waved at external material. We
still see gratuitous handling of external material that was not
explicitly initiated by the user, and so far more crises than need be.
1) Privacy rests on security
2) Security rests on safety
3) Safety rests on sanity
Privacy is the promise you make about how you will manage data.
But if your security is breached, your privacy policy no longer
applies, because you are no longer the only entity managing that data.
Security is restricting activities to those entities that can be
trusted to perform them, e.g. employees who will behave in ways that
are in keeping with your organizational policy.
But if your employees' actions have consequences contrary to their
intentions (i.e. safety failure), then it's no longer secure simply to
ensure only those employees are involved. Through no fault of their
own, their behavior can no longer be trusted to reflect that of the
organisation, if malware exploits safety failure opportunities.
Safety is a matter of designing code so that the worst the user thinks
can happen, is the worst that can happen. If I have to assess an
incoming attachment, I might take the mild risk of "reading a data
file" but decline the greater risk of "running a program". If the
system is so unsafe that I can't tell whether a file is data or code,
or a code file can present itself as a data file, or the OS runs raw
code within what is supposed to be a data file, then I have no control
over consequences. Windows failures abound at all three points.
So safe design is a must, but is meaningless if the code is insane.
For example, if the code that handles a JPEG file doesn't sanity-check
the length of content before copying it into a buffer, so that
"malformed" content overwrites code following the buffer, then the
behavior of that code is no longer limited to design intentions. My
malformed "data" content may now be run as raw code, and that code can
do anything. Any safety assumptions are now meaningless.
-- Risk Management is the clue that asks:
"Why do I keep open buckets of petrol next to all the
ashtrays in the lounge, when I don't even have a car?"