Yet more Windows XP security patches

MAP

You have been coming to this newsgroup long enough to know (at least someone
with an IQ over room temp. should know) that there is a difference between
XP installed on machine A compared to XP installed on machine B, due to the
hardware installed, device drivers installed, and background apps running,
to realize that what works on machine A may not work on machine B. In fact,
it could be detrimental to machine B.
I have tried to use simple words in this reply just for you, but in case you
still have trouble understanding it:
http://dictionary.reference.com/

Mike Pawlak :)
 
Alias

MAP said:
You have been coming to this newsgroup long enough to know (at least someone
with an IQ over room temp. should know) that there is a difference between
XP installed on machine A compared to XP installed on machine B, due to the
hardware installed, device drivers installed, and background apps running,
to realize that what works on machine A may not work on machine B. In fact,
it could be detrimental to machine B.
I have tried to use simple words in this reply just for you, but in case you
still have trouble understanding it:
http://dictionary.reference.com/

Mike Pawlak :)

Despite your infantile insults, I will answer you for the benefit of others.
I have three machines. One is an HP Pavilion with an AMD Athlon 800 with XP
Pro on it in Spanish. Another is a white box I built, with XP HE in Spanish
and an AMD Athlon XP 2200. The third is another white box, with an Athlon XP
3000+ and XP Pro in English. None had problems. I have a friend who has
two old Pentium IIs, a Pentium 4, and some 100 customers for whom he has
downloaded the updates. NONE had problems.

You, sir, are wrong and obviously don't know what the fu*k you're talking
about. You should take a look at why YOU fuc*ed up your machine so it runs
so slow, and stop blaming the updates. Capiche?

Alias
 
Jupiter Jones [MVP]

Yes, the average home user probably does need all Critical Updates.
The average home user may not be able to make a good decision about the
necessity of a specific Critical Update in their situation.
In order to make the decision not to install a Critical Update, the user
should:
1. Have a thorough understanding of their hardware and software, including
all types of use by ALL users.
2. Have a thorough understanding of the patch, especially the potential
consequences of not installing it.
The typical home user does not fit both of the above.
That is why, when someone asks if a Critical Update should be installed, I
say yes.
If they were thoroughly familiar with the above, they would not ask the
question.
 
MAP

and should take a look at why YOU fuc*ed up your machine so it runs so
slow and stop blaming the updates. Capiche?

Where did this come from? No troubles here!
 
MAP

Hi Jupiter!
But that is a yes only due to the ignorance of the person maintaining the
system, and NOT due to necessity. I can see your view, but I stand by my
original post as being correct.
 
MAP

You sir, are wrong and obviously don't know what the fu*k you're talking
about and should take a look at why YOU fuc*ed up your machine so it runs so
slow and stop blaming the updates. Capiche?


My, an ad hominem. How cute.

Mike Pawlak
 
kurttrail

MAP said:
As with many so-called "critical patches", does the average "home user"
need them?
Before installing these updates, it is best to read just what they do. Take
the PnP patch that you mention: the person exploiting this MUST have their
hands on your keyboard! (Or "admin rights"; how would they get those if you
secured your system?)
A couple of years ago, hotfix Q811493 (the first release) had this same
mitigating factor. Many installed it because of the "critical update"
header, and their systems slowed down to a crawl. Did they need it? NO!
It takes longer to install these hotfixes if you read about each one, but on
the other hand it may keep your computer up and running by not installing
some that you really don't need, which might screw your system up! I have
read posts already, on this month's Patch Tuesday, of problems with
computers not booting after installing these updates.


Mitigating Factors for Plug and Play Vulnerability - CAN-2005-1983:

* On Windows XP Service Pack 2 and Windows Server 2003, an attacker
must have valid logon credentials and be able to log on locally to
exploit this vulnerability. The vulnerability could not be exploited
remotely by anonymous users or by users who have standard user
accounts. However, the affected component is available remotely to
users who have administrative permissions.

Just my 2 cents worth
Mike Pawlak

http://www.microsoft.com/technet/security/advisory/899588.mspx

You might want that Plug and Play patch if you have any computers running
W2K.

--
Peace!
Kurt
Self-anointed Moderator
microscum.pubic.windowsexp.gonorrhea
http://microscum.com/mscommunity
"Trustworthy Computing" is only another example of an Oxymoron!
"Produkt-Aktivierung macht frei"
 
cquirke (MVP Windows shell/user)

They may well do, if the OS is badly designed enough to be waving
dangerous exploitable surfaces at the world.

For example, as a stand-alone user who merely "consumes" the Internet,
I have absolutely no desire to let any other system remotely call
procedures on my PC. Yet the OS waves the Remote Procedure Call
service at the Internet, and this cannot be turned off because the OS
is designed in such a way that it "needs" RPC to find its own ass.

So even though I don't need or want the RPC service, I have to patch
defects in it, else I'm at risk. And again, for autorunning macros in
Office "data files". And for hidden admin shares that expose the
startup axis of my PC, so dropped code can run on next boot. Etc.

Also, remember: raw code defects are insane. What happens as a result
of these defects bears no relation to what the code is supposed to do.

Think about that. Needs physical access? That's fine; I'd be comfy
with that. Needs "admin rights"? That's not fine at all, as there
are plenty of ways to gain those. What did you "secure your system"
with; more soggy code that's likely to be as defect-ridden as this
particular hole in the colander you are currently evaluating?

Think about the logic:
- patch every month, because your code base has hidden defects
- trust patches to be defect-free, even if written by the same folks

It would be interesting to compare the cumulative volume of patches
pushed into your PC, with the total volume of code that is being
patched - especially when you apply the "10% of the code runs 90% of
the time" weighting factor. Then, look at how many successive
revisions are applied to the same problematic code items.

So already, you can predict that bad patches are almost inevitable.
After all, the code they are fixing was thought to be fit for use, so
the assertion that the patches have been tested etc. is meaningless.

Now consider how original code is developed and deployed, vs. patches.

The original code base was developed in controlled circumstances, with
plenty of time for in-house testing, private beta testing, and then
broad public beta testing. It's deployed in controlled circumstances
too; as a fresh installation, onto which other code is added. If
something is added that conflicts with the OS, this generally comes to
light, and you'd uninstall the new addition.

The patches are developed in response to a newly-discovered potential
crisis; sometimes after there is exploit code in the wild. How much
time is there for testing, before having to rush that code out the
door? Next, consider the deployment; this new code is retrofitted
into disparate "live" installations that have diverged from the
initial predictable fresh install state. Can one be sure that the
code will work properly across all possible permutations?

Malware outbreaks and bad patches have the same red-letter impact on
IT, especially in consumerland - they can trigger an urgent need for
assistance across a large chunk of your client base at the same time,
making it impossible to resource these support needs. The more
efficient the patching process, the more clients are hit within a
shorter time-span, with less likelihood of anyone having a clue as to
what's wrong. This is the hidden bulk of the "just patch" iceberg.

Well, that's the lesson; if all code can contain defects, any code
represents a potential attack surface. MS has yet to rigorously apply
that lesson, i.e. reduce the surfaces waved at external material. We
still see gratuitous handling of external material that was not
explicitly initiated by the user, and so far more crises than need be.

1) Privacy rests on security
2) Security rests on safety
3) Safety rests on sanity

Privacy is the promise you make about how you will manage data.

But if your security is breached, it's not your privacy policy
anymore, because you are no longer the only entity managing that data.

Security is restricting activities to those entities that can be
trusted to perform them, e.g. employees who will behave in ways that
are in keeping with your organizational policy.

But if your employees' actions have consequences contrary to their
intentions (i.e. safety failure), then it's no longer secure simply to
ensure only those employees are involved. Through no fault of their
own, their behavior can no longer be trusted to reflect that of the
organization, if malware exploits safety-failure opportunities.

Safety is a matter of designing code so that the worst the user thinks
can happen, is the worst that can happen. If I have to assess an
incoming attachment, I might take the mild risk of "reading a data
file" but decline the greater risk of "running a program". If the
system is so unsafe that I can't tell whether a file is data or code,
or a code file can present itself as a data file, or the OS runs raw
code within what is supposed to be a data file, then I have no control
over consequences. Windows failures abound at all three points.

So safe design is a must, but is meaningless if the code is insane.
For example, if the code that handles a JPEG file doesn't sanity-check
the length of content before copying it into a buffer, so that
"malformed" content overwrites code following the buffer, then the
behavior of that code is no longer limited to design intentions. My
malformed "data" content may now be run as raw code, and that code can
do anything. Any safety assumptions are now meaningless.

-- Risk Management is the clue that asks:
"Why do I keep open buckets of petrol next to all the
ashtrays in the lounge, when I don't even have a car?"
 
