It would be nice if MS could settle on a single subnet for updates


Leythos

Started playing with Vista again and had to add 5 different subnet
ranges in the firewall in order to get Vista updates. So, considering
Win XP, Office XP, 2003, 2007, Vista, servers, etc., I have about 15
sets of subnets (ranges) I need to allow CAB/EXE and other content from.

MS, please pick one /24 range and use it for all of your update sites.
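
For illustration, here's the kind of check this turns into on our side,
as a minimal Python sketch; the CIDR ranges below are placeholders from
the RFC 5737 documentation blocks, not actual Microsoft update subnets:

    # Minimal sketch: is an update server's IP inside one of the CIDR
    # ranges already whitelisted on the firewall? Ranges are placeholders.
    import ipaddress

    WHITELIST = [
        ipaddress.ip_network("203.0.113.0/24"),   # placeholder range
        ipaddress.ip_network("198.51.100.0/28"),  # placeholder range
    ]

    def is_whitelisted(ip: str) -> bool:
        """Return True if ip falls inside any whitelisted CIDR range."""
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in WHITELIST)

    print(is_whitelisted("203.0.113.42"))  # True
    print(is_whitelisted("192.0.2.10"))    # False

With one published /24, WHITELIST would hold a single entry instead of 15.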

--

Leythos
- Igitur qui desiderat pacem, praeparet bellum.
- Calling an illegal alien an "undocumented worker" is like calling a
drug dealer an "unlicensed pharmacist"
(e-mail address removed) (remove 999 for proper email address)
 

DevilsPGD

In message <[email protected]> Leythos wrote:
Started playing with Vista again and had to add 5 different subnet
ranges in the firewall in order to get Vista updates. So, considering
Win XP, Office XP, 2003, 2007, Vista, servers, etc., I have about 15
sets of subnets (ranges) I need to allow CAB/EXE and other content from.

MS, please pick one /24 range and use it for all of your update sites.

Perhaps you should use a larger CIDR range than a /24?
 

Leythos

In message <[email protected]> Leythos


Perhaps you should use a larger CIDR range than a /24?

I could, but there is no clear sign from MS as to what IPs they are
using. In many cases the same company that provides their downloads also
provides other companies' downloads in the same block.

So, maybe MS should pick one subnet, since they can't possibly need more
than a /24 to provide updates, and publish it for us network admins?

--

Leythos
- Igitur qui desiderat pacem, praeparet bellum.
- Calling an illegal alien an "undocumented worker" is like calling a
drug dealer an "unlicensed pharmacist"
(e-mail address removed) (remove 999 for proper email address)
 

Mike Brannigan

Leythos said:
I could, but there is no clear sign from MS as to what IPs they are
using. In many cases the same company that provides their downloads also
provides other companies' downloads in the same block.

So, maybe MS should pick one subnet, since they can't possibly need more
than a /24 to provide updates, and publish it for us network admins?

--

You should not be using specific addresses to access any Microsoft
service, be that activation, downloads, etc.
Microsoft operates a number of layers of protection against various forms
of Internet-based attack, which include the rapid changing of IP addresses
for key services.
If you try to use specific addresses, there is no guarantee that these
will remain valid for any period of time.
Maybe you need to reconsider your firewall and blocking strategy some more
and use either better tools or an alternative strategy for controlling
access from your network to external services.
(Blocking IP ranges is not a viable solution long-term.)
 

DevilsPGD

In message <[email protected]> Leythos wrote:
I could, but there is no clear sign from MS as to what IPs they are
using. In many cases the same company that provides their downloads also
provides other companies' downloads in the same block.

Ahh, true enough.
So, maybe MS should pick one subnet, since they can't possibly need more
than a /24 to provide updates, and publish it for us network admins?

Perhaps a WSUS server would better suit your needs?
 

Steve Riley [MSFT]

IP addresses are spoofable, so they are not appropriate for making security
decisions. Only when you're using IPsec can you do this, because then the
cryptographic signatures appended to the datagrams provide a mechanism for
you to trust originating addresses.
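
As a toy illustration of why a keyed signature lets you trust a
datagram's origin where a bare source address does not, here is a minimal
Python sketch in the spirit of IPsec's authentication header; the key and
payload are placeholders, and real IPsec negotiates keys per security
association:

    # Toy sketch: an HMAC over the payload lets the receiver verify the
    # sender holds the shared key, unlike a spoofable source IP.
    import hmac, hashlib

    KEY = b"shared-secret"  # placeholder; IPsec derives this during SA setup

    def sign(payload: bytes) -> bytes:
        return hmac.new(KEY, payload, hashlib.sha256).digest()

    def verify(payload: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(sign(payload), tag)

    msg = b"datagram claiming to be from 203.0.113.7"
    tag = sign(msg)
    print(verify(msg, tag))        # True
    print(verify(b"spoofed", tag)) # False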

We purposefully change the IP addresses regularly to prevent various kinds
of attacks.

Steve Riley
(e-mail address removed)
http://blogs.technet.com/steriley
 

Leythos

IP addresses are spoofable, so they are not appropriate for making security
decisions. Only when you're using IPsec can you do this, because then the
cryptographic signatures appended to the datagrams provide a mechanism for
you to trust originating addresses.

We purposefully change the IP addresses regularly to prevent various kinds
of attacks.

And as a normal measure of security we don't allow unrestricted access
to the net, we don't allow CAB, EXE, and a bunch of other files via HTTP
or SMTP. We only allow web access to partner sites and a few
whitelisted sites; this keeps the network secure, along with many other
measures.

I tend to enter subnets for the MS update sites, a /24 or a /28
depending on what I think the range will be, but never just a single IP
as I know the IP will change in that range.
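
For example, here is a quick sketch (Python, with a placeholder address
rather than a real MS update IP) of how the /24 or /28 gets derived from
an IP seen in the logs:

    # Sketch: given an update-server IP observed in the firewall logs,
    # compute the enclosing /24 or /28 to whitelist. Address is a
    # placeholder from the RFC 5737 documentation block.
    import ipaddress

    def enclosing_network(ip: str, prefix: int = 24):
        """Return the /prefix network containing ip (strict=False masks host bits)."""
        return ipaddress.ip_network(f"{ip}/{prefix}", strict=False)

    print(enclosing_network("203.0.113.42", 24))  # 203.0.113.0/24
    print(enclosing_network("203.0.113.42", 28))  # 203.0.113.32/28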

What would be nice, since we have never had a hacked customer, is a
list of the IP ranges used by the different update providers. I don't
have a problem with MS changing them, but it sure would be nice to
know what they are so that we can get them into the system.

As for WSUS - we still need to know what the update sites are, we don't
even allow the servers to get updates unless it's an approved
subnet/network.

Since this is a "security" group, I would think that others would
commonly block all users from code downloads as a standard practice and
only allow code downloads from approved sites....

--

Leythos
- Igitur qui desiderat pacem, praeparet bellum.
- Calling an illegal alien an "undocumented worker" is like calling a
drug dealer an "unlicensed pharmacist"
(e-mail address removed) (remove 999 for proper email address)
 

Mike Brannigan

Leythos said:
And as a normal measure of security we don't allow unrestricted access
to the net, we don't allow CAB, EXE, and a bunch of other files via HTTP
or SMTP. We only allow web access to partner sites and a few
whitelisted sites; this keeps the network secure, along with many other
measures.

I tend to enter subnets for the MS update sites, a /24 or a /28
depending on what I think the range will be, but never just a single IP
as I know the IP will change in that range.

What would be nice, since we have never had a hacked customer, is a
list of the IP ranges used by the different update providers. I don't
have a problem with MS changing them, but it sure would be nice to
know what they are so that we can get them into the system.

As for WSUS - we still need to know what the update sites are, we don't
even allow the servers to get updates unless it's an approved
subnet/network.

Since this is a "security" group, I would think that others would
commonly block all users from code downloads as a standard practice and
only allow code downloads from approved sites....


Leythos,

As I responded to Steve in a similar manner a few hours earlier, it is not
even a case of a range being made public. Microsoft reserves the right to
alter the IP addresses for all public-facing services as and when it sees
fit; publishing specific ranges would threaten the stability of the
service, as it would simply give potential attackers a known set of
ranges they could target for DoS or other forms of attack. I realize
that it would be possible to work out the entire set of ranges that the
various providers of service to Microsoft use and target those, but there
are many, which would make the attack surface significantly larger and an
attack easier to detect.
So in short, Microsoft is unlikely to make available anything other than
the public-facing DNS names for their services.
Maybe you should look at alternative approaches to this.
Consider directing your clients to an internal DNS server that is
configured to forward name resolution (conditional forwarding) only for
names that meet certain criteria, such as *.microsoft.com and your other
whitelisted sites. This would allow only those sites to be resolved by
the DNS servers that you choose to use externally, and thus accessed.
I realize this does not prevent direct access if someone knows an IP
address to type into a URL, but it is a start while you look at
alternative strategies.
If you use a proxy server at the edge of your network, you will be able
to log all access to URLs with an IP address in them and then take
appropriate action against that member of staff, etc.
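
As a rough illustration of that conditional-forwarding decision, a
minimal Python sketch; the suffix list is assumed for the example, and in
practice the rule lives in the DNS server's configuration rather than a
script:

    # Sketch: forward a DNS query only if the name matches a whitelisted
    # suffix. Suffixes below are assumptions for the example.
    ALLOWED_SUFFIXES = ("microsoft.com", "windowsupdate.com")

    def should_forward(qname: str) -> bool:
        """Allow the name itself or any subdomain of an allowed suffix."""
        name = qname.rstrip(".").lower()
        return any(name == s or name.endswith("." + s)
                   for s in ALLOWED_SUFFIXES)

    print(should_forward("download.windowsupdate.com"))      # True
    print(should_forward("microsoft.com.attacker.example"))  # False

Note the check anchors on a leading dot, so look-alike names such as
"microsoft.com.attacker.example" are not forwarded.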
 

Leythos

So in short, Microsoft is unlikely to make available anything other than
the public-facing DNS names for their services.
Maybe you should look at alternative approaches to this.
Consider directing your clients to an internal DNS server that is
configured to forward name resolution (conditional forwarding) only for
names that meet certain criteria, such as *.microsoft.com and your other
whitelisted sites. This would allow only those sites to be resolved by
the DNS servers that you choose to use externally, and thus accessed.
I realize this does not prevent direct access if someone knows an IP
address to type into a URL, but it is a start while you look at
alternative strategies.
If you use a proxy server at the edge of your network, you will be able
to log all access to URLs with an IP address in them and then take
appropriate action against that member of staff, etc.

Mike, Steve,

And there lies the problem for security. We already see the rejected
connections and their names and even the full file path/name, and yes,
it's easy to add them into the approved list.

This should be a problem for all users, I would think, wherever they
block the downloading of code by their users completely but want to allow
MS Updates to the servers and workstations. In the case of the firewalls
we have used, most of them on the market, there is no simple means to
whitelist your update sites as they keep changing. Yes, we could install
a proxy server, but that really seems like a waste when the only place we
have a problem with is MS.

I understand your reasons, but it's a catch-22: move your stuff around
to limit your exposure, or force customers either to purchase more
hardware or to allow code to be downloaded from unknown sites.

I'll stick with watching for the Windows Update failures in the logs and
manually adding the networks as needed - at least this way our networks
remain secure.

--

Leythos
- Igitur qui desiderat pacem, praeparet bellum.
- Calling an illegal alien an "undocumented worker" is like calling a
drug dealer an "unlicensed pharmacist"
(e-mail address removed) (remove 999 for proper email address)
 

Kerry Brown

Leythos said:
Mike, Steve,

And there lies the problem for security. We already see the rejected
connections and their names and even the full file path/name, and yes,
it's easy to add them into the approved list.

This should be a problem for all users, I would think, wherever they
block the downloading of code by their users completely but want to allow
MS Updates to the servers and workstations. In the case of the firewalls
we have used, most of them on the market, there is no simple means to
whitelist your update sites as they keep changing. Yes, we could install
a proxy server, but that really seems like a waste when the only place we
have a problem with is MS.

I understand your reasons, but it's a catch-22: move your stuff around
to limit your exposure, or force customers either to purchase more
hardware or to allow code to be downloaded from unknown sites.

I'll stick with watching for the Windows Update failures in the logs and
manually adding the networks as needed - at least this way our networks
remain secure.


Use WSUS and only allow the WSUS server to download updates.
 

Mike Brannigan

Leythos said:
Mike, Steve,

And there lies the problem for security. We already see the rejected
connections and their names and even the full file path/name, and yes,
it's easy to add them into the approved list.

This should be a problem for all users, I would think, wherever they
block the downloading of code by their users completely but want to allow
MS Updates to the servers and workstations. In the case of the firewalls
we have used, most of them on the market, there is no simple means to
whitelist your update sites as they keep changing. Yes, we could install
a proxy server, but that really seems like a waste when the only place we
have a problem with is MS.

I understand your reasons, but it's a catch-22: move your stuff around
to limit your exposure, or force customers either to purchase more
hardware or to allow code to be downloaded from unknown sites.

I'll stick with watching for the Windows Update failures in the logs and
manually adding the networks as needed - at least this way our networks
remain secure.

You have highlighted your own biggest problem here - "but want to allow MS
Updates to the servers and workstations." - ABSOLUTELY NOT.
No, never, ever in a production corporate environment do you allow ANY of
your workstations and servers to directly access anyone for patches or
updates.
I have never allowed this or even seen it in real large or enterprise
customers. (The only place it may crop up is in mom-and-pop shops with 10
PCs and a server.)
If you want to patch your systems, you do so in a properly controlled
manner using the appropriate centrally managed distribution tools - such
as WSUS for small to medium environments, and System Center Configuration
Manager 2007 or similar products from other vendors.
You download the patches, or allow your WSUS or similar product to
download them from the vendor; you then regression-test them for your
environment (hardware and software, etc.), then approve them for
deployment and deploy to the servers and workstations from inside your
secure corporate network. Now it is not a problem to let that one server
do its downloads from the vendors (this is just the same as you would do
for antivirus updates - you download them to an internal distribution
server, etc.).

As you said your only problem is with Microsoft, the solution I have
outlined above is the fix - only one server needs access through your
draconian firewall policies. And you get a real, secure enterprise patch
management solution that significantly lowers the risk to your environment.
With the best will in the world, if you are letting servers auto-update
all patches from Microsoft without any degree of regression testing, you
have way bigger problems than worrying about your firewall rules.

If you stick to watching for failures and manually updating rules you are
wasting your time, providing a poor service and getting paid for doing
something that there is no need to do.
 

DevilsPGD

In message <[email protected]> Leythos wrote:
As for WSUS - we still need to know what the update sites are, we don't
even allow the servers to get updates unless it's an approved
subnet/network.

The suggestion would be to run WSUS outside your firewall, as though it
were your own personal Windows Update server on an IP you'd know and
trust for your internal clients to update.

(Obviously the WSUS server shouldn't be completely unprotected, but it
doesn't need to live within your LAN and have unrestricted internet
access at the same time)
 

Leythos

In message <[email protected]> Leythos


The suggestion would be to run WSUS outside your firewall, as though it
were your own personal Windows Update server on an IP you'd know and
trust for your internal clients to update.

(Obviously the WSUS server shouldn't be completely unprotected, but it
doesn't need to live within your LAN and have unrestricted internet
access at the same time)

Yep, and we could do that, even inside the LAN, and allow exceptions for
it. In the case of most of our clients, with very few exceptions, even
locations with several hundred nodes on the LAN, we've never had a
problem allowing the workstations to auto-download/install the Windows
updates, not since the feature became available. On certain machines we
choose to download and then manually install, but for the mass of client
machines we just allow them to auto-update and have never had any
problems with that method. Servers, manual only.

About half our clients are under 100 nodes on the LAN. They most often
have one or two servers, and we could install WSUS on one or on the
single server, but the servers are very stable and adding another
component to them might not preserve that stability - so it's still
a catch-22, but WSUS might just be the only real way around this.

Thanks

--

Leythos
- Igitur qui desiderat pacem, praeparet bellum.
- Calling an illegal alien an "undocumented worker" is like calling a
drug dealer an "unlicensed pharmacist"
(e-mail address removed) (remove 999 for proper email address)
 

Leythos

You have highlighted your own biggest problem here - "but want to allow MS
Updates to the servers and workstations." - ABSOLUTELY NOT.
No, never, ever in a production corporate environment do you allow ANY of
your workstations and servers to directly access anyone for patches or
updates.

I should have been clearer on the servers, sorry - we download but
manually install on all servers and on specific function workstations.

In all this time we've never had a problem with automatic install on the
workstations (and we have specific machines that we manually install on)
in the production environment.

So, allowing automatic updates on most workstations has never been a
problem for us.

The real problem is that even if we set them to manual, they could
not get the updates unless we entered exceptions in the firewall for the
MS Update sites. This is what I experienced with another install of
Vista: 29 updates, and not a single one came from the same list of ranges
that we get the XP/Office/Server updates from... so even manual install
fails in that case.

Based on another post, I guess I'm going to have to install WSUS and just
allow all EXE/CAB/code files to be pulled in HTTP sessions to that
server.

--

Leythos
- Igitur qui desiderat pacem, praeparet bellum.
- Calling an illegal alien an "undocumented worker" is like calling a
drug dealer an "unlicensed pharmacist"
(e-mail address removed) (remove 999 for proper email address)
 

DevilsPGD

In message <[email protected]> Leythos wrote:
About half our clients are under 100 nodes on the LAN. They most often
have one or two servers, and we could install WSUS on one or on the
single server, but the servers are very stable and adding another
component to them might not preserve that stability - so it's still
a catch-22, but WSUS might just be the only real way around this.

Either that, or use hostnames rather than IPs in your firewalling...
 

Leythos

In message <[email protected]> Leythos


Either that, or use hostnames rather than IPs in your firewalling...

I wish I could, but the firewalls convert names to IPs for that function.
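
To make the limitation concrete, a minimal Python sketch of what the
firewall would have to do instead of a one-time conversion: re-resolve
each whitelisted hostname on a schedule and rebuild the IP rules from the
result. The hostname below is illustrative, not a complete update-site
list:

    # Sketch: periodically re-resolve whitelisted hostnames so the IP
    # rule set tracks DNS changes. Hostname below is illustrative.
    import socket

    HOSTNAMES = ["update.microsoft.com"]

    def resolve_all(hostnames):
        """Return the current set of IPv4 addresses for the given names."""
        ips = set()
        for name in hostnames:
            try:
                for info in socket.getaddrinfo(name, 80, socket.AF_INET):
                    ips.add(info[4][0])
            except socket.gaierror:
                pass  # name didn't resolve this round; keep existing rules
        return ips

    # Run from cron or a service timer; when the result changes,
    # push the new set into the firewall's whitelist.
    print(sorted(resolve_all(HOSTNAMES)))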

--

Leythos
- Igitur qui desiderat pacem, praeparet bellum.
- Calling an illegal alien an "undocumented worker" is like calling a
drug dealer an "unlicensed pharmacist"
(e-mail address removed) (remove 999 for proper email address)
 

cquirke (MVP Windows shell/user)

This thread is about the collision between...

No automatic code base changes allowed

....and...

Vendors need to push "code of the day"

Given the only reason we allow vendors to push "code of the day" is
because their existing code fails too often for us to manage manually,
one wonders if our trust in these vendors is well-placed.

A big part of this is knowing that only the vendor is pushing the
code, and that's hard to be sure of. If malware were to hijack a
vendor's update pipe, it could blow black code into the core of
systems, right past all those systems' defenses.

With that in mind, I've switched from wishing MS would use open
standards for patch transmission to being grateful for whatever they
can do to harden the process. I'd still rather not have to leave
myself open to injections of "code of the day", though.
No, never, ever in a production corporate environment do you allow ANY of
your workstations and servers to directly access anyone for patches.
I have never allowed this or even seen it in real large or enterprise
customers. (The only place it may crop up is in mom-and-pop shops with
10 PCs and a server.)

And there's the problem. MS concentrates on scaling up to enterprise
needs, where the enterprise should consolidate patches in one location
and then drive these into systems under their own in-house control.

So scaling up is well catered for.

But what about scaling down?

Do "mom and pop" folks not deserve safety? How about single-PC users
which have everything they own tied up in that one vulnerable box?
What's best-practice for them - "trust me, I'm a software vendor"?

How about scaling outwards?

When every single vendor wants to be able to push "updates" into your
PC, even for things as trivial as printers and mouse drivers, how do
you manage these? How do you manage 50 different ad-hoc update
delivery systems, some from vendors who are not much beyond "Mom and
Pop" status themselves? Do we let Zango etc. "update" themselves?

The bottom line: "Ship now, patch later" is an unworkable model.
As you said your only problem is with Microsoft, the solution I have
outlined above is the fix - only one server needs access through your
draconian firewall policies. And you get a real, secure enterprise patch
management solution that significantly lowers the risk to your environment.

That's probably the best solution, for those with the resources to
manage it. It does create a lock-in advantage for MS, but at least it
is one that is value-based (i.e. the positive value of a
well-developed enterprise-ready management system).

However, I have to wonder how effective in-house patch evaluation
really is, especially if it is to keep up with tight time-to-exploit
cycles. It may be the closed-source equivalent of the open source
boast that "our code is validated by a thousand reviewers"; looks good
on paper, but is it really effective in practice?


--------------- ----- ---- --- -- - - -
To one who has never seen a hammer,
nothing looks like a nail
 

Mike Brannigan

In line below

--

Mike Brannigan
cquirke (MVP Windows shell/user) said:
This thread is about the collision between...

No automatic code base changes allowed

...and...

Vendors need to push "code of the day"

Given the only reason we allow vendors to push "code of the day" is
because their existing code fails too often for us to manage manually,
one wonders if our trust in these vendors is well-placed.

A big part of this is knowing that only the vendor is pushing the
code, and that's hard to be sure of. If malware were to hijack a
vendor's update pipe, it could blow black code into the core of
systems, right past all those systems' defenses.

With that in mind, I've switched from wishing MS would use open
standards for patch transmission to being grateful for whatever they
can do to harden the process. I'd still rather not have to leave
myself open to injections of "code of the day", though.


And there's the problem. MS concentrates on scaling up to enterprise
needs, where the enterprise should consolidate patches in one location
and then drive these into systems under their own in-house control.

So scaling up is well catered for.

But what about scaling down?

That is where the free WSUS 3.0 product is targeted. One server does all
the downloads; then you approve the ones you want to deploy, and your
client PCs just get them through their normal Windows Automatic Update
process (but this time they point to your WSUS server internally instead
of going external).
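
The client-side redirection rides on the documented Windows Update policy
values; as a sketch of the mechanism only (in practice you set this
through Group Policy, and the server URL here is a placeholder), written
directly with Python's winreg:

    # Sketch: point the Windows Update agent at an internal WSUS server
    # via the documented policy registry values. Requires admin rights,
    # Windows only; "http://wsus.example.local" is a placeholder URL.
    import winreg

    WU_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
    WSUS_URL = "http://wsus.example.local"

    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, WU_KEY) as key:
        winreg.SetValueEx(key, "WUServer", 0, winreg.REG_SZ, WSUS_URL)
        winreg.SetValueEx(key, "WUStatusServer", 0, winreg.REG_SZ, WSUS_URL)

    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, WU_KEY + r"\AU") as key:
        winreg.SetValueEx(key, "UseWUServer", 0, winreg.REG_DWORD, 1)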
Do "mom and pop" folks not deserve safety? How about single-PC users
which have everything they own tied up in that one vulnerable box?
What's best-practice for them - "trust me, I'm a software vendor"?

As above for anyone with more than a couple of PCs.
Otherwise you subscribe to the security update notifications, are
notified when they are released, use the Download catalog to get the
updates, test them as you see fit, then deploy them as you see fit using
any production approach you want.

Sorry, but this is not new; it has been around for the last few years.
Patch management for everyone from single user to mega-corp is well
understood out there in the field, which is why I was so surprised by the
OP's post and approach.
How about scaling outwards?

WSUS scales outwards too - unless you mean something else.
If you mean integration with other vendors: if they wish to create a
catalog, then SCCM 2007 can handle import of third-party or in-house
catalog data to identify out-of-patch machines and patch them, etc. I
suggest you read up on the SCCM 2007 product.
When every single vendor wants to be able to push "updates" into your
PC, even for things as trivial as printers and mouse drivers, how do
you manage these?

Since MSFT is taking on a huge number of these patches and updated
drivers, you can continue to handle those the same way. As regards the
rest, if they start using standard catalog methods as I mentioned above,
then integration becomes a no-brainer. Otherwise you again have to get
informed of their new updates - most publish some form of alert, etc. -
then download, test, and deploy using whatever tools you like in your
enterprise.
How do you manage 50 different ad-hoc update
delivery systems, some from vendors who are not much beyond "Mom and
Pop" status themselves? Do we let Zango etc. "update" themselves?

The bottom line: "Ship now, patch later" is an unworkable model.

There is almost no such thing as flawless software, particularly when
you are talking about tens of millions of lines of code in an OS.
Every major OS vendor on the planet regularly ships patches and updates for
their products.
That's probably the best solution, for those with the resources to
manage it. It does create a lock-in advantage for MS, but at least it
is one that is value-based (i.e. the positive value of a
well-developed enterprise-ready management system).

However, I have to wonder how effective in-house patch evaluation
really is, especially if it is to keep up with tight time-to-exploit
cycles.

Then that is their problem, and they must address it in the manner best
suited to them, either by increasing the resources assigned to it or by
taking a firmer approach, such as only taking absolutely critical patches.
I have worked with enterprises across this whole spectrum, from fully
dedicated patch management teams that perform full and complete regression
testing for all patches they need to roll out internally to extremely poor
ad hoc solutions and minimal testing.
 

cquirke (MVP Windows shell/user)

Mike Brannigan wrote:
That is where the free WSUS 3.0 product is targeted. One server does all
the downloads; then you approve the ones you want to deploy, and your
client PCs just get them through their normal Windows Automatic Update
process (but this time they point to your WSUS server internally instead
of going external).

Sounds good - by "one server", do you mean a server dedicated to this
task alone, or can that be the only server you have? Will it run on
SBS, or is there a different solution for that?

This is good, but it still doesn't scale all the way down to a single
PC that has everything important on it. Those users have no choice
but to trust patches not to mess up.

Windows *almost* offers a mitigation, but screws it up (or rather,
doesn't see the need to work the way I'd hoped it would).

There's an Automatic Updates option to "download now, but let me
decide when to install them". When I read that, I thought it would
put me in full control over such updates (e.g. "...when or IF to
install them") but it does not. If I click "no, don't install these",
it will stealth them in via the next shutdown.

This is a pity, because otherwise it would facilitate this policy:
- download patches as soon as available but DO NOT INSTALL
- watch for reports of problems with patches
- watch for reports of exploits
- if exploits, get offline and install already-downloaded patches
- else if no "dead bodies" reported from patch bugs, install patches
- but if reports of "dead bodies", can avoid the relevant patches

As it is, if I don't want MS ramming "code of the day" when my back is
turned, I have to disable downloading updates altogether, so...
- do NOT download patches as soon as available
- watch for reports of problems with patches
- watch for reports of exploits
- if exploits, have to stay online to download patches -> exploited
- if no "dead bodies" from patch bugs, downloads and install patches
- but if reports of "dead bodies", can avoid the relevant patches
There is almost no such thing as flawless software, particularly when
you are talking about tens of millions of lines of code in an OS.

Sure, and the lesson is to design and code with this in mind, reducing
automatic exposure of surfaces to arbitrary material, and ensuring
that any code can be amputated immediately, pending patch.

If all non-trivial code has bugs, and you need bug-free code, then the
solution is to keep that code trivial ;-)

I see this as akin to high-wear surfaces within mechanical systems.
You'd nearly always design such systems so that high-wear parts are
cheap and detachable for field replacement, e.g. a pressed-steel bore
within an aluminium block, piston rings that are not built permanently
into the piston, removable crank bearings rather than ball bearings
running directly on crank and case surfaces, etc.

I don't see enough of that awareness in modern Windows. If anything,
the trend is in the other direction; more automated and complex
handling of material that the user has indicated no intention to
"open", poor or absent file type discipline, subsystems that cannot be
excluded from installation or uninstalled, etc.
Every major OS vendor on the planet regularly ships patches and updates for
their products.

They do indeed, yes, and many vendors are lagging behind MS in
sensible practice. For example, Sun were still allowing Java applets
to "ask" the JRE to pass them through to older versions "for backward
compatibility", and installing new JRE versions did not get rid of old
ones, allowing these to remain a threat.

But the bottom line is, it's a suspension of disbelief to trust patch
code (that may be hastily developed under present exploits) to work
when slid under installed applications that could not possibly have
been written for such code, especially when the reason to swallow such
code is because the same vendor messed up when writing the same code
under less-rushed pre-release circumstances.

What should have happened, is that the first time some unforgivable
code defect allowed widespread exploitation (say, the failure to check
MIME type against file name extension and actual material type when
processing in-line files in HTML "message text"), the vendor should
have been stomped so hard that they'd dare not make the same sort of
mistake again.

Instead, the norm is for software vendors to fail so regularly that we
have to automate the process of "patching". Vendors can do this by
making the patch material available on a server, leaving it to the
user to cover the cost of obtaining it. Meanwhile, stocks of
defective product are not recalled, nor are replacement disks added to
those packages, so what you buy after the fact is still defective and
still has to be patched at your expense.

Couple that with the common advice to "just" wipe and re-install, and
you will be constantly falling back to unpatched status, and having to
pull down massive wads of "repair" material - something that just is
not possible to do via pay-per-second dial-up.

I was impressed when MS shipped XP SP2 CDs to end users, as well as
the security roll-up CDs for Windows all the way back to Win98. But
we still need the ability to regenerate a fully-functional and
fully-patched OS installation and maintenance disk - something that
"royalty" OEMs don't provide their victims even at purchase time.
Then that is their problem

Not really, no. The problem arises from a bug rate within exposed
surfaces that is unsustainable for the extent of those surfaces,
forcing too many patches to manage manually. Yes, it becomes our
problem, but we didn't cause it other than by choosing to use a
platform that is so widely used that it is immediately attacked.

That equation not only favors minority platforms such as Mac OS and
Linux, it also favors an abandonment of personal computing for SaaS,
for which the risks are currently way underestimated.


Note that I don't see the need to patch as an MS issue, given that (as
you mention) all equally-complex systems have similar needs to patch.

What puts MS on the sharp end, is the degree of exposure - like the
difference between the bearing on your boot hinge, and the bearing
that holds the crankshaft in the engine block.

It's been amusing to see how well (or rather, how poorly) Apple's
Safari browser has fared, when exposed to the same "wear".

A trend I really don't like is where relatively trivial software
vendors jump on the "update" bandwagon, leveraging this to re-assert
their choice of settings or "call home". It's bad enough that buggy
code quality is rewarded with tighter vendor dependency as it is.
I have worked with enterprises across this whole spectrum, from fully
dedicated patch management teams that perform full and complete regression
testing for all patches they need to roll out internally to extremely poor
ad hoc solutions and minimal testing.

I'm not talking enterprises here. They are well-positioned to
manage the problem; it's the "free" end-users with their single PCs or
small peer-to-peer LANs I'm thinking about.

Collectively, all those loose systems can act as very large botnets.
------------ ----- ---- --- -- - - - -
The most accurate diagnostic instrument
in medicine is the Retrospectoscope
 

Kerry Brown

cquirke (MVP Windows shell/user) said:
Sounds good - by "one server", do you mean a server dedicated to this
task alone, or can that be the only server you have? Will it run on
SBS, or is there a different solution for that?


SBS 2003 R2 comes with WSUS out of the box.
 
