Microsoft Zero Day security holes being exploited


Dan W.

cquirke said:
The weakness here is that anything that runs during the user's session
is deemed to have been run with the user's intent, and gets the same
rights as the user. This is an inappropriate assumption when there
are so many by-design opportunities for code to run automatically,
whether the user intended to do so or not.



Many vulnerabilities fall into that category, often because the extra
requirement was originally seen as sufficient mitigation.
Vulnerabilities don't have to facilitate primary entry to be
significant; they may escalate access after entry, or allow the active
malware state to persist across Windows sessions, etc.



OK, now I'm with you, and I agree with you up to a point. I dunno
where the earlier poster got the notion that Winlogon was there to act
as his "ace in the hole" for controlling malware, as was implied.



I agree with you that it is not - the problem is the difficulty that
the user faces when trying to regain control over malware that is
using Winlogon and similar integration points.

The safety defect is that:
- these integration points are also effective in Safe Mode
- there is no maintenance OS from which they can be managed

We're told we don't need a HD-independent mOS because we have Safe
Mode, ignoring the possibility that Safe Mode's core code may itself
be infected. Playing along with that assertion, we'd expect Safe Mode
to disable any 3rd-party integration and to provide a UI through
which these integration points can be managed.

But this is not the case - the safety defect is that once software is
permitted to run on the system, the user lacks the tools to regain
control from that software. Couple that with the Windows propensity
to auto-run material either by design or via defects, and you have
one of the most common PC management crises around.


That's a safety flaw right there.
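
Since the thread keeps coming back to Winlogon as the integration
point malware abuses, here's a minimal sketch of checking it yourself,
assuming Python 3 is available on the machine being inspected; the
"expected" values below are the usual out-of-box XP defaults and may
differ on your install:

import winreg

# Winlogon values are honoured even in Safe Mode, which is why
# malware favours them as integration points.
WINLOGON = r"Software\Microsoft\Windows NT\CurrentVersion\Winlogon"
EXPECTED = {
    "Shell": "Explorer.exe",
    "Userinit": r"C:\WINDOWS\system32\userinit.exe,",
}

key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WINLOGON)
for name, expected in EXPECTED.items():
    value, _type = winreg.QueryValueEx(key, name)
    note = "" if value == expected else "   <-- unexpected, investigate"
    print(f"{name} = {value}{note}")
winreg.CloseKey(key)

Anything appended to those defaults is a candidate for exactly the
kind of software the user currently has no tools to regain control
from.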

You're prolly thinking from the pro-IT perspective, where users are
literally wage-slaves - the PC is owned by someone else, the time the
user spends on the PC is owned by someone else, and that someone else
expects to override user control over the system.

So we have the notion of "administrators" vs. "users". Then you'd
need a single administrator to be able to manage multiple PCs without
having to actually waddle over to all those keyboards - so you design
in backdoors to facilitate administration via the network.

Which is fine - in the un-free world of mass business computing.

But the home user owns their PC, and there is no-one else who should
have the right to usurp that control. Creditors and police do not
have the right to break in, search, or seize within the user's home.

So what happens when an OS designed for wage-slavery is dropped into
free homes as-is? Who is the notional "administrator"? Why is the
Internet treated as if it were a closed and professionally-secured
network? There are no "good administrators" and "bad administrators"
here; just the person at the keyboard who should have full control
over the system, and other nebulous entities on the Internet who
should have zero control over the system.

Whatever some automated process or network visitation has done to a
system, the home user at the keyboard should be able to undo.

Windows XP Home is simply not designed for free users to assert their
rights of ownership, and that's a problem deeper than bits and bytes.



The rights you save may be your own

Exactly, Chris and again well said and a great reason for the Windows
Classic Edition which I have started to work on because I can no longer
depend only on Microsoft. I actually plan to try and get a somewhat
decent copy with tri-mode (9x, NT (New Technology) and open source)
solutions to present to Redmond, Washington when Microsoft decides --
hmm --- this might be good after all and realizes their folly in trying
to fully eliminate the awesome 9x source code in face of the
fundamentally flawed NT source code to start with. The issue is this:
if you examine both source codes at the core level, with all
features and functionality stripped away to as raw a code as possible,
then you will see the inherent weakness of NT - it is like the
foolish man who built his house upon sand in the Bible, while the
9x source code at its most raw and basic form is like the wise man
who built his house upon the Rock. Forgive me for bringing religion
into it, but this was the best analogy that I could think up, and I
appreciate your understanding.
 

Paul Adare

Dan W. said:
Wow, I will need to check out Bart from your website to read up on it.

Please do us a favour, if all you're going to do is to post a single
line reply, then don't quote the entire message thread to which you're
responding. Remove all but what is really relevant and necessary.
 

Paul Adare

Dan W. said:
I actually plan to try and get a somewhat
decent copy with tri-mode (9x, NT (New Technology) and open source)
solutions to present to Redmond, Washington when Microsoft decides --
hmm --- this might be good after all and realizes their folly in trying
to fully eliminate the awesome 9x source code in face of the
fundamentally flawed NT source code to start with. The issue is this:
if you examine both source codes at the core level, with all
features and functionality stripped away to as raw a code as possible,
then you will see the inherent weakness of NT

You have access to the source code that allows you to make the above
statement?
Let me know when you get that meeting with Microsoft scheduled, will
you? I'd like to be present when you attempt to explain how Windows 9x
is more secure than is NT. I always appreciate a good laugh.
 

cquirke (MVP Windows shell/user)

More to the point is that vulnerable surfaces are less-often exposed
to clickless attack - that's really what makes Win9x safer.

You can use an email app that displays only message text, without any
inline content such as graphics etc. so that JPG and WMF exploit
surfaces are less exposed. Couple that with an OS that doesn't wave
RPC, LSASS etc. at the 'net and doesn't grope material underfoot
(indexing) or when folders are viewed ("View As Web Page" and other
metadata handlers) and you're getting somewhere.

For those who cannot subscribe to the "keep getting those patches,
folks!" model, the above makes a lot of sense.

There's no SR in Win98, tho that was prolly when the first 3rd-party
SR-like utilities started to appear. I remember two of these that
seemed to inform WinME-era SR design.

No-one seemed that interested in adding these utilities, yet when the
same functionality was built into WinME, it was touted as reason to
switch to 'ME, and when this functionality fell over, users were often
advised to "just" re-install to regain it. I doubt if we'd have
advised users to "just" re-install the OS so that some 3rd-party
add-on could work again.

XP's SR certainly is massively improved over WinME - and there's so
little in common between them that it's rare one can offer SR
management or troubleshooting advice that applies to both OSs equally.

I use SR in XP, and kill it at birth in WinME - that's the size of the
difference, though a one-lunger (one big doomed C:) installation may
find the downsides of WinME's SR to be less of an issue.
Dan W. said:
about Microsoft and its early days to present time. The early Microsoft
software engineers nicknamed it the Not There code since it did not have
the type of maintenance operating system that Chris Quirke, MVP fondly
talks about in regards to 98 Second Edition.

If the MOS being discussed for Win 98 is the system boot disk floppy, that
was a very basic MOS and it still works on Windows XP just as well as it
ever did on Windows 98. [Sure, you either have to format your disk as FAT,
or use a third party DOS NTFS driver.]

That was true, until we crossed the 137G limit (where DOS mode is no
longer safe). It's a major reason why I still avoid NTFS... Bart
works so well as a mOS for malware management that I seldom use DOS
mode for that in XP systems, but data recovery and manual file system
maintenance remain seriously limited for NTFS.

Well, ever onward and all that ;-)

Bart is a bigger and better mOS, though it depends on how you build it
(and yes, the effort of building it is larger than for DOS mode
solutions). You can build a mOS from Bart that breaks various mOS
safety rules (e.g. falls through to boot HD on unattended reset,
automatically writes to HD, uses Explorer as shell and thus opens the
risk of malware exploiting its surfaces, etc.).

I'm hoping MS WinPE 2.0, or the subset of this that is built into the
Vista installation DVD, will match what Bart offers. Initial testing
suggests it has the potential, though some mOS safety rules have been
broken (e.g. fall-through to HD boot, requires visible Vista
installation to work, etc.).

The RAM testing component is nice but breaks so many mOS safety rules
so badly that I consider it unfit for use:
- spontaneous reset will reboot the HD
- HD is examined for Vista installation before you reach the test
- a large amount of UI code required to reach the test
- test drops the RAM tester on HD for next boot (!!)
- test logs results to the HD (!!)
- you have to boot full Vista off HD to see the results (!!!)

What this screams to me, is that MS still doesn't "get" what a mOS is,
or how it should be designed. I can understand this, as MS WinPE was
originally intended purely for setting up brand-new, presumed-good
hardware with a fresh (destructive) OS installation.

By default, the RAM test does only one or a few passes; it takes under
an hour - and thus is only going to detect pretty grossly-bad RAM.
Grossly bad RAM is unlikely to run an entire GUI reliably, and can
bit-flip any address to the wrong one, or any "read HD" call into a
"write HD" call. The more code you run, the higher the risk of data
corruption, and NO writes to HD should ever be done while the RAM is
suspected to be bad (which is, after all, why we are testing it).

A mOS boot should never automatically chain to HD boot after a time
out, because the reason you'd be using a mOS in the first place is
because you daren't boot the HD. So when the mOS disk boots, the only
safe thing to do is quickly reach a menu via a minimum of code, and
stop there, with no-time-out fall-through.
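
As a concrete example of that rule, here's a minimal sketch of a boot
menu that stops and waits, assuming a SYSLINUX-based mOS disc (the
label and file names are illustrative):

# syslinux.cfg for a mOS boot disc
PROMPT 1        # always stop at the boot prompt
TIMEOUT 0       # zero disables the timeout - wait forever, no fall-through
DEFAULT mos

LABEL mos
  KERNEL memdisk
  APPEND initrd=mos.img    # the mOS image on the disc

LABEL ramtest
  KERNEL memtest           # an explicit choice, never a silent default

# Deliberately no LOCALBOOT entry, so this menu cannot chain to the HD.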

It's tempting to fall-through to the RAM test as the only safe option,
but that can undermine unattended RAM testing - if the system
spontaneously resets during such testing, you need to know that, and
it's not obvious if the reboot restarts the RAM test again.

Until RAM, physical HD and logical file system are known to be safe,
and it's known that no deleted material needs to be recovered, it
is not safe to write to any HD. That means no page file, no swap, and
no "drop and reboot" methods of restarting particular tests.

Until the HD's contents are known to be malware-free, it is unsafe to
run any code off the HD. This goes beyond not booting the HD, or
looking for drivers on the HD; it also means not automatically groping
material there (e.g. when listing files in a folder) as doing so opens
up internal surfaces of the mOS to exploitation risks.


Karl's right, tho... I'm already thinking beyond regaining what we
lost when hardware (> 137G, USB, etc.) and NTFS broke the ability to
use DOS mode as a mOS, to what a purpose-built mOS could offer.

For example, it could contain a generic file and redirected-registry
scanning engine into which av vendors' scanning modules could be
plugged. It could offer a single UI to manage these (i.e. "scan all
files", "don't automatically clean" etc.) and could collate the
results into a single log. It could improve efficiency by applying
each engine in turn to material that is read once, rather than the
norm of having each av scanner pull up the material to scan.
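
A minimal sketch of that read-once, many-engines idea, in Python; the
ScanEngine shape and the toy EICAR engine are hypothetical, not any
real av vendor's API:

import os
from typing import Callable, List, Optional, Tuple

# An "engine" is a pure detection callable: (path, data) -> verdict or None.
ScanEngine = Callable[[str, bytes], Optional[str]]

def eicar_demo_engine(path: str, data: bytes) -> Optional[str]:
    """Toy engine that flags the standard EICAR test string."""
    if b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE" in data:
        return "EICAR-Test-File"
    return None

def scan_tree(root: str, engines: List[Tuple[str, ScanEngine]]) -> List[str]:
    """Read each file once, hand the same bytes to every engine,
    and collate all verdicts into a single log."""
    log = []
    for dirpath, _dirs, files in os.walk(root):
        for fname in files:
            path = os.path.join(dirpath, fname)
            try:
                with open(path, "rb") as f:
                    data = f.read()           # read once...
            except OSError:
                continue                      # unreadable; skip
            for vendor, engine in engines:    # ...scan with every engine
                verdict = engine(path, data)
                if verdict:
                    log.append(f"{vendor}: {verdict} in {path}")
    return log

if __name__ == "__main__":
    for line in scan_tree(r"D:\suspect", [("demo", eicar_demo_engine)]):
        print(line)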

MS could be accused of foreclosing opportunities to av vendors
(blocking kernel access, competing One Care and Defender products),
but this sort of mOS design could open up new opportunities.

Normally, the av market is "dead man's shoes"; a system can have only
one resident scanner, so the race is on to be that scanner (e.g. OEM
bundling deals that reduce per-license revenue). Once users have an
av, it becomes very difficult to get them to switch - they can't try
out an alternate av without uninstalling what they have, and no-one
wants to do that. It's only when feeware av "dies" at the end of a
subscription period, that the user will consider a switch.

But a multi-av mOS allows av vendors to have their engines compared,
at a fairly low development cost. They don't have to create any UI at
all, because the mOS does that; all they have to do is provide a pure
detection and cleaning engine, which is their core competency anyway.

Chances are, some av vendors would prefer to avoid that challenge :)

They are good few-trick ponies, but they do not constitute a mOS.
They can't run arbitrary apps, so they aren't an OS, and if they
aren't an OS, then by definition they aren't a mOS either.

As it is, RC is crippled as a "recovery" environment, because it can't
access anything other than C: and can't write to anywhere else. Even
before you realise you'd have to copy files off one at a time (no
wildcards, no subtree copy), this kills any data recovery prospects.

At best, RC and OS installation options can be considered "vendor
support obligation" tools, i.e. they assist MS in getting MS's
products working again. Your data is completely irrelevant.

It gets worse; MS accepts crippled OEM OS licensing as being "Genuine"
(i.e. MS got paid) even if they provide NONE of that functionality.

The driver's not even in the car, let alone asleep at the wheel :-(

They do different things.

RC and installation options can regain bootability and OS
functionality, and if you have enabled Set commands before the crisis
you are trying to manage, you can copy off files one at a time. They
are limited to that, as no additional programs can be run.
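
For reference, those Set commands look something like this at the RC
prompt, and they only take effect if the "Recovery Console: Allow
floppy copy and access to all drives and all folders" policy was
enabled before the crisis:

SET AllowAllPaths = TRUE
SET AllowWildCards = TRUE
SET AllowRemovableMedia = TRUE
SET NoCopyPrompt = TRUE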

In contrast, a Win98EBD is an OS, and can run other programs from
diskette, RAM disk or CDR. Such programs include Regedit
(non-interactive, i.e. import/export .REG only), Scandisk (interactive
file system repair, which NTFS still lacks), Odi's LFN tools (copy off
files in bulk, preserving LFNs), Disk Edit (manually repair or
re-create file system structures), and a number of av.
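
As an example of that non-interactive Regedit use, from a Win98 EBD
you can export and re-import a registry branch like this (the paths
and file names are illustrative):

REM export a branch to a .REG text file for offline inspection/editing
REGEDIT /L:C:\WINDOWS\SYSTEM.DAT /E C:\RUNKEYS.REG "HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run"

REM merge the edited file back into the registry
REGEDIT /L:C:\WINDOWS\SYSTEM.DAT C:\RUNKEYS.REG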

So while XP's tools are bound to getting XP running again, Win98EBD
functionality encompasses data recovery, malware cleanup, and hardware
diagnostics. It's a no-brainer as to which I'd want (both!).

That's the point I keep trying to make - what Dan refers to is what
I'd call "safety", whereas what Karl's referring to is what I'd call
"security". Security rests on safety, because the benefit of
restricting access to the right users is undermined if what happens is
not limited to what these users intended to happen.

Er... no, not really. That hasn't been my mileage with any Win9x,
compared to Win3.yuk - and as usual, YMMV based on what your hardware
standards are, and how you set up the system. I do find XP more
stable, as I'd expect, given NT's greater protection for hardware.

Mmmh... AFAIK, that sort of protection has been there since Win3.1 at
least (specifically, the "386 Enhanced" mode of Win3.x). Even DOS
used different memory segments for code and data, though it didn't use
386 design to police this separation.

IOW, the promise that "an app can crash, and all that happens is that
app is terminated, the rest of the OS keeps running!" has been made
for every version of Windows since Win3.x - it's just that the reality
always falls short of the promise. It still does, though it gets a
little closer every time.

If anything, there seems to be a back-track on the concept of data vs.
code separation, and this may be a consequence of the
Object-Orientated model. Before, you'd load some monolithic program
into its code segment, which would then load data into a separate data
segment. Now you have multiple objects, each of which can contain
their own variables (properties) and code (methods).

We're running after the horse by band-aiding CPU-based No-Execute
trapping, so that when (not if) our current software design allows
"data" to spew over into code space, we can catch it.

The real millstone was Win3.yuk (think heaps, co-operative
multitasking). Ironically, DOS apps multitask better than Win16 ones,
as each DOS app lives in its own VM and is pre-emptively multi-tasked.

64-bit is the opportunity to make new rules, as Vista is doing (e.g.
no intrusions into kernel allowed). I'm hoping that this will be as
beneficial as hardware virtualization was for NT.

Win9x apps don't cast as much of a shadow, as after all, Win9x's
native application code was to be the same as NT's. What is a
challenge is getting vendors to conform to reduced user rights, as up
until XP, they could simply ignore this.

There's also the burden of legacy integration points, from
Autoexec.bat through Win.ini through the various fads and fashions of
Win9x and NT and beyond. There's something seriously wrong if MS is
unable to enumerate every single integration point, and provide a
super-MSConfig to manage them all from a single UI.
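
A minimal sketch of what such enumeration might look like, in Python
under Windows; the list below is a small sample of well-known
integration points, nowhere near the complete set being asked for:

import winreg

# A sample of auto-run integration points - far from all of them.
RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\RunOnce"),
    (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def dump_values(root, path):
    """Print every value under one registry key, if it exists."""
    try:
        key = winreg.OpenKey(root, path)
    except OSError:
        return
    print(f"[{path}]")
    i = 0
    while True:
        try:
            name, value, _type = winreg.EnumValue(key, i)
        except OSError:
            break                      # no more values
        print(f"  {name} = {value}")
        i += 1
    winreg.CloseKey(key)

for root, path in RUN_KEYS:
    dump_values(root, path)

# Legacy text-file integration points (Win.ini load=/run=) still count:
try:
    for line in open(r"C:\WINDOWS\WIN.INI", errors="ignore"):
        if line.lower().startswith(("load=", "run=")):
            print("WIN.INI:", line.strip())
except OSError:
    pass
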
Dan W. said:
Classic Edition could be completely compatible with the older software
such as Windows 3.1 programs and DOS programs. Heck, Microsoft
could do this in a heartbeat without too much trouble.

Think about that. Who sits in exactly the same job for 12 years?

All the coders who actually made Win95, aren't front-line coders at MS
anymore. They've either left, or they've climbed the ladder into
other types of job, such as division managers, software architects
etc. To the folks who are currently front-line coders, making Vista
etc., Win9x is as alien as (say) Linux or OS/2.

To build a new Win9x, MS would have to re-train a number of new
coders, which would take ages, and then they'd have to keep this
skills pool alive as long as the new Win9x were in use. I don't see
them wanting to do that, especially as they had such a battle to
sunset Win9x and move everyone over to NT (XP) in the first place.

Also, think about what you want from Win9x - you may find that what
you really want is a set of attributes that are not inherently unique
to Win9x at all, and which may be present in (say) embedded XP.


If you really do need the ability to run DOS and Win3.yuk apps, then
you'd be better served by an emulator for these OSs.

This not only protects the rest of the system from the oddball
activities of these platforms, but can also virtualize incompatible
hardware and mimic the expected slower clock speeds more smoothly than
direct execution could offer. This is important, as unexpected speed
and disparity between instruction times is as much a reason for old
software to fail on new systems as changes within Windows itself.

Dan W. said:
I will do what it takes to see this come to reality.

Stick around on this, even if there's no further Win9x as such. As we
can see from MS's first mOS since Win98 and WinME EBDs, there's more
to doing this than the ability to write working code - there has to be
an understanding of what the code should do in the "real world".

-- Risk Management is the clue that asks:
"Why do I keep open buckets of petrol next to all the
ashtrays in the lounge, when I don't even have a car?"
 

imhotep

Paul said:
You have access to the source code that allows you to make the above
statement?
Let me know when you get that meeting with Microsoft scheduled, will
you? I'd like to be present when you attempt to explain how Windows 9x
is more secure than is NT. I always appreciate a good laugh.

No reason to be sarcastic....

Im
 

Stephen Howe

Exactly, Chris and again well said and a great reason for the Windows
Classic Edition which I have started to work on because I can no longer
depend only on Microsoft. I actually plan to try and get a somewhat
decent copy with tri-mode (9x, NT (New Technology) and open source)
solutions to present to Redmond, Washington when Microsoft decides --
hmm --- this might be good after all and realizes their folly in trying
to fully eliminate the awesome 9x source code in face of the
fundamentally flawed NT source code to start with.

Oh give over.
You reveal by this that you are not a programmer.

Windows 9x is based on Windows 3.11.
It is not fully 32-bit, it is a hybrid OS with parts 16-bit, parts 32-bit.
You can see that by examining the size of the 3 main Windows files:
32-bit versions:
GDI32.DLL
USER32.DLL
KERNEL32.DLL
versus the 16-bit versions
GDI.EXE
USER.EXE
KRNL386.EXE
The direction of the thunking can be seen by comparing like-with-like, and
you can see at once that Microsoft's conversion to 32-bit was incomplete (I
think IBM's success selling OS/2 2.0 forced Microsoft's hand).
Memory was provided by a DPMI host which in turn was supplied by HIMEM.SYS.
Processes were not properly insulated from each other.

In contrast, Windows NT is fully 32-bit, written from the ground up with
Dave Cutler in control.
He wrote the OS VMS for DEC. Yes there might be bugs in NT 3.1, NT 3.5, NT
4.0, 2000 but the underlying architecture is sound, certainly much better
than the Windows 9x line.

See
http://en.wikipedia.org/wiki/Microsoft_Windows#Hybrid_16.2F32-bit_operating_systems

Stephen Howe
 

imhotep

Stephen Howe said:
Oh give over.
You reveal by this that you are not a programmer.

Windows 9x is based on Windows 3.11.
It is not fully 32-bit, it is a hybrid OS with parts 16-bit, parts 32-bit.
You can see that by examining the size of the 3 main Windows files:
32-bit versions:
GDI32.DLL
USER32.DLL
KERNEL32.DLL
versus the 16-bit versions
GDI.EXE
USER.EXE
KRNL386.EXE
The direction of the thunking can be seen by comparing like-with-like, and
you can see at once that Microsoft's conversion to 32-bit was incomplete (I
think IBM's success selling OS/2 2.0 forced Microsoft's hand).
Memory was provided by a DPMI host which in turn was supplied by
HIMEM.SYS. Processes were not properly insulated from each other.


Yes, Microsoft at the time got caught in a lie, as they were saying that
Windows 98 was 32-bit. Microsoft lying? Oh my, what is the damn world
coming to!!!

In contrast, Windows NT is fully 32-bit, written from the ground up with
Dave Cutler in control.
He wrote the OS VMS for DEC. Yes there might be bugs in NT 3.1, NT 3.5, NT
4.0, 2000 but the underlying architecture is sound, certainly much better
than the Windows 9x line.

Well, Dave did, but VMS always was a better OS... still is.
 

imhotep

cquirke said:
<cough>

See http://cquirke.mvps.org/9x/mimehole.htm, Google( BadTrans B )

That's a very old bug, long fixed unless you "just" re-installed any
OS predating XP, as every such OS uses an IE that is both exploitable
and too "old" to be patched other than by upgrading the whole of IE.

If you read up on that bug, you'd see how the nature of exploits and
code bugs has changed.

The MIME hole was a design safety failure, not a code defect - IOW, it
"worked as designed" but the design was brain-dead.

There's still design failures in Windows, and until these are proven
to be exploitable, they won't be patched because "it's working the way
we expected it to". Most exploits that are being patched today are
genuine code defects, and may be harder to exploit.

Then again, the modern malware industry is optimised to overcome any
"an attacker would have to..." mitigations. Once an exploit shape is
found, the source code becomes rapidly available, and malware coders
then drop it straight into attack vehicles that are ready to roll;
either full-fledged multi-function bots, or simple stubs that can pull
down the "real" malware. If these malware haven't been released
before, av won't "know" them at the signature level.


Malware can always out-turn patching. The attacks are smaller than
the patches and can drown out the patching process by sheer volume,
even before you consider DDoSing the fewer number of patching sources
or poisoning the patch pool via fake patching sources.

The other reason malware will always win the race is that the required
software standards are far lower. A malware has to work on some PCs,
and it doesn't matter if it trashes others. But a patch has to work
on all systems, and not cause new problems on any of them.

If you insist on butting heads with malware on a level playing field,
you will always lose. Better to tilt the playing field so that the
user at the keyboard has the ability to trump all code and remote
access - but MS's fixation on centrally-managed IT and DRM undermines
this and rots the top of the Trust Stack.

See http://cquirke.blogspot.com/2006/08/trust-stack.html


Well, it could be that the nature of the hole was trivial to fix -
e.g. simply changing some static "secret" info that became harvested
and used by the attackers. I suspect this is the case, given how
quickly the fix has been circumvented by the attackers.

We have a very small sample from which to draw conclusions. Sure, we
have a lot of defects that allow user interests to be attacked, and we
have a smaller number where users were left hanging out to dry while
patching caught up with ITW exploits. But we have a sample of 1
prompt DRM fix, and it may just happen to have been an easy one to
fix; maybe the next one (or even the continuation of this one) will
take a far longer time to fix. If so, don't expect to read about it!

So... are you saying that all fixes should be held back the same
amount of time, even if they are ready earlier, so that MS can be seen
to act more promptly on the issues we'd most like to see fixed first?


BTW, the post I'm replying to has a ?bug of its own; the sole
newsgroup set for replies is not found to exist on my news server.
Maybe it exists on other news servers, who knows? Here, it's 100%
broken and buggy. Should I wave this around as "proof" that the
poster I'm replying to is trying to hide refutations to his post?

No. As the OP, I set the Followup-To to
microsoft.public.internetexplorer.security, which is a valid newsgroup.
Although I should have set it to this newsgroup,
microsoft.public.security...

Im
 
