Amazing. Reading your response was like finding a box half buried in the
sand, opening it, and seeing a pirate's treasure. While I understand it, I
was amazed at some enemy tactics I never knew about before. E.g. using bad
code in cookies, or a defect in a firewall program (e.g. Black Ice), to turn
that program against the OS, as opposed to directly attacking a weakness in
the OS itself. Makes perfect sense though.
It really made me realize that the more filtering programs a person uses,
the greater the possibility that one of these well-intended programs will
compromise the OS. In other words, every anti-virus, anti-spyware,
cookie-filter, and firewall program exposed to the outside world is another
target the enemy will try to manipulate into betraying the trust of the OS,
in order to attack system files/folders. Even more reason, it seems, to
isolate what runs in memory from what's stored on disk. Maybe absolute
safety can
only be attained by a diskless internet appliance. But then, many web sites
that use ActiveX components or require persistent cookies wouldn't work at
all.
From a conceptual perspective (high-altitude view), I really like the
approach of making the hard disk as completely off limits as possible to the
account that surfs the web, and confining everything that comes down the
wire to run only in memory, to the greatest degree possible. Even though
that will never ensure total protection for the OS system folders, from
everything I've seen it should be one of many strong layers of protection in
our defense - just as laminating several layers of composite material is
stronger than a single layer of strong material.
In the real world though, in order to enjoy the many web sites that have
forfeited safer tools in favor of riskier ones that "enhance our
experience", opening up a folder on disk is unavoidable for the web user
account. Which is why we still need all those other layers of defense
(anti-virus, anti-spyware, firewall, etc.). I must say it is sad (IMHO) that
folder permissions are rarely mentioned as a tool/tactic in newsgroup
advice, alongside the top 3 (anti-virus, anti-spyware, firewall). Especially
since folder permissions carry less risk than filtering programs of being
modified or manipulated by enemy forces.
On Sat, 29 May 2004 12:44:19 -0500, "JW" wrote:
Cquirke, thanks for including your experience in this thread. While very
valuable indeed, it was sometimes over my head and hard to grasp.
When that happens, quote back the sticky bits and I'll try to explain
them in more detail. Top-posting may make it more difficult for me to
know which bits are sticky, though, especially if you don't trim out
the parts of the quoted material you don't need more details on.
Hope you're still watching this,
Yep - I tend to hang on to the threads I enter, so as long as you
don't start a new thread, I should still be there
Millions of other XP users like me are trying hard to understand and use
every tool and feature in XP to lock down security as best as is possible.
I'm one of those users too
While I understand the terms "many ways to escalate beyond intended account
rights" and "malware drills below this level of abstraction", the method
escapes me. (Knowing how it is done would lead to an understanding of how
to inhibit it.)
It's not useful to enumerate the ways, because to do so presupposes
new ways will not be discovered. Instead, you can predict what will
happen just by looking at this conceptually.
Human activities have an inescapable error factor. If I ask you to do
something utterly menial, such as write the letter R on paper 10000
times, you will make some mistakes. Read this post and you will see
typos, and that's in English, my first language, not (say) C++.
The more complex a system is, the more likely there will be errors -
in fact, with modern software, this tends toward inevitability. This
makes computers interesting, in that they begin to act
non-deterministically. A practical consequence is that one should not
assume any slab of code will always work properly, and thus the more
code that is exposed to the "outside", the higher the risk of exploit.
It doesn't matter what the code is, or how it's intended to work.
Good system design would simply remove dangerous functionalities that
none of that system's users intend to use, and rely on weaker risk
management measures (passwords, security zones, account rights) only where
functionality is to be used, but only in certain contexts.
If you have to rely on a weaker risk management strategy, such as
passwords etc., then this is most effective when the surface exposed
to the "outside" (the "fronteir", in other words) is small. The worst
scenario is where these risk filtering measures are expected to
operate throught the interior of the system - there's such a large
surface area of code exposed, that breakthroughs are inevitable.
Breakthroughts would fall into these categories:
- spoofing a more powerful context (cracking pwds, etc.)
- breaking through into a more powerful context
- drilling beneath that entire layer of abstraction
Millions of us newcomers are thinking that folder permissions
in XP security are not ambiguous or equivocal. E.g. Deny permission
to UserA does not mean UserA is sometimes denied access but
sometimes can drill through it.
Yep. But take Witty as an example; this drills into a defect in Black
Ice Defender (a third-party firewall) and thus attains raw Ring 0
access to the system. At that far lower level of abstraction,
concepts such as "user", "permissions" or even "file system" simply
don't exist. The downside for Witty is that while it's operating at
this low level, it would have to construct by hand an awareness of the
file system in order to find and read files - but if all it wants to
do is trash stuff, it can (and does) simply write to raw disk.
The take-home messages here are:
- no security measure is 100% effective
- therefore *any* measure is useful if downside is small enough
- therefore also, plan what to do *when* defences are breached
In order to move toward an understanding of how to better secure our
systems, how exactly does malware "drill below this level of abstraction"?
In the case of Witty, it finds an opportunity presented by bad coding
to position its code such that Black Ice Defender will run it. From
that moment on, it's indivisible from Black Ice Defender as far as the
OS is concerned, and it can do whatever that app can do.
There are other ways in which context is lost. For example, consider
security zones such as Internet Zone, My Computer Zone, etc. If a
3rd-party email app passes HTML "message text" to the OS to render as
a Temp file, the chances are high that the OS will process that temp
file as per the My Computer ("anything goes") zone.
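
As a minimal sketch of that hand-off (my own illustration in Python, not
anything from a real mail client; the point is just that once the content
lands on local disk and is opened by the default handler, its Internet
origin is forgotten):

# Hedged illustration: a mail client writes untrusted HTML "message text"
# to a local temp file and hands it to the OS default handler. On local
# disk it is liable to be rendered in the permissive "My Computer" zone
# rather than the "Internet" zone it arrived from.
import os
import tempfile

untrusted_html = "<html><script>/* attacker-supplied content */</script></html>"

fd, path = tempfile.mkstemp(suffix=".html")
with os.fdopen(fd, "w") as f:
    f.write(untrusted_html)

os.startfile(path)  # Windows-only: open with default handler; zone context is lost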
If you read the various security alerts, you will see that many of
these are about loss of context, or an escalation from one context to
another more powerful one.
How does malware "escalate beyond intended account rights"?
As above. Malware opportunities arise in three ways:
- social engineering
- bad design
- bad code
Patches address bad code, but often the bad code is just a wart on
the back of a bad design, and you'd prefer to rip out the entire bad
design as your risk management strategy.
For example, one security alert describes a defect where scripts
within cookies are processed in "My Computer" security zone, rather
than the intended "Internet Zone". As far as MS is concerned, that's
an example of bad code. As far as I'm concerned, that's an example of
bad design - what the hell is the OS running scripts in cookies for,
anyway? - and the patch does NOT address the *design* issue.
What exactly are "these other ways to reduce risk exposure" which are
"rendered difficult or impossible to use" when employing user accounts
(i.e. folder permissions) as a security strategy? Regarding the single
Admin account that you use, what exactly does "set up properly" mean?
My starting point is this:
- what I don't intend to risk, I wall out
- what some may need to risk, I differentiate (pwd, etc.)
- what I may need to risk, I evaluate first
- what I decide to risk, I av-scan first
So antivirus is the "goalie of last resort" in this chain.
In order to evaluate risk, I need decent info; I need to know exactly
where I am in the namespace ("show full paths"), know that I am
looking at everything that is there ("do not hide system or hidden
files") and am presented with information about the type of files I am
looking at ("do not hide file name extensions").
If I limit an account in XP Home, it falls back to hiding paths,
hidden files, and file name extensions. Dangerous!
While I certainly agree that (3) "multiple user accounts are not easily
managed", and (2) settings do not stay the same when user account rights are
changed, these (2 and 3) are not reworked on a daily or weekly basis. In
most cases, once they're done (e.g. user account settings), they are not
done again for a long, long time.
You misunderstand me. When you drop an account from Admin rights in
XP Home, whatever settings you have already made revert to MS
duhfaults, and you cannot change them back.
Else it would be a nuisance rather than a crisis; you'd just change to
Admin, apply settings, and change back again, whenever you needed to
change settings that lower rights render inaccessible.
The item that worries me most is #1. How do new user accounts
get spawned, and who spawns them? Who has the right?
You (for human and bot values of "you") need admin rights to spawn new
accounts, and this can be done via keyboard and mouse, or
programmatically. When this is done, the new account starts off as per
"Default user", within the additional limitations I've mentioned if
the account has less than admin rights.
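
To make "programmatically" concrete, here's roughly how code that already
has admin rights could spawn and elevate an account by driving the built-in
"net" command (the account name and password are placeholders, purely for
illustration):

# Hedged sketch: spawn a new account and add it to Administrators from code
# that already runs with admin rights. Name/password are placeholders.
import subprocess

subprocess.run(["net", "user", "newaccount", "P@ssw0rd!", "/add"], check=True)
subprocess.run(["net", "localgroup", "Administrators", "newaccount", "/add"],
               check=True)

Malware with the same rights can do the same thing directly via the Win32
account APIs (NetUserAdd and friends), with no console window at all.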
Can a Limited user account spawn new user accounts?
Not directly, AFAIK.
Can a process spawn new user accounts if it is launched by
a Limited user account?
Once it transcends the limited user rights, yes. For a clueful
hacker or malware, it's a game of "Simon Says", that's all.
How can this spawning be stopped?
No front door I can think of, other than to create a "default user"
account that's so broken any new accounts created from it won't work.
An understanding of these lingering questions would help millions like me
defend ourselves better. Some examples or specifics would be helpful in
transforming your words into tangible steps leading to operational
weaponry.
Those are the skills I'm trying to build also. I'm relatively new to
NT, coming from a background in Win9x (that's what I was awarded MVP
in) and I read the XP newsgroups to learn more than to post.
E.g. other than the standard suite of defenses used by 99% of us home users
(anti-virus, anti-spyware, firewall, folder/account permissions), what
additional tools and tactics would you use to help defend a standalone PC?
My approach is:
- what I don't intend to risk, I wall out
- keep code patched up to date
- kill off admin shares
- kill off WSH (as I don't use it)
- wall out BHOs (I don't use them either)
- set MSware email to fake settings (I use Eudora)
- use FATxx instead of NTFS (controversial)
- avoid multiple user accounts
- disable remote desktop invites
- block \Autorun.inf processing on HD volumes
- use Classic view (less Desktop.ini risk exposure)
- never full-share C:\ or any part of the startup axis
- keep File and Print Sharing off Internet connection
- keep the firewall on
- what some may need to risk, I differentiate (pwd, etc.)
- as single user, nothing falls into this category
- I'd pretend to be a "limited" user if accounts didn't suck++
- what I may need to risk, I evaluate first
- I avoid any auto-running facilities
- improve the information that the OS presents to me
- keep myself up to date reading malware descs, etc.
- what I decide to risk, I av-scan first
- use email app that breaks out attachments on arrival
- keep incoming material out of data set in "suspect" subtree
- run one resident av and keep it up to date
- use additional non-resident av for on-demand and formal use
- update and use on-demand scanners for commercial malware
- make sure I can maintain system in event of disaster
- avoid NTFS until a suitable maintenance OS and formal av exist
- avoid NTFS until suitable data recovery tools exist
- enable Recovery Console to be more effective
- use a DOS mode as an alternate boot environment
- have an alternate web browser on hand
- find and build skills with maintenance tools
- automate backups (another *long* story, that!)
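
For two of the registry-level items in that list - killing the admin shares
and blocking Autorun on hard drives - here's a rough sketch of the registry
values involved. The value names are the commonly cited ones; verify them
before trusting the sketch:

# Hedged sketch (XP-era): disable the automatic admin shares (C$, ADMIN$) and
# tell Explorer to skip Autorun processing. NoDriveTypeAutoRun is a bitmask of
# drive types to ignore: 0x08 covers fixed (hard) disks, 0xFF covers everything.
import winreg

def set_dword(root, subkey, name, value):
    with winreg.CreateKey(root, subkey) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)

# Stop the Server service from recreating the admin shares on a workstation.
set_dword(winreg.HKEY_LOCAL_MACHINE,
          r"SYSTEM\CurrentControlSet\Services\lanmanserver\parameters",
          "AutoShareWks", 0)

# Skip Autorun on all drive types (use 0x08 to target only fixed disks).
set_dword(winreg.HKEY_CURRENT_USER,
          r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer",
          "NoDriveTypeAutoRun", 0xFF)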
Links:
http://cquirke.mvps.org/whatmos.htm
http://cquirke.mvps.org/ntfs.htm
http://cquirke.mvps.org/9x/safe2000.htm (dated but useful)
http://cquirke.mvps.org/9x/malware.htm (dated but useful)
http://cquirke.mvps.org/9x/riskfix.htm (Win9x-orientated)
http://cquirke.mvps.org/9x/eudwhy.htm (why I use Eudora email)
-------------------- ----- ---- --- -- - - - -
No, perfection is not an entrance requirement.
We'll settle for integrity and humility