So you really think Linux is better than Vista, do you?


cquirke (MVP Windows shell/user)

It's a learning curve, and I'm not really very far along, but I do feel
it has some inherent advantages over Windows.

It's quite nice to use a completely different system again, i.e.
revisit some of the core "ways to do things" decisions. I found that
very liberating when starting on the PC with DOS 3.3 and PICK; just
about everything DOS did, PICK did differently ;-)
For one, everything that can be done in the GUI can be done
at the command line

I'd be more impressed with the reverse, plus easy interoperability
between the two, e.g.
- do something in GUI
- save changes from same dialog in .REG (or equivalent) form
- edit .REG and apply via batch automation etc.
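
Half of that is scriptable today, if not from the same dialog; here's a
rough Python sketch of the "apply it via automation" part (the key path,
value name and .REG file are hypothetical examples):

    # Sketch: set the same value a GUI dialog would, but from a script.
    # Key path, value name and .reg file path are hypothetical examples.
    import subprocess
    import winreg

    # Direct route: write the value via the registry API.
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, r"Software\ExampleApp") as key:
        winreg.SetValueEx(key, "ShowTips", 0, winreg.REG_DWORD, 0)

    # Batch-style route: re-apply a previously exported .REG file.
    subprocess.run(["reg", "import", r"C:\scripts\exampleapp.reg"], check=True)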

The problem with GUI is that you have to learn a completely different
way to do the same thing from the CLI or batch files. This is no
obstacle to malware kiddies and devs; it's just hard enough to ensure
the end user is left unable to manage the system effectively.
Also, there is no registry; all the configuration data is in plain text files.

Windows used to do that too; from Config.sys/Autoexec.bat to
System.ini/Win.ini to System.dat/User.dat to NT's registry hives.

The reason Windows switched from .INI files to registry in Win95 was
scalability; it was just getting too inefficient to look up flat text
files. The registry is presumably more compact and easier to traverse
due to whatever indexing it may use.
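
For a feel of the scalability argument in the abstract (this says nothing
about Windows' actual code, just the cost of re-parsing flat text on every
lookup vs. querying an index built once), a rough Python sketch:

    # Toy illustration of the principle only, not a benchmark of Windows:
    # re-parse a flat settings file per lookup vs. query a pre-built index.
    import configparser
    import timeit

    flat_text = "[settings]\n" + "\n".join(f"key{i} = value{i}" for i in range(5000))

    def lookup_by_reparsing(name):
        cp = configparser.ConfigParser()
        cp.read_string(flat_text)         # pay the parse cost on every lookup
        return cp["settings"][name]

    indexed = configparser.ConfigParser()
    indexed.read_string(flat_text)        # parse once, keep the index in memory

    def lookup_from_index(name):
        return indexed["settings"][name]

    print(timeit.timeit(lambda: lookup_by_reparsing("key4999"), number=20))
    print(timeit.timeit(lambda: lookup_from_index("key4999"), number=20))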

Don't flat settings text files impact Linux performance?
One assumes service ought to be available on demand because of the high
cost of a Windows license, but I have found that to be a false security
blanket. Last year I built a new machine and installed Windows on it...
it worked fine until one day I got a BSOD while booting, and could not
boot except in safe mode. So I went to Microsoft's web site, looked up
the problem, and they said I'd have to contact tech support for a hot
fix... ok, paid $100 for one of their people to work with me by email,
sent them buckets of data for analysis, went back and forth for a week or
two, and in the end they couldn't figure out what was wrong, and I had to
reinstall everything and hope it didn't happen again. At least I got my
$100 back.

By "support", all one really expects from MS is what you get from the
site, or via the update services - and there's quite a bit of value
there. One-on-one support won't be with MS's PSS unless you bought
your license retail, as few do (most are OEM).

When you are advised to get a hot fix, as in your case, you generally
will NOT have to pay as long as you make it clear that you are calling
PSS for that hotfix and nothing else.

Hot fixes are a bit different from patches - typically they are there
to solve a limited-scope problem that only some systems will have, and
if they haven't undergone the same rigorous testing as patches, they
will not be posted for download but will be provided on personal
request, as in your case. Basically, they want to hear from you
whether any problems arose.

There are a few other contexts where you would get personal (PSS)
support for free, even as an OEM licensee.


-------------------- ----- ---- --- -- - - - -
Tip Of The Day:
To disable the 'Tip of the Day' feature...
 

cquirke (MVP Windows shell/user)

Hmmm. Perhaps it has, but there are a few exceptions here and there.

I'm not sure if your example is such an exception...
For example, I use Starband satellite Internet because I live in a remote
location where there is no cable or wireline phones. They passed out
these sketchy routers that involved a complicated driver on the host PC.
People had no end of trouble getting the driver to work, and it was for
Windows only. Nowadays, Starband replaced their routers with new
ones that require no driver whatsoever. Any system -- DOS, Windows,
Mac, Linux -- with a properly configured network card can connect

....in that what has happened here, is that all the logic that used to
bulge into the "host PC" has now disappeared into the router itself,
which now has the computing power to do it in-house. And the chances
are high that inside that router, you may find a Linux.

What has now happened, is that the router is now a self-contained
system that is abstracted from the PC. The PC no longer has to care
what it is; that's why any OS can use it - as can Macs, etc.
Your analysis is well thought out, and I accept your conclusions as to
what is commercially viable. But, as the window shifts, the question
becomes: What will be done with all that hardware power?

Some of it will be spent on "computing about computing"; security
"should-we-do-this" stuff, including DRM, etc. For example, the
current crude user-based permissions model may devolve down to every
program being treated as a(n un)permitted agent.

That's going to involve massive overhead and complexity, and the
complexity will be unmanageable unless things are designed very
formally (which usually means, quite inefficiently).

For example; I'm Fred (logged in as Fred) on PC1 running a game that
is allowed to access DirectX, but is not allowed to access my data
(even though I as Fred may do so), nor is it allowed onto the
Internet. To fuss about what this particular program can be allowed
to do, means an awareness of all the APIs etc. this game might call,
and what they call, etc.; that's a lot of context tracking.

Some will be spent on a more natural user input experience, as I
described in an earlier post. For example, screening in only your
voice in a crowded bus, when under voice control.

And some power may be used to do things we haven't yet thought of,
even in the sense of rejecting such things as impossible due to
current hardware constraints.

OTOH, the PC might just eat itself, much as your router ate its PC
footprint. If RAM is large enough to hold everything, then we may say
goodbye to the last mechanical item, the hard drive. With that, goes
a whole layer of caching and virtualization, which should result in a
far cleaner OS design - though you may see the old "page to disk, load
from disk" baggage being carried forward for a while, much as DOS
still differentiated between "conventional" vs. XMS in post-Meg era.
I can't imagine that software will continue to bloat out indefinitely;

Oh, I can... I can see processors becoming too large to be bug-free,
so that today's microcode injection becomes microcode that runs as
software; that sort of thing. As it is, much computing involves
trading off speed vs. size, i.e. "should we calculate this, or just
look it up in a pre-calculated table?" and I expect to see size
continuing to grow while speed starts to cap out.

Let's say I gave you the gift of a fabrication plant that shrunk
circuitry to a 10th of today's size, running on a 50th of the power.

What's easier; reproduce an existing RAM lattice a few more times to
boost capacity, or bloat out the processor's logic in an attempt to
translate those extra circuit elements into processing power? Unless
you'd built some dumb-ass scalability limit into your RAM addressing
design (such as "thank heavens we boosted max HD size from 8G all the
way to 32G, that should be enough for ever!") the first is free.
I mean, what will the desktop OS of 2020 look like?

I don't think one can guess... some obvious past guesses (e.g. video
phones, robots) took a lot longer than expected, whereas others (spam,
botnets) may not have been foreseen at all.

You may find what we currently think of as "thinking", starts to
appear within system code, as the second human conceit to fall.

The first conceit was that craftsmanship would always outdo automated
manufacturing. That's truly dead; not that craftsmanship is
necessarily dead, but that manufacturing has gone beyond anything that
is practical to hand-craft. I don't see anyone carving out 512
million memory cells by hand, do you?

The second conceit is AI. Every time someone codes up a solution to
something previously waved as the "threshold of thought", the
definition of AI is rolled forward again. Computers may not "think"
in the same way as humans, just as processor lithography does not
carve materials as a sculptor does, yet the job gets done. As it is,
the Turing Test is passed every time someone "opens" an attachment
because it was "from someone they know".

The third conceit is human pre-eminence in economic life. I can see a
day when bots will start up companies, employ humans, pay them, pay
taxes, etc. and we won't particularly care. Already, most Internet
traffic is machine traffic; spam, malware, updates... that can get a
lot worse. Humans may be reduced to rats running around in someone
else's gutters, when it comes to getting a click in edgeways ;-)

Maybe that's more 2050 than "2020 vision", heh... but maybe not...

OK, 2050 may see flexible hardware, i.e. where specialized hardware is
grown under program control, then resorbed or re-purposed when the
need has passed so that something else can be grown and used.

At some point, it will be cheaper to build via nanobots rather than by
creating the factory jigs by hand, just as it is already cheaper to
build factory jigs rather than craft products by hand.

At this point, we may have already considered metal and insulators to
be too crude to operate at the sizes and speeds we need. We may
prefer organics that are the stock in trade of existing nanobots that
we can monkey-see, monkey-do to re-purpose and create from scratch.
But that seems absurd. It seems more likely that the focus will be on
drivers for complex hardware... the 3d printer, a holographic projector,
etc. How Linux will fare vs. Windows under that scenario is something
you can assess better than I can...

You may see Linux within more peripherals, such as printers, scanners,
cameras, sound recorders, 3D monitors etc. with standard interfaces
doing away with the need for "drivers" altogether.

A modem is too simple to be worth building logic into it, so we have
controllerless modems and drivers. A router's complex enough to build
logic into it, so we have self-contained routers and no special logic
as "drivers". Right now, printers seem to be in between; there's
logic in the printer, but much of the work is still done in drivers
and other hosted software, though that could change.

A lot of this comes down to architecture, i.e. how one generalizes
and/or abstracts things, and scopes between them. It makes sense that
Bill Gates' last role in MSFT was software architecture.


------------ ----- ---- --- -- - - - -
Our senses are our UI to reality
 

Stephan Rose

cquirke said:
Windows used to do that too; from Config.sys/Autoexec.bat to
System.ini/Win.ini to System.dat/User.dat to NT's registry hives.

The reason Windows switched from .INI files to registry in Win95 was
scalability; it was just getting too inefficient to look up flat text
files. The registry is presumably more compact and easier to traverse
due to whatever indexing it may use.

Don't flat settings text files impact Linux performance?

Nope, performance is just fine. It's easier to parse a < 1kb text file than
it is to search through a 50+ meg registry.

Plus, the way linux does it has a few distinct advantages:

- If a config file gets corrupted only that part of the system is affected.
- I can fix them with a text editor if needed, even when the system is in a
state where the UI won't work.
- A config file can also be a script giving far greater flexibility.
- Many config files are less than 1 kb.

Now let's look at the registry.

- One huge file...if any part of it gets corrupted potentially everything is
shot.
- Edit it?? Without UI? Good luck....
- No scripting support...can only store static data.
- Every time it needs to be accessed, a 50+ meg file needs to be read. That
will defeat any performance gains there may be. Even if cached, it eats up
lots of memory and most of the data is likely rarely used.

I'll take config files over the registry. =)
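
For a sense of how little work one of those small files takes to read,
here's a rough sketch (the file name and keys are made up):

    # Sketch: parse a tiny key=value config file the way many Linux apps do.
    # The file name and keys are hypothetical.
    from pathlib import Path

    config = {}
    for line in (Path.home() / ".exampleapp" / "settings.conf").read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip blanks and comments
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()

    print(config.get("theme", "default"))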

--
Stephan
2003 Yamaha R6

The reason there's never a day I remember you
is that there's never been a time I forgot you
 

Charlie Wilkes

The reason Windows switched from .INI files to registry in Win95 was
scalability; it was just getting too inefficient to look up flat text
files. The registry is presumably more compact and easier to traverse
due to whatever indexing it may use.

Don't flat settings text files impact Linux performance?

I have Ubuntu and Win2k on the same machine, and as far as I can tell,
they are about the same in terms of speed... how long it takes to boot,
how long to open applications, etc. I don't know enough to evaluate the
performance attributes of Linux .conf files vs. the Windows registry, but
I know which I am more comfortable working with.

Hot fixes are a bit different from patches - typically they are there to
solve a limited-scope problem that only some systems will have, and if
they haven't undergone the same rigorous testing as patches, they will
not be posted for download but will be provided on personal request, as
in your case. Basically, they want to hear from you whether any
problems arose.

In my situation (kernel mode exception not handled...) MS did not have a
hot fix even though they said they did on their web site. I suspect a
3rd-party driver was to blame, but the result was a Windows error, and I was
disappointed they couldn't at least tell me what the source of the
problem was.

This was one of several experiences that caused me to re-think the
Windows value proposition.

Charlie
 

cquirke (MVP Windows shell/user)

Stephan Rose wrote:
Nope, performance is just fine. It's easier to parse a < 1kb text file than
it is to search through a 50+ meg registry.

How do you fit 50M of info into a 1k text file? If there are multiple
text files, then I'd have expected linear searching through these to
be less efficient than an indexed search through a 50M B-tree or
whatever. I'm pretty sure registry is indexed.

If you don't need 50M of storage because the architecture reduces the
overhead required to link everything together, that's interesting too
;-)
Plus, the way linux does it has a few distinct advantages:
- If a config file gets corrupted only that part of the system is affected.

That, I like. I prefer apps that do not need to share settings with
other apps, to store those in an .INI file within the app's base dir.
- I can fix them with a text editor if needed, even when the system is in a
state where the UI won't work.

Or where the OS won't boot. It's only thanks to Bart and Paraglider
that we have a solution for this in XP; Vista's not quite as lucky,
tho maybe Regedit will work from a DVD boot. So many things don't,
because the DVD boot doesn't run apps needing system-level access.
- A config file can also be a script giving far greater flexibility.

Not sure if I like that - I prefer type discipline, where files do
only what the visible file type says they do.
- Many config files are less than 1 kb.
Now let's look at the registry.
- One huge file...if any part of it gets corrupted potentially everything is
shot.

No, it's a number of huge files; SYSTEM, SAM, SECURITY etc. and one
NTUSER.DAT for each user account - but that's nit-picking; they are
still large egg-baskets with a LOT of write traffic. Bad combination.
- Edit it?? Without UI? Good luck....

I think there's a case to be made for precluding direct (generic)
editing, just as there is for using a proprietary system to deliver
Windows Updates, as opposed to (say) generic FTP.

If you look at posts I made in the mid-1990s, you'll see I've U-turned
on that; in those days, I resented having to use IE and "special"
access to get updates, especially in automated fashion.

The reason I've changed my mind is due to the pressure of malware
assaults. In a sense, all security is security by obscurity, and
while you need sufficient obscurity, any obscurity can help if it is
scoped correctly (i.e. in favor of user, against others).
- No scripting support...can only store static data.

I see that as an advantage. The day some malware can run from pure
registry content alone (i.e. with no external footprint), we're
further in trouble than we already are. As it is, malware is usually
(but not always) dependent on registry-to-file links that can be
detected and managed; ways around that exist in that the registry
dependency can be avoided, but a file is still needed in Windows.
- Everytime it needs to be accessed a 50+ meg file needs to be read. That
will defeat any performance gains there may be. Even if cached, it eats up
lots of memory and most of the data is likely rarely used.

I suspect the registry is one of those files (like page file) that is
seldom handled via "normal" file APIs.
I'll take config files over the registry. =)

I don't mind the registry as such, except that user registries appear
to be corrupted too often for comfort, and recent MS OSs have been
weak on maintaining automated backups.

In XP and presumably Vista, there ARE no automated backups except for
those created as a side-effect of System Restore. So if you disable
System Restore, you have no registry backups at all.

The "Last Known Good" is barely better than nothing.

Like the old Win95 .DA0 system, it's one-shot; if a boot "appears" to
work, the current registry overwrites the backup, so that if it fails
after that, there's no fallback. Win98 learned from this with the FIFO
RB*.CAB system, which is similar to what I was doing manually with
batch files and an archiver. XP forgot that lesson.
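
That manual approach is still easy enough to script; a rough sketch using
reg.exe from an elevated prompt (the backup folder is a hypothetical
example):

    # Sketch: dump copies of the main hives with reg.exe (run elevated).
    # The destination folder is a hypothetical example.
    import os
    import subprocess
    from datetime import date

    dest = rf"C:\RegBackup\{date.today():%Y-%m-%d}"
    os.makedirs(dest, exist_ok=True)
    for hive in ("HKLM\\SYSTEM", "HKLM\\SOFTWARE", "HKCU"):
        out = os.path.join(dest, hive.replace("\\", "_") + ".hiv")
        subprocess.run(["reg", "save", hive, out, "/y"], check=True)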

Also, this fallback info is held within the same file as the "live"
data, so anything that eats the "live" data file will also kill the
backup. It's like a Kevlar vest that works only for .38 ammo.


------------ ----- ---- --- -- - - - -
Our senses are our UI to reality
 

cquirke (MVP Windows shell/user)

I have Ubuntu and Win2k on the same machine, and as far as I can tell,
they are about the same in terms of speed... how long it takes to boot,
how long to open applications, etc. I don't know enough to evaluate the
performance attributes of Linux .conf files vs. the Windows registry, but
I know which I am more comfortable working with.

It's hard to assess such particular differences across different OSs.

I'm just wondering how they solved the same basic issues without
having to resort to more efficient settings storage. Less settings?
In my situation (kernel mode exception not handled...) MS did not have a
hot fix even though they said they did on their web site.

OK... was the hotfix not applicable, or did it not exist? If the
latter, was it pulled due to problems?
I suspect a 3rd-party driver was to blame, but the result was a
Windows error, and I was disappointed they couldn't at least
tell me what the source of the problem was.

Vista has better reporting on such matters than XP, though I guess it
won't always help. Or was this pre-Vista?

Drivers are always a pain, because you have to let 3rd-parties play
deep near the hardware for efficiency reasons (in another thread where
it is asked "what will we do with more system power", killing this
dependency may be one of the answers).

MS tries to limit the damage in various ways, such as the two-layer
model that you can trace back to DOS CD-ROM support. Instead of the
hardware vendor writing "everything", there's an OS-level driver that
limits what is needed from the vendor to the bare specifics.

Also, MS will require (rather than "prefer") vendors to sign their
drivers. While this may raise hackles in terms of open source
philosophy ("why can't I write and instal my own drivers?"), it's IMO
a desirable malware speed-bump, given the power of drivers.

This is one of those deliberately legacy-smashing changes that one
might apply when stepping from 32-bit to 64-bit, given the amount of
legacy impact involved there anyway.


-------------------- ----- ---- --- -- - - - -
Tip Of The Day:
To disable the 'Tip of the Day' feature...
 

Stephan Rose

cquirke said:
How do you fit 50M of info into a 1k text file? If there are multiple
text files, then I'd have expected linear searching through these to
be less efficient than an indexed search through a 50M B-tree or
whatever. I'm pretty sure registry is indexed.

Well for one the registry is beyond bloated. =) I also meant *per* text
file, since there are multiple text files it does add up. I don't think it
comes even close to the windows registry though.

Most config files are also only needed at system start up. Once the system
is running, I don't think it has to even touch many of them. If an
application has config data, it just stores it in its own file and is
responsible for it on its own.
That, I like. I prefer apps that do not need to share settings with
other apps, to store those in an .INI file within the app's base dir.

Well most apps will create a hidden directory in user space where they
create and maintain their data. Apps can't count on the ability to write to
their data in their base dir because that dir is many times /usr/bin where
most app executables are located. Apps don't have write access to that dir
unless I run them with root privileges.
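
A rough sketch of that fallback pattern (app and file names are
hypothetical):

    # Sketch: an app that cannot write next to its executable falls back
    # to a hidden per-user directory. Names here are hypothetical.
    import os
    from pathlib import Path

    if not os.access("/usr/bin", os.W_OK):      # the normal, non-root case
        data_dir = Path.home() / ".exampleapp"
        data_dir.mkdir(exist_ok=True)
        (data_dir / "settings.conf").write_text("theme = dark\n")
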
Or where the OS won't boot. It's only thanks to Bart and Paraglider
that we have a solution for this in XP; Vista's not quite as lucky,
tho maybe Regedit will work from a DVD boot. So many things don't,
because the DVD boot doesn't run apps needing system-level access.

Makes ya just love Windows doesn't it? =)
Not sure if I like that - I prefer type discipline, where files do
only what the visible file type says they do.

It has its pros and cons. I like it because if there is a problem with it or
a bug in it, *I* can possibly fix it. Not so if it's an executable
processing static registry data. At that point in time...I'm screwed....
No, it's a number of huge files; SYSTEM, SAM, SECURITY etc. and one
NTUSER.DAT for each user account - but that's nit-picking; they are
still large egg-baskets with a LOT of write traffic. Bad combination.

I was mainly just looking at the registry file itself. You're right there,
there are quite a few additional files. I've had to deal with corrupted
ntuser.dat in the past...not fun...
I think there's a case to be made for precluding direct (generic)
editing, just as there is for using a proprietary system to deliver
Windows Updates, as opposed to (say) generic FTP.

If you look at posts I made in the mid-1990s, you'll see I've U-turned
on that; in those days, I resented having to use IE and "special"
access to get updates, especially in automated fashion.

The reason I've changed my mind is due to the pressure of malware
assaults. In a sense, all security is security by obscurity, and
while you need sufficient obscurity, any obscurity can help if it is
scoped correctly (i.e. in favor of user, against others).


I see that as an advantage. The day some malware can run from pure
registry content alone (i.e. with no external footprint), we're
further in trouble than we already are. As it is, malware is usually
(but not always) dependent on registry-to-file links that can be
detected and managed; ways around that exist in that the registry
dependency can be avoided, but a file is still needed in Windows.

Well in linux' case there isn't a way for the malware to install itself in
any important part of the system. Even if it is not 100% impossible, it is
exceedingly more difficult than under Windows where
everything is accessible from anywhere.
I suspect the registry is one of those files (like page file) that is
seldom handled via "normal" file APIs.


I don't mind the registry as such, except that user registries appear
to be corrupted too often for comfort, and recent MS OSs have been
weak on maintaining automated backups.

In XP and presumably Vista, there ARE no automated backups except for
those created as a side-effect of System Restore. So if you disable
System Restore, you have no registry backups at all.

Yea and if you keep system restore you lose gigabytes of hard drive space
in accumulated backups....lovely solution MS has there for their own
problem. =)

--
Stephan
2003 Yamaha R6

The reason there's never a day I remember you
is that there's never been a time I forgot you
 

Stephan Rose

cquirke said:
It's hard to assess such particular differences across different OSs.

I'm just wondering how they solved the same basic issues without
having to resort to more efficient settings storage. Less settings?

Less settings certainly. Only the OS settings are of any relevance.
Applications need to maintain their own settings in their own place where
they belong. =)

I never liked the idea of every application on the planet screwing around
with the Windows registry.
Also, MS will require (rather than "prefer") vendors to sign their
drivers. While this may raise hackles in terms of open source
philosophy ("why can't I write and instal my own drivers?"), it's IMO
a desirable malware speed-bump, given the power of drivers.

What's the process to sign drivers though? Is it free? Does it cost money?
How long does it take? If it costs money, how much? How will it impact
small companies such as the ones I work for that have created a hand-held
device that now needs a USB Driver?

If driver issues like that become too much of a problem I may very well
start considering shipping new devices with a bootable Linux CD that has my
software to use the device pre-installed and is usable off the CD.

--
Stephan
2003 Yamaha R6

The reason there's never a day I remember you
is that there's never been a time I forgot you
 

cquirke (MVP Windows shell/user)

That, I like. I prefer apps that do not need to share settings with
other apps, to store those in an .INI file within the app's base dir.
Well most apps will create a hidden directory in user space where they
create and maintain their data. Apps can't count on the ability to write to
their data in their base dir because that dir is many times /usr/bin where
most app executables are located. Apps don't have write access to that dir
unless I run them with root privileges.

I forgot about that limitation, which applies to Vista and "secured"
XP as well (can't write to "C:\Program Files" without admin rights).

Crucial Q: Are apps constrained from writing code into the user's
"data" space, or is the location they write to, within user space, but
outside of user's data (backed-up) space?
Makes ya just love Windows doesn't it? =)

Au contraire... (examines bleeding knuckles)
It has its pros and cons. I like it because if there is a problem with it or
a bug in it, *I* can possibly fix it. Not so if it's an executable
processing static registry data. At that point in time...I'm screwed....

Windows fails type discipline between batch file types and raw code,
so if "batch files" are permitted, then raw code comes along too.

This has been useful in one context; associating DOSKey.exe as the
"batch file" to be run with Command.com in Win9x, causing this to be
present whenever a command window is opened in the GUI. This ceased
to be necessary in WinME, and isn't required in NT family.

Even with strict typing between code and batch files, I still see any
sort of automation - even shortcuts - as a violation of "data only".
I was mainly just looking at the registry file itself. You're right there,
there are quite a few additional files. I've had to deal with corrupted
ntuser.dat in the past...not fun...

Yep; that's the per-user hive, and it seems XP is rather prone to
botching it - perhaps it's held open for too long a critical period
during failure-prone shutdown?
Well in linux' case there isn't a way for the malware to install itself in
any important part of the system.

It's possible that at one time, some folks may have believed that true
in Windows. But "defense in depth" means you assume nothing, and plan
what to do whenever your unbreakable defenses are broken.
Yea and if you keep system restore you lose gigabytes of hard drive space
in accumulated backups....lovely solution MS has there for their own
problem. =)

I limit C: to 8G on XP and 32G in Vista.
In XP, I limit SR to 500M in C:, and have it off everywhere else.
In Vista, I cannot limit SR on C:, but have it off everywhere else.

Vista defaults to SR Off on volumes other than C:, whereas XP defaults
to On - so in that respect, Vista is a huge improvement.

Neither has a clue about the need to pre-set behaviors for devices
that do not yet exist, i.e. disk volumes not yet encountered. That is
a HUGE blind spot, far more pervasive than disk volumes and SR.

My intention is to keep just a couple of days' worth of SR data,
though for registry harvesting, I might like a week of "depth".


-------------------- ----- ---- --- -- - - - -
"If I'd known it was harmless, I'd have
killed it myself" (PKD)
 

cquirke (MVP Windows shell/user)

I'm just wondering how they solved the same basic issues without
having to resort to more efficient settings storage. Less settings?
Less settings certainly. Only the OS settings are of any relevance.
Applications need to maintain their own settings in their own place where
they belong. =)

Yup. Some settings may need to be exposed for communication between
apps and as a way of integrating functionality, but we may come to
revisit the risk/benefits of that.

It would not surprise me if we retreat not only from shared .DLLs, but
from inter-application communication altogether, towards shelling
each app in its own environment (or virtual machine), much as DOS apps
are handled within Windows 95 onwards.

It's another case of efficiency giving way to reliability, as
increasing complexity requires lower defect rates (where "defect" is
enlarged to include any unintended consequence, especially attack).
What's the process to sign drivers though? Is it free? Does it cost money?
How long does it take? If it costs money, how much? How will it impact
small companies such as the ones I work for that have created a hand-held
device that now needs a USB Driver?

You're asking the right questions... all "signing" can be expected to
do, is assert that nominal entity A really did make object B. Even
that is best-case, assuming un-spoofability.

It is the barrier to entry to signing code that confers any sort of
"template of expectation" to proven identity A. If it costs money, as
I suspect it does, then the idea is that malware scammers won't be
able to afford it. The less cynical may also think a malware author
dare not expose identity via signing, as if some sort of cut-out proxy
identity was not possible to arrange.

Even if you did prove that a "real vendor" wrote a driver, what does
that assure us of trustworthiness, when one of the largest "real
vendors" (Sony) can drop rootkits from "audio CDs" and get away with
it? Before you say "they didn't get away with it", the fact that Sony
still live and breathe in the DRM space is proof that they did -
whatever fines they paid would just be "operational costs".

What signing may do (best-case) is limit the types of malware one
might encounter within the driver space, to DRM and problems arising
from incompetence (i.e. bugs).

I'd be more impressed with constraints within driver space, e.g. that
a driver for device (class) A should have no driver-level access to
device (class) B. For example, a fake codec shouldn't be able to
write to raw disk, intercept such writes, or access the 'net; it
should be limited to the sound and visual hardware, if that.
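
In toy form that's just a per-driver capability table; a purely
hypothetical sketch, not anything a real OS exposes today:

    # Toy model: each driver is granted access only to its own device class(es).
    # Driver names and classes are hypothetical illustrations, not a real API.
    ALLOWED = {
        "examplecodec.sys": {"audio", "video"},
        "examplenic.sys":   {"network"},
        "examplestor.sys":  {"raw_disk"},
    }

    def may_access(driver, device_class):
        return device_class in ALLOWED.get(driver, set())

    assert may_access("examplecodec.sys", "audio")
    assert not may_access("examplecodec.sys", "raw_disk")  # fake codec kept off the disk
    assert not may_access("examplecodec.sys", "network")   # ...and off the net
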
If driver issues like that become too much of a problem I may very well
start considering shipping new devices with a bootable Linux CD that has my
software to use the device pre-installed and is usable off the CD.

Drivers will always be with us; they're a pain, but it's hugely better
than being limited to hardware from one vampiric supplier (hi, Apple)


-------------------- ----- ---- --- -- - - - -
"If I'd known it was harmless, I'd have
killed it myself" (PKD)
 

Charlie Wilkes

What has now happened, is that the router is now a self-contained system
that is abstracted from the PC. The PC no longer has to care what it
is; that's why any OS can use it - as can Macs, etc.

Exactly. This, I think, is a case where the hardware has gotten thicker
instead of thinner.
What's easier; reproduce an existing RAM lattice a few more times to
boost capacity, or bloat out the processor's logic in an attempt to
translate those extra circuit elements into processing power? Unless
you'd built some dumb-ass scalability limit into your RAM addressing
design (such as "thank heavens we boosted max HD size from 8G all the
way to 32G, that should be enough for ever!") the first is free.

Ok, but machine resources are only one part of the equation. I think
there is a limit to how big a software package can realistically be,
because the bigger it gets, the more people are involved, because each
individual can only contribute so much. Will it be possible for any
organization to coordinate and manage the output of 500,000 computer
programmers so as to achieve a usable result?
At this point, we may have already considered metal and insulators to be
too crude to operate at the sizes and speeds we need. We may prefer
organics that are the stock in trade of existing nanobots that we can
monkey-see, monkey-do to re-purpose and create from scratch.

That is what I foresee... a union of silicon and biotech. Eventually
scientists will figure out the mechanism by which insect colonies do
their magic, and they will be able to replicate it and make nanobot
colonies that can make and repair all kinds of things. Digital
technology will become a chemical rather than an electrical process. But
that is years down the road... a half century perhaps.
You may see Linux within more peripherals, such as printers, scanners,
cameras, sound recorders, 3D monitors etc. with standard interfaces
doing away with the need for "drivers" altogether.

This seems to contradict your starting position, which, if I understood
you correctly, was a trend toward thinner hardware as suggested by the
software modem.

I would expect a near future in which hardware becomes more robust. For
example, printers can work directly with digital cameras instead of going
through a PC. What used to be a cable de-scrambler is now a set-top box
that can record TV shows, connect to the Internet, and do many other
things. Hardware is cheap, so it's possible to embed computers in
everything, even coffee makers. But it's not possible for small,
entrepreneurial companies to pay Microsoft a fat licensing fee. Instead,
they will tap into open source... FreeDOS for the coffee maker; Linux for
the set-top box.
A lot of this comes down to architecture, i.e. how one generalizes
and/or abstracts things, and scopes between them. It makes sense that
Bill Gates' last role in MSFT was software architecture.

Gates is busy planning his legacy with Warren Buffett. He faithfully
endorses whatever Microsoft is doing, but he knows the Golden Age is over.

Charlie
 

Charlie Wilkes

It's hard to assess such particular differences across different OSs.

I'm just wondering how they solved the same basic issues without having
to resort to more efficient settings storage. Less settings?

I will defer to Stephan Rose on this as I really just don't know.
OK... was the hotfix not applicable, or did it not exist? If the
latter, was it pulled due to problems?

Once I got in contact with MS, the subject of the hot fix never came up.
First they had me boot into safe mode and run a diagnostic program that
collected data in a .cab file, which I was able to get onto a USB stick
and port over to the machine I was using for Internet access. Then they
had me try various procedures, none of which worked. Finally, after
about 2 weeks of back and forth, I ran out of patience. I pulled my data
off the drive, reformatted it, and reinstalled everything. Then I asked
MS for my $100 back, and they gave it to me.
Vista has better reporting on such matters than XP, though I guess it
won't always help. Or was this pre-Vista?

This was pre-Vista, almost a year ago.
Also, MS will require (rather than "prefer") vendors to sign their
drivers. While this may raise hackles in terms of open source
philosophy ("why can't I write and instal my own drivers?"), it's IMO a
desirable malware speed-bump, given the power of drivers.

At that time, I was using a VZW cell phone to connect to the Internet,
and I suspect those drivers were the source of the problem. But, I
didn't know for sure.

Charlie
 

Stephan Rose

Well most apps will create a hidden directory in user space where they
create and maintain their data. Apps can't count on the ability to write
to their data in their base dir because that dir is many times /usr/bin
where most app executables are located. Apps don't have write access to
that dir unless I run them with root privileges.

I forgot about that limitation, which applies to Vista and "secured"
XP as well (can't write to "C:\Program Files" without admin rights).

Crucial Q: Are apps constrained from writing code into the user's
"data" space, or is the location they write to, within user space, but
outside of user's data (backed-up) space?

Well there is only "one" user space, it isn't really separated like you
describe unless a user were to choose to do so.

User space basically resides under /home/username/...

So basically, an app will create a hidden directory there, and place its
data inside that directory. It has access to everything in that directory
though.

If you had data that you wanted to protect from an app accessing you'd have
to create provisions for that yourself, which is quite possible. One way I
can think of doing it is by creating a special user for that purpose, and
creating a directory, even within your own user space, that is owned by
that user. You can then give yourself read rights but not write rights to
that directory. At that point in time, no application can ever mess with
the data in there.

If you'd want to modify the data, you'd have to access it by starting the
app you want to modify that data with under the other user's permissions.

So yea, that kind of separation you refer to is possible and such a
constraint is possible if you do the work and set it up.
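
A rough sketch of that setup, run once as root, assuming you've already
created the special account (called "vault" here; the path is made up too):

    # Sketch: a directory inside your home that only a special user may write.
    # Run once as root; the 'vault' account and the path are hypothetical.
    import os
    import pwd
    from pathlib import Path

    protected = Path("/home/stephan/protected-data")
    protected.mkdir(exist_ok=True)

    vault = pwd.getpwnam("vault")
    os.chown(protected, vault.pw_uid, vault.pw_gid)
    protected.chmod(0o755)   # owner (vault) may write; everyone else read-only
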
It's possible that at one time, some folks may have believed that true
in Windows. But "defense in depth" means you assume nothing, and plan
what to do whenever your unbreakable defenses are broken.

Well I see one distinct thing in Linux's favor in terms of security, design
considerations taken aside.

Everyone can see the code. Everyone can see the potential exploits if
present. If I found an exploit today and fixed it, I could submit a patch
to the ubuntu development team for it. My patch is then reviewed and if
found valid, merged into the code. This allows exploits to be fixed before
they are ever exploited.

Now MS on the other hand, only the employed programmers can see the code.
Only the employed programmers who are already swamped with other stuff can
fix the exploits. The only way MS can even find out about an exploit and
get it fixed is when it is actually exploited and the damage is already done.
I limit C: to 8G on XP and 32G in Vista.

32 Gigs is the limit I have for root in Ubuntu as well. 24.5 gigs free at
the moment.
In XP, I limit SR to 500M in C:, and have it off everywhere else.
In Vista, I cannot limit SR on C:, but have it off everywhere else.

Vista defaults to SR Off on volumes other than C:, whereas XP defaults
to On - so in that respect, Vista is a huge improvement.

Neither has a clue about the need to pre-set behaviors for devices
that do not yet exist, i.e. disk volumes not yet encountered. That is
a HUGE blind spot, far more pervasive than disk volumes and SR.

My intention is to keep just a couple of days' worth of SR data,
though for registry harvesting, I might like a week of "depth".

How much I enjoy not needing restore points or registries. =)

To me, restore points are not a solution to anything. They are just a patch
for an unstable mess that MS can't get right.

I mean say for example one critical thing gets messed up in my registry and
I have to use a restore point. But let's just say I installed a major app
right *after* the last restore point....now all my registry keys for that
app will be missing...just for starters.

--
Stephan
2003 Yamaha R6

The reason there's never a day I remember you
is that there's never been a time I forgot you
 

Stephan Rose

Less settings certainly. Only the OS settings are of any relevance.
Applications need to maintain their own settings in their own place where
they belong. =)

Yup. Some settings may need to be exposed for communication between
apps and as a way of integrating functionality, but we may come to
revisit the risk/benefits of that.

I think there are API calls and other mechanisms for apps to communicate but
I honestly don't know enough about the linux APIs to provide any more
in-depth information there.
It would not surprise me if we retreat not only from shared .DLLs, but
from inter-application communication altogether, towards shelling
each app in its own environment (or virtual machine), much as DOS apps
are handled within Windows 95 onwards.

I'm waiting for the day when VMWare has sufficient 3D Support to where I can
run windows where it belongs...in a window and its own environment where
it can't mess with my machine. =)

They got experimental DX8 support right now...I am hoping for DX9 support in
the near future.
It's another case of efficiency giving way to reliability, as
increasing complexity requires lower defect rates (where "defect" is
enlarged to include any unintended consequence, especially attack).



You're asking the right questions... all "signing" can be expected to
do, is assert that nominal entity A really did make object B. Even
that is best-case, assuming un-spoofability.

Thanks, I can come up with many more questions. =) Those were just the first
few that popped into my head.
It is the barrier to entry to signing code that confers any sort of
"template of expectation" to proven identity A. If it costs money, as
I suspect it does, then the idea is that malware scammers won't be
able to afford it. The less cynical may also think a malware author
dare not expose identity via signing, as if some sort of cut-out proxy
identity was not possible to arrange.

What about smaller companies though? If malware scammers won't be able to
afford it, will such companies be able to afford it? What would be
considered an amount that malware spammers can't afford? 1,000? 10,000?
100,000? What are legitimate vendors that can't afford it supposed to do?
Even if you did prove that a "real vendor" wrote a driver, what does
that assure us of trustworthiness, when one of the largest "real
vendors" (Sony) can drop rootkits from "audio CDs" and get away with
it? Before you say "they didn't get away with it", the fact that Sony
still live and breathe in the DRM space is proof that they did -
whatever fines they paid would just be "operational costs".

What signing may do (best-case) is limit the types of malware one
might encounter within the driver space, to DRM and problems arising
from incompetence (i.e. bugs).

I'd be more impressed with constraints within driver space, e.g. that
a driver for device (class) A should have no driver-level access to
device (class) B. For example, a fake codec shouldn't be able to
write to raw disk, intercept such writes, or access the 'net; it
should be limited to the sound and visual hardware, if that.


Drivers will always be with us; they're a pain, but it's hugely better
than being limited to hardware from one vampiric supplier (hi, Apple)

They don't need to be a pain. They are artificially made to be a pain.

All this security, signing, etc. is all in theory a great idea. But I think
at some point in time, too much security crosses the line where it becomes
too impractical. If I am so bogged down worrying about my PC's security and
constantly jumping through hoops and complicated procedures to do simple
tasks made complicated by security procedures....then when am I actually
going to use my PC?

What effect is all this security going to have on the prices of
hardware? The vendor isn't going to eat the added cost, the consumer will.

Right now, the only people affected by malware are those who have machines
infected by it. But if the anti-malware security gets to the point where it
increases development time, cost, etc. then it not only affects the victims
of malware....it affects everyone, everywhere, regardless of operating
system as we all use the same hardware.

What is the point in time when trying to secure against malware costs more
than the cost of any amount of damage malware could ever do?

When does it make the user's life more difficult than malware could ever do?

--
Stephan
2003 Yamaha R6

The reason there's never a day I remember you
is that there's never been a time I forgot you
 

cquirke (MVP Windows shell/user)

Stephan Rose wrote: ...etc.

Well there is only "one" user space, it isn't really separated like you
describe unless a user were to choose to do so.
User space basically resides under /home/username/...

Sounds like Vista's C:\Users or XP's "C:\Documents and Settings",
without any particular Documents subtree for data.

The idea is to ensure that no code or scripts are dropped into the
data set, so that backups of it are sure to be malware-free.

The reasoning is that (particularly in the absence of decent formal
malware cleanup), malware is one of the most likely reasons you'd have
to restore a data backup. So it's best not to have your fire
extinguisher filled with gasoline...
Well I see one distinct thing in Linux's favor in terms of security, design
considerations taken aside.

Everyone can see the code. Everyone can see the potential exploits if
present. If I found an exploit today and fixed it, I could submit a patch
to the ubuntu development team for it.

Hold that thought; as I see it, it's the same sort of error as SR.
Now MS on the other hand, only the employed programmers can see the code.
Only the employed programmers who are already swamped with other stuff can
fix the exploits. The only way MS can even find out about an exploit and
get it fixed is when it is actually exploited and the damage is already done.

I think you will find there are programmers dedicated to patching and
post-RTM code development and review. You will often find that an SP
fixes exploits before they've come to light, as a result of this
reworking... we saw this with XP SP2's IE 6 revision, as well as with
IE 7 over previous IE 6 developments.
32 Gigs is the limit I have for root in Ubuntu as well. 24.5 gigs free at
the moment.

Sounds similar to Vista32's footprint, then.
To me, restore points are not a solution to anything. They are just a patch
for an unstable mess that MS can't get right.

What they are, is an automated equivalent that savvy users will do
manually. Whenever you do something "hairy", you first prepare an
Undo, so that if it doesn't work out, you aren't screwed.

The type of Undo you prepare is informed by your understanding of the
scope of what you are about to do. If manually editing a config file
or registry hive, you'd make a backup of that hive; if re-sizing a
partition, you might first make an image backup of it, etc.

You can use SR in a similar way, i.e. make a new Restore Point, do
whatever, and if it goes bad, you can restore the RP.

What SR does, is track changes (pretty much like an "installation
watcher") so that these can be undone.

SR runs automatically, too, so that those unsavvy enough to prepare an
Undo will have a reasonably recent one to fall back to.
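
One way to script that "prepare an Undo first" habit, assuming Windows
PowerShell's Checkpoint-Computer cmdlet and an elevated session:

    # Sketch: create a restore point before doing something "hairy".
    # Needs an elevated session; the description text is arbitrary.
    import subprocess

    subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "Checkpoint-Computer -Description 'Before hairy install'"],
        check=True,
    )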


The trouble is, because SR exists, folks start using it for things
that are best done in other ways.

For example, instead of writing a formal Uninstaller, folks may make
an SR RP, do the install, and tell you to restore the RP if it doesn't
work out. Too bad if you made other monitored changes in the window
between the RP and when the problem is found; those wanted changes are
lost, too. Too bad if you take a month to realize the install broke
something, and all pre-install RPs have been FIFO'd off.

For another example, "why do a separate registry backup, when SR can
roll back all system changes together with the registry fallback?" So
we see XP, without Win98's 5-slot FIFO registry auto-backup.


This is the same mistake as assuming that because it's possible for
folks to read the source code, that folks will actually detect bugs in
the source code, making other exploit management strategies obsolete.

In practice, the ability to peruse source code for an entire OS is
meaningful only if you have the resources to employ professionals to
do this. This is the case with Airbus-vs.-Boeing budgets, or at the
level of national infrastructures. It's not that useful to end-users.

Also, the nature of exploits is such that looking at source code may
be the least likely way you'd spot the problem. It's still assuming
that code complexity is simple enough for determinism to work, whereas
you may get better results by ignoring everything knowable about how
and why the code was made, and looking only at what it does.


For example: finding "infinite life" pokes for games on the ZX Spectrum.

You could disassemble the game and then read the source to follow the
logic through the game up to where you "die", then change what happens
at that point, either by poking the code or re-assembling the source.

Or you could do a blind search for the "DEC A" opcode, replacing each
one found with an "AND A" opcode, until you don't die anymore.

Guess which approach works better in practice?

What the second approach uses, is logic as to how the target code
probably works. The number of lives is < 255, so will prolly be
stored as an 8-bit value. It's rarely changed, so it's prolly held in
RAM rather than in a register; in fact, the address of the location is
probably not held in a register either.

The most efficient way to load such a value - i.e. LD A,(literal) -
can only place it in the A register, so that's where it prolly will
be. The most efficient test will be a decrement, as that will set a
flag when zero is reached. Hence, search for a DEC A.

As AND A leaves A unchanged (so the zero flag stays clear while lives
remain) and is the same size as DEC A for offset-preserving pokeage,
that's what we'll substitute.
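
A rough sketch of that blind search over a memory snapshot (Z80 opcodes:
DEC A is 0x3D, AND A is 0xA7; the snapshot file name is made up):

    # Sketch: the "bash it until it breaks" poke hunt.
    # Try each DEC A (0x3D) candidate as an AND A (0xA7), one at a time,
    # and test the patched snapshot in an emulator. File name is hypothetical.
    DEC_A, AND_A = 0x3D, 0xA7

    snapshot = bytearray(open("game.ram", "rb").read())
    candidates = [addr for addr, b in enumerate(snapshot) if b == DEC_A]

    for addr in candidates:
        patched = bytearray(snapshot)
        patched[addr] = AND_A
        # ...load 'patched' into the emulator and see if you still die.
        print(f"candidate POKE {addr},{AND_A}")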

I suspect this is how searches for exploits are automated, i.e. "bash
it until it breaks", rather than "let's follow the program logic".
I mean say for example one critical thing gets messed up in my registry and
I have to use a restore point.

Not if you're as sharp on Windows as you are in Linux - you'd just
boot your mOS, harvest the relevant hive from SVI's RP snapshot(s),
keep the old one and drop in this as a fix.

If you aren't that sharp, you can do a System Restore. Sharp dudes
may not like that because it broadens the scope of impact of the fix,
but it's "cleaner" in that the older registry is kept in sync with
changes outside the registry. You or I may prefer to fix such issues
as/when they arise; a less tech-savvy dude may prefer to avoid those
hassles, even if it does mean some wanted changes are smoothly lost.
But let's just say I installed a major app right *after* the last restore
point....now all my registry keys for that app will be missing.

Yep. If you fall back the registry alone, you'd have a disconnect
between the code base and the registry settings that link it. If you
do a System Restore fallback instead, it would be cleaner in that the
entire app would be melted away.

If the fix is "just re-install the app", then old remnants that could
mess up the install are less likely to be an issue if the user did an
SR Restore rather than a registry fallback.

Either way, it's a lot better than having to restore a week-old
partition image. Creating a new SR point is quick and easy, whereas
imaging a partition is not, so one is more likely to actually DO that
before hairy installs, etc. rather than skip that step and then wish
you hadn't, when the install goes Picasso-shaped.


-------------------- ----- ---- --- -- - - - -
"If I'd known it was harmless, I'd have
killed it myself" (PKD)
 

cquirke (MVP Windows shell/user)

You're asking the right questions... all "signing" can be expected to
do, is assert that nominal entity A really did make object B.
It is the barrier to entry to signing code that confers any sort of
"template of expectation" to proven identity A. If it costs money
then the idea is that malware scammers won't be able to afford it.
What about smaller companies though? If malware scammers won't be able to
afford it, will such companies be able to afford it?

Exactly; it's as absurd as not letting folks with big black beards on
airplanes, as a way of keeping bombs out.

A malware syndicate has the Botnet Bank to use as a virtual ATM, so
raising the readies for a certificate isn't a hurdle. Small vendors
that don't shop with other people's money are more likely to be hurt.

Well, I guess that's an "oops, sorry, heh heh" side-effect that won't
worry the big-industry proponents of "signing" too much.
They don't need to be a pain. They are artificially made to be a pain.

No, they ARE inevitably a pain.

There are two reasons you need drivers; to hide the strangeness of
arbitrary hardware, and to operate that hardware efficiently.

The first means drivers written by 3rd-parties who may have their main
skills outside of programming, which means buggy code.

The second means code running at a very privileged level, where a
botch-up can nuke the whole PC, and also means allowing the code to
become intimate with the hardware, accessing it directly.

These two things make "drivers" a great place for malware to pitch
their tents. Lots of strangers, lots of power.

That's why the pain is there. Everything else is merely attempts at a
cure for this... the problem will only go away when the need for
direct hardware access and real-time performance goes away, and that
in turn is where tomorrow's performance boost will get chewed up.
What effect is all this security going to have on the prices of
hardware? The vendor isn't going to eat the added cost, the consumer will.

There's a good article on that, with respect to DRM that's embedded in
Vista. As if it wasn't bad enough having the hardware dudes
writing code to drive their hardware, we now have to have "coders of
the day" hired by media pimps blundering around with DRM logic in the
middle of these real-time data flows. Absurd.
Right now, the only people affected by malware are those who have
machines infected by it.

False. An infected system is a foot soldier in the malware army, or a
node in what may be among the world's most powerful "virtual servers".

These infected systems harvest email addresses and send spam, collect
CC and other info, spread malware out to other systems, act as cut-out
systems in intrusions and cannon-fodder in DDoS attacks.

It would be interesting to compare the relative firepower (OK,
"computational power") of, say, SETI with that of spammer botnets.
But if the anti-malware security gets to the point where it
increases development time, cost, etc. then it not only affects the victims
of malware....it affects everyone, everywhere, regardless of operating
system as we all use the same hardware.

95% of spam is sent via botnets. The impact on "everyone, everywhere"
is already very well felt.

It's not enough to plonk security on top, like fresh butter on rotten
bread. The awareness has to be pervasive. For example, imagine if
the world's most commonly-used OS filtered out all SMTP traffic that
was not from the user's designated email app, plus that designated
email app wasn't dumb enough to allow itself to be automated?

TSM (This Stuff Matters)...


------------ ----- --- -- - - - -
Drugs are usually safe. Inject? (Y/n)
 

cquirke (MVP Windows shell/user)

This is a case where the hardware has gotten thicker instead of thinner.

Yup - remember, there's a narrow spectrum between "too lame to bother
to make" and "too costly to make". That means "things get thinner" is
not necessarily what happens; instead, "things tend towards the happy
spectrum". At some point, it's cheaper to add logic processing to a
"dumb" device than to, say, rework a small hard-logic chip or even put
it in better plastic, and then at that point, devices get "thicker".

Thicker devices may also mean less real-time coupling with the host,
and may allow abstraction between device and host. That's a way to
beat the "driver blues" (see other erplies in this thread)

Handling SD cards, USB sticks, removable HD brackets etc. as a generic
"USB storage" class is a good example of this.
Ok, but machine resources are only one part of the equation. I think
there is a limit to how big a software package can realistically be,
because the bigger it gets, the more people are involved, since each
individual can only contribute so much. Will it be possible for any
organization to coordinate and manage the output of 500,000 computer
programmers so as to achieve a usable result?

Yep; that's a challenge, and part of the solution is to prefer
large-capacity look-up to computation. The Pentium did this for
certain operations, giving rise to the FDIV bug when a handful of
entries were left out of the lookup table as it was transferred into
the chip.
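
To make that look-up-versus-compute trade concrete, here's a tiny C
sketch (a toy of my own, nothing to do with the Pentium's internals):
counting the set bits in a byte with a precomputed 256-entry table,
so each query is one array read instead of a loop.

  #include <stdio.h>

  /* Space-for-time: fill a 256-entry table once, then every query
     is a single array read instead of a bit-counting loop. */
  static unsigned char popcount8[256];

  static void build_table(void)
  {
      for (int i = 0; i < 256; i++) {
          int bits = 0;
          for (int v = i; v != 0; v >>= 1)
              bits += v & 1;
          popcount8[i] = (unsigned char)bits;
      }
  }

  int main(void)
  {
      build_table();
      unsigned char sample = 0xB7;           /* 1011 0111 -> six set bits */
      printf("%d\n", popcount8[sample]);     /* prints 6 */
      return 0;
  }

The table costs 256 bytes up front; after that, every answer is a
single memory access. Spend capacity, save computation.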

The other way to tackle this is to modularise software, and minimize
the details of how these modules interact. In a way, abstracting all
those different hardware interfaces into a mass of prioritized USB
streams is a part of that process.

Ideally, each module would be simple enough to hold in the head of a
single coder, or a very small team of such coders. "Never code
something bigger than your own head", as the saying goes.
This seems to contradict your starting position, which, if I understood
you correctly, was a trend toward thinner hardware as suggested by the
software modem.

It does, yes... but it's a bit more subtle than that.

Below a certain cost, the device may be completely absorbed, the way
that you stopped seeing separate parallel port cards back in the 486
era. Above a certain cost, it makes sense to virtualize the hardware
via onboard logic that is easier to change and update.

The original modems had pretty dedicated hardware, and it made sense
to migrate this into the PC once the interface speeds ramped up from
8-bit/8MHz ISA to 32-bit/33MHz PCI.

But routers have a more complex job to do, and as they are
edge-facing, the logic is prone to "wear". Just as piston rings and
brake shoes wear from mechanical abrasion, so does edge-facing code
"wear" from the impact of exploits. As both need to be replaced
regularly, it's best not to weld them into larger, more costly
parts... which is why the integration of IE was/is such a stuff-up.

So it doesn't surprise me that routers got fatter. In fact, there is
a "thin" option; ADSL "modems" that just bind ADSL via USB to a single
system and leave the routing and edge wear to that system to manage.

I don't recommend using Windows for *that* ;-)
Gates is busy planning his legacy with Warren Buffett. He faithfully
endorses whatever Microsoft is doing, but he knows the Golden Age is over.

Actually, I'm glad he's turned attention to things that matter more
than what sort of IT pipe we use to view the world. He's acquired the
resources that could solve some bigger problems, and that's what he's
moved on to doing... and good for him (and us), I'd say!


------------ ----- ---- --- -- - - - -
The most accurate diagnostic instrument
in medicine is the Retrospectoscope
 
S

Stephan Rose

What about smaller companies though? If malware scammers won't be able to
afford it, will such companies be able to afford it?

Exactly; it's as absurd as not letting folks with big black beards on
airplanes, as a way of keeping bombs out.

A malware syndicate has the Botnet Bank to use as a virtual ATM, so
raising the readies for a certificate isn't a hurdle. Small vendors
that don't shop with other people's money are more likely to hurt.

Well, I guess that's an "oops, sorry, heh heh" side-effect that won't
worry the big-industry proponents of "signing" too much.
They don't need to be a pain. They are artificially made to be a pain.

No, they ARE inevitably a pain.

There are two reasons you need drivers; to hide the strangeness of
arbitrary hardware, and to operate that hardware efficiently.

The first means drivers written by 3rd-parties who may have their main
skills outside of programming, which means buggy code.

The second means code running at a very privileged level, where a
botch-up can nuke the whole PC, and also means allowing the code to
become intimate with the hardware, accessing it directly.

These two things make "drivers" a great place for malware to pitch
their tents. Lots of strangers, lots of power.

That's why the pain is there. Everything else is merely attempts at a
cure for this... the problem will only go away when the need for
direct hardware access and real-time performance goes away, and that
in turn is where tomorrow's performance boost will get chewed up.

No matter what though, *something* has to access the hardware. I don't care
how many abstraction layers are put in between. Something at some level has
to access the hardware and not everything can be generalized.

For video rendering the driver matters. Every line of code you can shave
out of a video driver means faster communication with the video card and
therefore faster video rendering. The faster video cards get and the higher
memory bandwidth climbs, the more of a bottleneck the driver becomes.
Adding 50 more cores to the CPU doesn't help here either. As far as I am
concerned, drivers in that category need to be as close to the hardware as
possible to be as efficient as possible.

Stuff like USB devices, on the other hand, can be reasonably abstracted, as
USB just isn't high-bandwidth enough for the driver overhead to really
matter. And honestly, I'd like the whole driver mess in USB to just... go
away. Dealing with USB drivers is a royal pain, making me wish that USB had
been implemented more like a standard serial port, without requiring drivers
for the connected device. A simple mechanism that allows the device to
register itself with the system, so that the system can assign it one or
more communication ports as needed, would in my opinion suffice. Then all
I'd need on my software end would be an API call to enumerate devices, find
my device, get its port... and start talking. No more need for USB drivers!
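
For what it's worth, user-space libraries already get close to that
"enumerate, find, talk" model for devices that don't need a kernel
driver. Here's a rough C sketch using libusb-1.0 (my choice of library
purely for illustration; this only lists what's plugged in rather than
actually talking to anything):

  #include <stdio.h>
  #include <libusb-1.0/libusb.h>   /* header path as packaged on Debian-ish systems */

  int main(void)
  {
      libusb_context *ctx = NULL;
      libusb_device **list = NULL;

      if (libusb_init(&ctx) != 0)
          return 1;

      /* "Enumerate devices": ask the library for everything on the bus */
      ssize_t count = libusb_get_device_list(ctx, &list);
      for (ssize_t i = 0; i < count; i++) {
          struct libusb_device_descriptor desc;

          /* "Find my device": match on vendor/product ID from the descriptor */
          if (libusb_get_device_descriptor(list[i], &desc) == 0)
              printf("found %04x:%04x\n",
                     (unsigned)desc.idVendor, (unsigned)desc.idProduct);
      }

      libusb_free_device_list(list, 1);
      libusb_exit(ctx);
      return 0;
  }

From there, opening a handle and doing bulk or interrupt transfers is
the "start talking" part, with no vendor-written driver in sight.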

I suppose that most of the stuff in the consumer space, beyond very
high-bandwidth devices such as video cards, *can* be reasonably abstracted
and generalized so as not to require low-level drivers written by the
hardware vendors.

Industrial / commercial markets though? Controlling machinery? Production
processes? Things requiring real-time responses? I'd be hesitant to put any
layer of abstraction there... but if Windows goes down a driver path where
vendors can no longer write drivers that operate at kernel level with
direct access to the hardware, then as far as I am concerned, it's become a
consumer-only OS and has no more place in any commercial / industrial
setting.
There's a good article on that, with respect to DRM that's embedded in
Vista. As if it wasn't bad enough having the hardware dudes
writing code to drive their hardware, we now have to have "coders of
the day" hired by media pimps blundering around with DRM logic in the
middle of these real-time data flows. Absurd.

Yup, exceedingly absurd.
False. An infected system is a foot soldier in the malware army, or a
node in what may be among the world's most powerful "virtual servers".

These infected systems harvest email addresses and send spam, collect
CC and other info, spread malware out to other systems, act as cut-out
systems in intrusions and cannon-fodder in DDoS attacks.

True enough, but it's systems that are infected doing that. Mine's not
infected, however, so it's not doing any of that. Even spam e-mails aren't
honestly a major problem for me. Sure, I get some, but it's a mere fraction
of what people who use an e-mail address provided by their ISP get.

So no, I don't really feel affected by it any significant amount.
It would be interesting to compare the relative firepower (OK,
"computational power") of, say, SETI with spammer botnets.

Agreed, that would be interesting. =)
95% of spam is sent via botnets. The impact on "everyone, everywhere"
is already very well felt.

It's not enough to plonk security on top, like fresh butter on rotten
bread. The awareness has to be pervasive. For example, imagine if
the world's most commonly-used OS filtered out all SMTP traffic that
was not from the user's designated email app, plus that designated
email app wasn't dumb enough to allow itself to be automated?

That would most certainly help, I agree there.
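
For what it's worth, something in that spirit can be sketched on Linux
today with netfilter's owner match. Assuming the mail client runs under
a dedicated user account (I'm calling it "mailer" here, purely as an
example):

  # let only the dedicated mail user originate SMTP...
  iptables -A OUTPUT -p tcp --dport 25 -m owner --uid-owner mailer -j ACCEPT
  # ...and drop outbound SMTP from everything else, which is all a spambot would be
  iptables -A OUTPUT -p tcp --dport 25 -j DROP

Crude, sure; it doesn't stop malware from driving the mail client
itself, but it shows the filtering doesn't have to live inside the
mail app.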


--
Stephan
2003 Yamaha R6

There's never a day I find myself remembering you,
because there's never been a time I forgot you.
 
S

Stephan Rose

cquirke said:
Sounds like Vista's C:\Users or XP's "C:\Documents and Settings",
without any particular Documents subtree for data.
Basically.


The idea is to ensure that no code or scripts are dropped into the
data set, so that backups of it are sure to be malware-free.

The reasoning is that (particularly in the absence of decent formal
malware cleanup), malware is one of the most likely reasons you'd have
to restore a data backup. So it's best not to have your fire
extinguisher filled with gasoline...
True.


Hold that thought; as I see it, it's the same sort of error as SR.


I think you will find there are programmers dedicated to patching and
post-RTM code development and review. You will often find that an SP
fixes exploits before they've come to light, as a result of this
reworking... we saw this with XP SP2's IE 6 revision, as well as with
IE 7 over previous IE 6 developments.

Well IE7 is just the next generation of IE. It of course makes sense that
they will also look for bugs when writing the new version and fix them.

I suppose they do have people sitting around doing nothing but patching. Now
there's a job I'd hate to do though...=)
Sounds similar to Vista32's footprint, then.

Well this is actually my footprint with all my development tools, MySQL,
vmware, and a few other applications. Plus I have about 3 different kernels
installed right now due to kernel updates from the beta. If I cleaned things
up a bit I could easily get a few gigs out of it.

I think my *fresh* install left 27.5 gigs of 32 gigs free.
For e.g.; finding "infinite life" pokes for games on the ZX Spectrum.

You could disassemble the game and then read the source to follow the
logic through the game up to where you "die", then change what happens
at that point, either by poking the code or re-assembling the source.

Or you could do a blind search for the "DEC A" opcode, replacing each
found with an "AND A" opcode, until you don't die anymore.

Guess which approach works better in practice?

What the second approach uses is logic as to how the target code
probably works. The number of lives is < 255, so will prolly be
stored as an 8-bit value. It's rarely changed, so it's prolly held in
RAM rather than in a register; in fact, the address of the location is
probably not held in a register either.

The most efficient way to load such a value - i.e. LD A,(literal) -
can only place it in the A register, so that's where it prolly will
be. The most efficient test will be a decrement, as that will set a
flag when zero is reached. Hence, search for a DEC A.

As AND A leaves A unchanged (and, with A never reaching zero, never
sets the zero flag), and is the same size as DEC A for
offset-preserving pokeage, that's what we'll substitute.

I suspect this is how searches for exploits are automated, i.e. "bash
it until it breaks" rather than "let's follow the program logic".

Most likely, however, people who look for exploits usually *don't* have the
source code available, leaving them with very little choice.

If I wanted to write an exploit, I'd prefer the 1st way with access to the
source code.
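
The blind-search half of that is only a few lines, by the way. A rough
C sketch (the snapshot filename is made up, and in practice you'd try
the candidates one at a time rather than patching the lot) that scans a
raw memory dump for the Z80 DEC A opcode and reports where you'd poke
an AND A instead:

  #include <stdio.h>

  /* Z80 single-byte opcodes */
  #define OP_DEC_A  0x3D   /* DEC A - decrements the accumulator        */
  #define OP_AND_A  0xA7   /* AND A - same size, leaves A's value alone */

  int main(void)
  {
      /* "snapshot.bin" stands in for a raw dump of the game's memory */
      FILE *f = fopen("snapshot.bin", "rb");
      if (!f) { perror("snapshot.bin"); return 1; }

      long offset = 0;
      int byte;
      while ((byte = fgetc(f)) != EOF) {
          if (byte == OP_DEC_A)
              printf("candidate DEC A at offset %ld - try poking 0x%02X there\n",
                     offset, OP_AND_A);
          offset++;
      }

      fclose(f);
      return 0;
  }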

--
Stephan
2003 Yamaha R6

There's never a day I find myself remembering you,
because there's never been a time I forgot you.
 
G

Guest

Hmm, did you try any Linux IRC channels for the problems you were having?
GRUB error codes, for example, are well documented. I prefer net installs
because they detect your hardware before they reboot, though I have no
experience with Ubuntu and especially the Ubuntu installer [Ubuntu seems
rather dumbed down to me, lacking the features and options of a real Debian
system].

By the way, Linux has been plug and play at least since 2.4. I had never
used Linux until 2.6.18 [the current stable version], so I really don't get
what you're on about.

I knew nothing about Unixen two months ago, and yes, if Windows is all
you're used to, it's going to take a few hours of poking around. But I'm much
more productive now.

As for your Firefox woes [I actually thought Ubuntu used Iceweasel? Hmm], I
haven't noticed a difference [500 MHz, 256 MB RAM]. I haven't seen a good
comparison of the swap performance of Windows vs Linux; Firefox relies very
heavily on the OS's ability to swap, and this is probably where your problem
lies, though without knowing your config I'm not sure. I hear it's better in
Vista, but I prefer the holistic approach used in FreeBSD [Vista is too
cache-happy IMHO; the time taken to fill up my RAM with stuff it's going to
have to get rid of anyway is time I could have been doing something useful
with that I/O, but what can you do?].

If you're having performance issues you might want to use a slim window
manager and lower the swappiness [that said, I've never had a speed problem
with Gnome, which is the default in Ubuntu]. Linux was not designed to be
fast, but I've found it does a good job if you install the right tools for said job.
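
If you do want to try lowering the swappiness, it's a one-liner as root
[the value 10 is just an example; the usual default is 60]:

  sysctl -w vm.swappiness=10                      # takes effect immediately
  echo "vm.swappiness = 10" >> /etc/sysctl.conf   # survives a reboot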

Why is it better? Hmm...

If most of what you do does not involve your OS, you won't notice much
difference at all. There's a little extra security [I know, I know, Vista
supports MAC and drive encryption by default, but the account separation in
Linux is better, so privilege escalation is harder to do], but it's not
compelling enough to make you want to switch, really.

But if you need to support multiple users, have a lot of apps open, or
muddle around with the filesystem a lot, you will see benefits in Unix.
Windows does support several workspaces with an unofficial UI hack, but it's
not really the same as having a fully configurable UI setup. You can also
download Python or use batch scripts to automate the tasks you'd like to do
in Windows, but it's much easier in a Unix. Plus, installation of programs in
Unix is easier, using CVS or APT [I <3 apt-get and don't know how I got by
without it].
 
