Trojan bifrose removal?


Virus Guy

Al said:
I see plenty of ".CHK" files on 98 machines and for me, each one
of those is a file system that has failed its user.

Nice hyperbole. As if NTFS crashes don't also leave file fragments
hanging around.

You're just as likely to lose a file under NTFS as you are under FAT32
if you pull the system's power.

I've never found anything useful in a .chk file, and I bet neither have
most people.
 

Al Dykes

Nice hyperbole. As if NTFS crashes don't also leave file fragments
hanging around.

Never saw them, myself. It's not like I haven't seen a zillion hard
reboots on NT systems. I don't know what a "file fragment" is, anyway.

You're just as likely to lose a file under NTFS as you are under FAT32
if you pull the system's power.


You appear to not have experience with modern filesystems.
 

Leythos

Ok, I ran Windows Backup on an XP system running with a FAT32 file
system and it created several .bkf files but kept them to just under 2
gb in size, so I guess that answers that question.
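The just-under-2-gb pieces are consistent with FAT32's per-file size cap (4 gb, with many tools stopping at the signed 32-bit 2 gb boundary to be safe). A minimal sketch of the chunking a backup tool might do - the function name and the 2 gb constant are illustrative assumptions, not Windows Backup's actual logic:

```python
# Why a backup tool on FAT32 might split its archive into pieces:
# FAT32 caps individual files at 4 gb, and cautious tools stop at the
# signed 32-bit 2 gb boundary. Purely illustrative.

CHUNK_LIMIT = 2 * 1024**3 - 1     # just under 2 gb, as observed above

def split_into_chunks(total_bytes, limit=CHUNK_LIMIT):
    """Return the sizes of the .bkf-style pieces a backup would produce."""
    chunks = []
    while total_bytes > 0:
        piece = min(total_bytes, limit)
        chunks.append(piece)
        total_bytes -= piece
    return chunks

sizes = split_into_chunks(5 * 1024**3)        # a 5 gb backup set
assert all(s <= CHUNK_LIMIT for s in sizes)   # every piece fits on FAT32
```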

And my experience contradicts yours, try it with XP Prof, Server 2003,
and try it with multiple drive types.
So you moved to each new OS when it became available. Which means you
really don't have a lot of real-world experience with FAT32 (since it
was around for a short time between 98se and 2K, perhaps a year or two
at most).

HA HA HA - you really are a tosser.
Uh huh. See above.

Again, your experience appears to be very limited.
And that qualifies your statement that FAT gets corrupted more often?
How?

Pure Experience in multiple environments and for many years.
Again, your limited experience with win-98 and FAT32 is confusing you.
Win-98 was seriously unstable back when most systems had 32 mb of memory
and buggy AGP video card drivers. By the time that motherboard and
video card drivers had been fixed, and the average system had 256 mb of
memory, most "power" users had already moved on to win-2k. Their
perception of FAT32 being "unstable" was rooted in win-98 being unstable
on the meager hardware it was run on at the time.

LOL, there you go again, people with more experience than you seem to
threaten your beliefs, but it doesn't make you right. Most people with
any real experience will contradict you.
WD always made great drives, regardless of the period.

LOL, spoken like a noob.
Seagate almost always made great drives (and yes, very recently they
****ed up real bad with some Barracudas).


Archiving is not the same as backup.

LOL - and you don't seem to understand what a BACKUP is.
 

Virus Guy

Leythos said:
And my experience contradicts yours, try it with XP Prof,
Server 2003, and try it with multiple drive types.

And your experience includes running all those os's from FAT-32
partitioned volumes?
So you moved to each new OS when it became available. Which means
you really don't have a lot of real-world experience with FAT32
(since it was around for a short time between 98se and 2K, perhaps
a year or two at most).

HA HA HA - you really are a tosser.

Is that your way of agreeing with my statement?
Again, your experience appears to be very limited.

Again, you avoid a direct response.

Nothing you've said indicates that you've spent any appreciable time
working on systems with FAT32, yet you make the sweeping statement that:
My experience has been that FAT gets corrupted way more often
than NTFS

Pure Experience in multiple environments and for many years.

For how long was a win-98se system your primary desktop system?
 

Virus Guy

Dave said:
Maaaaany moons ago, long before Windoze was even a twinkle in its
developer's eye (...)
I had a quick shufty and it was clogged solid with huge CHK files.

So we're not talking about FAT32 in that case.

What you saw was a problem with FAT12 or FAT16 and its limitation that
it can't handle more than 512 file entries in the root directory.
As you say the vast majority of CHK files don't actually contain
anything of importance and they don't usually mean that other
necessary files are corrupt or missing vital clusters. It's
mainly cluster pointers in the FAT that no longer point to files
in use.

I don't know what the issue is here.

Is it that with FAT32 you're likely to have some CHK files accumulate
over the course of a few years' worth of operation?

Or that they're even created in the first place?

In any case, it's a red herring and it has no implications for the
underlying file system.
 

Leythos

Nothing you've said indicates that you've spent any appreciable time
working on systems with FAT32, yet you make the sweeping statement that

Again, I've spent decades working with Windows operating systems, all of
them (except ME) and with the various file systems, for years each, at
least.

It's documented that NTFS is more stable than FAT32, and you've not
shown anything to the contrary - please give a link to something
credible that shows FAT/FAT16/FAT32 is more stable than NTFS.
 

Virus Guy

Leythos said:
It's documented that NTFS is more stable than FAT32, and you've not
shown anything to the contrary -

In theory, NTFS is supposed to be more reliable.

In reality FAT32 has never let me down on any system I've built, used or
manage.
please give a link to something credible that shows
FAT/FAT16/FAT32 is more stable than NTFS.

I've never said that FAT32 was _more_ stable than NTFS.

Stability is a function of the OS that's manipulating the file system
(as well as a function of maintaining a controlled supply of power to
the system). A file system is just a set of rules or specifications.
If I follow the rules, then on paper both file systems are reliable.

As I said way back in this thread, NTFS has no advantages for the
typical SOHO user but it has several disadvantages compared to FAT32. I
never said that any of those disadvantages was instability. I also
indicated that FAT32 is more "robust" than most people give it credit
for.
 

Virus Guy

Leythos said:

------------
Point 1: You cannot format a volume larger than 32 gigabytes (GB) in
size using the FAT32 file system during the Windows XP installation
process.
------------

There has been some critical commentary about this, and the consensus is
that it's an intentional handicap given to win-2k and XP by Micro$oft.
If you use third-party drive software (On Track Disc Manager for
example) or even DOS fdisk and format, you can format any size drive as
FAT32 and give it any cluster size you want (cluster-size scaling is
another Micro$oft peculiarity for FAT32).

------------
Point 2: Clusters cannot be 64 kilobytes (KB) or larger. If clusters
are 64 KB or larger, some programs (such as Setup programs) may
incorrectly calculate disk space.
------------

That point is not really a point. Since NTFS uses a default 4kb cluster
size, I'm not sure why there would be a complaint that FAT can't have
clusters larger than 32kb. Even on very large volumes (500gb, 1tb, etc.)
I'm not sure why you'd want to have clusters larger than 32kb anyway.

------------
Point 3: Windows XP supports three file systems for fixed disks: FAT16,
FAT32, and NTFS. It is recommended that you use NTFS with Windows XP
because of its advanced performance, security, and reliability features.
------------

"Advanced Performance" = hollow statement. Note he does not say "Higher
Performance".

"Security" - in what context? Does the SOHO need file system security -
to keep his system secure from himself? Or does a sys-admin need
network-level security?

"Reliability features" = unsubstantiated in the real world.

------------
Point 4: Some older programs that were not written for Windows NT 4.0 or
Windows 2000 may exhibit slow performance after you convert the FAT32
file system to NTFS. This behavior does not occur on a clean partition
of NTFS.
------------

This is irrelevant in the current context.

------------
Point 5: If you run other Windows operating systems on your computer in
addition to Windows XP, note the following issues: Only Microsoft
Windows 2000 and Windows XP have full access to files on an NTFS volume.
Also, Microsoft Windows Millennium Edition (Me), Windows 98 Second
Edition and earlier, and MS-DOS cannot access files on an NTFS volume.
------------

So if you want both OSes to see the contents of all volumes, then why
isn't the advice to use FAT32 on all volumes? Otherwise, this
point is also irrelevant in the current context.

------------
Point 6: What is Microsoft's recommendation on this? NTFS is the
recommended file system for computers running the Microsoft Windows XP
and Windows .NET Server operating systems. NTFS offers many end-user
benefits related to functionality, security, stability, availability,
reliability, and performance. NTFS, which was originally introduced
with Microsoft Windows NT® 3.1, has always provided advanced file system
features such as security, transacted operations, large volumes, and
better performance on large volumes. Such capabilities are not available
on either FAT16 or FAT32.
------------

Functionality: What can NTFS do that FAT32 can't do that would impact
the average individual or SOHO user?

Stability / Reliability: Saying it's more stable doesn't make it more
stable, and being more complex to implement doesn't make anything more
stable either.

Availability: What the hell kind of speak is that? Did a Micro$oft PR
guy create this document?

Performance: Every drive test I've seen shows FAT32 scoring higher,
because FAT32 is just plain faster - because it's simpler.

Security: Again, a fuzzy thing that in reality means nothing for the
individual or SOHO user and is the domain of the institutional /
corporate desktop setting.

Transacted Operations: If that means server use, then again we're not
talking about the individual or SOHO user.

Large volumes / better performance on large volumes: FAT32 can be used
on large volumes (up to 2.2 tb I believe) and with the same cluster-size
choices as NTFS. The fact that 2k/XP was handicapped and can't natively
format FAT32 volumes with those characteristics is not a handicap of the
file system.

What is not mentioned is ease of maintenance and virus detection /
extraction. FAT32 comes out ahead on those.

------------
Point 7: Boot time with FAT32 is increased on hard drives larger than
32 GB because of the time required to read all of the FAT structure.
This must be done to calculate the amount of free space when the volume
is mounted.
------------

It's a fallacy that the (entire) FAT is read into memory during normal
use. If that were the case, then the first win-98 systems with 16 mb of
ram wouldn't be able to function at all. The truth is that the only
time that the entire FAT (or at least large chunks of it) is read into
system memory is during disk maintenance like scandisk and maybe defrag.

When it comes to the calculation of free space when a volume is mounted,
that parameter is stored within the file system and it doesn't have to
be calculated every time a drive is mounted. I will say that a FAT32
drive with more than 6.x million clusters will take a minute or two to
initially appear under DOS and win-98 (explorer), but 2K and XP don't
exhibit that behavior.
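The two paths described above - scanning the whole FAT for zero entries versus reading a cached free-space value (FAT32 keeps one in its FSInfo sector) - can be sketched with a toy in-memory FAT. All names here are illustrative; this is not a real on-disk parser:

```python
# Toy model: two ways to find free space on a FAT32 volume.
# (Illustrative only - not a real FAT32 parser.)

CLUSTER_SIZE = 32 * 1024  # 32 kb clusters, as discussed above

def free_space_by_scan(fat):
    """Slow path: walk the whole FAT and count entries of zero."""
    return sum(1 for entry in fat if entry == 0) * CLUSTER_SIZE

def free_space_from_fsinfo(fsinfo):
    """Fast path: FAT32 caches a free-cluster count in its FSInfo sector."""
    return fsinfo["free_clusters"] * CLUSTER_SIZE

# A tiny fake FAT: 0 = free cluster, anything else = in use.
fat = [0, 2, 3, 0xFFFFFFF, 0, 0, 7, 0xFFFFFFF]
fsinfo = {"free_clusters": 3}  # kept in sync by the OS on alloc/free

# The cached value spares the OS a full FAT scan at mount time.
assert free_space_by_scan(fat) == free_space_from_fsinfo(fsinfo)
```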

------------
Point 8: Read/write performance with FAT32 is affected because the file
system must determine the free space on the disk through the small views
of the massive FAT structure. This leads to inefficiencies in file
allocation.
------------

That is just pure bullshit pulled from someone's ass.

-----------------------------------------

Max Volume Size FAT32: 32GB for all OS. 2TB for some OS

That is wrong. If we're talking about Win-98, 2K or XP, they can all use
FAT32 for very large drives, certainly 500 gb and probably higher.

Max Clusters Number: FAT32: 4,194,304

That is wrong. The real number is 268,435,437.
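The arithmetic behind both corrected figures can be checked directly. This is a back-of-the-envelope sketch; the exact count of reserved values near the top of the cluster range is glossed over, which is why the spec's figure is slightly below a clean power of two:

```python
# Back-of-the-envelope FAT32 limits (arithmetic only; illustrative).

SECTOR = 512                        # bytes per sector
MAX_SECTORS_32BIT = 2**32 - 1       # on-disk sector counts are 32-bit

# The ~2.2 tb ceiling mentioned elsewhere in the thread comes from the
# 32-bit sector count, not from the FAT itself:
max_volume_bytes = MAX_SECTORS_32BIT * SECTOR      # ~2.2 * 10**12 bytes

# FAT32 cluster numbers use 28 of the 32 bits in each FAT entry (a few
# values at the top of the range are reserved), which puts the ceiling
# in the neighborhood of the 268,435,437 figure quoted above:
max_clusters_raw = 2**28                           # 268,435,456
```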

Compression: Fat: no

That is wrong. Although I never use it, Stacker, DriveSpace and
DoubleSpace were options under FAT12 and FAT16 (but not FAT32).

Recoverability: NTFS yes FAT32 no

Define recoverability.

Disk Space Economy

That rating does not consider the potential to use the same cluster-size
as NTFS on a given volume. So disk space economy is exactly the same
for both NTFS and FAT32.

Fault Tolerance

Without explaining why or how, the score for "Fault Tolerance" is just
more arm waving.
 

Leythos

That is just pure bullshit pulled from someone's ass.

Why don't you just keep believing what you want to believe and keep
ignoring the rest of the world, you will never see anything except your
way, so why bother trying.

The fact remains, for most of us, NTFS provides better control of slack
space, better reliability, and better performance depending on how we
set up the drive based on the types of files being accessed.
 

Virus Guy

Leythos said:
Why don't you just keep believing what you want to believe

Why don't you stop flapping your gums and give a detailed rebuttal to
each of my points?
and keep ignoring the rest of the world,

Why are you such a Micro$oft ass-kisser? Why do you believe that
everything MS does is really intelligent, or good, or better? NTFS is
proprietary. That doesn't make it better than FAT32.
you will never see anything except your way, so why bother trying.

I've learned not to trust anything that comes from Macroshaft regarding
their own claims for their own products. You (and many others,
apparently) just lap that stuff up without thinking.
The fact remains, for most of us, NTFS provides better control of
slack space,

Again that is a complete lie, or you don't really understand what slack
space is.

Slack space is the amount of unused space in the last cluster (or the
only cluster) of a file. If the cluster size is 32kb, and a file is
only 5kb, then there will be 32 - 5 = 27kb of slack space. With FAT32,
just as with NTFS, you can define the cluster size at the time the
volume is formatted. Your continued refusal to understand this just
shows how ignorant you are.
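The slack-space arithmetic above generalizes to any file and cluster size; a minimal sketch (the function name is illustrative):

```python
# Slack space: unused bytes in a file's final cluster. A sketch of the
# arithmetic described above, for any file size and cluster size.

def slack_bytes(file_size, cluster_size):
    """Bytes wasted in the last (or only) cluster of a file."""
    remainder = file_size % cluster_size
    return 0 if remainder == 0 else cluster_size - remainder

# The example above: a 5 kb file on a volume with 32 kb clusters
# wastes 27 kb - whichever file system formatted the volume.
assert slack_bytes(5 * 1024, 32 * 1024) == 27 * 1024

# With 4 kb clusters (the NTFS default mentioned earlier) the same file
# wastes only 3 kb - the saving comes from cluster size, not from NTFS.
assert slack_bytes(5 * 1024, 4 * 1024) == 3 * 1024
```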
better reliability,

An arm-waving statement. NTFS is definitely more "busy" when it comes
to hard drive reading and writing, so I could argue it NEEDS more
mechanisms and procedures to safeguard its own complexity - mechanisms
and procedures that FAT32 doesn't need.
and better performance

Worse performance.
depending on how (...)

Depending on NOTHING!

Because of its complexity, logging and journaling, NTFS will ALWAYS be
slower than FAT32 when run on the same hardware.

-> http://www.pcguide.com/ref/hdd/file/ntfs/rel_Rec.htm

-------------------------
The overhead from NTFS's transactional operation reduces performance
somewhat. To partially mitigate this impact, NTFS
uses caching of some of the logging operations--in particular, it uses a
system called "lazy commit" when transactions are completed. This means
that the "commit" information associated with a completed operation is
not written directly to the disk for each completed transaction, but
rather cached and only written to the log as a background process. This
reduces the performance hit but has the potential to complicate recovery
somewhat, since a commit may not get recorded when a crash occurs. To
improve the recovery process, NTFS adds a checkpoint functionality.
Every eight seconds, the system writes a checkpoint to the log. These
checkpoints represent "milestones" so that recovery does not require
scanning back through the entire activity log.
-------------------------

It's just one big complicated layer upon layer for NTFS.

And remember that it was designed when hard drives weren't complicated
and didn't have much internal ram caching. Most of NTFS's journaling
strategies probably don't work as intended, or are unnecessary given
modern drive technology.
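The lazy-commit-plus-checkpoint scheme the excerpt describes can be modeled in a few lines. This is a simplified toy, not NTFS's actual log format; every name here is made up for illustration:

```python
# Sketch of "lazy commit + checkpoint" from the excerpt above
# (a simplified model - not NTFS's real on-disk log).

log = []                 # the on-disk activity log
pending_commits = []     # commit records cached in RAM ("lazy commit")
last_checkpoint = 0      # log index recovery can start scanning from

def do_transaction(op):
    log.append(("op", op))
    pending_commits.append(("commit", op))   # not yet on disk

def flush_commits():     # runs as a background process
    log.extend(pending_commits)
    pending_commits.clear()

def checkpoint():        # e.g. every eight seconds
    global last_checkpoint
    flush_commits()
    log.append(("checkpoint",))
    last_checkpoint = len(log) - 1

def recover():
    # Only scan from the last checkpoint, not the whole log.
    return log[last_checkpoint:]

do_transaction("write A"); checkpoint(); do_transaction("write B")
# Crash here: B's commit record was only cached, so recovery must
# re-examine B - the performance win costs some recovery complexity.
assert pending_commits == [("commit", "write B")]
assert recover()[0] == ("checkpoint",)
```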

And jeeze, look at this:

http://www.cnwrecovery.com/html/ntfs_forensic.html
CnW software has tools to assist in investigating both good
and corrupted NTFS disks.

What? I thought NTFS file systems were indestructible!?

I thought they didn't fail.

Why on earth would there be a cottage industry of third-party NTFS
repair tools?
 

Virus Guy

FromTheRafters said:

Two items on that page are of relevance:

- Transactional Operation

- Dynamic Bad Cluster Remapping

The second item has been performed at the hardware level by hard drives
themselves for most of the past 10 years, so it's irrelevant to this
discussion.

As for the first item, apparently because the typical hard drive
transaction for NTFS is so complicated and multi-staged, it either
doesn't trust itself or the hardware to complete the entire transaction
successfully, hence it relies on the last "known good" transaction until
the replacement transaction has been fully executed. I wouldn't want to
have a file system that needed to rely on such a mechanism in the first
place.

-> http://www.pcguide.com/ref/hdd/file/ntfs/rel_Rec.htm

------------
Once again, I think it's important to point out that the transaction
logging and recovery features of NTFS do not guarantee that no user data
will ever be lost on an NTFS volume. If an update of an NTFS file is
interrupted, the partially-completed update may be rolled back, so it
would need to be repeated. The recovery process ensures that files are
not left in an inconsistent state, but not that all transactions will
always be completed.
-----------

Key statements:

"transaction logging and recovery features of NTFS do not guarantee that
no user data will ever be lost on an NTFS volume"

"The recovery process ensures that files are not left in an inconsistent
state, but not that all transactions will always be completed"

NTFS needs a more sophisticated form of fault recovery simply because it
exposes itself (or the file system) to more risk during drive access.
In the end, that extra sophistication does not necessarily mean it beats
FAT32's odds of suffering corruption in a given situation.
 

Al Dykes

What? I thought NTFS file systems were indestructible!?



Compared to FAT32, they are.

FAT32 systems lose data every time they make a CHK file. Journaled
file systems don't do that.

Every well-used PC with a FAT32 disk has had CHK files on it unless
the user knew to delete them.

Virus Guy <[email protected]> is a troll.
 

Leythos

Why don't you stop flapping your gums and give a detailed rebuttal to
each of my points?

Because it would do no good, you've shown that you're only interested in
nitpicking and that nothing will change your position.

My personal as well as professional experience gives me enough data to
believe you are completely wrong and don't actually know what you're
talking about.
 

Virus Guy

Leythos said:
Because it would do no good, you've shown that you're only
interested in nitpicking

Nitpicking?

We're talking about the fundamental differences between FAT32 and NTFS
which (you say) point to NTFS being far superior to FAT32, and now you
label a discussion about those details as nitpicking?
and that nothing will change your position.

A coherent (and DEFENDABLE) explanation as to why NTFS is superior to
FAT32 _will_ change my position.
My personal as well as professional experience gives me enough
data to believe you are completely wrong

Why don't you explain how I'm wrong? Why don't you have the guts, or
the knowledge, to explain how or where I'm wrong?

I've posted specific, detailed answers. It should be easy to point out
how they're wrong.

If you don't, then all we're left with is just more gum-flapping from
you.
 

Al Dykes

Nitpicking?

We're talking about the fundamental differences between FAT32 and NTFS
which (you say) point to NTFS being far superior to FAT32, and now you
label a discussion about those details as nitpicking?


A coherent (and DEFENDABLE) explanation as to why NTFS is superior to
FAT32 _will_ change my position.


Because NTFS doesn't leave CHK files around on disks.

How long does it take to scandisk a 300GB disk, anyway?
 

Virus Guy

Compared to FAT32, they are.
FAT32 systems lose data every time they make a CHK file.
Journaled file systems don't do that.

Every well-used PC with a FAT32 disk has had CHK files on it unless
the user knew to delete them.

You should do more reading to understand what a .chk file really
represents. Very rarely does it represent data from a current "known
good" file vs a previously discarded version.

If you want to read a very good comparative explanation about NTFS vs
FAT32, I suggest you read this:

http://cquirke.blogspot.com/2006/01/bad-file-system-or-incompetent-os.html

I'll repeat it below. I suggest Leythos also read it.

Enjoy.

-------------------

14 January 2006
Bad File System or Incompetent OS?

"Use NTFS instead of FAT32, it's a better file system", goes the
knee-jerk. NTFS is a better file system, but not in a sense that every
norm in FAT32 has been improved; depending on how you use your PC and
what infrastructure you have, FATxx may still be a better choice. All
that is discussed here.

The assertion is often made that NTFS is "more robust" than FAT32, and
that FAT32 "always has errors and gets corrupted" in XP. There are two
apparent aspects to this; NTFS's transaction rollback capability, and
inherent file system robustness. But there's a third, hidden factor as
well.

Transaction Rollback

A blind spot is that the only thing expected to go wrong with file
systems is the interruption of sane write operations. All of the
strategies and defaults in Scandisk and ChkDsk/AutoChk (and automated
handling of "dirty" file system states) are based on this.

When sane file system writes are interrupted in FATxx, you are either
left with a length mismatch between FAT chaining and directory entry (in
which case the file data will be truncated) or a FAT chain that has no
directory entry (in which case the file data may be recovered as a "lost
cluster chain" .chk file). It's very rare that the FAT will be
mismatched (the benign "mismatched FAT", and the only case where blind
one-FAT-over-the-other is safe). After repair, you are left with a sane
file system, and the data you were writing is flagged and logged as
damaged (therefore repaired) and you know you should treat that data
with suspicion.

When sane file system writes are interrupted in NTFS, transaction
rollback "undoes" the operation. This assures file system sanity without
having to "repair" it (in essence, the repair is automated and hidden
from you). It also means that all data that was being written is
smoothly and seamlessly lost. The small print in the articles on
Transaction Rollback make it clear that only the metadata is preserved;
"user data" (i.e. the actual content of the file) is not preserved.

Inherent Robustness

What happens when other things cause file system corruption, such as
insane writes to disk structures, arbitrary sectors written to the wrong
addresses, physically unrecoverable bad sectors, unintentional power
interruptions, or malicious malware payloads a la Witty? (Witty worm,
March 2004). That is the true test of file system robustness, and
survivability pivots on four things; redundant information,
documentation, OS accessibility, and data recovery tools.

FATxx redundancy includes the comparison of file data length as defined
in directory entry vs. FAT cluster chaining, and the dual FATs to
protect chaining information that cannot be deduced should this
information be lost. Redundancy is required not only to guide repair,
but to detect errors in the first place - each cluster address should
appear only once within the FAT and collected directory entries, i.e.
each cluster should be part of the chain of one file or the start of the
data of one file, so it is easy to detect anomalies such as cross-links
and lost cluster chains.
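The reference-counting check the author describes - each in-use cluster referenced exactly once, either by a directory entry or by one chain link - can be sketched against a toy FAT. The encoding here (0 = free, -1 = end of chain, clusters numbered from 0) is a simplification of real FAT entry values:

```python
# Sketch of the redundancy check described above: in a healthy FAT,
# every in-use cluster is referenced exactly once - by one directory
# entry (a file's first cluster) or by one FAT chain link.

from collections import Counter

def audit(fat, dir_entries, EOC=-1, FREE=0):
    """Return (cross_linked, lost) cluster sets for a toy FAT."""
    refs = Counter(dir_entries)                 # first clusters of files
    for cluster, nxt in enumerate(fat):
        if nxt not in (FREE, EOC):
            refs[nxt] += 1                      # chain links
    in_use = {c for c, nxt in enumerate(fat) if nxt != FREE}
    cross_linked = {c for c, n in refs.items() if n > 1}
    lost = in_use - set(refs)                   # chains nothing points to
    return cross_linked, lost

# cluster:      0  1   2  3   4
fat =        [  2, 0, -1, 4, -1 ]   # file A: 0->2; orphan chain: 3->4
dir_entries = [0]                   # only file A has a directory entry
cross, lost = audit(fat, dir_entries)
assert cross == set()
assert lost == {3}    # cluster 3 heads a "lost chain" -> a .chk file
```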

NTFS redundancy isn't quite as clear-cut, extending as it does to
duplication of the first 5 records in the Master File Table (MFT). It's
not clear what redundancy there is for anything else, nor are there
tools that can harness this in a user-controlled way.

FATxx is a well-documented standard, and there are plenty of repair
tools available for it. It can be read from a large number of OSs, many
of which are safe for at-risk volumes, i.e. they will not initiate
writes to the at-risk volume of their own accord. Many OSs will tolerate
an utterly deranged FATxx volume simply because unless you initiate an
action on that volume, the OS will simply ignore it. Such OSs can be
used to safely platform your recovery tools, which include
interactively-controllable file system repair tools such as Scandisk.

NTFS is undocumented at the raw bytes level because it is proprietary
and subject to change. This is an unavoidable side-effect of deploying
OS features and security down into the file system (essential if such
security is to be effective), but it does make it hard for tools
vendors. There is no interactive NTFS repair tool such as Scandisk, and
what data recovery tools there are, are mainly of the "trust me, I'll do
it for you" kind. There's no equivalent of Norton DiskEdit, i.e. a raw
sector editor with an understanding of NTFS structure.

More to the point, accessibility is fragile with NTFS. Almost all OSs
depend on NTFS.SYS to access NTFS, whether these be XP (including Safe
Command Only), the bootable XP CD (including Recovery Console), Bart PE
CDR, MS WinPE, Linux that uses the "capture" approach to shelling
NTFS.SYS, or Sysinternals' "Pro" (writable) feeware NTFS drivers for
DOS mode and Win9x GUI.

This came to light when a particular NTFS volume started crashing
NTFS.SYS with STOP 0x24 errors in every context tested (I didn't test
Linux or feeware DOS/Win9x drivers). For starters, that makes ChkDsk
impossible to run, washing out MS's advice to "run ChkDsk /F" to fix the
issue, possible causes of which are sanguinely described as including
"too many files" and "too much file system fragmentation".

The only access I could acquire was BING (www.bootitng.com) to test the
file system as a side-effect of imaging it off and resizing it (it
passes with no errors), and two DOS mode tactics; the LFN-unaware
ReadNTFS utility that allows files and subtrees to be copied off, one at
a time, and full LFN access by loading first an LFN TSR, then the
freeware (read-only) NTFS TSR. Unfortunately, XCopy doesn't see LFNs via
the LFN TSR, and Odi's LFN Tools don't work through drivers such as the
NTFS TSR, so files had to be copied one directory level at a time.

FATxx concentrates all "raw" file system structure at the front of the
disk, making it possible to backup and drop in variations of this
structure while leaving file contents undisturbed. For example, if the
FATs are botched, you can drop in alternate FATs (i.e. using different
repair strategies) and copy off the data under each. It also means the
state of the file system can be snapshotted in quite a small footprint.

In contrast, NTFS sprawls its file system structure all over the place,
mixed in with the data space. This may remove the performance impact of
"back to base" head travel, but it means the whole volume has to be
raw-imaged off to preserve the file system state. This is one of several
compelling arguments in favor of small volumes, if planning for
survivability.

OS Competence

From reading the above, one wonders if NTFS really is more survivable
or robust than FATxx. One also wonders why NTFS advocates are having such
bad mileage with FATxx, given there's little inherent in the file system
structural design to account for this. The answer may lie here.

We know XP is incompetent in managing FAT32 volumes over 32G in size, in
that it is unable to format them. (see below). If you do trick XP into
formatting a volume larger than 32G as FAT32, it fails in the dirtiest,
most destructive way possible; it begins the format (thus irreversibly
clobbering whatever was there before), grinds away for ages, and then
dies with an error when it gets to 32G. This standard of coding is so
bad as to look like a deliberate attempt to create the impression that
FATxx is inherently "bad".

But try this on a FATxx volume; run ChkDsk on it from an XP command
prompt and see how long it takes, then right-click the volume and go
Properties, Tools and "check the file system for errors" and note how
long that takes. Yep, the second process is magically quick; so quick,
it may not even have time to recalculate free space (count all FAT
entries of zero) and compare that to the free space value cached in the
FAT32 boot record.

Now test what this implies; deliberately hand-craft errors in a FATxx
file system, do the right-click "check for errors", note that it finds
none, then get out to DOS mode and do a Scandisk and see what that
finds. Riiight... perhaps the reason FATxx "always has errors" in XP is
because XP's tools are too brain-dead to fix them?

My strategy has always been to build on FATxx rather than NTFS, and
retain a Win9x DOS mode as an alternate boot via Boot.ini - so when I
want to check and fix file system errors, I use DOS mode Scandisk,
rather than XP's AutoChk/ChkDsk (I suppress AutoChk). Maybe that's why
I'm not seeing the "FATxx always has errors" problem? Unfortunately, DOS
mode and Scandisk can't be trusted > 137G, so there's one more reason to
prefer small volumes.

---------------

While the author quite correctly observes that XP can't format a FAT32
volume larger than 32gb, it's been my experience that when a FAT32
volume (or drive) of any size is pre-formatted and then presented to XP,
that XP has no problems mounting and using the volume / drive, and XP
can even be installed on and operate from such a volume / drive.

The author also mentions the 137 gb volume size issue that is associated
with FAT32, but that association is false. It originates from the fact
that the 32-bit protected mode driver (ESDI_506.PDR) used by win-98 has
a "flaw" that prevents it from correctly addressing sectors beyond the
137 gb point on the drive. There are several work-arounds for this
(a third-party replacement for that driver, the use of SATA raid mode,
etc.) but that issue is relevant only to win-98 and how it handles large
FAT32 volumes, not how XP handles large FAT32 volumes.
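The 137 gb figure is simple arithmetic on the 28-bit LBA sector addressing that older driver was limited to:

```python
# The 137 gb figure above is the 28-bit LBA addressing ceiling,
# not a FAT32 limit - quick arithmetic:

SECTOR = 512
lba28_limit = 2**28 * SECTOR       # 137,438,953,472 bytes ~= 137 gb
assert lba28_limit == 137438953472
```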
 

Leythos

I've posted specific, detailed answers. It should be easy to point out
how they're wrong.

You've posted your OPINION, contradicted by many, and it's obvious that
it's not worth the time to "attempt" a discussion with you.

Your limited experience and massive ego seem to give you Guru Complex,
so it's not going to do any good to try and "discuss" anything with you.
 
