[/QUOTE]
(edited rather than bulk-trimmed to preserve meaning)
[QUOTE]
I have used NTFS since Windows 2000 came out as I needed the
security features. I have also had less problems with file and disk
corruption but this is just personal anecdotal evidence.
[/QUOTE]
I'm wondering if that difference is to do with poor OS support for
FATxx, rather than anything inherent in the file systems.
MS's approach to file system maintenance seems to assume that the
interruption of sane file system operations is the only thing that
ever goes wrong. Whether they are unaware of corruption from other
causes, or take a cynical "not our problem" approach to ignoring
these, is open to conjecture.
So let's look at those scenarios first.
When the system crashes or is reset while file operations are in
progress, the file data will be left in an indeterminate state.
Each non-empty file has three components:
- data within a cluster chain
- a directory entry that points to the cluster chain
- information enumerating the clusters in the chain
Implicit is a fourth item: tracking of free clusters, so that those
occupied by file data are excluded from allocation.
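For the sake of being concrete, those pieces look roughly like this in
C (layout per the published FAT spec; the struct and field names are
just my own shorthand):

    #include <stdint.h>

    /* Rough sketch of the on-disk pieces of a FATxx file. Layout
       follows the published FAT spec; names are my own shorthand. */

    /* (2) the 32-byte directory entry: names the first cluster and
       the byte length the file is supposed to have. */
    #pragma pack(push, 1)
    typedef struct {
        uint8_t  name[11];          /* 8.3 name, space padded          */
        uint8_t  attr;              /* attribute bits                  */
        uint8_t  reserved[8];       /* NT flags, creation/access stamps*/
        uint16_t first_cluster_hi;  /* high word of start (FAT32 only) */
        uint16_t write_time;
        uint16_t write_date;
        uint16_t first_cluster_lo;  /* start of the cluster chain      */
        uint32_t size_bytes;        /* length claimed by the dir entry */
    } fat_dirent_t;
    #pragma pack(pop)

    /* (3) the FAT itself: one entry per cluster, holding the number
       of the NEXT cluster in the chain, or an end-of-chain marker.
       FAT12/16/32 use 12, 16 or 32 bits per entry respectively. */
    typedef uint16_t fat16_entry_t;

    /* (1) the data occupies whichever clusters the chain names, and
       (4, implicit) a cluster whose FAT entry is 0 is free. */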
If you interrupt a file write in FATxx, there will usually be a
mismatch between the length implied by the cluster chaining info and
the length stated in the directory entry. Part or all of the file may
be lost when automatic Scandisk detects this mismatch and "fixes" the
file by truncating it, or (if the dir entry is missing) by recovering
the cluster chain as a "lost" chain saved as a .CHK file in the
volume's root directory.
If you interrupt a file write in NTFS, transaction logging causes the
incomplete new version to be irreversibly discarded.
So both file systems are capable of being repaired after a bad exit,
with broken data often being partially or completely lost in FATxx,
and always being completely lost in NTFS. Hmm.
Now let's look at file system structure detail and survivability.
In the case of FATxx, the tracking of free clusters and the chaining
of file data is done in the same place; a mirrored pair of tables of
cluster addresses called the File Allocation Tables. These are
contiguous data structures at the start of the volume, lying one after
the other, and both are updated as close to the same time as possible.
In the case of NTFS, there are no FATs. Instead (AFAIK) free clusters
are marked in a bitmap, which is like a miniature FAT except it needs 1
bit to store used/free status vs. 12, 16 or 32 bits to store the
address of the following cluster (or 0 if cluster is free). The
linkage of data clusters is stored elsewhere, within the file's
directory information (or, more accurately, another per-file metadata
structure), and is stored as a set of (start, length) entries, each
defining a run of contiguous clusters (again, AFAIK).
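Put side by side, the two bookkeeping styles look something like this
(the NTFS half follows the AFAIK description above; these are
illustrative shapes, not the actual on-disk NTFS formats):

    #include <stdint.h>

    /* FATxx: one table entry per cluster, doing double duty as the
       free map (0 = free) and the chain (entry = next cluster).     */
    typedef uint16_t fat16_entry_t;   /* 12/16/32 bits per cluster   */

    /* NTFS-style: free space needs only one bit per cluster...      */
    static inline int cluster_is_free(const uint8_t *bitmap, uint64_t lcn)
    {
        return !(bitmap[lcn / 8] & (1u << (lcn % 8)));
    }

    /* ...and a file's chaining lives with the file's own metadata,
       as one (start, length) entry per contiguous run of clusters.  */
    typedef struct {
        uint64_t start_cluster;   /* first cluster of this run    */
        uint64_t run_length;      /* contiguous clusters in run   */
    } extent_run_t;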
Which of these is best suited to survive corruption? Which has shorter
critical windows during a file update process? Which has the greater
redundancy to detect errors and provide fallback information to guide
repair? FATxx has duplicate FAT copies, but most NTFS files do not
have duplicate cluster chain info, AFAIK, and I don't know whether the
free space bitmap is one of the MFT entries for which a duplicate copy
of the metadata is kept.
Without duplication of core file system structural information, either
explicitly (e.g. dual FAT, mirrored crucial MFT entries) or by
deduction (e.g. cross-referencing directory file length with cluster
chaining information), data will be lost if random garbage overwrites
the file structure, and errors can't be detected.
FATxx is a simple, well-documented file system that stores all core
structures at the start of the volume. So if you need to recover data
via various putative structures, it's easy to backup and restore this
area of file system structure to try (and undo) these alternatives.
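To give a feel for how small that area is: on a FAT12/16 volume its
size falls straight out of the boot sector fields, as in this sketch
(the BPB offsets in the comments are the standard ones; the sample
values are typical for a 2G FAT16 volume and are only illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* How much of the start of a FAT12/16 volume holds ALL of its
       core structures: reserved sectors + both FATs + root dir.    */
    int main(void)
    {
        /* Normally read from boot sector 0; sample values here.    */
        uint32_t bytes_per_sec = 512;  /* BPB offset 11, 2 bytes    */
        uint32_t rsvd_secs     = 1;    /* BPB offset 14, 2 bytes    */
        uint32_t num_fats      = 2;    /* BPB offset 16, 1 byte     */
        uint32_t root_entries  = 512;  /* BPB offset 17, 2 bytes    */
        uint32_t secs_per_fat  = 256;  /* BPB offset 22, 2 bytes    */

        uint32_t root_secs   = (root_entries * 32 + bytes_per_sec - 1)
                               / bytes_per_sec;
        uint32_t struct_secs = rsvd_secs + num_fats * secs_per_fat
                               + root_secs;

        printf("structural area: %u sectors (%u KB) to image off\n",
               struct_secs, struct_secs * bytes_per_sec / 1024);
        return 0;
    }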
NTFS is complex, proprietary, and poorly documented, and it scatters
its structural data across the whole volume. Even if you had the
tools and know-how to try different putative file system structures,
you can't back up, swap and restore these because they permeate the
entire volume. So manual NTFS data recovery is near-impossible.
I mentioned "poor support for FATxx" in XP.
Firstly and most famously, XP cannot create a FAT32 volume over 32G.
Less obviously, the Properties, Tools, Check for errors facility does
not in fact do anything at all to check or fix FATxx file system
structure. I have verified this by running this test - it takes
almost no time, and reports no errors - and then doing a Scandisk from
DOS mode, which does find and manage errors.
ChkDsk, and possibly AutoChk, do seem to really test the file system
structure. But because these primitive DOS 5 era tools allow no user
control as they check the volume - you have to (dis)allow fixing
before they start, and have no veto power thereafter - I don't use
them for FATxx < 137G; instead, I manage FATxx from DOS mode, using
Scandisk. With NTFS I have no choice but to trust ChkDsk.
So if XP fails to actually maintain FATxx, then it's hardly a surprise
that mileage appears poorer than it does for NTFS.
[QUOTE]
With large hard drives (> 137 GB) becoming very common and most
systems having DVD drives installed FAT is now almost useless
[/QUOTE]
There are two aspects to that: the maximum per-file size, which limits
FATxx usefulness when mastering DVDs or managing video etc., and efficiency
considerations when dealing with large volumes.
But perhaps you are assuming the use of one huge C: for the hard
drive, as MS (IMO, ill-advisedly) recommends. I'd say that is such a
bad idea that it overshadows FATxx vs. NTFS, i.e. one huge C: vs.
intelligent partitioning has more impact than FATxx vs. NTFS.
Once you don't set the whole HD as one big doomed C:, you can have
your cake and eat it - i.e. use both NTFS and FATxx to taste.
NTFS is better for massive volumes, for a number of reasons...
1) More compact chaining information
FATs contain a map of all cluster addresses in the volume, and as the
volume gets larger, these tables get bigger both because there are
more clusters to track, and because larger addresses require more
space to hold each entry. You can hedge this by increasing the size of
each cluster, but that can only do so much, and just moves the
inefficiency somewhere else (cluster slack space bloat vs. large file
system cluster address tables).
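To put rough numbers on that tradeoff (a sketch; the 120 GB volume and
100,000-file count are just example figures):

    #include <stdint.h>
    #include <stdio.h>

    /* Bigger clusters shrink the FATs but waste more slack per file
       (about half a cluster per file, on average, assumed here).    */
    int main(void)
    {
        uint64_t volume_bytes = 120ULL << 30;  /* example 120 GB     */
        uint64_t files        = 100000;        /* example file count */

        for (uint32_t ckb = 4; ckb <= 64; ckb *= 2) {
            uint64_t clusters = volume_bytes / (ckb * 1024ULL);
            uint64_t fat_mb   = clusters * 4 / (1024 * 1024);
            uint64_t slack_mb = files * ckb / 2 / 1024;
            printf("%2u KB clusters: each FAT ~%llu MB, slack ~%llu MB\n",
                   ckb, (unsigned long long)fat_mb,
                   (unsigned long long)slack_mb);
        }
        return 0;
    }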
NTFS avoids this by not storing any table of addresses. Instead, a
far smaller table is used to store only the free/used status, and
cluster-chaining info is held as start/length info for each run of
clusters (the more fragmented the file, the more run entries). Storing
this run info nearer the rest of the file's metadata can reduce head
travel and thus shrink the critical window period needed to update the
file, but a lack of redundancy increases vulnerability to corruption.
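The per-file difference is easy to see with example numbers (a sketch;
the 700 MB file and 8-fragment count are assumptions, and real NTFS
packs its runs more tightly than the 16 bytes per run used here):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t file_bytes = 700ULL << 20;  /* example 700 MB file  */
        uint32_t cluster_b  = 32 * 1024;     /* 32K clusters         */
        uint32_t fragments  = 8;             /* example fragmentation*/

        uint64_t clusters  = (file_bytes + cluster_b - 1) / cluster_b;
        uint64_t fat_bytes = clusters * 4;   /* 4 bytes per entry    */
        uint64_t run_bytes = fragments * 16; /* one (start,len) each */

        printf("FAT chain: %llu entries = %llu KB of table walked\n",
               (unsigned long long)clusters,
               (unsigned long long)(fat_bytes / 1024));
        printf("run list : %u runs = %llu bytes of metadata\n",
               fragments, (unsigned long long)run_bytes);
        return 0;
    }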
2) More efficient directory structure
FATxx directories are linear lists that have to be traversed from
start to end (or start to match), and that scales poorly for
directories containing thousands of files - especially when these have
additional entries to hold Long File Names.
NTFS directories are b-tree in structure (AFAIK), so it's faster to
traverse them to find entries; this is also why they are inherently
alpha-sorted. In addition, small files may be completely contained
within the metadata, with no data cluster chain at all.
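The scaling difference is the usual linear-scan vs. indexed-lookup
one; a toy sketch (binary search over a sorted list stands in for the
tree lookup, since the point is the growth rate, not the structure):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int cmp(const void *a, const void *b)
    {
        return strcmp(*(const char *const *)a, *(const char *const *)b);
    }

    int main(void)
    {
        const char *dir[] = { "readme.txt", "budget.xls", "photo.jpg",
                              "notes.doc", "setup.exe" };
        size_t n = sizeof dir / sizeof dir[0];
        const char *want = "photo.jpg";

        /* FATxx-style: walk every entry until a match - O(n).       */
        size_t steps = 0;
        for (size_t i = 0; i < n; i++) {
            steps++;
            if (strcmp(dir[i], want) == 0) break;
        }
        printf("linear scan: %zu comparisons\n", steps);

        /* Index-style: keep entries sorted and search - O(log n).
           Keeping them sorted is also why they come out alpha-sorted. */
        qsort(dir, n, sizeof dir[0], cmp);
        const char **hit = bsearch(&want, dir, n, sizeof dir[0], cmp);
        printf("sorted lookup: %s\n", hit ? "found" : "not found");
        return 0;
    }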
....but large volumes have weaknesses, irrespective of file system;
they take longer to check for errors, defrag, image off as backups,
and more space is required to hold these image backups. At least with
FATxx, you can backup and restore the file system structure in a
reasonable amount of space while doing data recovery; large NTFS
volumes would be particularly disastrous in this context.
I use a 2G FAT16 (!) volume for crucial data, for the best
survivability possible. Files under 32k in size can be recovered
completely even if no FAT survives, the number of clusters is so low
it's almost eyeball-manageable, and the entire volume can be peeled off
as one 2G image to be worked on elsewhere, if data is to be recovered
but the afflicted PC has to get back to work immediately.
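The arithmetic behind that choice (2G FAT16 means 32K clusters, so the
rest follows; this is a sketch, not a recovery tool):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t volume_bytes  = 2ULL << 30;   /* 2 GB volume        */
        uint32_t cluster_bytes = 32 * 1024;    /* FAT16 at 2 GB: 32K */

        uint64_t clusters = volume_bytes / cluster_bytes;
        printf("roughly %llu clusters - nearly eyeball-manageable\n",
               (unsigned long long)clusters);

        /* A file no bigger than one cluster has no chain to lose:
           its directory entry alone (start cluster + byte length)
           pinpoints the data, even if both FATs are destroyed.      */
        uint32_t file_bytes = 20000;           /* example small file */
        if (file_bytes <= cluster_bytes)
            printf("a %u-byte file fits in one cluster: recoverable "
                   "with no FAT at all\n", file_bytes);
        return 0;
    }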
[QUOTE]
Most people prefer one large partition even when the benefits of
multi partitions have been explained
[/QUOTE]
MS prefers this, but not because it's best for data safety. Their
agenda is to reduce support calls, and when they pitched this as a
recommendation to us as system builders, it was a blatant case of
"this is easier for users and will give you less support calls"; no
attention was paid to data safety, as if this didn't matter.
[QUOTE]
The laptops all have DVD burners and are used on site with
digital camcorders. This is just one example. With XP MCE and video in
general becoming more popular the 4 GB limit is the deciding factor.
[/QUOTE]
If you are dumb enough to go one-big-C:, then you take your lumps.
NTFS is less useless at that than FATxx, but both scenarios suck.
---------- ----- ---- --- -- - - - -
Don't pay malware vendors - boycott Sony