FAT32 or NTFS?

Al Dykes

Can you recommend free utilities to gain access to NTFS partitions
from outside the operating system?

Knoppix Linux (and some other Linux distros), burned to a bootable CD
and booted, mounts all the FAT/FAT32/NTFS partitions it finds
read-only and gives you all the tools you need to burn your data to
CDs, copy it over Ethernet, or copy it to external backpack disks.

I've done this a bunch of times.

Get Knoppix from linuxiso.org.
 
Plato

Toshi1873 said:
I've lost more data to FAT32 corrupting itself than I
have to NTFS corrupting itself. FAT32 has *no* redundancy
of the directory information on the disk.

Then you had a hardware problem. FAT32 is perfectly fine on a properly
designed PC.
 
J. Clarke

Michael said:
Also:
Bart's PE Builder can be used to build a "live" Windows CD.
http://www.nu2.nu/pebuilder/

Not free:
McAfee Clean Boot (still in beta?)
Winternals' ERD Commander
Avast BART CD
Microsoft OPK PE CD

Also, if you have the docs on the format then you can go after it with any
sector editor.
 
J. Clarke

Plato said:
Then you had a hardware problem. FAT32 is perfectly fine on a properly
designed PC.

No, FAT32 is perfectly fine on a machine that never breaks. Now, if you want
to claim that a "properly designed PC" is immortal and indestructible, runs
forever without power, and never crashes, then on your machine it may be
"perfectly fine". On the machines the rest of us have to work with, it
leaves a bit to be desired.
 
Michael Cecil

Then you had a hardware problem. FAT32 is perfectly fine on a properly
designed PC.

If you don't have a UPS and the power goes out in the middle of a long
write operation, FAT will have problems. NTFS will just smile, 'cause it's
journaled.
 
Alexander Grigoriev

I've had a motherboard that suffered memory corruption while the IDE
controller was transferring data. The corrupted memory was unrelated to the
I/O buffers. The system crashed every so often, at least once a day. I was
at a loss as to what to blame.

But NTFS (80, later 120 GB) survived all those crashes.
 
Alexander Grigoriev

Starting with Windows XP, BACKUP can back up open files, even those opened
with a zero sharing flag. It takes a momentary snapshot of the volume state
using Volume Shadow Copy, so you don't need to close background
services and processes.
 
cquirke (MVP Win9x)

"Rod Speed" <[email protected]> wrote in message

See http://cquirke.mvps.org/ntfs.htm (comparing the file systems) and
http://cquirke.mvps.org/whatmos.htm on recovery tools.

Eric made a couple of assertions:

"Besides, you lose all LFNs"

False; this is not inevitably so, as there are tools to preserve LFNs
while working in DOS mode - and I'm not talking about DOSLFNBk here.

See the links above.

"Recovery of corrupt volumes is more likely with NTFS."

Details on that assertion, please?

"You only need one NTFS recovery tool, findntfs"

URL? Hint: finding the partition is not 'recovery', either.

"FAT recovery depends on regular defrags, NTFS does not"

Outside of one particular circumstance - loss of the FAT themselves
(and there are two of them, remember) - false.

How does NTFS recovery do when the cluster bitmap or MFT are lost?

True. Also, NTFS does not protect you against malware performing low-level
raw disk writes, even from within NT, as Witty demonstrated.

See links at start of this message. You can cherry-pick files from
DOS mode either way, but whereas Odi's LFN Tools can pull an entire
FATxx volume, ReadNTFS pulls one subtree at a time and loses LFNs.

Ah, the backup myth. If I dropped in unexpectedly, scorched your HD,
and gave you $10, would you say "Gee, thanks for the $10!" or "Hey!
WTF did you do to my hard drive?" ;-)


---------- ----- ---- --- -- - - - -
Certainty may be your biggest weakness
 
DILIP

FAT32 does claim to support partitions up to 2TB, but it does so at the cost
of a hugely bloated FAT and large cluster sizes, which is wasteful and
undesirable.
 
cquirke (MVP Win9x)

I thought FAT filesystems kept two copies of the directory and the
file allocation table. But then maybe that was FAT16.

No; FATxx (for any value of xx) keeps two copies of the FAT (File
Allocation Table). This is a list of cluster addresses that tracks which
data cluster comes after which data cluster, and in FAT32 this list
becomes large and takes up a lot of RAM. This, as much as anything
else, is what makes FATxx a bad choice for tomorrow's capacities.
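To put rough numbers on that, here's a quick sketch. The 120G volume and
32k clusters match the figures later in this thread; the 4-byte FAT32
entry is the on-disk entry size, and the one-bit-per-cluster bitmap is
NTFS's $Bitmap:

```python
# Rough sketch: compare the size of one FAT32 FAT copy with the size
# of NTFS's cluster bitmap for the same volume.

VOLUME_BYTES = 120 * 10**9      # 120 GB volume
CLUSTER_BYTES = 32 * 1024       # 32 KiB clusters

clusters = VOLUME_BYTES // CLUSTER_BYTES

fat_entry_bits = 32             # FAT32 keeps a 32-bit entry per cluster
fat_copy_bytes = clusters * fat_entry_bits // 8
bitmap_bytes = clusters // 8    # NTFS $Bitmap: one bit per cluster

print(f"clusters:     {clusters:,}")
print(f"one FAT copy: {fat_copy_bytes / 2**20:.1f} MiB (and FATxx keeps two)")
print(f"NTFS bitmap:  {bitmap_bytes / 2**10:.1f} KiB")
print(f"ratio:        1/{fat_copy_bytes // bitmap_bytes}")
```

That's roughly 14 MiB per FAT copy (held largely in RAM for speed) against
under half a MiB of bitmap - the 1/32 ratio mentioned above.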

In contrast, NTFS stores chaining and cluster info differently (and
more space-efficiently). AFAIK it works like this:

A bitmap holds a flag for whether each cluster is free or not, and that
table is thus 1/32 the size of a FAT copy. I don't know whether this
structure is duplicated or not.

Data clusters are pointed to as data runs, each of which is assumed to
be contiguous. There is a set of starting points (and, I presume,
cluster run lengths) for each data run within the file, and this info
is held in the directory entry for that file. This space has to be
unbounded, if you consider the worst-case scenario of a >4G file
cluster chain that is completely fragmented, so that each data cluster
is a separate data run of one cluster.

Hmm, can you see a scalability crunch looming there?
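A back-of-envelope sketch of that worst case. The 8 bytes per run entry is
an assumed average (real NTFS run entries are variable-length), but the
order of magnitude is the point:

```python
# Worst-case sketch: a fully fragmented file where every cluster is
# its own data run, so the run list grows with the file's size.

FILE_BYTES = 4 * 2**30          # a 4 GiB file
CLUSTER_BYTES = 32 * 1024       # 32 KiB clusters
RUN_BYTES = 8                   # assumed average bytes per run entry

runs = FILE_BYTES // CLUSTER_BYTES      # one run per cluster, worst case
runlist_bytes = runs * RUN_BYTES

print(f"data runs: {runs:,}")
print(f"run list:  {runlist_bytes / 2**10:.0f} KiB"
      f" (vs. a ~1 KiB MFT record)")
```

A run list of that size can't possibly stay resident in the MFT record,
which is exactly the scalability crunch being hinted at.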

If NTFS were to mirror FATxx's redundancy of the FAT, it would have to
duplicate both the bitmap of cluster status and the data run lists
within each directory entry - as so much of the crucial erstwhile-FAT
info is now held in the directory entry itself.

As said, FATxx does not duplicate directory entry info as it does the
FATs. In FATxx, the dir info contains two crucial items: the start cluster
address of the file, and the length of the file in bytes. Recovered
lost cluster chains have to guess the latter, and assume the full
capacity of the cluster chain (i.e. up to the slack space at the end).

To what extent are NTFS structures duplicated, and what are the disk
footprint and head-travel overhead implications of this?

It's time I re-read the Linux documentation of NTFS's byte-level
structure. It's quite a head-bender, but each time I run at it,
I get slightly further up the wall ;-)

It's ironic that we have to rely on Linux reverse-engineering efforts to
learn about our own file system, but there you are!


 
Bob

Starting with Windows XP, BACKUP can back up open files, even those opened
with a zero sharing flag. It takes a momentary snapshot of the volume state
using Volume Shadow Copy, so you don't need to close background
services and processes.

That's good to know. It looks like someday soon I am going to have to
migrate to XP.


--

Map Of The Vast Right Wing Conspiracy:
http://www.freewebs.com/vrwc/

"You can all go to hell, and I will go to Texas."
--David Crockett
 
Andrew Rossmann

[This followup was posted to comp.sys.ibm.pc.hardware.storage and a copy
was sent to the cited author.]

I use XP.

I want to create a data partition on a new hard drive of approx 50
to 100 GB and I want to store only data in it (jpegs, mpegs, mp3s).

Is it better to have it as FAT32 or NTFS? I want resilience,
recovery, ease of repair, and that sort of thing.

With all the fighting in the other posts about NTFS vs FAT32, I don't
see anybody mentioning one important aspect you may need to know for your
video files:
FAT has a maximum individual file size of 4G. NTFS supports
individual file sizes that are effectively unlimited at today's
capacities.
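That 4G ceiling comes from FAT storing a file's size in a 32-bit field. A
quick pre-flight check before copying video to a FAT32 volume might look
like this (illustrative sketch; the function name is made up):

```python
import os

# FAT directory entries hold the file size in an unsigned 32-bit field,
# so no single file can exceed 2**32 - 1 bytes (just under 4 GiB).
FAT32_MAX_FILE = 2**32 - 1

def fits_on_fat32(path):
    """Return True if this file could be stored on a FAT32 volume."""
    return os.path.getsize(path) <= FAT32_MAX_FILE
```

NTFS has no such per-file limit that matters at current disk sizes, which
is why long video captures usually force the NTFS choice on their own.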
 
Bob

Hell is a little cooler.

I lived in the St. Louis area when I was growing up, and it was a lot
hotter than Texas.

I was in Chicago one summer and it was a lot hotter there in August
than in Texas.

What you left out is that most of Texas does not suffer from snowfall.
Our cool season is very mild and makes for great outdoor activity.

I'll take the heat - you can have the snow.

 
cquirke (MVP Win9x)

FAT32 does claim to support partitions up to 2TB, but it does so at the cost
of a hugely bloated FAT and large cluster sizes, which is wasteful and
undesirable.

Don't get mesmerized by cluster size - it's not always relevant, and
in the circumstances discussed here, largely irrelevant.

Here's some real figures off a client's PC I am working on...

Volume: D: (FAT32, 120G, 32k clusters)
Content: Video files
140 files, 10 folders, 13.4G
Average file size: 96M
14 391 101 876 bytes of data
14 393 475 072 bytes of space occupied
0.02% capacity wasted in slack space

Volume: D: (FAT32, 120G, 32k clusters)
Content: MP3 music files
8 976 files, 703 folders, 42.3G
Average file size: 4.7M
45 448 637 616 bytes of data
45 598 375 936 bytes of space occupied
0.33% capacity wasted in slack space

Volume: C: (FAT32, 7.9G, 4k clusters)
Content: The Start Menu subtree of shortcuts
479 files, 63 folders, 273k
Average file size: 0.5k
280 362 bytes of data
1 945 600 bytes of space occupied
85% capacity wasted in slack space

...so whereas a "one big C:" layout would undoubtedly be wasteful of
space, this is a trivial issue when storing only large files.
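For anyone who wants to reproduce these figures on their own tree, here's a
small sketch that does the same arithmetic - round each file up to a whole
number of clusters and compare with the actual data bytes:

```python
import os

def slack_report(root, cluster_bytes=32 * 1024):
    """Walk a directory tree and compare bytes of data with bytes of
    space occupied, rounding each file up to whole clusters."""
    data = occupied = files = 0
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            size = os.path.getsize(os.path.join(dirpath, name))
            data += size
            # ceiling division: round up to a whole number of clusters
            occupied += -(-size // cluster_bytes) * cluster_bytes
            files += 1
    waste = 100.0 * (occupied - data) / occupied if occupied else 0.0
    return files, data, occupied, waste
```

Run it over a video folder and an MP3 folder with cluster_bytes=32*1024 and
you should see the same pattern as the figures above: fractions of a
percent wasted for big files, and serious waste only for tiny ones.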

NTFS would be very space- and speed-efficient for that Start Menu, not
because of the smaller clusters (in fact, they'd be the same size as
FAT32's on this sub-8G volume anyway) but because the tiny file data
would be held entirely within the directory entry metadata.

That means that while FAT32 has to pull up 1 + 63 + 479 data clusters to
handle the menu (2M for only 273K of data), NTFS would have it all in
those 1 + 63 directories.

Not that those directories would themselves be as small as they would
be in FAT32 - after all they contain the file data here :)


 
DILIP

cquirke (MVP Win9x) said:
Don't get mesmerized by cluster size - it's not always relevant, and
in the circumstances discussed here, largely irrelevant.

The jpegs will affect the equation here. The average size of a picture,
depending on its source, may be anywhere between 150KB for web photos and
2-3MB for those downloaded from a digital camera. If we consider the former,
it would not be incorrect to assume that the wastage in such a scenario,
with many small files, would be considerable - as suggested by your figures
for the Start Menu shortcuts.
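A back-of-envelope sketch of that point: on average each file wastes about
half a cluster, so the relative waste depends on how big the files are
compared to the cluster. The three sizes below are assumptions (web jpeg,
camera jpeg, video file):

```python
# Expected slack for many files of a given average size, assuming each
# file wastes half a cluster on average.

CLUSTER = 32 * 1024     # 32 KiB clusters, as in the figures above

for avg_size in (150 * 1024, 2 * 2**20, 96 * 2**20):
    waste = (CLUSTER / 2) / (avg_size + CLUSTER / 2) * 100
    print(f"avg file {avg_size / 2**10:7.0f} KiB -> ~{waste:.2f}% wasted")
```

That gives roughly 10% waste for 150KB web photos but well under 1% for
camera jpegs and video - consistent with both sets of figures in this
thread.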

Other considerations of the original poster were "resilience, recovery, ease
of repair".

resilience --- As far as FAT partitions go, don't forget to wave goodbye to
all your data and the OS when the disk becomes completely full. This has
happened to me once already. In such situations it becomes tricky to save
the OS, and depending on the seriousness of the crash, the possibility of
data loss is considerable. Of course, this is just one example.

recovery and ease of repair --- NTFS file systems don't need a scandisk
add-on, simply because they don't need one. Data verification embedded in
the file system ensures that data is only written to the disk when it can
be verified; if it cannot be read back, the transaction is simply rolled
back. Which brings me to this: in an NTFS file system, a transaction is
either performed completely or not performed at all. It's 1 or 0. Consider
a power outage occurring during a defrag of a FAT drive. The possibility of
errors on the FAT partition afterwards is high. However, NTFS can easily
recover from such a situation thanks to the transaction log metafile that
it reads at the next boot. Metafiles start with $, such as $LogFile,
$Boot, etc.
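The transaction-log idea can be sketched in a few lines. This is a toy
write-ahead log, not the real $LogFile format - just the
"log the intent, apply, then commit; roll back anything uncommitted at the
next boot" shape:

```python
# Toy write-ahead log: every update is logged before it is applied, so
# a recovery pass after a crash can roll back any update that never
# reached its commit record.

class JournaledStore:
    def __init__(self):
        self.data = {}          # the "on-disk" state
        self.log = []           # the journal

    def write(self, key, value):
        old = self.data.get(key)
        self.log.append(("begin", key, old, value))
        self.data[key] = value              # a crash here leaves torn state...
        self.log.append(("commit", key))    # ...until this record lands

    def recover(self):
        """Roll back every update that has no commit record."""
        committed = {rec[1] for rec in self.log if rec[0] == "commit"}
        for rec in self.log:
            if rec[0] == "begin" and rec[1] not in committed:
                key, old = rec[1], rec[2]
                if old is None:
                    self.data.pop(key, None)    # undo an insert
                else:
                    self.data[key] = old        # undo an update
        self.log.clear()
```

Simulate a crash by applying a write without its commit record, call
recover(), and the half-finished change disappears - which is the "1 or 0"
behaviour described above.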

This is why I feel that NTFS is the way to go for large partitions on
NT-based systems, as in this scenario. Please note that the user does not
mention any dual-boot scenario here.

One more thing: you mention that "the tiny file data would be held entirely
within the directory entry metadata." Wouldn't this be stored in the MFT?
Metafiles as such (as I have read, anyway) refer to the sixteen files
created when an NTFS partition is first formatted, and contain volume and
cluster information; strictly speaking, they are unavailable to the OS.
Under NTFS, there is no specific difference between a file and a collection
of attributes, including the data contained in the file itself. When the
space required for all the attributes is smaller than the MFT record
itself, the data attribute is stored within the MFT record. Please clarify
what distinction you draw between the MFT and metadata.

Cheers

I usually don't quote, but as Will Durant once said, "Education is a
progressive discovery of our ignorance."
 
