First, thanks to all (Alex, Rick, Ken, cquirke, etc.) for your input. I now have a better understanding of this issue.
I will now seriously think about converting from FAT32 -> NTFS just to get rid of the 32k clusters on my 20gb drives.
Of course, after the nerve-racking problems I had when I "updated" from Win98SE (which NEVER crashed on me) to WinXP, which crashed every 1 or 2 weeks for months, I'll have to take large doses of Valium IF I do convert to NTFS. FYI, WinXP has not crashed on me for 6 months. OOPS, I just jinxed myself.
----- cquirke (MVP Win9x) wrote: ----
On Thu, 1 Apr 2004 08:51:12 -0800, "Tecknomage"
And read up at www.aumha.org/win5/a/ntfscvt.htm to avoid getting landed
with 512 byte clusters in the result.
What's wrong with 512 byte clusters? This would be more efficient usage of
HD space, especially for all the 400 byte shortcut files.
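To put rough numbers on the slack involved, here's a quick Python back-of-the-envelope (the 10,000-file count is just an assumption for illustration):

import math

FILE_SIZE = 400          # bytes, roughly a desktop .lnk shortcut
NUM_FILES = 10_000       # assumed count of small files, for illustration only

for cluster in (512, 4096, 32768):
    clusters_used = math.ceil(FILE_SIZE / cluster)
    slack = clusters_used * cluster - FILE_SIZE
    total_mb = slack * NUM_FILES / 2**20
    print(f"{cluster:>6} byte clusters: {slack:>5} bytes slack per file, "
          f"~{total_mb:.1f} MB lost over {NUM_FILES:,} files")

At 32k clusters that 400 byte shortcut drags along ~32,368 bytes of slack; at 512 byte clusters it's 112 bytes.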
There's a tendency to get blinded by the cluster slack space issue;
it's only one of a number of factors that can contribute to file
system fragility or sluggishness.
The smaller the cluster size, the more clusters in each data chain, the
more the fragmentation, and the higher the fragility. Disk
maintenance (ChkDsk, Defrag) takes longer.
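A quick sketch of that chain-length point, assuming a single 100 MB file (the size is arbitrary):

FILE_SIZE = 100 * 2**20   # assume a 100 MB file, purely for the arithmetic

for cluster in (512, 4096, 32768):
    links = -(-FILE_SIZE // cluster)   # ceiling division
    print(f"{cluster:>6} byte clusters: {links:>7,} clusters in the chain")

That's 204,800 links to walk (and potential fragment boundaries) at 512 bytes versus 3,200 at 32k.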
The file allocation table, or NTFS's equivalent structures, becomes larger
and requires a larger memory footprint - especially for FATxx, where
the whole FAT has to be held in memory, AFAIK unlike NTFS.
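And the table itself scales the same way. A rough FAT32 estimate for a 20 GB volume, at 4 bytes per table entry (numbers are only illustrative):

VOLUME = 20 * 2**30   # the 20 GB logical drive from the post
ENTRY = 4             # bytes per FAT32 table entry

for cluster in (512, 4096, 32768):
    clusters = VOLUME // cluster
    fat_mb = clusters * ENTRY / 2**20
    print(f"{cluster:>6} byte clusters: {clusters:>10,} entries, "
          f"FAT roughly {fat_mb:,.1f} MB")

Roughly 2.5 MB of FAT at 32k clusters, versus about 160 MB at 512 byte clusters - which is exactly the doubling-per-halving effect described below.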
As it is, NTFS can store the data of small files within the directory
entry metadata, and that helps a lot with that.
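For what it's worth, a toy model of that resident-data idea: the 1 KB MFT record size is typical, but the 350 byte overhead figure is only my guess - the real threshold depends on the record's other attributes:

MFT_RECORD = 1024   # typical NTFS MFT record size
OVERHEAD = 350      # rough guess at header + other attribute space

def fits_resident(size_bytes):
    # Toy rule: data stays "resident" if it fits in the leftover record space.
    return size_bytes <= MFT_RECORD - OVERHEAD

for size in (400, 700, 4000):
    place = ("inside the MFT record (zero slack)" if fits_resident(size)
             else "out in ordinary clusters")
    print(f"{size:>5} byte file -> {place}")

So the 400 byte shortcut never touches a cluster at all, slack or otherwise.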
Finally, the processor's natural paging size is 4k. With large
clusters, there's baggage to be hauled and discarded with each page;
with smaller clusters, there may be head travel required to gather the
fragments of a single page. So 4k is nice there.
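A tiny arithmetic sketch of that alignment point (purely illustrative):

PAGE = 4096   # x86 page size

for cluster in (512, 4096, 32768):
    if cluster < PAGE:
        print(f"{cluster:>6} byte clusters: one 4k page spans "
              f"{PAGE // cluster} clusters, each of which may sit elsewhere on disk")
    elif cluster == PAGE:
        print(f"{cluster:>6} byte clusters: one page == one cluster, a neat fit")
    else:
        print(f"{cluster:>6} byte clusters: each cluster covers "
              f"{cluster // PAGE} pages' worth of data")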
I don't like having large clusters on my 20gb logical drives. As far as
I'm concerned this results in way too much wasted space. I would
prefer 512 or 1024 byte clusters.
Careful what you wish for - all cluster lookup table structures double
in size whenever the cluster size is halved, so you may find you lose
much of the gains that way.
Am I misunderstanding NTFS? Does NTFS handle clusters
differently than FAT, that is, restricting file allocation to WHOLE
multiples of cluster size?
Unless you are using disk compression (as opposed to file compression),
I think you will always have slack space wastage. NTFS avoids this
when the file's data is contained within the directory entry itself.
What I'm not sure of is how NTFS handles cluster chain lookup. It may
not have the equivalent of a FAT (not every file system does, though
some do). AFAIK it stores pointers to "data runs", i.e. contiguous
slabs of data; each pointer points to the start of the run, and
there'd be no reason to hold "this comes after last" info for the
remaining clusters in that run.
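Roughly, the two bookkeeping styles look like this (cluster numbers made up; this is only a sketch of the idea, not NTFS's actual on-disk encoding):

# FAT style: one "next cluster" entry per cluster, walked link by link.
fat_chain = {100: 101, 101: 102, 102: 103, 103: None}   # 4-cluster file

# Run-list style: each entry is (first cluster, length), so a contiguous
# slab costs one entry no matter how long it is.
contiguous = [(100, 4)]              # same 4 clusters, one run
fragmented = [(100, 2), (500, 2)]    # same file split into two runs

def clusters_from_runs(runs):
    # Expand a run list back into individual cluster numbers.
    return [start + i for start, length in runs for i in range(length)]

print(clusters_from_runs(contiguous))   # [100, 101, 102, 103]
print(clusters_from_runs(fragmented))   # [100, 101, 500, 501]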
On the face of it, that means no FAT required - but I'm not sure how
NTFS manages free space, which requires a fast overview of what
locations are allocated to know what is safe to allocate and write to.
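AFAIK NTFS keeps that overview in its $Bitmap metafile - one bit per cluster - rather than in a per-cluster chain table. A toy first-fit allocator over such a bitmap (my simplification, not how NTFS actually searches):

class ClusterBitmap:
    """One bit per cluster; a set bit means the cluster is in use."""
    def __init__(self, total_clusters):
        self.total = total_clusters
        self.bits = bytearray(-(-total_clusters // 8))

    def _mark(self, n):
        self.bits[n // 8] |= 1 << (n % 8)

    def _used(self, n):
        return self.bits[n // 8] >> (n % 8) & 1

    def allocate(self, count):
        # First-fit: find the first run of `count` free clusters and mark it used.
        run_start, run_len = 0, 0
        for n in range(self.total):
            if self._used(n):
                run_start, run_len = n + 1, 0
            else:
                run_len += 1
                if run_len == count:
                    for c in range(run_start, run_start + count):
                        self._mark(c)
                    return run_start
        raise OSError("no free run of that size")

bm = ClusterBitmap(1000)
print(bm.allocate(8))   # 0  - first free run found
print(bm.allocate(8))   # 8  - next run starts after the first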
If there's no need to maintain a global lookup table of each cluster,
then that strips much of the downside from small clusters on large
volumes - as suggested by 4k clusters right up to 120G. Even so, I
don't think I'd want clusters smaller than 4k, especially if file
system structures such as directories etc. are sector-based anyway.
I must read the docs again :-)
-------------------- ----- ---- --- -- - - -
Running Windows-based av to kill active malware is like striking
a match to see if what you are standing in is water or petrol