Converting FAT32 to NTFS

  • Thread starter: Rick Hanson

Rick Hanson

If you make an image of your FAT32 C: partition and then convert the partition
to NTFS, can the image be restored without conflicts?
Thanks
 
Typically, imaging utilities don't just capture the files; they capture the
structure of the disk as well, which means that reapplying the image to the
disk would restore the filesystem to FAT32 in the scenario you describe
below.

You can convert a disk from FAT32 to NTFS without losing files. Search for
the CONVERT command (run as `convert C: /FS:NTFS` from a command prompt) in
Help and Support for more details.
 
----- Alex Nichol wrote: -----

Mike Kolitz wrote:
the CONVERT command in Help and Support for more details

And read up at www.aumha.org/win5/a/ntfscvt.htm to avoid getting landed
with 512 byte clusters in the result

======

What's wrong with 512-byte clusters? This would be more efficient usage of HD space, especially for all the 400-byte shortcut files. I don't like having large clusters on my 20GB logical drives. As far as I'm concerned this results in way too much wasted space. I would prefer 512 or 1024 byte clusters.

Am I misunderstanding NTFS? Does NTFS handle clusters differently than FAT, that is, restricting file allocation to WHOLE multiples of cluster size?
 
Tecknomage said:
----- Alex Nichol wrote: -----
And read up at www.aumha.org/win5/a/ntfscvt.htm to avoid getting
landed with 512 byte clusters in the result

=======

What's wrong with 512-byte clusters? This would be more efficient usage
of HD space, especially for all the 400-byte shortcut files. I don't
like having large clusters on my 20GB logical drives. As far as I'm
concerned this results in way too much wasted space. I would prefer
512 or 1024 byte clusters.


You are correct that the smaller the cluster size, the more
efficiently disk space is used. But it's also true that smaller
clusters mean more clusters, more clusters mean more I/O
accesses, and therefore poorer disk performance.

Particularly in these days of very cheap hard drives, worrying
about the difference in disk utilization between 4K clusters and
512-byte clusters is *extremely* counterproductive. Think of the
waste due to slack in dollars (substitute your local currency, if
not dollars), not megabytes; it's insignificant, only pennies.
The performance impact is far greater than the dollar impact.

For example, you can buy an 80GB drive these days for around $80
US. Each file wastes about half a cluster to slack. If you have
100,000 files on the drive, with 4K clusters you waste around
200MB to slack, and with 512-byte clusters only 25MB. That 200MB
is 20-cents-worth of disk space and 25MB, about 2-cents-worth. Is
it worth hurting performance to save 18 cents-worth of disk
space?
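Ken's arithmetic above can be sketched in a few lines of Python (the file count and drive price are the figures from his example; this uses binary megabytes, so the results land slightly below his round 200MB/25MB figures):

```python
# Rough slack estimate: on average each file wastes about half a
# cluster, so total slack grows linearly with cluster size.
def slack_mb(num_files, cluster_bytes):
    return num_files * (cluster_bytes / 2) / 2**20

files = 100_000
price_per_gb = 80 / 80  # $80 for an 80GB drive -> about $1/GB

for cluster in (4096, 512):
    waste = slack_mb(files, cluster)
    cost = waste / 1024 * price_per_gb
    print(f"{cluster:>4}-byte clusters: ~{waste:.0f} MB slack (~${cost:.2f})")
```

Running it shows roughly 195 MB of slack (about 19 cents) at 4K clusters versus roughly 24 MB (about 2 cents) at 512 bytes.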
 
----- Alex Nichol wrote: -----

And read up at www.aumha.org/win5/a/ntfscvt.htm to avoid getting landed
with 512 byte clusters in the result

=======

What's wrong with 512-byte clusters? This would be more efficient usage of HD space, especially for all the 400-byte shortcut files. I don't like having large clusters on my 20GB logical drives. As far as I'm concerned this results in way too much wasted space. I would prefer 512 or 1024 byte clusters.

Am I misunderstanding NTFS? Does NTFS handle clusters differently than FAT, that is, restricting file allocation to WHOLE multiples of cluster size?

Did you read the article Alex referred you to?
 
On Thu, 1 Apr 2004 08:51:12 -0800, "Tecknomage" wrote:
And read up at www.aumha.org/win5/a/ntfscvt.htm to avoid getting landed
with 512 byte clusters in the result
What's wrong with 512 clusters? This would be more efficient usage of HD space,
especially for all the 400 byte shortcut files.

There's a tendency to get blinded by the cluster slack space issue;
it's only one of a number of factors that can contribute to file
system fragility or sluggishness.

The smaller the cluster size, the more clusters in each data chain, the
more the fragmentation, and the higher the fragility. Disk
maintenance (ChkDsk, Defrag) takes longer.

The file allocation table (or NTFS's equivalent structures) becomes larger
and requires a larger memory footprint - especially for FATxx, where
the whole FAT has to be held in memory, AFAIK unlike NTFS.

As it is, NTFS can store the data of small files within the directory
entry metadata, and that helps a lot here.

Finally, the processor's natural paging size is 4k. With larger
clusters, there's baggage to be hauled and discarded with each page;
with smaller clusters, there may be head travel required to gather the
fragments of a single page. So 4k is nice there.
I don't like having large clusters on my 20GB logical drives. As far as
I'm concerned this results in way too much wasted space. I would
prefer 512 or 1024 byte clusters.

Careful what you wish for - all cluster lookup table structures double
in size whenever the cluster size is halved, so you may find you lose
much of the gains that way.
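The doubling is easy to see for FAT32, where the table holds one 4-byte entry per cluster. A sketch of that arithmetic (the 20GB volume is the poster's; the 4-byte entry size is FAT32's):

```python
# FAT32 keeps one 4-byte entry per cluster, so the table size is
# inversely proportional to cluster size: halve one, double the other.
def fat32_table_mb(volume_bytes, cluster_bytes):
    clusters = volume_bytes // cluster_bytes
    return clusters * 4 / 2**20

vol = 20 * 2**30  # a 20GB logical drive
for cluster in (32768, 16384, 4096, 1024, 512):
    print(f"{cluster:>5}-byte clusters -> {fat32_table_mb(vol, cluster):6.1f} MB FAT")
```

At 32K clusters the FAT is 2.5 MB; at 512-byte clusters it balloons to 160 MB, which is what has to be consulted (and, for FATxx, largely held in memory) on every allocation.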
Am I misunderstanding NTFS? Does NTFS handle clusters
differently than FAT, that is, restricting file allocation to WHOLE
multiples of cluster size?

Unless you are using disk compression (as opposed to file compression)
I think you will always have slack space wastage. NTFS avoids this
when the file's data is contained within the directory entry itself.

What I'm not sure is how NTFS handles cluster chain lookup. It may
not have the equivalent of a FAT (not every file system does, though
some do). AFAIK it stores pointers to "data runs", i.e. contiguous
slabs of data; each pointer points to the start of the run, and
there'd be no reason to hold "this comes after last" info for the
remaining clusters in that run.
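The data-run idea described above can be sketched like this (a hypothetical helper to show the principle, not NTFS's actual on-disk encoding):

```python
# Collapse an ordered list of cluster numbers into (start, length)
# runs: one entry per contiguous extent instead of one per cluster.
def to_runs(clusters):
    runs = []
    for c in clusters:
        if runs and c == runs[-1][0] + runs[-1][1]:
            start, length = runs[-1]
            runs[-1] = (start, length + 1)  # extend the current run
        else:
            runs.append((c, 1))            # start a new run
    return runs

# A file stored in two fragments: clusters 100-103, then 200-201.
print(to_runs([100, 101, 102, 103, 200, 201]))  # -> [(100, 4), (200, 2)]
```

The point is that an unfragmented file costs one run entry no matter how many clusters it spans, so per-cluster "this comes after that" bookkeeping disappears.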

On the face of it, that means no FAT required - but I'm not sure how
NTFS manages free space, which requires a fast overview of what
locations are allocated to know what is safe to allocate and write to.
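For what it's worth, NTFS does keep a dedicated free-space structure: a $Bitmap metadata file with one allocation bit per cluster. That stays compact even with small clusters, as a quick calculation shows (the 20GB volume is the poster's):

```python
# One allocation bit per cluster: the bitmap is tiny compared with a
# per-cluster lookup table, even at 512-byte clusters.
def bitmap_kb(volume_bytes, cluster_bytes):
    clusters = volume_bytes // cluster_bytes
    return clusters / 8 / 1024

vol = 20 * 2**30  # a 20GB logical drive
for cluster in (4096, 512):
    print(f"{cluster}-byte clusters -> {bitmap_kb(vol, cluster):.0f} KB bitmap")
```

A 20GB volume needs only 640 KB of bitmap at 4K clusters, and 5 MB even at 512 bytes, so free-space tracking isn't the thing that punishes small clusters.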

If there's no need to maintain a global lookup table of each cluster,
then that strips much of the downside from small clusters on large
volumes - as suggested by 4k clusters right up to 120G. Even so, I
don't think I'd want clusters smaller than 4k, especially if file
system structures such as directories etc. are sector-based anyway.

I must read the docs again :-)


-------------------- ----- ---- --- -- - - - -
Running Windows-based av to kill active malware is like striking
a match to see if what you are standing in is water or petrol.
 
Tecknomage said:
What's wrong with 512-byte clusters? This would be more efficient usage of HD space, especially for all the 400-byte shortcut files. I don't like having large clusters on my 20GB logical drives. As far as I'm concerned this results in way too much wasted space. I would prefer 512 or 1024 byte clusters.


It is grossly subject to fragmentation, and has a bigger overhead in
managing the larger number of clusters. But the major reason is that in
XP a 4K cluster is optimal for things like program loading and virtual
memory management, because it matches the 4K internal page used by the
Intel 386 protected mode (and later) CPU architecture. This means that
transfers into memory can go direct to the required page without needing
to buffer the reads and writes.
 
First, thanks to all (Alex, Rick, Ken, cquirke, etc.) for your input. I now have a better understanding of this issue.

I will now seriously think about converting from FAT32 to NTFS just to get rid of the 32K clusters on my 20GB drives.

Of course, after the nerve-racking problems I had when I "updated" from Win98SE (which NEVER crashed on me) to WinXP, which crashed every 1 or 2 weeks for months, I'll have to take large doses of Valium IF I do convert to NTFS. FYI, WinXP has not crashed on me for 6 months. Oops, I just jinxed myself.


 
Tecknomage said:
First, thanks to all (Alex, Rick, Ken, cquirke, etc.) for your input.
I now have a better understanding of this issue.


You're welcome.

I now will seriously think about converting from FAT32 -> NTFS just
to get rid of the 32K clusters on my 20GB drives.


Are you sure that's what you have? The default cluster size for a
20GB FAT32 partition is 16K.
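Ken's 16K figure comes from the FAT32 format defaults. A sketch of that lookup, recalled from Microsoft's published defaults (treat the exact break points as an assumption and verify before relying on them):

```python
# Default FAT32 cluster size by volume size (upper bound in GB).
# Break points are from memory of Microsoft's documentation.
FAT32_DEFAULTS = [
    (8, 4 * 1024),    # up to 8 GB -> 4K clusters
    (16, 8 * 1024),   # 8-16 GB    -> 8K
    (32, 16 * 1024),  # 16-32 GB   -> 16K
]

def default_cluster(volume_gb):
    for limit_gb, cluster in FAT32_DEFAULTS:
        if volume_gb <= limit_gb:
            return cluster
    return 32 * 1024  # above 32 GB -> 32K

print(default_cluster(20))  # -> 16384, matching Ken's 16K figure
```

So a 20GB FAT32 volume formatted with the defaults should have 16K clusters; 32K clusters on a 20GB volume suggest it was formatted with a non-default size or by an older tool.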
 
Terry said:
Help - I'm trying to convert my FAT32 drive to NTFS using the CONVERT command, but it asks me for the current label, and when I respond it tells me that's the wrong label!

You see it on the first line ('Volume in drive . . is') if you go to a
command prompt and give VOL C:

But first, please go read my page www.aumha.org/win5/a/ntfscvt.htm or you
are liable to end up with the drive in 512 byte clusters, which is *not*
a good idea.
 