Setting to NTFS with data

  • Thread starter: Mark G.

Mark G.

I have an external drive that I recently started using and forgot to set it
to NTFS, so I just left the default of FAT32. It already has about 150 GB of
data on it. I am wondering, is there any way to change it over to NTFS
without wiping the data on it? Maybe using something like the old Partition
Magic or the like? Thanks much.
 
Mark G. said:
I have an external drive that I recently started using and forgot to set it
to NTFS, so I just left the default of FAT32. It already has about 150 GB of
data on it. I am wondering, is there any way to change it over to NTFS
without wiping the data on it? Maybe using something like the old Partition
Magic or the like? Thanks much.

No need to use third-party tools - there is always the inbuilt "convert"
command:
1. Click Start / run / cmd{OK}
2. Type this command, replacing X: with your drive's letter:
convert X: /FS:NTFS{Enter}
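
If you want to double-check which file system the volume is using before
(and after) the conversion, here is a small Python sketch - my own
illustration, not part of the steps above; it assumes Python on Windows and
calls the standard GetVolumeInformationW API through ctypes:

import ctypes

def filesystem_of(drive_letter):
    """Return the file system name (e.g. 'FAT32' or 'NTFS') of a volume."""
    fs_name = ctypes.create_unicode_buffer(32)
    ok = ctypes.windll.kernel32.GetVolumeInformationW(
        ctypes.c_wchar_p(drive_letter + ":\\"),  # root path, e.g. "X:\\"
        None, 0,                                 # volume label not needed
        None, None, None,                        # serial, component length, flags not needed
        fs_name, len(fs_name))
    return fs_name.value if ok else None

print(filesystem_of("X"))   # "FAT32" before the conversion, "NTFS" afterwards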
 
While that will work, you need to keep in mind that any volume that is
converted from FAT to NTFS is going to have a cluster size of 512kb
(instead of the default of 4kb). This is going to mean an increase in slack
space (space wasted due to rounding to the cluster boundary) and it can
decrease the overall performance of i/o as we have to piece together more
clusters when doing read/write to the disk. And it is going to mean you
have a limitation in how big the volume can grow.
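
To make "slack space" concrete, here is a quick Python sketch of the
arithmetic (purely illustrative, with a made-up file size): every file is
rounded up to a whole number of clusters, and the rounded-up remainder is
wasted.

import math

def slack(file_size, cluster_size):
    """Bytes wasted when file_size is rounded up to whole clusters."""
    clusters = math.ceil(file_size / cluster_size)
    return clusters * cluster_size - file_size

# Example: a 10,000-byte file
for cs in (512, 4096):
    print(f"cluster size {cs:>5}: {slack(10_000, cs)} bytes of slack")
# cluster size   512: 240 bytes of slack
# cluster size  4096: 2288 bytes of slack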

I know it is more of a hassle, but backing up, reformatting, and restoring
is really the way to go. The convenience that doing a convert buys you
just isn't worth the cost.

NOTE: Cluster size isn't something you can change on the fly. It is set at
the time of format.

Robert Mitchell
Microsoft PSS
 
"Robert Mitchell [MSFT]" said:
While that will work, you need to keep in mind that any volume that is
converted from FAT to NTFS is going to have a cluster size of 512kb
(instead of the default of 4kb). This is going to mean an increase in slack
space (space wasted due to rounding to the cluster boundary) and it can
decrease the overall performance of i/o as we have to piece together more
clusters when doing read/write to the disk.
<snip>
I'm a bit undecided on this issue. Having larger clusters could mean more
fragmentation but it also means that fewer clusters need to be read for a
given file size. Are you aware of any authoritative papers issued by
Microsoft?
 
While that will work, you need to keep in mind that any volume that is
converted from FAT to NTFS is going to have a cluster size of 512kb
(instead of the default of 4kb).


Sorry, that's not correct. Although the 512-byte cluster size is a common
result, it doesn't always occur.

This is going to mean an increase in slack
space (space wasted due to rounding to the cluster boundary)


Sorry, that's also not correct. The normal default cluster size is 4K,
and a smaller cluster size means *less* slack space. The average file
wastes approximately half the cluster size to slack.
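
You can see that "roughly half a cluster per file" figure with a tiny Python
sketch (an illustration only, using random made-up file sizes rather than
any real volume):

import random

def avg_slack(cluster_size, samples=100_000):
    """Average bytes wasted per file, assuming uniformly random file sizes."""
    total = 0
    for _ in range(samples):
        size = random.randint(1, 1_000_000)   # a made-up file size
        total += -size % cluster_size         # bytes wasted in the last cluster
    return total / samples

for cs in (512, 4096):
    print(f"cluster size {cs:>4}: average slack ~{avg_slack(cs):.0f} bytes")
# the average comes out close to cluster_size / 2 in each case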

and it can
decrease the overall performance of i/o

Yes.


as we have to piece together more
clusters when doing read/write to the disk. And it is going to mean you
have a limitation in how big the volume can grow.

I know it is more of a hassle, but backing up, reformatting, and restoring
is really the way to go.



And that's not correct either. The potential for ensuring that the
cluster size turns out to be the default 4K *is* there. Read
http://www.aumha.org/a/ntfscvt.htm for information on this.
 
Actually, a smaller cluster size will give you a greater potential for
fragmentation as time passes, depending on the file sizes and the changes
made to the files. When files are removed, it opens up tiny holes that can
be used later by newer files... increasing fragmentation. But that sort of
fragmentation can be kept under control with the proper use of Defrag and
Contig, so a good deal of space lost to fragmentation can be reclaimed. But
reduced i/o due to a small cluster size is just something you end up living
with... and it is going to affect all your i/o... unless your files are all
tiny and completely resident.
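
Here is a toy sketch of that "tiny holes" effect - a made-up first-fit
allocator, nothing like NTFS's real allocation logic - showing how deleting
small files leaves small free runs that a later, larger file gets split
across:

# Toy model: the disk is a list of cluster states, allocated first-fit.
FREE, USED = 0, 1
disk = [USED] * 100                      # a tiny, completely full "disk"

# Deleting a few small files leaves scattered two-cluster holes.
for start in (10, 30, 55, 80):
    disk[start] = disk[start + 1] = FREE

def allocate(n):
    """First-fit: take the first n free clusters, wherever they happen to be."""
    extents, in_run, run_start, run_len, got = [], False, 0, 0, 0
    for i, state in enumerate(disk):
        if state == FREE and got < n:
            disk[i] = USED
            got += 1
            if not in_run:
                in_run, run_start, run_len = True, i, 0
            run_len += 1
        elif in_run:
            extents.append((run_start, run_len))
            in_run = False
    if in_run:
        extents.append((run_start, run_len))
    return extents

# A new 8-cluster file ends up pieced together from four separate holes.
print(allocate(8))   # [(10, 2), (30, 2), (55, 2), (80, 2)]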

If you are really curious about how we store files in NTFS, I wrote a series
of blog entries for Technet that you might enjoy.

http://blogs.technet.com/askcore/archive/2008/12/26/the-four-stages-of-file-growth-part-1.aspx
http://blogs.technet.com/askcore/archive/2008/12/29/the-four-stages-of-file-growth-part-2.aspx
http://blogs.technet.com/askcore/archive/2009/01/06/the-four-stages-of-file-growth-part-3.aspx
http://blogs.technet.com/askcore/archive/2009/01/07/the-four-stages-of-file-growth-part-4.aspx

But back to the subject: how you set your cluster size really depends on
what your needs are. If you have all very large files and are more concerned
with performance (like on a SQL system), then a larger cluster size is your
best bet. But if you are limited in hard drive space and most of your files
are very small, then a smaller cluster size will fit your needs better.
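
As a back-of-the-envelope way to see that trade-off, here is a Python sketch
with purely illustrative numbers (not a benchmark), comparing total slack
and total cluster count for a small-file workload versus a large-file
workload at different cluster sizes:

def totals(file_sizes, cluster_size):
    """Total slack bytes and total clusters needed for a set of files."""
    slack = sum(-size % cluster_size for size in file_sizes)
    clusters = sum((size + cluster_size - 1) // cluster_size for size in file_sizes)
    return slack, clusters

small_files = [2_000] * 10_000            # 10,000 tiny files (~2 KB each)
large_files = [500_000_000] * 40          # 40 big files (~500 MB each)

for name, files in (("small files", small_files), ("large files", large_files)):
    for cs in (512, 4096, 65536):
        slack, clusters = totals(files, cs)
        print(f"{name}, {cs:>5}-byte clusters: "
              f"{slack / 1_048_576:8.1f} MB slack, {clusters:>9} clusters")
# Tiny files waste a lot of space with 64 KB clusters; huge files barely care,
# but need far fewer clusters (and extents) with the bigger cluster size.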

Mostly we leave the recommendation of cluster size to the application that
you are using.

http://msdn.microsoft.com/en-us/library/aa178406(SQL.80).aspx

...as an example.

Robert Mitchell
Microsoft PSS
 
"Robert Mitchell [MSFT]" said:
Actually, a smaller cluster size will give you a greater potential for
fragmentation as time passes, depending on the file sizes and the changes
made to the files.

In your first reply you stated the opposite: "we have to piece together more
clusters when doing read/write to the disk".
When files are removed, it opens up tiny holes that can be used later by
newer files... increasing fragmentation. But that sort of fragmentation can
be kept under control with the proper use of Defrag and Contig, so a good
deal of space lost to fragmentation can be reclaimed.

Hang on - you don't "lose" any space due to file fragmentation! A contiguous
file consumes exactly the same amount of disk space as the same file
split into a dozen fragments.
But reduced i/o due to a small cluster size is just something you end up
living with...

This is getting contradictory. In your first reply you implied that
***larger*** clusters will slow down disk access. Now it's the
opposite . . .

<snip>
 
As far as I know, that is correct. Please see
http://support.microsoft.com/kb/140365

"When you are using the Convert.exe utility to convert to NTFS, Windows
always uses a 512-byte cluster size."

Now this program you are suggesting might actually work around this. I have
to admit that I'm curious about it, so I'll probably give it a try and use
Disk Probe to see what it is really changing. Thanks for bringing it to my
attention.

Even so, this isn't something that a simple convert by itself would
accomplish.

Robert Mitchell
Microsoft PSS
 
"Robert Mitchell [MSFT]" said:
As far as I know, that is correct. Please see
http://support.microsoft.com/kb/140365

"When you are using the Convert.exe utility to convert to NTFS, Windows
always uses a 512-byte cluster size."

In your first reply you wrote, and I quote "is going to have a cluster size
of 512kb". Did you perhaps mean "512 bytes"? This would explain a lot of the
confusion!
 
Yes! My fast fingers got the better of me. I didn't even notice that I had
typed 'kb' instead of just 'b'. That would explain the confusion.
However, I did want to address something else you brought up.
Hang on - you don't "lose" any space due to file fragmentation! A contiguous
file consumes exactly the same amount of disk space as the same file
split into a dozen fragments.

That's both correct and incorrect, depending on how you look at it.

When we think about a file, what we really are focused on is the file's
$DATA:"" stream. That's what we see. That's what we edit. And a file
that is listed as 100kb has a $DATA:"" stream of exactly the same size whether
it is stored contiguously or not. And that's where you have it right.

But a file is more than its $DATA attribute. A file includes all the
structures used to define itself. So, as fragmentation grows, the overhead
for storing the $DATA attribute greatly increases. With a truly contiguous
file, we would just need a single mapping pair (a virtual cluster number
and a length) to show us where the file was. As fragmentation increases,
we add additional mapping pairs to the file that track the location of all
the parts of the file.
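
A rough way to picture those mapping pairs - this is just a sketch of the
bookkeeping, not NTFS's on-disk run-list format - is that each extent of the
file needs its own (starting cluster, length) entry, so a fragmented file
carries more metadata than a contiguous file of the same size:

# Each extent is a (starting_cluster, run_length_in_clusters) pair.
contiguous_file = [(1_000, 256)]                       # 256 clusters in one run
fragmented_file = [(1_000, 32), (5_120, 32), (9_000, 64),
                   (14_000, 16), (20_480, 112)]        # same 256 clusters, 5 runs

def data_clusters(extents):
    return sum(length for _, length in extents)

# The $DATA payload is the same size either way...
assert data_clusters(contiguous_file) == data_clusters(fragmented_file) == 256

# ...but the fragmented file needs five mapping entries instead of one, and a
# badly fragmented file can need so many that they spill into extra structures.
print(len(contiguous_file), "mapping pair vs", len(fragmented_file), "mapping pairs")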

See my blog entries to find out just how complex that can get as we add
additional structures to track all the parts of the file.

Also keep in mind that when the shell displays file information, it is
really an estimate based on the $DATA attributes and doesn't take into
account any of the storage overhead.

So yes, you do lose space due to fragmentation. And the worse it is, the
more you lose.

Robert Mitchell
Microsoft PSS
 