Best cluster size for NTFS partition

Alex Coleman

By default WinXP formats NTFS with a 4K cluster size, but what is
the best cluster size for my situation:

I have a 60 GB NTFS partition which I use mainly for storing
downloads (software and audio). It will be used by WinXP.

What would the best NTFS cluster size be if this was a 160 GB
partition filled mainly with 200K jpegs and some 10 MB movie clips?

-------

I suspect that 4K might be the best for my 60 GB and 160 GB partitions
because it saves space. But I don't know if there are overheads in
the MFT and other metadata when the NTFS partition gets to 160 GB.

I also read that third-party defrag utilities (like Diskeeper and
PerfectDisk) will not work on NTFS clusters above a certain size. Is
this true? What is the biggest cluster size I can have if I want to
defrag an NTFS partition?
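
For a rough feel for what each option would waste, here is a quick
back-of-the-envelope Python sketch; the file counts are made-up
assumptions for illustration, not measurements from my drives:

    # Rule of thumb: with arbitrary file sizes, each file wastes on average
    # about half a cluster of slack.  The file counts below are assumptions only.
    def slack_mb(cluster_bytes, file_count):
        return cluster_bytes / 2 * file_count / 2**20

    for cluster_kb in (4, 16, 64):
        jpegs = slack_mb(cluster_kb * 1024, 700_000)   # ~200K jpegs (~134 GB of data)
        clips = slack_mb(cluster_kb * 1024, 2_000)     # ~10 MB movie clips (~20 GB)
        print(f"{cluster_kb:>2}K clusters: ~{jpegs + clips:,.0f} MB of expected slack")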
 
Carey Frisch [MVP]

4K is optimal....

--
Carey Frisch
Microsoft MVP
Windows XP - Shell/User
Microsoft Newsgroups

 
Gerry Cornell

Just curious. Why?


--


Regards.

Gerry

~~~~~~~~~~~~~~~~~~~~~~~~
FCA

Stourport, Worcs, England
Enquire, plan and execute.
~~~~~~~~~~~~~~~~~~~~~~~~
 
Richard Urban [MVP]

Because 4K is the data size used when the system is "paging". It just seems
to make the operating system a bit more "snappy" [in my estimation]. I would
guess that it eliminates some extra overhead when larger or smaller cluster
sizes are in use and the system is making use of the pagefile.

--
Regards,

Richard Urban
Microsoft MVP Windows Shell/User

Quote from: George Ankner
"If you knew as much as you think you know,
You would realize that you don't know what you thought you knew!"
 
Leythos

I have a drive that is used to store small images, often under 30K each.
I have worked with the drive set at 512 bytes, at the default 4K, and even
larger - the 512-byte clusters waste the least slack space, and you can
really see this with 50,000+ files.

For database servers I move the data drive/array to larger cluster
sizes; 4K is way too small in my opinion.

Paging means little if you are not paging a lot.

What you have to do, to find the optimal size, is determine the size of
70% of your files, work out the amount of slack space they would waste,
and set the cluster size accordingly. Sure, tracking lots of small
clusters is a performance hit, but wasted disk space is often more
of a problem for users.
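
Something like this quick Python sketch will do that tally over a tree of
files (the root path and candidate sizes are only examples, and NTFS keeps
very small files resident in the MFT, so the totals overstate slack for
tiny files):

    import os

    # Candidate cluster sizes to compare, in bytes (examples only).
    CANDIDATES = [512, 1024, 2048, 4096, 8192, 16384, 32768, 65536]

    def slack_per_candidate(root):
        """Walk a directory tree and total the slack each cluster size would waste."""
        slack = {c: 0 for c in CANDIDATES}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    size = os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    continue                      # skip files we cannot stat
                for c in CANDIDATES:
                    clusters = -(-size // c)      # ceiling division
                    slack[c] += clusters * c - size
        return slack

    if __name__ == "__main__":
        for cluster, wasted in slack_per_candidate(r"D:\Downloads").items():
            print(f"{cluster:>6} bytes/cluster: {wasted / 2**20:,.1f} MB of slack")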
 
Richard Urban [MVP]

Also remember that if you go larger than 4K clusters, the built-in
defrag utility does not function on that drive/partition.

--
Regards,

Richard Urban
Microsoft MVP Windows Shell/User

Quote from: George Ankner
"If you knew as much as you think you know,
You would realize that you don't know what you thought you knew!"

 
Leythos

I never use MS Defrag; I run its big brother, Diskeeper, and have
no problems with it.
 
Evadne Cake

Can't find your reference 814954 at Microsoft. Is the number
miskeyed?

Welcome, Alex, I see you have met our village idiot. Pay no attention to
anything posted by Andrew the Eejit - his sole aim is to cause damage and
disruption to as many computers as possible. He used to post with a valid
address, but I reckon people started complaining to him personally, so he now
posts via the CDO; he probably reckons he can't be traced that way... ;o)
<eg>
 
Kerry Brown

Are you trying to win a Bulwer-Lytton award? What's wrong with using the odd
period here and there to organise things?

Kerry
 
Alexander Grigoriev

Leythos said:
I have a drive that is used to store small images, often under 30K each.
I have worked with the drive set at 512 bytes, at the default 4K, and even
larger - the 512-byte clusters waste the least slack space, and you can
really see this with 50,000+ files.

Yea, you've gained the whole 90 MB by doing that!
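
Rough arithmetic for where a figure like that comes from (assuming the
usual average slack of half a cluster per file):

    # Average slack per file is about half a cluster, so moving 50,000 small
    # files from 4K clusters down to 512-byte clusters saves roughly:
    saved = 50_000 * (4096 / 2 - 512 / 2)    # bytes
    print(f"~{saved / 2**20:.0f} MB")        # -> ~85 MB, i.e. the ballpark above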
 
Greg Hayes/Raxco Software

Agreed. For best overall file system performance, a 4K cluster size is
best. You only really need to consider going larger if the drive is used
for larger files (i.e. databases, large multimedia files, etc.) and absolute
speed is the primary concern.

- Greg/Raxco Software
Microsoft MVP - Windows File System

Want to email me? Delete ntloader.
 
edavid3001

One of our servers uses a 64KB block size. 700KB worth of cookie data
can easily take 120MB in users' roaming profile directories. Copied to a
partition with a 4KB block size, it takes around 7MB versus 120MB.
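
A quick sanity check on those numbers in Python (the per-file cookie size
and count are assumptions, and this ignores that NTFS can keep very small
files resident in the MFT):

    # Assumption: ~1,900 cookie files averaging ~370 bytes, i.e. roughly 700KB of data.
    files, avg_size = 1_900, 370
    print(f"logical size: ~{files * avg_size / 1024:.0f} KB")
    for cluster in (4 * 1024, 64 * 1024):
        on_disk = files * cluster             # each tiny file occupies one whole cluster
        print(f"{cluster // 1024}K clusters: ~{on_disk / 2**20:.0f} MB on disk")
    # -> about 7 MB at 4K versus about 119 MB at 64K, in line with the figures above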

SQL Server (MSDE) benefits from a 64KB block size.

The benefit is that if you have a bunch of large files (on a second drive -
don't do this on your Windows system drive), you get better performance
when loading/saving them. If you set up a second drive just to store
multi-GB MPG files, a 64KB block size makes more sense.

This usually isn't worth it, though. If you want to increase your
performance, set up a RAID 0 array across 2 or 3 drives. If you have two
drives that can each sustain 50MB/s and you put them in RAID 0, you can
realize 90-100MB/s sustained.

Some of this is my opinion; there are enough variables in systems today
that others may have different opinions based on those variables.
 
