max number files in folder


tshad

In either XP or 2003 server, is there a maximum number of files you can have
in a folder?

We have a folder that has a tremendous number of files (no idea how many).
We are trying to delete some of the old files but we cannot even see the
files in the folder.

If I go to the folder in Windows Explorer, it starts out showing the
hourglass, then the moving-flashlight animation, and then it just sits
there with the message "Searching for items..." in the status bar and
never displays anything.

I then tried to just get the number of files in the folder from the Command
Prompt doing:

F:\Data >dir images | find /i "file"

but it just sits there and nothing comes back.

I would like to do the same using a date filter, where I say something
like: how many files are in the folder before a certain date? Or:
delete the files in the folder older than a certain date.

Is there a way to do this?

Thanks,

Tom
 

Big_Al

tshad said this on 5/20/2009 5:17 PM:
In either XP or 2003 server, is there a maximum number of files you can have
in a folder?

Windows XP itself doesn't have a limit on the number of files per
folder; it depends on the file system that you are using. The maximum
number of files and folders per folder is:
FAT16 - 512
FAT32 - 65,534
NTFS - 4,294,967,295

If you use long file names (over 8 characters for the file name and 3
for the extension), then the number of files and folders available per
folder is reduced on the FAT16 and FAT32 file systems. Other than an
extreme number of files making the directory very slow to parse, that
would be my only guess at the issue.

Have you run chkdsk on the drive? The directory could be clobbered.
And Search does have a date filter in it.
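A quick check along those lines might look like this (assuming the F: drive
from the original post; if the volume is in use, chkdsk will offer to
schedule the check for the next reboot):

C:\>chkdsk F: /f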
 

tshad

Big_Al said:
Windows XP itself doesn't have a limit on the number of files per
folder; it depends on the file system that you are using.

It is NTFS.

But I don't think the folder is clobbered as I can do a "dir *thumb.jpg" and
it starts listing the files.

But if I do "dir *thumb.jpg | find /i "file"" it just sits there.

It may be that there are just too many files going into the buffer
being piped to "find", and that is causing the problem.

What is the syntax for the date filter?

Also, on the Find - what do "/i" and "file" do? I can't seem to find
anything on this.

Does "DIR" allow a date filter? That doesn't seem to be the case.

Thanks,

Tom
 

VanguardLH

tshad said:
It is NTFS.

But if I do "dir *thumb.jpg | find /i "file"" it just sits there.

'Tis the problem of using simplistic command-line utilities. You are
piping the ENTIRE output of the 'dir' command. That means nothing in
the next stage can be completed until the output from 'dir' ceases.
That means the next stage waits, and waits, and waits. Just how long
does it take for the 'dir' command by itself to complete? There is also
the possibility that the buffer is too small to handle the entire
string length comprising all those filenames. My guess (just a guess)
is that the max string length for the file list is 65K characters.
That's characters (bytes), not files.

You may have to do multiple 'dir' commands (piped into another stage)
where each one uses wildcarding to select just some of the files. You
might be able to get by in specifying just the filetype, as in 'dir
*.<type> | ...' but even that list could be too long if most of the
files are the same type. You might have to do 'dir a*.* | ...' to run
through all files that start with a, b, c, ..., z, 0, 1, ..., 9.

Why are you piping the dir command, anyway? You are trying to find a
file or a set of them that match on some string criteria. Instead use
the for command. Run "for /?" for more info. Try something like
"for %a in (filespec) do @echo Found: %a", where filespec is the
wildcarded filespec you are using to narrow down on a file or set of them.
I believe (but am not sure) that the for will recurse through the
filespec to find matching files rather than compiling one huge list and
trying to search through it.
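A rough sketch of that approach at the prompt (the path and mask are only
examples taken from earlier in the thread; inside a batch file the percent
signs would be doubled, i.e. %%a):

F:\Data>for %a in (images\*thumb.jpg) do @echo Found: %a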
 

Andrew McLaren

VanguardLH said:
Why are you piping the dir command, anyway?

Tom,

I think VanguardLH has the right answer here.

An additional suggestion, to avoid a pipe (and the consequent slow
performance): you can redirect the DIR output to a text file, and then
examine the text file using either your favourite text utilities or the
command line. For example:

C:\BigDir>DIR *thumb.jpg > "C:\Users\Tom\My Documents\mydir.txt"

... will create a text file with all the output of the DIR command. Next,

C:\BigDir>find /i "file" "C:\Users\Tom\My Documents\mydir.txt"

to see all the lines that contain the string "file" in the mydir.txt file.
Alternatively, open mydir.txt in a text editor and search for the strings
you seek (assuming your text editor can handle large files).

To explore the syntax of the DIR and FIND commands, run "DIR /?" and
"FIND /?" at the command prompt. The "/i" switch on FIND makes the
search case-insensitive.

Windows (including XP and Vista) is engineered on the principle of
"make the common case fast and correct, and make the uncommon case
correct - but not necessarily fast". If you have a very large number of
files in a directory (e.g. more than 2 million) you should expect file
operations to run quite slowly; but they should complete, eventually.
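For example, a pipe-free count of the *thumb.jpg files might look like this
(paths are illustrative; DIR /a-d /b lists one bare filename per line with
directories excluded, and FIND /c /v "" counts every line in the file):

C:\BigDir>DIR /a-d /b *thumb.jpg > C:\mydir.txt
C:\BigDir>FIND /c /v "" C:\mydir.txt

For the date question, if the FORFILES command is available (it ships with
Server 2003; on XP it comes from the Resource Kit), something along these
lines should act on files last modified on or before a given date - but the
switch syntax differs between versions, so check FORFILES /? first:

C:\BigDir>FORFILES /M *thumb.jpg /D -01/01/2009 /C "cmd /c del @file"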

Hope it helps,

Andrew
 

Tim Slattery

Windows XP it self doesn't have a limit on the number of files per
folder. It depends on the file system that you are using. The max files
and folders per folder are:
FAT16 - 512
FAT32 - 65,534
NTFS - 4,294,967,295

The NTFS limit isn't actually entries per folder, but entries per
partition. There's no other restriction on number of entries in NTFS.

NTFS stores directory entries in a B-tree structure. That makes it easy
and quick to find a single file in a directory, even a huge one
(compare to FAT, which must search directories sequentially). But it
takes longer to retrieve a list of *all* entries in a huge directory.
Since the OP apparently is working with a gigantic directory, I guess
it's taking a *very* long time.
 

Tim Meddick

Does not the number of file entries in the ROOT "folder" of a drive
[partition] differ from that of sub-directories?


==



Cheers, Tim Meddick, Peckham, London. :)
 

John John - MVP

Not for FAT32 or NTFS. Also, the FAT limit of 512 objects is for the
root folder only; otherwise FAT16 volumes are limited to 65,536
clusters. FAT32 folders can hold a maximum of 65,534 objects. On both
FAT and FAT32, Long File Names reduce the number of available entries.
On NTFS it doesn't matter: an NTFS volume can hold more than 4.2
billion objects and you can stuff them in folders as you please; the
number of objects is not affected by LFNs.

John

 

Bill Blanton

It's actually an even 65,536 if you count the . <dot> and .. <dotdot>
entries in subdirectories. The root has no . or .., but may have the
volume name entry.

Except for the special-case root on FAT12 and FAT16, there's nothing
inherent in a FAT(n) directory structure that limits the number of
entries; however, DOS (at least) can't handle one with more than 65,536.



 

Tim Meddick

Bill,
That there was a limit of 512 entries for the ROOT on FAT volumes* was
my initial point when I joined this thread. However, I didn't know for
sure whether the same was true of NTFS volumes, but I have been told it
is just FAT.

*unless the value specifying the limit is changed in a direct disk
editor to alter the 512 default.

==


Cheers, Tim Meddick, Peckham, London. :)


 

Ken Blake, MVP

Bill,
I was pretty sure there was a limit of 512 entries for the ROOT on FAT
volumes*, was my initial point when I joined this thread.


That's correct, not for all FAT volumes, but for FAT16 volumes.

For FAT32, the limit on the number of entries (in all folders, not
just the root folder) is 64K.
 

Bill in Co.

That's correct, not for all FAT volumes, but for FAT16 volumes.

For FAT32, the limit on the number of entries (in all folders, not
just the root folder) is 64K.

But there is a very small limit for the maximum number of folders allowed in
the root directory, at least on a floppy disk. I'm not sure if that
limitation applied to HDs or not. I can't remember now specifically why it
was so limited, either!
 

Bill Blanton

As Ken said, 512 in the root is correct concerning FAT (aka FAT16).

The root dir location is also fixed, unlike with FAT32, where the root
can be cluster-chained and located like any other file. The location
and the size limit may have been set pre-DOS 2.0, when subdirectories
did not exist for FAT and drives were small. (Guessing.)


I think exFAT is allowed to exceed the 64K directory entry limit.



 

Bill Blanton

Bill in Co. said:
But there is a very small limit for the maximum number of folders allowed in the root directory, at least on a floppy disk. I'm
not sure if that limitation applied to HDs or not. I can't remember now specifically why it was so limited, either!

Floppies are FAT12. An empty dir has to have at least one cluster
assigned, so that also comes into play space-wise.
 

John John - MVP

The FAT16 limit lies in its use of 16-bit numbers to count the
clusters: a FAT16 volume cannot have more than 65,535 clusters, and
every file occupies at least 1 cluster, so even if the directory table
could hold more than 65,535 entries it would still be impossible to
store the files on the volume.

As for FAT32, I believe that the limit is voluntarily imposed by the
FAT32 file system driver, both to keep compatibility with 16-bit disk
utilities that can only handle 16-bit numbers and to keep directories
to a manageable size so as not to unduly slow down or cripple the
performance of the file system.

John

 

Tim Meddick

Bill,
The number of root entries for FAT16 is 512, as already discussed.
Floppy disks use the FAT12 filesystem and, after your post, I tested
the number of root entries one would hold.
I finally came up with a value of 224! I thought it would have been the
same as FAT16, but there you go.
However, as also discussed earlier, the number of root entries can be
altered by changing the specific value that governs the maximum number
of entries.
Well spotted...

==


Cheers, Tim Meddick, Peckham, London. :)
 

Bill Blanton

John John - MVP said:
The FAT16 limit would lie in that it uses 16-bit numbers to count the clusters, so a FAT16 volume cannot have more than 65535
clusters, every file will occupy at least 1 cluster so even if the directory table could hold more than 65535 entries it would
still be impossible to store the files on the volume.

It's still possible to exceed that number since a zero-byte file does
not point to any cluster other than 0000.

As for FAT32 I believe that the limit is voluntarily imposed by the FAT32 file system driver, the limit is imposed to keep
compatibility with 16-bit disk utilities that can only handle 16-bit numbers

No doubt that many (probably all) utilities wouldn't be able to handle
a count larger than that, and it is voluntarily imposed by Windows' FAT
drivers. However, I think the reason for the limit is more deeply
rooted in the fact that DOS isn't designed to handle anything larger
than a 16-bit count.

There's an internal DOS structure used by DOS interrupts to find files
based on the filespec you feed it. The interrupt keeps track of "where
it is" in the dir, based on the entry number, so that it can later jump
to that point to begin searching again. That index number is 16-bits
wide.

See the 16-bit word at offset 0x0d of the Data Transfer Area
http://heim.ifi.uio.no/~stanisls/helppc/dta.html

The DTA is used by the "Find First" and "Find Next" DOS INT 21h
functions. To have a count larger than a word would break those
interrupts (and fixing them would not be a trivial procedure).

I mention the DTA because I'm familiar with it. There could also be a
16-bit limit count in the DOS File Control Block (FCB) interrupts.

and to keep the directories to a manageable size so as to not unduly slow down or cripple the performance of the file system.

There is definitely a performance hit with a dir containing so many files.


 

Tim Slattery

Bill Blanton said:
Except for the special case root for FAT12 and 16, there's nothing
inherent of a FAT(n) directory structure that limits the number of entries,

True! But the spec dictates that a FAT32 driver will allow no more
than 65,536 entries in a directory.
 

Tim Slattery

But there is a very small limit for the maximum number of folders allowed in
the root directory, at least on a floppy disk. I'm not sure if that
limitation applied to HDs or not.

Floppies use FAT12, which is even more restricted than FAT16. The root
in a FAT12 partition would probably be restricted to 512 entries, like
a FAT16 root directory. It certainly wouldn't be any larger.

In FAT16 and FAT12, the root directory begins at a fixed place in the
partition and occupies a fixed amount of space. Other directories can
expand, but not the root. In FAT32, the root directory is pointed to by
an entry in one of the tables at the beginning of the partition, and is
allowed to expand just like any other directory.
 
