Increasing disk performance with many small files (NTFS / Windows roaming profiles)

Benno...

Due to applications such as the SAP client and AutoCAD 2002, our users'
roaming profiles contain thousands of very small files. I have noticed that
the average transfer rate of those small files (~350 bytes in size) over the
network is extremely slow compared to normal- to large-sized files (300 KB
up to a few MB). With the normal-sized files I'm seeing transfer rates
to the workstations of 4 MB to 15 MB per second; with the small files this
drops to as low as 75 KB per second, with an average of ~200 KB per second.

The roaming profiles are stored on a RAID5 logical drive with a 64KB
stripe size (I think this is the maximum for the Smart Array 5300
controller) and the NTFS partition is formatted with the default 4KB
cluster size. The Array Controller cache is configured 25% read / 75%
write to compensate for the RAID5 slower writes.

The server is a Windows 2000 SP4 machine, the workstations are NT4 SP6a.
The network is 100Mb switched with a 1000Mb connection to the fileserver.

Is there anything I can do with the RAID stripe size or the cluster size
to increase the throughput of those small files without affecting the
transfer speed of the normal-sized files too much?

Are there any benchmark programs that I can use to test this?

Could the TCP/IP window size be an issue here?
 
Benno...

Benno... said:
> The roaming profiles are stored on a RAID5 logical drive with a 64KB
> stripe size (I think this is the maximum for the Smart Array 5300
> controller) and the NTFS partition is formatted with the default 4KB
> cluster size. The Array Controller cache is configured 25% read / 75%
> write to compensate for the RAID5 slower writes.

I was thinking: the RAID5 drive consists of 6 disks. Normally, the more
spindles the better the performance, but is this also true with those
very small files? Could a large number of spindles have a negative
performance effect?
 
Bill Todd

Benno... said:
> Due to applications such as the SAP client and AutoCAD 2002, our users'
> roaming profiles contain thousands of very small files. I have noticed
> that the average transfer rate of those small files (~350 bytes in size)
> over the network is extremely slow

From your later comments, it sounds as if you already recognize that your
performance problem likely has little to do with the network: the
performance that you're seeing is consistent with the requirement for a
separate disk access for each small file (on a fairly fast disk).

> compared to normal- to large-sized files (300 KB
> up to a few MB). With the normal-sized files I'm seeing transfer rates
> to the workstations of 4 MB to 15 MB per second; with the small files this
> drops to as low as 75 KB per second, with an average of ~200 KB per second.

> The roaming profiles are stored on a RAID5 logical drive with a 64KB
> stripe size (I think this is the maximum for the Smart Array 5300
> controller) and the NTFS partition is formatted with the default 4KB
> cluster size. The Array Controller cache is configured 25% read / 75%
> write to compensate for the RAID5 slower writes.
>
> The server is a Windows 2000 SP4 machine, the workstations are NT4 SP6a.
> The network is 100Mb switched with a 1000Mb connection to the fileserver.
>
> Is there anything I can do with the RAID stripe size or the cluster size
> to increase the throughput of those small files without affecting the
> transfer speed of the normal-sized files too much?

No. The only thing that could help in that area is sufficient cache, either
on the array or in the system file cache, to keep the small files
memory-resident. (You might also reconsider the read/write balance: if the
cache isn't helping much at all now, rebalancing may not help much more,
but your current heavy skew toward writes may not be helping much either.)

A file system like Reiserfs that can aggregate many such small files in a
single directory node might help, if the accesses to them are clustered
within directories. The only analogous approach with NTFS would be somehow
to manage to create the files in a clean MFT in the order that they're
accessed by the user, and depend upon the disk's read-ahead mechanism to
prefetch multiple files at a time (though if other activity is also
contending for the disk that could interfere with the read-aheads, or vice
versa if you force read-ahead on every access).

- bill
 
Marc de Vries

> I was thinking: the RAID5 drive consists of 6 disks. Normally, the more
> spindles the better the performance, but is this also true with those
> very small files? Could a large number of spindles have a negative
> performance effect?

More spindles also give better performance with those very small
files. The array controller then has the option to read multiple files
simultaneously from different spindles.

Especially with small files you should set the stripe size to the
maximum that the controller supports. So it already has the optimum
setting. (The idea behind that is that the small files are stored on
as few disks as possible, which increases the chance that you can
read several files simultaneously.)

But whatever you do, you might increase performance somewhat, but you will
never get large transfer rates with small files. The reason is this: the
time it takes to seek to the file on disk and open it is very large
compared to the time it takes to transfer it.
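
A quick sketch of that ratio in Python (illustrative figures: 12 ms average
access time and 50 MB/s sequential rate, both assumed, not measured):

access_ms = 12.0                 # typical seek + rotational latency
transfer_ms = 350 / 50000.0      # 350 bytes at 50 MB/s (= 50,000 bytes per ms)
print(access_ms / (access_ms + transfer_ms))   # ~0.9994: positioning dominates

For a 350-byte file the positioning share is well above the 90% mentioned
above.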

The same might also happen on the network. I'm not sure what overhead
you get on small files in the network. But you might want to take a
look at what happens when you access those small files directly on the
server, compared to what happens when you access them over the
network.

Are you doing a lot of writing to the array controller? You have to
accept that writes will never be fast. Extra cache will not fix that,
and it might slow the reads down a bit, in such a way that on
average the user experience is slower.
I don't have personal experience with roaming profiles, but I'd guess
they require more read than write capacity. More cache in the array
controller might help, if you have the option to add more.

You could experiment with the cluster size, but I don't think that
will help. The cause of the slow performance is the relatively
large seek time when accessing small files, and that doesn't change
when the cluster size is smaller.

I'm afraid that I can't think of much to improve the situation.
Basically the applications shouldn't create so many extremely small
files, because that will always hurt performance. (Your backup
software is probably not too happy about it either)

Marc
 
Benno...

Marc said:
> More spindles also give better performance with those very small
> files. The array controller then has the option to read multiple files
> simultaneously from different spindles.
> The same might also happen on the network. I'm not sure what overhead
> you get on small files in the network. But you might want to take a
> look at what happens when you access those small files directly on the
> server, compared to what happens when you access them over the
> network.

I set up a test server to do some performance tests. I collected a
dataset of 26 profiles (216 MB in 46,075 files and 1,773 directories).
Copying them from the server to a workstation gives an average speed of
420 KB/sec (the test server is newer and therefore has better-performing
disks/array controller than the production server in my previous post;
the production server gets around 230 KB/sec on the test dataset).
If I copy this dataset on the server itself, from the RAID1 boot/system
partition to the RAID5 data partition, I see 2,500 KB/sec.
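
For anyone repeating this test, a minimal sketch of such a measurement (the
share path is hypothetical, and this reads rather than copies, which is the
disk-bound part of the operation):

import os, time

root = r"\\testserver\profiles"   # hypothetical path to the 26-profile dataset

start = time.time()
total_bytes = 0
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        with open(os.path.join(dirpath, name), "rb") as f:
            total_bytes += len(f.read())
elapsed = time.time() - start
print("%d bytes in %.1f s = %.0f KB/s"
      % (total_bytes, elapsed, total_bytes / elapsed / 1024.0))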
 
Folkert Rienstra

Marc de Vries said:
> More spindles also give better performance with those very small
> files.

Nope, only on busy servers that do a lot of them simultaneously.

> The array controller then has the option to read multiple files
> simultaneously from different spindles.

And therefore still reads at full stripe-width transfer rates.

> Especially with small files you should set the stripe size to the
> maximum that the controller supports. So it already has the optimum
> setting.

Not if this is not a "busy" server. The bigger the stripe size, the more
small files sit on a single disk and transfer at single-disk speeds.
If that's not compensated by the sheer number of them being read
simultaneously all the time, then you lose.

> (The idea behind that is that the small files are stored on as few disks
> as possible, which increases the chance that you can read several files
> simultaneously.)

So you actually make them slower, in order to read more of them
simultaneously. On a not-so-busy server you are ensuring that the small
files will transfer even slower compared to doing nothing. Nice one.

When you leave it as it is, at least you don't make the files that fill a
stripe width any slower, and, when they are less than that, smaller files
automatically fill up the gaps when the server is busy and has many
outstanding I/Os.

> But whatever you do, you might increase performance somewhat, but you will
> never get large transfer rates with small files.

Right. Now let that sink in yourself.

> The reason is this: the time it takes to seek to the file on disk and open
> it is very large compared to the time it takes to transfer it.

> The same might also happen on the network. I'm not sure what overhead
> you get on small files in the network. But you might want to take a
> look at what happens when you access those small files directly on the
> server, compared to what happens when you access them over the
> network.

> Are you doing a lot of writing to the array controller? You have to
> accept that writes will never be fast. Extra cache will not fix that,

Not on a busy server, no. And not if the write speed is not disk related.
It will if it is, and the cache can catch up in less busy periods, acting
as a buffer.

> and it

What "it"?

> might slow the reads down a bit, in such a way that on
> average the user experience is slower.
> I don't have personal experience with roaming profiles, but I'd guess
> they require more read than write capacity. More cache in the array
> controller might help, if you have the option to add more.

> You could experiment with the cluster size, but I don't think that
> will help. The cause of the slow performance is the relatively
> large seek time when accessing small files,
> and that doesn't change when the cluster size is smaller.

Actually it does, when already-small files fragment because of it.

> I'm afraid that I can't think of much to improve the situation.
> Basically the applications shouldn't create so many extremely small
> files, because that will always hurt performance.

Unless they sit on a dedicated drive that is not mechanical in nature:
a solid-state disk.

> (Your backup software is probably not too happy about it either)

That obviously depends on the type of backup.
 
Folkert Rienstra

Benno said:
> I was thinking: the RAID5 drive consists of 6 disks.
> Normally, the more spindles the better the performance

Depends on file size and stripe size, really.

When a transfer is not the full stripe width, the transfer is slower than
optimal.

> but is this also true with those very small files?

Depends on file size and stripe size.
You choose your stripe size depending on file size and stripe width.

If you change the stripe width without changing the stripe size, then small
files may find themselves sitting in a less-than-full stripe (width) and
not benefit from the same full-stripe-width transfer rate that bigger files
get.

> Could a large number of spindles have a negative performance effect?

Sure, when you don't adjust your stripe size accordingly.
 
Folkert Rienstra

Benno said:
> Due to applications such as the SAP client and AutoCAD 2002, our users'
> roaming profiles contain thousands of very small files. I have noticed
> that the average transfer rate of those small files (~350 bytes in size)
> over the network is extremely slow compared to normal- to large-sized
> files (300 KB up to a few MB). With the normal-sized files I'm seeing
> transfer rates to the workstations of 4 MB to 15 MB per second; with the
> small files this drops to as low as 75 KB per second, with an average of
> ~200 KB per second.

512 bytes (one sector) or 4 KB (one cluster) reside in a single 64KB
stripe, so such a file transfers at single-drive speed.

At an STR of 51 MB/s, that transfer takes about .01 ms (sector) or
.08 ms (cluster).

With an average access time of 12 ms, your average transfer rate ranges
from (.01/12.01) × 51 MB/s ≈ 42 KB/s to (.08/12.08) × 51 MB/s ≈ 340 KB/s.

Your 350-byte file carries only 350 of those 4,096 bytes as payload, so it
may run at 350/4096 × 340 KB/s ≈ 29 KB/s of useful data.
(And yes, because of the huge difference between access time and actual
transfer time, it is trivial whether the disk system reads a sector or a
cluster.)
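
That arithmetic is easy to reproduce; a minimal sketch, assuming the same
illustrative 51 MB/s STR and 12 ms access time:

def effective_rate(payload_bytes, read_bytes, str_bps=51e6, access_s=0.012):
    # one positioning delay plus the media transfer of what the disk reads;
    # only payload_bytes of that read are useful data
    return payload_bytes / (access_s + read_bytes / str_bps)

print(effective_rate(512, 512))     # ~42,600 B/s: a single sector
print(effective_rate(4096, 4096))   # ~339,000 B/s: a full 4 KB cluster
print(effective_rate(350, 4096))    # ~29,000 B/s: 350-byte payload in a cluster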
> The roaming profiles are stored on a RAID5 logical drive with a 64KB
> stripe size

Any file of 64KB is now a small file. Whether it is read in parallel now
depends on whether it is fragmented, and how.

> (I think this is the maximum for the Smart Array 5300
> controller) and the NTFS partition is formatted with the default 4KB
> cluster size. The Array Controller cache is configured 25% read / 75%
> write to compensate for the RAID5 slower writes.

> The server is a Windows 2000 SP4 machine, the workstations are NT4 SP6a.
> The network is 100Mb switched with a 1000Mb connection to the fileserver.

> Is there anything I can do with the RAID stripe size or the cluster size
> to increase the throughput of those small files without affecting the
> transfer speed of the normal-sized files too much?

Little to none.

> Are there any benchmark programs that I can use to test this?

> Could the TCP/IP window size be an issue here?

Maybe, for whatever part of the gap between your measured rates and the
disk-bound estimate above the disk itself doesn't explain.
 
Eric Gisin

Benno... said:
> Due to applications such as the SAP client and AutoCAD 2002, our users'
> roaming profiles contain thousands of very small files. I have noticed
> that the average transfer rate of those small files (~350 bytes in size)
> over the network is extremely slow compared to normal- to large-sized
> files (300 KB up to a few MB). With the normal-sized files I'm seeing
> transfer rates to the workstations of 4 MB to 15 MB per second; with the
> small files this drops to as low as 75 KB per second, with an average of
> ~200 KB per second.

The small files are stored in the MFT, so a single read opens the file and
reads the data. Since 10K drives do about 100 I/Os per second, you won't
ever copy more than 100 files per second with a single-threaded copy.
(Actually, the copy compares timestamps before copying, but the same
argument applies.)
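
That ceiling is simple arithmetic; a sketch (the ~100 I/Os per second
figure for a 10K RPM drive is the one quoted above):

iops = 100                # roughly what one 10K RPM drive sustains
avg_file_bytes = 350
files_per_second = iops   # one I/O per file when the data lives in the MFT
print(files_per_second * avg_file_bytes)   # ~35,000 B/s ceiling, single-threaded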

The problem is roaming profiles. Create a home directory for each user
instead.
 
Ron Reaugh

Benno... said:
> I set up a test server to do some performance tests. I collected a
> dataset of 26 profiles (216 MB in 46,075 files and 1,773 directories).
> Copying them from the server to a workstation gives an average speed of
> 420 KB/sec (the test server is newer and therefore has better-performing
> disks/array controller than the production server in my
> previous post.

Try the same experiment twice, once pushing (xcopy on the server) the file
set and once pulling (xcopy on the workstation) the file set.
 
Ron Reaugh

Folkert Rienstra said:
> You choose your stripe size depending on file size and stripe width.

> If you change the stripe width

Could you please cite a reference for your use of the term "stripe width"?

> without changing the stripe size then small

Could you please cite a reference for your use of the term "stripe size"?
 
Marc de Vries

> Try the same experiment twice, once pushing (xcopy on the server) the file
> set and once pulling (xcopy on the workstation) the file set.

Good idea. But I wonder whether the roaming-profile copy operations will be
that efficient at copying. I expect they behave more like a normal copy.

Seems like the network doesn't like all those small files either. I'm
not sure if or how you can improve the situation there. You'll need
some network guys for that.

Marc
 
Alexander Grigoriev

If you had XP on the workstations, you could use client-side caching
(Offline Files). If you have security concerns about it, the cached files
can be encrypted on the client.
 
Marc de Vries

> Nope, only on busy servers that do a lot of them simultaneously.

If the server were doing nothing, he wouldn't have bought a Smart Array
5300 controller for it.

> And therefore still reads at full stripe-width transfer rates.

Which is not important for those very small files at all.

> Not if this is not a "busy" server. The bigger the stripe size, the more
> small files sit on a single disk and transfer at single-disk speeds.
> If that's not compensated by the sheer number of them being read
> simultaneously all the time, then you lose.

Wrong. As I have explained to you time and again in the past, the
transfer rate is not important for small files. Why don't you listen!

When you read thousands of files of 350 bytes, it doesn't matter
whether I read them at a 30 MB/s transfer rate or a 300 MB/s transfer
rate. The time to get the file depends solely on the time that is
needed to seek to and open the file. That takes 90% of the total time to
get the file. Since the transfer rate only takes 10% of the time, the
impact of a faster transfer rate is negligible. (Rough estimates; it
will be even less for 350-byte files, but you will remember these
numbers from a few days ago when I explained it to you in detail.)

> So you actually make them slower, in order to read more of them
> simultaneously.

Almost right.
I actually make the transferring of the small files slower by 0.00001%
because of a lower transfer rate, and then make the transfer of the
total set of files faster by 400% because I can open multiple files at
once.

> On a not-so-busy server you are ensuring that the small files will
> transfer even slower compared to doing nothing. Nice one.

On a server that is only opening one file at a time (not a realistic
scenario) I am slowing down the opening of that file by 0.00001%.

So who cares about that? (Well, you do, obviously, but anyone actually
using that server in real life won't.)

> When you leave it as it is, at least you don't make the files that fill a
> stripe width any slower, and, when they are less than that, smaller files
> automatically fill up the gaps when the server is busy and has many
> outstanding I/Os.

> Right. Now let that sink in yourself.

How about you actually start to listen to what I explain to you and
LEARN from it?

> Not on a busy server, no. And not if the write speed is not disk related.
> It will if it is, and the cache can catch up in less busy periods, acting
> as a buffer.

> What "it"?

> Actually it does, when already-small files fragment because of it.

How about you start reading the thread again?
We are talking about files that are about 350 bytes in size!

The smallest cluster size that NTFS supports is already much bigger
than that: 512 bytes. So how are these files going to be fragmented by
that cluster size?

> Unless they sit on a dedicated drive that is not mechanical in nature:
> a solid-state disk.

That could be an option. But even though there are lots of very small
files in the roaming profile, the rest of the roaming profile could be
very big, and Windows probably won't let the profile be stored on two
different types of disk.

> That obviously depends on the type of backup.

Obviously that is also the reason why I said that the backup is
PROBABLY not too happy about it.
For some backup methods it indeed doesn't matter.

Marc
 
Marc de Vries

> 512 bytes (one sector) or 4 KB (one cluster) reside in a single 64KB
> stripe, so such a file transfers at single-drive speed.
>
> At an STR of 51 MB/s, that transfer takes about .01 ms (sector) or
> .08 ms (cluster).
>
> With an average access time of 12 ms, your average transfer rate ranges
> from (.01/12.01) × 51 MB/s ≈ 42 KB/s to (.08/12.08) × 51 MB/s ≈ 340 KB/s.
>
> Your 350-byte file carries only 350 of those 4,096 bytes as payload, so it
> may run at 350/4096 × 340 KB/s ≈ 29 KB/s of useful data.
> (And yes, because of the huge difference between access time and actual
> transfer time, it is trivial whether the disk system reads a sector or a
> cluster.)

So you have finally accepted what I have been telling you for weeks:
that the transfer rate of the array is not important for small files,
because the access time is so much bigger: 12 ms vs .08 ms.

I'm glad that you are apparently capable of listening and learning after
all.


Marc
 
Folkert Rienstra

Marc de Vries said:
> So you have finally accepted what I have been telling you for weeks:

Which I fully debunked.

> that the transfer rate

Still doesn't understand transfer rate.

> of the array is not important for small files,
> because the access time is so much bigger: 12 ms vs .08 ms.

Still mighty clueless. Notice that 51 MB/s in the formulas? That is the
STR. Change it and the average transfer rate changes with it, 1:1.

The same files on an array will transfer n times (n = stripe width)
faster than they would on a single drive when accessed simultaneously.

So when an application reads several small files at once, it can, with
some luck, read them n times faster.
On a 6-drive RAID5 array, 4 KB files can be read at 5 × 340 KB/s ≈ 1.7 MB/s
compared to 340 KB/s when read serially. That is a 400% improvement!

However, the ratio of pure transfer time to total transfer time in that
same formula dramatically worsens the result for a striped small file.
So for a small file just around the size of a stripe width, the performance
is barely any better than run from a single drive. It is this type of small
file that can dramatically improve performance when it is not striped
(upping the stripe size to the file size), so that several of those files
may be read simultaneously.

The effect, however, wears off quickly as the files get smaller.
The biggest effect is near the stripe-size transition point, and it is gone
completely below the 1/(n-1) stripe-size point.

The 350-byte files are definitely below that.
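
Read against the setup in this thread, that threshold works out as follows
(a sketch of one interpretation: 64KB stripes on a 6-drive RAID5, so
n-1 = 5 data drives):

stripe_kb = 64.0
data_drives = 5                          # 6-drive RAID5: 5 data disks per stripe
threshold_kb = stripe_kb / data_drives   # the 1/(n-1) point: ~12.8 KB
print(threshold_kb)
print(0.35 < threshold_kb)   # True: 350-byte files sit far below the threshold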
> I'm glad that you are apparently capable of listening and learning after
> all.

Which cannot be said of you, when I washed your ears recently and you
keep mixing up terms.
 
Folkert Rienstra

Marc de Vries said:
> If the server were doing nothing, he wouldn't have bought a Smart Array
> 5300 controller for it.

You never cease to amaze me.
What has that got to do with a "busy server" and doing parallel IO?

> Which is not important for those very small files at all.

Of course it is. You wouldn't be making the effort if it wasn't so.
As for those 350-byte files it won't make any difference, that I will
agree.

> Wrong. As I have explained to you time and again in the past, the
> transfer rate is not important for small files. Why don't you listen!

Because you are wrong, and I proved it.

> When you read thousands of files of 350 bytes, it doesn't matter
> whether I read them at a 30 MB/s transfer rate or a 300 MB/s transfer
> rate.

Even you shouldn't be *that* clueless. That set of files will transfer 10
times faster when that transfer rate is aggregated by a 10-drive array.

And by the way, it doesn't make one jot of difference what stripe size you
choose, because those 350-byte files will never sit on more than one
stripe, whatever you do. Even with a 2KB stripe size you still read
5 of those 350-byte files simultaneously.

> The time to get the file depends solely on the time that is
> needed to seek to and open the file.

You miss the whole point.

> That takes 90% of the total time to get the file.

So what? It results in a certain (average) transfer rate.
With RAID it results in n times that (average) transfer rate.

> Since the transfer rate

Still has no clue about transfer rate.

> only takes 10% of the time, the impact of a faster transfer rate is
> negligible.

And it's not even a comprehensible sentence.

> (Rough estimates; it will be even less for 350-byte files, but you will
> remember these numbers from a few days ago when I explained it to you in
> detail.)

You mean when I explained it to *you*, don't you, troll?
There was a distinct *lack* of detail in *your* post, and your constant
contradicting of yourself made it so hard to detect where you were wrong.

> Almost right.

Exactly right.

> I actually make the transferring of the small files slower by 0.00001%

Actually, it is far more complex than that. Your small files are now 64KB.

> because of a lower transfer rate

Some 13% lower transfer rate for those 64KB files.

> and then make the transfer of the total set of files faster
> by 400% because I can open multiple files at once.

Right, time for some examples:

64KB files, 13KB stripe size, 5-drive RAID0, 51 MB/s per drive, 12 ms
access time. A 64KB file is forced onto a full stripe width. 13KB transfers
in 13/51 ms ≈ .25 ms, at an average single-drive transfer rate of
(.25/12.25) × 51 MB/s ≈ 1.1 MB/s.
So the total average file transfer rate across 5 drives is 5 × 1.1 MB/s =
5.5 MB/s.

Now you go to a 64KB stripe size so that several small files "supposedly"
can transfer at the same time:
64KB files, 64KB stripe size, 5-drive RAID0, 51 MB/s per drive, 12 ms
access time. 64KB is now one stripe. 64KB transfers in 64/51 ms ≈ 1.25 ms,
at an average single-drive transfer rate of (1.25/13.25) × 51 MB/s ≈
4.8 MB/s.
That is 13% slower; that is 1,300,000 times your 0.00001%.

Five 64KB files read at an aggregated average transfer rate of 24 MB/s:
a whopping 335% improvement over striping a single file.

Now for 39KB files.
In the 13KB stripe-size example the single-drive transfer rate was
1.1 MB/s. A single file now transfers at 3.3 MB/s. Reading several files at
once gets you ~5.5 MB/s.
In the 64KB stripe-size example the transfer rate is (.75/12.75) × 51 MB/s
≈ 3 MB/s. Reading 5 files at once gets you 15 MB/s. Still a 170%
improvement.

Now for 26KB files.
In the 13KB stripe-size example the single-drive transfer rate was
1.1 MB/s. A file transfers at 2.2 MB/s. Reading several files at once gets
you ~5.5 MB/s.
In the 64KB stripe-size example the file transfer rate is (.5/12.5) × 51
MB/s ≈ 2 MB/s. Reading several files at once gets you 10 MB/s. That is now
only a mere 80% improvement.

Now for 13KB files.
In the 13KB stripe-size example the single-drive transfer rate was
1.1 MB/s. Reading several files at once gets you 5.5 MB/s.
In the 64KB stripe-size example the single-file transfer rate is still
1.1 MB/s. Reading several files at once still gets you only 5.5 MB/s.
No improvement at all anymore. Vanished. Poof.

Your improvement only takes place for files between the stripe size and
1/(n-1) of the stripe size. Anything below that sees no effect.

The 350-byte files fall well below that.
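
Those figures can be checked mechanically; a sketch of the same model
(5-drive RAID0, 51 MB/s per drive, 12 ms access time, all illustrative):

def avg_rate_mb_s(file_kb, stripe_kb, drives=5, str_mb_s=51.0, access_ms=12.0):
    # stripes the file spans, capped at the number of drives read in parallel
    stripes = min(drives, max(1, -(-file_kb // stripe_kb)))   # ceiling division
    per_drive_kb = min(file_kb, stripe_kb)    # what one drive must deliver
    xfer_ms = per_drive_kb / str_mb_s         # media time for that chunk
    return stripes * (xfer_ms / (access_ms + xfer_ms)) * str_mb_s

print(avg_rate_mb_s(64, 13))        # ~5.3 MB/s: one 64KB file striped over 5 drives
print(avg_rate_mb_s(64, 64))        # ~4.8 MB/s: the same file on one drive
print(5 * avg_rate_mb_s(64, 64))    # ~24 MB/s: five such files read concurrently
print(avg_rate_mb_s(13, 13) == avg_rate_mb_s(13, 64))   # True: benefit gone at 13KB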
> On a server that is only opening one file at a time (not a realistic
> scenario) I am slowing down the opening of that file by 0.00001%.

13%.

> So who cares about that? (Well, you do, obviously, but anyone actually
> using that server in real life won't.)

> How about you actually start to listen to what I explain to you and
> LEARN from it?

I finally did, and what did I discover? That you are full of shit,
exactly as I had anticipated. Yes, I learned a great deal from you.

Actually, I took that as a general comment, not necessarily the OP's 4KB
clusters and 350-byte files.

> How about you start reading the thread again?
> We are talking about files that are about 350 bytes in size!

In theory we are talking about small files up to 64KB (the stripe size).
Theoretically such a file can be in 16 fragments with a 4KB cluster size.

While fragmenting is bad when it is on the same drive, it can be beneficial
if the fragments are on separate drives in an array.

> The smallest cluster size that NTFS supports is already much bigger
> than that: 512 bytes. So how are these files going to be fragmented by
> that cluster size?

Those, obviously, not.
The ones bigger than 4KB and up to 64KB obviously can.
 
Marc de Vries

On Thu, 22 Jul 2004 01:39:23 +0200, "Folkert Rienstra" wrote:

> Of course it is. You wouldn't be making the effort if it wasn't so.
> As for those 350-byte files it won't make any difference, that I will
> agree.

So even though you constantly said in this thread that I was wrong, you
have now changed your mind and agree that I was actually right all
that time.

Nice to finally see that statement in black and white. End of
discussion.

Marc
 
