defrag takes much longer on 1000GB drive


sandot

Defragging my 1000 GB hard drive takes much longer than a 200 GB or 500
GB drive. My system may be a bit slow but the difference in time is
very large (maybe 10 times longer). The cluster size on all my hard
drives is 4 KB.

Is it because the NTFS indexes and other system software are so big that
they need a significant amount of processor and I/O time?

As the 1000 GB drive fills up, will it start to get sluggish? I don't want two
500 GB drives but I'll do that if it prevents a problem.

This is the CHKDSK data:

976751968 KB total disk space.
36065140 KB in 19106 files.
5144 KB in 725 indexes.
0 KB in bad sectors.
154888 KB in use by the system.
65536 KB occupied by the log file.
940526796 KB available on disk.

4096 bytes in each allocation unit.
244187992 total allocation units on disk.
235131699 allocation units available on disk.
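
A quick sanity check of those figures (a minimal Python sketch using only the
values quoted above, nothing measured from the drive itself):

# Sanity-check the CHKDSK output quoted above.
total_kb      = 976_751_968   # "total disk space"
free_kb       = 940_526_796   # "available on disk"
cluster_bytes = 4096          # "bytes in each allocation unit"

clusters = total_kb * 1024 // cluster_bytes
used_kb  = total_kb - free_kb

print(f"allocation units: {clusters:,}")                 # 244,187,992 -- matches CHKDSK
print(f"space in use:     {used_kb / 1024**2:.1f} GB")   # roughly 34.5 of ~931 GB
print(f"utilisation:      {used_kb / total_kb:.1%}")     # about 3.7%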
 

Conor

sandot said:
Defragging my 1000 GB hard drive takes much longer than a 200 GB or 500
GB drive.

No shit. I'd never have guessed that.

sandot said:
My system may be a bit slow but the difference in time is
very large (maybe 10 times longer). The cluster size on all my hard
drives is 4 KB.

Well there you go then. It's got a lot to check.

sandot said:
As the 1000 GB drive fills up, will it start to get sluggish?

Yes, the same as any other drive.

sandot said:
I don't want two 500 GB drives but I'll do that if it prevents a problem.

It won't.
 

mscotgrove

Ato_Zee said:
Get an Intel Extreme processor, up the pagefile, and if it
is like my 1TB, it is the large files like DVD .iso's that
slow up the defrag. Not that defrag makes much difference
to performance.
With a decent defrag utility there will be a scheduler,
so run it overnight once a month.
You don't say how long it takes, and is it a USB
external, or connected by mobo SATA?
With a C:\ OS and D:\ data, defrag of D:\ can run
in the background, with no apparent performance
hit.
What you need to worry about is when it goes
tits up, without a backup.
I use two different utilities, True Image for the
OS, Fileback for D:\ data, onto another 1TB.

Many people say that defrag on NTFS is not required.

Personally, I think it has a place, but a fairly long way down the
priority list. Once every month or two sounds about right, and I
doubt you will spot the difference in speed. I agree with Ato_Zee on
this point.

In my experience, the files that become most fragmented are incremental
logs. Large files (multi-GB files) can also become quite fragmented.

Ato_Zee also mentioned what happens when the disk 'fails'. A good backup
must be stages 1, 2, 3 and 4, but from a recovery point of view, a
defragged disk is always much easier to work with.

Michael

www.cnwrecovery.com
 

sandot

Conor said:
No shit. I'd never have guessed that.

Well there you go then. It's got a lot to check.

Yes, the same as any other drive.

It won't.


Thank you for your views. I think the stats may have been unclear. Only
about 30 GB of the 1000 GB drive has been used.

"976751968 KB total disk space"
means there's a total of 976,751,968 KB on the drive.

"940526796 KB available on disk"
means 940,526,796 KB of free space.

The barely used 1000 GB drive is still much slower to defrag than a much
fuller 200 GB or 500 GB drive.
 

Rod Speed

Larc said:
You have only one partition on the drive? That's like having one
huge drawer in your house that you keep everything you own in.

Nope, nothing like. We have a funky computer thingo that keeps track of
where everything is in the case of the drive, which isn't available with the drawer.

Makes a hell of a lot more sense to organise by folder tree than by partition within a drive.
 

Rod Speed

Larc wrote:
Precisely the same management functions are available for smaller partitions.

Wrong. There is no management function that dynamically
adjusts the size of the separate partitions so that they are
big enough for the contents as the contents change over time.

Larc wrote:
Plus that would help solve the OP's principal complaint.

You don't know that either. You don't know that that particular
drive wouldn't be just as slow with multiple partitions.

And defragging makes no sense with modern 1TB drives anyway.
 

Bilky White

Go on Roddie, send him one of your scathing templates and make him feel
*really* small. Come on, DANCE! NOW!!!
 

Rod Speed

Some pathetic little gutless ****wit desperately cowering behind
Bilky White wrote just what you'd expect from a desperately
cowering gutless ****wit.
 

Arno

In comp.sys.ibm.pc.hardware.storage chrisv said:
Rod Speed wrote:
Yep. Something like.

Good analogy.

Actually, disk defragging has aspects of a packing problem,
i.e. how to pack the files best with the fewest moves. These problems
have no fast exact solutions, and even approximations grow in
effort much more strongly than linearly.
Wrong, in many people's situations and opinions, and that's all that
counts, for them.

It does make sense to organize by both. Under Unix/Linux doing it
that way is standard, as you can have symbolic links that seamlessly
(well, mostly) hook one tree into another across partitions. MS has
overlooked this invention for a few decades and stuck to the
historic and very, very outdated concept of "drive letters".
(Stupid? Incompetent?) The easiest way under Windows is indeed
separate partitions.

Arno
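
To make Arno's packing analogy concrete, here is a toy first-fit-decreasing
sketch in Python. The file sizes are made up and no real defragmenter works
like this; it only illustrates why "pack the files best" is a combinatorial
problem rather than a single linear pass over the disk.

def first_fit_decreasing(file_sizes_mb, region_mb):
    """Place file sizes into fixed-size free regions, largest first."""
    regions = []  # each region is a list of the sizes placed in it
    for size in sorted(file_sizes_mb, reverse=True):
        for region in regions:
            if sum(region) + size <= region_mb:
                region.append(size)
                break
        else:
            regions.append([size])  # nothing fits, so start a new region
    return regions

files = [700, 300, 650, 120, 80, 900, 410]   # made-up sizes in MB
print(first_fit_decreasing(files, 1000))
# -> [[900, 80], [700, 300], [650, 120], [410]]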
 

Frazer Jolly Goodfellow

Arno said:
MS has
overlooked this invention for a few decades and stuck to the
historic and very, very outdated concept of "drive letters".

Arno, you appear to be outdated. NT-derived versions of Windows have had
the ability to mount drives and folder trees into other folder trees since
Windows 2000 - maybe earlier. Try searching on "mount drive as folder ntfs"
in Google.
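
For anyone who wants to try it, here is a minimal sketch (Python with ctypes,
run as administrator) of mounting a volume into an empty NTFS folder via the
Win32 SetVolumeMountPointW call; the folder path and volume GUID below are
placeholders, and the same thing can be done interactively in Disk Management
or with the mountvol command.

import ctypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

# Both paths must end with a backslash; the mount point must be an existing,
# empty directory on an NTFS volume. The GUID is a placeholder -- run
# `mountvol` with no arguments to list the real volume names on your system.
mount_point = "C:\\Data\\BigDrive\\"
volume_name = "\\\\?\\Volume{00000000-0000-0000-0000-000000000000}\\"

if not kernel32.SetVolumeMountPointW(mount_point, volume_name):
    raise ctypes.WinError(ctypes.get_last_error())
print("mounted", volume_name, "at", mount_point)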
 

Jim Jones

chrisv said:
Good analogy.

Lousy one, actually. You don't have folders/directories in a drawer.

Arno said:
Actually, disk defragging has aspects of a packing problem,
i.e. how to pack the files best with the fewest moves. These
problems have no fast exact solutions, and even approximations
grow in effort much more strongly than linearly.

Not relevant to the OP's problem with only 30 GB on a 1TB drive.

Arno said:
It does make sense to organize by both.

Nope, no point in farting around with multiple partitions with Win.

Too hard to get the sizes right initially, because the size needed
changes over time, and too dangerous to resize partitions without
a full backup, which that level of user hardly ever has.

Arno said:
Under Unix/Linux doing it that way is standard, as you can have symbolic links that seamlessly
(well, mostly) hook one tree into another across partitions. MS has
overlooked this invention for a few decades and stuck to the
historic and very, very outdated concept of "drive letters".

Hasn't been like that with Win for a hell of a long time now, since 2K.

Arno said:
(Stupid? Incompetent?) The easiest way under Windows is indeed separate partitions.

Wrong. By far the easiest is folder trees in a single partition.

That way the free space isn't scattered across the
partitions and no folder can ever run out of space.
 

Arno

In comp.sys.ibm.pc.hardware.storage Frazer Jolly Goodfellow said:
Arno, you appear to be outdated. NT-derived versions of Windows have
had the ability to mount drives and folder trees into other folder
trees since Windows 2000 - maybe earlier. Try searching on "mount
drive as folder ntfs" in Google.

I am not outdated and I know this can be done. It was possible with
special drivers even before. But why is nobody doing it? And why does
MS still have the drive letters?

Arno
 

sandot

Arno said:
Defragging is not a linear operation in the number of allocation
units on the drive. It gets more than proportionally slower. In
addition, it is possible that you hit some limit in the
implementation (data A replaces B and then gets replaced by B again,
and so on), which can give you a nearly arbitrary slowdown.

You do not need two 500GB drives. You can use two 500GB partitions
instead. The limit here is the filesystem size, not the drive
size.

Arno

I'm the OP. If the overhead exists for a defrag then a similar overhead
may appear on a 1TB drive during normal processing as the drive gets fuller. I'll
partition the 1TB drive into two 500GB partitions to cut down on the
extra workload caused by such large system structures.

Does what you say mean there's a sweet spot for the size of an NTFS
partition? I get the *intuition* that anything bigger than 250GB could
start to get sluggish.

(Assuming a 2-platter 7200rpm SATA drive with file sizes typical of a home
office system.)
 

Frazer Jolly Goodfellow

I am not outdated and I know this can be done. It was possible with
special drivers even before. But why is nobody doing it? And why does
MS still have the drive letters?

Arno

So what you're now saying is that with Microsoft you *can* do it the
Unix/Linux way as well, contradicting what you said earlier. Thanks for
clarifying that.
 

Jim Jones

Arno wrote:
I am not outdated and I know this can be done.

Hell of a contrast to your original claim.

So MS did NOT in fact stick to drive letters at all.
It was possible with special drivers even before.

Nothing to do with drivers.
But why is nobody doing it?

Plenty are. Most don't, simply because most only have a single partition on a single physical drive.
And why does MS still have the drive letters?

Essentially so anyone who still uses that approach can continue to use it.

It imposes no penalty to leave that in place.
 

Arno

Frazer Jolly Goodfellow said:
I am not outdated and I know this can be done. It was possible with
special drivers even before. But why is nobody doing it? And why does
MS still have the drive letters?

Arno

So what you're now saying is that with Microsoft you *can* do it the
Unix/Linux way as well, contradicting what you said earlier. Thanks for
clarifying that.

I am saying that the standard, generally known and accepted way
on Unix is symbolic links, while they remain obscure on MS
platforms, even after they were finally made possible. As I said,
I do know about them, which should give you a hint that I was
not referring to technological possibilities, but to widely used
practices. In a way those are even more important.

Arno
 

Arno

In comp.sys.ibm.pc.hardware.storage sandot said:
I'm the OP. If the overhead exists for a defrag then a similar overhead
may appear on a 1TB drive during normal processing as the drive gets fuller.

It does. It mainly applies to file creation and extension (making files
longer). However, it is far lower, as the OS does not try hard to place
files well; otherwise you would only need defragmentation in rare cases.
I'll partition the 1TB drive into two 500GB partitions to cut down
on the extra workload caused by such large system structures.

Does what you say mean there's a sweet spot for the size of an NTFS
partition? I get the *intuition* that anything bigger than 250GB could
start to get sluggish.

Depending on your usage pattern, it is quite possible. You cannot
really optimize this in advance. Just reduce the filesystem size when
things start to get slow.
(Assuming 2-platter 7200rpm SATA with file sizes typical of a home
office system.)

Not specific enough, sorry. This is also the reason why
disk benchmark results are of limited value.

Arno
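
If you do take the "shrink it when it gets slow" route, a trivial way to keep
an eye on how full each volume is (a Python sketch; the drive letters are
hypothetical, adjust to your own layout):

import shutil

for drive in ("C:\\", "D:\\"):                # hypothetical drive letters
    usage = shutil.disk_usage(drive)          # (total, used, free) in bytes
    print(f"{drive} {usage.used / usage.total:.0%} full, "
          f"{usage.free / 2**30:.0f} GB free")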
 

John Turco

Jim Jones wrote:

Nope, no point in farting around with multiple partitions with Win.

Too hard to get the sizes right initially, because the size needed
changes over time, and too dangerous to resize partitions without
a full backup, which that level of user hardly ever has.

Hello, Rod:

Perfectly true! In fact, I had that very problem, with my first PC (a
Pionex 486DX2/66MHz), running Windows 3.1/DOS 6.2. (It's still around
here, somewhere.)

It came with a Western Digital 425MB hard disk, and I upgraded it to a
WD 2.5GB, about two years later. Being stuck with such an ancient OS
and its lack of support for larger partitions, I was forced to chop up
my HDD into several small pieces.

Quite annoying, to say the least.

Wrong. By far the easiest is folder trees in a single partition.

That way the free space isn't scattered across the partitions and
no folder can ever run out of space.

Which was the most frustrating thing, on my old Pionex box.
 

Bilky White

Rod Speed said:
Some pathetic little gutless ****wit desperately cowering behind
Bilky White wrote just what you'd expect from a desperately
cowering gutless ****wit.

Woo hoo the muppet puppet dances to my tune yet again!
 
