Is defragging necessary?


HeyBub

Leythos said:
Why do you even consider discussing FAT-32?

You do know that the default cluster size for NTFS (anything modern)
is 4K in most instances, right?

In a FAT-xx system, the head has to move back to the directory to discover
the next segment. This is not the case with NTFS; pieces are read as they
are encountered and reassembled in the proper order in RAM.
How does that impact your math now?

It doesn't.
You might want to start learning about drives, formats, RAID,
clusters, etc... before you post again.

Heh! I'll wager I know more about the things you mentioned than you can ever
imagine. I started my career designing test suites for 2311 disk drives on
IBM mainframes and have, mostly, kept up.
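As a rough illustration of the mechanism being argued about, here is a deliberately simplified sketch (hypothetical Python, not the real on-disk structures): FAT keeps a next-cluster pointer per cluster, so a file's chain is discovered one link at a time (though the table itself is normally cached in RAM), while NTFS keeps a run list of (start, length) extents in the file's MFT record, so every fragment's location is known before the first data read. Either way, each non-contiguous fragment still costs a head movement.

# Simplified, hypothetical model of a FAT cluster chain vs. an NTFS run list.
# These are not the real on-disk formats, just the shape of the metadata.

END_OF_CHAIN = -1

def fat_clusters(fat, first_cluster):
    """FAT: each cluster's table entry points at the next cluster, so the
    chain is discovered one link at a time (normally from a cached table)."""
    cluster = first_cluster
    while cluster != END_OF_CHAIN:
        yield cluster
        cluster = fat[cluster]

def ntfs_clusters(run_list):
    """NTFS: the MFT record stores (start, length) extents, so every
    fragment's location is known before the first data read."""
    for start, length in run_list:
        yield from range(start, start + length)

# The same three-fragment file described both ways:
fat = {100: 101, 101: 500, 500: 501, 501: 900, 900: END_OF_CHAIN}
runs = [(100, 2), (500, 2), (900, 1)]

print(list(fat_clusters(fat, 100)))   # [100, 101, 500, 501, 900]
print(list(ntfs_clusters(runs)))      # [100, 101, 500, 501, 900]
# Either way, the head still has to visit three separate regions of the disk.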
 

Bill in Co.

HeyBub said:
In a FAT-xx system, the head has to move back to the directory to discover
the next segment. This is not the case with NTFS; pieces are read as they
are encountered and reassembled in the proper order in RAM.

That's not quite the whole story, though: the bottom line is that the files
are scattered in fragments all over the hard drive, no matter what file
system you are using, so there will have to be multiple disk seeks and
accesses to get them collected together into RAM. And if you've defragged
the drive, the number of wildly scattered storage locations for those
fragments will be greatly reduced (since they will be in more contiguous
sectors), so the net total seek and access time is reduced, naturally.
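To put rough numbers on that, here is a back-of-the-envelope model (the drive constants are assumptions for a typical 7200 rpm desktop disk, not measurements): total read time is roughly one seek plus half a rotation per fragment, plus the raw transfer time, so cutting the fragment count is exactly what defragging buys you.

# Back-of-the-envelope read-time model for a fragmented file.
# All constants are rough assumptions for a 7200 rpm desktop drive.

AVG_SEEK_S = 0.009          # ~9 ms average seek
HALF_ROTATION_S = 0.00417   # half a rotation at 7200 rpm (~4.17 ms)
TRANSFER_MB_S = 80.0        # sustained sequential transfer rate, MB/s

def read_time(file_mb, fragments):
    """One seek plus rotational latency per fragment, plus transfer time."""
    overhead = fragments * (AVG_SEEK_S + HALF_ROTATION_S)
    transfer = file_mb / TRANSFER_MB_S
    return overhead + transfer

for frags in (1, 10, 800, 8000):
    print(f"32 MB file in {frags} fragments: ~{read_time(32, frags):.2f} s")
# Roughly 0.41 s, 0.53 s, 10.9 s and 105.8 s respectively: the per-fragment
# overhead, not the transfer rate, is what dominates a badly fragmented read.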
 

Erwin Moller

HeyBub wrote:
Ignorance can be fixed - hence the original question. It's knowing something
that is false that's the bigger problem.

Considering your example of 8,000 segments: a minimum segment size of 4096
bytes implies a file of at least 32 MB. A FAT-32 system requires a minimum
of 16,000 head movements to gather all the pieces. In this case, with an
average access time of 12 msec, you'll spend over six minutes just moving
the head around. Factor in rotational delay to bring the track marker under
the head, then rotational delay to find the sector, and so on, and you're up
to ten minutes or so to read the file.

An NTFS system will suck up the file with ONE head movement. You still have
the rotational delays and so forth, but NTFS will cut the six minutes off
the slurp-up time.

Hi Heybub,

This is the second time I've heard you make this claim.
How do you 'envision' the head(s) reading all fragments in one go?

In your example: 8000 fragments. If these are scattered all over the
place, the head has to visit a lot of different locations before all the
information is in. Compare this to one continuous, sequential run of data,
where the head reads everything without extra seeking and/or skipping over parts.

Also, and especially on systems that need a huge swapfile, filling up
your HD a few times can lead to a heavily fragmented swapfile. This
carries a performance penalty.

I have seen serious performance improvements (on both FAT32 and NTFS)
after defragging (including the system files, with
http://technet.microsoft.com/en-us/sysinternals/bb897426.aspx)

Others claim the same. How do you explain that?

Erwin Moller


De-fragging an NTFS system DOES have its uses: For those who dust the inside
covers of the books on their shelves and weekly scour the inside of the
toilet water tank, a sense of satisfaction infuses their very being after a
successful operation.

I personally think Prozac is cheaper, but to each his own.


--
"There are two ways of constructing a software design: One way is to
make it so simple that there are obviously no deficiencies, and the
other way is to make it so complicated that there are no obvious
deficiencies. The first method is far more difficult."
-- C.A.R. Hoare
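For what it's worth, the arithmetic in the quoted example can be checked directly with a throwaway calculation using only the figures quoted above; with 16,000 head movements at 12 msec each, the pure seek overhead comes out closer to three minutes than six, though the underlying point, that per-fragment overhead dominates, stands either way.

# Quick check of the figures quoted above: 16,000 head movements at an
# average access time of 12 msec. Pure seek overhead only; rotational
# delays and transfer time come on top of this.
movements = 16_000
avg_access_s = 0.012
seek_total = movements * avg_access_s
print(f"{seek_total:.0f} seconds ({seek_total / 60:.1f} minutes) spent moving the head")
# -> 192 seconds (3.2 minutes) spent moving the head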
 

Leythos

In a FAT-xx system, the head has to move back to the directory to discover
the next segment. This is not the case with NTFS; pieces are read as they
are encountered and reassembled in the proper order in RAM.


It doesn't.


Heh! I'll wager I know more about the things you mentioned than you can ever
imagine. I started my career designing test suites for 2311 disk drives on
IBM mainframes and have, mostly, kept up.

And yet you don't seem to understand that on NTFS, file fragmentation
means that the heads still have to MOVE to reach the other fragments.

Try and keep up.
 

Twayne

Bob I said:
RAID 0 is nothing more than Mirrored Drives, it won't be
faster or more stable, only provides an identical copy in
the event a hard drive fails.

Jeez, quit guessing at what you "think" are the facts, dummy!

A RAID 0 (also known as a stripe set or striped volume) splits data evenly
across two or more disks (striped) with no parity information for
redundancy. It is important to note that RAID 0 was not one of the original
RAID levels and provides no data redundancy. RAID 0 is normally used to
increase performance, although it can also be used as a way to create a
small number of large virtual disks out of a large number of small physical
ones.
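Since the mix-up above is between striping and mirroring, a tiny sketch of the block placement may help (hypothetical Python; a real controller does this in firmware with far more bookkeeping): RAID 0 spreads consecutive stripes round-robin across the member disks for throughput and provides no redundancy, while RAID 1 writes every block to every disk.

# Minimal sketch of RAID 0 (striping) vs. RAID 1 (mirroring) block placement.
# Hypothetical model only; real arrays add caching, other RAID levels, etc.

def raid0_location(lba, num_disks, stripe_blocks):
    """RAID 0: consecutive stripes go round-robin across the disks."""
    stripe = lba // stripe_blocks
    disk = stripe % num_disks
    offset = (stripe // num_disks) * stripe_blocks + (lba % stripe_blocks)
    return disk, offset

def raid1_locations(lba, num_disks):
    """RAID 1: every block is written to the same offset on every disk."""
    return [(disk, lba) for disk in range(num_disks)]

# Logical blocks 0..7 on a two-disk array with a two-block stripe:
for lba in range(8):
    print(f"block {lba} -> {raid0_location(lba, 2, 2)}")
# block 0 -> (0, 0)   block 1 -> (0, 1)   block 2 -> (1, 0)   block 3 -> (1, 1)
# block 4 -> (0, 2)   block 5 -> (0, 3)   block 6 -> (1, 2)   block 7 -> (1, 3)

print(raid1_locations(5, 2))   # [(0, 5), (1, 5)]: two full copies, no striping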
 

Twayne

Erwin Moller <[email protected]> typed:
....
Hi Heybub,

This is the second time I've heard you make this claim.
How do you 'envision' the head(s) reading all fragments in
one go?
In your example: 8000 fragments. If these are scattered all
over the place, the head has to visit a lot of different locations
before all the information is in. Compare this to one continuous,
sequential run of data, where the head reads everything without extra
seeking and/or skipping over parts.

Also, and especially on systems that need a huge swapfile,
filling up your HD a few times can lead to a heavily fragmented
swapfile. This carries a performance penalty.

I have seen serious performance improvements (on both FAT32
and NTFS) after defragging (including the system files, with
http://technet.microsoft.com/en-us/sysinternals/bb897426.aspx)

Others claim the same. How do you explain that?

Erwin Moller
....

Remember, this is the guy who can suspend all laws of physics at his will.
There are a couple such people here in fact. It works for him because the
heads are "magnetic" and so are the data. But the head has a super-magnetic
mode: So, the head just comes down and sucks up all the data it needs from
the disk in one fell swoop. It can tell which ones to slurp up by the
arrangement of the magnetic field on the disk; so when the head goes
super-magnetic, it's only for those data parts that are of the right
polarity; the head just has to sit there until they all collect on it, and
then it moves them over to RAM to be used!
Sounds pretty simple to me! lol!

HTH,

Twayne`
 

Twayne

Leythos said:
(e-mail address removed) says...

My guess is that he's either a troll or some kid in school
that has no friends so he has to pretend to know something
here.

You may be right, but recall also that there is always the "a little
knowledge is dangerous" thing too. E.g., if "RAID is used for data
redundancy" was taught in school, then RAID 0 looks like just one of those
schemes. He may not yet have noticed that this is a world of generalities,
but of very, very specific generalities that don't intuitively cover all cases.

HTH,

Twayne`
 

Erwin Moller

Twayne wrote:
Erwin Moller <[email protected]> typed:
...

...

Remember, this is the guy who can suspend all laws of physics at his will.
There are a couple such people here in fact. It works for him because the
heads are "magnetic" and so are the data. But the head has a super-magnetic
mode: So, the head just comes down and sucks up all the data it needs from
the disk in one fell swoop. It can tell which ones to slurp up by the
arrangement of the magnetic field on the disk; so when the head goes
super-magnetic, it's only for those data parts that are of the right
polarity; the head just has to sit there until they all collect on it, and
then it moves them over to RAM to be used!
Sounds pretty simple to me! lol!


LOL, thanks for that excellent explanation. ;-)

I always find it difficult to decide when to respond and when not to.
In cases where I see serious misinformation, like here with Heybub, I
feel sorry for the people who don't know better and subsequently take that
kind of advice seriously.

Ah well, that is how usenet was, is, and probably always will be. ;-)

Regards,
Erwin Moller
HTH,

Twayne`



--
"There are two ways of constructing a software design: One way is to
make it so simple that there are obviously no deficiencies, and the
other way is to make it so complicated that there are no obvious
deficiencies. The first method is far more difficult."
-- C.A.R. Hoare
 

Twayne

Erwin Moller said:
Twayne schreef:


LOL, thanks for that excellent explanation. ;-)

I always find it difficult to decide when to respond and when not to.
In cases where I see serious misinformation, like here
with Heybub, I feel sorry for the people who don't know better
and subsequently take that kind of advice seriously.

Ah well, that is how usenet was, is, and probably always
will be. ;-)
Regards,
Erwin Moller

I know what you mean, Erwin. Sometimes there's an excuse for it, such as
when they just don't know better, but even then they have to be urged to pay
attention to the details.

Luck,

Twayne`
 
