"Deleting all the 'junk' on my hard drive made my computer run faster"


Raymond J. Johnson Jr.

Bruce said:
Well, it would certainly prove that you don't understand the glaring
difference between "flying" and "plummeting."

Merry Christmas and Happy New Year.
 

Bruce Chambers

Raymond said:
You're actually a bit dumber than I thought. One of the things I do for
a living is writing;

Not very well, apparently.
I don't need a nitwit like you to explain what an
analogy is.

Why then do you always resort to obscenities and name-calling? Have you
begun to realize that your self-proclaimed logic is somehow faulty and
inadequate to the task at hand?

The analogy makes no sense, although there was a time when
it might have. If I take a file and cut it up into 25 pieces and
scatter them across a disk, of course there is more energy and motion
required in reading the file than if the file were contiguous.

So you do agree that fragmentation of files can cause a degradation of
performance.
It does not necessarily
or logically follow, however, that the extra time it takes to read that
file will ever be perceptible, or cause a user to think there's
something wrong.

The additional time it takes to read one, single fragmented file will
be imperceptible, but computers don't ever read just one single file
during the performance of a task. System files, bits of the
applications open at the moment, and their several data and
configuration files are all being accessed - to the user's perception -
simultaneously. (The true access of the files occurs serially, of
course.) If all of these programs and files are badly fragmented, the
computer's task of accessing them all sequentially *will* take longer.
This even you have acknowledged.
If reading a contiguous file takes x amount of time
(with x equaling an imperceptible interval) and if the same read
operation takes x times five for a fragmented file, but x times five is
*still* imperceptible or negligible, where's the gain?

Try looking up the word "cumulative." Better yet, try supporting
several hundred machines on a corporate LAN for several years. Then,
you might begin to see just how woefully lacking are the so-called
"objective" conclusions based entirely upon controlled laboratory testing.

By the way, I notice you completely disregarded my question. If it's
not the defragmentation of the hard drive that noticeably improves a
computer's performance, what is it? Did the word "magic" strike too
close to home? Is it, after all, an accurate description of your
understanding of computers?


--

Bruce Chambers

Help us help you:



You can have peace. Or you can have freedom. Don't ever count on having
both at once. - RAH
 

Bruce Chambers

Leythos said:
LOL, I actually cracked up out loud when I read that one :)


Glad you liked it. ;-}

--

Bruce Chambers

Help us help you:



You can have peace. Or you can have freedom. Don't ever count on having
both at once. - RAH
 

Bruce Chambers

Raymond said:
Merry Christmas and Happy New Year.

Even a non-writer such as myself can recognize a "non sequitur." Very
good job of applying one.

--

Bruce Chambers

Help us help you:



You can have peace. Or you can have freedom. Don't ever count on having
both at once. - RAH
 

Tom

Raymond J. Johnson Jr. said:
It is a myth, to a certain extent, especially when it comes to registry
cleaners. Having junk in temp folders can occasionally cause specific
problems, and it's a good idea to clean them out occasionally. XP's Disk
Cleanup function works nicely for this purpose. There have been a lot
of discussions here about the relative merits of registry cleaning, but
none of the people who tout it ever have any objective evidence that it
does anything other than separate gullible people from their money.
The registry in XP is best left alone unless there is a *specific*
problem that one has good reason to believe is being caused by something
in it.

While I agree that 3rd party registry editors should never be used, as XP does just fine on its own (if one knows XP's registry well, then editing it directly is OK), NTRegOpt does a good job of compacting and minimising the total size of the registry hives, which can become bloated from changes made over time and from adding and removing programs. It is totally harmless; it does not change any settings or alter the hive, it just compacts it.

http://home.t-online.de/home/lars.hederer/erunt/
 

Alex Nichol

Grant said:
I hear this all the time from people who think they are computer
savvy. I cannot come up with any reasons why this could be true.
(except maybe a situation where one has so many files on the HD that
it impacts the size of the swap file such that it impacts
performance).

Deleting files that are on the hard disk will not make any difference of
itself. Cutting out unneeded files from the Startup, and avoiding
loading multiple copies of programs at once, can make a difference.
 

Leythos

Deleting files that are on the hard disk will not make any difference of
itself.

Alex, think about it for a second - were I to have a disk, badly
fragmented, and it contained several files that were 1GB in size that
were contagious, deleting those 1GB files would permit the addition of
new files that were not fragmented. So, while deleting files, large
ones, does not decrease the current state of fragmentation, it can
decrease the future state of fragmentation until such time as the freed
segments are consumed. This means that for the new files, the
unfragmented ones, performance would be better, however, deleting files
does not help the "current" state of fragmentation which means it does
not help performance.

This is a classic case of a question with a couple answers based on
conditions.

1) Deleting files does not help performance for files that remain.

2) Deleting files with large contagious area segments will help with
lowering future fragmentation rates until that space is use up.

3) The best method would be to delete the files and then defrag the
drive so that more contigious open spaces are available, and so that
more files are unfragmented.
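
A toy sketch of what point 3 means - a deliberately simplified model, not how NTFS actually lays out clusters:

    # Model the disk as a list of cluster owners; None marks a free cluster.
    def defrag_and_pack(disk):
        """Group each file's clusters together at the front of the disk,
        leaving all free space as one contiguous region at the end."""
        order = []
        for c in disk:
            if c is not None and c not in order:
                order.append(c)
        packed = []
        for name in order:
            packed += [name] * disk.count(name)
        return packed + [None] * (len(disk) - len(packed))

    disk = ["A", None, "B", "B", None, None, "A", None, "C"]
    print(defrag_and_pack(disk))
    # ['A', 'A', 'B', 'B', 'C', None, None, None, None]

After the pack, every file is in one piece and the free space forms a single open run, so the next files written have the best chance of landing unfragmented.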
 

Raymond J. Johnson Jr.

| In article <[email protected]>,
| (e-mail address removed) says...
| > Deleting files that are on the hard disk will not make any difference of
| > itself.
|
| Alex, think about it for a second - were I to have a disk, badly
| fragmented, and it contained several files that were 1GB in size that
| were contagious, deleting those 1GB files would permit the addition of
| new files that were not fragmented. So, while deleting files, large
| ones, does not decrease the current state of fragmentation, it can
| decrease the future state of fragmentation until such time as the freed
| segments are consumed. This means that for the new files, the
| unfragmented ones, performance would be better, however, deleting files
| does not help the "current" state of fragmentation which means it does
| not help performance.
|
| This is a classic case of a question with a couple answers based on
| conditions.
|
| 1) Deleting files does not help performance for files that remain.
|
| 2) Deleting files with large contagious area segments will help with
| lowering future fragmentation rates until that space is use up.
|
| 3) The best method would be to delete the files and then defrag the
| drive so that more contigious open spaces are available, and so that
| more files are unfragmented.
|
|
| --
| --
| (e-mail address removed)
| (Remove 999 to reply to me)

I was going to let this go, but this makes so little sense that a response
is appropriate. First, in order to enhance your credibility, you should
probably learn the difference between "contagious" and "contiguous." If you
have contagious files I don't blame you for wanting to get rid of them.
Next, there is no such thing as "'contigious' [contiguous] open spaces."
Free space is free space; how can a void be contiguous? Finally, what you
are proposing *might* be beneficial in terms of the future fragmentation of
files (or lack thereof), but there's no way to tell because whether new
files become fragmented or not is dependent on the sizes of free spaces
available at the time the files are saved. Using the XP defrag utility
doesn't consolidate free space. We are also assuming that what you're
proposing would actually result in perceivable performance improvement, and
all we've gotten so far on that subject is reference to objective data that
says that in general it doesn't, and your unsupported assertions that it
does.
 

Leythos

| In article <[email protected]>,
| (e-mail address removed) says...
| > Deleting files that are on the hard disk will not make any difference of
| > itself.
|
| Alex, think about it for a second - were I to have a disk, badly
| fragmented, and it contained several files that were 1GB in size that
| were contagious, deleting those 1GB files would permit the addition of
| new files that were not fragmented. So, while deleting files, large
| ones, does not decrease the current state of fragmentation, it can
| decrease the future state of fragmentation until such time as the freed
| segments are consumed. This means that for the new files, the
| unfragmented ones, performance would be better, however, deleting files
| does not help the "current" state of fragmentation which means it does
| not help performance.
|
| This is a classic case of a question with a couple answers based on
| conditions.
|
| 1) Deleting files does not help performance for files that remain.
|
| 2) Deleting files with large contagious area segments will help with
| lowering future fragmentation rates until that space is use up.
|
| 3) The best method would be to delete the files and then defrag the
| drive so that more contigious open spaces are available, and so that
| more files are unfragmented.
|

I was going to let this go, but this makes so little sense that a response
is appropriate. First, in order to enhance your credibility, you should
probably learn the difference between "contagious" and "contiguous." If you

I hit send and the spell checker did that as I blindly clicked accept; I
sent a cancel message and it was not honored - let it slide.
have contagious files I don't blame you for wanting to get rid of them.
Next, there is no such thing as "'contigious' [contiguous] open spaces."
Free space is free space; how can a void be contiguous? Finally, what you

You really need to look at drives more: many drives have large
contiguous open spaces - meaning many sectors in a block forming one
sequential open area at the sector level.
 

Leythos

Next, there is no such thing as "'contigious' [contiguous] open spaces."
Free space is free space; how can a void be contiguous? Finally, what you
are proposing *might* be beneficial in terms of the future fragmentation of
files (or lack thereof), but there's no way to tell because whether new
files become fragmented or not is dependent on the sizes of free spaces
available at the time the files are saved.

This is totally untrue. Using the term freespace in the context you imply cannot, nor does, count in this discussion. To put data on an empty drive, one has to have that drive formatted to a certain file system (ie NTFS). In this case, the drive, or partition, can have free space for data storage, but there are contiguous alignments of the clusters on that drive once it is formatted, and they are used contiguously as data is written to them.

He won't listen, he still thinks that fragmentation does not degrade
performance to a level that any user could perceive.
 

Tom

Raymond J. Johnson Jr. said:
| In article <[email protected]>,
| (e-mail address removed) says...
| > Deleting files that are on the hard disk will not make any difference of
| > itself.
|
| Alex, think about it for a second - were I to have a disk, badly
| fragmented, and it contained several files that were 1GB in size that
| were contagious, deleting those 1GB files would permit the addition of
| new files that were not fragmented. So, while deleting files, large
| ones, does not decrease the current state of fragmentation, it can
| decrease the future state of fragmentation until such time as the freed
| segments are consumed. This means that for the new files, the
| unfragmented ones, performance would be better, however, deleting files
| does not help the "current" state of fragmentation which means it does
| not help performance.
|
| This is a classic case of a question with a couple answers based on
| conditions.
|
| 1) Deleting files does not help performance for files that remain.
|
| 2) Deleting files with large contagious area segments will help with
| lowering future fragmentation rates until that space is use up.
|
| 3) The best method would be to delete the files and then defrag the
| drive so that more contigious open spaces are available, and so that
| more files are unfragmented.
|
|
| --
| --
| (e-mail address removed)
| (Remove 999 to reply to me)
Next, there is no such thing as "'contigious' [contiguous] open spaces."
Free space is free space; how can a void be contiguous? Finally, what you
are proposing *might* be beneficial in terms of the future fragmentation of
files (or lack thereof), but there's no way to tell because whether new
files become fragmented or not is dependent on the sizes of free spaces
available at the time the files are saved.

This is totally untrue. Using the term freespace in the context you imply cannot, nor does, count in this discussion. To put data on an empty drive, one has to have that drive formatted to a certain file system (ie NTFS). In this case, the drive, or partition, can have free space for data storage, but there are contiguous alignments of the clusters on that drive once it is formatted, and they are used contiguously as data is written to them.
 

Alex Nichol

Tom said:
This is totally untrue. Using the term freespace in the context you imply cannot, nor does, count in this discussion. To put data on an empty drive, one has to have that drive formatted to a certain file system (ie NTFS). In this case, the drive, or partition, can have free space for data storage, but there are contiguous alignments of the clusters on that drive once it is formatted, and they are used contiguously as data is written to them.

It is desirable to have files defragmented; and I regard it as important
that the free space is a continuous area as well. But this really has
nothing to do with the system being slowed simply by there being more
files on the disk.
 

Raymond J. Johnson Jr.

| In article <[email protected]>, (e-mail address removed)
| says...
| > > Next, there is no such thing as "'contigious' [contiguous] open spaces."
| > > Free space is free space; how can a void be contiguous? Finally, what you
| > > are proposing *might* be beneficial in terms of the future fragmentation of
| > > files (or lack thereof), but there's no way to tell because whether new
| > > files become fragmented or not is dependent on the sizes of free spaces
| > > available at the time the files are saved.
| >
| > This is totally untrue. Using the term freespace in the context you imply
| > cannot, nor does, count in this discussion. To put data on an empty drive,
| > one has to have that drive formatted to a certain file system (ie NTFS).
| > In this case, the drive, or partition, can have free space for data
| > storage, but there are contiguous alignments of the clusters on that drive
| > once it is formatted, and they are used contiguously as data is written to
| > them.
|
| He won't listen, he still thinks that fragmentation does not degrade
| performance to a level that any user could perceive.
|
| --
| --
| (e-mail address removed)
| (Remove 999 to reply to me)

And you insist on mischaracterizing what I'm trying to say here. Go back to
the start of the thread, and read the OP's question again, and my initial
response to you, when you posted something about a 386 and a tiny hard
drive, and I was trying to make the point that modern drives, being
physically smaller and much faster, make fragmentation *less* of an issue:
"Your allusion to a dinosaur machine and hard drive is the source of the
current fallacious beliefs about compulsive defragging. Modern drives
are much smaller physically and much, much faster, making the need for
defragging much less significant. Add NTFS to the mix and regular
defragging is unnecessary. In fact, the argument could be made that the
significant activity of defragging actually shortens the life of the
drive. Perhaps not by much, depending on how often one defrags, but in
most cases there's no real payback unless there's a specific performance
issue."

I never said that defragging is unnecessary, or unwarranted. In some cases
it's a good idea, but it is *not* necessary or beneficial for an everyday
home or small office user to do it on a regular basis, and there is
*empirical* evidence that supports that contention.
 

Raymond J. Johnson Jr.

Next, there is no such thing as "'contigious' [contiguous] open spaces."
Free space is free space; how can a void be contiguous? Finally, what you
are proposing *might* be beneficial in terms of the future fragmentation of
files (or lack thereof), but there's no way to tell because whether new
files become fragmented or not is dependent on the sizes of free spaces
available at the time the files are saved.
This is totally untrue. Using the term freespace in the context you imply
cannot, nor does, count in this discussion. To put data on an empty drive,
one has to have that drive formatted to a certain file system (ie NTFS).
In this case, the drive, or partition, can have free space for data
storage, but there are contiguous alignments of the clusters on that drive
once it is formatted, and they are used contiguously as data is written to
them.

Sorry, but you're not making any sense. There are indeed contiguous open
sectors, and they are indeed used contiguously as data is written to them
(and in that sense I misspoke earlier), but only to the extent that such
open space will hold the data being written. The phenomenon of
fragmentation is caused by the fact that the OS uses the first available
space regardless of size rather than looking for one contiguous area that
will hold all of the file to be written.
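
A small sketch of that first-fit behaviour - a toy model with made-up extent sizes, not a description of the actual NTFS allocator:

    # Free space as a list of (start, length) extents, in clusters.
    # Allocation is first-fit: take whatever gap comes first, splitting the
    # file across gaps whenever a single gap is not big enough.
    def allocate(free_extents, size):
        """Return the list of (start, length) fragments used for a new file."""
        fragments, remaining, updated = [], size, []
        for start, length in free_extents:
            if remaining > 0:
                used = min(length, remaining)
                fragments.append((start, used))
                remaining -= used
                if used < length:
                    updated.append((start + used, length - used))
            else:
                updated.append((start, length))
        free_extents[:] = updated
        return fragments

    # Only small gaps left: a 1000-cluster file ends up in three pieces.
    free = [(0, 300), (5000, 300), (9000, 500)]
    print(allocate(free, 1000))   # [(0, 300), (5000, 300), (9000, 400)]

    # One big gap (say, after deleting a large contiguous file): one piece.
    free = [(20000, 4000)]
    print(allocate(free, 1000))   # [(20000, 1000)]

Whether the new file fragments depends entirely on the sizes of the free extents available at the moment it is written, which is the point being argued above.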
 

Tom

Alex Nichol said:
It is desirable to have files defragmented; and I regard it as important
that the free space is a continuous area as well. But this really has
nothing to do with the system being slowed simply by there being more
files on the disk

Note, I didn't say "files", I stated data. And as data gets added, and used, it becomes fragmented, but the actual sectors will always be side by side. I also didn't say that the system slows by adding files; I was clarifying a point that Raymond made regarding what free space means to him.
 

Tom

Raymond J. Johnson Jr. said:
Next, there is no such thing as "'contigious' [contiguous] open spaces."
Free space is free space; how can a void be contiguous? Finally, what you
are proposing *might* be beneficial in terms of the future fragmentation of
files (or lack thereof), but there's no way to tell because whether new
files become fragmented or not is dependent on the sizes of free spaces
available at the time the files are saved.
This is totally untrue. Using the term freespace in the context you imply
cannot, nor does, count in this discussion. To put data on an empty drive,
one has to have that drive formatted to a certain file system (ie NTFS).
In this case, the drive, or partition, can have free space for data
storage, but there are contiguous alignments of the clusters on that drive
once it is formatted, and they are used contiguously as data is written to
them.

Sorry, but you're not making any sense. There are indeed contiguous open
sectors, and they are indeed used contiguously as data is written to them
(and in that sense I misspoke earlier), but only to the extent that such
open space will hold the data being written. The phenomenon of
fragmentation is caused by the fact that the OS uses the first available
space regardless of size rather than looking for one contiguous area that
will hold all of the file to be written.

HUH? First you say I am not making sense, but then you concur that you misspoke and admit that sectors are contiguous, and are written as such (essentially as I stated before). Then you state a phenomenon of fragmentation (which isn't anything phenomenal) since that is how it works in data storage. But you claim phenomenon by essentially concurring the same thing you agreed with me on in your first sentence. What makes sense now?
 

Leythos

I was trying to make the point that modern drives, being
physically smaller and much faster, make fragmentation *less* of an issue:
"Your allusion to a dinosaur machine and hard drive is the source of the
current fallacious beliefs about compulsive defragging. Modern drives
are much smaller physically and much, much faster, making the need for
defragging much less significant. Add NTFS to the mix and regular
defragging is unnecessary. In fact, the argument could be made that the
significant activity of defragging actually shortens the life of the
drive. Perhaps not by much, depending on how often one defrags, but in
most cases there's no real payback unless there's a specific performance
issue."

And many of us will tell you, again, that you are wrong. I have servers
and workstations alike that run a product called DisKeeper with the
auto-defrag option enabled; they defrag when there is idle time, and they
have been running the same disks for years. While I agree that
mechanical movement does indeed cause failure rates to increase, it's
not nearly as significant as you might think - I have drives in machines
that are almost 15 years old and still work, and most drives in the
machines are at least 1 year old - all being defragged as needed.

NTFS has nothing to do with fragmentation and performance. NTFS lets me
create smaller clusters on the disk than FAT did, so I get less slack
space waste, but it still fragments like FAT.
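
For the slack-space point, a rough comparison - the cluster sizes here are just typical defaults, not figures from anyone's volume in this thread:

    import math

    def slack_bytes(file_size, cluster_size):
        """Bytes wasted in the last, partially filled cluster of a file."""
        clusters = math.ceil(file_size / cluster_size)
        return clusters * cluster_size - file_size

    file_size = 10 * 1024                       # a 10 KB file
    print(slack_bytes(file_size, 4 * 1024))     # 4 KB clusters  -> 2048 bytes wasted
    print(slack_bytes(file_size, 32 * 1024))    # 32 KB clusters -> 22528 bytes wasted

Smaller clusters waste less space per file, but, as noted above, they do nothing to prevent fragmentation.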

While the current batch of drives in most computers is larger than most
people need, most people will not experience serious fragmentation, BUT
there are many who will. Those whose drives are more than 50% full could
easily benefit from a defrag/pack of the drive. If the head has to move
less, the throughput will be faster; it's as simple as that.
 

Leythos

Note, I didn't say "files", I stated data. And as data gets added, and used, it becomes fragmented, but the actual sectors will always be side by side. I also didn't say that the system slows by adding files; I was clarifying a point that Raymond made regarding what free space means to him.

In general, data is part of a file of some type - there are few
applications or things that write to the drives today that don't put the
DATA in a FILE. Files can and do become fragmented, that would also mean
the data is fragmented across non-consecutive clusters.

1) Files contain data.
2) Files get fragmented - means data is fragmented across clusters.
3) Fragmentation causes the r/w heads to move more than if the files
were not fragmented.
4) Movement of the heads, when not reading files/data, is a performance
loss that can be corrected by defragmenting and packing the drive.
5) Size of drive has nothing to do with fragmentation; all drives become
fragmented sooner or later.
6) When defragging a heavily fragmented file system, the performance
increases are easy to feel and measure.
 

Raymond J. Johnson Jr.

Raymond J. Johnson Jr. said:
Next, there is no such thing as "'contigious' [contiguous] open spaces."
Free space is free space; how can a void be contiguous? Finally, what you
are proposing *might* be beneficial in terms of the future fragmentation of
files (or lack thereof), but there's no way to tell because whether new
files become fragmented or not is dependent on the sizes of free spaces
available at the time the files are saved.
This is totally untrue. Using the term freespace in the context you imply
cannot, nor does, count in this discussion. To put data on an empty drive,
one has to have that drive formatted to a certain file system (ie NTFS).
In this case, the drive, or partition, can have free space for data
storage, but there are contiguous alignments of the clusters on that drive
once it is formatted, and they are used contiguously as data is written to
them.

Sorry, but you're not making any sense. There are indeed contiguous open
sectors, and they are indeed used contiguously as data is written to them
(and in that sense I misspoke earlier), but only to the extent that such
open space will hold the data being written. The phenomenon of
fragmentation is caused by the fact that the OS uses the first available
space regardless of size rather than looking for one contiguous area that
will hold all of the file to be written.

HUH? First you say I am not making sense, but then you concur that you
misspoke and admit that sectors are contiguous, and are written as such
(essentially as I stated before). Then you state a phenomenon of
fragmentation (which isn't anything phenomenal) since that is how it works
in data storage. But you claim phenomenon by essentially concurring the same
thing you agreed with me on in your first sentence. What makes sense now?

Your statement: "...the drive, or partition, can have free space for data
storage, but there are contiguous alignments of the clusters on that drive
once it is formatted, and they are used contiguously as data is written to
them."

What you describe as "contiguous alignments of clusters" can only store
files that are small enough to fit within any given contiguous space. If
the files to be stored are larger than the contiguous space available, the
OS will deposit as much as it can in the contiguous space and then move to
the next available free space. That is what fragmentation is all about. It is
inevitable unless a disk is defragmented and free space is consolidated
(which XP's defragger doesn't do) every time a file is saved or deleted.
 
