200GB Hard Disk

Gerry Cornell

Jonny

It will vary according to your usage of newsgroups and how you read. For me, leaving routine maintenance for two weeks was dramatically slowing the system, or it did until I upgraded from a P3 to a P4.

Some dbx files get very large, and this newsgroup is a good example. They are then slow to open and slow the initial stages of synchronisation. If a user has limited free disk space, then as the space becomes even more limited, performance can fall off rapidly until it grinds to a halt. System bottlenecks can dictate how frequently you need to undertake routine maintenance.

--

Regards.

Gerry
~~~~
FCA
Stourport, England

Enquire, plan and execute
~~~~~~~~~~~~~~~~~~~
 
Mak

Gerry,

Gerry Cornell said:
Mak

There have always been opposing viewpoints on this issue and rarely does
one side concede ground to the other.

Absolutely, you are entitled to your opposing viewpoints.
It's not illogical. If you partition along the lines of placing rapidly fragmenting files in separate partitions from those that do not fragment, you obviate the need to defragment large sections of the hard drive, save at particular times for particular reasons.

Rapidly fragmented files are accessed often (files don't get magically fragmented when left alone), and you want all the files that you access constantly as close to each other as possible - for performance, so that the heads don't fly all over the platter. Partitioning will not make those files fragment less, but it will surely make seek times longer when accessing them.
The larger the drive, the longer any disk utility takes, given other circumstances are comparable.

Actually, no. With the same set of files and the same fragmentation level, defrag will finish faster on the larger drive because there will be more free space for the defrag to work with.
Your argument is a two-edged sword. Files that are not contiguous increase the time taken to read / write.

Only assuming sequential reads dominate your day-to-day workload. Very unusual. Sysinternals FileMon can help determine your read / write patterns.

However, I feel you are missing the point: there is only one need for defrag - performance. Otherwise defrag wears out your drive and wastes your time. (I'm sorry, but I don't have my computer for defrag to work with. My computer is for me.) For performance, it is more important to place your files as close to each other as possible than to make them contiguous and place them far apart. Or, in other words, performance is better with fragmented files when they are close to each other than when they are contiguous and far away.
Your approach will tend to increase the number
of files that are not contiguous, especially where there is limited free
disk space.

The number will stay the same or will be smaller, because there will be more contiguous free space on a single partition - files will not be inherently fragmented on creation.
RAID is a particular type of technology, which I and many others do not use, so your statements based on that technology are simply not relevant. They may be relevant when you are directing your comments to another user of the same technology.

That was not a statement based on technology, but rather an observation on how partitioning can destroy the advantage of faster drive(s).
Experience based on observation. I am an accountant, and my experience over 40 years is that you can usually provide numbers to support any side of an argument. You just add or omit factors which strengthen your argument or weaken your opponent's. This may be a conscious or unknowing action by the compiler of the numbers.

I'm sorry, why would you consciously make up numbers supporting slower setups? I don't need the numbers you may come up with; the above was a suggestion to verify the benefits of a single partition vs. multiple. Humans are not measurement instruments; we have a tendency to feel (or observe, if you will) one way or the other, depending on what we had for breakfast.
The other week in the UK Parliament, a government minister stated, quoting government statistics, that there were 90-odd Polish plumbers currently working in the UK. A national newspaper, not believing the statement, identified within a day many more than 90 working in London alone. It emerged that the government statistics related only to employed persons and took absolutely no account of self-employed Polish plumbers, who greatly outnumber their employed counterparts.

No comments on how relevant the above is to partitions. Not a single
comment.
The way in which the computer is used and the user's resources and capabilities will determine the content of the routine maintenance of the computer. I am not into gaming, but many others are. The way gamers maintain their computers will be different from my approach because their priorities are different from mine. There is no point in expounding the benefits gained from compacting dbx files to someone who rarely uses Outlook Express.


Well, that's what you get if you partition using the tool provided with Windows, which lacks the facility to easily adjust partition sizes. If you had to dig a trench 100 metres long, a metre wide and a metre deep, would you prefer to do it using a pick and shovel or a mechanical excavator?

I would rather play some game on my computer; digging is not my hobby. Or, I would rather not dig the above trench at all and save buying a fancy expensive excavator to do it. So: don't make partitions, don't buy Partition Tragic.
 
Bob Eyster

Having a 10GB partition for the OS is asking for trouble in the long run. Even if you don't install programs on C:, some files will get installed there anyway. The OS will use up 2GB or more depending on what parts of the OS you install.

Bob Eyster
 
Bob Eyster

The reason for partitioning in the first place was that back in the DOS days, DOS could only read about 2GB at a time, so you had to partition the drive if it was larger than 2GB.


Bob Eyster
 
Gerry Cornell

Mak

Mak said:
Rapidly fragmented files are accessed often (files don't get magically fragmented when left alone), and you want all the files that you access constantly as close to each other as possible - for performance, so that the heads don't fly all over the platter. Partitioning will not make those files fragment less, but it will surely make seek times longer when accessing them.

Files are not fragmented by being accessed. They become fragmented when they are changed. Rewriting files leaves the available free space scattered all over the drive where there is no partitioning. This means that any new large file can be significantly fragmented when it is first written. You need less free space on a partition containing Windows and programmes or archived data, where changes occur occasionally, and more free space on a current data or virtual memory partition, where changes are taking place all the time.
Actually, no. With the same set of files and the same fragmentation level, defrag will finish faster on the larger drive because there will be more free space for the defrag to work with.

Actually, it can be the reverse, because one should allocate a higher percentage of free space to a data partition than to a Windows / programme partition.
Only assuming sequential reads dominate your day-to-day workload. Very unusual. Sysinternals FileMon can help determine your read / write patterns.

I have used FileMon. I would not dream of trying to draw such a conclusion from a programme that deluges the user with so much information. You cannot see the wood for the trees.
However, I feel you are missing the point: there is only one need for defrag - performance. Otherwise defrag wears out your drive and wastes your time.

Hard drives will usually become obsolete before they wear out. That's why many manufacturers offer a three-year warranty.
(I'm sorry, but I don't have my computer for defrag to work with. My
computer is for me).

I am with you there.
For performance, it is more important to place your files as close to each other as possible than to make them contiguous and place them far apart.

That is what I am doing. The fragments are closer together in a dedicated partition than they would be in a single partition, where they would be scattered all over the disk!

Or, in other words, performance is better with fragmented files when they are close to each other than when they are contiguous and far away.

Exactly! Close to each other. Thank you for arguing my case so well <g>.

Mak said:
The number will stay the same or will be smaller, because there will be more contiguous free space on a single partition - files will not be inherently fragmented on creation.

The size of files is a big factor. Large files, when written, will naturally break into many fragments because of the size of the available blocks of free space. The System Restore folders are a good illustration of what I mean.
That was not a statement based on technology, but rather an observation on how partitioning can destroy the advantage of faster drive(s).

I can only reply on the basis of what you say, not what I think you intended to say. If you want to continue to advance that as an argument, you will have to restate your case, omitting everything that relies on RAID technology.
I'm sorry, why would you consciously make up numbers supporting slower setups? I don't need the numbers you may come up with; the above was a suggestion to verify the benefits of a single partition vs. multiple. Humans are not measurement instruments; we have a tendency to feel (or observe, if you will) one way or the other, depending on what we had for breakfast.

Why do people bombard others with spam and nasties? Some will make up numbers, but more frequently people will put forward statements including numbers favourable to their objective and omitting those which are unhelpful. You can measure individual performances, but if the results are applied in the wrong way they can be demotivating and you end up with the wrong outcome.
No comments on how relevant the above is to partitions. Not a single
comment.

I made the point to demonstrate that often all is not what it seems to be.
I would rather play some game on my computer; digging is not my hobby. Or, I would rather not dig the above trench at all and save buying a fancy expensive excavator to do it. So: don't make partitions, don't buy Partition Tragic.

Partition Magic has never let me down.

--

Hope this helps.

Gerry
~~~~
FCA
Stourport, England

Enquire, plan and execute
~~~~~~~~~~~~~~~~~~~
 
Gerry Cornell

Bob

Only if it is set in stone!

You can manage on significantly less. It all depends on looking at the system's default disk space settings and what you relocate elsewhere. The defaults work against the user with the larger disks now available.


--

Regards.

Gerry
~~~~
FCA
Stourport, England

Enquire, plan and execute
~~~~~~~~~~~~~~~~~~~
 
Ken Blake, MVP

Bob said:
The reason for partitioning in the first place was that back in the DOS days, DOS could only read about 2GB at a time, so you had to partition the drive if it was larger than 2GB.


Not quite. Back in the days before drives as big as 2GB were even available,
many people had multiple partitions for the purpose of reducing cluster
size, thereby wasting less space to slack.

Having multiple partitions because the drive was larger than FAT16's limit
of 2GB per partition was a later development.
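
To put a rough number on the slack argument: FAT16's cluster size grows with partition size, and on average each file wastes part of its last cluster. A minimal sketch in Python (the cluster sizes are the standard FAT16 values; the file sizes are invented for illustration):

# Slack = space lost at the tail of each file's last cluster.
def slack(file_size, cluster_size):
    remainder = file_size % cluster_size
    return 0 if remainder == 0 else cluster_size - remainder

# FAT16 cluster sizes: 512MB partition -> 8KB, 1GB -> 16KB, 2GB -> 32KB.
files = [700, 3_500, 42_000, 1_200]  # file sizes in bytes (made up)
for cluster_kb in (8, 16, 32):
    wasted = sum(slack(f, cluster_kb * 1024) for f in files)
    print(f"{cluster_kb}KB clusters waste {wasted:,} bytes across these files")

Splitting a big drive into smaller partitions forced smaller clusters, which is exactly the saving described above.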
 
Tim Slattery

Bob Eyster said:
The reason for partitioning in the first place was that back in the DOS days, DOS could only read about 2GB at a time, so you had to partition the drive if it was larger than 2GB.

You're thinking of the FAT16 file system's limit of 2.1GB per partition. Remember that the original version of Win95 could handle only FAT16, so it's not only under DOS that we had to worry about that.

There are valid reasons to partition a disk, even though we don't
absolutely *have* to any more.
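
For the curious, the 2.1GB figure falls straight out of FAT16's design: cluster numbers are 16-bit, so a partition holds at most 65,536 clusters (a few values are reserved in practice) of at most 32KB each. A quick sanity check, illustrative arithmetic only:

# FAT16: 16-bit cluster addresses, 32KB maximum cluster size.
max_clusters = 2 ** 16
cluster_size = 32 * 1024
limit = max_clusters * cluster_size
print(limit)                  # 2147483648 bytes
print(limit / 1_000_000_000)  # ~2.1, the "2.1GB" decimal figure quoted above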
 
Mak

Gerry.

Gerry Cornell said:
Mak



Files are not fragmented by being accessed. They become fragmented when
they are changed.

"Accessed" does not automatically mean "accessed with read function", Gerry.
"Accessed" was used as opposite to 'not accessed', 'left alone'. To change
the file, you need to access it first. Does that make sense now?
Rewriting files leaves the available free space scattered all over the drive where there is no partitioning.

(Semantics: we are talking about multiple partitions vs. a single partition (still a partition).) I have no problem with free space all over the drive. What I do have a problem with is severely fragmented free space. More room to breathe on a single partition means more room for contiguous free space.
This means that any new large file can be significantly fragmented when it is first written.

There is no problem with the number of fragments, Gerry. The problem is only with the number of split I/Os compared to the total I/Os made to "access" the file. As I was trying to explain to you: sequential access is very, very rare. So, we have a severely fragmented file. When the file is accessed, it's normally _not_ read from the beginning to the end in one large I/O (something the makers of defrag programs don't tell you, and their synthetic tests don't show). It's read in chunks, and between the I/Os to the above file Windows will read / write other files, so the heads will move no matter what. So, it doesn't matter whether the file is contiguous or not. What matters is how close the next chunk of the above file is to the last file Windows was accessing (something that Prefetch addresses, or a smart defrag with file placement optimisation), and how many times I/Os were split because an I/O had to cross the border between two fragments. If that split I/O number _compared_ (as a percentage, for example) to the total number of I/Os is insignificant, so will be the performance benefit of running defrag.
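
As a toy illustration of that percentage (the fragment layout and read pattern below are invented; real figures would come from a trace in FileMon or from the PhysicalDisk "Split IO/Sec" performance counter):

# Toy model: what fraction of reads get split by fragment boundaries?
import random

boundaries = [64, 96, 160]   # fragment joins within the file, in 4KB blocks (made up)
file_blocks = 256            # file length in 4KB blocks
read_blocks = 16             # each application read is 64KB

def is_split(start):
    # A read is split if it crosses any fragment join.
    return any(start < b < start + read_blocks for b in boundaries)

random.seed(1)
starts = [random.randrange(0, file_blocks - read_blocks) for _ in range(10_000)]
ratio = sum(map(is_split, starts)) / len(starts)
print(f"~{ratio:.0%} of reads are split; the rest gain nothing from defrag")

Even with three fragments, most random reads never touch a join, which is the point about the split percentage mattering more than the fragment count.
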
You need less free space on a partition containing Windows and programmes or archived data, where changes occur occasionally

<sarcasm on> Nothing is written to %windir% and %programfiles% during the operation of a computer, ever. <sarcasm off>
Do you know for a fact how often stuff gets written to %windir% / %programfiles%?
and more free space on a current data

But not as much as there can be with a single partition.
or a virtual memory partition

WTF is a "virtual memory partition"? Windows doesn't have that.
But if you are talking (big stretch - the paging file is not virtual memory) about a partition where your paging file lives all by itself - that is a big no-no, a bad, stupid thing to do.
You want your paging file in the middle of your most used partition on your least used physical drive. Since we are talking about a single drive here, you want your paging file in the middle of your OS + Program Files partition (if you're still making multiple partitions).
where changes are taking place all the time.


Actually, it can be the reverse, because one should allocate a higher percentage of free space to a data partition than to a Windows / programme partition.

One should do no such thing. One should stop and think about the reasons for multiple partitions first, before looking up the price of PM.
I have used FileMon. I would not dream of trying to draw such a conclusion from a programme that deluges the user with so much information. You cannot see the wood for the trees.

There are no problems with the info in FileMon; besides, there are built-in filters for when "so much information" is present.
Hard drives will usually become obsolete before they wear out. That's why many manufacturers offer a three-year warranty.

You've got that backwards. Three years are offered because of the expected lifespan, not the other way around. I have a few 2GB SCSI drives (proxy cache) that are close to celebrating their 7th birthday. I will not replace them till they die.
I am with you there.


That is what I am doing. The fragments are closer together in a dedicated partition than they would be in a single partition, where they would be scattered all over the disk!

See above on the reading patterns; that's not what you are doing / promoting.
Exactly! Close to each other. Thank you for arguing my case so well <g>.

Read: close to the other frequently accessed files as well (reading patterns) - your setup lacks that.
The size of files is a big factor. Large files, when written, will naturally break into many fragments because of the size of the available blocks of free space. The System Restore folders are a good illustration of what I mean.

They will not break "naturally"; they break for a reason: lack of contiguous free space. Windows has no evil intentions in fragmenting your big files.
I can only reply on the basis of what you say, not what I think you intended to say. If you want to continue to advance that as an argument, you will have to restate your case, omitting everything that relies on RAID technology.

I don't "have to", Your Honour. You, however, _may_ make some effort, where / if it is due, to understand the example.
Why do people bombard others with spam and nasties? Some will make up numbers, but more frequently people will put forward statements including numbers favourable to their objective and omitting those which are unhelpful. You can measure individual performances, but if the results are applied in the wrong way they can be demotivating and you end up with the wrong outcome.

Relevance?
What I'm saying / suggesting is this: do the tests, keep the numbers and try not to lie to yourself. (You _don't_have_to_ do the tests; I'm not seeking to convert you at all - it's not important.)
I made the point to demonstrate that often all is not what it seems to be.

There wasn't a need for one, considering I'm not particularly interested in your numbers. I thought you would be; it looks like I was wrong.
Partition Magic has never let me down.

Never is such a long time, Gerry.
You are not the only one who uses PM.
I can say "PM will never let me down" - can you?

Not looking for one ATM. Simply having a discussion. <- scratch that if it's just a signature.>
 
Charlie Tame

Hehe, always a lively debate, and I think I have to agree "mostly" with Mak, so excuse the snippage, but here is my view of it.

Whatever you think are suitable sizes for partitions will be wrong. For example, a drive for movies, a drive for music and a drive for photos seems reasonable - until one fills up while the other two have a lot of free space left; then your plan is doomed. It might seem more logical than "folders" until you actually do it.

Some programs do not take kindly to being installed in anything other than their default directories. In addition, if you have a dual-boot system or mess with changing partitions later, all kinds of nasty things can happen when you "forget" what you did earlier. An OS partition + one "data" partition is okay, I guess, but then if you make the OS partition too small you run into the problem of creating more fragmentation and maybe the 75%-full problem. IOW, some "wasted" capacity is a fact of life; generally, more partitions = more space you have to waste.

If you are not careful, trying to be a "Purist" will have you "Fixing" or
"Maintaining" the computer more than enjoying it.

I agree absolutely that defrag utility vendors don't tell you all the facts. For example, if you have a very heavily fragmented file but only use it once a month, it will only affect you once a month, not all the time. Defrag is only necessary if you regularly delete or change files - an example would be the temporary internet files. Sure, it does no harm to have this sorted out, but it does not make as much difference as we would like to believe, having just lashed out $100 on some utility :)

The reason a separate partition for long-term data files helps is simply that, by its nature, such data never needs to be defragmented, thus removing a lot of the work for the defragmenter to do.

If you have an automated one like Diskeeper, then it is effective to have it run regularly because the time taken is minimised; alternatively, if you have a manual one, then run it right after deleting a bunch of files and accept an hour or two overnight...

Windows is Windows and Linux is Linux, and the same philosophy does not apply. In fact, trying to apply the Linux philosophy to Windows is probably counterproductive.

I make these observations after years of watching the results of enthusiasts playing with Norton Speed Disk, Diskeeper and other stuff :)

My overall conclusion is "if it makes you feel better, then do it; Windows doesn't really care". Any speed increase you achieve is more than compensated for by the time spent fiddling with it, but of course you forget that in the pursuit of your goal. We tend to attribute human traits to machines, and they are invalid: none read files the way we read a page.

Charlie
 
Jonny

Yep.

I have to flush the newsreader cache of this newsgroup in particular every month; it gets too big. A pause, then a longer and longer pause, develops when I switch from this newsgroup to another as this one grows in size. Other newsgroups I frequent don't have nearly the number of posts, so it's not a problem there. My newsreader database is not archived either. This large database is not really a fragmentation problem; it is more a matter of closing and opening database files.
 
Jonny

Basically, I base the partition size on the XP default swapfile location and what 3rd-party programs I intend to install, double that size for possible future 3rd-party installs and XP update stuff, then double that again for defrag allowances and slop. My final figure was 26GB.
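
In other words, the sizing is two successive doublings of a base estimate. As arithmetic (the 6.5GB base is back-solved from the 26GB result, not stated above):

# Jonny's sizing heuristic; the base figure is inferred (26 / 4), not quoted.
base_gb = 6.5               # OS + swapfile + planned 3rd-party programs (assumed)
with_future = base_gb * 2   # headroom for future installs and XP updates
final = with_future * 2     # defrag working space and slop
print(final)                # 26.0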
 
Bob Eyster

Win95 was a 16-bit, DOS-based OS; it would not run without DOS. The only Windows versions not requiring DOS were the NT OSes.


Bob Eyster
 
