Shrink basic volume


Robin Bignall

I've got a 2T USB3 external HDD (a LaCie, which are supposed to be
excellent quality) that I want to turn into two 1T volumes.
I used Perfect Disk to get all of the files up front (I see now that I
needn't have done that) and it's running. And running. And running.
I've never shrunk a volume before. Can I expect it to run more or less
as long as a defrag on that size of volume?
 

Flasherly

Robin said:
I've got a 2T USB3 external HDD (a LaCie, which are supposed to be
excellent quality) that I want to turn into two 1T volumes.
I used Perfect Disk to get all of the files up front (I see now that I
needn't have done that) and it's running. And running. And running.
I've never shrunk a volume before. Can I expect it to run more or less
as long as a defrag on that size of volume?

Depends on the defragger. I'm running ...

http://www.disktrix.com/screens.php

so it's different.

I try to speed it up by consolidating with small gaps permitted, and I
write its support scripts according to how and where I've placed
directories or portions of the disc ahead of time: whether things that
are accessed more or less often sit on the outside or the inside of the
disc.

Speed is secondary to avoiding a disc that thrashes and to good
organizational technique. Its main drawback (though few defraggers
offer it at all) is the lack of power-interruption fault tolerance. If
there's a power issue while it's working on a file that matters,
there's only one way to say what "messed up" means.
 

Paul

Robin said:
I've got a 2T USB3 external HDD (a LaCie, which are supposed to be
excellent quality) that I want to turn into two 1T volumes.
I used Perfect Disk to get all of the files up front (I see now that I
needn't have done that) and it's running. And running. And running.
I've never shrunk a volume before. Can I expect it to run more or less
as long as a defrag on that size of volume?

For a Windows 7 shrink, the max shrink is 50% (as long as no
other tool has been fooling with the NTFS metadata). There is some
metadata the OS doesn't know how to move, that PerfectDisk does know
how to move. So PerfectDisk is the "intervener" that enables
the Windows 7 built-in shrink to get below the 50% mark. Note
that PerfectDisk doesn't do a thorough enough job to get you to the
desired size in one shot. The metadata is moved, but it's not "jammed
against the left snugly".

You don't need PerfectDisk to go from a 2TB partition to a 1TB partition.
The OS capability should be able to do that by itself.
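
For what it's worth, here is roughly what the built-in route looks
like as a diskpart sketch, run from an elevated prompt. The volume
number, label and drive letter are made up, so check "list volume"
against your own setup first:

    rem sketch only - shrink the 2TB data volume by about 1TB,
    rem then build a second volume in the freed space
    list volume
    select volume 5
    rem amount to shrink by, in MB (roughly 1TB)
    shrink desired=1000000
    rem new partition in the space just freed on the same disk
    create partition primary
    format fs=ntfs quick label=Backup2
    assign letter=I

The same thing can be done in Disk Management with "Shrink Volume" on
the existing volume, then "New Simple Volume" in the unallocated space
it leaves behind.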

If you didn't defrag first, then even the OS tool will need
to move the data (using the internal defrag API to safely
move data around). The OS can move the data clusters,
but certain metadata cannot be moved by it.

In terms of moving the data clusters around, you pay the
same time price, whether the OS capability does it, or
PerfectDisk does it. The difference is, an extra few seconds
while PerfectDisk moves the metadata the OS doesn't know
how to move.

So if data was physically spread all over the 2TB partition,
*something* has to move the clusters below the 1TB mark.
You can do it in advance with PerfectDisk, in which case
the Windows 7 shrink should take no time at all. If you don't
use PerfectDisk, and just use Windows 7, it'll need the same
amount of time to move the data clusters out of the way.
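
If you want to see how much the OS thinks it can reclaim at any given
moment (that is, before anything further is moved), diskpart will
report it. The volume number is again just a placeholder:

    select volume 5
    shrink querymax

If that comes back well short of 1TB, something still sits above the
point where you want to cut.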

Depending on the kind of content, the fastest way to
do this kind of op is to copy the files off the 2TB,
delete the old partition, make a new partition, copy
the files back. (I use Robocopy for this.) Robocopy may
be included on the newer OSes, while on older OSes, you
download it. Note - the "mir" or mirror is dangerous.
Triple-check your command syntax! You've been warned.

robocopy Y:\ F:\ /mir /copy:datso /dcopy:t /r:3 /w:2 /zb /np /tee /v /log:y_to_f.log
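
A sketch of that round trip, assuming the LaCie is Y: and F: is some
other disk with enough free space (both letters and the staging folder
are made up for the example):

    robocopy Y:\ F:\staging\ /mir /copy:datso /dcopy:t /r:3 /w:2 /zb /np /tee /v /log:y_off.log
    rem ...then delete the old partition on the LaCie, create the two new
    rem 1TB volumes (Disk Management or diskpart), and copy back:
    robocopy F:\staging\ Y:\ /mir /copy:datso /dcopy:t /r:3 /w:2 /zb /np /tee /v /log:y_back.log

The /mir warning applies in both directions: point it at the wrong
destination and it will cheerfully "mirror" your data out of existence.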

If it is actually the OS C: partition,
then you'll need a second OS to do it. For example,
until mid-January, you could use Win8 Preview as your
second OS, and do the necessary maintenance. (On a desktop,
you put Windows 8 on a separate disk, and unplug Windows 7
until it's done. On a laptop, offline data movement with a
second OS, is much harder to achieve. In which case, you
unscrew the hard drive from the laptop, install it in your
desktop PC, and do your operations there.)

So in answer to your question, yes, defragmenting a
2TB volume, is going to take a while. If you were *not*
doing shrink at the moment, you can stop a defragmenter at
any time. There should be a Stop option. On the shrink,
I don't know how you safely stop that. It may be
inherently safe (due to using the defragmenter APIs designed
into the OS), but I'm not turning off the PC in
the middle of one of those, to find out :)

The reason the performance in megabytes/sec is so low on
the defragmenter API, is the transfers are supposed to be
done in a "power safe" way. If the defragmenter API is
being used, the setup is supposed to survive a power
outage. The transfer size is relatively tiny, and the disk
heads fly around a lot, dropping performance to the 1MB/sec
to 3MB/sec range. Even if the drive has a disk cache, the
commands issued may insist the data not be cached. So for the
most part, the design is brain dead and about as slow as
you can possibly do it.
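
To put a rough, purely illustrative number on that: at an effective
2MB/sec, relocating even 100GB of clusters works out to about

    100,000 MB / 2 MB/sec = 50,000 seconds, or roughly 14 hours

which is why a defrag or shrink with a lot of data to move can easily
run overnight.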

One of the older OSes, had a registry option where you could
claim your storage device was "power safe". Say for example,
you were on a super-UPS, where the disk would have power
for an entire day. You could set the registry entry, and
tell the OS that the power cannot possibly fail. The idea
was, a "more generous" sector transfer size could be
used, and the API would run faster. The info for doing this,
is hard to find on purpose, and if you look for it, I doubt
you'll find the bits and pieces necessary. I don't really think
they want people using that (because, guys like me would
flip that bit on my non-protected PC and just gamble on it).
So this idea is purely of historical interest.

Paul
 

Robin Bignall

Flasherly said:
Depends on the defragger. I'm running ...

http://www.disktrix.com/screens.php

so it's different.

I try to speed it up by consolidating with small gaps permitted, and I
write its support scripts according to how and where I've placed
directories or portions of the disc ahead of time: whether things that
are accessed more or less often sit on the outside or the inside of the
disc.

Speed is secondary to avoiding a disc that thrashes and to good
organizational technique. Its main drawback (though few defraggers
offer it at all) is the lack of power-interruption fault tolerance. If
there's a power issue while it's working on a file that matters,
there's only one way to say what "messed up" means.

Thanks, flash. This is a weekly backup disk plus archive, so no
thrashing (except during the copies that are running right now).
 

Robin Bignall

Thanks, Paul. Another keeper.
For a Windows 7 shrink, the max shrink is 50% (as long as no
other tool has been fooling with the NTFS metadata). There is some
metadata the OS doesn't know how to move, that PerfectDisk does know
how to move. So PerfectDisk is the "intervener" that enables
the Windows 7 built-in shrink to get below the 50% mark. Note
that PerfectDisk doesn't do a thorough enough job to get you to the
desired size in one shot. The metadata is moved, but it's not "jammed
against the left snugly".
This is a weekly backup plus archive disk, so Perfect Disk's defrag
setting is 'consolidate free space'. I let that run overnight and it
did a great job of shuffling everything to the front of the disk. Only
after I read the help did I realise that I needn't have done that just
for a 'shrink'. I let the shrink run while I had my dinner and it was
done when I got back.
You don't need PerfectDisk to go from a 2TB partition to a 1TB partition.
The OS capability should be able to do that by itself.

If you didn't defrag first, then even the OS tool will need
to move the data (using the internal defrag API to safely
move data around). The OS can move the data clusters,
but certain metadata cannot be moved by it.

In terms of moving the data clusters around, you pay the
same time price, whether the OS capability does it, or
PerfectDisk does it. The difference is, an extra few seconds
while PerfectDisk moves the metadata the OS doesn't know
how to move.

So if data was physically spread all over the 2TB partition,
*something* has to move the clusters below the 1TB mark.
You can do it in advance with PerfectDisk, in which case
the Windows 7 shrink should take no time at all. If you don't
use PerfectDisk, and just use Windows 7, it'll need the same
amount of time to move the data clusters out of the way.
Well, 'no time at all' is somewhere between me getting fed up watching
the Win7 wait circle revolving, and having my dinner!
Depending on the kind of content, the fastest way to
do this kind of op is to copy the files off the 2TB,
delete the old partition, make a new partition, copy
the files back. (I use Robocopy for this.) Robocopy may
be included on the newer OSes, while on older OSes, you
download it. Note - the "mir" or mirror is dangerous.
Triple-check your command syntax! You've been warned.

robocopy Y:\ F:\ /mir /copy:datso /dcopy:t /r:3 /w:2 /zb /np /tee /v /log:y_to_f.log
I don't have another disk big enough to do that.
If it is actually the OS C: partition,
then you'll need a second OS to do it. For example,
until mid-January, you could use Win8 Preview as your
second OS, and do the necessary maintenance. (On a desktop,
you put Windows 8 on a separate disk, and unplug Windows 7
until it's done. On a laptop, offline data movement with a
second OS, is much harder to achieve. In which case, you
unscrew the hard drive from the laptop, install it in your
desktop PC, and do your operations there.)
No, it wasn't the system.
So in answer to your question, yes, defragmenting a
2TB volume, is going to take a while. If you were *not*
doing shrink at the moment, you can stop a defragmenter at
any time. There should be a Stop option. On the shrink,
I don't know how you safely stop that. It may be
inherently safe (due to using the defragmenter APIs designed
into the OS), but I'm not turning off the PC in
the middle of one of those, to find out :)
No, nor me!
The reason the performance in megabytes/sec is so low on
the defragmenter API, is the transfers are supposed to be
done in a "power safe" way. If the defragmenter API is
being used, the setup is supposed to survive a power
outage. The transfer size is relatively tiny, and the disk
heads fly around a lot, dropping performance to the 1MB/sec
to 3MB/sec range. Even if the drive has a disk cache, the
commands issued may insist the data not be cached. So for the
most part, the design is brain dead and about as slow as
you can possibly do it.
Yep, I can see that, moving the archive on H: to the new I:.
One of the older OSes, had a registry option where you could
claim your storage device was "power safe". Say for example,
you were on a super-UPS, where the disk would have power
for an entire day. You could set the registry entry, and
tell the OS that the power cannot possibly fail. The idea
was, a "more generous" sector transfer size could be
used, and the API would run faster. The info for doing this,
is hard to find on purpose, and if you look for it, I doubt
you'll find the bits and pieces necessary. I don't really think
they want people using that (because, guys like me would
flip that bit on my non-protected PC and just gamble on it).
So this idea is purely of historical interest.
Don't have a UPS and have never come across that.
 

Robin Bignall


Concerning this shrink: I just found that it turned on (=1) the
registry key corresponding to PerfectDisk's "disable performance
counters" setting (the counters that show up in HD Sentinel), and I had
to turn it back off (=0) to see the performance statistics again.
Annoying.
 
