My daughter's hard disk became unallocated!

Sydney

The PC has 2 hard disks. The boot one was created in November with a fresh
Windows XP Home install. It replaced the old Hitachi disk, which was very slow.
The Hitachi was installed as a primary slave to access the data.
Last week, my daughter asked me to retrieve her personal contacts, which were
on the Hitachi. So I had to connect it as primary master and boot from it.
That worked fine.
When returning to the actual boot disk, the Hitachi disappeared and Windows sees
it as unallocated.
I suspect my USB memory key is the culprit.
Are the data lost?
What to do now?
Please help.
 
Paul

Sydney said:
No, it is not. All HDs are set to cable select. Thanks anyhow.

I assume this is a desktop computer ?

The first step is to enter the BIOS and verify that it sees
two disk drives. There may be an IDE setup screen, which reports
the detected hard drives and optical drive. If the ASCII text
name of the drive is distorted, then you'd know there was a
communications problem on the cable. The BIOS screen shows the
results of the minimum effort to talk to the drive.

If you're not seeing the disk in the BIOS, there is no point
in working to find it in Windows.

You'd review your cable setup and jumpers, to see if you missed
something. Do both drives have power cables plugged in ? Is
the power connector seated ? Are there any burned pins on the
power connector ? Verify that if there are two drives on the
cable, they're Master:Slave or CS:CS (and that the cable is
80 wire if CS:CS is chosen).

If the drives are both detected in the BIOS, then you can use some
OS to work on them. I use a Linux LiveCD, such as Ubuntu or Knoppix, to
work on Windows disks. Linux can now read NTFS and FAT32, so it is no
problem to work there.

If they're visible in the BIOS, your next stop could be
Disk Management in Windows. If you don't know how to find it,
basically just run diskmgmt.msc . If you see two Disk entries
but no partitions on the second disk, then it could be that
the MBR (Master Boot Record, the sector that holds the four primary
partition entries) is corrupted.
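As an aside, the four-entry layout of the MBR can be illustrated with a
short parser. This is a hypothetical sketch (offsets are from the classic
MBR format: partition table at byte 446, 16 bytes per entry, 0x55AA
signature at the end of the sector), not part of any tool mentioned here:

```python
import struct

def parse_mbr(sector):
    """Parse the four primary partition entries from a 512-byte MBR.

    Each 16-byte entry starts at offset 446; the sector must end with
    the 0x55AA boot signature, or the table is considered corrupt.
    """
    if len(sector) != 512 or sector[510:512] != b"\x55\xaa":
        return None  # no valid boot signature: the MBR looks corrupted
    entries = []
    for i in range(4):
        off = 446 + 16 * i
        boot_flag, ptype, lba_start, num_sectors = struct.unpack_from(
            "<B3xB3xII", sector, off)  # skip the two 3-byte CHS fields
        if ptype != 0:  # type 0x00 means the slot is unused
            entries.append({"bootable": boot_flag == 0x80,
                            "type": ptype,
                            "start_lba": lba_start,
                            "sectors": num_sectors})
    return entries
```

A corrupted or zeroed table parses to an empty list or `None`, which is
roughly the state Disk Management reports as "unallocated".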

The "TestDisk" program is an example of a free utility that can
examine a disk and try to reconstruct the MBR. In the process of
doing that, the program will be doing lots of reads on the disk,
so it would give you some idea whether the disk is dying or not,
just based on whether any errors are thrown or not.

http://www.cgsecurity.org/wiki/TestDisk

TestDisk isn't likely to be the solution to your problem. I find that
about 50% of the time, if I were to accept what it found, it would
mess up the disk. It requires knowledge from the user, as to whether
what it finds is reasonable or not. For example, if you know the disk had
three partitions (C:, D:, hidden recovery) and it didn't find three
entries, you'd know better than to accept its findings.

In any case, at least work up to the step of using diskmgmt.msc
to prove the disk is detected. If the disk is not there, you have
to work on getting the hardware to access it, before any more progress
can be made.

If a disk has an internal failure, it is designed not to respond.
I disagree with the design philosophy, but there
it is. For example, some disks "disappear" when a defect table used
in the firmware overflows. So in some cases the disk dying is neither
mechanical nor logical, but a firmware bug. At least some firmware-induced
bugs can easily be fixed by data recovery companies.

In other cases, your own senses can give you an idea what happened to
the drive. I had an old 2GB drive, and one day when I turned on the
computer, I heard a loud "sproing" sound. That was the head assembly
getting snagged in the landing ramp and being torn to shreds. When
you hear a noise like that, you don't need a copy of TestDisk :-(

Paul
 
Sydney

Paul said:
[snip]

Paul, thanks for this thorough answer and comments. That is a lot of care.
The BIOS sees two disks. Cable setup and jumpers are correct.
Windows Disk Management sees a 137 GB unallocated disk (it is 160 GB) and
starts the initialize-and-convert wizard for the disk; I refused that.
Neither Windows Explorer nor the disk defragmenter sees the disk.
Ubuntu 9.04 (Linux) sees an 8.2 GB disk with 814 bad sectors. It reallocated
809 sectors.
No change in Windows behavior after that.
Should I run TestDisk under Ubuntu, since Windows does not see the disk properly?

Your advice is highly appreciated.
 
Jan Alter

Sydney said:
[snip]


It might be worth running a Hitachi diagnostics tool to check whether the drive
is in order. Since you didn't mention which of their drives you have, you
would need to check the model before downloading the proper tool.


http://www.hitachigst.com/hdd/support/download.htm#DFT
 
Jan Alter

The only diagnostic tool I see at your link is the "OGT diagnostic tool".
It doesn't refer to the Deskstar model which I have.



Download the 'CD Image' on that page and burn the ISO. It should give you a
bootable disc that should work with the Deskstar. Read the PDF for it.
 
Paul

Sydney said:
[snip]

My first concern is your OS reporting a "137 GB" disk. That suggests
your WinXP doesn't have a recent enough Service Pack installed.

Imagine the following scenario. You have a 160GB disk. It has a single
partition that uses all the space. Now, connect the disk to an OS that
only supports up to 137GB. The OS attempts a write to a location on
the disk, above the 137GB mark. Instantly, the file system is corrupted,
due to address rollover on the IDE interface.
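That rollover scenario can be illustrated numerically. A sketch, assuming
512-byte sectors; `wrapped_lba` is a hypothetical helper showing the
effect, not a real API:

```python
SECTOR = 512
LBA28_SECTORS = 1 << 28          # 268,435,456 addressable sectors

def lba28_limit_bytes():
    """Capacity ceiling of a 28-bit LBA interface: ~137.4 GB."""
    return LBA28_SECTORS * SECTOR

def wrapped_lba(lba):
    """Where a 28-bit controller actually sends a request aimed at
    `lba`: the upper bits are silently dropped (address rollover)."""
    return lba & (LBA28_SECTORS - 1)

# A write aimed at the 150 GB mark of a 160 GB disk...
target = 150 * 10**9 // SECTOR
# ...lands instead near the start of the disk, on top of the
# file system structures stored there.
landing = wrapped_lba(target)
```

That is why a single write above the limit can instantly corrupt the
file system: the data lands on top of the MBR area and early metadata.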

This document addresses the issue a bit. It says, for Windows XP, you should
be using Service Pack 1 or higher. If all you have is the original
WinXP Gold release, then you could cause problems for the information
on that disk. If you look in your Control Panels, for the "System" one,
it will tell you the current Service Pack. Mine says "Version 2002
Service Pack 3".

http://web.archive.org/web/20070121085230/http://www.seagate.com/support/kb/disc/tp/137gb.pdf

I would not run any tools on the disk, until I was absolutely sure
the computer you're using can handle a 160GB disk properly.

The Seagate document suggests an UltraATA/133 PCI controller card
as one solution. Another solution would be to use a USB to IDE
disk enclosure for the hard drive. As far as I know, the USB
Mass Storage driver comes in Service Pack 1 or later.

The Ubuntu disc reporting an 8.2GB disk, suggests the motherboard
is reporting a strange CHS value. The motherboard should be
set for LBA, in which case a bogus value of CHS is used to
signal that LBA is in use. I don't think I've ever had a Linux
CD do that here. I have one 10 year old computer, that only supports
up to 137GB in hardware, and don't remember seeing that as a
symptom (8.2GB disks). I'd want to drop down into the BIOS setup screens
and verify whether someone has been messing around with the settings
there.

CHS can only handle storage up to a certain size, and then a
magic CHS value is supposed to indicate to the system that
LBA is to be used.

http://en.wikipedia.org/wiki/Cylinder_Head_Sector

http://www.ibiblio.org/pub/Linux/docs/HOWTO/other-formats/html_single/Large-Disk-HOWTO.html

"Hard drives over 8.4 GB are supposed to report their geometry as 16383/16/63.
This in effect means that the `geometry' is obsolete, and the total disk size
can no longer be computed from the geometry, but is found in the LBA capacity
field returned by the IDENTIFY command. Hard drives over 137.4 GB are supposed
to report an LBA capacity of 0xfffffff = 268435455 sectors (137438952960 bytes).
Now the actual disk size is found in the new 48-bit capacity field."
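The magic values in that quote can be checked with a little arithmetic; a
sketch assuming 512-byte sectors. The ~8.4 GB CHS ceiling is also roughly
the figure the old BIOS and the Ubuntu disc were reporting for the 160 GB
drive:

```python
SECTOR = 512

def chs_capacity(cylinders, heads, sectors):
    """Bytes addressable by a given CHS geometry (512-byte sectors)."""
    return cylinders * heads * sectors * SECTOR

# "Geometry is obsolete" marker reported by drives over 8.4 GB:
MAGIC_GEOMETRY = (16383, 16, 63)
# LBA-capacity marker reported by drives over 137.4 GB:
MAGIC_LBA_CAP = 0xFFFFFFF        # 268,435,455 sectors

chs_ceiling = chs_capacity(*MAGIC_GEOMETRY)   # ~8.4 GB
lba28_ceiling = MAGIC_LBA_CAP * SECTOR        # ~137.4 GB
```

So a system that falls back to raw CHS tops out around 8.4 GB, and one
without 48-bit LBA support tops out around 137.4 GB, which matches the
two truncated sizes seen here.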

In any case, you're not ready for tools like TestDisk, until
the issues with seeing the disk fully are resolved.

On my oldest computer, I can plug in my Promise Ultra133 TX2
PCI card, as a means to handle large disks (160GB or larger).
Promise has stopped making those, but if you have one in your
junk box, install it and its driver, and give that a try.

To summarize:

1) To handle a 160GB disk, both the hardware and the operating system
must be able to deal with the large disk. You have received two
indications (137 GB in Windows, 8.2 GB in Linux) that something
is wrong with the reported geometry, as if the hardware isn't
capable of operating with a large drive.

2) Windows will refuse to make a partition larger than 137 GB if
it isn't patched to the right Service Pack level. I don't
really think that is your problem - the problem could be
the age of the motherboard being used.

3) If the motherboard was designed before 2003, it is possible
it isn't ready for large disks. If you use a PCI IDE controller
card, you can fix that. Generally, "Ultra133" type cards are
recommended, as Ultra133 is a feature of ATA/ATAPI 7, and was
released after there was 48 bit LBA support for large disks.
So when someone recommends an Ultra133 card, it is with the intention
of getting a recent enough card to also have 48 bit LBA support.
The fact the card supports large disks, may not be documented.
The Ultra133 is a visible marketing term, while 48 bit LBA is
less prominently mentioned.

The ATA/ATAPI spec versions and feature sets are in a table here.
If you get a card with Ultra133 support, that means the card
was made around ATA/ATAPI-7 timeframe. Whereas, the feature you
want, is support for 48 bit LBA, which came in ATA/ATAPI-6.
There are some Ultra100 cards, that with a firmware update,
are ready for large disks.

http://en.wikipedia.org/wiki/ATA/ATAPI

*******
Picture of an Ultra133 TX2.

http://ic.tweakimg.net/ext/i/1011195793.jpg

Examples of the kinds of cards you can find now. This one uses
an ITE8212 IDE chip (something that used to be provided on
some motherboards).

http://www.newegg.com/Product/Product.aspx?Item=N82E16815158081

This card uses a VT6421A, and has one IDE connector and two SATA.
When using a SATA drive with this, you'd insert the Force150 jumper.
For IDE, there should be no special precautions.

http://www.newegg.com/Product/Product.aspx?Item=N82E16815158092
*******

Using an add-in card would require a driver.

As long as you haven't allowed the OS to write to the disk,
the information could still be OK on it. If you've been
"reformatting" or doing other kinds of stuff, it could be
in a real mess.

Paul
 
Jan Alter

Paul said:
[snip]

Nice catch with the '137 GB', Paul. That went right by me. It would be nice to
know how old the machine is and, just as important, whether the OS has been
patched to SP2. If the patches are applied and the disk still can't be seen,
or not in its entirety, I would then try the diagnostics utility, although
I don't quite see a problem in trying it at any time, as long as one doesn't
ask the utility to do any formatting.
 
Sydney

Paul said:
[snip]

For your information, I reinstalled the Hitachi disk in my daughter's
original PC.
It is recognized by the BIOS with an LRG ATA 100 8192 MB configuration.
If I switch to LBA, it's still not bootable. That PC is WinXP Home SP3, and
the motherboard was built in 2004.

The earlier results (137 GB and 8.2 GB) were collected on one of my own PCs
(WinXP SP3), but it is old, maybe from 2001. I think we should ignore them.
So I have installed the Hitachi disk in another one, bought in 2002 (I call
it A7PRO).
Its BIOS shows the disk with a 164.7 GB capacity. Windows XP Pro SP3 Disk
Management shows 3 partitions: 1) sane, active, 19.6 GB; 2) sane, 149 GB;
3) 16 GB unallocated. I think we can rely on this information.
Windows Explorer responds for both sane partitions with "not formatted,
would you like to format?".
Maybe I can run TestDisk on this computer. What do you think?
 
Paul

Sydney said:
[snip]

OK, *maybe* this computer is working a bit better.

What I recommend, when doing any kind of data recovery operation,
is backing up the drive to a second hard drive. Using a tool like
"dd", you can copy every sector from the broken disk, to a second disk
which is the same size or larger than the original disk. The purpose of
doing this, is so your repair efforts will not cause further damage.

This is a port of "dd" for Windows. It will display information about
the disk, when you use "dd --list"

http://www.chrysocome.net/dd

This is how I'd copy an entire smaller hard drive, to an equal sized
or larger second hard drive.

For example,

dd if=\\?\Device\Harddisk0\Partition0 of=\\?\Device\Harddisk1\Partition0

will copy Harddisk0 to Harddisk1, including the MBR and all partitions.
It doesn't matter whether the partitions are logically damaged or not.
"Partition0" is a shorthand which means start at offset 0 of the disk and
copy the whole thing. References to "Partition1", "Partition2" would make
it possible to copy individual primary partitions (i.e. MBR not copied).
For the purposes of preserving the state of the damaged disk, I always
recommend a backup type operation first.

TestDisk is a "repair in place" tool. Which means it can do additional
damage to the disk, over and above what already exists. As long as you
don't accept any of its attempts to write to the disk, nothing would be
harmed. But if you accepted an invitation to overwrite the MBR sector,
which contains the primary partition table entries, that *could* be a
disaster. If you're in a menu, and are uncertain of the safety of the
command, you can press <control>-C to instantly quit the program.
(Some menus don't have a quit option, but I've found the Unix keyboard
shortcut control-C works within that program.)

Another tool you can use, is PTEDIT32.

PTEDIT32 for Windows
ftp://ftp.symantec.com/public/english_us_canada/tools/pq/utilities/PTEDIT32.zip

PTEDIT32 screenshot
http://www.vistax64.com/attachments...n-partiton-recovery-dell-xps-420-dell-tbl.gif

If you use that tool in Windows, it will allow you to write down the partition
information of the damaged disk. You could use PTEDIT32, before using TestDisk
for example.

The TestDisk web page, has some examples of how you can use the tool. It
cannot fix all possible disk problems, only a few.

http://www.cgsecurity.org/wiki/TestDisk_Step_By_Step

So if you're going to use TestDisk, you'd at least want the disk to be copied
to another for safety.

The other approach is "file scavenging", where you use tools which search
for files or file fragments. I don't know whether tools like this can handle
a heavily fragmented disk well or not. I've tested PhotoRec here, and it
didn't recover anything for me. One poster here used the "driverescue"
program and claims to have got the important files off his NTFS disk. I
haven't tested it myself. Driverescue was originally a free program; the
author sold it to some other company, and it was removed from his web site.
The link below is an archive site.

http://www.cgsecurity.org/wiki/PhotoRec

http://www.pricelesswarehome.org/WoundedMoon/win32/driverescue19d.html

If you use a file scavenger program, you need a second disk big enough to
hold the output from the program.
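As a rough illustration of how signature-based scavengers locate files
without any help from the file system, here is a toy sketch that hunts for
JPEG markers in a raw image. This is illustrative only, and is not how
PhotoRec or driverescue are actually implemented:

```python
JPEG_SOI = b"\xff\xd8\xff"   # JPEG start-of-image signature
JPEG_EOI = b"\xff\xd9"       # JPEG end-of-image marker

def carve_jpegs(raw):
    """Return (offset, blob) pairs for JPEG-like runs found in a raw
    disk image, ignoring whatever file system may have been there."""
    found, pos = [], 0
    while True:
        start = raw.find(JPEG_SOI, pos)
        if start < 0:
            break
        end = raw.find(JPEG_EOI, start + len(JPEG_SOI))
        if end < 0:
            break  # header without a trailer: stop scanning
        found.append((start, raw[start:end + len(JPEG_EOI)]))
        pos = end + len(JPEG_EOI)
    return found
```

Real carvers know many more signatures and cope with fragmentation; this
naive version assumes each file is stored contiguously, which is exactly
why fragmented disks are hard for this class of tool.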

So, to proceed safely, you need storage space, and a careful philosophy to
using the free tools.

If you have money to spend, there are any number of $39.95 programs that
claim to be able to recover files. I haven't tested them, so can't comment
on whether any of them are good or not.

I have no confidence in the Windows "chkdsk" to fix things. I had a
trivial problem on a disk one day; the data was still intact, but
chkdsk would exit with an error and would not attempt a repair.
I copied the files off the damaged disk and reformatted the partition,
before moving the files back. I don't really have a good suggestion
for anything that might involve repairing a partition - using a file
scavenger right now is about the best I can offer.

So if you came to me with a broken disk, and asked for help, I would
need to have two spare disks on hand of equal or greater size, to use
for backups or file recovery operations.

When a disk has actual bad sectors, that will prevent the regular "dd"
program from doing the job in a reasonable time, or in a logically correct
way. The cgsecurity site has a web page dedicated to the handling of
physically bad disks, and in that case, the tool they recommend is only
available in Linux. What such tools do is substitute a sector of zeros
in place of a sector which cannot be read. This keeps the sector offsets
correct, so that most of the file system structures will still be
consistent. And by not spending a lot of time retrying unreadable sectors,
the backup operation might take only hours to run, instead of years.

http://www.cgsecurity.org/wiki/Damaged_Hard_Disk
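
Just as a rough illustration of the zero-substitution idea (this is a
sketch in Python, not the code of any particular tool; real rescue tools
such as GNU ddrescue also retry reads, log the bad regions, and grab the
easy areas first), the core of the copy loop looks something like this:

```python
SECTOR = 512

def rescue_copy(read_sector, total_sectors, out):
    """Copy a disk sector by sector; substitute zeros for unreadable ones.

    read_sector(n) returns 512 bytes, or raises IOError for a bad sector.
    Because a zero sector is written in place of each unreadable one,
    every later sector keeps its original offset in the output image.
    Returns the list of bad sector numbers.
    """
    bad = []
    for n in range(total_sectors):
        try:
            data = read_sector(n)
        except IOError:
            data = b"\x00" * SECTOR   # keep the offsets correct
            bad.append(n)
        out.write(data)
    return bad
```

The zero filler matters only for the offsets: file system structures
that point at absolute sector numbers still point at the right places
in the resulting image.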

If you wish to scan the disk surface for bad sectors, you can use the
free version of HDTune, which will tell you whether there is physical
damage present on the disk or not (and whether you'd need to treat the
disk differently as a result). This older version of the program is free.
The "Error Scan" tab on the right is the one you want to try. HDTune also
sells a more fully featured version of the program, but for simple things
the free version is good as well.

http://www.hdtune.com/files/hdtune_255.exe

Good luck,
Paul
 
Sydney

Paul
Thanks a lot for your thorough answer.
I ran TestDisk on the Hitachi disk; the results don't seem encouraging.
TestDisk recognizes 2 partitions, F and J, in the tested environment.
I analyzed J, which was the data partition (F: was the old boot one).
Here is a partial copy of the log.
"Geometry from i386 MBR: head=255 sector=63
BAD_RS LBA=33555007 32319
check_part_i386 failed for partition type 07
check_part_i386 failed for partition type 07
Current partition structure:
Invalid NTFS or EXFAT boot
1 * HPFS - NTFS 2088 179 11 4646 186 18 41094719
1 * HPFS - NTFS 2088 179 11 4646 186 18 41094719

Bad relative sector.
Invalid NTFS or EXFAT boot
2 P HPFS - NTFS 2558 8 9 22111 186 18 314130169
2 P HPFS - NTFS 2558 8 9 22111 186 18 314130169
Space conflict between the following two partitions
1 * HPFS - NTFS 2088 179 11 4646 186 18 41094719
2 P HPFS - NTFS 2558 8 9 22111 186 18 314130169
Ask the user for vista mode
Allow partial last cylinder : No
search_vista_part: 0

search_part()
Disk /dev/sdb - 164 GB / 153 GiB - CHS 20023 255 63

Results

interface_write()

No partition found or selected for recovery"

I am not sure if PTedit would repair this partition. Here is what it says

Type Boot Cyl Head Sector Cyl Head Sector Before Sectors

07 80 2 3 1 1023 254 63 33555007 41094719
07 00 1023 2 1 1023 254 63 41094782 314130169
00 00 2 2 0 2 2 0 33554944 33564944
00 00 2 2 0 2 2 0 33554944 33564944
I need your advice if you can spare the time.
 
Paul

I can give you some PTEDIT32 examples from my computer. This is to help
familiarize you with what might be more normal. Note - Partition Magic
doesn't like what it sees here, so I'm making no claim these tables
are perfect. Just that the values in the table, are "mostly sane".

<-- Start ---> <--- End -----> Sectors Total
Type Boot Cyl Head Sector Cyl Head Sector Before Sectors

0C 80 637 1 1 1023 254 63 10233468 40965687
0C 00 1023 254 63 1023 254 63 51199155 40965750
83 00 1023 254 63 1023 254 63 92164968 37495647
07 00 1023 254 63 1023 254 63 129660615 182916090

My disk apparently has a space at the beginning. The first partition starts
at 10233468. If you add 40965687 (size) to the start value, that gives
the next start at 51199155. If I haven't made any typos, I think you'll
see my four primary partitions are stacked end to end, but there is a blank
space in front of the first partition.
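
If you'd rather not do those sums by hand, the stacking check is just
arithmetic on the "Sectors Before" and "Total Sectors" columns. A small
sketch (plain Python, numbers taken from the table above):

```python
# Each entry is (start_lba, sector_count), from the "Sectors Before"
# and "Total Sectors" columns of the first disk's table.
parts = [
    (10233468, 40965687),
    (51199155, 40965750),
    (92164968, 37495647),
    (129660615, 182916090),
]

def gaps(parts):
    """Gap in sectors before each partition, in start order."""
    out, prev_end = [], 0
    for start, count in sorted(parts):
        out.append(start - prev_end)
        prev_end = start + count
    return out

print(gaps(parts))   # [10233468, 0, 63, 0]
```

So there is the big space in front of the first partition and, as it
turns out, a one-track (63-sector) gap before the Linux partition too;
the other boundaries butt right up against each other.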

My first two partitions are FAT32 (0C). The third one is for Linux. The
fourth one is an NTFS data partition. That is based on the values I see
in the Type field. The first partition is bootable (and happens to
contain Win2K).

*******

Here is my second disk. On this one, there is no blank space before the
first partition. (I don't think Disk Management will start a partition
at 0, since the MBR is there, and Disk Management is going to try to
start partitions on track boundaries. The geometry specification claims
tracks contain 63 sectors, which is physically unlikely. It is a convention
of sorts. That could account for starting at 63.)

<-- Start ---> <--- End -----> Sectors Total
Type Boot Cyl Head Sector Cyl Head Sector Before Sectors

0B 00 0 1 1 636 254 63 63 10233342
0C 80 637 1 1 1023 254 63 10233468 152215812
0C 00 1023 0 1 1023 254 53 162449280 4080510
07 00 1023 0 1 1023 254 63 166529790 92164905

That is three FAT32 partitions and an NTFS partition. The second partition
contains an OS ("Boot") and that is my WinXP C: partition. The first partition
starts at 63, leaving a track at the beginning of the disk. And sector 0 actually
contains the MBR and the above table. Between the first and second partition,
is a blank track.

63+10233342 + 63 = 10233468

I don't know why Disk Management did that.

10233468 + 152215812 = 162449280 so there is no space between the second
and third partition. Similarly, the third and fourth partitions are
right next to one another as well, with no space.

*******

OK, my third disk is a temporary one, where I'm experimenting with an Ubuntu install.

<-- Start ---> <--- End -----> Sectors Total
Type Boot Cyl Head Sector Cyl Head Sector Before Sectors

83 80 0 1 1 1023 254 63 63 300527892
05 00 1023 254 63 1023 243 63 300527955 12048750
00 00 0 0 0 0 0 0 0 0
00 00 0 0 0 0 0 0 0 0

In that example, there are two entries. The second entry is special, and is
type 05. That is an "extended partition". It is a "container". It holds
multiple "logical partitions". This is how you manage to get more than four
partitions on a hard drive. Three of them could be primary, while the fourth
"extended" one, could hold a dozen logical partitions if you wanted.
The extended partition doesn't have to have all the contained space
used, so you can do as follows. In this example, there is room to make
another logical partition, within the extended one. Doing so, would
not alter the contents of the MBR/primary entries, as shown in the
above table. I can't tell from the above table how many logical
partitions are within the Extended (there happens to be only one, a
Linux swap partition). All I can say is that the example disk above has
a relatively small extended section (6.17GB).

+---------------+--------------------------------------------------+
| Primary (83)  |               Extended (type 05)                 |
+---------------+------------+------------+------------------------+
                | Logical #1 | Logical #2 |      Empty space       |
                +------------+------------+------------------------+

Notice, that in the example, we can't tell what partition type is
in Logical #1, and you'd need something other than PTEDIT32 to get
that info. I expect TestDisk would know what was there.
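
For reference, each row PTEDIT32 displays is just a decoded 16-byte
entry from sector 0. The field layout below is the standard MBR format;
the function and names are my own sketch, not PTEDIT32's code:

```python
import struct

def parse_entry(raw16):
    """Decode one 16-byte MBR partition table entry."""
    (boot, h1, sb1, c1,      # boot flag + packed start CHS
     ptype, h2, sb2, c2,     # type byte + packed end CHS
     lba, count) = struct.unpack("<8B2I", raw16)

    def chs(head, secbyte, cyl_lo):
        # sector is the low 6 bits; cylinder bits 8-9 hide in the
        # top 2 bits of the sector byte
        cyl = ((secbyte & 0xC0) << 2) | cyl_lo
        return (cyl, head, secbyte & 0x3F)

    return {
        "boot": boot == 0x80,
        "type": ptype,                  # e.g. 0x07 = NTFS, 0x05 = extended
        "start_chs": chs(h1, sb1, c1),  # (cyl, head, sector)
        "end_chs": chs(h2, sb2, c2),
        "lba": lba,                     # "Sectors Before" column
        "sectors": count,               # "Total Sectors" column
    }
```

Note that a type 05 entry decoded this way only tells you where the
extended container starts; to enumerate the logicals, a tool has to
follow the chain of extended boot records inside it, which is why a
view of sector 0 alone can't show them.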

*******

If we look at your partition table, the first two are NTFS. The second
two have the Type field set to 00, so I presume there is nothing in
those partition entries. (Some BIOSes support dynamic updating of the
MBR, and an attempt to access a recovery partition may cause one of the
other partition entries to be updated when needed. I'm guessing that
mechanism isn't being used here.)

33555007 + 41094719 = 74649726, which is well past the second partition's
start at 41094782, so your partition table says the first partition is
long enough to overlap the second partition. That is *not* good. If that
were true, a write operation on the first partition would corrupt the
second partition.

Now, if we take 33554944 + 63, that gets us to 33555007. And putting a
track of spacing between partitions, might be a thing to do. The 33554944
number does not "ring any bells", and I cannot tell you why some tool has
loaded values like that. But at least the distance between 33554944 and
33555007 makes some sense.
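
The overlap test itself is trivial to write down. As a sketch (just
arithmetic on the "Sectors Before" and "Total Sectors" numbers from your
table, not any tool's code):

```python
def overlaps(a, b):
    """a and b are (start_lba, sector_count); True if the ranges collide."""
    (s1, n1), (s2, n2) = sorted([a, b])
    return s2 < s1 + n1

p1 = (33555007, 41094719)     # first entry of the damaged table
p2 = (41094782, 314130169)    # second entry
print(overlaps(p1, p2))       # True: 33555007 + 41094719 = 74649726 > 41094782
```

With the first partition moved back to start at sector 63, the two
ranges become exactly adjacent (63 + 41094719 = 41094782) and the
overlap disappears.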

Now, imagine the following. I've edited your table, to what I think makes sense.
This is your original.

Type Boot Cyl Head Sector Cyl Head Sector Before Sectors

07 80 2 3 1 1023 254 63 33555007 41094719
07 00 1023 2 1 1023 254 63 41094782 314130169
00 00 2 2 0 2 2 0 33554944 33564944
00 00 2 2 0 2 2 0 33554944 33564944

My guess would be, the first two entries would make more sense if they
were as follows. (Hmmm. the 2 3 1 should probably be changed to 0 1 1
as well, to be consistent with the 63 sector start. Not sure what to
do with the 1023 2 1 yet.)

07 80 0 1 1 1023 254 63 63 41094719
07 00 1023 2 1 1023 254 63 41094782 314130169

The question would then be, why didn't TestDisk detect something at
sector 63 ?

NOTE - TestDisk is a "repair in place" program. Before selecting an
option like "Write", you want a backup of the sick drive. By doing
so, you can undo whatever experiments you try.

You can use "dd" to make a backup. You need enough space to hold the
entire sick drive somewhere. This is a port of "dd" for Windows.

http://www.chrysocome.net/dd

Say I run "dd --list" in an MSDOS (command) window. And I see
entries like this.

\\?\Device\Harddisk0\Partition0
link to \\?\Device\Harddisk0\DR0
Fixed hard disk media. Block size = 512
\\?\Device\Harddisk0\Partition1
link to \\?\Device\HarddiskVolume1

In that example, Harddisk0\Partition0 is the entire disk contents
including the MBR. Harddisk0\Partition1 represents the first partition
on the drive, so using that would not back up everything.

Now, I can do a backup. The simplest command would be:

dd if=\\?\Device\Harddisk0\Partition0 of=C:\mybackup.dd

Here, I'm copying all the sectors from the disk, and holding them
in the file "mybackup.dd". If the source disk is 80GB, the mybackup.dd
file will be 80GB as well. In my example, the C: drive would
have to be of type NTFS (as NTFS supports files larger than 4GB in size),
and the C: partition would need at least 80GB of spare room on it.

Later, say I discover I screwed up the repair, and the sick disk is
now trashed. I can put back the data, in original sick form, by doing

dd if=C:\mybackup.dd of=\\?\Device\Harddisk0\Partition0

and that puts back everything, including the original sick partition
table and all the (scrambled) data. The command works in its
simplified form, because "dd" can detect the "end" of the
disk, and knows when to stop. If you use this simple syntax with
a USB flash drive, there is a bug in "dd" and it won't stop at the
end properly. To prevent that, there are block size (bs) and count
parameters, to more precisely control the size of transfer. But
we don't need to worry about that now.
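
To show what bs and count actually control, here is a sketch in Python
(it mimics the idea, it is not the dd source): copy at most count blocks
of bs bytes and then stop, instead of relying on the tool to detect the
end of the device.

```python
def dd_copy(src, dst, bs=512, count=None):
    """Copy from file object src to dst in bs-byte blocks.

    If count is given, stop after exactly count * bs bytes, like
    'dd bs=... count=...'. Returns the number of bytes copied.
    """
    copied = 0
    limit = None if count is None else count * bs
    while limit is None or copied < limit:
        want = bs if limit is None else min(bs, limit - copied)
        block = src.read(want)
        if not block:          # end of input reached
            break
        dst.write(block)
        copied += len(block)
    return copied
```

With an explicit count, the transfer ends at a known byte boundary, which
is exactly what you'd want on a device whose reported end can't be trusted.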

Before starting a "dd" transfer, you'd run HDTune first and do a
bad block scan. The purpose of checking for bad blocks, is to determine
whether the "dd" command is going to be able to capture all the data.
If HDTune shows all "green" squares, you're then ready to use the "dd"
command to make your backup.

*******

OK, we've made the backup, so we're protected against "repair-in-place"
accidents. We've done the extra work, to protect your daughter's data.

Now, if we were to edit the partition table, and change the start of
the first partition to 63, in principle, that makes the partition
table look a little more sane.

Type Boot Cyl Head Sector Cyl Head Sector Before Sectors

07 80 0 1 1 1023 254 63 63 41094719
07 00 1023 2 1 1023 254 63 41094782 314130169

The first partition is 21,040,496,128 bytes. The second partition is
160,834,646,528 bytes (314130169 * 512). Hmm. So the total disk would
have to be at least 181,875,142,656 bytes. I don't think I like this
either. Something still isn't right! You stated in a previous post that
the disk is 160GB, so it is not valid for the second partition alone to
have a size of 160GB. It would run "off the end".

At this point, about all I can suggest, is to try to use TestDisk
to do a "deeper scan" or the like, to see if it can figure it out.
It seemed, from the messages you got, that it was looking at
the boot sectors at the beginning of the partition, and didn't
like what it found. The MBR itself, has a small piece of code that
starts the boot process. But there are also sectors within
the partition itself, which aid the boot process. And corrupting
those sectors will prevent booting. In Recovery Console, you
use "fixboot" to repair those sectors. (Note - don't do that
right now, because your partition table is a mess!)

+---------+-------------------------------------------+-- - -
|   MBR   |   Partition Boot Sectors , File system    |
|         |<------ entire partition ----------------->|
+---------+-------------------------------------------+-- - -

The problem I see right now, is there are at least two errors in
your MBR partition table entries. If there was only one significant
error, we could experiment with it and see what happens (by starting
the first partition at sector 63). But at least one partition has
a suspect partition size. And this is where some knowledge about what
a "reasonable" value looks like helps.

If the partitions themselves are scrambled, now we have at least
three faults with the structure of the disk. Unless you've got
very good auxiliary information to work with, it is then getting
less and less likely, that you're going to find anything.

You can always try a scavenger, and see what it finds. I don't know
how much scrambling this tool can take. I don't really know what
it uses for a cue, for where the data is located. If it relied on
the partition table alone, it would be in deep trouble.

http://www.pricelesswarehome.org/WoundedMoon/win32/driverescue19d.html

Paul
 
Sydney

I have changed the values on line 1 with PTEdit32 as suggested. The
results are as before, i.e. I see two partitions, F:\ and J:\. When I
try to open them, the answer is, as before, "not formatted".
I recall that this disk was an old one with XP installed. It was put
behind the new disk as primary slave to access the data. Any other
suggestion ?
Thanks a lot for your work !
 
Paul


Since the partition table does not seem to be sane, you can try
scavenging any files that are visible.

http://www.pricelesswarehome.org/WoundedMoon/win32/driverescue19d.html

If TestDisk could find something, I would be more optimistic. You'd
think that even if all my theories about what should be in the partition
table are wrong, TestDisk would still say it had found partitions. I
don't know how damaged the two partitions would have to be before
TestDisk could not find them.

Paul
 
