Comparison of NTFS/MFT recovery software?

cquirke (MVP Win9x)

Maybe all the conditions that chkdsk /F can't fix are things that are
impossible to fix ?

That's the arrogant assertion, sure. But the abilities of auto-fixing
tools are always pretty crummy, and generally work from a fixed
assumption base; that the only type of problem that will arise is an
interruption of sane file operations (which journalling can cope with).

... Usenet asking if there was a document that listed every possible
message from chkdsk, similar to the documentation for fsck on any unix
system, and the answer was no, why do I want one?

Yep; the classic "why would you want that?" myopia ;-)

Compare that to every Windows 98 laptop I ever saw that had CHK files
in the C root because of incorrect shutdown.

Well, that's a good case in point. Would FATxx look "better" if
Scandisk automatically deleted lost cluster chains, instead of
preserving them as .CHK?

MS seemed to think so; that's why they set Win98's defaults in
Scandisk.ini to automatically "fix" errors and delete recovered files -
the "kill, bury, deny" approach to file system maintenance.

In the case of NTFS, that's exactly what happens. When transaction
logging undoes a change, what do you think it falls back to? What if
you wanted the partially-completed file update, rather than lose it?

If I add 15 bytes at offset 200 into a 200M .AVI file, do you think
transaction logging preserves the entire 200M file (so it can fall
back cleanly) and creates an entirely new copy to fall-forward to when
it's done? Leaving aside the performance impact, how would NTFS
manage that if there's insufficient free space?

MS's docs on this are clear; transaction rollback does not preserve
user data. You can't preserve data that has not been written to HD
from RAM yet; the best you can do is hang on to the bit that was
written to disk, and that's basically what .CHK files do, if you like.
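
To make the metadata-vs-data distinction concrete, here is a rough
Python sketch of a metadata-only journal - purely illustrative, not
NTFS's actual $LogFile format, and all the names are invented. Rollback
restores the structure it logged, but the overwritten user bytes were
never logged, so they are gone:

# Illustrative only -- not NTFS's real log format. A metadata-only
# journal records old/new *metadata* values, so rollback can restore
# structure, but overwritten file *contents* were never logged.

class MetadataJournal:
    def __init__(self):
        self.entries = []            # (record_id, field, old_value, new_value)

    def log(self, record_id, field, old, new):
        self.entries.append((record_id, field, old, new))

    def rollback(self, mft):
        # Undo metadata changes in reverse order, then discard the log.
        for record_id, field, old, _new in reversed(self.entries):
            mft[record_id][field] = old
        self.entries.clear()

mft = {5: {"name": "movie.avi", "size": 200_000_000}}
journal = MetadataJournal()

# Grow the file by 15 bytes: only the metadata change is journalled;
# the user data goes straight to the file's clusters, unlogged.
journal.log(5, "size", mft[5]["size"], mft[5]["size"] + 15)
mft[5]["size"] += 15

journal.rollback(mft)                # the file system is consistent again,
print(mft[5]["size"])                # but overwritten user bytes are not recovered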

The point I'm making here is:
- interrupted file operations *will* lose data
- bad hardware issues *will* corrupt file systems
- raw disk writes *can* trash file systems

There's no "magic bullet" to NTFS that stops any of these things from
happening; the second two are below the file system's layer of
abstraction, so FATxx vs. NTFS has no meaning other than to what
extent repairs can be guided by redundant information.

You might have expected NT's hardware abstraction and NTFS's security
awareness to prevent malware from writing directly to raw disk.

See Witty.


------------ ----- ---- --- -- - - - -
The most accurate diagnostic instrument
in medicine is the Retrospectoscope
 
Al Dykes

It's inevitable if you take user expectations as to what "backup" does
into account, i.e. that it loses unwanted changes while preserving
wanted changes. Implicit is the idea that the unwanted changes are
more recent than the changes you want to keep; therefore, falling back
to an earlier state will preserve data while losing the damage.

What the F are you talking about?

You do a full backup of a system, with an appropriate tool, and if you
rebuild from that backup you get a functional equivalent system when
you are done.

If you have open files while you are running a backup you have to know
what you're doing or you get what you deserve.
 
cquirke (MVP Win9x)

Um, by your definition, perhaps. That's just a little too facile to be a
general definition.

It's inevitable if you take user expectations as to what "backup" does
into account, i.e. that it loses unwanted changes while preserving
wanted changes. Implicit is the idea that the unwanted changes are
more recent than the changes you want to keep; therefore, falling back
to an earlier state will preserve data while losing the damage.

Clearly, falling back to an earlier state loses data saved or changes
made after the backup was made; thus "loses data".

Now you can hedge this in various ways:

1) Reduce time lapse between backup and live data

The extreme of this is real-time mirroring, such that changes are made
to "live" and "backup" data at the same time - in essence, both copies
of data are "live". This protects against a very specific type of
problem; death of one half of the mirror.

But anything that writes junk to the HD will write junk to both HDs
equally - unless the junk arises within one half of the HD subsystem,
of course. So in that sense, zero-lag backup isn't really a "backup".

Also, several things that kill one HD will very likely kill both HDs;
power spike, site disaster, theft of PC, flooding, etc.
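
A toy Python illustration of that point (hypothetical, not any real
mirroring driver): every write, including a corrupting one, lands on
both halves of the mirror, so the second copy is no protection against
bad writes:

class Mirror:
    # Toy zero-lag mirror: both copies receive every write, good or bad.
    def __init__(self, size):
        self.disk_a = bytearray(size)
        self.disk_b = bytearray(size)

    def write(self, offset, data):
        self.disk_a[offset:offset + len(data)] = data
        self.disk_b[offset:offset + len(data)] = data

m = Mirror(1024)
m.write(0, b"valid file system structures")
m.write(0, b"\x00" * 32)             # junk write (malware, bad driver, ...)
assert m.disk_a == m.disk_b          # the "backup" is exactly as trashed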

2) Keep multiple time-lapse backups

Now we're getting somewhere; instead of having one big backup, you
keep a number of these made at different times, and can fall back as
far as needed; assuming you discover the data loss you wish to reverse
within the time period you are covering in your backup spread.

You will still lose whatever data you saved between the last sane
backup, and the time of data loss. The only way to avoid that is to
have transaction-grain steps between successive backups.

The assumption this approach rests on is that the disaster is such
that all further work ceases, so that the time between the data state
you want to keep and the disaster you want to lose is always positive.
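
A short Python sketch of that fall-back logic (the file names, dates
and backup format are made up for illustration): pick the newest backup
taken before the estimated time of damage, and accept that anything
saved after it is lost:

from datetime import datetime

backups = [                          # (time taken, snapshot of the data set)
    (datetime(2004, 9, 1), {"report.doc": "v1"}),
    (datetime(2004, 9, 4), {"report.doc": "v2"}),
    (datetime(2004, 9, 7), {"report.doc": "v2, silently corrupted"}),
]

def restore_point(backups, damage_time):
    # Newest backup strictly older than the estimated time of damage.
    candidates = [b for b in backups if b[0] < damage_time]
    return max(candidates, key=lambda b: b[0]) if candidates else None

when, data = restore_point(backups, damage_time=datetime(2004, 9, 6))
print(when, data)                    # 2004-09-04: the last sane state;
                                     # anything saved after it is gone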

3) Selective scope

This counters the negative lead time problem that is inherent in the
malware infection-stealth-payload sequence of events.

By including only non-infectable data in your backup, you will lose
malware, as well as losing content that ties the backup to particular
hardware or application versions.

These backups can then be restored onto new replacement PCs with less
worry about inappropriate drivers, version soup, or malware restore.
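
As a rough Python sketch of "selective scope" (the extension list is an
assumption, not a complete definition of "infectable"): walk the data
tree and keep only file types that don't commonly carry code:

import os

INFECTABLE = {".exe", ".dll", ".scr", ".com", ".vbs", ".js"}   # assumed list

def backup_candidates(root):
    # Yield data files, skipping extensions that commonly carry code.
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if os.path.splitext(name)[1].lower() not in INFECTABLE:
                yield os.path.join(dirpath, name)

for path in backup_candidates(r"C:\Data"):   # hypothetical data root
    print(path)                              # feed these to the backup tool
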
You can reasonably expect to have saved only what data you've
saved in your backup before your head crashed.

My point exactly; if you want anything more recent than that - or you
find all your backups are unacceptable when restored - then the "other"
stuff you want to see again will have to be recovered.

If my filesystem or disk crashes (and any disk can crash at any time,
leaving moot the question of running chkdsk), I count myself lucky if I can
save *anything*. That's why I often backup.

Sure, that's why we <cough> all backup. My approach is to:
- keep a small data set free of infectables and incoming junk
- automate a daily backup of this elsewhere on HD
- scoop the most recent of these to another PC daily
- dump collected recent backups from that PC to CDRW
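
The "automate a daily backup of this elsewhere on HD" step might look
something like this Python sketch (paths and naming are assumptions;
the scoop-to-another-PC and CDRW steps would sit on top of it):

import shutil
from datetime import date
from pathlib import Path

DATA_DIR   = Path(r"C:\Data")        # the small, infectable-free data set
BACKUP_DIR = Path(r"D:\Backups")     # elsewhere on the HD (another volume)

def daily_backup():
    # Zip today's data set into a dated archive alongside earlier ones.
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    archive = BACKUP_DIR / f"data-{date.today():%Y%m%d}"
    shutil.make_archive(str(archive), "zip", DATA_DIR)
    return archive.with_suffix(".zip")

print(daily_backup())                # run once a day from a scheduler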

If you can image the entire system, then you'd keep the last image
made after the last significant system change, and use that as your
rebuild baseline before restoring the most recent data backup.

In practice, users tend to skip the "last mile" to CDRW for one reason
or another (out of disks, didn't get it together, etc.). If it's a
stand-alone PC, that leaves only the local HD backups, which remain
available only as long as that part of the HD works. If they have
been switching the PC off overnight, they won't even have that.

I fail to see how this moves us towards *how* a naive user may
recover data from a crashed disk or severely damaged filesystem.

My point was that backups do not remove the role of data recovery,
even if they do reduce what is at stake.

The user's environment includes support techs, and in such cases,
you'd expect these to be involved if the user isn't keen on firing up
the Diskedit chainsaw themselves.

Data recovery is not always a costly clean-room epic undertaking;
sometimes it's a couple of snips here and there, and can be faster and
cheaper than rebuilding from scratch and restoring backups.

http://www.windowsubcd.com/index.htm

Ah! This time the page loaded!!
Looks very interesting, thanks!!


--------------- ----- ---- --- -- - - -
The memes will inherit the Earth
 
cquirke (MVP Win9x)

"There is a timeout on un-registred versions (60 days from release),"
Maybe you should read first before you snip?

Ah, so it's going to die on Day 60 even if I don't install it or use
it until Day 59. Bummer; I'll just have to take my chances then.

That's assuming "release" isn't already 50+ days ago ;-p
 
J. Clarke

cquirke said:
Yes it is; and it should be there for that reason alone, if nothing
else. It's easier to understand what Scandisk says about what it
finds than, say, a raw register dump you get in Stop errors ;-)


Now you are saying that because most folks lack a clue, we should
declare darkness as the standard? The "ChkDsk Knows Best, even if it
kills your data to the point that it can no longer be recovered" is
high-handed nonsense, geared to the convenience of "support" at the
expense of the client. We'd like a lot less of that, please.

This is one of the most ludicrous arguments I've ever seen. If you don't
like chkdsk then just don't use it.

Sure; that's a given - it's a one-pass automated tool with no
"big-picture" awareness; how smart can you really expect it to be?

If I show you a FAT1 that has 512 bytes of ReadMe.txt in it, and FAT2
that has sane-looking values in it, your guess at what to do would be
correct. If a few sectors further in, you found the same thing, but
the other way round, you'd guess how to fix that too.

You would not just splat the whole of FAT1 over FAT2 because it
"looked better", on the ASSumption that every part of FAT1 is as
correct or otherwise as every other part of FAT1.

You'd also not be so dumb as to chop the Windows directory in half,
just because at that point a dir entry started with a null, and throw
the rest of it away. In fact, even if there were 512 bytes of zeros
or ReadMe.txt content in the middle of a dir, you would recognise that
as a sector splat and append the distant part of the same dir,
excising the garbaged sector's contents.
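
A toy Python sketch of that sector-by-sector reconciliation (the
"looks sane" test is a crude stand-in for real FAT-entry validation,
and the whole thing is illustrative, not a recovery tool):

SECTOR = 512

def looks_sane(sector_bytes):
    # Crude heuristic: a FAT sector that is mostly printable text (say,
    # a stray chunk of ReadMe.txt) is almost certainly a misplaced write.
    printable = sum(32 <= b < 127 for b in sector_bytes)
    return printable < len(sector_bytes) * 0.9

def reconcile(fat1, fat2):
    # Merge the two FAT copies sector by sector, preferring whichever
    # copy of *this* sector looks sane -- never splatting one whole
    # table over the other.
    merged = bytearray(len(fat1))
    for off in range(0, len(fat1), SECTOR):
        s1, s2 = fat1[off:off + SECTOR], fat2[off:off + SECTOR]
        merged[off:off + SECTOR] = s1 if looks_sane(s1) else s2
    return bytes(merged)

# fixed = reconcile(read_fat_copy(1), read_fat_copy(2))   # hypothetical readers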

That's not rocket science to a tech with an interest in such matters,
even if "your average user" couldn't do that themselves.

What a number of "average" users can (and do) do is call up and say:

"I had a bad exit, and Scandisk ran as usual, but this
time it wanted to delete half the Windows directory.
So I switched off the PC and I'm bringing it in for file
system repair and data recovery."

With NTFS, AutoChk robs them of that chance.

You might want to study what's publicly available about the file structure
of NTFS. It doesn't work the way you seem to think it does.
 
Folkert Rienstra

cquirke (MVP Win9x) said:
Ah, so it's going to die on Day 60 even if I don't install it or use
it until Day 59. Bummer; I'll just have to take my chances then.

That's assuming "release" isn't already 50+ days ago ;-p

As I said in another post:
... if you're not downright stupid you just set your clock back
and save yourself a 1.5 MB download that may not even be different.
 
cquirke (MVP Win9x)

cquirke (MVP Win9x) <[email protected]> wrote:
What the F are you talking about?

You do a full backup of a system, with an appropriate tool, and if you
rebuild from that backup you get a functional equivalent system when
you are done.

Yes - with loss of all data saved since the backup was created.

Got it?


------------ ----- ---- --- -- - - - -
Our senses are our UI to reality
 
Frank Jelenko

How about running chkdsk without any switches, reading the log and deciding
how you want to proceed?
 
cquirke (MVP Win9x)

On Wed, 08 Sep 2004 23:12:19 GMT, "Frank Jelenko" wrote:
How about running chkdsk without any switches, reading the log and deciding
how you want to proceed?

That's what I'd do, but there are limitations here:
- ChkDsk known to throw spurious errors if volume "in use"
- AutoChk simply will NOT work in this mode
- the log is so buried in Event Log it's near-impossible to find
- requires NT to run, which writes to at-risk file system (if C:)
- Event Log also requires NT to run, risks as above

What one typically wants to do is:
- after bad exit, before OS writes to HD, have AutoChk check
- AutoChk should stop and prompt on errors
- then can either proceed, or abort both AutoChk and OS boot
- if abort, then need a safe mOS from which to re-test etc.

That's exactly how the original auto-Scandisk works. Win.com runs DOS
mode Scandisk with an implicit /Custom parameter, which facilitates
fine-grained control via Scandisk.ini before Windows starts booting up
or writing to the file system.

Scandisk.ini can be set so the scan stops on errors. At that point,
it's safe to reset out of the boot process, press F8 on next boot,
choose Command Prompt Only as a safe mOS, and do an elective Scandisk
from there (or run alternate recovery/repair tools).

A "better" OS should at least match this sensible and prudent design.


------------ ----- ---- --- -- - - - -
The most accurate diagnostic instrument
in medicine is the Retrospectoscope
 
