Perhaps you have a reading comprehension problem.
This is what Quirk says, and what I've experienced first-hand when I've
seen IIS log file data being wiped away because of power failures:
-----------
It also means that all data that was being written is smoothly and
seamlessly lost. The small print in the articles on Transaction Rollback
make it clear that only the metadata is preserved; "user data" (i.e. the
actual content of the file) is not preserved.
-----------
You REALLY don't understand anything! What Chris is saying is that when
writes are interrupted, the *NEW* data being written is not kept, not
that what is *already* on the disk (flushed) is discarded or in any way
deleted! Listen, most of us who have been using NTFS have at one time
or another experienced glitches, crashes or unprotected power failures
while working with files. With NTFS, when the computer is rebooted,
most of the time it's like nothing happened at all: you might have lost
the work that was being saved at the moment of the crash, but the file
itself, and whatever was successfully saved and flushed while working,
will still be stored on the disk and will still be intact. Don't try to
lie and twist the facts; everyone reading here will see right through
your lies! Your statement that NTFS silently deletes user data to
restore its own integrity was made in ignorance, to make readers think
that any and all of their files are at risk because NTFS will modify
their user data; the false statement even gives the impression that
this will happen to files that are not being used.
Do you understand the difference between metadata and "user data" ?
Oh please, don't try to be smart and obfuscate the issue by bringing in
things that will only end up biting you in the a$$! If you are so smart
about metadata, you should already know that some of it is user-defined
or user-owned! Or do you think that the file system should sacrifice
critical system metadata, and risk corrupting the MFT, in order to try
to save user data which was damaged or lost during a write operation?
Are you saying that when glitches and failures occur, the file system
should not first and foremost guarantee the integrity of the file
system structure, and the safekeeping of all the files on the disk, at
the expense of one user file?
Journalling ensures the *complete-ness* of write operations. Partially
completed writes are rolled back to their last complete state. That can
mean that user-data is lost.
It means that the incomplete write was not flushed to the disk and that
the old version of the file will not be updated; what will be lost is
what was in RAM when the file system was attempting to commit and
flush it to the disk!
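The distinction John is drawing can be illustrated with a minimal
sketch (Python here, since the thread has no code of its own; the
filename app.log is made up, and fsync behavior is the POSIX-style
guarantee, modulo the drive's own write cache):

```python
import os

# Illustrative sketch: making an appended log record durable.
# Data passed to write() may sit in application or OS buffers; only
# after flush() + fsync() is it on stable storage, and only THAT data
# is what survives a power failure. The journal protects file-system
# metadata; it does not resurrect bytes that never reached the disk.
with open("app.log", "a", encoding="utf-8") as log:
    log.write("record 1: committed work\n")
    log.flush()              # push Python's buffer into the OS page cache
    os.fsync(log.fileno())   # ask the OS to push the page cache to disk

    # Anything written after this point but before the next fsync()
    # is exactly the data a crash can cost you -- the "new data being
    # written", not what was already flushed.
    log.write("record 2: may be lost on power failure\n")
```

On this view, a log file that "loses" recent entries after a crash lost
only the buffered tail, not data that had already been flushed.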
In my experience, drive reliability, internal caching and bad-sector
re-mapping have made most of what NTFS does redundant.
The odd thing is, I don't believe I've ever had to resort to scouring
through .chk files for data that was actually part of any sort of user
file that was corrupted; any time I've come across .chk files, I've
never actually had any use for them.
And I can tell you that I would really be pissed off if I was working
on a file on an NTFS system, it suffered a power failure or some other
sort of interruption, and my file got journalled back to some earlier
state just because the file system didn't fully journal its present
state or last write operation.
You still don't understand: the last successful write will be present,
and whatever was successfully saved and flushed while you were working
with the file will be intact.
I've seen too many examples of NT-server log files that contain actual
and up-to-date data one hour, and because of a power failure the system
comes back up and half the stuff that *was* in the log file is gone.
That's an example of metadata being preserved at the expense of user data.
You're lying again, and the above statement proves beyond the shadow of
a doubt that you have absolutely no experience whatsoever with NT
server systems!
Look, no one is saying that everything about NTFS is perfect or that
data loss never occurs with NTFS; that is why smart computer users keep
backups! On the other hand, stop lying about things you know nothing
about, and stop trying to make us believe that FAT32 is more robust
than NTFS; those who have real-life experience know better. FAT32 has
some advantages in certain situations and NTFS has advantages in
others, but by and large, in today's computing environment, for most
users the advantages offered by NTFS far outweigh those offered by
FAT32.
John