CHKDSK killed my OpenGL subsystem


Skybuck Flying

Today when I booted my computer, chkdsk ran automatically, "fixed" some
errors, and simply rebooted, leaving me clueless about what had happened.

Fortunately I could retrieve the information via:

Event Viewer -> Application log, Source: Winlogon:

Here is the chkdsk report:

"
Checking file system on C:
The type of the file system is NTFS.


One of your disks needs to be checked for consistency. You
may cancel the disk check, but it is strongly recommended
that you continue.
Windows will now check the disk.
Index entry nvoglnt.dll of index $I30 in file 0x1d points to unused file
0xc958.
Deleting index entry nvoglnt.dll in index $I30 of file 29.
Index entry resume.el of index $I30 in file 0xc76b points to unused file
0xc959.
Deleting index entry resume.el in index $I30 of file 51051.
Index entry Unit1.dfm of index $I30 in file 0xc90d points to unused file
0xc95a.
Deleting index entry Unit1.dfm in index $I30 of file 51469.
Index entry Unit1.pas of index $I30 in file 0xc90d points to unused file
0xc95b.
Deleting index entry Unit1.pas in index $I30 of file 51469.
Cleaning up minor inconsistencies on the drive.
Cleaning up 48 unused index entries from index $SII of file 0x9.
Cleaning up 48 unused index entries from index $SDH of file 0x9.
Cleaning up 48 unused security descriptors.
CHKDSK discovered free space marked as allocated in the
master file table (MFT) bitmap.
CHKDSK discovered free space marked as allocated in the volume bitmap.
Windows has made corrections to the file system.

11550703 KB total disk space.
10344776 KB in 215692 files.
91180 KB in 22706 indexes.
0 KB in bad sectors.
321155 KB in use by the system.
65536 KB occupied by the log file.
793592 KB available on disk.

4096 bytes in each allocation unit.
2887675 total allocation units on disk.
198398 allocation units available on disk.

Internal Info:
54 e1 03 00 49 a3 03 00 62 8c 05 00 00 00 00 00 T...I...b.......
60 27 00 00 00 00 00 00 f0 01 00 00 00 00 00 00 `'..............
20 85 2e 12 00 00 00 00 20 08 2c db 01 00 00 00 ....... .,.....
f0 05 5b 0e 00 00 00 00 00 00 00 00 00 00 00 00 ..[.............
00 00 00 00 00 00 00 00 40 0b c2 06 02 00 00 00 ........@.......
99 9e 36 00 00 00 00 00 8c 4a 03 00 00 00 00 00 ..6......J......
00 20 65 77 02 00 00 00 b2 58 00 00 00 00 00 00 . ew.....X......

Windows has finished checking your disk.
Please wait while your computer restarts.


For more information, see Help and Support Center at
http://go.microsoft.com/fwlink/events.asp.
"

Then later when I tried to start "Return to Castle Wolfenstein Multiplayer
Demo":

"
Wolf 1.1 win-x86 Dec 17 2001
----- FS_Startup -----
Current search path:
F:\GAMES\Return to Castle Wolfenstein Multiplayer DEMO/main

----------------------
0 files in pk3 files

Running in restricted demo mode.

----- FS_Startup -----
Current search path:
F:\GAMES\Return to Castle Wolfenstein Multiplayer DEMO\demomain\pak0.pk3
(1846 files)
F:\GAMES\Return to Castle Wolfenstein Multiplayer DEMO/demomain

----------------------
1846 files in pk3 files
execing default.cfg
couldn't exec language.cfg
execing wolfconfig_mp.cfg
usage: seta <variable> <value>
execing autoexec.cfg
Hunk_Clear: reset the hunk ok
....detecting CPU, found Intel Pentium III
Bypassing CD checks
----- Client Initialization -----
----- Initializing Renderer ----
-------------------------------
Loaded 714 translation strings from scripts/translation.cfg
----- Client Initialization Complete -----
----- R_Init -----
Initializing OpenGL subsystem
....initializing QGL
....calling LoadLibrary( 'C:\WINDOWS\System32\opengl32.dll' ): succeeded
....setting mode 3: 640 480 FS
....using colorsbits of 32
....calling CDS: ok
....registered window class
....created window@0,0 (640x480)
Initializing OpenGL driver
....getting DC: succeeded
....GLW_ChoosePFD( 32, 24, 0 )
....35 PFDs found
....GLW_ChoosePFD failed
....failed to find an appropriate PIXELFORMAT
....restoring display settings
....WARNING: could not set the given mode (3)
....setting mode 3: 640 480 FS
....using colorsbits of 16
....calling CDS: ok
....created window@0,0 (640x480)
Initializing OpenGL driver
....getting DC: succeeded
....GLW_ChoosePFD( 16, 24, 0 )
....35 PFDs found
....GLW_ChoosePFD failed
....failed to find an appropriate PIXELFORMAT
....restoring display settings
....WARNING: could not set the given mode (3)
....shutting down QGL
....unloading OpenGL DLL
....assuming '3dfxvgl' is a standalone driver
....initializing QGL
....WARNING: missing Glide installation, assuming no 3Dfx available
....shutting down QGL
----- CL_Shutdown -----
RE_Shutdown( 1 )
-----------------------
GLW_StartOpenGL() - could not load OpenGL subsystem
"

Apparently chkdsk was bold enough to simply delete these four files:

Index entry nvoglnt.dll of index $I30 in file 0x1d points to unused file
0xc958.
Deleting index entry nvoglnt.dll in index $I30 of file 29.

Index entry resume.el of index $I30 in file 0xc76b points to unused file
0xc959.
Deleting index entry resume.el in index $I30 of file 51051.

Index entry Unit1.dfm of index $I30 in file 0xc90d points to unused file
0xc95a.
Deleting index entry Unit1.dfm in index $I30 of file 51469.

Index entry Unit1.pas of index $I30 in file 0xc90d points to unused file
0xc95b.
Deleting index entry Unit1.pas in index $I30 of file 51469.

This is not enough information for me; I want to know which folders these
files were in so I can assess the damage better.

Apparently nvoglnt.dll is related to OpenGL, which is now broken.

resume.el? I have no idea what this is. Maybe a harmless copy of
somebody's resume which I got via spam/e-mail... maybe a virus... maybe
just a resume/continue file for some program.

Unit1.dfm and Unit1.pas are Delphi source files... I would like to know
which ones were deleted, since I have many files with these names on my
PC... they're probably not important ones... but still, I would like to make sure.

How do I restore my system? Do I simply install the latest nVidia driver
and hope for the best?

How did this happen in the first place? (I have been playing around with
RTCW and a debugger, which I shut down multiple times, but I don't think that
was dangerous?)

Some scenarios:

1. Some kind of crash corrupted it.

2. Windows XP Pro contains a bug somewhere which corrupted it.

3. RTCW has a bug somewhere which overwrote this file and corrupted it.

4. Somebody broke into the PC and corrupted it on purpose.

5. A virus/worm corrupted it.

Is there anything that can be done to make chkdsk give more information, and
maybe ask me what I want it to do before it does it?

Well, so much for that.

Bye,
Skybuck.
 

Spack

Skybuck Flying said:
Today when I booted my computer, chkdsk ran automatically, "fixed" some
errors, and simply rebooted, leaving me clueless about what had happened.
Apparently nvoglnt.dll is related to OpenGL, which is now broken.

Sounds like it - nvoglnt sounds like "nVidia OpenGL NT driver" - and XP is
NT 5.1.

No idea about the other files.
How do I restore my system? Do I simply install the latest nVidia driver
and hope for the best?

First run System File Checker, which will restore any missing or damaged XP
files. In the Run box on the Start menu, type

sfc /scannow

and then press OK. Have your XP CD at hand.

Then reinstall your nVidia drivers. If you're lucky, that'll sort it all
out. I wouldn't risk using a Restore Point for 2 reasons:

1) Anything you installed since the last restore point will be lost

2) If the restore point files were damaged, you could end up making things
worse.
How did this happen in the first place? (I have been playing around with
RTCW and a debugger, which I shut down multiple times, but I don't think that
was dangerous?)

Either the power was turned off before cached updates had been written back to
the hard drive, or the hard disk could be on the way out. Get the disk utilities
for your drive from the manufacturer's web site and run a few tests.
Is there anything that can be done to make chkdsk give more information, and
maybe ask me what I want it to do before it does it?

Normally chkdsk runs automatically on start-up only when XP hasn't been shut
down normally. If you look at the text that comes up, it does give you an
option to skip the checking. If you do this, and XP boots up, you can open a
command prompt and type chkdsk to get it to run without fixing anything; it
will display what is wrong. (You need to run chkdsk /f to tell it to fix
any problems, and for the system drive this will only be done on a reboot
anyway.)
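(If you'd rather script that read-only check, here's a minimal sketch - Python is just for illustration; the point is that chkdsk without /f only reports:)

import subprocess

# Read-only pass: reports problems but changes nothing on the volume.
result = subprocess.run(["chkdsk", "C:"], capture_output=True, text=True)
print(result.stdout)

# Adding /f would tell it to fix things - and for the system drive that
# check only actually happens on the next reboot anyway.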

Dan
 

Skybuck Flying

Spack said:
Sounds like it - nvoglnt sounds like "nVidia OpenGL NT driver" - and XP is
NT 5.1.

Yeah ;)
No idea about the other files.


First run System File Checker, which will restore any missing or damaged XP
files. In the Run box on the Start menu, type

sfc /scannow

and then press OK. Have your XP CD at hand.

Then reinstall your nVidia drivers. If you're lucky, that'll sort it all
out. I wouldn't risk using a Restore Point for 2 reasons:

1) Anything you installed since the last restore point will be lost

2) If the restore point files were damaged, you could end up making things
worse.

Well, since only the OpenGL DLL was damaged I was lucky: I simply installed
the latest nVidia drivers and voila, it's working again ;)
Either the power was turned off before cached updates had been written back to
the hard drive, or the hard disk could be on the way out. Get the disk utilities
for your drive from the manufacturer's web site and run a few tests.

Well, power off and cached writes... I don't think so, at least not for the
DLL... why would anything write to the DLL? Except maybe defrag, or something
repairing the DLL, or something wacky.

The hard disk could be getting old... though I hope not ;)

No, I don't trust the utilities... I don't wanna erase my hard disk ;)
Normally chkdsk runs automatically on start-up only when XP hasn't been shut
down normally. If you look at the text that comes up, it does give you an
option to skip the checking. If you do this, and XP boots up, you can open a
command prompt and type chkdsk to get it to run without fixing anything; it
will display what is wrong. (You need to run chkdsk /f to tell it to fix
any problems, and for the system drive this will only be done on a reboot
anyway.)

Well, so far it has not turned out badly... so I will simply let chkdsk
continue to run as it is...

But at the next sign of trouble... I'm gonna disable it ;)

I don't wanna run the risk of losing important files :D

Though I do make backups... good reason to make another backup soon :)

Bye,
Skybuck.
 

Spack

Skybuck Flying said:
Well, power off and cached writes... I don't think so, at least not for the
DLL... why would anything write to the DLL? Except maybe defrag, or something
repairing the DLL, or something wacky.

By not letting the machine shut down properly you can mess up the MFT (Master
File Table), which is an index of all the files on the drive and their
locations. It's not the nVidia DLL itself that got messed up, it was the index
information, so the index entry was removed because XP could not reliably be
sure of where the file was and where any fragmented pieces might be. MFT
corruption is less likely than FAT or VFAT corruption, but not shutting
your PC down correctly always runs the risk of messing something up.

Dan
 

Skybuck Flying

DaveL said:
He'll screw around with a debugger but won't trust disk utilities?

Nope, these utilities are complex and hard to understand, and before you
know it... you're doing a low-level format heheheheh.
 

Skybuck Flying

DaveL said:
He'll screw around with a debugger but won't trust disk utilities?

Yeah.... I'll pass on disk utilities that do a low-level format to "test"
the drive :D LOL.

Bye.
Skybuck
 

Skybuck Flying

Besides....

I checked this site:

http://www.hitachigst.com/hdd/support/download.htm

It does mention the possibility of losing data... scary.

But even if I wanted to give it a go...

I don't have a floppy drive anymore hehehehehehe, removed, yessss, to make room
for the king, the new drive... so it's cooool.

I could make a bootable CD... but spending 1 euro on it is LOL too much :D
what a waste :D
 

cquirke (MVP Win9x)

Two bad OS design "features" at work here:

1) Auto-fixing ChkDsk (or really in this case, AutoChk)
2) Automatically reboot on system errors

Even if you say "well, unattended servers would want to reboot on errors
to minimize downtime", what's the point of rebooting on errors that
arise before the OS boot process is complete?

Fortunately, you can (and IMO should) kill (2) via System, Properties,
Advanced etc. Unfortunately, the UI designs of ChkDsk and AutoChk date
from before MS-DOS 6 brought Scandisk to the world, so they simply
don't offer any interactive (user-controlled) mode of operation at all.

Not only do AutoChk and ChkDsk /F automatically "fix" without
prompting you, but they bury their results deep in the bowels of Event
Viewer. Not somewhere obvious like "ChkDsk", but under "Winlogon" of all
things. It's hard to read that material if XP can't boot.
Well, since only the OpenGL DLL was damaged I was lucky: I simply installed
the latest nVidia drivers and voila, it's working again ;)
why would anything write to the DLL? Except maybe defrag, or something
repairing the DLL, or something wacky.

Yes, code files would normally be written to only when:
- installed
- updated
- infected
- disinfected
- moved by defrag
- "fixed" by ChkDsk or AutoChk
- splatted by wild writes

The HD file system structure (i.e. the kind of damage ChkDsk can detect) can
be corrupted in various ways:

1) Interruption of sane writes

This is what ChkDsk and AutoChk ASSume is going on, when they "fix"
things, and what NTFS's vaunted "transaction rollback" is designed to
mitigate (though the small print says only metadata is protected).

2) Wild writes

If the file system's layer of abstraction is undermined by bad RAM or
other flaky hardware, or deranged/malicious software, then the file
system's "rules" may be broken too. For example, a data cluster
destined for volume cluster address 63412 may be bit-punned (by a
reset address bit) to 30644, and thus overwrite data from some
completely unrelated file. At the lower level of raw sector
addresses, this can corrupt the raw bones of the file system itself.

3) Bad HD

Just about everything conspires to hide HD failure from you. First,
the HD's own firmware "fixes" rotting sectors on the fly; then NTFS's
code does the same thing, and finally ChkDsk /R paints over bad
clusters and ignores existing ones. And by default, CMOS setup
disables SMART reporting on POST. So it's up to you to suspect this
possibility whenever the mouse pointer sticks with the HD LED on (bad
sector retries) and chase it up before your data's hosed.


On HD diagnostics: Yes, take care to avoid destructive tests, and
abort testing as soon as physical errors show up and proceed directly
to evacuating your data, from outside "I can't run without writing to
C:, too bad if that kills the data" Windows.

SMART reporters from HD vendors will look at the SMART history and
typically go "hmmm, only a few thousand wobbly sectors that so far
we've auto-'fixed'... OK, call it a 'good' HD, no need to issue an
RMA". Don't accept a glib one-line "everything's fine" report; use
something like AIDA32 to show you all the detail that SMART can cough
up. SMART's potentially good in that it's the only window into bad
sectors that are already hidden by the firmware's "fixing".
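(A rough sketch of pulling that detail out yourself - assuming the smartmontools package rather than AIDA32, with "/dev/sda" as a placeholder device name; the attribute labels are the standard SMART ones:)

import subprocess

# Dump the full SMART attribute table, not just a one-line verdict.
out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    # Sectors the firmware has already remapped, or is waiting to remap -
    # nonzero and climbing means start evacuating your data.
    if "Reallocated_Sector_Ct" in line or "Current_Pending_Sector" in line:
        print(line)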

In addition, use something that can non-destructively check the HD
surface. Watch for slowdowns in the progress - if outside of
Windows's constant background tasks, they can be taken to mean retries
of sick sectors - even if the utility says everything is "OK".


---------- ----- ---- --- -- - - - -
"He's such a character!"
' Yeah - CHAR(0) '
 

cquirke (MVP Win9x)

On Tue, 28 Dec 2004 13:55:00 -0000, "Spack"
By not letting the machine shut down properly you can mess up the MFT (Master
File Table), which is an index of all the files on the drive and their
locations. It's not the nVidia DLL itself that got messed up, it was the index
information, so the index entry was removed because XP could not reliably be
sure of where the file was and where any fragmented pieces might be.

Great logic, eh? "I can't figure out what's going on, so best we just
kill, bury and deny anything happened, even if it means this makes it
impossible for a tech to fix things and recover data".
MFT corruption is less likely than FAT or VFAT corruption

Well, VFAT is the code that manages FATxx, and FATxx doesn't have an
MFT. The most crucial file system structures in FATxx are the FATs
themselves, and because these are so crucial, FATxx maintains two
copies and updates them both within such a small critical period that
a "mismatched FAT" from interruption of sane file ops is very rare.

What does "better" file system NTFS do about hedging against
interrupted MFT updates? AFAIK, nothing beyond keeping duplicates of
a handful of crucial system entries. It's a case of the OS saying
"I'm alright Jack; too bad about your data".
not shutting down correctly always runs the risk of messing up.

Yep. Forget that at your peril ;-)


--------------- ----- ---- --- -- - - -
Tech Support: The guys who follow the
'Parade of New Products' with a shovel.
 

Robert Hancock

cquirke said:
Well, VFAT is the code that manages FATxx, and FATxx doesn't have an
MFT. The most crucial file system structures in FATxx are the FATs
themselves, and because these are so crucial, FATxx maintains two
copies and updates them both within such a small critical period that
a "mismatched FAT" from interruption of sane file ops is very rare.

What does "better" file system NTFS do about hedging against
interrupted MFT updates? AFAIK, nothing beyond keeping duplicates of
a handful of crucial system entries. It's a case of the OS saying
"I'm alright Jack; too bad about your data".

NTFS is journalled, so if any sequence of file system operations was
interrupted there should be a record of what was being done in the
journal so that the missing operations can be reconstructed. Therefore
there should be no way for the file system structure to become corrupted
due to an unclean shutdown. Normally a full chkdsk is not necessary in
this situation since the journal can be quickly replayed. I suspect in
this case something else happened to cause some on-disk corruption of
the file system data structures.
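(Stripped to the bone, the mechanism is write-ahead logging: record the intended change and flush it before touching the real structure, then replay any leftover record after a crash. A toy sketch of the general idea - nothing NTFS-specific:)

import json
import os

JOURNAL = "metadata.journal"

def journaled_update(path, new_metadata):
    # 1. Record the intent and force it to disk BEFORE the real update.
    with open(JOURNAL, "w") as j:
        json.dump({"path": path, "new": new_metadata}, j)
        j.flush()
        os.fsync(j.fileno())
    # 2. Apply the change to the real structure.
    with open(path, "w") as f:
        json.dump(new_metadata, f)
        f.flush()
        os.fsync(f.fileno())
    # 3. Retire the journal entry only once the change is safely applied.
    os.remove(JOURNAL)

def recover():
    # After a crash, replay any record that was never retired; the update
    # is idempotent, so replaying an already-completed one is harmless.
    if os.path.exists(JOURNAL):
        with open(JOURNAL) as j:
            entry = json.load(j)
        journaled_update(entry["path"], entry["new"])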
 

cquirke (MVP Win9x)

Robert Hancock wrote:
NTFS is journalled, so if any sequence of file system operations was
interrupted there should be a record of what was being done in the
journal so that the missing operations can be reconstructed.

What specifically does journalling keep a backup of, that it can undo?

I ask, for two reasons:

1) MS's documentation suggests only metadata is preserved

2) Performance impact implies data cluster chains aren't duplicated

For example: I add two bytes at offset 37600 in a 125,000,567-byte file.
If journalling were to *totally* preserve the state of the original
file, it would have to retain all clusters of the original file from
the one containing that offset onwards, plus the original dir entry, plus
the old chaining info (which, as I understand it, is not a "FAT" but a
set of start addresses for the cluster runs that make up the file).

Is this what journalling does? Or something rather less than this?
Therefore there should be no way for the file system structure to
become corrupted due to an unclean shutdown.

I suspect the devil may be in the details on this one ;-)
Normally a full chkdsk is not necessary in this situation since
the journal can be quickly replayed.

Sometimes, you may *want* the broken remains of the file that was
being created or updated, which journalling is likely to throw away.

If you disable AutoChk, does that also disable the journalling
"automatic fixing" feature as well?
I suspect in this case something else happened to cause some
on-disk corruption of the file system data structures.

That's my suspicion, too. These things happen, and when they do, NTFS
is far less fixable (in the sense of "preserve or recover my files"
rather than "do whatever it takes to sanify the file system")

--------------- ----- ---- --- -- - - -
Never turn your back on an installer program
 

Robert Hancock

cquirke said:
On Wed, 29 Dec 2004 17:54:09 GMT, Robert Hancock



What specifically does journalling keep a backup of, that it can undo?

I ask, for two reasons:

1) MS's documentation suggests only metadata is preserved

2) Performance impact implies data cluster chains aren't duplicated

For example: I add two bytes at offset 37600 in a 125,000,567-byte file.
If journalling were to *totally* preserve the state of the original
file, it would have to retain all clusters of the original file from
the one containing that offset onwards, plus the original dir entry, plus
the old chaining info (which, as I understand it, is not a "FAT" but a
set of start addresses for the cluster runs that make up the file).

Is this what journalling does? Or something rather less than this?

In the case of NTFS I believe that only file system metadata is
preserved, i.e. the file system is guaranteed to be consistent; however,
files that were being written at the time of the crash could contain old
data, new data, or a mixture of the two. There are some other file
systems that have stronger guarantees (Reiser4 on Linux is claiming to
have all file system operations fully atomic with no performance hit,
and I believe even ext3 has better guarantees than this by default).

Even in those cases though, the operation you describe wouldn't be
atomic, since that can't be done through a single file system operation
(you can't just tell the OS to insert data in the middle of a file, you
have to write the new data and then move the rest of the contents down
yourself).
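(In code, that "insert in the middle" really is several separate steps - a sketch; a crash between the writes leaves the file half old, half new, which is exactly the data that metadata-only journalling won't protect:)

def insert_bytes(path, offset, data):
    with open(path, "r+b") as f:
        f.seek(offset)
        tail = f.read()    # everything after the insertion point
        f.seek(offset)
        f.write(data)      # write the new bytes...
        f.write(tail)      # ...then shuffle the old contents down

# e.g. insert_bytes("bigfile.dat", 37600, b"xy")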
 

cquirke (MVP Win9x)

What specifically does journalling keep a backup of, that it can undo?
1) MS's documentation suggests only metadata is preserved
2) Performance impact implies data cluster chains aren't duplicated
In the case of NTFS I believe that only file system metadata is
preserved, i.e. the file system is guaranteed to be consistent; however,
files that were being written at the time of the crash could contain old
data, new data, or a mixture of the two.

Well, that's the point. What you are saying is that the NTFS
journalling feature does absolutely nothing to preserve your data.
Even in those cases though, the operation you describe wouldn't be
atomic, since that can't be done through a single file system operation
(you can't just tell the OS to insert data in the middle of a file, you
have to write the new data and then move the rest of the contents down
yourself).

Quite. What you'd have to do, if you wanted to claim perfect
preservation of the previous state, is to write the new material to
unused clusters, then chain these into place, and finally update the
dir entry to point to the new form of the file as an atomic operation.

I'd do that by first adding the new chain as an ADS (alternate data
stream), then switching pointers so that it becomes the main data stream,
and then I'd drop and unlink the old version of the file.

But the performance impact could be really ugly, and one of the
drawbacks is that you'd need space for both the new and the old file chains.
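(The application-level version of that pointer switch is the familiar write-then-rename trick - a sketch, assuming the scratch file lands on the same volume so the final switch is a single metadata operation:)

import os

def atomic_rewrite(path, new_contents):
    tmp = path + ".tmp"            # hypothetical scratch name, same volume
    with open(tmp, "wb") as f:
        f.write(new_contents)
        f.flush()
        os.fsync(f.fileno())       # new version fully on disk first
    # One-step switch: after a crash you see either the old file or the
    # new one, never a mixture - at the cost of briefly holding both.
    os.replace(tmp, path)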


In practice, MS is not concerned with your data at all; only the
sanity of the file system. After all, the only impact these matters
have on MS is in terms of support calls. No vendor assumes any
responsibility for your data, so from their perspective, it's
irrelevant. From your perspective, it's the only unique part of the
system that cannot be restored by throwing money at new parts.

The trouble is, these matters will only come to a head when things go
wrong. Even then, the user has to believe what the tech says, and
many techs are insufficiently skilled or concerned about data
recovery. So bogus claims like "NTFS saves you from data loss because
of journalling and transaction rollback" are rarely challenged.


--------------- ----- ---- --- -- - - -
Tech Support: The guys who follow the
'Parade of New Products' with a shovel.
 

Robert Hancock

cquirke said:
In practice, MS is not concerned with your data at all; only the
sanity of the file system. After all, the only impact these matters
have on MS is in terms of support calls. No vendor assumes any
responsibility for your data, so from their perspective, it's
irrelevant. From your perspective, it's the only unique part of the
system that cannot be restored by throwing money at new parts.

The trouble is, these matters will only come to a head when things go
wrong. Even then, the user has to believe what the tech says, and
many techs are insufficiently skilled or concerned about data
recovery. So bogus claims like "NTFS saves you from data loss because
of journalling and transaction rollback" are rarely challenged.

I think that the main claimed advantage of NTFS journaling is that it
avoids the need for a full chkdsk after an unclean shutdown. This is not
such a big deal for a home system, but if you have a server containing
hundreds of user profiles and many thousands of files, the time it takes
to run a full file system check is very significant (many hours in
some cases, apparently), during which time the server is not available.

There are some advantages as far as data integrity, however - in FAT32
etc. there are probably some cases where the file system state after a
crash can't be reconciled properly and some files end up orphaned or
lost; NTFS would prevent this from happening.
 

cquirke (MVP Win9x)

Robert Hancock wrote:
I think that the main claimed advantage of NTFS journaling is that it
avoids the need for a full chkdsk after an unclean shutdown. This is not
such a big deal for a home system, but if you have a server containing
hundreds of user profiles and many thousands of files, the time it takes
to run a full file system check is very significant (many hours in
some cases, apparently), during which time the server is not available.

I attain that advantage in a different way; by keeping C: small and
uncluttered. Most write traffic (temp, pagefile, TIF) is on C:, so C:
usually has to be checked after a bad exit, plus the traffic load
means C: is most likely to get corrupted and lose data.

So with an 8G C: and data held on a 2G D:, the "difficult" parts of
disk maintenance (AutoChk after bad exits, defrag) are fast. It's
less often that the bulk of the HD (E:) would have to be checked.
There are some advantages as far as data integrity, however - in FAT32
etc. there are probably some cases where the file system state after a
crash can't be reconciled properly and some files end up orphaned or
lost; NTFS would prevent this from happening.

These are the details I'm trying to pin down, but most folks just trot
out "NTFS is more secure" (true, but not relevant at the level of
abstraction I'm looking at) or point to journalling as if that could
prevent wild writes or HD failure from corrupting data.

NTFS is not "like FATxx but better"; it's a completely different file
system. For example: there's no FAT, and directories are indexed
rather than searched from beginning to end. What do these differences
mean, when it comes to the risk of corruption and data loss?

Well, avoiding linear directory lookup could reduce the critical
window for directory updates, if it means an entire cluster chain
doesn't have to be written back to disk whenever a directory entry
changes. On the other hand, it's far easier to manually repair a
"flat" file than to fix a binary index.

As to "no FAT"; well, the information about which cluster comes next
has to be stored somewhere. As I understand it, free clusters are
tracked in a bitmap, while data cluster chains are managed as "runs".

Both of these shrink the size of the metadata required, compared to a
FAT that stores an entire address for every data cluster. A bit to
mark "used" may be required for every cluster, but that's a lot (er,
1/32) smaller than 32 bits per cluster address.

By the same token, it should take less space to hold only the starting
cluster address for each contiguous fragment of a file's total data
clusters. Only those clusters that start a piece of the chain have to
be tracked; the rest are implicitly assumed to follow for the length
of that particular cluster run.
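(In code terms the saving is easy to see - collapsing an explicit FAT-style cluster chain into (start, length) runs:)

def to_runs(clusters):
    # Collapse an explicit cluster chain into NTFS-style extents.
    runs = []
    for c in clusters:
        if runs and c == runs[-1][0] + runs[-1][1]:
            start, length = runs[-1]
            runs[-1] = (start, length + 1)   # continues the current run
        else:
            runs.append((c, 1))              # a fragment starts a new run
    return runs

print(to_runs([100, 101, 102, 103, 500, 501, 502]))
# -> [(100, 4), (500, 3)]: seven cluster addresses stored as two runs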

What is not clear is to what extent these crucial structures are
duplicated for safety - given that there's no other way to deduce what
they should be. FATxx has two copies of the FAT; does NTFS maintain two
copies of the free space bitmap and cluster run information?


Finally, there's the matter of how to fix things when (not if) they go
wrong - and this is where NTFS truly sucks. It's not the fault of
NTFS as a file system design; it's the lack of decent tools.

When FATxx gets bent, I can use Scandisk interactively to scout the
problem. Scandisk stops when it finds an anomaly and asks me if it
can "fix"; if it looks safe, I let it, else I abort and move on to
Diskedit (a 3rd-party tool from Norton Utilities).

Diskedit also checks the file system for errors, but doesn't fix;
instead, it lists the errors so I can "jump" to them. Diskedit shows
the raw contents of the HD in ways appropriate to the content, e.g.
MBR as MBR, FAT as FAT, dir as dir etc. but I can choose any view I
like, which is helpful for "lost" items.

With a working knowledge of FAT structure - it's simple, and it's
documented - I can manually repair or rebuild file system structures
to taste. In this way, small file system barfs can be fixed cleanly,
with less data loss than if it were left up to Scandisk.

In the case of NTFS, I don't even have an interactive Scandisk. In
fact, in some ways it's worse than the old disk compression stuff that
we avoided for fear of data loss. The only tool I have is ChkDsk,
which either fixes nothing and may throw spurious errors, or ChkDsk /F,
which automatically and irreversibly "fixes" things. It's a disaster!


--------------- ----- ---- --- -- - - -
Tech Support: The guys who follow the
'Parade of New Products' with a shovel.
 
