Virtual Machine and NTFS


Paul

Bill said:
I don't know how you (or anyone else for that matter) can get used to such
small type (with a 1600x1200 screen resolution). I'm using 800x600 and
love it, and get mad when a couple of programs force me to use 1024x768.
Who needs all that extra real estate (and small type)? :) But at least
I can see it in the future (1024x768) - but only if I have to.

Actually, I'm running Ubuntu right now, in its windowed world. And comparing
the typeface of the terminal windows I'm working in, they're the exact same
size characters as this Windows thing I'm typing in. So in fact, both
environments (real Windows, virtual Ubuntu), look quite similar. The color
scheme in Ubuntu is different, as are the window decorations, but the text is
just as readable.

I boot another environment occasionally, the Kaspersky offline virus scanner
(which is based on Gentoo), and for whatever reason, the text looks dreadful.
I actually have to open text versions of saved reports from that one, to
be able to read them. Any of the live windows, displaying results, are
unreadable. I think there is something slightly off, about the
resolution setting, but I can't find any tool on that particular
distro, to tell me what resolution is being used. The "xdpyinfo" tool
is missing. As is "xrandr".

So the Linux world can be perfectly normal, or terrible, purely
on somebody's whim.

I could probably fix that Gentoo one, by putting an argument on the
boot command line, to change the video mode, and maybe that would
fix it. But I wouldn't have a clue what I was doing. It
would be something like VGA=0x31A or VGA=792, and then maybe
the text would look better. (As is normal with Linux, you
can spend all day experimenting, and never get anything done.)

Linux kernel video mode numbers

         640×480   800×600   1024×768   1280×1024
256      0x301     0x303     0x305      0x307
32k      0x310     0x313     0x316      0x319
64k      0x311     0x314     0x317      0x31A
16M      0x312     0x315     0x318      0x31B
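
For what it's worth, the vga= value can be given either in hex or in decimal;
they are the same number in different bases, so the table above can be read
either way. A minimal sketch of the conversion (plain arithmetic, not any
distro's tool):

    # Minimal sketch: the kernel's vga= parameter takes the mode number in hex
    # or in decimal; they are the same value written in different bases.
    for mode in (0x305, 0x317, 0x318, 0x31A):
        print(f"vga=0x{mode:X}  is the same as  vga={mode}")
    # 0x31A is 794 (1280x1024, 64k colours); 792 is 0x318 (1024x768, 16M colours)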

Paul
 

J. P. Gilliver (John)

In message <[email protected]>, glee
Are you using XPSP3 Home or Pro Edition as the host OS?

If you find the old Connectix version 5 does not do all you want, try
the newer free version, Virtual PC 2007:
http://www.microsoft.com/downloads/en/details.aspx?FamilyId=04D26402-3199-48A3-AFA2-2DC0B40A73B6&displaylang=en
Thanks for the link.

That page says:

Supported Operating Systems:Windows Server 2003, Standard Edition
(32-bit x86);Windows Server 2003, Standard x64 Edition;Windows Vista
Business;Windows Vista Business 64-bit edition;Windows Vista
Enterprise;Windows Vista Enterprise 64-bit edition;Windows Vista
Ultimate;Windows Vista Ultimate 64-bit edition;Windows XP Professional
Edition;Windows XP Professional x64 Edition;Windows XP Tablet PC Edition

....

Virtual PC 2007 runs on: Windows Vista™ Business; Windows Vista™
Enterprise; Windows Vista™ Ultimate; Windows Server 2003, Standard
Edition; Windows Server 2003, Standard x64 Edition; Windows XP
Professional; Windows XP Professional x64 Edition; or Windows XP Tablet
PC Edition

under "System Requirements". It's not clear to me, but I think the first
list must be the OSs the virtual machine can run, and the second list
the host OSs it'll run under. But anyway, I see no mention of Home in
either list; are you saying it will and they're just not telling us?
 

glee

J. P. Gilliver (John) said:
In message <[email protected]>, glee

Thanks for the link.

That page says:

Supported Operating Systems:Windows Server 2003, Standard Edition
(32-bit x86);Windows Server 2003, Standard x64 Edition;Windows Vista
Business;Windows Vista Business 64-bit edition;Windows Vista
Enterprise;Windows Vista Enterprise 64-bit edition;Windows Vista
Ultimate;Windows Vista Ultimate 64-bit edition;Windows XP Professional
Edition;Windows XP Professional x64 Edition;Windows XP Tablet PC
Edition

...

Virtual PC 2007 runs on: Windows Vista™ Business; Windows Vista™
Enterprise; Windows Vista™ Ultimate; Windows Server 2003, Standard
Edition; Windows Server 2003, Standard x64 Edition; Windows XP
Professional; Windows XP Professional x64 Edition; or Windows XP
Tablet PC Edition

under "System Requirements". It's not clear to me, but I think the
first list must be the OSs the virtual machine can run, and the second
list the host OSs it'll run under. But anyway, I see no mention of
Home in either list; are you saying it will and they're just not
telling us?

The second list is the operating systems you can install it on, as a
host machine. I have read elsewhere that it will install and run on XP
Home as well as Pro, but have never tried.

The first list is what operating systems are "supported" to be run as a
virtual system on the host. Other systems can be run....Win98, Linux,
etc...they are just not "supported" , meaning you won't get any help or
support for issues, there may not be Additions available for everything,
or there may only be partial functionality of the unsupported virtual
system.
 

Philo Pastry

Bill said:
Maybe I'm forgetting something, but I seem to recall that as the
partition size got bigger, the cluster size also HAD to get larger
(up to 32K max) to keep the maximum allowable number of clusters
within the max 16 bit value (65,536) for FAT32.
So how could one possibly have 4 KB clusters on a 500 GB volume
with FAT32?

You should read the following:

http://support.microsoft.com/kb/184006

It contains a mix of truth and fiction.

True:

-------------------
A FAT32-formatted volume *must* contain a minimum of 65,527 clusters.
That's the minimum value - not the max value.

The maximum possible number of clusters on a volume using the FAT32 file
system is 268,435,445. That would equate to a volume size of about
1.099 trillion bytes (1024 gb) using 4kb cluster size.
-------------------

False:

--------------------
You cannot decrease the cluster size on a volume using the FAT32 file
system so that the FAT ends up larger than 16 MB less 64 KB in size.
--------------------

Microsoft claims that the FAT can't exceed 16 mb in size, which equates
to about 4 million clusters given that the FAT uses 4 bytes per cluster.

They say that the FAT can't exceed 16 mb in size because the DOS version
of scandisk is a "16-bit" program that can't read more than 16 mb of
data into memory at once. I showed this was false several years ago by
having DOS scandisk process very large FAT32 volumes of various
configurations, including my 500 gb single-partition FAT32 volume, which
had 120 million clusters (4kb cluster size) and therefore a FAT of over
450 mb.

Microsoft's statement that you can't end up with a FAT larger than 16 mb
is true - if they mean by using Microsoft's own software tools (like
format.com).

Microsoft's own FAT32 formatting tools are designed to keep the FAT size
at or under 16 mb, which means that a FAT32 volume should max out at 128
gb (32kb cluster size, 4.177 million total clusters). However,
Microsoft's fdisk and format.com will correctly create a FAT32 volume of
up to 512 gb - but not more. This results in about 16 million clusters.
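
A back-of-the-envelope sketch of the numbers being thrown around above; it
assumes one 4-byte FAT entry per cluster and ignores reserved sectors, the
second FAT copy and rounding, so the figures are approximate:

    # Rough FAT32 arithmetic: max volume size at 4 KB clusters, plus the cluster
    # count and FAT size implied by a few volume-size / cluster-size pairs.
    GB = 1_000_000_000
    KB = 1024

    MAX_FAT32_CLUSTERS = 268_435_445
    print(f"max volume at 4 KB clusters: {MAX_FAT32_CLUSTERS * 4 * KB:,} bytes")
    # -> 1,099,511,582,720 bytes, i.e. about 1.099 trillion bytes (~1024 GiB)

    def clusters_and_fat(volume_bytes, cluster_bytes):
        clusters = volume_bytes // cluster_bytes
        return clusters, clusters * 4        # one 4-byte FAT entry per cluster

    for vol_gb, clus_kb in ((128, 32), (500, 4), (512, 32)):
        clusters, fat_bytes = clusters_and_fat(vol_gb * GB, clus_kb * KB)
        print(f"{vol_gb} gb volume, {clus_kb} kb clusters: {clusters:,} clusters,"
              f" FAT roughly {fat_bytes // (1024 * 1024)} mb")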
 

glee

Bill in Co said:
Maybe I'm forgetting something, but I seem to recall that as the
partition size got bigger, the cluster size also HAD to get larger (up
to 32K max) to keep the maximum allowable number of clusters within
the max 16 bit value (65,536) for FAT32. So how could one possibly
have 4 KB clusters on a 500 GB volume with FAT32?

You can force the cluster size....it just means there is a ridiculously
large number of clusters on a drive that size, and among other things,
most drive tools will not work on a drive with that many clusters
(scandisk, defrag, drive diagnostic apps).
 

Philo Pastry

glee said:
You can force the cluster size....it just means there is a
ridiculously large number of clusters on a drive that size,
and among other things, most drive tools will not work on a
drive with that many clusters (scandisk, defrag, drive
diagnostic apps).

DOS scandisk has no problems scanning volumes with many millions of
clusters (120 million was the most I've tried and it worked).

Windows ME versions of defrag and scandisk (scandskw + diskmaint.dll)
have a cut-off somewhere around 28 to 32 million clusters. The Windows
ME versions of scandisk and defrag are frequently transplanted into
tweaked Win-98 installations.

The MS-DOS version of Fdisk (May 2000 update) has a limit of 512 gb (that's
the largest drive that it can correctly partition). There is something
called "Free Fdisk" that can partition larger drives (at least 750 gb,
and probably up to 1 tb). MS-DOS format.com can format volumes of up to
1024 gb (1 tb).
 

John John - MVP

You can force the cluster size....it just means there is a ridiculously
large number of clusters on a drive that size, and among other things,
most drive tools will not work on a drive with that many clusters
(scandisk, defrag, drive diagnostic apps).

Not to mention that it will result in a ridiculously big FAT of about
500MB! Anyone who understands how the FAT is read in a linear fashion
understands the folly of such a formatting scheme! This formatting
scheme effectively ensures that much of the disk structure will be paged
out; what an incredible hit on disk performance! The disk is already
the single biggest performance bottleneck on any computer, and this silly
formatting scheme will make it an even bigger bottleneck. Good thing
98Guy isn't handing out car advice; he would have us fill the bumpers
with lead while claiming that the added ballast makes cars go faster
while consuming less fuel...

John
 

John John - MVP

While that is true, it rarely comes up as a realistic or practical
limitation for FAT32. The most common multimedia format in use
is the DVD .VOB file, which is self-limited to 1 gb.

People working with video editing and multimedia files often run across
this 4GB file limitation. Backup/imaging utilities also often run into
problems caused by this file size limitation; it is a very practical
limitation of FAT32 and one that users often experience, and people often
post asking about this problem.



The only file type that I ever see exceed the 4 gb size is virtual
machine image files, which you will not see on a win-9x machine but you
would see on an XP (or higher) PC running VM Ware, Microsoft Virtual PC,
etc. But 4 gb should be enough to contain a modest image of a virtual
windows-98 machine.

Windows XP cannot format partitions larger than 32GB to FAT32 because
the increasing size of the FAT for bigger volumes makes these volumes
less efficient, so for performance reasons Microsoft decided to draw the
line at 32GB for FAT32 volumes.


[snip...]

The extra sophistication and transaction journalling performed by NTFS
reduces its overall performance compared to FAT32. So for those who
want to optimize the overall speed of their PCs, FAT32 is a faster
file system than NTFS.

That is not completely true. FAT32 is generally faster on smaller
volumes but on larger volumes NTFS is faster. This is why Microsoft
decided to put a limit of 32GB on the size of volumes which can be
formatted to FAT32 on Windows 2000 and later NT operating systems: the
size of the FAT on larger volumes is a hindrance to performance, and
Microsoft decided that 32GB was an acceptable cut-off point for FAT32
volumes.


That's another common myth about FAT32 - that the appearance of many
.chk files must mean that it's inferior to NTFS.

While it might look untidy, the mere existence of those .chk files doesn't
mean anything about how competent or capable FAT32 is, and it's not
hard to just delete them and get on with your business.

You did not say in your example if the user's drive and OS were operable
and functional despite the existence of those .chk files.

These .chk files are lost file segments that the scandisk utility could
not recover: damaged data! That the operating system remains "operable"
is a laughable excuse if user data is lost! Open a user file on a FAT32
drive, then, while the user is making changes to his file, yank the plug
on the machine and tell us how well (or not) the user data survives such
an event!



What you don't understand about NTFS is that it will silently delete
user-data to restore its own integrity as a way to cope with a failed
transaction, while FAT32 will create lost or orphaned clusters that are
recoverable but whose existence is not itself a liability to the user or
the file system.

Citations please...

John
 

J. P. Gilliver (John)

In message <[email protected]>, glee
The second list is the operating systems you can install it on, as a
host machine. I have read elsewhere that it will install and run on XP
Home as well as Pro, but have never tried.

That's what I thought. (Anyone else know?)
The first list is what operating systems are "supported" to be run as a
virtual system on the host. Other systems can be run....Win98, Linux,
etc...they are just not "supported" , meaning you won't get any help or
support for issues, there may not be Additions available for
everything, or there may only be partial functionality of the
unsupported virtual system.

Yes, I thought so too (-:. [What's an "Addition" in this context?]
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G.5AL-IS-P--Ch++(p)Ar@T0H+Sh0!:`)DNAf

"The people here are more educated and intelligent. Even stupid people in
Britain are smarter than Americans." Madonna, in RT 30 June-6July 2001 (page
32)
 

Hot-Text

Virtual PC 2004 SP1

http://www.microsoft.com/downloads/...9D-DFA8-40BF-AFAF-20BCB7F01CD1&displaylang=en

System Requirements

Supported Operating Systems: Windows 2000 Service Pack 4; Windows Server 2003,
Standard Edition (32-bit x86); Windows XP Service Pack 2

Processor: Athlon®, Duron®, Celeron®, Pentium® II, Pentium III, or Pentium 4
Processor speed: 400 MHz minimum (1 GHz or higher recommended)

RAM: Add the RAM requirement for the host operating system that you will be
using
to the requirement for the guest operating system that you will be using.
If you will be using multiple guest operating systems simultaneously,
total the requirements for all the guest operating systems that you need to
run simultaneously.

Available disk space: To determine the hard disk space required,
add the requirement for each guest operating system that will be installed.

Virtual PC 2004 SP1 runs on:
Windows 2000 Professional SP4,
Windows XP Professional,
and Windows XP Tablet PC Edition.
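
A minimal sketch of the additive sizing rule quoted above (host requirement
plus every guest you plan to run at once); the per-OS figures below are
illustrative assumptions, not numbers taken from Microsoft's page:

    # Toy illustration of "add the host requirement to every simultaneous guest".
    # All MB/GB figures here are assumed examples, not published requirements.
    host_ram_mb = 128                                  # assumed host figure
    guest_ram_mb = {"Windows 98 guest": 64, "Windows 2000 guest": 128}
    print(f"RAM to plan for: {host_ram_mb + sum(guest_ram_mb.values())} MB")

    guest_disk_gb = {"Windows 98 guest": 2, "Windows 2000 guest": 4}   # assumed
    print(f"free disk to plan for: {sum(guest_disk_gb.values())} GB")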
 

Philo Pastry

John said:
Citations please...

http://cquirke.blogspot.com/2006/01/bad-file-system-or-incompetent-os.html

I'll repeat it below.

I'll be waiting for your response.

-------------------

14 January 2006
Bad File System or Incompetent OS?

"Use NTFS instead of FAT32, it's a better file system", goes the
knee-jerk. NTFS is a better file system, but not in a sense that every
norm in FAT32 has been improved; depending on how you use your PC and
what infrastructure you have, FATxx may still be a better choice. All
that is discussed here.

The assertion is often made that NTFS is "more robust" than FAT32, and
that FAT32 "always has errors and gets corrupted" in XP. There are two
apparent aspects to this; NTFS's transaction rollback capability, and
inherent file system robustness. But there's a third, hidden factor as
well.

Transaction Rollback

A blind spot is that the only thing expected to go wrong with file
systems is the interruption of sane write operations. All of the
strategies and defaults in Scandisk and ChkDsk/AutoChk (and automated
handling of "dirty" file system states) are based on this.

When sane file system writes are interrupted in FATxx, you are either
left with a length mismatch between FAT chaining and directory entry (in
which case the file data will be truncated) or a FAT chain that has no
directory entry (in which case the file data may be recovered as a "lost
cluster chain" .chk file). It's very rare that the FAT will be
mismatched (the benign "mismatched FAT", and the only case where blind
one-FAT-over-the-other is safe). After repair, you are left with a sane
file system, and the data you were writing is flagged and logged as
damaged (therefore repaired) and you know you should treat that data
with suspicion.

When sane file system writes are interrupted in NTFS, transaction
rollback "undoes" the operation. This assures file system sanity without
having to "repair" it (in essence, the repair is automated and hidden
from you). It also means that all data that was being written is
smoothly and seamlessly lost. The small print in the articles on
Transaction Rollback make it clear that only the metadata is preserved;
"user data" (i.e. the actual content of the file) is not preserved.

Inherent Robustness

What happens when other things cause file system corruption, such as
insane writes to disk structures, arbitrary sectors written to the wrong
addresses, physically unrecoverable bad sectors, unintentional power
interruptions, or malicious malware payloads a la Witty? (Witty worm,
March 2004). That is the true test of file system robustness, and
survivability pivots on four things; redundant information,
documentation, OS accessibility, and data recovery tools.

FATxx redundancy includes the comparison of file data length as defined
in directory entry vs. FAT cluster chaining, and the dual FATs to
protect chaining information that cannot be deduced should this
information be lost. Redundancy is required not only to guide repair,
but to detect errors in the first place - each cluster address should
appear only once within the FAT and collected directory entries, i.e.
each cluster should be part of the chain of one file or the start of the
data of one file, so it is easy to detect anomalies such as cross-links
and lost cluster chains.
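
A toy model of the kind of cross-check just described (not real on-disk FAT
parsing): walk each directory entry's cluster chain; anything claimed twice is
a cross-link, and any in-use cluster never reached is a lost chain:

    # Toy FAT consistency check: detect cross-linked clusters and lost chains.
    EOC = -1                                  # end-of-chain marker in this toy
    fat = {2: 3, 3: EOC,                      # A.TXT: clusters 2 -> 3
           4: 3,                              # B.TXT: 4 -> 3 (cross-linked!)
           7: 8, 8: EOC}                      # chain with no directory entry
    dir_entries = {"A.TXT": 2, "B.TXT": 4}

    seen = {}
    for name, cluster in dir_entries.items():
        while cluster != EOC:
            if cluster in seen:
                print(f"cross-link: cluster {cluster} claimed by {seen[cluster]} and {name}")
            seen[cluster] = name
            cluster = fat[cluster]

    lost = sorted(c for c in fat if c not in seen)
    print("lost cluster chain(s):", lost)     # -> [7, 8], what scandisk saves as .chk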

NTFS redundancy isn't quite as clear-cut, extending as it does to
duplication of the first 5 records in the Master File Table (MFT). It's
not clear what redundancy there is for anything else, nor are there
tools that can harness this in a user-controlled way.

FATxx is a well-documented standard, and there are plenty of repair
tools available for it. It can be read from a large number of OSs, many
of which are safe for at-risk volumes, i.e. they will not initiate
writes to the at-risk volume of their own accord. Many OSs will tolerate
an utterly deranged FATxx volume simply because unless you initiate an
action on that volume, the OS will simply ignore it. Such OSs can be
used to safely platform your recovery tools, which include
interactively-controllable file system repair tools such as Scandisk.

NTFS is undocumented at the raw bytes level because it is proprietary
and subject to change. This is an unavoidable side-effect of deploying
OS features and security down into the file system (essential if such
security is to be effective), but it does make it hard for tools
vendors. There is no interactive NTFS repair tool such as Scandisk, and
what data recovery tools there are, are mainly of the "trust me, I'll do
it for you" kind. There's no equivalent of Norton DiskEdit, i.e. a raw
sector editor with an understanding of NTFS structure.

More to the point, accessibility is fragile with NTFS. Almost all OSs
depend on NTFS.SYS to access NTFS, whether these be XP (including Safe
Command Only), the bootable XP CD (including Recovery Console), Bart PE
CDR, MS WinPE, Linux that uses the "capture" approach to shelling
NTFS.SYS, or SystemInternals' "Pro" (writable) feeware NTFS drivers for
DOS mode and Win9x GUI.

This came to light when a particular NTFS volume started crashing
NTFS.SYS with STOP 0x24 errors in every context tested (I didn't test
Linux or feeware DOS/Win9x drivers). For starters, that makes ChkDsk
impossible to run, washing out MS's advice to "run ChkDsk /F" to fix the
issue, possible causes of which are sanguinely described as including
"too many files" and "too much file system fragmentation".

The only access I could acquire was BING (www.bootitng.com) to test the
file system as a side-effect of imaging it off and resizing it (it
passes with no errors), and two DOS mode tactics; the LFN-unaware
ReadNTFS utility that allows files and subtrees to be copied off, one at
a time, and full LFN access by loading first an LFN TSR, then the
freeware (read-only) NTFS TSR. Unfortunately, XCopy doesn't see LFNs via
the LFN TSR, and Odi's LFN Tools don't work through drivers such as the
NTFS TSR, so files had to be copied one directory level at a time.

FATxx concentrates all "raw" file system structure at the front of the
disk, making it possible to backup and drop in variations of this
structure while leaving file contents undisturbed. For example, if the
FATs are botched, you can drop in alternate FATs (i.e. using different
repair strategies) and copy off the data under each. It also means the
state of the file system can be snapshotted in quite a small footprint.

In contrast, NTFS sprawls its file system structure all over the place,
mixed in with the data space. This may remove the performance impact of
"back to base" head travel, but it means the whole volume has to be
raw-imaged off to preserve the file system state. This is one of several
compelling arguments in favor of small volumes, if planning for
survivability.

OS Competence

From reading the above, one wonders if NTFS really is more survivable or
robust than FATxx. One also wonders why NTFS advocates are having such
bad mileage with FATxx, given there's little inherent in the file system
structural design to account for this. The answer may lie here.

We know XP is incompetent in managing FAT32 volumes over 32G in size, in
that it is unable to format them. (see below). If you do trick XP into
formatting a volume larger than 32G as FAT32, it fails in the dirtiest,
most destructive way possible; it begins the format (thus irreversibly
clobbering whatever was there before), grinds away for ages, and then
dies with an error when it gets to 32G. This standard of coding is so
bad as to look like a deliberate attempt to create the impression that
FATxx is inherently "bad".

But try this on a FATxx volume; run ChkDsk on it from an XP command
prompt and see how long it takes, then right-click the volume and go
Properties, Tools and "check the file system for errors" and note how
long that takes. Yep, the second process is magically quick; so quick,
it may not even have time to recalculate free space (count all FAT
entries of zero) and compare that to the free space value cached in the
FAT32 boot record.
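
A toy version of that free-space cross-check (again, not real FAT parsing):
count the zero entries in the FAT and compare against the cached free-cluster
count; a quick check that skips the recount can miss the drift:

    # Toy free-space check: 0 means a free cluster in this simplified FAT.
    fat = [0, 0, 3, -1, 0, 6, -1, 0]      # toy FAT entries
    cached_free_count = 3                  # stale cached value (assumed example)

    actual_free = sum(1 for entry in fat if entry == 0)
    if actual_free != cached_free_count:
        print(f"free-space mismatch: counted {actual_free}, cached {cached_free_count}")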

Now test what this implies; deliberately hand-craft errors in a FATxx
file system, do the right-click "check for errors", note that it finds
none, then get out to DOS mode and do a Scandisk and see what that
finds. Riiight... perhaps the reason FATxx "always has errors" in XP is
because XP's tools are too brain-dead to fix them?

My strategy has always been to build on FATxx rather than NTFS, and
retain a Win9x DOS mode as an alternate boot via Boot.ini - so when I
want to check and fix file system errors, I use DOS mode Scandisk,
rather than XP's AutoChk/ChkDsk (I suppress AutoChk). Maybe that's why
I'm not seeing the "FATxx always has errors" problem? Unfortunately, DOS
mode and Scandisk can't be trusted > 137G, so there's one more reason to
prefer small volumes.

---------------

While the author quite correctly observes that XP can't format a FAT32
volume larger than 32gb, it's been my experience that when a FAT32
volume (or drive) of any size is pre-formatted and then presented to XP,
XP has no problems mounting and using the volume / drive, and XP
can even be installed on and operate from such a volume / drive.

The author also mentions the 137 gb volume size issue that is associated
with FAT32, but that association is false. It originates from the fact
that the 32-bit protected mode driver (ESDI_506.PDR) used by win-98 has
a "flaw" that prevents it from correctly addressing sectors beyond the
137 gb point on the drive. There are several work-arounds for this
(third party replacement for that driver, the use of SATA raid mode,
etc) but that issue is relevant only to win-98 and how it handles large
FAT32 volumes, not how XP handles large FAT32 volumes.
 

Buffalo

Philo said:
philo top-posted:


This is how Philo surrenders an argument. Watch:


Because he didn't quote the rest of my statement:
How stupid!!
You truly should just keep quiet and leave with any dignity that may
possibly remain.
Buffalo
 

Philo Pastry

John said:
How absurd, it appears that some don't understand file system
atomicity!

And with your arm-waving dismissal, you concede the debate and surrender
your position.

Don't think that other people don't notice how you just bailed out of
this argument.

Atomicity does not equal superiority, btw.

If you had any real insight into this issue or a defendable position you
would counter the points that Quirke makes, one by one. But you have
neither.
 

Philo Pastry

John said:
People working with video editing and multimedia files often run
across this 4GB file limitation. Backup/imaging utilities also
often run into problems caused by this file size limitation,

About 3 years ago I installed XP on a 250 gb FAT-32 partitioned hard
drive and installed Adobe Premiere CS3. It had no problems creating
large video files that spanned the 4 gb file-size limit of FAT32.
Windows XP cannot format partitions larger than 32GB to FAT32
because the increasing size of the FAT for bigger volumes makes
these volumes less efficient (bla bla bla)

Other than saying that this behavior was "by design", Microsoft has
never said *why* they gave the NT line of OS's the handicap of not being
able to create FAT32 volumes larger than 32 gb.

It's a fallacy that the entire FAT must be loaded into memory by any OS
(win-9x/XP, etc) for the OS to access the volume.

Go ahead and cite some performance statistics that show that performance
of random-size file read/write operations goes down as the FAT size (# of
clusters) goes up.

Remember, we are not talking about cluster size here. FAT32 cluster
size (and hence small file storage efficiency) can be exactly the same
as NTFS regardless of the size of the volume.
 

John John - MVP

And with your arm-waving dismissal, you concede the debate and surrender
your position.

Don't think that other people don't notice how you just bailed out of
this argument.

Atomicity does not equal superiority, btw.

If you had any real insight into this issue or a defendable position you
would counter the points that Quirke makes, one by one. But you have
neither.

You don't understand a thing that Chris wrote about in his blog!

Let's address your blatant lie:

"What you don't understand about NTFS is that it will silently delete
user-data to restore its own integrity as a way to cope with a failed
transaction..."

It is you who doesn't understand anything about how NTFS works, so you
spread lies and nonsense! NTFS DOES NOT silently delete user data to
restore its own integrity, and C. Quirke does not in any way say that in
his blog. What is being described is journaling, and it is perfectly
normal NTFS behaviour; this journaling ensures atomicity of the write
operations. You on the other hand seem to think that it is preferable to
have the file system keep incomplete or corrupt write operations and
then have scandisk run at boot time so that it may /try/ to recover lost
clusters, or so that it may save damaged file segments in .chk files so
that you may then /try/ to fix the file. There is no saying whether one
of the segments pieced together by scandisk doesn't contain zeros to
make up for the data that was lost in the segment during the failed
write operation; Chris mentions in his blog how data which was recovered
by Scandisk should be treated as suspicious. The NTFS method is to use
journaling instead to guarantee atomicity of the write operation, to
guarantee that the write is complete and free of errors. I prefer the
latter, and I am sure that most reading here would prefer to keep the
previous good version of the file that experienced a write failure
rather than have the file system keep a newer copy of the file when it
is incomplete or corrupt! Keeping the older version of a file when
atomicity cannot be guaranteed is not silently deleting user data so
that NTFS can restore its own integrity; it's a way of ensuring the
integrity of the user data rather than saving garbage!

John
 

mm

If you're actually starting from scratch (which "have all the parts"
suggests to me that you are) anyway, received wisdom here seems to be

I had everything but the hard drive, so yeah, I'm starting from scratch
on that. Thanks. There is a lot of thread to read. Been very busy,
but I'll have time soon, I think. I plan to get back to you.
 

John John - MVP

About 3 years ago I installed XP on a 250 gb FAT-32 partitioned hard
drive and installed Adobe Premiere CS3. It had no problems creating
large video files that spanned the 4 gb file-size limit of FAT32.


Other than saying that this behavior was "by design", Microsoft has
never said *why* they gave the NT line of OS's the handicap of not being
able to create FAT32 volumes larger than 32 gb.

Raymond Chen talks about this here:

Windows Confidential: A Brief and Incomplete History of FAT32
http://technet.microsoft.com/en-us/magazine/2006.07.windowsconfidential.aspx
It's a fallacy that the entire FAT must be loaded into memory by any OS
(win-9x/XP, etc) for the OS to access the volume.

Of course it's a fallacy, and no one here said that the entire FAT had to
be loaded in memory; what you don't understand is that the FAT is
extensively accessed during disk operations, and having it cached in
RAM is one of the most efficient methods of speeding up disk operations.
You on the other hand seem to think that having the FAT as large as
possible and then paging it to disk is a smart thing to do... Why else
would anyone format a 500gb FAT32 volume with 4K clusters? What exactly
do you think that you will gain with this formatting scheme that will be
so great as to dismiss the whopping performance hit provided by a 500MB FAT?
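
To put a rough number on the caching point, here is a small sketch; the 32 MB
cache budget is an illustrative assumption, not a real VCACHE figure:

    # How much of the FAT a fixed cache budget could hold, for two cluster sizes
    # on a 500 GB volume (4 bytes per FAT32 entry; figures are approximate).
    GB, MB, KB = 1_000_000_000, 1024 * 1024, 1024
    cache_budget = 32 * MB                       # assumed, illustrative budget

    for cluster_kb in (4, 32):
        fat_bytes = (500 * GB // (cluster_kb * KB)) * 4
        share = min(1.0, cache_budget / fat_bytes)
        print(f"{cluster_kb} KB clusters: FAT ~{fat_bytes // MB} MB, "
              f"{share:.0%} of it fits in a {cache_budget // MB} MB cache")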

John
 

Philo Pastry

John said:
Let's address your blatant lie:

"What you don't understand about NTFS is that it will silently
delete user-data to restore its own integrity as a way to cope
with a failed transaction..."

It is you who doesn't understand anything about how NTFS works
so you spread lies and nonsense! NTFS DOES NOT silently delete
user data to restore its own integrity, and
C. Quirke does not in any way say that in his blog.

Perhaps you have a reading comprehension problem.

This is what Quirke says, and what I've experienced first-hand when I see
IIS log file data being wiped away because of power failures:

-----------
It also means that all data that was being written is smoothly and
seamlessly lost. The small print in the articles on Transaction Rollback
make it clear that only the metadata is preserved; "user data" (i.e. the
actual content of the file) is not preserved.
-----------

Do you understand the difference between metadata and "user data"?
What is being described is journaling and it is perfectly normal
NTFS behaviour, this journaling ensures atomicity of the write
operations.

Journalling ensures the *completeness* of write operations. Partially
completed writes are rolled back to their last complete state. That can
mean that user-data is lost.
You on the other hand seem to think that it is preferable to
have the file system keep incomplete or corrupt write operations
and then have scandisk run at boot time so that it may /try/
to recover lost clusters or so that it may save damaged file
segments

In my experience, drive reliability, internal caching and bad-sector
re-mapping have made most of what NTFS does redundant.

The odd thing is - I don't believe I've ever had to resort to scouring
through .chk files for data that was actually part of any sort of user
file that was corrupted. Any time I've come across .chk files, I've
never actually had any use for them.

And I can tell you that I would really be pissed off if I was working on
a file on an NTFS system and it suffered a power failure or some other
sort of interruption and my file got journalled back to some earlier
state just because the file system didn't fully journal its present
state or last write operation.

I've seen too many examples of NT-server log files that contain actual
and up-to-date data one hour, and because of a power failure the system
comes back up and half the stuff that *was* in the log file is gone.
That's an example of meta-data being preserved at the expense of user data.
Chris mentions in his blog how data which was recovered by
Scandisk should be treated as suspicious.

Recovered - as in the creation of .chk files? Like I said, I've never
had a use for them in the first place.
The NTFS method is to use journaling instead to guarantee
atomicity of the write operation, to guarantee that the write
is complete and free of errors.

No. You can still have erroneous write operations under NTFS and FATxx,
and the OS is supposed to retry the operation until the write succeeds.
If the write occurs during a system crash or power failure, there can be
no re-try. Journalling is meant to detect an erroneous write event that
was never corrected / completed and restore the file system to the
previous state before the event, even if some (or most, or all) user
data was in fact written to the drive prior to the failure but was not
journaled. That's where FAT32 will retain the user data, but it will be
lost under NTFS.

And as Quirke says, under FAT you can have a mis-match between the
file-size as recorded in the directory entry vs the length of the FAT
chain, which is easily fixable.

He talks a lot more about the relative complexity of the actual
structure of NTFS compared to FAT32, the lack of proper documentation
and of diagnostic and repair tools, and the idea that the MFT may not be as
recoverable or redundant as the dual FATs of FAT32.

What is especially interesting is that a faulty FAT32 volume can be
mounted and inspected with confidence that it won't be immediately
"attacked" by unknown or uncontrolled read/write operations during it's
mounting as a faulty NTFS volume would be by NTFS.SYS. You basically
have to trust that NTFS.SYS knows what it's doing, and that it knows
best how to recover a faulty NTFS volume, and if it places more value on
file recovery vs file-system integrity (there is a huge difference
between the two).
I prefer the latter, and I am sure that most reading here would
prefer to keep the previous good version of the file that
experienced a write failure rather than have the file system
keep a newer copy of the file when it is incomplete or corrupt!

That depends on how large your "atoms" are in your "atomicity" analogy.

I have been burned, and have shaken my head many times, because I've seen
data lost on NTFS volumes due to the interplay between journalling and
write-caching after unexpected system shutdown events.

NTFS is more than journalling. There's the organizational structure or
pattern as to how you store files and directories, and there's the event
or transaction-monitoring and logging operations above that. You could
theoretically have journalling performed on a FAT32 file structure.

But like I said, NTFS is more convoluted and secretive than it needs to
be in the way it stores files on a drive (journalling or no
journalling).
 
