FAT32 or NTFS?


Al Dykes

10% wastage on 10 volumes is one whole hard disk wasted. Significant? Hell
yes, it increases costs! You should've known that considering it's the job
you do.


Well I have seen it. The data is not recoverable, because it simply gets
corrupted. Data recovery software does little to recover completely corrupt
files.

If you've got corrupt data it's because you've got a hardware problem.

I hope you are not an aircraft engine mechanic on your day job. :)
It sounds to me that you'd be promoting the continued use of piston
engines over turbines, "because they are easy to repair", ignoring the
fact that jets are about 1000 times as reliable.
 

Andrew Rossmann

It's nice to keep your movies, MP3s, etc. on a separate FAT32 drive.
They're large files anyway, so no real disk space is wasted. Not to mention
it's a snap to recover them when your XP partition, done in NTFS, fails.

Try putting a 5G file on your FAT32 partition...
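(The reason, for anyone wondering: FAT32 stores a file's size in a 32-bit
field, so the largest possible file is 2^32 - 1 bytes - just under 4GB. A
5GB file simply cannot be written to a FAT32 volume, however much free
space it has; NTFS has no comparable per-file limit in practice.)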
 

Alias

Andrew Rossmann said:
Try putting a 5G file on your FAT32 partition...

Can't, my entire C drive has 3 gigs on it and that has all my programs and
the OS. My D drive has 398 megs and my E drive has 1.62 gigs. My F drive has
176K. I often wonder why I bought an 80 gig hard drive.

How many people that don't do videos have a 5G file?

Alias
 

J. Clarke

cquirke said:
On Tue, 11 May 2004 06:25:51 -0400, "J. Clarke"


Not a matter of "keep crashing during writes". If things never went
wrong, it wouldn't matter what file system one used, as it would
always work exactly as designed, right?

If it happens *once*, I might want a chance of data recovery, and I
won't get that with NTFS.

You seem to worry a great deal about low-probability events, while not
concerning yourself with the higher-probability events that NTFS addresses.

If the cost of loss during a write is high, then the correct solution is
to use a purpose-made database management program with its own specialized
file system, not to hope that somebody can recover the data from a
corrupted FAT32 partition.
 

J. Clarke

Alias said:
Can't, my entire C drive has 3 gigs on it and that has all my programs and
the OS. My D drive has 398 megs and my E drive has 1.62 gigs. My F drive
has 176K. I often wonder why I bought an 80 gig hard drive.

How many people that don't do videos have a 5G file?

Most people who "keep your movies . . ." "do videos", so the activities of
those who "don't do videos" are not relevant to the point being made.
 

J. Clarke

Bob said:
You are the one wasting "bandwidth". Bandwidth would not be a problem
if you got DSL or cable.

We'll chat about anything we want to chat about. If you don't like it
then call 1-800-EAT-SHIT.

Now you bugger off, troll.

Apparently you are not aware that in some countries one pays by the byte for
network access.
 

Bob

Apparently you are not aware that in some countries one pays by the byte for
network access.

If so, then download only headers and DEL the ones you do not want to
retrieve in entirety. The header is so small that it does not "waste
bandwidth".

Or stay away from Usenet.


--

Map Of The Vast Right Wing Conspiracy:
http://www.freewebs.com/vrwc/

"You can all go to hell, and I will go to Texas."
--David Crockett
 

cquirke (MVP Win9x)

On Wed, 12 May 2004 04:41:52 GMT, "Alexander Grigoriev"
Consider expanding a file. Before any data can be written to the file, new
clusters have to be allocated, the free-cluster bitmap modified, and the
directory entry modified to reflect the new extent allocated (although the
file length won't be written immediately to the directory entry). Only after
that will you be able to write new data to the file.
You're more likely to have a file with garbage data than to get valid data
truncated. I'd say the latter case is close to impossible.

Just thinking of prior experience with FATxx, where files-in-progress
are so often recoverable as lost cluster chains (where the dir entry has
either been deleted or not yet created) and length mismatches (where
cluster chaining implies a length at odds with the dir entry).

As I understand it, the dir entry is created when the file is opened,
but the length is only set to the correct value when the file is closed.
In between these events, data is indeed written to the cluster chain,
and FAT updates happen at this time.

Presumably a similar sequence of events occurs with NTFS: the dir entry is
created if it does not yet exist, data is written to the current data
run in the data stream, the free-cluster bitmap is hopefully updated, and
then, when done, the dir entry's length value is updated.

I take it that NTFS's transaction logging starts when the file is
opened and ends (and commits) when the file is closed.

Good programming practice may prefer to hold "real" files open for as
short a time as possible. The longer the file is held open, the more
likely it is to get messed up, and it also acts as a sand-bar or
skip-file for defrag, backup, etc.

But as opening and closing files are slow, hoggy tasks, you want to do
multiple small updates as a single operation rather than have to open
and close the file each time. The solution may be to hold a
temporary file open just for the bits you are working on, and then
commit these into the "real" file when saved.

In between opening the file and saving it, the "real" file may be left
closed, though some checks must be done to resolve situations where
something else has modified the file.

From what I see, it's mainly log files and System Restore stuff that
commonly get broken on bad exits from XP. Presumably these files are
held open, but (in the case of log files) flushed frequently?
If you want your data consistent, you have to implement your own transaction
system for your application files. You can use FILE_FLAG_NO_BUFFERING, or,
for special cases, FILE_FLAG_WRITE_THROUGH.

It's been a long while since I worked at that level ;-)
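For what it's worth, a minimal Win32 sketch of that temp-file-then-commit
idea (the file names and the SaveAtomically wrapper are just made up for
illustration; FILE_FLAG_WRITE_THROUGH is the flag mentioned above):

    #include <windows.h>

    /* Write the new contents to a scratch file, force them to disk,
       then swap the scratch file over the "real" file in one rename. */
    BOOL SaveAtomically(const char *realName, const char *tempName,
                        const void *data, DWORD len)
    {
        DWORD written = 0;
        HANDLE h = CreateFileA(tempName, GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS,
                               FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH,
                               NULL);
        if (h == INVALID_HANDLE_VALUE)
            return FALSE;

        if (!WriteFile(h, data, len, &written, NULL) || written != len) {
            CloseHandle(h);
            DeleteFileA(tempName);
            return FALSE;
        }
        FlushFileBuffers(h);   /* belt and braces on top of write-through */
        CloseHandle(h);

        /* The "real" file is only replaced once the scratch copy is safely
           on disk; a crash before this point leaves the old file untouched. */
        return MoveFileExA(tempName, realName,
                           MOVEFILE_REPLACE_EXISTING | MOVEFILE_WRITE_THROUGH);
    }

An interruption at any point leaves you with either the old file or the new
one, never half of each - which is the data preservation I'm on about.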


------------ ----- ---- --- -- - - - -
Our senses are our UI to reality
 

cquirke (MVP Win9x)

10% wastage on 10 volumes is one whole hard disk wasted. Significant? Hell
yes, it increases costs! You should've known that considering it's the job
you do.

That's worst-case. And if the files are 2M, you'd have to use 100 HDs
to need the extra one, which really is a shrug.
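Rough arithmetic to back that up, assuming 32K clusters on a big FAT32
volume: slack averages about half a cluster per file, call it 16K. On 2M
files that's well under 1% of the space, so it takes on the order of 100
drives to waste one; you'd only approach the 10% worst case if the volume
were stuffed with files averaging roughly 160K or less.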
I know about the shuffling it performs. It is not as terrible as you make
it seem. For the benefit of the others reading this thread - basically, 12%
of the initial part of the HD is set aside initially (saving the 16
metafiles) for the MFT to prevent fragmentation (it's not even used at this
point in time). As and when space is required, the MFT reservation is
adjusted. Nothing very "problematic" or sinister in there.

And if the MFT is caught halfway through this process?

Perhaps I don't see the bad mileage of FATxx that you do, because I
don't use one big C: for everything. All of my FATxx-based systems
have C: with 4k clusters, limited to just under 8G, and that's where
most of the write traffic is going on. The bulk of the capacity in E:
is FAT32 with big clusters, but there's not much traffic there.

I'm certainly not recommending 120G HDs be set up as one big FAT32
volume; even though that still has maintainability advantages over
NTFS, the reliability gap may be as wide as you claim.

There is one factor that could lead to software crashes, and that is
uncertainty about free space. An app may query the system for free
space, be told there's enough, and then dump on the HD without
checking for success. Normally it would be concurrent traffic and
disk compression that would cause this "oops", but FAT32 (not FAT16)
does add an extra factor: the free space value that is buffered in the
volume's boot record, which so often gets bent after a bad exit.

Then again, AFAIK startup always checks and recalculates this value;
it's one of the extra overheads of FAT32 at boot time.
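To make the "oops" concrete - a rough sketch, not what any particular app
actually does - the cure is to check each write rather than trust the
free-space figure queried up front:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        static char buffer[64 * 1024];   /* pretend payload */
        ULARGE_INTEGER avail, total, totalFree;
        DWORD written = 0;
        HANDLE h;

        /* Asking up front only yields a hint... */
        if (GetDiskFreeSpaceExA("C:\\", &avail, &total, &totalFree))
            printf("Claimed free: %I64u bytes\n", avail.QuadPart);

        /* ...it's the write itself that has to be checked, because that
           figure can be stale by the time the data hits the disk. */
        h = CreateFileA("C:\\dump.bin", GENERIC_WRITE, 0, NULL,
                        CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;
        if (!WriteFile(h, buffer, sizeof(buffer), &written, NULL)
            || written != sizeof(buffer)) {
            /* the app in the "oops" scenario never gets here,
               because it never looks */
            CloseHandle(h);
            return 1;
        }
        CloseHandle(h);
        return 0;
    }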
I have had an NTFS volume with as little as 25MB free out of 10GB
running with absolutely no problems over a long period of time.

Well I should hope so, as that's the mileage I'd expect in FATxx as
well. It shouldn't blink even if free space fell to 5M. C: might
look like it blew up with 25M to go, but that's prolly because a few
seconds ago it may have been down to 0k free due to a temp or swap
splurge... then again, I've seen 0k-free C: and I haven't seen data
carnage. The data carnage I see is usually where RAM has been flaky
for some time, or there's been a malware strike, or the HD is failing,
or the PC was overclocked, etc.

FATxx doesn't just fall over for no good reason, from what I've seen,
though persistent issues from bad exits might compound into
cross-links later. Not sure if that does happen, but it's possible, though
I'd expect to see a lot more cross-links if that were the case.
Lost data, OK. How about corrupt data?

Depends what has gone wrong. Corrupt data can be better than none,
especially where text is concerned; OTOH, half a .DLL is no bread :)

It's important to know what is corrupted and what is not - and that is
what I have against "auto-fixing" junk. It's the equivalent of
throwing the needles back into the haystack.
The data remains intact. I have had power failures while working with
files and defragging. The files remain - albeit at the cost of the
changes. No issues there. I don't know why you're trying to project this
wrongly.

What happened to the incomplete transaction? That's the data I want
back. I don't want some clueless fixer deciding for me that it's
corrupted and therefore should be discarded without a trace.

It's very easy to look bulletproof if you simply destroy everything
that may be damaged. You could do that in FATxx as well - in fact,
the duhfault behaviour is close - simply by maintaining a list of
files open for writes, which are automatically deleted on bootup.

But that is not data preservation.
I don't understand this statement. Cut the flowery language and start
explaining.

Throwing away any transaction that is not complete is not data
preservation. Basically, you get the worst-case auto-fixing Scandisk
result, i.e. as if you'd let Scandisk automatically fix all errors and
throw away all lost cluster chains.

All of this nonsense about transaction rollback "doing away with the
need" for disk maintenance utilities hinges on the only problem being
the interruption of sane file operations.

What about damage from other causes, such as wild disk writes, bad RAM
corrupting content and address of writes, program crashes, and
deliberate malware raw disk writes as per Witty?

Transaction rollback can do nothing useful here, because these issues
are below the "transaction" level of abstraction. It's like pointing
to a "silence please" sign in a library and thinking this will stop
you getting hit during a bombing raid.

In circumstances like that, the LAST thing you need is a dumbo
auto-fixer that ASSumes all anomalies it sees should be resolved as if
they were interrupted sane file operations.
I don't know why you are propagating falsehood. Maybe because by
implementing FAT32 you get more problems to attend to, and hence more
business.

Nope. I'm busy enough as it is, and I see plenty of NTFS systems with
sick HDs, "do I have a virus?" and messed-up file systems. The FATxx
systems are far quicker to deal with, and have better results. That's
the experience on which I base my recommendations.
Your site is as vague as the rest of your posts (I had not seen
it earlier) and the statement that NTFS is for people who need to hide data
more than recover it is just plain condescending. Don't you understand that
even a basic org would want to implement best practices: public and private
data, templates and files?

Who's talking about "orgs"? This is consumerland we are talking
about; NT is no longer sold only to professionally-administered
installations. Yes, with pro backup and so on, you may care less
about what happens to live data - and some installations will indeed
see unauthorised access as a bigger crisis than data loss. For those
priorities and that environment, NTFS is a useful *part* of what makes
up a secure and robust system.

It's also interesting that you refer to my site as "vague", given that the
only evidence that the NTFS file system is structurally less prone to
corruption has been "transaction rollback!" and "because MS says so".
I've yet to see any structural detail to support that claim, whereas
my site does delve into that level in the FATxx recovery topics.
Perhaps that's why you and your biz will always cater to small/medium sized
clients. Unless you think big, you don't get big.

Quite a revealing comment, that - it implies we should all seek to serve
only the biggest clients, and that small clients aren't worthy of
serious attention. It also implies "one size fits all", i.e. that it is
appropriate to foist solutions derived in big business on small
clients, because bigger is better.

Yes, I will probably stay with small clients, because they interest me
in the same way that large clients do not. XP Home is intended for
this market, and it is from that perspective that I will assess it.
As far as the sig goes, regarding certainty, an uncertain person or one
in two minds (with a disclaimer hanging on his neck) will never take
crucial decisions - it shows a lack of confidence in yourself.

Nope. When you're certain, you stop looking, and it takes longer to
realise that your assumption base has been kicked out from under you -
and then you fall further, and harder.

It's dumb-ass certainty that preceded the Titanic, Hindenburg and
other disasters. It's thinking that code was certain to behave as
designed or intended that leads to holes and exploits - the holes
would be there even without the certainty, but the doubt would have
led to better contingency planning, such as being able to turn off a
damaged service without crippling the whole system.

This latter-day awareness is dawning on MS, as the documentation of XP
SP2 illustrates. Broken certainties have become so common that the OS
needs updating almost as often as av software did a few years ago;
there has to be a mechanism in place to do this *routinely*. SP2's
documentation speaks for the first time of how to manage exploits
*before* patches become available, which is refreshingly realistic.
Essentially, your advice could be viewed as a disservice to society
and the computing fraternity in general, since you're stuck in a
generation no one really cares about.

Nope. When NTFS gets the maintenance tools it deserves, it may be
seen as a best-for-all-jobs solution. But right now, I'd rather have
a maintainable, recoverable and formally-av-scannable solution.

As your current generation has not been able to prevent hardware from
getting flaky, and has notably failed to make malware a thing of the
past, I'll prefer something that offers management of these crises.


-------------------- ----- ---- --- -- - - - -
Tip Of The Day:
To disable the 'Tip of the Day' feature...
 

cquirke (MVP Win9x)

If you've got corrupt data it's because you've got a hardware problem.

So what? I'd still want to be able to recover my data ;-)
(Hint: Hardware failures do happen, and we have to live with that)


------------ ----- ---- --- -- - - - -
Our senses are our UI to reality
 

J. Clarke

Bob said:
If so, then download only headers and DEL the ones you do not want to
retrieve in entirety. The header is so small that it does not "waste
bandwidth".

Or stay away from Usenet.

The trouble is that the header in this case gives no indication that the
content is off-topic.
 

Bob

The trouble is that the header in this case gives no indication that the
content is off-topic.

If I start an off-topic thread, I try to remember to put OT at the
beginning of the subject. However, it is not possible to change the
header once a thread has diverged. One possibility is to start a new
thread with the old subject but with OT prepended.

It has been my observation over the years that bitching on Usenet
about this alleged wasting of precious bandwidth (whatever that means)
wastes more bandwidth than if the whiner had just said nothing.

But then Usenet is a public forum, so I do not complain about whiners.
Let them whine all they want for all I care. It's good for a laugh.

--

Map Of The Vast Right Wing Conspiracy:
http://www.freewebs.com/vrwc/

"You can all go to hell, and I will go to Texas."
--David Crockett
 

Dave LB

If I start an off-topic thread, I try to remember to put OT at
the beginning of the subject. However, it is not possible to
change the header once a thread has diverged.


It is quite easy. See the header.

Maybe Agent can't do this?

One possibility
is to start a new thread with the old subject but with OT
prepended.

It has been my observation over the years that bitching on
Usenet about this alleged wasting of precious bandwidth
(whatever that means) wastes more bandwidth than if the whiner
had just said nothing.

True enough.
 

Dave LB

This is a new thread, not the original one with the header
changed.


This is the same thread. You can tell by the "References" header
and all the message-IDs in it.

Maybe your newsreader is not threading postings if the subject
changes?
 

»Q«

[followup set to n.s.r]

(e-mail address removed) (Bob) wrote in
This is a new thread, not the original one with the header
changed.

Nope, this is an old thread (from csiph.storage, I guess). The
References header, present in your post and mine, tells you which
posts are upthread from here; there are quite a few of them.
Depending on how you have your reader set up to view threads, it may
or may not be displayed below its antecedents in the message list.
I believe the convention would be

OT - FAT32 or NTFS?

as the new subject.

Generally, the convention for changing the Subject header is

new topic (was: old topic)

and the parenthetical should be removed from the Subject in replies.
(Some software will automagically strip the parenthetical, so it's
wise to use the convention, including the colon after the 'was'.)

In the case of simply marking a thread OT, conventions seem to vary
from group to group. [OT] is a good choice IMO, as people can filter
on it without catching things like 'disk rotation' inadvertently.
 

Bob

This is the same thread. You can tell by the "References" header
and all the message-IDs in it.

Sure could have fooled me.
Maybe your newsreader is not threading postings if the subject
changes?

I am using Free Agent ver. 1.2/32 - and this is a new thread. The
other thread shows separately on the thread page.


--

Map Of The Vast Right Wing Conspiracy:
http://www.freewebs.com/vrwc/

"You can all go to hell, and I will go to Texas."
--David Crockett
 

SINNER

* Bob Wrote in news.software.readers:
Sure could have fooled me.

Here is the Refs header for this thread (CR's added for clarity):

References:
<[email protected]>
<[email protected]>
<[email protected]>
<[email protected]>
<[email protected]>
<[email protected]>
<eD4g#[email protected]>
<[email protected]>
<[email protected]>
<[email protected]>
<[email protected]>
<[email protected]>
<[email protected]>
<[email protected]>
<[email protected]>
<[email protected]>
<[email protected]>
<[email protected]>
<[email protected]

There is a setting in most newsreaders (I know Agent has it as well)
that makes changing the Subject start a new thread. All that function does
is strip out the References header. This is controlled by the first
person altering the header; IOW, if you have it set to do that in Agent,
it will not break the thread unless YOU change the topic.
I am using Free Agent ver. 1.2/32 - and this is a new thread. The
other thread shows separately on the thread page.

That is quite an old version of FA, maybe there was a bug.
 

J.B. Moreno

Bob said:
Sure could have fooled me.


I am using Free Agent ver. 1.2/32 - and this is a new thread. The
other thread shows separately on the thread page.

Your newsreader is configured to show it as a separate thread, but
anyone that is scoring based upon References, and anyone that has their
newsreader set to not treat a change of Subject as a new thread, won't see a
difference caused by the change in Subject.
 
