Virtual Machine and NTFS

mm

Hi! I'm moving to a new machine that probably won't run Win98, so I
planned to run it in a Virtual Machine under WinXP SP3.

Is it okay to have all the hard drive partitions NTFS, even though
Win98 can't normally read NTFS?

Thanks


Much less important:
Is Connectix Virtual PC for Windows, version 5, okay? Or is it
obsolete by now? It lists XP on the box, but I wonder whether
version 5 has USB support.
 
Paul

mm said:
Hi! I'm moving to a new machine that probably won't run Win98, so I
planned to run it in a Virtual Machine under WinXP SP3.

Is it okay to have all the hard drive partitions NTFS, even though
Win98 can't normally read NTFS?

Thanks


Much less important:
Is Connectix Virtual PC for Windows, version 5, okay? Or is it
obsolete by now? It lists XP on the box, but I wonder whether
version 5 has USB support.

I'm using VirtualPC 2007, which is a free download from Microsoft.
There was an original file, and an SP1 file, and since they're both
about the same size, I expect the SP1 file is standalone and ready to go.
(I probably installed them sequentially, but it likely isn't necessary.)

"Microsoft Virtual PC 2007 SP1"
http://www.microsoft.com/downloads/en/details.aspx?FamilyID=28C97D22-6EB8-4A09-A7F7-F6C7A1F000B5

You can install Win98 inside the virtual machine and use FAT32
if you want. The virtual environment emulates a "hard drive", so
even Linux EXT2/EXT3/EXT4 or DOS is possible. It's up to your installer CD
or floppies to do the formatting, etc. (Or, alternately, you can even run
multiple different installer CDs: run a GParted CD in the
virtual environment, set up partitions perhaps, reboot the VM,
then run the Win98 installer CD, and so on.)

A 512MB RAM setting is the best choice for Win98, before you
do the install. You can change the RAM setting later, as long
as the VM is shut down first so you can make the change.

VPC2007 is very flexible. The only thing it isn't good at is support
for Linux via the add-ons pack. Add-ons are provided for
Windows OSes, so the desktop integration is seamless (drag
and drop into a VPC window works). But for Linux, only a couple of
OSes have support, and I didn't have any luck doing anything
with the provided files. So I don't know if the limited support
for Linux is helping any actual users or not. Getting files
across to Linux in there is a PITA. Normally, I keep
file sharing turned off in WinXP, so I don't really want
to go the SAMBA route in Linux just so I can transfer files.
I have used SAMBA to transfer files across, but that messes
up some other things.

In any case, I don't want to spoil your fun. Start experimenting!
You can't break anything. You can run out of disk space or
RAM of course.

And if you need to "drag a real environment" into the virtual world,
there is this. My WinXP is on a ~70GB partition, and using this
tool, it copies just the files into a VHD. The resulting file when
I did it was about 46GB. When you run a virtual machine using
that VHD file, of course the emulated hardware doesn't match,
and in the case of WinXP, you'd be facing activation. The implication
for you is that you may be able to copy your existing Win98 image across,
application programs and all. I haven't tried that with Win98 (I
no longer have a working install of it). So if you really
want to be heroic, you can give this a shot. You'll need to store
the output from this tool on an NTFS partition (due to the file size),
but the emulated machine inside can be FAT32. So if your Win98
is installed on a FAT32 partition, this tool can still make a
VHD from it. This tool copies the MBR of the real disk onto
the virtual VHD, but only so that the boot sequence inside
the virtual environment will work properly.

http://technet.microsoft.com/en-ca/sysinternals/ee656415.aspx

BTW - if you have a multi-core processor, VPC2007 emulates a single
core inside the virtual environment. The disk2vhd tool has a "HAL
change" option, which is how I could move my WinXP image from a dual
core processor into the single-core environment and still
have it boot properly. When I did my experiment, I wasn't expecting
to have a working WinXP (I don't have two license keys for that
purpose). What I did want, though, was to be able to use other tools,
like an offline virus scanner - in other words, I wanted a
realistic set of files for test purposes. While WinXP did boot,
of course it gave me a 72-hour deadline to activate, so such a
move from real to virtual would only have lasted that long if
I intended to work in it.

Paul
 
mm

It should work just fine.

If there are any problems, they will not be due to the drive
being NTFS, at any rate.

Great, thank you. Now I have all the parts to fix up my friend's old
2.4 gig Dell for myself. I think I'll like the increased speed.
 
J. P. Gilliver (John)

mm said:
Great, thank you. Now I have all the parts to fix up my friend's old
2.4 gig Dell for myself. I think I'll like the increased speed.
If you're actually starting from scratch (which "have all the parts"
suggests to me you are), received wisdom here seems to be
that you should set it up as FAT anyway: the alleged benefits of NTFS
are largely moot for the single home user, and XP will operate
perfectly happily under FAT.
 
glee

J. P. Gilliver (John) said:
If you're actually starting from scratch (which "have all the parts"
suggests to me you are), received wisdom here seems to be
that you should set it up as FAT anyway: the alleged benefits of NTFS
are largely moot for the single home user, and XP will operate
perfectly happily under FAT.

I too must beg to differ, for many of the same reasons as philo. There
are other reasons besides security to use NTFS in this scenario.
Also, if you use FAT32, you will have to limit Virtual PC's VHD files'
size to 4GB, another shortcoming.
 
Philo is wrong

philo said:
I beg to differ.

First off, on a large partition fat32 has very poor cluster
size as compared to NTFS.

That is myth #1.

I have formatted a 500 GB SATA drive as a single FAT32 partition using a
4 KB cluster size (the default cluster size for NTFS) and have installed
and run Windows 98 SE from such a drive. That drive had 120 million
clusters and is not compatible with certain drive diagnostic and
optimization tools (like the Windows ME version of scandisk). The DOS
version of scandisk does run and function properly, however.

You must use third-party drive-preparation software to create a FAT32
volume with a non-standard cluster size, because Microsoft intentionally
forces format.com to scale up the cluster size along with the volume
size so as to maintain a maximum of about 2 million clusters. There is no
technical reason for doing this, but it established in the minds of many
the idea that FAT32 must use large cluster sizes as volumes get bigger.

All that said, it should be noted that maintaining a small cluster size
(say 4 KB) on a relatively large volume (say, anything larger than 32
GB) is not really useful from a file-layout perspective. For those who
have large drives (250 GB or larger) and create large partitions
just to store large media files, 32 KB clusters are more
optimal than 4 KB.
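The cluster arithmetic being argued here is easy to check; a quick sketch (the ~2-million-cluster cap attributed to format.com is the poster's figure, not an official spec):

```python
# Rough FAT32 cluster arithmetic for the volumes discussed above.
GB = 10**9   # decimal gigabytes, as drive makers count them
KB = 1024

def cluster_count(volume_bytes, cluster_bytes):
    """Approximate number of clusters on a volume."""
    return volume_bytes // cluster_bytes

# A 500 GB volume with 4 KB clusters: roughly 120 million clusters,
# matching the figure quoted for the Win98 experiment above.
print(cluster_count(500 * GB, 4 * KB))    # 122070312

# If format.com scales the cluster size to keep the count near
# 2 million, a 32 GB volume with 32 KB clusters is about right:
print(cluster_count(32 * GB, 32 * KB))    # 976562
```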
Additionally, since many people are now storing movies and other
large files, FAT32 cannot handle any files over 4 gigs.

While that is true, it rarely comes up as a realistic or practical
limitation for FAT32. The most common multimedia format in use
is the DVD .VOB file, which self-limits to 1 GB.

The only file type that I ever see exceed the 4 GB size is the virtual
machine image file, which you will not see on a Win-9x machine but
would see on an XP (or higher) PC running VMware, Microsoft Virtual PC,
etc. But 4 GB should be enough to contain a modest image of a virtual
Windows 98 machine.
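The 4 GB ceiling mentioned here comes from FAT32 storing file sizes in a 32-bit field; a quick sanity check (the image sizes are illustrative):

```python
# FAT32 records a file's size in a 32-bit field, so the largest
# possible file is 2**32 - 1 bytes: one byte short of 4 GiB.
FAT32_MAX_FILE = 2**32 - 1
GiB = 2**30

print(FAT32_MAX_FILE)             # 4294967295
print(FAT32_MAX_FILE < 4 * GiB)   # True: just under 4 GiB

# A modest Win98 virtual-disk image (say 2 GiB) fits comfortably,
# and a 1 GB DVD .VOB file is nowhere near the limit either.
assert 2 * GiB <= FAT32_MAX_FILE
assert 1 * GiB <= FAT32_MAX_FILE
```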
Additionally, XP is *deliberately* crippled in that it cannot create
a fat32 partition larger than 32 gigs.

That is true, but it's not a limitation of FAT32 (I thought this was a
list of bad things about FAT32).

There is plenty of third-party software that allows you to create FAT32
volumes larger than 32 GB on a Win-2K/XP/etc. machine, and one can always
boot an MS-DOS floppy with format and fdisk and create such a volume
that way.
If one wanted to install XP on a fat32 partition larger than
32 gigs, though it's possible to do... it's not possible to
do from the XP installer.

If the desired FAT32 partition has already been created before
starting the installation of XP, then XP will install itself onto that
partition, even if the partition is larger than 32 GB.
Though for a home user, the security features of NTFS may not be
needed, what's extremely important is the fault tolerance of NTFS.

Given modern drives that for the past 5 to 8 years have had their own
ability to detect and re-map bad sectors and their own internal caching,
the need for the transaction journalling performed by NTFS has been
greatly reduced. And for the typical home or SOHO PC that is not a
server, NTFS is more of a liability than a benefit.

NTFS is a proprietary format and is not fully documented. Its
directory structure is stored in a distributed way across the drive,
mixed in with user data. An NTFS volume can be hosed in such a way as
to render recovery practically impossible, and most NTFS recovery
software is very expensive. The FAT32 file structure is simple,
file-chain reconstruction is trivial, and it can restore a volume that at
first look appears to be completely trashed.
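The file-chain reconstruction described here can be sketched with a toy model (the dict below stands in for the on-disk FAT; real FAT32 packs this into a table of 32-bit entries):

```python
# Toy model of a FAT-style cluster chain: each file is a linked
# list of cluster numbers, ending at an end-of-chain marker.
END_OF_CHAIN = 0x0FFFFFFF  # FAT32's end-of-chain marker value

def follow_chain(fat, start_cluster):
    """Walk a file's cluster chain from its starting cluster."""
    chain = []
    cluster = start_cluster
    while cluster != END_OF_CHAIN:
        chain.append(cluster)
        cluster = fat[cluster]
    return chain

# A tiny fake FAT: one file occupying clusters 5 -> 6 -> 9.
fat = {5: 6, 6: 9, 9: END_OF_CHAIN}
print(follow_chain(fat, 5))   # [5, 6, 9]
```

Because every file is just such a chain, a recovery tool that can find starting clusters can rebuild files even when directory entries are gone, which is the basis of the "trivial" reconstruction claimed above.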

The extra sophistication and transaction journalling performed by NTFS
reduce its overall performance compared to FAT32. So for those who
want to optimize the overall speed of their PCs, FAT32 is a faster
file system than NTFS.
I do a lot of computer repair work and have seen entire FAT32
file systems hosed by a bad shutdown. The user, in an attempt to
fix things, has typically run scandisk and *sometimes* has ended
up with a drive full of .chk files.

That's another common myth about FAT32: that the appearance of many
.chk files must mean that it's inferior to NTFS.

While it might look untidy, the mere existence of those .chk files doesn't
mean anything about how competent or capable FAT32 is, and it's not
hard to just delete them and get on with your business.

You did not say in your example whether the user's drive and OS were operable
and functional despite the existence of those .chk files.
What I am saying is that NTFS is considerably more resilient.

What you don't understand about NTFS is that it will silently delete
user data to restore its own integrity as a way to cope with a failed
transaction, while FAT32 will create lost or orphaned clusters that are
recoverable and whose existence is not itself a liability to the user or
the file system.
The thing some people find convenient about fat32 is that the
system can easily be accessed by a win98 boot floppy.

Or, if you've installed DOS first on a FAT32 drive and then install XP
as a second OS, you can have a choice at boot-up to run DOS or XP.
However an NTFS drive can still be accessed from the repair
console...

The repair console is garbage and does not compare in any way to the
utility and capability of a real DOS-type command environment.
 
Hot-Text

The FAT32 system and the NTFS system both set up cluster sizes the same way,
according to the size of the hard drive!
Philo, that's why they say you are wrong.
Under XP, a FAT32 system and an NTFS system repair the same.
Philo, that's why they say you are wrong.
And if, after running scandisk, there are a lot of .chk files, it's time to get
a new hard drive and Xcopy the old hard drive to it!
Philo, that's why they say you are wrong.
And you are on a Linux i686, reading the newsgroups from Thunderbird.
Philo, that's why they say you are wrong.
 
Paul

glee said:
I too must beg to differ, for many of the same reasons as philo. There
are other reasons besides security to use NTFS in this scenario.
Also, if you use FAT32, you will have to limit Virtual PC's VHD files'
size to 4GB, another shortcoming.

I think what I'm seeing here is that Virtual PC will make multiple files when
the virtual disk goes above 4GB. If you create a virtual machine on a
FAT32 partition, and the virtual machine stores 12GB of files, you'll
see three VHD files on disk. If the same virtual machine was started
on an NTFS partition, then you'd see a single 12GB file.

The upper limit on virtual volume size could be 128GB. If that were
stored on a FAT32 partition, it would take 32 VHDs.
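Paul's file counts are simple division, assuming the VHD is split into pieces at the FAT32 4GB per-file ceiling:

```python
import math

def vhd_pieces(virtual_disk_gb, piece_gb=4):
    """Number of on-disk files if a VHD must be split into pieces
    no larger than the FAT32 per-file limit (4 GB, as above)."""
    return math.ceil(virtual_disk_gb / piece_gb)

print(vhd_pieces(12))    # 3 files for a 12GB virtual disk
print(vhd_pieces(128))   # 32 files for a 128GB virtual disk
```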

Paul
 
Patok

Paul said:
I'm using VirtualPC 2007, which is a free download from Microsoft.
There was an original file, and an SP1 file, and since they're both
about the same size, I expect the SP1 file is standalone and ready to go.
(I probably installed them sequentially, but it likely isn't necessary.)

"Microsoft Virtual PC 2007 SP1"
http://www.microsoft.com/downloads/en/details.aspx?FamilyID=28C97D22-6EB8-4A09-A7F7-F6C7A1F000B5

Thanks for the heads-up. I had assumed (and you know how that turns out) :)
that the MVPC is only for Windows 7 (because of the name). I need to have some
Linux running because of some compilers, but couldn't install Ubuntu because of
Wubi and Grub issues in the latest version, which are solvable in principle, but
require too much effort. I have run VMware before with no problems - is VPC 2007
similar in performance and functionality? I'll give it a try.
 
Paul

Patok said:
Thanks for the heads-up. I had assumed (and you know how that turns
out) :) that the MVPC is only for Windows 7 (because of the name). I
need to have some Linux running because of some compilers, but couldn't
install Ubuntu because of Wubi and Grub issues in the latest version,
which are solvable in principle, but require too much effort. I have run
VMware before with no problems - is VPC 2007 similar in performance and
functionality? I'll give it a try.

I run Ubuntu 10.04 in a VPC2007 virtual machine. The virtual machine
only emulates one core of my dual core processor. There is no
desktop integration - I can't drag and drop a file from Windows
into the Linux desktop. I also had to "jump through hoops" to set the
emulated screen resolution. Using an Nvidia "xorgconfig" utility,
I generated a half decent xorg.conf and used that so I could get
more than 800x600 resolution. Currently, I'm at 1600x1200, but
I can't drop the screen size to anything smaller (screen gets messed
up). I've left it alone, because at least I can see plenty of desktop
space now :-(

If you want to share files, the best way would be with Windows file
sharing (and Linux SAMBA). I don't like to leave file sharing
turned on, but if you need to copy files bad enough, that
is the easiest thing to do. There is also FTP, if you happen
to have an FTPD you can use.

Other than that, it works. To "get the cursor out of the Ubuntu window",
you press "right-Alt" key, and move the cursor out. If there were
proper VPC "add-ins" for the popular Linux distros, things might be
different. Microsoft does offer some, but only for the more
"commercial" versions of Linux (SUSE?).

These never helped me, with any experiments I was doing, but
you never know - someone must use this stuff.

http://www.microsoft.com/downloads/...2f-77dc-4d45-ae4e-e1b05e0a2674&DisplayLang=en

Paul
 
glee

Bill in Co said:
We could get into a debate on this, but with someone posing as "Philo
is wrong", one wonders if it would be worth it. Are you "98guy" in
disguise? :)

I'd say that's quite likely, if not outright obvious. A 500GB SATA
drive as a single 4KB-cluster FAT32 partition, running Win98? Who else
do we know that does this and recommends it? ;-)
 
glee

Paul said:
I run Ubuntu 10.04 in a VPC2007 virtual machine. The virtual machine
only emulates one core of my dual core processor. There is no
desktop integration - I can't drag and drop a file from Windows
into the Linux desktop. I also had to "jump through hoops" to set the
emulated screen resolution. Using an Nvidia "xorgconfig" utility,
I generated a half decent xorg.conf and used that so I could get
more than 800x600 resolution. Currently, I'm at 1600x1200, but
I can't drop the screen size to anything smaller (screen gets messed
up). I've left it alone, because at least I can see plenty of desktop
space now :-(

If you want to share files, the best way would be with Windows file
sharing (and Linux SAMBA). I don't like to leave file sharing
turned on, but if you need to copy files bad enough, that
is the easiest thing to do. There is also FTP, if you happen
to have an FTPD you can use.

Other than that, it works. To "get the cursor out of the Ubuntu
window",
you press "right-Alt" key, and move the cursor out. If there were
proper VPC "add-ins" for the popular Linux distros, things might be
different. Microsoft does offer some, but only for the more
"commercial" versions of Linux (SUSE?).

These never helped me, with any experiments I was doing, but
you never know - someone must use this stuff.

http://www.microsoft.com/downloads/...2f-77dc-4d45-ae4e-e1b05e0a2674&DisplayLang=en

Paul


I've been told VirtualBox is better than VPC for running Linux in a VM.
 
Wrong is Philo

Bill in Co top-poasted and unnecessarily full-quoted:
We could get into a debate on this, but with someone posing
as "Philo is wrong", one wonders if it would be worth it.

You should have just continued the debate, because of course it's going
to be worth it. Do you actually have any ammunition to counter the
points I raised?
Are you "98guy" in disguise?
:)

Affirmative.

philo also top-poasted:
In some cases, after running scandisk,

there were a lot of .chk files but the operating system and data
are intact...the .chk files can simply be deleted

Yup - that's usually the case.
However in *some* situations I've seen all or most data on the
drive converted to .chk files and a data recovery of any type
would be close to impossible.

I've also seen NT servers destroy 14 days worth of IIS log files because
of a power failure. You would think that once a file is closed, that
it's secure and wouldn't be touched by journal recovery, but that's
exactly what happened when real data got replaced by nulls in those
files after the next boot-up.

I have probably over 200 years' worth of FAT32 hard-drive usage
experience (if you add up all the years of service of the various FAT32
drives that I've installed, maintained, or touched in one way or another)
over the past dozen years.

The single most frequent cause of having to really pull out the
sophisticated tools to recover a FAT32 drive is not the
design of FAT32 or an issue with the OS itself (which for me is
Win-95/98). The reason is the drive itself: its inherent stability,
design, and proper functioning.

And perhaps the reason a lot of people frown on FAT32 (and on Win-9x in
general) is the caliber of hardware that was available during
their heyday. Back during 1995 through, I'll say, 2002, hard drives were
shit when it came to reliability and stability, and NTFS was designed to
do things like journalling and dynamic bad-sector remapping because that
stuff wasn't done in the drive.

A simple OS like Win-9x running FAT32 could tolerate flaky drive
operation (even if it meant leaving a trail of .chk files) but a flaky
drive running on an NT-based PC in a server role can really cause
problems for an organization.

So again, let's review:

There were huge changes in PC hardware and hard drives during the 1995 to
2002 timeframe: the amount of RAM installed in the average PC, the
stability of drivers for new chipsets, video cards, etc. Designers were
still learning how to make a stable AGP interface on the motherboard and
the video card. Hard drives were shit in terms of performance and
reliability. Win-9x and FAT32 got a bad rap during that time frame
because of the shitty hardware and pathetic computer specs they were
faced with using.

Hard drives in the range of 1 to 10 GB were the most problematic, and
they date to that era. Once the 20 GB and (more so) the 40 GB drives
began to appear, that marked a new era in hard drive reliability and
sophistication, and the benefits of NTFS from an error-correction
standpoint became irrelevant.

The low point for me was having to recover an 8 GB FAT32 drive that
had no discernible file system on it (for whatever reason). I used
"Lost and Found", which was able to rebuild all of the files on that
drive onto a blank slaved recovery drive using chain reconstruction. That
was 9 or 10 years ago.

Those days are long gone, since all of my Win-98 machines got 80 GB
drives running on 512 MB, P4 2.5 GHz machines 6 years ago.

I'll say this again: NTFS will sacrifice user-data in order to maintain
file-system integrity as it recovers from faulty transactions or
unexpected shutdown events, but FAT32 can tolerate many faulty
transactions without needing to do anything to maintain file-system
usability and accessibility.

If you haven't had much exposure to FAT32 as implemented on a 40 GB or
larger drive during the past 6 years, then you really don't have enough
relevant experience to say that FAT32 is inferior to NTFS in terms of
real-world operational usefulness, stability, or data integrity.

NTFS is not needed for home or SOHO computers; it has no true bootable
command-shell environment; it's a proprietary design and its recovery tools
are far more expensive compared to FAT32; it has several design elements
that add rarely used features but aid malware installation and
operation (root-kits, hidden streams, etc.); and it does not lend itself to
use on flash or solid-state drives (I could go on).
The likelihood of a "repair" turning that catastrophic on an
NTFS file system is considerably less...though of course not
impossible.

The way that the directory structure is designed and stored on an NTFS
volume is far more complex, distributed, and "delicate" compared to
FAT32. Which is why it's like a living thing: always looking out for
itself, healing itself, etc. Those activities place additional burdens
on the hard drive (additional transactions), which themselves take a toll
on the drive mechanics. And they certainly cause a reduction in
file-system performance. FAT32 has no such dynamic overhead; it's a
truly static structure.
As I've mentioned, I've seen some nearly miraculous recoveries on
NTFS systems...one I recall vividly was on a drive that had
physically gone into failure and had severe read/write errors.

And I've held failing FAT32 drives in my hand as they were powered up
and operating, manipulating the drive into various positions and
angles to coax a read operation into succeeding, sometimes
giving the drive a jolt or knock with my other hand to tease that last
cluster into being read as I copied an important file from it to
another drive.

And it worked.

After which I naturally retired that drive - never to be used again in
any of my computers.

That was years ago, and I've never since had to do anything like that.

If it was an NTFS drive, I'm sure that the file system would have nuked
that sector if not the entire file and made it impossible for me to
recover it.
Though it was tedious I ended up retrieving 99% of the data...
and that was due to NTFS' MFT which is of course lacking on
fat32

FAT32 has two FAT structures (two complete copies of the FAT tables), and
even if they are completely destroyed, the simple way that files are
laid out on a FAT32 drive means that it is still possible to reconstruct
the files and get them back, something that can't be done on NTFS.
 
Philo Surrenders

philo top-poasted:

This is how Philo surrenders an argument. Watch:
200 years of experience...
ok you win, my computer experience only goes back to 1968.

Because he didn't quote the rest of my statement:
Sheesh

I didn't even know they had computers 200 years ago.

So this is how you bail on an argument?

Or do you really not understand that 200 years could mean 5 years worth
of experience with 40 hard drives?
damn

I sure missed that one.

Yes - yes you did.

You old phart.

Tell me, do you miss your bum-buddy MEB?
 
Philo Surrenders

Bill said:
Hate to break this to ya, but that's not the same thing.

Oh, because I didn't say 200 *drive* years?

So you must agree with most of the rest of what I wrote, since this is the
only nit you're picking...
 
Hot-Text

Glen Ventura, why do you debate with the Sons of Linux?
Do you see, they do not have a Window to look out of, just to see if there's a dog in
the street!

Now, on to the 500GB SATA: you can't install Windows 98 on a partition
bigger than 32GB.
But you can Xcopy a working Windows 98 from a 32GB partition to a 500GB
SATA with FAT32, and it will run and error at the same time!
To keep it from erring you have to do it like the old Windows 3.0 or 95 days, with
2GB partitions.

With 98 you would have to make one 32GB partition C:\ to install on,
and one 76GB partition and four 100GB partitions to keep it from erring all the
time.
That gives you six partitions, and it's okay to have one partition FAT32 and five
partitions NTFS; Win98 can read NTFS, it just can't run on it!

Or
you can make one 32GB partition, make no more partitions on it, and 98
will run with no errors!
Or,
Glen, you know you can run Windows 98 in a Virtual Machine on a 500GB SATA:
partition #1, 200GB NTFS for C:\Virtual Machine, and partition #2, 32GB
FAT32 for D:\Windows 98.
And the 268GB free for a rainy day.

Partition #1 can be 2GB for C:\Virtual Machine.

Calculator, calculator, calculator.
Use it, it goes a long way.
 
Hot-Text

Go by the size of the partition, like a 32GB, 4KB-cluster FAT32 partition
running Win98.
On a 500GB SATA, making it a 32GB partition will give you 468 free GB,
making that 500GB SATA into a 32GB SATA. If that's all you're running, that's all
it is!
 
