Jaz
Hi all, I hope someone can suggest a method of moving filesystems on
Windows. This is just silly, the lack of tools and methods...
I've tried Ghost and Cygwin (tar piped to cd; tar), but can't figure out
how to get a large filesystem moved without destroying it. (By
"destroyed" I mean: timestamps changed, shortcuts replaced by their
targets, long filenames truncated, etc.)
In the case of Ghost -- booting from floppy AND installing Ghost
Enterprise (which then reboots into DOS) -- I get 200MB per minute...
PER MINUTE!!! On a 160MByte/second disk interface!!! This is between two
Maxtor Atlas 10K drives, which should have a throughput of nearly
90MByte per second (at best). I'm therefore getting 200/(90*60) = 3.7%
of theoretical maximum data transfer. Average!
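For anyone checking my arithmetic, the 3.7% figure falls out of converting 200 MB/minute to MB/second and dividing by the drive's best case:

```shell
# 200 MB/min observed, against a ~90 MB/s best-case drive throughput.
# (200/60) MB/s divided by 90 MB/s, expressed as a percentage.
awk 'BEGIN { printf "%.1f%%\n", 200 / 60 / 90 * 100 }'
```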
Ummm, this simply doesn't happen with Unix systems. Why doesn't M$
build in tools for managing disk subsystems?! I searched for an
entire day for tools to do this, but with no luck whatsoever.
<rant>
I love Sun equipment. I've used Sun since SunOS 3.x, and have
maintained filesystems well over a TB. If I need to move a filesystem
to a larger disk or to RAID, I just boot into single-user mode, mount
up a new device, and: tar cf - . | (cd newpath; tar xfBp -)
This preserves long paths and symlinks, and I can compare the count of
each file type using find just to make sure nothing is missing.
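To spell out that move and the find-based sanity check (here with throwaway temp directories standing in for the old and new mounts, and the Solaris-specific B reblocking flag omitted, since GNU tar doesn't need it for this pipe):

```shell
#!/bin/sh
# Sketch of the tar-pipe move: SRC and DST are stand-ins for the old
# and new filesystems.
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir -p "$SRC/a/b"
echo hello > "$SRC/a/b/file.txt"
ln -s a/b/file.txt "$SRC/link"

# Copy everything, preserving permissions and timestamps (-p).
( cd "$SRC" && tar cf - . ) | ( cd "$DST" && tar xpf - )

# Sanity check: per-type counts should match on both sides.
files_src=$(find "$SRC" -type f | wc -l)
files_dst=$(find "$DST" -type f | wc -l)
links_src=$(find "$SRC" -type l | wc -l)
links_dst=$(find "$DST" -type l | wc -l)
echo "files: $files_src vs $files_dst, symlinks: $links_src vs $links_dst"
```

If the counts differ, something was dropped in transit and the copy shouldn't be trusted.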
But Windows?! What a joke! Why is it that all the companies I've
worked for over the past two years use Windows for file servers (or
any server, for that matter)? Okay, the answer is clear: Exchange,
Outlook, Project, etc., and the executive mentality is that they can
have their cake and eat it too, if they throw a ton of money at the
security and maintenance problems.
But for us techies who maintain the hardware... We have to deal with
crap support from Dell (flame on) -- tho I give good marks to HP and
IBM -- and a disgusting mire of OS problems from Microsoft.
</rant>
Phew! Feels good to get that off my chest.
So, any suggestions, ideas, or expressions of pity?
Thanks in advance,
JAZ
(Please excuse the 'burp' when replying)