Simple RAID 0, 1? from a first-time newbie...


Bob Davis

I wasn't offended, I just pointed out that what applies
to your PC use is irrelevant to his. Because most don't
actually do much that will benefit from RAID0.


I think you are right that most should not bother, but my point is that
there are cases where RAID0 provides a meaningful performance enhancement.
I don't agree with the 2x claim for a two-drive array, not in real-life
performance anyway, but I see 30-50% for the work I do, and that is a big
help. The other RAID fan is claiming 2x performance based on benchmarks,
HDTach I believe, and he is right on the burst rate. Random access and
read speed will not be that high. Here's how my C: drive stacked up with
one and two 36GB Raptors on the ICH5R controller:

RAID0:  74.4 MB/s, 8.7 ms, 5%, 189.5 MB/s
Single: 49.0 MB/s, 8.8 ms, ?,  99.0 MB/s

The order is sequential read, random access, CPU usage, and burst. I'm
using a large stripe size (128k), which reportedly hurts benchmark results,
but helps with the large files I work with. A smaller stripe size might
expand the performance gap even more on this benchmark. Sandra also reports
an impressive performance boost, but I don't have the single-drive result.
For the RAID test, my two Raptors were 82MB/s and the new array 89MB/s. I
did not expect the two 7200-rpm WD REs to beat the Raptors here!
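As a sanity check on those figures, here's a small Python sketch. The percentages are computed from the HDTach numbers above; the address mapping is just the generic RAID0 striping scheme (128k stripes across two drives, as in my setup), not anything specific to the ICH5R:

```python
# Percent gains implied by the HDTach figures quoted above.
def gain(raid0, single):
    """Percent improvement of the RAID0 figure over the single drive."""
    return (raid0 / single - 1) * 100

print(f"sequential read: +{gain(74.4, 49.0):.0f}%")   # about +52%
print(f"burst:           +{gain(189.5, 99.0):.0f}%")  # about +91%

# Generic RAID0 address mapping: logical offsets are striped across
# the member drives in fixed-size chunks (128 KiB here, two drives).
STRIPE = 128 * 1024
DRIVES = 2

def raid0_location(offset):
    """Return (drive index, offset on that drive) for a logical byte offset."""
    stripe_no = offset // STRIPE
    drive = stripe_no % DRIVES
    drive_offset = (stripe_no // DRIVES) * STRIPE + offset % STRIPE
    return drive, drive_offset
```

A long sequential read crosses stripe boundaries and keeps both drives busy, which is why the sequential and burst numbers scale; a single small random read still lands on one drive, which is why the 8.7 ms access time barely moves.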

That was JUST a comment on that DOUBLE YOUR DRIVE SIZE claim.
RAID0 is a stupid approach if that is all you want to do.


Not necessarily. If you're properly backed up, which should be the rule
anyway, it can be a much cheaper alternative. In my last post I showed my
latest example, comparing two 250's vs. one 500, the former fitting my needs
much better than any single drive. The two-drive solution was 60% cheaper,
assuming you already have the RAID controller. If you must buy a controller
the savings dwindle or disappear.
Awfully long winded approach if all you need is the drive space.
Yes, you say you need the better speed too.


It can still be a cheaper approach for an upgrade if space is the main
concern. If you are properly backed up, a requisite regardless of the
config IMO, there is little downside--only a bit more heat dissipation into
the room and slightly more pull on the PSU from the second drive. If you
are well-ventilated, are well backed up, and use a high-quality PSU, I see
no practical downside. As for the PSU, an HD uses less wattage at spin-up
than most CPUs do continuously, so adding a drive probably isn't a factor
for most people.

You still have a much more complicated config, and
you are at significant risk if the RAID controller fails, etc.


Not at all. I have four IDE clones, plus one off-site, sitting two feet
from me now in mobile racks. If the whole computer goes up in flames I can
recover, after replacing the computer of course. If the array goes down, I
can be up and running in <30 min., probably more like five min. I can
simply attach the cloned drive to an IDE port at any time, which will work
even if the entire RAID controller goes on the fritz.

Irrelevant to whether the increased complexity and risk that's
inevitable with RAID0 is worth it if you don't need the increased speed.


As I said, it isn't for everyone, and I have recommended it to few who ask
me for computer advice. One recent exception was another photographer who
needed 1TB of space on one volume, and RAID0 or JBOD were the only options
that I could think of. JBOD was out without buying another controller, as
neither the on-chip Intel nor on-board Promise will do it.

I do think you're right that the majority shouldn't bother.

That's overstating it. There is a real risk
with the level of backup you are doing.


What is the risk? If the house burns down I still have a clone of C: and a
firewire drive containing my business records and photo archives at an
off-site location. If one drive, both drives, or the controller dies the
fix is quick and easy. I can boot from any one of my cloned drives right
now, and the data missing in the interim (between clones) is available on
two other backup drives that can be restored in a matter of minutes.

You persist in harping on the extreme risk factors I face, but there are
none short of a nuclear attack, in which case I won't need the data anyway.
Two tornadoes could wipe out my house and the off-site location
simultaneously, I suppose. I'll live with that small risk.

Sure. Again, that's why I asked the OP, because there are indeed
some that do benefit from RAID0. There are, however, far fewer
than there used to be, so it's important to get that clear from the OP
because of the risks that are inevitable with RAID0.


I think you are correct about not entering into a RAID0 configuration for
speed alone in most cases, but I think the risk factor isn't that great even
with a minimum of effort to keep backed up. I feel that there should be a
fixed level of redundancy in place regardless of whether you have one or two
drives used in a given volume.

He also doesn't appear to have ANY redundancy whatever,
or any backups either; otherwise he wouldn't have asked
about how to create a RAID0 array the way he did. He
could have just created it in the normal way and restored
from the backups if he actually had full backups.


Well, I'll go on the record in saying that anyone with even halfway
valuable data on their computers should plan on an HD crash. I've been lucky
and have lost two drives in 20 years of using HDs, the first being an old
5MB (not GB) MFM. The only other failure was the IBM I referred to earlier,
and I was able to recover all data on it without resorting to restoring a
clone. I don't take that record for granted, however, and stay ready for a
disaster.

It's not clear that you are, though. Do you have
the RAID hardware redundant too?


I don't need any RAID hardware redundancy. I can plug one of these IDE
clones into an IDE controller this minute and boot right into Windows. My
other array containing interim backups and my main photo archive is on
another RAID controller (Sil3112), and it is backed up on three separate
firewire drives, one off-site.

I bet I couldn't even pick it in a proper double-blind trial with what
I do where I actually give a damn about how long something
takes, as opposed to it happening while I'm sleeping etc.

Sorry if I came across a bit strong, I've got a VERY blunt style. Some
have unkindly claimed it can be like a slap in the face with a dead fish
|-)


That's okay, and I have read your posts with interest regardless of the
"edge." Keep it coming.
 

Bob Davis

That's a very unusual config though; the Photoshop scratch disk is
notorious for that.


I'm not sure it's that unusual. More and more photographers, amateur as
well as pros, are using the computer in their digital photo pursuits--and
RAID0 can help with performance in that arena. That said, I have set up
several computers recently for amateur photographers and opted for a single
74gb Raptor for C: and a larger SATA or PATA for D: (storage).

But usually it's better to avoid moving them at
all than to use RAID0 to speed up the moving.


I don't follow. If you need to move you move.

They move instantly on a single physical drive
when you just move them in the folder trees.


Right, but my moving is from drive to drive. I work on C: and move to the
archive disk (D:) when finished. I have batch files set up that I can run
to back up selected files to the firewire drives when I'm ready.
Additionally, another batch file runs every hour comparing my working
folders on C: and D:. When changes have been made, it updates D:.

Yeah, that was my point, that very few
will get any real benefit from them today.


Again, with digital photography booming, I see the application of RAID0
becoming more mainstream. That said, I still probably won't install it for
most people I help with upgrades.

They used to help with video capture, but now the best approach
is decent digital capture cards, and they don't need RAID0 anymore.


I also do video capture, but never had any trouble with the single SATA or
PATA for that task. I occasionally had trouble with multitasking during
capture, but I cured that by using some discretion, disabling Task Scheduler
while capturing, and setting Priority on the capture software to "high." I
found that sometimes when doing my hourly auto-backup from C: to D: (using
Task Scheduler) the capture software would freak.

I have replaced that single drive recently with the 250x2 RAID0 array mostly
for the economy ($/gb) and speed for the PhotoShop scratch disk. I already
had the on-board controller that wasn't being used, so there was no added
expense for that.

As I've pointed out, you should be comparing it
with a pair of 300G drives not in a raid0 config.


I must yield to that point, 'cause you're right. There is economy in the
250x2 setup I just adopted, but not so much for 100x2, for example. Perhaps
in most cases the cost savings of GB x 2 vs. GB x 1 isn't overwhelming. In
my case, though, I saved $115.
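For anyone wanting to run the same comparison, the $/GB arithmetic is trivial to script. The prices below are placeholders made up purely for illustration (only the $115 saving was actually stated, not the quotes), so substitute current street prices:

```python
def dollars_per_gb(price, capacity_gb):
    """Cost per gigabyte for a drive or a set of drives."""
    return price / capacity_gb

# Placeholder prices for illustration only -- not the actual quotes.
options = {
    "2 x 250GB (RAID0)":   (240.0, 500),
    "1 x 500GB":           (355.0, 500),
    "2 x 300GB (no RAID)": (330.0, 600),
}

for label, (price, gb) in options.items():
    print(f"{label}: ${price:.0f} for {gb}GB -> ${dollars_per_gb(price, gb):.2f}/GB")
```

The point of laying it out this way is that the comparison changes depending on which alternatives you admit, which is exactly the disagreement in this thread.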
 

Rod Speed

I'm not sure it's that unusual.

It is anyway.
More and more photographers, amateur as well as pros, are using the
computer in their digital photo pursuits--and RAID0 can help with
performance in that arena.

It's only Photoshop, with its completely ****ed approach to its scratch
disk, that gets that effect. It's notorious for that ****ed approach.
That said, I have set up several computers recently for amateur
photographers and opted for a single 74gb Raptor for C: and a larger SATA
or PATA for D: (storage).

Yep, like Eric said, that config has real advantages,
particularly the much lower risk to your data.

While you do back up at a decent rate, a week's work
is still a complete pain in the arse to lose for many.

You don't need to back up the scratch disk at all if it's on
a decent-performance single drive, and you don't even need
RAID1 for a decent high-frequency backup of the data drive.
I don't follow. If you need to move you move.

You shouldn't need to move at all if you use the drive properly.
Right, but I my moving is from drive-to-drive. I work on C: and move to
the archive disk (D:) when finished.

So that is a poor approach. If it is done between different folders
on the same physical drive, the 'move' is instantaneous.
I have batch files set up that I can run to backup selected files to the
firewire drives when I'm ready.

Backup is a different issue, because there is no need for faster
access to the source since the backup depends on the speed
of the slowest drive involved, in this case the firewire drive.

If you want to improve the speed of THAT operation,
you'd be better off with an external or removable eSATA
drive than RAID0 and firewire.
Additionally, another batch file runs every hour comparing my working
folders on C: and D:. When there are changes made it updates to D:.

Again, a rather poor config.
Again, with digital photography on the boom, I see the application of
RAID0 becoming more mainstream.

You only get that stupid approach to the scratch drive with Photoshop.
That said, I still probably won't install it for most people I help with
upgrades.

Yep, it makes a hell of a lot more sense to put the scratch drive
on a Raptor. That way you don't have the risk that's inevitable
with RAID0. The scratch drive doesn't need to be that large.
I also do video capture, but never had any trouble with the single SATA
or PATA for that task.

Raw analog capture was the problem in some situations.

You don't see it anymore with the only sensible approach, digital capture.
I occasionally had trouble with multitasking during capture, but I cured
that by using some discretion, disabling Task Scheduler while capturing,
and setting Priority on the capture software to "high." I found that
sometimes when doing my hourly auto-backup from C: to D: (using Task
Scheduler) the capture software would freak.

It shouldn't with decent digital capture. I capture 4 channels
simultaneously on a relatively low-horsepower PVR and can
do anything I like, like burning DVDs etc., on the same system
at the same time, with the capture software handling it fine.
I have replaced that single drive recently with the 250x2 RAID0 array
mostly for the economy ($/gb) and speed for the PhotoShop scratch disk.
I already had the on-board controller that wasn't being used, so there
was no added expense for that.

You don't have redundant RAID hardware though, so you can't
come back anything like as quickly as you claimed if that fails.
I must yield to that point, 'cause you're right. There is economy in the
250x2 setup I just adopted, but not so much for 100x2, for
example. Perhaps in most cases the cost savings of GB x 2 vs. GB x 1
isn't overwhelming. In my case, tho, I saved $115.

Only because you aren't comparing what you should be comparing price-wise.
 

Rod Speed

I think you are right that most should not bother, but my point is that
there are cases where RAID0 provides a meaningful performance
enhancement.

Trouble is that it's only seen because of the completely
****ed approach Photoshop takes to a swap drive.

We don't even know if he does any digital photography
at all, let alone whether he uses Photoshop, which is why
I asked why he had decided that he needs RAID0 at all.
I don't agree on the 2x claim for a two-drive array, not in real-life
performance anyway,

Yeah, that's never ever seen.
but I see 30-50% for the work I do, and that is a big help.

It isn't the only way to get that sort of improvement though, even with
Photoshop.
The other RAID fan is claiming 2x performance based on benchmarks, HDTach
I believe, and he is right on the burst rate.

Pity the burst rate is completely irrelevant to real-world work.
R/a and read speed will not be that high. Here's how my C: drive stacked
up with one and two 36gb Raptors on the ICH5R controller:
RAID0: 74.4, 8.7, 5%, 189.5
Single: 49.0, 8.8, ?, 99.0
The order is sequential read, random access, CPU usage, and burst.

Measured using what ?

The only number that matters is the sequential read,
and that increase in performance isn't anything like 2x.
I'm using a large stripe size (128k), which reportedly hurts
benchmark results, but helps with the large files I work with. A smaller
stripe size might expand the performance gap even more on this benchmark.
Sandra also reports an impressive performance boost, but I don't have the
single-drive result.

Sandra is completely useless for benchmarking.
For RAID, my two Raptors were 82mb/s and the new array 89mb/s. I did not
expect the two 7200-rpm WD RE's to beat the Raptors here!

That's because you don't understand where the Raptor does well.
Not necessarily.

Fraid so.
If you're properly backed up, which should be the
rule anyway, it can be a much cheaper alternative.

Nope, not when you calculate it properly
and duplicate the RAID hardware.
In my last post I showed my latest example, comparing two 250's vs. one
500, the former fitting my needs much better than any single drive.

Again, you're doing the wrong comparison. You should have
compared the cost of two 300G drives not in RAID0 with the 250G
RAID0 config, and if you had done that you'd have found that
you get more space for your buck when you don't use RAID0.
The two-drive solution was 60% cheaper,

Only because you're comparing with the wrong alternative.
assuming you already have the RAID controller. If you must buy a
controller the savings dwindle or disappear.

Disappear completely if you compare what you should be comparing.
It can still be a cheaper approach for an upgrade if space is the main
concern.

No it can't; you should be comparing a pair of 300G drives not in RAID0.
If you are properly backed up, a requisite for regardless
of the config IMO, there is little downside--

Wrong. You can't come back anywhere near as
quickly if you have a failure of the RAID hardware.
only a bit more heat dissipation into the room and slightly more pull on
the PSU from the second drive. If you are well-ventilated, are well
backed up, and use a high-quality PSU, I see no practical downside.

The obvious downside is that you get more
space if you have two 300G drives not in RAID0.
As for the PSU, a HD uses less wattage at spin-up than most CPU's do
continually, so adding a drive probably isn't a factor for most people.

Sure, but you end up with less space than with two 300G drives.
Not at all.

Fraid so.
I have four IDE clones, plus one off-site, sitting two feet from me now
in mobil racks. If the whole computer goes up in
flames I can recover, after replacing the computer of course.

You can't recover in anywhere near the
time you claimed if the RAID hardware fails.
If the array goes down, I can be up and running in <30 min., probably
more like five min.

Not if the RAID hardware fails, you can't.
I can merely attach the cloned drive to an IDE port at any time, which
will work even if the entire RAID controller goes on the fritz.

Still a much more complicated config than without RAID0.
As I said, it isn't for everyone,

No one ever said it was.
and I have recommended it to few who ask me for computer advice. One
recent exception was another photographer who needed 1TB of space on one
volume, and RAID0 or JBOD were the only options that I could think of.
JBOD was out without buying another controller, as neither the on-chip
Intel nor on-board Promise will do it.
I do think you're right that the majority shouldn't bother.

Yep, **** all actually use Photoshop, and you have only
shown that those who do get any real benefit at all.
And that's because of the completely ****ed approach
that Photoshop uses with its scratch disk.
What is the risk? If the house burns down I still have a clone of C:

No you don't; you claim they are next to the PC. They'll be gone too.
and a firewire drive containing my business records and photo archives at
an off-site location.

And that is only done weekly, so you'll lose a week's work.
If one drive, both drives, or the controller dies the fix is quick and
easy.

No it isn't if the motherboard dies.
I can boot from any one of my cloned drives right now, and the data
missing in the interim (between clones) is available on two other backup
drives that can be
restored in a matter of minutes.
You persist on harping about the extreme risk factors I face,

Never used the word extreme once.
but there are none short of a nuclear attack,

Bullshit. If the house burns down, you lose a week's work.
in which case I won't need the data anyway. Two tornados could wipe out
my house and the off-site location simultaneously, I suppose. I'll live
with that small risk.

Sure, but that isn't the only risk. Your main risk is
that you lose a week's work. There aren't many
that would be happy to do that professionally.

And it's not as if it's necessarily even possible to do that work again.
I think you are correct about not entering into a RAID0 configuration
for speed alone in most cases, but I think the risk factor isn't that
great even with a minimum of effort to keep backed up.

The approach you use is nothing like a minimum effort to back up.
It's a complicated manual procedure that few will do reliably.
I feel that there should be a fixed level of redundancy in place
regardless of whether you have one or two drives used in a given volume.

Sure, but there is a world of difference between an approach to
backup that requires the individual to change drives in removable
bays and an approach which is completely automatic, where
you only need to do anything when the shit has hit the fan.

I'd go the completely automatic route myself in a
professional situation like yours, because few are
reliable enough to do the manual approach religiously.
Well, I'll go on the record in saying that anyone with even half-way
valuable data on their computers should plan on a HD crash.

Sure, but that doesn't mean that most do it, even when you tell them to.

When he doesn't appear to have any backup at all, it's
completely mad to be increasing the risk by using RAID0.
Which is why I asked him whether he actually needs RAID0 at all.
I've been lucky and have lost two drives in 20 years of using HD's, the
first being an old 5mb (not gb) MFM. The only other failure was the IBM
I referred to earlier, and I was able to recover all data on it without
resorting to restoring a clone. I don't take that record for granted,
however, and stay ready for a disaster.

I've only lost one in the entire PC era, and that wasn't
a full failure; only a couple of files were even affected,
and I literally had to kill the drive to make a warranty
claim. I still back up everything that matters anyway.
I don't need any RAID hardware redundancy. I can plug one of these IDE
clones into an IDE controller this minute and boot right into Windows.

Trouble is, you have lost whatever was done since that clone was made.
Many would prefer to have RAID hardware redundancy to avoid that loss.
My other array containing interim backups and my main photo archive is on
another RAID controller (Sil3112), and it is backed up on three separate
firewire drives, one off-site.

So you will lose a week's work in a house fire. No thanks.
That's okay, and I have read your posts with interest regardless of the
"edge." Keep it coming.

I basically keep the edge because it's a very effective
way of sorting out the prats from the rest. Most
obviously with that prat Clarke most recently.
 

Bob Davis

It's only Photoshop, with its completely ****ed approach to its scratch
disk, that gets that effect. It's notorious for that ****ed approach.


Perhaps, but I gotta live with it. Dumping PS is not an option. I might as
well throw my cameras in the garbage.

Anyway, the scratch disk is not the only speed increase I've noticed.
Archiving and copying files from C: to D: (two RAID0 arrays) is much faster
with this new array over the single-drive SATA on D:. Keep in mind that
these are very large files we're dealing with, and the average user likely
won't benefit as much. I assume we agree on at least this aspect of it.

While you do back up at a decent rate, a week's work
is still a complete pain in the arse to lose for many.


I agree, but the use of RAID0 doesn't change the need to keep adequate
backups. It could be accurately said that RAID0 increases the risk of a
failure since two drives are used, but one can have a failure with one
drive, and losing the data on one drive is the same as losing it on one
or both drives of the array. I've used single drives at times in recent
years, and my backup/clone scheme didn't change when moving to RAID0.
Either way I take a paranoid, anal approach and cover myself in multiples.

You don't need to back up the scratch disk at all if it's on
a decent-performance single drive, and you don't even need
RAID1 for a decent high-frequency backup of the data drive.


I don't back up the scratch files and don't even know where PS keeps them,
but have designated D: as the PS scratch disk. It happens to also be my
interim backup (hourly) and first archive disk.

BTW, in the past I used Ghost with Win98SE, then left the clone inserted in
the mobile rack to act as D:, which meant an exact clone of C: was present
in the system at all times. Critical files on C: were backed up to their
corresponding folders on D: on an hourly schedule with the auto-backup batch
files. But XP has a rep for disliking active clones containing the OS,
and I've opted to keep the cloned OS removed from the system when
possible for this reason. I have, however, occasionally booted with a clone
inserted to retrieve certain files and never had a problem.

To clarify my anal-retentive backup scheme, if anyone cares:

1. Certain folders are backed up (incrementally) hourly through a batch
file run through Task Scheduler. They include working photos, documents,
business databases, etc. It is seamless and occurs without an overt display
that it is even running. I do notice lots of drive activity at three
minutes before every hour, though.

2. D: contains the first archive of digital photos, arranged by Inv# and
compressed using WinZip. That folder is presently 75GB in size. A copy of
this archive is interactively made to J:, an external firewire drive that
is powered up only when needed. A second backup of J: is made on L:
(external FW) about once per month and kept off-site.

3. Clones of C: are made every Sat. morning. I have five drives that are
rotated, one kept off-site. Since I have been known to delete an important
file and not realize it for two weeks or more, the rotating clones can help
retrieve a file lost in this act of dementia.

If a failure occurs, I can restore C: using the latest clone, then update
files that were backed up in the interim on D:. This procedure takes <30
min. If the entire controller or both drives are wiped out, I can boot with
any of the clones. This won't restore any program updates or new installs
made between the last clone and the failure date, but I'm not concerned with
that, as they can be repeated.
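Step 1 above is easy to approximate in code. This is only a sketch of an incremental, xcopy-style pass, not the actual batch file (which isn't shown here); a file is copied only when the destination copy is missing or older than the source:

```python
import os
import shutil

def incremental_sync(src_root, dst_root):
    """Mirror src_root into dst_root, copying a file only when the
    destination copy is missing or older than the source copy."""
    copied = []
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target_dir = os.path.join(dst_root, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(target_dir, name)
            if (not os.path.exists(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst)):
                shutil.copy2(src, dst)  # copy2 preserves timestamps
                copied.append(dst)
    return copied
```

Scheduled hourly (via Task Scheduler, as above), it only touches changed files, so each run stays short. Note it never deletes files that have vanished from the source, which suits a backup better than a strict mirror would.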

You shouldn't need to move at all if you use the drive properly.


I used the term "move" improperly. I mostly copy, but when photos have
been edited and sent out to clients by email or FTP, I archive them to ZIP
and then delete them off C:. I like to keep C: as lean as possible, and it
rarely exceeds 20GB. I need to defrag about once per week, though.

So that is a poor approach. If it is done between different folders
on the same physical drive, the 'move' is instantaneous.


Again, bad use of "move" on my part.

Backup is a different issue, because there is no need for faster
access to the source since the backup depends on the speed
of the slowest drive involved, in this case the firewire drive.


Speed isn't an issue for the backups. They involve very large ZIP files,
sometimes 1GB or more, and when the batch file runs I can do other things
while it completes. It only copies files that have been changed, so it
usually doesn't take more than a few minutes to finish.

If you want to improve the speed of THAT operation,
you'd be better off with an external or removable eSATA
drive than RAID0 and firewire.


Firewire is slow by comparison, as you mentioned, but I don't care. It is
something I launch and forget. When it's finished I power down the FW drive
and that's the end of it. What I don't like is waiting for PS to complete a
task because the drive is thrashing, and RAID0 helps greatly in this
regard. For example, I often have several layers open, and usually when I
open the Liquify dialog window there is a hesitation, with the D: drive
thrashing away for as much as 20 sec. using the single-drive volume. Not
sure what it is doing, but PS freezes during the wait. Setting up the RAID0
array cut that delay to about a third, and that is a welcome improvement.

Again, a rather poor config.


It is a good config for my workflow. I don't think you understand it, and
I'll take the blame for not being able to make it clear.

You only get that stupid approach to the scratch drive with Photoshop.


Again, I gotta live with it. PS is as important to my photo business as a
camera...well, almost.

Yep, it makes a hell of a lot more sense to put the scratch drive
on a Raptor. That way you don't have the risk that's inevitable
with RAID0. The scratch drive doesn't need to be that large.


I'm not at all worried about the data sent to the scratch disk by PS. When
I'm finished editing a photo it can be lost for all I care. Anyway, Adobe
recommends using a second physical drive for the scratch disk, even if it is
slower than the one containing the program.
Don't ask me why, 'cause I dunno. I just do what they say.

Raw analog capture was the problem in some situations.
You don't see it anymore with the only sensible approach, digital capture.


I record from a cable box, so I guess that's a digital signal converted to
analog in the cable box, then transferred to the tuner card via S-video and
recorded as digital again. It works very well however it is captured.

It shouldn't with decent digital capture. I capture 4 channels
simultaneously on a relatively low-horsepower PVR and can
do anything I like, like burning DVDs etc., on the same system
at the same time, with the capture software handling it fine.


I can do that most of the time, but it occasionally burps, usually by losing
the audio/video sync. No big deal, as I just try not to tax the system
while capturing. This occurs rarely, even when doing lots of multitasking,
but if a recording is important I just leave the desktop alone and use the
notebook. No big shake.

You don't have redundant RAID hardware though, so you can't
come back anything like as quickly as you claimed if that fails.


I've tried to explain how I clone and backup the system and don't know a
better way to clarify it. You are correct that my clones are not current,
but important files are backed up to D: hourly and can be copied back to C:
after restoring the clone. It might take 30 min., max to perform these
operations. I haven't had to do it even once in five years of using this
procedure for a hardware problem, although on two occasions in the past
three years I've had to restore C: because of an OS or software snafu.

If I've installed a new program or upgraded something I'll simply need to do
it again. I keep a chronological record of everything I do in this regard
(new hardware, software, driver updates, etc.), and I can backtrack my
activities quite easily.

Only because you aren't comparing what you should be comparing price-wise.


I'm comparing what was important to me at the time, not what someone else
may want to do with two 300's or two 100's. The 250 x 2 array was an
economical route for me to take given my storage-upgrade needs, and that was
a part of the decision to go with a second RAID0 array. I'm a cheapskate
and saving $115 over the single-drive option is a triumph, and achieving a
meaningful speed increase is a bonus. It was a win-win proposition for me
given my workflow.
 

Bob Davis

It isn't the only way to get that sort of improvement though, even with
Photoshop.


I'd like to know how. A 150GB Raptor? Not big enough for this application.
A 15k SCSI? Can't afford it.

Pity the burst rate is completely irrelevant to real-world work.


I've approached the benchmarking as a curiosity, not something that makes or
breaks a drive setup. I wanna know how it works in real life using real
programs.

Measured using what ?

Oops, omitted a minor detail. It was HDTach.

The only number that matters is the sequential read,
and that increase in performance isn't anything like 2x.


I know, and I claimed 30-50%.

Sandra is completely useless for benchmarking.


People quote results from Sandra quite a bit, and again I don't dwell on
benchmarks anyway.

That's because you don't understand where the Raptor does well.


It doesn't matter, as the benchmark is only a curiosity for me. Both of
these two arrays are very fast.

Again, you're doing the wrong comparison. You should have
compared the cost of two 300G drives not in RAID0 with the 250G
RAID0 config, and if you had done that you'd have found that
you get more space for your buck when you don't use RAID0.


Rod, I wasn't in the market for two 300s. I only needed 500GB and might've
settled for 400GB had the option been there. I wanted the WD RE drives,
mostly because of the enterprise status and 5-year warranty, and 250GB is
the smallest option in this model line.
Wrong. You can't come back anywhere near as
quickly if you have a failure of the RAID hardware.


Why? What difference does it make if the RAID hardware is completely wiped
out? My clones are IDE. I can plug them into a standard IDE port and boot
right into Windows. If one drive, two drives, or the whole controller dies,
it is the same situation. If your single drive dies, it is still the same
scenario. I must not be making this clear.

If a failure occurs with one or two drives there is no difference. At that
time I go to the most-recent clone and can boot into Windows directly with
this drive (IDE). Or, restore the clone to a SATA if I wish, which takes
about 20 min. After restoring interim backups, perhaps a 10 min. task, I'm
back to square one. What is unclear about this?

Sure, if a RAID1 drive goes south I can recover more quickly, but I also
won't realize the performance increase. I don't mind the 30-min. recovery
ritual if I ever need it.

Sure, but you end up with less space than with two 300G drives.


And the pair costs 50% more than 250x2 for 20% more drive space. I'm a
cheapskate, remember? And I don't need 600gb at this time. Maybe in two
years I will when large drives are cheaper.

Fraid so.

No it isn't! <g>

Okay then, it is complicated--but I understand it and the process is not
time-consuming...for me. The risk factor is not there, however. You say
"significant risk if the raid controller fails...." Please explain why the
RAID controller is even a factor in the recovery process.

You cant recover in anywhere near the
time you claimed if the raid hardware fails.


Yes I can, and I don't know a way to make the process any clearer. Sorry.

Not if the raid hardware fails you cant.


Rod, for crissakes! Let's let the RAID controller die in smoke and flames,
okay? It doesn't matter. The clones are IDE, not SATA and not RAID, and I
can plug them into either of my four IDE ports and boot into Windows in
minutes. I don't need SATA or RAID to recover quickly. Why do you think I
need RAID to recover?

Still a much more complicated config than without raid0.


No. A failure is a failure, and my recovery procedure would be the same
regardless of what controller or interface runs the OS.

Yep, **** all actually use photoshop, and you have only shown
that those who do get any real benefit at all. And thats because
of the completely ****ed approach that photoshop uses with its scratch disk.


Irrelevant. Gotta have it! There's no other option, as no other
photo-editing app has the power of PS.

And that is only done weekly, so you'll lose a week's work.


How many computer owners or even businesses have off-site backups at all?
I'll admit that this gap is a kink in the system, but I don't know a better
way. I'm open to suggestion, except don't tell me to get rid of the RAID
No it isnt if the motherboard dies.

That's irrelevant to RAID. I can do a repair install on the clone using
another mobo and it'll recover. If not, the data is intact. I can
reinstall XP on another system and restore the data, including all
digital-photo files and business databases.

Bullshit. If the house burns down, you lose a week's work.


Okay, so do 99.99% of computer users. I'd like to do better, and that fact
is irrelevant to the RAID discussion. If you know a way to backup
everything and keep it current in an off-site location, I'd like to know
more. On-line backup sites are not an option--far too expensive.

Sure, but that isnt the only risk. Your main risk is
that you lose a week's work. There arent many
that would be happy to do that professionally.


In an honest appeal, I'd like to know more about how to maintain an off-site
backup and keep it current. On-line plans that cost $2000/yr. are not
options. I'm a one-man shop here and my gross isn't that high.

The approach you use is nothing like a minimum effort to backup.
Its a complicated manual procedure that few will do reliably.


It isn't hard once you get the hang of the system, and takes little time.

Sure, but there is a world of a difference between an approach to
backup that requires the individual to change drives in removable
bays and an approach which is completely automatic and where
you only need to do anything when the shit has hit the fan.


RAID1 is good as long as the source drive is reliable. If it is writing
corrupt info or has a virus infecting it, the mirror will also be corrupted.
I can recover as well with my system, only it takes longer. In the interim,
I am doing my PS work and other tasks with less waiting. It's worth the 30
min. I'll expend in the event of a RAID failure. I'm not that busy that 30
min. is a big shake time-wise.

I'd go the completely automatic route myself in a
professional situation like yours because few are
reliable enough to do the manual approach religiously.


I've described this system to people verbally and I usually get a blank
stare. You're right, most people would find this too complex. I did
describe it to a friend and fellow photographer, and she has adopted largely
the same procedure. She is intimidated by complexity yet caught on easily
enough. It isn't rocket science.

Sure, but that doesnt mean that most do that, even when you tell them to.


You are so correct. It is amazing to me how many people come to me with sob
stories about crashed HD's and lost files that are "important." I guess
that's why HD-recovery outfits do a booming business.

Trouble is you have lost what was done since that clone was made.
Many would prefer to have raid hardware redundancy to avoid that loss.


No. For the umpteenth time, I have an interim backup on D: for important files
(photos, databases, even my precious MP3's), updated once every hour.
They're also off-site.

So you will lose a week's work in a house fire. No thanks.


In some weeks it would be no loss at all, as I'm semi-retired. Anyway, that
would happen with or without RAID0. What do you suggest? I'm all ears and
would like to fill this gap.

I basically keep the edge because its a very effective
way of sorting out the prats from the rest. Most
obviously with that prat Clarke most recently.


Not familiar with the term "prat," but it doesn't sound flattering. <g>
 
R

Rod Speed


No perhaps about it.
but I gotta live with it. Dumping PS is not an option. I might as well
throw my cameras in the garbage.

Photoshop aint the only thing that can be used with digital photos.
Anyway, the scratch disk is not the only speed increase I've noticed.
Archiving and copying files from C: to D: (two RAID0 arrays) is much
faster with this new array over the single-drive SATA on D:.

I've already covered the copy question, you shouldnt be copying
between physical drives if you care about the speed of that.

Archiving is a separate matter entirely. Automated archiving
doesnt need the increased speed that raid0 brings with it,
because its automated archiving that you dont wait for.
Keep in mind that these are very large files we're dealing with, and the
average user likely won't benefit as much. I assume we agree on at least
this aspect of it.

Not really, because as you say, most are moving to digital
photos now, and PVRs replacing VCRs etc in spades.

We dont even agree on that copy/move question.
A properly designed config shouldnt see any time
taken at all to move for organisation when the move
is between folders on the same physical drive,
because that is vastly faster than raid0 can ever be.

And like I said, the time to archive is irrelevant since it should
be automated and the user isnt waiting for that to happen.
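Rod's point about same-drive moves can be illustrated in code: within one filesystem, a move is a rename, i.e. a directory-metadata update rather than a byte-for-byte copy, so it completes almost instantly regardless of file size. A minimal sketch (file names are made up for illustration):

```python
import os
import tempfile

# "Move" a file within the same filesystem. os.replace() is a rename:
# only directory metadata changes, so it is near-instant even for
# multi-gigabyte files. A move BETWEEN physical drives, by contrast,
# must copy every byte and then delete the source.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "photo_batch.zip")
    dst = os.path.join(d, "archive", "photo_batch.zip")
    with open(src, "wb") as f:
        f.write(b"\0" * 1024)          # stand-in for a large file
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    os.replace(src, dst)               # metadata-only rename
    print(os.path.exists(dst), os.path.exists(src))  # True False
```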
I agree, but the use of RAID0 doesn't change the need to keep adequate
backups.

Yes, that was just a comment on your claim that
you are completely backed up. You clearly arent.
It could be accurately said that it does increase the risk of a failure
since two drives are used,

Yes, thats the main area where raid0 has real downsides.
raid1 or shadowing at the OS level is much better in that area.
but one can have a failure with one drive, and losing the data on one
drive is the same as losing it on one or both drives of the array.

No its not. If you lose one drive in a raid0 array, you lose all the
data in the array, and hence what hasnt yet made it to backup.

With raid0 not being used, the chance of losing what
hasnt yet made it to backup is significantly reduced.

And with raid1 or OS level shadowing you
wont lose anything with a single drive failure.
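The risk argument above reduces to simple probability. A back-of-envelope sketch, assuming independent drive failures; the 3%/year per-drive figure is an illustrative number, not from the thread:

```python
# RAID0 vs RAID1 chance of losing not-yet-backed-up data, assuming
# independent failures with per-drive annual probability p (assumed).

def raid0_loss(p: float, n: int = 2) -> float:
    """RAID0 loses the whole array if ANY member drive fails."""
    return 1 - (1 - p) ** n

def raid1_loss(p: float, n: int = 2) -> float:
    """RAID1 loses data only if ALL mirrored drives fail."""
    return p ** n

p = 0.03  # hypothetical annual failure probability per drive
print(f"single drive : {p:.4f}")
print(f"2-drive RAID0: {raid0_loss(p):.4f}")  # 0.0591 -- nearly double
print(f"2-drive RAID1: {raid1_loss(p):.4f}")  # 0.0009 -- far lower
```

So a two-drive stripe roughly doubles the exposure of a single drive, while a mirror cuts it by orders of magnitude, which is the asymmetry being argued here.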
I've used single-drives at times in recent years, and my backup/clone
scheme hasn't changed when moving to RAID0. Either way I take a
paranoid, anal approach and cover myself in multiples.

But you are still vulnerable to losing a week's work with a house fire.
I don't backup the scratch files and don't even know where PS keeps
them, but have designated D: as the PS scratch disk. It happens to
also be my interim backup (hourly) and first archive disk.
BTW, in the past I used Ghost with Win98SE, then left the clone
inserted in the mobile rack to act as D:, which meant an exact clone of
C: was present in the system at all times. Critical files on C: were
backed up to their corresponding folders on D: on an hourly schedule with
the auto-backup batch files. But XP has a rep for disliking clones
containing the OS to be active,

Thats a myth. Its perfectly possible to clone in XP.

Corse its arguable if backup by cloning on a single PC is worthwhile,
it isnt very likely that you will ever need to boot the clone, it makes
more sense to backup by imaging instead, and if you do need to come
up quickly after a failure, a single PC is a very bad approach for that.
and I've opted to keep the cloned OS removed from the system when
possible for this reason.

You dont need to remove it, just dont boot
off it with the original still visible to XP.
I have, however, occasionally booted with a clone inserted
to retrieve certain files and never had a problem.

Yes, the problem is actually booting the clone with the
original visible to XP, not having the clone visible to XP.
To clarify my anal-retentive backup scheme, if anyone cares:
1. Certain folders are backed up (incremental) hourly through a
batch file run thru Task Scheduler. They include working photos,
documents, business databases, etc. It is seamless and occurs
without an overt display that it is even running. I do notice lots
of drive activity at three minutes before every hour, though.

That should really be done to a different PC for the best security.

It may be viable to do that offsite too.

And raid0 is irrelevant to that operation, the most it does is
affect how long that 3 mins of drive activity lasts for, marginally.
2. D: contains the first archive of digital photos, arranged by Inv#
and compressed using Winzip. That folder is presently 75gb in size. A
copy of this archive is interactively made to J:, an external
firewire drive that is powered up only when needed. A second backup
of J: is made on L: (external FW) about once per month and kept off-site.
3. Clones of C: are made every Sat. morning. I have five drives
that are rotated, one kept off-site. Since I have been known to
delete an important file and not realize it for two weeks or more,
the rotating clones can help retrieve a file lost in this act of
dementia.
If a failure occurs, I can restore C: using the latest clone, then
update files that were backed up in the interim on D:. This
procedure takes <30 min. If the entire controller or both drives are
wiped out, I can boot with any of the clones. This won't restore any
program updates or new installs made between the last clone and the
failure date, but I'm not concerned with that, as they can be repeated.
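The hourly "copy only changed files" step described above can be sketched as follows. The folder paths and the Python rendering are hypothetical; the original setup uses batch files run by Task Scheduler:

```python
# Sketch of an hourly incremental backup: copy a file only when the
# destination copy is missing or older. Paths below are illustrative.
import shutil
from pathlib import Path

def incremental_copy(src: Path, dst: Path) -> int:
    """Copy files from src into dst only if missing or newer there."""
    copied = 0
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied += 1
    return copied

# e.g. incremental_copy(Path("C:/Photos"), Path("D:/Backup/Photos"))
```

Because only changed files are touched, the hourly run normally finishes in minutes, which matches the light "drive activity at three minutes before every hour" described above.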

None of that benefits from raid0
I used the term "move" improperly. I mostly copy, but when photos have
been edited and sent-out to clients by email or FTP, I archive them to
ZIP and then delete them off C:.

Again, raid0 isnt necessary, that stuff should just be initiated
and it doesnt matter how long it takes to actually get done.
I like to keep C: as lean as possible, and it rarely exceeds 20gb.
Need to defrag about once per week, tho.

No you dont.
Again, bad use of "move" on my part.

Nope, the one just above is a move. If you use OS level
compression instead of zip files, its still an instantaneous
move if its between folders and not physical drives.
Speed isn't an issue for the backups. They involve very large ZIP
files, sometimes 1gb or more, and when the batch file runs I can do
other things while it completes. It only copies files that have been
changed, so it usually doesn't take more than a few minutes to finish.

So you havent established any need for raid0 except for the photoshop
scratch disk and thats only because photoshop is ****ed in that area.
Firewire is slow by comparison, as you mentioned, but I don't care. It is
something I launch and forget. When it's finished I power down
the FW drive and that's the end of it. What I don't like is waiting
for PS to complete a task because the drive is thrashing, and RAID0
helps greatly in this regard.

Yes, those comments were about your claim that you benefit
from raid0 other than with photoshop. You dont actually.
For example, I often have several layers open and usually when I open the
liquify dialog window there is a hesitation with the D: drive thrashing
away for as much as 20 sec. using the single-drive volume. Not sure what
it is doing, but PS freezes during the wait.

Yes, thats just because PS is completely ****ed in that area.
When I set up the RAID0 array this cut that delay to about 1/3 of that
time, and that is a welcome improvement.

It isnt the only way to cut that time with PS.
It is a good config for my workflow.

No it isnt. OS level shadowing does that much better.
I don't think you understand it,

I understand it fine.
and I'll take the blame for not being able to make it clear.
Again, I gotta live with it.

Again, raid0 aint the only way to deal with it.
PS is as important to my photo business as a camera...well, almost.

Again, PS aint the only thing that can be used to edit digital photos.
I'm not at all worried about the data sent to the scratch disk by PS.

It isnt that that matters, its what else is on that physical drive.
When I'm finished editing a photo it can be lost for all I care. Anyway,
Adobe recommends using a second physical drive for the scratch disk, even
if it is slower than the one containing the program. Don't ask me why,
'cause I dunno. I just do what they say.

Its because of the completely ****ed approach PS uses.
I record from a cable box, so I guess that's a digital signal converted
to analog in the cable box, then transferred to the tuner card via
S-video and recorded as digital again. It works very well however it is
captured.

What matters is that raid0 isnt needed for capture anymore.
I can do that most of the time, but it occasionally burps, usually by
losing the audio/video sync. No big deal, as I just try not to tax the
system while capturing. This occurs rarely, even when doing lots of
multitasking, but if a recording is important I just leave the desktop
alone and use the notebook. No big shake.
I've tried to explain how I clone and backup the system and don't
know a better way to clarify it. You are correct that my clones are
not current, but important files are backed up to D: hourly and can
be copied back to C: after restoring the clone. It might take 30
min., max to perform these operations.

And that wont work if the power supply fails and kills the drives
or the motherboard dies. You wont be coming back in 30 mins.
I haven't had to do it even once in five years of using this procedure
for a hardware problem,

Just because that type of failure is rare.
although on two occasions in the past three years I've had to restore C:
because of an OS or software snafu.
If I've installed a new program or upgraded something I'll simply
need to do it again. I keep a chronological record of everything I
do in this regard (new hardware, software, driver updates, etc.), and I
can backtrack my activities quite easily.

But not in that 30 mins.
I'm comparing what was important to me at the time, not what someone else
may want to do with two 300's or two 100's.

The two 300s is the only sensible config to compare.
The 250 x 2 array was an economical route for me to take given my
storage-upgrade needs,

No it wasnt, 300G x 2 not in raid was more economical.
and that was a part of the decision to go with a second RAID0 array. I'm
a cheapskate and saving $115 over the single-drive option is a triumph,

Again, you're not comparing the config that matters.
and achieving a meaningful speed increase is a bonus.

You havent established the SECOND raid0 array
gives you any speed improvement that matters.
It was a win-win proposition for me given my workflow.

Nope, you ended up with less drive space for the same dollars.
 
R

Rod Speed

I'd like to know how.

A single drive thats faster than the raid0 array.
A 150gb Raptor? Not big enough for this application.

Wrong, it only needs to be big enough for the PS scratch drive.
A 15k SCSI? Can't afford it.
I've approached the benchmarking as a curiosity, not something that makes
or breaks a drive setup. I wanna know how it works in real life using
real programs.
Oops, omitted a minor detail. It was HDTach.

HDTach isnt a real world benchmark.
I know, and I claimed 30-50%.
People quote results from Sandra quite a bit,

Irrelevant, they're pig ignorant.
and again I don't dwell on benchmarks anyway.

You just cited it.
It doesn't matter, as the benchmark is only a curiosity for me.

It does matter what the real world performance is.
Both of these two arrays are very fast.

And you havent established that you need speed in the second raid0 array.
Rod, I wasn't in the market for two 300's.

Bob, you should have been when making that
claim about what you had purportedly saved.
I only needed 500gb and might've settled for 400gb had the option been
there.

Irrelevant to the point that for the same money as you spent
on the 250G drives, you could have had more drive space.

Drive space is never something that isnt worth having if its the same
price.
I wanted the WD RE drives, mostly because of the enterprise status

More fool you.
and 5-year warranty,

Again, more fool you.
and 250gb is the smallest option in this model line.
Please, let's not argue brands and models. <g>

Its relevant to your claim about saving money.
Why? What difference does it make if the RAID hardware is completely
wiped out? My clones are IDE. I can plug them into the standard IDE
port and boot right into Windows.

And if the motherboard had died, you cant even do that.
If one drive, two drives, or the whole controller dies it is the same
situation. If your single drive dies it is still the same scenario.

Not if you have more than one PC.
I must not be making this clear.

You cant grasp the basics, just like you cant with the 300G drives.
If a failure occurs with one or two drives there is no difference. At
that time I go to the most-recent clone and can boot into Windows
directly with this drive (IDE).

But your automated procedures wont work. So it will only
work until the time for the next hourly backup, then you're ****ed.
Or, restore the clone to a SATA if I wish, which takes about 20 min.
After restoring interim backups, perhaps a 10 min. task, I'm back to
square one.

No you arent, the raid0 array is still gone because
the raid hardware is gone. So the automated
backups will fail within the hour. Thats nothing
like back to square one at all.
Sure, if a RAID1 drive goes south I can recover more quickly, but I also
won't realize the performance increase. I don't mind the 30-min.
recovery ritual if I ever need it.

Sure, but you clearly arent back to square one in 30 mins.
And the pair costs 50% more than 250x2

Like hell they do.
for 20% more drive space. I'm a cheapskate, remember? And I don't need
600gb at this time. Maybe in two years I will when large drives are
cheaper.

Get sillier by the minute.
No it isn't! <g>

Yes it is.
Okay then, it is complicated--but I understand it and the process is not
time-consuming...for me.

And if it fails when you are in hospital etc, you're ****ed.
The risk factor is not there, however. You say "significant risk if the
raid controller fails...." Please explain why the RAID controller is
even a factor in the recovery process.

Basically because without it the automated backup
system will end up flat on its face within the hour.

If you werent using raid0, the most you have to do is
replace the failed drive with one of the drives thats used
offline and the automated backup carries on regardless just fine.

And if you used raid1 or OS level shadowing things
would carry on regardless with a single drive failure too.
Yes I can,

No you cant.
and I don't know a way to make the process any clearer. Sorry.
Rod, for crissakes! Let's let the RAID controller die in smoke and
flames, okay? It doesn't matter. The clones are IDE, not SATA and not
RAID, and I can plug them into either of my four IDE ports and boot into
Windows in minutes.

And then the automated backup mechanism will fail within the hour.
I don't need SATA or RAID to recover quickly.

Correct, but you do need it for the automated backup to keep working.
Why do you think I need RAID to recover?

See above.

Yep.

A failure is a failure, and my recovery procedure would be the
same regardless of what controller or interface runs the OS.

Wrong. With raid1 or OS level shadowing you'd just yawn and
replace the failed drive. With no raid0 you'd still just replace
the failed drive and the hourly backup would carry on regardless.
Irrelevant.

Not when discussing your claim that more and more will be needing raid0
Gotta have it! There's no other option, as no other photo-editing app
has the power of PS.

Raid0 aint the only way to speed up photoshop.
How many computer owners or even businesses have off-site backups at all?
Irrelevant.

I'll admit that this gap is a kink in the system, but I don't know a
better way.

The obvious way is to have the hourly backup offsite too, over the net.
I'm open to suggestion, except don't tell me to get rid of the RAID
arrays. <g>

Get rid of the raid arrays, particularly the second one.

Use shadowing between PCs to protect against
a power supply failure killing both drives at once.

Have the hourly backup done intelligently over the
net offsite to protect against the house burning down.
That's irrelevant to RAID.
Nope.

I can do a repair install on the clone using another mobo and it'll
recover.

And when they arent raid0 arrays, thats much easier than with raid0 arrays.
If not, the data is intact. I can reinstall XP on another system and
restore the data, including all digital-photo files and business
databases.

Sure, but not in that 30 mins you claimed.
Okay, so do 99.99% of computer users.

Irrelevant. Most of them dont run businesses on their PC.

And **** all generate anything like as much stuff in a week as you do.
I'd like to do better, and that fact is irrelevant to the RAID
discussion.

Nope. Its a lot easier to get a non raid config up again completely
after a motherboard failure than when its a raid0 config.
If you know a way to backup everything and keep it current in an off-site
location, I'd like to know more. On-line backup sites are not an
option--far too expensive.

One obvious approach is another PC
or file server at the offsite location.
In an honest appeal, I'd like to know more about how to maintain an
off-site backup and keep it current. On-line plans that cost $2000/yr.
are not options. I'm a one-man shop here and my gross isn't that high.

Sure, but an offsite PC doesnt cost anything like that.
It isn't hard once you get the hang of the system, and takes little time.

Time isnt the problem.
RAID1 is good as long as the source drive is reliable.

Thats mangling the story completely. And its
just one way to completely automate backups.
If it is writing corrupt info or has a virus infecting it, the mirror will
also be corrupted. I can recover as well with my system, only it
takes longer. In the interim, I am doing my PS work and other tasks
with less waiting. It's worth the 30 min. I'll expend in the event of a
RAID failure. I'm not that busy that 30 min. is a big shake time-wise.

The 30 mins is a myth if the raid hardware has failed.
I've described this system to people verbally and I usually get a
blank stare. You're right, most people would find this too complex. I
did describe it to a friend and fellow photographer, and she has
adopted largely the same procedure. She is intimidated by complexity yet
caught on easy enough. It isn't rocket science.

Few are anal enough to continue with a system with
the manual steps that yours involves, reliably.
You are so correct. It is amazing to me how many people come to me
with sob stories about crashed HD's and lost files that are "important."
I guess that's why HD-recovery outfits do a booming business.

Precisely.

And many who do understand the risk are still not anal
enough to do the manual steps yours involves reliably.

Essentially because drive failure is so rare.
Yep.

For the umpteenth time, I have an interim backup on D: for important files
(photos, databases, even my precious MP3's), updated once every hour.

And you can lose them both with a power supply failure.
They're also off-site.

And that involves the loss of a week's work. And
that work may not even be possible to do again.
In some weeks it would be no loss at all, as I'm semi-retired. Anyway,
that would happen with or without RAID0. What do you suggest? I'm all
ears and would like to fill this gap.

See above.
Not familiar with the term "prat," but it doesn't sound flattering. <g>

Hilarious, it doesnt even get a mention in our national dictionary.

Never noticed that before.

Fool, clown, arsehole, idiot, take your pick.
 
B

Bill

Bob said:
I too was hoping for something more substantive. I am very persuadable in
the face of reasoned responses, and have learned much from rational posts on
these newsgroups.

I've seen some rational posts as well, so I was surprised to see such a
vehement response.

I refuse to argue with someone who is unwilling to listen to reason. If
he doesn't believe it, and doesn't want to see proof, then so be it.

The only thing I don't like about all this is that he may persuade
others that can benefit from the performance increase to NOT try it and
then they suffer from his ignorance.

So anyone reading this thread, I suggest you do your own investigation.
There are plenty of web sites that describe how RAID works, and why it
can offer a very cost effective performance gain.
 
R

Rod Speed

Bill said:
Bob Davis wrote
I've seen some rational posts as well, so I was
surprised to see such a vehement response.

Just another of your pathetic little drug crazed fantasies.
I refuse to argue with someone who is unwilling to listen to reason.

More of your pathological lying.
If he doesn't believe it, and doesn't want to see proof, then so be it.

You cant prove your 2x and 4x claims, you silly little terminal ****wit.
The only thing I don't like about all this is that he may persuade
others that can benefit from the performance increase to NOT
try it and then they suffer from his ignorance.

Never ever could bullshit its way out of a wet paper bag.
So anyone reading this thread, I suggest you do your own
investigation. There are plenty of web sites that describe how RAID
works, and why it can offer a very cost effective performance gain.

And plenty that point out that its nothing like fools like you claim too.
 
P

Peter

Aha, "wet paper bag"; I knew it had to happen.
Just another of your pathetic little drug crazed fantasies, child.

The last word I take as a compliment. Thanks aussie.
 
