RAID 1 Performance Impact


me

A question on the performance impact of Raid 1. I realize up front
that I might have too little info for a perfect decision... so I will
take whatever input I can get on performance and configuration.

I have an Asus P4P800SE circa 2004 mobo with a 3.0G CPU which runs
win2003 Server as a SOHO file server. The load on it is light with
just one or two users and not a lot of concurrent access or large
files. It has an ICH5R Southbridge controller supporting RAID 0/1 on
SATA 150 drives. The two hard drives are matching 80 GB SATA 5400 RPM
drives with 13 ms/14 ms read/write access times.

Questions:

1. How much of a performance impact will there be with RAID 1 set up
and writing to both drives?

2. The mobo instructions take me through using the BIOS to write RAID
info to the drives. They also discuss removing RAID info:

a. Does this mean that in the event the mobo crashes that I would
need to have another mobo with the same RAID controller on it to use
the drives or remove the RAID info? Would I be able to access the
drive in another non-RAID machine as pure SATA in the event of a
mobo failure?

b. Does the process of writing or removing the RAID info affect the
data and OS already on the drive? That is, if I write this info to a
drive already in use on the machine, do I have to start fresh or will
all the data be preserved?

3. The mobo instructions mention setting up a RAID driver on a floppy
for Windows to use when starting XP or Win2000. Will Win 2003 have the
controller built in or do I still need an external driver?

4. Anything else I should know about RAID before heading in this
direction? I'd rather not be saying "Doh!" after I finish :)



Thanks for any insight,
 

David Brown

A question on the performance impact of Raid 1. I realize up front
that I might have too little info for a perfect decision... so I will
take whatever input I can get on performance and configuration.

I have an Asus P4P800SE circa 2004 mobo with a 3.0G CPU which runs
win2003 Server as a SOHO file server. The load on it is light with
just one or two users and not a lot of concurrent access or large
files. It has an ICH5R Southbridge controller supporting RAID 0/1 on
SATA 150 drives. The two hard drives are matching 80 GB SATA 5400 RPM
drives with 13 ms/14 ms read/write access times.

Questions:

1. How much of a performance impact will there be with RAID 1 set up
and writing to both drives?

That depends on your usage patterns. If you think about what the system
actually has to do for raid 1, it will give you a better idea of how it
will affect performance. Writing to disk involves writing two copies of
everything. But these writes should happen in parallel to the two
disks, so they take much the same time as writing to a single disk.
When reading, the system can get the data from either drive - it should
therefore be faster, especially if you have concurrent reads. How much
faster will depend on your load, and on how smart the fakeraid and
windows combination is.
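To make the parallel-write / split-read point concrete, here is a toy Python model. The latency numbers are purely illustrative assumptions, not measurements from these drives, and real queueing behaviour is far messier:

```python
# Toy model of RAID 1 timing: writes go to both disks in parallel
# (cost is bounded by the slower disk), while concurrent reads can be
# split across the mirror. Latencies below are arbitrary examples.

WRITE_MS = {"disk_a": 14.0, "disk_b": 14.5}  # per-request write latency
READ_MS = {"disk_a": 13.0, "disk_b": 13.2}   # per-request read latency

def raid1_write(n_requests):
    """Each write hits both disks in parallel, so the total time is
    roughly one disk's worth - not double."""
    return n_requests * max(WRITE_MS.values())

def raid1_read(n_requests):
    """Concurrent reads are balanced across the two disks, so each
    disk serves about half the queue."""
    per_disk = (n_requests + 1) // 2
    return per_disk * max(READ_MS.values())

def single_disk_read(n_requests):
    return n_requests * READ_MS["disk_a"]

assert raid1_write(100) == 100 * 14.5            # bounded by slower disk
assert raid1_read(100) < single_disk_read(100)   # mirror reads beat one disk
```

How much of that read benefit you actually see depends, as noted, on how smart the fakeraid/Windows combination is about balancing reads.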
2. The mobo instructions take me through using the BIOS to write RAID
info to the drives. They also discuss removing RAID info:

a. Does this mean that in the event the mobo crashes that I would
need to have another mobo with the same RAID controller on it to use
the drives or remove the RAID info? Would I be able to access the
drive in another non-RAID machine as pure SATA in the event of a
mobo failure?

It is sometimes possible to recover data from fakeraid drives using a
different motherboard, or using something like linux mdadm, but it is
not an easy job. If you choose to use fakeraid, you should assume that
losing the motherboard means losing the disks.

If you want to have portable raid that can be recovered on a different
machine, you either have to have real hardware raid, or real software
raid - not the hybrid worst-of-both-worlds fakeraid. With hardware
raid, you have a separate raid card - if it fails (and that's very
unlikely), you will have an easier time getting a compatible
replacement. With software raid, any other system running the same OS
will be able to use the disks.

I have used Linux software raid, but I've had no experience with Windows
software raid. It would certainly be possible to put data on a raid
partition, but I don't know if it is possible to install an entire
windows server system on software raid - other windows experts will have
to answer there.

But in general, software raid is going to be at least as fast, possibly
faster, than fakeraid. In some circumstances, it will be faster than
all but the most expensive hardware raid cards. Software raid has at
least as good reliability as fakeraid, though not as good as a solid
hardware raid card.
b. Does the process of writing or removing the RAID info affect the
data and OS already on the drive? That is, if I write this info to a
drive already in use on the machine, do I have to start fresh or will
all the data be preserved?

Start from scratch. It's possible that you could use some sort of disk
imaging software to do the transfer, but it is likely to be more time
and effort than a re-install. Again, others here have more practice
with such software, and may give other advice.
3. The mobo instructions mention setting up a RAID driver on a floppy
for Windows to use when starting XP or Win2000. Will Win 2003 have the
controller built in or do I still need an external driver?

4. Anything else I should know about RAID before heading in this
direction? I'd rather not be saying "Doh!" after I finish :)

Remember, raid is not about keeping your data safe - that's what backups
are for. raid is about speed, and uptime - redundant raid means you
won't have to restore data from backups or re-install your OS just
because a hard disk died.
 

me

On Wed, 12 May 2010 10:13:33 +0200, David Brown wrote:

Remember, raid is not about keeping your data safe - that's what backups
are for. raid is about speed, and uptime - redundant raid means you
won't have to restore data from backups or re-install your OS just
because a hard disk died.

David:

Thanks for all the info. What makes the Intel mobo implementation
"fakeraid" instead of hardware raid?
 

Roger Blake

Thanks for all the info. What makes the Intel mobo implementation
"fakeraid" instead of hardware raid?

It's a BIOS-assisted software RAID implementation, there's no hardware
RAID controller. (At least not unless you have a server-class motherboard
that specifically includes one.)

My experience has been that these motherboard fakeRAID systems tend to
be less reliable than either real hardware RAID or the operating
system's native software RAID.
 

Arno

It's a BIOS-assisted software RAID implementation, there's no hardware
RAID controller. (At least not unless you have a server-class motherboard
that specifically includes one.)
My experience has been that these motherboard fakeRAID systems tend to
be less reliable than either real hardware RAID or the operating
system's native software RAID.

And in addition, they are often also OS-specific (meaning, tools
are only available under Windows) and are therefore inferior
to an OS-integrated solution. Hence the name 'fakeRAID'. BTW,
a lot of these work with the Linux 'dmraid' driver/tool, which
offers far superior management and recovery. Sometimes these
fakeRAIDs will not even boot if a disk is missing, and you
have to attach and lengthily resync a replacement disk before
you can access your data. Also, monitoring and alerting
often suck badly.

General advice for RAID: test that you can recover your data
by unplugging a drive, before depending on it. The risk of
messing up a recovery and losing all data is too large if
you have to work out how to recover in an emergency later.
Do it beforehand and document (on paper!) how it is done.
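The drill being recommended can be sketched as a toy model: a mirror that keeps serving reads with one disk failed, then resyncs a blank replacement from the survivor. This is purely illustrative Python (real arrays resync at the block-device level), with hypothetical block data:

```python
# Toy two-disk mirror: survives a failed disk, resyncs a replacement.

class Mirror:
    def __init__(self):
        self.disks = {"a": {}, "b": {}}   # block -> data, per disk
        self.failed = set()

    def write(self, block, data):
        for name, disk in self.disks.items():
            if name not in self.failed:
                disk[block] = data

    def read(self, block):
        for name, disk in self.disks.items():
            if name not in self.failed:
                return disk[block]
        raise IOError("array has no working disks")

    def fail_disk(self, name):
        """The 'unplug a drive' test."""
        self.failed.add(name)

    def replace_disk(self, name):
        """Attach a blank replacement and resync it from the survivor."""
        self.disks[name] = {}
        self.failed.discard(name)
        survivor = next(d for n, d in self.disks.items() if n != name)
        self.disks[name].update(survivor)

m = Mirror()
m.write(0, b"payroll")
m.fail_disk("a")
assert m.read(0) == b"payroll"        # degraded array still serves data
m.replace_disk("a")
assert m.disks["a"] == m.disks["b"]   # replacement fully resynced
```

The point of doing this on the real hardware before going live is the same as in the sketch: you verify the degraded-read and resync steps actually work, and write down exactly what you did.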

Arno
 

Rod Speed

David Brown wrote
"Fakeraid" is a generic term for motherboard-based raid that has
become common in the last few years, which is not really anything
more than a limited form of software raid supported by the bios.
In proper software raid, the OS low-level layers access the disks as individual SATA (or IDE, SCSI, whatever) disks.
The OS raid layer
handles the combining of these, and the file systems and user-level
software see the raid setup as a single disk. This gives a lot of
flexibility - the OS can support many types of raid setups, it can
combine partitions into raid sets rather than whole disks, it can take
advantage of greater knowledge of the file access patterns to improve
performance, and it can provide features that you don't get with
hardware raid systems (or at least, not without paying a great deal of
money). For example, I believe Linux mdadm raid is the only system
that will let you have raid 1+0 on any number of disks (greater than
1, obviously) - it will happily let you have striped and mirrored raid on 2 or 3 disks, while hardware solutions will
require a multiple of 4
disks. And if the OS or the hardware dies, you can put the same disk
set in another computer with the same OS, and access your drives.
The disadvantage of software raid is that if the OS dies, or the power
to the motherboard fails, you could be in big trouble and get your
disks out of sync. You are also using the main processor to do the
work, but that's seldom an issue these days unless your processor is
already heavily loaded. Raid 0 and 1 levels are particularly light on
processor use.
With proper hardware raid, the controller card is separate so the OS
only ever sees a single large disk. All processing such as parity
generation, syncing, checking, etc., is handled by the card without
taking host processor cycles. And the controller card will typically
have a battery backup (or non-volatile memory) to provide consistency
and reliability even if you get a power fail. It won't protect you from logical faults if the OS dies, but it will
protect you from raid set inconsistencies. For very large setups and expensive hardware,
hardware raid can perform better than software raid even with just raid 0 or 1.

But as you say, that's not usually a problem with modern systems.
If the hardware controller dies, you will typically need to get the
same model or a similar model to replace it, since manufacturers use different arrangements for their raid setups.

And you really need to keep a spare of what is the most expensive approach too.
Fakeraid gives you the worst of both worlds.

That's overstating it, particularly with price.
There is no separate processor, so everything is handled by the host - either by BIOS routines, or drivers loaded by
the OS.

But as you say, that's not a problem with modern systems and the simpler forms of RAID.
You don't get the full flexibility of proper software raid, but only the limited functionality provided by the
fakeraid. And access to the disk is limited to the chipset used in the motherboard - if the motherboard dies (more
likely than a hardware raid controller dying),

That last is very arguable.
you could lose access to your disks.

Not necessarily forever and a spare motherboard is going to
cost less than the spare controller with fancy hardware raid too.
Where fakeraid wins is if you are using a limited OS like windows, and want to install the whole system on a raid
drive but don't want to pay for a hardware raid card.

So it isn't actually the worst of both worlds.
As far as I know, you can't install windows on a windows software raid drive

Yes you can.
- you can only use such drives for non-system partitions.

That's just plain wrong.
 

Rod Speed

David Brown wrote
Rod Speed wrote
That depends on why you are using raid - if it is for improved uptime, you might well want a spare card on-hand. If
it is for improved speed,

That's unlikely with real hardware raid. It makes more sense to buy faster drives instead.
it is probably not necessary.
Generally speaking, I would say that if you are concerned enough about uptime to want a spare hardware raid
controller, you are better off setting up a redundant system or a hot spare of the entire system.

Yes, but if you want improved speed, you are better off with faster drives etc.
After all, the raid card is not a weak point in the system - things like power supplies are much more likely to fail,
as is the motherboard.

I don't believe that last.
But you will at least want to make sure replacement parts are easily available.

And the only real way to do that is to buy two.
The price is good, certainly - but not any better than the price of software raid.

Depends. The versions of Win that can do that aren't cheap.

It's essentially free with motherboard raid.
It's not a problem - certainly not in terms of performance. But it means you have the same vulnerabilities to power
failures

Not really if you are fully backed up.
or system crashes as you do with software raid.

That's very arguable too.
Motherboards are a higher risk component than a raid card.

Easy to claim. Have fun actually substantiating that claim.
They are high power,

Nope. And that doesn't affect reliability anyway.
high speed, and dense boards, leaving them very vulnerable to overheating if you have problems with your fans or
cooling systems.

Not if you have a clue and use decent alarm systems for fan failure etc.
They are connected to all sorts of external devices, making them vulnerable to external wrong connections or misuse

In practice that's not normally a problem in the sense of the raid.
(depending on the usage environment, of course). And the heavy competition in the market leads to low-cost shortcuts
sometimes being taken in the design and manufacture of motherboards.

Or you can buy decent quality motherboards.
Raid controller cards are simpler,

In some ways they are, in some ways they aren't.
dedicated to a specific job,

Irrelevant to the reliability.
and aimed for a market which places value on reliability and stability.

And you pay a lot for that.
Of course, you can still argue about it - and you can probably find motherboards with a better track record than some
hardware raid cards, so it's worth doing some research before buying.

It's not that easy to research, particularly with hardware raid reliability.
It's certainly true that a good hardware raid card will cost more than a typical motherboard,

And you need some motherboard anyway with
good hardware raid, so that's more than double.
so if you are buying spares, the motherboard is cheaper. But if you are looking at this over a long term, motherboard
designs (and in particular, chipsets and bios versions) come and go a lot faster than raid card designs.

Doesn't mean you have to bother with many of the changes.
And even if your particular model of raid card is no longer available, newer models from the same company will often
be compatible with the older drives

Not if the manufacturer has disappeared.
- motherboard fakeraid gives no such promises.

But anyone with a clue has full backups anyway.
Of course, software raid is a better choice if this is an important
issue, since the same OS will work on a wide range of hardware.

Yes, but OSs that support software raid well aren't necessarily that cheap.
OK - I was careful to prefix that statement with "as far as I know".
It's not something I've tried, and the quick google I did about
windows software raid turned up plenty of information about making
dynamic disk raid sets on extra drives and partitions, but the only
article I found about windows software raid and windows system
partitions said that it couldn't be done.

It can anyway.
 

Rod Speed

Drive speed is limited

No it isn't, any more than raid speed is limited.
- people use raid as a way to increase speed.

Not much anymore with personal desktop systems.
I agree that it doesn't normally make sense to use hardware raid cards for speed purposes

Which is what I said.
- the money is better spent on the drives and using software raid.

Not necessarily, particularly with expensive software raid.
Hardware raid cards make sense from a reliability viewpoint, not speed.

Which is what I said.
But that does not mean that people /don't/ use hardware raid cards when they want a faster disk system.

Very few do with the personal desktop systems being discussed.
Do you think that hardware raid cards are more likely to fail than motherboards?

Nope, you have a problem with your logic there. All I said was that motherboards
are not MUCH MORE LIKELY to fail than a hardware raid card.
Well, I guess you are free to believe that if you want,

Having fun thrashing that straw man?
and I'm not likely to be able to change your mind.

Yes, you don't have a shred of evidence that motherboards
are MUCH MORE LIKELY to fail than a hardware raid card.
I have seen enough motherboard failures to know that they do sometimes fail - I have not seen enough hardware raid
cards to be sure that they /don't/ fail.

So your opinion on that is completely irrelevant.
That depends on the level of security you are looking for. Buying a spare is, of course, the best security - but you
might have a supplier that you trust enough to believe their assurances that they have replacements in stock.

Doesn't mean that they will have one when you need it.
I might be wrong here, but I thought even the "home" versions of Win7
supported some sort of raid (raid 0, and perhaps raid 1)?

Yes, you are wrong.
But if you have to buy a more expensive version of windows just to get the software raid, then obviously it is no
longer free.

What I said.
And of course any version of Linux will support software raid (though some distros make it easier to use than others).

And plenty don't want to use that OS for various reasons.
Backup protects your data, but not your uptime.

Uptime is not normally a problem with the personal desktop systems being discussed
and duplication is a much better way of ensuring uptime when it is important.
The problem with raid and power failures is that you can get your stripes out of sync

And that can be avoided with a UPS.
perhaps you have written data to one part of a mirror, but not the second copy. A good hardware raid card will
protect you from that situation using a battery or non-volatile caches -

And a UPS avoids the problem with motherboard raid or software raid.
software raid or fakeraid will not.

Yes it will with a UPS.
The file system's logs and journals may be sufficient to recover from such a situation.

And you dont need that if you use a UPS.
And of course, as you took pains to point out to me in a previous
thread, backups only save your data from your last backup - raid can help preserve data saved since the last backup.

And continuous backup is also perfectly possible.
A system crash can have the same effect on the data storage as having the power pulled out unexpectedly.

But hardly ever does.
Linux has better support for software raid than anything else I know of.

And plenty prefer not to use it for personal desktop systems for a reason.
You can get it free, or pay for supported versions, according to your requirements.

And plenty prefer not to use it for personal desktop systems for a reason.
If you are thinking about things like the windows server versions, or AIX, Solaris, or other bigger OS's, it's likely
that you'll choose
these for other reasons - the software raid support is probably a
small issue, and is thus effectively a free bonus.

I wasn't talking about support.
 

Arno

Ed Light said:
I'm assuming that Win 7 Home Premium doesn't do software raid?

That is an interesting question. I somehow doubt it, but it is not
even easy to find out (demonstrating again that Windows is a toy...).
If it is in a "feature cluster" with disk encryption, it looks like
it will require Enterprise or Ultimate.

While I have Win7 Ultimate (professional reasons :-/ ), it seems
I cannot use dynamic disks either, as they seem to have been
engineered to prevent using anything on them from another OS.

Arno
 

Arno

David Brown said:
Rod Speed wrote: [...]
And you really need to keep a spare of what is the most expensive
approach too.
That depends on why you are using raid - if it is for improved uptime,
you might well want a spare card on-hand. If it is for improved speed,
it is probably not necessary.
Agreed.

Generally speaking, I would say that if you are concerned enough about
uptime to want a spare hardware raid controller, you are better off
setting up a redundant system or a hot spare of the entire system.

Not really. The controller is the only non-generic part. Everything
else you can replace with generic spares you can get fast and locally.
Or you can re-purpose a different machine. For the controller alone,
this does not work.
After all, the raid card is not a weak point in the system - things like
power supplies are much more likely to fail, as is the motherboard. But
you will at least want to make sure replacement parts are easily available.

Indeed. The RAID controller is typically the only part not easily
available.

The price is good, certainly - but not any better than the price of
software raid.

And the flexibility and problem resilience is worse.
It's not a problem - certainly not in terms of performance. But it
means you have the same vulnerabilities to power failures or system
crashes as you do with software raid.

Actually, the system crash risk you get in addition with software
RAID as compared to hardware RAID is pretty low or non-existent.

As for power-failures, that is not a RAID risk at all, but a filesystem
risk. RAID does not make it worse, unless you do stupid things like
using a hardware RAID controller with a large buffer and no battery backup.

The way to deal with power-failures is UPS and journalling file
system designed to be power-failure resistant. Of course the
relevant applications also need to be resilient, for example
databases need to use a recovery log and text editors need to
do automatic, regular backups while you write.
Motherboards are a higher risk component than a raid card. They are
high power, high speed, and dense boards, leaving them very vulnerable
to overheating if you have problems with your fans or cooling systems.
They are connected to all sorts of external devices, making them
vulnerable to external wrong connections or misuse (depending on the
usage environment, of course). And the heavy competition in the market
leads to low-cost shortcuts sometimes being taken in the design and
manufacture of motherboards. Raid controller cards are simpler,
dedicated to a specific job, and aimed for a market which places value
on reliability and stability.

Well, fakeRAID cards are still in the low-cost segment. Don't depend
on them too much. Having a recovery strategy for RAID failure saves
one from losing all data after the last backup. The same is true
for proper RAID.
Of course, you can still argue about it - and you can probably find
motherboards with a better track record than some hardware raid cards,
so it's worth doing some research before buying.
It's certainly true that a good hardware raid card will cost more than a
typical motherboard, so if you are buying spares, the motherboard is
cheaper. But if you are looking at this over a long term, motherboard
designs (and in particular, chipsets and bios versions) come and go a
lot faster than raid card designs. And even if your particular model of
raid card is no longer available, newer models from the same company
will often be compatible with the older drives - motherboard fakeraid
gives no such promises.

Actually the fakeRAID compatibility might be higher, provided you
stay with the same controller, whether card or mainboard. For example
Silicon Image seems to have a single fakeRAID implementation for all
their chips. However, I recently ran into another nice problem: I
was unable to reflash a fakeRAID card with a non-RAID BIOS, because
the SiL flasher does not support the flash chip used on the card.

Overall, I think fakeRAID has severe limits that most people do
not understand and that are likely to bite them.
Of course, software raid is a better choice if this is an important
issue, since the same OS will work on a wide range of hardware.

Indeed. My recovery strategy for my linux arrays is to attach them
in any way (even USB enclosures attached to a laptop would work) to
a different Linux box. And I have done this just to make sure it works.

Now if only Win7 had something compatible with Linux and
remotely as flexible and powerful. No such luck. If virtualization
at some point gets to where it supports games well (my main Windows
application), I will restrict Windows to running that way and use
Linux as the base system. Not too surprising that VMware ESXi
basically does that with a custom Linux base.
OK - I was careful to prefix that statement with "as far as I know".
It's not something I've tried, and the quick google I did about windows
software raid turned up plenty of information about making dynamic disk
raid sets on extra drives and partitions, but the only article I found
about windows software raid and windows system partitions said that it
couldn't be done. But you know more about such setups than I do, so
I'll take your word for it that it's possible.

I am also not sure. But I think you can install on a non-raided disk
and then turn your system partition into a dynamic disk afterwards.

Arno
 

Rod Speed

David Brown wrote
Rod Speed wrote
Well, I hope the OP got the information he was looking for here,
because now that you've started revving up the rodbot autoresponder,
I'm dropping the thread before you get /really/ silly.

You never ever could bullshit your way out of a wet paper bag.
 

Arno

David Brown said:
Arno said:
David Brown said:
Rod Speed wrote: [...]
And you really need to keep a spare of what is the most expensive
approach too.
That depends on why you are using raid - if it is for improved uptime,
you might well want a spare card on-hand. If it is for improved speed,
it is probably not necessary.
Agreed.

It's also true of the third common reason for using raid, that I forgot
in the previous post - making a larger partition than will fit on a
single disk.
Not really. The controller is the only non-generic part. Everything
else you can replace with generic spares you can get fast and locally.
Or you can re-purpose a different machine. For the controller alone,
this does not work.
I agree that the controller is the only potentially difficult part. But
if you are looking for very high uptimes, you need to get redundancy on
as many parts of the system as possible. Raid gives you redundancy on
the hard disks, and you can get things like redundant power supplies.
Having spare hardware raid controllers on hand gives you offline
redundancy there. But there comes a point where it is best to simply
have a complete replacement machine ready to take over on short notice.
I don't mean that you are more likely to get a system crash when using
software raid compared to hardware raid (or fakeraid) - there should be
negligible extra risk when using any sort of raid (unless the drivers
are bad, of course).
I am referring to the risk to the integrity of the raid array if you get
a system crash. Obviously the main risk is always of logical errors in
the file system - a system crash in the middle of write can leave your
file system inconsistent, and while journalling or logging can generally
recover metadata you may lose file data. This sort of risk is (or
should be!) also unaffected by the use of raid.
But with software raid or fakeraid you have other failure modes for the
raid array integrity. In particular, the system can update part of a
stripe but not all of it - on reboot, the system must somehow try to
piece together what has failed. You may get correct data when reading
from the stripe, you may get wrong data, you may get mixed correct and
wrong data, or you may get an error because the system knows the data is
inconsistent and would rather give nothing than wrong data. You are
also likely to face long consistency checks which reduce the performance
of the array while checking for dirty data.

Unless the RAID implementor was really, really stupid, this is not
an issue either. With RAID 1 you have a disk that is updated first.
This means you look at this disk, unless it dropped out of the array.
The risk is then the same as with an ordinary interrupted write.
With a /good/ hardware raid system, including battery backed memory or
involatile caches, writing data to a stripe should be atomic, so you
don't have this risk of inconsistency. A hardware raid card that
doesn't provide this is pretty much worthless.

Well, true, but it does not matter, as you do not have atomicity before.
The problem here is more that the OS cannot enforce atomicity with
RAID controllers that force-buffer every write. With software RAID
it can. The battery on a hardware RAID cache really only serves
to bring the risk down to what you have ordinarily. That said, with
a modern filesystem it does not matter much anyways.
Linux mdadm raid can give you a good step towards this by using write
intention maps - then it at least knows which areas of the disk may be
inconsistent after a failure.
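The write-intent idea can be sketched as a toy bookkeeping model. This is illustrative Python with a made-up region size; real mdadm bitmaps live on the array itself and work at the block-device level:

```python
# Sketch of a write-intent bitmap: mark a region dirty before writing,
# clear it once both mirror halves are consistent. After a crash, only
# regions still marked dirty need resyncing - not the whole array.

CHUNK = 64  # blocks covered per bitmap bit (illustrative value)

class BitmapMirror:
    def __init__(self):
        self.dirty = set()  # regions that may be inconsistent

    def write(self, block, crash_mid_write=False):
        region = block // CHUNK
        self.dirty.add(region)       # 1. record intent (persisted first)
        if crash_mid_write:
            return                   # power fails: the bit stays set
        # 2. ...data written to both mirror halves here...
        self.dirty.discard(region)   # 3. copies consistent: clear the bit

    def regions_to_resync(self):
        """On reboot, only these regions must be compared/copied."""
        return sorted(self.dirty)

m = BitmapMirror()
m.write(10)                          # clean write: bit set, then cleared
m.write(200, crash_mid_write=True)   # interrupted write
assert m.regions_to_resync() == [3]  # only region 200 // 64 is suspect
```

The trade-off is that each write costs an extra bitmap update, in exchange for resyncs that touch only the suspect regions.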
On a slightly side topic, this is one place where software solutions have
the potential to be much better than hardware solutions. Advanced
filesystems like ZFS and btrfs handle such situations much better than
current software raid systems - they handle the mirroring and data
integrity at a higher level, and support checksums for consistency.
Thus if you have such a failure, these systems can identify the correct
data using checksums.

Indeed. It is still just a convenience, as it lowers the likelihood
of data that was not hard-flushed going missing from disk.
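The checksum-based repair described above can be sketched like this. Purely illustrative Python: real filesystems like ZFS checksum extents/blocks and store the sums in block pointers, and the block contents here are hypothetical:

```python
# Toy ZFS/btrfs-style self-healing mirror: a checksum stored per block
# lets a read identify WHICH copy is correct and repair the bad one.
import zlib

def cksum(data):
    return zlib.crc32(data)

class HealingMirror:
    def __init__(self):
        self.copies = [{}, {}]   # two mirror halves: block -> data
        self.sums = {}           # block -> expected checksum

    def write(self, block, data):
        for copy in self.copies:
            copy[block] = data
        self.sums[block] = cksum(data)

    def corrupt(self, side, block, data):
        """Simulate silent bit rot on one mirror half."""
        self.copies[side][block] = data

    def read(self, block):
        for copy in self.copies:
            data = copy[block]
            if cksum(data) == self.sums[block]:
                for other in self.copies:   # heal any bad sibling
                    other[block] = data
                return data
        raise IOError("both copies fail checksum")

m = HealingMirror()
m.write(7, b"ledger")
m.corrupt(0, 7, b"garbage")
assert m.read(7) == b"ledger"        # bad copy detected, good one used
assert m.copies[0][7] == b"ledger"   # and the bad copy was repaired
```

Plain RAID 1 cannot do this: with no checksum, it has no way to tell which of two differing copies is the right one.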
That's good advice for any system. But see my comment above for special
risks with raid systems - they apply equally to power failure and system
crashes (unless you have a hardware raid system that takes care of these
risks).

As I said, I do not think these apply. Basically a lot of hardware RAID
will prevent an ATA/SCSI disk flush to actually reach the disk, but
redirect it to buffer memory and write at their leisure. So with hardware
RAID you can have a full buffer's worth in flight, while the OS
thinks it has been flushed to disk. You do not have that problem
with software RAID or no RAID. Hardware RAID with activated write
buffering actually has a higher risk of causing real problems here,
hence the battery.
Yes - it can't be repeated often enough that raid is not a backup
solution. After all, the most common way to lose data is through human
error - raid won't help there!

I view RAID as a partial backup that protects against some of the risks
that a proper backup protects against. But indeed that limitation
needs to be restated clearly and frequently.
Have you tried making windows software raid partitions, then trying to
mount them using Linux? I have no idea if it would work, but it would
be very nice for recovery.
Virtual Box is getting steadily better for running demanding programs
within a virtual machine. Of course, it depends on the sorts of games
you want to run, and the power of the host machine. Eventually there
comes a point when you want a windows machine for your windows-only
games, and a Linux machine for everything else (including Linux games).
Maybe that's the answer. Perhaps I'll make up a new virtual box machine
and try it out. On my office machine (XP host) I've got a virtual box
Linux machine with software raid using two virtual disks on two host
disks - the virtual machine has faster disk access than the host machine!

Interesting set-up!

Arno
 

me

OP back again... interesting discussion


The particular system I am thinking of using RAID 1 on is a Win2003
Server. While I know a bit about Win2003, I know very little about the
RAID abilities.

If I understand correctly from the above discussions, I can enable
RAID 1 in Win2003 and it will do ALL the RAID work through software,
without worrying about the "fakeraid" BIOS or clunky drivers?

Is it a good solution?
 

Mike Tomlinson

me said:
The particular system I am thinking of using RAID 1 on is a Win2003
Server. While I know a bit about Win2003, I know very little about the
RAID abilities.

If I understand correctly from the above discussions, I can enable
RAID 1 in WIn 2003 and it will do ALL the RAID work through software,
without worrying about the "fakeraid" BIOS or clunky drivers?

Yes. It's also very portable, so you can take the drives to another
motherboard and it'll just work.
Is it a good solution?

I think it's better than the cheap fakeRAID cards or built-in
controllers. It'll be more cpu-intensive than a single drive, but
that's hardly a problem nowadays.
 

Rod Speed

me said:
OP back again... interesting discussion
The particular system I am thinking of using RAID 1 on is a Win2003
Server. While I know a bit about Win2003, I know very little about the
RAID abilities.
If I understand correctly from the above discussions, I can enable
RAID 1 in WIn 2003 and it will do ALL the RAID work through software,
without worrying about the "fakeraid" BIOS or clunky drivers?

Yes. Win calls it mirroring. It's in the help.
Is it a good solution?

Yes.
 
