[Adaptec 2610SA] Add HDD to existing RAID 5 array


Bubba

Greetings,

is there a way to add a disk to a RAID 5 array (4x 500GB) in order to
expand capacity? The only option I can manage to find in Adaptec's BIOS is
adding a hot spare, but no sign of expanding array.

I use Debian Lenny - is there any CLI tool for Adaptec SATA RAID
management?

TIA!
 

Bob Willard

Bubba said:
Greetings,

is there a way to add a disk to a RAID 5 array (4x 500GB) in order to
expand capacity? The only option I can manage to find in Adaptec's BIOS is
adding a hot spare, but no sign of expanding array.

I use Debian Lenny - is there any CLI tool for Adaptec SATA RAID
management?

TIA!

No. You must back up the current array, then create a new array,
then restore the backed-up data to the new array.
 

Bubba

Bob Willard's log on stardate 16 Sep 2009
No. You must back up the current array, then create a new array,
then restore the backed-up data to the new array.

Is this a joke based on me missing something crucial in the whole story,
or are you actually telling me that #$%! Adaptec has no support for
on-line array expansion on this controller? :(
 

Bob Willard

Bubba said:
Bob Willard's log on stardate 16 Sep 2009


Is this a joke based on me missing something crucial in the whole story,
or are you actually telling me that #$%! Adaptec has no support for
on-line array expansion on this controller? :(

It is not Adaptec, it is the nature of RAID5. Draw a picture of how the
data and parity blocks are spread over the HDs in the RAIDset, and how
you would go about adding another HD to an existing array. While an
incremental expansion could be done, if the existing array has enough
unused space, it is far faster and safer to just do the full monty:
backup/recreate/restore.
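Bob's drawing exercise can be done in a few lines of code. The sketch below is an illustrative model of a rotating-parity (left-symmetric) RAID 5 layout, not any particular controller's algorithm; it counts how many logical blocks land at a different (disk, stripe) position once a fifth drive joins a four-drive array, which shows why an in-place expansion has to move nearly everything.

```python
# Illustrative model of a rotating-parity (left-symmetric) RAID 5
# layout. Not any particular controller's algorithm.

def raid5_layout(num_blocks, disks):
    """Map each logical block to its (disk, stripe) position."""
    layout = {}
    data_per_stripe = disks - 1          # one chunk per stripe is parity
    for block in range(num_blocks):
        stripe = block // data_per_stripe
        parity_disk = (disks - 1) - (stripe % disks)   # parity rotates
        slot = block % data_per_stripe
        disk = (parity_disk + 1 + slot) % disks        # skip parity chunk
        layout[block] = (disk, stripe)
    return layout

old = raid5_layout(3000, 4)   # existing 4-drive array
new = raid5_layout(3000, 5)   # same data striped over 5 drives
moved = sum(old[b] != new[b] for b in old)
print(f"{moved} of {len(old)} blocks must move")   # almost all of them
```

In this model only the first handful of blocks keep their position; everything after them shifts, because the stripe width changes from 3 data chunks to 4.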
 

Mike Ruskai

Greetings,

is there a way to add a disk to a RAID 5 array (4x 500GB) in order to
expand capacity? The only option I can manage to find in Adaptec's BIOS is
adding a hot spare, but no sign of expanding array.

I use Debian Lenny - is there any CLI tool for Adaptec SATA RAID
management?

http://preview.tinyurl.com/lsfe5s

I despise URL shorteners, but that one was just too hideous to post directly.

The 2610SA is an OEM-only board made by Adaptec for Dell and HP, so if the
above doesn't help, try poking around their sites.
 

DevilsPGD

In message <[email protected]> Bob Willard wrote:
It is not Adaptec, it is the nature of RAID5. Draw a picture of how the
data and parity blocks are spread over the HDs in the RAIDset, and how
you would go about adding another HD to an existing array.

That's fairly trivial.

First, take a look at the map as it exists now, and the preferred
result.

Start off by moving any existing blocks to the new drive as needed. This
frees up gaps on the existing drives; now look for what data should go
to each of those empty blocks. Rinse, repeat.
While an
incremental expansion could be done, if the existing array has enough
unused space,

No unused space is needed; by definition we have a new drive
available to start the process, and the new drive offers enough free space.
it is far faster and safer to just do the full monty:
backup/recreate/restore.

It will always take longer to do a full backup/recreate/restore process,
and may end up being riskier to your data during the process.

In an ideal situation, your suggestion requires a read of the entire
array, write to another place, then the new array configuration can be
initialized (a full write cycle to the entire drive). At this point you
need to read all of the data again and write it to the RAID-5 array, but
as you're no doubt aware, writing to a RAID-5 array isn't just a single
write, but rather a minimum of two reads and one write, and more
likely two reads and two writes with some math in the middle
(unless your data happens to match the results of your array
initialization).

Compare this to reorganizing an existing array, where the process
requires every sector to be read once and written once at best, or at
worst read once and written twice (allowing for data to be
temporarily stored elsewhere during sector reorganizations, a process
which is entirely safe if properly journaled).

That being said, a live migration is a significantly more complex
process from the controller's point of view, so it's less likely that
inexpensive controllers can implement live migrations.
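DevilsPGD's accounting can be made concrete with some back-of-envelope arithmetic. The numbers below follow the I/O counts stated in the post (read-modify-write parity updates, journaled staging copies) and are estimates under those assumptions, not measurements of any real controller.

```python
# Back-of-envelope I/O counts for growing a RAID 5 array, following
# the reasoning in the post above. D = number of user-data blocks;
# figures are estimates under the stated assumptions, not measurements.
D = 1_000_000

# Backup/recreate/restore:
#   backup pass: read the whole array, write it elsewhere
#   initialize the new, larger array: one full write pass
#   restore pass: per block, ~2 reads + 2 writes (RAID 5 read-modify-write)
backup_restore = (D + D) + D + D * (2 + 2)

# In-place reshape, worst case per the post:
#   each block read once and written twice (journaled staging copy)
reshape_worst = D * (1 + 2)

print(f"backup/restore : ~{backup_restore // D} I/Os per data block")
print(f"in-place grow  : ~{reshape_worst // D} I/Os per data block")
```

Under these assumptions the full round trip costs roughly seven I/Os per data block against three for the worst-case in-place reshape, which is the core of the argument above.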
 

David Brown

Bob said:
It is not Adaptec, it is the nature of RAID5. Draw a picture of how the
data and parity blocks are spread over the HDs in the RAIDset, and how
you would go about adding another HD to an existing array. While an
incremental expansion could be done, if the existing array has enough
unused space, it is far faster and safer to just do the full monty:
backup/recreate/restore.

I have no idea about Adaptec's controller, but with Linux md software
raid 5 there is no problem adding or removing drives. It can also
convert a raid 5 to a raid 6 when you add an extra disk. This won't
really help the OP if he is stuck with a proprietary controller and its
limits (if any - as I say, I don't know the controller or its
capabilities), but there is certainly nothing in raid 5 that makes
changing the number of devices an impossible task.
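David's point that nothing in RAID 5 forbids changing the number of devices can be sketched with a toy model. The code below is pure Python and unrelated to the actual md driver internals: it stripes data blocks with rotating XOR parity across four simulated disks, re-stripes them across five, and checks that the user data survives. A real reshape such as mdadm's works in place, stripe by stripe, but the invariant it preserves is the same.

```python
# Toy RAID 5 "grow": stripe data with rotating XOR parity over 4
# simulated disks, re-stripe over 5, and verify nothing was lost.
# Pure illustration; a real reshape works in place, stripe by stripe.

def xor(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def build(data_blocks, disks):
    """Return per-disk chunk lists with rotating parity."""
    cols = [[] for _ in range(disks)]
    per_stripe = disks - 1
    for s in range(0, len(data_blocks), per_stripe):
        stripe = data_blocks[s:s + per_stripe]
        stripe += [b"\0" * 4] * (per_stripe - len(stripe))  # pad last stripe
        parity_disk = (s // per_stripe) % disks             # parity rotates
        chunks = stripe[:]
        chunks.insert(parity_disk, xor(stripe))
        for d in range(disks):
            cols[d].append(chunks[d])
    return cols

def read_back(cols, disks, n):
    """Recover the first n data blocks, skipping parity chunks."""
    out = []
    for s in range(len(cols[0])):
        parity_disk = s % disks
        out += [cols[d][s] for d in range(disks) if d != parity_disk]
    return out[:n]

data = [i.to_bytes(4, "big") for i in range(24)]
arr4 = build(data, 4)                            # original 4-disk array
arr5 = build(read_back(arr4, 4, len(data)), 5)   # grow: re-stripe wider
assert read_back(arr5, 5, len(data)) == data
print("data intact after growing from 4 to 5 disks")
```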
 

Bob Willard

David said:
I have no idea about Adaptec's controller, but with Linux md software
raid 5 there is no problem adding or removing drives. It can also
convert a raid 5 to a raid 6 when you add an extra disk. This won't
really help the OP if he is stuck with a proprietary controller and its
limits (if any - as I say, I don't know the controller or its
capabilities), but there is certainly nothing in raid 5 that makes
changing the number of devices an impossible task.

Converting from RAID5 to RAID6 is trivial, since the added HD is not
used -- just a hot spare. Failover from RAID6 to RAID5 is pretty
messy, but growing from RAID5 to RAID6 is simple.
 

Bob Willard

DevilsPGD said:
In message <[email protected]> Bob Willard wrote:


That's fairly trivial.

First, take a look at the map as it exists now, and the preferred
result.

Start off by moving any existing blocks to the new drive as needed. This
frees up gaps on the existing drives; now look for what data should go
to each of those empty blocks. Rinse, repeat.


No unused space is needed; by definition we have a new drive
available to start the process, and the new drive offers enough free space.


It will always take longer to do a full backup/recreate/restore process,
and may end up being riskier to your data during the process.

In an ideal situation, your suggestion requires a read of the entire
array, write to another place, then the new array configuration can be
initialized (a full write cycle to the entire drive). At this point you
need to read all of the data again and write it to the RAID-5 array, but
as you're no doubt aware, writing to a RAID-5 array isn't just a single
write, but rather a minimum of two reads and one write, and more
likely two reads and two writes with some math in the middle
(unless your data happens to match the results of your array
initialization).

Compare this to reorganizing an existing array, where the process
requires every sector to be read once and written once at best, or at
worst read once and written twice (allowing for data to be
temporarily stored elsewhere during sector reorganizations, a process
which is entirely safe if properly journaled).

That being said, a live migration is a significantly more complex
process from the controller's point of view, so it's less likely that
inexpensive controllers can implement live migrations.

Huh. I hadn't even considered adding a HD to an offline array; clearly
that is simple. And the case of a controller with enough ports to
have the old (N HD) array and the new (N+1 HD) array concurrently
connected is also relatively simple.

The only interesting (to me) case is adding a HD to a RAIDset which
is being used -- with files that are active and open. And doing that
cleanly and safely, without the cooperation of the filesystem, is a
challenge: rather likely to result in substantial loss of performance
during the addition and/or a badly fragmented array. Even if the
controller claimed to be able to dynamically add HDs to an active
RAID5/6 RAIDset, I doubt if I'd trust it without backups.
 

David Brown

Bob said:
Converting from RAID5 to RAID6 is trivial, since the added HD is not
used -- just a hot spare. Failover from RAID6 to RAID5 is pretty
messy, but growing from RAID5 to RAID6 is simple.

Either I'm missing something here, or /you/ are missing something.

Raid5 to Raid6 is far from trivial, because the added disk is not just a hot spare.
Raid6 has two parities per stripe rather than one, so you need to go
through the entire array calculating the second parity. That in itself
is pretty easy if you are happy storing the Q parity on the new disk,
but re-doing the stripe layout to standard raid 6 will involve
re-writing everything on the disk. Converting raid 6 to raid 5 is sort
of the reverse: change the raid 6 standard layout to put Q on one
disk, then remove the Q disk.

Have a look at this post from the Linux md developer:

http://neil.brown.name/blog/20090817000931
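To see why the conversion David describes must read the whole array: RAID 6's second parity Q is a Reed-Solomon syndrome over GF(2^8), computed from every data chunk in the stripe. The snippet below is a minimal, unoptimized illustration of that arithmetic (the Linux md kernel uses the same field but heavily optimized routines).

```python
# Minimal GF(2^8) arithmetic behind RAID 6's Q parity (same field as
# Linux md uses; this is an unoptimized illustration, not kernel code).

def gf_mul(a, b):
    """Multiply in GF(2^8) with polynomial x^8 + x^4 + x^3 + x^2 + 1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
        b >>= 1
    return p

def pq(data):
    """P and Q syndromes for one stripe of per-disk byte values."""
    P, Q, g = 0, 0, 1          # Q = sum(d_i * 2**i) in GF(2^8)
    for d in data:
        P ^= d                 # P is plain XOR, as in RAID 5
        Q ^= gf_mul(d, g)
        g = gf_mul(g, 2)
    return P, Q

P, Q = pq([0x12, 0x34, 0x56, 0x78])
# Going RAID 5 -> RAID 6 means computing Q like this for every stripe,
# which is why all existing data has to be read.
print(f"P={P:02x} Q={Q:02x}")
```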
 

Bubba

Mike Ruskai's log on stardate 17 Sep 2009

Thanks for the link. It does say what I wanted to know; however, none of
the tools mentioned are available to me, except the configuration tool
I can access by pressing Alt+A (or whatever that combination is). :(
The 2610SA is an OEM-only board made by Adaptec for Dell and HP, so
if the above doesn't help, try poking around their sites.

This one is Dell's, you are right. I just might throw them an e-mail, who
knows...

In any case, thanks to all who took part in this thread; I'll try to dig
some more and post back for the record. :)
 

Arno

Bubba said:
Greetings,
is there a way to add a disk to a RAID 5 array (4x 500GB) in order to
expand capacity? The only option I can manage to find in Adaptec's BIOS is
adding a hot spare, but no sign of expanding array.
I use Debian Lenny - is there any CLI tool for Adaptec SATA RAID
management?

I assume this is Adaptec RAID, not Linux software RAID. This
controller does not offer RAID expansion, and no CLI tool will
help.

My advice would be to throw this piece of junk away (I had one
of these too a long time ago), get a Linux-supported regular
SATA controller with Linux hotplug, and move to Linux software
RAID. If you get one or several PCI-E controllers, things should
also get faster, and Linux does support array expansion.

Arno
 

Bubba

Arno's log on stardate 18 Sep 2009
My advice would be to throw this pice of junk away (I had one
of these too a long time ago), get a linux supported regular
SATA controller with Linux hotplug and move to Linux software
RAID. If you get one or several PCI-E controllers, things should
also get faster and Linux does support array expansion.

Any suggestions?

I bought the Adaptec on eBay for $20 (well, it figures why it sucks,
then :D), but I'm open to more expensive suggestions, since until now I
"overlooked" Adaptec being brutally slow, but this issue I'm having now
is inexcusable.

Mind you, server is older and has only PCI-X, not PCI-E.
 

David Brown

Bob said:
Yes, we are both missing something: there are at least two definitions
of RAID6: one is RAID with double parity blocks, and the other
is RAID5 with a hot spare. Clearly, RAID5=>6 is hard with the
first definition, and simple with the second.

I've never heard of "raid5 with a hot spare" being referred to as
"raid6". Perhaps it was from older times when raid 5 was
state-of-the-art - different companies had various proprietary
improvements or extensions to raid 5 and picked their own conflicting
names and levels to describe them. I think you'll be hard pushed to
find a modern reference describing raid 6 as "raid 5 with a hot spare".

But at least we are agreed on the technical side, once we agreed on what
we were discussing!
 

Arno

Bubba said:
Arno's log on stardate 18 Sep 2009
Any suggestions?
I bought the Adaptec on eBay for $20 (well, it figures why it
sucks, then :D), but I'm open to more expensive suggestions,
since until now I "overlooked" Adaptec being brutally slow,
but this issue I'm having now is inexcusable.
Mind you, server is older and has only PCI-X, not PCI-E.

I think you should try Linux software RAID, if you want to
use the array just under Linux. If too slow, look again.

Arno
 

Bubba

Arno's log on stardate 18 Sep 2009
I think you should try Linux software RAID, if you want to
use the array just under Linux. If too slow, look again.

I'll check whether I can use that Adaptec as a "plain" SATA controller,
that is, without RAID/JBOD functionality, and create a Linux software RAID.

Mind you, with four 500GB SATA drives I barely get 40 or so MB/s.

Thx. :)
 

DevilsPGD

In message <[email protected]> Bob Willard wrote:
Huh. I hadn't even considered adding a HD to an offline array; clearly
that is simple. And the case of a controller with enough ports to
have the old (N HD) array and the new (N+1 HD) array concurrently
connected is also relatively simple.

The only interesting (to me) case is adding a HD to a RAIDset which
is being used -- with files that are active and open. And doing that
cleanly and safely, without the cooperation of the filesystem, is a
challenge: rather likely to result in substantial loss of performance
during the addition and/or a badly fragmented array. Even if the
controller claimed to be able to dynamically add HDs to an active
RAID5/6 RAIDset, I doubt if I'd trust it without backups.

Highpoint does it on online arrays just as well as offline arrays, with
the obvious performance hit. In fact, you can migrate any supported
"array" from one type to another as long as the resulting array is equal
or larger then the previous one.

The only exception is that a single drive in passthrough mode cannot be
migrated, but a single drive in JBOD mode can be migrated to or from.

Doing it online is somewhat more complex, but the RAID array isn't even
aware of what files are active and open, only what is being written to
at this moment. Since only expansion is supported, nothing at all
happens to the filesystem during the migration; the controller just
remaps blocks around, and if a block is being written I believe the
controller pauses until it's finished and then resumes. Once the
expansion is complete, the controller informs the OS of the change. If
you're using Highpoint's drivers you can see the space immediately; if
not, then you need to unmount the array or reboot before the additional
space is visible.

All that being said, I'd never attempt the above without backups, nor
would I usually trust a single RAID controller with the only copy of
valuable data.
 

Fred Bloggs

Bubba said:
Any suggestions?

3ware cards are highly regarded under Linux, with the caveat that I have
not personally used them. They do make PCI-X cards - have a look at

http://www.3ware.com/products/Serial_ata2-9000.asp

Note that simply swapping the drives from the Adaptec card to a
different brand is very unlikely to work "straight off". You may well
have to back up, install the new card and additional drive, create the
array, then restore your data. If you need to expand again in the
future, the spec includes "online capacity expansion".

Or you could go with Arno's suggestion. Use a couple of SATA cards and
install your drives using Linux's software RAID, which is reliable and
easy to manage. For simplicity, boot the OS off another drive and use
the RAID for data.

Re. PCI-X. I feel your pain. It's supposed to be possible to plug
32-bit universal PCI cards into -X slots (at the cost of dropping the
bus speed from 133MHz to 33MHz) but I could not make it work with USB
cards of several different makes and chipsets. Either the card stayed
stubbornly invisible or the ProLiant it was installed in would complain
about incompatible power requirements.
 
