aaccli> container create raid5

  • Thread starter Frantisek Rysanek

Frantisek Rysanek

Took me some time to figure out, so I guess I'd better publish this.
Maybe someone else has been wondering how to create a RAID 5 from
the AACCLI command line, in Linux or FreeBSD or what have you...

Suppose you have 3 disk drives, known as IDs 0, 1 and 2 on
the SCSI bus (or three SATA drives on channels 0, 1 and 2).
This is perhaps the simplest way to do it:

container create raid5 ((0,0,0)) (0,1,0) (0,2,0)

The internal logic of the FSA raid is non-trivial and the AACCLI
is rather spartan, including the documentation.
The double brackets mean that you're supplying a "freespace"
rather than a plain SCSI device, but that you want the default
capacity for the new volume (the available maximum); that's why
no explicit capacity follows the SCSI device within the
"freespace" definition. See also the documentation
provided by Adaptec in the box.

Frank Rysanek
 

Arno Wagner

Previously Frantisek Rysanek said:
> Took me some time to figure out, so I guess I'd better publish this.
> Maybe someone else has been wondering how to create a RAID 5 from
> the AACCLI command line, in Linux or FreeBSD or what have you...

Cryptic, yes.
> Suppose you have 3 disk drives, known as IDs 0, 1 and 2 on
> the SCSI bus (or three SATA drives on channels 0, 1 and 2).
> This is perhaps the simplest way to do it:
> container create raid5 ((0,0,0)) (0,1,0) (0,2,0)
> The internal logic of the FSA raid is non-trivial and the AACCLI
> is rather spartan, including the documentation.

Not to forget unusable for any automatic monitoring.
> The double brackets mean that you're supplying a "freespace"
> rather than a plain SCSI device, but that you want the default
> capacity for the new volume (the available maximum); that's why
> no explicit capacity follows the SCSI device within the
> "freespace" definition. See also the documentation
> provided by Adaptec in the box.

Personally, I now have an expensive paperweight in the form
of an Adaptec 2810 controller. It kept crashing, it was slow,
and it did not have a commandline control tool (AACCLI is a
text-mode GUI, not a commandline tool that you can use, e.g. in
a cron job), so now I am running Linux software RAID on Promise
TX4's, which is also much, much faster.

Arno
 

Frantisek Rysanek

>> The internal logic of the FSA raid is non-trivial and the AACCLI [...]
> Not to forget unusable for any automatic monitoring.
hmm. I guess once upon a time I was able to get some sort of e-mail
notifications to work, with an ASR-2120 (aacraid).

At that time, there was a daemon called "anotifyd" - unfortunately
it didn't use ioctl(/dev/aac0) directly, but rather it relied on
the other daemons in the bunch: aiomgrd and arcpd. In other words,
you needed to install the whole HTTP/SSL-based "Adaptec Storage
Manager" just to get e-mail notifications to work.
That was with the original aacraid CD. Today, the AAR-2410/2810 are
shipped with aacraid CD version 2 - I haven't yet investigated
what's new.
> it did not have a commandline control tool (AACCLI is a
> text-mode GUI, not a commandline tool that you can use, e.g. in
> a cron job)
I believe aaccli *can* be scripted. You merely supply the commands
in a file. A somewhat funny point is that you have to supply an
"exit" command on the very last line of the script, otherwise
aaccli doesn't terminate :) See the AACCLI manual on the CD.
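
To illustrate - this is just a sketch, so double-check the exact
command names against the AACCLI manual - the script file could be
as simple as:

open aac0
container create raid5 ((0,0,0)) (0,1,0) (0,2,0)
container list
exit

and you feed it to aaccli on stdin, e.g. "aaccli <create.scp >log.txt"
(the file name is just an example). The "container list" is only there
to verify the result; the trailing "exit" is the part you must not
forget.
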
> and now I am running Linux software RAID on Promise
> TX4's, which is also much, much faster.
as long as you can afford to spend the CPU time on RAID5
XOR crunching, right?

By Promise you probably mean the non-RAID flavour,
i.e. without the FastTrak RAID BIOS, right?

Anyway it's interesting to know that the AAR-2810 kept
crashing for you. This is something I should focus on :)

On a somewhat unrelated note, right now I have an Areca
1120 in my torture chamber. Its drivers are not in Vanilla
kernels yet, but its RAID5 performance is nice, and hot-swap
behavior even in non-SAF-TE drawers is excellent.

Frank Rysanek
 

Arno Wagner

Previously Frantisek Rysanek said:
> hmm. I guess once upon a time I was able to get some sort of e-mail
> notifications to work, with an ASR-2120 (aacraid).
> At that time, there was a daemon called "anotifyd" - unfortunately
> it didn't use ioctl(/dev/aac0) directly, but rather it relied on
> the other daemons in the bunch: aiomgrd and arcpd. In other words,
> you needed to install the whole HTTP/SSL-based "Adaptec Storage
> Manager" just to get e-mail notifications to work.
> That was with the original aacraid CD. Today, the AAR-2410/2810 are
> shipped with aacraid CD version 2 - I haven't yet investigated
> what's new.
> I believe aaccli *can* be scripted. You merely supply the commands
> in a file. A somewhat funny point is that you have to supply an
> "exit" command on the very last line of the script, otherwise
> aaccli doesn't terminate :) See the AACCLI manual on the CD.

I tried that. It works. But the output is mostly unusable,
since it still contains vt100 control sequences. And
of course AACCLI does _not_ support terminal type "dumb".
> as long as you can afford to spend the CPU time on RAID5
> XOR crunching, right?

Not really an issue. It is only needed when writing, or when reading
from a degraded array. It is difficult to really measure how
much effort goes into the RAID, but when I stream data to
an 8 disk RAID5 array on a dual Athlon 2800+ box, it takes
overall less than 50% of one CPU and gets 35MB/s real, sustained
speed. Could possibly be faster, but I did not invest much time
into tuning, since it is mostly read from. I suspect the bottleneck
is the PCI switching between the different controllers (4, since
each TX4 is essentially 2 controllers on one chip).
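
(If you want to reproduce that kind of number, a crude way is to
stream a few GB onto the array and watch top in parallel - the paths
below are obviously just placeholders:

dd if=/dev/zero of=/mnt/raid/testfile bs=1024k count=4096
cat /proc/mdstat

The mdstat output also tells you whether the array is degraded or
resyncing, which would skew the figures.)
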
> By Promise you probably mean the non-RAID flavour,
> i.e. without the FastTrak RAID BIOS, right?
Right.

> Anyway it's interesting to know that the AAR-2810 kept
> crashing for you. This is something I should focus on :)

It just froze. No diagnostics, the kernel just stopped being
able to communicate with the controller. It was not a kernel
problem either. I had some problems with some SATA enclosures
that were out of spec, and they likely crashed the controller.
Still, a hard freeze is unacceptable, regardless of what nonsense
the disks do. After I moved to software RAID, the kernel
just kicked a disk _with_ diagnostics from time to time until
I replaced the enclosures.
> On a somewhat unrelated note, right now I have an Areca
> 1120 in my torture chamber. Its drivers are not in Vanilla
> kernels yet, but its RAID5 performance is nice, and hot-swap
> behavior even in non-SAF-TE drawers is excellent.

Well, hot-swap is nice. Linux still needs some time to do that
for SATA, but the capability is there in standard SATA hardware.

Arno
 

Frantisek Rysanek

>> I believe aaccli *can* be scripted. You merely supply the commands [...]
> I tried that. It works. But the output is mostly unusable,
> since it still contains vt100 control sequences. And
> of course AACCLI does _not_ support terminal type "dumb".
another interesting point on your part :)

By the output of ldd, it seems to me that aaccli relies on ncurses.
I believe that the handling of terminal types is done by ncurses,
rather than by aaccli directly.
It's indeed a deficiency in aaccli that it can't detect a raw
file/pipe on its output and avoid curses altogether, the way some
better-behaved software does.
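
(Anyone can check this on their own copy with something like

ldd `which aaccli` | grep curses

and libncurses duly shows up in the list.)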

You made me try this on my own.
I've made a simple script containing just "open aac0" and "exit".
If I ran "aaccli <sample.scp >output.txt" under TERM=linux, I could
indeed see some escape sequences in the output.
If I did "export TERM=dumb" though, the output seemed to lack all
escape sequences, except for the terminal-style EOLs (just CR == \r).
Nonetheless, it seems that curses translates a "clear screen" into
several hundred space characters (0x20) ahead of the useful data, so
there *is* some formatting to crunch if you want to process this
output with Perl or something.
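
For whoever wants to feed this to a script, something along these
lines seems to clean it up well enough (the file names are just
examples, and under TERM=dumb there shouldn't be any real escape
sequences left to worry about):

TERM=dumb aaccli <sample.scp >output.txt
tr '\r' '\n' <output.txt | sed -e 's/^ *//' -e '/^$/d' >clean.txt

i.e. turn the CR-only line ends into LFs, strip the leading padding
spaces, and drop the empty lines that are left.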

Maybe I should try some more complex aaccli commands to learn
how messy this can get :) Backspace characters and such...
> Not really an issue. It is only needed when writing, or when reading
> from a degraded array. It is difficult to really measure how
> much effort goes into the RAID, but when I stream data to
> an 8 disk RAID5 array on a dual Athlon 2800+ box, it takes
> overall less than 50% of one CPU and gets 35MB/s real, sustained
> speed. Could possibly be faster, but I did not invest much time
> into tuning, since it is mostly read from. I suspect the bottleneck
> is the PCI switching between the different controllers (4, since
> each TX4 is essentially 2 controllers on one chip).
thanks for that data, this is perhaps the first time I've seen
a real-world benchmark of this kind posted by anyone.
Then again I didn't try too hard to find one :)

In my experience, various RAID5 controllers based on the
Intel IOP302/303 can do about 50 MBps sustained sequential
writes, averaged over 2 seconds (so the peak rate may actually
be much higher). The newer solutions based on IOP321/331 are
perhaps 2x or 3x as fast.
RAID0 transfer rates are somewhat faster still, so indeed the RAID5
crunching is a bit of a bottleneck, rather than the PCI bus.

Regarding the "PCI switching between different controllers":

These IOP chips also use PCI as their primary and secondary
bus (dual-ported design) - the IOP302/303 have PCI66@64, the
IOP321/331 have PCI-X@133. That's 533 MBps or 1.066 GBps of
theoretical bandwidth, respectively. Compare that to 50 or 150 MBps.
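
(The arithmetic is just bus width times clock: 8 bytes x 66.6 MHz
comes to roughly 533 MBps, and 8 bytes x 133 MHz to roughly 1066 MBps.)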

The SCSI or SATA bus controller on a RAID card is usually a single
chip. This design style saves package count; up to 8-channel
controllers are available. Sometimes there are two chips - it appears
that there's no other way to achieve 16 SATA channels.

I don't believe that multiplexing PCI transactions among two or four
discrete controller chips on a PCI bus would add any overhead
compared to multiplexing different channels in a multi-channel
controller IC.
My impression is that different channels, even in a single controller
chip, tend to have separate "register footprints" in the PCI bus
address space, separate state machines etc. Even if not, you can't
merge transfers from two different channels into a single PCI
bus-master transaction, so you have to do the whole transaction setup
"per channel" anyway, and the different channels have to arbitrate for
PCI DMA concurrently anyway.
Different IRQ-sharing setups are possible either way - there are no
straightforward implications.
==> There's no fundamental difference between a single-chip controller
and four separate chips. In this respect, a PC system is not much
different from an IOP-based RAID controller.

I guess I've wandered way off topic here :)

I'm running a software-based mirror (under Linux 2.4 => "md" devices)
on a small fileserver and I have to say I'm completely satisfied.
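
(For the record, setting such a mirror up is a one-liner with mdadm -
the device names below are just an example, of course:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
cat /proc/mdstat

On older installs you'd do the same with an /etc/raidtab entry and
mkraid instead.)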

Frank Rysanek
 

Arno Wagner

Previously Frantisek Rysanek said:
> another interesting point on your part :)
> By the output of ldd, it seems to me that aaccli relies on ncurses.
> I believe that the handling of terminal types is done by ncurses,
> rather than by aaccli directly.
> It's indeed a deficiency in aaccli that it can't detect a raw
> file/pipe on its output and avoid curses altogether, the way some
> better-behaved software does.
> You made me try this on my own.
> I've made a simple script containing just "open aac0" and "exit".
> If I ran "aaccli <sample.scp >output.txt" under TERM=linux, I could
> indeed see some escape sequences in the output.
> If I did "export TERM=dumb" though, the output seemed to lack all
> escape sequences, except for the terminal-style EOLs (just CR == \r).
> Nonetheless, it seems that curses translates a "clear screen" into
> several hundred space characters (0x20) ahead of the useful data, so
> there *is* some formatting to crunch if you want to process this
> output with Perl or something.

Well, then you seem to have a more advanced version than I have.
With TERM=dumb I could not even get output...
> Maybe I should try some more complex aaccli commands to learn
> how messy this can get :) Backspace characters and such...

Pretty messy. I actually began making a Perl wrapper before
I had seen enough crashes to stop using the controller.

> [...]
> The SCSI or SATA bus controller on a RAID card is usually a single
> chip. This design style saves package count; up to 8-channel
> controllers are available. Sometimes there are two chips - it appears
> that there's no other way to achieve 16 SATA channels.
> I don't believe that multiplexing PCI transactions among two or four
> discrete controller chips on a PCI bus would add any overhead
> compared to multiplexing different channels in a multi-channel
> controller IC.

Possible. As I said, I did not do extensive benchmarking.
> My impression is that different channels, even in a single controller
> chip, tend to have separate "register footprints" in the PCI bus
> address space, separate state machines etc. Even if not, you can't
> merge transfers from two different channels into a single PCI
> bus-master transaction, so you have to do the whole transaction setup
> "per channel" anyway, and the different channels have to arbitrate for
> PCI DMA concurrently anyway.
> Different IRQ-sharing setups are possible either way - there are no
> straightforward implications.
> ==> There's no fundamental difference between a single-chip controller
> and four separate chips. In this respect, a PC system is not much
> different from an IOP-based RAID controller.
> I guess I've wandered way off topic here :)

This is Usenet, you are welcome to wander ;-)
> I'm running a software-based mirror (under Linux 2.4 => "md" devices)
> on a small fileserver and I have to say I'm completely satisfied.

I have several larger servers with RAID5 and RAID1 and my normal
workstations with RAID1, and I am also completely satisfied with
the "md" driver. After the experience with the Adaptec attempt
at a SATA RAID controller, I guess I will stay with software
RAID for some time.

Arno
 
