SCSI vs SATA High-Perf


lmanna

Hello all,

Which of the following two architectures would you choose for a
high-perf NFS server in a cluster environment? Most of our data (~80%) is
small (< 64 KB) files. Reads and writes are similar in volume and mostly random
in nature:

Architecture 1:
Tyan 2882
2xOpteron 246
4 GB RAM
2 x 80 GB SATA (system)
2 x 12-way 3ware cards
24 x 73 GB 10k RPM Western Digital Raptors
Software RAID 10 on Linux 2.6.x
XFS

Architecture 2:
Tyan 2881 with Dual U-320 SCSI
2xOpteron 246
4 GB RAM
2 x 80 GB SATA (system)
12 x 146 GB Fujitsu 10k SCSI
Software RAID 10 on Linux
XFS

The price for both systems is almost the same. Considerations:

- Number of Spindles: Solution 1 looks like it might have an edge here
for small reads and writes, simply because it has twice as
many spindles.

- PCI Bus Saturation: Solution 1 also appears to have an edge in case
we use large sequential reads. Solution 2 would be limited by the dual
SCSI bus bandwidth of 640 MB/s (2 x 320 MB/s). I doubt we would ever reach
that level of bandwidth in any random-read or random-write situation, and in
our small random file scenario I think both systems would perform equally. Any
comments ?

- MTBF: Solution 2 has a definite edge. Some numbers:

MTBF1 = 1 / ( 24/1,200,000 + 2/1,000,000 ) = 45,454.54 hours

( Raptor MTBF = 1,200,000 hours; 3ware MTBF = 1,000,000 hours )

MTBF2 = 1 / ( 12/1,200,000 ) = 100,000 hours

Not surprisingly, Solution 2 is twice as reliable. This doesn't take
into account the novelty of the SATA Raptor drive and the proven track
record of the SCSI solution. In any case, comments on this MTBF point
are welcome.
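
For reference, here is the arithmetic behind those two figures as a small
Python sketch. It is a plain series model (expected time until the first
failure of any component, vendor MTBF figures taken at face value) and says
nothing about RAID 10 actually surviving a drive failure:

    # Series-system MTBF: expected time until the FIRST component failure.
    # This mirrors the calculation above; it does not model RAID 10 redundancy.
    def series_mtbf(components):
        # components: list of (count, mtbf_hours) pairs
        return 1.0 / sum(count / mtbf for count, mtbf in components)

    # Solution 1: 24 Raptors (1.2M h) + 2 3ware cards (1.0M h)
    mtbf1 = series_mtbf([(24, 1_200_000), (2, 1_000_000)])
    # Solution 2: 12 SCSI drives (1.2M h), controllers not counted
    mtbf2 = series_mtbf([(12, 1_200_000)])

    print(round(mtbf1), round(mtbf2))   # ~45455 and 100000 hours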

- RAID Performance: I am not sure about this. In principle both
solutions should behave the same since we are using SW RAID, but I don't
know how the fact that SCSI is a shared bus with protocol overhead would
affect RAID performance. What do you think? Any ideas as to how to spread the
RAID 10 across a dual U-320 SCSI scenario?
SATA being point-to-point appears to have an edge again, but your
thoughts are welcome.

- Would I get a considerable edge if I used 15k SCSI drives? I am not
totally convinced that SATA is our best choice. Any help is greatly
appreciated.

Many thanks,

Parsifal
 

Arno Wagner

In comp.sys.ibm.pc.hardware.storage lmanna wrote:
[ Stuff Deleted ]

One thing you can be relatively sure of is that the SCSI controller
will work well with the mainboard. Also Linux has a long history of
supporting SCSI, while SATA support is new and still being worked on.

For your access scenario, SCSI will also be superior, since SCSI
has supported command queuing for a long time.

I also would not trust the Raptors as much as I would trust SCSI drives.
The SCSI manufacturers know that SCSI customers expect high
reliability, while the Raptor is more a poor man's race car.

One more argument: You can put Config 2 on a 550W (redundant)
PSU, while Config 1 will need something significantly larger,
also because SATA does not support staggered start-up, while
SCSI does. Is that already factored into the cost?

Arno
 

J. Clarke

Arno said:
One thing you can be relatively sure of is that the SCSI controller
will work well with the mainboard. Also Linux has a long history of
supporting SCSI, while SATA support is new and still being worked on.

If he's using 3ware host adapters then "SATA support" is not an
issue--that's handled by the processor on the host adapter and all that the
Linux driver does is give commands to that processor.

Do you have any evidence to present that suggests that 3ware RAID
controllers have problems with any known mainboard?
For your access scenario, SCSI will also be superior, since SCSI
has supported command queuing for a long time.

I'm sorry, but it doesn't follow that because SCSI has supported command
queuing for a long time that the performance will be superior.
I also would not trust the Raptors as I would trust SCSI drives.
The SCSI manufacturers know that SCSI customers expect high
reliability, while the Raptor is more a poor man's race car.

Actually a Raptor is an enterprise SCSI drive with an SATA chip on it
instead of a SCSI chip on it. The Raptors aren't "poor man's" _anything_,
they're Western Digital's enterprise drive. WD has chosen to take a risk
and make their enterprise line with SATA instead of SCSI. Are you
suggesting that WD is incapable of producing a reliable drive?

If it was a Seagate Cheetah with an SATA chip would you say that it was
going to be unreliable?
One more argument: You can put Config 2 on a 550W (redundant)
PSU, while Config 1 will need something significantly larger,
also because SATA does not support staggered start-up, while
SCSI does. Is that already factored into the cost?

Uh, SATA requires one host interface for each drive. Whatever processor is
controlling those host interfaces can most assuredly stagger the startup if
that is an issue.

Not saying that SCSI is not the superior solution but the reasons given seem
to be ignoring the fact that a "smart" SATA RAID controller is being
compared with a "dumb" SCSI setup.
 

Peter

lmanna wrote:
[ Stuff Deleted ]

- Number of Spindles: Solution 1 looks like it might have an edge here
for small sequential reads and writes since there are just twice as
many spindles.

Yes, but Raptors have 226 IO/s vs. Fujitsu 269 IO/s.
- PCI Bus Saturation: Solution 1 also appears to have an edge in case
we use large sequential reads. Solution 2 would be limited by the dual
SCSI bus bandwidth of 640 MB/s. I doubt we would ever reach that level of
bandwidth in any random-read or random-write situation and in our small
random file scenario I think both systems would perform equally. Any
comments ?

You are designing for NFS, right? Don't forget that network IO and
SCSI IO are on the same PCI-X 64bit 100MHz bus. Therefore available
throughput will be 800MB/s * 0.5 = 400MB/s

In random operations, if you get 200 IO/s from each SCSI disk,
you will have 12disks * 200 IO/s * 64KB = 154MB/s
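
Putting rough numbers on both configurations (a back-of-the-envelope sketch;
the per-disk IO/s figures are the ones quoted in this thread, and real
throughput will depend on queue depth, caching and the NFS layer):

    # Back-of-the-envelope random-I/O throughput vs. the shared PCI-X budget.
    IO_SIZE = 64 * 1024      # bytes per transfer (the ~64 KB small-file case)
    BUS_BUDGET = 400e6       # ~half of a 64-bit/100 MHz PCI-X bus (NIC shares it)

    def random_throughput(disks, iops_per_disk):
        return disks * iops_per_disk * IO_SIZE     # bytes/second

    scsi = random_throughput(12, 200)    # ~157 MB/s, i.e. the ~154 MB/s above
    sata = random_throughput(24, 226)    # ~355 MB/s with the Raptor figure

    for name, bw in (("12 x SCSI", scsi), ("24 x Raptor", sata)):
        print(name, round(bw / 1e6), "MB/s,",
              round(100 * bw / BUS_BUDGET), "% of the bus budget")
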
- MTBF: Solution 2 has a definite edge. Some numbers:

MTBF1 = 1 / ( 24/1,200,000 + 2/1,000,000 ) = 45,454.54 hours

( Raptor MTBF = 1,200,000 hours; 3ware MTBF = 1,000,000 hours )

MTBF2 = 1 / ( 12/1,200,000 ) = 100,000 hours

How did you calculate your total MTBF???
Your calcs may be good for RAID0, but not for RAID10.

Assuming a 5-year period, the reliability of a 1,200,000-hour-MTBF disk
is about 0.964.

For RAID10 (a stripe of mirrored drives) in a 6x2 configuration the
equivalent MTBF will be 5,680,000 hours.

Assuming a 5-year period, the reliability of a 1,000,000-hour-MTBF disk
is about 0.957.

For RAID10 (a stripe of mirrored drives) in a 12x2 configuration the
equivalent MTBF will be 2,000,000 hours.

For a single RAID1 of the 1,000,000-hr-MTBF drives the
equivalent MTBF will be 23,800,000 hours.
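
In case the derivation is not obvious, these figures follow from a simple
no-repair model, sketched below. This is only an approximation: failures are
assumed independent and exponential, and nothing gets rebuilt during the
5-year mission, so with hot spares and prompt rebuilds the real numbers look
better.

    from math import exp, log

    MISSION = 5 * 8760                        # 5 years, in hours

    def raid10_equiv_mtbf(disk_mtbf, pairs, t=MISSION):
        r_disk = exp(-t / disk_mtbf)          # chance one disk survives t
        p_pair_dead = (1 - r_disk) ** 2       # both halves of a mirror fail
        r_array = (1 - p_pair_dead) ** pairs  # every mirrored pair survives
        return -t / log(r_array)              # convert back to an "MTBF"

    print(round(raid10_equiv_mtbf(1_200_000, 6)))    # ~5,680,000  (6x2 SCSI)
    print(round(raid10_equiv_mtbf(1_000_000, 12)))   # ~2,000,000  (12x2, using
                                                     #  the 1.0M-hour figure)
    print(round(raid10_equiv_mtbf(1_000_000, 1)))    # ~23,800,000 (single RAID1)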

BTW, 3ware controllers are PCI 2.2 64-bit 66MHz.
I can't believe that their MTBF is so low (1,000,000 hr).
If you lose one, your RAID will probably go down too.
Not surprisingly Solution 2 is twice as reliable. This doesn't take
into account the novelty of the SATA Raptor drive and the proven track
record of the SCSI solution. In any case comments on this MTBF point
are welcomed.

- RAID Performance: I am not sure about this. In principle both
solution should behave the same since we are using SW RAID but I don't
know how the fact that SCSI is a bus with overhead would affect RAID
performance ? What do you think ? Any ideas as to how to spread the
RAID 10 in a dual U 320 SCSI Scenario ?
SATA being point-to-point appears to have an edge again, but your
thoughts are welcome.

- Would I get a considerable edge if I used 15k SCSI Drives ?

In theory up to 40%.
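
Rough reasoning, with assumed (catalogue-typical, not measured) seek times:
going from 10k to 15k RPM cuts rotational latency from 3 ms to 2 ms and
usually comes with a somewhat faster seek, so per-disk random IO/s improves
by roughly a third:

    # Per-disk random IO/s estimated from seek + rotational latency.
    def iops(avg_seek_ms, rpm):
        rotational_latency_ms = 0.5 * 60000.0 / rpm   # half a revolution
        return 1000.0 / (avg_seek_ms + rotational_latency_ms)

    ten_k = iops(4.9, 10000)        # ~127 IO/s (assumed 4.9 ms seek)
    fifteen_k = iops(3.8, 15000)    # ~172 IO/s (assumed 3.8 ms seek)
    print(round(100 * (fifteen_k / ten_k - 1)), "% improvement")   # ~36%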
 

lmanna

Arno said:
In comp.sys.ibm.pc.hardware.storage (e-mail address removed) wrote:
One thing you can be relatively sure of is that the SCSI controller
will work well with the mainboard. Also Linux has a long history of
supporting SCSI, while SATA support is new and still being worked on.

For your access scenario, SCSI will also be superior, since SCSI
has supported command queuing for a long time.

I also would not trust the Raptors as I would trust SCSI drives.
The SCSI manufacturers know that SCSI customers expect high
reliability, while the Raptor is more a poor man's race car.


My main concern is their novelty, rather than their performance. Call
it a hunch but it just doesn't feel right to risk it while there's a
proven solid SCSI solution for the same price.
One more argument: You can put Config 2 on a 550W (redundant)
PSU, while Config 1 will need something significantly larger,

Thanks for your comments. I forgot about the power. Definitely worth
considering, since we're getting 3 of these servers and UPS sizing
should also factor into the cost equation.

also because SATA does not support staggered start-up, while
SCSI does. Is that already factored into the cost?

This I don't follow, what's staggered start-up ?

Parsifal
 

Curious George

This I don't follow, what's staggered start-up ?

It is a feature that staggers the spin-up of each disk sequentially,
leaving enough time between disk starts to prevent overloading the
power supply. I think he meant that, because he believed SATA does not
do this, you would need a beefier power supply than you would with the
SCSI setup to avoid problems on power-up.

AFAIK delay start or staggered spin-up (whatever you want to call it)
is available on SATA, but it is controller specific (& most don't
support it) and it is not a standard feature like the delay-start &
remote-start jumpers on SCSI drives & backplanes.
 

lmanna

Peter wrote:
[ Stuff Deleted ]
Yes, but Raptors have 226 IO/s vs. Fujitsu 269 IO/s.

Yeap ! I like those Fujitsus and they are cheaper than the Cheetahs.
You are designing for NFS, right? Don't forget that network IO and
SCSI IO are on the same PCI-X 64bit 100MHz bus. Therefore available
throughput will be 800MB/s * 0.5 = 400MB/s

Uhmm... you're right. I guess I'll place a dual e1000 on the other
PCI-X channel. See:

ftp://ftp.tyan.com/datasheets/d_s2881_100.pdf

In random operations, if you get 200 IO/s from each SCSI disk,
you will have 12disks * 200 IO/s * 64KB = 154MB/s


How did you calculate your total MTBF???
Your calcs may be good for RAID0, but not for RAID10.

Thanks for the correction. You're right again.
Assuming a 5-year period, the reliability of a 1,200,000-hour-MTBF disk
is about 0.964.

For RAID10 (a stripe of mirrored drives) in a 6x2 configuration the
equivalent MTBF will be 5,680,000 hours.

Assuming a 5-year period, the reliability of a 1,000,000-hour-MTBF disk
is about 0.957.

For RAID10 (a stripe of mirrored drives) in a 12x2 configuration the
equivalent MTBF will be 2,000,000 hours.

For a single RAID1 of the 1,000,000-hr-MTBF drives the
equivalent MTBF will be 23,800,000 hours.

Excuse my ignorance, but how did you get these numbers? In any case
your numbers show that the MTBF of Solution 1 is about half that of
Solution 2.
BTW, 3ware controllers are PCI 2.2 64-bit 66MHz.
I can't believe that their MTBF is so low (1,000,000 hr).
If you lose one, your RAID will probably go down too.

I thought it was a bit too low too but there was no info on the 3ware
site.
In theory up to 40%.

In reality though I would say 25-35%

Thanks !
 

lmanna

J. Clarke said:
[ Stuff Deleted ]

Not saying that SCSI is not the superior solution but the reasons given seem
to be ignoring the fact that a "smart" SATA RAID controller is being
compared with a "dumb" SCSI setup.


Good point. Would the SCSI performance improve if I used a dual U-320
super-duper SCSI RAID card? Since the RAID was going to be in SW
anyway, I didn't see the reason to get such a card. I had no other
choice with the SATA solution, though.

Parsifal
 

Rita Ä Berkowitz

Hello all,

Which of the following two architectures would you choose for a
high-perf NFS server in a cluster environment? Most of our data (~80%) is
small (< 64 KB) files. Reads and writes are similar in volume and mostly
random in nature:

I wouldn't use either one of them since your major flaw would be using an
Opteron when you should only be using Xeon or Itanium2 processors. Now, if
you are just putting an MP3 server in the basement of your home for
light-duty work you can squeak by with the Opterons. As for the drives, I
would only use SCSI in the system you mention.



Rita
 

Arno Wagner

In comp.sys.ibm.pc.hardware.storage J. Clarke said:
Arno Wagner wrote:
In said:
Hello all,
[...]
One thing you can be relatively sure of is that the SCSI controller
will work well with the mainboard. Also Linux has a long history of
supporting SCSI, while SATA support is new and still being worked on.
If he's using 3ware host adapters then "SATA support" is not an
issue--that's handled by the processor on the host adapter and all that the
Linux driver does is give commands to that processor.
Do you have any evidence to present that suggests that 3ware RAID
controllers have problems with any known mainboard?

No. I was mostly thinking of SMART support, which is not there
for SATA on Linux (unless you use the old IDE driver). Normal disk
access works fine in my experience.
I'm sorry, but it doesn't follow that because SCSI has supported command
queuing for a long time that the performance will be superior.

Actually for small reads command queuing helps massively. The
"has been available for a long time" just means that it will work.
Actually a Raptor is an enterprise SCSI drive with an SATA chip on it
instead of a SCSI chip on it. The Raptors aren't "poor man's" _anything_,
they're Western Digital's enterprise drive. WD has chosen to take a risk
and make their enterprise line with SATA instead of SCSI. Are you
suggesting that WD is incapable of producing a reliable drive?

I am suggesting that WD's strategy is suspicious. It may be up
to SCSI standards, but I have doubts. SATA is far too new to compete
with SCSI on reliability and compatibility. And SCSI has a lot of
features that have been working for decades and are still being implemented
or planned for SATA.
If it was a Seagate Cheetah with an SATA chip would you say that it was
going to be unreliable?

At least not as reliable as SCSI. The whole SATA technology is not as
mature as SCSI is. It is also not as well designed.
Uh, SATA requires one host interface for each drive. Whatever processor is
controlling those host interfaces can most assuredly stagger the startup if
that is an issue.

The problem is that most (all?) SATA disks start themselves, while
in SCSI that is usually a jumper option. Typical options are auto-start,
auto-start with a selectable delay, and no auto-start. On SATA
you would have to do staggered power or the like to get the same
effect.
Not saying that SCSI is not the superior solution but the reasons
given seem to be ignoring the fact that a "smart" SATA RAID
controller is being compared with a "dumb" SCSI setup.

Not really. It is more a relatively new, supposedly smart technology
against a proven, older, reliable, known-to-be-smart technology.
SCSI targets are really quite smart, while SATA targets are not too
bright. The 3ware controllers may help some, but I doubt they
can do that much.

In addition the kernel knows how to talk to SCSI targets, while SATA is
still in flux. Data transfer on SATA works, but everything else is
still being worked on, like SMART support.

The RAID logic is pretty smart in both cases, since it is done by the
kernel, but with this many disks you _will_ want to
poll defect lists/counts, drive temperature and the like
periodically to get early warnings.
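
For example, something along these lines run from cron would do it, assuming
smartctl can actually reach the drives on the controller in question (which
is exactly the open question for SATA here); the device names are
hypothetical:

    # Periodic health poll: temperature and reallocated-sector counts
    # via smartctl -A (needs root).
    import subprocess

    DRIVES = ["/dev/sda", "/dev/sdb"]              # hypothetical device names
    WATCH = ("Temperature_Celsius", "Reallocated_Sector_Ct")

    for dev in DRIVES:
        out = subprocess.run(["smartctl", "-A", dev],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            fields = line.split()
            if len(fields) >= 10 and fields[1] in WATCH:
                print(dev, fields[1], "raw =", fields[9])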

Arno
 

Arno Wagner

Previously said:
J. Clarke said:
Arno Wagner wrote: [...]
Uh, SATA requires one host interface for each drive. Whatever processor is
controlling those host interfaces can most assuredly stagger the startup if
that is an issue.

Not saying that SCSI is not the superior solution but the reasons given seem
to be ignoring the fact that a "smart" SATA RAID controller is being
compared with a "dumb" SCSI setup.

Good point. Would the SCSI performance improve if I used a dual U-320
super-duper SCSI RAID card? Since the RAID was going to be in SW
anyway, I didn't see the reason to get such a card. I had no other
choice with the SATA solution, though.

Don't think so. Your set-up will spend most time waiting for seeks
and rotational latency anyways IMO. Maybe if you put the RAID1
mirrors on separate channels that would bring some write speed
improvements.
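
To make that concrete, here is a sketch of one way to lay the mirrors out
across the two channels with Linux md. The device names are made up and the
script only prints the mdadm commands one might use (mdadm's native
--level=10 would work just as well):

    # RAID 10 as six RAID1 pairs, each pair split across the two U320
    # channels, then a RAID0 stripe over the pairs.
    CHANNEL_A = ["/dev/sd" + c for c in "abcdef"]   # hypothetical: channel A
    CHANNEL_B = ["/dev/sd" + c for c in "ghijkl"]   # hypothetical: channel B

    mirrors = []
    for i, (a, b) in enumerate(zip(CHANNEL_A, CHANNEL_B)):
        mirrors.append("/dev/md%d" % i)
        print("mdadm --create /dev/md%d --level=1 --raid-devices=2 %s %s"
              % (i, a, b))

    print("mdadm --create /dev/md6 --level=0 --raid-devices=%d %s"
          % (len(mirrors), " ".join(mirrors)))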

Arno
 

Arno Wagner

In said:
My main concern is their novelty, rather than their performance. Call
it a hunch but it just doesn't feel right to risk it while there's a
proven solid SCSI solution for the same price.
Thanks for your comments. I forgot about the Power. Definitely worth
considering since we're getting 3 of these servers and UPS sizing
should also play in the cost equation.

Power is critical to reliability. If you have a PSU with, say
50% normal and 70% peak load, that is massively more reliable than
one with 70%/100%. Also many PSUs die on start-up, since e.g.
disks draw their peak currents on spindle start.
This I don't follow, what's staggered start-up ?

You can jumper most (all?) SCSI drives to delay their spindle start.
Spindle start draws a massive amount of power for some
seconds, maybe as much as 2-3 times the peaks you see during operation.

SCSI drives can be jumpered to spin up on power-on or on receiving
a start-unit command. Some also support delays. You should be
able to set the SCSI controller to issue the start-unit command
to the drives with, say, 5 seconds delay between each unit or so.
This massively reduces power drawn on start-up.

SATA drives all (?) do spin-up on power-on. It is a problem
when you have many disks. The PSU needs the reserves to deal
with this worst case.
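
Illustrative 12 V budget only; the per-drive currents below are assumptions
of roughly the right order of magnitude, not datasheet values:

    # Why simultaneous spin-up hurts: a rough 12 V power budget for 24 drives.
    DRIVES = 24
    SPINUP_AMPS_12V = 2.0    # assumed peak draw per drive while spinning up
    IDLE_AMPS_12V = 0.6      # assumed draw per drive once spinning

    all_at_once = DRIVES * SPINUP_AMPS_12V * 12                       # ~576 W
    staggered = (SPINUP_AMPS_12V + (DRIVES - 1) * IDLE_AMPS_12V) * 12 # ~190 W

    print("simultaneous spin-up: ~%d W on the 12 V rail" % all_at_once)
    print("staggered spin-up:    ~%d W on the 12 V rail" % staggered)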

Arno
 

Arno Wagner

I wouldn't use either one of them since your major flaw would be using an
Opteron when you should only be using Xeon or Itanium2 processors.

Sorry, but that is BS. Itanium is mostly dead technology and not
really developed anymore. It is also massively over-priced. Xeons are
sort of not-quite 64 bit CPUs, that have the main characteristic of
being Intel and expensive.

I also know of no indications (except marketing BS by Intel) that
Opterons are unreliable.

Arno
 

Rita Ä Berkowitz

Arno said:
Sorry, but that is BS. Itanium is mostly dead technology and not
really developed anymore. It is also massively over-priced. Xeons are
sort of not-quite 64 bit CPUs, that have the main characteristic of
being Intel and expensive.

You need to catch up with the times. You are correct about the original
Itaniums being dogs, but I'm talking about the new Itanium2 processors,
which are also 64-bit. As for Intel being expensive, you get what you pay
for. The new Itanium2 systems are SWEEEEEEET!
I also know of no indications (except marketing BS by Intel) that
Opterons are unreliable.

It's being proven in the field daily. You simply don't see Opteron-based
solutions being deployed by major commercial and governmental entities.
True, there are a few *novelty* systems that use many Opteron processors,
but they are more a curiosity than the mainstream norm. That said, if I
wanted a dirt-cheap gaming system I would opt for an Opteron-based SATA box.





Rita
 

J. Clarke

Arno said:
In comp.sys.ibm.pc.hardware.storage J. Clarke
Arno Wagner wrote:
In comp.sys.ibm.pc.hardware.storage (e-mail address removed) wrote:
Hello all, [...]
One thing you can be relatively sure of is that the SCSI controller
will work well with the mainboard. Also Linux has a long history of
supporting SCSI, while SATA support is new and still being worked on.
If he's using 3ware host adapters then "SATA support" is not an
issue--that's handled by the processor on the host adapter and all that
the Linux driver does is give commands to that processor.
Do you have any evidence to present that suggests that 3ware RAID
controllers have problems with any known mainboard?

No. I was mostly thinking of SMART support, which is not there
for SATA on Linux (unless you use the old IDE driver). Normal disk
access works fine in my experience.

Actually, that would be a function of the 3ware drivers. With a 3ware host
adapter you do not use the SATA drivers, you use drivers specific to 3ware,
and the 3ware drivers _do_ support SMART under Linux.
Actually for small reads command queuing helps massively. The
"has been available for a long time" just means that it will work.

So where is the evidence that SCSI command queuing works better for small
reads than does SATA command queuing? In the absence of other evidence one
might assume that SATA command queuing benefits from "lessons learned" with
SCSI.
I am suggesting that WDs strategy is suspicious.

Why? They see SATA as the coming thing. Are you suggesting that Western
Digital is incapable of producing a SCSI drive?
It may be up
to SCSI standards, but I have doubts. SATA is far too new to compete
with SCSI on reliability

Reliability in a disk is primarily a function of the mechanical components,
not the interface. It is quite possible to put a bridge-chip on a Cheetah
that carries the existing SCSI interface into an SATA interface. Would
that drive then be less reliable than the Cheetah that was not plugged into
a bridge chip? Or are you suggesting that the state of the art in the
manufacture of integrated circuits is such that for some reason a chip
containing the circuits that support SATA is more likely to fail in service
than one that contains the circuits that support SCSI?
and compatibility. And SCSI has a lot of
features working for decades now that are still being implemented
or are being planned for SATA.

Such as?
At least not as reliable as SCSI. The whole SATA technology is not as
mature as SCSI is. It is also not as well designed.

In what specific ways?
The problem is that most (all?) SATA disks start themselves,

Raptors have a jumper that selects startup in full power mode or startup in
standby, intended specifically to address this issue.
while
in SCSI that is usually a jumper-option. Typical is auto-start,
auto-start with a selectable delay and no auto-start. On SATA
you would have to do staggered power or the like to get the same
effect.

Just tell the drive to come out of standby whenever you are ready.
Not really. It is more a relatively new, supposedly smart technology
against a proven, older, reliable, known-to-be-smart technology.
SCSI targets are really quite smart, while SATA targets are not too
bright. The 3ware controllers may help some, but I doubt they
can do that much.

You have made enough statements about SATA that are simply not true that I
wonder at the validity of your assessment.
In addition the kernel knows how to talk to SCSI targets, while SATA is
still in flux. Data transfer on SATA works, but everything else is
still being worked on, like SMART support.

So let's see, you'd favor the use of a brand new LSI Logic SCSI RAID
controller over a brand new LSI Logic SATA RAID controller because "the
kernel knows how to talk to SCSI targets" despite the fact that both
devices use brand new drivers?

You're assuming that all contact with drives is via the SCSI or SATA kernel
drivers and not through a dedicated controller with drivers specific to
that controller.
The RAID logic is pretty smart in both cases, since done by the
kernel, but when having this many disks you _will_ want to
poll defect lists/counts, drive temperature and the like
periodically to get early warnings.

With the 3ware host adapter, the RAID logic is ON THE BOARD, _not_ in the
kernel.

The same is true for SATA RAID controllers from LSI Logic, Intel, Tekram,
and several other vendors.
 

J. Clarke

Arno said:
Power is critical to reliability. If you have a PSU with, say
50% normal and 70% peak load, that is massively more reliable than
one with 70%/100%. Also many PSUs die on start-up, since e.g.
disks draw their peak currents on spindle start.



You can jumper most (all?) SCSI drives to delay their spindle start.
Spindle start draws a massive amount of power for some
seconds, maybe as much as 2-3 times the peaks you see during operation.

SCSI drives can be jumpered to spin up on power-on or on receiving
a start-unit command. Some also support delays. You should be
able to set the SCSI controller to issue the start-unit command
to the drives with, say, 5 seconds delay between each unit or so.
This massively reduces power drawn on start-up.

SATA drives all (?) do spin-up on power-on. It is a problem
when you have many disks. The PSU needs the reserves to deal
with this worst case.

Would you do the world a favor and actually take ten minutes to research
your statements before you make them? All SATA drives sold as "enterprise"
drives have the ability to perform staggered spinup.
 

Arno Wagner

In comp.sys.ibm.pc.hardware.storage "Rita Ä Berkowitz said:
Arno Wagner wrote:
You need to catch up with the times. You are correct about the original
Itaniums being dogs, but I'm talking about the new Itanium2 processors,
which are also 64-bit. As for Intel being expensive, you get what you pay
for. The new Itanium2 systems are SWEEEEEEET!

You recommend a _new_ product for its reliability????
I don't think I need to comment on that.
It's being proven in the field daily. You simply don't see Opteron-based
solutions being deployed by major commercial and governmental entities.

Which is a direct result of Intel's FUD and behind-the-scenes politics.
In order to prove that something is unreliable it has to be used and
fail. It being not used does not indicate unreliability. It just
indicates "nobody gets fired for buying Intel".

So nothing is actually proven about reliability (or lack of)
of Opterons in the field.
True, there are a few *novelty* systems that use many Opteron
processors, but they are more a curiosity than the mainstream
norm. That said, if I wanted a dirt-cheap gaming system I would opt
for an Opteron based SATA box.

That is certainly true. As always the question is to get the
right balance for a specific application. If you have the money
to buy the most expensive solution _and_ the clout to make the
vendor not just rip you off, you certainly will get an adequate
solution. But you will pay too much. Not all of us can afford
to buy stuff the way the military does.

Arno
 

Arno Wagner

Previously J. Clarke said:
Arno Wagner wrote:
In comp.sys.ibm.pc.hardware.storage J. Clarke
Arno Wagner wrote:
In comp.sys.ibm.pc.hardware.storage (e-mail address removed) wrote:
Hello all, [...]
One thing you can be relatively sure of is that the SCSI controller
will work well with the mainboard. Also Linux has a long history of
supporting SCSI, while SATA support is new and still being worked on.
If he's using 3ware host adapters then "SATA support" is not an
issue--that's handled by the processor on the host adapter and all that
the Linux driver does is give commands to that processor.
Do you have any evidence to present that suggests that 3ware RAID
controllers have problems with any known mainboard?

No. I was mostly thinking of SMART support, which is not there
for SATA on Linux (unless you use the old IDE driver). Normal disk
access works fine in my experience.
Actually, that would be a function of the 3ware drivers. With a 3ware host
adapter you do not use the SATA drivers, you use drivers specific to 3ware,
and the 3ware drivers _do_ support SMART under Linux.

And, does that work reliably and with the usual Linux tools,
i.e. smartctl? It would kind of surprise me, since libata does
not have SMART support at all at the moment, because the ATA
passthrough opcodes have only very recently been defined by the
SCSI T10 committee.
So where is the evidence that SCSI command queuing works better for small
reads than does SATA command queuing?

At the moment there is no SATA command queuing under Linux, as you
can quickly discover by looking at the Serial ATA (SATA) Linux
software status report page here:

http://linux.yyz.us/sata/software-status.html

I was not saying that SATA queuing is worse. I was saying (or intended to)
that SCSI has command queuing under Linux while SATA does not currently.

[...]
Why? They see SATA as the coming thing. Are you suggesting that Western
Digital is incapable of producing a SCSI drive?

I am suggesting that WD is trying to create a market between ATA
and SCSI by claiming to be as good as SCSI at SATA prices. If
it sounds too good to be true, it probably is.
Reliability in a disk is primarily a function of the mechanical components,
not the interface.

It is a driver and software question with newer interfaces as well.
I had numerous problems with SATA under Linux.

[...]
Raptors have a jumper that selects startup in full power mode or startup in
standby, intended specifically to address this issue.

Good. And do the 3ware controllers support staggered starts?
Just tell the drive to come out of standby whenever you are ready.

That should be something the controller and the drive do. If
the OS does it, it can fail in numerous interesting ways.
You have made enough statements about SATA that are simply not true that I
wonder at the validity of your assessment.

Of course you are free to do that. But I have 4 TB of RAIDed storage
under Linux, about half of which is SATA. And I did run into the problems
I describe here.
So let's see, you'd favor the use of a brand new LSI Logic SCSI RAID
controller over a brand new LSI Logic SATA RAID controller because "the
kernel knows how to talk to SCSI targets" despite the fact that both
devices use brand new drivers?

You are talking about the LL drivers. There is an SCSI abstraction
layer in the kernel as well as an SATA abstraction layer. The former
is stable, proven and full-featured. The latter is pretty basic at
the moment.

To quote the maintainer:

Basic Serial ATA support

The "ATA host state machine", the core of the entire driver, is
considered production-stable.

The error handling is very simple, but at this stage that is an
advantage. Error handling code anywhere is inevitably both complex and
sorely under-tested. libata error handling is intentionally
simple. Positives: Easy to review and verify correctness. Never data
corruption. Negatives: if an error occurs, libata will simply send the
error back the block layer. There are limited retries by the block
layer, depending on the type of error, but there is never a bus reset.
You're assuming that all contact with drives is via the SCSI or SATA kernel
drivers and not through a dedicated controller with drivers specific to
that controller.

See above. Also if specific drivers are needed for specific
hardware, they tend to be less reliable because the user-base is
smaller.
With the 3ware host adapter, the RAID logic is ON THE BOARD, _not_ in the
kernel.

Not in the set-up of the OP. You did read that, didn't you?

Seems to me we have a misunderstanding here. If the OP
wanted to do Hardware-RAID the assessment would look
different.

Arno
 

Rita Ä Berkowitz

Arno said:
You recommend a _new_ product for its reliability????
I don't think I need to comment on that.

Oh please, come on now! This is like saying BMW introduces a new car this
year and it is going to be a failure in the world for using cutting edge
technology that hasn't a single shred of old technology behind it. When you
lift the hood you still see the same old internal combustion engine that
they have used for the last 50 years. The difference is they improved
manufacturing processes and materials to make the product better. They
didn't redesign the wheel for the sake of doing so.

Take a new Itanium2 box for a test drive and you'll open your eyes.
Which is a direct result of Intels FUD and behind-the-scenes politics.
In order to prove that something is unreliable it has to be used and
fail. It being not used does not indicate unreliability. It just
indicates "nobody gets fired for buying Intel".

Then again, if the box were being used in environments that are life-dependent,
such as on the battlefield, reliability is paramount over cost.
Intel has a proven track record for reliability in the field. I would feel
safe using an Intel solution over an AMD any day of the week.
So nothing is actually proven about reliability (or lack of)
of Opterons in the field.

Market share has a great way of defining reliability. It would seem that
the major players don't feel comfortable betting their livelihood on AMD.
That is certainly true. As always the question is to get the
right balance for a specific application. If you have the money
to buy the most expensive solution _and_ the clout to make the
vendor not just rip you off, you certainly will get an adequate
solution. But you will pay too much. Not all of us can afford
to buy stuff the way the military does.

Define "pay too much"? Most people and I would rather pay too much upfront
instead of being backended with high maintenance and repair costs, not to
mention the disastrous outcome of total failure. Like I said, you get what
you pay for. If the military would go totally AMD then I would agree with
you. Till that day, AMD is not a processor to be taken seriously. Plus,
their resale value sucks!!!



Rita
 

Bill Todd

Rita Ä Berkowitz wrote:

[nothing very significant]

One really needs hip-boots to wade through the manure of these last few
posts.

1. Opteron systems have reliability comparable to Xeon systems, and if
they lag Itanics by any margin at all it's not by much (Itanics do have
a couple of additional internal RAS features that Opterons and Xeons
lack, but the differences are not major ones).

2. While Intel didn't do as excellent a job of adding 64-bit support to
Xeons as AMD did with AMD64, once again the difference is not a dramatic
one.

3. The first Itanic wasn't just a dog, it was an absolute joke.
McKinley and Madison are much more respectable but still consume
inordinate amounts of power and are in general not performance-leading
products: while the newest Madisons managed to regain a very small lead
in SPECfp over POWER5 that's the only major benchmark they lead in (at
least where the competition has bothered to show up: HP has done a fine
job of carefully selecting specific benchmark niches which lacked such
competition, though it has been a bit embarrassed in cases where it
subsequently appeared), and Itanic often winds up not in second place
but in third or even fourth behind POWER (not just POWER5 but often
behind POWER4+ as well in commercial benchmarks), Opteron, Xeon, and/or
SPARC64 - and for a year or so the top-of-the-line 1.5 GHz Madisons
couldn't even beat the aging and orphaned previous-process-generation
Alpha in SAP SD 2-tier, though they're now a bit ahead of it (this was
the only commercial benchmark HP was willing to allow EV7 to compete in:
it made Itanic look bad, but they needed it to beat the POWER4 score
there).

And that's for benchmarks, where the code has been profiled and
optimized to within an inch of its life. Itanic is more dependent on
such optimization to achieve a given level of performance than its more
flexible out-of-order competition is, and hence falls farther behind
their performance levels in real-world situations where much code is not
so optimized.

4. Nonetheless, Itanic is not an abandoned product. While its eventual
success or failure is still to be determined, Intel is at least
currently still pouring money, engineers, and time into it (though
apparently not at quite the rate it was earlier: in the past year it's
cut a new Itanic chipset from its plans which would have allowed faster
bus speeds and axed a new Itanic core that the transplanted Alpha team
was building for 2007, and what those engineers are now working on may or
may not be Itanic-related).

- bill
 
