Is SCSI still the most reliable?

Peter

> These matters are not as simple as "SCSI is better" or "IDE is better".

Neither is "better". But the old generalization, IDE for the desktop, SCSI
for the server, is still basically true today.

> In ten years, SATA will be ancient technology.

No, SATA will still be around. IDE, introduced by Imprimis (CDC) in 1985,
is still with us today.
 
Rita Ä Berkowitz

Peter said:
> No, SATA will still be around. IDE, introduced by Imprimis (CDC) in
> 1985, is still with us today.

You can look at SATA as nothing more than a venereal wart on technology.
It'll keep coming back like any other STD. SATA will never be taken
seriously in anything other than gaming systems.


Rita
 
J. Clarke

Chuck said:
> You spoke of no calculations in the above paragraph.

Try running the numbers yourself; I doubt you'd believe mine anyway.

> I made no argument, on one side or the other, about the "reliability"
> of SCSI vs. IDE. It's like arguing about what is the "best" ______
> (fill in the blank). There is no correct answer. What is "best" or
> more "reliable" for _me_ isn't necessarily the same for _you_.

Data loss is data loss. The probability of data loss is not subjective.

> I merely pointed out that you were doing the same thing as you were
> accusing Odie of... expressing _your_ opinion based on _your_
> anecdotal experience.

Which I stated clearly that I was doing, so why do you have a problem
with it?

> I'm over thirty years downrange from a college term paper.

Try it anyway. You might find the results instructive. You might also
consider getting a humor transplant.

> But it wouldn't change the fact that the industries that live and die
> by _reliability_, banks and insurance companies, do _not_ use IDE
> drives in their data center servers.

What is at issue is not the fact that they use such drives, but the
reason. You don't seem to be willing to even consider the possibility
that there might be reasons unrelated to the reliability of individual
drives.

> That's a fact, that I know, from personal experience. Not my opinion,
> but a _fact_. B of A, Wachovia, State Farm, Citibank, JP Morgan, Cap
> One, the Federal Reserve, SunTrust, the list goes on and on.

So what? Nobody has disputed this.

> If IT managers felt they could save 5% of their cap ex by switching to
> IDE drives and have the same reliability _and_ performance, they'd do
> it... in a heartbeat.

So? Do the exercise instead of acting like a broken record.

> Neither is "better". But the old generalization, IDE for the desktop,
> SCSI for the server, is still basically true today.

So what? IDE is obsolescent anyway.

> In ten years, SATA will be ancient technology.

SCSI is already far older than SATA will be in ten years. So what?
 
Arno Wagner

Previously "Rita Ä Berkowitz said:
Peter wrote:
You can look at SATA as nothing more than a venereal wart on technology.
It'll keep coming back like any other STD. SATA will never be taken
seriously in anything other than gaming systems.

Not true. We have TBs of research data on SATA, and more on ATA. Of
course most of it is on RAID5, and there is an additional copy on a tape
robot.

But ATA/SATA is a cheap way to get lots of reasonably fast storage if
funds are limited. You need to know what you are doing, and I have found
that regular surface checks and monitoring are needed, but it does work.
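
A minimal sketch of that kind of monitoring, assuming smartmontools is
installed (the device names below are placeholders):

    #!/usr/bin/env python
    # Minimal SMART sweep: print each drive's health verdict, then kick
    # off a long (full-surface) self-test. Assumes smartmontools is
    # installed; the device list is a placeholder.
    import subprocess

    DEVICES = ["/dev/sda", "/dev/sdb"]

    for dev in DEVICES:
        # 'smartctl -H' prints the drive's overall SMART health status.
        out = subprocess.run(["smartctl", "-H", dev],
                             capture_output=True, text=True).stdout
        print(dev, "->", "PASSED" if "PASSED" in out else "check output")
        # Start a long self-test; results are read back later with
        # 'smartctl -l selftest <dev>'.
        subprocess.run(["smartctl", "-t", "long", dev])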

In addition, putting 8 SATA HDDs into a server case is far easier than
putting 8 ATA disks in there. SATA has clear advantages. True, it still
has problems because it is relatively new, but those will go away.

Also, in desktop systems, having a pair of (S)ATA drives in RAID1 is more
reliable and still cheaper than one high-quality SCSI disk. I have had
very good experiences with that.
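
To put rough numbers on that claim (the failure rates below are purely
illustrative assumptions, not measurements):

    # Back-of-the-envelope data-loss comparison: a mirrored (S)ATA pair
    # vs. a single SCSI drive. All figures are illustrative assumptions.
    AFR_ATA = 0.03       # assumed annual failure rate, commodity drive
    AFR_SCSI = 0.01      # assumed annual failure rate, premium drive
    REPAIR_DAYS = 3      # assumed time to replace and resync a mirror

    # Single drive: any failure is a potential data-loss event.
    p_single = AFR_SCSI

    # Mirror: data is lost only if the second drive also fails during
    # the repair window (failures assumed independent).
    p_mirror = 2 * AFR_ATA * (AFR_ATA * REPAIR_DAYS / 365.0)

    print("single SCSI: %.5f per year" % p_single)   # 0.01000
    print("ATA mirror : %.5f per year" % p_mirror)   # 0.00001

Under these assumptions the mirror is several hundred times less likely
to lose data, and it is still the cheaper option.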

Personally, I see SCSI as a solution for very high speeds, and for places
where you can only mount one disk or it is difficult to replace a failed
disk. Also where there is nobody available who can follow the
developments and select good-quality (S)ATA disks. And of course if money
is not an issue.

Bottom line: high quality is good, but if you can get medium quality and
redundancy, that is even better. After all, the 'I' in RAID stands for
'inexpensive'.

Arno
 
J. Clarke

Arno said:
> Not true. We have TBs of research data on SATA, and more on ATA. Of
> course most of it is on RAID5, and there is an additional copy on a
> tape robot.
>
> But ATA/SATA is a cheap way to get lots of reasonably fast storage if
> funds are limited. You need to know what you are doing, and I have
> found that regular surface checks and monitoring are needed, but it
> does work.
>
> In addition, putting 8 SATA HDDs into a server case is far easier than
> putting 8 ATA disks in there. SATA has clear advantages. True, it
> still has problems because it is relatively new, but those will go
> away.
>
> Also, in desktop systems, having a pair of (S)ATA drives in RAID1 is
> more reliable and still cheaper than one high-quality SCSI disk. I
> have had very good experiences with that.
>
> Personally, I see SCSI as a solution for very high speeds, and for
> places where you can only mount one disk or it is difficult to replace
> a failed disk. Also where there is nobody available who can follow the
> developments and select good-quality (S)ATA disks. And of course if
> money is not an issue.
>
> Bottom line: high quality is good, but if you can get medium quality
> and redundancy, that is even better. After all, the 'I' in RAID stands
> for 'inexpensive'.

It seems to me that, from the viewpoint of an administrator for a large
site, there were three problems with parallel ATA that had nothing to do
with the reliability of the drives. The first was that using the drives
for hot-swap was running them out of specification. The second was that
there was no decent RAID controller from an established manufacturer
(3ware was and is good, but they only support a few operating systems,
and who's ever heard of them?). The third was that there was no
enterprise-quality NAS that would accept PATA drives.

SATA addressed the first issue in the spec--any SATA device that doesn't
support hot-swap is out of spec. And there _are_ full-featured RAID
controllers available for SATA from LSI Logic (their RAID controller
operation is the merger of Mylex, which was at one time an IBM
subsidiary, and AMI, which used to be their arch-rival) and as part of
the Intel server building blocks, in addition to Tekram (supports
RAID6--I don't know of any SCSI RAID controllers that do that), Adaptec,
and the various consumer manufacturers, some of whom are slowly
developing their lines in a direction that might make them competitive
with LSI and Intel some day. So the second issue has been addressed.
And EMC, Sun, and several others have fibre-channel arrays that take
SATA drives, addressing the third.

It's going to be interesting to see how this plays out.
 
Arno Wagner

Previously J. Clarke said:
> Arno Wagner wrote: [...]
>> But ATA/SATA is a cheap way to get lots of reasonably fast storage if
>> funds are limited. You need to know what you are doing, and I have
>> found that regular surface checks and monitoring are needed, but it
>> does work.
>>
>> In addition, putting 8 SATA HDDs into a server case is far easier
>> than putting 8 ATA disks in there. SATA has clear advantages. True,
>> it still has problems because it is relatively new, but those will go
>> away.
>>
>> Also, in desktop systems, having a pair of (S)ATA drives in RAID1 is
>> more reliable and still cheaper than one high-quality SCSI disk. I
>> have had very good experiences with that.
>>
>> Personally, I see SCSI as a solution for very high speeds, and for
>> places where you can only mount one disk or it is difficult to
>> replace a failed disk. Also where there is nobody available who can
>> follow the developments and select good-quality (S)ATA disks. And of
>> course if money is not an issue.
>>
>> Bottom line: high quality is good, but if you can get medium quality
>> and redundancy, that is even better. After all, the 'I' in RAID
>> stands for 'inexpensive'.
>
> It seems to me that, from the viewpoint of an administrator for a large
> site, there were three problems with parallel ATA that had nothing to
> do with the reliability of the drives. The first was that using the
> drives for hot-swap was running them out of specification. The second
> was that there was no decent RAID controller from an established
> manufacturer (3ware was and is good, but they only support a few
> operating systems, and who's ever heard of them?). The third was that
> there was no enterprise-quality NAS that would accept PATA drives.

Well, yes, for a large site you are certainly correct about hot-plugging.

Personally, I don't see the RAID-controller problem, but some time ago I
decided to give up on hardware RAID and use Linux software RAID instead.
But I admittedly only have 3 fileservers at the moment, and all run
Linux, so I am biased. What is nice about SATA is that you can actually
connect the specified number of drives to a controller and still get
decent performance. With PATA I found that two disks per channel have a
real speed problem.
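
For reference, setting up such a mirror with Linux software RAID comes
down to a single mdadm call. A minimal sketch (the device names are
placeholders; run as root):

    # Create a two-disk software RAID1 (mirror) array with mdadm.
    import subprocess

    subprocess.run(["mdadm", "--create", "/dev/md0",
                    "--level=1", "--raid-devices=2",
                    "/dev/sda1", "/dev/sdb1"], check=True)

    # /proc/mdstat shows the array state and resync progress.
    print(open("/proc/mdstat").read())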

> SATA addressed the first issue in the spec--any SATA device that
> doesn't support hot-swap is out of spec. And there _are_ full-featured
> RAID controllers available for SATA from LSI Logic (their RAID
> controller operation is the merger of Mylex, which was at one time an
> IBM subsidiary, and AMI, which used to be their arch-rival) and as part
> of the Intel server building blocks, in addition to Tekram (supports
> RAID6--I don't know of any SCSI RAID controllers that do that),
> Adaptec,

Adaptec SATA RAID controllers are unusable under Linux in my personal
experience. The Linux support was done by somebody who does not
understand the UNIX philosophy; e.g., no usable command-line tools are
available. The one I had also crashed frequently.

> and the various consumer manufacturers, some of whom are slowly
> developing their lines in a direction that might make them competitive
> with LSI and Intel some day, so the second has been addressed, and
> EMC, Sun, and several others have fibre-channel arrays that take SATA
> drives, addressing the third.
>
> It's going to be interesting to see how this plays out.

Indeed.

Arno
 
Frank W.

> Nope. The modern reality is that few actually need the [...]

> Define "reliable". Two drives are not as reliable as one drive in the
> sense of probability of needing repair. A mirrored pair of IDE drives
> will be vastly more reliable than one SCSI drive in terms of
> probability of data loss, however.

> More reliable than a SCSI system which costs the same.

> If you think that a single SCSI drive is preferable in terms of
> preservation of data to mirrored IDE drives, you need to start looking
> at the numbers instead of the lining of your hat.

> The simple fact is that SCSI is overpriced for what it delivers for
> all but a few specialized applications.

In what applications are SCSI drives better?
 
J. Clarke

Frank said:
> In what applications are SCSI drives better?

If the price were the same, then pretty much across the board, but the
price is not the same. Meanwhile, the fastest SCSI drives are still
faster, SCSI allows much longer cables, and solutions that allow one to
attach more than a dozen or so IDE drives to a single machine cost a
good deal more than solutions that allow a dozen or more SCSI drives to
be attached--whether the cost of the controller balances the cost of the
drives I'm not sure.
 
J. Clarke

Arno said:
> Previously J. Clarke said:
>> Arno Wagner wrote: [...]
>>> But ATA/SATA is a cheap way to get lots of reasonably fast storage
>>> if funds are limited. You need to know what you are doing, and I
>>> have found that regular surface checks and monitoring are needed,
>>> but it does work.
>>>
>>> In addition, putting 8 SATA HDDs into a server case is far easier
>>> than putting 8 ATA disks in there. SATA has clear advantages. True,
>>> it still has problems because it is relatively new, but those will
>>> go away.
>>>
>>> Also, in desktop systems, having a pair of (S)ATA drives in RAID1 is
>>> more reliable and still cheaper than one high-quality SCSI disk. I
>>> have had very good experiences with that.
>>>
>>> Personally, I see SCSI as a solution for very high speeds, and for
>>> places where you can only mount one disk or it is difficult to
>>> replace a failed disk. Also where there is nobody available who can
>>> follow the developments and select good-quality (S)ATA disks. And of
>>> course if money is not an issue.
>>>
>>> Bottom line: high quality is good, but if you can get medium quality
>>> and redundancy, that is even better. After all, the 'I' in RAID
>>> stands for 'inexpensive'.
>>
>> It seems to me that, from the viewpoint of an administrator for a
>> large site, there were three problems with parallel ATA that had
>> nothing to do with the reliability of the drives. The first was that
>> using the drives for hot-swap was running them out of specification.
>> The second was that there was no decent RAID controller from an
>> established manufacturer (3ware was and is good, but they only
>> support a few operating systems, and who's ever heard of them?). The
>> third was that there was no enterprise-quality NAS that would accept
>> PATA drives.
>
> Well, yes, for a large site you are certainly correct about
> hot-plugging.
>
> Personally, I don't see the RAID-controller problem, but some time ago
> I decided to give up on hardware RAID and use Linux software RAID
> instead. But I admittedly only have 3 fileservers at the moment, and
> all run Linux, so I am biased. What is nice about SATA is that you can
> actually connect the specified number of drives to a controller and
> still get decent performance. With PATA I found that two disks per
> channel have a real speed problem.

For me the show-stopper was the RAID controller--the 3wares work well,
but they never have come out with a Netware driver, and the other PATA
RAID controllers have all been crap. LSI has Netware drivers for their
SATA RAID controllers, so that issue is also resolved. However, my total
Netware installed base at the moment is a single server in my basement,
and I'm finding it hard to justify the cost for that use <g>. If I ever
get back to running a Netware shop of any size then it may be another
story. But then the next generation of Netware is going to be
Linux-based (as one option--they'll also have a version on the Netware
kernel), so that may no longer be an issue.

> Adaptec SATA RAID controllers are unusable under Linux in my personal
> experience.

Adaptec ATA RAID controllers of any kind are unusable in my personal
experience.
 
Chuck U. Farley

J. Clarke said:
> Try running the numbers yourself; I doubt you'd believe mine anyway.

I don't have to; they'd be irrelevant.

> Data loss is data loss. The probability of data loss is not subjective.

Vague generalities do not bolster your argument.

> Which I stated clearly that I was doing, so why do you have a problem
> with it?

Because when Odie did the exact same thing, you tried to ridicule him, as
in:

"As for the meaning of "subjective", if all you have is an opinion then
you really shouldn't pontificate quite so much about drive reliability."

Mr. Kettle, meet Mr. Pot.

"From what I've been able to gather your "real world experience" is based
in the pattern of busted drives you see coming in from others. There are
so many ways that that data could be skewed that a student in a college
level statistics course could get a term paper out of it."

My "real world experience" is that major organizations do not use IDE
drives in their servers, for reasons of reliability _and_ performance.
Now you can blather on all you want about which one is "best" or "more
reliable", but that fact remains irrefutable and proves Odie's original
point that you disagreed with: SCSI drives are generally more reliable.

> Try it anyway. You might find the results instructive. You might also
> consider getting a humor transplant.

I don't have to; if it made business sense to do it, major data centers
would have _already_ switched to IDE drives. And by the way, you might
want to take a look in the mirror regarding humor, as I wasn't the first
one in this thread to denigrate someone's opinion because I disagreed
with it... you'll see that person's face in the mirror.

> What is at issue is not the fact that they use such drives, but the
> reason. You don't seem to be willing to even consider the possibility
> that there might be reasons unrelated to the reliability of individual
> drives.

I guess you've never worked in IT. Reliability first, performance second,
cost third.

> So what? Nobody has disputed this.

Are you really this dense? I've seen your posts in here in the past and
you seemed to be fairly intelligent and knowledgeable. Let me speak
slowly so maybe you can understand: "If IDE drives were more reliable
than SCSI, major data centers would switch to them because they are much
cheaper, so the CIO would save money and look good to the CEO for
delivering the same reliability at a lower cost."

> So? Do the exercise instead of acting like a broken record.

Your "exercise" has already been done in thousands and thousands of data
center installations. It's called a Hardware Acquisition Cost Benefit
Analysis, and guess what, here comes that broken record again: those
data centers have not changed to IDE technology.
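
The back-of-the-envelope version of that analysis is short, by the way.
With purely made-up numbers (none of these figures come from any real
data center):

    # Toy acquisition cost-benefit comparison. Every number here is a
    # made-up assumption, for illustration only.
    capex = 1000000                  # annual drive spend
    capex_savings = 0.05 * capex     # the claimed 5% saving
    extra_failures = 10              # assumed extra IDE failures per year
    cost_per_event = 20000           # assumed downtime + labor per failure

    extra_risk_cost = extra_failures * cost_per_event
    print("capex saved: $%d" % capex_savings)     # $50000
    print("risk added : $%d" % extra_risk_cost)   # $200000

The switch only "makes business sense" if the first number beats the
second, which is exactly the calculation those data centers keep coming
out of on the SCSI side.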

> So what? IDE is obsolescent anyway.

Boy, that logic really shot down that old generalization now, didn't it?

> SCSI is already far older than SATA will be in ten years. So what?

Get ready, here it comes again... because SCSI technology is more
reliable than IDE, that's why it's used in major data centers throughout
the world. While _you_ may feel that IDE drives at the same price point
as SCSI drives are more reliable and provide more value, corporate
managers with careers on the line don't seem to share your beliefs.

Time to agree to disagree on this, as you'll continue to say "do the
study" and I'll continue to say "I don't have to".
 
Bob Willard

Frank said:
> In what applications are SCSI drives better?

SCSI wins in multi-host environments; it is particularly useful with
clusters running shared-everything database applications. That requires
a real OS, not a toy OS from rain country.
 
Al Dykes

Bob Willard said:
> SCSI wins in multi-host environments; it is particularly useful with
> clusters running shared-everything database applications. That
> requires a real OS, not a toy OS from rain country.

Somewhere I've read that disks sold for desktop systems have their
embedded software optimized to read entire files serially, up to the 2MB
(or whatever) buffer size, while the code in the same disk sold for a
server app will have software optimized to buffer random blocks.

Comments?
 
J. Clarke

Bob said:
> SCSI wins in multi-host environments; it is particularly useful with
> clusters running shared-everything database applications. That
> requires a real OS, not a toy OS from rain country.

??? Excuse me, but what does the OS have to do with anything? IDE is
hardware, not software, and is supported by every major operating system.
 
Frank W.

Chuck, you make some excellent points. Do you think that older SCSI
drives (say 3 to 5 years old) are significantly more reliable than newer
IDE drives? I ask this because many people could put their OS and
programs on an older SCSI drive of about 10 GB capacity and most of us
would have lots of room to spare. Then they can get a cheap, larger IDE
drive to run everything else, and back up with a cheap, larger IDE drive
as well. For most of us, the equipment that a data center uses isn't
very similar to what we use. Though I sure like your priorities:
reliability, performance, cost. Considering how long it takes to get a
formatted system back up to where it was before a disaster, I would
concur 100%.
 
Frank W.

Bob Willard said:
> SCSI wins in multi-host environments; it is particularly useful with
> clusters running shared-everything database applications. That
> requires a real OS, not a toy OS from rain country.

See - some people do have a sense of humour here! That was hilarious.
 
Rod Speed

Frank W. said:
> Chuck, you make some excellent points. Do you think that older SCSI
> drives (say 3 to 5 years old) are significantly more reliable than
> newer IDE drives?

Nope.

> I ask this because many people could put their OS and programs on an
> older SCSI drive of about 10 GB capacity and most of us would have
> lots of room to spare. Then they can get a cheap, larger IDE drive to
> run everything else, and back up with a cheap, larger IDE drive as
> well.

Makes a lot more sense to just install everything on a decent modern IDE
drive and have another as a backup for all of that.

> For most of us, the equipment that a data center uses isn't very
> similar to what we use. Though I sure like your priorities:
> reliability, performance, cost.

Not what is needed for a home system.

A home system doesn't need very quick restore on hard drive failure
either.

> Considering how long it takes to get a formatted system back up to
> where it was before a disaster, I would concur 100%.

You shouldn't. A properly configured home system can restore after a
hard drive failure very quickly.

Very quickly indeed if you choose to have a mirrored system.
 
Al Dykes

Frank W. said:
> Chuck, you make some excellent points. Do you think that older SCSI
> drives (say 3 to 5 years old) are significantly more reliable than
> newer IDE drives? I ask this because many people could put their OS
> and programs on an older SCSI drive of about 10 GB capacity and most
> of us would have lots of room to spare. Then they can get a cheap,
> larger IDE drive to run everything else, and back up with a cheap,
> larger IDE drive as well. For most of us, the equipment that a data
> center uses isn't very similar to what we use. Though I sure like your
> priorities: reliability, performance, cost. Considering how long it
> takes to get a formatted system back up to where it was before a
> disaster, I would concur 100%.


Since the (statistically) most reliable model of disk ever made might
fail in your machine RIGHT NOW, you have to live your life and do your
backups as if you've bought the crappiest disk ever made.

If you need non-stop operation, you've got to do some sort of RAID.

If your data is important, you need to get copies of your backup
off-site. Your machine might be stolen, or heaven forbid you might have
a house fire.

If you do disk-to-disk image backups to a second disk in a machine, you
can restore a machine to operation in a few minutes once the dead disk
is replaced. It's not a perfect solution, but it's very good, and fast.
If you've got a LAN and a second machine, you can cross-backup each
machine to the other, which is fast and nearly bulletproof. This doesn't
address off-site backup; that needs a separate solution.
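
A minimal sketch of that cross-backup, assuming two Linux boxes with
rsync and password-less ssh already set up (hostnames and paths are
placeholders):

    # Nightly cross-backup: each machine pushes its data to the other.
    # Run from cron on both boxes; hostnames and paths are placeholders.
    import subprocess

    SRC = "/home/"
    DEST = "otherbox:/backups/thisbox/"

    # -a preserves permissions and times; --delete keeps the copy an
    # exact mirror. A mirror is not an archive: it propagates mistakes,
    # hence the separate off-site copy mentioned above.
    subprocess.run(["rsync", "-a", "--delete", SRC, DEST], check=True)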

IME, keeping your disks cool does more to increase reliability than
picking a disk based on published MTBF numbers.

IOW, reliability is way down on the list of features I look for in a
disk.
 
Chuck U. Farley

Frank W. said:
> Chuck, you make some excellent points. Do you think that older SCSI
> drives (say 3 to 5 years old) are significantly more reliable than
> newer IDE drives?

No. And they will probably be slower as well.

> I ask this because many people could put their OS and programs on an
> older SCSI drive of about 10 GB capacity and most of us would have
> lots of room to spare. Then they can get a cheap, larger IDE drive to
> run everything else, and back up with a cheap, larger IDE drive as
> well. For most of us, the equipment that a data center uses isn't very
> similar to what we use. Though I sure like your priorities:
> reliability, performance, cost. Considering how long it takes to get a
> formatted system back up to where it was before a disaster, I would
> concur 100%.

I use Ghost 8.0 to image my hard drive every Friday as a backup. In
addition, I also imaged my drive right after a fresh install of XP with
my core group of apps, so getting my system back up will only take a
matter of minutes, not hours.
 
