1.485 Gbit/s to and from HDD subsystem

Spoon

Hello,

I've been asked to build a system capable of reading and writing "raw"
high-definition video, namely HD-SDI.

http://en.wikipedia.org/wiki/Serial_Digital_Interface

AFAIK no single HDD can handle 1.485 Gbit/s (186 MB/s).

I suppose one way around this problem is RAID 0 (striping).
Are there other solutions?

http://en.wikipedia.org/wiki/RAID#RAID_0

According to storagereview, the Raptor WD1500 reaches 60-88 MB/s.

http://www.storagereview.com/articles/200601/WD1500ADFD_3.html

I could stripe 4 such drives to manage 240-350 MB/s (ideally). I thought
I'd use the RAID controller provided by the nForce 500 or 600 chipsets.
Do they perform well?
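For what it's worth, my back-of-the-envelope arithmetic (assuming throughput
scales linearly with drive count, which real RAID overhead will erode):

# Rough numbers only: HD-SDI rate in MB/s, and ideal 4-drive striping.
rate_mbs = 1.485e9 / 8 / 1e6        # 185.625 MB/s required
outer, inner = 88, 60               # WD1500 MB/s, per storagereview
print(rate_mbs)                     # 185.625
print(4 * inner, 4 * outer)         # 240 352 -> the 240-350 MB/s above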

Comments? Suggestions? Remarks? :)

Cheers.
 
Michael Daly

I can't give you a complete answer, but:

Spoon wrote:
According to storagereview, the Raptor WD1500 reaches 60-88 MB/s.

http://www.storagereview.com/articles/200601/WD1500ADFD_3.html


That drive is a SATA 150. SATA 300 drives are somewhat faster, with some
approaching 200 MB/s under ideal circumstances (sustained is more like 120 MB/s).

Serial Attached SCSI (SAS) drives are faster still, though by how much, I
don't know offhand.

If you redo your research and concentrate on SAS and SATA 300, coupled with
striping, you might hit the target. However, it will be dependent on matching
the controller, the drives and the software. Remember that a PCI controller
will never match an on-motherboard SATA controller due to the speed limits on
the PCI bus - for a controller card, you'll have to find a PCI Express (not
PCI-X) type. The same will affect SAS. These will be somewhat more expensive
than your run-of-the-mill drive/controller combos.
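The bus arithmetic, using the usual peak spec figures (real-world throughput
runs noticeably lower):

# Peak bus bandwidth vs. the HD-SDI target.
pci_shared = 133    # MB/s, classic 32-bit/33 MHz PCI, shared by every device
pcie_lane  = 250    # MB/s per direction per PCI Express 1.x lane
target     = 1.485e9 / 8 / 1e6      # ~186 MB/s
print(pci_shared >= target)         # False: plain PCI can't carry one stream
print(4 * pcie_lane)                # 1000 MB/s for an x4 slot - ample headroom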

Mike
 
George Macdonald

Spoon wrote:
<snip>
I could stripe 4 such drives to manage 240-350 MB/s (ideally). I thought
I'd use the RAID controller provided by the nForce 500 or 600 chipsets.
Do they perform well?

I have no experience with nForce 5xx/6xx myself, but there are lots of
reports of people having trouble with RAID arrays. The most I've done is
RAID-1 with nForce4, and though there are lots of reports of trouble there
too, it works fine for me with Seagate drives... so it's difficult to say
whether the 5xx/6xx reports are genuine or just come from incompetent dabblers.

The nForce RAID *is* a software RAID though, so for top performance you'd
probably be better with one of the hardware RAID cards and preferably a
PCI-E one with two lanes... so be careful to get a mbrd which has a PCI-E
x4 connector.
 
Spoon

Michael said:
That drive is a SATA 150. SATA 300 drives are somewhat faster, with some
approaching 200 MB/s under ideal circumstances (sustained is more like
120 MB/s).

What model do you have in mind that can sustain 120 MB/s?

Seagate's Cheetah 15K.5 manages "only" 135 MB/s on outer tracks down to
83 MB/s on inner tracks.
Serial Attached SCSI (SAS) drives are faster still, though by how much, I
don't know offhand.

I don't think the speed of the interface is the bottleneck.
If you redo your research and concentrate on SAS and SATA 300, coupled
with striping, you might hit the target. However, it will be dependent
on matching the controller, the drives and the software. Remember that
a PCI controller will never match an on-motherboard SATA controller due
to the speed limits on the PCI bus - for a controller card, you'll have
to find a PCI Express (not PCI-X) type. The same will affect SAS.
These will be somewhat more expensive than your run-of-the-mill
drive/controller combos.

I plan to use the RAID controller provided by the nForce chipset.
 
Spoon

George said:
I have no experience with nForce 5xx/6xx myself, but there are lots of
reports of people having trouble with RAID arrays. The most I've done is
RAID-1 with nForce4, and though there are lots of reports of trouble there
too, it works fine for me with Seagate drives... so it's difficult to say
whether the 5xx/6xx reports are genuine or just come from incompetent dabblers.

I've come across this review:
http://www.hothardware.com/printarticle.aspx?articleid=776

They've benchmarked a 2xWD1500 RAID-0 array (nForce4 controller).
The nForce RAID *is* a software RAID though, so for top performance you'd
probably be better with one of the hardware RAID cards and preferably a
PCI-E one with two lanes... so be careful to get a mbrd which has a PCI-E
x4 connector.

Point taken.
 
Michael Daly

Spoon said:
What model do you have in mind that can sustain 120 MB/s?

I don't remember - it was in a review of several HDDs I looked at recently.
Try searching Google for SATA 300 drive performance - that's where I found an
online review.
I don't think the speed of the interface is the bottleneck.

If you're on the margin of what the system can do, you have to consider all
components. It has been shown in tests that mismatching a drive and a
controller can significantly affect the end result. For example, a 10k rpm SATA
drive on a controller optimized for 7.2K rpm drives ran poorly compared to the
same drive on another controller.
I plan to use the RAID controller provided by the nForce chipset.

If you have already decided on the hardware platform, you are limited to what
you already have. Finding the "fastest" drive won't necessarily result in what
you want - see my comment on the mismatched controller/drive combo. You'll have
to find the fastest drive that has been tested for your controller - that may
not be adequate.

Your problem is not that you don't have a combination that works - it's that you
are looking at doing something that is at the limits of current desktop
technology. You may have to look at the OS you use to ensure that it will not
preempt your program so often that the net throughput drops below what you
need. You might even have to look to queuing theory to predict what levels of
performance you'll get, given the data arrival rate and the data storage rate,
if the storage rate is only slightly higher than the arrival rate.
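As a toy illustration of why thin margins hurt (the service rates here are
made up purely to show the shape of the problem):

# Constant 186 MB/s arrives; the disk serves 220 MB/s, then hits a
# 10-second slow spell at 150 MB/s. The backlog lands in RAM, so the
# peak tells you how much buffer memory you'd need to ride it out.
arrival = 186.0
service = [220.0] * 50 + [150.0] * 10   # MB/s, one-second steps
buffered = peak = 0.0
for rate in service:
    buffered = max(0.0, buffered + arrival - rate)
    peak = max(peak, buffered)
print("peak buffer: %.0f MB" % peak)    # 360 MB for this trace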

If you can provide some lossless compression of the data stream (data
compression, not video compression - the former would be simpler) you may be
able to enhance your ability to meet your requirements. You will have to
decompress the data for use, of course, which can complicate things. The data
stream is not compressed, according to the Wikipedia article, but it is fiddled
to minimize long runs of zeroes or ones.
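A quick way to estimate what a fast lossless pass could buy you (zlib here is
just a stand-in for whatever codec you'd actually use, and the random input is
the worst case - substitute a real captured frame):

import os
import zlib

# Stand-in for one 8-bit 4:2:2 1080-line frame; random bytes won't shrink.
frame = os.urandom(1920 * 1080 * 2)
packed = zlib.compress(frame, 1)        # level 1: fastest, the only sane
                                        # setting at these data rates
print(len(packed) / float(len(frame)))  # ~1.0 here; <1.0 means real savings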

Mike
 
Tony Hill

Spoon wrote:
Hello,

I've been asked to build a system capable of reading and writing "raw"
high-definition video, namely HD-SDI.

http://en.wikipedia.org/wiki/Serial_Digital_Interface

Yeouch! That's no small feat! Keep in mind that you aren't just
going to need to worry about the hard drives, but also getting the
data to and from somewhere useful! That probably means some pretty
specialized video equipment (though I'm guessing you already know
about that part!) and probably some pretty beefy LAN.
AFAIK no single HDD can handle 1.485 Gbit/s (186 MB/s).

Definitely not.
I suppose one way around this problem is RAID 0 (striping).
Are there other solutions?

Other solutions do exist, but they are not cheap. Basically you would
be looking at some form of Network Attached Storage setup, though
depending on your application that might be totally pointless.
http://en.wikipedia.org/wiki/RAID#RAID_0

According to storagereview, the Raptor WD1500 reaches 60-88 MB/s.

http://www.storagereview.com/articles/200601/WD1500ADFD_3.html

I could stripe 4 such drives to manage 240-350 MB/s (ideally).

If you plan on keeping costs semi-reasonable (ie no SCSI) then the
Raptor 150/Raptor X is pretty much the only drive that will fit the
bill for you. 4 of those should indeed do the trick, and I'm quite
certain that you're going to want the extra 50MB/s+ worth of
theoretical headroom.
I thought I'd use the RAID controller provided by the nForce 500 or 600
chipsets. Do they perform well?

Well now, here is where things get tricky. I really don't know how
these chipsets would perform because I've never had the need (or
budget!) to aim for such targets. However I wouldn't be counting on
them being up to the task. These are desktop chipsets and you're
looking at very much a workstation/server style application. I would
give it maybe a 50/50 shot of working reliably at your required
bandwidth.

What you might want to do is buy the system with an nForce chipset
and try it out. However, when buying the system, make sure that it has a
free PCI-Express 4x slot so that you can drop in a full-fledged RAID
add-in card, something like a 3Ware 9590:

http://www.3ware.com/products/serial_ata2-9590.asp

Newegg lists the 8-port version of this card at just over $500:

http://www.newegg.com/Product/Product.asp?Item=N82E16816116037

Note that you might want to consider the possibility of expanding to 6
or 8 drives in your array if 4 won't cut it.


A few other points of note, many of which you are probably well aware
of already, but others might be new:

1. You'll almost certainly want a dual-core processor (if not 4
cores). This data streaming on its own is going to be enough to
swamp a fairly capable single core. With only a single-core chip any
other tasks (programs, OS, whatever) are going to start eating into
your performance. A dual-core chip should go a long way to keeping
things running smoothly.

2. Enough memory that you basically won't ever need to worry about
paging out your OS or applications.

3. Spend some time tweaking the software for maximum throughput. You
can probably do away with a lot of the logging and system recovery
functionality in favor of pure performance. Also things like larger
than default cluster sizes are likely to be helpful. You might find a
few guides out there that can give you some suggestions, but a bit of
trial and error is likely to be necessary to really get things working
well (see the quick write-test sketch at the end of this post).

4. You are obviously going to need a hefty computer case and power
supply. You're looking at a minimum of 5 hard drives (1 boot drive
and 4 for your array) and maybe more like 8 drives. Obviously your
plain-jane desktop case isn't going to cut it here. Similarly a 500W
power supply is probably the minimum you're going to want here.

5. Be sure that your case has lots of airflow. At the very least
you're going to have 4 drives spitting out a fair chunk of heat along
with one fairly high-end processor. And whatever you're using to take
data in and spit it out again are also going to be some high-end
parts. All in all, that's a LOT of heat being generated in a case,
even if it is going to be a pretty large case. Now SATA is a godsend
here when compared to PATA, since you'll have MUCH less ribbon cable
cluttering up your case, but you'll still need to make sure that the
cables stay neatly tied up and you've got fans sucking and blowing air
effectively throughout the case.
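And here's the kind of quick write test I had in mind for point 3 (a
throwaway sketch - the filename is arbitrary, and the fsync matters or
you're just timing the page cache):

import os
import time

# Sequential-write test: 1 GB in 4 MB chunks against the array.
CHUNK = 4 * 1024 * 1024
TOTAL = 1024 * 1024 * 1024
buf = b"\0" * CHUNK

fd = os.open("testfile.bin", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
start = time.time()
for _ in range(TOTAL // CHUNK):
    os.write(fd, buf)
os.fsync(fd)                            # flush the page cache to disk
elapsed = time.time() - start
os.close(fd)
print("%.0f MB/s" % (TOTAL / elapsed / 1e6))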
 
George Macdonald

Spoon said:
I've come across this review:
http://www.hothardware.com/printarticle.aspx?articleid=776

They've benchmarked a 2xWD1500 RAID-0 array (nForce4 controller).

I'm thinking more of the complaints of data corruption and system crashing.
Take a look at some of the posts here
http://forums.nvidia.com/index.php?showforum=34 and
http://www.nforcershq.com/forum/nvidia-nforce4-nforce3-vf59.html.
Personally I haven't seen it but I've always used Seagate drives with my
nForce4 SATA systems - they are the best bet and even there the firmware
has to be at 3.AAH, which I think all new drives are now. Note also that
some drive mfrs sell HDDs which are "not RAID qualified" - though the
Seagate "desktop" drives work, they have their NS (nearline series) drives
for high reliability.
Point taken.

You didn't say what ultimate total size you're thinking of for a RAID-0, but
note that the nForce chipset RAID supports only 32-bit addressing, so there's
a limit of 2TB - a few people got upset with that for 4x750GB arrays.
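The arithmetic behind the ceiling is just 32-bit LBAs of 512-byte sectors:

print(2**32 * 512)      # 2199023255552 bytes = 2.0 TiB (the "2TB" limit)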

I'd note that I personally would not trust RAID-0 for reliability -- drives
*will* go bad -- so I'd look at RAID-5 or RAID 0+1 or 1+0 to cover for
failures.
 
Ryan Godridge

Tony said:
<snip>

I'd second all of Tony's suggestions. For the price of 4 Raptors you
might also consider 6 or 7 less expensive SATA drives. They might
give you your throughput with more headroom at a lower cost.
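A rough tally using the worst-case (inner-track) numbers already quoted in
this thread - real arrays won't scale perfectly, so treat it as an upper
bound:

need = 186                          # MB/s for HD-SDI
raptor_min, barracuda_min = 60, 40  # MB/s inner tracks: WD1500 vs 7200.10
for n in (4, 6, 7):
    print(n, "Raptors:", n * raptor_min, "MB/s |",
          n, "Barracudas:", n * barracuda_min, "MB/s")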
 
Del Cecchi

Michael Daly said:
<snip>

I am confused. Are you trying to get by on a limited budget or are you
trying to make something that works for sure while not spending more than
necessary?

del cecchi
 
Spoon

George said:
You didn't say what ultimate total size you're thinking of for a RAID-0, but
note that the nForce chipset RAID supports only 32-bit addressing, so there's
a limit of 2TB - a few people got upset with that for 4x750GB arrays.

Thanks for pointing it out. I wasn't aware of such a limitation.

I'm disappointed that it is not mentioned in the Media Shield
User's Guide.

http://www.nvidia.com/object/feature_raid.html

Where did you read about it?

The data sheet for the controller Tony suggested (3ware 9590SE) states:

"Other scalability features include 64-bit LBA support for addressing
arrays greater than 2 TB and support for multiple cards within a system
for large storage requirements."
I'd note that I personally would not trust RAID-0 for reliability

Yes.

It is interesting to note that Western Digital claims "1.2 million hours
MTBF at 100% duty cycle".

1.2 million hours = 50000 days i.e. ~137 years :)

If one were to believe the marketing claims, a RAID-0 array of 2 or even
4 disks should still be quite reliable.
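Even taking the marketing number at face value, striping divides it by the
drive count (a quick sketch):

mtbf_hours = 1.2e6
for n in (1, 2, 4):
    print("%d drive(s): %.0f years array MTBF" % (n, mtbf_hours / n / 8766))
# 1 drive: ~137 years, 2 drives: ~68, 4 drives: ~34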

But I digress. The video streams are stored on a different server.
The new server will be used to capture and play back.

Regards.
 
Spoon

Ryan said:
I'd second all of Tony's suggestions. For the price of 4 Raptors you
might also consider 6 or 7 less expensive SATA drives. They might
give you your throughput with more headroom at a lower cost.

I've read about Physical Track Positioning.
http://www.nyx.net/~sgjoen/disk1.html#ss6.8

For example, the Raptor WD1500 manages ~88 MB/s on outer tracks and
~60 MB/s on inner tracks, while the Barracuda 7200.10 starts at ~80 MB/s
on outer tracks and ends at ~40 MB/s on inner tracks.

http://anandtech.com/printarticle.aspx?i=2760

However, the WD1500 holds "only" 150 GB while the 7200.10 holds 750 GB.
If one looks at the throughput of the 7200.10 on the first 500 GB, it
never falls below 60 MB/s. And the throughput on the first 650 GB never
falls below 50 MB/s.

If it were easy to specify that one wants to use "only the outer x GB",
one could get good performance from large disks.

Has anyone played with this at all? In Windows? In Linux?
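In Linux I imagine something like this rough sketch would map the zones
(/dev/sdb is only an example - point it at the right disk, run as root, and
note the page cache will flatter repeat runs; O_DIRECT would be more rigorous
but needs aligned buffers):

import os
import time

DEV = "/dev/sdb"
CHUNK = 8 * 1024 * 1024
SAMPLE = 32 * CHUNK                 # read 256 MB at each sample point

fd = os.open(DEV, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)
for frac in (0.0, 0.25, 0.5, 0.75, 0.95):
    os.lseek(fd, int(size * frac) // CHUNK * CHUNK, os.SEEK_SET)
    start = time.time()
    done = 0
    while done < SAMPLE:
        done += len(os.read(fd, CHUNK))
    print("%3d%% in: %.0f MB/s" % (frac * 100,
                                   SAMPLE / (time.time() - start) / 1e6))
os.close(fd)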

Regards.
 
Spoon

Tony said:
Yeouch! That's no small feat! Keep in mind that you aren't just
going to need to worry about the hard drives, but also getting the
data to and from somewhere useful! That probably means some pretty
specialized video equipment (though I'm guessing you already know
about that part!) and probably some pretty beefy LAN.

You're right. I also need an HD-SDI PCIe board with Linux 2.6 support.
If you plan on keeping costs semi-reasonable (ie no SCSI) then the
Raptor 150/Raptor X is pretty much the only drive that will fit the
bill for you. 4 of those should indeed do the trick, and I'm quite
certain that you're going to want the extra 50MB/s+ worth of
theoretical headroom.

Is it possible to use only the outer tracks of larger disks?
(As discussed in another message.)
Well now, here is where things get tricky. I really don't know how
these chipsets would perform because I've never had the need (or
budget!) to aim for such targets. However I wouldn't be counting on
them being up to the task. These are desktop chipsets and you're
looking at very much a workstation/server style application. I would
give it maybe a 50/50 shot of working reliably at your required
bandwidth.

I appreciate your taking the time to share your experience, Tony :)
What you might want to do is buy the system with an nForce chipset
and try it out. However, when buying the system, make sure that it has a
free PCI-Express 4x slot so that you can drop in a full-fledged RAID
add-in card, something like a 3Ware 9590:

http://www.3ware.com/products/serial_ata2-9590.asp

Thanks for the link.
A few other points of note, many of which you are probably well aware
of already, but others might be new:

1. You'll almost certainly want a dual-core processor (if not 4
cores). This data streaming on its own is going to be enough to
swamp a fairly capable single core. With only a single-core chip any
other tasks (programs, OS, whatever) are going to start eating into
your performance. A dual-core chip should go a long way to keeping
things running smoothly.

I'm aiming for socket AM2 Athlon 64 X2 4600+ (dual core, 2.4 GHz).
(I'm also considering Core 2 Duo.)
2. Enough memory that you basically won't ever need to worry about
paging out your OS or applications.

Aiming for 2-4 GB.
3. Spend some time tweaking the software for maximum throughput. You
can probably do away with a lot of the logging and system recovery
functionality in favor of pure performance. Also things like larger
than default cluster sizes are likely to be helpful. You might find a
few guides out there that can give you some suggestions, but a bit of
trial and error is likely to be necessary to really get things working
well.

Definitely.

4. You are obviously going to need a hefty computer case and power
supply. You're looking at a minimum of 5 hard drives (1 boot drive
and 4 for your array) and maybe more like 8 drives. Obviously your
plain-jane desktop case isn't going to cut it here. Similarly a 500W
power supply is probably the minimum you're going to want here.

I have what some consider one of the best power supplies available.
(Seasonic SS-600HT)
5. Be sure that your case has lots of airflow. At the very least
you're going to have 4 drives spitting out a fair chunk of heat along
with one fairly high-end processor. And whatever you're using to take
data in and spit it out again are also going to be some high-end
parts. All in all, that's a LOT of heat being generated in a case,
even if it is going to be a pretty large case. Now SATA is a godsend
here when compared to PATA, since you'll have MUCH less ribbon cable
cluttering up your case, but you'll still need to make sure that the
cables stay neatly tied up and you've got fans sucking and blowing air
effectively throughout the case.

For reference, the WD1500 dissipates 10W in use, 9W idle.
http://www.westerndigital.com/en/products/Products.asp?DriveID=189

Point taken, as far as airflow is concerned.

Thanks for the insight.
 
Al Dykes

Spoon said:
You're right. I also need an HD-SDI PCIe board with Linux 2.6 support.


Is it possible to use only the outer tracks of larger disks?
(As discussed in another message.)

Easy. Use something like Partition Magic to make a space-filling
partition that uses the inside tracks.
 
krw

Al Dykes said:
Easy. Use something like Partition Magic to make a space-filling
partition that uses the inside tracks.

If you're sure the LBA algorithm fills from outside cylinders to
inside.
 
Al Dykes

krw said:
If you're sure the LBA algorithm fills from outside cylinders to
inside.



Test it. If the throughput decreases, put the dummy partition on the
other end of the disk extent and test it again.

If you can't measure the difference, it doesn't make any difference.
 
krw

Al Dykes said:
Test it. If the throughput decreases, put the dummy partition on the
other end of the disk extent and test it again.

The question is whether it fills cylinder first or surface first.
Since a head switch is about the same speed as a track change, you're
not guaranteed which way the disk operates.
If you can't measure the difference, it doesn't make any difference.

You may end up with a pretty small partition (one surface - outside
zone) that is faster than the rest.
 
The little lost angel

Spoon said:
It is interesting to note that Western Digital claims "1.2 million hours
MTBF at 100% duty cycle".

1.2 million hours = 50000 days i.e. ~137 years :)

If one were to believe the marketing claims, a RAID-0 array of 2 or even
4 disks should still be quite reliable.

Actually, I think it means that if 1.2 million of these drives are
running at once, one of them will fail every hour on average, assuming
100% duty cycle. Doing some maths, with some help, that works out to
roughly a 2.9% chance of losing at least one drive per year with 4
drives. Not as low as I would like it if I were you.
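The maths, for the curious (assuming the usual constant-failure-rate model
behind MTBF figures):

import math

mtbf = 1.2e6                            # hours, WD's claim
year = 8766                             # hours in a year
p_drive = 1 - math.exp(-year / mtbf)    # ~0.73% per drive per year
p_array = 1 - (1 - p_drive) ** 4        # RAID-0 dies if ANY drive dies
print("%.2f%% per drive-year, %.2f%% per array-year"
      % (100 * p_drive, 100 * p_array))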
 
Al Dykes

The little lost angel said:
Actually, I think it means that if 1.2 million of these drives are
running at once, one of them will fail every hour on average, assuming
100% duty cycle. Doing some maths, with some help, that works out to
roughly a 2.9% chance of losing at least one drive per year with 4
drives. Not as low as I would like it if I were you.

Applying averages to a specific instance of the equipment is a classic
mistake. The above paragraph is correct.

Another way to restate MTBF numbers is that, under a constant failure
rate, roughly 63% of the units will have failed by the stated MTBF
interval, and unless there is a systematic flaw the failures will be
spread evenly in time rather than clustered.

When you buy a disk you have to plan as if *yours* is going to fail
tomorrow and make your contingency plans according to your critical
priorities.
 
George Macdonald

Spoon said:
Thanks for pointing it out. I wasn't aware of such a limitation.

I'm disappointed that it is not mentioned in the Media Shield
User's Guide.

http://www.nvidia.com/object/feature_raid.html

Where did you read about it?

Here's one thread: http://forums.nvidia.com/index.php?showtopic=18158 and
I'm sure I've seen a nVidia "confession" quoted somewhere in another.
The data sheet for the controller Tony suggested (3ware 9590SE) states:

"Other scalability features include 64-bit LBA support for addressing
arrays greater than 2 TB and support for multiple cards within a system
for large storage requirements."

Yes, even some of the RAID cards are limited to 2TB so be careful. That
card is also PCI-E x4 so even better (than x2) but watch for mbrds again:
some of them have a x4 slot but only 2 lanes on it.

In fact for the kind of system you're looking at, an SLI mbrd which allows
the 2nd x16 SLI slot to be used for x8 or x4 add-in (non-video) cards would
probably be the best route. I'm not sure how common this arrangement is -
I've only checked one Asus board's manual; no reason it should not be
common, but if the BIOS doesn't cater for it things could get messy. In that
respect I've even seen cases where some board makers' BIOSes have screwed up
the Reserved Memory area (640KB->1MB) so that the add-in controller can't
hook its BIOS.
Yes.

It is interesting to note that Western Digital claims "1.2 million hours
MTBF at 100% duty cycle".

1.2 million hours = 50000 days i.e. ~137 years :)

If one were to believe the marketing claims, a RAID-0 array of 2 or even
4 disks should still be quite reliable.

I'd check with WDC on RAID "suitability" before buying - they're pushing
"TLER" for RAID just now and that can be important for error recovery. I
don't recall which drive it was but I read recently of a WDC support reply
which disclaimed any liability for RAID functionality since " the drive is
not qualified for RAID use".
But I digress. The video streams are stored on a different server.
The new server will be used to capture and play back.

Ah OK, so you'd probably not even be booting off the big RAID-0 array.
 
