Are there any PCIe x1 SCSI RAID cards out there that don't cost ~$1000?


spodosaurus

Hi all,

As the subject line asks: Are there any PCIe x1 SCSI RAID cards out
there that don't cost ~$1000?

I can find PCIe SCSI cards and PCI/PCI-X SCSI RAID cards, but I can't
find any PCIe SCSI RAID cards for a x1 slot :(

TIA,

Ari

--
spammage trappage: remove the underscores to reply
Many people around the world are waiting for a marrow transplant. Please
volunteer to be a marrow donor and literally save someone's life:
http://www.abmdr.org.au/
http://www.marrow.org/
 

Paul

spodosaurus said:
Hi all,

As the subject line asks: Are there any PCIe x1 SCSI RAID cards out
there that don't cost ~$1000?

I can find PCIe SCSI cards and PCI/PCI-X SCSI RAID cards, but I can't
find any PCIe SCSI RAID cards for a x1 slot :(

TIA,

Ari

I'm surprised these exist. With SAS as a competing standard,
you'd think they wouldn't bother spinning a chip with PCI
Express on it.

http://computers.pricegrabber.com/storage-device-controllers/m/35140586/

Paul
 

Paul

Paul said:
I'm surprised these exist. With SAS as a competing standard,
you'd think they wouldn't bother spinning a chip with PCI
Express on it.

http://computers.pricegrabber.com/storage-device-controllers/m/35140586/

Paul

I missed the RAID part.

They probably put PCI Express x4 on them because they don't want to
restrict the performance of even a single bus. A U320 bus would be throttled
by the 250MB/sec limit of a PCI Express x1 link (minus overhead). Doing RAID
could mean being able to sustain more than 250MB/sec on the SCSI bus. A PCI
Express x4 link removes that limit and leaves room for two SCSI busses.
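
As a rough sanity check on those numbers (assuming the nominal 250MB/sec per
lane of first-generation PCI Express, before protocol overhead, and 320MB/sec
per U320 bus), here is a small Python sketch of the comparison:

# Illustrative bandwidth comparison only; real throughput is lower once
# protocol overhead and the drives themselves are taken into account.
PCIE1_LANE_MB_S = 250   # nominal PCI Express 1.x bandwidth per lane
U320_BUS_MB_S = 320     # theoretical peak of one Ultra320 SCSI bus

for lanes in (1, 4):
    link = lanes * PCIE1_LANE_MB_S
    for buses in (1, 2):
        scsi = buses * U320_BUS_MB_S
        verdict = "link-limited" if scsi > link else "not link-limited"
        print(f"PCIe x{lanes} ({link} MB/s) vs {buses} x U320 ({scsi} MB/s): {verdict}")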

You could use the above card, and software RAID, as an intermediate solution.
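
If you did go the software RAID route on Linux, the array creation itself is
simple. A minimal sketch using mdadm (the device names are placeholders, mdadm
must already be installed, and creating the array destroys whatever is on the
member disks):

# Minimal sketch: create a 3-disk RAID 5 array with Linux software RAID.
# The device names below are hypothetical; run as root on disks you can erase.
import subprocess

devices = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]   # hypothetical U320 member disks

subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=5",
     f"--raid-devices={len(devices)}",
     *devices],
    check=True,
)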

Or buy a "workstation" type motherboard, to make it easier to fit a good
card. There are 12"x9.6" boards with emphasis on either more PCI Express
or PCI-X slots. The PCI-X workstation boards, rely on a tunnel chip to
give a PCI-X interface (another chip with heatsink on the board).

P5K64 WS 12"x9.6" with four PCI Express x16 slots
http://www.asus.com.tw/products4.aspx?modelmenu=2&model=1692&l1=3&l2=82&l3=547&l4=0

Asus is not too forthright about what lane wiring is being used. The connectors
are x16 in size, but the P35 Northbridge only has 16 lanes on it. Asus uses a
PCI Express switching chip to provide interconnect to one or more of the
slots. The manual mentions the black slots are x4, but there really doesn't
appear to be enough bandwidth left to run the blue slot at x16. I expect
the blue slot runs at x16 when the other slots are not occupied. The blue
and white slots run at x8/x8 for Crossfire applications. But if all four
slots are occupied, I don't see a way to do that without dropping to x4/x4/x4/x4.

Still, it might give you the freedom to do something a bit more creative.

Check the workstation section at the bottom here:

http://www.asus.com.tw/products2.aspx?l1=3&l2=-1

Paul
 

spodosaurus

Paul said:
I missed the RAID part.

They probably put PCI Express x4 on them because they don't want to
restrict the performance of even a single bus. A U320 bus would be throttled
by the 250MB/sec limit of a PCI Express x1 link (minus overhead). Doing RAID
could mean being able to sustain more than 250MB/sec on the SCSI bus. A PCI
Express x4 link removes that limit and leaves room for two SCSI busses.

You could use the above card, and software RAID, as an intermediate
solution.

Or buy a "workstation" type motherboard, to make it easier to fit a good
card. There are 12"x9.6" boards with emphasis on either more PCI Express
or PCI-X slots. The PCI-X workstation boards, rely on a tunnel chip to
give a PCI-X interface (another chip with heatsink on the board).

P5K64 WS 12"x9.6" with four PCI Express x16 slots
http://www.asus.com.tw/products4.aspx?modelmenu=2&model=1692&l1=3&l2=82&l3=547&l4=0


Asus is not too forthright about what lane wiring is being used. The connectors
are x16 in size, but the P35 Northbridge only has 16 lanes on it. Asus uses a
PCI Express switching chip to provide interconnect to one or more of the
slots. The manual mentions the black slots are x4, but there really doesn't
appear to be enough bandwidth left to run the blue slot at x16. I expect
the blue slot runs at x16 when the other slots are not occupied. The blue
and white slots run at x8/x8 for Crossfire applications. But if all four
slots are occupied, I don't see a way to do that without dropping to x4/x4/x4/x4.

Still, it might give you the freedom to do something a bit more creative.

Check the workstation section at the bottom here:

http://www.asus.com.tw/products2.aspx?l1=3&l2=-1

Paul

Hi Paul,

It's an Intel S5000PAL server board that I'm working with. Its riser card has
one PCI-X slot and three others labelled PCIe x1... but in reviewing the
product specifications I'm wondering if those slots are mislabelled and
they're actually x8 slots. I've never handled an x8 card, so I didn't
immediately recognise if it was mislabelled. I'll have another look this weekend.

Ari

--
spammage trappage: remove the underscores to reply
Many people around the world are waiting for a marrow transplant. Please
volunteer to be a marrow donor and literally save someone's life:
http://www.abmdr.org.au/
http://www.marrow.org/
 

Paul

spodosaurus said:
Hi Paul,

It's an Intel S5000PAL server board that I'm working with. Its riser card has
one PCI-X slot and three others labelled PCIe x1... but in reviewing the
product specifications I'm wondering if those slots are mislabelled and
they're actually x8 slots. I've never handled an x8 card, so I didn't
immediately recognise if it was mislabelled. I'll have another look this weekend.

Ari

Table 5 on PDF page 33 mentions several riser options. Bandwidth seems to be
managed in x4 chunks. The riser slot pinout (further along in the document)
also tells you how many bus segments there are per riser.

http://download.intel.com/support/motherboards/server/s5000pal/sb/d31979007_s5000pal_tps_v1_4.pdf

I guess one riser slot has PCI Express lanes and PCI-X bus signals. The second
riser slot is just PCI Express.

On the riser assemblies themselves, they could use a x16 sized connector, and
only wire x4 lanes. That practice makes visual identification difficult.

Another way of detecting lane count on a motherboard is to spot the pair of
capacitors used per lane. There is a pair for transmit and a pair for receive:
one pair is at the source end, the other pair at the destination end. But the
riser itself is just wires, so you cannot use that trick to figure out the
lane wiring there.

Lanes should be routed as diff pairs, so there is a visual clue available when
you examine the copper traces.

There is at least one motherboard where they give you a PCI Express x4 connector
with one end of the connector left open. With the opening, you can plug a x8 or
a x16 card into the slot, and the unused lanes just hang in the air. Since
PCI Express can figure out how many lanes are connected, it still works.
AFAIK, the valid widths are x1, x2, x4, x8 and x16, and the hardware will use
the largest of those that is available.
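
On Linux you can also read back what each device actually negotiated, which
takes the guesswork out of identifying the slot wiring. A small Python sketch,
assuming a kernel recent enough to expose the link-width attributes in sysfs:

# Print maximum vs. currently negotiated PCIe link width for each PCI device.
# Assumes Linux sysfs; devices without the attributes are skipped silently.
import glob, os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    try:
        with open(os.path.join(dev, "max_link_width")) as f:
            max_width = f.read().strip()
        with open(os.path.join(dev, "current_link_width")) as f:
            cur_width = f.read().strip()
    except OSError:
        continue   # not a PCIe device, or attributes not exposed
    print(f"{os.path.basename(dev)}: max x{max_width}, current x{cur_width}")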

I have a suspicion that you have options for x4 PCI Express, in which case
you don't have to look for a x1 card.

There are some pictures of the slots, for comparison, here. The common blob
on the left is the power pins (+12V, +3.3V, 75W max). The variable sized
section on the right of each connector is the lanes.

http://en.wikipedia.org/wiki/Image:PCIExpress.jpg

The physical pinout is shown here. Capacitors for one direction are seen on
the left of the slot, with a pair of caps per lane. This picture makes
it easier to figure out why the connectors are the size they are.

http://images.tomshardware.com/2004/11/22/sli_is_coming/pcie-slot-big.gif

Paul
 
