HP MSA 2000 series SAN vs. Promise or other brands out there? Thoughts?


markm75

--------------------------------------------------------------------------------

For our 40-user environment, with 3 physical servers and about 20
virtual servers (SQL runs on a few of the virtual boxes, about 13
instances of SQL in total), and Exchange 2007 on a physical host, we
were looking to go SAN from here on out.

We have 2TB of data in use, with about 700GB added per year (by the end
of this year we'll be around 2.8TB). Currently we have 6TB of total
space available across the 3 servers.
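As a rough sketch of the growth math above (the 2 TB and 700 GB/year figures are from this post; the 5-year horizon is just for illustration):

```python
# Rough projection of the storage numbers above: 2 TB in use now,
# ~0.7 TB added per year, 6 TB of raw space currently available.
used_tb = 2.0       # current data in use, TB
growth_tb = 0.7     # added per year, TB

for year in range(1, 6):
    used_tb += growth_tb
    print(f"end of year {year}: ~{used_tb:.1f} TB in use")

# Years until the current 6 TB of raw space runs out:
years_left = (6.0 - 2.0) / growth_tb
print(f"~{years_left:.1f} years of headroom at the current growth rate")
```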

I was thinking a SAS solution was the best bet, rather than SATA, and
found a few options.

Here are some of my notes on each; anyone have any thoughts here?
I'm thinking iSCSI is much cheaper and a better choice, but the HP
requires their own brand of drives and they don't offer 1TB SAS at this
point (though the Seagate 1TB drive is actually 7200rpm, not 15K, and
dual-ported):

http://h71016.www7.hp.com/dstore/Mid...&ci_sku=AJ927A

HP MSA 2012i (AJ927A), $7,546, barebones SAN: 2U, 12 bays, expandable
to 48TB (48 drives), dual controllers, iSCSI interface.
**Free initiator driver from Microsoft (don't need an accelerator
license). More robust controllers, more cache (1GB per controller).
3-year warranty (next-day support). CANNOT use 3rd-party drives here!
(A 450GB dual-ported 15K SAS drive from HP is just coming out.)
*Thinking of going with 1TB SATA for one LUN (say 5 of them in RAID6),
the rest SAS in time.



http://www.provantage.com/promise-vte610fd~7PROM17H.htm

Promise VTrak VTE610FD: 3U, 16 slots, 3-year warranty, dual 4Gb Fibre
Channel, RAID6, $6,632, SAS/SATA (and simultaneously, too!) ($9,168
with 8TB of physical drives, ANY brand, not counting Fibre hardware).
One external 4-port 3Gb/s SAS connector for JBOD expansion (can connect
multiple chassis together).
Must add the desired hard drives (any brand).
(Switches: need an 8-port 4Gb Fibre switch, ~$900; PCIe Fibre Channel
host bus adapters, about $800 per unit): at least $3,300 in Fibre
hardware overall.
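For comparing the two boxes, the usable space of the RAID6 ideas above works out roughly as follows (a sketch; only the 5 x 1TB LUN is from my notes, the 16 x 500GB fill-out of the Promise and the hot spare are assumed for illustration):

```python
# RAID6 keeps two drives' worth of capacity for parity; a hot spare
# sits idle until a drive fails, so it contributes no usable space.
def raid6_usable(n_drives, drive_tb, hot_spares=0):
    data_drives = n_drives - hot_spares - 2
    return data_drives * drive_tb

# 5 x 1 TB SATA in RAID6, as floated for the HP MSA:
print(raid6_usable(5, 1.0))                   # 3.0 TB usable
# 16 x 500 GB filling the Promise chassis (8 TB raw), one hot spare:
print(raid6_usable(16, 0.5, hot_spares=1))    # 6.5 TB usable
```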


Thanks in advance
 

Arno Wagner

Previously markm75 said:
[quoted text snipped]

Personally I think SATA is fine, but you should use RAID6,
possibly with hot spares. We have some RAID6 Linux servers
with 8 SATA drives and Areca controllers; they give about 280MB/s
linear throughput, so SATA should be fast enough for most
applications. Stay away from those that only let you use
their own drives: you are at their mercy with regard to drive
cost and availability, and that is typically not a good deal
on either count. You should also make sure you either
have a spare unit on site or have very fast service response
times. Even with the fastest service response you might still
have to deal with downtimes of a day or more, e.g.
if they have to FedEx a replacement part.

Arno
 

markm75

[quoted text snipped]

The CDW rep seemed to think SAS made more sense, due to simultaneous
hits on the data array (i.e. the virtual servers). I used to think SATA
was good enough.

If SATA is good enough, this opens up the field a bit. To me, the
downside of going with a unit requiring its own manufacturer's drives
is cost, whereas the open ones have the downside of dealing with
3rd-party drive support teams to get a replacement drive sent out.

I think there was another version of the Promise unit that was
SATA-only and iSCSI, though honestly I'd rather have the option of
going either SAS or SATA down the road.

Any thoughts on fiber? Seems too costly to me.

It did seem like the controller cards in the HP were much more robust,
too.

I wish I had a few more models to pick from that did both SATA and SAS
and were iSCSI (any brand of drives)...
 

Arno Wagner

The CDW rep seemed to think SAS made more sense, due to simultaneous
hits on the data array (i.e. the virtual servers). I used to think SATA
was good enough.
If SATA is good enough, this opens up the field a bit. To me, the
downside of going with a unit requiring its own manufacturer's drives
is cost, whereas the open ones have the downside of dealing with
3rd-party drive support teams to get a replacement drive sent out.

Not necessarily. With today's low drive costs, you could make
sure the drives are only filled to 99% or so of capacity (or include at
least one drive with the smallest sector count and use that during
array creation) and then keep your own spares handy. If one breaks,
throw it away, put in a spare, and get a new spare from any source you
like.
I think there was another version of the Promise unit that was
SATA-only and iSCSI, though honestly I'd rather have the option of
going either SAS or SATA down the road.
Any thoughts on fiber? Seems too costly to me.

The only reason for fiber is that it can run longer distances than copper.
It did seem like the controller cards in the HP were much more robust,
too.
I wish I had a few more models to pick from that did both SATA and SAS
and were iSCSI (any brand of drives)...

Well, I don't think you really need SAS; it does have higher profit
margins, though. The question is of course what your mail server needs
in performance. From your numbers, I guess that was actually 700MB/day,
not 700GB. On Linux you could handle that with a standard SATA disk; it
is not very much. No idea what requirements Exchange has. You do need
to keep in mind that while SAS disks are faster, they are not that much
faster, maybe a factor of 2 for accesses. This means there is only a
relatively small window between "SATA is fast enough" and "even SAS is
far too painfully slow" in which SAS will actually solve a problem and
not just be more expensive.

BTW, if you really do have a disk access speed issue, you could later
move the critical directories to flash-based disks or to single (or
RAID1) 15,000rpm SCSI/SAS/SATA disks.

Arno
 
