1TB Flash in 3.5" size?

trs80

Does anyone yet make a 1TB flash memory drive in a 3.5" physical format? If
so, could you pass on a reference? The interface would need to support
about a 400MB/s sustained rate. I can work with any interface, such as
Fibre Channel or whatever.
Thanks for any tips.
 
Arno

trs80 said:
Does anyone yet make a 1TB flash memory drive in a 3.5" physical format? If
so, could you pass on a reference? The interface would need to support
about a 400MB/s sustained rate. I can work with any interface, such as
Fibre Channel or whatever.
Thanks for any tips.

Nobody does, and nobody gets that rate, not even for large
accesses, although some manufacturers have SATA3 drives
planned with extensive internal multi-channel architectures.
For small accesses flash can be significantly slower than
disks.

For what you want, you may want to look at a traditional
RAM-fronted disk. It will be expensive though, and definitely
not available in 3.5". Alternatively you could build a
RAID0 with a really fast controller and flash disks.

Arno
 
Arno

Flash drives are not normally considered "slow" - read performance of
200+ MB/s, and writes of maybe 70 MB/s are possible with good drives.
That's a lot faster for reading than even top-range hard disks, and
similar for writing (the OP doesn't specify if they want reading or
writing speeds).

Indeed. However, a RAID0 with relatively cheap disks will give you
200+MB/s read and write speeds for large accesses. A 4-way
RAID0 (possible at least with Linux software RAID and likely with
the xBSDs as well) should reach 400MB/s large-access speed.
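If you go the Linux software RAID route, creating the array is a one-liner with mdadm. This is a sketch only; the device names /dev/sdb through /dev/sde are placeholders, not real devices, and you would pick your own chunk size and filesystem:

```shell
# Sketch: 4-way software RAID0 (striping) with mdadm.
# /dev/sd[b-e] are placeholder devices -- substitute your own disks.
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Large-access throughput scales roughly with the member count:
# four disks sustaining ~100 MB/s each give ~400 MB/s on large accesses.
hdparm -t /dev/md0   # quick sequential read check
```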

For small accesses the story is different: only a RAM frontend will
reach this speed, and flash may be even slower than disk here,
especially on writes.
I agree about the sizes - you don't get flash drives as big as 1 TB at
the moment (at least, not in standard 3.5" formats).

As for write endurance, you are about a decade out of touch...

Well, not for USB flash. I recently tortured a 2GB Kingston
USB stick to death, and it had consistent data errors (with no error
message, to make matters worse!) after about 3500 full overwrites.

Say the OP wants to overwrite his disk at 400MB/s; then 3500
full overwrites are reached after about 100 days of operation.
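That estimate is easy to check. A quick sketch, using only the figures quoted above (1TB capacity, 400MB/s sustained, death after 3500 full overwrites; these are the thread's figures, not vendor specs):

```python
# Back-of-the-envelope wear-out check with the thread's figures.
CAPACITY = 1e12          # 1 TB in bytes
WRITE_RATE = 400e6       # 400 MB/s sustained writes
OVERWRITES = 3500        # full-drive overwrites observed before failure

seconds_per_overwrite = CAPACITY / WRITE_RATE        # 2500 s (~42 min)
days_to_failure = OVERWRITES * seconds_per_overwrite / 86400

print(f"{seconds_per_overwrite:.0f} s per full overwrite")
print(f"{days_to_failure:.0f} days until wear-out")  # ~101 days
```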

I expect SATA flash is better, but not all of it may be. What, however,
does not happen with modern flash is that writing a few thousand
times to a single location kills the drive. Traditional flash
without wear leveling had that problem, with some dying
after 10,000...100,000 writes to the same sector.

Arno
 
Arno

David Brown said:
Arno wrote: [...]
Small read accesses are extremely fast with flash disks - far faster
than with hard disks, since small reads are dominated by the seek times.
True.

For small writes, it is certainly true that these are slower to complete
on a flash disk than on a hard disk, and less efficient than streamed
writes. But outside of synthetic benchmarks, so what? Small writes are
cached by the OS - as far as the application is concerned, they happen
almost instantaneously. And as long as you don't have too many of them
flushing to the disk at the same time, the writes are not going to cause
other performance issues for the flash disk - if the disk needs to read
from the same flash chip, the erase/write can be paused temporarily.
You'll only see a real-world problem if you are dealing with an
application that makes small writes and lots of fsyncs, combined with an
older flash drive that is poor at hiding the garbage collection.
Well, not for USB flash. I recently tortured a 2GB Kingston
USB stick to death, and it had consistent data errors (with no error
message, to make matters worse!) after about 3500 full overwrites.
USB flash devices are generally optimised for low costs rather than high
quality or high endurance. They also often have very poor erase block
management, since they have few chips and also must minimise the risk of
data loss if the device is removed unexpectedly. This means a single
block write to the device can cause many erase/writes to the flash.
He doesn't say whether he wants to stream reads or writes, or how long
he wants to sustain the transfers. My guess would be that he'd like to
break off writing and do the occasional read - there are not many
applications which produce new data at a rate of 34 TB per day, none
of which needs to be kept for more than forty minutes!

I can think of an application or two that needs this. Quite
specialized though.

So here is a question to the OP: What is your access profile?
Modern SLC flash chips will have endurance in the range of at least a
million erase/writes if you are nice to them (i.e., keep them at room
temperature). MLC devices used in cheaper disks have significantly
lower endurance.
So for continuous writing using good SLC disks, he's got an average of
something like 80 years before write endurance is a problem.

Yes, but even cheaper MLCs should be at 100'000 cycles today; that
is why I find the 3500-overwrite figure I found so disappointing,
and this thing uses an Intel flash memory chip, not some unbranded
or tier-3 vendor product.

Expensive SLC is very hard to break and should indeed survive
decades at full write rate.

Arno
 
calypso

Arno said:
Yes, but even cheaper MLCs should be at 100'000 cycles today, that

MLCs often have at most 10.000 E/W cycles... SLCs have around 100.000...
The reference is EMC's and STEC's documentation for EFD drives (in fact it's
the STEC ZeusIOPS)...
Expensive SLC is very hard to break and should indeed survive
decades at full write rate.

My calculation says that a 400GB SLC drive can last 150 years with 24/7/365
writes on it (SLC, 100.000 E/W cycles)...

--

Damir Lukic, calypso@_MAKNIOVO_fly.srk.fer.hr
http://inovator.blog.hr
http://calypso-innovations.blogspot.com/
 
Arno

MLCs often have at most 10.000 E/W cycles... SLCs have around 100.000...
The reference is EMC's and STEC's documentation for EFD drives (in fact it's
the STEC ZeusIOPS)...
My calculation says that a 400GB SLC drive can last 150 years with 24/7/365
writes on it (SLC, 100.000 E/W cycles)...


OK, say 400GB at 200MB/s. That gives 1.8 overwrites/h, i.e. about
55'000h, i.e. about 6.25 years. Sorry, your math is off. And
the 6.25 years assume perfect wear leveling. Assuming my
experience of death after 3'500 cycles on a 10'000-cycle-rated device,
the disk could die as early as within 2 years.

Even 1'000'000-cycle SLC only gives you 20-60 years.
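The lifetime arithmetic here can be sketched in a few lines; the 400GB capacity, 200MB/s rate and cycle counts are the figures from this post, and the 35% factor is the anecdotal death at 3'500 of 10'000 rated cycles:

```python
# Sustained-write lifetime estimate with the figures from the post.
CAPACITY = 400e9        # 400 GB in bytes
WRITE_RATE = 200e6      # 200 MB/s sustained
CYCLES = 100_000        # rated erase/write cycles

overwrites_per_hour = WRITE_RATE * 3600 / CAPACITY   # ~1.8 per hour
hours = CYCLES / overwrites_per_hour                  # ~55'000 h
years = hours / (24 * 365)                            # ~6.3 years

print(f"{overwrites_per_hour:.1f} overwrites/h, {years:.2f} years rated life")
# Anecdotally a device died at 35% of rated cycles (3'500 of 10'000):
print(f"early death at 35% of rated cycles: {years * 0.35:.1f} years")
```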

Arno
 
calypso

OK, say 400GB at 200MB/s. That gives 1.8 overwrites/h, i.e. about
55'000h, i.e. about 6.25 years. Sorry, your math is off. And
the 6.25 years assume perfect wear leveling. Assuming my
experience of death after 3'500 cycles on a 10'000-cycle-rated device,
the disk could die as early as within 2 years.
Even 1'000'000-cycle SLC only gives you 20-60 years.


Why are you referring to SSD drives and sequential writes? The main reason
SSDs are used is their high IOPS values! OK, I am talking from the
storage-vendor perspective, and not from the home-user perspective...


 
Arno

Why are you referring to SSD drives and sequential writes? The main reason
SSDs are used is their high IOPS values! OK, I am talking from the
storage-vendor perspective, and not from the home-user perspective...

I am talking about sustained maximum write speed. It does not need to be
sequential, but it is the worst case for the lifetime. Of course
a lower rate of small writes that still results in an effective
write rate (because of the larger internal block size) of 200MB/s
also hits this worst case.

And while high IOPS is one desirable parameter, it is not
the only one. For example, an SSD can well be used for an external
filesystem journal. This is a mostly-write and mostly sequential
operation. However, when you recover the journal, the IOPS
are the bottleneck. So you may want to put your journal on the
SSD to bring recovery time down dramatically. Or rollback
time, if it is a database journal.

For a home user, OTOH, you may actually hit your figure. But
home users do not run 24/7 anyway.

So while your 150-years figure is certainly good for boosting
sales, it is unusable for evaluating practical endurance. For
that you need to look at the particular worst case.

And there is a second problem. On power failure an SSD can
corrupt areas not written to, because of the large internal
block sizes. That means in high-reliability applications
you can actually only write it in a sequential fashion
and without a filesystem, as everything else is dangerous to
your data.

The short summary is that SSDs have write issues that
you need to understand in order to decide whether to
use them or not. They shine on read IOPS, though.

Arno
 
calypso

I am talking about sustained maximum write speed. It does not need to be
sequential, but it is the worst case for the lifetime. Of course a lower
rate of small writes that still results in an effective write rate
(because of the larger internal block size) of 200MB/s also hits this worst
case.

But SSDs in any serious environment are never used for sequential writes...
OLTP and similar environments need high IOPS, not MB/s... If you want MB/s,
go with a big bunch of SATA drives, and you'll get very cheap MB/s
performance...
So while your 150-years figure is certainly good for boosting sales, it is
unusable for evaluating practical endurance. For that you need to look at
the particular worst case.

200MB/s is sequential read performance, and who knows what block size was
used... I stay with the 150 years, because it's my calculation for the STEC
ZeusIOPS drive used in EMC storage systems... So, here's the calculation:

400 * 10^9 * 100000 / 4000 / 2000 / 365 / 24 / 3600

400GB drive (400 * 10^9 Bytes)
SLC technology (100.000 E/W cycles)
4000 (block size is 4kBytes)
2000 (average write IOPS)
365 (days per year)
24 (hours per day)
3600 (seconds per hour)

The result is 158 years...
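The steps above can be run directly; all figures are the ones listed in the calculation (4kB writes at 2000 IOPS means an effective write rate of only 8MB/s):

```python
# Per-IOPS endurance calculation from the post, made runnable.
CAPACITY = 400e9          # 400 GB drive, in bytes
CYCLES = 100_000          # SLC erase/write endurance
BLOCK = 4000              # 4 kB block size
IOPS = 2000               # average write IOPS

total_writable = CAPACITY * CYCLES        # bytes writable before wear-out
write_rate = BLOCK * IOPS                 # 8 MB/s effective write rate
years = total_writable / write_rate / (365 * 24 * 3600)

print(f"{write_rate/1e6:.0f} MB/s effective, {years:.0f} years")  # ~158 years
```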
And there is a second problem. On power failure an SSD can corrupt areas not
written to, because of the large internal block sizes. That means in
high-reliability applications you can actually only write it in a
sequential fashion and without a filesystem, as everything else is dangerous
to your data.

Nope... At least not with the drives I was working with... They all have 64MB
of cache with battery backup... This can be a problem for SMB drives, but
not for enterprise...

Sorry, but I am all into enterprise, and have totally lost touch with
reality in the normal SMB market... :(

 
Arno

But SSDs in any serious environment are never used for sequential writes...
OLTP and similar environments need high IOPS, not MB/s... If you want MB/s,
go with a big bunch of SATA drives, and you'll get very cheap MB/s
performance...

You may have a combination of mostly sequential writes and, under
some circumstances, a lot of random reads. And yes, this can happen
in a serious environment as well, although it requires a somewhat
more special scenario.
200MB/s is sequential read performance, and who knows what block size was
used... I stay with the 150 years, because it's my calculation for the STEC
ZeusIOPS drive used in EMC storage systems... So, here's the calculation:
400 * 10^9 * 100000 / 4000 / 2000 / 365 / 24 / 3600
400GB drive (400 * 10^9 Bytes)
SLC technology (100.000 E/W cycles)
4000 (block size is 4kBytes)
2000 (average write IOPS)
365 (days per year)
24 (hours per day)
3600 (seconds per hour)
The result is 158 years...

Ah, you assume writes fall into one block of 4kB and the disk
block size is 4kB. Then you have a write speed of 8MB/s and yes,
your number fits. I expect these are a bit more expensive ;-)

However, mass-market SSDs have 128kB blocks or even larger
(not exposed to the OS). There you get much lower numbers.
An affordable SSD with a 4kB block size would be nice, in fact,
due to much better small-write performance.
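The effect of the larger internal block can be sketched the same way. The 128kB erase block and the worst-case assumption of one full erase/write per small host write are illustrative figures, not measurements of any particular drive:

```python
# Worst-case write amplification: 4 kB host writes against a 128 kB
# internal erase block, with no write combining (illustrative figures).
CAPACITY = 400e9
CYCLES = 100_000
HOST_BLOCK = 4_000        # 4 kB host writes
ERASE_BLOCK = 128 * 1024  # 128 kB internal erase block

amplification = ERASE_BLOCK / HOST_BLOCK   # ~33x in the worst case
host_rate = HOST_BLOCK * 2000              # 8 MB/s of host writes
internal_rate = host_rate * amplification  # what the flash actually wears
years = CAPACITY * CYCLES / internal_rate / (365 * 24 * 3600)

print(f"amplification ~{amplification:.0f}x, lifetime ~{years:.1f} years")
```

With the same workload, the 158-year figure collapses to under five years, which is the point about internal block size.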
Nope... At least not with drives I was working with... They all have 64MB
cache that has battery backup... This can be a problem for SMB drives, but
not for enterprise...

Well, if you have a RAM-fronted SSD, you are in a different class anyway.
I remember that some Linux filesystem people are starting to worry
about this, because it can kill journalling. There are some ways
around the problem, but only if the SSD exposes the block size.
Or if you have enough money for the expensive stuff ;-)
Sorry, but I am all into enterprise, and have totally lost touch with
reality in the normal SMB market... :(

No problem. When you can really throw money at the problem, the
solutions look a bit different. Mass market can give you similar
performance and reliability a lot cheaper, but you have to go
some extra steps and really need to know what you are doing.

Arno
 
calypso

Ah, you assume writes fall into one block of 4kB and the disk block size
is 4kB. Then you have a write speed of 8MB/s and yes, your number fits. I
expect these are a bit more expensive ;-)

Nonono... :) There are a few applications that have fixed block sizes with
which they deal... I believe Exchange uses a 4kB block size, Oracle uses 8kB
and SQL Server uses 16kB by default... I could be wrong here; it has been a
long time since I last checked this...

Anyway, storage systems write to cache, and the cache flushes from time to
time, so there is no 24/7/365 writing in your wildest dreams... :)

BTW., these drives are capable of 32-64 parallel IO operations...
Well, if you have a RAM-fronted SSD, you are in a different class anyway. I
remember that some Linux filesystem people are starting to worry about
this, because it can kill journalling. There are some ways around the
problem, but only if the SSD exposes the block size. Or if you have
enough money for the expensive stuff ;-)

Nice to know this! Thanks! :) Will investigate it a bit further...
No problem. When you can really throw money at the problem, the solutions
look a bit different. Mass market can give you similar performance and
reliability a lot cheaper, but you have to go some extra steps and really
need to know what you are doing.

Well, my clients are buying, but are very afraid of SSD drives because of
the problems visible in the mass market... So I need to explain to them that
the SSD drives used in high-end equipment have hardly anything in common
with mass-market SSD drives... :)

 
Arno

Arno <[email protected]> wrote: [...]
No problem. When you can really throw money at the problem, the solutions
look a bit different. Mass market can give you similar performance and
reliability a lot cheaper, but you have to go some extra steps and really
need to know what you are doing.
Well, my clients are buying, but are very afraid of SSD drives because of
the problems visible in the mass market... So I need to explain to them that
the SSD drives used in high-end equipment have hardly anything in common
with mass-market SSD drives... :)

Ah, I see your problem. And it explains your stance, which I
think is justified on the equipment you are selling. If anybody
ever asks you about consumer-grade SSDs, give them my figures ;-)

Well, as I do understand the technology, I typically go for
mass-market, but my main application for large disk storage
so far was research data which was backed up on an enterprise
class tape library as well, so not really critical.

Arno
 
calypso

Ah, I see your problem. And it explains your stance, which I think is
justified on the equipment you are selling. If anybody ever asks you about
consumer-grade SSDs, give them my figures ;-)

Deal... ;)

BTW., I am working as a presales engineer for EMC, VMware and Symantec... :)
Well, as I do understand the technology, I typically go for mass-market,
but my main application for large disk storage so far was research data
which was backed up on an enterprise class tape library as well, so not
really critical.

Huh, tape libraries are good for smaller backup environments, but for big
customers, disk libraries (virtual tape libraries) are a much better
solution...


--
"Ogadjens li Kinezo zigoshe ?" upita kreker trci dzusog prdija.
"Ne znam ja nista !" rece curicaa dira "Ja samo paso likuje imbecilanm !" By runf

Damir Lukic, calypso@_MAKNIOVO_fly.srk.fer.hr
http://inovator.blog.hr
http://calypso-innovations.blogspot.com/
 
