new ssd

Flasherly

Kinda neat - up to 5x my transfer speeds over platters and my fastest
USB flash drive reads (hi-spec'd class 10 mem). No support whatsoever
- my OS is too old for their maintenance software, so I had to look
around to check it out generically instead.

New SSDs have better controllers, so the GC (garbage collection)
algorithm for reclaiming deletions over time isn't as much an issue
without XP's native TRIM support. I put it in a largely-read,
least-write capacity, for calls to programs held as links outside of
where the OS is located.

Formatted it with EASEUS as a single FAT32 primary partition with 20%
left unallocated, after reading an IBM paper on the added efficiency
controllers derive from free space. Seems they'd just give it that
and sell the drive with 20% less storage capacity.

Not exactly Seagate's 7200RPM 1TB on sale this week for the same price,
but, hey, it's new, flat and black.

Cycled read/write degradation failure -- I don't see the deal there.
Buy a HD and let it spin for one or three years and if the warranty
is out, that's just your tough luck. Run a CPU with poor cooling or
massively over-clocked, what do you expect? Same deal here, except
it's the SSD manufacturers raising the issue. I got the warranty, I
know the brand, and if I was afraid of easily burning up memory gates
prematurely I wouldn't have bought it.
 
Yousuf Khan

Kinda neat - up to 5x my transfer speeds over platters and my fastest
USB flash drive reads (hi-spec'd class 10 mem). No support whatsoever
- my OS is too old for their maintenance software, so I had to look
around to check it out generically instead.

New SSDs have better controllers, so the GC (garbage collection)
algorithm for reclaiming deletions over time isn't as much an issue
without XP's native TRIM support. I put it in a largely-read,
least-write capacity, for calls to programs held as links outside of
where the OS is located.

TRIM just hints to the SSD controller that it's safe to begin a garbage
collection run at that moment. However, garbage collection happens
regardless.
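
If you're on Windows 7 or later you can at least check whether the OS
is sending TRIM at all - a quick sketch of my own, shelling out to the
built-in fsutil query (there's no XP equivalent, which is why the
garbage collection matters more there):

import subprocess

# "DisableDeleteNotify = 0" means Windows is sending TRIM to the SSD;
# "= 1" means it isn't. Query only; nothing is changed. The exact
# output wording may vary slightly between Windows versions.
out = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True,
).stdout
print("TRIM enabled" if "= 0" in out else "TRIM disabled")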

Formatted it with EASEUS as a single FAT32 primary partition with 20%
left unallocated, after reading an IBM paper on the added efficiency
controllers derive from free space. Seems they'd just give it that
and sell the drive with 20% less storage capacity.

Sandforce controllers typically leave a large amount of memory reserved
for this very same reason. That's why you can always tell an SSD has a
Sandforce controller, because they typically have weird capacities, like
60/120/240 GB, instead of 64/128/256 GB. They reserve about 6%.
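
The arithmetic on that reserve, for what it's worth (just my own
back-of-envelope figures using the usual raw-vs-advertised sizes):

# Raw flash vs. advertised capacity for the usual SandForce-style sizes.
for raw_gb, advertised_gb in [(64, 60), (128, 120), (256, 240)]:
    reserved = raw_gb - advertised_gb
    print(f"{advertised_gb} GB drive: {reserved} GB reserved "
          f"({reserved / raw_gb:.1%} of the raw flash)")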

Not exactly Seagate's 7200RPM 1TB on sale this week for the same price,
but, hey, it's new, flat and black.

Cycled read/write degradation failure -- I don't see the deal there.
Buy a HD and let it spin for one or three years and if the warranty
is out, that's just your tough luck. Run a CPU with poor cooling or
massively over-clocked, what do you expect? Same deal here, except
it's the SSD manufacturers raising the issue. I got the warranty, I
know the brand, and if I was afraid of easily burning up memory gates
prematurely I wouldn't have bought it.

I was pretty concerned about this previously too, before buying it. But
after buying it, it seemed like people are being a little overcautious
about this.

Yousuf Khan
 
Flasherly

Yousuf Khan

Thanks, yo - all pretty new and tempting, very, to move it up into
regular placement and usage. As it is, though, it's pretty cool, too,
for "test box" considerations to explore new territory. Runs like a
top, quick and responsive for various software platform trials.
Usually don't have that caliber around, as I keep them pretty much
clean and dedicated to necessary routines that don't lend a lot to
experimentation. Either that or I get caught talking and some
swinging joe buys it, plops down a wad of cash I can't refuse.
Building's that way, also, semi-addictive once on a roll up the road
of researching and implementing better, more expensive gear.
 
RayLopez99

Kinda neat - up to 5x my transfer speeds over platters and my fastest

I wish I could see a real-life benchmark to see what this solid state drive and fast uP system would do in compiling a complex program in Visual Studio, compared to my antiquated Core 2 Duo system (which scores about 2000 units on http://www.cpubenchmark.net/high_end_cpus.html for the cpu, and my traditional HD is about 75 MB/s average transfer speed). Instead of waiting a minute, would you wait 5 seconds? That sort of thing. The raw benchmarks of 5x better are misleading IMO.

RL
 
Flasherly

I wish I could see a real-life benchmark to see what this solid state drive and fast uP system would do in compiling a complex program in Visual Studio, compared to my antiquated Core 2 Duo system (which scores about 2000 units on http://www.cpubenchmark.net/high_end_cpus.html for the cpu, and my traditional HD is about 75 MB/s average transfer speed). Instead of waiting a minute, would you wait 5 seconds? That sort of thing. The raw benchmarks of 5x better are misleading IMO.

RL

You know it seems more and more of a stretch for me to have said that,
now that I've used it a little longer. I don't really ever get
75MB/s ... 60 would actually be a stretch over 50MB/s, tops. Whereas,
worst case, I can say I'll see as low as 12MB/s, with perhaps
20-30MB/s being more the norm. I've since added another HD and
another OS for three OS boot choices -- two, though, being the same,
XP, with different service packs to get at some newer software stuff
that runs audio gear remotely through a USB connection (basic editing
on EPROM effect settings for guitar Fx and amplifier loops). Might
put another OS, something else or newer on there, as well.

That's two relatively ancient 200GByte HDs and an SSD on a 4-drive MB
SATA jumper block. If I get around to swapping in the next one up, a
newer 4-year-old 600GByte HD, I may see discernible speed increases
over them.

We're talking the usual XP transfer rates, say, as opposed to a DOS7
boot to Ghost a 700MByte compressed binary file, at native MB
effectiveness, into a reserved 2GByte XP partition, from one HD to
another, at something approaching 2 minutes. Socket Intel 478.

Not so fast. On a dual-core AMD machine with a couple of my newest
HDs, nothing smaller than a terabyte, I've seen the same thing clock
in at 45 seconds. The MB is of roughly the same vintage, though.

But this is a 64GByte SSD, small enough on size alone to fit in with
what platters were a decade ago. So, it's a specialty drive, like you
say, for experimenting with. Then, as their designers like to say -
'It'll wear out, sooner or later, according to how often you write to
it.' I've set it up as a program-linked drive to the OS. I install
no programs to the C: drive, never have, so it was a matter of
transferring something less than a couple hundred programs, in named
program directories, from a drive with a named PRG directory in the
root containing those programs, to that SSD. Two icons representing
folders in the XP TaskBar, containing icons for those two-hundred
programs. (Those two icons are also outside of C: Windows.) My
desktop is otherwise totally black.

I could go a step further and separate out any programs which I cannot
stop from writing back to themselves (logs, quarantines, or other
dynamic program entries), and then the SSD will effectively never be
written to, as a matter of routine or practical course, but only read.

As it is and so described, yes, there's benefit to be seen. Once the
Windows OS comes up from platters and hits quite a few AutoStart add-
ons linked to the SSD, they all happen at once. It's pretty.
Prettier yet, probably, just to dump Windows onto the SSD and be
done with it. But for now the added speed for some reason isn't so
pressing. I'm for the most part a bigger-is-better, rather than
faster-is-smaller, type. Perhaps apart from program compilations, at
some point, other future uses will likely arise. I used to do quite a
bit of video editing and encodes, but hardly even that these days. I
do have some hellacious indices to databases for tracking various
entities, although indices, since efficient database designs were
updated by TELCOs a decade ago, are lightning fast in themselves by
design proficiency already.
 
Yousuf Khan

I wish I could see a real-life benchmark to see what this solid state drive and fast uP system would do in compiling a complex program in Visual Studio, compared to my antiquated Core 2 Duo system (which scores about 2000 units on http://www.cpubenchmark.net/high_end_cpus.html for the cpu, and my traditional HD is about 75 MB/s average transfer speed). Instead of waiting a minute, would you wait 5 seconds? That sort of thing. The raw benchmarks of 5x better are misleading IMO.

RL

In my own personal life, I've seen huge increases in performance using
an SSD. Real world scenarios such as OS boot, OS patch, all go much
faster, I'm talking seconds on an SSD vs. minutes on an HDD: i.e. highly
noticeable. Smaller scale writes are not as huge of a boost as the
larger scale writes, but they are still a huge boost. All round, I would
say that the SSD has been the single most important performance boost
I've had in over 20 years with PC's! The previous biggest boost I had
gotten was from an upgrade from an 8088 XT processor to a 386DX
processor. :)

Yousuf Khan
 
Flasherly

The previous biggest boost I had gotten was from an upgrade from an 8088 XT processor to a 386DX processor. :)

Yo - when once upon a time it would take all night to convert 15meg of
rar>zip to format with a NEC V20, 2meg EMS 3.2 slotted Rampage Boards,
with Michael Bolton to explain for Desqview/EMS386 support swapping on
RIME relaynets, or how to setup modems to call out to Egypt from
within a Canadian blizzard.
 
RayLopez99

In my own personal life, I've seen huge increases in performance using
an SSD. Real world scenarios such as OS boot, OS patch, all go much
faster, I'm talking seconds on an SSD vs. minutes on an HDD: i.e. highly
noticeable. Smaller scale writes are not as huge of a boost as the
larger scale writes, but they are still a huge boost. All round, I would
say that the SSD has been the single most important performance boost
I've had in over 20 years with PC's! The previous biggest boost I had
gotten was from an upgrade from an 8088 XT processor to a 386DX
processor. :)

Yousuf Khan

Thanks YK, that's interesting. I'm curious as to the 'state of the art' or maybe 'mainstream' ssd today: do they 'wear out' (read / write hysteresis) in a couple of years, as was rumored to be the case around 10 years ago or so? That's why I was not an early adopter--don't want my ssd to not work after five years or so, even though I think that's about the average time for mean HD failure for mechanical platters (though I've had traditional HDs that last 10 years + no problem, luck of the draw I guess)

RL
 
Flasherly

Thanks YK, that's interesting. I'm curious as to the 'state of the art' or maybe 'mainstream' ssd today: do they 'wear out' (read / write hysteresis) in a couple of years, as was rumored to be the case around 10 years ago or so? That's why I was not an early adopter--don't want my ssd to not work after five years or so, even though I think that's about the average time for mean HD failure for mechanical platters (though I've had traditional HDs that last 10 years + no problem, luck of the draw I guess)

RL

They're cheap -- $50US on average sales, $40 or a little less
rebated, under $30 possible if less likely -- not a lot, anyway, for a
ticket to see the game at 64G. What's nailing it is IBM and Samsung -
among heavyweight names when considering quality - backing it up with
the longest 3-yr warranties while mixing it up with low-ball
competitive pricing.
 
Paul

RayLopez99 said:
Thanks YK, that's interesting. I'm curious as to the 'state of the art' or maybe 'mainstream' ssd today: do they 'wear out' (read / write hysteresis) in a couple of years, as was rumored to be the case around 10 years ago or so? That's why I was not an early adopter--don't want my ssd to not work after five years or so, even though I think that's about the average time for mean HD failure for mechanical platters (though I've had traditional HDs that last 10 years + no problem, luck of the draw I guess)

RL

SSDs and other kinds of flash drives use wear leveling.

http://en.wikipedia.org/wiki/Wear_leveling

Say you have a 32GB SSD, and the flash type used (MLC)
is rated for 1000 writes. It means basically, you
can do 32TB of writes to the drive, before it's exhausted
from a wear perspective. Without wear leveling, you
could "burn a hole in it", say, where the page file
is located. So wear leveling, is a big deal. There is
one level of indirection, between where you think the
data is stored, and where it is actually stored.

So I decide to write a program, which writes at 100MB/sec.
Perhaps it's a benchmark I wrote. I leave it running,
and forget about it. I come back later. I come back in

32x10**12 [bytes] / 10**8 [bytes/sec] = 32x10**4 seconds, or 320,000
seconds, or very roughly 100 hours, or about four days depending on
when I wake up.

OK, now my SSD is ruined. I wore it out, even with wear
leveling. Such an accident, if it happens to a hard drive,
you'd never notice.

On the other hand, I can use the SSD for every day activities.
Perhaps reading email, Firefox cache, System Restore and a
few other things, cause 1GB of writes in a single day. I
can do that for about 32000 days before hitting 32TB. My
SSD will last for 32000/365 = 88 years.
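
In rough Python, the same two cases side by side (using my
illustrative 32GB / 1000-cycle numbers above, not any particular
drive's rating):

CAPACITY_BYTES = 32e9            # 32GB drive
PE_CYCLES = 1000                 # assumed write endurance per cell
TOTAL_WRITE_BUDGET = CAPACITY_BYTES * PE_CYCLES   # ~32TB, assuming perfect wear leveling

def lifetime_days(bytes_written_per_second):
    return TOTAL_WRITE_BUDGET / bytes_written_per_second / 86400

print(lifetime_days(100e6))        # runaway 100MB/sec benchmark: ~3.7 days
print(lifetime_days(1e9 / 86400))  # light use, ~1GB per day: ~32000 days (~88 years)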

A hard drive on the other hand, can run for at least a
year, doing random writes, at 100MB/sec say, with no obvious
wear. That's because the heads don't touch the disk. So rather
than last 4 days, like my first calculation, the hard drive
can last for a year of pretty heavy (server) usage. It
lasts for a few more years, just based on the motor wearing out.
If the motor had a good lubrication system, it might last
a lot longer than they currently do.

If you use the SSD in the very light fashion suggested in
the second calculation, it exceeds the mechanically related
failure of the hard drive by a fair bit. 88 years versus
4 years (motor failure).

It's all a matter of extremes, one way or another.

S.M.A.R.T. for SSDs includes a wear indicator, but
it's pretty hard to say how useful it is.
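
If you have smartmontools handy, a quick pull of whatever wear counter
the drive reports looks something like this (attribute names vary by
vendor - Intel's is Media_Wearout_Indicator, Samsung's is
Wear_Leveling_Count - and /dev/sda is just a placeholder):

import subprocess

# Dump the S.M.A.R.T. attribute table and print any line that looks
# like a wear counter. Read-only; nothing is written to the drive.
out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if "Wear" in line:
        print(line)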

On each generation of flash, the endurance, or number
of writes per flash cell, is decreasing. The density is
increasing. Some day, the SSD will be as big as your
current hard drive, but will be about as reliable
as a sheet of toilet paper.

While SLC NAND has much greater endurance than MLC NAND,
all NAND types will go through density improvement, and
as such, the SLC NAND is also going to have reduced
wear properties, as time goes by. The difference would
be, if a consumer MLC has a joke value for endurance,
an enterprise SLC drive might still be useful. The SLC will
also be "behind" by a fair bit, in terms of capacity. And for
me at least, there's no way of knowing whether SLC will
stick around (since it means extra work to keep two kinds
of parts in production). If it's enough of a nuisance
financially, they could easily just stick to pure MLC.

Paul
 
RayLopez99

On each generation of flash, the endurance, or number of writes per
flash cell, is decreasing. The density is increasing. Some day, the
SSD will be as big as your current hard drive, but will be about as
reliable as a sheet of toilet paper.

While SLC NAND has much greater endurance than MLC NAND, all NAND
types will go through density improvement, and as such, the SLC NAND
is also going to have reduced wear properties, as time goes by. The
difference would be, if a consumer MLC has a joke value for endurance,
an enterprise SLC drive might still be useful. The SLC will also be
"behind" by a fair bit, in terms of capacity. And for me at least,
there's no way of knowing whether SLC will stick around (since it
means extra work to keep two kinds of parts in production). If it's
enough of a nuisance financially, they could easily just stick to
pure MLC.


Useful, thanks Paul. I guess in theory they should allow you to configure an SSD as a "D" drive (i.e., not where your primary boot partition is). Then, you can place your 'IO constrained' programs in that SSD "D" drive, like for me it might be Visual Studio, so that when I compile a program it would be ten times faster. Or so it seems. However, it might well be that if you have two drives, in a RAID or master/slave setup or whatever, with the mechanical platter drive being C:, that the C: drive will become the bottleneck since all D: searches have to go through C:. You would think the uP and OS is smart enough to avoid that bottleneck however, but I don't know.

I'll wait a few years to get a better price on SSD on my next PC, which I hope will have a uP at the 10 nm or 14 nm feature size (I have 32 and 45 nm chips now), see: http://en.wikipedia.org/wiki/22_nm

RL
 
Ting Hsu

[...] my traditional HD is about 75 MB/s average transfer speed).  Instead of waiting a minute, would you wait 5 seconds?  That sort of thing.  The raw benchmarks of 5x better are misleading IMO.

You aren't looking at the real problem with hard drives. The real
problem is that almost zero things you do perform a sequential read.
That is, besides movies and music, almost nothing else is reading from
your drive at 75 MB/s.

What happens is that your drive reads a small chunk of info, then
seeks to another location on the drive, reads another small chunk of
info, repeat ad nauseam. That's how your OS boots, that's how games
are loaded, that's how applications are loaded, and that's how
compilers work (compilers are worse, as they will interleave writes
with reads).

That's why hard drives are much, much slower than their 75 MB/s
transfer speeds indicate in real life. At 5 ms per seek (for a fast
hard drive), in typical usage, a hard drive spends as much (or more)
time seeking than it does data transferring.
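
To put rough numbers on it (a simple model I made up, assuming 5 ms
per seek and 75 MB/s sequential transfer, so illustrative only):

# Effective throughput when every read costs a seek plus the transfer itself.
def effective_mb_per_s(chunk_kb, seek_ms=5.0, sequential_mb_s=75.0):
    chunk_mb = chunk_kb / 1024
    total_s = seek_ms / 1000 + chunk_mb / sequential_mb_s
    return chunk_mb / total_s

print(effective_mb_per_s(64))   # ~11 MB/s for 64 KB random reads
print(effective_mb_per_s(4))    # ~0.8 MB/s for 4 KB random reads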

Solid state drives not only have higher read speeds (typically around
300-500 MB/s), but seek times are almost non-existent (under 0.1 ms).

Care to guess how fast my SSD loads up Microsoft Excel for the first
time after a reboot? 1 second. It's as if I had Excel running in the
background the entire time, but no, I ran it cold, after a reboot.
This type of practical performance is why nearly everyone who owns a
SSD swears by it.
 
Yousuf Khan

Care to guess how fast my SSD loads up Microsoft Excel for the first
time after a reboot? 1 second. It's as if I had Excel running in the
background the entire time, but no, I ran it cold, after a reboot.
This type of practical performance is why nearly everyone who owns a
SSD swears by it.

Yup, I was a doubter about the value of the SSD, until I finally got
one, it was a great purchase! Yes, it doesn't hold as much as my old
HDD's, but that's why I have a desktop, I get to keep all of my old
HDD's alongside the SSD, and I get speed of the SSD for my OS boot, and
the storage capacity of the HDD's for all of the rest.

Yousuf Khan
 
RayLopez99

[...] my traditional HD is about 75 MB/s average transfer speed). Instead of waiting a minute, would you wait 5 seconds? That sort of thing. The raw benchmarks of 5x better are misleading IMO.

You aren't looking at the real problem with hard drives. The real
problem is that almost zero things you do perform a sequential read.
That is, besides movies and music, almost nothing else is reading from
your drive at 75 MB/s.

What happens is that your drive reads a small chunk of info, then
seeks to another location on the drive, reads another small chunk of
info, repeat ad nauseam. That's how your OS boots, that's how games
are loaded, that's how applications are loaded, and that's how
compilers work (compilers are worse, as they will interleave writes
with reads).

That's why hard drives are much, much slower than their 75 MB/s
transfer speeds indicate in real life. At 5 ms per seek (for a fast
hard drive), in typical usage, a hard drive spends as much (or more)
time seeking than it does data transferring.

Solid state drives not only have higher read speeds (typically around
300-500 MB/s), but seek times are almost non-existent (under 0.1 ms).

Care to guess how fast my SSD loads up Microsoft Excel for the first
time after a reboot? 1 second. It's as if I had Excel running in the
background the entire time, but no, I ran it cold, after a reboot.
This type of practical performance is why nearly everyone who owns a
SSD swears by it.

Good to know, thanks T.Hsu. So should I buy an SSD and install it as a "D:", "secondary drive"? I don't trust it as my "C" drive, which is a SATA.

RL
 
RayLopez99

Yup, I was a doubter about the value of the SSD, until I finally got
one, it was a great purchase! Yes, it doesn't hold as much as my old
HDD's, but that's why I have a desktop, I get to keep all of my old
HDD's alongside the SSD, and I get speed of the SSD for my OS boot, and
the storage capacity of the HDD's for all of the rest.

So, you think SSD as "C" is better than as "D"? What if your slow "IO seek" programs, like say Excel or Visual Studio, are on a non-system (D) partition? Would they perform better if on "D" on an SSD drive? Or do most programs default onto the C drive during installation? So many questions... I'll have to research this.

RL
 
Paul

RayLopez99 said:
[...] my traditional HD is about 75 MB/s average transfer speed). Instead of waiting a minute, would you wait 5 seconds? That sort of thing. The raw benchmarks of 5x better are misleading IMO.

You aren't looking at the real problem with hard drives. The real
problem is that almost zero things you do perform a sequential read.
That is, besides movies and music, almost nothing else is reading from
your drive at 75 MB/s.

What happens is that your drive reads a small chunk of info, then
seeks to another location on the drive, reads another small chunk of
info, repeat ad nauseam. That's how your OS boots, that's how games
are loaded, that's how applications are loaded, and that's how
compilers work (compilers are worse, as they will interleave writes
with reads).

That's why hard drives are much, much slower than their 75 MB/s
transfer speeds indicate in real life. At 5 ms per seek (for a fast
hard drive), in typical usage, a hard drive spends as much (or more)
time seeking than it does data transferring.

Solid state drives not only have higher read speeds (typically around
300-500 MB/s), but seek times are almost non-existent (under 0.1 ms).

Care to guess how fast my SSD loads up Microsoft Excel for the first
time after a reboot? 1 second. It's as if I had Excel running in the
background the entire time, but no, I ran it cold, after a reboot.
This type of practical performance is why nearly everyone who owns a
SSD swears by it.

Good to know, thanks T.Hsu. So should I buy an SSD and install it as a "D:", "secondary drive"? I don't trust it as my "C" drive, which is a SATA.

RL

In terms of price/performance, the most noticeable improvement,
comes from buying a small SSD as your C:. If you still have
budget left, you can buy as big a D: as you want in an SSD.
But it'll cost you. If you need bulk storage (movie download
storage), that wouldn't be the best way to go (D: as SSD).

There is one poster on the groups, that uses nothing but
SSDs, and has a pile of them. But nobody else is
so enthused with them, to have entirely abandoned
hard drives. After all, you need a hard drive, to
back up your SSD (room to store multiple images of
it perhaps). Sooner or later, storage cost is important
enough, to have a hard drive somewhere in your storage
hierarchy. They're a pretty cheap form of storage.

Paul
 
RayLopez99

On Thursday, December 6, 2012 10:31:13 PM UTC+2, Paul wrote:

[good to know stuff]

Tx Paul and others.

Found the site for SSD's from a guy who has been writing on his website for over 10 years. http://www.storagesearch.com

I reproduce one article below (he's got a ton) that I found interesting. What it goes to is 'write endurance'. For most apps, you read more than write, something like 5:1. But assume it's 1:1, and making the worst case assumptions, for a relatively modern "March 2007" SSD drive the figure, says the author, is "51 years" before you get write endurance failure due to a 'rogue program' (since I code, this is a concern for me). But, since the author appears to be an SSD enthusiast, you have to figure some propaganda is possible. So instead of 2 M cycles, I figure 100k cycles before failure/saturation. Which cuts his 51 years by a factor of 20, or 51 years / 20 = 2.6 years. Actually that is still good--remember, we're taking the worst case.
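
Doing the arithmetic myself in Python (same inputs as the article, then swapping in the more pessimistic 100k cycles):

SECONDS_PER_YEAR = 365 * 24 * 3600

def endurance_years(write_cycles, capacity_bytes, write_rate_bytes_per_s):
    # capacity * cycles = total write budget, spread perfectly by wear leveling
    return write_cycles * capacity_bytes / write_rate_bytes_per_s / SECONDS_PER_YEAR

print(endurance_years(2_000_000, 64e9, 80e6))  # article's assumptions: ~51 years
print(endurance_years(100_000, 64e9, 80e6))    # 100k-cycle worst case: ~2.5 years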

The other problem I have is whether my old mechanical drive, a Western Digital 500 GB drive from a few years ago, will go on my 3 Gb/s SATA 2.0 ports of my mobo found here: http://www.asus.com/Motherboards/Intel_Socket_1155/P8H67M_LE/#overview I think the answer is clearly yes (I would be shocked otherwise), which leaves 2 x SATA 6Gb/s connector(s) for the SATA III SSD drive. In fact, now that I write this, I can put both the old WD HD and the new SSD on the two SATA III 6Gb/s connections, since I only have two drives = two connections.

So now the issue is how to set up the SSD. I am going to do a clean install of Windows 7 Professional, so that simplifies things. I assume there's a setting in BIOS to handle SSD? From this thread: http://www.tomshardware.co.uk/forum/254452-14-does-require-special-drivers-bios-settings I see I must Google "change windows to ahci". I get this Wikipedia link: http://en.wikipedia.org/wiki/Advanced_Host_Controller_Interface and I get this MSFT link: http://support.microsoft.com/kb/922976#method1

and this link: http://www.ithinkdiff.com/how-to-enable-ahci-in-windows-7-rc-after-installation/

Question for the board: without reading these articles, because I kinda know what they are getting at, is it fair to say that if, for a clean installation of Windows 7 Pro, I change the BIOS to "AHCI" before I do the clean install (I assume I can do this), then, when I install Windows 7 for the first time, I'll not have any problems? I think the answer is "YES".

Finally, the issue is: what do you do with your "C:" drive, do you keep it SSD or make it a traditional HD? I think the answer is the former. I think you store data like video and photos on your "D" (mechanical) HD, except stuff that needs to load fast, like for me my source code and libraries in Visual Studio, which I'll put on the SSD (C) drive.

One more noob question: I'll have a "D" drive being a mechanical drive, that's a SATA - 600 WDC WD5000AAKX-001CA0 drive. This, as I said above, I assume I can put on the SATA 3 Gb/s connections of the Mobo (I think this is called SATA II) and the new SSD drive on the SATA III 6 Gb/s connections, or, since I have two SATA III connections, put both on those two connections. Does this mean that somehow the old mechanical drive will slow down the new SSD drive? I assume 'No', since SATA is parallel (even though it says serial), not like the old ATA drives where you had a master/slave ribbon. Of course if data is on both C: and D: drives then you'll get a bottleneck from the D: drive, but that's a different issue.

One more Noob question: what size drive for C:, the SSD? Newegg / Amazon sells 240 GB for USD$220, and 120 GB for half that. If I have a 500 GB HD for D:, I think 240 GB is big enough for "C", yes? 1:2 ratio seems about right in my mind.

Yet one more Noob question: I've heard that if you get a crash, power surge, or failure on an SSD, the data is wiped out. But if you backup doing Acronis on an external USB (traditional) HD, then of course you can reinstall Windows, Acronis, and restore your image files, yes? That should be an obvious yes, just double checking.

Thanks in advance for any answers.

RL


This article was written March, 2007--

http://www.storagesearch.com/ssdmyths-endurance.html


The nightmare scenario for your new server acceleration flash SSD is that a piece of buggy software written by the maths department in the university or the analytics people in your marketing department is launched on a Friday afternoon just before a holiday weekend - and behaves like a data recorder continuously writing at maximum speed to your disk - and goes unnoticed.

How long have you got before the disk is trashed?

For this illustrative calculation I'm going to pick the following parameters:-
Configuration:- a single flash SSD. (Using more disks in an array could increase the operating life.)
Write endurance rating:- 2 million cycles. (The typical range today for flash SSDs is from 1 to 5 million. The technology trend has been for this to get better.

When this article was published, in March 2007, many readers pointed out the apparent discrepancy between the endurance ratings quoted by most flash chipmakers and those quoted by high-reliability SSD makers - using the same chips.

In many emails I explained that such endurance ratings could be sample tested and batches selected or rejected from devices which were nominally guaranteed for only 100,000 cycles.

In such filtered batches typically 3% of blocks in a flash SSD might only last 100,000 cycles - but over 90% would last 1 million cycles. The difference was managed internally by the controller using a combination of over-provisioning and bad block management.

Even if you don't do incoming inspection and testing / rejection of flash chips over 90% of memory in large arrays can have endurance which is 5x better than the minimum quoted figure.

Since publishing this article, many oems - including Micron - have found the market demand big enough to offer "high endurance" flash as standard products.)

AMD marketed "million cycle flash" as early as 1998.
Sustained write speed:- 80M bytes / sec (That's the fastest for a flash SSD available today and assumes that the data is being written in big DMA blocks.)
capacity:- 64G bytes - that's about an entry level size. (The bigger the capacity - the longer the operating life - in the write endurance context.)

Today single flash SSDs are available with 160G capacity in 2.5" form factor from Adtron and 155G in a 3.5" form factor from BiTMICRO Networks.

Looking ahead to Q108 - 2.5" SSDs will be available up to 412GB from BiTMICRO. And STEC will be shipping 512GB 3.5" SSDs.
To get that very high speed the process will have to write big blocks (which also simplifies the calculation).

We assume perfect wear leveling which means we need to fill the disk 2 million times to get to the write endurance limit.

2 million (write endurance) x 64G (capacity) divided by 80M bytes / sec gives the endurance limited life in seconds.

That's a meaningless number - which needs to be divided by seconds in an hour, hours in a day etc etc to give...

The end result is 51 years!

But you can see how just a handful of years ago - when write endurance was 20x less than it is today - and disk capacities were smaller.

For real-life applications refinements are needed to the model which take into account the ratio and interaction of write block size, cache operation and internal flash block size. I've assumed perfect cache operation - and sequential writes - because otherwise you don't get the maximum write speed. Conversely if you aren't writing at the maximum speed - then the disk will last longer. Other factors which would tend to make the disk last longer are that in most commercial server applications such as databases - the ratio of reads to writes is higher than 5 to 1. And as there is no wear-out or endurance limit on read operations - the implication is to increase the operating life by the read to write ratio.

As a sanity check - I found some data from Mtron (one of the few SSD oems who do quote endurance in a way that non specialists can understand). In the data sheet for their 32G product - which incidentally has 5 million cycles write endurance - they quote the write endurance for the disk as "greater than 85 years assuming 100G / day erase/write cycles" - which involves overwriting the disk 3 times a day.

How to interpret these numbers?

With current technologies write endurance is not a factor you should be worrying about when deploying flash SSDs for server acceleration applications - even in a university or other analytics intensive environment.
 
Ting Hsu

So, you think SSD as "C" is better than as "D"? What if your slow "IO seek" programs, like say Excel or Visual Studio, are on a non-system (D) partition? Would they perform better if on "D" on an SSD drive? Or do most programs default onto the C drive during installation? So many questions... I'll have to research this.

You mistake what's important and what's not.

OS and applications? Needs to be fast, but doesn't need to be "safe",
because you'll have DVDs and online downloads for recovery. Which
makes it a perfect candidate for SSDs. Thus my SSD is on my C drive.

Data? The vast majority of data you save is small files that are
rarely read in. Even for large files, like music or movies, you rarely
read them into the computer. And if you are wondering about slow read
speeds for movies, you should know that even slow 5400rpm hard drives
read in movies 10 times faster than you view them. Aka, as important
as your data is, you will never read in enough of it at a time to
justify a fast drive to store it on. Thus my hard drives are my D
drive, in raid 1 form, for redundancy.

I need a server storage device for my data, something like Apple's
Time Machine, only for Windows 7, copying snapshots to a network
server, but that's still on my "todo" list.
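
When I get around to it, the first cut will probably be nothing
fancier than dated full copies pushed to the server - a sketch along
these lines (the paths are made-up placeholders; robocopy ships with
Windows 7):

import subprocess
from datetime import date

SOURCE = r"D:\Data"                                   # hypothetical data folder
DEST = r"\\server\backups\snapshot-" + date.today().isoformat()

# /E copies all subdirectories (including empty ones); robocopy creates DEST.
# Each run is a full copy - real Time Machine-style tools hard-link unchanged files.
subprocess.run(["robocopy", SOURCE, DEST, "/E"])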
 
Flasherly

I'll probably be migrating over to the latest upgraded system I built
on a dual-core P4 2.66GHz, forgoing this P4 single core for the spare,
a general test-purposes computer, while not disturbing my fastest
X2 4200 dual core from its state as an audiovisual media station.

The 2.66 dual has the best CPU heatsink and will be welcome for being
an indistinguishably silent system in a 24/7 on-state configuration.
The SSD is what I did to it once assembled and up, added as a sort of
cake-icing gift.

These things, at least for me, tend to drag out into and over years of
uninterrupted online services -(all else being equal to the creek not
rising, or an ISP trying to put the screw unto me)- which is how I'll
be looking to structure software on the SSD.

How's that going to work... in a sort of "gateway" capacity for
updates, software provisions, online purchasing, and information
gathering. Programs, then, wouldn't be as much an emphasis in any
novel sense - sweeping or involved, such as setting up an OS involves
- but rather choice programs established among an older cadre of known
and familiar lots.

Hence, I know them, and among them are the candidates for instigating
extraneous or ancillary writes - logging purposes characterize it
better than any direct note of subsequent output. I'll consequently
split the program drive - an entity physically apart even though
assigned to the OS - into two [sub]drive entities across both the SSD
and a platter-drive array: a) those that habitually write as a modus,
and b) those that don't, with a) being of a near token dimension among
a preponderance of program functions not involving spurious output.

The intent is one overall subsidiary to a gateway concept. Data
received, and to an extent processed, from reserve storage capacities
distributional in nature (the other two computers) is largely a focus
of containment involving known programs. As the sole or overriding
purpose the SSD will contribute to that role, it'll then be a
repository, as it were, of reads, to favor its bias toward some
projected longevity based in theoretically non-write environs.
 
Paul

RayLopez99 said:
On Thursday, December 6, 2012 10:31:13 PM UTC+2, Paul wrote:

[good to know stuff]

Tx Paul and others.

Found the site for SSD's from a guy who has been writing on his website for over 10 years. http://www.storagesearch.com

I reproduce one article below (he's got a ton) that I found interesting. What it goes to is 'write endurance'. For most apps, you read more than write, something like 5:1. But assume it's 1:1, and making the worst case assumptions, for a relatively modern "March 2007" SSD drive the figure, says the author, is "51 years" before you get write endurance failure due to a 'rogue program' (since I code, this is a concern for me). But, since the author appears to be an SSD enthusiast, you have to figure some propaganda is possible. So instead of 2 M cycles, I figure 100k cycles before failure/saturation. Which cuts his 51 years by a factor of 20, or 51 years / 20 = 2.6 years. Actually that is still good--remember, we're taking the worst case.

The other problem I have is whether my old mechanical drive, a Western Digital 500 GB drive from a few years ago, will go on my 3 GB/s SATA 2.0 ports of my mobo found here: http://www.asus.com/Motherboards/Intel_Socket_1155/P8H67M_LE/#overview I think the answer is clearly yes (I would be shocked otherwise), which leaves 2 x SATA 6Gb/s connector(s) for the SATA III SSD drive. In fact, now that I write this, I can put both the old WD HD and the new SSD hard on the two SATA III 6Gb/s connections, since I only have two drives = two connections.

So now the issue is how to set up the SSD. I am going to do a clean install of Windows 7 Professional, so that simplifies things. I assume there's a setting in BIOS to handle SSD? From this thread: http://www.tomshardware.co.uk/forum/254452-14-does-require-special-drivers-bios-settings I see I must Google "change windows to ahci" I get this Wikipedia link: http://en.wikipedia.org/wiki/Advanced_Host_Controller_Interface and I get this MSFT link: http://support.microsoft.com/kb/922976#method1

and this link: http://www.ithinkdiff.com/how-to-enable-ahci-in-windows-7-rc-after-installation/

Question for the board: without reading these articles, because I kinda know what they are getting at, is it fair to say that if, for a clean installation of Windows 7 Pro, and if I change the BIOS before I do the clean install to "AHCI" (I assume I can do this), then, when I install Windows 7 for the first time, I'll not have any problems? I think the answer is "YES".

Finally, the issue is: what do you do with your "C:" drive, do you keep it SSD or make it a traditional HD? I think the answer is the former. I think you store data like video and photos on your "D" (mechanical) HD, except stuff that needs to load fast, like for me my source code and libraries in Visual Studio, which I'll put on the SSD (C) drive.

One more noob question: I'll have a "D" drive being a mechanical drive, that's a SATA - 600 WDC WD5000AAKX-001CA0 drive. This, as I said above, I assume I can put on the SATA 3 Gb/s connections of the Mobo (I think this is called Sata II) and the new SSD drive on the SATA III 6 Gb/s connections, or, since I have two SATA III connections, put both on those two connections. Does this mean that somehow the old mechanical drive will slow down the new SSD drive? I assume 'No', since SATA is parallel (even though it says serial) not like the old ATA drives where you had a master/slave ribbon. Of course if data is on both C: and D: drives then you'll get a bottleneck from the D: drive, but that's a different issue.

One more Noob question: what size drive for C:, the SSD? Newegg / Amazon sells 240 GB for USD$220, and 120 GB for half that. If I have a 500 GB HD for D:, I think 240 GB is big enough for "C", yes? 1:2 ratio seems about right in my mind.

Yet one more Noob question: I've heard that if you get a crash, power surge, or failure on an SSD, the data is wiped out. But if you backup doing Acronis on an external USB (traditional) HD, then of course you can reinstall Windows, Acronis, and restore your image files, yes? That should be an obvious yes, just double checking.

Thanks in advance for any answers.

RL


[article from http://www.storagesearch.com/ssdmyths-endurance.html quoted in full above - snipped]

Your "WD Blue 500 GB SATA Hard Drives ( WD5000AAKX)" is 126MB/sec max sustained.
You can put that on a SATA II port, without really hindering its performance.

http://www.wdc.com/global/products/specs/?driveID=896&language=1

*******

Your main concern for picking a disk controller operating mode,
might be what modes support TRIM.

Otherwise, AHCI might help with server style work loads. And
on servers using rotating hard drives.

The thing is, since a SSD has virtually zero seek time, and
no "head movement", there's no need to reorder operations to
optimize head movement, and complete the operations out of order.
I don't see how AHCI is a win otherwise, for an SSD. Just as
defrag of an SSD isn't necessary, even if the "colored graph"
in your third-party defragger says otherwise. It isn't necessary,
because the seek time is so low. You can read 8000 fragments just
as fast as reading one contiguous file (more or less). In terms
of user perception, I doubt you'd notice the difference.

http://en.wikipedia.org/wiki/TRIM

"Windows 7 only supports TRIM for ordinary (AHCI) drives
and does not support this command for PCI-Express SSDs
that are different type of device, even if the device
itself would accept the command"

I'm not going to stick my neck out and say that's the only
way to get TRIM. But it might be a safe assumption. The
Windows built-in AHCI driver is MSAHCI, but it's also possible
to install a custom driver (like the one from Intel) and
perhaps use an Intel version of it. I haven't memorized all
the details.

Changing disk operating modes in Windows 7, is a matter of
"re-arming" the disk discovery process at startup, by modifying
some registry settings. Otherwise, once the OS gets "comfortable"
with a certain discovered driver, it stops trying all of them
at startup. And that's where the "re-arm" comes in.
The situation gets slightly more complicated, if you install
your own copy of the Intel driver at some point (IASTOR versus
IASTORV versus RST ???). There is undoubtedly information
out there for you to look up on the topic.
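
As a read-only sanity check before flipping the BIOS, something like
this reads how the MSAHCI driver is set to start (the key and value
names are taken from the KB922976 article linked earlier, so verify
against it; 0 means it loads at boot):

import winreg

# Start = 0 means the MSAHCI driver loads at boot, so switching the
# BIOS to AHCI won't leave Windows 7 without a disk driver.
# Query only; KB922976 describes the actual change.
key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                     r"SYSTEM\CurrentControlSet\services\msahci")
start_value, _ = winreg.QueryValueEx(key, "Start")
print("msahci Start =", start_value)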

*******

If money is no object, maybe the fluff in that storage article
can be achieved. But you and I will be buying commodity disks.
For decent size (you quote "240 GB for USD$220, and 120 GB for half that"),
those will be MLC flash drives. With write endurance in the 3500 to 5000
cycle range. Intel released some small 20GB drives in SLC. Those
might have a higher write endurance, but you'd need a crapload
of those in some RAID mode, to make an OS drive.

If the price of the whimsical SSD is high enough, people can
make DRAM based storage devices, with no wearout mechanism
at all for an equivalent price. So there's a cap on how much
they can charge for a whimsical SSD, how much overprovisioning
they can do and so on. Material cost still matters.

You might not see specialized enterprise drives at your local
computer shop. You might not even see them advertised on the web.
And even if you did, they'd be thousands of dollars if they
had whimsical write endurance. Nobody in their right mind, sorts
through flash chips and "picks out good ones just for your drive".
Sorting is done at the silicon fab, and perhaps they're graded there
to have fewer initial defects, but the physics of the devices don't
change. They all come off the same wafer. If you read the research
papers, about what affects the ability to write to flash, you'd get
an entirely different perspective on pushing the claimed write
endurance.

And as each generation doubles or quadruples density, the
write endurance drops. The ECC code becomes slightly longer,
to cover for the inevitable errors in each read. The code allows
a trivial number of errors to be corrected, in the same sense
that the codes used on CDs, allow a scratched or dust coated
CD to continue to be readable. The ECC code is picked in each
generation, with an eye to the error characteristic. But personally,
I'm not at all that enthusiastic about the direction all of this
is taking. I'd rather have the capacity of drives remain fixed,
and generational changes improve the quality of drives, rather
than have a "4TB MLC SSD" with the reliability of toilet paper.
If I need crappy 4TB drives, I can get that in a mechanical
drive.

http://en.wikipedia.org/wiki/Multi-level_cell

You know, as soon as TLC devices become available, they'll be
crammed into commodity drives, and the write endurance will drop
some more.

http://www.anandtech.com/show/5067/understanding-tlc-nand/2

I don't see a problem with buying SSDs, as long as you have
the right mindset. Take my handling of USB flash sticks. I
don't store the only copy of data on them. I assume I'll plug
one in some day, and it'll be completely dead. My copy on the
hard drive, will save me. While SSDs talk a great show,
you can still find people who wake up in the morning,
switch on, and the BIOS can't find the drive. For those
moments, you have backups (and warranty support for your
SSD).

Ask "John Doe" poster, what he thinks of SSDs so far.

Before buying, read the reviews carefully. If an SSD
has obvious controller or firmware design flaws, it
might show up in the Newegg or Amazon customer reviews.
And then you get some pre-warning, before you become
a victim.

Paul
 
