Trim Command

philo

A while back I acquired a Dell Mini with a bad mini-SSD.

I replaced it with a Kingspec 16 gig drive and loaded XP.

I've recently noticed the performance was very sluggish, so I went to
Kingspec's website to see if they had a "TRIM" utility for XP, but found
nothing.


Even though defrag is supposedly not needed for an SSD... when I tried
it, it helped a lot. The drive showed as /very/ fragmented before I ran
defrag.
 
David W. Hodgins

philo wrote:
> Even though defrag is supposedly not needed for an SSD... when I tried
> it, it helped a lot. The drive showed as /very/ fragmented before I ran
> defrag.

Never defrag an SSD! It's not needed, and will wear out the drive
sooner.

For XP, I'd try booting to safe mode and leaving the system idle
overnight, to let the drive's garbage collection do its job.

Regards, Dave Hodgins
 
philo

"David W. Hodgins" wrote:
> Never defrag an SSD! It's not needed, and will wear out the drive
> sooner.



It helped the performance, and I'm not too worried about the life of the
drive... the machine is a spare and rarely used.
 
Norm X

"David W. Hodgins" wrote
Never defrag an ssd drive! It's not needed, and will wear out the drive
sooner.

Yesterday, I was at the SanDisk site to download their SSD toolkit (TRIM).
On their website, they said the same thing as above. However, they do not
write the system software. I'll grant that defragging an SSD too often
is wasteful. However, benchmark tools tell a different story. On the
one hand, you have the theoretical claim that an SSD has no
"seek time" because it is non-mechanical. On the other hand, you have
benchmark tests that suggest (to the naive) that an SSD has a nonzero seek
time. The issue is that the OS software is written for a block-structured
device. Call it the seek time of the software. Even on an SSD, benchmark
tools say that the seek time of the software is reduced after a defrag.
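
A crude way to see that "software seek time" directly: the sketch below (the file
name, sizes, and request count are all made up, and the OS page cache will flatter
both numbers unless the file is much larger than RAM) times one sequential pass over
a scratch file against thousands of random 4 KB reads of the same data. The gap
between the two is mostly per-request overhead in the OS and filesystem, not flash
latency.

# Rough sketch of "software seek time" (names and sizes are made up).
import os
import random
import time

PATH = "seek_test.bin"         # hypothetical scratch file on the drive under test
FILE_SIZE = 256 * 1024 * 1024  # 256 MB
BLOCK = 4096                   # 4 KB per random request
REQUESTS = 20000

# Create the scratch file once, in 1 MB chunks.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        for _ in range(FILE_SIZE // (1024 * 1024)):
            f.write(os.urandom(1024 * 1024))

with open(PATH, "rb", buffering=0) as f:
    # Sequential pass: a few large requests, mostly bandwidth-bound.
    t0 = time.perf_counter()
    while f.read(1024 * 1024):
        pass
    seq = time.perf_counter() - t0

    # Random pass: many tiny requests, mostly per-request-overhead-bound.
    offsets = [random.randrange(FILE_SIZE // BLOCK) * BLOCK for _ in range(REQUESTS)]
    t0 = time.perf_counter()
    for off in offsets:
        f.seek(off)
        f.read(BLOCK)
    rnd = time.perf_counter() - t0

print(f"sequential: {seq:.2f} s, random 4 KB: {rnd:.2f} s "
      f"({REQUESTS / rnd:.0f} requests/s)")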

I also have a Super Talent SSD whose manufacturer proclaims that TRIM
is handled automatically. I think that claim is oversold. However, in
Win7 it is possible to move the location of the pagefile and the ReadyBoost
cache. I move these files from time to time and it seems to
improve the perceived "lag time".

Here is another stupid fact. Google only incorporated support for
TRIM in Android 4.3, which is the next-to-last update. That means
a huge number of Android devices out there do not have support for
SSD TRIM. So after a year of use, an Android device is
frustratingly slow.

The problem of SSD lag time is also a worry in the iPad world. You need the
latest and greatest iOS and supporting hardware and a fat wallet.
 
Paul

Norm said:
> The issue is that the OS software is written for a block-structured
> device. Call it the seek time of the software. Even on an SSD, benchmark
> tools say that the seek time of the software is reduced after a defrag.
<snip>

I've noticed something similar when using a RAMDisk.

On the one hand, a software RAMDisk has a very high sustained
transfer rate. I can get a 4 GB/sec bandwidth rating.

The fun begins when you deal with 60,000 small files and
attempt to do some things. The OS almost gives the impression
there is an IOP limit present for some reason. If I saw the
CPU being pegged, then I'd be satisfied the OS was doing
all that it could - it would be saturated. But instead,
I can see it do things at a certain speed, and there are
CPU cycles left over, and the operations I'm doing don't
complete as fast as I would expect.

So while the seek time of a SATA SSD might be 25 µs due to
flash readout time, it's just possible the desktop OS
adds more time per IOP than we'd like. And a fragmented
file starts to cost us something.
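
A rough way to see that per-operation cost (the directory name, file count, and
payload size below are arbitrary): create a pile of tiny files and compare
wall-clock time with CPU time. If the files/second figure is low while CPU time
stays well under the elapsed time, the limit is per-IOP overhead in the OS and
filesystem rather than raw bandwidth.

# Toy illustration of per-file overhead (directory, count and payload are arbitrary).
import os
import time

TARGET = "smallfile_test"   # hypothetical directory on the drive under test
COUNT = 10000
PAYLOAD = b"x" * 1024       # 1 KB per file

os.makedirs(TARGET, exist_ok=True)

wall0, cpu0 = time.perf_counter(), time.process_time()
for i in range(COUNT):
    with open(os.path.join(TARGET, f"f{i:05d}.bin"), "wb") as f:
        f.write(PAYLOAD)
wall, cpu = time.perf_counter() - wall0, time.process_time() - cpu0

print(f"created {COUNT} files in {wall:.2f} s "
      f"({COUNT / wall:.0f} files/s), CPU time {cpu:.2f} s")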

*******

And the best way to defrag a device, when a decent amount
of fragmentation is present, is to copy the files off,
reformat (quick format), copy the files back, and run fixboot C:
or equivalent to fix up the partition boot code if present.
That's what I do for my WinXP machine on occasion. The fact
that this takes less time than defragmentation suggests a lower
number of writes for the same amount of benefit. If there were only
a tiny amount of fragmentation on the partition, then the
defragmenter might finish sooner. If the partition is a mess,
then the copy method wins. My defragmentation attempts were
taking more than eight hours, while the copy method took about
half an hour to forty minutes.

I can do that stuff on my desktop because I have more than
one OS and more than one hard drive, so it's relatively
easy to pick tools and situations for the job of copying off
C:.

I don't know how to do such a procedure safely for
any OS like Vista or later. I'd probably break something
on those. I don't know whether Robocopy would get everything.

Paul
 
David W. Hodgins

Norm X wrote:
> The issue is that the OS software is written for a block-structured
> device. Call it the seek time of the software. Even on an SSD, benchmark
> tools say that the seek time of the software is reduced after a defrag.

The only way defragging an SSD would make a difference, based on my
understanding, is if the fragments are smaller than the number of sectors
that can be transferred in one I/O operation. Then defragging would allow
the files to be retrieved in fewer operations.

That shouldn't happen unless the drive is nearly full to begin with.
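
A quick back-of-the-envelope version of that, with assumed numbers (512-byte
sectors and 256 sectors per I/O request are illustrative, not values from any
particular driver): once fragments shrink below one transfer's worth of sectors,
the request count multiplies.

# Worked example of the fragment-size argument; every number here is an assumption.
import math

SECTOR = 512                   # bytes per sector
MAX_TRANSFER = 256 * SECTOR    # 128 KB moved per I/O request (assumed)
FILE_SIZE = 100 * 1024 * 1024  # a 100 MB file

def requests_needed(fragment_size):
    # Each contiguous fragment costs ceil(fragment / MAX_TRANSFER) requests.
    fragments = math.ceil(FILE_SIZE / fragment_size)
    per_fragment = math.ceil(fragment_size / MAX_TRANSFER)
    return fragments * per_fragment

print("contiguous:     ", requests_needed(FILE_SIZE))   # 800 requests
print("64 KB fragments:", requests_needed(64 * 1024))   # 1600 requests
print("4 KB fragments: ", requests_needed(4 * 1024))    # 25600 requests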

Regards, Dave Hodgins
 
David W. Hodgins

Paul wrote:
> The fun begins when you deal with 60,000 small files and
> attempt to do some things. The OS almost gives the impression
> there is an IOP limit present for some reason.
<snip>

Check the interrupt count. The CPU can only handle so many interrupts
per second. I've seen systems slow to a crawl because a low battery in
a wireless mouse caused it to generate thousands of interrupts/second,
even though htop (on Linux) showed the CPU was mostly idle. I have no
idea why a low battery would cause the mouse to generate such a high
number of interrupts, but I have seen this happen.
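
On Linux, the counters behind a check like that live in /proc/interrupts; here is a
minimal sketch (the sample interval and output format are arbitrary) that reads them
twice and prints the busiest interrupt sources per second. On Windows, the rough
equivalent is the Interrupts/sec counter in Performance Monitor.

# Sample /proc/interrupts twice and print the busiest sources (Linux only).
import time

def read_counts():
    counts = {}
    with open("/proc/interrupts") as f:
        cpus = len(f.readline().split())      # header row: CPU0 CPU1 ...
        for line in f:
            parts = line.split()
            if not parts or not parts[0].endswith(":"):
                continue
            name = parts[0].rstrip(":")
            counts[name] = sum(int(p) for p in parts[1:1 + cpus] if p.isdigit())
    return counts

INTERVAL = 2.0
before = read_counts()
time.sleep(INTERVAL)
after = read_counts()

rates = {k: (after.get(k, 0) - v) / INTERVAL for k, v in before.items()}
for name, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{name:>8}: {rate:8.0f} interrupts/s")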

Regards, Dave Hodgins
 
philo

"David W. Hodgins" wrote


Yesterday, I was at the Sandisk site to download their SSD toolkit (TRIM).
On their website, they said the same thing as above. However, they do not
write the system software. I'll grant that to defrag an SSD drive too often
is wasteful. However, use of benchmark tools tells a different story. On the
one hand, you have the theoretical aspects which proclaim that an SSD has no
"seek time" because it is non mechanical. On the other hand, you have
benchmark tests that suggest (to the naive) that SSD has a nonzero seek
time. The issue is that the OS software is written for a block structured
device. Call it the seek time of the software. Even on an SSD, benchmarks
tools say that the seek time of the software is reduced after a defrag.


<snip>


The machine's performance was really poor... considering it has a 1.6 GHz
CPU and 2 gigs of RAM, it should have run XP quite well.
Since the manufacturer had no "TRIM" utility, I figured it would not hurt
to try defrag... and that did the trick.

Since the machine does not get used very often, I can't imagine I'd need
to defrag more than once a year (if even that much).

So the issue of running defrag and shortening the drive's life is not
a big concern.
 
miso

Paul wrote:
> And the best way to defrag a device, when a decent amount
> of fragmentation is present, is to copy the files off,
> reformat (quick format), copy the files back, and run fixboot C:
> or equivalent to fix up the partition boot code if present.
<snip>

If you copy to an external drive then reload to the SSD, doesn't that
just use memory cells that haven't been written to lately? That is, the
chips in the SSD try to balance the write cycles. Wear leveling, as they say.

IMHO an SSD is so damn fast compared to a hard drive that I don't give a
crap about optimization. I suppose if I were doing transactions for some
e-commerce service, I might care. But the speed is already so much better
that incremental increases don't mean much to me.

It is kind of like building a system and not bothering to get the
fastest RAM. Been there, done that, and in reality the difference is just
a few percent. The next generation CPU then blows you out of the water
speed-wise. I do believe in stuffing the mobos to the max with RAM. Lots
of RAM never hurts.
 
Paul

miso said:
> If you copy to an external drive then reload to the SSD, doesn't that
> just use memory cells that haven't been written to lately? That is, the
> chips in the SSD try to balance the write cycles. Wear leveling, as they say.
<snip>

The SSD has a level of indirection. The sector 0 we can see on the SSD
is not stored in flash block 0. It can be stored anywhere,
and some kind of table keeps a map of what is stored where. It
is that indirection that makes it possible to wear-level the thing.
If you pulled the chips and looked at the data in the flash blocks,
that randomization would make the partition virtually unreadable.
You really need the translation table to put things back in
order.
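
A toy model of that indirection, purely for illustration (the block count, the
allocation policy, and all names are invented, not how any real controller works):
logical sectors map through a table to whatever physical block the controller
picked, and always picking the least-worn free block is a crude stand-in for wear
leveling.

# Toy flash translation layer: logical sector -> physical block via a map.
# Sizes and the allocation policy are invented for illustration only.

class ToyFTL:
    def __init__(self, num_blocks=16):
        self.mapping = {}                          # logical sector -> physical block
        self.erase_counts = [0] * num_blocks       # wear per physical block
        self.free = set(range(num_blocks))         # blocks holding no live data

    def write(self, logical_sector, data):
        # Retire the old physical copy, if any; its block becomes free again.
        old = self.mapping.pop(logical_sector, None)
        if old is not None:
            self.free.add(old)
        # Crude wear leveling: always pick the least-erased free block.
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        self.erase_counts[target] += 1
        self.mapping[logical_sector] = target
        print(f"logical sector {logical_sector} -> physical block {target} ({data!r})")

ftl = ToyFTL()
for i in range(5):
    ftl.write(0, f"rev{i}")   # rewriting "sector 0" lands on a different block each time
print("erase counts:", ftl.erase_counts)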

If you keep an SSD powered up at night, it has its own processor
and firmware inside, and it will move the data around to different
flash blocks to consolidate used space. That's necessary because the
size of the writes is not the same as the natural block size of
the flash. If you do the AnandTech 4 KB random write test to an
SSD, it can take the SSD hours to undo the damage and move the
remaining 4 KB blocks together so that less space is wasted within
individual flash blocks (which will be larger than 4 KB). If you write
very large files, there is less damage due to fractional block usage,
and less tidying is necessary at night, or when the drive is idle
for a fraction of a second.
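
Rough arithmetic on that fractional-block waste, with assumed sizes (a 512 KB erase
block and 8 live 4 KB pieces per block are illustrative guesses, not specs of any
drive):

# Rough arithmetic on fractional flash-block usage (all sizes assumed).
ERASE_BLOCK = 512 * 1024      # assumed flash erase-block size
WRITE_SIZE = 4 * 1024         # small random writes
TOTAL_WRITTEN = 64 * 1024**2  # 64 MB of 4 KB random writes

writes = TOTAL_WRITTEN // WRITE_SIZE          # 16,384 small writes
live_per_block = 8                            # assume only 8 of the 128 slots in
                                              # each erase block hold live data
blocks_touched = writes // live_per_block     # 2,048 erase blocks tied up
occupied = blocks_touched * ERASE_BLOCK

print(f"{TOTAL_WRITTEN // 1024**2} MB of live data spread across "
      f"{occupied // 1024**2} MB of erase blocks until the firmware consolidates it")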

Paul
 
Norm X

philo said:
<snip>
> Since the manufacturer had no "TRIM" utility, I figured it would not hurt
> to try defrag... and that did the trick.
<snip>

Someone important once said that without measurement and without testable
hypotheses, it is impossible to do science. I read a Ph.D. thesis in
computer science a while back on the topic of optimizing SSD performance.
Both he and I have used this evolving benchmark software:

http://crystalmark.info/software/CrystalDiskMark/index-e.html

A while back I posted a bunch of benchmark tables that 'proved' that the
exFAT format is superior for SSD performance. I think, however, that SSD
hardware/software implementations are still evolving and not yet optimal.
 
philo

On 12/13/2013 01:36 AM, Norm X wrote:
> Someone important once said that without measurement and without testable
> hypotheses, it is impossible to do science.
<snip>

I did not perform any scientific tests, but I'm familiar enough with
hardware to know that a 1.6 GHz machine with two gigs of RAM should run
XP just fine.

Just as an example, it would sometimes take ten seconds for the machine
to react when I simply clicked on a dialog box.

After the defrag, XP performed the way I would have expected.

This is the only SSD I've used, and it certainly has not been
impressive.
 
