Lots of mapped out bad sectors cause trouble?

jack smith

AIUI ... a bad sector on an IDE hard drive gets labelled as unusable
in the drive's map and a substitute sector is found. The user
wouldn't know about it because this happens transparently.

If there are LOTS of bad sectors then couldn't we have a situation
where the drive's performance is poor but there is no indication in
running drive diagnostics that there's anything wrong?

I have some hard drives which are much slower than similar ones.
Could a very large number of mapped out bad sectors be a *likely*
explanation for this?



BACKGROUND: The difference is most easily observable when I do an
online defrag of NTFS's own files (such as $MFT). The defragger
checks for and locks all data files before performing its defrag.
The speed it does this varies enormously between drives.

The difference seems to be of another order of magnitude in size to
the differences which might be due to model, firmware level, type of
data, file system, etc.
 
Grant

> AIUI ... a bad sector on an IDE hard drive gets labelled as unusable
> in the drive's map and a substitute sector is found. The user
> wouldn't know about it because this happens transparently.

Apart from the extra time taken to seek to the reserve track and back?
> If there are LOTS of bad sectors then couldn't we have a situation
> where the drive's performance is poor but there is no indication in
> running drive diagnostics that there's anything wrong?

SMART will show it; for example (with my comments interspersed):
....
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 067 062 006 Pre-fail Always - 94656318

Equals Hardware_ECC_Recovered --> okay

3 Spin_Up_Time 0x0003 097 097 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 126
5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0

No reallocated sectors

7 Seek_Error_Rate 0x000f 084 060 030 Pre-fail Always - 312238221
9 Power_On_Hours 0x0032 059 059 000 Old_age Always - 36285

Over four years spinning away

10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 099 099 020 Old_age Always - 1785
194 Temperature_Celsius 0x0022 037 053 000 Old_age Always - 37

37°C now, max was 53°C -- okay, no overheating.

195 Hardware_ECC_Recovered 0x001a 067 062 000 Old_age Always - 94656318
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 199 000 Old_age Always - 2

Oops, bumped the data cable

200 Multi_Zone_Error_Rate 0x0000 100 253 000 Old_age Offline - 0
202 TA_Increase_Count 0x0032 100 253 000 Old_age Always - 0
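
For scripting, the raw value of a given attribute can be pulled out of
smartctl's table with awk. A minimal sketch -- the sample line below is
copied from the dump above, and on a live system you would pipe
`smartctl -A /dev/sda` in instead:

```shell
# Pull the raw value of attribute ID 5 (Reallocated_Sector_Ct) out of
# a "smartctl -A" style table. A sample line stands in for the real
# command here; on a live system:
#   smartctl -A /dev/sda | awk '$1 == 5 { print $NF }'
line='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0'
raw=$(printf '%s\n' "$line" | awk '$1 == 5 { print $NF }')
echo "reallocated sectors: $raw"
```

A nonzero number here is the count of sectors the drive has already
swapped out behind your back.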
> I have some hard drives which are much slower than similar ones.
> Could a very large number of mapped out bad sectors be a *likely*
> explanation for this?

Yep, or retries on an iffy sector. The drive cannot remap an internal
sector until the OS writes the entire internal (physical) sector, which
may be larger than the 'reported' sector size (the drive lies about its
physical geometry to the OS).
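
So to give the drive a chance to remap a known-bad spot, you rewrite the
whole physical sector it sits in. A sketch against a scratch file -- the
bad LBA is hypothetical, and on a real drive you would target /dev/sdX
instead, destroying whatever data lives at that spot:

```shell
# Overwrite the whole 4 KiB physical sector containing a bad 512-byte
# LBA, so the drive's firmware can remap it on the write. A scratch
# file stands in for /dev/sdX in this sketch.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=2 status=none
lba_bad=2051                     # hypothetical bad 512-byte LBA
start=$(( lba_bad / 8 * 8 ))     # first 512-byte LBA of its 4 KiB sector
dd if=/dev/zero of="$img" bs=512 seek="$start" count=8 conv=notrunc status=none
echo "overwrote LBAs $start..$(( start + 7 ))"
rm -f "$img"
```

The division rounds the bad LBA down to an 8-sector (4 KiB) boundary, so
all eight 512-byte logical sectors of the physical sector get rewritten
in one go.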
> BACKGROUND: The difference is most easily observable when I do an
> online defrag of NTFS's own files (such as $MFT). The defragger
> checks for and locks all data files before performing its defrag.
> The speed it does this varies enormously between drives.

You're measuring something else: the time taken to read a file depends on
fragmentation and the seek distance between fragments.

Performing a sequential read of the drive surface[1] would be a better
test, as you could then listen for the seeking to remapped sectors.

[1] dd if=/dev/sda bs=1M of=/dev/null # at a unix-like command prompt

If you don't run Linux or Unix, try a recent live Linux CD.
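
To narrow down *where* a drive is slow, the sequential read can be done in
steps and timed. A sketch using a scratch file as a stand-in for /dev/sda
(run it against the real device as root, with much larger steps; the slow
offsets are where the retries live):

```shell
# Time sequential reads in fixed-size steps to locate slow regions.
# A scratch file stands in for /dev/sda here; on real hardware point
# dev at the drive and raise bs/count to useful sizes.
dev=$(mktemp)
dd if=/dev/zero of="$dev" bs=1M count=8 status=none
for skip in 0 2 4 6; do
    t0=$(date +%s%N)
    dd if="$dev" of=/dev/null bs=1M count=2 skip="$skip" status=none
    t1=$(date +%s%N)
    echo "offset ${skip} MiB: $(( (t1 - t0) / 1000000 )) ms"
done
rm -f "$dev"
```

A healthy drive shows a smooth, slowly declining read rate from the outer
tracks inward; a region full of retries sticks out as a step-change in the
per-step times.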
> The difference seems to be of another order of magnitude in size to
> the differences which might be due to model, firmware level, type of
> data, file system, etc.

Yes, read retries will dominate as the drive will do retries, then
the OS might also ask for more retries. HDD seek time is next if a
file is severely fragged or has many relocated sectors -- but a
modern drive with relocated sectors is on the way out and should
be replaced.

You may recover a HDD by writing zeroes to the entire drive; this is
the modern equivalent of a 'low level format'.

It's easy in Linux (this destroys all data on the target drive):

dd if=/dev/zero bs=1M of=/dev/sdX

For 'doze, use the manufacturer's bootable CD-image drive fixer --
it does the same thing a bit differently. Either way, the process
gives the drive's firmware a chance to remap any iffy sectors.

Grant.
 
Rod Speed

jack said:
> AIUI ... a bad sector on an IDE hard drive gets labelled as unusable
> in the drive's map and a substitute sector is found. The user
> wouldn't know about it because this happens transparently.

That last isn't necessarily true. If it's bad on a read, it
won't get transparently remapped until it's written to.
> If there are LOTS of bad sectors then couldn't we have a situation
> where the drive's performance is poor but there is no indication in
> running drive diagnostics that there's anything wrong?

No, essentially because remapping doesn't necessarily affect performance.

In practice the drive turns the LBA into CHS values mathematically,
and the remapped sectors are just part of that maths, so that has
no effect on performance.
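
For illustration, the classic interface-level mapping is pure arithmetic,
no lookup table -- which is the point being made here. The geometry values
below are made up, and modern drives use zoned recording internally, so
this is only the logical picture the IDE interface reports:

```shell
# Classic LBA -> CHS conversion: purely arithmetic.
# Geometry figures are hypothetical.
lba=12345
heads=16
spt=63                               # sectors per track
c=$(( lba / (heads * spt) ))         # cylinder
h=$(( (lba / spt) % heads ))         # head
s=$(( (lba % spt) + 1 ))             # sector (1-based)
echo "LBA $lba -> C=$c H=$h S=$s"
```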

And if a drive does have a large number of bads, it's dying, and
will be retrying on the not-completely-bad sectors, so that will
have a much bigger effect on performance, particularly the retries.
> I have some hard drives which are much slower than similar ones.
> Could a very large number of mapped out bad sectors be a *likely*
> explanation for this?

Very unlikely. Most likely they are just retrying on the not-completely-bad sectors.
> BACKGROUND: The difference is most easily observable when
> I do an online defrag of NTFS's own files (such as $MFT). The
> defragger checks for and locks all data files before performing its
> defrag. The speed it does this varies enormously between drives.

That can be due to other effects. Some defraggers vary very
significantly speed-wise on the file detail alone, not the physical drive.

It can also just be that what look to you like similar drives are very
different physically, particularly in sectors per track and seek times.
> The difference seems to be of another order of magnitude in size to
> the differences which might be due to model, firmware level, type of
> data, file system, etc.

Post the Everest SMART stats on the best and worst drives.
http://www.majorgeeks.com/download.php?det=4181
That will at least show what bad sectors the drives have.
 
Rod Speed

Grant wrote:
> Apart from the extra time taken to seek to the reserve track and back?

There is no reserve track with modern drives.
 
Arno

In comp.sys.ibm.pc.hardware.storage jack smith said:
> AIUI ... a bad sector on an IDE hard drive gets labelled as unusable
> in the drive's map and a substitute sector is found. The user
> wouldn't know about it because this happens transparently.
> If there are LOTS of bad sectors then couldn't we have a situation
> where the drive's performance is poor but there is no indication in
> running drive diagnostics that there's anything wrong?

You can get poor performance, but not really from the remapping itself.
You get it while the drive is trying to recover the sectors it is
eventually going to remap.

But you can get this diagnosed: just look at the raw value of the
SMART attribute for the remapped sector count.
> I have some hard drives which are much slower than similar ones.
> Could a very large number of mapped out bad sectors be a *likely*
> explanation for this?

Not really.
> BACKGROUND: The difference is most easily observable when I do an
> online defrag of NTFS's own files (such as $MFT). The defragger
> checks for and locks all data files before performing its defrag.
> The speed it does this varies enormously between drives.
> The difference seems to be of another order of magnitude in size to
> the differences which might be due to model, firmware level, type of
> data, file system, etc.

Drives remap at most a few thousand sectors. That is not enough for
a strong performance degradation. Drives that remap that many
sectors are also typically in the process of dying.

You likely have a different issue, or it may just be a natural speed
difference.
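
A back-of-envelope check on why a few thousand remaps are not enough to
hurt: assume (ballpark figure, not measured) that every remapped sector
costs one extra seek round trip during a whole-disk read.

```shell
# Rough worst case: each remapped sector costs one extra seek round
# trip during a full sequential read. Both figures are assumed
# ballpark values, not measurements.
remaps=1000                          # remapped sectors (high for a live drive)
ms_per_remap=15                      # assumed seek there + back, in ms
extra_s=$(( remaps * ms_per_remap / 1000 ))
echo "extra time over a whole-disk read: ~${extra_s} s"
```

Fifteen-odd extra seconds over a read that takes hours is noise; what
actually hurts is retries on not-yet-remapped sectors, where each attempt
costs at least a full rotation and the drive may retry dozens of times.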

Arno
 
Franc Zabkar

> AIUI ... a bad sector on an IDE hard drive gets labelled as unusable
> in the drive's map and a substitute sector is found. The user
> wouldn't know about it because this happens transparently.
>
> If there are LOTS of bad sectors then couldn't we have a situation
> where the drive's performance is poor but there is no indication in
> running drive diagnostics that there's anything wrong?
>
> I have some hard drives which are much slower than similar ones.
> Could a very large number of mapped out bad sectors be a *likely*
> explanation for this?
>
> BACKGROUND: The difference is most easily observable when I do an
> online defrag of NTFS's own files (such as $MFT). The defragger
> checks for and locks all data files before performing its defrag.
> The speed it does this varies enormously between drives.
>
> The difference seems to be of another order of magnitude in size to
> the differences which might be due to model, firmware level, type of
> data, file system, etc.

If a drive experiences more than 6 CRC errors, Windows XP degrades its
performance from DMA mode to PIO.

See http://winhlp.com/node/10

To check for bad sectors, try a SMART utility such as HD Sentinel
(Linux & Windows):
http://www.hdsentinel.com/

For benchmarking, try HD Tune:
http://www.hdtune.com/

Here are more SMART diagnostic tools:

CrystalDiskMark:
http://crystalmark.info/software/CrystalDiskMark/index-e.html

smartmontools (Linux/Windows):
http://sourceforge.net/projects/smartmontools/files/
http://sourceforge.net/apps/trac/smartmontools/wiki/Download

See this article for SMART info:
http://en.wikipedia.org/wiki/S.M.A.R.T.

Comparison of S.M.A.R.T. tools:
http://en.wikipedia.org/wiki/Comparison_of_S.M.A.R.T._tools

- Franc Zabkar
 
Franc Zabkar

> Apart from the extra time taken to seek to the reserve track and back?

AIUI, spare sectors are available on each track. If all have been
reallocated, then the next closest track is used.

Pages 182 & 183 of the following manual discuss defect management in a
Fujitsu drive:

MPG3xxxAH DISK DRIVES PRODUCT MANUAL:
http://www2.fcpa.fujitsu.com/sp_support/ext/desktop/manuals/mpg3xxxah-manual.pdf

AFAIK, the reserved track dates back to MFM days. This track was used
for diagnostic purposes.

Nowadays there is a hidden System Area in the "negative" cylinders.
Among other things, it contains the bulk of the firmware, a P-list
(factory defects), and a G-list (grown defects). I would think that
these defect lists would be copied to RAM on power-up.

- Franc Zabkar
 
holarchy

> AIUI, spare sectors are available on each track.

Not anymore.
> If all have been reallocated, then the next closest track is used.
> Pages 182 & 183 of the following manual discuss defect management in a Fujitsu drive:
> MPG3xxxAH DISK DRIVES PRODUCT MANUAL:

That's a real dinosaur now.
> AFAIK, the reserved track dates back to MFM days.

Nope. They didn't even have spares at all.
> This track was used for diagnostic purposes.

Different matter entirely.
 
