Seagate Barracuda

TenPercent

Hi, I see the Seagate 80 gig Barracuda sells
for about twice the price of other 80 gig EIDE/ATA drives
like Maxtor's and Western Digital's.

Specifically, I was looking at the:

"Seagate ST380013ARK Internal Barracuda 7200 RPM 80 GB
Ultra ATA/100 Hard Drive"

If I buy this drive, can I expect to have very
few bad blocks show up over the first couple of years?

I'm kind of tired of using Linux's "badblocks"
and "e2fsck" programs to isolate my current hard
disk's bad blocks, which are showing up with more
frequency.
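
For reference, the commands I'm running look roughly like this
(on /dev/hda10, which is my /home partition):

  badblocks -v /dev/hda10    # scan the partition for unreadable blocks
  e2fsck -c -v /dev/hda10    # have e2fsck run badblocks and record the result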

Thank you very much for any helpful insights.
 
Rod Speed

TenPercent said:
> I see the Seagate 80 gig Barracuda sells for
> about twice the price of other 80 gig EIDE/ATA
> drives like Maxtor's and Western Digital's.

No it doesn't.

> Specifically, I was looking at the:
> "Seagate ST380013ARK Internal Barracuda
> 7200 RPM 80 GB Ultra ATA/100 Hard Drive"
> If I buy this drive, can I expect to have very few
> bad blocks show up over the first couple of years?

You can with any decent drive. I
don't include Maxtors or WDs in that.

> I'm kind of tired of using Linux's "badblocks" and
> "e2fsck" programs to isolate my current hard disk's
> bad blocks, which are showing up with more frequency.

It's dying; keep decent backups and replace it now.
 
J. Clarke

TenPercent said:
> Hi, I see the Seagate 80 gig Barracuda sells
> for about twice the price of other 80 gig EIDE/ATA drives
> like Maxtor's and Western Digital's.
>
> Specifically, I was looking at the:
>
> "Seagate ST380013ARK Internal Barracuda 7200 RPM 80 GB
> Ultra ATA/100 Hard Drive"
>
> If I buy this drive, can I expect to have very
> few bad blocks show up over the first couple of years?
>
> I'm kind of tired of using Linux's "badblocks"
> and "e2fsck" programs to isolate my current hard
> disk's bad blocks, which are showing up with more
> frequency.
>
> Thank you very much for any helpful insights.

If bad blocks are showing up with increasing frequency then you need to
replace the drive. This is the classic symptom of impending drive failure.

You should not have to use "badblocks" or "e2fsck"--the drive should spare
bad sectors transparently. If it doesn't, this suggests that its sparing has
been used up, which means that it is in _really_ bad shape.
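
One way to check (assuming the smartmontools package is installed) is to
read the drive's SMART attributes, e.g.:

  # dump all SMART attributes for the drive (run as root)
  smartctl -A /dev/hda

Attribute 5 (Reallocated_Sector_Ct) is the number of sectors the drive has
already spared; attribute 197 (Current_Pending_Sector) counts sectors
waiting to be spared.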

I don't know where you're getting that 80 gig Barracudas go for twice the
price of other 80 gig drives--Newegg lists Western Digital for $50, Seagate
for $53, and Samsung for $59.

Before you replace your drive make _sure_ that it is getting adequate clean
power and adequate cooling--that means check voltages with a meter to make
sure your motherboard is reporting them accurately, then monitor for a while
to make sure that they aren't dropping out of spec under load, and check
drive temperatures with a thermocouple probe or Tempilstik. If you've got
a power or cooling problem, fixing it might correct the problem you're
having with your existing drive.

Regardless of any of this, bad sectors after a year of operation is _not_
normal.
 
larry moe 'n curly

TenPercent said:
> Hi, I see the Seagate 80 gig Barracuda sells
> for about twice the price of other 80 gig EIDE/ATA drives
> like Maxtor's and Western Digital's.

Several months ago, I paid a final price of $20 for my 160G Seagate
Barracuda 7200.7, $40 for my 200G Seagate, and I haven't noticed any
local price differences among brands, except for Samsung, which I can't
buy as cheaply because it's never sold with a rebate here.
 
Arno Wagner

Previously J. Clarke said:
> If bad blocks are showing up with increasing frequency then you need to
> replace the drive. This is the classic symptom of impending drive failure.

I completely agree with this. The highest number of bad sectors I have
on a Maxtor disk is 279, but a) it has not changed for a year now,
and b) they are all invisible to the user except in the SMART status.
> You should not have to use "badblocks" or "e2fsck"--the drive should spare
> bad sectors transparently--if it doesn't this suggests that its sparing has
> been used up which means that it is in _really_ bad shape.

You should run e2fsck regularly, but not because of bad blocks. The
occasional automatic run on start-up is enough. "badblocks" is a
relic from an earlier time when HDDs did not hide defective sectors
from the user. Like many other Unix tools, it is very old. Today it
rarely serves a purpose.
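
If you want to adjust how often those automatic start-up checks happen, a
sketch (assuming standard e2fsprogs):

  # check the filesystem every 30 mounts or every month, whichever comes first
  tune2fs -c 30 -i 1m /dev/hda10
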
> I don't know where you're getting that 80 gig Barracudas go for twice the
> price of other 80 gig drives--Newegg lists Western Digital for $50, Seagate
> for $53, and Samsung for $59.

I would stay away from WD and Maxtor today. Seagate and Samsung both
seem OK.
> Before you replace your drive make _sure_ that it is getting adequate clean
> power and adequate cooling--that means check voltages with a meter to make
> sure your motherboard is reporting them accurately then monitor for a while
> to make sure that they aren't dropping out of spec under load and check
> drive temperatures with a thermocouple probe or Tempilstik.

Actually, checking drive temperature with hddtemp or smartctl should
be enough, if your drive supports this.
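
For example (assuming the hddtemp and smartmontools packages are installed):

  hddtemp /dev/hda                              # read the temperature sensor directly
  smartctl -A /dev/hda | grep -i temperature    # or pull it from the SMART attributes
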
> If you've got a power or cooling problem fixing it might correct the
> problem you're having with your existing drive.
>
> Regardless of any of this, bad sectors after a year of operation is _not_
> normal.

It might have been dropped. I saw this in several Maxtor drives
that had been dropped.

Arno
 
J. Clarke

Arno said:
> I completely agree with this. The highest number of bad sectors I have
> on a Maxtor disk is 279, but a) it has not changed for a year now,
> and b) they are all invisible to the user except in the SMART status.
>
> You should run e2fsck regularly, but not because of bad blocks. The
> occasional automatic run on start-up is enough. "badblocks" is a
> relic from an earlier time when HDDs did not hide defective sectors
> from the user. Like many other Unix tools, it is very old. Today it
> rarely serves a purpose.
>
> I would stay away from WD and Maxtor today. Seagate and Samsung both
> seem OK.

I don't have enough recent experience with WD or Maxtor to comment--on my
own systems I just had to replace a bad Maxtor but it had been running
continuously for about four years and the other three Maxtors in that
system seem to be fine.
> Actually, checking drive temperature with hddtemp or smartctl should
> be enough, if your drive supports this.

Agreed.

> It might have been dropped. I saw this in several Maxtor drives
> that had been dropped.

For a while (during the 75GXP era by the way) a lot of drives seemed to be
arriving wrapped in a couple of layers of small-bubble bubble wrap, which
did not provide the three inches of padding that the drive manufacturers
specify. During that time I encountered a couple of drives that reported
"excessive shock" right out of the package--after that I started ordering
retail drives--it was worth a couple of bucks extra to be sure it was
properly packaged.
 
J. Clarke

larry said:
> Several months ago, I paid a final price of $20 for my 160G Seagate
> Barracuda 7200.7, $40 for my 200G Seagate, and I haven't noticed any
> local price differences among brands, except for Samsung, which I can't
> buy as cheaply because it's never sold with a rebate here.

May I ask where you found those prices?
 
Folkert Rienstra

J. Clarke said:
> If bad blocks are showing up with increasing frequency then you need to
> replace the drive. This is the classic symptom of impending drive failure.
>
> You should not have to use "badblocks" or "e2fsck"--the drive should spare
> bad sectors transparently--if it doesn't this suggests that its sparing has
> been used up which means that it is in _really_ bad shape.

Ignore this troll, he knows better than that.
Badly written 'bad' sectors are not spared automatically, so bad sectors
showing up is *NOT* necessarily a sign 'that its sparing has been used up'.
A SMART sector-reallocation count that has reached the number of spares
documented in the drive's specs is the only real indicator 'that its
sparing has been used up'. Another sign may be that the drive has
switched off its write cache.
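
A quick way to check the write-cache state, assuming hdparm is available:

  # with no value, -W just reports whether write-caching is on or off
  hdparm -W /dev/hda
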
> I don't know where you're getting that 80 gig Barracudas go for twice the
> price of other 80 gig drives--Newegg lists Western Digital for $50, Seagate
> for $53, and Samsung for $59.
>
> Before you replace your drive make _sure_ that it is getting adequate clean
> power and adequate cooling--that means check voltages with a meter to make
> sure your motherboard is reporting them accurately then monitor for a while
> to make sure that they aren't dropping out of spec under load and check
> drive temperatures with a thermocouple probe or Tempilstik. If you've got
> a power or cooling problem fixing it might correct the problem you're
> having with your existing drive.

See, he knows better.
> Regardless of any of this, bad sectors after a year of operation is _not_
> normal.

It is when you have power supply or cooling problems.
Logical bad sectors that develop during such incidents go away when those
sectors are overwritten.
 
TenPercent

This is the funny thing about my
hard disk. I run badblocks and it reports
41 bad blocks on /home.

So I run "e2fsck -c -v /dev/hda10" so the
bad blocks will be added to the bad-block inode's list.
But after running "dumpe2fs -b /dev/hda10"
to see the block numbers in that list,
some of the 41 are NOT appearing--some
are but some are not.

So I'm kind of confused about what's happening.
Why wouldn't e2fsck place all 41 block numbers in the
list?

Thanks.
 
Bioboffin

J. Clarke said:
> If bad blocks are showing up with increasing frequency then you need
> to replace the drive. This is the classic symptom of impending drive
> failure.
>
> You should not have to use "badblocks" or "e2fsck"--the drive should
> spare bad sectors transparently--if it doesn't this suggests that its
> sparing has been used up which means that it is in _really_ bad shape.
>
> I don't know where you're getting that 80 gig Barracudas go for twice
> the price of other 80 gig drives--Newegg lists Western Digital for
> $50, Seagate for $53, and Samsung for $59.
>
> Before you replace your drive make _sure_ that it is getting adequate
> clean power and adequate cooling--that means check voltages with a
> meter to make sure your motherboard is reporting them accurately then
> monitor for a while to make sure that they aren't dropping out of
> spec under load and check drive temperatures with a thermocouple
> probe or Tempilstik. If you've got a power or cooling problem fixing
> it might correct the problem you're having with your existing drive.
>
> Regardless of any of this, bad sectors after a year of operation is
> _not_ normal.

Good advice here. I recently (two weeks ago) bought a new Seagate SATA
drive and got lots of (about 20) bad blocks in the first couple of days.
I realised that the PSU was the problem and replaced it. No new bad blocks
now--the system is working faster and better than before.

John.
 
Folkert Rienstra

Arno Wagner said:
> I completely agree with this.

Fool you.

> The highest number of bad sectors I have
> on a Maxtor disk is 279, but a) it has not changed for a year now,
> and b) they are all invisible to the user except in the SMART status.

[nonsense snipped]

> You should run e2fsck regularly, but not because of bad blocks. The
> occasional automatic run on start-up is enough. "badblocks" is a
> relic from an earlier time when HDDs did not hide defective sectors

They still don't. They replace them before they become bad, and for
that to happen the drive needs a successful read of, or a write to,
the sector first. Unrecoverable-read-error 'bad' sectors still show up;
they disappear only on writes to such 'bad' sectors.

> from the user. Like many other Unix tools, it is very old.
> Today it rarely serves a purpose.

It does if you have no other tools to make the 'bad' sectors
disappear by themselves without wiping the full hard drive.
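
A sketch of how to do that by hand, assuming you know the bad sector's LBA
from the SMART error log or from badblocks (the 12345 below is a
hypothetical number, and the write destroys whatever data was in that
sector):

  # overwrite one 512-byte sector at LBA 12345 with zeros; a successful
  # write lets the drive spare the sector if it really is defective
  dd if=/dev/zero of=/dev/hda bs=512 count=1 seek=12345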

[snip]
 
Folkert Rienstra

TenPercent said:
> This is the funny thing about my
> hard disk. I run badblocks and it reports
> 41 bad blocks on /home.
>
> So I run "e2fsck -c -v /dev/hda10" so the
> bad blocks will be added to the bad-block inode's list.
> But after running "dumpe2fs -b /dev/hda10"
> to see the block numbers in that list,
> some of the 41 are NOT appearing--some
> are but some are not.
>
> So I'm kind of confused about what's happening.
> Why wouldn't e2fsck place all 41 block numbers in the list?

Maybe because they were separate runs and not all 'bad' blocks
are detected on each separate run (especially if it works on the
basis of a timeout).
Maybe 'badblocks' works on the basis of a timeout but the read
command eventually succeeds, and the bad block magically disappears
(= is reassigned) and isn't detected anymore on the next run.
Maybe your power supply is marginal and fluctuates during a run,
so separate runs give a different outcome.

Then there's also a warning in the badblocks description:

"Important note: If the output of badblocks is going to be fed to the
e2fsck or mke2fs programs, it is important that the block size is prop-
erly specified, since the block numbers which are generated are very
dependent on the block size in use. For this reason, it is strongly
recommended that users not run badblocks directly, but rather use
the -c option of the e2fsck and mke2fs programs."

But it would be a rather big coincidence for that to produce several of
the same block numbers if there really were a block-size discrepancy.
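
A sketch of how to keep the block size consistent between the tools,
assuming standard e2fsprogs and the /dev/hda10 partition from the earlier
posts:

  # read the filesystem's block size from the superblock
  dumpe2fs -h /dev/hda10 | grep 'Block size'

  # run badblocks with that block size (4096 here, hypothetically)
  # and feed the resulting list to e2fsck
  badblocks -b 4096 -o badlist.txt /dev/hda10
  e2fsck -l badlist.txt /dev/hda10

  # or take the recommended route and let e2fsck drive badblocks itself
  e2fsck -c -v /dev/hda10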
 
Folkert Rienstra

Bioboffin said:
> Good advice here.

And it contradicts the suggestion that "its sparing has been used up".

> I recently (two weeks ago) bought a new Seagate SATA
> drive and got lots of (about 20) bad blocks in the first couple of days.
> I realised that the PSU was the problem and replaced it. No new bad blocks
> now--the system is working faster and better than before.

Nah, can't be. Obviously it must be "in _really_ bad shape".
 
Rod Speed

TenPercent said:
> This is the funny thing about my hard disk. I run
> badblocks and it reports 41 bad blocks on /home.

Is that completely reproducible, same bads on every run?

> So I run "e2fsck -c -v /dev/hda10" so the
> bad blocks will be added to the bad-block inode's list.
> But after running "dumpe2fs -b /dev/hda10"
> to see the block numbers in that list,
> some of the 41 are NOT appearing--some
> are but some are not.
> So I'm kind of confused about what's happening.

Some bads aren't actually bad in the sense of a bad
spot in the media, but are bad because there is either
loose material floating around in the sealed chamber
or a poor connection to the heads, which
doesn't happen in the same place all the time.
 
TenPercent

Rod said:
> Is that completely reproducible, same bads on every run?


Yes, I've run badblocks 5 times now; each time the
same 41 bad blocks show up.

But, as I mentioned, "e2fsck -c -v" does not place
all 41 bad blocks in the bad-block inode's list for some
reason; it only places some of them in the
list, which is viewable with "dumpe2fs -b".

Thanks again.
 
Bioboffin

Folkert said:
> Nah, can't be. Obviously it must be "in _really_ bad shape".

What precisely do you mean by "it"? The old failed PSU - an Antec Truepower
380W less than 18 months old? The Motherboard which is now working
perfectly? Contrary to your earlier assertion, I would say that it is clear
that YOU are the troll here. You have nothing useful to contribute, but
choose to criticise contributions where you have no understanding of the
situation.
 
Rod Speed

TenPercent said:
> Yes, I've run badblocks 5 times now; each time the
> same 41 bad blocks show up.
>
> But, as I mentioned, "e2fsck -c -v" does not place
> all 41 bad blocks in the bad-block inode's list for some
> reason; it only places some of them in the
> list, which is viewable with "dumpe2fs -b".

Then it is likely the block size question that's involved.
 
Folkert Rienstra

Rod Speed said:
> Is that completely reproducible, same bads on every run?
>
> Some bads aren't actually bad in the sense of a bad
> spot in the media, but are bad because there is either
> loose material floating around in the sealed chamber

Rotflol.

> or a poor connection to the heads, which
> doesn't happen in the same place all the time.

Yeah, that only happens 41 in a zillion times, obviously.
The chance of that happening over (some of) the exact
same sectors--twice in a row--is nil.
 
larry moe 'n curly

J. Clarke said:
> May I ask where you found those prices?

I got them from a Fry's (electronics stores the size of Wal-marts)
located within walking distance of a Fry's (supermarkets, known as
Kroger in most of the U.S.). Fry's ads can usually be found at
http://newspaperads.dfw.com (Dallas-Fort Worth) and www.ocregister.com
(Orange County Register), but exact deals often vary by region.

You can find lots of information about sales deals in the "Hot Deals"
forum at www.fatwallet.com.

www.salescircular.com lists many local computer & electronics deals, but
it never lists anything from Fry's, probably because Fry's main ad comes
out on Friday instead of Sunday. Best Buy, CompUSA, OfficeMax, Office
Depot, Circuit City, and Staples also often have hard drives cheap.
 
TenPercent

Why are some Seagate Barracudas twice as expensive per
gigabyte as other, similar models of Barracuda? Does that mean
the more expensive ones are more robust and better built, and thereby
less susceptible to bad blocks?

For example, one retailer sells a 160-gig Seagate Barracuda hard
disk for half the price per gig that another retailer charges (about 50 cents
per gig, or $85 total for 160 gigs). The other retailer sells an 80-gig
Barracuda (half the size) for $1 per gig, or $80 total for 80 gigs.

Here are the model numbers:

7200.7 ST3160021A __160GB__ Ultra ATA/100 7200RPM Hard Drive
(160 gigs, selling for $85)

AND

ST380013ARK Internal Barracuda 7200 RPM __80 GB__ Ultra ATA/100 Hard Drive
(80 gigs, selling for $80)

I'm thinking about buying the 80-gig even though it's virtually the
same price as the 160-gig, hoping that it's better made since it's more
expensive per gigabyte.
 
