buffer size on HDs: does size matter?

  • Thread starter: Pdigmking

Pdigmking

Been looking at HDs as of late. I see that Samsung has 2MB buffers while
almost everyone else has 8. I read a little hard drive explanation thing
that suggested that the manufacturers are adding buffer size because it's
cheap, but it doesn't really add that much to performance?

Any comments?

Paul.
 
New drives are coming out with 16MB. It makes a small difference. From 2MB
to 8MB or 16MB is a noticeable difference. From 8 to 16, not much.
 
Pdigmking said:
Been looking at HDs as of late. I see that Samsung has 2MB
buffers while almost everyone else has 8. I read a little hard
drive explanation thing that suggested that the manufacturers are
adding buffer size because it's cheap, but it doesn't really add
that much to performance?

If the answer is Yes, I'd like to tack on a follow up question.

Then how come buffer sizes are so small? Does making the buffer 32MB
add so much to the cost that people aren't willing to pay? Then
apparently it doesn't do much for performance. Or is it that adding
bigger buffers is more complex and expensive than just the memory?
Does it take special memory? Is it that most people just don't care
much about hard drive performance?

Sorry if that confuses things, I am really only asking one question.
How come (current typical) hard drive buffers are only 8 or 16MB?

Thank you.
 
That is just plain wrong; they have both.

Yep, you wouldn't be able to pick the drive with
the bigger cache without using a benchmark.
If the answer is Yes, I'd like to tack on a follow up question.
Then how come buffer sizes are so small?

Basically because 2MB is plenty.
Does making the buffer 32MB add so much
to the cost that people aren't willing to pay?

More that it's pointless.
Then apparently it doesn't do much for performance.
Correct.

Or is it that adding bigger buffers is more
complex and expensive than just the memory?
Nope.

Does it take special memory?
Yes.

Is it that most people just don't care
much about hard drive performance?

Nope, it has minimal effect on performance.
Sorry if that confuses things, I am really only asking one question.
How come (current typical) hard drive buffers are only 8 or 16MB?

Because that's fine.
 
If the answer is Yes, I'd like to tack on a follow up question.
Then how come buffer sizes are so small? Does making the buffer 32MB
add so much to the cost that people aren't willing to pay? Then
apparently it doesn't do much for performance. Or is it that adding
bigger buffers is more complex and expensive than just the memory?
Does it take special memory? Is it that most people just don't care
much about hard drive performance?
Sorry if that confuses things, I am really only asking one question.
How come (current typical) hard drive buffers are only 8 or 16MB?

I think the problem is that in order to really speed up reading,
you would need more like 256MB or more of cache. What the small
sizes are good for is speeding up writes by buffering them. But
if you make these buffers too large, you might lose data if people
shut down their computer but the disk has not finished writing
when the power goes off.

Maybe the disk manufacturers do benchmarks with Windows on how
much time they have between the last write and the poweroff and
design buffer size accordingly...

The other possible reason is that it is SRAM, not DRAM as in
computer memory, and SRAM is more expensive.

Arno
 
If the answer is Yes, I'd like to tack on a follow up question.

Then how come buffer sizes are so small?

The idea of a buffer is twofold.

1) Cache frequently accessed files.
2) Hold data in the fastest storage to make it immediately available to the
interface when the IDE/SATA port requests it.
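
To picture that second point, here is a toy sketch in Python of a drive-side
cache keyed by logical block address (LBA). It is purely illustrative, not how
any real firmware works; the 4kB block size and 2MB cache size are just
assumed round numbers:

    # Toy drive-side read cache keyed by logical block address (LBA).
    # Purely illustrative; real firmware is nothing like this simple.
    from collections import OrderedDict

    BLOCK_SIZE  = 4096                    # assumed block size in bytes
    CACHE_BYTES = 2 * 1024 * 1024         # assumed 2MB of cache RAM
    MAX_BLOCKS  = CACHE_BYTES // BLOCK_SIZE

    class ToyDriveCache:
        def __init__(self, read_from_platter):
            self.read_from_platter = read_from_platter  # slow path: media access
            self.blocks = OrderedDict()                 # LBA -> data, oldest first

        def read(self, lba):
            if lba in self.blocks:                      # hit: serve from fast RAM
                self.blocks.move_to_end(lba)
                return self.blocks[lba]
            data = self.read_from_platter(lba)          # miss: go to the platters
            self.blocks[lba] = data
            if len(self.blocks) > MAX_BLOCKS:           # overwrite oldest data first
                self.blocks.popitem(last=False)
            return data

A hit is served straight out of the drive's RAM, so the interface never has to
wait on the platters; that is all point 2 amounts to.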
 
Arno Wagner said:
I think the problem is that in order to really speed up reading,
you would need more like 256MB or more of cache.

A number plucked from your arse?
What the small sizes are good for is speeding up writes by buffering them.

Hardly.
Even if the full cache were to be used it only gives a 0.1 second advantage.
If the drive is badly fragmented it may get slightly more noticeable.
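
That figure is easy to sanity-check. Assuming a sustained media transfer rate
somewhere around 60MB/s, which is only a ballpark guess for drives of this
period rather than a spec value, streaming a full 8MB cache's worth of data
takes on the order of a tenth of a second:

    # Back-of-the-envelope check of the "0.1 second" claim.
    # The 60 MB/s transfer rate is an assumed ballpark, not a measured value.
    cache_mb = 8
    transfer_mb_per_s = 60
    print(cache_mb / transfer_mb_per_s)   # ~0.13 s to move a full cache's worth of data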
But if you make these buffers too large, you might lose data if people
shut down their computer but the disk has not finished writing when the
power goes off.

We are talking cache here, not (just) buffer.
The 'cache' is divided into read cache, write cache and buffer.
Caches are divided into segments, of which only a percentage is used
for buffering (read ahead, write behind), apart from the 'buffer', which is
only 128kB and buffers only a single command, whether read or write.
The rest is used to keep earlier instances of reads and writes, and they
will be overwritten in time, usually oldest data first.
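
To make that carve-up concrete, here is a back-of-the-envelope split for an
8MB cache. The segment count is invented purely for illustration; only the
128kB single-command buffer figure comes from the description above:

    # Sketch of one possible carve-up of an 8MB drive cache.
    # The segment count of 16 is invented; only the 128kB buffer is from the post.
    CACHE_KB  = 8 * 1024        # total cache on the drive
    BUFFER_KB = 128             # buffers a single command, read or write
    SEGMENTS  = 16              # assumed number of cache segments
    print((CACHE_KB - BUFFER_KB) // SEGMENTS, "kB per segment")  # 504 kB each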
Maybe the disk manufacturers do benchmarks with Windows on how
much time they have between the last write and the poweroff and
design buffer size accordingly...

So obviously you are not talking about the cache as a whole.
The other possible reason is that it is SRAM, not DRAM as in
computer memory, and SRAM is more expensive.

Or it is just single chip memory that is organized differently from
computer memory and therefore not produced in the same quantity
as computer memory, and consequently more expensive.
 
Not really.

HDD buffers are mostly to cache writes. Caching reads is done by the
application, OS or controller. Moving read caching to the drive is possible
but involves quite a lot more than plugging a few ICs into the board. Read
caching is not rocket science, conceptually, but implementing it efficiently
so that it yields a positive impact is almost rocket science.
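
To illustrate what caching writes means at the drive, here is a minimal
write-back sketch, assuming the simplest possible scheme; this is a toy model,
not any vendor's firmware:

    # Toy write-back buffer: the host's write "completes" as soon as the data
    # lands in drive RAM; the slow media write happens later, on flush.
    # Purely illustrative; real firmware is far more involved.
    class ToyWriteBuffer:
        def __init__(self, write_to_platter):
            self.write_to_platter = write_to_platter
            self.pending = {}                    # LBA -> data not yet on the media

        def write(self, lba, data):
            self.pending[lba] = data             # fast: buffered in drive RAM
            return "OK"                          # the host sees completion here

        def flush(self):                         # later, in the background
            for lba, data in sorted(self.pending.items()):
                self.write_to_platter(lba, data) # slow media access
            self.pending.clear()

Which is also where the power-loss worry comes from: anything still sitting in
that pending buffer when the power drops never reaches the platters.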


joe.
 
Joe Yong said:
Not really.
HDD buffers are mostly to cache writes. Caching reads is done by the
application, OS or controller. Moving read caching to the drive is possible
but involves quite a lot more than plugging a few ICs into the board. Read
caching is not rocket science, conceptually, but implementing it efficiently
so that it yields a positive impact is almost rocket science.

Particularly when done at the drive level when the drive
only knows about logical blocks and not about files etc.

Makes a lot more sense to do that sort
of caching at the OS level or the app level.
 
Rod Speed said:

Basically because 2MB is plenty.


More that it's pointless.

So Rod,

You're saying that 8MB doesn't really improve performance over 2MB?

By the way, I was referring to certain Samsung IDE drives which are now
only available with a 2MB buffer. I realize that Samsung has drives with 8MB,
but the question remains... 2 vs. 8?

Paul
 
Troll

Folkert Rienstra said:
Path: newssvr14.news.prodigy.com!newsdbm05.news.prodigy.com!newsdbm04.news.prodigy.com!newsdst01.news.prodigy.com!newsmst01b.news.prodigy.com!prodigy.com!newscon06.news.prodigy.com!prodigy.net!border1.nntp.dca.giganews.com!nntp.giganews.com!newsfeed-east.nntpserver.com!nntpserver.com!statler.nntpserver.com!canary.octanews.net!news-out.octanews.net!indigo.octanews.net!authen.yellow.readfreenews.net.POSTED!not-for-mail
Reply-To: "Folkert Rienstra" <folkertdotrienstra freeler.nl>
From: "Folkert Rienstra" <see_reply-to myweb.nl>
Newsgroups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage
References: <Xns9722B1D5215Epaugle 127.0.0.1> <Xns9722B44588B01follydom 207.115.17.102> <3vhiffF15r7l7U1 individual.net>
Subject: Re: buffer size on HDs does size matter?
Date: Mon, 5 Dec 2005 17:29:31 +0100
MIME-Version: 1.0
Content-Type: text/plain; charset="Windows-1252"
Content-Transfer-Encoding: 7bit
X-Priority: 3
X-MSMail-Priority: Normal
X-Newsreader: Microsoft Outlook Express 6.00.2800.1437
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2800.1409
Lines: 62
Message-ID: <43946b70$0$61254$892e7fe2 authen.yellow.readfreenews.net>
Organization: Read Free News
NNTP-Posting-Date: 05 Dec 2005 10:31:44 CST
X-Trace: DXC=1H_2^FU5k7?>YS=DM;9<J=bQ9W<K20`32O6Gh9bA988>U9RQ0K7o`01Y]NjTSMCeh=?M2[n03TfY;Q\eoJOUfaa5dUOPTj\i4_<
Xref: newsmst01b.news.prodigy.com alt.comp.hardware.pc-homebuilt:451540 comp.sys.ibm.pc.hardware.storage:363407

 
You're saying that 8MB doesn't really improve performance over 2MB?

Nope, that the improvement in performance is quite small, and 32MB
will make very little difference at all since it's basically a write buffer.
By the way, I was referring to certain Samsung IDE drives
which are now only available with a 2MB buffer. I realize that
Samsung has drives with 8MB, but the question remains... 2 vs. 8?

Like I said, the difference is quite small.
 
Joe Yong said:
Not really.

HDD buffers are mostly to cache writes. Caching reads is done by the
application, OS or controller. Moving read caching to the drive is possible
but involves quite a lot more than plugging a few ICs into the board.
Read caching is not rocket science,

Apparently it is to you, judging by your post.
conceptually, but implementing it efficiently
so that it yields a positive impact is almost rocket science.


joe.

That's saying the same thing twice, more or less:
Caching "frequently accessed files" is "holding data in the fastest storage to make it
immediately available to the interface when the IDE/SATA port requests it".

Presumably the second point is about caching ahead, where the data for a follow-up
request to a previous request for parts of a sequential file is cached even before
that follow-up request is issued, so that when it is issued the data comes from cache
rather than from the platters.

And then there is 3) and 4) as well, for the write cache side of the cache.

Let's see those headers again, John Troll.
 
Rod Speed said:
Particularly when done at the drive level when the drive only knows about logical blocks
and not about files etc.

It doesn't need to know about files.
A file is represented by block numbers, and if a particular file is
requested it is requested by those block numbers. If those block numbers
happen to be available in cache, then that is where they will come from.

Read-ahead caching is not about pre-emptive file caching; it is about caching
sectors that are likely to be read next by the next IO, an IO which may not
arrive in time, so that when it eventually does the data is available without
having to wait a revolution to still pick that data up.
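
A sketch of that idea, assuming the crudest possible heuristic (speculatively
pull in the blocks immediately after the one just read). The prefetch depth of
8 blocks is an arbitrary choice for illustration; real firmware detects
sequential streams far more cleverly and fills the cache in the background
while the head is already passing over those sectors:

    # Toy read-ahead: after serving a read, speculatively cache the next few
    # blocks so a follow-up sequential read need not wait a full revolution.
    # The depth of 8 blocks is an arbitrary illustrative choice.
    READ_AHEAD_BLOCKS = 8

    class ToyReadAhead:
        def __init__(self, read_from_platter):
            self.read_from_platter = read_from_platter
            self.cache = {}                          # LBA -> data

        def read(self, lba):
            if lba not in self.cache:
                self.cache[lba] = self.read_from_platter(lba)
            data = self.cache[lba]
            for next_lba in range(lba + 1, lba + 1 + READ_AHEAD_BLOCKS):
                if next_lba not in self.cache:       # prefetch the likely next blocks
                    self.cache[next_lba] = self.read_from_platter(next_lba)
            return data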
Makes a lot more sense to do that sort of
caching at the OS level or the app level.

Both don't know about your files either.
All they can do is load files that are adjacent to the file that you are loading
but that does nothing for the file that you are loading and the files that are
adjacent to the file that you are loading may have nothing whatsoever to do
with the app that you are using.
 
Yes, the buffer size makes a noticeable difference in how fast the system
can recall recent data from the hard drive. The 2MB hard drives are old and
out of date.
 
Prove it without a benchmark.

DaveW said:
Yes, the buffer size makes a noticeable difference in how fast the system
can recall recent data from the hard drive. The 2MB hard drives are old and
out of date.
 
Read-ahead caching is not about pre-emptive file caching; it is
about caching sectors that are likely to be read next by the
next IO, an IO which may not arrive in time, so that when it
eventually does the data is available without having to wait a
revolution to still pick that data up.

...and for sequential file reads this will work better if the disk
files are not fragmented.
 