8mb cache?

Rod Speed

Mr. Grinch said:
All other performance optimizations aside (OS, application), the
biggest difference will be seen only with certain types of applications.
Something that does sequential reads/writes of a large file, for example,
will see no benefit. The file will never fit in the cache memory,
and in a sequential operation there is no reason to ever re-read or write the
same portion of the file; you just keep going to the end. This would be
like a backup, restore, or disk-image operation. The bottleneck is still
the physical drive platters; a 2 MB vs 8 MB cache will show no difference.
On the other hand, if you have a situation where the same sectors
of the disk are being re-read and re-written constantly, the 8 MB
MIGHT be faster. I say might because if the data is less than 2
MB, then both caches will perform the same. But if it's significantly
bigger, then you'll see the 8 MB version pull ahead. Examples that
would involve constant access to the same sectors would be a
file system that constantly accesses the FAT or equivalent,

That would normally be handled by the OS level caching.
for example, a database,

That level of caching would normally be done by the database system itself.
or a swap file / page file.

And it would be a ****ed OS that did the swap file/page file that way.
The thing to keep in mind is we don't all run the same apps
and so some people will see a benefit and others won't.

Or it may well be that hardly any real-world work
will actually see any significant benefit at all.
Some people could see a benefit, but only if they've
configured the app to use that drive (i.e. a swap file).

No modern OS is that poorly implemented.
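The distinction the thread is drawing between sequential and repeated access can be sketched with a toy LRU simulation. The block size, cache sizes, and access patterns below are made-up illustrations, not actual drive firmware behaviour:

```python
from collections import OrderedDict

def hit_rate(accesses, cache_blocks):
    """Simulate a simple LRU block cache and return the hit rate."""
    cache = OrderedDict()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(accesses)

BLOCK = 64 * 1024  # pretend the drive caches in 64 KB blocks
SMALL = (2 * 1024 * 1024) // BLOCK   # "2 MB" cache -> 32 blocks
BIG   = (8 * 1024 * 1024) // BLOCK   # "8 MB" cache -> 128 blocks

# Sequential scan of a 100 MB file: every block is seen exactly once.
sequential = list(range((100 * 1024 * 1024) // BLOCK))
print(hit_rate(sequential, SMALL), hit_rate(sequential, BIG))

# Repeatedly re-reading a 4 MB hot set: too big for 2 MB, fits in 8 MB.
hot = list(range((4 * 1024 * 1024) // BLOCK)) * 50
print(hit_rate(hot, SMALL), hit_rate(hot, BIG))
```

With a plain LRU policy the sequential scan never hits either cache, while the repeated 4 MB working set thrashes the 2 MB cache completely but is served almost entirely from the 8 MB one.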
 
Alexander Grigoriev

OS readahead is no more than about 64 KB, and only if the file is opened with
certain flags (in Windows, FILE_FLAG_SEQUENTIAL_SCAN).
Good media applications open big files with FILE_FLAG_NO_BUFFERING (to avoid
cache bloat), and then there is no OS readahead at all.
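For concreteness, a minimal sketch of opening a file this way from Python via ctypes. The flag values are the documented winbase.h constants; the call itself is Windows-only and the code is illustrative rather than production-ready:

```python
import sys
import ctypes

# Documented Win32 constants (winbase.h).
GENERIC_READ              = 0x80000000
OPEN_EXISTING             = 3
FILE_FLAG_SEQUENTIAL_SCAN = 0x08000000  # hint: OS may read ahead
FILE_FLAG_NO_BUFFERING    = 0x20000000  # bypass the OS cache entirely

def open_unbuffered(path):
    """Open a file with OS caching bypassed (Windows only).

    With FILE_FLAG_NO_BUFFERING the OS does no readahead at all,
    and all reads must be sector-size aligned."""
    if sys.platform != "win32":
        raise OSError("CreateFileW is a Windows API")
    return ctypes.windll.kernel32.CreateFileW(
        path, GENERIC_READ, 0, None,
        OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, None)
```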
 
Folkert Rienstra

Mr. Grinch said:
All other performance optimizations aside (OS, application), the biggest
difference will be seen only with certain types of applications.

Something that does sequential reads/writes of a large file, for example,
will see no benefit. The file will never fit in the cache memory,
and in a sequential operation there is no reason to ever re-read or write the
same portion of the file; you just keep going to the end. This would be
like a backup, restore, or disk-image operation. The bottleneck is still
the physical drive platters; a 2 MB vs 8 MB cache will show no difference.

On the other hand, if you have a situation where the same sectors of the
disk are being re-read and re-written constantly, the 8 MB MIGHT be faster.
I say might
Correct.

because if the data is less than 2 MB, then both caches will perform the same.

And that's where you go wrong.
Any other reads will push some of that data out of the cache again.
So even when the frequently accessed data is less than 2 MB, the
8 MB version may actually hold on to it better.
But if it's significantly bigger, then you'll see the 8 MB version pull ahead.

Or not; totally dependent on the usage pattern.
Examples that would involve constant access to the same sectors
would be a file system that constantly accesses the FAT

That's probably the worst example that you could pick, unless it is about
using a single file, like in a database.
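The eviction effect Folkert describes can be illustrated with a small LRU model (sizes and patterns are invented for illustration): a 1 MB working set, smaller than either cache, is revisited between bursts of unrelated one-off reads:

```python
from collections import OrderedDict

def hit_rate(accesses, cache_blocks):
    """Simulate a simple LRU block cache and return the hit rate."""
    cache = OrderedDict()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)
    return hits / len(accesses)

# 1 MB hot set (16 x 64 KB blocks) -- smaller than either cache.
hot = list(range(16))
# Between every revisit of the hot set, a burst of 48 one-off reads
# (3 MB of unrelated data) passes through the cache.
accesses = []
for round_no in range(50):
    accesses += hot
    accesses += [1000 + round_no * 48 + i for i in range(48)]

print(hit_rate(accesses, 32))   # "2 MB" cache: hot set evicted every burst
print(hit_rate(accesses, 128))  # "8 MB" cache: hot set survives the bursts
```

Even though the hot set fits in the 2 MB cache with room to spare, the interleaved traffic flushes it before each revisit, so only the larger cache ever serves it from memory.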
 
Folkert Rienstra

Sam Williams said:
Sure, but quite a bit of the time the drive is way ahead
throughput-capability-wise anyway, so that's rather theoretical.


But it can't know the physical geometry detail of what's in the current track

As if it needs to. Read-ahead is read-ahead.
and can be preloaded for close to zero overhead if nothing else is pending.

As if the drive would do it differently.
That's presuming that you switched subject halfway from OS to drive.
Else it didn't make any sense.

Nope; if anything that particular problem has got worse now
that the OS isn't working in terms of physical cylinders, heads, etc.

And neither does the drive.

No wonder you didn't dare use your own name, Simon.
 
Arno Wagner

Sure, but quite a bit of the time the drive is way ahead
throughput-capability-wise anyway, so that's rather theoretical.
But it can't know the physical geometry detail of what's
in the current track that can be preloaded for close
to zero overhead if nothing else is pending.

As long as the sectors are sequential it hardly matters.
Track-to-track seeks are very fast.
Nope; if anything that particular problem has got worse now
that the OS isn't working in terms of physical cylinders, heads, etc.

Does not matter anymore. It used to matter a lot, with
track-to-track delays in the milliseconds and head-to-head delays
in the same range. Today you can mostly forget about it as long as
the reads/writes are sequential.

Arno
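Arno's point that track-to-track delays hardly matter for sequential transfers can be put in rough numbers (illustrative spec-sheet figures, not any particular drive):

```python
# Illustrative spec-sheet numbers, not any particular drive:
rpm = 7200
rotation_ms = 60_000 / rpm     # ~8.33 ms per revolution
track_to_track_ms = 1.0        # typical quoted track-to-track seek

# Worst case for a long sequential read: one full revolution per
# track plus one short seek between adjacent tracks.
time_per_track_ms = rotation_ms + track_to_track_ms
seek_fraction = track_to_track_ms / time_per_track_ms
print(f"{seek_fraction:.1%} of time on track-to-track seeks, worst case")
# In practice drives use track skew to overlap that seek with the
# rotation, so the real cost is close to zero.
```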
 
Sam Williams

As if it needs to.

Course it does, for best results.
Read-ahead is read-ahead.

Wrong. As always. If the read-ahead is in the current track,
that's going to pass under the heads anyway; when there is
nothing pending that needs the heads moved to a different
cylinder, reading ahead in THAT situation is always worthwhile.
zero overhead if nothing else is pending.
As if the drive would do it differently.

That bit was discussing the OS doing that.
That's presuming that you switched
subject halfway from OS to drive.

Pigs arse I did. I was just pointing out that only the
drive knows that cylinder detail, and so read-ahead
by the drive is not the same as by the OS.
Else it didn't make any sense.

Completely clueless. As always.
And neither does the drive.

It certainly knows that there is no penalty in reading ahead in
the current track if there is nothing else pending, so the contents
of the track will be available from the cache if a request for
that does show up later and there will be no need to wait
for the platter to rotate to where the requested sectors are.
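The rotational-latency saving being described here can be put in rough numbers (illustrative figures, not from a datasheet):

```python
# Illustrative model of what whole-track read-ahead buys.
rpm = 7200
rotation_ms = 60_000 / rpm   # ~8.33 ms per full revolution

# Without read-ahead, a later request for another sector on the same
# track waits on average half a revolution for the platter.
avg_rotational_wait_ms = rotation_ms / 2

# With the whole track already in cache, that request is served from
# RAM, so the rotational wait disappears entirely.
print(round(avg_rotational_wait_ms, 2))  # ms saved per such request
```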

Not a ****ing clue. As always, ****nut.
 
Sam Williams

As long as the sectors are sequential it hardly matters.

Pity about when they aren't sequential.
Track-to-track seeks are very fast.

But can never be as fast as no seek at all.
Does not matter anymore.

Course it does.
It used to matter a lot, with track-to-track delays
in the milliseconds and head-to-head delays in
the same range. Today you can mostly forget
about it as long as the reads/writes are sequential.

Pity about when they aren't.

And they mostly wouldn't be with the specific
multichannel situation being discussed anyway.
 
Folkert Rienstra

You actually pretend to understand that Simon gibber?
As long as the sectors are sequential it hardly matters.
Track-to-track seeks are very fast.

Which only occur at the end of a cylinder so that's bull.
A full cylinder likely swamps the whole cache.

Track-to-track seeks have nothing to do with it, except
for once every so many accesses.
It's very doubtful whether a cylinder crossing will affect
cache-ahead behaviour.
Does not matter anymore.

Never has mattered.
It used to matter a lot with track-to-track delays in the
milliseconds and head-to-head delays in the same range.

Nope. Nothing to do with cache-ahead.
Today you can mostly forget about it as long as the reads/writes
are sequential.

And what has that got to do with cache-ahead?
All physical reads/writes are sequential. All clusters are sequential.
It's impossible to read/write nonsequential in a single command.
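That restriction is visible in the command format itself. As an illustration, a SCSI READ(10) command descriptor block (ATA read commands have the same start-plus-count shape) can be packed like this:

```python
import struct

def read10_cdb(lba, num_blocks):
    """Build a SCSI READ(10) command descriptor block.

    The command can only express a starting LBA and a contiguous
    block count -- there is no way to request scattered sectors
    in a single command."""
    return struct.pack(">BBIBHB",
                       0x28,        # READ(10) opcode
                       0,           # flags
                       lba,         # 32-bit starting logical block address
                       0,           # group number
                       num_blocks,  # 16-bit transfer length (contiguous)
                       0)           # control byte

cdb = read10_cdb(lba=123456, num_blocks=8)
print(len(cdb), cdb.hex())
```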
 
Folkert Rienstra

Sam Williams said:
Pity about when they aren't sequential.

Nonsequential sectors, what else is new, Simon?
But can never be as fast as no seek at all.



Course it does.


Pity about when they aren't.

All accesses are sequential, oh clueless Simon.
You can't do it differently when all that you can specify in a
command is the starting sector and the number of sectors to get.
 
Sam Williams

just the pathetic excuse for a mindless troll
that fools absolutely no one at all. As always.
 
Sam Williams

Folkert Rienstra said:
You actually pretend to understand that Simon gibber?


Which only occur at the end of a cylinder so that's bull.
A full cylinder likely swamps the whole cache.

Track-to-track seeks have nothing to do with it, except
for once every so many accesses.
It's very doubtful whether a cylinder crossing will affect
cache-ahead behaviour.


Never has mattered.


Nope. Nothing to do with cache-ahead.



And what has that got to do with cache-ahead?
All physical reads/writes are sequential. All clusters are sequential.
It's impossible to read/write nonsequential in a single command.

Maybe you actually are that stupid.

It's the READ AHEAD by the OS that won't
necessarily pick up sequential sectors, moron.
 
Mr. Grinch

And what has that got to do with cache-ahead?
All physical reads/writes are sequential. All clusters are sequential.
It's impossible to read/write nonsequential in a single command.

Some SCSI drives used to have a feature called "elevator" seeks, where they
would queue operations in the cache and then perform them in the order the
sectors/tracks were found on the disk, to minimize the seek distance.

Do ATA drives do this at all?
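A minimal sketch of the idea (this is the LOOK variant of the elevator algorithm, with invented track numbers; real drive firmware is more elaborate):

```python
def elevator_order(pending, head):
    """Service pending track requests in elevator (LOOK) order:
    sweep from the head position toward higher tracks, then
    reverse and sweep back, instead of seeking in arrival order."""
    up   = sorted(t for t in pending if t >= head)
    down = sorted((t for t in pending if t < head), reverse=True)
    return up + down

def total_seek_distance(order, head):
    """Sum of head movements needed to service tracks in this order."""
    dist = 0
    for t in order:
        dist += abs(t - head)
        head = t
    return dist

pending = [98, 183, 37, 122, 14, 124, 65, 67]  # textbook request queue
head = 53
print(total_seek_distance(pending, head))                        # FIFO
print(total_seek_distance(elevator_order(pending, head), head))  # elevator
```

On this queue the FIFO order moves the head 640 tracks, while the elevator order moves it only 299.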
 
Folkert Rienstra

Mr. Grinch said:
Some SCSI drives used to have a feature called "elevator" seeks where they
would queue operations

Operations = commands.
in the cache then perform them in the order the sectors / tracks were found

i.e. command reordering.
on the disk, to minimize the seek distance.

Do ATA drives do this at all?

Not in general, but IBM drives might do that when used in conjunction with a HighPoint driver.
 
