kony said:
Yes, it seems true in this case, but consider that IE is MS
software and tightly integrated, relying on a lot of loosely
related things such as networking settings and registry
preferences, more so than many apps that may be larger but
consist of fewer files.
Firefox performs more I/Os than MSIE.
I suppose one of my points is that while IE is
constrained by seek, it IS only 850 ms in your case, which
isn't very long to wait, especially considering that it
may take longer to load the webpage itself.
When you've paid money for a fast processor and memory, it's an
eternity. What's the point of buying faster systems if it will always
take 850 ms to open the browser?
Now contrast
that with things that DO take a long time: large files, which
are not typically cached in memory continuously or
regularly unless the system has a near-infinite amount of
memory and has been running a long time. XP's cache isn't
that big anyway, is it? I don't recall the exact figure, but
the one for 2000 Server is 980-something MB total.
There is no way to effectively cache large, randomly-accessed files.
For such files, you'll be hit with seek time for each and every file
I/O. Fortunately, efficiently-written database systems (which
automatically excludes Access or SQL Server) can do one access with
only 1-3 disk I/Os, which is only 24 ms or so. Still, if you have 100
requests a second coming in, that's 2.4 seconds of seek time. Even if
it is distributed over several drives, the system will rapidly hit a
wall in terms of performance, and that wall will be imposed by disk
access time, not processor time.
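The arithmetic above can be sketched out; the numbers here (8 ms per random I/O, 3 I/Os per access, 100 requests/s) are the post's assumptions, not measurements:

```python
# Back-of-envelope sketch: if each logical request costs a few random
# disk I/Os, seek time caps throughput long before the CPU does.
SEEK_MS = 8.0          # assumed average access time per random I/O
IOS_PER_REQUEST = 3    # assumed disk I/Os per database access

ms_per_request = SEEK_MS * IOS_PER_REQUEST      # 24 ms of disk time
requests_per_second = 100
disk_seconds_needed = requests_per_second * ms_per_request / 1000.0

print(f"{ms_per_request:.0f} ms of disk time per request")
print(f"{disk_seconds_needed:.1f} s of seek time per wall-clock second")
# One drive can deliver only 1 s of disk time per second, so even three
# drives barely keep up -- the wall is disk access, not the processor.
```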
Ultimately, almost everything becomes I/O-bound, except the most
processor-intensive stuff, such as drawing fractals or playing (most)
video games, etc. Even video games can have annoying pauses if they
have to reference disk.
These lesser things are not primary bottlenecks, but it does
all add up. For instance, when you load IE, the only reason
it can finish in the time the drive takes is that the drive
takes so long. Cut the drive access time in half and you probably
won't get that webpage in 425 ms instead of 850 ms.
Oh, I think you will. I don't think MSIE is likely to peg a 3.0 GHz
processor at start-up, even as bloated as it is. Half a second is
about four or five billion instructions, which is a lot for any
program. The disk drive will still be the limiting factor.
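A quick sanity check of that instruction count, assuming a superscalar chip retiring roughly 3 instructions per cycle (my assumption, not a measured figure):

```python
# Half a second on a 3.0 GHz processor, assuming ~3 instructions
# retired per cycle, lands right in the "four or five billion" range.
CLOCK_HZ = 3.0e9   # 3.0 GHz processor from the post
IPC = 3            # assumed instructions retired per cycle
seconds = 0.5

instructions = CLOCK_HZ * seconds * IPC
print(f"{instructions / 1e9:.1f} billion instructions")  # 4.5 billion
```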
Worse yet, a lot of programs do disk _writes_ all the time. This
cannot be cached, at least not for long. Some types of writes must be
done immediately and the application will wait on the I/O. This slows
things down even more.
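A minimal sketch of such a "must hit the disk now" write: `os.fsync()` blocks the caller until the data reaches the drive, so the program waits out the full disk access time (the temp file here is just for illustration):

```python
import os
import tempfile
import time

# Create a scratch file; mkstemp returns an open file descriptor.
fd, path = tempfile.mkstemp()
try:
    start = time.perf_counter()
    os.write(fd, b"committed record\n")
    os.fsync(fd)   # synchronous: the application waits on the I/O
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"synchronous write took {elapsed_ms:.1f} ms")
finally:
    os.close(fd)
    os.remove(path)
```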
Some of the I/O is also paging, which becomes more and more frequent
as software becomes more bloated. Paging is a lot of small I/Os;
disks with faster transfer rates have essentially no influence on
paging time; only better access times can help, and those are not
really improving.
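The paging point can be shown with assumed drive specs (8 ms access, 60 MB/s sustained transfer; both are illustrative numbers): for a 4 KB page, nearly all of the I/O time is access, so a faster transfer rate shaves almost nothing.

```python
# Paging is many small random I/Os, so the fixed access cost dominates
# and raw transfer rate barely matters.
ACCESS_MS = 8.0        # assumed average access time (seek + rotation)
TRANSFER_MB_S = 60.0   # assumed sustained transfer rate
PAGE_KB = 4

def io_time_ms(size_kb):
    """Time for one random I/O: fixed access cost plus transfer time."""
    return ACCESS_MS + (size_kb / 1024.0) / TRANSFER_MB_S * 1000.0

one_page = io_time_ms(PAGE_KB)
access_share = ACCESS_MS / one_page
print(f"one 4 KB page-in: {one_page:.2f} ms "
      f"({access_share:.0%} of it is access time)")
# Doubling the transfer rate only shrinks the tiny transfer term;
# only a better access time meaningfully speeds up paging.
```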
One solution is to write better programs that don't treat disk as if
it were a zero-access-time device. But I see no signs of anyone
writing better software anywhere.
I do agree seek times are very important, perhaps even the
most important factor for smaller, typical PC tasks. They're
not the whole story though, and paying more for a lower-seek-time
drive for tasks involving so many small files may not
always be better than spending that cost difference on more
memory instead.
If you can add memory, it always helps. But you can't configure an
infinite amount of memory on a system. Even if you have a 64-bit
system that can go beyond 4 GB, you'd have to have many dozens of
gigabytes to make a difference in some cases, because disk access is
so random. When disk access is random, the only way to improve
performance with cache is to make the cache a large fraction of the
size of the disk. With disks that are 500 GB each, that's pretty
tough.
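The "large fraction of the disk" point falls out of simple arithmetic if you assume uniformly random access (a simplification; real workloads have some locality): the expected hit rate is roughly cache size divided by working-set size.

```python
# Under uniform random access, the expected cache hit rate is about
# cache_size / working_set_size -- so a cache that is a tiny fraction
# of a 500 GB disk barely helps.
DISK_GB = 500.0

for cache_gb in (1, 4, 64):
    hit_rate = cache_gb / DISK_GB   # expected fraction of I/Os served from RAM
    print(f"{cache_gb:>3} GB cache over {DISK_GB:.0f} GB disk: "
          f"~{hit_rate:.1%} hit rate")
```

Even 64 GB of RAM, generous for the era being discussed, catches only about one I/O in eight.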