SATA drives-- are they faster than IDE/ATA?

John Weiss

Beowulf said:
I really know little about SATA drives; I've always used ATA/IDE/EIDE hard
drives. I'm going to build a new system soon -- what performance benefit would
I gain, if any, by adding a SATA drive? And if I did add a SATA drive, what
is it best used for -- the OS, or data files accessed often (like images when
doing digital art, etc.)?

SATA has a POTENTIAL performance advantage over EIDE, but it will only be
realized if the physical limitations of the HD are overcome. A 7200 RPM
ATA100/133 drive will perform about as well as a 7200 RPM SATA 150 drive. The
only mainstream HDs that take advantage of the SATA bus for performance are
the WD Raptor 10K RPM HDs.
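
To put rough numbers on that (a back-of-the-envelope sketch in Python; the
drive and bus figures are illustrative assumptions, not measurements):

# Rough illustration: for a single 7200 RPM drive, the platter, not
# the interface, is the bottleneck. All figures are ballpark assumptions.

interfaces_mb_s = {"ATA/100": 100, "ATA/133": 133, "SATA 150": 150}

# Assumed sustained media rate of a 7200 RPM drive of that era
# (outer tracks), roughly 50-60 MB/s:
media_rate = 55  # MB/s

for name, bus in interfaces_mb_s.items():
    bottleneck = "platter" if media_rate < bus else "interface"
    print(f"{name}: bus {bus} MB/s vs media {media_rate} MB/s -> {bottleneck}")

With numbers like these, every interface is faster than the platter, which is
why the bus upgrade alone buys little.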

OTOH, you may be able to take advantage of a motherboard with RAID capability on
a built-in SATA controller. Also, the extra SATA connectors will give you the
ability to install more total SATA+EIDE devices.
 
Dale Brisket

cliff said:
What do you use as a benchmark? From what I can tell my Maxtor 300Gig PATA
is faster than my 74Gig Raptor using HAD track and a read/write test with
Pinnacle Studio 9.

No benchmarks, just how quickly apps load, defrag gets done, and the desktop
shows up at boot. Not scientific, but not my imagination either.
 
Bob Davis

kony said:
Not necessarily, the larger PATA drives make up a lot in
linear speed due to their higher platter density. Combine
that with the fact that anyone doing large jobs on an system
with a single drive, or otherwise a large % of the space on
the drive used, will find the Raptor has then been reading
and writing to the slower end of the platter. Sandra and
HDTach, being synthetic benchmarks, are not necessarily
accurate for all scenarios. That's not to take away from
the Raptor, it's a good choice for the OS or smaller data
sets, but can't replace a larger drive well for large linear
work... so as always the best power-user config is to have
at least 2 drives in the system.

I agree. I have RAID0 (2x 36GB Raptors) on this system with a 300GB IDE for
storage, and I usually use one Raptor for C: and a larger IDE or SATA drive
for D: on systems I build for others.

I am surprised your Raptor doesn't beat the PATA drive in benchmarks, as
every one I've tested has done so. This RAID0 setup is especially fast, and
the 74GB or new 150GB Raptors in a striped array would be even faster.
 
Mike

Bob Davis said:
I agree. I have RAID0 (2x 36gb Raptors) on this system with a 300gb IDE
for storage, and I usually use one Raptor for C: and a larger IDE or SATA
for D: on systems I build for others.

I am surprised the Raptor won't beat any PATA or standard SATA drive in
benchmarks, as every one I've tested will do so. This RAID0 setup is
especially fast, and the 74gb or new 150gb Raptors in an striped array
will be even faster.

Trying to digest your post. I'm going to build a PC this summer to be used
primarily for photo editing, web browsing, Office apps, and Google Earth.
What is recommended for an HDD array? I'd planned to use a SATA Raptor for
the OS and another, larger HDD for applications and data.

Mike
 
Bob Davis

Mike said:
Trying to digest your post. I'm going to build a PC this summer to be used
primarily for photo editing, web browsing, Office apps, and Google Earth.
What is recommended for an HDD array? I'd planned to use a SATA Raptor for
the OS and another, larger HDD for applications and data.

That sounds like a good plan. I'm using Raptors for C: (OS and programs)
and larger drives for D: (storage) on the systems I've built in the past
two years or more. If you are considering RAID0, I would not go that route
for the applications you intend to run, unless you will be doing a lot of
photo editing and manipulating large files. RAID0 will speed up this work
to some degree, but there is a downside: increased complexity and a higher
risk of data loss in the event of a hardware failure. You have two drives
running for a given volume, and if one fails, the entire volume has failed.

Personally, I like RAID0 for my needs as a commercial photographer. I'm
backed up in multiples and can recover from an HD or RAID failure with little
effort or time expended. To me, secure backup is a given regardless of the
drive setup chosen.

RAID1 is a redundant array, with the second drive mirroring the first, but
to me it is overkill, considering you can achieve basically the same thing
with diligent backups. RAID0 is also overkill if you are doing casual
photo editing, IMO. I would stick to single-drive volumes unless you are
doing lots of photo editing and the small savings in time add up.
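
To put rough numbers on that risk trade-off (a hypothetical back-of-the-
envelope sketch in Python; the per-drive failure rate is an assumed figure
for illustration):

p = 0.03  # assumed annual failure probability of one drive (illustrative)

single = p                  # one drive, one point of failure
raid0 = 1 - (1 - p) ** 2    # RAID0 volume dies if EITHER drive dies
raid1 = p ** 2              # RAID1 dies only if BOTH die (ignoring
                            # rebuild windows and controller faults)

print(f"single drive: {single:.4f}")
print(f"RAID0 (2 drives): {raid0:.4f}  (~2x the single-drive risk)")
print(f"RAID1 (2 drives): {raid1:.6f}")

So striping roughly doubles the odds of losing the volume, which is exactly
why diligent backups matter more than the array choice.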
 
digisol

150 SATA vs. 133 ATA? Ummm, you guess.

But apart from that, I have two boxes out of 9 that use SATA striping
on RAID 0, and the performance gain can be visible. BTW, this note comes
courtesy of 2x 80GB Maxtors RAIDed on only a 2800 ("my game box"). I
have faster systems by the numbers, but the 2800 has the best of
boards, video and sound, and gets the nod over many much faster
systems.
 
kony

Mike said:
Trying to digest your post. I'm going to build a PC this summer to be used
primarily for photo editing, web browsing, Office apps, and Google Earth.
What is recommended for an HDD array? I'd planned to use a SATA Raptor for
the OS and another, larger HDD for applications and data.

Mike

Unless the files are very large, there isn't much of a
storage-subsystem bottleneck. Photo editing, unlike some
tasks, tends to open the entire file into memory and then only
write when saving, rather than reading and writing simultaneously,
so the need for a dedicated drive for those jobs is much lower.

Primarily, the system should have ample memory and a fast CPU.
An array is not needed for those tasks in particular, but if
your work (data) is valuable enough that you don't want to
risk loss between backups, /then/ consider a RAID1 array
instead of a single drive, in addition to the Raptor for the OS.
 
Mxsmanic

All of this discussion seems a bit misplaced. The major bottleneck
for disk drives is usually seek time, not data transfer rate. And
seek time does not vary with the disk interface.

For example, Internet Explorer on my system does a minimum of 85
separate disk I/Os when it starts. If none of these I/Os can be
satisfied from cache, that's 85 seeks to disk. With an average seek
time of 10 ms, that's 850 ms to start the program; and in fact that's
about how long IE takes to start, meaning that the bulk of the delay
in starting IE (or just about any other program) comes from the volume
of disk I/Os it must do. And the I/Os are numerous but small, as
opposed to being few in number but large--which means that access time
is the primary bottleneck, not transfer rates.
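
The same arithmetic as a tiny model (a sketch in Python; only the 85 I/Os
and 10 ms seek come from the example above, while the 16 KB average I/O size
and 55 MB/s transfer rate are assumptions):

def startup_ms(num_ios, seek_ms=10, io_kb=16, transfer_mb_s=55):
    # Seek cost dominates; transfer cost is almost noise.
    seek = num_ios * seek_ms
    transfer = num_ios * io_kb / 1024 / transfer_mb_s * 1000
    return seek, transfer

seek, transfer = startup_ms(85)
print(f"seeks: {seek:.0f} ms, data transfer: {transfer:.1f} ms")
# -> seeks: 850 ms, data transfer: ~24 ms; seeking dominates ~35:1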

RAID can help in certain cases, particularly in overall system
performance, but it may not make much difference for individual
programs.

Today, most of the delay you see in response time on a PC is due
either to network or disk delays. Processor delay is almost never a
factor. If an application isn't doing network I/O, just about all the
time you spend waiting for a response comes from disk I/O delays, and
almost all of the disk delays are seek delays.

Current processors are 10,000 to 100,000 times faster than they were
30 years ago, but disk drives are only 3 times faster. The disparity
between processor-intensive tasks and I/O-intensive tasks is thus
growing extremely large. The odd thing is that nobody seems to
realize this, and companies continue to concentrate on processor
power. Perhaps this is because processor power is still easy to
increase, whereas very little can be done for seek times on disk
drives (unless someone comes up with something very revolutionary and
new, which doesn't appear to be on the horizon).
 
kony

Mxsmanic said:
All of this discussion seems a bit misplaced. The major bottleneck
for disk drives is usually seek time, not data transfer rate. And
seek time does not vary with the disk interface.

Often, yes, seek time matters most. However, you might
be thinking mostly of the OS itself, and in that context
it's a bit beside the point, because the primary issue there
is having ample memory to keep all the code resident instead of
rereading it from the drive.

For example, Internet Explorer on my system does a minimum of 85
separate disk I/Os when it starts. If none of these I/Os can be
satisfied from cache, that's 85 seeks to disk. With an average seek
time of 10 ms, that's 850 ms to start the program; and in fact that's
about how long IE takes to start, meaning that the bulk of the delay
in starting IE (or just about any other program) comes from the volume
of disk I/Os it must do. And the I/Os are numerous but small, as
opposed to being few in number but large--which means that access time
is the primary bottleneck, not transfer rates.

Yes, it seems true in this case, but consider that IE is MS
software and deeply integrated, relying on a lot of loosely
related things such as networking settings and registry
preferences -- more so than many apps, which may be larger but
spread across fewer files. I suppose one of my points is that
while IE is constrained by seek time, it IS only 850 ms in your
case, which isn't very long to wait, especially when it may
take longer to load the webpage itself. Now contrast
that with things that DO take a long time: large files, which
are not typically cached into memory continuously or
regularly, unless the system had a near-infinite amount of
memory and had been running a long time. XP's cache isn't
that big anyway, is it? I don't recall the exact figure, but
the one for 2000 Server is 980-something MB total.


RAID can help in certain cases, particularly in overall system
performance, but it may not make much difference for individual
programs.

Today, most of the delay you see in response time on a PC is due
either to network or disk delays. Processor delay is almost never a
factor. If an application isn't doing network I/O, just about all the
time you spend waiting for a response comes from disk I/O delays, and
almost all of the disk delays are seek delays.

These lesser things are not primary bottlenecks, but it all
adds up. For instance, when you load IE, the drive only appears
to account for the whole delay because it is the slowest part of
the chain. Cut the drive access time in half and you probably
won't get that webpage in 425 ms instead of 850 ms.
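
That is essentially Amdahl's law applied to I/O; a sketch with assumed
timings (the 300 ms of non-disk time is purely illustrative):

disk_ms = 850    # seek portion of the start-up, from the example above
other_ms = 300   # CPU, network, rendering, etc. (assumed figure)

before = disk_ms + other_ms
after = disk_ms / 2 + other_ms
print(f"before: {before} ms, after: {after:.0f} ms, "
      f"speedup: {before / after:.2f}x (not 2x)")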



Current processors are 10,000 to 100,000 times faster than they were
30 years ago, but disk drives are only 3 times faster. The disparity
between processor-intensive tasks and I/O-intensive tasks is thus
growing extremely large. The odd thing is that nobody seems to
realize this, and companies continue to concentrate on processor
power. Perhaps this is because processor power is still easy to
increase, whereas very little can be done for seek times on disk
drives (unless someone comes up with something very revolutionary and
new, which doesn't appear to be on the horizon).


I do agree seek times are very important, perhaps even most
important for smaller, typical PC tasks. They're not the
whole story, though, and paying more for a low-seek-time
drive for tasks involving so many small files may not
always be better than spending that cost difference on more
memory instead.
 
Mxsmanic

kony said:
Yes it seems true in this case, but consider IE is MS
software and so integrated, relying on a lot of loosely
related things such as networking settings and registry
preferences- moreso than many apps which may be larger but
fewer files.

Firefox performs more I/Os than MSIE.

I suppose one of my points is that while IE is
constrained by seek time, it IS only 850 ms in your case, which
isn't very long to wait, especially when it may take longer to
load the webpage itself.

When you've paid money for a fast processor and memory, it's an
eternity. What's the point of buying faster systems if it will always
take 850 ms to open the browser?

Now contrast that with things that DO take a long time: large
files, which are not typically cached into memory continuously or
regularly, unless the system had a near-infinite amount of
memory and had been running a long time. XP's cache isn't
that big anyway, is it? I don't recall the exact figure, but
the one for 2000 Server is 980-something MB total.

There is no way to effectively cache large, randomly accessed files.
For such files, you'll be hit with seek time on each and every file
I/O. Fortunately, efficiently written database systems (which
automatically excludes Access or SQL Server) can serve one access with
only 1-3 disk I/Os, which is only 24 ms or so. Still, if you have 100
requests a second coming in, that's 2.4 seconds of seek time per second
of wall time. Even if it is distributed over several drives, the system
will rapidly hit a wall in terms of performance, and that wall will be
imposed by disk access time, not processor time.
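
The arithmetic behind that wall, sketched out (the drive count and per-I/O
latency here are assumptions; the 1-3 I/Os per request figure is from above):

seek_ms = 8            # assumed seek + rotational latency per I/O
ios_per_request = 3    # upper end of the 1-3 I/O estimate above
drives = 4             # assumed number of spindles sharing the load

iops_per_drive = 1000 / seek_ms
max_requests = drives * iops_per_drive / ios_per_request
print(f"~{max_requests:.0f} requests/sec before seeks become the wall")
# -> ~167 requests/sec with these figures, no matter how fast the CPU is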

Ultimately, almost everything becomes I/O-bound, except the most
processor-intensive stuff, such as drawing fractals or playing (most)
video games, etc. Even video games can have annoying pauses if they
have to reference disk.

These lesser things are not primary bottlenecks, but it all
adds up. For instance, when you load IE, the drive only appears
to account for the whole delay because it is the slowest part of
the chain. Cut the drive access time in half and you probably
won't get that webpage in 425 ms instead of 850 ms.

Oh, I think you will. I don't think MSIE is likely to peg a 3.0 GHz
processor at start-up, even as bloated as it is. Half a second is
about four or five billion instructions, which is a lot for any
program. The disk drive will still be the limiting factor.

Worse yet, a lot of programs do disk _writes_ all the time. This
cannot be cached, at least not for long. Some types of writes must be
done immediately and the application will wait on the I/O. This slows
things down even more.

Some of the I/O is also paging, which becomes more and more frequent
as software becomes more bloated. Paging is a lot of small I/Os;
disks with faster transfer rates have essentially no influence on
paging time; only better access times can help, and those are not
really improving.

One solution is to write better programs that don't treat disk as if
it were a zero-access-time device. But I see no signs of anyone
writing better software anywhere.

I do agree seek times are very important, perhaps even most
important for smaller, typical PC tasks. They're not the
whole story, though, and paying more for a low-seek-time
drive for tasks involving so many small files may not
always be better than spending that cost difference on more
memory instead.

If you can add memory, it always helps. But you can't configure an
infinite amount of memory in a system. Even if you have a 64-bit
system that can go beyond 4 GB, you'd need many dozens of
gigabytes to make a difference in some cases, because disk access is
so random. When access is random, the only way to improve
performance with a cache is to make the cache a large fraction of the
size of the disk. With disks that are 500 GB each, that's pretty
tough.
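
To see why, model the cache hit rate for uniformly random access (a sketch;
the cache sizes and the 10 ms seek are assumptions):

def avg_access_ms(cache_gb, disk_gb, seek_ms=10, ram_ms=0.001):
    # Under uniformly random access, hit rate ~= cache size / data size.
    hit = min(cache_gb / disk_gb, 1.0)
    return hit * ram_ms + (1 - hit) * seek_ms

for cache in (1, 4, 64):
    print(f"{cache:>3} GB cache over a 500 GB disk: "
          f"avg access {avg_access_ms(cache, 500):.2f} ms")
# Even 64 GB of cache only drops the average from 10 ms to ~8.7 ms.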
 
