IDE or AHCI?

Arno

Lynn McGuire said:
What is the fastest hard drive access method for
Windows 7 x64, IDE or AHCI? I have a WD 1 TB
Caviar Black, a Gigabyte Z68XP-UD5 motherboard
and an Intel i3-2500K with 8 GB of RAM.
Thanks,
Lynn

It should not matter much for speed. AHCI has hotplug, IDE does
not. And AHCI drivers may be newer, improving speed.
Unfortunately even Win 7 has problematic (or no?) AHCI support out
of the box and requires drivers. AFAIK this is mostly an issue for
new installations. Under Linux it does not matter.

Arno
 
Rod Speed

Lynn said:
What is the fastest hard drive access method for Windows 7 x64, IDE or AHCI ?

There clearly isn't a lot in it, given the stats in your first link.

I doubt you'd be able to pick the difference in a proper double-blind
trial with normal work, without being allowed to use a benchmark.

If you can, with normal work, use the config which gives the best result WITH THAT WORK.
I have a WD 1 TB Caviar Black, a Gigabyte Z68XP-UD5 motherboard and an Intel i3-2500K with 8 GB of RAM.
 
Yousuf Khan

What is the fastest hard drive access method for
Windows 7 x64, IDE or AHCI? I have a WD 1 TB
Caviar Black, a Gigabyte Z68XP-UD5 motherboard
and an Intel i3-2500K with 8 GB of RAM.

I see these thoughts:
http://expertester.wordpress.com/2008/07/24/ahci-vs-ide-–-benchmark-advantage/

http://tweaks.com/windows/44119/improve-sata-hard-disk-performance-convert-from-ide-to-ahci/

I did the switch to AHCI back in the Windows XP days. At that time it
was a difficult transition as there were no default AHCI drivers, and
switching to AHCI without doing some preparation meant that your OS
would not boot. It's still not an easy switch with Windows 7 either: you
basically have to install Windows 7 with AHCI already enabled, or else
it'll default to IDE and not include the AHCI drivers in the install.
Otherwise, switching to AHCI after Win 7 is already installed is
nearly as difficult as it was on XP. Linux can use either type
transparently; not sure why Microsoft didn't make it as simple with its own drivers.

After doing the switch, I find absolutely no difference in performance.
However, I do have an external eSATA drive which can be enabled and
disabled on the fly just like a USB drive. I think if I were still using
IDE drivers, that wouldn't be nearly as easy though.

Yousuf Khan
 
Rod Speed

David Brown wrote
Yousuf Khan wrote
It's hard to comprehend MS's difficulty here.

For you, sure.
There is little measurable difference in performance between IDE mode and AHCI mode,
Yes.

but people often /perceive/ "native SATA" mode as newer and faster than "IDE emulation" mode.

More fool them.
So even if you can't measure a difference,

Course you can.
it still seems absurd that you have to jump through hoops to run "native SATA".

Why, when the difference is so trivial?
There are two main differences in practice between SATA and IDE modes.
One is hotplug, as you mentioned, and the other is NCQ - native command queueing. (There are also a few other SATA
commands, such as SSD trim and secure erase.)
NCQ won't make a significant difference in Linux, since it has always
had good algorithms to order disk accesses to minimise head movement.

And so does Win.
It will sometimes make things worse, such as when the OS wants to enforce a particular order (for transactions to
filesystem journals,
for example). And NCQ doesn't help windows much either
Wrong.

- after all, it only applies when you do more than one thing at a time.

Which Win does all the time.
 
Yousuf Khan

NCQ won't make a significant difference in Linux, since it has always
had good algorithms to order disk accesses to minimise head movement. It
will sometimes make things worse, such as when the OS wants to enforce a
particular order (for transactions to filesystem journals, for example).
And NCQ doesn't help windows much either - after all, it only applies
when you do more than one thing at a time.

I find that it doesn't help even when multitasking. I monitor the
disk subsection of the Resource Monitor regularly, and very often when
the disk is busy the Disk Queue Length is over 1.00 (meaning more than one
process is actively waiting on the disk) and the Active Time is pegged
near 100%. Nothing can be done about it till SSDs are more affordable.
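As an aside, a queue length persistently above 1.00 just says requests arrive faster than the disk retires them; Little's law (L = lambda * W) makes the relationship concrete. A minimal sketch with illustrative numbers, not figures from the poster's machine:

```python
# Little's law: average number of requests in the system (L) equals
# the arrival rate (lambda) times the average time each request
# spends queued plus being serviced (W). Numbers are assumptions.

def queue_length(arrivals_per_sec, avg_time_in_system_sec):
    """Average disk queue length predicted by Little's law (L = lambda * W)."""
    return arrivals_per_sec * avg_time_in_system_sec

# A busy 7200 rpm disk doing random I/O might sustain ~100 requests/s.
print(queue_length(100, 0.010))  # 10 ms in system: queue stays around 1
print(queue_length(100, 0.100))  # 100 ms in system: queue sits around 10
```

The same arrival rate with ten times the per-request latency gives a queue ten times deeper, which matches the "5.00 or even 10.00" readings described above.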

Yousuf Khan
 
Arno

It's hard to comprehend MS's difficulty here. There is little
measurable difference in performance between IDE mode and AHCI mode, but
people often /perceive/ "native SATA" mode as newer and faster than "IDE
emulation" mode. So even if you can't measure a difference, it still
seems absurd that you have to jump through hoops to run "native SATA".

Indeed. And you only get reliably working hotplug with AHCI,
which is a factor for eSATA. Basically it shows that when it
comes to things on the hardcore tech layer, MS is still
pretty far behind.
There are two main differences in practice between SATA and IDE modes.
One is hotplug, as you mentioned, and the other is NCQ - native command
queueing. (There are also a few other SATA commands, such as SSD trim
and secure erase.)
NCQ won't make a significant difference in Linux, since it has always
had good algorithms to order disk accesses to minimise head movement.
It will sometimes make things worse, such as when the OS wants to
enforce a particular order (for transactions to filesystem journals, for
example). And NCQ doesn't help windows much either - after all, it only
applies when you do more than one thing at a time.

Indeed again. NCQ is mostly for server loads, where a lot
of things run in parallel, with a sub-standard buffer cache.
This certainly does not apply to Linux or the BSDs. No
idea whether it applies to Windows on servers.
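The "order disk accesses to minimise head movement" mentioned above is classically an elevator (SCAN) pass over the pending requests; a minimal sketch, with an illustrative function name and block numbers:

```python
def elevator_order(pending, head):
    """One SCAN ('elevator') pass: service requests at or above the
    current head position in ascending LBA order, then sweep back
    down through the remaining lower ones."""
    up = sorted(lba for lba in pending if lba >= head)
    down = sorted((lba for lba in pending if lba < head), reverse=True)
    return up + down

# Head at LBA 500: sweep up through 600 and 900, then back to 300, 100.
print(elevator_order([900, 100, 600, 300], head=500))  # [600, 900, 300, 100]
```

NCQ does essentially the same reordering inside the drive, which is why it adds little when the OS scheduler already does this well.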

Arno
 
Arno

I find that it doesn't even help even when multitasking. I monitor the
disk subsection of the Resource Monitor regularly, and very often when
the disk is busy the Disk Queue Length is over 1.00 (meaning more than 1
process is actively waiting on the disk) and the Active Time is pegged
near 100%. Nothing that can be done about it till SSD's are more affordable.
Yousuf Khan

Interesting.

Arno
 
Arno

NCQ can only really help if you have multiple outstanding transactions,
and the OS itself hasn't ordered them appropriately. Since the OS
(Windows or Linux) /does/ order transactions, NCQ will only help if the
OS is doing a bad job. The disk knows a bit more than the OS regarding
disk ordering (since it knows the full 3D geometry, rather than just a
linear LBA number), but on the other hand it knows nothing about which
processes are waiting for disk access, or the priorities of said
accesses, and it knows nothing about barrier writes. I don't know how
Windows handles write barriers, but on Linux they are important to
ensure the integrity of critical disk accesses such as journalling -
they ensure that everything that was supposed to be written earlier
/has/ been written. NCQ totally screws this up, and means that the OS's
IO subsystem must ensure the disk queue is completely empty before
sending the barrier write, and wait for it to finish completely before
sending anything else. If the disk handles transactions in the order
they are given, then such writes can be buffered better.

Well, yes. Write barriers are getting more and more important
on Linux, with filesystems deferring more and using journalling
more aggressively. Looks like NCQ is basically obsolete.

If I remember correctly, it is a thing that was brought over from
SCSI disks a long time ago, when it still had merit. Not so
anymore; just one more TLA that can be thrown at customers
to make them think they are getting more for their money.
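The drain-before-barrier behaviour described above can be modelled in a few lines: the device may complete queued writes in any order it likes, so the OS must empty the queue before issuing the barrier write. A toy sketch, not a real block layer:

```python
class BarrierQueue:
    """Toy model of an I/O queue where the device reorders freely,
    forcing the OS to drain it around a barrier write."""

    def __init__(self):
        self.pending = []    # writes submitted but not yet on platter
        self.completed = []  # order in which writes actually landed

    def submit(self, block):
        self.pending.append(block)

    def drain(self):
        # The device picks its own completion order; model that as
        # sorted-by-block-number, i.e. NOT submission order.
        self.completed.extend(sorted(self.pending))
        self.pending.clear()

    def barrier_write(self, block):
        self.drain()                  # everything earlier must land first
        self.completed.append(block)  # then the barrier write itself

q = BarrierQueue()
q.submit(40)
q.submit(10)
q.barrier_write(99)  # e.g. a journal commit: 10 and 40 must precede 99
q.submit(20)
q.drain()
print(q.completed)  # [10, 40, 99, 20]
```

The stall is visible in the model: nothing submitted after the barrier can be merged with what came before it.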

Arno
 
Rod Speed

David Brown wrote
Rod Speed wrote
You do realise you are arguing against yourself here, don't you?

Like hell I am.
First you agree that there is only a trivial difference in performance between IDE and AHCI modes,

I didn't agree with your TRIVIAL claim, just what you actually said: LITTLE difference.
then you argue that "of course" you can measure it,

Her first link clearly shows that it can be measured.
then you argue that there is little point in using it (on Windows) since the differences are trivial...

No I did not. One obvious reason to use it is if you use the hot plugging.
Back to reality.

You wouldn't know what reality was if it bit you on your lard arse.
Yes, the performance differences are trivial.

Not always.
Yes, they /can/ be measured - but the differences are below the noise threshold for most windows machines.

That is just plain wrong.
To measure them, you have to be careful about test conditions, background services, repetition of the tests, clean
installs, etc.

Wrong, as always.
That's fine for a website specialising in tests and benchmarks, but of little use to most people.

Irrelevant to it providing support for hot plugging that IDE does not.
However, whatever the technical benefits (or lack thereof) of using AHCI instead of IDE, user perception and
expectation should be important to a supplier like MS.

Mindlessly silly.
The effort needed to get hard drive drivers in place and working in Windows,

Is completely trivial with Win7
and the scope for getting it wrong and causing problems,

That's a lie with Win7.
is just silly when you look at how simple it is with Linux.

It's even simpler with Win7.
I wasn't talking about Windows here.

You clearly were.
No, it is correct - NCQ doesn't help windows much.

Depends on how you define much and which sort of work you are talking about.
Benchmarks vary, as it depends heavily on the usage patterns.

And it does help some usage patterns significantly.
It is a win in some cases,
Yes.

and a loss on others

Hardly ever in many real world situations.
- but seldom by particularly large margins.

Only because most real work isn't particularly drive-I/O-bound anymore.
I knew that would provoke you :)

You spew mindless silly shit; you can be quite confident that I will point that out if I
notice it and can be bothered to expose your stupidities for the world to laugh at, again.
 
Rod Speed

Yousuf Khan wrote
David Brown wrote
I find that it doesn't even help even when multitasking.

The benchmarks clearly show that it does.

Not very dramatically, though.
I monitor the disk subsection of the Resource Monitor regularly, and very often when the disk is busy the Disk Queue
Length is over 1.00 (meaning more than 1 process is actively waiting on the disk) and the Active Time is pegged near
100%.

Doesn't mean that NCQ doesn't help in that situation.
Nothing that can be done about it till SSD's are more affordable.

Wrong. NCQ does help in that situation, albeit not very dramatically.
 
Rod Speed

David Brown wrote
Yousuf Khan wrote

And any modern OS is doing that all the time.
NCQ can only really help if you have multiple outstanding transactions,
Yes.

and the OS itself hasn't ordered them appropriately.
Yes.

Since the OS (Windows or Linux) /does/ order transactions, NCQ will only help if the OS is doing a bad job.

It will also help when it can do a better job.
The disk knows a bit more than the OS regarding disk ordering (since it knows the full 3D geometry, rather than just a
linear LBA number),

And so it can do a much better job when that knowledge lets it decide
what ordering makes sense, something the OS can never do. That is most
obvious with what is currently the biggest variable: whether a sector has
just gone past the heads, so that you must wait an entire revolution
before it comes around again.
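The cost of that "entire revolution" is easy to put a number on; at 7200 rpm one revolution takes about 8.3 ms. A quick sketch:

```python
def rotational_latency_ms(rpm):
    """Worst-case (full revolution) and average (half revolution)
    rotational latency in milliseconds for a given spindle speed."""
    full_rev = 60_000 / rpm  # 60,000 ms per minute / revolutions per minute
    return full_rev, full_rev / 2

worst, avg = rotational_latency_ms(7200)
print(f"{worst:.2f} ms worst, {avg:.2f} ms average")  # 8.33 ms worst, 4.17 ms average
```

Only the drive knows the head's rotational position, so only the drive can schedule around a just-missed sector; the OS sees nothing but LBAs.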
but on the other hand it knows nothing about which processes are waiting for disk access, or the priorities of said
accesses,

The OS doesn't necessarily know that either, except for what it initiates itself.
and it knows nothing about barrier writes. I don't know how Windows handles write barriers, but on Linux they are
important to ensure the integrity of critical disk accesses such as journalling - they ensure that
everything that was supposed to be written earlier /has/ been written. NCQ totally screws this up,

Not if the OS allows for it being there.
and means that the OS's IO subsystem must ensure the disk queue is completely empty before sending the barrier write,
Wrong.

and wait for it to finish completely before sending anything else.
Wrong.

If the disk handles transactions in the order they are given, then such writes can be buffered better.

Wrong.
 
Mike Tomlinson

David Brown said:
It doesn't matter what I write - your Rodbot mode goes on automatic.
But it is sometimes mildly entertaining to see how easy it is to trigger
your outbursts of witless knee-jerk childishness.

Funny isn't it, how he feels threatened and lashes out when confronted
by someone who actually knows what they are talking about?
It's a pity, really. I know that deep down below the image of a sad,
angry flamer lies a fair amount of knowledge and experience.

i.e. he's a dinosaur
Do you act the same in real life, or is this just your Usenet persona?

I have in my mind's eye a sad, lonely individual who masturbates
obsessively over his collection of ST225's.
 
Rod Speed

Mike Tomlinson wrote
Funny isn't it, how he feels threatened and lashes out when confronted
by someone who actually knows what they are talking about?

That fool never does with Win.

You never ever do with anything at all, ever.
I have in my mind's eye a sad, lonely individual who
masturbates obsessively over his collection of ST225's.

You're projecting now. I don't even have one, ****wit.
 
Arno

The feature introduced on SCSI was TCQ - Tagged Command Queuing. It was
more flexible, and more useful - each command sent to the disk could be
tagged as "head of queue" (do this command with highest priority),
"ordered" (enforcing the order of the tagged commands) and "simple" (do
in whatever order the disk wants). NCQ is pretty much TCQ "simple". If
SATA supported the "ordered" mode of TCQ, it would be very useful. I
don't know what happens when mixing different tag types in TCQ, but I
think if "order" had top priority for writes, "head of queue" had top
priority for reads, and "simple" had lowest priority for both, then
you'd have a system that would improve speed in almost all cases as well
as being easy to make write-barrier safe.
I assume that SAS supports TCQ.
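The proposed tag priorities can be sketched as a toy dispatcher (tag names follow TCQ; collapsing the read/write split into a single priority order is a simplification of the scheme described above):

```python
# Toy TCQ-style dispatcher: "ordered" commands run first and keep
# their submission order, "head_of_queue" commands come next, and
# "simple" commands go last in whatever order the device prefers
# (modeled here as ascending LBA). Purely illustrative.

def dispatch(commands):
    """commands: list of (tag, lba) tuples; returns LBAs in service order."""
    ordered = [lba for tag, lba in commands if tag == "ordered"]
    head = [lba for tag, lba in commands if tag == "head_of_queue"]
    simple = sorted(lba for tag, lba in commands if tag == "simple")
    return ordered + head + simple

cmds = [("simple", 700), ("ordered", 50), ("head_of_queue", 300),
        ("ordered", 60), ("simple", 100)]
print(dispatch(cmds))  # [50, 60, 300, 100, 700]
```

The point of the "ordered" tag is visible here: 50 and 60 land in submission order regardless of where they sit on the platter, which is exactly what a write barrier needs.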

Ah, yes. TCQ it was indeed.

Arno
 
Rod Speed

Yousuf Khan wrote
Rod Speed wrote
Too bad you can't run benchmarks as your applications.

You can however use what you care about the speed of as the benchmark.
When I'm talking about the disk queue being higher than 1.00, I don't mean just something minor like 1.01 or 1.10;
I'm talking about 5.00, or even 10.00! There could be 10 processes waiting on the disk queue at any given time.

I just don't believe that happens often enough, or for long enough, to matter.
This normally happens during boot-up time,

Like I have said to you before, anyone with even half a clue
boots so rarely that that situation is completely irrelevant. If
you care about the speed of your system, the only thing that
makes any sense at all is to only boot very rarely, weeks or
months apart, and suspend or hibernate, not shutdown.

Even if you are silly enough to religiously update as often as you can,
any reboot involved should happen when you aren't using the system.
but it doesn't take very long for the disk queue to kick up to the stratosphere at any time.

That's just plain wrong with numbers like that.
Just a few apps trying to access the same disk at the same time, and you got major delays.

That's just plain wrong with modern hard drives. Very
minor delays, in fact, with modern fast-seeking drives.
 
Arno

Yousuf Khan said:
Yousuf Khan wrote
David Brown wrote [...]
I monitor the disk subsection of the Resource Monitor regularly, and very often when the disk is busy the Disk Queue
Length is over 1.00 (meaning more than 1 process is actively waiting on the disk) and the Active Time is pegged near
100%.

Doesn't mean that NCQ doesn't help in that situation.
When I'm talking about the disk queue being higher than 1.00, I don't
mean just something minor like 1.01, or 1.10, but I'm talking about
5.00, or even 10.00! There could be 10 processes waiting on the disk queue
at any given time. This normally happens during boot-up time, but it
doesn't take very long for the disk queue to kick up to the stratosphere
at any time. Just a few apps trying to access the same disk at the same
time, and you got major delays.
Yousuf Khan

Seems something was done here in kernel 3.2, and more may be done in
the near future. Although, from an article on LWN, it seems
the current FS people have trouble understanding some of the
proposals made.

Arno
 
Krypsis

Yousuf Khan wrote


You can however use what you care about the speed of as the benchmark.



I just dont believe that that happens all that much for long to matter.


Like I have said to you before, anyone with even half a clue
boots so rarely that that situation is completely irrelevant. If
you care about the speed of your system, the only thing that
makes any sense at all is to only boot very rarely, weeks or
months apart, and suspend or hibernate, not shutdown.

I turn my computers off when not in use. No point using electricity when
I'm not using the computer. I turn the power off at the UPS but not at
the wall socket. Waiting for a bootup is no great pain. I walk past my
computer, hit a few buttons, do a few other things and by the time I
have finished that, the beast is up and ready.

I've yet to see Windows last months without a complete shutdown. Friends
of mine are forced to reboot often because Windows gets itself tied up
in knots. Linux, on the other hand, can go for years without even
suspending or hibernating.
Even if you are silly enough to religiously update as often as you can,
any reboot involved should happen when you arent using the system.


Thats just plain wrong with numbers like that.


Thats just plain wrong with modern hard drives. Very
minor delays in fact with modern fast seeking drives.

You don't know what you're talking about. I have modern fast seeking
drives in all my computers bar my Powermac and they ALL bog down when
accessed by multiple programs at the same time. I suggest you do a few
simple experiments to prove this to yourself. Do you reckon the seek
limitations of mechanical hard drives might be the reason SSDs are so
popular in applications where speed is paramount?
 
Arno

I turn my computers off when not in use. No point using electricity when
I'm not using the computer. I turn the power off at the UPS but not at
the wall socket. Waiting for a bootup is no great pain. I walk past my
computer, hit a few buttons, do a few other things and by the time I
have finished that, the beast is up and ready.
I've yet to see Windows last months without a complete shutdown. Friends
of mine are forced to reboot often because Windows gets itself tied up
in knots. Linux, on the other hand, can go for years without even
suspending or hibernating.

Windows was never intended as a server OS. It still shows.
There is a lot that cannot be done on Windows without a shutdown.
Machines get slower and slower with uptime. I know a few people
who administrate Windows servers, and they usually do scheduled
reboots every 30 days or so.

Longest uptime I had with a Linux server/firewall box was
400 days, then I replaced the kernel. No issues at that
time despite constant I/O and network load during the day.
This experience is fairly typical. Still, for my desktop system
I shut down Linux as well. Hibernating is at the very least a
security risk and basically unnecessary. I do the same as you;
1-2 minutes are not hard to pass.
You don't know what you're talking about.

As usual for him. Even the fastest spinning disks can
be brought to their knees with a few processes that are I/O
intensive. (That means aggregated delivered I/O bandwidth is far
lower than the maximum.) For SSDs the situation is different,
at least for large accesses. For small accesses you can run
into the same problem.
I have modern fast seeking
drives in all my computers bar my Powermac and they ALL bog down when
accessed by multiple programs at the same time. I suggest you do a few
simple experiments to prove this to yourself. Do you reckon the seek
limitations of mechanical hard drives might be the reason SSDs are so
popular in applications where speed is paramount?

Or even RAM-disks in some applications. At least before SSDs became
cheap.

Don't worry about Rod; he is not using any kind of
understanding to post his opinions, he uses the parrot
model with some obscure selection function. Most of us
have him filtered out.

Arno
 
