Hi Synapse,
I admire and respect the excellent, high-quality answers you provide in
this newsgroup. So, I really hate to disagree with you. But, um ... I
disagree.
Well actually, I both agree and disagree. Yes, replacing the HAL may be
necessary; but it may not be sufficient for transferring a Windows
installation to another motherboard.
The role of the Hardware Abstraction Layer is often exaggerated in popular
thinking about Windows. It is often claimed that the HAL provides a
generalised abstraction of the hardware; almost to the point of providing
a "virtual machine" for Windows to run on (they may not use that "v" word,
but that's what is implied). In fact the HAL has a much narrower focus -
it places a layer of code between the NT Executive and the hardware to
handle processor- and platform-dependent details (eg, interrupt
controllers, timers and DMA). It does not replace device drivers, for
example, which contain the device-specific code to drive the hardware; but
with the HAL in place, hardware vendors can write device drivers to handle
their hardware in a fairly processor-independent way, without needing to
supply different versions of the driver for every possible processor
(obviously, different object code is still required for different
instruction sets). Device drivers don't interface direct to the hardware,
but instead call kernel routines in the HAL. But you still need to install
and configure the right device drivers, to match the specific hardware on
which you are running.
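To make that layering concrete, here's a deliberately toy sketch - in Python, and nothing at all like real kernel code - of the idea. The class and method names are all invented for illustration: the "driver" calls an abstract routine, and each "HAL" build supplies its own platform-specific implementation, so the driver code itself stays the same.

```python
# Toy model (NOT real Windows code): the "HAL" hides platform-specific
# details, so the same "driver" routine runs unchanged on either build.

class UniprocessorHal:
    """Stands in for a uniprocessor HAL build (hypothetical)."""
    def mask_interrupts(self):
        return "cli"  # one CPU: just disable interrupts (illustrative)

class MultiprocessorHal:
    """Stands in for a multiprocessor/APIC HAL build (hypothetical)."""
    def mask_interrupts(self):
        return "raise IRQL via local APIC"  # illustrative placeholder

def driver_isr(hal):
    # The driver never touches the interrupt hardware directly; it
    # calls through whichever HAL it happens to be loaded on top of.
    return f"ISR entered after: {hal.mask_interrupts()}"

print(driver_isr(UniprocessorHal()))
print(driver_isr(MultiprocessorHal()))
```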
This role of the HAL was more important in the past, when 2 circumstances
needed to be dealt with:
1) Windows ran on many different architectures besides x86: NT 3.5 and
3.51 ran on RISC processors like MIPS R4000, DEC Alpha and Motorola/IBM
PowerPC, as well as the traditional Intel IA-32. (The RISC chips were
bi-endian or natively little-endian designs; NT ran them all in
little-endian mode.) Today, there are only a few processor families
left: IA-32, x64 and IA-64 Itanium.
2) standard PC hardware had not converged as much as it has today; so each
PC vendor supplied a custom HAL specific to their hardware. For example,
Compaq SystemPro servers needed a specific HAL to use their TriFlex bus
design; Toshiba PCs needed a specific HAL or they wouldn't boot, and so on.
These days, there are only a few variations in possible HAL:
- ACPI or not?
- uniprocessor or multiprocessor?
(And even the uniprocessor factor is going away, as dual-core machines
become standard. There are also Itanium machines, but I'll skip over
them ... not many Itanium users in this forum, anyway.)
The list of HALs in a Vista distribution is far shorter than it was in NT
3.51! We are down to about 6 major HALs, I think (haven't checked the
exact number).
Additionally, the hardware waters were muddied again in Windows 2000,
with the introduction of Plug and Play. The kernel-mode Plug-and-Play
Manager runs in the NT Executive, above the HAL layer. Like all
components, it relies on the HAL to provide processor-independent
interfaces to low level hardware events, like interrupts etc; but the
mapping of compatible PnP Device IDs happens "way up here" in the PnP
Manager, not "way down there" in the HAL.
So for example you could have 2 machines, each with a SATA hard disk. On
Machine "A", the device ID of the SATA Controller is:
PCI\VEN_8086&DEV_2824&SUBSYS_514D8086&REV_02
On machine "B", the device ID of the SATA Controller is:
PCI\VEN_10DE&DEV_0054&SUBSYS_B0031458&REV_F3
If you move the Windows installation from machine A to machine B, you are
relying on *something* in Windows to be smart enough, during the boot
process, to work out at a very low level, whether or not the device called
PCI\VEN_8086&DEV_2824&SUBSYS_514D8086&REV_02 can be treated the same way
as the device called PCI\VEN_10DE&DEV_0054&SUBSYS_B0031458&REV_F3.
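Just to illustrate the structure of those IDs (this is a sketch, not how the PnP Manager actually matches devices - the real process walks a ranked list of hardware and compatible IDs from the INF files), here's a small Python snippet that splits a PCI hardware ID into its fields and compares the two examples above:

```python
import re

def parse_pci_id(hwid):
    """Split a PCI hardware ID string into its VEN/DEV/SUBSYS/REV fields."""
    m = re.match(
        r"PCI\\VEN_([0-9A-F]{4})&DEV_([0-9A-F]{4})"
        r"&SUBSYS_([0-9A-F]{8})&REV_([0-9A-F]{2})", hwid)
    if not m:
        raise ValueError(f"not a PCI hardware ID: {hwid}")
    return dict(zip(("vendor", "device", "subsys", "rev"), m.groups()))

a = parse_pci_id(r"PCI\VEN_8086&DEV_2824&SUBSYS_514D8086&REV_02")
b = parse_pci_id(r"PCI\VEN_10DE&DEV_0054&SUBSYS_B0031458&REV_F3")

# Different vendors (8086 = Intel, 10DE = NVIDIA): nothing about these
# IDs suggests the driver installed for A can drive B.
print(a["vendor"], b["vendor"], a["vendor"] == b["vendor"])
```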
This is *not guaranteed* to FAIL for every move to a new machine ... hey,
Windows is pretty smart!
... but, it *is guaranteed* to fail for SOME
cases. That's when you see errors like the STOP 0x7B described in KB
article 314082 (http://support.microsoft.com/kb/314082). Changing the
HAL won't fix that problem; you need a whole new IDE driver (or
whatever). So
as I described, it's not a case of "This will never work"; it's a case of
"Sometimes this will work, and sometimes this will fail". Once we accept
that, then we are in the world of risk management - what frequency of
failures can we expect, relative to non-failures? And what level of
frequency is acceptable to us, given other extraneous (non-technical)
factors - like how much time we are willing to spend repairing the cases
that don't work.
If the board uses exactly the same chipset, then: Yes, the chances of
getting a successful move are fairly good (eg P965 to P965). If the new
board uses a similar chipset in the same family, the chances are good but
not as high (eg move from 915P to P965). If the chipset is from a
different vendor (eg nForce4 to P965) the chances of a successful move
are starting to get pretty low.
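The numbers in that risk assessment are gut feel, of course, not measurements; but the decision rule itself is simple enough to write down. A toy sketch (the function name and the verbal ratings are my own invention, purely to frame the trade-off from the text):

```python
# Purely illustrative: encode the rough "how likely is the move to work"
# heuristic from the text. The ratings are informal judgments, not data.

def move_success_estimate(old_chipset, new_chipset, same_vendor):
    if old_chipset == new_chipset:
        return "fairly good"      # eg P965 -> P965
    if same_vendor:
        return "good, but lower"  # eg 915P -> P965, same vendor family
    return "pretty low"           # eg nForce4 -> P965, different vendor

print(move_success_estimate("P965", "P965", True))
print(move_success_estimate("nForce4", "P965", False))
```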
Most of Windows' hardware detection happens at a level which requires no
user interaction; so many users - even quite technical users working up in
user-mode applications - are fairly oblivious to it. This is a good thing,
and how it should be. But once you start writing device drivers, or doing
any kind of hardware debugging down in kernel mode, you realise there is a
ton of stuff that appears to happen "by magic", but in fact is very
complex, detailed, fiddly and sometimes fragile work.
The common home user is not in a very good position to make these kinds of
assessments. So - IMHO - their best bet is to stick to reliable and proven
safe techniques. Back up their data, and make a clean installation of
Windows from scratch. This is what I always do, myself.
But there's no clear right-or-wrong answer. So ... we can disagree, and
we'll still both be correct!
(or, um, both wrong!)