PC/AT 8259 PIC Operation

michael.young

Sorry for the long post... I've searched for info on this (here and
elsewhere), but I haven't found any answers (the only previous post, by
Zdenek Sojka, asked similar questions, but nobody replied)... if you
know of a more appropriate forum for these questions, please let me
know. (BTW - I have read the datasheet multiple times - certain
details are still not clear to me.) And, yes, I know that APIC is the
"current" architecture for PCs, but embedded systems still use PC/AT
and PC104 architectures.

Considering a standard PC/AT-compatible architecture (PIC is
edge-triggered, no AEOI, fully nested mode, IRQ lines are normally
pulled high on the ISA bus), are the following assertions correct?

1) My understanding is that a hw device (board in an ISA slot) is
supposed to trigger an interrupt by pulling the assigned IRQ line low.
It must not do this again until the device is "serviced" /
communicated with by the interrupt service routine. The reason I say
this is that there is no way for the device to know when the CPU
acknowledges the interrupt request (the INTA* line is not on the ISA
bus, nor can the devices "see" the ISR in the PIC), and pulling the
line low again could cause a "phantom" interrupt (and the requests are
"lost").

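To make the handshake concrete, here is a minimal sketch (in C) of
what I believe the driver side of this looks like. The device address
is made up (DEV_STATUS at 0x300 is purely hypothetical), the outb/inb
helpers assume GCC inline assembly and ring-0 port access, and the
only point is the ordering: touch the device first (that is what
permits it to raise its next edge), then EOI the PIC.

#define DEV_STATUS 0x300 /* hypothetical card register; reading acks the card */
#define PIC1_CMD   0x20  /* master 8259 command port (standard PC/AT address) */

static inline void outb(unsigned short p, unsigned char v)
{ __asm__ volatile ("outb %0, %1" : : "a"(v), "Nd"(p)); }
static inline unsigned char inb(unsigned short p)
{ unsigned char v; __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(p)); return v; }

void irq5_body(void)
{
    unsigned char status = inb(DEV_STATUS); /* service the card; it may now
                                               legitimately request again */
    (void)status;                           /* ... act on the data here ... */
    outb(PIC1_CMD, 0x20);                   /* OCW2: non-specific EOI */
}
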
2) If an interrupt is masked using the IMR (set via OCW1), then a
rising edge on the IRQ line does not cause a bit to be set in the IRR.
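
For reference, this is how I understand OCW1 is programmed: a write to
the PIC's data port (0x21 for the master, 0xA1 for the slave) simply
replaces the IMR. A sketch for the master only, with the same outb/inb
assumptions as in the previous sketch:

#define PIC1_DATA 0x21 /* master 8259 data port; OCW1 reads/writes the IMR */

static inline void outb(unsigned short p, unsigned char v)
{ __asm__ volatile ("outb %0, %1" : : "a"(v), "Nd"(p)); }
static inline unsigned char inb(unsigned short p)
{ unsigned char v; __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(p)); return v; }

void mask_irq(unsigned irq)   /* irq 0..7 on the master */
{ outb(PIC1_DATA, inb(PIC1_DATA) | (unsigned char)(1u << irq)); }

void unmask_irq(unsigned irq)
{ outb(PIC1_DATA, inb(PIC1_DATA) & (unsigned char)~(1u << irq)); }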


3) The PIC will assert its INT output (INTR input on CPU) until the CPU
acknowledges the interrupt (asserts INTA*). If the CPU has interrupts
disabled (IF = 0), then this acknowledgement will not occur until the
CPU reenables interrupts (sets IF = 1, either via STI or POPF).

4) If an interrupt is being serviced (bit set in ISR, interrupt
service routine executing on the CPU), and that routine hasn't
disabled interrupts, and a
higher priority interrupt occurs, then the CPU will acknowledge the new
interrupt (causing another ISR bit, corresponding to the higher
priority interrupt, to be set) and execute the appropriate interrupt
routine ("preempting" the lower priority interrupt, thereby "nesting"
the higher priority interrupt execution within the lower priority
interrupt service routine). The non-specific EOI clears the highest
priority bit set in the ISR, since it is the one corresponding to the
most recent interrupt.
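
Written out, the two EOI flavors (OCW2, sent to the command port) look
like this; the specific form should only matter if the priority
structure has been altered. Same assumptions as the sketches above:

#define PIC1_CMD 0x20 /* master 8259 command port */

static inline void outb(unsigned short p, unsigned char v)
{ __asm__ volatile ("outb %0, %1" : : "a"(v), "Nd"(p)); }

void eoi_nonspecific(void)        /* clears the highest-priority ISR bit */
{ outb(PIC1_CMD, 0x20); }

void eoi_specific(unsigned level) /* clears exactly the named ISR bit */
{ outb(PIC1_CMD, (unsigned char)(0x60 | (level & 7))); }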

5) If an interrupt is being serviced and its service routine has
disabled interrupts,
then a higher priority interrupt occurring before interrupts are
enabled will not be acknowledged until after interrupts are enabled
again (it will cause the PIC to assert its INT output - the INTA*
won't be asserted until interrupts are enabled). When IF is set to 1
again, the highest priority "pending" interrupt (highest priority bit
set in the IRR) is set in the ISR, and the corresponding interrupt routine
is executed. So there could be a significant delay to the interrupt
acknowledgement / service, but it will eventually occur (based on
assertion 3, above).

6) If an interrupt is being serviced (i.e., it has been acknowledged by
the CPU and the corresponding ISR bit has been set), then another lower
(or equal) priority interrupt request sets the bit in the IRR, but the
PIC does not pass the request on to the CPU (no assertion of the INT
output; this behavior is independent of the setting of the IF flag in
the CPU). Therefore, the request does not cause the bit to be set in
the ISR. When the ISR bit is cleared (via EOI command), I saw nothing
in the specs that indicates that the highest priority bit set in the IRR
then causes the interrupt sequence to occur (i.e., no INT / INTA*,
IRR=>ISR, interrupt routine execution). Therefore, I can only conclude
that lower priority interrupts are "lost" when they occur during the
execution of a higher priority interrupt!
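
One thing the datasheet does give us is a way to look: OCW3 allows
software to read back the IRR and ISR (write 0x0A or 0x0B to the
command port, then read that same port). So an interrupt routine could
at least poll the IRR just before its EOI to see whether lower
priority requests are still latched. A sketch, same assumptions as
above:

#define PIC1_CMD 0x20 /* master 8259 command port */

static inline void outb(unsigned short p, unsigned char v)
{ __asm__ volatile ("outb %0, %1" : : "a"(v), "Nd"(p)); }
static inline unsigned char inb(unsigned short p)
{ unsigned char v; __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(p)); return v; }

unsigned char read_irr(void) /* OCW3: 0x0A selects the IRR for the next read */
{ outb(PIC1_CMD, 0x0A); return inb(PIC1_CMD); }

unsigned char read_isr(void) /* OCW3: 0x0B selects the ISR for the next read */
{ outb(PIC1_CMD, 0x0B); return inb(PIC1_CMD); }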

So, based on these assertions (particularly 1 and 6):

a) How does a lower priority device know that its request was "ignored
/ lost" (due to the ongoing service of a higher priority interrupt)?

b) How will it know that it needs to retrigger the interrupt and it
won't cause a "phantom" interrupt?

c) How can a device be sure that it will be serviced?

d) Is there something that an interrupt routine or device must do to
ensure that interrupts aren't "lost"?

Thanks for any help!
 

michael.young

The following was found at
"http://www.beyondlogic.org/interrupts/interupt.htm". The one
clarification I'd really like on the text below is how the PIC
determines the "next highest priority interrupt" - is this done from
the IRR bits or only from the (remaining) ISR bits? If from the IRR,
then assertion 6 (in my previous post) is invalid; if from the ISR,
then I'll need to keep looking for an answer to my questions... Again,
the info is "sketchy"/simplistic, as it states that the entire ISR is
reset, not just the highest priority bit - i.e., the article assumes
that the interrupt routine immediately disabled interrupts and
succeeded in doing so before any higher priority interrupt was
acknowledged by the CPU, so that only one bit was set in the ISR.

****
"Once your ISR has done everything it needs, it sends an End of
Interrupt (EOI) to the PIC, which resets the In-Service Register. If
the request came from PIC2, then EOI's are required to be sent to both
PICs. The PIC will then determine the next highest priority interrupt
and repeat the same process. If no Interrupt Requests are present, then
the PIC waits for the next request before interrupting the processor."
****
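
The dual-EOI rule from that quote, as code (the slave's command port
is 0xA0; since the slave cascades into the master's IRQ2, the master
always needs an EOI as well). Same helper assumptions as my earlier
sketches:

#define PIC1_CMD 0x20 /* master 8259 command port */
#define PIC2_CMD 0xA0 /* slave 8259 command port */

static inline void outb(unsigned short p, unsigned char v)
{ __asm__ volatile ("outb %0, %1" : : "a"(v), "Nd"(p)); }

void send_eoi(unsigned irq) /* irq 0..15 */
{
    if (irq >= 8)
        outb(PIC2_CMD, 0x20); /* non-specific EOI to the slave first */
    outb(PIC1_CMD, 0x20);     /* then the master (covers the IRQ2 cascade) */
}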
 

michael.young

I found the following at
"http://webster.cs.ucr.edu/AoA/DOS/pdf/ch17.pdf"; this is somewhat
contrary to my explanation in assertion 4 (see previous post) - which
will still stand if the routine enables interrupts (rather than failing
to disable them). The text below also corroborates the comment in the
Intel datasheet (under the paragraph for "Fully Nested Mode") - "While
the IS bit is set, all further interrupts of the same or lower priority
are inhibited, while higher levels will generate an interrupt (which
will be acknowledged only if the microprocessor internal interrupt
enable flip-flop has been re-enabled through software)." So it appears
that HW interrupts are disabled (IF is set to zero) automatically when
the CPU acknowledges the interrupt, after it pushes the flags onto the
stack - I presume this because IF must be restored (interrupts
re-enabled) by the IRET.

****
"There is one minor difference between how the 80x86 processes hardware
interrupts and other types of interrupts - upon entry into the
hardware interrupt service routine, the 80x86 disables further hardware
interrupts by clearing the interrupt flag. Traps and exceptions do not
do this. If you want to disallow further hardware interrupts within a
trap or exception handler, you must explicitly clear the interrupt flag
with a cli instruction. Conversely, if you want to allow interrupts
within a hardware interrupt service routine, you must explicitly turn
them back on with an sti instruction. Note that the 80x86's interrupt
disable flag only affects hardware interrupts. Clearing the interrupt
flag will not prevent the execution of a trap or exception."
****
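
Putting that together, the nesting in my assertions 4 and 5 is opt-in
from the handler's point of view. A sketch (GCC inline assembly, ring
0 assumed; the body is just a placeholder):

static inline void sti(void) { __asm__ volatile ("sti"); } /* set IF */
static inline void cli(void) { __asm__ volatile ("cli"); } /* clear IF */

void irq_handler_body(void)
{
    /* IF was cleared automatically when this hardware interrupt was taken */
    sti(); /* allow higher priority IRQs to nest (assertion 4 behavior) */
    /* ... service the device ... */
    cli(); /* close the window again before EOI / IRET (assertion 5 timing) */
}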

I have also seen some references to the IRR bits being set even when
the IMR bit is set - the logic is supposedly that the IMR masks the
output of the IRR, not the input. (This is contrary to my assertion
#2.) This may have significant ramifications if the IMR bit is cleared
while the IRR bit is set (particularly if the IRR bit was set while
the mask was in effect). There are a number of websites discussing
this, but nothing definitive, as the sites contradict each other -
there is simply a lot of misinformation (bad data) out there; many of
the sites are simply incorrect or make a number of (unstated)
assumptions in their explanations.
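
Given the contradictions, the only way I can see to settle this is to
test it on real hardware. A sketch of the experiment I have in mind
(DOS or bare metal, ring 0, GCC inline assembly assumed; the delay
loop is a crude placeholder that should outlast one ~55 ms tick of the
18.2 Hz timer): mask IRQ0, wait, then read the IRR via OCW3. If bit 0
is set, the IMR masks the IRR's output, not its input.

#define PIC1_CMD  0x20 /* master 8259 command port */
#define PIC1_DATA 0x21 /* master 8259 data port (IMR) */

static inline void outb(unsigned short p, unsigned char v)
{ __asm__ volatile ("outb %0, %1" : : "a"(v), "Nd"(p)); }
static inline unsigned char inb(unsigned short p)
{ unsigned char v; __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(p)); return v; }

int irr_latches_while_masked(void)
{
    unsigned char saved = inb(PIC1_DATA);
    unsigned char irr;
    outb(PIC1_DATA, saved | 0x01);  /* OCW1: mask IRQ0 (the system timer) */
    for (volatile long i = 0; i < 10000000L; i++)
        ;                           /* wait well past one timer tick */
    outb(PIC1_CMD, 0x0A);           /* OCW3: select IRR for reading */
    irr = inb(PIC1_CMD);
    outb(PIC1_DATA, saved);         /* restore the original IMR */
    return (irr & 0x01) != 0;       /* nonzero => IRR latched while masked */
}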
 
