The AGP is a ‘special’ PCI bus that hangs off the northbridge and runs at a
higher data rate because its data lines are not shared with anything else.
One could think of it as a high-speed ‘expressway’ in a city. You can run
faster when the light is green, but you still need to have the green at the
next ‘intersection’ (when your bus grant expires and your burst transfer is
still unfinished) or you’ll have to stop. The arbitration is done by the
northbridge (aka host-PCI bridge) and therefore is common for all PCI busses
(it has to be due to the PCI ‘protocol’ allowing a bus grant to be given to
only one device at any given time).
Of course the arbiter may think that ‘AGP device must be important’ and
favor it over other devices. If an AGP card has a high ‘min grant’ request
the arbiter might just satisfy it at the expense of others and give it
‘longer’ green lights. The PCI spec before 2.2 was kinda vague on how long
one device can hold a bus, so a 2.1-compliant card can hold it for quite a
long time. Or the AGP card itself may decide it is so important that it keeps
trying to finish the transaction regardless of the arbiter removing the grant.
Finally, I don’t think QNX actually drives the AGP bus to its full
capability (it is treated just like a regular PCI bus, with no AGP-specific
features being utilized), which may aggravate the situation for some cards.
“Art Hays” <email@example.com> wrote in message
But the graphics card is on the AGP bus. Can it still tie up
the PCI slots?
“Igor Kovalenko” <firstname.lastname@example.org> wrote in message
news:bei8gs$av8$email@example.com...
Maybe, maybe not.
According to the PCI spec (2.2), no bus master may hold the bus for more than
8 PCI cycles in a row (well, 8 + 1, because it may hold it for one more cycle
after GNT# was deasserted). This is of course non-enforceable, since a
non-compliant device can hold the bus even after the arbiter deasserts the grant.
The spec also recommends that all devices that detect non-compliant
behavior on the bus should cease activity (in the hope that the bad guy gets
what it wanted and goes away). So the bad guys rule.
The spec also does not say how exactly the arbitration must be done.
Most x86 implementations use a ‘fairness’ approach. It is adjustable to some
extent (and some designs have a programmable arbiter so that the arbitration
scheme may be changed by software). If memory serves me well, each device
has ‘Min Gnt’ and ‘Max Lat’ parameters in the PCI header, which are treated
by the PCI BIOS as ‘requests’, based on which devices get assigned an
‘effective’ Latency Timer parameter and (ideally) an IRQ level. The arbiter is
supposed to make its decision about granting the bus based on its idea of
‘fairness’ and the assigned latency parameter. This whole scheme works
essentially like ‘cooperative multitasking’, with those parameters being some
kind of equivalent of priority. They don’t call it ‘Wintel’ for no reason.
Then of course, the driver may be the bad guy. For all I know, video and
realtime do not get along well. Not on x86 anyway; the platform was simply not
designed with realtime in mind.
“Art Hays” <firstname.lastname@example.org> wrote in message
news:behkjj$j81$email@example.com...
I am seeing interrupt latencies of up to 120 microseconds if I resize a
pterm that is displaying continuous output, using devg-matroxg. If I switch
the video card to the ati-rage128 the issue goes away.
The interrupt I am measuring the latency of is at IRQ9, and the video
interrupt is at a lower priority, so it’s not a priority issue. devg-matroxg
must be disabling all interrupts for this time. Running 6.2.1 on a PC.
National Institutes of Health