Hello,
I am writing a real-time control loop. In this loop I use InterruptAttach() to attach to INT0 (the timer interrupt, with the tick set to 100us) with a handler that returns an event every fifth tick, i.e. every 500us.
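For reference, the setup looks roughly like this (simplified and typed from memory, so the exact arguments may differ; timevent and handler are the ones in the code further down, intr_id is just an illustrative name, and error checking is omitted):

// Needs <sys/neutrino.h> and <sys/siginfo.h>.
struct _clockperiod period = { .nsec = 100000, .fract = 0 };     // 100us timer tick
ClockPeriod(CLOCK_REALTIME, &period, NULL, 0);

ThreadCtl(_NTO_TCTL_IO, 0);                  // I/O privileges, needed before InterruptAttach()
SIGEV_INTR_INIT(&timevent);                  // InterruptWait() expects a SIGEV_INTR event
int intr_id = InterruptAttach(0, handler, NULL, 0, 0);   // INT0 = timer interrupt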
In my control loop I call InterruptWait(), do some calculations, and then make a hardware call to output to the DAC. Using ClockCycles() I determined that one pass through the control loop usually takes around 500us ±10us (which is fine). Except that every now and then the DAC call takes a bit of extra time, which pushes that pass to > 510us. The first weird thing: the calculations in my control loop (including the DAC call) take < 200us, so most of each 500us period is spent blocked in InterruptWait(). Even if the DAC takes 40us longer, there should still be roughly 300us of slack, and the overall loop shouldn't go past 510us. The second weird thing: right after a pass where the loop takes > 510us, the next pass is always well under 500us, such that the average of those two passes is ~500us.
Is the timed interrupt screwing up, or is ClockCycles()? Does anyone have any guesses as to what's happening? Is there any way to fix this? (I've also sketched a check I'm considering at the end of this post.)
Thanks.
#include <stdio.h>
#include <stdint.h>
#include <sys/neutrino.h>
#include <sys/siginfo.h>
#include <sys/syspage.h>

struct sigevent timevent;
volatile unsigned timctr;

// Interrupt handler for INT0; the timer tick is set to 100us via ClockPeriod().
// Returns the event on every 5th tick, i.e. every 500us.
const struct sigevent *handler(void *area, int id)
{
    if (++timctr == 5) {
        timctr = 0;            // reset, otherwise the event only fires once
        return &timevent;
    }
    return NULL;
}
// Control loop
uint64_t cps, calcClock, loopClock;
float fBuffer[NUM_SAMPLES][2];
int i;

cps = SYSPAGE_ENTRY(qtime)->cycles_per_sec;
for (i = 0; i < NUM_SAMPLES; i++)
{
    loopClock = ClockCycles();
    InterruptWait(0, NULL);
    calcClock = ClockCycles();
    // Do calculations
    …
    // Call hardware to output to DAC
    …
    // Save timed values in seconds; cast before dividing so the
    // uint64_t division doesn't truncate to 0
    fBuffer[i][0] = (double)(ClockCycles() - calcClock) / cps;   // calc + DAC time
    fBuffer[i][1] = (double)(ClockCycles() - loopClock) / cps;   // full loop period
}
// Output values
for (i = 0; i < NUM_SAMPLES; i++)
    printf("%f %f\n", fBuffer[i][0], fBuffer[i][1]);
*** Output example ***
0.000064 0.000498
0.000063 0.000492
0.000065 0.000500
0.000140 0.000584
0.000066 0.000433
0.000064 0.000497
0.000064 0.000506
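One check I'm considering, to separate the two possibilities: timestamp each wakeup against a single start time instead of per-iteration, so I can see whether the 500us event itself drifts or only the wakeup latency jitters. This is just a sketch (tWake and t0 are made-up names; cps, i and NUM_SAMPLES are the same as above):

// Log absolute wakeup times since the start of the run. If these step by
// ~0.000500 even across the slow pass, the event is arriving on time and the
// extra delay is between the event firing and my thread getting to run.
uint64_t t0, tWake[NUM_SAMPLES];
t0 = ClockCycles();
for (i = 0; i < NUM_SAMPLES; i++)
{
    InterruptWait(0, NULL);
    tWake[i] = ClockCycles() - t0;       // cycles since start of run
    // ... same calculations and DAC output as above ...
}
for (i = 0; i < NUM_SAMPLES; i++)
    printf("%f\n", (double)tWake[i] / cps);   // wakeup time in seconds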