Problem with clock_gettime( CLOCK_REALTIME, &TimeStart)

I have two threads. In each thread I use clock_gettime to measure how long a data transfer takes (the transfers take anything from 2 to 10 seconds). If only one thread is running I get a sensible value. If both threads are running I start getting unrealistic values. What could cause this?

(I am using QNX 6.3 on an MGT5200 PowerPC.)

The function is documented as thread-safe, so it should work. Either you are using it wrong (could you post some code?) or there is a bug in the library.

Thanks for the reply. The code looks like this (both threads are identical apart from variable names):

struct timespec NANDstart, NANDstop; 
T_DOUBLE d_Accum;

........

//-----------------------------------------------
// Get start time
//-----------------------------------------------
if( clock_gettime( CLOCK_REALTIME, &NANDstart) == -1 ) 
{
    fprintf( stderr,"NAND: Could not get clock time!\n" );
}

........
........ (Data Transfer)
........
 
//-----------------------------------------------
// Get stop time
//-----------------------------------------------
if( clock_gettime( CLOCK_REALTIME, &NANDstop) == -1 ) 
{
    fprintf( stderr,"NAND: Could not get clock time!\n" );
}

d_Accum  = (double)( NANDstop.tv_sec  - NANDstart.tv_sec );
d_Accum += (double)( NANDstop.tv_nsec - NANDstart.tv_nsec ) / 1000000000.0;
                                        
fprintf(stderr, "NAND: Read   66MBytes in %5.2f secs => %5.2f MBytes/sec \n\n", d_Accum, (65.999/d_Accum));

That was my original suspicion; are you sure you are really using different variable names? ;-)

I'd hoped that was the problem too, but unfortunately it's not the case: the variable names are most definitely different!

I wrote a small test case on an x86 and could not reproduce your problem. Either it's a PowerPC-specific bug or there is a bug in your code.

What about writing a small test case to reproduce the problem on your setup?

Well, I am a little wiser now but not much!

I wrote a small test case to reproduce the problem (as you suggested) and it works fine, not a problem in sight. So… it's not a problem with the PowerPC or with the library, and it's very probably something in our code causing the problem. Unfortunately I've no idea what at the moment…

When the problem occurs it looks like the clock is running too slowly. Is it possible for something in the code to affect the operation of the clock? I presume the clock always has the highest priority?

Thanks in advance for any suggestions!

On x86 the clock is driven by interrupt 0. It could be slowed down if an interrupt of higher priority takes too much CPU (though the machine would appear to freeze) or if interrupts are disabled for a long time.

On PowerPC I don't know, but it must be hardware based. Again, build a small test case, or better, run the test program you already wrote while your real application runs. If the test program still behaves properly while your application does not, then you know it's not a problem with the system clock.
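
For reference, a minimal sketch of such a sanity check (purely illustrative; not the actual test program from this thread, and the 5-second interval is arbitrary). It sleeps for a fixed interval and compares that against the elapsed time reported by CLOCK_REALTIME; if the system clock is losing ticks under load, the measured value comes out noticeably shorter than the nominal sleep time.

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main( void )
{
    struct timespec start, stop;
    double elapsed;

    for( ;; )
    {
        if( clock_gettime( CLOCK_REALTIME, &start ) == -1 )
        {
            perror( "clock_gettime" );
            return 1;
        }

        sleep( 5 );     // nominal 5-second interval

        if( clock_gettime( CLOCK_REALTIME, &stop ) == -1 )
        {
            perror( "clock_gettime" );
            return 1;
        }

        elapsed  = (double)( stop.tv_sec  - start.tv_sec );
        elapsed += (double)( stop.tv_nsec - start.tv_nsec ) / 1000000000.0;

        printf( "slept 5 s, CLOCK_REALTIME measured %.3f s\n", elapsed );
    }
}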

I noticed you used fprintf(stderr). Although fprintf is thread safe, the file descriptor isn't, hence in theory you should protect usage of the file descriptor. Probably not related to your problem though.
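
If you did want to serialize the output, one simple way (just a sketch; the helper name is made up for illustration) is to wrap the fprintf calls from both threads in a shared mutex:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t log_mutex = PTHREAD_MUTEX_INITIALIZER;

// Serialize messages from both threads so they cannot interleave on stderr
static void log_error( const char *msg )
{
    pthread_mutex_lock( &log_mutex );
    fprintf( stderr, "%s", msg );
    pthread_mutex_unlock( &log_mutex );
}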

After much testing, checking HW specs and exchanging a few emails with QNX, I am now a little wiser.

CLOCK_REALTIME on the MGT5200 is generated from the decrementer in the core. Unfortunately the decrementer interrupt has a lower priority than the external interrupt. The load on the external interrupt was so high that the interrupt from the decrementer wasn't always serviced, and hence CLOCK_REALTIME ran slowly.
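
To put rough, purely illustrative numbers on it: if the decrementer nominally ticks every 1 ms but 10% of those interrupts are lost to higher-priority interrupt load, CLOCK_REALTIME advances only about 0.9 s per real second. A transfer that really takes 10 s is then reported as roughly 9 s, and the computed throughput comes out about 11% too high.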

For anyone who has the same problem:
The problem was solved by writing a routine in assembler which reads directly from the time base register in the core. This routine was then used instead of clock_gettime(…).
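
For anyone trying the same approach, here is a rough sketch of what such a routine can look like using GCC-style inline assembly (this is not the actual routine from this project, and TIMEBASE_FREQ_HZ is a placeholder; the real time base frequency depends on your board's bus clock setup):

#include <stdint.h>

#define TIMEBASE_FREQ_HZ 33000000ULL    // placeholder, board specific

// Read the 64-bit PowerPC time base (TBU:TBL)
static uint64_t read_timebase( void )
{
    uint32_t hi, lo, hi2;

    // Re-read the upper half until it is stable, in case TBL
    // rolls over into TBU between the two reads
    do
    {
        __asm__ __volatile__( "mftbu %0" : "=r"(hi) );
        __asm__ __volatile__( "mftb  %0" : "=r"(lo) );
        __asm__ __volatile__( "mftbu %0" : "=r"(hi2) );
    } while( hi != hi2 );

    return ( (uint64_t)hi << 32 ) | lo;
}

// Elapsed seconds between two time base samples
static double timebase_to_secs( uint64_t start, uint64_t stop )
{
    return (double)( stop - start ) / (double)TIMEBASE_FREQ_HZ;
}

Used in place of the clock_gettime() pair above, the start/stop samples and the conversion give the same elapsed-seconds value without depending on the decrementer interrupt being serviced.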

Thanks Mario for the help & tips!

You could also have let the other interrupts run while your long ISR was still processing.