Timer quantization error

Hi,

We are trying to run a task at 100Hz, so we wait 10 ms between the start
times of consecutive frames. But every 6.53 seconds one frame shifts by 1 ms,
i.e. it starts 11 ms after the previous frame’s start time.

We have used a SIGEV_SIGNAL_CODE signal and connected it to a timer. We
noticed that the source of the problem is timer quantization error. When the
clock period is set to 1 ms, it is actually set to 999.847 microseconds. I
have also tried setting this value to 500, 200, and 100 microseconds, but the
actual tick rate was always slightly different from what I had set. Is there
any way to handle this problem?

Thanks,

Mustafa Yavas

Mustafa Yavas wrote:

We have used a SIGEV_SIGNAL_CODE signal and connected it to a timer. We
noticed that the source of the problem is timer quantization error. When the
clock period is set to 1 ms, it is actually set to 999.847 microseconds. I
have also tried setting this value to 500, 200, and 100 microseconds, but the
actual tick rate was always slightly different from what I had set. Is there
any way to handle this problem?

Don’t use the Posix timers. You have to hook the system tick and count them, then generate your own event.

It should be safe to use InterruptAttachEvent() with this IRQ, as it won’t be shared with a device.
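
Something like this, perhaps (an untested sketch; it assumes x86, where the system tick is IRQ 0, and since InterruptAttachEvent() masks the IRQ until InterruptUnmask(), the waiting thread should run at high priority and unmask promptly):

/* Untested sketch: hook the system tick (IRQ 0 on x86), count ticks,
 * and fire a pulse every 10 ticks for a 100Hz frame rate. */
#include <sys/neutrino.h>
#include <sys/netmgr.h>
#include <sys/siginfo.h>

#define TICK_IRQ        0    /* x86 system timer; differs on other boards */
#define TICKS_PER_FRAME 10   /* 10 x 1 ms tick = 100Hz                    */

int main(void)
{
    struct sigevent ev;
    int id, chid, coid, ticks = 0;

    ThreadCtl(_NTO_TCTL_IO, 0);          /* privity for interrupt calls */

    chid = ChannelCreate(0);             /* a worker MsgReceive()s here */
    coid = ConnectAttach(ND_LOCAL_NODE, 0, chid, _NTO_SIDE_CHANNEL, 0);

    SIGEV_INTR_INIT(&ev);
    id = InterruptAttachEvent(TICK_IRQ, &ev, _NTO_INTR_FLAGS_TRK_MSK);

    for (;;) {
        InterruptWait(0, NULL);          /* one tick has occurred       */
        InterruptUnmask(TICK_IRQ, id);   /* re-enable after every event */
        if (++ticks == TICKS_PER_FRAME) {
            ticks = 0;
            /* frame boundary: wake the 100Hz worker with a pulse */
            MsgSendPulse(coid, -1, _PULSE_CODE_MINAVAIL, 0);
        }
    }
}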


Evan

Evan Hillas wrote:

Don’t use the Posix timers. You have to hook the system tick and count them, then generate your own event.

It should be safe to use InterruptAttachEvent() with this IRQ, as it won’t be shared with a device.

Posix timers are not designed to give a regular interval. They are designed to give an amount of time. There is no guarantee it will be exact for any one period.


Evan

Hello Mustafa,

Please check this link:


http://www.qnx.com/developers/articles/article_826_2.html



Regards,

Yuriy



“Mustafa Yavas” <mustafayavas@gmail.com> wrote in message
news:fa7b1s$3fb$1@inn.qnx.com

[...]

Mustafa Yavas <mustafayavas@gmail.com> wrote:

[...]

We have used a SIGEV_SIGNAL_CODE signal and connected it to a timer. We
noticed that the source of the problem is timer quantization error. When the
clock period is set to 1 ms, it is actually set to 999.847 microseconds. I
have also tried setting this value to 500, 200, and 100 microseconds, but the
actual tick rate was always slightly different from what I had set. Is there
any way to handle this problem?

Try using CLOCK_MONOTONIC, and using a delay that is some multiple of
the actual ClockPeriod() that the system is using.

That is, not a 10 ms interval, but a 10 * 0.999847 ms interval.
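
(That also explains your numbers: 10 ms is about 10.0015 ticks of 999.847 µs,
so the ~1.53 µs error per frame accumulates to a full extra tick roughly
every 653 frames, i.e. every 6.53 seconds.)

A sketch of the above, untested, using a signal-based event:

/* Untested sketch: a 100Hz repeating timer on CLOCK_MONOTONIC whose
 * period is an exact multiple of the real system tick. */
#include <signal.h>
#include <stdint.h>
#include <time.h>
#include <sys/neutrino.h>
#include <sys/siginfo.h>

int main(void)
{
    struct _clockperiod cp;
    ClockPeriod(CLOCK_REALTIME, NULL, &cp, 0);  /* query, don't set      */

    uint64_t ns = 10ULL * cp.nsec;              /* e.g. 10 * 999847 ns   */

    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    sigprocmask(SIG_BLOCK, &set, NULL);         /* take it synchronously */

    struct sigevent ev;
    SIGEV_SIGNAL_INIT(&ev, SIGUSR1);

    timer_t tid;
    timer_create(CLOCK_MONOTONIC, &ev, &tid);

    struct itimerspec its;
    its.it_value.tv_sec  = ns / 1000000000;
    its.it_value.tv_nsec = ns % 1000000000;
    its.it_interval      = its.it_value;        /* repeat, same period   */
    timer_settime(tid, 0, &its, NULL);

    for (;;) {
        siginfo_t info;
        sigwaitinfo(&set, &info);               /* one frame per signal  */
        /* ... do the 100Hz work here ... */
    }
}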

-David

David Gibbs
QNX Training Services
dagibbs@qnx.com

David Gibbs wrote:

Try using CLOCK_MONOTONIC, and using a delay that is some multiple of
the actual ClockPeriod() that the system is using.

That is, not a 10 ms interval, but a 10 * 0.999847 ms interval.

I wouldn’t trust that.

Hey David,

The last documentation I read said CLOCK_MONOTONIC wasn’t
implemented yet. I take it from your post it is now?

Mitchell

Mustafa,

try to use the timer interrupt IRQ8 … that interrupt is independent
of the OS.
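
Roughly like this (an untested sketch; note the RTC’s periodic rates are 32768 >> (rate-1) Hz, so an exact 100Hz is not available and you still have to count ticks, e.g. at 1024Hz):

/* Untested sketch: program the PC RTC (MC146818) for a periodic
 * interrupt on IRQ 8 and attach to it. */
#include <sys/neutrino.h>
#include <sys/siginfo.h>
#include <hw/inout.h>

#define RTC_IRQ  8
#define RTC_ADDR 0x70
#define RTC_DATA 0x71

int main(void)
{
    struct sigevent ev;
    int id;

    ThreadCtl(_NTO_TCTL_IO, 0);   /* I/O port and interrupt privity */

    /* register A, low nibble = rate select: 6 -> 32768 >> 5 = 1024Hz */
    out8(RTC_ADDR, 0x0A);
    out8(RTC_DATA, (in8(RTC_DATA) & 0xF0) | 0x06);

    /* register B, bit 6 = PIE: enable the periodic interrupt */
    out8(RTC_ADDR, 0x0B);
    out8(RTC_DATA, in8(RTC_DATA) | 0x40);

    SIGEV_INTR_INIT(&ev);
    id = InterruptAttachEvent(RTC_IRQ, &ev, _NTO_INTR_FLAGS_TRK_MSK);

    for (;;) {
        InterruptWait(0, NULL);
        out8(RTC_ADDR, 0x0C);     /* reading register C acknowledges */
        (void)in8(RTC_DATA);      /* the RTC interrupt               */
        InterruptUnmask(RTC_IRQ, id);
        /* ... count 1024Hz ticks here ... */
    }
}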

Regards

–Armin

PS: Let me know if you need the code


Mustafa Yavas wrote:

[...]

maschoen <maschoen@pobox-dot-com.no-spam.invalid> wrote:

Hey David,

The last documentation I read said CLOCK_MONOTONIC wasn’t
implemented yet. I take it from your post it is now?

Maybe you should read the docs more than once every six years. :-)


Steve Reid stever@qnx.com
Technical Editor
QNX Software Systems

Evan Hillas <evanh@clear.net.nz> wrote:

David Gibbs wrote:
Try using CLOCK_MONOTONIC, and using a delay that is some multiple of
the actual ClockPeriod() that the system is using.

That is, not a 10 ms interval, but a 10 * 0.999847 ms interval.


I wouldn’t trust that.

Why would you not trust that?

Ok… missed interrupts and such can still screw with it, and it may not
run exactly on time due to quartz crystal irregularities and other effects,
but it may still be the best choice.

-David

David Gibbs
QNX Training Services
dagibbs@qnx.com

maschoen <maschoen@pobox-dot-com.no-spam.invalid> wrote:

Hey David,

The last documentation I read said CLOCK_MONOTONIC wasn’t
implemented yet. I take it from your post it is now?

It doesn’t say that any more. (6.3.0 SP2 documentation.)

-David

David Gibbs
QNX Training Services
dagibbs@qnx.com

David Gibbs wrote:

Why would you not trust that?

Two reasons:

  • It’s simply not a guaranteed method of achieving a sampling rate, by which I mean one that doesn’t add jitter. In fact, to the contrary, the articles on “tick-tock” make it clear that jitter is added. Btw, sampling is the usual reason for needing a perfectly regular trigger. On the other side of this coin is the question of why such a design isn’t using some hardware assist to perform the sampling into/out of a hardware buffer.

  • I may be out of date now but, to back up the above point, the results from clock_getres() may not exactly match the OS’s calculated interval of time per system tick, and therefore not map one-to-one with the IRQ even when the application tries to.

Evan Hillas <evanh@clear.net.nz> wrote:

David Gibbs wrote:
Why would you not trust that?


Two reasons:

  • It’s simply not a guaranteed method of achieving a sampling rate,
    by which I mean one that doesn’t add jitter. In fact, to the contrary,
    the articles on “tick-tock” make it clear that jitter is added. Btw,
    sampling is the usual reason for needing a perfectly regular trigger.
    On the other side of this coin is the question of why such a design
    isn’t using some hardware assist to perform the sampling into/out of a
    hardware buffer.

True, it isn’t a guaranteed method – but, really, nothing is. For
more precise work you do need an external hardware timer; the better
the quality of the hw timer, the better your precision.

Now, I’m not sure what you mean by “added jitter”, especially as mentioned
in the tick-tock articles.

Clearly if you use delay(), nanosleep(), sleep(), or whatever, there is
the need to start from the next tick.

But, if you use a repeating timer that is a multiple of the period
of the hardware clock (e.g. a multiple of 999847 ns on an x86), then
nothing in those articles suggests any further jitter is added.

If using such a repeating timer off CLOCK_REALTIME, there is one
possible source of jitter which is not described in either of the
tick-tock articles, and that is any use of ClockAdjust(). Using
CLOCK_MONOTONIC should avoid the jitter from ClockAdjust() as well.

  • I may be out of date now but, to back up the above point, the results
    from clock_getres() may not exactly match the OS’s calculated interval
    of time per system tick, and therefore not map one-to-one with the IRQ
    even when the application tries to.

I recommended ClockPeriod(), rather than clock_getres(), though I expect
that clock_getres() actually just calls ClockPeriod(). From what I have
seen, ClockPeriod() gives the actual value the OS is using, the actual
calculated interval.

e.g. if I set the clock period to 1ms, and then do a ClockPeriod() to
query it, I will see 999847 ns as the result.
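
In code, the round trip looks something like this (untested; 999847 is
what x86 reports for a nominal 1 ms, other hardware will differ):

/* Set the clock period to a nominal 1 ms, then read back what the OS
 * actually uses. Setting it needs appropriate privileges. */
#include <stdio.h>
#include <time.h>
#include <sys/neutrino.h>

int main(void)
{
    struct _clockperiod want = { 1000000, 0 };    /* 1 ms, in ns */
    struct _clockperiod got;

    ClockPeriod(CLOCK_REALTIME, &want, NULL, 0);  /* set   */
    ClockPeriod(CLOCK_REALTIME, NULL, &got, 0);   /* query */

    printf("requested 1000000 ns, using %lu ns\n", (unsigned long)got.nsec);
    /* on x86: "requested 1000000 ns, using 999847 ns" */
    return 0;
}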

-David

David Gibbs
QNX Training Services
dagibbs@qnx.com

David Gibbs wrote:

True, it isn’t a guaranteed method – but, really, nothing is. For
more precise work you do need an external hardware timer; the better
the quality of the hw timer, the better your precision.

What I mean is that it’s not clear the stored resolution is the only factor in the calculation for each clock tick, i.e. that we are not guaranteed that using this resolution figure will produce a flawless metronome: absolutely no accumulating error, and hence no skipped/dropped ticks in event generation.


Now, I’m not sure what you mean by “added jitter”, especially as mentioned
in the tick-tock articles.

The skipping/dropping is what I mean by intentionally added jitter, from the app’s point of view. The tick-tock article refers to this as a beat, but from the point of view of an unwanted signal it’s also jitter. I guess I could have also said noise.


I recommended ClockPeriod(), rather than clock_getres(), though I expect
that clock_getres() actually just calls ClockPeriod(). From what I have
seen, ClockPeriod() gives the actual value the OS is using, the actual
calculated interval.

e.g. if I set the clock period to 1ms, and then do a ClockPeriod() to
query it, I will see 999847 ns as the result.

Yup, and any future enhancement has a chance to break an app that relies on an exact multiple of the OS’s calculated period. You are right about using ClockPeriod() - any use of the extended data structure will show up there, but not so likely in clock_getres().


Evan

Evan Hillas <evanh@clear.net.nz> wrote:

David Gibbs wrote:
True, it isn’t a guaranteed method – but, really, nothing is. For
more precise work you do need an external hardware timer; the better
the quality of the hw timer, the better your precision.


What I mean is that it’s not clear the stored resolution is the only
factor in the calculation for each clock tick, i.e. that we are not
guaranteed that using this resolution figure will produce a flawless
metronome: absolutely no accumulating error, and hence no skipped/dropped
ticks in event generation.

Unless you miss hardware interrupts (due to extended periods of
interrupts being disabled, masked, or some hardware errors), as long
as you use a multiple of the timer frequency and use CLOCK_MONOTONIC,
there should be no further accumulated error – there is no
other factor in the calculation of the clock tick.

QNX Neutrino stores time internally as 2 64-bit nanosecond values:
a) nanoseconds since boot +
b) boot time in nanoseconds since Jan 1st 1970.

On each timer interrupt, one clock period (as reported by ClockPeriod())
is added to the nanoseconds since boot. If there is a currently active
ClockAdjust(), then the adjustment is added to the boot time as well.

Current time is a+b.

CLOCK_REALTIME timers fire based on a+b.
CLOCK_MONOTONIC timers fire based on a.
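
In other words, as a rough model (illustrative only, not actual kernel source):

/* Rough model of the timekeeping described above. */
#include <stdint.h>

static uint64_t since_boot_ns;  /* (a) nanoseconds since boot           */
static uint64_t boot_time_ns;   /* (b) boot time, ns since Jan 1st 1970 */

/* called on every timer interrupt */
void tick(uint64_t clock_period_ns, int64_t adjust_ns)
{
    since_boot_ns += clock_period_ns;  /* one ClockPeriod() per tick       */
    boot_time_ns  += adjust_ns;        /* nonzero only under ClockAdjust() */
}

uint64_t monotonic_now(void) { return since_boot_ns; }                 /* a   */
uint64_t realtime_now(void)  { return since_boot_ns + boot_time_ns; }  /* a+b */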

Now, I’m not sure what you mean by “added jitter”, especially as mentioned
in the tick-tock articles.


The skipping/dropping is what I mean by intentionally added jitter,
from the app’s point of view. The tick-tock article refers to this as
a beat, but from the point of view of an unwanted signal it’s also jitter.
I guess I could have also said noise.

Yes, but I said to use an exact multiple of the ClockPeriod(), so you won’t
get this noise/jitter/beat.

I recommended ClockPeriod(), rather than clock_getres(), though I expect
that clock_getres() actually just calls ClockPeriod(). From what I have
seen, ClockPeriod() gives the actual value the OS is using, the actual
calculated interval.

e.g. if I set the clock period to 1ms, and then do a ClockPeriod() to
query it, I will see 999847 ns as the result.


Yup, and any future enhancement has a chance to break an app that
relies on an exact multiple of the OS’s calculated period. You are
right about using ClockPeriod() - any use of the extended data structure
will show up there, but not so likely in clock_getres().

You have to be a bit smart about figuring out your exact multiple. Query
the fundamental clock period, then calculate the closest match between
integer multiples of that value and what your actual wanted period is.

You may even need, as part of your system design, to choose a different
clock period to give closer/better/more accurate results.

But, yeah, if you naively hard-code it as 15 * ClockPeriod(), that could
definitely cause problems.
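
Something like this, say (untested sketch):

/* Round a wanted interval to the nearest whole number of system ticks. */
#include <stdint.h>
#include <time.h>
#include <sys/neutrino.h>

uint64_t rounded_interval_ns(uint64_t want_ns)
{
    struct _clockperiod cp;
    ClockPeriod(CLOCK_REALTIME, NULL, &cp, 0);  /* query the real tick */

    uint64_t ticks = (want_ns + cp.nsec / 2) / cp.nsec;
    if (ticks == 0)
        ticks = 1;                              /* never round to zero */
    return ticks * cp.nsec;
}

/* e.g. rounded_interval_ns(10000000) -> 10 * 999847 = 9998470 ns when
 * the tick is 999847 ns. */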

-David

David Gibbs
QNX Training Services
dagibbs@qnx.com

David Gibbs wrote:

QNX Neutrino stores time internally as 2 64-bit nanosecond values:
a) nanoseconds since boot +
b) boot time in nanoseconds since Jan 1st 1970.

On each timer interrupt, one clock period (as reported by ClockPeriod())
is added to the nanoseconds since boot. If there is a currently active
ClockAdjust(), then the adjustment is added to the boot time as well.

Current time is a+b.

CLOCK_REALTIME timers fire based on a+b.
CLOCK_MONOTONIC timers fire based on a.

Thank you; the docs need to be clearer on this. This is the crux of my concern: we were not told this, nor were we told it would stay this way. It has nothing to do with starting alignment or possible conflicts. And yes, the pre-CLOCK_MONOTONIC days clearly would have had issues.

My view is to play it safe and directly hook the IRQ myself.


Evan

Evan Hillas wrote:

David Gibbs wrote:
Now, I’m not sure what you mean by “added jitter”, especially as
mentioned in the tick-tock articles.


The skipping/dropping is what I mean by intentionally added jitter,
from the app’s point of view. The tick-tock article refers to this
as a beat, but from the point of view of an unwanted signal it’s also
jitter. I guess I could have also said noise.

Intentionally added by the OS, i.e. the OS’s internal compensation to match what it thinks is real time.


Evan

Evan Hillas wrote:

David Gibbs wrote:
Current time is a+b.

CLOCK_REALTIME timers fire based on a+b.
CLOCK_MONOTONIC timers fire based on a.


Thank you; the docs need to be clearer on this. This is the crux
of my concern: we were not told this, nor were we told it would
stay this way. It has nothing to do with starting alignment or
possible conflicts. And yes, the pre-CLOCK_MONOTONIC days clearly
would have had issues.

Maybe a change to CLOCK_MONOTONIC is in order where it will not perform any accumulating compensation. The app’s request gets rounded to the nearest integral interrupt period and stays that way until it’s stopped or the interrupt period is adjusted.

However, this idea is not ideal either. When the interrupt period is adjusted, such sampling systems are screwed unless the new period happens to divide evenly into the requested interval.

The best idea, as mentioned by others, is to use a different IRQ source. The Posix timers are not suited for sampling and servo timing loops.


Evan

Evan Hillas <evanh@clear.net.nz> wrote:

Evan Hillas wrote:
David Gibbs wrote:
Current time is a+b.

CLOCK_REALTIME timers fire based on a+b.
CLOCK_MONOTONIC timers fire based on a.


Thank you; the docs need to be clearer on this. This is the crux
of my concern: we were not told this, nor were we told it would
stay this way. It has nothing to do with starting alignment or
possible conflicts. And yes, the pre-CLOCK_MONOTONIC days clearly
would have had issues.


Maybe a change to CLOCK_MONOTONIC is in order where it will not perform
any accumulating compensation.

CLOCK_MONOTONIC doesn’t do any accumulating compensation. The compensation
is done on the boot time.

The app’s request gets rounded to the
nearest integral interrupt period and stays that way until it’s stopped
or the interrupt period is adjusted.

I don’t think we can do that, round the application’s request that way. If
the application wants rounding, it needs to do the rounding itself.

However, this idea is not ideal either. When the interrupt period
is adjusted, such sampling systems are screwed unless the new period
happens to divide evenly into the requested interval.

Systems should NOT be adjusting the interrupt period on the fly. It should
be set once, as a system design consideration, shortly after boot time,
before anything that depends on it is configured, and never be changed.

-David

David Gibbs
QNX Training Services
dagibbs@qnx.com

Evan Hillas <evanh@clear.net.nz> wrote:

Evan Hillas wrote:
David Gibbs wrote:
Now, I’m not sure what you mean by “added jitter”, especially as
mentioned in the tick-tock articles.


The skipping/dropping is what I mean by intentionally added jitter,
from the app’s point of view. The tick-tock article refers to this
as a beat, but from the point of view of an unwanted signal it’s also
jitter. I guess I could have also said noise.


    Intentionally added by the OS, i.e. the OS’s internal compensation to
    match what it thinks is real time.

The OS doesn’t do any compensation unless you request it (with
ClockAdjust()). (Or run an application like NTP that adjusts it
with ClockAdjust().) And, as I mentioned, CLOCK_MONOTONIC is not
affected by this compensation to match real world time.

-David

David Gibbs
QNX Training Services
dagibbs@qnx.com