Tick-tock: Understanding the Neutrino microkernel’s concept of time

In this new weekly article series on the QNX Developer’s Network
(http://qdn.qnx.com), QNX experts give their insights on programming under
the QNX realtime platform. The first article in the series is:

Tick-tock: Understanding the Neutrino microkernel’s concept of time by Brian
Stecher

Comments are welcome!


Tick-tock: Understanding the Neutrino microkernel’s concept of time
By Brian Stecher

With a few hundred thousand new users and developers, we’re quite certain
that you’re all going to have programming questions and problems. To help
you on your journey towards realtime development, let’s take a quick look at
Neutrino’s concept of time.

When you’re dealing with timing, every moment within the Neutrino microkernel
is referred to as a tick. A tick is measured in milliseconds; its initial
length is determined by the clock rate of your processor: if your CPU is 40
MHz or better, a tick is 1 ms; for slower processors, a tick represents 10
ms. Programmatically, you can change the clock period via the ClockPeriod()
function.
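
For example, here’s a minimal sketch of reading and changing the tick size
with ClockPeriod() (the 100 us request is just an illustration):

#include <stdio.h>
#include <sys/neutrino.h>

int main( void )
{
    struct _clockperiod new_period = { 100000, 0 };  /* ask for a 100 us tick */
    struct _clockperiod old_period;

    /* Passing NULL for the new period just reads the current one. */
    ClockPeriod( CLOCK_REALTIME, NULL, &old_period, 0 );
    printf( "current tick: %lu ns\n", (unsigned long)old_period.nsec );

    /* Request the finer tick; as discussed below, the kernel may round
       to the closest rate the timer hardware can actually generate. */
    if ( ClockPeriod( CLOCK_REALTIME, &new_period, NULL, 0 ) == -1 )
        perror( "ClockPeriod" );

    return 0;
}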

This becomes important just about every time you ask the kernel to do
something relating to pausing or delaying your process. These include the
functions select(), alarm(), nanosleep(), nanospin(), sigaction(), and
delay(), as well as the whole family of timer_*() functions. Normally, we
use these functions assuming they’ll do exactly what we say: “Sleep for 8
seconds!”, “Sleep for 1 minute!” and so on. Unfortunately, you run into
problems when you ask, “Sleep for 1 millisecond, a thousand times!”

Does this code work assuming a 1 ms tick?

void OneSecondPause()
{
    int i;

    for ( i = 0; i < 1000; i++ )
        delay( 1 );    // Wait 1000 milliseconds
}

Unfortunately, no, this won’t return after one second on IBM PC hardware.
It’ll likely wait for three seconds. In fact, when you call any function
based on nanosleep() or select() with an argument of n milliseconds, it
actually takes anywhere from n to infinity milliseconds. But more than
likely, this example will take (n+2) milliseconds for each delay(1) call,
for a total of three seconds.


So why, exactly, does this function take three seconds?

What you’re seeing is called timer quantization error. One aspect of this
error is actually so well understood and accepted that it’s even documented
in a standard: the POSIX Realtime Extension (1003.1b-1993/1003.1i-1995).
This document says that it’s OK to delay too much, but it’s not OK to delay
too little. I’m sure we all agree that the premature firing of a timer is
undesirable…

Since the calling of delay() is asynchronous with the running of the clock
interrupt, we have to add one clock tick to a relative delay to ensure the
correct amount of time (consider what would happen if we didn’t, and a
one-tick delay was requested just before the clock interrupt went off: the
timer would fire almost immediately). That normally adds half a millisecond
on average each time, but in the example given we end up synchronized with
the clock interrupt, so the full millisecond gets tacked on each time.

OK, that should make the loop last two seconds. Where’s the extra second
coming from?

The problem is that when you request a 1 ms tick rate, we may not be able
to actually give it to you because of the frequency of the input clock to
the timer hardware. In those cases, we choose the closest rate that’s
faster than what you requested. On IBM PC hardware, requesting a 1 ms tick
rate actually gets you 999,847 nanoseconds between ticks. With the
requested delay, that gives us the following:

1,000,000 ns + 999,847 ns = 1,999,847 ns of actual delay.

1,999,847 ns / 999,847 ns = 2.000153 ticks before the timer expires

Since we only expire timers at a clock interrupt, ceil(2.000153) = 3 ticks,
so each delay(1) call actually waits:

999,847 ns * 3 = 2,999,541 ns

Multiply that by 1000 for the loop count and you get a total loop time of
2.999541 seconds.

So this code should work?

void OneSecondPause()
{
    int i;

    for ( i = 0; i < 100; i++ )
        delay( 10 );    // Wait 1000 milliseconds
}

It will certainly get you closer to the time you expect, with an accumulated error of only 1/10 of a second.
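
By the same arithmetic, making one big call instead of a loop pays the
extra-tick penalty only once, coming in at roughly 1002 ms in total. A
minimal sketch:

void OneSecondPause()
{
    delay( 1000 );    // one call: the rounding penalty is paid once, not 100 times
}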

In Conclusion…

Certainly, this is a simple error, but it’s small errors like this that
you’ll probably encounter first! Don’t let these things slow you down. When
you run into something that you’re sure should work and the documentation
just isn’t cutting it, please post on the newsgroups and ask for help. Your
development will go a lot smoother, and we’ll know when we should improve
our documentation or write an article about your problem.




If you want a topic covered - or just have questions or comments - feel free
to post in qdn.public.articles or use the QDN suggestion box at
http://support.qnx.com/report/rate.html


Bill at Sierra Design <BC@sierradesign.com> wrote:

If a timer with a repeat interval was programmed, we should expect a
maximum of 1002 milliseconds, right?

You can’t ever expect a maximum: a higher-priority process can always
indefinitely hold off your responding to the timer expiring.

Aside from that, yes, the timer will fire 1000 times in about 1002 ms
(I haven’t done the analysis to check the edge boundary of the last firing).


Brian Stecher (bstecher@qnx.com) QNX Software Systems, Ltd.
phone: +1 (613) 591-0931 (voice) 175 Terence Matthews Cr.
+1 (613) 591-3579 (fax) Kanata, Ontario, Canada K2M 1W8

Comments are welcome!

The two Tick-tock articles give a good intro to the Neutrino concept of
time.

I still think the basic design of time is somewhat broken in Neutrino (this
applies to a few other “realtime” OS’s as well). Maybe I’m a bit biased, but
when I used to use the Transputer I could easily get timing precision of
~10us for a high-priority process on a 20 MHz processor!

What I have never quite understood is why the system is stuck with a single
fixed-interval time event. This event is the Neutrino “tick”, which is
usually fixed at ~1 ms, and all timing is derived from it.

Any system that has a programmable timer that can generate events
(interrupts) can be used in a far more effective manner. Instead of
maintaining a fixed (coarse) tick, a sorted queue of timer events is
maintained. The programmable timer is then programmed to generate an
interrupt at the time of the first event in the queue. In this way, far
greater timing precision can be achieved without incurring the high CPU
load of a very high tick frequency.
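
To make the idea concrete, here is a rough C sketch of such a scheme; the
names and the program_oneshot() hardware hook are hypothetical stand-ins:

#include <stdint.h>

struct timer_event {
    uint64_t expiry_ns;            /* absolute expiry time */
    struct timer_event *next;      /* list kept sorted by expiry_ns */
};

static struct timer_event *queue;  /* head = earliest expiry */

static void program_oneshot( uint64_t expiry_ns )
{
    /* hypothetical: load the hardware comparator so it interrupts
       at expiry_ns instead of on a fixed periodic tick */
    (void)expiry_ns;
}

void timer_insert( struct timer_event *ev )
{
    struct timer_event **p = &queue;

    /* walk to the first entry that expires later than ev */
    while ( *p && (*p)->expiry_ns <= ev->expiry_ns )
        p = &(*p)->next;
    ev->next = *p;
    *p = ev;

    if ( queue == ev )                      /* new earliest event: */
        program_oneshot( ev->expiry_ns );   /* rearm the one-shot */
}

/* Called from the timer interrupt: fire everything due, rearm for the next. */
void timer_interrupt( uint64_t now_ns )
{
    while ( queue && queue->expiry_ns <= now_ns ) {
        struct timer_event *ev = queue;
        queue = ev->next;
        /* deliver the event to its owner here */
        (void)ev;
    }
    if ( queue )
        program_oneshot( queue->expiry_ns );
}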

Obviously there are a few complications. The scheduling of priorities and
timers is not quite this simple, but it would be a significant improvement
on the present rather simplistic arrangement. After all, the Transputer was
able to do this in hardware for a two-priority scheme many years ago.

As a workaround, it is possible to write a resource manager for an add-on
programmable timer. This manager can then be the source of Neutrino
messages at times requested by its clients.

What would really be elegant is if the Neutrino kernel could be configured
to make use of a custom time manager so that all processes can benefit.

Michael Stevens
Australian Centre for Field Robotics

“Nobody” <michael@acrfr.usyd.edu.au> wrote in message
news:8tqv52$50v$2@inn.qnx.com

Comments are welcome!


The two Tick-tock articles give a good intro to the Neutrino concept of
time.

I still think the basic design of time is somewhat broken in Neutrino (this
applies to a few other “realtime” OS’s as well). Maybe I’m a bit biased, but
when I used to use the Transputer I could easily get timing precision of
~10us for a high-priority process on a 20 MHz processor!

What I have never quite understood is why the system is stuck with a single
fixed-interval time event. This event is the Neutrino “tick”, which is
usually fixed at ~1 ms, and all timing is derived from it.

Any system that has a programmable timer that can generate events
(interrupts) can be used in a far more effective manner. Instead of
maintaining a fixed (coarse) tick, a sorted queue of timer events is
maintained. The programmable timer is then programmed to generate an
interrupt at the time of the first event in the queue. In this way, far
greater timing precision can be achieved without incurring the high CPU
load of a very high tick frequency.

I will make a guess why most real-time OSes use a fixed timer instead of a
“dynamic” one.

A fixed timer provides something many of us require: deterministic behavior!
Maybe not the behavior we’d all like, but it’s very deterministic. You won’t
have interrupts popping up all over the place; it’s often troublesome enough
to deal with HD and network interrupts… I also think that in general it uses
less CPU. Imagine this case: 10 programs, each asking for a periodic timer
of 1.1 ms, 1.2 ms, 1.3 ms, 1.4 ms, 1.5 ms, 1.6 ms… Since the gcd of those
periods is 0.1 ms, expiries can land as little as 100 us apart: the result
is an interrupt every 100 us! Pretty nasty on a 33 MHz 386 ;-(

Furthermore, the PC timer hardware is definitely not your best friend…
With a tick of .999847 ms instead of an even millisecond, it would become
nightmarish.

If you start a periodic timer, the start of the timer is rounded up to the
nearest tick. If you didn’t round up, then even though all programs request
a 1 ms timer, they would never be started at the same time. Again, that
could create a very high number of interrupts, hence you need some sort of
reference to round things up to.


Bill at Sierra Design <BC@sierradesign.com> wrote:

On the other side of the coin, I think that even a relatively slow Pentium
processor can easily handle a timer interrupt of 50 us if the system
administrator feels that this is needed and configures it so. Yes, it will
have a noticeable impact on the priority 10 process that is spinning its
wheels in some CPU-intensive whatever. But the system administrator knows
that THIS system needs this kind of precision.

I’m guessing (read “hoping”) that the real timing-dependent code (i.e.
maintaining the timing queues and scheduling processes to go ready when
their interval has expired) is in a single module. Perhaps QSSL could offer
a configuration choice of one timer method vs. the other. No application
should need to be rewritten or even recompiled or relinked. Just build the
kernel with a different timer module.

I am not sure why people want the OS to drive its clock faster when what
they really want is to have their program be driven faster. That is pretty
easy: get a card with a programmable timer and set up an interrupt handler
in your application. BOOM! You have whatever timing you want, directly to
your app and your app alone. You could easily write a little hp_timer
process that handled the interrupt/hardware side of things and sent pulses
to the apps that wanted to be kicked faster than the kernel kicks (a sketch
follows below).

This is the joy of having drivers as processes and a really small
interrupt-handling overhead!! :-)
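
A rough sketch of what that hp_timer process might look like; the IRQ
number and pulse code here are made-up assumptions, and the actual
programming of the timer card is omitted:

#include <sys/neutrino.h>

#define HP_TIMER_IRQ    5                          /* hypothetical card IRQ */
#define PULSE_CODE_TICK (_PULSE_CODE_MINAVAIL + 0)

int main( void )
{
    struct sigevent event;
    struct _pulse   pulse;
    int chid, coid, intr_id;

    ThreadCtl( _NTO_TCTL_IO, 0 );       /* need I/O privileges for interrupts */

    chid = ChannelCreate( 0 );
    coid = ConnectAttach( 0, 0, chid, _NTO_SIDE_CHANNEL, 0 );

    /* deliver a pulse to our own channel on each interrupt */
    SIGEV_PULSE_INIT( &event, coid, SIGEV_PULSE_PRIO_INHERIT,
                      PULSE_CODE_TICK, 0 );
    intr_id = InterruptAttachEvent( HP_TIMER_IRQ, &event, 0 );

    for ( ;; ) {
        MsgReceivePulse( chid, &pulse, sizeof( pulse ), NULL );
        /* ... forward a pulse to each registered client here ... */
        InterruptUnmask( HP_TIMER_IRQ, intr_id );   /* rearm the interrupt */
    }
}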

chris

cdm@qnx.com                   “The faster I go, the behinder I get.”
Chris McKillop                                     -- Lewis Carroll
Software Engineer, QSSL

Hi Mario and Nobody (and my mind ponders the thought of a mother looking at
her newborn baby and saying, “We’ll call him ‘Nobody’.”)


I can see both sides of this coin.

For one thing, the customized programmable timer can certainly be abused.
Can an application say, “Please interrupt me every 3 ns”? This would be a
burden that even the fastest processors can’t handle. So do you arbitrarily
draw the line and say that you will only support an interval granularity of
10 us? Surely someone will pose the argument that they have a system where
there is only one timing-dependent process and it NEEDS an interrupt at 15
us. So where do you draw the line?

On the other side of the coin, I think that even a relatively slow Pentium
processor can easily handle a timer interrupt of 50 us if the system
administrator feels that this is needed and configures it so. Yes, it will
have a noticeable impact on the priority 10 process that is spinning its
wheels in some CPU-intensive whatever. But the system administrator knows
that THIS system needs this kind of precision.

I’m guessing (read “hoping”) that the real timing-dependent code (i.e.
maintaining the timing queues and scheduling processes to go ready when
their interval has expired) is in a single module. Perhaps QSSL could offer
a configuration choice of one timer method vs. the other. No application
should need to be rewritten or even recompiled or relinked. Just build the
kernel with a different timer module.

RFC

Nobody <michael@acrfr.usyd.edu.au> wrote:

I still think the basic design of time is somewhat broken in Neutrino (this
applies to a few other “realtime” OS’s as well). Maybe I’m a bit biased, but
when I used to use the Transputer I could easily get timing precision of
~10us for a high-priority process on a 20 MHz processor!

What I have never quite understood is why the system is stuck with a single
fixed-interval time event. This event is the Neutrino “tick”, which is
usually fixed at ~1 ms, and all timing is derived from it.

Any system that has a programmable timer that can generate events
(interrupts) can be used in a far more effective manner. Instead of
maintaining a fixed (coarse) tick, a sorted queue of timer events is
maintained. The programmable timer is then programmed to generate an
interrupt at the time of the first event in the queue. In this way, far
greater timing precision can be achieved without incurring the high CPU
load of a very high tick frequency.

We’ve actually talked about supporting this kind of thing in hallway meetings
here.

As you’ve noted, there are a few complications. One of which is making
sure your time of day stuff doesn’t suffer from clock skew (you have to
be very careful accounting for all the time spent while you’re reprogramming
the timer hardware). Also, talking to some timer hardware is pretty slow,
so you’re potentially increasing interrupt latency for other things if
you keep having to reprogram it. As Mario pointed out, it can cause
a high frequency of interrupts if you’re unlucky with the timer periods
people choose.

All solvable problems, but in the end, we’ve always decided that we’ve
got bigger fish to fry at the moment.

What would really be elegant is if the Neutrino kernel could be configured
to make use of a custom time manager so that all processes can benefit.

Or a custom scheduler, or a custom memory manager, etc. Yeah, that’s
another subject of hallway meetings. The devil’s in the details of
course :-).


Brian Stecher (bstecher@qnx.com) QNX Software Systems, Ltd.
phone: +1 (613) 591-0931 (voice) 175 Terence Matthews Cr.
+1 (613) 591-3579 (fax) Kanata, Ontario, Canada K2M 1W8

Chris McKillop wrote:


I am not sure why people want the OS to drive its clock faster when what
they really want is to have their program be driven faster. That is pretty
easy: get a card with a programmable timer and set up an interrupt handler
in your application.

Use the IRQ8 (RTC) of your PC … it has a time resolution down to 122 us.
The time resolution of the OS timers is not affected and can be set
independently of the IRQ8.
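
(That 122 us figure is presumably the RTC’s maximum periodic interrupt rate
of 8192 Hz: 1 s / 8192 ≈ 122 us.)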

Armin

http://www.steinhoff.de

Brian Stecher wrote:

Nobody <michael@acrfr.usyd.edu.au> wrote:

I still think the basic design of time is somewhat broken in Neutrino (this
applies to a few other “realtime” OS’s as well). Maybe I’m a bit biased, but
when I used to use the Transputer I could easily get timing precision of
~10us for a high-priority process on a 20 MHz processor!

IMHO … polling with a cycle time of 10us is just bad design. Polling at
that rate should be handled by an intelligent device … powered by a
transputer(?).

What I have never quite understood is why the system is stuck with a single
fixed-interval time event. This event is the Neutrino “tick”, which is
usually fixed at ~1 ms, and all timing is derived from it.

[ clip …]

What would really be elegant is if the Neutrino kernel could be configured
to make use of a custom time manager so that all processes can benefit.

Or a custom scheduler, or a custom memory manager, etc. Yeah, that’s
another subject of hallway meetings. The devil’s in the details of
course :-).

At the end … it is just a question of design. Raw polling should be done
at the hardware/firmware level … event-driven processing at the host CPU.

Armin Steinhoff

http://www.steinhoff.de

“Armin Steinhoff” <A-Steinhoff@web_.de> wrote in message
news:3A052F46.4CE6D939@web_.de…

Brian Stecher wrote:

Nobody <michael@acrfr.usyd.edu.au> wrote:

I still think the basic design of time is somewhat broken in Neutrino (this
applies to a few other “realtime” OS’s as well). Maybe I’m a bit biased, but
when I used to use the Transputer I could easily get timing precision of
~10us for a high-priority process on a 20 MHz processor!

IMHO … polling with a cycle time of 10us is just bad design. Polling at
that rate should be handled by an intelligent device … powered by a
transputer(?).

I don’t think “Nobody” is talking about polling here, Armin.
I think what he means is that timers have a precision of 10us.


Mario Charest wrote:

[ clip …]

I don’t think “Nobody” is talking about polling here, Armin.

Oh … I read that someone needs some processing every 60us … initiated by a
timer service.

I think what he means is that timers have a precision of 10us.

Hm … when I read that posting again, you are right :-)

BTW … I would implement a set of high-precision timers with an FPGA on a
special timer board (timer handling in the FPGA …).

Cheers

Armin

Hi…

I have a couple of simple questions, for sometimes I miss the reason why
some things are as they are.


  1. In the following code you have statements in between braces {…}, and
    yet nothing precedes the braces such as an ‘if’, ‘for’, etc. Did you
    miss this? (I have NOT compiled/run your code):

// --- Set priority to max so we don’t get disrupted by anything
// --- else than interrupts
{
    struct sched_param param;
    int ret;

    param.sched_priority = sched_get_priority_max( SCHED_RR );
    ret = sched_setscheduler( 0, SCHED_RR, &param );
    assert( ret != -1 );
}



2. Everywhere the docs say not to use getprio(…), but I wonder how you
could do the following line?

event.sigev_priority = getprio(0);

I suppose that we can use SchedGet() instead of getprio(). Is this
correct?

Thank you. Bests…

Miguel

Miguel Simon wrote:

Hi…

I have a couple of simple questions, for sometimes I miss the reason why
some things are as they are.

  1. In the following code you have statements in between braces {…}, and
    yet nothing precedes the braces such as an ‘if’, ‘for’, etc. Did you
    miss this? (I have NOT compiled/run your code):

// --- Set priority to max so we don’t get disrupted by anything
// --- else than interrupts
{
    struct sched_param param;
    int ret;

    param.sched_priority = sched_get_priority_max( SCHED_RR );
    ret = sched_setscheduler( 0, SCHED_RR, &param );
    assert( ret != -1 );
}

That’s a convenient way to declare variables related to some small piece
of code close to the code itself. You can then easily comment it in/out.
If the braces were not there, one would have to declare param and ret
before the first statement of the function. Such braces have no effect on
execution; it’s just a syntax trick.

  2. Everywhere the docs say not to use getprio(…), but I wonder how you
    could do the following line?

event.sigev_priority = getprio(0);

I suppose that we can use SchedGet() instead of getprio(). Is this
correct?

sched_getparam() would be the POSIX way.
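
For instance, a minimal sketch (0 means the calling process; event is the
struct sigevent from Miguel’s code):

#include <sched.h>

/* inside whatever function sets up the sigevent: */
struct sched_param param;

if ( sched_getparam( 0, &param ) == 0 )
    event.sigev_priority = param.sched_priority;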

-- igor

“Igor Kovalenko” <Igor.Kovalenko@motorola.com> wrote in message
news:3A06F723.56F2620F@motorola.com

Miguel Simon wrote:

Hi…

I have a couple of simple questions, for sometimes I miss the reason why
some things are as they are.

  1. In the following code you have statements in between braces {…}, and
    yet nothing precedes the braces such as an ‘if’, ‘for’, etc. Did you
    miss this? (I have NOT compiled/run your code):

// --- Set priority to max so we don’t get disrupted by anything
// --- else than interrupts
{
    struct sched_param param;
    int ret;

    param.sched_priority = sched_get_priority_max( SCHED_RR );
    ret = sched_setscheduler( 0, SCHED_RR, &param );
    assert( ret != -1 );
}


That’s a convenient way to declare variables related to some small piece
of code close to the code itself. You can then easily comment it in/out.
If the braces were not there, one would have to declare param and ret
before the first statement of the function. Such braces have no effect on
execution; it’s just a syntax trick.

It also creates a new ‘block’, which defines a new (temporary) scope, which
is sometimes useful (e.g., I often use it in switch cases).
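
For example, a tiny illustration (the names are made up):

#include <stdio.h>

enum cmd { CMD_START, CMD_STOP };

static void handle( enum cmd c )
{
    switch ( c ) {
    case CMD_START: {            /* braces give this case its own scope */
        int attempts = 1;        /* legal mid-function in C89 only because
                                    of the enclosing block */
        printf( "starting, attempt %d\n", attempts );
        break;
    }
    default:
        printf( "stopping\n" );
        break;
    }
}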

“Frank Kolnick” <fkolnick@sentex.net> wrote in message
news:8u733q$aiq$1@inn.qnx.com

“Igor Kovalenko” <Igor.Kovalenko@motorola.com> wrote in message
news:3A06F723.56F2620F@motorola.com…
Miguel Simon wrote:

Hi…

I have a couple of simple questions, for sometimes I miss the reason why
some things are as they are.

  1. In the following code you have statements in between braces {…}, and
    yet nothing precedes the braces such as an ‘if’, ‘for’, etc. Did you
    miss this? (I have NOT compiled/run your code):

The program does compile, I promise ;-)

// --- Set priority to max so we don’t get disrupted by anything
// --- else than interrupts
{
    struct sched_param param;
    int ret;

    param.sched_priority = sched_get_priority_max( SCHED_RR );
    ret = sched_setscheduler( 0, SCHED_RR, &param );
    assert( ret != -1 );
}


That’s a convenient way to declare variables related to some small piece
of code close to the code itself. You can then easily comment it in/out.
If the braces were not there, one would have to declare param and ret
before the first statement of the function. Such braces have no effect on
execution; it’s just a syntax trick.

It also creates a new ‘block’, which defines a new (temporary) scope, which
is sometimes useful (e.g., I often use it in switch cases).

And it’s also useful when you cut and paste code around ;-)

As for getprio(), that’s true, I should have used something else.

“Mario Charest” <mcharest@zinformatic.com> wrote in message
news:8trpqf$1kn$1@inn.qnx.com

“Nobody” <michael@acrfr.usyd.edu.au> wrote in message
news:8tqv52$50v$2@inn.qnx.com…

I will make a guess why most real-time OSes use a fixed timer instead of a
“dynamic” one.

A fixed timer provides something many of us require: deterministic behavior!
Maybe not the behavior we’d all like, but it’s very deterministic. You won’t
have interrupts popping up all over the place; it’s often troublesome enough
to deal with HD and network interrupts… I also

The lack of determinism is all too true. Things like IDE hard-disk DMA
transfers can significantly affect the CPU, and you have very little
control over this.

think in general it uses less CPU. Imagine this case: 10 programs, each
asking for a periodic timer of 1.1 ms, 1.2 ms, 1.3 ms, 1.4 ms, 1.5 ms,
1.6 ms… What this would result in is an interrupt every 100 us! Pretty
nasty on a 33 MHz 386 ;-(

This is why the kernel has a priority system. You ONLY schedule the next
timer timeout of processes with a higher priority.

Furthermore, the PC timer hardware is definitely not your best friend…

Too true. However, it is not uncommon to add good timer hardware to a
system and to want to be able to use it in a consistent fashion with the
rest of the OS timing services.

Michael Stevens
Australian Centre for Field Robotics

“Chris McKillop” <cdm@qnx.com> wrote in message
news:8tsaft$p4h$5@nntp.qnx.com

I am not sure why people want the OS to drive its clock faster when what
they really want is to have their program be driven faster. That is pretty
easy: get a card with a programmable timer and set up an interrupt handler
in your application. BOOM! You have whatever timing you want, directly to
your app and your app alone. You could easily write a little hp_timer
process that handled the interrupt/hardware side of things and sent pulses
to the apps that wanted to be kicked faster than the kernel kicks.

This is what I do! However, one problem with Neutrino is that many of the
APIs have a timeout facility built in. Sadly, there is no way of replacing
these hardwired timeout parameters with a Neutrino message, so there is no
way you can make use of your precision timer at these points. There are two
solutions:
a) Make the Neutrino API more orthogonal, so timeouts are really Neutrino
message pulses
b) Allow the kernel to make use of a precision timer for all time
scheduling.

As a comrade of yours pointed out, “The devil’s in the details of
course :-).”

This is the joy of having drivers as processes and a really small
interrupt-handling overhead!! :-)

I agree: this part of Neutrino is its most outstanding design feature.

Michael

// --- Set priority to max so we don’t get disrupted by anything
// --- else than interrupts
{
    struct sched_param param;
    int ret;

    param.sched_priority = sched_get_priority_max( SCHED_RR );
    ret = sched_setscheduler( 0, SCHED_RR, &param );
    assert( ret != -1 );
}


That made me remember why I use

if ( … ) {

}

instead of

if ( …)
{

}

The reason being that if, by mistake, the line with the ‘if’ is deleted,
the first style will be caught at compile time. In the second style it is
not detected at all.

Hence if I see my own code like this:

{
    struct sched_param param;
    int ret;

    param.sched_priority = sched_get_priority_max( SCHED_RR );
    ret = sched_setscheduler( 0, SCHED_RR, &param );
    assert( ret != -1 );
}


I never have to ask myself if something got deleted ;-)

Of course, some people will indent the {}:

if ()
{

}

But I still find this dangerous in case of a mismatched tab-versus-space
setup in an editor ;-)

Thanks. Now I know something new! :-) BTW, reading answers in the qdn.*
newsgroups makes me realize how much more learning I have to do. Thank you
for helping us many-years-of-experience newbies.

Bests…

Miguel.


