timeslice question

Hi all,

I think I know the answer to this, but there is no harm in asking a question!!


Is there any way to stop my process from being timesliced out by any and all other processes (other than increasing its priority)?



You can disable interrupts on a uniprocessor machine, but that’s cruel ;-)

You could send a stop signal to all processes, again that would be very cruel.

You aren’t talking about the FIFO scheduling policy, are you?

Hi guys,

Thanks for the speedy response; you guys are worth 10 times a QNX support contract ;-).

It’s not a uniprocessor machine, so that option is out. I suppose I could send the stop signal to all processes, but I think that may be a bit like cracking a nut with a sledgehammer, although maybe I should do some experimentation with that idea. (I only need the processor for about 100 ms or so, but I HAVE to have it for that period.)

I’m not really talking about priority; what I am talking about is disabling all scheduling for a very short period, programmatically :slight_smile:.

I think requiring this points to a design flaw in the driver. I may have to rewrite the driver a bit so I do not require these ‘hacks’.

Thanks again for the fast response.

Why is raising the priority not an option? Remember that it could still get disrupted by interrupts.

If you disable interrupts, they could be re-enabled at the first kernel call. On a multiprocessor you can’t guarantee that another process will not perform a kernel call. However, I think you could mask every single interrupt. You said you need to do this for 100 ms; in my opinion that’s way too long to mask interrupts. It could have unforeseen effects on other processes, like qnet timing out and breaking existing connections.

If you have a means to get rid of that requirement, you should fix this problem at the source.


Is that 100 milliseconds or 100 microseconds you need the CPU for?


IF you are writing a driver, and IF the driver needs control over the processor without interruption for a SHORT period of time, then you can turn off interrupts (on x86, asm “cli” disables them and “sti” re-enables them). Doing this adds to the real-time latency, which is why you should only turn off interrupts for a very short amount of time. If your code is in an interrupt handler, you only need to do this if you are worried about being interrupted by higher-priority interrupts. If you are running in a thread and you are worried about being interrupted by your own interrupt handler, you can mask that interrupt line, which is kinder than turning off all interrupts.
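On QNX Neutrino, the “mask just your own interrupt line” option mentioned above looks roughly like the sketch below. MY_IRQ and the id argument are hypothetical placeholders for your driver’s IRQ number and the id returned by InterruptAttach(); this is a sketch of the idea, not a drop-in implementation.

```c
#include <sys/neutrino.h>   /* QNX-specific: InterruptMask()/InterruptUnmask() */

#define MY_IRQ 5            /* hypothetical IRQ line for this driver */

/* Mask only this driver's interrupt while touching data shared with
   its handler, then unmask it.  All other interrupts keep running,
   which is kinder to the rest of the system than cli/sti. */
void touch_shared_state(int my_intr_id)   /* id from InterruptAttach() */
{
    InterruptMask(MY_IRQ, my_intr_id);    /* our handler can no longer fire */
    /* ... update data shared with the interrupt handler ... */
    InterruptUnmask(MY_IRQ, my_intr_id);  /* re-arm the line */
}
```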

With QNX6 and multi-core that’s not a viable solution, because if a process on another core does a kernel call, interrupts will be re-enabled.

How unsporting.

That’s not true. Interrupt enable/disable is local to the processor; another CPU doing a kernel call is not going to affect your local interrupt status.

InterruptLock() is probably what you want to use; that combines an InterruptDisable() with a spinlock.
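A minimal QNX Neutrino sketch of the InterruptLock() approach: the zeroed intrspin_t and the ThreadCtl() privilege call are standard requirements, and the critical-section body is a placeholder. The spinlock is what makes this safe on SMP, where a plain interrupt disable only protects the local CPU.

```c
#include <sys/neutrino.h>   /* QNX-specific: InterruptLock(), ThreadCtl() */

static intrspin_t spin;     /* spinlock shared with the handler; starts zeroed */

void critical_work(void)
{
    /* I/O privileges are required before touching interrupt state. */
    ThreadCtl(_NTO_TCTL_IO, 0);

    /* Disables interrupts on this CPU *and* takes the spinlock, so the
       section is protected even against code on other cores. */
    InterruptLock(&spin);
    /* ... very short critical section: no kernel calls in here ... */
    InterruptUnlock(&spin);
}
```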


If a thread disables interrupts while running on, say, processor 2, is there anything that prevents it from being migrated to another processor?

If processor one decides to reschedule a thread, and it thinks processor 2 is the best fit, then it will send an IPI to processor 2, telling it to reschedule. If you have interrupts disabled, it can’t act on that until you re-enable them.

Thanks for the explanation!

Just out of curiosity … why wouldn’t you want to adjust its priority rather than taking the rather heavy-handed interrupt approach?


Here’s a guess. Let’s say you are, uh… mmmm, using a critical timing loop. Not much hardware I’ve worked with in the last 20 years needs such a thing, but it is possible. Well, even at high priority, you could get interrupted by a hardware interrupt.

If the machine has multiple cores, setting the priority to the maximum just for the critical loop could work.

Yes, I thought about setting the priority high when I enter the critical section, although the machine we are using is not multi-core (it is an industrial PC).

I also suggested an interrupt handler but this was not entertained.

To explain this a bit further: the driver in which I am trying to fix this problem is a Modbus communications driver. The problem occurs specifically when using serial communications.

Unfortunately the communications are synchronous and timing-dependent, meaning we are polling the serial port (bad) to see if there is new data.

The problem occurs while we are polling the port. We get message A, and between receiving it and sending the response, something with a higher priority uses a lot of CPU, delaying our reply. When this delay happens, the master (the requester) times out on that message and moves on to the next one, message B. We then get CPU time and send our response, but we are actually responding to a different message. Unfortunately B is the same size as A, so the master takes the data from our reply to A as the data for message B. This causes the data in the master to change dramatically.

My problem is that I cannot spend any length of time on this, as it is only happening with one client. It would probably take one or two days to fully write the specs etc., implement, and test the solution.

So in truth I think there is no solution to this problem (at least none that I can justify), and so thank you all very much for your help.


Maybe the real problem is figuring out what that ‘something’ is…

Hi all,

Sorry for the necro of this thread, but I am still interested in this. Are you saying that by disabling interrupts my thread will not be preempted, even if a thread with higher priority requires the CPU?

What I am really looking for is a way to make a function run without being preempted.

Btw, my function only uses between 10 and 13 milliseconds (on a 1.7 GHz processor; it will be less on a faster one), unless of course it has been preempted, in which case I can see that value go up to 150 milliseconds or more!

I may just raise the priority to 63 for this period. I think I may have found the offending process that causes the high CPU usage, and I agree that this is the real problem here.


I didn’t say it, but yes that is correct. In a single processor system, if you turn off interrupts, you cannot be pre-empted. Without external interrupts, processors execute code linearly. Other than a peripheral triggering a hardware interrupt, only the timer would cause a pre-emption and/or rescheduling. The only exception I can think of to this would be an internal interrupt caused by something like executing a bad instruction or a segmentation violation.

Either turning off interrupts or raising the priority will accomplish this. Which is best might depend on how critical the timing is. If you raise the priority, you will still get interrupted by hardware interrupts, or possibly by some OS process. I think in QNX 4 you can raise your priority above that of PROC, but I’m not sure.

You should be very careful about either method if you are polling. If you have a hardware glitch and you are waiting for something that will never happen, you will hang your system. To prevent this, you might need to put a timeout counter in your code. This is a very old-school way of doing things.