Hogs and CPU load


We are developing an application which performs a lot of calculations. We were using a PCM-5896 board with a Pentium MMX 233 MHz, running the QNX 6.3.0 graphical environment from a hard disk.

Then, running /usr/bin/hogs, we saw that the application was taking 73-75% of the CPU.

We upgraded our processor to a K6-2 500 MHz. Running it at 300 MHz, the CPU load was the same, around 74%. Running it at 400 MHz, it was still the same!

Now, we have removed Photon and built an image on a Compact Flash card, with only 5 processes running (in text mode, of course)… and still hogs shows that the application consumes 75% of the CPU…

What is wrong? How is this possible? Our program is not complete yet; some heavy calculations remain, and we wouldn’t like to run at 100%. What can we do?

Thanks a lot,


When you run hogs make sure you run it at very high priority.

It depends on how your program is built. It’s easy to have a program eat a fixed amount of CPU time.

Sorry, I can’t offer any other suggestions without more information.

Hi, thanks, Mario, I’ll try tomorrow morning to run it at high priority.

But… I don’t understand why it always takes the same amount of CPU… If it were very little, I would understand that the change in percentage would be very small after upgrading the processor. But… at 75% of the CPU, I was expecting a significant reduction… for example, to 50% or even less…

Of course, the BIOS recognizes the new processor perfectly, and the new type and speed appear on the QNX graphical login screen. The increase in speed is very noticeable while booting up or compiling, but hogs seems unaffected…



I’m pretty sure it’s something your program is doing. If it’s not taking 100% (averaged over 1 sec), that’s because it gives some CPU back. For example, if the code is doing a delay(250), it will always seem to take 75% of the CPU, whatever the CPU speed.

Or, if the program is I/O bound, waiting for the disk or for data over the network, it may look like it’s limited to 75%.

What is your program supposed to do?

There are many reasons a program might take the same percentage of CPU when run on a faster processor.

Example 1) The program reads data from a file, and then processes it. Put the program on a computer with twice the CPU speed and twice the I/O speed, and it will take the same percentage of CPU. Of course, it should do the entire job in half the time.

Example 2) A program provides a computing resource for other CPU-bound programs. The CPU-bound programs do their work twice as fast, and request computing service twice as often. So the percentage stays the same.

Hi, both of you are right.

Roughly, our program measures the period of a digital signal, which varies within a range. We use a Diamond Quartz-MM card to input this signal. But this card does not have any microcontroller or processor on it; it’s just an acquisition card. The main processor must continuously check (through Diamond’s library, dscud) the level of the signal, ‘1’ or ‘0’, perform some calculations afterwards, and send the resulting data over the network.

Compiling it with profiling and using gprof, we can see that the internal polling of the dscud library takes more than 80% of the program time and is mainly responsible for the 75% CPU load. The later calculations and networking seem “negligible” in CPU consumption (1-5% in the gprof report).

Since the range of the signal’s period is fixed, by using different delays the measuring thread yields control to the rest of the threads (other things are being done in parallel), so we made the CPU load decrease from around 85% (without delays) to 75% (with them).

So, as you are saying, that’s where the percentage limit comes from. At least it seems that we will have enough margin to perform more calculations…

Any advice? ;) If we reach 100% in later stages of the project (with more I/O devices attached), some data could be lost, couldn’t it? And even changing the processor speed wouldn’t help…



You should also use the System Profiler to take a look at what happens in your machine. It always gives insights that may lead to interesting conclusions.

Seems the library sucks ;-) Check if you can use interrupts when the data changes. Also, do you need to poll that often (as suggested by the 75% CPU usage)? At what maximum frequency can the input change?

The library does all the “dirty work” of polling the signal state with a single, blocking function call. It would be perfect if the board had a micro or a PIC to do that monitoring and serve the result to the main processor through the library… but the board is completely dumb…

How can I use interrupts?? I’ve never managed to get access to hardware interrupts in Linux/QNX. I’ve always used “high-level” libraries or the ‘select’ function for sockets and serial ports.

The input has a frequency of 10 Hz, and the duration of the signal that we measure can range from 1 to 50 ms, with a resolution of 0.025 ms. From the time we obtain the duration until the 100 ms mark (the cycle period at 10 Hz), we sleep, so the measuring thread can give the CPU to the other threads. Maybe by adjusting that sleep very carefully we can free the CPU a little more.

Thanks for the advice :slight_smile:


You need to measure the duration of the active signal down to 0.025 ms (25 µs)? If the library doesn’t support interrupts, then you have no choice but to poll at a 25 µs interval at least. Unfortunately, you can’t sleep for that small an amount of time. And if you used an interrupt to read the input every 25 µs, that would create a huge number of interrupts and leave little time for other stuff.


I read the documentation on the board, diamondsystems.com/files/binaries/QMMv15.pdf (sometimes I wonder why I spend so much time helping people, oh well). The board can indeed generate an interrupt, but only on bit 0 of the input port. The problem is that you need to detect the signal going on and going off; the way the board works, you can only detect the signal going active.

Maybe it would be possible to find a way to use 2 counters (one for the signal going active and another to detect it going inactive), but this could get messy fast.

What I would do is design a small circuit that generates a short pulse each time the input state changes. I would hook that output up to bit 0 and connect the real input to bit 1. At every interrupt generated by a pulse on bit 0, you’d read bit 1 to detect the state of the input signal. That would basically reduce the CPU usage to 0%: only 20 interrupts per second (at 10 Hz), which is close to insignificant. That would leave almost 100% of the CPU for other stuff, possibly allowing you to use a slower CPU, which in turn means fewer problems with heat and power usage.

Unfortunately, only bit 0 can be used to generate the interrupt; if more than one input could have been used, you’d simply feed the inverse of the signal to another bit.

  • Mario

The Onyx-MM board seems capable of using 3 input bits to generate interrupts.

Mario :smiley: Thanks a lot for helping people ;)

In fact, that’s what I’ve done… but without using the interrupts (I didn’t notice that, so I’ll try it): I built a digital circuit with a flip-flop that generates one pulse when the signal begins and another when it finishes ;) And with the help of the real signal I can see which state I’m in, so I don’t confuse the two pulses.

Then, I’m using a function of Diamond’s dscud library that measures the duration between the two pulses. The problem is that this function continuously checks, every 25 µs, whether the pulse has arrived, so the CPU gets incredibly busy.

I’ll read more carefully how to capture those interrupts and, of course, that would be the perfect solution :slight_smile:

Thanks a lot for reading the manual more carefully than me, he he. Even Diamond’s support didn’t notice this workaround for the task…

I’ll let you know how it goes. Thanks again,