MySQL and io-net

I have an application which listens for multicast information and writes it to a MySQL database (at 5 Hz). The problem we have is that the CPU usage of io-net progressively grows over time, until you are no longer able to log in to the computer, via ssh or otherwise, or view it over qnet. The application communicates with MySQL over a socket, and hence the traffic is run through io-net.

Any ideas on what is happening with io-net?

QNX 6.3 with Service Pack 2 is the operating system.
MySQL version 5.0.37, from binaries compiled for QNX 6.2.1 (from the MySQL website).


My first impression is that you have reached a critical point having something to do with MySQL’s capacity. If it is possible, I would start everything at a low frequency and monitor it as you increase the frequency. If the bad behavior starts abruptly, then I am probably right. If this turns out to be the case, you might squeeze a little more performance out by tuning your database.

I wouldn’t knock MySQL at all. I think it is a really good product, and I use it regularly. But when I benchmarked it adding moderately big records to a database with a few indices against the same job with a simple btree database, MySQL lost out in speed by a factor of 6 - 8. You do pay for the full SQL capabilities.

Thanks for your comments.

To clarify, the MySQL loading is as follows:

  • At 5 Hz, 16 to 36 real values (depending on state) are written to a table.
  • At roughly 20-30 second intervals, ~60 values (reals and ints) are written to a table.
  • At roughly 1-2 minute intervals, 5 values (ints, reals, and chars) are written to a table.

It is possible to remove each one of these stages independently and to downsample the 5 Hz stage.
I recall testing this but I don’t recall the results. I’ll have to try testing again.

The only one that could be causing problems is the 5 Hz write. I would bring it down to 4, 3, 2, 1 Hz, monitoring the CPU at each level. If you see something similar at 1 Hz as at 5 Hz, then the problem is not MySQL.

Thanks for this comment. It certainly stimulated some thought.

As you suggested, I tried altering the rate of the “5 Hz multicast” from the source application. In summary, if you raise the multicast rate, the rate at which io-net consumes CPU accelerates; if you drop the multicast rate, it decreases. But unfortunately there is no critical value at which the problem disappears.

To clarify, I am measuring CPU usage via the hogs utility, and if I plot the CPU usage of io-net against time I get, to a first approximation, a straight line. The slope of the line increases as I increase the multicast packet rate.

I wonder if the problem might be with io-net or more specifically with the use of AF_UNIX sockets through io-net.

MySQL (in its standard configuration) uses an AF_UNIX socket on localhost. I can’t find a way to make it use an AF_INET socket on localhost.
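For what it’s worth, the stock MySQL client library chooses the transport from the host name: “localhost” (or a NULL host in `mysql_real_connect()`) selects the Unix-domain socket, while an explicit “127.0.0.1” forces a TCP connection. Assuming the QNX build behaves like the standard one, passing “127.0.0.1” as the host argument in the C API, or an option-file fragment like the one below for programs that read my.cnf, should get you an AF_INET socket (a sketch, not verified on QNX):

```ini
# my.cnf -- steer the client onto TCP instead of the
# Unix-domain socket (assumes the program reads the
# [client] group, e.g. via mysql_options()).
[client]
host     = 127.0.0.1
protocol = TCP
```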

In the past we’ve thrashed QNX systems using AF_INET sockets without any problems. I don’t have a lot of experience with the AF_UNIX sockets under QNX, but maybe someone else out there has some thoughts on this. It seems strange that the usage of io-net should increase with time.

As an aside, I’ve noted that as io-net’s usage increases, the procnto (pid 1) process sometimes reports its percentage CPU usage (as measured by hogs) as greater than 100%, and it continues to rise. It’s got as high as 1000%, whatever that means. (Perhaps hogs is not the best tool for measuring CPU usage.)

Further, I’ve noted that if I close the socket, either implicitly (by slaying my process feeding MySQL) or explicitly (by calling mysql_close()), io-net usage goes back to zero. But if I restart my process, CPU usage by io-net seems to increase faster the second time through. This one is a bit anecdotal.

QNX is such a fantastic system that I’m loath to criticise it. In over 15 years of using it, starting with QNX2, I’ve encountered only a couple of issues. But this one has me stumped.

Any thoughts or comments are welcome at this stage.


I’m not that familiar with hogs, but what you are saying doesn’t make sense to me. It sounds like you are saying that the rate of CPU use increases, as opposed to the amount. If that were the case, you would quickly end up using 100% of the CPU, and then of course you could use no more. But you are saying it keeps going up, to 1000%, which is meaningless.

Might I suggest you do something simple and just look at the Photon processor CPU utilization bar to see if you get confirming results. That is, the bar would just increase in length until 100% of the CPU is being used.

To me it sounds like there’s a buildup of pulses or messages somewhere. Are you using select() on the socket? Are you using a timeout of 0 in the select()?
This causes a signal queue leak - it’s been fixed in our head libc. … iew=markup


Using the MySQL API, the way in which MySQL interacts with the socket is not visible to us.

Can you suggest a way of testing if we have a buildup of pulses or messages?

Being an old-timer, I haven’t embraced the Eclipse interface - yet. :-P

Well, you could do a system trace. That might allow you to see just such a buildup. Eclipse has a viewer for traces, but a word of warning: don’t bother trying this on a self-hosted QNX system. It’s like watching paint dry.