Basic IPC Question

Hi,
I am considering two methods for providing IPC APIs for our box using QNX. I just need simple blocking IPC across a few (<10) cards in my box.
One method is using a resource manager.
The other is using the basic ChannelCreate() and MsgSend() with a proprietary way of sharing the channel IDs.
Which is the better method considering both efficiency and ease of use? Also, is there any other method?

Thanks,
Lingaraj

Hi lingarajsp, there are many topics in this forum that cover your question, and the QNX documentation is very complete; you should read it carefully. There are several approaches, but the method to implement depends on your needs.

In my particular opinion and experience, the short answer is that a resource manager (RM) is the most flexible and robust method for IPC in QNX 6 (for example, you can send it messages from the console with a simple echo, and the primitives you use are well known and portable: open(), read(), write()).

If you need network-wide name resolution, maybe you can get help from the name_* set of functions, as we discussed in “Interesting query on IPC in QNX”. (name_attach() also implements an RM for you, but hides the (not so) hard code needed to create a full one.) I think whichever method you go for is easy to use, and there is plenty of documentation that can support you.
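
As a minimal sketch of that name_* approach (the name "mybox/lookup" and the message layout are made up for illustration, and QNX headers are required):

```c
/* Sketch only: a server registers a name with name_attach(), a client
 * resolves it with name_open() and does a blocking MsgSend(). */
#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <sys/dispatch.h>   /* name_attach(), name_open() (QNX) */
#include <sys/neutrino.h>   /* MsgReceive(), MsgReply(), MsgSend() */

struct msg { uint16_t type; char key[32]; };   /* invented layout */

void server(void)
{
    name_attach_t *att = name_attach(NULL, "mybox/lookup", 0);
    struct msg m;
    for (;;) {
        int rcvid = MsgReceive(att->chid, &m, sizeof m, NULL);
        if (rcvid > 0) {
            /* real code must also handle the _IO_CONNECT message that
             * name_open() sends, and pulses (rcvid == 0) */
            MsgReply(rcvid, EOK, "reply", 6);
        }
    }
}

void client(void)
{
    char reply[16];
    struct msg m = { .type = 1 };
    strcpy(m.key, "Big Box");
    int coid = name_open("mybox/lookup", 0);
    MsgSend(coid, &m, sizeof m, reply, sizeof reply);  /* blocks */
    name_close(coid);
}
```

The client side is exactly the "basic MsgSend" from the original question; name_attach()/name_open() just replaces the proprietary channel-ID sharing.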

I repeat: the most important thing to keep in mind is “what exactly do you need?”.

Hey, as I said, this is MY particular opinion I hope that it helps you a bit.

Regards,
Juan Manuel

thanks for the reply…
Is it going to affect performance if we use a resource manager for simple client/server communication?
Thanks,
Lingaraj

No - only the open() is “expensive” (a table lookup for path-name resolution); read(), write() and friends are based on the native message passing of QNX Neutrino, with almost no overhead.

-Peter

[quote]
thanks for the reply…
Is it going to affect performance if we use a resource manager for simple client/server communication?
Thanks,
Lingaraj
[/quote]

Yes, it will affect performance. But why don’t you measure it yourself and see if it fits your requirements?

I mean you might as well ask if writing in C/C++ will affect performance over assembly code, yet I’m sure you’ll use C/C++ :wink:

I have to say that I find “direct” message passing to be far more useful than resource managers, and easier to code.

I’ll segue into my own question. How is a resource manager useful if you have anything besides a “chunk” you want access to? Just as an example, imagine a dictionary (map, whatever). I have some keys (e.g. strings), like “Big Box” and “Medium Box”, that correspond to values (e.g. a pre-defined structure with doubles for length, width, and height). How could I possibly implement that with a resource manager? If I write() my string to the RM, I can’t trust the following read(), because someone else may have done a write() or read() in between. Granted, the read() and write() themselves are “atomic”, but in between them anything goes.

I just don’t see how a resource manager fits.

-James Ingraham

James,

if you want to have a simple atomic write/read cycle with resource managers, you could use devctl() (_IO_DEVCTL).
Maybe not very reasonable, but you can even send your own messages to resource managers with MsgSend() (_IO_MSG).
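
The client side of such a devctl() lookup might look roughly like this (the device name, command code, and structure are invented for this example; real code needs the QNX headers):

```c
/* Sketch: atomic request/response via a single devctl() call, so no
 * other client can interleave between request and response.
 * "/dev/boxdict", DCMD_BOX_LOOKUP and struct box_query are made up. */
#include <devctl.h>     /* devctl(), __DIOTF (QNX) */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

struct box_query {
    char   key[32];                 /* in:  "Big Box", ...  */
    double length, width, height;   /* out: dimensions      */
};

/* __DIOTF: data travels both To and From the resource manager */
#define DCMD_BOX_LOOKUP  __DIOTF(_DCMD_MISC, 1, struct box_query)

int lookup(const char *key, struct box_query *q)
{
    int fd = open("/dev/boxdict", O_RDWR);
    if (fd == -1)
        return -1;
    memset(q, 0, sizeof *q);
    strncpy(q->key, key, sizeof q->key - 1);
    /* request out and response back in one kernel call */
    int rc = devctl(fd, DCMD_BOX_LOOKUP, q, sizeof *q, NULL);
    close(fd);
    return rc;
}
```

On the manager side this arrives as an _IO_DEVCTL message, handled in one place, so the concurrency problem James describes disappears.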

It is also possible to prevent the problems you describe with write() and read() by one of the following two methods:

  1. create one dedicated device of your resource manager per client
    or
  2. assign the “response queue” to the OCB (the per-open data structure) instead of to the device’s attribute structure

Personally I prefer implementing request/response services with write()/read(), as

  • it doesn’t block your current thread
  • it allows you to write more than one request before reading the responses
  • it is possible to have more than one response to a request
  • it is possible to have “responses” without a request
  • you can get notified by a pulse when the response is ready (ionotify())
  • it allows you to write single-threaded applications “waiting” for more than one response
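
On the client side, that write()/read() protocol needs nothing QNX-specific at all; a sketch (the device name and record format are invented here, and the program only works against a running manager):

```c
/* Sketch: queue requests with write(), collect responses with read().
 * "/dev/boxdict" and the newline-terminated record format are made up. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char response[64];
    int fd = open("/dev/boxdict", O_RDWR);   /* hypothetical manager */
    if (fd == -1) { perror("open"); return 1; }

    /* two requests queued before reading anything back */
    write(fd, "Big Box\n", 8);
    write(fd, "Medium Box\n", 11);

    /* the thread only blocks here, when a response is actually wanted */
    ssize_t n = read(fd, response, sizeof response - 1);
    if (n > 0) {
        response[n] = '\0';
        printf("first response: %s\n", response);
    }
    close(fd);
    return 0;
}
```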

-Peter

I’m with Ingraham; resource managers don’t seem to be all that elegant. They’re good for drivers, but for regular apps that simply need to pass a message now and then, they require too much baggage. Actually, the whole channel/connection design and the additional name_attach() complexity are cumbersome compared to QNX 2/4. Yes, I know there were good reasons, but that doesn’t make the code cleaner.

Hmm, channels/connections are for message passing in general (QNX 6), not only for resource managers.

The resource manager library is a framework that helps you use basic message passing in a client/server architecture (and it’s not too bad, in my opinion). If you decide not to use it, there is no right or wrong in my eyes, but in more complex applications you will end up writing your own framework - or copying and pasting snippets of the same code everywhere.

-Peter

Yes, channels/connections are for message passing, and the resource manager framework is built on top of them, but that doesn’t make it as simple or clean as it was “in the good old days” (or as clean and simple as it could be, IMO).

The QNX project I’ve been most recently working on has been performance bound and we had to bypass a bunch of the QNX-ism’s to get the performance up. In this case the resmgr stuff just got in the way.

Yes, the need for performance can seriously collide with “QNX-isms” like memory protection. But did you “replace” resource managers with basic MsgSend/Receive/Reply and get better performance by doing so?

I agree: the resource manager framework has a certain complexity - but there is always a trade-off between the complexity and the features of a framework. BTW, I like the simplicity and cleanliness of the resource manager framework compared to what I face when writing drivers for Windows or Linux - it depends on where you look…

-Peter

No, S/R/R wouldn’t have been any faster than a resmgr. Originally we had a resource manager that served up data to various clients. It was replaced with a chunk of shared memory carefully designed to work without mutex protection, to reduce kernel calls. Other S/R/R mechanisms were also replaced by shared memory. We’re looking at making our USB client driver operate out of shared memory too, because the resmgr framework results in too many kernel calls. If we had a multi-GHz x86 we’d have enough cycles to do it all, but low power comes at a cost (even the “low power” Atom uses 24x the power of a PXA270).

You’re exactly right, looking from Linux/Windows to the QNX ResMgr framework would be like having a party!

I agree, there are always constraints that prevent the use of certain technologies (you wouldn’t think about writing a Java application for an 8-bit microcontroller, for example). It’s important to be clear about this: in your environment, resource managers and even S/R/R may be “heavy”, but in other applications (you mentioned the multi-GHz x86) they may be “lightweight”.

It is also worth thinking about using resource managers in combination with shared memory when dealing with high data volumes: in this scenario you do the “signaling” through the resource manager and hand over the big chunk of data via shared memory.

Have fun!

-Peter

I guess it’s time for me to weigh in with my 2 cents. Yes, if you look at QNX 2, it’s unbelievably simple to set up two-way message passing - maybe 5 lines. But that’s just the setup, and I think that’s the only thing to discuss. Once a QNX 6 io-manager is set up, you can use it just like the QNX 2 or QNX 4 version, with your own message types. I don’t think there’s any speed advantage in QNX 6 to avoiding the io-manager library, and there are some big disadvantages, e.g. losing thread pools.

About a year ago I had to deal with this problem on a very short schedule. I ended up building about 8 or 9 QNX 2-like managers under QNX 6 in under a week. Most (but not all) of these were directly or indirectly hardware interfaces (drivers). I dealt with the manager problem by building a template. When I was done, the template was easy to configure for each manager - maybe 2-3 minutes each, including a skeletal client routine. It pretty much felt like the old QNX 2 days.

So my conclusion is that with a little planning the whole thing is moot. As for the message passing vs. shared memory argument, I think there are very few cases where shared memory is needed for performance reasons. What you usually have is the need to move data back and forth between processes, which means two things: memory copies and system calls for synchronization. Ignoring all overhead for a second, what is the difference? OK, once you include the overhead you do get into an area where some applications would be better off using shared memory, but that need suggests the processor might be a little underpowered for the project - something you can’t always avoid.

Lots of CPU cycles allow the luxury of robust process isolation by using lots of kernel calls, context switches and copies.

I never found message passing overhead to be a problem until the latest project that had a very underpowered processor. By using a combination of optimizations, predominantly a move away from message passing to on-chip shared memory in several different data paths, we were able to triple the performance. I’d still rather use message passing but this experience tempered my enthusiasm a bit.

But what does it buy me over S/R/R? It’s more complicated, requires layers of macros, and doesn’t actually do anything new for me.

Like you said, this is pointless.

This doesn’t actually solve the problem. My resource manager makes sure that I have atomic access. If the resource manager handles multiple connections from multiple clients as totally independent, I lose that. The “one big message queue” solves all of my concurrency problems in one swell foop.

I have to admit I don’t know what that means. Ok, I’m an idiot. Still, the S/R/R setup works for me even though I’m an idiot. Why switch to something I don’t even understand?

Actually a downside for me. I NEED the thread to block. Again, it solves my concurrency issues.

That’s a good point. The only way we do that right now is to hard-code an often-used multiple-data request. So I could see this being useful. Still, since performance hasn’t been an issue, I don’t think it would have too big an impact on our applications.

I don’t think I understand that, either.

Nifty, but never needed it.

In our applications, all of our responses are essentially instantaneous. Again the dictionary / map / look-up table is the kind of thing we usually use message passing for. Even when we’re talking to hardware the responses are fast enough that I don’t need to work on another task. This is a bit like the “you don’t have to block” point.

Again, nifty but never needed it.

-James