Embedded world...

BTW, is there someone using QNX outside the embedded world?


Sure. We use QNX (4.25) as a server platform for our Security/Access Control Management Software.

Thanks for the reply.

Someone else??.. Someone using QNX 6.* ?


Most people I know do not use it in embedded system ;-)

ADDED: Maybe you should describe what you mean by embedded ?

I use QNX 6 for development (Duh!) but also as an Internet server.

Embedded != "some motherboard in a big case + some powerful uP with a big fan cooler + lots of PCI slots with many plugged-in cards doing something + a big hard drive (maybe a RAID) + + + KVM +++" and finally… many of these "packages" (industrial PCs, maybe 20) working together in a single (qnet?) network.

well… :open_mouth:

we are down from 20 PCs (QNX 4) to 3 PCs (QNX 6) ;-)

Ah, the power of Moore’s Law + Multi-Cores.

We need only 2 QNX PCs (one PC/104 board on the vehicle and one industrial rackmount in the control center) for our real-time work (the GUIs are done on Windows). Furthermore, we don't have any PCI plug-in cards. All our I/O is done by external boards (custom designed + 3rd party) communicating over serial or Ethernet (this saves the trouble of writing drivers).


Has anyone noticed that Moore's Law has technically (if not effectively) slowed down? Otherwise we'd have 8 or 16 GHz P4s by now.

Well, I think it still applies because although you don't get higher frequencies, you get multiple cores. AMD came out with a 12-core processor running at a maximum of 2.3 GHz, so that's 27.6 GHz ;-)

Well that’s what I meant by effectively. The key of course is that the cost per core must continue to come down if the speed does not go up.

Also it's worth mentioning that the theoretical 27.6 GHz is only possible if you have a threaded program that can take advantage of those cores. If you don't (say a PC game or formatting a Word document), then CPU speed hasn't improved much in the last half dozen years.

It’s also obvious that Moore’s law can’t continue indefinitely because it’s a geometric rate of increase that would have processors with virtually infinite speed/cores by the start of the next century.


[quote="Tim"]Also it's worth mentioning that the theoretical 27.6 GHz is only possible if you have a threaded program that can take advantage of those cores. If you don't (say a PC game or formatting a Word document) then CPU speed hasn't improved much in the last half dozen years.[/quote]

Yes, but note that in the case of formatting a Word document, once it takes less than 0.1 seconds to format a 10,000-page document, it doesn't matter whether you can speed it up or not. If you have 10,000 of these documents to format, well, that's where multi-core comes in handy. The only place where you run short is a program that must run linearly and for which there is always a gain from running it faster. I can't think of any right now. There are probably some mathematical problems that work this way.

If we accept increasing numbers of cores in place of increasing processor speed, I think the end of Moore's Law will be market driven. There will be multi-core devices that are cheap and already do pretty much whatever anyone would want, so why make anything better? Imagine the volume of an iPod filled with multi-cores.

The problems are temperature :blush: and the size of an atom. We need a hardware revolution not based on transistors… :stuck_out_tongue: :stuck_out_tongue: :stuck_out_tongue:

Doesn't sorting fall into this linear category? One of the best general-purpose sort algorithms is Quicksort, which works by dividing the items into ever smaller groups that themselves get sorted. Now, taking advantage of multiple cores (i.e. each sub-group gets assigned to another core to allow parallel sorting of those sub-groups) requires a software rewrite in addition to the hardware core increase. It's not clear that rewrite is happening, or is going to happen, except in special software (like 3D Studio Max) designed to benefit from multi-core. For a regular program it's much faster to sort on a single-core CPU with a faster clock than on a multi-core CPU with a theoretically faster aggregate speed.

I don't know about you, but I'd be much happier if my iPhone had multiple cores in order to run more than one app at a time :slight_smile:

how about a software revolution based on less piggy code :slight_smile:


you mean less microslob crud???

Not just Microslob but they are clearly the biggest offender.

Java is another big offender.

I mean, it seems amazing to me that it takes 1-2 megs to do what used to routinely be done in a few K. Most of that bloat is due to all the higher-level languages (C#, Java, virtually all the scripting languages, etc.) doing more and more stuff behind the scenes.

But don't discount newer software methodologies like Design Patterns either (I use them heavily, but I also realize they mean massive code bloat if you use them to build for a future that is never needed/never arrives).

This point became more salient recently when I did a project on a PIC micro at work. The processor only has 64K of code space, and I was able to build a complete pressure monitoring device that included several I2C sensors, a 2-line LCD character display, serial-port-based calibration, etc., with about 10K of space left over.

QNX is one of the great OSes because the microkernel technology lets you use as little as you need.


Hear, hear Tim.

I too am amazed at code bloat; a simple "Hello, World" seems to take megabytes nowadays. I'm old enough to have cut my teeth on FORTRAN and punched cards, so perhaps I just yearn for the good old days.

A little story.
Our facility produces intrusion/access-control panels. Until very recently there were two lines of management software developed to control and manage systems of these panels. The original was (and still is) written in C running on QNX. We started with QNX 2 and are currently using QNX 4.25. The product started as text-based, then used QNX Windows, and currently uses Photon as a GUI, which lets customers use Phindows. Management is reluctant to move to Neutrino (different story). The other management software was developed on Microsoft Windows, in whatever language and with whatever methodology was the "latest"; that team was constantly changing tools. The Windows line was developed because the sales guys always complained they couldn't sell a system that wasn't Windows-based.

There were two independent teams writing these products. A two man team on QNX and an eight man team for the Windows side.

Guess which team always delivered on time?

Guess which team hardly ever delivered on time and nearly always had a buggy product? Guess which team blamed its bugs on bad tools? This was the same team that insisted on having the latest and greatest OS, tools, methodology, etc. One of these guys refused to use C because he "knew it backwards" and it was no longer intellectually challenging! This same guy may have known his preferred language, but there was no way he knew how to put a system together.

Guess which team is still employed?

Management used to love these Windows guys because the buzzwords sounded so great. Unfortunately, buzzwords don't produce good product. Management also loves consultants. I've lost count of the number of consultants who have come in and said, "What! You're using C? OMG! That's awful; you must stop that and become productive with our new development products." No thanks, we are productive the way we are. The bonus is that the old tools don't cost us a red cent.

Moral of the story: a small team using simple tools and a rock-solid OS beats a large team with "leading edge" desires and egos to match.

Having enjoyed C++ programming for the last few years, I find that in most cases the tool is not to blame, but rather the depth of knowledge and experience of the person using it. Some tools are easy to master; others take years. I find that "modern" tools try to make complex problems simple, but in the process they hide the details to the extent that you lose that sense of control and/or understanding of what is going on. How many people know the effect of using std::cout versus printf?

I'm currently working on a daily basis with a guy who is switching from Java to C++. He often finds himself very frustrated at the level of knowledge required to write good C++ programs compared to Java; he's finding it really hard to live without garbage collection, even though I showed him that you can get something close to GC in C++. I have to admit I agree with him; however, he is also getting used to that sense of control you get with C and C++.

Someone could argue with Ianc that assembly is better than C, that C is bloated, and that a hello world in assembly under DOS uses 100 bytes compared to 3K in C. He would be right ;-)

I think you’ve touched on the heart of the problem Mario. We all want to be better programmers. We think that if something is easier this will make us better. It’s easier if we don’t have to know what is going on below the surface. When we don’t know what is going on below the surface, we become poorer programmers.

Analogously, yes, garbage collection is good because we don't have to worry about always freeing memory, a mistake that creates memory leaks. So we are not as careful in constructing our code. And code that is constructed less carefully is buggier.

One could argue that even with a garbage collector you have to be careful to drop references to unused data, or it is never collected. So how is that different from being careful to call free()?

Now, if you accept my argument (most of you probably don't), then the best environment would be the one where it is easiest to know exactly what you are doing, because at its heart it is simple. I think 'C' has an advantage here over C++, Java, and Assembler.

BTW: I like and use all 4, I’m not a Zealot.