I’m a complete newbie to QNX and I’m trying to understand whether it is realistic/possible to use QNX in a virtual reality setup with stereo graphics. This would need an accelerated OpenGL video card supported by QNX. I’m a bit lost, since the only chip QNX seems to support in the latest release is the Fujitsu Coral-P, which doesn’t ring any bells for me. Is it used in a stereo-compatible card with double buffering? There seems to be development for the ATI Radeon series, but not for the (3D stereo) FireGL, which is ATI’s truly professional line of OpenGL cards.
Am I right in concluding that 3D stereo with QNX is not possible at present? Is there any chance to see a 3D stereo board (like the FireGL) supported one day? Are there other ways to get stereo graphics output with QNX?
You can certainly write your own video driver, but at this point the OpenGL/3D support is not exported as part of the graphics DDK. This means you would currently have to contract QSS to write the driver for you if you want OpenGL.
If your plans are commercial, this would not be a hugely expensive way to do it. If you are just “fooling around”, then you will have to wait until the 3D API is exported as part of the graphics DDK.
Sorry, I’m an academic with no commercial plans (and also not an IT specialist), and I don’t feel like developing a graphics driver (or having one developed) if it takes more than one man-month, as I assume it would.
So I reckon I’ll have to make do with solution #2, but I’m unsure about what you mean by “wait until the 3D API is exported as part of the graphics DDK”. Momentics version 6.3 has a 3D Technology Development Kit; I don’t get the difference between a TDK and a DDK. And I don’t see how the OpenGL API being part of a DDK would help me get stereo-capable OpenGL acceleration. Because it would imply the existence of many more drivers? Or because it would make driver development a piece of cake?
Sorry for my thickness. I’m venturing far from my domain of knowledge…
Yeah. The problem is that the current graphics DDK doesn’t allow support for 3D, so there is no public knowledge of how to create a 3D driver for QNX. Hopefully QSS will release an updated graphics DDK at some point, so that anyone who is interested can write a 3D driver.
At this point, the only way to get such a driver is to contract QSS.
Personally I would stick to Windows (or maybe Linux, I don’t know) for stereo rendering. Get QNX to do the real work and let the Windows box take care of the display. Under Windows you don’t even need a card like a FireGL; most standard cards can handle it (I’m assuming you mean looking at the screen through 3D glasses).
But then you need to work on a communication protocol between the two boxes that is fast enough to ensure minimal lag between the rendering request and the display (that’s especially crucial in VR, at least for scientific purposes: one video frame, around 10 ms, means a lot). And if I understand correctly, your Windows display can be slowed down at random intervals by spurious background processes and a rather primitive scheduling system. So you lose the advantage of using QNX to handle the rest of your I/O, no?
Yes, you would need a protocol of some sort (over TCP/IP), but then again it would be the same under QNX. The reason is that under QNX it would be natural to decouple the application into two or more processes: one program deals with the rendering, and the others deal with everything else. How the data is exchanged between them would need to be designed in either case.
As for video processing slowing down the rest of the I/O: again, the problem is still there whether you run on a single box or on multiple boxes. As a matter of fact, depending on the hardware, heavy graphics isn’t very real-time friendly. With proper buffering and asynchronous data exchange, these types of problems can easily be dealt with.
Having few details about your requirements, I’m just guessing here.
Ok, I see that different processes don’t share an address space by default, so some form of inter-process communication has to be implemented.
However, my problem is not so much that graphics will slow down the I/O (if the graphics run on a dedicated box, they may be very late relative to the input, but they won’t affect it, using the solutions you mention: buffering + async communication). Actually, our way of thinking is to organize everything, data input and output, around a hard real-time box (the “orchestra conductor”) interfaced with all the devices via dual-access memory banks (which makes our buffering and asynchronous communication easy). My main concern is how to get the 3D graphics done in minimum time. I suspect the graphics will sometimes be delayed by the Windows box for reasons that could not occur with QNX. Or am I idealizing QNX? I just remember seeing video rendering processes interrupted by Windows background processes, with little you could do to avoid the “gaps”. In a sentence: I’m looking for the fastest and most reliable 3D stereo rendering system (and a cheap one too).