I’d like to use some IIDC 1394 (firewire) cameras on PCs running Neutrino 6.3 for a machine vision application. Anybody ever hear of a Resource Manager for any firewire cards?
QNX’s supported hardware list doesn’t have ANY kind of listing for FireWire devices, so I might have to figure out how to write my own Resource Manager for this project. But if there’s any FireWire support for Neutrino already out there somewhere, I’d love to take a look at it.
I’ve never heard of any drivers for QNX. One thing that complicates matters is that 1394 has multiple faces: it connects to peripherals like video cameras, but it can also function as a network.
The versatility of 1394 is one of the reasons I like it: multiple hosts, peer-to-peer communication, and higher actual throughput at 400 Mbit/s than 480 Mbit/s USB. But it looks like it might be quite a challenge to implement in Neutrino.
I’ll take a look at what Mindready has to offer, and perhaps I should revisit the idea of using gigabit Ethernet cameras instead of FireWire…
Ethernet (at 1 Gig) will give you all that; what you lose, though, is the deterministic nature of 1394. However, you gain lots of cable length!
We currently have 15 Coreco Genie cameras in the field. They work OK, although the first batch had problems keeping their firmware.
We are currently using 640x480 at 60 Hz (progressive) and throughput is 200 MBytes/sec. That can keep a CPU rather busy. The Intel 1 Gig card helped a lot to reduce CPU usage.
Whoops, sorry, I meant 20 MBytes (200 Mbits), one fifth of 1 Gig.
It’s for machine vision; each image is analysed in real time. We have some systems set up at 640x240 but at 120 Hz.
I have another customer who uses 1280x1024 but at 15 fps, which has about the same bandwidth requirement as 640x480 at 60 fps.
I don’t know what makes the Intel 1 Gig card so efficient; I’m guessing deep buffering. I know it’s not TCP/IP specific, because even with QNX’s native FLEET networking the difference is apparent.
With QNX4 it takes less CPU to handle 20 MBytes with a 1 Gig card than 10 Mbits with a 100 Mbit card.
Continuous throughput of 200 megabits per second IS pretty impressive for a single Ethernet port.
I presume you’re using DALSA/Coreco’s monochrome Genie M640 rather than the C640. Since they have three models with higher resolution in the same form factor to create an upgrade path, the C640 looks like it might be a good choice for me if I can make it work in Neutrino without spending six months on software development.
Are you using these things in Neutrino and doing all your own image processing instead of using Coreco’s Sapera image library?
If so, how big of a challenge was it to talk to the camera’s Ethernet port using Neutrino?
Does it limit you to the typical 1500 byte MTU or does it support Ethernet “jumbo frames” for greater throughput?
With an AMD X2 at 2.2 GHz it takes about 20% of one core to handle 200 megabits/sec, and that includes everything: networking and the driver itself, which stores the images in a circular buffer.
We are using the monochrome version. We are not using the Sapera library. About two years ago I ported the Sapera library to QNX6 for their X64 PCI frame grabber (Coreco is 20 minutes from where I live and I have a very good relationship with them) and it was a real nightmare. Their library is HUGE, with about 10 shared objects. I finally made it work, but it was very unstable and complicated to use and program with.
When it came time to write a driver for the Ethernet camera, there was no way I’d use the Sapera stuff. It was actually technically simple to get this thing going on Neutrino; the problem is that the GigE Vision protocol is not yet public… And even if it were, some parts of GigE Vision are left to each manufacturer to implement, so Coreco/Dalsa’s specs are also not public. We managed to get some help from Dalsa because of the high volume of cameras we purchase.
QNX6 doesn’t support jumbo frames.
Drop me a message if you’d be interested in the driver.
I’m assuming this is for the DARPA challenge? May I ask why QNX6 was chosen for the project?
Yes, this is for a DARPA Urban Challenge vehicle. Although our project website is a bit out of date, take a look at www.SAMItheRobot.com. We’ve built a 6000 pound, 300 horsepower autonomous robotic vehicle out of a 1941 Dodge WWII military ambulance.

It’s been running in DOS for the past two years as a large single-process, single-threaded application written in 32-bit protected mode x86 assembly language, but implementing all of the vehicle control and sensor fusion with a single thread on a single processor was (and is) rather challenging. For the upcoming race in November we HAVE to have better perception than we’re currently getting with our three scanning SONAR units, so I’ve decided to go multi-processor and multi-camera, and I wanted to run a modern OS so I could use multiple threads to simplify the software architecture and improve performance.

I know conventional wisdom says that multi-threaded code is more complex than single-threaded code, but if the application is trying to manage multiple peripherals simultaneously in real time it’s appropriate (and easier) to use multiple threads. I think it’s going to be MUCH simpler to let procnto-smp manage multiple threads than it has been to manually manage the timing and scheduling of multiple tasks in my current DOS-based application.
I wanted to use Neutrino because the idea of letting a 6000#, 300 hp vehicle run around autonomously under some flavor of Windows or Linux just seemed a bit too scary to me, and Neutrino’s support for SMP/multi-core seems to be well thought out. Neutrino also has a reputation for high performance and stability; stability being critical for safety in a project of this sort. I realized that Neutrino’s somewhat limited hardware support was going to present some challenges, but I figured I could work through those more easily than I could make multiple DOS-based PCs work well together on a GigE network. In fact, as I read documents like the QNX Neutrino RTOS System Architecture book, I find Neutrino to be the first OS I’ve ever studied that seems like it was well planned and logically implemented, rather than being thrown together in a hurry and patched for the last decade or two like DOS, Windows, Linux and so on seem to me to be.
So here I am, thrashing about and running short of time because I underestimated the learning curves to become a Neutrino user AND a C/C++ programmer. Having never worked with any flavor of Unix, it’s been no small challenge to reformat my brain to use forward slashes and multiple command line arguments (I’ve had to set aside at least 20% of my cerebral cortex to store Neutrino command line options), and the time I’ve spent trying to wrap my brain around mkifs and the build files it uses has actually caused me to say nice things about Microsoft for the first time in a long time! :0)
At any rate, it’s slowly coming together and there’s probably at least a 30% chance I’ll get my new version of software up and running enough to pass the NEXT DARPA deadline which is two weeks from today.
Yes, I would be VERY interested in a Neutrino driver for Coreco’s Genie M640 GigE cameras. Let me know what you have in mind.
Also, if anyone out there in the QNX user/developer community has had any experience with using USB cameras in Neutrino I’d like to hear about it. That might also be another quick image capture solution that would help me get past the imminent deadlines while I work out all of the details for a better long term digital camera capture solution under Neutrino.
I would not go multi-threaded; I would go multi-process. MUCH easier to develop, MUCH easier to debug. That being said, in your case I would seriously consider something like Windows XP Embedded. You will get a lot more flexibility in your hardware selection and be able to spend time on what is important.
Unless you have a special agreement with QSS you will find the multi-core license VERY expensive.
I’ll send you an email this weekend to talk about the GigE camera.
Mario, can you also send me an email regarding the GigE camera? I am planning to work on a GigE camera driver and it would be a great help to see how it is done.
Thanks in advance.
Officially, I think you need to become a member of the GigE Vision group to get access to the latest draft (I don’t think it’s completed yet). There are some old specs you can find on the web, but as far as I know they’re different enough to cause you grief. Or you can do like we did: hook up with a manufacturer of GigE Vision devices and sign an NDA.
Also note that GigE Vision allows each device manufacturer to create their own extensions, so in some cases you need to get that info as well.
Some manufacturers of conversion devices (Camera Link to Ethernet) mention GigE but not GigE Vision, which means it’s a different protocol, usually in-house stuff.
I am a new member joining the community. I browsed through previous articles and have learned a lot from the postings, especially those by senior members like mario. Since they work on real-world projects and are paid by their companies for doing just that, most of the members here are gaining knowledge and information from the experience of others.
Without such support from members like mario, I am sure most of us would not have progressed. At the same time, I also feel people should not be allowed to become code parasites.