precise serial port data reading

Dear all,
I am using an IMU sensor which outputs data at 76.29 Hz, i.e. every 13.1078 ms. I am presently using a realtime timer to poll the sensor at this rate.
timer_create(CLOCK_REALTIME, &event, &timer_id);
But as we know, the timer period cannot be that precise down to the microsecond. Hence at regular intervals I receive two data frames in a single poll. Is there any way to solve this?
I am reading the port using the read function. Is there any function which can block the thread until data is available? That way I would not have to poll with a realtime timer, and the data would be obtained at precise times.

This is the way I open the port
fd = open(device,O_RDWR | O_NDELAY);

and my settings are:
tcflush(fd, TCIOFLUSH);                      /* discard anything pending */
int n = fcntl(fd, F_GETFL, 0);
fcntl(fd, F_SETFL, n & ~O_NDELAY);           /* switch the fd to blocking mode */
tcgetattr(fd, &oldtio);
struct termios newtio = oldtio;
cfsetispeed(&newtio, bauds);
cfsetospeed(&newtio, bauds);
newtio.c_cflag = (newtio.c_cflag & ~CSIZE) | CS8;   /* 8 data bits */
newtio.c_cflag |= CLOCAL | CREAD;
newtio.c_cflag &= ~(PARENB | PARODD);        /* no parity */
newtio.c_cflag &= ~IHFLOW;                   /* no hardware flow control (QNX flags) */
newtio.c_cflag &= ~OHFLOW;
newtio.c_cflag &= ~CSTOPB;                   /* 1 stop bit */
newtio.c_iflag = IGNBRK;
newtio.c_iflag &= ~(IXON | IXOFF | IXANY);   /* no software flow control */
newtio.c_lflag = 0;                          /* raw (non-canonical) input */
newtio.c_oflag = 0;
newtio.c_cc[VTIME] = 0;
newtio.c_cc[VMIN] = 0;
tcsetattr(fd, TCSANOW, &newtio);
int mcs = 0;
ioctl(fd, TIOCMGET, &mcs);
mcs |= TIOCM_RTS;                            /* assert RTS */
ioctl(fd, TIOCMSET, &mcs);
tcgetattr(fd, &newtio);
newtio.c_cflag &= ~IHFLOW;
newtio.c_cflag &= ~OHFLOW;
tcsetattr(fd, TCSANOW, &newtio);

Thanks
Aswin

Aswin,

read() will do what you want. It blocks until data is available (assuming you set up the initial open on the serial port to be blocking).

Just how much data are you getting every 13 ms? If it's a standard amount (i.e. always 10 bytes) it makes the read() call much simpler, because you can wait for exactly 10 bytes. Otherwise you are going to have to write some code to put the read() in a loop until you get the number of bytes in your data packet.
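Roughly like this, just as a sketch (FRAME_SIZE here is a placeholder for your packet length, and the port must have been opened without O_NDELAY/O_NONBLOCK so that read() actually blocks):

#include <unistd.h>

#define FRAME_SIZE 10    /* placeholder: use your actual packet length */

/* Read exactly one frame; read() blocks because the port is open in
   blocking mode. Returns 0 on success, -1 on error/EOF. */
int read_frame(int fd, unsigned char *frame)
{
    int total = 0;
    while (total < FRAME_SIZE) {
        int n = read(fd, frame + total, FRAME_SIZE - total);
        if (n <= 0)
            return -1;   /* error or EOF: handle/resynchronize as needed */
        total += n;
    }
    return 0;
}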

Tim

Dear Tim,
Glad to know read() can do it. I am getting 22 bytes per frame. If this is possible, a good approach would be to run the realtime timer a bit faster, i.e. at 10 ms, and let read() block for the remaining 3.1078 ms. Yes, of course, as you mentioned, we can count the bytes as we receive them. But in cases where there is a lot of other computation on the CPU, I think it is best that the reading thread stays inactive for some of that time. Please let me know how I can open the port in blocking mode.

Thanks a lot
Aswin

Dear Tim,
I see that setting VMIN = 22 is a possible solution. In that case, does read() return in real time, i.e. as soon as the data arrives? And does this approach block the thread until 22 bytes have been received?

Thanks
Aswin

VMIN should work. Also, make sure to set the RX FIFO to 1, otherwise there is a possible 50 ms delay, since none of the FIFO depths divide evenly into 22.
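For illustration only, on top of the termios setup you already posted the change would be roughly this (untested sketch):

newtio.c_cc[VMIN]  = 22;   /* read() returns only once 22 bytes have arrived */
newtio.c_cc[VTIME] = 0;    /* no inter-byte timeout: block indefinitely */
tcsetattr(fd, TCSANOW, &newtio);

unsigned char frame[22];
int n = read(fd, frame, sizeof(frame));   /* should block, then return 22 */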

Dear rgallen,
How can I set the receive FIFO to 1? I could not understand what you meant.

He was referring to the UART FIFO, which may be set with a command-line option to the devc serial driver (not sure what your platform is).

Thanks for all your replies. I am using the XT/104plus.
This has a -t option which is described as:
-t number Enable receive FIFO and set receive trigger level (default -t 56)

So will the following work?
devc-serCtiPciUart -t 1 &

But I am not getting accurate timing for each frame yet. I tried printing the number of bytes read each time, and I am getting output like this:
22
22
22
…around 20 to 25 times, then
64
…again the same with displaying 22, then
125

So at regular intervals the wrong number of bytes is displayed. Why is this so?

Aswin

Aswin,

It could be because the read() thread is lower priority than another thread, and at certain times that other thread delays the read() thread for long enough that more than one frame's worth of data is in the buffer.

The other possibility is that printing to the screen itself causes delays. I am not sure whether you are printing directly to a console, to a Photon console, or to the Momentics IDE running on Windows. But in my experience, printing to the Momentics IDE or a Photon console introduces delays that disappear once I remove the prints. You are better off redirecting the output to a file, running for a while, and then looking at the contents of the file to see what was received.

Tim

OK, it looks like you have a custom serial port driver. Perhaps your FIFO can be set to a value that divides evenly into 22.

As Tim says, a printf can take multiple ms to happen. You should insert a trace statement (see TraceEvent() in the docs) that logs the number of bytes as a user event, and then use the System Profiler to capture a trace while the system is running. This is minimally invasive and should not alter the results. You can then confirm that you get exactly 22 characters each and every time.
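Something along these lines should do it; the constants are from memory, so double-check the TraceEvent() page in the docs for your QNX version:

#include <sys/trace.h>

/* Log the byte count as a simple user trace event instead of printf()'ing
   it; the event then shows up in the System Profiler timeline. */
int n = read(fd, frame, sizeof(frame));
TraceEvent(_NTO_TRACE_INSERTSUSEREVENT, _NTO_TRACE_USERFIRST, n, 0);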

Of course, in the real world there may be comm errors etc., so you need to select on OOB conditions and do some jiggery-pokery (clear out buffers, etc.) to get things back on track should an error occur (which should only happen very infrequently).
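As a rough sketch of that recovery path (assuming the driver reports such conditions through select()'s exception set; adjust to whatever your driver actually supports):

#include <sys/select.h>
#include <termios.h>

fd_set rfds, efds;
FD_ZERO(&rfds);  FD_SET(fd, &rfds);   /* normal data          */
FD_ZERO(&efds);  FD_SET(fd, &efds);   /* OOB/error conditions */

if (select(fd + 1, &rfds, NULL, &efds, NULL) > 0) {
    if (FD_ISSET(fd, &efds)) {
        /* comm error: throw away what is queued and resync on the next frame */
        tcflush(fd, TCIFLUSH);
    } else if (FD_ISSET(fd, &rfds)) {
        /* normal path: read the 22-byte frame as usual */
    }
}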