Armin Steinhoff wrote:
Tomas Högström wrote:
Armin Steinhoff wrote:
Tomas Högström wrote:
Armin Steinhoff wrote:
Tomas Högström wrote:
In my project we need to optimize the FIFO usage to tune system performance.
We'll use ConnectTech boards with 64- or 128-byte FIFOs. How can I control the
FIFO usage from within my application? The tcsetattr() call does not seem to cover this.
It depends on the driver … there is normally an option for
defining the length of the FIFO.
The boards use the devc-ser8250 driver. This driver can set the FIFO trigger level
to 1, 4, 8, or 14 bytes. From some digging I have come to the conclusion that
these settings map to 8, 16, 56, and 60 bytes of FIFO depth on the CTI BlueHeat's
16C654 UART. I.e. we specify a trigger level of 8 and hopefully get 56.
Not very obvious, IMHO.
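For reference, we start the driver along these lines (assuming devc-ser8250's
usual -t receive-trigger option; the port address and IRQ below are just
placeholders for our setup):

    # 8-byte receive FIFO trigger, 115200 baud; address/IRQ are placeholders
    devc-ser8250 -t 8 -b 115200 0x2f8,3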
No … if you specify 8, you will get a FIFO depth of 8.
The devc-ser8250 driver doesn't support the 16C654 UART fully.
(I have just written a driver for the 16C654 from scratch …)
Are you sure? The FIFO trigger levels are specified with bits 7 and 6 of the FCR register.
8 bytes would correspond to code 2 when configuring a 16C550. Why would the driver
map this to code 0 (8 bytes) when configuring a 16C654?
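To spell out the mapping I have in mind (the depths are from the respective data
sheets; the helper is just my illustration, assuming QNX's out8() from
<hw/inout.h>, not actual driver code):

    #include <stdint.h>
    #include <hw/inout.h>   /* QNX out8() */

    /* The RX trigger is selected by FCR bits 7:6; the same two-bit
     * code means a different depth on the two parts:
     *   code   16C550   16C654
     *    0        1        8
     *    1        4       16
     *    2        8       56
     *    3       14       60
     */
    #define FCR_ENABLE_FIFO    0x01
    #define FCR_RX_TRIG(code)  ((uint8_t)((code) << 6))

    static void set_rx_trigger(uintptr_t fcr, unsigned code)
    {
        out8(fcr, FCR_ENABLE_FIFO | FCR_RX_TRIG(code));
    }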
Yes … on a 16C654, code 2 sets the FIFO trigger to 56 bytes, but the
driver will only handle a FIFO depth of 8.
Aha. So this means the driver reads chunks of eight bytes instead of the more optimal 56?
I can live with that. My main concern is to minimize the number of interrupts
(and context switches) of the system.
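(Back-of-the-envelope, assuming 115200 baud and 10 bits per character: about
11520 characters per second, so a trigger of 8 means roughly 1440 receive
interrupts per second, while a trigger of 56 would cut that to about 200.)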
This means the driver has to detect that it's a UART with a 64-byte FIFO instead of 16. Is this
really true?
Yes … a universal driver must handle the specific details of
the particular UARTs.
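For example, a driver can tell a 650/654-class part from a plain 550 by probing
for the Enhanced Feature Register, which is only visible while LCR holds 0xBF.
A rough sketch (the usual 8250 register offsets; a crude probe, not the
devc-ser8250 code):

    #include <stdint.h>
    #include <hw/inout.h>   /* QNX in8()/out8() */

    #define REG_LCR 3
    #define REG_EFR 2   /* only visible while LCR == 0xBF on 650/654 parts */

    /* Returns nonzero if the UART exposes an EFR, i.e. looks like a
     * 16C650/16C654 rather than a plain 16C550. */
    static int has_enhanced_regs(uintptr_t base)
    {
        uint8_t lcr = in8(base + REG_LCR);
        uint8_t efr;

        out8(base + REG_LCR, 0xBF);      /* unlock the enhanced register set */
        out8(base + REG_EFR, 0xAA);      /* write a test pattern to EFR ...  */
        efr = in8(base + REG_EFR);       /* ... and see whether it sticks    */
        out8(base + REG_EFR, 0x00);      /* leave enhanced features disabled */
        out8(base + REG_LCR, lcr);       /* restore the normal register set  */

        return efr == 0xAA;
    }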
Yes. I had the devc-ser8250 driver in mind, since this is what the manufacturer recommends.
Thanks / Tom