Some thoughts.
I’m assuming all these end-points are handled by the same resource
manager.
The one resource manager handles all the end-points.
Is your resource manager multi-threaded?
The resource manager is multithreaded.
Do you really need to notify your clients that the 1ms is up?
I need to notify the client app when it can send again (after the 1ms is
up). At that point it needs to assemble the data and write() it. I can’t
call write() earlier and let it block, because by the time the write
executes, the data may be stale (it’s a real-time app).
That is, what should happen if:
client 1 writes to device A
less than 1ms passes
client 2 writes to device B (or it could be A; doesn’t matter, right?)
What does your resource manager do to client 2’s request?
– do you even see it? If you are single-threaded, do you spend the
full 1ms handling the write in the io_write() handler? If not,
you’ll never see it until you’re done with client 1’s request, and then
you’ll be safe to handle client 2’s request
It is multithreaded, so I will see both requests.
– do you block client 2 and make it wait until client 1’s request is
done? (has client 2 opened with O_NONBLOCK? Is that even meaningful?)
– this one is pretty easy to handle
– condvars might be useful internally for signalling that you can
process the next client
Currently blocked writes are not implemented, as there’s no use for a
blocking write message: by the time the blocked write could send, the data
would probably be stale.
– do you fail the client? What errno? EAGAIN?
I fail the client with EWOULDBLOCK (EAGAIN). My resource manager assumes
O_NONBLOCK was specified for all writable files.
– can the client KNOW that it is a 1ms delay, and just try again
itself without needing to be signaled from you?
That’s roughly the system I’m using right now: an ad-hoc way of
waiting for the next system clock tick, at which point I know I can send
again (the transmit is synced to the system clock).
– could it be a variable delay?
The delay is always 1ms.
If you have multiple clients to signal, each is going to get it nearly
simultaneously, most will fail…do they have to re-activate their
notification each time? Is this more overhead than you want every 1ms?
There’s likely only one client waiting, and if there are two, only one will
get to write this time (the other will get the next tick). This case won’t
occur often enough that I’m worried about the overhead involved in
re-activating the notification. There is an iofunc_notify_t structure
associated with every file (managed by this one resource manager), though,
and I want to avoid calling iofunc_notify_trigger() on each one. I might
work around this by maintaining a list of the iofunc_notify_t structures
that have a client waiting on them, and iterating through that list when
the timer is up.
The iofunc_notify_* helper functions are available in source from
cvs.qnx.com – they require an array of iofunc_notify_t structures,
so you can’t share – but you may find that you can make your own
special case set based on that code that does what you want.
I’ll look into this option too.
Thanks for your help,
Shaun