Resource manager access

I have a pretty simple question… I have created a resource manager to
handle some hardware and I want to be sure that access to the hardware is
“controlled”. If I have 2 threads, and one opens the RM (resource manager)
for read access and the other opens it for write access, can the threads
‘get in each other’s way’? Or does the RM read() code finish executing
before the RM write() code? Does having 2 separate file handles help? Am I
right in thinking that the following could happen?

Thread1: fd1 = open("/dev/rmgr", O_RDONLY);
Thread2: fd2 = open("/dev/rmgr", O_WRONLY);
// …

Thread2: write(fd2, "hello there", 11);
Thread1: read(fd1, buf, sizeof(buf));
Thread2: write(fd2, "goodbye", 7);
Thread1: read(fd1, buf, sizeof(buf));

Can Thread1 read back ‘goodbye’ instead of ‘hello there’? (or something
else?) Can the RM read() code get interrupted by Thread2 before it finishes
the read()? Is it guaranteed that I will get back both ‘hello there’ and
‘goodbye’?

How can I be guaranteed that I don’t drop data?


Thanks,

Mark

“Mark Welo” <mwelo@logisync.com> wrote in message
news:913j0l$mhb$1@inn.qnx.com

I have a pretty simple question… I have created a resource manager to
handle some hardware and I want to be sure that access to the hardware is
“controlled”. If I have 2 threads, and one opens the RM (resource manager)
for read access and the other opens it for write access, can the threads
‘get in each other’s way’?

If your resource manager doesn’t use a thread pool, each request is
processed in order. However, if your resmgr has multiple threads there is
no way you can guarantee the order (especially on SMP); you need to do
some sort of synchronisation inside the resource manager.
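
For reference, “doesn’t use a thread pool” just means the standard
single-threaded skeleton below (roughly the shape given in the QNX docs;
the read/write handler hook-up and all of your hardware code are omitted,
and the names are only placeholders). Because one thread does both
dispatch_block() and dispatch_handler(), each client request is finished
before the next one is picked up:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/iofunc.h>
#include <sys/dispatch.h>

static resmgr_connect_funcs_t connect_funcs;
static resmgr_io_funcs_t      io_funcs;
static iofunc_attr_t          attr;

int main(void)
{
    dispatch_t         *dpp;
    resmgr_attr_t       rattr;
    dispatch_context_t *ctp;

    dpp = dispatch_create();

    memset(&rattr, 0, sizeof(rattr));
    rattr.nparts_max   = 1;
    rattr.msg_max_size = 2048;

    /* Default handlers; hook in your own handlers here, e.g.
     * io_funcs.read = io_read; io_funcs.write = io_write;        */
    iofunc_func_init(_RESMGR_CONNECT_NFUNCS, &connect_funcs,
                     _RESMGR_IO_NFUNCS, &io_funcs);
    iofunc_attr_init(&attr, S_IFNAM | 0666, NULL, NULL);

    resmgr_attach(dpp, &rattr, "/dev/rmgr", _FTYPE_ANY, 0,
                  &connect_funcs, &io_funcs, &attr);

    /* One thread, one loop: each request is fully handled before
     * the next one is pulled off the channel, so requests are
     * serialized automatically.                                   */
    ctp = dispatch_context_alloc(dpp);
    for (;;) {
        if ((ctp = dispatch_block(ctp)) == NULL) {
            perror("dispatch_block");
            return EXIT_FAILURE;
        }
        dispatch_handler(ctp);
    }
    return 0;
}

If you later switch to thread_pool_create()/thread_pool_start(), that
serialization goes away and you are back to needing your own locking.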

Or does the RM read() code finish executing
before the RM write() code? Does having 2 separate file handles help? Am I
right in thinking that the following could happen?

Thread1: fd1 = open("/dev/rmgr", O_RDONLY);
Thread2: fd2 = open("/dev/rmgr", O_WRONLY);
// …

Thread2: write(fd2, "hello there", 11);
Thread1: read(fd1, buf, sizeof(buf));
Thread2: write(fd2, "goodbye", 7);
Thread1: read(fd1, buf, sizeof(buf));

Can Thread1 read back ‘goodbye’ instead of ‘hello there’? (or something
else?)

It all depends on how your resmgr is written.

Can the RM read() code get interrupted by Thread2 before it finishes
the read()?

No, a request never gets “interrupted”. However, if your resmgr has
multiple threads, more than one operation could be processed
“simultaneously”.

Is it guaranteed that I will get back both ‘hello there’ and
‘goodbye’?

Yes, if the resource manager is single-threaded and you have some sort of
synchronisation between thread1 and thread2 to make sure the read is done
after the write.
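
For example, something like this on the client side forces each read to
happen after the corresponding write (a minimal sketch only; a condition
variable or barrier would do just as well, and the device name, flags and
buffer size are simply the ones from your example):

#include <fcntl.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

static sem_t data_written;   /* posted by the writer after each write() */
static int fd_rd, fd_wr;

static void *writer(void *arg)
{
    write(fd_wr, "hello there", 11);
    sem_post(&data_written);        /* tell the reader the data is there */
    write(fd_wr, "goodbye", 7);
    sem_post(&data_written);
    return NULL;
}

static void *reader(void *arg)
{
    char buf[64];

    sem_wait(&data_written);        /* don't read before the first write */
    read(fd_rd, buf, sizeof(buf));
    sem_wait(&data_written);
    read(fd_rd, buf, sizeof(buf));
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    fd_rd = open("/dev/rmgr", O_RDONLY);
    fd_wr = open("/dev/rmgr", O_WRONLY);
    sem_init(&data_written, 0, 0);

    pthread_create(&t2, NULL, writer, NULL);
    pthread_create(&t1, NULL, reader, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}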


How can I be guaranteed that I don’t drop data?


Thanks,

Mark

“Mark Welo” <mwelo@logisync.com> wrote in message
news:913j0l$mhb$1@inn.qnx.com

I have a pretty simple question… I have created a resource manager to
handle some hardware and I want to be sure that access to the hardware is
“controlled”. If I have 2 threads, and one opens the RM (resource manager)
for read access and the other opens it for write access, can the threads
‘get in each other’s way’?

Yes.

Or does the RM read() code finish executing
before the RM write() code?

Maybe, maybe not (non-deterministic on SMP).

Does having 2 separate file handles help?

No.

Am I right in thinking that the following could happen?

(description of asynchronous behavior snipped)

Yes.

Can Thread1 read back ‘goodbye’ instead of ‘hello there’? (or something
else?)

Yes.

Can the RM read() code get interrupted by Thread2 before it finishes
the read()?

Yes.

Is it guaranteed that I will get back both ‘hello there’ and
‘goodbye’?

No.

How can I be guaranteed that I don’t drop data?

Not sure what you mean by this.

You really need to pick up a book on threaded programming; there are two
that I can recommend: the O’Reilly pthreads book, and the Dave Butenhof
book published by Addison-Wesley. Unfortunately, neither of these covers
the real-time aspects of pthreads in a satisfying way (I haven’t found a
pthreads book that does).

Mark Welo wrote:

I have a pretty simple question… I have created a resource manager to
handle some hardware and I want to be sure that access to the hardware is
“controlled”. If I have 2 threads, and one opens the RM (resource manager)
for read access and the other opens it for write access, can the threads
‘get in each other’s way’? Or does the RM read() code finish executing
before the RM write() code? Does having 2 separate file handles help? Am I
right in thinking that the following could happen?

Thread1: fd1 = open("/dev/rmgr", O_RDONLY);
Thread2: fd2 = open("/dev/rmgr", O_WRONLY);
// …

Thread2: write(fd2, "hello there", 11);
Thread1: read(fd1, buf, sizeof(buf));
Thread2: write(fd2, "goodbye", 7);
Thread1: read(fd1, buf, sizeof(buf));

Can Thread1 read back ‘goodbye’ instead of ‘hello there’? (or something
else?) Can the RM read() code get interrupted by Thread2 before it finishes
the read()? Is it guaranteed that I will get back both ‘hello there’ and
‘goodbye’?

How can I be guaranteed that I don’t drop data?

Thanks,

Mark

I agree with Mario on every point. About the next point:

Is it guaranteed that I will get back both ‘hello there’ and
‘goodbye’?

If you really want such functionality, it’s always possible, instead of
synchronising the clients as Mario suggests, to prevent a new write from
going through until a read request reaches the resource manager (that is,
block the writer by not replying to it and buffer its message). That may
be easier to do. It’s straightforward if your resource manager uses a
thread pool: every writer thread is blocked in whatever way you decide,
waiting for a reader to clear the write condition. There is then no way to
know which writer thread will unblock first, so you have to implement your
own priority management. Quite fun to do ;-)
Of course, in theory you could have as many writer threads blocked as
write requests you receive, but the thread pool initialization imposes a
limit on the number of worker threads. So you could end up with all your
threads blocked on write requests and your resource manager unable to
receive a read. Subsequent messages will be queued until… I don’t know
what!
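
A rough sketch of that idea for a single blocked writer (the variable
names and buffer handling are mine, invented for illustration; a real
version would keep a queue of pending rcvids, handle errors, and hang the
buffer off the attribute structure rather than using globals):

#include <errno.h>
#include <string.h>
#include <sys/iofunc.h>
#include <sys/dispatch.h>
#include <sys/neutrino.h>

static char pending_data[512];    /* buffered message from the blocked writer */
static int  pending_len   = 0;
static int  pending_rcvid = -1;   /* rcvid of the blocked writer, -1 if none  */

int io_write(resmgr_context_t *ctp, io_write_t *msg, RESMGR_OCB_T *ocb)
{
    int status, nbytes;

    if ((status = iofunc_write_verify(ctp, msg, ocb, NULL)) != EOK)
        return status;
    if ((msg->i.xtype & _IO_XTYPE_MASK) != _IO_XTYPE_NONE)
        return ENOSYS;

    nbytes = (int)msg->i.nbytes;
    if (nbytes > (int)sizeof(pending_data))
        nbytes = sizeof(pending_data);

    /* Copy the data out of the message, then hold the client reply-blocked. */
    resmgr_msgread(ctp, pending_data, nbytes, sizeof(msg->i));
    pending_len   = nbytes;
    pending_rcvid = ctp->rcvid;

    return _RESMGR_NOREPLY;          /* writer stays blocked until a read */
}

int io_read(resmgr_context_t *ctp, io_read_t *msg, RESMGR_OCB_T *ocb)
{
    int status;

    if ((status = iofunc_read_verify(ctp, msg, ocb, NULL)) != EOK)
        return status;

    if (pending_rcvid == -1) {
        _IO_SET_READ_NBYTES(ctp, 0); /* nothing buffered: return 0 bytes */
        return _RESMGR_NPARTS(0);
    }

    /* Hand the buffered data to the reader ... */
    MsgReply(ctp->rcvid, pending_len, pending_data, pending_len);

    /* ... then unblock the writer, telling it how much was accepted. */
    MsgReply(pending_rcvid, pending_len, NULL, 0);
    pending_rcvid = -1;

    return _RESMGR_NOREPLY;          /* we already replied to the reader */
}

The client’s write() simply stays reply-blocked until a read arrives,
which is exactly the “don’t reply to it and buffer its message” behaviour
described above.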

Any idea, Mario, how to always keep a thread available for a read message?

Alain.

I think you might want to look at iofunc_attr_lock().

In your following example, /dev/rmgr would have 1
iofunc_attr_t but 2 ocbs (1 for fd1, 1 for fd2).
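
If some other thread inside the resource manager (say one that collects
data from the hardware) also touches state hung off that shared attribute,
it can take the same lock explicitly. A hypothetical sketch (the extended
attribute layout and hardware_data_arrived() are made up for
illustration):

#include <string.h>
#include <sys/iofunc.h>

/* Hypothetical extended attribute: the device buffer lives with the attr. */
typedef struct device_attr {
    iofunc_attr_t attr;    /* must be first so the iofunc layer can use it */
    char          buf[512];
    int           nbytes;
} device_attr_t;

static device_attr_t rmgr_attr;

/* Called from a separate thread when the hardware has produced data. */
void hardware_data_arrived(const char *data, int len)
{
    /* Take the attribute lock so we can't collide with an in-progress
     * client read()/write() that is using the same attribute.          */
    iofunc_attr_lock(&rmgr_attr.attr);

    if (len > (int)sizeof(rmgr_attr.buf))
        len = sizeof(rmgr_attr.buf);
    memcpy(rmgr_attr.buf, data, len);
    rmgr_attr.nbytes = len;

    iofunc_attr_unlock(&rmgr_attr.attr);
}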

-seanb



Mark Welo <mwelo@logisync.com> wrote:
: I have a pretty simple question… I have created a resource manager to
: handle some hardware and I want to be sure that access to the hardware is
: “controlled”. If I have 2 threads, and one opens the RM (resource manager)
: for read access and the other opens it for write access, can the threads
: ‘get in each other’s way’? Or does the RM read() code finish executing
: before the RM write() code? Does having 2 separate file handles help? Am I
: right in thinking that the following could happen?

: Thread1: fd1 = open("/dev/rmgr", O_RDONLY);
: Thread2: fd2 = open("/dev/rmgr", O_WRONLY);
: // …
:
: Thread2: write(fd2, "hello there", 11);
: Thread1: read(fd1, buf, sizeof(buf));
: Thread2: write(fd2, "goodbye", 7);
: Thread1: read(fd1, buf, sizeof(buf));

: Can Thread1 read back ‘goodbye’ instead of ‘hello there’? (or something
: else?) Can the RM read() code get interrupted by Thread2 before it finishes
: the read()? Is it guaranteed that I will get back both ‘hello there’ and
: ‘goodbye’?

: How can I be guaranteed that I don’t drop data?


: Thanks,

: Mark

Any idea, Mario, how to always keep a thread available for a read message?

I guess if the last available thread in the thread pool receives a message
and it’s not a read, it could reply with EAGAIN. But then, how many
programs handle EAGAIN properly?

I wonder if unioned mountpoints could help do that. The resource manager
would actually consist of two resource managers: one handling writes, the
other handling reads. Thomas?

Alain.

And the short answer that John didn’t quite give was that you need to
implement some form of semaphore mechanism for your threads to use;
whether that is one of the system routines or one built into your own
code, based on what you are doing, is up to you…

Marisa

Mutexes are a more appropriate primitive, and their correct use is too
large a subject for a Usenet post. In my defence, I did reference two
books on the subject.
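
For what it’s worth, the basic shape is just this (a generic sketch, not
specific to Mark’s resource manager):

#include <pthread.h>

static pthread_mutex_t hw_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Every piece of code that touches the shared hardware state brackets
 * the access with the same mutex, so only one thread is ever inside
 * the critical section at a time.                                     */
void touch_hardware(void)
{
    pthread_mutex_lock(&hw_mutex);
    /* ... read/write device registers, update shared buffers ... */
    pthread_mutex_unlock(&hw_mutex);
}

The books cover the parts that actually matter: when to lock, in what
order, and how to wait on conditions without busy-looping.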

“Marisa Giancarla” <mgiancarla@macromedia.com> wrote in message
news:916g9v$gu1$1@inn.qnx.com

And the short answer that John didn’t quite give was that you need to
implement some form of semaphore mechanism for your threads to use;
whether that is one of the system routines or one built into your own
code, based on what you are doing, is up to you…

Marisa

Mark Welo <mwelo@logisync.com> wrote:

I have a pretty simple question… I have created a resource manager to
handle some hardware and I want to be sure that access to the hardware is
“controlled”. If I have 2 threads, and one opens the RM (resource manager)
for read access and the other opens it for write access, can the threads
‘get in each other’s way’? Or does the RM read() code finish executing
before the RM write() code? Does having 2 separate file handles help? Am I
right in thinking that the following could happen?

Thread1: fd1 = open("/dev/rmgr", O_RDONLY);
Thread2: fd2 = open("/dev/rmgr", O_WRONLY);
// …

Thread2: write(fd2, "hello there", 11);
Thread1: read(fd1, buf, sizeof(buf));
Thread2: write(fd2, "goodbye", 7);
Thread1: read(fd1, buf, sizeof(buf));

Can Thread1 read back ‘goodbye’ instead of ‘hello there’? (or something
else?) Can the RM read() code get interrupted by Thread2 before it finishes
the read()? Is it guaranteed that I will get back both ‘hello there’ and
‘goodbye’?

Lots of good questions (and good answers as well). I’ll just
add a little bit.

Each open() of the same name “should” (i.e. unless you
know why you don’t want it to) map to the same attribute
structure in your resource manager. So for the above
situation you should end up with:

Client  |  Server
fd1 ----|--> ocb1 --> attr1 (for /dev/rmgr)
fd2 ----|--> ocb2 --> also attr1

Now if your resource manager is single-threaded then
you are guaranteed that the operations will be performed
serially (assuming that your threads do in fact run in
the order you have shown; the client has to guarantee
that). No surprise: the server only has one thread.

Once you start using the thread pool functions then
there is a different chain of events.

Each read/write operation within the resource manager
is “atomic” if the opens were bound to the same attribute
structure. The attribute structure is locked (which
is why seanb mentioned iofunc_attr_lock()) before heading
into any io operation. This means that unless you
explicitly unlock the attribute in your read/write
handler (which would mean that read/write operations
are no longer atomic), you are guaranteed that only one
of the operations is in progress at a time. So in the
above scenario one of the first read/write pair will get
in before the other, and the other will block waiting.
It is therefore possible that the read() reads from an
empty buffer.

So now knowing this we can look at your questions assuming
that things work the way we would like them to:

Can Thread1 read back ‘goodbye’ instead of ‘hello there’?
(or something else?).

I would say an unqualified “maybe”. The behaviour entirely
depends on the behaviour of your resource manager. If your
resource manager was a “normal” device then the data would
be written to a buffer that would look like:

Initially:
device buffer: []
ocb1 offset = 0, ocb2 offset = 0
(ocbs are equivalent to the client fds but in the server)

Thread2 write(fd2, “hello there”, 11);
device buffer: [hello there]

ocb1 offset = 0, ocb2 offset = 11

Thread1 read(fd1, buf, sizeof(buf));
device buffer: [hello there]

ocb1 offset = 11, ocb2 offset = 11

Thread2 write(fd2, “goodbye”, 7);
device buffer: [hello theregoodbye]

ocb1 offset = 11, ocb2 offset = 18

Now I’ve acted as if these requests take place serially,
when we know that the ordering could be any of the
following (looking only at the three calls above):

write(), read(), write()
write(), write(), read()
read(), write(), write()

Now since the server is the one who controls the advancement
of the offset pointer in the ocb, it is up to the resource
manager writer to decide what behaviour they want. Perhaps
this server was meant to be a logging server which only
records the last event. In that case, rather than
[hello theregoodbye] we would just have [goodbye] after the
last call, and readers might never have their offset
advanced but would always read from the start of
the buffer.
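
For the “normal” device behaviour above, the read handler would look
roughly like the standard QNX resmgr example, with the server itself
advancing ocb->offset (device_buffer/device_nbytes below are placeholders
for wherever your write handler puts the data):

#include <errno.h>
#include <sys/iofunc.h>
#include <sys/dispatch.h>
#include <sys/neutrino.h>

static char device_buffer[512];   /* filled in by the io_write handler */
static int  device_nbytes = 0;    /* how much valid data is in it      */

int io_read(resmgr_context_t *ctp, io_read_t *msg, RESMGR_OCB_T *ocb)
{
    int status, nbytes, nleft;

    if ((status = iofunc_read_verify(ctp, msg, ocb, NULL)) != EOK)
        return status;
    if ((msg->i.xtype & _IO_XTYPE_MASK) != _IO_XTYPE_NONE)
        return ENOSYS;

    /* The server decides what the ocb offset means: here it behaves
     * like a regular file, so each reader picks up where it left off. */
    nleft = device_nbytes - (int)ocb->offset;
    if (nleft < 0)
        nleft = 0;
    nbytes = ((int)msg->i.nbytes < nleft) ? (int)msg->i.nbytes : nleft;

    if (nbytes > 0) {
        SETIOV(ctp->iov, device_buffer + ocb->offset, nbytes);
        _IO_SET_READ_NBYTES(ctp, nbytes);
        ocb->offset += nbytes;        /* advance this reader's position */
        return _RESMGR_NPARTS(1);
    }

    _IO_SET_READ_NBYTES(ctp, 0);      /* nothing (new) to read */
    return _RESMGR_NPARTS(0);
}

A logging-style server would simply never advance ocb->offset and always
reply with the start of the buffer instead.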

Can the RM read() code get interrupted by Thread2 before
it finishes the read()?

Again, a big “maybe”. This can happen under two conditions:

  1. The server doesn’t use the same attribute structure for
    both opens. Then the read() and the write() will be locking
    different structures, so there could be contention (i.e. if you
    were reading from global resources which weren’t attached to
    the attribute structure).
  2. You do bind to the same attribute, but in your read/write
    handler you unlock the attribute structure by calling
    iofunc_attr_unlock(). You can’t use the attribute again
    until you lock the structure with iofunc_attr_lock().

You have to go out of your way to do these things, but you
should be aware of the behaviour of your server. So the
answer would be “no” in the default case.
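
For completeness, case 2 does come up legitimately: a handler that has to
wait a long time for the hardware may drop the attribute lock so that
other requests on the same attribute aren’t stalled, as long as it
re-locks before returning. Roughly (only a sketch of the unlock/re-lock
shape; wait_for_hardware() is a placeholder and the data handling is
omitted):

#include <errno.h>
#include <sys/iofunc.h>
#include <sys/dispatch.h>

extern void wait_for_hardware(void);   /* placeholder: long blocking wait */

int io_read(resmgr_context_t *ctp, io_read_t *msg, RESMGR_OCB_T *ocb)
{
    int status;

    if ((status = iofunc_read_verify(ctp, msg, ocb, NULL)) != EOK)
        return status;

    /* Give up the attribute lock while waiting on the hardware so other
     * requests on /dev/rmgr aren't stalled.  From here on, this read is
     * no longer atomic with respect to other I/O on the same attribute. */
    iofunc_attr_unlock(ocb->attr);
    wait_for_hardware();
    iofunc_attr_lock(ocb->attr);       /* re-lock before touching attr data */

    /* (data copying omitted in this sketch; just return 0 bytes) */
    _IO_SET_READ_NBYTES(ctp, 0);
    return _RESMGR_NPARTS(0);
}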

Is it guaranteed that I will get back both ‘hello there’
and ‘goodbye’?

There are no guarantees =;-) It is up to you what you do
in your resource manager. Presumably you are storing the
writes somewhere … I think we covered the possible cases
above enough for you to be able to take a look and determine
for yourself. If you aren’t sure then ask again or drop
me a message.

How can I be guaranteed that I don’t drop data?

You should be able to guarantee that you don’t lose data
if you follow the guidelines above in terms of locking.
As for guaranteeing that you don’t run into a situation
where your client reads an empty buffer before you have
put anything there: that is a synchronization and design
issue you need to look at from both the client and server
points of view. Not knowing what you are trying to design,
some suggestions might be:

  1. Have the thread continuously retry (not so good).
  2. Use the iofunc_notify()/select() functions (requires work
    in the server).
  3. Have the server block any reads when there is no data
    available, à la pipe (again requires work in the server;
    a rough sketch follows below).
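
Suggestion 3 is the mirror image of the deferred-reply idea Alain
described for writers: when there is nothing to read, remember the rcvid
and return _RESMGR_NOREPLY so the client’s read() stays blocked, then
reply to the parked reader when data shows up. Roughly (a single blocked
reader only; the names are invented for illustration):

#include <errno.h>
#include <sys/iofunc.h>
#include <sys/dispatch.h>
#include <sys/neutrino.h>

static char device_buffer[512];
static int  device_nbytes = 0;
static int  blocked_reader_rcvid = -1;   /* reader parked waiting for data */

int io_read(resmgr_context_t *ctp, io_read_t *msg, RESMGR_OCB_T *ocb)
{
    int status;

    if ((status = iofunc_read_verify(ctp, msg, ocb, NULL)) != EOK)
        return status;

    if (device_nbytes == 0) {               /* nothing to read yet...      */
        blocked_reader_rcvid = ctp->rcvid;  /* ...park the client; its     */
        return _RESMGR_NOREPLY;             /* read() stays blocked        */
    }

    /* Data is available: reply to the reader ourselves. */
    MsgReply(ctp->rcvid, device_nbytes, device_buffer, device_nbytes);
    device_nbytes = 0;
    return _RESMGR_NOREPLY;
}

/* Called from the io_write handler once new data has been buffered. */
void wake_blocked_reader(void)
{
    if (blocked_reader_rcvid != -1) {
        MsgReply(blocked_reader_rcvid, device_nbytes,
                 device_buffer, device_nbytes);
        device_nbytes = 0;
        blocked_reader_rcvid = -1;
    }
}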

Hope this helps you somewhat.

Thomas