User-level thread package under Neutrino?

I am starting an academic project to evaluate a multiprocessor real-time
scheduling approach. To save time, we are building the scheduler as a thread
package on Neutrino 6.2, which is running on a 4-processor i686 SMP.
Effectively, a single Neutrino process will simulate the entire scheduled
system. Threads within this process will then be used to control the
physical processors (one thread per processor).

For this project, I need to implement user-level task initialization and
context switching code, i.e., the routines that will allow me to control
which processor thread is executing which user-level task. I’ve tried using
some Linux and XINU x86 code as models for these routines, but am
experiencing persistent segmentation violations whenever I try to set up the
stack pointers. These violations appear to stem from memory segmentation
issues and may be related to how Neutrino implements its memory protection.
Does anyone know of source code that already implements a user-level thread
package under Neutrino on an x86 architecture? Such code would certainly be
a more reliable model than the code I’m currently using and would save me a
lot of experimental hacking.

AFAIR, the memory layout of processes in QNX is indeed somewhat different
from the one commonly used in Unix systems. But if you’re trying to piggyback
your design on an existing OS, perhaps setting up the stack pointer directly
is not the best approach. Why not let the OS take care of the low-level
stuff?

You could have your 4 threads running with some high priority and class
FIFO, so they don’t interrupt each other. Block each one on a different
semaphore. Your scheduler thread would be even higher priority, so it can
interrupt any of them. Set up a ‘tick’ timer (with signal notification -
make sure it will be delivered to the scheduler thread) and another
semaphore for the scheduler thread and let it wait on that semaphore. The
scheduler would unblock either due to a timer signal (EINTR) or due to one
of the ‘CPU threads’ making a ‘kernel call’ (that is, posting the scheduler
semaphore followed by a wait on its own). The scheduler would then make its
decision and ‘return to user mode’ by posting the semaphore associated with
the CPU thread it wishes to run, followed by a wait on its own.

– igor

“Philip Holman” <holman@cs.unc.edu> wrote in message
news:bmukgl$5j7$1@inn.qnx.com


The purpose of the project is to achieve comparable performance relative to
what we would get from a built-from-scratch system. I considered designs
like the one described below, but these result in too much activity, and
hence high overhead, during a context switch. The approach being evaluated
is designed for systems with lightweight context switching that switch
often, so using “heavy” switches would make the performance study
uninteresting: poor performance would be guaranteed.

Even with the design given below, I would still need the ability to transfer
the “identity” of the scheduled task into the semaphore-blocked thread via
some form of switching mechanism. Hence, this design does not entirely avoid
the problem. I could avoid manual switching and stack manipulation by making
every task a QNX thread and then having the scheduler migrate and unblock
the threads as they are scheduled. However, this would again lead to far too
much switching overhead.

I have been able to get a switching routine partially working in that it
jumps to the starting routine correctly, but does not set up all registers
properly. This appears to be caused by an incorrect stack frame format. I
have been trying to reverse-engineer the frame layout so far, which is why
it only partially works. If anyone knows a good source of information for
the layout of the GNU CC stack frames, that would be invaluable as well.
Unfortunately, my searches of the net have turned up surprisingly little in
this area.

An existing thread package implementation is still preferable since I have
little confidence in “hack” switching.

“Igor Kovalenko” <kovalenko@attbi.com> wrote in message
news:bmvoj2$r17$1@inn.qnx.com


The source code of GDB and related libraries would be the best place to look
for stack frame info. But if you need that little from the OS, you would
probably be better off with a thinner/simpler open-source kernel.

“Philip Holman” <holman@cs.unc.edu> wrote in message
news:bn1dtj$3fq$1@inn.qnx.com
