Idea: "Universal Control Panel"

Alain Bonnefoy <alain.bonnefoy@icbt.com> wrote:

About GUIs, we are thinking about something more general to
monitor/control our application, something that could be used through
the Internet and more, without necessarily having a QNX station.
I like very much the idea of SWAT, the little web server to configure SAMBA.

Did you use it?

No, I have not. I’ve heard of it.

On another note, there’s nothing saying that the text version of the
control panel front-end can’t generate HTML or just a raw number…

Cheers,
-RK

I think I will go in this direction.

Alain.

Robert Krten wrote:

Robert Rutherford <ruzz@nospamplease.ruzz.com> wrote:

We already do this (in a somewhat cruder way).


All our server processes accept a standard message which allows an external
process to easily change the server’s environment (and also triggers certain
processing based on such changes). For example, if I want to turn on verbose
logging on an already-running server, I just do:


$ setenv -P server_name "DEBUG_ALL=TRUE"


This is an absolute life-saver at times. Your idea is a more general extension
of this. The main benefit I see from your concept is the ability to
determine at run-time the variables that are available for manipulation, and
to see their current value, rather than having to inspect the manual/source
to see what variables are available.


Go for it!


Of course the traditional Unix way of achieving the same (or similar) is
just to have your server re-read its configuration file on receipt of a
signal like SIGHUP.


YUCK! I HATE FREAKIN’ SIGNALS!!! :slight_smile:

The main advantages are, like you said, the runtime thing, but also the fact
that it could be standardized with a QSSL or community buy-in. This then
means that one control panel GUI or text-mode app can analyze any number
of QSSL and third-party products!

The other brainfart I had was to include a “timed monitor” aspect to it,
so that it could be used as a passive “front panel” to your application.
Imagine being able to have a GUI app that refreshes the values of various
variables at fixed intervals – packet counts, errors, number of frames
processed – whatever; it would be a good “insight” into the operation of
the utility.

I’m all over it; it’s my next project. I’ll include it in my
upcoming “The QNX Neutrino Cookbook” book (he said, completely ignoring
the name “Momentics” again) :slight_smile:

Anyone care to write the GUI part once I get the API finalized?
I’m thinking something with a tree-widget on one side to select
processes under /dev/controlpanel, and then a bunch of selectors for
the variables, including a right-click to set a monitoring period, etc.
Heck, it could even periodically log to disk :slight_smile:

Cheers,
-RK


Rob Rutherford


“Robert Krten” <nospam86@parse.com> wrote in message
news:ai78ke$ser$1@inn.qnx.com…

How’s this for an off-the-wall product idea.

A universal control panel for software. What it does is it allows
you to modify variables through a controlled-access resource manager
in running programs.

Why? I’ve often had some kind of 24x7 server running and decided,
“ah crap, I need to turn on debugging for a while”. I really hate
the thought of having to restart the server with the -D flag, run
it for a bit, and then restart it again without the -D flag.

So, here’s a set of APIs that I’m proposing to fix this.

In your server’s main(), you’d put something like:

int main (void)
{
    // initialize your normal stuff here

    control_panel_register ("servername");
    control_panel_register_variable (&debug_flag, "Debug Flag");

    // proceed with server stuff and the rest of your code
}

Then, from a command line or GUI, the “universal control panel” would
simply open “/dev/controlpanel/servername”, and get a list of variables
that are controllable. In this example, it would just come up with
a text string that sez “Debug Flag” and some id number.

From the universal control panel, I could then turn on the debug flag,
turn it off, examine it, etc.

There are a few details here that are of interest as well.

  1. why not have every server do it its own way? (I think that question
    just answered itself)
  2. how do we control access between the implicit resmgr thread and the
    regular server threads to the variables?
    Well, I thought about that, and there are a number of clever solutions:
    a) we don’t. If we “cripple” this idea to just accessing single
    char variables, there’s no issue with atomic operations.
    Even if we don’t limit it to just char variables, we can still
    do a “good enough” approach even without atomic operations.
    Certain operations can be coded to be non-atomic tolerant.
    b) we implement a set of macros that do mutex operations on the
    controlled variables, effectively providing a read/write
    interface. I don’t like the idea of anyone explicitly having
    to worry about the mutex functions, because that just leads to
    trouble, and makes this idea a lot more intrusive to the code
    to implement. I like the idea of just coding up the usual
    “if (debug_flag) { … }” kind of stuff.
  3. for variables that come in and out of scope, there could be an
    “unregister” call as well, which would effectively remove that address
    from the control of the resmgr thread.
  4. why can’t we do this with GDB or the /proc filesystem? You could do
    some of it, but certainly not the synchronization aspects.

The true beauty in this scheme lies in the fact that I have one year to
patent it! Er, I mean, the true beauty in this scheme lies in the fact
that it’s not intrusive to the code; just a few lines up front in the
code. The control_panel_*() functions could even look at an environment
variable to allow them to “opt-out” in case of limited sizes on embedded
systems. And, it’s not just for servers – it can be quite useful for
long-running tasks that aren’t servers.

Thoughts?

Cheers,
-RK


Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at > www.parse.com> .
Email my initials at parse dot com.


Robert Krten <nospam86@parse.com> wrote:

Nope, not that I’m aware of. The solution is easy though – block off all signals
in all your threads, and create a thread that converts signals to pulses :slight_smile:

I was going to call you a smart-ass for that, but damn, that’s a really good
idea. The ability to do that could even be a plus for using QNX for a system,
since it removes the ugly async nature of signals.

Cheers,
Camz.

camz@passageway.com wrote:

Robert Krten <nospam86@parse.com> wrote:
Nope, not that I’m aware of. The solution is easy though – block off all signals
in all your threads, and create a thread that converts signals to pulses :slight_smile:

I was going to call you a smart-ass for that, but damn, that’s a really good
idea. The ability to do that could even be a plus for using QNX for a system,
since it removes the ugly async nature of signals.

I would also like to point out that this is the way to port applications
that use SA_RESTART: block all signals in your service threads, and leave
only one thread to handle signals.

-xtang

Xiaodan Tang <xtang@qnx.com> wrote:
: camz@passageway.com wrote:
:> Robert Krten <nospam86@parse.com> wrote:
:>> Nope, not that I’m aware of. The solution is easy though – block off all signals
:>> in all your threads, and create a thread that converts signals to pulses :slight_smile:

:> I was going to call you a smart-ass for that, but damn, that’s a really good
:> idea. The ability to do that could even be plus for using QNX for a system,
:> with such an ability to remove the ugly async nature of signals.

: I would also like to point out that this is the way to port applications
: that use SA_RESTART: block all signals in your service threads, and leave
: only one thread to handle signals.

I think I should try to find a place to mention this in the docs.


Steve Reid stever@qnx.com
TechPubs (Technical Publications)
QNX Software Systems

Robert Krten <nospam86@parse.com> wrote:

How’s this for an off-the-wall product idea.

A universal control panel for software. What it does is it allows
you to modify variables through a controlled-access resource manager
in running programs.

Progress report: the initial cut of the resmgr library interface is
done. It allows the target program to register itself, and an ls of
the directory works, and we can at least read the variables via “read()”.

write() and devctl() are next.

Anyone want to pre-alpha test? :slight_smile:

Cheers,
-RK

Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.

“Robert Krten” <nospam86@parse.com> wrote in message
news:aiv7vf$od9$1@inn.qnx.com

Robert Krten <nospam86@parse.com> wrote:
How’s this for an off-the-wall product idea.

A universal control panel for software. What it does is it allows
you to modify variables through a controlled-access resource manager
in running programs.

Progress report: the initial cut of the resmgr library interface is
done. It allows the target program to register itself, and an ls of
the directory works, and we can at least read the variables via “read()”.

write() and devctl() are next.

Anyone want to pre-alpha test? :slight_smile:

Hi Robert,
I know I’m a little late to the thread on this one, but what you are
suggesting is very much like a real-time database, which is a fairly
common part of many process control systems. A process registers itself
and its variables with a central server which then makes them accessible
through some kind of API. We make one as well - Cascade DataHub. The
advantage of looking at the central process as a data store is that you
can construct an efficient mechanism for alerting programs whenever any
variable changes, which is a refinement of the idea you are suggesting.
This even works across a network in QNX with no extra effort. Properly
constructed, you also don’t have the issue of synchronizing an external
data change with a “safe” point in the code. How do you propose to handle
the case where an external mechanism changes a variable in the middle of
a computation?

The next logical step, of course, is to not just limit yourself to
variables, but also to code blocks. In a 24/7 application, wouldn’t it
be nice to alter the code itself if necessary without stopping the
application? We also do this
on a regular basis - take a look at the Gamma language. When you combine
the DataHub (call it a universal control panel if you like) with a control
program that can hot-swap code at runtime, you have a system that really
can run through virtually any kind of maintenance activity without a
shutdown. If you embed Gamma in an application, you can write programs
that automate and re-code your applications at runtime.

I know this is not “universal” in the sense that some people are
thinking - you do need to use an API in one case, and an interpreted
language in the other, but no solution other than Rennie’s has come
substantially closer.

Just my $0.02.

Cheers,
Andrew

Andrew Thomas <andrew@cogent.ca> wrote:

“Robert Krten” <nospam86@parse.com> wrote in message
news:aiv7vf$od9$1@inn.qnx.com…
Robert Krten <nospam86@parse.com> wrote:
How’s this for an off-the-wall product idea.

A universal control panel for software. What it does is it allows
you to modify variables through a controlled-access resource manager
in running programs.

Progress report: the initial cut of the resmgr library interface is
done. It allows the target program to register itself, and an ls of
the directory works, and we can at least read the variables via “read()”.

write() and devctl() are next.

Anyone want to pre-alpha test? :slight_smile:

Hi Robert,
I know I’m a little late to the thread on this one, but what you are

Not too late at all… it’s still in development.

suggesting
is very much like a real-time database, which is a fairly common part of
many process control systems. A process registers itself and its variables
with a central server which then makes them accessible through some kind
of API. We make one as well - Cascade DataHub. The advantage of

Cool. Out of curiosity, is yours open source? I’m not asking to be rude
or anything, but my main purpose in doing this (apart from the “cool” aspect
of it) is to be able to include the source and a blow-by-blow “why did I do
this or that” in the upcoming book…

looking at the central process as a data store is that you can construct an
efficient mechanism for alerting programs whenever any variable changes,
which is a refinement of the idea you are suggesting. This even works

That’s part of the “exercises left to the reader”. We can have “virtual”
variables, which means that they are “calculated on demand” when the
read() is issued, or we can have “control variables” which are validated
and cause a thread to unblock or be otherwise notified if the variable
changes via write().

across a network in QNX with no extra effort. Properly constructed,

/net/nodename/dev/mcp/progname/variable :slight_smile:

you also don’t have the issue of synchronizing an external data change with
a “safe” point in the code. How do you propose to handle the case where
an external mechanism changes a variable in the middle of a computation?

Optional mutex. The idea is that if you want to “live on the edge”, you can
use the variables in an unprotected manner. The library will always use the
mutex, but your application-side code can choose to ignore it if it wants to
run the risk of a data collision. For something as simple as a “debug on/off”
variable, we really don’t care – but for other things, yes, absolutely, you’ll
want to take precautions.

The next logical step, of course, is to not just limit yourself to
variables, but
also to code blocks. In a 24/7 application, wouldn’t it be nice to alter
the
code itself if necessary without stopping the application? We also do this
on a regular basis - take a look at the Gamma language. When you combine
the DataHub (call it a universal control panel if you like) with a control
program that can hot-swap code at runtime, you have a system that really
can run through virtually any kind of maintenance activity without a
shutdown. If you embed Gamma in an application, you can write programs
that automate and re-code your applications at runtime.

But but but… “Gamma is not C”, right? :slight_smile: I’m not trying to dismiss
the idea offhand, but there’s a whack of code already written in C,
so people are generally loath to change (if it ain’t broken…)

The way I’d approach 24/7 HA applications is to use the pathname space,
have the “in-service upgrade” module register behind the current in-service
module, suck out the deltas, and kill the in-service module whenever it wants.
The client gets a hit, but then retries its connection, connects to the new-
and-improved module, and everyone is happy… For a true 24/7 HA system,
the HA part is not simply welded on at the end, it has to be designed in,
which means that you’ve already thought about clients retrying their connections,
restoring their states, and so on.

I know this is not “universal” in the sense that some people are thinking -
you do need to use an API in one case, and an interpreted language in
the other, but no solution other than Rennie’s has come substantially
closer.

Rennie’s idea doesn’t allow virtual variables, and imposes a “control
variables must be global” requirement (which isn’t outrageous).

The amount of intrusion into a program is specifically designed to be minimal,
therefore, with the open source nature of it, I’m hoping for some adoption.
I’ll certainly be using it in my stuff, YMMV :slight_smile:

Just my $0.02.

Glad to hear from you, Andrew, always good to have input!

Cheers,
-RK


Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.

“Robert Krten” <nospam83@parse.com> wrote in message
news:aj9n8e$af5$1@inn.qnx.com

Andrew Thomas <andrew@cogent.ca> wrote:
suggesting is very much like a real-time database, which is a fairly
common part of many process control systems. A process registers itself
and its variables with a central server which then makes them accessible
through some kind of API. We make one as well - Cascade DataHub. The
advantage of

Cool. Out of curiosity, is yours open source? I’m not asking to be rude
or anything, but my main purpose in doing this (apart from the “cool” aspect
of it) is to be able to include the source and a blow-by-blow “why did I do
this or that” in the upcoming book…

No, Cascade DataHub is not open source. It’s also not QNX6-specific,
as it compiles on QNX4 and Linux as well.

looking at the central process as a data store is that you can construct an
efficient mechanism for alerting programs whenever any variable changes,
which is a refinement of the idea you are suggesting. This even works

That’s part of the “exercises left to the reader”. We can have “virtual”
variables, which means that they are “calculated on demand” when the
read() is issued, or we can have “control variables” which are validated
and cause a thread to unblock or be otherwise notified if the variable
changes via write().

Efficient asynchronous notification of data changes is the hard part.
Writing a resource manager that passively manages a block of shared
memory is easy.

For virtual variables that are “calculate on demand”, you will need to
have a) some kind of built-in scripting language (see Gamma), or
b) some kind of messaging system that allows the reader of a variable
to block waiting on the resource manager while the resource manager
asks the owner of the variable what its value is. That implies that the
owner of the variable responds to well-defined messages. This can
either be simplistic, or an instance of a scripting language (again, see
Gamma).

across a network in QNX with no extra effort. Properly constructed,

/net/nodename/dev/mcp/progname/variable > :slight_smile:

That does not handle notification of change across the network. It also
means that the user of a variable knows on which node the variable
is located.

Optional mutex. The idea is that if you want to “live on the edge”, you can
use the variables in an unprotected manner. The library will always use the
mutex, but your application-side code can choose to ignore it if it wants to
run the risk of a data collision. For something as simple as a “debug on/off”
variable, we really don’t care – but for other things, yes, absolutely, you’ll
want to take precautions.

This is very dangerous. If any client to your resource manager abuses its
mutex (locks it but never unlocks it) then both the resource manager and
any other clients can be blocked. You can use a thread pool in the
resource manager, but you can still block them all. You can unblock
clients after a certain period of time, but you now have both a race
condition and a potential source of long delay. A resource manager
that relies on the “good” behaviour of its clients is an unstable
design.

The next logical step, of course, is to not just limit yourself to
variables, but also to code blocks. In a 24/7 application, wouldn’t it be
nice to alter the code itself if necessary without stopping the
application? We also do this on a regular basis - take a look at the
Gamma language. When you combine the DataHub (call it a universal control
panel if you like) with a control program that can hot-swap code at
runtime, you have a system that really can run through virtually any kind
of maintenance activity without a shutdown. If you embed Gamma in an
application, you can write programs that automate and re-code your
applications at runtime.

But but but… “Gamma is not C”, right? :slight_smile: I’m not trying to dismiss
the idea offhand, but there’s a whack of code already written in C,
so people are generally loath to change (if it ain’t broken…)

No, Gamma is not “C”. That’s the whole point. You cannot do this
at all in C. It simply is not possible without an interpreted language.
The sad fact is that control programs often follow a progression:

  1. Do simple control
  2. parameterize the simple control with configuration files of the
    form: variable = value
  3. Add an ad-hoc conditional mechanism to the config file
  4. Add ad-hoc math functions, typically +, -, *, / to the config
    file
  5. Add visibility to the variable = value to the config file for use
    in the math functions
  6. Add parameterless subroutines to the config file

Before long you have an ad-hoc language in your config file in an
effort to move the control logic out of C. The trouble is, this
ad-hoc language has no parameterized subroutines, no local
variables, limited math support, low extensibility, terrible run-
time performance, no recursiveness, and no well-defined grammar.
It becomes an unmaintainable hodge-podge. Take a look at
Wonderware’s Intouch if you need a commercialized example.

If you’re really looking for a way to twiddle variables in an
existing C program, consider Rennie’s method. If you
are starting a new control application in C that requires any
kind of flexibility, consider a psychiatrist.

The way I’d approach 24/7 HA applications is to use the pathname space,
have the “in-service upgrade” module register behind the current in-service
module, suck out the deltas, and kill the in-service module whenever it wants.
The client gets a hit, but then retries its connection, connects to the
new-and-improved module, and everyone is happy… For a true 24/7 HA system,
the HA part is not simply welded on at the end, it has to be designed in,
which means that you’ve already thought about clients retrying their
connections, restoring their states, and so on.

This, as opposed to just using an environment that allows you
to update the code in situ? Why would you prefer to kill off the
application and hope that you remembered to grab all important
state and hope that the clients are able to recover their
connections well? I agree that an HA system plans for upgrades,
but why make it difficult on yourself? Use a tool that helps you.

I know this is not “universal” in the sense that some people are
thinking - you do need to use an API in one case, and an interpreted
language in the other, but no solution other than Rennie’s has come
substantially closer.

Rennie’s idea doesn’t allow virtual variables.

I’m not convinced that yours does either, though. How do you
see it being implemented?

and imposes a “control variables must be global” requirement
(which isn’t outrageous).

And how do you propose to perform in-situ modification of stack
variables? Are you thinking that every time a function enters it
will tell the resource manager where its variables are this time,
and then remove them at the end of the function? What happens
if the function re-enters? What happens if the program execs?
What happens if the program longjmps out of the function? In
practice I think you will find that control variables will be
global, even if that’s not an absolute requirement of the design.

The amount of intrusion into a program is specifically designed to be
minimal, therefore, with the open source nature of it, I’m hoping for
some adoption. I’ll certainly be using it in my stuff, YMMV :slight_smile:

Your original goal, to set the debug flag in your running C
application, seems simple enough. I was really trying to
point out that the logical thought progression that you are
following is taking you into a very well-explored realm. The
problems that you will encounter have been solved, and the
depth to which they’ve been addressed exceeds what you
appear to be planning.

Besides, if you are trying to keep the intrusion minimal,
why not use Rennie’s model? Why not write a resource
manager that simply grabs the symbol table of an executable
and gives other programs a chance to modify those symbols?
Then that “whack of code already written in C” would not
have to be modified and re-linked with your API, and people
would not have to run your resource manager.

Cheers,
Andrew

Andrew Thomas <andrew@cogent.ca> wrote:

“Robert Krten” <nospam83@parse.com> wrote in message
news:aj9n8e$af5$1@inn.qnx.com…
Andrew Thomas <andrew@cogent.ca> wrote:
suggesting is very much like a real-time database, which is a fairly
common part of many process control systems. A process registers itself
and its variables with a central server which then makes them accessible
through some kind of API. We make one as well - Cascade DataHub. The
advantage of

Cool. Out of curiosity, is yours open source? I’m not asking to be rude
or anything, but my main purpose in doing this (apart from the “cool” aspect
of it) is to be able to include the source and a blow-by-blow “why did I do
this or that” in the upcoming book…

No, Cascade DataHub is not open source. It’s also not QNX6-specific,
as it compiles on QNX4 and Linux as well.

looking at the central process as a data store is that you can construct an
efficient mechanism for alerting programs whenever any variable changes,
which is a refinement of the idea you are suggesting. This even works

That’s part of the “exercises left to the reader”. We can have “virtual”
variables, which means that they are “calculated on demand” when the
read() is issued, or we can have “control variables” which are validated
and cause a thread to unblock or be otherwise notified if the variable
changes via write().

Efficient asynchronous notification of data changes is the hard part.

for (i = 0; i < num_clients; i++) {
    MsgDeliverEvent (clientinfo [i].coid, clientinfo [i].event);
}

?? :slight_smile:

Writing a resource manager that passively manages a block of shared
memory is easy.

For virtual variables that are “calculate on demand”, you will need to
have a) some kind of built-in scripting language (see Gamma), or
b) some kind of messaging system that allows the reader of a variable
to block waiting on the resource manager while the resource manager
asks the owner of the variable what its value is. That implies that the
owner of the variable responds to well-defined messages. This can
either be simplistic, or an instance of a scripting language (again, see
Gamma).

Umm… maybe I’m missing something obvious here, but couldn’t I just have:

devctl (fd, DEVCTL_GET_VIRTUAL_VARIABLE, &variable_info_block);

on the client side, and

mcp_register_read_access_callback (&variable, &callback);

on the server side, with the “callback” function doing the computation, and
when it returns we assume the value is updated?

across a network in QNX with no extra effort. Properly constructed,

/net/nodename/dev/mcp/progname/variable :slight_smile:

That does not handle notification of change across the network. It also
means that the user of a variable knows on which node the variable
is located.

Not necessarily. See “virtual variables”. If you really insist, you can have
the virtual variable “thing” manage the virtual variables across your network,
or you can have your variables symlinked to the appropriate nodenames, aiding
in HA, whereby if the node faults, you just change the symlink…

Optional mutex. The idea is that if you want to “live on the edge”, you can
use the variables in an unprotected manner. The library will always use the
mutex, but your application-side code can choose to ignore it if it wants to
run the risk of a data collision. For something as simple as a “debug on/off”
variable, we really don’t care – but for other things, yes, absolutely, you’ll
want to take precautions.

This is very dangerous. If any client to your resource manager abuses its
mutex (locks it but never unlocks it) then both the resource manager and
any other clients can be blocked. You can use a thread pool in the
resource manager, but you can still block them all. You can unblock
clients after a certain period of time, but you now have both a race
condition and a potential source of long delay. A resource manager
that relies on the “good” behaviour of its clients is an unstable
design.

The client only issues a devctl() or a read(); the resource manager is
responsible for the mutex. The other side of the mutex (i.e., the program
using the MCP) could be forced to access the mutex through a “well tested”
API library.
So, instead of:

lock_the_mutex();
a = value;
unlock_the_mutex();

we could replace this with the less-dangerous:

copy_the_value (&a, &value);

which atomically locks the mutex, protecting the resource manager thread
from some “bad” behaviour of the client. You gotta put your trust somewhere.

The next logical step, of course, is to not just limit yourself to
variables, but also to code blocks. In a 24/7 application, wouldn’t it be
nice to alter the code itself if necessary without stopping the
application? We also do this on a regular basis - take a look at the
Gamma language. When you combine the DataHub (call it a universal control
panel if you like) with a control program that can hot-swap code at
runtime, you have a system that really can run through virtually any kind
of maintenance activity without a shutdown. If you embed Gamma in an
application, you can write programs that automate and re-code your
applications at runtime.

But but but… “Gamma is not C”, right? :slight_smile: I’m not trying to dismiss
the idea offhand, but there’s a whack of code already written in C,
so people are generally loath to change (if it ain’t broken…)

No, Gamma is not “C”. That’s the whole point. You cannot do this
at all in C. It simply is not possible without an interpreted language.
The sad fact is that control programs often follow a progression:

  1. Do simple control
  2. parameterize the simple control with configuration files of the
    form: variable = value
  3. Add an ad-hoc conditional mechanism to the config file
  4. Add ad-hoc math functions, typically +, -, *, / to the config
    file
  5. Add visibility to the variable = value to the config file for use
    in the math functions
  6. Add parameterless subroutines to the config file

    Before long you have an ad-hoc language in your config file in an
    effort to move the control logic out of C. The trouble is, this
    ad-hoc language has no parameterized subroutines, no local
    variables, limited math support, low extensibility, terrible run-
    time performance, no recursiveness, and no well-defined grammar.
    It becomes an unmaintainable hodge-podge. Take a look at
    Wonderware’s Intouch if you need a commercialized example.

I am not familiar with Wonderware’s products, nor with other products that
have this particular pathology as you describe it. I’m not denying that such
things exist. :slight_smile:

All I’m trying to do is come up with a simple example of a resource manager
that solves something useful. Once you have the abstraction of controlling an
arbitrary (though granted, well-instrumented) C program using the control panel,
you are certainly welcome to use that abstraction at a higher level using any
number of scripting languages.

Let’s focus back on the example that I had chosen as justification for writing
this (I’m not sure if I stated it explicitly or not): a long-running task
that I wish to “poke” periodically, to either obtain status from or change the
operating behaviour of. For example, a high-resolution graphics raytrace
application. I’m not about to recode POVray in Gamma – that’s just plain out
of the question. I might tweak a global variable or two using the universal
control panel. That’s the main point I was trying to make with the concept of
the control panel. Perhaps the word “universal” was taken in its technical
meaning and not its marketing meaning :slight_smile:

If you’re really looking for a way to twiddle variables in an
existing C program, consider Rennie’s method. If you

Rennie’s method does not allow arbitrary twiddling of stack-based variables,
nor virtual variables, nor verification of write values – all things that
can be incorporated with little or no impact into EXISTING (perhaps HUGE)
installed codebases of C.

are starting a new control application in C that requires any
kind of flexibility, consider a psychiatrist.

:slight_smile:
Alas, not all projects are new projects.

The way I’d approach 24/7 HA applications is to use the pathname space:
have the “in-service upgrade” module register behind the current in-service
module, suck out the deltas, and kill the in-service module whenever it wants.
The client gets a hit, but then retries its connection, connects to the
new-and-improved module, and everyone is happy… For a true 24/7 HA system,
the HA part is not simply welded on at the end; it has to be designed in,
which means that you’ve already thought about clients retrying their
connections, restoring their states, and so on.
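The client-side retry that this scheme relies on can be sketched in ordinary POSIX C. Everything here (the function name, the retry policy) is illustrative, not part of any real API:

```c
/* Hypothetical helper, not part of any real API: the client treats a
   vanished server as transient, backs off, and reopens -- picking up
   the new-and-improved module once it registers the pathname. */
#include <fcntl.h>
#include <time.h>
#include <unistd.h>

int open_with_retry(const char *path, int tries, long delay_ms)
{
    while (tries-- > 0) {
        int fd = open(path, O_RDONLY);
        if (fd != -1)
            return fd;      /* connected (possibly to the upgraded module) */
        /* server is being swapped out: wait a little and retry */
        struct timespec ts = { 0, delay_ms * 1000000L };
        nanosleep(&ts, NULL);
    }
    return -1;              /* upgrade window exceeded our retry budget */
}
```

A client would call something like `open_with_retry("/dev/mcp/progname/variable", 50, 100)` instead of a bare open(), riding through the window where the old module has died and the new one hasn’t yet taken over the name.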

This, as opposed to just using an environment that allows you
to update the code in situ? Why would you prefer to kill off the
application and hope that you remembered to grab all important
state and hope that the clients are able to recover their
connections well? I agree that a HA system plans for upgrades,
but why make it difficult on yourself? Use a tool that helps you.

The argument of “hoping that you remembered to grab all important state”
and “hoping that the clients are able to recover” is very similar to the
argument against interpreted languages in general of “hoping you tested
all the codepaths” and “hoping that the data is compatible with the new
code” and so on. If we’re talking HA, then basically money is no concern. :slight_smile:
I mean this half-jokingly. In an HA system you want 100% code coverage, or
as close as possible thereto, which I believe means testing the entire
program, not just the patch. At that point, whether the program you
are testing is in language A or language B is pretty much a management
decision, isn’t it? HA, and this is a concept most people don’t get,
is tied 100% into the concept of restartability. If you can’t figure
out how to restart a crashed system to minimize your MTTR, then you’re
screwed. And since all programs crash, whether interpreted or compiled,
there doesn’t seem to be much difference between the nature of the
patch applied. I’d almost rather kill off a version and restart
a new one, because then I’d be assured that I have “everything I need”
to recover from a crash. I’ve seen too many systems that run along
for a long time, with little patches here and there applied at runtime,
until suddenly it’s time to restart from cold-start, and so many things
break. By forcing a restart-from-cold-start, and incorporating that
into the HA testing cycle, I believe you’ll get a more robust product in
the end…

I know this is not “universal” in the sense that some people are
thinking -
you do need to use an API in one case, and an interpreted language in
the other, but no solution other than Rennie’s has come substantially
closer.

Rennie’s idea doesn’t allow virtual variables.

I’m not convinced that yours does either, though. How do you
see it being implemented?

mcp_register_read_access_callback (&variable, &callback);
(see above).

and imposed a “control variables must be global” requirement
(which isn’t outrageous).

And how do you propose to perform in-situ modification of stack
variables? Are you thinking that every time a function enters it
will tell the resource manager where its variables are this time,
and then remove them at the end of the function? What happens

That’s the only possible way of having that work. I’m not saying
that this will be a high-runner case, but it’s a nice-to-have,
and doesn’t differ significantly from malloc’d global variables,
which are transient as well…

if the function re-enters? What happens if the program execs?
What happens if the program longjmps out of the function? In
practice I think you will find that control variables will be
global, even if that’s not an absolute requirement of the design.

I tend to agree with you there – I just didn’t want to prevent it
in my design. You’ll also notice I agreed that having “control variables
as global” as a requirement was not outrageous.

The amount of intrusion into a program is specifically designed to be
minimal; that, combined with the open-source nature of it, makes me
hopeful for some adoption.
I’ll certainly be using it in my stuff, YMMV :slight_smile:

Your original goal, to set the debug flag in your running C
application, seems simple enough. I was really trying to

:slight_smile:

point out that the logical thought progression that you are
following is taking you into a very well-explored realm. The
problems that you will encounter have been solved, and the
depth to which they’ve been addressed exceeds what you
appear to be planning.

That’s a double-edged sword – I’ve seen them addressed to much
greater depths as well, but using C and various other methods.
My main goal is to show that it can be done using C – what you
choose to do with it at a higher level is up to you. You could
simply expose all this stuff for “massive public domain C programs”
and then control it with an interpreted language. :slight_smile:

Besides, if you are trying to keep the intrusion minimal,
why not use Rennie’s model? Why not write a resource
manager that simply grabs the symbol table of an executable
and gives other programs a chance to modify those symbols?

Not everything has a symbol table, or wants to have one :frowning:

Then that “whack of code already written in C” would not
have to be modified and re-linked with your API, and people
would not have to run your resource manager.

Then they’d be welcome to use Rennie’s approach :slight_smile: :slight_smile:

Cheers,
-RK

Cheers,
Andrew


Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.

“Robert Krten” <nospam83@parse.com> wrote in message
news:ajf3fe$a62$1@inn.qnx.com

Andrew Thomas <andrew@cogent.ca> wrote:
“Robert Krten” <nospam83@parse.com> wrote in message
news:aj9n8e$af5$1@inn.qnx.com…
Andrew Thomas <andrew@cogent.ca> wrote:
That’s part of the “exercises left to the reader”. We can have
“virtual”
variables, which means that they are “calculated on demand” when the
read() is issued, or we can have “control variables” which are
validated
and cause a thread to unblock or be otherwise notified if the variable
changes via write().

Efficient asynchronous notification of data changes is the hard part.

for (i = 0; i < num_clients; i++) {
    MsgDeliverEvent (clientinfo[i].coid, &clientinfo[i].event);
}

?? :slight_smile:

And when there are hundreds or thousands of different variables, strewn
over many processes, you are going to maintain an MxN table of event
structures in the server, and then expect the client to map between event
IDs and the variable in question?

How does your server discover that a client has altered his own data,
such that another client can be informed of that?

How does your server deal with the case where a client alters its own
data, and also wants to be notified of this change? You need to
differentiate this case from the case where a different client alters the
data or you could get a self-perpetuating notification cycle between the
server and one or more clients.

Writing a resource manager that passively manages a block of shared
memory is easy.

For virtual variables that are “calculated on demand”, you will need to
have a) some kind of built-in scripting language (see Gamma), or
b) some kind of messaging system that allows the reader of a variable
to block waiting on the resource manager while the resource manager
asks the owner of the variable what its value is. That implies that the
owner of the variable responds to well-defined messages. This can
either be simplistic, or an instance of a scripting language (again, see
Gamma).

Umm… maybe I’m missing something obvious here, but, couldn’t I just
have:

devctl (fd, DEVCTL_GET_VIRTUAL_VARIABLE, &variable_info_block);

on the client side, and

mcp_register_read_access_callback (&variable, &callback);

on the server side, with the “callback” function doing the computation, and
when it returns we assume the value is updated?

I don’t know. What does mcp_register_read_access_callback do? If it
causes client-side code to be executed in the server’s context, I
suppose so, but that seems unlikely. If it causes client-side code to
execute
in the client’s context, then the client has to provide a message handler
for
this, and your server performs a blocking call. If it causes server-side
code
to execute in the server’s context, then you are obliged to recompile your
server every time you want to add a virtual variable. None of these is
a good situation. Can you explain more of what you were thinking?

across a network in QNX with no extra effort. Properly constructed,

/net/nodename/dev/mcp/progname/variable :slight_smile:

That does not handle notification of change across the network. It also
means that the user of a variable knows on which node the variable
is located.

Not necessarily. See “virtual variables”. If you really insist, you can have
the virtual variable “thing” manage the virtual variables across your network,
or you can have your variables symlinked to the appropriate nodenames, aiding
in HA, whereby if the node faults, you just change the symlink…

I have always been very skeptical of symlinks as a way to manage fail-over.

The client only issues a devctl() or a read(); the resource manager is
responsible for the mutex. The other side of the mutex (i.e., the program
using the MCP) could be forced to access the mutex through a “well tested”
API library.
So, instead of:

lock_the_mutex();
a = value;
unlock_the_mutex();

we could replace this with the less-dangerous:

copy_the_value (&a, &value);
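A minimal sketch of what that “well tested” helper could look like, assuming a single pthreads mutex guards the control variables (the explicit length parameter is my addition; the two-argument form above implies the size):

```c
#include <pthread.h>
#include <string.h>

/* One mutex guarding all control variables -- granularity is illustrative. */
static pthread_mutex_t mcp_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Atomic copy: the lock/copy/unlock sequence the API hides from the caller. */
void copy_the_value(void *dst, const void *src, size_t len)
{
    pthread_mutex_lock(&mcp_mutex);
    memcpy(dst, src, len);
    pthread_mutex_unlock(&mcp_mutex);
}
```

so `a = value` becomes `copy_the_value(&a, &value, sizeof a);`.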

So now the non-intrusive facility replaces all variable accesses with a
function call that includes a mutex lock. How long will it take somebody
to realize the gross inefficiency of locking and unlocking the mutex many
times when performing a long series of calculations on many variables, and
bypass the well tested API?

No, Gamma is not “C”. That’s the whole point. You cannot do this
at all in C. It simply is not possible without an interpreted
language.
The sad fact is that control programs often follow a progression:

  1. Do simple control
  2. parameterize the simple control with configuration files of the
    form: variable = value
  3. Add an ad-hoc conditional mechanism to the config file
  4. Add ad-hoc math functions, typically +, -, *, / to the config
    file
  5. Add visibility to the variable = value to the config file for use
    in the math functions
  6. Add parameterless subroutines to the config file

    Before long you have an ad-hoc language in your config file in an
    effort to move the control logic out of C. The trouble is, this
    ad-hoc language has no parameterized subroutines, no local
    variables, limited math support, low extensibility, terrible run-
    time performance, no recursiveness, and no well-defined grammar.
    It becomes an unmaintainable hodge-podge. Take a look at
    Wonderware’s Intouch if you need a commercialized example.

I am not familiar with Wonderware’s products, nor with other products that
exhibit this particular pathology as you describe it. I’m not denying that
such things may exist. :slight_smile:

This is not a pathology that’s limited to commercial products. I’ve seen it
happen in university labs and internal programs at companies numerous
times.

Let’s focus back on the example that I had chosen as justification for
writing this – (I’m not sure if I stated it explicitly or not): a
long-running task that I wish to “poke” periodically, either to obtain its
status or to change its operating behaviour. For example, a high-resolution
graphics raytracing application. I’m not about to recode POVray in Gamma –
that’s just plain out of the question.

I understand this, and agree with you that recoding a compute-intensive
application like POVray in Gamma is not sensible. Though, you could:

  1. embed Gamma within POVray such that incoming messages in the
    scripting language are executed at well-known times, giving you much
    greater flexibility than simply tweaking variables.
  2. embed POVray as a C extension to Gamma, so that you can run the
    POVray computation as an idle function, while still maintaining the
    ability to examine and alter variables and scripts as the POVray
    computation runs at full speed as optimized compiled code.
  3. Have POVray register variables with the Cascade DataHub such
    that it keeps the DataHub up to date as it computes, and periodically
    checks for incoming messages from the DataHub for variable changes.
    These messages arrive at well-defined points in the processing and do
    not suffer from the behind-your-back syndrome of direct memory
    writes.

Rennie’s method does not allow arbitrary twiddling of stack-based variables,
nor virtual variables, nor verification of write values – all things that
can be incorporated with little or no impact into EXISTING (perhaps HUGE)
installed codebases of C.

It requires the addition of an API and a new resource manager, along
with minor recoding of any portion of the application that accesses
any variables that it publishes.
If you used Cascade DataHub, you would add an API and a server,
and simply add calls at strategic locations with no recoding of existing
C code.
The direct-to-memory twiddling mechanism seems cool, but it also
looks like it would be more work to add it to an existing code base
than something like Cascade DataHub would be.

Rennie’s idea doesn’t allow virtual variables.

I’m not convinced that yours does either, though. How do you
see it being implemented?

mcp_register_read_access_callback (&variable, &callback);
(see above).

There were no more details above. Can you elaborate as to
how you see this working?

if the function re-enters? What happens if the program execs?
What happens if the program longjmps out of the function? In
practice I think you will find that control variables will be
global, even if that’s not an absolute requirement of the design.

I tend to agree with you there – I just didn’t want to prevent it
in my design. You’ll also notice I agreed that having “control variables
as global” as a requirement was not outrageous.

So just prohibit reentrancy and non-local jumps by convention.

Out of curiosity, what happens if a function forgets to unregister its
local variables? Doesn’t that expose its stack to damage by the
server? I’ll bet that would be tough to debug.

Besides, if you are trying to keep the intrusion minimal,
why not use Rennie’s model? Why not write a resource
manager that simply grabs the symbol table of an executable
and gives other programs a chance to modify those symbols?

Not everything has a symbol table, or wants to have one :frowning:

I suppose that you could read the symbol table from a map
file instead of from the executable. Then you could produce
a reduced map file with some simple grep or sed scripts after
a compilation and treat them as initialization input to the
Rennie-model program. Not every program needs to have a
symbol table, just a map file. Heck, you could have the
program write its own map file at startup so you would not
even have to rely on your maps being up to date from your
build.
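The “write its own map file at startup” variant could look like this in C; the table, file format, and names are all invented for illustration:

```c
#include <stdio.h>
#include <stddef.h>

int debug_flag = 0;   /* the kind of global you'd want to poke at runtime */
double gain = 1.5;

/* The program's own "reduced map file": name, address, size per variable. */
static const struct { const char *name; void *addr; size_t size; } self_map[] = {
    { "debug_flag", &debug_flag, sizeof debug_flag },
    { "gain",       &gain,       sizeof gain       },
};

/* Write the map at startup so a Rennie-model resource manager can read it
   instead of digging the symbol table out of the executable. */
int write_self_map(const char *path)
{
    FILE *f = fopen(path, "w");
    if (f == NULL)
        return -1;
    for (size_t i = 0; i < sizeof self_map / sizeof self_map[0]; i++)
        fprintf(f, "%s %p %zu\n",
                self_map[i].name, self_map[i].addr, self_map[i].size);
    return fclose(f);
}
```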

Cheers,
Andrew

Rennie Allen wrote:

Robert Krten wrote:


How’s this for an off-the-wall product idea.


A universal control panel for software. What it does is it allows
you to modify variables through a controlled-access resource manager
in running programs.


Useful idea, however, the fact that the writer of the product to be
controlled must be aware of the API, and incorporate it into their
program, makes it unlikely to be truly “universal”. I wrote an almost
“universal control panel” for QNX4/Photon using the qnx_debug() interface.
With it I could attach to a program, get a list of symbols, and
read/write a variable (the requirement was that the executable have symbol
information). Basically a debugger that doesn’t halt the program, and
with a limited interface (read/write variables). No special code required
in the target program. It was almost universal, since it still required
that symbol information be present, and controllable variables had to be
global static (nevertheless, I found it very handy).

For the product I currently work on we use the signal approach (which I
dislike as much as you do - wasn’t my idea).

Anyway, bottom line is I like the idea, but it would only be highly
successful if QSSL promoted it (and had it in the docs). Of course, a
nice little Photon app (available in a default installation under
“utilities”), that allowed you to get an “inventory” of controllable
symbols would be mandatory.

As a potentially more accessible alternative, you might consider
pitching your idea to the eQip Project ( http://www.qnxzone.com/ipaq/ ).
They are creating a complete open-source system for PDAs and handhelds.
If you could convince them to adopt your APIs, it would give you an
excellent proof-of-concept/demo.

Andrew Thomas <andrew@cogent.ca> wrote:

“Robert Krten” <nospam83@parse.com> wrote in message
news:ajf3fe$a62$1@inn.qnx.com…
Andrew Thomas <andrew@cogent.ca> wrote:
“Robert Krten” <nospam83@parse.com> wrote in message
news:aj9n8e$af5$1@inn.qnx.com…
Andrew Thomas <andrew@cogent.ca> wrote:
That’s part of the “exercises left to the reader”. We can have
“virtual”
variables, which means that they are “calculated on demand” when the
read() is issued, or we can have “control variables” which are
validated
and cause a thread to unblock or be otherwise notified if the variable
changes via write().

Efficient asynchronous notification of data changes is the hard part.

for (i = 0; i < num_clients; i++) {
    MsgDeliverEvent (clientinfo[i].coid, &clientinfo[i].event);
}

?? :slight_smile:

And when there are hundreds or thousands of different variables, strewn
over many processes, you are going to maintain an MxN table of event
structures in the server, and then expect the client to map between event

No… it’s a distributed server – each program that uses the control
panel is a server, responsible for its own namespace, etc. Thus, the
clients of that server that want notification register with that server
only…

How does your server discover that a client has altered his own data,
such that another client can be informed of that?

The control panel will need to be informed. It’s an optional thing.
Just like we had the “atomic read” for the variable (below) that
guaranteed that the mutex happened, we can have a “notify variable changed”
function (which can even be wrapped in an atomic write, to kill two birds
with one stone).
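Sketching that combined “atomic write plus notify” in plain C (all mcp_* names and the client table are hypothetical; on QNX the commented-out line would be the real MsgDeliverEvent() delivery):

```c
#include <pthread.h>
#include <string.h>
#include <stddef.h>

#define MCP_MAX_CLIENTS 32

/* One entry per client that registered for change notification. */
struct mcp_client { int coid; int registered; /* struct sigevent event; */ };

static pthread_mutex_t mcp_lock = PTHREAD_MUTEX_INITIALIZER;
static struct mcp_client mcp_clients[MCP_MAX_CLIENTS];
static int mcp_deliveries;  /* stands in for MsgDeliverEvent() in this sketch */

/* Atomic write + notify: update under the mutex, then tell every
   registered client except the one that made the change. */
void mcp_write_and_notify(void *var, const void *val, size_t len, int writer_coid)
{
    pthread_mutex_lock(&mcp_lock);
    memcpy(var, val, len);
    for (int i = 0; i < MCP_MAX_CLIENTS; i++) {
        if (!mcp_clients[i].registered || mcp_clients[i].coid == writer_coid)
            continue;   /* skip the writer: avoids a self-notification cycle */
        /* MsgDeliverEvent(mcp_clients[i].coid, &mcp_clients[i].event); */
        mcp_deliveries++;
    }
    pthread_mutex_unlock(&mcp_lock);
}
```

Skipping writer_coid is one simple answer to the self-perpetuating notification cycle: the writer never gets told about its own change.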

How does your server deal with the case where a client alters its own
data, and also wants to be notified of this change? You need to

That’s just perverse. If the client is modifying its own data, it
should then have its own internal way of notifying itself; condvar?

differentiate this case from the case where a different client alters the
data or you could get a self-perpetuating notification cycle between the
server and one or more clients.


Writing a resource manager that passively manages a block of shared
memory is easy.

For virtual variables that are “calculated on demand”, you will need to
have a) some kind of built-in scripting language (see Gamma), or
b) some kind of messaging system that allows the reader of a variable
to block waiting on the resource manager while the resource manager
asks the owner of the variable what its value is. That implies that the
owner of the variable responds to well-defined messages. This can
either be simplistic, or an instance of a scripting language (again, see
Gamma).

Umm… maybe I’m missing something obvious here, but, couldn’t I just
have:

devctl (fd, DEVCTL_GET_VIRTUAL_VARIABLE, &variable_info_block);

on the client side, and

mcp_register_read_access_callback (&variable, &callback);

on the server side, with the “callback” function doing the computation, and
when it returns we assume the value is updated?

I don’t know. What does mcp_register_read_access_callback do? If it
causes client-side code to be executed in the server’s context, I
suppose so, but that seems unlikely. If it causes client-side code to
execute
in the client’s context, then the client has to provide a message handler
for
this, and your server performs a blocking call. If it causes server-side
code
to execute in the server’s context, then you are obliged to recompile your
server every time you want to add a virtual variable. None of these is
a good situation. Can you explain more of what you were thinking?

I think we may have a failure to communicate here – terminology issues.

There are two clients, one library, and one server being discussed here.

The “long running task” in my base example is a client, but doesn’t really
know it – let’s call it an “application”.

The application calls a few of the mcp_*() functions, which cause a thread
to be created. This thread runs a resource manager. This is the server.

Some other process somewhere wants access to the application’s variables.
This process, because it uses the resource manager to talk to the
application, is a client.

The library is the mcp_*() functions within the application, the server,
and the client.

So… what does mcp_register_read_access_callback do?

It allows the “application” to register a callback that’s executed in
the server’s context (i.e., by the server thread created by the library
in the application). This means that the application is the one who
recalculates the virtual variable – it’s really the only one who
could recalculate the virtual variable, as it knows what goes into
making this variable go. Yes, it blocks the server. That means it’s
a blocking call for the client, which is correct. It could be made to
not block the server (which would be preferable) by giving the client
its own calculation thread with a context; when the calculation thread
returned, the calculation thread would reply to the client. Or it
could send a notification pulse/signal/whatever to the client, asking
it to recalculate the virtual variable, and then call the “mcp_variable_done”
function to transfer the variable’s contents back to the client…
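Fleshed out a little, the registration side could be as simple as a table of address/callback pairs kept by the library, with the server thread running the callback before replying. Everything below except the name mcp_register_read_access_callback is invented for illustration:

```c
#include <stddef.h>

#define MCP_MAX_VARS 64

typedef void (*mcp_read_cb)(void *variable);

/* Library-private table: which callback recalculates which variable. */
static struct { void *variable; mcp_read_cb cb; } mcp_vars[MCP_MAX_VARS];
static int mcp_nvars;

/* Called by the application to publish a virtual variable. */
int mcp_register_read_access_callback(void *variable, mcp_read_cb cb)
{
    if (mcp_nvars >= MCP_MAX_VARS)
        return -1;
    mcp_vars[mcp_nvars].variable = variable;
    mcp_vars[mcp_nvars].cb = cb;
    mcp_nvars++;
    return 0;
}

/* Called by the server thread when a client read()/devctl() arrives:
   recalculate on demand, then the resmgr replies with the fresh value. */
void mcp_recalculate(void *variable)
{
    for (int i = 0; i < mcp_nvars; i++)
        if (mcp_vars[i].variable == variable && mcp_vars[i].cb != NULL)
            mcp_vars[i].cb(variable);
}

/* Example: a virtual variable recomputed each time somebody reads it. */
static double cpu_load;
static void recalc_cpu_load(void *v) { *(double *)v = 0.42; /* pretend math */ }
```

Note the callback runs in the server thread, inside the application’s process, so it can see everything that goes into making the variable go.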

across a network in QNX with no extra effort. Properly constructed,

/net/nodename/dev/mcp/progname/variable :slight_smile:

That does not handle notification of change across the network. It also
means that the user of a variable knows on which node the variable
is located.

Not necessarily. See “virtual variables”. If you really insist, you can have
the virtual variable “thing” manage the virtual variables across your network,
or you can have your variables symlinked to the appropriate nodenames, aiding
in HA, whereby if the node faults, you just change the symlink…

I have always been very skeptical of symlinks as a way to manage fail-over.

Why? If you stop and think about it, pretty much everything in Neutrino
is a symlink – just not maintained in the filesystem space, but rather
in the process manager …



The client only issues a devctl() or a read(); the resource manager is
responsible for the mutex. The other side of the mutex (i.e., the program
using the MCP) could be forced to access the mutex through a “well tested”
API library.
So, instead of:

lock_the_mutex();
a = value;
unlock_the_mutex();

we could replace this with the less-dangerous:

copy_the_value (&a, &value);

So now the non-intrusive facility replaces all variable accesses with a
function call that includes a mutex lock. How long will it take somebody
to realize the gross inefficiency of locking and unlocking the mutex many
times when performing a long series of calculations on many variables, and
bypass the well tested API?

Only if they want guaranteed access to the variable. For the kinds of things
I’m trying to solve, like setting debug variables, this will either be in
a very limited scope, or no mutexing will be used:

if (debug_flag) {
printf ("…");
}

No, Gamma is not “C”. That’s the whole point. You cannot do this
at all in C. It simply is not possible without an interpreted
language.
The sad fact is that control programs often follow a progression:

  1. Do simple control
  2. parameterize the simple control with configuration files of the
    form: variable = value
  3. Add an ad-hoc conditional mechanism to the config file
  4. Add ad-hoc math functions, typically +, -, *, / to the config
    file
  5. Add visibility to the variable = value to the config file for use
    in the math functions
  6. Add parameterless subroutines to the config file

    Before long you have an ad-hoc language in your config file in an
    effort to move the control logic out of C. The trouble is, this
    ad-hoc language has no parameterized subroutines, no local
    variables, limited math support, low extensibility, terrible run-
    time performance, no recursiveness, and no well-defined grammar.
    It becomes an unmaintainable hodge-podge. Take a look at
    Wonderware’s Intouch if you need a commercialized example.

    I am not familiar with Wonderware’s products, nor with other products that
    exhibit this particular pathology as you describe it. I’m not denying that
    such things may exist. :slight_smile:

    This is not a pathology that’s limited to commercial products. I’ve seen it
    happen in university labs and internal programs at companies numerous
    times.

    Let’s focus back on the example that I had chosen as justification for
    writing this – (I’m not sure if I stated it explicitly or not): a
    long-running task that I wish to “poke” periodically, either to obtain its
    status or to change its operating behaviour. For example, a high-resolution
    graphics raytracing application. I’m not about to recode POVray in Gamma –
    that’s just plain out of the question.

    I understand this, and agree with you that recoding a compute-intensive
    application like POVray in Gamma is not sensible. Though, you could:

  1. embed Gamma within POVray such that incoming messages in the
    scripting language are executed at well-known times, giving you much
    greater flexibility than simply tweaking variables.
  2. embed POVray as a C extension to Gamma, so that you can run the
    POVray computation as an idle function, while still maintaining the
    ability to examine and alter variables and scripts as the POVray
    computation runs at full speed as optimized compiled code.
  3. Have POVray register variables with the Cascade DataHub such
    that it keeps the DataHub up to date as it computes, and periodically
    checks for incoming messages from the DataHub for variable changes.
    These messages arrive at well-defined points in the processing and do
    not suffer from the behind-your-back syndrome of direct memory
    writes.

    Rennie’s method does not allow arbitrary twiddling of stack-based variables,
    nor virtual variables, nor verification of write values – all things that
    can be incorporated with little or no impact into EXISTING (perhaps HUGE)
    installed codebases of C.

    It requires the addition of an API and a new resource manager, along
    with minor recoding of any portion of the application that accesses
    any variables that it publishes.
    If you used Cascade DataHub, you would add an API and a server,
    and simply add calls at strategic locations with no recoding of existing
    C code.
    The direct-to-memory twiddling mechanism seems cool, but it also
    looks like it would be more work to add it to an existing code base
    than something like Cascade DataHub would be.

    Rennie’s idea doesn’t allow virtual variables.

    I’m not convinced that yours does either, though. How do you
    see it being implemented?

    mcp_register_read_access_callback (&variable, &callback);
    (see above).

    There were no more details above. Can you elaborate as to
    how you see this working?

More elaboration added above :slight_smile:

if the function re-enters? What happens if the program execs?
What happens if the program longjmps out of the function? In
practice I think you will find that control variables will be
global, even if that’s not an absolute requirement of the design.

I tend to agree with you there – I just didn’t want to prevent it
in my design. You’ll also notice I agreed that having “control variables
as global” as a requirement was not outrageous.

So just prohibit reentrancy and non-local jumps by convention.

Out of curiosity, what happens if a function forgets to unregister its
local variables? Doesn’t that expose its stack to damage by the
server? I’ll bet that would be tough to debug.

Not any more difficult than “return (&var_on_stack);” :slight_smile:

Besides, if you are trying to keep the intrusion minimal,
why not use Rennie’s model? Why not write a resource
manager that simply grabs the symbol table of an executable
and gives other programs a chance to modify those symbols?

Not everything has a symbol table, or wants to have one :frowning:

I suppose that you could read the symbol table from a map
file instead of from the executable. Then you could produce
a reduced map file with some simple grep or sed scripts after
a compilation and treat them as initialization input to the
Rennie-model program. Not every program needs to have a
symbol table, just a map file. Heck, you could have the
program write its own map file at startup so you would not
even have to rely on your maps being up to date from your
build.

Well… if it’s writing its own map table, maybe it could do
that by calling, oh I don’t know,…

mcp_write_map_table (&variable, “tag”, “Description”);

:slight_smile: :slight_smile:

Cheers,
-RK


Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.

James MacMillan <jamesm@qnx.com> wrote:

Rennie Allen wrote:
Robert Krten wrote:


How’s this for an off-the-wall product idea.


A universal control panel for software. What it does is it allows
you to modify variables through a controlled-access resource manager
in running programs.


Useful idea, however, the fact that the writer of the product to be
controlled must be aware of the API, and incorporate it into their
program, makes it unlikely to be truly “universal”. I wrote an almost
“universal control panel” for QNX4/Photon using the qnx_debug() interface.
With it I could attach to a program, get a list of symbols, and
read/write a variable (the requirement was that the executable have symbol
information). Basically a debugger that doesn’t halt the program, and
with a limited interface (read/write variables). No special code required
in the target program. It was almost universal, since it still required
that symbol information be present, and controllable variables had to be
global static (nevertheless, I found it very handy).

For the product I currently work on we use the signal approach (which I
dislike as much as you do - wasn’t my idea).

Anyway, bottom line is I like the idea, but it would only be highly
successful if QSSL promoted it (and had it in the docs). Of course, a
nice little Photon app (available in a default installation under
“utilities”), that allowed you to get an “inventory” of controllable
symbols would be mandatory.


As a potentially more accessible alternative, you might consider
pitching your idea to the eQip Project ( http://www.qnxzone.com/ipaq/ ).
They are creating a complete open-source system for PDAs and handhelds.
If you could convince them to adopt your APIs, it would give you an
excellent proof-of-concept/demo.

Thanks James; I’m going to be developing it a little further based on
the feedback I’ve received so far :slight_smile: Plus, my main goal in this is
to have something to put in my new book :slight_smile:

Cheers,
-RK


Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.