Andrew Thomas <andrew@cogent.ca> wrote:
“Robert Krten” <nospam83@parse.com> wrote in message
news:aj9n8e$af5$1@inn.qnx.com…
Andrew Thomas <andrew@cogent.ca> wrote:
suggesting is very much like a real-time database, which is a fairly common
part of many process control systems. A process registers itself and its
variables with a central server which then makes them accessible through
some kind of API. We make one as well - Cascade DataHub. The advantage of
Cool. Out of curiosity, is yours open source? I’m not asking to be rude
or anything, but my main purpose in doing this (apart from the “cool” aspect
of it) is to be able to include the source and a blow-by-blow “why did I do
this or that” in the upcoming book…
No, Cascade DataHub is not open source. It’s also not QNX6-specific,
as it compiles on QNX4 and Linux as well.
looking at the central process as a data store is that you can construct
an efficient mechanism for alerting programs whenever any variable changes,
which is a refinement of the idea you are suggesting. This even works
That’s part of the “exercises left to the reader”. We can have “virtual”
variables, which means that they are “calculated on demand” when the
read() is issued, or we can have “control variables” which are validated
and cause a thread to unblock or be otherwise notified if the variable
changes via write().
Efficient asynchronous notification of data changes is the hard part.
for (i = 0; i < num_clients; i++) {
    MsgDeliverEvent (clientinfo [i].coid, &clientinfo [i].event);
}
??
Writing a resource manager that passively manages a block of shared
memory is easy.
For virtual variables that are “calculated on demand”, you will need to
have a) some kind of built-in scripting language (see Gamma), or
b) some kind of messaging system that allows the reader of a variable
to block waiting on the resource manager while the resource manager
asks the owner of the variable what its value is. That implies that the
owner of the variable responds to well-defined messages. This can
either be simplistic, or an instance of a scripting language (again, see
Gamma).
Umm… maybe I’m missing something obvious here, but, couldn’t I just have:
devctl (fd, DEVCTL_GET_VIRTUAL_VARIABLE, &variable_info_block);
on the client side, and
mcp_register_read_access_callback (&variable, &callback);
on the server side, with the “callback” function doing the computation, and
when it returns we assume the value is updated?
across a network in QNX with no extra effort. Properly constructed,
/net/nodename/dev/mcp/progname/variable
That does not handle notification of change across the network. It also
means that the user of a variable knows on which node the variable
is located.
Not necessarily. See “virtual variables”. If you really insist, you can have
the virtual variable “thing” manage the virtual variables across your network,
or you can have your variables symlinked to the appropriate nodenames, aiding
in HA, whereby if the node faults, you just change the symlink…
Optional mutex. The idea is that if you want to “live on the edge”, you can
use the variables in an unprotected manner. The library will always use the
mutex, but your application-side code can choose to ignore it if it wants to
run the risk of a data collision. For something as simple as a “debug on/off”
variable, we really don’t care – but for other things, yes, absolutely,
you’ll want to take precautions.
This is very dangerous. If any client to your resource manager abuses its
mutex (locks it but never unlocks it) then both the resource manager and
any other clients can be blocked. You can use a thread pool in the
resource manager, but you can still block them all. You can unblock
clients after a certain period of time, but you now have both a race
condition and a potential source of long delay. A resource manager
that relies on the “good” behaviour of its clients is an unstable
design.
The client only issues a devctl() or a read(); the resource manager is
responsible for the mutex. The other side of the mutex (i.e., the program
using the MCP) could be forced to access the mutex through a “well tested”
API library.
So, instead of:
lock_the_mutex();
a = value;
unlock_the_mutex();
we could replace this with the less-dangerous:
copy_the_value (&a, &value);
which atomically locks the mutex, protecting the resource manager thread
from some “bad” behaviour of the client. You gotta put your trust somewhere.
The next logical step, of course, is to not just limit yourself to
variables, but also to code blocks. In a 24/7 application, wouldn’t it be
nice to alter the code itself if necessary without stopping the application?
We also do this on a regular basis - take a look at the Gamma language.
When you combine the DataHub (call it a universal control panel if you like)
with a control program that can hot-swap code at runtime, you have a system
that really can run through virtually any kind of maintenance activity
without a shutdown. If you embed Gamma in an application, you can write
programs that automate and re-code your applications at runtime.
But but but… “Gamma is not C”, right? I’m not trying to dismiss
the idea offhand, but there’s a whack of code already written in C,
so people are generally loath to change (if it ain’t broken…)
No, Gamma is not “C”. That’s the whole point. You cannot do this
at all in C. It simply is not possible without an interpreted language.
The sad fact is that control programs often follow a progression:
- Do simple control
- Parameterize the simple control with configuration files of the
  form: variable = value
- Add an ad-hoc conditional mechanism to the config file
- Add ad-hoc math functions, typically +, -, *, /, to the config file
- Add visibility of the variable = value pairs in the config file for
  use in the math functions
- Add parameterless subroutines to the config file
Before long you have an ad-hoc language in your config file in an
effort to move the control logic out of C. The trouble is, this
ad-hoc language has no parameterized subroutines, no local
variables, limited math support, low extensibility, terrible run-time
performance, no recursiveness, and no well-defined grammar.
It becomes an unmaintainable hodge-podge. Take a look at
Wonderware’s InTouch if you need a commercialized example.
I am not familiar with Wonderware’s products, nor with other products that
have had this particular pathology as you describe it. I’m not denying that
such things may exist.
All I’m trying to do is come up with a simple example of a resource manager
that solves something useful. Once you have the abstraction of controlling an
arbitrary (though granted, well-instrumented) C program using the control panel,
you are certainly welcome to use that abstraction at a higher level using any
number of scripting languages.
Let’s focus back on the example that I had chosen as justification for writing
this – (I’m not sure if I stated it explicitly or not). A long-running task
that I wish to “poke” periodically, to either obtain status from or change the
operating behaviour of. For example, a high-resolution graphics raytrace
application. I’m not about to recode POVray in Gamma – that’s just plain out
of the question. I might add a tweak on a global variable or two using the
universal control panel. That’s the main point I was trying to make with the
concept of the control panel. Perhaps the word “universal” was taken in its
technical meaning and not its marketing meaning.
If you’re really looking for a way to twiddle variables in an
existing C program, consider Rennie’s method. If you
Rennie’s method does not allow arbitrary twiddling of stack-based variables,
nor virtual variables, nor verification of write values – all things that
can be incorporated with little or no impact into EXISTING (perhaps HUGE)
installed codebases of C.
are starting a new control application in C that requires any
kind of flexibility, consider a psychiatrist.
Alas, not all projects are new projects.
The way I’d approach 24/7 HA applications is to use the pathname space,
have the “in-service upgrade” module register behind the current in-service
module, suck out the deltas, and kill the in-service module whenever it wants.
The client gets a hit, but then retries its connection, connects to the
new-and-improved module, and everyone is happy… For a true 24/7 HA system,
the HA part is not simply welded on at the end, it has to be designed in,
which means that you’ve already thought about clients retrying their
connections, restoring their states, and so on.
This, as opposed to just using an environment that allows you
to update the code in situ? Why would you prefer to kill off the
application and hope that you remembered to grab all important
state and hope that the clients are able to recover their
connections well? I agree that a HA system plans for upgrades,
but why make it difficult on yourself? Use a tool that helps you.
The argument of “hoping that you remembered to grab all important state”
and “hoping that the clients are able to recover” is very similar to the
argument against interpreted languages in general of “hoping you tested
all the codepaths” and “hoping that the data is compatible with the new
code” and so on. If we’re talking HA, then basically money is no concern.
I mean this half-jokingly. In an HA system you want 100% code coverage, or
as close as possible thereto, which I believe means testing the entire
program, not just the patch. At that point, whether the program you
are testing is in language A or language B is pretty much a management
decision, isn’t it? HA, and this is a concept most people don’t get,
is tied 100% into the concept of restartability. If you can’t figure
out how to restart a crashed system to minimize your MTTR, then you’re
screwed. And since all programs crash, whether interpreted or compiled,
there doesn’t seem to be much difference between the nature of the
patch applied. I’d almost rather kill off a version and restart
a new one, because then I’d be assured that I have “everything I need”
to recover from a crash. I’ve seen too many systems that run along
for a long time, with little patches here and there applied at runtime,
until suddenly it’s time to restart from cold-start, and so many things
break. By forcing a restart-from-cold-start, and incorporating that
into the HA testing cycle, I believe you’ll get a more robust product in
the end…
I know this is not “universal” in the sense that some people are thinking -
you do need to use an API in one case, and an interpreted language in
the other, but no solution other than Rennie’s has come substantially
closer.
Rennie’s idea doesn’t allow virtual variables.
I’m not convinced that yours does either, though. How do you
see it being implemented?
mcp_register_read_access_callback (&variable, &callback);
(see above).
and imposed a “control variables must be global” (which isn’t outrageous)
requirement.
And how do you propose to perform in-situ modification of stack
variables? Are you thinking that every time a function enters it
will tell the resource manager where its variables are this time,
and then remove them at the end of the function? What happens
That’s the only possible way of having that work. I’m not saying
that this will be a high-runner case, but it’s a nice-to-have,
and doesn’t differ significantly from malloc’d global variables,
which are transient as well…
if the function re-enters? What happens if the program execs?
What happens if the program longjmps out of the function? In
practice I think you will find that control variables will be
global, even if that’s not an absolute requirement of the design.
I tend to agree with you there – I just didn’t want to prevent it
in my design. You’ll also notice I agreed that having “control variables
as global” as a requirement was not outrageous.
The amount of intrusion into a program is specifically designed to be
minimal, therefore, with the open source nature of it, I’m hoping for
some adoption. I’ll certainly be using it in my stuff, YMMV.
Your original goal, to set the debug flag in your running C
application, seems simple enough. I was really trying to
point out that the logical thought progression that you are
following is taking you into a very well-explored realm. The
problems that you will encounter have been solved, and the
depth to which they’ve been addressed exceeds what you
appear to be planning.
That’s a double-edged sword – I’ve seen them addressed to much
greater depths as well, but using C and various other methods.
My main goal is to show that it can be done using C – what you
choose to do with it at a higher level is up to you. You could
simply expose all this stuff for “massive public domain C programs”
and then control it with an interpreted language.
Besides, if you are trying to keep the intrusion minimal,
why not use Rennie’s model? Why not write a resource
manager that simply grabs the symbol table of an executable
and gives other programs a chance to modify those symbols?
Not everything has a symbol table, or wants to have one
Then that “whack of code already written in C” would not
have to be modified and re-linked with your API, and people
would not have to run your resource manager.
Then they’d be welcome to use Rennie’s approach
Cheers,
-RK
Cheers,
Andrew
--
Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.