How to boot with -ptcpip [was: netstat and route]

Shashank <sbalijepalli@precitech.com> wrote:
: Hi,
: We have developed a real-time application in QNX 4.25.
: But I guess QNX will stop supporting QNX 4.25 from 2005. Where can we find
: some information on how to port code from 4.25 to RTP, if in the future we
: decide to move to QNX RTP?

Take a look at the QNX 4 to QNX 6 Migration Guide. It’s on the QDN website
and will be shipped with QNX 6.2.


Steve Reid stever@qnx.com
TechPubs (Technical Publications)
QNX Software Systems

Shashank <sbalijepalli@precitech.com> wrote:

Hi,
We have developed a real-time application in QNX 4.25.
But I guess QNX will stop supporting QNX 4.25 from 2005. Where can we find
some information on how to port code from 4.25 to RTP, if in the future we
decide to move to QNX RTP?

There should be a migration toolkit available for free. It comes
with a migration document that describes the changes and how to go
about performing the migration, a migration library that can
help with the first stages of a port, and a utility that will walk
your code looking for things that might need to be changed,
suggesting replacement routines.

For more information:

http://qdn.qnx.com/download/migration/index.html


We also offer a migration course. For more information, contact
training@qnx.com or your sales rep.

-David

QNX Training Services
http://www.qnx.com/support/training/
Please followup in this newsgroup if you have further questions.

Xiaodan Tang <xtang@qnx.com> wrote:

The idea of “walking under /net” could easily be put into name_open(),
so that the application API (name_open()) is consistent in the local/global
case.

Exactly. This is what I would like to see. The name_locate / name_attach
functions were most useful in the sense that they abstracted the service from the
actual network node that provided it. While walking /net isn’t that difficult,
having to add that to each client app is tedious; this is something that belongs
wrapped in an API function. I’m not even going to mention the potential added
complexity when qnet is told to use something other than /net :-)

Now back to the global name stuff, I agree the name under /net
is not 100% reliable, it’s based on broadcasting after all.
But my feeling is “nameloc” on QNX 4 is also not 100% reliable.
(it’s also based on a broadcast to populate the name space)

I’m not even suggesting that we need anything like a nameloc, as the only
advantage it would have is as a cache of the various /net entries.

As for the issue of systems connected via PPP or IPSec tunnels not showing
up, isn’t there some command line that can be issued to make them appear?
If so, then that could just be added to ip_up script (and ip_down).

I think all we need is something like this:

fd = name_locate( "global_name" );

Which of course means that we need to document a standard location for the
global namespace to exist, i.e. /var/net or something like that.

The thing is, for this to be useful it has to more or less be promoted as a
“standard” by QSSL. Otherwise there is no reason for anyone to follow the
same namespace/directory locations for global names.

Cheers,
Camz.

Chris McKillop <cdm@qnx.com> wrote:

I think the idea is to find the processes by always using /net. In the
local case you can use “/net/localhost/” and if you need to talk to a remote
machine you can use “/net/machinename”. This doesn’t cover global names
but the reality is you generally know the name of the machine you want to
talk to anyways.

This is potentially error-prone. /net does not exist if qnet isn’t running, but
if you use something that isn’t dependent on /net, i.e. /var/net, then the
name_locate function could simply look there first and then walk /net if it
exists. As for knowing the name of the machine, that’s the complete opposite
of why you want a global namespace. I don’t want to know the name of the
machine, because it can change. During development/testing, I might have
all the processes/resmgrs running on the same machine, but when I deploy, it
might be different, and possibly distributed. So I think this is a bad idea
unless you make your app use some kind of configuration file that you can
edit, but even that is something you don’t really need if you make the
namespace transparent.

One of the issues that I personally didn’t like about how naming worked on
QNX4 was the fact that you were basically forced to use the global name
space to communicate between two nodes, and when there were multiple
registrations of the same global name, sometimes things worked out kinda funny.

Agreed. I actually liked the QNX2 method where the uniqueness of global names
was enforced (i.e. you could not attach a duplicate name). True, the QNX4 ability
to have multiple registrations allowed for some failover transparency, but
it could be rather complex to get right. For the most part, I think developers
tried to avoid having duplicate names unless absolutely necessary.

the case of multiple machines with the same named resource. Perhaps using
an open standard like LDAP or some other open scheme.

I hadn’t thought of LDAP. That’s an interesting idea, but is also pretty
heavy in terms of solutions, especially when working with small embedded systems.
For the larger systems that might have LDAP for other reasons, it’s a perfect
fit.

Another thing I found light-years ahead of QNX4 was the push to use resmgrs
so that all applications can use the POSIX APIs to communicate to the
managers over the network.

The push would have been ignored if the API had not made it so easy. The
same push existed in QNX4, but it still wasn’t that easy, so it was rarely
done outside of QSSL. QNX4 still had some black magic associated with the
technique as well. QNX6 has some black magic required when you want to handle
directories rather than individual entries, and that needs to be addressed in
the docs (as I am sure it will be).

Using a global name is pretty much useless if you want to use perl to talk
to your service.

Well, this depends. :-) If you use name_attach() and name_locate(), then you
could have a resmgr manage the /var/net space and perform a redirect/symlink
type service to point to the actual resource in /net/machine/… then you COULD.
Mind you, there isn’t any reason why you can’t have a resmgr sit “on top” of /var/net
and examine those requests to determine if it needs to let them pass through or
be redirected, somewhat like fs-pkg does already.

There is a related topic (cdm: you knew this was coming), which is remote spawn.
That is, IMO, the other requirement for being able to actually create distributed
systems.

Right now remote spawn IS possible, but it needs to be wrapped up in an API call
of some sort to move it out of the realm of “guru black magic”.
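As a rough illustration of what such a wrapper might hide: QNX 6’s on utility can already start a process on another node (assuming qnet is up and the node is visible under /net), so a first-cut API could be little more than building that command line. remote_spawn_cmd() and the exact on invocation shown here are assumptions for illustration, not a real library call:

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical wrapper: build the "on -f node command" line that would
   launch 'path' on the remote node 'node'. Returns 0 on success, or
   -1 if the command doesn't fit in the supplied buffer. A real API
   would go on to spawn the command and return a pid or an fd. */
int remote_spawn_cmd(const char *node, const char *path,
                     char *cmd, size_t len)
{
    int n = snprintf(cmd, len, "on -f %s %s", node, path);

    return (n < 0 || (size_t)n >= len) ? -1 : 0;
}
```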

Cheers,
Camz.

camz@passageway.com wrote:

Xiaodan Tang <xtang@qnx.com> wrote:

Now back to the global name stuff, I agree the name under /net
is not 100% reliable, it’s based on broadcasting after all.
But my feeling is “nameloc” on QNX 4 is also not 100% reliable.
(it’s also based on a broadcast to populate the name space)

I think all we need is something like this:

fd = name_locate( "global_name" );

This is exactly what name_open() does:

fd = name_open("service_name", FLAG_GLOBAL);

Pass in FLAG_LOCAL to restrict the search to local services only.

Which of course means that we need to document a standard location for the
global namespace to exist. ie. /var/net or something like that.

Well, we didn’t document it, but it is actually /dev/name/global/.
We only need a manager (nameloc! :-)) to manage this name space, and
name_attach("service_name", FLAG_GLOBAL) will always talk to the master
nameloc to register the service (“I am nodeX, providing service ‘service_name’”).
Every name_open(FLAG_GLOBAL) then looks at the local /dev/name/global
first; if the service doesn’t exist there, it contacts the master nameloc
and creates a symlink into the local global name space.

Whether to allow duplicate service names or not is then just a switch on the
master nameloc.

Using a “master manager” is the only way to make the whole thing
reliable. Otherwise, you have to relay name info between managers,
and synchronization is always the problem.

-xtang

camz@passageway.com wrote:

Chris McKillop <cdm@qnx.com> wrote:

the case of multiple machines with the same named resource. Perhaps using
an open standard like LDAP or some other open scheme.

I hadn’t thought of LDAP. That’s an interesting idea, but is also pretty
heavy in terms of solutions, especially when working with small embedded systems.
For the larger systems that might have LDAP for other reasons, it’s a perfect
fit.

Since our request/reply is simple, maybe it is not that HEAVY.
I am actually investigating this one. So the “service lookup” actually
sends out an LDAP message, and we will have a tiny LDAP server to answer
just those requests.

Another thing I found light-years ahead of QNX4 was the push to use resmgrs
so that all applications can use the POSIX APIs to communicate to the
managers over the network.

Using a global name is pretty much useless if you want to use perl to talk
to your service.

Well, this depends. :-) If you use name_attach() and name_locate(), then you
could have a resmgr manage the /var/net space and perform a redirect/symlink
type service to point to the actual resource in /net/machine/… then you COULD.
Mind you, there isn’t any reason why you can’t have a resmgr sit “on top” of /var/net
and examine those requests to determine if it needs to let them pass through or
be redirected, somewhat like fs-pkg does already.

Exactly. A “service” could simply be a symlink to a remote node. Thus, being
able to use a “service” without knowing its server node is very important. Imagine
you take an iPaq to a company and just ask “where is the tcpip service?”, set
“SOCK=voyager”, and off you go! You don’t actually care who
IS providing tcpip :-)

There is a related topic (cdm: you knew this was coming), which is remote spawn,
which is, IMO the other requirement of being able to actually create distributed
systems.

Right now remote spawn IS possible, but it needs to be wrapped up in an API call
of some sort to move it out of the realm of “guru black magic”.

This has to be a set of APIs and daemons. Like a system that has daemons
on each node of a distributed network, keeping track of each other’s
CPU usage, and “spawning” processes to the spare nodes. Even better, with
support from the HAT, we could do a sort of remote “fork()”, to push a
job from busy nodes to spare nodes :-)

You’re running Mozilla, and every time you click a link, the actual process
flies from one node to another :-)

PVM is actually something like that, except it can only decide
which node to start a job on.

-xtang

camz@passageway.com wrote:

Agreed. I actually liked the QNX2 method where the uniqueness of
global names was enforced (ie. you could not attach a duplicate name).

Unfortunately, true uniqueness enforcement is a non-trivial problem.
For instance, how do you deal with a network split, where two processes,
one on each side of the split, register the “unique” name, which is
unique for their currently visible network, and then the network rejoins?

-David

QNX Training Services
http://www.qnx.com/support/training/
Please followup in this newsgroup if you have further questions.

David Gibbs <dagibbs@qnx.com> wrote:

Unfortunately, true uniqueness enforcement is a non-trivial problem.
For instance, how do you deal with a network split, two processes
one on each side of the split register the “unique” name, which is
unique for their currently visible network, then the network rejoins?

I know, I know… there are also race conditions and such. I don’t
expect to actually have globally unique names, we’ve managed to live without
them in QNX4 for years now without any real negative effects.

I do like the option of being able to specify that the name be globally
unique, but I realize that it isn’t overly realistic to actually achieve
it.

Cheers,
Camz.

Does anyone know how involved this is? Someone has told me
it would be just a few lines of code to support the ICH4 if you
already had support for the ICH2.

Art Hays

“S. Miller” <smiller@retia.cz> wrote in message news:3E6C365A.90BA2F5F@retia.cz

Hi,

I am seeking a powerful mainboard for a Pentium 4 or Xeon. But there is a
problem with EIDE busmastering on chipsets with the Intel ICH4 or SiS
963 southbridge (these we have tested). When will QNX support them? And is EIDE
busmastering supported on the Intel E7501 (ICH3-S)?

Thanks
Svatopluk Miller

william@bangel.demon.co.uk said in <3E9D3CAE.BCCC7D1E@bangel.demon.co.uk>:

What determines the mount point reported for a package?

/pkgs/repository/GNU/GNUEmacs/core-21.2-bld8 on / type pkg

/pkgs/repository/waukes/htdoc/core-1.0 on / type pkg

Ignore them. They aren’t useful (now).

The current fs-pkg shows spurious locations for the source mountpoint
(or was it the first line in /etc/system/packages/package; dunno)

kabe

kabe@sra-tohoku.co.jp wrote:

What determines the mount point reported for a package?
Ignore them. They aren’t useful (now).

But if I try to remount / as read-only I get “resource busy”.
It was indicated in another thread that the likely reason for
this was that something else is mounted on / - am I right to
assume that these otherwise-unuseful mount reports are
responsible? If so, I cannot ignore the failure of the re-mount.

Thanks
William

Can’t you just change the attributes of the memory block after the DMA
transfer and before the calculations, so that the memory can be cached,
using mprotect()?

You’d also have to make it PROT_NOCACHE, and flush the cache line(s) before
the DMA transfer starts; otherwise a delayed writeback could clobber your new
data. I don’t think it’s easy, or even possible, to get a DMA-transfer ‘begin’
notification and hold off the DMA write until all the cache has been
written back.

-Adam

Can’t you just change the attributes of the memory block after the DMA
transfer and before the calculations, so that the memory can be cached,
using mprotect()?

You’d also have to make it PROT_NOCACHE, and flush the cache line(s) before
the DMA transfer starts; otherwise a delayed writeback could clobber your new
data. I don’t think it’s easy, or even possible, to get a DMA-transfer ‘begin’
notification and hold off the DMA write until all the cache has been
written back.

I was supposing that changing the attributes to PROT_NOCACHE would flush the
cache. Is this the case?
As for the buffer, I was assuming that the calculations involved only read
operations on the data coming from the video grabber, while the result of
the processing was written to another buffer.
If that’s the case then you should only need to flush the cache before
processing the data, right?

-Paolo

(Crossposted from qdn.public.qnxrtp.applications)

Wojtek Lerch <wojtek_l@yahoo.ca> wrote:

Pterm doesn’t understand the difference between Ctrl-C and Ctrl-S. If one
works and the other doesn’t, it must be because devc-pty treats them
differently.

“Bill Caroselli” <qtps@earthlink.net> wrote in message
news:bp3cf1$845$1@inn.qnx.com…
I have a text application that spits stuff to the screen very fast
indefinitely. I can’t seem to stop pterm with Ctrl+‘S’. If I put
any delay in the loop, i.e.
delay( 10 );
I can pause the output of pterm just fine and restart it when I’m
ready.

I can always Ctrl+‘C’ the program. That happens immediately.

Why can’t I pause the output of a pterm?