QNX6 - System Backups

Hi,

I’m trying to get a handle on how to do system backups on a Neutrino
system and am running into a problem (possibly conceptual) with the
package filesystem.

If I have this right, the package filesystem acts as an overlay on the
physical one and “maps” in files from its repositories, generally
located under the /pkgs mountpoint. This means that all the files are
logically on the system twice: once, physically, in the repository, then
again wherever they’re mapped. “Spilled” or modified files are located in
another area…

So… How does one back up all of this when 80% of /usr is virtual?

If you just do a tar or pax, you won’t get what you want because the
virtual files appear quite real to them – if you do this the image
file ends up quite large because the “virtual” files are stored as
real ones and then the /pkgs directory is stored as well…

This wouldn’t be so bad if the restore worked - which it doesn’t,
because after a restore, the “virtual” files are now physically on the
disk – doubling disk consumption and (probably) confusing the package
manager.

So how is one to do a system backup? Can you turn off the package
manager and then do a “standard” backup? What about other resource
managers - are there any other standard ones to avoid, i.e. exclude
/fs, /dev, /net, etc.? Do you need to copy pax and tar out from
the repository first?

Has anyone done a successful full backup/restore cycle yet?

Is the idea that ALL files be in a repository and then only the
/pkgs directory need be stored?

-Bill

I would…

  • copy what I need to do the backup into /dev/shmem (tar, cp, gzip, fs-pkg, slay, …)
  • add /dev/shmem to your PATH
  • slay the package filesystem
  • tar from / (excluding /fs /net /proc /dev) down and write the tar file in /fs somewhere
  • restart package filesystem
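
The tar step in the recipe above can be sketched as a script. This is only a sketch: a throwaway tree stands in for the real root so the logic can be exercised anywhere, GNU tar’s --exclude syntax is assumed (QNX’s tar options may differ), and the slay/restart of fs-pkg appears only in comments since it only applies on a live Neutrino box.

```shell
#!/bin/sh
# Sketch of the backup recipe above.  On a real Neutrino system you
# would first copy tar/gzip/slay into /dev/shmem, put /dev/shmem at
# the front of PATH, and `slay fs-pkg` before the tar step; here a
# throwaway tree stands in for / so the script runs anywhere.
root=$(mktemp -d)
mkdir -p "$root/etc" "$root/usr/bin" "$root/fs" "$root/net" "$root/proc" "$root/dev"
echo config > "$root/etc/rc.local"
echo binary > "$root/usr/bin/app"
echo junk   > "$root/proc/should-not-appear"

out=$(mktemp)
# Archive from the root down, skipping the same mountpoints as the
# list above (/fs /net /proc /dev); on the real box the tar file
# itself would be written somewhere under /fs.
( cd "$root" && tar -cf "$out" \
      --exclude=./fs --exclude=./net --exclude=./proc --exclude=./dev . )

tar -tf "$out"
# ...then restart the package filesystem (fs-pkg) on the real system.
```

(pax users would typically filter the file list with find instead, since POSIX pax has no exclude option of its own.)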

Jason


<jclarke@qnx.com> wrote in message news:96bu4r$f42$1@nntp.qnx.com

I would…

  • copy what I need to do the backup into /dev/shmem (tar, cp, gzip, fs-pkg,
    slay, …)
  • add /dev/shmem to your PATH
  • slay the package filesystem
  • tar from / (excluding /fs /net /proc /dev) down and write the tar file
    in /fs somewhere
  • restart package filesystem

That won’t be good for a system that needs to be online 24/7.



On Tue, 13 Feb 2001 14:22:08 -0500, “Mario Charest”
<mcharest@void_zinformatic.com> wrote:

<jclarke@qnx.com> wrote in message news:96bu4r$f42$1@nntp.qnx.com …
I would…

  • copy what I need to do the backup into /dev/shmem (tar, cp, gzip, fs-pkg,
    slay, …)
  • add /dev/shmem to your PATH
  • slay the package filesystem
  • tar from / (excluding /fs /net /proc /dev) down and write the tar file
    in /fs somewhere
  • restart package filesystem


    That won’t be good for a system that needs to be online 24/7.

I agree, and the above is really pretty messy for something that should
be done regularly on a lot of larger systems… It seems like we need a
version of pax which knows whether a file is a package-manager file
and can ignore it… sort of like how it can be told not to follow
symbolic links…

Absent that, have a separate mount for the physical partition (/backup)
and do the backup/restore there…

This whole thing is not a big deal for a development system sitting on
my desk – (including having a developer sitting in the chair at the
desk) but for real products there really is a need to be able to do
this simply and monolithically, i.e. one CD, one command, no
interaction would be the ideal. The further you get from this, the
further the developer gets from his desk, as he will be flying to some
god-forsaken town in the middle of nowhere where all you can get to eat
are those salty pancakes(!?) that Igor was talking about…



“William M. Derby Jr.” wrote:

This whole thing is not a big deal for a development system sitting on
my desk – (including having a developer sitting in the chair at the
desk) but for real products there really is a need to be able to do
this simply and monolithically, i.e. one CD, one command, no
interaction would be the ideal. The further you get from this, the
further the developer gets from his desk, as he will be flying to some
god-forsaken town in the middle of nowhere where all you can get to eat
are those salty pancakes(!?) that Igor was talking about…

Indeed, except that I was served with those in the quite metropolitan
Chicago area :-)

To save yourself some trouble, don’t use the package mangler on field
systems unless they need very frequent upgrades/patches - but in that case
they probably aren’t really running 24/7 anyway ;-\

What I do is have a script which demangles the distribution into my own
package format, which is then used for installation on field systems. It
used to be simple enough before they embraced XML syntax…

  • igor

William M. Derby Jr. <derbyw@derbtronics.com> wrote:

Hi,

I’m trying to get a handle on how to do system backups on a Neutrino
system and am running into a problem (possibly conceptual) with the
package filesystem.

If I have this right, the package filesystem acts as an overlay on the
physical one and “maps” in files from its repositories, generally
located under the /pkgs mountpoint. This means that all the files are
logically on the system twice: once, physically, in the repository, then
again wherever they’re mapped. “Spilled” or modified files are located in
another area…

Exactly. So generally you wouldn’t need to maintain more than
one (secure … make as many copies as you want) backup of the
packages. What you want to do is to maintain a back-up of what
the user has modified in the system.

This falls into two categories. The first is files that were managed by
the package filesystem: these are all the files that have spilled, and
they are all visible wherever you put your spill directory. On most
RTP systems this is /var/pkg/spill.

The second class covers things that are added to the
system by the user. These will “fall through” the package filesystem
and end up on whatever you have put as your backing store underneath
the package filesystem. These files are actually harder to track
down without resorting to something like examining the /proc/mount
entries to see which filesystem is managing what. Understandably
not an ideal solution.
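
Given the spill location above, backing up that first category can be a single tar invocation. A sketch, with a temp directory standing in for /var/pkg/spill so it runs anywhere:

```shell
#!/bin/sh
# Archive only the spilled (user-modified) files.  SPILL would be
# /var/pkg/spill on a stock RTP system; a temp tree stands in here.
SPILL=$(mktemp -d)
mkdir -p "$SPILL/etc"
echo modified > "$SPILL/etc/profile"

out=$(mktemp)
# -C enters the spill root first, so the archive holds relative paths
# and a later restore stays flexible about where it unpacks.
tar -cf "$out" -C "$SPILL" .
tar -tf "$out"
```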

So… How does one back up all of this when 80% of /usr is virtual?

If you just do a tar or pax, you won’t get what you want because the
virtual files appear quite real to them – if you do this the image
file ends up quite large because the “virtual” files are stored as
real ones and then the /pkgs directory is stored as well…

Correct … so only back up the package repositories once, since
they are not “transient”: the only difference over time is new packages
being added to the system. This is something that you have to
take into account when you are setting up your system. For an RTP-like
system where user-modified entries are in the root.qfs file,
you have a simple solution … back up the root.qfs file. For
partitions, you would have to either un-mount them or re-mount
them read-only in another part of the path so that you can do your
backup.

This wouldn’t be so bad if the restore worked - which it doesn’t
because after a restore, the “virtual” files are now physically on the
disk – doubling disk consumption and (probably) confusing the package
manager.

Correct. Because you don’t really want to restore the virtual
entries.

So how is one to do a system backup? Can you turn off the package
manager and then do a “standard” backup… what about other resource
managers are there any other standard ones to avoid - i.e exclude
//fs , /dev, /net. etc… Do you need to copy pax and tar out from
the repository first?

Exactly. Only you can decide what your backup policy is going to
require. Blindly doing a tar -cvf on / is going to do you no good,
especially in a qnetted environment. It is almost as useful as
find / =;-) If you can afford to, then one of the cleanest
ways to do the backup would be to shut off the package filesystem,
as jclarke has already mentioned. If you have a 24/7 environment
(in which case, is the package filesystem something you want to
be using?) this obviously won’t work, so you will have to be more
selective.

Has anyone done a successful full backup/restore cycle yet?

Yes I have … but I do it based on my needs, which means that
I have an RTP cd with base packages (which I don’t need to back up),
and I archive my spill directory and my home directory … which is
also mounted on a separate disk, which facilitates the backup
procedure.

Is the idea that ALL files be in a repository and then only the
/pkgs directory need be stored?

You can’t forget about things the user has modified or things
that the user may have added. How much this happens depends on
how much administrative control users have over the system.

Thomas

Thomas Fletcher <thomasf@qnx.com> wrote:


Exactly. So generally you wouldn’t need to maintain more than
one (secure … make as many copies as you want) backup of the
packages. What you want to do is to maintain a back-up of what
the user has modified in the system.

The package filesystem should allow sysadmins to administer backups
using any reasonable method that works on common POSIX-like systems -
this includes find & tar. If the package filesystem is so inflexible
that the OS vendor has to dictate a specific backup method, then QNX6
will not be a favorite in the sysadmin community :-)

[…]

as jclarke has already mentioned. If you have a 24/7 environment
(in which case is the package filesystem something you want to
be using?)

I’m confused by that statement. Do you mean that in a 24/7 environment
the package filesystem isn’t reliable enough, or that it makes stuff
like backups rather complex?

It was my impression that the package filesystem was to be used
not only for the development environment, but mostly to
allow better handling of upgrades in the field, not to
mention easier tracking of the versions being installed. Was my
assumption out to lunch?

Mario Charest <mcharest@antispam_zinformatic.com> wrote:

as jclarke has already mentioned. If you have a 24/7 environment
(in which case is the package filesystem something you want to
be using?)

I’m confused by that statement. Do you mean that in a 24/7 environment
the package filesystem isn’t reliable enough, or that it makes stuff
like backups rather complex?

It was my impression that the package filesystem was to be used
not only for the development environment, but mostly to
allow better handling of upgrades in the field, not to
mention easier tracking of the versions being installed. Was my
assumption out to lunch?

The package filesystem is still experiencing growing pains
as a product which is currently targeted at our desktop
RTP systems (i.e. it doesn’t ship with Neutrino 2.1, for
example). It does have all of the benefits that you mention
(and that we have preached), but it has additional overhead
compared to using the Neutrino filesystem directly to provide
layering.

I’m not saying that it isn’t what you want to use for your
environment … it might be a perfect fit. I’m saying that
you want to decide what level of technology best meets your
needs.

In any case the Neutrino filesystem (with or without the
addition of the package filesystem) makes creating backups
a bit more work to plan out and integrate into a product.

Thomas

On 15 Feb 2001 16:28:08 GMT, Thomas Fletcher <thomasf@qnx.com> wrote:

[…]

In any case the Neutrino filesystem (with or without the
addition of the package filesystem) makes creating backups
a bit more work to plan out and integrate into a product.

Well, thinking about the 24/7 thing - if you are running an embedded
type system, you probably know what utilities could be used during
the backup, and can leave them in /dev/shmem… Most other processes will
already be running (i.e. in memory) anyway. So… slaying the package
manager may not be the end of the world… now if your 24/7 system
allows general unrestricted shells… then you have more problems…

The other thing would be adding an ioctl or some such message to the
package filesystem which would make all “virtual” files appear as
symbolic links or something else which could be detected via pax and
ignored… Then you wouldn’t need to shut down the package manager
at all…

The downside to not using the package manager is that upgrading to new
QNX releases becomes much harder and tedious…

Oh well, thanks for a workable though somewhat clumsy solution,
Bill

How incredibly timely!

After losing my QNXrtp in an fs-pkg experiment, I was thinking through
the backup problem (news:3a8410fe.712118@inn.qnx.com - pretty much
everything you guys thought of):

a shell script to parse /var/pkg/spill, and dump the spilled files.
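
A minimal form of that script is just find feeding the archiver a file list. A sketch (tar -T shown in place of pax, and a temp tree standing in for /var/pkg/spill):

```shell
#!/bin/sh
# "Parse /var/pkg/spill and dump the spilled files": enumerate the
# regular files under the spill root and archive exactly that list.
SPILL=$(mktemp -d)               # stand-in for /var/pkg/spill
mkdir -p "$SPILL/usr/bin"
echo patched > "$SPILL/usr/bin/tool"
echo patched > "$SPILL/usr/bin/other"

out=$(mktemp)
# find prints one relative path per line; tar -T - reads that list
# from stdin, so only the spilled files land in the archive.
# (File names containing newlines would need find -print0 handling.)
( cd "$SPILL" && find . -type f -print | tar -cf "$out" -T - )
tar -tf "$out"
```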

BUT I was thinking (your ‘appear as symbolic links’ idea may be better):

if resource managers are in charge of authentication,
and you could link/rename your backup (pax, tar, cpio) utils to
backupNoFsPkg,

then fs-pkg could look for *NoFsPkg and not virtualize the package
filesystem for that process.

I was also just realizing that SCO OpenServer looks like it has symbolic
links everywhere: /etc/utmp → /var/…something…/SCO5.0.2Dp/…/etc/utmp


Well, if you are going to mess around with fs-pkg in order to support
backups, maybe fs-pkg could simply make another file path available that
would have the entire local machine beneath it - as fs-pkg sees it, before
re-directions - maybe /sysback or /locimg or /abs or … If done properly,
could this also allow for “restore-on-the-fly” as well as
“backup-on-the-fly”?
Just a thought, I have not thought through all the ramifications very well
yet…
Then backups revert to being pretty incredibly simple again. AND curious
sysadmins who really want to see the filesystem without fs-pkg re-directions
could still look at it while the other users are getting their re-directions
to their heart’s content… 8-)

(Of course there are probably 17 million reasons why this fairly simple
solution would not work - I am waiting to hear them…)



Under /proc/mount you can see this, but it isn’t very clear which directory
to look under. My devb-eide driver has a pid of 6, so if I look under
/proc/mount/0,6,7,8,0 I can see my hard drive without the package
filesystem. The problem is that this directory name is dynamic, so it isn’t
very good for a backup script.
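
Since the entry name changes with the driver’s pid, a backup script would have to glob for it rather than hard-code it. A sketch of just the discovery step; the fake /proc/mount tree here is a stand-in, and the “0,6,7,8,0” entry name is taken from the example above:

```shell
#!/bin/sh
# The /proc/mount entry for the raw filesystem view is named after
# values (node,pid,chid,handle,ftype) that change between boots, so
# discover it by pattern instead of hard-coding "0,6,7,8,0".
PROC_MOUNT=$(mktemp -d)               # stand-in for /proc/mount
mkdir -p "$PROC_MOUNT/0,6,7,8,0"      # the entry devb-eide happened to get

found=""
for d in "$PROC_MOUNT"/[0-9]*,*; do   # match the "n,n,n,n,n" shape
    [ -d "$d" ] && found=$d
done
echo "raw filesystem view: $found"
```

On a real box there may be several matching entries (one per filesystem manager), so a script would still need some way to pick the right one.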

Jason


<jclarke@qnx.com> wrote in message news:96jfhk$3i3$1@nntp.qnx.com

Under /proc/mount you can see this, but it isn’t very clear which directory
to look under. My devb-eide driver has a pid of 6, so if I look under
/proc/mount/0,6,7,8,0 I can see my hard drive without the package
filesystem. The problem is that this directory name is dynamic, so it isn’t
very good for a backup script.

Well, that is not the entire thing I was looking for: that only shows a
single hard drive…
If your system has a couple of drives, plus (maybe) some other file systems,
that would not be sufficient…
I was thinking more along the lines of: if the user (whoever that might be)
simply puts the /abs (or whatever) prefix on a file name, fs-pkg would
bypass the re-direction, and possibly also make sure that it was a local
file. Of course, this would only work on absolute file names; it would not
make sense on relative file names.



Steve Munnings, Corman Technologies <> steve@cormantech.com> > wrote:
Well, if you are going to mess around with fs-pkg in order to support
backups, maybe fs-pkg could simply make another file path available
that would have the entire local machine beneath it - as fs-pkg sees
it, before redirections - maybe /sysback or /locimg or /abs or … If
done properly, could this also allow for “restore-on-the-fly” as well
as “backup-on-the-fly”? Just a thought; I have not thought through all
the ramifications very well yet…
Then backups revert to being pretty incredibly simple again. AND
curious sysadmins who really want to see the filesystem without fs-pkg
redirections could still look at it while the other users are getting
their redirections to their heart’s content… :sunglasses:

(Of course there are probably 17 million reasons why this fairly simple
solution would not work - I am waiting to hear them…)


Michael J. Ferrador <n2kra@orn.com> wrote in message
news:96hfd0$4so$1@inn.qnx.com…
How incredibly timely!

After losing my QNX RTP install in an fs-pkg experiment, I was thinking
through the backup problem in news:3a8410fe.712118@inn.qnx.com
(pretty much everything you guys thought of):

a shell script to parse /var/pkg/spill and dump the spilled files.
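If /var/pkg/spill is (or can be reduced to) a plain list of pathnames, one per line - an assumption, since its real format isn’t shown in this thread - the dump step could be little more than feeding that list to tar or pax. A sketch against a made-up list:

```shell
# Archive every file named in a spill-style list (one path per line).
# The list location and its contents here are invented for illustration.
WORK=$(mktemp -d)
mkdir -p "$WORK/etc"
echo "locally modified config" > "$WORK/etc/rc.local"
printf '%s\n' "$WORK/etc/rc.local" > "$WORK/spill.list"

# GNU tar reads a list of names with -T; with pax you would loop over the list.
tar -cf "$WORK/spill-backup.tar" -T "$WORK/spill.list"
tar -tf "$WORK/spill-backup.tar"
```

Restoring the spilled files is then just the matching extract, on top of whatever the repositories already provide.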

BUT I was thinking (your “appear as symbolic links” idea may be
better): if resource managers are in charge of authentication, you
could link/rename your backup (pax, tar, cpio) utilities to
backupNoFsPkg, and pkg-fs could look for *NoFsPkg and not virtualize
the package filesystem for that process.

I was also just realizing that SCO OpenServer looks like it has
symbolic links everywhere:
/etc/utmp → /var/…something…/SCO5.0.2Dp/…/etc/utmp

William M. Derby Jr. <derbyw@derbtronics.com> wrote in message
news:3a8c2afc.554287@inn.qnx.com…
On 15 Feb 2001 16:28:08 GMT, Thomas Fletcher <thomasf@qnx.com> wrote:

Mario Charest <mcharest@antispam_zinformatic.com> wrote:

as jclarke has already mentioned. If you have a 24/7 environment
(in which case is the package filesystem something you want to
be using?)

I’m confused by that statement. Do you mean that in a 24/7 environment
the package filesystem isn’t reliable enough, or that it makes stuff
like backup rather complex?

It was my impression the package filesystem was to be used not only for
the development environment, but mostly to allow better handling of
upgrades in the field, not to mention easier tracking of the versions
being installed. Was my assumption out to lunch?

The package filesystem is still experiencing growing pains as a product
which is currently targeted at our desktop RTP systems (i.e. it doesn’t
ship with Neutrino 2.1, for example). It does have all of the benefits
that you mention (and that we have preached), but has additional
overhead compared to using the Neutrino filesystem directly to provide
layering.

I’m not saying that it isn’t what you want to use for your
environment … it might be a perfect fit. I’m saying that
you want to decide what level of technology best meets your
needs.

In any case the Neutrino filesystem (with or without the
addition of the package filesystem) makes creating backups
a bit more work to plan out and integrate into a product.


Well, thinking about the 24/7 thing - if you are running an
embedded-type system, you probably know what utilities could be used
during the backup and can leave them in /dev/shmem… Most other
processes will already be running (i.e. in memory) anyway, so slaying
the package manager may not be the end of the world… Now, if your 24/7
system allows general unrestricted shells… then you have more
problems…
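That procedure - tools into /dev/shmem, slay fs-pkg, archive everything except the virtual mountpoints - might look something like the sketch below. The exclude list and the fs-pkg restart are assumptions taken from this thread, and a scratch tree stands in for / so the exclusion logic can be tried safely on any system:

```shell
# Archive a root tree while skipping the virtual/volatile mountpoints.
# ROOT is a scratch stand-in for /; on a real box you would first copy
# tar/gzip/slay into /dev/shmem, prepend it to PATH, and "slay fs-pkg".
ROOT=$(mktemp -d)
OUT=$(mktemp)
mkdir -p "$ROOT/etc" "$ROOT/proc" "$ROOT/dev" "$ROOT/fs" "$ROOT/net"
echo "real data" > "$ROOT/etc/passwd"
echo "virtual junk" > "$ROOT/proc/1"

( cd "$ROOT" && tar -cf "$OUT" \
      --exclude=./fs --exclude=./net --exclude=./proc --exclude=./dev . )
tar -tf "$OUT"
# ...then restart the package filesystem (fs-pkg) once the archive is done.
```

The excluded directories match the list given earlier in the thread; ./etc/passwd lands in the archive while ./proc/1 is skipped along with the rest of the excluded trees.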

The other thing would be adding an ioctl or some such message to the
package filesystem which would make all “virtual” files appear as
symbolic links or something else which could be detected via pax and
ignored… Then you wouldn’t need to shut down the package manager
at all…

The downside to not using the package manager is that upgrading to new
QNX releases becomes much harder and more tedious…

Oh well, thanks for a workable though somewhat clumsy solution,
Bill


On Thu, 15 Feb 2001 15:58:33 -0500, “Michael J. Ferrador”
<n2kra@orn.com> wrote:

How incredibly timely!

After losing my QNX RTP install in an fs-pkg experiment, I was thinking
through the backup problem in news:3a8410fe.712118@inn.qnx.com
(pretty much everything you guys thought of):

a shell script to parse /var/pkg/spill and dump the spilled files.

BUT I was thinking (your “appear as symbolic links” idea may be
better): if resource managers are in charge of authentication, you
could link/rename your backup (pax, tar, cpio) utilities to
backupNoFsPkg, and pkg-fs could look for *NoFsPkg and not virtualize
the package filesystem for that process.

This is really funny, as the reason I got interested in this is that
while trying to update QNX6 from the June release I ran out of room on
my primary partition and tried to move hunks of the system to a second
partition… This of course will fail miserably unless you move the
package repositories… so I created an additional repository on the
second partition and promptly clobbered my core repository (I’m still
not sure how/when)…

There is no clearer time to think about backups than when you’ve just
clobbered your system – LOL

Actually, a clearer understanding of the package filesystem (which I
have now) would have avoided most of the above pain – failure is how
one learns, after all…

That said, I think the package manager needs to make some concessions
to the backup process – after all, you generally (hopefully?) back up
more often than you add new packages (on a stable system, anyway).

The “symbolic links” idea per se wouldn’t really work, because then the
backup utility would skip non-package-manager links as well. It would
need to be something like a permission bit - i.e. something stat’able
that the backup utility could see – of course you would need to modify
pax, give it a new flag, etc… If this were done, the package manager
would become invisible to normal system use again (as I assume it was
intended).

Another comment in this thread mentions a separate directory where the
raw filesystem is visible. This is probably simpler to implement, but
it doesn’t have the elegance of being invisible to the admin.

I suppose another approach would be for fs-pkg to allow processes to
register with it. Registered processes would not see the virtual
filesystem. Then pax, tar, etc. could register with fs-pkg if enabled
via a command-line option, and for them and ONLY them the package
filesystem would disappear… This avoids the permission-bit mess and
would also allow other programs to look at the raw disk if needed…

Anyway, we do have a workable solution – in time we may get an elegant
one, a QNX hallmark after all… I know that RTP is a beta product and
hope these suggestions are received as constructive.

-Bill


Another comment in this thread mentions a separate directory where the
raw filesystem is visible. This is probably simpler to implement, but
it doesn’t have the elegance of being invisible to the admin.

At the risk of sounding defensive… what is inelegant about the separate
directory being visible to the sysadmin (or to anybody else who wants
to have that vision of the system)?
Well, to be technically accurate, it would be another file path, not
really a separate directory :sunglasses:

On Fri, 16 Feb 2001 10:52:37 -0500, “Steve Munnings, Corman
Technologies” <steve@cormantech.com> wrote:

snip
Another comment in this thread mentions a separate directory where the
raw filesystem is visible. This is probably simpler to implement, but
it doesn’t have the elegance of being invisible to the admin.


At the risk of sounding defensive… what is inelegant about the separate
directory being visible to the sysadmin (or to anybody else who wants
to have that vision of the system)?
Well, to be technically accurate, it would be another file path, not
really a separate directory :sunglasses:

Nothing, other than it requires you to know it’s there and that you
should ONLY back up that directory… Whereas with, say, the registration
method you would back up “the system” as you would a standard Unix
box… In the end it’s not really a big deal, but it is one more thing
the SA needs to remember as unique to QNX (and will probably trip over
the first time). To me, part of “elegance” is minimizing the
requirement for special knowledge and making things no harder than they
absolutely have to be… I mentioned your suggestion in my list because I
thought it was a good workable solution; I just thought the others
(particularly the registration idea) were more invisible to the SA… Of
course he/she still needs to know the command-line option, unless
ignoring the fs-pkg virtual files is the default…

-Bill

