I have a miniature x86-based computer with a solid-state IDE drive. This
computer can be the brain of several different products that my company
builds.
What I’d like to do is create a smallish boot image containing just the
necessary boot files, put it onto every computer, and then place the
computers into inventory. When the techs pull a bootable computer out of
inventory to plug into whatever system they are building, I’d like them to
be able to boot it and then copy a single file over the network to the IDE
drive that “customizes” the computer, giving it a personality to perform
the proper task. This secondary image would contain my task-specific
programs and any other necessary files, and would be mounted by the boot
image. So one boot image plus one task-specific image would yield a fully
functional system, without the potential for error in copying whole
directory structures. Managing versions would be vastly easier, verifying
that the proper image was installed would be simple, and the onus of
ensuring that the image is correct would be on me (where it belongs) and
not on my techs.
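One lightweight way to get that "verify the proper image" step might be to
ship a checksum file alongside the personality image and have the startup
script compare it before mounting. A sketch (the `verify_image` helper,
the `/tmp/...` paths, and the file names are my own illustration, not QNX
conventions; the demo files stand in for the real image on the IDE drive):

```shell
#!/bin/sh
# verify_image IMAGE SUMFILE -> succeeds only if IMAGE matches the
# checksum recorded in SUMFILE (created at build time on the dev machine).
verify_image() {
    [ "$(cksum < "$1")" = "$(cat "$2")" ]
}

# Demo with placeholder files; on the target these would be the image and
# checksum copied over the network to the IDE drive.
printf 'demo payload' > /tmp/personality.img
cksum < /tmp/personality.img > /tmp/personality.cksum   # build-time step

if verify_image /tmp/personality.img /tmp/personality.cksum; then
    echo "image OK"
    # mount the image here -- QNX-specific, so shown as a comment:
    #   mount -t qnx4 /tmp/personality.img /personality
else
    echo "image corrupt or wrong version" >&2
fi
```

The same check could run on the dev machine after copying the image to the
target, so a bad network transfer is caught before the unit ships.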
I’ve done some searching and have read up on mkifs, mkefs, etc., and it
appears that what I need to do is create a file (perhaps with dd) that is
large enough to accommodate my programs, data, and necessary files
(say… 5 megabytes), then dinit this file and mount it. Once mounted, I can
work within this file as part of my development machine’s directory
structure, updating and manipulating its contents as I see fit. I then
umount the image and copy it to my embedded system, where upon boot the
startup script mounts the image…
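The steps above might look roughly like this on the dev machine (the 5 MB
size and the `personality.img` name are just examples; the dinit/mount
lines are shown as comments since they need a QNX host, and the exact
flags should be checked against the QNX utilities reference):

```shell
# Create an empty 5 MB container file for the personality image.
dd if=/dev/zero of=personality.img bs=1024 count=5120

# QNX-specific steps, run on the QNX development machine:
#   dinit personality.img                  # stamp a filesystem onto the file
#   mount -t qnx4 personality.img /mnt     # edit its contents in place
#   ...add task-specific programs and files under /mnt...
#   umount /mnt                            # then copy personality.img to the target
```

At that point the image is a single opaque file that can be copied to the
embedded system and mounted by the startup script, exactly as described.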
I have mounted a few little files on my dev machine and messed around with
them and it seems like this scheme will work.
Question is: what am I missing? Is it harder or easier than what I’ve
described?
Thanks,
Jason Farque