How to boot with -ptcpip [was: netstat and route]

Wayne Fisher <wayne.fisher@focusautomation.com> wrote:

I expect that the answer may also be different depending on the CPU
selected. I’m interested in the x86 and PowerPC architectures.

QSSL, can we get an authoritative answer?

hey Wayne…

Well, here is an un-authoritative answer from someone @ QSSL. ;-)

Right now the kernel treats the address space as 32-bit on all platforms.
It reserves 500M of the virtual address space for its own purposes.
There is also a chunk of space at the bottom of the virtual address space
used for stack, but I forget the exact size of that area. So your
theoretical maximum area will be 3.5G - sizeof(code+data) - sizeof(stack area).
However, this is going to be CPU dependent. MIPS, for example, has that
wacky 500M partitioning. Things are also complicated by the locations
that are used for shared objects vs. static code/data. Finally, you should
also be asking what is the largest single address space vs. the sum total
of all address space that can be mmap()ed.

chris

cdm@qnx.com                “The faster I go, the behinder I get.”
Chris McKillop                     -- Lewis Carroll --
Software Engineer, QSSL
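
(Editorial aside, not part of Chris’s post: his closing distinction can be
probed empirically. This sketch binary-searches for the largest single
anonymous mapping, then maps 64M chunks until failure to estimate the total.
It assumes a POSIX system with MAP_ANON and deliberately leaks the probe
mappings.)

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Binary search for the largest single anonymous mapping. */
        size_t lo = 0, hi = (size_t)1 << 31;   /* probe up to 2G */
        while (hi - lo > 4096) {
            size_t mid = lo + (hi - lo) / 2;
            void *p = mmap(NULL, mid, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANON, -1, 0);
            if (p == MAP_FAILED) {
                hi = mid;
            } else {
                munmap(p, mid);
                lo = mid;
            }
        }
        printf("largest single mapping: ~%lu MB\n", (unsigned long)(lo >> 20));

        /* Sum of all mappings: grab 64M chunks until mmap() fails. */
        size_t total = 0, chunk = (size_t)64 << 20;
        while (mmap(NULL, chunk, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANON, -1, 0) != MAP_FAILED) {
            total += chunk;   /* intentionally leaked; exit cleans up */
        }
        printf("sum of all mappings:    ~%lu MB\n", (unsigned long)(total >> 20));
        return 0;
    }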

Wayne Fisher <wayne.fisher@focusautomation.com> wrote:

But, if QNX6 uses 64 bits behind the MMU then it’s theoretically possible
that a full 32 bits of addressing could be allocated to PCI devices
(assuming, of course, that the CPU itself can address more than 4GB).
However, with processes limited to 4GB of address space, no one process could
address all 32 bits’ worth of the PCI bus.

The current implementation only supports 32 bits of physical address. This
will be increased in the future, but there’s no time frame for it yet.


Brian Stecher (bstecher@qnx.com)          QNX Software Systems, Ltd.
phone: +1 (613) 591-0931 (voice)          175 Terence Matthews Cr.
       +1 (613) 591-3579 (fax)            Kanata, Ontario, Canada K2M 1W8

Previously, Bill Caroselli wrote in qdn.public.qnxrtp.os:

Not that having all that much address space is an issue for me, but . . . .

Isn’t it really 4 GB * 65536 segments?

Even in flat model you can point a segment register to a different segment.
You just don’t (usually) need to.

Bill, I think the issue is PHYSICAL memory. Your computer might support
more than 4GB of memory, and there might be enough address lines for the
CPU to address it, but the NTO memory management might not be set up to
handle more than 4GB. This is a natural limitation, just like when 16 bits
ran out of room at 64K. The solution is clunkier code to deal with multiple
4GB hunks of memory, or to wait for 64-bit processors.


Mitchell Schoenbrun --------- maschoen@pobox.com

Not that having all that much address space is an issue for me, but . . . .

Isn’t it really 4 GB * 65536 segments?

Even in flat model you can point a segment register to a different segment.
You just don’t (usually) need to.


Bill Caroselli - Sattel Global Networks
1-818-709-6201 ext 122



“Chris McKillop” <cdm@qnx.com> wrote in message
news:9birk7$83h$1@nntp.qnx.com

[Chris’s post, quoted in full above, snipped.]

Well, when I can’t fit all of my girlfriend’s phone numbers in 4 GB of memory
then I’ll come bitching to QNX to fix it. In the meantime I’m not too
worried.

P.S. Don’t tell my wife.
Just kidding honey ;~)


Bill Caroselli - Sattel Global Networks
1-818-709-6201 ext 122



“Mitchell Schoenbrun” <maschoen@pobox.com> wrote in message
news:Voyager.010418065245.23784C@node1…

[Mitchell’s reply, quoted in full above, snipped.]

I’m jumping in completely off topic. I’m working on a QNX4 project but I suspect
the same problem would arise on QNX6.

I have a PCI card with memory mapped registers. It maps them to physical address
0xEFEFFD80. I need to mmap that address so I can read/write its registers. But
the off_t for the last argument of mmap won’t reach that far. Any suggestions?
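
(Editorial aside, not part of Dean’s post: the trouble is simply that
0xEFEFFD80 is above 0x7FFFFFFF, so it doesn’t fit in QNX4’s signed 32-bit
off_t. A tiny demonstration, with int32_t standing in for off_t:)

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Anything above 0x7FFFFFFF wraps negative when treated as signed. */
        int32_t off = (int32_t)0xEFEFFD80UL;   /* stand-in for QNX4's off_t */
        printf("0xEFEFFD80 as signed 32-bit: %ld\n", (long)off);
        /* prints -269484672 -- mmap() would see a negative offset */
        return 0;
    }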

Mitchell Schoenbrun wrote:

[Mitchell’s reply, quoted in full above, snipped.]

“Brian Stecher” <bstecher@qnx.com> wrote in message
news:9bk0mt$sk9$1@nntp.qnx.com

Wayne Fisher <wayne.fisher@focusautomation.com> wrote:

[Wayne’s question, quoted in full above, snipped.]

The current implementation only supports 32 bits of physical address. This
will be increased in the future, but there’s no time frame for it yet.

Ok, I think that I’m getting the picture. It goes something like this:

  4096 MB   Max. physical address range.
  -512 MB   Reserved for OS (for at least some of the supported processors).
    -M MB   Size of physical memory.
  ---------
     P MB   Space left for PCI.

Then out of the PCI space (P) we need to leave room for the network card,
and any other devices we might have. Now our cards will probably have 2
memory address blocks each and we need to install 2 cards. So, we could try
the following:

  P / ((2 * 2) + 1) = B

Then round B down to the nearest power of 2 to get B’, the largest address
block that our devices can request.

As an example,

  4096 MB
  -512 MB
  -128 MB
  ---------
  3456 MB = P

  3456 / ((2 * 2) + 1) = 3456 / 5 = 691 = B

Then, B’ = 512MB.

So, it looks like each card could request 2 blocks of 512MB each for a total
of 2GB for our devices. This still leaves plenty of PCI and CPU address
space available for the rest of the PCI devices.

Is this right? Or, did I miss something?

Wayne Fisher
Team Leader/Architect - Core Software
Focus Automation Systems Inc.
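
(Editorial aside: the B -> B’ rounding step above is easy to get wrong by
hand, so here is a minimal sketch of the whole budget, with floor_pow2() as
a hypothetical helper rather than anything from the thread:)

    #include <stdio.h>
    #include <stdint.h>

    /* Round v down to the nearest power of two (the B -> B' step). */
    static uint32_t floor_pow2(uint32_t v)
    {
        uint32_t p = 1;
        while (p <= v / 2)
            p <<= 1;
        return v ? p : 0;
    }

    int main(void)
    {
        uint32_t P = 4096 - 512 - 128;      /* MB left for PCI   */
        uint32_t B = P / ((2 * 2) + 1);     /* 3456 / 5 = 691 MB */
        printf("B = %lu MB, B' = %lu MB\n",
               (unsigned long)B, (unsigned long)floor_pow2(B));
        /* prints B = 691 MB, B' = 512 MB */
        return 0;
    }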

Wayne Fisher <wayne.fisher@focusautomation.com> wrote:

So, it looks like each card could request 2 blocks of 512MB each for a total
of 2GB for our devices. This still leaves plenty of PCI and CPU address
space available for the rest of the PCI devices.

Is this right? Or, did I miss something?

One other thing to watch: many non-x86 boards are wired such
that there is a mapping from CPU address space to PCI address space.

I’ve seen one PPC board where the PCI bus was mapped to
0xc0000000 physical. In other words, a device whose aperture
is located at 0x08000000 in PCI memory space actually shows
up at 0xc8000000 in the CPU’s physical address space.

This means there is a limit (on this particular piece of hardware)
of 1 Gig (0xc0000000 - 0xffffffff) of PCI aperture space.
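
(Editorial aside: on a board wired like that, the translation is just an
offset from the window base. A trivial sketch, with the 0xc0000000 base
taken from the post above:)

    #include <stdio.h>
    #include <stdint.h>

    #define PCI_WINDOW_BASE 0xc0000000UL   /* board-specific window base */

    int main(void)
    {
        uint32_t bar = 0x08000000UL;       /* aperture in PCI memory space */
        uint32_t cpu_phys = PCI_WINDOW_BASE + bar;
        /* prints 0xc8000000 -- the address the CPU actually sees */
        printf("CPU physical: 0x%08lx\n", (unsigned long)cpu_phys);
        return 0;
    }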

Hi Wayne,

What kind of card are you developing that you want to put two 512MB buffers
on it?


Bill Caroselli - Sattel Global Networks
1-818-709-6201 ext 122



“Wayne Fisher” <wayne.fisher@focusautomation.com> wrote in message
news:9bl0rd$k95$1@inn.qnx.com

[Wayne’s PCI address-space budget, quoted in full above, snipped.]

Dean Douthat <ddouthat@faac.com> wrote:

I’m jumping in completely off topic. I’m working on a QNX4 project
but I suspect the same problem would arise on QNX6.



I have a PCI card with memory mapped registers. It maps them to
physical address 0xEFEFFD80. I need to mmap that address so I can
read/write its registers. But the off_t for the last argument of mmap
won’t reach that far. Any suggestions?

Under QNX6, you’re fine because the mmap_device_memory() function takes
a uint64_t for the address.

Under QNX4, well, I’d try passing that value as a (negative) signed int, and
hope that the Process manager treats it as unsigned when actually giving
the mapping to you. It might just work.

The other question -- does the card have to live with the registers that
high? Can they be reconfigured lower in any way? (Maybe by talking to the
PCI controller, you could re-assign that memory range. The BIOS does
come up with default values, but I think at least some can be re-arranged.)

-David

QNX Training Services
dagibbs@qnx.com
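
(Editorial aside: a minimal QNX6 sketch of what David describes. The
physical base is the one from Dean’s post; the register-block length and
the read of a 32-bit status register are assumptions for illustration.)

    #include <stdio.h>
    #include <stdint.h>
    #include <sys/mman.h>   /* QNX Neutrino: mmap_device_memory() */

    #define REG_PHYS 0xEFEFFD80ULL   /* physical base from Dean's post */
    #define REG_LEN  0x80            /* assumed size of register block */

    int main(void)
    {
        /* mmap_device_memory() takes a uint64_t physical address, so the
         * high bit of 0xEFEFFD80 is harmless here, unlike QNX4's off_t. */
        volatile uint32_t *regs = mmap_device_memory(NULL, REG_LEN,
                PROT_READ | PROT_WRITE | PROT_NOCACHE, 0, REG_PHYS);
        if (regs == MAP_FAILED) {
            perror("mmap_device_memory");
            return 1;
        }
        uint32_t status = regs[0];   /* read the first 32-bit register */
        printf("status = 0x%08lx\n", (unsigned long)status);
        munmap_device_memory((void *)regs, REG_LEN);
        return 0;
    }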

Thanks David. I was able to get them moved to lower values. Today, I got some
advice from the card manufacturer and it turns out there are redundant mappings
in another PCI base address. I didn’t see the redundant mappings because they
were reported as I/O mapped (a bug in the hardware) and, of course, didn’t work
when I tried to do inp/outp on them. Isn’t hardware fun! :-(

David Gibbs wrote:

[David’s reply, quoted in full above, snipped.]

“Bill Caroselli” <Bill@Sattel.com> wrote in message
news:9blb3u$qc9$1@inn.qnx.com

Hi Wayne,

What kind of card are you developing that you want to put two 512MB buffers
on it?

It is a custom high-performance linescan image processing board. We are
planning to have six 200,000-gate software-programmable FPGAs per board. Each
FPGA has up to four memory banks connected to it - three of which can be
DRAM. We are expecting to populate some of these banks with 128 Mbytes of
DRAM per bank.

We want to have random read/write access to some of the DRAM banks from
software, preferably without having to bank-switch it. We’re planning to
move extracted feature data from the boards using DMA but the random
accesses don’t DMA very well.

A fully configured system will have up to 11 of these boards spread across 6
single board computers. Basically we need to process a LOT of image data and
distill it down to a simple list of where defects in the product occur and
what type of defect it is.

Wayne

Cool. So AI type software ‘looks’ at the images for defects?


Bill Caroselli - Sattel Global Networks
1-818-709-6201 ext 122



“Wayne Fisher” <wayne.fisher@focusautomation.com> wrote in message
news:9bnctl$8g4$1@inn.qnx.com

[Wayne’s description of the full system, quoted above, snipped.]

Wayne Fisher <wayne.fisher@focusautomation.com> wrote:

[Bill’s question and Wayne’s description of the board, quoted in full above, snipped.]

Sounds way-cool. That’s quite the system. Yup… embedded means small
memory footprint, doesn’t it? :-)

-David

QNX Training Services
dagibbs@qnx.com

“David Gibbs” <dagibbs@qnx.com> wrote in message
news:9bnf54$e76$1@nntp.qnx.com

Sounds way-cool. That’s quite the system. Yup… embedded means small
memory footprint, doesn’t it? :-)

The first hard drive I ever worked on had 512 KW (16-bit words). This was
on an IBM 1130. Bill Flowers would remember one of those.

My video card today has 32 MB of RAM on it.

Now there are these tiny embedded systems that can run in a mere 128 GB ;~)

But I do still remember writing tic-tac-toe in 128 bytes of RAM on an MC6800,
in machine code!


Bill Caroselli - Sattel Global Networks
1-818-709-6201 ext 122

Wayne Fisher <wayne.fisher@focusautomation.com> wrote:
: “Brian Stecher” <bstecher@qnx.com> wrote in message
: news:9bk0mt$sk9$1@nntp.qnx.com
:> Wayne Fisher <wayne.fisher@focusautomation.com> wrote:
:> > But, if QNX6 uses 64 bits behind the MMU then it’s theoretically possible
:> > that a full 32 bits of addressing could be allocated to PCI devices
:> > (assuming, of course, that the CPU itself can address more than 4GB).
:> > However, with processes limited to 4GB of address space, no one process could
:> > address all 32 bits’ worth of the PCI bus.
:>
:> The current implementation only supports 32 bits of physical address. This
:> will be increased in the future, but there’s no time frame for it yet.

: Ok, I think that I’m getting the picture. It goes something like this:

: 4096 MB Max. physical address range.
: -512 MB Reserved for OS (for at least some of the supported
: processors)
: -M MB Size of physical memory.
: ---------
: P MB Space left for PCI.

Hi Wayne,

here’s a (hopefully) more detailed answer to your question, with some
specifics on PowerPC.

First, the virtual address space of a given process is (of course) 4GB,
of which the bottom 1GB is taken up by the kernel and related supervisor
mappings. So 3GB is available for each process.

On the hardware side, the PowerPC has a 32-bit physical address bus (except
the 7450, which has 36 bits). So the processor itself can address up
to 4GB of physical memory. Virtual “process” addresses (pointers) in the
TLB and in the MMU get translated to 32-bit physical addresses.

Now, where things get a little more complicated is when you put a bridge
in the system. The bridge does CPU<->PCI translation, and normally the
PowerPC bridges have fixed-size windows (in the 32-bit physical world) for
where memory, PCI memory, PCI I/O, ROM, etc. go. As an example, the MPC107
bridge (probably what you’re using) has

  0G - 1G      RAM
  1G - 2G      reserved
  2G - ~3.5G   PCI memory

and then further windows for PCI I/O, ROM, etc.

So the limit on PCI memory in that case is 1.5GB total,
quite a bit less than what the processor can address. Further, the memory
map usually cannot be modified (this is a CHRP memory map, BTW). Mapping
512MB in multiple processes or 1.5GB in a given process would certainly be
possible, and would not hit any limits except the RAM needed for MMU
structures.

I hope that answers the question… as an aside, inside the kernel, we keep
track of physical addresses using 64-bit variables. This allows us to
address more than 4GB physical on processors that support this (MIPS R4000,
most notably). Some day, it may also be supported on some other processors.

Sebastien


Sebastien Marineau
Netcom Architect
QNX Software Systems Ltd
(613) 271-9336

sebastien@qnx.com
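
(Editorial aside: Sebastien’s ~1.5GB MPC107 window is the number that
matters for Wayne’s plan above. A quick check, using the block counts from
Wayne’s earlier post:)

    #include <stdio.h>

    int main(void)
    {
        const unsigned window_mb = 1536;   /* ~1.5GB PCI memory window */
        const unsigned cards = 2, blocks_per_card = 2;

        unsigned need_512 = cards * blocks_per_card * 512;   /* 2048 MB */
        unsigned need_256 = cards * blocks_per_card * 256;   /* 1024 MB */

        printf("512MB blocks: need %u MB -> %s\n", need_512,
               need_512 <= window_mb ? "fits" : "does not fit");
        printf("256MB blocks: need %u MB -> %s\n", need_256,
               need_256 <= window_mb ? "fits" : "does not fit");
        return 0;
    }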

“Sebastien Marineau” <sebastien@qnx.com> wrote in message
news:9bngsg$f9k$1@nntp.qnx.com

Hi Wayne,

here’s a (hopefully) more detailed answer to your question, with some
specifics on PowerPC.

Must have been a real kaboom to get Seb talking here these days, instead of
enjoying his roadster ;-)
Still too cold up in Ottawa, I guess…

First, the virtual address space of a given process is (of course) 4GB,
of which the bottom 1GB is taken up by the kernel and related supervisor
mappings.

Now I see why IP values for procnto idle thread are so different on PPC and
x86 ;-)
They are just around 1G on PPC…

So 3GB is available for each process.

On the hardware side, the PowerPC has a 32-bit physical address bus (except
the 7450, which has 36 bits). So the processor itself can address up
to 4GB of physical memory. Virtual “process” addresses (pointers) in the
TLB and in the MMU get translated to 32-bit physical addresses.

Now, where things get a little more complicated is when you put a bridge
in the system. The bridge does CPU<->PCI translation, and normally the
PowerPC bridges have fixed-size windows (in the 32-bit physical world) for
where memory, PCI memory, PCI I/O, ROM, etc. go. As an example, the MPC107
bridge (probably what you’re using) has

  0G - 1G      RAM
  1G - 2G      reserved
  2G - ~3.5G   PCI memory

and then further windows for PCI I/O, ROM, etc.

The CHRP memory map I found in Motorola docs is somewhat different:

  0 - dram_size    RAM
  2G - 3.8125G     PCI memory
  3.8125G - 4G     everything else

It also says that the map can be changed (particularly the PCI memory size) by
programming PCI ASIC registers, at least with the Hawk and Raven ASICs. The
maximum possible PCI window size is said to be 4GB - dram_size. The practical
limit would be less, since nobody usually wants to mess much with the upper
0.1875G, but it sounds like 3GB for PCI should be possible.

BTW, on MCP7xx boards CHRP is not even the default map style after boot; PREP
style is the default. Not sure about the MPC107, and not sure what NTO does
with it when it starts…


I hope that answers the question… as an aside, inside the kernel, we keep
track of physical addresses using 64-bit variables. This allows us to
address more than 4GB physical on processors that support this (MIPS R4000,
most notably). Some day, it may also be supported on some other processors.

You aren’t talking about IA64, are you? :-)

Thanks for explanations,

- Igor

Most of the defect detection actually occurs in those large FPGAs. Today’s
microprocessors are fast but they can’t do much when you throw 600 million
pixels per second at them. So we use hardware to do the pixel rate stuff and
leave the software to collect and filter what the hardware finds.

I wouldn’t really call it AI since the rules for what should be considered a
defect are known and algorithmic in nature.

Wayne

“Bill Caroselli” <Bill@Sattel.com> wrote in message
news:9bnelf$9fp$1@inn.qnx.com

Cool. So AI type software ‘looks’ at the images for defects?



Thanks Sebastien. This is a great help and explains everything I need to
know.

Looks like we will have to keep our address blocks to less than 256MB if we
want everything to fit. This won’t bother our hardware guys much but it
will probably complicate the lives of my software group… :-(

Thanks again,

Wayne

“Sebastien Marineau” <sebastien@qnx.com> wrote in message
news:9bngsg$f9k$1@nntp.qnx.com

[Sebastien’s reply, quoted in full above, snipped.]

QNX RTP Patch C & Language Supplement Patch B (Beta Release Candidate)
now available for download!!!

Patch C is now available as a beta download from:

http://betas.qnx.com/beta

Just point your package installer at the URL.

This distribution must be installed over a Patch B system.


Release Notes for this distribution:


QNX RTP - Patch C (Based on QNX RTOS v6.0.0 Patch C)


Fixes & errata


Note: If you have an earlier release of QNX RTP, you should recompile
ALL your existing QNX RTP code with this distribution.


This section covers the following:

  • New input (devi-*) drivers
  • Photon Library
  • phs-to-ps
  • PtOSContainer
  • PxLoadImage()
  • Render Library

New input (devi-*) drivers

  • Improved touchscreen support.

Photon Library

  • Fixed crash problem re: PtFileSelection.
  • Fixed unknown symbol replacement in translation lib routines.
  • Fixed buffer overflow problem in translation routines -- the
    helpviewer can now display Japanese help pages correctly.

phs-to-ps

  • Fixed segment violation problem when processing draw streams with
    large images.
  • Added enhancements/improvements to the output of phs-to-ps and
    phs-to-escp2.
  • Fixed the behavior of offscreen contexts when used within a
    widget’s draw function (Pt_ARG_RAW_DRAW_F() of PtRaw widget).

PtOSContainer

  • Fixed problems with translation, blit, and render operations.
  • Added support for rendering images and rep-images (repeating
    images) with transparency into offscreen contexts or memory
    contexts that didn’t work correctly.

PxLoadImage()

  • Fixed segment violation problem when loading GIFs with a
    transparent color.

Render Library

  • Improved support for rendering and printing wide chars (UTF-16).
  • Updated to better support image rendering, translations, polygons,
    etc.




QNX RTP - Language Supplements (2.0 Patch B)


Fixes & errata




vpim (Japanese Input Method)

  • Added new options:

-F font
Specify default font.

-@ x,y
Specify the default position for the input window.


Note: The -F and -@ options are applied when no FEP
(Front End Processor) rectangle is defined.


  • Fixed the input window size.

cpim (Chinese Input method)

  • ESC now cancels the current input sequence.
  • Fixed a bug caused by changing the default font with the selection
    list.
  • The selection list now always stays on the screen.



Happy beta testing!!!

thanx
ben
QA
QSS