I just got around to re-reading the Neutrino SysArch book after a long while.
- It says (in the ‘Neutrino implementation’ section) that the timeslice for RR
scheduling is 50 msec. That was in QNX4; Neutrino uses (ClockPeriod * 4) for the
timeslice.
By the way, why don’t you make that ‘4’ factor a startup option so it can be
tuned in the boot image? The trouble with the current approach is that if I set
a shorter ClockPeriod, it will increase overhead not only for serving timers,
but also for scheduling. Too bad…
-
The discussion of TimerTimeout() in SysArch suggests that the timer will be armed
by the kernel atomically, which would imply that it can’t expire before you have
entered the next syscall, even if you were preempted at that unfortunate moment in
between. However, the LibRef for TimerTimeout() contains a ‘Caveat’ section which
claims that the timer starts to tick as soon as you call TimerTimeout() and
warns that the preemption scenario is possible. That defeats the purpose, by the
way, since it makes TimerTimeout() no better than a regular timer…
-
It keeps talking about the process manager being ‘optional’, but in too vague a
way. I have yet to see how that is possible. There is not a single word anywhere
else explaining how to implement such a thing. And perhaps it would be
interesting… I’d have some fun writing my own process manager on top of the
microkernel.
-
OTOH, all mention of the two ‘intermediate’ memory models (‘system separated
from users’ and ‘users separated from the system and each other, but without
virtual address spaces’) was removed. I guess they were never implemented; any
comments on why?
-
In case it was forgotten, the docs for cam-disk (even the updated ones) contain
two contradictory ‘defaults’. Of the two, ‘always report 64 heads’ appears to
be in effect, rather than ‘ask the BIOS’.
Igor Kovalenko <kovalenko@home.com> wrote:
- The discussion of TimerTimeout() in SysArch suggests that the timer will be armed
by the kernel atomically, which would imply that it can’t expire before you have
entered the next syscall, even if you were preempted at that unfortunate moment in
between. However, the LibRef for TimerTimeout() contains a ‘Caveat’ section which
claims that the timer starts to tick as soon as you call TimerTimeout() and
warns that the preemption scenario is possible. That defeats the purpose, by the
way, since it makes TimerTimeout() no better than a regular timer…
It improves on using a regular timer with signal in that if the timer
expires before you enter the blocking call (due to pre-emption) you
will be unblocked immediately with TimerTimeout, but with the signal
model, you will never be unblocked by your timeout. That is, with
TimerTimeout() you are guaranteed the unblock, but not guaranteed the
length of time blocked that you asked for. It fixes half the problem
with using a signal. (My understanding is that it was done this way
for efficiency reasons – doing it the other way would have required a
check for a pending timeout request on every kernel call, overhead we
didn’t want.)
- It keeps talking about the process manager being ‘optional’, but in too vague a
way. I have yet to see how that is possible. There is not a single word anywhere
else explaining how to implement such a thing. And perhaps it would be
interesting… I’d have some fun writing my own process manager on top of the
microkernel.
That may be a throwback to when we had thought about just releasing the
kernel as a standalone piece, without the process manager, without memory
protection. The architecture allows this – theoretically – but it isn’t
something we have any plans to release. (It was something that was being
thought about back in the days of Neutrino 1.0.)
This claim should be removed from the docs, and the docs group is aware that
they need to be updated.
- OTOH, all mention of the two ‘intermediate’ memory models (‘system separated
from users’ and ‘users separated from the system and each other, but without
virtual address spaces’) was removed. I guess they were never implemented; any
comments on why?
My understanding is that they didn’t fit with the markets we were pursuing
or expected to be pursuing, and the type of product we wanted to sell.
These two questions are tied together – stuff that was theoretically
possible with the Neutrino kernel architecture, in Neutrino 1.0, but
which we narrowed our focus away from in 2.0.
(I didn’t make the decisions above; these comments are based on my
best understanding of what decisions were made and why.)
-David
QNX Training Services
dagibbs@qnx.com
Forgot one more:
SysArch also vaguely suggests that the message-copying design does not
prevent the kernel from detecting large transfers and choosing to execute
them by page flipping instead of actual copying. That gives the
impression that Neutrino actually does this, but it does not,
because that would require copy-on-write support, which is not present.
It is misleading.
Igor Kovalenko wrote:
I just got around to re-reading the Neutrino SysArch book after a long while.
- It says (in the ‘Neutrino implementation’ section) that the timeslice for RR
scheduling is 50 msec. That was in QNX4; Neutrino uses (ClockPeriod * 4) for the
timeslice.
By the way, why don’t you make that ‘4’ factor a startup option so it can be
tuned in the boot image? The trouble with the current approach is that if I set
a shorter ClockPeriod, it will increase overhead not only for serving timers,
but also for scheduling. Too bad…
-
The discussion of TimerTimeout() in SysArch suggests that the timer will be armed
by the kernel atomically, which would imply that it can’t expire before you have
entered the next syscall, even if you were preempted at that unfortunate moment in
between. However, the LibRef for TimerTimeout() contains a ‘Caveat’ section which
claims that the timer starts to tick as soon as you call TimerTimeout() and
warns that the preemption scenario is possible. That defeats the purpose, by the
way, since it makes TimerTimeout() no better than a regular timer…
-
It keeps talking about the process manager being ‘optional’, but in too vague a
way. I have yet to see how that is possible. There is not a single word anywhere
else explaining how to implement such a thing. And perhaps it would be
interesting… I’d have some fun writing my own process manager on top of the
microkernel.
-
OTOH, all mention of the two ‘intermediate’ memory models (‘system separated
from users’ and ‘users separated from the system and each other, but without
virtual address spaces’) was removed. I guess they were never implemented; any
comments on why?
-
In case it was forgotten, the docs for cam-disk (even the updated ones) contain
two contradictory ‘defaults’. Of the two, ‘always report 64 heads’ appears to
be in effect, rather than ‘ask the BIOS’.
Igor Kovalenko <Igor.Kovalenko@motorola.com> wrote:
Forgot one more:
SysArch also vaguely suggests that the message-copying design does not
prevent the kernel from detecting large transfers and choosing to execute
them by page flipping instead of actual copying.
Where are you reading?
I found this in the “Neutrino IPC”, “Message Copying” section:
Because message data is explicitly copied between address spaces (rather
than by doing page table manipulations)…
This states that the kernel will not be doing page flipping.
-David
QNX Training Services
dagibbs@qnx.com
Besides, there are problems with page flipping.
-
the offset of the data within the page probably won’t be the same in both
processes/threads.
-
I have the right to expect that the data I send won’t be
modified by the receiving process/thread (unless I specify the same buffer
for the reply). But the receiving process has the right to modify the data
once it has received it.
To use this kind of page flipping, all applications would need to know how to
use this commonly mapped buffer and what rules are involved.
David Gibbs <dagibbs@qnx.com> wrote in message
news:95s5go$913$1@nntp.qnx.com…
Igor Kovalenko <Igor.Kovalenko@motorola.com> wrote:
Forgot one more:
SysArch also vaguely suggests that the message-copying design does not
prevent the kernel from detecting large transfers and choosing to execute
them by page flipping instead of actual copying.
Where are you reading?
I found this in the “Neutrino IPC”, “Message Copying” section:
Because message data is explicitly copied between address spaces (rather
than by doing page table manipulations)…
This states that the kernel will not be doing page flipping.
-David
QNX Training Services
dagibbs@qnx.com
Bill at Sierra Design <BC@sierradesign.com> wrote:
Besides, there are problems with page flipping.
-
the offset of the data within the page probably won’t be the same in both
processes/threads.
-
I have the right to expect that the data I send won’t be
modified by the receiving process/thread (unless I specify the same buffer
for the reply). But the receiving process has the right to modify the data
once it has received it.
Also, if I receive directly into DMA safe memory, it sure better still be
DMA safe memory after that receive. And, the physical address better not
have shifted around.
Lots of reasons to not do page-flipping.
-David
QNX Training Services
dagibbs@qnx.com
David Gibbs wrote:
Bill at Sierra Design <BC@sierradesign.com> wrote:
Besides, there are problems with page flipping.
- the offset of the data within the page probably won’t be the same in both
processes/threads.
It should be page-aligned, actually. But if you’re writing a resource
manager that anticipates rather large transfers and want to benefit from
such a feature, you are probably smart enough to place your buffers on a
page boundary. The same applies to clients.
- I have the right to expect that the data I send won’t be
modified by the receiving process/thread (unless I specify the same buffer
for the reply). But the receiving process has the right to modify the data
once it has received it.
That is why copy-on-write is needed (which means that if either process tries
to modify a shared page, it gets a private copy of that page).
Also, if I receive directly into DMA safe memory, it sure better still be
DMA safe memory after that receive. And, the physical address better not
have shifted around.
That is not a problem. The kernel knows whether a buffer’s memory was mapped
with MAP_PHYS or not and can take that into account when making the decision.
Lots of reasons to not do page-flipping.
There are reasons to do it too, and some systems do it (e.g., Mach), but
I’m not arguing about that. If you believe that it’s not worth it, that just
makes the statement even more misleading. Not to mention that the
previous paragraph contradicts it directly by pointing out that Neutrino
benefits from not doing page flipping, because message buffers can be
on the stack rather than in reserved page-aligned memory.
SysArch/NeutrinoMicrokernel/NeutrinoIPC/MessageCopying
Last paragraph before SimpleMessages, with finger in front of it 
But Igor,
if you are going to code such restrictions into both client and server, just
use commonly mapped memory. I have done this with audio buffers, though
that was in QNX4, not Neutrino.
Igor Kovalenko <Igor.Kovalenko@motorola.com> wrote in message
news:3A81AD35.888CAB96@motorola.com…
David Gibbs wrote:
Bill at Sierra Design <BC@sierradesign.com> wrote:
Besides, there are problems with page flipping.
- the offset of the data within the page probably won’t be the same in both
processes/threads.
It should be page-aligned, actually. But if you’re writing a resource
manager that anticipates rather large transfers and want to benefit from
such a feature, you are probably smart enough to place your buffers on a
page boundary. The same applies to clients.
- I have the right to expect that the data I send won’t be
modified by the receiving process/thread (unless I specify the same buffer
for the reply). But the receiving process has the right to modify the data
once it has received it.
That is why copy-on-write is needed (which means that if either process tries
to modify a shared page, it gets a private copy of that page).
Also, if I receive directly into DMA safe memory, it sure better still be
DMA safe memory after that receive. And, the physical address better not
have shifted around.
That is not a problem. The kernel knows whether a buffer’s memory was mapped
with MAP_PHYS or not and can take that into account when making the decision.
Lots of reasons to not do page-flipping.
There are reasons to do it too, and some systems do it (e.g., Mach), but
I’m not arguing about that. If you believe that it’s not worth it, that just
makes the statement even more misleading. Not to mention that the
previous paragraph contradicts it directly by pointing out that Neutrino
benefits from not doing page flipping, because message buffers can be
on the stack rather than in reserved page-aligned memory.
SysArch/NeutrinoMicrokernel/NeutrinoIPC/MessageCopying
Last paragraph before SimpleMessages, with finger in front of it
Igor Kovalenko <Igor.Kovalenko@motorola.com> wrote:
Lots of reasons to not do page-flipping.
There are reasons to do it too.
Yes, there are. Usually as an efficiency issue – if you can page flip
faster than you can copy, it will speed up movement of data.
makes the statement even more misleading. Not to mention that the
previous paragraph contradicts it directly by pointing out that Neutrino
benefits from not doing page flipping, because message buffers can be
on the stack rather than in reserved page-aligned memory.
SysArch/NeutrinoMicrokernel/NeutrinoIPC/MessageCopying
Last paragraph before SimpleMessages, with finger in front of it
Thanks, those definitely do conflict, don’t they? I’ll point it out to the docs group.
(I think one is talking about the actual implementation – we don’t page swap;
the other is talking about the theoretical – the definition of the interface
doesn’t prevent us from implementing page swapping, should we choose to
do so. Of course, we’d have to do it intelligently.)
-David
QNX Training Services
dagibbs@qnx.com