Robert.
I don’t want to get into the functionality of the profiler too much, but
the test program in your source is going to spend 99.9% of its time in
the libc functions usleep() (which will end up being a TimerTimeout) and
printf() (which calls all sorts of other libc functions).
Since libc isn’t profiled, this will produce varying and unclear results.
Robert Muil wrote:
David,
I hope this screenshot is viewable. It should show how complicated a
process it is to profile even a stupidly simple program.
To determine how much time is spent in do_loop(), and where that time is
spent, I first must find out what the resolve_rels(), lookup(), hash(),
ConnectAttach(), static_strcmp(), __SysCpupageGet(), and _dladdr()
functions are. Most are not even mentioned in the documentation (although
this is hard to be certain of, because there is no index).
I must then work out where they are being called from.
I must then work out how much time they are using, presumably by
estimating percentage CPU from the %Time Usage bar and calculating time
in functions as a percentage of the overall program run time (which
would need to be determined, I suppose, with a separate tool).
I hope that I am wrong about this. If so, please tell me how I can use
the information shown in the Profiler perspective to tell how much time
has been used by the do_loop() function, and where that time was spent.
Also note the coloured bars in the C editor. What are these supposed to
suggest, beyond that QNX likes the sound of its own name and has coloured
in the margin to highlight it?
Robert.
“David Gibbs” <dagibbs@qnx.com> wrote in message
news:cnl3nr$5n4$1@inn.qnx.com...
Robert Muil <r.muil@crcmining.com.au> wrote:
David,
I am unable to determine how long a program spends, cumulatively, in a
function.
Total time directly in a function is given by the total time in the
Sampling Information view.
Or, do you mean for a function, and all sub-functions, cumulated to that
function?
It does not help me to know that 5% of my time was in ldiv() or .plt()
(whatever they are). I want to know that 100% was in main(), then 90% of
that was in do_loop(), etc.
Sounds like that’s what you want. That can’t be done. Well, in theory
it can be done, but the data collection required to provide it would be
enormous.
What the profiler does is collect two types of information: it annotates
any code compiled with profiling to get function call counts (basically
call pairs), and it samples the execution of the program from the timer
interrupt, storing the current IP and active thread at that point. Then,
the time usage is “estimated” based on the sampling, but it doesn’t
know the call path taken to get to the function. To get the cumulative
sub-function usage you’re asking for, a full stack backtrace would have
to be collected and stored at every sample point; the overhead to collect
and store that information would be quite impressive, and would heavily
impact whatever you were trying to profile. Also, the tool chain (GCC)
doesn’t supply tools to do that.
Why would a full backtrace be needed?
Consider the following:
int func1() { /* use lots of CPU */ return 0; }
void funca() { func1(); }
void funcb() { func1(); }
int main() { while (1) { funca(); funcb(); } }
Now, is CPU time spent in func1() attributed to funca() or funcb()?
Without a stack backtrace, you can’t know.
Also, it does not seem to correctly read the symbol table. I have a very
simple program to test, which just loops and does a few printfs. When I
profile it, the calling information only displays call information for
the source I compiled. For example, it does not tell me that do_loop()
called printf().
Nope, it doesn’t. printf() is in our library, so it’s not instrumented.
If I understand it properly, the call information is put in the prefix
of the called function, so any of your functions that get called should
have call-count information. But if you call functions from our library,
from source files not compiled with -p, or from your libraries not
compiled with -p, you won’t get call-count information for those.
The sampling information is all over the place. If I reduce the
iterations in do_loop(), the sampling information doesn’t even mention
printf(). It never mentions usleep(). With a higher number of iterations
(like 99999), I get a couple of little coloured bars in the text editor,
but they don’t correspond to where the program would have run. I only
see 1 green and 1 blue bar - no breakdown at all. With 999 iterations,
I don’t get any coloured bars.
I ran it, and I got coloured bars with 999 iterations. But I ran it on
a pretty slow CPU target (a VMware session, in fact). I got a little
bit of CPU attributed to the loop in do_loop(), but looking at the
Sampling Information, most of the CPU was attributed to some unknown
function; I’m going to have to check with a developer as to what is
going on there. I’m pretty sure that would be the printf().
(And you won’t see the time in a function attributed to the
function-call line in the editor.)
Your usleep() is 1000 usec, which is 1 ms; with a sampling rate of
1 ms, I wouldn’t expect much attributed to usleep(). It might be another
of the “unknown” functions I’m seeing.
I have attached a screenshot of the editor after a profile. This is
about as much useful information as I can get out of it.
I tried to view the screenshot, but got a garbled image.
There is no way to look at the profiling data statistically. No logs or
textual data, such as the actual numbers that the %Time Usage bars are
drawn from. For CPU usage, all I can get are little coloured bars, and
then only if I am persevering and lucky.
Are you just looking at the editor for the sampling data, or are you
actually using the QNX Application Profiler perspective, which includes
the Sampling Information and Call Information views which both provide
statistics summaries?
When you launched with profiling, did you click the “Switch to this
Tool’s Perspective on Launch” choice to open it automatically?
This is not a tool worthy of the advertising hype that populates the
help documentation. If the documents spent as much time telling me how
to use their brilliant, time-saving, modular, single, consistent,
integrated environment as they spent placing adjectives in front of
their products, I might have less to complain about. Unfortunately, it
seems the deficiencies are in the product also, not only the
documentation. This is especially true of the profiling and
update/installation perspectives.
The update/installation perspective is core Eclipse, and should be
documented in the Eclipse docs, through the Help->Help Contents menu;
in particular, I think this is covered in the Workbench User’s Guide →
Tasks → Updating features with the update manager.
The IDE docs seem to be task-oriented, rather than reference-oriented.
Just try to save a source file to My Documents quickly, or open an
external C file in an existing editor.
That is an Eclipse paradigm: it works within a subset of the directory
structure, the workspace, and expects everything to be there.
(I think this is not uncommon with IDEs; they import/export stuff,
but only really deal with stuff that is in their more limited
view/working area.)
I found it frustrating, too, though.
Hope some of this helps.
-David
David Gibbs
QNX Training Services
dagibbs@qnx.com