Experiences with VxWorks compared to QNX

“Martin Zimmerman” <camz@passageway.com> wrote in message
news:Voyager.020208235833.1935A@wooga.wooga.passageway.com

Previously, Alec Saunders wrote in qdn.public.qnxrtp.advocacy:
Toolset. According to Wind River’s May price list, your costs would
be…

Allow me to make one little jab, about something that has annoyed every
potential QNX customer, current customer, and consultant since we were first
able to buy QNX commercially.

AT LEAST WINDRIVER PUBLISHES THEIR BASE/LIST PRICES.

True. And this is a big deal. You should also publish a discount
schedule.

As a consultant, I’ve had customers call for prices for quantity X and the
very next day someone else in a different industry will call for prices on
the same quantity X and get quoted different prices. And both quotes are
accompanied by “Shhh, don’t tell anyone.”

You have no idea how much that pisses people off.


Bill Caroselli – 1(626) 824-7983
Q-TPS Consulting
QTPS@EarthLink.net

What does affect QNX sales is the lack of good tools, and of a strategy to
resolve the problem. People buy tools. Most people don’t care about OS
architecture, but people believe that mighty tools will solve their problems.
Unfortunately, by the time someone realizes that Tornado sucks it is too late,
and the reasonable ‘what the heck?’ brings WRS knocking on your door with a
new tools offer. (Just for fun, take a look at how many debuggers WRS has)…
A few times I have gone through the process of OS selection for telco projects
(here, in NA). Every time, one of the most important criteria was/is a
comprehensive tool chain: not only the editor/compiler/debugger story but also
the availability of tools like Insight from Parasoft, for example. Perhaps
Insight is QNX-compatible, but who knows this? Could QSSL compile a list of
QNX-compatible 3rd-party tools that QNX customers would buy and use? A kind
of “QNX-compatible” label. Ask your big customers like Nortel and Motorola ;-)
and the small ones too, what tools they use. That will facilitate the process
of OS selection and of establishing a development environment for your
customers. That was the bad news. The good news is QNX rocks :-)

“Alec Saunders” <alecs@qnx.com> wrote in message
news:a40rkh$a31$1@nntp.qnx.com

“Kris Warkentin” <kewarken@qnx.com> wrote in message
news:a40ovo$7ou$1@nntp.qnx.com…
So, what you’re saying is, we’re selling too cheap ;-)

You point out two issues in your message, Kris. One is that you wonder
whether the price is right, and the other is you wonder whether pricing is
affecting our sales.

I think we’re priced to hit the sweet spot in the market. Roughly 60% of
companies in the market have per-developer budgets of $5,000 or more. If we
were to jack up our prices to Wind River levels, we would only be able to
address 25% of the market. If we dropped our prices lower, to the prices that
Microsoft charges for their products, we would add just 15 to 20% to our total
available market. So, yeah, not everyone can afford QNX, but the vast majority
of the market can. And that’s where I think we should be – not only is QNX a
great product technically, but it’s also a great value to a purchaser.

To answer your second question, I don’t believe that pricing is affecting our
sales, based on the info in the previous paragraph. Yes, there’s price
elasticity in our marketplace, but I think the issues of awareness are much,
much more detrimental to us than our pricing.


Alec Saunders
VP Marketing, QNX Software

Dmitri Poustovalov <pdmitri@sympatico.ca> wrote:


Hi Dmitri,

it’s sad that the “product” turned out by our universities and commercial
institutions doesn’t realize that “vi” and “make” and the “printf debugger”
are just about all you need. I blame government cutbacks :-)

Cheers,
-RK

Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.

Robert Krten wrote:


Hi Dmitri,

it’s sad that the “product” turned out by our universities and commercial
institutions doesn’t realize that “vi” and “make” and the “printf debugger”
are just about all you need. I blame government cutbacks :-)

Cheers,
-RK

As one of my colour-blind friends once said, “I thought the shell was an
Integrated Development Environment!” Actually the colour is mostly a distraction
for him - but I see his point.

Phil Olynyk


New thread time.

“Robert Krten” <nospam90@parse.com> wrote in message
news:a44mlj$h6f$1@inn.qnx.com

it’s sad that the “product” turned out by our universities and commercial
institutions doesn’t realize that “vi” and “make” and the “printf debugger”
are just about all you need. I blame government cutbacks :-)

What!

Don’t get me wrong. I can work wonders with a few well placed printf()s.

When in a bind I often need to use a debugger to find out what led up to the
point where you are now executing that line that’s commented:
// this should never happen

However,

I’m also a strong believer that software should debug itself. I have
developed many of my own software tools over the years that I can
incorporate into my software development projects. These features just lurk
quietly in the background, using very little (but some) overhead. And then,
BANGZOOM! On that rare occasion (yeah, right!) when a bug does rear its
ugly head in my software, I can just go back and look at the logs to see how
it got there.

I believe that (almost) as much time should go into the design of the
debugging software as goes into the software itself.


Bill Caroselli – 1(626) 824-7983
Q-TPS Consulting
QTPS@EarthLink.net

Hi Bill…

Bill Caroselli wrote:

I used to laugh when the folks on CNN would say things like: “It’s a good
thing the terrorists aren’t doing (step 1), (step 2), (step 3), (etc.)”. I
thought that some of these reporters should have been shot.

But, instead, we in the QNX community have you Kris! ;~}

:-))

I concur! :-))

Miguel.

Hi Chris…

QNX is much more ‘gooder’ as VxWorks is bad. :-))

Just an honest biased opinion. :-)

Cheers!

Miguel.

Chris Rose wrote:

I don’t know if I will get an unbiased response, but here it goes anyway.

My company is hours away from signing a purchase request for QNX development
seats which we will use as the OS for our digital servo controller.
We just learned that another division of the company is using VxWorks and
they have extra licenses available (we think). They also have programmers
experienced with VxWorks. (My division has no one experienced with QNX). So
at the last moment we are re-evaluating our decision to use QNX.
We had originally ruled out Vx due to cost, and we thought the code might not
be as portable. (VxWorks AE, though, appears to be POSIX-compliant.)

My question is: Does anyone here have experience with both operating
systems? If so can you give me an unbiased opinion of both OS’s?

my opinions are mine, only mine, solely mine, and they are not related
in any possible way to the institution(s) in which I study and work.

Miguel Simon
Research Engineer
School of Aerospace and Mechanical Engineering
University of Oklahoma
http://www.amerobotics.ou.edu/
http://www.saic.com

Bill Caroselli <qtps@earthlink.net> wrote:

New thread time.

“Robert Krten” <nospam90@parse.com> wrote in message
news:a44mlj$h6f$1@inn.qnx.com…

it’s sad that the “product” turned out by our universities and commercial
institutions doesn’t realize that “vi” and “make” and the “printf debugger”
are just about all you need. I blame government cutbacks :-)

What!

Don’t get me wrong. I can work wonders with a few well placed printf()s.

When in a bind I often need to use a debugger to find out what led up to the
point where you are now executing that line that’s commented:
// this should never happen

That’s about the only time I use a debugger as well – the damn thing SIGSEGV’d
and now I need to know where. I turn on the debugger, it translates the magical
hex goop virtual address into a line number, and I might poke around with a
few variables to see if it’s “obvious” why it died.

I guess my “bad experience” with debuggers was one guy who spent two days in a
debugger tracing through his program only to find that a half-hour spent with
the source would have found his problem.

However,

I’m also a strong believer that software should debug itself. I have
developed many of my own software tools over the years that I can
incorporate into my software development projects. These features just lurk
quietly in the background using very little (but some) overhead. And then,
BANGZOOM! On that rare occasion (yeah right!) when a bug does rear its
ugly head in my software I can just go back and look at the logs to see how
it got there.

I’m a great believer in the “default: printf("%s %d should never happen\n", __FILE__, __LINE__);” statement.

I believe that (almost) as much time should go into the design of debugging
software as goes into the software itself.

That’s the essence of “Software Quality Assurance” that MOST high tech companies
don’t seem to grasp.

The attitude seems to be one of “Hey, cool! It compiled! Ship it!”.

I worked at Canadian Marconi once on a radar. It was very instructive.
After the software was “complete” I had to write a test plan, to see how the
software measured up to the requirements document. Then I designed a hardware
test jig. Then the SQA person sat with me for two days and watched the software
being tested. THEN it was approved. After that point, I submitted it to their
configuration management group. The “proof” that it still worked was that I took
a stock IBM-PC, formatted the hard disk, installed the OS and compiler, and downloaded
the source from the CM system. Then I came up with a ROM image. If the checksum
matched, then the product was deemed to be properly in the CM system.

Now, this may be a bit overboard, but there is a point – how many times have you
been told by tech support for any number of products, “Oh, you got a SIGSEGV at
address ? Huh, cool. Try the latest version.” In an embedded product,
this is simply not acceptable as a level of service. They should be able to track
the version that you have to their CM system or equivalent, and find the line that
failed, and even rebuild a fixed version from that branch that fixes only that
one bug. That’s how you prove that the fix worked. The old excuse of trying
the latest version only means that the bug is either fixed or masked by something
else…

</rant off> :-)

Cheers,
-RK


Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.

“Robert Krten” <nospam90@parse.com> wrote in message
news:a44mlj$h6f$1@inn.qnx.com

Hi Dmitri,

it’s sad that the “product” turned out by our universities and commercial
institutions doesn’t realize that “vi” and “make” and the “printf debugger”
are just about all you need. I blame government cutbacks :-)

:-) Just a quote from Communication System Design magazine I received last
week:
“… There was a tremendous boom in XYZ for Dummies books over the last decade.
Well, if you are a dummy then you shouldn’t be trying to do engineering.”

I’d add to this that s/w development is becoming an industry in all areas,
and, maybe, embedded application development is still a vi-ed and printf-ed
art. In big companies, 200-300 developers are daily adding lines of code to a
flat-memory system; for them, Insight and Purify can save a lot of money. For
QNX, the availability of 3rd-party tools means industry recognition, which is
much more trustworthy than 100 partnership announcements…


I agree with what you have said, but you skipped an important part of your
case, namely: the software you try to fix was/is written by you. Your
statement sounds rather like: “I work alone, because this way I don’t need to
use a debugger.”
Take an average company with at least 10 software guys; we can say with very
high probability that no project is taken care of by only one or two guys.
And very often the people who wrote/re-wrote the initial code are not in the
company any more. What happens next? YOU end up with fifteen (at least, I
guess) thousand lines of code and, as very often happens, the code is not
written in your style (for some known reason ;-[). In this case the printf()
debugging style becomes a difficult task, and time is definitely not on your
side.

Debuggers are tools, and to use a tool we need knowledge and experience
(TIME!). Debuggers change, and they are very useful. We software guys very
often tend to pre-judge things: if we had a bad experience with a beta tool,
we maintain our negative opinion of it (especially if we don’t need the
tool).

Self-debugging is a good thing, but it needs programmers with experience and
“skill”. It also raises a few concerns:

  • overall performance
  • you need a file system for logging
  • size ;-)

Don’t get me wrong ;-) I am on your side!

-Misha.


Robert Krten wrote:

THEN it was approved. After that point, I submitted it to their
configuration management group. The “proof” that it still worked was that I took
a stock IBM-PC, formatted the hard disk, installed the OS and compiler, and downloaded
the source from the CM system. Then I came up with a ROM image. IF the checksum
matched, then the product was deemed to be properly in the CM system.

The only problem I have with this procedure is that, with modern
optimizing compilers, it is not necessarily easy to get them to produce
the same code on each invocation. Other than checksumming the final
executable, this is the exact procedure I advocate.


Now, this may be a bit overboard, but there is a point – how many times have you
been told by tech support for any number of products, “Oh, you got a SIGSEGV at
address ? Huh, cool. Try the latest version.” In an embedded product,
this is simply not acceptable as a level of service. They should be able to track
the version that you have to their CM system or equivalent, and find the line that
failed, and even rebuild a fixed version from that branch that fixes only that
one bug. That’s how you prove that the fix worked. The old excuse of trying
the latest version only means that the bug is either fixed or masked by something
else…

Amen.

Misha Nefedov <mnefedov@qnx.com> wrote:

I agree with what you have said, but you skipped an important part of your
case. That is: the software you try to fix was/is written by you. Your
statement sounds rather like: “I work alone, because this way I don’t
need to use a debugger.”
Take an average company with at least 10 software guys; we can say with
very high probability that no project is taken care of by only one or two
guys. And very often the people who wrote/re-wrote the initial code

That’s a possible scenario, on the other hand, with proper abstraction you
could argue that the modules should be small and maintained by a small
team. How many people wrote the QNX kernel? How many people wrote the
QNX filesystem? Ethernet Driver? io-net? You’ll find the answer to these
questions to be on the order of “one”, “two”, or “three”.

Now, I’m not saying that a high-end telecom switch is written by three people,
but each component could be written by a small team…

are not in the company any more. What happens next ? YOU end up with fifteen

Hopefully someone understood the concept of “documentation” <gasp!>
Requirements specifications. Functional specifications. High-level architecture
documentation.

(at least I guess) thousand lines of code and again, as very often happens,
the code is not written in your style (for some known reason;-[). In this

:-)

case use of the printf() debugging style becomes a difficult task and time
is definitely not on your side.

Just rewrite it. The original author was probably an idiot :-)
(I say that comment half tongue-in-cheek, because so often that’s what happens
in real life: “I can’t understand this module, so I rewrote it.”) Unfortunately.

OTOH, if the module is properly documented, it means that you can debug it.
I once ported a properly documented 300k-line system from Windoze to QNX in
approximately a weekend. Why? Because of three words: abstraction, abstraction,
and abstraction. Now, granted, we’re talking about maintainability and not
portability, but the two are at least marginally related…

Debuggers are tools, to use a tool we need knowledge and experience
(TIME!).

I’ll agree with you on that point – if it’s the right tool for the job, I’ll use it.
Often, in my experience, it has not been the right tool for the job, and instead
resulted in wasted effort.

Debuggers change, and they are very useful. We, software guys, very often
tend to pre-judge things. If we had a bad experience with a beta tool, we
will maintain our opinion, so that it is negative (especially if we don’t
need this tool).

:-)

Self-debugging is a good thing, but it needs programmers with experience and
“skill”.
This also raises a few concerns:

  • overall performance
  • you need a file system for logging
  • size ;-)

Don’t get me wrong ;-) I am on your side!

I’m just partly playing devil’s advocate here :-)

Cheers,
-RK



Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.

Rennie Allen <rallen@csical.com> wrote:

Robert Krten wrote:

THEN it was approved. After that point, I submitted it to their
configuration management group. The “proof” that it still worked was that I took
a stock IBM-PC, formatted the hard disk, installed the OS and compiler, and downloaded
the source from the CM system. Then I came up with a ROM image. IF the checksum
matched, then the product was deemed to be properly in the CM system.



The only problem I have with this procedure, is that with modern
optimizing compilers, it is not necessarily easy to get them to produce
the same code on each invocation. Other than checksumming the final
executable, this is the exact procedure I advocate.

Why on earth not??? Do they take into account the phase of the moon? The time of
day? The number of bytes left on the disk? I can’t think of a convincing argument
that I’d let a compiler vendor get away with for not giving me 100% reproducibility
on the toolchain!

Remember, I’m talking specifically about an EPROM checksum, so I can get away with
this argument. A strict executable on disk, with debugging information, might
contain the time and date of the source files…

Anyway, it’s a good way to work, if you can afford the time. OTOH, can you afford
not to do it this way in the long run? :slight_smile:

Cheers,
-RK



Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.

Robert Krten wrote:


Why on earth not??? Do they take into account the phase of the moon? The time of
day? The number of bytes left on the disk?

The amount of memory available when doing code generation.

I can’t think of a convincing argument
that I’d let a compiler vendor get away with for not giving me 100% reproducibility
on the toolchain!

Watcom did have a facility for limiting the memory used for code generation,
to ensure repeatability (the WCGMEMORY envvar), but since most companies
don’t insist on binary equivalence from source for CM, I don’t know how
widely one can rely on compilers to support this (does gcc have an
equivalent feature?).

Anyway, it’s a good way to work, if you can afford the time. OTOH, can you
afford not to do it this way in the long run? :-)

We do everything except the final checksum comparison, and it doesn’t
take very much additional time (doing a checksum wouldn’t take much
longer either – I should look further into doing this). Checking
everything out from the version control system and building from a clean
install does point out a lot of issues that go unnoticed in the
development cycle.

Robert Krten wrote:


Just rewrite it. The original author was probably an idiot :-)
(I say that comment half tongue-in-cheek, because so often that’s what happens
in real life: “I can’t understand this module, so I rewrote it.”) Unfortunately.

How true. I believe that one should seldom re-write something because
they can’t understand it (there are notable exceptions with code that
cannot possibly be understood by a human being :slight_smile:; however, far too
often code is understood, and there is an awareness that the basic
architecture is flawed, but it is not re-written. This is as bad as
re-writing because “I can’t understand it”.

That’s about the only time I use a debugger as well – the damn thing SIGSEGV’d
and now I need to know where. I turn on the debugger, it translates the magical
hex goop virtual address into a line number, and I might poke around with a
few variables to see if it’s “obvious” why it died.

I guess my “bad experience” with debuggers was one guy who spent two days in a
debugger tracing through his program only to find that a half-hour spent with
the source would have found his problem.

I’ve seen this too, but it is so rare. I just finished a project where, if
the team had used the debugger more, they would have found many of their
problems much faster. In my opinion they spent far too much time staring at
incorrect results and trying to mentally deduce what happened, and not enough
time stepping through code. There were some cases where it would have been a
lot better to know how to use the debugger effectively. I really love QNX
4’s ability to debug processes across the net. It solved so many problems.

I respect those who don’t like to use a debugger the way I do, and it’s because
I believe that many developers more seasoned than I spent their time
debugging on paper, as they were working with punch cards and really slow
compilers. I have to admit that my brain has not been trained that way,
although it is getting better.

I agree that there is a balance to be struck between using a source debugger
and printf’s and a log, I rely upon all three heavily. Some things just
can’t be debugged with a debugger…

Kevin

However,

I’m also a strong believer that software should debug itself. I have
developed many of my own software tools over the years that I can
incorporate into my software development projects. These features just lurk
quietly in the background using very little (but some) overhead. And then,
BANGZOOM! On that rare occasion (yeah right!) when a bug does raise its
ugly head in my software I can just go back and look at the logs to see how
it got there.

I’m a great believer in the “default: printf("%s %d should never
happen\n", __FILE__, __LINE__);” statement.

I believe that (almost) as much time should go into the design of
debugging
software as goes into the software itself.

That’s the essence of “Software Quality Assurance” that MOST high tech
companies
don’t seem to grasp.

The attitude seems to be one of “Hey, cool! It compiled! Ship it!”.

I worked at Canadian Marconi once on a radar. It was very instructive.
After the software was “complete” I had to write a test plan, to see how the
software measured up to the requirements document. Then I designed a hardware
test jig. Then the SQA person sat with me for two days and watched the software
being tested. THEN it was approved. After that point, I submitted it to their
configuration management group. The “proof” that it still worked was that I took
a stock IBM-PC, formatted the hard disk, installed the OS and compiler, and
downloaded the source from the CM system. Then I came up with a ROM image. IF
the checksum matched, then the product was deemed to be properly in the CM
system.

Now, this may be a bit overboard, but there is a point – how many times have
you been told by tech support for any number of products, “Oh, you got a
SIGSEGV at address ? Huh, cool. Try the latest version.” In an embedded
product, this is simply not acceptable as a level of service. They should be
able to track the version that you have to their CM system or equivalent, and
find the line that failed, and even rebuild a fixed version from that branch
that fixes only that one bug. That’s how you prove that the fix worked. The
old excuse of trying the latest version only means that the bug is either
fixed or masked by something else…

/rant off :slight_smile:

Cheers,
-RK


Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.

“Robert Krten” <nospam90@parse.com> wrote in message
news:a48qs6$gj0$2@inn.qnx.com

Rennie Allen <rallen@csical.com> wrote:
Robert Krten wrote:

THEN it was approved. After that point, I submitted it to their
configuration management group. The “proof” that it still worked was that I
took a stock IBM-PC, formatted the hard disk, installed the OS and compiler,
and downloaded the source from the CM system. Then I came up with a ROM
image. IF the checksum matched, then the product was deemed to be properly
in the CM system.


The only problem I have with this procedure, is that with modern
optimizing compilers, it is not necessarily easy to get them to produce
the same code on each invocation. Other than checksumming the final
executable, this is the exact procedure I advocate.

Why on earth not??? Do they take into account the phase of the moon? The
time of day? The number of bytes left on the disk? I can’t think of a
convincing argument that I’d let a compiler vendor get away with for not
giving me 100% reproducibility on the toolchain!

Remember, I’m talking specifically about an EPROM checksum, so I can get
away with this argument. A strict executable on disk, with debugging
information, might contain the time and date of the source files…

Anyway, it’s a good way to work, if you can afford the time. OTOH, can you
afford not to do it this way in the long run? :slight_smile:

If your organisation ever tries to get ISO certified, or SEI certified, a
procedure like that is mandatory. We do not use checksums here and a few
other things are a bit more relaxed too, but essentially the process is
similar. An SEI-certified software development organisation must be able to
produce any arbitrary old version of released software on demand and fix any
particular bug in it without changing the rest (unless it can be proven that
the bug is unfixable without changing something else).

- igor

Kevin Stallard wrote:

There is a report on the same site as the WinCE vs QNX report, but it
costs some money.

Additionally you should check with VxWorks, as I believe they actually charge
per project. So if another project has left-over licenses, you may not be
able to use them w/o forking over some money… If I am incorrect in this
I’d like to know.

Good luck, I hope you win on this one.

Kevin

“Chris Rose” <chris.rose@viasat.com> wrote in message
news:a3p6tq$2mp$1@inn.qnx.com…

I don’t know if I will get an unbiased response, but here it goes anyway.

My company is hours away from signing a purchase request for QNX development
seats which we will use as the OS for our digital servo controller.
We just learned that another division of the company is using VxWorks and
they have extra licenses available (we think). They also have programmers
experienced with VxWorks. (My division has no one experienced with QNX.) So
at the last moment we are re-evaluating our decision to use QNX.
We had originally ruled out Vx due to cost, and we thought the code may not
be as portable. (VxWorks AE, though, appears to be POSIX compliant.)

My question is: Does anyone here have experience with both operating
systems? If so can you give me an unbiased opinion of both OS’s?




Absolutely not! QSSL paid for it so anybody could get it for free. You can
download it!
regards,
Alain

“Alain Bonnefoy” <alain.bonnefoy@icbt.com> wrote in message
news:3C68F041.5030402@icbt.com

Absolutely not! QSSL paid for it so anybody could get it for free. You can
download it!
regards,
Alain

That’s funny, I could have sworn that right before I posted that, I saw that
the WinCE vs QNX 6 report was free, but it looked like the VxWorks vs QNX
stuff wasn’t. Yesterday I looked and to my surprise, it is… I swear that I
must dream stuff while I’m awake…

Kevin

OK. It’s time to reveal “Bill’s Law of Documentation”.

Simply stated, it says that, “When you find a piece of software that is
internally documented in much greater detail than the rest of the software
by that programmer, it is a sign that the programmer did not understand what
he was doing while he was doing it.”

Next time you read through someone else’s code, check it out. So, maybe
it’s the best-documented code that SHOULD be rewritten.


Bill Caroselli – 1(626) 824-7983
Q-TPS Consulting
QTPS@EarthLink.net


“Rennie Allen” <rallen@csical.com> wrote in message
news:3C68028E.5030409@csical.com

Robert Krten wrote:


Just rewrite it. The original author was probably an idiot :slight_smile:
(I say that comment half tongue-in-cheek because so often that’s what
happens in real life. “I can’t understand this module, so I rewrote it.”)
Unfortunately.

How true. I believe that one should seldom re-write something because
they can’t understand it (there are notable exceptions with code that
cannot possibly be understood by a human being :slight_smile:; however, far too
often code is understood, and there is an awareness that the basic
architecture is flawed, but it is not re-written. This is as bad as
re-writing because “I can’t understand it”.