Tirade (offtopic) "It's been done"

Remember the Simpsons episode with Homer on top of Moe’s bar with his band?
A limo pulls up, George Harrison pokes his head out and says, “It’s been done”.

Well, that’s kind of the way I feel about the current state of affairs in
the software field.

Remember when an “operating system” was something to fear and hold in awe?
How did those guys do it? What made it work? How did their memory system
work? Nowadays, it’s “been done”. POSIX says “this is the set of functions
you will have”. QSSL, to its credit, has done an excellent job of implementing
the POSIX specs, leaving very little to the imagination. Sure, QNX 4 and Neutrino
are really “clean” implementations of operating systems, taking tons of talent
to pull off, but… “it’s been done”.

The separation between filesystem, TCP/IP stack, device drivers, and core OS
is again so well done that it leaves little to “wonder” at.

Remember compilers? These were almost “godlike” in their
ability to “understand” C code and generate low-level machine code.
Nowadays, “it’s been done”.

There are tons of other examples.

The database guys haven’t progressed – a database is still not some kind of
wonderful, futuristic AI-like replacement for memory; it’s just a set of indexes
into files. Wheee… “it’s been done”.
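(If you’ve never looked inside one, here’s a toy sketch of what I mean by “a set
of indexes into files” – the records.dat file, the fixed-size records, and the
keys are all invented for illustration; a real engine is only this, with more
layers of cleverness:)

```c
/* Toy sketch of the "indexes into files" view of a database: an
 * in-memory table of key -> byte offset, used to seek into a flat
 * record file.  Everything here is invented for illustration. */
#include <stdio.h>
#include <string.h>

#define RECSIZE 64

struct index_entry {
    char key[16];   /* primary key                         */
    long offset;    /* byte offset of the record in the file */
};

static struct index_entry idx[] = {
    { "alice", 0 * RECSIZE },
    { "bob",   1 * RECSIZE },
    { "carol", 2 * RECSIZE },
};

/* look up a key in the index, then seek and read the record */
static int lookup(FILE *db, const char *key, char rec[RECSIZE])
{
    size_t i;
    for (i = 0; i < sizeof idx / sizeof idx[0]; i++) {
        if (strcmp(idx[i].key, key) == 0) {
            fseek(db, idx[i].offset, SEEK_SET);
            return fread(rec, 1, RECSIZE, db) == RECSIZE;
        }
    }
    return 0;   /* key not in the index */
}

int main(void)
{
    char rec[RECSIZE];
    FILE *db = fopen("records.dat", "w+b");
    if (db == NULL)
        return 1;

    /* write three fixed-size records so the lookup has data */
    memset(rec, 0, sizeof rec);
    strcpy(rec, "alice: likes compilers");  fwrite(rec, 1, RECSIZE, db);
    strcpy(rec, "bob: likes databases");    fwrite(rec, 1, RECSIZE, db);
    strcpy(rec, "carol: likes OSes");       fwrite(rec, 1, RECSIZE, db);

    if (lookup(db, "bob", rec))
        printf("found: %s\n", rec);
    fclose(db);
    return 0;
}
```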

The point of this tirade (and I did warn you-all it would be one :slight_smile:)
is that I’m really kinda bored – what’s going to be
the “next” exciting thing in the software field that you can hold in awe
and terror (like OS’s and compilers used to be)?

About the only thing that jumps out at me is the field of AI – and this is
because of two things: my own general ignorance of the state-of-the-art in
the field, and my perception that it’s all just the same old crap – do they
still use LISP? Are they any closer to making a truly artificial intelligence?
I don’t need something that looks and acts like a human – if it is truly
“artificial” it could have its own way of looking at the world and interacting
with it – that’s fine. It doesn’t need to love, or express emotion, or be
able to compose music (necessarily). It just needs to be able to do something
that shows its intelligence.

Anyway, I’m done. Anyone have any ideas on what would be “fun”? (apart from
restoring old computers, of course :slight_smile:)

Cheers,
-RK

Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.

If I may recommend a book that is an incredibly readable overview of some of
the leading edges in computer science research, “The Computational Beauty of
Nature” by Gary William Flake is one of my favorites.

Another interesting thing is Stephen Wolfram’s new book, “A New Kind of
Science”. It’s a 1200-page tome composed of 20 years of his research (he
got a PhD from Caltech at the age of 20, got the MacArthur Foundation genius
award, wrote Mathematica…) which basically theorizes that all of
nature and the universe might be reducible to cellular automata. That is,
there is no grand, unifying, horrendously complicated equation that returns
‘42’ but rather a beautiful interaction by many types of simple automata.
(note - I haven’t read this one :wink: )
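Just to make the idea concrete, an “elementary” cellular automaton of the kind
he studies fits in a few lines of C. This is only my own sketch – the rule
number (110) and the grid width are arbitrary choices:

```c
/* Minimal sketch of an elementary (1-D, two-state) cellular automaton.
 * Rule 110 and the 64-cell width are arbitrary illustrative choices. */
#include <stdio.h>
#include <string.h>

#define WIDTH 64
#define GENERATIONS 24

int main(void)
{
    unsigned char rule = 110;   /* the 8-bit rule table */
    char cur[WIDTH], nxt[WIDTH];
    int g, i;

    memset(cur, 0, sizeof cur);
    cur[WIDTH / 2] = 1;         /* one live cell in the middle */

    for (g = 0; g < GENERATIONS; g++) {
        for (i = 0; i < WIDTH; i++)
            putchar(cur[i] ? '#' : ' ');
        putchar('\n');

        for (i = 0; i < WIDTH; i++) {
            /* the next state depends only on the cell and its two
             * neighbours: three bits index into the rule table */
            int l = cur[(i + WIDTH - 1) % WIDTH];
            int c = cur[i];
            int r = cur[(i + 1) % WIDTH];
            nxt[i] = (rule >> ((l << 2) | (c << 1) | r)) & 1;
        }
        memcpy(cur, nxt, sizeof cur);
    }
    return 0;
}
```

Watching complex, aperiodic structure grow out of those eight rule bits is, as
I understand it, the whole point of the book.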

I know this is not related to software directly but I like the fact that
computers are opening up the wonders of the universe for us.

Kris


“Kris Warkentin” <kewarken@qnx.com> wrote in message
news:ad0jmd$hlb$1@nntp.qnx.com

Another interesting thing is Stephen Wolfram’s new book, “A New Kind of
Science”. It’s a 1200-page tome composed of 20 years of his research (he
got a PhD from Caltech at the age of 20, got the MacArthur Foundation genius
award, wrote Mathematica…) which basically theorizes that all of
nature and the universe might be reducible to cellular automata. That is,
there is no grand, unifying, horrendously complicated equation that returns
‘42’ but rather a beautiful interaction by many types of simple automata.
(note - I haven’t read this one :wink: )

I read an article on the guy and his book; quite an achievement.

Actually, my understanding of the article is that the Universe is one
simple equation. His belief is that in mathematics it should be representable
in 4 lines. That’s what he plans on working on… finding the formula.
His theory is that complex things emanate from very simple ones.

I love a drastic approach, one that tickles the mind. Being pushed to the edge
of knowledge, even past it. I remember a book that, based upon
current knowledge of science, extrapolated a world beyond the speed of light.
That sure didn’t fit into the “it’s been done” category.


What I find strange is that things are getting so complex that you see
fewer and fewer people becoming extremely good in lots of fields. Nowadays
it seems you have to become so specialized that you can’t see the big
picture anymore. Yet amazing things are still achieved; the power of
management, I guess :wink:

If I had to start a new career I could probably switch to molecular biology,
genomes and stuff like that. Quite fascinating (to me at least).


Robert Krten <nospam88@parse.com> wrote:

About the only thing that jumps out at me is the field of AI – and this is
because of two things: my own general ignorance of the state-of-the-art in
the field, and my perception that it’s all just the same old crap – do they
still use LISP? Are they any closer to making a truly artificial intelligence?
I don’t need something that looks and acts like a human – if it is truly
“artificial” it could have its own way of looking at the world and interacting
with it – that’s fine. It doesn’t need to love, or express emotion, or be
able to compose music (necessarily). It just needs to be able to do something
that shows its intelligence.

AI? HAL was supposed to be born in Jan 2000, but it is summer 2002 now, and
Big Blue still tries to claim their super-fast, multi-processor chess
computer is “intelligent”.

With too much money sunk into the Internet bubble, AI is almost dead.

-xtang

“Mario Charest” <goto@nothingness.com> wrote in message
news:ad0lmc$rro$1@inn.qnx.com

Actually, my understanding of the article is that the Universe is one
simple equation. His belief is that in mathematics it should be representable
in 4 lines. That’s what he plans on working on… finding the formula.
His theory is that complex things emanate from very simple ones.

I remember that bit about four lines, but I believe it was four lines of
CODE, not a four-line equation - his theory is that the description of the
universe is not by a static equation but by a more dynamic algorithm (like
cellular automata).

Kris

“Xiaodan Tang” <xtang@qnx.com> wrote in message
news:ad0psu$lmj$1@nntp.qnx.com

AI? HAL was supposed to be born in Jan 2000, but it is summer 2002 now, and
Big Blue still tries to claim their super-fast, multi-processor chess
computer is “intelligent”.

With too much money sunk into the Internet bubble, AI is almost dead.

I rather doubt that. The leading edge of research is in universities and
companies like IBM and AT&T Bell Labs which are not slowed down by the Dot
Bomb.

Kris


“Kris Warkentin” <kewarken@qnx.com> wrote in message
news:ad0svq$ogm$1@nntp.qnx.com

I remember that bit about four lines, but I believe it was four lines of
CODE, not a four-line equation -

Yeah, that’s what I meant: lines of Mathematica code (the language he created).

his theory is that the description of the
universe is not by a static equation but by a more dynamic algorithm (like
cellular automata).

Kris

With too much money sunk into the Internet bubble, AI is almost dead.

I rather doubt that. The leading edge of research is in universities and
companies like IBM and AT&T Bell Labs which are not slowed down by the Dot
Bomb.

AI is not dead, it’s just poorly understood, by both the layman
(non-researchers) and by the researchers themselves. I did my master’s degree
in machine intelligence, and I was astounded by a) the grandiose predictions
for the future of AI, b) the trivial accomplishments that were treated as
progress of biblical proportions, and c) the lengths to which the nay-sayers
were willing to go to denigrate the real accomplishments.

a) Remember robots cleaning your house by the mid-1970s? How about
baby-sitting androids? The pie-in-the-sky predictions of the money-hungry
and none-too-honest university researchers really stained the legitimate AI
community.

b) I personally observed nonsense like somebody getting a neural network
to perform an XOR operation, writing 3 papers, and getting a doctoral degree.
Somebody else used predicate logic to order the stacking of three blocks.
Another doctoral degree. Nobody’s really sure what constitutes real progress
as opposed to parlour tricks, so the whole field tends to applaud trivia in
the hopes that it will lead to some great new discovery.
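To show just how trivial the celebrated XOR result is, here is a sketch of mine
(not anybody’s actual thesis network): a 2-2-1 feed-forward net with step
activations and hand-picked weights. A real demo would at least “learn” the
weights rather than hard-code them:

```c
/* Sketch of the infamous "neural network that computes XOR":
 * a 2-2-1 feed-forward net with step activations and hand-picked
 * weights (illustrative only -- a real demo would learn them). */
#include <stdio.h>

static int step(double x) { return x > 0.0; }

static int xor_net(int x1, int x2)
{
    int h1 = step(x1 + x2 - 0.5);   /* hidden unit 1 acts as OR  */
    int h2 = step(x1 + x2 - 1.5);   /* hidden unit 2 acts as AND */
    return step(h1 - h2 - 0.5);     /* output: OR and not AND    */
}

int main(void)
{
    int a, b;
    for (a = 0; a <= 1; a++)
        for (b = 0; b <= 1; b++)
            printf("%d XOR %d = %d\n", a, b, xor_net(a, b));
    return 0;
}
```

Three papers’ worth, apparently.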

c) The bigger problem that AI runs into is that every time a real step
forward is made, and a new algorithm or modelling technique is created,
people immediately put a “mundane” label on it and wrest it from the field
of AI. Take, for example, expert systems. That’s not AI anymore. Now it’s
just a cool algorithm for performing inference with incomplete information.
Neural networks are not AI anymore, they’re just multivariate non-linear
regression systems. LISP is not AI anymore, it’s just a 1960s symbolic
processing language (Rob, I hope your ears are burning). A* search trees,
anybody? Predicate logic? All of these, in their times, were huge steps
forward in the way people expressed and modelled problems that were felt to
be along the road to artificial intelligence. They have all triggered hugely
useful results in other areas, and have paved the way for the identification
and analysis of the problems that they could not solve. However, as new
algorithms and models are proposed, they too will be labelled by the larger
computing community as nothing more than an interesting algorithm, because
nothing short of a talking android head will be good enough for most
people’s definitions of AI.

We could wander into the topic of the Turing test at this point, and try to
define artificial intelligence. Has anybody else noticed how atrociously
silly the Turing test is? What is the point of mimicking human behaviour,
including speech pauses and indecision? Is this really what we want from an
artificial intelligence? I sometimes think that the Turing test is the
fabrication of a secret pyramid-and-eyeball AI-bashing society. Imagine how
much fun you could have by feeding a self-defeating test of success into the
popular culture, thereby arranging things so that your enemy had to shoot
himself in the temple in order to “succeed”. Can there be anything more
satisfying?

AI is dead in many people’s minds because they wouldn’t know “live” AI if it
walked up and bit them.

Cheers,
Andrew

“Robert Krten” <nospam88@parse.com> wrote in message
news:ad0his$p52$1@inn.qnx.com

Anyway, I’m done. Anyone have any ideas on what would be “fun”? (apart from
restoring old computers, of course :slight_smile: )

I guess that depends on your idea of “fun”. I don’t see writing an operating
system or a compiler as particularly fun. I know people who love that, and I
would enjoy the challenge, but if I could choose my challenges, those would
not get the top ranking.

I think that one future wave is the subject of world modelling. We have seen
an upsurge in on-line world style games - massively multi-player on-line
role-playing games being the overwhelmingly largest sector. None of these
seem to really be focused on the world modelling aspect of the problem. I
think it would be fun to try to model a world with a closed economy, a
robust political system, industry and environmental impact. It would be fun
to model inhabitants of this world in terms of motivations such as greed,
altruism, love, phobias, mental disorders, etc. It would be fun to model
resource production and consumption, and population motion due to changes in
resource availability.
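As a flavour of what I mean by the resource/population part, here is a toy
loop of mine - every constant in it is invented, and a real model would be
enormously richer:

```c
/* Toy sketch of a resource/migration loop: regions produce and
 * consume one resource, and population drifts toward the richer
 * neighbour.  All constants are invented for illustration. */
#include <stdio.h>

#define REGIONS 5
#define TICKS   10

int main(void)
{
    double res[REGIONS] = { 50, 10, 80, 20, 40 };  /* resource stock */
    double pop[REGIONS] = { 20, 20, 20, 20, 20 };  /* population     */
    double moved[REGIONS];
    int t, i;

    for (t = 0; t < TICKS; t++) {
        for (i = 0; i < REGIONS; i++) {
            res[i] += 5.0;              /* production per tick       */
            res[i] -= 0.2 * pop[i];     /* consumption scales w/ pop */
            if (res[i] < 0.0) res[i] = 0.0;
            moved[i] = 0.0;
        }
        /* migration: 10% of each population moves toward the richer
         * of its two neighbours (the world wraps at the edges) */
        for (i = 0; i < REGIONS; i++) {
            int left  = (i + REGIONS - 1) % REGIONS;
            int right = (i + 1) % REGIONS;
            int dest  = (res[left] > res[right]) ? left : right;
            double m  = 0.1 * pop[i];
            moved[i]    -= m;
            moved[dest] += m;
        }
        printf("tick %2d:", t);
        for (i = 0; i < REGIONS; i++) {
            pop[i] += moved[i];
            printf(" %5.1f", pop[i]);
        }
        printf("\n");
    }
    return 0;
}
```

Now replace those constants with greed, altruism and phobias, make the whole
thing internally consistent, and keep a paying customer interested - that is
the hard part.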

A few people have scratched the surface of some of these things, but they
always fall well short of an internally consistent closed system. Nobody has
figured out how to make such a system interesting enough to a gamer to
actually draw paying customers. Remember, you asked what would be “fun”, not
what would be profitable.

Cheers,
Andrew

“Robert Krten” <nospam88@parse.com> wrote in message
news:ad0his$p52$1@inn.qnx.com

So what do we have…

  1. Operating Systems
  2. Compilers
  3. Databases
  4. Computer Graphics
  5. Networking
  6. Add your own

Don’t you think that all the “kewl” stuff listed above is just tools, no?
Maybe complicated in places, but just tools. Like a hammer. They help me
sometimes to design some systems and reduce my development time. But “as is”,
as stand-alone entities, they are quite pointless and useless.

I don’t need a database if I have no data to store. I don’t need modern
computer graphics power if I have nothing to show. I don’t need a
distributed networking cluster if I have nothing to calculate. I don’t need
a compiler if I have no ideas to express. And for sure an operating system
is useless for me if I don’t have something to run :slight_smile:

Well, the idea is simple: I don’t point at a limited set of tools and treat
them as the main achievement, but rather at the applications which use these
achievements. End systems. Complete solutions. Complex designs which include
a variety of activities including science, IT, production, application and
exploitation and so on. Where IT is maybe a very important part, but just a
part of the whole design and not an end in itself.

This kind of system, regardless of how simple or complex it is, cannot be
boring or become antiquated :slight_smile: IMHO of course.


// wbr

Andrew Thomas <andrew@cogent.ca> wrote:

AI is not dead, it’s just poorly understood, by both the layman
(non-researchers) and by the researchers themselves. I did my master’s degree
in machine intelligence, and I was astounded by a) the grandiose predictions
for the future of AI, b) the trivial accomplishments that were treated as
progress of biblical proportions, and c) the lengths to which the nay-sayers
were willing to go to denigrate the real accomplishments.

Bravo! It’s good to hear someone (other than Douglas Hofstadter (sp?)) echo
my sentiments :slight_smile:

c) The bigger problem that AI runs into is that every time a real step
forward is made, and a new algorithm or modelling technique is created,
people immediately put a “mundane” label on it and wrest it from the field
of AI. Take, for example, expert systems. That’s not AI anymore. Now it’s
just a cool algorithm for performing inference with incomplete information.
Neural networks are not AI anymore, they’re just multivariate non-linear
regression systems. LISP is not AI anymore, it’s just a 1960s symbolic
processing language (Rob, I hope your ears are burning). A* search trees,
anybody? Predicate logic? All of

Why would they be burning? :slight_smile:
My main point was, “it’s been done” – so now that it’s been done (i.e., LISP
has been “mainstreamed”, neural nets have been “mainstreamed”, etc.), the
real question is, “what’s next?” AI was only one example in my “ennui” of the
software field in general. The same thing could be said (and I did say it)
about OS’s and databases; they too have been “mainstreamed” and have lost
their charm and mystique.

these, in their times, were huge steps forward in the way people expressed
and modelled problems that were felt to be along the road to artificial
intelligence. They have all triggered hugely useful results in other areas,
and have paved the way for the identification and analysis of the problems
that they could not solve. However, as new algorithms and models are
proposed, they too will be labelled by the larger computing community as
nothing more than an interesting algorithm, because nothing short of a
talking android head will be good enough for most people’s definitions of AI.

We could wander into the topic of the Turing test at this point, and try to
define artificial intelligence. Has anybody else noticed how atrociously
silly the Turing test is? What is the point of mimicking human behaviour,
including speech pauses and indecision? Is this really what we want from an
artificial intelligence? I sometimes think that the Turing test is the
fabrication of a secret pyramid-and-eyeball AI-bashing society. Imagine how
much fun you could have by feeding a self-defeating test of success into the
popular culture, thereby arranging things so that your enemy had to shoot
himself in the temple in order to “succeed”. Can there be anything more
satisfying?

That’s why I suggested an “Alien Intelligence” as the “true” goal of AI; not
the mimicry of an evolved-over-thousands-of-generations human brain, but something
that can “think”. It’s just the definition of “think” that’s fuzzy, and to an
extent I think we’re violently agreeing :slight_smile:

AI is dead in many people’s minds because they wouldn’t know “live” AI if it
walked up and bit them.

Agreed :slight_smile:

Cheers,
Andrew


Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.

Andrew Thomas <andrew@cogent.ca> wrote:


I guess that depends on your idea of “fun”. I don’t see writing an operating
system or a compiler as particularly fun. I know people who love that, and I
would enjoy the challenge, but if I could choose my challenges, those would
not get the top ranking.

I think that one future wave is the subject of world modelling. We have seen
an upsurge in on-line world style games - massively multi-player on-line
role-playing games being the overwhelmingly largest sector. None of these
seem to really be focused on the world modelling aspect of the problem. I
think it would be fun to try to model a world with a closed economy, a
robust political system, industry and environmental impact. It would be fun
to model inhabitants of this world in terms of motivations such as greed,
altruism, love, phobias, mental disorders, etc. It would be fun to model
resource production and consumption, and population motion due to changes in
resource availability.

A few people have scratched the surface of some of these things, but they
always fall well short of an internally consistent closed system. Nobody has
figured out how to make such a system interesting enough to a gamer to
actually draw paying customers. Remember, you asked what would be “fun”, not
what would be profitable.

Yes, right now I’m more interested in fun than profitable. I’m a strong believer
in “do what is fun and it will become profitable”.

The idea of world modelling is an interesting one – I’ll have to ponder further
on that, thanks for the tip!

Cheers,
-RK


Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.

Ian Zagorskih <NOSPAM-ianzag@mail.ru> wrote:

“Robert Krten” <nospam88@parse.com> wrote in message
news:ad0his$p52$1@inn.qnx.com…

So what do we have…

  1. Operating Systems
  2. Compilers
  3. Databases
  4. Computer Graphics
  5. Networking
  6. Add your own

Don’t you think that all the “kewl” stuff listed above is just tools, no?
Maybe complicated in places, but just tools. Like a hammer. They help me
sometimes to design some systems and reduce my development time. But “as is”,
as stand-alone entities, they are quite pointless and useless.

I don’t need a database if I have no data to store. I don’t need modern
computer graphics power if I have nothing to show. I don’t need a
distributed networking cluster if I have nothing to calculate. I don’t need
a compiler if I have no ideas to express. And for sure an operating system
is useless for me if I don’t have something to run :slight_smile:

Well, the idea is simple: I don’t point at a limited set of tools and treat
them as the main achievement, but rather at the applications which use these
achievements. End systems. Complete solutions. Complex designs which include
a variety of activities including science, IT, production, application and
exploitation and so on. Where IT is maybe a very important part, but just a
part of the whole design and not an end in itself.

This kind of system, regardless of how simple or complex it is, cannot be
boring or become antiquated :slight_smile: IMHO of course.

Interesting point; if I may summarize, you’re basically saying (and Andrew Thomas
has said similar things) that the things that were once considered “kewl” are
now “mainstreamed” and have lost their interest. Granted, you’re taking a
slightly different twist, and arguing further that the “next layer” is applications,
and not just the tools. To me, that’s “just” a systems-integration job; if
you have nice, decent, clean tools, “snapping together” applications becomes
easier.

I guess what I’m trying to get a sense of is, regardless of whether we call it
an application or a tool, what’s the next “kewl” thing? At some point, applications
become tools anyway – for example, you may now have a complex production application
that has “simply” become a tool for the next layer of systems integration. We just
“snap together” these applications to create the next layer…

Cheers,
-RK



Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.

“Robert Krten” <nospam88@parse.com> wrote in message
news:ad1k88$haa$4@inn.qnx.com

Ian Zagorskih <NOSPAM-ianzag@mail.ru> wrote:


Interesting point; if I may summarize, you’re basically saying (and Andrew
Thomas has said similar things) that the things that were once considered
“kewl” are now “mainstreamed” and have lost their interest. Granted, you’re
taking a slightly different twist, and arguing further that the “next layer”
is applications, and not just the tools. To me, that’s “just” a
systems-integration job; if you have nice, decent, clean tools, “snapping
together” applications becomes easier.

I guess what I’m trying to get a sense of is, regardless of whether we call
it an application or a tool, what’s the next “kewl” thing? At some point,
applications become tools anyway – for example, you may now have a complex
production application that has “simply” become a tool for the next layer of
systems integration. We just “snap together” these applications to create
the next layer…

It depends. Integration of applications is also just a tool or technique.
Doing this, I guess a person has some higher, common, ambitious idea which
drives him or her; what the precious time and effort are being spent for
should be realized.

For example, research. Geophysics as in our case, astronomy, physics,
chemistry, mathematics, etc. Perceiving the object, as part of the larger
idea of perceiving the universe, which by definition is not limited by
anything at any point, so on the one hand it will never be perceived
completely, but on the other hand it will never disappoint the researcher.
The question is which objects are being investigated. Just an IMHO: IT is an
artificial entity which exists only in human minds and was designed to help
humans. Just to help with something else. Like a wheel which helps me to
move from point A to point B. So when the helper satisfies my requirements,
I can assume it’s been completed, and as a result it has reached its upper
point, i.e. the End. On the other hand, dealing with some entity which was
not created by humans, like Nature, I guess we will never reach the End. It
is unlimited, and you’ll never be able to say, “So far so what, we’re kewl,
we know it all, we found it all → we’re bored → so what’s next?”.

If you are afraid of one day finding the end point - deal with the full
infinite continuum :slight_smile:

ps: this thread looks a bit odd… it’s more like philosophy.


// wbr


The last turn of the century (1800 turning to 1900) had scientists falling
over themselves stating that everything that needed to be found out had been
found, etc. Chemistry was a closed field, for instance. Hindsight tells us
they were deeply wrong. You are starting to sound like them. The fact that
you are totally proficient in writing ‘x++;’ in C does not imply that the
fun is over and you can safely retire. It means that your time in SW
kindergarten is over; choose a direction.

An engineering approach to a ‘real’ problem is to split it up into
sub-problems, and to continue to do so until the problem presents itself as
‘done’ or so mundane that it is not fun and therefore not worth the bother.
Forgetting to test whether the process of subdivision (re)moved the problem
into the ‘virtual’, and not understanding why the result/product fixes
nothing (and does not sell), concludes the process.

George ;-))

“Robert Krten” <nospam88@parse.com> wrote in message
news:ad1jmb$haa$2@inn.qnx.com

Why would they be burning? :slight_smile:
My main point was, “it’s been done” – so now that it’s been done (i.e., LISP
has been “mainstreamed”, neural nets have been “mainstreamed”, etc.), the
real question is, “what’s next?” AI was only one example in my “ennui” of the
software field in general. The same thing could be said (and I did say it)
about OS’s and databases; they too have been “mainstreamed” and have lost
their charm and mystique.

We have different points of view here, I think. I tend to agree with Ian
(at least I think I’m agreeing with Ian) that it’s not meaningful to look at
the tools and say “It’s been done, so it’s boring”. LISP is a tool, like a C
compiler or a relational database. It fulfils a particular kind of modelling
requirement and programming paradigm. If the job you are trying to do does
not work well within that paradigm, don’t use the tool. The same holds true
for neural networks. They are interesting tools that take a fascinating
approach to computer learning. It’s not that “neural network” == “artificial
intelligence”, but that artificial intelligence, complete with artificial
imperfection, is most likely to be achieved through a clever use or
implementation of a neural network. And that has not been done. Rather than
be bored by the fact that tools exist, be excited by the avenues of
invention that they open up. Hammers have been around since I was quite
young, yet they are still used to make engineering and architectural wonders
like the (sinking, but not due to hammers) Osaka airport.

That’s why I suggested an “Alien Intelligence” as the “true” goal of AI; not
the mimicry of an evolved-over-thousands-of-generations human brain, but
something that can “think”.

That’s one way to characterize the existing philosophical divide between the
artificial intelligence camp and the machine intelligence camp. The AI
people want to create a machine that passes the Turing test. The machine
intelligence people want to create a machine that is capable of intelligent
behaviour and learning within a problem domain.

Cheers,
Andrew

Andrew Thomas <andrew@cogent.ca> wrote:

That’s one way to characterize the existing philosophical divide between the
artificial intelligence camp and the machine intelligence camp. The AI
people want to create a machine that passes the Turing test. The machine
intelligence people want to create a machine that is capable of intelligent
behaviour and learning within a problem domain.

I guess my “practical definition” of a particular class of “useful” AI/MI
would be a language translator that did better than the horror stories you
see on the web, where some English->X->English translation of “my mother
gave me lunch” comes out as “the ate princess radish green wedding” :slight_smile:

BTW, Andrew, what’s your opinion on Hofstadter’s work? Have you read his
“Fluid Concepts and Creative Analogies” book?

Cheers,
-RK


Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.

Yeah, even your whine has been done before. Should we pity poor Tom
Clancy, Terry Pratchett et al because they’re in the same boat?

It’s time for you to introspect. Where did the fun really come from
before? Learning about compilers? Building compilers? Hanging around
with other compiler writers? Having someone tell you what a good
compiler you wrote? Having someone tell you what a good
compiler writer you were? etc …

To the guy that wrote sysmon, which I use almost every day, may I
offer an observation: sysmon is a gem - simple, reliable, useful,
although not the first or last of its kind. Do things like that
again. Consider it your contribution to humanity’s well-being.

Richard


Richard Kramer <rrkramer@kramer-smilko.com> wrote:

Yeah, even your whine has been done before. Should we pity poor Tom
Clancy, Terry Pratchett et al because they’re in the same boat?

I hardly claimed my whine was original :slight_smile:

It’s time for you to introspect. Where did the fun really come from
before? Learning about compilers? Building compilers? Hanging around
with other compiler writers? Having someone tell you what a good
compiler you wrote? Having someone tell you what a good
compiler writer you were? etc …

To an extent, that’s why I started this thread; I noticed that some of the
fun is gone, and now I’m fishing for ideas…

Maybe, reflecting on those days, it was the “arcane knowledge” aspect of it
that intrigued me – knowing how it works, and being able to explain some
of the “really neat tricks” that were in the given tool/product. Now that
compilers et al are “well known”, there isn’t an audience :wink: I guess I’m
just an instructor at heart :slight_smile:

To the guy that wrote sysmon, which I use almost every day, may I
offer an observation: sysmon is a gem - simple, reliable, useful,
although not the first or last of its kind. Do things like that
again. Consider it your contribution to humanity’s well-being.

Sysmon’s been done :slight_smile: VAX/VMS “mon proc/topc” is what it’s based on.
(But thanks for the kind words!)

Actually, one of the next projects I was considering is a “resource manager
builder”. The idea is that it would be a CGI-BIN kind of thing that sits
on my website, and you fill in forms about what kind of resource manager
you want (e.g., “single/multi threaded”, or “directory vs file”, etc).
Once you’ve filled out all that stuff, it generates code, and gives you a
clicky for a .tar.gz that contains the code. It would even generate
prototype-style code for your I/O and Connect functions, maybe generate your
ISRs, that kind of thing…
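(For the curious, here’s roughly the kind of single-threaded skeleton such a
generator would emit – a sketch only; “/dev/example” is a made-up pathname,
and the custom I/O and Connect overrides are exactly what the form answers
would fill in:)

```c
/* Sketch of the single-threaded resource manager skeleton that the
 * generator might emit.  "/dev/example" is a made-up pathname, and
 * most error checking is omitted for brevity. */
#include <stdlib.h>
#include <string.h>
#include <sys/iofunc.h>
#include <sys/dispatch.h>

static resmgr_connect_funcs_t connect_funcs;
static resmgr_io_funcs_t      io_funcs;
static iofunc_attr_t          attr;

int main(int argc, char **argv)
{
    dispatch_t         *dpp;
    dispatch_context_t *ctp;
    resmgr_attr_t       resmgr_attr;

    dpp = dispatch_create();

    memset(&resmgr_attr, 0, sizeof resmgr_attr);
    resmgr_attr.nparts_max   = 1;
    resmgr_attr.msg_max_size = 2048;

    /* start with the POSIX-layer defaults; the generator would then
     * plug in the I/O and Connect functions you asked for */
    iofunc_func_init(_RESMGR_CONNECT_NFUNCS, &connect_funcs,
                     _RESMGR_IO_NFUNCS, &io_funcs);
    iofunc_attr_init(&attr, S_IFNAM | 0666, NULL, NULL);

    resmgr_attach(dpp, &resmgr_attr, "/dev/example", _FTYPE_ANY,
                  0, &connect_funcs, &io_funcs, &attr);

    ctp = dispatch_context_alloc(dpp);
    while (1) {                 /* the classic receive loop */
        if ((ctp = dispatch_block(ctp)) == NULL)
            break;              /* error in the receive loop */
        dispatch_handler(ctp);
    }
    return EXIT_SUCCESS;
}
```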

Cheers,
-RK



Robert Krten, PARSE Software Devices +1 613 599 8316.
Realtime Systems Architecture, Books, Video-based and Instructor-led
Training and Consulting at www.parse.com.
Email my initials at parse dot com.