
Saturday, August 30, 2008

On the Preservation of Goals in Self-Modifying AI Systems

I wrote down some speculative musings on the preservation of goals in self-modifying AI systems a couple of weeks back; you can find them here:

http://www.goertzel.org/papers/PreservationOfGoals.pdf

The basic issue is: what can you do to mitigate the problem of "goal drift", wherein an AGI system starts out with a certain top-level goal governing its behavior, but then gradually modifies its own code in various ways, and ultimately -- through inadvertent consequences of the code revisions -- winds up drifting into having different goals than it started with. I certainly didn't answer the question, but I came up with some new ways of thinking about, and formalizing, the problem that I think might be interesting....

While the language of math is used in the paper, don't be fooled into thinking I've proved anything there ... the paper just contains speculative ideas without any real proof, just as surely as if they were formulated in words without any equations. I just find that math is sometimes the clearest way to say what I'm thinking, even if I haven't come close to proving the correctness of what I'm thinking yet...

An abstract of the speculative paper is:


Toward an Understanding of the Preservation of Goals
in Self-Modifying Cognitive Systems


Ben Goertzel



A new approach to thinking about the problem of “preservation of AI goal systems under repeated self-modification” (or, more compactly, “goal drift”) is presented, based on representing self-referential goals using hypersets and multi-objective optimization, and understanding self-modification of goals in terms of repeated iteration of mappings. The potential applicability of results from the theory of iterated random functions is discussed. Some heuristic conclusions are proposed regarding what kinds of concrete real-world objectives may best lend themselves to preservation under repeated self-modification. While the analysis presented is semi-rigorous at best, and highly preliminary, it does intuitively suggest that important humanly-desirable AI goals might plausibly be preserved under repeated self-modification. The practical severity of the problem of goal drift remains unresolved, but a set of conceptual and mathematical tools is proposed which may be useful for more thoroughly addressing the problem.
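
To give a concrete (if toy) flavor of the "iterated random functions" angle mentioned in the abstract, here is a minimal sketch added for the blog version. Everything in it -- the vector representation of a goal, the "anchor", the pull and noise parameters -- is an illustrative assumption of mine, not the paper's actual formalism. The point it conveys is just the basic one from the theory of iterated random functions: if each self-modification acts like a random map that contracts, on average, back toward an explicitly retained goal representation, drift stays bounded; without that contraction, the goal simply random-walks away.

import random

# Toy illustration only (mine, not the paper's formalism): a "goal" is a vector
# of objective weights, and each self-modification step applies a randomly
# chosen map to it.  The 'pull' parameter contracts the goal back toward an
# explicitly stored anchor (the original top-level goal); 'noise' models
# inadvertent changes introduced by code revisions.

def drift_step(goal, anchor, pull=0.9, noise=0.05):
    return [pull * g + (1 - pull) * a + random.gauss(0, noise)
            for g, a in zip(goal, anchor)]

def drift_after(n_steps, dim=5, pull=0.9, noise=0.05):
    anchor = [1.0] * dim              # the original top-level goal
    goal = list(anchor)
    for _ in range(n_steps):
        goal = drift_step(goal, anchor, pull, noise)
    # Euclidean distance from the original goal = accumulated drift
    return sum((g - a) ** 2 for g, a in zip(goal, anchor)) ** 0.5

# pull=1.0: no contraction at all, so drift accumulates like a random walk.
# pull=0.9: each revision is contracting on average, so drift stays bounded.
print(round(drift_after(10000, pull=1.0), 2))
print(round(drift_after(10000, pull=0.9), 2))

Of course, the interesting (and unresolved) question in the paper is what could play the role of the anchor and the contraction for a real AGI's goal system.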

Wednesday, August 27, 2008

Playing Around with the Logic of Play

On the AGI email list recently, someone asked about the possible importance of creating AGI systems capable of playing.

I believe that, like many other qualities of mind, the interest in and capability for playing is something that should emerge from an AGI system rather than being explicitly programmed in.

It may be that some bias toward play could be productively encoded in an AGI system ... I'm still not sure of this.

But anyway, in the email list discussion I formulated what seemed to be a simple and clear characterization of the "play" concept in terms of uncertain logical inference ... which I'll recount here (cleaned up a slight bit for blog-ification).

And then at the end of the blog post I'll give some further ideas which have the benefit of making play seem a bit more radical in nature ... and, well, more playful ...

Fun ideas to play with, at any rate 8-D

My suggestion is that play emerges (... as a consequence of other general cognitive processes...) in any sufficiently generally-intelligent system that is faced with goals that are very difficult for it.

If an intelligent system has a goal G which is time-consuming or difficult to achieve ... it may then synthesize another goal G1 which is easier to achieve.

We then have the uncertain syllogism


Achieving G implies reward

G1 is similar to G

|-


Achieving G1 implies reward


(which in my Probabilistic Logic Networks framework would be most naturally modeled as an "intensional implication".)

As links between goal-achievement and reward are to some extent modified by uncertain inference (or an analogous process, implemented e.g. in neural nets), we thus have the emergence of "play" ... in cases where G1 is much easier to achieve than G ...
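
To make the shape of that inference concrete, here is a minimal numeric sketch. The product rule in it is a deliberately crude stand-in I'm using for the blog post, not the actual PLN intensional-implication formula, and the example goals and numbers are made up purely for illustration.

def transfer_reward_link(reward_strength_G, similarity_G_G1, base_confidence=0.9):
    # Toy version of the syllogism:
    #     Achieving G implies reward        <strength s1>
    #     G1 is similar to G                <strength s2>
    #     |-
    #     Achieving G1 implies reward       <strength roughly s1 * s2>
    # The more similar G1 is to G, the more of G's reward-association
    # it inherits; confidence is discounted by the same similarity.
    strength = reward_strength_G * similarity_G_G1
    confidence = base_confidence * similarity_G_G1
    return strength, confidence

# Example: G = "prevail in a real fight" (hard), G1 = "roughhousing" (easy).
s, c = transfer_reward_link(reward_strength_G=0.9, similarity_G_G1=0.7)
print("Achieving G1 implies reward: strength=%.2f, confidence=%.2f" % (s, c))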

Of course, if working toward G1 is actually good practice for working toward G, this may give the intelligent system (if it's smart and mature enough to strategize), or evolution, an impetus to create additional bias toward the pursuit of G1.

In this view, play is a quite general structural phenomenon ... and the play that human kids do with blocks and sticks and so forth is a special case, oriented toward ultimate goals G involving physical manipulation.

And the knack in gaining anything from play (for the goals that originally inspired the play) is in appropriate similarity-assessment ... i.e. in measuring similarity between G and G1 in such a way that achieving G1 actually teaches things useful for achieving G.

But of course, play often has indirect benefits and assists with goals other than the ones that originally inspired it ... and, due to its often stochastic, exploratory nature, it can also have the effect of goal drift ... of causing the mind's top-level goals to change over time ... (hold that thought in mind, I'll return to it a little later in this blog post...)

The key to the above syllogism seems to be similarity-assessment. Examples of the kind of similarity I'm thinking of:

  • The analogy between chess or go and military strategy
  • The analogy between "roughhousing" and actual fighting

In logical terms, these are intensional rather than extensional similarities.

So for any goal-achieving system that has long-term goals which it can't currently effectively work directly toward, play may be an effective strategy...

In this view, we don't really need to design an AI system with play in mind. Rather, if it can explicitly or implicitly carry out the above inference, concept-creation and subgoaling processes, play should emerge from its interaction with the world...

Note that in this view play has nothing intrinsically to do with having a body. An AGI concerned solely with mathematical theorem proving would also be able to play...

Another interesting thing to keep in mind when discussing play is subgoal alienation.

When G1 arises as a subgoal of G, it may nevertheless happen that G1 survives as a goal even if G disappears, or that G1 remains important even if G loses importance. One may wish to design AGI systems to minimize this phenomenon, but it certainly occurs strongly in humans.

Play, in some cases, may be an example of this. We may retain the desire to play games that originated as practice for G, even though we have no interest in G anymore.

And, subgoal alienation may occur on the evolutionary as well as the individual level: an organism may retain interest in kinds of play that resemble its evolutionary predecessors' serious goals, but not its own!

Bob may have a strong desire to play with his puppy ... a desire whose roots were surely encoded in his genome due to the evolutionary value in having organisms like to play with their own offspring and those of their kin ... yet, Bob may have no desire to have kids himself ... and may in fact be sterile, dislike children, and never do anything useful-to-himself that is remotely similar to his puppy-playing obsession.... In this case, Bob's "purely playful" desire to play with his puppy is a result of subgoal alienation on the evolutionary level. On the other hand, it may also help fulfill other goals of his, such as relaxation and the need for physical exercise.

This may seem a boring, cold, clinical diagnosis of something as unserious and silly as playing. For sure, when I'm playing (with my kids ... or my puppy! ... or myself ... er ... wait a minute, that doesn't work in modern English idiom ;-p) I'm not thinking about subgoal alienation and inference and all that.

But, when I'm engaged in the act of sexual intercourse, I'm not usually thinking about reproduction either ... and of course we have another major case of evolution-level and individual-level subgoal alienation right there....

In fact, writing blog entries like this one is largely a very dry sort of playing! ... which helps, I think, to keep my mind in practice for more serious and difficult sorts of mental exercise ... yet even if it has this origin and purpose in a larger sense, in the moment the activity seems to be its own justification!

Still, I have to come back to the tendency of play to give rise to goal drift ... this is an interesting twist that apparently relates to the wildness and spontaneity that exists in much playing. Yes, most particular forms of play do seem to arise via the syllogism I've given above. Yet, because it involves activities that originate as simulacra of goals that go BEYOND what the mind can currently do, play also seems to have an innate capability to drive the mind BEYOND its accustomed limits ... in a way that often transcends the goal G that the play-goal G1 was designed to emulate....

This brings up the topic of meta-goals: goals that have to do explicitly with goal-system maintenance and evolution. It seems that playing is in fact a meta-goal, quite separately from the fact that each instance of playing generally involves an imitation of some other specific real-life goal. Playing is a meta-goal that should be valued by organisms that value growth and spontaneity ... including growth of their goal systems in unpredictable, adaptive ways....

w0000000t!!!!

Friday, August 22, 2008

Machine Consciousness (report from the Nokia Workshop)

I just got finished with the two-day Workshop on Machine Consciousness that Pentti Haikonen organized at Nokia Research, in Helsinki.

I probably wouldn't have come to Finland just for this gathering, but it happened I was really curious to meet the people at RealXTend, the Finnish open-source-virtual-worlds team Novamente has been collaborating with (with an aim toward putting our virtual pets in RealXTend) ... so the workshop plus RealXTend was enough to get me on a plane to Helsinki (with a side trip to Oulu where RealXTend is located).

This blog post quasi-randomly summarizes a few of my many reactions to the workshop....

Many of the talks were interesting, but as occurs at many conferences, the chats in the coffee and meal breaks were really the most rewarding part for me...

I had not met Haikonen personally before, though I'd read his books; and I also met a lot of other interesting people, both Finnish and international....

I had particularly worthwhile chats with a guy named Harri Valpola, a Finnish computational neuroscience researcher who is also co-founder of an AI company initially focused on innovative neural-net approaches to industrial robotics.

Harri Valpola is the first person I've talked to who seems to have independently conceived a variant of my theory of how brains may represent and generate abstract knowledge (such as is represented in predicate logic using variables and quantifiers). In brief, my theory is that the brain can re-code a neural subnetwork N so that the connection-structure of N serves as input to some other subnetwork M. This lets the brain construct "higher order functions" as used in combinatory logic or Haskell, which provide a mathematically equivalent alternative to traditional predicate-logic formulations. Harri's ideas did not seem exactly identical to this, but he did have the basic idea that neural nets can generate abstraction by having subnets take as input aspects of the connection structure of other nets.
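
For readers who haven't played with combinatory logic, the higher-order-function point can be hinted at with a trivial sketch (in Python rather than Haskell, and with made-up predicates; the intended analogy is that subnetwork M plays the role of the higher-order function, and the connection structure of subnetwork N plays the role of the function it consumes). Quantified, variable-laden statements like "for all x, P(x) implies Q(x)" can be rewritten purely in terms of functions that take other functions as arguments:

# Illustrative sketch only: abstract, quantified knowledge expressed with
# higher-order functions (functions that consume other functions) instead of
# explicit variables and quantifiers.

def forall(domain, predicate):
    # "For all x in domain, predicate(x)" -- the caller never names a variable
    return all(predicate(x) for x in domain)

def implies(p, q):
    # Lift implication to the predicate level: returns a new predicate
    return lambda x: (not p(x)) or q(x)

def is_multiple_of_four(n):
    return n % 4 == 0

def is_even(n):
    return n % 2 == 0

# "For all n in 0..99, multiple-of-four(n) implies even(n)"
print(forall(range(100), implies(is_multiple_of_four, is_even)))   # True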

Once again I was struck by the way different people, from totally different approaches, may arrive at parallel ideas. I arrived at these particular ideas via combinatory logic and then sought a neuroscience analogue to combinatory logic's higher-order functions, whereas Harri arrived at them via a more straightforward neuroscience route. So our approaches have different flavors and suggest different research directions ... but ultimately they may well converge on the same core idea.

I don't have time to write summaries of the various talks I saw or conversations I had, so I'll just convey a few general impressions of the state of "machine consciousness" research that I got while at the conference.

First of all, I'm grateful to Pentti Haikonen for organizing the workshop -- and I'm pleased to see that the notion of working on building conscious, intelligent machines, in the near term, has become so mainstream. Haikonen is a researcher at a major industry research lab, and he's explicitly saying that if the ideas in his recent book Conscious Robots are implemented, the result will be a conscious intelligent robot. Nokia does not seem to have placed a tremendous amount of resources behind this conscious-robot research program at present, but at least they are taking it seriously, rather than adopting the skeptical attitude one might expect from talking to the average member of the AAAI. (My own view is that Haikonen's architecture lacks many ingredients needed to achieve human-level AGI, but could quite possibly produce a conscious animal-level intelligence, which would certainly be a very fascinating thing....)

The speakers were a mix of people working on building AI systems aimed at artificial consciousness, and philosophers investigating the nature of consciousness in a theoretical way. A few individuals with neuroscience background were present, and there was a lot of talk about brains, but the vast majority of speakers and participants were from the computer science, engineering or philosophy worlds, not brain science. The participants were a mix of international speakers, local Finns with an interest in the topic (largely from local universities), and Nokia Research staff (some working in AI-related areas, some with other professional foci but a general interest in machine consciousness).

Regarding the philosophy of consciousness, I didn't feel any really new ground was broken at the workshop, though many of the discussants were insightful. As a generalization, there was a divide between participants who felt that essentially any machine with a functioning perception-action-control loop was conscious, versus those who felt that a higher level of self-reflection was necessary.

My own presentation from the workshop is here ... most of it is cut and pasted from prior presentations on AGI, but the first 10 slides or so are new and discuss the philosophy of consciousness specifically (covering content previously given in my book The Hidden Pattern and various blog posts). I talked for half an hour and spent the first half on philosophy of consciousness, and the second half on AGI stuff.

I was the only vocal panpsychist at the workshop ... i.e. the only one maintaining that everything is conscious, and that it makes more sense to think of the physical world as a special case of consciousness (Peirce's "matter is mind hide-bound with habits") than to think of consciousness as a special case of the physical world. However, one Finnish philosopher in the audience came up to me during a coffee break and told me he thought my perspective made sense, and that he was happy to see some diversity of perspective at the workshop (i.e. to see a panpsychist there alongside all the hard-core empiricists of various stripes).

My view on consciousness is that raw consciousness, Peircean First, is an aspect of everything ... so that in a sense, rocks and numbers are conscious, not just mice and people. However, different types of entities may have qualitatively different kinds of consciousness. For instance, systems that are capable of modeling themselves and intelligently governing their behavior based on their self-models, may have what I call "reflective consciousness." This is what I have tried to model with hypersets, as discussed in my presentation and in a prior blog post.
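
The hyperset treatment itself is in the presentation and the earlier blog post; here, just to hint at the flavor in code, is a trivial and entirely illustrative picture of the kind of non-well-founded self-reference involved: a system whose model of the world contains the very system doing the modeling, so that following the "self" link loops back rather than bottoming out.

# Trivial hint only -- not the hyperset formalism from the presentation.
class Agent:
    def __init__(self, name):
        self.name = name
        self.world_model = {}

    def observe_self(self):
        # The self-model points back at the agent doing the modeling,
        # loosely analogous to a hyperset containing itself as a member.
        self.world_model["self"] = self

bob = Agent("bob")
bob.observe_self()
print(bob.world_model["self"] is bob)                      # True
print(bob.world_model["self"].world_model["self"] is bob)  # True: the loop never bottoms out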

Another contentious question was whether simple AI systems can display consciousness, or whether there's a minimal level of complexity required for it. My view is that reflective consciousness probably does require a fairly high level of complexity -- and, furthermore, I think it's something that pretty much has to emerge from an AI system through its adaptive learning and world-interaction, rather than being explicitly programmed-in. My guess is that an AI system is going to need a large dynamic knowledge-store and a heck of a lot of experience to be able to usefully infer and deploy a self-model ... whereas, many of the participants in the workshop seemed to think that reflective consciousness could be created in very simple systems, so long as they had the right AI architecture (e.g. a perception-action-control loop).

Since my view is that
  • consciousness is an aspect of everything
  • enabling the emergence of reflective consciousness is an important part of achieving advanced AGI
my view of machine consciousness as a field is that
  • the study of consciousness in general is part of philosophy, or general philosophical psychology
  • the study of reflective consciousness is an important part of cognitive science, which AGI designers certainly need to pay attention to
One thing we don't know, for example, is which properties of human reflective consciousness emanate from general properties of reflective consciousness itself, and which ones are just particular to the human brain. This sort of fine-grained question didn't get that much time at the workshop, and I sorta wish it had -- but, maybe next year!

As an example, the "7 +/- 2" property of human short-term memory seems to have a very big impact on the qualitative nature of human reflective consciousness ... and I've always wondered to what extent it represents a fundamental property of STM versus just being a limitation of the brain. It's worth noticing that other mammals have basically the same STM capacity as humans do.

(I once speculated that the size of STM is tied to the octonion algebra (an algebra that I discussed in another, also speculative cog-sci context here), but I'm not really so sure about that ... I imagine that even if there are fundamental restrictions on rapid information processing posed by algebraic facts related to octonions, AIs will have tricky ways of getting around these, so that these fundamental restrictions would be manifested in AIs in quite different ways than via limited STM capacity.)

However, it's hard to ever get to fine-grained points like that in broad public discussions of consciousness ... even among very bright, well-intentioned expert researchers ... because discussion of consciousness seems to bring up even more contentious, endless, difficult arguments among researchers than the discussion of general intelligence does ... in fact consciousness is a rare topic that is even harder to discuss than the Singularity!! This makes consciousness workshops and conferences fun, but also means that they tend to get dominated by disagreements-on-the-basics, rather than in-depth penetration of particular issues.

It's kind of hard for folks who hold different fundamental views on consciousness -- and, in many cases, also very different views on what constitute viable approaches to AGI -- to get into deep, particular, detailed discussions of the relationship between consciousness and particular AI systems!

In June 2009 there will be a consciousness conference in Hong Kong. This should be interesting on the philosophy side -- if I go there, I bet I won't be the only panpsychist ... given the long history of panpsychism in various forms in Oriental philosophy. I had to laugh when one speaker at the workshop got up and stated that, in studying consciousness, he not only didn't have any answers, he didn't know what were the interesting questions. I was tempted to raise my hand and suggest he take a look at Dharmakirti and Dignaga, the medieval Buddhist logicians. Buddhism, among other Oriental traditions of inquiry, has a lot of very refined theory regarding different states of consciousness ... and, while these traditions have probably influenced some modern consciousness studies researchers in various ways (for example my friend Allan Combs, who has sought to bridge dynamical systems theory and Eastern thought), they don't seem to have pervaded the machine-consciousness community very far. (My own work being an exception ... as the theory of mind on which my AI work is based was heavily influenced by Eastern cognitive philosophy, as recounted in The Hidden Pattern.)

I am quite eager to see AI systems like my own Novamente Cognition Engine and OpenCogPrime (and Haikonen's neural net architecture, and others!!) get to the point where we can study the precise dynamics by which reflective consciousness emerges from them. Where we can ask the AI system what it feels or thinks, and see which parts of its mind are active in relation to the items it identifies as part of its reflective consciousness. This, along with advances in brain imaging and allied advances in brain theory, will give us a heck of a lot more insight....