Like many other qualities of mind, I believe that the interest in and capability for playing is something that should emerge from an AGI system rather than being explicitly programmed in.
It may be that some bias toward play could be productively encoded in an AGI system ... I'm still not sure of this.
But anyway, in the email list discussion I formulated what seemed to be a simple and clear characterization of the "play" concept in terms of uncertain logical inference ... which I'll recount here (cleaned up a slight bit for blog-ification).
And then at the end of the blog post I'll give some further ideas which have the benefit of making play seem a bit more radical in nature ... and, well, more playful ...
Fun ideas to play with, at any rate 8-D
My suggestion is that play emerges (... as a consequence of other general cognitive processes...) in any sufficiently generally-intelligent system that is faced with goals that are very difficult for it.
If an intelligent system has a goal G which is time-consuming or difficult to achieve ... it may then synthesize another goal G1 which is easier to achieve
We then have the uncertain syllogism
Achieving G implies reward
G1 is similar to G
|-
Achieving G1 implies reward
(which in my Probabilistic Logic Networks framework would be most naturally modeled as an "intensional implication".)
As links between goal-achievement and reward are to some extent modified by uncertain inference (or analogous process, implemented e.g. in neural nets), we thus have the emergence of "play" ... in cases where G1 is much easier to achieve than G ...
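The syllogism above can be sketched as a toy calculation. This is a minimal illustrative model, not the actual PLN machinery: I'm assuming a crude deduction-like rule where implication strength transfers multiplicatively through similarity, and weighting the result by how much easier G1 is than G. All function names and numbers are hypothetical.

```python
# Toy sketch of the play syllogism. The truth-value model here is an
# illustrative assumption, not the actual PLN inference rules.

def transfer_reward_implication(reward_strength_G, similarity_G_G1):
    """Estimate the strength of 'achieving G1 implies reward', given
    'achieving G implies reward' and sim(G, G1).
    Crude deduction-like rule: strengths multiply."""
    return reward_strength_G * similarity_G_G1

def play_value(reward_strength_G, similarity_G_G1, difficulty_G, difficulty_G1):
    """Expected value of pursuing G1 as 'play': the transferred reward
    strength, weighted by how much easier G1 is than G."""
    transferred = transfer_reward_implication(reward_strength_G, similarity_G_G1)
    ease_ratio = difficulty_G / max(difficulty_G1, 1e-9)
    return transferred * ease_ratio

# Military strategy (G) is very hard; chess (G1) is only moderately
# similar, but vastly more tractable -- so play looks worthwhile.
value = play_value(reward_strength_G=0.9, similarity_G_G1=0.6,
                   difficulty_G=100.0, difficulty_G1=5.0)
```

The point of the sketch is just that play's appeal grows with both the similarity assessment and the ease gap; get the similarity measure wrong and the transferred reward strength is illusory.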
Of course, if working toward G1 is actually good practice for working toward G, this may give the intelligent system (if it's smart and mature enough to strategize), or evolution, impetus to create additional bias toward the pursuit of G1.
In this view, play is a quite general structural phenomenon ... and the play that human kids do with blocks and sticks and so forth is a special case, oriented toward ultimate goals G involving physical manipulation
And the knack in gaining anything from play (for the goals that originally inspired the play) is in appropriate similarity-assessment ... i.e. in measuring similarity between G and G1 in such a way that achieving G1 actually teaches things useful for achieving G.
But of course, play often has indirect benefits and assists with goals other than the ones that originally inspired it ... and, due to its often stochastic, exploratory nature it can also have an effect of goal drift ... of causing the mind's top-level goals to change over time ... (hold that thought in mind, I'll return to it a little later in this blog post...)
The key to the above syllogism seems to be similarity-assessment. Examples of the kind of similarity I'm thinking of:
- The analogy between chess or go and military strategy
- The analogy between "roughhousing" and actual fighting
In logical terms, these are intensional rather than extensional similarities
So for any goal-achieving system that has long-term goals which it can't currently effectively work directly toward, play may be an effective strategy...
In this view, we don't really need to design an AI system with play in mind. Rather, if it can explicitly or implicitly carry out the above inference, concept-creation and subgoaling processes, play should emerge from its interaction with the world...
Note that in this view play has nothing intrinsically to do with having a body. An AGI concerned solely with mathematical theorem proving would also be able to play...
Another interesting thing to keep in mind when discussing play is subgoal alienation.
When G1 arises as a subgoal of G, it may nevertheless happen that G1 survives as a goal even after G disappears, or that G1 remains important even as G loses importance. One may wish to design AGI systems to minimize this phenomenon, but it certainly occurs strongly in humans.
Play, in some cases, may be an example of this. We may retain the desire to play games that originated as practice for G, even though we have no interest in G anymore.
And, subgoal alienation may occur on the evolutionary as well as the individual level: an organism may retain interest in kinds of play that resemble its evolutionary predecessors' serious goals, but not its own!
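A minimal sketch of this dynamic, under the assumption (mine, not anything from PLN) that a subgoal receives its own importance value at derivation time and is not garbage-collected with its parent. Class and field names are hypothetical.

```python
# Toy sketch of subgoal alienation: a subgoal's importance, once
# established, persists even after its parent goal is removed.
# The inheritance rule and numbers are illustrative assumptions.

class GoalSystem:
    def __init__(self):
        self.importance = {}   # goal name -> importance in [0, 1]
        self.parent = {}       # subgoal -> the goal it was derived from

    def add_goal(self, g, importance):
        self.importance[g] = importance

    def derive_subgoal(self, g1, g, similarity):
        # The subgoal inherits importance from its parent, scaled by
        # the assessed similarity...
        self.parent[g1] = g
        self.importance[g1] = self.importance[g] * similarity

    def remove_goal(self, g):
        # ...but removing the parent does NOT remove the subgoal:
        # by now G1 carries its own importance value.
        self.importance.pop(g, None)

gs = GoalSystem()
gs.add_goal("win_wars", 0.8)
gs.derive_subgoal("play_chess", "win_wars", similarity=0.5)
gs.remove_goal("win_wars")
# "play_chess" survives even though "win_wars" is gone
```

Whether a real AGI design should decay alienated subgoals, or let them drift into goals in their own right, is exactly the design question the human case raises.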
Bob may have a strong desire to play with his puppy ... a desire whose roots were surely encoded in his genome due to the evolutionary value in having organisms like to play with their own offspring and those of their kin ... yet, Bob may have no desire to have kids himself ... and may in fact be sterile, dislike children, and never do anything useful-to-himself that is remotely similar to his puppy-playing obsession.... In this case, Bob's "purely playful" desire to play with his puppy is a result of subgoal alienation on the evolutionary level. On the other hand, it may also help fulfill other goals of his, such as relaxation and the need for physical exercise.
This may seem a boring, cold, clinical diagnosis of something as unserious and silly as playing. For sure, when I'm playing (with my kids ... or my puppy! ... or myself ... er ... wait a minute, that doesn't work in modern English idiom ;-p) I'm not thinking about subgoal alienation and inference and all that.
But, when I'm engaged in the act of sexual intercourse, I'm not usually thinking about reproduction either ... and of course we have another major case of evolution-level and individual-level subgoal alienation right there....
In fact, writing blog entries like this one is largely a very dry sort of playing! ... which helps, I think, to keep my mind in practice for more serious and difficult sorts of mental exercise ... yet even if it has this origin and purpose in a larger sense, in the moment the activity seems to be its own justification!
Still, I have to come back to the tendency of play to give rise to goal drift ... this is an interesting twist that apparently relates to the wildness and spontaneity that exists in much playing. Yes, most particular forms of play do seem to arise via the syllogism I've given above. Yet, because it involves activities that originate as simulacra of goals that go BEYOND what the mind can currently do, play also seems to have an innate capability to drive the mind BEYOND its accustomed limits ... in a way that often transcends the goal G that the play-goal G1 was designed to emulate....
This brings up the topic of meta-goals: goals that have to do explicitly with goal-system maintenance and evolution. It seems that playing is in fact a meta-goal, quite separately from the fact of each instance of playing generally involving an imitation of some other specific real-life goal. Playing is a meta-goal that should be valued by organisms that value growth and spontaneity ... including growth of their goal systems in unpredictable, adaptive ways....
w0000000t!!!!
1 comment:
* Toward General "Play-I"?
Approaching this from a natural perspective, I wonder if the "play" model could be useful (in isolation of the mechanics) as a sort of "temporary upgrade".
The upgrade is part of a "trade" I observe as being part of play. This trade supports the analogy present in many of our play forms (as you mentioned, Chess to war).
In order to achieve play, participants need to agree to the parameters of that specific system.
* Fight Club
We simply say "let's play Chess" - but what we really mean is:
Black: "If you'll follow these rules, you may wage war on me, safely."
White: "Ok, ditto!"
Black: "Prepare to suffer, mortal."
Of course, play takes many more forms than systematic games - but even at its simplest there is such a trade involved.
* Let's Make a Deal
The human experience already has plenty of rules. Who wants more? But participants gain something for complying with the temporary amendments: They become kings, nations, superheroes. They become "supernormal" (if only temporarily).
Basically, they get upgraded.
Obviously, the kings, wars, and abilities are not "real", but that doesn't matter: They are completely real within the system - for as long as play is in effect. The distinction doesn't seem relevant in the simulated context of a processing framework.
{Btw: Play also happens alone (stop laughing), but a personal agreement is still taking place, to observe and enforce the extra rules. We all know instinctively that you can't get "something for nothing".}
One thing troubling me is how to fit "ball and paddle" into all of this. But perhaps the answer is simple: Ball and paddle is not play, because ball and paddle sucks. Seriously...does it even qualify?
* Little Red Bot Rod
One /could/ emulate the mechanics of play within the context of an AI framework: Trading "upgrades" for compliance, enforcing the play period, etc. It's probably a pretty straightforward proposition.
For example:
"If you limit your math operations to integers only, the system will increase your execution priority by 10%"
"If you wager 1.2 seconds of downtime, the system will double your memory buffer size for 1 minute"
Perhaps over time, a system may come to prefer responding to some tasks by engaging a "play mode".
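The trade the commenter describes — accept extra constraints for the duration of play, receive a temporary "upgrade" in return — can be sketched as follows. This is a speculative toy, not any existing framework's API; every class and parameter name is made up for illustration.

```python
# Sketch of the "trade" mechanic: an agent agrees to extra rules for
# the duration of play and receives a temporary upgrade in exchange.
# All names and values here are hypothetical.

class Agent:
    def __init__(self):
        self.priority = 1.0
    def accepts(self, constraints):
        return True  # a real agent would deliberate here
    def apply(self, upgrade):
        self.priority *= 1 + upgrade["priority_boost"]
    def revoke(self, upgrade):
        self.priority /= 1 + upgrade["priority_boost"]

class PlaySession:
    def __init__(self, constraints, upgrade, duration):
        self.constraints = constraints  # e.g. "integer math only"
        self.upgrade = upgrade          # e.g. {"priority_boost": 0.10}
        self.remaining = duration       # ticks of play left
        self.active = False

    def begin(self, agent):
        # Both sides "agree to the parameters of that specific system"
        if agent.accepts(self.constraints):
            self.active = True
            agent.apply(self.upgrade)

    def tick(self, agent):
        if self.active:
            self.remaining -= 1
            if self.remaining <= 0:
                # Play ends: the upgrade is revoked along with the rules
                agent.revoke(self.upgrade)
                self.active = False
```

The enforced expiry is the key design choice: the kings and superpowers are "completely real within the system" only while play is in effect, after which the agent's ordinary state is restored.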
Bots deciding to "hot rod" themselves?
But no, I suppose not: After all, if a framework provided capabilities to achieve goals faster, why withhold those capabilities until an algorithm determined the best way to use them?
* You Can't Eat Checkmate...but AIs Don't Eat.
There's no place (or need) in this version of "Play-I" (tm) for what is often the most important part of human games - the winner.
If that turned out to be the case, I wouldn't be too surprised. Mama always said "it's not whether you win or lose, it's how you game the play."
{Disclaimer: I couldn't implement very much of this at all, myself.}