Tuesday, September 20, 2011
Pursued on its own, this is a "narrow AI" approach, but it's also designed to be pursued in an AGI context, and integrated into an AGI system like OpenCog.
In very broad terms, these ideas are consistent with the integrative NLP approach I described in this 2008 conference paper. But the application of evolutionary learning is a new idea, which should allow a more learning-oriented integrative approach than the conference paper alluded to.
Refining and implementing these ideas would be a lot of work, probably the equivalent of a PhD thesis for a very good student.
Those with a pure "experiential learning" bent will not like the suggested approach much, because it involves making use of existing linguistic resources alongside experiential knowledge. However, there's no doubt that existing statistical and rule-based computational linguistics have made a lot of progress, in spite of not having achieved human-level linguistic performance. I think the outlined approach would be able to leverage this progress in a way that works for AGI and integrates well with experiential learning.
I also think it would be possible for an AGI system (e.g. OpenCog, or many other approaches) to learn language purely from perceptual experience. However, the possibility of such an approach doesn't imply its optimality in practice, given the hardware, software and knowledge resources available to us right now.
Sunday, September 18, 2011
(Please note: it's fairly abstract theoretical/mathematical material, so if you're solely interested in current AGI engineering work, don't bother! The hope is that this theory will be able to help guide engineering work once it's further developed, but it's not at that stage yet. So for now my abstract mathematical AGI theory work and practical AGI engineering work are only loosely coupled.)
The crux of the paper is:
MIND-WORLD CORRESPONDENCE PRINCIPLE: For an organism with a reasonably high level of intelligence in a certain world, relative to a certain set of goals, the mind-world path transfer function is a goal-weighted approximate functor.
To see what those terms mean and why it might be a useful notion, you'll have to read the paper.
A cruder expression of the same idea, with fewer special defined terms is:
MIND-WORLD CORRESPONDENCE PRINCIPLE: For a mind to work intelligently toward certain goals in a certain world, there should be a nice mapping from goal-directed sequences of world-states into sequences of mind-states, where "nice" means that a world-state-sequence W composed of two parts W1 and W2 gets mapped into a mind-state-sequence M composed of two corresponding parts M1 and M2.

As noted toward the end of the paper, this principle gives us a systematic way to approach questions like: Why do real-world minds seem to be full of hierarchical structures? The answer is probably that the real world is full of goal-relevant hierarchical structures. The Mind-World Correspondence Principle explains exactly why these hierarchical structures in the world have to be reflected by hierarchical structures in the mind of any system that's intelligent in the world.
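The "niceness" condition is essentially preservation of composition. Here is a minimal toy sketch (all names and the state correspondence are hypothetical, invented purely for illustration): world-state sequences are lifted to mind-state sequences, and lifting the concatenation of two sequence parts gives the same result as concatenating their lifts.

```python
# Toy illustration of the composition-preservation ("niceness") property.
# The state names and the world_to_mind correspondence are hypothetical.

def lift(world_seq, world_to_mind):
    """Map a sequence of world states to the corresponding mind states."""
    return [world_to_mind[w] for w in world_seq]

# Hypothetical correspondence: each world state has a mind-state image.
world_to_mind = {"w1": "m1", "w2": "m2", "w3": "m3"}

W1 = ["w1", "w2"]   # first part of the world-state sequence
W2 = ["w3"]         # second part
W = W1 + W2         # the composed world-state sequence

# Functoriality on composition: lifting W equals composing the lifts of W1, W2.
assert lift(W, world_to_mind) == lift(W1, world_to_mind) + lift(W2, world_to_mind)
```

In the paper's actual formulation the mapping is only an approximate functor, weighted by goal-relevance, so this equality would hold approximately rather than exactly; the toy version just shows the structural requirement.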
As an aside, it also occurred to me that these ideas might give us a nice way to formalize the notion of a "good mind upload," in category-theoretic terms.
I.e., if we characterize minds via transition graphs in the way done in the paper, then we can argue that mind X is a valid upload of mind Y if there is a fairly accurate approximate functor from X's transition graph to Y's.
And, if X is a nondestructive upload (so Y still exists after the uploading), it would remain a good upload of Y over time if, as X and Y both changed, there was a natural transformation governing the functors between them. Of course, your upload might not WANT to remain aligned with you in this manner, but that's a different issue...
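The "fairly accurate approximate functor" idea can be made concrete in a crude way. A hypothetical sketch (the scoring function, graphs and mapping below are my own invented illustration, not anything from the paper): represent each mind's transition graph as a set of (state, state) edges, and score a candidate state mapping from X to Y by the fraction of X's transitions it preserves in Y.

```python
# Hypothetical illustration: score how well a state mapping from mind X's
# transition graph to mind Y's acts as an approximate functor, measured as
# the fraction of X's transitions whose images are transitions in Y.

def functor_accuracy(x_edges, y_edges, state_map):
    preserved = sum(
        1 for (a, b) in x_edges
        if (state_map[a], state_map[b]) in y_edges
    )
    return preserved / len(x_edges)

# Toy transition graphs (invented): Y's graph is missing one of X's transitions.
x_edges = {("s0", "s1"), ("s1", "s2"), ("s2", "s0")}
y_edges = {("t0", "t1"), ("t1", "t2")}
state_map = {"s0": "t0", "s1": "t1", "s2": "t2"}

score = functor_accuracy(x_edges, y_edges, state_map)  # 2 of 3 transitions preserved
```

On this toy criterion one would call X a valid upload of Y when the score is close to 1; the real formalization would presumably also weight transitions by goal-relevance, as in the paper's goal-weighted approximate functors.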