
Saturday, December 17, 2011

My Goal as an AGI Researcher

In a recent thread on the AGI email list, Matt Mahoney pressed me regarding my high-level goals as an AGI researcher, and a leader of the OpenCog project. This blog post repeats my answer, as I posted it on that email list. This is familiar material to those who have followed my work and thinking, but maybe I've expressed things here slightly differently than in the past....

My goal as an AGI researcher is not precisely and rigorously defined. I'm OK with this. Building AGI is a human pursuit, and human pursuits aren't always precisely and rigorously defined. Nor are scientific pursuits. Often the precise, rigorous definitions come only after a lot of the research is done.

I'm not trying to emulate human beings or human minds in detail. Nor am I trying to make a grab-bag of narrow agents, without the capability to generalize automatically to new problems radically different from the ones for which they were originally designed. I am after a system that -- in the context of the scope of contemporary human activities -- possesses humanlike (or greater) capability to generalize its knowledge from one domain to other qualitatively different domains, and to learn new things in domains different from the ones its programmers had explicitly in mind. I'm OK if this system possesses many capabilities that a human doesn't.

There are probably many ways of achieving software with this kind of general intelligence. The path I think I understand (and am trying to realize with OpenCog) is to roughly emulate the process of human child development -- where I say "roughly" because I'm fine with the system having some capabilities beyond those of any human. Even if it does have some specialized superhuman capabilities from the start, I think this system will develop the ability to generalize its knowledge to qualitatively different domains in the rough manner and order that a human child does.

What will I do once I have a system that has a humanlike capability of cross-domain generalization (in the scope of contemporary human activities)? First, I will study it, and try to create a genuine theory of general intelligence. Second, I will apply it to solve various practical problems, from service robotics to research in longevity, brain-computer interfacing, and so forth. There are many, many application areas where the ability to broadly generalize is of great value, alongside specialized intelligent capabilities.

At some point, I think this is very likely to lead to an AGI system with recursive self-improving capability (noting that this capability will be exercised in close coordination with the environment, including humans and the physical world, not in an isolation chamber). Before that point, I hope that we will have developed a science of general intelligence that lets us understand issues of AGI ethics and goal system stability much better than we do now.

2 comments:

Brad Dunagan said...

Thanks for that. Very clear and concrete. A machine that learns how to learn in whatever domain it is pointed at. Not that, to me, your ambitions have been unclear in the past. It (your stuff) is so far over my head I probably just never have really cared what your goal is because it all seems appropriate and cool.

You know, "AI" (and especially, probably, General "AI") seems like something best not discussed. Like religion and politics. Why is that? I think it's because intelligence (at least the measure of it) is so subjective.

Another thing about "AI", I think (especially, probably, General "AI"), is that it is scary. I mean really scary. Like, maybe, something we, including researchers like yourself, don't really want. Subconsciously. Effectively impeding progress. I say this because there were notable things done in the '60s and '70s that, for some reason, never progressed like other computer applications do. I suppose the answer is that those problems were "solved" and researchers went on to other things.

But Shakey and Blocks World were cool developments. And their practical applications were clear and obvious. They just needed to be developed further. Imagine what kind of Shakey and Blocks World we would have if people had just incrementally improved on the general concepts those two projects represented over the last 40 years. Why has that not occurred? Could it be that a machine intelligently stacking and manipulating blocks is a little scary?

There I go. Being difficult with my debatable and probably silly speculations. Sorry about that.

You know what you want. Keep truckin'.

arman said...

I have studied your site fully and realized that it is very beneficial for us. I want to get more information through this site.