Tuesday, July 16, 2013

Robot Toddlers and Fake AI 4 Year Olds

Oh, the irony...

At the same time as my OpenCog project is running an Indiegogo crowdfunding campaign aimed at raising funds to create a robot toddler, by using OpenCog to control a Hanson Robokind robot...

the University of Illinois's press gurus come out with a report claiming that their AI system is as smart as a human 4 year old.

But what is this system, that is supposedly as smart as a 4 year old?  It's a program that answers vocabulary and similarity questions as well as a human 4 year old, drawing on MIT's ConceptNet database.
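To make the contrast concrete, here is a toy Python sketch -- not their actual code, and using a made-up handful of triples rather than the real ConceptNet database -- of what "answering similarity questions from a fixed knowledge base" boils down to: look up each word's stored relations and score the overlap.

    # Toy stand-in for the real ConceptNet database: a few hard-coded
    # (concept, relation, concept) triples.
    TRIPLES = [
        ("cat", "IsA", "animal"),
        ("cat", "HasA", "fur"),
        ("cat", "CapableOf", "meow"),
        ("dog", "IsA", "animal"),
        ("dog", "HasA", "fur"),
        ("dog", "CapableOf", "bark"),
        ("car", "IsA", "vehicle"),
        ("car", "UsedFor", "driving"),
    ]

    def neighbors(word):
        """All (relation, concept) pairs attached to a word in the fixed store."""
        return {(rel, obj) for subj, rel, obj in TRIPLES if subj == word}

    def similarity(word_a, word_b):
        """Crude similarity: Jaccard overlap of the two words' stored relations."""
        a, b = neighbors(word_a), neighbors(word_b)
        return len(a & b) / len(a | b) if (a | b) else 0.0

    print(similarity("cat", "dog"))  # fairly high: shared IsA/HasA edges
    print(similarity("cat", "car"))  # 0.0: no shared edges in this toy store

Nothing in a loop like this learns anything new; the answers are only ever as good as the triples somebody else put into the store.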

Whoopie!   My calculator can answer arithmetic questions better than I can -- does that make it a superintelligence? ;-D ....

A toddler is far more than a question-answering program back-ended on a fixed database, obviously....

This Illinois/MIT program is basically like IBM Watson, but for a different set of knowledge...

ConceptNet is an intriguing resource, and one of the programmers in our Addis Ababa OpenCog lab is currently playing with importing it into OpenCog....
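For the curious, the import idea is roughly this -- a hypothetical Python sketch, not the actual code our Addis lab is using, and the truth-value numbers are placeholders: each ConceptNet triple gets rendered as an OpenCog EvaluationLink over ConceptNodes, so OpenCog's reasoning processes can then work with it.

    # Hypothetical sketch of the ConceptNet -> OpenCog mapping: each triple
    # becomes an EvaluationLink over ConceptNodes, emitted as Atomese Scheme
    # text.  The real import code may structure this quite differently.
    def triple_to_atomese(subj, relation, obj, strength=0.9, confidence=0.5):
        """Render one ConceptNet-style triple as an OpenCog EvaluationLink."""
        return (
            f'(EvaluationLink (stv {strength} {confidence})\n'
            f'   (PredicateNode "{relation}")\n'
            f'   (ListLink\n'
            f'      (ConceptNode "{subj}")\n'
            f'      (ConceptNode "{obj}")))'
        )

    print(triple_to_atomese("cat", "IsA", "animal"))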

But obviously, this Illinois/MIT software lacks the ability to learn new skills, to play, to experiment, to build, to improvise, to discover, to generalize beyond its experience, etc.....   It has basically none of the capabilities of the mind of a 4 year old child.

BUT... one thing is clear ... these universities do have excellent PR departments!

The contrast between their system -- a question-answering system based on MIT's ConceptNet knowledge base -- and the system OpenCog, Hanson and I are building is both dramatic and instructive.

The Illinois/MIT program is, they report, as good as a human 4 year old at answering vocabulary and similarity questions.  

OK, I believe that.   But: Big deal!   A calculator is already way better than a human 4 year old at answering arithmetic questions!

What we are after with our project is not just a system that passes certain tests as well as a human toddler.  We are after a system that can understand and explore the world, and make sense of itself and its surroundings and its goals and desires and feelings and worries, in the rough manner of a human toddler.  This is a wholly different thing.

The kind of holistic, toddler-like intelligence we're after would naturally serve as a platform for building greater and greater levels of general intelligence -- moving toward adult-level AGI....

But a question-answering system based on ConceptNet doesn't particularly build toward anything -- it doesn't learn and grow.  It just replies based on the data in its database.

It is unfortunate, but not terribly surprising, that this kind of distinction still needs to be repeated over and over again.  General intelligence -- the ability to achieve a variety of complex goals in a variety of complex environments, including goals and environments not foreseen in advance by the creators of the intelligent system -- is a whole different kettle of fish from engineering a specialized intelligent system for a specific purpose.


The longer I work on AGI, the more convinced I am that an embodied approach will be the best way to fully solve the common sense problem.   The AI needs to learn common sense by learning to control a robot that does commonsensical things....  Then the ability to draw analogies and understand words will emerge from the AI's ability to understand the world and relate the different experiences it has had.   A system that answers questions based on ConceptNet, by contrast, is just manipulating symbols without understanding their meaning -- an approach that will never lead to real human-like general intelligence.
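Just to caricature the difference in code: here is a purely illustrative Python toy (the Episode and EmbodiedAgent structures are made up for this post, not OpenCog's design) in which a word's "meaning" is whatever sensorimotor experiences it has labeled, so similarity is computed over lived percepts rather than over somebody else's database entries.

    # Purely illustrative, made-up structures: a word is grounded in the
    # episodes it labels, and similarity comes from overlapping percepts.
    from dataclasses import dataclass, field

    @dataclass
    class Episode:
        word: str        # the label a caregiver attached to the experience
        percepts: set    # crude sensory features the agent observed
        outcome: str     # what happened when the agent acted

    @dataclass
    class EmbodiedAgent:
        memory: list = field(default_factory=list)

        def experience(self, word, percepts, outcome):
            """Accumulate a labeled sensorimotor episode -- grounding, not lookup."""
            self.memory.append(Episode(word, set(percepts), outcome))

        def grounded_similarity(self, word_a, word_b):
            """Compare two words by the overlap of the percepts they label."""
            def feats(word):
                return set().union(*(e.percepts for e in self.memory if e.word == word))
            a, b = feats(word_a), feats(word_b)
            return len(a & b) / len(a | b) if (a | b) else 0.0

    agent = EmbodiedAgent()
    agent.experience("ball", {"round", "red", "rolls"}, "it rolled under the couch")
    agent.experience("apple", {"round", "red", "edible"}, "it tasted sweet")
    print(agent.grounded_similarity("ball", "apple"))  # overlap of lived percepts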

The good news is, my OpenCog colleagues and I know how to make a robot that will first achieve toddler-level commonsense knowledge, and then full-scale, human-adult-level AGI.   And then what?

The less exciting news is, it's going to take a lot of work -- though exactly how many years depends on how well funded our project is.

Next Big Future just ran an extensive interview with me on these topics; check it out if you're curious for more information...


20 comments:

  1. I agree with you, Ben, and hope you get the needed support.

  2. Yes, embodiment is essential. It is all about feedback loops: you need to see what the consequences of your actions are on your body and on others (so maybe the equivalent of mirror neurons should also be developed in AI, for a complete theory of mind).

  3. It's good that someone still bothers to make these distinctions... btw, I'm curious: what would be your idea of appropriate funding to accomplish your implementation of toddler-level AGI?

  4. > Whoopie! My calculator can answer arithmetic questions better
    > than I can -- does that make it a superintelligence? ;-D ....
    >
    > A toddler is far more than a question-answering program back-ended
    > on a fixed database, obviously....
    >
    > This Illinois/MIT program is basically like IBM Watson, but
    > for a different set of knowledge...
    >
    > But obviously, this Illinois/MIT software lacks the ability
    > to learn new skills, to play, to experiment, to build, to improvise,
    > to discover, to generalize beyond its experience, etc.....
    > It has basically none of the capabilities of the mind of a 4 year old child.
    >
    > BUT... one thing is clear ... these universities do have excellent
    > PR departments!

    You know, I finally gave in and bought a copy of Ray Kurzweil's
    _How to Create a Mind_ a week ago (I was visiting a friend
    near Boston, and came across a paperback copy at the Harvard Coop
    in Cambridge). [You're in the acknowledgements, of course,
    as you're well aware.]

    So naturally Kurzweil slams the nay-sayers who dismiss Watson's
    prowess at Jeopardy as "mere statistics", implying that Watson
    does indeed exhibit "intelligence" -- and indeed, **greater
    than human** intelligence!

    Kurzweil glosses over the fact that in humans, "understanding" language
    involves more than correlations among words and sentences,
    and includes a long history of embodied interaction with the
    world and other humans.

    I couldn't help but think of Joseph Weizenbaum and his efforts
    with Eliza, back in the 60's, to demonstrate how easily people
    are bamboozled by a few parlor tricks into believing that
    machines exhibit "intelligence".

    I also reflected that, by a semantic sleight of hand, somebody
    like Kurzweil might claim that superhuman AI had **already**
    been achieved here in the twenty-teens (though in the book
    he's still projecting human-level AI in 2029).

    Kurzweil also mentions that IBM is currently developing a version
    of Watson that will perform medical diagnoses (the dream of
    medical "expert systems" has been around for decades now).
    Move over, House!

  6. Anonymous, 7:17 PM

    Have a look at this new book, "Our Final Invention". It's very well written and very scary!

    http://www.amazon.com/Our-Final-Invention-Artificial-Intelligence/dp/0312622376#reader_0312622376
