Wednesday, October 21, 2015

Is Google Deep Mind Close to Achieving AGI?

Some folks on the AGI email list have gotten very excited by a video of Demis Hassabis's latest talk about his work with Google Deep Mind, so I thought I'd briefly chip in with my own perspective.

First, regarding the video.  It's a well-delivered, clear and concise talk, but so far as I can tell there's nothing big and new there.  Demis describes Deep Mind's well-known work on reinforcement learning and video games, and then mentions their (already published) work on Neural Turing Machines...  Nothing significant seems to be mentioned beyond what has already been published and publicized previously...
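
For readers who haven't followed the details: the video-game results rest on reinforcement learning, i.e. learning which actions to take purely from reward signals. As a rough illustration of that paradigm (and only that -- this is a minimal tabular Q-learning sketch on a made-up toy corridor environment, not Deep Mind's DQN, which adds a deep convolutional network over raw pixels, experience replay and a target network), consider:

```python
# Minimal tabular Q-learning on a toy 5-state corridor environment.
# Illustrative only: the environment and hyperparameters are made up;
# Deep Mind's DQN replaces the Q-table with a deep network over raw pixels.
import random
from collections import defaultdict

N_STATES = 5          # states 0..4; reaching state 4 ends the episode with reward 1
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)  # Q[(state, action)] -> estimated future reward

def step(state, action):
    """Toy environment dynamics: move along the corridor, reward at the far end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def choose_action(state):
    """Epsilon-greedy policy over the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(500):
    state, done = 0, False
    while not done:
        action = choose_action(state)
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # The core temporal-difference update that the deep versions scale up:
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

The intelligence such a system acquires lives entirely in the learned value function; there is no explicit, inspectable model of the world, which is part of why I see the paradigm as limited (more on that below).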

Demis, Shane Legg and many other Deep Mind researchers are people I know to be brilliant, with a true passion for AGI. What they're doing is fantastic! However, none of their current results comes anywhere close to human-level AGI, and the design details they've disclosed don't come anywhere near a comprehensive plan for building an AGI...

Of course, 100 smart guys working together toward pure and applied AGI, with savvy leadership and Google's resources at their disposal, are nothing to sneer at... But still, let's not overblow what they've achieved so far...

So far, based on all available evidence, Deep Mind is doing solid R&D aimed at AGI, but they haven't yet done anything that would convince the "open-minded but skeptical researcher" that they are, say, 90% likely to be on the right path.   I would say the same about my own OpenCog project: We also haven't yet done anything that would convince the "open-minded but skeptical researcher" that we are 90% likely to be on the right path.   For now, there are multiple different approaches to AGI, with various theoretical justifications and limited-scope practical achievements associated with them; and researchers place their confidence in one approach or another based on intuition as much as evidence, since the hard evidence is incomplete and fragmentary.

Personally, I'd rather not see the first AGI owned by a megacorporation, and I'd also rather not see the first AGI be a neural net trained via reinforcement learning (Deep Mind's preferred approach, based on their public materials), since I think:

  • RL is a very limited paradigm (note some of RL's peculiarities, and also the broader perspective of open-ended intelligence)
  • Systems with an explicit probabilistic logic component (like OpenCog) have a far greater chance of being rational (though I admit rationality is itself a limited way of viewing intelligence, it's still something I find important) -- see the toy sketch below for what I mean by explicit probabilistic inference
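
To make the second point a bit more concrete, here is a toy sketch of the kind of explicit probabilistic deduction I have in mind, loosely in the spirit of PLN (Probabilistic Logic Networks). It is an illustration under simplifying assumptions only: the TruthValue tuple, the deduce() function and the confidence heuristic are made up for this post, not OpenCog's actual API, and real PLN involves many more rules and subtleties.

```python
# Toy probabilistic deduction, loosely in the spirit of PLN. Illustrative only:
# the types, names and confidence heuristic are invented for this sketch.
from collections import namedtuple

# strength ~ estimated probability; confidence ~ weight of evidence in [0, 1]
TruthValue = namedtuple("TruthValue", ["strength", "confidence"])

def deduce(ab, bc, p_b, p_c):
    """Deduction (A->B, B->C |- A->C) under a simple independence assumption:
    P(C|A) ~= P(C|B) * P(B|A) + P(C|not B) * (1 - P(B|A))."""
    if p_b >= 1.0:
        p_c_given_not_b = p_c            # degenerate case; avoid division by zero
    else:
        p_c_given_not_b = (p_c - p_b * bc.strength) / (1.0 - p_b)
    strength = ab.strength * bc.strength + (1.0 - ab.strength) * p_c_given_not_b
    strength = min(max(strength, 0.0), 1.0)
    # Crude heuristic: the conclusion is no better supported than its weakest premise.
    confidence = 0.9 * min(ab.confidence, bc.confidence)
    return TruthValue(strength, confidence)

# "Ravens are birds" (A->B) and "birds fly" (B->C), with background term probabilities:
print(deduce(TruthValue(0.95, 0.9), TruthValue(0.90, 0.8), p_b=0.2, p_c=0.3))
```

The relevant contrast with the reinforcement learning paradigm is that the knowledge, its uncertainty and the inference step are all explicit and inspectable, rather than buried implicitly in a trained value function or policy.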


My perspective is that, with an open-source approach properly orchestrated and managed, we could get 500-1000 people or more -- academics, professional developers, hobbyists -- actively and aggressively working together, thus far outpacing what even Google Deep Mind can do... Toward this end, my OpenCog colleagues and I are cooking up a plan to radically grow OpenCog over the next couple of years -- beginning with improved documentation and funky demos in early 2016. Wish us luck!


8 comments:

  1. Another major contender in these attempts to grab the public's attention for a spotlight on AGI development has got to be the recent push by IBM to present the Watson AI project as the Next Big Thing. On Tuesday, October 6, 2015 there was a special eight-page advertisement in the national edition of the New York Times headlined "Welcome to the Cognitive Era." Inside the eight pages they kept using the word "Cognitive." I was reading the NYT-on-a-stick at the Starbucks Reserve Roastery and Tasting Room here in Seattle, so I could not take the eight pages with me, and I had to go to another Starbucks to pick up the eight Watson pages that I have been carrying around with me so as to study what IBM is suddenly doing. On page A13 they list twenty-seven Watson APIs supposedly now available in 2015, such as Ce Concept Expansion and Ct Concept Tagging. Since my own Mentifex AI is both theoretically and pragmatically based on the creation of real concepts in a brain-mind, I intend to follow keenly these AGI quasi-frontrunners such as Deep Mind and IBM Watson to see who is really and truly closing in on the AGI success story.

  2. The Singularity is not near. First, computer science has to reverse-engineer the brain. That includes a cloud/grid simulation of the brain's biological neural networks, which could be done via an invasive brain-machine interface and machine learning. The brain's functional architecture is a hierarchical learning system for pattern recognition that is represented in natural language.

  3. I was more impressed with Geoffrey Hinton's presentation at the same event. It seemed to me that his mention of their work on "thought vectors", as applied to interactions between vectors of word associations, would be the key. Whatever they have done since joining Google is not being presented, but Hinton is providing more hints than Hassabis about what is going on currently.

  4. Alison B Lowndes, 5:48 PM

    Good luck - let us know if we can help.

  5. Terren, 10:37 PM

    Thanks, Ben. I get a lot out of your straightforward analyses of different AI, cogsci, and philosophical developments. It comes from a unique but well-earned perspective. Good luck with OpenCog -- always interested to hear the latest and see demos!

  6. Anonymous, 12:13 PM

    Hi Ben,

    There is no doubt that DeepMind (together with others utilizing RL, NN, DL, etc.) is slowly approaching the limit of its asymptotic gains. Once the initial euphoria settles, the time for pragmatic assessment comes, and one starts to realize the shortcomings of the methodology.

    The number of low-hanging-fruit toy simulations you can perform is getting smaller and smaller. In the end, one realizes that there was a problem with the approach that could not have been seen earlier.

    Some researchers will admit they were wrong; some never will. At this moment DeepMind (and others utilizing similar approaches) are beginning, or will soon begin, to see a big tall wall waiting for them. In fact it will not be a single wall but many, one after another: scalability (distributed processing and communication limits), extremely high-dimensional optimization (contextual pattern matching), sub-symbolic vs. symbolic emergence (symbol grounding, perception), RL/probabilistic/logic control mechanisms (attention, learning, goal and sub-goal generation), etc. -- the list could go on and on.

    There is also no doubt that, of all the "ladders to the moon", the OpenCog one is the tallest. However, many fundamental assumptions used to design OpenCog might prove to be false and/or impossible to realize in real-world situations (symbolic probabilistic logic, bridging the sub-symbolic gap within the AtomSpace, the glocal-memory implementation, the control mechanism, etc.).

    "500-1000 people -- academics, professional developers, hobbyists -- or more" might or might not be enough. Sometimes one brilliant idea can change everything. At the beginning the quality matters later the brute force could be enough.

    I wish you guys the absolute best of luck.

    Sincerely

    M./

  7. Reverse engineering might just be I Ching instead of T Ching. Could be that simple.

  8. This is real next-generation technology.
