
Friday, June 15, 2007

The Pigeons of Paraguay (Further Dreams of a Ridiculous Man)

In the spirit of my prior dream-description Colors, I have written down another dream ... one I had last night ... it's in the PDF file linked to from

Copy Girl and the Pigeons of Paraguay

I'm not sure why I felt inspired to, but as soon as I woke up from the dream I had the urge to type it in (along with some prefatory and interspersed rambling!). It really wasn't a terribly important dream for me ... but it was interesting as an example of a dream containing a highly realistic psychedelic drug trip inside it. There is also a clear reference to the "Colors" dream within this one, which is not surprising -- my dreams all tend to link into each other, as if they form their own connected universe, separate from and parallel to this one.

I have always enjoyed writing "dreamlike" fiction, such as my freaky semi-anti-novel Echoes of the Great Farewell ... but lately I've become interested in going straight to the source, and naturalistically recording dreams themselves ... real dreams being distinctly and clearly different from dreamlike fiction. Real dreams have more ordinariness about them, more embarrassing boringness and cliché-ness; and also more herky-jerky discoordination.... They are not as aesthetic, which of course gives them their own special aesthetic value (on a meta-aesthetic level, blah blah blah...). Their plainness and lack of pretension give them, in some ways, a deeper feel of truth than their more poetized fictional cousins....

The dream I present here has no particular scientific or philosophical value; it's just a dream that amused me. It reminded me toward the end a bit of Dostoevsky's Dream of a Ridiculous Man -- not in any details, but because of (how to put it???) the weird combination of irony and sincerity with which the psychic theme of sympathy and the oneness of humankind is addressed. Yeah yeah yeah. Paraguayan pigeons!! A billion blue blistering barnacles in a thundering typhoon!!!

I'll give you some mathematics in my next blog entry ;-)

-- Ben

Saturday, June 02, 2007

Is Google Secretly Creating an AGI? (Reasons Why I Doubt It)

From time to time someone suggests to me that Google "must be" developing a powerful Artificial General Intelligence in-house. I recently had the opportunity to visit Google and chat with some of their research staff, including Peter Norvig, their Director of Research. So I thought I'd share my perspective on Google+AGI based on the knowledge currently at my disposal.

First let me say that I definitely see where the Google+AGI speculation comes from. It's not just that they've hired a bunch of AI PhD's and have a lot of money and computers. It's that their business leaders have taken to waxing eloquent about the glorious future of artificial intelligence. For instance, on the blog

we find some quotes from Google co-founder Larry Page:

"People always make the assumption that we're done with search. That's very far from the case. We're probably only 5 percent of the way there. We want to create the ultimate search engine that can understand anything ... some people could call that artificial intelligence.


a lot of our systems already use learning techniques


The ultimate search engine would understand everything in the world. It would understand everything that you asked it and give you back the exact right thing instantly ...
You could ask 'what should I ask Larry?' and it would tell you."

Page, in the same talk quoted there, noted that technology has a tendency to change faster than expected, and that an AI could be a reality in just a few years.

Exciting rhetoric indeed!

Anyway, earlier this week I gave a talk at Google, to a group of in-house researchers and engineers, on the topic of artificial general intelligence. I was rather overtired and sick when I gave the talk, so it wasn't anywhere near one of my best talks on AGI and Novamente. Blecch. Parts of it were well delivered, but I didn't pace myself as well as usual, so I wound up rushing past some of the interesting points and not giving my usual stirring conclusion.... But some of the younger staff were pretty interested anyway, and there were some fun follow-up conversations.

Peter Norvig, an all-around great researcher, writer, and guy, gave the intro to my talk. I had chatted with Peter a bit earlier, and had mentioned to him that some folks I knew in the AGI community suspected Google to have a top-secret AGI project.

So anyway, Peter gave the following intro to my talk [I am paraphrasing here, not quoting exactly ... but I've tried to stay true to what he said, as accurately as possible given the constraints of my all-too-human memory]:

"There has been some talk about whether Google has a top-secret project aimed at building a thinking machine. Well, I'll tell you what happened. Larry Page came to me and said 'Peter, I've been hearing a lot about this Strong AI stuff. Shouldn't we be doing something in that direction?' So I said, okay. I went back to my desk and logged into our project management software. I had to write some scripts to modify it because it didn't go far enough into the future. But I modified it so that I could put, 'Human-level intelligence' on the row of the planning spreadsheet corresponding to the year 2030. And, that wasn't up there an hour before someone else added another item to the spreadsheet, time-stamped 90 days after that: 'Human-level intelligence: Macintosh port' "

Well ... soooo ... apparently Norvig, at least in a semi-serious tongue-in-cheek moment, thinks we're about 23 years from being able to create a thinking machine....

He may be right of course -- or he may even be over-optimistic, who knows -- but a cynical side of me can't help thinking: "Hey, Ben! Peter Norvig is even older than you are! Maybe placing the end goal 23 years off is just a way of saying 'Somebody else's problem!'."

Norvig says he views himself as building useful tools that will accelerate the work of future AGI researchers, along with everyone else....

Of course, I do appreciate Google's useful tools! They have been quite a relief compared to the incompetently architected, user-unfriendly software released by some other major software firms.

And, while from a societal perspective I wish Google would put their $$ and hardware behind AGI, from the perspective of my small AGI business Novamente LLC, their current attitude is surely preferable...

[I could discourse a while about Google's ethics slogan "Don't Be Evil" as a philosophy of Friendly AI ... but I'll resist the urge...]

When I shared the above story with one of my AGI researcher friends (who shall here remain anonymous), he agreed with my sentiments, and shared the following story with me:

"In [month deleted] I had an interview in Google's new [location deleted] office
... and they were much more interested in my programming skill than in my research. Of course, we didn't find a match.

Even if Google wants to do AGI, given their current technical culture,
they won't get it right, at least at the beginning. As far as AGI is
concerned, Google has more than enough money and engineers, but less
than enough thinkers. They will produce some cute toolbox with smart
algorithms supported by a huge amount of raw data, which will be
interesting, but far from AGI."

Summing up ... as the above anecdotes suggest, my overall impression was that Google is not making any serious effort at AGI. If they are, then either

  • they have trained dozens of their scientific staff to be really good actors, or
  • it is a super-top-secret effort within Google Irkutsk or wherever, that the Google Mountain View research staff don't know about

Of course, neither of these is an impossibility -- "we don't know what we don't know," etc. But honestly, I rate both of those options as pretty unlikely.

Could they launch an AGI effort? Most surely: they could, at any point. The cost to them of doing so would be trivially small, relative to the overall resources at their disposal. Maybe this blog post will egg them into doing so! (yeah, right...)

But I think the point my above-quoted friend made, after his Google interview, was quite astute. Google's technical culture is coding-focused, and their approach to AI is data-focused (textual data, and data regarding clicks on ads, and geospatial data coming into Google Earth, etc.). To get hired at Google you have to be a great coder -- just being a great AGI theorist wouldn't be enough, for example. I don't think AGI is mainly a coding problem, nor mainly a data-based problem ... nor do I think it's a problem that can effectively be solved via a "great coding + lots of data" mentality. I think AGI is a deep conceptual problem that has more to do with understanding cognition than with churning out great code and effectively utilizing masses of data. Of course, lots of great software engineering will be required to create an AGI (and we're happy to have a few super-engineers within Novamente LLC, for example), and lots of data too (e.g. in the Novamente case we plan to start our systems out with perceptual and social data from virtual worlds like Second Life; and then later on feed them knowledge from Wikipedia and other textual sources). But if the focus of an "AGI" team is on coding and data, rather than on grokking the essence of cognition, AGI is not going to be the result.

So, IMO, for Google to create an AGI would require them not only to bypass the relative AGI skepticism represented by the Peter Norvig story above -- but also to operate an AGI project based on a significantly different culture than the one that has worked for Google so far, in their development of (in some cases, really outstandingly useful) narrow-AI applications.

All in all, my impression after getting to know Google's in-house research program a little better is about the same as it was beforehand. However, I did make an explicit effort to look for evidence disconfirming my prior hypotheses -- and I didn't really find any. If anyone has evidence that the impressions I've given here are mistaken, I'd certainly be happy to hear it.

OK, well, it's time to wind up this blog post and get back to my own effort to create AGI -- with far less money and computers than Google, but -- at least -- a focus on (and, I believe, a clear understanding of) the essence of the problem....

Sure, it would be nice to have the resources of a Google or M$ or IBM backing up Novamente! But, the thing is, you don't become a big company like those by focusing on grokking the essence of cognition -- you become a big company like those by focusing on practical stuff that makes money quickly, like code and data and user interfaces ... and if AI plays a role in this, it's problem-specific narrow-AI, such as Google has done so well with.

As Larry Page recognizes, AGI will certainly have massive business value, due to its incredible potential for delivering useful services to people in a huge number of contexts. But the culture and mentality needed to create AGI seems to be different from the one needed to rapidly create a large and massively profitable company. My prediction is that if Google ever does get an AGI, they will buy it rather than build it.

Friday, May 25, 2007

Pure Silliness

Ode to the Perplexingness of the Multiverse

A clever chap, just twenty-nine
Found out how to go backwards in time
He went forty years back
Killed his mom with a whack
Then said "How can it be that still I'm?"

On the Dangers of Incautious Research and Development

A scientist, slightly insane
Created a robotic brain
But the brain, on completion
Favored assimilation
His final words: "Damn, what a pain!"

A couple of clever follow-ups to the above poem were posted by others on the Singularity email list...

On the Dangers of Emulating Biological Drives in Artificial Intelligences
(by Moshe Looks)

A scientist once shook his head
and exclaimed "My career is now dead;
for although my AI
has an IQ that's high
it insists it exists to be bred!"

By Derek Zahn:

The Provably Friendly AI
Was such a considerate guy!
Upon introspection
And careful reflection,
It shut itself off with a sigh.

And, less interestingly...

On the Benefits of Clarity in Verbal Presentation

There was a prize pig from Penn Station
Who refused to eschew obfuscation
The swine with whom he traveled
Were bedazed by his babble
So they baconed him, out of frustration

Sunday, May 20, 2007

Flogging Poor Searle Again

Someone emailed me recently about Searle's Chinese Room argument, a workhorse theme in the philosophy of AI that normally bores me to tears.

But though the Chinese room bores me, part of my reply to the guy's question wound up interesting me slightly, so I thought I'd repeat it here.

I won't recapitulate the Chinese room argument here; if you don't know it please follow the above link to Wikipedia.

The issue I'll raise here ties in with the question of whether recent theoretical developments regarding "AI with massive amounts of processing power" have any relevance to pragmatic AI.

As an example of this sort of theoretical research, check out:

which describes, among other things, an AI system called AIXI (due to Marcus Hutter) that uses an infinite amount of computational resources and achieves a level of intelligence greater than or equal to that of any other possible AI system. There are also approximations to AIXI, such as AIXItl, that use merely an insanely large (rather than infinite) amount of computational resources.

My feeling is that one should think about, not just

Intelligence = complexity of goals that a system can achieve

but also

Efficient intelligence = Sum over goals a system can achieve of: (complexity of the goal)/(amount of space and time resources required to achieve the goal)

According to these definitions, AIXI has zero efficient intelligence, and AIXItl has extremely low efficient intelligence. The challenge of AI in the real world is achieving efficient intelligence, not just raw intelligence.
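Just to make that definition concrete, here is a minimal toy sketch in Python (my own illustration; the goal complexities and resource costs below are made-up numbers, not anything derived from AIXI theory):

    # Toy illustration of "efficient intelligence": sum, over the goals a system
    # can achieve, of (goal complexity) / (resources needed to achieve the goal).
    def efficient_intelligence(goals):
        # goals: list of (complexity, resources_required) pairs, with resources > 0
        return sum(complexity / resources for complexity, resources in goals)

    modest_system = [(10.0, 5.0), (20.0, 8.0)]    # simple goals, cheaply achieved
    aixi_like = [(1e9, float("inf"))]             # arbitrarily complex goals, unbounded cost
    print(efficient_intelligence(modest_system))  # 4.5
    print(efficient_intelligence(aixi_like))      # 0.0 -- infinite resources drive the score to zero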

Also, according to these definitions, the Bekenstein bound places a limit on the maximal efficient intelligence of any system in the physical universe.
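For a sense of scale, here is a quick back-of-the-envelope sketch (again just an illustration of the bound itself, using its standard form I <= 2*pi*R*E / (hbar*c*ln 2) with E = m*c^2, not a claim about any particular AI system):

    import math

    # Bekenstein bound on the number of bits storable in a sphere of radius R
    # (meters) containing total energy E = m * c^2 (joules).
    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    c = 2.99792458e8         # speed of light, m/s

    def bekenstein_bits(mass_kg, radius_m):
        energy = mass_kg * c ** 2
        return 2 * math.pi * radius_m * energy / (hbar * c * math.log(2))

    # Roughly 2.6e43 bits for a 1 kg, 1 m-radius system: enormous, but finite,
    # so any physically realizable system has bounded efficient intelligence.
    print(f"{bekenstein_bits(1.0, 1.0):.2e}")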

Now, back to the Chinese room (hmm, writing this blog post is making me hungry ... after I'm done typing it I'm going to head out for some Kung Pao chicken!!)....

A key point is: The scenario Searle describes is likely not physically possible, due to the unrealistically large size of the rulebook (a rulebook adequate to converse convincingly about anything in Chinese would have to be so enormous that it plausibly runs up against physical limits like the Bekenstein bound).

And even if Searle's scenario somehow comes out physically plausible (e.g. maybe Bekenstein is wrong due to currently unknown physics), it certainly involves systems totally unlike any that we have ever encountered. Our terms like "intelligence" and "understanding" and "mind" were not created for dealing with massive-computational-resources systems of this nature.

The structures that we associate with intelligence (will, focused awareness, etc.) in a human context, all come out of the need to do intelligent processing within modest space and time requirements.

So when someone says they feel like the {Searle+rulebook} system isn't really understanding Chinese, what they really mean (I argue) is: it isn't understanding Chinese according to the methods we are used to, which are methods adapted to deal with modest space and time resources.

This ties in with the relationship between intensity-of-consciousness and degree-of-intelligence.

(Note that I write about intensity of consciousness rather than presence of consciousness. I tend toward panpsychism but I do accept that "while all animals are conscious, some animals are more conscious than others" (to pervert Orwell). I have elaborated on this perspective considerably in my 2006 book The Hidden Pattern.)

In real life, these seem often to be tied together, because the cognitive structures that correlate with intensity of consciousness are useful ones for achieving intelligent behaviors.

However, Searle's scenario is pathological in the sense that it posits a system with a high degree of intelligence associated with a functionality (understanding Chinese) that is NOT associated with any intensity-of-consciousness.

But I suggest that this pathology is due to the unrealistically large amount of computing resources that the rulebook requires.

I.e., it is finitude of resources that causes intelligence and intensity-of-consciousness to be correlated. The fact that this correlation breaks down in a pathological, physically impossible case that requires dramatically large resources doesn't mean too much...

What it means is that "understanding", as we understand it, has to do with structures and dynamics of mind that arise due to having to manifest efficient intelligence, not just intelligence.

That is really the moral of the Chinese room.