Friday, May 25, 2007

Pure Silliness


Ode to the Perplexingness of the Multiverse


A clever chap, just twenty-nine
Found out how to go backwards in time
He went forty years back
Killed his mom with a whack
Then said "How can it be that still I'm?"

On the Dangers of Incautious Research and Development

A scientist, slightly insane
Created a robotic brain
But the brain, on completion
Favored assimilation
His final words: "Damn, what a pain!"

A couple of clever follow-ups to the above poem were posted by others on the Singularity email list...

On the Dangers of Emulating Biological Drives in Artificial Intelligences
(by Moshe Looks)

A scientist once shook his head
and exclaimed "My career is now dead;
for although my AI
has an IQ that's high
it insists it exists to be bred!"

By Derek Zahn:

The Provably Friendly AI
Was such a considerate guy!
Upon introspection
And careful reflection,
It shut itself off with a sigh.

And, less interestingly...

On the Benefits of Clarity in Verbal Presentation

There was a prize pig from Penn Station
Who refused to eschew obfuscation
The swine with whom he traveled
Were bedazed by his babble
So they baconed him, out of frustration

Sunday, May 20, 2007

Flogging Poor Searle Again

Someone emailed me recently about Searle's Chinese Room argument,

http://en.wikipedia.org/wiki/Chinese_room

a workhorse theme in the philosophy of AI that normally bores me to tears.

But though the Chinese room bores me, part of my reply to the guy's question wound up interesting me slightly, so I thought I'd repeat it here.

I won't recapitulate the Chinese room argument here; if you don't know it please follow the above link to Wikipedia.

The issue I'll raise here ties in with the question of whether recent theoretical developments regarding "AI with massive amounts of processing power" have any relevance to pragmatic AI.

As an example of this sort of theoretical research, check out:

http://www.hutter1.net/

which describes, among other things, an AI system called AIXI that uses an infinite amount of computational resources and achieves a level of intelligence greater than or equal to that of any other possible AI system. There are also approximations to AIXI, such as AIXItl, that require merely insanely large rather than infinite computational resources.
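To give a flavor of why AIXI needs infinite resources: its action selection involves a sum over all programs for a universal Turing machine. Roughly, in Hutter's notation (U is a universal Turing machine, q ranges over its programs, \ell(q) is the length of q, the a's are actions, the o r pairs are observations and rewards, and m is the horizon), AIXI picks its action at cycle k as:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           \left[ r_k + \cdots + r_m \right]
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The innermost sum ranges over every program consistent with the interaction history so far, which is what makes AIXI incomputable; AIXItl caps the length and runtime of the programs considered, which makes the search finite but still absurdly expensive.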

My feeling is that one should think about, not just

Intelligence = complexity of goals that a system can achieve

but also

Efficient intelligence = Sum over goals a system can achieve of: (complexity of the goal)/(amount of space and time resources required to achieve the goal)

According to these definitions, AIXI has zero efficient intelligence, and AIXItl has extremely low efficient intelligence. The challenge of AI in the real world is achieving efficient intelligence, not just raw intelligence.
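As a toy illustration of these definitions (the goal list, complexity values and resource costs below are entirely made up for the example -- a sketch, not any official formalization):

    # Toy sketch of "efficient intelligence" as defined above.
    # Each goal is scored as complexity / (space-time resources used to achieve it).
    # All numbers here are hypothetical, chosen only to illustrate the definition.

    def efficient_intelligence(goals):
        """Sum of complexity/resources over the goals a system can achieve.

        goals: list of (complexity, resources) pairs.
        """
        return sum(complexity / resources for complexity, resources in goals)

    # A resource-bounded system achieving goals at modest cost:
    human_like = [(10, 5), (50, 100), (200, 1000)]

    # An AIXI-like system: arbitrarily complex goals, but each achieved
    # with unboundedly large resources -- every term tends to zero:
    aixi_like = [(10, float('inf')), (10**6, float('inf'))]

    print(efficient_intelligence(human_like))  # 2.7 -- finite, nonzero
    print(efficient_intelligence(aixi_like))   # 0.0 -- zero efficient intelligence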

Also, according to these definitions, the Bekenstein bound places a limit on the maximal efficient intelligence of any system in the physical universe.
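For reference, the Bekenstein bound in its usual form says that a physical system of radius R containing total energy E can hold at most

    I \le \frac{2 \pi E R}{\hbar c \ln 2} \ \text{bits}

of information. Since any physical system of finite size and energy therefore holds only finitely many bits, the complexity of the goals it can represent and achieve is bounded -- which is one way to see why the bound caps the maximal efficient intelligence of any physical system.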

Now, back to the Chinese room (hmm, writing this blog post is making me hungry ... after I'm done typing it I'm going to head out for some Kung Pao chicken!!)....

A key point is: The scenario Searle describes is likely not physically possible, due to the unrealistically large size of the rulebook.
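How large? A crude back-of-envelope estimate (numbers purely illustrative): treat the rulebook as a lookup table keyed on conversation histories. With roughly 7000 Chinese characters in common use, histories of just 100 characters already give about

    7000^{100} \approx 10^{385} \ \text{possible keys}

whereas holographic/Bekenstein-style arguments put the total information capacity of the observable universe at something like 10^{123} bits. A cleverer encoding would compress this enormously, but closing hundreds of orders of magnitude with a passive rulebook seems a stretch.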

And even if Searle's scenario somehow turns out to be physically possible (e.g. maybe Bekenstein is wrong, due to currently unknown physics), it certainly involves systems totally unlike any we have ever encountered. Our terms like "intelligence" and "understanding" and "mind" were not created for dealing with massive-computational-resources systems of this nature.

The structures that we associate with intelligence (will, focused awareness, etc.) in a human context, all come out of the need to do intelligent processing within modest space and time requirements.

So when someone says they feel like the {Searle+rulebook} system isn't really understanding Chinese, what they really mean (I argue) is: it isn't understanding Chinese according to the methods we are used to, which are methods adapted to deal with modest space and time resources.

This ties in with the relationship between intensity-of-consciousness and degree-of-intelligence.

(Note that I write about intensity of consciousness rather than presence of consciousness. I tend toward panpsychism but I do accept that "while all animals are conscious, some animals are more conscious than others" (to pervert Orwell). I have elaborated on this perspective considerably in my 2006 book The Hidden Pattern.)

In real life, these seem often to be tied together, because the cognitive structures that correlate with intensity of consciousness are useful ones for achieving intelligent behaviors.

However, Searle's scenario is pathological in the sense that it posits a system with a high degree of intelligence associated with a functionality (understanding Chinese) that is NOT associated with any intensity-of-consciousness.

But I suggest that this pathology is due to the unrealistically large amount of computing resources that the rulebook requires.

I.e., it is the finitude of resources that causes intelligence and intensity-of-consciousness to be correlated. The fact that this correlation breaks down in a pathological, physically impossible case requiring dramatically large resources doesn't mean too much...

What it means is that "understanding", as we understand it, has to do with structures and dynamics of mind that arise due to having to manifest efficient intelligence, not just intelligence.

That is really the moral of the Chinese room.

Tuesday, May 15, 2007

Technological versus Subjective Acceleration

This post is motivated by an ongoing argument with Phil Goetz, a local friend who believes that all this talk about "accelerating change" and approaching the Singularity is bullshit -- in part because he doesn't see things advancing all that amazingly exponentially rapidly around him.

There is plenty of room for debate about the statistics of accelerating change: clearly some things are advancing way faster than others. Computer chips and brain scanners are advancing more rapidly than forks or refrigerators. In this regard, I think, the key question is whether Singularity-enabling technologies are advancing exponentially (and I think enough of them are to make a critical difference). But that's not the point I want to get at here.

The point I want to make here is: I think it is important to distinguish technological acceleration from subjective acceleration.

This breaks down into a couple of sub-points.

First: Already by this point in history, I suggest, advancement in technology has far outpaced the ability of the human brain to figure out new ways to make meaningful use of that technology.

Second: The human brain and body themselves pose limitations regarding how thoroughly we can make use of new technologies, in terms of transforming our subjective experience.

Because of these two points, a very high rate of technological acceleration may not lead to a comparably high rate of subjective acceleration. Which is, I think, the situation we are seeing at present.

Regarding the first point: Note that long ago in history, when new technology was created, it lasted quite a while before being obsoleted, so that each new technology was exploited pretty damn thoroughly before its successor came along.

These days, though, we've just BARELY begun figuring out how to creatively exploit X, when something way better than X comes along.

The example of music may serve to illustrate both of these points.

The invention of the electronic synthesizer/sampler keyboard was a hell of a breakthrough. However, the music we humans actually make has not changed nearly as much as the underlying technology has. By and large we use all this advanced technology to make stuff that sounds harmonically, rhythmically and melodically not that profoundly different from pre-synthesizer music. Certainly, the degree of musical change has not kept up with the degree of technological change: Madonna is not as different from James Brown as a synthesizer keyboard is from an electric guitar.

Why is that?

Well, humans take a while to adapt. People are still learning how to make optimal use of synthesizer/sampling keyboards for making interesting music ... but while people are still relatively early on that learning curve, technology has advanced yet further, and computer music software gives us amazing new possibilities ... that we've barely begun to exploit...

Furthermore, our musical tastes are limited by our physiology. I could make fabulously complex music using a sequencer, with thousands of intersecting melody lines carefully calculated, but no human would be able to understand it (I tried ;-). Maybe superhuman minds will be able to use modern music tech to create music far subtler and more interesting than any human music, for their own consumption.

And, even when acoustic and cognitive physiology isn't relevant, the rate of growth and change in a person's music appreciation is limited by their personality psychology.

To take another example, let's look at bioinformatics. There's no doubt that technology for measuring biological systems has advanced exponentially, as has technology for analyzing biological data using AI (my part of that story).

But AI-based methods are very slow to pervade the biology community, due to cultural and educational issues ... most biologists can barely deal with stats, let alone AI tech....

And, the most advanced measurement machinery is often not used in the most interesting possible ways. For instance, microarray devices allow biologists to take a whole-genome approach to studying biological systems, but most biologists use them in a very limited manner, guided by an "archaic" single-gene-focused mentality. So much of the power of the technology is wasted. This situation is improving -- but it's improving at a slower pace than the technology itself.

Human adoption of the affordances of technology has become the main bottleneck, not the technology itself.

So there is a dislocation between the rate of technological acceleration and the rate of subjective acceleration. Both are fast but the former is faster.

Regarding word processing and Internet technology: our capability to record and disseminate knowledge has increased TREMENDOUSLY ... and, our capability to create knowledge worth recording and disseminating has increased a lot too, but not as much...

I think this will continue to be the case until the legacy human cognitive architecture itself is replaced with something cleverer such as an AI or a neuromodified human brain.

At that point, we'll have more flexible and adaptive minds, making better use of all the technologies we've invented plus the new ones they will invent, and embarking on a greater, deeper and richer variety of subjective experiences as well.

Viva la Singularity!