Monday, October 29, 2007

On Becoming a Neuron

I was amused and delighted to read the following rather transhumanistic article in the New York Times recently.

http://www.nytimes.com/2007/10/26/opinion/26brooks.html?_r=1&oref=slogin

The writer, who does not appear to be a futurist or transhumanist or Singularitarian or anything like that, is observing the extent to which he has lost his autonomy and outsourced a variety of his cognitive functions to various devices with which he interacts. And he feels he has become stronger rather than weaker because of this -- and not any less of an individual.

This ties in deeply with the theme of the Global Brain

http://pespmc1.vub.ac.be/SUPORGLI.html

which is a concept dear to my heart ... I wrote about it extensively in my 2001 book "Creating Internet Intelligence" and (together with Francis Heylighen) co-organized the 2001 Global Brain 0 workshop in Brussels.

I have had similar thoughts to the above New York Times article many times recently... I can feel myself subjectively becoming far more part of the Global Brain than I was even 5 years ago, let alone 10...

As a prosaic example: Via making extensive use of task lists as described in the "Getting Things Done" methodology

http://en.wikipedia.org/wiki/Getting_Things_Done

I've externalized much of my medium-term memory about my work-life.

And via using Google Calendar extensively I have externalized my long-term memory... I use the calendar not only to record events but also to record information about what I should think about in the future (e.g. "Dec. 10 -- you should have time to start thinking about systems theory in connection to developmental psychology again...")

And, so much of my scientific work these days consists of reading little snippets of things that my colleagues on the Novamente project (or other intellectual collaborators) wrote, and then responding to them.... It's not that common these days that I undertake a large project myself, because I can always think of someone to collaborate with, and then the project becomes in significant part a matter of online back-and-forth....

And the process of doing computer science research is so different now than it was a decade or two ago, due to the ready availability and easy findability of so many research ideas, algorithms, code snippets etc. produced by other people.

Does this mean that I'm no longer an individual? It's certainly different than if I were sitting on a mountain for 10 years with my eagle and my lion like Nietzsche's Zarathustra.

But yet I don't feel like I've lost my distinctiveness and become somehow homogenized -- the way I interface with the synergetic network of machines and people is unique in complexly patterned ways, and constitutes my individuality.

Just as a neuron in the brain does not manifest its individuality any less than a neuron floating by itself in a solution. In fact, the neuron in the brain may manifest its individuality more strongly, due to having a richer, more complex variety of stimuli to which it may respond individually.

None of these observations are at all surprising from a Global Brain theory perspective. But, they're significant as real-time, subjectively-perceived and objectively-observed inklings of the accelerating emergence of a more and more powerful and coordinated Global Brain, of which we are parts.

And I think this ties in with Ray Kurzweil's point that by the time we have human-level AGI, it may not be "us versus them", it may be a case where it's impossible to draw the line between us and them...

-- Ben

P.S.

As a post-script, I think it's interesting to tie this Global Brain meme in with the possibility of a "controlled ascent" approach to the Singularity and the advent of the transhuman condition.

Looking forward to the stage at which we've created human-level AGI's -- if these AGI's become smarter and smarter at an intentionally-controlled rate (say a factor of 1.2 per year, just to throw a number out there), and if humans are intimately interlinked with these AGI's in a Global Brain-like fashion (as does seem to be occurring, at an accelerating rate), then we have a quite interesting scenario.

Of course I realize that guaranteeing this sort of controlled ascent is a hard problem. And I realize there are ethical issues involved in making sure a controlled ascent like this respects the rights of individuals who choose not to ascend at all. And I realize that those who want to ascend faster may get irritated at the slow pace. All these points need addressing in great detail by an informed and intelligent and relevantly educated community, but they aren't my point right now -- my point in this postscript is the synergetic interrelation of the Global Brain meme with the controlled-ascent meme.

The synergy here is that as the global brain gets smarter and smarter, and we get more and more richly integrated into it, and the AGI's that will increasingly drive the development of the global brain get smarter and smarter -- there is a possibility that we will become more and more richly integrated with a greater whole, while at the same time having greater capability to exercise our uniqueness and individuality.

O Brave New Meta-mind, etc. etc. ;-)

Friday, June 15, 2007

The Pigeons of Paraguay (Further Dreams of a Ridiculous Man)

In the spirit of my prior dream-description Colors, I have written down another dream ... one I had last night ... it's in the PDF file linked to from

Copy Girl and the Pigeons of Paraguay


I'm not sure why I felt inspired to, but as soon as I woke up from the dream I had the urge to type it in (along with some prefatory and interspersed rambling!). It really wasn't a terribly important dream for me ... but it was interesting as an example of a dream containing a highly realistic psychedelic drug trip inside it. There is also a clear reference to the "Colors" dream within this one, which is not surprising -- my dreams all tend to link into each other, as if they form their own connected universe, separate from and parallel to this one.

I have always enjoyed writing "dreamlike" fiction, such as my freaky semi-anti-novel Echoes of the Great Farewell ... but lately I've become interested in going straight to the source, and naturalistically recording dreams themselves ... real dreams being distinctly and clearly different from dreamlike fiction. Real dreams have more ordinariness about them, more embarrassing boringness and cliché-ness; and also more herky-jerky discoordination.... They are not as aesthetic, which of course gives them their own special aesthetic value (on a meta-aesthetic level, blah blah blah...). Their plain-ness and lack of pretension gives them, in some ways, a deeper feel of truth than their more poetized fictional cousins....

The dream I present here has no particular scientific or philosophical value, it's just a dream that amused me. It reminded me toward the end a bit of Dostoevsky's Dream of a Ridiculous Man -- not in any details, but because of (how to put it???) the weird combination of irony and sincerity with which the psychic theme of sympathy and the oneness of humankind is addressed. Yeah yeah yeah. Paraguayan pigeons!! A billion blue blistering barnacles in a thundering typhoon!!!

I'll give you some mathematics in my next blog entry ;-)

-- Ben

Saturday, June 02, 2007

Is Google Secretly Creating an AGI? (Reasons Why I Doubt It)

From time to time someone suggests to me that Google "must be" developing a powerful Artificial General Intelligence in-house. I recently had the opportunity to visit Google and chat with some of their research staff, including Peter Norvig, their Director of Research. So I thought I'd share my perspective on Google+AGI based on the knowledge currently at my disposal.

First let me say that I definitely see where the Google+AGI speculation comes from. It's not just that they've hired a bunch of AI PhD's and have a lot of money and computers. It's that their business leaders have taken to waxing eloquent about the glorious future of artificial intelligence. For instance, on the blog

http://memepunks.blogspot.com/2006/05/google-ai-twinkle-in-larry-pages-eye.html


we find some quotes from Google co-founder Larry Page:

"People always make the assumption that we're done with search. That's very far from the case. We're probably only 5 percent of the way there. We want to create the ultimate search engine that can understand anything ... some people could call that artificial intelligence.

...

a lot of our systems already use learning techniques


...

The ultimate search engine would understand everything in the world. It would understand everything that you asked it and give you back the exact right thing instantly ...
You could ask 'what should I ask Larry?' and it would tell you."

Page, in the same talk quoted there, noted that technology has a tendency to change faster than expected, and that an AI could be a reality in just a few years.

Exciting rhetoric indeed!

Anyway, earlier this week I gave a talk at Google, to a group of in-house researchers and engineers, on the topic of artificial general intelligence. I was rather overtired and sick when I gave the talk, so it wasn't anywhere near one of my best talks on AGI and Novamente. Blecch. Parts of it were well delivered; but I didn't pace myself as well as usual, so I wound up rushing past some of the interesting points and not giving my usual stirring conclusion.... But some of the younger staff were pretty interested anyway; and there were some fun follow-up conversations.

Peter Norvig (their Director of Research), an all-around great researcher and writer and great guy, gave the intro to my talk. I had chatted with Peter a bit earlier; and had mentioned to him that some folks I knew in the AGI community suspected Google to have a top-secret AGI project.

So anyway, Peter gave the following intro to my talk [I am paraphrasing here, not quoting exactly ... but I've tried to stay true to what he said, as accurately as possible given the constraints of my all-too-human memory]:

"There has been some talk about whether Google has a top-secret project aimed at building a thinking machine. Well, I'll tell you what happened. Larry Page came to me and said 'Peter, I've been hearing a lot about this Strong AI stuff. Shouldn't we be doing something in that direction?' So I said, okay. I went back to my desk and logged into our project management software. I had to write some scripts to modify it because it didn't go far enough into the future. But I modified it so that I could put, 'Human-level intelligence' on the row of the planning spreadsheet corresponding to the year 2030. And, that wasn't up there an hour before someone else added another item to the spreadsheet, time-stamped 90 days after that: 'Human-level intelligence: Macintosh port' "

Well ... soooo ... apparently Norvig, at least in a semi-serious tongue-in-cheek moment, thinks we're about 23 years from being able to create a thinking machine....

He may be right of course -- or he may even be over-optimistic, who knows -- but a cynical side of me can't help thinking: "Hey, Ben! Peter Norvig is even older than you are! Maybe placing the end goal 23 years off is just a way of saying 'Somebody else's problem!'."

Norvig says he views himself as building useful tools that will accelerate the work of future AGI researchers, along with everyone else....

Of course, I do appreciate Google's useful tools! Google's various tools have been quite a relief as compared to the incompetently-architected, user-unfriendly software released by some other major software firms.

And, while from a societal perspective I wish Google would put their $$ and hardware behind AGI, from the perspective of my small AGI business Novamente LLC, their current attitude is surely preferable...

[I could discourse a while about Google's ethics slogan "Don't Be Evil" as a philosophy of Friendly AI ... but I'll resist the urge...]

When I shared the above story with one of my AGI researcher friends (who shall here remain anonymous), he agreed with my sentiments, and shared the following story with me...

"In [month deleted] I had an interview in Google's new [location deleted] office
... and they were much more interested in my programming skill than in my research. Of course, we didn't find a match.

Even if Google wants to do AGI, given their current technical culture,
they won't get it right, at least at the beginning. As far as AGI is
concerned, Google has more than enough money and engineers, but less
than enough thinkers. They will produce some cute toolbox with smart
algorithms supported by a huge amount of raw data, which will be
interesting, but far from AGI."

Summing up ... as the above anecdotes suggest, my overall impression was that Google is not making any serious effort at AGI. If they are, then either

  • they have trained dozens of their scientific staff to be really good actors, or
  • it is a super-top-secret effort within Google Irkutsk or wherever, that the Google Mountain View research staff don't know about

Of course, neither of these is an impossibility -- "we don't know what we don't know," etc. But honestly, I rate both of those options as pretty unlikely.

Could they launch an AGI effort? Most surely: they could, at any point. The cost to them of doing so would be trivially small, relative to the overall resources at their disposal. Maybe this blog post will egg them into doing so! (yeah, right...)

But I think the point my above-quoted friend made, after his Google interview, was quite astute. Google's technical culture is coding-focused, and their approach to AI is data-focused (textual data, and data regarding clicks on ads, and geospatial data coming into Google Earth, etc.). To get hired at Google you have to be a great coder -- just being a great AGI theorist wouldn't be enough, for example. I don't think AGI is mainly a coding problem, nor mainly a data-based problem ... nor do I think it's a problem that can effectively be solved via a "great coding + lots of data" mentality. I think AGI is a deep conceptual problem that has more to do with understanding cognition than with churning out great code and effectively utilizing masses of data. Of course, lots of great software engineering will be required to create an AGI (and we're happy to have a few super-engineers within Novamente LLC, for example), and lots of data too (e.g. in the Novamente case we plan to start our systems out with perceptual and social data from virtual worlds like Second Life; and then later on feed them knowledge from Wikipedia and other textual sources). But if the focus of an "AGI" team is on coding and data, rather than on grokking the essence of cognition, AGI is not going to be the result.

So, IMO, for Google to create an AGI would require them not only to bypass the relative AGI skepticism represented by the Peter Norvig story above -- but also to operate an AGI project based on a significantly different culture than the one that has worked for Google so far, in their development of (in some cases, really outstandingly useful) narrow-AI applications.

All in all my impression after getting to know Google's in-house research program a little better, is about the same as it was beforehand. However, I did make an explicit effort to look for evidence disconfirming my prior hypotheses -- and I didn't really find any. If anyone has evidence that the impressions I've given here are mistaken, I'd certainly be happy to hear it.

OK, well, it's time to wind up this blog post and get back to my own effort to create AGI -- with far less money and computers than Google, but -- at least -- a focus on (and, I believe, a clear understanding of) the essence of the problem....

Sure, it would be nice to have the resources of a Google or M$ or IBM backing up Novamente! But, the thing is, you don't become a big company like those by focusing on grokking the essence of cognition -- you become a big company like those by focusing on practical stuff that makes money quickly, like code and data and user interfaces ... and if AI plays a role in this, it's problem-specific narrow-AI, such as Google has done so well with.

As Larry Page recognizes, AGI will certainly have massive business value, due to its incredible potential for delivering useful services to people in a huge number of contexts. But the culture and mentality needed to create AGI seems to be different from the one needed to rapidly create a large and massively profitable company. My prediction is that if Google ever does get an AGI, they will buy it rather than build it.

Friday, May 25, 2007

Pure Silliness


Ode to the Perplexingness of the Multiverse


A clever chap, just twenty-nine
Found out how to go backwards in time
He went forty years back
Killed his mom with a whack
Then said "How can it be that still I'm?"

On the Dangers of Incautious Research and Development

A scientist, slightly insane
Created a robotic brain
But the brain, on completion
Favored assimilation
His final words: "Damn, what a pain!"

A couple clever followups to the above poem were posted by others on the Singularity email list...

On the Dangers of Emulating Biological Drives in Artificial Intelligences
(by Moshe Looks)

A scientist once shook his head
and exclaimed "My career is now dead;
for although my AI
has an IQ that's high
it insists it exists to be bred!"

By Derek Zahn:

The Provably Friendly AI
Was such a considerate guy!
Upon introspection
And careful reflection,
It shut itself off with a sigh.

And, less interestingly...

On the Benefits of Clarity in Verbal Presentation

There was a prize pig from Penn Station
Who refused to eschew obfuscation
The swine with whom he traveled
Were bedazed by his babble
So they baconed him, out of frustration

Sunday, May 20, 2007

Flogging Poor Searle Again

Someone emailed me recently about Searle's Chinese Room argument,

http://en.wikipedia.org/wiki/Chinese_room

a workhorse theme in the philosophy of AI that normally bores me to tears.

But though the Chinese room bores me, part of my reply to the guy's question wound up interesting me slightly so I thought I'd repeat it here.

I won't recapitulate the Chinese room argument here; if you don't know it please follow the above link to Wikipedia.

The issue I'll raise here ties in with the question of whether recent theoretical developments regarding "AI with massive amounts of processing power" have any relevance to pragmatic AI.

As an example of this sort of theoretical research, check out:

http://www.hutter1.net/

which describes, among other things, an AI system called AIXI that uses infinitely much computational resources and achieves a level of intelligence greater than or equal to that of any other possible AI system. There are also approximations to AIXI, such as AIXItl, that require merely insane (rather than infinite) amounts of computational resources.

My feeling is that one should think about, not just

Intelligence = complexity of goals that a system can achieve

but also

Efficient intelligence = Sum over goals a system can achieve of: (complexity of the goal)/(amount of space and time resources required to achieve the goal)
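(For concreteness, here is one way to write that down -- the symbols are just my own shorthand for the informal definition above, not a claim about the "right" formal theory:

$$\mathrm{EffIntelligence}(S) \;=\; \sum_{g \in G_S} \frac{C(g)}{r(g)}$$

where $G_S$ is the set of goals system $S$ can achieve, $C(g)$ is the complexity of goal $g$, and $r(g)$ is the amount of space and time resources $S$ needs to achieve $g$. Plain "intelligence" would be the analogous quantity with the $r(g)$ denominators dropped.)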

According to these definitions, AIXI has zero efficient intelligence, and AIXItl has extremely low efficient intelligence. The challenge of AI in the real world is in achieving efficient intelligence not just raw intelligence.

Also, according to these definitions, the Bekenstein bound places a limit on the maximal efficient intelligence of any system in the physical universe.
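(For reference -- a rough aside rather than a rigorous argument -- the Bekenstein bound limits the information that can be held in a physical region of radius $R$ containing total energy $E$ to

$$I \;\le\; \frac{2\pi R E}{\hbar c \ln 2} \ \text{bits},$$

so a system confined to a bounded region can only store and process finitely much, which caps the sum above for any physically realizable system.)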

Now, back to the Chinese room (hmm, writing this blog post is making me hungry ... after I'm done typing it I'm going to head out for some Kung Pao chicken!!)....

A key point is: The scenario Searle describes is likely not physically possible, due to the unrealistically large size of the rulebook.
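(A back-of-the-envelope illustration, with arbitrary round numbers of my own choosing: if the rulebook were anything like a lookup table over possible conversational inputs, then even exchanges of only 200 characters drawn from a vocabulary of 3000 common Chinese characters give

$$3000^{200} \;=\; 10^{200\,\log_{10} 3000} \;\approx\; 10^{695}$$

distinct input histories to cover -- while holographic-style bounds put the information capacity of the entire observable universe somewhere around $10^{120}$ to $10^{124}$ bits, depending on how you count. A cleverer, compressed rulebook is conceivable, of course, but at that point it starts looking less like a mindless lookup table and more like a mind.)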

And even if Searle's scenario somehow comes out physically plausible (e.g. maybe Bekenstein is wrong due to currently unknown physics), it certainly involves systems totally unlike any that we have ever encountered. Our terms like "intelligence" and "understanding" and "mind" were not created for dealing with massive-computational-resources systems of this nature.

The structures that we associate with intelligence (will, focused awareness, etc.) in a human context, all come out of the need to do intelligent processing within modest space and time requirements.

So when someone says they feel like the {Searle+rulebook} system isn't really understanding Chinese, what they really mean (I argue) is: It isn't understanding Chinese according to the methods we are used to, which are methods adapted to deal with modest space and time resources.

This ties in with the relationship between intensity-of-consciousness and degree-of-intelligence.

(Note that I write about intensity of consciousness rather than presence of consciousness. I tend toward panpsychism but I do accept that "while all animals are conscious, some animals are more conscious than others" (to pervert Orwell). I have elaborated on this perspective considerably in my 2006 book The Hidden Pattern.)

In real life, these seem often to be tied together, because the cognitive structures that correlate with intensity of consciousness are useful ones for achieving intelligent behaviors.

However, Searle's scenario is pathological in the sense that it posits a system with a high degree of intelligence associated with a functionality (understanding Chinese) that is NOT associated with any intensity-of-consciousness.

But I suggest that this pathology is due to the unrealistically large amount of computing resources that the rulebook requires.

I.e., it is finitude of resources that causes intelligence and intensity-of-consciousness to be correlated. The fact that this correlation breaks in a pathological, physically-impossible case that requires dramatically large resources doesn't mean too much...

What it means is that "understanding", as we understand it, has to do with structures and dynamics of mind that arise due to having to manifest efficient intelligence, not just intelligence.

That is really the moral of the Chinese room.

Tuesday, May 15, 2007

Technological versus Subjective Acceleration

This post is motivated by an ongoing argument with Phil Goetz, a local friend who believes that all this talk about "accelerating change" and approaching the Singularity is bullshit -- in part because he doesn't see things advancing all that amazingly exponentially rapidly around him.

There is plenty of room for debate about the statistics of accelerating change: clearly some things are advancing way faster than others. Computer chips and brain scanners are advancing more rapidly than forks or refrigerators. In this regard, I think, the key question is whether Singularity-enabling technologies are advancing exponentially (and I think enough of them are to make a critical difference). But that's not the point I want to get at here.

The point I want to make here is: I think it is important to distinguish technological acceleration from subjective acceleration.

This breaks down into a couple sub-points.

First: Already by this point in history, I suggest, advancement in technology has far outpaced the ability of the human brain to figure out new ways to make meaningful use of that technology.

Second: The human brain and body themselves pose limitations regarding how thoroughly we can make use of new technologies, in terms of transforming our subjective experience.

Because of these two points, a very high rate of technological acceleration may not lead to a comparably high rate of subjective acceleration. Which is, I think, the situation we are seeing at present.

Regarding the first point: Note that long ago in history, when new technology was created, it lasted quite a while before being obsoleted, so that each new technology was exploited pretty damn thoroughly before its successor came along.

These days, though, we've just BARELY begun figuring out how to creatively exploit X, when something way better than X comes along.

The example of music may serve to illustrate both of these points.

The invention of the electronic synthesizer/sampler keyboard was a hell of a breakthrough. However, the music we humans actually make has not changed nearly as much as the underlying technology has. By and large we use all this advanced technology to make stuff that sounds harmonically, rhythmically and melodically not that profoundly different from pre-synthesizer music. Certainly, the degree of musical change has not kept up with the degree of technological change: Madonna is not as different from James Brown as a synthesizer keyboard is from an electric guitar.

Why is that?

Well, humans take a while to adapt. People are still learning how to make optimal use of synthesizer/sampling keyboards for making interesting music ... but while people are still relatively early on that learning curve, technology has advanced yet further and computer music software gives us amazing new possibilities ... that we've barely begun to exploit...

Furthermore, our musical tastes are limited by our physiology. I could make fabulously complex music using a sequencer, with 1000's of intersecting melody lines carefully calculated, but no human would be able to understand it (I tried ;-). Maybe superhuman minds will be able to use modern music tech to create music far subtler and more interesting than any human music, for their own consumption.

And, even when acoustic and cognitive physiology isn't relevant, the rate of growth and change in a person's music appreciation is limited by their personality psychology.

To take another example, let's look at bioinformatics. No doubt that technology for measuring biological systems has advanced exponentially. As has technology for analyzing biological data using AI (my part of that story).

But, AI-based methods are very slow to pervade the biology community due to cultural and educational issues ... most biologists can barely deal with stats, let alone AI tech....

And, the most advanced measurement machinery is often not used in the most interesting possible ways. For instance, microarray devices allow biologists to take a whole-genome approach to studying biological systems, but, most biologists use them in a very limited manner, guided by an "archaic" single-gene-focused mentality. So much of the power of the technology is wasted. This situation is improving -- but it's improving at a slower pace than the technology itself.

Human adoption of the affordances of technology has become the main bottleneck, not the technology itself.

So there is a dislocation between the rate of technological acceleration and the rate of subjective acceleration. Both are fast but the former is faster.

Regarding word processing and Internet technology: our capability to record and disseminate knowledge has increased TREMENDOUSLY ... and, our capability to create knowledge worth recording and disseminating has increased a lot too, but not as much...

I think this will continue to be the case until the legacy human cognitive architecture itself is replaced with something cleverer such as an AI or a neuromodified human brain.

At that point, we'll have more flexible and adaptive minds, making better use of all the technologies we've invented plus the new ones they will invent, and embarking on a greater, deeper and richer variety of subjective experiences as well.

Viva la Singularity!

Thursday, February 01, 2007

Colors: A Recurring Dream

I took a couple hours and wrote down a recurring dream I've had for years, which is a sort of metaphor for transhumanism and the quest to create AGI...

http://goertzel.org/Colors.pdf

Friday, December 08, 2006

Polya's Inner Neanderthal

I remember reading, years ago, the excellent book "The Psychology of Invention in the Mathematical Field" by the mathematician Jacques Hadamard...

He surveyed a bunch of mathematicians, intending to find out how mathematicians think internally. Many mathematicians thought visually, it was found; some thought in terms of sounds, some purely abstractly.

But, George Polya was the only mathematician surveyed who claimed to think internally in terms of grunts and groans like "aaah", "urrghhh", "hmtphghhghggg"....

At the time I read this, I thought it was very odd.

However, now I have just read Mithen's book ("The Singing Neanderthals", discussed in another, recent blog of mine) claiming that the language of Neanderthals and early Cro-magnons was like that: no words, just lengthy, semi-musical grunts and groans with varying intonation patterns....

So maybe Polya was just old-fashioned.... ;-)

Anyone else out there think in terms of grunts and groans and so forth? If so please contact me....

Wednesday, December 06, 2006

Updating Kazantzakis

I saw this quote in a friend's email signature...

"What a strange machine man is! You fill him with bread, wine, fish, and radishes, and out comes sighs, laughter, and dreams."
-- Nikos Kazantzakis (1883-1957), Greek novelist.


To which my immediate mental response was:

OK, fine -- but it's what happens when you feed him with hallucinogenic mushrooms, amphetamines, ginger beer, Viagra and stir-fried snails that really fascinates me!!

Saturday, December 02, 2006

Zebulon's Favorite Place

My son Zebulon (age 13) recently had to write a brief essay for school on "My Favorite Place," as part of a national essay competition. His essay was chosen to represent his school in the county-wide competition. His theme was an amusing one which may resonate with many readers of this blog -- the third paragraph particularly reminds me of myself on some of my more productive days! (But Zeb is not working on AI, he's working on animations, see zebradillo.com)





My Favorite Place
Zebulon Goertzel



I work my way past all the furniture in my cramped room and sit down at my chair. I see a computer, a laptop. On its screen are pixels. Tiny, stabbing rays of color that drill into my eyes and let me enjoy my computer to no end despite its hideous flaws. The monitor is marked and scarred due to various past and unknown misuses. The dull keyboard is with its regular layout, usable but without an S key. I look at the front disk drive and recall being told not to remove it.

Beside my laptop is my tablet. In its middle-left side is the pen, a gnawed-on, well-used device that is often lost and found in my pocket. The tablet cover is not without scratches, some deep, some light. Each scratch is from a scribble or drawing or line somebody drew. A bright wire links my tablet to the sloppy tangle of wires, connectors and cables which is usually behind my laptop.

My computer’s fan consistently buzz-whirs with high pitch. I am hypnotized as I slowly lean forward, as I grip my tablet pen with sore, almost numb fingers, as I click and click and click. My back is hunched and my neck is out. I work. My eyes ache, but I hardly notice. My stomach is empty, but I try to ignore it. I decide to be done. I get up, stretch, and go to care for myself. My favorite place is my computer, or my desk, because there are no limits to what a computer can do, and my computer fascinates me to no end.

The Cognitive Significance of Radiohead (aka, The Historical and Possibly Current Significance in the Human Mind of Patterns of Tonal Variation)

In one of those pleasant synchronicities, a couple days ago PJ Manney started a conversation with me about music and the scientific mind, at the same time as I received in the mail a book I had ordered a couple weeks ago, "The Singing Neanderthals," about the cognitive origins of music.

So, here I'll start with some personal notes and musings in the musicaloidal direction, and finally wander around to tying them in with cognitive theory...

I had told PJ I was a spare-time semi-amateur musician (improvising and composing on the electronic keyboard -- yeah, one of these days I'll put some recordings online; I keep meaning to but other priorities intervene) and she was curious about whether this had had any effect on my AI and other scientific work.

I mentioned to her that I often remember how Nietzsche considered his music improvisation necessary to his work as a philosopher. He kept promising himself to stop spending so much time on it, and once said something like "From now on, I will pursue music only insofar as it is domestically necessary to me as a philosopher."

This is a sentiment I have expressed to myself many times (my music keyboard being a tempting 10 feet away from my work desk...). Like Nietzsche, I have found a certain degree of musicological obsession "domestically necessary" to myself as a creative thinker.... The reasons for this are interesting to explore, although one can't draw definite conclusions based on available evidence....

When I get "stuck" thinking about something really hard, I often improvise on the piano. That way one of two things happens: either

1) my mind "loosens up" and I solve the problem

or

2) I fail to solve the problem, but then instead of being frustrated about it, I abandon the attempt for a while and enjoy myself playing music ;-)

Improvising allows one's music to follow one's patterns of thought, so the music one plays can sorta reflect the structure of the intellectual problem one is struggling with....

I drew on my experiences composing/improvising music when theorizing about creativity and its role in intelligence, and cooking up the aspects of the Novamente AGI design that pertain to flexible creativity....

As well as composing and improvising, I also listen to music a lot -- basically every kind of music except pop-crap and country -- most prototypically, various species of rock while in the car, and instrumental jazz/jazz-fusion when at home working ... [I like music with lyrics, but I can't listen to it while working, it's too distracting... brings me back too much to the **human** world, away from the world of data structures and algorithms and numbers!! ... the nice thing with instrumental music is how it captures abstract patterns of flow and change and interaction, so that even if the composer was thinking about his girlfriend's titties when he wrote the song, the abstract structures (including abstract **emotional** structures) in the music may feel (and genuinely **be**) applicable to something in the abstract theory of cognition ;-) ] ... but more important than that is the almost continual unconsciously-improvised "soundtrack" inside my head. It's as though I'm thinking to music about 40% of the time, but the music is generated by my brain as some kind of interpretation of the thoughts going on.... But yet when I try to take this internal music and turn it into **real music** at the keyboard, the translation process is of course difficult, and I find that much of the internal music must exist in some kind of "abstract sound space" and could never be fully realized by any actual sounds.... (These perverted human brains we are stuck with!!!)

Now, on to Mithen's book "The Singing Neanderthals," which makes a fascinating argument for the centrality of music in the evolution of human cognition.... (His book "The Prehistory of Mind" is really good as well, and probably more of an important work overall, though not as pertinent to this discussion...)

In brief he understands music as an instantiation and complexification of an archaic system of communication that was based (not on words but) on patterns of vocal tonal variation.

(This is not hard to hear in Radiohead, but in Bach it's a bit more sublimated ;=)

This ties in with the hypothesis of Sue Savage-Rumbaugh (who works with the genius bonobo Kanzi) that language likely emerged originally from protolanguages composed of **systems of tonal variation**.

Linguist Alison Wray has made related hypotheses: that protolanguage utterances were holistic, and got partitioned into words only later on. What Savage-Rumbaugh adds is that before protolanguage was partitioned into words, it was probably possessed of a deep, complex semantics of tonal variation. She argues this is why we don't recognize most of the existing language of animals: it's not discrete-word language but continuous-tonal-variation language.

(Funny that both these famous theorists of language-as-tonal-variation are women! I have sometimes been frustrated by my mom or wife judging my statements not by their contents but by the "tone" of delivery ;-)

This suggests that a nonhuman AI without a very humanlike body is never going to experience language anywhere near the same way as a human. Even written language is full of games of implied tonal variation-pattern; and in linguistics terms, this is probably key to how we select among the many possible parses of a complex sentence.

[Side note to computational linguists and pragmatic AI people: I agree the parse selection problem can potentially be solved via statistics, like Dekang Lin does in MiniPar; or via pure semantic understanding, as we do when reading Kant in translation, or anything else highly intellectual and non-tonal in nature.... But it is interesting to note that humans probably solve parse selection in significant part thru tonal pattern recognition....]

Regarding AI and language acquisition, this line of thinking is just a further justification of taking a somewhat nonhumanlike approach to protolanguage learning; for if this sort of theory is right, the humanlike approach is currently waaay inaccessible to AI's, even ones embodied in real or simulated robots... It will be quite a while until robot bodies support deep cognitive/emotional/social experience of tonal variation patterns in the manner that we humans are capable of.... The approach to early language learning I propose for Novamente is a subtle combination of humanlike and nonhumanlike aspects.

More speculatively, there may be a cognitive flow-through from "tonal pattern recognition" to the way we partition up the overall stream of perceived/enacted data into events -- the latter is a hard cognitive/perceptual problem, which is guided by language, and may also on a lower level be guided by subtle tonal/musical communicative/introspective intuitions. (Again, from an AI perspective, this is justification in favor of a nonhumanlike route ... one of the subtler aspects of high-level AI design, I have found, is knowing how to combine human-neurocognition inspiration with computer-science inspiration... but that is a topic for another blog post some other day...)

I am also reminded of the phenomenon of the mantra -- which is a pattern of tonal variation that is found to have some particular psychospiritual effect on humans. I have never liked mantras much personally, being more driven to the spare purity of Zen meditation (in those rare moments these days when emptying the intellectual/emotional mind and seeking altered states of purer awareness seems the thing to do...); but in the context of these other ideas on music, tones and psychology, I can see that if we have built-in brain-wiring for responding to tonal variation patterns, mantras may lock into that wiring in an interesting way.

I won't try to describe for you the surreal flourish of brass-instrument sounds that I hear in my mind at this moment -- a celebratory "harmony of dissonance" tune/anti-tune apropos of the completion of this blog post, and the resumption of the software-code-debugging I was involved with before I decided to distract myself briefly via blogging...

Friday, November 10, 2006

Virtual Brilliance, Virtual Idiocy

Last night, at the offices of the Electric Sheep Company (a company devoted to creating "virtual Real Estate" in multi-participant online simulation worlds such as Second Life), I saw Sibley Verbeck give a lovely presentation on the state of the art in these proto-Metaverse technologies.

These days, more than 10K people are online in Second Life at any given moment, it seems. A million subscribers, half of them active. People are talking about the potential for using Second Life for business presentations, as a kind of super-pumped-up 3D avatar-infused WebEx. And of course the possibility for other cool apps not yet dreamed of.

Stirring stuff ... definitely, technology worth paying attention to.

And yet, Sibley's excellent presentation left me wondering the following: Do we really want to perpetuate all the most stupid and irritating features of human society in the metaverse ... such as obsession with fashion and hairstyles!!??

"Virtual MTV Laguna Beach", a non-Second-Life project that Electric Sheep Factory did, is technically impressive yet morally and aesthetically YUCK, from a Ben Goertzel perspective. Virtual So-Cal high school as a post-Singularity metaverse is a kind of transhumanist nightmare.

I remain unclear regarding whether there will really be any **interesting** "killer apps" for metaverse technology (and I don't find gaming or online dating all that interesting ;) before really powerful multisensory VR interfaces come about.

And even then, simulating humanity in virtuo fascinates me far less than going beyond the human body and its restrictions altogether.

But, I do note that we are currently using a 3D sim world to teach our Novamente baby AI system. Once it becomes smarter, perhaps we will release our AI in Second Life and let it learn from the humans there ... about important stuff like how to wear its hair right (grin!)

And I must admit to being excited about the potential of this sort of tech for scientific visualization. Flying your avatar through the folds of a virtual human brain, or a virtual cell full of virtual DNA, would be mighty educational. Not **fundamental** in the sense of strong AI or molecular assemblers or fully immersive VR, but a lot niftier than Virtual Laguna Beach....

-- Ben

Thursday, November 02, 2006

Music as a Force of Nature...

This is just a quick follow-up to the prior post on "Being a Force of Nature" ...

Thinking over the issues I wrote about in that post, I was reminded of a failed attempt I made many years ago to construct a more robust kind of music theory than the ones that currently exist....

(Ray Jackendoff's generative-grammar-based theory of music is a nice attempt in a similar direction to what I was trying to do, but ultimately I think he failed also....)

Existing music theory seems not to address the most important and interesting questions about music: Which melodies and rhythms are the most evocative to humans, in which ways, and why?

To put it crudely, we know how to distinguish (with fairly high accuracy) a horrible melody from an OK-or-better melody by automated means. And we know how to distinguish (with fairly high accuracy) what sorts of emotions an OK-or-better melody is reasonably likely to evoke, by automated means.

But, we have NO handle whatsoever, scientifically or analytically, on what distinguishes a GREAT melody (or rhythm, though I've thought most about melodies) from a mediocre one.

I spent a fair bit of time looking for patterns of this nature, mostly eyeballing various representations of melodies but also using some automated software scripts. No luck ... and I long ago got too busy to keep thinking about the issue....

What was wrong with this pursuit was, roughly speaking, the same thing that's wrong with thinking about human minds as individual, separate, non-social/cultural entities....

A musical melody is a sequence of notes arranged in time, sure ... but basically it's better thought of as a kind of SOFTWARE PROGRAM intended to be executed within the human acoustic/cognitive/emotional brain.

So, analyzing melodies in terms of their note-sequences and time-delays is sort of like analyzing complex software programs in terms of their patterns of bits. (No, it's not an exact analogy by any means, but you may get the point.... The main weaknesses of the analogy are: notes and delays are higher-level than bits; and, musical melodies are control-sequences for a complex adaptive system, rather than a simpler, more deterministic system like a von Neumann computer.)
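To make the analogy a bit more concrete, here is a tiny illustrative sketch (names and numbers entirely made up by me) of what a melody looks like at the "note/delay level" of description -- and of how completely that level of description ignores the acoustic/cognitive/emotional machine the melody-program actually runs on:

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    pitch: int       # MIDI note number, e.g. 60 = middle C
    duration: float  # seconds the note sounds
    delay: float     # seconds until the next event begins

# A melody, at the note/delay level: just a bare sequence of events.
melody = [
    NoteEvent(60, 0.5, 0.5),
    NoteEvent(62, 0.5, 0.5),
    NoteEvent(64, 1.0, 1.0),
    NoteEvent(62, 0.5, 0.5),
    NoteEvent(60, 1.5, 1.5),
]

def surface_features(events):
    """Purely note-level statistics -- the analogue of analyzing a program as a pattern of bits."""
    intervals = [b.pitch - a.pitch for a, b in zip(events, events[1:])]
    return {
        "num_notes": len(events),
        "pitch_range": max(e.pitch for e in events) - min(e.pitch for e in events),
        "mean_interval": sum(intervals) / len(intervals),
        "total_time": sum(e.delay for e in events),
    }

print(surface_features(melody))
# Nothing in these features refers to the listener's state -- the "machine"
# on which the melody actually executes -- which is exactly the problem.
```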

In principle one could find note/delay-level patterns to explain what distinguishes good from great music, but one would need a HUGE corpus of examples, and then the patterns would seem verrrry complex and tricky on that level.

A correct, useful music theory would need to combine the language of notes and delays and such with the language of emotional and cognitive responses. The kind of question involved is: in a given emotional/cognitive context, which specific note/delay patterns/combinations provide which kinds of shifts to the emotional/cognitive context.

However, we currently lack a good language for describing emotional/cognitive contexts.... Which makes the development of this kind of music theory pretty difficult.

So in what sense is music a force of nature? A piece of music comes out of the cultural/psychological/emotional transpersonal matrix, and has meaning and pattern mainly in combination with this matrix, as a sequence of control instructions for the human brains that form components of this matrix...

(I am reminded of Philip K. Dick's novel VALIS, in which a composer creates music that is specifically designed to act on human brains in a certain way, designed to bring them to certain spiritual realizations. Before ever reading Dick, in my late teens, I had a fantasy of composing a musical melody that was so wonderfully recursively revelatory -- in some kind of Escher-meets-Jimi-Hendrix-and-Bach sort of way -- that it would wake up the listener's mind to understand the true nature of the universe. Alas, I've been fiddling at the piano keyboard for years, and haven't come up with it yet....)

Anyway, this is far from the most important thing I could be thinking about! Compared to artificial general intelligence, music is not so deep and fascinating ... ultimately it's mostly a way of fiddling with the particularities of our human mental system, which is not so gripping as the possibility of going beyond these particularities in the right sort of way....

But yet, in spite of its relative cosmic unimportance, I can't really stay away from music for too long! The KORG keyboard sitting behind me tempts ... and many of my best ideas have come to me in the absence/presence that fills my mind while I'm improvising in those quasi-Middle-Eastern scales that I find so seductive (and my daughter, Scheherazade, says she's so sick of hearing, in spite of her Middle-Eastern name ;-)

OK... back to work! ...

Tuesday, October 31, 2006

On Being a Force of Nature...

Reading the book

Presence: An Exploration of Profound Change in People, Organizations, and Society
by Peter M. Senge, C. Otto Scharmer, Joseph Jaworski, and Betty Sue Flowers

led me inevitably to thoughts about the useful (but sometimes counterproductive) illusions of self and free will.

The authors argue that one path to achieving great things and great happiness is to let go of the illusion of autonomy and individual will, and in the words of George Bernard Shaw "be a force of nature," allowing oneself to serve as a tool of the universe, of larger forces that exist all around and within oneself, and ultimately are a critical part of one's own self-definition (whether one always realizes this or not).

The Shaw quote says:

"
This is the true joy in life, the being used for a purpose you consider a mighty one, the being a force of nature, rather than a feverish, selfish clod of ailments and grievances complaining that the world will not devote itself to making you happy.
"

A related quote from Martin Buber says of the "truly free" man, that he:

"
... intervenes no more, but at the same time, he does not let things merely happen. He listens to what is emerging from himself, to the course of being in the world; not in order to be supported by it, but in order to bring it to reality as it desires.
"

There is an interesting dilemma at the heart of this kind of wisdom, which is what I want to write about today.

A part of me rebels strongly against all this rhetoric about avoiding individual will and being a force of nature. After all, nature sucks in many ways -- nature "wants" me and my wife and kids and all the rest of you humans to die. What the natural and cultural world around me desires is in large measure repellent to me. I don't want to "get a haircut and get a real job" just because that's what the near-consensus of the world around me is ... and nor do I want to submit to death and disease. Nor do I want to listen to everything that nature has put inside me: anger, irrationality and the whole lot of it.... Nature has given me some great gifts and some nasty stuff as well.

Many of the things that are important to me are -- at least at first glance -- all about me exercising my individual will against what nature and society want me to do. Working to end the plague of involuntary death. Working to create superhuman minds. Composing music in scales few enjoy listening to; writing stories with narrative structures so peculiar only the really open-minded can appreciate them. Not devoting my life entirely or even primarily to the pursuits of money, TV-viewing, and propagating my genome.

On the other hand, it's worth reflecting on the extent to which the isolation and independence of the individual self is an illusion. We humans are not nearly so independent as modern Western -- and especially American -- culture (explicitly and implicitly) tells us. In fact the whole notion of a mind localized in a single body is not quite correct. As my dear friend Meg Heath incessantly points out, each human mind is an emergent system that involves an individual body, yes, but also a collection of tools beyond the body, and a collection of patterns of interaction and understanding within a whole bunch of minds. In practice, I am not just who I am inside my brain, I am also what I am inside the brains of those who habitually interact with me. I am not just what I do with my hands but also what I do with my computer. I wouldn't be me without my kids, nor without the corpus of mathematical and scientific knowledge and philosophical ideation that I have spent a large bulk of my life absorbing and contributing to.

So, bold and independent individual willfulness is, to an extent, an illusion. Even when we feel that we're acting independently, from the isolation of our own heart and mind, we are actually enacting distributed cultural and natural processes. A nice illustration of this is the frequency with which scientific discoveries -- even revolutionary ones -- are made simultaneously by multiple individuals. Charles Darwin and Alfred Russel Wallace were being willful, independent, deviant thinkers -- yet each of them was also serving as a nodal point for a constellation of forces existing outside himself ... a constellation of forces that was almost inevitably moving toward a certain conclusion, which had to be manifested through someone and happened to be manifested through those two men.

An analogy appears to exist with the representation of knowledge in the human brain. There is a peculiar harmony of localization and distribution in the way the brain represents knowledge. There are, in many cases, individual and highly localized brain regions corresponding to particular bits of knowledge. If you remove that little piece of the brain, the knowledge may go away (though in many but not all cases, it may later be regenerated somewhere else). But yet, that doesn't mean the knowledge is immanent only in that small region. Rather, when the knowledge is accessed or utilized or modified, a wide variety of brain regions may be activated. The localized region serves as a sort of "trigger" mechanism for unlocking a large-scale activation pattern across many parts of the brain. So, the knowledge is both localized and distributed: there are globally distributed patterns that are built so as to often be activated by specific local triggers.
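Here is a toy numerical illustration of that localization/distribution picture -- purely a cartoon with arbitrary made-up connectivity, not a model of any actual brain:

```python
import random

random.seed(0)

N = 200        # toy "neurons"
TRIGGER = 17   # the localized node corresponding to one piece of knowledge

# Sparse random connectivity: each node excites a handful of others.
links = {i: random.sample(range(N), 5) for i in range(N)}

def activate(trigger, steps=3):
    """Spread activation outward from a single localized trigger node."""
    active = {trigger}
    for _ in range(steps):
        active |= {j for i in active for j in links[i]}
    return active

pattern = activate(TRIGGER)
print(f"one local trigger -> {len(pattern)} of {N} nodes join the global activation pattern")
```

Remove the trigger node and that particular doorway into the pattern is gone; but the pattern itself lives in the connectivity of the whole network -- roughly the localized-yet-distributed character described above.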

We can look at humans as analogous to neurons, in the above picture. None of us contains that much in and of ourselves, but any one of us may be more or less critical in triggering large-scale activation patterns ... which in turn affect a variety of other individuals in a variety of ways....

So then, the trick in being a "force of nature" is to view yourself NOT as an individual entity with an individual mind contained in an individual body, making individual decisions ... but rather, as a potential trigger for global activity patterns; or, to put it slightly differently, as a node or nexus of a whole bunch of complex global activity patterns, with the capability to influence as well as be influenced.

When we act -- when we feel like "we" are "acting" -- it is just as fair to say that the larger (social, cultural, natural, etc.) matrix of patterns that defines us is acting thru the medium of us.

I feel analytically that what I said in the previous paragraph is true... but what is interesting is how rarely I actually feel that way, in practice, in the course of going about my daily business. Even in cases where it is very obviously the truth -- such as my work on artificial general intelligence. Yes, I have willfully chosen to do this, instead of something else easier or more profitable or more agreeable to others. On the other hand, clearly I am just serving as the tool of a larger constellation of forces -- the movement of science and technology toward AGI has been going on a long time, which is why I have at my disposal the tools to work on AGI; and a general cultural/scientific trend toward legitimization of AGI is beginning, which is why I have been able to recruit others to work on AGI with me, which has been an important ingredient for maintaining my own passion for AGI at such a high level.

How different would it be, I wonder, if in my individual daily (hourly, minutely, secondly) psychology, I much more frequently viewed myself as a node and a trigger rather than an individual. A highly specialized and directed node and trigger, of course -- not one that averages the inputs around me, but one that is highly selective and responds in a very particular way intended to cause particular classes of effects which (among other things) will come back and affect me in specific ways.

In short: Letting go of the illusion of individuality, while retaining the delights of nonconformity.

Easy enough to say and think about; and rather tricky to put into practice on a real-time basis.

Cultures seem to push you either to over-individualism or over-conformity, and finding the middle path as usual is difficult -- and as often, is not really a middle path, in the end, but some sort of "dialectical synthesis" leading beyond the opposition altogether and into a different way of being and becoming....

Sunday, September 10, 2006

Friendliness vs. Compassion, revisited (plus a bunch of babbling about what I've been up to this year)

Wow, it's been a long time since I've blogged on here -- apparently I haven't been in a bloggy mood.

It's been a busy year ... I've sent my oldest son Zarathustra off to college at age 16 (to Simon's Rock College, www.simons-rock.edu, the same place I, my sister and my ex-wife went way back in the day), which is a very odd feeling ... I finished a pretty decent draft of a novel, Echoes of the Great Farewell, which is a completely lunatic prose-poetic novel-thing told from the stream-of-consciousness point of view of a madman who believes that hallucinogenic mushrooms have told him how to create a superhuman AI (perhaps I'll actually try to get this one published, though it's not a terribly publisher-friendly beast) ... I came up with a substantial simplification of the Novamente AI design, which I'm pretty happy with due to its deep foundations in systems philosophy ... worked with my Novamente colleagues to take a few more incremental steps toward implementation of the Novamente AGI design (especial progress in the area of probabilistic reasoning, thanks to the excellent efforts of Ari Heljakka) ... did some really nice data mining work in the context of some commercial projects ... made some freaky instrumental music recordings that my wife at least enjoyed ... hiked the Na Pali Trail on Kauai and a whole bunch of trails near the Matterhorn in the Alps with my mountain-maniacal young wife Izabela ... co-organized a conference (the AGIRI workshop) ... published a philosophy book, The Hidden Pattern, which tied together a whole bunch of recent essays into a pretty coherent statement of the "world as pattern" perspective that has motivated much of my thinking ... developed a new approach to AGI developmental psychology (together with Stephan Vladimir Bugaj) ... starred in a few animations created by my son Zebulon (zebradillo.com), including one about rogue AI and another in which I mercilessly murder a lot of dogs ... helped discover what seems to be the first plausible genetic underpinnings for Chronic Fatigue Syndrome (together with colleagues at the CDC and Biomind LLC) ... and jeez, well this list is dragging on, but it's really not the half of it...

A pretty full year -- fun to live; too much going on to permit much blogging ... but frustrating in the big picture, given that it's been yet another year in which only modest incremental progress has been made toward my most important goal of creating AGI. My understanding of AGI and the universe has increased significantly this year so far, which is important. And the Novamente codebase has advanced too. Again, though, balancing the goal of achieving AGI with the goal of earning cash to support a family (send the kids to college, pay the alimony (which runs out in another 9 months -- yay!!), etc.) proves a tough nut to crack, and is just a dilemma I keep living with, without solving it satisfactorily so far.... I'll be spending much of the next 6 weeks trying to solve it again, by doing a bunch of meetings and social-networking events partially aimed at eventually putting me in touch with investors or other partners who may be interested in funding my AGI work more fully than is currently the case. (Don't get me wrong, we are moving toward AGI in the Novamente project right now, but we could be moving 10 times faster with some fairly modest investment ... the small amount of investment we've gotten so far, combined with the small surplus value my colleagues and I have managed to extract from our commercial narrow-AI contracts, is far from enough to move us along at maximum rate.)

BUT ANYWAY ... all this was not the point of this blog entry. Actually, the point was to give a link to an essay I wrote on a train from Genova to Zermatt, following a very interesting chat with Shane Legg and Izabela. Shane wrote a blog entry after our conversation, which can be found by going to his site

http://www.vetta.org/

and searching for the entry titled "Friendly AI is Bunk." I wrote an essay with a similar theme but a slightly different set of arguments. It is found at

http://www.goertzel.org/papers/LimitationsOnFriendliness.pdf

The essay is informal in the sense of a blog entry, but is too long to be a blog entry. My argument is a bit more positive than Shane's in that, although I agree with him that guaranteeing "AI Friendliness" in a Yudkowskian sense is very unlikely, I think there may be more general and abstract properties ("compassion" (properly defined, and I'm not sure how), anyone?) that can be more successfully built into a self-modifying AI.... (Shane by the way is a deep AI thinker who is now a PhD student working with Marcus Hutter on the theory of infinitely powerful AI's, and who prior to that did a bunch of things including working with me on the Webmind AI system in the late 1990's, and working with Peter Voss on the A2I2 AGI architecture.)

While you're paying attention, you may be interested in another idea I've been working on lately, which is a variant of the Lojban language (tentatively called Lojban++) that I think may be very useful for communication between humans and early-stage AGI's. If you're curious you can read about it at

http://www.goertzel.org/papers/lojbanplusplus.pdf

With a view toward making Lojban++ into something really usable, I've been spending a bit of time studying Lojban lately, which is a slow but fascinating and rewarding process that I encourage others to undertake as well (see www.lojban.org).

Well, OK ... that's enough for now ... time for bed. (I often like late-night as a time for work due to the quiet and lack of interruptions, but tonight my daughter is having a friend sleep over and they're having an extremely raucous post-midnight mop-the-dirty-kitchen-floor/mock-ice-skating party, which is more conducive to blogging than serious work ;-).) I hope to blog a bit more often in the next months; for whatever obscure human-psychology reason it seems to gratify some aspects of my psyche. Hopefully the rest of 2006 will be just as fun and diverse as the part so far -- and even more productive for Novamente AGI...

Wednesday, January 25, 2006

Inconsistentism

This blog entry arises from an email I sent to the SL4 email list, in response to a suggestion by Marc Geddes that perhaps the universe can best be considered as a logically inconsistent formal system.

I find that Marc's suggestion ties in interestingly with a prior subject I've dealt with in this blog: Subjective Reality.

I think it is probably not the best approach to think about the universe as a formal system. I find it more useful to consider formal systems as approximate and partial models of the universe.

So, in my view, the universe is neither consistent nor inconsistent, any more than a brick is either consistent or inconsistent. There may be mutually consistent or mutually inconsistent models of the universe, or of a brick.

The question Marc has raised, in this perspective, is whether the "best" (in some useful sense) way of understanding the universe involves constructing multiple mutually logically inconsistent models of the universe.

An alternative philosophical perspective is that, though the universe is not in itself a formal system, the "best" way of understanding it involves constructing more and more comprehensive and sophisticated consistent formal systems, each one capturing more aspects of the universe than the previous. This is fairly close to being a rephrasing of Charles S. Peirce's philosophy of science.

It seems nice to refer to these two perspectives as Inconsistentist versus Consistentist views of the universe. (Being clear, however, that the inconsistency and consistency refer to models of the universe rather than to the universe itself.)

Potentially the Inconsistentist perspective ties in with a previous thread in this blog regarding the notion of Subjective Reality. It could be that, properly formalized, the two models

A) The universe is fundamentally subjective, and the apparently objective world is constructed out of a mind's experience

B) The universe is fundamentally objective and physical, and the apparently subjective world is constructed out of physical structures and dynamics

could be viewed as two

  • individually logically consistent
  • mutually logically inconsistent
  • separately useful
models of the universe. If so, this would be a concrete argument in favor of the Inconsistentist philosophical perspective.
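
To make "individually consistent but mutually inconsistent" concrete, here is a toy sketch in Python (my own illustration, using a single stand-in proposition -- obviously not a formalization of models A and B themselves): each toy model is satisfiable on its own, but their union is not.

# Toy illustration (hypothetical, my own): two "models", each consistent alone,
# but jointly inconsistent. Clauses are sets of literals like "p" or "~p";
# a clause holds if at least one of its literals is true under an assignment.
from itertools import product

def consistent(clauses, variables):
    """Brute-force check: is there any truth assignment satisfying all clauses?"""
    def literal_true(lit, assignment):
        return not assignment[lit[1:]] if lit.startswith("~") else assignment[lit]
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(literal_true(lit, assignment) for lit in clause) for clause in clauses):
            return True
    return False

model_A = [{"p"}]     # stand-in for "experience is primary" (model A)
model_B = [{"~p"}]    # stand-in for its negation (model B)

print(consistent(model_A, ["p"]))            # True  -- A is consistent
print(consistent(model_B, ["p"]))            # True  -- B is consistent
print(consistent(model_A + model_B, ["p"]))  # False -- together they are not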

Inconsistentism also seems to tie in with G. Spencer Brown's notion of modeling the universe using "imaginary logic", in which contradiction is treated as an extra truth value similar in status to true and false. Francisco Varela and Louis Kauffman extended Brown's approach to include two different imaginary truth values I and J, basically corresponding to the series

I = True, False, True, False,...

J = False, True, False, True,...

which are two "solutions" to the paradox

X = Not(X)

obtained by introducing the notion of time and rewriting the paradox as

X[t+1] = Not (X[t])
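
As a quick, purely illustrative sketch (in Python, my own -- not Brown's or Varela/Kauffman's notation), iterating the time-indexed version from the two possible starting values generates exactly the series I and J:

# Iterate X[t+1] = not X[t]; the two possible initial values give the two
# oscillating "imaginary" truth-value series I and J described above.
def iterate_paradox(initial, steps):
    series = [initial]
    for _ in range(steps):
        series.append(not series[-1])
    return series

print(iterate_paradox(True, 5))    # [True, False, True, False, True, False]  -- I
print(iterate_paradox(False, 5))   # [False, True, False, True, False, True]  -- J

The "contradiction" only appears if the time index is dropped -- which is the same move made below with the subjective/objective iteration.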

In Brownian philosophy, the universe may be viewed in two ways

  • timeless and inconsistent
  • time-ful and consistent
Tying this in with the subjective/objective distinction, we obtain the interesting idea that time emerges from the feedback between subjective and objective. That is, one may look at a paradox such as

creates(subjective reality, objective reality)
creates(objective reality, subjective reality)
creates(X,Y) --> ~ creates(Y,X)

and then a resolution such as

I = subjective, objective, subjective, objective,...
J = objective, subjective, objective, subjective,...

embodying the iteration

creates(subjective reality[t], objective reality[t+1])
creates(objective reality[t+1], subjective reality[t+2])


If this describes the universe then it would follow that the subjective/objective distinction only introduces contradiction if one ignores the existence of time.

Arguing in favor of this kind of iteration, however, is a very deep matter that I don't have time to undertake at the moment!

I have said above that it's better to think of formal systems as modeling the universe rather than as being the universe. On the other hand, taking the "patternist philosophy" I've proposed in my various cognitive science books, we may view the universe as a kind of formal system comprised of a set of propositions about patterns.

A formal system consists of a set of axioms.... OTOH, in my "pattern theory" a process F is a pattern in G if
  • F produces G
  • F is simpler than G
So I suppose you could interpret each evaluation "F is a pattern in G" as an axiom stating "F produces G and F is simpler than G".
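
For concreteness, here's a minimal sketch in Python of that evaluation, using raw description length as a crude stand-in for a simplicity measure -- my simplification for illustration, not the formal definition from the books:

# Minimal sketch of "F is a pattern in G": F produces G and F is simpler than G.
# The complexity measure here (plain string length) is a crude illustrative stand-in.
import inspect

def complexity(x):
    """Crude complexity measure: just the length of the textual description."""
    return len(str(x))

def is_pattern(F, G):
    """F is a pattern in G if F produces G and F is simpler than G."""
    return F() == G and complexity(inspect.getsource(F)) < complexity(G)

# Example: a short generating process is a pattern in the long string it produces.
G = "ab" * 1000
def F():
    return "ab" * 1000

print(is_pattern(F, G))   # True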

In this sense, any set of patterns may be considered as a formal system.

I would argue that, for any consistent simplicity-evaluation-measure, the universal pattern set is a consistent formal system; but of course inconsistent simplicity-evaluation-measures will lead to inconsistent formal systems.

Whether it is useful to think about the whole universe as a formal system in this sense, I have no idea...

Thursday, December 08, 2005

A General Theory of the Development of Forms (wouldn't it be nice to have one?)

This blog entry briefly describes a long-term conceptual research project I have in mind, and have been thinking about for a while, which is to try to figure out some sort of "general theory of the development of forms/patterns in growing complex systems."

Since the Novamente AGI high-level design and the "patternist philosophy of mind" are basically completed and stable for a while (though I'm still engaged with writing them up), I need a new conceptual obsession to absorb the extremely-abstract-thinking portion of my brain... ;-)

Thinking about the development of forms, I have in mind three main specific areas:

  • developmental psychology (in humans and AI's)
  • epigenesis in biological systems
  • the growth of the early universe: the emergence of physical law from lawlessness, etc. (cf John Wheeler)

Each of these is a big area and I've decided to proceed through them in this order. Maybe I will never get to the physics part and will just try to abstract a general theory of development from the first two cases, we'll see.

I also have an intuition that it may be useful to use formal language theory of some sort as a conceptual tool for expressing developmental stages and patterns. Piaget tried to use abstract algebra in some of his writings, which was a nice idea, but didn't quite work. This ties in with Jerry Fodor's notion of a "language of thought", which I don't quite buy in all the senses he means it, but which may have some real meat to it. It may be that developing minds effectively use different languages of thought at different stages. I don't know if anyone has taken this approach in the developmental psych literature.

For instance, it's arguable that quantifier binding is only added to the human language of thought at Piaget's formal stage, and that recursion is only added to the human language of thought at Piaget's concrete operational stage (which comes along with phrase structure syntax as opposed to simpler proto-language). What I mean by "X is added to the human language of thought at stage S" is something like "X can be used with reasonable generality and fluidity at stage S" -- of course many particular instances of recursion are used before the concrete operational phase, and many particular instances of quantifier binding are used before the formal phase. But the full "syntax" of these operations is not mastered prior to the stages I mentioned, I suggest. (Note that I am using Piaget's stage-labels only for convenience; I don't intend to use them in my own theory of forms; if I take a stage-based approach at all then I will define my own stages.)

I note that formal language theory is something that spans different domain areas in the sense that

  • there's discussion of "language of thought" in a general sense
  • natural language acquisition is a key aspect of developmental psych
  • L-system theory shows that formal languages are useful for explaining and modeling plant growth (see the toy sketch below)
  • "Symbolic dynamics" uses formal language theory to study the dynamics of chaotic dynamical systems in any domain, see also Crutchfield and Young

So it seems to be a potentially appropriate formal tool for such a project.
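
As a toy illustration of the L-system bullet above, here is a minimal sketch in Python of Lindenmayer's original "algae" grammar -- nothing specific to the developmental project, just a demonstration of a formal language generating growth:

# Lindenmayer's original "algae" L-system: A -> AB, B -> A.
# Every symbol is rewritten in parallel, once per generation.
def lsystem(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"A": "AB", "B": "A"}
for n in range(6):
    print(n, lsystem("A", rules, n))
# 0 A
# 1 AB
# 2 ABA
# 3 ABAAB
# 4 ABAABABA
# 5 ABAABABAABAAB

The string lengths grow like the Fibonacci numbers -- a tiny example of lawful structure emerging from a purely syntactic rewriting process.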

I was discussing this with my friend Stephan Bugaj recently and he and I may write a book on this theme if we can pull our thinking together into a sufficiently organized form....

Friday, December 02, 2005

More Venting about Scientific Narrowmindedness and Superintelligent Guinea Pigs

I spent the day giving a talk about bioinformatics to some smart medical researchers and then meeting with them discussing their research and how advanced narrow-AI informatics tools could be applied to help out with it.

AAARRRGGHHH!!! Amazing how difficult it is to get even clever, motivated, knowledgeable biologists to understand math/CS methods. The techniques I presented to them (a bunch of Biomind stuff) would genuinely help with their research, and are already implemented in stable software -- there's nothing too fanciful here. But the "understanding" barrier is really hard to break through -- and I'm not that bad at explaining things; in fact I've often been told I'm really good at it....

We'll publish a bunch of bioinformatics papers during the next year and eventually, in a few more years, the techniques we're using (analyzing microarray and SNP and clinical data via learning ensembles of classification rules; then data mining these rule ensembles, and clustering genes together based on whether they tend to occur in the same high-accuracy classification rules, etc.) will become accepted by 1% or 5% of biomedical researchers, I suppose. And in 10 years probably it will all be considered commonplace: no one will imagine analyzing genetics data without using such techniques....
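
For the curious, here's a very rough sketch in Python of the rule-ensemble clustering idea just described, with made-up gene names and a trivial co-occurrence count -- not the actual Biomind code or algorithms:

from collections import defaultdict
from itertools import combinations

# Each learned rule: (the set of genes it tests, its cross-validated accuracy).
# These rules and accuracies are invented purely for illustration.
rule_ensemble = [
    ({"GENE_A", "GENE_B"}, 0.91),
    ({"GENE_A", "GENE_C"}, 0.88),
    ({"GENE_B", "GENE_D"}, 0.55),   # low-accuracy rules are ignored below
]

ACCURACY_THRESHOLD = 0.8
co_occurrence = defaultdict(int)
for genes, accuracy in rule_ensemble:
    if accuracy < ACCURACY_THRESHOLD:
        continue
    for g1, g2 in combinations(sorted(genes), 2):
        co_occurrence[(g1, g2)] += 1

# Genes that keep showing up together in accurate rules are candidates to be
# clustered together (a real version would feed these counts into a proper
# clustering algorithm rather than just printing them).
for (g1, g2), count in sorted(co_occurrence.items()):
    print(g1, g2, count)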

Whether Biomind will manage to get rich during this process is a whole other story -- it's well-known that the innovative companies at the early stage of a revolution often lose out financially to companies that enter the game later once all the important ideas have already been developed. But finances aside, I'm confident that eventually, little by little, the approach I'm taking to genetic data analysis will pervade and transform the field, even if the effect is subtle and broad enough that I don't get that much credit for it....

And yet, though this Biomind stuff is complex enough to baffle most bioinformaticists and to be really tough to sell, it's REALLY REALLY SIMPLE compared to the Novamente AI design, which is one or two orders of magnitude subtler. I don't think I'm being egomaniacal when I say that no one else has really appreciated most of the subtlety in the Novamente design -- not even the other members of the Novamente team, many of whom have understood a lot. Which is verrrry different from the situation with Biomind: while the Biomind methods are too deep for most biologists, or most academic journal referees who review our papers, to understand, everyone on the Biomind team fully "gets" the algorithms and ideas.

Whether the subtlety of the Novamente design ever gets to be manifested in reality remains to be determined -- getting funding to pay a small team to build the Novamente system according to the design remains problematic, and I am open to the possibility that it will never happen, dooming me (as I've joked before) to a sort of Babbagedom. What little funding there is for AGI-ish research tends to go to folks who are better at marketing than I am, and who are willing to tell investors the story that there's some kind of simple path to AGI. Well, I don't think there is a simple path. There's at least one complex path (Novamente) and probably many other complex paths as well; and eventually someone will follow one of them if we don't annihilate ourselves first. AGI is very possible with 3-8 years effort by a small, dedicated, brilliant software team following a good design (like Novamente), but if the world can't even understand relatively simple stuff like Biomind, getting any understanding for something like Novamente is obviously going to continue to be a real uphill battle!

Relatedly, a couple weeks ago I had some long conversations with some potential investors in Novamente. But the investors ended up not making any serious investment offer -- for a variety of reasons, but I think one of them was that the Novamente design was too complex for them to easily grok. If I'd been able to offer them some easily comprehensible apparent path to AGI, I bet they would have invested. Just like it would be easier to sell Biomind to biologists if they could grok the algorithms as well as the Biomind technical team. Urrrghh!

Urrrgghhh!! urrrgghh!! ... Well, I'll keep pushing. There are plenty of investors out there. And the insights keep coming: interestingly, in the last few days a lot of beautiful parallels have emerged between some of our commercial narrow-AI work in computational linguistics and our more fundamental work in AGI (relating to making Novamente learn simple things in the AGI-SIM simulation world). It turns out that there are nice mathematical and conceptual parallels between algorithms for learning semantic rules from corpuses of texts, and the process of learning the functions of physical objects in the world. These parallels tell us a lot about how language learning works -- specifically, about how structures for manipulating language may emerge developmentally from structures for manipulating images of physical objects. This is exactly the sort of thing I want to be thinking about right now: now that the Novamente design is solid (though many details remain to be worked out, these are best worked out in the course of implementation and testing), I need to be thinking about "AGI developmental psychology," about how the learning process can be optimally tuned and tailored. But instead, to pay the bills and send the kids to college yadda yadda yadda, I'm trying to sell vastly simpler algorithms to biologists who don't want to understand why it's not clever to hunt for biomarkers for a complex disease by running an experiment with only 4 Cases and 4 Controls. (Answer: because complex diseases have biomarkers that are combinations of genes or mutations rather than individual genes/mutations, and to learn combinational rules distinguishing one category from another, a larger body of data is needed.)

Ooops! I've been blogging too long, I promised Scheherazade I would go play with her guinea pigs with her. Well, in a way the guinea pigs are a relief after dealing with humans all day ... at least I don't expect them to understand anything. Guinea pigs are really nice. Maybe a superintelligent guinea pig would be the ultimate Friendly AI. I can't remember ever seeing a guinea pig do anything mean, though occasionally they can be a bit fearful and defensive....

Tuesday, November 29, 2005

Post-Interesting

Hi all,

I have launched a second blog, which is called Post-Interesting

www.post-interesting.com

and I have invited a number of my friends to join me in posting to it (we'll see if any of them actually get around to it!).

The idea is that this current blog ("Multiverse According to Ben") will contain more personal-experience and personal-opinion type entries, whereas Post-Interesting will be more magazine-like, containing reviews, interesting links, and compact summaries of highly crisp scientific or philosophical ideas.... (Of course, even my idea of "magazine-like" contains a lot of personal opinions!)

Not that I really have time to maintain one blog let alone two, but from time to time I seem to be overtaken by an irresistible desire to expunge massive amounts of verbiage ;-D

If people make a lot of interesting posts to Post-Interesting then one day it will be a multimedia magazine and put Wired and Cosmopolitan out of business! (For now I just put three moderately interesting initial posts there....)

-- Ben

Wednesday, November 16, 2005

Reality and Religion (a follow-up to earlier posts on Objective/Subjective Reality)

This post is a response to Bob McCue's comments to my earlier blog entry on "Objective and Subjective Reality". Scroll down after going to

http://www.goertzel.org/blog/2005/07/objective-versus-subjective-reality.html

to read his comments.

Bob is a former Mormon and has written extensively and elegantly about his reasons for leaving the faith:

http://mccue.cc/bob/spirituality.htm

He read my blog on objective/subjective reality and my essay on "social/computational/probabilist" philosophy of science

http://www.goertzel.org/dynapsyc/2004/PhilosophyOfScience_v2.htm

and then posed some questions regarding the probabilistic justification of religious beliefs.

Bob: The questions you raise are deep and fascinating ones and unfortunately I don't have time right now to write a reply that does them justice.

However, I can't resist saying a few things ;-)

I was never religious but my ex-wife was and, although this led to numerous unpleasant arguments between us, it also led me to gain some degree of appreciation (OK, not all that much!) for the religious perspective. For her (as a Zen Buddhist) it was never about objective truth at all, it was always about subjective experience -- her own and that of the others in her sangha (religious group). If probability theory was relevant, it was in the context of evaluations like

Probability ( my own spiritual/emotional state is good GIVEN THAT I carry out these religious practices)

>

Probability ( my own spiritual/emotional state is good GIVEN THAT I don't carry out these religious practices)

The evaluation criterion was internal/subjective not external/objective. The actual beliefs of the religion were only evaluated in regard to their subjective effects on the believer's internal well-being. This fits in with a Nietzschean perspective in which "An organism believes what it needs to believe in order to survive", if you replace "survive" with "maximize internal satisfaction" (which ultimately approximately reduces to Nietzsche's "survival" if one takes an evolutionary view in which we have evolved to, on average, be satisfied by things correlated with our genomes' survival).

I am not sure what this has to do with religions like Mormonism though. I think my ex got interested in Zen (in her mid-20's) partly because I had talked to her about it years before that, when as a teenager I had found Huang Po's Zen writings (on exiting the world of thought and ideas and entering the world of pure truth/nothingness) really radical and fascinating. Zen is not very typical of religions and it's questionable whether it really belongs in the "religion" category -- it's a borderline case. It specifically teaches that the external, "objective" world is illusory and urges you to fully, viscerally and spiritually understand this world's construction via the mind. Thus in a Zen perspective the empirical validation or refutation of hypotheses (so critical to science) is not central, because it takes place within a sphere that is a priori considered illusory and deceptive. Because of this Zen tends not to make statements that contradict scientific law; rather it brushes the whole domain of science aside as being descriptive of an illusory reality.

I guess that Mormonism is different in that it makes hypotheses that directly contradict scientific observation (e.g. do Mormons hold the Earth was created 6000 years ago?). But still, I suspect the basic psychological dynamics is not that different. People believe in a religion because this belief helps them fulfill their own goals of personal, social or spiritual satisfaction. Religious people may also (to varying extents) have a goal of recognizing valid patterns in the observed world; but people can have multiple goals, and apparently for religious people the goal of achieving personal/social/spiritual satisfaction thru religion overwhelms the goal of recognizing valid patterns in the observed world. I find nothing very mysterious in this.

Bob: You ask about belief in Kundalini Yoga (another obsession of my ex-wife, as it happens.) I guess that the KY system helps people to improve their own internal states and in that case people may be wise to adopt it, in some cases... even though from a scientific view the beliefs it contains are a tricky mix of sense and nonsense.

However, it seems pretty clear to me that religious beliefs, though they may sometimes optimally serve the individual organism (via leading to various forms of satisfaction), are counterproductive on the species level.

As a scientific optimist and transhumanist I believe that the path to maximum satisfaction for humans as a whole DOES involve science -- both for things like medical care, air conditioning and books and music, and for things like creating AI's to help us and creating nanotech and gene therapy solutions for extending our lives indefinitely.

There's a reason that Buddhism teaches "all existence involves suffering." It's true, of course -- but it was even more true in ancient India than now. There was a lot more starvation and disease and general discomfort in life back then, which is why a suffering-focused religion like Buddhism was able to spread so widely. The "suffering is everywhere" line wouldn't sell so well in modern America or Western Europe, because although suffering still IS everywhere, it's not as extreme and not as major a component of most people's lives. Which is due, essentially, to science. (I am acutely aware that in many parts of the world suffering is a larger part of peoples' lives, but, this does not detract from the point I am making.)

Since religious belief systems detract from accurate observation of patterns in reality, they detract from science and thus from the path with the apparently maximal capacity to lead humanity toward overall satisfaction, even though they may in fact deliver maximal personal satisfaction to some people (depending on their personal psychology).

However, one may argue that some people will never be able to contribute to science anyway (due to low intelligence or other factors), so that if they hold religious beliefs and don't use them to influence the minds of science-and-technology-useful people, their beliefs are doing no harm to others but may be increasing their own satisfaction. Thus, for some people to be religious may be a good thing in terms of maximizing the average current and long term satisfaction of humanity.

There is also a risk issue here. Since religion detracts from science and technology, it maintains humans in a state where they are unlikely to annihilate the whole species, though they may kill each other in more modest numbers. Science gives us more power for positive transformation and also more power for terrible destruction. The maximum satisfaction achievable thru science is higher than thru religion (due to the potential of science to lead to various forms of massively positive transhumanism), but the odds of destruction are higher too. And we really have no way of knowing what the EXPECTED outcome of the sci-tech path is -- the probabilities of transcension versus destruction.

[As I wrote the prior paragraph I realized that no Zen practitioner would agree with me that science has the power to lead to greater satisfaction than religion. Semantics of "satisfaction" aside they would argue that "enlightenment" is the greatest quest and requires no technology anyway. But even if you buy this (which I don't, fully: I think Zen enlightenment is an interesting state of mind but with plusses and minuses compared to other ones, and I suspect that the transhuman future will contain other states of mind that are even more deep and fascinating), it seems to be the case that only a tiny fraction of humans have achieved or ever will achieve this exalted state. Transhumanist technology would seem to hold the possibility of letting any sentient being choose their own state of mind freely, subject only to constraints regarding minimizing harm to others. We can all be enlightened after the Singularity -- if we want to be! -- but we may well find more appealing ways to spend our eternity of time!! -- ]

OK, I drifted a fair way from Mormonism there, back to my usual obsessions these days. But hopefully it was a moderately interesting trajectory.

For a more interesting discussion of Mormonism, check out the South Park episode "All About Mormons." It was actually quite educational for me.