Thursday, September 25, 2008

Another Transhumanist Nightmare

Some anonymous freak wrote this story, a piece of transhumanist/absurdist fantasy which includes me in a minor role ... it's childish, but I have to say, mildly amusing...

Tuesday, September 23, 2008

The End of the American Era!! (Not)

I'm not generally a very political person ... my thinking and my life-decisions are pretty strongly focused on the "big picture": superhuman AI, the Singularity, transhumanism and all that.

I was deeply into politics as a teen (largely because my parents raised me to be), but as I realized that utopian political dreams were likely to founder on the intrinsic biological perversity of human nature, I drifted away from the political sphere and started thinking more about how to improve or transcend human nature itself....

However, every now and then some piece of political stupidity gets on my nerves sufficiently that I wind up burning time thinking about it.

One of these cases has occurred recently: I've become annoyed by a large number of people proclaiming that "the American era is finally ending." No empire rules forever, and blah blah blah.

I've been hearing this sort of talk for a while, and all the more intensely given this past week's American banking crisis.

So I decided to write a blog post to get my thoughts on the topic out of my head!

I've never been noted for my patriotism: I really don't care, at a fundamental level, about nations or other related manifestations of contemporary human society. I'll be happy to see them all go away once human nature is fundamentally reformed via radical technological advances.

I've also spent enough time living and traveling outside the US to get some feel for the strengths and weaknesses of the good/bad old US of A.

My considered opinion of the "end of the American era" meme is that it's pretty much bullshit.

I also seem to look at the current financial crisis a little differently than most others (big surprise there, huh?).

The issues that investment banks, insurance companies and related institutions have recently experienced have been widely attributed to greed, poor government regulation, and so forth. These attributions are surely correct -- but any real event has multiple causes ("cause" being essentially a creature of subjective theory rather than physical reality anyway). And one cause is not getting enough attention: the phenomenal practical creativity embodied in all the recondite financial instruments (credit default swaps, mortgage strips and the like) underlying the recent woes.

There is some really cool math underlying these financial devices, and this math was largely invented and pragmaticized by American entrepreneurial thinkers. American quants have developed many new fields of financial math, and brought these into the real world, thus moving the global economy to a whole new level of complexity and efficiency.

Innovation always carries risks ... and we've seen that in the markets over the last weeks and months. But let's not forget how amazing the innovations are, and what tremendous positive potential they have.

I agree that exotic derivatives should be regulated more carefully. On the other hand, I also agree with their advocates that they add significant efficiency to the financial markets, and hence are a major asset to the world economy.

Of course, one can theoretically envision socioeconomic systems in which efficiency would be achieved by other, less perverted and convoluted means. But, as history shows, theoretically-envisioned socioeconomic systems are difficult to translate into realities, because of the subtleties of human psychology and culture.

And it's precisely these "subtleties of psychology and culture" that led America to invent quantitative finance ... and so many other amazing technological and scientific developments ... which is exactly why I tend to doubt the "American era" is at its end.

My contention, and it's not a terribly original one (but I may have a somewhat original slant), is that compared to other countries on the planet right now, the USA has a combination of cultural psychology and socioeconomic institutions that is uniquely well-suited to fostering practical creativity.

Note the compound of terms: "practical" and "creativity."

I don't think the US has any kind of monopoly on creativity itself. There are brilliant, creative minds everywhere. Some cultures foster creativity more than others ... and the US is pretty good at this ... but I'm not sure it's uniquely good.

And I don't think the US has any kind of monopoly on practicality, either. Although historically this has been a US characteristic, there are surely other nations that are currently more down-to-earth and practical than the US (as a generalization across various aspects of life).

However, the US seems to be uniquely good at taking creative new ideas and finding the first ways to give them practical implementations -- an art that requires a great deal of creativity in itself, of course.

What is it about the US that fosters practical creativity? It's no one thing. It's a synergetic combination of culture and institutions. The institutions help keep the culture alive, and the culture helps keep the institutions alive. Practical creativity is something that pervades many aspects of US life -- government, research, education and industry, for example. Precisely because of its pervasive and systemic nature, the memeplex that constitutes practical creativity in the US is difficult for other nations to copy, even if they have a genuine desire to.

To see what I mean more concretely, think about three examples: the Internet, the Human Genome Project, and the personal computer. How did these come about?

The history of the PC embodies many classic stories of American entrepreneurism, including the creation of Apple and Microsoft by young nerdy entrepreneurs out of nowhere. But it also tells you something about the flexibility of large US corporations relative to similar institutions elsewhere: it was IBM striking a deal with Bill Gates, some young nerd from nowhere with no real business experience, that set the PC industry on its modern path. Not to mention the freewheeling US corporate research lab culture of the time (Xerox PARC and all that). And the government research funding establishment played its role behind the scenes, for instance in funding the creation of mainframes that Bill Gates played with (often breaking the rules to do so) in high school and college, before starting Microsoft....

The Internet began as a project of ARPA (now DARPA), a US government research funding agency that has its strengths and weaknesses, but is notable for its chaotic approach to funding. DARPA program managers cycle in and out every 4 years so that no individual has too much power over resource allocation decisions. There are certainly "old boy networks" involved, and I've personally been fairly unhappy with DARPA's funding choices in my own research field of AI. However, it's interesting to compare the DARPA funding approach with the approach of, say, the Japanese government. Historically, the Japanese have had a tendency to fund huge, comprehensive, nationwide research programmes: e.g. the Fifth Generation computing initiative (which funded a large number of researchers to work on logic-based AI), or the current focus on robotics technology. As a crude approximation, it seems the Japanese funding system tends to push researchers to "all work on the same sort of thing at the same time", whereas the American research funding system is more chaotic, leading to a greater diversity of ideas getting explored simultaneously. We still are overly trend-following and narrow-focused in the US, from my point of view: for instance, AI funding has focused on narrow-AI, logic-based systems and neural net systems for far too long; and the biology community is taking way too long to wake up to the importance of systems biology. But, compared to the rest of the world, the US research funding system is a hotbed of creative chaos.

And then, once the Internet escaped the clutches of ARPA (due to the legislative action of folks like Al Gore, who famously bragged he "invented the Internet" due to his role in this political process), it spread through the collective activity of masses of software entrepreneurs. The Web was initially developed in Europe, but what made it a huge phenomenon was American entrepreneurship, pushed on by the relative ease of securing angel and venture funding in the US. I lived in Australia in the late 1990s, but when I wanted to start a software business I had to return to the US, because it was so hard to secure investment for an oddball software startup anywhere else (not that it was easy in the US, but it was a bit less painfully difficult...).

The Human Genome Project (which has ushered in a completely new era of genetics and medical research) began as a US government initiative, involving a network of university labs. And note that the US graduate education system is still by far the best in the world. Our elementary and high schools are generally pathetic compared to those of other developed nations, though there are many exceptionally good schools out there too (the US being a big, diverse place) ... but by the time one gets to grad school, the US is the place to be. Top undergrads from around the world vie to get into our grad schools, and top PhDs vie for postdoc positions at our universities.

But what accelerated the Human Genome Project was the entry of Celera Genomics into the picture -- a venture-funded entrepreneurial attempt to outdo the government genome sequencing project. The new ideas Celera introduced (shotgun sequencing) accelerated the government sequencing project as well, helping the latter to complete ahead of schedule and under budget. (Now Craig Venter, who founded Celera, is involved with a number of projects, some commercial and some nonprofit within government-funded labs ... including a far-out attempt to create the first artificial genome.)

In each of these three cases -- and I could have chosen many others -- we see a complex combination of individual scientific and entrepreneurial initiative, and the spontaneously coordinated, somewhat chaotic and happenstance interaction of government, commercial and educational institutions. This combination isn't planned in detail, and doesn't always make sense, and makes a lot of really stupid decisions (such as not funding advanced AI research much more amply), but it also does a lot of smart things ... and it interpenetrates with subtle, hard-to-describe aspects of American culture in ways that no one has yet been able to document.

Part of the story, of course, is the incredible diversity of the American population: our scientists and engineers, especially, come from all over the world ... and increasingly our business leaders do too. So American culture isn't exactly American culture: it's really world culture, but with an American slant. And this is one among many major differences between America and other contemporary nations, which is closely linked to the "practical creativity" memeplex. I can't see anywhere in Asia, or anywhere in Europe (except possibly England), adopting the "melting pot" aspect of American culture ... but without this melting-pot aspect, it seems to me that practical creativity will have a lot harder time really flourishing. The diversity of ideas and approaches that comes from welcoming and then chaotically blending cultures and outlooks from all over the world, is a major source of practical creativity.

The move from a manufacturing and service economy to a knowledge economy has become famous. The next step, I suggest, is going to be a gradual shift from a knowledge economy to a creativity economy. As knowledge work becomes commoditized, the really precious thing will be creativity work: but not abstract creativity-work detached from the everyday world ... practical-creativity work, aimed at moving the real world forward in unexpected directions. Because of this, I suspect the US will maintain its cultural and economic leadership role in the world for quite some time.

And we'd damn well better, because with all the debt we're racking up, we're basically placing a huge BET that we're going to dramatically increase our productivity via technological efficiency improvements of various sorts. It's a fairly large gamble, but calculated risks are part of the American way ... as recent events on Wall Street show, this approach definitely has its risks ... but my guess is that this gamble will ultimately pan out just fine.

Getting back to my futurist preoccupations: My best guess is that the bulk of the work of creating the Singularity is going to be centered in America. This work will surely be international -- my own current work on advanced AGI technology involves a team with members in South America, Europe, Australia, New Zealand and Asia as well as the US (no Antarcticans yet...). But there's a reason my company Novamente LLC is centered in the US and not these other countries, beyond historical happenstance ... the US is the place where businesses and nonprofit agencies are most willing to seriously consider the practical value of way-out-there technologies. So long as this doesn't change, the American era is going to keep on rolling ... at least that's my best guess at the moment ...

Monday, September 22, 2008

AGI Intelligence Testing

I spent a while this weekend thinking about what might be the right approach for testing the intelligence of early-stage AGI systems that are aimed at human-level, roughly human-like general intelligence (either as an end goal or an intermediate developmental milestone).

Some of my thoughts are summed up in an essay I posted at

I’ll quote the first few paragraphs here:

One of the many difficult issues arising in the course of research on human-level AGI is that of “evaluation and metrics” – i.e., AGI intelligence testing.

It’s not so hard to tell when you’ve achieved human-level AGI — though there is some subtlety here, which I’ll discuss below. However, assessing the quality of incremental progress toward human-level AGI is a much subtler matter. In this essay I’ll present some thoughts on this issue, culminating in a couple specific proposals:

1) Online School Tests, in which AGIs are tested via their ability to succeed in existing online educational fora

2) Of more immediate interest, a series of tests called the AGI Preschool Tests (AIP Tests, for short, pronounced "ape tests"), based on the notion of "multiple intelligences" and also on some novel ideas regarding learning-based intelligence testing.

The AIP Tests suggested here are specifically intended for AGI systems that control agents embodied in 3D worlds resembling the everyday human world, via either physical robots or virtually embodied agents. Very differently embodied AGI systems (e.g. systems to be initially taught purely via text without any simulated human-like or animal-like body) would potentially need qualitatively different testing methodologies.

Saturday, August 30, 2008

On the Preservation of Goals in Self-Modifying AI Systems

I wrote down some speculative musings on the preservation of goals in self-modifying AI systems, a couple weeks back; you can find them here:

The basic issue is: what can you do to mitigate the problem of "goal drift", wherein an AGI system starts out with a certain top-level goal governing its behavior, but then gradually modifies its own code in various ways, and ultimately -- through inadvertent consequences of the code revisions -- winds up drifting into having different goals than it started with? I certainly didn't answer the question, but I came up with some new ways of thinking about the problem, and formalizing it, that I think might be interesting....

While the language of math is used in the paper, don't be fooled into thinking I've proved anything there ... the paper just contains speculative ideas without any real proof, just as surely as if they were formulated in words without any equations. I just find that math is sometimes the clearest way to say what I'm thinking, even if I haven't come close to proving the correctness of what I'm thinking yet...

An abstract of the speculative paper is:

Toward an Understanding of the Preservation of Goals
in Self-Modifying Cognitive Systems

Ben Goertzel

A new approach to thinking about the problem of “preservation of AI goal systems under repeated self-modification” (or, more compactly, “goal drift”) is presented, based on representing self-referential goals using hypersets and multi-objective optimization, and understanding self-modification of goals in terms of repeated iteration of mappings. The potential applicability of results from the theory of iterated random functions is discussed. Some heuristic conclusions are proposed regarding what kinds of concrete real-world objectives may best lend themselves to preservation under repeated self-modification. While the analysis presented is semi-rigorous at best, and highly preliminary, it does intuitively suggest that important humanly-desirable AI goals might plausibly be preserved under repeated self-modification. The practical severity of the problem of goal drift remains unresolved, but a set of conceptual and mathematical tools are proposed which may be useful for more thoroughly addressing the problem.

Wednesday, August 27, 2008

Playing Around with the Logic of Play

On the AGI email list recently, someone asked about the possible importance of creating AGI systems capable of playing.

Like many other qualities of mind, I believe that the interest in and capability for playing is something that should emerge from an AGI system rather than being explicitly programmed-in.

It may be that some bias toward play could be productively encoded in an AGI system ... I'm still not sure of this.

But anyway, in the email list discussion I formulated what seemed to be a simple and clear characterization of the "play" concept in terms of uncertain logical inference ... which I'll recount here (cleaned up a slight bit for blog-ification).

And then at the end of the blog post I'll give some further ideas which have the benefit of making play seem a bit more radical in nature ... and, well, more playful ...

Fun ideas to play with, at any rate 8-D

My suggestion is that play emerges (... as a consequence of other general cognitive processes...) in any sufficiently generally-intelligent system that is faced with goals that are very difficult for it.

If an intelligent system has a goal G which is time-consuming or difficult to achieve ... it may then synthesize another goal G1 which is easier to achieve.

We then have the uncertain syllogism:

Achieving G implies reward
G1 is similar to G
∴ Achieving G1 implies reward

(which in my Probabilistic Logic Networks framework would be most naturally modeled as an "intensional implication".)

As links between goal-achievement and reward are to some extent modified by uncertain inference (or analogous process, implemented e.g. in neural nets), we thus have the emergence of "play" ... in cases where G1 is much easier to achieve than G ...
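For concreteness, the syllogism can be rendered as a toy calculation. The function names and the simple product rule below are my own illustrative stand-ins (actual PLN truth-value formulas are considerably more involved): reward expectation transfers from G to G1 discounted by similarity, and G1 becomes attractive when it is much easier to achieve.

```python
def inferred_reward_strength(reward_G, similarity):
    # The uncertain syllogism, crudely: if achieving G implies reward with
    # strength reward_G, and G1 resembles G to degree similarity, then infer
    # that achieving G1 implies reward with strength ~ reward_G * similarity.
    return reward_G * similarity

def play_appeal(reward_G, similarity, difficulty_G, difficulty_G1):
    # Why the easier goal G1 gets pursued: its inferred reward is
    # discounted by similarity to G, but boosted by how much easier
    # it is to achieve than the "real" goal G.
    return inferred_reward_strength(reward_G, similarity) * (difficulty_G / difficulty_G1)

# Roughhousing (G1) vs. actual fighting (G): fairly similar, much easier,
# so pursuing G1 looks more appealing than directly pursuing G itself.
print(play_appeal(1.0, 0.8, 10.0, 2.0))
```

On these toy numbers, the play-goal's appeal (0.8 reward-strength times a 5x ease advantage) exceeds the direct reward-strength of G, which is the qualitative pattern the syllogism is meant to capture.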

Of course, if working toward G1 is actually good practice for working toward G, this may give the intelligent system (if it's smart and mature enough to strategize) or evolution impetus to create additional bias toward the pursuit of G1.

In this view, play is a quite general structural phenomenon ... and the play that human kids do with blocks and sticks and so forth is a special case, oriented toward ultimate goals G involving physical manipulation.

And the knack in gaining anything from play (for the goals that originally inspired the play) is in appropriate similarity-assessment ... i.e. in measuring similarity between G and G1 in such a way that achieving G1 actually teaches things useful for achieving G.

But of course, play often has indirect benefits and assists with goals other than the ones that originally inspired it ... and, due to its often stochastic, exploratory nature it can also have an effect of goal drift ... of causing the mind's top-level goals to change over time ... (hold that thought in mind, I'll return to it a little later in this blog post...)

The key to the above syllogism seems to be similarity-assessment. Examples of the kind of similarity I'm thinking of:

  • The analogy between chess or go and military strategy
  • The analogy between "roughhousing" and actual fighting

In logical terms, these are intensional rather than extensional similarities.

So for any goal-achieving system that has long-term goals which it can't currently effectively work directly toward, play may be an effective strategy...

In this view, we don't really need to design an AI system with play in mind. Rather, if it can explicitly or implicitly carry out the above inference, concept-creation and subgoaling processes, play should emerge from its interaction w/ the world...

Note that in this view play has nothing intrinsically to do with having a body. An AGI concerned solely with mathematical theorem proving would also be able to play...

Another interesting thing to keep in mind when discussing play is subgoal alienation.

When G1 arises as a subgoal of G, it may nevertheless happen that G1 survives as a goal even if G disappears, or that G1 remains important even if G loses importance. One may wish to design AGI systems to minimize this phenomenon, but it certainly occurs strongly in humans.

Play, in some cases, may be an example of this. We may retain the desire to play games that originated as practice for G, even though we have no interest in G anymore.

And, subgoal alienation may occur on the evolutionary as well as the individual level: an organism may retain interest in kinds of play that resemble its evolutionary predecessors' serious goals, but not its own!

Bob may have a strong desire to play with his puppy ... a desire whose roots were surely encoded in his genome due to the evolutionary value in having organisms like to play with their own offspring and those of their kin ... yet, Bob may have no desire to have kids himself ... and may in fact be sterile, dislike children, and never do anything useful-to-himself that is remotely similar to his puppy-playing obsession.... In this case, Bob's "purely playful" desire to play with his puppy is a result of subgoal alienation on the evolutionary level. On the other hand, it may also help fulfill other goals of his, such as relaxation and the need for physical exercise.

This may seem a boring, cold, clinical diagnosis of something as unserious and silly as playing. For sure, when I'm playing (with my kids ... or my puppy! ... or myself ... er ... wait a minute, that doesn't work in modern English idiom ;-p) I'm not thinking about subgoal alienation and inference and all that.

But, when I'm engaged in the act of sexual intercourse, I'm not usually thinking about reproduction either ... and of course we have another major case of evolution-level and individual-level subgoal alienation right there....

In fact, writing blog entries like this one is largely a very dry sort of playing! ... which helps, I think, to keep my mind in practice for more serious and difficult sorts of mental exercise ... yet even if it has this origin and purpose in a larger sense, in the moment the activity seems to be its own justification!

Still, I have to come back to the tendency of play to give rise to goal drift ... this is an interesting twist that apparently relates to the wildness and spontaneity that exists in much playing. Yes, most particular forms of play do seem to arise via the syllogism I've given above. Yet, because it involves activities that originate as simulacra of goals that go BEYOND what the mind can currently do, play also seems to have an innate capability to drive the mind BEYOND its accustomed limits ... in a way that often transcends the goal G that the play-goal G1 was designed to emulate....

This brings up the topic of meta-goals: goals that have to do explicitly with goal-system maintenance and evolution. It seems that playing is in fact a meta-goal, quite separately from the fact of each instance of playing generally involving an imitation of some other specific real-life goal. Playing is a meta-goal that should be valued by organisms that value growth and spontaneity ... including growth of their goal systems in unpredictable, adaptive ways....