Tuesday, September 23, 2008
I was deeply into politics as a teen (largely because my parents raised me to be), but as I realized that utopian political dreams were likely to founder on the intrinsic biological perversity of human nature, I drifted away from the political sphere and started thinking more about how to improve or transcend human nature itself....
However, every now and then some piece of political stupidity gets on my nerves sufficiently that I wind up burning time thinking about it.
One of these cases has occurred recently: I've become annoyed by a large number of people proclaiming that "the American era is finally ending." No empire rules forever, and blah blah blah.
I've been hearing this sort of talk for a while, but it has intensified with this past week's American banking crisis.
So I decided to write a blog post to get my thoughts on the topic out of my head!
I've never been noted for my patriotism: I really don't care, at a fundamental level, about nations or other related manifestations of contemporary human society. I'll be happy to see them all go away once human nature is fundamentally reformed via radical technological advances.
I've also spent enough time living and traveling outside the US to get some feel for the strengths and weaknesses of the good/bad old US of A.
My considered opinion of the "end of the American era" meme is that it's pretty much bullshit.
I also seem to look at the current financial crisis a little differently than most others (big surprise there, huh?).
The issues that investment banks, insurance companies and related institutions have recently experienced have been widely attributed to greed, poor government regulation, and so forth. These attributions are surely correct -- but any real event has multiple causes ("cause" being essentially a creature of subjective theory rather than physical reality anyway). And one cause is not being commented on enough: the phenomenal practical creativity embodied in the recondite financial instruments (credit default swaps, mortgage strips and the like) underlying the recent woes.
There is some really cool math underlying these financial devices, and this math was largely invented and pragmaticized by American entrepreneurial thinkers. American quants have developed many new fields of financial math, and brought these into the real world, thus moving the global economy to a whole new level of complexity and efficiency.
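To give one small taste of the math in question: the Black-Scholes formula for pricing a European call option, an ancestor of much of today's quant machinery, takes only a few lines to compute. This is the standard textbook formula, not anything from the exotic instruments discussed above, and the parameter values below are purely illustrative:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call option.

    S: current stock price, K: strike price, r: risk-free rate,
    sigma: annualized volatility, T: time to expiry in years.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money call: stock at 100, strike 100, 5% rate,
# 20% volatility, one year to expiry.
price = black_scholes_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
print(round(price, 2))  # 10.45
```

The instruments at the center of the recent crisis are layered far beyond this, of course -- but they're built from the same kind of probabilistic reasoning about uncertain future payoffs.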
Innovation always carries risks ... and we've seen that in the markets over the last weeks and months. But let's not forget how amazing the innovations are, and what tremendous positive potential they have.
I agree that exotic derivatives should be regulated more carefully. On the other hand, I also agree with their advocates that they add significant efficiency to the financial markets, and hence are a major asset to the world economy.
Of course, one can theoretically envision socioeconomic systems in which efficiency would be achieved by other, less perverted and convoluted means. But, as history shows, theoretically-envisioned socioeconomic systems are difficult to translate into realities, because of the subtleties of human psychology and culture.
And it's precisely these "subtleties of psychology and culture" that led America to invent quantitative finance ... and so many other amazing technological and scientific developments ... which is exactly why I tend to doubt the "American era" is at its end.
My contention, and it's not a terribly original one (but I may have a somewhat original slant), is that compared to other countries on the planet right now, the USA has a combination of cultural psychology and socioeconomic institutions that is uniquely well-suited to fostering practical creativity.
Note the compound of terms: "practical" and "creativity."
I don't think the US has any kind of monopoly on creativity itself. There are brilliant, creative minds everywhere. Some cultures foster creativity more than others ... and the US is pretty good at this ... but I'm not sure it's uniquely good.
And I don't think the US has any kind of monopoly on practicality, either. Although historically this has been a US characteristic, there are surely other nations that are currently more down-to-earth and practical than the US (as a generalization across various aspects of life).
However, the US seems to be uniquely good at taking creative new ideas and finding the first ways to give them practical implementations -- an art that requires a great deal of creativity in itself, of course.
What is it about the US that fosters practical creativity? It's no one thing. It's a synergetic combination of culture and institutions. The institutions help keep the culture alive, and the culture helps keep the institutions alive. Practical creativity is something that pervades many aspects of US life -- government, research, education and industry, for example. Precisely because of its pervasive and systemic nature, the memeplex that constitutes practical creativity in the US is difficult for other nations to copy, even if they have a genuine desire to.
To see what I mean more concretely, think about three examples: the Internet, the Human Genome Project, and the personal computer. How did these come about?
The history of the PC embodies many classic stories of American entrepreneurism, including the creation of Apple and Microsoft by young nerdy entrepreneurs out of nowhere. But it also tells you something about the flexibility of large US corporations relative to similar institutions elsewhere: it was IBM striking a deal with Bill Gates, some young nerd from nowhere with no real business experience, that set the PC industry on its modern path. Not to mention the freewheeling US corporate research lab culture of the time (Xerox PARC and all that). And the government research funding establishment played its role behind the scenes, for instance in funding the creation of mainframes that Bill Gates played with (often breaking the rules to do so) in high school and college, before starting Microsoft....
The Internet began as a project of ARPA (now DARPA), a US government research funding agency that has its strengths and weaknesses, but is notable for its chaotic approach to funding. DARPA program managers cycle in and out every 4 years so that no individual has too much power over resource allocation decisions. There are certainly "old boy networks" involved, and I've personally been fairly unhappy with DARPA's funding choices in my own research field of AI. However, it's interesting to compare the DARPA funding approach with the approach of, say, the Japanese government. Historically, the Japanese have had a tendency to fund huge, comprehensive, nationwide research programmes: e.g. the Fifth Generation computing initiative (which funded a large number of researchers to work on logic-based AI), or the current focus on robotics technology. As a crude approximation, it seems the Japanese funding system tends to push researchers to "all work on the same sort of thing at the same time", whereas the American research funding system is more chaotic, leading to a greater diversity of ideas getting explored simultaneously. We still are overly trend-following and narrow-focused in the US, from my point of view: for instance, AI funding has focused on narrow-AI, logic-based systems and neural net systems for far too long; and the biology community is taking way too long to wake up to the importance of systems biology. But, compared to the rest of the world, the US research funding system is a hotbed of creative chaos.
And then, once the Internet escaped the clutches of ARPA (due to the legislative action of folks like Al Gore, who famously bragged he "invented the Internet" due to his role in this political process), it spread through the collective activity of masses of software entrepreneurs. The Web was initially developed in Europe, but what made it a huge phenomenon was American entrepreneurship, pushed on by the relative ease of securing angel and venture funding in the US. I lived in Australia in the late 1990s, but when I wanted to start a software business I had to return to the US, because it was so hard to secure investment for an oddball software startup anywhere else (not that it was easy in the US, but it was a bit less painfully difficult...).
The Human Genome Project (which has ushered in a completely new era of genetics and medical research) began as a US government initiative, involving a network of university labs. And note that the US graduate education system is still by far the best in the world. Our elementary and high schools are generally pathetic compared to those of other developed nations, though there are many exceptionally good schools out there too (the US being a big, diverse place) ... but by the time one gets to grad school, the US is the place to be. Top undergrads from around the world vie to get into our grad schools, and top PhDs vie for postdoc positions at our universities.
But what accelerated the Human Genome Project was the entry of Celera Genomics into the picture -- a venture-funded entrepreneurial attempt to outdo the government genome sequencing project. The new ideas Celera introduced (shotgun sequencing) accelerated the government sequencing project as well, helping the latter to complete ahead of schedule and under budget. (Now Craig Venter, who founded Celera, is involved with a number of projects, some commercial and some nonprofit within government-funded labs ... including a far-out attempt to create the first artificial genome.)
In each of these three cases -- and I could have chosen many others -- we see a complex combination of individual scientific and entrepreneurial initiative, and the spontaneously coordinated, somewhat chaotic and happenstance interaction of government, commercial and educational institutions. This combination isn't planned in detail, and doesn't always make sense, and makes a lot of really stupid decisions (such as not funding advanced AI research much more amply), but it also does a lot of smart things ... and it interpenetrates with subtle, hard-to-describe aspects of American culture in ways that no one has yet been able to document.
Part of the story, of course, is the incredible diversity of the American population: our scientists and engineers, especially, come from all over the world ... and increasingly our business leaders do too. So American culture isn't exactly American culture: it's really world culture, but with an American slant. And this is one among many major differences between America and other contemporary nations, which is closely linked to the "practical creativity" memeplex. I can't see anywhere in Asia, or anywhere in Europe (except possibly England), adopting the "melting pot" aspect of American culture ... but without this melting-pot aspect, it seems to me that practical creativity will have a lot harder time really flourishing. The diversity of ideas and approaches that comes from welcoming and then chaotically blending cultures and outlooks from all over the world, is a major source of practical creativity.
The move from a manufacturing and service economy to a knowledge economy has become famous. The next step, I suggest, is going to be a gradual shift from a knowledge economy to a creativity economy. As knowledge work becomes commoditized, the really precious thing will be creativity work: but not abstract creativity-work detached from the everyday world ... practical-creativity work, aimed at moving the real world forward in unexpected directions. Because of this, I suspect the US will maintain its cultural and economic leadership role in the world for quite some time.
And we'd damn well better, because with all the debt we're racking up, we're basically placing a huge BET that we're going to dramatically increase our productivity via technological efficiency improvements of various sorts. It's a fairly large gamble, but calculated risks are part of the American way ... as recent events on Wall Street show, this approach definitely has its risks ... but my guess is that this gamble will ultimately pan out just fine.
Getting back to my futurist preoccupations: My best guess is that the bulk of the work of creating the Singularity is going to be centered in America. This work will surely be international -- my own current work on advanced AGI technology involves a team with members in South America, Europe, Australia, New Zealand and Asia as well as the US (no Antarcticans yet...). But there's a reason my company Novamente LLC is centered in the US and not these other countries, beyond historical happenstance ... the US is the place where businesses and nonprofit agencies are most willing to seriously consider the practical value of way-out-there technologies. So long as this doesn't change, the American era is going to keep on rolling ... at least that's my best guess at the moment ...
Monday, September 22, 2008
I spent a while this weekend thinking about what might be the right approach for testing the intelligence of early-stage AGI systems that are aimed at human-level, roughly human-like general intelligence (either as an end goal or an intermediate developmental milestone).
Some of my thoughts are summed up in an essay I posted at
I’ll quote the first few paragraphs here:
One of the many difficult issues arising in the course of research on human-level AGI is that of “evaluation and metrics” – i.e., AGI intelligence testing.
It’s not so hard to tell when you’ve achieved human-level AGI — though there is some subtlety here, which I’ll discuss below. However, assessing the quality of incremental progress toward human-level AGI is a much subtler matter. In this essay I’ll present some thoughts on this issue, culminating in a couple specific proposals:
1) Online School Tests, in which AGIs are tested via their ability to succeed in existing online educational fora
2) Of more immediate interest, a series of tests called the AGI Preschool Tests (AIP Tests, for short, pronounced “ape tests”), based on the notion of “multiple intelligences” and also on some novel ideas regarding learning-based intelligence testing.
The AIP Tests suggested here are specifically intended for AGI systems that control agents embodied in 3D worlds resembling the everyday human world, via either physical robots or virtually embodied agents. Very differently embodied AGI systems (e.g. systems to be initially taught purely via text without any simulated human-like or animal-like body) would potentially need qualitatively different testing methodologies.
Saturday, August 30, 2008
The basic issue is: what can you do to mitigate the problem of "goal drift", wherein an AGI system starts out with a certain top-level goal governing its behavior, but then gradually modifies its own code in various ways, and ultimately -- through inadvertent consequences of the code revisions -- winds up drifting into having different goals than it started with. I certainly didn't answer the question, but I came up with some new ways of thinking about the problem, and formalizing the problem, that I think might be interesting....
While the language of math is used in the paper, don't be fooled into thinking I've proved anything there ... the paper just contains speculative ideas without any real proof, just as surely as if they were formulated in words without any equations. I just find that math is sometimes the clearest way to say what I'm thinking, even if I haven't come close to proving the correctness of what I'm thinking yet...
An abstract of the speculative paper is:
in Self-Modifying Cognitive Systems
A new approach to thinking about the problem of “preservation of AI goal systems under repeated self-modification” (or, more compactly, “goal drift”) is presented, based on representing self-referential goals using hypersets and multi-objective optimization, and understanding self-modification of goals in terms of repeated iteration of mappings. The potential applicability of results from the theory of iterated random functions is discussed. Some heuristic conclusions are proposed regarding what kinds of concrete real-world objectives may best lend themselves to preservation under repeated self-modification. While the analysis presented is semi-rigorous at best, and highly preliminary, it does intuitively suggest that important humanly-desirable AI goals might plausibly be preserved under repeated self-modification. The practical severity of the problem of goal drift remains unresolved, but a set of conceptual and mathematical tools are proposed which may be useful for more thoroughly addressing the problem.
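The abstract's framing of goal drift as repeated iteration of random mappings lends itself to a toy illustration (my own construction, not anything from the paper): if each self-modification step applies a randomly chosen map that is contracting on average, then -- a basic result from the theory of iterated random functions -- the system's goal "forgets" its initial value and settles into a stationary distribution determined by the modification process itself. The goal model and the contraction/noise parameters below are entirely hypothetical:

```python
import random

def self_modify(goal, rng):
    """One random self-modification step, modeled (hypothetically) as a
    contracting affine map on a scalar goal parameter: the revision
    mostly preserves the goal (factor a < 1) but adds a small
    unintended perturbation b."""
    a = rng.uniform(0.7, 0.95)  # contraction factor, strictly below 1
    b = rng.gauss(0.0, 0.1)     # small inadvertent drift per revision
    return a * goal + b

def run(goal0, seed, steps=200):
    """Apply `steps` random self-modifications, with the random choices
    fixed by `seed` so different starting goals can share one history."""
    rng = random.Random(seed)
    g = goal0
    for _ in range(steps):
        g = self_modify(g, rng)
    return g

# Two wildly different initial goals, subjected to the SAME sequence of
# random modifications, end up essentially identical: the iterated maps
# wash out the starting point, and the long-run goal is governed by the
# modification process rather than by the original goal.
g_a = run(goal0=1.0, seed=42)
g_b = run(goal0=100.0, seed=42)
print(abs(g_a - g_b) < 1e-6)  # True
```

This is exactly the double-edged intuition in the abstract: on-average-contracting modification dynamics yield a predictable stationary regime (reassuring), but that regime reflects the structure of the modification process, not necessarily the originally intended goal (the worry).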