
Thursday, January 13, 2011

The Hard Takeoff Hypothesis


I was recently invited to submit a paper to a forthcoming academic edited volume on the Singularity. As a first step I had to submit an extended abstract, around 1000 words. Here is the abstract I submitted....


Basically, the paper will be a careful examination of the conditions under which a hard takeoff might occur, including an argument (though not a formal proof) as to why OpenCog may be capable of a hard takeoff if computer hardware is sufficiently powerful at the time when it achieves human-level intelligence.




The Hard Takeoff Hypothesis

Ben Goertzel


Vernor Vinge, Ray Kurzweil and others have hypothesized the future occurrence of a “technological Singularity” -- meaning, roughly speaking, an interval of time during which pragmatically-important, broad-based technological change occurs so fast that the individual human mind can no longer follow what’s happening even generally and qualitatively. Plotting curves of technological progress in various areas suggests that, if current trends continue, we will reach some sort of technological Singularity around 2040-2060.


Of course, this sort of extrapolation is by no means certain. Among many counterarguments, one might argue that the inertia of human systems will cause the rate of technological progress to flatten out at a certain point. No matter how fast new ideas are conceived, human socioeconomic systems may take a certain amount of time to incorporate them, because humans intrinsically operate on a certain time-scale. For this reason Max More has suggested that we might experience something more like a Surge than a Singularity – a more gradual, though still amazing and ultimately humanity-transcending, advent of advanced technologies.


On the other hand, if a point is reached at which most humanly-relevant tasks (practical as well as scientific and technological) are carried out by advanced AI systems, then from that point on the “human inertia factor” would seem not to apply anymore. There are many uncertainties, but at very least, I believe the notion of a technological Singularity driven by Artificial General Intelligences (AGIs) discovering and then deploying new technology and science is a plausible and feasible one.


Within this vision of the Singularity, an important question arises regarding the capability for self-improvement on the part of the AGI systems driving technological development. It’s possible that human beings could architect a specific, stable AGI system with moderately greater-than-human intelligence, which would then develop technologies at an extremely rapid rate, so fast as to appear like “essentially infinitely fast technological progress” to the human mind. However, another alternative is that humans begin by architecting roughly human-level AGI systems that are capable but not astoundingly so – and then these AGI systems improve themselves, or create new and improved AGI systems, and so on and so forth through many iterations. In this case, one has the question of how rapidly this self-improvement proceeds.


In this context, some futurist thinkers have found it useful to introduce the heuristic distinction between a “hard takeoff” and a “soft takeoff.” A hard takeoff scenario is one where an AGI system increases its own intelligence so rapidly that, within a brief period of months or weeks or maybe even hours, a system with roughly human-level intelligence suddenly becomes one with radically superhuman general intelligence. A soft takeoff scenario is one where an AGI system gradually increases its own intelligence step-by-step over years or decades, i.e. slowly enough that humans have the chance to monitor each step of the way and adjust the AGI system as they deem necessary. Either a hard or soft takeoff fits I.J. Good’s notion of an “intelligence explosion” as a path to Singularity.


What I call the “Hard Takeoff Hypothesis” is the hypothesis that a hard takeoff will occur, and will be a major driving force behind a technological Singularity. Thus the Hard Takeoff Hypothesis is a special case of the Singularity Hypothesis.


It’s important to note that the distinction between a hard and soft takeoff is a human distinction rather than a purely technological distinction. The distinction has to do with how the rate of intelligence increase of self-improving AGI systems compares to the rate of processing of human minds and societies. However, this sort of human distinction may be very important where the Singularity is concerned, because after all the Singularity, if it occurs, will be a phenomenon of human society, not one of technology alone.



The main contribution of this paper will be to outline some fairly specific sufficient conditions for an AGI system to undertake a hard takeoff. The first condition explored is that the AGI system must lie in a connected region of “AGI system space” (which we may more informally call “mindspace”) that, roughly speaking,


  • includes AGI systems with general intelligence vastly greater than that of humans
  • has the “smoothness” property that similarly architected systems tend to have similar general intelligence levels.


If this condition holds, then it follows that one can initiate a takeoff by choosing a single AGI system in the given mindspace region, and letting it spend part of its time figuring out how to vary itself slightly to improve its general intelligence. A series of these incremental improvements will then lead to greater and greater general intelligence.
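To make this iterative-improvement picture concrete, here is a minimal toy sketch in Python. It only illustrates the loop structure under the assumptions above: the `intelligence` and `vary_slightly` functions are hypothetical stand-ins (intelligence is modeled as a smooth function of a few numeric "architecture parameters"), not features of any actual AGI system.

```python
import random

def intelligence(system):
    """Hypothetical stand-in for a general-intelligence measure.

    'system' is just a vector of architectural parameters, and intelligence
    is modeled as a smooth function of those parameters -- i.e. the
    'smoothness' property assumed above.
    """
    return -sum((x - 10.0) ** 2 for x in system)

def vary_slightly(system, step=0.1):
    """Produce a nearby point in 'mindspace' by perturbing one parameter."""
    variant = list(system)
    i = random.randrange(len(variant))
    variant[i] += random.uniform(-step, step)
    return variant

def takeoff(initial_system, iterations=10000):
    """Iterated self-improvement: keep any slight variation that helps."""
    current = initial_system
    best = intelligence(current)
    for _ in range(iterations):
        candidate = vary_slightly(current)
        score = intelligence(candidate)
        if score > best:  # accept only improvements
            current, best = candidate, score
    return current, best

if __name__ == "__main__":
    system, level = takeoff([0.0, 0.0, 0.0])
    print("final intelligence level:", level)
```

The point is simply that, in a smooth region of mindspace, this kind of local search tends to keep finding improvements; the open question is how long each iteration takes on the human time-scale.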


The hardness versus softness of the takeoff then has to do with the amount of time needed to carry out this process of “exploring slight variations.” This leads to the introduction of a second condition. If one’s region of mindspace obeys the first condition laid out above, and also consists of AGI systems for which adding more hardware tends to accelerate system speed significantly, without impairing intelligence, then it follows that one can make the takeoff hard by simply adding more hardware. In this case, the hard vs. soft nature of a takeoff depends largely on the cost of adding new computer hardware at the time when an appropriately architected AGI system is created.


Roughly speaking, if AGI architecture advances fast enough relative to computer hardware, we are more likely to have a soft takeoff, because the learning involved in progressive self-improvement may take a long while. But if computer hardware advances quickly enough relative to AGI architecture, then we are more likely to have a hard takeoff, via deploying AGI architectures on hardware sufficiently powerful to enable self-improvement that is extremely rapid on the human time-scale.
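One crude way to express this trade-off in symbols (the quantities here are introduced purely for illustration): if each round of self-improvement requires roughly C operations of "thinking," the system needs about n rounds to reach radically superhuman intelligence, and the available hardware supplies S operations per second, then the wall-clock duration of the takeoff is roughly

\[
T_{\text{takeoff}} \approx \frac{n\,C}{S}.
\]

For a fixed architecture (fixed n and C), cheaper and faster hardware increases S and pushes the takeoff duration down toward time-scales (weeks, days, hours) that would count as hard.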


Of course, we must consider the possibility that the AGI itself develops new varieties of computing hardware. But this possibility doesn’t really alter the discussion so much – even so, we have to ask whether the new hardware it creates in its “youth” will be sufficiently powerful to enable hard takeoff, or whether there will be a slower “virtuous cycle” of feedback between its intelligence improvements and its hardware improvements.


Finally, to make these considerations more concrete, the final section of the paper will give some qualitative arguments that the mindspace consisting of instances of the OpenCog AGI architecture (which my colleagues and I have been developing, aiming toward the ultimate goal of AGI at the human level and beyond) very likely possesses the properties needed to enable a hard takeoff. If so, this is theoretically important, as an “existence argument” that hard-takeoff-capable AGI architectures do exist – i.e., as an argument that the Hard Takeoff Hypothesis is a plausible one.


Wednesday, December 29, 2010

Will Decreasing Scarcity Allow us to Approach an Optimal (Meta-)Society?

When chatting with a friend about various government systems during a long car drive the other day (returning from New York where we were hit by 2 feet of snow, to relatively dry and sunny DC), it occurred to me that one could perhaps prove something about the OPTIMAL government system, if one were willing to make some (not necessarily realistic) assumptions about resource abundance.

This led to an interesting train of thought -- that maybe, as technology reduces scarcity, society will gradually approach optimality in certain senses...

The crux of my train of thought was:

  • Marcus Hutter proved that the AIXI algorithm is an optimal approach to intelligence, given the (unrealistic) assumption of massive computational resources.
  • Similarly, I think one could prove something about the optimal approach to society and government, given the (unrealistic) assumptions of massive natural resources and a massive number of people.

I won't take time to try to prove this formally just now, but in this blog post I'll sketch out the basic idea.... I'll describe what I call the meta-society, explain the sense in which I think it's optimal, and finally why I think it might get more and more closely approximated as the future unfolds...

A Provably Optimal Intelligence

As a preliminary, first I'll review some of Hutter's relevant ideas on AI.

In Marcus Hutter's excellent (though quite technical) book Universal Artificial Intelligence, he presents a theory of "how to build an optimally intelligent AI, given unrealistically massive computational resources."

Hutter's algorithm isn't terribly novel -- I discussed something similar in my 1993 book The Structure of Intelligence (as a side point to the main ideas of that book), and doubtless Ray Solomonoff had something similar in mind when he came up with Solomonoff induction back in the 1960s. The basic idea is: Given any computable goal, and infinite computing power, you can work toward the goal very intelligently by (my wording, not a quote) ....


at each time step, searching the space of all programs to find those programs P that (based on your historical knowledge of the world and the goal) would (if you used P to control your behaviors) give you the highest probability of achieving the goal. Then, take the shortest of all such optimal programs P and actually use it to determine your next action.


But what Hutter uniquely did was to prove that a formal version of this algorithm (which he calls AIXI) is, in a mathematical sense, maximally intelligent.

If you have only massive (rather than infinite) computational resources, then a variant (AIXItl) exists, the basic idea of which is: instead of searching the space of all programs, only look at those programs with length less than L and runtime less than T.

It's a nice approach if you have the resources to pay for it. It's sort of a meta-AI-design rather than an AI design. It just says: If you have enough resources, you can brute-force search the space of all possible ways of conducting yourself, and choose the simplest of the best ones and then use it to conduct yourself. Then you can repeat the search after each action that you take.
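As a very rough illustrative sketch of this brute-force idea in Python (this is not Hutter's formalism: the action alphabet, the toy scoring function, and the simplification of "programs" to fixed action strings are all invented here for illustration, and the runtime bound T of AIXItl is omitted):

```python
import itertools

ACTIONS = ["a", "b"]   # toy action alphabet
MAX_LENGTH = 8         # the length bound L: only consider policies up to this size

def enumerate_policies(max_length):
    """Enumerate all candidate 'programs' up to a given length.

    Here a 'program' is just a fixed action string -- a drastic
    simplification standing in for arbitrary computable policies.
    """
    for length in range(1, max_length + 1):
        for policy in itertools.product(ACTIONS, repeat=length):
            yield policy

def expected_goal_achievement(policy, history):
    """Hypothetical stand-in for scoring a policy against a world model.

    A real AIXI-style agent would weight all environments consistent with
    its history; here we just reward alternating actions as a toy goal.
    """
    return sum(1.0 for prev, nxt in zip(policy, policy[1:]) if prev != nxt)

def choose_next_action(history):
    """Score every bounded policy, keep the best, break ties by shortness."""
    best = max(
        enumerate_policies(MAX_LENGTH),
        key=lambda p: (expected_goal_achievement(p, history), -len(p)),
    )
    return best[0]  # execute only the first action, then re-plan next step

if __name__ == "__main__":
    print("next action:", choose_next_action(history=[]))
```

The structure mirrors the description above: exhaustively search a bounded program space, score each program against the goal, prefer the shortest of the best, act, and then repeat the whole search at the next time step.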

One might argue that all this bears no resemblance to anything that any actual real-world mind would do. We have neither infinite nor massive resources, so we have to actually follow some specific intelligent plans and algorithms; we can't just follow a meta-plan of searching the space of all possible plans at each time-step and then probabilistically assessing the quality of each possibility.

On the other hand, one could look at Hutter's Universal AI as a kind of ideal which real-world minds may approach more and more closely, as they get more and more resources to apply to their intelligence.

That is: If your resources are scarce, you need to rely on specialized techniques. But the more resources you have, the more you can rely on search through all the possibilities, reducing the chance that your biases cause you to miss the best solution.

(I'm not sure this is the best way to think about AIXI ... it's certainly not the only way ... but it's a suggestive way...)

Of course there are limitations to Hutter's work and the underlying way of conceptualizing intelligence. The model of minds as systems for achieving specific goals has its limitations, which I've explained how to circumvent in prior publications. But for now we're using AIXI only as a broad source of inspiration anyway, so there's no need to enter into such details....

19-Year-Old Ben Goertzel's Design for a Better Society

Now, to veer off in a somewhat different direction....

Back when I was 19 and a math grad student at NYU, I wrote (in longhand, this was before computers were so commonly used for word processing) a brief manifesto presenting a design for a better society. Among other names (many of which I can't remember) I called this design the Meta-society. I think the title of the manifesto was "The Play of Power and the Power of Play."

(At that time in my life, I was heavily influenced by various strains of Marxism and anarchism, and deeply interested in social theory and social change. These were after all major themes of my childhood environment -- my dad being a sociology professor, and my mom the executive of a social work program. I loved the Marxist idea of the mind and society improving themselves together, in a carefully coupled way -- so that perhaps the state and the self could wither away at the same time, yielding a condition of wonderful individual and social purity. Of course I realized that existing Communist systems fell very far short of this ideal though, and eventually I got pessimistic about there ever being a great society composed of and operated by humans in their current form. Rather than improving society, I decided, it made more sense to focus my time on improving humanity ... leading me to a greater focus on transhumanism, AI and related ideas.)

The basic idea for my meta-society was a simple one, and probably not that original: Just divide society into a large number of fairly small groups, and let each small group do whatever the hell it wanted on some plot of land. If one of these "city-states" got too small due to emigration it could lose its land and have it ceded to some other new group.

If some group of people get together and want to form their own city-state, then they get put in a queue to get some free land for their city-state, when the land becomes available. To avoid issues with unfairness or corruption in the allocation of land to city-states, a computer algorithm could be used to mediate the process.
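As an illustration of the kind of mediating algorithm this could be (the specific rules and numbers below, such as the minimum group size, are invented here for illustration rather than anything I worked out at 19), a minimal version is just a first-come, first-served queue over vacant plots:

```python
from collections import deque

MIN_GROUP_SIZE = 50  # hypothetical threshold below which a city-state loses its land

class LandAllocator:
    """First-come, first-served allocation of vacant plots to waiting groups."""

    def __init__(self, vacant_plots):
        self.vacant_plots = deque(vacant_plots)
        self.waiting_groups = deque()
        self.holdings = {}  # plot -> group currently holding it

    def request_plot(self, group_name):
        """A newly formed group joins the queue for land."""
        self.waiting_groups.append(group_name)
        self._allocate()

    def report_population(self, plot, population):
        """If a city-state shrinks below the threshold, its plot is reclaimed."""
        if population < MIN_GROUP_SIZE and plot in self.holdings:
            del self.holdings[plot]
            self.vacant_plots.append(plot)
            self._allocate()

    def _allocate(self):
        """Hand out vacant plots to waiting groups, strictly in queue order."""
        while self.vacant_plots and self.waiting_groups:
            plot = self.vacant_plots.popleft()
            group = self.waiting_groups.popleft()
            self.holdings[plot] = group

if __name__ == "__main__":
    allocator = LandAllocator(["plot-1", "plot-2"])
    allocator.request_plot("sea-nomads")
    allocator.request_plot("math-monks")
    allocator.request_plot("free-jazz-collective")  # waits until land frees up
    allocator.report_population("plot-1", 12)       # sea-nomads shrink; plot reclaimed
    print(allocator.holdings)
```

The virtue of something this mechanical is exactly that it leaves no room for favoritism in who gets land next.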

There would have to be some basic ground-rules, such as: no imprisoning people in your city-state, no invading or robbing other city-states, etc. Supporting a police force to enforce the ground-rules would require a central government and some low level of taxation, which could, however, sometimes be collected in the form of goods rather than money (the central gov't could then convert the goods into money). Environmental protection poses some difficulties in this sort of system, and would have to be centrally policed as well.

This meta-society system my 19-year-old self conceived (and I don't claim any great originality for it, though I don't currently know of anything precisely like it in the literature) has something in common with Libertarian philosophy, but it's not exactly the same, because at the top there's a government that enforces a sort of "equal rights for city-state formation" for all.

One concern I always had with the meta-society was: What do you do with orphans or others who get cast out of their city-states? One possibility is for the central government to operate some city-states composed of random people who have nowhere else to go (or nowhere else they want to go).

Another concern is what to do about city-states that oppress and psychologically brainwash their inhabitants. But I didn't really see any solution to that. One person's education is another person's brainwashing, after all. From a modern American view it's tempting to say that all city-states should allow their citizens free access to media so they can find out about other perspectives, but ultimately I decided this would be too much of an imposition on the freedom of the city-states. Letting citizens leave their city-state if they wish ultimately provides a way for any world citizen to find out what's what, although there are various strange cases to consider, such as a city-state that allows its citizens no information about the outside world, and also removes the citizenship of any citizen who goes outside its borders!

I thought the meta-society was a cool idea, and worked out a lot of details -- but ultimately I had no idea how to get it implemented, and not much desire to spend my life proselytizing for an eccentric political philosophy or government system, so I set the idea aside and focused my time on math, physics, AI and such.

As a major SF fan, I did realize that such a meta-society of city-states might be more easily achievable in the future, once space colonies were commonplace. If it were cheap to put up a small space colony for a few hundred or thousand or ten thousand people, then this could lead to a flowering of city-states of exactly the sort I was envisioning...

When I became aware of Patri Friedman's Seasteading movement, I immediately sensed a very similar line of thinking. Their mission is "To further the establishment and growth of permanent, autonomous ocean communities, enabling innovation with new political and social systems." Patri wants to make a meta-society and meta-economy on the high seas. And why not?



Design for an Optimal Society?

The new thought I had while driving the other day is: Maybe you could put my old idealistic meta-society-design together with the AIXI idea somehow, and come up with a design for a "society optimal under assumption of massive resources."

Suppose one assumes there's

  • a lot of great land (or sea + seasteading tech, or space + space colonization tech, whatever), so that fighting over land is irrelevant
  • a lot of people
  • a lot of natural resources, so that one city-state polluting another one's natural resources isn't an issue

Then it seems one could argue that my meta-society is near-optimal, under these conditions.

The basic proof would be: Suppose there were some social order X better than the meta-society. Then people could realize that X is better, and could simply design their city-states in such a way as to produce X.

For instance, if US-style capitalist democracy is better than the meta-society, and people realize it, then people can just construct their city-states to operate in the manner of US-style capitalist democracy (this would require close cooperation of multiple city-states, but that's quite feasible within the meta-society framework).

So, one could argue, any other social order can only be SLIGHTLY better than the meta-society... because if there's something significantly better, then after a little while the meta-society can come to emulate it closely.

So, under assumptions of sufficiently generous resources, the meta-society is about as good as anything.
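One way to state the claim slightly more precisely (the utility function U and the overhead term ε are introduced purely for illustration):

\[
U(\text{meta-society}) \;\geq\; \sup_{X} U(X) \;-\; \varepsilon,
\]

where U(X) denotes the long-run value of a social order X to its members, and ε is the (assumed small) overhead of recognizing a better order X and reconfiguring the city-states to emulate it.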

Now there are certainly plenty of loopholes to be closed in turning this heuristic argument into a formal proof. But I hope the basic idea is clear.

As with AIXI, one can certainly question the relevance of this sort of design, since resource scarcity is a major fact of modern life. But recall that I originally started thinking about meta-societies outside the "unrealistically abundant resources" context.

Finally, you'll note that for simplicity, I have phrased the above discussion in terms of "people." But of course, the same sort of thinking applies for any kind of intelligent agent. The main assumption in this case is that the agents involved either have roughly equal power and intelligence, or else that if there are super-powerful agents involved, they have the will to obey the central government.

Can We Approach the Meta-Society as Technology Advances?


More and more resources are becoming available to humanity as technology advances. Seasteading and space colonization and so forth decrease the scarcity of available "land" for human habitation. Mind uploading would do so more dramatically. Molecular nanotech (let alone femtotech and so forth) may dramatically reduce material scarcity, at least on the scale interesting to humans.

So, it seems the conditions for the meta-society may be more and more closely met, as the next decades and centuries unfold.

Of course, the meta-society will remain an idealization, never precisely achievable in practice. But it may be we can approach it closer and closer as technology improves.

Marxism had the notion of society gradually becoming more and more pure, progressively approaching Perfect Communism. What I'm suggesting here is similar in form but different in content: society gradually becoming more and more like the meta-society, as scarcity of various sorts becomes less and less of an issue.

As I write about this now, it also occurs to me that this is a particularly American vision. America, in a sense, is a sort of meta-society -- the central government is relatively weak (compared to other First World countries) and there are many different subcultures, some operating with various sorts of autonomy (though also a lot of interconnectedness). In this sense, it seems I'm implicitly suggesting that America is a better model for the future than other existing nations. How very American of me!

If superhuman AI comes about (as I think it will), then the above arguments make sense only if the superhuman AI chooses to respect the meta-society social structure. The possibility even exists that a benevolent superhuman AI could itself serve as the central government of a meta-society.

And so it goes....