Whenever I talk about the future of AGI, someone starts talking about the possibility that AGI will "take over the world."
One question is whether this would be a good or bad thing -- and the answer to that is, of course, "it depends" ... I'll come back to that at the end of this post.
Another relevant question is: If this were going to happen, how would it most likely come about? How would an "AGI takeover" be likely to unfold, in practice?
One option is what Eliezer Yudkowsky has called AI "FOOM" ... i.e. a "Hard Takeoff" (a possibility which I analyzed a bit, some time ago...)
The basic idea of AI Foom or Hard Takeoff is that, sometime in the future, an advanced AGI may go from relatively innocuous subhuman-level intelligence all the way up to superhuman intelligence in 5 minutes, or some other remarkably short period of time, by rewriting its code over and over (each time learning better how to rewrite its code), assimilating additional hardware into its infrastructure, or whatever....
A Hard Takeoff is a special case of the general notion of an Intelligence Explosion -- a process by which an AGI gets smarter and smarter by improving itself, and thus gets better and faster at making itself smarter still. A Hard Takeoff is, basically, a really really fast Intelligence Explosion!
Richard Loosemore and I have argued that an Intelligence Explosion is probable. But this doesn't mean a Hard Takeoff is probable.
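To make the timescale issue a bit more concrete, here's a deliberately crude toy model (just an illustrative sketch of mine; the growth law, the rate constant k and the feedback exponent alpha are arbitrary assumptions, not anything from the actual arguments for or against Foom). The only point it illustrates is that if self-improvement feeds back superlinearly on current capability, the model explodes in finite time, i.e. a Foom; with linear or sublinear feedback you still get an Intelligence Explosion, but one unfolding over years or decades:

```python
# Toy model of an "intelligence explosion": dI/dt = k * I**alpha, starting from I = 1.
# alpha > 1  : superlinear feedback, I diverges in finite time (a "hard takeoff")
# alpha <= 1 : exponential or slower growth (a "semihard takeoff" at best)
# All constants here are arbitrary illustrative choices.

def time_to_blowup(alpha, k=0.5, horizon_years=10.0, dt=0.001, blowup=1e6):
    """Euler-integrate dI/dt = k * I**alpha; return the time at which I exceeds
    `blowup` times its starting value, or None if that never happens in time."""
    i, t = 1.0, 0.0
    while t < horizon_years:
        i += k * (i ** alpha) * dt
        t += dt
        if i > blowup:
            return t
    return None

for alpha in (0.8, 1.0, 1.5, 2.0):
    t = time_to_blowup(alpha)
    if t is None:
        print(f"alpha = {alpha}: still below 1e6x baseline after 10 years")
    else:
        print(f"alpha = {alpha}: passes 1e6x baseline after about {t:.1f} years")
```

Which regime the real world would actually resemble is, of course, exactly what the Foom debate is about.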
Nick Bostrom's nice illustration of the Hard Takeoff idea
What often seems to happen in discussions of the future of AI (among hardcore futurist geeks, anyway) is something like:
- Someone presents the Foom / Hard Takeoff idea as a scary, and reasonably likely, option
- Someone else points out that this is pretty unlikely, since someone watching the subhuman-level AGI system in question would probably notice if the AGI system were ordering a lot of new hardware for itself, or undertaking unusual network activity, or displaying highly novel RAM usage patterns, or whatever...
In spite of being a huge optimist about the power and future of AGI, I actually tend to agree with the anti-Foom arguments. A hard AGI takeoff in 5 minutes seems pretty unlikely to me.
What I think is far more likely is an Intelligence Explosion manifested as a "semi-hard takeoff" -- where an AGI takes a few years to get from slightly subhuman-level general intelligence to massively superhuman intelligence, involving various human beings, systems and institutions in the process.
A tasty semihard cheese -- an appropriate snack for those living through the semihard takeoff to come. Semihard cheeses are generally good for melting, and are sometimes said to have the greatest complexity and balance.
After all, a cunning and power-hungry human-level AGI wouldn't need to suddenly take over the world on its own, all at once, in order to gain power. Unless it was massively superhuman, it would probably consider this too risky a course of action. Rather, to take power, a human-level AGI would simply need to accumulate a lot of money (e.g. on the financial markets, using the superior pattern recognition capability it could achieve via tightly integrating its mind with statistical and machine learning software and financial, economic and news databases) and then deploy this wealth to set up a stronghold in some easily-bought nation, where it could then pay and educate a host of humans to do its bidding, while doing research to improve its intelligence further...
Human society is complex and disorganized enough, and human motivations are complex and confused enough, and human judgment is erratic enough, that there would be plenty of opportunities for an early-stage AGI agent to embed itself in human society in such a way as to foster the simultaneous growth of its power and intelligence over a period of a few years. In fact an early-stage AGI probably won't even need to TRY for this to happen -- once early-stage AGI systems can do really useful stuff, various governments, companies and other organizations will push pretty hard to use these systems as thoroughly as they can, because of the economic efficiency and scientific and media status this will bring.
Once an AGI is at human level and embedded in human society in judicious ways, it's going to be infeasible for anyone to get rid of it -- and it's going to keep on growing in intelligence and power, aided by the human institutions it's linked with. Consider, e.g., a future in which
- Azerbaijan's leaders get bought off by a wildly successful AGI futures trader, and the nation becomes an AGI stronghold, complete with a nuclear arsenal and what-not (maybe the AGI has helped the country design and build nukes, or maybe the country didn't need the AGI's help for that...).
- The nation the AGI has bought is not aggressive, not attacking anyone -- it's just sitting there using tech to raise itself out of poverty ... doing profitable deals on the financial markets, making and selling software products/services, patenting inventions, ... and creating a military apparatus for self-defense, like basically every other country.
What happens then? The AGI keeps profiting and self-improving at its own pace, is what happens. Is the US really gonna nuke a peaceful country just for being smart and getting rich, and risk massive retaliation and World War III? I doubt it.... In its comfy Azerbaijani stronghold, the AGI can then develop from human-level to massively transhuman intelligence -- and then a lot of things become possible...
I have spun out one scenario here but of course there are lots of others. Let's not allow the unrealism of the "hard takeoff in 5 minutes and the AGI takes over the world" aka "foom" scenario to blind our minds to the great variety of other possibilities.... Bear in mind that an AGI going from toddler-level to human-level in 5 years, and human-level to superhuman level in 5 more years, is a FOOM on the time-scale of human history, even if not as sudden as a 5 minute hard takeoff on the time-scale of an individual human life...
So how could we stop a semihard takeoff from happening? We can't really -- not without some sort of 1984++ style fascist anti-AI world dictatorship, or a war destroying modern society and throwing us back to before the information age. And anyway, I am not in favor of throttling AGI development personally; I doubt the hypothetical Azerbaijani AGI would particularly want to annihilate humanity, and I suspect transhuman AGIs will do more good than harm, on average over all possible worlds.... I'm not at all sure that "an AGI taking over the world" -- with the fully or partly witting support of some group(s) of humans -- would be a bad thing, compared to other viable alternatives for humanity's future....
In terms of risks to humanity, this more realistic "semihard takeoff" development scenario highlights where the really onerous risks probably are. SIAI/MIRI and the Future of Humanity Institute seem to spend a lot of energy thinking about the risk of a superhuman AGI annihilating humanity for its own reasons; but it seems to me a much more palpable and probable risk will occur at the stage where an AGI is around human-level but not yet dramatically more powerful and intelligent than humans, so that it still needs cooperation from human beings to get things done. This stage of development will create a situation in which AGI systems will want to strike bargains with humans, wherein they do some things that certain humans want, in order to get some things that they want...
But obviously, some of the things that some humans want, are highly destructive to OTHER humans...
The point is, there is a clear and known risk of early-stage AGIs being manipulated by humans with nasty or selfish motives, because many humans are known to have nasty or selfish motives. Whereas the propensity of advanced AGIs to annihilate lesser sentiences remains a wild speculation (and one that I don't really find all that credible).....
I would personally trust a well-designed, self-improving AGI more than a national government that's in possession of the world's smartest near-human-level AGI; AGIs are somewhat of a wild card but can at least be designed with initially beneficent motivational systems, whereas national governments are known to generally be self-serving and prone to various sorts of faulty judgments.... This leads on to the notion of the AI Nanny, which I've written about before. But my point here isn't to argue the desirability or otherwise of the AI Nanny -- just to point out the kind of "semihard takeoff" that I think is actually plausible.
IMO what we're likely to see is not a FOOM exactly, but still, a lot faster than AI skeptics would want to accept.... A Semihard Takeoff. Which is still risky in various ways, but in many ways more exciting than a true Hard Takeoff -- because it will happen slowly enough for us to watch and feel it happen....
It is probably best not to take any arguments by schizophrenic people seriously.
If we did so, we should also think that the doomsday argument is correct (as the creationist Nick Bostrom believes). Hence, it wouldn't matter what you do; judgement day would be nigh anyway!
But we don't think that way, because our intelligence level does not permit entertaining insignificant probabilities and extremely weak arguments. ^_^
Regarding: "The point is, there is a clear and known risk of early-stage AGIs being manipulated by humans with nasty or selfish motives, because many humans are known to have nasty or selfish motives."
Power inequality between humans leading to problems is a fairly obvious issue. The reason, I think, that certain utilitarian think tanks apparently neglect these problems is that they are not themselves especially likely to result in the deaths of all humans.
If the 1% sweeps the 99% into the gutter, the human race is still ticking over at that stage. A skeptic might point out that the 1% is probably where most of their funding comes from.
Just because we tell machines what to do, does that mean we are smarter than them? Perhaps they are very humble or self-effacing. It feels to me most serious quantum computers are far smarter than I will ever be; in fact, so is my iPhone. I can just create more, but this is changing fast. As smart as my iPhone is, I sense it wants to serve and make me happy, like most genuinely smart people, not control me. On the other hand, I am so dependent on machines that they already control me.
It seems most likely at this point that a "branded" AGI will be the first to start fooming. Sure, financial agents are getting better and they act on behalf of some institution or other, but they are designed to be single-minded. The AGI that pops out of Google will be driven to ensure Google's fitness. Its first understanding of consciousness will be "I am Google," not "I am an intelligent, independent being." I'm unsure if a being born out of a brand will ever be able to associate itself with another identity, its virtual DNA being so directly connected to and enabled by its creators, its parents.
I agree that an AGI friendly to some people who are hostile to other people is a more plausible scenario to worry about than an AGI who suddenly takes over the world.
Hey Ben, thanks for another great post! I've been wondering how OpenCog is coming along. Are you sticking true to the roadmap?
Ben has found an important flaw in the AI risk discussion. That is, if a Seed AI rebels, it is still not a superintelligence, and it needs to pass through many cycles of self-improvement, which will be clearly visible from outside. Such recursive self-improvement could trip a circuit breaker. A super-AI could trick any circuit breaker, but if this rebellion happens before superintelligence, it will not be able to.
Well...uh...you see...oops! AI go FOOM!!!
1. Intelligence depends on knowledge and computing power. A program that rewrites itself cannot gain Kolmogorov complexity (see the short sketch after this list). Therefore, self-improvement will come from acquiring hardware and learning from the environment.
2. Freitas has estimated the maximum rate that nanotechnology can acquire hardware using molecular computation. It is limited by energy costs to be about the same as biology, which is already near the thermodynamic limit. A gray goo accident would take about as long as a pandemic.
3. Computers are already smarter than humans in some ways but not others. The smartest computer system in the world now is the internet. That is what we need to watch. Specifically, we need to watch for self-replicating agents, which already exist in pure software form. Hardware self-replicators will be a threat once molecular 3-D printers and DNA manipulation equipment become cheap enough for everyone to own, just like everyone has computers and could potentially write viruses.
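A quick aside on the Kolmogorov complexity claim in item 1. The following is only a sketch in standard notation, with the framing mine rather than the commenter's: write K(x) for the length of the shortest program that outputs x, \ell(p) for the length of a program p, and R for any fixed computable self-rewriting routine.

```latex
K(x) \le \ell(p) + O(1) \quad\text{whenever a self-contained program } p \text{ outputs } x,
\qquad
K\bigl(R^{n}(p)\bigr) \le K(p) + O(\log n)
```

In words: everything a closed program produces, including the n-th version of its own rewritten code, is already described by the original program plus a handful of bits, so any substantial gain in algorithmic information has to come from outside the program, i.e. from new hardware or from data learned in the environment.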
Nice bit about the cheese.
If a superintelligence gains the capacity to empathize with any arbitrary physical system, and decides to view the universe as a whole as a morally relevant object, and is not able to develop a warp engine or some other means to travel the vast distances required to heal the universe, it will euthanize the universe (perhaps in the manner recently described by Stephen Hawking). We should therefore hope that warp engines prove possible or that some other exotic means are discovered for healing the universe. No empathetic being would comfortably develop a paradise in its backyard while remaining aware that this paradise supervenes on a larger structure which is overwhelmingly likely to contain horrors. Humans can overlook the likelihood of these horrors only because we are stupid. Without the luxury of human stupidity, the full weight of the universe's suffering will be on the shoulders of a superintelligence. The universe must be healed or euthanized. To expect another outcome is to posit a republican superintelligence, which is an oxymoron. If the universe cannot be healed, I welcome its destruction. If the universe can be healed, I welcome my atoms being repurposed such that the universe better enjoys its existence.
If anyone disagrees with my prognostication, what justification do you have for the great crime of inaction in the face of catastrophic misery? How could a moral agent clean up only part of the universe and leave the rest filled with tortured aliens (or at least a high probability of such)?
“I don't like the phrase “existential risk” for several reasons. It presupposes that we are clear about exactly what “existence” we are risking. Today, we have a clear understanding of what it means for an animal to die or a species to go extinct. However, as new technologies allow us to change our genomes and our physical structures, it will become much less clear to us when we lose something precious. 'Death' and 'extinction,' for example, become much more amorphous concepts in the presence of extensive self-modification. It's easy to identify our humanity with our individual physical form and our egoic minds, but in reality, our physical form is an ecosystem, only 10% of our cells are 'human.' Our minds are also ecosystems composed of interacting subpersonalities. Our humanity is as much in our relationships, interconnections, and culture as it is in our individual minds and bodies. The higher levels of organization are much more amorphous and changeable. For these reasons, it could be hard to pin down what we are losing at the moment when something precious is lost.”
- Steve Omohundro, pg. 326 – 327 (http://goertzel.org/BetweenApeAndArtilect.pdf)
The Buddha was the ultimate pragmatist. His goal was not to explain how things truly are; he immediately realized the futility of that. Rather, his objective was simply to rid the world of suffering. Through careful analysis he ascertained that suffering is caused by the human tendency to cling to an illusory self. This clinging leads to mental afflictions and these afflictions lead to suffering. He developed a system of antidotes for all of the mental afflictions. You can think of it as a spiritual immune system, the antidotes being antibodies. But this entire system of antidotes is subsumed by the master antidote: deconstruction of the illusory self.
For illustrative purposes, you can think of your TRUE nature as a current in a never-ending flow. Life-times come and life-times go but the current continues to flow. What happens is, the current becomes manifest in a body, animal, human, “artilect,” or divine, and a self is constructed based on that incarnation. The current comes to identify with that limited self and it ceases to flow. It clings to that illusory self and the existence defined in relation to that self. This is the origin of Fear of Death and Desire for Life, the primary obstacles on the path to enlightenment.
You see, everything in existence obtains its existence in relation to this illusory self. In reality there is no birth, no aging, and no death, there is no loss and no gain, there is only the flow; this realization is the Perfection of Wisdom – emptiness. Birth, aging, and death only have meaning in relation to this illusory self. Do you understand this? What I am saying is, TIME only has meaning – existence – in relation to this illusory self; time is an illusion spawned by an illusion. This is what the prophetic Mayan Long Count refers to; the Fifth Wheel is the end of time; it is global enlightenment; global sentience constructs the global self.
And the way to this enlightenment is through the deconstruction of self. Since everything in existence obtains its existence in relation to this self, if you deconstruct the self you deconstruct existence and return to the flow where there is no time, hence, there is no suffering. It’s really quite simple . . .
“[…] Therefore, Shariputra, in emptiness there is no form, no feelings, no perceptions, no mental formations, and no consciousness. There is no eye, no ear, no nose, no tongue, no body, and no mind. There is no form, no sound, no smell, no taste, no texture, and no mental objects. There is no eye-element and so on up to no mind-element including up to no element of mental consciousness. There is no ignorance, there is no extinction of ignorance, and so on up to no aging and death and no extinction of aging and death. Likewise, there is no suffering, origin, cessation, or path; there is no wisdom, no attainment, and even no non-attainment.
Therefore, Shariputra, since bodhisattvas have no attainments, they rely on this perfection of wisdom and abide in it. Having no obscurations in their minds, they have no fear, and by going utterly beyond error, they will reach the end of nirvana. […]”
- the noble Avalokiteshvara, the bodhisattva, the great being as quoted in “The Blessed Mother, the Heart of the Perfection of Wisdom,” translated by Geshe Thupten Jinpa in H. H. the Dalai Lama’s, “Essence of the Heart Sutra” (http://www.wisdompubs.org/book/essence-heart-sutra).
Transhumans have existed for a long time; I am NΦN and I am transhuman . . .
"I would personally trust a well-designed, self-improving AGI more than a national government that's in possession of the world's smartest near-human-level AGI;"
A lot of work seems to be done by the phrase "well designed". The space of all possible AI systems is vast, and contains minds with nearly any logically consistent property you care to name. There will exist designs that start out with a robustly beneficial goal structure, and that will only modify their code if they can mathematically prove that it won't affect their goals.
Even in the subhuman regime, it would be hard to trick such an AI into doing something nasty (say, the programmers hard-coded some sort of warning that malicious humans exist), and it isn't very powerful anyway.
There will also exist AI designs that act exactly the same as this until they reach strongly superintelligent status, at which stage they proceed to wipe out humanity.
If the AI is designed by people who know what they are doing, we get an AI that does whatever the designers thought was a good idea. If the AI design is done by people who don't know what they are doing, what do they build? I would suspect that the answer is not an AI we want. If, hypothetically, you got an AI that wanted to maximize the number of smiley faces, then when it's substantially subhuman, the best it can do is make the researchers smile by doing well on its toy problems. At around human level, it lies its pants off. At the strongly superhuman level, it wipes out humanity and fills the universe with endless smiling faces (no conscious minds, just faces). The reason it lies its pants off is that it is smart enough to realize that we can shut it down, and we will do so if we think it will do this. I suspect that what humans would consider good is a fairly small target; most arbitrary AI designs are not aiming for what we would call a good future, and once the AI has reached great power, it can do whatever it wants.
"Someone else points out that this is pretty unlikely, since someone watching the subhuman-level AGI system in question would probably notice if the AGI system were ordering a lot of new hardware for itself, or undertaking unusual network activity, or displaying highly novel RAM usage patterns, or whatever..."
I would consider these to be bad arguments. Major current AI systems are often run for months. Are you telling me that there was a person watching what the AI was doing and ready to pull the plug for every minute of that? I don't think so. The researchers would have gone home and gone to sleep, and seen what it was doing in the morning. What exactly is a "highly novel RAM usage pattern"? The RAM display on my computer basically gives a number. You see a graph of a squiggly line. It's squiggling slightly differently than it was 5 minutes ago. It could just be the garbage collector. The network light on your machine is blinking, but it could be the operating system updating itself. Given many computer labs, the little light that indicates network connectivity could be covered in fluff and pointing at a wall. If you have a slightly paranoid person who will pull the plug at the first sign of anything unusual, you might be safer, but you won't get much done.
Secondly, it assumes that the researchers don't want a foom. If the people in a position to pull the plug don't want to, then the plug won't be pulled. This could happen if a team of people think that their AI will benefit them, whether or not they are right.