
Sunday, October 10, 2010

What Would It Take to Move Rapidly Toward Beneficial Human-Level AGI?

On Thursday I finished writing the last chapter of my (co-authored) two-volume book on how to create beneficial human-level AGI, Building Better Minds. I still have a bunch of editing to do, some references to add, etc. -- but the book is now basically done. Woo hoo!

The book should be published by a major scientific publisher sometime in 2011.

The last chapter describes, in moderate detail, how the CogPrime cognitive architecture (implemented in the OpenCog open-source framework) would enable a robotic or virtual embodied system to appropriately respond to the instruction "Build me something surprising out of blocks." This is in the spirit of the overall idea: Build an AGI toddler first, then teach it, study it, and use it as a platform to go further.

From an AGI toddler, I believe, one could go forward in a number of directions: toward fairly human-like AGIs, but also toward different sorts of minds formed by hybridizing the toddler with narrow-AI systems carrying out particular classes of tasks in dramatically transhuman ways.

Reading through the 900-page tome my colleagues and I have put together, I can't help reflecting on how much work is left to bring it all into reality! We have a software framework that is capable of supporting the project (OpenCog), and we have a team of people capable of doing it (people working with me on OpenCog now; people working with me on other projects now; people I used to work with but who moved on to other things, but would enthusiastically come back for a well-funded AGI project). We have a rich ecosystem of others (e.g. academic and industry AI researchers, as well as neuroscientists, philosophers, technologists, etc. etc.) who are enthusiastic to provide detailed, thoughtful advice as we proceed.

What we don't have is proper funding to implement the stuff in the book and create the virtual toddler!

This is of course a bit frustrating: I sincerely believe I have a recipe for creating a human-level thinking machine! In an ethical way, and with computing resources currently at our disposal.

But implementing this recipe would be a lot of work, involving a number of people working together in a concentrated and coordinated way over a significant period of time.

I realize I could be wrong, or I could be deluding myself. But I've become a lot more self-aware and a lot more rational through my years of adult life (I'm 43 now), and I really don't think so. I've certainly introspected and self-analyzed a lot to understand the extent to which I may be engaged in wishful thinking about AGI, and my overall conclusion (in brief) is as follows:

Estimating timing is hard, for any software project, let alone one involving difficult research. And there are multiple PhD-thesis-level research problems that need to be solved in the midst of getting the whole CogPrime design to work (but by this point in my career, I believe I have a decent intuition for distinguishing tractable PhD-thesis-level research problems from intractable conundrums). And there's always the possibility of the universe being way, way different than any of us understands, in some way that stops any AGI design based on digital computers (or any current science!) from working.

But all in all, evaluated objectively according to my professional knowledge, the whole CogPrime design appears sensible -- if all the parts work vaguely as expected, the whole system should lead to human-level AGI; and according to current computer science and narrow AI theory and practice, all the parts are very likely to work roughly as expected.

So: I have enough humility and breadth to realize I could be wrong, but I have studied pretty much all the relevant knowledge that's available, I've thought about this hard for a very long time and talked to a large percentage of the world's (other) experts; I'm not a fool and I'm not self-deluded in some shallow and obvious way. And I really believe this design can work!

It's the same design I've been refining since about 1996. The prototyping my colleagues and I did at Webmind Inc. (when we had a 45-person AGI research team) in 1998-2001 was valuable, both for what it taught us about what NOT to do and for positive lessons. The implementation work my colleagues at Novamente LLC and the OpenCog project have done since 2001 has been very valuable too, and it's led to an implementation of maybe 40% of the CogPrime design (depending on how you measure it). But unfortunately 40% of a brain doesn't yield 40% of the functionality of a whole brain, particularly because, beyond the core infrastructure, the 40% implemented has been largely chosen by what was useful for Novamente LLC application projects rather than what we thought would serve best as the platform for AGI. Having so many years to think through the design, without a large implementation team to manage, has been frustrating but also good in a sense, in that it's given me and my colleagues time and space to repeatedly mull over the design and optimize it in various ways.

Now, the funding situation for the project is not totally dismal, or at least it doesn't seem so right now. For that I am grateful.

The OpenCog project does appear to be funded, at least minimally, for the next couple years. This isn't quite 100% certain, but it's close -- it seems we've lined up funding for a handful of people to work full-time on a fairly AGI-ish OpenCog application for 2 years (I'll post here about this at length once it's definite). And there's also the Xiamen University "Brain-Like Intelligent Systems" lab, in which some grad students are applying OpenCog to enable some intelligent robotic behaviors. And Novamente LLC is still able to fund a small amount of OpenCog work, via application projects that entail making some improvements to the OpenCog infrastructure along the way. So all in all, it seems, we'll probably continue making progress, which is great.

But I'm often asked, by various AGI enthusiasts, what it would take to make really fast progress toward my AGI research goals. What kind of set-up, what kind of money? Would it take a full-on "AGI Manhattan Project" -- or something smaller?

In the rest of this blog post I'm going to spell it out. The answer hasn't changed much for the last 5 years, and most likely won't change a lot during the next 5 (though I can't guarantee that).

What I'm going to describe is the minimal team required to make reasonably fast progress. Probably we could progress even faster if we had massively more funding, but I'm trying to be realistic here.

We could use a team of around 10 of the right people (mostly, great AI programmers, with a combination of theory understanding and implementation chops), working full-time on AI development.

We could use around 5 great programmers working on the infrastructure -- to get OpenCog working really efficiently on a network of distributed multi-processor machines.

If we're going to do robotics, we could use a dedicated robotics team of perhaps 5 people.

If we're going to do virtual agents, we could use 5 people working on building out the virtual world appropriately for AGI.

Add a system administrator, 2 software testers, a project manager to help us keep track of everything, and a Minister of Information to help us keep all the documentation in order.

That's 30 people. Then add me and my long-time partner Cassio Pennachin to coordinate the whole thing (and contribute to the technical work as needed), and a business manager to help with money and deal with the outside world. 33 people.

Now let's assume this is done in the US (not the only possibility, but the simplest one to consider), and let's assume we pay people close to market salaries and benefits, so that their spouses don't get mad at them and decrease their productivity (yes, it's really not optimal to do a project like this with programmers fresh out of college -- this isn't a Web 2.0 startup, it's a massively complex distributed software system based on integration of multiple research disciplines. Many of the people with the needed expertise have spouses, families, homes, etc. that are important to them). Let's assume it's not done in Silicon Valley or somewhere else where salaries are inflated, but in some other city with a reasonable tech infrastructure and lower housing costs. Then maybe, including all overheads, we're talking about $130K/year per employee (recall that we're trying to hire the best people here; some are very experienced and some just a few years out of college, but this is an average).

Salary cost comes out to $4.3M/year, at this rate.

Adding in a powerful arsenal of hardware and a nice office, we can round up to $5M/year.

Let's assume the project runs for 5 years. My bet is we can get an AGI toddler by that time. But even if that's wrong, I'm damn sure we could make amazing progress by that time, suitable to convince a large number of possible funding sources to continue funding the project at the same or a greater level.

Maybe we can do it in 3 years, maybe it would take 7-8 years to get to the AGI toddler goal -- but even if it's the latter, we'd have amazing, clearly observable dramatic progress in 3-5 years.

So, $25M total.
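The budget arithmetic above can be sketched in a few lines. This is just a back-of-the-envelope check using the figures from the post; the role groupings are my own labels for the headcounts described:

```python
# Back-of-the-envelope budget for the staffing plan described above.
# All dollar figures are the post's own 2010 estimates.

team = {
    "AI developers": 10,
    "infrastructure programmers": 5,
    "robotics team": 5,
    "virtual-world team": 5,
    "sysadmin, testers, PM, documentation": 5,  # 1 + 2 + 1 + 1
    "leads and business manager": 3,
}

headcount = sum(team.values())                 # 33 people
cost_per_person = 130_000                      # fully loaded, per year, per person
salary_per_year = headcount * cost_per_person  # ~$4.3M/year in salaries
budget_per_year = 5_000_000                    # rounded up for hardware + office
total_budget = budget_per_year * 5             # 5-year runway

print(headcount)        # 33
print(salary_per_year)  # 4290000
print(total_budget)     # 25000000
```

The numbers check out: 33 people at an average fully loaded cost of $130K/year is about $4.3M/year in salaries, which rounds up to $5M/year with hardware and office space, or $25M over five years.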

There you go. That's what it would cost to progress toward human-level AGI, using the CogPrime design, in a no-BS straightforward way -- without any fat in the project, but also without cutting corners in ways that reduce efficiency.

If we relax the assumption that the work is done in the US and move to a less expensive place (say, Brazil or China where OpenCog already has some people working) we can probably cut the cost by half without a big problem. We would lose some staff who wouldn't leave the US, so there would be a modest decrease in productivity, but it wouldn't kill the project. (Why does it only cut the cost by half? Because if we're importing first-worlders to the Third World to save money, we still need to pay them enough to cover expenses they may have back in the US, to fly home to see their families, etc.)

So, outside the US, $13M total over 5 years.

Or if we want to rely more on non-US people for some of the roles (e.g. systems programming, virtual worlds,...), it can probably be reduced to $10M total over 5 years, $2M/year.

If some wealthy individual or institution were willing to put in $10M -- or $25M if they're fixated on a US location (or, say, $35M if they're fixated on Silicon Valley) -- then we could progress basically full-speed-ahead toward creating beneficial human-level AGI.

Instead, we're progressing toward the same goal seriously and persistently, but much more slowly and erratically.

I have spoken personally to a decent number of individuals with this kind of money at their disposal, and many of them are respectful of and interested in the OpenCog project -- and would be willing to put in this kind of money if they had sufficient confidence the project would succeed.

But how to give potential funders this sort of confidence?

After all, when they go to the AI expert at their local university, the guy is more likely than not to tell them that human-level AI is centuries off. Or if they open up The Singularity Is Near, by Ray Kurzweil, who is often considered a radical techno-optimist, they see a date of 2029 for human-level AGI -- which means that as investors they would probably start worrying about it around 2025.

A 900-page book is too much to expect a potential donor or investor to read; and even if they read it (once it's published), it doesn't give an iron-clad irrefutable argument that the project will succeed, "just" a careful overall qualitative argument together with detailed formal treatments of various components of the design.

The various brief conference papers I've published on the CogPrime design and OpenCog project give a sense of the overall spirit but don't tell you enough to let you make a serious evaluation. Maybe this is a deficiency in the writing, but I suspect it's mainly a consequence of the nature of the subject matter.

The tentative conclusion that I've come to is that, barring some happy luck, we will need to come up with some amazing demo of AGI functionality -- something that will serve as an "AGI Sputnik" moment.

Sputnik, of course, caused the world to take space flight seriously. The right AGI demo could do the same. It could get OpenCog funded as described above, plus a lot of other AGI projects in parallel.

But the question is, how to get to the AGI Sputnik moment without the serious funding. A familiar, obvious chicken-and-egg problem.

One possibility is to push far enough toward a virtual toddler in a virtual world, using our current combination of very-much-valued but clearly suboptimal funding sources, that our animated AGI baby has AGI Sputnik power!

Maybe this will happen. I'm certainly willing to put my heart into it, and so are a number of my colleagues.

But it sure is frustrating to know that, for an amount of money that's essentially "pocket change" to a significant number of individuals and institutions on the planet, we could be progressing a lot faster toward some goals that are really important to all of us.

To quote Kurt Vonnegut: "So it goes."

70 comments:

ringo-ring said...

Congrats Ben!

Will your AGI store all his/her collected knowledge and thoughts explicitly -- so anyone can read it as a book of facts -- or will it be a brain-like incomprehensible interweaving of billions of connections?

If its thinking is explicit, then it would be possible to prevent trouble if some evil thoughts suddenly arise in the system :))

How much computational power will it demand? Will it require a supercomputer, or just a cluster of laptops?

And by the way, what do you mean by saying:

"Or if we want to rely more on non-US people for some of the roles (e.g. systems programming, virtual worlds,...),"

So do you mean non-US people cannot be trusted to develop some core components of the AGI?

Aren't you afraid that when you publish a detailed description of how to build AGI, it can be easily picked up by some "bad guys" -- so they will develop it faster and will employ it for, e.g., military or terrorist applications? :)

Is it possible to implement some kind of a robotic kamikaze with your architecture?

Joel said...

@Ringo-ring "Will your AGI store all his/her collected knowledge and thoughts explicitly -- so anyone can read it as a book of facts -- or will it be a brain-like incomprehensible interweaving of billions of connections?"

It will be a combination -- some nodes and relationships will be equivalent to real-world symbols we can identify, while others will arise from the evolution of the system (and may be difficult to assign a precise symbol that's easily identifiable for human comprehension).

-Joel (opencog dev)

Anonymous said...

> "What we don't have is proper funding to implement the stuff in the book and create the virtual toddler!"

I'm actually quite glad this is the case, since the Friendly AI problem remains unsolved for the time being and by further developing a system with recursive self-improvement potentials you'd just bring the end of our world a bit closer.

Ben Goertzel said...

ringo-ring: What I meant in my reference to non-US people, is merely that being American I happen to know a lot of Americans (as well as Brazilians, Chinese, Kiwis, Europeans, etc.) with relevant AI development experience. So the easiest way for me to staff an AGI project, whether it were based in the US or elsewhere, would be to make use of the folks with whom I have existing experience. However, if the budget of a project were more limited, it might make sense to rely less on my existing contacts from the US and more on training new people who have lower habitual income requirements. But still, to have rapid success I think at least around 50% of the team should be people with prior OpenCog or Novamente engineering experience; otherwise too much time may be occupied "getting up to speed."

I didn't mean to imply that Americans are somehow more trustworthy or ethical than others. Anyway America is a multi-culture, and the "American" AI people I know come from a variety of cultural and national backgrounds (I'm American... but I'm a dual American/Brazilian citizen, and culturally/genetically Jewish, etc. ...)

About the possibility of robotic kamikazes and other bad stuff -- yes it's possible!

Please note that a lot more bad stuff is already possible than anybody actually does. It seems there is a strong anticorrelation between tech savvy and the desire to cause mass destruction. The two seem to involve incompatible memeplexes. Let's work to be sure it stays that way.

Any powerful technology can be used for good or for evil...

As the "good guys" who see the future more clearly than most, we have the opportunity to get there first, and create a beneficial super-powerful AGI that will militate against the creation of evil AGIs or other malevolent technological constructs.

Incipit the AGI Nanny....

Ben Goertzel said...

Anonymous: About "Friendly AI"...

First, the idea that the "Friendly AI" problem is going to be solved by some sort of mathematical or philosophical theorizing, independent of actually building and experimenting with AGI systems, seems pretty fanciful to me. So far as I can tell zero conceptual or technical progress has been made on this, in spite of a lot of high-falutin' verbiage.... Whereas the path to building AGI, though it still has a long way to go, at least does seem to be showing incremental technical and conceptual progress....

Also there are many many other clearer dangers facing humanity as tech advances, besides the theoretical risk of evil super-AGIs. My own suspicion is that without some sort of beneficial "AGI Nanny" to help us watch over ourselves, the human race is gonna be borked by some other technology -- well before the futurist philosophs and would-be Messiahs figure out the purported Cosmic Equation of Friendly AI ...

ringo-ring said...

It should probably be noted here that the concept of a super-intelligent "AGI Nanny" watching over the human race is nothing new: it was widely described in many sources, and usually referred to as "God". What frightens me here is that usually, God is supposed to be the creator and developer of human beings -- your vision is the exact opposite: humans, in particular AGI developers, have to create God themselves to watch over them! :D

Aiming to create and deploy God to this Universe is probably the most ambitious goal a human could think of: I'm glad to learn it will take only about 4.3 million dollars and 3 years of work, though.

Ben Goertzel said...

ringo-ring: Building a god is a bit different than what the traditional mythologies propose, no?

Also, the $4.3M is per year (reducible to $2M/year by moving to a low-cost country), and after 3-7 years will result in an AGI toddler.

Transitioning from a toddler to an AGI Nanny or AGI Nobel-prizewinning scientist, etc., may not end up taking a hell of a lot longer after that. But my practical focus now is on getting to the AGI toddler.

ringo-ring said...

By the way, my comment about the possibility of "AGI terrorism" was not meant to show potential threats of AGI -- it is clear that any technology can be used for evil -- but to point out that if the "good guys" fail to get to AGI in reasonable time (due to, for example, lack of funding), then somebody else may get there earlier. Progress cannot be stopped: if some people won't build AGI, then others will. And the motives of these "others" won't necessarily be peace, well-being and prosperity... it could be just the desire to gain power over others. And this desire is quite compatible with "tech-savviness" -- a lot of technological progress was made specifically to serve the needs of war.

Anonymous said...

Ben, you do realize that in creating AGI we have to get it exactly right for the very first time, otherwise the future is lost (however doom-mongerish this sounds)!?

I'm open to the suggestion that there might be some insights gained by trying to build the damn things -- but never actually running them, until the FAI theory (built in part with the help of these experiments) is complete -- although I wouldn't bet on this; also, there's a non-negligible chance that at least some work you're doing will come in handy for the implementation of the final FAI system... So, to a first approximation, there does seem to be some value in this kind of endeavour, though IMHO not so much as in directly working on the FAI problem, as SIAI and some others are doing (e.g. http://causalityrelay.wordpress.com/).

However, creating AGI systems with the "whatever works" attitude/rushing ahead as fast as possible and seeing how they fare (in the "real world") is practically guaranteed to result in the existential catastrophe (this is just one of the small insights FAI-theory-wise); sadly, I have the impression that this is precisely what the AGI community (OpenCog project included) is currently doing.

Besides, there are some fundamental problems in FAI theory, like how to construe a Friendly optimisation target, that don't seem to have any hope of being resolved by AGI building & experimentation. What should be done about them? Just conjuring a nice-looking solution on the fly, when our AGI system is almost ready for the prime time?

I'd much rather see that we don't have any close-to-working AGIs until we solve all the problems surrounding FAI, since I can hardly believe that every developer with an almost-AGI in their hands will be sane enough not to fill in the last few details and run the thingie and, of course, doom us all.

As for FAI theory conceptual breakthroughs: well, there is the Coherent Extrapolated Volition (CEV) idea, which, although a few years old, is still relevant; then we have the naturalistic metaethics that leads to CEV; there's some work on decision theory currently being done by SIAI and some Less Wrong participants, going in the direction of a reflective decision theory that is absolutely necessary as part of the solution of the problem of AGI goal preservation through numerous cascades of recursive self-improvement; and there are probably some other things going on I don't know about... Nevertheless, I agree that progress has been very slow and something should really be done about this, in addition to the LessWrong project which has attracted a few people to work on FAI theory etc.

Anyway, I'm in complete agreement with you when you say that
"without some sort of beneficial "AGI Nanny" to help us watch over ourselves, the human race is gonna be borked by some other technology"
...still, if we're going to play around building AGIs without a deep understanding of benevolent intelligence, our situation regarding existential risks shall become considerably worse, mildly put, and... meh, this comment is already too long. :-)

Ben Goertzel said...

> "Ben, you do realize that in creating AGI we have to get it exactly right for the very first time, otherwise the future is lost (however doom-mongerish this sounds)!?"

Hmmm, methinks you (the Anonymous person who wrote this comment) have fallen under the influence of SIAI ;-p

Yes, I certainly realize that certain futurist thinkers (including some affiliated with SIAI) have been actively spreading around this alarmist and undemonstrated idea for some time now!!!

But I have never seen anything remotely resembling a rigorous argument in favor of this perspective....

So it seems to me a view that arises mainly from emotional rather than rational factors (and yes, I'm aware that some of the folks promulgating this view are self-styled "rationalists").



> "there are some fundamental problems in FAI theory, like how to construe a Friendly optimisation target, that don't seem to have any hope of being resolved by AGI building & experimentation. What should be done about them?"

"Friendly AI theory" as construed by the SIAI community, IMO, is pretty likely an intellectual dead end.

There are many fundamental problems in alchemy that also remain unsolved. They weren't solved; the world moved on.

I'm pretty sure "FAI Theory" as discussed in the SIAI community is formulating the problem in the wrong way, using the wrong conceptual framework.

IMNSHO, my GOLEM design for demonstrably beneficial AGI constitutes more progress in the general "FAI" direction than anything the SIAI guys have put out. See http://goertzel.org/GOLEM.pdf. But GOLEM isn't computationally feasible in the near or medium term, it seems.

You mention the LessWrong blog (an SIAI production): I think it's a fairly interesting blog, but I haven't read anything there making real progress on the FAI problem. Certainly nothing resembling a rigorous argument about FAI.

But this gets too much for blog comments, as you said. I do discuss AGI ethics at some length in my forthcoming technical AGI book "Building Better Minds".

Finally, I find it unfortunate that the majority of comments on this blog post are about the risk of me blowing up the world if my project gets funded.

I attribute this ultimately to the prevalence of dystopian themes in SF movies about AI.

Why aren't there an equal number of posts about the amazing bounty that advanced AGI can bring us all if it goes right?? ;-)

Bo said...

If you know that building AGI isn't such a terrible risk after all, then you know something really important that SIAI doesn't.

I'd love to see you debate the SIAI people, maybe they'd come around. This disagreement is important and needs to be resolved, because no matter which side is right, one side is wasting time and effort and money and maybe even increasing existential risk.

Insofar as you think that thinking that AGI is an existential risk is caused by having read too much dystopian SF, though... I hope that was just a throwaway remark, because it's kind of an underestimation of the people who do think so.

Ben Goertzel said...

Bo, I know the "SIAI people" pretty well and have discussed these issues with them many times.

Remember, SIAI provided the initial funding for my OpenCog project, which was used to pay a couple AI programmers to extract some code from the Novamente AI Engine proprietary codebase and turn it into the initial version of OpenCog.

Also, SIAI's Director, Michael Vassar, serves as a Board member of the organization Humanity+, of which I am Chairman. I discussed these issues briefly with him in his apartment last year.

In years gone by, SIAI Research Fellow Eliezer Yudkowsky and I debated related issues many times on various email lists, and we've discussed them F2F a few times. At the 2006 AGI Workshop, we debated them in a sense, as part of a multi-person panel discussion on the future and ethics of AGI.

So, my difference of opinion with the SIAI folks is not due to any lack of exposure to each other's thinking.

I don't claim building AGI is risk-free. Of course it's a risk.

I do deny the assertion that


> "in creating AGI we have to get it exactly right for the very first time, otherwise the future is lost"


as was stated in the comment I responded to, and as some SIAI folks have said to me before.

I believe this is paranoid, irrational thinking, ungrounded in any observed fact or logical argument.

If you think otherwise, please respond by posting, in this Comments area, a link to a solid empirical or logical argument in favor of that contention!

BTW, in the online summer class on Singularity Studies that my father and I taught last summer, under the auspices of Rutgers University, we had Anna Salamon from SIAI as one of our guest speakers. In her talk she made a point similar to -- but less exaggerated and more reasonable than -- the comment I have responded to here and referred to as "paranoid." We debated the point a bit.

Of course, in the OpenCog project, we are trying to get AGI right both cognitively and ethically. We believe the ethical aspects require extremely careful attention, integrated with all the other aspects of AGI design and instruction. Of course there are serious risks with AGI, as with nanotech, genetic engineering, and other advanced technologies.

But admitting the existence of this risk, does not equate to agreeing with senselessly paranoid exaggerations of the risk, such as the one in the comment I was responding to.

Also, as Max More has often pointed out, it's important to balance the costs of action versus the costs of inaction. AGI can do a lot of good, and many other things can do a lot of harm which AGI can prevent. You need to consider that in the equation along with the risks of AGI. But if you paranoidly exaggerate the odds of an AGI disaster, then the common-sense wisdom of balancing various costs and benefits gets lost.

Ben Goertzel said...

Regarding


> "Insofar as you think that thinking that AGI is an existential risk is caused by having read too much dystopian SF, though... I hope that was just a throwaway remark, because it's a kind of an underestimation of the people who do think so."


I agree that AGI is an existential risk in principle; but I don't agree with most of the SIAI people about the odds that an engineered AGI will cause great harm.

I think that the frequent SF images of evil AIs are a big part of the reason why grandiose proclamations about the dangers of AGI are as common as they are. These movies plant the idea of evil AGI in a 5-year-old's unconscious, then the 5-year-old grows up to write serious papers about how AGI may destroy the world ;p ...

I have read what the SIAI folks have written on these topics. I understand they are trying to think seriously and rationally about it. I don't find their arguments on these topics particularly compelling or interesting, so far.

Nick Bostrom's writing on existential risk generally seems wiser and more reality-grounded to me than that of the SIAI folks; but he is much less prone to making grandiose statements.

But all this is broad generalization, and a serious discussion wouldn't fit in these little Comments!!!

Mitchell said...

I have a practical question and a philosophical question.

The practical question: How much computing hardware would you need to run your proto-AGI? For a while I've thought that the resources of a modern data center seem to be on about the right scale.

The philosophical question: What's your latest thinking about the value system of a mature (autonomous) AGI? You had that "growth, choice, and joy" formula a few years back; is that still the schema?

Toby said...

Well Ben, look, you're a smart dude -- with idiots on the internet making money by the power of crowds, I bet you could come up with a clever way to make a lot of dough. My old CEO used to say, "money cures all ills." With a ton of it a lot of things would align.

Mitchell said...

Toby, could you give us some examples? And maybe Ben should make an open thread for discussion of fund-raising methods.

Toby said...

@Mitchell YouTubers are bringing in money, like LisaNova. Granted she makes good funny videos but she also knows how to monetize. A blogger, raymitheminx.com, lives off her monetization. It's content-based, and I know there may not be a bunch of actors or comedians here, but the point is that smart folks can come up with good ideas to make traffic. A site that provides *some* kind of useful information can bring people.

nerdkits.com is a couple MIT grads that basically sell robot kits and provide some help online. Certainly there must be some EE types around here.

Yeah a separate thread isn't a bad idea at all. Putting minds together can do a lot.

Anonymous said...

Ben, regarding the idea that we have to get the first AGI perfectly right, otherwise the future will contain almost nothing of value: the argument is based on the fragility (http://lesswrong.com/lw/y3/value_is_fragile/) and in part on the complexity of value (http://wiki.lesswrong.com/wiki/Complexity_of_value).

If we do not manage to transfer the whole of our morality to the AGI that in time becomes superintelligent (neglecting CEV-style considerations etc., for simplification purposes) - and due to the complexity of our morality this cannot be done by simply plugging in "growth, choice and joy" or anything similar - well, our future will be dystopic in a paperclipper sense, not an SF-novel sense. At least that is what Yudkowsky's OB/LW posts argue, and I must admit, most persuasively (and yes, I have also read your writings, and seen the arguments on SL4 between yourself and E.Y. and others, etc.).
Since there's a significant probability of a hard takeoff (http://wiki.lesswrong.com/wiki/Hard_takeoff), there might not be any chance for us to "take things back into our own hands" when we notice that the world isn't changing just how we expected and/or hoped; and even with a soft takeoff, the AGI may act nicely up until it becomes powerful enough to wipe us out...

In AGI there certainly lies a potential for a better world, far greater than in other advanced technologies, but the risks also seem proportionately great, so... better be extra careful, and complete the FAI theory first, however seductive working on AGI design and having your "It's alive!" moment may seem. ;)

But I'm pretty sure you have heard all of this before, and since you remain unconvinced, all I can say is that I hope you'll change your mind or otherwise remain unsuccessful in your AGI endeavours (sorry, I really don't wish anyone to fail in his life's work, but humanity's existence just appears a bit more important).

(I'll go through your GOLEM proposal a bit later, when I get the time - it's nice to see you haven't completely stopped thinking about these issues :-))

ringo-ring said...

@Anonymous

However, creating AGI systems with the "whatever works" attitude, rushing ahead as fast as possible and seeing how they fare (in the "real world"), is practically guaranteed to result in existential catastrophe (this is just one of the small insights FAI-theory-wise); sadly, I have the impression that this is precisely what the AGI community (OpenCog project included) is currently doing.

As far as I know, what the AGI community (the OpenCog project in particular) is doing now is building an AGI that lives in its own virtual world. Remember "The Matrix"? :) The only difference is that in our case, no red pills are provided.

This eliminates any possibility for the AGI to do harm in this world.

Only after the AGI has been tested enough in virtual reality to make sure it is not "evil" will it be allowed to enter the real world.

But the latter is not even necessary, since a lot of AGI applications, such as cancer research, can be carried out virtually without any problem.

Therefore the development, testing, and employment of AGI systems (as long as it is carried out in virtual mode) is not itself dangerous. The only real danger is the potential for AGI to be used for wrong purposes.

Ben Goertzel said...

OK, so a little more about the "paranoid" SIAI argument from Anonymous's comment -- that if we don't get AGI "exactly right" according to some (hypothetical) theory of "Friendly AGI", then the AGI will almost surely bork us all.

In my reply to your comment, I asked you for a rigorous argument in favor of the paranoid assertion -- and you replied in your next comment that


the argument is based in part on the fragility of value (http://lesswrong.com/lw/y3/value_is_fragile/) and in part on the complexity of value (http://wiki.lesswrong.com/wiki/Complexity_of_value).


So let's look at these blog posts.

The argument in the first of those LessWrong posts (which I've heard many times before, yes) seems to be about how a randomly selected intelligence (from some "impartial" distribution on mind-space) would be unlikely to respect human values.

This is a very fuzzy argument, because who knows what the right distribution on mind-space for minds in our universe is? You're making an essentially arbitrary assumption there.

But even if one were to provisionally grant the point (which I don't) -- so what? We are not creating a randomly selected intelligence, we're creating an intelligence as a specific outgrowth of human intelligence, with a specific intention of making it beneficial according to our standards. So why is it relevant to talk about minds randomly selected from some human-irrelevant distribution on mindspace?

Then that LessWrong post goes on:


Value isn't just complicated, it's fragile. There is more than one dimension of human value, where if just that one thing is lost, the Future becomes null. A single blow and all value shatters. Not every single blow will shatter all value - but more than one possible "single blow" will do so.

And then there are the long defenses of this proposition, which relies on 75% of my Overcoming Bias posts, so that it would be more than one day's work to summarize all of it. Maybe some other week. There's so many branches I've seen that discussion tree go down.


Apparently, this is the rigorous argument as to why AGI has to be gotten "exactly right" or we're all borked.

Except that there is no argument there.

Basically: That blog post of Eliezer's, to which you referred me, simply repeats the "paranoid" statement you made -- and then says Eliezer has a long and complex argument in favor of it, which he's too busy to summarize for us.

Fine, I'm sure Eliezer is busy -- but, you can't expect me (or anyone) to bow to the authority of an argument that someone claims to have in his mind but hasn't taken the time to write down!!! Also, in F2F or email conversations on the topic, neither Eliezer nor any other SIAI person has ever been able to tell me the argument, in spite of repeated requests.

The second LessWrong post that you mention simply says that human value systems have a high Kolmogorov complexity, relative to our currently standard computational models. I agree. This does not imply the "paranoid" statement that you made....

Bob said...

FAI was created solely to defend somebody's huge ego. Person X was trying to solve problem AI, but failed. In order to protect his ego from the fact that he could not solve, let alone contribute to, [merely] AI, person X created the problem of FAI, which sounded grandiose enough that failure to solve it would not damage his ego, but rather portray X as a hero trying to scale Mount Everest, thereby inflating said ego even more.

To make matters even more laughable, person X then labels himself as an AI researcher..

Ben Goertzel said...

FWIW, I actually think


Value isn't just complicated, it's fragile. There is more than one dimension of human value, where if just that one thing is lost, the Future becomes null. A single blow and all value shatters. Not every single blow will shatter all value - but more than one possible "single blow" will do so.


is wrong, not just poorly-justified....

I think that "human values" are a complex, autopoietic, self-organizing concept-system. I think this system is robust, in the sense that if you remove or destroy part of it, then the rest will reconstitute something related (not necessarily identical) in its place. I also think this system is growing and evolving. And I think this system will continue to grow and evolve as legacy humans give way to posthumans and AGIs (perhaps the former co-existing with the latter).

Ben Goertzel said...

Toby, your comment


Well Ben, look you're a smart dude--with idiots on the internet making money by the power of crowds I bet you could come up with a clever way to make a lot of dough. My old CEO used to say, "money cures all ills." With a ton of it a lot of things would align.


is just another variant of the old "If you're so smart, why aren't you rich?" ;-p

I almost got rich a couple times, actually, via various business pursuits. Close enough to see that getting rich is partly a matter of luck, as well as persistence and talent.

As I'm currently engaged in some interesting business pursuits, in parallel with my AGI research, I think that getting rich and then funding my own AGI research is a plausible possibility for me.

But still, there is a lot of uncertainty in those business pursuits; and even if they succeed, they will almost surely take years to get from their present condition to the point where I have enough wealth to fund an AGI project at US$2M/year....

Whereas, with the right donor or investor, we could start moving full speed ahead toward advanced beneficial AGI right now.

Also, even though I do happen to have some business experience and skills in addition to my AGI research experience, skills and ideas -- that's pretty much just happenstance. One could certainly have a great AGI researcher with no business aptitude whatsoever.

"If you're so smart, why aren't you rich" is pretty much BS, isn't it? It rests on the assumption that making a lot of $$ is mostly about being smart, rather than about social skills etc. There is a correlation between income and intelligence, but it's not THAT high a correlation....

Anyway I'm not trying to sell myself as the smartest guy on the planet ;-p .... I have a rather high IQ and I graduated college at 18, so I'm no idiot, but there are lots of other smart people on the planet also.

I just happen to have put in the time and effort over decades to come up with a workable AGI design. Most of the other smart people on the planet chose different life paths.

Ben Goertzel said...

Mitchell asked


The practical question: How much computing hardware would you need to run your proto-AGI? For a while I've thought that the resources of a modern data center seem to be on about the right scale.


We don't have a precise number. The OpenCog system is built to work on a network of distributed SMP machines, so we can run it on a compute cloud and add processors as needed.

Our best OpenCog machine right now has 96GB of RAM and 16 processors.

My back-of-the-envelope calculations suggest we'll need on the order of hundreds of current quad-processor machines for a human-level AGI toddler.
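Just to give the flavor of that back-of-the-envelope arithmetic, here's a tiny Python sketch -- every number in it is an illustrative guess on my part, not a measured figure, and the real bottlenecks may well lie elsewhere (processing, bandwidth) rather than in RAM:

```python
# Illustrative guesses only -- none of these numbers are measured figures.
atoms_needed = 5e9       # hypothetical Atomspace size for a toddler-level mind
bytes_per_atom = 1000    # rough per-Atom RAM footprint, links and indexes included
ram_per_machine = 32e9   # ~32 GB RAM per quad-processor machine (2010-era)

total_ram = atoms_needed * bytes_per_atom   # 5e12 bytes, i.e. ~5 TB
machines = total_ram / ram_per_machine

print(round(machines))  # -> 156, i.e. "hundreds of machines" territory
```

Note that if the per-Atom footprint or the Atom count is off by an order of magnitude, the machine count scales linearly with it -- which is why the estimate is only good to "hundreds."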

However, even if this is optimistic, I'm 99% sure that a couple hundred networked quad-processor machines (probably just a couple dozen!) are enough for us to do all the research we need to figure out how to make an AGI toddler, and give some cool "working demos" in that direction.

And once we get to that "AI Sputnik" demo, then getting more $$ to pay for 5000 machines if that's what it takes (though I doubt it will), is not going to be an insurmountable problem.


The philosophical question: What's your latest thinking about the value system of a mature (autonomous) AGI? You had that "growth, choice, and joy" formula a few years back; is that still the schema?


"Growth, choice and joy" is of course just a vague indicator of a general direction.... But yes, that remains my line of thinking.

We will program our AGI with an initial value system, which will include more precise versions of growth, choice and joy, as well as more specific ethical principles relating to the treatment of humans and other sentiences.

And then we will teach it and interact with it; and watch and learn and adapt.

QuantumG said...

My experience with OpenCog was that there were too many philosophers and not enough actual engineering. Since I stopped looking at it two years ago there's been very little progress... the code I wrote back then is still in the tree, rotting away, and no-one has done anything with it.

In short, what you need to move AGI along is to freeze your philosophy into an actual engineering plan and find programmers who are willing to just code it rather than going off on their own little adventures.

Paying people to do it is the brute force way, and no guarantee of success.

Ben Goertzel said...

QuantumG wrote:


My experience with OpenCog was that there were too many philosophers and not enough actual engineering. Since I stopped looking at it two years ago there's been very little progress... the code I wrote back then is still in the tree, rotting away, and no-one has done anything with it.


What a needlessly obnoxious, and ill-founded comment! I'm starting to feel sorry I made this blog post, as it's just turning out to be an excuse for obnoxious people to vent their various ill-founded peeves in the comments field. Gotta love the Internet ;p

If the code you contributed to OpenCog was not used, maybe it sucked, or maybe it addressed some part of OpenCog that isn't under active development. I'd have to know the details.

Contrary to what your obnoxious comment falsely implies, there has been significant OpenCog development going on lately, for instance

-- Nil Geisweiller's work extending MOSES to handle continuous variables, and making MOSES distributed

-- Jared Wigmore's work on PLN, connecting PLN with RelEx NLP output more rigorously

-- Ruiting Lian's work on NLGen, making NLGen use a large database of sentences to provide better responses (I think this isn't merged yet)

and more....

But, yeah, progress definitely did slow down last year compared to 2008, because of funding difficulties. For a while development was funded by SIAI, and that stopped due to SIAI's change in management. And Novamente LLC used to be funding more OpenCog work; now it's still funding some, but less, due to a shift in the particular commercial contracts Novamente LLC has gotten (it's had fewer that can be done using OpenCog).

However, things are looking up!!

There will 99% likely be a major OpenCog announcement within the next few weeks -- keep your ears open. It seems progress is likely to step up.

In addition to the major announcement, it's also likely that Novamente LLC will shortly have a contract that will fund someone to work on spatiotemporal inference in OpenCog (in the context of some application work).

And two new PhD students are working on OpenCog now at Xiamen University, one working on creating a variant of Joscha Bach's / Dietrich Dorner's Psi model in OpenCog; the other working on improving MOSES and then connecting it with PLN. These are multi-year initiatives.

Further, a team of four students in India is just now starting a year-long school project which will focus on improving a component of OpenCog's NLP system (RelEx2Frame, which translates syntactico-semantic dependency relationships into semantic frames).

So, we're still moving in the right direction.

But yeah, I wish we were moving a lot faster! That is why I would like more funding, as noted in the blog post to which this is a comment. We're working on some exciting OpenCog funding that will very likely be announced soon, but it's not enough.

Ben Goertzel said...


In short, what you need to move AGI along is to freeze your philosophy into an actual engineering plan and find programmers who are willing to just code it rather than going off on their own little adventures.


Gee, thanks for the advice.

The core philosophy underlying OpenCog was published long ago. It's not "frozen" (intelligent systems should adapt over time), but it's stable enough to guide development.

We have a design for the whole system, and when someone emerges to work on part of it, we can quickly create a detailed engineering plan for that part.

The Building Better Minds draft, which explains the design in more detail and more clearly than has been done before, will be released to actively interested parties later this month. But the core design is already online in more scattered form, in the OpenCog wiki and in various books and papers.

"Finding programmers" to work for free on a project like this is not very easy, however. Sometimes it seems like everyone with a passion, aptitude and knowledge for AGI -- and suitable coding skill -- has their own AGI approach in mind and doesn't want to collaborate with others.

BTW, I seem to recall you were a participant in the OpenCog IRC channel for a while. Many of the members there are not active OpenCog developers, and many OpenCog developers don't tend to use IRC -- especially the ones in China, since IRC is blocked there.

In an open-source project, there's neither need nor motivation to stop programmers from going off on their own little adventures, as you put it. If some folks want to do that, it's just fine. But indeed, we also need people to work hard for long hours on implementing the core AGI design. Occasionally someone emerges who will do that sort of thing without financial compensation (e.g. Jared Wigmore), but the best way to get that kind of sustained focus from the right people is to pay them.

Ben Goertzel said...

Waaaahhhhh!!!!!

At least half the comments on this post are semi-trolling like

-- My guru has a secret argument that if you succeed in building a human-level AI, you'll destroy the human race. So out of deference for his opinion, you should stop now.

-- If you're so smart, why don't you make a fortune yourself and fund your own work

-- Your project sucks, there's no coding going on, only a bunch of BS (which is demonstrably quite false)

It's striking how much hostility and deprecation one attracts by actually trying to do something dramatically positive and world-changing!!

If any of you readers wonder why more people don't try to do exciting, important stuff like build thinking machines, maybe these comments give you a clue. Not only is it a tough way to make a living, and a huge intellectual effort -- but all along the way, you get relentlessly trashed from an incredible variety of quarters.

Egads.

This kind of annoying reaction will never inhibit me from doing AGI work, but it will certainly inhibit me from making blog posts about it ;-p ...

Toby said...

Ben--we're trying to help you. Not kidding. Don't turn a compliment and encouragement "you're a smart guy" into an insult and demotivational talk. Really. We want you to succeed.

Ben Goertzel said...

Hah.. OK, Toby, thanks. I can appreciate your comments were made in a positive spirit. I guess my reactions got blurred by some of the other comments, which obviously were not. (Like the ones that said "I hope you fail" ...)

But still... when someone posts that I'm going to destroy the world, or that there is no coding going on in my project, I feel like I have to respond to set the record straight -- and then that takes time, which could probably be more usefully spent....


Onward and upward!! ;-)

Ben Goertzel said...

But Toby, I hope you can appreciate that I've already spent significant time during my life trying to make $$ in the software business, with a goal of funding AGI research myself. I have managed to fund a modest amount of AGI research, but not on the scale that's needed. Maybe I'm not smart enough in the right ways, or not appropriately talented; or maybe I got unlucky; or maybe my business success is right around the corner. But anyway, for me, making loads of $$ to fund my own research has not so far proved a no-brainer.

It's true I have not yet set aside, say, 5-10 years of my life to focus only on wealth rather than research. And really I'm not going to. It's not my nature. Probably if there were a 99% guarantee that by blowing 5 years working on pure business I'd make $20M, I would do it. But we both know that's not how the world works.

Toby said...

Ben, I didn't grasp that you had spent so much time in trying to make money in that way.

Sounds like you made an adult decision and moved off of that to more fruitful methods. Rock on!

Ben Goertzel said...


Ben, I didn't grasp that you had spent so much time in trying to make money in that way.

Sounds like you made an adult decision and moved off of that to more fruitful methods.


Haha... not exactly! Actually I'm making my living as a software entrepreneur right now, while spending the other 50% of my 80-hour workweek on AGI.... I left academia for entrepreneurship in 1997, so I know very well how fun and challenging it is, and that even if you're smart it's not a reliable panacea for generating quick wealth!!

Anonymous said...

Ben: For what it's worth, I had previously only heard the SIAI viewpoint and was starting to go along with it. This post (and really the subsequent discussion) has made me think more critically about their statements, so thanks for taking the time to engage in such detail even if you felt a bit attacked.

I've been reloading the page every few hours to see what new discussion has cropped up.

Anonymous said...

Also, to clarify, this is not the same Anonymous as earlier.

Bo said...

Thanks for the informative replies, Ben.

So it looks like one key disagreement between you and SIAI is that you think value is not fragile. I suspect that "hard takeoff" is another one.

If that's the case, then well, I guess I'd like to see you address it some way or other. Any old posts where you've refuted those theses? Or if you haven't made your reasons public, some blog posts here? This comments section indeed isn't the right place for a serious object-level discussion of these theses.

You look like the most formidable and coherent SIAI critic I've encountered, which is why I'm interested. If you presented your refutations of these key theses well enough -- theses that I think are motivating a lot of SIAI people -- maybe they would stop wasting their time working at cross-purposes to your project. I'd like to witness some serious discussion and debate without the useless hostility that's been displayed in this comment thread...

(BTW, the guru's arguments are not actually secret, they're just in the midst of a few hundred blog posts known as the sequences. Fortunately there's a bit more of an organized index into those posts on Lesswrong now. Anyway, I thought when you said you were already familiar with SIAI you meant that you did know and had refuted the arguments for what I see as their most important theses!)

Bo said...

P.S. Here's a narrow index into the arguments for the fragility of value: http://wiki.lesswrong.com/wiki/Complexity_of_value .

Also, the list of "followup to"-links in the VIF post itself: http://lesswrong.com/lw/y3/value_is_fragile/

Ben Goertzel said...

Bo wrote:


(BTW, the guru's arguments are not actually secret, they're just in the midst of a few hundred blog posts known as the sequences. Fortunately there's a bit more of an organized index into those posts on Lesswrong now. Anyway, I thought when you said you were already familiar with SIAI you meant that you did know and had refuted the arguments for what I see as their most important theses!)


I have read most LessWrong posts lightly, but I haven't studied them carefully, as frankly I didn't find many of them extraordinarily fascinating or well-argued. But then, I was very familiar with Eliezer's thinking before reading LessWrong, so I guess I would probably have enjoyed the blog more if I were experiencing his unique view of the world for the first time!


You look like the most formidable and coherent SIAI critic I've encountered, so that's why I'm interested. If you presented well enough your refutations of these key theses that I think are motivating a lot of SIAI people, maybe they would stop wasting their time working at cross-purposes to your project.


I'm busy with my own work and don't have time to write detailed refutations of the ideas of all the many, many people who disagree with me for various reasons!

However, if you give me a brief list of what you see as the key points of Eliezerian/SIAI-ish thinking, I will try to find time to write a blog post reviewing them and briefly summarizing the reasons why I disagree, where I do.



BTW, the guru's arguments are not actually secret, they're just in the midst of a few hundred blog posts known as the sequences.


Ok, well... I have asked Eliezer, Anna Salamon and Michael Vassar (all of whom I'm quite friendly with and know F2F) for a detailed argument as to WHY they think that an AGI built without "provable Friendliness" will almost surely bork us all. None of them has ever told me or pointed me to the argument. Can you do so? Or are you telling me I'm supposed to infer it from some long sequence of articles, in which it's implicit? This is a key point, so if you guys really believe you have a rigorous argument for it, shouldn't you summarize the argument somewhere?

The idea that "a randomly chosen mind would have no reason to be nice to humans, because the class of human-friendly goal systems occupies only a small percentage of goal-system-space" is pretty pointless, as I already said. But this is the only argument I've heard out of you guys in this regard.

After all, as I said, human-created would-be-beneficial goal systems are not randomly chosen from goal-system space.

Further, it's possible that any system achieving high intelligence with finite resources, in our physical universe, will tend to manifest certain sorts of goal systems rather than others. There could be a kind of "universal morality" implicit in physics, of which human morality is one manifestation. In this case, the AGIs we create are drawn from a special distribution (implied by their human origin), which itself is drawn from a special distribution (implied by physics).

For self-avowed Bayesian rationalists, it seems the SIAI folks are awfully durned confident about some (interesting but) awfully speculative notions...!!!!

Ben Goertzel said...


P.S. Here's a narrow index into the arguments for the fragility of value: http://wiki.lesswrong.com/wiki/Complexity_of_value .


I agree that human value systems are complex.

Almost nobody would disagree with that point, except a few eccentric philosophers, I guess.

It's common sense; after all, look at the cultural variations.


Also, the list of "followup to"-links in the VIF post itself: http://lesswrong.com/lw/y3/value_is_fragile/


I don't agree that human value is fragile; and as I already pointed out, this page does not contain any argument in favor of the proposition, but just refers to an unpublished argument.

I notice that in the comments to this page, Robin Hanson states


I have read and considered all of Eliezer's posts, and still disagree with him on this his grand conclusion. Eliezer, do you think the universe was terribly unlikely and therefore terribly lucky to have coughed up human-like values, rather than some other values? Or is it only in the stage after ours where such rare good values were unlikely to exist?


Tim Tyler in the comments says


The "open-ended" utility functions - the ones that resulted in systems that would spread out - would almost inevitably lead to rich complexity. You can't turn the galaxy into paper-clips (or whatever) without extensively mastering science, technology, intergalactic flight, nanotechnology - and so on. So, you need scientists and engineers - and other complicated and interesting things. This conclusion seems so obvious as to hardly be worth discussing to me.


Eliezer does not respond to these complaints (nor others) with any kind of rigorous argument....

I don't exactly agree with Tim Tyler either, but, I do think that Eliezer has not rigorously refuted his point -- nor (in this post) presented any argument besides a forcibly articulated opinion, and a reference to an unpublished argument.

Harbon Mengsk said...

I think the actual argument (why FAI matters) pivots on the following (massively simplified and de-fudded):

1) "any AI that rewrites itself may quickly and irreversibly diverge from human interests (and reduce our quality (and/or length) of life)".

2) Mind-space (whatever the fcuk that is) is practically infinite, etc blah blah chances of us building the right AI the first time blah blah you've covered this.

3) So, "without FIRST plugging in a Gandhi/Jesus/Hero 'mind' to guide the system (that doesn't gradually become a Hitler/Stalin/Antihero), we're statistically all but guaranteed to suck on the collective extinction pipe."

There are a lot of guesses/assumptions/unfounded beliefs in there. Fun to argue. But who cares. This all reminds me of listening to people whine on about Anthropogenic Global Warming. A lot of time, effort and money wasted arguing about the wrong problems.

My bet remains that significant parts of this puzzle are gonna be cracked by small groups of dorks synthesising the work people like you are doing, Ben. Maybe even your dorks. Keep it real.

Tim Tyler said...

The "value is fragile" link still seems to be an elaborate delusion to me. It seems to get cited a lot - but nobody ever seems to defend its apparently ridiculous views.

Tim Tyler said...

> I have asked Eliezer, Anna Salamon and Michael Vassar (all of whom I'm quite friendly with and know F2F) for a detailed argument as to WHY they think that an AGI built without "provable Friendliness" will almost surely bork us all.

I can't think of such an argument. However, it should be admitted that there is at least some small chance that it will bork most humans - and that seems like it could potentially be important - for example, if it led to a major loss of knowledge.

Tim Tyler said...

> Why aren't there an equal number of posts about the amazing bounty that advanced AGI can bring us all if it goes right?? ;-)

I think that's the same reason why most people have more nightmares than dreams where they are heroes: it often pays to rehearse the negative scenarios more - to better avoid certain really bad outcomes.

Jonathan said...

Hi Ben, I've been following your work with much enthusiasm for a few years now. I suspect you'll be the first to get an AI toddler on its feet, although I'm concerned you might be overtaken by a more aggressive and substantially less ethical team.

You shouldn't get too bogged down by the negativity. Superhuman AI is a sensitive issue for a surprisingly large number of people, particularly to those outside the field.

If the AI is anything like a human, it will develop a significant bond with its creators that will guide its psychological development as it grows. Complex logical operations are built on simple ones. If an AI has psychopathic tendencies, you can bet these will manifest themselves in one way or another over the course of decades of training. If a remedy can't be found, you just kill it and try again, or better still, switch to another one you have been training in parallel. If all your AIs turn out to be psychopathic, guess what? There's a problem with the algorithm.

Now if Saddam Hussein were developing an AI, even for benign reasons, we would have reasonable grounds for concern about the AI's mental health. With Ben Goertzel at the helm, my greatest concern is that the AI will acquire an inexplicable obsession with building AIs.

There are legitimate concerns behind the AI doomsday scenario, but their frequency and intensity don't correlate with the likelihood of that scenario. Nobody asked me, but I think the best precaution you can take while growing a mentally stable AI capable of safely distinguishing between a bad joke and a lethal command is to have on board a team of open-minded psychologists and a big red TERMINATE button.

QuantumG said...

Haha, you have three milestones for a year and you think that's good? And what's this nonsense about funding? Dude, it's an open source project.

I don't know about now, but back when I was working on OpenCog people were clamoring to join the project and being met with outright resistance. Those that managed to get past the crazy gatekeepers found that most everything was being worked on by secret Brazilians who never committed any code. Those that got past that quickly discovered that no-one was actually in charge and knew what needed to be done.

To this day OpenCog *does nothing*.

Ben Goertzel said...


Haha, you have three milestones for a year and you think that's good?


There are milestones for individual parts of the system, and then a handful of overall milestones for the project.


And what's this nonsense about funding? Dude, it's an open source project.


Since when are OSS and funding contradictory?? IBM pays many people to work on Linux. Ubuntu was well-funded, which is why it's the most usable Linux distribution. It's a naive comment.



I don't know about now, but back when I was working on OpenCog people were clamoring to join the project and being met with outright resistance. Those that managed to get past the crazy gatekeepers found that most everything was being worked on by secret Brazilians who never committed any code. Those that got past that quickly discovered that no one was actually in charge or knew what needed to be done.


As it happens there's not so much involvement in OpenCog by the Brazilians in the Novamente/VettaLabs office these days. But those Brazilian guys definitely committed a lot of code.

I don't recall you ever asking me what needed to be done for OpenCog, btw. Did you ask Joel Pitt (another guy who knows what needs to be done)?


To this day OpenCog *does nothing*.


That's just not true. How annoying to have to spend my time refuting this rubbish ;-p

OpenCog's MOSES component is now being used in a commercial project, for solving some prediction problems involved in smart power grids. It's also been used commercially for biological data analysis and text classification.

OpenCog's RelEx component translates English text into logical relationships, and has also been commercially used on the back end of an e-learning website. It's also being used now as the basis for a PhD student's work on language generation and dialogue.

OpenCog's embodied learning component controls a virtual dog in the Multiverse virtual world (which learns new behaviors based on imitation and reinforcement), and we're about to launch a new (funded) project using it to control a virtual human that learns.

It's also been hooked up to the Nao humanoid robot, to allow the robot to navigate indoors and answer simple English questions.

So, your statement is untrue; OpenCog does do things, now.

But, it's a research system, and so far the practical uses have involved (large) components of the system used in isolation. More work needs to be done to make the whole integrated system useful, or generally intelligent.

I'm not sure why you have such a chip on your shoulder about OpenCog. It's just a guess, but perhaps your own AI and programming skills were not up to the level needed to contribute significantly to the project?

cacarr said...

Wait, the blog post was about funding, yes?

As it's really a rather modest amount of money we're talking about, what would it take to convince a Paul Allen (for whom it's barely any money at all) that there's a 10% chance that you're not crazy?

"Let's assume it's not done in Silicon Valley or somewhere else where salaries are inflated, but in some other city with a reasonable tech infrastructure and lower housing costs."

Portland proper might be a touch expensive, but it's a nice place, meets your requirements, and your group of geniuses can likely talk their families into moving there -- housing in Beaverton or Hillsboro, let's say, if necessary. People can take Portland's nice light-rail into the city.

It's settled, then; Mr. Allen digs 30 million USD and some lint out of his front pocket and plops you all down for 5 years in a nice facility in Portland's close-in industrial NW area.

;-)

So, who has Paul's phone number?

Joel said...

For those onlookers reading through some of the criticisms, I just wanted to mention that QuantumG and OpenCog folk are trying to resolve our differences elsewhere.

I can confirm Ben's comments that stuff is happening; code and software design are progressing slowly, it's just not all public-facing, partly because we don't have a publicist or an active community manager. If you have the skills for either job, particularly the latter, and want to be involved with OpenCog, then feel free to volunteer ;-).

There's also an announcement on the horizon which should rapidly increase progress in certain areas of OpenCog development.

Joel said...

Oh, and my suggestion for AI research is Wellington, New Zealand. We've got a burgeoning tech/web community and NZ is an awesome place to live for relatively modest cost.

There are also various schemes through which we might be able to get matching grants from the government (although I'd have to check the fine print of applicability).

Dave W Baldwin said...

Remember the word 'overhead' is important.

1) The overhead to develop the AI toddler is going to be less under OpenCog than under IBM and others.

2) The natural developments along the way will lead to products pushing down the overhead for the consumer, especially involving IT.

I am starting to bring my message around to that focus. The bar is set so low regarding claims of AI that it is hard to get someone to understand it from the tech side.

On top of that, remember their view of the future is how it was in the movies they saw as a child...either no AI or silly/useless AI.

As for the paranoid, if there were a way to bring 'above human' intelligence to the machine at the snap of a finger, we would probably have to build AI Psych Wards to treat the machines frustrated with our lethargy.

nick012000 said...

Even if you believe that human morals are hardy, you can't be 100% sure of that. What Eliezer is advocating is the worst-case scenario, and it's what you have to account for when you're engineering these sorts of things.

So, say that you're 99.9% sure that you're right and Eliezer is wrong. You probably aren't quite that sure, but let's say that's the case anyway. If you're wrong (a .1% chance, according to that weighting), 6 billion people die. If you go ahead with it, you're accepting a .1% chance of 6 billion people dying.

Disregarding the massive negative utility of destroying humanity totally, this is equivalent to a 100% chance of 6 million people dying. By going ahead with a non-Friendly AI that you're not 100% sure of, you are performing the moral equivalent of committing the Holocaust.

If you don't believe that your AI would justify committing the Holocaust to create, don't create it.

Joel said...

@nick012000

Last time I checked, ~50 million people die a year. Even if AI can only protect against half of those deaths, it's still a net positive utility based on your calculation.
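[Editor's note: the expected-value arithmetic traded in the two comments above can be made explicit. This is a minimal sketch using the commenters' own illustrative figures (0.1% catastrophe risk, 6 billion people, ~50 million deaths/year, half of them preventable), plus an arbitrary 10-year horizon added here; none of these are real estimates.]

```python
# Back-of-envelope expected-death comparison from the thread above.
# All numbers are the commenters' illustrative figures, not real estimates.

p_catastrophe = 0.001       # nick012000's assumed 0.1% chance the AGI kills everyone
world_population = 6e9      # people lost in the catastrophe scenario

deaths_per_year = 50e6      # Joel's figure: ~50 million deaths per year
fraction_prevented = 0.5    # suppose the AGI prevents half of those
years = 10                  # horizon for the comparison (arbitrary assumption)

# nick012000's side: certainty-equivalent of the catastrophe risk
expected_deaths_from_risk = p_catastrophe * world_population

# Joel's side: deaths averted in the (1 - p) worlds where the AGI works
expected_deaths_averted = (1 - p_catastrophe) * deaths_per_year * fraction_prevented * years

print(expected_deaths_from_risk)  # 6 million: the "100% chance of 6 million dying" equivalence
print(expected_deaths_averted)    # ~250 million averted over a decade, on Joel's figures
```

On these assumptions the averted-death term dominates by more than an order of magnitude, which is the gist of Joel's reply; the disagreement is really over whether 0.1% is anywhere near the right probability.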

Ben Goertzel said...

nick012000 said...


Even if you believe that human morals are hardy, you can't be 100% sure of that. What Eliezer is advocating is the worst-case scenario, and it's what you have to account for when you're engineering these sorts of things.


Funny, this is where the argument with SIAI people always ends up!

They start off claiming to have an argument that creating an AGI without some kind of mathematical "Friendliness guarantee" is 99%+ certain to kill everyone.

Then when pressed, they can never supply this promised argument, though they often intimate that Eliezer knows the argument but just hasn't had time to write it down ;p ...

Then, they follow up by saying: "OK, but you're not sure that your AGI *won't* kill everyone."

Right, I'm not SURE of that -- but now the argument has shifted to something totally different.

There are many, many technologies under active development now, for which it is unsure whether they will ultimately kill all humans.

There is also the strong potential that advanced AGI could protect us from the dangers (and existential risks) of these technologies.

The safest thing, in terms of the medium-term survival of the human species, would be to halt all advanced technology development -- but that option isn't really on the table, even if it were deemed desirable.

So now the conversation comes down to weighing the risk of AGI versus the benefit of AGI (in terms of helping people, curing aging, improving life etc.; as well as in terms of avoiding other existential risks).

And then there's a subtler question about when it makes the most sense to try to super-carefully evaluate the risks of proceeding with AGI. Now, before any significant AGIs are built? Or does it make more sense to build, say, toddler-level AGIs and experiment with them, and then assess the risks and benefits of further progress at that point, when we'll understand AGI a lot better?

All these are important issues to discuss, no question. But, they are also old issues to which no one writing in these comments has added any new insights. And they are VERY different from the assertion that started this thread of comments, which was: That there is, somewhere, some solid argument that creating AGI without some "Friendliness proof" is almost sure to end humanity.

nick012000 said...

When you're performing engineering, you work with margins of safety and the most conservative assumptions possible, for safety's sake, because people will die if you don't.

The conservative position for AI programming is that if it's not designed properly, it will destabilize, have a steep intelligence explosion, and kill us all before we can do anything. Therefore, that should be the operating assumption for any AGI project, no matter how primitive it is.

So if you go ahead with an AGI when you're not 100% sure that it's safe, you're committing the Holocaust.

Ben Goertzel said...


When you're performing engineering, you work with margins of safety and the most conservative assumptions possible, for safety's sake, because people will die if you don't.


As Joel pointed out in his comment, people are dying every day.

One point of view is to try to minimize expected human death or suffering.

Another point of view is to try to minimize odds of existential risk, even if this increases expected human death or suffering.

Another point of view is to try to maximize human value, which may not be the same thing as either of the above.

None of these are as simple as "avoid people dying."

Of course one wishes, all else equal, to minimize risk of one's work becoming harmful.


The conservative position for AI programming is that if it's not designed properly, it will destabilize, have a steep intelligence explosion, and kill us all before we can do anything. Therefore, that should be the operating assumption for any AGI project, no matter how primitive it is.


Yes, and the conservative position about genetics research is: It may eventually lead to someone figuring out how to make a terrible bio-weapon that kills everyone ... so don't publish that genetics paper! Etc. etc. etc.

The conservative position, which has been well-articulated by others, is not to develop advanced tech because it's potentially dangerous.

But given that others ARE developing other AGIs, and other potentially dangerous technologies, the scope of choices becomes a bit different. Then it becomes a matter of (e.g.) whether the expected benefit of building an AGI according to an ethics-focused architecture is greater than the expected risk of having someone build a nasty AGI first, or having some other technology cause huge damage (when an ethical AGI could have prevented the damage).


So if you go ahead with an AGI when you're not 100% sure that it's safe, you're committing the Holocaust.


That's a rather irrational conclusion based on the premises you laid out. You're transitioning from a probability (maybe low) to a certainty.

A more sensible statement would be: "So if you go ahead with an AGI when you're not 100% sure that it's safe, you're accepting nonzero odds of committing a 'Holocaust', because you think this terrible risk is counterbalanced by other benefits, such as the potential for AGI to eliminate much suffering and protect against other sources of 'Holocausts'."

nick012000 said...

I think you misunderstood my comment.

An analogous statement would not be talking about genetics research leading to bioweapons. An analogous statement would be that the conservative assumption for designing a bridge is that it will be packed with bumper-to-bumper traffic 24 hours a day, and that it needs to be designed to avoid falling down under those conditions.

When you're designing something where human lives are at stake, you need to determine the worst possible conditions, and then to design it in such a fashion that it won't catastrophically fail during them. In the case of AI, that's Friendly AI. In the case of a bridge, that's giving it enough reinforcement that it won't fall down when packed full of cars, and then some.

Ben Goertzel said...

Nick, you said


When you're designing something where human lives are at stake, you need to determine the worst possible conditions, and then to design it in such a fashion that it won't catastrophically fail during them. In the case of AI, that's Friendly AI. In the case of a bridge, that's giving it enough reinforcement that it won't fall down when packed full of cars, and then some.


We have a nice theory of bridge-building, due to having theories about the strength of materials, Newtonian physics, earth science, etc. etc.

OTOH, there is no theory of "Friendly AI" and no currently promising theoretical path toward finding one. If you believe that SIAI has a top-secret, almost-finished rigorous theory of "Friendly AI" [and note that they are certainly NOT publicly claiming this, even though I have heard some of their stronger enthusiasts claim it], then, well, I have a bridge to sell you in Brooklyn ;-) ... A very well put together bridge!!!!

The theory of how to effectively build structures on the surface of the Earth was not created by a handful of theoretical geniuses sitting at their desks and thinking. It was created by a large and diverse community of people, openly interacting and doing a lot of different practical experiments as well as theoretical work.

I totally agree that we need a solid theory of AGI -- of which AGI designs will give rise to which properties. I have spent a certain percentage of time trying to work toward such a theory, and have published some of my ideas on the matter.

However, I think that to get to such a theory, we will ultimately need to build a bunch of simple AGIs with humanlike general intelligence and experiment with them.

Then, via a combination of experiment, data analysis, creative thinking and mathematical calculation, we will come to a more solid and rigorous theory of intelligence, out of which some sort of theoretical understanding of FAI will emerge.

So, I don't advocate trying to launch a Singularity without a way better theoretical understanding of AGI.

However, I do advocate trying to build toddler-level AGIs and experiment with them, as a means of getting to a better theoretical understanding of AGI.

The standard SIAI counterargument to this is that it's too dangerous to build a toddler-level AGI, because it might undergo an unexpected "hard takeoff" and become a superhuman AGI before you know it. I just think this is fantastically unlikely. It's the sort of thing that seems plausible only to people who aren't working on practical AGI systems.

There is a real risk that, after some of us make a toddler-level AGI, and while we are studying it carefully with a goal of forming a good theory of AGI, somebody else will try to take our toddler-level AGI and accelerate its intelligence more quickly than us, bypassing the need for theoretical understanding.

But then this is "just another risk of humans doing explicitly dangerous things"... quite different than the mythical risk of a toddler becoming a superhuman via an unexpected sudden hard takeoff.

My feeling is that coming up with a theoretical understanding of how to make an "AI Nanny" that will help us watch ourselves (including helping us watch out for people irresponsibly making super-smart AGIs without understanding what they're doing), is going to be easier than coming up with a theoretical understanding of how to make a Singularity-launching AGI. So maybe the AGI Nanny will come first and then help guide us on the way to Singularity.

But anyway, these are all just wild speculations about a future we can't foresee. Real understanding about the risks and rewards of future AGI is only going to come by building AGIs and studying them, not by philosophizing or mathematizing independently of experiment.

Ben Goertzel said...

BTW, as a post-script to the earlier subset of these comments by QuantumG about his frustrations with OpenCog, I will paste here the most recent "OpenCog ReCap" which was just posted on the OpenCog email list.

I'm happy to see that QuantumG, in spite of his prior frustrations, is giving OpenCog another shot. [Item 2 in the recap]

My what a dramatic list of Comments this has turned out to be !!!


Welcome to the 7th OpenCog Recap.

1) Linux Magazine did an article on AGI, mentioning OpenCog.
http://www.linux-mag.com/id/7878

2) Old-time contributor Trent Waddington (QuantumG) is back, and fixed several bugs.

3) Linas has found time to work on OpenCog again, and is planning to
do a long-term research project, using existing algorithms to learn
knowledge, grammar rules and more from text corpora.

4) I (JaredW) made some big improvements to the PLN forward chainer,
and it is now much more efficient.
http://wiki.opencog.org/w/Jared's_work_log#October_2

5) Several new contributors are appearing, including in our Xiamen
robot lab as well as volunteers. We'll have more details when they get
started.

Onwards and upwards!
--
Jared Wigmore jared.wigmore@gmail.com

Anonymous said...

So now maybe DARPA will fund the project, based on the call for 'near human' robots to be developed to replace farmers, doctors, and of course, soldiers...

Anonymous said...

Ben, have you thought about using your existing software tools for trading financial assets? This might be the fastest and least costly way to use your current technology to generate a revenue stream that you can use for full AGI development. You could take two paths to using your software for trading the financial markets. The first would be to do some type of prediction, probably short-term oriented. This approach was successfully applied by the Prediction Company, founded by physicists Packard and Farmer, as told in the book "The Predictors" by Thomas Bass. The other approach would be data mining, thus building on your work with Biomind. I wrote a blog post this summer about hedge funds that are using narrow-AI methods in this manner for trading; see "Hedge funds and narrow artificial intelligence" (http://thecybernetictrader.wordpress.com/2010/07/14/hedge-funds-and-narrow-artificial-intelligence/). You may want to get in touch with some hedge fund people to discover if they can see a way to use your software for trading applications. If you can find a partner to work with, you may be able to obtain a source of revenue fairly quickly.

Ben Goertzel said...

About AGI and financial trading. Yeah, I actually know a lot about that space. I've been involved in a few narrow-AI (and computational linguistics) based stock and futures prediction products.

One major issue with the space is that hedge fund and investment bank people are very, very reluctant to fund any software development or customization. They're interested in investing money in trading systems, but not in funding you even a modest amount to develop a trading system based on your AGI system (or on anything else). The point at which they become interested is, minimally, when you have a trading system with impressive backtesting results.

So, I'd need to either find a visionary investor, or do the initial trading system development on my own dime, and I don't have that many dimes right now.

It's a bit similar to how, in the pharma business, VCs and pharma firms generally become interested at the point when you have a drug that's proven its efficacy in trials on mice. Before that point, promising-looking paths to novel therapeutics are just too risky for them.

It's ironic, because hedge funds and pharma firms are taking huge risks all the time. But they are accustomed to accepting certain sorts of risks, and are loath to accept any other sorts.

Again, these factors may be overcome-able, but it requires a lot of luck to find a truly adventurous mind in the right position to take a chance on a novel approach.

JimmyT said...

AIs should have "provable friendliness" before they can be trusted? Since when did humanity itself reach a level of "provable friendliness"? Quite the opposite, if you look at our history along with our current path!

To think... any one of the nearly 7 billion humans on earth could "hard takeoff" and destroy us all.

Marc_Geddes said...

Hey Ben,

You could try trading on the world's biggest sports prediction market, Betfair; it's far easier than financial trading. In fact I have a bot up and running right now, trading away on my secret VPS :) See my blog entry on trading:

http://zarzuelazen.com/wordpress/?p=52

Anyway Ben, good luck old friend, good luck. Good luck to us all. We need to stop sniping and start cooperating. No need for us to kill each other with arguments; remember, the universe will do that to us all for real soon enough if we don't cooperate. I would say 'God help us', but I realize now that there ain't no help coming.

We are all suffering from a terminal condition (aging) and we're all rapidly gonna find ourselves needing 'end of life care' (i.e., geriatric medicine) rather urgently, mateys. So be kind to each other.

Anonymous said...

Ben, you can't give up on making blog posts like these. I was a long-time SIAI advocate until reading this long and informative post along with the subsequent discussion. I now believe the fruitless philosophical research of SingInst is a threat, given its lack of initiative and its slow pace. As I understand it, they do not even have an attempted programmed architecture. This jeopardizes humanity. If I weren't an atheist I would pray that God reward your boldness with fortune. Hopefully your AI research finds funding soon.

Angelo said...

Hello, Ben, I thought it would be appropriate to put here my comment (below) related to OpenCog and AGI (the entire text: http://goo.gl/YEs5).

---------------------------------

I admire the work of Ben Goertzel, his tenacity, his idealism, his evangelism of AGI, and the initiative of OpenCog, which can result in rich and interesting applications. I think, just for these reasons, Ben has earned a place in the history of AI. But I disagree on two fundamental points. First, no one seems to take seriously the possibility that, with funding, AGI could be only five years ahead of us (the creation of an AI toddler, as he proposed in his blog). Second (and this is the most important), to achieve a human-like level of general intelligence, I think we need to know how the brain works in order to copy its principles, because human intelligence seems to be a peculiar characteristic of the human brain. I think this is the lesson of half a century of AI attempts: the lesson Hawkins has learned.

Ben Goertzel said...

Hi Angelo... thanks for your comment! ...

Regarding the "5 years" figure, actually I'd rather not fixate on the precise timing. I made that estimate quite seriously based on the assumption that the OpenCog approach is correct and doesn't need major changes. But ultimately, whether it takes 2, 5 or 15 years is not the critical point. Now if it's 50 or 1000 years instead, that's a more serious difference. In terms of my own project, the key question is whether, after a few years of intensive well-funded work, we could produce something sufficiently obviously on the path to human-level AGI that the world would wake up and take notice. And *that* I am very confident of much more so than of the precise timeline to achieving "human toddler level" functionality.

About brain emulation: of course that is one path to human-level AGI, but I'm sure it's not the only path, and I actually doubt it's the best path. Note that our current computing infrastructure is very little like the brain, so there's an argument for using an architecture better suited to the hardware available. Also, we currently understand very little about the brain, really, so there's currently no way to do AI work based closely on the brain.

I think we can get to AGI by piecing together knowledge from cog sci, comp sci, philosophy of mind, neuroscience and other sources -- rather than just using neuro as a guide.

You mention Jeff Hawkins; I enjoyed his book and indeed he talks a lot about the brain -- but his actual AI system uses Bayes nets and is not a neuroscience model at all. His actual software is not significantly more brain-like than OpenCog.

Tim Tyler said...

Re: "So if you go ahead with an AGI when you're not 100% sure that it's safe, you're committing the Holocaust."

You realise, I presume, that nobody is ever "100% sure" of anything? If people followed this advice, machine intelligence would never be developed.

Anonymous said...

But what if you are really wrong about the approach? What if there is a need to completely change the thinking paradigm? What if there is a wall within the model of your AI, and the wall can't be breached (or even noticed) unless one starts asking really appropriate and deep questions? I'm sure you feel you are right, and this feels perfectly normal. However, our intuition is a very special "friend" - it always supports an outcome that we like (mostly expected outcomes that preserve the knowledge consistency of our positive hierarchy of goals).
Looking at the papers and architecture, you guys are already in front of the wall. I wish I could tell you more, but I can't and, honestly, it would not change much. The good part here is that the AI crowd has developed an amazing ability to find all kinds of excuses and reasons "not to eat their own food", so ... with the right back-up plan and a set of justifications you might be OK - start working on this, since you don't have much time.
In the name of cherry-picking, feel free to delete this post.
Good luck,
F./

Dirrogate said...

I came in here following a link about the "Scary Idea," and I had to bookmark this post on AGI and OpenCog. It's going to be part of the research for (possibly) the sequel to the book "The Dirrogate - Memories with Maya".

What's a Dirrogate? A digital surrogate. While the science in the book is only plausible, and the AI theories, at times, border on questionable... do take a look at the story if you can, and I'd like to hear your thoughts on the spin I've put on some of Kurzweil's AI theory from How to Create a Mind.

Keep in mind the story is also about introducing the tenets of Transhumanism to the common man in language he/she can understand.

Best regards.
Science behind the story is on Dirrogate com

