
Wednesday, October 13, 2010

Let's Turn Nauru Into Transtopia

Here's an off-the-wall idea that has some appeal to me ... as a long-time Transtopian fantasist and world traveler....

The tiny Pacific island nation of Nauru needs money badly, and has a population of fewer than 15,000.

There are problems with water supply, but they could surely be solved with some technical ingenuity.

The land area is about 8 square miles. But it could be expanded! Surely it's easier to extend an island with concrete platforms or anchored floating platforms of some other kind, than to seastead in the open ocean.

The country is a democracy. Currently it may not be possible to immigrate there except as a temporary tourist or business visitor. But I'd bet this could be made negotiable.

Suppose 15,000 adult transhumanists (along with some kids, one would assume) decided to immigrate to Nauru en masse over a 5-year period, on the condition that they could obtain full citizenship. Perhaps this could be negotiated with the Nauruan government.

Then after 5 years we would have a democracy in which transhumanists were the majority.

Isn't this the easiest way to create a transhumanist nation? With all the amazing future possibilities that that implies?

This would genuinely be of benefit to the residents of Nauru, which now has 90% unemployment. Unemployment would be reduced close to zero, and the economy would be tremendously enlarged. A win-win situation. Transhumanists would get freedom, and Nauruans would get a first-world economy.

Considerable infrastructure would need to be built. A deal would need to be struck with the government, in which, roughly,

  • They agreed to grant a certain number of outsiders citizenship, and to allow certain infrastructure development
  • Over a couple of years, suitable infrastructure was built to supply electrical power, Internet, more frequent flights, etc.
  • Then, over a few years after that, the new population would flow in

This much immigration would make Nauru crowded, but not nearly as crowded as some cities. And with a seasteading mindset, it's easy to see that the island is expandable.

To ensure employment of the relocated transhumanists, we would need to get a number of companies to agree to open Nauru offices. But this would likely be tractable, given firms' preference for locating offices near concentrations of skilled technical talent -- which Nauru, after the influx, would have. And living expenses in Nauru would be much lower than in, say, Silicon Valley, so operating costs would be lower too.

Tourism could become a major income stream: the high density of interesting people would make Nauru into a cultural mecca. Currently there is only one small beach on Nauru (said to be somewhat dirty), but creating a beautiful artificial beach on the real ocean is not a huge technological feat.

It would also be a great place to experiment with aquaculture and vertical farming.

What say you? Let's do it!


P.S.

Other candidates for a tropical-island Transtopia besides Nauru would be Tuvalu and Kiribati; but Kiribati's population is much larger, and Tuvalu is spread among many islands and is also about to be submerged due to rising sea levels. So Nauru would seem the number-one option. Still, Tuvalu could be an interesting possibility, especially if we offered to keep its islands above water by building concrete platforms or some such (a big undertaking, but much easier than seasteading). That would obviously be a major selling point to its government.

Sunday, October 10, 2010

What Would It Take to Move Rapidly Toward Beneficial Human-Level AGI?

On Thursday I finished writing the last chapter of my (co-authored) two-volume book on how to create beneficial human-level AGI, Building Better Minds. I have a bunch of editing still to do, some references to add, etc. -- but the book is now basically done. Woo hoo!

The book should be published by a major scientific publisher sometime in 2011.

The last chapter describes, in moderate detail, how the CogPrime cognitive architecture (implemented in the OpenCog open-source framework) would enable a robotic or virtual embodied system to appropriately respond to the instruction "Build me something surprising out of blocks." This is in the spirit of the overall idea: Build an AGI toddler first, then teach it, study it, and use it as a platform to go further.

From an AGI toddler, I believe, one could go forward in a number of directions: toward fairly human-like AGIs, but also toward different sorts of minds formed by hybridizing the toddler with narrow-AI systems carrying out particular classes of tasks in dramatically transhuman ways.

Reading through the 900-page tome my colleagues and I have put together, I can't help reflecting on how much work is left to bring it all into reality! We have a software framework that is capable of supporting the project (OpenCog), and we have a team of people capable of doing it (people working with me on OpenCog now; people working with me on other projects now; people I used to work with but who moved on to other things, but would enthusiastically come back for a well-funded AGI project). We have a rich ecosystem of others (e.g. academic and industry AI researchers, as well as neuroscientists, philosophers, technologists, etc. etc.) who are enthusiastic to provide detailed, thoughtful advice as we proceed.

What we don't have is proper funding to implement the stuff in the book and create the virtual toddler!

This is of course a bit frustrating: I sincerely believe I have a recipe for creating a human-level thinking machine! In an ethical way, and with computing resources currently at our disposal.

But implementing this recipe would be a lot of work, involving a number of people working together in a concentrated and coordinated way over a significant period of time.

I realize I could be wrong, or I could be deluding myself. But I've become a lot more self-aware and a lot more rational through my years of adult life (I'm 43 now), and I really don't think so. I've certainly introspected and self-analyzed a lot to understand the extent to which I may be engaged in wishful thinking about AGI, and my overall conclusion (in brief) is as follows: Estimating timing is hard, for any software project, let alone one involving difficult research. And there are multiple PhD-thesis-level research problems that need to be solved in the midst of getting the whole CogPrime design to work (but by this point in my career, I believe I have a decent intuition for distinguishing tractable PhD-thesis-level research problems from intractable conundrums). And there's always the possibility of the universe being way, way different than any of us understands, in some way that stops any AGI design based on digital computers (or any current science!) from working. But all in all, evaluated objectively according to my professional knowledge, the whole CogPrime design appears sensible -- if all the parts work vaguely as expected, the whole system should lead to human-level AGI; and according to current computer science and narrow AI theory and practice, all the parts are very likely to work roughly as expected.

So: I have enough humility and breadth to realize I could be wrong, but I have studied pretty much all the relevant knowledge that's available, I've thought about this hard for a very long time and talked to a large percentage of the world's (other) experts; I'm not a fool and I'm not self-deluded in some shallow and obvious way. And I really believe this design can work!

It's the same design I've been refining since about 1996. The prototyping my colleagues and I did at Webmind Inc. (when we had a 45-person AGI research team) in 1998-2001 was valuable, both for what it taught us about what NOT to do and for positive lessons. The implementation work my colleagues at Novamente LLC and the OpenCog project have done since 2001 has been very valuable too; it's led to an implementation of maybe 40% of the CogPrime design (depending on how you measure it). Unfortunately, 40% of a brain doesn't yield 40% of the functionality of a whole brain -- particularly because, beyond the core infrastructure, the 40% implemented was largely chosen by what was useful for Novamente LLC application projects rather than by what would serve best as a platform for AGI. Having so many years to think through the design, without a large implementation team to manage, has been frustrating but also good in a sense: it's given me and my colleagues time and space to repeatedly mull over the design and optimize it in various ways.

Now, the funding situation for the project is not totally dismal, or at least it doesn't seem so right now. For that I am grateful.

The OpenCog project does appear to be funded, at least minimally, for the next couple years. This isn't quite 100% certain, but it's close -- it seems we've lined up funding for a handful of people to work full-time on a fairly AGI-ish OpenCog application for 2 years (I'll post here about this at length once it's definite). And there's also the Xiamen University "Brain-Like Intelligent Systems" lab, in which some grad students are applying OpenCog to enable some intelligent robotic behaviors. And Novamente LLC is still able to fund a small amount of OpenCog work, via application projects that entail making some improvements to the OpenCog infrastructure along the way. So all in all, it seems, we'll probably continue making progress, which is great.

But I'm often asked, by various AGI enthusiasts, what it would take to make really fast progress toward my AGI research goals. What kind of set-up, what kind of money? Would it take a full-on "AGI Manhattan Project" -- or something smaller?

In the rest of this blog post I'm going to spell it out. The answer hasn't changed much for the last 5 years, and most likely won't change a lot during the next 5 (though I can't guarantee that).

What I'm going to describe is the minimal team required to make reasonably fast progress. Probably we could progress even faster if we had massively more funding, but I'm trying to be realistic here.

We could use a team of around 10 of the right people (mostly, great AI programmers, with a combination of theory understanding and implementation chops), working full-time on AI development.

We could use around 5 great programmers working on the infrastructure -- to get OpenCog working really efficiently on a network of distributed multi-processor machines.

If we're going to do robotics, we could use a dedicated robotics team of perhaps 5 people.

If we're going to do virtual agents, we could use 5 people working on building out the virtual world appropriately for AGI.

Add a system administrator, 2 software testers, a project manager to help us keep track of everything, and a Minister of Information to help us keep all the documentation in order.

That's 30 people. Then add me and my long-time partner Cassio Pennachin to coordinate the whole thing (and contribute to the technical work as needed), and a business manager to help with money and deal with the outside world. 33 people.

Now let's assume this is done in the US (not the only possibility, but the simplest one to consider), and let's assume we pay people close to market salaries and benefits, so that their spouses don't get mad at them and decrease their productivity (yes, it's really not optimal to do a project like this with programmers fresh out of college -- this isn't a Web 2.0 startup, it's a massively complex distributed software system based on integration of multiple research disciplines. Many of the people with the needed expertise have spouses, families, homes, etc. that are important to them). Let's assume it's not done in Silicon Valley or somewhere else where salaries are inflated, but in some other city with a reasonable tech infrastructure and lower housing costs. Then maybe, including all overheads, we're talking about $130K/year per employee (recall that we're trying to hire the best people here; some are very experienced and some just a few years out of college, but this is an average).

Salary cost comes out to $4.3M/year, at this rate.

Adding in a powerful arsenal of hardware and a nice office, we can round up to $5M/year.

Let's assume the project runs for 5 years. My bet is we can get an AGI toddler by that time. But even if that's wrong, I'm damn sure we could make amazing progress by that time, suitable to convince a large number of possible funding sources to continue funding the project at the same or a greater level.

Maybe we can do it in 3 years, maybe it would take 7-8 years to get to the AGI toddler goal -- but even if it's the latter, we'd have amazing, clearly observable dramatic progress in 3-5 years.

So, $25M total.

There you go. That's what it would cost to progress toward human-level AGI, using the CogPrime design, in a no-BS straightforward way -- without any fat in the project, but also without cutting corners in ways that reduce efficiency.

If we relax the assumption that the work is done in the US and move to a less expensive place (say, Brazil or China, where OpenCog already has some people working) we can probably cut the cost by half without a big problem. We would lose some staff who wouldn't leave the US, so there would be a modest decrease in productivity, but it wouldn't kill the project. (Why does it only cut the cost by half? Because if we're importing first-worlders to the Third World to save money, we still need to pay them enough to cover expenses they may have back in the US, to fly home to see their families, etc.)

So, outside the US, $13M total over 5 years.

Or if we want to rely more on non-US people for some of the roles (e.g. systems programming, virtual worlds,...), it can probably be reduced to $10M total over 5 years, $2M/year.
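For concreteness, the budget arithmetic above can be checked with a quick back-of-the-envelope script. The headcounts and the $130K fully-loaded per-person cost are the figures from this post; the breakdown labels are just bookkeeping:

```python
# Back-of-the-envelope budget for the proposed team (US scenario),
# using the figures from this post.

headcount = {
    "AI developers": 10,
    "infrastructure programmers": 5,
    "robotics team": 5,
    "virtual-world team": 5,
    "system administrator": 1,
    "software testers": 2,
    "project manager": 1,
    "minister of information": 1,
    "coordinators": 2,        # me and Cassio
    "business manager": 1,
}

total_staff = sum(headcount.values())
print(total_staff)                    # 33 people

cost_per_person = 130_000             # salary + benefits + overhead, per year
salaries_per_year = total_staff * cost_per_person
print(salaries_per_year)              # 4290000 -- "about $4.3M/year"

annual_budget = 5_000_000             # rounded up for hardware and office
years = 5
print(annual_budget * years)          # 25000000 -- the $25M US total
```

The offshore figures ($13M, or $10M with more non-US hires) follow the same arithmetic with the per-person costs roughly halved.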

If some wealthy individual or institution were willing to put in $10M -- or $25M if they're fixated on a US location (or, say, $35M if they're fixated on Silicon Valley) -- then we could progress basically full-speed-ahead toward creating beneficial human-level AGI.

Instead, we're progressing toward the same goal seriously and persistently, but much more slowly and erratically.

I have spoken personally to a decent number of individuals with this kind of money at their disposal, and many of them are respectful of and interested in the OpenCog project -- and would be willing to put in this kind of money if they had sufficient confidence the project would succeed.

But how to give potential funders this sort of confidence?

After all, when they go to the AI expert at their local university, the guy is more likely than not to tell them that human-level AI is centuries off. Or if they open up The Singularity Is Near, by Ray Kurzweil -- who is often considered a radical techno-optimist -- they see a date of 2029 for human-level AGI, which means that as investors they would probably start worrying about it around 2025.

A 900-page book is too much to expect a potential donor or investor to read; and even if they read it (once it's published), it doesn't give an iron-clad, irrefutable argument that the project will succeed, "just" a careful overall qualitative argument together with detailed formal treatments of various components of the design.

The various brief conference papers I've published on the CogPrime design and OpenCog project give a sense of the overall spirit, but don't tell you enough to let you make a serious evaluation. Maybe this is a deficiency in the writing, but I suspect it's mainly a consequence of the nature of the subject matter.

The tentative conclusion that I've come to is that, barring some happy luck, we will need to come up with some amazing demo of AGI functionality -- something that will serve as an "AGI Sputnik" moment.

Sputnik, of course, caused the world to take space flight seriously. The right AGI demo could do the same. It could get OpenCog funded as described above, plus a lot of other AGI projects in parallel.

But the question is, how to get to the AGI Sputnik moment without the serious funding. A familiar, obvious chicken-and-egg problem.

One possibility is to push far enough toward a virtual toddler in a virtual world, using our current combination of very-much-valued but clearly suboptimal funding sources, that our animated AGI baby has AGI Sputnik power!

Maybe this will happen. I'm certainly willing to put my heart into it, and so are a number of my colleagues.

But it sure is frustrating to know that, for an amount of money that's essentially "pocket change" to a significant number of individuals and institutions on the planet, we could be progressing a lot faster toward some goals that are really important to all of us.

To quote Kurt Vonnegut: "So it goes."

Tuesday, September 28, 2010

Mind Uploading via Gmail

Cut and pasted from Giulio Prisco's blog here

(with one small change)

...

Mind Uploading via Gmail

To whom it may concern:

I am writing this in 2010. My Gmail account has more than 20GB of data, which contain some information about me and also some information about the persons I have exchanged email with, including some personal and private information.

I am assuming that in 2060 (50 years from now), my Gmail account will have hundreds or thousands of TB of data, which will contain a lot of information about me and the persons I exchanged email with, including a lot of personal and private information. I am also assuming that, in 2060:

  1. The data in the accounts of all Gmail users since 2004 is available.
  2. AI-based mindware technology able to reconstruct individual mindfiles by analyzing the information in their aggregate Gmail accounts and other available information, with sufficient accuracy for mind uploading via detailed personality reconstruction, is available.
  3. The technology to crack Gmail passwords is available, but illegal without the consent of the account owners (or their heirs).
  4. Many of today's Gmail users, including myself, are already dead and cannot give permission to use the data in their accounts.

If all assumptions above are correct, I hereby give permission to Google and/or other parties to read all data in my Gmail account and use them together with other available information to reconstruct my mindfile with sufficient accuracy for mind uploading via detailed personality reconstruction, and express my wish that they do so.

Signed by Ben Goertzel on September 28, 2010, and witnessed by readers.

NOTE: The accuracy of the process outlined above increases with the number of persons who give their permission to do the same. You can give your permission in comments, Twitter or other public spaces.
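(As an aside: the storage assumption in the letter corresponds to a fairly modest growth rate. A quick sketch, taking 1000 TB as one round reading of "hundreds or thousands of TB" -- that particular figure is my choice, not part of the letter:)

```python
# Implied annual growth rate if a 20 GB account in 2010 becomes
# 1000 TB (1,000,000 GB) by 2060.
start_gb = 20
end_gb = 1_000_000
years = 50

growth_factor = end_gb / start_gb               # 50,000x over 50 years
annual_rate = growth_factor ** (1 / years) - 1
print(f"{annual_rate:.1%}")                     # roughly 24% per year
```

For comparison, storage capacity per dollar has historically grown considerably faster than that, so the assumption is, if anything, conservative.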