Tuesday, March 22, 2011

Transhumanisten Interview

This interview of me was conducted by Mads Mastrup (aka Heimdall) for the Danish website Transhumanisten. It took place via e-mail over the course of two days, March 19-20, 2011. Since Transhumanisten will publish it only in Danish, I figured I’d post it here in English….

Heimdall: First of all Ben, I would like to thank you for taking the time to do this interview.

Goertzel: Sure, I’m always up for answering a few questions!

Heimdall: In case anyone should read this and not know who you are, could you please summarize your background and how you got to become a transhumanist?

Goertzel: I suppose I've been a transhumanist since well before I learned that word -- since 1972 or so when I was 5 or 6 years old and discovered science fiction. All the possibilities currently bandied about as part of transhumanism were well articulated in SF in the middle of the last century.... The difference is, until the advent of the public Net, it was really hard to find other weird people who took these concepts seriously. The Net made it possible for a real transhumanist community to form.... And of course as accelerating change in technology gets more obvious in regular life, it takes less and less imagination to see where the future may be leading, so the transhumanist community is growing fast...

As for my professional background, I got my math PhD when I was 22, and was an academic for 8 years (in math, comp sci and psychology, at various universities in the US, Australia and NZ); then I left academia to join the software industry. I co-founded a dot-com company that crashed and burned after a few years, and then since 2001 I've been running two small AI companies, which do a combination of consulting for companies and gov't agencies, and independent R&D. I do a lot of kinds of research but the main thrusts are: 1) working toward AI software with capability at the human level and beyond, 2) applying AI to analyze bio data and model biological systems, with a view toward abolishing involuntary death. Much of this work now involves open-source software: 1) OpenCog, and 2) OpenBiomind.

Currently I'm based near Washington DC, but this year I'll be spending between 1/4 and 1/3 of my time in China, due to some AI collaborations at Hong Kong Polytechnic University and Xiamen University.

Heimdall: Congratulations on your position at Xiamen University.

Goertzel: Actually I haven't taken on a full time position at Xiamen University, at this point -- though it's a possibility for the future. What I'm doing now is spending part of my time there (including much of April this year, then much of July, for example... then another trip in the fall) and helping supervise the research students in their intelligent robotics lab. I may end up going there full time later this year or next year, but that's still a point of negotiation.

Heimdall: If you do not mind me asking, what exactly does your work at Novamente LLC and Biomind LLC consist of?

Goertzel: It has two sides -- pure R&D, which focuses on two open-source projects...

  • OpenCog, which aims to make a superhuman thinking machine
  • OpenBiomind, which aims to use AI to understand how organisms work, and especially how and why they age and how to cure aging


And then, the other side is practical consulting work, for government agencies and companies, which has spanned a huge number of areas, including data mining, natural language processing, computational finance, bioinformatics, brain simulation, video game AI and virtual worlds, robotics, and more....

None of this has gotten anyone involved rich yet, partly because we've put our profits back into R&D. But it's been a fun and highly educational way to earn a living.

We've done a little product development & sales in the past (e.g. the Biomind ArrayGenius, some years back), but without dramatic success -- though we plan to venture in that direction again in the next couple of years, probably with a game AI middleware product from Novamente, and a genomics data analysis product from Biomind. Both hypothetical products would use a software-as-a-service model with proprietary front ends built on open-source AI back ends.

Heimdall: All that work and all those projects must be keeping you very busy, yet I know that you have also found time to be the chairman of Humanity+. How did you initially become involved with Humanity+?

Goertzel: As for Humanity+, the Board of the organization is elected by the membership, and I ran for the Board a few years ago, with a main motivation of building bridges between the transhumanist community and the AI research community. Then I got more and more deeply involved and began helping out with other aspects of their work, not directly related to AI research, and eventually, at the suggestion of other Board members, I took on the Chair role.

Heimdall: What does your work as chairman of Humanity+ involve?

Goertzel: The Chairman role in itself, formally speaking, just involves coordinating the Board's formal activities -- voting on motions and so forth. But I'm involved with a lot of other Humanity+ stuff, such as co-editing H+ Magazine, helping organize the H+ conferences, helping with fundraising, helping coordinate various small tasks that need doing, and now starting up the Seminar and Salon series.

Heimdall: I have heard about Humanity+ starting up a new project: Seminars & Salons. How will this work and what is the goal of these online seminar and salon sessions?

Goertzel: The idea is simple: every month or so we'll gather together a bunch of transhumanists in one virtual "place" using videoconferencing technology. Sometimes to hear a talk by someone, sometimes just to discuss a chosen transhumanist topic.

About the "goal" ... I remember when my oldest son was in third grade, he went to a sort of progressive school (that I helped found, in fact), and one of his teachers made all the students write down their goals for the day each day, in the morning. My son thought this was pretty stupid, so he liked to write down "My goal is not to meet my goal." Some of the other students copied him. He was also a fan of wearing his pants inside-out.

Anyway, there's not such a crisply-defined goal -- it's more of an open-ended experiment in online interaction. The broad goal is just to gather interesting people together to exchange ideas and information about transhumanist topics. We'll see what it grows into. Email and chat and IRC are great, but there's obviously an added dimension that comes from voice and video, which we'll use for the Seminar and Salon series via the Elluminate platform.

Heimdall: How did this project come about?

Goertzel: Last summer my father (who is a Rutgers professor) ran a 3-credit college class, wholly online, on Singularity Studies. This was good fun, but we found that half our students were not even interested in the college credit; they were just interested people who wanted to participate in online lectures and discussions on Singularity-related topics. So I figured it might be fun to do something similar to that class, but without bothering with the university framework and tuition charges and so forth. I floated the idea past the other Humanity+ board members, and they liked it. And who knows, maybe it could eventually grow into some kind of university course program affiliated with Humanity+ ....

Heimdall: I imagine you will be holding some sessions on AI, since this is your field of expertise, but do you believe that we will eventually be able to create AI which is anywhere similar to that of humans? And if so, when do you see this happening?

Goertzel: It's almost obvious to me that we will eventually be able to create AI that is much more generally intelligent than humans.

On the other hand, creating AI that is genuinely similar to human intelligence might in some ways be harder than creating superhumanly intelligent AI, because it might require creation of a simulated humanlike body as well as a simulated humanlike brain. I think a lot of our personality and intelligence lives in other parts of the body besides the brain. There's probably something to the idiomatic notion of a "gut feel".

As to when human-level or human-like AI will come about, I guess that depends on the amount of funding and attention paid to the problem. I think by now it's basically a matter of some large-scale software engineering plus a dozen or so (closely coordinated) PhD-thesis-level computer science problems. Maybe 50-100 man-years of work -- not a lot by some standards, but there's not much funding or attention going into the field right now.

My hope is to create what I think of as a "Sputnik of AI" -- that is, an impressive enough demonstration of generally intelligent software, that the world gets excited about AGI and more people start to feel like it's possible. Then the money and attention will roll in, and things will really start to accelerate.

So when will we have human-level AI? Could be 2020. Could be 2035. Depending on funding and attention. Probably won't be 2012 or 2060, in my view.

Heimdall: I quite like the idea behind the “Sputnik-AI”. Do you think that is something we will see in the near future?

Goertzel: We're hoping to create something with dramatic Sputnik-like impact within the next 5 years. Maybe sooner if funding cooperates! But it's always easier to predict what's possible than how long it will take....

Heimdall: With regard to more attention being paid to the field of AI, have you noticed an increased interest in AI due to IBM’s Watson appearing on Jeopardy?

Goertzel: The Jeopardy event caused a temporary increase in AI interest by media people. I'm not sure what general impact it will have on general attitudes toward AI in business and government and so forth. I'm sure it won't hurt though ;-) ..... But obviously it's too specialized an achievement to have an "AI Sputnik" effect and make the world feel like human-level AI is near and inevitable...

Heimdall: When you are talking about this Sputnik effect, and you mention Watson being too narrow to really impress the people who decide on the funding, what would a Sputnik-AI have to be like, then? Is it enough to make an AI that can pass the Turing test?

Goertzel: Of course a Turing-test-capable AGI would be good enough -- but I think that's setting the bar too high. It doesn't have to be *that* good to have the "Sputnik effect", I suspect. It just has to give the qualitative feeling of "Wow, there's really an intelligent mind that **understands** in there." Watson doesn't do that because even if it can answer one question, it often can't answer other questions that would seem to be easily answerable (by a human) based on the same knowledge.... Watson can answer questions but doesn't give the appearance of "knowing what it's talking about." If you had a Watson that could give good explanations for all its answers (in terms of why they are true, not just where it looked up the knowledge), I'm sure that would be enough.

But a Watson-type system is not the only kind of demonstration that could be effective. For instance, Apple co-founder Steve Wozniak once said there will never be a robot that can go into a random house in America and figure out how to make coffee. This is a complex task because every house is laid out differently, every coffee-maker works differently, etc. I'm sure an AI robot that could do this would be enough to have a Sputnik-type effect!

One of my own specific aims is an AI robot that can participate in preschool activities -- including learning -- in the manner of a 3-year-old child. I think this could have a Sputnik effect and really excite the public imagination. And it's a warm friendly image for AGI, not like all the scary SF movies about AI.

I'm actually working on a paper together with a dozen other AGI researchers on exactly this topic -- a set of scenarios for AGI development and testing that ultimately lead toward human-level AGI, but are good for demonstrating exciting interim results, and for showcasing the differences between AGI and narrow AI.

Heimdall: Eliezer S. Yudkowsky has written extensively on the topic of Friendly AI (FAI). What is your view on FAI? Is it even doable?

Goertzel: I think that guarantee-ably "Friendly" AI is a chimera. Guaranteeing anything about beings massively smarter than ourselves seems implausible. But, I suspect we can bias the odds, and create AI systems that are more likely than not to be Friendly....

To do this, we need to get a number of things right:

  • build our AI systems with the capability to make ethical judgments both by rationality and by empathy
  • interact with our AI systems in a way that teaches them ethics and builds an emotional bond
  • build our AI systems with rational, stable goal systems (which humans don't particularly have)
  • develop advanced AI according to a relatively "slow takeoff" rather than an extremely fast takeoff to superhuman intelligence, so we can watch and study what happens and adjust accordingly ... and that probably means trying to develop advanced AI soon, since the more advanced other technologies are by the time advanced AI comes about, the more likely a hard takeoff is...
  • integrate our AIs with the "global brain" of humanity so that the human race can democratically impact the AI's goal system
  • create a community of AIs rather than just one, so that various forms of social pressure can militate against any one of the AIs running amok


None of these things gives any guarantees, but combined they would seem to bias the odds in favor of a positive outcome!

Heimdall: I would tend to agree with you when it comes to a creation of FAI, but some people have speculated that even though we “build our AI systems with rational, stable goal systems” they might outsmart us and just reprogram themselves – given that they will be many times faster and more powerful than the humans who have created them. Do you think that coding into them the morals and ethics of humankind will avert this potential peril?

Goertzel: I think that "coding in" morals and ethics is certainly not an adequate approach. Teaching by example and by empathy is at least equally important. And I don't see this approach as a guarantee, but I think it can bias the odds in our favor.

It's very likely that superhuman AIs will reprogram themselves, but, I believe we can bias this process (through a combination of programming and teaching) so that the odds of them reprogramming themselves to adopt malevolent goals are very low.

I think it's fairly likely that once superhuman AIs become smart enough, they will simply find some other part of the multiverse to exist in, and leave us alone. But then we may want to create some AIs that are only mildly superhuman, and want to stay that way -- just to be sure they'll stay around and keep cooperating with us, rather than, say, flying off to somewhere that the laws of physics are more amenable to incredible supergenius.

Heimdall: AGI is a fascinating topic and we could talk about it for hours … but another fascinating field you’re also involved in is life extension. As I see it, there are three approaches to life extension: 1) to create whole brain emulation (like that which Bostrom and Sandberg talk about), a mind-uploading scenario; 2) to become cyborgs and live indefinitely thanks to large-scale mechanical, non-biological optimization of the human body; 3) to reverse the natural aging process within the human body through the use of gene therapy, nanorobotics and medicine. Which of the three scenarios do you find most likely? In addition, should we try to work on a combination of the above or only focus on one of them?

Goertzel: All of the above. It's easy to say what's possible, and hard to say how long each possibility will take to come about. Right now we don't have the basis to predict which of the above will come about faster, so we should pursue them all, at least until we understand more. Maybe in 5 or 10 years we'll know enough to prioritize one of them more firmly.

I'm currently working on the genomics approach (part of your option 3) with Biomind and Genescient, but am also involved in some work on brain simulation that is moving in the direction of option 1.

My main research thrust is about AGI rather than life extension – but of course, if we do achieve an advanced AGI, it may well be able to rapidly solve the tricky science problems involved in your 3 options and make all of them possible sooner.

Heimdall: What do you see as the main pros and cons of indefinite life?

Goertzel: I see no major disadvantages to having the option to live forever. It will obsolete some human thought/emotion complexes, which derive meaning and purpose via the knowledge of impending death -- but it will replace these with better thought/emotion complexes that derive meaning and purpose via ongoing life instead!

Heimdall: You mentioned that there might not be any major drawbacks when it comes to radical life extension. However, many of the choices we make now are based on the fragility of our bodies, and taking the economic model of supply and demand into account, it does somehow look as though human life will change beyond recognition. If we have no upper time limit to our lives, how do you see humanity improving from this?

Goertzel: I see a drastic increase in mental health -- and a drastic increase in happiness -- resulting from the drastic reduction in the fear of death. I think the knowledge of the impending death of ourselves and our loved ones poisons our mentalities far more deeply than we normally realize. Death is just plain a Bad Thing. Yeah, people have gotten used to it -- just like people can get used to being crippled or having cancer or living in a war zone -- but that doesn't make it good.

Heimdall: Just before we conclude this interview, I have two questions on the thing that fascinates transhumanists the most: the future. Which big technological breakthroughs do you think we will see over the course of the next ten years?

Goertzel: That I don't know. I'm good at seeing what's possible, more so than predicting exact timings.

In terms of science, I think we'll see a real understanding of the biological underpinnings of aging emerge, an understanding of how the different parts of the brain interoperate to yield human intelligence, and a reasonably well accepted theoretical model encompassing various AGI architectures. How fast those things are translated into practical products depends on funding as much as anything. Right now the pharmaceutical business is sort of broken, and AGI and brain-computer interfacing are poorly funded, etc. – so whether these scientific breakthroughs lead to practical technological advances within the next decade is going to depend on a lot of nitty-gritty monetary practicalities.

Stem cell therapy will probably become mainstream in the next decade, I guess that's an uncontroversial prediction. And I'm betting on some new breakthroughs in large-scale quantum computing -- though again, when they'll be commercialized is another story.

But these are just some notions based on the particular areas of research I happen to know the most about. For a systematic high level overview of technology progress, you'll have to ask Kurzweil!

Heimdall: Where do you see yourself in 2021?

Goertzel: As the best friend of the Robot Benevolent World Dictator, of course!

(Just kidding...)

Well, according to the OpenCog Roadmap (http://opencog.org/roadmap/) we're aiming to have full human-level AGI by 2023, assuming steady increases in funding but no "AGI Manhattan Project" level funding. So my hope is to be co-leading an OpenCog project with a bunch of brilliant AI guys co-located in one place (preferably with warm weather, and by a nice beach) working on bringing the OpenCog roadmap about.


Heimdall: Thank you so much for taking the time to do this interview.

Goertzel: No problem ;)




Comments:

Unknown said...

Love this: "I think it's fairly likely that once superhuman AIs become smart enough, they will simply find some other part of the multiverse to exist in, and leave us alone."

Regarding life extension. My own prediction is that the longer life span gets, the *more* fear of death figures in our psychology. We can always get hit by a bus. There will always be diseases, poisons, radiation, asteroids, and psychotic breakdowns. Indefinite lifespan means we have more to lose, especially for those in power. I think death is healthy for the culture at large, even if as individuals we have a slightly different outlook. I recognize this is not a very popular point of view, especially in transhumanist circles.

Ben Goertzel said...

Terren -- don't you think the ability to make backup copies will counteract the phenomenon you describe? If I die sky-diving, then my wife back home can just boot up the backup Ben that was saved last night ... and only a few hours of memories (including the memory of the splat ;p) will be lost.....

Unknown said...

Ben, are you talking about mind uploading? Or are you referring to some kind of backup of a physical body? Either way, that does change the equation - it wasn't clear to me that was part of what you were talking about in the interview - but questions would remain about whether it's possible to make perfect backups (either way), and if not, whether it's desirable. Greg Egan's Permutation City explores the mind-uploading scenario very well... if you haven't read it I highly recommend it!

Brian said...

"develop advanced AI according to a relatively "slow takeoff" rather than an extremely fast takeoff to superhuman intelligence"

Regarding GAI's potential contribution to life extension, how helpful would it be to have a being as intelligent as a bright scientist but with certain computer advantages, such as being able to read all medical papers published, in any language?

If GAIs only as smart as humans were created, how well would they be able to cooperate on complex projects? I.e., would cooperation be an advantage of theirs the way perfect memory and the ability to devour libraries would be, or would cooperation be more similar to the ability to write a paper or play, something a GAI as smart as a human would have no advantage at? (Or so I guess; if it would have an advantage, let us stipulate a different ability as the example.) I ask because humans seem to be remarkable failures at cooperating when writing software. Thousands of humans can work together in other contexts; if GAIs can program together as well as we can cooperate in factory and other settings, how can hard takeoff be avoided?

Stryke (the Aerozopher) said...

My problem is that those AIs will fall into the hands of those with the most money, and these people will exploit all of it and us.

You are creating a huge new aspect of the worldwide system and I think it is a huge responsibility. You really should think about the possible use of it and how it could affect all of us.

All the best