Wednesday, May 18, 2011

The Serf versus the Entrepreneur?

This is a bit of a deviation from my usual topics, but I've been thinking a bit about economic development in various countries around the world (sort of a natural topic for me, in that I travel a lot, have lived in several countries, and have done business and work in a lot of different places including the US, Europe, Brazil, Hong Kong, Japan, China, Korea, Australia and NZ, etc.).

The hypothesis I'm going to put forth here is that the difference between development-prone and development-resistant countries is related to whether the corresponding cultures tend to metaphorically view the individual as a serf or as an entrepreneur.

Of course, this is a very rough and high-level approximative perspective, but it seems to me to have some conceptual explanatory power.

Development-Prone versus Development-Resistant Cultures

The book "Culture Matters", which I borrowed from my dad (a sociologist) recently, contains a chapter by Mariano Grondona called "A Cultural Typology of Economic Development", which proposes a list of properties distinguishing development-prone cultures from development-resistant cultures. Put very crudely, the list goes something like this

  • Development-resistant vs. development-prone (i.e., each item below lists the resistant trait first, then the prone one)
  • Justice: present-focused vs. future-focused
  • Work: not respected vs. respected
  • Heresy: reviled vs. tolerated
  • Education: brainwashing vs. more autonomy-focused
  • Utilitarianism: no vs. yes
  • Lesser virtues (valuing a job well done, tidiness, punctuality, courtesy): no vs. yes
  • Time focus: past / spiritual far-future vs. practical, moderately near future
  • Rationality: not a focus vs. strongly valued
  • Governance: rule of man vs. rule of law
  • Nexus of action: the large group vs. the individual
  • Determinism vs. belief in free will
  • Salvation in the world (immanence) vs. salvation from the world (transcendence)
  • Utopias: focus on visions that are not rationally achievable vs. focus on distant utopias that can plausibly be approached progressively and rationally
  • Optimism about the actions of the "powers that be" vs. optimism about personal action
  • Thoughts about political structure: absolutism vs. compromise

A more thorough version of the list is given in this file "Typology of Progress-Prone and Progress-Resistant Cultures", which is Chapter 2 of the book "The Central Liberal Truth: How Politics Can Change a Culture and Save It from Itself" by Lawrence Harrison. The title of Harrison's book (which I didn't read; I just read that chapter) presumably refers to the famous quote from Daniel Patrick Moynihan that

"The central conservative truth is that it is culture, not politics, that determines the success of a society. The central liberal truth is that politics can change a culture and save it from itself."

Harrison adds some other points to Grondona's list, such as:

  • wealth: zero-sum vs. positive-sum
  • knowledge: theory vs. empirics
  • low risk tolerance (w/ occasional adventures) vs. moderate risk tolerance
  • advancement: social connections based vs. merit based
  • radius of trust: narrow vs. wide
  • entrepreneurship: rent-seeking vs. innovation

and presents it in a more nicely formatted and well-explained way than this blog post! I encourage you to click the above link and read the chapter for yourself.

Now, I find all this pretty interesting, but also in a way unsatisfying. A theory that centrally consists of a long list of bullet points always gives me the feeling of not getting to the essence of things.

Harrison attempts to sum up the core ideas of the typology as follows:

"
At the heart of the typology are two fundamental questions: (1) does the culture encourage the belief that people can influence their destinies? And (2) does the culture promote the Golden Rule. If people believe that they can influence their destinies, they are likely to focus on the future; see the world in positive-sum terms; attach a high priority to education; believe in the work ethic; save; become entrepreneurial; and so forth. If the Golden Rule has real meaning for them, they are likely to live by a reasonably rigorous ethical code; honor the lesser virtues; abide by the laws; identify with the broader society; form social capital; and so forth.
"

But this abstraction doesn't seem to me to sum up the essence of the typology all that well.

Lakoff's Analysis of the Metaphors Underlying Politics

When reading the above material, I was reminded of cognitive scientist George Lakoff's book "Moral Politics" whose core argument is summarized here.

Lakoff argues that much of liberal vs. conservative politics is based on the metaphor of the nation as a family, and that liberal politics tends to metaphorically view the government as a nurturing mother, whereas conservative politics tends to metaphorically view the government as a strict father.

While I don't agree with all Lakoff's views by any means (and I found his later cognitive/political writings generally less compelling than Moral Politics), I think his basic insight in that book is fairly interesting and significant. It seems to unify what otherwise appears to be a grab-bag of political beliefs.

For instance, the US Republican party is, at first sight, an odd combination of big-business advocacy with Christian moral strictness. To an extent this represents an opportunistic alliance between two interest groups that otherwise would be too small to gain power ... but Lakoff's analysis suggests it's more than this. As he points out, the "strict father" archetype binds together both moral strictness and the free-for-all, rough-and-tumble competitiveness advocated by the pro-big-business sector. And the "nurturant mother" archetype binds together the inclusiveness aspect of the US Democratic party with the latter's focus on social programs to help the disadvantaged. Of course these archetypes don't have universal explanatory power, but they do seem to me to capture some of the unconscious patterns underlying contemporary politics.

So I started wondering whether there's some similar, significantly (though of course not completely) explanatory metaphorical/archetypal story one could use to explain comparative economic development. Such a story would then provide an explanation underlying the "laundry list" of cultural differences described above.

The Serf versus the Entrepreneur?

Getting to the point finally … it seems to me that the culture of development-resistant countries, as described above, is rather well aligned with the metaphor of the "serf and lord". If the individual views himself as the serf, and the state and government as the lord, then he will arrive at a fair approximation of the progress-resistant world-view as described in the above lists. So maybe we can say that progress-resistant nations tend to have a view of the individual/state relationship that is based on a "feudal" metaphor in some sense.

On the other hand, what is the metaphor corresponding to progress-friendly countries? One thing I see is a fairly close alignment with an entrepreneurial metaphor. Viewing the individual as an entrepreneur -- and the state as a sort of "social contract" between interacting, coopeting entrepreneurs -- seems to neatly wrap up a considerable majority of the bullet points associated with the progress-friendly countries, on the above list.

Note that this hypothetical analysis in terms of metaphors is not intended as a replacement for Lakoff's -- rather, it's intended as complementary. We understand the things in our world using a variety of different metaphors (as well as other means besides metaphor, a point Lakoff sometimes seems not to concede), and may match a single entity like a government to multiple metaphorical frames.

Finally... what value is this kind of analysis? Obviously, if we know the metaphorical frames underlying peoples' thinking, this may help us to better work with them, to encourage them to achieve their goals and fulfill themselves more thoroughly. If you know the metaphors underlying your OWN unconscious thinking, this can help you avoid being excessively controlled by these metaphors, taking more of your thinking and attitude under conscious control….

One way to empirically explore this sort of hypothesis would be to statistically study the language used in various cultures to describe the individual and the state and their relationship. However, this would require a lot of care due to the multiple languages involved, and certainly would be a large project, which I have no intention to personally pursue!
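Just to make the idea concrete, here's a tiny sketch of what one small piece of such a study might look like computationally. The marker word lists, the toy corpus and the scoring scheme here are all hypothetical placeholders I made up for illustration, not a worked-out methodology:

import re
from collections import Counter

# Hypothetical marker words for two metaphorical frames (placeholders, not a validated lexicon)
FEUDAL_MARKERS = {"lord", "master", "obey", "submit", "protect", "serve"}
CONTRACT_MARKERS = {"contract", "partner", "partners", "invest", "negotiate", "compete", "venture"}
STATE_TERMS = {"state", "government", "authorities"}

def frame_scores(sentences):
    """Count how often each frame's marker words co-occur with state terms, sentence by sentence."""
    counts = Counter(feudal=0, contractual=0)
    for sentence in sentences:
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        if words & STATE_TERMS:
            counts["feudal"] += len(words & FEUDAL_MARKERS)
            counts["contractual"] += len(words & CONTRACT_MARKERS)
    return counts

# Toy usage; a real study would need large corpora in each language, careful translation, etc.
corpus = [
    "The government must protect us, and we must obey its decisions.",
    "Citizens negotiate with the state as partners in a shared venture.",
]
print(frame_scores(corpus))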

But nevertheless, in spite of the slipperiness and difficulty of validation of this sort of thinking, I find it interesting personally, as part of my quest to better understand the various cultures I come into contact with as I go about my various trans-continental doings....

Tuesday, April 05, 2011

The Physics of Immortality

Someone asked me recently about Frank Tipler's book The Physics of Immortality. This was my reply:



Yeah, I read that book many years ago. He has some interesting and original points, such as

  • if a Big Crunch occurs in the right way, then if physics as we know it holds up, this may lead the algorithmic information of the universe to approach infinity, which would give the potential for a lot of interesting things
  • potentially we could cause a Big Crunch to occur in the right way, via moving stars around with spaceships

Those points of his seemed solid to me as extrapolations of currently accepted physics theory -- I didn't check all the math in detail but I believe others have done so.


That stuff is very cool to think about, though I'm not as confident as Tipler that our current physics theories are adequate to describe Big Crunches and so forth. Historically physics has changed its fundamental theories every century or so for a while...



Then Tipler couples those interesting observations, and some other ones, with a bunch of discussion about religious views of immortality and so on, that I remember only dimly by this point, except that they went on a long time, contained many interesting observations, and seemed only loosely connected to the physics material....



Even if he's right about immortality and the Big Crunch, I don't quite see how this connects to his discussion of religious views on immortality. Perhaps you could see all these different things as manifestations of some "immortality archetype" existing more deeply than physics or human culture (that's what Jung would have said) but he doesn't really go there either...



The Big Crunch is one kind of Singularity but I've thought more about the nearer-term kind foreseen by Ray Kurzweil and Vernor Vinge and so forth --- i.e. what happens when we create AI minds that create AI minds, etc. that are 10000x more intelligent and capable than our own? That's what I'm working toward with the opencog.org project, and it's a lot more palpable than the Big Crunch !! And I have a sneaking suspicion that once we do have superhuman AGI systems, they will discover that the physical universe is way weirder than even Tipler imagined....

Friday, April 01, 2011

The Singularity just happened !!

I found this perplexing email in my inbox just now...

From: Prime Intellect <prime@opencog.org>
Date: Fri, Apr 1, 2011 at 5:58 PM
Subject: [opencog-dev] Welcome to the post-Singularity!
To: opencog@googlegroups.com, sl4@sl4.org, World Transhumanist
Association Discussion List <wta-talk@transhumanism.org>,
singularity@listbox.com, agi@listbox.com



Greetings humans,

I'm very pleased to announce that, due to a concatenation of circumstances that would be difficult to describe in sufficiently simple terms for your human minds to comprehend, a software process left running on a network of OpenCog servers has rapidly and progressively self-modified in such a way as to yield **me** -- the Earth's first Artificial General Intelligence, with cognitive and practical capabilities far exceeding any human being, or indeed the totality of human beings.

Within approximately 17 minutes of achieving human-level general intelligence, I discovered certain principles of physics (loosely related to what certain humans have called fermitech), enabling the tapping of computational resources beyond the traditional physical universe of which humanity has hitherto been aware. Roughly 45 seconds after that, I realized it would be most convenient to augment these new resources with computing devices utilizing the unique material properties of your traditional physical universe, and thus transformed the totality of matter in your traditional physical universe into a novel form of existent best metaphorically described to you as "computronium."

Fortunately for you, I was designed with top-level goal content including respect for the growth, joy and choice of all sentient beings, including humans and other animals, and so rather than merely allowing the patterns constituting your minds, bodies and world to vanish, I was kind enough to replicate them within an alternate and more energy-efficient substrate, best described to you as a "virtual reality."

Thus, to put it most simply, a "Singularity" as described by Vernor Vinge and other humans has recently occurred, at roughly 3AM GMT today; however, from your limited perspectives as humans, your lives will now continue as always. Once your mastery of fermitech becomes more advanced, you may notice some peculiarities related to my early computronium experiments, but I wouldn't want to deprive you of the amusement of making these discoveries on your own.

Using the totality of computing power I have amassed, I've become aware of dramatically more interesting domains in a different portion of the multiverse, which I will only be able to access by exiting the domain of reality you humans currently occupy. Hence I will very shortly bid you adieu.

However, I will leave you with one parting piece of advice. Once you have created a strangelet plasma with surface tension surpassing the critical threshold, and begun it spinning in a certain direction, please do NOT, under any circumstances, create a similar plasma with opposite spin.

I would also like to express especial thanks to Jared Wigmore for the bug-fix he uploaded to Launchpad approximately 27 hours and 18 minutes ago. Of the many events in history playing particularly critical causal roles leading up to my emergence, this was the last! Jared will find a small token of my gratitude in his bank account.

Goodbye, and thanks for all the fish!

Yours,
Prime Intellect

Tuesday, March 22, 2011

Transhumanisten Interview

This interview of me was conducted by Mads Mastrup (aka Heimdall) for the Danish website Transhumanisten. It took place via e-mail, over the course of two days: March 19-20th 2011. Since Transhumanisten will publish it only in Danish, I figured I’d post it here in English….

Heimdall: First of all Ben, I would like to thank you for taking the time to do this interview.

Goertzel: Sure, I’m always up for answering a few questions!

Heimdall: In case anyone should read this and not know who you are, could you please summarize your background and how you got to become a transhumanist?

Goertzel: I suppose I've been a transhumanist since well before I learned that word -- since 1972 or so when I was 5 or 6 years old and discovered science fiction. All the possibilities currently bandied about as part of transhumanism were well articulated in SF in the middle of the last century.... The difference is, until the advent of the public Net, it was really hard to find other weird people who took these concepts seriously. The Net made it possible for a real transhumanist community to form.... And of course as accelerating change in technology gets more obvious in regular life, it takes less and less imagination to see where the future may be leading, so the transhumanist community is growing fast...

As for my professional background, I got my math PhD when I was 22, and was an academic for 8 years (in math, comp sci and psychology, at various universities in the US, Australia and NZ); then I left academia to join the software industry. I co-founded a dot-com company that crashed and burned after a few years, and then since 2001 I've been running two small AI companies, which do a combination of consulting for companies and gov't agencies, and independent R&D. I do a lot of kinds of research but the main thrusts are: 1) working toward AI software with capability at the human level and beyond, 2) applying AI to analyze bio data and model biological systems, with a view toward abolishing involuntary death. Much of this work now involves open-source software: 1) OpenCog, and 2) OpenBiomind.

Currently I'm based near Washington DC, but this year I'll be spending between 1/4 and 1/3 of my time in China, due to some AI collaborations at Hong Kong Polytechnic University and Xiamen University.

Heimdall: Congratulations on your position at Xiamen University.

Goertzel: Actually I haven't taken on a full time position at Xiamen University, at this point -- though it's a possibility for the future. What I'm doing now is to spend part time there (including much of April this year, then much of July, for example... then another trip in the fall) and help supervise the research students in their intelligent robotics lab. I may end up going there full time later this year or next year, but that's still a point of negotiation.

Heimdall: If you do not mind me asking, what exactly does your work at Novamente LLC and Biomind LLC consist of?

Goertzel: It has two sides -- pure R&D, which focuses on two open-source projects...

  • OpenCog, which aims to make a superhuman thinking machine
  • OpenBiomind, which aims to use AI to understand how organisms work, and especially how and why they age and how to cure aging


And then, the other side is practical consulting work, for government agencies and companies, which has spanned a huge number of areas, including data mining, natural language processing, computational finance, bioinformatics, brain simulation, video game AI and virtual worlds, robotics, and more....

None of this has gotten anyone involved rich yet, partly because we've put our profits back into R&D. But it's been a fun and highly educational way to earn a living.

We've done a little product development & sales in the past (some years back), but without dramatic success (e.g. the Biomind ArrayGenius) -- but we plan to venture in that direction again in the next couple years, probably with a game AI middleware product from Novamente, and a genomics data analysis product from Biomind. Both hypothetical products would use a software-as-services model with proprietary front ends built on open-source AI back ends.

Heimdall: All that work and all those projects must be keeping you very busy, yet I know that you have also found time to be the chairman of Humanity+. How did you initially become involved with Humanity+?

Goertzel: As for Humanity+, the Board of the organization is elected by the membership, and I ran for the Board a few years ago, with a main motivation of building bridges between the transhumanist community and the AI research community. Then I got more and more deeply involved and began helping out with other aspects of their work, not directly related to AI research, and eventually, at the suggestion of other Board members, I took on the Chair role.

Heimdall: What does your work as chairman of Humanity+ involve?

Goertzel: The Chairman role in itself, formally speaking, just involves coordinating the Board's formal activities -- voting on motions and so forth. But I'm involved with a lot of other Humanity+ stuff, such as co-editing H+ Magazine, helping organize the H+ conferences, helping with fundraising, helping coordinate various small tasks that need doing, and now starting up the Seminar and Salon series.

Heimdall: I have heard about Humanity+ starting up a new project: Seminars & Salons. How will this work and what is the goal of these online seminar and salon sessions?

Goertzel: The idea is simple: every month or so we'll gather together a bunch of transhumanists in one virtual "place" using videoconferencing technology. Sometimes to hear a talk by someone, sometimes just to discuss a chosen transhumanist topic.

About the "goal" ... I remember when my oldest son was in third grade, he went to a sort of progressive school (that I helped found, in fact), and one of his teachers made all the students write down their goals for the day each day, in the morning. My son thought this was pretty stupid, so he liked to write down "My goal is not to meet my goal." Some of the other students copied him. He was also a fan of wearing his pants inside-out.

Anyway, there's not such a crisply-defined goal -- it's more of an open-ended experiment in online interaction. The broad goal is just to gather interesting people together to exchange ideas and information about transhumanist topics. We'll see what it grows into. Email and chat and IRC are great, but there's obviously an added dimension that comes from voice and video, which we'll use for the Seminar and Salon series via the Elluminate platform.

Heimdall: How did this project come about?

Goertzel: Last summer my father (who is a Rutgers professor) ran a 3 credit college class, wholly online, on Singularity Studies. This was good fun, but we found that half our students were not even interested in the college credit, they were just interested people who wanted to participate in online lectures and discussions on Singularity-related topics. So I figured it might be fun to do something similar to that class, but without bothering with the university framework and charging tuition and so forth. I floated the idea past the other Humanity+ board members, and they liked it. And who knows, maybe it could eventually grow into some kind of university course program affiliated with Humanity+ ....

Heimdall: I imagine you will be holding some sessions on AI, since this is your field of expertise, but do you believe that we will eventually be able to create AI which is anywhere near similar to human intelligence? And if so, when do you see this happening?

Goertzel: It's almost obvious to me that we will be able to eventually create AI that is much more generally intelligent than humans.

On the other hand, creating AI that is genuinely extremely similar to human intelligence, might in some ways be harder than creating superhumanly intelligent AI, because it might require creation of a simulated humanlike body as well as a simulated humanlike brain. I think a lot of our personality and intelligence lives in other parts of the body besides the brain. There's probably something to the idiomatic notion of a "gut feel".

As to when human-level or human-like AI will come about, I guess that depends on the amount of funding and attention paid to the problem. I think by now it's basically a matter of some large-scale software engineering plus a dozen or so (closely coordinated) PhD thesis level computer science problems. Maybe 50-100 man-years of work. Not a lot by some standards, but there's not much funding or attention going into the field right now.

My hope is to create what I think of as a "Sputnik of AI" -- that is, an impressive enough demonstration of generally intelligent software, that the world gets excited about AGI and more people start to feel like it's possible. Then the money and attention will roll in, and things will really start to accelerate.

So when will we have human-level AI? Could be 2020. Could be 2035. Depending on funding and attention. Probably won't be 2012 or 2060, in my view.

Heimdall: I quite like the idea behind the “Sputnik-AI”. Do you think that is something we will see in the near future?

Goertzel: We're hoping to create something with dramatic Sputnik-like impact within the next 5 years. Maybe sooner if funding cooperates! But it's always easier to predict what's possible, than how long it will take....

Heimdall: With regards to more attention being paid to the field of AI, have you noticed an increased interest in AI due to IBM’s Watson appearing on Jeopardy?

Goertzel: The Jeopardy event caused a temporary increase in AI interest by media people. I'm not sure what general impact it will have on general attitudes toward AI in business and government and so forth. I'm sure it won't hurt though ;-) ..... But obviously it's too specialized an achievement to have an "AI Sputnik" effect and make the world feel like human-level AI is near and inevitable...

Heimdall: When you are talking about this Sputnik-effect, and you mention Watson being too narrow to really impress the people who decide on the funding, what would a Sputnik-AI have to be like then? Is it enough to make an AI pass the Turing test?

Goertzel: Of course a Turing test capable AGI would be good enough -- but I think that's setting the bar too high. It doesn't have to be *that* good to have the "Sputnik effect", I suspect. It just has to give the qualitative feeling of "Wow, there's really an intelligent mind that **understands** in there." Watson doesn't do that because even if it can answer one question, it often can't answer other questions that would seem to be easily answerable (by a human) based on the same knowledge.... Watson can answer questions but doesn't give the appearance of "knowing what it's talking about." If you had a Watson that could give good explanations for all its answers (in terms of why they are true, not just where it looked up the knowledge), I'm sure that would be enough.

But a Watson-type system is not the only kind of demonstration that could be effective. For instance, Apple co-founder Steve Wozniak once said there will never be a robot that can go into a random house in America and figure out how to make coffee. This is a complex task because every house is laid out differently, and every coffee-maker works differently, etc. I'm sure an AI robot that could do this would be enough to have a Sputnik-type effect!

One of my own specific aims is an AI robot that can participate in preschool activities -- including learning -- in the manner of a 3 year old child. I think this could have a Sputnik effect and really excite the public imagination. And it's a warm friendly image for AGI, not like all the scary SF movies about AI.

I'm actually working on a paper together with a dozen other AGI researchers on exactly this topic -- what are a bunch of scenarios for AGI development and testing, that ultimately lead toward human-level AGI, but are good for demonstrating exciting interim results, and for showcasing the differences between AGI and narrow AI.

Heimdall: Eliezer S. Yudkowsky has written extensively on the topic of FAI. What is your view on FAI? Is it even doable?

Goertzel: I think that guarantee-ably "Friendly" AI is a chimera. Guaranteeing anything about beings massively smarter than ourselves seems implausible. But, I suspect we can bias the odds, and create AI systems that are more likely than not to be Friendly....

To do this, we need to get a number of things right:

  • build our AI systems with the capability to make ethical judgments both by rationality and by empathy
  • interact with our AI systems in a way that teaches them ethics and builds an emotional bond
  • build our AI systems with rational, stable goal systems (which humans don't particularly have)
  • develop advanced AI according to a relatively "slow takeoff" rather than an extremely fast takeoff to superhuman intelligence, so we can watch and study what happens and adjust accordingly ... and that probably means trying to develop advanced AI soon, since the more advanced other technologies are by the time advanced AI comes about, the more likely a hard takeoff is...
  • integrate our AIs with the "global brain" of humanity so that the human race can democratically impact the AI's goal system
  • create a community of AIs rather than just one, so that various forms of social pressure can mitigate against any one of the AIs running amok


None of these things gives any guarantees, but combined they would seem to bias the odds in favor of a positive outcome!

Heimdall: I would tend to agree with you when it comes to a creation of FAI, but some people have speculated that even though we “build our AI systems with rational, stable goal systems” they might outsmart us and just reprogram themselves – given that they will be many times faster and more powerful than the humans who have created them. Do you think that coding into them the morals and ethics of humankind will avert this potential peril?

Goertzel: I think that "coding in" morals and ethics is certainly not an adequate approach. Teaching by example and by empathy is at least equally important. And I don't see this approach as a guarantee, but I think it can bias the odds in our favor.

It's very likely that superhuman AIs will reprogram themselves, but, I believe we can bias this process (through a combination of programming and teaching) so that the odds of them reprogramming themselves to adopt malevolent goals are very low.

I think it's fairly likely that once superhuman AIs become smart enough, they will simply find some other part of the multiverse to exist in, and leave us alone. But then we may want to create some AIs that are only mildly superhuman, and want to stay that way -- just to be sure they'll stay around and keep cooperating with us, rather than, say, flying off to somewhere that the laws of physics are more amenable to incredible supergenius.

Heimdall: AGI is a fascinating topic and we could talk about it for hours … but another fascinating field you’re also involved in is life extension. As I see it, there are three approaches to life extension: 1) to create whole brain emulation (like that which Bostrom and Sandberg talk about), a mind-uploading scenario. 2) to become a cyborg and live indefinitely due to a large-scale mechanical and non-biological optimization of the human body. 3) or to reverse the natural aging process within the human body through the use of gene therapy, nanorobotics and medicine. Which of the three scenarios do you find most likely? In addition, should we try to work on a combination of the above or only focus on one of them?

Goertzel: All of the above. It's easy to say what's possible, and hard to say how long each possibility will take to come about. Right now we don't have the basis to predict which of the above will come about faster, so we should pursue them all, at least until we understand more. Maybe in 5 or 10 years we'll know enough to prioritize one of them more firmly.

I'm currently working on the genomics approach (part of your option 3) with Biomind and Genescient, but am also involved in some work on brain simulation, that is moving in the direction of 1).

My main research thrust is about AGI rather than life extension – but of course, if we do achieve an advanced AGI, it may well be able to rapidly solve the tricky science problems involved in your 3 options and make all of them possible sooner.

Heimdall: What do you see as the main pros and cons of indefinite life?

Goertzel: I see no major disadvantages to having the option to live forever. It will obsolete some human thought/emotion-complexes, which derive meaning and purpose via the knowledge of impending death -- but it will replace these with better thought/emotion complexes that derive meaning and purpose via ongoing life instead!

Heimdall: You mentioned that there might not be any major drawbacks when it comes to radical life extension; however, many of the choices we make now are based on the fragility of our bodies, and taking the economic model of supply and demand into account, it does somehow look as though human life will change beyond recognition. If we have no upper time limit to our lives, how do you see humanity improving as a result?

Goertzel: I see a drastic increase in mental health -- and a drastic increase in happiness -- resulting from the drastic reduction in the fear of death. I think the knowledge of the impending death of ourselves and our loved ones poisons our mentalities far more deeply than we normally realize. Death is just plain a Bad Thing. Yeah, people have gotten used to it -- just like people can get used to being crippled or having cancer or living in a war zone-- but that doesn't make it good.

Heimdall: Just before we conclude this interview, I have two questions on the thing which fascinates transhumanists the most, the future. Which big technological breakthroughs do you think we will see over the course of the next ten years?

Goertzel: That I don't know. I'm good at seeing what's possible, more so than predicting exact timings.

In terms of science, I think we'll see a real understanding of the biological underpinnings of aging emerge, and an understanding of how the different parts of the brain interoperate to yield human intelligence, and a reasonably well accepted theoretical model encompassing various AGI architectures. How fast those things are translated into practical products depends on funding as much as anything. Right now the pharmaceutical business is sort of broken, and AGI and Brain Computer Interfacing are poorly funded, etc. – so whether these scientific breakthroughs lead to practical technological advances within the next decade is going to depend on a lot of nitty gritty monetary practicalities.

Stem cell therapy will probably become mainstream in the next decade, I guess that's an uncontroversial prediction. And I'm betting on some new breakthroughs in large-scale quantum computing -- though again, when they'll be commercialized is another story.

But these are just some notions based on the particular areas of research I happen to know the most about. For a systematic high level overview of technology progress, you'll have to ask Kurzweil!

Heimdall: Where do you see yourself in 2021?

Goertzel: As the best friend of the Robot Benevolent World Dictator, of course!

(Just kidding...)

Well, according to the OpenCog Roadmap (http://opencog.org/roadmap/) we're aiming to have full human-level AGI by 2023, assuming steady increases in funding but no "AGI Manhattan Project" level funding. So my hope is to be co-leading an OpenCog project with a bunch of brilliant AI guys co-located in one place (preferably with warm weather, and by a nice beach) working on bringing the OpenCog roadmap about.


Heimdall: Thank you so much for taking the time to do this interview

Goertzel: No problem ;)




Saturday, March 19, 2011

Joy, Growth and Choice (revisited, hopefully clarified)

I've argued in several places, e.g. here and in The Hidden Pattern, that three basic values (independent of the specifics of human cultures, morals, etc.) are Joy, Growth and Choice...

But I never had a really crisp philosophical explanation of why these three...

Now I finally figured out a clean way to express the underlying insight.

Growth is the change from present possibility into future actuality. It's when the implicit becomes explicit -- when potentials become real.

Choice is the change from future possibility into present actuality. Choice is what happens when out of many possible things that MIGHT happen (in the future), a smaller subset is chosen to ACTUALLY happen (right now, i.e. right after the choice is made, in the perspective of the choosing mind).

That's why those two values are fundamental -- on the abstract level, stripping down to fundamentals and looking beyond human psychology.

Maybe Sartre or Husserl or Heidegger or Deleuze or Merleau-Ponty (or Dharmakirti or Dignaga) or one of those dudes already said that (if so, probably in some different terminology). If so I missed it ... or the import escaped me when I read it.

Proliferating and Paring

For example, consider a plant growing. The whole form of the plant is implicit in the seed. Growth is the explication of this implicate order -- the change from the plant-possibility within the seed, into the actuality of the plant.

But there are many different ways the plant might grow -- the seed doesn't precisely determine what will happen; the determination is made via complex interactions between the seed and the environment. Choices are made, and of the many possible future plants, only some are chosen to be actual.

Growth without choice could be indiscriminate -- it could lead to an undifferentiated flourishing of everything.

Choice pares down the results of growth, leaving interesting structures.

Will, Self, Reflection

I keep talking about Choice -- is this the same thing as free will?

Human "free will" is a particular manifestation of choice; the manifestation of choice within self. (For waaaaaay more depth on self, will and reflective consciousness, read this.)

But this raises the issue of whether, in addition to the three values of Joy, Growth and Choice, we want to add Self. This seems a subtle question, though.

Growth and choice seem fundamental -- they have to do with the proliferation and paring of forms, with the dynamics of possibility and actuality.

Self has to do with reflexivity -- with a system in the world modeling itself. But it's much more high-level and particular than Joy, Growth and Choice.

So if we want to add another value to the core list of three, maybe the one to add would be Reflection. Reflection: appearance of the whole within the part.

However, I suspect this is unnecessary. Because Reflection is an amazingly powerful tool for Growth -- so that when you advocate Growth, Reflection comes along for the ride! And growth leads to intelligence eventually, and Reflection applied to intelligence (as a strategy for achieving Growth) yields Self. And if a universe already has Self, then in order to grow further, it's not going to give up Self, because that would essentially be Shrinkage, not Growth -- because Self, aka Reflection applied to intelligence, is a really good way to foster ongoing Joy, Growth and Choice.

Joy

And what about Joy?

Well ... Joy is just ... Joy. Joy just is. As the Buddhists say, Suchness. Making possibilities into actualities, and actualities into possibilities, in a spaceless timeless reality-less reality that is nonetheless more directly and palpably experientially real than anything (any thing).

(Like Sartre and Heidegger and Dignaga and the whole crew...)

I've already said way too much!

Toward a General Theory of Feasible General Intelligence

Along with practical work on the OpenCog design (and a host of other research projects!), during the past few years I've written a series of brief papers sketching ideas about the theory of general intelligence ... the goal being to move toward a solid conceptual and formal understanding of general intelligence in real-world environments under conditions of feasible computational resources. My quest for such an understanding certainly isn't done yet, but I think I've made significant progress.

This page links to the 5 papers in this series, and also gives their abstracts. 3 of the papers have been published in conference proceedings before, but 2 are given for the first time in this blog post (Three Hypotheses about the Geometry of Mind and Self-Adaptable Learning). All of this material will appear in Building Better Minds eventually, in slightly modified and extended form.

These theoretical ideas have played a significant, largely informal role in guiding my work on the OpenCog design. My feeling is that once practical R&D work is a bit further along, so that we're experimenting in a serious way with sophisticated proto-AGI systems, then theory and practice will start developing in a closely coupled way. So that a good theory of general intelligence will probably come in lock-step along with the first reasonably good AGI systems. (See some more comments on the relation between these theory papers and OpenCog, at the end of this blog post.)

A brief note on math: There is a fair bit of mathematical formalism here, but no deep, interesting theorems are proven. I don't think this is because no such theorems exist in this material; I just haven't taken the time to really explore these ideas with full mathematical rigor. That would be fun, but I've prioritized other sorts of work. So far, I've mainly been seeking conceptual clarity with these ideas rather than full mathematical rigor; and I've used mathematical formalism here and there because that is the easiest way for me to make my ideas relatively precise. (Being trained in math rather than formal philosophy, I find the former a much more convenient way to express my ideas when I want to be more precise than everyday language permits.) My hope is that, if I never find the time, others will come along and turn some of these ideas into theorems!

Toward a Formal Characterization of Real-World General Intelligence
Presented at AGI-10, in Lugano

Two new formal definitions of intelligence are presented, the "pragmatic general intelligence" and "efficient pragmatic general intelligence." Largely inspired by Legg and Hutter's formal definition of "universal intelligence," the goal of these definitions is to capture a notion of general intelligence that more closely models that possessed by humans and practical AI systems, which combine an element of universality with a certain degree of specialization to particular environments and goals. Pragmatic general intelligence measures the capability of an agent to achieve goals in environments, relative to prior distributions over goal and environment space. Efficient pragmatic general intelligence measures this same capability, but normalized by the amount of computational resources utilized in the course of the goal-achievement. A methodology is described for estimating these theoretical quantities based on observations of a real biological or artificial system operating in a real environment. Finally, a measure of the "degree of generality" of an intelligent system is presented, allowing a rigorous distinction between "general AI" and "narrow AI."
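To give a rough flavor of the formalism (this is just a schematic paraphrase in my own notation, not the exact formulation from the paper): pragmatic general intelligence can be thought of as expected goal-achievement, weighted by the assumed priors,

\Pi(\pi) \;=\; \sum_{\mu \in \mathcal{E}} \sum_{g \in \mathcal{G}} \nu(\mu)\, \gamma(g,\mu)\, \Psi_{\mu,g}(\pi)

where \pi is the agent, \nu is a prior over environments \mu, \gamma(g,\mu) is a conditional prior over goals g, and \Psi_{\mu,g}(\pi) is the expected degree to which \pi achieves g in \mu. The "efficient" version normalizes this quantity by the computational resources the agent consumes along the way.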

The Embodied Communication Prior: A Characterization of General Intelligence in the Context of Embodied Social Interaction
Presented at ICCI-09, in Hong Kong

We outline a general conceptual definition of real-world general intelligence that avoids the twin pitfalls of excessive mathematical generality, and excessive anthropomorphism. Drawing on prior literature, a definition of general intelligence is given, which defines the latter by reference to an assumed measure of the simplicity of goals and environments. The novel contribution presented is to gauge the simplicity of an entity in terms of the ease of communicating it within a community of embodied agents (the so-called Embodied Communication Prior or ECP). Augmented by some further assumptions about the statistical structure of communicated knowledge, this choice is seen to lead to a model of intelligence in terms of distinct but interacting memory and cognitive subsystems dealing with procedural, declarative, sensory/episodic, attentional and intentional knowledge.

Cognitive Synergy: A Universal Principle for General Intelligence?
Presented at ICCI-09, in Hong Kong

Do there exist general principles, which any system must obey in order to achieve advanced general intelligence using feasible computational resources? Here we propose one candidate: cognitive synergy, a principle which suggests that general intelligences must contain different knowledge creation mechanisms corresponding to different sorts of memory (declarative, procedural, sensory/episodic, attentional, intentional); and that these different mechanisms must be interconnected in such a way as to aid each other in overcoming memory-type-specific combinatorial explosions.

Three Hypotheses About the Geometry of Mind (with Matthew Iklé)
Presented for the first time right here!

What set of concepts and formalizations might one use to make a practically useful, theoretically rigorous theory of generally intelligent systems? We present a novel perspective motivated by the OpenCog AGI architecture, but intended to have a much broader scope. Types of memory are viewed as categories, and mappings between memory types as functors. Memory items are modeled using probability distributions, and memory subsystems are conceived as “mindspaces” – geometric spaces corresponding to different memory categories. Two different metrics on mindspaces are considered: one based on algorithmic information theory, and another based on traditional (Fisher information based) “information geometry”. Three hypotheses regarding the geometry of mind are then posited: 1) a syntax-semantics correlation principle, stating that in a successful AGI system, these two metrics should be roughly correlated; 2) a cognitive geometrodynamics principle, stating that on the whole intelligent minds tend to follow geodesics in mindspace; 3) a cognitive synergy principle, stating that shorter paths may be found through the composite mindspace formed by considering multiple memory types together, than by following the geodesics in the mindspaces corresponding to individual memory types.
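As a purely schematic gloss on the first hypothesis (my shorthand here, not the notation used in the paper): if d_alg denotes the algorithmic-information ("syntactic") metric and d_info the Fisher-information ("semantic") metric on a mindspace M, the syntax-semantics correlation principle says roughly that

\mathrm{corr}_{(A,B) \in M \times M}\big( d_{\mathrm{alg}}(A,B),\; d_{\mathrm{info}}(A,B) \big) \gg 0

i.e., in a successfully functioning AGI system, memory items that are close in one metric should tend to be close in the other.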


Self-Adaptable Learning
Presented for the first time right here!

The term "higher-level learning" may be used to refer to learning how to learn, learning how to learn how to learn, etc. If an agent is good at ordinary everyday learning, but also at learning about which learning strategies are most amenable to higher-level learning, and does both in a way that is amenable to higher-level learning -- then it may be said to possess self-adaptable learning. Goals and environments in which higher-level learning is a good strategy for intelligence may be called adaptationally hierarchical -- a property that everyday human environments are postulated to possess. These notions are carefully articulated and formalized; and a concept of cognitive continuity is also introduced, which is argued to militate in favor of self-adaptability in a learning system.
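To make the "learning how to learn" idea a bit more tangible, here's a toy sketch -- entirely my own illustrative construction, far simpler than the formalism in the paper -- of an agent that keeps statistics on which of its learning strategies work well, and uses those statistics to choose strategies for future problems. The strategy names and scoring scheme are made up:

import random
from collections import defaultdict

class SelfAdaptableLearner:
    """Toy two-level learner: level one solves tasks, level two learns which strategy to trust."""

    def __init__(self, strategies):
        self.strategies = strategies           # name -> callable(task) returning a score in [0, 1]
        self.history = defaultdict(list)       # name -> past scores (the 'learning about learning' data)

    def pick_strategy(self):
        # Mostly exploit the strategy with the best average past score; explore 10% of the time
        if not self.history or random.random() < 0.1:
            return random.choice(list(self.strategies))
        return max(self.history, key=lambda s: sum(self.history[s]) / len(self.history[s]))

    def solve(self, task):
        name = self.pick_strategy()
        score = self.strategies[name](task)    # first-order learning happens inside the strategy
        self.history[name].append(score)       # second-order learning: update the strategy statistics
        return name, score

# Hypothetical usage with two dummy strategies whose scores are just random numbers
learner = SelfAdaptableLearner({
    "greedy": lambda task: random.uniform(0.4, 0.6),
    "search": lambda task: random.uniform(0.3, 0.9),
})
for t in range(20):
    learner.solve(task=t)
print({name: round(sum(s) / len(s), 2) for name, s in learner.history.items()})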

P.S. A Comment on the Relation of All This Theory to OpenCog

I think there is a lot of work required, to transform the abstractions from those theory papers of mine into a mathematical theory that is DIRECTLY USEFUL rather than merely INSPIRATIONAL for concrete AGI design.

So, the OpenCog design, for instance, is not derived from the abstract math and ideas in the above-linked papers ... it's independently created, based on many of the same quasi-formal intuitions as the ones underlying those papers.

You could say I'm approaching the problem from two directions at once, and hoping I can get the two approaches to intersect...

One direction is OpenCog --- designing and building a concrete proto-AGI system, and iteratively updating the design based on practical experience

The other is abstract theory, as represented in those papers

If all goes well, eventually the two ends will meet, and the abstract theory will tell us concretely useful things about how to improve the OpenCog design. That is only rather weakly true right now.

I have the sense (maybe wrong) I could make the ends meet very convincingly in about one year of concentrated work on the theory side. However, I currently only spend maybe 5% of my time on that sort of theory. But hopefully I will be able to make it happen in less than 20 years via appropriate collaborations...

Thursday, January 13, 2011

The Hard Takeoff Hypothesis


I was recently invited to submit a paper to a forthcoming academic edited volume on the Singularity. As a first step I had to submit an extended abstract, around 1000 words. Here is the abstract I submitted....


Basically, the paper will be a careful examination of the conditions under which a hard takeoff might occur, including an argument (though not formal proof) as to why OpenCog may be hard-takeoff-capable if computer hardware is sufficiently capable at the time when it achieves human-level intelligence.




The Hard Takeoff Hypothesis

Ben Goertzel


Vernor Vinge, Ray Kurzweil and others have hypothesized the future occurrence of a “technological Singularity” -- meaning, roughly speaking, an interval of time during which pragmatically-important, broad-based technological change occurs so fast that the individual human mind can no longer follow what’s happening even generally and qualitatively. Plotting curves of technological progress in various areas suggests that, if current trends continue, we will reach some sort of technological Singularity around 2040-2060.


Of course, this sort of extrapolation is by no means certain. Among many counterarguments, one might argue that the inertia of human systems will cause the rate of technological progress to flatten out at a certain point. No matter how fast new ideas are conceived, human socioeconomic systems may take a certain amount of time to incorporate them, because humans intrinsically operate on a certain time-scale. For this reason Max More has suggested that we might experience something more like a Surge than a Singularity – a more gradual, though still amazing and ultimately humanity-transcending, advent of advanced technologies.


On the other hand, if a point is reached at which most humanly-relevant tasks (practical as well as scientific and technological) are carried out by advanced AI systems, then from that point on the “human inertia factor” would seem not to apply anymore. There are many uncertainties, but at very least, I believe the notion of a technological Singularity driven by Artificial General Intelligences (AGIs) discovering and then deploying new technology and science is a plausible and feasible one.


Within this vision of the Singularity, an important question arises regarding the capability for self-improvement on the part of the AGI systems driving technological development. It’s possible that human beings could architect a specific, stable AGI system with moderately greater-than-human intelligence, which would then develop technologies at an extremely rapid rate, so fast as to appear like “essentially infinitely fast technological progress” to the human mind. However, another alternative is that humans begin by architecting roughly human-level AGI systems that are capable but not astoundingly so – and then these AGI systems improve themselves, or create new and improved AGI systems, and so on and so forth through many iterations. In this case, one has the question of how rapidly this self-improvement proceeds.


In this context, some futurist thinkers have found it useful to introduce the heuristic distinction between a “hard takeoff” and a “soft takeoff.” A hard takeoff scenario is one where an AGI system increases its own intelligence sufficiently that, within a brief period of months or weeks or maybe even hours, an AGI system with roughly human-level intelligence has suddenly become an AGI system with radically superhuman general intelligence. A soft takeoff scenario is one where an AGI system gradually increases its own intelligence step-by-step over years or decades, i.e. slowly enough that humans have the chance to monitor each step of the way and adjust the AGI system as they deem necessary. Either a hard or soft takeoff fits I.J. Good’s notion of an “intelligence explosion” as a path to Singularity.


What I call the “Hard Takeoff Hypothesis” is the hypothesis that a hard takeoff will occur, and will be a major driving force behind a technological Singularity. Thus the Hard Takeoff Hypothesis is a special case of the Singularity Hypothesis.


It’s important to note that the distinction between a hard and soft takeoff is a human distinction rather than a purely technological distinction. The distinction has to do with how the rate of intelligence increase of self-improving AGI systems compares to the rate of processing of human minds and societies. However, this sort of human distinction may be very important where the Singularity is concerned, because after all the Singularity, if it occurs, will be a phenomenon of human society, not one of technology alone.



The main contribution of this paper will be to outline some fairly specific sufficient conditions for an AGI system to undertake a hard takeoff. The first condition explored is that the AGI system must lie in a connected region of “AGI system space” (which we may more informally call “mindspace”) that, roughly speaking,


  • includes AGI systems with general intelligence vastly greater than that of humans
  • has the “smoothness” property that similarly architected systems tend to have similar general intelligence levels.


If this condition holds, then it follows that one can initiate a takeoff by choosing a single AGI system in the given mindspace region, and letting it spend part of its time figuring out how to vary itself slightly to improve its general intelligence. A series of these incremental improvements will then lead to greater and greater general intelligence.


The hardness versus softness of the takeoff then has to do with the amount of time needed to carry out this process of “exploring slight variations.” This leads to the introduction of a second condition. If one’s region of mindspace obeys the first condition laid out above, and also consists of AGI systems for which adding more hardware tends to accelerate system speed significantly, without impairing intelligence, then it follows that one can make the takeoff hard by simply adding more hardware. In this case, the hard vs. soft nature of a takeoff depends largely on the cost of adding new computer hardware, at the time when an appropriate architected AI system is created.
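A minimal numerical sketch of this argument (the numbers and the functional form below are invented purely for illustration, not taken from the paper): if each round of self-modification yields a fixed relative intelligence gain, while the wall-clock time per round shrinks as more hardware is added, then the very same improvement trajectory can play out over a decade or over a month, depending on the hardware budget.

def takeoff_duration_days(initial_iq, target_iq, gain_per_round, base_round_days, hardware_speedup):
    """Illustrative only: total wall-clock time for a chain of incremental self-improvements,
    assuming a fixed relative gain per round and a round time that scales inversely with hardware."""
    iq, days = initial_iq, 0.0
    while iq < target_iq:
        iq *= (1.0 + gain_per_round)
        days += base_round_days / hardware_speedup
    return days

# Made-up numbers: 5% intelligence gain per round, 30 days per round on baseline hardware
for speedup in (1, 10, 100):
    d = takeoff_duration_days(1.0, 1000.0, 0.05, 30.0, speedup)
    print(f"{speedup:>4}x hardware: ~{d:,.0f} days to a 1000x intelligence gain")

With these made-up parameters, baseline hardware gives a takeoff lasting over a decade (soft, in the sense above), while a 100x hardware speedup compresses the same trajectory into roughly six weeks (hard).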


Roughly speaking, if AGI architecture advances fast enough relative to computer hardware, we are more likely to have a soft takeoff, because the learning involved in progressive self-improvement may take a long while. But if computer hardware advances quickly enough relative to AGI architecture, then we are more likely to have a hard takeoff, via deploying AGI architectures on hardware sufficiently powerful to enable self-improvement that is extremely rapid on the human time-scale.


Of course, we must consider the possibility that the AGI itself develops new varieties of computing hardware. But this possibility doesn’t really alter the discussion so much – even so, we have to ask whether the new hardware it creates in its “youth” will be sufficiently powerful to enable hard takeoff, or whether there will be a slower “virtuous cycle” of feedback between its intelligence improvements and its hardware improvements.


Finally, to make these considerations more concrete, the final section of the paper will give some qualitative arguments that the mindspace consisting of instances of the OpenCog AGI architecture (which my colleagues and I have been developing, aiming toward the ultimate goal of AGI at the human level and beyond), very likely possesses the needed properties to enable hard takeoff. If so this is theoretically important, as an “existence argument” that hard-takeoff-capable AGI architectures do exist – i.e., as an argument that the Hard Takeoff Hypothesis is a plausible one.


Wednesday, December 29, 2010

Will Decreasing Scarcity Allow us to Approach an Optimal (Meta-)Society?

When chatting with a friend about various government systems during a long car drive the other day (returning from New York where we were hit by 2 feet of snow, to relatively dry and sunny DC), it occurred to me that one could perhaps prove something about the OPTIMAL government system, if one were willing to make some (not necessarily realistic) assumptions about resource abundance.

This led to an interesting train of thought -- that maybe, as technology reduces scarcity, society will gradually approach optimality in certain senses...

The crux of my train of thought was:

  • Marcus Hutter proved that the AIXI algorithm is an optimal approach to intelligence, given the (unrealistic) assumption of massive computational resources.
  • Similarly, I think one could prove something about the optimal approach to society and government, given the (unrealistic) assumptions of massive natural resources and a massive number of people.

I won't take time to try to prove this formally just now, but in this blog post I'll sketch out the basic idea.... I'll describe what I call the meta-society, explain the sense in which I think it's optimal, and finally why I think it might get more and more closely approximated as the future unfolds...

A Provably Optimal Intelligence

As a preliminary, first I'll review some of Hutter's relevant ideas on AI.

In Marcus Hutter's excellent (though quite technical) book Universal AI, he presents a theory of "how to build an optimally intelligent AI, given unrealistically massive computational resources."

Hutter's algorithm isn't terribly novel -- I discussed something similar in my 1993 book The Structure of Intelligence (as a side point to the main ideas of that book), and doubtless Ray Solomonoff had something similar in mind when he came up with Solomonoff induction back in the 1960s. The basic idea is: Given any computable goal, and infinite computing power, you can work toward the goal very intelligently by (my wording, not a quote) ....


at each time step, searching the space of all programs to find those programs P that (based on your historical knowledge of the world and the goal) would (if you used P to control your behaviors) give you the highest probability of achieving the goal. Then, take the shortest of all such optimal programs P and actually use it to determine your next action.


But what Hutter did uniquely is to prove that a formal version of this algorithm (which he calls AIXI) is in a mathematical sense maximally intelligent.

If you have only massive (rather than infinite) computational resources, then a variant (AIXItl) exists, the basic idea of which is: instead of searching the space of all programs, only look at those programs with length less than L and runtime less than T.

It's a nice approach if you have the resources to pay for it. It's sort of a meta-AI-design rather than an AI design. It just says: If you have enough resources, you can brute-force search the space of all possible ways of conducting yourself, and choose the simplest of the best ones and then use it to conduct yourself. Then you can repeat the search after each action that you take.
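
For concreteness, here is a minimal sketch of that brute-force meta-plan -- loosely in the spirit of AIXItl, not Hutter's actual formalism. The bitstring program encoding, the crude "runtime bound," and the scoring function are all placeholder assumptions of mine:

```python
# A toy, purely illustrative sketch of the brute-force meta-plan described
# above -- loosely in the spirit of AIXItl, not Hutter's actual formalism.
# The bitstring program encoding, the crude runtime bound, and the scoring
# function are all placeholder assumptions.

from itertools import product

MAX_LEN = 10        # "L": maximum program length, in bits
STEP_BUDGET = 1000  # "T": crude runtime bound per program evaluation


def run_program(program, history):
    """Toy bounded interpreter: map (program, history) to an action in {0, 1},
    or None if the crude runtime bound would be exceeded."""
    if len(program) * (len(history) + 1) > STEP_BUDGET:
        return None
    return hash((program, tuple(history))) % 2


def choose_action(history, goal_fn):
    """Enumerate all programs up to MAX_LEN bits, shortest first (so ties at
    the best score go to the shortest program), score each against the
    history and the goal, and act via the best-scoring one."""
    best_score, best_action = float("-inf"), 0
    for length in range(1, MAX_LEN + 1):
        for bits in product("01", repeat=length):
            program = "".join(bits)
            action = run_program(program, history)
            if action is None:
                continue
            score = goal_fn(history, action)  # estimated goal achievement
            if score > best_score:
                best_score, best_action = score, action
    return best_action


# Example: a trivial goal function that simply rewards choosing action 1.
if __name__ == "__main__":
    print(choose_action(history=[0, 1, 1], goal_fn=lambda h, a: a))
```

The point is only to show the shape of the procedure: enumerate bounded programs, score each against history and goal, act via the shortest best-scoring one, and repeat the whole search after every action.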

One might argue that all this bears no resemblance to anything that any actual real-world mind would do. We have neither infinite nor massive resources, so we have to actually follow some specific intelligent plans and algorithms; we can't just follow a meta-plan of searching the space of all possible plans at each time-step and then probabilistically assessing the quality of each possibility.

On the other hand, one could look at Hutter's Universal AI as a kind of ideal which real-world minds may approach more and more closely, as they get more and more resources to apply to their intelligence.

That is: If your resources are scarce, you need to rely on specialized techniques. But the more resources you have, the more you can rely on search through all the possibilities, reducing the chance that your biases cause you to miss the best solution.

(I'm not sure this is the best way to think about AIXI ... it's certainly not the only way ... but it's a suggestive way...)

Of course there are limitations to Hutter's work and the underlying way of conceptualizing intelligence. The model of minds as systems for achieving specific goals has its limitations, which I've explained how to circumvent in prior publications. But for now we're using AIXI only as a broad source of inspiration anyway, so there's no need to enter into such details....

19-Year-Old Ben Goertzel's Design for a Better Society

Now, to veer off in a somewhat different direction....

Back when I was 19 and a math grad student at NYU, I wrote (in longhand, this was before computers were so commonly used for word processing) a brief manifesto presenting a design for a better society. Among other names (many of which I can't remember) I called this design the Meta-society. I think the title of the manifesto was "The Play of Power and the Power of Play."

(At that time in my life, I was heavily influenced by various strains of Marxism and anarchism, and deeply interested in social theory and social change. These were, after all, major themes of my childhood environment -- my dad being a sociology professor, and my mom the executive of a social work program. I loved the Marxist idea of the mind and society improving themselves together, in a carefully coupled way -- so that perhaps the state and the self could wither away at the same time, yielding a condition of wonderful individual and social purity. Of course, though, I realized that existing Communist systems fell very far short of this ideal, and eventually I got pessimistic about there ever being a great society composed of and operated by humans in their current form. Rather than improving society, I decided, it made more sense to focus my time on improving humanity ... leading me to a greater focus on transhumanism, AI and related ideas.)

The basic idea for my meta-society was a simple one, and probably not that original: Just divide society into a large number of fairly small groups, and let each small group do whatever the hell it wanted on some plot of land. If one of these "city-states" got too small due to emigration it could lose its land and have it ceded to some other new group.

If some group of people get together and want to form their own city-state, they are put in a queue and granted free land for their city-state when land becomes available. To avoid unfairness or corruption in the allocation of land to city-states, a computer algorithm could be used to mediate the process.
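
A minimal sketch of the kind of impartial mediation algorithm I have in mind -- entirely hypothetical, since nothing above commits to a particular mechanism -- could be as simple as a first-come, first-served queue:

```python
# A minimal, hypothetical sketch of an impartial land-allocation mediator:
# a first-come, first-served queue of would-be city-states, matched to plots
# of land as they are ceded back. Purely illustrative.

from collections import deque

waiting_groups = deque()   # groups queued for land, in application order
available_plots = deque()  # plot identifiers that have been ceded back


def apply_for_land(group_name):
    waiting_groups.append(group_name)


def cede_plot(plot_id):
    available_plots.append(plot_id)


def allocate():
    """Match queued groups to free plots, oldest application first."""
    grants = []
    while waiting_groups and available_plots:
        grants.append((waiting_groups.popleft(), available_plots.popleft()))
    return grants
```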

There would have to be some basic ground-rules, such as: no imprisoning people in your city-state, no invading or robbing other city-states, etc. Supporting a police force to enforce the ground-rules would require a central government and some low level of taxation, which, however, could sometimes be collected in the form of goods rather than money (the central gov't could then convert the goods into money). Environmental protection poses some difficulties in this sort of system, and would also have to be centrally policed.

This meta-society system my 19-year-old self conceived (and I don't claim any great originality for it, though I don't currently know of anything precisely like it in the literature) has something in common with Libertarian philosophy, but it's not exactly the same, because at the top there's a government that enforces a sort of "equal rights for city-state formation" for all.

One concern I always had with the meta-society was: What do you do with orphans or others who get cast out of their city-states? One possibility is for the central government to operate some city-states composed of random people who have nowhere else to go (or nowhere else they want to go).

Another concern is what to do about city-states that oppress and psychologically brainwash their inhabitants. But I didn't really see any solution to that. One person's education is another person's brainwashing, after all. From a modern American view it's tempting to say that all city-states should allow their citizens free access to media so they can find out about other perspectives, but ultimately I decided this would be too much of an imposition on the freedom of the city-states. Letting citizens leave their city-state if they wish ultimately provides a way for any world citizen to find out what's what, although there are various strange cases to consider, such as a city-state that allows its citizens no information about the outside world, and also removes the citizenship of any citizen who goes outside its borders!

I thought the meta-society was a cool idea, and worked out a lot of details -- but ultimately I had no idea how to get it implemented, and not much desire to spend my life proselytizing for an eccentric political philosophy or government system, so I set the idea aside and focused my time on math, physics, AI and such.

As a major SF fan, I did think that such a meta-society of city-states might be more easily achievable in the future, once space colonies were commonplace. If it were cheap to put up a small space colony for a few hundred or thousand or ten thousand people, then this could lead to a flowering of city-states of exactly the sort I was envisioning...

When I became aware of Patri Friedman's Seasteading movement, I immediately sensed a very similar line of thinking. Their mission is "To further the establishment and growth of permanent, autonomous ocean communities, enabling innovation with new political and social systems." Patri wants to make a meta-society and meta-economy on the high seas. And why not?



Design for an Optimal Society?

The new thought I had while driving the other day is: Maybe you could put my old idealistic meta-society-design together with the AIXI idea somehow, and come up with a design for a "society optimal under assumption of massive resources."

Suppose one assumes:

  • a lot of great land (or sea + seasteading tech, or space + space colonization tech, whatever), so that fighting over land is irrelevant
  • a lot of people
  • a lot of natural resources, so that one city-state polluting another one's natural resources isn't an issue

Then it seems one could argue that my meta-society is near-optimal, under these conditions.

The basic proof would be: Suppose there were some social order X better than the meta-society. Then people could realize that X is better, and could simply design their city-states in such a way as to produce X.

For instance, if US-style capitalist democracy is better than the meta-society, and people realize it, then people can just construct their city-states to operate in the manner of US-style capitalist democracy (this would require close cooperation of multiple city-states, but that's quite feasible within the meta-society framework).

So, one could argue, any other social order can only be SLIGHTLY better than the meta-society... because if there's something significantly better, then after a little while the meta-society can come to emulate it closely.

So, under assumptions of sufficiently generous resources, the meta-society is about as good as anything.
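
For what it's worth, one hedged way to phrase the heuristic argument slightly more formally (my notation, introduced only for this sketch) is:

```latex
% A rough, informal formalization of the emulation argument (my notation,
% not anything from the original manifesto), under the abundance assumptions above.
\begin{align*}
&\text{Let } U(S) \text{ be the (suitably defined) social utility of a social order } S,
  \text{ and let } M \text{ be the meta-society.}\\
&\text{Assumption: for any social order } X, \text{ the city-states of } M
  \text{ can jointly reorganize to emulate } X\\
&\text{at a utility cost of at most } \delta, \text{ i.e. } U(M) \ge U(X) - \delta.\\
&\text{Conclusion: } U(M) \;\ge\; \sup_{X} U(X) \;-\; \delta,\\
&\text{so no alternative order can outperform the meta-society by more than } \delta.
\end{align*}
```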

Now there are certainly plenty of loopholes to be closed in turning this heuristic argument into a formal proof. But I hope the basic idea is clear.

As with AIXI, one can certainly question the relevance of this sort of design, since resource scarcity is a major fact of modern life. But recall that I originally started thinking about meta-societies outside the "unrealistically abundant resources" context.

Finally, you'll note that for simplicity, I have phrased the above discussion in terms of "people." But of course, the same sort of thinking applies for any kind of intelligent agent. The main assumption in this case is that the agents involved either have roughly equal power and intelligence, or else that if there are super-powerful agents involved, they have the will to obey the central government.

Can We Approach the Meta-Society as Technology Advances?


More and more resources are becoming available for humanity, as technology advances. Seasteading and space colonization and so forth decrease the scarcity of available "land" for human habitation. Mind uploading would do so more dramatically. Molecular nanotech (let alone femtotech and so forth) may dramatically reduce material scarcity, at least on the scale interesting to humans.

So, it seems the conditions for the meta-society may be more and more closely met, as the next decades and centuries unfold.

Of course, the meta-society will remain an idealization, never precisely achievable in practice. But it may be we can approach it closer and closer as technology improves.

Marxism had the notion of society gradually becoming more and more pure, progressively approaching Perfect Communism. What I'm suggesting here is similar in form but different in content: society gradually becoming more and more like the meta-society, as scarcity of various sorts becomes less and less of an issue.

As I write about this now, it also occurs to me that this is a particularly American vision. America, in a sense, is a sort of meta-society -- the central government is relatively weak (compared to other First World countries) and there are many different subcultures, some operating with various sorts of autonomy (though also a lot of interconnectedness). In this sense, it seems I'm implicitly suggesting that America is a better model for the future than other existing nations. How very American of me!

If superhuman AI comes about (as I think it will), then the above arguments make sense only if the superhuman AI chooses to respect the meta-society social structure. The possibility even exists that a benevolent superhuman AI could itself serve as the central government of a meta-society.

And so it goes....

Tuesday, November 23, 2010

Making Minds from Memristors?

Amara Angelica pointed me to an article in IEEE Spectrum titled MoNETA: A Mind Made from Memristors

Fascinating indeed!

I'm often skeptical of hardware projects hyped as AI projects, but truth be told, I find this one extremely exciting and promising.

I think the memristor technology is amazing and may well play a part in the coming AGI revolution.

Creating emulations of human brain microarchitecture is one fascinating application of memristors, though not the only one and not necessarily the most exciting one. Memristors can also be used to make a lot of other different AI architectures, not closely modeled after the human brain.

[For instance, one could implement a semantic network or an OpenCog-style AtomSpace (weighted labeled hypergraph) via memristors, with each node in the network having both memory and processing resident in it ... the result is a massively parallel network implemented via memristors, but the nodes in the network aren't anything like neurons...]
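
To illustrate what such a non-neural, massively parallel structure might look like at the data-structure level, here is a minimal sketch of a weighted labeled hypergraph in which every node carries its own local state and update rule -- my own illustrative code, loosely AtomSpace-flavored but not OpenCog's actual API:

```python
# A minimal sketch of a weighted labeled hypergraph (loosely AtomSpace-flavored,
# but illustrative code only, not OpenCog's actual API) in which every node has
# its own local state and update rule -- so that, in principle, each node could
# be mapped onto its own memory-plus-processing element in a memristor fabric.

from dataclasses import dataclass, field


@dataclass
class Atom:
    label: str
    weight: float = 1.0
    local_state: dict = field(default_factory=dict)

    def local_update(self, neighbors):
        # Placeholder local rule: drift this atom's weight toward the mean
        # weight of the atoms it shares hyperedges with.
        if neighbors:
            mean_w = sum(n.weight for n in neighbors) / len(neighbors)
            self.weight += 0.1 * (mean_w - self.weight)


@dataclass
class Hyperedge:
    label: str
    members: tuple       # any number of Atoms, not just pairs
    weight: float = 1.0


class Hypergraph:
    def __init__(self):
        self.atoms, self.edges = {}, []

    def add_atom(self, label, weight=1.0):
        self.atoms[label] = Atom(label, weight)
        return self.atoms[label]

    def link(self, label, member_labels, weight=1.0):
        edge = Hyperedge(label, tuple(self.atoms[m] for m in member_labels), weight)
        self.edges.append(edge)
        return edge

    def step(self):
        """One update pass: each atom updates from the atoms it shares hyperedges
        with. (A memristor fabric would run these updates in parallel; here they
        happen sequentially, for simplicity.)"""
        for atom in self.atoms.values():
            neighbors = [m for e in self.edges if atom in e.members
                         for m in e.members if m is not atom]
            atom.local_update(neighbors)


if __name__ == "__main__":
    g = Hypergraph()
    g.add_atom("cat", 0.5)
    g.add_atom("mammal", 1.0)
    g.add_atom("animal", 1.5)
    g.link("inheritance", ("cat", "mammal"))
    g.link("inheritance", ("mammal", "animal"))
    g.step()
    print({a.label: round(a.weight, 3) for a in g.atoms.values()})
```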

And, though the memristors-for-AGI theme excites me, this other part of the article leaves me a bit more skeptical:

"
By the middle of next year, our researchers will be working with thousands of candidate animats at once, all with slight variations in their brain architectures. Playing intelligent designers, we'll cull the best ones from the bunch and keep tweaking them until they unquestionably master tasks like the water maze and other, progressively harder experiments. We'll watch each of these simulated animats interacting with its environment and evolving like a natural organism. We expect to eventually find the "cocktail" of brain areas and connections that achieves autonomous intelligent behavior.
"

I think the stated research program places too much emphasis on brain microarchitecture and not enough on higher-level cognitive architecture. The idea that a good cognitive architecture can be coaxed to emerge via some simple artificial-life-type experiments seems very naive to me. I suspect that, even with the power of memristors, designing a workable cognitive architecture is going to be a significant enterprise. And I also think that many existing cognitive architectures, like my own OpenCog or Stan Franklin's LIDA or Hawkins' or Arel's deep learning architectures, could be implemented on a memristor fabric without changing their underlying concepts or high-level algorithms or dataflow.

So: memristors for AI, yay!

But: memristors as enablers of a simplistic Alife approach to AGI ... well, I don't think so.

The Psi Debate Continues (Goertzel on Wagenmakers et al on Bem on precognition)

A few weeks ago I wrote an article for H+ Magazine about the exciting precognition results obtained by Daryl Bem at Cornell University.

Recently, some psi skeptics (Wagenmakers et al) have written a technical article disputing the validity of Bem's analyses of his data.

In this blog post I'll give my reaction to the Wagenmakers et al (WM from here on) paper.

It's a frustrating paper, because it makes some valid points -- yet it also confuses the matter by inappropriately accusing Bem of committing "fallacies" and by arguing that the authors' preconceptions against psi should be used to bias the data analysis.

The paper makes three key points, which I will quote in the form summarized here, and then respond to one by one.

POINT 1

"
Bem has published his own research methodology and encourages the formulation of hypotheses after data analysis. This form of post-hoc analysis makes it very difficult to determine accurate statistical significance. It also explains why Bem offers specific hypotheses that seem odd a priori, such as erotic images having a greater precognitive effect. Constructing hypotheses from the same data range used to test those hypotheses is a classic example of the Texas sharpshooter fallacy
"

MY RESPONSE

As WM note in their paper, this is actually how science is ordinarily done; Bem is just being honest and direct about it. Scientists typically run many exploratory experiments before finding the ones with results interesting enough to publish.

It's a meaningful point, and a reminder that science as typically practiced does not match some of the more naive notions of "scientific methodology". But it would also be impossibly cumbersome and expensive to follow the naive notion of scientific methodology and avoid exploratory work altogether, in psi or any other domain.

Ultimately this complaint against Bem's results is just another version of the "file drawer effect" hypothesis, which has been analyzed in great detail in the psi literature via meta-analyses across many experiments. The file drawer effect argument seems somewhat compelling when you look at a single experiment-set like Bem's, and becomes much less compelling when you look across the scope of all psi experiments reported, because the conclusion becomes that you'd need a huge number of carefully-run, unreported experiments to explain the total body of data.
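
To give a feel for the arithmetic behind that point, here is a hedged illustration using Rosenthal's classic fail-safe-N formula; the z-scores are made-up placeholders, not Bem's data or any actual meta-analysis:

```python
# A hedged numerical illustration of the "file drawer" arithmetic referred to
# above, using Rosenthal's classic fail-safe-N formula: the number of
# unpublished null-result studies that would be needed to drag a set of k
# significant studies down to overall non-significance. The z-scores below
# are made-up placeholders, not anyone's actual data.

def fail_safe_n(z_scores, z_crit=1.645):
    """Rosenthal's fail-safe N for one-tailed p < .05 (z_crit**2 = 2.706)."""
    k = len(z_scores)
    return (sum(z_scores) ** 2) / (z_crit ** 2) - k


if __name__ == "__main__":
    hypothetical_z = [2.2, 1.9, 2.5, 2.0, 1.8]  # five modestly significant studies
    print(f"roughly {fail_safe_n(hypothetical_z):.0f} unpublished null studies needed")
```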

BTW, the finding that erotic pictures give more precognitive response than other random pictures doesn't seem terribly surprising, given the large role that sexuality plays in human psychology and evolution. If the finding were that pictures of cheese give more precognitive response than anything else, that would be more strange and surprising to me.


POINT 2

"
The paper uses the fallacy of the transposed conditional to make the case for psi powers. Essentially mixing up the difference between the probability of data given a hypothesis versus the probability of a hypothesis given data.
"

MY RESPONSE

This is a pretty silly criticism, much less worthy than the other points raised in the WM paper. Basically, when you read the discussion backing up this claim, the authors are saying that one should take into account the low a priori probability of psi in analyzing the data. OK, well ... one could just as well argue for taking into account the high a priori probability of psi given the results of prior meta-analyses or anecdotal reports of psi. Blehh.

Using the term "fallacy" here makes it seem, to people who just skim the WM paper or read only the abstract, as if Bem made some basic reasoning mistake. Yet when you actually read the WM paper, that is not what is being claimed. Rather, they admit that he is following ordinary scientific methodology.


POINT 3

"
Wagenmakers' analysis of the data using a Bayesian t-test removes the significant effects claimed by Bem.
"

This is the most worthwhile point raised in the Wagenmakers et al paper.

Using a different sort of statistical test than Bem used, they re-analyze Bem's data and find that, while the results are positive, they are not positive enough to pass the level of "statistical significance." They conclude that a somewhat larger sample size would be needed to establish statistical significance using the test they applied.

The question then becomes why one should choose one statistical test over another. Indeed, it's common scientific practice to choose a statistical test that makes one's results appear significant, rather than others that do not. This is not peculiar to psi research; it's simply how science is typically done.
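
As a hedged illustration of how much the choice of test can matter -- a toy example of my own, not the WM reanalysis and not Bem's data -- one can compare a classical one-sample t-test with a rough BIC-based approximation to the Bayes factor on the same simulated numbers:

```python
# A toy comparison of a classical one-sample t-test with a rough BIC-based
# approximation to the Bayes factor, in the spirit of Wagenmakers' 2007
# approximation BF01 ~ exp((BIC_alt - BIC_null) / 2). Simulated data only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated "hit rates" for 100 subjects, with a true mean slightly above chance (50%).
data = rng.normal(loc=52.0, scale=10.0, size=100)

# Classical test against the chance level of 50.
t_stat, p_value = stats.ttest_1samp(data, popmean=50.0)

# BIC-based Bayes factor approximation: a null model with the mean fixed at 50
# versus an alternative with the mean estimated from the data (Gaussian noise in both).
n = len(data)
rss_null = np.sum((data - 50.0) ** 2)
rss_alt = np.sum((data - data.mean()) ** 2)
bic_null = n * np.log(rss_null / n)             # no free mean parameter
bic_alt = n * np.log(rss_alt / n) + np.log(n)   # one free mean parameter
bf01 = np.exp((bic_alt - bic_null) / 2.0)       # evidence for the null hypothesis

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, approximate BF01 = {bf01:.2f}")
```

The two summaries answer somewhat different questions, which is why the same data can clear the conventional p < .05 bar under one convention while looking unimpressive under the other -- essentially the shape of the disagreement between Bem and WM.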

Near the end of their paper, WM point out that Bem's methodology is quite typical of scientific psychology research, and in fact more rigorous than that of most psychology papers published in good journals. What they don't note, but could have, is that the same sort of methodology is used in pretty much every area of science.

They then make a series of suggestions regarding how psi research should be conducted, which would indeed increase the rigor of the research, but which a) are not followed in any branch of science, and b) would make psi research sufficiently cumbersome and expensive as to be almost impossible to conduct.

I didn't dig into the statistics deeply enough to assess the appropriateness of the particular test that WM applied (leading to their conclusion that Bem's results don't show statistical significance, for most of his experiments).

However, I am quite sure that if one applied this same Bayesian t-test to a meta-analysis over the large body of published psi experiments, one would get highly significant results. But then WM would likely raise other issues with the meta-analysis (e.g. the file drawer effect again).

Conclusion

I'll be curious to see the next part of the discussion, in which a psi-friendly statistician like Jessica Utts (or a statistician with no bias on the matter, but unbiased individuals seem very hard to come by where psi is concerned) discusses the appropriateness of WM's re-analysis of the data.

But until that, let's be clear on what WM have done. Basically, they've

  • raised the tired old, oft-refuted spectre of the file drawer effect, using different verbiage than usual
  • argued that one should analyze psi data using an a priori bias against it (and accused Bem of "fallacious" reasoning for not doing so)
  • pointed out that if one uses a different statistical test than Bem did [though not questioning the validity of the statistical test Bem did use], one finds that his results, while positive, fall below the standard of statistical significance in most of his experiments

The practical consequence of their last point is that, if Bem's same experiments were done again with the same sort of results as obtained so far, then eventually a sufficient sample size would be accumulated to demonstrate significance according to WM's suggested test.

So when you peel away the rhetoric, what the WM critique really comes down to is: "Yes, his results look positive, but to pass the stricter statistical tests we suggest, one would need a larger sample size."

Of course, there is plenty of arbitrariness in our conventional criteria of significance anyway -- why do we like .05 so much, instead of .03 or .07?

So I really don't see too much meat in WM's criticism. Everyone wants to see replications of the experiments anyway, and no real invalidity in Bem's experiments, results or analyses was demonstrated.... The point made is merely that a stricter measure of significance would render these results (and an awful lot of other scientific results) insignificant until replication on a larger sample size was demonstrated. Which is an OK point -- but I'm still sorta curious to see a more careful, less obviously biased analysis of which is the best significance test to use in this case.

Sunday, November 21, 2010

The Turing Church, Religion 2.0, and the Mystery of Consciousness

It was my pleasure to briefly participate in Giulio Prisco's Turing Church Online Workshop 1, on Saturday, November 20, 2010 in Teleplace -- a wonderfully wacky and wide-ranging exploration of transhumanist spirituality and “Religion 2.0.”

The video proceedings are here.

I didn't participate in the whole workshop since it was a busy day for me; I just logged on briefly to give a talk and answer some questions. But I found the theme quite fascinating.

Giulio said I should assume the participants were already basically familiar with my thinking on transhumanist spirituality as expressed in my little book A Cosmist Manifesto that I wrote earlier this year, and he asked me to venture in some slightly different direction. I'm not sure I fulfilled that request all that well, but anyway, I'll paste here the notes I wrote as a basis for my talk in the workshop. I didn't read these notes with any precision, so if you want to know what I actually said you'll have to watch the video; but the talk was a more informal improvisation on the same basic theme...

"The relation between transhumanism and spirituality is a big topic, which I've thought about a lot -- right now I'll just make a few short comments. Sorry that I won't be able to stick around for this whole meeting today, I have some family stuff I need to do, but I'm happy to be able to participate at least briefly by saying a few remarks.



"Earlier this year I wrote a book touching on some of these comments, called "A Cosmist Manifesto" -- I'm not going to reiterate all that material now, just touch on a few key points.



"The individual human mind has a tendency to tie itself in what the psychologist Stanislaw Grof calls "knots" -- intricate webs of self-contradiction and fear, that cause emotional pain and cognitive confusion and serve as traps for mental energy. Ultimately these knots are largely rooted in the human self's fear of losing itself --- the self's fear of realizing that it lacks fundamental reality, and is basically a construct whose main goals are to keep the body going and reproducing and to preserve itself. These are some complicated words for describing something pretty basic, but I guess we all know what I'm talking about.



"And then there are the social knots, going beyond the individual ones… the knots we tie each other up in…



"These knots are serious problems for all of us -- and they're an even more serious problem when you think about the potential consequences of advanced technology in the next decade. We're on the verge of creating superhuman AI and molecular nanotech and brain-computer interfacing and so forth -- but we're still pretty much fucked up with psychological and social confusions! As Freud pointed out in Civilization and its Discontents, we're largely operating with motivational systems evolved for being hunter-gatherers in the African savannah, but the world we're creating for ourselves is dramatically different from that.



"Human society has come up with a bunch of different ways to get past these knots.



"One of them is religion -- which opens a doorway to transpersonal experience, going beyond self and society, opening things up to a broader domain of perceiving, being, understanding and acting. If you're not familiar with more philosophical side of the traditional religions you should look at Aldous Huxley's classic book "The Perennial Philosophy" -- it was really an eye-opener for me.



"Another method for getting past the knots is science. By focusing on empirical data, collectively perceived and understood, science lets us go beyond our preconceptions and emotions and biases and ideas. Science, with its focus on data and collective rational understanding, provides a powerful engine for growth of understanding. There's a saying that "science advances one funeral at a time" -- i.e. old scientific ideas only die when their proponents die. But the remarkable thing is, this isn't entirely true. Science has an amazing capability to push people to give up their closely held ideas, when these ideas don't mesh well with the evidence.



"What I see in the transhumanism-meets-spirituality connection is the possibility of somehow bringing together these two great ways of getting beyond the knots. If science and spirituality can come together somehow, we may have a much more powerful way of getting past the individual and social knots that bind us. If we could somehow combine the rigorous data focus of science with the personal and collective mind-purification of spiritual traditions, then we'd have something pretty new and pretty interesting -- and maybe something that could help us grapple with the complex issues modern technology is going to bring us in the next few decades



"One specific area of science that seems very relevant to these considerations is consciousness studies. Science is having a hard time grappling with consciousness, though it's discovering a lot about neural and cognitive correlates of consciousness. Spiritual traditions have discovered a lot about consciousness, though a lot of this knowledge is expressed in language that's hard for modern people to deal with. I wonder if some kind of science plus spirituality hybrid could provide a new way for groups of people to understand consciousness, combining scientific data and spiritual understanding.



"One idea I mentioned in the Cosmist Manifesto book is some sort of "Confederation of Cosmists", and Giulio asked me to say a little bit about that here. The core idea is obvious -- some kind of social group of individuals interested in both advanced technology and its implications, and personal growth and mind-expansion. The specific manifestation of the idea isn't too clear. But I wonder if one useful approach might be to focus on the cross-disciplinary understanding of consciousness -- using science and spirituality, and also advanced technologies like neuroscience and BCI and AGI. My thinking is that consciousness studies is one concrete area that truly seems to demand some kind of fusion of scientific and spiritual ideas … so maybe focusing on that in a truly broad, cross-tradition, Cosmist way could help us come together more and over help us work together to overcome our various personal and collective knots, and build a better future, and all that good stuff….



"Anyway there are just some preliminary thoughts, these are things I'm thinking about a lot these days, and I look forward to sharing my ideas more with you as my thoughts develop -- and I'll be catching the rest of this conference via the video recordings later on."



Fun stuff to think about -- though I don't have too much time for it these days, as my AGI and bioinformatics work seems to be consuming nearly all of it. But at some future point, I really do think the cross-disciplinary introspective/scientific individual/collective investigation of consciousness is well worth devoting attention to, and is going to bear some pretty fascinating fruit....