
Tuesday, October 07, 2008

Cosmic, overblown Grand Unified Theory of Development

In the 80's I spent a lot of time in the "Q" section of various libraries, which hosted some AI books, and a lot of funky books on "General Systems Theory" and related forms of interdisciplinary scientifico-philosophical wackiness.

GST is way out of fashion in the US, supplanted by Santa Fe Institute style "complexity theory" (which takes the same basic ideas but fleshes them out differently using modern computer tech), but I still have a soft spot in my heart for it....

Anyway, today when I was cleaning out odd spots of the house looking for a lost item (which I failed to find and really need, goddamnit!!) I found some scraps of paper that I scribbled on a couple years back while on some airline flight or another, sketching out the elements of a general-systems-theory type Grand Unified Theory of Development ... an overall theory of the stages of development that complex systems go through as they travel from infancy to maturity.

I'm not going to type in the whole thing here right now, but I made a table depicting part of it, so as to record the essence of the idea in some nicer, more permanent form than the fading dirty pieces of notebook paper....

The table shows the four key stages any complex system goes through, described in general terms, and then explained in a little more detail in the context of two examples: the human (or humanlike) mind as it develops from infancy to maturity, and the maturation of life from proto-life up into its modern form.

I couldn't get the table to embed nicely in this blog interface, so it's here as a PDF:

This was in fact the train of thought that led to two papers Stephan Bugaj and I wrote over the last couple years, on the stages of cognitive development of uncertain-inference based AI systems, and the stages of ethical development of such AI systems. While not presented as such in those papers, the stages given there are really specialized manifestations of the more general stages outlined in the above table.

Stephan and I are (slowly) brewing a book on hyperset models of mind and reality, which will include some further-elaborated, rigorously-mathematized version of this general theory of development...

Long live General Systems thinking ;-)

Monday, October 06, 2008

Parable of the Researcher and the Tribesman

I run an email discussion list on Artificial General Intelligence, which is often interesting, but lately the discussions there have been more frustrating than fascinating, unfortunately.

One recent email thread has involved an individual repeatedly claiming that I have not presented any argument as to why my designs for AGI could possibly work.

When I point to my published or online works, which do present such arguments, this individual simply says that if my ideas make any sense, I should be able to summarize my arguments nontechnically in a few paragraphs in an email.

Foolishly, I managed to get sufficiently annoyed at this email thread that I posted a somewhat condescending and silly parable to the email list, which I thought I'd record here, just for the heck of it....

What I said was:

In dialogues like this, I feel somewhat like a medical researcher talking to a member of a primitive tribe, trying to explain why he thinks he has a good lead on a potential drug to cure a disease. Imagine a dialogue like this:

  • RESEARCHER: I'm fairly sure that I'll be able to create a drug curing your son's disease within a decade or so
  • TRIBESMAN: Why do you believe that? Have you cured anyone with the drug?
  • RESEARCHER: No, in fact I haven't even created the drug yet
  • TRIBESMAN: Well, do you know exactly how to make the drug?
  • RESEARCHER: No, not exactly. In fact there is bound to be some inventive research involved in making the drug.
  • TRIBESMAN: Well then how the hell can you be so confident it's possible?
  • RESEARCHER: Well I've found a compound that blocks the production of the protein I know to be responsible for causing the disease. This compound has some minor toxic effects in rats, but it's similar in relevant respects to other compounds that have shown toxic effects in rats, and then been minorly modified to yield variant compounds with the same curative impacts without toxic effects
  • TRIBESMAN: So you're saying it's cured the same disease in rats?
  • RESEARCHER: Yes, although it also makes the rats sick ... but if it didn't make them sick, it would cure them. And I'm pretty sure I know how to change it so as to make it not make the rats sick. And then it will cure them.
  • TRIBESMAN: But my son is not a rat. Are you calling my son a rat? You don't seem to understand what a great guy my son is. All the women love him. His winky is twice as long as yours. What does curing a rat have to do with curing my son? And it doesn't even cure the rat. It makes him sick. You just want to make my son sick.
  • RESEARCHER: Look, you don't understand. If you look at all the compounds in that class, you'll see there are all sorts of ways to modify them to avoid these toxic effects.
  • TRIBESMAN: So you're saying I should believe you because you're a big important scientist. But your drug hasn't actually cured anyone. I don't believe it'll possibly work. People come by here all the time trying to sell me drugs and they never work. Those diet pills were supposed to make my wife 100 pounds thinner, but she still looks like a boat.
  • RESEARCHER: I'm not responsible for the quacks who sold you diet pills
  • TRIBESMAN: They had white lab coats just like yours
  • RESEARCHER: Look, read my research papers. Then let's discuss it.
  • TRIBESMAN: I can't read that gobbledygook. Do all the other researchers agree with you?
  • RESEARCHER: Some of them do, some of them don't. But almost all of them who have read my papers carefully think I at least have a serious chance of turning my protein blocker into a cure. Even if they don't think it's the best possible approach.
  • TRIBESMAN: So all the experts don't even agree, and you expect me to take you seriously?
  • RESEARCHER: Whatever. I'll talk to you again when I actually have the cure. Have a nice few years.
  • TRIBESMAN: We won't need your cure by then, Mr. Scientist. We're curing him with leeches already.

That just about sums it up....

The point is, the researcher's confidence comes from his intuitive understanding of a body of knowledge that the tribesman cannot appreciate due to lack of education.

The tribesman says "you haven't cured anyone, therefore you know nothing about the drug" ... but the researcher has a theoretical framework that lets him understand something about the drug's activity even before trying it on people.

Similarly, some of us working on AGI have a theoretical framework that lets us understand something about our AGI systems even before they're complete ... this is what guides our work building the systems. But conveying our arguments to folks without this theoretical framework is, unfortunately, close to impossible.... If I were to write some sort of popular treatment of my AGI work, the first 75% of it would have to consist of a generic explanation of background ideas (which is part of the reason I don't take the time to write such a thing ... it seems like an awful lot of work!!).

Obvious stuff, of course. I'm metaphorically kicking myself for burning half an hour in this sort of absurd email argument tonight ... gotta be more rigorous about conserving my time and attention, there's a lot of work to be done!!!

Saturday, October 04, 2008

Reflections on "Religulous" ... and introducing the Communication Prior

I saw the documentary Religulous w/ my kids last night (well, the two who still live at home) ... it's a sort of goofball documentary involving comedian Bill Maher interviewing people with absurd religious beliefs (mostly focusing on Christians, Jews and Muslims, with a few other oddities like a Scientologist street preacher and an Amsterdam cannabis-worshipper) ...

This blog post records some of my random reactions to the movie, and then at the end gets a little deeper and presents a new theoretical idea that popped into my head while thinking about the difficulty of making a really sound intellectual refutation of common religious beliefs.

The new theoretical idea is called the Communication Prior ... and the crux is the notion that in a social group, the prior probability of a theory may be defined in terms of the ease with which one group member can rapidly and accurately communicate the theory to another. My suggestion is that the Communication Prior can serve as the basis for a pragmatic everyday interpretation of Occam's Razor (as opposed to the Solomonoff-Levin Prior, which is a formal-computer-science interpretation). This is important IMHO because science ultimately boils down to pragmatic everyday social phenomena not formal mathematical phenomena.

Random Reactions to Religulous

First a bit about Religulous, which spurred the train of thought reported here....

Some of the interviews in the movie were really funny -- for instance a fat Puerto Rican preacher named Jesus who claims to literally be the Second Coming of Christ, and to have abolished sin and hell ...

and as a whole the interviews certainly made Maher's point that all modern religions are based on beliefs that seem bizarre and twisted in the light of the modern scientific world-view ... the talking snake in the Garden of Eden ... Judgment Day when God comes to Earth and sorts the goodies from the baddies ... the notion that rapture will come only when the Muslims have finally killed all the Jews ... etc. etc. etc. etc. etc. ...

Some interesting historical tidbits were presented as well, e.g. the Egyptian figure Horus, who well predated Christ and whose life-story bears remarkable similarities to the Biblical tale of Jesus....

I've never been a huge fan of stand-up comedians; and among comedians Maher doesn't really match my taste that well ... he's not outrageous or absurd enough ... so I got a bit weary of his commentary throughout the film, but I felt the interviews and interspersed film and news snippets were well-done and made his point really well.

Of course, it's a damn easy point to make, which was part of his point: Of course all religions ancient and modern have been based on bizarre, wacky, impossible-for-any-sane-person-to-believe, fictional-sounding ideas...

One point that came up over and over again in his dialogues with religious folks was his difference with them over the basic importance (or lack thereof) of faith. "Why," he kept asking, "is faith a GOOD thing? Why is it a good thing to believe stuff that has no evidence in favor of it? Why is it a good thing to believe stuff that makes no sense and contradicts observation and apparent reality?"

The answer the religious folks invariably give him is something like "Faith is a good thing because it saved my life."

Dialogue like: "I used to be a Satan worshipper and wasted decades of my life on sex and drugs ... Getting saved by Jesus saved my life blahblaa..."

Religion and Politics: Egads!

Maher's interview with a religious fundamentalist US Senator is a bit disturbing. Indeed, to have folks who believe Judgment Day is nigh, in charge of running the most powerful country in the world, is, uh, scary....

And note that our outgoing President, W Bush, repeatedly invokes his religious beliefs in justifying his policies. He explicitly states that his faith in God is the cornerstone of his policies. Scary, scary, scary. I don't want to live in a society that is regulated based on someone's faith in a supernatural being ... based on someone's faith in the literal or metaphorical truth of some book a bunch of whacked-out, hallucinating Middle-Easterners wrote 2000 years ago....

As Maher points out, this is a completely senseless and insane basis for a modern society to base itself on....

Maher's Core Argument

I don't expect Maher's movie to un-convert a substantial number of religious folks...

Their natural reaction will be: "OK, but you just interviewed a bunch of kooks and then strung their kookiest quotes together."

Which is pretty much what he did ... and in a way that may well be compelling as a tool for helping atheists feel more comfortable publicly voicing their beliefs (which I imagine was much of his purpose) ...

And it has to be noted that a deep, serious, thorough treatment of the topic of religion and irrationality would probably never get into movie theaters.

Modern culture, especially US culture but increasingly world culture as well, has little time for deep rational argumentation. Al Gore made this point quite nicely in his book The Assault on Reason ... which, however, not that many people read (the book contained too much rational argumentation...).

So it's hard to fault Maher's film for staying close to the surface and presenting a shallow argument against religion ... this is the kind of argument that our culture is presently willing to accept most easily ... and if atheists restricted themselves to careful, thorough, reflective rational arguments, the result would be that even fewer people would listen to them than is now the case....

Maher's argument is basically: All religions have absurd, apparently-delusional, anti-scientific beliefs at their core ... and these absurd beliefs are directly tied to a lot of bad things in the world ... Holy Wars and so forth ....

He also, correctly, traces the bizarre beliefs at the heart of religions to altered brain-states on the part of religious prophets.

As he notes, if someone today rambled around telling everyone they'd been talking to a burning bush up on a hill, they'd likely get locked into a mental institution and force-fed antipsychotics. Yet, when this sort of experience is presented as part of the history of religion, no one seems to worry too much -- it's no longer an insane delusion, it's a proper foundation for the government of the world ;-p

What Percentage of the Population Has a World View Capable of Sensibly Confronting the Singularity?

One thing that struck me repeatedly when listening to Maher's interviews was:

Wow, given all the really HARD issues the human race faces during this period of rapidly-approaching Singularity ... it's pathetic that we're still absorbed with these ridiculous debates about talking snakes and Judgment Day and praying to supreme beings ... egads!!!

While a digression from this blog post, this is something I think about a lot, in the context of trying to figure out the most ethical and success-probable approach to creating superhuman AI....

On the one hand, due to various aspects of human psychology, I don't trust elitism much: the idea of a small group of folks (however gifted and thoughtful) creating a superhuman AI and then transforming the world, without broader feedback and dialogue, is a bit scary....

On the other hand, I've got to suspect that folks who believe in supreme beings, Judgment Day, jihad, reincarnation and so forth are not really likely to have much useful contribution to the actual hard issues confronting us as Singularity approaches....

Of course, one can envision a lot of ways of avoiding the difficulties alluded to in the prior two paragraphs ... but also a lot of ways of not avoiding them....

One hope is that Maher's movie and further media discourse legitimizing atheism will at least somewhat improve the intellectual level of broad public conversation ... so that, maybe, in a decade or so it won't be political suicide for a US Senatorial candidate to admit they're not religious or superstitious, for example...

On the other hand, it may well eventuate that this process of de-superstitionizing the world will be damn slow compared to the advent of technology ...

But, that's a topic for another lengthy blog post, some other weekend....

The Issues Posed by the "Problem of Induction" and the Philosophy of Science for the Argument Against Religion

Now I'll start creeping, ever so slowly, toward the more original intellectual content of this post, by asking: What might a more deeply reasoned, reflective argument against religion look like?

This topic is actually fairly subtle, because it gets at deep issues in the philosophy of science ... such as I reviewed in an essay a few years ago (included in my 2006 book The Hidden Pattern)...

Although Maher talks a lot about scientific evidence ... and correctly points out that there is no scientific evidence for the various kooky-sounding claims at the core of modern religions ... he doesn't seem to have thought much about the nature of scientific evidence itself. (Which is no surprise as he's a professional comedian and actor ... but of course, he's now a self-styled commentator on politics, science and religion, so....)

Evidence, in the sense of raw data, is not disputed that often among scientists -- and even religious folks don't dispute raw data collected by scientists that often. Statements like "this laboratory instrument, at this point in time, recorded this number on its dial" are not oft disputed. Sometimes argumentation may be made that not enough data were recorded to evaluate an empirical statement like the above (say, the temperature in the room, or the mind-state of the lab assistant, were not recorded): but this still isn't really an argument that the data are wrong, more an argument that the data are too incomplete to draw useful conclusions from them.

(The only area of research I know where raw data is routinely disputed is psi ... which I already addressed in a prior blog post.)

But the step from raw items of evidence to theory is a big one -- a bigger one than Maher or most naively-pro-science advocates care to admit.

This of course relates to the uncomfortable fact that the Humean problem of induction was never solved.

As Maher points out repeatedly in his film, we just don't really know anything for sure ... and it appears that by the basic logic of the universe and the nature of knowledge itself, we never can.

What he doesn't point out (because it's not that kind of movie) is that without making some kind of background assumptions (going beyond the raw evidence collected), we also can't really make probability estimates, or probabilistic predictions about the outcomes of experiments or situations.

Given a set of observations, can we predict the next observations we'll see? Even probabilistically? As Hume pointed out, we can do so only by making some background assumptions.

For instance, we can adopt the Occam's Razor heuristic and assume that there will be some simple pattern binding the past observations to the future ones.... But that begs the question: what is the measure of simplicity?

Hume says, in essence, that the relevant measure of simplicity is human nature.

But this conclusion may, initially, seem a bit disturbing in the context of the religion vs. science dichotomy.

Because human nature is, in many ways, not to put it too tactlessly, more than a bit fucked-up.

Maher doesn't review the evidence in this regard, but he does allude to it, e.g. interviewing the discoverer of the "God gene" ... the point is: it seems to be the case that religious experience and religious delusions are deeply tied to intrinsic properties of the human brain.

What this suggests is that the reason religion is so appealing to people is precisely that it is assigned a high prior probability by their Humean "human nature" ... that our brain structure, which evolved in superstitious pre-civilized societies, biases us towards selecting theories that not only explain our everyday empirical observations, but also involve talking animals, voices speaking from the sky, tribalism, physical rewards or punishments for moral transgressions, and so forth...

So when Maher says that "it's time for us to grow up" and let go of these ancient religious superstitions and just be rational and scientific ... two big problems initially appear to arise, based on cursory consideration of the philosophy of science:

  • There is no such thing as "just being rational" ... applying rationality to real observations always involves making some background assumptions
  • The ancient religious superstitions are closely related to patterns wired into our brains by evolution ... which are naturally taken by us as background assumptions...

So when he asks folks to drop their religious beliefs, is Maher really asking folks to self-modify their brains so as not to apply prior distributions supplied by evolution (which has adapted our cognitive patterns to superstitious, tribal society), and to instead apply prior distributions supplied by the scientific and rationalist tradition...?

If so, that would seem a really tough battle to fight. If this were the case, then essentially, the transcendence of religious superstitions would require a kind of cognitive transhumanism.

Fortunately, though, I don't think the situation is quite that bad. Cognitive transhumanism (which I define as the attempt to go beyond innately-human patterns of thinking) certainly can be a huge help in the transcendence of superstitions, but it's not strictly necessary.

It appears to me that it's enough "just" to get people to think more clearly about the relationship between their theories and ideas, their community, and their community's collective observations. If people understand this relationship clearly, then it's not actually necessary for them to transcend their various superstition-oriented human biases in order for them to go beyond naive religious ideas.

To elaborate on this point further I'll need to get technical for a moment and introduce a bit of Bayesian statistics and algorithmic information theory...

The Communication Prior

I'll now shift from philosophical babbling to basic math for a few paragraphs.

Recall the basics of Bayes' Theorem. Setting T for "theory" and E for "evidence", it says:

P(T|E) = P(T) P(E|T)/P(E)

... i.e., it says that a person's subjective probability that a theory T is true given that they receive evidence E, should be equal to their prior probability that T is true times the probability that they would receive evidence E if hypothesis T were true, divided by the probability of E (and the latter is usually found by summing over the weighted conditional probabilities given all potential theories).
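As a toy numeric illustration of that verbal description (the two theories and all the numbers below are invented purely for illustration), the rule can be computed directly:

```python
# Toy illustration of Bayes' rule: P(T|E) = P(T) * P(E|T) / P(E).
# The theories and all numbers here are invented for illustration.

# Prior probabilities of two competing theories.
prior = {"T1": 0.7, "T2": 0.3}

# Probability of receiving evidence E if each theory were true.
likelihood = {"T1": 0.1, "T2": 0.6}

# P(E): sum over the weighted conditional probabilities of all theories.
p_evidence = sum(prior[t] * likelihood[t] for t in prior)

# Posterior: how strongly each theory is believed after seeing E.
posterior = {t: prior[t] * likelihood[t] / p_evidence for t in prior}

print(posterior)  # T2 ends up more probable, despite its lower prior
```

Note how the prior assignment matters: had T1's prior been high enough, it would have stayed ahead even given the same evidence.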

It is critical to note that, according to Bayes rule, one's conclusion about the probability of theory T given evidence E depends upon one's prior assignment of probabilities.

Now, a real mind with computational limitations cannot always apply Bayes rule accurately ... so the best we can do is approximate.

(Some cognitive theorists, such as Pei Wang, argue that a real mind shouldn't even try to approximate Bayes rule, but should utilize a different logic specially appropriate for cognitive systems with severe resource limitations ... but I don't agree with this and for the purpose of this blog post will assume it's not the case.)

But even if a mind has enough computational resources to apply Bayes rule correctly, there remains the problem of how to arrive at the prior assignment of probabilities.

The most commonsensical way is to use Occam's Razor, the maxim stating that simpler hypotheses should be considered a priori more probable. But this also leads to some subtleties....

The Occam maxim has been given mathematical form in the Solomonoff-Levin universal prior, which says very roughly that the probability of a hypothesis is higher if the computer-programs for computing that hypothesis are shorter (yes, there's more to it, so look it up if you're curious).

Slightly more rigorously, Wikipedia notes that:

The universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion.

Note in the above quote that the probability of a program may be estimated as the probability that the program is found by randomly selecting bits in the program-defining section of the memory of a computer.
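The bit-flipping intuition behind that estimate can be sketched in a few lines. This is only the length-weighting idea: the real construction sums over all programs of a universal machine, whereas the program list below is entirely hypothetical.

```python
# Toy sketch of the length-weighting behind the Solomonoff-Levin prior:
# a program of length L bits is produced by randomly flipping bits in
# program memory with probability 2^-L, so shorter programs dominate.

def program_weight(program_bits: str) -> float:
    # Chance of generating exactly this bit string by fair coin flips.
    return 2.0 ** -len(program_bits)

# Hypothetical programs that each "compute something starting with" a prefix p.
programs_for_prefix = ["0110", "01101001", "0110100110010110"]

# Universal-prior-style probability of the prefix: sum over those programs.
prior_p = sum(program_weight(q) for q in programs_for_prefix)

print(prior_p)  # dominated by the 4-bit program's weight, 2**-4 = 0.0625
```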

Anyway: That's very nice for mathematicians, but it doesn't help us much in everyday life ... because even if we wanted to apply this kind of formalization in everyday life (say, to decide an issue like evolution vs. creationism), the mapping of real-world situations into mathematical formalisms is itself highly theory-laden....

So what we really need is not just a mathematical formalization of a universal prior, but a commonsensical formalization of a prior that is helpful for everyday human situations (even if not truly universal).

One suggestion I have is to use Solomonoff's core idea here, but interpret it a bit differently, in terms of everyday human communicational operations rather than mathematical, abstracted machine operations.

Paraphrasing the above quoted text, I propose that

The communicational prior probability of any prefix p of a computable sequence x, relative to a social group G and a body of evidence E, is the sum of the communicational probabilities (calculated relative to G and E) of all programs that compute something starting with p.

But how then to compute the communicational probability of a program relative to a social group G and body of evidence E?

As the name indicates, this is defined, not in terms of bit-flipping, but in terms of communication within the group.

I define the communicational probability of a program p as a decreasing function of the average amount of time it would take a randomly chosen member A of group G to communicate p to another randomly chosen member B of group G, with sufficient accuracy that B can then evaluate the outputs of p on randomly selected inputs drawn from E.

(The assumption is that A already knows how to evaluate the program on inputs drawn from E.)

One can also bake a certain error rate into this definition, so that B has to be able to correctly evaluate the outputs of p only on a certain percentage of inputs drawn from E.

This defines what I suggest to call the Communication Prior.
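A minimal computational sketch of the idea follows. The exponential-decay weighting and the 30-minute "half-life" are my own assumptions, chosen by analogy with the 2^-length weighting of the Solomonoff-Levin prior; the theories and times are invented.

```python
# Minimal sketch of the Communication Prior over a toy, finite theory set.
# Exponential weighting and all numbers are illustrative assumptions.

# Hypothetical average time (minutes) for a randomly chosen member of
# group G to communicate each theory-program to another member, accurately
# enough that the receiver can evaluate it on evidence drawn from E.
avg_comm_time = {
    "evolution_core": 30.0,
    "creationism_plus_patches": 300.0,  # special pleading takes long to teach
}

# Unnormalized weight: quicker to communicate => a priori more probable.
raw = {t: 2.0 ** (-minutes / 30.0) for t, minutes in avg_comm_time.items()}

# Normalize into a prior over this finite set of theories.
total = sum(raw.values())
comm_prior = {t: w / total for t, w in raw.items()}

print(comm_prior)  # nearly all prior mass goes to the quickly-teachable theory
```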

A variant would be the communication-and-testing probability of a program p, definable as a decreasing function of the average, for randomly chosen members A and B in the social group such that A already knows how to evaluate p on inputs in E, of

  • the amount of time it would take A to communicate p to B, with sufficient accuracy that B can then evaluate the outputs of p on randomly selected inputs drawn from E
  • the amount of time it actually takes B to evaluate p on a randomly selected element of E
(One can of course weight the two terms in this average, if one wants to.)

Taking a bit of terminological liberty, I will also group this communication-testing variant as being under the umbrella of the "Communication Prior."
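The variant can be sketched the same way; the equal weights and the sampled pair data below are hypothetical, and the point is just to fold the learner's actual evaluation time into the cost alongside the teaching time.

```python
# Sketch of the communication-and-testing variant. Weights and sample
# data are hypothetical assumptions, for illustration only.

# (teach_minutes, evaluate_minutes) for randomly chosen pairs (A, B),
# where A already knows how to evaluate the program p on inputs in E.
sampled_pairs = [
    (20.0, 5.0),
    (40.0, 10.0),
    (30.0, 6.0),
]

w_teach, w_test = 1.0, 1.0  # the two terms may be weighted, if one wants

avg_cost = sum(w_teach * t + w_test * e
               for t, e in sampled_pairs) / len(sampled_pairs)

# As before, turn the average cost into an unnormalized probability weight.
comm_test_weight = 2.0 ** (-avg_cost / 30.0)

print(avg_cost, comm_test_weight)
```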

Pragmatically, what does this mean about theories?

Roughly speaking, it means that the a priori probability of a theory (i.e. the "bias toward" a theory) has to do with ease of effectively communicating that theory within a social group ... and (in the communication-testing variant), the ease of effectively communicating how to efficiently apply the theory.

Of course, the a priori probability alone doesn't tell you how good a theory is. Communicating a theory may be very simple, but so what ... unless the theory explains something. But the "explanation" part is taken care of in Bayes Rule, in the P(E | T) / P(E) fraction. If the observed evidence is not surprisingly likely given the assumption of the theory, then this fraction will be small.

The Communication Prior is similar in spirit to the Solomonoff-Levin Universal Prior ... but it's not about formal, mathematical, theoretical systems, it's about real-world social systems, such as human communities of scientists. In terms of philosophy of science, this is sort-of a big deal, as it bridges the gap between formalist and social-psychology-based theories of science.

What's the Take-Away from All That Techno-babble?

So, roughly speaking, the nontechnical take-away from the above technical excursion should be the following suggestion:

A theory should be considered good within a social group, to the extent that it explains the evidence better than it would explain a bunch of randomly selected evidence -- and it's reasonably rapid to effectively communicate, to others in the group, information about how to efficiently apply the theory to explain the available evidence.

This may seem simple or almost obvious, but it doesn't seem to have been said before, in quite so crisp a way.

(In my prior essay on philosophy of science, I left off without articulating any sort of specific simplicity measure: the Communication Prior fills in that gap, thus bringing the ideas in that essay closer to practical applicability.)

Consider for instance the evolution vs. creationism argument. For my new suggestion to favor evolution over creationism, what would have to be true?

Whether the simple essential core of creationism or evolution is easier to communicate within a human social group, really depends on the particular social group.

However, the simple essential core of creationism does an extremely bad job of explaining why the observed body of evidence (e.g. the fossil record) is more likely than a lot of other possible bodies of evidence.

To make a version of creationism that would explain why the observed body of evidence is particularly likely, one would need to add a heck of a lot of special-pleading-type explanations onto the essential core of creationism. This is because creationism does not effectively compress or compactify the body of observed data.

So, to get a version of creationism that is equally explanatory of the particulars of the evidence as evolution, one needs to make a version of creationism that takes a long time to communicate.

Conclusion: creationism is worse than evolution.

(OK, we don't really need to go through so much complexity to get to such an obvious conclusion! But I'm just using that example to make a more general point, obviously.)
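The evolution-vs-creationism comparison can be sketched numerically. All the numbers and the exponential weighting are invented for illustration; by construction, the "patched" creationism explains the evidence equally well, so it loses purely on communication time.

```python
# Toy sketch of the overall suggestion: score a theory within a group by
# combining a Communication-Prior-style weight with how much better it
# explains the actual evidence than randomly selected evidence.
# All numbers are invented for illustration.

theories = {
    # name:              (minutes to teach, P(actual E | T), P(random E | T))
    "evolution":           (30.0,  0.4, 0.001),
    "creationism_patched": (300.0, 0.4, 0.001),  # patched to explain E equally well
}

def score(teach_minutes, p_actual, p_random, halflife=30.0):
    prior = 2.0 ** (-teach_minutes / halflife)  # rapid to communicate => higher prior
    explanatory = p_actual / p_random           # how surprisingly well E is explained
    return prior * explanatory

scores = {name: score(*params) for name, params in theories.items()}

print(scores)  # the patched theory pays dearly for its long communication time
```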

Why Is Religion a Bad Idea?

Getting back to the initial theme of this overlong, overdiverse blog post, then: why is religion a bad idea?

Because we should judge our theories using Bayes rule with a communication prior ... or in other words, by asking that they explain the particulars of observed reality in a relatively rapidly communicable way.

There is a balance between success-at-detailed-explanation and rapid-communicability, and the exact way to strike this balance is going to be subtle and in some cases subjective. But, in the case of religious beliefs, the verdict is quite clear: the religious world view, compared to the scientific world view, fails miserably at explaining the particulars of observed reality in a relatively rapidly communicable way.

The key point here is that, even if people want to stick with their evolutionary-legacy-based inductive biases (which make them intuitively favor superstitious explanations), the failure of religious theories to explain the particulars of observed reality is now so drastic and so obvious, that anyone who really carefully considers the evidence should reject these religious theories anyway.

Maher's film points out sensationalistically silly aspects of religious belief systems. But these aren't really the right anti-religion argument to use, in terms of philosophy of science and the theory of rationality. After all, are the Big Bang and Big Crunch and the evolution of humans from apes really any less everyday-ishly wacky than Judgment Day and the talking snake in the Garden of Eden?

The right argument to use is that, if one assumes Bayes rule plus a Communication Prior (or any other sensible, everyday-reality-based prior), then religious theories fail miserably.

Of course, almost no one on the planet can understand the previous sentence ... which is why Maher's approach of dramatically emphasizing the most absurdly wacky religious beliefs and believers is probably a way more effective PR strategy!
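
For readers who do want to unpack that sentence, here is a minimal toy sketch in Python, with entirely invented numbers, of Bayesian theory comparison under a communication prior -- i.e., a prior in which a theory's probability decays exponentially with the number of bits needed to communicate it. The function name and the specific figures are mine, purely for illustration:

```python
import math

def log_posterior_score(description_length_bits, log_likelihood):
    """Log-posterior up to a normalizing constant, under a communication
    prior: prior probability ~ 2^(-description length in bits)."""
    log_prior = -description_length_bits * math.log(2)
    return log_prior + log_likelihood

# Invented numbers, purely for illustration:
# Theory A is very short to state but explains the detailed evidence poorly;
# Theory B takes longer to communicate but fits the evidence far better.
score_a = log_posterior_score(description_length_bits=100, log_likelihood=-10_000)
score_b = log_posterior_score(description_length_bits=5_000, log_likelihood=-1_000)

# B's extra description length is dwarfed by its vastly better fit
print(score_b > score_a)  # True
```

The point of the toy is just the tradeoff: a longer theory can still win, but only when its gain in explanatory power exceeds its cost in description length.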

The Emotion Prior

Finally, another suggestion I have regarding the popularity of religious beliefs has to do with something my ex-wife said to me once, shortly after her religious conversion to Buddhism, a topic about which we had numerous arguments (some heated, some more rational and interesting, none usefully conclusive nor convincing to either of us). What she said was: "I believe what I need to believe in order to survive."

She didn't just mean "to survive physically" of course ... that was never at issue (except insofar as emotional issues could have threatened her physical survival) ... what she meant was "to survive emotionally" ... to emotionally flourish ...

My (rather uncontroversial) suggestion is that in many cases religious people -- and others -- have a strong bias toward theories that they enjoy believing.

Or in other words: "If believing it feels good, it can't be wrong!"

This is probably the main issue in preaching atheism: one is asking people to

  • adopt (some approximant of) Bayes rule with a Communication Prior (or similar)
  • actually carefully look at the evidence that would be used in Bayes rule

... rather than to, on the other hand,

  • avoid looking at evidence that might disconfirm one's theory
  • utilize an Emotion Prior when evaluating various theories that might explain the evidence

The question is then whether, in each individual case,

  • the Emotion Prior outweighs the Communication Prior (or similar)
  • the sociopsychological pressure to look at evidence outweighs the sociopsychological pressure to ignore it

Ignoring evidence gets harder and harder as the Internet broadcasts data to everyone, all the time....
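
As a crude toy model of that first choice (again in Python, with invented numbers and a deliberately simplistic scoring scheme of my own), one can compare the verdicts the two priors deliver on the same body of evidence:

```python
import math

def log_posterior(log_prior, log_likelihood):
    # Bayes rule in log form, up to a normalizing constant
    return log_prior + log_likelihood

# Invented log-likelihoods: how well each theory explains the evidence
evidence_fit = {"theory_A": -9_000.0, "theory_B": -1_000.0}

# Communication prior: penalize the bits needed to state the theory
comm_prior = {"theory_A": -200 * math.log(2), "theory_B": -4_000 * math.log(2)}

# Emotion prior: reward how comforting the belief is (arbitrary scores)
emotion_prior = {"theory_A": -1.0, "theory_B": -10_000.0}

for name, prior in [("communication", comm_prior), ("emotion", emotion_prior)]:
    winner = max(evidence_fit, key=lambda t: log_posterior(prior[t], evidence_fit[t]))
    print(f"{name} prior picks: {winner}")
```

Under these invented numbers, the communication prior picks the theory that actually fits the evidence, while the emotion prior picks the comforting one -- which is the whole disagreement in miniature.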

To study these choices in an interesting way, one would need to model the internals of the believer's mind more subtly than has been done in this post so far....

But anyway ... the evidence of the clock in front of me is that I have spent too much time amusing myself by writing this blog post, and now have more useful things to do ... so, till next time!

P.S. Thanks to my wife Izabela for discussions leading to the introduction of the communication-testing variant of the Communication Prior, after the more basic version had already been formulated....

Thursday, September 25, 2008

Another Transhumanist Nightmare

Some anonymous freak wrote this story, a piece of transhumanist/absurdist fantasy which includes me in a minor role ... it's childish, but I have to say, mildly amusing...

Tuesday, September 23, 2008

The End of the American Era!! (Not)

I'm not generally a very political person ... my thinking and my life-decisions are pretty strongly focused on the "big picture": superhuman AI, the Singularity, transhumanism and all that.

I was deeply into politics as a teen (largely because my parents raised me to be), but as I realized that utopian political dreams were likely to founder on the intrinsic biological perversity of human nature, I drifted away from the political sphere and started thinking more about how to improve or transcend human nature itself....

However, every now and then some piece of political stupidity gets on my nerves sufficiently that I wind up burning time thinking about it.

One of these cases has occurred recently: I've become annoyed by a large number of people proclaiming that "the American era is finally ending." No empire rules forever, and blah blah blah.

I've been hearing this sort of talk for a while, but all the more intensely given this past week's American banking crisis.

So I decided to write a blog post to get my thoughts on the topic out of my head!

I've never been noted for my patriotism: I really don't care, at a fundamental level, about nations or other related manifestations of contemporary human society. I'll be happy to see them all go away once human nature is fundamentally reformed via radical technological advances.

I've also spent enough time living and traveling outside the US, to get some feel for the strengths and weaknesses of the good/bad old US of A.

My considered opinion of the "end of the American era" meme is that it's pretty much bullshit.

I also seem to look at the current financial crisis a little differently than most others (big surprise there, huh?).

The issues that investment banks, insurance companies and related institutions have recently experienced have been widely attributed to greed, poor government regulation, and so forth. These attributions are surely correct -- but any real event has multiple causes ("cause" being essentially a creature of subjective theory rather than physical reality anyway). And one cause is not being commented on enough: the phenomenal practical creativity embodied in all the recondite financial instruments (credit default swaps, mortgage strips and the like) underlying the recent woes.

There is some really cool math underlying these financial devices, and this math was largely invented and pragmaticized by American entrepreneurial thinkers. American quants have developed many new fields of financial math, and brought these into the real world, thus moving the global economy to a whole new level of complexity and efficiency.

Innovation always carries risks ... and we've seen that in the markets over the last weeks and months. But let's not forget how amazing the innovations are, and what tremendous positive potential they have.

I agree that exotic derivatives should be regulated more carefully. On the other hand, I also agree with their advocates that they add significant efficiency to the financial markets, and hence are a major asset to the world economy.

Of course, one can theoretically envision socioeconomic systems in which efficiency would be achieved by other, less perverted and convoluted means. But, as history shows, theoretically-envisioned socioeconomic systems are difficult to translate into realities, because of the subtleties of human psychology and culture.

And it's precisely these "subtleties of psychology and culture" that led America to invent quantitative finance ... and so many other amazing technological and scientific developments ... which is exactly why I tend to doubt the "American era" is at its end.

My contention, and it's not a terribly original one (but I may have a somewhat original slant), is that compared to other countries on the planet right now, the USA has a combination of cultural psychology and socioeconomic institutions that is uniquely well-suited to fostering practical creativity.

Note the compound of terms: "practical" and "creativity."

I don't think the US has any kind of monopoly on creativity itself. There are brilliant, creative minds everywhere. Some cultures foster creativity more than others ... and the US is pretty good at this ... but I'm not sure it's uniquely good.

And I don't think the US has any kind of monopoly on practicality, either. Although historically this has been a US characteristic, there are surely other nations that are currently more down-to-earth and practical than the US (as a generalization across various aspects of life).

However, the US seems to be uniquely good at taking creative new ideas and finding the first ways to give them practical implementations -- an art that requires a great deal of creativity in itself, of course.

What is it about the US that fosters practical creativity? It's no one thing. It's a synergetic combination of culture and institutions. The institutions help keep the culture alive, and the culture helps keep the institutions alive. Practical creativity is something that pervades many aspects of US life -- government, research, education and industry, for example. Precisely because of its pervasive and systemic nature, the memeplex that constitutes practical creativity in the US is difficult for other nations to copy, even if they have a genuine desire to.

To see what I mean more concretely, think about three examples: the Internet, the Human Genome Project, and the personal computer. How did these come about?

The history of the PC embodies many classic stories of American entrepreneurism, including the creation of Apple and Microsoft by young nerdy entrepreneurs out of nowhere. But it also tells you something about the flexibility of large US corporations relative to similar institutions elsewhere: it was IBM striking a deal with Bill Gates, some young nerd from nowhere with no real business experience, that set the PC industry on its modern path. Not to mention the freewheeling US corporate research lab culture of the time (Xerox PARC and all that). And the government research funding establishment played its role behind the scenes, for instance in funding the creation of mainframes that Bill Gates played with (often breaking the rules to do so) in high school and college, before starting Microsoft....

The Internet began as a project of ARPA (now DARPA), a US government research funding agency that has its strengths and weaknesses, but is notable for its chaotic approach to funding. DARPA program managers cycle in and out every 4 years so that no individual has too much power over resource allocation decisions. There are certainly "old boy networks" involved, and I've personally been fairly unhappy with DARPA's funding choices in my own research field of AI. However, it's interesting to compare the DARPA funding approach with the approach of, say, the Japanese government. Historically, the Japanese have had a tendency to fund huge, comprehensive, nationwide research programmes: e.g. the Fifth Generation computing initiative (which funded a large number of researchers to work on logic-based AI), or the current focus on robotics technology. As a crude approximation, it seems the Japanese funding system tends to push researchers to "all work on the same sort of thing at the same time", whereas the American research funding system is more chaotic, leading to a greater diversity of ideas getting explored simultaneously. We still are overly trend-following and narrow-focused in the US, from my point of view: for instance, AI funding has focused on narrow-AI, logic-based systems and neural net systems for far too long; and the biology community is taking way too long to wake up to the importance of systems biology. But, compared to the rest of the world, the US research funding system is a hotbed of creative chaos.

And then, once the Internet escaped the clutches of ARPA (due to the legislative action of folks like Al Gore, who famously bragged he "invented the Internet" due to his role in this political process), it spread through the collective activity of masses of software entrepreneurs. The Web was initially developed in Europe, but what made it a huge phenomenon was American entrepreneurship, pushed on by the relative ease of securing angel and venture funding in the US. I lived in Australia in the late 1990s, but when I wanted to start a software business I had to return to the US, because it was so hard to secure investment for an oddball software startup anywhere else (not that it was easy in the US, but it was a bit less painfully difficult...).

The Human Genome Project (which has ushered in a completely new era of genetics and medical research) began as a US government initiative, involving a network of university labs. And note that the US graduate education system is still by far the best in the world. Our elementary and high schools are generally pathetic compared to those of other developed nations, though there are many exceptionally good schools out there too (the US being a big, diverse place) ... but by the time one gets to grad school, the US is the place to be. Top undergrads from around the world vie to get into our grad schools, and top PhDs vie for postdoc positions at our universities.

But what accelerated the Human Genome Project was the entry of Celera Genomics into the picture -- a venture-funded entrepreneurial attempt to outdo the government genome sequencing project. The new ideas Celera introduced (shotgun sequencing) accelerated the government sequencing project as well, helping the latter to complete ahead of schedule and under budget. (Now Craig Venter, who founded Celera, is involved with a number of projects, some commercial and some nonprofit within government-funded labs ... including a far-out attempt to create the first artificial genome.)

In each of these three cases -- and I could have chosen many others -- we see a complex combination of individual scientific and entrepreneurial initiative, and the spontaneously coordinated, somewhat chaotic and happenstance interaction of government, commercial and educational institutions. This combination isn't planned in detail, and doesn't always make sense, and makes a lot of really stupid decisions (such as not funding advanced AI research much more amply), but it also does a lot of smart things ... and it interpenetrates with subtle, hard-to-describe aspects of American culture in ways that no one has yet been able to document.

Part of the story, of course, is the incredible diversity of the American population: our scientists and engineers, especially, come from all over the world ... and increasingly our business leaders do too. So American culture isn't exactly American culture: it's really world culture, but with an American slant. And this is one among many major differences between America and other contemporary nations, which is closely linked to the "practical creativity" memeplex. I can't see anywhere in Asia, or anywhere in Europe (except possibly England), adopting the "melting pot" aspect of American culture ... but without this melting-pot aspect, it seems to me that practical creativity will have a lot harder time really flourishing. The diversity of ideas and approaches that comes from welcoming and then chaotically blending cultures and outlooks from all over the world, is a major source of practical creativity.

The move from a manufacturing and service economy to a knowledge economy has become famous. The next step, I suggest, is going to be a gradual shift from a knowledge economy to a creativity economy. As knowledge work becomes commoditized, the really precious thing will be creativity work: but not abstract creativity-work detached from the everyday world ... practical-creativity work, aimed at moving the real world forward in unexpected directions. Because of this, I suspect the US will maintain its cultural and economic leadership role in the world for quite some time.

And we'd damn well better, because with all the debt we're racking up, we're basically placing a huge BET that we're going to dramatically increase our productivity via technological efficiency improvements of various sorts. It's a fairly large gamble, but calculated risks are part of the American way ... as recent events on Wall Street show, this approach definitely has its risks ... but my guess is that this gamble will ultimately pan out just fine.

Getting back to my futurist preoccupations: My best guess is that the bulk of the work of creating the Singularity is going to be centered in America. This work will surely be international -- my own current work on advanced AGI technology involves a team with members in South America, Europe, Australia, New Zealand and Asia as well as the US (no Antarcticans yet...). But there's a reason my company Novamente LLC is centered in the US and not these other countries, beyond historical happenstance ... the US is the place where businesses and nonprofit agencies are most willing to seriously consider the practical value of way-out-there technologies. So long as this doesn't change, the American era is going to keep on rolling ... at least that's my best guess at the moment ...

Monday, September 22, 2008

AGI Intelligence Testing

I spent a while this weekend thinking about what might be the right approach for testing the intelligence of early-stage AGI systems that are aimed at human-level, roughly human-like general intelligence (either as an end goal or an intermediate developmental milestone).

Some of my thoughts are summed up in an essay I posted at

I’ll quote the first few paragraphs here:

One of the many difficult issues arising in the course of research on human-level AGI is that of “evaluation and metrics” – i.e., AGI intelligence testing.

It’s not so hard to tell when you’ve achieved human-level AGI — though there is some subtlety here, which I’ll discuss below. However, assessing the quality of incremental progress toward human-level AGI is a much subtler matter. In this essay I’ll present some thoughts on this issue, culminating in a couple specific proposals:

1) Online School Tests, in which AGIs are tested via their ability to succeed in existing online educational fora

2) Of more immediate interest, a series of tests called the AGI Preschool Tests (AIP Tests, for short, pronounced "ape tests"), based on the notion of "multiple intelligences" and also on some novel ideas regarding learning-based intelligence testing.

The AIP Tests suggested here are specifically intended for AGI systems that control agents embodied in 3D worlds resembling the everyday human world, via either physical robots or virtually embodied agents. Very differently embodied AGI systems (e.g. systems to be initially taught purely via text, without any simulated human-like or animal-like body) would potentially need qualitatively different testing methodologies.