
Tuesday, October 07, 2008

Cosmic, overblown Grand Unified Theory of Development

In the '80s I spent a lot of time in the "Q" section of various libraries, which hosted some AI books, and a lot of funky books on "General Systems Theory" and related forms of interdisciplinary scientifico-philosophical wackiness.

GST is way out of fashion in the US, supplanted by Santa Fe Institute style "complexity theory" (which takes the same basic ideas but fleshes them out differently using modern computer tech), but I still have a soft spot in my heart for it....

Anyway, today when I was cleaning out odd spots of the house looking for a lost item (which I failed to find and really need, goddamnit!!) I found some scraps of paper that I scribbled on a couple years back while on some airline flight or another, sketching out the elements of a general-systems-theory type Grand Unified Theory of Development ... an overall theory of the stages of development that complex systems go through as they travel from infancy to maturity.

I'm not going to type in the whole thing here right now, but I made a table depicting part of it, so as to record the essence of the idea in some nicer, more permanent form than the fading dirty pieces of notebook paper....

The table shows the four key stages any complex system goes through, described in general terms, and then explained in a little more detail in the context of two examples: the human (or humanlike) mind as it develops from infancy to maturity, and the maturation of life from proto-life up into its modern form.

I couldn't get the table to embed nicely in this blog interface, so it's here as a PDF:


This was in fact the train of thought that led to two papers Stephan Bugaj and I wrote over the last couple years, on the stages of cognitive development of uncertain-inference based AI systems, and the stages of ethical development of such AI systems. While not presented as such in those papers, the stages given there are really specialized manifestations of the more general stages outlined in the above table.

Stephan and I are (slowly) brewing a book on hyperset models of mind and reality, which will include some further-elaborated, rigorously-mathematized version of this general theory of development...

Long live General Systems thinking ;-)

Monday, October 06, 2008

Parable of the Researcher and the Tribesman

I run an email discussion list on Artificial General Intelligence, which is often interesting, but lately the discussions there have been more frustrating than fascinating, unfortunately.

One recent email thread has involved an individual repeatedly claiming that I have not presented any argument as to why my designs for AGI could possibly work.

When I point to my published or online works, which do present such arguments, this individual simply says that if my ideas make any sense, I should be able to summarize my arguments nontechnically in a few paragraphs in an email.

Foolishly, I managed to get sufficiently annoyed at this email thread that I posted a somewhat condescending and silly parable to the email list, which I thought I'd record here, just for the heck of it....

What I said was:

In dialogues like this, I feel somewhat like a medical researcher talking to a member of a primitive tribe, trying to explain why he thinks he has a good lead on a potential drug to cure a disease. Imagine a dialogue like this:

  • RESEARCHER: I'm fairly sure that I'll be able to create a drug curing your son's disease within a decade or so
  • TRIBESMAN: Why do you believe that? Have you cured anyone with the drug?
  • RESEARCHER: No, in fact I haven't even created the drug yet
  • TRIBESMAN: Well, do you know exactly how to make the drug?
  • RESEARCHER: No, not exactly. In fact there is bound to be some inventive research involved in making the drug.
  • TRIBESMAN: Well then how the hell can you be so confident it's possible?
  • RESEARCHER: Well I've found a compound that blocks the production of the protein I know to be responsible for causing the disease. This compound has some minor toxic effects in rats, but it's similar in relevant respects to other compounds that have shown toxic effects in rats and then been slightly modified to yield variant compounds with the same curative impacts but without the toxic effects
  • TRIBESMAN: So you're saying it's cured the same disease in rats?
  • RESEARCHER: Yes, although it also makes the rats sick ... but if it didn't make them sick, it would cure them. And I'm pretty sure I know how to change it so as to make it not make the rats sick. And then it will cure them.
  • TRIBESMAN: But my son is not a rat. Are you calling my son a rat? You don't seem to understand what a great guy my son is. All the women love him. His winky is twice as long as yours. What does curing a rat have to do with curing my son? And it doesn't even cure the rat. It makes him sick. You just want to make my son sick.
  • RESEARCHER: Look, you don't understand. If you look at all the compounds in that class, you'll see there are all sorts of ways to modify them to avoid these toxic effects.
  • TRIBESMAN: So you're saying I should believe you because you're a big important scientist. But your drug hasn't actually cured anyone. I don't believe it'll possibly work. People come by here all the time trying to sell me drugs and they never work. Those diet pills were supposed to make my wife 100 pounds thinner, but she still looks like a boat.
  • RESEARCHER: I'm not responsible for the quacks who sold you diet pills
  • TRIBESMAN: They had white lab coats just like yours
  • RESEARCHER: Look, read my research papers. Then let's discuss it.
  • TRIBESMAN: I can't read that gobbledygook. Do all the other researchers agree with you?
  • RESEARCHER: Some of them do, some of them don't. But almost all of them who have read my papers carefully think I at least have a serious chance of turning my protein blocker into a cure. Even if they don't think it's the best possible approach.
  • TRIBESMAN: So all the experts don't even agree, and you expect me to take you seriously?
  • RESEARCHER: Whatever. I'll talk to you again when I actually have the cure. Have a nice few years.
  • TRIBESMAN: We won't need your cure by then, Mr. Scientist. We're curing him with leeches already.

That just about sums it up....

The point is, the researcher's confidence comes from his intuitive understanding of a body of knowledge that the tribesman cannot appreciate due to lack of education.

The tribesman says "you haven't cured anyone, therefore you know nothing about the drug" ... but the researcher has a theoretical framework that lets him understand something about the drug's activity even before trying it on people.

Similarly, some of us working on AGI have a theoretical framework that lets us understand something about our AGI systems even before they're complete ... this is what guides our work building the systems. But conveying our arguments to folks without this theoretical framework is, unfortunately, close to impossible.... If I were to write some sort of popular treatment of my AGI work, the first 75% of it would have to consist of a generic explanation of background ideas (which is part of the reason I don't take the time to write such a thing ... it seems like an awful lot of work!!).

Obvious stuff, of course. I'm metaphorically kicking myself for burning half an hour in this sort of absurd email argument tonight ... gotta be more rigorous about conserving my time and attention, there's a lot of work to be done!!!

Saturday, October 04, 2008

Reflections on "Religulous" ... and introducing the Communication Prior

I saw the documentary Religulous w/ my kids last night (well, the two who still live at home) ... it's a sort of goofball documentary involving comedian Bill Maher interviewing people with absurd religious beliefs (mostly focusing on Christians, Jews and Muslims, with a few other oddities like a Scientologist street preacher and an Amsterdam cannabis-worshipper) ...

This blog post records some of my random reactions to the movie, and then at the end gets a little deeper and presents a new theoretical idea that popped into my head while thinking about the difficulty of making a really sound intellectual refutation of common religious beliefs.

The new theoretical idea is called the Communication Prior ... and the crux is the notion that in a social group, the prior probability of a theory may be defined in terms of the ease with which one group member can rapidly and accurately communicate the theory to another. My suggestion is that the Communication Prior can serve as the basis for a pragmatic everyday interpretation of Occam's Razor (as opposed to the Solomonoff-Levin Prior, which is a formal-computer-science interpretation). This is important IMHO because science ultimately boils down to pragmatic everyday social phenomena, not formal mathematical phenomena.

Random Reactions to Religulous

First a bit about Religulous, which spurred the train of thought reported here....

Some of the interviews in the movie were really funny -- for instance a fat Puerto Rican preacher named Jesus who claims to literally be the Second Coming of Christ, and to have abolished sin and hell ...

and as a whole the interviews certainly made Maher's point that all modern religions are based on beliefs that seem bizarre and twisted in the light of the modern scientific world-view ... the talking snake in the Garden of Eden ... Judgment Day when God comes to Earth and sorts the goodies from the baddies ... the notion that rapture will come only when the Muslims have finally killed all the Jews ... etc. etc. etc. etc. etc. ...

Some interesting historical tidbits were presented as well, e.g. the Egyptian figure Horus, who well predated Christ and whose life-story bears remarkable similarities to the Biblical tale of Jesus....

I've never been a huge fan of stand-up comedians; and among comedians Maher doesn't really match my taste that well ... he's not outrageous or absurd enough ... so I got a bit weary of his commentary throughout the film, but I felt the interviews and interspersed film and news snippets were well-done and made his point really well.

Of course, it's a damn easy point to make, which was part of his point: Of course all religions ancient and modern have been based on bizarre, wacky, impossible-for-any-sane-person-to-believe, fictional-sounding ideas...

One point that came up over and over again in his dialogues with religious folks was his difference with them over the basic importance (or lack thereof) of faith. "Why," he kept asking, "is faith a GOOD thing? Why is it a good thing to believe stuff that has no evidence in favor of it? Why is it a good thing to believe stuff that makes no sense and contradicts observation and apparent reality?"

The answer the religious folks invariably give him is something like "Faith is a good thing because it saved my life."

Dialogue like: "I used to be a Satan worshipper and wasted decades of my life on sex and drugs ... Getting saved by Jesus saved my life blahblaa..."


Religion and Politics: Egads!


Maher's interview with a religious fundamentalist US Senator is a bit disturbing. Indeed, to have folks who believe Judgment Day is nigh, in charge of running the most powerful country in the world, is, uh, scary....

And note that our outgoing President, W Bush, repeatedly invokes his religious beliefs in justifying his policies. He explicitly states that his faith in God is the cornerstone of his policies. Scary, scary, scary. I don't want to live in a society that is regulated based on someone's faith in a supernatural being ... based on someone's faith in the literal or metaphorical truth of some book a bunch of whacked-out, hallucinating Middle-Easterners wrote 2000 years ago....

As Maher points out, this is a completely senseless and insane foundation for a modern society to build itself on....


Maher's Core Argument

I don't expect Maher's movie to un-convert a substantial number of religious folks...

Their natural reaction will be: "OK, but you just interviewed a bunch of kooks and then strung their kookiest quotes together."

Which is pretty much what he did ... and in a way that may well be compelling as a tool for helping atheists feel more comfortable publicly voicing their beliefs (which I imagine was much of his purpose) ...

And it has to be noted that a deep, serious, thorough treatment of the topic of religion and irrationality would probably never get into movie theaters.

Modern culture, especially US culture but increasingly world culture as well, has little time for deep rational argumentation. Al Gore made this point quite nicely in his book The Assault on Reason ... which, however, not that many people read (the book contained too much rational argumentation...).

So it's hard to fault Maher's film for staying close to the surface and presenting a shallow argument against religion ... this is the kind of argument that our culture is presently willing to accept most easily ... and if atheists restricted themselves to careful, thorough, reflective rational arguments, the result would be that even fewer people would listen to them than is now the case....

Maher's argument is basically: All religions have absurd, apparently-delusional, anti-scientific beliefs at their core ... and these absurd beliefs are directly tied to a lot of bad things in the world ... Holy Wars and so forth ....

He also, correctly, traces the bizarre beliefs at the heart of religions to altered brain-states on the part of religious prophets.

As he notes, if someone today rambled around telling everyone they'd been talking to a burning bush up on a hill, they'd likely get locked into a mental institution and force-fed antipsychotics. Yet, when this sort of experience is presented as part of the history of religion, no one seems to worry too much -- it's no longer an insane delusion, it's a proper foundation for the government of the world ;-p

What Percentage of the Population Has a World View Capable of Sensibly Confronting the Singularity?

One thing that struck me repeatedly when listening to Maher's interviews was:

Wow, given all the really HARD issues the human race faces during this period of rapidly-approaching Singularity ... it's pathetic that we're still absorbed with these ridiculous debates about talking snakes and Judgment Day and praying to supreme beings ... egads!!!

While a digression from this blog post, this is something I think about a lot, in the context of trying to figure out the most ethical and success-probable approach to creating superhuman AI....

On the one hand, due to various aspects of human psychology, I don't trust elitism much: the idea of a small group of folks (however gifted and thoughtful) creating a superhuman AI and then transforming the world, without broader feedback and dialogue, is a bit scary....

On the other hand, I've got to suspect that folks who believe in supreme beings, Judgment Day, jihad, reincarnation and so forth are not really likely to have much useful contribution to the actual hard issues confronting us as Singularity approaches....

Of course, one can envision a lot of ways of avoiding the difficulties alluded to in the prior two paragraphs ... but also a lot of ways of not avoiding them....

One hope is that Maher's movie and further media discourse legitimizing atheism will at least somewhat improve the intellectual level of broad public conversation ... so that, maybe, in a decade or so it won't be political suicide for a US Senatorial candidate to admit they're not religious or superstitious, for example...

On the other hand, it may well eventuate that this process of de-superstitionizing the world will be damn slow compared to the advent of technology ...

But, that's a topic for another lengthy blog post, some other weekend....


The Issues Posed by the "Problem of Induction" and the Philosophy of Science for the Argument Against Religion

Now I'll start creeping, ever so slowly, toward the more original intellectual content of this post, by asking: What might a more deeply reasoned, reflective argument against religion look like?

This topic is actually fairly subtle, because it gets at deep issues in the philosophy of science ... such as I reviewed in an essay a few years ago (included in my 2006 book The Hidden Pattern)...

Although Maher talks a lot about scientific evidence ... and correctly points out that there is no scientific evidence for the various kooky-sounding claims at the core of modern religions ... he doesn't seem to have thought much about the nature of scientific evidence itself. (Which is no surprise as he's a professional comedian and actor ... but of course, he's now a self-styled commentator on politics, science and religion, so....)

Evidence, in the sense of raw data, is not disputed that often among scientists -- and even religious folks don't dispute raw data collected by scientists that often. Statements like "this laboratory instrument, at this point in time, recorded this number on its dial" are not often disputed. Sometimes the argument is made that not enough data were recorded to evaluate an empirical statement like the above (say, the temperature in the room, or the mind-state of the lab assistant, was not recorded): but this still isn't really an argument that the data are wrong, more an argument that the data are too incomplete to draw useful conclusions from them.

(The only area of research I know where raw data is routinely disputed is psi ... which I already addressed in a prior blog post.)

But the step from raw items of evidence to theory is a big one -- a bigger one than Maher or most naively-pro-science advocates care to admit.

This of course relates to the uncomfortable fact that the Humean problem of induction was never solved.

As Maher points out repeatedly in his film, we just don't really know anything for sure ... and it appears that by the basic logic of the universe and the nature of knowledge itself, we never can.

What he doesn't point out (because it's not that kind of movie) is that without making some kind of background assumptions (going beyond the raw evidence collected), we also can't really make probability estimates, or probabilistic predictions about the outcomes of experiments or situations.

Given a set of observations, can we predict the next observations we'll see? Even probabilistically? As Hume pointed out, we can do so only by making some background assumptions.

For instance, we can adopt the Occam's Razor heuristic and assume that there will be some simple pattern binding the past observations to the future ones.... But that raises the question: what is the measure of simplicity?

Hume says, in essence, that the relevant measure of simplicity is human nature.

But this conclusion may, initially, seem a bit disturbing in the context of the religion vs. science dichotomy.

Because human nature is, in many ways, not to put it too tactlessly, more than a bit fucked-up.

Maher doesn't review the evidence in this regard, but he does allude to it, e.g. interviewing the discoverer of the "God gene" ... the point is: it seems to be the case that religious experience and religious delusions are deeply tied to intrinsic properties of the human brain.

What this suggests is that the reason religion is so appealing to people is precisely that it is assigned a high prior probability by their Humean "human nature" ... that our brain structure, which evolved in superstitious pre-civilized societies, biases us towards selecting theories that not only explain our everyday empirical observations, but also involve talking animals, voices speaking from the sky, tribalism, physical rewards or punishments for moral transgressions, and so forth...

So when Maher says that "it's time for us to grow up" and let go of these ancient religious superstitions and just be rational and scientific ... two big problems initially appear to arise, based on cursory consideration of the philosophy of science:

  • There is no such thing as "just being rational" ... applying rationality to real observations always involves making some background assumptions
  • The ancient religious superstitions are closely related to patterns wired into our brains by evolution ... which are naturally taken by us as background assumptions...

So when he asks folks to drop their religious beliefs, is Maher really asking folks to self-modify their brains so as not to apply prior distributions supplied by evolution (which has adapted our cognitive patterns to superstitious, tribal society), and to instead apply prior distributions supplied by the scientific and rationalist tradition...?

If so, that would seem a really tough battle to fight: essentially, the transcendence of religious superstitions would require a kind of cognitive transhumanism.

Fortunately, though, I don't think the situation is quite that bad. Cognitive transhumanism (which I define as the attempt to go beyond innately-human patterns of thinking) certainly can be a huge help in the transcendence of superstitions, but it's not strictly necessary.

It appears to me that it's enough "just" to get people to think more clearly about the relationship between their theories and ideas, their community, and their community's collective observations. If people understand this relationship clearly, then it's not actually necessary for them to transcend their various superstition-oriented human biases in order for them to go beyond naive religious ideas.

To elaborate on this point further I'll need to get technical for a moment and introduce a bit of Bayesian statistics and algorithmic information theory...

The Communication Prior

I'll now shift from philosophical babbling to basic math for a few paragraphs.

Recall the basics of Bayes' Theorem. Setting T for "theory" and E for "evidence", it says:

P(T|E) = P(T) P(E|T)/P(E)

... i.e., it says that a person's subjective probability that a theory T is true given that they receive evidence E, should be equal to their prior probability that T is true times the probability that they would receive evidence E if hypothesis T were true, divided by the probability of E (and the latter is usually found by summing over the weighted conditional probabilities given all potential theories).
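To make the update concrete, here's a minimal Python sketch of Bayes' Theorem applied across a few candidate theories. The priors and likelihoods are made-up numbers, purely for illustration.

```python
# Minimal sketch of a Bayesian update over three candidate theories.
# All numbers are invented for illustration, not drawn from any real model.

priors = {"T1": 0.6, "T2": 0.3, "T3": 0.1}          # P(T): prior over theories
likelihoods = {"T1": 0.05, "T2": 0.40, "T3": 0.20}  # P(E|T): evidence given theory

# P(E): total probability of the evidence, summed over all candidate theories
p_evidence = sum(priors[t] * likelihoods[t] for t in priors)

# Bayes' Theorem: P(T|E) = P(T) * P(E|T) / P(E)
posteriors = {t: priors[t] * likelihoods[t] / p_evidence for t in priors}

for t, p in sorted(posteriors.items()):
    print(f"P({t}|E) = {p:.3f}")
```

Run it with different priors and the same likelihoods, and the posteriors shift accordingly -- which is exactly the point made next.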

It is critical to note that, according to Bayes rule, one's conclusion about the probability of theory T given evidence E depends upon one's prior assignment of probabilities.

Now, a real mind with computational limitations cannot always apply Bayes rule accurately ... so the best we can do is approximate.

(Some cognitive theorists, such as Pei Wang, argue that a real mind shouldn't even try to approximate Bayes rule, but should utilize a different logic specially appropriate for cognitive systems with severe resource limitations ... but I don't agree with this and for the purpose of this blog post will assume it's not the case.)

But even if a mind has enough computational resources to apply Bayes rule correctly, there remains the problem of how to arrive at the prior assignment of probabilities in the first place.

The most commonsensical way is to use Occam's Razor, the maxim stating that simpler hypotheses should be considered a priori more probable. But this also leads to some subtleties....

The Occam maxim has been given mathematical form in the Solomonoff-Levin universal prior, which says very roughly that the probability of a hypothesis is higher if the computer-programs for computing that hypothesis are shorter (yes, there's more to it, so look it up if you're curious).

Slightly more rigorously, Wikipedia notes that:

The universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion.

Note in the above quote that the probability of a program may be estimated as the probability that the program is found by randomly selecting bits in the program-defining section of the memory of a computer.
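In symbols (one standard way of writing it, where U is a universal prefix machine and ℓ(q) is the bit-length of program q):

```latex
M(p) \;=\; \sum_{q \;:\; U(q)\ \text{starts with}\ p} 2^{-\ell(q)}
```

Each weight 2^{-ℓ(q)} is just the chance of producing program q by flipping a fair coin for each of its bits -- the "randomly selecting bits" interpretation mentioned above.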

Anyway: That's very nice for mathematicians, but it doesn't help us much in everyday life ... because even if we wanted to apply this kind of formalization in everyday life (say, to decide an issue like evolution vs. creationism), the mapping of real-world situations into mathematical formalisms is itself highly theory-laden....

So what we really need is not just a mathematical formalization of a universal prior, but a commonsensical formalization of a prior that is helpful for everyday human situations (even if not truly universal).

One suggestion I have is to use Solomonoff's core idea here, but interpret it a bit differently, in terms of everyday human communicational operations rather than mathematical, abstracted machine operations.

Paraphrasing the above quoted text, I propose that

The communicational prior probability of any prefix p of a computable sequence x, relative to a social group G and a body of evidence E, is the sum of the communicational probabilities (calculated relative to G and E) of all programs that compute something starting with p.

But how then to compute the communicational probability of a program relative to a social group G and body of evidence E?

As the name indicates, this is defined, not in terms of bit-flipping, but in terms of communication within the group.

I define the communicational probability of a program p as decreasing with the average amount of time it would take a randomly chosen member A of group G to communicate p to another randomly chosen member B of group G, with sufficient accuracy that B can then evaluate the outputs of p on randomly selected inputs drawn from E (say, proportional to 2^-t, where t is that average time, by analogy with the Solomonoff-Levin case).

(The assumption is that A already knows how to evaluate the program on inputs drawn from E.)

One can also bake a certain error rate into this definition, so that B has to be able to correctly evaluate the outputs of p only on a certain percentage of inputs drawn from E.

This defines what I suggest to call the Communication Prior.
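For concreteness, here is a toy Python sketch of how one might estimate a Communication Prior from measured communication times. The times, and the 2^(-kt) weighting (chosen by analogy with the 2^(-length) weighting in the Solomonoff-Levin prior), are my illustrative assumptions, not part of the definition above.

```python
# Hypothetical average communication times (say, in minutes): how long it takes
# a randomly chosen member A of group G to teach each program/theory to another
# member B, well enough that B can evaluate it on inputs drawn from E.
# These numbers are invented purely for illustration.
comm_times = {"theory_A": 5.0, "theory_B": 60.0, "theory_C": 400.0}

def communication_prior(times, k=0.1):
    """Assign higher prior weight to theories that are faster to communicate.
    The 2**(-k*t) weighting mirrors the 2**(-length) weighting of the
    Solomonoff-Levin prior; k just sets the time scale and is arbitrary here."""
    weights = {t: 2.0 ** (-k * dt) for t, dt in times.items()}
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}  # normalize to sum to 1

print(communication_prior(comm_times))
```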

A variant would be the communication-and-testing probability of a program p, definable as decreasing with the average, for randomly chosen members A and B in the social group such that A already knows how to evaluate p on inputs in E, of

  • the amount of time it would take A to communicate p to B, with sufficient accuracy that B can then evaluate the outputs of p on randomly selected inputs drawn from E
  • the amount of time it actually takes B to evaluate p on a randomly selected element of E

(One can of course weight the two terms in this average, if one wants to.)
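In the same toy-code spirit, the variant just replaces the raw communication time with a weighted combination of communication time and hands-on evaluation time; the weight w below is a free parameter:

```python
def comm_and_test_time(comm_time, eval_time, w=0.5):
    # Weighted combination of teaching time and evaluation time; w = 0.5
    # recovers the plain average, and this combined time can be fed into
    # communication_prior() above in place of the raw communication time.
    return w * comm_time + (1.0 - w) * eval_time
```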

Taking a bit of terminological liberty, I will also include this communication-and-testing variant under the umbrella of the "Communication Prior."

Pragmatically, what does this mean about theories?

Roughly speaking, it means that the a priori probability of a theory (i.e. the "bias toward" a theory) has to do with ease of effectively communicating that theory within a social group ... and (in the communication-testing variant), the ease of effectively communicating how to efficiently apply the theory.

Of course, the a priori probability of a theory doesn't tell you how good the theory is. Communicating a theory may be very simple, but so what ... unless the theory explains something. But the "explanation" part is taken care of in Bayes Rule, in the P(E | T) / P(E) fraction. If the observed evidence is not especially likely given the assumption of the theory, then this fraction will be small.

The Communication Prior is similar in spirit to the Solomonoff-Levin Universal Prior ... but it's not about formal, mathematical, theoretical systems, it's about real-world social systems, such as human communities of scientists. In terms of philosophy of science, this is sort-of a big deal, as it bridges the gap between formalist and social-psychology-based theories of science.

What's the Take-Away from All That Techno-babble?

So, roughly speaking, the nontechnical take-away from the above technical excursion should be the following suggestion:

A theory should be considered good within a social group, to the extent that it explains the evidence better than it would explain a bunch of randomly selected evidence -- and it's reasonably rapid to effectively communicate, to others in the group, information about how to efficiently apply the theory to explain the available evidence.

This may seem simple or almost obvious, but it doesn't seem to have been said before in quite so crisp a way.

(In my prior essay on philosophy of science, I left off without articulating any sort of specific simplicity measure: the Communication Prior fills in that gap, thus bringing the ideas in that essay closer to practical applicability.)
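As a rough formalization of that take-away (my own hedged rendering, with an arbitrary time-scale constant k, not anything derived rigorously above), one could score a theory on a log scale by adding its explanatory lift over baseline to the log of a communication-time-based prior weight:

```python
import math

def theory_score(p_evidence_given_theory, p_evidence, comm_time, k=0.1):
    """Log-scale 'Bayes rule with a Communication Prior': the explanatory
    lift log2[P(E|T)/P(E)] plus log2 of an assumed 2**(-k*t) prior weight."""
    explanatory_lift = math.log2(p_evidence_given_theory / p_evidence)
    comm_penalty = -k * comm_time  # log2 of the 2**(-k*t) prior weight
    return explanatory_lift + comm_penalty

# Made-up numbers: two theories explaining the evidence equally well, where one
# takes far longer to communicate (e.g. a heavily patched-up theory).
print(theory_score(0.4, 0.05, 30.0))   # quick to communicate: score 0.0
print(theory_score(0.4, 0.05, 400.0))  # slow to communicate: score -37.0
```

The evolution vs. creationism example below fits this template: roughly equal explanatory power is achievable only by patching creationism so heavily that its communication time balloons.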

Consider for instance the evolution vs. creationism argument. For my new suggestion to favor evolution over creationism, what would have to be true?

Whether the simple essential core of creationism or evolution is easier to communicate within a human social group, really depends on the particular social group.

However, the simple essential core of creationism does an extremely bad job of explaining why the observed body of evidence (e.g. the fossil record) is more likely than a lot of other possible bodies of evidence.

To make a version of creationism that would explain why the observed body of evidence is particularly likely, one would need to add a heck of a lot of special-pleading-type explanations onto the essential core of creationism. This is because creationism does not effectively compress or compactify the body of observed data.

So, to get a version of creationism that is equally explanatory of the particulars of the evidence as evolution, one needs to make a version of creationism that takes a long time to communicate.

Conclusion: creationism is worse than evolution.

(OK, we don't really need to go through so much complexity to get to such an obvious conclusion! But I'm just using that example to make a more general point, obviously.)

Why Is Religion a Bad Idea?

Getting back to the initial theme of this overlong, overdiverse blog post, then: why is religion a bad idea?

Because we should judge our theories using Bayes rule with a communication prior ... or in other words, by asking that they explain the particulars of observed reality in a relatively rapidly communicable way.

There is a balance between success-at-detailed-explanation and rapid-communicability, and the exact way to strike this balance is going to be subtle and in some cases subjective. But, in the case of religious beliefs, the verdict is quite clear: the religious world view, compared to the scientific world view, fails miserably at explaining the particulars of observed reality in a relatively rapidly communicable way.

The key point here is that, even if people want to stick with their evolutionary-legacy-based inductive biases (which make them intuitively favor superstitious explanations), the failure of religious theories to explain the particulars of observed reality is now so drastic and so obvious, that anyone who really carefully considers the evidence should reject these religious theories anyway.

Maher's film points out sensationalistically silly aspects of religious belief systems. But these aren't really the right anti-religion argument to use, in terms of philosophy of science and the theory of rationality. After all, are the Big Bang and Big Crunch and the evolution of humans from apes really any less everyday-ishly wacky than Judgment Day and the talking snake in the Garden of Eden?

The right argument to use is that, if one assumes Bayes rule plus a Communication Prior (or any other sensible, everyday-reality-based prior), then religious theories fail miserably.

Of course, almost no one on the planet can understand the previous sentence, though ... which is why his approach of dramatically emphasizing the most absurdly wacky religious beliefs and believers is probably a way more effective PR strategy!


The Emotion Prior

Finally, another suggestion I have regarding the popularity of religious beliefs has to do with something my ex-wife said to me once, shortly after her religious conversion to Buddhism, a topic about which we had numerous arguments (some heated, some more rational and interesting, none usefully conclusive nor convincing to either of us). What she said was: "I believe what I need to believe in order to survive."

She didn't just mean "to survive physically" of course ... that was never at issue (except insofar as emotional issues could have threatened her physical survival) ... what she meant was "to survive emotionally" ... to emotionally flourish ...

My (rather uncontroversial) suggestion is that in many cases religious people -- and others -- have a strong bias toward theories that they enjoy believing.

Or in other words: "If believing it feels good, it can't be wrong!"

This is probably the main issue in preaching atheism: one is asking people to

  • adopt (some approximant of) Bayes rule with a Communication Prior (or similar)
  • actually carefully look at the evidence that would be used in Bayes rule

... rather than to, on the other hand,

  • avoid looking at evidence that might disconfirm one's theory
  • utilize an Emotion Prior when evaluating various theories that might explain the evidence

The question is then whether, in each individual case,

  • the Emotion Prior outweighs the Communication Prior (or similar)
  • the sociopsychological pressure to look at evidence outweighs the sociopsychological pressure to ignore it

Ignoring evidence gets harder and harder as the Internet broadcasts data to everyone, all the time....

To study these choices in an interesting way, one would need to model the internals of the believer's mind more subtly than has been done in this post so far....

But anyway ... the evidence of the clock in front of me is that I have spent too much time amusing myself by writing this blog post, and now have more useful things to do ... so, till next time!

P.S. Thanks to my wife Izabela for discussions leading to the introduction of the communication-testing variant of the Communication Prior, after the more basic version had already been formulated....

Thursday, September 25, 2008

Another Transhumanist Nightmare

Some anonymous freak wrote this story, a piece of transhumanist/absurdist fantasy which includes me in a minor role ... it's childish, but I have to say, mildly amusing...