
Thursday, December 08, 2005

A General Theory of the Development of Forms (wouldn't it be nice to have one?)

This blog entry briefly describes a long-term conceptual research project I have in mind, and have been thinking about for a while, which is to try to figure out some sort of "general theory of the development of forms/patterns in growing complex systems."

Since the Novamente AGI high-level design and the "patternist philosophy of mind" are basically completed and stable for a while (though I'm still engaged with writing them up), I need a new conceptual obsession to absorb the extremely-abstract-thinking portion of my brain... ;-)

Thinking about the development of forms, I have in mind three main specific areas:

  • developmental psychology (in humans and AI's)
  • epigenesis in biological systems
  • the growth of the early universe: the emergence of physical law from lawlessness, etc. (cf John Wheeler)

Each of these is a big area and I've decided to proceed through them in this order. Maybe I will never get to the physics part and will just try to abstract a general theory of development from the first two cases, we'll see.

I also have an intuition that it may be useful to use formal language theory of some sort as a conceptual tool for expressing developmental stages and patterns. Piaget tried to use abstract algebra in some of his writings, which was a nice idea, but didn't quite work. This ties in with Jerry Fodor's notion of a "language of thought" -- which I don't buy in quite all the senses he means it, but which may have some real meat to it. It may be that developing minds use different languages of thought at different stages. I don't know if anyone has taken this approach in the developmental psych literature.

For instance, it's arguable that quantifier binding is only added to the human language of thought at Piaget's formal stage, and that recursion is only added to the human language of thought at Piaget's concrete operational stage (which comes along with phrase structure syntax as opposed to simpler proto-language). What I mean by "X is added to the human language of thought at stage S" is something like "X can be used with reasonable generality and fluidity at stage S" -- of course many particular instances of recursion are used before the concrete operational phase, and many particular instances of quantifier binding are used before the formal phase. But the full "syntax" of these operations is not mastered prior to the stages I mentioned, I suggest. (Note that I am using Piaget's stage-labels only for convenience; I don't intend to use them in my own theory of forms. If I take a stage-based approach at all then I will define my own stages.)

I note that formal language theory is something that spans different domain areas in the sense that

  • there's discussion of "language of thought" in a general sense
  • natural language acquisition is a key aspect of developmental psych
  • L-system theory shows that formal languages are useful for explaining and modeling plant growth
  • "Symbolic dynamics" uses formal language theory to study the dynamics of chaotic dynamical systems in any domain, see also Crutchfield and Young

So it seems to be a potentially appropriate formal tool for such a project.

I was discussing this with my friend Stephan Bugaj recently and he and I may write a book on this theme if we can pull our thinking together into a sufficiently organized form....

Friday, December 02, 2005

More Venting about Scientific Narrowmindedness and Superintelligent Guinea Pigs

I spent the day giving a talk about bioinformatics to some smart medical researchers and then meeting with them discussing their research and how advanced narrow-AI informatics tools could be applied to help out with it.

AAARRRGGHHH!!! Amazing how difficult it is to get even clever, motivated, knowledgeable biologists to understand math/CS methods. The techniques I presented to them (a bunch of Biomind stuff) would genuinely help with their research, and are already implemented in stable software -- there's nothing too fanciful here. But the "understanding" barrier is really hard to break through -- and I'm not that bad at explaining things; in fact I've often been told I'm really good at it....

We'll publish a bunch of bioinformatics papers during the next year and eventually, in a few more years, the techniques we're using (analyzing microarray and SNP and clinical data via learning ensembles of classification rules; then data mining these rule ensembles, and clustering genes together based on whether they tend to occur in the same high-accuracy classification rules, etc.) will become accepted by 1% or 5% of biomedical researchers, I suppose. And in 10 years probably it will all be considered commonplace: no one will imagine analyzing genetics data without using such techniques....
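To give a flavor of the pipeline sketched in parentheses above, here is a deliberately tiny illustration -- invented data, invented gene names, and a brute-force rule learner standing in for the real thing; this is the shape of the idea, not Biomind's actual algorithms. Learn conjunctive classification rules from binarized expression data, keep the high-accuracy ones, then mine the rule ensemble for genes that co-occur in good rules:

```python
# Sketch of the pipeline described above, on toy invented data:
# 1. learn simple conjunction rules from binarized expression data,
# 2. keep the high-accuracy rules,
# 3. count gene co-occurrences in those rules (a basis for clustering).
# Illustrative only -- not the actual Biomind algorithms.

from itertools import combinations
from collections import Counter

# Rows: (binarized expression per gene, class label). 1 = over-expressed.
samples = [
    ({"g1": 1, "g2": 1, "g3": 0, "g4": 0}, "case"),
    ({"g1": 1, "g2": 1, "g3": 1, "g4": 0}, "case"),
    ({"g1": 0, "g2": 1, "g3": 0, "g4": 1}, "control"),
    ({"g1": 0, "g2": 0, "g3": 1, "g4": 1}, "control"),
    ({"g1": 1, "g2": 1, "g3": 0, "g4": 1}, "case"),
    ({"g1": 0, "g2": 0, "g3": 0, "g4": 1}, "control"),
]

def rule_accuracy(genes, label):
    """Accuracy of the rule: 'all these genes over-expressed => label'."""
    correct = sum(
        1 for expr, lab in samples
        if all(expr[g] for g in genes) == (lab == label)
    )
    return correct / len(samples)

# Enumerate all 1- and 2-gene conjunction rules; keep the accurate ones.
genes = ["g1", "g2", "g3", "g4"]
good_rules = [
    (combo, rule_accuracy(combo, "case"))
    for size in (1, 2)
    for combo in combinations(genes, size)
    if rule_accuracy(combo, "case") >= 0.8
]

# Mine the ensemble: which gene pairs show up together in good rules?
cooccur = Counter(combo for combo, _ in good_rules if len(combo) == 2)

print(good_rules)
print(cooccur.most_common())
```

On this toy data the learner finds that g1 alone, g2 alone, and the pair (g1, g2) make accurate rules, so g1 and g2 co-occur in the high-accuracy ensemble -- the kind of signal that, at scale, drives the rule-based gene clustering.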

Whether Biomind will manage to get rich during this process is a whole other story -- it's well-known that the innovative companies at the early stage of a revolution often lose out financially to companies that enter the game later once all the important ideas have already been developed. But finances aside, I'm confident that eventually, little by little, the approach I'm taking to genetic data analysis will pervade and transform the field, even if the effect is subtle and broad enough that I don't get that much credit for it....

And yet, though this Biomind stuff is complex enough to baffle most bioinformaticists and to be really tough to sell, it's REALLY REALLY SIMPLE compared to the Novamente AI design, which is one or two orders of magnitude subtler. I don't think I'm being egomaniacal when I say that no one else has really appreciated most of the subtlety in the Novamente design -- not even the other members of the Novamente team, many of whom have understood a lot. Which is verrrry different from the situation with Biomind: while the Biomind methods are too deep for most biologists, or most academic journal referees who review our papers, to understand, everyone on the Biomind team fully "gets" the algorithms and ideas.

Whether the subtlety of the Novamente design ever gets to be manifested in reality remains to be determined -- getting funding to pay a small team to build the Novamente system according to the design remains problematic, and I am open to the possibility that it will never happen, dooming me (as I've joked before) to a sort of Babbagedom. What little funding there is for AGI-ish research tends to go to folks who are better at marketing than I am, and who are willing to tell investors the story that there's some kind of simple path to AGI. Well, I don't think there is a simple path. There's at least one complex path (Novamente) and probably many other complex paths as well; and eventually someone will follow one of them if we don't annihilate ourselves first. AGI is very possible with 3-8 years effort by a small, dedicated, brilliant software team following a good design (like Novamente), but if the world can't even understand relatively simple stuff like Biomind, getting any understanding for something like Novamente is obviously going to continue to be a real uphill battle!

Relatedly, a couple weeks ago I had some long conversations with some potential investors in Novamente. But the investors ended up not making any serious investment offer -- for a variety of reasons, but I think one of them was that the Novamente design was too complex for them to easily grok. If I'd been able to offer them some easily comprehensible apparent path to AGI, I bet they would have invested. Just like it would be easier to sell Biomind to biologists if they could grok the algorithms as well as the Biomind technical team. Urrrghh!

Urrrgghhh!! urrrgghh!! ... Well, I'll keep pushing. There are plenty of investors out there. And the insights keep coming: interestingly, in the last few days a lot of beautiful parallels have emerged between some of our commercial narrow-AI work in computational linguistics and our more fundamental work in AGI (relating to making Novamente learn simple things in the AGI-SIM simulation world). It turns out that there are nice mathematical and conceptual parallels between algorithms for learning semantic rules from corpuses of texts, and the process of learning the functions of physical objects in the world. These parallels tell us a lot about how language learning works -- specifically, about how structures for manipulating language may emerge developmentally from structures for manipulating images of physical objects.

This is exactly the sort of thing I want to be thinking about right now: now that the Novamente design is solid (though many details remain to be worked out, these are best worked out in the course of implementation and testing), I need to be thinking about "AGI developmental psychology," about how the learning process can be optimally tuned and tailored.

But instead, to pay the bills and send the kids to college yadda yadda yadda, I'm trying to sell vastly simpler algorithms to biologists who don't want to understand why it's not clever to hunt for biomarkers for a complex disease by running an experiment with only 4 Cases and 4 Controls. (Answer: because complex diseases have biomarkers that are combinations of genes or mutations rather than individual genes/mutations, and to learn combinational rules distinguishing one category from another, a larger body of data is needed.)
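The 4-Cases-and-4-Controls point is easy to check numerically. Here's a quick simulation (illustrative parameters, invented data): generate random binarized expression data with no real relationship to the labels at all, and count how many two-gene conjunctions nevertheless "perfectly" separate 4 cases from 4 controls:

```python
# Simulation of the point above: with only 4 cases and 4 controls,
# pure noise yields many two-gene combinations that "perfectly"
# classify the samples, so a perfect combinational biomarker found
# in so small a study carries almost no evidential weight.

import random
from itertools import combinations

random.seed(1)
n_genes, n_case, n_control = 200, 4, 4

# Random binarized expression data: no gene actually relates to the labels.
data = [[random.randint(0, 1) for _ in range(n_genes)]
        for _ in range(n_case + n_control)]
labels = ["case"] * n_case + ["control"] * n_control

def perfect(g1, g2):
    """True if 'g1 AND g2 over-expressed' exactly predicts 'case'."""
    return all((row[g1] and row[g2]) == (lab == "case")
               for row, lab in zip(data, labels))

n_pairs = n_genes * (n_genes - 1) // 2
spurious = sum(1 for g1, g2 in combinations(range(n_genes), 2)
               if perfect(g1, g2))
print(f"{spurious} spurious 'perfect' two-gene rules "
      f"out of {n_pairs} candidates")
```

Even with only 200 genes (real microarrays measure tens of thousands), dozens of chance-perfect combinational rules typically turn up -- which is why learning gene combinations demands far more samples than learning single-gene markers.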

Ooops! I've been blogging too long, I promised Scheherazade I would go play with her guinea pigs with her. Well, in a way the guinea pigs are a relief after dealing with humans all day ... at least I don't expect them to understand anything. Guinea pigs are really nice. Maybe a superintelligent guinea pig would be the ultimate Friendly AI. I can't remember ever seeing a guinea pig do anything mean, though occasionally they can be a bit fearful and defensive....

Tuesday, November 29, 2005

Post-Interesting

Hi all,

I have launched a second blog, which is called Post-Interesting

www.post-interesting.com

and I have invited a number of my friends to join me in posting to it (we'll see if any of them actually get around to it!).

The idea is that this current blog ("Multiverse According to Ben") will contain more personal-experience and personal-opinion type entries, whereas Post-Interesting will be more magazine-like, containing reviews, interesting links, and compact summaries of highly crisp scientific or philosophical ideas.... (Of course, even my idea of "magazine-like" contains a lot of personal opinions!)

Not that I really have time to maintain one blog let alone two, but from time to time I seem to be overtaken by an irresistible desire to expel massive amounts of verbiage ;-D

If people make a lot of interesting posts to Post-Interesting then one day it will be a multimedia magazine and put Wired and Cosmopolitan out of business! (For now I just put three moderately interesting initial posts there....)

-- Ben

Wednesday, November 16, 2005

Reality and Religion (a follow-up to earlier posts on Objective/Subjective Reality)

This post is a response to Bob McCue's comments to my earlier blog entry on "Objective and Subjective Reality". Scroll down after going to

http://www.goertzel.org/blog/2005/07/objective-versus-subjective-reality.html

to read his comments.

Bob is a former Mormon and has written extensively and elegantly about his reasons for leaving the faith:

http://mccue.cc/bob/spirituality.htm

He read my blog on objective/subjective reality and my essay on "social/computational/probabilist" philosophy of science

http://www.goertzel.org/dynapsyc/2004/PhilosophyOfScience_v2.htm

and then posed some questions regarding the probabilistic justification of religious beliefs.

Bob: The questions you raise are deep and fascinating ones and unfortunately I don't have time right now to write a reply that does them justice.

However, I can't resist saying a few things ;-)

I was never religious but my ex-wife was and, although this led to numerous unpleasant arguments between us, it also led me to gain some degree of appreciation (OK, not all that much!) for the religious perspective. For her (as a Zen Buddhist) it was never about objective truth at all, it was always about subjective experience -- her own and that of the others in her sangha (religious group). If probability theory was relevant, it was in the context of evaluations like

Probability ( my own spiritual/emotional state is good GIVEN THAT I carry out these religious practices)

>

Probability ( my own spiritual/emotional state is good GIVEN THAT I don't carry out these religious practices)

The evaluation criterion was internal/subjective not external/objective. The actual beliefs of the religion were only evaluated in regard to their subjective effects on the believer's internal well-being. This fits in with a Nietzschean perspective in which "An organism believes what it needs to believe in order to survive", if you replace "survive" with "maximize internal satisfaction" (which ultimately approximately reduces to Nietzsche's "survival" if one takes an evolutionary view in which we have evolved to, on average, be satisfied by things correlated with our genomes' survival).

I am not sure what this has to do with religions like Mormonism though. I think my ex got interested in Zen (in her mid-20's) partly because I had talked to her about it years before that, when as a teenager I had found Huang Po's Zen writings (on exiting the world of thought and ideas and entering the world of pure truth/nothingness) really radical and fascinating. Zen is not very typical of religions and it's questionable whether it really belongs in the "religion" category -- it's a borderline case. It specifically teaches that the external, "objective" world is illusory and urges you to fully, viscerally and spiritually understand this world's construction via the mind. Thus in a Zen perspective the empirical validation or refutation of hypotheses (so critical to science) is not central, because it takes place within a sphere that is a priori considered illusory and deceptive. Because of this Zen tends not to make statements that contradict scientific law; rather it brushes the whole domain of science aside as being descriptive of an illusory reality.

I guess that Mormonism is different in that it makes hypotheses that directly contradict scientific observation (e.g. do Mormons hold the Earth was created 6000 years ago?). But still, I suspect the basic psychological dynamics are not that different. People believe in a religion because this belief helps them fulfill their own goals of personal, social or spiritual satisfaction. Religious people may also (to varying extents) have a goal of recognizing valid patterns in the observed world; but people can have multiple goals, and apparently for religious people the goal of achieving personal/social/spiritual satisfaction thru religion overwhelms the goal of recognizing valid patterns in the observed world. I find nothing very mysterious in this.

Bob: You ask about belief in Kundalini Yoga (another obsession of my ex-wife, as it happens.) I guess that the KY system helps people to improve their own internal states and in that case people may be wise to adopt it, in some cases... even though from a scientific view the beliefs it contains are a tricky mix of sense and nonsense.

However, it seems pretty clear to me that religious beliefs, though they may sometimes optimally serve the individual organism (via leading to various forms of satisfaction), are counterproductive on the species level.

As a scientific optimist and transhumanist I believe that the path to maximum satisfaction for humans as a whole DOES involve science -- both for things like medical care, air conditioning and books and music, and for things like creating AI's to help us and creating nanotech and gene therapy solutions for extending our lives indefinitely.

There's a reason that Buddhism teaches "all existence involves suffering." It's true, of course -- but it was even more true in ancient India than now. There was a lot more starvation and disease and general discomfort in life back then, which is why a suffering-focused religion like Buddhism was able to spread so widely. The "suffering is everywhere" line wouldn't sell so well in modern America or Western Europe, because although suffering still IS everywhere, it's not as extreme and not as major a component of most people's lives. Which is due, essentially, to science. (I am acutely aware that in many parts of the world suffering is a larger part of people's lives, but this does not detract from the point I am making.)

Since religious belief systems detract from accurate observation of patterns in reality, they detract from science and thus from the path with the apparently maximal capacity to lead humanity toward overall satisfaction, even though they may in fact deliver maximal personal satisfaction to some people (depending on their personal psychology).

However, one may argue that some people will never be able to contribute to science anyway (due to low intelligence or other factors), so that if they hold religious beliefs and don't use them to influence the minds of science-and-technology-useful people, their beliefs are doing no harm to others but may be increasing their own satisfaction. Thus, for some people to be religious may be a good thing in terms of maximizing the average current and long term satisfaction of humanity.

There is also a risk issue here. Since religion detracts from science and technology, it maintains humans in a state where they are unlikely to annihilate the whole species, though they may kill each other in more modest numbers. Science gives us more power for positive transformation and also more power for terrible destruction. The maximum satisfaction achievable thru science is higher than thru religion (due to the potential of science to lead to various forms of massively positive transhumanism), but the odds of destruction are higher too. And we really have no way of knowing what the EXPECTED outcome of the sci-tech path is -- the probabilities of transcension versus destruction.

[As I wrote the prior paragraph I realized that no Zen practitioner would agree with me that science has the power to lead to greater satisfaction than religion. Semantics of "satisfaction" aside, they would argue that "enlightenment" is the greatest quest and requires no technology anyway. But even if you buy this (which I don't, fully: I think Zen enlightenment is an interesting state of mind but with plusses and minuses compared to other ones, and I suspect that the transhuman future will contain other states of mind that are even deeper and more fascinating), it seems to be the case that only a tiny fraction of humans have achieved or ever will achieve this exalted state. Transhumanist technology would seem to hold the possibility of letting any sentient being choose their own state of mind freely, subject only to constraints regarding minimizing harm to others. We can all be enlightened after the Singularity -- if we want to be! -- but we may well find more appealing ways to spend our eternity of time!!]

OK, I drifted a fair way from Mormonism there, back to my usual obsessions these days. But hopefully it was a moderately interesting trajectory.

For a more interesting discussion of Mormonism, check out the South Park episode "All About Mormons." It was actually quite educational for me.