
Monday, March 10, 2008

A New, Improved, Completely Whacky Theory of Evolution

This blog post presents some really weird, speculative science that I take with multiple proverbial grains of salt ... but, well, wouldn't it be funky if it were true?

The idea came to mind in the context of a conversation with my old friend Allan Combs, with whom I co-edit the online journal Dynamical Psychology.

It basically concerns the potential synergy between two apparently radically different lines of thinking:


Morphic Fields

The basic idea of a morphic field is that, in this universe, patterns tend to continue -- even when there's no obvious causal mechanism for it. So, for instance, if you teach thousands of rats worldwide a certain trick, then afterwards it will be easier for additional rats to learn that trick, even though the additional rats have not communicated with the prior ones.

Sheldrake and others have gathered a bunch of evidence in favor of this claim. Some say that it's fraudulent or somehow subtly methodologically flawed. It might be. But after my recent foray into studying Ed May's work on precognition, and other references from Damien Broderick's heartily-recommended book Outside the Gates of Science (see my previous blog posts on psi), I'm becoming even more willing than usual to listen to data even when it goes against prevailing ideas.

Regarding morphic fields on the whole, as with psi, I'm still undecided, but interested. The morphic field idea certainly fits naturally with my philosophy that "the domain of pattern is primary, not the domain of spacetime."

Estimation of Distribution Algorithms

EDA's, on the other hand, are a nifty computer science idea aimed at accelerating the artificial evolution that occurs within software processes.

Evolutionary algorithms are a technique in computer science in which, if you want to find/create a certain object satisfying a certain criterion, you interpret the criterion as a "fitness function" and then simulate an "artificial evolution process" to try to evolve objects that satisfy the criterion better and better. A population of candidate objects is generated at random; then, progressively, the evolving objects are crossed over and mutated with each other. The fittest are chosen for further survival, crossover and mutation; the rest are discarded.

Google "genetic algorithms" and "genetic programming" if this is novel to you.

This approach has been used to do a lot of practical stuff -- in my own work, for example, I've evolved classification rules predicting who has cancer or who doesn't based on their genetic data (see Biomind); evolved little programs controlling virtual agents in virtual worlds to carry out particular tasks (see Novamente); etc. (though in both of those cases, we have recently moved beyond standard evolutionary algorithms to use EDA's ... see below...)

EDA's mix evolutionary algorithms with probabilistic modeling. If you want to find/create an object satisfying a certain criterion, you generate a bunch of candidates -- and then, instead of letting them cross over and mutate, you do some probability theory and figure out the patterns distinguishing the fit ones from the unfit ones. Then you generate new babies, new candidates, from this probability distribution -- throw them into the evolving population; lather, rinse, repeat.
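Here's an equally minimal sketch of that loop -- this is roughly PBIL, about the simplest EDA there is, where the "probability distribution" is just a vector of independent bit probabilities (serious EDA's, like the Bayesian optimization algorithm Pelikan describes, learn much richer models of the dependencies among genes):

  import random

  def eda(fitness, length=20, pop_size=50, generations=100, learn_rate=0.2):
      probs = [0.5] * length  # the probability distribution: P(bit i = 1)
      for _ in range(generations):
          # sample a population of new candidates from the current distribution
          pop = [[int(random.random() < p) for p in probs] for _ in range(pop_size)]
          # figure out the patterns distinguishing the fit ones...
          pop.sort(key=fitness, reverse=True)
          elite = pop[:pop_size // 5]
          # ...and nudge the distribution toward those patterns
          for i in range(length):
              freq = sum(ind[i] for ind in elite) / len(elite)
              probs[i] += learn_rate * (freq - probs[i])
      return [int(p > 0.5) for p in probs]

  print(eda(fitness=sum))  # lather, rinse, repeat -- converges toward all 1s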

It's as if, instead of all this sexual mating bullcrap, the Federal gov't made an index of all our DNA, then did a statistical study of which combinations of genes tended to lead to "fit" individuals, then created new individuals based on this statistical information. Then these new individuals, as they grow up and live, give more statistical data to throw into the probability distribution, etc. (I'd argue that this kind of eugenics is actually a plausible future, if I didn't think that other technological/scientific developments were so likely to render it irrelevant.)

Martin Pelikan's recent book presents the idea quite well, for a technical computer science audience.

Moshe Looks' PhD thesis presents some ideas I co-developed regarding applying EDA's to automated program learning.

There is by now a lot of mathematical/computational evidence that EDA's can solve optimization problems that are "deceptive" (hence very difficult to solve) for pure evolutionary learning. To put it in simple terms, there are many broad classes of fitness functions for which pure neo-Darwinist evolution seems prone to run into dead ends, but for which EDA style evolution can jump out of the dead ends.
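The classic example of such deceptiveness is the concatenated "trap" function used throughout the EDA literature: within each block, fitness pulls you toward all 0s, yet each block's true optimum is all 1s, so blind crossover and mutation get dragged downhill while an EDA that learns the block structure can jump out. A sketch:

  def trap5(block):
      # 5-bit deceptive trap: fitness rises as 1s are removed,
      # but the all-1s block is the (isolated) global optimum
      ones = sum(block)
      return 5 if ones == 5 else 4 - ones

  def deceptive_fitness(bits):
      # concatenation of independent 5-bit traps
      return sum(trap5(bits[i:i + 5]) for i in range(0, len(bits), 5))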

Morphic Fields + EDA's = ??

Anyway -- now how do these two ideas fit together?

What occurred to Allan Combs and myself in an email exchange (originating from Allan reading about EDA's in my book The Hidden Pattern) is:

If you assume the morphic field hypothesis is true, then the idea that the morphic field can serve as the "probability distribution" for an EDA (allowing EDA-like accelerated evolution) follows almost immediately...

How might this work?

One argument goes as follows.

Many aspects of evolving systems are underdetermined by their underlying genetics, and arise via self-organization (coupled to the environment and initiated via genetics). A great example is the fetal and early-infancy brain, as analyzed in detail by Edelman (in Neural Darwinism and other writings) and others. Let's take this example as a "paradigm case" for discussion.

If there is a morphic field, then it would store the patterns that occurred most often in brain-moments. The brains that survived longest would get to imprint their long-lasting patterns most heavily on the morphic field. So, the morphic field would contain a pattern P with a probability proportional to the occurrence of P in recently living brains ... meaning that the occurrence of P in the morphic field would correspond roughly to the fitness of organisms containing P.

Then, when young brains were self-organizing, they would be most likely to get imprinted with the morphic-field patterns corresponding to the most-fit recent brains....

So, if one assumes a probabilistically-weighted morphic field (with the weight of a pattern proportional to the number of times it's presented) then one arrives at the conclusion that evolution uses an EDA ...
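Just to make the mapping vivid (this is pure illustration of the analogy -- every name and mechanism below is invented for the sketch, and of course nothing here simulates any actual physics or biology):

  import random

  morphic_field = {}  # hypothetical: pattern -> accumulated imprint weight

  def imprint(brain_patterns, lifespan):
      # longer-surviving brains imprint their patterns more heavily,
      # so a pattern's weight tracks the fitness of organisms containing it
      for p in brain_patterns:
          morphic_field[p] = morphic_field.get(p, 0) + lifespan

  def self_organize(n_patterns):
      # a young brain's self-organization samples from the field, biased
      # toward heavily imprinted patterns -- i.e., the field plays exactly
      # the role of an EDA's probability distribution
      patterns = list(morphic_field)
      weights = [morphic_field[p] for p in patterns]
      return random.choices(patterns, weights=weights, k=n_patterns)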

Interesting to think that the mathematical power of EDA's might underlie some of the power of biological evolution!

The Role of Symbiosis?

In computer science there are other approaches than EDA's for jumping out of evolutionary-programming dead ends, though -- one is symbiosis, with its potential to explore spaces of forms more efficiently than pure evolution. See e.g. Richard Watson's book from a couple of years back --

Compositional Evolution: The Impact of Sex, Symbiosis, and Modularity on the Gradualist Framework of Evolution


and, also, Google "symbiogenesis." (Marginally relevantly, I wrote a bit about Schwemmler's ideas on symbiogenesis and cancer, a while back.)

But of course, symbiosis and morphic fields are not contradictory notions.

Hypothetically, morphic fields could play a role in helping organisms to find the right symbiotic combinations...

But How Could It Be True?

How the morphic fields would work in terms of physics is a whole other question. I don't know. No one does.

As I emphasized in my posts on psi earlier this year, it's important not to reject data just because one lacks a good theory to explain it.

I do have some interesting speculations to propound, though (I bet you suspected as much ;-). I'll put these off till another blog post ... but if you want a clue of my direction of thinking, mull a bit on

http://www.physics.gatech.edu/schatz/clocks.html

Sunday, March 09, 2008

Brief Report on AGI-08

Sooo....

The AGI-08 conference (agi-08.org) occurred last weekend in Memphis...!

I had hoped to write up a real scientific summary of AGI-08, but at the moment it doesn't look like I'll find the time, so instead I'll make do with this briefer and more surface-level summary...

Firstly, the conference went VERY well. The tone was upbeat, the discussions were animated and intelligent, and all in all there was a feel of real excitement about having so many AGI people in one place at one time.

Attendance was good: We originally anticipated 80 registrants but had 120+.

The conference room was a futuristic setting called "The Zone" that looked sorta like the Star Trek bridge -- with an excellent if mildly glitchy video system that, during Q&A sessions, showed the questioner up on a big screen in front of the room.

The unconventional format (brief talks followed by long discussion/Q&A sessions) was both productive and popular. The whole thing was videoed, and at some point the video record will be made available online (I don't know the intended timing of this yet).

The proceedings volume was released by IOS Press a few weeks before the conference and is a thick impressive-looking tome.

The interdisciplinary aspect of the conference seemed to work well -- e.g. the session on virtual-worlds AI was chaired by Sibley Verbeck (CEO of Electric Sheep Company) and the session on neural nets was chaired by Randal Koene (a neuroscientist from Boston University). This definitely made the discussions deeper than if it had been an AI-researchers-only crowd.

Plenty of folks from government agencies and large and small corporations were in attendance, as well as of course many AI academics and non-affiliated AGI enthusiasts. Among the AI academics were some highly-respected stalwarts of the AI community, alongside the new generation...

There seemed to be nearly as many Europeans as Americans there, which was a pleasant surprise, and some Asians as well.

The post-conference workshop on ethical, sociocultural and futurological issues drew about 60 people and was a bit of a free-for-all, with many conflicting perspectives presented quite emphatically and vociferously. I think most of that discussion was NOT captured on video (it took place in a different room where video-ing was less convenient), though the workshop talks themselves were.

The media folks in attendance seemed most energized by the session on AI in virtual worlds, because in that session the presenters (me, Andrew Shilliday, and Martin Magnusson) showed movies of cute animated characters doing stuff. This gave the nontechnical observers something to grab onto, which most of the other talks did not.

As at the earlier AGI-06 workshop, one of the most obvious observations after listening to the talks was that a lot of AGI research programs are pursuing fairly similar architectures and ideas, but using different languages to describe what they're doing. This suggests that making a systematic effort at finding a common language, and really understanding the true overlaps and differences of the various approaches, would be very beneficial. There was some talk of organizing a small, invitation-only workshop among practicing AGI system architects, perhaps in Fall 2008, with a view toward making progress in this direction.

Much enthusiasm was expressed for an AGI-09, and it was decided that this will likely be located in Washington DC, a location that will give us the opportunity to use the conference to help energize various government agencies about AGI.

There was also talk about the possibility of an AGI online technical journal, and a group of folks will be following that up, led by Pei Wang.

An "AGI Roadmap" project was also discussed, which would involve aligning different cognitive architectures currently proposed insofar as possible, but also go beyond that. Another key aspect of the roadmap might be an agreement on certain test environments or tasks that could be used to compare and explore various AGI architectures in more of a common way than is now possible.

Lots of ideas ... lots of enthusiasm ... a strong feeling of community-building ... so, I'm really grateful to Stan Franklin, Pei Wang, Sidney D'Mello and Bruce Klein and everyone else who helped to organize the conference.

Finally, an interesting piece of feedback was given by my mother, who knows nothing about AGI research (she runs a social service agency) and who did not attend the conference but read the media coverage afterwards. What she said is that the media seems to be taking a far less skeptical and mocking tone toward AGI these days, as opposed to 7-10 years ago when I first started appearing in the media now and then. I think this is true, and it signifies a real shift in cultural attitude. This shift is what allowed The Singularity Is Near to sell as many copies as it did; and what encouraged so many AI academics to come to a mildly out-of-the-mainstream conference on AGI. Society, including the society of scientists, is starting to wake up to the notion that, given modern technology and science, human-level AGI is no longer a pipe dream but a potential near-term reality. w00t! Of course there is a long way to go in terms of getting this kind of work taken as seriously as it should be, but at least things seem to be going in the right direction.

Balancing concrete work on AGI with community-building work like co-organizing AGI-08 is always a tricky decision for me.... But in this case, the conference went sufficiently well that I think it was worthwhile to divert some time from the R&D to help out with it. (And now, back to the mass of other work that piled up for me during the conference!)

Yet More Rambling on Will (Beyond the Rules vs. Randomness Dichotomy)

A bit more on this nasty issue of will ... complementing rather than contradicting my previously-expressed ideas.

(A lot of these theory-of-mind blog posts are gonna ultimately get revised and make their way into The Web of Pattern, the sequel to The Hidden Pattern that I've been brewing in my mind for a while...)

What occurred to me recently was a way out of the old argument that "free will can't exist because the only possibilities are RULES versus RANDOMNESS."

In other words, the old argument goes: Either a given behavior is determined, or it's random. And in either case, where's the will? Granted, a random coin-toss (quantum or otherwise) may be considered "free" in a sense, but it's not willed -- it's just random.

What occurred to me is that this dichotomy is oversimplified because it fails to take two factors into account:

  1. A subjectively experienced moment occurs over a fuzzy span of time, not at a single physical moment
  2. "Random" always means "random with respect to some observer."

To clarify the latter point: "S is random to system X" just means "S contains no patterns that system X could identify."

System Y may be able to recognize some patterns in S, even though X can't.

And, X may later evolve into X1, which can recognize patterns in S.

Something that was random to me thirty years ago, or thirty seconds ago, may be patterned to me now.
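The humble pseudorandom number generator makes this concrete: its output stream is random to an observer who doesn't know the seed, and perfectly patterned to one who does. A tiny Python illustration (the seed value is arbitrary):

  import random

  SEED = 1234  # the hidden pattern behind the stream

  def stream(n):
      rng = random.Random(SEED)
      return [rng.randint(0, 9) for _ in range(n)]

  # To observer X, ignorant of SEED, the stream contains no identifiable pattern.
  # To observer X1, who has learned SEED, every element is predictable:
  assert stream(10) == stream(10)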

Consider the perspective of the deliberative, rational component of my mind, when it needs to make a choice. It can determine something internally, or it can draw on an outside source, whose outcome may not be predictable to it (that is, it may make a "random" choice). Regarding outside sources, options include

  1. a random or pseudorandom number generator
  2. feedback from the external physical world, or from another mind in the vicinity
  3. feedback from the unconscious (or less conscious) non-deliberative part of the mind

Any one of these may introduce a "random" stimulus that is unpatterned from the point of view of the deliberative decision-maker.

But of course, options 2 and 3 have some different properties from option 1. This is because, in options 2 or 3, something that appears random at a certain moment, may appear non-random a little later, once the deliberative mind has learned a little more (and is thus able to recognize more or different patterns).

Specifically, in the case of option 3, it is possible for the deliberative mind to draw on the unconscious mind for a "random" choice, and then a half-moment later, import more information from the unconscious that allows it to see some of the patterns underlying the previously-random choice. We may call this process "internal patternization."

Similarly, in the case of option 2, it is possible for the deliberative mind to draw on another mind for a "random" choice, and then a half-moment later, import more information from the other mind that allows it to see some of the patterns underlying the previously random choice. We may call this process "social patternization."

There's also "physical patternization" where the random choice comes from an orderly (but initially random to the perceiving mind) process in the external world.

These possibilities are interesting to consider in the light of the non-instantaneity of the subjective moment, because the process of patternization may occur within a single experienced moment.

The subjective experience of will, I suggest, is closely tied to the process of internal patternization. When we have the feeling of making a willed decision, we are often making a "random" choice (random from the perspective of our deliberative component), and then immediately having the feeling of seeing some of the logic and motivations under that choice (as information passes from unconscious to conscious). But the information passed into the deliberative mind is of course never complete, and there's always still some indeterminacy left, due to the limited capacity of the deliberative mind as compared to the unconscious mind.

So, what is there besides RULES plus RANDOMNESS?

There is the feeling of RANDOMNESS transforming into RULES (i.e. patterns), within a single subjective moment.

When this feeling involves patterns of the form "Willing X is causing {Willing X plus the occurrence of S}", then we have the "free will" experience. (This is the tie-in with my discourse on free will and hypersets, a few blog posts ago.)

That is, the deliberative content of recursive willing is automatized and made part of the unconscious, through repeated enaction. It then plays a role in unconscious action determination, which is perceived as random by the deliberative mind -- until, toward the tail end of a subjective moment, it becomes more patterned (from the view of the deliberative mind) due to receiving more attention.

Getting practical for a moment: None of this, as I see it, is stuff that you should program into an AGI system. Rather it is stuff that should emerge within the system as a part of its ongoing recognition of patterns in the world and itself, oriented toward achieving its goals. In this particular case the dynamics of attention allocation is key -- the process by which low-attention items (unconscious) can rapidly gain attention (become intensely deliberatively conscious) within a single subjective moment, but can also have a decisive causal impact prior to this increase in attention. The nonlinear dynamics of attention, in other words, is one of the underpinnings of the subjective experience of will.

What I'm trying to do here is connect phenomenology, cognitive science and AGI design. It seems to work, conceptually, in terms of according with my own subjective experience and also with known data on human brain/mind and my intuition/experience with AGI design.

Tuesday, February 19, 2008

Characterizing Consciousness and Will in Terms of Hypersets

This is another uber-meaty blog post, reporting a train of thought I had while eating dinner with my wife last night -- one that appears to me to provide a new perspective on two of the thorniest issues in the philosophy of mind: consciousness and will.

(No, I wasn't eating any hallucinogenic mushrooms for dinner; just some grilled chicken with garlic and ginger and soy sauce, on garlic naan. Go figure.)

These are of course very old issues and it may seem every possible perspective on them has already been put forth, without anything fundamentally being resolved.

However, it seems to me that the perspectives on these topics explored so far constitute only a small percentage of the perspectives that may sensibly be taken.

What I'm going to do here is to outline a new approach to these issues, which is based on hyperset theory -- and which ties in with various things I've written on these topics before, inspired by neuropsychology and systems theory and probabilistic logic and so on and so forth.

(A brief digressive personal comment: I've been sooooo overwhelmingly busy with Novamente-related business stuff lately, it's really been a pleasure to take a couple hours to write down some thoughts on these more abstract topics! Of course, no matter what I'm doing with my time as I go through my days, my unconscious is constantly churning on conceptual issues like the ones I talk about in this blog post -- but time to write down my thoughts on such things is so damn scant lately.... One of the next things to get popped off the stack is the relation of the model of will given here with ethical decision-making, as related to the iterated prisoner's dilemma, the voting problem, and so forth. Well, maybe next week ... or next month.... I certainly feel like working toward making a thinking machine for real, is more critical than exploring concepts in the theory of mind; but on a minute-by-minute basis, I have to admit I find the latter more fun....)

Hypersets

One of the intuitions underlying the explorations presented here is that possibly it's worth considering hypersets as an important part of our mathematical and conceptual model of mind -- and consciousness and will in particular.

A useful analogy might be the way that differential equations are an important part of our mathematical and conceptual model of physical reality. Differential equations aren't in the world; and hypersets aren't in the mind; but these sorts of mathematical abstractions may be extremely useful for modeling and understanding what's going on.

In brief, hypersets are sets that allow circular membership structures, e.g. you can have

A = {A}

A = {B,{A}}

and so forth. It follows that you can have functions that take themselves as arguments, and lots of other stuff that doesn't work according to the standard axioms of set theory.
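Ordinary programming languages provide a cheap intuition pump here (though a Python list is not a set, and this is not a model of hyperset theory proper): circular containment is easy to build and perfectly well-behaved:

  # a structure that contains itself, in the spirit of A = {A}
  a = []
  a.append(a)
  assert a[0] is a          # a's sole member is a itself
  assert a[0][0][0] is a    # ... and so on, all the way down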

While exotic, hypersets are well-defined mathematical structures, and in fact simple hypersets have fewer conceptual conundrums associated with them than the real number system (which is assumed in nearly all our physics theories).

The best treatment of hypersets for non-mathematicians that I know of is the book The Liar, which I highly recommend.

Anyway, getting down to business, let's start with consciousness, and then after that we'll proceed to will.

Disambiguating Consciousness

Of course the natural language term "consciousness" is heavily polysemous, and I'm not going to try to grapple with every one of its meanings. Specifically, I'm going to focus on the meaning that might be specified as "reflective consciousness." Which is different from the "raw awareness" that, arguably, worms and bugs have, along with us bigger creatures.

Raw awareness is also an interesting topic, though I tend toward a kind of panpsychism, meaning that I tend to believe everything (even a rock or an electron) possesses some level of raw awareness. Which means that raw awareness is then just an aspect of being, rather than a separate quality that some entities possess and not others.

Beyond raw awareness, though, it's clear that different entities in the universe manifest different kinds of awareness. Worms are aware in a different way than rocks; and, I argue, dogs, pigs, pigeons and people are aware in a different way from worms. What I'll (try to) deal with here is the sense in which the latter beasts are conscious whereas worms are not -- i.e. what might be called "reflective consciousness." (Not a great term, but I don't know any great terms in this domain.)

Defining Reflective Consciousness

So, getting down to business.... My starting-point is the old cliché that

Consciousness is consciousness of consciousness

This is very nice, but doesn't really serve as a definition or precise characterization.

In hyperset theory, one can write an equation

f = f(f)

with complete mathematical consistency. You feed f, as input, f; and you receive, as output, f.
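In programming terms (again just an intuition pump, not a claim about the mathematics of hypersets), the identity function behaves exactly this way:

  f = lambda x: x   # feed f, as input, f ...
  assert f(f) is f  # ... and you receive, as output, f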

It seems evident, though, that while this sort of anti-foundational recursion may be closely associated with consciousness, this simple equation itself doesn't tell you much about consciousness. We don't really want to say

Consciousness = Consciousness(Consciousness)

I think it's probably more useful to say:

Consciousness is a hyperset, and consciousness is contained in its membership scope

Here by the "membership scope" of a hyperset S, what I mean is the members of S, plus the members of the members of S, etc.

This is no longer a definition of consciousness, merely a characterization.

What it says is that consciousness must be defined anti-foundationally, as some sort of construct via which consciousness builds consciousness from consciousness -- but it doesn't specify exactly how.

Next, I want to introduce the observation, which I made in The Hidden Pattern (and in an earlier essay), that the subjective experience of being conscious of some entity X is correlated with the presence of a very intense pattern in one's overall mind-state, corresponding to X. This idea is also the essence of neuroscientist Susan Greenfield's theory of consciousness (but in her theory, "overall mind-state" is replaced with "brain-state").

Putting these pieces together (hypersets, patterns and correlations), we arrive at the following working definition of consciousness:

"S is conscious of X" is defined as: The declarative content that {"S is conscious of X" correlates with "X is a pattern in S"}

In other words: Being conscious of a pig, means having in one's mind declarative knowledge of the form that one's consciousness of that pig is correlated with that pig being a pattern in one's overall mind-state.

Note that this declarative knowledge must be expressed in some language such as hyperset theory, in which anti-foundational inclusions are permitted. But of course, it doesn't have to be a well-formalized language -- just as pigeons, for instance, can carry out deductive reasoning without having a formalization of the rules of Boolean or probabilistic logic in their brains. All that is required is that the conscious mind has an internal informal language capable of expressing and manipulating simple hypersets.

To make this formal, one requires also a definition of pattern, which I've supplied in The Hidden Pattern.

OK, so much for consciousness. Now, on to our other old friend, will.

Defining Will

The same approach, I suggest, can be used to define the notion of "will," by which I mean the sort of willing process that we carry out in our minds when we subjectively feel like we are deciding to make one choice rather than another.

In brief:

"S wills X" is defined as: The declarative content that {"S wills X" causally implies "S does X"}

To fully explicate this is slightly more complicated than in the case of consciousness, due to the need to unravel what's meant by "causal implication." This is done in my forthcoming book Probabilistic Logic Networks in some detail, but I'll give the basic outline here.

Causal implication may be defined as: Predictive implication combined with the existence of a plausible causal mechanism.

More precisely, if A and B are two classes of events, then A "predictively implies B" if it's probabilistically true that in a situation where A occurs, B often occurs afterwards. (Yes, this is dependent on a model of what is a "situation", which is assumed to be part of the mind assessing the predictive implication.)

And, a "plausible causal mechanism" associated with the assertion "A predictively implies B" means that, if one removed from one's knowledge base all specific instances of situations providing direct evidence for "A predictively implies B", then the inferred evidence for "A predictively implies B" would still be reasonably strong. (In a certain logical lingo, this means there is strong intensional evidence for the predictive implication, along with extensional evidence.)

If X and Y are particular events, then the probability of "X causally implies Y" may be assessed by probabilistic inference based on the classes (A, B, etc.) of events that X and Y belong to.
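As a rough sketch of the predictive-implication piece (the event-log representation and the time window here are invented for illustration; the actual PLN formulation is in the book):

  def predictive_implication(events, A, B, window=5.0):
      # estimate P(some B occurs within `window` after an A),
      # from a log of (timestamp, event_class) pairs
      a_times = [t for t, c in events if c == A]
      b_times = [t for t, c in events if c == B]
      if not a_times:
          return 0.0
      hits = sum(any(t < tb <= t + window for tb in b_times) for t in a_times)
      return hits / len(a_times)

  log = [(0.0, "A"), (2.0, "B"), (10.0, "A"), (11.5, "B"), (20.0, "A")]
  print(predictive_implication(log, "A", "B"))  # 2 of the 3 A's are followed by a B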

In What Sense Is Will Free?

But what does this say about the philosophical issues traditionally associated with the notion of "free will"?

Well, it doesn't suggest any validity for the idea that will somehow adds a magical ingredient beyond the familiar ingredients of "rules" plus "randomness." In that sense, it's not a very radical approach. It fits in with the modern understanding that free will is to a certain extent an "illusion."

However, it also suggests that "illusion" is not quite the right word.

The notion that willed actions somehow avoid the apparently-deterministic/stochastic nature of the universe is not really part of the subjective experience of free will ... it's a conceptual add-on that comes from trying to integrate our subjective experience with the modern scientific understanding of the world, in an overly simplistic and partially erroneous way.

An act of will may have causal implication, according to the psychological definition of the latter, without this action of will violating the basic deterministic/stochastic equations of the universe. The key point is that causality is itself a psychological notion (where within "psychological" I include cultural as well as individual psychology). Causality is not a physical notion; there is no branch of science that contains the notion of causation within its formal language.

In the internal language of mind, acts of will have causal impacts -- and this is consistent with the hypothesis that mental actions may ultimately be determined via deterministic/stochastic lower-level dynamics. Acts of will exist on a different level of description than these lower-level dynamics.

The lower-level dynamics are part of a theory that compactly explains the behavior of cells, molecules and particles; and some aspects of complex higher-level systems like brains, bodies and societies. Will is part of a theory that compactly explains the decisions of a mind to itself.

My own perspective is that neither the lower-level dynamics (e.g. physics equations) nor will should be considered as "absolutely real" -- there is no such thing as absolute reality. The equations of physics, glorious as they are, are abstractions we've created, and that we accept due to their utility for helping us carry out various goals and recognize various regularities in our own subjective experience.


Connecting Will and Consciousness


Connecting back to our first topic, consciousness, we may say that:


In the domain of reflective conscious experiences, acts of will are experienced as causal.

This of course looks like a perfectly obvious assertion. What's nice is that it seems to fall out of a precise, abstract characterization of consciousness and will.

Free Will and Virtual Multiverse Modeling

In a previous essay, written a few years back and ultimately incorporated into The Hidden Pattern, I gave an analysis of the psychological dynamics underlying free will, the essence of which may be grokked from the following excerpt:

For example, suppose I am trying to decide whether to kiss my beautiful neighbor. One part of my brain is involved in a dynamic which will actually determine whether I kiss her or not. Another part of my brain is modeling that first part, and doesn’t know what’s going to happen. A virtual multiverse occurs in this second part of the brain, one branch in which I kiss her, the other in which I don’t. Finally, the first part comes to a conclusion; and the second part collapses its virtual multiverse model almost instantly thereafter.

The brain uses these virtual multiverse models to plan for multiple contingencies, so that it is prepared in advance, no matter what may happen. In the case that one part of the brain is modeling another part of the brain, sometimes the model produced by the second part may affect the actions taken by the first part. For instance, the part (call it B) modeling the action of kissing my neighbor may come to the conclusion that the branch in which I carry out the action is a bad one. This may affect the part (call it A) actually determining whether to carry out the kiss, causing the kiss not to occur. The dynamic in A which causes the kiss not to occur, is then reflected in B as a collapse in its virtual multiverse model of A.


Now, suppose that the timing of these two causal effects (from B to A and from A to B) is different. Suppose that the effect of B on A (of the model on the action) takes a while to happen (spanning several subjective moments), whereas the effect of A on B (of the action on the model) is nearly instantaneous (occurring within a single subjective moment). Then, another part of the brain, C, may record the fact that a collapse to definiteness in B’s virtual multiverse model of A, preceded an action in A. On the other hand, the other direction of causality, in which the action in A caused a collapse in B’s model of A, may be so fast that no other part of the brain notices that this was anything but simultaneous. In this case, various parts of the brain may gather the mistaken impression that virtual multiverse collapse causes actions; when in fact it’s the other way around. This, I conjecture, is the origin of our mistaken impression that we make “decisions” that cause our actions.



How does this relate to the current analysis in terms of hypersets?

The current analysis adds an extra dimension to the prior one, which has to do with what in the above quote is called the "second part" of the brain involved with the experience of will -- the "virtual multiverse modeler" component.

The extra dimension has to do with the ability of the virtual multiverse modeler to model itself and its own activity.

My previous theory discusses perceived causal implications between actions taken by one part of the brain, and models of the consequences of these actions occurring in another part (the virtual multiverse modeler). It notes that sometimes the mind makes mistakes in perceiving a causal implication between a collapse in the virtual multiverse model and an action, when a more careful understanding of the mental dynamics would reveal a more powerful causal implication in the other direction. There is much evidence for this in the neuropsychology literature, some of which is reviewed in my previous article.

The new ingredient added by the present discussion is an understanding that the virtual multiverse modeler can model its own activity and its relationship with the execution of actions. Specifically, the virtual multiverse modeler can carry out modeling in terms of an intuitive notion of "will" that may be formalized as I described above:


"S wills X" is defined as: The declarative content that {"S wills X" causally implies "S does X"}



where "S" refers specifically to the virtual multiverse modeler component, the nexus of the feeling of will.

And, as noted in my prior essay, it may do so whether or not this causal implication would hold up when the dynamics involved were examined at a finer level of granularity.

Who Cares?

Well, now, that's a whole other question, isn't it....

Personally, I find it interesting to progressively move toward a greater and greater understanding of the processes that occur in my own mind everyday. Since understanding (long ago) that the classical notion of "free will" is full of confusions, I've struggled to figure out the right attitude to take in my own mind, toward decisions that come up in my own life.

Day after day, hour after hour, minute after minute, I'm faced with deciding between option A and option B -- yet how seriously can I take this decision process if I know I have no real will anyway?

But the way I try to think about it is as follows: Within the descriptive language in which my reflective consciousness exists, my will does exist. It may not exist within the descriptive language of physics, but that's OK. None of these descriptive languages has an absolute reality. But, each of these descriptive languages can usefully help us understand the others (as well as helping us to understand the world directly); and having an understanding of the systematic biases made by the virtual multiverse modeler in my brain has certainly been useful to me. It has given me a lot more respect for the underlying unconscious dynamics governing my decisions, and this I believe has helped me to learn to make better decisions.

In terms of my AI work, the main implication of the train of thought reported here is that in order to experience reflective consciousness and will, an AI system needs to possess an informal internal language allowing the expression of basic hyperset constructs. Of course, in different AI designs this could be achieved in different ways, for instance it could be explicitly wired into the formalism of a logic-based AI system, or it could emerge spontaneously from the dynamics of a neural net based AI system. In a recent paper I explored some hypothetical means via which a neural system could give rise to a neural module F that acts as a function taking F as an input; this sort of phenomenon could potentially serve as a substrate for an internal hyperset language in the brain.

There is lots left to explore and understand, of course. But my feeling is that reflective consciousness and will, as described here, are not really so much trickier than other mental phenomena like logical reasoning, language understanding and long-term memory organization. Hypersets are a different formalism than the ones typically used to model these other aspects of cognition, but ultimately they're not so complex or problematic.

Onward!

Thursday, February 14, 2008

Psi, Closed-Mindedness and Fear

Some of the followup (private) emails I've gotten in regard to my just-prior blog post on Damien Broderick's book on psi, have really boggled my mind.

These emails basically present arguments of two forms:

  1. You're nuts, don't you know all the psi experiments are fraud and experimental error, everyone knows that...
  2. Look, even if there's a tiny chance that some psi phenomena are real, you're a fool to damage your reputation by aligning yourself with the kooks who believe in it

What shocks me (though it shouldn't, as I've been around 41 years and seen a lot of human nature already) about arguments of the first form is the irrational degree of skepticism toward this subject, displayed by otherwise highly rational and reflective individuals.

It's not as though these people have read Damien's book or carefully studied the relevant literature. I would welcome debate with suitably informed skeptics. Rather, these people dismiss the experimental literature on psi based on hearsay, and don't consider it worth their while to spend the 3-10 hours (depending on individual reading speed) required to absorb a fairly straightforward nontechnical book on the subject, like Damien's.

What shocks me about arguments of the second form is how often they come from individuals who are publicly aligned with other extremely radical ideas. For instance, a few Singularitarians have emailed me and warned me that my talking about psi is bad, because then people will think Singularitarians are kooks.

(Amusingly, one Singularitarian pointed out in their conversation with me that, to them, the best argument for the possibility of psi that they know of is the Simulation Argument, which contends that we probably live in a computer simulation. This is I suppose based on the idea that the laws of physics somehow rule out psi, which they don't; but anyway it's an odd argument because whether we live in a simulation or not, the laws of physics are merely a compact summary of our empirical observations of the world we see, and so if psi data are real, they need to be incorporated into our observation-set and accounted for in our theories, regardless of whether we interpret these theories as being about a "real" world or a "simulated" one.)

Whoa!! So psi is so far out there that people who believe the universe is a simulation and the Singularity is near don't want their reputations poisoned by association with it?

This really baffles me.

I have no personal axe to grind regarding psi.

I have never had any unambiguous, personally convincing psi experiences (except when under the influence of various psychotropic compounds, but that's a whole other story ;-)....

I don't actually care much whether psi is real or not.

About psi and physics ... I am skeptical of attempts to explain psi based on quantum theory, due to not understanding how decoherence would be avoided in the hypothesized long-range quantum nonlocal binding between brains and other systems; but I recognize that quantum theory as such does not actually rule out psi. And, I am acutely aware that modern physics theories are incomplete, even leaving out psi data -- just taking into account well-accepted physics data. Modern physics does not provide a complete, conceptually consistent accounting of all well-accepted physics data. So all in all, our incomplete physics model doesn't rule out psi but makes it hard to explain. This does not seem a strong enough reason to ignore the available psi data on theoretical-physics grounds.

My observation is merely that, after spending a few dozen hours perusing the available data, it seems fascinating and compelling. Ed May's data is not the only good data out there by any means, but it's a great place to start if you want to dig into it.

I do not think we, as a community of thinking and understanding minds, should be ignoring all this high-quality data collected by serious, intelligent, careful scientists.

What is the reason for ignoring it? Presumably the reason is that a bunch of bullshit about psi has been promoted by a bunch of flakes and kooks. It's true. I admit it, Damien admits it, it's obvious. Let's get over that historical and cultural reality and look at the actual data -- quite possibly there's something to be learned from it. I don't know exactly what, but that's how science works -- you investigate and then you find out. What's frustrating is that in this extremely fascinating, important, potentially highly impactful area, research is proceeding so slowly because of excesses of skepticism and fear in the scientific community.

Scientists want to preserve their careers and reputations, so going out on a limb for something perceived as wacky is something very few of them are willing to do. As a consequence our understanding of the universe advances much more slowly than it otherwise could.

Finally, a brief aside.... For those who believe a Singularity is likely but who are highly skeptical of psi (a small percentage of the world, but disproportionately represented in the readership of this blog, I would imagine), I ask you this: Wouldn't it be nice to understand the universe a little better before launching a Singularity? If psi is real that would seem to have various serious implications for what superhuman AI's may be like post-Singularity, for example.

Well, anyway. I'm going to drop this topic for now as I have other stuff to focus on, like building AGI.... And I've been (finally) mixing down some of my music from MIDI to MP3; I'll post some on my website within the next month or so.... I don't have time to push ahead psi research myself nor to actively advocate for funding for those doing the research; but by writing these blog posts and reviewing Damien's book on Amazon.com, I've tried to do what I can (within my limited available time) to nudge the world toward being less closed-minded and less fearful in this regard.

Come on, people! Really! Have some guts and some mental-openness -- it's a big, weird, mysterious world out there, and I'm damn sure we understand only a teensy weensy bit of it. Experience gives us clues, empirical science gives us clues -- and the extent to which we manage to ignore some of the most interesting clues the world provides us, is pretty disappointing...