Tuesday, April 27, 2010

The Brain is Not an Uber-Intelligent Mega-computer

A friend forwarded me a recent New Scientist article containing some freaky, grandiose anti-AI rhetoric...

Some interesting tidbits about clever things single cells can do are followed by this dramatic conclusion:

For me, the brain is not a supercomputer in which the neurons are transistors; rather it is as if each individual neuron is itself a computer, and the brain a vast community of microscopic computers. But even this model is probably too simplistic since the neuron processes data flexibly and on disparate levels, and is therefore far superior to any digital system. If I am right, the human brain may be a trillion times more capable than we imagine, and "artificial intelligence" a grandiose misnomer.

I think it is time to acknowledge fully that living cells make us what we are, and to abandon reductionist thinking in favour of the study of whole cells. Reductionism has us peering ever closer at the fibres in the paper of a musical score, and analysing the printer's ink. I want us to experience the symphony.


Actually, I'm a big fan of complex systems biology, as opposed to naive molecular biology reductionism.

But just because cells and organisms are complex systems doesn't mean they're non-simulably superintelligent!

What's funny is that all this grandiose rhetoric is being flung about without ANY evidence whatsoever of actual acts of human intelligence being carried out by this posited low-level intra-cellular computing!!

Still, this relates to the basic reason why I'm not trying to do AGI via brain simulation. There is too much unknown about the brain....

But even though there is much unknown about the brain, I totally don't buy the idea that every neuron is doing super-sophisticated computing, so that the brain is a network of 100 billion intelligent computers achieving some kind of emergent superintelligence....

I don't know how this kind of argument explains results like Poggio's model of feedforward processing in visual cortex. By modeling at the level of neuronal groups, he gets NNs to give behavior very similar to that of human brains classifying images and recognizing objects under strict time constraints. If the brain is doing all this molecular supercomputing, how come when it's given only half a second to recognize an object in a picture, it performs semi-stupidly, just like Poggio's feedforward NNs? How come digital computer programs can NOW outperform the brain at time-constrained (half-second) object recognition? Wouldn't it have been to our evolutionary advantage to recognize objects more accurately under those constraints?

How about motion detection neurons -- for each small region of the visual field there are tens of thousands of them, each pointing in a direction that's off by an average of 80 degrees or so. Averaging their outputs together gives a reasonably accurate read-out of the direction of motion in that region. If all this molecular supercomputing is going on, why all the error in motion detection? Why bother with all the mess of averaging together erroneous results?
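To make the averaging point concrete, here is a minimal simulation sketch (the numbers are purely illustrative, not a model of real motion-detection circuitry): each simulated unit reports the true direction plus a large random error, and the circular mean of many such noisy reports lands very close to the truth.

```python
import numpy as np

# Toy simulation (not a model of real circuitry): many direction detectors,
# each with a large individual error, averaged into one population read-out.
rng = np.random.default_rng(0)

true_dir = np.radians(30.0)     # actual direction of motion in this region
n_units = 10_000                # detectors covering the region
noise_sd = np.radians(100.0)    # large per-unit angular error (illustrative)

# Each unit reports the true direction corrupted by its own error.
reported = true_dir + rng.normal(0.0, noise_sd, n_units)

def abs_angle_error(angles, target):
    """Absolute angular error, wrapped into [0, pi]."""
    return np.abs(np.angle(np.exp(1j * (np.asarray(angles) - target))))

# Population read-out: average the unit vectors (circular mean).
estimate = np.arctan2(np.sin(reported).mean(), np.cos(reported).mean())

print("mean single-unit error: %.1f degrees"
      % np.degrees(abs_angle_error(reported, true_dir)).mean())
print("population estimate error: %.3f degrees"
      % np.degrees(abs_angle_error(estimate, true_dir)))
```

The point is just that sloppy components plus averaging is a perfectly workable engineering strategy -- which is odd if each component is supposedly a supercomputer.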

And why the heck do we make so many basic cognitive errors, as diagnosed in the heuristics and biases literature? Is it THAT hard to avoid them, so hard that a network of 100 billion sophisticated computers can't manage it, when there would obviously be SOME evolutionary advantage in doing so....

Also, the comment about being "far superior to any digital system" is especially naive. What physics theory does this guy advocate? Any system based on classical physics can be emulated by a digital computer to within arbitrary accuracy. Quantum computers can compute some functions faster than digital ones, on an average-case basis -- so is he saying cells are quantum computers? Stuart Hameroff has flogged that one pretty hard, and there is NO evidence for it yet.

And even so, quantum doesn't mean superior. Birds seem to use some sort of quantum nonlocality to sense the direction of the Earth's magnetic field, which is funky -- but we have electronic devices that do the same thing BETTER, without any quantum weirdness....

OK, we haven't yet proved that digital computer systems can be intelligent like humans. But this guy certainly is not providing anything even vaguely resembling evidence to the contrary...

Wishful thinking if you ask me -- wishful thinking about the grandiosity of human intelligence. We're clever apes who stumbled upon the invention of language and culture, not uber-intelligent mega-computers....

Just to be clear: I can believe that individual cells do a LOT of sophisticated stuff internally, but I'm unclear how much of what they do is actually necessary for intelligence...

To repeat a time-worn analogy, the cells in a bird's wing probably do a LOT also, yet airplanes and spacecraft work well without mega-ubercomputers or quantum computers to emulate all that intra-bird-wing sub-cellular processing...


Saturday, April 24, 2010

Modeling Consciousness, Self and Will Using Hypersets

I finally did a more careful write-up of some ideas I developed in blog posts a while ago, about how to model reflective consciousness, self and will using hypersets (non-well-founded sets):

http://goertzel.org/consciousness/consciousness_paper.pdf

Even if you don't want to read the paper, look at the pictures at the end -- Figure 6 is pleasantly wacky and Figure 8 has a nice painting by my daughter in it....

Bask in the transreal glory of the fractallic mind!!! ;-D

Friday, April 16, 2010

"Conceptual Spaces" and AGI

One of my AI collaborators from the late 1990s, Alexandru Czimbor, recently suggested I take a look at Peter Gardenfors' book "Conceptual Spaces."

I read it and found it interesting, and closely related to some aspects of my own AGI approach ... this post contains some elements of my reaction to the book.

Gardenfors' basic thesis is that it makes sense to view a lot of mind-stuff in terms of topological or geometrical spaces: for example topological spaces with betweenness, or metric spaces, or finite-dimensional real spaces. He views this as a fundamentally different mind-model than the symbolic or connectionist perspectives we commonly hear about. Many of his examples are drawn from perception (e.g. color space) but he also discusses abstract concepts. He views both conceptual spaces and robust symbolic functionality as (very different) emergent properties of intelligent systems. Specific cognitive functions that he analyzes in terms of conceptual spaces include concept formation, classification, and inductive learning.

About the Book Itself

This blog post is mainly a review of the most AGI-relevant ideas in Gardenfors' book, and their relationship to my own AI work ... not a review of the book as such. But I'll start with a few comments on the book as a book.

Basically, the book reads sorta like a series of academic philosophy journal papers, carefully woven together into a book. It's carefully written, and technical points are elucidated in ordinary language. There are a few equations here and there, but you could skip them without being too baffled. The pace is "measured." The critiques of alternate perspectives on AI strike me as rather facile in some places (more on that below), and -- this is a complaint lying on the border between exposition and content -- there is a persistent unclarity regarding which of his ideas require a dimensional model of mind-stuff, versus which merely require a metric-space or weaker topological model. More on the latter point below.

If you're interested in absorbing a variety of well-considered perspectives on the nature of the mind, this is certainly a worthwhile book to pay attention to. I'd stop short of calling it a must-read, though.

Mindspace as Metric Space

I'll start with the part of Gardenfors' thesis that I most firmly agree with.

I agree that it makes sense to view mind-stuff as a metric space. Percepts, concepts, actions, relationships and so forth can be used as elements of a metric space, so that one can calculate distances and similarities between them.

As Gardenfors points out, this metric structure lets one do a lot of interesting things.

For instance, it gives us a notion of between-ness. As an example of why this is helpful, suppose one wants to find a way of drawing conclusions about Chinese politics from premises about Chinese individual personality. It's very helpful, in this case, to know which concepts lie in some sense "between" personality and politics in the conceptual metric space.
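One rough way to operationalize "betweenness" in a concept metric space (my own toy rendering, not anything from the book): score a candidate concept by how little the detour through it exceeds the direct distance between the endpoints.

```python
def betweenness_score(dist, x, z, y):
    """How nearly z lies 'between' x and y under metric dist: 1.0 when
    dist(x, z) + dist(z, y) == dist(x, y), decaying toward 0 as the
    detour through z grows (the triangle inequality keeps it in (0, 1])."""
    direct = dist(x, y)
    detour = dist(x, z) + dist(z, y)
    return direct / detour if detour > 0 else 1.0

# Hypothetical usage: rank candidate bridging concepts between
# "Chinese individual personality" and "Chinese politics", given concept
# representations and a distance function (both assumed to exist already):
#   candidates.sort(key=lambda z: betweenness_score(dist, personality, z, politics),
#                   reverse=True)
```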

It also lets us specify the "exemplar" theory of concepts in an elegant way. Suppose that we have N prototypes, or more generally N "prototype-sets", each corresponding to a certain concept. We can then assign a new entity X to one of these concepts, based on which prototype or prototype-set it's closest to (where "close" is defined in terms of the metric structure).
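Here is a minimal sketch of that prototype scheme (assuming only that some distance function over entities exists; the example data are made up):

```python
import math

def classify_by_prototypes(x, prototype_sets, dist):
    """Assign x to the concept whose prototype set it is closest to.

    prototype_sets: dict mapping concept name -> list of prototypes
    dist: any metric over the entities (Euclidean, edit distance, ...)
    """
    def dist_to_set(protos):
        return min(dist(x, p) for p in protos)
    return min(prototype_sets, key=lambda c: dist_to_set(prototype_sets[c]))

# Hypothetical usage with points in a 2-D feature space:
prototypes = {
    "warm_color": [(0.9, 0.2), (0.8, 0.4)],    # made-up prototype coordinates
    "cool_color": [(-0.7, 0.1), (-0.9, 0.3)],
}
print(classify_by_prototypes((0.6, 0.3), prototypes, math.dist))  # -> warm_color
```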

Mindspace as Dimensional Space

Many of Gardenfors' ideas require only a metric space, but others go further and require a dimensional space -- and one of my complaints about the book is that he's not really clear on which ideas fall into which category.

For instance, he cites some theorems showing that if one defines concepts via proximity to prototypes (as suggested above) in a dimensional space, then it follows that the concepts are convex sets. The theorem he gives holds in dimensional spaces, but it seems to me something similar should also hold in more general metric spaces, though I haven't checked the mathematics.
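For the Euclidean (dimensional) case the convexity claim is easy to see; a quick sketch of the standard argument is that each prototype's cell is an intersection of half-spaces:

```latex
% Nearest-prototype concepts are convex in Euclidean space:
C_i = \{\, x : \|x - p_i\| \le \|x - p_j\| \ \forall j \,\}
    = \bigcap_{j \ne i} \{\, x : 2\,(p_j - p_i)\cdot x \;\le\; \|p_j\|^2 - \|p_i\|^2 \,\}
```

Each set on the right is a half-space, and an intersection of convex sets is convex. In a general metric space one only has metric betweenness to work with, which is why the generalization needs checking.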

This leads up to his bold and interesting hypothesis that natural concepts are convex sets in mindspace.

I find this hypothesis fascinating, partly because it ties in with the heuristic assumption made in my own Probabilistic Logic Networks book, that natural concepts are spheres in mindspace. Of course I don't really believe natural concepts are spheres, but this was a convenient assumption to make to derive certain probabilistic inference formulas.

So my own suspicion is that cognitively natural concepts don't need to be convex, but there is a bias for them to be. And they also don't need to be roughly spherical, but again I suspect there is a bias for them to be.

So I suspect that Gardenfors' hypothesis about the convexity of natural concepts is an exaggeration of the reality -- but still a quite interesting idea.

If one is designing a fitness function F for a concept-formation heuristic, so that F(C) estimates the likely utility of concept C, then it may be useful to incorporate both convexity and sphericality as part of the fitness function.
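Here's a purely illustrative sketch of such a fitness function (the particular convexity and sphericality proxies, and the weights, are my own inventions, not anything from Gardenfors or PLN):

```python
import numpy as np

def convexity_score(points, rng, n_pairs=200, tol=None):
    """Toy convexity proxy: the fraction of midpoints of random member pairs
    that land near some member. Convex extensions are closed under
    betweenness, so this fraction should be high for convex concepts."""
    points = np.asarray(points, dtype=float)
    if tol is None:
        # scale-dependent tolerance: median nearest-neighbour distance
        d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        tol = np.median(d.min(axis=1))
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    mid = (points[i] + points[j]) / 2.0
    nearest = np.linalg.norm(mid[:, None] - points[None, :], axis=-1).min(axis=1)
    return float((nearest <= tol).mean())

def sphericality_score(points):
    """Toy sphericality proxy: isotropy of the member cloud, i.e. the ratio
    of smallest to largest covariance eigenvalue (1.0 for a ball-like cloud)."""
    points = np.asarray(points, dtype=float)
    eig = np.linalg.eigvalsh(np.cov(points.T))
    return float(eig[0] / eig[-1]) if eig[-1] > 0 else 0.0

def concept_fitness(points, w_convex=0.5, w_sphere=0.5, seed=0):
    """F(C): a toy estimate of how 'natural' a candidate concept C is,
    where C is given as an (n, d) array of member points in mindspace."""
    rng = np.random.default_rng(seed)
    return (w_convex * convexity_score(points, rng)
            + w_sphere * sphericality_score(points))

# Hypothetical usage: a ball-shaped cluster should score higher than a thin arc.
rng = np.random.default_rng(1)
ball = rng.normal(size=(300, 2))
theta = rng.uniform(0, np.pi, 300)
arc = np.c_[np.cos(theta), np.sin(theta)] * 10.0
print(concept_fitness(ball), concept_fitness(arc))
```

The design choice here is just to treat both properties as soft biases to be weighted, rather than hard requirements -- in line with the suspicion above that natural concepts merely tend toward convexity and sphericality.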

Conceptual Space and the Problem of Induction

Gardenfors presents the "convexity of natural concepts" idea as a novel solution to the problem of induction, by positing that when comparing multiple concepts encapsulating past observations, one should choose the convex concepts as the basis for extrapolation into the future. This is an interesting and potentially valuable idea, but IMO positing it as a solution to the philosophical induction problem is a bit peculiar.

What he's doing is making an a priori assumption that convex concepts -- in the dimensional space the brain has chosen -- are more likely to persist from past to future. Put differently, he is assuming that "the tendency of convex concepts to continue from past into future", a pattern he has observed during his past, is going to continue into his future. So, from the perspective of the philosophical problem of induction, his approach still requires one to make a certain assumption about some properties of past experience continuing into the future.

He doesn't really solve the problem of induction -- what he does is suggest a different a priori assumption, a different "article of faith", which if accepted can be used to guide induction. Hume (when he first posed the problem of induction) suggested that "human nature" guides induction, and perhaps Gardenfors' suggestion is part of human nature.

Relating Probabilistic Logic and Conceptual Geometry

Gardenfors conceives the conceptual-spaces perspective as a radically different alternative to
the symbolic and subsymbolic perspectives. However, I don't think this is the right way to look at it. Rather, I think that

  1. a probabilistic logic system can be considered as a metric space (and this is explained in detail in the PLN book)
  2. either a probabilistic logic system or a neural network system can be projected into a dimensional space (using dimensional embedding algorithms such as the one developed by Harel and Koren among others, and discussed on the OpenCog wiki site)

Because of point 1, it seems that most of Gardenfors' points actually apply within a probabilistic logic system. One can even talk about convexity in a general metric space context.

However, there DO seem to be advantages to projecting logical knowledge bases into dimensional spaces, because certain kinds of computation are much more efficient in dimensional spaces than in straightforward logical representations. Gardenfors doesn't make this point in this exact way, but he hints at it when he says that dimensional spaces get around some of the computational problems plaguing symbolic systems. For instance, if you want to quickly get a list of everything reasonably similar to a given concept -- or everything along a short path between concept A and concept B -- these queries are much more efficiently answered in a dimensional-space representation than in a traditional logic representation.
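As a concrete illustration of the efficiency point, here is a sketch that assumes the concepts have already been embedded as vectors by something like the Harel-Koren algorithm (the concept names and the 32-dimensional embedding are hypothetical): once concepts live in a real vector space, a spatial index answers "everything similar to X" queries without scanning the whole knowledge base.

```python
import numpy as np
from scipy.spatial import cKDTree

# Assume some embedding step has already mapped each concept to a vector;
# the names and dimensions below are made up for illustration.
rng = np.random.default_rng(0)
concept_names = ["concept_%d" % i for i in range(100_000)]
embeddings = rng.normal(size=(len(concept_names), 32))

tree = cKDTree(embeddings)   # build once; then each similarity query is fast

def similar_concepts(query_vec, k=10):
    """Return the k concepts nearest to query_vec in the embedded space."""
    dists, idx = tree.query(query_vec, k=k)
    return [(concept_names[i], float(d)) for i, d in zip(idx, dists)]

print(similar_concepts(embeddings[0], k=5))
# (In very high dimensions an approximate index would be the better choice,
#  but the point stands: no scan over the whole knowledge base is needed.)
```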

Gardenfors points out that, in a dimensional formulation, prototype-based concepts correspond to cells in Voronoi or generalized Voronoi tessellations. This is interesting, and in a system that generates dimensional spaces from probabilistic logical representations, it suggests a nice concept formation heuristic: tessellate the dimensional space based on a set of prototypes, and then create new concepts based on the cells in the tessellation.

This brings up the question of how to choose the prototypes. If one uses the Harel and Koren embedding algorithm, it's tempting to choose the prototypes as equivalent to the pivots, for which we already have a heuristic algorithm. But this deserves more thought.
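To make the heuristic concrete, here is a minimal sketch (my own construction, not code from the book or from the Harel-Koren paper; the greedy farthest-point pivot choice is merely in the spirit of their pivot selection): choose pivots greedily, carve the embedded space into nearest-pivot cells, and propose one new candidate concept per cell.

```python
import numpy as np

def choose_pivots(points, k, rng):
    """Greedy farthest-point pivot selection: each new pivot is the point
    farthest from the pivots chosen so far (a common k-center heuristic)."""
    pivots = [int(rng.integers(len(points)))]
    dist_to_pivots = np.linalg.norm(points - points[pivots[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(dist_to_pivots.argmax())
        pivots.append(nxt)
        dist_to_pivots = np.minimum(
            dist_to_pivots, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(pivots)

def concepts_from_tessellation(points, k=8, seed=0):
    """Propose new concepts as cells of the (generalized) Voronoi tessellation
    induced by the pivots: each point joins its nearest pivot's cell."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    pivots = choose_pivots(points, k, rng)
    d = np.linalg.norm(points[:, None] - points[pivots][None, :], axis=-1)
    cell_of = d.argmin(axis=1)                 # index of the nearest pivot
    return {c: np.flatnonzero(cell_of == c) for c in range(k)}

# Hypothetical usage on embedded concept vectors:
cells = concepts_from_tessellation(np.random.default_rng(1).normal(size=(500, 16)))
print({c: len(members) for c, members in cells.items()})
```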

Summary

Gardenfors' book gives many interesting ideas, and in an AGI design/engineering context, suggests some potentially valuable new heuristics. However, its claim to offer a fundamentally novel approach to modeling and understanding intelligence seems a bit exaggerated. Rather than a fundamentally disjoint sort of representation, "topological and geometric spaces" are just a different way of looking at the same knowledge represented by other methods such as probabilistic logic. Probabilistic logic networks are metric spaces, and can be projected into dimensional spaces; and the same is likely true for many other representation schemes as well. But Gardenfors gives some insightful and possibly useful new twists on the use of dimensional spaces in intelligent systems.

Owning Our Actions: Natural Autonomy versus Free Will

At the Toward a Science of Consciousness conference earlier this week, I picked up a rather interesting book to read on the flight home: Henrik Walter's "The Neurophilosophy of Free Will" ....

It's an academic philosophy tome -- fairly well-written and clear for such, but still possessing the dry and measured style that comes with that genre.

But the ideas are quite interesting!

Walter addresses the problem: what kind of variant of the intuitive "free will" concept might be compatible with what neuroscience and physics tell us?

He decomposes the intuitive notion of free will into three aspects:

  1. Freedom: being able to do otherwise
  2. Intelligibility: being able to understand the reasons for one's actions
  3. Agency: being the originator of one's actions

He argues, as many others have done, that there is no way to salvage all three of these in their obvious forms in a manner consistent with known physics and neuroscience. I won't repeat those arguments here. [There are much better references, but I summarized some of the literature here, along with some of my earlier ideas on free will (which don't contradict Walter's ideas, but address different aspects).]

Walter then argues for a notion of "natural autonomy," which replaces the first and third of these aspects with weaker things, but has the advantage of being compatible with known science.

First I'll repeat his capsule summary of his view, and then translate it into my own language, which may differ slightly from his intentions.

He argues that "we possess natural autonomy when

  1. under very similar circumstances we could also do other than what we do (because of the chaotic nature of the brain)
  2. this choice is understandable (intelligible -- it is determined by past events, by immediate adaptation processes in the brain, and partially by our linguistically formed environment)
  3. it is authentic (when through reflection loops with emotional adjustments we can identify with that action)"

The way I think about this is that, in natural autonomy as opposed to free will,

  • Freedom is replaced with: being able to do otherwise in very similar circumstances
  • Agency is replaced with: emotionally identifying one's phenomenal self as closely dynamically coupled with the action

Another way to phrase this is: if an action is something that


  • depends sensitively on our internals, in the sense that slight variations in the environment or our internals could cause us to do something significantly different
  • we can at least roughly model and comprehend in a rational way, as a dynamical unfolding from precursors and environment into action
  • was closely coupled with our holistic structure and dynamics, as modeled by our phenomenal self

then there is a sense in which "we own the action." And this sense of "ownership of an action" or "natural autonomy" is compatible with both classical and quantum physics, and with the known facts of neurobiology.

Perhaps "owning an action" can take the place of "willing an action" in the internal folk psychology of people who are not comfortable with the degree to which the classical notion of free will is illusory.

Another twist that Walter doesn't emphasize is that even actions which we do own, often

  • depend with some statistical predictability upon our internals, in the sense that agents with very similar internals and environments to us have a distinct but not necessarily overwhelming probabilistic bias to take similar actions to us

This is important for reasoning rationally about our own past and future actions -- it means we can predict ourselves statistically even though we are naturally autonomous agents who own our own actions.

Free will is often closely tied with morality, and natural autonomy retains this. People who don't "take responsibility for their actions" in essence aren't accepting a close dynamical coupling between their phenomenal self and their actions. They aren't owning their actions, in the sense of natural autonomy -- they are modeling themselves as NOT being naturally autonomous systems, but rather as systems whose actions are relatively uncoupled with their phenomenal self, and perhaps coupled with other external forces instead.

None of this is terribly shocking or revolutionary-sounding -- but I think it's important nonetheless. What's important is that there are rational, sensible ways of thinking about ourselves and our decisions that don't require the illusion of free will, and also don't necessarily make us feel like meaningless, choiceless deterministic or stochastic automata.