This is another uber-meaty blog post, reporting a train of thought I had while eating dinner with my wife last night -- one that appears to me to provide a new perspective on two of the thorniest issues in the philosophy of mind: consciousness and will.
(No, I wasn't eating any hallucinogenic mushrooms for dinner; just some grilled chicken with garlic and ginger and soy sauce, on garlic naan. Go figure.)
These are of course very old issues, and it may seem that every possible perspective on them has already been put forth, without anything fundamentally being resolved.
However, it seems to me that the perspectives on these topics explored so far constitute only a small percentage of the perspectives that may sensibly be taken.
What I'm going to do here is to outline a new approach to these issues, which is based on hyperset theory -- and which ties in with various things I've written on these topics before, inspired by neuropsychology and systems theory and probabilistic logic and so on and so forth.
(A brief digressive personal comment: I've been sooooo overwhelmingly busy with Novamente-related business stuff lately, it's really been a pleasure to take a couple hours to write down some thoughts on these more abstract topics! Of course, no matter what I'm doing with my time as I go through my days, my unconscious is constantly churning on conceptual issues like the ones I talk about in this blog post -- but time to write down my thoughts on such things is so damn scant lately.... One of the next things to get popped off the stack is the relation of the model of will given here to ethical decision-making, in connection with the iterated prisoner's dilemma, the voting problem, and so forth. Well, maybe next week ... or next month.... I certainly feel that working toward making a thinking machine for real is more critical than exploring concepts in the theory of mind; but on a minute-by-minute basis, I have to admit I find the latter more fun....)
Hypersets
One of the intuitions underlying the explorations presented here is that possibly it's worth considering hypersets as an important part of our mathematical and conceptual model of mind -- and consciousness and will in particular.
A useful analogy might be the way that differential equations are an important part of our mathematical and conceptual model of physical reality. Differential equations aren't in the world; and hypersets aren't in the mind; but these sorts of mathematical abstractions may be extremely useful for modeling and understanding what's going on.
In brief, hypersets are sets that allow circular membership structures, e.g. you can have
A = {A}
A = {B,{A}}
and so forth. It follows that you can have functions that take themselves as arguments, and lots of other stuff that doesn't work according to the standard axioms of set theory.
While exotic, hypersets are well-defined mathematical structures, and in fact simple hypersets have fewer conceptual conundrums associated with them than the real number system (which is assumed in nearly all our physics theories).
The best treatment of hypersets for non-mathematicians that I know of is the book The Liar (by Jon Barwise and John Etchemendy), which I highly recommend.
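(For the programming-minded, here's a minimal toy sketch in Python of how this sort of circular membership can be represented concretely -- essentially the "pointed graph" picture used in anti-foundational set theory, with nodes for sets and edges for membership. The class and variable names are invented purely for illustration.)

class HNode:
    """A toy hyperset: a node whose children are its members."""
    def __init__(self, name):
        self.name = name
        self.members = []
    def __repr__(self):
        return self.name

# A = {A} : a set whose only member is itself
A = HNode("A")
A.members.append(A)

# A2 = {B, {A2}} : circular membership two levels down
A2, B, C = HNode("A2"), HNode("B"), HNode("{A2}")
A2.members = [B, C]
C.members = [A2]

print(A.members)           # [A]   -- A is a member of itself
print(C.members[0] is A2)  # True  -- the circularity is explicit in the graph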
Anyway, getting down to business, let's start with consciousness, and then after that we'll proceed to will.
Disambiguating Consciousness
Of course the natural language term "consciousness" is heavily polysemous, and I'm not going to try to grapple with every one of its meanings. Specifically, I'm going to focus on the meaning that might be specified as "reflective consciousness" -- which is different from the "raw awareness" that, arguably, worms and bugs have, along with us bigger creatures.
Raw awareness is also an interesting topic, though I tend toward a kind of panpsychism, meaning that I tend to believe everything (even a rock or an electron) possesses some level of raw awareness -- which means that raw awareness is then just an aspect of being, rather than a separate quality that some entities possess and not others.
Beyond raw awareness, though, it's clear that different entities in the universe manifest different kinds of awareness. Worms are aware in a different way than rocks; and, I argue, dogs, pigs, pigeons and people are aware in a different way from worms. What I'll (try to) deal with here is the sense in which the latter beasts are conscious whereas worms are not -- i.e. what might be called "reflective consciousness." (Not a great term, but I don't know any great terms in this domain.)
Defining Reflective Consciousness
So, getting down to business.... My starting-point is the old cliché that
Consciousness is consciousness of consciousness
This is very nice, but doesn't really serve as a definition or precise characterization.
In hyperset theory, one can write an equation
f = f(f)
with complete mathematical consistency. You feed f, as input, f; and you receive, as output, f.
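(A trivial concrete instance, just to show such equations aren't gibberish in a computational setting: in Python, the identity function is a fixed point of self-application.)

f = lambda x: x
print(f(f) is f)   # True: feed f to f, get f back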
It seems evident, though, that while this sort of anti-foundational recursion may be closely associated with consciousness, this simple equation itself doesn't tell you much about consciousness. We don't really want to say
Consciousness = Consciousness(Consciousness)
I think it's probably more useful to say:
Consciousness is a hyperset, and consciousness is contained in its membership scope
Here by the "membership scope" of a hyperset S, what I mean is the members of S, plus the members of the members of S, etc.
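(Again for the programmers, here's a rough Python sketch of "membership scope", with hypersets represented as a dict mapping set names to lists of member names, so that circular membership poses no problem. The "Consciousness"/"Experience" entries are purely illustrative placeholders, not a claim about what consciousness actually contains.)

def membership_scope(membership, s):
    """All names reachable from s by repeatedly taking members."""
    scope, frontier = set(), list(membership[s])
    while frontier:
        name = frontier.pop()
        if name not in scope:
            scope.add(name)
            frontier.extend(membership.get(name, []))
    return scope

membership = {"Consciousness": ["Experience"], "Experience": ["Consciousness"]}
print(membership_scope(membership, "Consciousness"))
# {'Experience', 'Consciousness'} -- Consciousness lies within its own membership scope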
This is no longer a definition of consciousness, merely a characterization.
What it says is that consciousness must be defined anti-foundationally as some sort of construct via which consciousness builds consciousness from consciousness -- but it doesn't specify exactly how.
Next, I want to introduce the observation, which I made in The Hidden Pattern (and in an earlier essay), that the subjective experience of being conscious of some entity X is correlated with the presence of a very intense pattern corresponding to X in one's overall mind-state. This idea is also the essence of neuroscientist Susan Greenfield's theory of consciousness (though in her theory, "overall mind-state" is replaced with "brain-state").
Putting these pieces together (hypersets, patterns and correlations), we arrive at the following working definition of consciousness:
"S is conscious of X" is defined as: The declarative content that {"S is conscious of X" correlates with "X is a pattern in S"}
In other words: Being conscious of a pig, means having in one's mind declarative knowledge of the form that one's consciousness of that pig is correlated with that pig being a pattern in one's overall mind-state.
Note that this declarative knowledge must be expressed in some language such as hyperset theory, in which anti-foundational inclusions are permitted. But of course, it doesn't have to be a well-formalized language -- just as pigeons, for instance, can carry out deductive reasoning without having a formalization of the rules of Boolean or probabilistic logic in their brains. All that is required is that the conscious mind has an internal informal language capable of expressing and manipulating simple hypersets.
To make this formal, one requires also a definition of pattern, which I've supplied in The Hidden Pattern.
OK, so much for consciousness. Now, on to our other old friend, will.
Defining Will
The same approach, I suggest, can be used to define the notion of "will," by which I mean the sort of willing process that we carry out in our minds when we subjectively feel like we are deciding to make one choice rather than another.
In brief:
"S wills X" is defined as: The declarative content that {"S wills X" causally implies "S does X"}
To fully explicate this is slightly more complicated than in the case of consciousness, due to the need to unravel what's meant by "causal implication." This is done in my forthcoming book Probabilistic Logic Networks in some detail, but I'll give the basic outline here.
Causal implication may be defined as: Predictive implication combined with the existence of a plausible causal mechanism.
More precisely, if A and B are two classes of events, then "A predictively implies B" if it's probabilistically true that in a situation where A occurs, B often occurs afterwards. (Yes, this is dependent on a model of what constitutes a "situation", which is assumed to be part of the mind assessing the predictive implication.)
And, a "plausible causal mechanism" associated with the assertion "A predictively implies B" means that, if one removed from one's knowledge base all specific instances of situations providing direct evidence for "A predictively implies B", then the inferred evidence for "A predictively implies B" would still be reasonably strong. (In a certain logical lingo, this means there is strong intensional evidence for the predictive implication, along with extensional evidence.)
If X and Y are particular events, then the probability of "X causally implies Y" may be assessed by probabilistic inference based on the classes (A, B, etc.) of events that X and Y belong to.
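(To make the predictive-implication part concrete, here's a deliberately crude Python sketch: the strength of "A predictively implies B" is estimated as the fraction of A-events followed by a B-event within some time window, over a toy event log. This is only an illustration of the idea, not the PLN formalism; the event labels and the window are made up.)

def predictive_implication(events, a, b, window):
    """events: list of (time, label). Rough P(B occurs soon after A)."""
    a_times = [t for t, label in events if label == a]
    b_times = [t for t, label in events if label == b]
    if not a_times:
        return 0.0
    hits = sum(any(t < tb <= t + window for tb in b_times) for t in a_times)
    return hits / len(a_times)

log = [(0, "wills_kiss"), (1, "kisses"), (5, "wills_kiss"), (9, "kisses")]
print(predictive_implication(log, "wills_kiss", "kisses", window=2))  # 0.5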
In What Sense Is Will Free?
But what does this say about the philosophical issues traditionally associated with the notion of "free will"?
Well, it doesn't suggest any validity for the idea that will somehow adds a magical ingredient beyond the familiar ingredients of "rules" plus "randomness." In that sense, it's not a very radical approach. It fits in with the modern understanding that free will is to a certain extent an "illusion."
However, it also suggests that "illusion" is not quite the right word.
The notion that willed actions somehow avoid the apparently-deterministic/stochastic nature of the universe is not really part of the subjective experience of free will ... it's a conceptual add-on that comes from trying to integrate our subjective experience with the modern scientific understanding of the world, in an overly simplistic and partially erroneous way.
An act of will may have causal implication, according to the psychological definition of the latter, without this action of will violating the basic deterministic/stochastic equations of the universe. The key point is that causality is itself a psychological notion (where within "psychological" I include cultural as well as individual psychology). Causality is not a physical notion; there is no branch of science that contains the notion of causation within its formal language.
In the internal language of mind, acts of will have causal impacts -- and this is consistent with the hypothesis that mental actions may potentially be ultimately determined via deterministic/stochastic lower-level dynamics. Acts of will exist on a different level of description than these lower-level dynamics.
The lower-level dynamics are part of a theory that compactly explains the behavior of cells, molecules and particles; and some aspects of complex higher-level systems like brains, bodies and societies. Will is part of a theory that compactly explains the decisions of a mind to itself.
My own perspective is that neither the lower-level dynamics (e.g. physics equations) nor will should be considered as "absolutely real" -- there is no such thing as absolute reality. The equations of physics, glorious as they are, are abstractions we've created, and that we accept due to their utility for helping us carry out various goals and recognize various regularities in our own subjective experience.
Connecting Will and Consciousness
Connecting back to our first topic, consciousness, we may say that:
In the domain of reflective conscious experiences, acts of will are experienced as causal.
This of course looks like a perfectly obvious assertion. What's nice is that it seems to fall out of a precise, abstract characterization of consciousness and will.
Free Will and Virtual Multiverse Modeling
In a previous essay, written a few years back and ultimately incorporated into The Hidden Pattern, I gave an analysis of the psychological dynamics underlying free will, the essence of which may be grokked from the following excerpt:
For example, suppose I am trying to decide whether to kiss my beautiful neighbor. One part of my brain is involved in a dynamic which will actually determine whether I kiss her or not. Another part of my brain is modeling that first part, and doesn’t know what’s going to happen. A virtual multiverse occurs in this second part of the brain, one branch in which I kiss her, the other in which I don’t. Finally, the first part comes to a conclusion; and the second part collapses its virtual multiverse model almost instantly thereafter.
The brain uses these virtual multiverse models to plan for multiple contingencies, so that it is prepared in advance, no matter what may happen. In the case that one part of the brain is modeling another part of the brain, sometimes the model produced by the second part may affect the actions taken by the first part. For instance, the part (call it B) modeling the action of kissing my neighbor may come to the conclusion that the branch in which I carry out the action is a bad one. This may affect the part (call it A) actually determining whether to carry out the kiss, causing the kiss not to occur. The dynamic in A which causes the kiss not to occur, is then reflected in B as a collapse in its virtual multiverse model of A.
Now, suppose that the timing of these two causal effects (from B to A and from A to B) is different. Suppose that the effect of B on A (of the model on the action) takes a while to happen (spanning several subjective moments), whereas the effect of A on B (of the action on the model) is nearly instantaneous (occurring within a single subjective moment). Then, another part of the brain, C, may record the fact that a collapse to definiteness in B’s virtual multiverse model of A preceded an action in A. On the other hand, the other direction of causality, in which the action in A caused a collapse in B’s model of A, may be so fast that no other part of the brain notices that this was anything but simultaneous. In this case, various parts of the brain may gather the mistaken impression that virtual multiverse collapse causes actions, when in fact it’s the other way around. This, I conjecture, is the origin of our mistaken impression that we make “decisions” that cause our actions.
How does this relate to the current analysis in terms of hypersets?
The current analysis adds an extra dimension to the prior one, which has to do with what in the above quote is called the "second part" of the brain involved with the experience of will -- the "virtual multiverse modeler" component.
The extra dimension has to do with the ability of the virtual multiverse modeler to model itself and its own activity.
My previous theory discusses perceived causal implications between actions taken by one part of the brain, and models of the consequences of these actions occurring in another part (the virtual multiverse modeler). It notes that sometimes the mind makes mistakes in perceiving a causal implication between a collapse in the virtual multiverse model and an action, when a more careful understanding of the mental dynamics would reveal a more powerful causal implication in the other direction. There is much evidence for this in the neuropsychology literature, some of which is reviewed in my previous article.
The new ingredient added by the present discussion is an understanding that the virtual multiverse modeler can model its own activity and its relationship with the execution of actions. Specifically, the virtual multiverse modeler can carry out modeling in terms of an intuitive notion of "will" that may be formalized as I described above:
"S wills X" is defined as: The declarative content that {"S wills X" causally implies "S does X"}
where "S" refers specifically to the virtual multiverse modeler component, the nexus of the feeling of will.
And, as noted in my prior essay, it may do so whether or not this causal implication would hold up when the dynamics involved were examined at a finer level of granularity.
Who Cares?
Well, now, that's a whole other question, isn't it....
Personally, I find it interesting to progressively move toward a greater and greater understanding of the processes that occur in my own mind every day. Since understanding (long ago) that the classical notion of "free will" is full of confusions, I've struggled to figure out the right attitude to take in my own mind, toward decisions that come up in my own life.
Day after day, hour after hour, minute after minute, I'm faced with deciding between option A and option B -- yet how seriously can I take this decision process if I know I have no real will anyway?
But the way I try to think about it is as follows: Within the descriptive language in which my reflective consciousness exists, my will does exist. It may not exist within the descriptive language of physics, but that's OK. None of these descriptive languages has an absolute reality. But, each of these descriptive languages can usefully help us understand the others (as well as helping us to understand the world directly); and having an understanding of the systematic biases made by the virtual multiverse modeler in my brain has certainly been useful to me. It has given me a lot more respect for the underlying unconscious dynamics governing my decisions, and this I believe has helped me to learn to make better decisions.
In terms of my AI work, the main implication of the train of thought reported here is that in order to experience reflective consciousness and will, an AI system needs to possess an informal internal language allowing the expression of basic hyperset constructs. Of course, in different AI designs this could be achieved in different ways, for instance it could be explicitly wired into the formalism of a logic-based AI system, or it could emerge spontaneously from the dynamics of a neural net based AI system. In a recent paper I explored some hypothetical means via which a neural system could give rise to a neural module F that acts as a function taking F as an input; this sort of phenomenon could potentially serve as a substrate for an internal hyperset language in the brain.
There is lots left to explore and understand, of course. But my feeling is that reflective consciousness and will, as described here, are not really so much trickier than other mental phenomena like logical reasoning, language understanding and long-term memory organization. Hypersets are a different formalism than the ones typically used to model these other aspects of cognition, but ultimately they're not so complex or problematic.
Onward!
Thursday, February 14, 2008
Psi, Closed-Mindedness and Fear
Some of the followup (private) emails I've gotten in regard to my just-prior blog post on Damien Broderick's book on psi, have really boggled my mind.
These emails basically present arguments of two forms:
- You're nuts, don't you know all the psi experiments are fraud and experimental error, everyone knows that...
- Look, even if there's a tiny chance that some psi phenomena are real, you're a fool to damage your reputation by aligning yourself with the kooks who believe in it
What shocks me (though it shouldn't, as I've been around 41 years and seen a lot of human nature already) about arguments of the first form is the irrational degree of skepticism toward this subject, displayed by otherwise highly rational and reflective individuals.
It's not as though these people have read Damien's book or carefully studied the relevant literature. I would welcome debate with suitably informed skeptics. Rather, these people dismiss the experimental literature on psi based on hearsay, and don't consider it worth their while to spend the 3-10 hours (depending on individual reading speed) required to absorb a fairly straightforward nontechnical book on the subject, like Damien's.
What shocks me about arguments of the second form is how often they come from individuals who are publicly aligned with other extremely radical ideas. For instance a few Singularitarians have emailed me and warned me that me talking about psi is bad, because then people will think Singularitarians are kooks.
(Amusingly, one Singularitarian pointed out in their conversation with me that, to them, the best argument for the possibility of psi that they know of is the Simulation Argument, which contends that we probably live in a computer simulation. This is I suppose based on the idea that the laws of physics somehow rule out psi, which they don't; but anyway it's an odd argument because whether we live in a simulation or not, the laws of physics are merely a compact summary of our empirical observations of the world we see, and so if psi data are real, they need to be incorporated into our observation-set and accounted for in our theories, regardless of whether we interpret these theories as being about a "real" world or a "simulated" one.)
Whoa!! So psi is so far out there that people who believe the universe is a simulation and the Singularity is near don't want their reputations poisoned by association with it?
This really baffles me.
I have no personal axe to grind regarding psi.
I have never had any unambiguous, personally convincing psi experiences (except when under the influence of various psychotropic compounds, but that's a whole other story ;-)....
I don't actually care much whether psi is real or not.
About psi and physics ... I am skeptical of attempts to explain psi based on quantum theory, due to not understanding how decoherence would be avoided in the hypothesized long-range quantum nonlocal binding between brains and other systems; but I recognize that quantum theory as such does not actually rule out psi. And, I am acutely aware that modern physics theories are incomplete, even leaving out psi data -- just taking into account well-accepted physics data. Modern physics does not provide a complete, conceptually consistent accounting of all well-accepted physics data. So all in all, our incomplete physics model doesn't rule out psi but makes it hard to explain. This does not seem a strong enough reason to ignore the available psi data on theoretical-physics grounds.
My observation is merely that, after spending a few dozen hours perusing the available data, it seems fascinating and compelling. Ed May's data is not the only good data out there by any means, but it's a great place to start if you want to dig into it.
I do not think we, as a community of thinking and understanding minds, should be ignoring all this high-quality data collected by serious, intelligent, careful scientists.
What is the reason for ignoring it? Presumably the reason is that a bunch of bullshit about psi has been promoted by a bunch of flakes and kooks. It's true. I admit it, Damien admits it, it's obvious. Let's get over that historical and cultural reality and look at the actual data -- quite possibly there's something to be learned from it. I don't know exactly what, but that's how science works -- you investigate and then you find out. What's frustrating is that in this extremely fascinating, important, potentially highly impactful area, research is proceeding so slowly because of excesses of skepticism and fear in the scientific community.
Scientists want to preserve their careers and reputations, so going out on a limb for something perceived as wacky is something very few of them are willing to do. As a consequence our understanding of the universe advances much more slowly than it otherwise could.
Finally, a brief aside.... For those who believe a Singularity is likely but who are highly skeptical of psi (a small percentage of the world, but disproportionately represented in the readership of this blog, I would imagine), I ask you this: Wouldn't it be nice to understand the universe a little better before launching a Singularity? If psi is real that would seem to have various serious implications for what superhuman AI's may be like post-Singularity, for example.
Well, anyway. I'm going to drop this topic for now as I have other stuff to focus on, like building AGI.... And I've been (finally) mixing down some of my music from MIDI to MP3; I'll post some on my website within the next month or so.... I don't have time to push ahead psi research myself nor to actively advocate for funding for those doing the research; but by writing these blog posts and reviewing Damien's book on Amazon.com, I've tried to do what I can (within my limited available time) to nudge the world toward being less closed-minded and less fearful in this regard.
Come on, people! Really! Have some guts and some mental-openness -- it's a big, weird, mysterious world out there, and I'm damn sure we understand only a teensy weensy bit of it. Experience gives us clues, empirical science gives us clues -- and the extent to which we manage to ignore some of the most interesting clues the world provides us, is pretty disappointing...
Saturday, February 02, 2008
The Scientific Evidence for Psi (is most likely stronger than you think)
My goal in this blog is to convince you to read Damien Broderick's book Outside the Gates of Science: Why It's Time for the Paranormal to Come in From the Cold.
Reviewing a host of research done by others over many decades, the book makes a remarkably and excitingly strong case that psi phenomena are worthy of intensive further investigation....
Let me explain why I'm so excited by Broderick's work.
Having grown up on SF, and being a generally open-minded person but also a mathematician/scientist with a strong rationalist and empiricist bent, I've never quite known what to make of psi. (Following Broderick, I'm using "psi" as an umbrella term for ESP, precognition, psychokinesis, and the familiar array of suspects...).
Broderick's book is the first I've read that rationally, scientifically, even-handedly and maturely, reviews what it makes sense to think about psi given the available evidence.
(A quick word on my science background, for those who don't know me and may be new to this blog: I have a math PhD and although my main research areas are AI and cognitive science, I've also spent a lot of time working on empirical biological science as a data analyst. I was a professor for 8 years but have been doing research in the software industry for the last decade.)
My basic attitude on psi has always been curious but ambivalent. One way to summarize it would be via the following three points....
First: Psi does not seem, on the face of it, wildly scientifically implausible after the fashion of, say, perpetual motion machines built out of wheels and pulleys and spinning chambers filled with ball bearings. Science, at this point, understands the world only very approximately, and there is plenty of room in our current understanding of the physical universe for psi. Quantum theory's notions of nonlocality and resonance are conceptually somewhat harmonious with some aspects of psi, but that's not the main point. The main point is that science does not rule out psi, in the sense that it rules out various sorts of crackpottery.
Second: Anecdotal evidence for psi is so strong and so prevalent that it's hard to ignore. Yes, people can lie, and they can also be very good at fooling themselves. But the number of serious, self-reflective intelligent people to report various sorts of psi experiences is not something that should be glibly ignored.
Third: There is by now a long history of empirical laboratory work on psi, with results that are complex, perplexing, but in many ways so apparently statistically significant as to indicate that SOMETHING important is almost surely going on in these psi experiments...
Broderick, also being an open-minded rationalist/empiricist, seems to have started out his investigation of psi, as reported in his book, with the same basic intuition as I've described in the above three points. And he covers all three of these points in the book, but the main service he provides is to very carefully address my third point above: the scientific evidence.
His discussion of possible physical mechanisms of psi is competent but not all that complete or imaginative; and he wisely shies away from an extensive treatment of anecdotal evidence (this stuff has been discussed ad nauseam elsewhere). But his treatment of the scientific literature regarding psi is careful, masterful and compellingly presented. And this is no small achievement.
The scientific psi literature is large, complex, multifaceted and subtle -- and in spite of a lifelong peripheral fascination with psi, I have never taken the time to go through all that much of it myself. I'm too busy doing other sorts of scientific, mathematical and engineering work. Broderick has read the literature, sifted out the good from the bad, summarized the most important statistical and conceptual results, and presented his conclusions in ordinary English that anyone with a strong high school education should be able to understand.
His reviews of the work on remote viewing and precognition I found particularly fascinating, and convincing. It is hard to see how any fair-minded reader could come away from his treatments of these topics without at least a sharp pang of curiosity regarding what might actually be going on.
Perhaps my most valued discovery, based on Broderick's book, was Edwin May's work on precognition and related phenomena. Anyone with a science background is strongly encouraged to inspect the website of May's Cognitive Sciences Laboratory, which hosts an impressive collection of papers on his team's government-funded psi research.
What is my conclusion about psi after reading Damien's book, and exploring in more depth the work of May's team and others?
Still not definitive -- and indeed, Broderick's own attitude as expressed in the book is not definitive.
I still can't feel absolutely certain whether psi is a real phenomenon; or whether the clearly statistically significant patterns observed across the body of psi experiments bespeak some deep oddities in the scientific method and the statistical paradigm that we don't currently understand.
But after reading Broderick's book, I am much more firmly convinced than before that psi phenomena are worthy of intensive, amply-funded scientific exploration. Psi should not be a fringe topic, it should be a core area of scientific investigation, up there with, say, unified physics, molecular biology, AI and so on and so forth.
Read the book for yourself, and if you're not hopelessly biased in your thinking, I suspect you'll come to a conclusion somewhat similar to mine.
As a bonus, as well as providing a profound intellectual and cultural service, the book is a lot of fun to read, due to Broderick's erudite literary writing style and ironic sense of humor.
My worry -- and I hope it doesn't eventuate -- is that the book is just too far ahead of its time. I wonder if the world is ready for a rational, scientific, even-handed treatment of psi phenomena.
Clearly, Broderick's book is too scientific and even-handed for die-hard psi believers; and too psi-friendly (though in a level-headed, evidence-based way) for the skeptical crowd. My hope is that it will find a market among those who are committed to really understanding the world, apart from the psychological pathologies of dogmatism or excessive skepticism.
I note that Broderick has a history of being ahead of his time as a nonfiction writer. His 1997 book "The Spike" put forth basically the same ideas that Ray Kurzweil later promulgated in his 2005 book "The Singularity Is Near." Kurzweil's book is a very good one, but so was Broderick's; yet Kurzweil's got copious media attention whereas Broderick's did not ... for multiple reasons, one of which, however, was simply timing. The world in 1997 wasn't ready to hear about the Singularity. The world in 2006 is.
The question is: is the world in 2008 ready to absorb the complex, fascinating reality of psi research? If so, Broderick's book should strike a powerful chord. It certainly did for me.
Friday, January 25, 2008
Yverse: A New Model of the Universe
A new model of the universe?
Actually, yeah.
It starts out with the familiar concept of the "multiverse," which is mainly associated with the many-universes interpretation of quantum theory.
According to one verbalization of the multiversal interpretation of quantum theory, every time a quantum-random "choice" is made (say, an electron spins up instead of down), there is a "branching" into two possible universes: one where the electron spins up, another where it spins down.
Similarly, if a bus drives at you while you're walking across the street, there may be two possible universes ahead of you: one where you get flattened, and another where you don't. (Actually, there are a lot of other choices going on in your life too, so it's more accurate to say there is one set of universes where you get flattened and another where you don't).
The collection of all these possible universes is known as the "multiverse."
In fact the language of "choice" used in the above description of the multiverse is a bit suspect. It's more accurate to say that corresponding to each possible state of the electron (up/down) once it is coupled with the external environment (so that it decoheres), there is a set of branches of the multiverse, and leave the ambiguous and misleading language of "choice" out of it.
Anyway, the multiverse is fascinating enough, but it's just the beginning.
It's easy enough to think of multiple possible multiverses. After all, there could be a multiverse in which Ben Goertzel never existed at all, in any of its branches.
One way to think about backwards time travel, for instance, is as a mechanism for selecting between multiverses. If you go back in time and change something, then you're effectively departing your original multiverse and entering a new one.
So, we can think about a multi-multiverse, i.e. a collection of multiverses, with a certain probability distribution over them.
I don't posit this hypothesis all that seriously, but I'm going to throw it out there anyway: It seems possible to conceive of consciousness as a faculty that facilitates movement between multiverses!
Well, I guess you can see where all this is going.
If there's a multi-multiverse, there can also be a multi-multi-multiverse. And so on.
But that is not all -- oh no, that is not all ;-)
What about the multi-multi-...-multi-multiverse?
I.e. the entity Yverse so that
Yverse = multi-Yverse
??
Math wonks will have already inferred that I chose the name Yverse because of the Y-combinator in combinatory logic, which is defined via
Yf = f(Yf)
In other words
Yf = f(f(f(f(...))))
(where the nesting of f's goes on infinitely)
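(For concreteness, here's the Y combinator in Python -- in its applicative-order "Z" form, since Python evaluates arguments eagerly -- used to build recursion without any explicit self-reference.)

Y = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

fact = Y(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120 -- recursion obtained purely from the fixed-point construction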
So the Yverse is the (Y multi-) universe ...
In the Yverse, there are multiple branches, each one of which is itself a Yverse....
Two Yverses may have two kinds of relationship: sibling (two branches of the same parent Yverse) or parent-child.
Backwards time travel may jolt you from one Yverse to a parent Yverse. Ordinary quantum decoherence events merely correspond to differences between sibling Yverses.
If there is a probability distribution across a set of sibling Yverses, it may be conceived as an infinite-order probability distribution. (A first-order probability distribution is a distribution across some ordinary things like numbers or particles, or universes. A second-order probability distribution is a distribution across a set of first-order probability distributions. Well, you get the picture.... An infinite-order probability distribution is a probability distribution over a set of infinite-order probability distributions. I've worked out some of the math of this kind of probability distribution, and it seems to make sense.)
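(A toy numpy sketch of the first couple of rungs of that hierarchy: a first-order distribution over outcomes, and a second-order distribution -- here a Dirichlet -- over such first-order distributions. The infinite-order case is the conceptual limit of iterating this construction; the numbers below are arbitrary.)

import numpy as np

rng = np.random.default_rng(0)

# First-order: a distribution over, say, three sibling universes
first_order = np.array([0.2, 0.5, 0.3])

# Second-order: a distribution over first-order distributions
sampled_first_order = rng.dirichlet([2.0, 5.0, 3.0])

# Sample an outcome by first sampling a distribution, then sampling from it
outcome = rng.choice(3, p=sampled_first_order)
print(first_order, sampled_first_order, outcome)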
What use is the Yverse model? I'm not really sure.
It seems to be an interesting way to think about things, though.
If I had more time for pure intellectual entertainment, I'd put some effort into developing a variant of quantum theory based on Yverses and infinite-order probabilities. It seems a notion worth exploring, especially given work by Saul Youssef and others showing that the laws of quantum theory emerge fairly naturally from the laws of probability theory, with a few extra assumptions (for instance, in Youssef's work, the assumption that probabilities are complex rather than real numbers).
And reading Damien Broderick's excellent book on psi, "Outside the Gates of Science," got me thinking a bit about what kinds of models of the universe might be useful for explaining psi phenomena.
Yes, quantum theory is in principle generally compatible with psi, so one doesn't need wacky ideas like Yverses to cope with psi, but it's fun to speculate. It seems to me that for quantum theory to account for psi phenomena would require some really far-out long-range quantum-coherence to exist in the universe, which doesn't seem to be there. So in my view it's at least sensible to speculate about how post-quantum physics might account for psi more sensibly.
This babbling about psi leads back to my wacko speculation above that consciousness could be associated with action in the multi-multiverse. In the Yverse model, the idea becomes that consciousness could be associated with action in the parent Yverse.
Could the difference between physical action and mental action be that the former has to do with movement between sibling Yverses, whereas the latter has to do with movement between parent and child Yverses?
Well I'll leave you on that note --
I've gone pretty far "out there", I guess about as far as it's possible to go ;-> ....
(Unless I could work Elvis into the picture somehow. I thought about it, but didn't come up with anything....)
-- (semi-relevant, rambling) P.S. Those who are interested in my AI work may be interested to know that I don't consider any of these funky speculations contradictory to the idea of creating AI on digital computers. The whole connection between probability, complex probability, quantum theory, determinism and complexity fascinates me -- and I consider it extremely poorly understood. For example, I find the whole notion of "determinism" in very complex systems suspect ... in what sense is a digital computer program determinate relative to me, if I lack the computational capability to understand its state or predict what it will do? If I lack the computational capability to understand some thing X, then relative to my own world-view, should X be modeled according to complex rather than real probabilities, in the vein of Yousseffian quantum probability theory? I suspect so. But I won't pursue this any more here -- I'll leave it for a later blog post. Suffice to say, for now, that I have a feeling that our vocabulary for describing complex systems, with words like "determinate" and "random", is woefully inaccurate and doesn't express the really relevant distinctions.
Saturday, January 19, 2008
Japanese Gods Pray for a Positive Singularity
In September 2007 I went on a two week business/science trip to China (Wuhan and Beijing) and Japan (Tokyo). In between some very interesting and productive meetings, I had a bit of free time, and so among other things I wound up formally submitting a prayer to the Japanese gods for a rapid, beneficial technological Singularity. Let's hope they were listening!
I wrote this blog post on the flight home but wasn't in a silly enough mood to post it till now.
(Scroll to the bottom if you're in a hurry; after all the irrelevant rambling beforehand, there's a sort of punchline there, involving the mysterious inscription in the above picture.)
My trip started in Wuhan, where I gave two talks at an AI conference and visited with Hugo de Garis and his students (his apprentice "brain builders"). Their near-term goal is to use genetic algorithms running on field-programmable gate arrays to control a funky little robot.
China was probably the most fascinating place I've ever visited (and I've visited and lived a lot of places), though in this brief trip I hardly got to know it at all. Society there is Westernizing fast (I've never seen anywhere more capitalist than modern China), but, there are still incredibly deep and dramatic differences between the Chinese and Western ways of thinking and living. As soon as I stepped into the airport, I was struck by the collectivist nature of their culture ...
... so very different from my own upbringing in which individuality was always held out as one of the highest values (I remember a book my mother got me as a young child, entitled Dare to Be Different -- a sort of history of famous nonconformists). There are of course many Chinese nonconformists (there are so many Chinese, there are many Chinese everything!), but in so many ways their whole society and culture is based on placing the group above the individual. (Which leads, among other things, to their enthusiasm for importing individualist Western scientists like Hugo de Garis.... But this is a topic for another blog post, some other day ... let me get on with my little story....)
Wuhan was a fascinating slice of "old China", with folks sitting out on the streets cooking weird food in woks, strange old men looking like they lived in 500 BC, and everywhere people, people, people. Alas I forgot to take pictures during my walks through the streets there.
Beijing by comparison was not too interesting -- too much like a modern Western city, but with terrible, yellow, reeking air. But the Great Wall, a bit north of Beijing, was really an amazing place. Too bad you aren't allowed to hike its full distance.
While hiking along the Great Wall, I asked for a sign from the Chinese gods that a positive Singularity was truly near. As if in some kind of response, a sudden gust of wind came up at that point...
I thought maybe the local gods would look more favorably on me if I ate some of the local cuisine, so I filled up on donkey, whole bullfrog, sea cucumber, duck's blood and pig foot fur and so forth. Not so bad as it sounds, but I still preferred the kung pao chicken.
(As well as consuming various recondite foodstuff items, in Beijing I visited the offices of HiPiHi.com, a very exciting Chinese virtual-worlds company ... but that's another story for another time....)
Next, I moved on to Tokyo (after some inordinately unpleasant logistical experiences in Beijing Capital airport, which I'd rather not revisit even in memory). The company I was visiting there was based in Shibuya, a suitably colorful and hypermodern Tokyo neighborhood:
Based on years of looking over my sons' shoulders as they watch anime, I expected all the Japanese people to look like these statues near Shibuya station:
In fact, some of the people I saw weren't so far off:
But more of them looked like this:
The Japanese love robots and cyborgs, and many of them seem to exhibit this love via making their own human selves as robotic as possible -- which is fascinating but odd, from my aging-American-hippy perspective. (I badly want to go beyond the human forms of body and mind, but I suppose that once this becomes possible, the result won't be much like contemporary machines -- rather it'll be something more fluid and flexible and creative than rigid old humanity.)
Toward the end of my stay, I got fed up with the hypermodernity, and I visited an old-time shrine in a beautiful park...
where I happened upon an intriguing site where Japanese go to submit prayers to the gods.
Each prayer is written down on a little piece of wood (which you buy for five dollars), then placed on a special prayer rack with all the others. The gods then presumably sort through them all (maybe with secretarial help from demigods or some such -- I didn't ask for the details), and decide which ones are worth granting, based on whatever godly criteria they utilize.
At first, the very concept caused the sea cucumber, duck's blood and twice-cooked donkey I'd eaten a few days before, much of which was still lingering in my stomach enjoying itself, to surge up through my gastrointestinal tract in a kind of disturbingly pleasing psychedelic can-can dance....
My next reaction was curiosity regarding what everyone else had prayed for. Sure, I could sorta guess, but it would have been nice to know in detail. But as the prayers were nearly all in Japanese, I couldn't really tell what they were all about, though a few gave small clues:
In the end, not wanting to be left out, I plunked down some yen to buy a little piece of wood and submitted my own prayer to the Japanese gods, to be considered along with the multitude of other human wants and needs. Hopefully the Japanese gods were in a generous mood that day -- for all our sakes!
Sunday, January 06, 2008
Nincompoopic Neurons, Global Brains and the Potential Sociological Applications of Adaptive Stochastic Resonance
My immediately previous blog post, on the apparently in-large-part nincompoopic nature of the emerging global brain
http://www.goertzel.org/blog/2007/12/global-moron-awakens.html
attracted so many comments (largely on various mailing lists I posted the blog URL to), that I figured I'd post a brief response here, expanding on some of the ideas in the responses and connecting them with some ideas from dynamical systems theory.
Most of the feedback I got was in the general vein of a blog post I wrote a couple months earlier, entitled "On Becoming a Neuron":
http://www.goertzel.org/blog/2007/10/on-becoming-neuron.html
The theme of "Becoming a Neuron" was how dependent we are, these days, on the global communication network and the emerging human group mind.
The theme of "The Global Nincompoop Awakens" was how many of the communications between the "human neurons" comprising the global brain seem completely idiotic in nature.
Reading through the comments on the Global Nincompoop post, I was struck by the theme of Bart Kosko's book Noise
http://www.amazon.com/Noise-Bart-Kosko/dp/0670034959
(a somewhat erratic book, but containing some very interesting ideas). Among other topics he reviews the way the brain's self-organizing cognitive dynamics depend on the high level of noise present in the brain, introducing the general notion of "adaptive stochastic resonance", according to which
Noise can amplify a faint signal in some feedback nonlinear systems even though too much noise can swamp the signal. This implies that a system’s optimal noise level need not be zero
(Google or Wikipedia "adaptive stochastic resonance" for a load of technical papers on the topic, by Kosko and others).
An interesting illustration of this phenomenon is the following figure from Kosko's paper:
[Figure not reproduced: Kosko's noisy-threshold image demonstration, described below.]
This picture shows nicely how, in the context of the human perceptual system, adding noise can help make patterns more perceptible.
(What's happening in the picture is that he's adding noise to the pixels in the picture, then applying a threshold rule to decide which pixels are black enough to display. Without enough noise, not enough pixels meet the threshold; with too much noise, too many pixels randomly meet the threshold. But it's worth letting a bunch of pixels randomly meet the threshold, in order to cause ENOUGH pixels to meet the threshold. So to optimize perception by a threshold-based system, you want to have an amount of noise lying in a certain interval -- not too little nor too much.)
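Here's a minimal numerical sketch of the same effect in one dimension (my own toy example, not Kosko's image experiment; the signal shape, threshold and noise levels are arbitrary choices): a faint sub-threshold sine wave is invisible to a hard-threshold detector with no noise, becomes detectable at a moderate noise level, and gets drowned out again when the noise is too large.

import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0.0, 1.0, 1000)
signal = 0.4 * np.sin(2 * np.pi * 5 * t)   # faint signal: peaks at 0.4
threshold = 0.5                            # detector only fires above 0.5

def detection_quality(noise_std, trials=200):
    # Mean correlation between the true signal and the thresholded (0/1) output.
    corrs = []
    for _ in range(trials):
        noisy = signal + rng.normal(0.0, noise_std, size=signal.shape)
        fired = (noisy > threshold).astype(float)   # 1 wherever the threshold is crossed
        corrs.append(np.corrcoef(signal, fired)[0, 1] if fired.std() > 0 else 0.0)
    return float(np.mean(corrs))

for sigma in (0.0, 0.1, 0.3, 0.6, 1.0, 2.0):
    print(f"noise std {sigma:.1f}: detection quality {detection_quality(sigma):.3f}")

# Typical output: quality is 0 with no noise (the signal never crosses the threshold),
# peaks at an intermediate noise level, and falls off again as the noise swamps the
# signal -- i.e. the optimal noise level is not zero.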
Now, Kosko verges on insinuating that this kind of exploitation of noise is somehow a NECESSARY property of intelligent systems, which I doubt. However, it seems plausible that he's right about its role in the human brain and human perception/cognition.
Semi-relatedly, I recall reading somewhere that motion-sensing neurons in the brain are, on average, off by around 80 degrees in their assessment of the direction of motion of a percept at a certain point in the visual field. But we can still assess the direction of motion of an object fairly accurately, because our brains perform averaging, and the noisy data gets washed out in the average.
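To see how simple averaging can rescue such noisy estimates, here's a toy sketch (my own illustration, with made-up numbers, not a model of actual motion-sensing neurons): a thousand "neurons" each report the direction of motion with enormous error, yet the population average lands very close to the true direction.

import numpy as np

rng = np.random.default_rng(42)

true_direction = np.deg2rad(30.0)                        # actual direction of motion
noise = rng.normal(0.0, np.deg2rad(80.0), size=1000)     # huge per-"neuron" error
estimates = true_direction + noise

# Circular mean: average the unit vectors and take the resulting angle, so that
# angles near +180 and -180 degrees don't cancel incorrectly.
mean_dir = np.arctan2(np.sin(estimates).mean(), np.cos(estimates).mean())
print(f"population estimate: {np.rad2deg(mean_dir):.1f} degrees (true direction: 30.0)")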
In other words, brains contain a lot of noise, and they contain mechanisms for working around this fact (e.g. averaging) and creatively exploiting it (e.g. adaptive stochastic resonance).
Now, it's not too surprising if the emerging Global Brain of humanity is more like a brain than like a well-engineered computer program. In other words: most of what goes on in the global brain, like most of what goes on in the human brain, is likely to be noise ... and there are likely to be mechanisms for both working around the noise, and exploiting it.
This brings up the interesting question of what techniques may exist in sociological dynamics for exploiting noise.
How might adaptive stochastic resonance, for example, play a role in sociodynamics? Could it be that the random noise of nincompoopic social interactions serves to make significant sociodynamic patterns stand out more clearly to our minds, thus actually enhancing the ability of the Global Brain to recognize patterns in itself?
I wonder how one would design an experiment to demonstrate or refute this. It would of course be difficult, due to the massive number of confounding factors in any social system, and the difficulty of defining things like pattern and noise in the social domain as precisely as is possible in a domain like image processing (where of course these terms are still susceptible to a variety of interpretations).
And surely this simple idea -- obtained by extrapolating Kosko's image-processing example to the sociological domain -- is not the only possible way that social systems could profitably exploit their intrinsic noisiness.
But still, it's an intriguing train of thought....
(P.S. The question of whether this kind of chaotic, noisy, self-organizing system is remotely the best way to carry out creative computation is a whole other question, of course. My own strong suspicion is that human brains are incredibly inefficient at using their computational power, compared to other sorts of intelligent systems that will exist in the future; and the Global Brain likely shares this inefficiency, for similar reasons. However, this inefficiency is partially compensated for in both cases by biological systems' (neurons' and humans') prodigious capability for replication....)
Saturday, December 08, 2007
The Global Nincompoop Awakens
On a recent business trip to New York, I found myself sitting for a couple hours in a Starbucks in the midst of the campus of New York University (which is not a walled campus, but rather a collection of buildings strewn semi-haphazardly across a few blocks of Greenwich Village).
While sitting there typing into my laptop, I couldn't help being distracted by the conversations of the students around me. I attended NYU in the mid-80's (doing a bit of graduate study there on the way to my PhD), and I was curious to see how the zeitgeist of the student body had changed.
Admittedly, this was a highly nonrepresentative sample, as I was observing only students who chose to hang out in Starbucks. (Most likely all the math and CS grad students were doing as I'd done during my time at NYU, and hanging out in the Courant Institute building, which was a lot quieter than any cafe' ...). And, the population of Starbucks seemed about 65% female, for whatever reason.
The first thing that struck me was the everpresence of technology. The students around me were constantly texting each other -- there was a lot of texting going on between people sitting in different parts of the Starbucks, or people waiting in line and other people sitting down, etc.
And, there was a lot of talk about Facebook. Pretty much anytime someone unfamiliar (to any of the conversation participants) was mentioned in conversation, the question was asked "Are they on Facebook?" Of course, plenty of the students had laptops there and could write on each other's Facebook walls while texting each other and slipping in the occasional voice phone call or email as well.
All in all I found the density and rapidity of information interchange extremely impressive. The whole social community of the Starbucks started to look like a multi-bodied meta-mind, with information zipping back and forth everywhere by various media. All the individuals comprising parts of the mind were obviously extremely well-attuned to the various component media and able to multiprocess very effectively, e.g. writing on someone's Facebook wall and then texting someone else while carrying on an F2F conversation, all while holding a book in their lap and allegedly sort-of studying.
Exciting! The only problem was: the content of what was being communicated was so amazingly trivial and petty that it started to make me feel physically ill.
Pretty much all the electronic back-and-forth was about which guys were cute and might be interested in going to which party with which girls; or, how pathetic it was that a certain group of girls had "outgrown" a certain other group via being accepted into a certain sorority and developing a fuller and more mature appreciation for the compulsive consumption of alcohol ... and so forth.
Which led me to the following thought: Wow! With all our incredible communications technologies, we are creating a global brain! But 99.99% of this global brain's thoughts are going to be completely trite and idiotic.
Are we, perhaps, creating a global moron or at least a global nincompoop?
If taken seriously, this notion becomes a bit frightening.
Let's suppose that, at some point, the global communication network itself achieves some kind of spontaneous, self-organizing sentience.
(Yeah, this is a science-fictional hypothesis, and I don't think it's extremely likely to happen, but it's interesting to think about.)
Won't the contents of its mind somehow reflect the contents of the information being passed around the global communications network?
Say: porn, spam e-mails, endless chit-chat about whose buns are cuter, and so forth?
Won't the emergent global mind of the Internet thus inevitably be a shallow-minded, perverted and ridiculous dipshit?
Is this what we really want for the largest, most powerful mind on the planet?
What happens when this Global Moron asserts its powers over us? Will we all find our thoughts and behaviors subtly or forcibly directed by the Internet Overmind?? -- whose psyche is primarily directed by the contents of the Internet traffic from which it evolved ... which is primarily constituted of ... well... yecchh...
(OK .. fine ... this post is a joke... OR IS IT???)
Monday, October 29, 2007
On Becoming a Neuron
I was amused and delighted to read the following rather transhumanistic article in the New York Times recently.
http://www.nytimes.com/2007/10/26/opinion/26brooks.html?_r=1&oref=slogin
The writer, who does not appear to be a futurist or transhumanist or Singularitarian or anything like that, is observing the extent to which he has lost his autonomy and outsourced a variety of his cognitive functions to various devices with which he interacts. And he feels he has become stronger rather than weaker because of this -- and not any less of an individual.
This ties in deeply with the theme of the Global Brain
http://pespmc1.vub.ac.be/SUPORGLI.html
which is a concept dear to my heart ... I wrote about it extensively in my 2001 book "Creating Internet Intelligence" and (together with Francis Heylighen) co-organized the 2001 Global Brain 0 workshop in Brussels.
I have had similar thoughts to the above New York Times article many times recently... I can feel myself subjectively becoming far more part of the Global Brain than I was even 5 years ago, let alone 10...
As a prosaic example: Via making extensive use of task lists as described in the "Getting Things Done" methodology
http://en.wikipedia.org/wiki/Getting_Things_Done
I've externalized much of my medium-term memory about my work-life.
And via using Google Calendar extensively I have externalized my long-term memory... I use the calendar not only to record events but also to record information about what I should think about in the future (e.g. "Dec. 10 -- you should have time to start thinking about systems theory in connection to developmental psychology again...")
And, so much of my scientific work these days consists of reading little snippets of things that my colleagues on the Novamente project (or other intellectual collaborators) wrote, and then responding to them.... It's not that common these days that I undertake a large project myself, because I can always think of someone to collaborate with, and then the project becomes in significant part a matter of online back-and-forth....
And the process of doing computer science research is so different now than it was a decade or two ago, due to the ready availability and easy findability of so many research ideas, algorithms, code snippets etc. produced by other people.
Does this mean that I'm no longer an individual? It's certainly different than if I were sitting on a mountain for 10 years with my eagle and my lion like Nietzsche's Zarathustra.
And yet I don't feel like I've lost my distinctiveness and become somehow homogenized -- the way I interface with the synergetic network of machines and people is unique in complexly patterned ways, and constitutes my individuality.
Just as a neuron in the brain doesn't manifest its individuality any less than a neuron floating by itself in a solution -- in fact, the neuron in the brain may manifest its individuality more greatly, due to having a richer, more complex variety of stimuli to which it may respond individually.
None of these observations are at all surprising from a Global Brain theory perspective. But, they're significant as real-time, subjectively-perceived and objectively-observed inklings of the accelerating emergence of a more and more powerful and coordinated Global Brain, of which we are parts.
And I think this ties in with Ray Kurzweil's point that by the time we have human-level AGI, it may not be "us versus them", it may be a case where it's impossible to draw the line between us and them...
-- Ben
P.S.
As a post-script, I think it's interesting to tie this Global Brain meme in with the possibility of a "controlled ascent" approach to the Singularity and the advent of the transhuman condition.
Looking forward to the stage at which we've created human-level AGI's -- if these AGI's become smarter and smarter at an intentionally-controlled rate (say a factor of 1.2 per year, just to throw a number out there), and if humans are intimately interlinked with these AGI's in a Global Brain like fashion (as does seem to be occurring, at an accelerating rate), then we have a quite interesting scenario.
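(Just to spell out the arithmetic of that throwaway 1.2-per-year figure -- an illustration of compounding, not a prediction -- here's a quick sketch:

# Compounding an illustrative 20%-per-year capability growth rate (not a prediction).
for years in (5, 10, 25, 40):
    print(years, "years ->", round(1.2 ** years, 1), "x baseline")
# 5 years -> 2.5x, 10 -> 6.2x, 25 -> 95.4x, 40 -> 1469.8x

So even a deliberately slow, controlled ascent compounds into something dramatic within a few decades.)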
Of course I realize that guaranteeing this sort of controlled ascent is a hard problem. And I realize there are ethical issues involved in making sure a controlled ascent like this respects the rights of individuals who choose not to ascend at all. And I realize that those who want to ascend faster may get irritated at the slow pace. All these points need addressing in great detail by an informed and intelligent and relevantly educated community, but they aren't my point right now -- my point in this postscript is the synergetic interrelation of the Global Brain meme with the controlled-ascent meme.
The synergy here is that as the global brain gets smarter and smarter, and we get more and more richly integrated into it, and the AGI's that will increasingly drive the development of the global brain get smarter and smarter -- there is a possibility that we will become more and more richly integrated with a greater whole, while at the same time having greater capability to exercise our uniqueness and individuality.
O Brave New Meta-mind, etc. etc. ;-)
Friday, June 15, 2007
The Pigeons of Paraguay (Further Dreams of a Ridiculous Man)
In the spirit of my prior dream-description Colors, I have written down another dream ... one I had last night ... it's in the PDF file linked to from
Copy Girl and the Pigeons of Paraguay
I'm not sure why I felt inspired to, but as soon as I woke up from the dream I had the urge to type it in (along with some prefatory and interspersed rambling!) It really wasn't a terribly important dream for me ... but it was interesting as an example of a dream containing a highly realistic psychedelic drug trip inside it. There is also a clear reference to the "Colors" dream within this one, which is not surprising -- my dreams all tend to link into each other, as if they form their own connected universe, separate from and parallel to this one.
I have always enjoyed writing "dreamlike" fiction, such as my freaky semi-anti-novel Echoes of the Great Farewell ... but lately I've become interested in going straight to the source, and naturalistically recording dreams themselves ... real dreams being distinctly and clearly different from dreamlike fiction. Real dreams have more ordinariness about them, more embarrassing boringness and cliche'-ness; and also more herky-jerky discoordination.... They are not as aesthetic, which of course gives them their own special aesthetic value (on a meta-aesthetic level, blah blah blah...). Their plain-ness and lack of pretension gives them, in some ways, a deeper feel of truth than their more poetized fictional cousins....
The dream I present here has no particular scientific or philosophical value, it's just a dream that amused me. It reminded me toward the end a bit of Dostoevsky's Dream of a Ridiculous Man -- not in any details, but because of (how to put it???) the weird combination of irony and sincerity with which the psychic theme of sympathy and the oneness of humankind is addressed. Yeah yeah yeah. Paraguayan pigeons!! A billion blue blistering barnacles in a thundering typhoon!!!
I'll give you some mathematics in my next blog entry ;-)
-- Ben
Saturday, June 02, 2007
Is Google Secretly Creating an AGI? (Reasons Why I Doubt It)
From time to time someone suggests to me that Google "must be" developing a powerful Artificial General Intelligence in-house. I recently had the opportunity to visit Google and chat with some of their research staff, including Peter Norvig, their Director of Research. So I thought I'd share my perspective on Google+AGI based on the knowledge currently at my disposal.
First let me say that I definitely see where the Google+AGI speculation comes from. It's not just that they've hired a bunch of AI PhD's and have a lot of money and computers. It's that their business leaders have taken to waxing eloquent about the glorious future of artificial intelligence. For instance, on the blog
http://memepunks.blogspot.com/2006/05/google-ai-twinkle-in-larry-pages-eye.html
we find some quotes from Google co-founder Larry Page:
"People always make the assumption that we're done with search. That's very far from the case. We're probably only 5 percent of the way there. We want to create the ultimate search engine that can understand anything ... some people could call that artificial intelligence.
...
a lot of our systems already use learning techniques
...
The ultimate search engine would understand everything in the world. It would understand everything that you asked it and give you back the exact right thing instantly ... You could ask 'what should I ask Larry?' and it would tell you."
Page, in the same talk quoted there, noted that technology has a tendency to change faster than expected, and that an AI could be a reality in just a few years.
Exciting rhetoric indeed!
Anyway, earlier this week I gave a talk at Google, to a group of in-house researchers and engineers, on the topic of artificial general intelligence. I was rather overtired and sick when I gave the talk, so it wasn't anywhere near one of my best talks on AGI and Novamente. Blecch. Parts of it were well delivered; but I didn't pace myself as well as usual, so I wound up rushing past some of the interesting points and not giving my usual stirring conclusion.... But some of the younger staff were pretty interested anyway; and there were some fun follow-up conversations.
Peter Norvig (their Director of Research), an all-around great researcher and writer and great guy, gave the intro to my talk. I had chatted with Peter a bit earlier; and had mentioned to him that some folks I knew in the AGI community suspected Google to have a top-secret AGI project.
So anyway, Peter gave the following intro to my talk [I am paraphrasing here, not quoting exactly ... but I've tried to stay true to what he said, as accurately as possible given the constraints of my all-too-human memory]:
"There has been some talk about whether Google has a top-secret project aimed at building a thinking machine. Well, I'll tell you what happened. Larry Page came to me and said 'Peter, I've been hearing a lot about this Strong AI stuff. Shouldn't we be doing something in that direction?' So I said, okay. I went back to my desk and logged into our project management software. I had to write some scripts to modify it because it didn't go far enough into the future. But I modified it so that I could put, 'Human-level intelligence' on the row of the planning spreadsheet corresponding to the year 2030. And, that wasn't up there an hour before someone else added another item to the spreadsheet, time-stamped 90 days after that: 'Human-level intelligence: Macintosh port' "
Well ... soooo ... apparently Norvig, at least in a semi-serious tongue-in-cheek moment, thinks we're about 23 years from being able to create a thinking machine....
He may be right of course -- or he may even be over-optimistic, who knows -- but a cynical side of me can't help thinking: "Hey, Ben! Peter Norvig is even older than you are! Maybe placing the end goal 23 years off is just a way of saying 'Somebody else's problem!'."
Norvig says he views himself as building useful tools that will accelerate the work of future AGI researchers, along with everyone else....
Of course, I do appreciate Google's useful tools! Google's various tools have been quite a relief as compared to the incompetently-architected, user-unfriendly software released by some other major software firms.
And, while from a societal perspective I wish Google would put their $$ and hardware behind AGI; from the perspective of my small AGI business Novamente LLC, their current attitude is surely preferable...
[I could discourse a while about Google's ethics slogan "Don't Be Evil" as a philosophy of Friendly AI ... but I'll resist the urge...]
When I shared the above story with one of my AGI researcher friends (who shall here remain anonymous), he agreed with my sentiments, and shared the following story with me..
"In [month deleted] I had an interview in Google's new [location deleted] office
... and they were much more interested in my programming skill than in my research. Of course, we didn't find a match.
Even if Google wants to do AGI, given their current technical culture,
they won't get it right, at least at the beginning. As far as AGI is
concerned, Google has more than enough money and engineers, but less
than enough thinkers. They will produce some cute toolbox with smart
algorithms supported by a huge amount of raw data, which will be
interesting, but far from AGI."
Summing up ... as the above anecdotes suggest, my overall impression was that Google is not making any serious effort at AGI. If they are, then either
- they have trained dozens of their scientific staff to be really good actors, or
- it is a super-top-secret effort within Google Irkutsk or wherever, that the Google Mountain View research staff don't know about
Of course, neither of these is an impossibility -- "we don't know what we don't know," etc. But honestly, I rate both of those options as pretty unlikely.
Could they launch an AGI effort? Most surely: they could, at any point. The cost to them of doing so would be trivially small, relative to the overall resources at their disposal. Maybe this blog post will egg them into doing so! (yeah, right...)
But I think the point my above-quoted friend made, after his Google interview, was quite astute. Google's technical culture is coding-focused, and their approach to AI is data-focused (textual data, and data regarding clicks on ads, and geospatial data coming into Google Earth, etc.). To get hired at Google you have to be a great coder -- just being a great AGI theorist wouldn't be enough, for example. I don't think AGI is mainly a coding problem, nor mainly a data-based problem ... nor do I think it's a problem that can effectively be solved via a "great coding + lots of data" mentality. I think AGI is a deep conceptual problem that has more to do with understanding cognition than with churning out great code and effectively utilizing masses of data. Of course, lots of great software engineering will be required to create an AGI (and we're happy to have a few super-engineers within Novamente LLC, for example), and lots of data too (e.g. in the Novamente case we plan to start our systems out with perceptual and social data from virtual worlds like Second Life; and then later on feed them knowledge from Wikipedia and other textual sources). But if the focus of an "AGI" team is on coding and data, rather than on grokking the essence of cognition, AGI is not going to be the result.
So, IMO, for Google to create an AGI would require them not only to bypass the relative AGI skepticism represented by the Peter Norvig story above -- but also to operate an AGI project based on a significantly different culture than the one that has worked for Google so far, in their development of (in some cases, really outstandingly useful) narrow-AI applications.
All in all my impression after getting to know Google's in-house research program a little better, is about the same as it was beforehand. However, I did make an explicit effort to look for evidence disconfirming my prior hypotheses -- and I didn't really find any. If anyone has evidence that the impressions I've given here are mistaken, I'd certainly be happy to hear it.
OK, well, it's time to wind up this blog post and get back to my own effort to create AGI -- with far less money and computers than Google, but -- at least -- a focus on (and, I believe, a clear understanding of) the essence of the problem....
Sure, it would be nice to have the resources of a Google or M$ or IBM backing up Novamente! But, the thing is, you don't become a big company like those by focusing on grokking the essence of cognition -- you become a big company like those by focusing on practical stuff that makes money quickly, like code and data and user interfaces ... and if AI plays a role in this, it's problem-specific narrow-AI, such as Google has done so well with.
As Larry Page recognizes, AGI will certainly have massive business value, due to its incredible potential for delivering useful services to people in a huge number of contexts. But the culture and mentality needed to create AGI seems to be different from the one needed to rapidly create a large and massively profitable company. My prediction is that if Google ever does get an AGI, they will buy it rather than build it.
Friday, May 25, 2007
Pure Silliness
Ode to the Perplexingness of the Multiverse
A clever chap, just twenty-nine
Found out how to go backwards in time
He went forty years back
Killed his mom with a whack
Then said "How can it be that still I'm?"
On the Dangers of Incautious Research and Development
A scientist, slightly insane
Created a robotic brain
But the brain, on completion
Favored assimilation
His final words: "Damn, what a pain!"
A couple clever followups to the above poem were posted by others on the Singularity email list...
On the Dangers of Emulating Biological Drives in Artificial Intelligences
(by Moshe Looks)
A scientist once shook his head
and exclaimed "My career is now dead;
for although my AI
has an IQ that's high
it insists it exists to be bred!"
By Derek Zahn:
The Provably Friendly AI
Was such a considerate guy!
Upon introspection
And careful reflection,
It shut itself off with a sigh.
And, less interestingly...
On the Benefits of Clarity in Verbal Presentation
There was a prize pig from Penn Station
Who refused to eschew obfuscation
The swine with whom he traveled
Were bedazed by his babble
So they baconed him, out of frustration
Sunday, May 20, 2007
Flogging Poor Searle Again
Someone emailed me recently about Searle's Chinese Room argument,
http://en.wikipedia.org/wiki/Chinese_room
a workhorse theme in the philosophy of AI that normally bores me to tears.
But though the Chinese room bores me, part of my reply to the guy's question wound up interesting me slightly so I thought I'd repeat it here.
I won't recapitulate the Chinese room argument here; if you don't know it please follow the above link to Wikipedia.
The issue I'll raise here ties in with the question of whether recent theoretical developments regarding "AI with massive amounts of processing power" have any relevance to pragmatic AI.
As an example of this sort of theoretical research, check out:
http://www.hutter1.net/
which describes among other things an AI system called AIXI that uses infinitely much computational resources and achieves a level of intelligence greater than or equal to that of any other possible AI system. There are also approximations to AIXI such as AIXItl that use only insanely rather than infinitely much computational resources.
My feeling is that one should think about, not just
Intelligence = complexity of goals that a system can achieve
but also
Efficient intelligence = Sum over goals a system can achieve of: (complexity of the goal)/(amount of space and time resources required to achieve the goal)
According to these definitions, AIXI has zero efficient intelligence, and AIXItl has extremely low efficient intelligence. The challenge of AI in the real world is in achieving efficient intelligence not just raw intelligence.
Also, according to these definitions, the Bekenstein bound places a limit on the maximal efficient intelligence of any system in the physical universe.
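To make the comparison a little more concrete, here is one hedged way to write the "efficient intelligence" definition down as a formula (my own notation, just a sketch of what the sum might look like):

\mathrm{EffInt}(S) \;=\; \sum_{g \in G_S} \frac{C(g)}{R_S(g)}

where G_S is the set of goals system S can achieve, C(g) is the complexity of goal g, and R_S(g) is the amount of space and time resources S needs to achieve g. Under this reading, AIXI's infinite resource requirements drive every term to zero, which is the sense in which it has zero efficient intelligence.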
Now, back to the Chinese room (hmm, writing this blog post is making me hungry ... after I'm done typing it I'm going to head out for some Kung Pao chicken!!)....
A key point is: The scenario Searle describes is likely not physically possible, due to the unrealistically large size of the rulebook.
And even if Searle's scenario somehow comes out physically plausible (e.g. maybe Bekenstein is wrong due to currently unknown physics), it certainly involves systems totally unlike any that we have ever encountered. Our terms like "intelligence" and "understanding" and "mind" were not created for dealing with massive-computational-resources systems of this nature.
The structures that we associate with intelligence (will, focused awareness, etc.) in a human context, all come out of the need to do intelligent processing within modest space and time requirements.
So when someone says they feel like the {Searle+rulebook} system isn't really understanding Chinese, what they really mean (I argue) is: it isn't understanding Chinese according to the methods we are used to, which are methods adapted to deal with modest space and time resources.
This ties in with the relationship between intensity-of-consciousness and degree-of-intelligence.
(Note that I write about intensity of consciousness rather than presence of consciousness. I tend toward panpsychism but I do accept that "while all animals are conscious, some animals are more conscious than others" (to pervert Orwell). I have elaborated on this perspective considerably in my 2006 book The Hidden Pattern.)
In real life, these seem often to be tied together, because the cognitive structures that correlate with intensity of consciousness are useful ones for achieving intelligent behaviors.
However, Searle's scenario is pathological in the sense that it posits a system with a high degree of intelligence associated with a functionality (understanding Chinese) that is NOT associated with any intensity-of-consciousness.
But I suggest that this pathology is due to the unrealistically large amount of computing resources that the rulebook requires.
I.e., it is finitude of resources that causes intelligence and intensity-of-consciousness to be correlated. The fact that this correlation breaks down in a pathological, physically impossible case requiring absurdly large resources doesn't mean very much...
What it means is that "understanding", as we understand it, has to do with structures and dynamics of mind that arise due to having to manifest efficient intelligence, not just intelligence.
That is really the moral of the Chinese room.
Tuesday, May 15, 2007
Technological versus Subjective Acceleration
This post is motivated by an ongoing argument with Phil Goetz, a local friend who believes that all this talk about "accelerating change" and approaching the Singularity is bullshit -- in part because he doesn't see things advancing all that amazingly exponentially rapidly around him.
There is plenty of room for debate about the statistics of accelerating change: clearly some things are advancing way faster than others. Computer chips and brain scanners are advancing more rapidly than forks or refrigerators. In this regard, I think, the key question is whether Singularity-enabling technologies are advancing exponentially (and I think enough of them are to make a critical difference). But that's not the point I want to get at here.
The point I want to make here is: I think it is important to distinguish technological acceleration from subjective acceleration.
This breaks down into a couple sub-points.
First: Already by this point in history, I suggest, advancement in technology has far outpaced the ability of the human brain to figure out new ways to make meaningful use of that technology.
Second: The human brain and body themselves pose limitations regarding how thoroughly we can make use of new technologies, in terms of transforming our subjective experience.
Because of these two points, a very high rate of technological acceleration may not lead to a comparably high rate of subjective acceleration. Which is, I think, the situation we are seeing at present.
Regarding the first point: Note that long ago in history, when new technology was created, it lasted quite a while before being obsoleted, so that each new technology was exploited pretty damn thoroughly before its successor came along.
These days, though, we've just BARELY begun figuring out how to creatively exploit X, when something way better than X comes along.
The example of music may serve to illustrate both of these points.
The invention of the electronic synthesizer/sampler keyboard was a hell of a breakthrough. However, the music we humans actually make has not changed nearly as much as the underlying technology has. By and large we use all this advanced technology to make stuff that sounds harmonically, rhythmically and melodically not that profoundly different from pre-synthesizer music. Certainly, the degree of musical change has not kept up with the degree of technological change: Madonna is not as different from James Brown as a synthesizer keyboard is from an electric guitar.
Why is that?
Well, humans take a while to adapt. People are still learning how to make optimal use of synthesizer/sampling keyboards for making interesting music ... but while people are still relatively early on that learning curve, technology has advanced yet further, and computer music software gives us amazing new possibilities ... that we've barely begun to exploit...
Furthermore, our musical tastes are limited by our physiology. I could make fabulously complex music using a sequencer, with thousands of intersecting melody lines carefully calculated, but no human would be able to understand it (I tried ;-). Maybe superhuman minds will be able to use modern music tech to create music far subtler and more interesting than any human music, for their own consumption.
And, even when acoustic and cognitive physiology isn't relevant, the rate of growth and change in a person's music appreciation is limited by their personality psychology.
To take another example, let's look at bioinformatics. No doubt that technology for measuring biological systems has advanced exponentially. As has technology for analyzing biological data using AI (my part of that story).
But AI-based methods are very slow to pervade the biology community due to cultural and educational issues ... most biologists can barely deal with stats, let alone AI tech....
And, the most advanced measurement machinery is often not used in the most interesting possible ways. For instance, microarray devices allow biologists to take a whole-genome approach to studying biological systems, but, most biologists use them in a very limited manner, guided by an "archaic" single-gene-focused mentality. So much of the power of the technology is wasted. This situation is improving -- but it's improving at a slower pace than the technology itself.
Human adoption of the affordances of technology has become the main bottleneck, not the technology itself.
So there is a dislocation between the rate of technological acceleration and the rate of subjective acceleration. Both are fast but the former is faster.
Regarding word processing and Internet technology: our capability to record and disseminate knowledge has increased TREMENDOUSLY ... and, our capability to create knowledge worth recording and disseminating has increased a lot too, but not as much...
I think this will continue to be the case until the legacy human cognitive architecture itself is replaced with something cleverer such as an AI or a neuromodified human brain.
At that point, we'll have more flexible and adaptive minds, making better use of all the technologies we've invented plus the new ones they will invent, and embarking on a greater, deeper and richer variety of subjective experiences as well.
Viva la Singularity!