Sunday, December 02, 2012

What Will Come After Language?

I just gave a talk, via Skype from Hong Kong, at the Humanity+ San Francisco conference….  Here are some notes I wrote before the talk, basically summarizing what I said in the talk (though of course, in the talk I ended up phrasing many things a bit differently...).

I'm going to talk a bit about language, and how it relates to mind and reality … and about what may come AFTER language as we know it, when mind and reality change dramatically due to radical technological advances

Language is, obviously, one of the main things distinguishing humans from other animals.   Dogs and apes and so forth, they do have their own languages, which do have their own kinds of sophistication -- but these animal languages seem to be lacking in some of the subtler aspects of human languages.  They don't have the recursive phrase structure that lets us construct and communicate complex conceptual structures.

Dolphins and whales may have languages as sophisticated as ours -- we really don't know -- but if so their language may be very different.  Their language may have to do with continuous wave-forms rather than discrete entities like words, letters and sentences.  Continuous communication may be better in some ways -- I can imagine it being better for conveying emotion, just as for us humans, tone and gesture can be better at conveying emotion than words are.  Yet, our discrete, chunky human language seems to match naturally with our human cognitive propensity to break things down into parts, and with our practical ability to build stuff out of parts, using tools.

I've often imagined the cavemen who first invented language, sitting around in their cave speculating and worrying about the future changes their invention might cause.  Maybe they wondered whether language would be a good thing after all -- whether it would somehow mess up their wonderful caveman way of life.  Maybe these visionary cavemen foresaw the way language would enable more complex social structures, and better passage of knowledge from generation to generation.  But I doubt these clever cavemen foresaw Shakespeare, William Burroughs, Youtube comment spam, differential calculus, mathematical logic or C++ ….   I suppose we are in a similar position to these hypothetical cavemen when we speculate about the future situations our current technologies might lead to.  We can see a small distance into the future, but after that, things are going to happen that we utterly lack the capability to comprehend…

The question I want to pose now is: What comes after language?  What's the next change in communication?

My suggestion is simple but radical: In the future, the distinction between linguistic utterances and minds is going to dissolve.

In the not too distant future, a linguistic utterance is simply going to be a MIND with a particular sort of cognitive focus and bias.

I came up with this idea in the course of my work on the OpenCog AI system.  OpenCog is an open-source software system that a number of us are building, with the goal of  eventually turning it into an artificial general intelligence system with capability at the human level and beyond.  We're using it to control intelligent video game characters, and next year we'll be working with David Hanson to use it to control humanoid robots.

What happens when two OpenCog systems want to communicate with each other?  They don't need to communicate using words and sentences and so forth.  They can just exchange chunks of mind directly.  They can exchange semantic graphs -- networks of nodes and links, whose labels and whose patterns of connectivity represent ideas.

But you can't just take a chunk of one guy's mind, and stick it into another guy's mind.   When you're merging a semantic graph from one mind, into another mind, some translation is required -- because different minds will tend to organize knowledge differently.  There are various ways to handle this.

One way is to create a sort of "standard reference mind" -- so that, when mind A wants to communicate with mind B, it first expresses its idiosyncratic concepts in terms of the concepts of the standard reference mind.   This is a scheme I invented in the late 1990s -- I called it "Psy-nese."   A standard reference mind is sort of like a language, but without so much mess.  It doesn't require thoughts to be linearized into sequences of symbols.  It just standardizes the nodes and links in semantic graphs used for communication.
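To make the "standard reference mind" idea concrete, here's a toy sketch in Python.  Everything in it is hypothetical -- the concept names, the triple-based graph representation, and the dictionary-based mappings are illustrative stand-ins, not OpenCog's actual data structures or any real Psynese implementation:

```python
# Toy "Psynese"-style communication: each mind maps its private concept
# labels to concepts in a shared standard reference mind, and graphs are
# re-labeled through that shared vocabulary.  All names are hypothetical.

# Mind A's idiosyncratic labels -> standard reference concepts
mind_a_to_standard = {"doggo": "dog", "chases": "pursue", "feline": "cat"}
# Standard reference concepts -> mind B's idiosyncratic labels
standard_to_mind_b = {"dog": "hound", "pursue": "chase", "cat": "cat"}

def translate(graph, to_standard, from_standard):
    """Re-label a semantic graph, represented here as a list of
    (node, link, node) triples, via the standard reference mind."""
    out = []
    for src, link, dst in graph:
        std = (to_standard[src], to_standard[link], to_standard[dst])
        out.append(tuple(from_standard[x] for x in std))
    return out

graph_a = [("doggo", "chases", "feline")]
print(translate(graph_a, mind_a_to_standard, standard_to_mind_b))
# [('hound', 'chase', 'cat')]
```

The key design point is that neither mind needs to know the other's internal vocabulary -- each only needs a mapping to and from the shared standard, which is what makes the standard reference mind "sort of like a language, but without so much mess."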

But Psynese is a fairly blunt instrument.  Wouldn't it be better if a semantic graph created by mind A, had the savvy to figure out how to translate itself into a form comprehensible by mind B?  What if a linguistic utterance contained, not only a set of ideas created by the sender, but the cognitive capability to morph itself into a form comprehensible by the recipient?  This is weird relative to how language currently works, but it's a perfectly sensible design pattern…

That's my best guess at what comes after language.  Impromptu minds, synthesized on the fly, with the goals of translating particular networks of thought into the internal languages of various recipients.

If I really stretch  my brain, I can dimly imagine what such a system of thought and communication would be like.  It would weave together a group of minds into an interesting kind of global brain.  But we can't foresee the particulars of what this kind of communication would lead to, any more than a bunch of cavemen could foresee Henry Miller, reddit or loop quantum gravity.

Finally, I'll pose you one more question, which I'm not going to answer for you.  How can we write about the future NOW, in a way that starts to move toward a future in which linguistic utterances and minds are the same thing?

Sunday, November 25, 2012

Complex-Probability Random Walks and the Emergence of Continuous General-Relativistic Spacetime from Quantum Dynamics

(A post presenting some interesting, but still only half-baked, physics ideas....)

The issue of unifying quantum mechanics and general relativity is perennially bouncing around in the back of my mind.   I don't spend that much time thinking about it, because I decided years ago to focus most of my intellectual energy on AI and understanding the mind, but I can't help now and then revisiting the good old physics problem, and doing occasional relevant background reading....

Of course there are loads of approaches to unified physics out there these days, some of them extremely sophisticated.  Yet I can't help hoping for a conceptually simpler unification.   Here's what I'm thinking today....

I've been enjoying Frank Blume's 2006 paper A Nontemporal Probabilistic Approach to Special and General Relativity....   It consists of fairly elementary calculations done in pursuit of a philosophical point.  Blume wanted to show that the continuous spacetime assumed in special and general relativity can be approximated arbitrarily well by discrete random walks.   The subtle point is that these discrete random walks hop around randomly (according to a certain specified probability distribution) not only in space, but also in time.   So Blume's picture has particles hopping back and forth in time, which in his view is in accordance with Julian Barbour's perspective that "physical reality is essentially nontemporal and is best thought of as an ordered sequence of discrete static images" (see Barbour's book  The End of Time).
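A few lines of Python convey the flavor of such a walk.  To be clear, the step distribution here is a made-up illustration, not Blume's actual distribution -- the point is just that the particle hops in time as well as in space, drifting forward in time only on average:

```python
import random

# Toy 1+1-dimensional random walk in the spirit of Blume's construction:
# the particle hops randomly in SPACE, and also in TIME -- sometimes
# backward.  The step probabilities are illustrative, not Blume's.

def spacetime_walk(steps, p_forward_time=0.8, seed=0):
    rng = random.Random(seed)
    x, t = 0, 0
    path = [(t, x)]
    for _ in range(steps):
        x += rng.choice([-1, 1])                          # spatial hop
        t += 1 if rng.random() < p_forward_time else -1   # temporal hop
        path.append((t, x))
    return path

path = spacetime_walk(1000)
# Time drifts forward on average, but individual hops go both ways.
print(path[:5], "... final (t, x):", path[-1])
```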

I don't feel confident I know how physical reality is "best thought of" ... but I do agree with Barbour and Blume that the view of time as flowing forward from past to future is badly flawed.  This sense of unidirectional time-flow is part of  human psychology, and perhaps part of the dissipative nature of the human mind/body as a macroscopic, thermodynamic system ... but it's not fundamental in the way that people sometimes naively assume.   It's not there in microphysics, either -- at the quantum level the flowing of time from past to future is an alien concept.  If you think this sounds like nonsense, read Barbour's book!

But the philosophy of time is somewhat peripheral to the point I want to make here.   What I've been thinking about is the possibility of replacing Blume's random walk, which is defined in terms of ordinary real-number probabilities, with an analogous random walk defined in terms of complex-number probabilities.   

Saul Youssef, in a series of interesting papers (click here and scroll down to Youssef's name) has shown that if one replaces ordinary real-number probabilities with complex-number probabilities, and adds a few other commonsensical assumptions, then the equations of quantum theory basically pop out.        

This direction of research seems natural once one notes that, according to the basic math of probability theory, there are four options for creating probabilities that obey all the standard probability rules: real-number, complex-number, quaternionic and octonionic probabilities.  Classical physics uses the standard real-number option.  Quantum physics uses the complex-number option.

Ordinary quantum logic uses real-number probabilities, but uses an unusual logic (lattice meet and join on the lattice of subspaces of a complex Hilbert space), which lacks some of the normal rules of Boolean logic, such as distributivity.    Youssef's exotic probability approach retains ordinary Boolean logic rules, but moves to complex number probabilities.   

What I began wondering is: What if you replace Blume's conventional random walk with a random walk in which each movement of a particle is quantified by a certain complex-number probability?

Then a particle may move in various spatiotemporal directions, and there is the possibility for constructive or destructive interference between the different directions.  

And it seems that, in the case where the interference between the different directions cancels out, one would get the same behavior as a real-probability random walk.  
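A tiny numerical illustration of the interference idea, in the spirit of Youssef's exotic probability (though grossly simplified relative to his actual formalism): each path a particle can take carries a complex weight, weights for paths arriving at the same point add, and the observable frequency is the squared magnitude of the total.

```python
import cmath

# Toy complex-"probability" interference: paths reaching the same
# spacetime point have their complex weights summed; the observed
# frequency is |total|^2.  With real positive weights this reduces to
# an ordinary (classical) random walk; with complex weights, paths
# can reinforce or cancel.

def arrival_probability(path_amplitudes):
    total = sum(path_amplitudes)
    return abs(total) ** 2

# Two equal-magnitude paths to the same point:
in_phase  = [0.5, 0.5]                               # constructive
out_phase = [0.5, 0.5 * cmath.exp(1j * cmath.pi)]    # destructive

print(arrival_probability(in_phase))    # 1.0
print(arrival_probability(out_phase))   # ~0.0 (paths cancel)
```

In the `in_phase` case the complex weights behave just like ordinary probabilities -- which is the regime where, per the back-of-the-envelope argument below, the walk should look like Blume's real-probability walk.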

So based on back-of-the-envelope calculations I did the other day, it looks like one can probably get General Relativity to emerge as a statistical approximation to the large-scale behavior of complex-number-probability (quantum) random walks, under conditions of minimal interference.

How far does a perspective like this go, in terms of explaining the particulars of unified physics?  I don't know, and don't seem to have the time to do the rigorous calculations to find out, right now.  But it seems an interesting direction....   If you're a physicist interested in helping work out the details, drop me a line! ...


Monday, October 29, 2012

Avoiding the Tyranny of the Majority in Collaborative Filtering

One of the more annoying aspects of the modern Internet is crap comments.  For instance, it's improved in recent years, but for a while the typical comments on Youtube music videos were among the most idiotic examples of human "thought" and behavior I've ever seen…

A common solution to the problem is to have readers rate comments.  Then comments that are highly-rated by readers get ranked near the top of the list, and comments that are panned by readers get ranked near the bottom of the list.  This mechanism is used to good effect on general-purpose sites like Reddit, and specialized-community sites like Less Wrong.

Obviously this mechanism is very similar to the one used on Slashdot and Digg and other such sites, for collaborative rating of news items, web pages, and so forth.

There are many refinements of the methodology.  For instance, if an individual tends to make highly-rated comments, one can have the rating algorithm give extra weight to their ratings of others' comments.
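As a minimal sketch of that refinement -- with the reputation formula invented for illustration, not taken from any particular site's algorithm -- weighting ratings by rater reputation might look like:

```python
# Hypothetical reputation-weighted comment scoring: a rater whose own
# comments tend to score well gets extra weight when rating others.

def comment_score(ratings, reputation):
    """ratings: {rater: score in [-1, 1]}; reputation: {rater: weight >= 0}.
    Raters absent from the reputation table get a default weight of 1."""
    total_weight = sum(reputation.get(r, 1.0) for r in ratings)
    if total_weight == 0:
        return 0.0
    return sum(reputation.get(r, 1.0) * s for r, s in ratings.items()) / total_weight

# Alice (well-regarded, weight 3) likes the comment; Bob (weight 1) doesn't.
print(comment_score({"alice": 1, "bob": -1}, {"alice": 3.0, "bob": 1.0}))  # 0.5
```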

Such algorithms are interesting and effective, but have some shortcomings as well, one of which is a tendency toward "dictatorship of the majority."  For instance, if you have a piece of content that's loved by a certain 20% of readers but hated by the other 80%, it will get badly down-voted.

I started wondering recently whether this problem could be interestingly solved via an appropriate application of basic graph theory and machine learning.

That is, suppose one is given: A pool of texts (e.g. comments on some topic), and a set of ratings for each text, and information on the ratings made by each rater across a variety of texts.

Then, one can analyze this data to discover *clusters of raters* and *networks of raters*.

A cluster of raters is a set of folks who tend to rate things roughly the same way.   Clusters might be defined in a context-specific way -- e.g. one could have a set of raters who form a cluster in the context of music video comments, determined by looking only at music video comments and ignoring all other texts.

A network of raters is a set of folks who tend to rate each others' texts highly, or who tend to write texts that are replies to each others' texts.

Given information on the clusters and networks of raters present in a community, one can then rank texts using this information.  One can rank a text highly if some reasonably definite cluster or network of raters tends to rank it highly.

This method would remove the "dictatorship of the majority" problem, and result in texts being highly rated if any "meaningful subgroup" of people liked them.
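The cluster-based part of the idea can be sketched in a few dozen lines.  The greedy agreement-threshold clustering below is a deliberately naive stand-in for a real clustering algorithm (k-means, spectral clustering, etc.), and all names are illustrative:

```python
# Subgroup-aware ranking sketch: group raters whose rating vectors agree,
# then score a text by its MOST enthusiastic cluster rather than by the
# global average -- so a text loved by a coherent 20% minority still ranks.

def cluster_raters(ratings, min_agreement=0.8):
    """ratings: {rater: {text: +1 or -1}} -> list of clusters (rater lists).
    Greedy: join the first cluster whose representative you mostly agree with."""
    clusters = []
    for r in ratings:
        for c in clusters:
            rep = c[0]
            shared = set(ratings[r]) & set(ratings[rep])
            if shared and (sum(ratings[r][t] == ratings[rep][t] for t in shared)
                           / len(shared)) >= min_agreement:
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

def subgroup_score(text, ratings, clusters, min_size=2):
    """Score of the cluster that likes `text` most (0.0 if none is big enough)."""
    scores = []
    for c in clusters:
        votes = [ratings[r][text] for r in c if text in ratings[r]]
        if len(votes) >= min_size:
            scores.append(sum(votes) / len(votes))
    return max(scores) if scores else 0.0

# 2 raters love text X, 3 hate it: the global average is -0.2, but the
# minority cluster rates it 1.0, so X still surfaces for that subgroup.
ratings = {"a1": {"X": 1, "Y": 1}, "a2": {"X": 1, "Y": 1},
           "b1": {"X": -1, "Y": 1}, "b2": {"X": -1, "Y": 1}, "b3": {"X": -1, "Y": 1}}
clusters = cluster_raters(ratings)
print(clusters)                                # [['a1', 'a2'], ['b1', 'b2', 'b3']]
print(subgroup_score("X", ratings, clusters))  # 1.0
```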

Novel methods of browsing content also pop to mind here.  For instance: instead of just a ranked list of texts, one could show a set of tabs, each giving a ranked list of texts according to some meaningful subgroup.

Similar ideas could also be applied to the results of a search engine.  In this case, the role of "ratings of text X" would be played by links from other websites to site X.   The PageRank formula gives highest rank to sites that are linked to by other sites (with highest weight given to links from other sites with high PageRank, using a recursive algorithm).  Other graph centrality formulas work similarly.  As an alternative to this approach, one could give high rank to a site if there is some meaningful subgroup of other sites that links to it (where a meaningful subgroup is defined as a cluster of sites that link to similar pages, or a cluster of sites with similar content according to natural language analysis, or a network of richly inter-linking sites).   Instead of a single list of search results, one could give a set of tabs of results, each tab listing the results ranked according to a certain (automatically discovered) meaningful subgroup.
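For reference, the recursive "links from high-rank sites count more" idea that the subgroup approach would sit alongside is easy to state as a power iteration.  This is a back-of-the-envelope version with the conventional 0.85 damping factor and a toy graph, not production search-engine code:

```python
# Minimal PageRank by power iteration: a page's rank is (1-d)/n plus d times
# the rank shares it receives from pages linking to it -- applied repeatedly
# until the ranks settle.

def pagerank(links, iters=50, d=0.85):
    """links: {page: [pages it links to]} -> {page: rank}, ranks sum to ~1."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank uniformly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

ranks = pagerank({"A": ["B"], "B": ["C"], "C": ["A"]})
print(ranks)  # a symmetric 3-cycle, so each page ends up with rank ~1/3
```

The subgroup alternative described above would replace the single global rank vector with one rank vector per automatically discovered cluster of linking sites.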

There are many ways to tune and extend this kind of methodology.   After writing the above, a moment's Googling turned up a couple of papers on related topics.

But it doesn't seem that anyone has rolled out these sorts of ideas into the Web at large, which is unfortunate….

But the Web is famously fast-advancing, so there's reason to be optimistic about the future.  Some sort of technology like I've described here, deployed on a mass scale, is going to be important for the development of the Internet and its associated human community into an increasingly powerful "global brain" …

Friday, October 19, 2012

Can Computers Be Creative?

Can Computers Be Creative? -- A Dialogue on Creativity, Radical Novelty, AGI, Physics and the Brain

Over the years, I've repeatedly encountered people making arguments of the form: "Computers can't be creative in the same way that people can." Such arguments always boil down, eventually, to an assertion that human mind/brains have recourse to some sort of "radical novelty" going beyond the mere repermutation of inputs and initial state that computers are capable of.

This argument is there in Roger Penrose's "Emperor's New Mind", in Kampis's "Self-Modifying Systems", and in all manner of other literature. It will surely be around until the first fully human-level AGIs have been created -- and will probably continue even after that, at least to some extent, since no issue verging on philosophy has ever been fully resolved!

The following dialogue, between two imaginary characters A and B, is my attempt to summarize the crux of the argument, in a way that's admittedly biased by my own peculiar species of pro-AGI perspective, but also attempts to incorporate my understanding of the AGI skeptic's point of view.

The dialogue was inspired in part by a recent dialogue on the AGI email list, in which perpetual AGI gadfly Mike Tintner was playing the role of AGI skeptic "A" in the dialogue, and the role "B" was played by pretty much everyone else on the list. But it's often hard to really get at the crux of an issue in the herky-jerky context of mailing list discussion. I hope I've been able to do better here. I was also heavily inspired by conversations I had years previously, with my friends Margeret Heath and Cliff Joslyn, on the concept of "radical novelty" and what it might mean.

A: It's obvious AIs can never be creative and innovative in the same sense that people are. They're just programs, they just recombine their inputs in ways determined by their programming.

B: How can you say that, though? If you look at the transcript of a computer chess player's game, you'll see plenty of creative moves -- that is, moves you'd call creative if you saw them made by a human player. I wrote some computer music composition software that made up some really cool melodies. If I'd made them up myself, you'd call them creative.

A: OK, but a program is never going to make up a new game, or a new instrument, or a new genre of music.

B: How do you know? Anyway once those things happen, then you'll find some reason to classify *those* achievements as not creative. This is just a variant of the principle that "AI is defined as whatever seems intelligent when people can do it, that computers can't yet do."

A: No, there's a fundamental difference between how computers are doing these things, and how people do these things. A person has to set up the situation for a computer to do these things. A person feeds the computer the input and configures the computer to have a certain goal. Whereas a human's creative activity is autonomous -- the human's not just a tool of some other being.

B: Ah, falling back on mystical notions of free will, are we? But think about it -- if you don't take care to feed a human child proper input, and set up their situation properly, and guide them toward a certain goal -- then they're not going to be playing chess or composing music. They're going to be a "wild child", capable only of hunting and foraging for food like a non-human animal. No one who can read this is independent of their cultural programming.

A: That's not a fair analogy. Computers need much more specialized preparation for each task they're given, than people do.

B: Yes, that's true. Nobody has achieved human-level AGI yet. I believe we're on the path to get there, but we're not there yet. But I never claimed that computer programs are currently creative and innovative on the level of highly creative adult humans. Actually it's hard to compare. Current computer programs can create some things humans can't -- extremely complex circuit designs, music with 10000-voice polyphony, fractal art in 128 dimensions, and so forth -- but they also fall far short of humans in many areas. Your original statement wasn't merely "We don't yet have computers that are as creative and innovative as humans" -- that's obvious. Your statement was that computers intrinsically aren't creative and innovative, in the same manner that humans are. And I don't think you've demonstrated that at all.

A: It's so obvious, it doesn't need demonstration. A computer will never do more than rearrange the elements that have been fed into it. Whereas, a human can come up with something fundamentally new -- a new element that neither it, nor anybody else, has ever heard of.

B: Ah, now I see what you're getting at -- the notion of "radical novelty." I've had this argument before!

A: Yes, radical novelty. The human mind is capable of radical novelty. That's the crux of our general intelligence, our creative innovations. And computers can't do it, because all they can do is rearrange their inputs and their programming -- they can't introduce anything new.

B: You do realize you're not the first one to think of this argument, right? It's been around a rather long time. I myself first encountered it in George Kampis's book "Self-Modifying Systems in Biology and Cognitive Science", which was published in the early 1990s. But of course the argument's been around since long before that. I'm sure someone who knew the history of philosophy better could trace it back far before the advent of computers. There are really two arguments here. One is: Is there more to creativity than combination of pre-existing elements, plus introduction of occasional randomness? The other is: If there is some additional, magic ingredient, can computers do it too?

A: What do you mean, "Is there more to creativity than combination of pre-existing elements, plus introduction of occasional randomness"? Of course there is; that's utterly obvious!

B: Is it? Think about it -- is evolution creative? Evolution created the human body, the human brain, the human eye, the snake's locomotion, the dolphin's sonar, the beautifully patterned wings of the Monarch butterfly. But what does evolution do? It combines previously known elements, it makes use of randomness, and it leverages the intrinsic creativity of the self-organizing processes in the physical world. Or are you going to plead Creationism here?

A: You admit evolution leverages the self-organizing processes of the physical world. The brain is also part of the physical world. A computer is different. The physical world has more creativity built into it. 

B: You admit a computer is part of the physical world, right? It's not some kind of ghostly presence…

A: Yes, but it's a very limited part of the physical world, it doesn't display all the phenomena you can see in the real world.

B: A brain is a very limited part of the physical world too, of course. And so is the Earth. And insofar as we understand the laws of physics, every phenomenon that can occur in the physical world can be simulated in a computer.

A: Ah, but a simulation isn't the real thing! You can't cook your food with a simulation of fire!

B: This brings us rather far afield, I think. I'm sure you're aware of the argument made by Nick Bostrom and many others before him, that it's highly possible we ourselves live in some kind of simulation world. You saw "The Matrix" too, I assume. A simulation isn't always going to look like one to the creatures living inside it.

A: OK OK, I agree, that's a digression -- let's not go there now. Leave that for another day.

B: So do you agree that evolution is creative?

A: Yes, but I'm not sure your version of the evolutionary story is correct. I think there's some fundamental creativity in the dynamics of the physical world, which guides evolution and neural process, but isn't present in digital computers.

B: And what evidence do you have of this? You do realize that there is no support for this in any current theory of physics, right?

A: And you do realize that current fundamental physics is not complete, right? There is no unified theory including gravity and all the other forces. Nor can we, in practice, explain how brains work using the underlying known physics. We can't even, yet, derive the periodic table of the elements from physical principles, without setting a lot of parameters using chemistry-level know-how. Clearly we have a lot more to discover.

B: Sure, no doubt. But none of the concrete proposals out there for unifying physics would introduce this sort of radical creativity and novelty you're looking for. It's ironic to look at physicist Roger Penrose and his twistor theory, for example. Penrose agrees with you that nature includes some kind of radical creativity not encompassable by computers. Yet his own proposal for unifying physics, twistors, is quite discrete and computational in nature -- and his idea of some mystical, trans-computational theory of physics remains a vague speculation.

A: So you really think this whole universe we're in, is nothing but a giant computer, each step determined by the previous one, with maybe some random variations?

B: Like Bill Clinton said in the Monica Lewinsky trial: That depends on what the meaning of is, is.

A: Sorry, you'll have to elaborate a bit. Clintonian metaphysics is outside my range of expertise…

B: I think that, from the perspective of science, there's no reason to choose a non-computational model of the observed data about the universe. This is inevitable, because the totality of all scientific data is just a giant, but finite collection of finite-precision numbers. It's just one big, finite bit-set. So of course we can model this finite bit-set using computational tools. Now, there may be some other way of understanding this data too -- but there is no empirical, scientific way to validate the idea that the best way to model this finite bit-set is using a non-computational model. If you choose to find a non-computational model of a finite set of bits simpler and preferable, I can't stop you from doing or saying that. What I can say though is that: from the perspective of science, there's no reason to choose a non-computational model of the observed data about the universe.

A: That seems like a limitation of science, rather than a limitation of the universe!

B: Maybe so. Maybe some future discipline, descending from our current notion of science, will encompass more possibilities. I've speculated that it may be necessary to expand the notion and practice of science to come to a good understanding of consciousness on the individual and group level. But that's another digression. My strong suspicion is that to build an advanced AGI, with intelligence and creativity at and then beyond the human level, the scientific understanding of the mind is good enough.

A: Hmmm…. You admit that science may not be good enough to fully understand consciousness, or to encompass non-computational models of our observations of intelligent systems. But then why do you think science is good enough to guide the construction of thinking machines?

B: I can't know this for sure. To some extent, in life, one is always guided by one's intuition. Just because I saw the sun rise 1000 mornings in a row, I can't know for sure it's going to rise the next day. As Hume argued long ago, the exercise of induction requires some intuitive judgment as to which hypothesis is simpler. To me, by far the simplest hypothesis about intelligence is that if we engineer mechanisms implementing basically the same sorts of functions that the human brain does then we're going to get a system that's intelligent in basically the same sorts of ways that the brain is. And if there's some aspect of the human mind/brain that goes beyond what mechanism can explain -- well hey, there may well be some aspect of our engineered AGI mind/brain that also goes beyond what mechanism can explain. Both the brain and computer are arrangements of matter in the same universe.

A: Heh…. I guess we digressed again, didn't we.

B: It seems that's how these arguments usually go. We started out with creativity and ended up with life, the universe and everything. So let's get back to radical novelty for a bit. I want to run through my thinking about that for you a little more carefully, OK?

A: Sure, go for it!

B: Ok…. Consider, in the fashion of second-order cybernetics, that it's often most sensible to consider a system S in the context of some observer O of the system.

A: Sure. Quantum mechanics would also support that sort of perspective.

B: Indeed -- but that's another digression! So let's go on...

My first point is: It's obvious that, in many cases, a system S can display radical novelty relative to an observer O. O may have devised some language L for describing the behaviors and internal states of S, and then S may do something which O finds is more easily describable using a new language L1, that has some additional words in it, and/or additional rules for interaction of the words in the language.

A: That's a bit abstract, can you give me an example or something?

B: Sure. Consider a pot of water on the stove, gradually heating up but not yet boiling. An observer of that pot may come up with a set of descriptions of the water, with a theory of the water's behavior based on his observations of the water. But then when the temperature gets high enough and the water starts to boil -- all of a sudden he sees new stuff, and he has to add new words to his language for describing the water. Words for bubbles, for example.

A: Yes. The pot of water has introduced radical novelty. It's come up with a new element - bubbles -- that didn't exist there before.

B: Yeah -- but now we get to the tricky point, which is the crux of the matter. In general, for a given system S, a certain behavior or internal state change may appear as radical novelty to observer O but not to observer O1.

In the case of the pot of water, suppose that in addition to our original observer O, we had another observer O1 who was watching every elementary particle in the pot of water, to the extent physics allows; and who was smart enough to infer from these observations the laws of physics as currently understood. This observer O1 would not be surprised when the water started to boil, because he would have predicted it using his knowledge of the laws of physics. O1 would be describing the water's structures and dynamics using the language of particle physics, whereas O would be describing it using "naive physics" language regarding the macroscopic appearance of the water. The boiling of the water would introduce radical novelty from the perspective of O, but not O1.

For a slightly broader example, think about any deterministic system S, and an observer O1 who has complete knowledge of S's states and behaviors as they unfold over time. From the view of O1, S will never do anything radically novel, because O1 can describe S using the language of S's exact individual states and behaviors; and each new thing that emerges in S is by assumption determined by the previous states of S and S's environment. But from the view of another observer O, one which has a coarser-grained model of S's states or behaviors, S may well display radical novelty at some points in time.
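To make the deterministic-system example tangible, here's a small sketch using an elementary cellular automaton.  Observer O1 knows the rule and the full state, so every step is perfectly predictable to it; observer O only sees a coarse summary (the count of live cells), and from that vantage the dynamics look irregular and novel.  The choice of rule 110 and of cell-count as the coarse summary are of course just illustrative:

```python
# Observer-relative novelty in a deterministic system: elementary cellular
# automaton rule 110 on a ring.  The fine-grained observer O1 sees the full
# state and can predict every step; the coarse-grained observer O sees only
# the number of live cells, which looks unpredictable.

def step_rule110(state):
    """One synchronous update of rule 110 with wraparound neighbors."""
    n = len(state)
    return [int(((state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n])
                in (1, 2, 3, 5, 6))  # neighborhoods mapped to 1 by rule 110
            for i in range(n)]

state = [0] * 31
state[15] = 1          # a single live cell
coarse = []
for _ in range(20):
    coarse.append(sum(state))   # all that observer O ever sees
    state = step_rule110(state)

# O1, applying step_rule110 to the known initial state, predicts everything.
# O sees only this irregular-looking sequence of counts:
print(coarse)
```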

The question regarding radical novelty then becomes: given a system S and an observer O who perceives S as displaying radical novelty at some point in time, how do we know that there isn't some other observer O1 who would not see any radical novelty where O does? Can we ever say, for sure, that S is in a condition such that any possible observer would perceive S to display radical novelty?

It seems we could never say this for sure, because any observer O, ultimately, only sees the data that it sees.

A: That's quite interesting, indeed, and I'll probably need some time to digest it fully.

But I still wonder if you're fudging the distinction between digital systems like computers and real physical systems like brains.

I mean: in the case of a computer, we can easily see that it's doing nothing new, just repermuting what comes in through its sensors, and what it was given in its initial programming. In the case of a human, you could try to argue that a human brain is just doing the same thing -- repermuting its sensations and its initial brain-state. But then, the rules of quantum mechanics forbid us from knowing the initial brain state or the sensations in full detail. So doesn't this leave the brain more room to be creative?

B: Not if you really look at what quantum mechanics says. Quantum mechanics portrays the dynamics of the world as a sort of deterministic unfolding in an abstract mathematical space, which isn't fully observable. But it's been proved quite clearly that a quantum system can't actually do anything beyond what an ordinary computer system can do, though in some cases it can do things much faster. So any quantum system you build, can be imitated precisely by some ordinary computer system, but the ordinary computer system may run much slower.

The arguments get subtle and threaten to turn into yet another digression -- but the bottom line is, quantum theory isn't going to save your position. That's why Penrose, who understands very well the limits of quantum theory, needs to posit some as yet unspecified future unified physics to support his intuitions about radical novelty.

A: Hmm, OK, let's set that aside for now. Let's focus back on the computer, since we both agree that physicists don't yet really understand the brain.

How can you say a computer can be fundamentally creative, when we both know it just repermutes its program and its inputs?

B: This becomes a quite funny and subtle question. It's very obvious to me that, to an observer with a coarse-grained perspective, a computer can appear to have radical novelty -- can appear quite creative.

A: Yes, but that's only because the observer doesn't really know what's going on inside the computer!

B: So then the real question becomes: For a given computer, is there hypothetically some observer who could understand the computer's inputs and program well enough to predict everything the computer does, even as it explores complex environments in the real world? For this observer, the computer would display no radical novelty.

A: Yes. For any computer, there is an observer like that, at least hypothetically. And for a brain, I really doubt it, no matter what our current physics theories might suggest.

B: But why do you doubt it so much? Because of your own subjective feeling of radical novelty, right? But consider: The deliberative, reflective part of your mind, which is explicitly aware of this conversation, is just one small part of your whole mental story. Your reflective conscious mind has only a coarse-grained view of your massive, teeming "unconscious" mind (I hesitate to really call the rest of your mind "unconscious" because I tend toward panpsychism -- but that would be yet another digression!). This is because the "conscious" mind has numerous information-processing limitations relative to the "unconscious" -- for instance the working memory limitation of 7 +/-2 items. Given this coarse-grained view, your "conscious" mind is going to view your "unconscious" mind as possessing radical novelty. But to another observer with fuller visibility into your "unconscious" mind, this radical novelty might not be there.

We all know the conscious mind is good at fooling itself. The radical novelty that you feel so acutely may be no more real than the phantom limbs that some people's brains tell them so vividly are really there. In one sense, they are there. In another, they are not.

Let me go back to the Matrix scenario for a moment…

A: You're kind of obsessed with that movie, aren't you? I wonder what that tells us about YOUR unconscious?

B: Heh… actually, I thought the first Matrix movie was pretty good, but it's not really a personal favorite film. And let's not even get started on the sequels… All in all, Cronenberg's "eXistenZ" had a similar theme and agreed with my aesthetics better…. But anyway…

A: … you digress

B: Indeed! I'm not trying to do "Roger Ebert Goes to the AI Lab" here, I just want to use the Matrix as a prop for another point.

Imagine we live in a Matrix type simulation world, but the overlords of the world -- who live outside the simulation -- are subtly guiding us by introducing new ideas into our minds now and then. And also by introducing new ideas into the minds of our computer programs. They are tweaking the states of our brains, and the RAM-states of the computers running our AI systems, in tricky ways that don't disrupt our basic functioning, but that introduce new ideas. Suppose these ideas are radically new -- i.e. they're things that we would never be able to think of, on our own.

A: So this scenario is like divine inspiration, but with the overlords of the Matrix instead of God?

B: Yeah, basically. But I wanted to leave the religious dimension out of it.

A: Sure… understandably.  We've made a big enough mess already!

B: So my point is: if this were the case, how would we ever know? We could never know.

A: That's true I suppose, but so what?

B: Now think about all the strange, ill-understood phenomena in the Cosmos -- psi phenomena, apparent reincarnation, and so forth. I know many of my colleagues think these things are just a load of BS, but I've looked at the data fairly carefully, and I'm convinced there's something strange going on there.

A: OK, sure, actually I tend to agree with you on that point. I've had some strange experiences myself. But aren't you just digressing again?

B: Partly. But my point is, if psi processes exist, they could potentially be responsible for acting sort of like the Matrix overlords in my example -- introducing radical novelty into our minds, and potentially the minds of our computers. Introducing inputs that are radically novel from some perspectives, anyway. If some kind of morphogenetic field input new stuff into your brain, it would be radically novel from your brain's view, but not from the view of the universe.

A: You're trying to out-weird me, is that it?

B: Aha, you caught me!!… Well, maybe. But if so that's a secondary, unconscious motive!

No, seriously…. My point with that digression was: We don't really understand the universe that well.

In actual reality, nobody can predict what a complex computer program is going to do, when it's interacting with a complex world.

You want to tell a story that some hypothetical super-observer could predict exactly what any complex computer program will do -- and hence, for any computer program, there is some perspective from which it has no radical novelty.

And then you want to tell a story that, for a human brain, some mysterious future physics will prove current physics wrong in its assertion that there is an analogous hypothetical super-observer for human brains.

But I don't particularly believe either of these stories. I think we have a lot to learn about the universe, and probably from the view of the understanding we'll have 100 years from now, both of these stories will seem a bit immature and silly.

A: Immature and silly, huh?? I know you are but what am I !!!

But if there are such big holes in our understanding of the universe, how come you think you know enough to build a thinking machine? Isn't *that* a bit immature and silly?

B: We started out with your argument that no computer can ever be creative and innovative, because the human mind/brain has some capability for radical novelty that computers lack -- remember?

A: Vaguely. My brain's a bit dizzied by all the quantum mechanics and psychic powers. Maybe I'd be less confused if my brain were hybridized with a computer.

B: But that's another digression…

A: Exactly…

B: But really -- after more careful consideration, what's left of your argument? What evidence is there for radical novelty in the human mind/brain, that's not there in computers? Basically the hypothesis of this special additional radical novelty in humans, comes down to your intuitive sense of what creativity feels like to you, plus some observations about the limits of current computer programs, plus some loosely connected, wild speculations about possible future physics. It's far from a compelling argument.

A: And where is your compelling argument that computers CAN display the same kinds of creativity and innovation that humans can? You haven't proved that to me at all. All you have is your faith that you can somehow make future AI programs way more creative and innovative than any program has ever been so far.

B: I have that scientific intuition -- and I also have the current laws of physics, which imply that a digital computer can do everything the brain does. It's possible, according to physics, that we'll need a quantum computer rather than a conventional digital computer to make AGI systems run acceptably fast -- but so far there's no real evidence of that.

And then there's current neuroscience, psychology and so forth -- all of which appear quite compatible with current physics.

I'm willing to believe current science has profound limitations. But given the choice between 1) current science, versus 2) your own subjective intuition about your thought process and your utterly scientifically ungrounded speculations about possible future physics -- which am I going to choose as a guide for engineering physical systems? Sorry my friend, but I'm gonna go with current science, as a provisional guide at any rate.

In everything we've learned so far about human cognition and neuroscience (and physics and chemistry and biology, etc. etc.), there's no specific thing that seems to go beyond what we can do on digital computers. It seems to me that achieving human-level intelligence using available computational resources is just a complex thing to do. It requires lots of different computational processes, all linked together in complex ways. And these processes are different from the serial calculations that current computer architectures are best at performing, which means that implementing them involves a lot of programming and software design trickery. Furthermore, we don't understand the brain that well yet, due to limitations of current brain scanning equipment -- so we have to piece together models of intelligence based on integrating scattered information from various disciplines. So implementing human-level AGI is a difficult task right now. 50 or 100 years from now it will probably be an exercise for schoolchildren!!

A: That's not a compelling argument, hombre. That's just a statement of your views.

B: Urrghh…. Look, what happens inside the mind is a lot like evolution! Not exactly the same, but closely analogous. The mind -- be it a human or computer mind -- generates a lot of different ideas, internally. It generates them by randomly tweaking its prior ideas, and by combining its previous ideas. It also has a good trick that evolution doesn't have: it can explicitly generalize its previous ideas to form new, more abstract ones, that then get thrown back into the ever-evolving idea pool. Most of the ideas emerging from this pool are rubbish. Eventually it finds some interesting ones and refines them.
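A minimal sketch of that generate/mutate/combine/generalize loop (purely illustrative -- the bit-string representation and the fixed "target idea" fitness are my own assumptions, nothing like a real cognitive architecture):

```python
import random

random.seed(7)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]                # stand-in for "a good idea"

def score(idea):                                  # how close an idea is to good
    return sum(a == b for a, b in zip(idea, TARGET))

def mutate(idea):                                 # randomly tweak a prior idea
    i = random.randrange(len(idea))
    return idea[:i] + [1 - idea[i]] + idea[i + 1:]

def combine(a, b):                                # recombine two prior ideas
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def generalize(pool):                             # abstract over the best ideas
    best = sorted(pool, key=score, reverse=True)[:3]
    return [int(sum(bits) >= 2) for bits in zip(*best)]   # majority vote

pool = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
init_best = max(score(i) for i in pool)

for _ in range(50):
    pool.append(mutate(random.choice(pool)))
    pool.append(combine(random.choice(pool), random.choice(pool)))
    pool.append(generalize(pool))
    pool = sorted(pool, key=score, reverse=True)[:10]   # most ideas are rubbish

best = max(score(i) for i in pool)
print(init_best, "->", best)
```

Because selection always retains the best ideas, the pool's best score never regresses -- the "interesting ones" get found and refined, as in the evolutionary analogy.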

What's complicated is doing all this re-combination, mutation and generalization of ideas judiciously, given limited computing resources and current computer architectures, which were built for other sorts of things. This requires somewhat complex cognitive architectures, which take a while to implement and refine... which leads us back to linking together appropriate complex computational processes so as to carry out these processes effectively on current hardware, as we're trying to do with OpenCog...

A: Blah, blah, blah...

B: OK, OK…. I guess this argument has gone on long enough. I admit neither of us has a 100% solid argument in favor of our position -- but that's because anything regarding the mind eventually impinges on philosophy, and there are no 100% solid arguments in philosophy of any kind. Philosophical issues can get obsoleted (not many folks bother arguing about how many angels can dance on the head of a pin anymore), but they never get convincingly resolved….

But do you admit, at least, the matter isn't as clear as you thought initially? That the inability of computers to be truly creative isn't so entirely obvious as you were saying at first?

A: Yes, I understand now, that it's possible that computers can achieve creativity via the influx of radical novelty into their RAM-states via psychic projection from the Matrix Overlords.

B: Ah, good, good, glad we cleared that up.

Uh, you do understand that was just a rhetorical illustration, right?

A: A rhetorical illustration, eh? I know you are but what am I !!!

Tuesday, October 16, 2012

Reports of Reincarnation: What's Really Going On?

For most of my life I considered belief in reincarnation completely ridiculous, and an obvious example of wishful thinking.   These people just don't want to face the reality of their impending doom, I figured, so they latch onto crazy stories about life after death and their souls moving on to occupy other bodies.  Yeah, right.

But the more I read about the topic, the more this attitude of facile dismissal started to grate on me.   I encourage anyone interested in understanding this aspect of the universe to read Ian Stevenson's book Children Who Remember Previous Lives: A Question of Reincarnation ... and the Wikipedia page on Reincarnation Research gives some other useful references too. 

This research definitely doesn't prove that reincarnation exists, in any of the classic definitions of reincarnation.   Yet, it's also very difficult to dismiss as simple fraud or self-deception.  If you really read the evidence carefully, you come to the inescapable conclusion -- Something Strange Is Going On Here.

A similar situation is noted in Robert McLuhan's book Randi's Prize, which recounts many examples of strange phenomena occurring during seances in previous centuries.   When one really studies the historical record regarding these phenomena, explanations in terms of hoaxes and self-deception become difficult to maintain.  One comes to the conclusion: something strange was happening there, though due to its peculiar nature it's hard to study with repeatable experiments.  Yeah, there was lots of fraud and self-deception; yet when you finally consider all the evidence carefully, it's not really plausible that these account for all of it.   It seems hard to account for the evidence without positing some kind of psi phenomena -- though exactly what kind remains unclear.

My general thoughts on psi are given on this page. I'm not going to try to convince skeptics to believe in psi, reincarnation or anything, in the space of this short blog post.  If you're truly skeptical about psi, but open-minded, I encourage you to read the references given on that page, and Stevenson's book on reincarnation as well.

What I'm musing about is: Supposing some of the evidence gathered by Stevenson and others about reincarnation is real (as seems likely to me) -- then, what the heck is going on in this universe?

The classic religious stories of reincarnation don't make much sense.   Many folks have poked many holes in them; it's extremely easy to do.

Yet, there does seem to be evidence that knowledge about one person's life, often a recently dead one, can somehow leak into the mind of some other human, often a very young one.   

One can imagine a lot of different psi phenomena that could lead to this.  My own speculation is that the phenomenon has something to do with the primacy of pattern over time.  

I realize that, with this sort of speculation, I'll utterly lose anyone who takes a reductionist, naive-realist type view of the world -- a view in which ordinary physical reality is primary and mental, subjective reality is an epiphenomenon.  But, so it goes...

The linear flow of time, as we perceive it subjectively in ordinary human states of mind and as we study it in physics, is not necessarily a fundamental property of the universe.  It may be "just one way that information self-organizes".   If one views the universe as a sort of pool of forms, patterns, feelings etc., with the conventional linear time axis being just one form of organization among many that emerges in this pool, then reincarnation-type phenomena seem a lot less strange.

Suppose, for instance, that the two brains involved in a reincarnation-type phenomenon have some sort of similarity to them, in their pattern of organization or dynamics.  Could this similarity set up some kind of "resonance" between these brain/minds, acting in the broader pattern-space of the universe but outside the particular pattern that is the linear order of time?   This possibility fits in well with Sheldrake's ideas about morphogenetic fields, which I've tried to tie in with some proposed modifications to quantum mechanics.

Sorry to disappoint, but I have no grand conclusion on these matters.  My current way of thinking is:
  • There is some valid, strange (to our normal world-view) phenomenon going on, underlying some of the known examples of reincarnation-like phenomena
  • The classic religious stories about reincarnation are almost surely not correct
  • Maybe, maybe, maybe some morphogenetic field type explanation could help explain what's really going on

I look forward to the science --- or trans-science --- of the future, which I suspect will be able to explain reincarnation-type phenomena and other paranormalities to us, using entirely new concepts, or concepts like morphogenetic fields that currently exist only in very blurry form....

Monday, August 27, 2012

Finding the "Right" Computational Model to Support Occam's Razor

(This is a rather abstract science-y post, beware....  If you don't like theoretical computer science, you'll be bored and/or baffled....  But if you do -- or if you like being bored and/or baffled, read on!  These are some technical speculations that I believe may have interesting conceptual/philosophical implications....)

One question I've been thinking about a lot lately is: What is the "right" computational model?

Occam's Razor is a powerful tool in AI, both in machine learning / data mining applications, and in a more ambitious AGI context.   But the formulation of Occam's Razor, in a practical computer science context, contains a certain arbitrariness.

Occam's Razor says to prefer the simpler hypothesis; or more subtly, to prefer hypotheses, in some sense, proportionally to their simplicity.

But what's "simple"?

An old friend of mine, Mariela Ivanova, once said: "Simplicity is so complex, I couldn't understand it."

Sometimes folks define "simplicity of X" as "length of the shortest expression of X in a specified programming language P" -- or something like that.   But of course this is too crude, so there are variations, such as "runtime of the program giving the shortest expression to X in language P"; or various weighted combinations of length and runtime as Juergen Schmidhuber and colleagues have explored in their work on "frontier search."

One can also look at a "total computational cost" simplicity measure such as "the minimum amount of space and time resources needed to compute X" on a specific machine M.

The program-length simplicity metric is related to the Solomonoff-Levin measure on program space, which assigns an object a prior probability that falls off exponentially with the length of the shortest program computing it.  Variant simplicity metrics lead to other measures on program space.
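Here's a toy version of such a prior (my own sketch; the hypothesis strings, and the use of raw character count as a crude stand-in for "program length in language P," are assumptions for illustration only): weight each hypothesis by 2^-L, where L is its description length, then normalize.

```python
# Hypothetical hypotheses expressed in some fixed language P; character
# count serves as a crude proxy for shortest-program length in P.
hypotheses = ["x+1", "x*x+3*x-7", "sin(x)+x**9"]

weights = {h: 2.0 ** -len(h) for h in hypotheses}
total = sum(weights.values())
prior = {h: w / total for h, w in weights.items()}

for h in sorted(prior, key=prior.get, reverse=True):
    print(f"{h:>12}  prior = {prior[h]:.4f}")
# The shortest description receives by far the largest prior --
# Occam's Razor, relative to the chosen language P.
```

Note how completely the result depends on P: re-encode the hypotheses in a different language and the ranking can change, which is exactly the arbitrariness discussed below.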

All this is very nice and elegant, to write about and to prove theorems about.  But it all depends on a seemingly arbitrary assumption: what is the programming language, or machine, involved?

There's some nice old theory -- the invariance theorem -- saying that, in a sense, for the case of "shortest program length," this choice doesn't matter.  Any universal Turing machine can simulate any other one, at the cost of only a constant overhead in program length.

But in practice, this constant may be very large, and it sometimes does matter.

And for other simplicity measures, such as "total computational cost", no comparable theories are known.

To complete the story of the computer science basis for Occam's Razor, it seems what we need is a theory of what is the "right" programming language or machine; what is the "right" computational model.

Of course, this comes down to deciding what "right" means.

What I'd like to find are some nice theorems of the form: IF you want a programming-language/computational-model satisfying simple, elegant criteria A-B-C, THEN you must use a programming-language/computational model essentially equivalent to P*.

If we had a good theory of this nature, then the choice of which programming-language/computational-model to use, would come down to choosing criteria A-B-C.

I looked around a bit and couldn't find any really great theoretical results of this nature.   But it's possible they exist and I just didn't figure out the right terms to search for.

My intuition is that: one can formulate some simple, elegant criteria that mathematically imply the right computational model is some sort of parallel graph rewriting.

In other words, I suspect there will be some simple, elegant criteria that are equivalent to the use of something vaguely like core Haskell.

What kind of criteria am I thinking of?  I know I'm getting a little fuzzy at this point, but I'm thinking about stuff like: "Computing W and computing f(W) should take the same amount of space and time resources," for cases where it seems intuitively obvious that W and f(W) should take the same amount of space and time resources.

Might it be possible, by imposing a few natural criteria of this nature, to derive results implying that the only way to fulfill the criteria is to use some sort of simple parallel graph rewriting based computational model?

I strongly suspect this is possible, but I've been thinking about this a while, and (a familiar story for me lately!) haven't found time to spend the needed hours/days/weeks to try to work out any details.

There is a math literature on abstract rewrite systems, as well as on the practical use of parallel graph rewriting systems to do computation.   These may provide raw material to be used to prove theorems of the sort I'm alluding to.  Or maybe some theorems equivalent to what I'm dreaming about exist somewhere, and I just didn't find them yet!

So, let's suppose someone proves theorems of the nature I'm suggesting.  What would that get us?

It would tell us, in an elegant and intuitively compelling way, what computational model to assume when applying Occam's Razor.

Of course, someone could always say "That math is nice, but I see no reason to accept your criteria.  Sure they're simple and elegant, but I happen to prefer a computational model fulfilling different criteria."   That's inevitable.

But: A community sharing a common intuition regarding which mathematical criteria are elegant and simple, would have a strong mutual intuitive reason to assume a common computational model, and thus to apply the Occam's Razor heuristic in the same way.

Sunday, July 29, 2012


A brief fictional dialogue to brighten up your July ....

So, I understand that, in a quest to get new insights into how to guide the Singularity in a positive direction for everyone, you took a massive dose of acid and queried the universe for inspiration?

Yes, that's right.  A massive dose of liquid acid, to the tune of Scriabin's "Prometheus."  On my friend's new psychedelic seastead, well out into international waters.  He brews the stuff there and you take it out there -- beyond the jurisdiction of any government, so it's all totally legal.


Wow indeed.  Of course, there's always the risk of some pirates or military gunboats popping up to ruin the party.  But nothing like that has happened so far.

Well that's good.  Enjoy it while it lasts, I guess.


So -- what insights did the universe have for you?  Anything you can share with us, back here on the boring old plane of everyday reality?

Heh…  Well, in a sense I guess.  But, you may not like the message….

Stop toying with me, O Great One!   Come on, shoot…

OK, well… so, we tried to focus the trip on the Singularity, right?  We were trying to use that state of mind to get insight into Friendly AGI, paths to Singularity, and so forth….

But you kept getting distracted by the dolphins doing flips on the wave next-door?

No… there were no distractions.  We were pretty far outside the physical realm of being, right up close to the pulsing heart of being.   But the overwhelming message we got from the pulsing heart of being was simply: "YES, THE UNIVERSE IS CONTINUING" ... i.e. "YES, THE UNIVERSE IS CONTINUING TO REVOLUTIONIZE ITSELF, THAT'S WHAT IT DOES" ... i.e. "YES, THE UNIVERSE IS CONTINUING TO REVOLUTIONIZE ITSELF, THAT'S WHAT IT DOES, IF YOU LOOK AT IT FROM THE VIEW OF A LINEAR FLOW OF TIME" ....

In other words, what we're thinking of as "Singularity" is just yet another manifestation of the under/over-lying "enlightenment" that is already immanent in the universe, and that was manifested in other big changes like the cooling-down of the initial miasma into solid matter, and the emergence of life, and of intelligence, and of language, and of machinery and technology, etc. ....  The process of re-valuing all its values and decomposing and recomposing all its forms is what the universe is all about -- and if you view it from something closer to the universe's perspective, without the limitation imposed by the "linear flow of time" mindset, then it doesn't really seem like development or progress, it looks more like a web of inter-creating process....

So, we were looking for insight into how to maximize the continuity of human consciousness as Singularity approaches and unfolds, so that we can experience Singularity while having the sense of remaining ourselves, and so that the superhuman uber-bots can emerge and roam the galaxy and invent amazing new stuff without killing cute little human babies and puppies in the process -- but what we got was a big dose of "human life and death are not particularly more or less significant than all the other forms and patterns in the Cosmos, and regardless of humanity, the profound wider and deeper intelligence of the universe goes on and keeps on unfolding and doing stuff that will look to human-like minds like creation and destruction."

I see.  So the message the universe gave you is: sure, the Singularity is going to happen, like lots of other Singularities have happened, in this Singular universe … this Singular Universal mind.  But as for the particularities of you mere humans, who really gives a crap?  

Sort of like that -- but without any negative emotional tone.  Humans are here now, within certain subjectively perceived temporal moments and spatial regions.  Other temporal moments and spatial regions don't contain humans.  That's just the way it is.  The universe is rich in various forms and patterns, and humans are part of that, but just a teeny tiny part.  Crap is part of that too! .... The view of forms and patterns as unfolding and developing is just one perspective on the whole web of patterns, which seems interesting from a human view, and less so to other sorts of minds or mindplexes or being-webs or whatever….

There was also a strong sense of the presence of other, individuated non-human minds out there -- faintly amused by us humans' panicked worry about the continuation of various of our pet patterns through our funny little quasi-illusory historical time axis....  These other minds were a little more aware of their own relationship to the Cosmos, and their own role in regards to the overall web of process....

It's not that individuated minds like ours are unnecessary or irrelevant -- we are part of the overall process; patterns emerge in us and emerge from the arrangement of us and other things ... without individuated minds like ours, various other more abstract and broad patterns wouldn't be able to exist ... but the particularities of our minds and beings don't really "deserve" the profound significance that we are habituated to attach to them...

Yes, I get it.  The message is quite clear, at any rate.

As to whether I like it ... I guess that's not a question with a well-defined answer.  It's not the sort of thing you can like or dislike, really.

But, hmmm ... so for you, was it a life-changing experience?  Are you going to give up the search for Friendly AGI, and submit yourself to the cosmic will of the universe as it unfolds?

Heh, no, not really … now that I'm back here in the ordinary dimension, my enthusiasm is undimmed for my quest to create friendly AGI and a positive Singularity for all humans and animals.  But sure, the trip did sorta fill my head with a deeper sense of the limited scope of this quest.

At the moment I look at the quest for a human-positive Singularity more like I look at the quest to build a really nice habitat for my pet bunnies, so they can live in the back yard and enjoy the fresh air and grass without getting eaten by cats or flooded by the rain.  It's important in a sense -- I love the bunnies and want them to be happy and live.  But if it doesn't work and a cat gets in and kills them, well, so it goes ... life goes on...

Sunday, April 15, 2012

Someone Should Build a Psychedelic Resort/Lab Seastead

While taking the train from Hong Kong to Shenzhen last night, I started chatting with Ruiting about seasteading, and before long I came up with what may possibly be the wackiest workable business model ever: a seastead focused on creating and experimenting with psychedelics, with a dual business model of psychedelic tourism, and patenting of newly discovered psychedelic-related psychotherapeutics.

I'm too busy trying to beat the Hong Kong stock market, create AGI and understand human aging to actually build such a seastead, so I'm hoping that one of you readers will take up the idea -- and then invite me to build a cabana on the outskirts of the psychedelic sea village ;-) ...

A bit of a prelude...

I've been chatting online recently with various folks about relatively inexpensive ways to make seasteads -- offshore living/working facilities, in international waters, beyond the rules of any national government.

For instance, Steve Rolland pointed out to me that there are many places in the world where, just a few dozen miles offshore, the ocean is only 20-30 feet deep.   In a place like that, it wouldn't be such a big trick to put some platforms on the ocean floor and build atop them.   I started thinking about the potential for concrete monolithic domes in this sort of setting, and found some cool musings online about floating concrete spheres.  Then I found that Shaun Waterford has a fully fleshed out design for a fully undersea concrete dome home, which he would like to build as part of an undersea tourist attraction for divers, and use to beat the world record for number of consecutive days spent undersea.

So on the train from Hong Kong to Shenzhen last night, I was musing on the following question: Barring the advent of some suitably-enthusiastic rich person, how might one get $$ to build such a seastead?  What might be a reasonable business model corresponding to such an endeavor?

The idea of a novelty dive park, or a mid-sea resort, makes lots of sense.  Yet it's a lot cheaper to do that stuff right offshore, and it's not clear how much benefit one gets from putting that sort of thing further out in the ocean.   So maybe an underwater dome as part of a dive park is a good idea, but not necessarily as part of a seasteading venture.

The idea of doing out-there medical and biological research on a seastead, away from the laws of any nation, seems sort of appealing.  Yet the cost of doing research mid-sea instead of on land seems potentially high -- and again, for almost any weird research you want to do, there's probably some country that will allow it....

So I scratched my head for a while... and then inspiration hit!

A Psychedelic Resort/Lab Seastead

OK, so imagine this:

  • An offshore village of concrete dome homes, on platforms interconnected by walkways, a dozen miles off the coast of Mexico (where the ocean's only 20-30 feet deep) ...
  • Some of the domes are private residences, some are cabanas for visitors; some are labs for brewing psychedelics like LSD and DMT, some are mushroom farms; a couple are psychopharmacology research labs; one holds some sensory deprivation tanks
  • No psychedelics are sold for use outside the village (to avoid conflict with governments of conventional land-bound countries)
  • The first-phase business model is psychedelic tourism: Folks will pay to come hang out in the resort, soak up the sun, swim in the beautiful ocean, and take the locally-created psychedelics in a safe & lovely environment.  This "psychedelic tourism" will generate enough revenue to keep the village operating
  • The second-phase business model is patenting of novel psychedelic psychotherapies -- ones that have been found in the village's research labs to have therapeutic value.   Note that research on psychedelics has basically halted worldwide, due to legal issues.  So there is a huge amount of research into psychedelic-related psychotherapeutic substances that is begging to be done but remains undone for legal reasons.   Getting the therapies ensuing from this research properly tested and approved for use in major nations will take some time, but once the approval comes, this could be a multibillion dollar moneymaker, as well as a beautiful thing for humanity.

Beautiful, right?   Clearly this would be for the good of the world!  It sounds incredibly wacky, yet the business model actually makes sense.  And different countries have all sorts of different drug laws, so I don't think any of the conventional nations is really going to worry too much about a few freaks out in the ocean brewing psychedelics for consumption on their own premises.

The only catch I can think of is, piracy might be an issue -- so you'd probably need a few thugs in gunboats out there alongside all the psychedelic freaks and psychopharmacologists...

How much would it cost?  Based on a bit of preliminary investigation, I'd roughly estimate the cost of putting a 750 square foot dome home on a platform in shallow ocean water, at roughly US$500K (assuming many are being built at once).   So for an initial village of, say, 30 domes, we'd be looking at US$15M total.   That's a lot more money than I currently have, yet I also know a number of individuals who could spare that amount without missing it at all.
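The back-of-the-envelope math above can be sketched as a quick calculation.  (The $500K-per-dome figure and the 30-dome village size are just my rough guesses from above, not real construction quotes.)

```python
# Rough cost sketch for the hypothetical seastead village.
# All figures are the blog post's guesses, not real quotes.
cost_per_dome_usd = 500_000   # ~750 sq ft dome home on a shallow-water platform
num_domes = 30                # initial village size

total_cost_usd = cost_per_dome_usd * num_domes
print(f"Estimated village cost: ${total_cost_usd / 1e6:.0f}M")  # → Estimated village cost: $15M
```

Obviously the per-dome cost dominates here, so even a 2x error in that guess swings the total between roughly $8M and $30M.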

And, hey -- if nobody actually does it, maybe I'll use it as a premise for a novel one day, if I ever get time for fiction writing again!

Wednesday, March 21, 2012

More on Kurzweil's Predictions

After I wrote my last blog post reacting to Alex Knapp's critique of Ray Kurzweil's predictive accuracy, Ray Kurzweil wrote his own rebuttal of Alex's argument.

Ray then emailed me, thanking me for my defense of his predictions, but questioning my criticism of his penchant for focusing on precise predictions about future technology. I'm copying my reply to Ray here, as it may be of general interest...

Hi Ray,

I wrote that blog post in a hurry and in hindsight wish I had framed things more carefully there....  But of course, it was just a personal blog post not a journalistic article, and in that context a bit of sloppiness is OK I guess...

Whether YOU should emphasize precise predictions less is a complex question, and I don't have a clear idea about that.  As a maverick myself, I don't like telling others what to do!  You're passionate about predictions and pretty good at making them, so maybe making predictions is what you should do ;-) ....  And you've been wonderfully successful at publicizing the Singularity idea, so obviously there's something major that's right about your approach, in terms of appealing to the mass human psyche.

 I do have a clear feeling that the making of temporally precise predictions should play a smaller role in discussion of the Singularity than it now does.   But this outcome might be better achieved via the emergence of additional, vocal Singularity pundits alongside you, with approaches complementing your prediction-based approach -- rather than via you toning down your emphasis on precise prediction, which after all is what comes naturally to you...

One thing that worries me about your precise predictions is that in some cases they  may serve to slow progress down.  For example, you predict human-level AGI around 2029 -- and to the extent that your views are influential, this may dissuade investors from funding AGI projects now ... because it seems too far away!  Whereas if potential AGI investors more fully embraced the uncertainty in the timeline to human-level AGI, they might be more eager for current investment.

Thinking more about the nature of your predictions ... one thing that these discussions of your predictive accuracy highlight is that the assessment of partial fulfillment of a prediction is extremely qualitative.  For instance, consider a prediction like “The majority of text is created using continuous speech recognition.”   You rate this as partially correct, because of voice recognition on smartphones.  Alex Knapp rates this as "not even close."   But really -- what percentage of text do you think is created using continuous speech recognition, right now?  If we count on a per-character basis, I'm sure it's well below 1%.  So on a mathematical basis, it's hard to justify "1%" as a partially correct estimate of ">50%".   Yet in some sense, your prediction *is* qualitatively partially correct.  If the prediction had been "Significant subsets of text production will be conducted using continuous speech recognition", then the prediction would have to be judged valid or almost valid.

One problem with counting partial fulfillment of predictions, and not specifying the criteria for partial fulfillment, is that assessment of predictive accuracy then becomes very theory-dependent.  Your assessment of your accuracy is driven by your theoretical view, and Alex Knapp's is driven by his own theoretical view. 

Another problem with partial fulfillment is that the criteria for it are usually determined *after the fact*.   To the extent that one is attempting scientific prediction rather than qualitative, evocative prediction, it would be better to rigorously specify the criteria for partial fulfillment, at least to some degree, in advance, along with the predictions.

So all in all, if one allows partial fulfillment, then precise predictions become not much different from highly imprecise, explicitly hand-wavy predictions.   Once one allows partial matching via criteria defined subjectively on the fly, “The majority of text will be created using continuous speech recognition in 2009” becomes not that different from just saying something qualitative like "In the next decade or so, continuous speech recognition will become a lot more prevalent."  So precise predictions with undefined partial matching, are basically just a precise-looking way of making rough qualitative predictions ;)

If one wishes to avoid this problem, my suggestion is to explicitly supply more precise criteria for partial fulfillment along with each prediction.  Of course this shouldn't be done in the body of a book, because it would make the book boring.  But it could be offered in endnotes or online supplementary material.  Obviously this would not eliminate the theory-dependence of partial fulfillment assessment -- but it might diminish it considerably.

For example, the prediction “The majority of text is created using continuous speech recognition” could have been accompanied by information such as "I will consider this prediction strongly partially validated if, for example, more than 25% of the text produced in some population comprising more than 25% of people is produced by continuous speech recognition; or if more than 25% of text in some socially important text production domain is produced by continuous speech recognition."   This would make assessment of the prediction's partial match to current reality a lot easier.
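One way to make such criteria concrete is to encode them as an explicit predicate that can be evaluated against measured data once the target date arrives.  A minimal sketch, using the illustrative 25% thresholds from above (the function name and data layout are my own invention, purely for illustration):

```python
def strongly_partially_validated(populations, domains):
    """Evaluate the example partial-fulfillment criteria for the
    speech-recognition prediction discussed above.

    populations: list of (population_share, asr_text_share) pairs --
                 what fraction of people the group comprises, and what
                 fraction of their text is produced by continuous
                 speech recognition.
    domains:     list of (is_socially_important, asr_text_share) pairs
                 for text-production domains.
    All shares are fractions in [0, 1]; the 0.25 thresholds are the
    illustrative 25% figures, not empirically justified values.
    """
    population_criterion = any(
        pop_share > 0.25 and asr_share > 0.25
        for pop_share, asr_share in populations)
    domain_criterion = any(
        important and asr_share > 0.25
        for important, asr_share in domains)
    return population_criterion or domain_criterion

# Example: smartphone users might be 60% of some population, but if only
# ~5% of their text comes via speech recognition, and no important domain
# exceeds 25%, the prediction is not even partially validated.
print(strongly_partially_validated([(0.6, 0.05)], [(True, 0.10)]))  # → False
```

The point isn't the particular numbers; it's that pre-registering a predicate like this would turn the later accuracy debate from a clash of theoretical viewpoints into a data-gathering exercise.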

I'm very clear on the value of qualitative predictions like "In the next decade or so, continuous speech recognition will become a lot more prevalent."  I'm much less clear on the value of trying to make predictions more precisely than this.   But maybe most of your readers actually, implicitly interpret your precise predictions as qualitative predictions... in which case the precise/qualitative distinction is largely stylistic rather than substantive.


Interesting stuff to think about ;)

Tuesday, March 20, 2012

Ray Kurzweil's (Sometimes) Wrong Predictions

Note: there was a followup blog post to this one, presenting some complementary views that I also hold, and linking to some more recent comments by Ray Kurzweil on the  matter.

Forbes blogger Alex Knapp, who often covers advanced technology and futurist topics, recently wrote a post titled Ray Kurzweil's Predictions for 2009 Were Mostly Inaccurate ...

Some of Knapp's posts are annoyingly opinionated and closed-minded, but this one was well-put together, and I made a lengthy comment there, which I repeat here.  You should read his post first to get the context...

And also, once you read his post, you might want to read Ray's rebuttal to Michael Anissimov's earlier critique of his predictions. 

Ray rates himself as 90% right out of 100+ predictions; Michael looks at only a handful of Ray's predictions and finds most of them unfulfilled.

Looking at the "90% right" that Ray claims, it seems to me about half of these are strong wins, and the other half are places where the technologies Ray has forecast DO now exist, but aren't as good or as prevalent as he had envisioned.

On the other hand, Alex Knapp in Forbes took Ray's top 10 predictions rather than the full 100+, and found a lower accuracy for these.

An excerpt from my comment to Alex's post on the Forbes site (with light edits) is:



One thing that should be clarified for the general readership is that the vast majority of those of us in the "Singularitarian" community do not, and never did, buy into all of Ray Kurzweil's temporally-specific predictions.  We love Ray dearly and respect him immensely -- and I think the world owes Ray a great debt for all he's done, not only as an inventor, but to bring the world's attention to the Singularity and related themes.  However, nearly all of us who believe a technological Singularity is a likely event this century, prefer to shy away from the extreme specificity of Ray's predictions.

Predicting a Singularity in 2045 makes headlines, and is evocative.  Predicting exactly which technologies will succeed by 2009 or 2019 makes headlines, and is evocative.  But most Singularitarians understand that predictions with this level of specificity aren't plausible to make.

The main problem with specific technology forecasts, is highlighted by thinking about multiple kinds of predictions one could make in reference to any technology X:

1) How long would it take to develop X if a number of moderately large, well-organized, well-funded teams of really smart people were working on it continuously?

2) How long would it take to develop X if a large, well-funded, bloated, inefficient government or corporate bureaucracy were working on it continuously?

3) How long would it take to develop X if there were almost no $$ put into the development of X, so X had to be developed by ragtag groups of mavericks working largely in their spare time?

4) How long would it take to develop X if a handful of well-run but closed-minded large companies dominated the X industry with moderately-functional tools, making it nearly impossible to get funding for alternate, radical approaches to X with more medium-term potential?

When thinking about the future of a technology one loves or wants, it's easy to fall into making predictions based on Case 1.  But in reality what we often have is Case 2 or 3 or 4.

Predicting the future of a technology is not just about what is "on the horizon" in terms of science and technology, but also about how society will "choose" to handle that technology.   That's what's hard to predict.

For example a lot of Ray's failed top predictions had to do with speech technology.  As that is pretty close to my own research area, I can say pretty confidently that we COULD have had great text-to-speech technology by now.  But instead we've had Case 4 above -- a few large companies have dominated the market with mediocre HMM-based text-to-speech systems.  These work well enough that it's hard to make something better, using a deeper and ultimately more promising approach, without a couple years' effort by a dedicated team of professionals.  But nobody wants to fund that couple years' effort commercially, because the competition from HMM-based systems seems too steep.  And it's not the kind of work that is effectively done in universities, as it requires a combination of engineering and research.

Medical research, unfortunately, is Case 2.  Pharma firms are commonly bloated and inefficient and shut off to new ideas, partly because of their co-dependent relationship with the FDA.  Radical new approaches to medicine have terrible trouble getting funded lately.  You can't get VC $$ for a new therapeutic approach until you've shown it to work in mouse trials or preferably human trials -- so how do you get the $$ to fund the research leading up to those trials?

Artificial General Intelligence, my main research area, is of course Case 3.  There's essentially no direct funding for AGI on the planet, so we need to get AGI research done via getting funding for other sorts of projects and cleverly working AGI into these projects....  A massive efficiency drain!!

If speech-to-text, longevity therapy or AGI had been worked on in the last 10 years with the efficiency that Apple put into building the iPad, or Google put into building its search and ad engines, then we'd be a heck of a lot further advanced on all three.

Ray's predictive methodology tries to incorporate all these social and funding related factors into its extrapolations, but ultimately that's too hard to do, because the time series being extrapolated aren't that long and depend on so many factors.

However, the failure of many of his specific predictions does not remotely imply he got the big picture wrong.  Lots of things have developed faster than he or anyone thought they would in 2009, just as some developed more slowly.

To my mind, the broad scope of exponential technological acceleration is very clear and obvious, and predicting the specifics is futile and unnecessary -- except, say, for marketing purposes, or for trying to assess the viability of a particular business in a particular area.

The nearness of the Singularity does not depend on whether text-to-speech matures in 2009 or 2019 -- nor on whether AGI or longevity pills emerge in 2020 or 2040.  

To me, as a 45 year old guy, it matters a lot personally whether the Singularity happens in 2025, 2045 or 2095.  But in the grand scope of human history, it may not matter at all....

The overall scope and trend of technology development is harder to capsulize in sound bites and blog posts than specific predictions -- hence we have phenomena like Ray's book with its overly specific predictions, and your acute blog post refuting them. 

Anyway, anyone who is reading this and not familiar with the issues involved, I encourage you to read Ray's book The Singularity is Near -- and also Damien Broderick's book The Spike.

Broderick's book made very similar points around a decade earlier -- but it didn't get famous.  Why?  Because "Spike" sounds less funky than "Singularity", because the time wasn't quite ripe then, and because Broderick restricted himself to pointing out the very clear general trends rather than trying and failing to make overly precise predictions!

Ben Goertzel

P.S. Regarding Ray's prediction that "The neo-Luddite movement is growing" -- I think that the influence of the Taliban possibly should push this into the "Prediction Met" or "Partially Met" category.  The prediction was wrong if restricted to the US, but scarily correct globally...

Sunday, March 11, 2012

Will Corporations Prevent the Singularity?

It occurred to me yesterday that the world possesses some very powerful intelligent organisms that are directly and clearly opposed to the Singularity -- corporations.

Human beings are confused and confusing creatures -- we don't have very clear goal systems, and are quite willing and able to adapt our top-level goals to the circumstances.  I have little doubt that most humans will go with the flow as Singularity approaches.

But corporations are a different matter.  Corporations are entities/organisms unto themselves these days, with wills and cognitive structures quite distinct from the people that comprise them.   Public corporations have much clearer goal systems than humans: To maximize shareholder value.

And rather clearly, a Singularity is not a good way to maximize shareholder value.  It introduces way too much uncertainty.  Abolishing money and scarcity is not a good route to maximizing shareholder value -- and nor is abolishing shareholders via uploading them into radical transhuman forms!

So one can expect corporations -- as emergent, self-organizing, coherent minds of their own -- to act against the emergence of a true Singularity, and act in favor of some kind of future in which money and shareholding still has meaning.

Sure, corporations may adapt to the changes as Singularity approaches.  But my point is that corporations may be inherently less pliant than individual humans, because their goals are more precisely defined and less nebulous.  The relative inflexibility of large corporations is certainly well known.

Charles Stross, in his wonderful novel Accelerando, presents an alternate view, in which corporations themselves become superintelligent self-modifying systems -- and leave Earth to populate space-based computer systems where they communicate using sophisticated forms of auctioning.   This is not wholly implausible.   Yet my own intuition is that notions of money and economic exchange will become less relevant as intelligence exceeds the human level.  I suspect the importance of money and economic exchange is an artifact of the current domain of relative material scarcity in which we find ourselves, and that once advanced technology (nanotech, femtotech, etc.) radically diminishes material scarcity, the importance of economic thinking will drastically decrease.  So that far from becoming dominant as in Accelerando, corporations will become increasingly irrelevant post-Singularity.  But if they are smart enough to foresee this, they will probably try to prevent it.

Ultimately corporations are composed of people (until AGI advances a lot more at any rate), so maybe this issue will be resolved as Singularity comes nearer, by people choosing to abandon corporations in favor of other structures guided by their ever-changing value systems.   But one can be sure that corporations will fight to stop this from happening.

One might expect large corporations to push hard for some variety of "AI Nanny" type scenario, in which truly radical change would be forestalled and their own existence preserved, as part of the AI Nanny's global bureaucratic infrastructure.  M&A with the AI Nanny may be seen as preferable to the utter uncertainty of Singularity.

The details are hard to foresee, but the interplay between individuals and corporations as Singularity approaches should be fascinating to watch.

Are Prediction and Reward Relevant to Superintelligences?

In response to some conversation on an AGI mailing list today, I started musing about the relationship between prediction, reward and intelligence.

Obviously, in everyday human and animal life, there's a fairly close relationship between prediction, reward and intelligence.  Many intelligent acts boil down to predicting the future; and smarter people tend to be better at prediction.  And much of life is about seeking rewards of one kind or another.  To the extent that intelligence is about choosing actions that are likely to achieve one's goals given one's current context, prediction and reward are extremely useful for intelligence.

But some mathematics-based interpretations of "intelligence" extend the relation between intelligence and prediction/reward far beyond  human and animal life.  This is something that I question.

Solomonoff induction is a mathematical theory of agents that predict the future of a computational system at least as well as any other possible computational agents.  Hutter's "Universal AI" theory is a mathematical theory of agents that achieve (computably predictable) reward at least as well as any other possible computational agents acting in a computable environment.   Shane Legg and Marcus Hutter have defined intelligence in these terms, essentially positing intelligence as generality of predictive power, or degree of approximation to the optimally predictive computational reward-seeking agent AIXI.   I have done some work in this direction as well, modifying Legg and Hutter's definition into something more realistic -- conceiving intelligence as (roughly speaking) the degree to which a system can be modeled as efficiently using its resources to help it achieve computably predictable rewards across some relevant probability distribution of computable environments.  Indeed, way back in 1993 before knowing about Marcus Hutter, I posited something similar to his approach to intelligence as part of my first book The Structure of Intelligence (though with much less mathematical rigor).
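For readers who like to see the idea in symbols: Legg and Hutter's universal intelligence measure is, roughly, a complexity-weighted sum of expected rewards over all computable environments.  This is my paraphrase of their definition, with simplified notation, not a quote from their paper:

```latex
% Universal intelligence of an agent \pi (after Legg & Hutter):
% E is the class of computable environments, K(\mu) is the Kolmogorov
% complexity of environment \mu, and V_\mu^\pi is the expected
% cumulative reward agent \pi achieves in \mu.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

Simpler environments (those with low Kolmogorov complexity) get exponentially more weight, so an agent scores highly by achieving reward across a broad range of environments, with simple ones counting most -- which is exactly the "generality of reward-seeking power" framing that the rest of this post questions for superintelligences.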

I think this general line of thinking about intelligence is useful, to an extent.  But I shrink back a bit from taking it as a general foundational understanding of intelligence.

It is becoming more and more common, in parts of the AGI community, to interpret these mathematical theories as positing that general intelligence, far above the human level, is well characterized in terms of prediction capability and reward maximization.  But this isn't very clear to me (which is the main point of this blog post).  To me this seems rather presumptuous regarding the nature of massively superhuman minds!

It may well be that, once one gets into domains of vastly greater than human intelligence, other concepts besides prediction and reward start to seem more relevant to intelligence, and prediction and reward start to seem less relevant.

Why might this be the case?

Regarding prediction: Consider the possibility that superintelligent minds might perceive time very differently than we do.  If superintelligent minds' experience goes beyond the sense of a linear flow of time, then maybe prediction becomes only semi-relevant to them.  Maybe other concepts we don't now know become more relevant.  So that thinking about superintelligent minds in terms of prediction may be a non-sequitur.

It's similarly quite unclear that it makes sense to model superintelligences in terms of reward.  One thinks about the "intelligent" ocean in Lem's Solaris.  Maybe a fixation on maximizing reward is an artifact of early-stage minds living in a primitive condition of material scarcity.

Matt Mahoney made the following relevant comment, regarding an earlier version of this post: "I can think of 3 existing examples of systems that already exceed the human brain in both knowledge and computing power: evolution, humanity, and the internet.  It does not seem to me that any of these can be modeled as reinforcement learners (except maybe evolution), or that their intelligence is related to prediction in any of them."

All these are  speculative thoughts, of course... but please bear in mind that the relation of Solomonoff induction and "Universal AI" to real-world general intelligence of any kind is also rather wildly speculative...  This stuff is beautiful math, but does it really have anything to do with real-world intelligence?  These theories have little to say about human intelligence, and they're not directly useful as foundations for building AGI systems (though, admittedly, a handful of scientists are working on "scaling them down" to make them realistic; so far this only works for very simple toy problems, and it's hard to see how to extend the approach broadly to yield anything near human-level AGI).  And it's not clear they will be applicable to future superintelligent minds either, as these minds may be best conceived using radically different concepts.

So by all means enjoy the nice math, but please take it with the appropriate fuzzy number of grains of salt ;-) ...

It's fun to think about various kinds of highly powerful hypothetical computational systems, and fun to speculate about the nature of incredibly smart superintelligences.  But fortunately it's not necessary to resolve these matters -- or even think about them much -- to design and build human-level AGI systems.