Can Computers Be Creative? -- A Dialogue on Creativity, Radical Novelty, AGI, Physics and the Brain
Over the years, I've repeatedly encountered people making arguments of the form: "Computers can't be creative in the same way that people can." Such arguments always boil down, eventually, to an assertion that human mind/brains have recourse to some sort of "radical novelty" going beyond the mere repermutation of inputs and initial state that computers are capable of.
This argument is there in Roger Penrose's "The Emperor's New Mind", in Kampis's "Self-Modifying Systems", and in all manner of other literature. It will surely be around until the first fully human-level AGIs have been created -- and will probably continue even after that, at least to some extent, since no issue verging on philosophy has ever been fully resolved!
The following dialogue, between two imaginary characters A and B, is my attempt to summarize the crux of the argument, in a way that's admittedly biased by my own peculiar species of pro-AGI perspective, but also attempts to incorporate my understanding of the AGI skeptic's point of view.
The dialogue was inspired in part by a recent thread on the AGI email list, in which perpetual AGI gadfly Mike Tintner played the role of AGI skeptic "A", and role "B" was played by pretty much everyone else on the list. But it's often hard to really get at the crux of an issue in the herky-jerky context of a mailing list discussion. I hope I've been able to do better here. I was also heavily inspired by conversations I had years previously, with my friends Margeret Heath and Cliff Joslyn, on the concept of "radical novelty" and what it might mean.
A: It's obvious AIs can never be creative and innovative in the same sense that people are. They're just programs, they just recombine their inputs in ways determined by their programming.
B: How can you say that, though? If you look at the transcript of a computer chess player's game, you'll see plenty of creative moves -- that is, moves you'd call creative if you saw them made by a human player. I wrote some computer music composition software that made up some really cool melodies. If I'd made them up myself, you'd call them creative.
A: OK, but a program is never going to make up a new game, or a new instrument, or a new genre of music.
B: How do you know? Anyway, once those things happen, you'll find some reason to classify *those* achievements as not creative. This is just a variant of the principle that "AI is whatever seems intelligent when people do it, but that computers can't yet do."
A: No, there's a fundamental difference between how computers are doing these things, and how people do these things. A person has to set up the situation for a computer to do these things. A person feeds the computer the input and configures the computer to have a certain goal. Whereas a human's creative activity is autonomous -- the human's not just a tool of some other being.
B: Ah, falling back on mystical notions of free will, are we? But think about it -- if you don't take care to feed a human child proper input, and set up their situation properly, and guide them toward a certain goal -- then they're not going to be playing chess or composing music. They're going to be a "wild child", capable only of hunting and foraging for food like a non-human animal. No one who can read this is independent of their cultural programming.
A: That's not a fair analogy. Computers need much more specialized preparation for each task they're given than people do.
B: Yes, that's true. Nobody has achieved human-level AGI yet. I believe we're on the path to get there, but we're not there yet. But I never claimed that computer programs are currently creative and innovative on the level of highly creative adult humans. Actually it's hard to compare. Current computer programs can create some things humans can't -- extremely complex circuit designs, music with 10000-voice polyphony, fractal art in 128 dimensions, and so forth -- but they also fall far short of humans in many areas. Your original statement wasn't merely "We don't yet have computers that are as creative and innovative as humans" -- that's obvious. Your statement was that computers intrinsically aren't creative and innovative in the same manner that humans are. And I don't think you've demonstrated that at all.
A: It's so obvious, it doesn't need demonstration. A computer will never do more than rearrange the elements that have been fed into it. Whereas, a human can come up with something fundamentally new -- a new element that neither it, nor anybody else, has ever heard of.
B: Ah, now I see what you're getting at -- the notion of "radical novelty." I've had this argument before!
A: Yes, radical novelty. The human mind is capable of radical novelty. That's the crux of our general intelligence, our creative innovations. And computers can't do it, because all they can do is rearrange their inputs and their programming -- they can't introduce anything new.
B: You do realize you're not the first one to think of this argument, right? It's been around a rather long time. I myself first encountered it in George Kampis's book "Self-Modifying Systems in Biology and Cognitive Science", which was published in the early 1990s. But of course the argument's been around since long before that. I'm sure someone who knew the history of philosophy better could trace it back far before the advent of computers. There are really two questions here. One is: Is there more to creativity than combination of pre-existing elements, plus the introduction of occasional randomness? The other is: If there is some additional, magic ingredient, can computers have it too?
A: What do you mean, "Is there more to creativity than combination of pre-existing elements, plus introduction of occasional randomness"? Of course there is; that's utterly obvious!
B: Is it? Think about it -- is evolution creative? Evolution created the human body, the human brain, the human eye, the snake's locomotion, the dolphin's sonar, the beautifully patterned wings of the Monarch butterfly. But what does evolution do? It combines previously known elements, it makes use of randomness, and it leverages the intrinsic creativity of the self-organizing processes in the physical world. Or are you going to plead Creationism here?
A: You admit evolution leverages the self-organizing processes of the physical world. The brain is also part of the physical world. A computer is different. The physical world has more creativity built into it.
B: You admit a computer is part of the physical world, right? It's not some kind of ghostly presence…
A: Yes, but it's a very limited part of the physical world, it doesn't display all the phenomena you can see in the real world.
B: A brain is a very limited part of the physical world too, of course. And so is the Earth. And insofar as we understand the laws of physics, every phenomenon that can occur in the physical world can be simulated in a computer.
A: Ah, but a simulation isn't the real thing! You can't cook your food with a simulation of fire!
B: This brings us rather far afield, I think. I'm sure you're aware of the argument made by Nick Bostrom and many others before him, that it's highly possible we ourselves live in some kind of simulation world. You saw "The Matrix" too, I assume. A simulation isn't always going to look like one to the creatures living inside it.
A: OK OK, I agree, that's a digression -- let's not go there now. Leave that for another day.
B: So do you agree that evolution is creative?
A: Yes, but I'm not sure your version of the evolutionary story is correct. I think there's some fundamental creativity in the dynamics of the physical world, which guides evolution and neural process, but isn't present in digital computers.
B: And what evidence do you have of this? You do realize that there is no support for this in any current theory of physics, right?
A: And you do realize that current fundamental physics is not complete, right? There is no unified theory including gravity and all the other forces. Nor can we, in practice, explain how brains work using the underlying known physics. We can't even, yet, derive the periodic table of the elements from physical principles, without setting a lot of parameters using chemistry-level know-how. Clearly we have a lot more to discover.
B: Sure, no doubt. But none of the concrete proposals out there for unifying physics would introduce this sort of radical creativity and novelty you're looking for. It's ironic to look at physicist Roger Penrose and his twistor theory, for example. Penrose agrees with you that nature includes some kind of radical creativity not encompassable by computers. Yet his own proposal for unifying physics, twistors, is quite discrete and computational in nature -- and his idea of some mystical, trans-computational theory of physics remains a vague speculation.
A: So you really think this whole universe we're in, is nothing but a giant computer, each step determined by the previous one, with maybe some random variations?
B: Like Bill Clinton said in his Lewinsky-scandal testimony: that depends on what the meaning of "is" is.
A: Sorry, you'll have to elaborate a bit. Clintonian metaphysics is outside my range of expertise…
B: I think that, from the perspective of science, there's no reason to choose a non-computational model of the observed data about the universe. This is inevitable, because the totality of all scientific data is just a giant but finite collection of finite-precision numbers. It's just one big, finite bit-set. So of course we can model this finite bit-set using computational tools. Now, there may be some other way of understanding this data too -- but there is no empirical, scientific way to validate the idea that the best way to model this finite bit-set is a non-computational model. If you choose to find a non-computational model of a finite set of bits simpler and preferable, I can't stop you from doing or saying that. What I can say, though, is that from the perspective of science, there's no reason to make that choice.
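To put the point in toy form -- the "data" below is of course made up, and the snippet is just an illustrative sketch, not a serious model of anything:

```python
# Any finite body of finite-precision observations is, in effect,
# a finite string of bits -- and a computational model of it always
# exists, because a program that simply stores and replays the data
# is itself a (degenerate) computational model.

observations = [3.14159, 2.71828, 1.41421]  # stand-in "scientific data"

def lookup_model(t):
    # A lookup table reproduces every observation exactly. Better models
    # compress the data; but no model needs to be non-computational
    # in order to fit a finite bit-set.
    return observations[t]

assert all(lookup_model(t) == x for t, x in enumerate(observations))
```

The interesting scientific question is always which computational model compresses the data best -- not whether some computational model exists, since trivially one always does.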
A: That seems like a limitation of science, rather than a limitation of the universe!
B: Maybe so. Maybe some future discipline, descending from our current notion of science, will encompass more possibilities. I've speculated that it may be necessary to expand the notion and practice of science to come to a good understanding of consciousness on the individual and group level. But that's another digression. My strong suspicion is that to build an advanced AGI, with intelligence and creativity at and then beyond the human level, the scientific understanding of the mind is good enough.
A: Hmmm…. You admit that science may not be good enough to fully understand consciousness, or to encompass non-computational models of our observations of intelligent systems. But then why do you think science is good enough to guide the construction of thinking machines?
B: I can't know this for sure. To some extent, in life, one is always guided by one's intuition. Just because I've seen the sun rise 1000 mornings in a row doesn't mean I know for sure it's going to rise the next day. As Hume argued long ago, the exercise of induction requires some intuitive judgment as to which hypothesis is simpler. To me, by far the simplest hypothesis about intelligence is that if we engineer mechanisms implementing basically the same sorts of functions that the human brain does, then we're going to get a system that's intelligent in basically the same sorts of ways that the brain is. And if there's some aspect of the human mind/brain that goes beyond what mechanism can explain -- well hey, there may well be some aspect of our engineered AGI mind/brain that also goes beyond what mechanism can explain. Both the brain and the computer are arrangements of matter in the same universe.
A: Heh…. I guess we digressed again, didn't we.
B: It seems that's how these arguments usually go. We started out with creativity and ended up with life, the universe and everything. So let's get back to radical novelty for a bit. I want to run through my thinking about that for you a little more carefully, OK?
A: Sure, go for it!
B: OK…. Consider, in the fashion of second-order cybernetics, that it's often most sensible to view a system S in the context of some observer O of that system.
A: Sure. Quantum mechanics would also support that sort of perspective.
B: Indeed -- but that's another digression! So let's go on...
My first point is: It's obvious that, in many cases, a system S can display radical novelty relative to an observer O. O may have devised some language L for describing the behaviors and internal states of S, and then S may do something which O finds is more easily describable using a new language L1, one that has some additional words in it, and/or additional rules for how those words interact.
A: That's a bit abstract, can you give me an example or something?
B: Sure. Consider a pot of water on the stove, gradually heating up but not yet boiling. An observer of that pot may come up with a set of descriptions of the water, with a theory of the water's behavior based on his observations of the water. But then when the temperature gets high enough and the water starts to boil -- all of a sudden he sees new stuff, and he has to add new words to his language for describing the water. Words for bubbles, for example.
A: Yes. The pot of water has introduced radical novelty. It's come up with a new element -- bubbles -- that didn't exist there before.
B: Yeah -- but now we get to the tricky point, which is the crux of the matter. In general, for a given system S, a certain behavior or internal state change may appear as radical novelty to observer O but not to observer O1.
In the case of the pot of water, suppose that in addition to our original observer O, we had another observer O1 who was watching every elementary particle in the pot of water, to the extent physics allows; and who was smart enough to infer from these observations the laws of physics as currently understood. This observer O1 would not be surprised when the water started to boil, because he would have predicted it using his knowledge of the laws of physics. O1 would be describing the water's structures and dynamics using the language of particle physics, whereas O would be describing it using "naive physics" language regarding the macroscopic appearance of the water. The boiling of the water would introduce radical novelty from the perspective of O, but not O1.
For a slightly broader example, think about any deterministic system S, and an observer O1 who has complete knowledge of S's states and behaviors as they unfold over time. From the view of O1, S will never do anything radically novel, because O1 can describe S using the language of S's exact individual states and behaviors; and each new thing that emerges in S is by assumption determined by the previous states of S and S's environment. But from the view of another observer O, one which has a coarser-grained model of S's states or behaviors, S may well display radical novelty at some points in time.
The question regarding radical novelty then becomes: given a system S and an observer O who perceives S as displaying radical novelty at some point in time, how do we know that there isn't some other observer O1 who would not see any radical novelty where O does? Can we ever say, for sure, that S is in a condition such that any possible observer would perceive S to display radical novelty?
It seems we could never say this for sure, because any observer O, ultimately, only sees the data that it sees.
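To make the observer-relativity of novelty concrete, here's a minimal toy sketch in Python -- everything in it (the logistic-map "system", the FineObserver and CoarseObserver classes) is invented purely for illustration. The system is fully deterministic; the fine-grained observer knows the exact state and update rule, so nothing ever surprises it, while the coarse-grained observer sees only binned symbols and must occasionally extend its "language", just as our pot-watcher had to add a word for bubbles:

```python
# A deterministic system that appears "radically novel" to a coarse-grained
# observer, but is perfectly predictable to a fine-grained one.

def step(x):
    # Logistic map in its chaotic regime: fully deterministic dynamics.
    return 3.9 * x * (1.0 - x)

class FineObserver:
    """Knows the exact state and the exact update rule: never surprised."""
    def observe(self, x_prev, x_next):
        return abs(step(x_prev) - x_next) > 1e-12  # always False here

class CoarseObserver:
    """Sees only coarse symbols; its 'language' is the set of state
    transitions it has encountered so far."""
    def __init__(self, bins=4):
        self.bins = bins
        self.language = set()

    def observe(self, x_prev, x_next):
        transition = (int(x_prev * self.bins), int(x_next * self.bins))
        novel = transition not in self.language
        self.language.add(transition)  # extend the language -- add "bubbles"
        return novel

x = 0.123
fine, coarse = FineObserver(), CoarseObserver()
for t in range(50):
    x_next = step(x)
    if coarse.observe(x, x_next) and not fine.observe(x, x_next):
        print(f"t={t}: novel to the coarse observer, not to the fine one")
    x = x_next
```

The point of the toy is just that "radical novelty" shows up as a property of the observer's descriptive language, not of the system's dynamics -- which are identical for both observers.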
A: That's quite interesting, indeed, and I'll probably need some time to digest it fully.
But I still wonder if you're fudging the distinction between digital systems like computers and real physical systems like brains.
I mean: in the case of a computer, we can easily see that it's doing nothing new, just repermuting what comes in through its sensors, and what it was given in its initial programming. In the case of a human, you could try to argue that a human brain is just doing the same thing -- repermuting its sensations and its initial brain-state. But then, the rules of quantum mechanics forbid us from knowing the initial brain state or the sensations in full detail. So doesn't this leave the brain more room to be creative?
B: Not if you really look at what quantum mechanics says. Quantum mechanics portrays the dynamics of the world as a sort of deterministic unfolding in an abstract mathematical space, which isn't fully observable. But it's been proved quite clearly that a quantum system can't actually do anything beyond what an ordinary computer system can do, though in some cases it can do things much faster. So any quantum system you build can be imitated precisely by some ordinary computer system, though the ordinary computer system may run much slower.
The arguments get subtle and threaten to turn into yet another digression -- but the bottom line is, quantum theory isn't going to save your position. That's why Penrose, who understands very well the limits of quantum theory, needs to posit some as yet unspecified future unified physics to support his intuitions about radical novelty.
A: Hmm, OK, let's set that aside for now. Let's focus back on the computer, since we both agree that physicists don't yet really understand the brain.
How can you say a computer can be fundamentally creative, when we both know it just repermutes its program and its inputs?
B: This becomes a quite funny and subtle question. It's very obvious to me that, to an observer with a coarse-grained perspective, a computer can appear to have radical novelty -- can appear quite creative.
A: Yes, but that's only because the observer doesn't really know what's going on inside the computer!
B: So then the real question becomes: For a given computer, is there, hypothetically, some observer who could understand the computer's inputs and program well enough to predict everything the computer does, even as it explores complex environments in the real world? For this observer, the computer would display no radical novelty.
A: Yes. For any computer, there is an observer like that, at least hypothetically. And for a brain, I really doubt it, no matter what our current physics theories might suggest.
B: But why do you doubt it so much? Because of your own subjective feeling of radical novelty, right? But consider: The deliberative, reflective part of your mind, which is explicitly aware of this conversation, is just one small part of your whole mental story. Your reflective conscious mind has only a coarse-grained view of your massive, teeming "unconscious" mind (I hesitate to really call the rest of your mind "unconscious" because I tend toward panpsychism -- but that would be yet another digression!). This is because the "conscious" mind has numerous information-processing limitations relative to the "unconscious" -- for instance the working memory limitation of 7 +/-2 items. Given this coarse-grained view, your "conscious" mind is going to view your "unconscious" mind as possessing radical novelty. But to another observer with fuller visibility into your "unconscious" mind, this radical novelty might not be there.
We all know the conscious mind is good at fooling itself. The radical novelty that you feel so acutely may be no more real than the phantom limbs that some people's brains tell them so vividly are really there. In one sense, they are there. In another, they are not.
Let me go back to the Matrix scenario for a moment…
A: You're kind of obsessed with that movie, aren't you? I wonder what that tells us about YOUR unconscious?
B: Heh… actually, I thought the first Matrix movie was pretty good, but it's not really a personal favorite film. And let's not even get started on the sequels… All in all, Cronenberg's "eXistenZ" had a similar theme and agreed with my aesthetics better…. But anyway…
A: … you digress
B: Indeed! I'm not trying to do "Roger Ebert Goes to the AI Lab" here, I just want to use the Matrix as a prop for another point.
Imagine we live in a Matrix type simulation world, but the overlords of the world -- who live outside the simulation -- are subtly guiding us by introducing new ideas into our minds now and then. And also by introducing new ideas into the minds of our computer programs. They are tweaking the states of our brains, and the RAM-states of the computers running our AI systems, in tricky ways that don't disrupt our basic functioning, but that introduce new ideas. Suppose these ideas are radically new -- i.e. they're things that we would never be able to think of, on our own.
A: So this scenario is like divine inspiration, but with the overlords of the Matrix instead of God?
B: Yeah, basically. But I wanted to leave the religious dimension out of it.
A: Sure… understandably. We've made a big enough mess already!
B: So my point is: if this were the case, how would we ever know? We could never know.
A: That's true I suppose, but so what?
B: Now think about all the strange, ill-understood phenomena in the Cosmos -- psi phenomena, apparent reincarnation, and so forth. I know many of my colleagues think these things are just a load of BS, but I've looked at the data fairly carefully, and I'm convinced there's something strange going on there.
A: OK, sure, actually I tend to agree with you on that point. I've had some strange experiences myself. But aren't you just digressing again?
B: Partly. But my point is, if psi processes exist, they could potentially act rather like the Matrix overlords in my example -- introducing radical novelty into our minds, and potentially into the minds of our computers. Introducing inputs that are radically novel from some perspectives, anyway. If some kind of morphogenetic field were to input new stuff into your brain, it would be radically novel from your brain's view, but not from the view of the universe.
A: You're trying to out-weird me, is that it?
B: Aha, you caught me!!… Well, maybe. But if so that's a secondary, unconscious motive!
No, seriously…. My point with that digression was: We don't really understand the universe that well.
In actual reality, nobody can predict what a complex computer program is going to do when it's interacting with a complex world.
You want to tell a story that some hypothetical super-observer could predict exactly what any complex computer program will do -- and hence, for any computer program, there is some perspective from which it has no radical novelty.
And then you want to tell a story that, for a human brain, some mysterious future physics will prove current physics wrong in its assertion that there is an analogous hypothetical super-observer for human brains.
But I don't particularly believe either of these stories. I think we have a lot to learn about the universe, and probably from the view of the understanding we'll have 100 years from now, both of these stories will seem a bit immature and silly.
A: Immature and silly, huh?? I know you are but what am I !!!
But if there are such big holes in our understanding of the universe, how come you think you know enough to build a thinking machine? Isn't *that* a bit immature and silly?
B: We started out with your argument that no computer can ever be creative and innovative, because the human mind/brain has some capability for radical novelty that computers lack -- remember?
A: Vaguely. My brain's a bit dizzied by all the quantum mechanics and psychic powers. Maybe I'd be less confused if my brain were hybridized with a computer.
B: But that's another digression…
A: Exactly…
B: But really -- after more careful consideration, what's left of your argument? What evidence is there for radical novelty in the human mind/brain, that's not there in computers? Basically, the hypothesis of this special additional radical novelty in humans comes down to your intuitive sense of what creativity feels like to you, plus some observations about the limits of current computer programs, plus some loosely connected, wild speculations about possible future physics. It's far from a compelling argument.
A: And where is your compelling argument that computers CAN display the same kinds of creativity and innovation that humans can? You haven't proved that to me at all. All you have is your faith that you can somehow make future AI programs way more creative and innovative than any program has ever been so far.
B: I have that scientific intuition -- and I also have the current laws of physics, which imply that a digital computer can do everything the brain does. It's possible, according to physics, that we'll need a quantum computer rather than a conventional digital computer to make AGI systems run acceptably fast -- but so far there's no real evidence of that.
And then there's current neuroscience, psychology and so forth -- all of which appear quite compatible with current physics.
I'm willing to believe current science has profound limitations. But given the choice between 1) current science, versus 2) your own subjective intuition about your thought process and your utterly scientifically ungrounded speculations about possible future physics -- which am I going to choose as a guide for engineering physical systems? Sorry my friend, but I'm gonna go with current science, as a provisional guide at any rate.
In everything we've learned so far about human cognition and neuroscience (and physics and chemistry and biology, etc. etc.), there's no specific thing that seems to go beyond what we can do on digital computers. It seems to me that achieving human-level intelligence using available computational resources is just a complex thing to do. It requires lots of different computational processes, all linked together in complex ways. And these processes are different from the serial calculations that current computer architectures are best at performing, which means that implementing them involves a lot of programming and software design trickery. Furthermore, we don't understand the brain that well yet, due to limitations of current brain scanning equipment -- so we have to piece together models of intelligence by integrating scattered information from various disciplines. So implementing human-level AGI is a difficult task right now. 50 or 100 years from now it will probably be an exercise for schoolchildren!!
A: That's not a compelling argument, hombre. That's just a statement of your views.
B: Urrghh…. Look, what happens inside the mind is a lot like evolution! Not exactly the same, but closely analogous. The mind -- be it a human or computer mind -- generates a lot of different ideas, internally. It generates them by randomly tweaking its prior ideas, and by combining its previous ideas. It also has a good trick that evolution doesn't have: it can explicitly generalize its previous ideas to form new, more abstract ones, that then get thrown back into the ever-evolving idea pool. Most of the ideas emerging from this pool are rubbish. Eventually it finds some interesting ones and refines them.
What's complicated is doing all this recombination, mutation and generalization of ideas judiciously, given limited computing resources and current computer architectures, which were built for other sorts of things. This requires somewhat complex cognitive architectures, which take a while to implement and refine... which leads us back to linking together appropriate complex computational processes so as to carry out these processes effectively on current hardware, as we're trying to do with OpenCog...
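Just to make that generate/vary/generalize/select loop concrete, here's a deliberately tiny sketch -- a toy, not OpenCog's actual architecture; the "idea" representation and the scoring function are stand-ins I made up for illustration:

```python
import random

# Toy "idea pool": each idea is a tuple of primitive tokens. The scoring
# function is a made-up stand-in for judging how interesting an idea is.
PRIMITIVES = list("abcdefgh")

def score(idea):
    return len(set(idea))  # toy measure: reward diversity of elements

def mutate(idea):
    # Randomly tweak one element of a prior idea.
    i = random.randrange(len(idea))
    return idea[:i] + (random.choice(PRIMITIVES),) + idea[i + 1:]

def combine(a, b):
    # Recombine two previous ideas, as evolution recombines genes.
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def generalize(idea):
    # The extra trick evolution lacks: abstract an element into a wildcard...
    i = random.randrange(len(idea))
    return idea[:i] + ("*",) + idea[i + 1:]

def instantiate(idea):
    # ...which later gets re-instantiated in new concrete ways.
    return tuple(random.choice(PRIMITIVES) if t == "*" else t for t in idea)

pool = [tuple(random.choices(PRIMITIVES, k=5)) for _ in range(20)]
for _ in range(200):
    parent = random.choice(pool)
    r = random.random()
    if r < 0.4:
        child = mutate(parent)
    elif r < 0.8:
        child = combine(parent, random.choice(pool))
    else:
        child = instantiate(generalize(parent))
    pool.append(child)
    # Most new ideas are rubbish: keep only the most interesting ones.
    pool = sorted(pool, key=score, reverse=True)[:20]

print("best idea found:", max(pool, key=score))
```

In a real cognitive architecture the ideas, the variation operators and the judgment of "interestingness" are all vastly more structured than this -- but the loop itself is the same shape.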
A: Blah, blah, blah...
B: OK, OK…. I guess this argument has gone on long enough. I admit neither of us has a 100% solid argument in favor of our position -- but that's because anything regarding the mind eventually impinges on philosophy, and there are no 100% solid arguments in philosophy of any kind. Philosophical issues can get obsoleted (not many folks bother arguing about how many angels can dance on the head of a pin anymore), but they never get convincingly resolved….
But do you admit, at least, the matter isn't as clear as you thought initially? That the inability of computers to be truly creative isn't so entirely obvious as you were saying at first?
A: Yes, I understand now, that it's possible that computers can achieve creativity via the influx of radical novelty into their RAM-states via psychic projection from the Matrix Overlords.
B: Ah, good, good, glad we cleared that up.
Uh, you do understand that was just a rhetorical illustration, right?
A: A rhetorical illustration, eh? I know you are but what am I !!!