Thursday, December 29, 2011
This brief post is an afterthought to the just-previous post about the nature of reality.
As a side point in that post, I observed that one can often replace counterfactuals with analogies, thus making things a bit clearer.
It occurred to me this morning, as I lay in bed waking up, that one can apply this method to the feeling of free will.
I've previously written about the limitations of the "free will" concept, and made agreeable noises about the alternate concept of "natural autonomy." Here, however, my point is a slightly different (though related) one.
One of the key aspects of the feeling of free will is the notion "In situation S, if I had done X differently, then the consequences would have been different." This is one of the criteria that makes us feel like we've exercised free will in doing X.
Natural autonomy replaces this with, roughly speaking: "If someone slightly different than me had done something slightly different than X, in a situation slightly different than S, then the result would likely have been different than when I did X in S." This is no longer a counterfactual; it's a probabilistic statement about actions and consequences, drawn from an ensemble of actions and consequences enacted by various actors.
But perhaps that rephrasing doesn't quite get at the essence. It may be more to the point to say: "In future situations similar to S, if I do something that's not analogous to X, then something not analogous to what happened after X in situation S is likely to happen."
Or in cases of binary choice: "In future situations similar to S, if I do something analogous to Y instead of something analogous to X, then a consequence analogous to CY instead of a consequence analogous to CX is likely to occur."
This is really the crux of the matter, isn't it? Not hypothesizing about alternate pasts, nor choices from an ensemble of similar beings -- but rather, resolutions about what to do in the future.
In this view, an "act of will" is something like "an action in a situation, corresponding to specific predictions about which of one's actions will predictively imply which consequences in analogous future situations."
That's boring-sounding, but avoids confusing talk of possible worlds.
Mathematically, this is equivalent to a formulation in terms of counterfactuals ... but counterfactuals seem to lead human minds in confusing directions, so using them as sparingly as possible seems like a good idea...
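For the programmers in the audience, here is a minimal toy sketch in Python of the analogical reading -- nothing here is meant as a serious cognitive model, and all the names (Episode, sim, predict_consequence) are just illustrative inventions. The idea: given a memory of (situation, action, consequence) episodes, one estimates how likely an action analogous to X is to yield a consequence analogous to CX in situations similar to S.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Episode:
    situation: tuple   # features describing the situation
    action: str
    consequence: str

def sim(a: tuple, b: tuple) -> float:
    """Crude feature-overlap similarity between two situation tuples."""
    return sum(1 for x, y in zip(a, b) if x == y) / max(len(a), len(b))

def predict_consequence(memory: List[Episode], situation: tuple,
                        action: str, target: str,
                        similarity: Callable[[tuple, tuple], float]) -> float:
    """Estimate how likely doing `action` in situations similar to
    `situation` is to yield a consequence matching `target`, weighting
    each remembered episode by situational similarity."""
    weights = [similarity(e.situation, situation)
               for e in memory if e.action == action]
    hits = [similarity(e.situation, situation)
            for e in memory
            if e.action == action and e.consequence == target]
    return sum(hits) / sum(weights) if sum(weights) > 0 else 0.0

memory = [
    Episode(("desert", "hot"), "walk_to_shimmer", "no_water"),
    Episode(("desert", "hot"), "follow_gps", "water"),
    Episode(("desert", "cool"), "follow_gps", "water"),
]

# "If I do something analogous to Y instead of X, a consequence analogous
# to CY instead of CX is likely to occur":
print(predict_consequence(memory, ("desert", "hot"), "follow_gps", "water", sim))      # 1.0
print(predict_consequence(memory, ("desert", "hot"), "walk_to_shimmer", "water", sim)) # 0.0
```

An "act of will," in this picture, is just an action accompanied by predictions of this kind -- no possible worlds required.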
Wednesday, December 28, 2011
What Are These Things Called "Realities"?
Here follow some philosophical musings, pursued by my rambling mind one evening during the Xmas / New Year's interval.... I inflicted these ramblings on my kids for a while, then finally decided to shut up and write them down....
The basic theme: What is this thing called "reality"? Or if you prefer a broader view: What are these things called realities??
After yakking a while, eventually I'll give a concrete and (I think) somewhat novel definition/characterization of "reality."
Real vs. Apparent
Where did this idea come from -- the "real" world versus the "apparent" world?
Nietzsche was quite insistent regarding this distinction -- in his view, there is only the apparent world, and talk of some other "real world" is a bunch of baloney. He lays this idea out quite clearly in The Twilight of the Idols, one of my favorite books.
There's certainly some truth to Nietzsche's perspective in this regard.
After all, in a sense, the idea of a "real world" is just another idea in the individual and collective mind -- just another notion that some people have made up as a consequence of their attempt to explain their sense perceptions and the patterns they detect therein.
But of course, the story told in the previous sentence is ALSO just another idea, another notion that some people made up … blah blah blah …
One question that emerges at this point is: Why did people bother to make up the idea of the "real world" at all … if there is only the apparent world?
Nietzsche, in The Twilight of the Idols, argues against Kant's philosophical theory of noumena (fundamentally real entities, not directly observable but underlying all the phenomena we observe). Kant viewed noumena as something that observed phenomena (the perceived, apparent world) can approximate, but never quite find or achieve -- a perplexing notion.
But really, to me, the puzzle isn't Kant's view of fundamental reality, it's the everyday commonsense view of a "real world" distinct from the apparent world. Kant dressed up this commonsense view in fancy language and expressed it with logical precision, and there may have been problems with how he did it (in spite of his brilliance) -- but, the real puzzle is the commonsense view underneath.
Mirages
To get to the bottom of the notion of "reality", think about the example of a mirage in the desert.
Consider a person wandering in the desert, hot and thirsty, heading south toward a lake that his GPS tells him is 10 miles ahead. But suppose he then sees a closer lake off to the right. He may then wonder: is that lake a mirage or not?
In a sense, it seems, this means he wonders: is that lake a real or apparent reality?
This concept of "reality" seems useful, not some sort of philosophical or mystical trickery.
The mirage seems real at the moment one sees it. But the problem is, once one walks to the mirage to drink the water in the mirage-lake, one finds one can't actually drink it! If one could feel one's thirst being quenched by drinking the mirage-water, then the mirage-water wouldn't be so bad. Unless of course, the quenching of one's thirst wasn't actually real… etc. etc.
The fundamental problem underlying the mirage is not what it does directly in the moment one sees it -- the fundamental problem is that it leads to prediction errors, which are revealed only in the future. Seeing the mirage leads one to predict one will find water in a certain direction -- but the water isn't there!
So then, in what sense does this make the mirage-lake "only apparent"? If one had not seen the mirage-lake, but had seen only desert in its place, then one would not have made the prediction error.
This leads to a rather mundane, but useful, pragmatic characterization of "reality": Something is real to a certain mind in a certain interval of time, to the extent that perceiving it leads that mind to make correct predictions about the mind's future reality.
Reality is a Property of Systems
Yeah, yeah, I know that characterization of reality is circular: it defines an entity as "real" if perceiving it tends to lead to correct predictions about "real" things.
But I think that circularity is correct and appropriate. It means that "reality" is a property attributable to systems of entities. There could be multiple systems of entities, constituting alternate realities A and B, so we could say:
- an entity is real_A if perceiving it tends to lead to correct predictions about real_A things
- an entity is real_B if perceiving it tends to lead to correct predictions about real_B things
I think this is a nicer characterization of reality than Philip K. Dick's wonderful quote, "Reality is whatever doesn't go away when you stop believing in it."
The reason certain things don't go away when you stop believing in them, I suggest, is that the "you" which sometimes stops believing in something is actually only a tiny aspect of the overall mind-network. The reflective self's ceasing to believe in something doesn't stop the "unconscious" mind from assuming that thing's existence, because the thing may be bound up in networks of implication and prediction with all sorts of other useful things (including in ways that the reflective self can't understand, due to its own bandwidth limitations).
So, the mirage is not part of the same reality-system, the same reality, as the body which is thirsty and needs water. That's the problem with it -- from the body's perspective.
The body's relationship to thirst and its quenching is something that the reflective self associated with that body can't shake off -- because in the end that self is just one part of the overall mind-network associated with that body.
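One way to make the circularity concrete -- as a toy illustration only, with a matrix of pairwise predictive accuracies that I'm simply inventing for the example -- is to treat reality as a fixed point: score each entity by how well perceiving it predicts the other entities, weighted by their scores, and iterate.

```python
def reality_scores(predicts, iterations=50):
    """predicts[i][j] in [0,1]: how well perceiving entity i has predicted
    entity j. Returns one score per entity: entities that mutually support
    accurate prediction reinforce one another, while a mirage -- which
    predicts water that never shows up -- sinks toward the bottom."""
    n = len(predicts)
    scores = [1.0] * n
    for _ in range(iterations):
        new = [sum(predicts[i][j] * scores[j] for j in range(n) if j != i)
               / (n - 1) for i in range(n)]
        top = max(new) or 1.0          # normalize so scores stay comparable
        scores = [s / top for s in new]
    return scores

# Entities: 0 = the GPS-indicated lake, 1 = one's thirst, 2 = the mirage-lake.
predicts = [
    [0.0, 0.9, 0.1],   # perceiving the real lake predicts quenched thirst
    [0.9, 0.0, 0.1],   # thirst and the real lake predict each other
    [0.2, 0.2, 0.0],   # the mirage predicts little that pans out
]
print(reality_scores(predicts))   # the mirage gets the lowest score
```

The circular definition becomes an iterative one: the real_A things are exactly those that keep each other's scores up.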
Counterfactuals and Analogies
After one has seen the mirage and wandered toward it through the desert and found nothing -- then one may think to oneself "Damn! If I had just seen the desert in that place, instead of that mirage-lake, I wouldn't have wasted my time and energy wandering through the desert to the mirage-lake."
This is a philosophically interesting thought, because what one is saying is that IF one had perceived something different in the past, THEN one would have made more accurate predictions after that point. One is positing a counterfactual, or put differently, one is imagining an alternate past.
This act of imagination, of envisioning a possible world, is one strategy that allows the mind to construct the idea of an alternate "real" world that is different from the "apparent" world. The key mental act, in this strategy, is the one that says: "I would have predicted better if, 30 minutes ago, I had perceived desert over there instead of (mirage-) lake!"
But in discussing this with my son Zar, who doesn't like counterfactuals, I quickly realized that one can do the same thing without counterfactuals. The envisioning of an alternate reality is unnecessary -- what's important is the resolution that: "I will be better off if, in future cases analogous to the past one where I saw a mirage-lake instead of the desert, I see the analogue of the desert rather than the analogue of the mirage-lake." This formulation in terms of analogues is logically equivalent to the previous formulation in terms of counterfactuals, but is a bit more pragmatic-looking, and avoids the potentially troublesome postulation of alternate possible worlds….
In general, if one desires more accurate prediction within a certain reality-system, one may then seek to avoid future situations similar to past ones in which one's remembered perceptions differ from related ones that would have been judged "real" by that system.
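As a toy sketch of that resolution (again purely illustrative; the similarity function and threshold are arbitrary assumptions), one might remember past perception/reality mismatches and flag new situations that resemble them:

```python
def sim(a, b):
    """Crude feature-overlap similarity between situation tuples."""
    return sum(1 for x, y in zip(a, b) if x == y) / max(len(a), len(b))

def warrants_caution(situation, mismatches, threshold=0.5):
    """True if `situation` resembles a remembered case where perception
    and the reality-system's verdict came apart -- i.e. a cue to adjust
    one's perceptions rather than trust them outright."""
    return any(sim(situation, past) >= threshold for past in mismatches)

mismatches = [("desert", "hot")]                          # saw a lake; found sand
print(warrants_caution(("desert", "hot"), mismatches))    # True
print(warrants_caution(("forest", "cool"), mismatches))   # False
```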
Realities: What and Why
This seems a different way of looking at real vs. apparent reality than the one Kant proposed and Nietzsche rejected. In this perspective, we have:
- reality-systems -- i.e. systems of entities whose perception enables relatively accurate prediction of each other
- estimations that, in future situations analogous to one's past experiences, one will do better to take certain measures so as to nudge one's perceptions in the direction of greater harmony with the elements of some particular reality-system
So, the value of distinguishing "real" from "apparent" reality emerges from the value of having a distinguished system of classes of phenomena that mutually allow relatively accurate prediction of each other. Relative to this system, individual phenomena may be judged more or less real. A mind inclined toward counterfactuals may judge something that was NOT perceived as more "real" than something that was perceived; but this complication may be avoided by worrying about adjusting one's perceptions in future analogues of past situations, rather than about counterfactual past possibilities.
Better Half-Assed than Wrong-Headed!
After I explained all the above ideas to my son Zar, his overall reaction was that it generally made sense but seemed a sort of half-assed theory of reality.
My reaction was: In a sense, yeah, but the only possible whole-assed approaches seem to involve outright assumption of some absolute reality, or else utter nihilism. Being "half-assed" lets one avoid these extremes by associating reality with systems rather than individual entities.
An analogue (and more than that) is Imre Lakatos's theory of research programs in science, as I discussed in an earlier essay. Lakatos observed that, since the interpretation of a given scientific fact is always done in the context of some theory, and the interpretation of a scientific theory is always done in the context of some overall research program -- the only things in science one can really compare to each other in a broad sense are research programs themselves. Research programs are large networks of beliefs, not crisp statements of axioms nor lists of experimental results.
Belief systems guide science, they guide the mind, and they underlie the only sensible conception of reality I can think of. I wrote about this a fair bit in Chaotic Logic, back in the early 1990s; but back then I didn't see nearly as clearly as I do now the way reality is grounded in predictions.
Ingesting is Believing?
In practical terms, the circular characterization of reality I've given above doesn't solve anything -- unless you're willing to assume something as preferentially more real than other things.
In the mirage case, "seeing is believing" is proved false because, when one gets to the mirage-lake, one can't actually drink any of that mirage-water. One thing this proves is that "ingesting is believing" would be a better maxim than "seeing is believing." Ultimately, as embodied creatures, we can't get much closer to an a priori assumptive reality than the feeling of ingesting something into our bodies (which is part of the reason, obviously, that sexual relations seem so profoundly and intensely real to us).
And in practice, we humans can't help assuming something as preferentially real -- as Phil Dick observes, some things, like the feeling of drinking water, don't go away even if we stop believing in them … which is because the network of beliefs to which they belong is bigger and stronger than the reflective self that owns the feeling of "choice" regarding what to believe or not. (The status of this feeling of choice being another big topic unto itself, which I've discussed before, e.g. in a chapter of the Cosmist Manifesto.).... This is the fundamental "human nature" with which Hume "solved" the problem of induction, way back when....
Now, what happens to these basic assumptions when we, say, upload our mind-patterns into robot bodies ... or replace our body parts incrementally with engineered alternatives ... so that (e.g.) ingesting is no longer believing? What happens is that our fundamental reality-systems will change. (Will a digital software mind feel like "self-reprogramming is believing"??) Singularity-enabling technologies are going to dramatically change realities as we know them.
And so it goes…
Saturday, December 17, 2011
My Goal as an AGI Researcher
In a recent thread on the AGI email list, Matt Mahoney pressed me regarding my high-level goals as an AGI researcher, and a leader of the OpenCog project. This blog post repeats my answer, as I posted it on that email list. This is familiar material to those who have followed my work and thinking, but maybe I've expressed things here slightly differently than in the past....
My goal as an AGI researcher is not precisely and rigorously defined. I'm OK with this. Building AGI is a human pursuit, and human pursuits aren't always precisely and rigorously defined. Nor are scientific pursuits. Often the precise, rigorous definitions come only after a lot of the research is done.
I'm not trying to emulate human beings or human minds in detail. But nor am I trying to make a grab-bag of narrow agents, without the capability to generalize automatically to new problems radically different from the ones for which they were originally designed. I am after a system that -- in the context of the scope of contemporary human activities -- possesses humanlike (or greater) capability to generalize its knowledge from one domain to other qualitatively different domains, and to learn new things in domains different than the ones its programmers had explicitly in mind. I'm OK if this system possesses many capabilities that a human doesn't.
There are probably many ways of achieving software with this kind of general intelligence. The way I think I understand (and am trying to realize with OpenCog), is to roughly emulate the process of human child development -- where I say roughly because I'm fine with the system having some capabilities beyond those of any human. Even if it does have some specialized superhuman capabilities from the start, I think this system will develop the ability to generalize its knowledge to qualitatively different domains in the rough manner and order that a human child does.
What will I do once I have a system that has a humanlike capability of cross-domain generalization (in the scope of contemporary human activities)? First, I will study it, and try to create a genuine theory of general intelligence. Second, I will apply it to solve various practical problems, from service robotics to research in longevity and brain-computer interfacing, etc. etc. There are many, many application areas where the ability to broadly generalize is of great value, alongside specialized intelligent capabilities.
At some point, I think this is very likely to lead to an AGI system with recursive self-improving capability (noting that this capability will be exercised in close coordination with the environment, including humans and the physical world, not in an isolation chamber). Before that point, I hope that we will have developed a science of general intelligence that lets us understand issues of AGI ethics and goal system stability much better than we do now.