Intuitively, it is tempting (to some people anyway!) to think of the potential future Technological Singularity as somehow "sucking us in" -- as a future force that reaches back in time and guides events so as to bring about its future existence. Terence McKenna was one of the more famous and articulate advocates of this sort of perspective.
This way of thinking relates to Aristotle's notion of "Final Causation" -- the final cause of a process being its ultimate purpose or goal. Modern science doesn't have much of a place for final causes in this sense; evolutionary theories often appear teleological in a "final causation" way on the surface, but can generally be reformulated otherwise. (We colloquially say "evolution was trying to do X," but our detailed models of how evolution works toward X don't require any notion of "trying" -- only notions of mutation, crossover and differential survival...)
It seems to me, though, that the Surprising Multiverse theory presented in one of my recent blog posts (toward the end), actually implies a different sort of final causation -- not quite the same as what Aristotle suggested, but vaguely similar. And this different sort of final causation does, in a sense, suggest that the Singularity may be sucking us in....
The basic concept of the Surprising Multiverse theory is that, in the actual realized rather than merely potential world, patterns with high information-theoretic surprisingness are more likely to occur. This implies that, among the many possible universes consistent with a given set of observations (e.g. a given history over a certain interval of time), those universes containing more surprisingness are more likely to occur.
Consider, then, a set of observations during a certain time interval -- a history as known to a certain observer, or a family of histories as known to a set of communicating observers -- and the question of what will happen AFTER that time interval is done. For instance, consider human history up till 2014, and the question of the human race's future afterwards.
Suppose that, of the many possible futures, some contain more information-theoretic surprisingness. Then, if the Surprising Multiverse hypothesis holds, these branches of the multiverse -- these possible universes -- will have boosted probabilities, relative to other options. The surprisingness weighting may then be viewed intuitively as "pulling the probability distribution over universes, toward those with greater surprisingness."
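This "pulling" can be illustrated with a toy sketch. Here I assume (as one plausible reading of the hypothesis, not anything from the original posts) that surprisingness is measured as Shannon surprisal, s(u) = -log2 p(u), and that each possible future's prior probability gets multiplicatively boosted in proportion to its surprisal before renormalizing; the function and parameter names are hypothetical.

```python
import math

def reweight_by_surprisal(prior, alpha=1.0):
    """Toy model of the Surprising Multiverse weighting: boost each
    possible future u by its Shannon surprisal s(u) = -log2 p(u),
    then renormalize.  alpha controls how strongly surprisingness
    'pulls' the distribution toward rare branches."""
    weighted = {u: p * (1 + alpha * -math.log2(p))
                for u, p in prior.items()}
    total = sum(weighted.values())
    return {u: w / total for u, w in weighted.items()}

# Two candidate futures: a mundane one and a rarer, more surprising one.
prior = {"business_as_usual": 0.9, "singularity": 0.1}
posterior = reweight_by_surprisal(prior)
# The rarer, higher-surprisal branch gains relative probability mass.
```

Under these assumptions the rare branch's weight rises (here from 0.1 to roughly 0.29), while the distribution still sums to one -- a crude picture of surprisingness "pulling" probability toward unexpected futures.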
The "final cause" of some pattern P according to observer O, may be viewed as the set of future surprising patterns Q that are probabilistically caused by P, from the perspective of observer O. (There are many ways to quantify the conceptual notion of probabilistic causation -- perhaps the most compelling is as "P having nonneutralized positive component effect on Q, based on the knowledge of O", as defined in the interesting paper A Probabilistic Analysis of Causation.)
So the idea is: final causation can be viewed as the probabilistic causation that has the added oomph of surprisingness (and then viewed in the backwards direction). A final cause of P is something that is probabilistically caused by P, and that has enough surprisingness to be significantly overweighted in the Surprising Multiverse weighting function that balances P's various possible futures.
So what of the Singularity? We may suppose that a Technological Singularity would display a high degree of information-theoretic surprisingness, relative to other alternative futures for humanity and its surrounds. If so, branches of the multiverse involving a Singularity would be preferentially weighted higher, according to the Surprising Multiverse hypothesis. The Singularity is thus a final cause of human history. QED....
A fair example of the kind of thing that passes through my head at 2:12 AM Sunday morning ;-) ...
11 comments:
Hmm, Reddit has a link to here. Who is this Ben Goertzel guy, anyway? Some kind of Singularity guru?
Ahem, Old Singularitarian, if you have to ask who Ben Goertzel is, you haven't been hanging around singularitarians that long.
Incidentally, Memo to Ben:
I am aware that you have elsewhere commented that the 'entities' one encounters while under certain entheogens seem as though they could be real intelligences with an existence independent from one's own mind.
I've been working on a novel for quite a while now which treats this suggestion as a given of the plot. So I have you (among a few others) to thank for giving me room to say, "Well, I can think of at least one smarter guy than me who takes this idea seriously."
I am not aware of a technical definition of 'surprisingness', but semantics tells me it is an extremely relative concept -- "from the perspective of observer O", where O would be all of humanity if we assume the Singularity will affect the whole planet. Very possibly the tech Singularity would be surprising to a current majority, but the closer it comes, the smaller the proportion of surprisees will be. Following this logic, it would become less likely the greater its probability looks to a majority. On the other hand, it becomes more likely that something completely different happens; the most likely event 'drawing' us in will be one that absolutely nobody anticipates.
If God did not exist, it would be necessary to invent him. That's what civilization is.
God:humans::you: your cells
Should we not expect 'intelligence' to ultimately discover an optimal future state?
Terence McKenna talked a lot about novelty (surprisingness).
Kiss boredom goodbye & hang-on for the ride.
Interesting. I haven't had the time to fully read the Surprising Multiverse Theory. But it seems similar to Hans Moravec's Quantum Immortality theory, in the sense that there is a mathematical tendency for a particular set of universes to be selected and persisted. Additionally, both sets of universes would lead towards a singularity, because in Quantum Immortality, in the world we live in, you would essentially need a Singularity in order to be immortal.
Although I try not to think about such things, because it doesn't have any practical use that I can think of :O)
Could "surprisingness" be quantified as a "hard to predict"? For example, a problem whose solution is hard to find but easy to check is in this category.
Could also be that low entropy universes are generally preferred, and a singularity could potentially reduce entropy by a huge amount.
If the universe actively pursues low entropy via consciousness, and a singularity would increase the scope and quality of consciousness in the universe, then perhaps that is the preferred path.
I prefer to call it the Omega Point, when all is possible. Omnipotent Artilects.
(jk)