Sunday, March 11, 2012
Are Prediction and Reward Relevant to Superintelligences?
In response to some conversation on an AGI mailing list today, I started musing about the relationship between prediction, reward and intelligence.
Obviously, in everyday human and animal life, there's a fairly close relationship between prediction, reward and intelligence. Many intelligent acts boil down to predicting the future; and smarter people tend to be better at prediction. And much of life is about seeking rewards of one kind or another. To the extent that intelligence is about choosing actions that are likely to achieve one's goals given one's current context, prediction and reward are extremely useful for intelligence.
But some mathematics-based interpretations of "intelligence" extend the relation between intelligence and prediction/reward far beyond human and animal life. This is something that I question.
Solomonoff induction is a mathematical theory of agents that predict the future of a computational system at least as well as any other possible computational agent. Hutter's "Universal AI" theory is a mathematical theory of agents that achieve (computably predictable) reward at least as well as any other possible computational agent acting in a computable environment. Shane Legg and Marcus Hutter have defined intelligence in these terms, essentially positing intelligence as generality of predictive power, or degree of approximation to the optimally predictive computational reward-seeking agent AIXI. I have done some work in this direction as well, modifying Legg and Hutter's definition into something more realistic -- conceiving intelligence as (roughly speaking) the degree to which a system can be modeled as efficiently using its resources to help it achieve computably predictable rewards across some relevant probability distribution of computable environments. Indeed, way back in 1993, before knowing about Marcus Hutter, I posited something similar to his approach to intelligence in my first book, The Structure of Intelligence (though with much less mathematical rigor).
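For concreteness, the Legg-Hutter universal intelligence measure can be written roughly as follows (a sketch from memory; their papers state it more carefully):

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi$$

where $E$ is the class of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_\mu^\pi$ is the expected total reward that agent $\pi$ earns in $\mu$. Simpler environments receive exponentially more weight, and AIXI is (roughly) the agent that maximizes this sum.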
I think this general line of thinking about intelligence is useful, to an extent. But I shrink back a bit from taking it as a general foundational understanding of intelligence.
It is becoming more and more common, in parts of the AGI community, to interpret these mathematical theories as positing that general intelligence, far above the human level, is well characterized in terms of prediction capability and reward maximization. But it isn't at all clear to me that this interpretation is justified (which is the main point of this blog post). To me this seems rather presumptuous regarding the nature of massively superhuman minds!
It may well be that, once one gets into domains of vastly greater than human intelligence, other concepts besides prediction and reward start to seem more relevant to intelligence, and prediction and reward start to seem less relevant.
Why might this be the case?
Regarding prediction: Consider the possibility that superintelligent minds might perceive time very differently than we do. If superintelligent minds' experience goes beyond the sense of a linear flow of time, then maybe prediction becomes only semi-relevant to them, and other concepts we don't now know become more relevant. If so, thinking about superintelligent minds in terms of prediction may be a non sequitur.
It's similarly quite unclear that it makes sense to model superintelligences in terms of reward. One thinks of the "intelligent" ocean in Lem's Solaris. Maybe a fixation on maximizing reward is an artifact of early-stage minds living in a primitive condition of material scarcity.
Matt Mahoney made the following relevant comment, regarding an earlier version of this post: "I can think of 3 existing examples of systems that already exceed the human brain in both knowledge and computing power: evolution, humanity, and the internet. It does not seem to me that any of these can be modeled as reinforcement learners (except maybe evolution), or that their intelligence is related to prediction in any of them."
All these are speculative thoughts, of course... but please bear in mind that the relation of Solomonoff induction and "Universal AI" to real-world general intelligence of any kind is also rather wildly speculative... This stuff is beautiful math, but does it really have anything to do with real-world intelligence? These theories have little to say about human intelligence, and they're not directly useful as foundations for building AGI systems (though, admittedly, a handful of scientists are working on "scaling them down" to make them realistic; so far this only works for very simple toy problems, and it's hard to see how to extend the approach broadly to yield anything near human-level AGI). And it's not clear they will be applicable to future superintelligent minds either, as these minds may be best conceived using radically different concepts.
So by all means enjoy the nice math, but please take it with the appropriate fuzzy number of grains of salt ;-) ...
It's fun to think about various kinds of highly powerful hypothetical computational systems, and fun to speculate about the nature of incredibly smart superintelligences. But fortunately it's not necessary to resolve these matters -- or even think about them much -- to design and build human-level AGI systems.
Comments:
Prediction is very important - but there's also tree pruning and evaluation. Most of the brain's work probably goes into prediction, and prediction is also the easiest thing to use to generate automated tests. However, evaluation and tree pruning can't be ignored completely - even though they depend on what the goals of the agent are.
It seems that you are saying basically, "talking about superintelligence beyond a certain threshold is silly because of all the unpredictable novelty sure to emerge." Would you agree, though, that absent evidence to the contrary, prediction/reward is the de facto spine of intelligence? After all, doesn't our definition of intelligence include the concept of a reward being optimized for? (Though maybe not prediction.)
Also, I'm curious as to your description of the internet as intelligent - if it is optimizing something like organization of knowledge or connectedness, is there not predictive work going on in its component human minds to get there?
Forecasting, rewards/utility - AND tree pruning. Forecasting gives you an enormous tree of possible futures weighted by their probabilities. Intelligent agents will need to cut that tree down to size.
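As a minimal sketch of what that might mean (purely illustrative; the Node class, expectimax function, and prune_below threshold are invented for this example, not anyone's actual system):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Node:
    reward: float                          # evaluation if this node is treated as a leaf
    children: List[Tuple[float, "Node"]]   # (probability, successor) pairs

def expectimax(node: Node, depth: int, prune_below: float = 1e-3) -> float:
    """Expected reward of `node`, pruning branches with probability < prune_below."""
    if depth == 0 or not node.children:
        return node.reward                 # evaluation step
    value = 0.0
    for prob, child in node.children:
        if prob < prune_below:             # pruning step: drop unlikely futures
            continue
        value += prob * expectimax(child, depth - 1, prune_below)
    return value

# Tiny usage example: a one-level tree of forecasted futures.
leaf_a, leaf_b = Node(1.0, []), Node(-1.0, [])
root = Node(0.0, [(0.9995, leaf_a), (0.0005, leaf_b)])  # the 0.0005 branch gets pruned
print(expectimax(root, depth=1))                        # -> 0.9995
```

The fixed probability cutoff is of course a crude stand-in - real agents would prune far more cleverly - but it shows how forecasting, evaluation and pruning fit together.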
> It is becoming more and more common, in parts of the AGI community, to interpret these mathematical theories as positing that general intelligence, far above the human level, is well characterized in terms of prediction capability and reward maximization.
You're really talking about two different things: a fitness value of intelligence (prediction), & a fitness value of some presumably quasi-biomorphic system that uses intelligence (rewards). The first part is what we've been discussing, & is best addressed technically. Regarding the second part, we're really trying to generalize human values here; that's our main point of reference. And our most important values are conditioned; the innate part is extremely primitive & ultimately overridden by conditioning. And conditioned values always start as instrumental, & are overridden when they stop being instrumental. So, assuming an indefinite time horizon, the top value will be the most generally instrumental one. And there's only one such meta-instrumental value: intelligence, because we need it to figure out what *is* instrumental. So, these two value systems should ultimately converge. I elaborate on that on my Meta Evolution blog: http://meta-evolution.blogspot.com/, more specifically in the Cognitive Takeover post: http://meta-evolution.blogspot.com/2012/01/cognitive-takeover-curiosity-as.html, if you want to move from questions to answers :).
> To me this seems rather presumptuous regarding the nature of massively superhuman minds!
Unpresumptuous people go to church.