In a recent post I wrote about what it would mean for a value system to be coherent -- i.e. fully self-consistent -- and noted that human value systems tend to be wildly incoherent. I posited that coherence is an interesting property to think about when designing and fostering the emergence of AGI value systems.

Now it's time for the other shoe to drop -- I want to talk a bit about Open-Ended Intelligence and why incoherence in value systems (and multivalue systems) may be valuable and productive in the context of minds that are undergoing radical developmental changes in the context of an intelligent broader world.

(For more on open-ended intelligence, see the panel at AGI-20 a couple weeks ago, and Weaver's talk at AGI-16)

My earlier post on value system coherence focused on the case where a mind is concerned with maximizing a single value function. Here I will broaden the scope a bit to minds that have multiple value functions -- which is how we have generally thought about values and goals in OpenCog, and which I think is a less inaccurate mathematical model of human intelligence. This shift from value systems to multivalue systems opens the door to a bunch of other issues related to the nature of mental development, and the relationship between developing minds and their external environments.

TL;DR of my core point here is --

*in an open-ended intelligence that is developing in a world filled with other broader intelligences, incoherence with respect to current value function sets may build toward coherence with respect to future value function sets.*

As a philosophical aphorism, this may seem obvious, once you sort through all the technical-ish terminology. However, building a bridge leading to this philosophical obviousness from the math of goal-pursuit as value-function-optimization is somewhat entertaining (to those of us with certain peculiar tastes, anyway) and highlights a few other interesting points along the way.

In the next section of this post I will veer fairly far in the formal logic/math direction, but then in the final two sections I will veer back toward practical and philosophical aspects...

##
**So let's go step by step...**

1) Conceptual starting-point: Open-ended intelligence is better approximated by the quest for Pareto-optimality across a possibly large set of different objective functions, than by attempting to optimize any one objective function... (This is not to say that Pareto-optimality questing fully captures the nature of open-ended intelligence or complex self-organization and autopoiesis etc. -- it surely doesn't -- just that it captures some core aspects that single-goal-function-optimization doesn't.)

2) One can formulate a notion of what it means for a set of value functions to be coherent as a group. Basically, the argmax(F) in the definition of value-system-coherence is just replaced with "being located on the Pareto frontier of F1, F2...,Fn". The idea is that the Pareto frontier of the values for a composite system should be the composition of the Pareto frontiers of the values for the components of the composite.
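To make the Pareto-frontier notion concrete, here is a minimal Python sketch of locating the Pareto frontier of a finite state set under several value functions. Everything here -- the function names and the two toy value functions -- is invented for illustration, not part of any formal definition from the post:

```python
# Sketch: Pareto frontier of several value functions over a
# finite set of candidate states.

def dominates(a, b, fs):
    """True if state a is at least as good as b on every value
    function in fs, and strictly better on at least one."""
    at_least = all(f(a) >= f(b) for f in fs)
    strictly = any(f(a) > f(b) for f in fs)
    return at_least and strictly

def pareto_frontier(states, fs):
    """States not dominated by any other state w.r.t. fs."""
    return [s for s in states
            if not any(dominates(t, s, fs) for t in states)]

# Two toy value functions over integer "states":
F1 = lambda s: s           # prefers large s
F2 = lambda s: -abs(s - 3) # prefers s near 3

frontier = pareto_frontier(range(10), [F1, F2])  # -> [3, 4, 5, 6, 7, 8, 9]
```

States below 3 are dominated (moving up toward 3 improves both functions at once); from 3 upward the two functions pull in opposite directions, so every such state sits on the frontier.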

3) One can also think about the "crypticity" or difficulty of discovering a certain value system (a term due to Charles H. Bennett from way back). Given a certain amount R of resources and a constraint C and a probability p, one can ask what is the most coherent value system one can find with probability >p that satisfies C, using the available resources. Or if C is fuzzy, one can ask what is the most coherent value system one can find with probability >p that is on the Pareto frontier of coherence and C, given the available resources.
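A minimal sketch of the resource-bounded search implicit in this notion of crypticity: with a budget of R samples, find the most coherent candidate satisfying constraint C. Everything here -- the sampler, the toy coherence score, the evenness constraint -- is an invented stand-in, not a real coherence measure:

```python
import random

def most_coherent_found(sample, coherence, satisfies_C, budget):
    """Draw up to `budget` candidate value systems; return the most
    coherent one found that satisfies C (None if none found)."""
    best = None
    for _ in range(budget):
        v = sample()
        if satisfies_C(v) and (best is None or coherence(v) > coherence(best)):
            best = v
    return best

# Toy instantiation: candidate "value systems" are integers,
# coherence peaks at 10, constraint C is evenness.
random.seed(0)
best = most_coherent_found(
    sample=lambda: random.randrange(100),
    coherence=lambda v: -abs(v - 10),
    satisfies_C=lambda v: v % 2 == 0,
    budget=50,
)
```

The "with probability > p" aspect of the definition corresponds here to the fact that a randomized budget-limited search only probably finds a good candidate; a more cryptic value system is one needing a larger budget for the same odds.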

4) So open-ended intelligence involves [among other things] the emergence of coherent multivalued value-systems (multivalue systems) that involve a large number of different value functions, and that are tractably-discoverable (i.e. not too cryptic).

5) Suppose one is given a set of value-functions as initial "constraints", say C1, C2, ..., CK -- and is then looking for the most coherent multivalue system one can find with high odds using limited resources, that is compatible with C1,...,CK. I.e. one is asking: what is the most coherent tractably-findable value system compatible with the initial values?

Then, suppose one is alternatively looking at a proper subset of the initial values, say C1,...,Ck (where k < K) -- and looking for the most coherent tractably-findable value system compatible with these?

6) The most coherent tractably-findable value systems according to C1,...,Ck may not be compatible with the most coherent tractably-findable value systems according to C1,...,CK. Why? In some cases, adding in the extra value functions (k+1,...,K) may make it computationally simpler to find Pareto optima involving the original k value functions (1,...,k). This could be the case if there is nonzero interaction information between the value functions 1,...,k and the value functions k+1,...,K.
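The interaction-information point can be made concrete with the classic XOR triple, where every pair of variables is independent yet the three together are fully determined -- exactly the kind of synergy where an "extra" variable changes what can be learned about the others. A minimal sketch (sign conventions for interaction information vary; this uses the alternating entropy-sum form):

```python
from math import log2
from itertools import product

def entropy(p):
    """Shannon entropy of a distribution given as {outcome: prob}."""
    return -sum(q * log2(q) for q in p.values() if q > 0)

def marginal(joint, keep):
    """Marginalize a joint distribution over tuples onto the
    coordinate positions listed in `keep`."""
    out = {}
    for xs, q in joint.items():
        key = tuple(xs[i] for i in keep)
        out[key] = out.get(key, 0.0) + q
    return out

def interaction_information(joint):
    """I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(XY)-H(XZ)-H(YZ) + H(XYZ),
    for a joint distribution over triples (x, y, z)."""
    H = lambda keep: entropy(marginal(joint, keep))
    return (H((0,)) + H((1,)) + H((2,))
            - H((0, 1)) - H((0, 2)) - H((1, 2))
            + entropy(joint))

# XOR example: Z = X xor Y with X, Y fair independent bits --
# pairwise independent, yet jointly fully determined.
xor = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}
ii = interaction_information(xor)  # -> -1.0 under this convention
```

A nonzero value signals exactly the sort of three-way informational coupling item 6 invokes: knowing the "extra" variable changes how informative the others are about each other.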

7) So we have here a sort of Fundamental Principle of Valuable Value-Incoherence -- i.e. if you have limited resources and you want to build toward multivalued coherence in the context of a bunch of different initial value-functions, the best routes could be through value-systems that are fairly incoherent in the context of various subsets of this bunch of initial value-functions.

8) So if a system is in a situation where new external value functions that will serve as constraints are progressively revealed over time, and these new external value functions have interaction information with one's previous constraint-value-functions, then one may find that one's current incoherence helps build toward one's future coherence.

9) This seems especially relevant to the context of *development* in a world filled with broader intelligences than oneself -- in which case one is indeed being confronted with (and developing to internalize) new external value functions that are related to one's prior value functions in complex ways.

10) So in this sort of context (development in a world that keeps feeding new stuff that's informationally interactive w/ the old), it could be that seeking coherence is suboptimal in a similar way to how seeking piece count in the early stages of a chess game, or seeking board coverage in the early stages of an Othello game, is suboptimal.... Instead one often wants to seek mobility and maximization of options, in the early to mid stages of such games ... and the same may be the case w/ value systems in this sort of situation...
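A toy illustration of the mobility-vs-greed contrast -- the tiny game tree and scores below are invented purely to show the two evaluation styles disagreeing:

```python
# A greedy evaluator picks the branch with the best immediate score;
# a mobility evaluator prefers the branch that keeps more options open.
tree = {
    "root": ["greedy", "mobile"],
    "greedy": ["g1"],               # high score now, one follow-up
    "mobile": ["m1", "m2", "m3"],   # lower score now, more options
}
score = {"greedy": 3, "mobile": 1}

def greedy_choice(node):
    """Pick the child with the highest immediate score."""
    return max(tree[node], key=lambda c: score.get(c, 0))

def mobility_choice(node):
    """Pick the child with the most available follow-up moves."""
    return max(tree[node], key=lambda c: len(tree.get(c, [])))
```

Here `greedy_choice("root")` takes the high-scoring branch while `mobility_choice("root")` takes the option-rich one -- the claim in item 10 is that early in open-ended development, the latter style may serve eventual value-coherence better.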

11) A major question then becomes: When and how big are the actual tradeoffs between multivalue-system coherence and open-mindedness (aka agility/mobility)?... And in what sense can an incoherent system have more information than a coherent one?

12) It is possible that the theory of paraconsistent logic might yield some insight here. If you assume value system coherence as an axiom, then a mind with an incoherent value system becomes an overall inconsistent system (what sort of paraconsistency it has depends on various details) -- whereas a mind with a coherent value system lands in the realm of Gödelian restrictions (i.e. via Gödel's Second Incompleteness Theorem and its variants...)

13) If you look at the set of theorems provable by a consistent logic, there's a limit due to Gödel. If you look at the set of theorems provable in a paraconsistent logic (e.g. a dialetheist logic, i.e. a logic in which there are true statements whose negations are also true), it can be "larger" in a sense: e.g. a dialetheic logic can prove its own Gödel sentence as well as its own soundness. This doesn't show that a paraconsistent logic can be more informative than a consistent one, but it opens the door for this to maybe be true... It seems we are now pushing in directions where modern math-logic isn't yet fully fleshed out.
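For concreteness, here is a minimal truth-table sketch of one standard dialetheist system, Priest's Logic of Paradox (LP). The 0/1/2 encoding is just a convenience for using min/max, not part of the logic itself:

```python
# LP has three truth values: T (true), B (both true and false),
# F (false). T and B are "designated" -- they count as holding.
F, B, T = 0, 1, 2

def neg(a):
    return 2 - a        # negation swaps T and F, fixes B

def conj(a, b):
    return min(a, b)    # conjunction = minimum

def disj(a, b):
    return max(a, b)    # disjunction = maximum

def designated(a):
    """A formula 'holds' in LP if its value is T or B."""
    return a >= B

# A "true contradiction": if A has value B, then A & ~A is designated...
contradiction_holds = designated(conj(B, neg(B)))
# ...yet the logic doesn't explode: a plain falsehood stays undesignated,
# so not everything follows from the contradiction.
explosion_blocked = not designated(F)
```

This is the sense in which a dialetheic system can tolerate A and not-A simultaneously holding without collapsing into triviality -- the formal analogue of a mind carrying an incoherent value system without everything becoming equally valuable.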

14) The notion of an "experimental logic" also seems relevant here... basically a dynamic process in which new axioms are added to one's logic over time. This is one analogue in logic-system-land of "development" in psychology-land... Of course if one assumes there is a finite program whose behavior corresponds to some fixed logic generating the new axioms, then one can't escape Gödel this way. But if one assumes the new axioms are emanating in part from some imperfectly understood external source (which could be a hypercomputer for all one knows... or at least could be massively more intelligent/complex than stuff one can understand), then one has a funky situation.

15) Also it seems one could capture a sort of experimental logic as a relevance-logic layer on top of dialetheic logic. I.e. assume a dialetheic logic that can generate everything, and then put a relevance/importance distribution on axioms, and then the development process is one of gradually extending importance to more and more axioms.... This sort of open-ended logic potentially is in some useful senses fundamentally informationally richer than consistent logic... and in the domain of reasoning about values, incoherent value systems could open the door to this sort of breadth...

(Possibly relevantly -- While researching the above, I encountered the paper "Expanding the Logic of Paradox with a Difference-Making Relevant Implication" by Peter Verdée, which made me wonder whether relevance logic is somehow morphic to the theory of algorithmic causal DAGs.... I.e. in a relevance logic one basically only accepts the conclusion to follow from the premises, if there is some compressibility of the conclusion based on the premise list alone, without including the other axioms of the logic ... )

##
**Back to basics**

OK well that got pretty deep and convoluted...

So let's go back to the basic conclusion/concept I gave at the beginning --

*in an open-ended intelligence that is developing in a world filled with other broader intelligences, incoherence with respect to current value function sets may build toward coherence with respect to future value function sets.*

In the current commercial/academic AI mainstream, the default way of thinking about AI motivation is in terms of the maximization of expected reward. Hutter's beautiful and important theory of Universal AI takes this as a premise for many of its core theorems, for example.

I have argued previously that some of the pathologies of expected reward maximization can be avoided via focusing instead on maximizing goal functions defined over future histories.

In my practical proto-AGI work with OpenCog, I have preferred to use motivational systems with multiple goals and not average these into a single meta-goal.

On the other hand, I have also been intrigued by the notion of open-ended intelligence, and in general by the conceptualization and modeling of intelligences as SCADS, Self-organizing Complex Adaptive Dynamical Systems, in which goals arise and are pursued and then discarded as part of the broader self-organizing dynamics of system and environment.

What I'm suggesting here is that approximations of the SCADS perspective on open-ended intelligences may be constructed by looking at systems with large numbers of goals (aka multivalue systems) that are engaged in developmental processes wherein new values are ongoingly added in an informationally rich interaction with an intelligent external environment.

The ideas sketched here may form a partial bridge between the open-ended intelligence perspective -- which captures the fundamental depth of intelligence and mind -- and the function-optimization perspective, which has a lot of practical value in terms of current real-world system engineering and experimentation.

This line of thinking also exposes some areas in which modern math, logic and computing are not yet adequately developed. There are relations between paraconsistent logic, gradual typing systems as are likely valuable in integrative multi-paradigm AGI systems, the fundamental nature of value in developing intelligences, and the nature of creativity and radical novelty -- which we are barely at the edge of being able to formalize ... which is both fascinating and frustrating, in that there clearly are multiple PhD theses and research papers between here and a decent mathematical/conceptual understanding of these matters... (or alternately, a few seconds of casual thought by a decent posthuman AGI mind...)

**Philosophical Post-lude**

If one digs a bit deeper in a conceptual sense, beyond the math and the AI context, what we're talking about here in a way is a bridge between utilitarian-type thinking (which has been highly valuable in economics and evolutionary biology and other areas, yet also clearly has fundamental limits) and more postmodernist type thinking (which views minds as complex self-organizing systems ongoingly reconstructing themselves and their realities in a polyphonic interactive inter-constructive process with other minds).

Conventional RL-based ML is utilitarianism projected into the algorithmic and mechanical domain, whereas Open-Ended Intelligence is postmodernism and a bit of Eastern philosophy projected into the realm of modern science.

Expanding and generalizing the former so that it starts to approximate significant aspects of the latter, is interesting both for various practical engineering and science reasons, and as part of the general project of stretching the contemporary technosphere to a point where it can make rich contact with broader "non-reductionist" aspects of the universe it has hitherto mainly ignored.

*Om!*